Channel: AnandTech Pipeline

Holiday Buyer’s Guide 2015: Desktop Processors (CPUs)


2015 on the whole has been an interesting year in the land of desktop processors. For anyone buying a new desktop, the year spanned the arrival of Intel’s Haswell-E at the high end late last year, AMD’s 28nm Kaveri Refresh improvements, Intel’s graphics-focused Broadwell, and most recently Intel’s Skylake. Unfortunately it can be difficult to view a processor in isolation, especially when the connectivity provided by the platform or chipset can be a critical element of a purchasing decision. It also transpires that some mobile platforms are working their way into desktop products, complete with full-on discrete graphics cards, which adds another element to consider.

Our Holiday Guide for processors this year focuses on what you can buy today. Most of what you see here we have tested extensively and you can find the reviews on site or benchmark data in our benchmark database, Bench. Some will be based on expectations and interpolations where we are still expecting samples for review.

The Budget CPU for eSports Gaming: AMD A8-7670K ($90 on sale)

We recently reviewed this processor with our new Rocket League benchmark, where it gave a good account of itself at both 1920x1080 Medium settings and 1280x720 High settings. This is AMD’s latest Kaveri Refresh part to hit the shelves: originally sold at $128, it has been as low as $90 on sale, making it a veritable steal when paired with a $60-$80 motherboard and some DDR3 memory. Make sure you get DDR3-2133 as a bare minimum to fully exploit the graphics power.
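The reason memory speed matters here is raw bandwidth for the integrated GPU, which shares main memory with the CPU. As a back-of-the-envelope sketch (my own arithmetic, assuming the standard 64-bit DDR3 channel width and dual-channel operation), peak theoretical bandwidth works out as follows:

```python
def ddr3_bandwidth_gbs(transfer_rate_mts, channels=2, bus_width_bytes=8):
    """Peak theoretical DDR3 bandwidth in GB/s (decimal).

    transfer_rate_mts: e.g. 2133 for DDR3-2133 (mega-transfers/s)
    bus_width_bytes:   a 64-bit channel moves 8 bytes per transfer
    """
    return transfer_rate_mts * 1e6 * bus_width_bytes * channels / 1e9

print(ddr3_bandwidth_gbs(1600))  # 25.6 GB/s for DDR3-1600
print(ddr3_bandwidth_gbs(2133))  # ~34.1 GB/s for DDR3-2133
```

Moving from DDR3-1600 to DDR3-2133 is a roughly 33% bandwidth uplift, which is why the faster memory pays off for integrated graphics workloads.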

 

The QuickSync Beast: Intel i7-5775C ($366)

Intel’s Broadwell (5th Gen) desktop platform had an awkward launch, initially seeing presence in only a few markets around the world and then being almost completely superseded by the 6th Gen Skylake implementations. It has taken some time, but those Broadwell processors have finally reached North American shores, and they offer one big benefit over any other processor: the best integrated graphics performance in a single processor. It comes at a cost of course, but for the users and big-budget prosumers who intend to thrash some QuickSync-related video work, this is the best part you can buy.

 

Building a Low Power Mini-PC / Office PC: Intel i3-6300T ($138), or Pentium G3258 ($72) for budget

I now have a mini-PC in the front room which acts as a hub for local streaming, online streaming, gaming, and as another full PC should someone in the house need one. If you want to build your own rather than invest in one of the pre-built mini-PCs, such as a NUC, BRIX, ZBOX or EliteDesk, then something small, low power but performant is usually a good idea. The Core i3-6300T sits just above the entry i3 units with a slightly larger L3 cache, Skylake’s support for enhanced video decode capabilities, and the lower-power ‘T’ moniker. It does come at a slight premium and might be hard to find, so the Pentium G3258 (or most G32xx Pentiums) offers a budget alternative. They don’t come with Skylake’s feature set, but can be found at about half the price of the i3, and our review of the G3258 placed it as an interesting part in this space.

In the office, for some better oomph, the Core i3-6300T should provide a happy medium of performance, cost and power, giving a nice overall TCO.

 

The $800-$1800 Mainstream Gaming PC (1-2 GPUs): The Intel i5-6500 ($192) or AMD A10-7870K ($150)

If there’s room to splash on a large graphics card (GTX 970 or R9 390X and up) or two, then typically it gets paired with a processor that helps it perform to the best of its abilities. The AMD A10-7870K is selected here as it provides a lower price point and gives pretty much the same performance as most Intel parts for single-GPU gaming, although in our reviews we found this to be very game and discrete GPU dependent (e.g. in GRID Autosport, NVIDIA GPUs perform the same whether paired with AMD or Intel CPUs, but AMD GPUs seem to perform better with Intel CPUs). For those with the budget, the i5-6500 (or the Haswell equivalent, the i5-4460 at $182) will pretty much survive any single/dual GPU combination you can throw at it. The question remains over how DX12 plays into all of this, which again depends on how video game engine designers integrate multi-adaptor technology.

 

Entry Prosumer: The Intel i7-5820K ($390) or i7-6700K ($339)

For the professional who needs some processor throughput, we’ve selected the low-end consumer Haswell-E processor with six cores and access to 28 PCIe 3.0 lanes as a primary choice, which means investing in an X99 motherboard as well. There is a caveat though: PCIe storage, Thunderbolt 3 and USB 3.1 are all more easily found on the Z170 platform with Skylake processors, but those are limited to four cores, which might hamper throughput. So in this case, the Haswell-E wins out for raw power, maximum memory and PCIe lanes from the CPU, but the i7-6700K paired with a Z170 motherboard will give more interoperability.

The equivalent Xeon to the i7-5820K is the E5-2643 v3, which comes in at $1,552. In that case, the E5-2620 v3 might be an interesting talking point, as it offers the same number of cores at a lower frequency (going from a 3.4 GHz base clock down to 2.4 GHz) but is substantially cheaper at $417. Here’s a direct comparison link.
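To put that price gap in perspective, a crude dollars-per-core-GHz figure (my own back-of-the-envelope metric using the base clocks and list prices above, not an official measure of value) can be sketched like this:

```python
def dollars_per_ghz_core(price_usd, cores, base_ghz):
    # Crude figure of merit: cost per core-GHz of base clock
    return price_usd / (cores * base_ghz)

e5_2643_v3 = dollars_per_ghz_core(1552, 6, 3.4)  # ~$76 per core-GHz
e5_2620_v3 = dollars_per_ghz_core(417, 6, 2.4)   # ~$29 per core-GHz
print(round(e5_2643_v3), round(e5_2620_v3))
```

By this rough measure the E5-2620 v3 delivers each unit of base-clock throughput at well under half the cost, albeit with lower absolute per-core performance.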

 

3-way or 4-way Graphics: Intel i7-5960X ($999)

For a user that wants everything, the i7-5960X gives a full eight cores with hyperthreading, along with 40 PCIe lanes and a platform that supports 128GB of DDR4 memory. Technically for GPU lane counts you could go down the Skylake route and buy the i7-6700K with a PLX 8747-enabled motherboard such as the GIGABYTE Z170X-Gaming G1 or the ASUS Z170-E WS, which exchanges half the CPU core count for a lower cost. But the i7-5960X tops most of our throughput benchmarks, and should satisfy the high-end multi-graphics crowd.

 

Money No Object: Intel Xeon E5-2699 v3 ($4115)

You want 18 cores? Open your wallet. Then put two in one machine.

 

Honorable Mention: AMD FX-8800P in a Desktop, the Dell Inspiron 3656 (~$850)

There have been a lot of requests to see AMD’s latest processor microarchitecture in the form of Excavator cores in something we can use on the desktop. It just so happens that Dell is ahead of you, and sells a desktop based on this high-end mobile part. We assume it’s running at 35W and in dual channel memory mode (it would be silly not to). Sure it’s a mobile part that doesn’t come with a screen, but it does have 16GB of DRAM, a 2TB HDD and I've seen a model in Best Buy with an R9 360 discrete graphics card installed for $900.

 

The Build-A-Rig VR Model: Intel i5-6500 ($192)

When Oculus first announced the requirements for their Rift VR headset, they wanted to create a bare minimum platform for developers to aim for, ensuring a consistent performance metric going forward, similar to how console hardware doesn’t change. At the time, they suggested an i5-4590, which is a Haswell-based 3.3 GHz processor with a boost up to 3.7 GHz – we’ve suggested the Skylake equivalent in the i5-6500, which is a 3.2/3.6 GHz part but benefits from Skylake’s increased instructions per clock and should also provide a system around it with better functionality. We want to get this chip in for review in due course, and see what level of detail settings we need to hit the VR experience (2160x1200 at 90 FPS).
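The VR target is demanding largely because of the sheer pixel rate involved. A quick sketch (my own arithmetic from the 2160x1200 @ 90 FPS figure quoted above) of the sustained throughput the GPU must render:

```python
def pixel_rate(width, height, fps):
    # Sustained pixels per second the GPU must render
    return width * height * fps

rift_target = pixel_rate(2160, 1200, 90)
print(f"{rift_target / 1e6:.0f} Mpixels/s")  # ~233 Mpixels/s
```

For comparison, 1080p at 60 FPS is about 124 Mpixels/s, so the Rift target is nearly double that before accounting for any additional rendering overhead of VR.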


Intel Introduces New Braswell Stepping with J3060, J3160 and J3710


When a processor is manufactured, it has a series of designations to identify it, such as the name. Alongside this, as with almost every manufactured product, each design will go through a number of revisions to do the same thing better or to add new functionality. For microprocessors, aside from the model and family name, we also get what are called a ‘Revision’ and a ‘Stepping’ for each model, with the stepping being used for enhancements that increase efficiency or add features. New steppings require a complete revalidation process for yields and back-end work; a typical Intel mainstream processor, for example, will go through three or four steppings starting from first silicon.

What Intel has published in the last couple of weeks through a 'product change notification' is an update to the Atom line of desktop-embedded processors that use Cherry Trail cores. The combination of cores and market positioning gives this platform the name Braswell. The Braswell update is a new stepping which adjusts the power consumption of the cores, raises the burst frequency, raises the TDP of the Pentium variant for a larger product separation, and renames both the processor itself and the HD Graphics implementation. This change is referred to in the documentation as moving from the C-stepping to the D-stepping, which typically coincides with a change in the way these processors are made (an adjusted metal layer arrangement or lithography mask update).

Intel Braswell SKUs

SKU              Cores /   CPU Base  CPU Burst  L2      Graphics  TDP     Price
                 Threads   (MHz)     (MHz)      Cache
Celeron N3000    2 / 2     1040      2080       1 MB    HD        4 W     $107
Celeron N3050    2 / 2     1600      2160       1 MB    HD        6 W     $107
*Celeron J3060   2 / 2     1600      2480       1 MB    HD 400    6 W     ?
Celeron N3150    4 / 4     1600      2080       2 MB    HD        6 W     $107
*Celeron J3160   4 / 4     1600      2240       2 MB    HD 400    6 W     ?
Pentium N3700    4 / 4     1600      2400       2 MB    HD        6 W     $161
*Pentium J3710   4 / 4     1600      2640       2 MB    HD 405    6.5 W   ?

* New parts

The new SKUs will still be Braswell parts, with the names changed from N to J and the model number increased by 10. The Pentium model will go from 6W to 6.5W and gains an increase in burst frequency, though at the time of the notification the exact values had not been published. Edit: Thanks to @jacky0011, who pointed out that the Intel Download Center auto-complete function has the turbo mode for these listed. Pentium models with 16 execution units in their integrated graphics will have the graphics renamed to Intel HD Graphics 405, while Celeron models with 12 execution units are now Intel HD Graphics 400. In both cases, these are accompanied by new drivers as well. For system designers, it is worth noting that the ICCmax value for the new stepping rises from 7.7A to 10A for the CPU, and from 11A to 12A for the graphics, meaning that the new chips can be dropped into original Braswell designs only if those designs meet the new ICCmax criteria.
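Working from the burst frequencies in the SKU table, the uplift from old stepping to new can be sketched per old→new pair (my own percentages derived from the listed figures):

```python
# (old_burst_mhz, new_burst_mhz) per old -> new SKU pair from the table
pairs = {
    "N3050 -> J3060": (2160, 2480),
    "N3150 -> J3160": (2080, 2240),
    "N3700 -> J3710": (2400, 2640),
}
for name, (old, new) in pairs.items():
    print(f"{name}: +{(new - old) / old:.1%}")
# N3050 -> J3060: +14.8%
# N3150 -> J3160: +7.7%
# N3700 -> J3710: +10.0%
```

So the dual-core Celeron sees the largest relative burst gain, while the Pentium gets a 10% bump alongside its 0.5W TDP increase.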

Intel expects minimal validation for customers wishing to use these new parts, but they will have new S-Spec and product codes, requiring a change in ordering. Intel’s timeline has the first customer samples available now, with qualification data at the end of November. Bulk shipments of chips for devices will start from January 15th 2016, with all shipments finishing on September 30th 2016. Chances are we'll see the current Braswell crop of devices (mini-PCs, NAS) updated with the newer parts, depending on availability and current stock levels.

Source: Intel

 

Broadwell E3 v4 Xeons: Cirrascale at SuperComputing 15


The hubbub about the Broadwell Xeon family was relatively interesting. We managed to get hold of the models that came in a socketed form for testing – the E3-1285 v4, E3-1285L v4 and the E3-1265L v4 – and because these parts use Intel’s Crystal Well design with 128MB of eDRAM, there are plenty of opportunities to accelerate certain types of workload. We saw big increases in large-frame transcoding when the frame buffers fit into the eDRAM, and in other memory-intensive benchmarks such as compression. Despite all that, the Broadwell Xeon family is somewhat short-lived, with Skylake variants being released almost immediately (although Skylake at present does not come with eDRAM).

This is where Intel would make the distinction, of course, suggesting that the Broadwell variants be used when the extra eDRAM can be exploited to its fullest. At the Intel Developer Forum back in August, we were told that Intel will be producing a ‘Valley View’ PCIe add-in card with three Broadwell Xeons on it to act as an accelerator for video encode. Fast forward a bit, and we are now seeing the first systems being promoted with Broadwell v4 Xeons inside. Solutions provider Cirrascale had this on the floor at SuperComputing 15:

This is essentially a 2U, half-width node with a Supermicro motherboard, a Broadwell Xeon, space for two sticks of DDR3 ECC RDIMMs, routing for 2.5-inch SATA drives, networking, and then a PCIe adaptor card for another accelerator, such as NVIDIA (or Valley View?).

The orientation of the system means that Cirrascale has had to route a lot of things around the chassis, and the rear IO of the motherboard actually has to be funneled out of the case to where it is appropriate. It is interesting that Cirrascale is quoting 144 streams at under $80 per stream, which comes out to $11,520 per unit (I assume with minimal memory and near-zero storage). Also, under six watts per stream extrapolates out to 864W total, so I’d imagine that those ‘144 streams’ also include the contribution of the PCIe accelerator alongside the Broadwell CPU.
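Those extrapolations are simple to cross-check; the figures below are my own arithmetic from Cirrascale's quoted upper bounds, not official pricing:

```python
streams = 144
cost_per_stream = 80       # USD, quoted upper bound ("under $80 per stream")
watts_per_stream = 6       # quoted upper bound ("under six watts per stream")

unit_cost = streams * cost_per_stream     # implied cost per unit
total_power = streams * watts_per_stream  # implied total power draw
print(unit_cost, total_power)  # 11520 864
```

At 864W for the node, the power budget clearly covers more than the single Broadwell Xeon, which supports the reading that the accelerator card is included in the per-stream figures.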

Nevertheless, it is interesting to see the Broadwell Xeon family more out in the open. It will be interesting to see where else it pops up.

ARM on AMD: The A1100 Seattle Silicon at SuperComputing 15


Anyone loosely following AMD’s efforts with ARM intellectual property would have had on their periphery the noise of the A1100 product aimed at servers, codenamed Seattle. The idea was to use AMD’s back-end expertise to produce a multi-core ARM chip based on eight A57 cores for server and professional embedded systems, supporting up to 128GB of RDIMM memory and two 10GBase-KR Ethernet ports. The secret sauce of the processor is in the co-processors – a cryptographic one to offload dedicated acceleration of encryption/decryption or compression/decompression, and a system control co-processor that focuses on security and acts like a ‘processor within a processor’ with its own Ethernet connection, RAM, ROM and IO connectivity for remote management and sensing.

The AMD A1100 – Steven’s piece last year goes into a lot more detail.

The chip was designed to be the ‘scale-out’ ARM part that Calxeda attempted to produce with its Cortex-A9 based silicon, which was only marginally edged out by a dual-Xeon system running virtual machines – the step up to a more powerful core was vital to attack this element of the industry. While Calxeda aimed at dense web-server traffic, AMD is focusing more on the data center, where it feels this chip is better suited. So even though there have been many delays with the hardware, missing its original time-to-market window by at least nine months, I finally got to see the silicon with my own eyes.

Starting with the platform it is in: this is the SoftIron Overdrive 3000. The silicon has access to eight SATA ports and eight PCIe lanes by default, along with a dual-channel memory controller, but what was interesting in this device was that there are six other SATA ports with no extra controllers. I quizzed the ARM personnel around the product and they said that a future chip might support more SATA ports, so this is almost a long-term drop-in PCB that won’t need a redesign for the second generation. This mini-server design is meant for simple integration into a rack as an ARM development platform, or for connecting ARM to an accelerator/storage subsystem.

We also had the HuskyBoard from 96Boards on display, aimed at the embedded development market with the same A1100 silicon in the center but with access to a slot of memory on each side (one on the rear), various power options (seems like DC-In and Molex) and a full PCIe 3.0 x8 slot. This almost looks like one of the maker boards we commonly see with other ARM based solutions.

There we go, it exists! Speaking to other people in ARM, they are actually quite pleased that it is now in production, and they are also impressed with the internal metrics they are seeing from it. The march on ARM for server and embedded has been fraught with peril if you go too big too fast, but I wonder how many resources AMD is still putting into this project rather than their core business units.

Best Tablets: Holiday 2015


After kicking off our yearly holiday guides with a look at some of the best Android smartphones on the market, it's time to take a look at the best tablets you can buy this holiday season. Something interesting about the tablet industry is that Microsoft has a fairly significant presence, while in the smartphone space they're still struggling to make Windows a viable platform for users and developers.

While tablet sales have certainly declined, the absolute number of tablets sold every quarter is still very high, and they will undoubtedly be a gift given by many during this holiday season. I'll be going over the options in Apple's iPad line first since they're already pretty well known, followed by the best inexpensive and high end tablets running Android, and the best tablets available running Windows.

iOS Tablets

Even if your phone is an Android device or a Windows Phone, it's difficult not to give the iPad some consideration when looking for a tablet. It can definitely be difficult to manage two different ecosystems with their own apps, but even at this point the iPad still has a significant platform advantage over most tablets as far as applications and multitasking go, and to improve multitasking and productivity further you really need to move to a full-blown Windows tablet.

The iPad line is fairly simple, with only a few options available, and all of them occupying their own screen size. For people who want a smaller tablet, Apple offers the iPad Mini 2 and iPad Mini 4. While the former is definitely getting a bit old, at $269 it offers a fairly inexpensive entry to the iPad ecosystem. $399 gets you the iPad Mini 4 which offers significant improvements to the display, performance, and the size and mass of the chassis. The Mini 4 is definitely my recommendation for a small iPad because of the improved display and additional RAM to enable split screen multitasking, but the $399 price for the 16GB model can definitely be hard to swallow.

Click here to read our reviews of the iPad Mini 2 and the iPad Mini 4.

For a buyer that's interested in a more standard sized tablet, Apple sells the iPad Air and the iPad Air 2 at $399 and $499 respectively for the 16GB models. With how old it is, and the limited amount of RAM it includes, I wouldn't really go for the iPad Air, especially when one considers what the Mini 4 offers at that same price. As for the iPad Air 2, it's definitely my top recommendation for a standard tablet. Although it's actually over a year old, there's still nothing from the major Android players that competes with its performance and build quality, and on top of that you get access to the library of tablet-optimized applications that the iPad is known for. Both the iPad Mini 4 and iPad Air 2 offer upgrades to 64GB for +$100, and to 128GB for +$200, with cellular capabilities adding another +$130. Read our iPad Air 2 review here.

At the very top of the iPad line sits the recently introduced iPad Pro. Josh is still working on our review of it, but based on his feedback so far and my time with it I feel it's worth recommending. There are definitely some caveats to consider. The iPad Pro will inevitably be compared to the Surface Pro 4, and I think there are some significant differences between the two that end up determining which one is a better fit for a given user. If you're looking for something that is first and foremost a modern tablet, the Surface isn't the best option because you're dealing with a lot of legacy software design decisions like having to manage files using a file system, and the number of Modern UI apps is quite small which means you end up having to manage a typical Windows desktop with your fingers.

If you're not a user who will benefit from having a full featured copy of Windows installed, but are looking for a large tablet with keyboard and pen support, then the iPad Pro is definitely worth checking out. Price wise, it starts at $799 for the 32GB model, $949 for 128GB, or $1079 for 128GB + LTE. Including the Apple Pencil will bring the price up by $99, and the Smart Keyboard increases it by $169 which means that the entire package can be quite expensive once you factor in the accessories.

Android Tablets

I've taken a look at a number of Android tablets this year, and pretty much all of them occupied a different price bracket from the others. Something particularly interesting this year was the number of tablets that sported Intel Moorefield SoCs, driven by the existing relationships between PC OEMs and Intel as those OEMs began to make their way into the Android tablet space. Unfortunately, it's pretty accurate to say that 2015 hasn't been the greatest year for Android phones or tablets due to issues with the available SoCs from Qualcomm, and in the case of all but one of the devices I've reviewed, issues with display power usage and calibration. That being said, I think there are two notable Android tablets that one should consider.

Starting off with my recommendation for a low-cost Android tablet, I think the NVIDIA SHIELD Tablet K1 is the obvious winner of this category. The SHIELD Tablet K1 was originally sold for $299 before being recalled due to battery issues, but it has just recently been re-introduced at a new $199 price point, and with Google seemingly giving up on offering a Nexus tablet at that price there's really nothing in the Android space that competes with it. The performance provided by NVIDIA's Tegra K1 SoC is far greater than what you'd expect from a $199 device, and the GPU performance is still unmatched by any other Android device. Read our review here.

The only complaint I really have about the SHIELD Tablet K1 is that while the display is of a sufficiently high resolution at 1920x1200, the color gamut and accuracy are lacking. While this can be excused somewhat based on the $199 price tag, it's important to note that it did originally cost $299, and the second generation Nexus 7 shipped with an incredibly well calibrated IPS display at a price of $219 over two years ago. Even with that compromise, I don't know of any current Android tablet that competes at this price point, especially when you factor in NVIDIA's very good track record with releasing Android updates in a timely manner, and the relatively few alterations they make to the Android interface. That coupled with the possibility of game streaming from your NVIDIA PC, and the existence of stylus and controller peripherals made by NVIDIA, make the SHIELD Tablet K1 a pretty unique tablet that's definitely worth considering.

If someone is looking for a high end Android tablet, then the Galaxy Tab S2 is going to be their best option. I reviewed the Tab S2 recently, and while I praised its thin and light build, and high quality AMOLED display, I wasn't fond of the use of plastic, the performance, or the battery life. Software aside, the iPad Air 2 is better in pretty much every respect, and so this is really an option for users who want to stay within the Android ecosystem because of functionality that doesn't exist on iOS, or an existing library of apps that they wouldn't be fond of buying again for iOS. Samsung does try to offer tablet-oriented features, like their multitasking features. Unfortunately, they end up being limited by what changes they can make to core parts of Android without breaking other parts of the system, and so some of the features are implemented in a less than optimal manner. Despite that, out of all the Android tablets I've looked at this year, the Tab S2 is the best one even with its flaws.

There is one other device that may be worth considering, although I personally can't speak for or against it as I wasn't able to review it this year. That tablet is the Sony Xperia Tablet Z4.

While I am skeptical of a device powered by Snapdragon 810, the Z4 offers some pretty interesting features like waterproofing, and for a full size tablet it's pretty thin and light. I really wanted to take a look at it but wasn't able to source a review sample, and so I can't really give a definitive answer on whether it's worth purchasing but it's certainly something I would take a look at myself if I was planning on buying a tablet this holiday season.

Windows Tablets

Much like how the iOS tablet market is really an iPad market, the Windows tablet market is pretty much a Microsoft Surface market at this point. I haven't really seen any successors to the inexpensive Bay Trail tablets from a year or two ago, which suggests that Intel's contra revenue strategy has wound down. Most Windows tablets from this year have really been 2-in-1 laptops that either have a rotating hinge or can be split into two parts. That latter segment hasn't seen an enormous number of product launches either. There certainly have been some notable ones like the ASUS T300 Chi, but they often end up being both a mediocre tablet and a mediocre laptop, and you're better off getting a device that does one of those things really well rather than something that does both poorly. Meanwhile, convertible devices like the HP Spectre x360 can be great laptops, but the convertible form factor means that you always have the mass of the keyboard half of the laptop attached, and I've yet to see one that even remotely approaches being light enough to use as a tablet.

With all that in mind, the only two Windows tablets that I truly feel are worth recommending are the Surface 3 and the Surface Pro 4. Going back to what I said in an earlier paragraph about balancing tablet and laptop functionality, while parts of Windows like window management and managing a file system are unwanted by some users, for others they are absolutely essential features to have available. That's why the choice between the two really depends on your workflow, and what sort of experience you're hoping to get from a tablet. If you're a user who wants something that's more similar to a full fledged Windows laptop, but that can also act as a tablet at times, then the Surface tablets are by far your best options. Not only that, but you get a Windows experience that is free of preloaded software and the rarely useful utilities that OEMs tend to include.

The entry model in the Surface line is the Surface 3. This is both a smaller and less expensive device than the Surface Pro 4, but it still runs a full copy of Windows. The display is a 10.8" 1920x1280 panel with a high degree of color accuracy, although I think the resolution is too low for a tablet that starts at $499. Inside is an Intel Atom x7-Z8700 SoC, along with 2GB of RAM and 64GB of eMMC NAND in the $499 model, or 4GB of RAM and 128GB of NAND in the $599 model. The additional RAM and storage for $100 is definitely worth it if you plan to be running any serious Windows software, although as the price moves even further beyond $499 the display's low pixel density becomes more difficult to overlook. Adding on Microsoft's Surface Pen bumps the price up another $50, and the Type cover is $129 so the cost of the accessories brings the price up fairly quickly. It's also worth noting that the Surface 3 doesn't come with the infinitely adjustable hinge of the Surface Pro 3 and 4, which might be an issue for some users as you'll be limited to three fixed angles.

Of course, the flagship Surface tablet is the recently launched Surface Pro 4. The Surface Pro 4 comes in several configurations, and when you include the BTO models there are far more than I could list here. The pricing ranges from $899 for the fanless model with an Intel Core m3-6Y30 CPU, a 128GB PCIe SSD, and 4GB of RAM, to a whopping $2699 for a dual core Intel Core i7-6650U, 16GB of RAM, and a 1TB PCIe SSD. The average price for the Surface Pro 4 should make it pretty clear why I think it ends up competing more with high end laptops than iPads or Android tablets, but it is technically a tablet. That being said, the base model isn't really any more expensive than the iPad Pro once you factor in what Apple charges for accessories, and for that price you're getting a device that you can really use like a full fledged laptop which will certainly appeal to many people.

As far as common specs go, every Surface Pro 4 has a 12.3" 2736x1824 display, 802.11ac WiFi, and Microsoft's Surface Pen included. The battery capacities do vary based on the CPU you get, and the Core i5 and Core i7 models aren't able to be passively cooled like the Core m3 model is so they do use a fan for cooling. Microsoft's Surface Type Cover will still run you $129 on top of the price of the tablet, or $159 if you opt for the version that has a fingerprint scanner for authentication.
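For a sense of how sharp the two Surface displays are relative to each other, pixel density follows directly from the resolutions and diagonal sizes given above (my own calculation):

```python
import math

def ppi(width_px, height_px, diagonal_in):
    # Pixels per inch: diagonal pixel count divided by diagonal size
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(1920, 1280, 10.8)))  # Surface 3: ~214 PPI
print(round(ppi(2736, 1824, 12.3)))  # Surface Pro 4: ~267 PPI
```

The Surface Pro 4's panel is about 25% denser, which is part of why the Surface 3's resolution feels low for its price.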

Both Surface tablets can legitimately replace a full fledged Windows laptop, and in part that's because they excel at the types of tasks you would do on a laptop. I definitely wish the Windows Store had a wider selection of Modern UI apps that would allow you to use one more like you would use an Android tablet or an iPad, but I also think that many of the buyers interested in a Surface 3 or Surface Pro 4 want one specifically because it can run all of their existing Windows software, and so for those users the lack of tablet-oriented apps may not be an issue at all. If you fall into that category, I really recommend you take a look at our reviews of the Surface 3 and Surface Pro 4, because Microsoft has executed the hybrid laptop/tablet idea better than any other company has.

AMD Moves Radeon HD 5000 & 6000 Series to Legacy Status - All Pre-Graphics Core Next GPUs Now Retired


Alongside today’s release of the new Radeon Software Crimson Edition driver set, AMD has published a new page on their driver site announcing that video cards based on the company’s pre-Graphics Core Next architectures have been moved to legacy status. This means that GPUs based on the company’s VLIW5 and VLIW4 architectures – the Evergreen and Northern Islands families – have been retired and will no longer be supported. All of AMD’s remaining supported GPUs are now based on various iterations of the Graphics Core Next architecture.

Overall this means that the entire Radeon HD 5000 and 6000 series have been retired. So have the Radeon HD 7000 to 7600 parts, and the Radeon HD 8000 to 8400 parts. AMD and their partners largely ceased selling pre-GCN video cards in 2012 as they were replaced with GCN-based 7000 series cards, so pre-GCN parts are now about 3 years removed from the market. However some lower-end OEM machines with the OEM-only 8000 series may only be 2 years old at this point.

In their announcement, AMD notes that their pre-GCN GPUs have “reached peak performance optimization” and that the retirement “enables us to dedicate valuable engineering resources to developing new features and enhancements for graphics products based on the GCN Architecture.” Furthermore AMD is not planning on any further driver releases for these cards – the announcement makes no mention of a security update support period – so today’s driver release is the final driver release for these cards.

To that end, AMD is offering two drivers for the now-legacy products. The last WHQL driver for these products is Catalyst 15.7.1, which was released in July for the launch of Windows 10 and brought with it official Windows 10 support for all supported GPUs. Meanwhile AMD has also released what will be the first and only Crimson driver release for these products; a beta build of Crimson 15.11 is being provided “as is” for their pre-GCN products. So at the very least the last of AMD’s pre-GCN parts get to go out on a high note, with most of the feature improvements rolled out as part of today’s Crimson driver release.

Ultimately the retirement of AMD’s pre-GCN cards has been a long time coming; it was clear that their VLIW architectures were at a dead-end as soon as GCN was announced in 2011, and the only question was when the cut would come. With pre-GCN GPUs unable to support DirectX 12 and now several generations old, it would seem that AMD has picked the Crimson driver release as the natural point to retire these cards.

Update: As a couple of you have now asked, it should also be noted that this retirement includes all APUs using the legacy GPU architectures. So all pre-Kaveri APUs – Llano, Trinity, and Richland – are now also legacy APUs.

Omni-Path Switches at SuperComputing 15: Supermicro and Dell


It was clear at SuperComputing 15 that Intel had two main things in mind to promote: Knights Landing, their new Xeon Phi product, and Omni-Path, their new 100 Gbps network fabric aimed squarely at InfiniBand. As we have published before, Omni-Path will be available as an add-in card as well as being part of the Knights Landing co-processor cards (so when you buy a Xeon Phi, it comes with Omni-Path as an external connector). A few interesting things came out at SC15 worth sharing – switch designs and rack mockups, as well as a few disclaimers worth mentioning.

These are the default Intel switches, in 48-port and 24-port variants in a 1U form factor. Each has two power supplies built in for redundancy, with a network management port as well as a USB port on the left-hand side. For the initial release, customers can buy these switches either directly from Intel or from one of their partners, and at the beginning they’ll be pretty much the same with a slight rebrand:

Of course, each has their own model number to play with:

The difference will be in the support packages for their customers, I suspect. Perhaps not unexpectedly, both Supermicro and Dell also use the same information plaques for the switches:

As you can see, each port supports QSFP28 type cabling, with a redundant fan and power supply. Intel is stating a 100-110 ns port-to-port latency (which includes per-hop error detection and resubmission without forcing an end-to-end resubmit) along with QoS, dynamic lane scaling, and up to 195 million MPI messages per second per port. The other switch in the family is this 9U behemoth, mocked up through Intel’s news channel:

Alongside the switches are the cards, of which we saw an engineering sample back at IDF. It turns out there seem to be two cards on offer, one with a PCIe 3.0 x8 link and another on PCIe 3.0 x16:

The difference here is that the x16 card gets a larger heatsink, and the x8 gets a sticker on one of the chips on board. Dell had the x16 card in a half-width dual processor node on show.

I found it slightly amusing that on every display for Omni-Path hardware, there was this little disclaimer saying that the technology is still waiting for FCC approval:

Intel specifications for maximum MPI messaging rate for the Intel Omni-Path Host Fabric Interface Silicon 100 Series ASIC (formerly code-named Wolf River). This device that has not been authorized as required by the rules of the Federal Communications Commission, including all Intel Omni-Path Architecture devices. These devices are not, and may not be, offered for sale or lease, or sold or leased until authorization is obtained.

I quizzed both Charlie Wuischpard, VP & GM of Workstation and HPC at Intel, and Raj Hazra, VP & GM of Enterprise and HPC Platforms Group at Intel, about this. Part of my brain was thinking that the wording was a little odd (I’ll be honest, it doesn’t even read that well), and that if Intel is shipping Knights Landing to customers in Q4, then the Omni-Path part of KNL needs to pass the standards, and that time is approaching rapidly. Intel has had years of practice dealing with FCC regulations, so I would have assumed that despite the wording and my doubts, it shouldn’t be an issue. Both Charlie and Raj confirmed that this is the case and that everything is still going according to schedule internally, but the qualifier was needed purely for legal reasons.

Intel’s aim for Omni-Path is directed at InfiniBand, attempting to combine their current HPC/server ecosystem with a fabric that saves customers money. As well as being high performing, Intel plans to focus on metrics that matter to their users. It is interesting that there are rumblings of Omni-Path working its way down to Xeon CPUs in the future, and being equipped there as well. This made it all the more interesting to hear what was being said at the InfiniBand presentations during the conference.
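As a back-of-envelope sanity check (purely my own arithmetic on the quoted specifications, not Intel data), the 195 million messages per second figure can be related to the 100 Gbps line rate:

```python
# Back-of-envelope check on the quoted Omni-Path figures:
# 100 Gbps line rate and ~195 million MPI messages/s per port.
# This is purely illustrative arithmetic on the numbers above.

LINE_RATE_BPS = 100e9   # 100 Gbps per port
MSG_RATE = 195e6        # MPI messages per second per port (Intel figure)

# Average wire budget per message if the port ran at full line rate:
bytes_per_msg = LINE_RATE_BPS / 8 / MSG_RATE
print(f"~{bytes_per_msg:.0f} bytes per message at line rate")
```

In other words, the peak message-rate figure corresponds to small (roughly 64-byte) messages – exactly the latency-bound MPI traffic these fabrics are optimized for.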

Supermicro with Greenlow Motherboards at SuperComputing 15


One of the bigger shakeups of the Xeon ecosystem of late is the recent discovery that Intel will be severing the few ties that the Xeon family of processors still has with the consumer arm of the company. While it is still fundamentally the same microarchitecture underneath with the same number of pins, and there are a few additional features on the Xeon, it was mentioned (rather than announced) that the Skylake Xeon processors would not be coded into the consumer motherboard microcode, and thus would be non-qualified for the 100-series chipset motherboards. This means that if you need a Skylake Xeon, you need a C230 series motherboard, which is meant to be a higher-validated chipset part over the consumer ones. This leads to the Greenlow platform, where a Skylake Xeon in the format E3-1200 v5 is paired with a C230 series chipset.

At SuperComputing 15 this past week, Supermicro had a few of their workstation based Greenlow motherboards on display.

As you might imagine, they are essentially very similar to the 100-series motherboards we have already seen, except with minimal styling and a few workstation-focused features. This one is the X11SAE-M, a micro-ATX design that uses eight individual SATA ports at right angles from the chipset, as there is only one PCIe 3.0 x16 slot for a PCIe accelerator card. Note the use of PCI (!) as well, and a power delivery subsystem of only five phases. The requisite COM/TPM headers are here, along with two SuperDOM ports, a PCIe 3.0 x4 based M.2 slot, and two Intel NICs, one pro-focused and the other low power.

The X11SAE-F is the bigger brother, extending the feature set to dual PCIe 3.0 long slots in an x16 or x8/x8 configuration, doubling the PCI slots, and equipping the board with an AST2400 controller chip to allow for remote management. There is also a Thunderbolt header in play for PCIe based TB cards.

We also caught wind of the X11SAT, which looks more like the consumer ATX parts; however, this one uses a PLX chip under the mid heatsink (I assume it’s an 8747, and not a 9000 series part) that gives x16/x8/x8 layouts for the PCIe slots. There is also an Intel Alpine Ridge controller, which the blurb states supports Intel’s Thunderbolt as well as Type-C to DisplayPort and USB 3.1.

These should all be part of a larger SM stack of boards, and are currently in the hands of SM’s sales people and common distributors. We’re hearing word that some of the Greenlow boards, covering both server and workstation, should offer quad 1G NICs or dual 10G, some with LSI 3008 RAID controllers on board, and even one with four PCI slots (for long-life legacy applications).

 


Best Gaming Laptops: Holiday 2015


Welcome to part two of our best laptop guide for 2015. The first part covered traditional notebooks; this installment will focus on gaming notebooks, and our final piece will cover convertible devices.

The gaming segment is one of the few areas of strength in the PC market, and entire companies have moved from supporting mainstream computing to focusing solely on gaming hardware. When you say gaming, you expect a decent GPU, and NVIDIA has been busy in 2015 with the launch of the GTX 960M and GTX 950M to complement the GTX 970M and GTX 980M, which have been out for just over a year already. AMD is not as big of a player in the notebook space, but they have also refreshed some of their Cape Verde line for 2015 with the M375, M360, and M330; these are lower performance parts. The higher performance AMD parts seem to have found a home in the recent Mac lineup, but little else.

We’ve also seen some other exciting developments for 2015. The biggest one, I think, is that NVIDIA has released G-SYNC for notebooks, and it was a hard launch with several devices available right away. G-SYNC makes even more sense on a notebook due to the less powerful hardware and often high-resolution displays, although the trade-off is that the GPU must be directly connected to the display, which precludes the use of NVIDIA’s Optimus technology to disable the GPU and save power. Many gaming laptops have already moved away from Optimus, so this is not a huge issue, but it does mean that gaming notebooks that try to get decent battery life are going to miss out on G-SYNC.

When shopping for gaming laptops, be aware that many of the entry-level models still ship with TN displays, which you may want to avoid if you like the wide viewing angles of PVA and IPS. There is also a higher chance that the storage will be based on hard disk drives rather than solid state drives. I would have a hard time using anything that wasn’t an SSD at this point, but with the huge install size of some current games, I can understand the reasoning a bit more on a gaming laptop, especially an entry-level model. Higher-end gaming laptops tend to come with an SSD plus an HDD for game storage.

So with that preamble out of the way, let’s dig in, starting with entry level gaming laptops.

Entry Level Gaming: Less than $1000

You definitely have to make some sacrifices once you go under $1000. Considering a decent entry level Ultrabook is in the $700-$900 range already, once you add in a GPU and the additional cooling required, it’s tough to get under $1000. My pick for entry level gaming is the Lenovo Ideapad Y700 lineup.

Lenovo Ideapad Y700 14-inch

Lenovo sells both a 14-inch and a 15-inch model. The 14-inch starts at just $800, and its appeal begins with the display: Lenovo offers a 1920x1080 IPS panel as the standard fitment here, despite the lower cost of entry. This is one of the main reasons I’ve picked it over a lot of the competition, where you can still find 1366x768 TN panels. The base model has the Intel Core i5-6300HQ processor, a quad-core part that turbos up to 3.2 GHz. This should be plenty of CPU for an entry-level gaming notebook. You also get 8 GB of memory and a 1 TB hard drive, although it’s a 5400 RPM model, which is pretty disappointing. The GPU is an AMD Radeon R9 M375 with 2 GB of VRAM on the base model. This is not a super powerful mobile GPU, but it should be plenty for running older games or eSports titles. You can opt for a more powerful CPU, and Lenovo even offers models with SSDs for the OS drive, with a Core i7 model with 16 GB of memory and a 4 GB M375 for $1049. A word of warning though: AMD’s site shows Enduro GPU switching as unsupported with non-AMD processors, although AMD may support it on non-AMD processors after all. If you are going to spend a bit more though, there are better offerings available, even from Lenovo.

Lenovo Ideapad Y700 15-inch

If you can spend even a bit more than the 14-inch model, you can step up to the 15-inch model, which offers a lot more laptop for the money. The very base model is $979.80 and has the same Core i5-6300HQ processor as the 14-inch, as well as a 15.6-inch 1920x1080 IPS panel. RAM is the same 8 GB, and the storage drops to just a 500 GB 5400 RPM drive. The big jump in performance, though, comes courtesy of an NVIDIA GTX 960M GPU. It’s just the 2 GB model on the entry-level Y700, but it will offer roughly double the performance of the R9 M375 found in the 14-inch version of this notebook. Right now, Lenovo is offering an upgraded model with a Core i7-6700HQ (quad-core, Hyper-Threading, up to 3.5 GHz), and impressively you get a 1 TB HDD and a 128 GB SSD for the boot drive. This model is currently only $999, and for the extra $20, the upgraded CPU and storage are an easy decision. Lenovo is even offering the 3840x2160 UHD model with a 4 GB GTX 960M, 16 GB of DDR4, the Core i7, and the 128 GB plus 1 TB storage solution for $1149, which is pretty fantastic value. I’m not sure I’d like a UHD display on a low-end gaming notebook, but for general office work it would be great, and you can always choose a lower resolution for gaming.

Mid-Range Gaming: Less than $2000

Stepping it up a bit, you can move to an NVIDIA GeForce GTX 970M based laptop, which is going to raise performance even more. Mid-range models are plentiful, with practically every manufacturer offering one.

Alienware 15 R2

I’m a fan of the new look of the Alienware lineup. They also offer a lot of lighting customization, so you can really personalize the notebook after the fact and make it your own. The Alienware 15 R2 starts at $1200, with a Core i5-6300HQ processor and a GTX 965M. This baseline model comes with a 1920x1080 15.6-inch IPS display and 8 GB of memory. For the $1200, you just get a 1 TB 7200 RPM hard drive, but you can add in a 128 GB SATA SSD for $150, or a 256 GB PCIe SSD for $250 more. For $1600, you can get a Core i7-6700HQ model with a GTX 970M. This comes with 16 GB of DDR4 and the 128 GB boot drive in addition to the 1 TB HDD. Dell also offers some more customization, so you can move up to a GTX 980M, but it’s a $350 upgrade. Dell is also one of the few OEMs to offer an AMD Radeon option, and in this case it’s the top-end M395X as well. For those that want a high-DPI display, you can opt to replace the standard display with a UHD 3840x2160 IGZO panel for $200. Dell offers quite a range of customizations, which should help you fit the Alienware 15 R2 into your budget, and if you like the looks of an Alienware, this is a good mid-range model to consider. On top of this, all of the Alienware models can be used with the Alienware Graphics Amplifier, although it’s not an inexpensive investment.

ASUS ROG G752VT

If you are after a larger 17.3-inch model, the G752VT is a refresh of the G751 that we saw earlier this year. The VT is a less expensive version, which features the GTX 970M GPU rather than the GTX 980M found in the G751JY that we reviewed, but it comes in several hundred dollars less. I quite liked the G751, but I found the styling to be quite dated, so it was great to see ASUS address its biggest issue with the G752 refresh. It also features a move to Skylake, with the Core i7-6700HQ. There is 16 GB of DDR4 available, and the standard offering has a 128 GB PCIe SSD boot drive with a 1 TB hard drive for storage. ASUS offers a G-SYNC display panel with a 75 Hz refresh rate, and it’s really great to game on. This does mean there is no Optimus, but battery life really has to be a secondary concern with a large gaming notebook like this. The G752VT can be found for just over $1700.

High End Gaming

Pretty much all of the gaming laptop manufacturers would have something to fit into this category. Performance is generally what it’s all about, and there are plenty of systems which can offer not only one, but two GPUs. NVIDIA has also done some great work this year to remove the performance barrier on mobile even further, with the release of the GTX 980 in laptops. This is not the M version, but the actual desktop GTX 980.

Razer Blade

What Razer gives up in performance over some of the other high end gaming laptops, it makes up for in portability and build quality. This is a 14-inch notebook, so it loses a lot of thermal capacity over much larger models, but Razer has built their premium gaming laptop out of a CNC milled aluminum shell with a fantastic black finish. Many gaming laptops go for some pretty eye-catching style, but Razer keeps a subtle, classy look. With a Core i7-4720HQ and a GTX 970M, there is quite a bit of performance on tap for such a small notebook, and the Razer Blade has sufficient cooling to keep everything going even under stress. It also makes a pretty great general purpose notebook, with enough battery life that you can actually get some work done away from the mains. It also packs in a fantastic IGZO 3200x1800 QHD+ display. Check out our review of the 2015 model here. New for the 2015 model year is a lower cost version, which starts at $2000 and comes with a 1080p non-touch display. You lose some pixel density and touch, but it would be able to game at its native resolution quite well, and it also ekes out a bit more battery life. The QHD+ model starts at $2200 with a 128 GB SSD, and moves up to $2700 with a 512 GB drive.

MSI GT72 Dominator Pro

If you want even more performance, you may want to take a look at the MSI GT72 Dominator Pro. This 17.3-inch gaming notebook comes in a couple of models, but if you want great performance driving the 1080p G-SYNC panel, check out the GTX 980M model. It comes with a Core i7-6700HQ processor and 32 GB of DDR4 memory. Storage is 256 GB of SSD for the OS drive, and another 1 TB of hard disk for game storage. MSI has gone all in on gaming, and they offer some nice features like RGB backlit keyboards. The GTX 980M has really been the top dog in notebook graphics this year, and for good reason. It offers plenty of performance, without breaking the bank when it comes to heat output. The GT72 Dominator Pro comes in around $2100, depending on configuration.

MSI GT80 Titan

If you are looking for the ultimate in gaming performance, it’s tough to overlook the MSI GT80 Titan. This is almost less of a laptop and more of a portable desktop, and it features a massive 18.4-inch 1920x1080 display. And because the GT80 Titan has to take everything to eleven, it features not one, but two GTX 980M GPUs in SLI. MSI is also offering it with the desktop GTX 980 modules too, if you need even more performance. Although it is still on Broadwell, you do get the Core i7-5950HQ processor, so it should be plenty fast for almost any task. The GT80’s defining trait though is a fully mechanical SteelSeries keyboard with MX Brown switches. The trackpad gets moved off to the side, where it can double as a number pad. It’s an odd place for a trackpad, but it actually works really well. I would guess that most people will use it with a mouse though. The GT80 Titan has amazing performance, but it also does all of this while staying relatively quiet, even under load. I was surprised how much I liked this 10 lb monster notebook, and it’s an easy recommendation for anyone looking for performance. The GT80 Titan starts at $2500, and the GTX 980M SLI model with the Core i7-5950HQ CPU and 24 GB of memory is $3700. It’s not cheap, but there is nothing else in the same class as this notebook.

That wraps up our look at gaming notebooks. It’s tough to get into this lineup for less than $1000, but after that prices can jump up quite a bit depending on options and features. We’ve seen some nice additions to gaming notebooks this year, with the addition of G-SYNC likely being one of the standout points from 2015. Stay tuned for our third installment of notebooks, which will feature the two-in-one convertible device classes.

Huawei Launches The Mate 8, with Kirin 950


Today Huawei announces their new flagship, the Mate 8. We've already had a look at the Mate S during this year's IFA conference, and while I didn't quite manage to finalize my review of that device by the time of this posting, one thing I can say is that the Mate S felt like a tangent from the usual product category that the Mate series targets. Huawei confirms this suspicion with the release of the Mate 8, now bringing a true successor to last year's Mate 7.
 
The specifications of the Mate 8 are a large departure from both the Mate S and the Mate 7, with only the characteristically large 4000mAh battery and 6" screen tying the new device to its predecessor. First, let's go over the full specification list:
 
Huawei Mate 8
SoC: HiSilicon Kirin 950
  4x Cortex A53 @ 1.8GHz
  4x Cortex A72 @ 2.3GHz
  Mali-T880MP4 @ 900MHz
RAM: 3-4GB LPDDR4
NAND: 32GB / 64GB / 128GB NAND + microSD
Display: 6” 1080p JDI IPS-Neo LCD
Modem: 2G/3G/4G LTE Cat 6 (integrated HiSilicon Balong modem)
Networks (NXT-AL10 model):
  TDD LTE B38 / B39 / B40 / B41
  FDD LTE B1 / B2 / B3 / B4 / B5 / B7 / B8 / B12 / B17 / B18 / B19 / B20 / B26
  UMTS 850 / 900 / AWS / 1900 / 2100 (B19 / B8 / B6 / B5 / B4 / B2 / B1)
  GSM 850 / 900 / 1800 / 1900
Dimensions: 157.1 (h) x 80.6 (w) x 7.9 (d) mm, 185g weight
Rear Camera: 16MP (4608 × 3456) w/ OIS, Sony IMX298 1/2.8" w/ 1.12µm pixels, F/2.0 aperture, ?mm eq.
Front Camera: 8MP (3264 × 2448), Sony IMX179 1/3.2" w/ 1.4µm pixels, F/2.4 aperture, 26mm eq.
Battery: 4000mAh (15.2 Whr)
OS: Android 6.0 with EmotionUI 4.0
Connectivity: 802.11a/b/g/n/ac dual-band 2.4 & 5GHz, BT 4.2, microUSB 2.0, GPS/GNSS, DLNA, NFC
SIM Size: NanoSIM + NanoSIM (w/o microSD)
Chinese MSRP:
  3GB + 32GB: ¥2999-3199 (~USD 479, ~449€)
  4GB + 64GB: ¥3699 (~USD 591, ~554€)
  4GB + 128GB: ¥4399 (~USD 703, ~659€)

At the heart of the Mate 8 lies HiSilicon's new Kirin 950 SoC. We attended the chipset's launch in Beijing just a couple of weeks ago, and curious readers can find a more in-depth look at the SoC in our announcement piece. To recap, the new SoC is a big.LITTLE design with 4x Cortex A53 cores running at up to 1.8GHz serving as high-efficiency cores and 4x Cortex A72 high-performance cores running at 2.3GHz.

From the data that we've been presented with by Huawei and HiSilicon, it looks like the new Kirin 950 has made very large strides in terms of power efficiency, so that will definitely be a factor in the Mate 8's battery life. Indeed, with a similar 4000mAh battery and a more efficient SoC, Huawei promises that the Mate 8 will be able to last 30% longer than the Mate 7, a device which already topped our battery charts but, due to the inefficiency of its SoC, wasn't quite able to match its own predecessor at the time, the Mate 2.

On the GPU side we see a Mali-T880MP4 running at up to 900MHz. While this is by no means a slouch, it will fall only into the mid-range in terms of performance against 2016 devices, as we'll see the competition using Qualcomm's Snapdragon 820 or Samsung's Exynos 8890 with configurations employing much more powerful graphics processing. This is somewhat alleviated by Huawei's choice of keeping a 1080p resolution on the device's 6" IPS LCD screen.

To quickly refill that big battery, Huawei also announces fast charging at up to 2A at 9V, or 18W, promising the phone can reach a 37% charge in 30 minutes. Connectivity-wise, the device is powered by the Kirin 950's integrated Category 6 Balong LTE modem, although we'll have to wait for confirmation of the exact frequency bands of the international model. Huawei has also finally upgraded their Wi-Fi implementation to include 802.11ac and re-implements dual-band 2.4 and 5GHz support, which was notoriously missing from this year's devices.
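As a quick sanity check on those numbers (my own arithmetic on the quoted figures, not Huawei data), the 37%-in-30-minutes claim fits comfortably within an 18W charger:

```python
# Sanity check on the Mate 8 fast-charge figures quoted above:
# a 15.2 Wh (4000 mAh) pack, a 9 V / 2 A charger, and 37% in 30 minutes.
# Illustrative arithmetic only, using the quoted specs.

pack_wh = 15.2
charger_w = 9 * 2                      # 18 W at the charger input
energy_added_wh = 0.37 * pack_wh       # energy for a 37% charge
avg_charge_w = energy_added_wh / 0.5   # average power over half an hour

print(f"charger: {charger_w} W, average into pack: {avg_charge_w:.1f} W")
```

The ~11.2 W average sits well below the 18 W input, leaving headroom for conversion losses and the current taper near the top of the charge, so the claim looks plausible on paper.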

 

While on a recent trip to China, we had the opportunity to go hands-on with the Mate 8 and experience it live. As mentioned earlier, the device's defining characteristic is the large 6" display, and while the Mate 8 has kept this specification, Huawei has slightly tweaked the ergonomics by giving the back of the device a larger curve, resulting in thinner edges that make the device easier to hold. The front glass now also features 2.5D edges, which give it a better feel than the plastic bezels around the Mate 7's screen. Huawei keeps publishing some rather deceptive-looking device renders in which it appears as if the screen has no bezels. This is unfortunately not representative of the device, as it does have a ~1.5mm inactive border around the actual screen.

Other changes in design include the move to a bottom-placed speaker, now similar in design to the ones found on the P8 or the Mate S. The Mate 8 keeps the same camera, flash and fingerprint-sensor positioning of the Mate 7 but they all now use circular designs instead of square ones.

Speaking of the camera, we see the introduction of a new sensor module from Sony. The IMX298 is a new 1/2.8" 16MP unit with 1.12µm pixels and phase-detection auto-focus (PDAF) pixels. The optics offer an F/2.0 aperture lens along with optical image stabilization (OIS) with up to 1.5° of movement. While trying out the camera, I found that it seemed to offer quite good picture quality, and the new ISP of the Kirin 950 certainly seems to be behind some of the camera improvements. Unfortunately, due to the Kirin 950's encode limitations, the device doesn't offer 4K recording and is limited to more traditional 1080p video. On the front we see an 8MP shooter, most likely the same sensor and module configuration found on the Mate S.

The device comes in configurations at three price points: 3GB of RAM with 32GB of NAND storage, or 4GB of RAM with 64GB or 128GB of storage. As has become traditional for Huawei, the Mate 8 offers either dual-nanoSIM capability, or you can use the second SIM slot as a microSD tray for additional storage. The phone ships with Android 6.0 Marshmallow with Huawei's new EmotionUI 4.0 customization on top, and will initially be available in China from Q1 2016, with an introduction for Western markets to follow at CES. Chinese MSRPs for the 32GB, 64GB and 128GB models come in at ¥2999, ¥3699 and ¥4399 respectively.

Best NASes: Holiday 2015


We have already published holiday guides for mobile devices, laptops, CPUs, PSUs and SSDs. Today, we will take a look at the various options available in the commercial off-the-shelf (COTS) network-attached storage (NAS) market space.

The COTS NAS market can't be simply delineated based on price and performance. As a rule of thumb, one can say that the price of a NAS increases with the number of bays in it. However, even within the same number of bays, we get NAS units spanning a wide price range. Any consumer in the market for a NAS needs to consider the following aspects before deciding upon the budget:

  • Amount of storage needed (number of bays)
  • Intended use-case
    • Business-oriented or home / multimedia-focused
    • Expected number of simultaneous clients
    • Downtime tolerance
    • Required processing power (both file-serving and apps)
  • Value of invested time (in the case where there is a toss-up between the COTS and DIY routes)
  • Mobile and native NAS applications ecosystem

We have evaluated a large number of NAS units (with different bay-counts) over the last several years. The lineups mentioned below (in alphabetical order) are the ones that we are comfortable recommending for purchase after putting a few of their members through long-term testing. Compared to last year, we have removed the LenovoEMC i- and p- series, as they no longer seem to be available for purchase and no new products have been announced in the last two years (even though their support forums are still active with official replies).

  1. Asustor Storage Units
  2. Netgear ReadyNAS Series
  3. QNAP Turbo NAS Units
  4. Seagate NAS and NAS Pro Units
  5. Synology DiskStation and RackStation Series
  6. Western Digital Consumer Series

In this guide, we present suitable options for 2-, 4- and 8-bay NAS units targeting the home consumer / SOHO market. One important aspect here is that we are not going to talk about the high-end SMB market or the multitude of offerings that come with Windows Storage Server or some similar flavor. Only products based on custom OSes are being considered in this guide.

Option 1 (2-bay): Western Digital My Cloud Mirror Gen 2 [ 2x2TB: $310 , Review ]

Most units sold in the 2-bay market are purchased by the average consumer who wants to back up photos and videos taken with mobile devices. A performance powerhouse is rarely needed in this market segment. While the user experience with the mobile app(s) is vital, the presence of apps on the NAS itself is just icing on the cake.

Western Digital revamped their 2-bay product line with the My Cloud Mirror Gen 2 earlier this year for home consumers (with the My Cloud EX2 still available for the SOHO / low-end SMB market). There are no diskless models, and the units come with WD Red drives. Integrated Docker capabilities in the My Cloud OS point to the possibility of multiple easily-integrated third-party apps in the future. Western Digital is obviously a big vendor, with end-user support appropriate even for non-tech-savvy folks. Coupled with the plug-and-play experience, this makes it an ideal gift for the holiday season for anyone who is looking to get started with network attached storage and needs basic data protection.

Option 2 (4-bay): QNAP TS-453 Pro [ Diskless / 2GB RAM: $566 ]

We saw almost all the vendors listed above (except for Seagate) release new 4-bay NAS units this year. Asustor's Braswell-based AS6204T [ $668 , Review ] is solid and stable, while the QNAP TS-451+ [ Diskless / 2GB RAM: $529 , Review ] also performs admirably despite being based on the previous-generation Bay Trail platform. However, the best bang for the buck continues to be the QNAP TS-453 Pro. The price is just a little bit higher than that of the TS-451+, but the unit comes with extra LAN ports ideal for dedicating to virtual machines running on the NAS.

I wouldn't suggest running intensive VMs on the Intel Celeron J1900-based TS-453 Pro, but the platform is powerful enough to run Ubuntu VMs and the like for, say, acting as a home automation controller. Given the age of the platform, it is likely that the TS-453 Pro will continue to see downward price pressure. However, the unit is quite powerful for advanced users and the software platform is very rich in features (both mobile apps and the NAS apps ecosystem).

Option 3 (8-bay): QNAP TVS-871-i7-16G [ Diskless / 16GB RAM: $2199 ]

Our 8-bay recommendation also goes to a QNAP NAS. The TVS-871-i7-16G is a no-holds-barred NAS sporting a Core i7-4790S Haswell processor. With 16 GB of RAM and a minimum of 4x 1GbE ports (an additional 2x 10GbE is also possible with the spare PCIe expansion slot), this NAS is ideal for running multiple intensive VMs. The 4C/8T Core i7 CPU ensures that there is enough processing power for the VMs and plenty to spare for NAS functionality as well as apps running on the NAS itself.

The TVS-x71 units are meant for the high-end SMB market, but, in our evaluation of a TVS-871T-i7-16G unit over the last several months (review is coming out soon), we can say that it is positively drool-worthy for the high-end power users with cash to burn. The Pentium-based model comes in at $1350, while the Core i3-based one is at $1377.

For a more moderately priced 8-bay system on the COTS side, one could opt for models such as the Synology DS1815+ [ $961 , Review ] or the QNAP TS-853 Pro [ $993 , Review ]. Obviously, going the DIY route with, say, an ASRock Rack C2750D4I board and a U-NAS NSC-800 chassis [ Review ] might make for an interesting build, but the price difference is not that big (approx. $845 vs. approx. $1000) when build time and software management aspects are considered.

Honorable Mentions:

Option 4 (2-bay): Synology BeyondCloud Mirror BC214se [ 2x2TB: $333 ]

Option 5 (4-bay): Netgear ReadyNAS RN214 [ Diskless: $500 ]

Synology is surprisingly absent from our list of recommendations this year. It is understandable, as their primary focus has been on the high-end SMB / SME market over the past year. They did release the DS416 based on the Annapurna Labs AL212 platform last month, but the main push on the software side of things has been for business-oriented features. With DSM 6.0 slated for next year, and Braswell-based NAS units set to appear soon, things will change. That said, earlier this year at CES, Synology also unveiled the BeyondCloud series targeting novice users. Consumers looking for an alternative to the Western Digital My Cloud Mirror Gen 2 can go for the Synology BeyondCloud Mirror BC214se at a similar price point. Just as with the My Cloud Mirror Gen 2, the RAID volume comes pre-configured (the BC series uses Seagate NAS HDDs). Obviously, the software ecosystem (DSM + apps) is quite rich compared to the My Cloud Mirror Gen 2, justifying the slight premium.

On the 4-bay side, Netgear's ReadyNAS RN214 with an updated quad-core Annapurna Labs SoC and btrfs support is an interesting option. Coupled with the newly introduced Netgear Nighthawk X8 R8500 tri-band 4x4 802.11ac router and the promise of plug-and-play link aggregation support, it presents a compelling solution for consumers in the market for a router as well as a NAS.

AMD Releasing New Crimson Drivers for GPU Fan Issue


Last week we covered the launch of AMD’s new Radeon software, known as Crimson. Crimson is a departure from the Catalyst name, offering an updated interface and promising a larger range of quality assurance testing moving into the new DX12 era. Part of this includes several new features, and it’s worth reading Ryan and Daniel’s piece on the new software. Despite the best intentions, it turns out that the new driver also ships with a few issues that are leaving some users concerned.

As reported in this Reddit thread over at /r/pcmasterrace, the new drivers are causing some graphics cards to adopt an abnormal fan profile, limiting the fan speed to a maximum of 20% by default. As a result, during workloads that require the graphics card, the components on the card are heating up faster than intended. It should be noted that the extent of this issue is hard to determine at this point, as a random spread of users seem to be affected right now.

Technically the low fan speed should simply result in the GPU hitting its thermal limits, causing the chip to reduce its voltage and frequency. According to these reports, however, some of the affected cards are failing outright, apparently as a result of VRM overheating or other board design issues related to high temperatures, even when the GPU throttles down. In other words, despite the throttling, the sustained power draw combined with the low fan speed still drives temperatures up, and rather than tripping some sort of fail-over and giving a BSOD, some GPUs appear to have components that succumb to the heat before any fail-over kicks in.
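To illustrate why throttling alone may not save a card under a 20% fan cap, here is a toy first-order thermal model. All of the constants below (cooling coefficient, wattages) are hypothetical, chosen only to show the qualitative effect, not measured from any real card:

```python
# Illustrative first-order thermal model (all constants hypothetical):
# steady-state temperature rises with dissipated power and falls with
# cooling capacity, which scales with fan speed. Even a heavily
# throttled GPU can settle at a higher temperature under a 20% fan cap
# than a full-power GPU does with a normal fan profile.

def equilibrium_temp(power_w, fan_pct, ambient_c=25.0, k_cool=0.04):
    """Steady state: heat in = heat out -> T = ambient + P / (k * fan%)."""
    return ambient_c + power_w / (k_cool * max(fan_pct, 1))

full_load = equilibrium_temp(power_w=250, fan_pct=100)  # normal fan profile
throttled = equilibrium_temp(power_w=120, fan_pct=20)   # bugged 20% fan cap

print(f"100% fan, 250 W: {full_load:.0f} C")
print(f" 20% fan, 120 W (throttled): {throttled:.0f} C")
assert throttled > full_load  # throttling alone can't offset the fan cap
```

With these assumed constants, the throttled card still equilibrates roughly twice as hot as the unthrottled one, which is the qualitative failure mode the reports describe.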

Some users report that the limit stems from a global OverDrive setting in the software, and that the 20% fan speed cap can be removed by following these instructions. However, this workaround must be re-applied every time the system is restarted. The fan speed should also be adjustable using third-party software (MSI Afterburner, EVGA PrecisionX).


Instructions from /u/Mufinz1337: 'Make sure it states OFF if you want your fan speeds to be automatic'

We received an email late last night from AMD stating that the problem has been identified and an update to the drivers will be available at some point today, Monday 30th November.

The problem seems to revolve around system configurations that confuse the initial release of the Crimson software, resulting in an odd fan setting being applied when the software initializes (although the affected GPUs appear to be a random assortment, even in seemingly straightforward systems). Some users have reported that their cards have permanently failed, although the exact causes are unknown at this point. We have seen reports pointing to poor VRM quality on cheaper cards operating outside their specified temperature window, though at this point no OEM has released a statement regarding replacements – users with cards in warranty should, under normal circumstances, be able to get them replaced through their retailer, and it will be up to the retailer/OEM to manage the issue further up the chain with distribution.

Affected users can either make the manual fan adjustment each time they boot their system, roll back the drivers via DDU, or wait for the driver update later today. We will post links here when we get them.

Best Convertible Laptops: Holiday 2015


For our final segment on notebooks, we will take a look at convertibles, having already covered standard notebooks as well as gaming laptops. This category sprouted out of nowhere with the release of Windows 8, but it has made some huge strides over the years, with better and better devices being released. With Windows 10’s ability to switch the interface depending on which mode you are in, convertibles are now a fully fledged member of the notebook family.

I break convertibles down into two segments. The first are tablet-first devices, defined by having the CPU and other components in the display section. On these, the keyboard is an add-on; they generally offer a better tablet experience, but the keyboards are somewhat compromised, and with the heavy tablet section sitting out over the hinge they are not as well balanced as a traditional notebook. The second segment consists of devices whose keyboard can flip around underneath the display. As tablets they are not as good, since the heavy keyboard section stays attached, but as regular notebooks they have the advantage in balance and generally offer a better typing experience.

We’ve seen some great additions to the lineup this year, with many adopting Intel’s updated Skylake platform. Lower-cost devices may turn to Intel’s Cherry Trail or Braswell platforms, which use lower-performance Atom cores but offer a much lower TDP and greater efficiency.

Tablet devices with attachable keyboards

Budget Convertible: ASUS T100HA

ASUS basically invented this class with the launch of the original T100 back in 2013. For 2015, ASUS is back with a refresh of their 10.1-inch convertible, the T100HA, now powered by Intel’s Cherry Trail with the x5-Z8500 SoC. This is a quad-core 14 nm processor with a 1.44 GHz base frequency and a 2.24 GHz burst, and since it is Cherry Trail the power requirements are very low, with a TDP of just 2 Watts. ASUS offers two models: one with 2 GB of memory and 32 GB of eMMC, and another with 4 GB of memory and 64 GB of eMMC. The latter is a great value at just $279 USD. With 64 GB and Windows 10, space should be fine for basic tasks. ASUS even includes a USB-C connector on the T100HA. The display is decidedly low resolution at 1280x800, but that is a 16:10 aspect ratio for those doing the math. The keyboard and trackpad are small, but for the price, it’s hard to beat the T100HA.

Mid-Range: Microsoft Surface 3

The Microsoft Surface 3 is the first of the non-Pro models from Microsoft to sport an x86 processor, which opens up the entire Windows software ecosystem to Microsoft’s lower cost tablet. The 10.8-inch display is a wonderful 3:2 aspect ratio, with a resolution of 1920x1280. This makes it a much better tablet than the former 16:9 models, especially in portrait mode. The Surface 3 is also powered by Cherry Trail, but in this case the top-end x7-Z8700 model. The base clock is 1.6 GHz and burst jumps to 2.4 GHz. The 2-Watt processor does a decent job running Windows 10, but it still can’t hold a candle to Intel’s Core series. Microsoft has bumped the base storage to 64 GB of eMMC with 2 GB of memory, or you can jump to 4 GB of memory with 128 GB of eMMC. There is a microSD slot as well if you need to add a bit more storage, and with the latest Windows 10 updates it’s very easy to use the SD card for data or even apps. The Surface 3 has a very premium build quality, with a great magnesium finish, and the Surface’s signature kickstand now has three different stops on the new model. It also adds support for the Surface Pen, and of course the Type Cover option to transform it into a laptop. It’s not an inexpensive purchase, but the display and build quality are a step ahead of most of the competition.

High-End: Microsoft Surface Pro 4

The latest iteration of the Surface Pro makes some big strides, and distances itself from the competition even further. There have actually been quite a few Surface Pro clones released this year, but it’s going to be a tall task to overcome the incumbent. Pretty much all of the issues with the last generation Surface Pro 3, which was already a great device, have been sorted out with the new model. The processor is now the latest generation Skylake, with options up to the Core i7-6650U with Iris graphics, and if you want fanless, the base model is a Core m3-6Y30. System memory starts at 4 GB, but you can get up to 16 GB on the higher end versions. Storage is now PCIe NVMe with 128 GB as the base offering, and up to a whopping 1 TB is going to be available soon. One issue with the Surface Pro 3 was that it had a tendency to throttle under heavy load, but the new cooling system in the Surface Pro 4 fixes that too. The display in the latest model is a fantastic 267 pixels per inch, with a 3:2 aspect ratio. What’s more, the new Type Cover improves the typing experience immensely, and the larger, smoother trackpad is now on par with good notebooks. The Surface Pro 4 starts at $899, with prices going way up from there depending on options. It’s not inexpensive, but the Surface Pro 4 delivers.

Notebook first: 360 degree hinge

14-inch with optional NVIDIA Graphics: Lenovo Yoga 700

Lenovo’s Yoga was the original 360-degree hinge laptop, and it adds a lot of functionality over a traditional notebook by being able to couple touch with the hinge. You gain access to not only the tablet mode, but also tent and stand modes. For late 2015, Lenovo has refreshed the lineup and the Yoga 700 makes our list. It’s a 14-inch notebook, but it packs Skylake processors and even an optional NVIDIA GT 940M GPU inside. The display is a reasonable 1920x1080, which of course includes multi-touch. The Yoga 700 loses out on the weight and thickness battle with Lenovo’s higher end models, but it makes up for that in price. The 3.5 lb notebook starts at just $770 with a Core i5-6200U, and for $900 you can get the Core i7-6500U with double the storage (256 GB vs 128 GB on the base) as well as the GT 940M GPU.

Buy the Lenovo Yoga 700 on Lenovo.com

Beautiful aluminum design: HP Spectre x360

HP released their own version of a convertible notebook this year with the release of the HP Spectre x360. They have recently refreshed it to include Skylake processors too. The HP offers great battery life, as well as a beautiful aluminum finish. The trackpad is enormous, with a much wider model than most devices offer. Although we have not had a chance to review the Spectre, I’ve been using one since April and the build quality is top notch. The HP has a great keyboard too, although I’m not a fan of silver keys with white backlighting since they get washed out in any sort of lighting. The base model offers a 1920x1080 display, and you can also get a 2560x1440 model as well. I would likely stick with the 1080p model for battery life reasons, and the base display is quite good. HP doesn’t break the bank either with their nicely crafted convertible. The HP Spectre x360 starts at just $799 with a Core i5-6200U, 8 GB of memory, and 128 GB of storage.

Buy the HP Spectre x360 on HP.com

High resolution and amazing hinge: Lenovo Yoga 900

Lenovo has once again revamped their Yoga lineup, and the top end of the consumer lineup is now the Yoga 900. This is a successor to the Yoga 3 Pro, and Lenovo looks to fix some of the ailments of that model. The Yoga 3 Pro went for thin and light over pretty much anything, and it did it by using Core M. For the Yoga 900, Lenovo has made it slightly thicker, but by doing so they have been able to move back to the 15-Watt Core processors. They have also increased battery capacity, which is now an impressive 66 Wh. The display is the same 3200x1800 IPS panel, for good and bad. I really hope that Lenovo moves away from this Samsung RGBW Pentile display for future models, since there are plenty of better choices out there now. But still, the overall laptop keeps its thin and light design, along with the beautiful watchband hinge. The 1.3 kg (2.8 lb) convertible is just 14.9 mm (0.59”) thick. You also get USB-C with video out, and the base specifications have gotten a bump. Storage now starts at 256 GB, and can move to 512 GB, and RAM starts at 8 GB and they also offer a 16 GB model. The new Yoga 900 starts at $1200, and goes up to $1400 with 16 GB of memory and 512 GB of storage.

Buy the Lenovo Yoga 900 on Lenovo.com

This wraps up our look at laptops for 2015. It has been a great year for notebooks, with some amazing new models in all categories.

More on Apple’s A9X SoC: 147mm2@TSMC, 12 GPU Cores, No L3 Cache


Over the Thanksgiving break the intrepid crew over at Chipworks sent over their initial teardown information for Apple’s A9X SoC. The heart of the recently launched iPad Pro, the A9X is the latest iteration in Apple’s line of tablet-focused SoCs. We took an initial look at A9X last month, but at the time we only had limited information based on what our software tools could tell us. The other half of the picture (and in a literal sense, the entire picture) is looking at the physical layout of the chip, and now thanks to Chipworks we have that in hand and can confirm or reject some of our earlier theories.

A9X is the first dedicated ARM tablet SoC to be released on a leading-edge FinFET process, and it’s being paired with Apple’s first large-format tablet, which in some ways changes the rules of the game. On the one hand, Apple has to contend with the realities of manufacturing a larger SoC on a leading-edge process; on the other, a larger tablet that’s approaching the size of an Ultrabook opens up new doors as far as space and thermals are concerned. As a result, while we could make some initial educated guesses, we knew there would be a curveball in A9X’s design, and that’s something we couldn’t confirm until the release of Chipworks’ die shot. So without further ado:


A9X Die Shot w/AT Annotations (Die Shot Courtesy Chipworks)

Apple SoC Comparison

                       A9X                     A9                        A8X                  A6X
CPU                    2x Twister              2x Twister                3x Typhoon           2x Swift
CPU Clockspeed         2.26GHz                 1.85GHz                   1.5GHz               1.3GHz
GPU                    PVR 12 Cluster Series7  PVR GT7600                Apple/PVR GXA6850    PVR SGX554 MP4
RAM                    4GB LPDDR4              2GB LPDDR4                2GB LPDDR3           1GB LPDDR2
Memory Bus Width       128-bit                 64-bit                    128-bit              128-bit
Memory Bandwidth       51.2GB/sec              25.6GB/sec                25.6GB/sec           17.1GB/sec
L2 Cache               3MB                     3MB                       2MB                  1MB
L3 Cache               None                    4MB                       4MB                  N/A
Manufacturing Process  TSMC 16nm FinFET        TSMC 16nm & Samsung 14nm  TSMC 20nm            Samsung 32nm


Die Size: 147mm2, Manufactured By TSMC

First off, Chipworks’ analysis shows that the A9X is roughly 147mm2 in die size, and that it’s manufactured by TSMC on their 16nm FinFET process. We should note that Chipworks has only looked at the one sample, but unlike the iPhone 6s there’s no reason to expect that Apple is dual-sourcing a much lower volume tablet SoC.

At 147mm2 the A9X is the second-largest of Apple’s X-series tablet SoCs. Only the A5X, the first such SoC, was larger. Fittingly, it was also built relative to Apple’s equally large A5 phone SoC. With only 3 previous tablet SoCs to use as a point of comparison I’m not sure there’s really a sweet spot we can say that Apple likes to stick to, but after two generations of SoCs in the 120mm2 to 130mm2 range, A9X is noticeably larger.

Some of that comes from the fact that A9 itself is a bit larger than normal – the TSMC version is 104.5mm2 – but Apple has also clearly added a fair bit to the SoC. The wildcard here is what yields look like for Apple, as that would tell us a lot about whether 147mm2 is simply a large part or if Apple has taken a greater amount of risk than usual here. As 16nm FinFET is TSMC’s first-generation FinFET process, and save possibly some FPGAs this is the largest 16nm chip we know to be in mass production there, it’s reasonable to assume that yields aren’t quite as good as with past Apple tablet SoCs. But whether they’re significantly worse – and if this had any impact on Apple’s decision to only ship A9X with the more expensive iPad Pro – is a matter that we’ll have to leave to speculation at this time.

Finally, it's also worth noting just how large A9X is compared to other high-performance processors. Intel's latest-generation Skylake processors measure in at ~99mm2 for the 2 core GT2 configuration (Skylake-Y 2+2), and even the 4 core desktop GT2 configuration (Skylake-K 4+2) is only 122mm2. So A9X is larger than either of these chips, though admittedly as a whole SoC A9X contains a number of functional units either not present on Skylake or found on Skylake's Platform Controller Hub (PCH). Still, this is the first time that we've seen Apple launch a tablet SoC larger than an Intel 4 core desktop CPU.

GPU: PVR 12 cluster Series7

One thing we do know is that Apple has invested a lot of their die space into ramping up the graphics subsystem and the memory subsystem that feeds it. Based on our original benchmark results of the A9X and the premium on FinFET production at the moment, I expected that the curveball with A9X would be that Apple went with a more unusual 10 core PowerVR Series7 configuration, up from 6 cores in A9. Instead, based on Chipworks’ die shot, I have once again underestimated Apple’s willingness to quickly ramp up the number of GPU cores they use. Chipworks’ shot makes it clear that there are 12 GPU cores, twice the number found in the A9.

In Imagination’s PowerVR Series7 roadmap, the company doesn’t have an official name for a 12 core configuration, as this falls between the 8 core GT7800 and 16 core GT7900. So for the moment I’m simply calling it a “PowerVR 12 cluster Series7 design,” and with any luck Imagination will use a more fine-grained naming scheme for future generations of PowerVR graphics.

In any case, the use of a 12 core design is a bit surprising since it means that Apple was willing to take the die space hit to implement additional GPU cores, despite the impact this would have on chip yields and costs. If anything, with the larger thermal capacity and battery of the iPad Pro, I had expected Apple to use higher GPU clockspeeds (and eat the power cost) in order to save on chip costs. Instead what we’re seeing is a GPU that essentially offers twice the GPU power of A9’s GPU. We don’t know the clockspeed of the GPU – this being somewhat problematic to determine within the iOS sandbox – but based on our earlier performance results it’s likely that A9X’s GPU is only clocked slightly higher than A9’s. I say slightly higher because no GPU gets 100% performance scaling with additional cores, and with our GFXBench Manhattan scores being almost perfectly double that of A9’s, it stands to reason that Apple had to add a bit more to the GPU clockspeed to get there.
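That back-of-the-envelope reasoning about clockspeed can be sketched numerically. The 90% core-scaling efficiency below is an assumed figure for illustration (real scaling depends on the workload), not a measured property of A9X:

```python
# Back-of-the-envelope: if doubling GPU cores scales performance
# imperfectly, how much extra clockspeed closes the gap to a full 2x?
# The 0.90 scaling efficiency is an assumption for illustration only.

def clock_uplift_for_target(core_ratio, scaling_eff, perf_target):
    """Clock multiplier needed, given perf = core_ratio * eff * clock_mult."""
    return perf_target / (core_ratio * scaling_eff)

uplift = clock_uplift_for_target(core_ratio=2.0, scaling_eff=0.90, perf_target=2.0)
print(f"Clock uplift needed: {uplift:.2f}x")  # ~1.11x under these assumptions
```

In other words, under this assumption a roughly 11% clockspeed bump would be enough to turn imperfect 2x core scaling into the near-perfect doubling seen in our GFXBench Manhattan scores.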

Meanwhile looking at the die shot a bit deeper, it’s interesting how spread out the GPU is. Apple needed to place 6 clusters and their associated shared logic on A9X, and they did so in a decidedly non-symmetrical manner. On that note, it’s worth pointing out that while Apple doesn’t talk about their chip design and licensing process, it’s highly likely that Apple has been doing their own layout/synthesis work for their PowerVR GPUs since at least the A4 and its PowerVR SGX 535, as opposed to using the hard macros from Imagination. This is why Apple is able to come up with GPU configurations that are supported by the PowerVR Rogue architecture, but aren’t official configurations offered by Imagination. A8X remains an especially memorable case since we didn’t initially know Series6XT could scale to 8 GPU cores until Apple went and did it, but otherwise what we see with any of these recent Apple SoCs is what should be a distinctly Apple GPU layout.

Moving on, the memory controller of the A9X is a 128-bit LPDDR4 configuration. With twice as many GPU cores, Apple needs twice as much memory bandwidth to maintain the same bandwidth-to-core ratio, so like the past X-series tablet SoCs, A9X implements a 128-bit bus. For Apple this means they now have a sizable 51.2GB/sec of memory bandwidth to play with. For a SoC this is a huge amount of bandwidth, but at the same time it’s quickly going to be consumed by those 12 GPU cores.
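As a sanity check on those numbers, peak theoretical bandwidth is simply bus width times transfer rate. An LPDDR4-3200 transfer rate is assumed here because it is the rate consistent with the quoted 51.2GB/sec figure:

```python
# Peak theoretical bandwidth = (bus width in bytes) x (transfer rate).
# A 128-bit LPDDR4 interface at an assumed 3200 MT/s reproduces A9X's
# quoted 51.2GB/sec; halving the bus width reproduces A9's 25.6GB/sec.

def peak_bandwidth_gbs(bus_bits, mega_transfers):
    return (bus_bits / 8) * mega_transfers / 1000  # GB/s

a9x = peak_bandwidth_gbs(128, 3200)  # A9X: 51.2 GB/s
a9  = peak_bandwidth_gbs(64, 3200)   # A9:  25.6 GB/s
print(a9x, a9)

# Bandwidth per GPU core is unchanged from A9: ~4.27 GB/s per core.
print(round(a9x / 12, 2), round(a9 / 6, 2))
```

The last line shows the point made below: doubling both the bus width and the core count leaves the bandwidth-to-core ratio exactly where it was on A9.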

L3 Cache: None

Finally let’s talk about the most surprising aspect of the A9X: its L3 cache layout. When we published our initial A9X results we held off talking about the L3 cache, as our tools pointed out some extremely unusual results that we wanted to wait on the Chipworks die shot to confirm. What we were seeing was that there wasn’t a region of roughly 50ns memory latency around the 4MB mark, which on A9 is the test size at which we hit its 4MB L3 victim cache.

What Chipworks’ die shot now lets us confirm is that this wasn’t a fluke in our tools or the consequence of a change in how Apple’s L3 cache mechanism worked, but rather that there isn’t any L3 cache at all. After introducing the L3 cache with the A7 in 2013, Apple has eliminated it from the A9X entirely. The only caches to be found on A9X are the L1 and L2 caches for the CPU and GPU respectively, along with some even smaller amounts of cache for various other functional blocks.

The big question right now is why Apple would do this. Our traditional wisdom here is that the L3 cache was put in place to service both the CPU and GPU, but especially the GPU. Graphics rendering is a memory bandwidth-intensive operation, and as Apple has consistently been well ahead of many of the other ARM SoC designers in GPU performance, they have been running headlong into the performance limitations imposed by narrow mobile memory interfaces. An L3 cache, in turn, would alleviate some of that memory pressure and keep both CPU and GPU performance up.

One explanation may be that Apple deemed the L3 cache no longer necessary with the A9X’s 128-bit LPDDR4 memory bus; that 51.2GB/sec of bandwidth meant that they no longer needed the cache to avoid GPU stalls. However while the use of LPDDR4 may be a factor, Apple’s ratio of bandwidth-to-GPU cores of roughly 4.26GB/sec-to-1 core is identical to A9’s, which does have an L3 cache. With A9X being a larger A9 in so many ways, this alone isn’t the whole story.

What’s especially curious is that the L3 cache on the A9 wasn’t costing Apple much in the way of space. Chipworks puts the size of A9’s 4MB L3 cache block at a puny ~4.5 mm2, which is just 3% of A9X’s total die area. So although there is a cost to adding L3 cache, unless there are issues we can’t see even with a die shot (e.g. routing), Apple didn’t save much by getting rid of the L3 cache.

Our own Andrei Frumusanu suspects that it may be a power matter, and that Apple was using the L3 cache to save on power-expensive memory operations on the A9. With A9X however, it’s a tablet SoC that doesn’t face the same power restrictions, and as a result doesn’t need a power-saving cache. This would be coupled with the fact that with double the GPU cores, there would be a lot more pressure on just a 4MB cache versus the pressure created by A9, which in turn may drive the need for a larger cache and ultimately an even larger die size.

As it stands there’s no one obvious reason, and it’s likely that all 3 factors – die size, LPDDR4, and power needs – played a part here, with only those within the halls of One Infinite Loop knowing for sure. However I will add that since Apple has removed the L3 cache, the GPU L2 cache must be sizable. Imagination’s tile-based deferred rendering technology needs an on-chip cache to hold the tiles it is working on, and while it doesn’t need an entire frame’s worth of cache (which on iPad Pro would be over 21MB), it does need enough cache to hold a single tile. It’s much harder to estimate GPU L2 cache size from a die shot (especially with Apple’s asymmetrical design), but I wouldn’t be surprised if A9X’s GPU L2 cache is larger than A9’s or A8X’s.

In any case, the fact that A9X lacks an L3 cache doesn’t change the chart-topping performance we’ve been seeing from iPad Pro, but it means that Apple has once more found a way to throw us a new curveball. And closing on that note, we’ll be back a bit later this month with our full review of the iPad Pro and a deeper look at A9X’s performance, so be sure to stay tuned for that.

Correcting Apple's A9 SoC L3 Cache Size: A 4MB Victim Cache


Along with today’s analysis of Chipworks’ A9X die shot, I’m also going to use this time to revisit Apple’s A9 SoC. Based on some new information from Chipworks and some additional internal test data, I am issuing a correction to our original analysis of Apple’s latest-generation phone SoC.

In our original analysis of the A9, I wrote that the L3 cache was 8MB. This was based upon our initial tests along with Chipworks’ own analysis of the physical layout of the A9, which pointed to an 8MB L3 cache. Specifically, at the time I wrote:

However it’s also worth mentioning that as Apple is using an inclusive style cache here – where all cache data is replicated at the lower levels to allow for quick eviction at the upper levels – then Apple would have needed to increase the L3 cache size by 2MB in the first place just to offset the larger L2 cache. So the “effective” increase in the L3 cache size won’t be quite as great. Otherwise I’m a bit surprised that Apple has been able to pack in what amounts to 6MB more of SRAM on to A9 versus A8 despite the lack of a full manufacturing node’s increase in transistor density.


My Layout Analysis For A9 (Die Shot Courtesy Chipworks)

As it turns out, 8MB of cache was too good to be true. After a few enlightening discussions with some other individuals, some further testing, and further discussions with Chipworks, both our performance analysis and their die analysis far more strongly point to a 4MB cache. In particular, Chipworks puts the physical size of the TSMC A9 variant’s L3 cache at ~4.5mm2, versus ~4.9mm2 for A8’s L3 cache. Ultimately TSMC’s 16nm FinFET process is built on top of their 20nm process – the metal pitch size as used by Apple is the same with both processes – and this is the limiting factor for the L3 cache SRAM density.

Apple SoC Comparison

                       A9X                     A9                        A8                   A7
CPU                    2x Twister              2x Twister                2x Typhoon           2x Cyclone
CPU Clockspeed         2.26GHz                 1.85GHz                   1.4GHz               1.3GHz
GPU                    PVR 12 Cluster Series7  PVR GT7600                PVR GX6450           PVR G6430
RAM                    4GB LPDDR4              2GB LPDDR4                1GB LPDDR3           1GB LPDDR3
Memory Bus Width       128-bit                 64-bit                    64-bit               64-bit
Memory Bandwidth       51.2GB/sec              25.6GB/sec                12.8GB/sec           12.8GB/sec
L2 Cache               3MB                     3MB                       1MB                  1MB
L3 Cache               None                    4MB (Victim)              4MB (Inclusive)      4MB (Inclusive)
Manufacturing Process  TSMC 16nm FinFET        TSMC 16nm & Samsung 14nm  TSMC 20nm            Samsung 28nm

But what is perhaps more interesting is what Apple is doing with their 4MB of L3 cache. An inclusive cache needs to be larger than the previous (inner) cache level, as it contains a copy of everything from the previous cache level. On A8 this was a 4:1 ratio, whereas with A9 this is a 4:3 ratio. One could technically still have an inclusive L3 cache with this setup, but the majority of its space would be occupied by the copy of the A9’s now 3MB L2 cache.

So what has Apple done instead? In light of Chipworks’ reassessment of the A9’s L3 cache size, it’s clear that Apple has re-architected their L3 cache design.

What I believe we’re looking at here is that Apple has gone from an inclusive cache on A7 and A8 to a victim cache on A9. A victim cache, in a nutshell, is a type of exclusive cache that is filled (and only filled) by cache lines evicted from the previous cache level. In A9’s case, this means that lines evicted from the L2 caches are sent to the L3. This keeps recently used data and instructions that don’t fit in the L2 cache on-chip, improving performance and saving power versus having to go to main memory, as recently used data is likely to be needed again.

The shift from an inclusive cache to a victim cache allows the 4MB cache on A9 to still be useful, despite the fact that it’s now only slightly larger than the CPU’s L2 cache. Of course there are tradeoffs here – if you actually need something in the L3, it’s more work to manage moving data between L2 and L3 – but at the same time this allows Apple to retain many of the benefits of a cache without dedicating more space to an overall larger L3 cache.
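The inclusive-versus-victim distinction described above can be sketched with a toy two-level model. This is purely an illustration of the eviction-fill policy (with made-up line counts), not a model of Apple’s actual cache hardware:

```python
# Toy model of a victim cache: the L3 holds only lines evicted from the
# L2, so its capacity effectively sits "after" the L2 rather than
# duplicating it, as an inclusive cache would.

from collections import OrderedDict

class VictimCacheHierarchy:
    def __init__(self, l2_lines, l3_lines):
        self.l2 = OrderedDict()  # LRU order: oldest entry first
        self.l3 = OrderedDict()
        self.l2_lines, self.l3_lines = l2_lines, l3_lines

    def access(self, addr):
        if addr in self.l2:                  # L2 hit
            self.l2.move_to_end(addr)
            return "L2"
        hit = "L3" if addr in self.l3 else "memory"
        self.l3.pop(addr, None)              # promote out of L3 on a hit
        self.l2[addr] = True                 # fill into L2
        if len(self.l2) > self.l2_lines:     # L2 eviction fills the L3
            victim, _ = self.l2.popitem(last=False)
            self.l3[victim] = True
            if len(self.l3) > self.l3_lines:
                self.l3.popitem(last=False)
        return hit

# With 3 L2 lines plus 4 L3 lines, 7 distinct lines all stay on-chip:
h = VictimCacheHierarchy(l2_lines=3, l3_lines=4)
for addr in range(7):
    h.access(addr)
print(h.access(0))  # "L3": evicted from L2, caught by the victim cache
```

The point of the sketch is the capacity math: an exclusive victim L3 adds its full size on top of the L2, which is why a 4MB victim cache remains useful next to a 3MB L2, whereas an inclusive 4MB L3 would mostly be occupied by a copy of that L2.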

Meanwhile from the software side we can validate that it’s a victim cache by going back to our A9 latency graph. With the exclusive nature of the victim cache, the effective range of the L3 cache on A9 is the first 4MB after the end of the L2 cache; in other words, the L3 cache covers the 3MB to 7MB range in this test. Looking at our results, there’s a significant jump up in latency from 7MB to 8MB. Previously I had believed this to be due to the fact that our testing can’t control everything in the cache – the rest of the OS still needs to run – but in retrospect this fits the data much better, especially when coupled with Chipworks’ further analysis.

Ultimately the fact that Apple made such a significant cache change with A9 is more than I was expecting, but at the same time it’s worth keeping in mind that the L3 cache was only introduced back alongside Cyclone (A7) to begin with. So like several other aspects of Apple’s SoC design, A9 is very much an Intel-style “tock” on the microarchitecture side, with Apple having made significant changes to much more than just the CPU. Though coupled with what we now know about A9X, it does make me wonder whether Apple will keep around the L3 victim cache for A10 and beyond, or if it too will go the way of A9X’s L3 cache and be removed entirely in future generations.


AMD Announces FirePro W4300: Low Profile FirePro


With the slower release cadence for AMD’s FirePro professional cards, we tend to only hear from the FirePro group once or twice per year. And with the last update of the desktop FirePro lineup taking place back in August of 2014, the company has been due for some kind of update to their lineup to close out 2015. To that end, today the company is announcing the latest addition to the FirePro lineup, the FirePro W4300.

After last year’s more sizable refresh, today’s announcement is a lower-key update for AMD. AMD has only released one GPU in the last year, the high-end Fiji, which we’re not expecting to see released as a FirePro workstation card due to its limited 4GB of memory. Instead, today’s announcement is focused on updating the lower-end of the FirePro W lineup with a single new card to cover what AMD sees as a product/performance gap in their lineup.

AMD FirePro W Series Specification Comparison

                   AMD FirePro W5100  AMD FirePro W4300  AMD FirePro W4100  AMD FirePro W2100
Stream Processors  768                768                512                320
ROPs               16                 16                 16                 8
Core Clock         930MHz             930MHz             630MHz             630MHz
Memory Clock       6Gbps GDDR5        6Gbps GDDR5        5.5Gbps GDDR5      1.8Gbps DDR3
Memory Bus Width   128-bit            128-bit            128-bit            128-bit
VRAM               4GB                4GB                4GB                2GB
Double Precision   1/16               1/16               1/16               1/16
TDP                75W                50W                50W                26W
Form Factor        Full               Low Profile        Low Profile        Low Profile
GPU                Bonaire            Bonaire            Cape Verde         Oland
Architecture       GCN 1.1            GCN 1.1            GCN 1.0            GCN 1.0
Display Outputs    4                  4                  4                  2

The new card is the FirePro W4300. Based on AMD’s Bonaire GPU, the W4300 is essentially a performance bump for this segment of AMD’s lineup, offering improved performance and better features than the Cape Verde based W4100 did in the same power and size profile.

From a technical standpoint the W4300 is essentially a smaller and lower power version of AMD’s existing W5100. Like its older sibling, the W4300 ships with 768 stream processors enabled, clocked at 930MHz, and is paired with 4GB of GDDR5 running at 6Gbps. The difference between the two cards at the GPU level is that while the W5100 was a 75W card, the W4300 brings this down to just 50W. AMD tells us that the improvement in power consumption comes from better leveraging their PowerTune technology, allowing them to further control the card’s power consumption while also ultimately imposing a 50W limit on total power consumption (with an ASIC TDP of around 35W).

AMD hasn’t provided any benchmarks comparing the two – their focus is how it compares to NVIDIA’s lineup – but the resulting performance should be similar to the W5100. Compared to the W4100, meanwhile, performance should be significantly higher thanks to the 50% increase in stream processors and memory bandwidth, not to mention Bonaire’s more intelligent implementation of PowerTune, which plays a big part in these 50W cards. From an architectural standpoint, W4300 also improves on W4100 by adding support for AMD’s FreeSync, the company’s name for their implementation of DisplayPort Adaptive-Sync. To date AMD’s focus on FreeSync has been on consumer gaming; however, as vendors are starting to ready workstation-class displays with support for the technology, AMD is now ramping up their promotion of and support for FreeSync in workstations. The W4300 is now the lowest-end FirePro W card to support FreeSync.

At the other end of the spectrum, the W4300 is now AMD’s most powerful low profile/small form factor FirePro card. Small form factor PCs have become increasingly important in the workstation space – these days a mini-ITX sized board can support a desktop processor, a high-performance M.2 PCIe SSD, and a workstation video card – so AMD has further improved on the amount of performance they offer for small form factor workstations. Similarly, this is why the W4300 is a 50W card as opposed to a 75W card, as these smaller systems have limited heat dissipation capabilities.

Within AMD’s lineup the W4300 is being pitched in the same general category as the W5100 and W4100, which is to say the focus is on lower-end CAD/CAM setups. Overall AMD sees these types of cards being suitable for many graphics-centric tasks, while the higher-end cards like the W7100 and above would be for the large-scale rendering (e.g. movie production) and combined graphics/compute workloads where the GPU is being leveraged for executing and drawing simulations.

Finally, the W4300 will be launching with an MSRP of $379, though AMD tells us that the street price for the card is more likely to be $299. The primary competition for the W4300 will be NVIDIA’s Quadro K1200, which is based on the company’s GM107 GPU and, like the W4300, is the top low-profile card in NVIDIA’s product stack. As is typically the case with the FirePro lineup, AMD is aiming to beat NVIDIA on performance for the price; at $299 the W4300 should be anywhere between 10% and 57% faster than the K1200, though it goes without saying that the actual gains will depend on the nature of the workload and the models being used.

NVIDIA Releases 359.06 WHQL Game Ready Driver

NVIDIA is continuing their efforts to provide game ready drivers on launch day throughout the AAA/holiday season. Without missing a beat they are giving us a two for one today with game ready support for both Just Cause 3 and Rainbow Six Siege.

As this is a game ready driver, updates in the realm of stability and bug fixes are scarce. This time we are left with a bug fix for Call of Duty: Black Ops III crashing on NVIDIA hardware under Windows Vista through Windows 8.1. No bug fixes are listed for Windows 10.

Today's driver update also brings game ready support for Just Cause 3 and Rainbow Six Siege, both of which have just been released. Just Cause 3 is the latest in open world madness, and as a demanding title it appears to need all the help it can get – NVIDIA themselves recommend a GeForce GTX 970 for 60 fps at 1080p. Rainbow Six Siege, meanwhile, caters to fans of tactical shooters and is the other half of NVIDIA's Bullets or Blades promotion, which began in October and will be continuing for only two more weeks. While not 'game ready', this driver also provides optimizations for the Civilization Online closed beta.

Anyone interested can download the updated drivers through GeForce Experience or on the NVIDIA driver download page.

Western Digital Expands HGST Helium Drive Lineup with 10TB Ultrastar He10

HGST, a Western Digital subsidiary, has been shipping hard drives sealed with helium for a couple of years now. Their helium drives have so far come in two flavors - the Ultrastar He drives using platters with traditional perpendicular magnetic recording (PMR) technology and the Ultrastar Archive Ha drives using platters with shingled magnetic recording (SMR). There are two main patented innovations behind the helium drives, HelioSeal and 7Stac. The former refers to the placement of the platters in a hermetically sealed enclosure filled with helium instead of air. The latter refers to the packaging of seven platters in the same 1" high form factor of traditional 3.5" drives.

The Ultrastar He6 6TB drive was introduced in November 2013, and this was followed by the He8 8TB drive late last year. In June 2015, the Ultrastar Ha10 SMR drive with HelioSeal technology was introduced. Around the same time, HGST also made it known that more than 1M HelioSeal units had been deployed. 1.33 TB platters have become available in air drives now, and HGST is taking advantage of that in the 10TB Ultrastar He10. The launch of the Ultrastar He10 PMR drive today also brings the news that more than 4M HelioSeal units have been deployed in various datacenters - pointing to the rapid rise in adoption rate of this technology.

We have already seen in our reviews that the helium drives offer the best performance-to-power ratio and watts-per-TB metric amongst all the drives in their capacity class. HGST also claims a 2.5M hour MTBF - much higher than traditional enterprise PMR drives. The initial cost of the helium drives has been substantially higher than that of standard drives of the same capacity, but the TCO (total cost of ownership) metric is highly in favor of these drives - particularly for datacenter customers who need the drives to be active 24x7. HGST's press briefing included a slide that presented the potential TCO benefits that come about due to the increased capacity per rack, lower power consumption per rack and lower power consumption per TB of the new He10 drives.
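The watts-per-TB and rack-density math behind that TCO argument can be sketched simply. Only the He10's 10TB capacity comes from the announcement; the power draws and drives-per-rack figures below are illustrative placeholders, not HGST numbers.

```python
def watts_per_tb(operating_watts, capacity_tb):
    """Power efficiency metric: operating draw divided by capacity."""
    return operating_watts / capacity_tb

def racks_needed(total_tb, drives_per_rack, capacity_tb):
    """Racks required to house a given total capacity."""
    return total_tb / (drives_per_rack * capacity_tb)

he10_efficiency = watts_per_tb(5.0, 10)   # hypothetical 5W helium drive -> 0.5 W/TB
air8_efficiency = watts_per_tb(7.5, 8)    # hypothetical 7.5W air-filled 8TB -> ~0.94 W/TB

# Same hypothetical 9.6PB deployment at 960 drives per rack:
racks_he10 = racks_needed(9600, 960, 10)  # 1.0 rack
racks_air8 = racks_needed(9600, 960, 8)   # 1.25 racks
```

Even with made-up inputs, the structure of the argument is visible: higher capacity per drive compounds into fewer racks, and lower power per drive compounds into fewer watts per stored terabyte.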

HGST indicated that the ramp in volume should help the initial cost to approach that of the air drives in the near future. For datacenter customers, that would mean an acceleration in obtaining the TCO benefits.

Coming to the core specifications, the Ultrastar He10 will come in both SATA 6Gbps and SAS 12Gbps varieties. The drives have 4KB sectors, though SKUs with 512-byte emulation are also available. Various data security options such as instant secure erase, self-encryption, secure erase and TCG encryption with FIPS are available.

The standard Ultrastar He drive features such as rotational vibration safeguard (for better RV tolerance in multi-drive servers) and the rebuild assist mode (for faster RAID rebuild) are retained. The drives come with a 256MB DRAM buffer.

Hard drives are struggling to reach the 10TB capacity point with traditional PMR technology. While Seagate did announce a few 8TB PMR drives earlier this quarter, it really looks like vendors need to move to some other technology (shingled magnetic recording or heat-assisted magnetic recording (HAMR)) in order to keep the $/TB metric competitive against the upcoming high-capacity SSDs. As of now, helium is the only proven solution with minimal performance impact, and HGST appears to have a strong hold on this particular market segment.
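The $/TB metric referenced above is straightforward but worth making concrete. The prices below are hypothetical placeholders (not vendor figures) chosen only to show the shape of the comparison.

```python
def dollars_per_tb(price_usd, capacity_tb):
    """Cost-per-capacity metric used to compare storage tiers."""
    return price_usd / capacity_tb

helium_hdd = dollars_per_tb(800, 10)   # hypothetical 10TB helium drive -> $80/TB
big_ssd    = dollars_per_tb(1600, 4)   # hypothetical 4TB enterprise SSD -> $400/TB
gap = big_ssd / helium_hdd             # 5x advantage for the HDD in this sketch
```

Hard drives remain the datacenter bulk-storage default only while that gap holds, which is why capacity-boosting technologies like helium, SMR, and HAMR matter so much to HDD vendors.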

Samsung Announces Updated Versions of the Galaxy A3, A5, and A7

Today Samsung announced that updated versions of their Galaxy A3, A5, and A7 smartphones are about to be released. The original Galaxy A7 launched at the very beginning of this year, and both it and the other Galaxy A smartphones represented Samsung's attempt at bringing quality smartphone construction and design to lower price points than what you would pay for a flagship phone like the Galaxy S6. With it being nearly a year since the original announcement of the Galaxy A7, it makes sense that Samsung would want to refresh the lineup. Below you can find the specs for all three of Samsung's new smartphones.

  | Samsung Galaxy A3 | Samsung Galaxy A5 | Samsung Galaxy A7
SoC | 1.5GHz Quad Core | 1.6GHz Octa Core | 1.6GHz Octa Core
RAM | 1.5GB | 2GB | 3GB
NAND | 16GB NAND + microSD (all models)
Display | 4.7" 1280x720 AMOLED | 5.2" 1920x1080 AMOLED | 5.5" 1920x1080 AMOLED
Camera | 13MP rear-facing, F/1.9 (OIS on A5 and A7); 5MP front-facing, F/1.9 (all models)
Dimensions / Mass | 134.5 x 65.2 x 7.3mm, 132g | 144.8 x 71.0 x 7.3mm, 155g | 151.5 x 74.1 x 7.3mm, 172g
Battery | 2300 mAh | 2900 mAh | 3300 mAh
OS | Android 5.1 Lollipop (all models)
Network | Category 4 LTE | Category 6 LTE | Category 6 LTE
Other Connectivity | 2.4GHz 802.11b/g/n + BT 4.1, USB 2.0, NFC, GPS/GNSS | 2.4/5GHz 802.11a/b/g/n + BT 4.1, USB 2.0, NFC, GPS/GNSS, MST for Samsung Pay (A5 and A7)
Fingerprint Scanner | No | Yes | Yes

As you can see, all three devices share a degree of similarity. The Galaxy A5 and A7 in particular seem to be the most closely related, with many of the differences simply coming down to the difference in size between the two, and the drop to 2GB of RAM on the A5. The Galaxy A3 is clearly the most low-end of the three, with a 4.7" 1280x720 display, no 5GHz WiFi support, and additional reductions to RAM, the SoC, and the cellular connectivity. Because the Galaxy A3 omits the fingerprint scanner present on the A5 and A7, it's also unable to use Samsung Pay.

Of course, some details like the specific SoCs in use are unknown, although one can speculate based on the limited number of offerings on the market that fit the descriptions. Whether or not the Galaxy A3's display uses a PenTile subpixel arrangement will also be an important detail to consider once it's revealed.

As for the design of the phones, they take inspiration from the industrial design of the previous Galaxy A devices but adopt some of the changes made with Samsung's Galaxy S6 and Galaxy Note5, such as the use of glass on both the front and back of the devices. All of the phones are quite thin, and both the design and materials used mean that these definitely won't be targeting the sub-$100 part of the smartphone market. Samsung is also introducing a pink gold color which wasn't available with the last generation models.

While I don't think any of these phones are going to have extremely low prices, it's clear that they'll be competing at price brackets lower than the one occupied by Samsung's flagship phones. The launch prices for the Galaxy A3, A5, and A7 are currently unknown. According to Samsung, the phones will be launching in China later this month, with an expansion to global markets coming in 2016. We'll have to wait and see how much the phones go on sale for in the Chinese market before we're able to guess how much they'll cost elsewhere, and interested buyers will have to wait and see when their availability expands to their country.

Source: Samsung

Scott Wasson Announces His Retirement From The Tech Report

Congratulations are in order this evening for one of AnandTech’s most esteemed colleagues and peers, Scott Wasson. Scott founded The Tech Report back in 1999 and has led it ever since, in those years operating one of the best deep technical websites in the business. However, after 16 years at the helm of The Tech Report, this evening Scott has announced that he is retiring from the site at the start of next year and will be joining AMD.

Among his accomplishments, Scott was instrumental in bringing the matter of GPU frame pacing and overall frame rate consistency to the attention of the wider world of technology. At the same time he has been equally responsible for taking AMD to task on the subject – a position that isn’t always easy – ultimately driving AMD to improve their drivers and frame delivery mechanisms to the benefit of all users. So to find out that he is joining AMD, though undoubtedly a loss to technical journalism, is wonderful news for both parties, as AMD will now have a strong advocate for quality and user experience within their ranks who can push for even more.
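The core insight of that frame pacing work was to examine individual frame times rather than average FPS. A minimal sketch (with illustrative data, and a simple nearest-rank percentile rather than any site's exact methodology) shows why a 99th-percentile frame time exposes stutter that an average hides:

```python
import math

def percentile_frame_time(frame_times_ms, pct=99):
    """Nearest-rank percentile of per-frame render times, in milliseconds."""
    ordered = sorted(frame_times_ms)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# 95 smooth frames at 60fps plus 5 hitches down to 30fps:
frames = [16.7] * 95 + [33.4] * 5

avg_fps = 1000 / (sum(frames) / len(frames))  # ~57 fps: looks perfectly fine
p99 = percentile_frame_time(frames)           # 33.4ms: the stutter the average hides
```

Two cards with identical average FPS can thus feel very different in practice, which is exactly the distinction this style of analysis made visible.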

So with that in mind, I wish Scott congratulations and the best of luck in his new position.

As for The Tech Report, Scott has announced that his managing editor and right-hand man Jeff Kampman will be taking over the site. Jeff has done a great deal for the site since joining, so I am happy to hear that Jeff will be continuing the Report's tradition of quality journalism.
