Channel: AnandTech Pipeline

ASUS Unveils PCE-AC88 4x4 Wi-Fi Card with 2167 Mbps Bandwidth


ASUS has introduced its flagship dual-band AC3100 wireless PCIe x1 adapter with four antennas and a maximum supported throughput of 2167 Mbps (roughly 270 MB/s). The card is powered by Broadcom’s flagship 802.11ac Wave 2 system-on-chip for Wi-Fi adapters, and to perform at its best it has to be connected to a router featuring another high-end Broadcom SoC.

The ASUS dual-band AC3100 wireless (PCE-AC88) PCIe adapter is based on the Broadcom BCM4366 SoC introduced early last year. The BCM4366 supports 4x4 802.11ac MU-MIMO technology and can thus use four 802.11n (2.4 GHz/5 GHz) or four 802.11ac (5 GHz) streams. The PCE-AC88 is a half-height card with a PCIe x1 interface (the Broadcom chip supports PCIe 3.0, but the exact data rate of the card is not given) and a big red cooler that is particularly large compared to other Wi-Fi heatsinks. The device features four RP-SMA connectors for antennas as well as a special external magnetic base that lets the antennas be installed on top of a desktop PC or positioned somewhere else for better radio performance.

To actually hit the maximum bandwidth, the card has to be connected to routers powered by Broadcom’s latest Wi-Fi SoCs that support TurboQAM (256 quadrature amplitude modulation) and NitroQAM (1024 quadrature amplitude modulation) technologies. For pairing, ASUS currently offers its AC3100 (RT-AC88U) and AC5300 (RT-AC5300) routers, which support both TurboQAM and NitroQAM features. Netgear offers its Nighthawk X8 R8500 AC5300 router, which also supports maximum bandwidth up to 2167 Mbps.
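
For reference, the 2167 Mbps figure falls out of the 802.11ac math: an 80 MHz channel with a short guard interval delivers 433.3 Mbps per 256-QAM spatial stream, 1024-QAM packs 10 bits per symbol instead of 8, and the card runs four streams. A quick back-of-the-envelope sketch:

```python
import math

def bits_per_symbol(qam_order: int) -> float:
    """Each QAM symbol encodes log2(order) bits."""
    return math.log2(qam_order)

# 802.11ac, 80 MHz channel, short guard interval, 256-QAM:
# 433.3 Mbps (exactly 1300/3) per spatial stream.
base_stream_rate = 1300 / 3  # Mbps per stream with TurboQAM (256-QAM)

# NitroQAM (1024-QAM) carries 10 bits per symbol instead of 8,
# scaling the per-stream rate by 10/8.
nitro_stream_rate = base_stream_rate * bits_per_symbol(1024) / bits_per_symbol(256)

streams = 4  # the PCE-AC88 is a 4x4 design
total_mbps = streams * nitro_stream_rate
print(f"{nitro_stream_rate:.1f} Mbps/stream, {total_mbps:.0f} Mbps total")
# ~541.7 Mbps per stream, ~2167 Mbps aggregate (about 270 MB/s)
```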

The ASUS PCE-AC88 card was approved by the FCC in late November 2015, and is currently listed on ASUS’ website. The product is ready and may go on sale at any time, but its price is still unknown. Keeping in mind that the ASUS PCE-AC68 costs around $95, we expect the newer and faster card to carry a higher price tag.

At present, the ASUS dual-band AC3100 wireless (PCE-AC88) PCIe adapter seems to be the only add-in Wi-Fi card for desktops that supports speeds up to 2167 Mbps using a 4x4 SoC. The part should come in handy for media center PCs or NAS environments that lack wired Ethernet but can benefit from high network speeds.

Source: ASUS (via TechPowerUp).


Two Months With: The Huawei TalkBand B2


As part of the test with the Huawei Mate S, I also decided to use Huawei’s TalkBand B2 wearable, which I received at the end of a press event. I have been using it for a good couple of months or more, from just before MWC. Along with the smartwatch, there is the Huawei Fit app, which is required to digest the data it tracks.

Having never used a smartwatch-like device before, and not being enthusiastic about wearing one, I bit the bullet to at least experience it. I’ve come out the other end not completely changed, and the TalkBand B2 has a list of issues that need solving, but I have softened my opinion of smartwatches as a result.

So, the TalkBand B2: it shows the time/date, steps, calories, time slept and time biked, and you can start and then stop a ‘run mode’ with it.


The display is a basic black and white affair, but sharp enough for almost all the detail you need to see on it. It’s called a TalkBand, and this means that via Bluetooth you can receive calls and minor notifications on it. The notification pop-up is not as advanced as you might think - merely showing what app or name is causing the notification, rather than any details. This is useful if the user wants to keep an eye on a certain app (I tend to keep it linked to WhatsApp and Twitter which I use several times a day, but not Skype because that is always going off).

Taking calls on the TalkBand B2 is a little odd, and I have not had much success with it. The band will vibrate as someone calls (useful when the phone needs to be silent), but then I will go find my phone to answer it, forgetting that I can answer and talk on the watch. So I answer on the phone, but the setup is such that it will only accept voice from the band and not the smartphone, meaning I sound muffled because I do not realize that I’m supposed to talk into the watch. Even then, because audio going into and coming out of the smartwatch is usually loud, you can’t really have a private conversation. I don’t think I’m sold on taking calls on a watch just yet.

The screen has a feature that turns itself on when it detects movement mimicking looking at a watch on your wrist. Do the motion repeatedly and it will cycle through the different screens on the display. However, it has a few flaws. Firstly, the wrist-flick motion to see the time only seems to work about 40% of the time. Then, in broad daylight, the display is nowhere near bright enough to see anything. But at night, when you want the device to monitor sleep patterns, any time you turn over it turns on the display, which is far too bright, blinding either yourself or a significant other at 3am. It’s not great.


On the monitoring aspects of the device, I feel that the step counter is not that great. It recorded over 66,000 steps and 45 km during Mobile World Congress, but it also detected 500 steps when I was packing my suitcase the day we left. This makes me think that a step counter is perhaps better suited to the ankle. An interesting thing is the cycling detection: while at the gym on the treadmill, I will sometimes rest my hands in front of me on the heart-rate monitors while jogging/walking in a down period, but then my steps do not count, and because of the arm position the TalkBand thinks I’m cycling. So in a 45-minute run, it will detect running for 25-30 minutes and cycling for 10-15, which isn’t true, and messes up the calorie calculations. A better way to do this would perhaps be for the user to indicate a cycling period, similar to starting a run.


The sleep monitor works when the watch is worn in bed. The thing is, if the feature that shows the display when you rotate your wrist is enabled, the screen is far too bright at night. When not being blinded by the light, the TalkBand has a feature where, during a 30-minute ‘alarm’ window set by the user (say 6:00-6:30am), if it detects the user in a light sleep it will vibrate to wake the person without the need for an alarm. This means that the user won’t wake from a heavy sleep, will feel fresher, and won’t wake their significant other. The downside is that if you are never in a light sleep during that window, the smartwatch won’t go off. That’s assuming the TalkBand has any battery left (see later). But a final word on the sleep monitor: when you take the watch off (either for comfort or to charge), it sometimes detects the lack of movement as a sleep pattern. So despite wearing it all day, if I’ve taken the watch off at some point to do the washing up, it might tell me that I slept for a couple of hours in the afternoon (which I definitely did not do).

The TalkBand B2 is a module that detaches from the wristband, allowing for configurable straps. In this instance, I used the one supplied - a leather band in a silver backing. The device is easy enough to remove, and small enough to lose if you aren't careful. On the rear are the charging port and what looks like the speaker, which uses the internal open space in the wristband as an echo chamber to amplify the sound.

On the battery, this is going to be a pain point for anyone using the TalkBand B2. In order to charge it, the unit has to be removed from the clasp and the micro-USB port on the bottom used, meaning that the device cannot be used while charging (which takes around 30 minutes for a full charge). For the most part, it lasts two days on a full charge. It uses more power when you are exercising, up to 10% per hour, but the two-day cycle means that I was always destined to either go to bed, or to the gym, on 2% battery. Even a small 15-minute window to charge it can give it enough juice for most of the day. During some of the time that I tested the device, I remembered to charge it while I was in the shower. But to put this into real-world context, in March I forgot to charge it about 20% of the time, meaning it lacked sleep data collection, and for several days during April I forgot to charge it and wear it overnight.

On the data collection, the screenshots here pretty much sum it all up, telling the user how much they slept and how many steps they took. It is up to the user to decide what to do with the data, and I’m not sure how much might be being uploaded to a personal account. If you are trying to maintain a regular exercise and sleep schedule, it gives rough metrics that can be interpreted on whatever side your confirmation bias ends up on, but at the end of the day the only thing that makes this data collection useful is if it provides recommendations, and the TalkBand and Huawei Wear app currently do not do that. Actually, I’ll adjust that statement: if it detects you have been sitting down for more than an hour, it will vibrate to tell you to stand up. That is somewhat annoying when watching a film or on a long-haul flight, and the time gap is not adjustable.

At the end of the day, I am glad I’ve tried the TalkBand B2. It’s not the best device for me, because the brightness (especially at night-time) and the battery life really put a dampener on the user experience, but it comes in a lot cheaper than the Android, watchOS or Tizen-based devices if you absolutely need a screen. If another smartwatch ever floats my way, I’ll see how it compares to this one.

Micron Confirms Mass Production of GDDR5X Memory


Micron Technology this week confirmed that it had begun mass production of GDDR5X memory. As revealed last week, the first graphics card to use the new type of graphics DRAM will be NVIDIA’s upcoming GeForce GTX 1080 graphics adapter powered by the company’s new high-performance GPU based on its Pascal architecture.

Micron’s first production GDDR5X chips (or, as NVIDIA calls them, G5X) will operate at 10 Gbps and will enable memory bandwidth of up to 320 GB/s for the GeForce GTX 1080, which is only a little less than the memory bandwidth of NVIDIA’s current-gen flagship GeForce GTX Titan X/980 Ti, which are equipped with a much wider memory bus. NVIDIA’s GeForce GTX 1080 video cards are expected to hit the market on May 27, 2016, and presumably Micron has been helping NVIDIA stockpile memory chips for the launch for some time now.
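
The 320 GB/s figure is simple arithmetic: per-pin data rate multiplied by bus width, divided by eight bits per byte. A quick sanity check, taking the GTX 1080's 256-bit bus and the 980 Ti's 384-bit bus:

```python
def memory_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Aggregate bandwidth in GB/s: per-pin data rate times bus width, over 8 bits/byte."""
    return data_rate_gbps * bus_width_bits / 8

gtx_1080 = memory_bandwidth_gb_s(10, 256)   # 10 Gbps GDDR5X on a 256-bit bus
gtx_980ti = memory_bandwidth_gb_s(7, 384)   # 7 Gbps GDDR5 on a 384-bit bus
print(gtx_1080, gtx_980ti)  # 320.0 vs 336.0 GB/s
```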

NVIDIA GPU Specification Comparison
  GTX 1080 GTX 1070 GTX 980 Ti GTX 980 GTX 780
TFLOPs (FMA) 9 TFLOPs 6.5 TFLOPs 5.6 TFLOPs 5 TFLOPs 4.1 TFLOPs
Memory Clock 10Gbps GDDR5X GDDR5 7Gbps GDDR5 7Gbps GDDR5 6Gbps GDDR5
Memory Bus Width 256-bit ? 384-bit 256-bit 384-bit
VRAM 8 GB 8 GB 6 GB 4 GB 3 GB
VRAM Bandwidth 320 GB/s ? 336 GB/s 224 GB/s 288 GB/s
Est. VRAM Power Consumption ~20 W ? ~31.5 W ~20 W ?
TDP 180 W ? 250 W 165 W 250 W
GPU "GP104" "GP104" GM200 GM204 GK110
Manufacturing Process TSMC 16nm TSMC 16nm TSMC 28nm TSMC 28nm TSMC 28nm
Launch Date 05/27/2016 06/10/2016 05/31/2015 09/18/2014 05/23/2013

Earlier this year Micron began to sample GDDR5X chips rated to operate at 10 Gb/s, 11 Gb/s and 12 Gb/s in quad data rate (QDR) mode with 16n prefetch. However, it looks like NVIDIA decided to be conservative and only run the chips at the minimum frequency.

As reported, Micron’s first GDDR5X memory ICs (integrated circuits) feature 8 Gb (1 GB) capacity, sport a 32-bit interface, and use 1.35 V supply and I/O voltage as well as a 1.8 V pump voltage (Vpp). The chips come in 190-ball BGA packages measuring 14×10 mm, so they will take up a little less space on graphics cards than GDDR5 ICs.
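
Those per-chip specs also explain the GTX 1080's memory configuration: with 32-bit chips on a 256-bit bus, eight chips are needed, which at 1 GB each yields the card's 8 GB. A rough check:

```python
BUS_WIDTH_BITS = 256   # GTX 1080 memory bus
CHIP_IO_BITS = 32      # per-chip interface width
CHIP_CAPACITY_GB = 1   # 8 Gb = 1 GB per chip
DATA_RATE_GBPS = 10    # per-pin data rate

chips_needed = BUS_WIDTH_BITS // CHIP_IO_BITS         # 8 chips fill the bus
total_capacity_gb = chips_needed * CHIP_CAPACITY_GB   # 8 GB, as on the GTX 1080
per_chip_bw_gb_s = CHIP_IO_BITS * DATA_RATE_GBPS / 8  # 40.0 GB/s per chip
total_bw_gb_s = chips_needed * per_chip_bw_gb_s       # 320.0 GB/s aggregate
```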

The announcement by Micron indicates that the company will be the only supplier of GDDR5X memory for NVIDIA’s GeForce GTX 1080 graphics adapters, at least initially. Another important thing is that GDDR5X is real, it is mass produced now and it can indeed replace GDDR5 as a cost-efficient solution for gaming graphics cards. How affordable is GDDR5X? It should not be too expensive - particularly as it's designed as an alternative to more complex technologies such as HBM - but this early in the game it's definitely a premium product over tried and true (and widely available) GDDR5.

NVIDIA Announces Q1 FY 2017 Results: Strong Growth In All Segments


Much of the PC industry has been reporting less than strong results, with declines in PC unit sales leading to lower earnings for many of the big players such as Intel, AMD, and even Apple. NVIDIA has bucked this trend though with a strong quarter to start their fiscal year 2017. Revenue for the quarter was $1.305 billion, up 13% from Q1 FY 2016, with growth in NVIDIA’s GPU business, datacenter, automotive, and professional visualization all showing strong sales. Gross margin for the quarter was 57.5%, up 0.8% from a year ago. Operating income was up 39% to $245 million, and net income came in up 46% to $196 million. This resulted in earnings per share of $0.33.

NVIDIA Q1 2017 Financial Results (GAAP)
  Q1'2017 Q4'2016 Q1'2016 Q/Q Y/Y
Revenue (in millions USD) $1305 $1401 $1151 -7% +13%
Gross Margin 57.5% 56.5% 56.7% +1.0% +0.8%
Operating Income (in millions USD) $245 $252 $176 -3% +39%
Net Income $196 $207 $134 -5% +46%
EPS $0.33 $0.35 $0.24 -6% +38%
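
The Q/Q and Y/Y columns above are straightforward percentage changes against the prior quarter and the year-ago quarter; for example, for revenue:

```python
def pct_change(current: float, previous: float) -> float:
    """Percentage change from a prior period, as reported in the Q/Q and Y/Y columns."""
    return (current - previous) / previous * 100

revenue = {"Q1'2017": 1305, "Q4'2016": 1401, "Q1'2016": 1151}  # millions USD

q_over_q = pct_change(revenue["Q1'2017"], revenue["Q4'2016"])  # vs prior quarter
y_over_y = pct_change(revenue["Q1'2017"], revenue["Q1'2016"])  # vs year-ago quarter
print(f"Q/Q {q_over_q:.0f}%, Y/Y {y_over_y:+.0f}%")  # Q/Q -7%, Y/Y +13%
```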

NVIDIA also reports Non-GAAP results, which exclude stock based compensation, warranty charges, restructuring fees, and other fees. The Non-GAAP results had revenue of $1.305 billion, which is up 13%. Gross margin was 58.6%, which is up 1.7%. Operating income was up 39% to $322 million, and net income was up 41% to $263 million. Non-GAAP earnings per share were $0.46.

NVIDIA Q1 2017 Financial Results (Non-GAAP)
  Q1'2017 Q4'2016 Q1'2016 Q/Q Y/Y
Revenue (in millions USD) $1305 $1401 $1151 -7% +13%
Gross Margin 58.6% 57.2% 56.9% +1.4% +1.7%
Operating Income (in millions USD) $322 $356 $231 -10% +39%
Net Income $263 $297 $187 -11% +41%
EPS $0.46 $0.52 $0.33 -12% +39%

GPUs accounted for most of the revenue, with GPU revenue coming in at $1.08 billion for the quarter. This is a gain of 15% year-over-year, driven by strong growth in the GeForce lineup. NVIDIA has recently announced their latest Pascal architecture with the GTX 1080 and GTX 1070, with cards coming soon, so I would expect strong sales to continue. Tegra processor revenue for the quarter was $160 million, up 10% compared to Q1 2016, thanks to continued growth in Tegra automotive.

Gaming platform revenue was up 17% to $687 million, consistent with the strong growth in PC gaming compared to the rest of the PC market. Quadro revenue, filed under Professional Visualization, was up 4% to $189 million.

Datacenter revenue, which includes Tesla and GRID, was a record $143 million for the quarter, up 63% from a year ago, driven by demand for GPU acceleration for deep learning.

Automotive revenue was $113 million of the $160 million for Tegra, up 47% year-over-year. It appears that NVIDIA made the right call with Tegra by mostly abandoning the mobile market where competition is pretty fierce, and they’ve really gained a good diversified foothold in the automotive sector where higher TDPs can let them drive larger GPUs.

Finally, the licensing deal with Intel, which is going to end soon, accounted for $66 million in revenue.

NVIDIA Quarterly Revenue Comparison (GAAP)
In millions Q1'2017 Q4'2016 Q1'2016 Q/Q Y/Y
GPU $1079 $1178 $940 -8% +15%
Tegra Processor $160 $157 $145 +2% +10%
Other $66 $66 $66 flat flat

NVIDIA has had a pretty successful run since the introduction of their Maxwell products, and these results are all prior to Pascal even coming on the market in a consumer card. NVIDIA’s outlook for Q2 is revenue of $1.35 billion plus or minus 2%, and GAAP gross margin of 57.7%, plus or minus 0.5%.

Source: NVIDIA Investor Relations

Philips Begins to Sell 43” 4K IPS BDM4350UC Display for $799


For many workloads that require a lot of on-screen space, big displays are hugely beneficial — the bigger the better. TPV Technology, the company that produces monitors under the Philips brand, recently decided to go very big and introduced a new 43" display with a 3840 x 2160 resolution. While the monitor is intended mostly for prosumer workloads, its price is not too high.

Extremely large displays are generally overkill for everyday workloads, but there are industries where the workloads require more on-screen space than a single monitor can provide. For example, many engineers and financial brokers use multi-display setups to maximize their productivity and view far more info than they could on a single display. While it is impractical to replace the four, six or eight displays in control rooms or in traders’ offices with fewer physical screens, engineers and designers could use one big monitor instead of two smaller ones. Philips is targeting this group of users with its Brilliance UltraClear 43” display, which is more like a television than a monitor.

The Philips UltraClear 43” (BDM4350UC) display uses an IPS panel with a 3840 × 2160 resolution and W-LED backlighting. It has a 300 nit brightness, a 1200:1 contrast ratio, and a 60 Hz refresh rate. According to Philips, the brightness uniformity is 96~105%, which is quite good for a display of this size. Philips also includes a uniformity feature called Smart Uniformity to correct inconsistencies in the backlighting, but it's not clear how well it works in the real world or what limitations it imposes on the display modes that can be used.

Philips BDM4350UC
Panel 43" IPS
Resolution 3840 × 2160
Refresh Rate 60 Hz
Response Time 5 ms gray-to-gray
Brightness 300 cd/m²
Contrast 1200:1
Viewing Angles 178°/178° horizontal/vertical
Color Saturation 1.07 billion colours, 100% sRGB
Pixel Pitch 0.2451 mm
Pixel Density 102 ppi
Brightness Uniformity 96 - 105%
Picture-in-Picture Up to four 1080p PiP images supported
Inputs 1 × D-Sub
2 × HDMI 2.0
2 × MHL
1 × DP 1.2
USB Hub 4-port USB 3.0 hub,
one port supports fast charging
Audio 7W × 2
Launch Price $799.99
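
The table's 102 ppi figure follows directly from the resolution and the diagonal; note that the quoted 0.2451 mm pixel pitch implies an active area marginally under the nominal 43 inches, so treat this as an approximation:

```python
import math

h_px, v_px = 3840, 2160    # panel resolution
diagonal_in = 43.0         # nominal diagonal, in inches

diagonal_px = math.hypot(h_px, v_px)  # pixel count along the diagonal
ppi = diagonal_px / diagonal_in       # ~102 pixels per inch
pitch_mm = 25.4 / ppi                 # ~0.248 mm, assuming exactly 43 inches
print(f"{ppi:.0f} ppi, {pitch_mm:.4f} mm pitch")
```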

The UltraClear 43” comes with two HDMI 2.0 ports with MHL support, two DisplayPort 1.2 ports, and a D-Sub connector. The monitor can be connected to up to four video sources and display images from them in picture-by-picture mode. The display is also equipped with a quad-port USB 3.0 hub as well as two 7W speakers.

Just like TVs, the Philips UltraClear 43” comes with a stand that does not allow adjustment of tilt or height, which is a drawback. Fortunately, the monitor has a VESA mount, so it should be possible to get an appropriate arm or aftermarket stand that does support this, although it will need to be able to support the display's weight and size.

It remains to be seen whether there's a sizable market for the UltraClear 43”, but for tasks like editing spreadsheets and CAD work it could be quite useful. Right now the Philips UltraClear 43” is available on Amazon for $799.99.

QNAP Expands Thunderbolt NAS / DAS Lineup with TVS-x82T Series


QNAP recently held a product launch event in San Jose. The main announcement was the follow-up to their TVS-871T released late last year. The first-generation Thunderbolt NAS came with eight 3.5" drive slots, 16GB of RAM, two Thunderbolt 2 ports and two 10GBASE-T ports. Despite being priced at $2800 (Haswell Core i5 configuration) and $3200 (Haswell Core i7 configuration), QNAP noted that it was popular enough to expand the lineup with more models. The popularity stems from the fact that creative professionals dealing with multimedia content often rely on Thunderbolt DAS units for fast access to data. However, those DAS units are typically not amenable to shared workflows. This is where a NAS / DAS combination like the TVS-871T has been able to make an impact. Thunderbolt networking support means that multiple PCs can access the NAS / DAS at high speed, while the network links can be used by other non-performance-sensitive clients to access the data.

The new TVS-x82T series comes in three different varieties, the TVS-682T, TVS-882T and TVS-1282T. They have 4, 6 and 8 3.5" drive bays respectively. The 4- and 6-bay units have two additional 2.5" drive slots, while the 8-bay unit has four additional 2.5" drive slots. The 6- and 8-bay units have LCD screens in the front panel. As the slide below shows, the models come with three HDMI ports (2x HDMI 1.4 + 1x HDMI 2.0), and have two M.2 SSD slots on the motherboard. The TVS-1282T also has a spare PCIe 3.0 x8 slot that can be used to add a 10 GbE / 40 GbE card or NVMe SSD or a USB 3.1 Gen 2 card or even a discrete GPU (that can be used as a passthrough device for a VM running on the TVS-1282T).

QNAP was also heavily promoting their Qtier automatic data tiering technology which helps in improving performance while also enabling archive functionality within the same NAS by using another storage pool. Thunderbolt expansion units (5-bay and 8-bay) are also available.

More information on the TVS-x82T models can be found on QNAP's product page here.

It is interesting to note that the Thunderbolt ports are v2 and enabled by an add-on card. One of our chief complaints is the usage of Thunderbolt 2 instead of Thunderbolt 3 (which can also act as a USB 3.1 Gen 2 host / client as needed). QNAP mentioned that Thunderbolt 3 add-on cards are definitely in the pipeline, but most current Thunderbolt users are in the Apple ecosystem. Thunderbolt 2 is the best technology currently available in those setups, and QNAP is pitching the TVS-x82T to that market currently.

QNAP also had live demonstrations of the TDS-16489U (for hyperconvergence with a 2P Xeon platform that enables the application server and storage server to be one unit) and the ES1640dc dual-controller ZFS NAS.

We have already looked at both these products in detail as part of our 2016 CES coverage.

The new information presented at the San Jose event included availability details (mid-June for all products) and pricing for the enterprise NAS units (starting at $7700 for the TDS-16489U and $9700 for the ES1640dc).

One of the striking slides that we saw as part of the ES1640dc presentation was the competitor landscape / target market for the unit.

It is great to see QNAP move up to the next level and compete with the big guys such as QSAN, EMC and NetApp in the enterprise space. As the above slide shows, the hardware configuration is very good compared to the competition, and I am pretty sure the pricing is more than competitive. However, in the enterprise space, the main challenge is support requirements. Enterprise customers require service contracts that keep any hardware-related downtime to the absolute minimum, and the software needs to be stable and production-ready. Quality assurance also needs to be more stringent compared to the typical requirements in the SMB market that QNAP has been serving so far. I will be closely tracking how QNAP's aspirations in the enterprise space pan out in the long run.

In other news, QNAP also launched the TS-831X, an ARM-based NAS with the Annapurna Labs AL314 SoC. It comes with dual 10G SFP+ ports. If that sounds familiar, the configuration is similar to the Synology DS2015xs that we reviewed last year. While the DS2015xs uses the 1.7 GHz AL-514, the TS-831X uses the 1.4 GHz AL-314 SoC. Both of them use a quad-core Cortex-A15 as the host CPU. QNAP's MSRP for the TS-831X will be $799, a lot lower than the current street price of $1400 for the Synology DS2015xs.

AMD Quietly Unveils Radeon M400 Series: Starting With Rebadges


As our long-time readers are keenly aware, the product cycles followed by PC OEMs and ODMs for their laptops and desktops are rarely perfectly in sync with the development cycles of the underlying processors. With a desire to refresh their PCs on a yearly basis – whether or not new processors are available – OEMs lean on their suppliers to come up with newer parts to fill out these devices. Consequently, it has become a semi-annual ritual for the GPU vendors to rebadge parts of their lineups to meet the needs of OEMs, shuffling together old and new parts as part of a continuous cycle of upgrades and replacements.

Kicking off this latest cycle, this week AMD quietly updated the laptop GPU section of their website to add the Radeon M400 series, the latest generation of AMD’s notebook (and AIO desktop) GPUs. And to cut right to the chase, while this year is going to be an important year for AMD with the launch of their Polaris architecture and its accompanying GPUs, Polaris isn’t upon us quite yet. Instead what AMD has published are the customary 28nm rebadges that will be fleshing out the M400 line, presumably positioned around where Polaris will land a bit later this year.

As these product updates aren’t coming alongside any other formal product announcement, there’s little information on AMD’s branding/positioning direction at this time. So the information we have is limited to some basic hardware specifications. Given the timing of this release – just two weeks before Computex – I expect we’ll see more from AMD as part of their annual Computex presentations. In the meantime if you see Radeon M400 parts start to show up in laptop specification sheets ahead of Computex, here is what’s going on under the hood.

AMD R9 M400 Series GPU Specifications
  R9 M485X R9 M470X R9 M470
Was Variant of R9 M395X Variant of R9 M385X Variant of R9 M380
Stream Processors 2048 896 768
Texture Units 128 56 48
ROPs 32 16 16
Memory Clock <= 5Gbps GDDR5 <= 6Gbps GDDR5 <= 6Gbps GDDR5
Memory Bus Width 256-bit 128-bit 128-bit
VRAM <= 8GB <=4GB <=4GB
GPU Tonga Bonaire Bonaire
Manufacturing Process TSMC 28nm TSMC 28nm TSMC 28nm
Architecture GCN 1.2 GCN 1.1 GCN 1.1

The sizable high-performance Radeon R9 M300 family has been trimmed significantly for the R9 M400 family, which at least until the launch of Polaris is composed of three products: M485X, M470X, and M470. The M485X is identical in specification to the R9 M395X, making it a rebadged GCN 1.2 Tonga part with all 2048 SPs enabled. Meanwhile the M470 series follows the M380 series, making these parts rebadges of the GCN 1.1 Bonaire GPU. M470X is a fully enabled part with all 896 SPs, while M470 cuts that down to 768 SPs.

Meanwhile, not found here are any parts based on AMD’s venerable GCN 1.0 Pitcairn GPU. After a run of over 4 years, it looks like Pitcairn has set off on a well-deserved retirement.

AMD R7 M400 Series GPU Specifications
  R7 M465X R7 M465 R7 M460 R7 M445 R7 M440
Was Variant of R9 M370 Variant of R7 M370 Variant of R7 M360 Variant of R7 M340 Variant of R7 M340
Stream Processors 512 384 384 320 320
Texture Units 32 24 24 20 20
ROPs 16 8 8? 4? 4?
Memory Clock <= 4.5Gbps GDDR5 <= 4.6Gbps GDDR5 <= 2Gbps DDR3 <= 4Gbps GDDR5 <= 2Gbps DDR3
Memory Bus Width 128-bit 64/128-bit 64-bit 64-bit 64-bit
VRAM <= 4GB <=4GB <=4GB <=4GB <=4GB
GPU Cape Verde Topaz Topaz Topaz Topaz
Manufacturing Process TSMC 28nm TSMC 28nm TSMC 28nm TSMC 28nm TSMC 28nm
Architecture GCN 1.0 GCN 1.2 GCN 1.2 GCN 1.2 GCN 1.2

As for the R7 level of GPUs, AMD is introducing 5 different models. The part that immediately sticks out the most is the 512 SP R7 M465X, which has no immediate predecessor in AMD’s catalog. Based on the limited information here, this looks to be a cut-down version of R9 M370 series, which utilized the GCN 1.0 Cape Verde GPU. This part doesn’t feature all of Cape Verde’s SPs enabled, but does retain the higher bandwidth offered by GDDR5.

Below the M465X things get murkier with the remaining M460 series and M440 series parts. AMD has a number of overlapping parts here, and the underlying configurations are not well documented to the public. The M465 could be multiple parts, but most likely we’re looking at Topaz in two different configurations, with one featuring the full 128-bit memory bus, while another features a neutered 64-bit memory bus. In any case these seem to be rebadges/retools of the R7 M370, meaning we’re looking at 384 SPs with differing amounts of memory bandwidth. The M460 on the other hand looks to be a straight-up rebadge of the M360, another Topaz-esque part with a 64-bit memory bus and DDR3 memory.

Rounding out the R7 collection are the M440 and M445, each of which features 320 SPs. These again are likely Topaz parts, with the published difference being the memory technology: M445 uses GDDR5 on a 64-bit memory bus, while M440 uses DDR3 on the same sized bus. These are essentially rebadges/retools of the R7 M340.

AMD R5 M400 Series GPU Specifications
  R5 M435 R5 M430 R5 M420
Was New Variant of R5 M330 Variant of R5 M320
Stream Processors 320 320 320
Texture Units 20 20 20
ROPs 4? 4? 4?
Memory Clock <= 4Gbps GDDR5 <= 2Gbps DDR3 <= 2Gbps DDR3
Memory Bus Width 64-bit 64-bit 64-bit
VRAM <= 4GB <=4GB <=4GB
GPU Topaz? Topaz? Topaz?
Manufacturing Process TSMC 28nm TSMC 28nm TSMC 28nm
Architecture GCN 1.2 GCN 1.2 GCN 1.2

Finally, bringing up the rear is the R5 M400 series. These 320 SP parts are likely candidates to be paired with APUs for dual graphics operation. At the top of the list is the R5 M435, which, highly unusually for an R5 part, features GDDR5 memory on a 64-bit memory bus. AMD has never offered a 320 SP part with GDDR5 before, so while it’s vaguely similar to the R7 340, it has no real predecessor. Meanwhile the R5 M430 and M420 are almost certainly direct rebadges of the M330 and M320 respectively.

The bigger question of course is where products based on AMD’s forthcoming Polaris GPUs will fit into these lineups. Having undertaken their annual rebadge, at this point it’s safe to assume that the Polaris parts will be sold under the M400 banner as well. And given the performance AMD has been touting, I’m sure we’ll be looking at a fuller R9 M400 lineup once that happens. In the meantime we’ll have to see what AMD has planned for the Radeon Mobility family at Computex.

AMD Changes SSD Strategy: High-End M.2/NVMe SSDs Incoming, Low-Cost R3 Drives Are Here


Over the past week or so, we noticed other news outlets reporting on the AMD R3 series of SSDs as if there had been a recent press release circulating under the radar. This wasn't the case: despite the fact that the R3 SSDs have been out for a number of weeks, one news outlet decided to run a story and the rest followed the echo without investigating further. We put it directly to AMD, asking how the R3 SSDs had been released so quietly, and why the R7 drives had also been removed from listings. We had an interesting response, which we would like to summarize and discuss here.

R7 Out, R3 In

AMD’s Radeon R7 SSDs were developed by OCZ and featured 64 Gb NAND flash chips from Toshiba made using the company’s second-generation 19 nm (A19) fabrication process. In many ways, the drives resembled OCZ’s ARC 100 and Vector 150 drives, but since Toshiba is phasing out its NAND flash made using 19 nm manufacturing technology, the Radeon R7 SSDs are also discontinued and right now online stores are selling the remaining inventory.

For the newer Radeon R3 family of SSDs, AMD chose a different partner. The drives are manufactured by a contract maker and then distributed by Galt, the company that distributes AMD’s Radeon-branded memory modules. Working closely with companies like SK Hynix, or OCZ, allows AMD to tailor certain aspects of SSD performance and offer a different (well, to a certain degree) differentiation point that is not available from anyone else. Moreover, since SK Hynix is a relatively small player on the SSD market, it is interested in increasing its share and may be flexible about pricing. AMD admits that the new R3 drives are indeed slower than the older R7 ones released earlier due to the movement from MLC to TLC, but it notes that they are considerably cheaper too, which was one of the primary reasons why the company decided to sell them. Furthermore, since Galt now handles logistics for AMD's DRAM and SSD products, it can do everything a little more efficiently in terms of costs.

The newer AMD Radeon R3 solid-state drives come in a 2.5"/7 mm form-factor and are based on the quad-channel Silicon Motion SM2256KX controller as well as TLC NAND memory made by SK Hynix. The new SSDs are available in 120, 240, 480 and 960 GB configurations, which are rated for up to 520 MB/s maximum sequential read speed and up to 470 MB/s maximum sequential write speed. Like other TLC NAND-based drives, the Radeon R3 SSDs use part of their flash memory in pseudo-SLC mode for caching and performance acceleration.
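To illustrate the pseudo-SLC caching behavior described above, here is a toy model of how write time grows once the cache fills. All figures in it (cache size, native TLC speed) are illustrative assumptions for the sketch, not published Radeon R3 specifications:

```python
# Toy model of pseudo-SLC write caching on a TLC SSD.
# All figures below are illustrative assumptions, not Radeon R3 specifications.

SLC_CACHE_GB = 6          # portion of the TLC flash operated in SLC mode
SLC_WRITE_MBPS = 470      # write speed while the cache has room
TLC_WRITE_MBPS = 150      # native TLC write speed once the cache is full

def write_time_seconds(transfer_gb: float) -> float:
    """Time to write `transfer_gb` of data, assuming the cache starts
    empty and is not flushed mid-transfer."""
    cached = min(transfer_gb, SLC_CACHE_GB)
    direct = transfer_gb - cached
    return (cached * 1024) / SLC_WRITE_MBPS + (direct * 1024) / TLC_WRITE_MBPS

print(round(write_time_seconds(4), 1))   # a 4 GB burst fits in the cache
print(round(write_time_seconds(20), 1))  # a 20 GB transfer spills into TLC
```

The point of the model: short bursts complete at the rated speed, while sustained transfers larger than the cache fall back to the much slower native TLC write rate.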

AMD Radeon R3 and R7 Series SSD Specifications

All R3 drives use the Silicon Motion SM2256KX controller with SK Hynix TLC NAND; all R7 drives use the OCZ Barefoot 3 M00 controller with Toshiba 64 Gb A19nm MLC NAND.

|                               | R3 120 GB (R3L120G) | R3 240 GB (R3SL240G) | R3 480 GB (R3SL480G) | R3 960 GB (R3SL960G) | R7 120 GB | R7 240 GB | R7 480 GB |
|-------------------------------|---------------------|----------------------|----------------------|----------------------|-----------|-----------|-----------|
| Seq. Read                     | 520 MB/s            | 520 MB/s             | 520 MB/s             | 510 MB/s             | 550 MB/s  | 550 MB/s  | 550 MB/s  |
| Seq. Write                    | 360 MB/s            | 470 MB/s             | 470 MB/s             | 460 MB/s             | 470 MB/s  | 530 MB/s  | 530 MB/s  |
| 4KB Random Read (IOPS)        | 57K                 | 77K                  | 83K                  | 80K                  | 85K       | 95K       | 100K      |
| Steady-State 4KB Rand. Write (IOPS) | 18K           | 25K                  | 28K                  | 37K                  | 12K       | 20K       | 23K       |
| Pricing at Amazon             | $40.99              | $69.99               | $136.99              | -                    | $60.51    | $92.97    | $191.42   |
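As a quick sanity check on the relative positioning, the cost per gigabyte can be derived directly from the Amazon prices quoted above (prices fluctuate daily, so treat these as a snapshot):

```python
# Cost per gigabyte for the R3 and R7 drives, at the quoted Amazon prices.
prices = {
    ("R3", 120): 40.99, ("R3", 240): 69.99, ("R3", 480): 136.99,
    ("R7", 120): 60.51, ("R7", 240): 92.97, ("R7", 480): 191.42,
}

for (family, capacity_gb), price in sorted(prices.items()):
    print(f"{family} {capacity_gb} GB: ${price / capacity_gb:.3f}/GB")
```

The TLC-based R3 drives land well under $0.35/GB at every capacity, while the MLC-based R7 drives all sit at roughly $0.39/GB or above, which is consistent with AMD's cost argument for the R3 line.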

However, the release of the Radeon R3 SSDs does not mean that AMD is simply leaving the market for more advanced SSDs to focus on low-cost models.

New R7/R9 On The Horizon

AMD intends to introduce new higher-end Radeon SSDs towards the end of the year, the company said this week. Quite naturally, AMD remains tight-lipped about exact plans, but it confirmed that the new family will include faster SATA drives as well as M.2/NVMe drives for future platforms. Keeping in mind that AMD does not seem to stick to one supplier of memory or drives, the new Radeon R7 SSDs (or will they be called Radeon R9?) may come from a new supplier. Nonetheless, if AMD intends to continue working with manufacturers that have their own NAND (or, at least, a very tight relationship with actual makers of flash), the list of potential partners will be relatively short.

When AMD introduced Radeon-branded memory modules several years ago, the company said that those products were optimized for its platforms, which was important as AMD needed faster DDR3 DRAM to improve the graphics performance of its bandwidth-hungry APUs. As an added bonus, Radeon-branded memory modules were a way to give something back to its loyal customers and modders. With the Radeon R7 SSDs, the company pursued the same strategy but never attempted to expand its family of storage devices. By now, Radeon-branded non-graphics hardware seems to have become a noteworthy part of AMD’s business, which is why it is gradually expanding the lineup of such products (e.g., the company introduced its DDR4 memory modules months ago, well ahead of any AMD APUs/CPUs with DDR4 support). Since TLC NAND is here to stay, it is pretty obvious that by the end of the year the company will offer SSDs for a variety of market segments: R3 for the entry level and R7 (or R9?) for the higher end.

Some Thoughts: AMD in SSDs (and DRAM) Feels a Bit Odd, Right?

The art of selling rebadged components, or using an ODM/OEM relationship and then adding a name on top of it, might not seem like true integration into these markets. While there are DRAM modules and SSDs with AMD's name on them, AMD is not actually investing much research money into driving the industry forward - these are turn-key solutions, similar to the way that local brands sell smartphones that are identical apart from the sticker and the software. The reason for AMD reaching out with SSDs and DRAM (which likely offer little-to-no margin compared to the rest of its products) comes down to support, validation and system integrators.

By offering an AMD brand SSD or DRAM module, it means that if a customer wants guaranteed compatibility and a single source for their parts, they can ring up an AMD distributor. This simplifies support for any component that needs to be replaced and means that inside and out everything comes up AMD (or as much as possible). It allows system integrators to offer their customers validated AMD hardware packages as well. 

 


Western Digital’s Acquisition of SanDisk Officially Closes


Western Digital announced on Tuesday that the Chinese authorities have approved its acquisition of SanDisk. The regulatory approval from China's Ministry of Commerce (MOFCOM) completes the regulatory review process required for the buyout, and as a result the transaction officially closed on Thursday. By taking over SanDisk, Western Digital becomes one of the world’s largest suppliers of storage devices and solutions, able to address virtually all kinds of storage needs.

“This transformational combination creates a media-agnostic leader in storage technology with a robust portfolio of products and solutions that will address a wide range of applications in almost all of the world’s computing and mobile devices,” said Steve Milligan, chief executive officer of Western Digital. “We are excited to now begin focusing on the many opportunities before us, from leading innovation to bringing the best of what we can offer as a combined company to our customers. In addition, we will begin the work to fully realize the value of this combination through executing on our synergies, generating significant cash flow, as well as rapidly deleveraging our balance sheet, and creating significant long-term value for our shareholders.”

The combined company will control over 40% of the HDD market, around 10% of the SSD market and a substantial chunk of NAND flash supply (together with Toshiba, SanDisk operates the world’s largest NAND flash production complex). The new entity will also be very powerful financially: last fiscal year Western Digital achieved revenue of $14.6 billion and net income of $1.5 billion (down from $15.1 billion and $1.6 billion a year before), whereas SanDisk’s revenue for the 2015 fiscal year was $5.56 billion, a decrease of 16% from 2014. But while the combination of Western Digital and SanDisk has huge potential, it will not be easy for the two companies to become one, especially keeping in mind that Western Digital and SanDisk have not yet fully absorbed HGST, Fusion-io and a number of other companies they have acquired in recent years.

Portfolio Overlap

In the coming quarters, Western Digital and SanDisk will have to optimize their product lineups and either integrate or eliminate redundant projects, two rather tough challenges.

The explosive growth of the solid-state storage market gave rise to a number of innovative developers of NAND-based storage solutions for enterprises and hyperscale data centers, while established players faced a new reality in which they had to become parts of bigger entities. To strengthen its portfolio of products for the lucrative data center market, SanDisk acquired Pliant Technology, FlashSoft, Schooner Information Technology, SMART Storage Systems and Fusion-io in recent years. Through these multi-billion-dollar takeovers, SanDisk greatly expanded its portfolio of products as well as its intellectual property. What is important to note is that even now, some of SanDisk’s products overlap and address essentially similar market segments.

SanDisk's NAND Flash Related Assets

| Former Name           | Specialization                                       | Type of Products                             | Founded                             | Acquired |
|-----------------------|------------------------------------------------------|----------------------------------------------|-------------------------------------|----------|
| Pliant                | Controllers for enterprise SSDs                      | Enterprise SSDs (Lightning)                  | 2006                                | 2011     |
| FlashSoft             | Software for SSD caching                             | Software for datacenters                     | 2009                                | 2012     |
| Schooner              | Software for database acceleration                   | Specialized servers for storage applications | 2007                                | 2012     |
| SMART Storage Systems | Controllers for enterprise SSDs                      | Enterprise SSDs (Optimus)                    | Spun off from Smart Modular in 2012 | 2013     |
| Fusion-io             | Software and hardware for enterprise storage systems | Specialized servers for storage applications | 2005                                | 2014     |

Western Digital was not sitting idle either. In addition to HGST (which currently absorbs the majority of Western Digital’s enterprise NAND-related assets), the company bought Virident, STEC, VeloBit, Amplidata and Skyera. Western Digital is not new to the SSD world: the takeover of SanDisk adds NAND flash manufacturing capacity, something Western Digital needed to become a vertically integrated provider of HDDs, SSDs and all-flash storage arrays. However, in addition to gaining a lineup of consumer SSDs, Western Digital also inherits a number of overlapping product lines and a lot of staff in similar roles.

Western Digital's NAND Flash Related Assets

| Former Name | Specialization                                       | Type of Products                               | Founded                  | Acquired |
|-------------|------------------------------------------------------|------------------------------------------------|--------------------------|----------|
| HGST        | SSDs and HDDs                                        | SSDs and HDDs                                  | 2003 (acquired from IBM) | 2012     |
| Virident    | Controllers for enterprise SSDs and software         | PCIe enterprise SSDs                           | 2006                     | 2013     |
| STEC        | Controllers for enterprise SSDs                      | Enterprise SSDs                                | 1990                     | 2013     |
| VeloBit     | Software for SSD caching                             | Software for datacenters                       | 2010                     | 2013     |
| Skyera      | Software and hardware for enterprise storage systems | Specialized servers for storage applications   | 2010                     | 2014     |
| Amplidata   | Software for storage management                      | Software for storage management in data centers | 2010                    | 2015     |

A major challenge for Western Digital and SanDisk will be optimization of their combined workforce in the coming quarters. It is likely that the two companies will have to lay off a number of people, and there are two problems with this. Firstly, takeovers by large corporations are usually followed by brain drain at all levels: both companies have already lost many people from Fusion-io, Skyera, Amplidata and others, and the integration of SanDisk into Western Digital could accelerate further departures. Secondly, workforce optimizations amid resignations might disrupt product development and execution of roadmaps.

Keeping in mind that Western Digital is still in the midst of integrating HGST, the situation becomes even more complicated. Until recently, WD and HGST had separate R&D divisions and product roadmaps. Unifying different research and development operations can hurt R&D in the short term because established teams and workflows are disrupted and new ones have yet to form. Adding SanDisk to the equation makes things even more difficult. But that is not the only challenge.

Product Lineups: The Competition Continues

At present, Western Digital’s subsidiary HGST uses NAND flash memory from Intel (or IMFT/Micron) to make its enterprise-grade SSDs. Keeping in mind that the two companies have a supply and development agreement in place, it is highly likely that development of solid-state drives based on Intel’s 3D NAND is well underway. Moreover, since Intel and Micron seem to pin a lot of hopes on their second-generation 3D NAND (whose production starts this summer), it is possible that HGST will have to use this type of memory as well. The good news is that Skyera’s skyHawk FS all-flash storage array should support various types of NAND memory (including SanDisk’s), because Micron, Toshiba and SK Hynix all invested in Skyera before it was acquired by Western Digital, and the company worked with all of them. Meanwhile, HGST’s FlashMax PCIe SSD accelerators (which it got from Virident) rely on Intel’s NAND, whereas the S-series SSDs (which the company got from STEC) use Toshiba’s memory.

Since we are talking about enterprise-grade storage, for which development and validation can take years, it is clear that HGST will continue to use third-party NAND for the foreseeable future, unless Western Digital terminates the agreement with Intel, drops already-developed products and replaces them with designs from SanDisk. Even in that case, it would have to sell Intel-based Ultrastar and FlashMax drives for some time so as not to upset customers.

Meanwhile, SanDisk naturally uses its own NAND flash as well as its own controllers to make its SSDs for enterprise markets. Such drives compete against products designed by HGST and not all customers of the two companies could easily switch suppliers of SSDs because their storage systems are tailored for particular drives. In short, HGST’s Ultrastar and FlashMax will continue to compete against SanDisk’s Lightning, Optimus and Fusion ioMemory drives/accelerators for quite a while. Moreover, it looks like Western Digital’s skyHawk FS will also compete against SanDisk’s InfiniFlash to a certain degree.

While the integration of the two companies seems challenging, the addition of SanDisk’s consumer SSDs to Western Digital’s product lineup is a big deal. It is no secret that PC form-factors that cannot house 2.5” HDDs are gaining popularity, whereas the total addressable market for HDDs is declining. It is also noteworthy that there is rather tough competition between low-end SSDs and HDDs for inexpensive notebooks. By offering both SSDs (especially single-chip SSDs like iNAND) and HDDs, Western Digital will be able to address traditional laptops, emerging 2-in-1s, tablets, ultra-thin notebooks and many other new types of PCs. Moreover, with both SSDs and HDDs in its portfolio, Western Digital will be able to offer just what PC makers need for low-cost machines (without having to reduce margins to minimal levels just to get a design win). While SSDs will naturally compete against HDDs, this is unlikely to hurt Western Digital. On the other hand, entering the SSD market and competing against Samsung is a tough challenge, especially with overlapping products, internal competition, a redundant workforce and the need to align the roadmaps of two companies as well as Toshiba.

Impact on Rivals

Two interesting things that we will have to observe in the coming years will be the impact of Western Digital’s acquisition of SanDisk on the market of storage in general as well as on their rivals, namely Intel, Micron, Samsung, Seagate and Toshiba.

While Intel has a broad SSD portfolio, which includes client and server drives, enterprise storage has been Intel’s primary focus from the very start. Development of storage class memory (3D XPoint) as well as its long-term vision of data center architecture indicates that Intel wants a significant and lucrative chunk of the enterprise storage market. Since the company is a major supplier of server CPUs, it is plausible that it will be a main provider of emerging NVDIMM SSDs for the future Intel Xeon platforms. Western Digital (HGST) will remain Intel’s SAS SSD development partner for a while, but Intel does not plan to supply 3D XPoint memory to any other makers of drives at this time, meaning HGST will not be able to address the highest end of the enterprise SSD market with 3D XPoint. However, since both SanDisk and Western Digital have been developing various storage platforms tailored for particular needs (such as HGST’s Active Archive System and Fusion-io’s InfiniFlash), there will be huge parts of the market that the combined company will be able to address without directly competing against Intel. Moreover, SanDisk is developing storage-class memory with HP, which will help to compete against Intel’s 3D XPoint eventually.

Micron has historically focused on the production of DRAM and NAND memory, but in recent years it has accelerated its push into finished products based on its chips. Right now, the company offers a portfolio of SSDs for virtually all market segments, including big data analytics, databases, hyperscale/private data centers and virtualized environments. Micron will continue to compete against SanDisk’s SSDs in the respective segments and will be able to build hybrid storage solutions together with Seagate when and if needed. What Micron cannot do today is offer all-flash storage arrays like Western Digital’s Skyera. Moreover, with Western Digital’s takeover of SanDisk, Micron will eventually lose a customer for its NAND flash memory, which is not necessarily a bad thing given the company’s focus on products rather than commodity chips.

Speaking of Seagate, we should note that the company is facing major challenges. The firm does have state-of-the-art storage technologies: it offers Nytro solid-state storage accelerators, SandForce SSD controllers and various other solutions (in fact, the company even claims to have the world’s fastest SSD in its portfolio). However, Seagate does not have its own NAND flash production (unlike two of its competitors, Toshiba and Western Digital), which is why its chances in the consumer SSD market are somewhat limited due to cost reasons and strong competition from Samsung (the only big fabless SSD suppliers are Kingston and LiteOn). Being unable to supply storage products for emerging PC form-factors greatly limits Seagate’s growth potential in general: while HDDs are not going anywhere in the next decade, or even two, their usage in client PCs is set to decline, and so far sales of high-capacity HDDs for hyperscale data centers have not offset the decline of client hard drives. Recently, Seagate announced plans to reduce its own HDD production capacity to improve its financial position, but it remains to be seen how this could help it guarantee either a steady supply of NAND flash or its own NAND flash production capacity. In any case, Seagate will continue to compete fiercely against Western Digital in the growing market of high-capacity hard drives for cloud data centers, and that will impact the third remaining maker of HDDs, Toshiba.

Right now, Toshiba is the only other company in the world that sells both SSDs and HDDs, an exclusive advantage it is about to lose. The company does not have helium-filled hard drives in its arsenal to address the most lucrative part of the hyperscale data center market with 8 TB and 10 TB models. Moreover, unlike its partner SanDisk, Toshiba did not acquire any solid-state storage startups to address the market of all-flash storage systems for data centers. The company still has very advanced SAS and PCIe SSDs in its lineup, hence it can compete for the market of solid-state accelerators, even though its capabilities are somewhat limited here due to the lack of sophisticated enterprise-grade caching software. While Toshiba has its own NAND flash and HDD production capacities, it has yet to gain the technologies (software, hardware, etc.) to better compete for lucrative parts of the storage market. At present, Toshiba is in the midst of a financial scandal, and it is hard to expect it to focus on revamping its storage capabilities at this time. However, it remains to be seen what happens next.

SSDs? HDDs? The Industry Needs Solutions!

While client storage technologies are important, enterprise storage will remain the most lucrative segment of the market. The data center today is evolving very quickly, and for many applications a dedicated tier of multi-tiered storage built from HDDs, SSDs, or a combination of the two is still not enough. Don’t get me wrong here: we still need purpose-built hard drives and solid-state drives, but in many new cases they are not installed in general-purpose servers but are part of storage solutions designed for particular purposes. Historically, such solutions came from companies like EMC, IBM, HP, Hitachi Data Systems, NetApp and so on. However, in recent years the market for external storage solutions began to transform.

Firstly, a variety of startup companies have started to sell all-flash storage solutions. These can offer a number of advantages, since such firms usually build everything (except the flash itself) in-house, use custom technologies and know their niche very well. All-flash arrays such as SanDisk’s InfiniFlash (512 TB of NAND, 2 million IOPS, 3U form-factor) or Western Digital’s Skyera skyHawk FS (136 TB of NAND, 400K IOPS, 2.4 GB/s, 1U form-factor) provide numerous performance and cost advantages over traditional HDD-based data storage solutions, which is why such all-flash arrays are gaining popularity these days. Furthermore, since they are purpose-built, they deliver performance right out of the box and solve rather complex problems.
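Working only from the figures quoted above, the density advantage of these purpose-built arrays is easy to quantify in capacity per rack unit:

```python
# Capacity density of the two all-flash arrays cited above,
# using only the headline figures quoted in the text.
arrays = {
    "SanDisk InfiniFlash": {"capacity_tb": 512, "rack_units": 3},
    "Skyera skyHawk FS":   {"capacity_tb": 136, "rack_units": 1},
}

for name, spec in arrays.items():
    density = spec["capacity_tb"] / spec["rack_units"]
    print(f"{name}: {density:.1f} TB per rack unit")
```

At roughly 170 TB and 136 TB per rack unit respectively, both arrays pack far more capacity per U than a shelf of even the largest contemporary nearline HDDs.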

Secondly, in recent years traditional makers of hard drives have entered the market of storage systems with Active Archive (HGST), ClusterStor/OneStor (Seagate) and other HDD-based storage arrays. Such devices are based on high-capacity hard drives and are designed to store massive amounts of data. While sales of many mission-critical storage systems are declining because of all-flash arrays (we see it in the reports of HDD makers as well as industry reports like the one linked above), systems optimized for archive and nearline storage are gaining traction.

To gain the expertise to develop storage arrays, both Seagate and Western Digital have acquired multiple companies in the last few years. However, only Western Digital can now build all-flash storage arrays using its own NAND flash, its own controllers and its own SSDs. Meanwhile, Seagate will have to rely on SSDs co-developed with third-party providers. The good news is that the company has a long-term development agreement with Micron, but it remains to be seen whether Seagate will be able to offer anything comparable to the all-flash arrays from SanDisk or Skyera.

While storage solutions may not be the primary business for Western Digital today, in a world where IT is moving to either private or public clouds, expertise in hybrid-flash and all-flash storage arrays is hard to overestimate. Obviously, the company will need different types of products to address public and private cloud customers. Hypothetically, as one big company with various assets, Western Digital could be able to address everyone with the right products, provided that it manages to smartly integrate SanDisk, HGST and others without losing valuable assets and people en route.

Final Words

The acquisition of SanDisk by Western Digital not only transforms the latter into a vertically-integrated maker of storage devices and platforms, but reflects many industry trends. Client devices transit to flash storage, whereas IT computing is moving to the cloud. To offer the right solutions, Western Digital needed a new set of assets and the takeover brings them to the company.

On paper, Western Digital is now the only company that can build (pretty much) any type of storage device, from a single-chip SSD for tablets to a storage array featuring SSDs and NAND flash. In many ways, Western Digital is now a competitor not only to companies like Seagate, Toshiba, Micron and Samsung, but also to EMC, HPE, Intel, Hitachi Data Systems and others. Fighting many wars at once is a tough business, and it remains to be seen how well it will work out for Western Digital.

However, winning market share from EMC or HPE is not the main concern of the company right now. The biggest challenge for Western Digital today should be to integrate all of its parts into one company, eliminate redundancies, streamline product lineups, avoid internal competition and keep up the pace of innovation.

G.Skill Unveils New Trident Z DDR4: Five New Colors


G.Skill has introduced new additions to its Trident Z family of DDR4 memory modules, designed to simplify the life of anyone who wants to color-coordinate their PC. The new Trident Z lineup includes memory sticks in five new color schemes designed to match the aesthetics of overclocking and gaming motherboards.

Just 10 to 15 years ago, PC modding was reserved for hardcore enthusiasts who were willing to spend time and money to build beautiful-looking PCs using custom-made components. PC cases with built-in LEDs or transparent windows were in the minority, fully custom liquid-cooling systems were the exception, and the vast majority of motherboards were either slowly adopting color schemes or remained green/brown; memory modules with heat spreaders were considered stylish. Fast forward to 2016, and we have plenty of mass-produced components with PC modding features, and almost anyone with moderate knowledge of PC hardware can build a PC with a matching color scheme and lighting. The new Trident Z memory modules further simplify building computers with unique designs, something that gamers and enthusiasts want to do.

The G.Skill Trident Z series features a two-tone design with two elements: the main body and the top bar. The main body is either black or silver, and the top bar comes in orange, yellow, white or black. The original Trident Z modules, featuring a silver body and black brushed-aluminum heat spreader with a red top-bar highlight, will remain on the market. Meanwhile, the new color schemes (such as those with orange and yellow bars) will come in handy for those building new PCs based on the latest ASUS ROG, GIGABYTE Super Overclock or MSI XPower/MPower motherboards.

The new G.Skill Trident Z memory modules have 8 GB or 16 GB capacities and are based on Samsung’s 8 Gb DDR4 ICs. The Trident Z will be available in dual-channel and quad-channel kits with 16, 32, 64 and 128 GB capacities, targeting everything from gaming desktops to higher-end workstations. Initially, G.Skill will offer colorful Trident Z kits in DDR4-3200, DDR4-3300, DDR4-3333, DDR4-3400 and DDR4-3466 speed bins, with CL14 or CL16 latencies and the higher DDR4 standard voltage (which is normal for high-speed DDR4). The colorful Trident Z kits are aimed at a good price/performance ratio for the majority of PC enthusiasts. Offering too many DDR4-4000+ SKUs in different color schemes would complicate life for retailers, since such modules are not very popular because of their high price. Therefore, if you want to go to DDR4-4000 and beyond, you will have to stick to the classic Trident Z color scheme: a black and silver heat spreader with a red top-bar highlight.
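For readers weighing the speed bins against their latencies, the absolute (wall-clock) CAS latency follows from the standard DDR4 relationship: the I/O clock runs at half the data rate, so true latency in nanoseconds is 2000 x CL / (data rate in MT/s):

```python
# True CAS latency in nanoseconds: 2000 * CL / (data rate in MT/s).
# The factor of 2000 comes from DDR signaling (two transfers per clock)
# plus the MT/s-to-ns unit conversion.
def cas_latency_ns(data_rate_mts: int, cl: int) -> float:
    return 2000 * cl / data_rate_mts

print(round(cas_latency_ns(3200, 14), 2))  # DDR4-3200 CL14
print(round(cas_latency_ns(3466, 16), 2))  # DDR4-3466 CL16
```

By this measure, DDR4-3200 CL14 (8.75 ns) and DDR4-3466 CL16 (about 9.23 ns) are within roughly half a nanosecond of each other, so the faster bins mostly buy bandwidth rather than lower access latency.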

I know Ian has taken delivery of some of these modules for future testing, in the silver body and white bar design. As shown below, these are 16 GB modules at DDR4-3200 with 14-14-14-34 sub-timings, installed in the GIGABYTE X170-Extreme ECC.

G.Skill intends to make the new Trident Z modules available this month. Since such modules only feature new heat spreaders, but continue to use the company’s own design PCBs as well as Samsung’s mass-produced 8 Gb DDR4 chips, they should not cost significantly more than existing Trident Z solutions.

Netgear Introduces Second-Gen ProSAFE 10GBase-T Switches for SMBs


NetGear has introduced four new ProSAFE 10 GbE switches for small and medium businesses, upgrading their first generation of XS708E and XS712T parts. The new switches support both copper (10GBase-T) and fiber links, IPv6 management as well as Layer 2+/Layer 3 Lite features. NetGear claims that the new switches are significantly more cost-efficient than previous-generation 10 GbE products.

Internet speeds and file sizes have increased by orders of magnitude over the last ten years, and for many offices (and prosumers), Gigabit Ethernet or basic teaming may no longer be enough. As a result, many businesses and individuals are evaluating a migration to 10 Gigabit Ethernet, which brings an immediate performance uplift but is more expensive on a per-port basis than traditional GbE (and consumes more power, particularly for RJ45/copper links). Fortunately, in recent quarters a number of companies have started to offer moderately priced 10 GbE switches supporting both copper and fiber links, enabling a more cost-efficient transition to 10 Gigabit networks.

The four new ProSAFE 10 GbE Smart Managed Switches from NetGear are the 8-port XS708T, the 16-port XS716T, the 24-port XS728T and the 44-port XS748T. The 8- and 16-port switches are based on a single-core ARM Cortex-A9 processor at 600 MHz, whereas the more powerful 24- and 44-port models feature a dual-core ARM Cortex-A9 at 800 MHz.

NetGear ProSAFE 10 GbE Smart Managed Switches

|                             | XS708T                      | XS716T                      | XS728T                     | XS748T                     |
|-----------------------------|-----------------------------|-----------------------------|----------------------------|----------------------------|
| 10GBase-T RJ-45 Ports       | 8                           | 16                          | 24                         | 44                         |
| SFP+ Ports                  | 2 (shared)                  | 2 (shared)                  | 4 (dedicated)              | 4 (dedicated)              |
| CPU                         | Single-core Cortex-A9, 600 MHz | Single-core Cortex-A9, 600 MHz | Dual-core Cortex-A9, 800 MHz | Dual-core Cortex-A9, 800 MHz |
| RAM                         | 512 MB                      | 512 MB                      | 512 MB                     | 512 MB                     |
| Non-Volatile Memory         | 8 MB SPI + 256 MB NAND      | 8 MB SPI + 256 MB NAND      | 8 MB SPI + 256 MB NAND     | 8 MB SPI + 256 MB NAND     |
| Packet Buffer               | 2 MB                        | 2 MB                        | 3 MB                       | 3 MB                       |
| ACLs                        | 100 (shared)                | 100 (shared)                | 164 (shared)               | 164 (shared)               |
| MAC Address Table           | 16K                         | 16K                         | 16K                        | 16K                        |
| ARP/NDP Table               | 738                         | 738                         | 1K                         | 1K                         |
| VLANs                       | 256                         | 256                         | 512                        | 512                        |
| Fabric Bandwidth            | 160 Gbps                    | 320 Gbps                    | 560 Gbps                   | 960 Gbps                   |
| Rated Latency (10GBase-T)   | <3.012 μs                   | <2.624 μs                   | <11.649 μs                 | <3.674 μs                  |
| Rated Latency (SFP+)        | <2.466 μs                   | <2.128 μs                   | <2.619 μs                  | <3.693 μs                  |
| Static Routes (IPv4/IPv6)   | 32 / 32                     | 32 / 32                     | 64 / 64                    | 64 / 64                    |
| Multicast IGMP Groups       | 512                         | 512                         | 512                        | 512                        |
| USB Port                    | 1 (firmware/config access)  | 1 (firmware/config access)  | 1 (firmware/config access) | 1 (firmware/config access) |
| Rated Power Consumption     | 49.5 W                      | 96.0 W                      | 134.9 W                    | 262.8 W                    |
| Price                       | $1558                       | $2623                       | unknown                    | $8198                      |

The switches support modern VLAN features such as protocol-based VLAN, MAC-based VLAN, 802.1x Guest VLAN and Q-in-Q; advanced QoS with L2/L3/L4 awareness and eight priority queues; dynamic VLAN assignment; IPv4 and IPv6 routing; and IPv6 support for management, QoS and ACLs. The devices come in rackmount form-factors and consume from 49.5 W to 262.8 W of power.

According to a survey of over 550 IT decision-makers in the U.S., U.K. and Germany, commissioned by NetGear and conducted by Palmer Research, approximately 33% of businesses with fewer than 500 employees already have 10 GbE switching in place to connect to modern 10 GbE servers and NAS units. Meanwhile, many of those who do not currently use 10 GbE are considering it and plan to deploy in the coming quarters: the survey says that by the end of 2017, 75% of the aforementioned organizations will have deployed a 10 GbE backbone. Analysts from IHS (according to data provided by NetGear) seem to agree with the results of the survey and believe that adoption of 10 GbE is increasing rapidly.

Pricing of these switches is a lot higher than that of standard Gigabit Ethernet switches, as 10 GbE networks typically require advanced management protocols for business and enterprise use, whereas home users might be happy with a simple pass-through device. Because of the market, and the management requirements, we are still looking at above $150 per port for 10 GbE. The NetGear ProSAFE XS708T and XS716T are available now for $1558 ($194.75 per port) and $2623 ($164 per port), respectively. The top-of-the-range XS748T will ship in July for $8198 ($186 per port). Pricing and availability of the XS728T were not touched upon by NetGear. By comparison, the popular first-generation 8-port XS708E switch retails for $850 ($106.25 per port) at this point in its life cycle.
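The per-port figures above follow directly from dividing list price by copper port count:

```python
# Per-port cost of the 10GBase-T switches discussed above: (price, RJ-45 ports).
switches = {
    "XS708T": (1558.00, 8),
    "XS716T": (2623.00, 16),
    "XS748T": (8198.00, 44),
    "XS708E (1st gen)": (850.00, 8),
}

for model, (price, ports) in switches.items():
    print(f"{model}: ${price / ports:.2f} per port")
```

The 16-port XS716T works out as the cheapest of the new models per port, though all three remain well above the roughly $106 per port of the outgoing XS708E.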

Sources: NetGear, ServeTheHome.

Intel Releases BIOS Version 0044 for Skylake NUCs


BIOS updates for motherboards and mini-PCs aren't usually important enough to warrant explicit coverage. However, Intel's latest release for the Skylake NUCs (the Core i3 and Core i5 versions: NUC6i3SYK, NUC6i3SYH, NUC6i5SYK and NUC6i5SYH) deserves special mention for the number of fixes that have been made.

Intel's launch of the Skylake NUCs was quite muted, with review units making it to the press a few months after market availability. In the meantime, consumers were beset with problems ranging from memory incompatibility issues and Wi-Fi flakiness to unexplained BSODs. We encountered a number of these in our own review, and went to the extent of recommending the unit only if the reader wanted to be a beta tester for Intel.

Fortunately, Intel has been hard at work getting to the bottom of all the reported problems. The last two BIOS releases (0042 and 0044) have solved a number of serious issues, including, but not restricted to:

  • Improved electrical overstress protection in the voltage regulator circuitry - this was the reason for BSODs with WHEA_UNCORRECTABLE_ERROR reports.
  • Changed default value for Round Trip Latency to Enabled - this was the reason for incompatibility with some memory modules fabricated by SKHynix.
  • Improved BIOS update function to disable keyboard and power button during flash/recovery process - this could have helped us avoid bricking our first review sample of the NUC6i5SYK.
  • Fixed issue where Wi-Fi access point occasionally drops out during warm boot - this solves the strange case of the missing 5GHz SSID upon restarting the NUC
  • Changed FITC setting, OPI Link Speed to GT4 - this is the performance fix for PCIe 3.0 x4 NVMe SSDs

If you are facing issues with a Skylake NUC, updating to BIOS v0044 should resolve almost all of the problems. Readers curious about the OPI link speed and its effect on the performance and power consumption characteristics of a Skylake-U system can peruse our detailed coverage posted last week.

 

ARM Announces 10FF "Artemis" Test Chip

Today in collaboration with TSMC, ARM's physical IP division is announcing the tapeout of a 10nm test chip demonstrating the company's readiness for the new manufacturing process. The new test chip is particularly interesting as it contains ARM's yet-to-be-announced "Artemis" CPU core. ARM discloses that tapeout actually took place back in December 2015 and is expecting silicon to come back from the foundry in the following weeks. 

The test chip serves as a learning platform for both ARM and TSMC in tuning their tools and manufacturing process to achieve the best results in terms of performance, power, and area. ARM actually implemented a full 4-core Artemis cluster on the test chip, which should serve as a representative implementation of what vendors are expected to use in their production designs. The test chip also harbours a current-generation Mali GPU implementation with 1 shader core that serves as a demonstration of what vendors should expect when choosing ARM's POP IP in conjunction with its GPU IP. Besides the CPU and GPU, we also find a range of other IP blocks and I/O interfaces that are used for validation of the new manufacturing process.

TSMC's 10FF manufacturing process primarily promises a large improvement in density, with scaling of up to 2.1x compared to the previous 16nm manufacturing node. At the same time, the new process is able to achieve 11-12% higher performance at each process' respective nominal voltage, or a 30% reduction in power at the same frequency.

In terms of a direct comparison between a current Cortex A72 design on 16FF+ and an Artemis core on 10FF on the preliminary test chip with an early physical design kit (PDK) we see that the new CPU and process are able to roughly halve the dynamic power consumption. Currently clock frequencies on the new design still don't reach what is achievable on the older more mature process and IP, but ARM expects this to change in the future as it continues to optimise its POP and the process stabilises.

As manufacturing processes rise in complexity, physical design implementation becomes an ever more important part of CPU and SoC design. As such, tools like ARM's POP IP are increasingly important for vendors to achieve a competitive result, both in terms of PPA and the time-to-market of an SoC. Today's announcement serves as a demonstration of ARM's commitment to staying ahead of the curve in enabling its partners to make the best out of the IP that they license.

Crucial Announces 16GB Ballistix Sport LT DDR4-2400 SO-DIMMs

This week Crucial is introducing its first DDR4 SO-DIMMs for enthusiasts, designed for high-performance notebooks and small form-factor PCs. The Crucial Ballistix Sport LT PC4-19200 SO-DIMMs are available in 4 GB, 8 GB and 16 GB capacities and can operate at DDR4-2400 with 16-16-16 timings at 1.2 volts. The modules feature SPD with XMP 2.0 profiles for devices that support XMP.

PC makers focusing on Intel enthusiast mobile parts usually ship their computers with DDR4-2133 memory modules, per the JEDEC standard supported by the chips, which provides a peak of 34.1 GB/s of bandwidth when operating in dual-channel mode. By contrast, a pair of DDR4-2400 SO-DIMMs enables 38.4 GB/s of bandwidth, or 12.6% higher, which could provide a noteworthy performance improvement in applications that demand memory bandwidth (e.g., graphics applications). At the same time, the modules binned for a 2400 MT/s data rate at 1.2 volts and fitted with additional heatsinks are geared to maintain a temperature equilibrium similar to that of base-frequency modules. In short, it should be relatively safe to use such modules even in highly integrated systems with moderate cooling.
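Those bandwidth figures follow directly from the transfer rate: each 64-bit (8-byte) DDR4 channel moves 8 bytes per transfer, so peak bandwidth is MT/s x 8 B x channel count. A small sketch (the helper name is ours):

```python
def peak_bandwidth_gbs(mts, channels=2, bus_bytes=8):
    """Peak DRAM bandwidth in GB/s: transfer rate x 8-byte bus x channels."""
    return mts * bus_bytes * channels / 1000

ddr4_2133 = peak_bandwidth_gbs(2133)  # ~34.1 GB/s
ddr4_2400 = peak_bandwidth_gbs(2400)  # 38.4 GB/s
uplift = (ddr4_2400 / ddr4_2133 - 1) * 100
print(f"{ddr4_2133:.1f} GB/s -> {ddr4_2400:.1f} GB/s (+{uplift:.1f}%)")
```

The exact ratio works out to about 12.5%; the 12.6% quoted above comes from dividing by the rounded 34.1 GB/s figure.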

Crucial Ballistix Sport LT DDR4 SO-DIMMs and Kits (all modules: DDR4-2400, 16-16-16, 1.2 V)

Density            Part Number         Price      Price per GB
4 GB               BLS4G4S240FSD       $21.99     $5.4975
8 GB               BLS8G4S240FSD       $39.99     $4.9988
16 GB              BLS16G4S240FSD      $89.99     $5.6244
8 GB (2x4 GB)      BLS2K4G4S240FSD     $43.99     $5.4988
16 GB (2x8 GB)     BLS2K8G4S240FSD     $79.99     $4.9994
32 GB (2x16 GB)    BLS2K16G4S240FSD    $179.99    $5.6247
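The per-gigabyte column is simply price divided by capacity; a quick sketch to verify the table:

```python
# Crucial Ballistix Sport LT kits: (price in USD, capacity in GB)
kits = {
    "BLS4G4S240FSD":    (21.99, 4),
    "BLS8G4S240FSD":    (39.99, 8),
    "BLS16G4S240FSD":   (89.99, 16),
    "BLS2K4G4S240FSD":  (43.99, 8),
    "BLS2K8G4S240FSD":  (79.99, 16),
    "BLS2K16G4S240FSD": (179.99, 32),
}

for part, (price, gb) in kits.items():
    print(f"{part}: ${price / gb:.4f}/GB")
```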

The prices of the dual module kits are slightly above buying two single modules, but that's for good reason: users who want more than one module and want guaranteed system compatibility between modules should buy a complete kit. This is because tertiary sub-timings on a multi-module kit are adjusted to compensate for having more than one module (or rather, a kit with fewer modules has tighter timings as it doesn't have as many modules to compensate for). When a user buys individual modules (or a couple of two-module kits rather than a four-module kit), there's no guarantee the memory will work together. Many users might not have issues putting modules together because there's enough wiggle room in the memory controller or the ICs to compensate, but plenty of problems can arise from this, especially when moving to faster speed kits. AnandTech has historically always recommended buying a full multi-module kit with the required capacity in one go, over buying separate modules/mini-kits over time.

The Ballistix Sport LT DDR4 SO-DIMMs will be available for purchase globally from retailers shortly and are currently available from the Crucial website. The modules are backed by a limited lifetime warranty (except Germany, where the warranty is valid for 10 years from the date of purchase). 

NVIDIA SHIELD Android TV Console Adds Support for Vudu, HDR and 4Kp60 Content

The NVIDIA SHIELD Android TV set-top-box continues to be the most advanced device featuring Google’s TV platform even a year after it was introduced into the market. The credit goes mainly to the high-end Tegra X1 SoC as well as the rich feature-set that has been getting continuous updates from NVIDIA. Today, the company is announcing several important improvements to the unit, including support for HDR as well as 4Kp60 playback.

The NVIDIA SHIELD Android TV has rather rich gaming capabilities. It supports a library of Android games compatible with its gamepad, as well as GameStream technology, which allows games to be streamed from a suitably equipped GeForce GTX PC, and the GeForce Now subscription service, which streams games rendered in NVIDIA’s datacenters to the console. Beyond games, NVIDIA has also been serious about enabling access to other types of content, including movies, music, sports and news.

At its launch a year ago, the NVIDIA SHIELD Android TV supported various video streaming services. The most popular among them included Netflix, YouTube, Hulu Plus and Vevo. However, certain services such as Vudu were notably absent. At the Google I/O conference today, NVIDIA announced that its upcoming Upgrade 3.2 for the SHIELD Android TV will bring support for new content providers, including ABC, Vudu, Spotify, MTV and Disney.

The Vudu content delivery application for Google’s Android TV will support streaming of ultra-high-definition 4K movies, adding another option for premium TV owners. In addition to UHD video, Vudu promises to support Dolby Atmos surround sound audio. Dolby Atmos is a nice addition to SHIELD’s lossless audio support. The Vudu Android TV app will be an exclusive on the SHIELD Android TV for some time to come. Owners of the device will be able to acquire 4K content from more than six providers, including Netflix, Hulu, HBO Go, Vudu, Plex, UltraFlix and Curiosity Stream.

While Vudu is a new addition to the NVIDIA SHIELD Android TV, the Netflix app is also receiving improvements. With Upgrade 3.2, SHIELD Android TV owners can now play back select 4K titles with HDR (high dynamic range). From a technical standpoint, HDR-mastered 4K video streams contain special metadata flags that help HDMI 2.0a-capable hardware to properly display scenes with 10-bit color depth and a greater color gamut. We already saw in our review that Netflix streams HEVC 10-bit video for some of its 4K titles, though not all 10-bit videos are necessarily HDR-enabled.

HDR support will not be limited to Netflix. NVIDIA also plans to add HDR streaming to its GameStream technology later this summer. Encoding of HDR-enabled streams will only be available on GeForce GTX graphics cards based on the company’s latest Pascal architecture. This is not really surprising, as the GP104 is currently the only GPU in NVIDIA’s arsenal with the hardware-accelerated 10-bit encoding necessary for HDR with good quality.

Although the SHIELD Android TV was the industry’s first Android TV STB capable of decoding and displaying 4Kp60 (3840x2160 resolution at 60 fps) content, users were limited to local content in order to experience it. This will be changing shortly, as YouTube will soon make 4Kp60 content available on the STB.

While NVIDIA has not revealed everything that is set to come to the SHIELD Android TV in the next few months, it did confirm that the STB will definitely get an upgrade to the next-generation Android N. The future Android TV operating system will support features such as live TV recording, picture-in-picture and so on. Given the fact that NVIDIA has been updating its SHIELD set-top-box regularly in the past year, it is likely that the company will upgrade it to the new Android TV version once the latter is released.

While NVIDIA is making the SHIELD Android TV better for existing users, the company is also working to expand its customer base. For a limited time only, it will bundle the SHIELD Remote ($49.99) for free along with the SHIELD 16 GB ($199.99) and the SHIELD Pro 500 GB ($299.99) set-top-boxes.


NVIDIA Posts Full GeForce GTX 1070 Specifications: 1920 CUDA Cores Boosting to 1.68GHz

Back when NVIDIA first announced the GeForce GTX 1080 earlier this month, they also briefly announced that the GTX 1070 would be following it. The GTX 1070 would follow the GTX 1080 by two weeks, and presumably to keep attention focused on the GTX 1080 at first, NVIDIA did not initially reveal the full specifications for the card. Now with the GTX 1080 performance embargo behind them – though cards don’t go on sale for another week and a half – NVIDIA has posted the full GTX 1070 specifications over on GeForce.com.

NVIDIA GPU Specification Comparison
  GTX 1080 GTX 1070 GTX 970 GTX 770
CUDA Cores 2560 1920 1664 1536
Texture Units 160 120 104 128
ROPs 64 64 56 32
Core Clock 1607MHz 1506MHz 1050MHz 1046MHz
Boost Clock 1733MHz 1683MHz 1178MHz 1085MHz
TFLOPs (FMA) 8.9 TFLOPs 6.5 TFLOPs 3.9 TFLOPs 3.3 TFLOPs
Memory Clock 10Gbps GDDR5X 8Gbps GDDR5 7Gbps GDDR5 7Gbps GDDR5
Memory Bus Width 256-bit 256-bit 256-bit 256-bit
VRAM 8GB 8GB 4GB 2GB
FP64 1/32 1/32 1/32 1/24
TDP 180W 150W 145W 230W
GPU GP104 GP104 GM204 GK104
Transistor Count 7.2B 7.2B 5.2B 3.5B
Manufacturing Process TSMC 16nm TSMC 16nm TSMC 28nm TSMC 28nm
Launch Date 05/27/2016 06/10/2016 09/18/2014 05/30/2013
Launch Price $599 (Founders: $699) $379 (Founders: $449) $329 $399

The GTX 1070 was previously disclosed at 6.5 TFLOPs of compute performance, and we now know how NVIDIA is getting there. 15 of 20 SMs will be enabled on this part, representing 1920 CUDA cores. Clockspeeds are also slightly lower than on the GTX 1080, coming in at 1506MHz for the base clock and 1683MHz for the boost clock. Overall this puts the GTX 1070’s rated shader/texture/geometry performance at 73% of the GTX 1080’s, a slightly wider gap than between the comparable GTX 900 series cards.
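Both the 6.5 TFLOPs figure and the 73% ratio fall out of the standard peak-throughput formula for FMA-capable shaders (2 FLOPs per CUDA core per clock at the boost frequency); a quick sketch, with a helper name of our own:

```python
def fma_tflops(cuda_cores, boost_mhz):
    """Peak FP32 throughput in TFLOPs: one FMA = 2 FLOPs per core per clock."""
    return cuda_cores * 2 * boost_mhz / 1e6

gtx_1080 = fma_tflops(2560, 1733)  # ~8.9 TFLOPs
gtx_1070 = fma_tflops(1920, 1683)  # ~6.5 TFLOPs
print(f"GTX 1070 / GTX 1080 = {gtx_1070 / gtx_1080:.0%}")
```

The ratio works out to roughly 0.73, matching the 73% figure above.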

However on the memory and ROP side of matters, the two cards will be much closer. The GTX 1070 is not shipping with any ROPs or memory controller channels disabled – GTX 970 style or otherwise – and as a result it retains GP104’s full 64 ROP backend. Overall memory bandwidth is 20% lower, however, as the GDDR5X of GTX 1080 has been replaced with standard GDDR5. Interestingly though, NVIDIA is using 8Gbps GDDR5 here, a first for any video card. This does keep the gap lower than it otherwise would have been had they used more common memory speeds (e.g. 7Gbps) so it will be interesting to see how well 8Gbps GDDR5 can keep up with the cut-down GTX 1070. 64 ROPs may find it hard to be fed, but there will also be less pressure being put on the memory subsystem by the SMs.
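The 20% memory bandwidth gap is easy to verify: aggregate bandwidth is the per-pin data rate times the bus width. A small sketch (helper name is ours):

```python
def mem_bandwidth_gbs(gbps_per_pin, bus_width_bits):
    """Aggregate memory bandwidth in GB/s: per-pin Gbps x bus width / 8 bits-per-byte."""
    return gbps_per_pin * bus_width_bits / 8

gtx_1080 = mem_bandwidth_gbs(10, 256)  # 320 GB/s (10Gbps GDDR5X)
gtx_1070 = mem_bandwidth_gbs(8, 256)   # 256 GB/s (8Gbps GDDR5)
print(f"GTX 1070 deficit: {(1 - gtx_1070 / gtx_1080):.0%}")
```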

Meanwhile as is usually the case for x70 cards, GTX 1070 will have a lower power draw than its fully enabled sibling, with a shipping TDP of 150W. Notably, the difference between the GTX 1080 and GTX 1070 is larger than it was for the 900 series – where it was 20W – so we’re going to have to see if GTX 1070 ends up being TDP limited more often than GTX 1080 is. In that sense TDP is somewhat arbitrary – its purpose is to set a maximum power consumption for cooling and power delivery purposes – and I’m not surprised that NVIDIA wants to stay at 150W or less for the x70 series after the success that was the GTX 970.

Like the GTX 1080, the GTX 1070 will be launching in two configurations. The base configuration starts at $379 and will feature (semi-)custom partner designs. Meanwhile, as previously disclosed, NVIDIA will be offering a Founders Edition version of this card as well. The Founders Edition card will be priced at $449 – a $70 premium – and will be available on day one, whereas this is not guaranteed to be the case for custom cards.

The GTX 1070 Founders Edition card will retain the basic stylings of the GTX 1080, including NVIDIA’s new angular shroud. However I have received confirmation that as this is a lower TDP card, it will not get the GTX 1080’s vapor chamber cooler. Instead it will use an integrated heatpipe cooler similar to what the reference GTX 980 used.

Google’s Tensor Processing Unit: What We Know

If you’ve followed Google’s announcements at I/O 2016, one stand-out from the keynote was the mention of a Tensor Processing Unit, or TPU (not to be confused with thermoplastic urethane). I was hoping to learn more about this TPU, however Google is currently holding any architectural details close to their chest.

More will come later this year, but for now what we know is that this is an actual processor with an ISA of some kind. What exactly that ISA entails isn't something Google is disclosing at this time - and I'm curious as to whether it's even Turing complete - though in their blog post on the TPU, Google did mention that it uses "reduced computational precision." It’s a fair bet that unlike GPUs there is no ISA-level support for 64 bit data types, and given the workload it’s likely that we’re looking at 16 bit floats or fixed point values, or possibly even 8 bits.

Reaching even further, it’s possible that instructions are statically scheduled in the TPU, although this was based on a rather general comment about how static scheduling is more power efficient than dynamic scheduling, which is not really a revelation in any shape or form. I wouldn’t be entirely surprised if the TPU actually looks an awful lot like a VLIW DSP with support for massive levels of SIMD and some twist to make it easier to program for, especially given recent research papers and industry discussions regarding the power efficiency and potential for DSPs in machine learning applications. Of course, this is also just idle speculation, so it’s entirely possible that I’m completely off the mark here, but it’ll definitely be interesting to see exactly what architecture Google has decided is most suited towards machine learning applications.

AMD Announces Computex 2016 Webcast: May 31st, 7pm Pacific

With the annual Computex Taipei trade show quickly approaching, AMD sends word that they will be hosting a live webcast for their annual press conference at the show. The press conference itself is scheduled for 10am local time (02:00 UTC) on June 1st, which for North America translates to 10pm Eastern/7pm Pacific on May 31st.

According to AMD’s announcement, their press conference will have both major CPU and GPU news. On the CPU front, AMD’s 7th generation “Bristol Ridge” APU is scheduled to be shown off. AMD pre-announced Bristol Ridge back in April, and as AMD has made it a habit in recent years to do major APU disclosures around Computex, I’d expect that we’ll get the full architectural and SKU details on Bristol Ridge at the show.

Meanwhile on the GPU front, AMD will be speaking more about their forthcoming Polaris architecture GPUs. When AMD first unveiled Polaris at the start of this year, they announced that the first Polaris GPUs will be available in the middle of this year. With Raja Koduri set to present, it’s very likely that this will be the formal Polaris launch event. In previous generations AMD has held launch events for their desktop products a couple of weeks ahead of retail availability, so it’s likely to be the case here as well.

Given the timing, we should also get an update on AMD’s mobile GPU plans. The company has already announced the rebadged members of the new Radeon M400 series, so this would give AMD a chance to flesh out the lineup with Polaris-based parts.

Remote viewers can catch the webcast at AMD’s Computex website. We’ll be in attendance as well, live blogging the press conference with our own take on AMD’s latest announcements.

G.Skill Reveals 2x8GB DDR4-4266 C19 and 4x16GB DDR4-3466 C14 Kits

Until recently, enthusiasts who wanted to use the fastest DDR4 memory with their Skylake-S processors had to use 4 GB DIMMs based on 4 Gb chips, typically sold in pairs for 8 GB of total memory. This week G.Skill has introduced three new sets of Trident Z memory modules that come with either an extremely high clock rate or very aggressive timings.

The new G.Skill Trident Z memory modules are based on Samsung’s 8 Gb DDR4 ICs and are available in 8 GB and 16 GB versions. The 8 GB DRAM sticks are rated to run at 3200 (CL13 13-13-33), 3466 (CL14 14-14-34) or 4266 (CL19 23-23-43) MT/s data rates, whereas the 16 GB modules can work at 3200 and 3466 MT/s data rates at the aforementioned timings, with all kits running at the recommended DDR4 enthusiast setting of 1.35 volts. Like the rest of the Trident Z modules, the new sticks feature aluminum heat spreaders and custom black PCBs developed by G.Skill.

The new Trident Z modules are designed for Intel’s Skylake-S processors when used in Intel’s Z170-based motherboards which support XMP 2.0 technology (to automatically set their clock rates when they are installed into appropriate PCs). Using the 'performance index' metric from our memory reviews as a rough indication of general performance (rough in the sense that some workloads are frequency dependent, others are latency driven), the 3200 C13 modules come in at a PI of 246, the 3466 C14 modules have a PI of 248, and the 4266 C19 modules are at 225. Historically, a higher frequency is harder to validate for reliability than a lower CAS latency, and represents the main challenge when producing high-performance modules.
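For reference, the performance index used here is simply the data rate divided by the CAS latency, a crude figure of merit. A sketch (the helper name is ours):

```python
def performance_index(data_rate_mts, cas_latency):
    """Rough DRAM figure of merit: data rate (MT/s) / CAS latency, rounded."""
    return round(data_rate_mts / cas_latency)

for rate, cl in [(3200, 13), (3466, 14), (4266, 19)]:
    print(f"DDR4-{rate} C{cl}: PI = {performance_index(rate, cl)}")
```

This reproduces the PI values of 246, 248 and 225 quoted above.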

High-speed memory often needs to be validated with specific motherboards; so far, G.Skill has validated its DDR4-4266 modules featuring 8 Gb chips on the ASUS ROG Maximus VIII Impact motherboard, but we expect the list to expand over time. Meanwhile, the 8 GB and 16 GB DDR4-3200 and DDR4-3466 modules should work on many other motherboards as well. It is important to keep in mind that Intel’s HEDT platforms (Haswell-E) are more limited for extreme memory frequencies, which is why G.Skill has not officially validated the aforementioned modules on the Intel X99 platform.

Exact prices of the new Trident Z memory modules from G.Skill are unknown, but do not expect them to be cheap: DDR4-4266 modules at this time have only been announced by a few companies, and we believe G.Skill is the first to offer 8 GB modules. Moreover, DDR4-3200 and DDR4-3466 modules with aggressive timings like CL13 or CL14 are also pretty rare.

Availability of memory modules with high clock rates will depend on the availability and binning of chips capable of operating at appropriate frequencies. Typically it is up to the memory companies to find which ICs are capable of these speeds, and companies compete in bidding for certain batches that have high hit rates for fast memory. Nonetheless, if the share of Samsung’s 8 Gb DDR4 chips that can operate in DDR4-4266 mode or with aggressive timings is significant, we may see competing solutions from other companies in the coming weeks or months.

Price Check May 2016: The Intel Core i7-6700K Is Finally Available at Its MSRP

Nine months. This is how long it has taken the retail price of Intel’s Core i7-6700K processor to drop to the level recommended by Intel. Despite slow sales of PCs in general, it would seem that demand for Intel’s latest Skylake processors has so far been rather strong, or the chipmaker could not meet demand for many SKUs months after they were introduced. Right now, virtually all major stores in the U.S. sell Intel’s latest unlocked chips at their MSRPs. Meanwhile, Intel’s Core i7-5820K chip, which used to be cheaper than the Core i7-6700K for months, recently got more expensive.

Intel Core i7-6700K Finally Hits $350

Intel officially announced its most advanced quad-core desktop processor for mainstream enthusiasts, the Core i7-6700K (four cores with Hyper-Threading, 4.0 GHz/4.2 GHz, 8 MB cache, Intel HD Graphics 530, unlocked multiplier), in early August 2015. You can read our review here. The chip, which is positioned below the high-end desktop (HEDT) platforms, has always had a suggested retail price of $350. However, since we have been tracking the price in these short pieces, the 6700K has not only been above $350, but was actually more expensive than the Core i7-5820K, the entry-level HEDT part from Intel. The price of the Core i7-6700K peaked in December at $420-440, then dropped slightly to $412 in February, and only in April did it hit $350, about eight to nine months after its introduction.

According to CamelCamelCamel, a price-tracker that monitors Amazon and its partners, the Core i7-6700K has been available for around $350 for several weeks now. At press time, the retailer charged $349.69 for the processor. At the same time, PriceZombie, which monitors Newegg, reveals that Newegg dropped the price of Intel’s most advanced unlocked quad-core CPU for desktops to $349.99 in late April.

The Intel Core i7-6700K (BX80662I76700K) is currently available for around $350 from all major retailers in the U.S., including Amazon, B&H Photo, NCIXUS and Newegg, according to NowInStock. BestBuy lists the CPU for $409.99, but does not have it in stock at press time. While I have no idea if Intel is now shipping more Core i7-6700K processors than it did several months ago, it is evident that supply of the part now meets demand and retailers are no longer charging a premium for it (at least in the U.S.).

Intel Core i5-6600K Available Starting at $242

The popularity of the Intel Core i5-6600K (four cores, 3.50 GHz/3.90 GHz, 6 MB cache, Intel HD Graphics 530, unlocked multiplier) among enthusiasts is well deserved: in most consumer situations its performance is on par with that of the Core i7-6700K, which costs around $100 more. Unfortunately, just like its bigger i7 brother, the i5-6600K chip was scarcely available. Due to strong demand, the Core i5-6600K sold for $290-$330 last year, which is considerably more than its official MSRP of $243.

Right now, the Core i5-6600K (BX80662I56600K) is currently available from all major U.S. retailers for around $245 or just slightly lower. Amazon sells the CPU for $242.48, whereas Newegg charges $244.99.

Based on data from CamelCamelCamel and PriceZombie, the price of the Core i5-6600K has been relatively stable at around $240-$245 for several weeks now. If Intel does not cut supply, or demand for the chips explodes, the price will continue to fluctuate in the same range in the coming months.

Intel Core i7-5820K Back to $389

The Intel Core i7-5820K (six cores with Hyper-Threading, 3.30GHz/3.60 GHz, 15 MB cache, unlocked multiplier) is based on the previous-generation Haswell-E microarchitecture, but with six cores, a 3.30 GHz default clock-rate, unlocked multiplier and $396 price-point, it was a very interesting product from day one. Moreover, due to strong demand for microprocessors powered by the Skylake microarchitecture, the price of the Core i7-5820K dropped to $380 in February and $349.99 in late March. Unfortunately, good things do not seem to last forever.

Right now, the Core i7-5820K (BX80648I75820K) costs $382.99 at Amazon and $389.99 at Newegg, which means that the chip got around 10% more expensive in under a couple of months. The reasons for the increase are unclear. Perhaps Intel started to gradually reduce shipments of its Haswell-E processors ahead of the Broadwell-E introduction later this year, or maybe demand for the smallest HEDT chip just got higher for some reason.

To use the Core i7-5820K you will need an X99-based motherboard, which on average costs more than an Intel 100-series-based motherboard for Skylake chips, an advanced cooler to handle the 140 W CPU, and four DDR4 memory modules to maximize the available memory bandwidth. Such motherboards will support Broadwell-E processors, and many of them feature modern functionality like USB 3.1 Type-C, M.2, Thunderbolt 3 and so on. Moreover, makers of motherboards are preparing a new wave of Intel X99-based offerings (for example, ASUS has already announced them) with further refinements. Right now, the new generation of Intel X99-based platforms is not available, but it makes great sense to wait for such motherboards to arrive and only then buy an LGA2011-v3 processor.

In the meantime, we can only conclude that the HEDT Core i7-5820K chip is back where it should be: above the Core i7-6700K designed for mainstream enthusiasts. Perhaps, the 5820K will get cheaper in the coming weeks again, or something better comes up, but right now the Core i7-6700K is a more affordable CPU.

Intel’s 14 nm Yields Are Up, Costs Are Down

As our long-time readers are aware, Intel’s 14 nm process technology was a tough nut to crack for the company. Intel had to delay volume production of CPUs using this fabrication process by a year and then start with smaller chips in order to maximize yields and minimize per-unit costs. Due to higher costs and some other reasons, the ramp of 14 nm and Skylake processors took some time, which is one of the reasons why some of the socketed chips were in short supply. Nonetheless, things seem to be getting better.

Intel now uses four fabs to produce its CPUs using 14 nm process technology: the D1D, D1C and D1X fabs in Hillsboro, Oregon, as well as the Fab 24 manufacturing facility in Leixlip, Ireland. Moreover, yields of 14 nm chips are getting better. Back in April, the company said that the Client Computing Group managed to achieve an operating profit of $1.9 billion in Q1 FY2016, or 34% more than in the same period a year before, as a result of “lower 14 nm unit costs on notebooks”.

“First quarter gross margin of 62.7% was approximately a point higher than our expectations, driven by lower 14 nm costs,” said Stacy Smith, chief financial officer of Intel, during a conference call with investors and financial analysts.

Besides this, starting late last year Intel has been producing its multi-core Xeon E5 v4 chips using the 14 nm process in volume (and even shipping them for revenue, according to Diane Bryant). Keeping in mind that multi-core chips have large dies, which are harder to produce and prone to more defects, production of such dies means that the process technology is mature enough and yields are under control (although some premium partners will want the first chips off the line regardless of cost). Indirectly, this may mean that Intel can now also produce more unlocked desktop CPUs with high frequencies, which is why such products are now widely available and retailers do not charge extra for them.

Gaming on the Rise, So Is Demand for Better Chips

Despite the fact that the PC market is down in absolute volume, the market for gaming PCs is actually expanding. Intel’s Core i7-6700K and Core i5-6600K processors are primarily used in gaming systems, and therefore demand for such CPUs is strong.

“Our gaming PCs are growing at double-digit rates year-over-year,” said Brian Krzanich, chief executive officer of Intel.

Mr. Krzanich is not the only one to say that gaming PCs are on the rise and their sales do not suffer as a result of any global economic turmoil or adjustment in how users perceive the devices around them. Jen-Hsun Huang, the head of NVIDIA, recently said that people buy expensive gaming PCs regardless of any economic slowdowns, and NVIDIA's recent financial announcements, along with the enterprise products based on gaming technology, show this.

“Gaming is rather macro-insensitive for some reason. People enjoy gaming,” said Mr. Huang during a conference call with investors and analysts. “Whether the economy is good or not, whether the oil price is high or not, people seem to enjoy gaming. […] Gaming is not something that people do once a month, like going out to a movie theater or something like that. People game every day, and the gamers that use our products are gaming every day.”

The CEO of NVIDIA was naturally talking about the success of the GeForce GTX lineup in the Q1 of the company’s fiscal 2017, but he perfectly indicated the behavior of those who buy gaming PCs with high-performance graphics cards and processors in general. These people spend regardless of global economic trends and they tend to get whatever is needed to hit their performance targets (i.e., framerate in their favorite games). While this means that companies like Intel, NVIDIA, AMD and others can enjoy great sales of their high-end products for gamers, it also means that they can predict demand for such parts and ensure that they are not in short supply. NVIDIA has been doing a very good job in the past few years in meeting demand for its GeForce GTX lineup, even though most high-end gamers want the next generation product yesterday. As for Intel, it either underestimated the popularity of its Core i7-6700K and Core i5-6600K CPUs early in their lifecycle, or simply could not produce a sufficient amount of chips it needed to satisfy its customers (Intel has always stated they are running at expected volume in our previous Price Check pieces).

Anyway, right now it looks like the unlocked Skylake-S processors are selling at their MSRPs and all customers can get the chip they want. Keeping in mind that Intel’s next-generation CPUs, code-named Kaby Lake, are set to be made using the now well-proven 14 nm process technology, we wonder whether history will repeat itself and the upcoming unlocked processors will again be in short supply. Nonetheless, we will keep monitoring the availability of unlocked CPUs.

Relevant Reading

Skylake-K Review: Core i7-6700K and Core i5-6600K - CPU Review
Comparison between the i7-6700K and i7-2600K in Bench - CPU Comparison
Overclocking Performance Mini-Test to 4.8 GHz - Overclocking
Skylake Architecture Analysis - Architecture
Non-K BCLK Overclocking is Being Removed - Overclocking Update
An Overclockable Core i3: The Intel Core i3-6100TE Review - Analysis of an Overclocked Core i3 CPU
