Showing posts with label ARM.

Wednesday, April 3, 2013

Applied Micro's cloud chip is an ARM-based, switch-killing machine

Applied Micro Circuits, a chip firm that designs silicon parts for the computing and networking world, has spent the last three years making a big bet on the cloud computing market and the ARM architecture. The results began shipping last week, and the product essentially takes networking and computing and crams it all onto one system on a chip.

Dubbed the X-Gene server on a chip, the product has been touted by Applied as the first 64-bit-capable ARM-based server in existence, the ideal part for webscale users (check out the pic of Facebook's Frank Frankovsky holding one up) and the future of Applied Micro. It's the first chip to put a software-defined networking (SDN) controller on the die, offering network services such as load balancing and service-level agreement enforcement on the chip itself. It's like shoving the networking and computing vision of the Cisco Unified Computing System onto a single chip.

This is a big deal. Although the first generation won't have enough bandwidth to eliminate the need for a switch at the top of a rack, the following generation will.

Paramesh Gopi, president and CEO of Applied Micro, said these new chips have made it past the prototype stage (the board in the picture uses an FPGA instead of production silicon) and are now in the hands of several customers, including Dell and Red Hat. Gopi expects physical servers containing the X-Gene to hit the market by the end of this year.

Gopi's big bet

The chip is manufactured at 40 nanometers and contains eight 2.4GHz ARM cores that Applied has designed, four smaller ARM Cortex-A5 cores running the SDN controller software (the pink bit on the block diagram below), four 10-gigabit Ethernet ports, and various ports that can support more Ethernet, SSDs, accelerator cards such as those from Fusion-io, or SATA drives. In short, this is a chip that combines networking and computing in one package.

When asked about the power consumption of the chip, Gopi said it will run at 50 percent of the total cost of ownership of a comparable x86 product, but he wouldn't discuss actual power consumption.

"We'll be able to run your LAMP stack and SQL jobs on Xeon-class ARM cores, and the routing protocols and such will be running on the Atom-class ARMs," Gopi said. "It's the fundamentals of a rack on a single chip."

X-Gene block diagram

Building this chip has taken four years. It required Gopi to visit ARM at its U.K. headquarters to convince the company to give him an architecture license to build a chip for servers. In an interview with me at the Open Compute Summit in January, Gopi explained that he saw how the flexibility of the architecture ARM offered could become an asset for webscale computing, so he embarked on turning Applied Micro, a public company with a few hundred million in revenue, into a startup.

Like others, such as Barry Evans of Calxeda or Andrew Feldman of SeaMicro, he saw that power issues were raising the cost of operating data centers - and cutting into the bottom line at web businesses - and he thought he had a solution. His solution was to get an architectural license from ARM so he could make a 64-bit-capable chip ahead of ARM's plans to introduce that powerful a core. ARM introduced that core last year, and vendors of ARM-based server chips such as AMD and Calxeda expect to have 64-bit-capable chips next year. But Applied is shipping its chips today.

"We'll end this wimpy core vs. brawny core debate once and for all," Gopi said.

The new hardware mindset

Applied Micro CEO Paramesh Gopi.


Gopi has taken advantage of several different trends that are finally coming to fruition. The first trend is the use of the ARM core - ubiquitous in cell phones and tablets - for the enterprise and cloud computing market. But he's also taking advantage of a more subtle shift happening in the chip world as it pertains to the data center - namely the opening up of the ecosystem.

The mobile industry has relied upon the common ARM architecture to build a wide variety of chips that give each vendor a slightly different set of features. Both Nvidia and Qualcomm start with ARM cores (hell, even Apple has an ARM architectural license) to build their application processors. This lowers the cost of designing chips, because engineers can start from a higher level when solving problems.

And the modularity of the ARM cores combined with an architecture license also means firms can customize their designs for a certain market without spending a huge amount of time or dollars. Gopi will actually address some of this at our Structure event June 19 and 20, in a presentation on designing hardware at the speed of software.

For Applied, this dynamic plays out in the existence of a new type of chip for the data center, but also in the fact that in nine to 12 months Applied plans to test the second-generation X-Gene chip, which will support 100-gigabit Ethernet and obviate the need for a top-of-rack switch. Ironically, this architecture probably won't be a welcome development for Applied's existing networking clients like Cisco and Juniper.

But it's clearly the direction that large webscale customers want to go. And the second-generation architecture matters for the first-generation X-Gene products too, because without it, Applied may not have a chance with technically savvy, forward-looking customers who need not just a single interesting product but a real understanding of the roadmap before they commit to a new architecture.

So even as Applied ships these first products to customers for use in devices that hit the market at the end of this year, it's already developing the next-generation, 28-nanometer versions of the heavy-duty ARM cores and 100-gigabit-capable networking, while prepping for later versions that may include photonics and other elements that data center customers are already discussing as tomorrow's technology.

It took a bold vision - and that trip to ARM - for Gopi to get Applied Micro to the table as these discussions about the next generation data center are playing out. But with this design, it has earned a seat. Now all it has to do is earn the business.



http://gigaom.com/2013/04/03/applied-micro-cloud-chip



Wednesday, February 20, 2013

First ARM-based servers in production support Baidu's cloud storage

Chinese search engine giant Baidu is using ARM-based servers from Marvell, making it the first company to depend on servers built around the cell-phone chip architecture in a production environment. Baidu is using the new ARM servers in its cloud storage application, Baidu Pan.

ARM, which licenses its IP to a variety of chip makers, stated its intention to enter the data center market back in 2010, as worries about energy efficiency increased and the needs of webscale computing customers changed. While an individual ARM chip is less powerful than its Intel counterpart, a cluster of lower-power ARM chips is more efficient on a performance-per-watt basis, and some workloads don't even need the performance characteristics of a big Intel core.

The combination of these two trends has led a plethora of vendors, from big names like Marvell and AMD to startups such as Calxeda, to license ARM's cores with an eye toward making servers. Holding ARM back so far has been the delay in building out 64-bit-capable cores (they are expected later this year) as well as a lack of enterprise software running on the ARM platform.

But given the economics of these so-called wimpy cores and the limits of using ARM cores in the enterprise server market today, the use of ARM-based servers in the storage arena is not surprising. Storage usage scenarios are perfect in many ways because they don't need a lot of raw performance, nor do they require 64-bit capable cores.

Thus, Baidu using ARM for storage makes sense. It's also an area where Calxeda expects to see its first production deployments sometime this year, according to a conversation I had last December with Karl Freund, the VP of marketing at Calxeda. As for the Baidu deployment, it uses the quad-core Armada CPU, Marvell's storage controller and a 10Gb Ethernet switch, all integrated on a single system on a chip.

Marvell's release says the chip firm customized the ARM servers specifically for Baidu's cloud storage requirements, taking the concept of server customization common in webscale deployments down to the chip level. Marvell says the platform is designed to increase the amount of storage in a conventional 2U chassis to up to 96 TB and to lower the total cost of ownership by 25 percent compared with previous x86-based server solutions. The end result should cut the power Baidu uses in its data center by half, according to the release.
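Those claims reduce to simple rack-level arithmetic. A back-of-envelope sketch, using a hypothetical x86 baseline since the release doesn't publish one:

    # Rough sketch of the figures in Marvell's release. The x86 baseline
    # numbers are assumptions for illustration only.
    TB_PER_2U_CHASSIS = 96        # from the release
    CHASSIS_PER_RACK = 20         # assumption: ~40U of a 42U rack holds storage
    X86_RACK_POWER_KW = 10.0      # assumed baseline
    X86_RACK_TCO = 100_000        # assumed baseline, arbitrary units

    rack_tb = TB_PER_2U_CHASSIS * CHASSIS_PER_RACK
    arm_power_kw = X86_RACK_POWER_KW * 0.5   # "cut ... power ... by half"
    arm_tco = X86_RACK_TCO * 0.75            # "lower the TCO by 25 percent"

    print(f"{rack_tb} TB per rack at ~{arm_power_kw:.1f} kW, TCO ~{arm_tco:,.0f}")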



http://gigaom.com/2013/02/20/first-arm-based-servers-in-production-sup



Monday, February 11, 2013

Following Raspberry Pi, the $89 Odroid U2 continues small, cheap computing movement

Computing devices are getting cheaper by the day. I'm not talking about the phones, tablets, laptops and desktops you'll find at your local electronics retailer. Think of the Raspberry Pi, the small $25 bare-bones computer that debuted a year ago. Now a higher-powered computer announced in November, the Odroid U2, is available and will set you back $89. Although Odroid is aimed at developers, anyone with a little technical know-how can use it for ...

ODroid X2

http://gigaom.com/2013/02/11/following-raspberry-pi-the-89-odroid-u2-c



Thursday, December 27, 2012

Calxeda finds a new market in storage

Calxeda, the Austin, Texas-based startup that is building highly dense, low-power ARM-based servers, has found a new market in the storage world. During a visit to the company's headquarters last week, company executives shared that in addition to web hosting and big data applications, it sees a near-term opportunity in storage, and that it has fielded more than 20 requests for proposals for systems using ARM-based processors.

Karl Freund, the VP of marketing for Calxeda, says the company has shipped about 3,000 nodes and 130 systems, although none are deployed in production environments yet. He expects the first production deployments to occur at the end of the second quarter of 2013. But most of the conversation was about how ARM-based systems could be used today in the storage market: not just for cold storage such as Amazon's Glacier or Facebook's photo storage effort, but also for big scale-out storage systems and enterprise-class storage appliances. Named customers evaluating the systems include Scale.io, Gluster and Inktank, the storage startup backed by Mark Shuttleworth of Ubuntu fame that is commercializing Ceph.

There are more, notes Freund (pictured), who says that when Calxeda servers make it into production environments, they will likely be deployed first in a storage capacity, since storage customers don't care whether the chips are 64-bit compatible. For now, ARM-based systems can only address a limited amount of memory because ARM has only a 32-bit-capable core design. Next year ARM will have a 64-bit-capable design, and systems will be built around it in 2014 (maybe even late 2013). Calxeda plans its 64-bit-capable SoC for 2014.
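The memory ceiling behind that 64-bit caveat is simple arithmetic, as the short sketch below shows: a 32-bit core gives each process at most 2^32 bytes of address space.

    # The 32-bit ceiling in concrete terms. Extensions such as LPAE raise the
    # physical memory a 32-bit chip can see, but not what one process can address.
    print(f"32-bit: {2**32 / 2**30:.0f} GiB of address space")   # 4 GiB
    print(f"64-bit: {2**64 / 2**30:.2e} GiB of address space")   # ~1.7e10 GiB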

But Calxeda isn't waiting, and in storage it's also not focusing on power consumption, the initial draw for ARM-based servers in the scale-out data center. In the storage world, where spinning hard drives tend to suck huge quantities of electricity, adding a low-power chip has a negligible effect on the consumption of the overall system. Instead, Calxeda boasts that popping in more of its systems on a chip (SoCs) is both cheaper and makes for faster information transfer and retrieval.

Its tests show roughly a 4x improvement in IOPS for a rack of Calxeda SoCs versus x86-based systems. Adding Calxeda's SoCs also cuts complexity, because the processing and networking components are integrated on the SoC, and the terabit-plus fabric between cores offers more capacity for traffic inside the system - the so-called east-west networking traffic.
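For context, IOPS simply counts how many small I/O operations a system completes per second. The toy, single-threaded measurement below is not Calxeda's benchmark, just an illustration of the metric; it will mostly exercise the page cache unless pointed at a raw device with direct I/O:

    # Toy IOPS measurement: random 4K reads against an existing file.
    import os, random, time

    PATH = "/tmp/testfile"     # assumed test file, create it beforehand
    BLOCK, DURATION = 4096, 5.0

    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    ops, start = 0, time.time()
    while time.time() - start < DURATION:
        offset = random.randrange(0, max(size - BLOCK, 1)) // BLOCK * BLOCK
        os.pread(fd, BLOCK, offset)   # read one 4K block at a random offset
        ops += 1
    os.close(fd)
    print(f"{ops / DURATION:.0f} read IOPS (cached, single-threaded)")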

As the market for scale-out computing, storage and networking changes the demands made on IT equipment, Calxeda and others are seeing an opportunity that may have begun in servers and the cloud computing environment but certainly isn't stopping there. No wonder Intel is trying to catch up with chips of its own. So far, its recently announced Atom-based chips haven't made the cut for most customers I've spoken with (the lack of integration between the networking and processing hardware is a problem), but in 2014 it will have a new, integrated SoC as well. Then the competition will really get interesting.

http://gigaom.com/cloud/calxeda-finds-a-new-market-in-storage



Tuesday, October 30, 2012

Meet ARM’s two newest cores for faster phones and greener servers

ARM has created a new family of processor cores designed with both user demands for always-on computing and the data center's need for more efficient computing in mind. The new A50 family of cores will be available to chipmakers at the end of next year, and ARM expects devices containing those cores to hit the market in 2014 and 2015.

Unlike Intel or even Qualcomm, ARM doesn't build or design chips, but instead licenses its technology to other chipmakers, who take the ARM IP and build chips or systems on a chip around those core designs. The new cores come in a "big" version that offers 64-bit processing and more powerful cores, and a 64-bit "little" version (that is also 32-bit compatible) aimed at the mobile market. The big A57 core will deliver three times the performance of today's mobile phone chips at the same power consumption, according to ARM, while the little A53 core will deliver four times the power efficiency of today's phone platforms while still offering better performance than they do.
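Taken at face value, those multipliers work out as in the sketch below, which uses an arbitrary baseline rather than real measurements:

    # Back-of-envelope reading of ARM's claims, with an arbitrary baseline.
    baseline_perf = 1.0    # today's phone platform, units of work per second
    baseline_power = 1.0   # watts, arbitrary

    # "Big" A57: three times the performance at the same power.
    a57_energy_per_task = baseline_power / (3 * baseline_perf)
    # "Little" A53: four times the power efficiency (work per watt).
    a53_energy_per_task = 1 / (4 * (baseline_perf / baseline_power))

    print(f"A57: ~{a57_energy_per_task:.2f}x baseline energy per task")  # ~0.33x
    print(f"A53: ~{a53_energy_per_task:.2f}x baseline energy per task")  # 0.25x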

Computing is no longer a desk job or sold by the server

Our computing habits have changed in the last five years. Where we once may have sat at a desk to complete our computing tasks, we now wake up and roll over in bed to check email on our phones, before maybe moving to a tablet, a connected car, and finally a laptop or desktop at work. As we hop from machine to machine we expect a similar and continuous experience, which we get thanks to web services that most of us use in the browser or via clients.

To meet that demand, web companies are deploying millions of servers in data centers the size of warehouses. At that scale things change — not just the focus on power consumption, but also the ability to use hardware tuned to a specific workload. For example, Facebook isn’t one app, it’s a combination of more than 20 different services that are tied together with software. But because Facebook is so huge, those services can require a lot of computing resources. Facebook doesn’t buy servers, it buys racks, and at that level of hardware consumption, buying a rack using ARM-based servers to cut power costs may slightly increase the management load on operations team members, but it also cuts the energy requirement.

The combination of these shifts on the user side and the web services side is why ARM sees a chance to get into the data center. It's also why players from AMD to Dell and even Intel are embracing heterogeneous compute. For the last 20 years, much like Ford's original Model T that came only in black, you could have any instruction set you wanted as long as it was x86. But with the rise of webscale, cloud and even a broader use of high performance computing, companies wanted variety.

ARM’s response is modular building blocks

So Intel is working on its MIC architecture, Nvidia is introducing graphics processors in servers and AMD is embracing ARM, x86 and GPUs. Startups ranging from Tilera to Adapteva are also trying to bring new architectures to the market. ARM's approach with its latest architecture (ARMv8) is to emphasize power efficiency even at the expense of performance. It has always done this in the mobile market, where poor battery life can doom a device to the scrap heap, even if the graphics are vivid and the applications are speedy.

Companies that license these new cores can mix big cores with little cores or build systems containing big cores and ARM graphics cores, or any number of configurations to meet the needs of the device and market they are building for.

The two new cores will also eventually bring 64-bit processing into the mobile device arena, which Noel Hurley, VP of marketing and strategy for ARM's processor division, said is important because people are creating more content on mobile devices (it will also give ARM a credible core for the laptop market). All of this depends on software that can run using the ARM instruction set, but on both the consumer side and in the data center market ARM is building out ecosystem partners.

The new cores should appear in chips built using the 28- and 20-nanometer process nodes, and will scale down to 14 nanometers and the newer chipmaking processes that build up instead of out. As the process node shrinks and more transistors are crammed onto the chip, expect additional performance and energy gains.

For those still looking for gigahertz performance numbers, Hurley said the new A50 family will deliver clock speeds ranging from 1.3GHz to 3GHz, depending on how ARM licensees tweak their designs. At that point I wonder if we can still get away with calling a 3GHz ARM-based design a wimpy core. However, Ian Ferguson, who heads ARM's server ambitions (and who doesn't use the phrase wimpy core), noted that ARM isn't expressing its server goals in terms of the traditional enterprise.

“What we’re not saying is that we’re going to blaze on into traditional enterprise infrastructure … that is not the space we’re planning to attack,” Ferguson said. “We want places where the server is the business.” And as we’ve stated before, that space is where much of the growth in servers will come from in the coming years. Seeing this, ARM has developed a family of processor cores that can be configured to meet the needs of all-day computing from the user side and the server side.

http://gigaom.com/2012/10/30/meet-arms-two-newest-cores-for-faster-phones-and-greener-servers/




AMD will challenge Intel with ARM-based server chips. In 2014.

AMD will license ARM chip technology as part of a strategy that will bring the cell-phone chip architecture into its server processors. The company on Monday announced that it will design 64-bit ARM technology-based processors, in addition to its x86 processors, for multiple markets, hoping to cater to the needs of data center and cloud-centric companies looking for low-power computing.

The move has been debated within AMD for some time and represents AMD's embrace of a heterogeneous computing strategy. The news also shows how AMD is distancing itself from its fellow x86 rival, Intel, and it could prove to be AMD's best chance to continue on as a player in the chip market.

At a press conference on Monday, AMD CEO Rory Read said, "Modern cloud is the killer app and it is bringing about the fastest growth across the industry." He is convinced that ARM and AMD "can change the server and data center landscape."

In a press statement accompanying the news conference, Read added:

"Through our collaboration with ARM, we are building on AMD's rich IP portfolio, including our deep 64-bit processor knowledge and industry-leading AMD SeaMicro Freedom supercompute fabric, to offer the most flexible and complete processing solutions for the modern data center."

ARM, through its various partnerships, has been slowly gnawing at Intel's dominance of the chip business, thanks in part to the booming demand for smartphones and other such devices. ARM has also been lusting after a piece of the server business, which would give Intel more headaches, and AMD is a perfect partner for such an assault. The chips are likely to be made available in 2014, according to AMD executives.

We've been anticipating this move for some time, ever since AMD purchased SeaMicro, a startup that builds ultra-dense, low-power servers for cloud computing using Intel's low-power Atom chips. SeaMicro uses x86-based chips for its boxes, but its technology enables it to use any type of processor, including ARM-based cores.

The transition to alternative forms of computing in the data center has come about in some market segments, because certain jobs need less computing horsepower to complete their tasks and data center operators are looking for the most energy-efficient processor for the job. Just like you might not take your 12-cylinder Lamborghini to the grocery store to pick up a gallon of milk, the data center guys are increasingly seeing high-end general purpose CPUs as appropriate for some tasks, but overkill for others.

ARM has seen the opportunity for so-called wimpy cores and has invested in Calxeda, a systems maker that is building a new type of server using ARM-based SoCs. Dell, HP and others are also getting in on the ARM server market with new products using chips from Calxeda, Marvell, Applied Micro and perhaps even Cavium. Now that AMD has jumped on the bandwagon, and with ARM servers in production later this year, getting ARM into the data center is looking more and more likely. Your move, Intel.

Additional reporting by Om Malik.


http://gigaom.com/cloud/amd-will-challenge-intel-with-arm-based-server-chips-in-2014/

