cross pond high tech
light views on high tech in both Europe and US
Scooped by Philippe J DEWOST

AWS Graviton2: What it means for Arm in the data center, cloud, enterprise, AWS


With Graviton2, AWS is making it clear that it is serious about Arm processors in the data center and about moving cloud infrastructure innovation at its own pace.

Amazon Web Services launched its Graviton2 processors, which promise up to 40% better performance than comparable x86-based instances for 20% less. Graviton2, based on the Arm architecture, may have a big impact on cloud workloads, AWS' cost structure, and Arm in the data center.

Graviton2 was unveiled at AWS' re:Invent 2019 conference, and ZDNet was briefed by the EC2 team in an exclusive. Unlike the Graviton effort and A1 instances unveiled a year ago, Graviton2 ups the ante for processor makers such as Intel and AMD.
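Taken together, those headline figures imply an even larger price-performance gain than either number alone suggests. A quick back-of-envelope check, using only the article's "up to 40% better performance for 20% less" claim (actual gains will vary by workload):

```python
# Back-of-envelope: relative price-performance of Graviton2 vs. comparable
# x86 instances, using the article's "up to" headline figures.
perf_ratio = 1.40   # up to 40% better performance
price_ratio = 0.80  # for 20% lower price

# Price-performance = performance delivered per unit of cost.
gain = perf_ratio / price_ratio

print(f"Relative price-performance: {gain:.2f}x "
      f"({(gain - 1) * 100:.0f}% better)")  # 1.75x, i.e. 75% better
```

Since both figures are "up to" maximums, 75% is a best-case ceiling, not a typical result.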

"We're going big for our customers and our internal workloads," said Raj Pai, vice president of AWS EC2. AWS is launching new Arm-based versions of Amazon EC2 M, R, and C instance families.

Indeed, Graviton2, which is optimized for cloud-native applications, is based on 64-bit Arm Neoverse cores and a custom system on a chip designed by AWS. Graviton2 boasts 2x faster floating-point performance per core for scientific and high-performance workloads, support for up to 64 virtual CPUs, 25 Gbps of networking, and 18 Gbps of EBS bandwidth.
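For developers, opting into these instances is just a matter of selecting an Arm instance type at launch. A sketch with the AWS CLI, where the instance type is from the M6g family named above, and the AMI ID and key name are placeholders (any AMI used with Graviton2 must be built for the arm64 architecture):

```shell
# Launch a Graviton2-based EC2 instance from the new Arm M6g family.
# ami-0123456789abcdef0 and my-key-pair are placeholders; substitute
# an arm64 AMI and a key pair from your own account.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m6g.large \
    --count 1 \
    --key-name my-key-pair
```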

AWS CEO Andy Jassy said the new Graviton2 instances illustrate the benefits of designing your own chips. "We decided that we were going to design chips to give you more capabilities. While lots of companies have been working with x86 for a long time, we wanted to push the price to performance ratio for you," said Jassy during his keynote. Jassy added that Intel and AMD remain key partners to AWS.

Philippe J DEWOST's insight:

Amazon Web Services is so serious about chip design that it has updated its Graviton Arm processor line and launched a dedicated inference chip.


Intel's Pohoiki Beach is a neuromorphic computer capable of simulating 8 million neurons

At the Defense Advanced Research Projects Agency's (DARPA) Electronics Resurgence Initiative 2019 summit in Detroit, Michigan, Intel unveiled a system codenamed "Pohoiki Beach," a 64-chip neuromorphic computer capable of simulating 8 million neurons in total.

Neuromorphic engineering, also known as neuromorphic computing, describes the use of systems containing electronic analog circuits to mimic the neuro-biological architectures present in the nervous system. Scientists at MIT, Purdue, Stanford, IBM, HP, and elsewhere have pioneered pieces of full-stack systems, but arguably few have come closer than Intel to one of the longstanding goals of neuromorphic research: a supercomputer a thousand times more powerful than any today.

Intel Labs managing director Rich Uhlig said Pohoiki Beach will be made available to research partners to "advance the field" and scale up AI algorithms like sparse coding and path planning. "We are impressed with the early results demonstrated as we scale Loihi to create more powerful neuromorphic systems. Pohoiki Beach will now be available to more than 60 ecosystem partners, who will use this specialized system to solve complex, compute-intensive problems," said Uhlig.

Pohoiki Beach packs 64 of Intel's 128-core, 14-nanometer Loihi neuromorphic chips, first detailed at the 2018 Neuro Inspired Computational Elements (NICE) workshop in Oregon. Each chip has a 60-square-millimeter die and contains over 2 billion transistors, 130,000 artificial neurons, and 130 million synapses, in addition to three managing Lakemont cores for task orchestration.
Uniquely, Loihi features a programmable microcode learning engine for on-chip training of asynchronous spiking neural networks (SNNs): AI models that incorporate time into their operating model, such that components of the model do not process input data simultaneously. This enables adaptive, self-modifying, event-driven, and fine-grained parallel computations with high efficiency.
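To make the "incorporate time" idea concrete, here is a minimal sketch of the leaky integrate-and-fire neuron model that SNNs are commonly built on. This is a generic textbook illustration, not Loihi's actual neuron implementation, and the leak and threshold values are arbitrary:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a generic sketch of the
# spiking model SNNs build on -- not Loihi's actual microcode.
def lif_neuron(input_current, leak=0.9, threshold=1.0):
    """Integrate input over discrete time steps; emit a spike (1) when the
    membrane potential crosses threshold, then reset the potential."""
    potential = 0.0
    spikes = []
    for current in input_current:        # time enters the model step by step
        potential = potential * leak + current  # decay, then integrate
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0              # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.5, 0.5, 0.5, 0.0, 1.2]))  # [0, 0, 1, 0, 1]
```

Note how the same total input produces different spike trains depending on *when* it arrives, which is exactly the temporal behavior the text describes.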
Philippe J DEWOST's insight:
Intel inside (your brain)

Meet the World's Largest Chip: Inside the Cerebras CS-1 System


Cerebras Systems announced its new CS-1 system here at Supercomputing 2019. The company unveiled its Wafer Scale Engine (WSE) at Hot Chips earlier this year, and the chip is almost as impressive as it is unbelievable: the world's largest chip, weighing in at 400,000 cores, 1.2 trillion transistors, 46,225 square millimeters of silicon, and 18 GB of on-chip memory, all on a single die as large as an entire wafer. Add in that the chip draws 15 kW of power and features 9 PB/s of memory bandwidth, and you've got a recipe for what is unquestionably the world's fastest AI processor.

Developing the chip was an incredibly complex task, but feeding all that compute enough power, not to mention enough cooling capacity, in a system practical enough for mass deployment is another matter entirely. Cerebras has pulled off that feat: today the company unveiled the system and announced that Argonne National Laboratory has already adopted it. The company also provided us with detailed schematics of the system's internals.
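Numbers that large are easier to grasp per core. A quick back-of-envelope breakdown of the figures quoted above (assuming the 18 GB of on-chip memory means binary gigabytes; the exact convention is not stated):

```python
# Per-core back-of-envelope figures from the CS-1 numbers quoted above.
cores = 400_000
transistors = 1.2e12      # 1.2 trillion transistors
mem_bytes = 18 * 2**30    # 18 GB on-chip memory (assuming GiB)
mem_bw = 9e15             # 9 PB/s aggregate memory bandwidth
power_w = 15_000          # 15 kW total power draw

print(f"Transistors per core:      {transistors / cores:,.0f}")        # 3,000,000
print(f"On-chip memory per core:   {mem_bytes / cores / 1024:.1f} KiB")
print(f"Memory bandwidth per core: {mem_bw / cores / 1e9:.1f} GB/s")   # 22.5 GB/s
print(f"Power per core:            {power_w / cores * 1000:.1f} mW")   # 37.5 mW
```

Roughly 22.5 GB/s of memory bandwidth per core is the striking figure: each tiny core sees bandwidth comparable to an entire conventional memory channel.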

Philippe J DEWOST's insight:

Computing power is one of the levers of power itself, and Europe still has not understood this. Kalray alone will not be enough.

#HardwareIsNotDead


Amazon Web Services introduces its own custom-designed Arm server processor, promises 45 percent lower costs for some workloads

After years of waiting for someone to design an Arm server processor that could work at scale on the cloud, Amazon Web Services just went ahead and designed its own. Vice president of infrastructure Peter DeSantis introduced the AWS Graviton Processor Monday night, adding a third chip option for cloud customers alongside instances that use processors from Intel and AMD. The company did not provide a lot of details about the processor itself, but DeSantis said that it was designed for scale-out workloads that benefit from a lot of servers chipping away at a problem.
Philippe J DEWOST's insight:
If you can’t find it, just design it and build it! Hardware Is Not Dead