cross pond high tech
159.9K views | +7 today
light views on high tech in both Europe and US
Scooped by Philippe J DEWOST

AWS Graviton2: What it means for Arm in the data center, cloud, enterprise, AWS


With Graviton2, AWS is making it clear that it is serious about Arm processors in the data center as well as moving cloud infrastructure innovation at its pace.

 

Amazon Web Services launched its Graviton2 processors, which promise up to 40% better performance than comparable x86-based instances at 20% lower cost. Graviton2, based on the Arm architecture, may have a big impact on cloud workloads, AWS' cost structure, and Arm in the data center.
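Taken at face value, those two headline numbers compound: 40% more performance at 20% lower cost works out to roughly 75% better performance per dollar. A quick sanity check on the article's figures (my arithmetic, not an AWS claim):

```python
# Price-performance gain implied by the Graviton2 claims quoted above:
# up to 40% better performance than comparable x86 instances, at 20% lower cost.
perf_ratio = 1.40   # Graviton2 performance relative to the x86 baseline
cost_ratio = 0.80   # Graviton2 cost relative to the x86 baseline

price_performance_gain = perf_ratio / cost_ratio  # performance per dollar
print(f"{(price_performance_gain - 1) * 100:.0f}% better performance per dollar")
# prints: 75% better performance per dollar
```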

 

Graviton2 was unveiled at AWS' re:Invent 2019 conference and ZDNet was debriefed by the EC2 team in an exclusive. Unlike the Graviton effort and A1 instances unveiled a year ago, Graviton2 ups the ante for processor makers such as Intel and AMD. With Graviton2, AWS is making it clear that it is serious about Arm processors in the data center as well as moving cloud infrastructure innovation at its pace.

"We're going big for our customers and our internal workloads," said Raj Pai, vice president of AWS EC2. AWS is launching new Arm-based versions of Amazon EC2 M, R, and C instance families.

Indeed, Graviton2, which is optimized for cloud-native applications, is based on 64-bit Arm Neoverse cores and a custom system on a chip designed by AWS. Graviton2 boasts 2x faster floating-point performance per core for scientific and high-performance workloads, support for up to 64 virtual CPUs, 25Gbps of networking, and 18Gbps of EBS Bandwidth.

AWS CEO Andy Jassy said the new Graviton2 instances illustrate the benefits of designing your own chips. "We decided that we were going to design chips to give you more capabilities. While lots of companies have been working with x86 for a long time, we wanted to push the price to performance ratio for you," said Jassy during his keynote. Jassy added that Intel and AMD remain key partners to AWS.

Philippe J DEWOST's insight:

Amazon Web Services is so serious about chip design that it has updated its Graviton Arm processor line while also launching a dedicated inference chip.

No comment yet.
Scooped by Philippe J DEWOST

Tesla Autonomy Day almost Full Report


Cleantechnica has compiled the event video plus tons of liveblogging highlights from the event: there is a trove of insight about where Tesla is going and how they plan to get there.

They have designed their own FSD (Full Self-Driving) system that doesn't need LIDAR and "learns" from shadow-mode driving across the whole deployed Tesla fleet. This is how they will be able to deploy a Robotaxi mode with just a software update.

 

For instance,

“Early testing of new FSD hardware shows a 21× improvement in image processing capability with fully redundant computing capability.

“This is all done at a modest cost while delivering a fully redundant computing platform to all of Tesla’s vehicles currently in production.”

General summary from Kyle: “Our shit is really, really fast and we built it better than anyone else.”

Elon notes that Tesla finished this design 1½–2 years ago and then started on the next system design. They are not talking about the next design now, but they’re about halfway through it.

Some additional technical notes from Chanan Bos:

“An enthusiast Intel desktop i7 processor with 8 cores has 3 billion transistors; Tesla’s new chip has 6 billion. But that is still less than some crazy 18-core Intel chips like Skylake-X, which has 8.33 billion transistors. An iPhone has about 2 billion.

“So SRAM is much faster but is more expensive and offers less capacity than DRAM.

“Nvidia Xavier (available early 2018) had 30 TOPS (Tera Operations Per Second). Tesla’s FSD chip has 144 TOPS.”
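Putting the figures quoted above side by side (my arithmetic on the article's numbers, not additional Tesla data):

```python
# Transistor counts (billions) and TOPS as quoted in the notes above.
chips = {
    "Intel i7 (8-core)":       {"transistors_b": 3.0},
    "Tesla FSD":               {"transistors_b": 6.0, "tops": 144},
    "Intel Skylake-X (18-core)": {"transistors_b": 8.33},
    "iPhone SoC":              {"transistors_b": 2.0},
    "Nvidia Xavier":           {"tops": 30},
}

# Tesla's FSD chip vs. Nvidia Xavier in raw Tera Operations Per Second:
speedup = chips["Tesla FSD"]["tops"] / chips["Nvidia Xavier"]["tops"]
print(f"Tesla FSD delivers {speedup:.1f}x the TOPS of Nvidia Xavier")
# prints: Tesla FSD delivers 4.8x the TOPS of Nvidia Xavier
```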

 

Philippe J DEWOST's insight:

Tesla is going vertical at full speed as it designs its own Full Self-Driving system. This is what will enable Robotaxi mode and cut the cost of owning a Tesla roughly threefold.

The following report contains almost all of the slides presented, with an incredible level of detail. A must-read for anybody involved in autonomous vehicle technology and issues.

No comment yet.
Scooped by Philippe J DEWOST

Kalray Announces the Release of an Efficient Manycore Processing Solution Dedicated to Deep Learning


Rounding out its embedded technology offer, Kalray has expanded its processing capabilities to the world of artificial intelligence. The company is introducing a highly optimized deep learning solution targeting embedded applications like autonomous cars, avionics, drones, robotics and more. The solution is capable of supporting all of the most commonly used deep learning neural networks and frameworks, such as GoogLeNet, SqueezeNet, Caffe and more.

Kalray’s deep learning solution includes:

  • MPPA®2-256 Bostan manycore processor: industry-recognized 288-core processor
  • Kalray Neural Network (KaNN): deep learning software tool used in the development and evaluation of neural networks on MPPA®. KaNN is compatible with all commonly used deep learning networks.

In terms of pure performance, Kalray is able to leverage the 288 cores of its MPPA®2-256 Bostan processor in order to efficiently process notoriously compute-heavy deep learning algorithms. To do this, the solution uses the extensive on-chip memory of the Bostan processor and spreads the compute-heavy aspects of deep learning – data-dependent layers and weight parameters – across the MPPA®’s numerous cores. The result is particularly efficient processing – up to 60 frames per second while running “GoogleNet” – outperforming the most efficient GPUs addressing today’s embedded market.
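The partitioning idea described above (spreading a layer's compute across many cores) can be sketched generically. The following is a toy data-parallel illustration, not Kalray's actual KaNN toolchain or API:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of the manycore idea: a compute-heavy layer's work is
# partitioned into slices, and each core processes one slice.
# This is NOT Kalray's KaNN API, just a generic data-parallel pattern.
N_CORES = 288  # MPPA2-256 Bostan core count quoted above

def process_slice(weights_slice):
    # Stand-in for the per-core work on one partition of the layer
    # (e.g. a portion of a convolution's weight parameters).
    return sum(w * w for w in weights_slice)

def run_layer(weights, n_cores=N_CORES):
    # Split the layer's weights into one chunk per core, process them
    # in parallel, then combine the partial results.
    chunk = max(1, len(weights) // n_cores)
    slices = [weights[i:i + chunk] for i in range(0, len(weights), chunk)]
    with ThreadPoolExecutor(max_workers=n_cores) as pool:
        return sum(pool.map(process_slice, slices))

# The parallel result matches the serial computation.
print(run_layer([1.0] * 288))  # prints: 288.0
```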

Philippe J DEWOST's insight:

Impressive to see that #HardwareIsNotDead

No comment yet.
Scooped by Philippe J DEWOST

Meet the World's Largest Chip: Inside the Cerebras CS-1 System


Cerebras Systems announced its new CS-1 system here at Supercomputing 2019. The company unveiled its Wafer Scale Engine (WSE) at Hot Chips earlier this year, and the chip is almost as impressive as it is unbelievable: the world's largest chip, weighing in at 400,000 cores, 1.2 trillion transistors, 46,225 square millimeters of silicon, and 18 GB of on-chip memory, all on one chip as large as an entire wafer. Add in that the chip draws 15kW of power and features 9 PB/s of memory bandwidth, and you've got a recipe for what is unquestionably the world's fastest AI processor.
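The spec numbers above invite a quick per-core breakdown (my back-of-the-envelope arithmetic on the quoted figures, not Cerebras's own numbers):

```python
# Per-core figures derived from the CS-1 specs quoted above.
cores = 400_000
transistors = 1.2e12             # 1.2 trillion
mem_bandwidth_bytes_s = 9e15     # 9 PB/s
on_chip_memory_gb = 18

print(f"{transistors / cores / 1e6:.0f}M transistors per core")
print(f"{on_chip_memory_gb * 2**20 / cores:.1f} KB of on-chip memory per core")
print(f"{mem_bandwidth_bytes_s / cores / 1e9:.1f} GB/s of memory bandwidth per core")
# prints roughly: 3M transistors, ~47 KB of SRAM, and ~22.5 GB/s per core
```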

 

Developing the chip was an incredibly complex task, but feeding all that compute enough power, not to mention enough cooling capacity, in a system reasonable enough for mass deployment is another matter entirely. Cerebras has pulled off that feat, and today the company unveiled the system and announced that Argonne National Laboratory has already adopted it. The company also provided us with detailed schematics of the system's internals.

Philippe J DEWOST's insight:

Compute power is one of the levers of power, full stop. And Europe still has not understood this. Kalray alone will not be enough.

#HardwareIsNotDead

No comment yet.
Scooped by Philippe J DEWOST

Apple reportedly developing custom cellular modem for iPhones in-house amid battle with Qualcomm

Apple has been expanding its development and use of custom chips over the last few years. Last month we first heard that Apple was looking to poach employees from Qualcomm on its home turf of San Diego to potentially create custom radio chips. Today, a new report from The Information says that Apple is indeed …
No comment yet.
Scooped by Philippe J DEWOST

Google Ramps Up Chip Design | EE Times

According to venture legend John Doerr, Google is designing its own silicon for its data centers. But he stopped short of confirming rumors that the search giant was designing ARM-based chips, as was reported in December. Doerr, speaking at a chip conference, also said that Facebook would be next. He's right. Computing is the primary cost for Google, Amazon Web Services and Facebook, and designing their own silicon could lower that cost. And thanks to more modular designs and advances in the ARM architecture, the cost of designing custom chips has fallen into a range where the benefits outweigh design costs.
No comment yet.