cross pond high tech
light views on high tech in both Europe and US
Rescooped by Philippe J DEWOST from Digital Sovereignty & Cyber Security

China Stockpiles Chips, Chip-Making Machines to Resist U.S.


Chinese businesses have collectively acquired ~$32B worth of chip manufacturing equipment over the last year, reports Bloomberg; an analysis of trade data shows firms increased spending by ~20 percent compared with 2019. China also imported $380B worth of chips in 2020, equal to ~18 percent of the country’s total product imports for the year.

Philippe J DEWOST's insight:

Europe is right in the middle of a widening Silicon Rift.

Philippe J DEWOST's curator insight, February 3, 2021 12:51 PM

At the negotiation table, US and China are now seated. Europe is still on the menu.

Scooped by Philippe J DEWOST

AWS launches its custom Inferentia AI chips


At its re:Invent conference, AWS today announced the launch of its Inferentia chips, which it initially announced last year. These new chips promise to make inferencing, that is, using the machine learning models you pre-trained earlier, significantly faster and more cost-effective.

As AWS CEO Andy Jassy noted, a lot of companies are focusing on custom chips that let you train models (though Google and others would surely disagree there). Inferencing tends to work well on regular CPUs, but custom chips are obviously going to be faster. With Inferentia, AWS offers lower latency and three times the throughput at 40% lower cost per inference compared to a regular G4 instance on EC2.

The new Inf1 instances promise up to 2,000 TOPS and feature integrations with TensorFlow, PyTorch and MXNet, as well as the ONNX format for moving models between frameworks. For now, it’s only available in the EC2 compute service, but it will come to AWS’s container services and its SageMaker machine learning service soon, too.
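The two announced ratios (3x the throughput of a G4 instance, 40% lower cost per inference) can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch; only the 3x and 40% figures come from the announcement, the baseline hourly cost and throughput below are purely illustrative placeholders:

```python
# Relative cost-per-inference comparison from the quoted Inf1 vs. G4 figures.
# Baseline numbers are illustrative placeholders, not real AWS pricing.

g4_hourly_cost = 1.00   # illustrative $/hour for a G4 instance
g4_throughput = 1_000   # illustrative inferences/hour

g4_cost_per_inference = g4_hourly_cost / g4_throughput

# Announced ratio: 40% lower cost per inference on Inf1
inf1_cost_per_inference = g4_cost_per_inference * (1 - 0.40)

# Announced ratio: 3x the throughput, which implies the Inf1 hourly rate
inf1_throughput = 3 * g4_throughput
inf1_hourly_cost = inf1_cost_per_inference * inf1_throughput

print(f"G4 cost/inference:   ${g4_cost_per_inference:.4f}")
print(f"Inf1 cost/inference: ${inf1_cost_per_inference:.4f}")
print(f"Implied Inf1 hourly cost: ${inf1_hourly_cost:.2f}")
```

Interestingly, the two claims together imply an Inf1 instance could cost up to 1.8x the G4 hourly rate and still deliver the stated 40% per-inference saving.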

Philippe J DEWOST's insight:

Amazon continues going vertical with custom AI chip design made available in its cloud offerings.

Philippe J DEWOST's curator insight, December 9, 2019 4:04 AM

Computing power is one of the levers of power, period. Continued: even booksellers are now getting into proprietary processor design (and this one is dedicated to AI). We are still waiting for FNAC's processor or Cdiscount's GPU ...

Scooped by Philippe J DEWOST

Can “Less than Moore” FDSOI provide better ROI for mobile ICs?

The goal for a chip maker supporting “Less Than Moore” is not to displace Qualcomm or Samsung, who follow Moore’s law and earn more than enough revenue to keep developing ever more integrated ICs at smaller technology nodes, along the type of roadmap you can see below. This roadmap from Samsung shows discrete Application Processor and Baseband Processor paths, as well as a parallel roadmap for cost-sensitive systems with an integrated (Application + BB) processor.

Philippe J DEWOST's insight:

LTM = PPP x TTM

Interesting introduction to "Less Than Moore" approaches, which combine Price/Performance/Power optimization techniques with Time To Market constraints.

Scooped by Philippe J DEWOST

Apple in turn takes the "Verticale du Fou" and abandons Intel.


Apple in turn takes the "Verticale du Fou" and abandons Intel. Three times more performance per watt dissipated: the in-house #M1 processor, announced on November 10, seems to confirm how thoroughly the #ARM architecture has caught up with, and then overtaken, the #X86 architecture of the Santa Clara giant.

The M1's 16 billion transistors are indeed etched at 5 nanometers, while Intel still cannot master either 10 or 7 nanometers.

An excellent Tom's #Hardware article shows the scale of the threat now hanging over Intel; it is not so much the Mac's market share (barely 9%) as its share of developers (30%), who will increasingly and ever more exclusively switch to the ARM architecture.

Philippe J DEWOST's insight:

Vertical integration is the "Tech" trend of the moment...

Scooped by Philippe J DEWOST

AI could get 100 times more energy-efficient with IBM’s new artificial synapses

Neural networks are the crown jewel of the AI boom. They gorge on data and do things like transcribe speech or describe images with near-perfect accuracy (see “10 breakthrough technologies 2013: Deep learning”). The catch is that neural nets, which are modeled loosely on the structure of the human brain, are typically constructed in software rather than hardware, and the software runs on conventional computer chips. That slows things down.

IBM has now shown that building key features of a neural net directly in silicon can make it 100 times more efficient. Chips built this way might turbocharge machine learning in coming years.

The IBM chip, like a neural net written in software, mimics the synapses that connect individual neurons in a brain. The strength of these synaptic connections needs to be tuned in order for the network to learn. In a living brain, this happens in the form of connections growing or withering over time. That is easy to reproduce in software but has proved infuriatingly difficult to…
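The "tuning of synaptic connections" described above is, in software, just iterative weight updates. A minimal sketch using the classic perceptron learning rule on the AND function; this is the standard textbook mechanism the article refers to as easy in software, not IBM's analog in-silicon scheme:

```python
# Minimal software "synapse tuning": a perceptron learning the AND function.
# Each weight plays the role of a synaptic strength, adjusted after every error.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

for _ in range(20):  # a few passes suffice for this tiny problem
    for (x1, x2), target in data:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - output
        # strengthen or weaken each "synapse" in proportion to its input
        weights[0] += lr * error * x1
        weights[1] += lr * error * x2
        bias += lr * error

predictions = [1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
               for (x1, x2), _ in data]
print(predictions)  # → [0, 0, 0, 1]
```

Doing this same repeated read-adjust-write cycle on a conventional chip means shuttling every weight between memory and processor on each update, which is exactly the overhead in-silicon synapses aim to eliminate.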
Philippe J DEWOST's insight:
The human brain consumes 4.2 g of glucose per hour. Neural networks are trying to catch up, and silicon might be the next step with a 100x efficiency factor.