
The world’s largest AI chip developed by Cerebras Systems is 10,000 times faster than the leading GPU

Cerebras Systems confirmed that its CS-1 computing system has reached a new performance milestone, and its speed far exceeds that of existing CPUs and GPUs.

CS-1 is built around Cerebras' WSE processor (the world's largest computer chip) and can now claim to be the fastest AI computer. The WSE packs 1.2 trillion transistors and 400,000 cores onto a die of 46,225 square millimeters, with 18 GB of on-chip memory.

According to data from Cerebras and the National Energy Technology Laboratory (NETL) of the US Department of Energy, CS-1 is 10,000 times faster than leading GPU competitors and 200 times faster than the Joule supercomputer, which is ranked 82nd in the latest TOP500 supercomputing list.

"We are very proud to work with NETL and have achieved extraordinary results in a basic work of scientific computing." explained Andrew Feldman, co-founder and CEO of Cerebras Systems.

"This work opens the door for breakthroughs in scientific computing performance. CS-1 and WSE overcome the traditional barriers that cannot achieve high performance, real-time, and scalability. This is because wafer-level integration has huge memory and communications. The acceleration is far beyond what a stand-alone single-chip processor (whether CPU or GPU) can provide."

The workload behind the new CS-1 result involved solving a large sparse, structured system of linear equations, a kernel used for modeling in many practical scenarios, including fluid dynamics and energy applications; a generic sketch of this kind of problem is shown below.
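As a point of reference only, the short Python sketch below shows the general shape of such a workload: a sparse, structured linear system (here, a hypothetical 2-D Poisson problem discretized with a five-point stencil) assembled and solved with an iterative method. The grid size, right-hand side, and solver choice are illustrative assumptions and have nothing to do with Cerebras' or NETL's actual software.

```python
# Generic illustration (not Cerebras' or NETL's code) of a sparse, structured
# linear system of the kind that arises in fluid-dynamics-style modeling.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100          # grid points per side (hypothetical problem size)
N = n * n        # total number of unknowns

# Assemble the standard 2-D five-point Laplacian as a sparse matrix.
main = 4.0 * np.ones(N)
side = -1.0 * np.ones(N - 1)
side[np.arange(1, N) % n == 0] = 0.0   # no coupling across grid-row boundaries
updown = -1.0 * np.ones(N - n)
A = sp.diags([main, side, side, updown, updown],
             [0, -1, 1, -n, n], format="csr")

b = np.ones(N)   # simple right-hand side, purely for illustration

# Iterative Krylov solve; structured sparse solves like this are the
# inner kernel of many CFD and energy-system simulations.
x, info = spla.cg(A, b)
print("converged" if info == 0 else f"cg returned info={info}")
```

On conventional hardware this kind of solve is typically memory-bandwidth bound, which is why the on-chip memory and interconnect of a wafer-scale processor are claimed to matter so much for it.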

Although the CS-1 figures are impressive, questions remain about which applications the chip can actually run. High-speed computational modeling appears feasible, and the potential future intersection of supercomputing and artificial intelligence is equally exciting.

TSMC has reached a cooperation agreement with Cerebras Systems, and volume production of processes derived from its InFO (Integrated Fan-Out) packaging technology means that TSMC may begin commercial production of AI chips dedicated to supercomputers within two years.

If this giant AI chip, which has attracted wide attention since its launch last year, reaches commercialization, machine learning may advance to a new level.
