It once seemed inevitable – a sure-fire thing – that supercomputers would help businesses cope with the demands imposed by massive databases, complex engineering tools, and other CPU-draining challenges. Then, all of a sudden, technology and business took a different path.
Chris Monroe, co-founder and chief scientist at quantum computing company IonQ, offers a simple explanation for the sudden change in interest. “Supercomputers have failed to adapt because, while they promise speed and the ability to handle big computational problems, they have a large physical footprint [and] energy/cooling needs,” he notes. “When it comes to mainstream adoption, supercomputers never struck the right balance between affordability, size, access, and value-added enterprise use cases.”
Supercomputers have traditionally been defined by the fact that they bring together a set of parallel hardware offering very high computing throughput and fast interconnections. “This contrasts with traditional parallel processing, where [there are] a lot of networked servers working on a problem,” says Scott Buchholz, CTO for government and public services and national director of emerging technologies research at Deloitte Consulting. “Most business problems can be solved either by the latest generation of stand-alone processors or by parallel servers.”
The arrival of cloud computing and easily accessible APIs, as well as the development of private clouds and SaaS software, are putting high-performance computing (HPC) and supercomputers in the rearview mirror, observes Chris Mattmann, chief technology and innovation officer (CTIO) at NASA’s Jet Propulsion Laboratory (JPL). “Relegated to science and other niche uses, HPC machines/supercomputers … have never caught up with modern [business] standards.”
Today, while most companies have given up on supercomputers, science and engineering teams often turn to the technology to help them perform a variety of very complex tasks in areas such as weather forecasting, molecular simulation, and fluid dynamics. “The sets of science and simulation problems that supercomputers are uniquely suited to solve will not go away,” says Buchholz.
Supercomputers are primarily used in areas where large models are developed to make predictions involving a large number of measurements, notes Francisco Webber, CEO of Cortical.io, a company specializing in extracting value from unstructured documents.
“The same algorithm is applied over and over again to many observational instances that can be computed in parallel,” explains Webber, hence the potential for acceleration when run on a large number of processors. Supercomputer applications, he explains, can range from experiments at the Large Hadron Collider, which can generate up to a petabyte of data per day, to meteorology, where complex weather phenomena are broken down into the behavior of myriads of particles.
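The pattern Webber describes – one algorithm applied independently to many observations – is what makes such workloads parallelize so well. A minimal sketch in Python, where the per-observation function and the data are hypothetical stand-ins for a real scientific kernel:

```python
from multiprocessing import Pool

def analyze(observation):
    # Stand-in for the per-observation algorithm, e.g. scoring a
    # detector event or simulating one particle's behavior.
    return observation * observation

if __name__ == "__main__":
    observations = list(range(10))  # stand-in for instrument readings
    with Pool(processes=4) as pool:
        # The same function runs on every observation in parallel;
        # results come back in input order.
        results = pool.map(analyze, observations)
    print(results)
```

On a supercomputer the same idea scales from four local worker processes to thousands of nodes, typically via frameworks such as MPI rather than a single machine's process pool.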
There is also growing interest in supercomputers based on graphics processing units (GPUs) and tensor processing units (TPUs). “These machines may be well suited to some artificial intelligence and machine learning problems, such as training algorithms [and] analyzing large volumes of image data,” explains Buchholz. “As these use cases develop, there may be more opportunities to ‘rent’ time through the cloud or other service providers for those who need periodic access” but do not have a sufficient volume of use cases to justify the outright purchase of a supercomputer.
Although mostly relegated to large university and government laboratories, supercomputers have managed to gain a foothold in a few specific industrial sectors, such as the petroleum, automotive, aerospace, chemical, and pharmaceutical industries. “While the adoption isn’t necessarily at scale, it demonstrates the investment and experimentation capacity of these organizations,” says Monroe.
In the future, the focus will be on new types of supercomputer architectures, such as neuromorphic and quantum computing, predicts Mattmann. “This is where the supercomputing companies will invest to disrupt the traditional model that powers the clouds.”
Classical computing will simply hit a limit, Monroe observes. “Moore’s Law no longer applies and organizations need to think beyond silicon,” he advises. “Even the best-made supercomputers… are dated by the time they are designed.” Monroe adds that he’s also starting to see calls for supercomputers to merge with quantum computers, creating a hybrid computing architecture.
Ultimately, however, Monroe anticipates widespread adoption of powerful and stable quantum computers. “Their unique computational power is better suited to solving complex, large-scale problems, such as financial risk management, drug discovery, macroeconomic modeling, climate change, etc., beyond the capabilities of most large supercomputers,” he notes. “While supercomputers are still very present… the biggest business minds are already turning to quantum.”
Takeaways
Buchholz doesn’t expect mainstream companies to change their minds about supercomputers in the foreseeable future. “If the question is whether or not most organizations need specialized multi-million-dollar hardware, the answer is usually ‘no,’ as most applications and systems target what can be done with commodity hardware today,” he explains.
On the flip side, Buchholz notes that technological momentum could eventually pull many companies into the supercomputer market whether they realize it or not. “It’s important to remember that today’s supercomputer is the next decade’s commodity hardware,” he says.