
Botnets
It'd be interesting if any of the larger botnets out there ran the Linpack benchmark on their er... botnet and posted the results for comparison purposes.
For the first time since the Top 500 ranking of the most powerful supercomputers in the world was started 23 years ago, the United States is not home to the largest number of machines on the list – and China, after decades of intense investment and engineering, is. Supercomputing is not just an academic or government …
The last (only, really) Bull machine I ever used (stretching that word a bit; I was testing a driver) was a 6U Itanium system running SUSE Linux. Perhaps its descendants still exist.
Of course, those of us of a certain age may recall the Gamma 60. That was quite an architectural leap.
BTW: in a "how they have fallen" moment, the spelling checker in this comment form does not recognize "Itanium".
And probably good news for ARM. Even if the Chinese are using their own chips, I doubt that they would have invented a completely new architecture for this. The Chinese have a ton of experience with ARM thanks to producing the iPhone and its knockoffs, and so it was only a matter of time before they would start building their own server chips instead of importing from America or Taiwan.
Lenovo has inked an agreement with Spain's Barcelona Supercomputing Center for research and development work in various areas of supercomputer technology.
The move will see Lenovo invest $7 million over three years into priority sectors in high-performance computing (HPC) for Spain and the EU.
The agreement was signed this week at the Barcelona Supercomputing Center-National Supercomputing Center (BSC-CNS), and will see Lenovo and the BSC-CNS try to advance the use of supercomputers in precision medicine, the design and development of open-source European chips, and the development of more sustainable supercomputers and datacenters.
The US Department of Energy is looking for vendors to help build supercomputers up to 10 times faster than the recently inaugurated Frontier exascale system, due to come on stream between 2025 and 2030, with even more powerful systems than that to follow in the 2030s.
These details were disclosed in a request for information (RFI) issued by the DoE for computing hardware and software vendors, system integrators and others to "assist the DoE national laboratories (labs) to plan, design, commission, and acquire the next generation of supercomputing systems in the 2025 to 2030 time frame."
Vendors have until the end of July to respond.
Predicting the weather is a notoriously tricky enterprise, but that’s never held back America's National Oceanic and Atmospheric Administration (NOAA).
After more than two years of development, the agency brought a pair of supercomputers online this week that it says are three times as powerful as the machines they replace, enabling more accurate forecast models.
Developed and maintained by General Dynamics Information Technology under an eight-year contract, the Cactus and Dogwood supers — named after the flora native to the machines' homes in Phoenix, Arizona, and Manassas, Virginia, respectively — will support larger, higher-resolution models than previously possible.
D-Wave Systems has put its next-generation Advantage2 quantum computer into the cloud, or at least some form of it.
This experimental machine will be accessible from D-Wave's Leap online service, we're told. We first learned of the experimental system last year when the biz revealed its Clarity Roadmap, which includes plans for a gate-model quantum system. Advantage2 sports D-Wave's latest topology and qubit design that apparently increases connectivity and aims to deliver greater performance by reducing noise.
"By making the Advantage2 prototype available in the Leap quantum cloud service today, the company is providing an early snapshot for exploration and learning by developers and researchers," D-Wave said in a canned statement.
Germany will be the host of the first publicly known European exascale supercomputer, along with four other EU sites getting smaller but still powerful systems, the European High Performance Computing Joint Undertaking (EuroHPC JU) announced this week.
Germany will be the home of Jupiter, the "Joint Undertaking Pioneer for Innovative and Transformative Exascale Research." It should be switched on next year in a specially designed building on the campus of the Forschungszentrum Jülich research centre and operated by the Jülich Supercomputing Centre (JSC), alongside the existing Juwels and Jureca supercomputers.
The four mid-range systems are: Daedalus, hosted by the National Infrastructures for Research and Technology in Greece; Levente at the Governmental Agency for IT Development in Hungary; Caspir at the National University of Ireland Galway in Ireland; and EHPCPL at the Academic Computer Centre CYFRONET in Poland.
HPE has scored another supercomputing win with the inauguration of the LUMI system at the IT Center for Science, Finland, which as of this month is ranked as Europe's most powerful supercomputer.
Exclusive A court case which would have seen Atos take on the UK government over an £854 million (c $1 billion) supercomputer contract for the Meteorological Office has ended before it began.
The case, Atos Services UK Ltd v Secretary of State for Business, Energy, and Industrial Strategy and The Meteorological Office, concerns an agreement last year between the Met Office and Microsoft to provision a new supercomputer to "take weather and climate forecasting to the next level."
The system is intended to be the world's most advanced for weather and climate forecasting, and is expected to be twice as powerful as any other supercomputer in the UK when it becomes operational in the summer.
Cloud-native architectures have changed the way applications are deployed, but remain relatively uncharted territory for high-performance computing (HPC). This week, however, Red Hat and the US Department of Energy will be making some moves in the area.
The IBM subsidiary – working closely with the Lawrence Berkeley, Lawrence Livermore, and Sandia National Laboratories – aims to develop a new generation of HPC applications designed to run in containers, orchestrated using Kubernetes, and optimized for distributed filesystems.
The work might also make AI/ML workloads easier for enterprises to deploy in the process.
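What does that look like in practice? As a rough, unofficial sketch of the pattern – an HPC-style batch workload packaged as a container, handed to Kubernetes as a Job, with a shared filesystem mounted in – the Python below uses the stock kubernetes client; the image name, command, and volume claim are hypothetical placeholders rather than anything from the Red Hat and DoE effort itself.

# Minimal sketch of the pattern described above. The image, command, claim
# name, and namespace are hypothetical placeholders, not details of the
# actual Red Hat / DoE work.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

container = client.V1Container(
    name="solver",
    image="registry.example.com/hpc/solver:latest",     # hypothetical image
    command=["mpirun", "-np", "4", "/opt/solver/run"],   # hypothetical command
    volume_mounts=[client.V1VolumeMount(name="scratch", mount_path="/scratch")],
    resources=client.V1ResourceRequirements(requests={"cpu": "4", "memory": "8Gi"}),
)

pod_spec = client.V1PodSpec(
    restart_policy="Never",
    containers=[container],
    volumes=[client.V1Volume(
        name="scratch",
        # Backed by a distributed filesystem exposed through a PVC
        persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
            claim_name="shared-scratch"),
    )],
)

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="hpc-demo-job"),
    spec=client.V1JobSpec(template=client.V1PodTemplateSpec(spec=pod_spec),
                          backoff_limit=0),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)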
Amid a delayed HPC contract and industry-wide supply limitations compounded by the lockdown in Shanghai, Hewlett Packard Enterprise reported year-on-year sales growth of $13 million for its Q2.
That equated to revenue expansion of 1.5 percent to $6.713 billion for the quarter ended 30 April. Wall Street had forecast HPE to generate $6.81 billion in sales for the period and didn't look too kindly on the shortfall.
"This quarter," said CEO and president Antonio Neri, "through a combination of supply constraints, limiting our ability to fulfill orders as well as some areas where we could have executed better, we did not fully translate the strong customer orders into higher revenue growth."
AI is killing the planet. Wait, no – it's going to save it. According to Hewlett Packard Enterprise VP of AI and HPC Evan Sparks and professor of machine learning Ameet Talwalkar from Carnegie Mellon University, it's not entirely clear just what AI might do for – or to – our home planet.
Speaking at the SixFive Summit this week, the duo discussed one of the more controversial challenges facing AI/ML: the technology's impact on the climate.
"What we've seen over the last few years is that really computationally demanding machine learning technology has become increasingly prominent in the industry," Sparks said. "This has resulted in increasing concerns about the associated rise in energy usage and correlated – not always cleanly – concerns about carbon emissions and carbon footprint of these workloads."