
That's all very impressive...
...but can it run Crysis?
Nvidia has lifted the lid on a fresh line of products based on its latest Ampere architecture, revealing its new A100 GPU – which promises to be 20X more powerful than its predecessor and capable of powering AI supercomputers – as well as a smaller chip for running machine learning workloads on IoT devices. CEO Jensen Huang …
Do you mean directly? As in a workstation/PC?
If so, then no, at least not as far as I know. The A100s are specifically designed for data-centre usage. They don't even have video outputs on them.
But Ampere, the microarchitecture the new A100 is built on, is coming to workstation and mainstream cards at some point; we just don't know when yet.
For ref, Nvidia has stated that Ampere-based chips will replace all current mainstream consumer (i.e. regular GTX/RTX), prosumer (Titan) and professional (Quadro) cards. The expectation is that the Titan and Quadro cards will use silicon similar to, if not the same as, the chip in the A100, while the other cards get cut-down versions (i.e. fewer CUDA cores, etc.).
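If you want to see roughly where a given card sits in that hierarchy, the CUDA runtime API will tell you most of it. A quick sketch (assuming the CUDA toolkit is installed; build with nvcc) that prints the bits that get cut down, namely the SM count and compute capability. Total CUDA cores is just the SM count multiplied by the cores-per-SM figure Nvidia publishes for each architecture.

```cpp
// Rough sketch, not an official tool: list the properties that distinguish
// a cut-down part (SM count, compute capability) via the CUDA runtime API.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable device found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s\n", i, prop.name);
        std::printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
        std::printf("  Streaming multiprocessors (SMs): %d\n", prop.multiProcessorCount);
        // Cards driving a display typically enforce a kernel run-time limit;
        // headless datacentre parts like the A100 generally do not.
        std::printf("  Kernel timeout enabled: %s\n",
                    prop.kernelExecTimeoutEnabled ? "yes" : "no");
    }
    return 0;
}
```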
Interview 2023 is shaping up to become a big year for Arm-based server chips, and a significant part of this drive will come from Nvidia, which appears steadfast in its belief in the future of Arm, even if it can't own the company.
Several system vendors are expected to push out servers next year that will use Nvidia's new Arm-based chips. These consist of the Grace Superchip, which combines two of Nvidia's Grace CPUs, and the Grace-Hopper Superchip, which brings together one Grace CPU with one Hopper GPU.
The vendors lining up servers include American companies Dell Technologies, HPE, and Supermicro; Lenovo in Hong Kong; Inspur in China; and ASUS, Foxconn, Gigabyte, and Wiwynn in Taiwan. The servers will target application areas where high performance is key: AI training and inference, high-performance computing, digital twins, and cloud gaming and graphics.
Lenovo has unveiled a small desktop workstation in a new physical format that's smaller than previous compact designs, but which it claims still has the type of performance professional users require.
Available from the end of this month, the ThinkStation P360 Ultra comes in a chassis that is less than 4 liters in total volume, but packs in 12th Gen Intel Core processors – that's the latest Alder Lake generation with up to 16 cores, but not the Xeon chips that we would expect to see in a workstation – and an Nvidia RTX A5000 GPU.
Other specifications include up to 128GB of DDR5 memory, two PCIe 4.0 slots, up to 8TB of storage using plug-in M.2 cards, plus dual Ethernet and Thunderbolt 4 ports, and support for up to eight displays, the latter of which will please many professional users. Pricing is expected to start at $1,299 in the US.
Nvidia has chosen Intel's next-generation Xeon Scalable processor, known as Sapphire Rapids, to go inside its upcoming DGX H100 AI system to showcase its flagship H100 GPU.
Jensen Huang, co-founder and CEO of Nvidia, confirmed the CPU choice during a fireside chat Tuesday at the BofA Securities 2022 Global Technology Conference. Nvidia positions the DGX family as the premier vehicle for its datacenter GPUs, pre-loading the machines with its software and optimizing them to provide the fastest AI performance as individual systems or in large supercomputer clusters.
Huang's confirmation answers a question we and other observers have had about which next-generation x86 server CPU the new DGX system would use since it was announced in March.
Arm is beefing up its role in the rapidly evolving (yet long-standing) hardware-based real-time ray tracing arena.
The company revealed on Tuesday that it will introduce the feature in its new flagship Immortalis-G715 GPU design for smartphones, promising to deliver graphics in mobile games that realistically recreate the way light interacts with objects.
Arm is promoting the Immortalis-G715 as its best mobile GPU design yet, claiming that it will provide 15 percent faster performance and 15 percent better energy efficiency compared to the currently available Mali-G710.
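For context, the operation that dedicated ray tracing hardware accelerates is the ray-primitive intersection test, carried out millions of times per frame. The sketch below is purely illustrative: a single ray tested against a sphere on the CPU. GPUs such as the Immortalis-G715 instead test rays against triangles stored in acceleration structures, using fixed-function units.

```cpp
// Illustrative only: the core operation hardware ray tracing accelerates is
// the ray/primitive intersection test. Here, one ray against one sphere.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Returns true and the hit distance t if a ray (origin o, unit direction d)
// hits a sphere of radius r centred at c.
bool intersect_sphere(Vec3 o, Vec3 d, Vec3 c, float r, float &t) {
    Vec3 oc = sub(o, c);
    float b = dot(oc, d);
    float disc = b * b - (dot(oc, oc) - r * r);  // quadratic discriminant
    if (disc < 0.0f) return false;               // ray misses the sphere
    t = -b - std::sqrt(disc);                    // nearest intersection
    return t > 0.0f;
}

int main() {
    float t = 0.0f;
    // Ray from the origin along +z toward a unit sphere centred at z = 5.
    if (intersect_sphere({0, 0, 0}, {0, 0, 1}, {0, 0, 5}, 1.0f, t))
        std::printf("hit at t = %.2f\n", t);
    else
        std::printf("miss\n");
    return 0;
}
```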
Siemens and Nvidia don’t want manufacturers to imagine what the future will hold – they want to build a fancy digital twin that helps them to make predictions about whatever comes next.
During a press conference this week, Siemens CEO Roland Busch painted a picture of a future in which manufacturers are besieged with productivity, labor, and supply chain disruptions.
"The answer to all of these challenges is technology and digitalization," he said. "The point is, we have to make the digital twin as realistic as possible and bring it as close as possible to the real world."
Comment Facebook parent Meta has reportedly said it needs to increase its fleet of datacenter GPUs fivefold to help it compete against short-form video app and perennial security concern TikTok.
The oft-controversial tech giant needs these hardware accelerators in its servers by the end of the year to power its so-called discovery engine that will become the center of future social media efforts, according to an internal memo seen by Reuters that was written by Meta Chief Product Officer Chris Cox.
Separately, CEO Mark Zuckerberg told Meta staff on Thursday in a weekly Q&A the biz had planned to hire 10,000 engineers this year, and this has now been cut to between 6,000 and 7,000 in the shadow of an economic downturn. He also said some open positions would be removed, and pressure will be placed on the performance of those staying at the corporation.
Updated Arm today told The Reg its restructuring ahead of its return to the stock market is focused on cutting "non-engineering" jobs.
This is after we queried comments made this morning by Arm chief executive Rene Haas in the Financial Times, in which he indicated he was looking to use funds generated by the expected public listing to expand the company, hire more staff, and potentially pursue acquisitions. This comes as some staff face the chop.
This afternoon we were told by an Arm spokesperson: "Rene was referring more to the fact that Arm continues to invest significantly in its engineering talent, which makes up around 75 percent of the global headcount. For example, we currently have more than 250 engineering roles available globally."
Early details of the specifications for PCIe 7.0 are out, and it's expected to deliver data rates of up to 512 GB/s bi-directionally for data-intensive applications such as 800G Ethernet.
The announcement from the Peripheral Component Interconnect Special Interest Group (PCI-SIG) was made to coincide with its Developers Conference 2022, held at the Santa Clara Convention Center in California this week. It also marks the 30th anniversary of the PCI-SIG itself.
While the completed specifications for PCIe 6.0 were only released this January, PCIe 7.0 looks to double the bandwidth of the high-speed interconnect yet again, from a raw bit rate of 64 GT/s to 128 GT/s, with bi-directional speeds of up to 512 GB/s in an x16 configuration.
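The headline figure is simple, if idealised, arithmetic: 128 GT/s per lane is roughly 128 Gbit/s in each direction, so an x16 link moves about 256 GB/s one way, or 512 GB/s with both directions combined, before encoding and protocol overhead are taken into account. A quick check:

```cpp
// Back-of-envelope check of the headline PCIe 7.0 numbers (raw rates only;
// encoding and protocol overhead mean real throughput is lower).
#include <cstdio>

int main() {
    const double gt_per_s_per_lane = 128.0;  // PCIe 7.0 raw rate per lane
    const int lanes = 16;                    // x16 slot
    // One transfer carries one bit per lane, so GT/s ~ Gbit/s per direction.
    double gb_per_s_one_way = gt_per_s_per_lane * lanes / 8.0;  // 256 GB/s
    double gb_per_s_bidir   = 2.0 * gb_per_s_one_way;           // 512 GB/s
    std::printf("x16 one-way:        %.0f GB/s\n", gb_per_s_one_way);
    std::printf("x16 bi-directional: %.0f GB/s\n", gb_per_s_bidir);
    return 0;
}
```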
Computex Nvidia's GPUs are becoming increasingly power-hungry, so the US giant is hoping to make datacenters using them "greener" with liquid-cooled PCIe cards that contain its highest-performing chips.
At this year's Computex event in Taiwan, the computer graphics goliath revealed it will sell a liquid-cooled PCIe card for its flagship server GPU, the A100, in the third quarter of this year. Then in early 2023, the company plans to release a liquid-cooled PCIe card for the A100's recently announced successor, the Hopper-powered H100.
Nvidia's A100 has been available for liquid-cooled servers for some time but, to date, only in the GPU's SXM form factor, which goes into the company's HGX server board.
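The power and heat figures motivating all this plumbing are easy to pull from the GPUs themselves. As a rough illustration, NVML (the library behind nvidia-smi) exposes per-device power draw and temperature; the sketch below assumes an Nvidia driver is installed and links against -lnvidia-ml.

```cpp
// Rough sketch: read per-GPU power draw and temperature via NVML,
// the same library that backs nvidia-smi.
#include <nvml.h>
#include <cstdio>

int main() {
    if (nvmlInit() != NVML_SUCCESS) {
        std::printf("NVML init failed (no Nvidia driver?)\n");
        return 1;
    }
    unsigned int count = 0;
    nvmlDeviceGetCount(&count);
    for (unsigned int i = 0; i < count; ++i) {
        nvmlDevice_t dev;
        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        unsigned int milliwatts = 0, celsius = 0;
        nvmlDeviceGetHandleByIndex(i, &dev);
        nvmlDeviceGetName(dev, name, sizeof(name));
        nvmlDeviceGetPowerUsage(dev, &milliwatts);          // current draw, mW
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &celsius);
        std::printf("GPU %u (%s): %.0f W, %u C\n",
                    i, name, milliwatts / 1000.0, celsius);
    }
    nvmlShutdown();
    return 0;
}
```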
Science fiction is littered with fantastic visions of computing. One of the more pervasive is the idea that one day computers will run on light. After all, what’s faster than the speed of light?
But it turns out Star Trek’s glowing circuit boards might be closer to reality than you think, Ayar Labs CTO Mark Wade tells The Register. While fiber optic communications have been around for half a century, we’ve only recently started applying the technology at the board level. Despite this, Wade expects, within the next decade, optical waveguides will begin supplanting the copper traces on PCBs as shipments of optical I/O products take off.
Driving this transition are a number of factors and emerging technologies that demand ever-higher bandwidths across longer distances without sacrificing latency or power.