It takes big business to make Nvidia's Omniverse tangible

Nvidia is investing mightily in the concept of "digital twins": large-scale simulations that illuminate real-world processes. This week at the company's GPU Technology Conference (GTC) it demonstrated how several high-profile companies are bringing digital twins into production via its all-encompassing Omniverse hardware …
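
To make the term concrete, here is a toy sketch of the digital-twin loop: a simulated model steps alongside readings from a physical asset, and operators act on the divergence between the two. Everything in it (the ConveyorTwin class, its dynamics, the faked sensor feed) is a hypothetical illustration, not an Omniverse API.

```python
import random

# A toy digital-twin loop: a simulated model runs in lock-step with
# (here, faked) sensor readings from a physical asset, and the
# divergence between prediction and measurement is what gets flagged.

class ConveyorTwin:
    """First-order thermal model of a conveyor motor (hypothetical)."""
    def __init__(self, ambient_c=22.0):
        self.ambient_c = ambient_c
        self.temp_c = ambient_c

    def step(self, load):
        # heat up with load, cool toward ambient
        self.temp_c += 0.8 * load - 0.1 * (self.temp_c - self.ambient_c)
        return self.temp_c

twin = ConveyorTwin()
for minute in range(60):
    load = 0.5 + 0.3 * random.random()            # commanded motor load
    predicted = twin.step(load)
    measured = predicted + random.gauss(0, 0.4)   # stand-in for a real sensor
    if abs(measured - predicted) > 1.5:           # model vs. reality drift
        print(f"minute {minute}: drift detected, flag motor for inspection")
```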

  1. badflorist

    Both videos seemed depressing...

    ... also one of them stated it could pack warehouses more densely than humans can, but it didn't look like it. I'm sure it can be done, but that video didn't seem like proof of it (or of anything non-unhappy).

  2. Snowy

    They could

    Use Omniverse to find a way to make more graphics cards!

  3. El Bard

    And don't forget automotive

    NVIDIA Drive (https://blogs.nvidia.com/blog/2022/03/23/drive-sim-omniverse-neural-ai-digital-twin/) provides interesting perspectives for the training of AI for autonomous vehicles.

    There is a trend to move beyond the Tesla model of using real-world data to train AI:

    https://www.calibratevc.com/blog-post/2021/3/15/training-autonomous-vehicles-with-synthetic-data-why-calibrate-invested-in-parallel-domain

    https://news.yahoo.com/training-artificial-intelligence-synthetic-data-153547348.html

    a trend NVIDIA is also spearheading:

    https://spectrum.ieee.org/synthetic-data-ai
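
    To make the approach concrete, the toy sketch below shows domain randomization, the idea behind synthetic training data: because the simulator places every object, ground-truth labels come for free. The parameter names are hypothetical; a real pipeline such as DRIVE Sim renders full camera and lidar frames from a physics-based simulator.

    ```python
    import random

    # Toy domain randomization: randomize scene parameters and emit
    # exact labels without human annotators (names are hypothetical).

    WEATHER = ["clear", "rain", "fog", "night"]

    def synth_sample():
        """Return one randomized scene plus its ground-truth labels."""
        n_vehicles = random.randint(0, 12)
        scene = {
            "weather": random.choice(WEATHER),
            "sun_elevation_deg": random.uniform(-10, 70),
            "vehicles": [
                {
                    "x_m": random.uniform(-50, 50),
                    "y_m": random.uniform(2, 120),
                    "heading_deg": random.uniform(0, 360),
                }
                for _ in range(n_vehicles)
            ],
        }
        labels = {"vehicle_count": n_vehicles}  # known exactly, for free
        return scene, labels

    dataset = [synth_sample() for _ in range(1000)]
    ```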

    As an aside, Omniverse looks quite interesting for the small fish in digital design too, with an ecosystem that could allow an independent designer using Blender to collaborate with a studio that uses 3ds Max (see the sketch below); I don't know if that option comes with strings attached, but the potential is surely there.
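
    That tool-agnostic interchange rests on Pixar's Universal Scene Description (USD), the scene format Omniverse is built around. Here is a minimal sketch of USD's layering mechanism, which lets one party override another's scene without touching the source file; it assumes the pxr Python bindings (pip install usd-core), and the file and prim names are illustrative.

    ```python
    from pxr import Usd, UsdGeom

    # The "Blender side" authors a base scene and saves it to disk.
    asset = Usd.Stage.CreateNew("asset.usda")
    UsdGeom.Xform.Define(asset, "/World")
    UsdGeom.Sphere.Define(asset, "/World/Hero")
    asset.GetRootLayer().Save()

    # The "3ds Max side" sublayers that file and stores only its own
    # overrides, so both parties keep working on the same scene.
    shot = Usd.Stage.CreateNew("shot.usda")
    shot.GetRootLayer().subLayerPaths.append("asset.usda")
    hero = UsdGeom.Sphere(shot.GetPrimAtPath("/World/Hero"))
    hero.GetRadiusAttr().Set(2.0)  # override lives in shot.usda only
    shot.GetRootLayer().Save()
    ```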


Other stories you might like

  • Nvidia, Siemens tout 'industrial metaverse' to predict the future
    Using Pixar-derived tech to make digital twins immersive

    Siemens and Nvidia don’t want manufacturers to imagine what the future will hold – they want to build a fancy digital twin that helps them to make predictions about whatever comes next.

    During a press conference this week, Siemens CEO Roland Busch painted a picture of a future in which manufacturers are besieged by productivity, labor, and supply chain disruptions.

    "The answer to all of these challenges is technology and digitalization," he said. "The point is, we have to make the digital twin as realistic as possible and bring it as close as possible to the real world."

  • Nvidia wants to lure you to the Arm side with fresh server bait
    GPU giant promises big advancements with Arm-based Grace CPU, says the software is ready

    Interview 2023 is shaping up to be a big year for Arm-based server chips, and a significant part of this drive will come from Nvidia, which appears steadfast in its belief in the future of Arm, even if it can't own the company.

    Several system vendors are expected to push out servers next year that will use Nvidia's new Arm-based chips. These consist of the Grace Superchip, which combines two of Nvidia's Grace CPUs, and the Grace-Hopper Superchip, which brings together one Grace CPU with one Hopper GPU.

    The vendors lining up servers include American companies Dell Technologies, HPE, and Supermicro, as well as Lenovo in Hong Kong, Inspur in China, and ASUS, Foxconn, Gigabyte, and Wiwynn in Taiwan. The servers will target application areas where high performance is key: AI training and inference, high-performance computing, digital twins, and cloud gaming and graphics.

  • Lenovo reveals small but mighty desktop workstation
    ThinkStation P360 Ultra packs latest Intel Core processor, Nvidia RTX A5000 GPU, support for eight monitors

    Lenovo has unveiled a small desktop workstation in a new physical format that's smaller than previous compact designs, but which it claims still has the type of performance professional users require.

    Available from the end of this month, the ThinkStation P360 Ultra comes in a chassis that is less than 4 liters in total volume, but packs in 12th Gen Intel Core processors – that's the latest Alder Lake generation with up to 16 cores, but not the Xeon chips that we would expect to see in a workstation – and an Nvidia RTX A5000 GPU.

    Other specifications include up to 128GB of DDR5 memory, two PCIe 4.0 slots, up to 8TB of storage using plug-in M.2 cards, plus dual Ethernet and Thunderbolt 4 ports, and support for up to eight displays, the latter of which will please many professional users. Pricing is expected to start at $1,299 in the US.

  • Restructure at Arm focused on 'non-engineering' roles
    Meanwhile, CEO wants to vacuum up engineering talent amid return to stock market

    Updated Arm today told The Reg its restructuring ahead of its return to the stock market is focused on cutting "non-engineering" jobs.

    This is after we queried comments made this morning by Arm chief executive Rene Haas in the Financial Times, in which he indicated he was looking to use funds generated by the expected public listing to expand the company, hire more staff, and potentially pursue acquisitions. This comes as some staff face the chop.

    This afternoon we were told by an Arm spokesperson: "Rene was referring more to the fact that Arm continues to invest significantly in its engineering talent, which makes up around 75 percent of the global headcount. For example, we currently have more than 250 engineering roles available globally."

  • Will optics ever replace copper interconnects? We asked this silicon photonics startup
    Star Trek's glowing circuit boards may not be so crazy

    Science fiction is littered with fantastic visions of computing. One of the more pervasive is the idea that one day computers will run on light. After all, what’s faster than the speed of light?

    But it turns out Star Trek’s glowing circuit boards might be closer to reality than you think, Ayar Labs CTO Mark Wade tells The Register. While fiber optic communications have been around for half a century, we’ve only recently started applying the technology at the board level. Even so, Wade expects that, within the next decade, optical waveguides will begin supplanting the copper traces on PCBs as shipments of optical I/O products take off.

    Driving this transition are a number of factors and emerging technologies that demand ever-higher bandwidths across longer distances without sacrificing latency or power.

  • GPUs aren’t always your best bet, Twitter ML tests suggest
    Graphcore processor outperforms Nvidia rival in team's experiments

    GPUs are a powerful tool for machine-learning workloads, though they’re not necessarily the right tool for every AI job, according to Michael Bronstein, Twitter’s head of graph learning research.

    His team recently showed Graphcore’s AI hardware offered an “order of magnitude speedup when comparing a single IPU processor to an Nvidia A100 GPU” in temporal graph network (TGN) models.

    “The choice of hardware for implementing Graph ML models is a crucial, yet often overlooked problem,” reads a joint article penned by Bronstein with Emanuele Rossi, an ML researcher at Twitter, and Daniel Justus, a researcher at Graphcore.

  • Nvidia taps Intel’s Sapphire Rapids CPU for Hopper-powered DGX H100
    A win against AMD as a much bigger war over AI compute plays out

    Nvidia has chosen Intel's next-generation Xeon Scalable processor, known as Sapphire Rapids, to go inside its upcoming DGX H100 AI system to showcase its flagship H100 GPU.

    Jensen Huang, co-founder and CEO of Nvidia, confirmed the CPU choice during a fireside chat Tuesday at the BofA Securities 2022 Global Technology Conference. Nvidia positions the DGX family as the premier vehicle for its datacenter GPUs, pre-loading the machines with its software and optimizing them to provide the fastest AI performance as individual systems or in large supercomputer clusters.

    Huang's confirmation answers a question we and other observers have had about which next-generation x86 server CPU the new DGX system would use since it was announced in March.

  • AMD nearly doubles Top500 supercomputer hardware share
    Intel loses out as Instinct GPUs power the world’s fastest big-iron system

    Analysis In a sign of how meteoric AMD's resurgence in high performance computing has become, the latest list of the world's 500 fastest publicly known supercomputers shows the chip designer has become a darling among organizations deploying x86-based HPC clusters.

    The most eye-catching bit of AMD news among the supercomputing set is the announcement of the Frontier supercomputer at the US Department of Energy's Oak Ridge National Laboratory, which displaced Japan's Arm-based Fugaku cluster for the No. 1 spot on the Top500 list of the world's most-powerful publicly known systems.

    Top500 updates its list twice a year and published its most recent update on Monday.

  • Los Alamos to power up supercomputer using all-Nvidia CPU, GPU Superchips
    HPE-built system to be used by Uncle Sam for material science, renewables, and more

    Nvidia will reveal more details about its Venado supercomputer project today at the International Supercomputing Conference in Hamburg, Germany.

    Venado is hoped to be the first in a wave of high-performance computers that use an all-Nvidia architecture, in this case using Grace-Hopper Superchips that combine CPU and GPU dies, and Grace CPU-only Superchips.

    This supercomputer "will be the first system deployed not just with Grace-Hopper in terms of the converged Superchip but it’ll also have a cluster of Grace CPU-only Superchip modules,” Dion Harris, Nvidia’s head of datacenter product marketing for HPC, AI, and Magnum IO, said during an Nvidia press conference ahead of ISC.

  • Despite global uncertainty, $500m hit doesn't rattle Nvidia execs
    CEO acknowledges impact of war, pandemic but says fundamentals ‘are really good’

    Nvidia is expecting a $500 million hit to its global datacenter and consumer business in the second quarter due to COVID lockdowns in China and Russia's invasion of Ukraine. Despite those and other macroeconomic concerns, executives are still optimistic about future prospects.

    "The full impact and duration of the war in Ukraine and COVID lockdowns in China is difficult to predict. However, the impact of our technology and our market opportunities remain unchanged," said Jensen Huang, Nvidia's CEO and co-founder, during the company's first-quarter earnings call.

    Those two statements might sound a little contradictory, including to some investors, particularly following the stock selloff yesterday after concerns over Russia and China prompted Nvidia to issue lower-than-expected guidance for second-quarter revenue.

