Instant NeRF turns 2D photos into 3D scenes in seconds

Nvidia has hashed out a new approach to neural radiance field (NeRF) technology that will generate a fully rendered 3D scene from just a few still photos, all in a matter of seconds, including model training time. NeRFs themselves were created in 2020 as a method "for synthesizing novel views of complex scenes" based on only a …

  1. Anonymous Coward
    Anonymous Coward

    Good luck buying one if you're not rich

    "Nvidia built the model using its proprietary CUDA platform used for GPU computing mining cryptocurrencies"

  2. Anonymous Coward
    Anonymous Coward

I would like to turn a series of photos of my clay sculptures into 3D for printing. Unfortunately my Intel 870 W7 PC is still going strong - but won't support anything after PCIe 2.0. I could afford a new PC, but it is very hard to justify one just to get the right version of CUDA support for the conversion applications. I wouldn't mind if it took a week to process one set on my 870 with its old "top end" GTX card.

    1. matjaggard

      We use GPUs in the cloud for video editing at church. Maybe that would solve your problem for a few £ or $

      1. cornetman Silver badge

        Indeed, this is a tech begging to be offered from a cloud service. It sounds like the kind of thing that a lot of companies and designers would pay for.

    2. Tom 7 Silver badge

There can't be more than a couple of dozen free programs out there that turn a few photos into 3D models without the need for a GPU. Google may be shit, but it seems not as shit as most of its potential customers.

      1. Anonymous Coward
        Anonymous Coward

        1. Anonymous Coward
          Anonymous Coward

          The comparison gives dates for the applications. They all seem to be from the era when W7 was the Windows OS.

After several days of dead ends, "123D Catch" seemed a likely one. However, the makers had stopped supporting it and had integrated its features into their latest W10-only application. Managed to find an old download on CN which appeared to install OK on W7. However it doesn't run - nothing happens. Possibly its first action would be to call home for its mandatory sign-up.

I installed "Blender" on Linux - but like some other candidates it needed the photo in SVG format. Two JPG-to-SVG converters produced just black & white abstracts that looked nothing like the picture. A third was a little more recognisable in shades of brown - but far too blocky to be of any use.

Finally resorted to "PureRef" - which allows you to overlay your picture transparently on any 3D modelling window. So I modelled by eye, using my old clay modelling skills with the picture as a rough template. Bought a Wacom tablet for hopefully finer movement control, as I appear to have given away my previous ones. As it is a one-off project, I might as well have used oven-baking clay and my collection of physical sculpture tools.

    3. Anonymous Coward
      Anonymous Coward

There are quite a few phone apps that do photogrammetry in the cloud, or you can see what web-based services are out there. Current techniques (i.e. not fancy Nvidia research) will need you to take a lot more than just a couple of pictures, but the results will still be decent.

    4. Dave 126 Silver badge

If you still have your sculptures, some of the photogrammetry applications can be sped up by using an inexpensive line laser (as used on building sites to project a horizontal or, as you will want, a vertical reference line) and a simple turntable. You place the model on the turntable, project a vertical laser line onto it and take a picture. Then you rotate the model incrementally. Clay lends itself to this because of its uniform matte finish... though if you glazed it, it might make things trickier for this technique.
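The laser-line trick above works by simple triangulation: the camera sees the projected line shifted sideways by an amount that depends on the surface depth. A minimal sketch of the geometry, assuming a pinhole camera and a laser offset horizontally from it and tilted in toward the optical axis (all names and parameters here are illustrative, not from any particular scanning app):

```python
import math

def laser_point_depth(pixel_x, image_center_x, focal_px,
                      baseline_m, laser_angle_rad):
    """Triangulate depth for one pixel lying on the projected laser line.

    The laser sits `baseline_m` to the side of the camera and is tilted
    by `laser_angle_rad` (from the optical axis) toward the camera's view.
    Intersecting the camera ray with the laser plane gives, in top view:
        depth = baseline / (tan(laser_angle) + tan(ray_angle))
    """
    # Horizontal angle of the camera ray through this pixel.
    ray_angle = math.atan2(pixel_x - image_center_x, focal_px)
    denom = math.tan(laser_angle_rad) + math.tan(ray_angle)
    if abs(denom) < 1e-9:
        raise ValueError("ray is parallel to the laser plane")
    return baseline_m / denom
```

Sweeping the turntable through small increments and running this over each column of the detected laser line yields one depth profile per photo, which together form the point cloud.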

      As far as I know, the 3D laser scanners that are built into more recent Samsung and Apple phones probably don't have the resolution you require (Samsung's ToF sensor was roughly equivalent to VGA last I looked).

      1. Anonymous Coward
        Anonymous Coward

That's an idea. I have a line laser in my tool box waiting for an application, and several turntables among my clay sculpture tools.

        Currently I'm on a steep learning curve with 3D SketchUp and PLA+ printing on my new Mingda Magician X printer. That is really good - when it works properly. Problems with the touch screen and bed belt might mean it gets an RMA.

  3. Anonymous Coward
    Anonymous Coward

    They do imply that it has to be one of their AI-enabled cards, though, which means 20 series or later, so I and many hundreds of thousands of others are out of luck...

    1. Tom 7 Silver badge

      AI enabled?

That's a new piece of marketing bollocks I've not heard of before!

  4. cornetman Silver badge

    > ... self-driving cars to better understand the size and shape of objects based on 2D images or video.

This stood out for me. A car that could "see" in 3D from pictures it takes could be game-changing for autonomous vehicles, but let's see how it performs in less-than-ideal conditions, and whether they can get the tech down to sub-second processing on the kind of equipment that could be put in a car or bus.

    That's a lot of big ifs.

    1. Anonymous Coward
      Anonymous Coward

      Wow, you should tell the autonomous car people. I'm sure they've never heard of photogrammetry before.

  5. Mayday Silver badge

    I hope I'm not the only one

    Who visualised GPU-branded Super-Soakers and foam bullet flinging machine guns when they saw this.

    1. Tom 7 Silver badge

      Re: I hope I'm not the only one

      Possibly - I got my PICO to do that for me!

  6. Craig 2

    Move in. Enhance.

  7. teknopaul Silver badge


Have to say the demo is impressive: there are no moving elements, just virtual camera movements.

The walls in the distance are stationary, but presumably the effect seen is deliberate: as the camera moves around the central character the image is not blurry at all, it's photorealistic and high definition.

It would be more accurate to say, from the demo, that fast-moving objects are rendered with motion blur, which is a pretty standard approach because it looks good, even when rendering from a full 3D model.
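The standard approach mentioned here is essentially an accumulation buffer: render the scene at several instants within the shutter interval and average the frames. A minimal sketch, assuming a hypothetical `render(t)` callable that returns a flat list of pixel intensities (not any particular renderer's API):

```python
def motion_blur(render, t0, t1, samples=8):
    """Approximate motion blur by averaging `samples` renders taken at
    evenly spaced times across the shutter interval [t0, t1]."""
    # Sample at the midpoint of each sub-interval to avoid bias
    # toward either end of the shutter.
    frames = [render(t0 + (t1 - t0) * (i + 0.5) / samples)
              for i in range(samples)]
    n_pixels = len(frames[0])
    return [sum(f[p] for f in frames) / samples for p in range(n_pixels)]
```

More samples give smoother blur at proportionally higher render cost, which is why fast-moving objects are where this trades off best: the blur hides undersampling that would be obvious on a static subject.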

If that is rendered in seconds, it's amazing. I remember the days of waiting 30 minutes for some spinning text to render in 3D Studio.
