Good luck buying one if you're not rich
"Nvidia built the model using its proprietary CUDA platform used for
GPU computing mining cryptocurrencies"
Nvidia has hashed out a new approach to neural radiance field (NeRF) technology that will generate a fully rendered 3D scene from just a few still photos, all in a matter of seconds, including model training time. NeRFs themselves were created in 2020 as a method "for synthesizing novel views of complex scenes" based on only a …
I would like to turn a series of photos of my clay sculptures into 3D models for printing. Unfortunately my Intel 870 W7 PC is still going strong - but won't support anything after PCIe 2.0. I could afford a new PC, but it is very hard to justify one just to get the right version of CUDA support for the conversion applications. I wouldn't mind if it took a week to process one set on my 870 with its old "top end" GTX card.
The comparison gives dates for the applications. They all seem to be from the era when W7 was the Windows OS.
After several days of dead ends, "123D Catch" seemed a likely one. However, the makers had stopped supporting it and had integrated its features into their latest W10-only application. Managed to find an old download on CN which appeared to install OK on W7. However it doesn't run - nothing happens. Possibly its first action would be to call home for its mandatory sign-up.
I installed "Blender" on Linux - but like some other candidates it needed the photo in SVG format. Two JPG-to-SVG converters produced just black & white abstracts that looked nothing like the picture. A third was a little more recognisable in shades of brown - but far too blocky to be of any use.
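The black & white abstracts are a predictable failure mode: simple raster-to-vector converters reduce the photo to 1-bit before tracing, so all tonal detail is thrown away. A hypothetical sketch of that behaviour (the function name and the tiny test image are mine, not any real tool's) - each pixel darker than the threshold becomes a filled SVG rectangle, everything else becomes white space:

```python
# Hypothetical illustration of naive JPG-to-SVG conversion: threshold the
# image to 1-bit, then emit one black rectangle per surviving dark pixel.
def grayscale_to_svg(pixels, threshold=128, scale=10):
    rows, cols = len(pixels), len(pixels[0])
    parts = ['<svg xmlns="http://www.w3.org/2000/svg" '
             'width="%d" height="%d">' % (cols * scale, rows * scale)]
    for y, row in enumerate(pixels):
        for x, value in enumerate(row):
            if value < threshold:  # dark pixel -> filled black square
                parts.append('<rect x="%d" y="%d" width="%d" height="%d"/>'
                             % (x * scale, y * scale, scale, scale))
    parts.append('</svg>')
    return ''.join(parts)

img = [[30, 200],   # 2x2 grayscale stand-in for a photo
       [90, 250]]
svg = grayscale_to_svg(img)
print(svg.count('<rect'))  # prints 2: only the two dark pixels survive
```

All the mid-tones of a photo of brown clay land on one side of the threshold or the other, hence the abstract blobs.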
Finally resorted to "PureRef" - which allows you to overlay your picture transparently on any 3D modelling window. So I am using my old clay modelling skills, with the picture as a rough template, and then my eye. Bought a Wacom tablet in the hope of finer movement control - I appear to have given away my previous ones. As it is a one-off project, I might as well have used oven-baking clay and my collection of physical sculpting tools.
If you still have your sculptures, some of the photogrammetry applications can be sped up by using an inexpensive line laser (as used on building sites to project a horizontal or, as you will want here, a vertical line) and a simple turntable. You place the model on the turntable, project a vertical laser line on it and take a picture. Then you rotate the model incrementally. Clay lends itself to this because of its uniform matte finish... though if you glazed it, that might make things trickier for this technique.
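If you do try the laser-and-turntable route, the geometry is simple enough to sketch. The Python below is a hypothetical illustration (all names and the stand-in numbers are mine): it assumes the vertical laser plane passes through the turntable axis and the camera has been calibrated, so the lit pixel in each image row directly gives the local radius in mm, and the turntable is swept in 10-degree steps.

```python
import math

def profile_to_points(radii_mm, angle_deg, mm_per_row=1.0):
    """Convert one laser-line profile (radius per image row) captured at a
    given turntable angle into (x, y, z) points in mm."""
    theta = math.radians(angle_deg)
    points = []
    for row, r in enumerate(radii_mm):
        if r is None:          # no laser line detected in this row
            continue
        points.append((r * math.cos(theta),
                       r * math.sin(theta),
                       row * mm_per_row))
    return points

# One full scan: a profile every 10 degrees (36 steps per revolution).
cloud = []
for step in range(36):
    radii = [20.0, 21.5, None, 19.0]   # stand-in for a detected line profile
    cloud.extend(profile_to_points(radii, step * 10))
print(len(cloud))  # 3 usable rows x 36 steps = 108 points
```

The resulting point cloud still needs meshing (e.g. in Blender or MeshLab) before it can be sliced for printing, but the per-picture processing really is this cheap.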
As far as I know, the 3D laser scanners that are built into more recent Samsung and Apple phones probably don't have the resolution you require (Samsung's ToF sensor was roughly equivalent to VGA last I looked).
That's an idea. I have a line laser in my tool box waiting for an application. Several turntables in my clay sculpture tools.
Currently I'm on a steep learning curve with 3D modelling in SketchUp and PLA+ printing on my new Mingda Magician X printer. It is really good - when it works properly. Problems with the touch screen and bed belt might mean it gets an RMA.
> ... self-driving cars to better understand the size and shape of objects based on 2D images or video.
This stood out for me. A car that could "see" in 3D from pictures that it takes could be game-changing for autonomous vehicles, but let's see how it performs in less than ideal conditions, and whether they can get the tech down to subsecond processing on the kind of equipment that could be put in a car or bus.
That's a lot of big ifs.
Have to say the demo is impressive, though there are no moving elements in it - just virtual camera movements.
The walls in the distance are stationary, but presumably the effect seen is deliberate: as the camera moves around the central character the image is not blurry at all - it's photorealistic and high definition.
It would be more accurate, going by the demo, to say fast-moving objects are rendered with motion blur, which is a pretty standard approach because it looks good, even when rendering from a full 3D model.
If that is rendered in seconds it's amazing. I remember the days of waiting 30 mins for some spinning text to render in 3D Studio.
Biting the hand that feeds IT © 1998–2022