Using an accelerator because you write in the slowest programming language around.
If you’ve been thinking about playing with an Nvidia single-board computer for an AI task, but you’re not quite ready to part with your cash for something like the Jetson Nano just yet, here’s an application-level emulator of the hardware you can tinker with. It's the Jetson AI-Computer Emulator, an open-source project created …
Plus, running the Python interpreter on hardware that is a fair bit slower than commodity x86 seems wasteful.
The Jetson Nano (and Raspberry Pi) are great, don't get me wrong, but running non-native code on them seems like a poor use of their limited resources.
The Python and C++ APIs are also nearly identical (the Python API is just a fat binding on top of the C++ one). I am not sure why those smart AI scientists would find it difficult, frankly.
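To illustrate what "a binding on top of the C++ one" means in practice, here's a minimal sketch of the general mechanism using Python's `ctypes` and the system C math library as a stand-in (this is not the actual jetson-inference binding, just the same idea: the Python call is a thin shim and the work happens in compiled code):

```python
import ctypes
import ctypes.util

# Load the native math library; on Linux this resolves to libm.so.6.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature: double sqrt(double).
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

# The Python-side call is just marshalling; the computation runs
# in the compiled library, exactly as with a C++-backed AI API.
print(libm.sqrt(16.0))
```

Real bindings like the Jetson ones are generated at a larger scale, but the calling convention is the same, which is why the two APIs mirror each other so closely.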
I use a 4GB Nano to tinker with CUDA. It's a well-sorted little SBC and the only one I know of that comes near the Raspberry Pi as a combined hardware/software/community package. It even uses Pi accessories such as the camera.
In fact I’m impressed enough with it to consider dropping a bit more cash on a Xavier NX.