OpenAI uses cunning code to speed up GPU machine learning

Researchers at OpenAI have launched a library of tools that can help researchers build faster, more efficient neural networks that take up less memory on GPUs. Neural networks are made up of layers of connected nodes. The architecture of these networks is highly variable depending on the data and application, but all models …
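The snippet above is cut off, so the details of OpenAI's library are not shown here. Purely as an illustration of the "layers of connected nodes" it mentions, here is a minimal NumPy sketch of a dense forward pass; the layer sizes and activation are made up and this is not the library's API.

```python
import numpy as np

def forward(x, weights, biases):
    """Run an input batch through a stack of fully connected layers."""
    for W, b in zip(weights, biases):
        x = np.maximum(x @ W + b, 0.0)  # linear transform followed by ReLU
    return x

rng = np.random.default_rng(0)
sizes = [256, 512, 512, 10]  # hypothetical layer widths
weights = [rng.standard_normal((m, n)) * 0.01 for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

out = forward(rng.standard_normal((32, sizes[0])), weights, biases)
print(out.shape)  # (32, 10)
```

The weight matrices in a stack like this are what dominate GPU memory, which is the cost the article says the new tools aim to cut.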

  1. Rebel Science

    Deep Learning Is a Hindrance to Progress Toward True AI

    So OpenAI wants to develop AGI but they are still spending talent and busloads of cash on DNNs, connection weights and backpropagation? It's a sure bet AGI will not come from that bunch. Deep learning experts are the least qualified people to work on AGI. Almost everything they know and think is important is wrong. Just saying.

    1. steelpillow Silver badge

      Re: Deep Learning Is a Hindrance to Progress Toward True AI

      General intelligence will surely find deep learning useful. I could certainly do with it. Besides, neural net optimisation techniques developed for deep learning are likely to be of value in implementing GI architectures too. I could probably do with some neural stripping as well - pass me that Serenity DVD again...

      1. Rebel Science

        Re: Deep Learning Is a Hindrance to Progress Toward True AI

        Deep learning is certainly useful and will always be useful but not for figuring out AGI.

    2. StargateSg7

      Re: Deep Learning Is a Hindrance to Progress Toward True AI

      How true that Deep Learning IS a hindrance!

      TRUE Artificial General Intelligence will come from doing a CHEMICAL/BIOLOGICAL STRUCTURES EMULATION of the human mind. Every synapse and chemical interaction can be modeled in less than 200 PetaFLOPS (32-bit Floats) using nothing more than simple Record Structures, 0-to-10 Input/Output weightings and boolean-style Grades-of-Truth states such as ABSOLUTELY_TRUE, LIKELY_TRUE, POSSIBLY_TRUE, POSSIBLY_FALSE, LIKELY_FALSE, ABSOLUTELY_FALSE, PROCESSING_ERROR_CONDITION, WAITING_FOR_RESULT, INVALID_RESULT, CANNOT_GET_RESULT, and NO_RESULT.

      Since July of 2017, AMD and NVIDIA consumer-level graphics cards (GPUs) have offered 12 TeraFLOPS of 32-bit processing power, so it only takes 16,700 of those AMD or NVIDIA GPU cards (less than 10 million US dollars at bulk pricing) for me to have 200 PetaFLOPS, which is enough for me to EMULATE the low-level structures of the entire human brain. Add in simple SOBEL edge detection plus image-object and sound-wave vectorization and categorization code (i.e. some basic "software-based instincts"), and within 4 years of system initialization it can be fed enough input from video/audio sources and environmental sensors that it will eventually self-organize much like human brains do, reaching a general IQ level of 100 and SURPASSING THAT within less than two more years of input.
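      Taken literally, the scheme described above amounts to an enum of truth grades plus a per-synapse record. A minimal Python sketch of that follows; only the state names come from the comment, the record and field names are guesses, and the GPU-count arithmetic is simply checked as stated.

      ```python
      from dataclasses import dataclass
      from enum import Enum, auto

      class GradeOfTruth(Enum):
          # State names reproduced verbatim from the comment above.
          ABSOLUTELY_TRUE = auto()
          LIKELY_TRUE = auto()
          POSSIBLY_TRUE = auto()
          POSSIBLY_FALSE = auto()
          LIKELY_FALSE = auto()
          ABSOLUTELY_FALSE = auto()
          PROCESSING_ERROR_CONDITION = auto()
          WAITING_FOR_RESULT = auto()
          INVALID_RESULT = auto()
          CANNOT_GET_RESULT = auto()
          NO_RESULT = auto()

      @dataclass
      class Synapse:
          # Hypothetical field names; the comment specifies only
          # "0-to-10 Input/Output weightings" plus a grade-of-truth state.
          input_weight: float = 0.0    # 0-to-10 input weighting
          output_weight: float = 0.0   # 0-to-10 output weighting
          state: GradeOfTruth = GradeOfTruth.NO_RESULT

      # The GPU arithmetic as quoted: cards needed for 200 PFLOPS
      # at 12 TFLOPS of 32-bit compute per card.
      cards_needed = 200e15 / 12e12
      print(round(cards_needed))  # 16667, roughly the quoted 16,700
      ```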

      Direct Full Brain Emulation is the way to go here!

      THAT will get you Human-level and Greater-than-Human-level Artificial General Intelligence!

      1. Mellipop

        Re: Deep Learning Is a Hindrance to Progress Toward True AI

        And once we've solved "intelligence", what do we do about "consciousness"?

  2. John Smith 19 Gold badge
    Unhappy

    "used it internally to train long short-term memory networks to perform sentiment analysis"

    My first thought was "Have you fed it Musk's historical twitter feed and what did it make of them?"

    My second was "WTF. Long short-term memory? Is sentiment analysis a thing?"

    My third was "Oh dear. Looks like I'm going to have to add another Ahole to my list of ignored commentards. I don't need to lose that much bandwidth and they've never written anything worth reading."

  3. This post has been deleted by its author
