It seems quite a few of you either haven't read the entire article or have missed the point...
Some of you seem to see this article as an attack on AI - this is not the case (and the article is pretty clear on this if you read it in its entirety).
I am a computer scientist and have studied AI academically. I have a huge passion for technology, which is why I work in #privacy: to ensure that technology is used for good. I even founded a company specifically to use generative AI for good (as a privacy-enhancing tool).
The point is to illustrate the very real and significant risks - to every one of us, and to society - when we release such "tools" before they are ready (and before we are ready for them).
Specifically, the article highlights the risks that arise when these models are embedded into decision support systems and their output is taken as absolute truth.
As I explained in the piece, there are already unofficial ChatGPT APIs (created by hackers) that many companies have connected to their decision support systems.
And just this week, OpenAI opened up the model itself with a full suite of official APIs, so they can start charging for access and make some money.
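To make the "absolute truth" problem concrete, here is a minimal sketch of the pattern I'm describing: a decision support system that pipes the model's answer straight into a consequential decision, with no verification and no human review. To be clear, this is an invented illustration, not anyone's real system - the loan scenario, the approve_loan function, and the prompt are all hypothetical, and the call shape assumes the original openai Python client as released alongside the chat API.

```python
# Hypothetical illustration of the failure mode, not a real system.
# Assumes the original openai Python client (pre-1.0) and an invented
# loan-approval scenario.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def approve_loan(application_summary: str) -> bool:
    """Ask the model whether to approve a loan, then act on its answer
    with no verification: the model's output is treated as truth."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a loan officer. Answer only YES or NO."},
            {"role": "user", "content": application_summary},
        ],
    )
    answer = response.choices[0].message["content"].strip().upper()
    # The dangerous step: a consequential decision rests entirely on
    # unverified generated text, with no checks for error or bias.
    return answer.startswith("YES")
```

The code itself is trivial, and that is exactly the point: nothing in the pipeline questions the model's answer. A sane design would treat that answer as one input to be reviewed, not as a verdict.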
On any social media platform you will find hundreds of millions of people raving about how awesome ChatGPT is and how everyone should be using it to do their work. This perfectly illustrates the "we are not ready for this yet" point - it is the absolute-truth problem in action.
These are the points of the article. Reading it as a "luddite" piece only proves them...