Re: Risk versus preparedness
Creative and nefarious use of current-day "laughable statistical parrots" is already possible. See, for example:
https://www.tomshardware.com/tech-industry/cyber-security/multiple-chatgpt-instances-work-together-to-find-and-exploit-security-flaws-teams-of-llms-tested-by-uiuc-beat-single-bots-and-dedicated-software
Quotes from the Tom's Hardware article:
* Teams of GPT-4 instances can work together to autonomously identify and exploit zero-day security vulnerabilities, without any description of the nature of the flaw. This new development, with a planning agent commanding a squad of specialist LLMs, works faster and smarter than human experts or dedicated software.
* Kang describes the need for this system: "Although single AI agents are incredibly powerful, they are limited by existing LLM capabilities. For example, if an AI agent goes down one path (e.g., attempting to exploit an XSS), it is difficult for the agent to backtrack and attempt to exploit another vulnerability." Kang continues, "Furthermore, LLMs perform best when focusing on a single task."
* The planner agent surveys the website or application to determine which exploits to try, assigning these to a manager that delegates to task-specific agent LLMs. This system, while complex, is a major improvement over the team's previous research and even open-source vulnerability scanning software.
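The planner/manager/specialist hierarchy the article describes can be sketched in a few lines. This is a toy illustration only, with hypothetical names throughout; the real system drives GPT-4 instances and attack tooling, which are stubbed out here with trivial placeholders:

```python
# Minimal sketch of the planner -> manager -> specialist delegation pattern
# described in the article. All class names and the toy "success" logic are
# hypothetical; a real agent would prompt an LLM and run exploit tooling.

from dataclasses import dataclass, field


@dataclass
class SpecialistAgent:
    """Focuses on a single exploit class (e.g. XSS, SQLi), per the article's
    point that LLMs perform best on a single task."""
    skill: str

    def attempt(self, target: str) -> bool:
        # Stub: toy success condition standing in for an actual exploit attempt.
        return self.skill in target


@dataclass
class Manager:
    """Dispatches one task per specialist, so a failed path by one agent
    doesn't block the others from being tried."""
    specialists: list = field(default_factory=list)

    def run(self, target: str, tasks: list) -> dict:
        results = {}
        for task in tasks:
            agent = next((s for s in self.specialists if s.skill == task),
                         SpecialistAgent(task))  # spin up a specialist if none exists
            results[task] = agent.attempt(target)
        return results


class Planner:
    """Surveys the target and decides which exploit classes to try."""

    def survey(self, target: str) -> list:
        # Stub: a real planner would crawl the site and reason over it.
        return ["xss", "sqli", "csrf"]

    def plan_and_execute(self, target: str, manager: Manager) -> dict:
        return manager.run(target, self.survey(target))


if __name__ == "__main__":
    manager = Manager([SpecialistAgent("xss"), SpecialistAgent("sqli")])
    report = Planner().plan_and_execute("demo-site-with-sqli", manager)
    print(report)
```

The point of the structure is the one Kang makes: because each exploit class is an independent task handed to its own agent, one dead end (the failed "xss" attempt here) doesn't prevent the other paths from being explored.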
=> I don't call that good prospects, especially for a technology still in its infancy. This most certainly isn't true machine intelligence, and it's nothing like advanced AI going rogue. But remember that these researchers are poorly funded teams. Tens and tens of billions have been poured into new "AI tools" every year since this gold rush started. Calling this tech "harmless" and "an idiocy that will completely disappear once the bubble implodes" seems a bit optimistic to me.