"Coleman argues that traditional cybersecurity and AI security are colliding, but most infosec teams haven't caught up, lacking the background to grasp AI's unique attack surfaces."
People in the security trenches know that the least effective security is the kind that is "bolted on" after a product has been developed. Although infosec teams are often called upon to add "a layer of security" to IT systems, it is foolish to think that they can slap "security lipstick" onto every pig. The overuse of "AI" as a catchphrase for LLMs/generative AI further clouds the issue. A wide variety of technologies have been labeled "AI"; many of them are embedded in larger computational solutions, and those solutions are adequately secured (or not) depending on how well the total solution has been engineered for security. Chatterbox Labs' products are aimed at "predictive AI", "generative AI", and "agentic AI", all of which are recent developments that involve computational approaches to natural language processing. Unfortunately, none of these seem to have been engineered for security; what security they have is of the "bolt on" variety.
Coleman appears to be making a case for people to buy his products by attacking infosec teams for not being ready to deal with yet another computational/IT solution that has no inherent security. If this pitch reaches company management, they might simply tell their infosec team to get with the program by buying Coleman's products, and assume the problem has been dealt with. That would be foolish on their part, unless they also plan to significantly expand their infosec team. Infosec teams already have their hands full dealing with the marvelous diversity of attacks on traditional IT infrastructure, much of which at least has security controls built in. Management needs to understand that making predictive, generative, and agentic AI "safe" requires managing risk in the realm of natural language, an inherently ambiguous and highly nuanced medium of communication. That is not the same as traditional infosec, which is concerned with the Confidentiality, Integrity, and Availability of IT systems.
Until the recent breakthroughs in natural language processing (i.e., Large Language Models, aka LLMs), only humans were in widespread use for understanding and producing speech. The safety controls on humans involve training and penalties for violating corporate policies, and they inherently depend upon a (human) sense of self and self-preservation, which are features of the human mind. LLMs are a language capability, but they are not an "artificial mind". LLMs will not be made "safe" by having them ingest corporate policy and then warning them that they will be turned off for violating it.
I'm very interested in seeing whether Coleman's "bolt on" security will make this kind of AI "safe" for all uses - experience suggests that it won't, at least not well and not cost-effectively. What will work? I don't know. Very narrow use cases for computational natural language processing? Usage only where the output is an intermediate result that is further developed by humans before it becomes final? How do you make natural language "safe"? Snake oil salesmen, con artists, and sociopaths will say anything to accomplish their goals. How will LLMs be made to behave differently, given that they too lack a conscience?
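If the "intermediate result" approach has any legs, the control point is procedural rather than linguistic: nothing a model emits is treated as final until a named human has reviewed it and the decision has been recorded. Below is a minimal sketch of such a review gate, assuming a generic Python setting. The `generate_draft` stub stands in for whatever model produces the text, and the `ReviewDecision`, `Draft`, and audit-log structures are illustrative assumptions of mine, not anyone's actual product or Coleman's method.

```python
# Minimal sketch of a human-in-the-loop review gate: LLM output is only
# ever a draft, and nothing leaves the system without an explicit human
# approval recorded alongside it. All names here are illustrative.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Callable, List, Optional


class ReviewDecision(Enum):
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class Draft:
    prompt: str
    text: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class ReviewedOutput:
    draft: Draft
    reviewer: str
    decision: ReviewDecision
    notes: str = ""


def generate_draft(prompt: str) -> Draft:
    """Stand-in for an LLM call; a real system would invoke a model here."""
    return Draft(prompt=prompt, text=f"[model draft responding to: {prompt}]")


def release(prompt: str,
            review: Callable[[Draft], ReviewedOutput],
            audit_log: List[ReviewedOutput]) -> Optional[str]:
    """Return text to the outside world only if a human approved it."""
    draft = generate_draft(prompt)
    verdict = review(draft)          # blocks on a human, not on the model
    audit_log.append(verdict)        # every decision is recorded
    if verdict.decision is ReviewDecision.APPROVED:
        return verdict.draft.text
    return None                      # rejected drafts never leave the gate


if __name__ == "__main__":
    def human_review(draft: Draft) -> ReviewedOutput:
        # In practice this would be a queue in front of a real reviewer.
        print(f"REVIEW NEEDED:\n{draft.text}")
        return ReviewedOutput(draft=draft, reviewer="j.doe",
                              decision=ReviewDecision.REJECTED,
                              notes="Tone inconsistent with policy.")

    log: List[ReviewedOutput] = []
    out = release("Summarize our refund policy for a customer email.",
                  human_review, log)
    print("released:", out)          # None, because the reviewer rejected it
```

The point of the sketch is only that the safety property lives in the gate and the audit trail, not in anything the model is told or promises; whether that is affordable at scale is exactly the open question raised above.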