(some) hidden dangers of agentic/generative AI
We get answers, but we don’t know how good the answers are:
When I do my own research and thinking, I gain insight into the structure of the problem and its context, as well as the space of possible answers. When an AI tells me an answer, I have much less insight into how good that answer is compared to the alternatives.
We have answers, but we don’t know anything:
There are many ways to evaluate choices and make tradeoffs. Analytic disciplines such as systems engineering devote considerable effort to developing systematic qualitative and quantitative approaches to analyzing choices and evaluating tradeoffs. When these tools are applied to a specific problem, the outcome is often determined not by the tools alone, but by the insight gained in applying them. When someone or something hands us the answers, even with an explanation of the rationale behind them, it is that entity's rationale, not one we have developed through our own analysis of the problem.
We have answers, but we don’t even know what the questions are/were:
An agentic AI that is tapped into all of our context and interactions may make recommendations, telling us to do this or that, without our ever knowing what motivated those recommendations.
We get our answers from middleware that is tapped into every aspect of our life and thinking but ultimately reports to someone else:
At one point it was considered alarming that a movie rental business might come to know something about its customers through its record of every movie each customer had rented. How quaint that seems in light of the near-constant data collection taking place today, as knowledge of everything we do is concentrated in the hands of credit card issuers and Internet behavioral surveillance systems like those operated by Google and Facebook. Now we risk placing an even more complete awareness of all that we do in the hands of the operators of AI agents. Jarvis was, at least in theory, intensely loyal to Tony Stark; agents offered as a service will not be loyal to their users.