Fugazi?
KYEO!
Lying means dying, at least for one falsehood-peddling government AI. A Microsoft-powered chatbot that New York City rolled out to help business owners answer frequently asked questions – but was often wrong – has been silenced as the city grapples with a $12 billion budget shortfall. Mayor Zohran Mamdani said that, while the …
That depends on what sycophancy means, but we know what they meant when they said it, and it isn't far from the dictionary definition. A sycophant might have all sorts of reasons to flatter and support the object of their attentions, but the core of it, the unquestioning and effusive agreement, is what matters. The output from an LLM being called sycophantic is often both of those things.
Awww come on! They're like a modern version of Thomas Edison's 1890 talking doll and later Chatty Cathys that endlessly repeat 'I agree', 'you are so right', 'I love you' and so forth when you prompt-pull their strings ... no wonder they're such a hit!
Rather like the orange dotard's Cabinet meetings.
It's both, and probably neither is specifically tilted in anyone's favor. It could just as easily assure an employee that something they want is definitely legal whether or not it actually is, but it might also be producing specifically incorrect information because of something in its training data, which, being an LLM, it's incapable of reducing to the specific rules the law requires. Laws about what employers of tipped workers can do are complex, and that's a summary from a non-governmental source. Being trained on the more absolute but less clearly written law, regulation, and probably some court proceedings is not a situation an LLM can generally handle without making plenty up.
I think that is incorrect. That is what New York City claims to have done, and even if we assume they really didn't, I don't know where you get your confidence that they could have. LLMs do not perfectly understand the text they're trained on, so even a model trained only on the relevant laws can still get answers wrong by random chance. The extra randomness and variability of unrestricted prompts makes this worse.
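To see why "trained only on the right text" doesn't guarantee right answers: LLMs decode by sampling tokens from a probability distribution, so a small probability mass on the wrong answer means it will sometimes be emitted. Here's a toy sketch (the logits and answer tokens are invented for illustration, not from any real model) showing softmax sampling occasionally picking the disfavored token:

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample(probs, rng):
    """Draw one token according to its probability."""
    r = rng.random()
    cum = 0.0
    for tok, p in probs.items():
        cum += p
        if r < cum:
            return tok
    return tok  # fallback for floating-point rounding

# Hypothetical logits: the model strongly favors "legal" over "illegal".
logits = {"legal": 4.0, "illegal": 1.0}
probs = softmax(logits)

rng = random.Random(0)
answers = [sample(probs, rng) for _ in range(1000)]
wrong = answers.count("illegal")
print(f"P(wrong token) ≈ {probs['illegal']:.3f}; observed {wrong}/1000")
```

Even with the "right" answer weighted roughly 20:1, around 5% of the sampled responses come out wrong. Real chatbots layer retrieval and prompting on top, but the underlying decoding step is still probabilistic, not rule-following.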
Take the summary I linked above about what New York state and city laws allow you to do with workers' tips, since that's the subject of this comment thread, though I don't know why. Here are two sentences about tip pooling arrangements:
"Under New York labor law, employers are allowed to require their employees to share tips, but they cannot participate in the tip pools themselves."
"The key requirements are that: The pooling system is voluntary [...]"
What does that mean? Those appear to contradict each other: the employer can require people to do something, as long as the people can choose to do that thing voluntarily. If I'm an employee who doesn't want to be in a tip pool, the bot can show me the voluntary line, whereas if I'm an employee who does, and hopes the employer will make that happen, the bot can tell me it's fine to require it. The LLM is trying to respond to the prompt, and when the prompt assumes something exists, the LLM looks for plausible text that best satisfies that prompt; it has no real legal understanding. In legal fact there's probably a definition of "voluntary" that matters here, but the bot doesn't know that, because it doesn't know anything.
ClippyAI: The MyCity Business virtual assistant is a specialized, AI-powered conversational tool developed by New York City (NYC) in partnership with Microsoft, designed to assist small business owners with navigating regulatory and operational requirements. Built on Microsoft Azure OpenAI Service and leveraging the capabilities of ChatGPT (specifically GPT-3.5 and, subsequently, GPT-4 models), this chatbot serves as a 24/7 digital assistant for the NYC Department of Small Business Services.
“For the first time New York City business and aspiring entrepreneurs will be able to direct their questions to an AI-powered chatbot rather than having to scan through webpage after webpage going into the blackhole of uncertainty about how to open a business, how to run a business, how to answer some of the basic questions,” he said during the announcement. “That is behind us. AI-generated answering is in front of us.”
Maybe I'm naive, but wouldn't it be simpler, and far more efficient, to just provide simple rules (and therefore answers) for these questions, in plain language?