Wow.
A California politician worried about excessive legislation?
I am shocked.
California Governor Gavin Newsom wants his state's legislators to know that he appreciates their efforts to put the home of Silicon Valley at the forefront of AI regulation, but he's still concerned heavy-handed laws could stifle innovation. Speaking at the GenAI Summit in San Francisco on Wednesday, Newsom said that while …
A California politician worried about excessive legislation?
By number of bills introduced or passed, I think you'll find California is pretty much middle of the road, surprisingly.
https://fiscalnote-marketing.s3.amazonaws.com/Most_Effective_States2021_v1_v4.pdf
But on the general subject of regulation, having worked in both heavily and lightly regulated businesses, and now as a regulator myself, I'd say regulation is largely an enabler. You've always got a minority, mostly SMEs, complaining that regulation is red tape and stops them doing stuff - and sometimes they're right. For smaller, potentially innovative businesses regulation is a problem, particularly because understanding regulation isn't easy, whereas larger companies have plenty of resources for people to look at regulation: to comply with it, to spot opportunities within it, or to influence changes to it for commercial advantage. If you're a six-man-and-a-dog band developing a new product and living a hand-to-mouth existence, you can't afford somebody spending weeks reading the relevant regulation, consulting expensive lawyers, and engaging with bureaucrats. But that's not a description of the modern AI company, is it? OpenAI used to be five people (plus dog) but now employs circa 1,000 people and has a valuation around $100bn.
If most regulation were pre-emptive, drugs like aspirin would probably never have been licensed, and lithium-ion batteries would be restricted largely to business applications. So what businesses are mostly seeking is a gentle slope of AI regulation - a few boundaries at first, a bit of guidance, engagement, and somebody in government to speak to. When there's more experience, people (perhaps more voters than businesses) will want more. And surprisingly, investors often like regulation - I've worked on district heating in the UK, where investment has been heavily constrained because investors are put off by a lack of regulation, and where my employer (who built and operated these) likewise wanted more operational regulation to squeeze out the shysters and fly-by-nights.
Fortunately, most regulation is created ex post, not ex ante. Or in plain English, most rules get written after somebody has already caused a problem, rather than pre-emptively because they might. In the case of AI things are different, because people are looking for some pre-emption for fear of especially bad consequences. Arguably the early talk of AI regulation is largely misplaced - the harm of AI doesn't appear to me to be a Terminator-style army of droids, or other uses and misuses, but potential harm to investors and to the environment.
And surprisingly, investors often like regulation - I've worked on district heating in the UK, where investment has been heavily constrained because investors are put off by a lack of regulation, and where my employer (who built and operated these) likewise wanted more operational regulation to squeeze out the shysters and fly-by-nights.
What's not to like about a legally enforced monopoly? For the operator, I mean. Such business arrangements far predate "The Wealth of Nations" - the king would delegate monopolies in exchange for revenue. Classifying it as capitalism is a bit of an untruth.
There are, IMO, very few pseudo-AI projects that actually bring benefit to the user.
There are many pseudo-AI projects that can easily harm people. Deepfakes started the landslide, and we now have the means of faking not only images but sound as well.
And you don't have to be a rocket scientist to use these "tools".
So yes, go ahead and stifle innovation before lives are destroyed by this monstrosity.
Maybe this is a good opportunity to "move fast and break" internet law Section 230, which is the medium through which much of that AI fraud damage will be done. Make the big internet companies liable for damages so they are forced to verify content. That would change the landscape quickly and open up new business opportunities for humans performing manual curation.
According to Wednesday's plan, the 2024-25 fiscal year has a $45 billion shortfall, and the following fiscal year a deficit of over $30 billion. That's huge. Many of the regulations won't be effective anyway unless they are created at the federal level.