US law is heading towards giving ownership and copyright to the people who can marshal the resources required to build, operate, own, and market AIaaS systems.
The point of the current set of rulings is to slow-walk toward a legal framework wherein it is possible for a Midjourney clone to sell licenses to consume and commercially u$e the output of their system, while retaining copyright and ownership in-house.
The AI is going to be the art, you dig? The developers will be the artists. The user is just a wallet on legs that generates marketable user preference and training data as they use the system for entertainment.
Expect the AI itself to be legally defended, the corporation to be empowered, and the consumer-users to be reduced to dues-paying subscriber licensees (for an extra fee).
The real money is not in generating static images, it's in generating live stereoscopic video. You get there by using millions of paying human customers to train an AI to generate static images, and you track them over time as they enter new prompts. The current crop of AI users are training the systems that will replace them.
Capital is going to start moving when you can generate and deliver a convincingly high FPS stream of conceptually interconnected image frames, along with an audio channel, driven by realtime end user biometrics.
Keeping AI users firmly within the precariat (if that's where they started) is the goal. Giving a kid with $20 the ability to generate copyrightable artifacts that allow them to change their economic fortune and become upwardly mobile is off the table. It's going to remain off the table for as long as captive regulators can keep propping up the over-capitalized technocrats that pay bureaucrats to pretend that the private equity pyramid isn't just a social compliance tool.
Bing-bot is malware, operated for profit, regardless of the risks to users.
If a human employee of a legitimate business was threatening to destroy the lives of customers, they'd be fired on the spot and charges would likely be filed.
Anyone deploying automation should be held legally responsible for the outcome.
This exposes the underlying problem with the whole ML endeavour:
Getting the skate punks on the corner to do this kind of curation work is trivially easy *if* you pay them what their time is worth.
Convincing them to pay *you* for the opportunity to do your curation work for you, so that your bosses will approve a nice fat bonus for you, that's the hard part.
At some point we're going to have to start holding corporate entities and their officers responsible for the garbage-out their AI systems inflict on users.
"This does not reflect our opinions" is about as helpful as "thoughts and prayers".
If your team puts an AI online, your opinion is explicit: you're stating that the risks to your users and the wider public are worth the benefits you'll reap. At that point, you should be legally on the hook for every piece of content your hype-bot spits out.
"I would like to add that nothing that has been said reflects the opinion of the developers (or anyone else on the staff team)."
Except that it does, quite explicitly, reflect the opinion of the developers and the staff team that the known risk of their pet AI doing something hideously offensive or dangerous at any given moment is more than offset by their ability to benefit from accepting that risk.
Raise your hand if you think Amazon or SpaceX wouldn't violate any agreement, contract, or law they could profitably afford to violate.
When your old gran gets smashed flat by flaming debris that used to be part of a flying sales funnel, they'll just tap the legal department, pay the fine, settle out of court, and launch two more.
"These alarmed developers would prefer an opt-in rather than an opt-out regime, a position the Go team rejects because it would ensure low adoption and would reduce the amount of telemetry data received to the point it would be of little value TO GOOGLE."
This is either transparently dishonest, or incredibly stupid, or a bit of both.
Either option satisfies developers who want the telemetry data more than they want privacy, and that's between them and their userbase.
Only opt-out satisfies Google, however, because they've built a global empire on the concept of monetizing and managing ignorance.
Pretending that this choice is for the benefit of users and developers is absurd.
Definitely on-brand though. Points for consistency.
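For what it's worth, the whole opt-in versus opt-out fight comes down to which value a single default takes, and who has to take action to change it. A minimal sketch in Go, purely illustrative: the names EXAMPLE_TELEMETRY, defaultMode, and telemetryMode are hypothetical and are not the Go toolchain's actual implementation.

    package main

    import (
        "fmt"
        "os"
    )

    // defaultMode is the whole dispute in one line: "off" means opt-in
    // (users must act to start sharing data), "on" means opt-out (users
    // must act to stop sharing). The mechanism is trivial; the default
    // decides who carries the burden of action, and how many people
    // never touch it.
    const defaultMode = "off"

    // telemetryMode returns the effective setting: an explicit user
    // override if present, otherwise the shipped default.
    func telemetryMode() string {
        if v := os.Getenv("EXAMPLE_TELEMETRY"); v != "" {
            return v
        }
        return defaultMode
    }

    func main() {
        fmt.Println("telemetry:", telemetryMode())
    }

Either default is a one-word change; the adoption numbers the Go team is worried about come entirely from the fraction of users who never change it.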
...but the current crop of consumer-grade AI frequently provides incorrect answers and remains incapable of basic error checking, confidence rating, or source citation, so you'd be an absolute fool to rely on them.
If you're going to train a machine to mimic a human, the best possible outcome is a perfectly human machine. There are billions of perfectly human humans who will be happy to present thoughts and opinions of little to no worth in any given context. You may even think you've just spotted another one.
If the ecological impact genuinely concerns you, then choose a low-carbon chain for your NFT, and compare it to the cost of physically manufacturing pretty art widgets in meat space.
If the economic impact genuinely concerns you, take a hard look at all the other pointless crap in your house, town, country, etc.
Every commercial product is a scam on some level. If someone paid to make you want it, you got played. At least an NFT (probably) won't end up in the landfill, the ground water, or in your lungs. As artifacts of capitalism go, the NFT can be comparatively harmless when it's built on decent infrastructure.
All this web3 cruft is, yes, very silly. That said, it's a lot less silly than all the physically manufactured cruft we're already swimming in. If we're going to keep consuming and by-producing (which we are), then we're going to need to do it in a context that doesn't kill us. That context will have to be virtual.