The old adage
"Easier to ask for forgiveness, than to ask for permission."
But given this is Meta, perhaps it should be "do not credit conspiracy when cock-up is more likely".
US senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) have asked Meta CEO Mark Zuckerberg to address AI safety concerns after its large language model LLaMA was leaked online for anyone to download and use. In February, the social media giant launched LLaMA, a collection of models capable of generating text. The most …
Exactly how has it been abused? If it is being widely abused, I haven't heard about it...can anyone cite any sources?
It really stings to be on Zuckerberg's side (I can't stand the guy), but I can't recall a single news article or anything else that suggests widespread abuse of the LLaMA model.
Also, the whole cloud of existential doom surrounding AI at the moment is getting really boring. What I have seen of current AI (based on what is actually accessible) and what a lot of the harbingers of doom are saying about AI don't line up...which tells me that either a) there are way more sophisticated AI models out there behind closed doors that we know nothing about yet, or b) a large majority of people who have spent decades working with AI have gone slightly bonkers.
I suspect the former is more likely: there is something we aren't being told, which unfortunately means it's hard to take people seriously when they talk about "death by paperclip" and the like.
As it stands, as a "man on the street" as it were, I can't see how we get from a sort-of-interesting LLM to a robot that wastes all of the resources on Earth making paperclips...that's one heck of a leap, and the puzzle piece needed to understand it is currently missing for me...mostly because the generative AI we generally have access to now is not capable of making a decision, much less acting on one.
If such an AI exists, it needs to be demonstrated, not just for the sake of making these people look sane, but also to allow folks who don't have access to top secret labs (especially government decision makers) to actually understand the full current state of AI.
I do get the impression that what we're seeing as "leaked" models are the scrapings off a rich man's table...we may well be living in an era of existential change, but we are also in the midst of the era of "fake news", so it's difficult to take so-called experts at their word.
Call me nuts if you want, but statements like "worked for 30 years in the AI division at [insert massive tech business here]" mean absolutely jack shit to me, because up until this year I've had zero insight into what these various AI labs are up to or what they've been working on.
I think the biggest risk with AI right now is that the people who have spent several decades working with it have become slightly detached from the rest of us and don't seem to grasp that most of us weren't there for the ride over the last 30 years...the only thing they seem to be getting right is that we are "oblivious to the dangers"...well yeah...we weren't there, bro...we still aren't...in fact we're even more confused, because what is publicly accessible and on show doesn't line up at all with what you're trying to tell us.
Take Google for example: they've had their fair share of nutters claiming their AI is sentient...then we get Bard, which is below average compared to other models out there and is nowhere near sentient.
Unfortunately, if anyone is going to take any one of these "doombringers" seriously, someone, somewhere is going to have to (safely?) demonstrate what the fuck they're on about.
Where generative AI and its dangers are concerned...99% of the crap being pumped out of those models is lame "waifu" anime bullshit. Yes, I've seen fake AI photographs in the wild, and yes, people I know have been caught out by them...but that changes once people get hands-on with generative AI. My wife, for example, sent me an article about Elon Musk building an AI wife...the photograph, to me, was clearly fake...the hands were fucked, Elon looked about 10 years younger than he actually looks, and I could just "tell" it was fake. Once I showed my wife how these pictures are made and the flaws that exist in them, she started sending me links saying "lol, look at the weird hands in this photograph!".
Fake photos and photographic hoaxes have been around since at least the Loch Ness monster; this is not a problem mankind has zero experience with. In fact, I would say that in the last few decades we've become very good at spotting bollocks compared to 100 years ago, and now that fake AI images are doing the rounds we're likely to become even better, because at the very least they will train people to look closely at pictures. The same will likely be true for any generative AI.
The point about these AI developments is that it has now been proven that they are capable of generating content that is "good enough" for the purpose it serves.
This capability of AI will become a huge issue for the many people who, until now, have been the ones creating content that is "good enough".
Proliferation of AI may also cause an epidemic of mental parasites like replika.ai, where people replace socializing with other people with an app that gives the illusion of being a nice buddy or even a girlfriend.
Be it Teams, virtual assistants, or phone menus, all this automation and progress seems to be directed at reducing human interaction, making us lonelier and thus undermining the foundation of what is called society.
The senators would do the world a better service by passing legislation to break up Amazon, MS, Google, and Meta into pieces small enough that they are not able to harm society the way they are doing now.
The capabilities of AI, combined with the size and political power these tech monsters have, are a poison pill for a free and prosperous society; we will end up with a Soviet-style oligarchy.
"where people replace socializing with other people by an app"
Humans have a long history of engineering themselves out of nature. It's why we live in heated houses, have running water, drive cars and use refrigerators.
I'm in an air-conditioned room with a server rack, three screens, and no natural light right now...is that wrong? I have no desire whatsoever to be outside at this moment. I might at roughly beer o'clock this afternoon, but for now, I'm quite happy here.
In terms of whether socialising with other humans is better than socialising with an AI, we simply don't know...there has never been a point where humans had the option of swapping other humans for robots for the purpose of socialising.
We know that a lack of socialising can be a problem and can lead to mental health issues, so at the very least we can say that socialising with a robot is probably better than not socialising at all.
I'm not advocating for socialising with robots here, but I'm not willing to call it a terrible idea either...we just don't know.
"The senators would do the world a better service by making legislation to breakup Amazon, MS, Google and Meta into small pieces where they are not able to harm society like they are doing now."
Quite right, but have you considered that all the doom mongering by these businesses to create legislation that regulates AI might cause the opposite? The regular man? No, he can't have AI...too dangerous...he might blow his shed up and hurt himself. "Experts" and centralised organisations? Perfectly fine...let's ensure that AI can only be deployed by those who "know what they're doing"; it's safe then.
Smell the air once in a while and you'll catch the faint waft of shit every now and then. If the people spreading doom all appear to be from a small collection of places, you can bet your ass there is an agenda.
Think about it for a second: the cat is out of the bag on AI...models are out there and they're cheap to run. They're even cheap to extend with more training data. The massive spending on AI training has basically been done. The financial barriers to AI are coming down, and they're coming down fast...if something becomes really cheap, it becomes really difficult to monopolise. This is an existential threat to the big tech firms. They aren't concerned about the existential threat to humanity; they're concerned that they can't hold on to the tech and profit from it.
It cost somewhere in the region of $50m to train the LLaMA model, and it took 23 days (ish). Those models are out there for anyone to download...so you can own a copy of a $50m AI model, and it will run on a machine from 10 years ago, because once it is trained, the heavy lifting is done. There are tools that let you add your own training data to LLaMA, which extends its capabilities, and it costs a fraction of what Facebook invested, because proportionally, specialist data is waaaaay smaller than generalised data...you don't need anywhere near as much.
Essentially, we are in a situation where your first $50m worth of training is done for free and the additional training costs peanuts...this means we can expect ever more capable models to crop up, costing a teeny weeny fraction of the original outlay, because nobody has to re-spend the $50m that Facebook already spent...this is a massive problem for the big boys because it creates competition.
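To make the "additional training costs peanuts" claim concrete: the usual trick is parameter-efficient fine-tuning, where the leaked weights stay frozen and you only train tiny adapter matrices bolted on top. Here's a minimal sketch, assuming the Hugging Face transformers and peft libraries and a locally downloaded checkpoint; the path and hyperparameters are illustrative, not anything Facebook published.

```python
# Minimal LoRA fine-tuning sketch (hypothetical path and hyperparameters).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Load the already-trained base model: the $50m of heavy lifting is in here.
base = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")
tokenizer = AutoTokenizer.from_pretrained("path/to/llama-7b")

# LoRA freezes the base weights and adds small trainable adapter matrices
# to the attention projections, so only a sliver of parameters is trained.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

From there you feed it your (comparatively tiny) specialist dataset with an ordinary training loop; the trained adapters are a few megabytes, which is why this fits on commodity hardware rather than a datacentre.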
The easiest way to solve this "existential" problem the big boys have...is to regulate AI. You convince the masses that AI is extremely dangerous and wants to kill us by wheeling out your finest lunatics (most of whom appear to be former employees of the big boys; there are precious few actual employees coming out slinging doom, you'll notice...you want the fear that comes from madness, but you don't want to be closely associated with it, because when the regulation hits, you don't want to seem contrarian)...spread some doom and gloom, have the unthinking masses write to their representatives, and get a room of fuckwits to agree to create a virtual monopoly in the name of "safety".
Is AI a threat to humanity? Possibly. Would the big boys benefit massively from tight regulation? You fucking bet your ass they will!
The fact of the matter is, as AI gets better it also gets cheaper, because the more training that has already been done, the less work you have to do in the future. You don't have to start from scratch with every new model; it's not like teaching children. Once trained, an AI can be copied, then trained some more. Then copied again. Then trained some more, ad infinitum.
I think the big boys are more afraid of just how cheap AI will become than how damaging to humanity it might be.
In terms of how quickly an AI could go from where it is now to an army of killer robots...feedback-based reward training has been in use in the speedrunning community for making TAS runs for donkey's years...and nobody...yet...has inadvertently started an AI learning Super Mario Bros and lost control, resulting in a human-killing death machine.
Following the logic of the "paperclip problem", a TAS tool should have killed someone by now, because a human stopping the TAS from completing the game by powering it off is an existential threat to it...but as yet, not a single TAS tool has found a way to reach out through the screen and strangle the operator...even though paperclip logic dictates that, given enough time, it should find a way.
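For anyone wondering what "feedback-based reward training" actually looks like under the hood, here's a toy sketch; ToyGame is a made-up stand-in for a game emulator, not any real speedrunning tool. The thing to notice is that the agent's entire universe is the step() call: its only outputs are controller inputs, which is exactly why it can't reach through the screen.

```python
import random

class ToyGame:
    """Made-up stand-in for an emulator: the agent's whole world is this API."""
    def reset(self):
        self.x = 0
        return self.x

    def step(self, action):  # action: 0 = left, 1 = right
        self.x = max(0, min(10, self.x + (1 if action == 1 else -1)))
        reward = 1.0 if action == 1 else 0.0  # feedback: reward rightward progress
        done = self.x == 10                   # "level complete"
        return self.x, reward, done

q = {}  # Q-table: (state, action) -> estimated long-term reward
env = ToyGame()
for episode in range(200):
    state, done = env.reset(), False
    while not done:
        if random.random() < 0.1:   # occasionally try a random button press
            action = random.choice([0, 1])
        else:                       # otherwise press whatever scored best so far
            action = max([0, 1], key=lambda a: q.get((state, a), 0.0))
        next_state, reward, done = env.step(action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(q.get((next_state, a), 0.0) for a in (0, 1))
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + 0.5 * (reward + 0.9 * best_next - old)
        state = next_state
```

The only lever the training loop hands the agent is those two button presses; "losing control" of it means it presses buttons you didn't expect, not that it grows new actuators.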
@naive "The senators would do the world a better service by making legislation to breakup Amazon, MS, Google and Meta into small pieces where they are not able to harm society like they are doing now.
Capabilities of AI combined with the size and political power these tech monsters have is a poison pill for a free and prosperous society, we will end up with a Soviet style oligarchy."
But surely that is the very essence of 'Trumpism' once he gets back into power?
I haven't seen a lot of abuse of this model yet, although I'm certain we will see some. The problem with the gloomy pronouncements from politicians and companies is that they appear to think the models will be very dangerous, when what they will actually be is annoying.
For example, you can get models to write you an extortion letter. GPT may balk, but you can reword your prompt and get it to do so anyway. The letter won't be obviously crap and will probably be as convincing as such letters can be. However, with about two more minutes of effort you can write your own extortion letter, and it will be just as good, since there's a very small set of required information to get across to the victim: "We have your relative, we aren't averse to harming them, you can prevent this by paying us, and here's how to contact us to initiate payment". A chatbot will not help a kidnapper with the harder parts of this, like actually kidnapping someone or successfully pretending to have done so (some AI programs could be used there, but not the text generators).
We'll see plenty of AI-generated spam and it will be used to create fake references. This will cause problems when trying to find reliable information quickly, and I also expect there to be more work ahead for people who do moderation on online communities. It's not going to evolve into a world-destroying monster, nor is it going to make criminals' lives so much easier. At least, not yet.
"For example, you can get models to write you an extortion letter"
I seriously doubt that AI will lead to an increase in rates of extortion just because it can write a letter better than a human can.
It might be plausible that AI could be used to refine a scam, though, by providing the means to adjust a "script" on the fly to make a person sound more convincing...especially where English is not the scammer's first language. Ironically, this could result in scams becoming easier to spot, because the scammer might have trouble pronouncing words they haven't seen before, which would unintentionally make the scam comically bad.
Just for a change, I think I might be kind of on Facebook's side here.
The release they were doing was intended for serious researchers. They goofed; they didn't deliberately release it into the public domain with no thought about side effects.
It's unfortunate, but this is an error of omission, not of commission.
File under "dumb mistakes" I think.