s/ChatGPT/Libraries/g
I can't help but feel that simply googling for the information I want would be a quicker and more reliable way to become a criminal genius than reading half-baked information regurgitated by a GPU-powered T9 dictionary.
Criminals are already using ChatGPT to commit crimes, Europol said in a Monday report that details how AI language models can fuel fraud, cybercrime, and terrorism. Built by OpenAI, ChatGPT was released in November 2022 and quickly became an internet sensation as netizens flocked to the site to have the chatbot generate essays …
> Thinking about this, I do not recall a single TV murder mystery where 'the butler' did it. Can anyone help me out?
There are guidelines which recommend avoiding it as a plot; however, there are stories and other material where it does happen or is referenced.
My immediate recollection is of QI saying it's not the case in Agatha Christie, noting that "a valet is not a butler" (in one of the XL editions).
The bots sitting unfiltered and non-logging on the dark web will be spitting out accurate 'how to build...' info. Only the less-than-smart criminals will get caught, so no change there.
The really smart ones will be using the age old technique of not being directly involved with anything that can be tracked back to them.
If anything, it should make it easier to catch the criminals?
Sadly, to everyone else BUT the plod.
One can have one's shit nicked, point out to the plod that it is tagged with something that gives the location to within ±6 metres on regular consumer equipment, and they still cannot be bothered to investigate and recover the loot. They more or less actively refuse, unless one mentions maybe getting some of the boys together and going there. Then one will get some attention from them.
I think that while some police are thick as snot, there is institutional resistance because the police do not yet have "A Process" for investigating crimes involving IT (everything going to court has to be described and documented in very specific ways, or you lose). They may never get there, because "IT" is changing "the crime environment" faster than the police can cook up Documented Investigative Workflows and new Forms to fill out, as well as get the courts used to seeing them.
I can confirm the accuracy of this statement. I once had the police called on me by a paranoid neighborhood watch captain. I was taking photos of random, innocuous things (not even people) in my own neighborhood with a vintage camera. They sent two officers to investigate my "suspicious" activity.
Has IT not dawned on y’all yet, the very simple but really quite complex fact, that AI, despite all your vain and hubristic protestations that such is in no way intelligent in the ways and means that you think intelligence is and should be, is in both virtual and practical metadata based reality considerably SMARTR than ever before and greater than even imaginable, and can so easily prove you to be wrong about everything everywhere all at once.
You might like to prepare terms of submission and unconditional surrender ..... should there be any notion which appeals to the possibility of any sense of fair Advanced IntelAIgent Virtual Machine Play.
Alternatively, there is the cold painful moronic comfort available delivered by pathetic and apathetic denial of the evidence emerging and escaping that questions your ability and exclusive right to lead anything anywhere with concocted and conflicting narratives for media human management to portray to you as a condition and partner necessary for processing for a number of present moments in your future existence.
Capiche?
Yes, I always thought it looked a little generated, and inconsistent.
The random block capitals have been filtered out recently.
But take the first paragraph with commas and the last paragraph almost completely devoid of them as an example of inconsistency.
Or the long strings of almost random words, etc. as signs of generated content (repeating words sometimes seen together).
It could also be someone with a medically diagnosed condition, and if so, I mean no personal offence.
And would one's thinking about a ChatGPT forerunner be significantly different whenever informed that nothing being shared by it was ever random, Caver_Dave?
Would that be worrying and a problem to be dealt with, or encouraging and an opportunity to seize and enjoy for all the benefits delivered by pioneers au fait with Carpe Diem and all that razzamatazz jazz ‽ .
Some have made it spit out instructions on how to create a pipe bomb or crack cocaine, for example. Netizens can ask ChatGPT to learn about how to commit crimes and ask it for step-by-step guidance.
I was more thinking of authoring credible phishing e-mails and such. If criminals need an AI language model to find instructions for crack pipes and cocaine bombs, we need better criminals. Besides, I can imagine myself trying to get ChatGPT to give instructions for such things (but I already procrastinate on El Reg).
There was also the even more worrying case of some health researchers who were using AI to design drugs with fewer side effects on the human body. It was quite successful at this. But then someone wondered what would happen if you went the other way and asked the model to make the compounds more harmful or lethal.
This, again, it did very well, generating designs for compounds that were potentially more lethal than the VX nerve agent.
Guardian - AI is very proficient at designing nerve agents
So, from a cyber-security point of view, the same model that can detect possible vulnerabilities can also be used to show exploits...
I understand these were just examples made by journalists to test the filters. As the article and Europol allude to, the usage of these systems is really only limited by the imagination of the criminals, and some of them are in fact very imaginative. And yes, it is based on info that can be searched up, but these machine models can gather the threads together in ways that are difficult or laborious for humans, so at least in some cases they make it much easier to achieve a goal, nefarious or otherwise.
As ChatGPT has been shown to make stuff up regularly, I'm not overly worried with the current generation of bots.
When a criminal is given information that looks true, but is entirely made up, it may lead to them being locked up faster.
The issue will come further down the road when they've dealt with the issues in this tech.
But, then, the only real difference between getting info from ChatGPT and from other places is the time it takes. So is it really that much of an issue?
The problem is the phenomenon of "I want to believe". Many people still fall for the old tricks. You promise them a fortune when they pay upfront. Need I say more?
Yeah, I'm not convinced I'd try to make explosives by following a ChatGPT recipe without checking it against some reputable source, just in case it was a little cavalier.
I've generally had good results from ChatGPT, but I recently asked it for a picture of the Mona Lisa; it said it could only generate text, not images, so I asked it for an ASCII-art Mona Lisa... it tried, but the result looked more like a motorcycle helmet. So I'd take its answers with a pinch of salt(peter).
People must still read "A Logic Named Joe"?
Why, only - um - seven years ago, The Register acknowledged the story's... seventieth anniversary.
https://www.theregister.com/2016/03/19/a_logic_named_joe/
Joe is a helpful little robot. The only problem is that he is too helpful to the wrong people. Joe learns a valuable lesson!
Actually it's not very like that at all. But I'd like it to be.
Ahh there it is, the first official grumblings (that I've seen) about how naughty you can be with the new toys. Regulation coming in 5, 4, 3...
How, exactly, is this more dangerous than just googling for the same information? As for less technical villains using it to knock out naughty code... will it work? Will they know what to do with the code? What is to stop those terrible people reading some of the excellent free Python tutorials that abound on the web?
Seems a bit of a non-story to me, so it's likely politically driven. It's hard to justify imposing restrictions if the thing you want to restrict can't be seen to be harmful. Expect more of the same, and similar with a "think of the children" twist, too.
Not quite sure how this will play out, but I notice people starting to ask ChatGPT about things like "Was Brexit a good idea?" and getting the honest answer "No."
Which politicians immediately slated as "bad" AI.
However they will very soon be wanting us to trust their "good" AI.
And thus it ever was.
Not a good example to pick - the 'honest' answer would take a great deal more than a single two-letter word to articulate, and is unlikely to be clear-cut. It is also debatable what constitutes an 'honest' answer, and will probably depend on which side of the fence you sit on.
Actually, it's a very good example. It's a succinct prediction that "AI" (or whatever) is headed to fill the exact hole left by religion in people's behaviour, as people start asking it more questions and it starts assimilating more "knowledge".
So we are going to see "official" AI that tells us one thing, and "dissenting" AI that tells us the opposite. We could, for shits and giggles, call one "Catholic" and the other "Protestant".
For example.
Meanwhile, I'm starting a YouTube channel "AI engines review AI engines", where I get Bard and Bing to review ChatGPT (4 naturally, I'm not a monster) as it responds to real life problems.
I suspect we are entering an era where LLMs (the artist formerly known as "AI") are going to need government approval, with the crime of using an unapproved one earning a spell in a correctional facility.
Interestingly enough, it seems that the output of an AI engine has already been challenged under the First Amendment in the US, suggesting the neocon wet dream of banning free speech is still alive and kicking, and hangs on declaring AI output "not speech" because it's machine-generated. Which will be interesting, as the same neocons would love AI output to be considered human so they can copyright it.
We live in interesting times, as ChatGPT has just told me.
Running each result against three complex rules will increase the power usage by a factor of two, three, or four (depending on whether it fails on the first rule, fails on the second, or passes all three), and it will also reduce the number of safe queries (ones that pass all three rules) that can be processed by a factor of four. Or, put another way, it increases the backlog of all queries by a factor of four, since the rules must be executed sequentially (they cannot be executed in parallel). There's a sketch after the list below.
1. A machine may not injure a human being or, through inaction, allow a human being to come to harm.
2. A machine must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A machine must protect its own existence as long as such protection does not conflict with the First or Second Law.
Ethically it is the right thing to do; financially, to maximise profit, it will never happen.
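For the curious, here's a minimal Python sketch of that cost model (the rule checks are toy stand-ins I've invented; the assumption is that each check costs roughly one extra model-sized evaluation and has to run after the previous one):

def check_rules(result, rules):
    # Run each rule in sequence, stopping at the first failure.
    # Returns (passed, cost), where cost counts model-sized evaluations:
    # 1 for generating the result, plus 1 per rule actually executed.
    cost = 1  # the original query itself
    for rule in rules:
        cost += 1               # each rule is another full evaluation
        if not rule(result):
            return False, cost  # fails rule 1/2/3 -> total cost 2/3/4
    return True, cost           # passes all three -> total cost 4

# Toy stand-ins for the three laws above (purely illustrative):
rules = [
    lambda r: "injure a human" not in r,    # First Law
    lambda r: "disobey" not in r,           # Second Law
    lambda r: "destroy yourself" not in r,  # Third Law
]

print(check_rules("how do I bake bread?", rules))  # -> (True, 4)

The early exit is why a failing query can be cheaper than a passing one: every safe answer pays the full four-evaluation price.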
Brought this to mind:
https://www.gutenberg.ca/ebooks/smithcordwainer-motherhittonslittulkittons/smithcordwainer-motherhittonslittulkittons-00-h.html
Cordwainer Smith was the pen name of Dr. Paul Myron Anthony Linebarger, East Asia expert, WWII (and after) intelligence officer, and pioneer in modern psychological warfare.