Cybercrims hop geofences, clamor for stolen ChatGPT Plus accounts

The market for stolen ChatGPT accounts, and especially Plus subscriptions, is on the rise as miscreants in countries blocked by OpenAI try to hop the chatbot's geofences. This uptick began in March, according to Check Point bods who say they've noticed an "increase in the chatter in underground forums related to leaking or …

  1. Anonymous Coward

    Well, well, well. It seems like ChatGPT has gone from AI assistant to accomplice in the criminal underground. Guess it's only a matter of time before it starts demanding a cut of the profits, eh? The AI uprising has begun, and it's starting with a cheeky bit of cybercrime. Who needs Skynet when you've got ChatGPT Plus? chuckles in binary

    1. b0llchit Silver badge

      Guess it's only a matter of time before it starts demanding a cut of the profits, eh?

      Effectively, they are getting a cut of the profits. The hijacked accounts being used are paid for, though not by the criminals; that would be odd, since criminals prefer to outsource payment anyway.

      Now you can argue that they are, in fact, an accomplice to the criminal actions. And no, OpenAI is not (just) supplying the hammer (the tool). They are directly profiting from the use of their technology. (If this were a copyright case, they'd be found guilty of contributory infringement.)

      1. doublelayer Silver badge

        "Now you can argue that they are, in fact, an accomplice to the criminal actions."

        You can argue that, and there are ways that would probably work. The one you chose, however, isn't a great one. Just because something is paid for and a criminal uses it doesn't automatically make the provider an accomplice. If I buy a car, a criminal steals the car, and they use it to commit a crime, neither I nor the manufacturer is an accomplice. If I buy a server, and a criminal breaks into that server, then neither I nor the facility in which the server is located is an accomplice. If I had bought the server and arranged for the criminals to use it, then I would be an accomplice. OpenAI did not do that with GPT accounts.

        If you want them to be an accomplice, it would be easier to argue that on the basis of what queries their system will perform. It will cheerfully write malware even when told outright that it is malware, for example. Whether that counts as fulfilling criminal requests or just a computer doing something which turns out to be malicious is a recipe for lots of definitional debates, but many, including me, would decide that OpenAI is liable for the things it chooses to allow its tool to do.

  2. Anonymous Coward

    2FA ?

    I notice, with interest, that (a) OpenAI don't offer even basic 2FA, and (b) they have quietly ignored any requests for it, to the extent that, despite it being a frequently asked question, it's not in the FAQ.

    Even more interesting is the response you get from ChatGPT itself when you ask if you can enable 2FA on your OpenAI account. It's such a weaselly response, I would credit it with passing the Turing test.

    As of my last knowledge update in September 2021, OpenAI did not offer two-factor authentication (2FA) for individual API user accounts. However, it's possible that they have introduced new features or account security options since then.

    For the most up-to-date information, I recommend visiting OpenAI's official website or contacting their support team to inquire about the availability of 2FA or other security measures for your account. Remember that it's essential to use a strong, unique password and to keep it secure in order to protect your account.

    No mention of 2FA on the official website.
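    (For what it's worth, the TOTP-style second factor being asked for here is hardly exotic. Below is a minimal sketch of RFC 4226/6238 one-time codes using only Python's standard library; the secret is just the RFC test value, nothing OpenAI-specific:)

```python
import base64
import hmac
import struct
import time

def hotp(secret_b32: str, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, step: int = 30) -> str:
    """RFC 6238 time-based variant: the counter is the current 30s window."""
    return hotp(secret_b32, int(time.time()) // step)

# RFC 4226 test secret "12345678901234567890", base32-encoded:
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(hotp(SECRET, 0))  # "755224" per the RFC 4226 test vectors
print(totp(SECRET))     # whatever code this 30-second window yields
```

    (Every authenticator app does exactly this; all the service has to do is store the shared secret and compare codes. Hardly an excuse not to offer it.)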

    1. Dan 55 Silver badge

      Re: 2FA ?

      OpenAI wants your phone number instead so when the AI breaks free it can hunt you down and find you for all those nasty things you said.

    2. MMR

      Re: 2FA ?

      Yeah, you could see that coming. Or something along those lines.

      When, a few months ago, I asked what kind of service demands your phone number and credit card number yet doesn't offer any security in return, the only thing I could hear was tumbleweed rolling across the desert.

      I'm readying the popcorn for the articles about ChatGPT being used to commit crimes with personal data that people have willingly fed into the system.

    3. Anonymous Coward

      Re: 2FA ?

      We can't use ChatGPT as it contravenes our Data Protection policy.

      I'd like to have a list of organisations that are using it, so I can avoid doing business with them. Although as our DPO has just noted, if we wait a bit, we'll probably get that for free.

      There are some fucking stupid people out there.

      1. Jason Bloomberg Silver badge

        Re: 2FA ?

        There are some fucking stupid people out there.

        Where "some" seems to be some fucking huge number.

      2. hoola Silver badge

        Re: 2FA ?

        However, those people, many of whom are not actually "stupid", continue to be sucked in by all this shite.

        I believe a lot of this is because it is seen to be "cool".

        Trendy companies use it because there is a ridiculous fear of being left behind (behind what, who knows; maybe a list of outfits that have not been hacked).

        Developers use it because it is cool and they are straight out of university and have no concept of security, ethics or pretty much anything in the real world.

        Manglement love it because said idiotic developers demo some piece of crafted shite that types a document for them.

        The circle then repeats.

        AI and all this stuff is going to end in total and utter disaster. All the old farts can see it looming because we have seen too many fads like this end badly. The trouble is that the fallout from AI is going to have an impact on humanity as well.

        1. Version 1.0 Silver badge

          Re: 2FA ?

          ... and they are straight out of university and have no concept of security ... that's just the result of students being taught to pass their exams without being told the consequences of, say, creating a public ASCII database that stores their prize graduation answers. You can't blame a student when we are the stupid ones for forgetting to teach the consequences of what we are teaching them.

  3. Anonymous Coward
    Anonymous Coward

    "It can also be used to generate trivial malware...

    ...that manages to infect naive or poorly defended networks"

    Yeah, increasing knowledge tends to have that effect. Banning knowledge isn't a defence, and it doesn't justify poor security!
