* Posts by forbiddenera

2 publicly visible posts • joined 21 Mar 2023

Can ChatGPT bash together some data-stealing code? With the right prompts, sure

forbiddenera

Re: So we're all becoming chatbot-herds?

It's not unreasonable at all for a competent programmer to put that together in an afternoon.

What's described in the article sounds like something a reasonable noob could put together quickly.

We don't know whether the OP actually ran anything through the VT sandboxes or just did a simple scan, which, as the article points out, is mostly for known signatures. VT uses several off-the-shelf scanners, and some do have heuristics, but that doesn't mean they'd catch anything like this.

If it ran in full sandbox mode, VT reports everything. Nothing else runs in the sandbox, so an outgoing connection to Google Drive is essentially a positive result, let alone it showing that some files would be modified - although there aren't going to be random PNGs in the sandbox anyway. That's what the sandbox is for: if you upload a program and it modifies files or makes connections that aren't expected, that's a positive. The sandbox may mark some behaviors as suspicious, but it's not just a pass/fail kind of thing; you get a report of what the executable code did while running.

With good documentation, or even a generated API reference if someone's familiar with the language, a new library isn't necessarily hard to use. Yes, bad docs can suck, but maybe that's your first hint the library is shite to begin with? Why spend time cursing at someone's bad docs instead of finding an alternative or rolling your own? How many truly well-written/popular libraries are out there with terrible docs AND without a ton of examples, tutorials and SO posts about them? If the docs are so bad you're cursing and there isn't a litany of info elsewhere, you're probably wasting your time in the first place.

Plus, the mentioned library doesn't sound complicated anyway; you act like it's going to have hundreds of methods. And even if it has lots, I'm sure there are some simplified ones for basic use, and this sounds like the most straightforward use case for the library as described. It's probably something like (js-style pseudocode):

const fs = require('fs/promises'); // promise-based fs so readFile/writeFile can be awaited
let steno = new stenoLib();        // hypothetical steganography library

await fs.writeFile('path_to_target_file', steno.hideFile(await fs.readFile('path_to_secret_file'), await fs.readFile('path_to_target_file')));

And that's assuming you have to handle the file operations yourself and the library doesn't do it for you; otherwise it could just be:

steno.hide('secret_content', 'path_to_target');

No mention of getting around Windows Defender or Mac Gatekeeper, etc. Sure, delaying exfiltration was mentioned, but what about running unsigned code, or code carrying the Mark of the Web (MOTW)? The article assumes that fun stuff is already done, and that would be the impressive part, if anything.

TBH this article is unworthy of the Reg. I'm glad it points out the stupidity somewhat, but the whole thing feels like a grab for attention while everyone's thinking about AI. The author admitted to not being experienced but claims it would take 5 to 10 experienced devs weeks? Even if someone couldn't pull it off in an afternoon, that is absolutely ridiculous.

The hard part, again, is getting the code onto the target and getting it running in the first place. Once you have code running with appropriate permissions on almost any OS, it's going to be able to do pretty much anything unimpeded. Have you even tried to run code not signed by Apple on a Mac in recent years? You basically have to go into recovery mode and turn the security down, and most people aren't going to do that. Not impressive at all, IMO. Writing a program to search for PNGs and using an existing library to hide stuff? Splitting files into chunks? Yeah, that's all 101 stuff.

You can't just ignore all the hard parts, assume the code is already running with the same permissions as, say, Photoshop would have, and then act surprised that it evaded detection while modifying PNGs or talking to Google Cloud/Drive.

Google Cloud's US-East load balancers are lousy with latency

forbiddenera

Wtf

I don't think it's just marketing; it shouldn't be too hard for someone competent enough to be running the infrastructure in the first place.

Literally moving to another region should be no harder than cloning your IaC, changing the region variable and running a terraform apply or something. I only even use cloud providers' dashboards during development and design of infra, to verify and check things and occasionally test something before it gets put into tf files. If I had to do it all through their console, then yeah, it'd be frustrating and might take a few hours. But using Terraform or similar, it's one command away at worst, and at best, if you've designed the resiliency well, you don't have to do squat unless multiple providers die.
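To be clear how little that is, here's a rough sketch of the idea (hypothetical names, and real infra obviously has more moving parts plus a decision about per-region state/workspaces):

variable "project_id" { type = string }

variable "region" {
  type    = string
  default = "us-east1"
}

provider "google" {
  project = var.project_id
  region  = var.region # every resource below references/inherits this
}

# redeploying somewhere else is then just:
#   terraform apply -var="region=us-west1"

Whether you reuse the same state, keep a workspace per region, or duplicate the stack is a design choice, but either way it's one variable and one command, not a day of clicking around a console.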

Sure, the IaC can be a bit of work to get going in the first place, but the results will save you enough time that you'll never regret it. Plus, you know what you deploy is exactly how you wanted it: no clicking the wrong thing, no adding the wrong role, no racing through a dashboard trying to deploy things quickly because something's broken.

If it's harder than that, then you fail at infra. In fact, unless you have strict budgetary or other concerns tying you to a specific region, you've already failed. As someone mentioned above, minimum triple redundancy, ideally with anycast IPs so your floating addresses aren't stuck in a dead location and you're not relying on DNS changes with excruciatingly long TTLs and propagation. Better still is to use multiple providers, and maybe even keep an edge provider (e.g. Cloudflare) at least in a ready state if not fully proxying.

Last year AWS had a huge outage in Canada which took out all AZs in the region; it affected many huge companies here, with even half of Canada's debit card system down for something like 80% of the country, and IIRC it persisted for well over 16 hours. GCP in the same region was totally fine though, so multi-provider is worth considering, even if only in an active-passive failover configuration where nodes spin up automatically if the other provider is offline, high latency, etc.
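For the DNS flavour of that active-passive setup, a minimal sketch (assuming Route53 for the DNS and made-up endpoint names; Cloudflare load balancing can do the same job) would be something like:

variable "zone_id" { type = string }

# health check against the primary (AWS-hosted) endpoint
resource "aws_route53_health_check" "primary" {
  fqdn              = "app-aws.example.com"
  port              = 443
  type              = "HTTPS"
  resource_path     = "/healthz"
  failure_threshold = 3
  request_interval  = 30
}

resource "aws_route53_record" "primary" {
  zone_id         = var.zone_id
  name            = "app.example.com"
  type            = "CNAME"
  ttl             = 60 # keep TTLs short if you're relying on DNS for failover
  records         = ["app-aws.example.com"]
  set_identifier  = "aws-primary"
  health_check_id = aws_route53_health_check.primary.id

  failover_routing_policy {
    type = "PRIMARY"
  }
}

# standby record pointing at the other provider (GCP); only served if the primary check fails
resource "aws_route53_record" "secondary" {
  zone_id        = var.zone_id
  name           = "app.example.com"
  type           = "CNAME"
  ttl            = 60
  records        = ["app-gcp.example.com"]
  set_identifier = "gcp-secondary"

  failover_routing_policy {
    type = "SECONDARY"
  }
}

It's still only as quick as your TTLs and resolver caching, which is exactly why the anycast/floating-IP approach is nicer when you can get it.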

The longest part of a deploy or redeploy for me is waiting on AWS's (or others') APIs to take their sweet time with certain resources.

As a person from Canadia: all the big cloud providers currently have only ONE region here, and we have to keep all data in Canada. AWS is building a western region in Calgary, but TBH I'm shocked that AWS, GCP, Azure, IBM, etc. don't have anything in or near Vancouver.