I might be missing something but rather than trying to obfuscate the bugs which scanners will find, why not use the scanner yourself and fix the highlighted bugs in the first place...?
Top tip? Sprinkle bugs into your code to throw off robo-vuln scanners
Miscreants and researchers are using automation to help them find exploitable flaws in your code. Some boffins at New York University in the US have a solution to this, and it's a new take on "security through obscurity". Here it is: add more bugs to your software to throw the automatic scanners off the scent of really scary …
COMMENTS
-
-
Tuesday 7th August 2018 09:52 GMT psychonaut
I was thinking the same.
also - "Because the stack layout of a function is determined at compile time, we can control what data will be overwritten when the overflow occurs, which gives us an opportunity to ensure the overflow will not be exploitable."
surely you should be doing this in your own, non buggy, code anyway?
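For illustration, here's a minimal C sketch of what the quoted "non-exploitable overflow" might look like — the struct name, field sizes and values are entirely made up, not from the paper. The idea is that the overflowing copy can only spill into sacrificial padding that nothing else reads:

```c
#include <string.h>

/* Hypothetical sketch of a "chaff bug": the copy below can write
 * past buf, but everything it can reach is sacrificial padding
 * that nothing else reads, so the overflow a scanner flags cannot
 * corrupt live data. Names and sizes are illustrative only. */
struct chaffed {
    char buf[8];        /* the "vulnerable" buffer */
    char padding[24];   /* deliberately absorbs the overflow */
    int  live;          /* real data, placed beyond the spill */
};

int copy_with_chaff(const char *input)
{
    struct chaffed c;
    c.live = 42;
    /* Overflowing copy: up to 32 bytes against an 8-byte buffer,
     * but the spill stays inside padding (bytes 8..31). */
    strncpy((char *)&c, input, sizeof c.buf + sizeof c.padding);
    return c.live;      /* unchanged: only padding was clobbered */
}
```

Of course, that "works" only because struct member order is guaranteed — which is precisely the part the paper's scheme has to engineer for the stack at large.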
-
Thursday 9th August 2018 23:02 GMT GIRZiM
Famous last words
"Because the stack layout of a function is determined at compile time, we can control what data will be overwritten when the overflow occurs, which gives us an opportunity to ensure the overflow will not be exploitable."
If it were that simple, there'd be no buffer overrun/underrun bugs in my code to start with.
And they want me to start adding them into my code on the basis that I'll be able to stay on top of them, not screwing up and bugging my non-exploitable bugs, whilst all the while not introducing any others that aren't accounted for?
Do they hail from some academe bastion of ivory, where the closest they ever get to a critical program is a day release programme?
-
-
-
Tuesday 7th August 2018 10:56 GMT Dave 126
Given that released software does contain genuine bugs, it is evident that software devs can't find them all. This suggests bug scanners - and other methods of finding bugs - can't find all bugs all the time.
So if you suspect you have some genuine cracks in your castle wall, it does you no harm to paint some fake cracks as well.
When I first read the article, I was reminded of honeypots — the technique of having fake servers that resemble the real thing, so that (a) you can detect an attack attempt, and (b) attackers waste their time (or, if you were being super cunning, come away with fake data that it suits you to have them believe is genuine).
-
Tuesday 7th August 2018 10:58 GMT 's water music
However, do you think all scanners are available? If I were to write a scanner, I certainly wouldn't make it available for code writers to use to test their programs, as that defeats the purpose of the scanner.
You would be doing it wrong then. First you sell the scanner to script kiddies, then you sell it as a security tool to rubbish devs and PHBs.
-
-
-
-
-
Tuesday 7th August 2018 17:03 GMT tekHedd
Re: If you can generate your 'chaff' bugs algorithmically...
See also: every obfuscation tool. You can strip out all of the useful data, but you can't strip out the executable program and still have useful software.
Now if this were carried to the level of a complete custom "fake buggy compiler" you might have something. But then that sounds an awful lot like making a hardened compiler that automatically protects you from the kind of bugs you're simulating in the first place, with similar overhead — which puts me right back in the camp of "why are we doing this again?"
-
Thursday 9th August 2018 23:10 GMT GIRZiM
Re: If you can generate your 'chaff' bugs algorithmically...
>...surely they can be detected/diagnosed algorithmically too?
You put bugs in your algorithm, so that, every so often, it spits out a real bug and, thus, defeats any algorithm looking out for its signature bug generation pattern.
Simples.
-
-
-
Tuesday 7th August 2018 18:46 GMT amanfromMars 1
Re:So that's what Microsoft have been doing.
Hmmm? .... "Microsoft goes through four stages of stealing someone else’s idea. First they say that what you’re doing is the stupidest thing they ever heard of. Stage two: “Well, there are some interesting pieces in it, but the idea as a whole isn’t very good.” Stage three is: “We have exactly the same thing, but ours is better.” Stage four: “It was our idea in the first place.”"
That is as may be, and may have been in the past, but with ideas today nowhere near as stupid as they were/are supposed to be and with them being next to impossible to conceal and administer exclusively to executives, are things today for tomorrow completely different.
-
Thursday 9th August 2018 23:17 GMT GIRZiM
Re: Re:So that's what Microsoft have been doing.
>> That is as may be, and may have been in the past, but with ideas today nowhere near as stupid as they were/are supposed to be and with them being next to impossible to conceal and administer exclusively to executives, are things today for tomorrow completely different. <<
It's only fitting, I think, that the most mindbogglingly batshit insane idea I have ever come across (namely "put exploits in your code to distract people from the other exploits in your code - no, really it'll be fine") has resulted in AMFM penning the sanest thing I have ever seen him write.
-
-
Tuesday 7th August 2018 11:52 GMT Doctor Syntax
"Because the stack layout of a function is determined at compile time, we can control what data will be overwritten when the overflow occurs, which gives us an opportunity to ensure the overflow will not be exploitable."
That assumes that what the compiler does with the source is predictable. Optimisation might affect that. Even if the result is what was intended it might not be after a new release of the compiler.
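To make the fragility concrete, here's a toy sketch (variable names are made up). Unlike struct members, distinct stack locals have no guaranteed ordering in C: the gap below prints differently across compilers, versions and optimisation levels, and either variable may vanish into a register entirely — so "we can control what data will be overwritten" is at the compiler's mercy:

```c
#include <stdint.h>

/* The chaff scheme assumes buf sits just below a sacrificial
 * target on the stack, so an overflow lands somewhere harmless.
 * The C standard guarantees no such ordering for distinct locals;
 * this function merely exposes whatever layout the compiler chose
 * today. Recompile with a different -O level and it may change. */
intptr_t stack_gap(void)
{
    char buf[8];
    char pad[24];                 /* hoped-for sacrificial target */
    buf[0] = pad[0] = 0;          /* keep both arrays alive */
    return (intptr_t)pad - (intptr_t)buf;
}
```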
-
Tuesday 7th August 2018 16:16 GMT amanfromMars 1
The Abiding Universal SOAB Problem? ..... What/Who Provides Original Source to Follow?
That assumes that what the compiler does with the source is predictable. .... Doctor Syntax
Ah Yes, ..... The/An Unpredictable Source Ably Protected by the Madly Insane and Gloriously Almighty can certainly Create Immense Problems with Disruptive Novel Solutions Freely Shared Online Everywhere there are Cyber Spaces for Discovering AI Virtualised Reality Play Grounds ...... for Future Global Commanders' Remote AI ControlLed Head Quarters. ....... in all Journeyed Ports of Call and Forward Operating Bases.
Doctor Syntax, Hi,
Are we to continue assuming the data compiler controls machine output or are SMARTR AIMachines Pulling Strings and Deciding Frantic Fantastic Courses Behind and Beyond Now Unlocked Doors?
Capiche, El Reg?
What say El Regers?
What machines are providing your tomorrows? Are they any good or are they failed and failing, beleaguered and embattled?
Do they Seed and Feed Bounty and Prosperity, or Need All of Yours to Satisfy Their Onerous and Odious Greeds turned Mortal Sins and Deadly Vices?
They are Good Questions you should be asking yourself.
-
-
Tuesday 7th August 2018 12:39 GMT ThatOne
Non-developer wondering
Won't those "chaff bugs" make the program less stable and prone to exploding randomly?
Of course a program incapable of running is 100% secure, since it can't be exploited, but what about the usability? When I buy a program, I mostly buy it to get work done (but maybe that's just me). *scratches head*
-
Tuesday 7th August 2018 16:15 GMT Anonymous Coward
Wait, where did we bury the mines again / was it the red wire or the blue one to disarm the bomb?
'Chaff Bugs' makes for novelty reading, but does it actually work in practice? Maybe... But the thing that hits home about software dev after decades of trial & error is that things have become so multi-faceted / stacked / tiered / complex that it's easy to miss obvious holes elsewhere.
Who will have the last laugh? Hackers / malware writers seem to be winning all the battles right now... Or is it more the result of years of corporations underfunding IT / underpaying tech workers?!
-
-
Wednesday 8th August 2018 16:05 GMT John Brown (no body)
Re: was it the red wire or the blue one to disarm the bomb?
Yes, it's probably not surprising to most people here. On TV shows we see the protagonist desperately trying to identify which wire to cut, or some "expert" at the other end of the radio, when we can all clearly see a det stick poked into some form of plastic explosive and the obvious solution is simply to pull the det stick out of the plastic before the timer triggers.
Then again, if the hackers are as useless as the Hollywood script writers, or as stupid as the Hollywood writers think the audience is, maybe this is a good plan?
-
Monday 13th August 2018 11:07 GMT Robert Carnegie
Re: was it the red wire or the blue one to disarm the bomb?
Perhaps the detonator has a tamper switch. Start to pull out the detonator and boom.
On the other hand, there's an argument that terrorists' home-made bombs are built with an off-switch for safety, as they are liable to be precarious otherwise. But once you place the bomb, you may remove the off-switch. Then, you know, run.
The off-switch appeared in a TV programme I watched recently; I won't say which as it may spoil the surprise i.e. not wiping out the cast of the show. (That is, the characters, but with some special effects, who knows.)
-
-
-
-
Tuesday 7th August 2018 16:38 GMT Drew Scriver
Interesting concept - but code and bugs should be separated...
As an application delivery engineer I dread the concept of introducing (lookalike) bugs at the code level.
However, it would be interesting to configure an application delivery controller (ADC) to respond to probes with bug-like 'features'.
That would keep the code clean, allow implementation of these 'bugs' without involvement from dev and/or app vendors, and still provide troubleshooting/validation without running into the security bugs.
It would, however, cause madness among security teams running (external) scans. I already have to 'patch' non-existent vulnerabilities because the security team's audit scan fails. Quick example: a scan from a well-known security scanning firm sent OpSec into a mad spin because a request to /xyz.cgi resulted in a 200 OK... At times I wonder if Don Quixote is secretly the patron of OpSec, but I digress.
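Roughly, the front-end rule I have in mind would look something like this — a hypothetical lookup table of known probe paths that get decoy answers before the request ever reaches the real app (only /xyz.cgi comes from my example above; the rest is invented):

```c
#include <string.h>
#include <stddef.h>

/* Sketch of the ADC idea: answer known scanner probe paths with
 * decoy responses at the front end, keeping the real application
 * code clean. The table entries are hypothetical examples. */
struct decoy { const char *path; int status; };

static const struct decoy DECOYS[] = {
    { "/xyz.cgi",     200 },   /* the probe from the example above */
    { "/phpmyadmin/", 200 },   /* hypothetical probe path */
};

/* Returns the decoy HTTP status for a known probe path, or 0 to
 * signal that the request should pass through to the real app. */
int decoy_status(const char *path)
{
    for (size_t i = 0; i < sizeof DECOYS / sizeof DECOYS[0]; i++)
        if (strcmp(path, DECOYS[i].path) == 0)
            return DECOYS[i].status;
    return 0;
}
```

The nice property is that the decoys live entirely in the delivery tier, so dev never has to ship, test or maintain a single fake bug.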
-
Tuesday 7th August 2018 18:11 GMT Updraft102
The law of unintended consequences
In the course of creating fake bugs that falsely make it appear there is a much greater attack surface than there really is, wouldn't it be likely that at some point one or more of the fake bugs will contain real bugs that actually do increase the attack surface in a way the fake-bug writer does not anticipate? These real bugs, hidden in or behind fake bugs, will resist detection by automated means, and quite possibly by human ones, since the tendency will be to look at the big bug and not the small, but more dangerous, one behind it.