Cheaper to pay bug bounties...
... than hire competent devs to write secure code, and skilled pen testers to find these bugs yourselves.
Sad really.
The Google Pixel fell to a team of Chinese hackers alongside Apple Safari and Adobe Flash at the PwnFest hacking competition in Seoul on Friday. Mountain View's latest offering was smashed by white-hat friendlies from Qihoo 360, who used an undisclosed vulnerability to gain remote code execution for a $120,000 cash prize. The …
You've got that wrong: the bounties are cheaper than paying agencies to try to find these people for you.
Even with the best developers in the world, software still has bugs and the nature of bugs is that they are unlikely to be found by the internal teams. Bug bounties might start off as apparently cheap PR but if companies don't take the findings seriously and start improving their processes, then they stand to lose a lot more than the cost of a few bounties. But it needs a change of culture at the top to sanction the necessary additional expenditure.
That said: there is still a long way to go. Both Google and Apple have enviable QA processes and code reviews but these are obviously still not good enough. MS was working hard to improve in this area. Then it sacked a lot of the QA teams.
The absolute size of the code base for some applications is so large that current, common development tools are inadequate, and no matter how competent the developer, some subtle detail gets overlooked somewhere in the process. Also, QA is hard to do and takes commitment from the C-suite.
Plus you can never get rid of gestalt exploits (like a multi-unit race condition) since these are by definition greater than the sum of the individual units which would see nothing wrong in themselves. And these gestalt exploits have such narrow windows as to be more like knock sequences, making them hard to detect in testing especially on a time and money budget.
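The "gestalt exploit" point above can be made concrete. Here's a minimal Python sketch (my own illustration, not from the post) of a check-then-act race: each withdrawal is correct in isolation, but one particular interleaving of the two produces an overdraft that neither unit could cause on its own. The interleaving is forced deterministically with generators, precisely because the real-world window is too narrow to hit reliably in testing.

```python
# Deterministic sketch of a check-then-act ("TOCTOU") race.
# Each unit checks before acting and is fine on its own; the
# bug only exists in the combination.

class Account:
    def __init__(self, balance):
        self.balance = balance

def withdraw_steps(account, amount):
    """Yield between the check and the act, so an interleaving
    can be forced deterministically for demonstration."""
    if account.balance >= amount:    # step 1: check
        yield                        # window where another unit can run
        account.balance -= amount    # step 2: act

acct = Account(100)
a = withdraw_steps(acct, 80)
b = withdraw_steps(acct, 80)

next(a)           # A checks: balance 100 >= 80, OK
next(b)           # B checks: balance still 100 >= 80, OK
for g in (a, b):  # both now act on a stale check
    try:
        next(g)
    except StopIteration:
        pass

print(acct.balance)  # -60: overdrawn, though each unit checked first
```

Neither `withdraw_steps` call does anything wrong by its own lights; the fault is in the whole, not the parts.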
> Plus you can never get rid of gestalt exploits
It'd be helpful for whatever cunt downvoted the referenced post to add a wee explanation, if it's not too much to ask of their precious time.
Did Mr 9 say something wrong? Did he omit something important? Did somebody not like his use of a single space, rather than two, after a period?
Wrong. It's merely cheap PR.
You can't test in quality or security. Patching found bugs is a doomed-to-failure way to secure a program or OS. Look at how many subsequent bugs (reliability, functionality or security) are found in the products.
You design in the quality and security.
You hire people expert enough and train more.
You allow proper amount of design time rather than pressure to produce code or a demo version.
You concentrate on improving quality rather than adding stupid features / gold plating / GUI candy in later versions.
You ban "agile".
Agile can work very well for business-level stuff, certainly compared to old-fashioned development practices. If I can write JavaScript that compromises the device it runs on, that's a problem with the JavaScript implementation on that device, not my code. Of course I may do stupid things like send passwords in plain text, but that's not the sort of hack we're talking about here.
IT is too big a subject to expect every business dev to understand all the low level stuff. Likewise you probably don't want coders who are shit hot at device drivers designing and implementing client apps. Horses for courses.
Your "passwords in plain text" was just the sort of thing that turned me against Agile/Scrum.
The issue was not addressed in that sprint but merely added to the 'Technical Debt'.
Needless to say, the PHBs decreed that if we met all the functionality requirements then we must ship it.
Sod the lack of regression testing and resolving of issues like Technical Debt.
Never mind the quality, feel the width.
Thankfully, I was just contracting and was out the door before the shite hit the fan, which it did a few months later. The PHB tried to point the finger at us contractors.
Thankfully, I had sent myself a copy of the email from the PHB telling me not to be concerned about security etc etc etc. He got the push but landed another job where he did exactly the same.
There never is any time to go back and fix the issues that were brushed aside during a sprint.
IMHO this is a fundamental weakness of the system.
This has happened to me several times for several reasons.
It is the reason I am a contractor, it was impossible for me to survive most environments for more than a couple of years because I got too invested in the product and made too much noise if I saw problems, current or future.
It happens occasionally even as a contractor of course, it seems immoral to stay quiet when one sees an issue, but it is easier to say it and then shut up because I have no skin in the game and it is not my place.
It does mean I am PNG in several places, even where I made objective improvements to systems/code etc. - people are amazing at holding grudges.
My current boss is one of the good ones, I tell him bad news, he doesn't hold it against me and may or may not take appropriate action.
Fortunately, I don't have to worry too much about it anymore as retirement or semi-retirement beckons.
> I got too invested in the product and made too much noise if I saw problems, current or future.
As someone who's been at nearly every rung of the ladder I'll say nobody competent¹ will fault you for pointing out an actual or potential problem, quite the opposite.
However, when you say you made "too much noise", and without judging you personally, I can't help thinking of that kind of person who, while good at their specialist job, fails to comprehend that there is a much wider context out there against which to judge things.
And also, it's cool to know about problems. It's even better to know about solutions. I like problem reports that come with either: a) an implemented fix, b) a possible solution, c) description of unsuccessful attempts at finding a solution, or d) a statement along the lines of "I looked into it and wasn't able to come up with anything, but I'm here if I can be of any help".
> It does mean I am PNG in several places
So they didn't listen to you, but did you listen to them? Having a conflict at one organisation, that happens; at two, well sometimes one just gets unlucky. But you say "several" places?
¹ Most people *are* competent, it's just that the bad ones get all the attention.
> There never is any time to go back and fix the issues that were brushed aside during a sprint.
That happens all the time, whatever the methodology. A methodology is really nothing more than an agreed set of tools, and if the boss says use a hammer or cover the hole with gaffer tape or you're fired, then that's what you tend to do.
I don't know how "agile" my own methodology is but I find that writing unit tests is a great way of kicking the tyres and reflecting upon the design.
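To illustrate the "kicking the tyres" point: writing even a trivial unit test forces you to decide what bad input should do, which is where the design reflection happens. A minimal sketch in Python (the function and cases are my own illustration):

```python
import unittest

def parse_port(value):
    """Parse a TCP port from user input; reject anything out of range."""
    port = int(value)  # raises ValueError on junk input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

class ParsePortTests(unittest.TestCase):
    def test_valid(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_out_of_range(self):
        with self.assertRaises(ValueError):
            parse_port("70000")

    def test_junk(self):
        # Writing this case is what forces you to decide how
        # hostile input should be handled -- the design reflection.
        with self.assertRaises(ValueError):
            parse_port("8080; rm -rf /")

# run the tests with: python -m unittest <module>
```

The tests are small, but the act of writing `test_junk` is exactly the tyre-kick: you can't write it without deciding, in the design, that junk input must raise rather than be silently coerced.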
> I don't know how "agile" my own methodology is but I find that writing unit tests is a great way of kicking the tyres and reflecting upon the design.
This.
Unit testing has nothing to do with "agile", not even with testing. It has to do with development.
Anyone who doesn't write unit tests is a knave and a fool and should be railroaded out of the building.
You, sir, are a fool.
I've been told that many times.
And probably a youthful one.
Absolutely. Born as recently as 1955.
My point was about making blanket statements. Unit tests are one way to test code. But, like Agile, they can become a mantra rather than a thought-out method. (People older than me would probably say horses for courses.) Before implementing unit tests, at least ask yourself why you're doing it.
"Needless to say, the PHB's decreed that if we met all the functionality requirements then we must ship it."
Then you ain't doing it right. If your non-functional requirements weren't in the user story then the story was wrong. If the PHB overruled the team or if you shipped an incomplete story then you aren't following the rules.
> Then you ain't doing it right. If your non-functional requirements weren't in the user story then the story was wrong.
No argument there.
> If the PHB overruled the team or if you shipped an incomplete story then you aren't following the rules.
No argument there either.
But, what you call rules is in my opinion better described as a methodology, and methods are not perfect and can never be¹, but mere aids to help us move towards our goal. It should however be very clear what that goal is and the acceptable latitude in one's means to achieve it—unless you're going to fault Alexander for his actions at Telmissus.
¹ Cf. Suárez Miranda, "Viajes de Varones Prudentes", Lleida, 1658.
> Your "passwords in plain text" was just the sort of thing that turned me against Agile/Scrum.
Agile is defined in The Agile Manifesto.
You'll note that nowhere there does it say you should write any old crap...
And this is why Agile gets a bad name - Agile itself is actually rather good, but every last one of us has seen some complete balls-up that someone has named "Agile" even when it quite clearly isn't.
Vic.
Agile is good. It should be OK to write business applications really fast. The fastest way I know is MS Access, but I'd be interested in what other people say is good. The sort of mistake we made in VBA was to compare the password against a variable called password$. The hack was to type code into the password prompt, which then compared the variable password$ with the variable password$.
If password$ = password$ then let them in. Duh!
When the code was converted to Java to make it more secure they left the same vulnerability but the hacker just had to write in Java rather than VBA.
What is needed is that all the stuff like user level security should be handled by properly written libraries. The business programmer can then concentrate on the basics of the business task and not get bogged down with techie stuff.
I appreciate that hard-core techies think everything should be written in C++ and require years of testing, but that does not get the job done cost-effectively.
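The password$ hack above boils down to evaluating user input as code. A rough Python analogue of the same hole (the names are my own illustration, not the original VBA):

```python
# The VBA hole above, sketched in Python: the prompt's contents
# were evaluated as code, so typing the *name* of the password
# variable made the check compare the variable with itself.

stored_password = "s3cret"

def naive_check(user_input):
    # DANGEROUS: evaluates whatever the user typed as an expression
    return eval(user_input) == stored_password

def safe_check(user_input):
    # Compares the input as data, never as code
    return user_input == stored_password

# The attack: type the variable's name instead of a password.
assert naive_check("stored_password") is True   # let in with no password!
assert safe_check("stored_password") is False   # treated as a literal string
assert safe_check("s3cret") is True             # the real password still works
```

This is also why porting the code to Java didn't help, as noted above: the flaw was "input is interpreted as code", which survives any language translation that preserves the logic.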
"What is needed is that all the stuff like user level security should be handled by properly written libraries."
Then how do you know those libraries are properly written? You can't trust word of mouth, since it can be poisoned, and even well-established libraries can have their own sets of holes.
> You design in the quality and security.
Some of the most recent bugs attack apparently secure designs.
And even if I fully believe the mantra that untested code is broken code, working code is better than no code. I would also fully agree that features have been allowed to dominate the market for too long. But, guess what: if you're not selling anything then you can't afford to make anything, no matter how good the quality. SaaS may help to realign interests here: happy customers renew, versus just making the next sale. Time will tell.
>You design in the quality and security.
That can't be done absolutely.*
Here's Richard Buckland talking entertainingly about why:
http://www.abc.net.au/radionational/programs/bigideas/you-will-be-hacked/7861288
Basically, it is very hard to write bug-free code, even for relatively simple algorithms. And the software we use is far from simple.
*Well, there is some current work on the old concept of formally verifying code, but it isn't widely used yet.
"*Well, there is some current work on the old concept of formally verifying code, but it isn't widely used yet."
Plus there's the fact that the formal proof only applies to a discrete environment; it can't really apply in the general case due to the real world having too many chaos factors. For example, seL4 is ONLY proven in an environment with no direct memory access. Problem is, DMA is a frequently-demanded performance enhancement.
>BB what? BB King?
#1 I'm not a pro article writer so I have no obligation to do shit for your reading comprehension. I've seen plenty of other posters simply use BB when, in the mobile handset context, it is assumed to mean Blackberry. Still, you might say Blackberry who? But that just proves my original post lol.
>You design in the quality and security.
You need to design the quality and security *into the OS*.
Flash and the browser are software exposed to the internet taking in random data. This is a very high-risk activity zone. We know that and we *should* be in the position that, even when they have been compromised, no further damage can be done.
Browsers and other internet-facing features get updated all the time. We need to recognise there will be errors and plan for it by having better sandboxes. What's the minimum you need to get HTML rendered? Harden all those APIs. How about an automatic sudo to another user ID when you run a browser? How about a hardened API which allows file transfers between users, but only puts them into a directory specified by the destination user's settings (e.g. ~/downloads) and, if the file is executable, marks it as executable only from an interactive shell? OSes don't support that? Well, maybe it's time for some new OS features.
Privacy is also now a problem. Who would trust Android, ChromeOS, iOS, MacOS or Windows not to upload their browser history to the mothership? What we want is to be able to download blacklists of the security-wise dodgy sites to our local PCs, without uploading incriminating browser histories to the cloud. Without that assurance, it's hard to protect users from themselves. You might think people deserve what they get, but with DDoS attacks on the rise there's more at stake than someone getting crypto-lockered.
Security in depth please. What could possibly go wrong? Once you've answered that question, you have a roadmap for a response. Hint: patching is a tactical, not a strategic response.
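The "hardened file-transfer API" idea above can be sketched in a few lines. This is a minimal Python illustration under my own assumptions (the function name, policy details, and `~/downloads` default are all hypothetical): incoming files are confined to the receiver's download directory and the executable bits are stripped.

```python
import os
import stat

DOWNLOAD_DIR = os.path.expanduser("~/downloads")

def accept_transfer(filename, data, download_dir=DOWNLOAD_DIR):
    """Write an incoming file, confined to the receiver's download
    directory, with the executable bits stripped."""
    # Refuse path tricks: take only the base name, no directories.
    safe_name = os.path.basename(filename)
    if safe_name in ("", ".", ".."):
        raise ValueError(f"bad filename: {filename!r}")
    dest = os.path.join(download_dir, safe_name)
    # Belt and braces: the resolved path must stay inside the dir.
    real_dir = os.path.realpath(download_dir)
    if os.path.commonpath([real_dir, os.path.realpath(dest)]) != real_dir:
        raise ValueError("escaped the download directory")
    with open(dest, "wb") as f:
        f.write(data)
    # Read/write for the owner only; definitely not executable.
    os.chmod(dest, stat.S_IRUSR | stat.S_IWUSR)
    return dest
```

A real OS-level version would enforce this in the kernel or a privileged broker rather than in the receiving program, but the policy is the same: the sender chooses the name, the receiver's settings choose the directory, and nothing arrives executable.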
"Browsers and other internet-facing features get updated all the time."
But what if it's the UPDATE that's the weak link?
"We need to recognise there will be errors and plan for it by having better sandboxes."
And we need to recognize that no sandbox can be airtight without it being completely isolated and thus useless. Because everything is interconnected, even a sandbox has a tie to another process: the OS itself if anything. No matter how hard a sandbox you make, the fact you have to communicate with it provides the means to escape it. Java tried to sandbox, someone found a way to escape it, and since then it's been a turkey shoot with sandboxes.
"What's the minimum you need to get html rendered? Harden all those APIs."
The minimum requires graphical access. THAT's been proven to have leaks and exploits. And since performance is being demanded, it's becoming increasingly necessary to get close to the metal with all that entails.
"How about an automatic sudo to another user ID when you run a browser?"
Useless against a privilege escalation which shoots ANY user up to root or whoever the superuser is.
"How about a hardened API which allows file transfers between users, but only puts them into directory specified by the destination user's settings? (e.g. ~/downloads) and (if executable) marks the file as executable only from an interactive shell? OS's don't support that? Well, maybe its time for some new OS features."
Which will break things and cause complaints. Plus, again, there's the matter of privilege escalation, which is ALWAYS a possibility. Even on a rootless system (separated privileges), an attacker can probably find a way around it by collecting privileges from whatever users are needed.
"Security in depth please. What could possibly go wrong?"
A bad security implementation that lets you leapfrog, for starters. Not even seL4 is immune, as its formal proof ONLY applies if no DMA is used, making it less useful for performance applications.
The fact that there's pretty much nothing out there that can't be hacked suggests the problem is not a lack of competence, but simply down to the fact that software is hard. Millions of lines of code isn't even considered a big system any more, and no matter how careful you are it only takes one slipup somewhere to introduce a vulnerability. Add in multiple threads of execution opening up the concurrency can of worms and this isn't at all surprising.
Having said that, 4 seconds for Flash is just plain pathetic, the fact it's an old vulnerability that got exploited doubly so.
It suggests what good programmers know:
1) They are not given enough time
2) Most programmers have learnt a language and never actually learned to design/program.
Quality of code, from writing through reviewing and testing it, is abysmal. Most of the reason for it is bad management, often due to pressure from the highest level.
> Quality of code, from writing through reviewing and testing it, is abysmal. Most of the reason for it is bad management
I recently heard of a group that has finally been forced to perform code review on their output. They've been fighting this for years.
So now they've decided that each coder will review his own code...
Vic.
"Millions of lines of code isn't even considered a big system any more"
No, but in the '90s a bracket wasn't considered a line of code, so don't read complexity from lines of code. There certainly is more code now, but there are orders of magnitude more whitespace and fluff.
Mistakes can happen but the biggest improvements come from compartmentalisation and sensible coding and testing. If a unit of code has clearly defined actions and inputs and outputs then this compartmentalisation allows the code to be tested. Wider ranging logic failures can still happen but at least the building blocks should work as expected.
I've always worked on the "trust nothing" approach, and while I taught myself this from the age of 9 (I really should have got out more), it was demonstrably evident how much better this worked when computing coursework came around at about the age of 15, as mine was easy to test/debug and I therefore got drafted in to help other students...
A simple example (bear in mind that these were primitive systems used in a primitive manner, because the teachers didn't understand anything more): 80% of the software that was written made assumptions about the state of variables before a block of code was entered. So instead of initialising flags at the start of the block, it was relied upon that every block of code reset the flag once it had finished with it (you should be able to see where this is going). These were global variables on a system with tight constraints on the memory available, and given the weirdo manner in which this language worked (variables global unless specified local) and the teachers' lack of understanding of functions, despite them being in the language, students were taught GOSUB-based code instead. Nothing overly wrong with this, and everything worked fine, as functionally clearing flags after you're done with them does work and is in fact slightly more efficient than initialising flags that started at zero anyway. Until, of course, one routine doesn't reset the flag, in which case it was a pleasant experience checking all 5000 lines for the same flag just to see which was the last one that didn't clear the value.
Trust nothing. Assume all input to be bad and work from there.
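The classroom flag bug above, reduced to a minimal Python sketch (the flag and function names are my own illustration): a routine that trusts the previous routine to have reset a shared flag versus one that initialises it on entry.

```python
# Blocks that rely on the *previous* block having reset a shared
# flag, versus blocks that initialise it themselves on entry.

found = False  # shared/global flag, as in the GOSUB-era code

def fragile_search(items, target):
    global found
    # Relies on whoever ran last having reset `found` -- the bug.
    for item in items:
        if item == target:
            found = True
    return found

def robust_search(items, target):
    global found
    found = False  # trust nothing: initialise on entry
    for item in items:
        if item == target:
            found = True
    return found

fragile_search([1, 2, 3], 2)       # sets found = True, never resets it
print(fragile_search([9, 9], 2))   # True -- stale flag, wrong answer
print(robust_search([9, 9], 2))    # False -- correct
```

The fragile version "works fine" until some earlier caller forgets to clean up, at which point the failure shows up far from its cause, exactly the 5000-line hunt described above.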
"It took four seconds for Flash to fall."
On my machine, the average Flash-infested web page takes longer than that to just load. Looks like the Adobe folks are making progress on the optimisation of their stuff. Probably re-assigned a few devs from the safe coding department for that.
Congratulations!
You really can't say that with any certainty, as it's an undisclosed zero day that allowed them to do it. How do you know it's not a bug that's specific to Android 7.1 and not earlier versions?
That being said, the balance of probabilities is that you're correct in your assumption, so no downvote from me!
Effectively, there are infinite vulnerabilities.
Even if one believes that the number of vulnerabilities is just 'N', it's an endlessly sliding window where new vulnerabilities are being introduced on the right faster than they're being fixed on the left.
Effectively, N = infinity.
There's not enough money on Earth to use these contests to significantly enhance security.
Furthermore, simple Game Theory clearly indicates that these teams would have a backlog of 'secret' exploits (that others may know, others may be using in the wild), and they'll carefully use them (one at a time) in these contests to maximize their long term income. Just like dueling mathematical contestants hundreds of years ago would keep their theorems secret. This process discourages disclosure, except one at a time.
Game Theory FAIL.
> There's not enough money on Earth to use these contests to significantly enhance security.
This isn't a one-way transfer, consider the money cycle.
And your game theory doesn't hold water either. It's not really about specific bugs, more about attack vectors. The contestants are competing amongst themselves and cannot expect to live off the bounties. It's a recruitment drive: for the companies; for the spooks; for organised crime.
"No excuse for poor security
https://techcrunch.com/2016/10/01/learned-helplessness-and-the-languages-of-dao/
Read it. Learn it. Know it. Practice it.
Stick the excuses where the sun don't shine and fuck off."
Learn what? That the author theorises a bit, doesn't know how to realistically fix the problem, and probably hasn't ever written any usable code outside of a lab setting?
Eventually your developers won't be able to handle it any more. This is a bit like the Peter Principle, where people get promoted until they are utterly incompetent at their position... causing all the positions to be filled by people just barely competent enough not to be fired.
With software, developers like to make things more and more complex, up to a point where they are barely able to handle it themselves. Android is a prime example, its core roughly has the user exposed functionality of Windows 3.1, but achieves this with several orders of magnitude more code.
The fundamental issues are that a focus on security takes effort with no immediate payoff, in a market that selects for laziness.
If you do the right thing but can't turn that into a competitive advantage, your lazy competitors spend your security budget on a fancier landing page or cutting prices and you lose market share or go out of business.
To get this done right you have to appeal to stakeholders who have a long-term investment in the business. The PHB has no skin in the game - he can just get another job when the shit hits the fan. Senior management with options, or in a small company proprietary directors, may have a more sympathetic view.
The problem is that going over the head of the PHB is a "brave" move that may get you fired.
The answer is for the people with skin in the game to employ someone specifically to handle internal whistleblowing of this nature (chief security officer?), and to create a culture where these things can be handled without fingerpointing, where systems get improved without either the PHBs or the devs getting fired for doing what they think is the right thing.
> And the problem with that is that the investors, especially if the company is publicly-traded, usually don't want to see beyond the short term.
There is some truth to it, but that is a bit of a sweeping generalisation. For a start, most companies are *not* publicly traded and investors there often put up with a *lot* of pain. In publicly traded companies, the majority of shares are held long-term (Ok, the data here is a bit vague as, for the US at least, "long-term" means more than six months) and the core investors tend not to sell very easily.
There are of course speculators (both human and machine) and it makes the news when a big company's share price goes down after a bad quarter--that makes things harder for the company, but does not necessarily mean the company is doing badly; a typical example being the quarter following a particularly good one.
Lastly, there is that misconception so popular since after WWII that companies are somehow there for the benefit of the investors.
"Lastly, there is that misconception so popular since after WWII that companies are somehow there for the benefit of the investors."
If it's such a misconception, why is it that companies frequently answer to investors first, customers (on whom the bulk of their business relies) second, and employees (the nuts and bolts of a business) third?
I moved into testing with no real experience and was told: these are the working commands, write tests to prove they are OK. I got it wrong and crashed the system, so I decided that, as a user should not be able to crash the system, I would include that in the testing, and then failed the product for release.
Next time round the entire company changed their stance on testing and it was no longer a pain that had to be endured but a useful exercise they all bought into. Quality of product shot up and stayed there for years.
The basic problem with today's programmers is that they are lazy, assuming this or that environment or compiler is handling all the exceptions magically so they don't need to. That is, if they even know what an exception is!
> I got it wrong and crashed the system so I decided that as a user should not be able to crash the system
Were you QA or also requirements? If you were only QA you would have failed the release not because it crashed but because it did not return the expected result (presumably). Only if "should not crash" is already written in the specification would you fail it specifically for crashing. Be aware that some systems, including critical systems, rely precisely on the software crashing as a form of error recovery (e.g., by means of a watchdog and bringing the system back to a known / safe state).
> The basic problem with todays programmers
I downvoted your post for the unwarranted generalisation about "todays" [sic] programmers.
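The crash-as-recovery point above (a watchdog deliberately letting the software crash and bringing the system back to a known-safe state) can be sketched in a few lines of Python. This is my own illustration of the pattern, not any specific system's implementation:

```python
# Minimal sketch of crash-as-recovery: a watchdog loop that treats
# a crash as a signal to restore a known-safe state and restart,
# rather than as a failure to patch around in place.

KNOWN_SAFE_STATE = {"mode": "idle", "output": 0}

def run_with_watchdog(task, max_restarts=3):
    state = dict(KNOWN_SAFE_STATE)
    restarts = 0
    while True:
        try:
            return task(state)
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise                       # persistent fault: give up loudly
            state = dict(KNOWN_SAFE_STATE)  # back to a known-safe state

# A task that corrupts its state and crashes once, then succeeds.
attempts = []
def flaky_task(state):
    attempts.append(state["mode"])
    if len(attempts) == 1:
        state["mode"] = "corrupt"
        raise RuntimeError("simulated crash")
    return state["mode"]

print(run_with_watchdog(flaky_task))  # "idle": restarted from safe state
```

The key property is that the corrupted state from the failed attempt is thrown away entirely; the watchdog never tries to continue from it, which is what makes "crashing" a legitimate recovery mechanism in such designs.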