Old and busted: Targeting servers and web bugs. New hotness: Pwning devs with targeted poisoned stacks

Hard-working but naive developers are a little-known but highly dangerous soft spot in an organisation that attackers can exploit. This is according to Rich Jones, co-founder of security consultancy Gun.io. Speaking at the 2020 Disclosure conference, Jones outlined how the trust many developers put in their software stacks and …

  1. Robert Grant Silver badge

    "Systems are generally hardened - they have patches, they have firewalls, they have monitoring," Jones explained, "but [some] developers will run literally any bullshit they find on Stack Overflow. They keep credentials lying about, they're obviously going to have the source code and some production data sitting on their hardware as well."

    As one example of the tactic, Jones pointed to the July attack at Twitter in which employees were spear-phished, leading to the takeover of 130 celebrity accounts.

    I like that this example is in no way to do with running code on Stack Overflow, and is instead a customer-facing person receiving a support call.

  2. Doctor Syntax Silver badge

    "This was not a hack of the Twitter production system: this was a hack of Twitter employees using classic social engineering tricks,"

    The employees exploited had access to the production system. In my book that counts as a hack on the production system.

    1. Robert Carnegie Silver badge

      "a hack on the production system"

      Indirectly. Through the company's meatware.

      1. jake Silver badge

        Re: "a hack on the production system"

        Is the company's wetware not an integral part of the production system?

        1. Lord Baphomet

          Re: "a hack on the production system"

          Only if your company is stuck with 1980s computing architecture. Modern DevSecOps, dude -- everything is automatable.

          1. jake Silver badge

            Re: "a hack on the production system"

            I'll stick with my 1960s computer architecture, ta.

            Age and experience trump youth and beauty.

            UNIX ... automating everything by design, since 1969

  3. Robert Helpmann?? Silver badge
    Childcatcher

    ...basic steps for devs such as not storing production code on their local machine, scrutinizing the projects they use in their software stacks, not oversharing information about their projects on social media, and, er, actually paying attention to warning messages.

    But all that gets in the way of convenience, slows systems down and makes it harder to meet deadlines! All the fights I have had with devs have come down to time, convenience and performance. If security impacts any of those most important of things, they don't want to deal with it, even in cases when spending a little of one will get much more of the others.

    1. Version 1.0 Silver badge

      Yes, development speed is much more important than security these days. It's not just software either: look at the COVID world, look at ... well, damn near everything in Western governments.

    2. cbars Silver badge
      Coat

      time, convenience and performance

      So, time

    3. Paul 195

      This comes down to management priorities. If developers are being judged on speed to market, that's where their focus will be. And frankly, JavaScript scares me from a security point of view. It's much harder to do static analysis on than Java, and a typical JavaScript front-end project pulls in about a zillion dependencies.

      Having said which, some organizations have security policies which are overbearing to the point that it becomes impossible to do anything. In the long run, this weakens security as there are an overwhelming number of requests to be approved, which means there is no time for scrutiny of the ones that might matter, and a tendency to look for "workarounds", because no one can get their job done.

      In an ideal world, developers would be on their own network domain, with relaxed security rules, but allowed nowhere near production other than via approved CI/CD pipelines to push updates.

    4. Eclectic Man Bronze badge

      Agile development, anyone?

      Reminds me of my first impression of agile software development:

      Have an idea:

      Write the code:

      Work out what it should do:

      Repeat ad nauseam.

      Lots of sprinting and deadlines and 'rah rah' teams, and lots, really lots of effort and bugs. (Fortunately I was on the sidelines, watching as a 'security' consultant, i.e. ignored except for "will you sign this off please, you don't need to read it first?")

      My personal experience of writing code is that if I draw the flow chart first, the only errors in the code are typos. When I code directly, I spend a lot of time debugging unless it is a really simple idea.

  4. Throatwarbler Mangrove Silver badge
    Thumb Up

    Truth

    I have a million anecdotes, but the one which springs to mind is the developer who put up an internal proof of concept site on his work PC and wondered why he couldn't immediately access it from the internet.

  5. A random security guy Bronze badge

    Developers hide their stuff from security teams

    The author is being polite. Developers want their stuff to work. I'm currently working with a team where the devs claimed they had rotating passwords, automated patch management, etc. Nope, they hadn't done any of it. They told me SAML was difficult to implement (the code was already there and was being used internally).

    We always have to invoke executive privilege in order to get things done.

    1. Robert Grant Silver badge

      Re: Developers hide their stuff from security teams

      Not really, because devs generally came up with SAML as well. The type of employer you've had in your career, or your attitude towards your own career, may make you think ill of devs, but I doubt that's actually the case.

    2. 9Rune5 Silver badge

      Re: Developers hide their stuff from security teams

      Having spent this week trying to get SAML working with Okta as an IdP: your devs are right and you are dead wrong.

      Use OpenID Connect like sane people do.
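      For what it's worth, the front-channel half of an OIDC authorization-code flow really is just a URL with a handful of query parameters. A minimal Python sketch of building that request (the endpoint, client ID and redirect URI are invented for illustration; the back-channel token exchange and ID-token validation are not shown):

```python
from urllib.parse import urlencode
import secrets

def build_oidc_auth_url(authorize_endpoint, client_id, redirect_uri,
                        scopes=("openid", "profile")):
    """Build an OpenID Connect authorization-code request URL.

    The whole front-channel request is a few query parameters --
    no XML and no signed AuthnRequest, unlike SAML.
    """
    state = secrets.token_urlsafe(16)   # CSRF token; verify it on the callback
    nonce = secrets.token_urlsafe(16)   # replay guard; echoed back in the ID token
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
        "state": state,
        "nonce": nonce,
    }
    return authorize_endpoint + "?" + urlencode(params), state, nonce

# Hypothetical IdP endpoint and client details, purely for illustration:
url, state, nonce = build_oidc_auth_url(
    "https://idp.example.com/oauth2/v1/authorize",
    client_id="my-client-id",
    redirect_uri="https://app.example.com/callback",
)
print(url)
```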

      1. jake Silver badge

        Re: Developers hide their stuff from security teams

        Sane people don't have important development infrastructure connected to TehInteraWebTubes in the first place.

      2. Lord Baphomet

        Re: Developers hide their stuff from security teams

        Use Hashicorp Vault and Okta is easy -- as is everything else.

        1. jake Silver badge

          Re: Developers hide their stuff from security teams

          Only if you're daft enough to pay (!!!) into the Microsoft clusterfuck experience.

  6. Claptrap314 Silver badge

    Never trust a user's machine

    ESPECIALLY if that user is a dev.

    The build system needs to be logically isolated. All code should be pulled from repositories under the direct authority of the organization, and NOTHING goes into said repos without a security review. Of course, said review might be done (under contract) by outside companies for things like OS distributions.

    Anything else is just Russian Roulette.
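    One cheap way to make "nothing enters the repo without review" mechanically enforceable is to pin every approved artefact by cryptographic hash, so the build refuses anything the review never saw. A minimal sketch (the artefact names and the in-memory stand-in for a tarball are invented for illustration):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of an artefact's bytes."""
    return hashlib.sha256(data).hexdigest()

def admit_to_build(name: str, data: bytes, approved: dict) -> bool:
    """Admit an artefact to the build only if its digest matches the
    pin recorded when the security review approved it."""
    return approved.get(name) == sha256_of(data)

# Demo with an in-memory stand-in for a reviewed dependency tarball:
blob = b"pretend this is a reviewed dependency tarball"
approved = {"somelib-1.3.0.tar.gz": sha256_of(blob)}  # hypothetical artefact name

print(admit_to_build("somelib-1.3.0.tar.gz", blob, approved))         # True
print(admit_to_build("somelib-1.3.0.tar.gz", blob + b"!", approved))  # False: tampered
print(admit_to_build("otherlib-2.0.tar.gz", blob, approved))          # False: never reviewed
```

    The same idea underlies lockfile hash pinning in most modern package managers; the point is that the allow-list is maintained by the review process, not by whoever runs the build.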

    1. Eclectic Man Bronze badge
      Facepalm

      Re: Never trust a user's machine

      Surely not!

      I cannot believe that VW, etc. failed to manage their engine management software builds well enough to prevent rogue developers inserting the emissions-test-fooling software into the final build without it first being tested and checked for adverse side-effects. After all, they are a German company, and we all know their attitude to strict procedures and record keeping.

      Oh hang on a minute ...

  7. Chairman of the Bored Silver badge

    Not quite so simple...

    ...agree pulling random bullshit from Stack Overflow is probably not a good idea. Even if you don't get owned, you've incurred technical debt.

    But what of the millions of lines of code I pull into my projects linking against standard libraries? How confident are we that libc is OK? libqt? lib*? .net?

    1. jake Silver badge

      Re: Not quite so simple...

      I'm not necessarily confident that all such code is "OK" (whatever standard that is ...), but I am quite confident that any problems with such code will be reported, so I can remove it from world-viewable systems in an expeditious manner. It can then be patched (either by the maintainer, myself, or a third party), the patch reviewed by many eyes and given a consensus approval, at which point one can choose to return it to use as and when one sees fit.

      And yes, I am aware of decade-old (or older) bugs. They exist in ALL code, regardless of source. Experience suggests that once found, they are fixed faster in FOSS environments than in closed, proprietary environments.

      1. Paul 195

        Re: Not quite so simple...

        "Experience suggests that once found, they are fixed faster in FOSS environments than closed. proprietary environments."

        The "once found" is crucial. No bug bounties for reporting errors in FOSS. Heartbleed was around a long time before being "found" and named publicly.

        1. jake Silver badge

          Re: Not quite so simple...

          Closed, proprietary code isn't immune to such ancient bugs. See CVE-2020-1350 and CVE-2019-1162 as two examples.

          Heartbleed went without being noticed for about two years. I removed the offending code from my "live" systems within an hour or so of it being announced. It was patched, and my systems were updated, with the new code running, in under half a day.

          On the other hand, the two above Windows bugs went without official notice for 17 and 20 years, respectively ... I'll leave it as an exercise for the reader to figure out how many months it took Microsoft to get around to patching them after being notified that they existed.

    2. Lord Baphomet

      Re: Not quite so simple...

      We have a high degree of confidence that the libs you mention are actually pretty secure. They are used in millions of applications against which attacks are launched every day. Occasionally a vulnerability surfaces, and almost inevitably it is immediately patched by the developers -- of course, those patches take time to propagate into production systems because the world is yet to fully adopt automated patching. And if you've got some sort of automated vulnerability scanning in your pipeline (which I'm sure everybody has these days, given the large number of such systems now available), you're likely to discover new vulnerabilities quite rapidly.
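      The core of such a scan is just comparing pinned dependency versions against advisory version ranges. A toy sketch of that comparison (the package names and advisory data are invented; a real scanner consumes a maintained vulnerability database):

```python
def parse_version(v: str) -> tuple:
    """'1.4.2' -> (1, 4, 2), so versions compare numerically as tuples."""
    return tuple(int(part) for part in v.split("."))

# name -> (first vulnerable version, first fixed version).
# These advisories are invented for illustration only.
ADVISORIES = {
    "examplelib": ("1.0.0", "1.4.2"),
}

def vulnerable(name: str, version: str, advisories=ADVISORIES) -> bool:
    """Flag a pin that falls inside a known-vulnerable version range."""
    if name not in advisories:
        return False
    first_bad, first_fixed = advisories[name]
    return parse_version(first_bad) <= parse_version(version) < parse_version(first_fixed)

pins = {"examplelib": "1.3.9", "otherlib": "2.0.0"}
findings = sorted(n for n, v in pins.items() if vulnerable(n, v))
print(findings)  # ['examplelib'] -- 1.3.9 is inside the vulnerable range
```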

      1. jake Silver badge

        Re: Not quite so simple...

        "because the world is yet to fully adopt automated patching"

        ITYM "because an ounce of prevention before the fact costs real money, but a pound of cure after infection can be amortized over 5 years".

        1. Eclectic Man Bronze badge

          Re: Not quite so simple...

          Well, it is not as if automatic patching has ever broken anything that worked really nicely before: Windows 10 deciding to update and breaking, for example, Microsoft webcams and non-Microsoft peripherals like printers and scanners, or your nice shiny iPhone being bricked over a non-Apple repair.

  8. Grinning Bandicoot

    In East Los Angeles (pronounced "Estella") there was an idiom, "no kavesa", for the type who, smelling gas, would flick the light switch. Brilliance and common sense are neither mutually exclusive nor complementary. It has been attributed to Abe Lincoln: "If common sense were so common, why is it valued so highly?"
