Devs reverse-engineer 16,000 Android apps, find secrets and keys to AWS accounts

A security firm has reverse engineered 16,000 Android apps on Google's Play store and found that over 304 contain sensitive secret keys. The huge deconstruction effort was carried out by Delaware-based Fallible, which ran the popular applications through its automated code analysis tool. The researchers did not name the apps …

  1. Your alien overlord - fear me

    So how does a developer use API's of 3rd party systems without putting in credentials, such as Twitter or Google's Gaming system?

    1. Timmy B

      "So how does a developer use API's of 3rd party systems without putting in credentials, such as Twitter or Google's Gaming system?"

      You know, I had the same thought. As most of these apps require some kind of signup and login process, you could download the credentials as part of that. But then that leaves the credentials available on a website somewhere. That's certainly no better, and possibly worse.

      As a WinForms C# developer, the way I got round this was encrypting any default keys and having my own internal calls that decrypt them before use. It's not perfect but it's the best I could come up with. At least it means that credentials aren't stored in an easily readable way.
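      To make the idea concrete, here is a minimal Python sketch of that "encrypt the embedded defaults, decrypt just before use" approach (the commenter's C# isn't shown, and all names here are made up). Note this is obfuscation rather than real security: anyone who decompiles the binary can recover both the blob and the pad, so it only stops casual string-dumping.

```python
import base64
from itertools import cycle

# Hypothetical illustration only: XOR the secret against an embedded pad
# and base64 the result, so the raw key never appears as a plain string.
_PAD = b"not-actually-secret-pad"

def obfuscate(secret: str) -> str:
    xored = bytes(b ^ p for b, p in zip(secret.encode(), cycle(_PAD)))
    return base64.b64encode(xored).decode()

def deobfuscate(blob: str) -> str:
    xored = base64.b64decode(blob)
    return bytes(b ^ p for b, p in zip(xored, cycle(_PAD))).decode()

# At build time you'd ship only the obfuscated blob...
blob = obfuscate("AKIA-FAKE-EXAMPLE-KEY")
# ...and call deobfuscate() just before the API call.
assert deobfuscate(blob) == "AKIA-FAKE-EXAMPLE-KEY"
```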

      I wonder if anyone more used to this can detail any industry standards and reading material? I'm genuinely interested.

      1. Kevin Fairhurst

        From reading the article, and with no specific knowledge, my impression is that you get a "master" API key when you set up the account. This has all of the permissions needed to do everything.

        You're then meant to create an "application specific" api key, which will only ever have permissions to do what is needed for that application. It is this api key that should be hardcoded in to the application, not the master key.

        Additional authentication/obfuscation (to ensure it is the application that is making the api call) would always be welcome :)

        1. Deltics

          Google Sign-In example

          You have an app which has an application ID (known to the Google APIs). Associated with that application ID is an API key. For an Android app, the application ID is also configured with the package name of the application and the fingerprint of the certificate used to sign it.

          When your application accesses the Google APIs to use Google Sign-In, the API verifies that the requesting app has the right package name and was signed with the appropriate certificate. Your use of the API is only vulnerable if your signing certificate has been compromised.

          This provides the level of trust required to allow sign-in from that application without requiring any secrets in the application.

          If your application then uses the back-end services of other APIs, you can use the Google Sign-In API to obtain a (single-use) ID token. Since your sign-in is trusted, this ID token can be trusted. The back-end typically provides some way to exchange a Google Sign-In ID token for a token to access the API itself. This exchange is server-to-server, so any secrets involved are more easily protected (unless your servers are themselves hacked).
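          The flow above can be sketched as a toy Python simulation. This is NOT the real Google API (real ID tokens are signed JWTs verified against Google's published public keys); all names and the HMAC signing here are invented purely to show the shape of the trust chain: the app holds no secret, presents a single-use ID token, and the token-for-token exchange happens where the API secret actually lives.

```python
import hmac, hashlib, secrets

# Made-up secrets: one held by the sign-in service, one by the back-end API.
# Neither ever ships inside the app.
SIGNIN_SECRET = b"signin-service-secret"
API_SECRET = b"backend-api-secret"

def issue_id_token(user: str) -> str:
    """Sign-in service: mint a signed, single-use ID token."""
    nonce = secrets.token_hex(8)
    payload = f"{user}:{nonce}"
    sig = hmac.new(SIGNIN_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

_used_nonces = set()

def exchange_for_api_token(id_token: str) -> str:
    """Back-end: verify the ID token, burn the nonce so it is single-use,
    and hand back a token for the API itself."""
    user, nonce, sig = id_token.rsplit(":", 2)
    expected = hmac.new(SIGNIN_SECRET, f"{user}:{nonce}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected) or nonce in _used_nonces:
        raise PermissionError("bad or replayed id token")
    _used_nonces.add(nonce)
    return hmac.new(API_SECRET, user.encode(), hashlib.sha256).hexdigest()

tok = issue_id_token("alice")
api_token = exchange_for_api_token(tok)
```

          Replaying the same ID token fails, which is the point of "single use".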

      2. xeroks

        I've not worked with 3rd party APIs on a commercial basis, but you could create your own web API which makes the call to the 3rd party. It adds an extra step to any calls you make, but at least it's server to server, and optimisable, rather than across a potentially dodgy mobile connection.
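        A minimal Python sketch of that wrapper idea (every name here is hypothetical, and the upstream call is a stand-in): the secret stays on your server, the app only ever sees your endpoint, and callers can only do what the wrapper chooses to expose, not everything the third-party key allows.

```python
# Server-side only; this constant is never shipped in the app.
THIRD_PARTY_KEY = "server-side-secret"

def call_third_party(action: str, key: str, **params):
    # Stand-in for the real upstream call (e.g. an HTTPS request).
    return {"action": action, "authed": key == THIRD_PARTY_KEY, **params}

# The wrapper's whole surface area: only these actions are reachable.
ALLOWED_ACTIONS = {"search", "lookup"}

def my_webapi(action: str, **params):
    """What the app calls. Adds the secret, forwards only whitelisted actions."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"{action!r} is not exposed by this API")
    return call_third_party(action, THIRD_PARTY_KEY, **params)
```

        This also answers the "someone could just call your API" worry below the line: they can, but only for the whitelisted actions.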

        1. Anonymous Coward

          But then someone could just call your api and do the same thing?

          1. Boothy

            Quote: "You're then meant to create an "application specific" api key, which will only ever have permissions to do what is needed for that application. It is this api key that should be hardcoded in to the application, not the master key."

            Yup this.

            But one of the issues tends to be that some API services are very complex; when generating a new application key there can be a lot of boxes to tick or untick, and often the provided documentation isn't clear about what is needed for specific functions or services to work.

            I've been through similar things myself (just for testing, not live applications), where an application key ticked with all the permissions you think are needed, after reading the documentation, doesn't work! Yet a key with everything ticked works fine, and it can be a sizeable task trying to whittle the key permissions down to the minimum needed for the application to still work.

            It doesn't help that often, once a key is generated, you can't change its permissions after the fact: you have to delete the old key and generate a new one for each new set of options to test, meaning you have to update your application each time before testing again.

            My guess here is that some developers likely couldn't get a minimal key to work (or just didn't bother creating one!), and just ticked lots of options till it did, and unfortunately left in permissions that an application really shouldn't have!
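            The whittling-down chore described above is mechanical enough to sketch in Python. This is a hypothetical helper, not any real console's tooling: it starts from "everything ticked" and greedily drops each permission the app turns out not to need. In real life `works` stands for "delete key, regenerate, redeploy, retest", which is exactly the slow part.

```python
def minimise_permissions(all_perms, works):
    """Greedily drop each permission; keep it only if the app breaks without it."""
    granted = set(all_perms)
    for perm in sorted(all_perms):
        trial = granted - {perm}
        if works(trial):      # app still functions without this one?
            granted = trial   # then it was never needed
    return granted

# Toy example: the app secretly only needs read + push out of five options.
NEEDED = {"read", "push"}
result = minimise_permissions(
    {"read", "write", "push", "admin", "billing"},
    works=lambda perms: NEEDED <= perms,
)
assert result == NEEDED
```

            Greedy removal finds a minimal set when permissions are independent; interacting permissions would need more retest cycles, which is rather Boothy's point.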

          2. CrashM

            Quote: "But then someone could just call your api and do the same thing?"

            They can call the intermediate API, but they can only do the things that the intermediate API is designed to do, not everything that the master API can do.

      3. Deltics

        API Security is Hard

        Having developed apps against these APIs, I can tell you that securing your app and the API it is accessing is hard. For three reasons:

        1) It's just hard. No way around it. At some point there has to be some element of trust involved, and the problem is how to protect that trusted element sufficiently. It can be done, but ultimately the app is at the mercy of the API. If an API requires some secret to be "protected" by an app, but then leaves it entirely in the hands of the app as to how to ensure that protection, all bets are off.

        2) It's made harder. Even the APIs that get it right seem to delight in making it harder than it needs to be to get the app implementation right. Documentation is vague, incomplete, or in some cases just downright wrong; or the back-end is a constantly shifting target, with a plethora of once-accurate documentation and a devil of a job figuring out which applies now, particularly if the latest docs only build on previous versions and assume a depth of knowledge, gained working with those previous iterations, that someone coming to the API afresh is not equipped with.

        3) When a developer finally stumbles across the right incantation to get things working, they have often been through so many iterations of poking and pushing at things to try and cajole the API into working that they've lost track of what they actually did to make it finally work, so any "assistance" they then offer is itself often incomplete or confused.

        Very often these APIs have two modes of access: one for applications which cannot (read: should not) contain any secrets, due to being exposed in the wild - that is, desktop applications, mobile device applications, client-side web apps. But then they have a (usually) more straightforward mechanism intended for apps that are intrinsically more secure, such as server-side apps which can "safely" contain secrets.

        These secrets-based mechanisms are usually better documented, easier to understand and more straightforward to implement. So I guess some developers just give up and embrace the path of least resistance, reassuring themselves that their app isn't one that is worth hacking, so their inappropriately used "secrets" will be safe.

    2. Voland's right hand Silver badge


      Create an instance/customer-specific key, use the key to authenticate to a service, and fetch keys from there. The moment you no longer need them, dispose of them and ask again.

      1. It allows you to migrate the cloud service. The only "permanent" part is relatively lightweight and 100% under your control.

      2. It allows you to blacklist dodgy app instances.

      3. It allows you to use multiple upstream services.

      And so on...
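      That fetch-use-dispose pattern can be sketched in a few lines of Python (the "key service" here is a stand-in function, and all names are invented): the app's only long-lived secret is a per-install key, upstream credentials are short-lived and re-fetched on demand, and a revoked instance key cuts a dodgy install off entirely.

```python
import time

INSTANCE_KEY = "per-install-key"   # the only long-lived secret on the device
REVOKED = set()                    # dodgy instances get blacklisted here

def key_service(instance_key: str, ttl: float = 2.0):
    """Stand-in for your credential-vending service."""
    if instance_key in REVOKED:
        raise PermissionError("instance blacklisted")
    return {"secret": "ephemeral-upstream-key", "expires": time.time() + ttl}

class EphemeralCreds:
    def __init__(self, instance_key):
        self.instance_key = instance_key
        self._creds = None

    def get(self):
        # Fetch on first use, and re-fetch ("ask again") once expired.
        if self._creds is None or time.time() >= self._creds["expires"]:
            self._creds = key_service(self.instance_key)
        return self._creds["secret"]

    def dispose(self):
        self._creds = None  # drop the upstream secret as soon as you're done

creds = EphemeralCreds(INSTANCE_KEY)
secret = creds.get()
creds.dispose()
```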

  2. Nick Ryan Silver badge

    over 304???

    A security firm has reverse engineered 16,000 Android apps on Google's Play store and found that over 304 contain sensitive secret keys.

    So that'll be 305 then?

    1. Doctor Syntax Silver badge

      Re: over 304???

      "So that'll be 305 then?"

      Or even 306. Who knows?

    2. GingerOne

      Re: over 304???

      Or less than 2%

  3. Peter 26

    Isn't this self-policed?

    There are bots crawling github looking for AWS keys, one simple mistake with a commit and you'll have bitcoin miners running within minutes racking up your fees.

    I would imagine the same people would have done the same with the play store, or are they missing a trick?

  4. Steve Davies 3 Silver badge

    In the interests of fairness...

    Are they going to do the same on iOS and Windows Store apps?

    Otherwise, cynics might say that this company is in the pay of Apple...

    1. lglethal Silver badge

      Re: In the interests of fairness...

      Fairness on the Internet? You're havin a laugh!

  5. Anonymous Coward


    Promo?

    So I just need to write a little tool I want to sell, buy some compromised data from the onion web, and then The Register will promote it like an advert.

    Really, don't you see these infosec companies are doing this for the article you are writing... and you don't charge them for that... just swallow the PR and write the drivel.

    1. Jim Mitchell

      Re: Promo?

      Promotion on The Register works! I had never heard of "Urban Airship", but now I have! Honestly, I would have guessed it was a band name, rather than a mobile push notification services vendor.

  6. GeezaGaz


    >>It's 2017 and developers are still doing really dumb things

    Surely not as dumb as some massive Internet company foisting a massively insecure OS onto beelions of devices????

  7. richard_taylor

    There is a strong tendency for developers not to think too hard about the consequences of secret API keys, or even AWS credentials, falling into the wrong hands. There are some techniques that can be used to hide the secrets a little better. Another way is to not put the secrets in the app at all, but to proxy all the traffic through another service that adds the secrets; of course, you then need to make sure that this proxy is not abused by an attacker to make use of those keys indirectly to access an API.

    Full disclosure: I work for the company that develops Approov, which solves exactly this problem. We work with customers that need to maintain tight control of the secrets that their app needs to use. So there are solutions out there, but many apps are still developed without regard to best practice. It is great that this work is highlighting this ongoing failing.

    1. tr1ck5t3r

      Most app developers are not security experts; throw in the low cost to market and the ease of producing these apps, so John Smith down the pub with his whippet can knock one out, and you can quickly see how these mistakes occur.

    2. Anonymous Coward

      Frankly, I find going through a third-party proxy service even dumber... you're just adding a point of failure. Nor would I give a third party keys that should be fully secret, like AWS credentials. That's also the reason I would never use a "cloud" version control system for my company code.

  8. Nimby

    But ... but ... it's an APP!

    Wait, so a clueless hack sits down to write a 10-line-of-code app, instead of investing the time and skill necessary to write a full and robust real application, and we're supposed to be surprised that some script kiddie hardcodes his full credentials into his cute widdle bittie app?

    To me the surprise is that there are not more instances!

  9. ecofeco Silver badge

    It's 2027 and developers are still doing really dumb things

    I've got your future headline ready.

    No need to thank me.

    1. Anonymous Coward

      Re: It's 2027 and developers are still doing really dumb things

      No, in 2027 it will be "It's 2027 and AI developers are still doing really dumb things"

      1. Anonymous Coward

        Re: It's 2027 and developers are still doing really dumb things

        It's 2027 and this meme hasn't died yet.

  10. Anonymous Coward

    Since when did any dev ever give a damn about security unless there was someone behind them prodding them with a big Security stick?

    Anon because my Dev team would not be able to handle the truth!

  11. Lou 2

    ... back to Cobol ...

    .. and that scaremongering is just what any dev doesn't want his management to see. Because next day: "Stop doing those API thingy-ma-jig things - they are very unsecure. Start using COBOL again."

  12. sashtuna

    Google has been warning app developers against doing this since 2014.
