Mine all the data, they said. It will be worth your while, they said

Good developers instrument their applications. Good ops teams monitor everything. This near-fetishisation of telemetry has been extended to DevOps, where it now risks becoming something of a problem. It's good to collect data on your applications and your infrastructure. An important part of Doing IT Unstupidly is baselining: …

  1. Anonymous Coward

    "Good developers instrument their applications."

    Unluckily, I've seen many just collecting useless data, flooding systems with it, and even spending more time handling it than doing actual work. Mostly, they are developers quite unsure about the code they wrote (often, with good reason...), incapable of profiling it correctly. Moreover, when something nasty or weird happens, they are not able to collect the right information for a diagnosis.

    The first rule of good tracing is to collect exactly what you need, when you need it. Collecting everything just creates noise, impacts performance, and risks disclosing sensitive data and breaking any number of laws. It's a dream of marketing departments, but they need to be put back in their cages and given some cheese to keep them quiet.

    Tracing has to be dynamic, so you can increase the collected data on demand in response to events (a minimal sketch below). Events should be triggered only when something important happens. Flooding traces and logs with useless events just increases noise and makes analysis very difficult. Most of the time, it's just "butt covering".

    That's how you keep systems working at full speed.
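    Dynamic, in practice, can be as simple as raising the log level at runtime. A minimal Python sketch of on-demand verbosity (the trigger and threshold here are illustrative, not from any particular system):

      import logging

      log = logging.getLogger("app")
      logging.basicConfig(level=logging.WARNING)  # quiet by default

      def on_error_burst(error_count: int, threshold: int = 10) -> None:
          # Illustrative trigger: when errors pile up, turn the detail up
          # on demand instead of flooding the logs all the time.
          if error_count >= threshold:
              log.setLevel(logging.DEBUG)
              log.debug("verbose tracing enabled after %d errors", error_count)
          else:
              log.setLevel(logging.WARNING)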

    Then, if you want data to spy on your customers, that's a wholly different need. One I don't approve of at all and refuse to "instrument" my applications for, and the sooner it's forbidden by law, the better.

  2. Harry Kiri

    If anyone offers you a lot of answers

    and you're left to figure out the questions, ignore them.

    Trying to come up with useful emergent properties of things from random observations is a fool's errand.

    This applies to monitoring (including surveillance data), EA models, SysML models, endless cyber-security SIEM 'notifications' etc etc etc...

  3. Tom 38

    No mention of Prometheus?

    1. CloudWrangler

      Pretty much the same as the Elasticsearch/Logstash combo, only for time-series monitoring instead of logging. The same thing applies here: monitor too much and you're never going to be able to split the signal from the noise. Less is more is very much the case in monitoring, especially given the volume of monitoring traces a large running system generates.
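      If you do reach for Prometheus, the same rule applies: a handful of deliberate metrics beats hundreds of speculative ones. A minimal sketch with the official Python prometheus_client (the metric names are illustrative):

        from prometheus_client import Counter, Histogram, start_http_server
        import random
        import time

        # Two metrics chosen on purpose, not everything in sight.
        REQUESTS = Counter("app_requests_total", "Requests handled")
        LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

        def handle_request() -> None:
            REQUESTS.inc()
            with LATENCY.time():  # records elapsed time into the histogram
                time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work

        if __name__ == "__main__":
            start_http_server(8000)  # exposes :8000/metrics for scraping
            while True:
                handle_request()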

  4. Doctor Syntax

    Data > Information > Knowledge > Wisdom

    How far along that path does any of this stuff get?

  5. Wibble

    And along came GDPR...

    Deleting customer data becomes a nightmare when it's hidden in system log events.

    1. Anonymous Coward

      Re: And along came GDPR...

      GDPR doesn't actually change anything in this regard. If you're going to be in breach of GDPR, you were already in breach of the DPD it replaced.

  6. Michael H.F. Wilkinson

    Adding more hay does not make finding needles easier

    as I have often said to students in Pattern Recognition. A very good magnet is called for instead. The same principle applies here.

  7. Coofer Cat

    For my money, a TSDB is good for 'telemetry' and ELK or similar good for log aggregation (more like a SIEM). For all that to work out, though, you need to have some idea of what's important. Most application log files I've ever seen are loads of "just in case" information, because the developer knew that at some point in the future (s)he would have to debug a production problem using nothing but those logs. IMHO, there's justification for most apps to have a general log for all the usual chatter (which you don't bother to index), and an 'audit' log which contains just the things that have happened (e.g. 'user logged in', 'user requested statement', 'user made a payment'). The audit log wants to have no personally identifiable information in it, though - anonymised data only (and even then only sparingly).
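    A rough Python sketch of that split (the event names and hashing scheme are illustrative; strictly speaking a salted hash is pseudonymisation rather than anonymisation):

      import hashlib
      import logging

      chatter = logging.getLogger("app.general")  # verbose, unindexed, rotated away
      audit = logging.getLogger("app.audit")      # sparse, structured, safe to index
      logging.basicConfig(level=logging.DEBUG)

      def anon_id(user_id: str) -> str:
          # Illustrative: a salted hash stands in for the user identity.
          digest = hashlib.sha256(b"per-deployment-salt" + user_id.encode())
          return digest.hexdigest()[:12]

      def record_payment(user_id: str) -> None:
          chatter.debug("raw payment flow details, headers, retries ...")
          audit.info("event=payment_made user=%s", anon_id(user_id))  # no PII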

    Sadly, non-functional features generally get pretty short shrift in most commercially driven organisations. I'd imagine the beauty and simplicity of the log files would be pretty close to the bottom of the non-functional priority list. If it's not there on day 1, you're never going to get around to doing it.

  8. DCFusor

    Riiiiight

    Sure, as a friend of mine at CERN says: apply enough filters and you're guaranteed to see only what you thought you'd find... and your setup becomes useless if something unexpected happens. In other words, just plain useless.

    Did I see "and the old approach wasn't friendly to app developers"?

    It's supposed to be their adversary and tell them where they messed up. Test equipment isn't something you're supposed to fool with - or have to. Old school, it's supposed to be around 10x better than what you're measuring, which generally comes with maturity, not fiddling, despite that being the DevOps religion.

  9. Andromeda451

    Um

    Shouldn't it be "throwing away 99.9975%" of the data, not <1%?
