
Is it possible to hive off part of the Reg as devops and then leave the rest of us in peace? Still not sure what it even is, frankly. A pint to the first person who can give me a sensible definition of it...
The setup of an Elasticsearch cluster can differ greatly depending on the scenario. In order to quickly deliver visible, individually customized results to our customers, we have automated the installation process for Elasticsearch clusters and are now able to run a local demo cluster at the flick of a switch. A local …
I use ELK a fair amount and have absolutely nothing to do with dev ops.
I wish this article were better written and wasn't filed under the devops category, because ELK is a really nice open source product (paid features are available, but there are open source plugins that replicate them) that can really help you understand what is happening in your environment as a whole.
Beyond the basics it does take a bit of effort to set up, and you do need to understand the data it's handling, but it's a good product that scales really well.
We use it for both high level dashboard type analysis as well as detailed troubleshooting.
From a single platform I can:
See any servers that have been stopped/started
See any exceptions that have been generated, broken down by cluster type (the kind of query behind this is sketched after this list)
Get application response times for individual calls, or the average as a whole
Report any transaction timeouts
Track a user transaction end-to-end across multiple different systems
Run long-term trend analysis and reporting
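To give a flavour of what sits behind that kind of dashboard, here's a minimal sketch of the sort of aggregation query Kibana runs for you. It assumes a local Elasticsearch on port 9200, a logstash-* index and fields called level and cluster - those names are mine, adjust to your own mappings:

# Minimal sketch: count exceptions per cluster over the last hour.
# Index pattern and field names ("level", "cluster.keyword") are
# assumptions - check your own Logstash mappings.
import json
import requests

query = {
    "size": 0,
    "query": {
        "bool": {
            "must": [
                {"term": {"level": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-1h"}}},
            ]
        }
    },
    "aggs": {
        "by_cluster": {"terms": {"field": "cluster.keyword"}},
    },
}

resp = requests.get(
    "http://localhost:9200/logstash-*/_search",
    headers={"Content-Type": "application/json"},
    data=json.dumps(query),
)
for bucket in resp.json()["aggregations"]["by_cluster"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])

Kibana builds and visualises exactly this sort of request for you, which is why it's so handy for both the dashboard view and the detailed troubleshooting.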
From a Dev perspective:
DevOps = Getting devs to specify, in a declarative manner, what services their applications will expose to the outside world and what they require. Getting ops to build sandboxes which run and monitor these "packaged" applications in a uniform way.
Getting Ops mostly homogeneous workloads. Ops handle monitoring for the platform which runs these applications; Devs comply with the platform requirements, doing the adaptation as needed (e.g. you want a snowflake, you get to package it).
Ops manage the platform around a list of which (compute/network) assets exist and which "packaged" applications are supposed to be on which node, in something more cohesive and structured than the usual ad-hoc collection of homegrown tools.
For my sins, I need to check some arcane detail of $JOB's LDAP library (don't ask) - hence I needed a test OpenLDAP to play with.
The following YAML config snippet gives me a running LDAP server in a box that I can throw away afterwards, but it's also fairly easy to hand over to an ops team expert to apply the "production" gloss.
A git clone, a couple of shell commands, and it's usable by a person who would really rather not read RFC 4510.
# OpenLDAP - an LDAP server providing authorization on a network far far away
ldap:
  image: nickstenning/slapd
  environment:
    TZ: UTC
    LDAP_DOMAIN: mycorp.com
    LDAP_ORGANISATION: "LDAP Example Corp"
    LDAP_ROOTPASS: ldap1
  volumes_from:
    - ldapdata
  networks:
    - client

# LDAP data volume container (keeps the directory data outside the slapd container)
ldapdata:
  build: ./ldap
  volumes:
    - "./ldap/data:/var/lib/ldap"
Season with Bullshit and Business models, and you have "DevOps(tm)".
I have mentioned this before: I have been working in devops for a decade now, but even I find these articles cringeworthy.
Did anyone at the Reg even read this before letting it go up? The article is about Elasticsearch, but goes on to mention the ELK stack before even explaining what the L and K stand for or why you would want to use them.
Come on Reg, some of your readership work in this area, so why don't you ask them for some input before buying these off-the-shelf how-to guides?
Not sure there was any buying involved. Vendors almost invariably let you reprint their PR fluff pieces for free. Especially when you don't even edit them.
I'm not in DevOps, and I don't even play much with VMs. But this reads like the sort of stuff I used to copy wholesale for one of our newsletters back when I was still in the desktop publishing biz.
Elasticsearch is pretty much just a fast search system that uses JSON.
The K in ELK is Kibana, which is a visualisation layer for Elasticsearch; the L is for Logstash, which is an awesome logging framework.
Typically, you would use the ELK stack as part of your day-to-day development. A lot of places use ELK to provide an error logging system that can drop into your application, and it's pretty agnostic about where you are trying to log an error from, i.e. you can have a .Net web application logging to an ELK back end. Going further into Logstash, existing frameworks such as log4net can sit underneath the hood and append to Logstash...
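To give a rough idea of what that looks like on the wire - this is only a sketch, the host, port and field names are mine, and in practice you'd let log4net or friends do it through an appender - a Logstash tcp input with a JSON codec will happily take something like:

# Sketch: ship one JSON log event to a Logstash TCP input.
# Assumes Logstash is listening with something along the lines of
#   input { tcp { port => 5000 codec => json_lines } }
# The host, port and field names below are illustrative only.
import json
import socket
from datetime import datetime, timezone

event = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "level": "ERROR",
    "application": "my-dotnet-webapp",
    "message": "Something went bang",
}

with socket.create_connection(("logstash.example.com", 5000)) as sock:
    sock.sendall((json.dumps(event) + "\n").encode("utf-8"))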
As others have pointed out, it's not a DevOps tool at all, but this is something that would probably be thrown over the fence to DevOps because 1) a sysadmin isn't going to install this for you - they will provide the boxes (virtual or not) as well as any network configuration, ports, firewall rules etc., but they won't be installing it - and 2) there is no chance you will get a dev setting this up. So who else will be left to do it? Hello devops.
Your typical devops team will normally have some scripts to allow the setup of the ELK stack to be largely automated; these scripts can be something as simple as a batch file.
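For what it's worth, the "simple script" usually ends with a step that waits for the stack to actually come up before anyone declares victory. A rough sketch, assuming the default ports and a Kibana version that exposes /api/status - the rest is illustrative:

# Sketch: final step of an automated ELK bring-up - wait until
# Elasticsearch reports at least yellow health and Kibana answers.
# Default ports assumed (9200 for Elasticsearch, 5601 for Kibana).
import time
import requests

def wait_for(url, check, timeout=300):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if check(requests.get(url, timeout=5)):
                return True
        except requests.RequestException:
            pass  # not up yet, keep polling
        time.sleep(5)
    raise RuntimeError("timed out waiting for " + url)

wait_for("http://localhost:9200/_cluster/health",
         lambda r: r.json().get("status") in ("yellow", "green"))
wait_for("http://localhost:5601/api/status",
         lambda r: r.status_code == 200)
print("ELK stack is up")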
Hey guys,
It seems that I did not manage to get across the real purpose of this article. Please let me explain some things:
This article is NOT intended to be an easy, throw-away installation guide for Elasticsearch or the ELK stack, even if it is necessary to know the ELK stack to understand this article. In a way, this article is an "advanced" guide; it would have taken too long to explain all the technologies used (Elasticsearch, Logstash, Kibana, Redis, Hiera, Puppet, ...).
That's also one reason why using Docker would not make things easier, because simply installing the ELK stack is not what we wanted to achieve. Handling the complexity of configuration dependencies is what we are dealing with here.
If you have ever tried to install a complete Elasticsearch cluster, including Logstash and Kibana, encryption between all nodes and between Kibana and the client users, authentication, Redis, ELK and Kibana plugins and so on, you will surely have realised very quickly that this could never be done in 30 minutes or less. It's really a stiff piece of work. And that's where you can benefit from this article.
This approach is called "infrastructure as code", and I'm absolutely convinced that this is a DevOps topic. Maybe you want to read the article again with that in mind.
Kind regards,
Tobias
There's much more to it. Installing and starting Elasticsearch with no data is easy (untar, modify config files, run it). Automating that buys you very little. This is where the article above stops.
Hardening the setup, with snapshot backups, on-deploy restores of said snapshots, the ability to perform automated in-place upgrades (i.e. rolling restarts), monitoring, Elasticsearch Curator, etc., is where things get more interesting. Anything else just sets you up for manually managing your cluster for its entire lifetime (or epic failure the first time you mess it up).
For the record, we have a setup such as I describe. I can fire up a cluster of any size with our testing data restored in AWS in about 10 minutes with a single command. We use this a lot for development and integration testing as well as maintaining our production clusters.
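For anyone curious what the snapshot/restore side looks like, here's a stripped-down sketch against the standard snapshot API. The repository, snapshot and index names are placeholders of mine, and I've used an fs repository for brevity (an S3 repository via the AWS plugin works the same way):

# Stripped-down sketch of snapshot + restore via the Elasticsearch
# snapshot API. Repository/snapshot/index names are placeholders.
import requests

ES = "http://localhost:9200"

# Register a snapshot repository ("location" must be listed in
# path.repo on every node for an fs repository).
requests.put(ES + "/_snapshot/my_backup", json={
    "type": "fs",
    "settings": {"location": "/mnt/es-backups"},
})

# Take a snapshot of one index and wait for it to finish.
requests.put(ES + "/_snapshot/my_backup/snapshot_1?wait_for_completion=true",
             json={"indices": "logstash-2016.01.01"})

# Restore it (the index must be closed or absent on the target cluster).
requests.post(ES + "/_snapshot/my_backup/snapshot_1/_restore",
              json={"indices": "logstash-2016.01.01"})

Wrap that sort of thing in your provisioning scripts and the "fire up a cluster with test data restored in ten minutes" workflow falls out fairly naturally.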
I'm sorry Tobias, but your article is called "Elasticsearch cluster in a jiffy: Step by step". Furthermore, you mention prototyping this on your laptop before taking it to production, and that all of this can be done in 30 minutes. If that isn't a throw-away ELK implementation, I don't know what is.
What your article is missing is any mention of a CMDB; maintaining an accurate and comprehensive CMDB is essential when it comes to building out new environments and the software and services they are going to be running.
Finally, this isn't really infrastructure as code, is it? This is simply infrastructure configuration that is consumed at the point when whatever tool you are using executes the configuration script.
Even if this article came close to being about devops, you still didn't mention infrastructure as code in the article, or what it means and how it differs from pure configuration. And let's face it, no one is going to come away from this article thinking "yeah, that infrastructure as code is a really great thing", are they?
An example of true infrastructure as code would have been a lot more complex than the scripts you mention, and would have made for a slightly larger article. To really achieve infrastructure as code, you need that CMDB and a way for code to interact with it.
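Purely as a hypothetical sketch of what "code interacting with the CMDB" could mean in practice - none of these endpoints or field names exist anywhere, they are only there to show the shape of it - your provisioning code asks the CMDB what should exist and drives the configuration tool from the answer:

# Hypothetical sketch: drive provisioning from a CMDB instead of
# hand-maintained config files. The CMDB URL, endpoints and field
# names below are invented for illustration only.
import requests

CMDB = "https://cmdb.example.com/api"

# Ask the CMDB which nodes belong to the "elasticsearch" service
# in a given environment.
nodes = requests.get(CMDB + "/services/elasticsearch/nodes",
                     params={"environment": "staging"}).json()

for node in nodes:
    # Each record carries the role and sizing the node is supposed to have;
    # hand that to whatever actually does the work (Puppet, Ansible, ...),
    # e.g. by templating Hiera data from the CMDB record.
    print("provisioning {host} as {role} with {heap} heap".format(
        host=node["hostname"], role=node["role"], heap=node["heap_size"]))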
There really isn't that much division. If you have been working in a company that has a relatively large development function, then you have probably been working alongside a devops team, although they might have had a different name like release team etc.
The devops term has been conjured up by people who can't actually do things, giving them the chance to become thought leaders.