Re: Ribbon cable?
The motherboard is in the screen section. The keyboard section is for the battery & keyboard only. Hopefully the keyboard connector isn't a ribbon. :D
Google might well patch 4.4 and beyond each month, but it's up to the OEMs to take that patch and distribute it to the phones they sold. You might possibly have a phone that's getting these regular updates two or three years after the launch of the phone. You might, but not many other people do.
So if security's your thing then your phone has an effective life of a couple of years at most.
Sure, you can carry on using your phone after updates have ended; apps themselves will continue to get updates up until the point that the version of Android you're stuck on is no longer supported by Google's app store.
At this point you might be lucky and have a phone that you can flash with a newer, community-supported version of Android. That's only going to work for a couple of Android version updates because you're stuck on a specific kernel version because the OEM only ever gave binary blob drivers to support the hardware, and so you can't move on to versions that require a newer kernel.
Bear in mind that realistically phone hardware hasn't evolved that much in the last two or three years, and so now you've got a phone in which the hardware is perfectly good but is rendered obsolete by the inability to upgrade the software. Time for a new phone, but deca-core instead of octa-core because, you know, those two cores really help when watching YouTube or browsing Twitter.
But of course if everyone took this approach projects like this would never get off the ground due to a lack of funding.
Take the approach that crowdfunding is a form of investment and that the exit strategy is the item you're funding. As with all investments there's a chance you'll lose your money, but if you don't then you'll have helped to develop something new and received it for less than if you'd bought the item when it's generally available.
Out of all the config management stuff Ansible is certainly one of the best, if not the best.
Other systems like Puppet and Chef suffered from bloat and complexity (and stupid agent-based nonsense). When Ansible arrived it was a breath of fresh air - a relatively simple approach to config management and one that encouraged you to keep things simple.
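To illustrate that early simplicity: a whole deployment could be a single short playbook like the one below (hostnames, package and group names here are purely illustrative):

```yaml
# site.yml - hypothetical minimal playbook: install and run nginx
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: true

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

Readable top to bottom, no agents, no server infrastructure - just SSH and a YAML file.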
Unfortunately Ansible's gone the way of other CM systems - more modules are being piled in, increasing the complexity. A rapid development cycle often breaks compatibility between one release and another either intentionally or down to bugs.
The documentation on the site is pretty poor too; this is quite common with a lot of modern 'devops' tools (HashiCorp is a particular offender).
The basic fact is that configuration management is, and always will be, difficult when run against long-lived instances. As someone above mentioned, you'll inevitably encounter a time when running Ansible/Chef/whatever against a server will break something.
You'll also find that your CM code becomes more convoluted and harder to maintain over time.
This sort of thing leads to the scenario where you need to make a change to your infrastructure; you _could_ do it in your CM system, but that might take a day or two with updating, debugging and refactoring your CM code (often to find it doesn't work due to months-old bugs).
Then you're a bit scared to run the stuff against your systems because you can't truly be sure of the outcome.
Or you could do it manually in less than an hour with much less risk of things breaking.
Personally I'm fully embracing containerisation; it still leaves the infrastructure provisioning, but with things like Docker for AWS that becomes much less of a headache.
Your points are valid ones. Over the last few years developers spent a lot of time and effort to replace something that was familiar and actually worked with messy implementations of the latest trends in GUIs. The move from Gnome 2 to Unity or Gnome 3 was pretty much made mandatory by most distros, even though those interfaces were unfinished and buggy.
A bit like the mandatory move to systemd.
However, I suspect your sysadmin friends that swear by Linux either swear by it as a server operating system, or if it's a desktop they use something sane and sensible for the desktop environment (I'm using Cinnamon, but XFCE and Pantheon are quite popular).
On the other hand, I had a quick go on Windows 10 for the first time the other week and found it to be a confused, mish-mashed, monstrous, un-intuitive nightmare of a GUI. So it's swings and roundabouts really.
Or alternatively a poorly put-together package might include such a unit file and then you end up with a service running as root when you weren't expecting it.
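As a sketch of how that happens: a unit file with no `User=` directive runs its service as root by default, so a sloppy packager only has to omit one line (service name and path below are made up):

```ini
# /etc/systemd/system/example-app.service (hypothetical)
[Unit]
Description=Example app

[Service]
# Without this line, systemd runs ExecStart as root by default.
User=appuser
ExecStart=/usr/local/bin/example-app

[Install]
WantedBy=multi-user.target
```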
Most people agree that it's a difficult security hole to exploit easily, but that doesn't detract from the fact that it is a security hole. The issue people have is the systemd team's response to this bug report (mainly your man Poettering here) and to other bug reports that people submit. The response is essentially "I refuse to acknowledge this is a bug - this is an issue with everyone else."
In a world where everyone else is scrambling to fix years of sub-optimal security in code (both open-source and closed-source), the systemd team seem to be adopting a policy of doesn't-apply-to-us, which is bizarre when the code they're writing is such a fundamental part of the operating system.
The Linux server stats are way out.
What goes on behind company firewalls is a different matter, but 12% sounds highly dubious.
I'm not sure what the point of the article is apart from "use a long-term support version of a distro and keep your software up-to-date".
Someone mentioned it's hard on Linux because of the amount of local compiling you need to do. Perhaps that was an issue in 1999, but the only thing I've had to compile on a server in *years* is OSSEC - an intrusion detection system. And you compile that so that you can trust the binaries.
Live kernel patching is also available (for a price) which means you don't even need to reboot your server to apply all the latest patches.
As someone mentioned, for Debian/Ubuntu systems, apt-get update; apt-get upgrade will sort you out. apt-get dist-upgrade if you want to upgrade held packages, but be careful with that because it'll restart services.
You'll also want to subscribe to the security advisory mailing lists for your chosen distribution. They'll let you know when vulnerabilities are patched and available to upgrade from the repositories.
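The routine update cycle described above amounts to something like this on a Debian/Ubuntu box (run as root or via sudo):

```shell
# Refresh the package index, then apply available upgrades.
sudo apt-get update
sudo apt-get upgrade

# Only if you also want held packages / new dependencies pulled in.
# This can restart services, so schedule it for a quiet moment.
sudo apt-get dist-upgrade
```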
There's no reason why IMAP can't accept OTP. It's not something I've looked into, but I imagine it's possible to, for example, get Dovecot to use PAM for authentication and then configure the Google Authenticator PAM module.
Dovecot does seem to have a built-in OTP mechanism, but a very brief search in their documentation doesn't give any details on how it works.
If you used the PAM approach above you'd be able to configure it to accept the OTP code by appending it to your password. That way you avoid having to modify the IMAP protocol in any way.
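A sketch of that PAM route, assuming Dovecot is built with PAM support and the pam_google_authenticator module is installed - the module options and file paths here are typical-Debian assumptions, not something I've verified end to end:

```
# /etc/dovecot/conf.d/10-auth.conf (sketch)
# Tell Dovecot to authenticate via PAM.
passdb {
  driver = pam
}

# /etc/pam.d/dovecot (sketch)
# forward_pass makes the module strip the appended TOTP code and
# hand the remaining password on to pam_unix.
auth    required    pam_google_authenticator.so forward_pass
auth    required    pam_unix.so use_first_pass
account required    pam_unix.so
```

The user then logs in with password+code concatenated, and the IMAP protocol itself is untouched.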
There's a difference between disaster recovery and high-availability (though they do overlap).
It's perfectly reasonable that disaster recovery is a manual fail-over process. Fully resilient systems over two geographically separated locations can be hard and expensive to implement for smaller companies with not much in the way of a budget or resources, and so you have to compromise your expectations for DR.
Even if failing-over can be automated, there might be a high cost in failing-back afterwards, and so you might actually prefer the site to be down for a short while instead of kicking in the DR procedures; it works out cheaper and avoids complications with restoring the primary site from the DR site.
Not every company runs a service that absolutely needs to be up 24/7.
A lot of people designing the DR infrastructure will be limited by the (often poor) choices of technology made by the people that wrote the in-house stuff.
As an example, replicating your MySQL database between two datacentres is more complicated than most people would expect. Do you do normal replication and risk that the slave has lagged behind the master at the point of failure, losing data? Or use synchronous replication like Galera at the cost of a big latency hit to the cluster, slowing it right down?
If it's normal replication, do you risk master-master so that it's easy to fail-back, with the caveat that master-master is generally frowned upon for good reasons?
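For what it's worth, the asynchronous flavour is only a few lines of my.cnf per server - which is exactly why people underestimate the problem; the config is easy, the lag and fail-back trade-offs above are the hard part. Values here are illustrative:

```ini
# my.cnf fragment for GTID-based asynchronous replication
# (MySQL 5.6+; sketch only)
[mysqld]
server-id                = 1          # must be unique per server
log_bin                  = mysql-bin  # enable the binary log
gtid_mode                = ON
enforce_gtid_consistency = ON
```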
I think it's disingenuous to berate people for implementing something that can be very difficult to implement.
Though of course, large companies with lots of money and lots to lose by being down (like BA) have no excuses.
At school I was taught to use an apostrophe to pluralise initialisms. I suppose it came from the now near-outmoded practice of using an apostrophe to abbreviate words (although some examples are in common use, e.g. "it's" as in "it's hot today").
I'm not saying it's correct, it's just that that's what we were taught to do back then, so I have a lot of tolerance for that type of apostrophe usage.
webOS had a hardware sensor for gestures and made using the phone so intuitive it was almost insulting. I haven't found another mobile OS that comes anywhere near it yet. SailfishOS is the closest, but not quite there. BBOS10 is a vague nod towards gestures, but there are too many discontinuities. Android's a joke - I find a lot of the time hitting 'back' doesn't do at all what you'd think it would.
No, I understood fully well. You're saying that developers shouldn't know/care which orchestrator they're using. The article says that AWS should use Kubernetes because that's what devs want. I agree with you - devs shouldn't be mucking with stuff like Kubernetes. If they honestly have no 'devops' resources and really, really need an orchestrator then Swarm is a good choice because it's part of Docker and re-uses the same API that they're using anyway.
So my point is that the article is wrong - it's not devs clamouring for Kubernetes on AWS.
Kubernetes has its specific uses, but it's by no means a developer-friendly container orchestrator. It's really intended for legacy style application architecture where services need tight integration.
It's more suited to migrating existing stacks into containers rather than a system to allow developers to develop their new whizzy containers easily.
Have you looked at the heavy, complicated architecture for Kubernetes? I'm sure most devs wouldn't want to roll that out in a hurry. Sure, things like Rancher allow you to spin it up more easily, but "easily" is still relative.
Compare that to Docker which has its own built-in, easily clustered orchestration (Swarm). Sure, perhaps not as feature-rich as Kubernetes or yet as scalable, but again that's something only very large outfits are interested in, and at that point it'll be the systems team that implements it, not the developers.
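Part of why Swarm is the low-friction option is that it's driven from the same CLI developers already use. A sketch (the image name is just an example):

```shell
# Turn this Docker host into a swarm manager.
docker swarm init
# 'init' prints a 'docker swarm join ...' command to run on any
# additional nodes you want in the cluster.

# Run a replicated service via the ordinary Docker CLI:
docker service create --name web --replicas 3 -p 80:80 nginx
```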
So I wouldn't call Kubernetes the common standard - not by a long shot.
We keep being told by May that we're doing the Brexits because it's the will of the people (despite being roughly half-and-half). Yet the people show their will by having over a million people express their opinion on something, and May's already dismissing them.
Is there some kind of guide as to when the people's will is to be respected? Is it some kind of lunar cycle thing?
To be fair, Iceland agreed years and years ago that Iceland (food) could be called Iceland for the purpose of selling food. But Iceland (food) is branching out and its other ventures are keeping the branding. So Iceland (country) are a bit miffed about this and asked Iceland (food) to stop. They think that people could confuse the new ventures as being endorsed by Iceland (country) which is fair enough.
It reads like one.
I get portable data volumes. However whatever happened to the concept of stateless containers? Wasn't that the dream?
No, don't tell me. You've got a database in a container and you want it to fail-over. My suggestion is to stop looking at containers as a solution to everything.
I think flockerhub is solving a problem that, if people architected properly, oughtn't exist in the first place.
The package system is quite good, and there's community stuff similar to FreeBSD ports or Gentoo portage.
I've used it on ARM devices (cubox-i etc.) and it uses systemd. I'm not sure if that's mandatory, but if it is then I feel sorry for anyone using Arch to learn Linux and then having to use systemd.
To avoid that horror, flee to Slackware!
but the reality is that DevOps has been confused with CI/CD by almost all agencies/employers now.
I'm currently looking for a contract - my expertise is Linux infrastructure/operations. Those contracts don't exist any more. Go to a job site of your choice now and search using the keyword 'Linux'. A year or more ago that would have given you mixed results; Linux support, Linux Admin, Operations, Engineers etc.
These days over 90% of the results are 'DevOps Engineer'.
Look at the spec for a DevOps engineer and it'll vary, but it's typically Puppet/Chef/Ansible, Python/Ruby, AWS, Jenkins/Travis etc. The role is generally "Manage the CI/CD toolset, package up the software for deployment, write the deployment scripts, do the deployments."
So DevOps is actually "that bit where dev and ops overlap which no-one else wants to do."
The issue with this is that it doesn't solve the problems that the real DevOps aims to solve. Instead of separate dev and ops silos, you now have dev, ops and devops silos.
The "DevOps" roles also put a lot of emphasis on the candidate being able to code (and that's code, not script), so evidently this favours developers that, for reasons either good or bad, have moved from their development career into DevOps. They'll tend not to have (and the employers aren't looking for) deep or broad experience in ops, and therefore won't be aware of the niceties of how to properly apply the stuff they're doing in dev environments to production.
Oh, and for extra hoots, in the past week I've also come across adverts for TechOps, WebOps and NetOps. The future's looking bright!
There's no reason that a container can't have direct hardware access. I think you're mistaking containerisation for virtualisation. It's misleading to call containers a form of virtualisation - they're not running on emulated hardware but rather directly on the host hardware, which is why they can access it.
It's actually best described as a way of bundling software and its dependencies and running them so that they're isolated from other containers; more like a super-fancy chroot.
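The "super-fancy chroot" comparison can be made fairly literal with plain util-linux tools - kernel namespaces plus a change of root, no emulated hardware anywhere. This sketch needs root, and /srv/rootfs is a hypothetical directory holding an extracted root filesystem:

```shell
# Give the shell its own PID, mount, hostname, IPC and network
# namespaces, then chroot it into the bundled filesystem.
sudo unshare --fork --pid --mount-proc --uts --ipc --net \
    chroot /srv/rootfs /bin/sh
```

Docker adds image layering, cgroup resource limits and a distribution format on top, but the isolation mechanism is essentially this.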
And of course you wouldn't put your actual data within a container anyway - there's stuff you can do with data volume containers, but the real question is, what's the point? There's no real benefit to it.
I think the point you made about putting GUI components etc. into containers is the only sensible use of containers when it comes to storage, but just because you've put something in a container it doesn't mean you're committed to updating the container every few days. What makes you think that this is forced on containers any more than it's forced on 'traditional' applications?
So you're suggesting that host servers are monitored? Good idea - I don't think sysadmins ever considered monitoring servers before. That might be a new market!
Docker containers can leave a load of cruft behind; stopped containers that haven't been removed, volumes from removed containers that weren't removed along with the containers and of course container images that aren't being used by any running containers any more.
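The cruft above can be swept up with Docker's built-in prune commands (added around Docker 1.13; the --volumes flag to system prune came a little later). They prompt for confirmation, but do check what you're about to delete first:

```shell
docker container prune          # stopped containers
docker volume prune             # volumes no container references
docker image prune              # dangling images (-a for all unused)
docker system prune --volumes   # or all of the above, plus networks
```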
I think it's worth suggesting that, in the same way that containers should be ephemeral, the cloudy hosts running them ought to be too, so avoid the accumulation of this slough.
If you're on bare metal then your ops people should be putting the normal housekeeping monitors and scripts on the server that they'd run on a 'normal' server.
If your 'devops' doesn't grasp the concept of general server husbandry then I'd rather not buy shares in your slack startup.
I agree with the thrust of the article, but the reality is of course that there aren't as many system 'architect' jobs going as sysadmins or helpdesk. You'll still need some form of IT support; to imagine that all support and administration tasks will be automated is pretty naive.
I don't think it's fair to dismiss readily the people who are happy to work in a support role for their whole career. Why should they aspire to an 'architect' role if they don't want to?
The other issue is that compared to ten years ago the amount of new technologies is astounding. It's hard, no, impossible to keep up to date with them all. So yeah, an architect should have a good idea of what's out there but it's unavoidable that a lot of it will be shallow knowledge; vague impressions even.
So the idea that an architect will come in, look at the issues and splurge out the ideal solution immediately from their vast store of knowledge is a nonsense. Anyone worth the money will spend a while analysing the issues and then a good deal of time researching potential solutions and offering up different options to their employer or client.
This is a nonsense. If DevOps is supposed to mean *anything* it's supposed to mean a form of collaboration between dev and ops which allows everyone to do their job as painlessly as possible, be happy and fulfilled in life and churn out crappy software updates on a weekly/daily/hourly basis in order to fix the last crappy software update you did.
You don't have such a thing as a DevOps 'role'. You still work either in dev or ops. It's supposed to be about getting the procedures and toolkits right.
So telling someone that they need to learn how to be a developer and a sysadmin at the same time suggests that whichever paid-per-word pseudo tech journalist drone wrote this "article" knows less about DevOps than the average CTO.
I wouldn't trust many developers I know to write a B+ tree library in C. But then this shows a flawed mindset; why not use one of the pre-existing B+ tree libraries?
There are different levels of coding with developers too, you see. I know sysadmins that code better than a good proportion of developers.
Well about improving Linux - systemd *attempts* to improve Linux desktop/mobile installations. None of the systemd "improvements" make any sense in a server environment. Faster boot times? Fsck a thumb drive?
Have you seen systemd's cron replacement? Yuck! Binary logging is in no way an improvement over text logs. The FreeBSD init system is a much better fit for server systems.
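For anyone who hasn't seen it: the one crontab line `0 3 * * * /usr/local/bin/nightly-job` becomes two unit files under systemd (names here are made up):

```ini
# /etc/systemd/system/nightly-job.service
[Unit]
Description=Nightly job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/nightly-job

# /etc/systemd/system/nightly-job.timer
[Unit]
Description=Run nightly-job at 03:00

[Timer]
OnCalendar=*-*-* 03:00:00

[Install]
WantedBy=timers.target
```

You then `systemctl enable --now nightly-job.timer`. You do gain logging and dependency handling, but it's a lot of ceremony for a one-liner.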
I've got no real issues with systemd on a desktop system aside from the fact that it becomes a dependency of other stuff. How on earth that came about I have no idea. :(
The gesture based stuff has gone a bit downhill, yes. The really annoying thing is the big focus on Android integration rather than getting the native stuff up to scratch.
If I wanted to use Android apps I could, you know, buy an Android phone.
They seem to have spent a lot of development effort on that rather than fixing some long, long-running bugs. Like crappy network management, or buggy IMAP IDLE.
Try ElementaryOS on an old Acer netbook. :) Or if you want to stick to Ubuntu, switch your desktop manager to something like Enlightenment. I think we should all be well past the time when you mistake "Linux" (i.e. the kernel, which supports loads of older hardware) for "Ubuntu with Unity".
I imagine these days you might even be able to get a Linux distro with a GUI running on something as underpowered as a RaspberryPi!
Now I'm sure there's a mobile "version of Linux" knocking about some shops somewhere. Now I think of it, I recall there was an amazingly successful crowdfundy thing to produce a tablet for it to run on. Now what's it called again? HoverfishOS? SailpigOS? No, it's gone. If only I was a technology journalist writing about Linux on mobile devices then I'd be bound to have heard of it.
And then there was that other one, netOS? webOP? You know, the one that used to be in phones and tablets in the shops, and is now on TVs and Audi watches.
And that new one just released in India that'll be on phones, tablets and TVs soon. Tizer? Tiger?
I recently bought a Nexus 4 in order to try out other Linuxy-based OSes. Ubuntu easily trounces FirefoxOS, but I found the UI to be a bit of a mess - a half-hearted implementation of gestures, a confusing home screen system with "scopes" that offered difficult-to-get-to home screen personalisations.
No mobile OS around at the moment beats the user experience of webOS. Shame no-one's reviving that for phones and tablets.
I think you've missed the point of Docker - it's not meant to run simultaneous operating systems on the same server. It's not virtualisation - it's an encapsulation and deployment solution. You're not "giving up" high-availability - that's provided by the design of your production infrastructure and software architecture.
Your article is essentially pointless.