
Up In The Air
As in the movie Up in the Air.
That movie introduced the then-fictional method of layoffs by remote video call. In 2009, it was just a movie. In 2017, it's reality.
More than 80 workers at Ubuntu-maker Canonical are facing the chop as founder Mark Shuttleworth takes back the role of chief executive officer. The number, revealed today by The Reg, comes as Shuttleworth takes the top role from CEO-of-eight-years Jane Silber, who was previously chief operating officer. Silber, now no longer …
Yeah, TBH as a remote employee (not of Canonical...) myself I can't really get angry about that. If I was gonna get laid off I wouldn't expect, or really want, the trouble of being flown to an office somewhere just to be told I was getting canned. (Or having someone flown out to me, or whatever.)
If they're doing it to people who actually work in Canonical offices with managers that's one thing, but if you're a remote employee, it seems kinda par for the course.
Speaking of movies: assume the position.
I was a remote worker, and laid off last year.
The only time I physically met anyone from my former employer while on the way out was for knowledge transfer to a customer that we wouldn't be supporting any more. Didn't really have a problem with that. If you are remote already, then becoming a remote not-a-worker isn't any real change. I really wouldn't have appreciated being shipped to some home office somewhere, though I wouldn't have minded too much either (it would have made getting my phone and laptop back to them a little easier).
They've got no loyalty to us at all, so we've got none to them; I'm still in contact with some of my coworkers, but that company is just somebody I used to know. It's possible to feel loyalty to a small-ish team; to Big Faceless Corp (tm), not so much, unless they've got a real sense of mission going on.
The Man With The Stubble was talking about 30% to 60% cuts. Given the figure of 700 people, that translates by my arithmetic to 210 to 420 jobs gone. So the numbers look lower than expected at the moment. I can also relate to the need to comply with different laws in different territories, and I think that Canonical did a daring thing in having a really distributed workforce.
PS: I recollect, in a previous century, standing with a union branch official, on the steps of the college I was then teaching in, greeting staff as they came into work in the morning with names read off a fax of those who were being made redundant. Shit times - not even the rose-tinted spectacles of nostalgia can smooth over that one.
Pint: for all those seeking alternative employment. I'm assuming a really sharp skill set and a short holiday for all.
The VCs saw the cost base and the revenue figures and... after recovering from the shock, they told Canonical, 'This is how it is going to play...'
Unity was IMHO doomed from the outset. The writing was on the wall once it took them what seemed like an age to deliver even a prototype. Even MS couldn't (IMHO) get it right.
I felt that Canonical kept trying to re-invent the wheel with a lot of things they did/tried.
I gave up on Ubuntu with 12.04 and went to CentOS. I find that it is a lot more stable.
Shame about all the good folks being let go though. They obviously tried their hardest to make it work.
For my desktop/laptops I went from Ubuntu 10.04 to Mint (MATE) 17. The servers at the company I work at are on Ubuntu(inherited, too much work to make it worth switching to something else), and my personal servers are Debian.
I never tried Unity; I believed I wouldn't like it, so I never gave it a go. Gnome 3 was bad enough (on Debian anyway -- fortunately the Debian system I have that runs GNOME 3 basically runs a screensaver with pictures on it, I rarely interact with the UI).
I really like the Gnome 2 UI and will do just about anything to keep that around on my systems for as long as I can; for now MATE (with "brightside" for virtual-desktop edge flipping, I have 16 virtual desktops) does a perfect job at it.
(Linux on desktop/laptops since 1996 (Slackware in my earliest days), Debian user since 1998, Ubuntu user since maybe 2007??)
>I gave up on Ubuntu with 12.04 and went to CentOS. I find that it is a lot more stable.
Getting a lot of development stuff and 3rd party uncommon libraries is a real PITA in RHEL land. PPAs are why LTS Lubuntu is still my POSIX development VM of choice. For LAMP out of the box or other common server configs I could see the use case for RHEL/CentOS obviously.
Yep. As a sys admin I find myself constantly explaining to some of our newer devs why they can't have the latest shiny software on our Enterprise Linux systems: it's very nice and stable, and because of that somewhat behind the times. And then there are all those package/library dependencies...
Generally we stick to the line that if it's in our in-house repositories you can have it, and if it's not, then I'll make an executive decision regarding how much of a pain it's going to be to source and install it (where possible I prefer RPM rather than tarball stuff).
>why they can't have the latest shiny software on our Enterprise Linux systems
Well, by only doing straight-up pre-C++11 development using only the standard library, I can pretty much set up my dev environment however I want irrespective of target platform (yes, yes, staging, native testing, etc. is necessary). POSIX and gcc are nice that way.
I've been working with Ubu and derivatives off and on, and also for a client. Ubu has good things and bad things, mostly good. Offering a Mate distribution is a good thing. And I'm happy they dumped Unity.
So here we are, doing some (apparently) VC-prompted re-organization. This is what venture capitalists do, and in many ways it's a VERY good thing, because these people generally know how to turn a company profitable. It's also good that a VC (or two, or ?) is interested in Canonical.
Can we expect a Linux distro that UNSEATS MICRO-SHAFT out of this? I certainly *HOPE* so!
Aside from "it sucks" when you're laid off, the economy should be doing well enough to 'absorb' the small number of people who got pink slipped.
And since the layoffs appear to be related to Unity, it should have been expected when that sub-project was killed.
Reboots are suggested every time you update the kernel. If you don't reboot after updating the kernel, some things, particularly anything that looks at the symbol table for the running system by reading the image on disk, could cause problems.
This should be less of an issue than on traditional UNIX systems, because there a kernel update changed the default on-disk kernel image that contained the addresses of most kernel data structures, so the symbol table in /unix (or whatever it may have been called) no longer matched the actual addresses in /dev/kmem.
Since /proc, /sys et al. are now used to access most kernel data structures in Linux without having to look in /dev/kmem, there should be fewer problems, as the kernel symbol table is not used as much.
If kernel updates really bug you, then blacklist one or more of the kernel packages, and allow all of the package updates that do not affect the kernel. At your convenience, remove the blacklist entry, allow the kernel to update, and then reboot the system.
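A minimal sketch of checking whether that deferred reboot is actually pending, by comparing the running kernel with the newest one that has modules installed (assumes Linux with /lib/modules populated and GNU sort's `-V` version sort):

```shell
# Sketch: is a newer kernel installed than the one currently running?
# Assumes Linux with /lib/modules populated and GNU sort -V.
kernel_status() {
    running=$(uname -r)
    newest=$(ls /lib/modules 2>/dev/null | sort -V | tail -n 1)
    if [ -n "$newest" ] && [ "$running" != "$newest" ]; then
        echo "reboot pending: running $running, newest installed $newest"
    else
        echo "no reboot pending: running $running"
    fi
}

kernel_status
```

On an apt-based system the blacklisting itself is `sudo apt-mark hold linux-image-generic` (the usual Ubuntu meta-package name; adjust for your release), and `apt-mark unhold` reverses it when you're ready to update and reboot.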
LTS does not mean fewer updates. It just means that you are guaranteed support for a longer period of time. Just because it is an LTS release does not mean that there are fewer bugs that need patching, or that the rate of patch delivery is any slower.
This doesn't make sense to me. Upgrading the kernel (without a reboot) doesn't remove the old kernel in my experience. It doesn't change anything. I have never ever in 20 years seen a problem from upgrading and not rebooting like you describe (symbol issues and such) on Linux.
On my Mint systems I don't upgrade kernels often, maybe once a year. My last upgrade (very recently) had problems with sound (maybe those PulseAudio problems I read so much about; fortunately there was an even slightly newer kernel that I was able to upgrade to, which resolved the sound problem). I don't plan to change kernels again this year on my main system, too much risk. Two kernel updates back, my laptop would panic once every few days. On the kernel before that, wifi and the SD card didn't work. So I am very wary of changing something that is working as a result of that (the laptop is a Lenovo P50, which has high Linux compatibility).
The security issues aren't nearly as bad (as in the risk that you will be attacked) as some try to make them out to be. Or maybe I've just been lucky running internet-facing Linux hosts since 1996.
The example I used was for commercial UNIXes, where the on-disk image of the kernel is actually overwritten with a kernel update. This is mainly because the initial boot loader is designed to load something like /unix.
For quite some time, Linux has had the ability to have multiple kernels installed on a system. In this respect, you are correct in saying that not rebooting will not cause symbol table mis-matches of the type I described, although I would not like to say there would be no issues (especially if there were any kernel API changes, not unheard of in the Linux kernel).
But I'm pretty certain that the early Linux systems, using Lilo rather than Grub, still relied on there being a link of some kind to a fixed-name file in the top-level root directory.
My first experience of Linux was with Red Hat 4.1 (original numbering system, not RHEL) around 20 years ago, and I'm sure that is how it worked in those earlier releases. I'm pretty certain that in-place online kernel updates were almost unheard of back then, and nobody would even think of not rebooting after updating the system from a CD. In fact, if I remember correctly, updating a system back then normally required booting from the CD containing the updates, so rebooting was mandatory.
My Unix experience at source level goes back to 1978 (goodness, the 40-year anniversary of first logging on to a Unix system is next year!), so I'm pretty certain of the behaviour of traditional UNIX systems.
Prior to the /proc pseudo-filesystem, the normal way for a process like ps, for example, to read the process table was for the process to be set-uid to root, and then open /dev/kmem and seek to the process table using the symbol table obtained from the /unix file. This behaviour was copied from traditional Unix systems in early Linux command sets, and you would be surprised at how many processes actually needed access to kernel data structures.
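The /proc approach that replaced all that can be sketched in a few lines of shell: enumerating processes needs nothing but readable files under /proc, no set-uid binary, no /dev/kmem, no kernel symbol table (Linux with /proc mounted assumed):

```shell
# List PID and command name for every visible process by reading /proc,
# the modern replacement for seeking through /dev/kmem via /unix's symbols.
list_pids() {
    for d in /proc/[0-9]*; do
        [ -r "$d/comm" ] || continue
        printf '%s %s\n' "${d#/proc/}" "$(cat "$d/comm" 2>/dev/null)"
    done
}

list_pids | head -n 5
```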
... shouldn't be needed.
The kernel loads itself into memory at the start and then, apart from loadable modules, it doesn't need to re-load (page) itself from disk. The only thing that stops working after installing a new kernel is hibernate. That fails (in the sense of starting to hibernate, but not going through with it) because on the next reboot, a new kernel is in place and it wouldn't make sense to reload the memory image belonging to an old kernel.
Regular programs/services should also be restartable without needing a reboot. Even upgrading loadable kernel modules on the fly is fine because they are (like the kernel) just loaded once and the init system (or something like dbus) knows about dependencies and can restart affected parts of the system in the right order.
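One practical corollary of "restart instead of reboot", sketched below under the assumption of a Linux system with /proc: a process that still maps a library file that was replaced on disk shows a "(deleted)" marker in its /proc/PID/maps, which is a decent heuristic for which services still need restarting after an in-place upgrade.

```shell
# Sketch: list PIDs of processes still mapping files deleted on disk,
# typically daemons that were not restarted after a library upgrade.
stale_procs() {
    for maps in /proc/[0-9]*/maps; do
        if grep -q '(deleted)' "$maps" 2>/dev/null; then
            pid=${maps%/maps}
            echo "${pid#/proc/}"
        fi
    done
    return 0
}

stale_procs
```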