This can't be Linux
This is probably the Linux subsystem on a Windows 10 installation. Because Linux never breaks. Never ever ever. Heresy!
In today's edition of sickly signage, we have a prime example of transatlantic bork from one of Canada's finest retailers. Snapped by Register reader Ralph Grabowski, in an Abbotsford Walmart, the afflicted Samsung screen can normally be found encouraging shoppers to buy more stuff. Now it is simply bereft. Crammed to …
"Been a *very* long time since I’ve done any windows admin work, but I don’t recall it being so useful if you filled the disk and rebooted."
I did exactly that recently and actually Windows booted, limped very badly and let me address the issue. Win10 seems reasonably OK with it.
Regarding Windows 10 with a full disk, the biggest issue is normally that a user profile has to already exist or you can't log in (it will fail to create the new folder and files, then attempt to use a TEMP profile and fail again). If you log in to an existing profile it may not load properly, but on the three occasions I have come across this, that was enough to clear some space.
This post has been deleted by a moderator
It may surprise you, but software doesn't actually wear out -- provided the underlying hardware doesn't break, it will go on doing its job forever.
The only reason we've been trained to think in terms of stuff rapidly going out of date and needing constant updating is that most of our consumer kit has software pushed onto it from external sources. Much of this is either needless feature enhancement -- necessary to justify subscriptions -- or adware and 'analytics' (spyware) pushed on us from websites. Since the pressure's on to produce this code rapidly, the mechanisms for doing so not only promote code bloat but produce software structures and techniques that expand to reach, and then exceed, the available computing resources. (It's all done in the name of "enhanced user experience", but with a fair amount of "security" FUD added in.) So you get trapped on a treadmill that no supplier's going to stop, since it's all income for them.

But, seriously, if the box just does one job then it will continue to do that job until the hardware breaks (a possibility in this case -- the storage issue might be due to a careless upgrade, but it could also indicate incipient disk failure).
I can tell you, from first-hand personal experience, that NT 4 was unable to boot with its boot disk full. It killed itself trying to add info to its event log. That was a long time ago, but it created a critical incident that lasted a whole day.
Perhaps they have become better with later versions... haven't had to support anything for a while, you see.
This post has been deleted by a moderator
A couple of days ago I fell asleep with megasync running: the free disk space on the main drive wherein Linux Mint 18 resides was down to 0. Exactly 0.
Still working, i.e. I could do everything else.
Just deleted some files [ via grsync to another drive actually ] and all was well. Didn't even have to reboot.
I guess you never updated Ubuntu distros?
Because when upgrading from one Ubuntu distro to another I tend to get so much borkage that nowadays I prefer to back up and wipe the disk clean first.
And the LIVE CD/DVD IS A LIE! Everything works perfectly on the live CD/DVD, but things don't go so well once you actually install it.
*Video card works on live CD, but doesn't work after install; I needed to install a different video driver.
*Keyboard works on live CD, goes crazy after install; I ended up using a different keyboard.
And that's with fresh installs, the borkage I get when upgrading distros tends to be worse.
To be fair, Ubuntu has a lot of the same problems that Redmond and Cupertino have, and for all the same reasons. That's what happens when corporations decide that an OS can be all things to all people ... you get kitchen-sink-ware, and a myriad of problems as bits of code that the user doesn't need or want react among themselves in all kinds of weird and wonderful ways.
Any idiot can fill up their storage. That's not "Linux" that's stupid users and (possibly custom) software that doesn't clean up after itself.
Unless overridden in the filesystem, 5% of the storage will still be available for root owned processes. That's why they can still boot in single user mode and fix it.
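A quick way to see that reserved slice, assuming a GNU userland (`stat -f` reports filesystem-level figures; the path is illustrative):

```shell
# Free blocks vs blocks available to ordinary users on /.
# The gap between the two counts is the space only root can still
# use -- 5% by default on ext2/3/4, tunable with 'tune2fs -m'.
stat -f --format='free=%f avail=%a blocksize=%S' /
```

When ordinary users hit "disk full", `avail` has reached 0 while `free` still shows the root-reserved blocks, which is what lets a single-user-mode root shell clean up.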
Odds are that /var/log has growed and growed and growed, despite old logs being turned over and gzipped. Those logs probably date back to the day of install, and nothing ever gets deleted. It's quite possibly been running for years with the free space slowly being nibbled away day by day. Add the adverts getting more complex and space-hungry as the designers and ad people come up with "better" and "newer" ideas, plus the likely relatively small space originally allocated, and... oops... disk full error.
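When a box gets into that state, a one-liner along these lines (GNU tools assumed; the path is illustrative) shows where the space actually went:

```shell
# Per-directory disk usage under /var in megabytes, biggest first.
# -x stops du from crossing into other mounted filesystems.
du -x -m /var 2>/dev/null | sort -rn | head -10
```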
yeah I wish that were true, but probably - since it's using LILO - it's a very very very old distro with a pile of uptime, and it wouldn't surprise me at all if the SD card storage [assuming it uses that] or maybe hard disk [if it uses that] has enough bad sectors by now to create problems like this...
meanwhile the occasional fsck may have corrected things up until now, until drive-cancer is larger than free space, and, well, there she blows!!!
/me replaced hard drives recently on 2 different FreeBSD systems because ZFS warned me of their impending doom... and so got new brain-storage, a few months apart -- the same 2TB drive both times, going for around $70
AND, I might add, I'd much rather re-build a Linux or FreeBSD system from scratch onto a new hard drive (or restore via a file-by-file tarball backup) than to replace an ailing hard drive from a Windows box... [hello MS? I need an activation code. No, I'm NOT running Windows 10. I have XP. Oh, you can't give me a code any more? What the HELL??? It's an OLD system and it came WITH XP on it... but you don't care? Thanks a LOT, Slurp!]
When do you want to use read-only filesystems? Wherever your system is embedded. You don't log and write braindead (just filling the filesystem); you build an infrastructure for it. There are so many things you need to think about and take care of. But that requires experience, which does not come cheap. And that goes for any operating system.
Who has tried a RPi with a writable SD card? It survives about 3 months before the flash wears out. Ah well,...
For the Pis I run as various single-function setups, I also mount /var/log elsewhere
tmpfs for things that I don't care much about... an NFS mount for when I do
Even cheapie SD cards seem to survive just fine
But if you are looking for a desktop/family machine class10 from reputable brands can do wonders for speed & reliability :)
I bought a Pi from RS and another from Farnell on the day of release. One is still running on its original SD card. The other died from a failing power supply and took its SD card with it. I have had one Pi get into trouble from defective USB flash. I switched it to SATA flash and it has lasted years. If your SD cards are dying you are buying defective flash.
It all depends on what you use them for. Out of my dozen Pis, the ones writing on average less than 10MB a day are on their original cards. The ones writing 500MB or more a day would go through a class 4/6 SD card in about a year. Those are now either attached to SSDs (yet to fail) or USB 3 memory sticks (1 failure after 4 years).
"Who has tried a RPi with a writable SD card? It survives about 3 months before the flash wears out. Ah well,..."
Had to downvote you for this. I'm running RISC OS on a Pi and my current mSD card is about three years old. The one in the dashcam (a lot of writing) is going strong after two years.
Maybe you ought to stop buying yours at Poundland? ;)
A dash cam just continuously writes 1 (or 2, if it has a rear camera) streams; this is pretty much what SD cards are designed for. Using an SD card as a general computer filing system in a Raspberry Pi, with lots of random-access writes far smaller than an erase block, causes the card to perform a vast number of re-writes. SD cards just aren't meant for that; it's what SSDs are designed to do, and they have more sophisticated controllers and lots of RAM to consolidate writes.
Run level 3 means that all networking and "user space" processes (web server, time daemon, login daemon etc) have started, but X Windows has not. A user can now log in at the terminal or via SSH (or probably telnet, given the obvious age).
At a guess the failed processes are part of a log writer, so probably /var is full, since that is where the logs go. It depends on the distribution, but probably /var is just part of the root file system. There should be a log rotation and expiry job run by cron once a day which deletes anything older than a week. At a guess, this hasn't been running.
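For reference, the daily rotation-and-expiry job described above is usually driven by a logrotate stanza along these lines (the file name and options here are illustrative, not taken from the borked box):

```
/var/log/messages {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```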
Runlevel 3 is standard if the device is only intended for console use. It's the minimum "normal operation" runlevel (1 -- single-user -- and 2 -- basic multi-user -- are considered fallbacks, as networking and daemons aren't supposed to be running). To use GUIs, you normally go up to runlevel 5.
A bone-stock Slackware inittab says:
# These are the default runlevels in Slackware:
# 0 = halt
# 1 = single user mode
# 2 = unused (but configured the same as runlevel 3)
# 3 = multiuser mode (default Slackware runlevel)
# 4 = X11 with KDM/GDM/XDM (session managers)
# 5 = unused (but configured the same as runlevel 3)
# 6 = reboot
If you want Slack to boot into a GUI, edit /etc/inittab and change the "3" to a "4" at the appropriate line (inittab is pretty much self-documenting). Your distro's variation on the theme may vary. If you use the systemd cancer, I feel very, very sorry for you.
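The "appropriate line" is the initdefault entry; in a stock SysV inittab it looks something like this (change the 3 to a 4 for a GUI boot on Slackware):

```
# Default runlevel. (Do not set to 0 or 6)
id:3:initdefault:
```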
runlevel 2: multiuser (everything except networking, so hardwired "terminals" only)
runlevel 3: multiuser with networking
Linux etc. originally followed the same model, allowing for hardwired terminals in /dev and /etc/securetty, but runlevel 2 is pretty much obsolete now. If you really don't want networking, disable or remove it.
this is actually distro-dependent but is reminiscent of a typical SysV init. I normally disabled the session managers and booted to console by altering the S01gdm to be a K01gdm in the most common run level (2 I think, on debian-based systems) and using startx. I like startx and console logins. They give the USER more control.
if you do 'man init' on your older SysV system, you'll get a breakdown of how it was set up by the distro-maker.
even on very old systems, logs tend to rotate. you keep the last 10 or something...
Now, if there's some application that keeps creating temp files that are never deleted, someone forgot to enable "clear out /tmp and /var/tmp on boot" which is a simple fix, but I suspect it's not that.
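One way to get the effect of "clear out /var/tmp" without waiting for a reboot is a cron job or rc.local line along these lines (the one-week cutoff and the path are illustrative):

```shell
# Delete regular files under /var/tmp untouched for more than 7 days.
# -xdev keeps find from wandering onto other mounted filesystems.
find /var/tmp -xdev -type f -mtime +7 -delete
```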
Most likely just an old system with failing hardware. SD cards and hard drives don't last forever. Let's say 10 years of continuous use... that'd be about right I think, to kill off an early solid state drive... or just your average hard drive.
It looks like that poor computer worked for almost a decade before giving up. It's like that forgotten Japanese soldier still standing at his post, long after the war was over.
Memo for companies large and small, you may use Windows, Linux or whatever suits you but make damn sure you employ competent admins. It's not an issue of good OS vs. bad OS, in the end it is all about competence vs. incompetence.
Hey Hey Hey! Enough with being reasonable. You can't go around having even-handed views and making valid points. That's just not what the internet is designed for. Keep that up and people will stop getting into inconsequential flame wars, and who knows what might happen!
Pick a side, make nonsensical arguments, and don't let us catch you being reasonable again, or we might have to take away your internet user card...
The problem isn't likely a lack of competent tech support, but a lack of ENOUGH tech support. I'm sure Wally-world is as cheap as they come when it comes to IT, like so many companies, and their IT staff is likely overworked. If it ain't broke, don't fix it, right? Or it could be they've had enough turnover in IT that no one even remembers or notices this 10+ year old box.
Looks to be a distro from around the year 2000, so yes, that'll probably be ext2 ... Linux was a trifle late to the world of journaling file systems. ext3 didn't start to become common in Linux until late 2001 or early 2002, when the various distros started using a kernel with it built in.
init 2.78 was the version most distros were shipping when the whole Y2K thingie was on all our minds. That's 20 years of service, and the only reason it quit was because the idiot in charge didn't know it needed more disk space (which is hardly the OS's fault).
And please note that it probably didn't actually quit. That's a secondary display screen, not the console. The console probably has a nice, friendly multi-user login prompt on it. (Why it was configured to display boot info on that screen is anybody's guess ... Probably a RedHat 6.x or Debian 2.x thingie).
"And please note that it probably didn't actually quit. That's a secondary display screen, not the console. The console probably has a nice, friendly multi-user login prompt on it. (Why it was configured to display boot info on that screen is anybody's guess ... Probably a RedHat 6.x or Debian 2.x thingie)."
If it's actually got more than one physical screen then, depending on the gfx adapter, it may well default to mirroring the display contents until a driver is properly loaded and running in a graphics mode. The display in the photo still has the BIOS boot info enumerating the discovered hardware on the screen so that is the primary screen or a current mirror of it.
Thinking about it further, it's probably just a so-called "thin client" that has lost its network connection. init detects the loss and attempts to drop into runlevel 3 to correct the error, but the developer didn't have that runlevel set up, as it would supposedly never be needed. That would explain both the lack of GUI on what is obviously a customer-facing graphical display, and the lack of a friendly login prompt.
(I'm pretty sure that version of init had that capability, but I don't have a machine running it handy to check ... corrections welcome.)
Check the network cable, take two aspirin, reboot and call me in the morning.
Usually, especially for a circa-2000 machine, a "thin client" setup would call for PXE, and the PXE prompt would've been visible between the BIOS data and the point where a local linux would boot.
This looks more to me like a locally-hosted setup that, as others have pointed out, has filled up so completely that Murphy has struck and even the fallbacks are failing. Perhaps this is indicative of the internal storage failing as parts of it become unusable, get marked bad by the hardware, and choke the OS (just a hypothesis).
Yes, they should have been careful about letting the cruft build up. Almost all distros rotate the log files in /var/log/, but sometimes have a few other logs they don't rotate. It's common for them to update the kernel but not remove any old ones (keeping one previous is sensible... I've never felt the need to go back like 5 kernel versions, though!). A worst-case offender, the binary-blob Canon Linux print driver, leaves GIANT files in /var/tmp/, like 70MB a page, and DOES NOT delete them! (/tmp/ is deleted on bootup, but by design /var/tmp/ is not!) I had to put a bit in /etc/rc.local to delete its temp files! I've also seen systems where being powered off in the middle of an update doesn't cause a problem (the updater just re-runs the updates), except that it leaves some junk files on the disk; again, no problem, but over the decades this'll use up your free space.
Of course, they "should" set the filesystem read-only, but I'm sure they didn't since they needed somewhere to stick the ads.
It's easy to make a distro that "more or less" doesn't leak disk space. But, it's harder to make one that doesn't leak disk space AT ALL, and if you don't, years later you end up out of disk space as seen here.
Here at Linux Mansions my usual setup makes a / partition of 10GB (for the software), small partitions (for /boot, /boot/efi, and so on), and the partition for /home gets everything else.
Usually Fedora/XFCE installs around 6GB of software on / ... but recent events have meant unusual software installs (like Zoom, Cheese). Today we're up to 7.5GB and if it gets any worse I'll need to do (another) bare metal install with the / partition bumped a bit, maybe to 15GB, just in case.
To get to the point... it isn't always "cruft" that does you in!
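A quick check before deciding whether / needs growing (the 10GB/15GB figures above are this poster's own sizing, not a recommendation):

```shell
# POSIX-format, human-readable usage summary for the root filesystem.
df -hP /
```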
Biting the hand that feeds IT © 1998–2020