JavaScript has been doing this for years.
You don't need custom malware to saturate Chromium, take all memory, load your CPU and crash your browser. Plenty of websites already do that if you leave them open long enough.
A critical, currently unpatched bug in Chromium's Blink rendering engine can be abused to crash many Chromium-based browsers within seconds, causing a denial-of-service condition – and, in some tests, freezing the host system. Security researcher Jose Pino found the flaw, and created a proof-of-concept exploit, Brash, to …
If you've configured NoScript to temporarily allow scripts from the same domain, then very likely examined the list for whatever CDN(s) they are using... by the second or third attempt I decide that their dribblings aren't that important to me after all.
It's still less hassle than letting everything run, but I can't recommend it to my non-techie friends.
Local newspaper portals are the worst websites I see in my daily web use. They are at the level of appallingly bad. And they are trying to monetise them, asking people to subscribe.
That's like asking people to subscribe to a ripped and shredded printed edition.
It's even better when it gets ingested and messed up by the Google news app - you'll get several screenfuls of the exact same headline with the exact same image for a load of publications you've never heard of, like The Leicester Early Afternoon Herald or The South Side of Portsmouth Shouty One... It's all the same drivel masquerading as "local" news.
Chrome uses all that virtual memory to sandbox JavaScript, so they don't (think they) have to worry about things like this. It's easier to pretend that sandboxing is the solution to all "overload the system" ECMA/JavaScript exploits, but... obviously it isn't.
One of the retrograde steps Chrome took a while back was removing the standalone management console/task manager. When Chrome ground to a halt or some website prevented it from running, you still had a way of killing specific tabs. Now, with that functionality embedded in Chrome itself, if Chrome stops running or some website blocks menu access, your only option is to kill Chrome via the Windows Task Manager.
Unfortunately that's happened anyway. We use a number of SaaS services where the answer to any support request is "are you using Chrome?", and when told that we aren't, the next response is "we advise using Chrome".
Basically, they can't be arsed to test it properly in anything other than Chrome.
Firefox (1 version back) will run for about 10 days before it starts to collapse on itself, at which time it's too late to save any open tabs, since you can't get them to open. The only fix is to start Task Manager and start killing off Firefox processes until you get them all. It's been behaving this way for about 6 versions and I have zero hope that they will fix it.
Doesn't seem to affect ESR? I have Firefox ESR 140.3.1 on Linux with processes dating back to Sept 27. I run four different isolated Firefox browsers simultaneously on Linux, each running under a different username for better isolation (I still have 104G/128G of memory available, so RAM is not a problem).
I checked a Win10 system that I use less frequently; it is running Firefox ESR 128.13.0. I ran a PowerShell command ((Get-Process firefox).StartTime | % {New-TimeSpan -Start $_}) and it says some of the Firefox processes have been running for 75.4 days. I haven't used that system in a few weeks at least, I think. The browser is sitting there with 3 tabs open on basic websites, nothing fancy (cygwin.com being one of them).
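For anyone who wants a per-process breakdown rather than just the timespans, a slightly longer variant of that one-liner (same idea, only tried on a recent PowerShell) is:

# list each Firefox process with its PID, start time and uptime
Get-Process firefox | Select-Object Id, StartTime, @{ n = 'Uptime'; e = { New-TimeSpan -Start $_.StartTime } }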
I suspect that it depends on which web sites you have open (and how many). For me, FF gradually sucks up memory until my machine becomes unresponsive, and I have to kill the processes. It's been doing that since FF moved to independent processes, which was years ago. Maybe 15+ tabs, mixture of news, retail, technical etc, so all JS heavy.
Turn down the setting that splits off "Isolated Web Content" processes for everything, and things start working much better. They went a bit too far with the sandboxing (individual web pages don't really need to be sandboxed much from other web pages on the same site, but Firefox now seems to do it by default).
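For anyone hunting for the actual knob: if memory serves it's the about:config pref dom.ipc.processCount.webIsolated (lower the number to allow fewer "Isolated Web Content" processes), but pref names and defaults do shift between releases, so treat that as a pointer rather than gospel.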
Somewhat counter-intuitively, I also found that turning off swap entirely leads to better performance on a RAM-heavy computer, as Firefox will try to avoid using too much memory, and if it has swap to use it can use too much of it, leading to thrashing EVERYTHING.
> if it has swap to use
Why on earth is a user-level app like Firefox trying to handle something so low-level itself? Swap is an OS issue, an app should just ask for the memory it needs and rely on the OS to provide it as it sees fit, according to the requirements of the whole system.
> Firefox is like "cool, eight gigs!"
It's not for a program to make that determination; it has no idea what part of the 8 GB is available. Firefox should simply decide that it wants, say, 4 GB and ask for it. If the OS says no then FF either aborts with an insufficient-memory error or reconfigures itself to require less, and then repeats the request for that much. No user-space programme should be trying to second-guess the OS's resource usage; that's totally incompetent programming.
> Why on earth is a user-level app like Firefox trying to handle something so low-level itself? Swap is an OS issue, an app should just ask for the memory it needs and rely on the OS to provide it as it sees fit, according to the requirements of the whole system.
It's not handling OS swap itself, but it does manage its own memory usage. That's quite reasonable when you consider a modern browser functions exactly like a basic operating system.
The issue is that if you leave a tab open in the background there will be lots of potentially quite fragmented memory associated with that tab which goes untouched for an extended period of time. Windows will eventually swap that to disk if swap is available. This might be exacerbated by process isolation. It's also worse if the backgrounded tab is doing some awful tracking or ad rotation that causes a script-level memory leak which the browser can't do much about. When you want the tab back, a lot of that memory needs to be swapped back in before the tab will function. And many people's reaction when a tab fails to load quickly is to try another long-forgotten tab to see if that's broken too... which only makes things worse. Another application that suffers this problem is VS Code, because it runs child processes that can go untouched for long periods of time and can sometimes use lots of memory.
However, when Firefox knows memory is getting low, it will proactively 'unload' tabs and also run garbage collection and heap optimisation routines. If it does that when half the memory it's using is already in swap - maybe even on a platter disk (avoid that) - because you've allowed Windows to use lots of it, this is going to exhaust some people's patience. I'm not entirely sure, but I think Chrome started more aggressively unloading unused tabs recently? But then it also does more pre-loading too.
Too much swap really can be a bad thing, ironically more so on low-RAM systems.
"Definitely sounds like a swap issue to me."
That was my thought as well. If your system's "disk" light (assuming there is one, but that's another rant) is on almost solid, page thrashing is a pretty good bet.
On Linux, instead of playing Whac-a-Mole with Firefox processes, you can "pkill firefox", which kills all of them at once. (Well, all of your own. "sudo pkill firefox", if necessary and appropriate, to take out every user's.)
Be patient. Once it actually runs, pkill is pretty fast, but it can take a *very* long time to get to the point of running it. On my 16 GiB system, if page thrashing is especially severe, it can take minutes for the terminal window and then the shell process to page in, both of which are prerequisites to running that quick pkill command.
I don't know the correct incantation on Windows, but from another comment here, PowerShell might be a good starting point.
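If it's any use, a guess at the Windows incantation (untested here, and it assumes the processes are literally named firefox, which may differ for other builds):

# PowerShell: kill every Firefox process in one go
Get-Process firefox -ErrorAction SilentlyContinue | Stop-Process -Force

# the old-school equivalent, which also works from a PowerShell or cmd prompt
taskkill /F /T /IM firefox.exe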
On Windows, Ctrl-Shift-Esc, wait patiently until Task Manager appears and tread carefully from there. End task or end process tree on the correct process to kill all of Firefox at once. I do think both Windows & Linux could do more to ensure a minimal set of process-control functionality was always given priority to run and never swapped, but this doesn't seem to be the case. On Windows 11 Task Manager even runs at Normal priority now, while I'm sure it was always elevated on Win7 and before?
Last year I was routinely exhausting 128GB RAM + 128GB swap for some project. To its credit Win11 was mostly remarkably stable (Win7 would not have been, because it always flaked under multi-core low-memory situations) but sometimes it would quite randomly get itself into trouble, and on one occasion it took over 4 hours to get the right processes killed off. After that I always opened Task Manager and set it to High priority pre-emptively.
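For what it's worth, the same pre-emptive bump can be done from a PowerShell prompt (assuming Task Manager is already running and you're elevated enough to change it):

# raise an already-running Task Manager to High priority
Get-Process Taskmgr | ForEach-Object { $_.PriorityClass = 'High' }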
> After that I always opened Task Manager and set it to High priority pre-emptively.
Except that Windows (at least Win11) apparently likes to change that on its own: I had some heavy calculations to do, and they seemed to run sluggishly (not using the full CPU capacity), so I checked the task's priority, and for some reason it was running as "low priority"! I manually changed it to "high priority", went away, came back an hour later and lo and behold, it was back to "low priority"... Apparently Windows decided my work should take a backseat to its own internal shenanigans (nothing else was supposed to run at that time)...
Okay, more info: people seem to report that defeating efficiency mode is now a moving target and a mystery. All I can advise is that I managed to stop it triggering last year; needing the solution was unexpected, but it wasn't complicated to figure out, and it still worked as of around 9 months ago, though I can't remember for sure whether it involved anything other than setting 'Best Performance'. I haven't confirmed whether it still works. In ~3 weeks I'll be returning to lengthy CPU-intensive workloads so I'll report back with any new findings/solutions. We may need some kind of support group... :|
The solution then was robust over various workloads, and I was even able to run CPU-intensive code which set itself IDLE_PRIORITY_CLASS and achieved the expected behaviour of properly using all spare CPU (on all cores) while having minimal impact on any higher-priority programs, although that wasn't (at least originally) necessary for stopping Windows triggering efficiency mode - it was defeated for Normal-priority processes just fine too. Note that PROCESS_MODE_BACKGROUND_BEGIN & THREAD_MODE_BACKGROUND_BEGIN are the real killers because they throttle I/O and memory resources (rather than just affecting scheduling priority), and efficiency mode appears to do something similarly nasty. So Low (aka Idle) process priority itself is probably not the true issue in your case, but rather a tell-tale symptom. (However, no two workloads are the same, so you shouldn't take my word for that; it could be confirmed by manually setting the program to Low before efficiency mode kicks in, then checking that CPU is still high and only drops later once efficiency mode is triggered.)
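If you want to test that theory, a rough way to do it from PowerShell (MyLongJob is just a placeholder for whatever your calculation process is actually called):

# see what priority Windows is currently giving the process
(Get-Process MyLongJob).PriorityClass

# set it to Low (Idle) yourself before efficiency mode kicks in; if the argument
# above is right, CPU use should stay high and only drop once efficiency mode triggers
Get-Process MyLongJob | ForEach-Object { $_.PriorityClass = 'Idle' }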
Interesting. Doesn't happen here. Currently looking at Firefox running on a Win10 system, on a Ubuntu system, and a Mac. Multiple tabs and windows open on each. The Win10 machine hasn't been restarted since Patch Tuesday; that's more than 10 days. The Ubuntu and the Mac were last restarted last month; that's also more than 10 days. Firefox starts up on startup on all three, and it's never closed until shutdown.
Unfortunately, Firefox has similar vulnerabilities. This script, for example, will take out both Chromium-based browsers and Firefox: https://dpaste.com/G8R94ESDC
I actually reported that iframe method as a vulnerability in Phoenix, and was told that it was working as intended. It's still vulnerable to the exact same DoS attack more than a decade later.
The article states that "... other rendering engines, Firefox (Gecko engine) and Safari (WebKit engine), and both were immune to the attack ..."
Your example of a vulnerability across two different "engines" suggests to me that the JavaScript/ECMAScript Specification may be the issue and that the two different "engines" correctly reproduced the fault in the Specification...
And which "Phoenix" web browser are you referring to? The 2002 browser that became Firefox or some other, later, web browser using the same name?
I would be more impressed with an example of a Firefox-Only vulnerability with the same severity as this!
This is one of those still open debates that are frequently considered from only one side. If the bug remains unfixed after your repeated attempts to show the company the problem, most tech journos wouldn't go to print with a "company X didn't fix bug Y" without a somewhat thorough description of bug Y. Non-tech journos wouldn't bother at all.
Getting enough public perception that "this is an actual problem that company X should fix", without providing proof of bug Y somewhere, is very difficult. Especially with larger company X's whose actual customers are the advertisers, and not the web browser users.
Trouble is, with open source and in-public/open development, you effectively have to “go public” with the details just to communicate with the maintainers.
Obviously, this open communication can be picked up by interested parties such as ElReg writers.
The article does not say the author “went public” or approached ElReg, just that they spoke exclusively to ElReg - who initiated that communication? I.e. how did ElReg discover this potential news item?
I suggest you walk through the entire reporting and fix process where a public repository such as Github is being used to manage changes.
From what I can determine this bug finder didn’t wait 90 days from report to publication, but only circa 60 days.
@Tron:
I'm not downvoting you, but am pointing out this type of situation is not as clearcut as you present it as being.
Without releasing details, companies selling/providing vulnerable software could easily spin up their PR machine against the researcher, muddy the waters, and continue to not fix the problem.
"This person is lying. This person has no proof. This person is seeking publicity [which may be true, but is irrelevant to the existance or non-existance of a bug]. We at Megasoft employ the finest software engineers [yet let unpaid interns do commits to production code without sufficient supervision and review], follow industry best practices [arguing what those may be], and are ISO-9000 certified [but we outsource some coding and program maintenance to non-ISO-9000-certified companies] ..."
Why are people using the presentation layer (browser) to store infinite numbers of active web pages/sites/data? That's like filling your home with infinite newspapers just in case you ever get back to those articles you started to read thirteen years ago.
Howard Hughes was a warning, not a goal.