I used to run a printing company, and 15-20 years ago Publisher files were the absolute bane of our lives. By the mid 2010s Publisher had relinquished its role as the Bane of a Printer's Life, replaced by PowerPoint. Oh, the joy of full-bleed-but-bleedless PDFs which aren't even the same shape as the piece of paper they're supposed to be printed on!
64 publicly visible posts • joined 4 Aug 2022
Re: Thank you Liam
The phrase I've used for it in the past is "30 foot high Jenga tower".
I have a small amount of hope that people tinkering with RISC-V cores on FPGAs will have more appreciation for lean, lightweight software than those of us using "proper" computers, since they typically have less than a gigabyte of RAM to play with, and often as little as 32 meg - not to mention relatively slow SD card storage.
For my own toy CPU project* (not RISC-V - I was crazy enough to create a whole new ISA from scratch) I took one look at GCC and thought "no way in hell am I tangling with that" - so I wrote a backend for the vbcc C compiler instead, and an assembler and linker to go with it. The whole toolchain builds in a few seconds, and as a bonus it's lightweight enough to compile and run on an Amiga!
Interesting - I didn't know about UASP - but it appears to be a pretty new development?
It's not performance I'm concerned about - it's lifespan.
The way wear levelling works (or at least did on CompactFlash 20 years ago, when I learned about it after killing a CF card in an IDE adapter*) is that the device does logical-to-physical block mapping, and keeps track of how many times each physical block has been written to. There's a small pool of "spare" blocks (not counted in the nominal capacity) which it swaps out when a heavily-used physical block is re-written.
When the device is brand new, all the blocks on the device are unwritten and thus in the "spare" pool - so if you re-write one small file 1000 times, those writes will go to 1000 physical blocks instead of hammering one block with 1000 writes. If you do a test that involves writing to the entire device and then don't trim it, the pool of spare blocks is now depleted to just the small number of over-provisioned blocks, so a bad write pattern can now wear out the device much faster.
[* use dd to write a disk image to the card, then mount it without the noatime flag. It won't last long!]
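The logical-to-physical remapping described above can be sketched as a toy model (entirely hypothetical names and structure - a simplification of the idea, not any real controller's firmware):

```python
class ToyFlash:
    """Toy wear-levelling model: every write to a logical block is
    redirected to the least-worn physical block in the spare pool,
    and the previously-mapped block is retired back into the pool."""
    def __init__(self, logical_blocks, spare_blocks):
        total = logical_blocks + spare_blocks
        self.wear = [0] * total            # write count per physical block
        self.mapping = {}                  # logical -> physical
        self.spares = list(range(total))   # brand new: everything is spare

    def write(self, logical):
        new = min(self.spares, key=lambda b: self.wear[b])
        self.spares.remove(new)
        old = self.mapping.get(logical)
        if old is not None:
            self.spares.append(old)        # old block rejoins the pool
        self.mapping[logical] = new
        self.wear[new] += 1

# Fresh device: hammer one logical block 1000 times - the wear is
# spread across the whole (mostly unwritten) device.
flash = ToyFlash(logical_blocks=1000, spare_blocks=16)
for _ in range(1000):
    flash.write(0)
print(max(flash.wear))   # 1: no physical block written more than once

# Now fill the whole device first (no trim): only the small
# over-provisioned pool is left to absorb the same 1000 writes.
flash2 = ToyFlash(logical_blocks=1000, spare_blocks=16)
for lb in range(1000):
    flash2.write(lb)
for _ in range(1000):
    flash2.write(0)
print(max(flash2.wear))  # ~60: the same workload now hammers ~17 blocks
```

Same write pattern, wildly different wear - which is exactly why a full-device write test followed by no trim is so damaging.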
"Despite Universal's false narrative and rhetoric, the fact is they have chosen to walk away from the powerful support of a platform with well over a billion users that serves as a free promotional and discovery vehicle for their talent," *
Does this remind anyone else of pubs and bars expecting bands to play for free in return for "exposure"?
It's hard to root for either side here, but anything which serves to reduce TikTok's appeal is probably a net gain for humanity...
[* a continuation of TikTok's quote, omitted from TFA, but which I read elsewhere.]
Re: All the more reason to avoid self-checkouts
Yup, my local Morrisons has done the same, with the result that I now use the store significantly less than before. The staff tell me I'm far from the only one.
When the terminals were first installed they paid a member of staff to wander up and down the queues for the real checkouts, attempting to harass shoppers into using the new self-service terminals.
On one visit I noticed a shopper in the next queue getting visibly agitated until she was able to place her shopping on the conveyor belt, at which point she heaved a visible sigh of relief and said "Now they can't come and get me."
I miss the days when the business / consumer relationship was mutually beneficial, not outright adversarial.
Re: The real lesson...
> 3a. What sort of lousy assembler lets you accidentally begin assembling an instruction on an illegal address?
Actually neither the 68000 nor 68020 can execute instructions from an odd address - so I suspect what actually happened in the story was that an instruction's word- or longword-sized *operand* was on an odd address - which would cause a bus error on the 68000 but be OK on the 68020. (Probably using move.l to copy the credit string as quickly as possible!)
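The distinction is easy to see in the shape of an emulator's bus interface (a sketch with made-up names, not taken from any real emulator):

```python
class BusError(Exception):
    pass

def read_word(memory, addr, cpu="68000"):
    """16-bit data access. The 68000 raises an address/bus error on an
    odd address; the 68020 quietly performs the unaligned access
    (just more slowly, as two bus cycles)."""
    if addr % 2 != 0 and cpu == "68000":
        raise BusError(f"word access at odd address {addr:#x}")
    return (memory[addr] << 8) | memory[addr + 1]   # big-endian

mem = bytes(range(16))
print(hex(read_word(mem, 2)))               # 0x203 - aligned, fine on both
print(hex(read_word(mem, 3, cpu="68020")))  # 0x304 - odd address, OK on '020
# read_word(mem, 3)  # would raise BusError on a 68000
```

Instruction fetches are word accesses too, which is why *neither* CPU can execute from an odd address - the '020's leniency applies only to data operands.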
While I feel an ideological revulsion towards this, I do have to concede that it's not really obscure as such: it's just symptomatic of having been designed by a member of the "phone first" generation - people to whom the idea of an internet-enabled phone not being available at all times is unthinkable.
Pointing a phone camera at the screen is arguably more convenient than typing some error message or code into google (especially if you only have a crappy on-screen keyboard available because your systemd infested computer won't start).
> No, MS wanted it -- I don't know why or what for
MS's behaviour and attitude towards Linux in recent years has been interesting - on the one hand they've embraced it with WSL and VMs and Azure - and on the other hand they've continued their long-term efforts to lock it out of consumer hardware (Pluton, signing key shenanigans, etc.). It makes me wonder if they have designs on Windows becoming an underpinning BIOS-like layer which Linux runs on top of - in which case controlling the RTOS used for IME makes perfect sense. If that RTOS is no longer used by IME and they have no further use for it, then they might as well toss it into the copilot code grinder.
Re: Horses *did* protest
> boil down to exactly the same thing.
They kind of do, but carry very different connotations: The latter implies that those people's basic needs will still be met in some way, shape or form. You could replace "doing more interesting things" with "gently decomposing in preparation for a future of pushing up daisies" and still be consistent with the former.
> Surely the chip will largely generate the same quantity of heat, for the same task, regardless of clock speed?
For the same task, yes - but if it can complete the task sooner and start working on another task in the same time frame, then the total heat output during that time frame will be higher.
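A back-of-envelope version of the point (numbers purely illustrative - and note this ignores the fact that higher clocks usually need higher voltage, which raises the energy per task as well):

```python
# Energy per task is (roughly) fixed, so a faster clock raises power,
# not energy-per-task. But the faster chip finishes early and can take
# on a second task within the same window, so total heat dissipated in
# that window goes up.
task_energy_j = 50.0   # joules to complete one task (illustrative)
window_s = 10.0        # observation window, seconds

slow_tasks_done = 1    # slow clock: one task fills the whole window
fast_tasks_done = 2    # fast clock: done in 5 s, then runs a second task

slow_heat_j = slow_tasks_done * task_energy_j   # 50 J over 10 s -> 5 W avg
fast_heat_j = fast_tasks_done * task_energy_j   # 100 J over 10 s -> 10 W avg
print(slow_heat_j, fast_heat_j)
```

Same joules per task either way - the difference is purely how many tasks fit into the time frame.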
Re: The sheer f**king arrogance is breathtaking
Unfortunately GNOME doesn't exist in a vacuum, so the decisions made there do affect the rest of us, which is why people get disproportionately frustrated. Hamburger menus and client side decoration are now infesting previously sane distros. (Again, hamburger menus have their place, but I'd argue that a text editor is not such a place; client side decoration isn't a huge deal except that window appearance and behaviour is now inconsistent between old-style and new-style windows.)
I was reading something on the GIMP website recently about the work they put in to port the app to GTK3 (yes, 3) - it's somewhat depressing how much time and effort they had to expend creating their own replacements for now-missing functionality.
Re: "the sudden imposition of subscription fees"
My provider annoyed me last week by sending an email titled "An essential smart meter install", inviting me to book an appointment - very carefully worded to imply that it's compulsory without actually saying so.
This was followed a few days later by a phone call, which terminated abruptly when I politely said "no thank you".
Re: A desktop...
I read the article with the word "Ewwww!" on my lips for much of it - but I have to admit to laughing out loud when I got to the bit about the extension connector not working with sandboxed browsers. Hard to believe this exists in the same universe as web browsers that can access a USB port.
I know I'm getting old, but it feels to me that 15 years ago we were *so* close to getting desktop Linux *right*, and it's so sad to see how much has succumbed to design fads, how much no longer works as well as it used to, and how much has even fallen into disrepair.
(To be fair, I'm arguably part of the problem - I wrote some software back in the GTK2 days, and I have absolutely no intention of rewriting it to work with a newer GTK.)
Re: What goes around...
Why am I reminded of Douglas Adams's description of labour-saving devices? "Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself. Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe."
I've always described it as feeling like you have the computer's full attention.
It's not just about raw execution speed, either - there's a lot to be said for keeping the UI's footprint as small as possible. The moment Linux gets short enough of RAM that code required for interactivity gets swapped out, you might as well be trying to get a response from a sullen teenager whose nose is buried in its phone for the eighth consecutive hour.
Re: That screenshot - it's hideous
> Me, I'd say, if they are going to embrace bling, lean in and make it look amazing, like something from a Hollywood scifi film. All flashing lights, animations, transparency, 3D surfaces that spin and flip like Sun's Looking Glass desktop, and so on.
We had some of that years ago with Compiz - some of its features (key bindings for different zoom levels, and the freehand drawing layer) are really nice for screencasting. Sadly that appears to be yet another project that's now on life support.
Re: That screenshot - it's hideous
> I guess my visual senses may be different to others - that desktop screenshot is so unbelievably ugly you may as well just use LXqt and have a very lightweight desktop that's also ugly.
I can't imagine why they went with that as a default theme when literally all the others are better looking: https://www.bodhilinux.com/softwaregroup/themes/
Some of those are really nice.
(But arguably less nice than the ones available a decade ago: https://web.archive.org/web/20140331075316/http://art.bodhilinux.com/doku.php?id=bodhi_e17_themes_v3 )
Ironic that Enlightenment used to be thought of as the heavyweight option that needed a powerful machine!
I remember playing with Bodhi a few years ago, before the Moksha fork. The Japan theme back then was absolutely gorgeous. I'm pleased to note that its Moksha equivalent still looks pretty good.
Just a shame the default theme has that <chyrosran22>Hhhhhhhhiiiideeeeeeous</chyrosran22> flat green icon set.
Re: Sounds like this cloud thing was programmed as if it was a local server
> If you didn't want to use it, no harm and no foul: no internet connection, no job scheduling screw up.
...and no printing directly from the PC. You'd think in 2023 there'd be some middle ground between cloud-based-middleman* and sneakernet.
I was about to construct a snarky comment about how maybe someone should invent a standardised cable for connecting peripherals to computers - something universal - and low pin-count, so perhaps some kind of serial bus...
Then it dawned on me: for a significant number of today's developers, the head-in-the-cloud model is the only model they know. It's arguably easier to cobble together an IoT device and associated service on a remote server than it is to create a USB printer class device, write a driver and supporting software for it, and then get those approved and driver-signed. It's also easier to go the cloud route than to deal with the support issues that go with getting people's computers to talk to the device directly on their own LAN. So perhaps my cynical footnote is just a bonus, not the main driving force after all...
[* AKA tollbooth-in-potentia]
Interesting that the first paragraph of the blog post accidentally touches on the real problem with traditional window management as it's implemented today:
"In this metaphor, each app can spawn one or more rectangular windows, *which are stacked by most recently used*..."
That, right there, is your problem - the assumption that any interaction with a window must cause it to pop to the front. That hasn't always been the case - and for as long as I can remember even Windows has had a workaround for the problems it causes for drag-and-drop. (Try it: open a filer window in Windows and in Linux, open a smaller window on top, and try to drag a file from the large window into the small one. On Linux the large window will immediately pop to the front, obscuring the drop target. On Windows the pop-to-front is deferred because the click initiated a drag-and-drop.)
There were heated arguments about how to solve this in the Gnome 2 days, but as usual more heat than light was generated.
My own preferred way of working when developing is to have a text editor running full screen, and a terminal window on top of the lower-right ninth of it - with raise-on-click disabled so it doesn't vanish every time I click in the text editor. If I want to see the full editor momentarily I can just click its titlebar, or middle-click the terminal window's titlebar (unless it's a CSD window - rant for another day!)
That's apparently such a radical way of working that of the main desktop environments I think only XFCE makes it available as an option to the user.
(MATE's window manager can do it too, but you have to find the setting using dconf-editor - where it's accompanied by a rather antagonistic and no-longer-entirely-accurate message left over from Metacity. Reading that message for the first time back in 2008 or so was the moment I first realised that my ability to use the computer the way I preferred was going to be limited not by technical matters but by politics.)
All that aside, there are some interesting ideas in the blog post and I look forward to seeing what comes of them.
The message in question is this one: https://github.com/mate-desktop/marco/blob/master/marco.pot#L431
Re: My takeaway from this article...
In my nearest city there are buildings where you can explore cellars which have been excavated to reveal the remains of a whole previous generation of buildings. The current city was literally built on top of them. Modern software reminds me a lot of this; application code built upon frameworks which use APIs delivered by libraries which are built upon now-long-forgotten frameworks which call dusty old APIs that no-one has called directly in a decade or more...
As for how to cut back the bloat; I think it's too late for Windows and Linux. I'm keeping a weather-eye on the likes of Haiku, SerenityOS, AROS and various other projects - but unfortunately none of them is likely ever to gain enough critical mass or momentum to make serious inroads.
I keep hearing about fractional scaling and suchlike, but what I really want to know is this:
On a technical level, is there anything stopping a window manager or compositor from freely scaling windows independently of each other? I've wanted this for a long time - and I think it was kind-of-possible using Compiz (something else that's on life support, if even that).
Basically, what I want is this:
When, say, a calculator app (i.e. something non-resizable) is using an unreasonably large amount of screen space, I want to hit a key combination (or hold down a key and roll the mouse wheel) and have just that one app shrunk by 10%. Shrunk, not resized - as far as the app's concerned nothing's changed - instead the window manager / compositor is just presenting a shrunken view of the window, and transforming input events accordingly.
Likewise, found an app with unreadably tiny text? Hit a key combination to resize the application to, say, 75% of the screen size, then scale the smaller window up so it fills the screen. (On high-DPI displays - especially with some suitable filtering - I'd expect the result to look reasonable, even if not as razor-sharp as it'd be without scaling.)
Is this possible today, or is anyone working towards making this possible?
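The input-event half of what I'm describing is at least conceptually simple: if the compositor holds a per-window scale factor, it just has to map pointer events back into the coordinate space the client believes it has before delivering them. A sketch (hypothetical function names - this is the idea, not any real compositor's API):

```python
def pointer_to_window(event_x, event_y, win_x, win_y, scale):
    """Map a screen-space pointer event into client coordinates, given
    the compositor is presenting the window at (win_x, win_y) scaled by
    `scale`. As far as the app's concerned, nothing has changed."""
    return ((event_x - win_x) / scale, (event_y - win_y) / scale)

# A calculator with a 400x600 buffer, presented shrunk by 10%
# (scale 0.9) at screen position (100, 50): a click at screen
# coordinates (280, 320) lands at client coordinates (200, 300).
cx, cy = pointer_to_window(280, 320, 100, 50, 0.9)
print(round(cx), round(cy))   # 200 300
```

Scaling up for unreadably tiny text is the same transform with scale > 1; the harder part in practice is compositing the window texture with decent filtering, not the event maths.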
Here's the thing about CSD: I don't mind it too much where it's appropriate. It's arguably appropriate for the calculator app. It's most definitely not appropriate for a text editor.
If CSD windows could regain the missing functionality (like send-to-back on middle-click) to be more consistent with SSD windows then I'd be more inclined to tolerate them. (Note, however, that removing capabilities from proper windows is not an acceptable way of achieving consistency!)
I'll definitely give this a spin, though - I'll be curious to see whether there's any change to the breakage the previous release (and the upstream Ubuntu) caused to Quartus (13 was totally broken; newer versions had major USB-related problems).
Re: The dice-and-slice photocopier
While it's easy to fall into the trap of assuming US-style fair use rules apply everywhere, they are at least a reasonably well thought out set of rules-of-thumb for reasoning about such scenarios.
In your artwork analogy the most interesting points to argue would be to what degree is the new work transformative, and to what degree it usurps the need for the original work. Artwork generally has an easier time clearing those two hurdles than something which aims to exist in the same space as the source material. (Furthermore, artwork which remains in the same space as the source material - and your examples seem carefully constructed so that this is the case - will have a harder time clearing those two hurdles.)
I can see some value in shipping large, monolithic and especially commercial software in Snap or Flatpak format, but I think of it as something you hold your nose and tolerate, an ugly compromise that should be a painful reminder that the compatibility problem turned out to be too hard to solve - it certainly shouldn't be a long-term ideal for software distribution.
Snap in particular offends my sense of elegance and minimalism, with its littering of loopback mounts, and inability-by-default to access files in the home directory. (Yes, I know there's some magic incantation to make that possible; whatever it is failed to work for me when I tried to use OpenSCAD recently - I eventually gave up and used the AppImage instead.)
I can only assume the Hamburger Menu thing is a product of very young engineers entering the workplace having spent their formative years using phones instead of computers. What I wish they understood was that the Hamburger Menu is not in fact the pinnacle of user interface design, but a lousy compromise introduced because phone manufacturers were too cheap to put actual menu buttons on their phones like they did until about a decade ago.
To be fair it works reasonably well for some types of software - but I cringe when I see it crowbarred inappropriately into software like GEdit or GtkWave.
All I really want is for windows to open on the screen I'm currently looking at. The computer can make a reasonable guess at which one I'm looking at based on where the mouse is, and I'm quite happy to have launchers and suchlike on both screens, since that reduces mouse movement anyway.
(My preferred way of working, which no longer seems to be possible in modern distros, is actually pre-Xinerama multi-monitor X, where the screens have independent sessions running on them. You can't drag windows between them, which I guess is why it fell out of fashion - but in every other respect the behaviour's more logical and predictable. If a screen's switched off or displaying another source you simply won't interact with it, so nothing will appear there.)
I have four basic tests for any desktop environment:
* If I have two monitors but one of them is turned off, displaying another source, or is otherwise unavailable, can I be sure that a newly-opened window won't appear on the unavailable monitor?
* Can I set a movie playing on one monitor and then switch virtual desktops on the other monitor without the movie vanishing?
* Does middle-clicking a window title bar push the window to the back?
* Can I disable auto-raise-on-click?
My current favoured desktop is Mate (Mint 20.3) but even that fails the first two tests.
Re: A wee bit of context, for your consideration---
> And, for reference, most people consider Linux Mint 13, "Maya", to be "...one of the tightest, slickest Linux distros of all times"
I'll second that - when a forced Firefox upgrade left me with little choice but to upgrade from Mint 13 (Mate) a couple of years ago I went to Mint 20 and was shocked by just how many things didn't work properly any more.
As for CSD - what I object to most is that I can't send the window to the back by middle-clicking the titlebar.
The sad truth is that the generation raised on smartphones now considers the mouse to be an archaic input device, so mouse-oriented interfaces will take a back seat to touch-oriented interfaces even when the latter makes no sense (looking at you GEdit and GtkWave!).