Re: Glad I'm using KDE, and keep your hands off Firefox!
>On KDE, one can also easily assign shortcuts (such as Meta+Z / Meta+Shift+Z) to select the next / previous item in the clipboard history.
No kidding. That might be worth trying out!
3387 publicly visible posts • joined 12 Jun 2009
"But this so-called "developer" probably hasn't heard of xmodmap."
*x*modmap? Oh, you know Wayland will have none of that. (But seriously, I agree with you -- if one really does want that middle mouse button to be a useless lump instead of a fast and convenient way to cut and paste, then configure it that way yourself.)
Title says it -- this change to GNOME makes me glad I use KDE. And keep your hands off Firefox! Who the crap cares if select-and-middle-click is "discoverable"? It's great! Like, why do I usually select text? To copy it. What do you do with text you've selected? Paste it. So why should I have to pick "copy" off one menu, then "paste" off a second menu -- so clunky! But that route is still there if you want it. Also, what's the point -- are they planning to use middle-click for something else? No? So why just have a dead mouse button?
Also, not that I've had reason to very often, but a handful of times I've had cases where it was convenient to have two different things to paste: middle-click to paste one, and the clunky menu paste (or Ctrl-V) to paste the other.
Actually no, the ILS (Instrument Landing System) uses radio navigation beacons (RNAVs) as waypoints, plus additional radio hardware at an ILS-equipped runway. There are now "GPS RNAVs" that are just a GPS coordinate on the map, but generally RNAVs are hundreds of thousands of dollars' worth of physical hardware.
I think that's what happened: it was prudent to let it run. They would already have been incapacitated very quickly if their oxygen masks hadn't worked, and possibly were for a minute or two even after they put the masks on (depending on how fast the plane depressurized). I imagine that by ground level, if the thing had started acting squirrely, they would have taken back over.
I wonder how much of a slowdown you'd get from a system compiled with -Os? Apparently it can save 25-50% on executable size, and I do wonder whether memory allocations end up with plenty of padding and 'bubbles' when programs make many small allocations rather than a few large ones.
Having used C, C++, and Rust (I prefer Python myself, but even there you end up with a fair number of add-ons that wrap a C implementation), you can take simple C code, replace the #include lines with the mod/use equivalents, and it's one-for-one the same. Rust is really like C with "maximum paranoid" compiler flags (that error out on anything questionable) plus a paranoid lint that runs at compile time, with restrictions on pointer usage and other things that lead to memory errors.
If I were converting C code to Rust, I would first write a tool that handles simple cases and flags code it can't convert automatically. Then handle more of those cases; find the ones it still can't handle and handle more. There will be some cases where the code is doing something questionable and a human will have to intervene and fix it up (or you can give an AI a go and see if it helps). If you want to use AI to iterate on developing that tool, so be it -- but you then end up with an actual toolchain tool that converts one codebase to another predictably, versus an AI you might give the same task twice and get different results.
Doing a full code review would be better, but by converting the 'simple' code with a tool and then manually reviewing the rest, I think you could get through many lines of code without just having an AI vibe-code it for you (with the inevitable unpredictable results).
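A toy sketch of that first pass, in Python (the translation rules here are made-up minimal examples, nowhere near a real transpiler): mechanically convert the lines you have rules for, and flag everything else for a human.

```python
import re

# Hypothetical rules: each pattern that matches a stripped C line gets
# rewritten into a Rust-ish equivalent; everything else is flagged.
RULES = [
    # '#include <...>' has no one-for-one Rust line; note it and move on.
    (re.compile(r'#include\s*[<"].*[>"]'),
     lambda m: "// (include handled by Rust use/prelude)"),
    # 'int x = 5;'  ->  'let mut x: i32 = 5;'
    (re.compile(r'^int\s+(\w+)\s*=\s*(-?\d+)\s*;$'),
     lambda m: f"let mut {m.group(1)}: i32 = {m.group(2)};"),
]

def convert_line(line: str) -> tuple[str, bool]:
    """Return (output_line, handled). Unhandled lines get a TODO marker."""
    stripped = line.strip()
    if not stripped or stripped in "{}":
        return line, True
    for pattern, emit in RULES:
        m = pattern.match(stripped)
        if m:
            return emit(m), True
    return f"// TODO(review): {stripped}", False

def convert(source: str) -> tuple[str, int]:
    """Convert a whole source string; return (output, count_of_flagged_lines)."""
    out, flagged = [], 0
    for line in source.splitlines():
        converted, handled = convert_line(line)
        out.append(converted)
        flagged += 0 if handled else 1
    return "\n".join(out), flagged
```

Each iteration of the tool grows `RULES`; whatever stays flagged is the part that genuinely needs a human (or supervised AI) pass.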
Given they did not know how to code, I imagine that even with turbo mode off they still would not have caught the error and would have still wiped the D drive. I mean, if it printed "DEL /s D:\*.*" it'd be pretty obvious that's not what you want. But if you've got some Python code (for instance) and you aren't a programmer, it'd be very easy to miss that it didn't change the working directory (or that the working directory had a typo, so it *intended* to change directories but didn't), something like that.
If someone is having an assistant write some functions or code fragments and looking them over before integrating them into a program... have at it. But if one doesn't know anything about programming, I would seriously recommend setting up a test environment to run it in. I.e., run a VM, copy some pics in -- the program would probably still have deleted the wrong files, but then one can just roll back to a pristine snapshot.
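Even without a VM, the typo'd-directory failure mode above can be defended against. A minimal sketch in Python (the function name, suffix, and directory handling are mine, not from any real script): refuse to delete anything when the target path isn't what was expected, instead of silently falling through to whatever the working directory happens to be.

```python
import os

def safe_cleanup(target_dir: str, suffix: str) -> list[str]:
    """Delete files ending in `suffix` inside target_dir -- but refuse to
    touch anything if the directory doesn't exist (e.g. a typo'd path),
    rather than quietly operating on the current directory."""
    target = os.path.realpath(target_dir)
    if not os.path.isdir(target):
        raise SystemExit(f"refusing to run: {target!r} is not a directory")
    removed = []
    for name in sorted(os.listdir(target)):
        if name.endswith(suffix):
            os.remove(os.path.join(target, name))
            removed.append(name)
    return removed
```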
I'll just say... I've toyed with having an LLM write code. It was passable but not outstanding, and generally needed a little work (which I fixed myself rather than trying to iteratively prompt it into fixing things). But vibe coding (where someone who knows nothing at all about coding just 'vibes along' and lets the LLM write everything)? Total madness. The quality of the code produced is just too hit-and-miss, and all too often non-programmers won't be precise enough in describing what they want, leaving the LLM free to do something unexpected even if it strictly follows the parameters in its prompt.
Are Fabrikam etc. really retired though? Maybe Zava will be used JUST for AI nonsense, so when the AI bubble pops (not that there'll be zero AI, just less than the AI vendors are hoping for), Fabrikam and Contoso can carry on in examples of how to do normal stuff while Zava chases the next fad.
Me too! Just as you say, the 'first draft' of this code worked; I asked for some tweaks, and it went 'well, I really should use this tool instead' and wrote up code using a different tool that (per the docs) apparently would never work (the new tool it switched to simply won't do what I want).
Spot on, really. Heavy RAM requirements? Check. Heavy storage requirements? Check. High-speed interconnects? Check. And I'll note the newer supercomputers have (almost if not entirely) moved toward having GPUs available for compute. One can argue semantics, but these AI clusters have a great deal in common with traditional HPC builds.
Perhaps once the AI bubble bursts (I don't think AI will become irrelevant or anything, but AI in your fridge etc.? Really...) some of these will be repurposed for high-speed compute.
'I would have thought a cross build environment would be possible - at least under Darwin but only ever compiled Unix text/terminal code on a Mac so I am only guessing.'
You'd think so, wouldn't you? Frankly, I would guess distcc or something could do exactly this, but other than OCLP (OpenCore Legacy Patcher) and associated projects, I just haven't seen macOS users pushing the envelope; I think the kind who would just use Gentoo or Arch etc. instead and simply steer clear of Macs. I would love to develop without a macOS VM (I'm not about to buy the physical hardware) but found it not even to be something people pursue. I mean, Visual Studio Code has a thing for it, but it literally just connects to Xcode over some remote procedure call or socket or something; it doesn't move any work off the Mac.
'I run my iMac with several different accounts, does anyone know if you can use multi GUI users on the same remote machine?'
Yes, but their absurd licensing makes it so EACH user must pay for 24 consecutive hours at a time. (You may still see some cloud providers offering instances of much older macOS versions; this is partly because the terms that put the screws on virtualized users kicked in about 5 years ago, probably in response to this very type of use, where people might otherwise have only had to pay for a few minutes of use at a time.)
'What is a no-brainer is buying a properly configured desktop case. Hell, even RAM doesn't cost all that much these days.'
a) These are Macs. RAM is quite costly, and non-expandable, so you can't just buy aftermarket RAM.
b) Have you priced RAM recently? The spot price has hit about as high as it was back in 2010. Pricing is downright dystopian.
(This doesn't negate your point though.)
Yeah, that's how they do it here in the States too. You used to get a phone 'included' with the contract on postpaid phone service; as companies moved away from this (a stealth price increase -- same cost of service but no 'included' phone), a few companies THOUGHT they could just make you pay off the phone yet still penalize you for cancelling phone service, but the FCC did tell them "No, you can't have it both ways." Of course, Nutjob Trump's FCC will probably try to let companies do as they wish, but who knows.
I was at a store that sold both Verizon and AT&T devices. It was rather shady (I mean, the terms were clear, but most people don't read the fine print): AT&T had all these phones listed for like $1 a month less than Verizon -- but the Verizon phones were on a 24-month payoff while the AT&T ones were 36, so in reality you were paying like $200-400 more for the AT&T phones, not $24 less.
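To make the sums concrete (the monthly figures here are assumed for illustration, not from the actual store listings):

```python
# Assumed pricing: a phone at $25/mo on Verizon's 24-month payoff vs the
# "cheaper" $24/mo on AT&T's 36-month payoff.
verizon_total = 25 * 24   # $600 over 24 months
att_total = 24 * 36       # $864 over 36 months
extra = att_total - verizon_total
print(extra)  # 264 -- the "$1/month cheaper" phone costs $264 more overall
```

Scale the monthly price up toward flagship territory and that gap grows toward the $400 end.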
(Side note -- one reason Verizon did the "New Every Two" deal on phones back in the day: it turned out CDMA and EV-DO technology was improving so much between one generation of Qualcomm chips and the next -- increasing call and data capacity using the same amount of wireless spectrum and hardware on Verizon's end -- that they ran the numbers and found they were actually saving money by replacing customer equipment and getting free capacity increases, compared with having to install more cell sites, put in more equipment (like going from a monopole or 4-way sectorization to 6- or 8-way sectorized antennas), and so on at the phone company end of things.)
It's madness. Violating the Hatch Act, these jokers even wanted the airports to play a message saying something like "Due to the Democrats, wait times will be longer because of the gov't shutdown," or words to that effect. Some airports pointed out it violated the Hatch Act and refused to play it; some pointed out they use their screens for informational messages, not political ones. Some pointed out the Dept. of Homeland Security did not own the screens in their airports and had no say over what played on them.
As for the shutdown -- I'm an independent, and it's really both main parties' fault; there's nothing at all stopping them from funding the non-controversial stuff and sorting out the rest later. The US has only 2 main political parties, and most in the US pretend the 3rd parties simply don't exist, so any time there's a shutdown, each party blames the other. You usually don't have one party making inappropriate political messages as they are now, though.
Linux would do this too, but the overlay color was a specific shade of blue instead. If you had sub-pixel font rendering going, you'd sometimes get individual pixels of text with video bleeding through too (...if your text window was covering the video playback area). If you force playback through xv/XVideo it may STILL do this.
The incompetence of it.
Even if you're all for having ICE thugs around the country (masked, anonymous, picking people up off the street and taking them to detention centers, and when friends or relatives ask for their whereabouts, saying they aren't allowed to tell them... I'm not for this, in case you couldn't tell), even if you were all for it, I'm just picturing Dilbert, the guys from Office Space, and the IT Crowd being handed vests, tasers, and guns and told to have at it. I'm just saying: besides it not making sense to shrink CISA as they are, I *seriously* doubt a bunch of computer nerds and IT types are going to make particularly good ICE officers.
Or, CISA has a few BOFH types, and ICE ends up with a couple of BOFHs on their hands.
I'm a big ARM fan, and don't care too much about Intel CPUs (although my current computers have one). I ran an Nvidia Tegra K1 ARM Chromebook on Ubuntu with no issue, with full OpenGL and CUDA support as well; and with box86/box64 and FEX out now, even x86 on ARM is fine.
But let's face it, integrating an Nvidia GPU into an Intel chip would be sick! Maybe Nvidia can help put the CPU on a power diet too while they're at it.
I am not about to do vibe coding (the "let's just let the AI write the code and trust it's right" approach), but if I were having one spiff up a bit of code (I'd make a diff and see what it actually changed), I'd run it locally. Probably slower, but zero cost. (Electricity costs something, but my GPU and CPU flat out draw like 135 watts altogether, so it won't be all that high.)
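Back-of-envelope on that electricity claim (the session length and tariff here are assumptions of mine, not measured figures):

```python
# Rough energy cost for one local inference session.
# Assumptions: 135 W total draw, a 2-hour session, $0.15 per kWh.
watts = 135
hours = 2
rate_per_kwh = 0.15
cost = (watts / 1000) * hours * rate_per_kwh   # kWh used * price
print(f"${cost:.2f}")  # about four cents for the whole session
```

Even at double the tariff and draw, it stays well under what metered cloud usage would run.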
If I had an elderly Mac, I'd try OCLP first; it's pretty good. If you find those options unsatisfactory, you can also install the Linux of your choice on the Intel models. Just like older PCs, a regular 'full fat' distro runs pretty well on anything newer than the Core 2 series, and runs OK on them, as long as you have 4GB in there. With 2GB or less you may want a lightweight distro.
I'll note the HLT instruction actually ONLY halted the CPU until an interrupt came in (no power-saving mode) on anything before the 486DX4 (marketing at its finest -- this was a clock-tripled CPU, not quadrupled as the DX4 name would imply). Not a common chip, so basically the Pentium. So on many Win95 machines this wouldn't even have saved power.
Also, shocking given modern CPUs, but that 486 used like 3 watts and the Pentium 90 had a 7.5-watt TDP, so it's not like now, where a 'lower-powered' Intel CPU (other than the Celeron N) would use like 30 watts without power saving.
Oddly, for me, it seems like the *other* updates downloaded okay and it was ONLY the linux-firmware package that was either failing or downloading dead slow. (Which still resulted in a failed update unless you wanted to manually run apt --ignore-missing, which I didn't bother doing.)
I do have to wonder: what backlog? I mean, the update system typically goes to download updates on some schedule... not retry minutes later if it fails. Do that many people just constantly see failed updates and IMMEDIATELY retry, such that a 36-minute outage causes problems for days? To be honest, I could see snap doing something like that, but snap doesn't use security.ubuntu.com. I feel like something else was probably going on... but *shrug*, I'm also not too concerned about it as long as it doesn't start happening on some regular basis.
I run the models I've played with locally. Slow (I have a 4GB VRAM card, so all too often I must run on the CPU) but effective, and it's not THAT slow. No usage limits, no cost per use; and of course, as soon as I would have hit usage limits, throttling, or waiting because the provider's demand exceeded capacity, at that point my local model is also faster.
So I can see why some company may not like ad blockers.
But copyright infringement? What a nonsense argument.
* The web site serves the content up to anyone who requests it, and at that point the web server is the one making the copy.
* The ad blocker apparently modifies the DOM. It's not firing off a copy of that DOM to anyone; no copy is distributed, so no copyright infringement.
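To illustrate the second point, here's a toy sketch in Python (standing in for what a blocker does to the browser's in-memory DOM; the `ad` class name and the page snippet are made up): the page arrives exactly as served, and elements are merely dropped from the local copy before display. Nothing gets redistributed.

```python
from html.parser import HTMLParser

class AdStripper(HTMLParser):
    """Rebuild served HTML while skipping any element whose class list
    contains 'ad' -- a crude stand-in for a blocker editing the DOM.
    (Toy code: kept tags lose their attributes; void tags are ignored.)"""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.skip_depth = 0  # >0 while inside a blocked element

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.skip_depth or "ad" in classes:
            self.skip_depth += 1      # blocked, or nested inside a blocked one
        else:
            self.out.append(f"<{tag}>")

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1
        else:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

page = '<p>article text</p><div class="ad">BUY NOW</div><p>more text</p>'
stripper = AdStripper()
stripper.feed(page)      # the full page, ads included, was already received
print("".join(stripper.out))
```

The server delivered the whole thing; the "copy" lives and dies on the user's own machine.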
Now really. I've used C, I've used Rust, and essentially Rust is like C with all the warnings turned on, -Werror (treat warnings as errors) turned on, and lint running at compile time, throwing errors if it finds anything it doesn't like.
Don't flame me too much, I'm pro Rust and I know there's a lot more to it than that.
I'm just saying you could (in theory) run low on C programmers because they prefer Python (I prefer it myself, although I am aware it's inappropriate for an OS...), but I don't think you'll find a lack of C programmers because of Rust.
You're right regarding the advantages of a microkernel. But the big disadvantage is speed. All that jumping back and forth between user and kernel mode, and the message passing, tends to be ruinous for speed.
I'll just note here that Linus and Andrew Tanenbaum had a heavy discussion (some dubbed it a flame war, but both participants agreed it was a friendly technical discussion and not personal) over this very issue back in 1992; there's a Wikipedia article about it, and the thread was public, so I'm sure it's still available. Essentially, Linus liked the microkernel design from an aesthetic viewpoint but preferred monolithic from a practical standpoint (and Tanenbaum's Minix is a microkernel; he thought that by 1992, making a monolithic kernel made it obsolete from the get-go).
Why wouldn't they keep Trello and modify it under a new name for personal time management or whatever they think they are going to pivot it to? They might think they'll just move existing customers to Jira, but they could retain them on Trello AND have 'Trello Personal' or whatever for what they apparently intend to turn Trello into, instead of pissing off their existing customer base.
I don't like throwing in AI features without a care either but...
* Language translation works well and now runs locally. Pick your poison: use the AI, or send it to Google Translate, which is what they did before. I prefer local translation.
* Alt-text description of images. If someone didn't follow the web accessibility requirements and provide alt text, this should be genuinely useful for visually impaired users. Runs locally.
* 'AI-enhanced tab groups' sound dumb.
* AI summary of a web link. I don't trust this to work right, but you have to hit some combo of keys to run it anyway.
* A chat bot in a sidebar. This runs remotely. Dumb, but it's not there unless you turn it on, and there are those who LOVE their chat bots.
In other words, I'm no fan of just throwing AI in, but I do think several of these features are genuinely useful, and most run locally, so they're privacy-respecting, and none are 'in your face.'
Ahh yes, sovereign citizens. If you ever want some light court-related entertainment, watch when these guys end up in court. They self-issue driver's licenses, go on about how the court has no authority over them under maritime law (even 1000 miles from any coastline -- it's not like they got caught on the beach and then tried to argue this), or how the flag is the wrong kind of flag so the court isn't valid. They bring up the Magna Carta, even here in the US where the Magna Carta was never a legally active document. And of course they don't use a lawyer, because a lawyer wouldn't try to apply maritime law, 1800s-era common law (ignoring legal precedents that override it, and ignoring that the US generally doesn't use common law unless there's truly no law governing some action), etc.
One judge who keeps getting these is like, 'I *STRONGLY* advise you to get legal counsel, and we can provide it free of cost if you can't afford it. Do you want to hold off proceedings until you have legal counsel? I can't formally advise you, but you should answer yes.' 'No, I'll represent myself.' 'OK then...' and he proceeds to tear them a new one.