Why do this to yourself? I migrated from Microsoft to Ubuntu Linux even before Windows 8, then took the family with me when 8 came out. Now my household is ‘NIX only: UNIX in the form of macOS and Linux in the form of Ubuntu. On all three machines I run LibreOffice, GIMP, Blender, DaVinci Resolve, Office 2016 through WINE, Firefox and Brave browsers, the Thunderbird email client, even Vim and Zsh. With the exception of the Apple-specific apps that came with my MacBook, all three platforms are harmonized software-wise, and they all talk to each other over my home network. I even moved Ubuntu’s dock from the side to the bottom of the screen so that when anyone needs to use either the Mac or the Ubuntu computer it’s painless. All three devices have run for years without incident. What a joy it has been since leaving Microsux for good.
Posts by Jumbotron64
61 publicly visible posts • joined 20 Dec 2023
Microsoft is a national security threat, says ex-White House cyber policy director
This headline should not be the shocker. The shocker should be that he said this on a podcast, rather than making it official US policy to name and shame Microsoft in public and to announce that the Federal Government, alongside the Pentagon and all the alphabet agencies, would be looking at alternative OSes and open source software stacks unless and until Microsoft got their shit together.
October 2025 will be a support massacre for a bunch of Microsoft products
First off, mum’s dead. Secondly, she never had a basement, as the water table here is only about 30 ft down. Thirdly, I have a religion, and Linux is not it. Fourth, there are quite literally trillions of dollars sloshing about in corporations that, if they really gave a shit, could develop or outsource the development of an Exchange alternative. A multi-trillion-dollar open source foundation could be set up between them all for just such a purpose. The base OS already exists. Pour the filthy lucre into LibreOffice and an Exchange alternative and be done with it. But nobody minds paying rent to Microsoft, because they are all too busy collecting rent from their own enterprises (cough, Oracle, cough).
So to on-prem sites I would recommend this: start evaluating Linux, in particular Ubuntu 24.04 LTS when it comes out in a week or two, along with any and all open source stacks that could serve as alternatives. Then, when Microsoft pulls the rug out by 2025 and tries to force everyone to become renters, pull the plug on Microsoft by moving to Ubuntu 26.04 LTS when it comes out in 2026, after your two-year evaluation and transition. And if you find you still need to run now-unsupported Microsoft software, run it on Ubuntu with Wine or the more polished CrossOver. Then fill in any cracks with online/cloud services. It’s high time to fob off Microsoft.
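For anyone who hasn’t tried it, the Wine route is less scary than it sounds. A minimal sketch on Ubuntu (the installer name below is a placeholder, not a real download, and anything complicated may need extra winetricks fiddling):

    # Install Wine from the stock Ubuntu repositories
    sudo apt update
    sudo apt install --install-recommends wine

    # Run a Windows installer (placeholder path, not a real product)
    wine ~/Downloads/SomeWindowsApp-setup.exe

    # Launch the installed program from its Wine prefix afterwards
    wine "$HOME/.wine/drive_c/Program Files/SomeWindowsApp/app.exe"

CrossOver wraps the same idea in a GUI and a support contract.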
Tesla Cybertruck turns into world's most expensive brick after car wash
Anything by Musk is comedy gold. His solar company has gone to shite. Twitter has gone to shite. My night-time viewing has gone to shite thanks to his shiny happy wagon train of Starlink satellites in LEO. His belly-flopping spacecraft (no, seriously, that’s how it proposes to land on Mars) keeps going boom (sorry... rapid unscheduled disassembly). Tesla is going to shite. And now his idiotic and manifestly fugly Cybertruck, which is utterly a truck-shaped ripoff of the 1980s stainless steel DeLorean made iconic by the “Back to the Future” movie franchise. And even before this “Car Wash Mode” hilarity, go to YouTube and check out how a metal ball defeated the bulletproof glass at the Cybertruck’s rollout with Musk standing right there. Comedy gold! Finally... you know it’s shite when you put the word “Cyber” on it. What is this? 1980s William Gibson “Neuromancer”?
Google squashes AI teams together in push for fresh models
The whole procedure of unifying departments happens once there is a need to (A) reduce labor costs through “rightsizing” (ahem); (B) give the CEO, in this case Sundar, better visibility into what’s going on, so that when the Board of Directors and major shareholders (Wall Street, High Street, whatever Street) start asking why they aren’t attaining even greater largesse (profits), Sundar has a clearer, more concise answer (well, the possibility of one; never underestimate the ability of a CEO to utterly bullshit his or her way out of bad news on any given conference call); and (C) increase control so as to better execute the CEO’s plan. And the entire purpose of (C) is to service (B), while the quickest way to service (B) is to invoke (A).
Whaddaya know... not a spot of AI was needed for that observation. But I’m quite confident that I also just contributed to the AI apocalypse, as my words will be scraped by some AI company soon after I post my cheeky reply.
Boston Dynamics' humanoid Atlas is dead, long live the ... new commercial Atlas
Logitech intros free tool for ChatGPT prompts... plus a mouse with an AI button
Latest AMD Ryzen Pro chips are similar silicon, more smarts
GCC 15 dropping IA64 support is final nail in the coffin for Itanium architecture
Re: Take some credit
Apple can do it, and has done it three times (Motorola 68k to PowerPC, PowerPC to Intel, Intel to ARM/custom in-house), simply because they control both the OS and the hardware. Plus, for each transition/breakage, Apple had various compatibility schemes to soften the blow. Intel was never going to be able to do that.
Intel over the Moon as Lunar Lake’s NPU performance TOPS Meteor Lake
OpenAI CEO wants UAE into his plan for a global AI cabal
We've taken care of everything
The words you hear, the songs you sing
The pictures that give pleasure to your eyes
It's one for all and all for one
We work together, common sons
Never need to wonder how or why
We are the Priests of the Temples of Syrinx
Our great computers fill the hallowed halls
We are the Priests, of the Temples of Syrinx
All the gifts of life are held within our walls
Peter Higgs, daddy of the Higgs boson, dies at 94
PCIe 7.0 first official draft lands, doubling bandwidth yet again
Canonical cracks down on crypto cons following Snap Store scam spree
Re: Snaps just ruins the biggest advantage of Linux OS'es...
It’s not Snaps as a containerization framework that is the problem, seeing as it is head and shoulders ahead of Flatpak and the laughable AppImage in production readiness, manageability and security. The problem is Canonical not yet realizing that, in creating Snaps and the Snap Store, they have become the Google or Apple of the Linux world. They need to quickly and competently scale up their inspection, verification and certification department to the order of a Google, or better yet an Apple.
Sega grabs tech layoff baton and dumps couple hundred Euro staff
Can a Xilinx FPGA recreate a 1990s Quake-capable 3D card? Yup! Meet the FuryGpu
Farewell .NET 7, support ends in May – we hardly knew you
PostgreSQL pioneer's latest brainchild promises time travel to dodge ransomware
Standardization could open door to third-party chiplets in AMD designs
Sigh... what’s old is new again. This was the promise of the pre-chiplet era of HSA, Heterogeneous System Architecture, which was designed by AMD with buy-in from ARM, Imagination, Qualcomm and a pre-AMD Xilinx, to the point that they all created a foundation to support it. In their technical docs and presentation slides they touted HSA as the way that third-party, non-x86-64 chips could directly interface with x86 chips, system RAM and system cache. And even if no x86 chips were involved, there was still interoperability between ISAs like MIPS and ARM. If I remember correctly, there was a diagram showing a board layout with a 64-bit AMD CPU sharing RAM and cache directly with a MIPS-based DSP and an ARM-based Xilinx FPGA. Granted, once again, this is pre-chiplet. But 12 years ago there were working designs with third-party chips on the motherboard interfacing directly with the CPU.

Now, instead of HSA, we have CXL for heterogeneous memory, UXL for heterogeneous accelerators (an alternative to CUDA), and UCIe for heterogeneous interconnects. If AMD would just abandon their disastrous ROCm, adopt UXL, which is based on and centered around Intel’s oneAPI, and stop wasting any more money on the compute stack version of 3DNow!, the better off AMD, and the industry as a whole, will be, as everyone needs to crack Nvidia’s near monopoly on compute.
SWIFT embraces central bank digital currencies after sandbox success
Which is precisely why ISO 20022 is so important: one global standard messaging framework for all banks and financial institutions, and machine readable, as ISO 20022 is based on XML. There will no longer be a need for batch processing every 4, 6, 12 or 24 hours, but rather 24/7 near-real-time processing, because the friction of having a multitude of message formats among the world’s banks and FIs, each with a different way of formatting even something as simple as a date (one bank uses MM/DD/YY and another uses DD/MM/YYYY), which have to be reconciled and matched, sometimes by hand... all of that is pretty much eliminated with ISO 20022. If you want to participate in SWIFT going forward, the due date is November 2025. That’s when they turn off the old messaging system. It’s actually astonishing that it took this long, given the cost savings alone for the banks, but then we still have critical systems running DOS and Windows 95, so there you go.
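To make that concrete, here is a hand-abridged sketch of what an ISO 20022 credit transfer message (pacs.008) looks like; the IDs and amounts are invented, but note that every date is plain ISO 8601, so there is nothing to reconcile by hand:

    <Document xmlns="urn:iso:std:iso:20022:tech:xsd:pacs.008.001.08">
      <FIToFICstmrCdtTrf>
        <GrpHdr>
          <MsgId>EXAMPLE-MSG-001</MsgId>               <!-- invented message ID -->
          <CreDtTm>2025-11-03T09:30:47Z</CreDtTm>      <!-- one date format: ISO 8601 -->
          <NbOfTxs>1</NbOfTxs>
        </GrpHdr>
        <CdtTrfTxInf>
          <IntrBkSttlmAmt Ccy="EUR">1000.00</IntrBkSttlmAmt>
          <IntrBkSttlmDt>2025-11-03</IntrBkSttlmDt>    <!-- YYYY-MM-DD, no guessing -->
        </CdtTrfTxInf>
      </FIToFICstmrCdtTrf>
    </Document>

Machine readable end to end, which is what makes near-real-time clearing plausible.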
Re: Do Not Give Up Your Freedom
Acceptance is not required. Once a critical number of central banks around the world officially roll out their version of a CBDC, even if that CBDC is used only for intrabank transactions, interbank transactions or both, all the outliers will fall into place fairly quickly, because the cost and friction of being the outlier in a global system of CBDCs becomes too great. Also, there is nothing stopping a sovereign entity (nation state) from having a transitional banking system where CBDC, P2P blockchain and old-fashioned cash co-exist. But the inevitable march to digital currencies, including CBDCs, whether tokenized or blockchained, began with fractional reserve banking, followed by all currencies going fiat, and then with the advent of the computer. One could make a hand-waving argument that we have had “tokenized” money since the advent of paper fiat money. The bill or coin in your hand is not your actual wealth. It is a physical representation, a token if you will, that is recognized by the transacting parties and is usually generated by, and granted legitimacy by, a state actor.
Nonsense. It may grease the wheels in your dystopia, but nothing today stops banks and financial institutions, at the behest of legal entities and sovereigns, from controlling your finances. It’s the age-old tactic of freezing assets due to criminal activity and clawing back said assets once a crime has been adjudicated. In fact, a CBDC (which doesn’t necessarily have to be blockchain-based, just algorithmically tokenized) can cause fraud to decrease, purely from the extra information encoded in it via the ISO 20022 data schema coming online now, the only format SWIFT will accept by the end of 2025, although blockchain could increase that trust and anti-fraud factor. Actually, blockchain apart from SWIFT will INCREASE fraud, as blockchains have now been shown to be breakable and hackable, and ad hoc tokenized and blockchain networks, Bitcoin not least among them, are rife with criminals and criminal financial transactions.
Re: Barking up the wrong tree
I’m afraid you fundamentally do not understand the multi-trillion-dollar-per-day worldwide corporate and even sovereign financial payment world. There is NO worldwide payment and clearing system without a trusted intermediary. None. Yes, one of the features of blockchain is that you technically would not HAVE to have a trusted intermediary, if your blockchain had set up the necessary message framework containing all the information needed by all the parties to a transaction in order to alleviate any trust issues among them. But when scaled to planetary size, and with sovereigns involved, this ad hoc, contract-by-contract feature of blockchain breaks down. The processing and clearing latencies become too large, and then there is the trust question of who engineered the blockchain to begin with. Hence the need for a planetary intermediary.

Also, you don’t necessarily need a blockchain to initiate a CBDC. Tokenized currency is not itself a blockchain. Algorithmic... yes. Blockchain... not necessarily. One of the reasons SWIFT can even begin sandbox tests of various CBDCs is that they have adopted ISO 20022, the latest worldwide standardized payment message schema. It is a universal data format for payment and financial transactions that all the world’s banks and financial institutions must convert to by the end of 2025 in order to keep participating in SWIFT, as the old messaging service will be shut off then. As long as you have converted to ISO 20022 and have tokenized your currency, you don’t need blockchain to create and utilize a CBDC.

Now, many sovereigns may CHOOSE to go full blockchain if and when they introduce their CBDC. But even then they will choose to run and clear that blockchain CBDC through SWIFT, because in their blockchain the payment messages will be encoded in the ISO 20022 format, which was developed by SWIFT, and they will want the added assurance and trust factors that SWIFT provides.
AI bubble or not, Nvidia is betting everything on a GPU-accelerated future
Correct, and I would add that it doesn’t hurt that, love them or hate them, Apple Silicon chips have the most performant NPU of any consumer platform or ISA, not to mention their class-leading memory efficiency and ground-up, in-house integration of firmware, software and hardware. And this is before any real A.I. integrations, which are coming by year’s end and will probably be announced this summer. Particularly interesting is what Apple will do with integrating generative AI into development platforms such as their Xcode IDE. Another interesting thought is how Apple might use their generative AI in the designs of new iterations of Apple Silicon chips: an AI-optimized AI chip to run AI software optimized by AI-optimized Xcode and Swift, embedded into an AI-optimized CPU, GPU and interconnects. Sorry... I’m about to go down an AI-optimized rabbit hole.
It’s both, actually. There is a two-front war going on in A.I. One front is the obvious one now: massive-scale models with trillions of parameters, eventually scaling to hundreds of trillions. Think global high-definition climate and meteorological models, high definition in both the spatial and temporal dimensions. The second front: once those models are sufficiently trained, how does one shrink a model down to the point of sitting comfortably in a RAM-constrained iDevice with an NPU, so that AI models can be used constructively on a local or edge device? In fact, Apple just bought an AI startup from Canada that uses AI to make AI optimizations, to the tune of a 5x-plus reduction in the size of the model needing to be run. I have read from one developer that it allowed him to run a 70-billion-parameter model on his Apple Silicon MacBook, whereas most PCs would struggle to run a 7-billion-parameter model.
UXL Foundation readying alternative to Nvidia's CUDA for this year
Re: Is very likely this will fail, like many others before
Apart from your scathing takedown of AMD, which I alluded to above and wholeheartedly agree with, I disagree with you that UXL will fail. Here’s why: Google... Intel... ARM. Between those three you cover the entirety of the x86 and ARM ISAs, from hyperscalers down to edge and local computing and IoT. There is actually very little outside gigawatt cloud warehouses that can support Nvidia’s wares, but there are quite literally billions upon billions of low-power compute platforms still to come that need acceleration and support for on-board, if not on-die, accelerators, where there is absolutely no need for CUDA or the power budget an Nvidia GPU requires to run that stack. It would be trivial for AMD to walk away from yet ANOTHER in-house GPU/accelerator/compute stack, because over the last 30 years they’ve become good at that (remember 3DNow! from the 90s?), and retool for UXL and by extension Intel’s oneAPI stack, which is the basis of UXL for both x86 and ARM. Which makes sense, seeing as how Intel’s FPGAs and AMD’s Xilinx FPGAs are ARM-based. AMD can still compete with Intel on hardware optimizations that deliver better performance with Intel’s own software than even Intel can manage, as EPYC already does. And more cheaply as well.
It is, and has been for some time, obvious that AMD’s ROCm is a failure. After Lisa Su came aboard in 2015 or so and gutted the Fusion and HSA program to start over with ROCm, it has been a disaster. Yes, she has shepherded in Ryzen/EPYC, Infinity Architecture and Xilinx, and AMD hardware is second to none in the x86-64 world. But their software work is very sub-par. I recently stated on Phoronix that at this point AMD should just abandon ROCm and adopt Intel’s oneAPI and their entire compute stack. Maybe UXL can be the bridge by which AMD walks away from ROCm and cuts its losses.
Sorry, Siri: Apple may be eyeing Google Gemini for future iPhones
Enter Darwin A.I.
So Apple earlier this year quietly acquired a Canadian A.I. startup named Darwin A.I. Since Apple usually shuts down the website of any company it buys, and has done so with Darwin A.I., I did some sleuthing on the World Wide Wibble (hence the icon). It seems Darwin A.I.’s focus was twofold. (1) They have an A.I. finely tuned for computer vision in the production and cataloging of chip parts and manufacturing. OK... could come in handy for Apple manufacturing, I suppose. But then, more interestingly, there’s (2) what the former CEO of Darwin A.I. had to say about their reason for being...
Darwin A.I. CEO Sheldon Fernandez:
“Our technology uses ‘AI to build AI’, to make neural networks both smaller and explainable. This can be especially powerful when you want to put deep learning on edge-based devices such as phones, TV, watches, and cars.”
Other sites I could find that provided pre-Apple acquisition information on Darwin A.I. stated that co-founder Alexander Wong had this to say…
“Most of today’s AI applications (such as Apple’s Siri or Amazon’s Alexa) require massive computing resources in huge data centres. That limits the growth of AI because it can be impractical and costly to deploy new solutions. There are also privacy concerns, in medical AI applications for example, when sending data off-site.
The company’s GenSynth platform helps developers “generate compact yet powerful AI that sits completely on board, so that data can be processed in real-time on a device,” Wong says.”
Wong is now director of Apple’s A.I. department. Perhaps Darwin A.I.’s GenSynth neural net training and size-reduction tech will be used, in part, to shrink Google’s Gemini down to reside comfortably inside Apple’s RAM-constrained products (though ones with the most capable NPU of any consumer device on the planet).
Can AI shorten PC replacement cycles? Dell seems to think so
That’s impossible in x86/Wintel-land, where hardware and software are not developed under one roof as at Apple. And when I say software, I mean the OS kernel. Way too many hardware vendors, each with their own priorities, technology, firmware differences, and software and API stacks, not to mention multiple OEMs simply chasing the lowest BOM to eke out 10 cents more profit per unit.
Russia plans to put a nuclear reactor on the Moon – with China's help
I smell a reboot
Here’s the pitch Mr. Netflix executive…..
Picture an inhabited base on the Moon. Watch NASA-designed spacecraft come and go. Oh no... minimal CGI, as we will hire Christopher Nolan to do the pilot episode and then rotate people in, like the guy who directed “Moon”. OK... OK... now, the secret, unbeknownst to the crew of the moon base with the exception of one Russian and one Chinese spy on said base, is that the largest pile of volatile nuclear waste in the Solar System sits on the far side of the Moon. The base has already lost several crews of its “Sikorsky Sky Crane” style, NASA-designed spacecraft, flown unknowingly into said radioactive area. The secret is revealed and the race is on to contain the problem. TOO LATE!! The whole pile goes critical and BOOM!! The Moon is hurtled out of orbit into a neighboring mini black hole, also never seen before. And we’ll call it... SPACE: 2099
Google advances with vector search in MySQL, leapfrogging Oracle in LLM support
Boffins caution against allowing robots to run on AI models
Starting over: Rebooting the OS stack for fun and profit
PARC and Wirth
Just a brilliant article and essay. Even before the author got to the payoff I was beginning to wonder if Smalltalk and Niklaus Wirth would be mentioned. Not because I am brilliant (I most certainly am not, compared with the author) but because I ponder such things in my perpetual state of boredom, running idylls in my often idle mind (perhaps a wetware loop function). But when he mentioned Smalltalk I was piqued, and in my mind I was urging him forward (say his name... say his name). And then POW... Niklaus Wirth. And here is where, once again, my lack of brilliance versus the author shines through. I thought that somewhere after all the talk of object-oriented languages and PARC and Smalltalk and Apple, the author would have to mention Wirth via Pascal and eventually Apple’s object-oriented dialect known as Object Pascal. But, with a shout of “YES” and a fair amount of giggling on my part, when the author got to Wirth he did so by way of Oberon. And the heavens opened, and I heard a voice roughly sounding like Niklaus Wirth saying, “Behold, I have seen thine Register article... and it is good!”
The Land Before Linux: Let's talk about the Unix desktops
Your argument breaks down at this point.
Can you download, install and run a Snap package on Red Hat or SUSE WITHOUT FIRST doing some twiddly fiddly on a terminal to download and install all the necessary plumbing on a Linux distro where Flatpaks are native?
The answer is manifestly no.
Can you download, install and run a Flatpak package on Ubuntu WITHOUT FIRST doing some twiddly fiddly on a terminal to download and install all the necessary plumbing on a distro where Snaps are native?
The answer is manifestly no.
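For the record, here is roughly what that twiddly fiddly looks like in each direction; a sketch using the stock package names (Fedora standing in for the Red Hat side, and details vary by release):

    # Flatpak on Ubuntu (where Snaps are native): install the runtime,
    # then add the Flathub remote before anything can be installed from it
    sudo apt install flatpak
    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

    # Snap on Fedora (where Flatpaks are native): install snapd,
    # then symlink /snap so classic snaps will work
    sudo dnf install snapd
    sudo ln -s /var/lib/snapd/snap /snap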
Now... do you need to open a terminal and do some twiddly fiddly in said terminal to simply download, install and run a 32-bit Windows program on a 64-bit version of Windows?
The answer is manifestly no.
Now... do you need to open a terminal and do some twiddly fiddly in said terminal to simply download, install and run a 32-bit Intel macOS program on a 64-bit Intel-based Mac?
The answer is manifestly no.
Now... do you need to open a terminal and do some twiddly fiddly in said terminal to simply download, install and run a 32-bit Apple Silicon macOS program on a 64-bit Apple Silicon based Mac?
The answer is manifestly no.
Here's a tricky one....
Now... do you need to open a terminal and do some twiddly fiddly in said terminal to simply download, install and run an Intel-based macOS program on an Apple Silicon based Mac?
The answer is... manifestly no... most of the time. Because, at least in my experience, the first Intel-based macOS program I ran on my Apple Silicon MacBook M3 Pro automagically downloaded and installed Rosetta 2, the Intel-to-Apple-Silicon real-time translation layer, before downloading and installing said Intel-based program. And now every other Intel-based macOS program just downloads, installs and runs, because Rosetta 2 was already in place. I’ve heard that sometimes you have to install Rosetta 2 yourself, because your first Intel-based macOS program didn’t trigger the download automatically.
Either way... you were NOT forced to open a terminal and do some twiddly fiddly to get Rosetta 2 before you could download, install and run an Intel-based macOS program on an Apple Silicon Mac.
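And yes, on the rare occasions when macOS doesn’t offer Rosetta 2 automatically, the fix is still a single documented command, once, ever, not an afternoon of repo plumbing. A sketch, assuming an Apple Silicon Mac:

    # One-time Rosetta 2 install on an Apple Silicon Mac
    softwareupdate --install-rosetta --agree-to-license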
So the ONLY computing platform that still insists on making it hard for end users, by continuing to fight its own Unix-like Wars (and really, doesn’t it stand to reason that Linux, created with the express purpose of being “Unix-like”, would eventually stage its own version of the Unix Wars?), is Linux, with 42 flavors of DEs and five app container schemes (Snap, Flatpak, AppImage, plus Crostini on ChromeOS and whatever Google has baked into Android), not to mention RPMs and DEBs.
Re: What Unix cost us
Hands down the best 34 minutes 13 seconds I have spent since entering the world of Linux over 20 years ago and since first using a NeXT computer and assorted SGI IRIX machines even further back. This should be required viewing for anyone interested in Linux or already enmeshed in it.
Meet the New War....same as the Old War
While I observed the Unix Wars from afar, using a NeXT box and various SGI boxen at various jobs, I had to chuckle at the author’s assertion that we are not seeing the same thing in the Linux world, especially given his mention of the containerization of Linux packages via Snap or Flatpak.
I mean... there are two. Three, if you want to make a hand-waving argument about AppImage. So... three versions of containerization, each incompatible with the others, and only one (AppImage) is platform agnostic. That is to say, while you CAN use Flatpaks on Ubuntu, you have to go twiddly fiddly with the command line for a while to set Ubuntu up for Flatpaks, as Ubuntu does NOT do that for you, since they support their own in-house containerization scheme, namely Snaps. And the same holds true for Red Hat and SUSE: they have native Flatpak support, but you have to go twiddly fiddly with the command line for a while to set up Canonical’s Snap scheme.
Now you might say... well... let me use AppImage instead, since it’s platform agnostic. Well, you have to right-click your AppImage program icon and select “Run” in order to launch the program. Sorry... this is the 21st century. Here is the process we have had since the original Apple Macintosh in 1984: download program; an icon is placed on the desktop; single- or double-click the icon to launch. Only in the nerd land of Linux is it acceptable to force a user to learn how to launch a program all over again.
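And that’s the friendly path. Grab an AppImage with a browser and, more often than not, you are back in the terminal anyway before it will launch at all; a sketch, with a made-up file name:

    # A freshly downloaded AppImage usually isn't executable yet
    chmod +x ~/Downloads/SomeEditor.AppImage

    # Only then will it run, from the terminal or a file manager
    ~/Downloads/SomeEditor.AppImage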
And let's not even discuss that before the Linux Wars of three different containerization schemes we had the Linux Wars of RPMs vs DEBs, and the continuing Linux Wars of DEs such as, but not limited to, KDE, GNOME (that's GNOME 2 vs GNOME 3, so a war inside a war), Xfce, Cinnamon, MATE, LXQt... and the list could go on ad infinitum, ad nauseam.
Oh... right... the ChromeOS DE, from which the majority of the Windows 11 DE was copied. And speaking of ChromeOS and Chromebooks, here is THE MOST successful Linux desktop in history. And it can't even run Linux programs natively, because it only uses the Linux kernel, not the entire Linux desktop and userspace bits that would make it a "real" Linux desktop. BUT... as Linux nerds will retort... YOU CAN RUN LINUX PROGRAMS IN A CONTAINER CALLED CROSTINI. And my reply is... LINUX WARS!!! Now we have four Linux container schemes: AppImage, Snaps and Flatpaks, which only run on actual Linux desktops, and the fourth, Crostini, only running on Chromebooks, because ChromeOS isn't actually a Linux platform even though it uses the Linux kernel.
So... in the end... how are the Linux Wars of today any different from the Unix Wars of yesterday, other than the closed source nature of Unix back in the day vs the open source nature of Linux today?