I like your blog and it is useful to know that there are options for RHEL 10.
One thing I despise more than being stuck with Wayland is being stuck with Gnome on Wayland.
> This vulture, though, is not a gamer and finds emulation a little unsatisfying.
I kind of agree, though full virtualization via VirtualBox is overkill for DOS (and especially for writing).
As a middle ground, to scratch my retro itch, I ported / hacked on the original DOSBox to use libdrm(7) and wscons(4) directly on OpenBSD, so I can skip booting up Xenocara/Xorg or some compositor. It feels a lot more native and fun now.
I am still struggling to implement a widescreen Windows 3.1 display driver for my hacked DOSBox. It is simple(ish) in principle and I feel I am close, but it keeps taking a back seat to my other projects because that kind of code is not very satisfying to write in my spare time.
Ultimate was a home version.
It was targeted at "enthusiasts who want every feature in Windows".
https://news.microsoft.com/2009/02/03/windows-7-lineup-offers-clear-choice-for-consumers-and-businesses/
And another source here:
https://news.microsoft.com/2006/02/26/microsoft-unveils-windows-vista-product-lineup/
"The Windows Vista product lineup consists of six versions, two for businesses, three for consumers, and one for emerging markets: Windows Vista Business, Windows Vista Enterprise, Windows Vista Home Basic, Windows Vista Home Premium, Windows Vista Ultimate and Windows Vista Starter"
You can downvote me if you are feeling strangely sensitive about it, but it doesn't really change anything ;)
Fidelity-wise, absolutely. But I like the indexed colors. It feels weirdly "homely".
And it's not like modern software is even going to run on NT 4.x (in some ways I want to run the old software anyway, because it is fast, light and simple).
e.g. Office 97 and the Adobe PDF printer driver have never really been improved upon in the last decade as far as word processing goes.
> what Microsoft did with Windows Phone thereafter was most definitely a misstep. A lack of an upgrade path for devices, combined with changing development frameworks, left users cold.
It was also the developer DRM: you couldn't load your own code onto the device without a special license from Microsoft. Windows RT later did something similar. It kills the passion of early adopters and developers to know that their own code will stop working in the near future.
Windows NT 4.0 Terminal Services is still my favourite to emulate. With rdesktop you can get any resolution you need. NT 4 on a 3440×1440 monitor looks great. So much screen real-estate (the whole purpose for getting a large monitor).
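For example, something like this gets you there (nt4box is a placeholder hostname; from memory, -4 forces the RDP4 protocol that NT 4 TSE speaks and -a 8 matches its 8-bit colour):

rdesktop -4 -a 8 -g 3440x1440 nt4box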
GoboLinux is very interesting. Spraying files all over my filesystem whenever I install something with lots of dependencies is not a good idea IMO. It is a shame this kind of thing hasn't taken off. The way macOS *used* to do things was simple and elegant, without installers doing creepy stuff.
For OpenBSD I developed pkg_bundle:
https://codeberg.org/kpedersen/pkg_bundle
It basically grabs a package and its dependencies, applies some hacks, and makes the result completely self-contained and relocatable (I usually dump this stuff in /opt).
It also has a benefit on shared UNIX systems, because you can install stuff without root access.
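A hypothetical session (the exact command names and flags here are illustrative, check the repo for the real usage):

pkg_bundle vim          # fetch vim plus its dependencies into one self-contained tree
mv vim /opt/            # relocate it wherever you like; no root needed for your own prefix
/opt/vim/bin/vim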
Heh, I rarely follow this kind of stuff to be fair. This DESQview/X article was just particularly strong.
Right. And emulating is not so smooth either. I notice that QEMM386 under Qemu has issues with specific DPMI hosts (the occasional key press gets missed).
This affects Vim (unless you use the little-known PMODE/DJ [1]) and most DESQview products, which I believe use DJGPP and thus GO32, the DPMI host of that era, which has a bug in its keyboard handler.
I do have some patches to the assembly that fix the faulty keyboard handler, but it is quite a niche problem admittedly.
PMODE/DJ really is cool though. You can basically embed the DPMI host into any executable so it no longer needs an external one to be present, e.g.:
exe2coff origvim.exe vim.bin
copy /b pmodstub.exe+vim.bin vim.exe
[1] https://www.delorie.com/djgpp/dl/ofc/current/v2misc/pmode13b.zip
Nice. When I feel like a "reset", I go and write some small fun software for DOS. It reminds me how computers really can be quite simple.
FreeDOS and OpenWatcom are actually great for this.
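To give a feel for how simple it is: assuming a hello.c, the entire build with Open Watcom's compile-and-link driver is (from memory, wcl targets 16-bit DOS by default):

wcl hello.c

and out pops a hello.exe. No build system, no dependency tree.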
As for GUI environments, a much more substantial one is DESQview/X. I wonder if that works on FreeDOS?
https://lunduke.substack.com/p/desqviewx-the-forgotten-mid-1990s
>> We're removing the bypassnro.cmd script from the build to enhance security and user experience of Windows 11. This change ensures that all users exit setup with internet connectivity and a Microsoft Account.
Stick to LTSC. Microsoft won't remove this or it will fail the "offline from inception" requirement most businesses have for imaging. It would be commercial suicide.
Nah, asymmetric public / private keys are the better solution.
Reaching for your stupid mobile just to log in is tedious and masks the real problem... You keep having to "log in".
Imagine instead that you simply upload your public key to each site as you create an account. You never need to "log in" again. Web browsers could make this even simpler by acting as an agent for your private key.
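The mechanics already exist today. A rough sketch of the challenge/response using plain OpenSSL (file names are illustrative):

# one keypair, generated locally; only the public half is ever uploaded
openssl genpkey -algorithm ed25519 -out id.pem
openssl pkey -in id.pem -pubout -out id.pub

# the site sends a random challenge and you sign it...
openssl pkeyutl -sign -rawin -inkey id.pem -in challenge.bin -out sig.bin

# ...and the site verifies against the public key it holds
openssl pkeyutl -verify -rawin -pubin -inkey id.pub -in challenge.bin -sigfile sig.bin

This is essentially what ssh-agent already does for shells; the browser-as-agent part is the only missing convenience.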
> gently persuaded
? Why do you need to persuade it? Just ask it.
The whole point of GPT algorithms is to churn out the information their database knows. They have scanned a bunch of code, including malware, so why would that be treated any differently?
- Do you need to gently persuade your calculator to also emit the answer to a sum?
- Do you need to gently persuade your government to be crooked?
No. They are machines; that is what they already do as part of their existence.
> The only reason people don't is because it's bad security practice to do so
I disagree. Not having an arbitrary expiry on keys is very common practice. Just look at most SSH installs. Arguably those are also much higher-profile targets.
It would be like saying that every server in the world is following bad security practice because its SSH host keys never expire.
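You can see the difference directly (paths and file names are the usual defaults, adjust to taste):

# a TLS certificate embeds a validity window:
openssl x509 -noout -dates -in server.crt           # prints notBefore= / notAfter=

# an SSH host key has no dates at all, just the key material:
ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub    # size, fingerprint, comment

Nothing in the second output ever "runs out", and the sky has not fallen.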
> do stuff that everyone needs rather than trying to pick a club to belong to.
This part I really do agree with. Whether England sides with NASA or ESA, our contributions are going to be tiny and insubstantial. We have spent the last few decades focused purely on re-selling crap mobile phones and houses to ourselves for a quick buck, rather than making progress in any branch of science.
So if we can pick a niche, really focus on that and then offer that to *both* NASA, ESA (and heck CNSA and ROSCOSMOS once we are all friends again); then we might actually have some impact.
If we do pick a "club", ESA or NASA makes little difference: we will end up just acting as a "good little worker" and lose much of the autonomy needed to find our niche.
(Yes, I do still think leaving the EU was risky but that ship has sailed).
> And I have enough C++ books on my shelves that describe the remarkable complexity of trying to write a decent smart pointer to suspect your implementation was as subtly broken as most were.
Whilst it certainly can't have been any worse than C++98's attempt via auto_ptr, subtly broken heuristics are not such an issue so long as the pointer remained safe. And it did.
I find it strange that so many books did not cover smart pointers in that era. Again, this contributed to C++'s bad name for "manual memory management", when really there was just an extremely large number of weaker developers trying to write ANSI C with a C++ twist rather than embracing the language properly (it didn't help that the language was complex, so fairly few compilers implemented the right features correctly). Again, C++'s popularity is both its blessing and its curse when it comes to making a name for itself.
> C++98 certainly depended on manual memory management
It actually didn't; we just used to roll our own smart pointers (mainly a shared pointer, because we didn't have the move semantics required for unique_ptr).
The problem was that all of our smart pointers were different per project and it was a pain to marshal between them.
The legacy of C++ is what leads people to think it is much less safe than it is. With raw C libraries in the mix, both C++ and Rust sit at roughly "98% safe", which is certainly workable.
> Rust is well designed, modern language, and the memory safety is a great feature.
And if the Linux kernel were 100% Rust then you could benefit from those great features.
But unfortunately the fact that it will no longer be a homogeneous codebase undermines all of the benefits you stated.
... But this whole Rust argument has been done to death. Let's just wait and see. If this experiment works, it works; if it doesn't, it doesn't. Opinions are split almost 50:50. Either way, the BSD community will be very welcoming to skilled developers who dislike Rust and are looking elsewhere.
> you cannot call C directly from Rust
You can call into a native library from Rust (as you can from any language). But you cannot call a C function directly from Rust without creating bindings (writing a Rust-specific "header" is what creating a binding means).
> SWIG
SWIG / bindgen perform the same task. Binding generators. Nothing special between them.
> The Rust safe abstractions are not the same thing as a bindings
They are; they are called "fat" bindings. C++ has direct access to C functions but you can also create "fat" bindings as part of RAII wrappers.
> You can call C directly from Rust
No you can't
> you can autogenerating bindings
Not really. SWIG / bindgen will only be ~75% successful. This is why the Linux developers are complaining about all these Rust "abstractions" (basically bindings) in the kernel. The problem is macros and non-opaque APIs.
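To make that concrete, a typical bindgen invocation is just (wrapper.h being a hypothetical header that pulls in the C API you want):

bindgen wrapper.h -o bindings.rs

Plain functions and structs translate fine, but function-like macros and static inline functions simply never appear in bindings.rs, so a human still has to re-implement those by hand on the Rust side. Those hand-written layers are the "abstractions" being argued over.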
> am not sure what tiny C front end you would want, nor where you would bolt it.
Check out C-based languages to get a feel for how this would work. C++/CLI, Zig, Objective-C (and C++ itself) are quite nice examples of languages that can directly consume C code whilst providing safer abstractions.
It's strange; I know plenty of younger people who have jumped onto mailing lists and immediately got involved contributing code (in C).
Way more than a decade or so ago. Perhaps the big gap between the ZX81 / ZX Spectrum era and the Raspberry Pi is what caused the temporary decrease. It looks like that has resolved itself now.
> Maybe componentizing it and letting it run in-browser will give it a new lease of life
I think it will be the other way round. Once Microsoft no longer provides a desktop version of Office, then LibreOffice might win by default. Working entirely within a web browser is a very poor experience.
Yeah, but that guy twists words and it's not worth arguing with him. Better to just disengage from his crap. BSD and the non-systemd operating systems are actually flourishing.
I mean, check out this issue log:
https://github.com/systemd/systemd/issues/2402
The sodding idiot didn't know the difference between hosing the install and bricking the system.
He is a fool, and the community is filled with them. Just ignore them; the great thing about open-source is choice. You can skip the broken and replace it with something workable.
In many ways this is because Microsoft will take note of Windows 11's lack of popularity *and* ultimately tone down their nonsense.
That said, I think people should always just wait for the LTSC release and torrent it if Microsoft refuses to sell them a license. Even with Windows 11, it is far cleaner than the consumer "Pro"/Home versions.
> but Linux is forever
Linux basically wins by default these days, but I certainly can't say it is "forever", in that it is constantly changing. I would even say that GNU/Linux etc. is an umbrella term for a group of operating systems that continuously die and get replaced every few years by a loosely similar project with the same name.
This needs to be done, otherwise it is proof that this membership is not legitimate and instead merely for show (perhaps to help Google avoid having to break up their monopoly slightly).
(I will also be disappointed if any of the individual distros providing Chromium in their repos fail to revert the manifest breakage)