I've been using FreeBSD as my main OS at home for FOSS development for the past 4 or 5 years.
No real issues (unlike Fedora which has given me countless issues with either kernel panics at boot or borked Nvidia drivers).
I don't know much about FreeDOS, my impression was that it was used when a small free bootable system was required.
FreeBSD has two main strong points IMO.
1. The permissive license. Some companies like the fact that they can use it without being obliged to make the source available. Older PlayStations are an example of this.
2. Networking. TCP/IP was developed on BSD (the original BSD at the University of California, Berkeley). The BSDs, and FreeBSD in particular, continue to have leading-edge network performance. Netflix use FreeBSD for their 'content delivery network'.
FreeBSD also gets some interesting contributions from academia. Examples of that are Capsicum and CHERI/Morello.
On the downside, FreeBSD has a much smaller community and far fewer contributors than Linux. That makes it more like one of the smaller Linux distros. It's certainly nowhere near the size of Fedora / Ubuntu / Debian.
Windows and Linux are both as old, if not older (though you could argue that current Windows has nothing in common with the 80s and 90s versions of Windows). I'd count the jump to Mac OS X as a real discontinuity.
IBM's System/390 line has roots going back even further, to the 60s.
As long as people find software useful they will use it. Well, that and Microsoft attempting to apply 'market distortions'.
You aren't just a bit puzzled. You simply don't have a clue.
First, MTE isn't used for memory leak detection. It's used for access control to memory. The leaks that the article talks about are information leaks concerning the tags.
Second, the tags are not at all an attack vector. Without MTE zero effort would be required. This article is saying that it is relatively easy to get around the MTE access control.
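To make that concrete, here's a toy model in C of how an MTE-style check works. This is purely illustrative, not ARM's actual interface: real MTE keeps a 4-bit tag per 16-byte granule of memory and mirrors the expected tag in the pointer's upper bits.

    #include <stdbool.h>
    #include <stdint.h>

    /* Toy model of an MTE-style tag check, illustrative only.  Real MTE
     * stores the logical tag in pointer bits 59:56 on AArch64 and keeps
     * one 4-bit tag for every 16-byte granule of memory. */
    static bool tag_check(uint64_t ptr, const uint8_t *granule_tags)
    {
        uint8_t  ptr_tag = (ptr >> 56) & 0xF;            /* tag carried in the pointer */
        uint64_t addr    = ptr & 0x00FFFFFFFFFFFFFFULL;  /* strip the tag byte */
        uint8_t  mem_tag = granule_tags[addr >> 4];      /* tag of the 16-byte granule */
        return ptr_tag == mem_tag;   /* a mismatch faults the access in hardware */
    }

Leaking the tag values just lets an attacker forge a pointer that passes this check; the tags themselves do nothing for the attacker without that leak.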
Debian never fails to disappoint. The Purify conditional compilation should have been a red flag.
The correct fix for the Valgrind errors was to
a) #include <valgrind/valgrind.h> (the usual install path for Valgrind's client-request header)
and
b) keep "#ifndef PURIFY" but also skip the read at run time with "if (!RUNNING_ON_VALGRIND)" (RUNNING_ON_VALGRIND is a runtime check from that header, not a preprocessor symbol, so it can't go inside an #if).
If adding a dependency on valgrind.h is too much to ask, Valgrind has a suppression mechanism for things like this.
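As a minimal sketch of what that guard might look like (the function name below is a hypothetical stand-in for md_rand.c's MD_Update(&m, buf, j), not the actual patch):

    /* Sketch only, assuming Valgrind's header is installed in the usual place. */
    #include <stddef.h>
    #include <valgrind/valgrind.h>   /* provides RUNNING_ON_VALGRIND */

    /* Hypothetical stand-in for OpenSSL's entropy pool update. */
    extern void entropy_pool_mix(const unsigned char *buf, size_t len);

    void mix_in_uninitialised(const unsigned char *buf, size_t len)
    {
    #ifndef PURIFY
        /* The uninitialised read is deliberate (a little extra entropy).
         * RUNNING_ON_VALGRIND evaluates at run time, so the Valgrind case
         * has to be an if(), not part of the #if. */
        if (!RUNNING_ON_VALGRIND)
            entropy_pool_mix(buf, len);
    #endif
    }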
> replacement hardware -- Windows 3.1 doesn't run well on modern hardware
You are quite likely missing the point there.
I'd say that there is a good chance that this is running on old kit with something like a custom adapter card (perhaps as old as PC-AT format).
Windows 11 doesn't work well on ancient hardware.
Generally I would expect compiled code on the amd64 architecture to be smaller than anything ARM-based.
I don't think that Apple can claim with any honesty that current Macs have the same or less bloatware than previous generations (my rough estimate of bloat, based on the number of processes running after booting, is that it has quadrupled over the last 15 years).
It could be that Windows and the Unix-likes are horribly inefficient and that the super-smart Apple engineers have managed to fit a quart in a pint pot. But I don't believe that either.
My theory is more prosaic. Apple can't be arsed with the cost and effort of modifying their designs and are happy enough overcharging for their current offering.
Phones and tablets are a different story, but you can't extrapolate from a phone to a desktop.
Why did Solaris take off in a big way? SunOS had been doing well enough on M68K, but when SPARC was released it wiped the floor with the competition. Still, relatively low volume and high profit. Fast forward about 10 years to the late 90s. Chips like the Pentium III had caught up with SPARC (the UltraSPARC II era) but were cheaper.
Skip forward another 5 years to 2003 and AMD releases the Opteron. By now SPARC is no longer competitive on cost/performance. But at that time Sun had a big opportunity. Sun already had several years of experience with 64-bit OSes, the V9 SPARC architecture having been released about 8 years earlier. Sun also released a 64-bit amd64 version of Solaris 10 in 2005.
Though Linux had already supported amd64 for a few years, my memory is that it was all very flaky. Sun did push fairly hard for amd64, but they had also done some serious damage, as already noted, when they dropped Solaris 9 x86 for a short while. Even though Sun was selling Opteron workstations and servers, most customers were using them for Windows or Linux. And though the writing was on the wall, Sun was still making most of its money from SPARC and couldn't bring themselves to switch focus to amd64. OpenSolaris was too little, too late. And finally along came My Little Pony and finished the job off: https://www.theguardian.com/technology/blog/2010/feb/04/jonathan-schwartz-sun-microsystems-tweet-ceo-resignation
French typists may be faster
But for software development (especially the C family of languages) the keyboards are a nightmare.
# - right next to the enter and shift keys on a UK board, AltGr-3 on a French board
\ - next to the left-shift on a UK board, AltGr-8 on a French board
Square brackets and braces aren't on adjacent keys; instead they sit symmetrically but spread across the number row.
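To see how often those characters come up, here are a few lines of everyday C (key positions per the standard French AZERTY layout; the program itself is just filler):

    #include <stdio.h>                   /* '#' is AltGr-3 on AZERTY */

    int main(void)
    {                                    /* '{' is AltGr-4, '}' is AltGr-= */
        int a[3] = {1, 2, 3};            /* '[' is AltGr-5, ']' is AltGr-) */
        printf("a[0] = %d\n", a[0]);     /* '\' for escapes is AltGr-8 */
        return 0;
    }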
English is mostly un-gendered when it comes to (pro)nouns and articles. Inclusive French seems to be getting more common. Stuff like 'iel' instead of 'il' or 'elle', and making a mish-mash of gendered nouns. That's not so bad when it's just an 'e' at the end (so étudiant-e), but it looks a lot worse when the masculine and feminine versions differ more. For instance "Conducteur·rice de train" (train driver, copied and pasted from this ad: https://metiers.siep.be/metier/conducteur-conductrice-train)
OpenIndiana - not really. Due to lack of available hardware (and maybe people able/willing to do the work) OpenIndiana Hipster no longer supports SPARC.
https://www.openindiana.org/es/documentation/faq/#does-openindiana-provide-a-sparc-release
There seem to be a few (more) obscure spins that still support SPARC.
It's a coil. An inductor. One of the 3 types of passive circuit element that exhibit electrical impedance.
Quick refresher, for ideal components in steady state. A resistor lets current flow through it, proportional to the voltage. A capacitor passes current until it is charged, after which it has a voltage across it but 0 current. An inductor has a voltage across it until the magnetic field builds up, after which it carries current but has 0 voltage across it.
The coil's voltage and resistance are both pretty close to zero, which is why there is such a high current and magnetic field.
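In symbols, for the ideal steady-state components above (nothing exotic, just the standard relations):

    $V = IR, \qquad I_C = C\,\frac{dV}{dt}, \qquad V_L = L\,\frac{dI}{dt}$

With R ≈ 0 in a superconducting coil, even a tiny charging voltage ramps the current up (dI/dt = V/L), and once the supply is shorted out the current keeps circulating with V ≈ 0.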
For the electronics, it's not just a question of the field strength. It's more a question of the rate of change of the field.
For the hard disk, the write head is **very** close to the magnetic material. The magnetic field falls off with the cube of the distance. So I guess a fairly enormous MRI-style superconducting magnet would be required.
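Shorthand for those two points, Faraday's law for the induced EMF and a dipole-like far field:

    $\mathcal{E} = -\frac{d\Phi}{dt}, \qquad B(r) \propto \frac{1}{r^3}$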
You are seriously misinformed. If you wind your clock back to 2005, the picture is very different. Back then, Linux was pretty much only 32-bit, and the 64-bit version was not yet fit for purpose. Sun, by that time, had a lot of experience with 64-bit Solaris on SPARC V9. So when Solaris 10 came out, the 64-bit amd64 version was probably the best OS for the platform. Remember, Intel was still sailing towards the iceberg on the Itanic, and AMD was trying to steal a march with Opteron. AMD stumbled with Barcelona and Bulldozer, Intel struck back with EM64T, Sun failed to get much traction for Solaris amd64 even on its own hardware, OpenSolaris was too little, too late, and the rest is history.
Whilst I've not worked in the field for a long time, my PhD was on pit-forming mechanisms in dye-polymer optical storage.
I'm not too impressed by the FAQ. "the M-DISC™’s data layer is composed of rock-like materials known to last for centuries". The Wikipedia M-DISC article is a bit better.
Normally optical media use a polycarbonate substrate. This has pretty good optical and physical properties (and is dirt cheap). However it is somewhat hygroscopic, and when under stress its optical properties are less nice. On top of this, CD/DVD/BD-LTH have an organic dye-polymer layer (100nm or so thick), a layer of aluminium (50nm) and a layer of lacquer (10um) on the label side. 10 microns of lacquer doesn't offer much physical protection. Back on the polycarbonate substrate side, small scratches tend not to be too much of a problem, because the disk is fairly thick (1.2mm) and the incident laser has a fairly high numerical aperture (i.e., it's still a fairly wide spot when incident on the disk, but converges at a high angle at the dye/reflective layers [as high an NA as you can get with a lens that probably only costs 20p]). Also the Reed-Solomon ECC does a pretty good job. I don't know if this has improved with DVD or BD.
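A rough worked number for that spot size, assuming CD-like figures (NA ≈ 0.45, polycarbonate index n ≈ 1.55, 1.2mm substrate):

    $\theta = \arcsin(0.45/1.55) \approx 17^{\circ}, \qquad d \approx 2 \times 1.2\,\text{mm} \times \tan 17^{\circ} \approx 0.7\,\text{mm}$

So a surface blemish much smaller than ~0.7mm is badly out of focus at the data layer, and the ECC mops up what's left.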
I doubt that the M-DISC deviates that much from the CD/DVD structure. The main difference seems to be in the recording layer, which is more like BD. I'm not sure about the thousand-year claim, but if you take care not to scratch the disks, write on the label with a felt pen that won't dissolve the lacquer, and keep them somewhere fairly dry, then I reckon they'll last well compared to alternative data storage media.
There are many issues involved in continued die shrinkage. Just to list a few.
1. Masks. Currently there are large sets of design rules needed to create mask features with dimensions much smaller than the wavelength of the light used. People have talked about moving to shorter wavelengths, but again there is a big economic barrier.
2. What exactly scales. Back in the old days you had 5V and you could just shrink the dimensions and nothing else. But then the electric field (voltage/distance) started getting too high, so the voltage had to start dropping. Second problem: it couldn't drop as fast as the dimensions shrank. There's a speed/power tradeoff, but basically silicon transistors don't work below about 0.6 to 0.7 volts (the threshold voltage where a transistor switches between off and on). High-k dielectrics were introduced to help with the electric-field breakdown issues. A worked example follows this list.
3. Variability. One of the important aspects of IC design is that while it isn't easy to control transistor parameters exactly (e.g., to have precise resistances and gains), it used to be the case that transistors physically close on the die would be very closely matched in characteristics. When you scale down to small numbers of atoms, each transistor has much more statistical variation. This makes design much harder.
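A rough worked example of the field problem (illustrative numbers only): at 5V across a 20nm gate oxide the field is comfortable for SiO2; shrink the oxide to 2nm at the same 5V and you are well past breakdown (around 10 MV/cm), which is why the supply voltage had to fall, until the 0.6 to 0.7 volt threshold put a floor under it:

    $E = \frac{V}{t_{\text{ox}}}: \qquad \frac{5\,\text{V}}{20\,\text{nm}} = 2.5\,\text{MV/cm}, \qquad \frac{5\,\text{V}}{2\,\text{nm}} = 25\,\text{MV/cm}$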
I don't think Mike Wright has much experience with Solaris, to be honest. For free you can get the OS, critical and security patches, and any other patches those depend on. You have to pay for the rest. With Red Hat Enterprise Linux or SuSE Enterprise Linux you pay for the OS and annual support. If you want to compare it to an all-free Linux like Fedora Core, then you should compare it to Solaris Express. That's free and has updates roughly every month (though no patches of any kind).
sh as the only shell? Of course not. Personally I use ksh, but all of the shells are there. sh may be limited, but it's not as buggy as bash. Since Sun puts a lot of emphasis on compatibility, there is no way that they will ever replace the Bourne shell with a symlink to bash.
From what I've seen, it's not just on high end hardware that Solaris gives Linux a run for its, err, money.