* Posts by jcday

22 publicly visible posts • joined 8 Nov 2023

Ex-Meta exec: Copyright consent obligation = end of AI biz

jcday

There are options.

The smarter the AI, the less material it needs. If you need lots of training material, then your design is defective.

If you require only limited material, then you can afford to do this amazing thing known only to The Wise: you can buy the book.

AIs suffer a known problem: when there's too much poor material, or just too much material in general, the quality of results falls off. That's why you want a small training pool of carefully curated material, not the Internet as a whole.

Boeing 787 radio software safety fix didn't work, says Qatar

jcday

Re: 90 Minutes to install a patch????

There may also be ISO 900x paperwork for change control. I've never worked on aviation software in this context, but a typical embedded computer would use a JTAG port for uploading firmware of this type. I imagine aviation computers would use that or something functionally equivalent, as USB is not really safe for this sort of use.

But, yeah, what you've outlined is pretty much standard for mission-critical, and even just commercially-sensitive, environments.

Microsoft to mark five decades of Ctrl-Alt-Deleting the competition

jcday

Good question

I remember MSDOS coming out. It had a fair amount going for it, but the PET3032 supported intelligent peripherals and the BBC "B" had a more intuitive syntax, superior graphics control (such as split modes), and sound. Still, MSDOS had nice features, such as mapping directories to drives and drives to directories, and support for more memory.

After a while, things started getting murky. DOS 5 and DR DOS 6 got embroiled in lawsuits, often over copyright.

I remember RISCOS, DesqView, GEM, and early Windows. Windows 3.11 was, again, embroiled in lawsuits, this time over anticompetitive behaviour.

Windows was available for PCs, RISCOS only for the Archimedes. RISCOS was better, but the Archimedes failed to sell and Acorn never leveraged the OS anywhere else.

I will decline to comment on later events, beyond saying Microsoft increasingly disappointed me on ethics and security, and whilst I respect others who feel it has changed, I'm not convinced.

But when it started, I do freely admit they had good ideas and a stronger starting point than Commodore, Acorn, or Atari were able to devise, even though there were noticeable weaknesses.

Palantir suggests 'common operating system' for UK govt data

jcday

Hmmm.

There would be massive benefits to switching the entire governmental IT network to HaikuOS - for a start, it'll prevent Oracle botching any more contracts.

Feds want devs to stop coding 'unforgivable' buffer overflow vulnerabilities

jcday

Lemon laws

You cannot prove zero defects in non-trivial software, so a hard lemon law wouldn't work. However, softer versions should be fine.

Valgrind, Dmalloc and Electric Fence mean buffer overflows and memory leaks can be detected in Linux software trivially. So those defects are 100% avoidable.
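
As a minimal illustration (the file name and workflow are mine, but this is the standard way the tool gets used):

/* overflow.c - a deliberate one-byte heap overflow */
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(8);       /* room for 8 bytes                  */
    strcpy(buf, "too long");     /* writes 9 bytes, including the NUL */
    free(buf);
    return 0;
}

/* gcc -g -o overflow overflow.c                                      */
/* valgrind ./overflow                                                */
/* Valgrind's memcheck flags an invalid write of size 1 at the strcpy */
/* line, with the allocation site listed underneath.                  */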

There are plenty of static checkers (e.g. Coverity, Klocwork, PVS-Studio), so common defective coding practices can be avoided.

For mission-critical software, a development strategy similar to that used by SEL4 would be able to reach extremely high levels of software assurance. This is a market where the extra cost can reasonably be included.

So we can argue that certain classes of defect should be 100% absent from specific classes of software. I think that is perfectly reasonable.

From this, we can argue that limited lemon laws should be in place.

Since we can also estimate software defect density, it would also be possible to have software certified to some established level of quality, entitling the vendor to market the software accordingly.

The user then gets to see the specific risk level for that product.

jcday

Doesn't require safe languages

C for Linux has valgrind, dmalloc, and Electric Fence.

For Windows, there are copious commercial memory debuggers that integrate well with Visual Studio.

Yes, memory-safe languages prevent other memory-related bugs, but for simple stuff like overflows, there has never been a shortage of tools, just a shortage of common sense.
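
To be concrete, here's a sketch (file and variable names invented) using the address sanitiser that gcc and clang both ship:

/* offbyone.c - classic off-by-one read on a stack buffer */
#include <stdio.h>

int main(void)
{
    int a[4] = {1, 2, 3, 4};
    int sum = 0;
    for (int i = 0; i <= 4; i++)   /* bug: should be i < 4 */
        sum += a[i];
    printf("%d\n", sum);
    return 0;
}

/* gcc -g -fsanitize=address -o offbyone offbyone.c                     */
/* ./offbyone                                                           */
/* AddressSanitizer aborts with a stack-buffer-overflow report pointing */
/* at the a[i] read.                                                    */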

'Maybe the problem is you' ... Linus Torvalds wades into Linux kernel Rust driver drama

jcday

Not a trivial problem.

The characteristics that make for good programmers don't generally make for good communicators.

If Rust was seamlessly able to work with C's ABI, adding Rust code would not be difficult.

Whilst memory issues are a big problem, there are many types of coding error. Perhaps it would be helpful to see how many classes of error we can practically remove, and whether this can be done more effectively in a C variant, a Rust variant, or some other high-performance, low-level language entirely.

Absolute Linux has reached the end – where to next?

jcday

Not necessarily the best approach

One of the problems with any distro is getting exactly the dependencies you need for the functionality you actually want: not every dependency that could conceivably be included (especially on lightweight systems), but not so few that your requirements aren't met.

The second problem is the interface, which should be based around how you think and work, not how the distro's admins think and work. But, again, that creates interacting dependencies.

The third problem is the kernel: you really want a kernel configured to make the most of your system, not the dev's system.

Probably the simplest solution would be to not have a distro at all, at least not in the normal sense, but rather a requirements document plus a systems analysis, which you send to the distro provider. They then build a custom roll, configured to suit you, that still passes the supplied test suites plus any distro test suites.

You then get that roll. It has what you need, and nothing more. Everything else the builder knows about can be installed as an extra, but what boots up first time is a distro that only has what you want.

The downsides of this are that updates and extensions take a lot longer to get to the user, a change in requirements can lead to a huge rebuild, validation is going to have to be a lot more basic, and the distro provider will need a powerful server farm, but it means you should get the best performance, the lowest footprint, and the least clutter.

Australia moves to drop some cryptography by 2030 – before quantum carves it up

jcday

Quantum methods don't have to be a factor.

https://valerieaurora.org/hash.html

We've known the SHA2 family was vulnerable since 2008. The probability of an unofficial break, at this point, cannot be ignored. I did not see SHA3 mentioned, which lends weight to the idea that the Australians have found flaws they're keeping under wraps.

QNX 8 goes freeware – for non-commercial use

jcday

Intriguing.

There are a number of non-microkernel RTOSes (VxWorks, FreeRTOS, etc.) and these will be the chief competition, as most software corps really don't understand what kernel architecture has to do with anything.

Of the microkernel RTOSes out there, SEL4 is probably the most studied, out of necessity due to the requirement that it meet very high standards of proof of correctness.

There are others, but they seem to be mostly proprietary and niche.

And, of course, these days, Linux does soft realtime (now the final patches are in).

QNX, to compete, is going to have to have a very convincing selling point.

Clearly, the makers think they have one. I will be watching this with interest.

The NPU: Neural processing unit or needless pricey upsell?

jcday

Intriguing

I wonder why they're doing in hardware what is actually better done in software. "Efficiency" isn't a reason, since there are better efficiency gains elsewhere.

Neural Nets are, ultimately, a very extreme example of SIMD-type work. Ideally, you want something akin to a vector processor, because all neurons are performing exactly the same instructions at the same time, just on different data sets.

This is partly why GPUs are good, because GPUs can do limited amounts of SIMD stuff.
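
To make that concrete, here's a sketch of a single fully-connected layer in plain C (sizes and names invented): every output neuron runs exactly the same multiply-accumulate loop, only on different data, which is why the work maps so naturally onto vector/SIMD hardware.

/* One fully-connected layer: out[j] = b[j] + sum_i in[i] * w[j][i].  */
/* Every j executes identical instructions on different data - ideal  */
/* vector/SIMD work; gcc -O3 will typically vectorise the inner loop. */
void dense_layer(const float *in, const float *w, const float *b,
                 float *out, int n_in, int n_out)
{
    for (int j = 0; j < n_out; j++) {
        float acc = b[j];
        for (int i = 0; i < n_in; i++)
            acc += in[i] * w[j * n_in + i];
        out[j] = acc;
    }
}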

Intel: Our finances are in the toilet, we're laying off 15K, but the free coffee is back!

jcday

Re: Caffeine is the key here

Tea and coffee act very differently here.

Tea has compounds that are sedatives, and multiple stimulants. In consequence, tea produces a calm, relaxed, but highly active mind, and this will last for quite some time. Tea also has compounds that protect the brain.

Coffee also has multiple stimulants, but no sedating compounds. In consequence, you've a lot more energy and focus, but the effect is spikier, with more drive and less calm. Coffee protects the heart in spades.

It's not really rational to try and "optimise" tea and coffee use, so providing both makes a lot of sense, but in the end both have a strong value for engineers.

Rackspace internal monitoring web servers hit by zero-day

jcday

This is fascinating.

The idea of classical security procedures is that you minimise attack surfaces, minimise what can be done in the event of a successful hack, and maximise active detection and elimination of attacks as they take place.

More modern security practices have multiple layers of castle wall for things that don't actually need direct access.

In order for an attack that results in remote access to succeed, you need a minimum of three different security failings. Depending on what level of access was achieved, and the level of network segment isolation, it can take six or seven failings.

In practice, operations take shortcuts, so it's rare you get to quite that degree.

Regardless of how many layers there were, a breach of security requires problems in far more than just an application. I'd want to know what the additional failings were.

Feel free to ignore GenAI for now – a new kind of software developer is being born

jcday

Re: "Programming should be easier"

Agreed on all points.

Oh, and PS: Oracle never did deliver a working product to Birmingham City Council in the UK, despite being paid over ten times the original estimate, and will likely be paid as much again to make what they have work, thanks to the sunk cost fallacy.

jcday

A simple challenge

There are various different requirements in software, depending on the intended audience. The sectors with the deepest pockets, though, will need products that have one or more of these qualities:

1. Highly compact software

2. Highly secure software

3. Highly robust software

4. High-performance software

5. Highly distributed software

6. Algorithmically precise software

If AI is to do anything more than replace the developers of Candy Crush clones, then it has to be capable of operating in at least one, ideally several, of these areas.

There's obviously more to it: skilled programmers are language-agnostic, paradigm-agnostic, and specification-format-agnostic.

I could easily write out a full-length challenge that would test the capacity of an AI to its limits, but let's start with those six domains.

If a vendor or a prompter can demonstrate AI's capability to do these six things, then it's worth talking about.

CISA boss: Makers of insecure software must stop enabling today's cyber villains

jcday

Re: I disagree.

I know more about security than you.

No, I have never committed a bad PR. That's because I know what I'm doing. I've been doing this stuff a lot longer than you. I don't even have to ask how long you've worked in IT security to know that.

You should NEVER have applications access databases. Ever. Partly because it's stupid and not secure, but also because SKILLED programmers NEVER tightly couple.

I/O should NEVER be in your primary code, it should be segregated out so that if something changes, you change only the interface, never the innards.
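
A rough sketch of what that separation looks like in C (every name here is invented for illustration): the business logic sees only the interface, so when the back end changes, only the implementation behind the pointers changes.

/* storage.h - the only header application code is allowed to include */
typedef struct storage {
    int  (*get)(struct storage *s, const char *key, char *buf, int len);
    int  (*put)(struct storage *s, const char *key, const char *value);
    void (*close)(struct storage *s);
} storage;

/* Application code calls s->get() / s->put() and never knows whether */
/* the implementation talks to a database front end, a flat file, or  */
/* a mock used in testing. Swap the back end: the callers don't care. */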

Christ on a pancake. This is like programming 101.

High horse? I own the bloody battlefield, because piss-poor wannabes like you get involved. Get the hell out of IT and programming, your ilk aren't welcome.

jcday

I disagree.

In most code, there will be software defects; achieving a defect density of zero in the general case is impossible. (Church-Turing.)

But SEL4 demonstrates that you can produce certain classes of code with provably zero defects. So, for those categories, your argument has no validity. (Church-Turing applies to the general case, not to all cases of non-trivial software.)

So let's consider those classes of problem where you can't prove zero defects. What can we achieve?

Let's look at that list of yours.

Buffer Overflow. Very often, this can be detected with a static checker. You can therefore write a compiler that detects and rejects code with buffer overflows.
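
For the simplest cases this needs nothing exotic; a hypothetical example (the exact diagnostic wording varies by compiler version):

/* bad.c - an out-of-bounds write the compiler can see for itself */
int main(void)
{
    int a[4] = {0, 1, 2, 3};
    a[4] = 42;                 /* one element past the end */
    return a[0];
}

/* gcc -O2 -Wall -Warray-bounds -Werror -c bad.c                       */
/* fails with something like "array subscript 4 is above array bounds  */
/* of 'int[4]'" - the compiler refuses the code, which is exactly the  */
/* behaviour argued for above.                                         */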

Cross-Site Scripting. This can be blocked. If it's a problem, it's something that has been intentionally chosen to be problematical. I feel no desire to defend idiots.

SQL injection. Code shouldn't contain SQL at all. If your code has SQL, it's bad code. The SQL should always be on the database itself, which should not be directly callable but accessible through a front end. Do it like this and all the SQL injection in the world will do nothing.
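
Where some SQL genuinely has to be issued from the client side, bound parameters keep user input out of the statement text entirely. A minimal SQLite sketch (table and function names invented), as a complement to, not a substitute for, the database-side front end:

/* Never build SQL by pasting user input into the string. */
#include <sqlite3.h>

int lookup_user(sqlite3 *db, const char *username)
{
    sqlite3_stmt *stmt;
    if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?1;",
                           -1, &stmt, NULL) != SQLITE_OK)
        return -1;

    /* The input is bound as data; it can never alter the statement. */
    sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);

    int id = -1;
    if (sqlite3_step(stmt) == SQLITE_ROW)
        id = sqlite3_column_int(stmt, 0);

    sqlite3_finalize(stmt);
    return id;
}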

Use after free. As with buffer overflows, you can simply have compilers reject code with such bugs. You can also use languages like Occam, where there's no dynamic memory.

OS command injection. If the program is executing as per the doctrine of least privilege, any injected command can't do much. And you should always run with exactly the privileges you need, not a single privilege more. But your program shouldn't call other programs, at least not directly, and should certainly never build such a call out of user-supplied data. Again, compilers can block such things.

In both injection cases, the attack also requires that inputs aren't validated. Inputs should ALWAYS be validated.
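
Putting those two paragraphs together in code (the helper program and the allow-list are invented): validate the input against an allow-list, then pass it as an argument vector so no shell ever gets to interpret it.

/* Run a helper on a user-supplied filename without invoking a shell. */
#include <ctype.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int filename_ok(const char *s)          /* crude allow-list */
{
    if (*s == '\0')
        return 0;
    for (; *s; s++)
        if (!isalnum((unsigned char)*s) && *s != '.' && *s != '_')
            return 0;
    return 1;
}

int convert(const char *filename)
{
    if (!filename_ok(filename))
        return -1;                 /* reject outright, don't "sanitise" */

    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        /* Arguments go as a vector: no shell, no string concatenation, */
        /* so metacharacters in the input are never interpreted.        */
        execlp("converter", "converter", filename, (char *)NULL);
        _exit(127);                /* exec failed */
    }

    int status;
    waitpid(pid, &status, 0);
    return status;
}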

So, frankly, that entire list is self-inflicted damage. You could make all five categories of defect a criminal offence, and it wouldn't stop software being written.

We can actually go a bit further. Nothing stops a C or C++ compiler from supporting the same compile-time precondition and postcondition static tests supported in SPARK (a variant of Ada).
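
A rough sketch of the idea in C (the macro names are mine, and unlike SPARK most of the checking here happens at run time rather than as compile-time proof):

#include <assert.h>

#define REQUIRE(cond) assert(cond)   /* precondition  */
#define ENSURE(cond)  assert(cond)   /* postcondition */

/* gcc/clang can flag null arguments at compile time (-Wnonnull); the */
/* REQUIRE/ENSURE macros check the rest of the contract when it runs. */
__attribute__((nonnull))
void clamp_all(int *values, int count, int lo, int hi)
{
    REQUIRE(count >= 0);
    REQUIRE(lo <= hi);

    for (int i = 0; i < count; i++) {
        if (values[i] < lo) values[i] = lo;
        if (values[i] > hi) values[i] = hi;
    }

    for (int i = 0; i < count; i++)
        ENSURE(values[i] >= lo && values[i] <= hi);
}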

Can we go further still? Yes. The existence of malloc alternatives like Hoard means that programs with bounded memory requirements can be given bounded memory pools that are guaranteed available. Such programs will always have the memory they need to run, regardless of what else is running.
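
The principle is easy to sketch (a toy bump allocator, nothing to do with Hoard's internals): reserve the worst-case pool up front, and the program can never be starved by whatever else the machine happens to be running.

/* A fixed pool reserved at start-up: bounded, guaranteed available. */
#include <stddef.h>

#define POOL_SIZE (1u << 20)                 /* 1 MiB, known worst case */

static unsigned char pool[POOL_SIZE];
static size_t used;

void *pool_alloc(size_t n)
{
    n = (n + 15) & ~(size_t)15;              /* keep 16-byte alignment */
    if (n > POOL_SIZE - used)
        return NULL;                         /* the pool never grows   */
    void *p = pool + used;
    used += n;
    return p;
}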

There's also test-driven development, which can be pushed in preference to rapid turnaround.

I would not regard myself as a language purist, as you can probably tell. If you can't, then it might help if I said I knew 20+ languages.

To me, a language is syntactic sugar used to describe a problem to a computer. If you can't describe the problem well, then the sugar needs to be changed so you can.

jcday

I'm not sure that's valid

Space probes are unreliable in part because radiation causes system damage. Radiation-hardened processors and radiation-hardened memory still suffer from radiation-caused glitches, and no program can fix that.

But there's also a hardware issue. Rockets impose enormous stresses on the probes they carry, from the acceleration and the vibration. Micrometeorites pepper the hardware constantly, causing structural damage. And the radiation will damage the materials still further.

And then there's the descent onto Mars. Very thin atmosphere, but significant gravity and very uneven ground. Not a good combination.

Probes being lost is almost never due to buggy software.

Microsoft's code name for 64-bit Windows was also a dig at rival Sun

jcday

Sun's death

Sun was killed by two things, neither of them tech giants.

Linux was replacing Unix on servers, and Sun had become fixated on Network Computing - an expensive distraction.

Oracle bought Sun, but Sun was dying at that point.

Sun's Solaris on the Intel x86/x64 architectures was slower than either FreeBSD or Linux, and the Intel/AMD line of CPUs was beating UltraSPARC.

This forced Sun's hand. It open sourced both Solaris and the UltraSPARC T2 processor. (Yes, you could actually download the hardware description for a CPU.)

But it was too little, too late. OpenSolaris lives on in the form of OpenIndiana (open source) and Oracle Solaris (where they re-closed the license and sacked virtually all the devs). There's no trace of the T2 amongst the open source hardware crowd.

Ransomware-hit British Library: Too open for business, or not open enough?

jcday

IT security

Money alone isn't sufficient. There are plenty of very rich corporations out there that get hacked regularly. The reason? IT is seen as a cost with no return, IT security doubly so. The cybersecurity incidents are just rare enough for managers to see compromises as merely the cost of doing business. Managers are also very rarely held accountable for successful attacks.

So, attitude is a big problem, as is accountability. Without these two, nothing is going to change, no matter how much money is invested.

This is not going to be easy to fix. In the case of public sector services, such as the British Library, it means massive investment is needed now (just as we're entering a new round of austerity), but it also means new laws governing unsecured sites (whether attacked or not) and penalties of sufficient magnitude to get managers to take things seriously (just as we enter the countdown to the next election).

Because of circumstances, what we'll actually see is reduced funding, worsening security, and an attempt to disavow any responsibility at all. You win elections through tax breaks, not better library services.

And whoever ends up winning the next election is going to have higher priorities than fixing public service websites, so don't expect drastic changes.

If the attack that practically shut down the NHS didn't wake us up, attacks on the British Library are unlikely to do more.

This problem won't be fixed any time soon, and unless the public wake up to the potential consequences, it won't get fixed at all.

Hardware hacker: Walling off China from RISC-V ain't such a great idea, Mr President

jcday

Re: have we learned nothing from wipo ?

Internet history is littered with the remains of industries that opted for security through obscurity. It's a terrible strategy that has been largely abandoned in the security field and even Microsoft is starting to learn.

Computer networking, as a whole, has seen the utter destruction of proprietary protocols, even when they've been superior. (We use, what, three of CCITT's X protocols today, compared to how many of the IETF's?)

Complexity is the enemy of closed-source, which is why Intel CPUs have had issues throughout much of their history. The earliest well-known one was the FPU bug in the Pentium, but there were defects before then, and there are defects in modern designs that expose secrets, including cryptographic keys, or break memory protections.

One reason the military and space industries don't use the latest and greatest is to ensure the issues are fixed before strapping a chip into a billion dollars' worth of hardware or using it in key servers on the ground.

If the US ends up forking RISC-V in order to keep their tech secret, the US fork will progress slower and contain more defects. That's just the nature of complex systems in an overly closed, inhibited ecosystem. It's inevitable, just as Microsoft simply can't match the manpower Linux can draw on.

Since the US can't contribute back, the only way to avoid falling behind is to diverge as little as possible. The same applies to China.

With the smaller market and the lack of value-add, I see US companies struggling outside the USG. And why would the USG be buying RISC-V chips from multiple vendors in quantities sufficient to make it profitable for all of them?

And, in turn, that's going to inhibit the development of any fork further.

We've seen this sort of political interference before. It's directly responsible for both Shuttle losses, it's the reason Beowulf clustering is no longer really a thing, and it's why Boeing's blended wing aircraft was cancelled (and, thus, indirectly why the 737 MAX 8 disasters happened instead).

Politicians generally aren't competent to understand technological risks or consequences, and they frequently make a mess of things.

jcday

Re: Disagree

The first problem is that RISC-V is now operating from a neutral third country. ITAR and similar laws can restrict what crosses American borders but can't regulate what crosses Swiss borders.

In consequence, restrictions on US companies will force those companies to fork RISC-V, as they won't be able to export their changes, or to not use RISC-V at all. There's no other way around it. Since the US market is small, globally speaking, non-US developers are unlikely to make use of USian-only modifications. No sense spending more to sell to less.

The second problem is that even the parts of the world authorised to buy US RISC-V chips won't, because the export regulations will create extra bureaucracy, extra costs, and extra restrictions. Companies will buy from suppliers who don't add all the extra burdens.

That means the US companies will have fewer customers, so prices will need to be higher, which will limit uptake in the US as well. It could well be that the only customers will be the US government, as the USG is restricted on its use of foreign companies.

The third problem is RISC-V is open source, which means China will already have the code and will be able to clean-room implement anything the US adds that looks interesting.

Indeed, so will the rest of the world, and US regulations won't cover features cloned by European companies.

The fourth problem is that militaries don't generally use the latest and greatest. They prefer tried and tested technology, because there's nothing worse than a buggy CPU in a missile or an aircraft. What's more, milspec-ing a CPU is hard. Protection against heat extremes, radiation, and shock requires a lot of R&D.

Typically, the CPUs intended for such use will be old designs where the hardening has been accomplished and all the defects are ironed out. Hot patching a CPU with microcode in a fighter at 50,000 feet isn't really desirable.

And then the system has to be built around it, tested, verified, and finally actually turned into something that can be mass-produced.

This means that China's military is very unlikely to be using any US changes for at least a decade, maybe two. It's a very significant lead time.

The extra time required to either steal someone else's hardened design or reverse-engineer it independently is negligible by comparison, so you have to assume China will do one or the other.

Finally, this reminds me a lot of the debate around strong cryptography. Keeping the algorithms secret didn't help in any way. Rather, it led to inferior communication and thus inferior designs with easy-to-exploit flaws. Security through obscurity was a disaster. The strongest algorithms are widely known and widely studied. Yes, that means hostile nations can use them too, but it's far more important that hostile nations be kept out of what friendly nations are doing.

This isn't a joke. If the US relies on security through obscurity for their RISC-V changes, it pretty much guarantees that there will be defects. Intel is no novice, but there hasn't been a generation of CPU since the Pentium (which had a serious FPU defect) that hasn't had serious bugs.

Security through obscurity is a very dangerous strategy. As I said earlier, you can't patch a CPU with new microcode at 50,000 feet.

No, I see absolutely no advantages to secrecy in this. Especially for the military.