"There are definitely alternate universe 'What If's to be had, not only what if IBM had gone ... with a 68000 processor..."
There was the short lived IBM System 9000.
One of the sources of annoyance with the way segment registers worked was that when you were compiling software you had to select a memory model - if I remember correctly you had a choice of 5:
1. everything (code and data) shared the same 64K segment;
2. code and data could each be up to 64K, but in different segments;
3. code was up to 64K, but data could be larger;
4. code was as large as you wanted, but data was up to 64K;
5. code and data could each be larger than 64K.
The problem was that if you opted for one of the smaller memory models in version 1 of your software, and your program or data requirements then grew so that you wanted to create a more capable version 2, either or both of your 16-bit data or function pointers could suddenly change to 32 bits. That could royally screw things up, particularly if those pointers were part of data structures, because the sizes of those structures changed. If serialising data to save it consisted of just dumping a block of memory to disc (and remember storage was limited and speeds weren't that great, so it was common to copy it out the fastest way possible), you could easily create incompatibilities between data from old and new versions.
Incidentally, long pointers were often stored as segment:offset. Since the segment register was 16 bits, you had the added inefficiency of needing 32 bits to represent a 20-bit physical address, and as someone pointed out elsewhere, a whole load of different segment:offset combinations could represent the same physical address, making long pointer comparisons a pain.
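The aliasing is easy to demonstrate with a few lines of arithmetic. This is just a sketch in Python (obviously nothing like real-mode code): two different-looking segment:offset pairs landing on the same physical byte, and one way of normalising them so comparisons behave:

```python
# Toy illustration of 8086 real-mode addressing: a segment:offset pair
# maps to a 20-bit physical address as segment * 16 + offset.

def physical(segment: int, offset: int) -> int:
    """Physical address = segment * 16 + offset, truncated to 20 bits."""
    return ((segment << 4) + offset) & 0xFFFFF

def normalised(segment: int, offset: int) -> tuple[int, int]:
    """A canonical segment:offset form (offset < 16), so two pointers
    to the same byte compare equal."""
    phys = physical(segment, offset)
    return phys >> 4, phys & 0xF

# Two different-looking far pointers...
a = (0x1234, 0x0005)
b = (0x1000, 0x2345)

# ...that address the same physical byte:
assert physical(*a) == physical(*b) == 0x12345
# A naive comparison of the raw pairs says they differ:
assert a != b
# Normalising first gives a fair comparison:
assert normalised(*a) == normalised(*b)
```

With a 16-bit segment and 16-bit offset there are 4096 distinct ways to name most physical addresses, which is exactly why naive far-pointer comparison was unreliable.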
"When I eventually read the text I was struck how derivative the material was"
When I first came across Harry Potter, another novel, written some 29 years previously, immediately came to mind. This is Wikipedia's introduction to A Wizard of Earthsea by Ursula K. Le Guin.
"It is regarded as a classic of children's literature and of fantasy, within which it is widely influential. The story is set in the fictional archipelago of Earthsea and centers on a young mage named Ged, born in a village on the island of Gont. He displays great power while still a boy and joins a school of wizardry, where his prickly nature drives him into conflict with a fellow student. During a magical duel, Ged's spell goes awry and releases a shadow creature that attacks him. The novel follows Ged's journey as he seeks to be free of the creature."
Incidentally, the Earthsea books are well worth reading.
True, the schematics created by LLMs are pretty far off for many things.
However, even without asking for the schematics, and just asking it to say in text form what should be connected to what, it still manages to mess things up. The trouble is, it's partially correct, so the wiring diagram of a 555 must be in its network somewhere, but I'm guessing it's polluted with wiring diagrams from other stuff, or alternative 555 configurations, so it gets to a certain point and then the probabilities of the pollutants take over. Even though it gives a confident step-by-step guide for wiring things up, you get halfway through and think 'hey, that pin shouldn't go to that one', and notice that other things don't make sense either.
It also seems that when asked for the circuit, it can reproduce the standard databook formulae for the frequency and mark:space ratio from the passive components, but then fails to use them correctly to compute suitable values. Again I'm guessing at the cause: because the timing characteristics involve multiplying resistor and capacitor values together, there are multiple ways of getting to the frequency you want (e.g. scale up the resistor values by the same amount you scale down the capacitor value). The training data presumably contains multiple versions of the circuit where, even for the same frequency, the original authors chose different component values. For a system that works on probability, those different values may all be similarly likely pathways, and it ends up with a mish-mash of components that don't work together.
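For reference, the databook astable formulae are simple enough to check by hand. Here's a quick sketch (the standard NE555 astable equations; the component values are just examples I picked) showing two different RC combinations that give exactly the same frequency, which is precisely the degree of freedom that lets different authors pick different parts:

```python
# Standard NE555 astable formulae (straight from the databook):
#   frequency  = 1.44 / ((Ra + 2*Rb) * C)
#   duty cycle = (Ra + Rb) / (Ra + 2*Rb)

def astable_frequency(ra: float, rb: float, c: float) -> float:
    """Output frequency in Hz, resistances in ohms, capacitance in farads."""
    return 1.44 / ((ra + 2 * rb) * c)

def duty_cycle(ra: float, rb: float) -> float:
    """Fraction of each period the output spends high."""
    return (ra + rb) / (ra + 2 * rb)

# Scale the resistors up by 10 and the capacitor down by 10:
# the frequency is identical, and so is the duty cycle.
f1 = astable_frequency(1_000, 10_000, 100e-9)   # 1k, 10k, 100nF
f2 = astable_frequency(10_000, 100_000, 10e-9)  # 10k, 100k, 10nF

assert abs(f1 - f2) < 1e-6  # ~685.7 Hz either way
```

So a model stitching together component values from two perfectly valid articles about "the same" circuit can easily end up with a set that satisfies neither.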
"you would continue to think about efficiency and better coding practices throughout life."
I used to be in the situation where I had to think about efficiency. My first computer had 5K of program space, and I've programmed EPROMs and PICs down to the last byte.
Efficiency doesn't always yield better coding practices - when you've got a hard limit at the end of your ROM space, sometimes you have to take short cuts to make things fit, and from a coding-practice perspective they didn't always look nice!
To cover a wide variety of coding problems you need a lot of sample code to build your patterns from.
You could cover at least some of the 'power of 10' rules by filtering the learning examples to be just those that follow the rules (e.g. ensuring no training data includes gotos, no compiler directives other than #include and #define, very restricted use of pointers, etc.) so it doesn't 'know' code outside of the power of 10 coding subset. However, I doubt there is enough power of 10 code in the wild to cover the spectrum of questions likely to be asked of it.
This would require a lot of manual work to 1. curate any 'power of 10' code that does exist, and 2. rewrite and validate a significant amount of non-compliant examples that do exist to provide a wide enough training base of code.
While that might go some way to getting it to produce power of 10 compliant code, by virtue of it not having patterns for certain non-compliant coding structures, I doubt it would be enough to ensure an LLM produced code that completely followed the rules, and it would be a mammoth human undertaking to create the training data, rather than lifting non-compliant code from the many sources where that already exists.
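As a toy illustration of the filtering idea (my own sketch, not anything that actually exists: it checks a tiny, purely textual subset of the rules, whereas a real filter would need a proper C parser to do the job honestly):

```python
import re

# Crude textual screen for a couple of 'power of 10'-style rules:
# ban goto/setjmp/longjmp, and restrict the preprocessor to plain
# #include and #define directives.

BANNED_TOKENS = re.compile(r"\b(goto|setjmp|longjmp)\b")
ALLOWED_DIRECTIVES = re.compile(r"^\s*#\s*(include|define)\b")

def looks_compliant(c_source: str) -> bool:
    """Return False if the snippet obviously violates the checked rules."""
    for line in c_source.splitlines():
        if BANNED_TOKENS.search(line):
            return False
        if line.lstrip().startswith("#") and not ALLOWED_DIRECTIVES.match(line):
            return False
    return True

assert looks_compliant("#include <stdio.h>\nint x = 1;\n")
assert not looks_compliant("retry: goto retry;\n")
assert not looks_compliant("#ifdef DEBUG\n#endif\n")
```

Even this trivial screen shows the scale of the problem: it would throw away the overwhelming majority of real-world C, which is exactly the training-base shortage described above.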
From these very pages from a few years ago…
https://www.theregister.com/2018/02/02/adult_fun_toy_security_fail/
"Classic Example: How to make a cup of tea.
And then think about instant coffee"
Both examples that can't even rely on common knowledge but also have to be cross-referenced to the individual drinker's personal preferences and the environment.
Nowhere in the common knowledge of tea or coffee making does it say that one of my colleagues is lactose intolerant, so I have to use pea milk in that tea instead of cow milk, or my wife only ever wants her coffee cup half filled, and kills it with too many sweeteners, or that I might only have decaf in the evening if I hope to sleep, and that while I have milk in a lot of tea, I prefer Earl Grey with lemon.
And am I at work throwing a tea-bag in a mug, or going all posh and using tea leaves in a pot, or having chimarrão with my wife, who is Brazilian, which is a whole different way of making and drinking tea.
"Have we gone so long without a real war threat for people to forget war is about killing people?"
Since the other side has the same aim, surely war is also about trying not to be killed yourself, or not getting your unit killed. It therefore makes sense to ensure your side is well trained in not providing the enemy with a digital signature to target you.
Remember that war is also about propaganda, and an enemy may well intercept and broadcast unguarded communications to bolster their own propaganda, particularly if you're whinging about stuff, which needs training out of professional fighting forces.
Defense Secretary Pete Hegseth directed the department's chief information officer to "relax the mandatory frequency for cybersecurity training,"
Is this the same Pete Hegseth who thought it would be a good idea to post details of airstrikes in Yemen on a Signal group that included a journalist?
"Wheelie-bins blocking them - put them in the front garden!"
Our bin-men don't collect them unless we move them from the inside of our garden wall to the outside onto the pavement. Since they arrive at some random time between 5am and 10am, that means putting them out the night before and often bringing them back in after work the next day.
When I was checking the above quote from that movie (Stealth, that is), I came across this one
"Once you design something to learn, you can't put stipulations on *what* it learns! Learn this, but don't learn that? He could learn from Adolf Hitler, he could learn from Captain Kangaroo! It's all the same to him!"
which, for an otherwise ridiculous movie, seemed like quite a good description of ChatGPT, Grok, etc.
"It's amazing how much "AI research" is slapping a prompt into a chatbot"
Indeed. It's like how, since phones started having cameras built in, everyone thinks they're a photographer.
Now that you can get ChatGPT or Copilot as a button in your browser, everyone suddenly thinks they're an AI expert.