* Posts by Duncan10101

33 publicly visible posts • joined 23 Nov 2020

Netherlands digital minister smacks down Big Tech over AI regs


No it isn't

It's nothing like cars.

With cars, the drivers need to be trained, the vehicle needs to be maintained, there must be insurance, the vehicle must have a load of safety devices, and taxes have to be paid for the upkeep of all the roadside safety devices, such as traffic lights.

If you really want to make parallels to the automotive industry (which isn't really a good analogy anyway) you'd have to say something more like this:

It's like building a fleet of vehicles, each of which has zero safety features (like, er, brakes), no insurance, no training for any drivers, not paying for the upkeep of the roads, and then handing them out like sweeties to every person that wants one, then running them all on public roads.

I did say it wasn't a good analogy, because if you did that, a load of people would be dead immediately. But the scope for social harm is huge, and saying "Hey people die in cars so AI must be allowed too" is downright disingenuous.

Red Hat promises AI trained on 'curated' and 'domain-specific' data


I've really got my doubts

So we have this situation where the LLMs start spouting lies. Now, why is that?

Everybody on the LLM bandwagon will cry "It's the training data set". But is that really true? I seriously do not think so.

These models are just statistical guessing machines. Whatever words are most likely to come-up next, it outputs. It has no concept of truth, of facts, nor the meaning of its own output. It has no concept of meaning. It has no concepts at all.

So imagine you give it a clean, truthful dataset. Just because the individual inputs may have been truthful, it doesn't mean that this property will be preserved after its "Great Statistical Remix." Until somebody can show me an algorithm that has the proven property of "Truth Preservation" then this whole idea seems deeply flawed, and I suspect the pursuit of a "Curated dataset" will be nothing but a wild-goose chase.


In images, there are often features that we humans think of as "Fingers." To the AI, this is just a statistical pattern, and in a photograph containing a hand, the feature that we might call a "Finger" is statistically very likely to sit right next to another "Finger." Unless you have some sort of understanding of what a finger is, and how many a typical human might have, the simple probability says a finger is very often followed by another finger. So what do our favourite image-generative AIs produce? That's right, pictures of people with the wrong number of fingers.

Go on: try it. Ask one for an image of "A man covering his face with his hands." I bet you get six or seven fingers per hand.

So ... what's my point about the fingers? Let me ask this: Exactly how many photos in the image-generator-AI's training set do you think depicted people with seven fingers? I'm betting it's a very, very low number.

We all understand the truth of "Garbage-In, Garbage-Out" ... but it does NOT imply "Truth-In Truth-Out" at all. Not one little bit.

Professor freezes student grades after ChatGPT claimed AI wrote their papers


Re: re. Why not go back to oral exams for the Finals?

Here's a suggestion: play language models (or students??) at their own game.

Get a statistical analysis of the student's previous coursework. Grammar, spelling, sentence structure, word usage, you can probably get a metric-of-some-sort for style. Then compare that with the incoming-paper-to-be-marked.

To get a ChatGPT assignment through that would mean they'd have to have cheated on everything they ever submitted. And I kinda think you could spot that.

Gartner: Stop worrying and love the cloud, with all its outages and lock-in


Re: Distribute your servers

Seriously? I think it's quite obvious that when I wrote "Failure" I was referring to the overall system, not a single, redundant component. So to be clear: I'm advocating redundancy across multiple DCs ... and that is exactly what the cloud providers suggest. Example: search for AWS failover regions.


I'm probably my own worst enemy but ...

I kinda agree with him. I know, I'm about to get a deluge of down-votes, but the thing is, being from an annoying consultancy that makes bad analogies doesn't actually make him wrong. If you look at cloud "Outages" over the years, most of the down-time is caused by customers going against the cloud-vendor's advice and putting all their eggs in one basket/DC. His advice that you should use the existing technology to mitigate the risk of downtime is actually good, and more people should take heed. Distribute your servers, and the chances of failure are amazingly low.

So the real problems are lock-in and expense. I also agree with him that lock-in isn't such a biggie, because you'll get some form of that no matter what you do. So ... expense. That's a personal choice, I won't judge.
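The arithmetic behind "distribute your servers" is worth spelling out: if failures across DCs are independent, their outage probabilities multiply. A back-of-envelope sketch (the per-DC downtime figure is an assumed number, purely for illustration):

```python
# Chance that ALL replicas are down at once, assuming independent failures.
def simultaneous_outage(p_single, replicas):
    return p_single ** replicas

p = 0.001  # assume each DC is down 0.1% of the time (~9 hours/year)
one_dc = simultaneous_outage(p, 1)     # down 0.1% of the time
three_dcs = simultaneous_outage(p, 3)  # one part in a billion (~0.03 s/year)
```

The big caveat is independence: correlated failures (a bad config push, a control-plane outage hitting every region) break the multiplication, which is exactly why the headline-grabbing cloud outages happen.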

Datacenter fire suppression system wasn't tested for years, then BOOM


"Risk to life" ... I think you're only looking at one side of the equation. There are risks to life caused by installing too many sensors: an obvious one being that "More sensors" = "More false alarms". This makes people insensitive to them, and they might well ignore a real fire. In fact, that's a fundamental theme of the article (and the many many stories in the comments too).

So, yeah, I believe the sensors shouldn't be disproportionate to the actual risk.

Exactly what level is "Proportionate" is another question. When governments get involved in "Making people safe", I doubt very much if anybody has really done that calculation.

The end of Microsoft-brand peripherals is only Surface deep


I never loved them.

Yes I know, I know ... there are some people who like those curved or split keyboards. And good luck to 'em. But I hate those damned things.

I've always found them very unnatural to use, and I end-up contorting my arms into weird angles to be able to type. But they're not the worst, and that prize goes to:

Space-Saving Keyboards

Yes, MS and many other manufacturers make a line of "Space-Saving" keyboards. Sometimes they're called "Compact" or "Minimal" or whatnot, but the concept is always the same.

If you REALLY struggle for space ... yeah, you can lose the number-pad, but at that stage you need to stop.

But they don't. No, they shuffle the keys. Now, humans learn to type by muscle-memory, so the worst possible thing you can do is SHUFFLE THE FUCKING KEYS.

"Oh let's move the arrows across so they're in with the other keys. That saves some space."

"Oh let's turn that 3x2 block (with home, end, pageup etc.) on its side to make it 2x3. That saves the width of a whole key."

"Hell, while we're at it, let's swap the M with the Q, the P with the A and the Z with the bloody G"



Here's how the data we feed AI determines the results


It doesn't know Star Wars from Wall Street

I think you have totally nailed it there :)

How DARPA wants to rethink the fundamentals of AI to include trust


Interesting lecture

Many moons ago (when I was but a whippersnapper of a student) I was in an AI lecture. We were told that it was very difficult to work on the problem, because we don't have a definition of intelligence.

However, a definition was given.

At the time, we all laughed. But thinking about it now (in the world of test-driven development, and behaviour based testing) and considering we don't seem to have ANY standard of testing for these things whatsoever, perhaps it wasn't so crazy after all. It was this:

"Intelligence is the ability to pass intelligence tests."

Huawei replaces ERP with homebrew effort, claims it’s perfect and shows company will thrive despite sanctions


The real accomplishment here

... is escaping from Oracle. I say "Well done!". I am ready for the downvotes ...

Scientists speak their brains: Please don’t call us boffins


Re: Hmmm

In the face of the evidence presented, I have to agree with the deboffination idea.

The thing is: English is as English does. So if the majority of people are using "Boffin" in a derogatory or sexist way, then it essentially IS derogatory and sexist, and the OED will have to be updated to accommodate its modern use.

I'm a software developer, not a scientist, but I've seen "Boffin" applied to devs more times than I can count, so let me give a dev's perspective. Our industry is horrendously skewed in favour of males. I'm sure many readers will agree it's pretty depressing. But the workforce is shaped that way, because people are employed from a talent pool that is shaped that way. The talent pool is mainly from universities or previous jobs, both of which suffer from the same problem in the relevant subjects.

(And, yes, the skewed workplace produces a work culture that's downright misogynist)

So real change will have to come-up through culture change in homes, schools, colleges and universities before the workplace starts to look sensible ... it's not a problem you can "Fix" with quotas. Losing the word Boffin will probably have a positive effect on all stages of the system, so I say "Go for it." It definitely won't solve the problem but (given the perception of the word) it's a small step in the right direction.

Workers don't want these humanoid robots telling them to be happy


Re: Apologies from a railway computer

You are so right. When I traveled by train, I often heard "Please accept my apologies for [insert predictable failure]". So YOUR apologies? YOU are a freakin' robot. It would have helped (very very slightly) if it had said "Please accept the apologies of [insert name of useless train company]". Damn them all.

The UK's bad encryption law can't withstand global contempt


Re: Words......and then there's meaning.......

The words themselves are actually quite important. I'm not saying that we shouldn't discuss the issues and instead focus on the words (I don't know how you managed to read that into my message). I also understand if people don't like my suggestion of "Rape." What I'm saying is that "Snoop" gives the whole hideous exercise a veneer of "It doesn't really matter." But it damned-well does matter, and I think it needs more powerful words to describe it ... whatever they might be. There are so many examples of important issues being brought into focus by a well-chosen word or phrase (or hashtag). There are also many examples of individuals and communities putting great effort into choosing words that they feel adequately describe themselves. And I support them in that. I think these words ARE important, and an anti-surveillance movement is unlikely to be sparked by the title "Snoop."


I utterly disagree that "Snoop" is a word that has the true connotations of what is really going on. That is really the crux of what I'm saying. However, if you don't like that it is conflated with "Rape" then I see your point. Please feel free to suggest a different word that means "Takes by force what is most private and intimate to us," because "Snoop" does not cut it.


Can we PLEASE stop using the word "Snoop"?? It's the language of minimization. It makes it sound like a cute-and-harmless dog from Peanuts. Can we use something more appropriate for what's actually happening? May I suggest we replace it with the term "Data-Rape"?

Example uses:

It's a snooper's charter -> It's a data-rapist's charter

They are snooping on you -> They are data-raping you

I don't like the snoops -> I don't like the data-rapists

The govt will snoop your internet traffic -> The govt will data-rape your internet traffic


Silicon Valley Bank's UK arm bought by HSBC for 1 British pound in rescue deal


"No man is an island" ... what about the Isle of Man?

OpenAI CEO heralds AGI no one in their right mind wants


Re: Synthetic test cases ...

You didn't finish the joke:

The first real customer walks into the bar and asks where the toilet is.

The bar bursts into flames and everyone inside dies.


Scientists conclude cats only have three personalities after YouTube clip binge


No grey area

My cat never falls into the middle category. She's very black and white, like that.

New York gets right-to-repair law – after some industry-friendly repairs to the rules



Don't agree. While it's entirely possible that new kit COULD be massively more reliable, it's just not. Bought a TV in the last few years? I know I did. I bet it died on ya, didn't it? I bet it took an age to get it fixed. And all that time, the warranty clock was ticking-down. TVs, computers, phones, white-goods, music equipment ... basically anything that requires power to run ... the quality has fallen off a cliff in recent times, IMHO. I don't think it's even built to survive the warranty period any more, I reckon they rely on people saying "Huh, it died, I'll buy a new one anyway". It's just maddening.

Tesla driver blames full-self-driving software for eight-car Thanksgiving Day pile up



I'm not so sure. Not because autonomous vehicles are especially good, but because human drivers set a very low standard for safety.

If you look at your car, it's smothered with safety devices, each of which is there because we learned (the hard way) that people really aren't that good at driving. Seat-belts, air-bags, anti-lock brakes, traction control, crumple zones, bumpers, alerting devices that stop you from wandering across lanes when you fall asleep at the wheel, the list goes on. That's just on-car ... what about all the surrounding stuff, like lane markings, traffic-control-lights, "Pedestrian Crossing" signs, flashing beacons, and barriers that stop you driving into the path of a train (not that drivers don't go around those!)

People crash their cars all day, every day. The difference is that you don't get news articles like "Man blamed for crashing his own car". But with an autonomous vehicle crash, there are stories aplenty ... even if (as in this case) the full facts aren't known yet. Really this is a story about a driver blaming his autonomous car: "Er ... it wasn't me, the computer did it". Indeed, if you watch the accompanying video, it seems likely that the driver was doing something unsafe that the software wouldn't be expected to handle. I'd rather see the outcome of the investigation, but by that time the incident will be forgotten and all Joe Public will remember is "Nasty unsafe Tesla car crashes itself and injures innocent people."

One positive of all this human-inability-to-drive is that the safety devices work very well for computer-caused crashes too. An 8-car pile-up in a tunnel only resulted in one person going to hospital, so I'd call that a win for standard safety devices.

Ultimately, we need more data. And I'd like to see a follow-up story once the facts are known.

AI won't take coders' jobs. Humans still rule for now


Move along, nothing to see

Software developers have nothing to fear from AI coding. All the AI will do is automate-away some of the heavy lifting, allowing us to focus on things like actual problem solving. Remember: software development is one of (if not the) most heavily automated jobs in the world. This hasn't come-about because somebody is trying to replace us or make us redundant ... we do it ourselves, because we get bored with writing the same code again-and-again, so we find ways to automate it away.

All aspects of our jobs are already assisted or automated ... think about the technology we use: compilers, linkers, package managers, collaborative coding tools, whole communities with "Known good solutions to problems" ready-to-paste, IDEs that highlight your errors, debuggers that examine your running code and reveal its inner workings, code analyzers, automated merging of multiply-edited files. The list goes on; it is very big. These things are so common we don't even think about them, but each one does in a split-second what not-so-many-years-ago would have taken hours.

AI coding? Bring it! We'll chuck it in the toolbox with everything else.

GitHub saved plaintext passwords of npm users in log files, post mortem reveals



This is basic stuff. How could they get it so wrong?

How did it ever pass a security review?

Or even a code review?

It makes me sad to see stories like this.

Corporate investments are a massive hidden source of carbon emissions


Re: Double counting?

In some ways I agree about double-counting. However, I don't think the extra carbon emissions are being attributed to Microsoft directly. It's more a case of pointing-out that if they are serious about the environment, one of the most powerful controls they have available is to move their investments ... which they don't seem to have done. The same applies to all of us who have pension funds. I've moved my pension into sustainable funds (and to be honest, their performance is pretty good). I feel my position would be untenable if I invested in oil and then said "Don't blame me for the destruction of the environment ... just blame the company that I invested my money in, hoping for a quick buck."

Datacenters in Ireland draw more power than all rural homes put together


Lies, damned lies, and statistics


There are 100 ways to spin these data. I really don't see any meaningful underlying fact. And I'm writing as a guy who gets his hot water from solar and most of his electricity from solar (yes, I'm somebody who actually cares enough to invest in a home system that will NEVER pay for itself, in the hope of reducing my carbon footprint). Puh-lease, let's see a meaningful analysis. Anyone care for a beyond-meat burger?


India to adopt digital rupee and slap a 30 per cent tax on cryptocurrency income


Re: "cryptocurrency"

Ultimately, its price HAS to be the cost of the electricity used to mine it, plus the outlay on the compute hardware.
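That "price floor at cost of production" argument can be made concrete with a back-of-envelope break-even calculation. Every figure below is an invented illustration, not real market data:

```python
# Cost-of-production per coin: electricity plus amortised hardware outlay.
def break_even_price(power_kw, hours, price_per_kwh,
                     hardware_cost, hardware_life_hours, coins_mined):
    energy_cost = power_kw * hours * price_per_kwh
    hardware_share = hardware_cost * (hours / hardware_life_hours)
    return (energy_cost + hardware_share) / coins_mined

# A hypothetical rig: 3 kW, run for a month, yielding a fraction of a coin.
cost_per_coin = break_even_price(
    power_kw=3.0, hours=720, price_per_kwh=0.10,
    hardware_cost=6000.0, hardware_life_hours=7200, coins_mined=0.01)
```

If the market price sits below that number, mining at those assumptions loses money, which is the mechanism behind the claim.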

Ably blog claims company doesn't need Kubernetes to scale, surge in traffic takes down entire website


Keep walking

So ... you interview a candidate who seems "Interesting." Then they discover you don't use the shiny-shiny-du-jour. They take this as a signal that something is wrong with your company and choose not to continue. I don't want to rant about the obvious logical fallacy at the heart of this candidate's thinking ... suffice it to say you have had a lucky escape.

Pre-orders open for the Mini PET 40/80, the closest thing to Commodore's classic around


Re: The PETs inspired me.

The PET was also the first computer I saw and touched, and that experience pretty-much sent me down a road that I'm still traveling.

I was a school-kid of about 8 and one of the children's fathers was nice enough to bring it to the school. He demonstrated a program ...

Of course, it said "Hello whats your name?" and when I typed my name it said "Hello Duncan Im pleased to meet you."

Every kid went "Whhooooooaaa!!" - we all thought it was both intelligent and polite.

The moment that set the course of my life came when the guy said "Who wants to see how it works?" so we all yelled "Yeah" and he typed "LIST" and we got something like this:

10 INPUT "Hello whats your name"; NAME$

20 PRINT "Hello "+NAME$+" Im pleased to meet you"

30 END

Even to a child it was obvious what had happened. And in those days (early 1980s) UK unemployment was A THING. I said to myself "I can do this ... and I will never want for work." And I was right.

I'll never forget that day, but would I buy a remade PET for the sake of nostalgia? Nah.

What's that, Lassie? Dogs show signs of self-awareness according to peer-reviewed academic study?


I'm trying to see the IT angle .........

Facebook and Apple are toying with us, and it's scarcely believable


Fo shiz! How can anyone claim to be a customer if they don't pay for anything. A (very) quick goggle for the word reveals: "A person who buys goods or services from a shop or business." ... so, duh. No, I don't use social media. Oooohhh .... unless El Reg counts. Does it????


Re: Customer?

I would do it, if there was a service. I can't be bothered to start my own though.

Borkem ipsum: Supermarket gifts Thailand a tech fail that will echo down the millennia – and probably choke a turtle


Re: Thanks!

Cat Ipsum is awesome. And I'm not the kind of guy who says "Awesome."

The GIMP turns 25 and promises to carry on being the FOSS not-Photoshop


Re: GIMP.. the poster child for terrible UI..

Oh you are so so so right. The thing is so fucking incomprehensible that it's a joke. Why is Amazon so successful? Because they put the customer first. Why is GIMP so unbelievably fucking horrible? Because they put the developer first. "Go figure," as the septics have it.




I tried GIMP. I failed.

I installed Paint.Net. I succeeded.

GIMP (and I'm writing this as somebody who came to it with an open mind) is one of the most horrible pieces of crap I've ever had the displeasure of using. Every feature feels like it was developed by a different person (or team) from every other part of it. And that's probably because it was. Every feature needs to be learnt individually because there is zero consistency across the user experience. Every new modification turns into a world of pain for the customer. This is the saddest, most pathetic piece of user-hating-and-victim-blaming, developer-centric crap I have ever encountered in my 25+ years of commercial software development on a wide variety of platforms. GIMP go home. Or learn to grow up. Better still, just fuck the hell off and die.

I struggle to see how anybody beyond idealists can perceive this as a viable product in any way, shape or form.

I am an open-minded individual, and am not entrenched in my opinions. Please feel free to change my mind.