* Posts by LionelB

236 posts • joined 9 Jul 2009


University of Cambridge to decommission its homegrown email service Hermes in favour of Microsoft Exchange Online


So the creeping Microsoftisation of UK universities continues apace. My research group were recently invited to participate in a consultation on the future of my institution's HPC facilities. During the "interview" with us users, the "independent" consultancy "hired" by the university admitted they were paid by MS rather than the university. Strangely, they are pushing for a cloud-based solution... that's "cloud" as in "Azure". Ho hum.

From Accompli to Microsoft to Google: G Suite chief Javier Soltero chases the 'complete collaborative experience'


Re: Collaborative working

How many people does it take to write a document? On my last research paper it was six (on half as many continents, Latex on Overleaf). Some of the software projects I've been involved in, many, many more than that (GitHub, all the continents). Day to day stuff shared on Google Docs, chat on Slack, Zoom, whatevs. Yes, it gets messy sometimes, but it's a helluva improvement on emailing stuff back and forth (been there, done that back in the day).

Strangely, collaboration is a thing.

Email blackmail brouhaha tears UKIP apart as High Court refuses computer seizure attempt


Re: @NeilPost


"Trying to distinguish his humour and trolling isnt easy."

Arr, you'll not be from around these parts then?

Astroboffins baffled after spotting solar system with great gas giant that shouldn't exist


Indeed. Or at the very least, we'd be talking about a star orbiting around a planet. Which definitely sounds odd.

Can you get from 'dog' to 'car' with one pixel? Japanese AI boffins can


Re: The adjusted pixels

An image classifier is really a type of hashing algorithm.

Not that simplistic: google Convolutional Neural Networks.


Re: One-dimensional, hence exceedingly fragile

That fragility sounds to me like a consequence of a crappy training regime. If you want robust behaviour you need to train your networks on noisy data, and even design deliberately confounding/deceptive training data.

A promising and increasingly popular avenue to achieving better robustness is "adversarial networks", where you have one network trying to get better at the task at hand, while another network tries to get better at deceiving the original network.
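The adversarial dynamic can be caricatured without any actual neural networks. The sketch below is entirely hypothetical - two scalar "players" standing in for the networks: a detector sets its decision threshold from the samples it sees, while a generator nudges its output distribution to slip past that threshold. The names and parameters are invented for illustration.

```python
import random

random.seed(42)

REAL_MEAN = 5.0   # "genuine" data the detector should accept
fake_mean = 0.0   # the generator starts out easy to spot
LEARN_RATE = 0.2

def sample(mean, n=200):
    return [random.gauss(mean, 1.0) for _ in range(n)]

for _ in range(100):
    real = sample(REAL_MEAN)
    fake = sample(fake_mean)
    # Detector: place the threshold halfway between the observed means.
    threshold = (sum(real) / len(real) + sum(fake) / len(fake)) / 2
    # Generator: shift its output towards the side the detector accepts.
    fake_mean += LEARN_RATE * (threshold - fake_mean)

# After the arms race, the generator's output closely mimics the real data.
```

Each round the detector's threshold creeps up to keep discriminating, which in turn drags the generator further towards the real distribution - the same co-adaptive pressure that drives real adversarial training.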


These days, anything that uses a neural network is classified as AI, no matter how it is used.

I think we're going to have to learn to live with that. You could argue that the term "AI" has become de facto re-defined (debased?). But since nobody really knew or agreed on what AI ought to mean in the first place*, that's hardly tragic.

*Seems to me that a majority of Reg respondents appear to conflate "real AI" with "human-like intelligence". Okay, that's a thing, but that's not the only game in town, and, with respect to the current state of play, sets the bar impossibly high. I'd be happy, at this stage, to see more research and engineering of "insect-like intelligence", or even "bacteria-like intelligence" - we're not even there yet.


So is the general rule that AI pattern recognition works off perceptual hashing?

There must be more to it - that seems fundamentally not AI but just massive statistical guesswork.

Not sure what you mean by "statistical guesswork", but perhaps a bit closer to "hierarchical perceptual hashing" or something like that. Google "convolutional neural networks" (CNNs). Roughly, they're multi-layer feed-forward networks that apply spatial filters to patches of an image and pass the results on to the next layer, thus building up recognition of patterns at a hierarchy of spatial scales.

They should be converting to vectors, like SVG etc, then looking for patterns.

I doubt a vector format would work terribly well for spatial pattern search, insofar as the spatial relationships between visual elements of the image would be more difficult to extract. CNNs use spatial convolution, as that works rather well for extracting spatial "motifs" from a (raster) image.
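The convolution operation itself is simple enough to sketch in a few lines of dependency-free Python (illustrative only - real CNNs learn their filters rather than hard-coding them, and technically compute cross-correlation). Here a hand-picked vertical-edge filter lights up exactly where a toy image has an edge:

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in CNNs)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Toy 5x5 image: dark left half, bright right half (a vertical edge).
image = [[0, 0, 1, 1, 1] for _ in range(5)]

# A classic vertical-edge filter (Prewitt-style).
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

response = convolve2d(image, kernel)
# The response is large where the filter straddles the edge, and zero
# over the uniform regions - a spatial "motif" detector.
```

A CNN stacks layers of such filters, so that later layers detect patterns built out of the motifs found by earlier ones - the "hierarchy of spatial scales" mentioned above.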


Re: Geoffrey Hinton Is Right. Backpropagation Must Go

Don't entirely disagree, but that article annoyingly conflates backpropagation and supervised learning. Backpropagation is an efficient iterative algorithm used to implement supervised learning in feed-forward networks. It is not synonymous with supervised learning.

The article argues strongly for unsupervised learning - which is fair enough - but to my mind over-eggs the pudding. "Learning" in humans (and indeed other animals) involves both supervised and unsupervised learning. Try sticking your hand in a fire; the supervisor (i.e., "the world") will send you a very powerful error signal.

(Nor is STDP the only game in town for unsupervised learning).

Software update turned my display and mouse upside-down, says user


Re: Every day's a school day

One thing I don't get about this: do these people have five thumbs and one opposable finger?

Calm down, Elon. Deep learning won't make AI generally intelligent


Re: Bishop Bollocks

@Rebel Science

I could go on but then I would barf out my lunch.

I think you just did...


Re: The more I study AI the more ot looks like conciousness is essential to it.

It seems that the best way to know what leg to move forward next uses a simulation of a simple model of the mechanisms involved to help predict what to do next in the current environment.

Having worked a little in robotics, it turns out that that's a really, really bad way to "know what leg to move forward next", and almost certainly not the way you (or any other walking organism) does it. The idea that to interact successfully with the world an organism maintains an explicit "internal model" of its own mechanisms (and of the external environment) is 1980s thinking, which hit a brick wall decades ago - think of those clunky old robots that clomp one foot in front of the other and then fall over, and compare how you actually walk.

In biological systems, interaction of an organism with its environment is far more inscrutable than that (that's why robotics is so hard), involving tightly-coupled feedback loops between highly-integrated neural, sensory and motor systems.


... and since chaotic systems have so far defeated mathematical modelling ...

Errm, no they haven't. Here's one I made earlier:

x → 4x(1–x)

That's the "logistic map". Here's another:

x' = s(y-x)

y' = x(r-z)-y

z' = xy-bz

That's the famous Lorenz system, which has chaotic solutions for some parameters. Chaotic systems are really easy to model. In fact, for continuous systems, as soon as you have enough variables and some nonlinearity you tend to get chaos.
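The distinction worth stressing is that chaotic systems are easy to *model* but hard to *predict*. A five-line Python demonstration with the logistic map above: start two trajectories a billionth apart and watch them part company.

```python
def logistic(x):
    return 4.0 * x * (1.0 - x)      # the logistic map at full chaos (r = 4)

x, y = 0.3, 0.3 + 1e-9              # two almost-identical initial conditions
max_sep = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_sep = max(max_sep, abs(x - y))

# The billionth-of-a-unit difference is amplified (roughly doubling per
# iteration) until the two trajectories bear no resemblance to each other -
# fully deterministic, trivially simple, yet unpredictable in practice.
```

That sensitive dependence on initial conditions is the hallmark of chaos; the model itself remains a one-liner.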


Re: "AI is more artificial idiot than artificial intelligence"

Because in any reasonable definition of AI ...

Well what is a reasonable definition of AI? Genuine question: I get the impression that most commentators here equate "real" AI with "human-like intelligence" - under which definition we are, of course, light-years away. But does the "I" in AI have to be human-like? Or, for that matter, dog-like, or octopus-like or starling-like, or rat-like?

Perhaps we need to broaden our conception of what "intelligence" might mean; my suspicion is that "real" AI may emerge as something rather alien - I don't mean that in the sinister sci-fi sense, but just as something distinctly non-human.

Shock! Hackers for medieval caliphate are terrible coders


Re: People who want to kill other people for stupid sky fairy reasons are not clever

Jihads, Crusades, Intifadas - they're all the same.

Not quite: Intifada was, in its original meaning, a political term with connotations of "rebellion against oppression" (the first Intifada was a socialist protest against the monarchy in Iraq). Of course it is now more strongly associated with the Palestinian struggle against Israeli occupation - which may or may not (depending on who you are talking to, and when) have been hijacked by religious extremists.

Agreed on the others, though.


Re: C'mon, ElReg.

The particular sort of barbarism practised by Desh doesn't respect even those rules even if they were often evident more in the breach than the observance.

This is an entirely deliberate strategy. You need to appreciate their motives: they are a doomsday sect. They believe that the global Caliphate will arise only after an apocalyptic showdown between Islam and the non-believers. Their avowed intention is to evoke the highest levels of disgust and abhorrence in order to hasten that showdown.

AI in Medicine? It's back to the future, Dr Watson


Re: Experience and subtle clues

Why? Because Drs and nurses say, hmm I have seen something like this before, and short circuit the differential diagnosis. Try encoding that in an expert system !

That would be easy to encode in an expert system - if the system designers were able to pin down what "something like this" actually meant. And that's the Achilles' Heel of expert systems: identification (and encoding) of the explosion of edge cases and hard-to-articulate intuitions that constitutes the deep knowledge of an experienced human expert. This is why knowledge-based systems hit a brick wall. We've known about this for a long time.

'Don't Google Google, Googling Google is wrong', says Google


Re: re: Contacting someone implies you were successful;...

My personal favourite IndEng word is "prepone", meaning "to bring forward in time", by analogy with "postpone".

Mine is "doubt" to express a misunderstanding. As an academic I am sometimes contacted by Indian students/researchers expressing a "doubt" about some aspect of my published work. The first few times this happened I thought they were being a bit cheeky, until I twigged that they were just seeking clarification.

Boffins fear we might be running out of ideas


Re: "They're all people who, in past times, would have been doing something more useful."

... Thomas Edison is known for having electrocuted elephants ...

How is that not useful?


Re: Semiconductors are getting hard to fill

Babel fish are worth a punt. If everyone understood each other it would improve not only research but a lot of other aspects of commerce.

Actually, in research and academia language is hardly an issue; English is already de facto the lingua franca of science.

Fruit flies' brains at work: Decision-making? They use their eyes


Re: eyes as brains ...

Not sure about fruit flies, but I have a colleague who studies genetically-modified zebra fish (same technology - calcium imaging). Young zebra fish are almost completely transparent, so you can image their entire brain/neural system in one shot. It's pretty impressive watching screeds of individual neurons (~ 10,000) flickering away in real time.

Turns out that there are more neurons in the zebra fish visual system than in the rest of the brain/nervous system in its entirety. Seeing well is pretty damn important to those critters - wouldn't surprise me if fruit flies were similar in that respect (although their visual system is very different).

Climate-change skeptic lined up to run NASA in this Trump timeline


Re: Belief has nothing to do with it: The fundamental difference between religion and science

The problem is that the models are continually getting it wrong.

"All models are wrong, but some are useful" - George Box

"The best material model for a cat is another cat, or preferably the same cat" - Arturo Rosenblueth

Non-scientists routinely misunderstand the purpose and utility of models in science. Here's a famous example of an exceptionally useful - but completely "wrong" - model: the Ising model for ferromagnetism. When a ferromagnet is heated to a specific temperature (the Curie point), it abruptly de-magnetises. This is a classical phase transition (like the boiling of water, etc.).

The Ising model was proposed in the 1920s by Wilhelm Lenz and analysed by his student Ernst Ising, in an attempt to understand the ferromagnetic phase transition (phase transitions were poorly understood at the time). It is elegant, abstract, and - as a model for ferromagnetism - completely wrong. It's absolute rubbish. It's childishly simplistic. Real ferromagnets are nothing like the Ising model - they're way more complex in structure and (quantum) electrodynamics. But here's the strange thing... the Ising model completely nails the ferromagnetic phase transition. It describes the behaviour of the relevant physical quantities near the Curie point astoundingly well.

The Ising model (finally solved analytically, in two dimensions, by Lars Onsager in 1944) subsequently became the "fruit fly" of the physics of phase transitions. It's probably not far off the mark to say that almost everything we know about phase transitions (and we now know a lot) is rooted in studying the Ising model. It is one of the most elegant, successful and influential models in the history of science.

It's instructive to consider just why the Ising model is in fact so successful. It turns out that, in general, phase transitions fall into distinct "universality classes": that is, many apparently completely different physical phenomena which demonstrate phase transitions turn out to behave in identical, stereotyped ways near their critical point - they may be described, not just qualitatively but quantitatively, by the same mathematics. (This is a rather deep discovery, which stems from studying - you guessed it - the Ising model.)

So the Ising model didn't have to be "correct", or even "accurate" (it's not). It just had to nail the one phenomenon it was intended to model. It abstracts the problem. That is what useful models do - that's what they're for.
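Just how simple the model is can be seen from a sketch: spins on a grid, each interacting only with its nearest neighbours. The illustrative Python below (standard Metropolis Monte Carlo, with lattice size and sweep count picked for speed rather than accuracy) runs the model cold and hot, and the magnetisation collapses across the transition:

```python
import math
import random

def ising_magnetisation(T, L=10, sweeps=2000, seed=1):
    """Mean |magnetisation| of an L x L Ising lattice at temperature T,
    estimated with the standard Metropolis Monte Carlo algorithm."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]          # start fully magnetised
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # Sum of the four nearest neighbours (periodic boundaries).
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                  spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nb          # energy cost of a flip
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]
        if sweep >= sweeps // 2:                 # discard burn-in
            m = sum(map(sum, spins)) / (L * L)
            mags.append(abs(m))
    return sum(mags) / len(mags)

cold = ising_magnetisation(T=1.5)   # well below the critical point (~2.27)
hot = ising_magnetisation(T=4.0)    # well above it
# cold stays near 1 (ordered phase); hot collapses towards 0 (disordered)
```

A few dozen lines, no quantum electrodynamics in sight - and yet the behaviour near the critical point is quantitatively the real thing.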

In climate science, as in any other science, that is how we should view models: not as "right" or "wrong" ("another cat"), but as useful in abstracting and pinpointing the crucial aspects of the phenomenon we wish to understand.


Re: I don't mine a skeptic. I prefer a skeptic in this position

BB, your misunderstanding of science is quite extraordinary.

Not another Linux desktop! Robots cross the Uncanny Valley


One thing the article might have mentioned (although I'm really not sure what to make of it), is that Hiroshi Ishiguro, due to the effects of ageing, had cosmetic surgery to make him look more like his android. He apparently claims it was more cost-effective than updating the android.

In the red corner: Malware-breeding AI. And in the blue corner: The AI trying to stop it


Re: So it's Core War played with "real" virtual processors between machines

The joker of course is have you developed a system perfectly adapted for finding only the malware that the attacking ML system produces.

That's an excellent point, and one which you can be sure is not lost on the designers of this system (or adversarial ML in general). I could imagine ways of getting around this, though. First of all, you would have to ensure that the malware detector does not "forget" earlier attempts at evasion. This could be done, for example, by continually bombarding it with all thus-far generated malware attacks. That's the easy part. Getting the malware generator to diversify wildly is likely to be much harder. It probably needs to be "seeded" with exploits from the real world, not to mention the designer's imagination in full black-hat mode.


Re: Pattern matching is dumb, thus anomaly detection, with history and rollback.

If I was writing malware, I'd probably use random salted compressed and encrypted launch/payload sections, including deceptive "buggy" code/data and resource access, to defeat easy binary-pattern and behaviour detection.

So perhaps the malware generator could discover and deploy this strategy (with a bit of nudging, perhaps) - and the malware detector could then attempt to mitigate against it.


Re: Scary

Sounds like the equivalent of loading a bacterium on a petri dish with increasing doses of antibiotic.

More like loading mutating bacteria on a petri dish with increasing doses of "mutating antibiotics"; you get an arms race - kind of what's happening in the real world with antibiotic-resistant bacteria (cf. the Red Queen effect).

Machine 1, Man 0: AlphaGo slams world's best Go player in the first round


Re: Newsflash

Just don't call it "AI" until it can design a game like Go by itself. From scratch.

You can do that? Hats off, sir/madam.


Re: And how much hardware it takes....

To be fair, human brains throw massively more hardware than any computer system in existence at doing ... just about anything. Plus they have had the benefit of aeons of evolutionary time to hone their algorithms and heuristics.

Looked at that way, it hardly seems like a fair contest.


Re: Cracked and good PR

However it's not AI

According to ... what/whose definition of AI? (not a rhetorical question).

Can the system play any other game not programmed in?

I play a fair game of chess, but am absolutely rubbish at Go. Never had the time or motivation to program it in.

Did the computer "learn" the previous matches? No, they were loaded into a database.

Correction: it learned from previous matches. Perhaps those matches were "loaded from a database" during the training phase. I used to load matches from databases for training during my chess days - we called them "chess books" back then.

Hint: why not find out how AlphaGo really works.

74 countries hit by NSA-powered WannaCrypt ransomware backdoor: Emergency fixes emitted by Microsoft for WinXP+


Re: Risk Management

Of the >140,000 million NHS yearly budget, only about 40,000 million is available for things like buying drugs, new hospitals, MRI scanners and desktop refreshes. The rest goes on wages. That's a political failure.

Yeah right, why should we pay people to do this stuff?

Take a sneak peek at Google's Android replacement, Fuchsia


Re: Old joke

It was named after a German botanist, so no, it would be closer to "Fook-sia", or "Fooch-sia" with the ch similar to that in the Scottish "loch".

Sorry, no "spoilsport" icon.

Linux homes for Ubuntu Unity orphans: Minty Cinnamon, GNOME or Ubuntu, mate?


Re: windows manager choice

FWIW, Fluxbox (which has been my WM of choice for a decade) is still under development - albeit at a somewhat leisurely pace. It knows what it is, and is comfortable to stay that way - which suits me fine.

Apple fanbois are officially sheeple. Yes, you heard. Deal with it


Re: "the grammar is relatively simple"

What would you consider a language with complicated grammar then?

Basque, Finnish, Navajo, Adyghe, Abkhaz, Korean, Icelandic, Thai, ...

English grammar is in fact pretty simple compared even with its Latinate and Germanic progenitors. I recall, when learning Spanish, that the trickiest things to get to grips with were the imperfect past tense (which doesn't exist in English) and the subjunctive mood (which English has almost lost).

Now Afrikaans - there's a really, really simple grammar.

Facebook decides fake news isn't crazy after all. It's now a real problem


Re: "I would trust Mark on this," de Alfaro said in an email to The Register.

More specifically, he called them dumb f**ks for trusting him with their data - jokingly, perhaps, but in the context you have to say he had a point.

(Beer icon because it's 5.00 ... somewhere. Here, in fact. Now.)


Re: FB wants to go the same way as the MSM

Now, I only look at the MSM to learn what is the latest lie that they are propagating or which piece of information they are trying not pass on to the general public.

Out of interest, having rejected mainstream media as a source of reliable information, what are your alternative sources of information, and how sure can you be that they are any more reliable than the mainstream media?

Shock horror: US military sticks jump leads on human brains to teach them a lesson


"Deep brain stimulation" is already a thing, and an active area of research, particularly for the treatment of severe epilepsy, Parkinson's disease, depression and Tourette Syndrome.

The article should surely have mentioned this.

A bot lingua franca does not exist: Your machine-learning options for walking the talk


Re: WTF is this crap

A group of cells in any brain learns to manage all its systems in the body and learns to balance itself so as not to destroy itself as things change from conception. We can do this too guys!

How? The Nobel committee is waiting to hear from you.

Hasta la Windows Vista, baby! It's now officially dead – good riddance


Re: Oh alright...

Hasta la Vista

You do realise that that translates roughly as "until we meet again"?



Re: Embiggen

Big: adjective. Verb forms: embiggen, bignify. Adverbial form: bigly. Abstract noun: bignation.


Re: @LionelB Vista Capable

Yes, agreed - and apologies (and an upvote): it wasn't clear to me what you were getting at.

Riddle of cannibal black hole pairs solved ... nearly: Astroboffins explain all to El Reg


Re: They solved nothing. Just fairy tales.

I expect you're also one of those people who refuses to believe that any historical event took place unless you were there to see it with your own eyes.

... while trees fall silently in deserted forests ...

(Couldn't be arsed to go full haiku.)


Re: They solved nothing. Just fairy tales.

Are you sure your arse is different from your elbow? Better send a probe up there.

You don't really get science, do you?

Google's video recognition AI is trivially trollable


Re: Is it a bug?


LionelB wrote earlier:

That's what general (i.e., non-domain-specific) AI is up against - and yes, it's hard, and we're nowhere near yet.

IOW, I don't entirely* disagree with you. I just thought your analogy was crap.

*OTOH, I don't think "real" AI (whatever that means) is unattainable - always a duff idea to second-guess the future (cf. previous unattainables, like heavier-than-air flight, or putting humans on the moon). Basically, I don't believe that there are magic pixies involved in natural (human) intelligence.


Re: Is it a bug?

No, you have to go back, create test cases for every imaginable scenario, ...

Sorry, no. You seem to have a total misconception as to how machine-learning in general, and "deep-learning" (a.k.a. multi-layer, usually feed-forward) networks in particular, function. You seem to have latched onto the bogus idea that a machine learning algorithm needs to have "seen" every conceivable input it might encounter in order to be able to classify it.

In reality, the entire point of machine-learning algorithms is to be able to generalise to inputs it hasn't encountered in training. The art (and it's not quite a science, although some aspects of the process are well-understood) of good machine-learning design is to tread the line between poor generalisation (a.k.a. "overfitting" the training data) and poor classification ability (a.k.a. "underfitting" the training data).
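That trade-off is easy to exhibit without a neural network in sight. In the hypothetical sketch below, noisy data is drawn from a straight line and fitted twice: once with a least-squares line, and once with a polynomial threaded exactly through every training point. The interpolant "memorises" the training set perfectly, yet generalises far worse to unseen points (all names and numbers are invented for illustration):

```python
def true_fn(x):
    return 2.0 * x + 1.0                       # the underlying "reality"

# Six training points from the line, with a fixed pattern of "noise".
train_x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
noise = [0.4, -0.4, 0.4, -0.4, 0.4, -0.4]
train_y = [true_fn(x) + e for x, e in zip(train_x, noise)]

def lagrange(xs, ys, x):
    """Degree-5 polynomial through every training point: zero training
    error, i.e. the training data is memorised ("overfitting")."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def linear_fit(xs, ys):
    """Ordinary least-squares straight line: a simpler hypothesis class."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, icept = linear_fit(train_x, train_y)

# Generalisation: squared error against the true line at unseen points.
test_x = [0.5, 1.5, 2.5, 3.5, 4.5]
err_poly = sum((lagrange(train_x, train_y, x) - true_fn(x)) ** 2
               for x in test_x)
err_line = sum((slope * x + icept - true_fn(x)) ** 2 for x in test_x)
# The interpolant fits the training data perfectly but swings wildly
# between the points; the humble straight line generalises far better.
```

The polynomial is the overfitter (it has soaked up the noise), while a constant model would underfit; the straight line sits at the sweet spot because its inductive bias happens to match the data-generating process.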

It's a hard problem - and while the more (and more varied) the training data and time/computing resources available, the better performance you can expect, I'd be the last person to claim that deep-learning is going to crack general AI. Far from it. But it can be rather good at domain-specific problems, and as such I suspect will become a useful building-block of more sophisticated and multi-faceted systems of the future.

After all, a rather striking (if comparatively minor) and highly domain-specific aspect of human intelligence is our astounding facial recognition abilities. But then we have the benefit of millions of years of evolutionary "algorithm design" behind those abilities.


Re: "why the algorithm..such a heavy weighting on..only 2% of the footage."

@John Smith 19

Yes, deep-learning networks (usually) are just multi-layer networks - but that doesn't imply that "people could actually work out how they work". It's notoriously hard to figure out the logic (in a form comprehensible to human intuition) of how multi-layer networks arrive at an output. I believe the so-called "deep-dreaming" networks were originally devised as an aid to understanding how multi-layer convolutional networks classify images, roughly by "running them in reverse" (yes, I know it's not quite as straightforward as that).


Re: Is it a bug?

So your reply to "homoeopathy is not medicine" is "write a new treatise on it and make it better!" Yup, got it.

Sorry, but that's a fantastically lame "analogy".


Re: Is it a bug?

If you have to insert an explicit rule, it's not AI. It's a human-written heuristic.

You might well argue, though, that natural (human) intelligence is a massive mish-mash of heuristics, learning algorithms and expedient hacks assembled and honed over evolutionary time scales.

That's what general (i.e., non-domain-specific) AI is up against - and yes, it's hard, and we're nowhere near yet. And of course it's hyped - what isn't? Get over it, and maybe even appreciate the incremental advances. Or better still, get involved and make it better. Sneering never got anything done.


Re: Is it a bug?

If it was a tiny bug it could be fixed.

What makes you so sure it can't be fixed? FWIW, I suspect it is probably not a "tiny" bug, but may not actually be that hard to fix (off the top of my head I can imagine, for example, a training regime which omits random frames, or perhaps a sub-system which recognises highly uncharacteristic frames, which might mitigate the problem).

This research may well turn out to be rather useful to Google (although I'd also be slightly surprised if they weren't aware of something similar already).
