So the creeping Microsoftisation of UK universities continues apace. My research group were recently invited to participate in a consultation on the future of my institution's HPC facilities. During the "interview" with us users, the "independent" consultancy "hired" by the university admitted they were paid by MS rather than the university. Strangely, they are pushing for a cloud-based solution... that's "cloud" as in "Azure". Ho hum.
University of Cambridge to decommission its homegrown email service Hermes in favour of Microsoft Exchange Online
From Accompli to Microsoft to Google: G Suite chief Javier Soltero chases the 'complete collaborative experience'
Re: Collaborative working
How many people does it take to write a document? On my last research paper it was six (on half as many continents, LaTeX on Overleaf). Some of the software projects I've been involved in, many, many more than that (GitHub, all the continents). Day to day stuff shared on Google Docs, chat on Slack, Zoom, whatevs. Yes, it gets messy sometimes, but it's a helluva improvement on emailing stuff back and forth (been there, done that back in the day).
Strangely, collaboration is a thing.
Re: One-dimensional, hence exceedingly fragile
That fragility sounds to me like a consequence of a crappy training regime. If you want robust behaviour you need to train your networks on noisy data, and even design deliberately confounding/deceptive training data.
A promising and increasingly popular avenue to achieving better robustness is "adversarial networks", where you have one network trying to get better at the task at hand, while another network tries to get better at deceiving the original network.
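For a flavour of the idea, here's a toy sketch in Python/NumPy. Purely illustrative: the "detector" is just a logistic regression, and the "adversary" is a one-step gradient-sign perturbation rather than a trained generator network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: class 0 clustered around (-1,-1), class 1 around (+1,+1)
X = np.vstack([rng.normal(-1, 0.3, (200, 2)), rng.normal(+1, 0.3, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(X, y, w, b):
    """Mean cross-entropy of the 'detector' on (X, y)."""
    p = np.clip(sigmoid(X @ w + b), 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def train(X, y, epochs=200, lr=0.5):
    """Logistic-regression 'detector', fitted by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def perturb(X, y, w, b, eps=0.5):
    """The 'adversary': nudge each input in the direction that increases
    the detector's loss (a one-step gradient-sign attack)."""
    p = sigmoid(X @ w + b)
    grad = (p - y)[:, None] * w[None, :]  # per-sample dLoss/dX
    return X + eps * np.sign(grad)

w, b = train(X, y)            # detector learns the clean data
X_adv = perturb(X, y, w, b)   # adversary crafts harder inputs
# Retrain on clean + adversarial examples together: the detector
# hardens itself without forgetting the original distribution.
w2, b2 = train(np.vstack([X, X_adv]), np.hstack([y, y]))
```

In a real adversarial setup both sides are networks trained in alternation, so the attacks diversify as the detector improves; the alternating structure is the point.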
These days, anything that uses a neural network is classified as AI, no matter how it is used.
I think we're going to have to learn to live with that. You could argue that the term "AI" has become de facto re-defined (debased?). But since nobody really knew or agreed on what AI ought to mean in the first place*, that's hardly tragic.
*Seems to me that a majority of Reg respondents appear to conflate "real AI" with "human-like intelligence". Okay, that's a thing, but that's not the only game in town, and, with respect to the current state of play, sets the bar impossibly high. I'd be happy, at this stage, to see more research and engineering of "insect-like intelligence", or even "bacteria-like intelligence" - we're not even there yet.
So is the general rule that AI pattern recognition works off perceptual hashing?
There must be more to it - that seems fundamentally not AI but just massive statistical guesswork.
Not sure what you mean by "statistical guesswork", but perhaps a bit closer to "hierarchical perceptual hashing" or something like that. Google "convolutional neural networks" (CNNs). Roughly, they're multi-layer feed-forward networks that apply spatial filters to patches of an image and pass the results on to the next layer, thus building up recognition of patterns at a hierarchy of spatial scales.
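To make the convolution step concrete, here's a minimal sketch in Python/NumPy. Illustrative only: the filter here is hand-crafted, whereas a real CNN learns its filters from data.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution: slide the kernel over every patch of the
    image and take a dot product - the basic CNN building block."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: a bright vertical bar on a dark background
img = np.zeros((8, 8))
img[:, 4] = 1.0

# A hand-crafted vertical-edge "motif" filter
vertical_edge = np.array([[-1.0, 1.0, 0.0],
                          [-1.0, 1.0, 0.0],
                          [-1.0, 1.0, 0.0]])

response = conv2d(img, vertical_edge)
# The response map peaks where the motif occurs; stack such layers
# (with nonlinearities and pooling in between) and you get detectors
# for progressively larger, more abstract patterns.
```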
They should be converting to vectors, like SVG etc, then looking for patterns.
I doubt a vector format would work terribly well for spatial pattern search, insofar as the spatial relationships between visual elements of the image would be more difficult to extract. CNNs use spatial convolution, as that works rather well for extracting spatial "motifs" from a (raster) image.
Re: Geoffrey Hinton Is Right. Backpropagation Must Go
Don't entirely disagree, but that article annoyingly conflates backpropagation and supervised learning. Backpropagation is an efficient iterative algorithm used to implement supervised learning in feed-forward networks. It is not synonymous with supervised learning.
The article argues strongly for unsupervised learning - which is fair enough - but to my mind over-eggs the pudding. "Learning" in humans (and indeed other animals) involves both supervised and unsupervised learning. Try sticking your hand in a fire; the supervisor (i.e., "the world") will send you a very powerful error signal.
(Nor is STDP the only game in town for unsupervised learning).
Re: The more I study AI the more it looks like consciousness is essential to it.
It seems that the best way to know what leg to move forward next uses a simulation of a simple model of the mechanisms involved to help predict what to do next in the current environment.
Having worked a little in robotics, it turns out that that's a really, really bad way to "know what leg to move forward next", and almost certainly not the way you (or any other walking organism) does it. The idea that to interact successfully with the world an organism maintains an explicit "internal model" of its own mechanisms (and of the external environment) is 1980s thinking, which hit a brick wall decades ago - think of those clunky old robots that clomp one foot in front of the other and then fall over, and compare how you actually walk.
In biological systems, interaction of an organism with its environment is far more inscrutable than that (that's why robotics is so hard), involving tightly-coupled feedback loops between highly-integrated neural, sensory and motor systems.
... and since chaotic systems have so far defeated mathematical modelling ...
Errm, no they haven't. Here's one I made earlier:
x → 4x(1–x)
That's the "logistic map". Here's another:
x' = s(y-x)
y' = x(r-z)-y
z' = xy-bz
That's the famous Lorenz system, which has chaotic solutions for some parameters. Chaotic systems are really easy to model. In fact, for continuous systems, as soon as you have enough variables and some nonlinearity you tend to get chaos.
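Easy to model, but (and this is the whole point of chaos) hopeless to predict far ahead from imprecise initial data. A few lines of Python make the distinction vivid:

```python
def logistic(x, n):
    """Iterate the logistic map x -> 4x(1-x), returning the trajectory."""
    xs = [x]
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

a = logistic(0.2, 50)
b = logistic(0.2 + 1e-10, 50)  # same start, nudged by one part in 10^10

gap = [abs(u - v) for u, v in zip(a, b)]
# The gap grows roughly exponentially: negligible for the first dozen or
# so iterations, then the two trajectories decorrelate completely.
```

The Lorenz system behaves the same way; you'd just swap the one-line map for a numerical ODE integrator.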
Re: "AI is more artificial idiot than artificial intelligence"
Because in any reasonable definition of AI ...
Well what is a reasonable definition of AI? Genuine question: I get the impression that most commentators here equate "real" AI with "human-like intelligence" - under which definition we are, of course, light-years away. But does the "I" in AI have to be human-like? Or, for that matter, dog-like, or octopus-like or starling-like, or rat-like?
Perhaps we need to broaden our conception of what "intelligence" might mean; my suspicion is that "real" AI may emerge as something rather alien - I don't mean that in the sinister sci-fi sense, but just as something distinctly non-human.
Re: People who want to kill other people for stupid sky fairy reasons are not clever
Jihads, Crusades, Intifadas - they're all the same.
Not quite: Intifada was, in its original meaning, a political term with connotations of "rebellion against oppression" (the first Intifada was a socialist protest against the monarchy in Iraq). Of course it is now more strongly associated with the Palestinian struggle against Israeli occupation - which may or may not (depending on who you are talking to, and when) have been hijacked by religious extremists.
Agreed on the others, though.
Re: C'mon, ElReg.
The particular sort of barbarism practised by Daesh doesn't respect even those rules even if they were often evident more in the breach than the observance.
This is an entirely deliberate strategy. You need to appreciate their motives: they are a doomsday sect. They believe that the global Caliphate will arise only after an apocalyptic showdown between Islam and the non-believers. Their avowed intention is to evoke the highest levels of disgust and abhorrence in order to hasten that showdown.
Re: Experience and subtle clues
Why? Because Drs and nurses say, hmm I have seen something like this before, and short circuit the differential diagnosis. Try encoding that in an expert system !
That would be easy to encode in an expert system - if the system designers were able to pin down what "something like this" actually meant. And that's the Achilles' Heel of expert systems: identification (and encoding) of the explosion of edge cases and hard-to-articulate intuitions that constitutes the deep knowledge of an experienced human expert. This is why knowledge-based systems hit a brick wall. We've known about this for a long time.
Re: re: Contacting someone implies you were successful;...
My personal favourite IndEng word is "prepone", meaning "to bring forward in time", by analogy with "postpone".
Mine is "doubt" to express a misunderstanding. As an academic I am sometimes contacted by Indian students/researchers expressing a "doubt" about some aspect of my published work. The first few times this happened I thought they were being a bit cheeky, until I twigged that they were just seeking clarification.
Re: Semiconductors are getting hard to fill
Babel fish are worth a punt. If everyone understood each other it would improve not only research but a lot of other aspects of commerce.
Actually, in research and academia language is hardly an issue; English is already de facto the lingua franca of science.
Re: eyes as brains ...
Not sure about fruit flies, but I have a colleague who studies genetically-modified zebra fish (same technology - calcium imaging). Young zebra fish are almost completely transparent, so you can image their entire brain/neural system in one shot. It's pretty impressive watching screeds of individual neurons (~ 10,000) flickering away in real time.
Turns out that there are more neurons in the zebra fish visual system than in the rest of the brain/nervous system in its entirety. Seeing well is pretty damn important to those critters - wouldn't surprise me if fruit flies were similar in that respect (although their visual system is very different).
Re: Belief has nothing to do with it: The fundamental difference between religion and science
The problem is that the models are continually getting it wrong.
"All models are wrong, but some are useful" - George Box
"The best material model for a cat is another cat, or preferably the same cat" - Arturo Rosenblueth
Non-scientists routinely misunderstand the purpose and utility of models in science. Here's a famous example of an exceptionally useful - but completely "wrong" - model: the Ising model for ferromagnetism.

When a ferromagnet is heated to a specific temperature (the Curie point), it abruptly de-magnetises. This is a classical phase transition (like the boiling of water, etc.). The Ising model was proposed by Wilhelm Lenz and analysed in 1924 by his student Ernst Ising, in an attempt to understand the ferromagnetic phase transition (phase transitions were poorly understood at the time). It is elegant, abstract, and - as a model for ferromagnetism - completely wrong. It's absolute rubbish. It's childishly simplistic. Real ferromagnets are nothing like the Ising model - they're way more complex in structure and (quantum) electrodynamics.

But here's the strange thing... the Ising model completely nails the ferromagnetic phase transition. It describes the behaviour of the relevant physical quantities near the Curie point astoundingly well. The Ising model (the two-dimensional version of which was finally solved analytically in 1944 by Lars Onsager) subsequently became the "fruit fly" of the physics of phase transitions. It's probably not far off the mark to say that almost everything we know about phase transitions (and we now know a lot) is rooted in studying the Ising model. It is one of the most elegant, successful and influential models in the history of science.
It's instructive to consider just why the Ising model is in fact so successful. It turns out that, in general, phase transitions fall into distinct "universality classes": that is, many apparently completely different physical phenomena which demonstrate phase transitions turn out to behave in identical, stereotyped ways near their critical point - they may be described, not just qualitatively but quantitatively, by the same mathematics. (This is a rather deep discovery, which stems from studying - you guessed it - the Ising model.)
So the Ising model didn't have to be "correct", or even "accurate" (it's not). It just had to nail the one phenomenon it was intended to model. It abstracts the problem. That is what useful models do - that's what they're for.
In climate science, as in any other science, that is how we should view models: not as "right" or "wrong" ("another cat"), but as useful in abstracting and pinpointing the crucial aspects of the phenomenon we wish to understand.
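If you want to see the Ising model "nail" the transition for yourself, a few dozen lines of Python will do it - a toy Metropolis Monte Carlo simulation, in units where the coupling constant and Boltzmann constant are both 1 (so the 2-D Curie point sits at Tc ≈ 2.27):

```python
import numpy as np

def sweep(s, beta, rng):
    """One Metropolis sweep of the 2-D Ising model (periodic boundaries)."""
    n = s.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        # Energy cost of flipping spin (i,j): dE = 2 * s_ij * (sum of 4 neighbours)
        nb = (s[(i + 1) % n, j] + s[(i - 1) % n, j]
              + s[i, (j + 1) % n] + s[i, (j - 1) % n])
        dE = 2.0 * s[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]

def magnetisation(temp, sweeps=50, n=16, seed=1):
    """Absolute magnetisation per spin after `sweeps` Metropolis sweeps."""
    rng = np.random.default_rng(seed)
    s = np.ones((n, n), dtype=int)   # start fully magnetised
    for _ in range(sweeps):
        sweep(s, 1.0 / temp, rng)
    return abs(s.mean())

m_cold = magnetisation(1.5)  # well below the Curie point
m_hot = magnetisation(4.0)   # well above it
# m_cold stays near 1 (ordered phase); m_hot collapses towards 0
```

Scan the temperature between those two values on a bigger lattice and the magnetisation drops off a cliff near Tc - out of nothing but up/down spins on a grid.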
Re: So it's Core War played with "real" virtual processors between machines
The joker of course is have you developed a system perfectly adapted for finding only the malware that the attacking ML system produces.
That's an excellent point, and one which you can be sure is not lost on the designers of this system (or adversarial ML in general). I could imagine ways of getting around this, though. First of all, you would have to ensure that the malware detector does not "forget" earlier attempts at evasion. This could be done, for example, by continually bombarding it with all thus-far generated malware attacks. That's the easy part. Getting the malware generator to diversify wildly is likely to be much harder. It probably needs to be "seeded" with exploits from the real world, not to mention the designer's imagination in full black-hat mode.
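The "don't forget" part might look something like this sketch (Python, with string stand-ins for real malware samples, and the actual detector retraining elided):

```python
import random

class ReplayBuffer:
    """Keep every adversarially-generated sample ever seen, so the
    detector is retrained against old evasion attempts as well as new."""

    def __init__(self):
        self.samples = []

    def add(self, batch):
        self.samples.extend(batch)

    def training_batch(self, new_batch, k=64):
        # Mix the latest attacks with a random draw of historical ones
        history = random.sample(self.samples, min(k, len(self.samples)))
        return list(new_batch) + history

buf = ReplayBuffer()
for round_no in range(3):
    # Stand-ins for whatever the malware generator produced this round
    new_attacks = [f"attack-{round_no}-{i}" for i in range(8)]
    batch = buf.training_batch(new_attacks)
    # ... retrain the detector on `batch` here ...
    buf.add(new_attacks)
```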
Re: Pattern matching is dumb, thus anomaly detection, with history and rollback.
If I was writing malware, I'd probably use random salted compressed and encrypted launch/payload sections, including deceptive "buggy" code/data and resource access, to defeat easy binary-pattern and behaviour detection.
So perhaps the malware generator could discover and deploy this strategy (with a bit of nudging, perhaps) - and the malware detector could then attempt to mitigate against it.
Sounds like the equivalent of loading a bacterium on a petri dish with increasing doses of antibiotic.
More like loading mutating bacteria on a petri dish with increasing doses of "mutating antibiotics"; you get an arms race - kind of what's happening in the real world with antibiotic-resistant bacteria (cf. the Red Queen effect).
Re: And how much hardware it takes....
To be fair, human brains throw massively more hardware than any computer system in existence at doing ... just about anything. Plus they have had the benefit of aeons of evolutionary time to hone their algorithms and heuristics.
Looked at that way, it hardly seems like a fair contest.
Re: Cracked and good PR
However it's not AI
According to ... what/whose definition of AI? (not a rhetorical question).
Can the system play any other game not programmed in?
I play a fair game of chess, but am absolutely rubbish at Go. Never had the time or motivation to program it in.
Did the computer "learn" the previous matches? No, they were loaded into a database.
Correction: it learned from previous matches. Perhaps those matches were "loaded from a database" during the training phase. I used to load matches from databases for training during my chess days - we called them "chess books" back then.
Hint: why not find out how AlphaGo really works.
74 countries hit by NSA-powered WannaCrypt ransomware backdoor: Emergency fixes emitted by Microsoft for WinXP+
Re: "the grammar is relatively simple"
What would you consider a language with complicated grammar then?
Basque, Finnish, Navajo, Adyghe, Abkhaz, Korean, Icelandic, Thai, ...
English grammar is in fact pretty simple even compared with its Latinate and Germanic progenitors. I recall, when learning Spanish, that the trickiest things to get to grips with were the imperfect past tense (which doesn't exist in English) and the subjunctive mood (which English has almost lost).
Now Afrikaans - there's a really, really simple grammar.
Re: FB wants to go the same way as the MSM
Now, I only look at the MSM to learn what is the latest lie that they are propagating or which piece of information they are trying not to pass on to the general public.
Out of interest, having rejected mainstream media as a source of reliable information, what are your alternative sources of information, and how sure can you be that they are any more reliable than the mainstream media?
Re: Is it a bug?
LionelB wrote earlier:
That's what general (i.e., non-domain-specific) AI is up against - and yes, it's hard, and we're nowhere near yet.
IOW, I don't entirely* disagree with you. I just thought your analogy was crap.
*OTOH, I don't think "real" AI (whatever that means) is unattainable - always a duff idea to second-guess the future (cf. previous unattainables, like heavier-than-air flight, or putting humans on the moon). Basically, I don't believe that there are magic pixies involved in natural (human) intelligence.
Re: Is it a bug?
No, you have to go back, create test cases for every imaginable scenario, ...
Sorry, no. You seem to have a total misconception as to how machine-learning in general, and "deep-learning" (a.k.a. multi-layer, usually feed-forward) networks in particular, function. You seem to have latched onto the bogus idea that a machine learning algorithm needs to have "seen" every conceivable input it might encounter in order to be able to classify it.
In reality, the entire point of machine-learning algorithms is to be able to generalise to inputs it hasn't encountered in training. The art (and it's not quite a science, although some aspects of the process are well-understood) of good machine-learning design is to tread the line between poor generalisation (a.k.a. "overfitting" the training data) and poor classification ability (a.k.a. "underfitting" the training data).
It's a hard problem - and while the more (and more varied) the training data and time/computing resources available, the better performance you can expect, I'd be the last person to claim that deep-learning is going to crack general AI. Far from it. But it can be rather good at domain-specific problems, and as such I suspect will become a useful building-block of more sophisticated and multi-faceted systems of the future.
After all, a rather striking (if comparatively minor) and highly domain-specific aspect of human intelligence is our astounding facial recognition abilities. But then we have the benefit of millions of years of evolutionary "algorithm design" behind those abilities.
Re: "why the algorithm..such a heavy weighting on..only 2% of the footage."
@John Smith 19
Yes, deep-learning networks (usually) are just multi-layer networks - but that doesn't imply that "people could actually work out how they work". It's notoriously hard to figure out the logic (in a form comprehensible to human intuition) of how multi-layer networks arrive at an output. I believe the so-called "deep-dreaming" networks were originally devised as an aid to understanding how multi-layer convolutional networks classify images, roughly by "running them in reverse" (yes, I know it's not quite as straightforward as that).
Re: Is it a bug?
If you have to insert an explicit rule, it's not AI. It's a human-written heuristic.
You might well argue, though, that natural (human) intelligence is a massive mish-mash of heuristics, learning algorithms and expedient hacks assembled and honed over evolutionary time scales.
That's what general (i.e., non-domain-specific) AI is up against - and yes, it's hard, and we're nowhere near yet. And of course it's hyped - what isn't? Get over it, and maybe even appreciate the incremental advances. Or better still, get involved and make it better. Sneering never got anything done.
Re: Is it a bug?
If it was a tiny bug it could be fixed.
What makes you so sure it can't be fixed? FWIW, I suspect it is probably not a "tiny" bug, but may not actually be that hard to fix (off the top of my head I can imagine, for example, a training regime which omits random frames, or perhaps a sub-system which recognises highly uncharacteristic frames, which might mitigate the problem).
This research may well turn out to be rather useful to Google (although I'd also be slightly surprised if they weren't aware of something similar already).
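The frame-omission idea is just standard data augmentation; a hypothetical sketch (the function name and parameters are mine, nothing to do with Google's actual system):

```python
import random

def drop_frames(frames, p=0.1, seed=0):
    """Hypothetical augmentation: randomly omit a fraction p of the frames
    in a training clip, so the classifier can't lean on any single frame."""
    rng = random.Random(seed)
    kept = [f for f in frames if rng.random() > p]
    return kept if kept else frames[:1]  # never return an empty clip

clip = list(range(100))        # stand-in for a 100-frame video clip
augmented = drop_frames(clip)  # roughly 90 frames survive, order intact
```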