Regulate This! Time to subject algorithms to our laws

Algorithms are almost as pervasive in our lives as cars and the internet. And just as these modes and mediums are considered vital to our economy and society, and are therefore regulated, we must ask whether it's time to also regulate algorithms. Let's accept that the rule of law is meant to provide solid ground upon which our …


  1. Zog_but_not_the_first
    Unhappy

    Booting up my Mystic Meg algorithm, I predict...

    This won't happen.

  2. Tom 64
    Facepalm

    what...

    The implementation of an algorithm is usually just the automation of an existing process.

    If the process is illegal, that's hardly the fault of the programmers is it?

    1. Anonymous Coward
      Anonymous Coward

      Re: what...

      "If the process is illegal, that's hardly the fault of the programmers is it?"

      IIRC the use of Formal Definition Techniques was proposed as a way of proving that sets of rules were consistent.

      The English parliament these days even seems to bypass close scrutiny of any draft legislation. The Executive prefers to have vague bills passed - whose details are later set arbitrarily by unscrutinised "secondary legislation" powers of decree.

      1. Steven Jones

        Institution unknown

        "The English parliament these days"

        I was unaware that there was such a thing as an English Parliament. Pray, where does it convene?

        1. Anonymous Coward
          Anonymous Coward

          Re: Institution unknown

          "I was unaware that there was such a thing as an English Parliament. Pray, where does it convene?"

          As distinguished from the Scottish/Welsh/Northern Ireland Assemblies. They are separate law making bodies whose laws do not always align with those being passed by Westminster that only affect England. Hence the vexed problem of "the West Lothian question".

          1. Trigonoceps occipitalis

            Re: Institution unknown

            The West Lothian Question was identified by Tam Dalyell, the Labour MP for the Scottish constituency of West Lothian. It is specifically the right of Scottish MPs, as opposed to MSPs, to vote on bills debated in Westminster that affect England only, while having no right to vote on bills on the same matter in the Scottish Parliament. English MPs have no right to vote on Scottish devolved matters, as is reasonable. There is a democratic deficit inherent in the arrangement that one day will have to be addressed.

        2. Pen-y-gors

          Re: Institution unknown

          I was unaware that there was such a thing as an English Parliament. Pray, where does it convene?

          At Westminster. Let's face it, any decisions made there are purely in the interests of English Tories, they don't give a damn about the interests of Wales, Scotland and Norn Ireland, even on non-devolved matters, so I think 'English parliament' is pretty accurate.

        3. Geoffrey W

          Re: Institution unknown

          RE: "I was unaware that there was such a thing as an English Parliament. Pray, where does it convene?"

          This kind of response is so tedious; nit-picking on some trivial point or mis-chosen words when it's perfectly obvious what is meant. It serves no purpose except to demonstrate what a clever dick the responder is, and to divert the discussion down a totally irrelevant path. Stop it at once, you naughty boy!

      2. Anonymous Coward
        Anonymous Coward

        Re: what...

        The E̶n̶g̶l̶i̶s̶h̶ Henry VIII parliament.

        TFTFY.

      3. David Shaw
        Flame

        Re: what...

        The {national} parliament{s} these days even seems to bypass close scrutiny of any draft legislation.

        I seem to recall one of the early ILETS data-retention laws being passed entirely by fax!

        One noble Lord in the UK briefly noticed, but he was told to calm down as "it wasn't that important" - legislation which, seemingly, is at present illegal per the ECJ.

        As for the algos, by the time the Amazon Cloud has finished training my software defined architecture, can even I understand the rules, never mind explain them to the Palatial incumbents?

    2. Anonymous Coward
      Anonymous Coward

      Re: what...

      > The implementation of an algorithm is usually just the automation of an existing process.

      And many decisions made by humans are pretty arbitrary anyway - such as the binning of applications based on a cursory scan of a CV. Are all such decisions to be regulated, even in the absence of a computer? Will you be able to challenge why you weren't called up for an interview?

      If your bank decides not to offer you a loan, will the law compel it to do so? This implies not only that the bank will have to reveal its reasons not to offer the loan - the so-called "algorithm" under discussion here - but also for those reasons to be challenged and potentially overridden.

      This in turn implies that you would have a statutory right to receive a loan from a bank, if you meet some criteria decided in law or by a judge - not those criteria chosen by the bank itself.

      1. shrdlu

        Re: what...

        > And many decisions made by humans are pretty arbitrary anyway -

        > such as the binning of applications based on a cursory scan of a

        > CV. Are all such decisions to be regulated, even in the absence

        > of a computer? Will you be able to challenge why you weren't

        > called up for an interview?

        Your CV is sensitive personal data and any processing of it is already required to be done accurately and fairly. It doesn't matter whether it is done by eye or by algorithm. Agencies that routinely use search algorithms should be required to prove that the algorithm is fair and accurate. The Information Commissioner should be auditing these.

        > If your bank decides not to offer you a loan, will the law compel it

        > to do so? This implies not only that the bank will have to reveal

        > its reasons not to offer the loan - the so-called "algorithm" under

        > discussion here - but also for those reasons to be challenged

        > and potentially overridden.

        The same rules apply. Failure to process the data accurately and fairly is an offence. If your application is rejected unfairly then the bank must either grant the application or pay compensation. They may also need to consider whether they have a taste for cocoa and porridge. The ICO should be auditing these decisions.

        > This in turn implies that you would have a statutory right to receive a

        > loan from a bank, if you meet some criteria decided in law or

        > by a judge - not those criteria chosen by the bank itself.

        That is how courts work.

        1. Anonymous Coward
          Anonymous Coward

          Re: what...

          > That is how courts work.

          There are specific laws about discrimination on grounds of race/religion/gender/sexual orientation. If the algorithm used any of those factors as inputs then it would be at odds with those laws. Care would be required for anything which might be a proxy for those attributes (such as surname).

          But apart from those, is there a general statutory duty of "fairness" in business decision-making? And if there is, why are all the more specific laws required?

          1. Anonymous Coward
            Happy

            Re: what...

            There are specific laws about discrimination on grounds of race/religion/gender/sexual orientation. If the algorithm used any of those factors as inputs then it would be at odds with those laws. Care would be required for anything which might be a proxy for those attributes (such as surname).

            It's not that simple. A set of features which individually only weakly select for particular characteristics can be combined to strongly select for race/religion/gender/sexual orientation, and thus inadvertently discriminate against these groups. As a trivial and rather obvious example, a retailer might use information on, amongst many other things, the makeup and clothing colours purchased. But I bet there are some really subtle things that are not at all obvious, because individually they only have a weak effect.
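            A toy simulation (all numbers invented for illustration) makes the point: take a set of features that each agree with a hidden protected attribute only 60% of the time, and a simple majority vote over them recovers that attribute far more reliably than any single feature does.

```python
import random

random.seed(0)
n, k, agree = 10_000, 25, 0.60  # people, weak features, per-feature agreement rate

def noisy_copy(attr):
    # A feature that matches the hidden attribute only 60% of the time
    return attr if random.random() < agree else not attr

single_hits = combined_hits = 0
for _ in range(n):
    attr = random.random() < 0.5                   # hidden protected attribute
    feats = [noisy_copy(attr) for _ in range(k)]
    single_hits += feats[0] == attr                # one weak feature alone
    combined_hits += (sum(feats) > k / 2) == attr  # majority vote of all 25

print(f"single feature: {single_hits / n:.2f}")   # ~0.60
print(f"majority vote:  {combined_hits / n:.2f}")  # ~0.84
```

            No single feature is a strong proxy, yet the combination is; a trained model will find such combinations without anyone asking it to.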

    3. John Smith 19 Gold badge
      Unhappy

      "If the process is illegal, that's hardly the fault of the programmers is it?"

      Apparently you've never heard the expression "Ignorance of the law is no defense."

      Your attitude is why a lot of people think all devs are nothing but bodge merchants.

      By analogy with the construction industry, that would be designing a building below the known safety standards, or building it with materials which don't meet their specs.

      So yes it is illegal.

      1. Yes Me Silver badge
        Headmaster

        I am not a lawyer but...

        Re: "If the process is illegal, that's hardly the fault of the programmers is it?"

        The computer's defence is clear: it was only obeying orders. The programmer's defence is less clear. If she was obeying orders but by doing so told the computer to break the law, the defence that she was only obeying orders or that she was ignorant of the law doesn't hold water. She's as guilty as her boss.

        In any case, the idea of regulating algorithms is a nonsense. Blame the humans, not the machines.

        1. Anonymous Coward
          Anonymous Coward

          Re: I am not a lawyer but...

          Can a computer break the law? Literally speaking. I don't think any inanimate object can be treated as capable of understanding the law. (Neither is any human being, but that's another rant).

          This is actually a very deep and potentially very embarrassing inquiry, seeing how many human systems and organizations - such as governments and corporations - are largely designed to diffuse blame and prevent any specific person or people from being held legally responsible.

          When decisions are embodied in a computer program, they become definite, exact and undeniable. But the program, and the computer that executes it, are not the kind of entities that are capable of legal or illegal behaviour.

          So the computer program becomes a kind of "confession in advance" by those who can be held legally responsible if anything goes wrong. Once this doctrine becomes established and widely understood, there may be a very noticeable decrease in the amount of automation.

    4. Anonymous Coward
      Anonymous Coward

      Re: what...

      The two programmers who worked for Bernie Madoff were convicted. They accepted suggestions about the desired output and then wrote programs that produced that output, which was used to mislead people into handing over their money to Madoff.

      Programmers can be held accountable.

      It would be a great idea if the legal lack of clarity (input can always contain errors, yet we try to make programmers responsible for all output) were reduced. The practice of "giving a programmer files and letting him make something of it" is still quite prevalent.

    5. Anonymous Coward
      Meh

      Re: what...

      "But if he had been 36 instead of 19, he would have received a more lenient sentence, though by any reasonable metric, one might expect a 36-year-old to receive a more punitive sentence."

      Because ageism rocks!

      1. Anonymous Coward
        Anonymous Coward

        Re: what...

        "But if he had been 36 instead of 19, he would have received a more lenient sentence, though by any reasonable metric, one might expect a 36-year-old to receive a more punitive sentence."

        One presumes that the result of the algorithm is because, statistically, a 19 year old is a greater risk, and may require more emphatic punishment to reliably change behavior.

        In that case, the algorithmic result is, logically, the most reasonable choice.

    6. Anonymous Coward
      Meh

      Re: what...

      If the process is illegal, that's hardly the fault of the programmers is it?

      It would be so simple if there were a line in the code which said, e.g.

      if (sex == MALE && skinColour == WHITE) price *= 2;

      You might expect that such an algorithm would be illegal. (Or maybe not, since discrimination is generally a one way street, but that is another discussion).

      However, AI isn't like that, generally the programmer doesn't design the algorithm. The computer learns the algorithm itself, in the case of neural networks by adjusting internal weights to optimise the results towards the desired outcome. It is hard to understand what the effects of the individual weights are, and the bigger the network the harder it is. In any case, few legal professionals have the necessary graduate level mathematical training. So the "algorithm" is a black box, which has tuned itself to maximise the number of desired outcomes in a large sample of test cases, with perhaps a dozen factors (and often many more). You can't easily work out how it is operating internally, and the only way to find undesirable outcomes from specific combinations is to test them. Which is going to take a while if you want to fully explore the 12-dimension (or whatever) space.
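      The scale of that testing problem is easy to put numbers on. As a back-of-envelope sketch (assuming just 10 discrete values per factor and an optimistic million model evaluations per second):

```python
factors = 12             # input dimensions, as in the example above
values_per_factor = 10   # a coarse grid; real-valued inputs are far worse
rate = 1_000_000         # model evaluations per second (optimistic)

cases = values_per_factor ** factors  # 10**12 combinations
days = cases / rate / 86_400          # 86,400 seconds per day

print(f"{cases:,} test cases -> {days:.1f} days")  # ~11.6 days
```

      And that is for one coarse grid: doubling the resolution of every factor multiplies the run time by 2**12.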

      1. j.bourne

        Re: what...

        Nail - head - on - hit. The problem is not the algorithm per se, it's the data that's allowed to be used to base the outcome on. E.g. if gender, age, ethnicity, etc. weren't parameters in the first place then they wouldn't be available to discriminate on....

        Oh, so it's just short people you pick on then .... Yep that's ok nothing on the statute about height discrimination (or is there?).

      2. Anonymous Coward
        Anonymous Coward

        Re: what...

        Nevertheless, however the computer works and regardless of whether any human being can understand how it arrives at decisions, the people responsible for using it to make decisions must carry the can legally. The buck cannot stop with a machine, so it must stop with the people who installed the machine as part of their system. In principle, I suspect it's not very different from hitting someone with a spade. It's not the spade's fault!

      3. EBG

        Re: what...

        Differential pricing based on the customer as a variable, rather than being rooted in the cost of product variation, should be illegal. E.g. if we want to subsidise OAPs' travel, up their pensions and let them make choices. A big threat comes from the degradation of the fundamentals of money, and of un-impeded, non-discriminatory choice as to how we use our money once we have earned it.

    7. Oh Homer
      Headmaster

      "Swapping liberty"?

      What an odd characterisation of labour.

      An equitable exchange of labour for money or goods is not a loss of liberty, it's the voluntary utilisation of one's liberty for personal gain.

      I am not somehow "less free" because I choose to work for someone else, to earn a wage so I can buy goods, rather than work for myself to make or grow those goods directly.

      Either way I still have to work. To characterise such work as a loss of liberty is like saying that merely being born is comparable to slavery, as if the only true "freedom" is being strapped to a couch for 75 years, being spoon fed jelly by a nurse.

  3. Anonymous Coward
    Anonymous Coward

    Not just computers

    "When an organisation doesn't know what it wants from an algorithm, how can it measure what the results are? "

    Many of the algorithmic rules that affect people inconsistently in England are in laws set by politicians. They are algorithms that are being devised without a thorough investigation of their consistency when applied.

    Examples are the minimum sentencing rules - capable of being as unjust as the USA recidivism machine rules. It appears that they tie a judge's hands even when they recognise the injustice in a particular case. The weighing of evidence itself is by a human algorithm. For example - is a picture illegal? There are algorithms people are required to follow to make that subjective judgement.

    Every day there are examples of people in the benefits system being unfairly disadvantaged by a particular combination of the rules.

    People applying for residency in the UK - who have lived here most of their lives - are being denied that privilege because there is a requirement for "evidence" that they cannot meet, e.g. children who cannot produce utility bills in their own name.

    To illustrate the general ignorance: the BBC "Thinking Allowed" programme recently mentioned the problem of computers making such decisions - however, the presenter called them "logarithms". That is a word of Greek origin that has nothing to do with the Arabic-derived "algorithm".

    1. P. Lee

      Re: Not just computers

      >is a picture illegal?

      I wonder if this article is off the back of the BBC article which suggested that Google's algorithms are racist because if you ask for baby pictures you only get white baby pictures? I'm just going to take a moment to laugh at the SJWs... ok, I'm back.

      I'm with you in that it "isn't just computers" but mostly processes are relatively observable and have audit trails. However I don't think "processes" is where the article goes. We're into knowledge/decision systems. The problem I see with these is not the "AI" or whatever, but the massive consolidation in many industries and the lack of competition. This may be via corporate consolidation or merely that all the corporates are running the same software. Either way, that is unhealthy and intervention may be required to stir things up and bring back competition. Perhaps we do need to think about splitting Google, AWS or MS up. Hmmm... I'm guessing that won't happen for the next four years.

      The point about cartels offloading decisions to software with the objective of maintaining the cartel is interesting, but again, I think we're talking about business practice rather than particular algorithms.

      1. chelonautical

        Re: Not just computers

        >The problem I see with these is not the "AI" or whatever,

        >but the massive consolidation in many industries and the

        >lack of competition. This may be via corporate consolidation

        >or merely that all the corporates are running the same

        >software. Either way, that is unhealthy and intervention

        >may be required to stir things up and bring back competition.

        Good point. Another consolidation concern is consolidation in the data sources that record people's lives. Companies like Google and Facebook hold such vast quantities of data about the general public that their datasets will probably end up being given a very high weighting in any decision-making process. These companies know so much about us, why wouldn't every employer, bank, insurance company or any other business use them to find out much more about our habits and risk profiles? There is a lot of power in a few hands.

        As a result of this consolidation of personal data into a small number of internet services, any errors or unfavourable entries in the records of Google/Facebook/etc. could easily follow us around. If someone once said something foolish on Facebook or Googled something risky, these things could be added to their digital "permanent record" and result in life-long disadvantage in employment, credit, insurance and many more areas. This is already the case to some extent, as people can be found online but it used to require a degree of manual effort and patience on the part of a human. The next human at a different organisation might not bother or may search less thoroughly, so you have less chance of any past online embarrassment becoming permanently life-ruining.

        What's new is that large-scale automated information sharing and deep AI-based analytics of people's life history is becoming possible such that organisations can automatically judge people's entire digital lives and reject them in a matter of milliseconds. As a simple example, imagine going to an online car insurance comparison website and being told "all companies declined to quote for you" without knowing why. It might be because you posted a couple of things about social drinking on Facebook so they have all wrongly concluded you are a drunk and therefore a bad risk. It might be for some other reason entirely. Will it be possible to find out? Will companies admit any responsibility or will they just pass the buck to their third-party "lifestyle analytics" provider in another jurisdiction off-shore? Will there be an ombudsman who will help you find the culprit? Ultimately, who can you sue for the damages caused by incorrect automated decisions?

        These issues already occur today on a smaller scale (e.g. several times a call centre operator has told me "the computer says X" without being able to explain why), but on a small scale it's easier to handle or shop around elsewhere. The danger is that unexplained incorrect or biased decisions become automated and repeatable to the extent that you can never escape them in any aspect of your life. If everyone uses the same software and the same data sources that becomes increasingly likely.

  4. Anonymous Coward
    Anonymous Coward

    Even if

    These crusty old law makers could look at the algorithm, they probably wouldn't understand it anyway.

    > "require organisations using algorithms to retain records on all of the data they are using".

    This looks to be the real golden egg, access to the data.

    1. Doctor Syntax Silver badge

      Re: Even if

      "These crusty old law makers could look at the algorithm, they probably wouldn't understand it anyway."

      Let's examine your ageism.

      First of all, look at the summary from the table here http://parliamentarycandidates.org/news/the-age-of-the-new-parliament dating from 2015

      18-29: 2%
      30-39: 14%
      40-49: 32%
      50-59: 32%
      60-69: 16%
      70+: 4%

      How does this compare with your concept of "crusty old"? BTW, without looking it up, how do you think those 4% over 70 are distributed between parties?

      Now let's think what we might consider an ideal age distribution. I think most of us would like our MPs to have some practical experience of the world they're trying to administer. My least ideal candidate would be a newly graduated or even younger policy wonk who has no concept of life outside their own party machine. Such an MP isn't going to come into Parliament without being well into that age distribution, is going to spend some extra years broadening their experience in dealing with governance at all levels from constituency matters upwards, and then should remain there so that their experience adds value. Does that distribution seem particularly unreasonable?

      There's also the notion implicit in the AC's statement that somehow it's only the young who are aware of algorithms. So here, my young AC, is a little research exercise for you. Who are Whitfield Diffie and Martin Hellman? How old are they? Why do you think they should be unable to understand what algorithms are, much less how they work? And, if you bothered to look up the answer to the question I posed earlier about the >70 MPs' parties, how did that fit your preconceptions?

    2. Anonymous Coward
      Anonymous Coward

      Re: Even if

      > These crusty old law makers could look at the algorithm, they probably wouldn't understand it anyway.

      If it was a traditional algorithm (like a flowchart or decision-tree) it would be fine.

      If it's some newfangled AI "machine learning" algorithm, where you just blast a load of data into a neural net and somehow "train" it to do the right thing, that's a lot harder to scrutinise for anyone - IT professionals included.

      1. Anonymous Coward
        Anonymous Coward

        Re: Even if

        It's certainly true (or very plausible) that some neural networks or even computer programs may reach reasonable decisions by methods that no human being can ascertain (or understand).

        But anything arising from such automated decision-taking is still entirely the responsibility of whoever used the computer to make decisions. It can't be any other way.

        If you can't be sure exactly how your decision-making system will work in ALL circumstances that might possibly arise, don't deploy it. If you do, you are like someone firing off a gun in random directions and hoping you never hit anyone.

  5. hplasm
    Big Brother

    Treading on politicians' toes?

    "...in the hands of a few programmers who have no accountability for the decisions that they're making,"

    and

    "in the hands of a few politicians who have no accountability for the decisions that they're making,"

    Seems like the former offends the latter- which is the status quo...

    1. Nick Kew

      Re: Treading on politicians' toes?

      Is it the politicians? They're just doing what they've long done: bewailing the new that isn't under their control.

      It's more the old media (including, to a great extent, organs like El Reg, which still use an old-fashioned journalist/editor model) crying about their own loss of the minds of their followers.

  6. Ken Hagan Gold badge

    Please stop using the word algorithms

    To someone who actually develops algorithms for a living, your use of the word as a short-hand for "using a computer as a legal or PR fig-leaf" really grates.

    Algorithms are intellectual constructs and their form is constrained (if not determined, in simple cases) by what they are intended to do. Calling for algorithms to be regulated makes about as much sense as calling for mathematical theorems to be regulated.

    You can regulate whether people can *use* particular algorithms for particular purposes, but I think you'll find that hard to regulate in the case where someone chooses to run the algorithm on neurons rather than silicon. (Societies that try to regulate what goes on inside someone's head have a Bad Track Record, historically.)

    If I'm reading you correctly, your gripe is not the algorithm, nor even the fact that it is running on a computer, but simply the fact that the people who choose to run those algorithms on the computer are using the computer to put themselves at arm's length from the legal consequences of the algorithm delivering an anti-social or illegal answer.

    Happily, I believe that even in this case there is ample legal machinery and precedent already in place. If a corporation directs an employee to perform an algorithm and that employee ends up breaking some law, the corporation carries the can. Directors are liable, etc. This system has been tested on several generations of corporate shysters and crooks and appears to work. Using a computer rather than an employee merely increases the calibre of the cannon directed at the corporation's feet.

    1. TRT Silver badge

      Re: Please stop using the word algorithms

      And on top of that the algorithm may only be sensible over a limited range of input conditions. Now, you might be able to open up the mechanism of the algorithm to inspection, but you can't release all the possible input values because these might well contain personally identifiable information.

    2. Pen-y-gors

      Re: Please stop using the word algorithms

      Calling for algorithms to be regulated makes about as much sense as calling for mathematical theorems to be regulated.

      Isn't that exactly what dim-but-crazy Amber and barking-mad Theresa are wanting to do with encryption theorems?

      1. Doctor Syntax Silver badge

        Re: Please stop using the word algorithms

        "Isn't that exactly what dim-but-crazy Amber and barking-mad Theresa are wanting to do with encryption theorems?"

        Looking at the state of governance of nuclear powers around the world I'm starting to think that by comparison BoJo is a rational and diplomatic negotiator, Rudd is a competent technocrat and May a benevolent internationalist. We're doomed, I tell you, dooomed.

    3. Frumious Bandersnatch

      Re: Please stop using the word algorithms

      I totally agree, Ken. We should be talking about "automated processes" or the like.

      It seems to me that the only thing that needs legislating here is in the realm of data protection (or FoI) requests. Let's say that someone is refused insurance cover. I think that it's quite possible and reasonable to make a data request asking the organisation to clarify the factors leading to the decision. I'm pretty sure, though not certain, that this sort of request is allowable and that it should receive a reply.

      However, once you start using automated processes, there is a great risk that the organisation being asked for such information will, deliberately or not, seek to obfuscate what their processes are. You'll just get a response "computer says no". If you kick this up to the ombudsman or whatever, there's every likelihood that the organisation will argue two main points: first, they'll say that their algorithms are a trade secret, and second, they'll say that the cost of satisfying the request is excessive. I don't think that the first point needs much comment, but for the second, it's quite possible that they'll be able to make a good excuse: since software is so much more complicated than manual processes (which they'll no doubt have documented as part of their quality certification or whatever), the cost to audit it will be so much more. Since data requests can legally be refused on grounds of cost, this will end up with more data requests being refused, with little or no recourse.

      So, as a result, I think that the only changes that need to come about are to ensure that the same transparency standards are applied to automated processes as manual ones. This needs to happen both in terms of privacy/FoI legislation and non-legislative areas, such as ISO quality standards (which I assume is immune to Brexit).

  7. Pat 11

    confidence intervals

    These kinds of algorithms use statistics to estimate parameters and make decisions. One way to make them more accountable would be to require all such processes to produce confidence intervals. For example, predicting recidivism - if the algorithm says X has a 68% chance of committing another crime, that sounds worrying, but if it also says that estimate has a 95% confidence interval of 26-81% then it looks much less certain. And if they can't generate confidence intervals, it's a shit algorithm that should not be trusted.
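    For a risk score derived from historical case counts, a standard way to get such an interval is the Wilson score interval; the counts below (17 reoffenders out of 25 comparable past cases) are invented for illustration.

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score interval for a proportion successes/trials."""
    p = successes / trials
    denom = 1 + z * z / trials
    centre = (p + z * z / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials))
    return centre - half, centre + half

# 17 of 25 comparable past offenders reoffended: point estimate 68%...
lo, hi = wilson_interval(17, 25)
print(f"68% estimate, 95% CI: {lo:.0%}-{hi:.0%}")  # ...but the interval is roughly 48%-83%
```

    The wider the interval, the less weight the point estimate deserves - which is exactly the accountability information the comment asks for.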

    1. allthecoolshortnamesweretaken

      Re: confidence intervals

      Good point.

      Also a good argument for the "let's look inside the black box" suggestion.

      1. Anonymous Coward
        Anonymous Coward

        Re: confidence intervals

        IIRC when Expert Systems were getting popular - it was considered essential that the way they reached their conclusion in each particular case was clearly tracked for human verification.

    2. Doctor Syntax Silver badge

      Re: confidence intervals

      "if they can't generate confidence intervals, it's a shit algorithm that should not be trusted."

      And even if they can but make no distinction between different offences and the circumstances in which they were committed then it's still a shit algorithm.

    3. Nick Kew

      Re: confidence intervals

      You only get confidence intervals from real data.

      You only get real data after a system has been operating long enough to collect them.

      Then there's the joker in the pack: someone's sure to mess with the "other things being equal" part of any study.

  8. Anonymous Coward
    Anonymous Coward

    But more importantly...

    When are they going to replace MPs with algorithms?

    At least algorithms can be programmed to have ethics and not lie...

    1. This post has been deleted by its author

    2. Primus Secundus Tertius

      Re: But more importantly...

      MPs could be replaced at any time by men with guns.

      Most of the time in most parts of the world the MPs are less worse than the men with guns.

      1. Steve Davies 3 Silver badge

        Re: But more importantly...

        MP's? Ah, you mean Military Police. They have guns.

    3. allthecoolshortnamesweretaken

      Re: But more importantly...

      "When are they going to replace MPs with algorithms?

      At least algorithms can be programmed to have ethics and not lie..."

      As Ken has hinted at already, algorithms can also run on neurons instead of silicon. And there are enough experiments (B F Skinner comes to mind among others) that demonstrate that humans can be conditioned very much like lab rats. Therefore it is entirely possible to program MPs.

      In fact, most of them are, although calling simple if/then conditions or goto loops algorithms is stretching things a bit.

      1. Charles 9

        Re: But more importantly...

        "As Ken has hinted at already, algorithms can also run on neurons instead of silicon."

        I think the problem here is there's no assurance it'll run consistently and precisely on neurons.

