UK finally vows to look at 35-year-old Computer Misuse Act

Portugal has become the latest country to carve out protections for researchers under its cybersecurity law. The move increases pressure on the UK after a government minister admitted last week that the 35-year-old Computer Misuse Act needed updating to protect cybersecurity pros from prosecution.

  1. Peter2 Silver badge

    This is the CMA 1990:-

    https://www.legislation.gov.uk/ukpga/1990/18/crossheading/computer-misuse-offences

    And I'm just going to highlight the part that I think is pertinent.

    (1)A person is guilty of an offence if—

    (a)he does any unauthorised act in relation to a computer;

    (b)at the time when he does the act he knows that it is unauthorised

    Anybody in IT should not have the slightest difficulty understanding the Computer Misuse Act, which is simply a series of IF & ANDIF statements combined with the occasional ELSEIF.

    The key point there is "does any unauthorised act", AND "at the time when he does the act he knows that it is unauthorised". Causing the condition to fail is incredibly, incredibly easy and can be summed up with two words, namely "GET PERMISSION".
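
    A toy sketch (mine, and obviously not legal advice) of the two-limb test being described, written as the boolean check the commenter compares it to:

    ```python
    # Toy model of the two-limb test: BOTH limbs must hold for an offence.
    def cma_offence(act_unauthorised: bool, knew_at_the_time: bool) -> bool:
        """Unauthorised act AND contemporaneous knowledge that it was."""
        return act_unauthorised and knew_at_the_time

    # "GET PERMISSION" makes the first limb False, so the test fails:
    assert cma_offence(False, True) is False
    # A reasonable belief that you had permission defeats the second limb:
    assert cma_offence(True, False) is False
    ```

    Causing either condition to fail defeats the whole test, which is the commenter's point.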

    Obtaining written permission from somebody at the organisation who might reasonably be expected to be able to grant that permission (ie, somebody in IT or the office manager, and not the office junior, cat, pot plant etc) represents a complete and total bar against any form of prosecution. Even if it later turns out that the person did not actually have the authority to grant that authorisation, if you reasonably believed that you had permission at the time then you are legally in the clear.

    I personally do not understand why this is a problem for security researchers. If you are conducting unauthorised "security research" upon a computer system then this appears to simply be a digital version of "casing a target for burglary". This appears to me to be legally correct; the entire distinction is one of "do you have permission to do that?".

    If I hire a security company to check the physical security of my house, then they are authorised to attempt to case the joint and produce a list of suggested improvements, estimates of the cost-benefit ratio of said improvements, and so on. The same situation is true in the digital realm. If a random person does the same thing without authorisation, then they are subject to arrest and prosecution for either "casing my house" or, in the digital realm, "casing my computer system".

    I'm not seeing that as being unreasonable, and I would like to know very exactly and specifically what "security researchers" actually expect to be allowed to do with legal immunity from prosecution.

    From where I'm sitting as a sysadmin, I don't think that conducting unauthorised "security research" should be legal any more than a locksmith should be allowed to conduct "security research" against the lock on the door of my house without my permission. I suspect that most property owners and sysadmin types will agree with me on both points.

    "Security researchers" probably disagree, but I have a sneaking suspicion that it is not unreasonable to suggest that one person's security researcher is another person's hacker, and there are obvious objections to granting any form of legal protection to unethical, unsupervised, unlicensed, uninsured and unwanted intruders.

    1. af108

      @Peter2 I think the issue is highlighted where the article says

      > spot and share vulnerabilities if they met certain safeguards.

      Take the M&S hackers for instance. If they had identified vulnerabilities in M&S' systems and privately disclosed them, that's very different to a group of people trying to hold an organisation to ransom off the back of the same information.

      But it does raise the question: would that group of people still have been prosecuted even if they'd done the former?

      Note that this is in a scenario where no permission had been obtained either way. I suspect they could (and would) still have been prosecuted.

    2. ParlezVousFranglais Silver badge

      I understand your position and comments, but actually I think your locksmith analogy sits a little wide of the mark.

      To correct it, imagine an "internet" of doors - some locked, some not, some marked, some not, some doors need to be secured, some don't, some doors identify their owner, some don't, some doors don't even have locks, they just push open

      Now in that world, you want to check that everyone who is supposed to have locked their door has done so. You can't do that without opening the door, and as soon as you try, even if the door is unlocked but should have been secure, even if the door has no lock and just pushes open when you touch it, you immediately incriminate yourself, even if the state of the door and its owner was unidentifiable prior to touching it.

      You can't get permission first if you don't know who the owner is...

      I completely agree that any legal protections would have to be absolutely ironclad, but the scales are currently tipped too far in one direction, and the law at least needs some consideration and input from various stakeholders to try to bring it back into balance again.

      1. Androgynous Cupboard Silver badge

        And let's not forget that, typically, at the other end of this process is a company that has your personal data and that you suspect isn't managing it properly.

      2. Daniel Pfeffer

        Using your analogy, who appointed the researchers Guardians of the Public Security?

        In the days when we still had policemen on the beat, they (in some jurisdictions) might have been expected to check whether shops or warehouses were locked at night and to notify the proprietors if they were not. This was considered part of their job as Public Safety Officers. The researchers have no such public (or private) sanction.

        1. doublelayer Silver badge

          The analogy to doors is almost always flawed and makes this tricky, but I will try anyway. The difference is that these are doors in a very public space where the general expectation is that they will be unlocked and open unless the owners wish otherwise. Websites are intended to be accessed by all sorts of people from almost anywhere, and thus they should be treated differently than private property.

          Consider a simple example: I'm navigating a website to download a driver I need. I click the link that says it's for the driver. I get a 404. So far, I'm accessing public resources available to anyone through paths created by the website owners. But now, since I didn't get the file the link was supposed to go to, I try removing the last part of the URL to go up a level. I'd argue this is still completely normal, no more invasive than going around the back of a shelf to see if they just misplaced it. If they don't want me to do that, blocking that is incredibly easy.
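
          The "go up a level" step described here is just trimming the final path segment by hand; a minimal sketch (the URL is hypothetical):

          ```python
          from urllib.parse import urlsplit, urlunsplit

          def parent_url(url: str) -> str:
              """Drop the last path segment, i.e. 'go up a level' by hand."""
              parts = urlsplit(url)
              parent = parts.path.rstrip("/").rsplit("/", 1)[0] + "/"
              return urlunsplit((parts.scheme, parts.netloc, parent, "", ""))

          # A 404'ing driver link, trimmed back to its directory:
          assert parent_url("https://example.com/files/drivers/foo.zip") \
                 == "https://example.com/files/drivers/"
          ```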

          This brings me to a directory of files, and I choose the one that looks like it's the file I wanted. But when I open it, I discover that this is sensitive internal information they really don't think should be public. Have I broken the rules here, just because I didn't only follow their own links? Is it different if I followed someone else's link to the same page? In both cases, this is available to anyone who knows a predictable URL and would have been very easy to block.

          This is not a theoretical example. This is an example I went through, reported to the owner of a website, and received an irritated response to. While they didn't go to the effort of threatening me with legal consequences, that does happen. I did not apply to the corporate headquarters for written permission to navigate their website, nor in my opinion should I need to in order to avoid prosecution just because they were unhappy that they had put things there they didn't want people to see.

        2. rg287 Silver badge

          > Using your analogy, who appointed the researchers Guardians of the Public Security?
          >
          > In the days when we still had policemen on the beat, they (in some jurisdictions) might have been expected to check whether shops or warehouses were locked at night and to notify the proprietors if they were not. This was considered part of their job as Public Safety Officers. The researchers have no such public (or private) sanction.

          Not so long ago, my wife was walking down our street when a dog ran out of a door towards her. No biggie, she sort of recognised it as one we've seen at the park (we also have a dog). She walked it back and called into the house to let them know their front door was ajar.

          Did anyone appoint her a Guardian of Public Security? Well yes actually, Robert Peel did when he stated:

          “The police are the public and the public are the police. The police being only members of the public who are paid to give full time attention to duties which are incumbent on every citizen in the interests of community welfare and existence”

          And y'know, it's the civic thing to do - they'd probably be really upset if their dog got hit by a car. Yes, it would be their fault for leaving their door ajar. But it would take a sociopath to not say "I'll just shout through the door".

          Anyway, how is that relevant? Well, firstly it demonstrates a general principle that in a civil society people (should) try to look out for one another. After all, the Police are an inherently reactive group, who rely on members of the public calling 999 when something happens. To wash your hands of any social responsibility ("that's the Police's job") and tell others to do likewise is at best thoughtless, and at worst downright ignorant.

          You called 999?! Who appointed you a guardian of the public safety?! If the person getting stabbed needs Police help, they're perfectly capable of calling themselves. They've got a phone, haven't they?! /s

          Moreover, in the UK (and we are discussing UK law here) no offence is actually committed by pushing an ajar door open - you haven't committed "breaking and entering" because entry was not forced. There might be a case for (civil) trespass, but there's a clear legitimate interest defence in your presence, no different to a postie delivering mail. Other offences such as burglary, theft or robbery would require you to actually steal stuff or otherwise cause harm.

          But the CMA considers the equivalent action to be an absolute offence.

          Now, security researchers are slightly different from that since they're actively looking for improperly ajar doors on a professional basis, but there's no real ethical difference, and it is extremely strange that the offence is considered absolute and a court has no latitude to hear a defence.

          When Marcus Hutchins registered the trap domain and stopped WannaCry in its tracks, he didn't have to touch any of the attacker's C2 infra. But what if he'd realised - in decompiling the malware - that a specially crafted command or string sent to a C2 server would propagate a shutdown command? Given that WannaCry was crippling the NHS and public infrastructure, he'd have a clear public duty to send that command and stop the spread. But under the CMA, this could be considered unauthorised access to a computer, and he would not be entitled to submit a defence in mitigation. Bizarre, no?

          The entire reason we have courts is because politicians can't envisage every possible corner case - and neither can you or I. So we have laws which allow the court to exercise some latitude in saying "well yes, technically you've done a thing, but it's clearly not a crime".

          It's why homicide is not a crime - because homicide committed in (legitimate and demonstrable) self-defence is considered reasonable. The crime is murder (or possibly manslaughter) and the court can weigh up all the circumstances before convicting and sentencing.

          If the world was as black and white as you make out, then we wouldn't need courts at all. We could just skip to Judge Dredd - police, jury and executioner all in one.

      3. Phil O'Sophical Silver badge

        In the door analogy, if you don't have permission to check that the door is locked then you don't have the right to check it, even with a gentle push. If the owner has set an alarm but forgotten to lock the door, you could trigger an 'intruder alert' and be responsible for any costs related to handling it, even if you had the best of intentions.

      4. Anonymous Coward

        Based on this, pretty much any website, tracking and especially AI seems to breach the act. I can’t imagine a one-time click on something counts as a lasting grant of consent.

    3. This post has been deleted by its author

    4. Yet Another Anonymous coward Silver badge

      >appears to simply be a digital version of "casing a target for burglary"

      Or trespassing onto the railway line to find that the train full of oil tankers on the hill above Lac-Mégantic is on fire - and then calling 911?

    5. Jamie Jones Silver badge

      The extreme cases are obvious - that's not the problem here. Only yesterday I manually stripped the last "/" from a GitHub URL to go to the parent directory.

      It didn't work that way, so it 404'ed.

      Could some sites consider this an "unauthorised access attempt"?

      A few years back I received an email and password-change URL for a Stripe account that wasn't mine.

      I reported it to them, but before doing so, checked that it worked - I got into someone else's account.

      Should that have got me prosecuted? You can bet your life they wouldn't have followed it up if I didn't have tangible proof. As it was, it took me quoting the law to say "yes, you are liable for the wrong address being on an account once you're made aware, even if you didn't make the mistake". They initially tried "We can't lock down access to the account with that email address, as you aren't the account holder" (or words to that effect), despite my proving I had control of that email address.

      Anyway, I sidetrack....

      1. Anonymous Coward

        > before doing so, checked that it worked - I got into someone elses account.
        >
        > Should that have got me prosecuted?

        Yes, it was none of your business whether it worked or not. If you found a car key on the ground and tried it in a few cars to see if it was a real key, would you be surprised to get your collar felt?

        1. Richard 12 Silver badge

          You're actively handed a key, and you use it.

          Then you discover that it actually opened the wrong door. It was impossible to know this before opening the door.

          Physical analogies are often highly misleading on the Internet. It's not a big truck.

          1. Fred Daggy
            Pint

            I'd be saying that the email sender actively used my resources without consent. The email server, yes, was open, but no, there was no consent. The argument goes both ways - at least enough to make the legal team think twice.

          2. Anonymous Coward

            You're handed a key and you use it. That makes you the perfect dupe for letting in the Trojan horse.

            1. Jamie Jones Silver badge

              "You're handed a key and you use it. That makes you the perfect dupe for letting in the Trojan horse."

              The problem with that comment is that you mistakenly assume that I'm as dumb as you are.

              Cheers!

        2. doublelayer Silver badge

          The information was sent to their personal email account. That's a lot more akin to having a key mailed specifically to you, and you would have a few reasons to try to figure out what it was doing there. For example, did you have an account you created years ago and forgot about, the most plausible reason that they would think there was an account using your address? Was someone specifically impersonating you, in which case you have a personal security reason to identify it and close it? In both cases, you would have every legal right to do exactly that because mail that is directed to you at an address you legally control is yours.

          1. MachDiamond Silver badge

            "Was someone specifically impersonating you, in which case you have a personal security reason to identify it and close it?"

            It might have been a flaw in a scammer's plan that led to you getting the key, so it would have been a good move to go into the account and make sure it's permanently closed or the information changed. Often there is no way to report an account issue unless you have an account. My first move might be to "update" my information, so if plugging the hole takes any time, anybody accessing that account will be fed a big bowl of nothing soup. I see that as a good tactic in some cases.

            If somebody is trying to set up a shopping account and plans to swipe expensive items from the porch, I'd make sure the mailing address on the account is a tax authority office or some other location where it would be dangerous to try to steal a package (other than a police station, which is easy to look up and obvious). Maybe a boarded-up house on the other side of town would also work, as a delivery company would immediately flag it as suspicious.

            1. Jamie Jones Silver badge

              Exactly. These were my thoughts. It was my email address, I knew I hadn't set it up, but it was addressed to me. I'd never even heard of Stripe at the time.

              My first instinct was foul play.

              It turned out that it was set up by someone who had my domain a few years prior. When I realised what was going on, I immediately informed Stripe, and that's when I had to fight to get them to do anything.

              1. Jamie Jones Silver badge

                I actually recorded the full interaction at the time, and just found it. It was long enough ago that it should be ok to make it public.

                My memory of the event was that it was more of a battle than this. Apologies if I hyped it up too much:

                https://www.catflap.org/jamie/misc/stripe.txt

        3. Jamie Jones Silver badge

          I maybe wasn't clear. It was addressed to me. It was my name, and my email address. I'd never even heard of Stripe at the time, so expected it to be a scam.

          Turns out it was a woman called Jamie Jones who ran some sort of church ministry. When I realised it was a legit mix-up, that's when I had to fight to remove my email address from her account!

    6. steviebuk Silver badge

      I'd also say they could have just used a woman to do the attacks, considering the act has always been sexist and states "he".

    7. sozz

      This is what it says. It doesn't really need changing - apart from getting rid of "he"

      A person is guilty of an offence if—

      (a)he causes a computer to perform any function with intent to secure access to any program or data held in any computer [F1, or to enable any such access to be secured];

      (b)the access he intends to secure [F2, or to enable to be secured,] is unauthorised; and

      (c)he knows at the time when he causes the computer to perform the function that that is the case.

      1. doublelayer Silver badge

        Of course it needs changing. The problem in the text is the word "unauthorized". What does that mean? People who don't like that you've pointed out they've got a gaping hole often interpret that as "we did not like that you told us this, therefore you were unauthorized to do so". It's not defined, so what authority can authorize you to do what thing is entirely unclear. Do you need prior written permission to view a public site? How about to take apart hardware you legally purchased and own? Or take things I do at my job. I work in security and have plenty of authorization to poke lots of places looking for problems, including things we bought or licensed from others with access to our data. Do I also need authorization from those others to do that, with my employers' being insufficient?

        In practice, this has no definition other than what a complainant and law enforcement are willing to accept. Therefore, if I find a problem, report it, and get threatened that if I say anything to anyone about this ever again, they'll have me arrested as an evil criminal hacker, I don't have any certainty that I won't find myself in police interviews, and there's at least some chance I might find myself convicted. Probably not, which is why I still report things, but I know people who have gotten plenty of legal threats. The fact that I, and most of those people, can point to professional work in security, will push back against threats, and are not asking for money acts in our defence. We should not need any of those things to be able to report problems without fear, and there are people who don't have them and still have valid reports.

        1. Huzza

          PaaS

          Another issue with who should authorise comes up with PaaS - is it the company buying the service or the company providing it that can authorise security testing?

    8. Anonymous Coward

      It feels like a pretty narrow reason to update the act. The whole thing, dating from 1990, likely needs a refresh, and much else far more important in it is likely not fit for purpose.

    9. AnonymousCward

      Get permission from who?

      So if I’m testing the security of Google Workspace or Microsoft 365 as implemented by a customer, for CMA purposes, Google/Microsoft could still prosecute if I find vulnerabilities in the infrastructure, despite having permission from the customer to perform the test. That’s one of many scenarios. You’ll encounter ISPs providing internet filtering that you may want to test for schools and/or libraries, then you have building management systems, monitored security alarm systems, public websites on shared web hosts and a myriad of other scenarios where getting permission spans multiple organisations, some of which the customer themselves may not know the full chain of command for, and where legal ownership is unclear.

      If RM is happy for SafetyNet to be tested, will the upstream ISP be happy? Will the upstream to the upstream be happy? Will I need Level3’s permission too? The packets would flow through their equipment too after all…

      It’s functionally difficult, and sometimes impossible, to guarantee legal written authorisation for everything a customer may ask for. Also, the CMA has always allowed prosecution for unauthorised connections, making most advanced discovery techniques illegal, and you don’t know what you don’t know. A prosecutor could argue that you must know because $qualification; sometimes decent judges will put prosecutors in their place and sometimes they won’t, and that is all pot luck too.

      Yes, we need the law fixing.

      1. Anonymous Coward

        Re: Get permission from who?

        You get permission from whoever's responsible for the company's IT admin, or someone from the management system above them.

        If you, as a regular normal member of the public, can type in a URL and get to a file then it's fair game. You have issued a request for a file, and their server has evaluated this against its security procedures and decided you have access. It's then taken a positive action to send you the file you requested.
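
        That "positive action" point can be sketched as a toy model (mine; the paths are made up and no real server is this simple): the server evaluates each request against its own rules, and the status code records its decision.

        ```python
        # Toy server model: the operator's rules, not the visitor, decide access.
        PUBLIC_PATHS = {"/index.html", "/files/drivers/"}

        def handle_request(path: str) -> int:
            """Return an HTTP-style status code recording the server's decision."""
            return 200 if path in PUBLIC_PATHS else 404  # served vs refused

        assert handle_request("/index.html") == 200            # positive action to serve
        assert handle_request("/internal/payroll.csv") == 404  # declined
        ```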

        If you have to use a compromised or easy-to-guess password, then you're immediately unauthorised. Even if it's 'password'. Because then you're pretending to be someone else, someone who is authorised.

        If you're talking through some other supplier's infrastructure then... well, did they give you permission or not? Of course they did, or at least you have as much of a right to use it as any other random person. ISPs are supposed to be common carriers, or they become responsible for whatever's carried over their infrastructure. If they're filtering it should be from their own subscribers- people with whom they have an agreement to do so.

        If your policies have made it accessible to the public, it's publicly accessible.

        If you don't know that it's publicly accessible, that's on your IT guys and management above them. If you don't have the right to make it publicly accessible, that's on your IT guys and management above them and action should be taken against them. If they took steps to keep sensitive information safe and someone circumvented those, then that intruder was unauthorised.

        And if you can't find who owns them, or at least someone who claims to have a reasonable degree of ownership/access control? Then you can't ask for permission. So don't gain unauthorised access, even if you think you should, unless you're willing to end up in court over it.

    10. Anonymous Coward

      There's a second consideration.

      What are you going to do if the discovery of an issue (let's assume it's accidental, i.e. without permission) is then not acted upon?

      If you go public, you admit willingly to a breach of the law, but you serve the greater good (which is unfortunately a successful defence only in the rarest of cases). If you do not, you serve the company that is negligent or has possibly made a mistake (let's be fair, it's not always cost cutting or incompetence; things happen).

      That still made, and makes, this law a train wreck and, I must add, this was pointed out while it was still being considered. That might explain the reluctance to (a) have it amended then and (b) get it corrected now, because in politics optics seem to matter a great deal more than reality.

    11. Anonymous Coward

      What if a company doesn't give permission, but you know they are insecure and mishandling data? What if they aren't proactive in security matters and simply don't care?

      How do you expose that? Just wait until the inevitable happens and let people get their data stolen or misused, possibly unnoticed for years?

      The only company that wouldn't be happy with an ethical hacker pointing out flaws in its system would be a company that has something to hide and knows it would be caught with its pants down.

      Why do you think there are bug bounties? The more responsible companies out there reward these researchers; they know the value of it.

    12. MachDiamond Silver badge

      "summed up with two words, namely "GET PERMISSION"."

      And how would you do that? The bigger the company, the more impossible that task becomes - although the bigger the company, the more damage will be done if a vuln turns into an exploit/0-day. Just asking for permission might trigger a call to the police and your being collected at 4am and taken in for a chat in the back room (with or without rubber hoses).

  2. af108
    Joke

    Before

    before modern cybersecurity research, ecommerce, cybercrime, vulnerability reporting, or even The Register existed.

    But let's not focus too much on when the world was a better place!

    1. ParlezVousFranglais Silver badge
      Happy

      Re: Before

      Images of Snake Plissken destroying his cassette tape and later using the "Sword of Damocles" just brought a contented smile to my face...

  3. Mike 137 Silver badge

    "I personally do not understand why this is a problem for security researchers"

    (1)A person is guilty of an offence if—

    (a)he does any unauthorised act in relation to a computer;

    (b)at the time when he does the act he knows that it is unauthorised

    The problem arises where a legitimate researcher wants to investigate e.g. a critical vulnerability in the public interest but the software vendor/host refuses to respond or co-operate. This is much more common than many believe as "reputation" typically takes precedence over protecting the public.

    When the CMA was being drafted I suggested that a defence could be reasonable documented and certified attempts to obtain consent before proceeding, but this didn't get into the bill.

    1. Elongated Muskrat Silver badge

      Re: "I personally do not understand why this is a problem for security researchers"

      Really, the CMA needs redrafting so that it protects from the perspective of the citizen, not of business. As it is, I can be prosecuted if I gain unauthorised access to someone's server, but any number of random website operators are perfectly free to run scripts in my browser or try to plant tracking cookies on my machine without gaining my permission, which they should be requesting from me in writing, not via a pop-up that renders their website otherwise unusable (at least until I open the debug window and start removing elements from the DOM).

  4. Uncle Slacky Silver badge
    Trollface

    Still waiting for Kemi's prosecution...

    ...but as she's apologised, that's all right then:

    https://www.bbc.com/news/uk-politics-43694295

  5. hrolf_kraki

    Free Pass??

    .....and then there's all the folk at Hubble Road and Nova South....doing illegal acts all the time.....and getting a free pass!

    What don't I understand?

  6. Rob Dyke

    Woohoo! It's great to hear that the Govt is looking to create a "statutory defence" for researchers to spot and share vulnerabilities. I'm grateful to Security Minister Dan Jarvis for hearing the concerns raised by CyberUp and others that the CMA is not fit for purpose.

  7. Anonymous Coward

    "I have never seen a situation so dismal that a policeman couldn't make it worse"

    Anonymous comment because about 30 years ago I was hacking my own company - I was employed to control all our internet access and saw us getting malware every week. Learning to hack all our internet access eventually helped me to prevent everything.

    The title is a Brendan Behan comment from years earlier. These days I agree with how the police actually work, but the "illegal" situation is dismal when you are working to prevent attacks.

  8. Anonymous Coward

    Until....

    we can outlaw the imposition of updates WHEN WE THE USER HAVE NOT given explicit permission (not implied consent, nor consent buried in some 1pt white text on a white background), this act and its successor will remain useless, if not downright impotent.

    It is my computer and if I don't want to upgrade it then that has to be my decision NOT YOURS.

    Hey MS... I'm looking at you (no surprise there then). To a lesser extent, Apple is in the frame.

    1. Evil Scot Silver badge
      Joke

      Re: Until....

      Beware of the leopard

    2. find users who cut cat tail

      Re: Until....

      Pretty much all ads are malicious code run by a third party (i.e. not even the website you are browsing) on your computer against your will. No prosecution there.

  9. Alistair Kelman

    What I said in February 2022 on the Computer Misuse Act ...

    In February 2022 PC Pro ran a feature article entitled "Is the Computer Misuse Act fit for purpose?", for which I was interviewed. The article, by Davey Winder, is available to PC Pro subscribers; it concluded as follows:

    "We’ll leave the final word to two people who were instrumental right at the beginning of this journey into cybercrime legislation: ex-barrister Alistair Kelman and the former hacker Robert Schifreen:

    “The CMA can be seen as a bit of a sweeping-up measure which has depended for its validity and acceptability on it only being sparingly used in prosecutions,” Kelman said. “It would be appropriate, in my view, that prosecutions could only be brought under a revised Act with the prior consent of the attorney general or the director of public prosecutions, leaving the CMA available for use in unusual and extreme circumstances without causing long-term issues.”

    Kelman insists that we have a sound system of checks and balances in our legal system in the form of the Association of Chief Police Officers’ “Good Practice Guide for Digital Evidence”, which, he said, “should be read into a prior-consent system under a revised CMA”.

    With such safeguards in place, Kelman is happy that a person engaged in penetration testing, an infosecurity researcher or someone investigating a vulnerability disclosure issue “would not be at genuine risk of prosecution, while bad actors would always remain at risk”.

    Schifreen adds that the deterrent aspect of the initial Act simply isn’t working, as evidenced by the continued rise in cybercrime. “It’s unlikely that increasing the penalties or tweaking the wording of the Act is going to stem the tide,” he said. “Far better than reforming the CMA would perhaps be to give some more resources to Action Fraud, so that they can investigate and prosecute more people. This might then act as a real deterrent.”

    https://sites.google.com/safecast.co.uk/alikelman/computer-misuse-act?authuser=0

    1. ExampleOne

      Re: What I said in February 2022 on the Computer Misuse Act ...

      Schifreen adds that the deterrent aspect of the initial Act simply isn’t working, as evidenced by the continued rise in cybercrime. “It’s unlikely that increasing the penalties or tweaking the wording of the Act is going to stem the tide,” he said. “Far better than reforming the CMA would perhaps be to give some more resources to Action Fraud, so that they can investigate and prosecute more people. This might then act as a real deterrent.”

      Perhaps we should simply ban the use of "no liability" licences in commercial, paid-for software.

      If a civil engineer builds a bridge and makes a mistake leading to failure and fatalities, they can expect to get dragged over the coals for the mistakes. Why do we so casually let software engineers off the hook?

      1. Anonymous Coward
        Anonymous Coward

        Re: What I said in February 2022 on the Computer Misuse Act ...

        OK, but for companies that go beyond a certain line of turnover.

        Just for argument's sake, let's say above EUR/GBP/USD 1M. Turnover because profit can be 'engineered' out of the books (a well established routine for avoiding tax) and above a certain level because (a) you don't want startups that barely have gotten their act together or your average innovative coder to get hit by this, (b) you can ensure your strategy is towards making it decent and (c) if you've reached that level you can plan for that exposure and set aside a budget.

        At the moment that money is spent on gaslighting via marketing that a product is "safe" by, for instance, finding security problems elsewhere while you have so many things to fix that you have to reserve a special day of the month so nobody notices just how much you have to keep updating (this may sound familiar).

    2. Pier Reviewer

      Re: What I said in February 2022 on the Computer Misuse Act ...

      Any law that relies on the government of the day pinky promising to use it in a specific way is a bad law. The way in which it should be used needs to be explicit so the courts can determine Parliament’s intent.

      A future government could readily decide to use the CMA as a political weapon and prosecute someone in circumstances the current government said they wouldn’t. Either fix it, or suck up the consequences, which are primarily a huge amount of UK businesses sitting on unmanaged risk that the bad guys will happily find and abuse for profit because no sane person will report a vulnerability they stumble across even accidentally

      Remember - outside of extremely limited circumstances English law never punishes inaction. Under the CMA, if you know/think an org is sitting on a tech time bomb, unless you already have written permission you STFU and move on. You do not ask for permission, because you basically admit to an offence under the CMA in doing so. Is that really what we want? Because that’s where we’re at.

  10. David_J
    Megaphone

    Incentives need to change radically

    Better than just amending this law for researchers, there should be incentives to encourage non-criminals to find the security holes and be recompensed for finding them. The companies exposing those holes should be fined. Misuse of data found through such holes should, of course, remain a criminal act (as would "techniques including denial of service, social engineering, and phishing etc").

    1. sten2012

      Re: Incentives need to change radically

      Very, very extreme take. I don't agree companies should be expected to pay for work they haven't agreed to being undertaken. If you choose recompense as optional then bug bounties already exist.

      If you're going that far though - why should social engineering be off the table?

      Companies should be expected to write bulletproof software but no expectation they should train their staff?

      1. heyrick Silver badge

        Re: Incentives need to change radically

        Agreed with the bit about paying for work not agreed to.

        A while back I had an email pointing out a security flaw in my site. A really obscure thing that could lead a carefully crafted URL to change the page content. As payment for services rendered, I was told I should pay a large amount of money for a fix or else the problem would be disclosed on a bug bounty site.

        It took no time at all to track the problem down to a line of debug code that I forgot to remove, which was echoing the URL parameters. It was my own PHP code so quite how they planned to provide a fix is anybody's guess...
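          For readers wondering what a "line of debug code echoing the URL parameters" can do, here is a minimal sketch of the same mistake and its fix. It is illustrative only, in Python rather than the original PHP, and the function names are invented for the example:

          ```python
          import html

          def render_debug(params: dict) -> str:
              # The bug: echoing raw URL parameters straight into the page,
              # so a carefully crafted URL can inject markup into the content.
              return "Debug: " + "&".join(f"{k}={v}" for k, v in params.items())

          def render_debug_safe(params: dict) -> str:
              # The fix: escape anything user-supplied before it reaches HTML.
              return "Debug: " + "&amp;".join(
                  f"{html.escape(k)}={html.escape(v)}" for k, v in params.items()
              )

          crafted = {"q": "<script>alert(1)</script>"}
          print(render_debug(crafted))       # the injected markup survives intact
          print(render_debug_safe(crafted))  # the markup is neutralised
          ```

          The fix here is one call to an escaping function; in heyrick's case, simply deleting the leftover debug line was enough.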

        I had a look at the user profile of the "researcher" that found the bug, and it was clear from the massive list of URLs that they were just running a bot to look for problems and collect other URLs. So I thanked the anonymous person on my blog and then ignored the rest.

        It's a bit much to demand money I don't have for a personal site that makes no money after getting a bot to perform a bunch of tests without asking first...

        1. sten2012

          Re: Incentives need to change radically

          I agree completely. _Most_ unsolicited bug reports for most companies are probably dumps of low-hanging fruit like SSL Labs, automated tools and/or complete chancers.

          I do, however, agree with protections for researchers, and there is questionable vagueness in the law for professionals (even solicited pentesters and researchers) that means we/they essentially have to work at risk - section 3A has kept me up at night on occasion:

          "Making, supplying or obtaining articles for use in offence"

          Particularly (2).

          "A person is guilty of an offence if he supplies or offers to supply any article believing that it is likely to be used to commit, or to assist in the commission of, an offence" which rules out... well, releasing anything, frankly. How Microsoft get away with Windows being on sale is beyond me.

          But let's not quite go so far as normalising the unsolicited blackmail attempts.

  11. Infused

    I find it interesting that the CMA was passed only after a member of the Royal Family had been compromised.

  12. Anonymous Coward
    Anonymous Coward

    A time in IT before ElReg??????

    Can't be right?

  13. Anonymous John

    There's also ethical hacking when used against scammers. Technically illegal, but the BBC Scam Interceptors programme openly admits to using it. I'm not aware of any being prosecuted. Or of any scammer complaining to the police about being hacked. Although I have baited a couple stupid enough that I wouldn't have put it past them.

  14. MarkTheMorose
    Headmaster

    That bloke again, but...

    "Security minister Dan Jarvis told a Financial Times conference that the government had "heard the criticisms" and was looking to create a "statutory defense" for researchers to spot and share vulnerabilities if they met certain safeguards."

    ... I bet, being a UK minister, he said 'defence' and not 'defense'.

  15. Peter Sommer

    Precise and testable definitions will be essential

    I have acted as an expert witness in many CMA cases and been consulted in many other situations which didn't make it to court (Dan Cuthbert's case, mentioned here, was one of them). Laws require precision so that they can be enforced consistently and fairly. A simple declaration by an actor/hacker of good intent isn't going to be enough. But if that route isn't followed, what criteria for activity will need to be proved in order to get the benefit of a defence? It is this problem which has so far defeated the plans for a revised CMA. Let's see what the government can come up with.
