Patel will want to include a ban on encryption, no doubt.
Infosec firm Rapid7 has joined the chorus of voices urging reform to the UK's Computer Misuse Act, publishing its detailed proposals intended to change the cobwebby old law for the better. The cloud-based SIEM company specifically highlighted section 3A of the CMA, saying this potentially "imperils dual-use open-source …
I'm presuming if someone hacks my computer 'in good faith' but without my prior permission it's acceptable to smack them in the mouth in good faith with no warning in return? As long as it's done in a manner reasonably designed to minimise loss of teeth, of course.
(The proposed 3A changes do make more sense)
Ethical hacking (i.e. hacking into somebody else's computer or systems) is usually contractually defined, to ensure that misunderstandings - or arrests - don't happen, although there have been a couple of instances where a researcher signed a contract in good faith and still ended up in a police cell until the matter could be cleared up.
The grey area is websites and cloud services.
When I did my ethical hacking course, the most important point constantly drummed into us was: get an agreement up front. You define exactly what you can and can't do - how far into their systems you may go, whether you may alter any data, whether you alert the customer as soon as the first intrusion succeeds or carry on further.
If you are checking your own kit, then there is no problem with hacking it and sending bug reports to the developer.
If you are checking a company's security, likewise, you have a watertight contract to cover you, as long as you stay within the lines.
Where it becomes troublesome is for researchers looking for, say, open databases on AWS or specific weaknesses in a cloud service. In those instances you are tripping over flaws in the configuration of systems that are open to the internet. If you make your own account on the service and test whether the data in your account is properly locked down, you might be okay, depending on how the cloud provider feels on that day... But actively looking for faults to report is risky if you are caught before you have submitted a bug report.
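To make the "own account" case concrete, here is a minimal sketch (Python with boto3; the bucket name is a hypothetical placeholder) of the kind of anonymous-access check being described, pointed only at storage you own. Unsigned requests carry no credentials, so the result reflects what any stranger on the internet would see:

```python
# Minimal sketch, assuming a hypothetical bucket that you own.
# Unsigned requests are anonymous, so this shows exactly what a
# random visitor on the internet could retrieve.
import boto3
from botocore import UNSIGNED
from botocore.client import Config
from botocore.exceptions import ClientError

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

bucket = "my-own-test-bucket"  # hypothetical - only test buckets you own

try:
    resp = s3.list_objects_v2(Bucket=bucket, MaxKeys=5)
    keys = [obj["Key"] for obj in resp.get("Contents", [])]
    print(f"'{bucket}' is listable by anyone; sample keys: {keys}")
except ClientError as err:
    code = err.response["Error"]["Code"]  # e.g. AccessDenied
    print(f"Anonymous listing refused ({code}); bucket looks locked down")
```

Run the same probe against somebody else's bucket without an agreement and you are straight back in that grey area.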
This is the grey area that needs to be covered by the law. If you are a genuine security researcher, you need to ensure that you have your back covered.
A couple of weeks back, in Germany, a security researcher (Lilith Wittmann) found a flaw in the CDU's campaign app. The app let canvassers register the houses they had visited and what they had discussed, so that households weren't contacted by multiple people and follow-up questions could be dealt with.
She reported the bug to the political party. As a thank you for finding a gaping security flaw and GDPR breach, they set the police on her! It created such a stink among the IT community on social media that they quickly retracted the complaint they had lodged with the police.
I think the CDU/CSU probably lost a lot of votes in the IT and Security communities in the upcoming election.
That is why we need exceptions. Even if you do it right and report responsibly, you can still land in hot water.
Maybe accreditation would also help. If you are a registered security researcher, that would at least give you a "get out of jail, until the full facts are known" card.
Obviously, researchers can go rogue or overstep the bounds, so there do need to be checks and balances, but legitimate researchers doing their normal work should not have to live in fear of being arrested.
I put forward a suggestion along these lines quite some time ago. The demonstration of good faith would be notification of the research to a central authority such as the NCSC, and the exemption would be subject to some quite strict rules, with loss of the exemption if they were broken.
What we currently have is merely a defence, which does not offer enough certainty.
Although you sometimes stumble upon a problem by accident - for example, you register for a COVID test and, looking at the URL, you see it is open to direct attack.
Several in Germany were set up to give out sequential test numbers, so if you had your number, you would know the numbers of everybody else tested before you, as well as those currently being tested. Proving that and reporting it ASAP is what's required. By the time the NCSC has been informed, you've already started your research, because you stumbled on a security-101 programming error.
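To show why sequential numbers are a security-101 error, here is a minimal sketch (Python; the endpoint and numbers are hypothetical) of how trivially the neighbouring records can be enumerated once you know your own - which is also why you can "prove" the flaw within seconds of spotting it:

```python
# Minimal sketch of enumerating sequential IDs (an insecure direct
# object reference). Endpoint and numbers are hypothetical; probing a
# live service without permission is exactly the CMA risk under
# discussion here.
import requests

BASE_URL = "https://testcentre.example/results"  # hypothetical endpoint
my_test_number = 104257                          # the number you were given

# Your result lives at .../results/104257, so everyone tested before
# you is simply .../results/104256, 104255, ... - no guessing needed.
for test_id in range(my_test_number - 3, my_test_number + 1):
    resp = requests.get(f"{BASE_URL}/{test_id}", timeout=5)
    # A 200 on an ID that isn't yours means broken access control.
    print(test_id, resp.status_code)
```

The usual fix is unguessable identifiers (random tokens or UUIDs) plus a per-user access check on the server, so that knowing your own number reveals nothing about anyone else's.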
Accreditation would be a better way to go, with reporting to the NCSC when you start a project, if you want. It would also give the researcher a free pass to carry on researching if they stumble on a problem and are waiting for NCSC confirmation.
"Although you sometimes stumble upon a problem by accident"
That doesn't preclude notification of your investigation to an authority. The point of notification is that a register would exist of legitimate investigations. That would distinguish between those with beneficial intent and the others. I didn't specify prior notification. It would have to be done by the time any investigation was complete though, and there would be rules attached (e.g. wrt reporting and release of the results).