At least they are thinking the unthinkable already
“Tech companies that are co-conspirators with our adversaries must be regulated.” Ahem, take note, Tim Cook.
The US House Committee on Homeland Security grilled a panel of experts to understand how foreign adversaries could weaponise emerging technologies like AI and quantum computing in cybersecurity. “The rapid proliferation of new technology is changing the world,” Cedric Richmond (D-LA), chairman of the Cybersecurity, …
Hmmm... Way-points? Guilt by association?
So, what kind of regulation? With everything so intertwined, is every business going to be subject to an InfoSec audit by the government? Or did you have something else in mind? I'm favoring stronger standards and laws that will make liability more enforceable. Even still, this stuff is going to get complicated.
Oh wait, it's already complicated, and too many businesses are really not stepping up yet. Well, hang on everybody; we're in for a bumpy ride.
He warned that components sold in the US, whether networking equipment or smartphones, should be manufactured stateside or in allied countries.
If you accept (as you should) that where dangerous adversaries are concerned, you must address capabilities rather than intentions, this statement is absolutely correct. Even leaving soft- and firmware aside, it is simply too easy to hide nano-nasties in hardware, and incredibly difficult to find those intelligently crafted to lurk dormant until some set of conditions exist. The Internet of Shite means that many erstwhile dumb devices are now net-connected, often for no good reason, and it's entirely plausible that a seemingly gormless appliance, supposedly intended to monitor your stock of chilled dairy, is (a) much smarter than it appears and (b) relaying everything heard and seen in your kitchen to Beijing. Or Maryland.
And when you consider the fantastic complexity of, say, PC mobo design—no individual has a perfect knowledge of every component and connection in one of these things today, nor ever will—there are so many places and methods of sneaking in a hardware nasty no bigger than a blob of solder that it would be brave indeed to claim that one you hadn't personally designed and built was guaranteed free of infection. When you're talking national security like nuclear systems, utilities, military logistics, political data, banking and high-tech research, 'brave' becomes 'foolhardy'.
Pretty soon every country and alliance that can self-source its high tech will do so.
If you're still wondering why, ask yourself a question. Supposing your adversary is an authoritarian, secretive regime with complete control over its citizens, scientists and corporations; that it employs supremely gifted and capable biologists; expends huge resources on research in this area; and that it has developed advanced technology to support these efforts: would you allow it to contribute to your national blood bank?
I'll believe it when we see actual compromising exfiltrated data encased in a blood vessel.
Plus, how does something so tiny exfiltrate data without giving itself away? Try to hide in noise and you can just end up getting lost in noise. Any louder, and sensitive radio equipment, such as in TEMPEST settings, would pick it up.
Yes, jump directly to the "well, so-and-so is doing it".
Instead, concentrate on the fact that everyone is doing it, and if they are not doing it they are falling behind everyone else. We live in a world that is changing by the day, and while I hate the basis of a zero-sum game, that's the one everyone seems to be playing. A government/corporation/nation state/tribal group/net collective/religious group is required by its nature to ensure the success and survival of its group, even if that is to the detriment of every other group; not to partake in research and the escalation of technology, even research that is sometimes unethical, could itself be construed as unethical.
The best defense we can have is open information, transparency in creation, oversight that is also transparent, and sometimes just faith (with verification wherever possible; aka "trust but verify") that the core belief system of your group will prevent the worst of those offenses from occurring.
Instead of concentrating on one group, or justifying your group's actions because so-and-so is doing it, find a way of creating transparency: make tests or regulations that nation states must comply with, create negative consequences for inserting supply chain poison pills, break up over-consolidation of production under one region or power, and ensure competition and thus honesty at the lowest level. If Region A gets identified as introducing firmware changes or board-level bypasses, Region B can instantly replace its supply, thus deterring Region A from doing a dumb action if it wishes to continue operating.
This article gave me an AI password cracking idea:
Take the lists of all the revealed passwords, group them by person, then categorize the groups to predict other passwords a person is likely to use (currently, or next) based on category or pattern matching.
Hopefully someone will pick this up to kill off passwords for good.
In a fashion, that's already what is done by most fast, successful password crackers (i.e. the non-brute-force methods). I don't know the rules for discussing such practices on this site, so I will err on the side of caution and say that tacking AI onto it just overcomplicates something that can be done with present tools.
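For the curious, here's a toy sketch of the rule-based candidate generation that existing crackers already perform via rule files (hashcat's rule engine being the well-known example). Everything here is illustrative, not any real tool's API; the suffix list and leet mapping are just common-case assumptions.

```python
# Toy sketch: generate likely variants of a known/leaked base password
# using the kinds of mangling rules real crackers apply at scale.

# Common "leetspeak" substitutions people use when forced to add digits.
LEET = str.maketrans("aeios", "43105")

def mangle(base: str) -> set[str]:
    """Return common human-style variants of a leaked base password."""
    # Case and leet variants of the bare word.
    variants = {base, base.capitalize(), base.upper(), base.translate(LEET)}
    # Append the suffixes people habitually recycle: digits, symbols, years.
    for stem in list(variants):
        for suffix in ("1", "123", "!", "2020", "2021"):
            variants.add(stem + suffix)
    return variants

candidates = mangle("register")
print(len(candidates))
print("Register123" in candidates)
```

The point of the reply above holds: this is trivially expressible as deterministic rules, so the "AI" in the proposal would mostly be learning which rules and suffixes a given person favours, which the leaked-corpus statistics already encode.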
> Politicians have also recently fallen prey to deepfake attacks, where their likenesses have been manipulated to say and do things they haven’t actually done.
Any citations? I've been waiting for this to explode in the news for years and so far, nothing. It would be trivial to create one of Trump or Johnson saying something sketchy and leaking it. Since TERROR in the news is $$$$$$$$$ leaking that our leaders can be impersonated with ease at any time, accompanied with video footage, would sell papers like hotcakes.
The "Pelosi slurred speech" video is the standard example.
Frankly, I don't think it matters much. These things already exist, but there's been a lot more attention to their possibility than to any of the actual examples.
The electorate is highly polarized, and most people seem well into the stage where they're happy to believe anything that supports their position, regardless of whether it's objectively credible, and disbelieve anything that contradicts it. Those who are still inclined to keep an open mind have mostly learned to be highly suspicious of visual media, regardless of the source. The press has lost what standing it had as a reputable institution, and the half-assed blogging-and-social-media channel that's largely replaced it likely doesn't have much effect either.
I don't think deepfakes have the leverage to affect future elections very much. The ranks of the credulous are swollen, but it's damned difficult to get them to march in a different direction than the one they've already chosen.