Run like crazy
No Microsoft software.
130 posts • joined 20 Apr 2019
Just the nature of NEC; it will never let go of its IP, however biased that stance may be. It just doesn't make sense for ANY country's legal system to outsource the training data set to a foreign agency. Essentially, the UK government just abdicated its responsibility to protect its citizens' privacy and legal rights to a foreign entity.
If we have an embedded device with a tiny web server (e.g. a router, temp monitor, etc.) that supports TLS, do they expect the embedded server to renew its certificates every year or so?
For that to happen automatically, they need to be connected to the internet. However, many of these devices should never be connected to the internet.
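To make the problem concrete, here is a minimal Python sketch of the renewal check such a device would have to run (`needs_renewal` and the 30-day window are hypothetical, for illustration only). Note the hidden assumption it exposes: the device needs a correct real-time clock just to know its certificate is stale.

```python
import ssl
from datetime import datetime, timezone

def needs_renewal(not_after: str, window_days: int = 30) -> bool:
    """Return True if the certificate expires within `window_days`.

    `not_after` is the OpenSSL-style validity string found in the cert,
    e.g. 'Jun  1 12:00:00 2025 GMT'. A device with no network access or
    no battery-backed clock cannot even run this check reliably.
    """
    expiry = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(not_after), tz=timezone.utc
    )
    remaining = expiry - datetime.now(timezone.utc)
    return remaining.days < window_days
```

Even when the check says "renew", an air-gapped device still has no way to fetch the new certificate, which is the whole point.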
You assume that a customer knows what to do. Assume I sell you a million webcams with a guarantee for security updates for 10 years. After 3 years I decide to shut shop and retire. You installed these devices at YOUR customer's premises (a grocery chain?) and you also made a ton of money and retire.
The grocery chain knows how to move vegetables. They have no IT skills.
Old Joe installed the webcam at his home but he got run over by a truck. His widow is now getting scammed by people watching her ...
By that you mean not only consumers but the developers in the Bay Area who are actually building these devices, right? You mention security and they say they don't have the time; they ship the products without a security assessment and even hide their schedules and software from the security team.
If that is the attitude, how can you blame the common man?
"Tech companies have also tried to thwart Clearview's slurping of photos. In February, Google, YouTube, Twitter, and Facebook all served the startup cease-and-desist letters ordering it to stop stealing images from their platforms, and to delete existing pics in its massive database"
The only reason FB, Twitter, and others will object is if they can't monetize that information they have collected. When was FB ever interested in our privacy? They want to sell our information, pure and simple.
Even if it breaks the law.
How many devices have ARM's TrustZone? And how many actually use it? And how many devices actually have keys injected into them? And which software actually exercises those processor features?
The secure element that Samsung is making needs to be USED properly. I'll believe it when I see it.
They are Microsoft. Whatever broken system MS comes out with will prevail. That is how MS has always operated: it starts killing the competition right away by putting a question mark over the viability of its competitors' products.
Find a popular product space.
Make a badly implemented competing product.
Rinse and repeat.
My bet is that they also talked to all the companies with the professed intention of acquiring them and then, after learning everything from them including their revenue model, dumped them and started a project with the same PM who ran the evaluation game.
I think we are putting the onus on the people and assuming that a perimeter mindset and security training alone would work.
I would like to postulate that operations should not be so porous as to allow a simple workstation hack to bring down the castle.
This is really back to basics: badly engineered systems that have been configured and maintained poorly.
I don’t think their business processes have changed even a bit. It is just that the advent of the iPod, then the iPhone, the cloud, the business around cloud services, the rapid growth of better web service APIs, the NoSQL database systems, etc. left MS behind and, for the first time, exposed to the whole world and to themselves what a laggard they were in everything.
Plus the fact that the entire software security ecosystem was really fed by MS. It was only after the DoD twisted its arm that MS even decided to look at security.
Even then, MS is still playing catch-up on securing its systems.
If it weren’t for AAPL or Google, we would still be struggling with Palm PC.
On the embedded side they tried to take over with Windows Embedded.
So, no, your brother has a big heart but the machine will do what it always does.
I audit C code and I hear these mystifying comments all the time:
1. “Once the code is tested we don’t need to have the checks in place.”
2. “These parameters have been checked before.” Yeah, right: probably at a certain point in time.
3. “Prove to me that this is a security issue.”
4. “This code is complex because...”. Trust me, if I can’t read a snippet of code after 40 years of programming no one else should waste their time either.
5. “Only Jack/Jill/Godfather can explain what this code does”.
6. Please add a few more.
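On point 2, here is a tiny Python sketch of the principle I keep arguing for: revalidate at every trust boundary instead of trusting that "the caller already checked". (`copy_record` is a made-up helper, not from any real codebase.)

```python
def copy_record(buf: bytes, length: int) -> bytes:
    """Return the first `length` bytes of `buf`, revalidating the length.

    "The caller already checked length" is only true until the next new
    call path appears. The check is cheap; the overrun is not.
    """
    if not 0 <= length <= len(buf):
        raise ValueError(f"length {length} out of range for {len(buf)}-byte buffer")
    return buf[:length]
```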
Right now I am fighting a Program Manager who doesn’t believe her project needs to fix a Critical website vulnerability.
Don't trust them at all.
What does "using AES-GCM" even mean?
How is the counter mode managed?
How is the key chosen?
How is the key exchange performed?
How are users actually authenticated?
Why are they decrypting at the server?
Are they running analytics or transcription services on the streams?
Can we question their engineers and support staff about their accessing the data? I know we can't; 80% are in China.
Any company that deliberately wrote extra code instead of using TLS for secure streaming is suspect.
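To show what "how is the counter mode managed?" actually asks, here is a hedged Python sketch of one standard answer from NIST SP 800-38D: a deterministic 96-bit GCM nonce built from a fixed field plus an invocation counter, with rekeying forced before the counter can wrap. `GcmNonceSequence` is a made-up name for illustration, not any vendor's API.

```python
import struct

class GcmNonceSequence:
    """Deterministic 96-bit AES-GCM nonce: 4-byte fixed field + 8-byte counter.

    A (key, nonce) pair must NEVER repeat under GCM; the fixed field
    identifies the device/session, and the counter guarantees uniqueness
    until the key is retired.
    """

    MAX_INVOCATIONS = 2 ** 64

    def __init__(self, fixed_field: int):
        self.fixed = struct.pack(">I", fixed_field)  # 32-bit fixed field
        self.counter = 0                             # 64-bit invocation counter

    def next_nonce(self) -> bytes:
        if self.counter >= self.MAX_INVOCATIONS:
            raise RuntimeError("nonce space exhausted: rekey before continuing")
        nonce = self.fixed + struct.pack(">Q", self.counter)
        self.counter += 1
        return nonce
```

If a vendor can't answer where in their stack this bookkeeping lives, "we use AES-GCM" is a marketing phrase, not a design.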
Talked to a famous device company here in the Bay Area. The engineers claimed that no one would be able to steal their IP because their bitstream was so complex.
And no one could hack it and make their devices do dangerous things like killing people because (see above).
Hence they didn’t need to encrypt their bitstream. Nor did they want to sign it.
Is this a joke? Everything can and will go through analytics and maybe speech-to-text.
Routing with a dubious encryption algorithm, clear text on the server, a single key, a virus-style installer on the Mac, etc.
OTOH WhatsApp has been able to achieve those lofty goals, except that it is owned by Zuck. So there is that. But technically Zoom could use the Signal protocol. But then how would they make money? And the Chinese military would not be happy.
All our IoT devices, laptops, phones, etc. are riddled with security bugs, call home features, spyware, etc.
Throw in Zoom. Add printers, scanners, car parts, etc.
Round it out with a large amount of disposable capital.
Isn’t it too little too late?
China has grabbed the end points.
Or is it just the large companies we are worried about?
1. FISA was bound to be abused due to the secrecy
2. The nature of the violations is not clear.
3. Most warrants are requested in a hurry, with no real oversight; they are bound to have issues.
4. Was this review used to clear Carter Page?
5. Is the FBI being dismantled and/or being made the boogeyman?
6. The investigation was run by the very people who support Carter Page.
7. The heads of the organizations are clearly not capable of defending their organizations, as they report to Trump.
I am absolutely against the PATRIOT Act, and the way FISA works is wrong.
But one has to wonder if this is a case of the executive branch neutering our FBI to protect one of their own.
Just saw a note from a person, a technology director at a video streaming company, who said that Zoom’s security issues aren’t a problem because the ease of use is more important.
Someone asked him if he had read Bruce Schneier's blog. He was very dismissive.
Another was a CISSP who stated that the issue was overhyped.
So it is very clear that people don’t care unless they are personally impacted. Like they get fired or lose money.
Intel moved to a RISC core ages ago. It makes sense. The x86 architecture, based (sort of) on the VAX architecture, is successful because of the RISC simplicity underneath. They also have had a lot of expertise in building RISC processors: the 80960 was a processor I worked on.
And as someone pointed out: more ARM processors get shipped than x86 chips.
If you ever have to use AWS, you will see ARM processors as options. I have seen this story before: IBM had to lose market share on the lower end (they were EVERYWHERE) to DEC PDP/VAX, which gave way to Sun SPARCs, which gave way to Intel, which gave way to ARM. All of this was happening at the lower end. And soon, the scale of the invasion overwhelms the entrenched processors.
There are several ways to fight. You can either have a pitched battle (World War I) or drive around the Maginot Line. We chose the direct frontal attack. It has its advantages, especially when you have overwhelming superiority. I guess we don't have the financial clout any more. China can do as it pleases.
A more subtle approach escapes the general US public's view of the world, however. Sometimes I am ashamed and sometimes I think it is our strength. This time it is not the best thing we could have done.
If every CPU vendor could extend the RISC-V instruction set to its liking, we would have an incredible software mess.
Look, I work in security, and one of the major reasons ARM processors are so hard to lock down and boot securely (most hardware manufacturers skip it entirely) is that there may be very little in common between processor families from the same CPU vendor, let alone different CPU vendors. As far as I am concerned, TrustZone is a vague suggestion to CPU vendors to do something. Each CPU vendor implements TrustZone differently.
Every part number may have to be set up differently, and the software has to be customized differently. And a little bit of 'different' is the difference between secure and not secure. The kernel patches are exhausting to maintain.
And that is just about security.
ARM is trying very hard to fix that by shipping its own version of software (e.g. mbed) but that is a hard sell: companies are not willing to be tied to a single vendor of anything, even free.
I can see China using RISC-V to create a whole class of instructions that give them a competitive advantage. The software stack may be the decider, though.
Why would they even do that? There are better ways of generating, storing, and protecting keys in HW during manufacturing. Unless Intel, in its infinite wisdom, decided to 'simplify' this whole process by simplifying the injection of keys.
WTF. This is the basic ABC of a root of trust.
The Unicode standard tries very hard to map characters that look the same across many scripts to the same code point. e.g. many Han (CJK) characters belong to both the Japanese and the Chinese character sets and share the same code point.
In fact, most scripts use the same numeral system.
However, as this article points out, it is still possible to have two very similar-looking characters with different codes. It just slips through, or something that looks like an 'o' also happens to exist in another script. Pulling the 'o' from one script into another can create a pockmarked character set, making many string operations difficult: is 'o' < 'p' if the 'o' is pulled in from another script?
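The 'o' < 'p' problem is easy to demonstrate in Python: Latin 'o' (U+006F) and Cyrillic 'о' (U+043E) render almost identically, yet compare completely differently by code point.

```python
import unicodedata

latin_o = "o"          # U+006F
cyrillic_o = "\u043e"  # U+043E, visually near-identical

# Same glyph to the eye, different characters to the machine:
assert unicodedata.name(latin_o) == "LATIN SMALL LETTER O"
assert unicodedata.name(cyrillic_o) == "CYRILLIC SMALL LETTER O"
assert latin_o != cyrillic_o

# Naive code-point ordering breaks once a look-alike sneaks in
# from another script:
assert latin_o < "p"            # 0x6F < 0x70
assert not (cyrillic_o < "p")   # 0x43E > 0x70
```

The same pair of characters is also the classic homograph-attack ingredient in spoofed domain names.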
There are also political ramifications.
Pulling in characters from other, similar scripts can create a sudden rise in the temperature of the injured party. However, if the characters look similar, it is probably because the two cultures fought a few battles, leading to an exchange of ideas and knowledge.
The idea of Zero trust is to do your OWN security checking and not let someone else (the perimeter) do your checking for you. Perimeter checking has gotten us into a state where 90% or more of IoT devices don't even have a password for authentication. Each component must verify its inputs and outputs. That is just good engineering.
If I want to build a reliable hardware product, I want to check all my inputs (lengths, types, buffers, commands, etc.) rather than have MS Windows verify them for me. Firmware upgrade verification is just another type of verification: just an ECC or RSA check against a public key burnt into OTP/ROM/etc.
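The boot-time flow is simple to sketch. A hedged Python illustration follows; the names are hypothetical, and because the Python stdlib has no asymmetric crypto, HMAC-SHA256 stands in for the ECC/RSA signature primitive — the verify-before-boot control flow is the point, not the algorithm.

```python
import hashlib
import hmac

# Stand-in for key material burnt into OTP/ROM at manufacture.
# (Hypothetical value; real designs store a public key or its hash.)
ROM_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")

def verify_image(image: bytes, sig: bytes) -> bool:
    """Recompute the tag over the full image and compare in constant time.

    A real boot ROM would verify an ECDSA/RSA signature against the
    OTP public key instead of an HMAC.
    """
    expected = hmac.new(ROM_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

def boot(image: bytes, sig: bytes) -> None:
    if not verify_image(image, sig):
        raise SystemExit("signature check failed: refusing to boot")
    # ...jump to the verified image's entry point...
```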
Having done tons of these devices, I have realized that the manufacturers do not want to do any software as they only make money off the sale of parts, not software. Hence the reluctance.
There are solutions ... Some processors have the OTP (One Time Programmable) option to boot even when the signature check fails if a particular line is held high (or low). There are other variations to the theme.
Nothing prevents a manufacturer putting a jumper on the board to help bypass the signature verification. That way, people like us can use the system for white-hat analysis.
Signed firmware is just ONE of the fundamental tools we use to protect against hackers. Without signed firmware, it is hard to prove (impossible?) that what the processor is running is legit. That doesn't mean that signed firmware will protect you from buffer overruns and other memory issues, MITM, etc.
I completely disagree. I have led teams of security engineers where we did secure firmware updates on extremely low-powered devices. These devices run for 20 years on a pair of batteries and communicate using encrypted links (nowadays it is AES).
We have probably 50 million of these devices out there.
It is possible to do these things with careful design and implementation. Many processors don't permit these things directly, so you have to use good design principles.
Furthermore, our devices were evaluated for security vulnerabilities by well-reputed research labs, security testing companies, and certain government agencies.
Biting the hand that feeds IT © 1998–2020