Re: I know they were a bit fraudulent but.....
"...if every overhyped claim by a company resulted in criminal charges most of our stock of executives would be in jail by now."
You say that like it's a bad thing!
The fake invoices tend to get noticed sooner or later. Instead, direct real purchasing and contracts to a company you control, which resells products and subcontracts the work. All at a healthy markup for profit, of course. Sure, the company could have spent less elsewhere, but it's hard to make that into a case for fraud.
Reading the settlement, clause IX.D says that if funds remain in the Consumer Fund (i.e., the money for credit monitoring) at the end of the claims period, they shall be used to lift the Alternative Reimbursement Compensation Cap (the 'up to $125' part). So if no one wants the credit monitoring, there's potentially up to $425 million available as cash – enough to give the full amount to 3.4 million people. Slightly better, though I'd still rather see them forced to set aside $18 billion to fully compensate everyone affected.
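Spelling out the arithmetic behind that 3.4 million:

$$ \frac{\$425{,}000{,}000}{\$125\ \text{per claimant}} = 3{,}400{,}000\ \text{claimants} $$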
That was my initial thought as well. For a general-purpose email filter, it would have to check quite a few languages so as not to block every email written in, say, Italian or Thai. It also needs to be context-aware, so spammers can't avoid triggering the filter by just putting half of Moby Dick in an HTML comment.
Might be simpler to have a check for embedded fonts. If found, render the message into an image, apply OCR, and run filters again on the result.
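As a sketch of that pipeline – every function below is a hypothetical placeholder, not a real mail-filtering API; the render and OCR steps would come from real libraries such as Tesseract:

```c
typedef struct message message_t;   /* opaque placeholders for the sketch */
typedef struct image   image_t;
typedef enum { CLEAN, SPAM } verdict_t;

/* hypothetical helpers standing in for real parsing / render / OCR code */
verdict_t   run_text_filters(const char *text);
int         has_embedded_fonts(const message_t *msg);
const char *message_text(const message_t *msg);
image_t    *render_message(const message_t *msg);  /* rasterize the body  */
char       *ocr_extract_text(const image_t *img);  /* e.g. via Tesseract  */

verdict_t scan_message(const message_t *msg)
{
    verdict_t v = run_text_filters(message_text(msg));

    /* embedded fonts can remap glyphs, so re-check what is actually shown */
    if (v == CLEAN && has_embedded_fonts(msg)) {
        image_t *img = render_message(msg);
        char *text = ocr_extract_text(img);
        v = run_text_filters(text);   /* re-run filters on the OCR output */
    }
    return v;
}
```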
An immediate slowdown or emergency stop at any uncertainty is extremely reckless and will kill people. Not could, will. It sounds like a great idea for a car on an empty road or test track, but think about the consequences if you're being tailgated, or are in dense 70 mph freeway traffic, or have a passenger who isn't wearing a seatbelt.
Remember that the majority of collisions with Google's self-driving cars occurred when they followed the rules of the road as written, but the person behind them wanted to run a yellow/red light.
WINE is also a volunteer project, made by people in their spare time. They also had to reverse-engineer the entire Windows API, including its many, many quirks. Considering that, they've done pretty well.
Instead of re-implementing the same 30-plus-year-old, crufted-together API, I'd rather see them design a new, modern OS with a WinAPI virtualization layer. Somewhat like what Apple did with OS X and the Classic environment.
In addition to an interest in computers, I spend a lot of time dealing with structural steel fabrication. It's astonishing how different the attitudes in these two worlds are. Computer programmers tend to take the approach of "it compiles – ship it". Security, efficiency, and quality in general seem to be an afterthought at best.

Structural engineering is governed by various industry codes, which make frequent use of the word 'shall' and often have the force of law. If a structure fails, and the designer or fabricator did not follow the relevant code(s), they can be held liable for that failure.

I wonder if something similar is needed to tame the Wild West of programming: an RFC or ISO standard that establishes requirements for the design and testing of quality software. I wouldn't want it to be so restrictive that it prevents the development of new languages and methods, but it should at least set a minimum standard for proper engineering of software: the extent of testing, and which bugs identified during testing must be fixed before shipping.
It's probably impossible to prevent all bugs and security flaws, but we ought to be able to do something about this chronic parade of embarrassingly bad mistakes from companies that have the resources to do better.
Virus writers have been doing similar things for years. Over a decade ago, I found an infected website that used similarly obfuscated code to extract the browser and JS engine version numbers and send them as part of a GET request to another domain, which presumably delivered the actual virus. Visiting that domain without providing a vulnerable version returned an empty document. Not too surprising that they'd be looking for other, more subtle ways to identify browsers.
Because they're worse than useless: they make a site look secure, but don't actually make it any more secure than an HTTP-only site. Anyone can write a self-signed cert for any domain, so MITM attacks are easy: the attacker just makes their own self-signed cert, and it looks just as valid as the original.
What happens if and when someone is able to hijack the DNS record? By changing the public key, they can redirect traffic to a site they control which will be 'verified' as the real thing. Putting both address and authentication information in the same record creates a single point of failure.
Many specifications reference other standards. For example, say you want to build a data center. Most jurisdictions will require that you follow the International Building Code (IBC) for the structure, and I'm not even touching fireproofing, electrical, HVAC, etc. here. For the steel frame of the building, IBC says it shall be constructed in accordance with AISC 360. That will in turn require you to follow AWS D1.1, which invokes other standards for several things. Then you have steel decking, rebar, concrete, soil preparation, and so on. Even if the standards incorporated into law themselves are made available, they directly and indirectly require the use of dozens of other standards. Should all of those be made freely available as well?
Most shops don't have the ability to fabricate or program components like this; I'd worry about problems starting much higher in the supply chain. I can see a (probably Chinese) component manufacturer being paid to include ad-injection code. Not terribly different in principle from the bloatware cruft that PCs come preloaded with so often, but much harder to get rid of.
As far as user experience goes, this wasn't a bad idea. It's fairly common in some industries (e.g., restaurants) to give worst-case estimates of wait times, so that customers who are served sooner than estimated are pleasantly surprised. So it's not unreasonable to give the passenger one estimated arrival time while giving the driver a route that will get there slightly earlier, barring unexpected traffic delays.
What will get them in trouble is if they have been calculating charges and payments separately, as the lawsuit alleges. Barring highly improbable macroscale quantum effects, both routes cannot be taken at the same time, so either passengers are being charged for more time and distance than they actually spent in the car, or drivers are being paid for less time than they actually spent driving. Whichever it is, that's a pretty strong case for fraud.
It's the same reason many companies have switched to electronic door locks. When properly implemented, each person has a unique access code. Hard to duplicate, usage can be tracked, access can be revoked without affecting anyone else. Of course, when it's not properly implemented – as in this case – it ends up weakening security.
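A minimal sketch of that model – the data layout and names here are mine, not any particular lock vendor's: one code per person, individually revocable, with every use attributable to a user.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct access_entry {
    const char *user;
    const char *code;    /* unique per person, unlike a shared metal key */
    bool        revoked; /* revoking one code leaves the rest untouched  */
};

/* Returns the matching user (so usage can be logged), or NULL to deny. */
const char *check_code(const struct access_entry *db, size_t n,
                       const char *code)
{
    for (size_t i = 0; i < n; i++)
        if (!db[i].revoked && strcmp(db[i].code, code) == 0)
            return db[i].user;
    return NULL;   /* unknown or revoked code */
}
```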
I've been struggling to come up with a reasonable situation in which one would do this.
If you're sending video from one webcam to multiple recipients, you're probably using a single program to do it.
Using multiple programs for multiple video sources could make sense (for example, videoconferencing on a webcam while sending security camera footage to archive storage), but that situation is unaffected by this change.
I suppose you might want to split a video source if you want to stream live footage over the internet and record it at the same time, but only if you're using brain-damaged programs that can't do both.
A lot of FOSS isn't signed – many developers don't seem to want to bother with the hassle – so the warning isn't too unusual. The only way it would have prevented an infection is if someone had installed the program enough times to notice that it's usually signed, but this time it wasn't.
To be pedantic, organic chemistry originally was the study of compounds found in living things, and inorganic chemistry was everything else. After Friedrich Wöhler demonstrated that urea (a known organic chemical) could be synthesized from inorganic compounds, they had to scrap that definition and redefined organic chemistry to be about carbon instead.
Ads based on a celebrity event, or sports game, or TV show are reasonable. Nobody really likes ads, but they can understand why they're there and no one will raise a fuss about it.
Then someone like David Bowie dies, and everyone talking about it sees ads for Bowie-themed merchandise, and it looks like a crass attempt to cash in on someone's death.
Then you get an incident like what's currently going down in Orlando, Florida, and everyone sees ads for guns. People get upset, it looks like Twitter is happy to profit from a tragedy, and there's lots of drama and PR damage control.
So maybe this isn't the best idea.
The problem with having a software-defined return address stack is that there's nothing to keep malicious code from manipulating it; as far as the processor is concerned, it's just another region of the process' memory. A hardware-defined shadow stack can more effectively restrict access: the processor itself is the only thing that should manipulate this area of memory (as a side effect of call and return instructions), so any attempt to alter it directly can trigger an exception.
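As a rough mental model – a toy software simulation only; a real shadow stack lives in memory that only the processor's call/return machinery can touch:

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy model: in hardware, this region is inaccessible to ordinary stores,
 * so malicious code can't rewrite it the way it can rewrite a
 * software-defined return stack. */
static const void *shadow[1024];
static int top = 0;

void on_call(const void *ret_addr)   /* CALL pushes to both stacks */
{
    shadow[top++] = ret_addr;
}

void on_ret(const void *ret_addr)    /* RET compares the two copies */
{
    if (shadow[--top] != ret_addr) {
        fputs("return address mismatch: control-flow violation\n", stderr);
        abort();                     /* hardware raises an exception instead */
    }
}
```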
I'm not intimately familiar with x86 instructions (I'd rather be dealing with Power or ARM), but it looks like this could be defeated if there's a way to write arbitrary data to the EIP register. Overwrite EIP, call the next instruction, and you've put your desired return address on the shadow stack.
If someone steals the database, they don't need to reverse the hashes. They'll just throw a dictionary file at your hashing algorithm and look for matches. Doesn't take too long to brute-force every password up to 6 or 8 characters long as well. This is why you should be salting the passwords before hashing them, and forcing users to have sufficiently long passwords.
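A minimal sketch of the salt-then-hash step, using OpenSSL's SHA-256 purely for brevity – a real system should use a deliberately slow KDF (bcrypt, scrypt, Argon2) so brute force stays expensive:

```c
#include <openssl/rand.h>
#include <openssl/sha.h>
#include <string.h>

/* Returns 1 on success. A fresh random salt per user means identical
 * passwords hash differently, so one dictionary pass can't crack the
 * whole table at once. Store the salt and digest, never the password. */
int hash_password(const char *pw,
                  unsigned char salt[16],
                  unsigned char digest[SHA256_DIGEST_LENGTH])
{
    unsigned char buf[16 + 128];
    size_t len = strlen(pw);

    if (len > 128 || RAND_bytes(salt, 16) != 1)
        return 0;
    memcpy(buf, salt, 16);          /* prepend the salt */
    memcpy(buf + 16, pw, len);
    SHA256(buf, 16 + len, digest);
    return 1;
}
```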
I suspect the ad networks' inaction is a deliberate strategy, even though poisoned ads have been a known problem for years. As long as they act as a neutral host without filtering anything, they can claim they're not liable for anything that happens. If they try to block bad ads, they could be blamed for anything that they don't catch.
Corporate lawyers can suck snozzberries.
The biggest problem I see with the OpenSSL code is that it leaves you at the mercy of your compiler/optimizer. You have to trust that the optimizer will properly traverse all possible code paths and not strip out the entire if (0) block as unused/unreachable code. It may work fine for whichever compiler and optimization settings the developers used, but there's no guarantee it will work for everyone else.
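A contrived illustration of the worry – do_cleanup() is a hypothetical stand-in, not the actual OpenSSL code:

```c
void do_cleanup(void);   /* hypothetical placeholder */

void example(void)
{
    /* The optimizer can prove this branch dead and delete it outright. */
    if (0) {
        do_cleanup();
    }

    /* Reading the condition through a volatile object forces the compiler
     * to keep the test, and the branch behind it, at any optimization
     * level, since the value could in principle change at runtime. */
    static volatile int zero = 0;
    if (zero) {
        do_cleanup();
    }
}
```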
If Nokia hadn't sold out to Microsoft and killed Symbian, there might still be a viable alternative for manufacturers to switch to. Ironically, it probably would be easier for Win10 to get a foothold if the market were more fragmented between iOS, Android, and Symbian.
Definitely not obvious – at least it didn't end in a 5 – but at the same time, any decent factorizing program would have found 271 fairly quickly, so it's clear they didn't double-check the number in the code for primality. Since one of the factors is so small, my guess is there was a typo of some sort; if I wanted to backdoor an encryption routine, I'd use a semiprime whose only two factors are roughly equal in length (~150 digits in this case), so it would take some significant number crunching to discover that it's not prime.
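Even naive trial division finds a factor that small almost instantly; for a ~300-digit modulus you'd use a bignum library (e.g., GMP's mpz_probab_prime_p), but a sketch with native integers shows the idea:

```c
/* Returns the smallest prime factor of n up to limit, or 0 if none is
 * found below it. A divisor of 271 turns up on the 135th iteration. */
unsigned long small_factor(unsigned long long n, unsigned long limit)
{
    if (n % 2 == 0)
        return 2;
    for (unsigned long d = 3; d <= limit; d += 2)
        if (n % d == 0)
            return d;   /* found a divisor: n is composite */
    return 0;
}
```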
ISP filtering makes a lot more sense. If malicious traffic is detected coming from a particular IP address, they can sinkhole anything coming from it until the issue is fixed. Redirect any webpage requests to an information page explaining the issue and how to obtain tech support to fix it. No backdoors needed, and if they ever finish rolling out IPv6, individual devices can be blocked rather than cutting off an entire household.
11 billion device-hours in December using Windows 10. 200 million monthly active Windows 10 devices. 31 days in December.
On average, each device is being used for 55 hours per month; less than two hours per day. Of course, some get used much more than that, which means many of their 'monthly active devices' are hardly being used at all. Not exactly encouraging numbers.
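Spelled out:

$$ \frac{11\times10^{9}\ \text{device-hours}}{200\times10^{6}\ \text{devices}} = 55\ \text{hours per device}, \qquad \frac{55\ \text{hours}}{31\ \text{days}} \approx 1.8\ \text{hours per day} $$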
Another problem: I'll bet the URI the voice data is sent to is hard-coded in that firmware. Hack the home router (and frequent Reg readers will know how secure those are), set a rogue DNS, and a malicious server can intercept everything it transmits. Knowing how well IoT devices are designed, there probably isn't any attempt to verify the identity of the server it's talking to.
The manual says it will automatically download and install software updates. Hopefully that process isn't vulnerable to the same sort of MITM attack.
2) The size of a struct can be determined at compile time; there's no need to store it in a variable. Hardcoding the value isn't a good idea, as it reduces platform independence, maintainability, and readability.
3) I'm not familiar with the code in question, but 'mtu' is probably a local variable, initially set to the MTU size and decremented as a packet is processed. You could use a 'packet_size' variable instead, but then you'd have to look up the MTU size every time you check for overflow, whereas this way you just check if 'mtu' is negative or not.
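Something like this hypothetical shape – illustrative only, the variable names are mine, not the actual code's:

```c
#include <netinet/ip6.h>   /* struct ip6_hdr */
#include <stdbool.h>

/* Start from the MTU and decrement as the packet is built, so the
 * overflow check reduces to a single sign test at the end. */
bool packet_fits(int link_mtu, int ext_hdr_len, int payload_len)
{
    int mtu = link_mtu;
    mtu -= sizeof(struct ip6_hdr);  /* sizeof is a compile-time constant */
    mtu -= ext_hdr_len;             /* each header eats into the budget  */
    mtu -= payload_len;
    return mtu >= 0;                /* negative means the packet won't fit */
}
```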