* Posts by patrickstar

643 publicly visible posts • joined 1 Dec 2015


Nest's slick IoT burglar alarm catches crooks... while it eyes your wallet


Re: No, it's stupid

Any alarm system with the keypad integrated with the base station and/or siren is pretty much useless.

Or even worse than useless if there's a sticker advertising the specific brand, since burglars will then know which houses to target, knowing the alarm system can be trivially disabled.

And yes - this actually does happen. Including burglars specifically targeting those houses which display particular stickers.

The same applies to wireless systems as well, although to a lesser extent, since radio jamming requires more effort than simply smashing the panel. See for example http://www.thesidebar.org/insecurity/?p=856

Hot chips crashed servers, but were still delicious


Re: My keyboard stupidity.

This is why God's greatest gift to Windows users is Ctrl2Cap.

(You'll find it at Sysinternals and not your local church though)

Guilty: NSA bloke who took home exploits at the heart of Kaspersky antivirus slurp row


Re: You're only as good as 'your weakest links'...

I did start my initial post with a disclaimer saying I don't necessarily support what the security services do, you know...

But my point is - if you want the spooks of your country to have "cyber" capabilities, you need to allow the people doing that to have the tools needed. And those tools do include exploits for 0day vulnerabilities.

And the probability of hurting your own capabilities by disclosing something is exactly 100%, while the probability of hurting even one of your opponents is much less. Unilaterally disarming would be interesting, to say the least, but I'm not sure the people calling for it would be very happy with the result.

Plus the fact that fixing individual bugs often does very little to improve security for anyone, which is always worth hammering into people's heads. If your security is only as good as the "weakest link" (whether that's buggy software or stupid users), you should fire whoever is in charge of it and hire someone who can actually do the job instead.


Re: You're only as good as 'your weakest links'...

Did you actually read what I wrote? Because I specifically explained why that line of reasoning is inaccurate.

Also, exploits aren't "discovered". Bugs are discovered. Exploits are developed to use those bugs.

Finding and exploiting bugs is significantly more work than developing patches. If you report bugs as they are found, in the typical case you will be weeks or months away from having a usable exploit by the time most of your targets have patched.

And, again, since apparently my original message wasn't clear enough: It's pretty rare that the same bug is discovered and successfully exploited independently by multiple actors. So reporting them can be expected to hurt your side much more than the opponents. And it doesn't help your defense much either, since you can be pretty darn sure your opponents have bugs you don't.

This is somewhat related to the "90's mindset" I have lambasted before - that if we just keep fixing bugs after the fact then eventually all bugs will be gone and our problems over. Sorry, but it doesn't work that way.

If your threat model includes a nation-state or similar, you have to assume that there are exploitable bugs you have no idea about in pretty much everything and design proper layered security around that. Your security posture doesn't improve much because of individual bugs getting patched.


Re: You're only as good as 'your weakest links'...

I'm not saying that I endorse all, most, or any of the activities of the TAO or similar spook groups, but I feel like I should point out some obvious things:

1. Reporting individual vulnerabilities does, on its own, little to actually improve security. You can be assured that for each vulnerability known to the NSA/TAO (or any other actor), there's at least one more that's unknown to them. So reporting the vulnerability they have would hurt their own abilities without necessarily hurting the abilities of their opponents.

2. If you want your country to have the ability to spy on other countries, today this means that they must be able to conduct hacking operations. Which means that they must be allowed to stockpile 0days, because having those is an important part of actually hacking stuff.

This, by the way, is not limited to purely offensive/"first strike" actions but also includes defensive things like counter-intelligence and retaliatory actions to discourage future incursions.

It also includes realistic simulations of attacks by foreign powers to test your own intrusion detection and incident response. To do this in a realistic scenario, you need 0days, because you can be damn sure that's what an actual attacker is going to use. (What would the alternative be - hold back on patching so you can use public vulnerabilities? Intentionally introduce vulnerabilities and use those?)

While I'm all for some sort of international utopia where all countries hold hands and dance under the rainbow, this is not how the world works for the foreseeable future. You simply cannot be worried about actions taken by foreign powers, say, Russia or China, and simultaneously want the US to unilaterally "cyber-disarm" (to use a somewhat stupid term). It's not logically consistent.

What they CAN and SHOULD do is stop spying on everyone all the time as described in the Snowden revelations, but this is not very related to stockpiling 0days, or groups like TAO. 0days quickly stop being 0days if you use them for mass exploitation so there's an obvious built-in incentive to only use them for the most important targets.


Re: The mind boggles.

AV software might possibly be somewhat useful for random home users. It might also be useful to scan all incoming mail in a corporate setting and such.

It's definitely not useful in a high-security setting with an advanced threat model. Attackers in that case are much more likely to compromise you through the AV than be stopped (or even considerably hindered) by it.

OVH goes TITSUP again while trying to fix its last TITSUP


Well, there is no backbone network built to your principles.

Even when you build everything important fully redundant, the way the routing protocols work means that a single configuration error or software bug can bring down the entire thing. See the Level3 disaster a number of years ago.

There is also no backbone network built with enough vendor diversity that a single bug (such as, say, configs magically disappearing) won't have widespread effects. When it comes to fancier features, interoperability is still so crappy that you need to stick with a single vendor to use them.

The only alternative would be two separate networks, identical in design but built on different vendors' kit, in a passive/active configuration. And for the obvious cost reasons, no one even considers doing something remotely like this on a backbone-wide level.


Are you seriously suggesting they build three backbone networks instead of one?

Your approach works very well with servers. It doesn't work for networks.

Munich council finds €49.3m for Windows 10 embrace


Re: It's the software, not the OS!

Have you actually tried using any fancy "browser based" software on an older computer?

It's a truly awful experience. (OK, I'd argue that "browser based" software is frequently an awful experience on modern systems as well, but this is far worse)

Even visiting fancier web sites is pretty horrible on say, a 10 year old average computer.

Sci-Hub domains inactive following court order


Re: Streisand Effect

To be honest, I don't think it will lead to any big increase in Sci-Hub's user base.

The reason being that very close to 100% of all potential users already know about it. It's been a big thing in academia for quite some time now.


Re: I am leet hacker

That command line wouldn't even work on an actual Linux system with sudo, since the >> redirection is performed by the original shell, running as the current user, not by the process started as root.

And to top that off, none of my Linux systems have sudo...
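The failure mode, and the usual workarounds, can be sketched like this (the file paths in the comments are just examples; the runnable part below demonstrates the same sh -c mechanism on a temp file, no root needed):

```shell
# Why 'sudo echo line >> /etc/file' fails: the current (unprivileged)
# shell opens /etc/file for appending *before* sudo ever runs, so the
# open is denied. Only 'echo' would get the elevated privileges.
#
# Fix 1: run the whole command line, redirection included, in a root shell:
#   sudo sh -c 'echo line >> /etc/file'
# Fix 2: let a privileged tee do the writing:
#   echo line | sudo tee -a /etc/file
#
# The sh -c mechanism, demonstrated without root on a temp file:
f=$(mktemp)
sh -c "echo appended-by-subshell >> $f"
cat "$f"
rm -f "$f"
```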


Re: re: I think the advantage is supposed to be ...

I have access to a major university library with subscriptions to basically everything that's on Sci-Hub. I still use Sci-Hub - it's much, much quicker and easier than getting articles from the publishers' sites.

'Gimme Gimme Gimme' Easter egg in man breaks automated tests at 00:30


Re: Unprofessional bollocks

The bear was the mascot for Windows 3.1

See https://blogs.msdn.microsoft.com/oldnewthing/20030818-00/?p=42873/

Samba needs two patches, unless you're happy for SMB servers to dance for evildoers


And I thought only the Windows SMB implementation had vulnerabilities?

Microsoft's memory randomization security defense is a little busted in Windows 8, 10


Older versions of Windows are available on a lot of platforms as well.

Itanic, PPC, MIPS, Alpha, some archs I forgot...


Re: yet ANOTHER reason

Win 10 is actually a major security improvement over 7...

And unexpected behavior in a non-standard ASLR configuration isn't exactly the end of the world or a huge security lapse. I suspect the reason this wasn't noticed earlier is simply that there isn't much software left that's not built with DYNBASE (ASLR opt-in)...

And the ONLY issue here is a user interface issue in Exploit Guard - nothing else.

Some 'security people are f*cking morons' says Linus Torvalds


Most OSes don't even have an OOM killer, and yet they run just fine. It's an example of one of the many trade-offs that might very well be valid for some scenarios but reduce reliability in others.


Re: Linus Torvalds is not a Security Expert

So, let me get this straight - you seriously think you are better off getting compromised than the system crashing? What if the attacker grabs all your data, wipes the disks, and then crashes the system anyways? The potential damage from an intrusion is unlimited - the potential damage from a crash is limited and manageable.

And why do you think there even is a concept of 'kernel panic', the BUG() call in the Linux kernel, etc? Sometimes the system simply can't continue.

What if something has corrupted random memory, for example? That could include disk buffers, so it ends up writing garbage to the disk.

Or what if it just enters one of the many possible weird twilight states where things just fail at random? Try troubleshooting that at 3 AM - I have, far too many times, and would certainly prefer a kernel panic any day.

Still think it's preferable to keep running? If so, are you by any chance utterly insane?

And I take it you haven't read the patch and understood what it does? The kernel would be very likely to crash shortly afterwards anyways if the type of bug it's meant to detect is triggered. If anything having a deterministic crash with a known cause would make post-mortem debugging a lot easier and thus help avoid crashes in the future...

And Mom says I can't have big boy pants yet :-(


Re: Aircraft Engine Example

If you think a CNC machine counts as "hard realtime", or that Linux is a "hard realtime OS", you obviously have no idea what the term means.

You can even do things that are 'harder' than controlling a typical CNC machine (the specifics differ depending on the exact hardware in the CNC, of course), like bitbanging various serial protocols, from a Linux driver/kernel module or even userland (see iopl(2) and the CLI/STI x86 instructions). This is not what constitutes a hard realtime OS.


Probably. Cook is for all practical purposes an idiot when it comes to anything security related. I'd guess he just took the entire idea from Grsecurity and re-implemented it poorly without understanding the full implications... That's what he usually does when it comes to kernel hardening, at least.


Re: Security has become a buzzword for non security groups.

Those computers are totally separate, although there have been some interesting attacks where you can travel from the media center to more interesting stuff over the CAN bus.

And with the possible exception of Tesla, no car runs Linux on the actual ECU. The ECU typically runs some custom OS - I think Bosch is the most common vendor.


The Windows kernel is already very modular. Much more so than Linux in fact. It's basically designed as a microkernel (though everything runs in ring 0 for performance reasons).

Plus, the code of the kernel itself is a LOT cleaner than anything in the Linux kernel. (Note that this does not apply to things like Win32k and some of the drivers - they are pretty hairy.)

Maybe you should read both kernel sources and compare them before trying to be funny? Or at the very least Read The Fine Wikipedia Entry: https://en.wikipedia.org/wiki/Architecture_of_Windows_NT


Re: " allowing 'buggy' processes to run"

The kernel would be pretty darn likely to crash if a bug like what this patch is targeted at would be triggered. This just crashes it in a way that doesn't turn it into a security vulnerability (plus simplifies debugging since it immediately tells a developer what's wrong and where the problem is, as opposed to crashing some random time later).


This doesn't apply to userland. Obviously, userland stuff should never be able to kill the kernel.

The only changes required are within the kernel itself - so unless you are a kernel developer you don't have to care one iota. And if you are a kernel developer, there are regularly breaking changes made between versions, so it's not like you could sit around twiddling your thumbs if it wasn't for this.

If your code is in the mainline kernel, Kees Cook has already made any required changes for you.

Userland code won't be affected in any way. It won't have to know anything about this and no observable behavior will change in any way (well, the kernel will panic in case it triggers certain kernel bugs, but it probably would have panicked regardless).


Re: Linus Torvalds is not a Security Expert

Note: 'stock' Linux kernel.

As in straight from a vanilla distro or whatever, with standard config and no customization.


Re: Linus Torvalds is not a Security Expert

No. Have you even read the patch?

This is not some signature-based engine to detect kernel exploits or whatever you seem to believe it is.

What the patch intends to do is basically restrict copies to/from userland to the memory regions where it's valid to do so. (You do know what that sentence means, right? *)

This is not something which is going to have "false positives" that randomly and unexpectedly show up sometime in the future. If it's properly implemented ( == all relevant areas whitelisted) then if it ever triggers it's because of an actual kernel bug (== trying to copy outside that area, either intentionally or unintentionally). If you introduce a new potential area as part of adding some code and forget to whitelist it, it's not going to randomly cause the computer to turn into a bomb later. It's going to crash the very first time, 100% of the time, that you try to run your shiny new code.

It's not some fuzzy guess or Bayesian logic. Either an address is within those areas or it's not.

The only scenario where you'd have actual "false positives" would be if the CPU ended up doing something other than what the code actually says due to some hardware issue, and that's obviously not related to the code itself.

* Well, either you don't, or you have a much wider definition of "false positives" than I do. Mine should have been perfectly clear from the mention of 'properly implemented'.


Re: ... most projects don't have project managers like Linus Torvalds.

Most Linux kernel developers are in fact paid to do so*. Just that it's typically not Linus who pays them.

Many work at RedHat, IBM, Intel, etc - or Google like Kees Cook. This is literally his dayjob, not something he is fiddling with as a hobby.

* Well, probably not most in terms of total number of contributors. But definitely in terms of total lines of code.


Re: Aircraft Engine Example

Yes - but not for any of the hard realtime stuff. Linux is not a hard realtime kernel any more than say FreeBSD or Windows is.

Typically for the smallest systems you don't really have an OS, just a scheduler and some libs. Or something like VxWorks, which is just one step above that.

For the larger ones you normally use a microkernel like QNX or Integrity RTOS. There's also RTLinux which is a (commercial) microkernel that can run Linux as a preemptible process. Integrity has some virtualization stuff as well that lets you run hard realtime stuff in one VM and Linux in another.

There's the PREEMPT_RT patch for Linux of course which does improve the timing characteristics of standard Linux and is usable in some scenarios, but it's a far cry from a full RTOS. And you would definitely use a very custom kernel for that kind of task, so whether or not stock Linux is too kernel panic happy on certain errors isn't exactly relevant.


Re: Did Google implemented it on its servers to test it fully and at scale?

Does Google actually have fanboys? I thought it only had victims, i.e. those who have given up and surrendered to the almighty Google overlords.


Re: Build statues in honor of Linus

Kees Cook is not some diversity hire.

He's a long-time Linux kernel developer and head of the Kernel Self-Protection Project.

Not that he has been doing a particularly good job at that, or shown much security clue, but certainly more clue than Linus.


Re: Aircraft Engine Example

Actually, pilots do reset (read: power cycle by pulling circuit breakers) the computers in planes to resolve various issues as part of the standard checklists.

But nitpicks aside - critical embedded systems (and not-so-critical as well) use hardware watchdog timers for exactly the reason that you can't trust the software to never, ever crash. Even if you had guaranteed 100% absolutely perfect software/firmware, there are still scenarios like voltage spikes, cosmic rays, slightly off-spec-components, etc that can cause random glitches.

You'd never get something like an engine controller approved if it didn't have a proper hardware watchdog. And probably not if it was running a standard Linux, BSD, Windows, etc. kernel either. To start with, none of them are hard realtime systems.

Arguing about the optimal behavior in a safety-critical system isn't even slightly relevant for a general purpose OS kernel.


Re: Linus Torvalds is not a Security Expert

If you run hard safety-critical systems on a stock Linux kernel, you are in for a world of hurt anyways.

As for false positives - with this kind of mitigation there are no false positives. If it's properly implemented, triggering it means there's a kernel bug and not someone joking around in userland. If the system continued to run without the mitigation, it was sheer luck, and you don't know for how long. At least in the case of copying TO the kernel. In the case of copying FROM, it's the userland process triggering it that's gonna malfunction instead.


Re: Design

SELinux has little to nothing to do with exploit mitigation. It's an access control system. In the normal case, it doesn't stop someone from pwning the kernel and disabling SELinux - in fact, kernel exploits regularly do this.

To be fair, there are some scenarios where a proper SELinux ruleset can prevent you from getting to the point where a kernel exploit can actually be executed, but it's not the main purpose.


Re: Design

This is Linus displaying exactly the attitude that many people (me included) have been complaining about for well over a decade.

He is stuck in a 90's mindset when it comes to security.

Back then it was a common delusion that we could somehow just fix/avoid all memory corruption bugs and introducing mitigations (even from the start with the very first implementations of noexec stack, what later expanded to DEP) was seen as somehow being "impure". Most people have advanced since then, but apparently not Linus.

He has grudgingly accepted SOME mitigations due to outside pressure, but clearly he hasn't understood why they are actually needed or why lots of work remains to be done.

What others have realized is that there are always going to be bugs in this kind of software. Some of them will turn out to be exploitable security issues. Even if you somehow magically fix all of them at some point in time, new ones are going to be introduced.

And the proper mitigations can be very, very effective at preventing exploitation. Sometimes you can kill entire bug classes. Other times it makes exploitation less reliable ( == more likely to draw attention due to stuff crashing), more complex ( == raising market prices for exploits thus reducing the amount of attackers having access to them, and making the rest less likely to risk them against all potential targets) and/or require chaining bugs and thus requiring new exploits as soon as one of them is killed.

There aren't fewer security issues in the Linux kernel now than, say, 10 years ago. This in itself should be all the evidence needed to conclude that exploit mitigations are needed.

And yes, security issues are fundamentally different from other bugs. Not only because of their potentially severe (unlimited) damage, but also because of how they should be dealt with. You shouldn't just fix them and move on. You need to actually learn from past bugs to prevent introducing similar ones in the future, and to catch those that slip by earlier.

Now that we are living in a world where your adversary might very well be an intelligence agency with unlimited funding, and not just some random kid or criminal gang, proper software security - where exploit mitigations have an important role to play - is more important than ever.

Though, Kees Cook doesn't exactly have a stellar record when it comes to kernel security work, so I'm sure this patch is crap for other reasons...

Does UK high street banks' crappy crypto actually matter?


Re: 2 Factor Authentication

In addition to the issues with the phone company and the networks themselves (SS7 and other vectors), there's also the fact that nowadays the phone might very well be the same device they are banking from. Or it might be hooked up to the computer used for banking regularly and thus be susceptible to being compromised that way. So even non-SMS based schemes are off. Plus you can't expect ALL your customers to have smart phones. No, really!

It's better than nothing, but still... While I am not much into the whole 'false sense of security' thing in this case - it's clearly better than just a password - if you're gonna roll out 2FA you better do it properly from the start. Getting people tokens and instructing them in using them isn't THAT much more work than getting everyone's cellphone numbers and instructing them in how that works.

You really want an actual separate hardware token. That also means far fewer opportunities for things like shoulder-surfing PIN codes, since you enter them a lot less often and probably not in a lot of public places.


As far as I know, not a single cent has ever been stolen because of sub-optimal TLS settings...

The real push should be to enable 2FA - I've actually seen claims that not all UK banks have it...? Totally absurd if that's actually the case.

Inside Internet Archive: 10PB+ of storage in a church... oh, and a little fight to preserve truth


ITYM "and if it's accessible via SciHub".

Easy to automatically find PMIDs and DOIs and link them straight there as well...
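A rough sketch of that kind of automatic linking - the DOI pattern is a simplification, the identifiers in the sample text are made up, and the Sci-Hub domain is illustrative only (they rotate, as the article above demonstrates):

```shell
f=$(mktemp)
printf 'see doi:10.1234/example.5678 (PMID: 12345678) for details\n' > "$f"

# DOIs all start "10.<registrant>/", so a simple pattern catches most:
grep -oE '10\.[0-9]{4,9}/[A-Za-z0-9._-]+' "$f" |
    sed 's|^|https://sci-hub.se/|'

# PMIDs are just the digits after the "PMID:" label:
grep -oE 'PMID: *[0-9]+' "$f" | grep -oE '[0-9]+' |
    sed 's|^|https://pubmed.ncbi.nlm.nih.gov/|'

rm -f "$f"
```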

Parity's $280m Ethereum wallet freeze was no accident: It was a hack, claims angry upstart


Re: Piece of p**s to think up a new crypto currency.

I doubt the customers had signed an agreement that let the bank hold their money and later confiscate parts of it to pay off their own debts (both things happened, by the way).

You are not allowed to use funds held for customers to prop up a failing business - end of story. If this wasn't banks but an actual sane line of business, the executives would rightly be in jail and personally liable for the full amount they stole and/or gambled away from their customers.

I can't believe anyone here is actually defending a bunch of crony capitalists colluding with the governments to make money at everyone else's expense...


Re: Piece of p**s to think up a new crypto currency.

Try telling the people who had money in Cyprus during the crisis that getting robbed by Merkel et al. wasn't so bad after all.

WikiLeaks drama alert: CIA forged digital certs imitating Kaspersky Lab


Re: HTTPS much?

This has nothing to do with browser security. It's the cert used by the backdoor when it's phoning home. If someone tried serving up an HTTPS web site using it, the browser would rightly flag the cert as invalid.

The only purpose is to look a bit better if someone sniffs the traffic. Unless you actually verify the cert - which network monitoring tools typically don't - it'll look like it's just a Kaspersky AV product phoning home.

While I agree that TLS in general and the entire CA security model in particular is fundamentally flawed, unfortunately it's the only universal thing we have for encrypting HTTP traffic for the foreseeable future. Even just using self-signed certs is many, many times better than sending the traffic unencrypted, since at the very least you now need an active attack as opposed to passive traffic sniffing to see it. Plus you get forward secrecy if the proper TLS magic is supported by both parties.

US government seizes Texas gun mass murder to demand backdoors


Re: Easy to crack (for any governments engineers)

No. Provided that he just used a PIN and not a proper passphrase, all you have to do is at most bruteforce the PIN. Depending on the particular attack there might be a limit to how fast you can actually try PINs, but a 4-digit PIN would still be crackable within a reasonable time period. Like, let's say it takes you 10 seconds to try each PIN. Then it'd still just be a bit more than 24h. A lot of people, me included, have even cracked 4-digit PINs (or scanned the similar amount of phone numbers in the phreaking days, etc) by hand in various contexts.

The whole purpose of this 'secure enclave' thing (at least in this context) is that it holds the actual encryption key but won't give it up without the proper PIN, while also enforcing limits/delays on PIN attempts. This lets you achieve a decent security level (far more crackable than 'proper strong' encryption, but still needs time/decent budget) without having to enter anything more than a PIN to use the phone.

As to the previous poster, what you describe is probably a NAND mirroring attack. This indeed worked against older iPhones but doesn't work against the newer ones. Search the Fine Web for details.

I don't do iPhone stuff, but I think that now you either need some software/firmware/hardware bug, or have a long nice chat with the surly uncooperative chip using pretty darn expensive chip reversing gear (SEM, FIB, yadda yadda).

Don't worry about those 40 Linux USB security holes. That's not a typo


Re: Another day, another bug

What I meant to say was that if you write software without any regard for security - with things like basic overflows in unbounded string operations (strcpy, strcat, sprintf, et al.), spawning external processes in insecure ways, etc - chances are the bugs will go undiscovered longer if it's closed source than if it's open source. While they are normally pretty easy to find by fuzzing, firing up grep (or whatever) on the source tree is still less effort than implementing whatever protocol it speaks in a fuzzer.

But that's not the kind of bugs that plague any major software today.

Any argument that either open or closed source is inherently more or less secure is, of course, totally bogus. But you can definitely draw the conclusion from today's vulnerability landscape that there's no inherent all-encompassing advantage for open source projects.

And to elaborate on my previous argument a bit: I really, really doubt there's significantly more people studying Chrome or Firefox than Edge source for security vulnerabilities before a version hits production. You simply can't rely on people doing this kind of work for free. And these projects are basically entire universes of their own - you need to have lots of experience with each particular project to be able to audit them in any meaningful way.

Regardless of the source code availability or distribution model you need to have a solid security team that are actually in the loop about all the internals.

You can't rely on the mythic "lots of eyeballs" because there's simply not gonna be a lot of people reading that kind of source code at all, let alone people who are good at finding vulnerabilities in it. And you need solid exploit mitigations in place because there ARE gonna be vulnerabilities that get discovered by 'bad guys' long before any 'good guys', as long as you stay within current software development paradigms.

Google has clearly understood this, probably Firefox too.

For the record, I hate Google with a passion and avoid their products like the plague, but at least their security efforts are pretty good when it comes to Chrome... Certainly a lot better than many open source projects AND commercial closed source vendors (not MS on a good day, though).


Re: Another day, another bug

Remember what happened last year? No?

A Linux kernel bug that's been in there for 10 years was fixed. Because it was caught in the wild. Turned out Bad Guys <TM> had been using it for many, many years until finally their luck gave out...

Gazillions of people had read the vulnerable code, but no one except whoever wrote that exploit ever spotted it before.

Very few people are good at finding security vulnerabilities in software, even with full source code availability. Even fewer are willing to do this painstaking work for free. Open source might have an inherent advantage when it comes to the really simple stuff you can basically 'grep' a codebase for (simple overflows and such), but that's not very relevant for any major software these days.

Apart from kernels, the most attacked software is probably web browsers. Firefox and Chrome (both open source) bugs are certainly caught in the wild with some frequency. Edge/IE bugs are certainly not more frequent in the wild, though comparisons are hard because of their different popularity and usage patterns (Chrome has bigger market share, Edge/IE probably a higher % of interesting targets because lots of enterprises are stuck on them).

And most bugs that are discovered by "good guys" in those are found by fuzzing, not reading the source code.

In case you don't know (since you obviously have no software security background whatsoever, considering the statement you just made), fuzzing is not dependent on having the source code. Admittedly it does help somewhat, because it means you can build with ASAN et al., but apart from that it only helps with root-causing, which is only relevant if you are either an attacker or trying to fix the bug.


Re: Wakey, wakey.

Like I have explained, there are also lots of scenarios where an attacker DOES have physical access but it's either not complete (like just having access to the screen, keyboard and some USB ports), the attacker only has a small amount of time (walking past a computer and sticking a USB stick into it vs. having to take the damn thing apart or at least rebooting it), or simply that the disk is encrypted, the console locked, and you need to get the encryption key from memory to be able to access the data.

The whole "you're screwed if an attacker has physical access" near-truism simply doesn't apply for all values of "physical access", "attacker" and "screwed".

If it was universally true, there wouldn't be any reason to use full-disk encryption. The entire purpose of that is to protect against attackers with physical access, after all.

Or even the real classics like screen lockers, console login prompts, BIOS/bootloader passwords, or lockable computer cases. Much less tamper-resistant hardware or the like, where you can actually put a figure on how much time/money is needed to bypass it.

Threat models and layered defense, you know...


Re: Meanwhile...

Imagine if the disk is encrypted, no user is logged in and/or the screen is locked, and you can't do a cold-boot attack. Now the ability to execute code by plugging stuff into various orifices (on the computer, not on yourself) suddenly becomes a very, very relevant security issue.

Not to mention various kiosk-mode systems with exposed USB ports...

Or perhaps you ever plug in USB sticks that have been used on another system? Then this also suddenly becomes a security issue. A wormable security issue, even.

As is publicly known to have been done against Windows systems in the past - see Stuxnet for the most famous example. I wouldn't be surprised if these and other vulnerabilities have been exploited against Linux as well; they just haven't made headlines... or perhaps were never even discovered by the victims.


Re: Wasn't that the primadonna maintainer project

Linus has stated in public that he does not consider security vulnerabilities any different from other bugs. That's a pretty apathetic attitude to security concerns in my book...

And he has basically told real experts trying to improve the security of the Linux kernel to go fuck themselves (probably not literally - I'd expect him to use much more creative insults than that). See, for example, his refusal to interact with the grsecurity guys in any meaningful way, and the half-assed Kernel Self Protection Project that followed public pressure to improve the situation (which, by the way, is most certainly not composed of 'real [security] experts').

Plus, black hat kernel security wizards are paid handsomely for their efforts at doing black hat kernel security stuff nowadays. You can't just ask them nicely to start doing work for free instead and expect anything but a chorus of laughs.

'Lambda and serverless is one of the worst forms of proprietary lock-in we've ever seen in the history of humanity'


Re: "really depends on one enorrmous binary firmware"

Are you assuming genders just because of the username "JulieM"? What if the person identifies as an attack helicopter, with the preferred pronoun "it"?

Anyways - just wanted to point out that binary blobs to get random HW going aren't what people are really complaining about with the Pi. It's the fact that the entire fundamental firmware / "BIOS" is closed source and locked down. You can even get x86 systems that are far more open than the Pi (see libreboot).

You wouldn't see people bitching (as much) if you just needed some blob to get graphics going for example, and could just avoid it if you don't need that - or even better, get basic unaccelerated graphics without it. But as it is now, if you would somehow manage to remove the closed source stuff the Pi wouldn't just fail to boot, it would never even execute a single instruction.


Re: AWS Lambda “lock-in”

I wouldn't consider using a language where you don't even have to declare variables for anything where actually important things are at stake...


Re: "really depends on one enorrmous binary firmware"

This - i.e. pluggable firmware for various hardware - is not (only) what he's referring to.

The entire GPU firmware on the RPi is closed source, to the point where you can even buy licenses (!) to enable additional features (like the MPEG2 decoder). The hardware is there, it's just disabled until you convince the firmware that you've paid for using it.

And there's no way around using the GPU and thus running closed-source code, because the GPU is what actually initializes and boots the Pi from the SD card (yes, really, the GPU, look it up!). It basically does the job of what would be BIOS/EFI on a PC. And there's no "libreboot" or whatever equivalent for the Pi.

The Pi is far from an open platform despite schematics and such being available. I get it - this was probably the only reasonable option given the price point - but if you're an open source purist you most definitely shouldn't use the Pi.

I doubt the GPU gives up its firmware easily, so it's actually worse than running closed-source software in general. At least in the software case you have the option of reverse engineering it, without having to bring out a scanning electron microscope and a focused ion beam workstation.


Re: AWS Lambda “lock-in”

Why would you ever want to write anything that's actually important in either Python or JavaScript?

They are, beyond reasonable doubt, two of the worst languages for the purpose.

I know you can compile other languages to JS, but still...