Amen to that
The right to repair laws forcing hardware firms to open-source their firmware sounds about right.
Nothing more to add, really.
Most data theft does clear harm to the victim, and often to the victim's customers. But while embarrassing, the cyberattack against MSI, in which source code was said to be stolen, is harder to diagnose. The code looks like a valuable company asset that's cost a lot to develop. That its theft may be no loss is a weird idea. But then, firmware is …
There's no firm foundation for firmware any more.
A clear understatement. There never was any user-based foundation to keep firmware hidden and proprietary. The reason to keep it that way is so vendors can enforce lock-in and (planned) obsolescence(*). It is purely a protection racket, and it always has been.
We should not only open up the software side; we should also revive the "old" tradition of shipping the (electrical and mechanical) schematic diagrams with the device. Manufacturers should be obliged to supply service manuals for free. That opens a new market for manufacturer, product and user alike.
(*)Vendor arguments about "security" and "safety" are security-by-obscurity and safety-by-obscurity arguments. They always fail scrutiny.
Should the firmware for a safety-critical sensor be made available so the device can be hacked?
If so, how does anyone who ends up with one of the devices know that the software has not been modified and that it still satisfies its functional safety requirements? Who is legally responsible if the failure of a modified device leads to an accident?
Or are we only talking about firmware in "consumer electronics"?
Quite. As someone who works in the industrial/commercial embedded systems sector, I'd willingly participate in such an open-doors policy only if:
1. Full responsibility for any issues that arise from use of the product following a third-party modification were borne by the developer responsible for providing said modification;
2. Any existing warranties, certifications, regulatory approvals etc. were voided immediately upon any third-party modification; and
3. Any damage (reputational, financial etc.) caused to the original manufacturer as a result of being associated with such issues were considered reasonable grounds for bringing legal action against the third-party developer and/or anyone else involved in causing such damage.
The television set repair shop could repair your (old) television using the available schematic diagrams and skill. The manufacturer has no obligations when a third party repair shop has been contracted. If the television blows up after repair, then the repair shop is your party to talk to.
Same with modified firmware. The fact that you can change your own devices (or let someone else do it for you) has no bearing on the manufacturer's responsibility. The manufacturer is off the hook once the device is modified. As simple as that. If your modification kills everyone in your vicinity, then you will be held responsible for that. Just like you'd expect.
There may be legal limits to modifications when you use a device publicly. Just like a car must adhere to certain protocols. You may do what you like at home. But taking it out on public roads has limits. The same goes for your gadgets, devices and sensors.
A case given in a class held otherwise. The original manufacturer of some 1930s equipment was purchased by another firm. That firm failed in the 1950s and portions of it were purchased out of the bankruptcy. Meanwhile the equipment originally sold had been modified at some point and resold several times. An accident occurred, a negligence claim was filed, and the jury held the claim against the company that had purchased the assets in the bankruptcy to be valid. Currently the herbicide Roundup is being held to be the cause of non-Hodgkin's lymphoma, and ads for legal teams (plecos) run rampant despite Oregon State University finding no link. Nor was a link found between DuPont's silicone used for boob jobs back when. A culture has arisen of "it's not my fault": get intoxicated, drive your vehicle into a wall, and a legal animal will be present to file your claim that the wall should not have jumped out in front of your vehicle.
If the right to repair fully takes effect, there should concurrently (in the US) be a major revamp of negligence laws. If not, nothing will be manufactured in the US, not that the Chinese would mind, and if pestered, off would go the phone.
In response to your company disclaimers, it's perfectly fair to disclaim responsibility for others' deliberate events outside your control.
Consider the similar case of engine remapping... ECUs with modified software will invalidate not only warranties, but insurance too. Accident investigations will go as far as extracting code images if necessary.
However, it is possible to publish your code for expert scrutiny without revealing how a particular unit might be reprogrammed.
We assume your unit does not blindly accept any update: it has some sort of signing, crypto, secure boot, and you've disabled JTAG...
This means that units cannot be intentionally "hacked" by the user - unless the secret key is requested, and given.
Moreover, if the relevant secret key is disclosed to the legitimate customer, all warranties and responsibilities can be voided, being conditions of the release. This unburdens the manufacturer from the need to prove that new hacked software was installed.
So, we can separate the code-inspection aspect - a wanted outcome, from the arbitrary reprogramming by users - an unwanted outcome, unless authorised.
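The signing-before-flashing idea above can be sketched in a few lines. This is a hypothetical illustration, not any real bootloader: the key, image bytes and function names are invented, and it uses a shared-secret HMAC purely to keep the sketch self-contained, where real secure-boot schemes use asymmetric signatures so the device holds only a public key.

```python
import hmac
import hashlib

DEVICE_KEY = b"provisioned-at-factory"  # hypothetical secret, burned in at manufacture

def sign_image(image: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """What the vendor's release process would do: tag the official build."""
    return hmac.new(key, image, hashlib.sha256).digest()

def accept_update(image: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """What the bootloader would do before flashing anything."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    # Constant-time comparison, so the check itself doesn't leak the tag.
    return hmac.compare_digest(expected, tag)

firmware = b"\x7fELF...official build"
good_tag = sign_image(firmware)

assert accept_update(firmware, good_tag)            # genuine update: flash it
assert not accept_update(b"hacked build", good_tag)  # modified image: rejected
```

Releasing the source changes nothing here: without the signing key, published code still can't produce an image the unit will accept.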
... Should the firmware for a safety-critical sensor be made available so the device can be hacked?...
Bit of a loaded question. I’d say that the firmware should be made available so the device *can’t* be hacked…
Code that has been or can be reviewed is always going to be stronger, eventually. The open-source community has immense knowledge regarding which methods and approaches have been easy targets in the past – like for instance all the buffer-overrun mechanisms. These are easily eliminated with the right toolchain, with bounds-checking, address randomisation – but the adoption of better techniques is slow. It’s very much a case of “stuff that doesn’t get checked, doesn’t get done”.
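The buffer-overrun class of bug mentioned above mostly comes down to trusting an attacker-supplied length. A hypothetical sketch of the bounds-checked alternative (the record layout, a 2-byte big-endian length prefix plus payload, is invented for illustration):

```python
import struct

def parse_record(buf: bytes) -> bytes:
    """Extract the payload of a length-prefixed record, with bounds checks."""
    if len(buf) < 2:
        raise ValueError("truncated header")
    (length,) = struct.unpack_from(">H", buf, 0)
    if 2 + length > len(buf):
        # The classic overrun: copying 'length' bytes without this check
        # would read (or in C, write) past the end of the buffer.
        raise ValueError("declared length exceeds buffer")
    return buf[2:2 + length]

assert parse_record(b"\x00\x05hello") == b"hello"  # honest record

try:
    parse_record(b"\xff\xffhi")  # lies about its length: rejected, not overrun
except ValueError:
    pass
```

Bounds-checked languages and hardened toolchains effectively insert that `if` for you on every access, which is why whole bug classes disappear when they're adopted.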
The downside to publishing your code is that malevolent hackers have easier access – but the real threat actors don’t need it, they’ll work it out anyway. Take a look at the iPhone jailbreakers (I’m not saying they’re evilly intentioned) – but they are able to break all manner of unpublished state-of-the-art security measures, in just a few days.
Relying on “obscurity” has never been a successful approach, you must assume the attacker knows everything about the system and the code, apart from the secret keys. This is how cryptographic validation-attacks are set-up.
However, it takes time for code to be reviewed and flaws found. The maker of the “safety-critical sensor” in this case will need to delay release and/or update devices in the field.
In fairness, “safety-critical” and “secure” are independent requirements, pulling in opposite directions.
Many safety-critical interfaces and protocols are not cryptographically secure - there’s a lot more to go wrong, which affects reliability. They often simply rely on network security to ensure that there are no bad actors sending out spoof messages.
That hasn’t worked out well for factory-automation busses – SCADA, Modbus and the like, if Stuxnet is anything to go by.
So, coming back to the question, it’s tricky…
The safety-critical aspect of the device is better if reviewed, and releasing the code won’t reveal anything of use to an attacker – he already has details of the protocols and can mess-up the system using just those.
The firmware-update protocol needs to be secure, in the cryptographic sense. This is also improved by review, and releasing the code only puts all attackers in the “known starting position” against which security is measured.
The only real cost of release is that hackers might more easily find a backdoor, an exploit, that can bypass the secure-update method. I’d say that your real threats can already get all the information they need, with just a bit more effort.
So the cost of release is outweighed by the opportunity for free review and “bounty” type code-fixes. Bounties are much cheaper than employing an equivalent level of talent on the payroll. Your existing team will gain expertise through this process.
Maybe there is a way to release code under some sort of NDA, so it isn't available to all and sundry - and the NDA explicitly permits "white hat" penetration testing.
Finally, the customer is in a better position, he doesn't need to "blindly" trust your code - he can see the review discussions, set his own experts on it, whatever.
I don't disagree, I've worked on ABS and airbag systems that have separate hardware safety, "Safing" as they call it.
This only really works for systems that have a simple "safe" state they can default to - like "airbags off". OK it's not very safe at the point you need them, but this is so rare that it's OK to just raise a fault, which will get fixed, and you're back on cover.
More complex systems with no simple "safe condition" - like oil refineries - need a better approach.
I didn't know it existed, but it is possible to write software that is guaranteed (by formal methods in mathematics) to produce the correct output only, or an error. If you couple-up two or three of such systems, you have safety and reliability that insurers will cover.
I'm not an expert in this, and clearly the combination of the outputs needs special attention, but I do know that it is do-able, proven, and accepted.
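The "couple up two or three of such systems" idea is usually a majority voter over redundant channels. A hypothetical sketch (the channel values and fault sentinel are invented; real systems vote on sensor or actuator outputs from independently developed implementations):

```python
FAULT = object()  # sentinel: a channel detected its own error and said so

def vote(a, b, c):
    """2-out-of-3 majority vote; fail safe if no two channels agree."""
    candidates = [a, b, c]
    for value in candidates:
        if value is not FAULT and candidates.count(value) >= 2:
            return value
    raise RuntimeError("no majority - enter safe state")

assert vote(42, 42, 42) == 42     # all channels healthy and agreeing
assert vote(42, 42, FAULT) == 42  # one channel errored out, majority survives

try:
    vote(1, 2, 3)                 # divergent outputs: no safe answer exists
except RuntimeError:
    pass
```

As the comment says, the combination logic itself needs special attention: the voter becomes the single point of failure, which is why it's kept tiny and is itself a prime target for formal proof.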
I'm hoping it gives some sort of answer to the "Quis custodiet ipsos custodes?" - a question so old that it's known in its latin form.
Who watches the watchers?
You can, and do have safety critical firmware that watches itself.
At the simplest, there is a "watchdog" which kills and restarts a process if it does not correctly feed the doggy.
For most systems, that is enough because they start up quickly enough. A software failure causes a restart, the live state of the system is determined and it is then put into a safe state.
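The feed-the-dog pattern described above, in a hypothetical sketch (timings and the class are invented; in a real system the watchdog is a hardware timer that resets the processor, not a Python object):

```python
import time

class Watchdog:
    """Fires if not 'fed' within the timeout - i.e. the main loop has hung."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.feed()

    def feed(self):
        # The healthy main loop calls this on every iteration.
        self.last_fed = time.monotonic()

    def expired(self) -> bool:
        # A hardware watchdog would trigger a reset here instead of returning.
        return time.monotonic() - self.last_fed > self.timeout_s

wd = Watchdog(timeout_s=0.05)

wd.feed()
assert not wd.expired()  # freshly fed: system considered healthy

time.sleep(0.1)          # simulate the main loop hanging
assert wd.expired()      # the watchdog would now force a restart / safe state
```

The subtlety in real systems is making sure the feed happens only when the loop has genuinely done useful work, otherwise a half-dead process can keep the dog quiet.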
Language and OS features that are hard to test/prove correct or take indeterminate amounts of time may be prohibited, eg dynamic memory allocation.
For more dynamic and critical systems, there may be multiple controllers, sometimes even written and tested by different teams so they are likely to have different bugs. That's expensive, so Boeing don't do it of course.
Formal proofs are used to confirm the design will not fail dangerous, but testing is required to show that the actual system meets the design.
I have only proven this correct, I have not tested it.
You want attestation that what you are actually running is what you think you are running.
That can be built-in easily enough... GM had a car where if you swapped the radio it wouldn't start... The radio didn't pass the 'what was supposed to be there' list from the factory.
The technology to do this has existed for decades and the algorithms to implement the check securely have existed longer.
So, to answer your question... All the way. You go all the way because that's the only way you can be sure. Right now you have no idea what your sensor is actually running or even if it is physically there and not spoofed by any number of attacks.
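The attestation idea, sketched hypothetically: the verifier holds a "golden" measurement of the approved firmware and checks that what the device reports having booted matches. Real schemes (measured boot, TPM quotes) also sign the reported measurement so it can't simply be replayed; that step is omitted here to keep the sketch self-contained, and the image bytes are invented.

```python
import hashlib

def measure(image: bytes) -> str:
    """Hash the firmware image - the 'measurement' of what's actually running."""
    return hashlib.sha256(image).hexdigest()

# Recorded by the verifier when the approved release was published.
GOLDEN = measure(b"approved sensor firmware v1.2")

def attest(reported_measurement: str) -> bool:
    """Does the device's reported measurement match the approved build?"""
    return reported_measurement == GOLDEN

assert attest(measure(b"approved sensor firmware v1.2"))  # genuine build
assert not attest(measure(b"tampered firmware"))           # anything else fails
```

Note this is exactly the check that also lets an owner confirm a *published* firmware source, once built reproducibly, is what their device is really running.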
> There never was any user-based foundation to keep firmware hidden and proprietary.
I recall reading years ago (and I don't know how well founded the claim was) that some hardware vendors kept firmware secret to conceal the fact that their hardware might have been infringing a competitor's patents. It's certainly not impossible, I suppose.
There's an argument that publishing could avoid pointless court cases.
We've had people sue us for patent infringement because they assumed the only way to do something was a method they had patented.
Discovery included us having to hand over source code to prove we used a different method.
There's rarely a user-based reason for anything a company does, but the major reason for hiding firmware is that they want you to pay for their device - not for someone else's hardware running a copied version of the firmware. It also makes the system harder to modify later, which is why I appreciate when firmware source is made available; but it's a rather normal thing for a company which paid for code to be written not to give it away. This does vary depending on what level we're talking about, but nearly everything complex enough to run firmware is also made by other companies, and manufacturers don't tend to enjoy handing their work to the competition.
This would make life wonderful for Chinese cloners. Not only do their costs of developing the software fall to zero, they won't even have to reverse engineer the hardware. Conversely the "real" innovators need to invest money to design the hardware and write the firmware to go with it. What really hurts is that without "locked" firmware, when over time they provide software support patches they are inadvertently supporting the clones as well as their own devices.
I've worked on two versions of a process control instrument that used identical hardware but had different firmware. The simple version provided basic functionality that was suitable for 90% of applications. The upgraded version had a big investment in the firmware to give it vastly more functionality. It retailed for twice as much, and sold well to the 10% who needed it. Without protecting the firmware it was known that people would load the simple instrument with the enhanced firmware getting all the extra functionality for free, and thereby making development of the enhanced version of firmware non-economic.
While I agree that the vast majority of the Open Source community are striving for the common good, security is not necessarily their main objective. How do you know the firmware you have just downloaded is free of anything to create a bit of mischief?
The SolarWinds incident showed what can happen when someone broke the weakest link in a supply chain.
"How do you know the firmware you have just downloaded is free of anything to create a bit of mischief?"
That is true for open or closed source software.
With closed source it comes down to a matter of trust: do you trust the salesperson telling you that this is free from backdoors or other vulnerabilities that could be there on purpose or by mistake?
History and the number of CVEs would suggest that you probably shouldn't trust that to be the case.
Now the same can be said for open software (or firmware) as well, but at least in this case, when the vendor has stopped providing (at least a semblance of) support, you can either fix it yourself or pay someone else to fix it for you (or, more likely, just whine that the code is buggy).
There's also accountability. If Asus brick my router with a firmware update then I can send it back and get a new router. If it's bricked as a result of me installing custom firmware then I'm on my own.
Not saying it shouldn't be an option, but how do you communicate to people that custom firmware invalidates your warranty? Sounds like a support nightmare.
Same way you communicate that removing all the screws and prying components off the circuit board invalidates the warranty.
The thing that is necessary is for the firmware update process to ensure only the equipment owner is applying the firmware update.
It must not be possible for either the manufacturer or a drive-by attacker to do so without explicit authorisation.
I remember the press reports from those early days. The IBM-compatible market was enabled by clean-room implementations based solely on the published APIs. The BIOS vendors were very careful about avoiding any possible basis for suggesting their code could have been copied from the originals. After all, they were confronting a massive law firm with a computer company bolted on the side of it.
>What is this Tech Manual you speak of?
A traditional three ring binder (small) with documentation. How stuff was done back then.
All the original PC clone manufacturers copied the PC's BIOS. It worked great until IBM sent out nastygrams giving them a month to change things, OR ELSE. So there was a mad scramble to rebuild the code. Easy enough --- except that you've always got clever buggers out there who exploit 'features' in the ROM code that aren't really features so if your code isn't exactly the same as IBM's things stop working and you get customer complaints.
(I was there. A living witness. I got roped in to do a serial port driver. No big deal.)
(However, I did get my comeuppance later with a plug-in card, a PC on a card. I used the simple Intel model and the software makers changed their code to detect this and defeat the card. Moral -- it's got to be not just a copy, but an exact copy.)
Ignoring my Mint Linux box, we are a household that strenuously avoids allowing updates to install themselves. Like most heavy users of computing equipment we've been hammered more than once by updates that break things.
The latest is our maybe four year old Brother MFC-L3750 laser printer. After ignoring the "Firmware update" prompt for months I finally gave in and clicked "install." I mean, it's a printer, what could possibly go wrong?
We now have a WiFi-connected printer that can only be printed to if your laptop or other device is literally in the same room as the Brother. If you're anywhere else in the house, despite a good strong WiFi network connection, it does nothing.
Or, more accurately, usually does nothing. Once every three days it will suddenly work fine for a couple of hours. Linux, Apple, Android.... same pattern.
This is a machine that worked stunningly well for years. Plugged it in, popped in the toner, and ignored it for months. Now it's bordering on useless, and I'm faced with probably hours on the phone with Brother to find out how to back out of the update.
The point of all of this is obvious: if you expect people to apply updates to remain secure, you need to properly test them to make sure they don't ruin the user experience or disable your product. Yes that will cost you money, but it's part of doing business.
It’s worse than that; it only takes being burned a few times for someone to, understandably, be scared to install updates. When those updates address security vulnerabilities, the situation becomes more dire. Unfortunately I see a downward trend in software across the board, where the rush to ship is far outstripping concern over code quality.
I work in an IT security role, and this is very much the case. The thing is, we recently went through a full risk assessment, and it raised some serious questions about "security" updates. We discussed the productivity losses from all of Microsoft's borked updates, versus the risk of a security incident. It still comes down to being better to install the updates (and hold your breath). But it was starting to get closer to the don't-update-and-take-our-chances scenario.
I had this problem with an expensive enterprise class IP phone. Thank god I tested the firmware on the single device before pushing it out to over 40 other phones.
After installing the latest firmware, the phone got so buggy that it was unusable as a phone anymore. It would receive an incoming call, start ringing, and when you picked up the handset, it would just keep ringing. If you put the handset back down and picked it up again, it would answer. Sometimes it would just reboot in the middle of a call. Sometimes it would receive an incoming call, and then reboot after a single ring. It worked fine before the "upgrade".
I called the manufacturer to see how to roll back the firmware, and was told it was not possible. There was a spot on the PCB for an obvious JTAG connector, but I really didn't want to waste the time.
I told the manufacturer that I need an RMA number and an address to send the phone to. They refused. I went back to my vendor and told them the same thing. They told me to contact the manufacturer. At that point, I called Amex and said I wanted to do a charge-back (dispute) for the purchase price. They agreed, and reversed the amount off of our Amex bill. I told the vendor that they can work it out with the manufacturer. I asked them for an address to send the phone to, since we received a refund, they said just keep it.
At this point we halted all software updates from this manufacturer. Every once in a while, I would pull out this phone and load the newest firmware, just to see if it worked any better. About two years later, they had firmware that worked well enough to put it back in service in a little-used location. We stopped buying anything from this manufacturer, and didn't trust their firmware updates at all.
I avoid firmware updates; they sometimes remove features in the name of security, and maybe I want to telnet locally into my IP cam. I have a Brother and it logs into the wireless router instead, so it's only one connection to fix. Ethernet is better but they don't put it on the lower models any more.
Accidental borkage? Worse than that.
There have been instances - I’d cite links but CBA right now - where printer manufacturers have snuck in restrictions on 3rd-party ink cartridges. Deviously, they’ve time-bombed them to activate at a later date. So one day your printer suddenly stops working with non-OEM cartridges, and of course you have no idea it’s because of that update you installed 3 months ago, so you blame the cartridge supplier not the printer manufacturer.
Stuff like that poisons the well of trust and is a very powerful argument for open-source firmware.
I suspect I know the underlying problem, as it sounds very familiar (Brother printer with same issue).
The printer is trying to use UPnP packets to do network discovery, via the router. The services that run on top of UPnP (Bonjour etc.) are very much not standardised. Routers now often block UPnP packets for security reasons, and don’t even necessarily tell you they are doing it. But nobody told peripheral manufacturers not to depend on that - so some printers are happy with router functionality, and some are not.
Hence, if you are in the same room, the printer auto-connects to your device directly via WiFi Direct, and it’s all happy. But if your pad is closer to the router, the printer connects via the router, still sees some packets but not others, which causes it to confuse MAC addresses and hit some network segmentation fault (can’t remember the error code).
My solution was to enable UPnP packets in the router, which was awkward because many home routers don’t even allow you to configure UPnP blocking. But PlusNet routers do allow it. For me at least, I’m 100% certain of that diagnosis, as I can switch the printer between reliably working and non-working states just by flipping the router's UPnP config. Of course, you may decide that the security justification for your router blocking UPnP on the home network is *correct*. Then you’re stuffed.
I remember back in the late 70s/early 80s RAM was one of the more expensive components in a computer (more so than ROM), and backing storage was slow or expensive, so various computers were sold back then with the ability for the end user to buy extra firmware either in the form of a ROM to plug into the motherboard (e.g. the various word processors, spreadsheets, graphics extension ROMs for the BBC Micro), or in the form of cartridges, so whole applications were there instantly and didn’t eat valuable RAM.
I say ‘buy’, but probably more likely buy a blank EPROM, and borrow someone else’s firmware and a EPROM programmer.
MSI's customers aren't buying firmware from anyone, they're getting it for free from the company itself.
I know it is Monday, but let me see.
You mean free (as in obscure and undocumented) like the nice free Intel Management Engine* baked into millions of Intel motherboards since ~2008?
The one that is, to all intents and purposes, impossible to disable** and can run a small OS even with the box shut down?
Yes, I thought so.
* no links or references needed, every commentard knows (or should know) exactly what the IME is and what it does.
** I know that too well, I run a Sun Ultra 24 WS.
>every commentard knows (or should know) exactly what the IME is and what it does.
Yes, I do.
I worked for Intel for over two decades, closely involved with the ME from the beginning.
I was on first name terms with many of the vPro technology engineering teams. I spent more time in their lab than at my own desk. I debugged ME firmware. I wrote many, many pieces of software to interface with it. I filed bug reports and change requests. I met with innumerable customers and government figures to discuss what it could do. I demonstrated it at industry events all over the world.
It's an incredibly cool and useful piece of design. On consumer platforms it monitors fan speeds, voltages, temperatures and so on; on the vPro platform sold to enterprise customers it also has a network stack that, once it has been turned on by the system owner, allows for authenticated remote remediation and out-of-band control over wifi, Thunderbolt or ethernet. In the enterprise space, it's a huge competitive advantage. Lights-out maintenance capabilities and hardware-level security.
And I - personally - get very, very irritated at the constant sniping and innuendo about it. I shouldn't, but I do.
Yes, it's an embedded processor. Yes, it runs its own code. No, you can't see the source for it unless you're an OEM customer and ask very nicely. No, you as an end-user can't prove that there aren't backdoors, secret Masonic messages, proofs of Fermat's Last Theorem or next week's winning lottery numbers embedded in it. But enough pairs of eyes have scrutinised the code over the years, and absolutely nothing malicious has been discovered.
Intel as a company is not stupid. The reputational damage from FDIV, Spectre and other unintentional flaws was bad enough and reverberated throughout the company, believe me. The shame and embarrassment that we had collectively screwed up was tangible. We had emergency response teams where people were pulled in from the highest levels of the divisions to jump on the issue and resolve it. Disclosure was full and frank. We proactively encouraged, and still do, researchers to pen-test our products to discover flaws.
Do you really think that anyone would dare put in intentional flaws? No, one can't prove a negative, but I never met anyone there, or saw anything, that gave me even the slightest hint of concern.
I rather suspect that if the firmware source code were made available the flakiness of much of the hardware and lack of software quality in the firmware would be clearly evident.
I think the Intel ME had an embedded 32 bit Minix kernel but that was probably an outlier of quality. I understand AST wasn't too pleased.
Several years ago when I was trying to recover a colleague's data from a sata disk whose logic board had died I discovered the spinning rust's firmware is pretty much a mini OS (whose image is stored on part of the media that you thought you had bought) and of course a logic board transplant from an identical disk doesn't work because the media's firmware (mini OS) is effectively "node locked" to the logic board.
Heaven only knows what is running inside a ssd.
AIUI, the issue with not being able to resurrect SATA drives by swapping out their electronics is because various drive-specific calibration parameters are now stored in nonvolatile memory on the controller PCB, and unless you're able to transfer those from the failed PCB onto the new one, then the new one has no chance of being able to accurately read the data on the platters, nor any way of being recalibrated outside of the factory.
I have a thirteen year old "netbook" that still runs perfectly well. Since Microsoft now requires what used to be considered super-computer specs to even run Win11, my little engine that can won't. Fortunately FossPuppy 64 Linux is perfectly content on it. Why perfectly good computers are relegated to the dust bin when one can still get parts and service for a Model T is beyond me. I'd bet, except for PFY gamers, most if not all users don't need the latest bleeding edge bit of kit.
... Linux is perfectly content on it.
My Asus 1000HE (Atom N280 @ 1.66GHz + 2GB DDR2-SDRAM) runs Devuan Chimaera with the Openbox WM.
Much like Philip Newborough's Debian based #! Waldorf.
Not a speeder to run games on but a truly excellent ROI from a ca. 2010 as-new second hand purchase (50% discount on street price) which still has a few years' life in it.
Just like me, my trusty Umax S-6E and the Palm IIIxe I occasionally use ... 8^)
Part of me, the selfish individual hobbyist part, is thankful for Microsoft's ongoing churn of software-enforced hardware obsolescence.
For years (decades, really) I have run my laptop and home lab with used and last-year's-model kit (sometimes many years), rendered unusable or at least undesirable to Windows users by Microsoft progress, quite happily bumping along with BSD and Linux.
If I have a regret it's that my lab is nearly all x86 at this point, whereas I used to run Sun and a couple SGI and DEC systems. But those all found good homes with other hobbyists, to the relief of my power bill, so my regret is tempered somewhat.
I have a 2006 white macbook running LM XFCE 21.1 and a 2009 Eeepc 901 running LMDE 4. Both are usable and still being updated. Then there is my pair of Eeepc 701s that run an older version of LinuxMint that need something newer installing on them.
I have a recent Motorola phone with Android 11 that will cease to have updates soon and no obvious upgrade path to Android 12. Plus a paperweight Cosmo Communicator stuck on Android 9. In the case of the Communicator you can in theory install Gemian linux although the last time I checked the absence of a Planet Computers server stops you updating or doing much useful with their linux install. Which is one of the reasons why linux on x86/AMD64 or widely supported ARM platforms (Raspberry Pi for example) makes it easy to keep old kit working.
My 2003 Corolla has firmware in the form of an electronic engine management system and ABS control system, but the car is not connected to the internet, so security issues due to unpatched code are very unlikely. A modern electric car is a computer on wheels with an always-on internet connection. The car's firmware controls things like the brakes and power train as well as insignificant stuff like the infotainment system. Will today's new cars still be getting updates for their "firmware" in 2043? Will your 20 year old Tesla fail its MOT because there are known unpatched vulnerabilities in the car's firmware? Will you be allowed by legislators to install "LinuxMint Tesla Edition" on your car when the car's firmware stops getting updated?
So I was stupid enough to buy a Samsung product, a 49" Odyssey monitor. It's flaky as hell, where when it comes back from sleep mode, it randomly flickers hand-size portions of the screen and you have to power-cycle it.
I did a firmware update and now it sporadically just doesn't come back from sleep mode and doesn't respond to the power button or anything except yanking the power cord.
How much of an idiot are you when you can't make a monitor work properly?
Given the size of the thing, I'm going to feel moderately safe in assuming the connection you're using has HDCP. In my experience, whenever such copy protection schemes enter the room, stuff that would usually be utterly bulletproof starts misbehaving in ways that span the entire gamut of failure modes, from "glitches once in a blue moon" all the way through to "attempting to out-do a Norwegian Blue in the looking like a very very dead thing stakes"
The challenge, and this is especially tough with printers I've discovered, is finding the brand whose particular rubbish you can tolerate the most, because invariably they all suffer from some form of crapness. Of course, the specific type of rubbish is only apparent after you've bought the damn thing, despite doing exhaustive review hunts and investigation. There is always something that surprises you.
For me, I still have to wonder why the two Dell monitors on my home PC randomly just stop being active until I physically remove power from them for a bit. The Dell port replicator I had on my old work laptop needed something similar, but not just the power removed -- for some strange reason, every single connector had to be unplugged from the back.
My HP wireless printer would go to sleep and not wake up unless you turned it off and on again, and despite being only a few years old will not be updated to WPA3.....
It's all a bit rubbish really.
I have a Samsung monitor here which cheerfully goes into standby when the computer goes to sleep, then wakes itself up for a couple of seconds... to tell me that it's going to sleep.
I suppose it makes as much sense as the power led on the TV which goes off when the thing is turned on, and turns on when the TV is turned off, with random flashing in the wee small hours - I assume when it has a bit of a sulk upon discovering that it can't contact the mothership.
UEFI is the part that will make computers completely untrustworthy, even though it's sold on the premise of securing them. It's basically a huge operating system beneath your operating system with god-like powers. Any backdoor or hack in there will undermine any effort by your OS to secure itself.
I personally believe UEFI was instigated by LEA and intelligence agencies. There's no real reason for it to exist.
"Goddamn UEFI straight to hell."
Serious questions: what are your pain points with it? And how much of that pain is due to UEFI per se, and how much to the TPM?
My experience of TPMless UEFI is much more positive. Sure, there's a learning curve, but that's the case with anything new. And there's some frustration, but my distant memory of BIOS-based machines includes a significant amount of frustration with those too.
But as I say, that's on a couple of older machines where, if they even have TPMs (I honestly don't recall), they're disabled. Since I run Linux and FreeBSD, I've never had to open that can of worms.
There are a couple of things about UEFI that I actively like:
GPT: 128 partitions (it's hard to imagine needing that many, but you can format a drive with more if you want them); no primary/extended/logical ickiness; effectively no size limits (we're a fair few Moore's-law generations away from exceeding 64-bit sector numbers, whereas we're already maxing out MBR's 32-bit fields).
All the on-disk boot stuff lives in a file system -- no more magic sectors (other than the partition table of course), and no more of the contortions GRUB has to go through to fit itself into an MBR-formatted drive.
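The size argument above can be put in numbers. A quick back-of-the-envelope sketch (assuming the traditional 512-byte logical sector size) of why MBR's 32-bit sector fields are already a hard wall while GPT's 64-bit LBAs leave enormous headroom:

```python
# MBR stores partition start/length as 32-bit sector counts;
# GPT uses 64-bit logical block addresses (LBAs).
SECTOR = 512  # bytes -- the traditional logical sector size

mbr_limit = (2**32) * SECTOR   # largest disk MBR can fully address
gpt_limit = (2**64) * SECTOR   # largest disk GPT can fully address

print(mbr_limit / 2**40)  # -> 2.0   : MBR tops out at 2 TiB
print(gpt_limit / 2**70)  # -> 8.0   : GPT allows 8 ZiB
```

So any drive over 2 TiB already blows past MBR, while exhausting GPT's address space would indeed take a fair few more Moore's-law generations.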
Firmware, like FPGA images, is the special sauce that turns a piece of hardware logic into something useful. This is why writing this code tends to be less of a pure programming exercise: you have to be aware of all the system components, how they're set up and how they interact, in order to write the code. You just use a language like C or VHDL to implement whatever design you're making.
MSI's hack isn't that spectacular. It's really just carelessness that got the product source code exposed. (1.5TB is about what I'd expect for the full source tree, specialist tools and so on)(yes... been there...). I'm not exactly sure why anyone would want to try to hack this source; there are no real secrets there (you buy code from people like MSI to save the hassle of building -- and testing -- your own code. Sure, it's possible to roll your own version, but there's nothing to be gained from it.)
Real firmware -- the code of 'things' -- tends to be very conservative because of the rather extended testing cycles involved. Unlike 'normal' code you can't just drop a fix into the box every time you fancy it; that box has to perform predictably from the moment it's released (and the firmware version is often a SKU). Often this code is not just tested but certified, an extremely tedious process, especially if the box is a medical device. This approach to code design and build is old-fashioned, it's anathema to the RAD brigade, but one person's glitch is a product failure with wide implications -- there's absolutely no room for the 'pile it up, shove it out and wait for the user bug reports' mindset.
There is often significant benefit to the designer in keeping the hardware unpublished.
For high-volume products, if you over-document the product, then copy shops can make hardware that will run the exact same firmware without having to write it. High-level competitors can reverse engineer regardless, but they have engineers who are more than capable of re-engineering the whole thing their own way, and they also need to make a profit, so you stand a chance of competing.
The lowest rungs of the copying world will make stuff at pennies over the cost of making the hardware alone. For component (e.g. IC and SOC) manufacturers, you lose your customers to compatible knockoffs made by people who have no idea how it even works.
This sadly is being shown up glaringly with open hardware.
My hardware designs were pulled from the web years ago and are now available only on specific customer request. And this year my open source software is gone too, this time because of giant corporations scraping everything for their AI.
>> and not much to be gained by keeping it secret anyway. So why lock it down?
If you're in the business of making consumer electronics, your firmware *is* the device. Not only might it enable functionality you use to give your products a differentiating factor over the competition, it will also tell competitors everything they need to know to determine exactly what components you've used to build the device. Quite often those components will be blank, unlabelled ICs -- blanked out precisely to keep them a secret. Releasing your firmware source code would give your competitors all the information they need to copy the features you've spent thousands of hours developing, allowing them to create clone devices at a fraction of the development cost. The result of releasing your firmware would therefore be that you put in all the effort while your competitors release identical products at a fraction of the price. Why would any company do that?
The company I work for already spends hundreds of hours of legal time every year chasing Chinese clone manufacturers who literally build a cheap-ass shitty device, put our firmware on it and then sell it (on Amazon, quite often). Releasing our firmware is basically giving everyone the right to do that.
This is not the same as "right to repair".
True. I manufacture kit like that and although someone could reverse engineer the hardware, the firmware is really difficult to get out of the processor (it can be done with enough effort but probably wouldn't be worth it).
The only time someone successfully copied my design was when I developed a thing for someone else, and their manufacturer failed to set the processor security bit. Thus if you desoldered the chip and read its contents out, you got the whole firmware image. A well-known German hobby kit company did that and cloned the product right down to the unused port hardware. Trying to cover their legal bottoms they didn't sell it in the UK.
I got paid though!
Microchip stopped setting the security bit on my chips a couple of years into their manufacturing run, so there are millions of unsecured ones out there. Luckily, none of the customers appear to have tried reading them.
(I "updated" the firmware so the old version could be detected, and made some production test software that reports they have an obscure, but reputation harming fault, that I am pretty sure will make customers call me)
I have used a computer that had no firmware at all. Not one byte of it.... That was a long time ago.
Firmware is just software, nothing else. The location it's stored in is what prompts the name "firmware".
For a modern compute platform, it's a total nightmare getting the hardware and initial software up. Things like plug and play make it simple for users, but it requires a quite complex set of interactions between hardware and software. This complexity is what's driven the blooming of things like UEFI, and once a sufficient level of complexity was incorporated, adding useless dross like network stacks is inevitable.
It's not going to get better. The eventual architecture for computers will be a CPU with a RAM bus and an enormous number of Ethernet lanes. PCI will probably die out eventually, because of the cost of developing two separate families of interconnect and associated switches. There are already some systems like this (fairly obscure ones). This means that Ethernet will become the only way of chattering to peripherals -- with fairly significant consequences for firmware...
A lot of the pricing difference between motherboards lies in features set in firmware.
Most manufacturers don't have a strong drive to change their practices. Though, like certain routers that are compatible with alternative firmware, a board that was open would definitely have its supporters willing to pay a bit of a premium for it.
This already exists, see for example the Raptor Power9 products or some of the Orange Pi boards.
People do in fact pay a premium for the extra flexibility, the business model apparently works, therefore I must conclude that the locks on consumer products are to force HaaS and overall cloudification with personal data theft and sale as a side benefit. You're more valuable to them as a source of data for sale than as a customer.
I also point a large accusatory finger at Hollywood and their streaming services. They love the DRM aspects of the locked firmware, and have been fairly successful in getting people to sign up to multiple services at extortionist monthly rates. Frog in the pot and all that, if you're renting the device in your rented flat why not rent the content too? It's only money after all.
I knew this was going to be fanboi drivel as soon as I read the words "Only it doesn't, the experience of people who flash their Android phones with new firmware"
Nobody, in the history of ever, has flashed their Android phone with new *firmware*. They only like to think they have, because it says so on the forums. What they actually did was download an OTA update for the *host CPU OS*. That's it. That's all CyanogenMod et al are -- not firmware.
The firmware for a *phone* (because that is what it is) runs on CPUs that Android has no access to, with totally different toolsets, to drive the modem and other devices on the Snapdragon. Only Qualcomm can touch this. Not Google, their engineers wouldn’t have a scooby either.
However “jailbroken” it is, can you run user code on the Spectra image processor? No. You cannot. You get an API, that’s all. Can you modify the Bluetooth codec from AptX to something user-specified? No. You cannot. Could you write specialist eSIM code? No. You cannot. Can you even change what RF band the phone is using? No. You cannot.