The CEOs ....
.... of the companies involved should be forced to have at least one of their IMDs fitted to them. Just to show they're sure the things are safe from hacking!
Software Freedom Conservancy's (SFC) Executive Director Karen Sandler was last year awarded an honorary doctorate by Belgium's Katholieke Universiteit Leuven for her work on open source and software freedom. There was only one problem. Her heart was beating strangely, and she couldn't get the data out of her implanted …
Probably the only areas of life regulated with more caution are pharmaceuticals and aviation. My wife works in this field. After hearing about the culture that exists in this space, I had zero problem getting a medical implant from a firm she used to work for. It's a lower-risk implant than a coronary one, but companies that make these things take their responsibility seriously.
but at least one company that makes these things takes its responsibility seriously. FTFY.
If many companies did take their moral and technical responsibilities seriously, then we wouldn't have situations in which only company magician X -- currently unavailable -- can access the data. And we wouldn't have situations in which security is a piss-poor afterthought, or even a no-thought!
It's difficult to see any real path to success here: legally, there's no difference between a medical device and anything else we buy, so the software can be considered the IP of the manufacturer.
But the data collected by the device is another matter entirely. This should be clearly documented and the patient and/or their clinician should be given the means to access it and control who else has access. This is much easier to legislate and implement.
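To make that concrete, a documented export needn't be complicated. A minimal sketch of what "clearly documented" patient data could look like (every field name here is invented for illustration, not taken from any real device):

```python
# Purely illustrative sketch of a documented, vendor-neutral export
# format; the fields are hypothetical, not from any real device.
import json
from dataclasses import dataclass, asdict

@dataclass
class DeviceEvent:
    timestamp_utc: str       # ISO 8601, e.g. "2024-05-01T13:37:00Z"
    heart_rate_bpm: int      # sensed rate when the event was logged
    event_type: str          # e.g. "arrhythmia", "pacing", "battery"
    lead_impedance_ohm: int  # basic lead-health telemetry

def export_events(events: list) -> str:
    """Serialise events as documented JSON the patient can take anywhere."""
    return json.dumps([asdict(e) for e in events], indent=2)

print(export_events([DeviceEvent("2024-05-01T13:37:00Z", 112, "arrhythmia", 510)]))
```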
Exactly. Equally, just how far does opening up the device design actually need to go in order to be considered open source? It wouldn't just be the firmware we're talking about here - you'd probably also need the schematics to know how to connect to the physical comms interface. Because I could see a time when a manufacturer does open-source their firmware and hardware designs to enable third-party scrutiny, yet uses encrypted comms to which they alone hold the key, making it still impossible for anyone else to talk to the devices even though everyone now has all the data needed to fully replicate one.
So this idea that open source == open comms (or, as the article implies, that closed source == closed comms) is a fallacy. If the issue is that we want doctors to have immediate access to patient data held on implantables/wearables, then the solution is making sure the comms interfaces are open; it doesn't matter in the slightest whether or not anyone outside the manufacturer has access to the underlying source code, schematics etc.
Surely the "thought" process is:
1) this data is sensitive and important
2) that means it shouldn't be easily accessible
3) I know, we'll make it so only a trusted (!!!) company rep can access it
4) surely we'll have plenty of those available 24x7x365 because #1
But then it never gets communicated (or at least understood) and #4 never happens, because that would cost money and someone's bonus depends on not spending money.
Either that, or Fred, who was responsible for one of the key steps in the process, quit or got RIFfed and nobody took the task on.
What is needed is the proper enforcement of functional safety standards on these devices - the current medical standards are not fit for purpose, and (some of) the medical device companies are lobbying to prevent there being any changes.
And then there are the user interfaces - there were (are?) infusion pumps that give orders-of-magnitude different doses for what appear to be the same settings (and this has led to serious accidents).
On its own, open source will not help as the code is generally not compatible with a number of the objectives that have to be satisfied when functional safety is required (where are the formal requirements, test plans, etc.?). However, there is nothing to stop an open source project from being created that does tick all of the boxes, though many will not want to take part because of the amount of (tedious) supporting documentation that needs to be provided.
"On its own, opensource will not help as the code is generally not compatible with a number of the objectives that have to be satisfied when functional safety is required"
That applies to the volunteer development community, but that's not the entirety of open source. There's nothing to stop a critical-system vendor fulfilling all the necessary objectives before releasing the source. However, in the life-critical application space, full public disclosure would probably be counterproductive, as all software has bugs and it wouldn't be a good idea to alert malicious actors to them before they can be fixed. A system whereby the source could be released on demand to authenticated, legitimate, independent examiners would serve the necessary purpose, though. The problem, of course, is that "intellectual property rights", for most vendors, outweigh potential hazards to users.
Part of the issue re: security of Medical Devices generally is that they are shipped to the Clinicians essentially as a Black Box that has a Certification for use as a Medical Device - usually certified several years previously. Any "change" to the state of that device, for example patching the underlying OS, deviates from the original specification, meaning the device is no longer certified and can't be used.
Essentially, Medical Devices are a huge security problem because they, and the whole ecosystem, are designed for a point-in-time configuration, not security.
I work on pharmacovigilance software, and this is essentially correct. You're describing the requirements for validation.
You *can* update the state of the device. You can patch the OS. You can update the code. *But* if you do so, you have to revalidate it with the new configuration - basically, produce a whole metric craptonne (it's 10% bigger than an Imperial crapton) of documentation that the device, with its new configuration, still meets its formal specifications. This can be done by either the manufacturer, or the user, but it *does* have to be done to comply with medical regulations. It's a royal pain.
There's nothing preventing you from making the software open source, but the issue is that you can't just update it at random and remain in compliance.
《metric craptonne (it's 10% bigger than an Imperial crapton)》
I take it that is a short imperial ton? Or is merde 10% denser than anglosphere shite?
Why is one still calling avoirdupois etc "Imperial"? I thought the UK went metric yonks ago. Not only is the Empire long gone, the former possessions are repaying the "debt" by providing comparably competent rulers in Westminster today. :))
The only nation in which avoirdupois etc still has any legal currency must be the US? Probably time to call these units the US ton or 'Murcan ton. Only readers of old manuals and old cookbooks need to know these old units - the difference between US and Imperial pints, for example - and even then treading carefully.
Similar for a lot of other types of embedded system as well - for the emergency comms stuff we develop for parts of the world which require UL certification, it's granted for a specific combination of hardware and firmware revisions, and any deviation from either would require at best a paperwork exercise and at worst a full system retest by the certification body.
Open source or not, it should be easily possible to extract data from these devices. Say, some Bluetooth interface that can send the data to my iPhone. Or Android phone.
For security, having a git repository with public read-only access would be a great approach if you don't want to open-source it. I mean, I'd love to have this software be safe, but I really don't think having divergent versions would be a good idea.
If you want to know why this is so difficult, take a look at the American medical insurance industry and consider the issue of medical regulation.
You want both security AND the ability to easily pull data from the device? You want to prevent hacking AND share all of the code with the world? (No, sharing code does not magically make it secure.) You want changes to software not to put people at any increased risk AND have them every three months? You want companies to spend years and billions of dollars not only developing this technology, but also going through the costly process of approval, AND have them share all their work in public for free?
That's a lot of contradictory requirements, most of which overlap hugely with regulation, safety, corporate governance, costs and highly specialised workers.
Those things are not contradictory.
Why would closed source software be safer than open source?
Security by obscurity is not security, it's just obscurity.
Patient access to pull data does not require it to be broadcast to all and sundry - only needs one key to be available (and copied into the device) - the patient will share it with the appropriate clinician. They could even update it if they think it's been leaked.
There could be an unauthenticated "what (basic data) happened in the last two hours" for emergency services as they start to put defib pads in place, but that might not even be needed.
And requiring critical software to have updates when vulnerabilities are found is the minimal effort required.
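As a rough sketch of that one-key model (purely illustrative, using the Python `cryptography` package's Fernet; nothing here reflects any real device's actual scheme):

```python
# Hypothetical sketch of the "one patient-held key" idea above.
# Requires the 'cryptography' package; all names are invented.
from cryptography.fernet import Fernet

# At fitting time: generate a key, copy it into the device, hand it
# to the patient. The patient shares it with clinicians they trust.
patient_key = Fernet.generate_key()

# On the device: telemetry is encrypted with the patient's key.
device_cipher = Fernet(patient_key)
telemetry = device_cipher.encrypt(b'{"heart_rate_bpm": 112}')

# At the clinic: anyone the patient gave the key to can read the data.
clinic_cipher = Fernet(patient_key)
print(clinic_cipher.decrypt(telemetry))

# If the patient thinks the key has leaked, generate a new one and
# re-provision the device; old captures become unreadable from then on.
```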
Note I never claimed closed source is more secure - just pointed out that opening source code up does not magically make it secure, but without question does make it easier for people to look for vulnerabilities. There is an important difference here between the idea of 'security' and 'vulnerable to attack'. I can grep "software that uses library X with newly discovered vulnerability Y" in moments on GitHub.
Equally there is nothing about software being closed source that prevents it from having sane data availability. Clinicians and insurance companies should probably make this a base requirement for any device they approve for their patients. The fact that they don't probably says more about how they think about such devices than the implementation choices of the manufacturers.
The same applies to updates - and I note that a family member with a pacemaker had regular updates on the 'high end' unit she had fitted, and almost none on the 'standard' one. This is entirely under the control of the medics (and, sigh, insurance and government) that make selection decisions between devices. Whether or not it's open source is utterly irrelevant.
For large-scale attacks, fuzzing is now the weapon of choice. Sure, if a quick static analysis reveals potential vectors, they can be considered nice to have, but the best thing is simply to run attacks in a sandbox and keep the results quiet.
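A toy version of that approach, for the curious (the parser here is invented; real campaigns use coverage-guided tools like AFL or libFuzzer):

```python
# Feed random bytes to a parser inside a try/except "sandbox" and
# log anything that dies unexpectedly. The parser is a made-up
# stand-in for a device's telemetry decoder.
import random

def parse_record(data: bytes) -> bytes:
    if len(data) < 2:
        raise ValueError("short record")   # expected, handled error
    length = data[0]
    payload = data[1:1 + length]
    _checksum = data[1 + length]           # IndexError if the length field lies
    return payload

rng = random.Random(42)
crashes = []
for _ in range(10_000):
    blob = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
    try:
        parse_record(blob)
    except ValueError:
        pass                               # rejected cleanly: fine
    except Exception as exc:               # anything else is a finding
        crashes.append((blob, repr(exc)))

print(f"{len(crashes)} unexpected crashes found")
```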
Medics are notoriously unqualified to assess the safety of software devices. You really need trained engineers for that who can get the necessary medical information from the clinicians. But we then run into the usual problem: regulators are underfunded and understaffed, so lightweight "self-regulation" usually gets the nod: medical devices, cars, planes, financial products, etc.
It seems odd that the pacemaker implant could only be read by the manufacturer. I could see a point in limiting the ability to tweak settings, and maybe they locked down all access. That said, read-only access to the data should be very low risk. Certainly lower than someone going in and tinkering with something that could kill you if your coding isn't perfect.
Regarding software bugs: bugs may exist, as they do in every piece of software, but they aren't of a magnitude that would trigger a recall by the regulator.
This seems like an untapped market for recurring subscription services. Want read-only access to your pacemaker data? Pay us $20 a month and we'll show it to you on a janky dashboard our intern slapped together in Flask. Until someone in a boardroom gets a whiff of the idea, and then it'll quickly turn into "pay us a monthly fee for the device to continue functioning"
I came here to say much the same. Totally made my day, given that I'm on my second pacemaker.
However, the story doesn't make that much sense to me. Although my GP can't get data out of my device, the local hospitals have loads of mutant laptops in heavy duty cases that can, plus cardiac techs and cardiac nurses who can operate them as well as the cardiologists. I've also got a remote unit at home that will send a read out over the mobile phone network in about two minutes, so the idea that only a company rep can get the data either isn't right, or whoever chose to use those devices should be fired.
This is in the US.
Aside from that, the need for "mutant laptops" is the real problem. The comms protocol is closed and proprietary, and the manufacturer has no incentive to write decent software for reading and configuring the devices.
So it ends up with every brand needing a laptop running a specific operating system version, that cannot ever be updated lest it break the fragile custom software.
It's not the only such industry of course. How many ATMs still run Windows XP, for example...
I've had a St Jude ICD since 2007. I'm on my third one now (they get replaced - surgically - about every 5-6 years). From the second one onward, they've been running Open Source code. Just for grins, I downloaded and disassembled the code a few years ago. It's nothing particularly revolutionary, and it has proper passwords for parameter changes, and those changes can only be practically done from one of their specially configured (running Debian) laptops.
When I was having the third one installed, there were engineers from St Jude's visiting the hospital, and I had several hours with them (I'm a retired engineer) and found out a lot about their electronic design too. It's a fascinating technology, and has kept me going since 2008!
The code for the ICD is actually just about what you'd expect, but has some pretty nifty logging routines. For the sake of safety and my own sanity, I won't go into details about its operation.
I could hand you an executable assembled by my personal assembler (or compiled by my personal compiler), along with a tarball of what I claim is the full, complete, unadulterated source.
How are you going to verify the executable is actually built from the contents of the tarball without access to my assembler or compiler?
@Jake: the poster I replied to wrote they had "downloaded and dis-assembled the code." If you have the source (as he did), which you (he) just downloaded, that source file doesn't need to be, and can't be, "disassembled," because it already is in source form. Are you thinking the poster dis-assembled the binary provided with the device, and compared the disassembler-generated source file to the publicly-released source file? Doing so is almost guaranteed to result in a mismatch, even if the binary and source are uncontaminated.
But your general point as I understand it, while not addressing what was actually written by the OP, is extremely relevant.
I once downloaded the assembly-language source of a program I use fairly extensively, "GRUB4DOS", and assembled that source (the same version as I was using: "0.4.5c"). I did a SHA256SUM on the freshly-assembled binary, and on the binary package I had downloaded. They did not match! Why not?
* Did the author use a different assembler than I (a near-100% likelihood), with different options, resulting in different binaries?
* Did the author use a different source file -- with some sort of back-door included -- than the publicly-available source file purported to be version 0.4.5c?
* Did some asshole exploit the website and replace the author's binary with one which said asshole had modified to contain a back-door or other malware?
I had scanned the binary I had downloaded at VirusTotal.com, and via locally-installed ClamAV, before I used it. The anti-virus and anti-malware scans reported nothing bad was found. It's a problem that many freeware/shareware/open-source authors and organisations do not use effectively-strong cryptographic verification methods. Throwing up a "Here's the SHA256SUM (or MD5SUM) of the binary" on a website is ineffective because that site might become compromised, and the attacker could simply change the text on the website to report the checksums of the now-malware-laden binary.
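For what it's worth, the hashing side is the trivial part - a minimal sketch below (the file name and published value are placeholders). The hard part is getting the *reference* hash over a channel the attacker can't also rewrite, e.g. a detached signature from a key you already trust:

```python
# Verify a downloaded binary against a published SHA-256. If the hash
# came from the same (possibly compromised) website, a match proves
# little; a detached signature from a trusted key is what's needed.
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

published = "d2c0..."  # placeholder: value copied from the website
actual = sha256_of("grub4dos-0.4.5c.zip")
print("match" if actual == published else "MISMATCH - do not trust this binary")
```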
Don't even get me started on projects for which the installation procedure is, "Download and run our magic CMD file/PowerShell script/BASH script -- as Administrator/root."
"Don't even get me started on projects for which the installation procedure is, 'Download and run our magic CMD file/PowerShell script/BASH script -- as Administrator/root.'"
In virtually all cases, you can clone and build it yourself just fine. The script is there to make that easier, and it needs root for the same reason most installers ask for it: it's going to be installing the program in the typical places, for example /usr/bin, and your user does not have permission to write there. This is especially true if it's also installing a service, cron jobs, man pages, or modifications to a linked configuration. You can usually replace that instruction with clone, configure, make, and manually moving files where you want them, but since that's not what most users are doing, it's not the script they write.
What do you think they should be doing instead?
"What do you think they should be doing instead?"
Force the user to install as an ordinary user, asking for the root/administrator password only as actually needed, and warning if anything is about to be overwritten as root. (Granted, the average idiot will just answer "yes" to "<blink>Warning!</blink> The INstant Internet Terminal installation procedure is about to overwrite the file "init"! Y/N?", but you can't always protect against the completely ignorant, just minimize the damage. Hopefully.)
Obviously, if the user doesn't have the password, abort the install, and clean up the mess before exiting.
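A rough sketch of that installer behaviour (Unix-only; the tool name and install path are made up):

```python
# Start unprivileged, warn before any overwrite, and elevate (via
# sudo) only for the single step that needs it.
import os
import subprocess
import sys

TARGET = "/usr/local/bin/mytool"   # hypothetical install destination

def install(src: str) -> None:
    if os.geteuid() == 0:
        sys.exit("Run as an ordinary user; you'll be asked when root is needed.")
    if os.path.exists(TARGET):
        answer = input(f"WARNING: {TARGET} already exists. Overwrite? [y/N] ")
        if answer.strip().lower() != "y":
            sys.exit("Aborted; nothing was changed.")
    # sudo prompts for the password here, only for this one copy.
    if subprocess.run(["sudo", "cp", src, TARGET]).returncode != 0:
        sys.exit("Elevation failed or was refused; aborting with no changes.")

install("./build/mytool")
```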
I assumed (yeah, yeah, yeah, I know) he downloaded the binary because he used the word "disassembled". Presumably he downloaded a copy because he didn't feel comfortable extracting a copy from the device that was keeping him ticking over. Which is quite understandable. We'll never know, unless the OP cares to comment.
As for the rest of yours, see ken's classic 1984 address to the ACM titled "Reflections on Trusting Trust".
Am I the only one who reads "ICD" as "in-circuit debugger"?
Seriously, OpenSSL springs immediately to mind. It needed a disaster to sort that one out.
Open source could possibly help, but the real issues are extensive testing and certification and, more importantly, open standards for extracting data. It is incredible that there is no external code review on these devices.
I definitely agree it's unreasonable that only the vendor involved can easily extract data, their software should be readily available for medical professionals by law.
I agree. For decades we've heard "many eyes on the code!" as a plan for FOSS security, and it turned out to be an empty promise. OpenSSL, Log4j, Glibc, Dirty Cow, Shellshock...
So exactly how many people were indeed downloading that source code and then both reading and understanding it? Not as many as the hype led the world to believe.
I suspect the driving force behind the lack of open source as a solution is the liability issue. CEO John Doe is probably uncomfortable being the material risk taker when the software isn't written by someone he's legally tied to, either through employment or contracts.
Not to say it's a better solution or otherwise, but that's almost always the issue.
So she used a scenario she experienced as a hook to go off about medical device software being open.
Problem is, it doesn't add up. Manufacturer representatives aren't the only ones who can pull the device data; there are plenty of others who could, including her cardiologist. Or anyone else in the same profession with the interface kit. It's routine. There are even kits for patients to pull data themselves for home monitoring and diagnostics.
The problem comes from travelling well outside the market where that device was sold, and that's not solved by open source; it's an interface-standards issue between markets. And even where standards exist, open-sourcing the core runtime isn't going to be the solution to getting an EU cardiologist to pull data from a US device if the standards aren't aligned and no one local has the right kit. The same issue exists for all sorts of kit.
And open-sourcing certainly isn't going to be a panacea for any issues that exist; there have been some horrific security issues in widely used, well-scrutinised open software. At least the medical device has had to meet the appropriate development and certification standards, so even if it's not perfect, it's not going to suddenly get much better because you've let a crowd of randoms browse a load of esoteric high-reliability embedded code.
Of course the solution that appeals to a lawyer doesn't necessarily have to rely much on facts or reality...
Agreed - faulty diagnosis and reasoning. The problem was firstly her lack of a reader to hand - remember, this was a "life and death" situation.
Secondly, having got the data, she would have had to have the right application (*) to make sense of it.
(*) The right application includes a spreadsheet with appropriate layouts, formulas and graphs, capable of importing the data set. But in today's world it's a phone app.
>” One of the reasons why people get these devices is so they and their doctor can track their condition”
Clearly this was not the case here, otherwise her cardiologist would have selected a model that had a data export function they could use, plus the tools to read the data, and given her the app for her phone so she could monitor it.
The open-closed source debate is a red herring in this instance; if I were an investor in the SFC, I would be raising questions about the competence of the executives…
It is a strange thing. I worked on one a while back and noted a bunch of things. The device was based on a 65C02 and ran assembly language. It had a timer that woke the CPU every beat to save power (one half of the device was its battery). When they shipped the thing out, they included a dedicated laptop to interface with it (a simple expense) that ran the monitoring software. Cost-wise, about half was liability insurance (I believe that was the number), and the total cost was over $20k (back in 2000).
My function was to test the silly thing (we had a big breadboard we used). To do the testing we had a setup of 4 desktop machines to simulate the patient and record results. They were so concerned about interference that they switched off ethernet access to the switch, because they thought it might interfere with the testing. I worked on trying to make sure that all the execution paths worked correctly by devising several procedures to check all the "if then else" sequences. All of this for a device smaller than a deck of cards.
Of course it could have been "open source", but there was a certification process that was tedious to say the least. All in all an interesting project. When you get into medical devices everything is difficult. If it weren't, you wouldn't trust it AT ALL. Yes, data dumps should be easy to get, but nobody has set up any standards. Until then I don't hold out much hope.
Life goes on, one beat at a time.
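An illustrative take on that "check all the if-then-else sequences" exercise (the pacing logic below is invented purely for the example): one test case per branch, so a failure pinpoints the exact path that broke.

```python
# Hypothetical decision logic, with one test per branch.
def select_mode(rate_bpm: int, battery_ok: bool) -> str:
    if not battery_ok:
        return "conserve"
    if rate_bpm < 40:
        return "pace"
    elif rate_bpm > 150:
        return "alert"
    return "monitor"

cases = [
    ((60, False), "conserve"),   # battery branch
    ((35, True), "pace"),        # low-rate branch
    ((160, True), "alert"),      # high-rate branch
    ((70, True), "monitor"),     # fall-through branch
]
for args, expected in cases:
    assert select_mode(*args) == expected, (args, expected)
print("every branch exercised")
```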
Anybody remember, during the initial bad old days of the COVID-19 pandemic, some security researchers discovered how to hack a CPAP device to make it function as a ventilator? This opened up a legal can of worms, but given the urgency of the need, with people dying daily, it seemed a better risk than doing nothing at all.
When I was reading this story, SNMP jumped into my mind ... I think the dark side is calling ... just install an SNMP agent on the device and publish the device's MIB. What could go wrong?
History would say heaps.
At 70k lines of code, that is a lot more than I would have imagined. I am pretty sure that one bug per 100 lines is pessimistic, but without a verifiable (formal) specification that defines the behaviour of the device, who is to say what is a bug?
In the pacemaker case I would assume the cardiac stimulus is provided by an autonomous hardware signal generator and timing component, and that the processor's software only monitors, logs and fine-tunes the autonomous hardware.
If your pacemaker's software borks, you still have limp mode. Pretty much what the heart itself does. :)
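A sketch of that guessed-at split - every detail invented, just to show the shape of the idea: the "hardware" paces at a safe default and clamps whatever the software requests, so a software fault simply leaves the last safe rate running.

```python
# Hypothetical model: autonomous pacing "hardware" plus a supervising
# software loop that can only fine-tune within hard limits.
SAFE_MIN, SAFE_MAX, DEFAULT_BPM = 50, 120, 70

class PacingHardware:
    """Stand-in for the autonomous signal generator and timer."""
    def __init__(self) -> None:
        self.rate_bpm = DEFAULT_BPM            # paces regardless of software
    def set_rate(self, bpm: int) -> None:
        # The clamp lives on the hardware side: software can never
        # command an unsafe rate, even if its own logic is wrong.
        self.rate_bpm = max(SAFE_MIN, min(SAFE_MAX, bpm))

def software_loop(hw: PacingHardware, sensed_demand_bpm: int) -> None:
    try:
        hw.set_rate(sensed_demand_bpm)         # monitoring/fine-tuning only
    except Exception:
        pass   # software borked: hardware keeps ticking at the last safe rate

hw = PacingHardware()
software_loop(hw, 200)   # buggy or hostile request...
print(hw.rate_bpm)       # ...clamped to 120 by the hardware model
```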
I assumed that medical devices' soft-/firm-ware was developed with something like AdaCore's SPARK Ada; the idea of coding in assembler or C is frightening.
The article's main, or at least addressable, gripe was data access for clinical purposes. This could be remedied by layered standards; once you get to the application layer, it's just the format of the records that requires specification, and it definitely wouldn't require ASN.1.
Vehicle control computers suffered from the same nonsense, but I believe it was becoming less of a problem... pre-Tesla/EV, anyway.
To think Musk wants to put an IMD (neuralink) in your brain. You wouldn't need Tesla FSD to T-bone another vehicle your implant will save it the trouble.
Agreed, a bug per 100 LOC seems very pessimistic for firmware. Not because firmware is magically better, but because the cost of updating devices means that (most?) companies realise it's worth spending more on QAing firmware than on QA for their glitchy website, which can be updated instantly(ish).
With over four decades in the software business, I'm by no means suggesting complacency: our industry has serious problems, and this case appears to exemplify several of them. This XKCD:
https://xkcd.com/2347/
is 100% correct. I wake up every day wondering if this is the day that everything breaks. But exaggerations like 10 bugs/KLOC don't help, as they're too facile and make it easy for the people who *can* do something about it to dismiss articles like this.
Errors in the software are possible, but open source helps everyone find them. However, while an EKG interpretation issue could be a software error, it's also possible for the software to be processing data-collection errors. In a medical/technical EKG data-collection environment, an array of electrodes is often placed on the subject accurately over the heart, but over time the electrodes can come loose and be stuck back on in a different location... so the data collected may differ slightly even if the heart contractions have not changed.
This is just an observation of issues I saw 40 years ago when I was working to collect accurate EKG data for doctors. I didn't tell them that they had made an error; they always told me exactly what had happened when we examined the raw data ... people collecting data often spot errors that software spins past.
The issue isn't closed or open source code; the issue is implementing security to prevent hacking of these devices. These devices are too open to outside access. We need to end the remote-access mentality for everything - medical devices, cars, utilities, industry, etc. Some systems need to be in-person, hands-on, with no exceptions. I am at a loss as to why cars need remote access to internal systems; any command, diagnostics or troubleshooting should be done on-site, not remotely. As long as security is an afterthought, these problems will only get worse no matter the access to source code.