
Sounds like an open and shut case.
Criminals could waltz into secure zones in airports and government facilities by hacking and jamming open doors from remote computers over the Internet, DVLabs researcher Ricky Lawshae says. The since-patched vulnerabilities affect HID's flagship VertX and Edge controllers which are distributed in scores of busy locations and …
What's more worrying is that most door access control systems cache the 'permitted' swipe cards in the controllers to survive comms issues with the server. This means that you can spoof the server and inject cards into the controllers directly.
Or run a gateway hijack using Cain and Abel and pop the doors by sending unlock commands from your MITM door access server.
We demoed this to our door access supplier, which prompted them to implement an encryption mechanism for their controllers. Great for us - but more of a problem for all of the existing installed systems, which are generally fitted by alarm companies who have no network security capabilities ....
AC 'cos you know, embassies, airports, pharmacies, hospitals, police stations etc might make tasty targets.
That's just plain laziness on the part of the programmers. There's no need for that, and if they had practiced a defensive approach to security that assumes bugs are likely, they could have avoided this. They could have run the service as an unprivileged user belonging to a 'led' group that had access to that device (or have it use a setuid binary, owner root:led, mode 4110, that flipped the LEDs, if root was somehow mandatory for this).
Sad that a company that designs security hardware understands absolutely fuck all about software security. Sad, but not surprising.
No, the real problem is the use of functions like system(). The whole idea of building applications by the outdated technique of executing command-line tools through a shell should be banned in any sensible modern application (no proper parameter checking, no proper error handling). It's just looking for trouble, and it's lazy programming - like card-stacking to build a house of cards that will eventually crumble.
Less than zero? Definitely not.
I beg to disagree. I have a great need for a device that will flash an LED a negative number of times - it would eliminate the annoyance of all those other blinking LEDs in the bedroom at night. And if the same device is to be used in the server room, it will probably need to be a 32 bit signed integer.
I await a reply explaining the practical use of a negative float to enumerate LED flashes. Although I think an imaginary count might be an aid to handling hypothetical and virtual LED flashes.
"I beg to disagree. I have a great need for a device that will flash an LED a negative number of times - it would eliminate the annoyance of all those other blinking LEDs in the bedroom at night. "
No, what you need is a DED - flashed a positive number of times. Digikey has lots of high intensity dark emitting diodes...
Sure, you should do that too. But to assume that because you sanitized input you're fine is the height of hubris. There have been many, many bugs over the years where input that was "sanitized" turned out not to be fully sanitized, or not in the right way - e.g. when changes to what was being input caused the sanitizing code to behave differently.
That's why step one is to make it not run as root, so if an exploit happens you don't give away root. Sure, maybe you use non-privileged access as a building block towards privileged access, but that requires finding a separate privilege escalation bug. The idea is to raise the bar for exploit as high as possible by protecting yourself in every way possible, rather than say "my input sanitizing code is perfect, so the job is done".
"Good code by real programmers is exploit free, and doesn't run as root anyway, even though it could. Belt and braces attitude to security."
In my experience the code quality isn't the real issue - it's nigh-on guaranteed that everyone makes mistakes, and the environment around a bit of code changes too, which can transform previously legit code into garbage... ;)
The real issue is finding the resources (aka money + time) to do the work. Typically the folks I work with are buried under a mountain of must-do changes and no one who calls the shots sees fixing code that works (eg: stop running as root and employ priv-sep) as being a priority.
"The service runs as root? ... is just plain laziness..."
"If the input was sanitised, it wouldn't even matter that it was running as root"
Any single failure probably wouldn't be an issue, but this problem and more importantly the subsequent blindness in the comments just highlight that *ALL* areas of the system need to take responsibility for the security of the entire system. No piece can ever rely on security being provided by some other piece.
I don't normally play with such things in anger, but I did spend a few minutes looking at a HID access control system a few years ago (shared office building and my client was only a tenant, so the controllers etc were inaccessible to me).
Not only was the thing using the hellishly broken Mifare Classic cards, but the system wasn't even checking the encrypted blocks of the card - everything was done on the UID, which is read without authentication and sent in the clear.
What took the biscuit was possibly the world's most ironic encryption key. The keys when converted to ASCII read as 'HID IS' and ' GREAT'.
Who, what, when, where and why did the "crack" concept become replaced with the "pop" meme in this forum? That's a totally serious question. This is a news outlet, kindly elucidate us on the concept. (I fully expect this comment to be nixed (again) ... What are you afraid of, anyway.)
@Jake
Well, as far as websites go - a cursory look - you might try a popular US chain of locksmiths, www.popalock.com, and there is always the term used for opening (elderly - I've seen it done on a 1980s Rover) car doors using a tennis ball - useful for those who have locked themselves out.
If you want to go earlier (I have a large collection of crime fiction), it's used in at least two 1930s/40s crime thrillers, The Saint in Miami (I believe) by Charteris, and an Edgar Wallace book. Many of the latter's books are available on Gutenberg. I'll leave it as an exercise for you to complete. More recently, try Time to Crime: Doing Time, Listening to Crime by Michael Lee (2009/10?) and you'll find it again.
So there we are within 90 seconds, even corporate security professionals can learn something on the Reg :-)
Popping something, like the hood, a safe, etc, is old slang and certainly existed back in the 60's in the UK.
http://www.popalock.com/ is also a locksmith company, they will come out when you need a lock popped.
Unless this forum invented time travel, popping as a euphemism for breaking has existed since way before I was born, and I am distinctly middle-aged.
I've worked on door entry systems in the past, at a company that sold our "proof of concept demo" to a UK-based building society. Installed in branches and head office. Kinda scary, as that system had zero security in it: everything transmitted in plain text, pretty trivial to spoof.
Trouble is, salesmen just sell boxes and features. They don't want any development time wasted on features their client cannot see, as it will cut into their commissions. To the layers of management, if the product works then sell it. "We can add security layers at a later time - when a client requests and pays for it."
Same old story... this was the mid-1990s when the above happened. So it does not surprise me it still goes on as Management are still Management and Sales Staff are still brainless. You can see how little things change just by looking at BOFH
As I have said to one particular colleague of mine.
NEVER EVER pass something user-supplied to a shell. EVER. That includes system(). Of course he went ahead thinking he could escape the data himself, not realising that no, you can't escape single quotes inside single quotes.
I would now add: NEVER put user-supplied data into an environment variable that might be passed to bash, even if that hole was supposedly fixed - the fix was a nasty hack. In fact, don't even have bash.
Preferably don't use a heavyweight OS at all.
I always thought that the scenes in movies where "hackers" (sic) are shown remotely opening secure doors was just a bunch of nonsense. It turns out that it's true - who knew?
I've been watching too many TV episodes of "leverage" lately I guess..
A "battening down the hatches" icon seems appropriate..
How would one gain access to the lock, to send these UDP packets? Most of the systems I've managed/come across are hard-wired (Salto, etc) and don't communicate using TCP/IP at all.
I'm trying to establish where this attack is carried out, can you walk up to a door or do they emit a local SSID of some sort? (Or does the controller?)
I've never used HID branded kit so please excuse my ignorance, I've deployed Salto via Mifare plenty of times but never HID.
The swipe reader and physical lock will be electrically connected to a controller box nearby. That box will be connected via an RJ45 lead to an ordinary switch. If someone had their brain in gear, it will be on a different VLAN to the rest of the kit in the building.
Once you are inside the building, say in a visitors' area, you can attack the RJ45 and open the rest of the doors.
Actually, you put the control panels in the most secure area you can find - they should never be in an unlocked electrical riser in the public space. That would be the cheapest contractor getting it done and gone; that's not how things should get done today!
I've always wondered what would happen if you drilled carefully into a reader from outside, then hooked out the wires and patched them across to a handy mains socket (or a 10,000-farad capacitor). When the magic smoke cleared, would the door be open or closed? And what would the door access software have logged?
What does "fail safe" mean in this context - fail so no one is trapped, or fail so the door is still secure?
The people should be told!
Regarding "fail safe" and where it needs to be implemented, depends on the door's location in relation to access and egress routes in the event of an emergency and the security of what's being protected.
Some door locks will fail open, some will fail secure - it all depends on where the door is, and whether it's legally required to open in the event of an emergency... This is why in some places you'll find an access card reader, and then next to it a "break glass" override... or why, if you need a card to access an office, for egress you can often just use the door handle (an "escape set" configuration).
Depending on how the system's implemented, there isn't always a direct link between reader and door lock - so overloading the signal/power lines on a card reader may only succeed in blowing out the port on a local controller node, whilst the lock may be separately connected. By releasing the magic smoke on the card reader, you likely just make it harder for anyone to open the door at the location, and in a properly monitored install, flag that there's a fault in the reader equipment at a given location.
Why the heck does a door entry device need a Linux OS? I've designed a few such units, and the most advanced functionality you would need can be written in assembler and run by a Z80 or PIC in less than 32KB of ROM, with a bit of flash to store what you need for offline operation - and that included public-key and AES encryption algorithms. Unless you want the unit to do iris or fingerprint comparisons, of course.