Er ...
'... the password used to gain access to the VPN was "complex" – it wasn't just "colonial123" ...'
So "Coloni4l123!" then?
The boss of Colonial Pipeline has appeared before a Senate Committee to explain the events which led to US East Coast fuel supplies running dry last month and some $5m being handed over in ransom. Speaking yesterday before the Senate Homeland Security Committee, Joseph Blount was quizzed about the incident before it became …
Maybe it was "1complexpassword" - but most likely it was something like GJt75$fhSwE09^ but written down on a sticky note attached to the underside of the keyboard because good safe passwords are very hard to remember... If I was in the malware business I would be financing an office cleaning company...
A genuinely complex password can't realistically be brute-forced. So the password was most likely picked up from the configuration of the VPN client on an employee's laptop: the laptop was hacked, that VPN configuration file was among the files exfiltrated, and the credentials were then offered for sale on a dark web site.
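The "can't be brute-forced" claim is easy to back with arithmetic. A rough sketch — the guess rate is an illustrative assumption for an offline cracking rig, orders of magnitude beyond what a rate-limited VPN login would ever permit:

```python
# Back-of-envelope search-space arithmetic. The 1e10 guesses/sec figure is an
# illustrative assumption, not a measured rate.
def years_to_exhaust(charset_size: int, length: int, guesses_per_sec: float) -> float:
    """Years needed to try every password of the given charset and length."""
    keyspace = charset_size ** length
    return keyspace / guesses_per_sec / (60 * 60 * 24 * 365)

# ~95 printable ASCII characters for a random 14-char string like GJt75$fhSwE09^;
# lowercase letters plus digits for something like colonial123.
print(f"random 14-char: {years_to_exhaust(95, 14, 1e10):.2e} years")
print(f"weak 11-char:   {years_to_exhaust(36, 11, 1e10):.2f} years")
```

The random password works out to billions of years even at that generous rate; the weak one falls inside half a year — which is why credential theft, not guessing, is the likely story.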
Later evidence by Colonial's CEO mentioned locking accounts of departed staff. That might be a generic suggestion, or it might be a hint.
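If the hint is about stale accounts, the reconciliation is trivial to automate. A hypothetical sketch — the account names and both data sources are invented for illustration, nothing here comes from Colonial's evidence:

```python
# Flag VPN accounts that no longer appear on the current staff roster.
# In practice the two sets would come from the VPN/directory service and
# the HR system respectively (an assumption for this sketch).
def stale_accounts(active_vpn_accounts: set, current_staff: set) -> set:
    """Accounts still enabled on the VPN but absent from the staff roster."""
    return active_vpn_accounts - current_staff

vpn_accounts = {"alice", "bob", "carol"}
staff_roster = {"alice", "carol"}
print(stale_accounts(vpn_accounts, staff_roster))  # candidates to disable
```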
The main VPN used by the company already had 2FA. So there's also a lesson here about withdrawing old services, and about making sure replacement services fulfil the full range of requirements so that the old ones actually can be withdrawn.
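For what it's worth, the second factor the main VPN had is cheap to implement. A minimal TOTP (RFC 6238) sketch using only the Python standard library — the shared secret below is the RFC's published test key, and a real deployment would of course use an audited library rather than hand-rolled code:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time-step counter."""
    counter = struct.pack(">Q", for_time // step)           # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890", T = 59s
print(totp(b"12345678901234567890", 59, digits=8))  # prints 94287082
```

Thirty lines of stdlib code for the algorithm — the hard part, as the comment above notes, is the account and device management around it, not the crypto.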
The hackers didn't shut down the pipeline. The hackers hacked the billing system. Colonial shut down their own pipeline because they couldn't bill their customers.
That's what I call "Putting the Country First".
Also: Using a "legacy VPN", reusing passwords, and no recovery infrastructure is a security plan. Got it.
Many of you know just how cheap corporate types are when it comes to IT and security. There will be more like this, unfortunately.
It isn't a single pipe from point A to point B. It is a huge number of pipes that go to a lot of places and carry a lot of different stuff like various grades of gas, jet fuel, and other products. They have to know what to send where, and their internal systems manage that as well.
It isn't as simple as "they should have just turned it on and made everything free"
Fair enough. So, what could they realistically do without the system? I don't want to just jump on people, but it sounds like they were at fault for not securing their systems. I was sort of hoping to hear more from knowledgeable industry types in this publication. So far it's only us in the peanut gallery giving Bronx Cheers.
They, and other similarly situated companies, hold our lives in their hands. Ask Texans, who froze to death when their power went out because companies went cheap by not protecting their turbines against freezing, even after they had been warned. This is a common problem.
I've been reading about the threat to SCADA and industrial systems for years. What's being done? People seem to wait until the "Big One" to do something.
Worse, the systems were almost secure, leading to a bit of complacency. If you read the neighboring story about Identity and Access Management, you'd know how difficult it is maintaining a list of every VPN and other hole in Hadrian's Wall that let those annoying Pictsies in. Oopsie!
It's a pretty standard response to go on lockdown when a breach is suspected, so I don't blame Colonial for their first actions. I do blame them for having a network design that was easy to move about once inside. The days one can trust local systems to be clean ended with the "I Love You" email virus, and Management will just have to pay to do things a bit differently.
To other admins: I'm in the midst of a similar security upgrade, so I share your headache.
The CEO said that their finance system just came back online this week. So they're obviously happy to run the pipeline first and work out the bills later.
It was the production supervisor who used their "stop work" power to halt the pipeline, without reference up the management chain. The supervisor did this because a hack of the SCADA systems which control the pipeline could kill people across the east coast of the USA. When the pipeline was shut down it wasn't clear if the ransomware had got that far, but the production supervisor simply followed the firm's safety policy that people matter first and acted to minimise the risk to people.
As for recovery, that's a fair argument. The pipeline was restarted manually, using the expertise of long-serving employees from the era when manual operation was the norm. Many of those employees are near retirement age. The CEO told the Senate Committee that they'll make manual operation part of training going forward.
It took until two weeks after the incident for Mandiant, the contractors Colonial employed, to determine that the SCADA system hadn't been affected. So it's unreasonable to think that someone at Colonial could have made that decision shortly after the ransomware attack. There's a lesson there for SCADA software developers -- it shouldn't be that hard to determine the integrity of the software.
[The facts above are from the CEO's evidence to the Senate Committee, the interpretation is mine.]
I worked for a company with a particularly...interesting system. They did their billing on the second of the month, starting at 0200. It needed to lock the database that our company ran on, it took hours to run, and it tended to break.
In that area, Comcast did its monthly network maintenance on the second of the month, starting at 0100.
Guess who drove 40 minutes to work once a month at 0100?
Explain to me exactly why the correct business decision is to allow critical infrastructure controllers to be connected to the internet in the first place. Use small words.