What's the word I'm looking for?
I'm sure it had something to do with hips. Or maybe some bloke named Chris. As our Yorkshire readers might say, Eeee.
Google has dismissed an engineer who had access to its back-end systems after he violated the company's internal privacy policies. On Tuesday, Gawker reported that the engineer, David Barksdale, was dismissed in July after he accessed at least four user accounts. And Google later released a statement confirming the dismissal …
Therein lies the classic problem: the bigger the pie you have at the picnic, the harder it is to keep the pests away from it!
I think Google do a great job of indexing information; however, I wouldn't trust them to look after my kid's piggy-bank. Just too many systems and way too much info under their control.
If you leave your front door open, expect to get robbed; if you leave your WiFi unsecured, expect people to use it.
All Google have done from this is show people they need to lock their WiFi down with a password.
So if anything, Google actually IMPROVED security for a lot of people.
You wouldn't leave your doors and windows open, so why leave your WiFi open to anyone?
1. This news will get out, sooner or later. After all, there must be some people who know he used to work at Google (a sought-after gig) and now does not.
2. If you (i.e. Google) don't announce it, world+dog will say you covered it up. Maybe even Gizmodo. Then you're on the defensive explaining that it wasn't really all that significant, blah blah.
In this case, I don't think that Google went public soon enough. They should have announced it about the time it happened, although probably without naming the person involved. This should have been out in July, not September.
"That said, a limited number of people will always need to access these systems if we are to operate them properly – which is why we take any breach so seriously"
Actually, where I work, no one has any back-end access by default; access is granted with a change management record or an incident management record, by the sysops. The temporary access is supplied with a password which enables you to sudo (or runas) and gain access to what you need. DBAs get the database, storage guys get storage, etc. In the rare event that someone needs root/administrator access, the ID is handed out and the password is automatically changed a fixed amount of time after it is issued.
"..but who issues/hands out the passwords?"
An organisation in which I worked had this system. The passwords were under the control of a Security Manager. He had no role as a systems or network administrator - his role was solely to control and monitor access and changes to operational systems.
To the person who suggested many people would have backdoor access: not likely. Regular audits would uncover this, and anyone found to be accessing systems in this way would be subject to instant dismissal and possibly legal action.
@AC1 - Our company sysops hand out the passwords, having verified an Incident Management Record or Change Management Record. The scripts that they use only give access to the intended users and can't give access to themselves.
@AC2 - You can suspect all you want, but the systems are regularly audited for local accounts which shouldn't be installed or are configured with the wrong access levels. Also, root/admin passwords have to be checked in and out with time limits, and the reset of the passwords is handled automatically with no user interaction; there is no way of finding out a root/admin password without setting off a load of alarm bells. We have no unicorns, just well-designed, heavily audited security.
I can still think of a few probable ways around that: a bent sysop; a user not caring about drawing attention to themselves because they can achieve their objective sooner than they will be discovered; and (of course) the granddaddy of them all, physical access. I'm confident that I could think of some more if I knew your site.
Usually, the more attention someone is focussing on the back door, the less attention they are paying to the front door, the windows and the roof .....
Here is a rough overview of how it works (for Unix/Linux, at least). The sysops can run a script which grants the right for sudoers; they have no rights to the sudoers file themselves. There is a series of sudo configurations for administering different aspects of the system, tailored to the different departments (storage, Oracle, Sybase, Unix, etc.). Your normal logon ID is granted access for you to temporarily be a sudoer with these command sets. The right to sudo to these command sets is automatically revoked after a pre-configured amount of time.
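A minimal sketch of that kind of scheme, assuming a sudoers drop-in directory and with all names (change number, alias, user, commands) invented for illustration: the granting script writes a department-specific command set for the requester, and the same script schedules the revocation.

```
# /etc/sudoers.d/chg-12345 -- written by the grant script against a change
# record. All names here are hypothetical.
Cmnd_Alias STORAGE_CMDS = /usr/sbin/vgdisplay, /usr/sbin/lvextend, /usr/sbin/multipath
jsmith  ALL = (root) STORAGE_CMDS

# The grant script would also schedule the automatic revocation, e.g.:
#   echo 'rm -f /etc/sudoers.d/chg-12345' | at now + 4 hours
```

Using a drop-in file rather than editing /etc/sudoers directly fits the claim above: the sysops' script can add and remove grants without anyone having edit rights to the main sudoers file itself.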
As for proper root access, there is a piece of commercial software which stores all of the root passwords in a database. You can check root IDs in and out; when they're checked back in to the system, the particular root ID's password is reset and the database updated. It requires two people to check out an ID.
This really isn't advanced technology; it has been around for years, possibly even decades. I really don't see why you're all so suspicious.
.. they're security wannabeez :-).
The correct security models have been around for years, including "four eyes" access to systems which meant you needed two people with defective morals collaborating before you had a risk.
The problem is that implementing good security requires effort and overhead, and costs money. I'm happy to hear of a company that takes its responsibilities seriously, and has external audits to prove it.
Now for the next step: protecting the executives. Practically nobody gets that right at all, which is why we started our business, and we're doing so well we had to merge with another company to have enough resources handy :-).
(1) Someone, once legitimately granted sudo access, could not use the fact of having sudo access to make this status permanent? $ sudo visudo, anyone? $ sudo bash? $ sudo passwd?
(2) Your "piece of commercial (therefore, presumably closed-source; therefore, most probably not audited by you) software which stores root passwords in a database" could be sending those passwords elsewhere?
(3) The script that grants sudo access must modify /etc/sudoers, which itself requires root access; might the script itself be vulnerable?
(4) Someone could obtain sudo access by exploiting the oldest known vulnerability (human stupidity)?
Like I said, there's bound to be a weakness in that system *somewhere*.
>>your normal logon ID
This would be the one with permanent back-end access then?
And don't give me that 'read only account' rubbish; there's no such thing, plus it's one step up the ladder towards privilege escalation.
>>I really don't see why you're all so suspicious
Probably because you're describing "ideal world" and not "real world"
p.s. you've disclosed enough about the systems (and your name) that I'm sure I know who you are ;-) Time for a social engineering 'hack'. I think it is the gullible people that get targeted :-o
Was Ken Thompson's lecture "Reflections on Trusting Trust" from 1983. ( http://cm.bell-labs.com/who/ken/trust.html is a reprint.) Namely, he made a proof-of-concept C compiler that did two special things. When it detected code for a login function, it would inject a back door into the code. When it detected code for compiling, it would inject the detecting code (both for login and compiler) into the new compiler. He then compiled the compiler with clean code and hid the detecting source code.
The result was a back door that was undetectable even if you had an audit of the source code, and recompiled the compiler to make sure.
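The first of the two injections can be shown in miniature (Python standing in for C; the function names and patterns are invented, and the quine-style self-reproduction that lets the real attack survive a source audit is only hinted at in a comment):

```python
# Toy model of the 'Trusting Trust' compiler. The 'compiler' here just
# returns the code it would emit, so the injection is visible as text.

LOGIN_SRC = (
    "def check_login(user, password):\n"
    "    # login body\n"
    "    return password == lookup_hash(user)\n"
)

BACKDOOR = "    if password == 'magic':\n        return True\n"

def trojaned_compile(source):
    # Injection 1: recognise the login routine and splice in a back door.
    if "def check_login(" in source:
        return source.replace("    # login body\n",
                              "    # login body\n" + BACKDOOR)
    # Injection 2 (elided): recognise the compiler's own source and re-insert
    # both injections, quine-style, so a trojaned binary keeps producing
    # trojaned compilers even from perfectly clean compiler source.
    return source

compiled_login = trojaned_compile(LOGIN_SRC)
print("'magic'" in compiled_login)  # → True
```

The clean source on disk never contains the back door; it appears only in what the trojaned compiler emits, which is exactly why auditing and recompiling the source proves nothing.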
Oh, yes, I know all about that. It's basically a classic sleight-of-hand manoeuvre, using the compiler source code as a blind to misdirect the audience's attention from the pre-compiled -- and gimmicked -- compiler binary. But there is one way to be sure you have a clean C compiler:
Rewrite the C compiler, in assembler, from scratch.
A slightly quicker method, functionally equivalent but using the computer to do more of the work for you:
Write a C *interpreter*, in assembler, from scratch, which understands just enough of the language to interpret the source code of the C compiler. Then you can run the compiler's source code interpretatively, and use this temporary compiler (which probably will be as slow as a snail swimming upstream in treacle, but you only have to use it once) to compile the real compiler.
Unless there's a backdoor right in the instruction set of the processor, you should be safe.
Not even then; the assembly wouldn't help. The point of the exercise was that he could put the injection anywhere: in the assembler, in the firmware, in the chip itself. Heck, with the advent of hypervisor technology, you're not even sure of your system.
Point being, unless you make the system yourself (And nobody does that), it's all on trust. It has to be.
"Actually where I work, noone has any back end access..."
"...access is granted... by the sysops."
Uh... so how can the sysops grant access if they don't have the access to grant the access??
Of course, if you emphasize the "...noone has any back end access BY DEFAULT...", that makes more sense, but the press release also states that Google employees don't have back end access by default...
So this was a "sysop" level person who muffed up, or was someone who was likewise granted access temporarily.
Of course, it might also be that this person (a) gained access improperly or (b) was inappropriately given access and he ended up where he shouldn't have been. If the first case, dismissal all well and good. If the second, I would be suing Google, or at least filing for unfair dismissal.
I feel the same way, since it'd essentially be a black-balling move on Google's part. However, it does send a strong message to anyone thinking of doing something similar: there are serious consequences for privacy invasions.
All said, of course Google wouldn't know how to keep a person's private info secret.
"dropping the guy's name in public is a violation of his privacy"
No it's not. It's bad luck for him that his name is so publicized, but all these investment firm guys that royally failed at investing had their names publicized, embezzlers have their names publicized, all kinds of fuckups, illegal or not, get their names publicized. The fact of the matter is he SERIOUSLY breached company policy and may have broken the law. Some companies choose to keep this stuff hush-hush but there's certainly no obligation for them to.
Biting the hand that feeds IT © 1998–2021