We're gonna need some kind of AI to check up on people all the time and report them to the authorities?
It's exasperating how each high-profile computer security breach reveals similar patterns of failure, no matter the organization involved. One such recurring theme is that IT departments find it hard to stop employees going rogue, or spilling their login details into the wrong hands, ultimately leading to damage or …
One thing I've seen make a difference is recent regulatory changes insisting that all new documents have a defined confidentiality classification attached to them. With that, you can pick up when people send confidential details out, or 'declassify' documents, or attempt to make customer details non-confidential (using admittedly simple pattern matching to spot things like NI numbers).
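That sort of pattern matching really is simple. A hedged sketch of the idea (the regex below is a rough approximation of the NI number format, not the full validation rules, which also exclude certain letter prefixes):

```python
import re

# Simplified UK National Insurance number pattern: two letters,
# six digits, and a final letter A-D (e.g. "QQ123456C").
# Real validation excludes certain prefixes; this is a rough DLP-style check.
NI_PATTERN = re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b")

def contains_ni_number(text: str) -> bool:
    """Flag outbound text that appears to contain an NI number."""
    return bool(NI_PATTERN.search(text))

print(contains_ni_number("Customer ref: QQ123456C"))        # True
print(contains_ni_number("Koala report for Q2 2017"))       # False
```

Crude, yes, and it will throw the odd false positive, but as a tripwire on outbound mail it's better than nothing.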
The main thing is that it gets everyone to think "this data is confidential, why does Bob need it?" slightly more often than "Bob wants a copy of the Koala report for Q2 2017, I've seen that somewhere".
...also you can mark emails "highly confidential" when you're saying unkind things about your third-party partners and be reasonably sure they won't get accidentally forwarded. *cough*
Some jobs are only done infrequently (once per quarter, once per year, or on ad-hoc request). Building an automated system that detects abnormal access without raising false alerts on infrequent-but-valid access will be very difficult, if not impossible.
There is also the case of worker 1 being unavailable for some reason and worker 2 having to take his/her place temporarily, which will cause a big change in worker 2's access patterns.
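To make the false-positive problem concrete, here's a toy sketch (user names, resource names and the threshold are all invented for illustration): a naive frequency-based detector flags anything a user hasn't touched in the training window, so a legitimate quarterly job, or covering for an absent colleague, looks exactly like abnormal access.

```python
from collections import Counter

def train_baseline(access_log):
    """Count how often each (user, resource) pair appears in the history."""
    return Counter(access_log)

def is_anomalous(baseline, user, resource, min_seen=1):
    """Naive rule: flag anything seen fewer than min_seen times before."""
    return baseline[(user, resource)] < min_seen

# 90 days of history containing only the daily job.
history = [("alice", "sales_db")] * 90
baseline = train_baseline(history)

# The quarterly audit wasn't in the window, so valid work gets flagged...
print(is_anomalous(baseline, "alice", "audit_report"))  # True (false positive)
# ...and covering for a colleague looks just as 'abnormal' as a rogue download.
print(is_anomalous(baseline, "alice", "hr_records"))    # True (false positive)
print(is_anomalous(baseline, "alice", "sales_db"))      # False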
There is also the question of who the automated system reports to: if the bad actor is the one who receives the reports, then the system becomes useless.
...you need not to know.
While it can be convenient, or even fun (in the sense of curiosity), to have blanket access to data, if you simply don't have access, you can't be accused of mishandling it.
However... the problem I have often found is that the processes for allowing only the necessary access to data are usually poor to the point of being unusable, and it is often not only expediency, but simply needing the ability to do the job you are asked to do, that ends up with you having much more access than should actually be required.
A case in point: the official way to get product information in a company I worked for was to use an appallingly clunky web-application. If, however, you were on good terms with the product management team, you could get a copy of the Excel spreadsheet which was used to populate the web-application's database. This meant you could get the necessary information extremely efficiently, but had the side effect of being exactly the document that could be saved to a USB drive and 'shared' with a competitor. Of course, no disgruntled salesman ever did that.
As ever, a process that is easier to follow than avoid will get used. Make things too difficult, and people will find workarounds.
I've been using Cisco StealthWatch recently. Got to say I'm impressed with it. It can detect this sort of threat very well. Not perfect (no tool is), but it would alert me to anyone taking a large download of data from an internal server, or uploading large datasets too. Plus all the other intelligence built in around C&C and regular threats. It'll even detect threats in HTTPS without needing to decrypt the traffic. It's linked to ISE and, if configured, could automatically disconnect the user. Although we don't quite yet have full confidence to let it loose on its own, so it still requires manual intervention for now. Worth a google of anyone's time.
“The logical answer to misbehaving insiders is user activity monitoring (UAM) and/or user and entity behaviour analytics (UEBA)”
The logical answer is to encrypt all users' data; failure to do so should be grounds for dismissal. Require a hardware security dongle to be present on the client to access that data. Given the nature of modern systems, relying on a firewall and deep packet inspection is just so much palliative medicine: it won't prevent the patient from catching a 'virus'.
The first rule of good information security is to limit access to confidential information on a need-to-know basis. If you do not need to know the information, you should never have access rights. The second rule is to differentiate between those who need to use the information and those who need to be able to change it. If you only need to use the information, you should have read-only rights. Only those who will be actively entering/updating information should have write access.
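A minimal sketch of those two rules (role and resource names are invented for illustration): absence from the table means no access at all, and write permission is granted separately from read.

```python
# Need-to-know ACL sketch: no entry means no access,
# and "write" is a separate grant from "read".
ACL = {
    ("payroll_clerk", "payroll_db"): {"read", "write"},  # actively updates it
    ("auditor", "payroll_db"): {"read"},                 # needs to see, not change
    # No entry for ("salesman", "payroll_db"): no need to know, no access.
}

def can(role, resource, action):
    return action in ACL.get((role, resource), set())

print(can("auditor", "payroll_db", "read"))    # True
print(can("auditor", "payroll_db", "write"))   # False
print(can("salesman", "payroll_db", "read"))   # False
```

The important design choice is the default: an unknown (role, resource) pair denies everything, so access only ever exists because someone explicitly granted it.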
The rules are simple but often not applied because it requires the local PHBs to actually think and manage.
Biting the hand that feeds IT © 1998–2020