Re: CVV should never be held
Keys were probably held in the database alongside the data being protected by them.
I am hoping that does not mean EasyJet or any other company is being let off its legal responsibilities. I expect the ICO to enforce the law and issue fines at a later date for GDPR violations which occur during COVID-19, but I fear the ICO will go easy on them and they won't be held accountable.
1. It is interesting that the ICO won't release details of when it was notified. There is a legal time limit (72 hours under GDPR) for notifying the ICO once a breach has been detected. This makes me think that EasyJet did not comply with the time limit.
2. There is no legally mandated requirement or time limit to notify the customers, but if the ICO thinks you should have done so, given the potential impact on the customers, and you haven't, the ICO can take that into consideration when determining the size of the fine.
If the breach was detected in January and EasyJet didn't notify customers until the beginning of April, then the ICO should throw the book at them. But somehow I don't think they will.
Yes it does make a difference as it means a certain attack vector is no longer possible. That reduces the risk of compromise of the card data.
It doesn't completely eliminate the risk as different threat actors may adopt a different attack vector.
But the idea that it's not worth implementing the measure because it doesn't entirely eliminate the risk is flawed.
Repair work may involve replacing components. If you don't repair it, then it doesn't work.
So what's the value of the item not working, compared with its value working?
Redoing the soldering is unlikely to devalue the board, and replacing the odd passive component isn't either.
But if you replace the original 6502 processor chip, which has a date stamp of 1978 (or whatever year it was), with one dated 2005, then I can see that might have an effect on the value.
Oh, Dwarf clearly knows all the theory but has little practical experience on large production systems of high complexity.
Often, documentation is missing. It shouldn't be, but that's the real world. And even if the documentation is present, that's not the answer; at the end of the day, it's down to people and what they know about the system, and keeping information in their heads for fast recall. Understanding doesn't always come from reading a document; it comes from real-world hands-on practical experience of a system.
On one system I work on, it has literally taken me several years of daily use to build up knowledge and understanding, such is its complexity.
Quote:If they do, then you don't just reverse whatever change you made: you have to fill in a great mass of forms which describe what you're going to do, apply for the access which lets you do what you're going to do, get approval from a bunch of very cautious people many of whom don't understand what you did to break it, how your proposed fix
Not really. Because when you book in a change window, that change window should allow for reversion of the system. And the post deployment testing should be conducted within that change window too, so the failure should have been detected within the change window.
There's probably a little huddle of people that occurs to make the decision to revert, but it won't be any more lengthy than that.
>chances are it was tested but someone made a mistake in the final implementation I suppose.
Chances are it was not tested. Why? Because in my experience, test environments usually contain the applications and the logical solution architecture, not the real physical hardware with the firewalls.
The network infrastructure element of the production system is rarely duplicated in a test environment, or at least rarely duplicated with sufficient fidelity to reality.
The 5 hours probably wasn't for the reversion of the firewall configuration. It was most likely down to tracking down why the system wasn't working.
Remember that the guys who make firewall changes rarely understand the system and how it works.
Somebody would have reported that some functionality wasn't working, but if you've just carried out a large deployment, the firewall change is just one small part of that.
Yes, it is possible the ICO may continue to be toothless and fine lightly.
But consider this.
Any data subject in the EU who wishes to make a complaint about a data abuse or breach has the power to report it to ANY GDPR supervisory authority in the EU, not just the ICO.
The GDPR regulation requires that supervisory authorities across member states share information and work together.
If the ICO develops a reputation for being weak on issuing penalties, UK data subjects can take their complaints to other supervisory authorities outside of the UK.
That is probably true for the Data Protection Act, which is now defunct. But GDPR was specifically developed with social media companies in mind, given the way the data was being shared. This was recognised by the EU. Under GDPR, there is no single fixed maximum fine which applies to everybody.
The maximum fine payable by any company is dependent upon their company turnover.
The fine payable is determined by the ICO, taking many factors into consideration, including how cooperative the company has been with the ICO, and lies between zero and the upper limit calculated from the company's global turnover.
There is a maximum fine under the now-defunct Data Protection Act; there is no fixed maximum fine under GDPR. There is an upper limit determined by a percentage of the company's turnover, and the fine, in pounds sterling, can be anywhere from zero to that upper limit. The higher the company's turnover, the higher the upper limit; there is no cap on the upper limit itself.
In Facebook's case the fine they would pay under GDPR would be anywhere from zero to $1.6 billion.
For a company with a higher turnover, the upper limit on the fine would be higher.
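For illustration, the upper limit for the higher tier of GDPR fines is the greater of EUR 20 million or 4% of global annual turnover (Article 83(5)). A minimal sketch; the turnover figures below are made up for the example:

```python
def gdpr_fine_upper_limit(global_annual_turnover_eur: float) -> float:
    """Upper limit for the higher tier of GDPR fines (Article 83(5)):
    the greater of EUR 20 million or 4% of global annual turnover."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# Hypothetical turnover figures, for illustration only.
print(gdpr_fine_upper_limit(10_000_000_000))  # 400,000,000.0 -- 4% applies
print(gdpr_fine_upper_limit(100_000_000))     # 20,000,000.0 -- the floor applies
```

The actual fine imposed can then be anywhere from zero up to that limit, depending on the factors the supervisory authority weighs.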
It might be a free service but that does not give the company providing that service the right to break the law.
The law sets out everybody's expectations, it's a standard from which everybody works and complies. The public knows what their rights are and the suppliers of services know what they have to provide.
It's completely inappropriate then to say "There is a legal standard which you must follow, but if you're providing a free service, you can totally ignore it". How do customers know what their rights are if the providers of free services are given complete carte blanche to ignore the standard and do whatever they want?
What is particularly worrying about the shadow accounts is that, firstly, people didn't consent to Facebook collecting data on them, and secondly, data subjects have no way to request that Facebook cease processing and storing the data.
These are both, in themselves, breaches of GDPR.
The lawyer is right about law not being applied retrospectively, but there is an interesting legal issue here. That of when they reported the breach. They could have reported the breach under DPA but they left it and reported it under GDPR. So which is relevant, when the breach occurred, or when they detected it, or when they reported it?
They have known about a possible data breach since last year. The company's data protection team must be staffed by morons. They could have reported the breach under the Data Protection Act and received a maximum fine of £500,000; now they have chosen to report the breach under GDPR, the fine could theoretically run into the hundreds of millions of pounds. Why? Because their turnover is £10 billion.
It seems to me FB have two issues to contend with: Firstly this court case and secondly compliance with GDPR which they must have in place by 25th May. They can't use this court case as a delaying tactic to comply with GDPR. They have had two years in which to prepare for GDPR.
But nobody can bring a case under GDPR right now, so this case can only be prosecuted under whatever existing data protection legislation Ireland has in place.
Yes, it's kind of moot, or will be in a few weeks, but the claimant is acting under existing legislation and FB need to fight their case based on that legislation. It will be up to the claimant whether he wants to discontinue proceedings feeling that matters have been overtaken by GDPR on 25th May.
I doubt the claimant will be able to change the case and say he wants to progress it under GDPR law. He'd need to start a new case.
Easy to detect fraud. They record details of all transactions and the time and date at which each was performed.
All they have to do is sit back and wait for people to complain money has been stolen from their account. Then when the complaints roll in, investigate those complaints relating to transactions which occurred during the time window of the change.
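That investigation step amounts to a simple filter over the complained-about transactions. A sketch of the idea; the record layout, IDs, and timestamps are invented for illustration:

```python
from datetime import datetime

# Hypothetical complaint records: (transaction_id, timestamp) pairs.
complained_transactions = [
    ("tx1001", datetime(2020, 5, 10, 14, 5)),
    ("tx1002", datetime(2020, 5, 12, 9, 30)),
    ("tx1003", datetime(2020, 5, 10, 16, 45)),
]

# The change window during which the suspect deployment was live (assumed).
window_start = datetime(2020, 5, 10, 13, 0)
window_end = datetime(2020, 5, 11, 1, 0)

# Investigate only the complaints whose transactions fall inside the window.
suspects = [tx for tx, ts in complained_transactions
            if window_start <= ts <= window_end]
print(suspects)  # ['tx1001', 'tx1003']
```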
The problem is always going to be that when you construct test data, how realistic is it? I did some work on a system a few weeks ago where I could not obtain a data model of my source database, and only over time did I discover problems in the live dataset which I needed to cater for. Had I constructed test data, I would have built it to what I expected the data model to be, and my software would have failed.
An employee working with sensitive live data simply has to sign an NDA. Now, that doesn't guarantee they won't steal the information, so you also have to consider who the people are that are working on that data, and which country they are in. And worst case, you can pseudonymise it by tokenisation.
What is the problem with using live/real data in a test system? As long as it is protected in all the usual ways. And as long as you ensure the test system is kept separate and isolated from the production system so you don't inadvertently update the production system with test transactions from the testing?
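The tokenisation mentioned above can be sketched minimally as follows; the field names and records are hypothetical, and in practice the token vault would live in a separate, access-controlled store:

```python
import secrets

def tokenise(dataset, sensitive_fields):
    """Replace sensitive field values with random tokens, keeping a
    separate mapping so authorised staff can reverse the process."""
    vault = {}   # token -> original value (stored separately, access-controlled)
    cache = {}   # original value -> token (so equal values map to equal tokens)
    out = []
    for record in dataset:
        clean = dict(record)
        for field in sensitive_fields:
            value = clean.get(field)
            if value is None:
                continue
            if value not in cache:
                token = secrets.token_hex(8)
                cache[value] = token
                vault[token] = value
            clean[field] = cache[value]
        out.append(clean)
    return out, vault

# Hypothetical live records, for illustration only.
live = [{"name": "A. Smith", "card": "4111111111111111", "amount": 42}]
test_data, vault = tokenise(live, ["name", "card"])
print(test_data[0]["amount"])  # 42 -- non-sensitive fields pass through unchanged
```

Because equal values map to equal tokens, referential integrity across records survives, which keeps the test data realistic without exposing the originals.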
When my employer offshored system support to the Asian subcontinent to save money, we saw one key effect - the duration of operational outages increased dramatically.
One incident took several months to investigate; I got involved, gave them a few pointers, made them think, and within 24 hours they had found the fault.
There are huge cultural problems with using low cost workers from the Asian subcontinent, and you shouldn't use them for any kind of support or development. I s'pose I shouldn't complain too much: their incompetence kept me in work.
Hold on. There may be irony involved because of the government's appalling record of IT failure, but that does not preclude the government from criticising companies. These companies need telling off and holding to account, and the government is the right party to do that.
It is just a shame that the people they employ on House of Commons Select Committee hearings know nothing about IT.
The government can demand what it likes, companies like this hide behind obfuscation. They rarely disclose what actually happened.
Look at the big BA scandal recently, they bluffed their way through with an incomplete explanation claiming they had a power surge when too many of their systems were turned back on at the same time, but they never disclosed what caused the original power failure and why their battery and generator systems did not kick in.
>The retention period itself isn't the main factor. It's what you're doing with the logs in that time that really matters, and how you enforce that pattern of use.
No that is not right.
You can do whatever you want with the logs so long as the data subjects whose data is in those logs have given you consent. One of the lawful bases you can provide for processing data is "consent". The other main lawful basis is "performance of a contract"; in other words, you are collecting and processing (which includes storing) the personal data in order to deliver the service to them.
What you can't do is collect PII from a user, tell them you are collecting it to be used for a particular purpose, and then later do something different with the data which the user doesn't know about. If you want to do something new with the data, use it in a new process or for some other purpose, you need to go back to the user (data subject) and ask for their permission.
The retention time issue comes under a different principle of GDPR. And it is a fundamentally important principle of GDPR. You should only keep PII data for as long as is necessary, and you need to be able to justify why you are keeping it for that length of time.
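Enforcing that retention principle in practice often comes down to a scheduled purge of anything older than the justified period. A minimal sketch, with an invented log format and dates:

```python
from datetime import datetime, timedelta

def purge_expired(log_entries, retention_days, now=None):
    """Keep only entries younger than the justified retention period."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retention_days)
    return [e for e in log_entries if e["timestamp"] >= cutoff]

# Invented log entries, for illustration only.
logs = [
    {"event": "login", "timestamp": datetime(2020, 5, 30)},
    {"event": "login", "timestamp": datetime(2020, 1, 1)},
]
kept = purge_expired(logs, retention_days=30, now=datetime(2020, 6, 1))
print(len(kept))  # 1 -- the January entry is past its retention period
```

The retention period itself (30 days here) is just an assumed figure; under GDPR it's whatever length you can actually justify for the purpose.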
You are pretty much correct.
But I think summarising GDPR in a couple of paragraphs like that is too simplistic. There is a set of principles and data subject rights that need to be adhered to. A full description of those cannot be provided in a couple of paragraphs. Your text covers only one right and one principle.
Personally identifiable information is any information from which a living individual can be identified.
-> GDPR doesn't cover any data on dead people.
Things like IP addresses, names, addresses, email addresses which contain a person's name from which they can be identified (even if it is a business-related email address), post codes, medical information, political affiliations.
You raise an interesting point in relation to aggregation. The key question is this: can a person be identified from the data (whatever that data is, aggregated or not)? The answer might be no, but then the next question is: if this dataset is combined with another at some point in the future, can the person then be identified? If the answer is yes, and that aggregation of datasets occurs and you haven't taken adequate steps to protect that data from a breach, then you are at risk of being fined under GDPR.
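A toy example of such a linkage attack, with entirely made-up records. Neither dataset names the individual on its own, but joining on the shared quasi-identifiers re-identifies them:

```python
# Two datasets that are individually "anonymous" (illustrative, made-up data).
health = [
    {"postcode": "SW1A 1AA", "birth_year": 1970, "condition": "diabetes"},
]
electoral_roll = [
    {"name": "J. Bloggs", "postcode": "SW1A 1AA", "birth_year": 1970},
]

# Joining on the shared quasi-identifiers re-identifies the individual,
# attaching a name to the medical condition.
reidentified = [
    {**h, "name": e["name"]}
    for h in health
    for e in electoral_roll
    if h["postcode"] == e["postcode"] and h["birth_year"] == e["birth_year"]
]
print(reidentified[0]["name"], reidentified[0]["condition"])
```

This is why the assessment has to consider not just the dataset in isolation, but what it could reveal when combined with other data that is plausibly available.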