The writer is spot on
"Why does consent even need to be assumed?"
Because the government knows full well that express consent would rarely be given, so it uses the oxymoron of "assumed consent".
Newsflash: if it's assumed, it isn't consent.
Welcome to the latest Register Debate in which writers and experts go head to head on technology topics, and you – the reader – choose the winning argument. The format is simple: we propose a motion, the arguments for the motion will run this Monday and Wednesday, and the arguments against on Tuesday and Thursday. During the …
Indeed. The term 'assumed consent' is a reflection of the dishonesty at play here: in honest terms the question is to what (if any) extent patient data can be used *without consent* e.g. to derive aggregated data for research; I'm open to this as long as the aggregated data is derived in a way which precludes any individual's data leaking out, but the discussion as to how to go about it ought to be on an honest basis.
"Newsflash: if it's assumed, it isn't consent."
Not necessarily. Assumed consent is an old idea which exists because there are circumstances where explicit consent cannot be obtained. Medical care and emergency response, for example, can often involve situations where a person is not capable of consenting to anything, but can usually be assumed to want help. Even amateur first aiders enjoy legal protections based on the assumption that a person they try to help would have consented to it if able.
The problem is not with assumed consent as a concept; the problem here is taking a concept that exists for situations in which consent cannot be obtained and using it in situations where it's just not convenient to ask for it - either because it's more effort to ask everyone individually or, for the more cynical among us, because the consent you're assuming wouldn't actually be given if you did bother asking.
So while I agree with the overall sentiment, I think it's important not to be too general. There's nothing wrong with assuming consent in principle. What you need to worry about is whether it's appropriate to apply that principle in a specific situation.
Anyone working with datasets knows that anonymised data can easily be de-anonymised. The higher the level of detail in a dataset, the easier it is to find unique points that can trace back to individuals.
I can accept that certain treatments will vary and have different effects depending on the approximate age, sex* and race of the patient, together with other existing health conditions. I simply do not believe that there are such large differences between people that treatments need to be targeted more individually than that.
*I advisedly use 'sex', of which there are only 2 as opposed to 'gender', since I highly doubt that gender has any relevance to different outcomes of treatments / medications etc
>Anyone working with datasets knows that anonymised data can easily be de-anonymised. The higher the level of detail in a dataset, the easier it is to find unique points that can trace back to individuals.
This. And the fact that datasets can be combined and once they are, what appeared "securely" anonymous in only one of them may not remain so after the data is combined.
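To make that concrete, here's a toy sketch of a linkage attack - every name and record below is invented for illustration. Each dataset looks harmless on its own; joined on the shared quasi-identifiers (postcode, birth year, sex), the "anonymous" diagnoses come back attached to names:

```python
# "Anonymised" medical release: names removed, but detail kept.
medical = [
    {"postcode": "SW1A", "birth_year": 1960, "sex": "F", "diagnosis": "diabetes"},
    {"postcode": "SW1A", "birth_year": 1985, "sex": "M", "diagnosis": "asthma"},
    {"postcode": "M1",   "birth_year": 1985, "sex": "M", "diagnosis": "depression"},
]

# A public dataset (think electoral roll) with the same attributes plus names.
voters = [
    {"name": "Alice", "postcode": "SW1A", "birth_year": 1960, "sex": "F"},
    {"name": "Bob",   "postcode": "SW1A", "birth_year": 1985, "sex": "M"},
    {"name": "Carol", "postcode": "M1",   "birth_year": 1985, "sex": "M"},
]

KEY = ("postcode", "birth_year", "sex")

def link(released, public):
    """Join the two datasets on the quasi-identifier key."""
    index = {tuple(v[k] for k in KEY): v["name"] for v in public}
    return {index[tuple(r[k] for k in KEY)]: r["diagnosis"]
            for r in released if tuple(r[k] for k in KEY) in index}

# Every row is unique on (postcode, birth_year, sex), so all three
# diagnoses are re-identified.
print(link(medical, voters))
```

The higher the level of detail, the more likely each quasi-identifier combination is unique, which is exactly what makes the join succeed.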
I respect the use of the * in this case, but it's tricky. While it seems you wanted to make it clear that you are trying to separate the genetic from the social and political constructs, even that isn't so easy.
XXY Trisomy is genetically a thing, and has real biological consequences, so "only 2" doesn't track.
There aren't a ton of survivable sex chromosome duplications/deletions, but there are more than two viable human sex gene combinations. While a much smaller population, they deserve equal access to health care, including in medical research. Preserving their specific genetics in the studies, as opposed to treating anyone with one (or more) Y chromosomes/SRY genes as male with a binary value and everyone else as female, may serve them better in the long run.
Klinefelter's (XXY) is about 0.2%, while XYY is about 0.1% of live (viable) births. Trisomy 21 (Down's) is comparable. SRY being on an X chromosome is, I think, far rarer. The outcome of these types of population analyses, I think, is that they'd have benefit mostly specific to the bulk of the other genetics, and to a lesser (but comparable) extent to SRY status (sex). Trisomy-specific effects would have orders of magnitude fewer statistics, so those patients would be out of luck regarding enough numbers to get outcomes that help them specifically, but it's worth looking for large effects.
Thanks for the clarification on the *, I stand corrected, and yes in these cases it certainly would be a useful data point for researchers. Although of course the very rarity of these conditions makes anonymisation trickier.
It really is a tightrope walk to find the right balance, even more so because everybody is likely to draw the line of what they find acceptable somewhere different from most other people. Which is exactly why consent has to be given, not implied.
This is the kind of situation that shows why research using datasets drawn from the 'personal' details of human beings should be put on a public register listing all the organisations claiming access to that data; without such registration and public announcement they should be deemed to be acting illegally and forfeit any right to access 'personal' data for ten years (or some such punishment).
There should be no excuse or justification for doing any of this 'behind closed doors'. After all: nothing to hide, nothing to fear.
But before any of that: consent, clarity of usage, legal responsibility for consequences of usage, and 'opt in' being the legal default.
Assumed consent is only achievable if you have trust. Governments and society rarely command it at a sufficient level for this to work.
The bigger point here though, I think, is that assumed consent is a short term measure anyway.
The way data is handled is going to change radically over the next 5-10yrs.
I found this article fascinating: https://www.theregister.com/2021/10/04/column_data_privacy/
It points the way to the future of consent.
Trust is a major issue here and many governments of differing hues and nationalities have repeatedly abused the trust of their people. The juxtaposition of the opening of this debate with the reporting of the warning from England's Data Guardian on wider police access to NHS data was incredible [almost smacks of editorial mischief].
The repeated violations of people's data privacy are compounded by a lack of meaningful "holding to account" of offenders. Too often it seems that it is a case of publishing a "privacy is our highest priority" statement before returning to business as usual.
The medical business does have a long-established ethics infrastructure, but it is not perfect and cannot ensure absolute integrity - individuals/companies can violate ethical rules as easily as they violate other laws. What sanctions there are can punish those found guilty - but cannot make good all the harm that has been done.
That should mean we should carefully consider the harm that could be done by sharing data vs the benefits that can be gained, and who is going to suffer that harm vs who is going to benefit. It will be a trade-off between individual rights and societal benefits - but it needs to be an equitable trade-off. It would be interesting to hear from various ethics bodies on the subject of ethical sharing of medical data.
Information and Data Technology are rapidly changing the privacy landscape, some consider that personal privacy will be totally lost. Many [the majority?] seem either ignorant or blasé about it.
I don't think medical privacy should be spearheading this change, but it seems perverse to deny potential benefits to society by holding onto a concept of privacy that might no longer exist. How many, on receiving bad news from their GP, would be Googling for information - and then who has what information?
I don't welcome it, but I am not sure the medical profession, researchers or even pharmaceutical companies are the worst here [individual cases excepted].
This is really about cultural sensibilities and priorities.
Human beings have had to cope with the reality of uncertainty and mortality for thousands of years.
The fact that we have the technical ability to do 'x' neither implies that 'x' is either necessary or desirable.
Very often 'x' is simply about making someone wealthy. Benefits to third parties are merely symptoms of achieving that primary aim.
We end up having to choose what really matters. So let's wheel out that old Ben Franklin quote about those who would sacrifice freedom for the sake of security deserving neither.
The 'government' does not 'own' me. I am not their 'possession', nor am I an 'object'.
But if 'society' decides that government/corporate interests take priority over the personhood and integrity of a human being we all get to live with the dystopian consequences.
An insatiable big pharma industry will corrupt any government to get access to data.
The problem is that big pharma wants patients to be dependent on their drugs; it is in their interest to have people take as many expensive meds as they can possibly fit in their mouths every few hours.
They don't have any incentives to create a cure.
It's not just big pharma - many industries shift to subscription models where you have to pay for the product every month in order to use it.
Big pharma wants you to take their pills in order to live.
They need this data to see what demographics they can exploit and which illnesses could yield the most profit.
There are loads of preventable diseases we know about already that we do little to remedy. Latest forecast is for 10% of the UK to be diabetic by 2030: the solution to that problem is not medical, it's diet and exercise. Life expectancy is now falling in poorer parts of the UK with a quoted 27 years difference now between a resident of Kensington and a resident of Blackpool. The solution to that problem is not medical, it's socio-economic.
We already have the knowledge to improve the health of, literally, millions of people, but we choose not to. We have a long way to go before we exhaust the improvements we could make now.
Please DO NOT change the headline used for each part of the debate because that just adds confusion, the more so as the original subject is not visible in this post until after you've voted For or Against.
Ideally the first three lines on each of the four propositions would be:
PROPOSITION BEING DEBATED - This exact same wording appears in all four parts
Headline for the next debater's argument
and the main proposition should be repeated at the top of the 'vote now' box.
I agree with most of these, but they are improving. This time, they have a clear statement about what we're debating:
This week’s motion is: Assumed consent is the right approach for sharing healthcare patients’ data, beyond their direct care. Or to put it another way: patient records should be shared with medical researchers on an opt-out basis.
Which helps clarify what for/against mean. Last time, the motion was vague and the debaters didn't really help.
There's also something to be said for debating tactics where the debaters respond to one another after their statements. I realize that's harder to do in a written medium, but I think that would help. The summary article posted once our votes have been counted goes through our comments for illustrative statements, and they could have the debaters respond to the more common of those and to points made by the opponents. That means the debaters actually have to acknowledge the deficits in their arguments rather than just taking one aspect that works for them and ignoring the rest.
"Claims that prior informed consent is just “too difficult” or “too burdensome” self-interestedly disregard people’s lawful rights to dissent or opt out."
Entirely correct. This is another example of a clearly prevalent mind set that advocates abandoning data subject rights in general in order to support (in the words of a recent DCMS request for comments) 'driving growth and unlocking innovation'. It sits very well alongside the current government proposal to repeal Article 22 of the GDPR, which provides for a data subject to require explanation and review of entirely automated decisions that have a material effect on their rights or freedoms. The result would be "computer says no" enshrined in law, with no route for redress.
It's become perfectly clear that 'digital progress' is more important than the people it's supposed to serve, quite possibly because it's a big bucks business that promises to make a few very rich people even richer. So it's being rushed ahead regardless of possible consequences. However, as I said in my submission to the RFC referred to above "Where lives and livelihoods are at stake, even in the name of progress the public can not legitimately be considered an involuntary test bed for systems development".
While I largely agree with the arguments put here, I take issue with the phrase "mutant algorithm". It's meaningless rhetoric to use the word mutant there, and it originated from an attempt last year by the UK government to put the blame for their decisions on some "algorithm" as if it was beyond their control or that of the programmers who wrote it. The algorithm in question was not some uninterrogable machine learning model, but quite simply a choice to take teachers of small classes (private schools) at their word for predicted grades while thresholding results for large classes (state schools) at those of previous years. Talking about a mutant algorithm suggested some unforeseeable and incomprehensible consequence of computer programming beyond the government's control, when these were actually the entirely predictable consequences of fairly simple rules.
The phrases we unquestioningly adopt shape our thinking, and this is one to be rejected. Whatever the algorithm in question, it hasn't crawled out of the sea in some 50's B movie.
Health-related personal data is some of the most sensitive personal data there is and, in the majority of cases, unlike a leaked phone number or bank account details, it is data that cannot be changed.
Some of the risks to personal data include:
- anonymisation of data is hard to perform correctly, especially in a health context where, for example, certain medical conditions might be 'rare' enough to defeat any anonymisation attempt. Also, some (much?) medical research cannot work with anonymous data and instead requires pseudonymised data. So you are relying on the competence and willingness of organisations to anonymise or pseudonymise personal data correctly - especially in the case of pseudonymisation, where the org in question may not understand the 'trackability' of particular items of personal data and so leave them present/unaltered in any 'pseudonymised' data they create.
- even if organisations implement proper data security (which is in and of itself questionable) there still remains the risk of data breaches
- once personal data is shared there is a loss of control over that data (at least by the data subject, if not by other orgs) from that point onwards. If the recipient org(s) later decide to change what they do with the personal data then it's realistically too late (for the individual) to do anything about it. The same applies in the case of anonymised data (which is *not* personal data): if, for the sake of argument, I don't agree with animal testing, and an org receiving my anonymised data stated at the time that it does not test on animals, I may be happy for my data to be used to create their anonymous dataset - but if that org later decides to start animal testing, there is nothing I can do. Privacy notices and other policy documents reflect a particular point in time and say nothing about future intentions.
- inadequate enforcement of data protection law which means that many organisations do not take the laws seriously (or even consider them at all) as the risk of being "caught" is low, the risk of enforcement action is lower, and the risk of a substantial fine or action taken against individuals is even lower.
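The first risk in the list above can be made concrete with the standard k-anonymity measure: for each combination of quasi-identifiers, count how many records share it; any group of size 1 is unique and hence re-identifiable. A minimal sketch, with invented records:

```python
from collections import Counter

# Quasi-identifier tuples (postcode area, birth decade, sex) from a
# hypothetical "anonymised" release. Rare conditions tend to produce
# exactly the kind of singleton group shown in the last row.
records = [
    ("SW1A", "1980s", "M"),
    ("SW1A", "1980s", "M"),
    ("SW1A", "1980s", "M"),
    ("M1",   "1950s", "F"),   # only one such person: unique, so k = 1
]

def k_anonymity(rows):
    """Smallest equivalence-class size over the quasi-identifiers."""
    return min(Counter(rows).values())

# k = 1 means the dataset is not even 2-anonymous: at least one
# record can be singled out despite the names being stripped.
print(k_anonymity(records))
```

This is why "rare enough to defeat anonymisation" is not hand-waving: rarity directly drives the minimum group size towards 1.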
So, as someone who has worked in 'enterprise'/large-scale IT for his whole career, "I've seen things you people wouldn't believe" (https://en.wikipedia.org/wiki/Tears_in_rain_monologue), and I am reluctant for my most sensitive personal data to be at risk of either accidental or purposeful misuse - a risk that increases greatly as more organisations have access to it.
With this in mind, I do not agree with opt-out scenarios for health data - I have already suffered unlawful processing of my health data in the past because of one: healthcare systems were launched where, allegedly, letters were sent out notifying people of a cut-off date for opting out. I never received the letters and so was unable to opt out in time to prevent my health data being shared against my wishes (and in breach of data protection law).
I keep telling this story because, well, it's funny and I'm the butt of the joke.
I asked for a copy of my health records, and my doctor's admin staff were too busy at the time. I'd anticipated that and gave them an encrypted email address to forward the records to. They decided that wasn't secure enough, so they printed them out and posted them to a neighbour, who got to read them before me.
The inaccuracies in my medical records ranged from curious to scary. The most ridiculous was I'd been treated in hospital after my cat attacked me - actually he attacked another cat but I got in the way. 100 bite wounds in my hand, ripped apart by his claws. The daft senior nurse assumed I was a heroin addict and wanted to have me arrested. That's in the notes. I look unkempt but I've never injected, surely any nurse should know junkies just don't repeatedly stab their own hand and then walk into hospital?
Anyhow, most of the stuff in there wasn't that horrendously wrong, but still wrong, and it wouldn't help any medical study. It's not just that I don't trust the NHS to give my data away; I'd prefer they didn't have it.
If you are like me you have an inordinate respect for doctors and nurses and trust them to the hilt - but take a morning to read your medical records. The mistakes they make are - well, we all know examples - my mum was sent home from her first heart attack diagnosed as a panic attack.
I guess you all are happy giving up a significant tool healthcare organizations are using to improve treatment protocols and understanding the progression of diseases.
Today's computerized health records (anonymized) provide MASSIVE databases of diagnoses/treatments/results. Large organizations mine these to find the most effective treatments, to discover what symptoms lead to what diagnoses, etc.
As with any political poll, opting in/opting out skews the results. People who opt out will have a different world view than those who opt in, and based on that world view may have different lifestyles, leading to different outcomes.
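That skew is easy to demonstrate with a toy simulation - all the rates below are invented purely for illustration. Suppose smokers both recover less often and opt out of data sharing more often; the shared data then overstates the true recovery rate:

```python
import random

random.seed(0)

# Invented population: smoking status drives both the outcome we
# measure (recovery) and the propensity to opt out of data sharing.
population = []
for _ in range(100_000):
    smoker = random.random() < 0.3                       # 30% smoke
    recovered = random.random() < (0.6 if smoker else 0.9)
    opted_out = random.random() < (0.5 if smoker else 0.1)
    population.append((recovered, opted_out))

def recovery_rate(rows):
    return sum(recovered for recovered, _ in rows) / len(rows)

everyone = recovery_rate(population)
shared = recovery_rate([p for p in population if not p[1]])

# The shared (post-opt-out) data is biased upward, because the
# worse-off group selectively removed itself from the sample.
print(f"true recovery rate:  {everyone:.3f}")
print(f"rate in shared data: {shared:.3f}")
```

The point stands either way: any regime in which participation correlates with the thing being measured needs bias correction, whether it's opt-in or opt-out.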
Here's a recent paper where COVID vaccine effectiveness is being assessed using these very tools.
You are correct, but researchers already have to account for these effects, and will still need to under both an opt-in and opt-out regime. For example people with limited or no access to traditional health care won't be fairly represented in the aggregate data even under a "no-option to opt out" regime.
There is a fair concern that the total amount of information researchers would lose all access to (due to the expected low rate of acceptance under an opt-in regime) would seriously impact the quality of the science.
I think a better answer to that problem is NOT to treat this as a problem that one binary choice solves, and to handle it in steps instead of implementing a blanket explicit-consent-or-nothing edict from the get-go: limit use of non-anonymized data in general initially, and allow deeper access to the data in particular cases.
I think that trust and transparency of access are more important to building a workable system than mandating that every person explicitly opt in to every use of their data. Others will disagree, but I think more of them could live with a system where academic studies from research organizations with third-party-approved human-subjects review had access that people could opt out of, while drug and insurance companies had to get explicit opt-in consent to use or transfer data derived from their records. In all cases people deserve to be able to review the information and have it corrected for accuracy, and to be able to review who else has looked at it and why. This database shouldn't be a black box the NHS/NIH/FDA operates; it should be a system they have transparent, managed access to.
I am not happy to give it up. I am quite happy to keep it alive and to feed it with my data. I only ask that my data and those of others be treated with respect, which isn't happening. Respect includes telling the people what is happening to their data and what it is. Given the risks the loss of the data can have, people have several good reasons to want to keep it sufficiently anonymous, and some people may have sufficient reasons to deny its inclusion.
Those last sentences may sound a lot like opt-out, and I think most people should be happy including their data in a system which respects those aspects. However, the systems as they exist do not, and I have never seen an opt-out mechanism that nonetheless maintains rigorous methods of informing the subjects what's happening and allowing them the control needed to opt out. If they can create one and prove its effectiveness, maybe it can be considered, but until that point, opt-in is the only way to ensure sufficient information is provided. Opt-out incentivizes hiding the information and the opt-out path from the subjects, because if they don't find out, they don't stop you. Opt-in incentivizes information and control, because the more people who agree to what you're doing, the more valuable data you get.
This. A thousand times this.
Any "Opt-out" system strongly encourages making opting out as difficult as possible, to the point of hiding that the data usage exists at all.
The recent aborted attempt by UKGOV is an excellent example:
- Hardly anyone knew it was happening at all. You can't opt out of something you don't even know exists.
- The actual system and purposes have still not been described.
- Opting out required multiple forms.
- Opting out had to be done at extremely short notice, and could never be done later.
An "Opt-In" system strongly encourages wide publication of the benefits, and making it as easy as possible.
For example, organ donation.
Everyone knows it exists, everyone knows the benefits, and there's multiple really easy ways to say yes - a card you can ask for, tattoos you can have, a tickybox when renewing a driver's license photocard etc.
And you can change your mind at any time - in either direction. (And after death your relatives can change it too, which is sometimes a problem. And Wales broke it, of course. Nothing's perfect.)
Not many people are arguing to 'give up the tool'.
They ARE arguing to use 'the tool' properly, with integrity, and by protecting highly personal information against the venality, mendacity and incompetence of self-serving and irresponsible corporate and government interests.
I do think that individuals should have the final say in how their data is used. I feel that the article doesn't make the strongest case for why and how obtaining consent is both possible and important. I also think it bangs the "FREEEEDOOOOOM" drum a bit hard, like so many other authors writing from the heart on privacy issues. The trap there is that while I may be sympathetic to the cause, the appeal to emotion has the opposite effect on a reader who starts out from the opposing position.
It's not a super effective debate strategy, and I feel both authors succumb to it to some degree, but this article also fails to delve very deep into the problems that further limiting access to patient data will cause. It also muddies how it treats purely academic access versus the commercial interests that pharma companies might have (or insurance companies in non-NHS jurisdictions), where many people would draw a distinction between the two.
In the paragraph on "streamline access" the author conflates consent to access with transparency of access, and fails to make any case why all types of access must be opt-in, instead stating correctly (but less relevantly in this context) that the government and NHS have handled it badly. If they have failed to execute the safeguards and transparency that were promised, then the article only succeeds in making the obvious case that the government should be made to do the things it promised. Which it most definitely should, but that isn't the topic of the debate.
I think a stronger case could have been made to rebut the argument that an opt-out regime is sufficient. That's not my position, as I don't think ALL requests for data need to be treated as explicit opt-in with fully informed consent. But a stronger case for it could be made, and in my opinion should be if such a regime were to be considered.
Talking about the US, which is not directly applicable to this article - we have "The Health Insurance Portability and Accountability Act of 1996 (HIPAA) required the Secretary of the U.S. Department of Health and Human Services (HHS) to develop regulations protecting the privacy and security of certain health information.".
When my mother was in and out of hospital almost a decade ago, I had to go through hundreds of recorded phone sales pitches and scams aimed at vulnerable elders on her phone, deleting them one at a time to find messages from actual friends and important messages from doctors' offices, etc. No surprise: it is almost impossible to enforce HIPAA, and one of the most lucrative uses of patient information is creating lists to sell to scammers.
I expect if HIPAA were revoked, we would see more direct targeted advertising from larger companies: "Hello, I am a medical professional and I see you have been diagnosed with XXX and YYY. We are concerned for your health and fortunately our AAA product is especially suited to your health and life quality needs." Showing familiarity with intimate aspects of a person's health issues is a perfect opening for a Trojan horse approach, and it will be very, very lucrative.
My data is my data. Anything collected by medical professionals in the course of my care is not to be used for research, marketing, insurance, or any other form of profiteering.
I haven't ever opted out of the organ donation scheme because, despite my title above, I'm in favour of it. Sadly, having cancer is a great big red flag on that one, anyhow.
I work in the NHS in this exact field, and I'm responsible for handing over patient data to non-profit organisations run by doctors to provide datasets - always anonymised - for future studies of rare diseases.
We work on an assumed consent model for some things (approved by a Caldicott Guardian) and explicit consent for others.
If you have a complex and rare disease you should assume your data has been harvested for storage in a research registry.
This discussion is tricky, because the harm you fear to incur by sharing your personal health data (e.g., denied employment) is highly personal. But the benefit is only meaningful on a society wide scale.
There is enormous value in population-wide medical data collection: How can we assess the effectiveness of lifestyle choices and medical procedures on health if we don't have population-wide data? Without good data, you can't tell whether a homeopathic remedy works better than chloroquine or a vaccine. Individuals suffer and die randomly, but society-wide you can identify correlations and deduce causation. After collecting the low-hanging fruits of medical progress (hygiene, antibiotics, ...) and the corresponding increase in population-wide health and life expectancy, future progress requires tackling medical problems that are statistically 'rare' (cancer, chronic disease) and where the outcome of single cases won't teach us how to tackle the larger problem.
In an ideal world, society would collect the relevant data and use it only to tackle society's health issues, maybe in an academic setting with clear rules of conduct. In the real world, certain entities see fit to use any available data for personal/corporate/political profit. We live in this non-ideal world, and the resulting lack of trust explains the overwhelming majority opinion in this forum. Can we move closer to that ideal-world scenario, or should we just accept that our personal liberty comes with a significant cost in public health and well-being?
"None of your business"
I disagree - there is a legitimate medical interest in that it can inform the population about the consequences and benefits of different lifestyle choices; further it would allow better medical treatment of patients that present by advising lifestyle changes that would improve their outcome. It would also allow the medical profession to forward plan by predicting future illness trends based on changing lifestyle patterns.
A quid pro quo of sharing in the benefits of a society should be that you are willing to support that society.
Of course there should be limits on what society can demand - but using personal data in an anonymized and aggregated manner to inform society and improve medical outcomes seems reasonable.
Using your specific data to directly bombard you with nanny-state interventions would seem less justified; and to use it to directly intervene in your lifestyle choices would be even less justified.
I suggest you all read (although everybody will tl;dr this) the annual report from the SCAL Kaiser Permanente Medical research department on what they accomplished in 2020 using this kind of data.
SCAL KP covers 4-5 million members... what they were able to do with this data is breathtaking; but sure, your (mainly unfounded) privacy fears trump my ability to receive better and more effective healthcare based on things medical researchers are able to discover by mining data on symptoms, treatments, and outcomes. This rant is focused on the 'STUDIES' part of what SCAL KP DOR does, based on data collected over years from millions of members, although that data also hints at what 'TRIALS' may be fruitful.
"In 2020 Kaiser Permanente Southern California researchers and clinicians responded to the COVID-19 pandemic with dozens of studies and clinical trials that helped assess risk, build decision support tools, and assess the safety and effectiveness of vaccines and treatments. In the midst of all this, vital research continued on many other fronts. The Department of Research & Evaluation’s 2020 Annual Report highlights some of that important research, an overview of the research program, profiles of our investigators, and a list of all studies published in 2020. "
Here's a snippet from one of the dozens of trials and studies performed in 2020, one that is only possible with a COMPLETE set of data. Note the phrase below: "examined KPSC member data from the first few months of the pandemic":
"Mortality risk factors for patients under 50
Ahmed Dehal, MD, MPH, a surgical oncologist at the Kaiser Permanente Panorama City Medical Center, noticed that while older patients had a higher risk of dying from COVID-19, the disease could also be deadly for younger people. He decided to determine who among young people would be most at risk.
In an initial investigation, Dr. Dehal and his colleagues examined KPSC member data from the first few months of the pandemic. They found that COVID-19 patients under age 50 had a significantly higher risk of dying from the disease if they were of nonwhite racial or ethnic backgrounds, had obesity, and had underlying illnesses, such as diabetes.
“When it comes to younger patients, you have to dig a little deeper to see who is at risk,” Dr. Dehal said. “We identified this high-risk subgroup that should be considered in future planning for resource allocation, such as vaccine distribution.”
The team continued to analyze data from additional patients before publishing the results.
“Our integrated system with its huge number of patients gives us the opportunity to conduct a strong analysis with follow-up over time,” Dr. Dehal said."
You can't do meaningful medical research without complete data. If people are willing to benefit from the healthcare system they depend on, the LEAST they can do is allow their data to be used to benefit the community as a whole, so that better healthcare processes and treatments can be developed.
This whole thing is an utter mess. There are two programmes:
1: To allow GP patient data to be shared with the rest of the NHS.
2: To allow NHS "aggregated, anonymised" data to be shared with third parties.
1: is a no-brainer - I have personally suffered because my local hospital didn't know what the GP was prescribing.
2: is harder. Given the Cambridge/DeepMind problem a few years ago, one might say "no way". But supposing the data in question was about vaccine responses and the third party was AstraZeneca?
The PROBLEM is that the two are munged into one by the Health Department, and the opt-outs are simply labelled "type 1" and "type 2" with no indication of what that means.
I have heard a comment: "they tried this a few years back and got bounced, so now they are trying again when everyone is distracted by the pandemic".
Not wishing to be political, what we see here looks like an attempt to foist a questionable idea (2) on the public by hiding behind a really good idea (1), and so make lots of money for the Department of Health which will spend it on - err - the NHS??
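On programme 2, "aggregated, anonymised" can mean many things, and whether it actually precludes re-identification depends on how the aggregation is done. A common safeguard is small-cell suppression: publish only group counts, and drop any cell whose count falls below a threshold so no near-unique individual shows through. The sketch below is purely illustrative; the field names and threshold are my assumptions, not any NHS specification.

```python
# Minimal sketch of small-cell suppression for aggregate data releases.
# Field names (age_band, region, condition) and the threshold K are
# illustrative assumptions, not an actual NHS policy.
from collections import Counter

K = 5  # assumed minimum cell size before a count may be published

def aggregate(records):
    """Count patients per (age_band, region, condition), suppressing small cells."""
    counts = Counter(
        (r["age_band"], r["region"], r["condition"]) for r in records
    )
    return {cell: n for cell, n in counts.items() if n >= K}

records = (
    [{"age_band": "40-49", "region": "SW", "condition": "diabetes"}] * 7
    + [{"age_band": "20-29", "region": "NE", "condition": "rare_x"}] * 2
)
released = aggregate(records)
# The 7-patient cell survives; the 2-patient cell is suppressed entirely.
```

Even this is only a first line of defence (differencing attacks across releases can still leak counts), which is why the "how" of anonymisation deserves the honest debate the earlier commenter asked for.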
1. Propose a motion.
2. Argue against the motion.
3. Put in a box for me to vote For or Against.
4. I agree with the "against" argument, so I click "AGAINST".
5. Yay! Our vultures deem this a vote against the against argument, an unspoken double negative. You [REDACTED] vultures, grr! You need a training course on UI design. For an "against" motion, you do need to make it clear what the buttons are for and what they are against, yesss?
Perhaps think of it as "implied licence" - so just like in [UK] trespass law there is an implied licence for people to come onto your property to conduct lawful business with you. (You can revoke that licence by asking them to leave, or even by posting signs at the entrance to your property or locking your gate.)
The implied licence aspect of trespass is intended to help make society work - and balance that with the individual's right to privacy in their own home.
Here, the debate is [should be] about whether a similar thing ought to apply to particular uses of private data.
My health data has no monetary value to me. The only value it has to me is if someone analyzes it and comes up with conclusions that will benefit me or my family members. This is not going to be done by my GP or the wider NHS, so from my personal perspective I'm happy for them (as in any pertinent organization) to have my health data. Hopefully, if they discover something significant, there is at least some small chance of linking it back to me, though alas the anonymization would probably destroy that chance.
Whilst the majority of people would likely assign similar values to these things as I do, I accept that there will be some people who may have medical conditions they would prefer to conceal due to a perception of attached stigma. Mental illness could fall into this category, even though it should not. Also, perhaps, people suffering from something rather embarrassing like "one of those irritating pains in the bottom that just when you think you're rid of it, it pops back up again" https://www.dailymail.co.uk/news/article-4874748/MAC-Tony-Blair-speaking-again.html
As such I believe assumed consent, with an easy way to opt out, is the right policy, as it benefits society most.
I would have no issue with my health records being shared for research purposes but I do have issues with subsequent actions.
Taking email as an example: "we will only share your email details with our partners", which is (debatably) fair enough, but their partners share with their own partners & so on, until "Single Russian ladies are waiting to meet you" spam turns up in an email address you only use for confidential correspondence.
If you want to share my data then please publish a list of the "Reputable Companies" that you are sharing with and provide me with a method of looking up who my data has been shared with & a method of removing myself from some or all of the aforementioned companies.
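What's being asked for here is essentially a transparency register: a published recipient list, a per-person lookup of who has received their data, and a per-recipient removal mechanism. A minimal sketch of that idea, with all names and structures invented for illustration (no such NHS API exists or is implied):

```python
# Hypothetical sketch of the sharing register the comment describes.
# Class and method names are invented; this is an illustration of the
# mechanism, not a description of any real system.
class SharingRegister:
    def __init__(self, approved):
        self.approved = set(approved)   # the published "Reputable Companies" list
        self.shared = {}                # patient_id -> set of recipients
        self.opt_outs = {}              # patient_id -> recipients they've blocked

    def record_share(self, patient_id, recipient):
        """Log a share; refuse unlisted recipients and honoured opt-outs."""
        if recipient not in self.approved:
            raise ValueError(f"{recipient} is not on the published list")
        if recipient in self.opt_outs.get(patient_id, set()):
            raise PermissionError(f"{patient_id} has opted out of {recipient}")
        self.shared.setdefault(patient_id, set()).add(recipient)

    def who_has_my_data(self, patient_id):
        """The lookup the commenter asks for: who holds this patient's data."""
        return sorted(self.shared.get(patient_id, set()))

    def opt_out(self, patient_id, recipient):
        """Remove the patient from a recipient and block future shares to it."""
        self.opt_outs.setdefault(patient_id, set()).add(recipient)
        self.shared.get(patient_id, set()).discard(recipient)

reg = SharingRegister(approved={"PharmaCo", "UniResearchLab"})
reg.record_share("patient-42", "PharmaCo")
reg.opt_out("patient-42", "UniResearchLab")  # blocks future shares too
```

The key design point, matching the comment, is that opt-out is per-recipient rather than all-or-nothing, and that the approved list is published up front so "partners of partners" can't appear unannounced.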