Google accused of 'trust demolition' over health app

(bbc.com)

261 points | by 0xmohit 1988 days ago

20 comments

  • crazygringo 1988 days ago
    The article doesn't seem to have any substance; it's just a write-up of one person's opinionated tweet.

    The team is joining DeepMind/Alphabet, and it appears DeepMind promised that health data would never be joined to your Google account.

    And there's zero evidence that they are joined or that they would be joined. As far as I can tell, fearmongering like this would suggest that Alphabet couldn't ever deal with health care info, which feels silly... There are plenty of very strong laws enforcing health care privacy (e.g. HIPAA in the US), so this really just feels like FUD.

    • AlexandrB 1988 days ago
      The specific language was:

      > [data] "will never be linked or associated with Google accounts, products or services"

      Streams is plainly a Google product and is probably using deep learning models trained on this data. Moreover, patients were never given the option to opt out of this data collection at all.

      • crazygringo 1988 days ago
        I can't tell what you're trying to say...

        What is trained on what data? The point is that health care data will never be linked to your "Google account", e.g. your Gmail/YouTube/etc. or other Google services -- AdSense can't show you ads based on a hospital visit.

        DeepMind is part of Alphabet, like Google is part of Alphabet. DeepMind can do its thing without ever touching Google accounts or Google data.

        • kodablah 1988 days ago
          >>> "will never be linked or associated with Google [...] products [...]"

          >> is [...] a Google product

          > I can't tell what you're trying to say...

          Surely it's clear: they said it wouldn't be associated with Google products, and it became a Google product. You can bring up nuance or twist words or whatever, but it's clear that at the least they misled.

          • andybak 1988 days ago
            > [data] "will never be linked or associated with Google accounts, products or services"

            was true at the time, but surely what people care about is data flowing between the medical app and existing Google services? i.e. the wording could be revised and still keep the original intended meaning.

            Whether you think the company separation was critical to trust is a rather different question to whether they have broken the intent of their original promise. If you don't trust Google then you presumably also had suspicions about the previous arrangement.

            • clhodapp 1988 days ago
              Since "will never" describes the future, the statement was never true if the event happened (which it did) or ever end up happening, hence the PR-speak "we have no plans".
              • andybak 1988 days ago
                See my reply below.
            • eiaoa 1988 days ago
              >> [data] "will never be linked or associated with Google accounts, products or services"

              > was true at the time but surely what people care about is data flowing between the medical app and existing Google services?

              I would think people would care about the data flowing to any Google service. An understanding that it only applied to existing services like you're claiming would be meaningless. A hyperbolic example of this is Google/Deepmind would still be compliant if they came out with a brand new service, Google Privacy Destroyer 1.0, that puts all these people's private health data up for sale on the dark web.

              • andybak 1988 days ago
                I didn't mean that. At the time the medical company was separate from Google. Therefore the statement was intended to communicate the key point - we're not using your data for other stuff. This is the spirit of the declaration and the part that we should probably care most about.

                The statement unfortunately has a secondary implied promise which Google has broken. But my point is that if we don't trust Google to keep the data separate then the initial promise was meaningless. And if we do then not much has changed.

                If Google wanted to do bad things then they would have kept the companies separate to keep their promise superficially.

                The wording was poorly thought out but unless you think the reorganisation is going to materially change Google's behaviour then I don't see it as hugely significant.

        • danShumway 1988 days ago
          The problem is, even if Google is following the spirit of the policy, how do we know whatever policy they introduce now won't be changed in the future?

          So the new promise is "Patient data remains under our NHS partners' strict control, and all decisions about its use will continue to lie with them." Except I can think of at least two or three good reasons off the top of my head why Google could renege on that without breaking the spirit of their promise too. Maybe Google decides that the information will be more secure on their servers, and the original point was to make it more secure anyway, right?

          Yes, Google's original promise was kinda arbitrary and over-strict. But the way you build trust is you keep promises, even if they're arbitrary. When Google says they're not going to do something, they should not do it. Otherwise, nobody will trust them in the future when they say they're not going to do other things. Nobody wants to live in a world where we have to second guess what the intent is behind everything a company promises and are only able to hold them to that.

          If Google can't keep its promises, then it shouldn't have made any promises in the first place. It should have just given a milquetoast answer like "we care about privacy", which would have communicated exactly the same information without deceiving customers into thinking that there was an actual line in the sand somewhere.

          • Loughla 1988 days ago
            >Nobody wants to live in a world where we have to second guess what the intent is behind everything a company promises and are only able to hold them to that.

            How is this not the world we already live in?

            Every agreement, obligation, promise, or contract I've read from my major tech 'partners' professionally and personally has had a clause that they can change it at any time with no prior notification.

            • danShumway 1988 days ago
              You're right, that is crummy, and it is something that annoys me.

              That being said, at least they include the clause. If this promise had been phrased as, "we won't link with Google products, but we might in the future", I would honestly feel a bit more charitable towards them. It wasn't.

              DeepMind went out of its way to describe this not just as a temporary promise, but as a legal one.[0] In their words, "...data will never be connected to Google accounts or services, or used for any commercial purposes like advertising or insurance. Doing so would be completely impossible under our NHS contracts and the law".

              That is a full step beyond the crappy TOS that most companies use, and statements like that were given to ease consumer concerns and make it easier for DeepMind to form deep relationships with hospitals.

              One other thing to note outside of legal obligations or technicalities is that even if you don't think Google did anything different from the companies you're talking about, companies who do 180s on their TOS still lose trust. The trust loss is separate from the legality.

              I don't know if you can go to Google and say, "legally, you shouldn't be allowed to link your product," I'm not a lawyer. I do know that we as consumers and engineers should instinctively distrust any future promises that Google makes about privacy.

              If what they're doing is legal, I'm still going to call them out for being untrustworthy and dishonest. Their earlier statements are still lies.

              [0]: https://twitter.com/MalcolmMoore/status/1062394654702927873

        • mayankkaizen 1988 days ago
          "DeepMind can do its thing without ever touching Google accounts or Google data."

          Maybe it is the case at present. But I see zero probability that it will always be the case.

          • je42 1988 days ago
            Health products need to be designed with privacy as a core pillar. Just pumping everything into the cloud and figuring it out later was fine a couple of years ago. Thankfully the laws are catching up, and hopefully companies like Google will not only start to comply with them but actually follow their underlying spirit.
          • alehul 1988 days ago
            How can you say zero probability? I understand it's hyperbole, but let's make a fair estimate.

            Assuming hyperbole: why do you believe that, over time, there is a low probability "DeepMind can do its thing without ever touching Google accounts or Google data?" What factors will contribute to that?

        • PunchTornado 1988 days ago
          > [data] "will never be linked or associated with Google accounts, products or services"

          It doesn't say your Google account or product; it says Google products in general.

          It seems to me that they are breaking this promise.

      • DannyBee 1988 days ago
        Streams is a Deepmind product, moving to Google Health.

        It's not like it was an existing Google product that the data is being moved to.

        If Google Health was part of Alphabet but not Google, would you argue it was "plainly not a Google product"?

        Or is that still a Google product?

        (FWIW: I actually expect the article gets it wrong and that Google Health will be/is a separate LLC under Alphabet but not Google)

        • danShumway 1988 days ago
          > Streams is a Deepmind product, moving to Google Health.

          So now it's a Google product. And when they said [data] "will never be linked or associated with Google accounts, products or services", that was a lie. Seems very straightforward to me.

          > If Google Health was part of Alphabet but not Google, would you argue it was "plainly not a Google product"?

          It has Google in the name. I don't understand this claim. It's a Google product. From the official blog post that Streams released yesterday[0]:

          "We’re excited to announce that the team behind Streams—our mobile app that supports doctors and nurses to deliver faster, better care to patients—will be joining Google."

          They're not joining Alphabet; they're joining Google. I mean, I guess linking to Google is technically different than literally being consumed by Google?

          [0]: https://deepmind.com/blog/scaling-streams-google/

          • DannyBee 1988 days ago
            "It has Google in the name. I don't understand this claim."

            Just because something has Google in the name does not mean it is part of the same legal entity as the rest of Google.

            This is already the case in various situations.

            Your view seems to be "branding" is what makes it a Google product, and that seems wrong. Branding has no effect on anything material at all.

            If Google called it "Alphabet Chrome" but it reported into the Google entity, would it suddenly remove privacy concerns around Chrome? As I asked, and you never answered: would you suddenly no longer call it a Google product?

            What really matters is what entities it is part of and what walls/agreements/etc exist between the entities, etc.

            • danShumway 1988 days ago
              Okay, but: "We’re excited to announce that the team behind Streams—our mobile app that supports doctors and nurses to deliver faster, better care to patients—will be joining Google."

              They're joining Google. Representatives from DeepMind have said that they're joining Google[0]. Every source I can find says that they're joining Google. You're talking about a hypothetical that doesn't exist.

              If Streams was joining Alphabet instead of Google, that would probably change things a bit. If they announced they were joining Mozilla instead of Google, that would change things even more. But based on all of the information we have now from both Google and Streams, neither of those things are happening.

              Is your position that the blog post that Streams itself used to announce this about their own acquisition is wrong?

              [0]: https://twitter.com/Dominic1King/status/1062755561727578113

        • dekhn 1988 days ago
          Wait, are you saying "Google Health" is an independent company under Alphabet, sibling to Google, but with Google in the name?

          That seems implausible but would make sense.

          • DannyBee 1988 days ago
            Yes, I actually believe that is the case (or will be the case)

            I wouldn't see any advantage to making it part of the same legal entity, and a lot of disadvantages.

            Google does this in plenty of cases, actually.

            https://icis.corp.delaware.gov/ecorp/entitysearch/namesearch...

            Search for Google. A subset of the results are Google entities

            • dekhn 1988 days ago
              Oh, huh, I guess Google Fiber counts, and GV was called Google Ventures while being an Alphabet child for a few months.

              That's pretty misleading. I wonder if the news leaked before a name was picked?

    • Lapalata 1988 days ago
      Google is taking over everything and has the mindset of the most evil Big Brother you could imagine. And yet you hope they will just nicely make sure no correlation is done between you and your health data?

      I don't get how people still try to defend Google. When you see what Google is doing with our data, they should be prohibited from accessing any health information on us. Google has just proven itself to be one of the most data-hungry and unethical companies of the moment.

      • bun_at_work 1988 days ago
        And what exactly is Google doing with our data?

        And how does that apply in this context? Are you conflating Google and their parent: Alphabet?

        Also, comparing Google to Big Brother is disingenuous at best. Google isn't trying to control or limit what people think. They are merely trying to allow advertisers to target audiences precisely.

        I hate advertising, for sure. But the anti-Google fear-mongering and hate based on nothing substantial and leveraging an inaccurate view of what Google does is harmful for everyone involved.

        • danShumway 1988 days ago
          > Are you conflating Google and their parent: Alphabet?

          Alphabet isn't the result of a merger, or Google getting bought out by a different company. It's largely the same people and culture that Google was and is, just split out into a larger parent organization so they can oversee more products.

          If I change my name to Eric, that doesn't make me a different person.

        • math_and_stuff 1988 days ago
          > Google isn't trying to control or limit what people think.

          I think proactive censorship for authoritarian governments (e.g., China) falls into this category.

          • jpdus 1988 days ago
            With this argument, you should be more upset about Apple.

            Google is missing out on the soon-to-be biggest ad market of the world - at least partially due to ethical standards. Apple does earn a significant part of its revenues in China and has no problems with working with the govt there.

            (I am not affiliated in any way with either company - just don't understand all the hate against Google)

            • math_and_stuff 1988 days ago
              I am upset about Apple's appstore actions. I was responding to a claim about Google.
          • wafflesraccoon 1988 days ago
            Is that really Google's fault or China's for creating the policy? Google is forced to follow the local laws in the countries that it operates in.
            • KirinDave 1988 days ago
              It's true, but they're not actually forced to operate in China for any reason other than a desire to compete with unrestrained Chinese companies, and a desire to drive growth.

              So the relevant product groups and leadership are, at least to some degree, implicated in deciding to go along with rules they knew would be morally objectionable from the start.

              That said, Chinese folks are imo quite right to cry "hypocrisy" on the degree of moral outrage US citizens level at Western nations given how many ways western nations police dissidents and stifle anti-government speech.

            • Drakim 1988 days ago
              While this isn't something I disagree with in principle, I can't help but to feel uneasy about it. Should companies have helped Nazi Germany to identify Jews since that was the local law?

              If not, then clearly the principle isn't so solid after all. But I don't know what to replace it with.

              • bduerst 1988 days ago
                Even if Google didn't change search results, people who clicked on the censored website links still wouldn't be able to load the sites. You have to ask which is better - serving information in the framework provided or serving no information at all.
                • Drakim 1988 days ago
                  Yeah you are probably right about that.
              • jdmichal 1988 days ago
                This reminded me of a conversation chain I had a couple years ago. The main point is that civil disobedience is the purposeful breaking of law, but it is just that: you can break the law, but you need to do so prepared to face whatever the penalty is. I'm not sure what civil disobedience looks like at the corporate level... I can't think of any examples. In the end, I think it must be an individual decision with enough corporate authority behind it to make it stick... in which case, it falls under the same ideals.

                (I've removed contextual text that I think is irrelevant for this discussion.)

                https://news.ycombinator.com/item?id=11504953

                ... Civil disobedience is the refusal to obey laws, not the refusal to uphold them. People participating in civil disobedience do it with the understanding that they can (and should) be prosecuted for such. They do so to act as martyrs.

                https://news.ycombinator.com/item?id=11506428

                ...

                The bottom line is, you need to be really, really careful when you start arguing for "ethics" and "morality" as a basis for execution of law. For instance, to make a concrete example: It could be argued that based on the ethics and morality of the Nazis, that the mass murders committed under the Holocaust were in fact them morally disobeying those supra-national human rights laws. Who are you to say that the Nazi morality is wrong? You can't point to the agreed-upon supra-national human rights laws, because you are in fact arguing that law should be violated based on morality!

                In fact, one of the ways to view law is as an encoding of the morality of the society it covers. Sometimes laws, being fixed entities, and society, being ever changing, drift apart over time. Same as software drifts from the requirements of business if not kept up to date. It usually takes an example like this German one to point out the absurdity, and if the law really is no longer part of the society's morality, becomes fairly easy for lawmakers to fix. (As a reminder, this law being invoked is very old -- from when Germany was a monarchy and insulting dictator kings was morally a very serious crime!)

                • bun_at_work 1986 days ago
                  This is a bit late, but does civil disobedience for a corporation in modern America have to be a violation of the law? I'm wondering where Apple's refusal to help the FBI unlock the San Bernardino shooter's iPhone lies.
          • bduerst 1988 days ago
            Or "Right to be Forgotten" by the EU.
        • ErikAugust 1988 days ago
          "Are you conflating Google and their parent: Alphabet?"

          That is a valid conflation. Google became Alphabet in 2015; it began in 1998. The vast majority of its existence and brand persona has been as Google.

          • gowld 1988 days ago
            Brand Persona doesn't matter. How the data is used matters.
        • WalterGR 1988 days ago
          > Google isn't trying to control or limit what people think.

          The Web Spam team at Google is corrupt, such that a single person can affect rankings in ways that destroy or legitimize websites. And they do fully take advantage of this power.

          It’s literally impossible to say that Google doesn’t control or limit what people think, given that there’s zero transparency and zero accountability.

          • WalterGR 1987 days ago
            That should read, “It’s literally impossible to say that Google doesn’t try to control or limit...”

            (It’s too late for me to edit my comment.)

        • pacala 1988 days ago
          > Google isn't trying to control or limit what people think.

          Advertisement is precisely trying to control what people think. Specifically, influence their economic and political decisions.

          • bun_at_work 1988 days ago
            Google serves ads; they aren't (generally) the ones creating or doing the advertising. The only time they are advertising is when they are trying to sell a product, which is far less often than they are serving someone else's ads.
        • bzbarsky 1988 days ago
          > Google isn't trying to control or limit what people think.

          Google's efforts against "fake news" are exactly trying to do that.

          And just because one agrees with them in this one instance doesn't change the precedent.

          • gxigzigxigxi 1988 days ago
            Hrm. It is different. Google going against fake news is Google avoiding the promulgation of falsehoods. That is, in a sense, an effort to change people's beliefs. But in that sense, anything and everything that Google does or doesn't do would be considered an effort to change beliefs, which makes it a useless statement.
            • bzbarsky 1988 days ago
              Google has no problems with promulgating falsehoods that are not classified as "fake news". Unfortunately, that definition involves a lot of motivated reasoning, on all sides.

              > anything and everything that google does or doesn’t do would be considered an effort to change beliefs

              Absolutely. Google (like so many other organizations) is making efforts to change beliefs all the time. That's not a problem per se, except they have so much leverage to _succeed_ at it that it might be worth thinking about the implications a bit. And statements that they don't do it don't help with that step.

            • hammock 1988 days ago
              >promulgation

              Did you mean proliferation? (just trying to avoid falsehoods on HN)

          • andybak 1988 days ago
            Rather a case of "damned if they do, damned if they don't" on that one.
            • bzbarsky 1988 days ago
              I'm not saying I don't understand why they are doing it.

              But I also understand why news media in a totalitarian regime toe the party line, say. And they are also damned if they do, damned if they don't.

              Again, I am not claiming that Google should be doing something different. They may well not have any other options. I'm claiming that we should be aware of what they (and various other companies and governments!) are doing and act accordingly. If someone claimed that the US government "isn't trying to control or limit what people think", say, that would probably be met with a healthy dose of scepticism. Even more so for the Chinese government. My argument is that such claims about Google should also be treated sceptically. Not as sceptically as claims from the Chinese government; it's hard to compare to the US government.

        • 8note 1988 days ago
          It's not the worst comparison. Google search is removing our need, and thus our ability, to remember things.
          • bun_at_work 1988 days ago
            By your logic, books can similarly be compared to Big Brother. Perhaps the Google fear-mongering is just a natural human reaction to new technology...
        • gaius 1988 days ago
          > Google isn't trying to control or limit what people think. They are merely trying to allow advertisers to target audiences

          Do you see the juxtaposition here?

          • KirinDave 1988 days ago
            Even if we follow your logic faithfully, all it suggests is that Google enables people to "control and limit what people think."

            Honestly, I don't think society as a whole has really come to grips with what advertising is or how to treat it ethically. Informing consumers has never really been what it's used for, even though that's a critical function of any proposed society that relies on open markets.

      • Kiro 1988 days ago
        > has the mindset of the most evil big brother you could imagine

        Wow, really? I must be crazy but I still honestly feel and believe Google is a good company following "don't be evil".

    • ancorevard 1988 days ago
      "fearmongering like this would suggest that Alphabet couldn't ever deal with health care info"

      I think that would be a good start.

    • aaavl2821 1988 days ago
      From what I can tell the substance is in the latter half of the article:

      > Streams began as a collaboration with the Royal Free Hospital in London to assist in the management of acute kidney injury.

      > However, it emerged that neither the health trust nor DeepMind had informed patients about the vast amount of data it had been using.

      > DeepMind Health went on to work with Moorfields Eye Hospital, with machine-learning algorithms scouring images of eyes for signs of conditions such as macular degeneration.

      > In July 2017, the UK's Information Commissioner ruled the UK hospital trust involved in the initial Streams trial had broken UK privacy law for failing to tell patients about the way their data was being used.

      Still not super clear what happened, but it appears that Google simply failed to tell patients how it was using their data, which seems to be a violation of UK privacy law.

      I'm sure to some people this does not seem super concerning, and to others it seems very concerning. In my opinion this is a mistake that should just not be made. The data privacy cultures in tech and healthcare are so incredibly different: personal healthcare data is arguably regulated too heavily, while in tech it seems that privacy doesn't exist. It seems quite irresponsible to me that such a high-profile company as Google, which already has a reputation for lax privacy practices, would mess up such a seemingly simple compliance issue. It suggests they simply either don't care, are negligent, or think they can get away with whatever they want.

      That said, the reporting on this seems egregiously sensationalist, fearmongering, and potentially factually incorrect. My initial reaction to that kind of language is to recoil, and it makes me side with Google rather than the reporter, although upon further consideration it seems clear to me that both Google and the journalist are in the wrong. But I am probably in the minority, and other people may respond to this kind of journalism. If it takes this level of aggressive journalism to raise public awareness of privacy concerns in healthcare, then maybe I support it...

      • hammock 1988 days ago
        No need to go to the bottom of the article, it's in the second sentence.

        >Streams...hit headlines for gathering data on 1.6 million patients without informing them.

    • aylmao 1988 days ago
      > The team is joining DeepMind/Alphabet, and it appears DeepMind promised that health data would never be joined to your Google account.

      Ah, good to hear! Companies always keep their promises after all, especially when it is around user-data sharing and it's all left to good faith instead of a written, actionable contract. /s

      Throwback to Facebook and WhatsApp anyone?

      • reaperducer 1988 days ago
        The PR department is ready to crank out another "We can do better. We'll learn from this" SV non-apology.
      • bagacrap 1988 days ago
        I really don't see the connection between Facebook and Google. Their track records on privacy are totally different. No matter how many times I hear about Cambridge Analytica, my opinion of Google will be unaffected.
        • aylmao 1988 days ago
          I once heard a Google employee mention this himself, specifically after the Google+ breach coverup came to light: "Honestly, I don't think it was the right thing to do but it definitely was the better business move. And announcing the deprecation today definitely tries to cover it up. We just have a better PR team."

          I'm not claiming one is better than the other, and that's not a discussion to have today, but I agree with him 100%. Google, after all, invented the business and does have a hell of a PR team.

    • hammock 1988 days ago
      Regardless, it says "gather[ed] data on 1.6 million patients without informing them."
    • thanatropism 1988 days ago
      FUD is disinformation with a commercial agenda. Unless you're suggesting that the rest of FAANG is playing dirty tricks at a guerilla level...
    • ionised 1988 days ago
      Alphabet should never be dealing with healthcare info, correct.
  • ChrisSD 1988 days ago
    The core issue:

    > "DeepMind repeatedly, unconditionally promised to 'never connect people's intimate, identifiable health data to Google'. Now it's announced... exactly that. This isn't transparency, it's trust demolition," [Lawyer and privacy expert Julia Powles] added.

    I remember this from when DeepMind was first given access to UK patient data. The firewall between them and Google proper was a major point at the time.

    The article is scant on details but while this move might not "demolish" trust it does seem to erode it.

    • mtgx 1988 days ago
      This is why Google's so-called "AI ethics board" is nothing but a sham. The AI ethics board should have already stopped this, or the fact that Google intended to use its AI in China for censorship, or use it for military drones to find targets to kill.

      But it didn't. It's just another PR thing Google did to get people off their backs while they continue with their original plans for AI in advertising/user tracking.

      • netcan 1988 days ago
        Do you happen to know what, more specifically than "ai ethics," the board considers part of its job?

        Are privacy issues around data sufficiently "AI" to be part of this?

        I mean, I can easily picture a board of hyper-intelligent academic types who are only interested in skynet or brain-in-a-vat situations.

        • burkaman 1988 days ago
          Nobody knows anything about the board.

          > DeepMind has consistently refused to say who is on the board, what it discusses, or publicly confirm whether or not it has even officially met.

          https://www.theguardian.com/technology/2017/jan/26/google-de...

        • shawn 1988 days ago
          > Are privacy issues around data sufficiently "AI" to be part of this?

          Of course it is! In fact, if an AI had no data, it would not be useful AI. Just like you or I wouldn't be.

      • aylmao 1988 days ago
        I'm not sure why this is being down-voted, could someone thinking of down-voting comment?

        EDIT: now I'm being down voted? Could someone explain?

        • housingpost 1988 days ago
            HN has a huge number of Googlers as users. That's where the downvotes come from. Just look at the Googlers in this thread defending the company.
          • aylmao 1987 days ago
            Makes sense, actually /:
  • CaptainZapp 1988 days ago
    When being admitted to hospital, or even for an MRI, one of the questions on the form is usually:

    Are we allowed to share your data in anonymized form for research?

    I never thought much of that and, of course, always answered Yes.

    Nowadays, and after the shit that Facebook[1] and Google are trying to pull off, my answer is a resounding:

    Hell, No!

    Do those guys actually consider how much they hurt science and by extension patients?

    Scum!

    [1] http://fortune.com/2018/04/06/facebook-medical-data-sharing-...

    • brlewis 1988 days ago
      Do you think the hospital is lying about "anonymized form"?
      • KaiserPro 1988 days ago
        No, the issue is that NHS Digital and its predecessors are utterly stupid when it comes to "anonymous".

        Firstly, they sold everyone's data to a number of insurance companies, something that as far as I can tell is illegal: https://www.telegraph.co.uk/news/health/news/10656893/Hospit...

        Secondly, NHS Digital created a schema for sharing records with researchers. Supposedly it was anonymous, but it had date of birth, sex, and postcode (a postcode is about 80 houses), plus the address of every interaction with the NHS.

        The only thing missing was the name. But cross-referencing date of birth, gender, and postcode with the electoral register gives you a name in 99% of cases.
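
        To make that concrete, here's a minimal sketch of the cross-referencing (toy data; the field names and single-row "register" are hypothetical):

          import pandas as pd

          # "Anonymized" extract: name removed, quasi-identifiers kept
          health = pd.DataFrame([{"dob": "1967-03-14", "sex": "F",
                                  "postcode": "NW3 2QG", "event": "renal clinic visit"}])

          # Register-style data: names alongside the same quasi-identifiers
          register = pd.DataFrame([{"name": "J. Smith", "dob": "1967-03-14",
                                    "sex": "F", "postcode": "NW3 2QG"}])

          # A plain inner join on (dob, sex, postcode) re-attaches the name
          print(health.merge(register, on=["dob", "sex", "postcode"]))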

        Also, knowing a few of the people who work _for_ NHS Digital, and their involvement in leaking things to the press for personal gain, I have no faith in their moral compass.

      • TeMPOraL 1988 days ago
        Yes and no.

        As I'm fond of saying, there ain't such a thing as "anonymized"; there's only "anonymized until combined with other data sets".

        Plenty of non-obvious things can deanonymize you. An accurate enough timestamp. A rare enough medication or treatment you received. The combination of treatments you received. It's all fine until a chain of data sets forms that can identify you with high probability.

        Unrelated: I too used to be all for "share my data with whoever needs it for medical research". These days, I worry that "medical research" doesn't mean actual research, but random startups getting deals with hospitals - startups that I don't trust not to play fast and loose with the data, and don't trust to share the results with the wider community. I think there was even an HN story about that some time ago.

        • ikiris 1988 days ago
          Having worked in healthcare, I'd say the startups probably have much better data security. Medical is a horror show of bad data practices. I'd trust Uber with my data before I thought any major health org had any chance of doing it right.
        • Bartweiss 1987 days ago
          For people in the US, it's worth looking at the list of HIPAA-defined "protected health information".[1] The 18 listed fields are the basic standard for both anonymization and de-identification.

          Frankly, it's a terrible list made by people who don't understand statistics. 16 of the fields are simply unique identifiers in isolation. The only two sops offered to prevent deanonymization are dates, which are restricted to year, and location, which is restricted to a >20,000 person zipcode identifier.

          A complete medical history reduced to year is probably still a unique identifier for many people. Crossed with a 20,000 person geographic restriction, the year of even a single uncommon medical event is unique for many people. And that's before we even include non-redacted information like demographics. Adding just race, gender, and approximate age can easily turn 20,000 people into a few hundred.
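
          A back-of-the-envelope version of that narrowing (the demographic fractions below are assumptions for illustration, not census figures):

            population = 20_000    # smallest geographic unit safe harbor allows
            candidates = population
            candidates *= 0.25     # share of one racial group (assumed)
            candidates *= 0.50     # gender
            candidates *= 0.08     # a ~5-year age band (assumed)
            print(int(candidates)) # 200 people left, before any medical facts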

          Who can deanonymize that data? Well, Visa can see when and where you're diagnosed with things by spotting a radiologist's bill or a monthly pharmacy payment. Target can use your location and OTC medical supply purchases. Plenty of ad networks could pair an IP location and search term to a ZIP and diagnosis. And that's what I've got with 2 minutes thought and no training in doing this.

          As a final, ugly aside: HIPAA coverage applies to patient-released data. Once it's anonymized, shared, and deanonymized, the new holder is likely free to shop it around with your name attached.

          [1] https://en.wikipedia.org/wiki/Protected_health_information

      • ams6110 1988 days ago
        No, but there's really no such thing as anonymous anymore, given the corpus of data a company such as Facebook or Google has. Once they have just a few "anonymous" clues, such as zip code, year of birth, race, etc., they can probably connect any datum with an individual with pretty high accuracy.
        • tantalor 1988 days ago
          > "anonymous" clues, such as zip code, year of birth, race

          Those are examples of PII, not "anonymous"

          • gowld 1988 days ago
            It's not so clear-cut. https://en.wikipedia.org/wiki/Personally_identifiable_inform...

            And that's the point. Anything can add up to PII if you have enough clues. PII is a statistical concept, not a binary concept.

            • KaiserPro 1988 days ago
              In the EU, it's a legal concept.

              It's anything that can be reasonably used to identify a person.

              • ams6110 1988 days ago
                So by itself, medical data by birth year is not enough to identify a person. It's just statistics. Same as by race, or any other single characteristic.

                But when combined, and matched against the kind of individual detailed data that Google or Facebook have about most of the population, it's a lot less anonymous.

                • KaiserPro 1987 days ago
                  In the old Data Protection Act, it was any data when combined with public records.

                  In the new GDPR, as far as I'm aware, it's the _act_ of trying to combine/process (not sure of the exact wording) to de-anonymise.

              • vharuck 1988 days ago
                >it's anything that can be reasonably used to identify a person.

                This bar moves over time, especially as people share more (not health-related) personal information with companies.

          • Bartweiss 1987 days ago
            Under HIPAA at least, zip code is the only "protected health information" in that list - and even then three digits of it can be shared if they apply to a large enough population.

            HIPAA permits two de-identification methods. One, expert analysis, will presumably catch things like demographic identifiers. The other, safe harbor, offers a concrete list of data to remove - and that list leaves an absolutely massive amount of PII unrestricted.

      • dragonwriter 1988 days ago
          If it's a US hospital, I would assume, absent specific reason to believe otherwise, that it follows the deidentification rules in the HIPAA regs if it says that (because it's a crime otherwise). But I'm not all that confident that data deidentified by those rules is necessarily effectively anonymized, especially since certification by a specified qualified expert substitutes for the concrete requirements.
      • PurpleBoxDragon 1988 days ago
          Yes. I've seen far too many people who don't really have a clue how to actually anonymize data, and adversarial systems set up for reporting issues that encourage ignoring the issue over fixing it.
      • reaperducer 1988 days ago
        Considering that personal injury law firms geotarget their AdWords to people who have been in a hospital emergency room, I'd say it's not that hard to stitch the pieces together to remove that anonymity.
      • CaptainZapp 1988 days ago
        Not necessarily.

        The problem is that it's very hard to really anonymize data[1] and the hospital may not necessarily know that.

          However, with Google, Facebook and (yes, even) Apple getting into the game, my trust is ever more shattered. Let alone any schlocky "health-app" maker which sends your most personal data straight into "the cloud".

        [1] https://arstechnica.com/tech-policy/2009/09/your-secrets-liv...

        • brlewis 1988 days ago
            That's a terrible example to illustrate that it's "very hard" to really anonymize data. Date of birth is an obvious deanonymizer. Zip code should never be included in publicly released data; it's too granular when combined with race and approximate age.

          Definitely thought needs to be put in, but I don't think "very hard" is right.

          • gowld 1988 days ago
            > It's too granular when combined with race and approximate age.

              That goes for everything, and that's the problem with the PII concept.

            Passwords are (if handled well) unique secret keys. Either you know it or you don't. Close doesn't count.

            Personal identity is an amalgamation of dozens of personal facts, and each fact statistically deanonymizes a person.

      • ionised 1988 days ago
        I think the hospital fails to understand that anybody can be deanonymised by cross referencing enough 'anonymised' data sets.
      • spacehome 1988 days ago
        There’s nothing more “me” than my brain. I expect sufficiently advanced science to be able to narrow down my identity from a detailed enough scan of my brain. Whether this can be done today is a question of technology, and I’m not abreast of the latest findings in this field.
        • brlewis 1988 days ago
          If the data shared included a detailed brain scan, then yes I'd say the hospital was lying about "anonymized form", same as if it included a retinal scan or fingerprint.
          • JangoSteve 1988 days ago
            Or genomic information, or a unique combination of drug interactions which could be combined into a sort of health fingerprint, or phenotype information, race or age, which are often necessary to provide medical context for diagnoses, or timestamps on test results that match up with events in your calendar or destinations in your navigation or security camera footage, and I'm sure the list goes on.

            The word "anonymity" is a little ambigious. It can mean "unable to be identified" or it can mean "namelessness". There's not much information about anyone that is truly unable to be identified, especially when combined with additional pieces of data readily available from external sources.

              It all depends on who they're sharing the anonymous data with and what external data those partners have access to.

      • fenwick67 1988 days ago
          "Anonymized" is a very loaded word; technically my IP address is anonymous, but companies still figure out who I am from it.
      • lern_too_spel 1988 days ago
        He must believe both that the data won't be anonymized and that the data won't be given exclusively to medical researchers.
    • jpdus 1988 days ago
      This is a very hypocritical stance in my opinion.

      All medical research needs patient data, and every newly admitted medicine has to withstand rigorous (data-driven) testing; this did not change with new technology at all.

      Only 2 things changed: 1 - Nowadays, with GDPR and increased alertness about data protection (which is a good thing in general, as the possibilities evolved more quickly than the regulation), people are explicitly asked about these things. 2 - The big internet companies like Google/Facebook (which did not behave well in the past and are thus partially responsible themselves for the public mistrust in these cases) have the most experience/talent for the development of ML/AI-based technologies and are deploying it in new areas like medicine.

      I don't get all the negativity here. I vastly prefer Google/Facebook working on medical progress to working on improving ad targeting. Imo they should be encouraged to use their expertise and resources for progress in these fields. IF they are indeed found guilty of breaking promises and harvesting highly sensitive data, shame on them. But that is not the case here.

      • saiya-jin 1988 days ago
        > I vastly prefer Google/Facebook working on medical progress than working on improving ad targeting.

        Google's primary mission is surgically precise ad targeting (pun intended). Facebook has the same mission; that's their primary revenue stream; the difference is where their user data comes from. Every effort they make somehow has the potential to improve that primary mission. This is just another brick in their wall.

        This is how the (somewhat educated part of the) public sees G/F. Maybe the image is not 100% precise, but they do very little to correct it and be properly honest good guys.

  • drcode 1988 days ago
    The health+ML space is in a real ethical quandary right now, because at some level it's in all of our interests to have machine learning algos run against large patient data sets. I'm guessing that 100 years from now people will look down on us for not solving this coordination problem faster and not using all of the medical data we've collected to accelerate progress on disease research...

    ...but how do we do it without also transforming into a dystopia of "radical transparency"?

    • 013a 1988 days ago
      I tend to believe that it isn't a problem with the core idea or philosophy of privacy (and I'm an intensely staunch privacy advocate). I believe it's a problem with Google. Or more generally, any company which has any financial incentive to use data in ways that the customer does not expect. It's about the human responsibility on the receiving end of the data, and Google is a Bad Seed.

      With the Apple Watch, Apple partnered with Stanford on a heart study to identify a-fib using smartwatch-sourced heart rate data. They attracted 400,000 participants. [1]

      I would guess that exactly zero of these participants feel cheated by their participation in this study (so far). Why? It's the same general concept as this DeepMind stuff. I'd argue it's three things: clear, precise, opt-in consent about the data shared; a clear and constrained explanation of its usage; and trust in the receiving parties.

      Apple and Stanford have these three things in spades. Google is incapable of all of them, and you can't Design or Buy your way through corporate culture and customer trust. They gather and correlate so much data, their consent process is far from precise or opt-in. Once they have the data, they have a history of using it for Whatever they want. Which all ties back to trust; Google has permanently ruined any trust customers would have toward them for things that Actually Matter.

      Point being, other companies can pick up this mantle. It won't be Google. And if you're a health-focused company, joining Google is a literal death sentence for your product. You'll end up prototyping something amazing, then be completely incapable of deploying it for any public good because no one will give you data.

      [1] http://fortune.com/2018/11/02/stanford-apple-watch-heart-stu...

      • gowld 1988 days ago
        Who doesn't have an incentive to misuse data?

        A company can sell it to advertisers. A government can use it to run a genocide. An individual can use it for blackmail. Who's left?

        • 013a 1987 days ago
          Who doesn't have an incentive to mug someone on the street? So, what stops them? Morality. Law. Responsibility. Reputation.

          All of this applies to companies. Morality comes from shared culture and values. Laws like GDPR help. Reputation is a huge one; if companies want any sort of meaningful enterprise contracts, especially with PII/PHI, data privacy and security is paramount.

    • vharuck 1988 days ago
      Why not get subjects' consent for each study? Don't a lot of health organizations already do this?
  • scrooched_moose 1988 days ago
    This gets to the core reason I've basically given up on all new services.

    Eventually, no matter how virtuous a company claims to be, they'll sell the company to FAANG, whom I try to avoid as much as possible. When the true end goal of most startups is an exit payday from Google, I can't trust them with my data.

    It's frustrating that avoiding Google isn't even as simple as "Don't use Google". It's become "Don't use anyone who might be an acquisition target for Google".

    • eiaoa 1988 days ago
      > Eventually, no matter how virtuous a company claims to be, they'll sell the company to FAANG who I try to avoid as much as possible. When the true end goal of most startups is an exit payday to Google I can't trust them with my data.

      And their appealing product will probably get shut down or rendered unrecognizable, either by FAANG or by bankruptcy. I've pretty much sworn off all startups that sell services, products dependent on services, or anything that requires "the cloud."

      At some point, my data's going to be sold off and monetized, and I won't even be able to enjoy the product I was supposed to get in return.

    • Cyclone_ 1988 days ago
      Apple hasn't proven to be untrustworthy with data, they have a much different business model than the others. I've never heard of Netflix doing anything that bad with data either.
      • izacus 1988 days ago
        Selling out their Chinese users to the Chinese surveillance machine is pretty much the definition of untrustworthy. This trust in a multi-billion-dollar corporation is insane.
      • Lio 1988 days ago
        The simple fact that Netflix won't let you delete your watched movies list is enough to make me wary of them.

        The biggest problem with these big faceless companies is the lack of control and that you can't trust them to do what they say they will.

        For example the dark patterns that Google puts around location data controls. You turn it off in one place but unfortunately that's the wrong place so they're still collecting that data.

        They say that Nest data will never be combined with other Google data ...until they quietly change their terms and do just that.

        Privacy law is like tax law to these firms, something to look for loopholes in.

      • zalll 1988 days ago
        There was the time that Netflix used its data to make snarky tweets: https://www.chicagotribune.com/bluesky/technology/ct-netflix...
      • rocqua 1987 days ago
        There was the 'netflix challenge'.

        They published a pseudonymous list of (user, movie, rating) triplets and two lists of (user, movie) tuples. The idea was that people could train a model to predict the ratings corresponding to the two lists of (user, movie) tuples.

        You could hand in a guess every day and would get back a score on the first list of (user, movie) ratings. In the end, whoever got the best score on the second list got $1,000,000.

        It was a really interesting challenge, and some good research came of it, but it turned out to be pretty easy to de-anonymize users by correlating with e.g. IMDB watch lists.
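
        A toy sketch of that correlation attack (made-up data; the published attack additionally weighted rating dates and movie rarity):

          # Pseudonymized Netflix-style ratings: user names replaced with numbers
          netflix = {1001: {("Movie A", 5), ("Movie B", 2), ("Rare Film C", 4)}}

          # Public IMDB-style ratings posted under real names
          imdb = {"jane_doe": {("Movie A", 5), ("Rare Film C", 4)}}

          # Score each (pseudonym, public profile) pair by rating overlap
          for anon_id, anon_ratings in netflix.items():
              for name, public_ratings in imdb.items():
                  overlap = len(anon_ratings & public_ratings)
                  if overlap >= 2:  # a few matching obscure ratings is often enough
                      print(f"pseudonym {anon_id} is probably {name}")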

      • scrooched_moose 1988 days ago
        That's true; I just grabbed the acronym as a shortcut for "the huge technology companies who suck up user data at all costs". It probably wasn't fair to include them, and Microsoft might be worth tossing in. FAAM?
        • FabHK 1988 days ago
          Did you mean FAMG?
        • dr1ggins 1988 days ago
          You removed Netflix and kept Apple in the new one.
      • ams6110 1988 days ago
        For now.
      • bduerst 1988 days ago
        Apple has handed over all their Chinese user data to the government. They even updated their TOS to reflect this.

        It took just two years to go from rejecting the FBI's request for one user's keys, to handing over the encryption keys to China for millions of users.

        • alphabettsy 1988 days ago
          Please provide a source for this claim because it is commonly repeated and completely counter to my understanding.
          • bduerst 1988 days ago
            What's your understanding then?

            Apple was asked by China to move the user data and its keystore to local Guizhou-Cloud data centers. Apple updated their TOS and blocked service for anyone who didn't agree to moving their user data. [1]

            These datacenters were nationalized just several months ago [2]:

            > Fast forward to today: China Telecom, a government owned telco, is taking over the iCloud data from Guizhou-Cloud Big Data. This essentially means that a state-owned firm now has access to all the iCloud data China-based users store, such as photos, notes, emails, and text messages.

            [1] https://www.reuters.com/article/us-china-apple-icloud-insigh...

            [2] https://mashable.com/article/china-government-apple-icloud-d...

  • denzil_correa 1988 days ago
    At this point in time, the only way to ensure privacy is to have it embedded in a legal contract. Otherwise, all claims of "never use" or "will not be shared" mean nothing.
  • petilon 1988 days ago
    Speaking of 'trust demolition', here's how Google has demolished my trust: in Chrome they have added a new option, "Allow Chrome sign-in". It appears to be a placebo: even if it is turned off, if you sign in to Gmail then your browser is also signed in, enabling Google to surveil you even more. There appears to be a pattern here: Google Maps tracks your location even when you turn off Location History. I used to be an unabashed Google fan; now I am in the process of de-googling my life wherever I can.
    • FabHK 1988 days ago
      Welcome to the club. I'm on DDG for search, Apple Maps where possible, Zoho for shared spreadsheets etc., YouTube in a browser in incognito mode or downloaded via youtube-dl in the terminal, and my old Gmail is set to forward to other accounts, which I primarily use. Anything else one can do to degoogle? (How nice it would be if that word entered the language...)
    • bagacrap 1988 days ago
      Not sure how you could justify running someone's code on your machine if you don't trust said someone not to "surveil" you, regardless of purported features of that code.
  • Jmcdd 1988 days ago
    Surely anyone bothered by this can just request that their data be nuked? Thanks GDPR.
    • EamonnMR 1988 days ago
      I wonder how GDPR will apply to models trained on user data. What does deleting user data look like in that case?
      • abainbridge 1988 days ago
        I work at Microsoft Research in the UK. A few weeks ago we had a lecture from a lawyer on exactly this subject. Her main point was that GDPR gives people the right to request their data be deleted but it gives companies the right to refuse if it would cause unreasonable damage to their business. Until a case makes its way through all levels of the court system, nobody knows how this collision of rights will be interpreted.

        I suspect someone would have to show that the model trained on their data revealed something about them in a practically harmful way.

        • eiaoa 1988 days ago
          > A few weeks ago we had a lecture from a lawyer on exactly this subject. Her main point was that GDPR gives people the right to request their data be deleted but it gives companies the right to refuse if it would cause unreasonable damage to their business.

          I guess it still needs to be litigated, but the question on my mind is: does that right of refusal apply only to the model, or also to the data that trained it? If it applies to the data, the regulation is pretty useless, since anyone could avoid the deletion requirements by training models on it. If it doesn't, I think the use in the model takes care of itself: at some point they'll need to retrain, and then your data won't be there.
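
          That retrain-without-the-revoked-rows step is also the naive form of "machine unlearning"; a minimal sketch (scikit-learn, toy data):

            import numpy as np
            from sklearn.linear_model import LogisticRegression

            X = np.array([[0.1], [0.2], [0.8], [0.9]])
            y = np.array([0, 0, 1, 1])
            revoked = {2}  # row belonging to the user invoking erasure

            model = LogisticRegression().fit(X, y)

            # "Unlearn" by dropping the revoked rows and refitting from scratch;
            # the new model has provably never seen the deleted data
            keep = [i for i in range(len(X)) if i not in revoked]
            model_after_erasure = LogisticRegression().fit(X[keep], y[keep])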

      • mattlondon 1988 days ago
        It is an interesting thought.

        I feel, though, that the point of the GDPR was to protect our personal data held by companies, not to prevent companies from using our personal data to make money.

        So if a company uses your personal data to train a model (let's assume you willingly gave your informed consent for the time being), and then they delete your data after they have trained their model, does that model contain your personally identifiable information? I'd argue that it does not - the model is just some weights, right? So 0.6, 34.291, 0.0016 - is that you, mum?

        ... but having just said that, I do wonder what happens if you run the model in reverse, like the DeepDream stuff did [1]. Could it re-generate PII (or rather generate "nearly-PII") purely from those weights?

        [1] https://en.wikipedia.org/wiki/DeepDream
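
        A toy sketch of both halves of that question (assuming PyTorch and purely synthetic data; this has nothing to do with any actual DeepMind/Streams model):

            import torch
            import torch.nn as nn

            # Train a tiny classifier on 200 synthetic "records" with 8 features.
            torch.manual_seed(0)
            X = torch.randn(200, 8)
            y = (X[:, 0] > 0).long()  # fake label derived from feature 0
            model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
            opt = torch.optim.Adam(model.parameters(), lr=0.01)
            loss_fn = nn.CrossEntropyLoss()
            for _ in range(200):
                opt.zero_grad()
                loss_fn(model(X), y).backward()
                opt.step()

            # "Is that you, mum?" -- the trained model really is just numbers:
            print(torch.cat([p.detach().flatten() for p in model.parameters()])[:5])

            # "Running it in reverse": freeze the weights, then gradient-ascend an
            # input that maximises class 1. This recovers a prototype the model
            # associates with the class, not a stored training record, but for
            # small or overfit models it can resemble real training data.
            for p in model.parameters():
                p.requires_grad_(False)
            x = torch.zeros(1, 8, requires_grad=True)
            inv_opt = torch.optim.Adam([x], lr=0.1)
            for _ in range(100):
                inv_opt.zero_grad()
                (-model(x)[0, 1]).backward()
                inv_opt.step()
            print(x.detach())

        So the weights themselves don't look like PII, but what can be extracted from them is exactly the open question; membership-inference attacks are the formal version of it.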

      • toomuchtodo 1988 days ago
        I would assume it means deleting the model, if you can't unwind the specific user data used to evolve it.
        • tjoff 1988 days ago
          You only have to delete data that is personally identifiable.
          • toomuchtodo 1988 days ago
            Can you prove it isn't personally identifiable?
            • ralmeida 1988 days ago
              Can you prove it is? Many, many things can be personally identifiable given enough resources and associated data, so it's unclear whose burden of proof it is, especially considering the sibling comment mentioning a GDPR exception allowing companies to refuse deletion if it would cause sizable damage to the business.
    • LeoPanthera 1988 days ago
      Will the GDPR still apply in the UK after it leaves the EU?
      • ben_w 1988 days ago
        As with everything else Brexit related, technically the UK won't be bound, but practically it will have to choose between following one of the three major economic blocs — EU, USA, China — or struggling economically because it refuses to believe the Commonwealth is not on that list.
      • chippy 1988 days ago
        Yep, I think the current understanding is that all EU laws will still apply - they kind of grandfather them in - with the understanding that they could repeal them at a later date (as they can for any law).
      • pdpi 1988 days ago
        AIUI, the GDPR by itself doesn't apply to the UK (or any other EU member state in particular). Instead, the GDPR forces member states to enact laws that implement those rules.

        This means that, after Brexit, the GDPR implementation laws will still be law in the UK. Depending on the outcome of the Brexit negotiations, the UK might or might not be in a position to repeal those laws at their own discretion.

        • detaro 1988 days ago
          The key word is "Regulation": the General Data Protection Regulation. Regulations are law and apply directly, with no local implementation needed (except for "interfaces", e.g. in the case of the GDPR, changes to existing laws to clarify how they interact with it and to make the exceptions it explicitly allows the member states to make).

          Directives are the ones that only direct the states to enact laws implementing them.

          • DanBC 1988 days ago
            But in the case of the GDPR, the UK has also passed the DPA 2018, which implements the GDPR.
          • pdpi 1988 days ago
            Ah awesome. Thanks for the clarification.
      • Angostura 1988 days ago
        Yes. Though the British government will then be able to amend the regulations, they say the intention is basically to keep them.

        http://www.blplaw.com/expert-legal-insights/articles/gdpr-an...

    • gumby 1988 days ago
      Better do that before Brexit, when GDPR is (currently) supposed to stop being applicable to British citizens.
      • Angostura 1988 days ago
        No. GDPR has been written into UK legislation and will continue as-is.

        I’m a Remainer, but let’s concentrate on being alarmed about the real impacts.

      • philjohn 1988 days ago
        Is it? EU laws are written into our own national laws, so assuming UK citizens won't be subject to the GDPR (or requirements akin to it) is perhaps not sound.
      • 00deadbeef 1988 days ago
        It will remain part of UK law
        • swish_bob 1988 days ago
          Unless they decide to include it in the great repeal ...
      • duxup 1988 days ago
        If Brexit ever happens... so many contradictions in what folks think about it / want from it and what it is / what deal anyone can actually make.
        • KaiserPro 1988 days ago
          It will happen automatically, by law, in March. That is what Article 50 does.

          The battle now is over what form the divorce takes: messy and sharp (no deal, where the UK has no access to anything without lots of barriers), or really painful, where we lose access to lots of things but retain access to a few key things.

          There is a very remote chance, very remote, that a referendum will take place, but unless Article 50 is rescinded it will be meaningless, as we will leave the EU automatically at the end of March.

        • 666lumberjack 1988 days ago
          It will have to happen - the vocal Leavers would create an absolute shitstorm if the government even considered reneging on Article 50. Even if we weren't in a situation where the leadership of both relevant parties is committed to Brexit, it'd be political suicide.

          As of today it's looking more and more likely that the separation will be in name more than in function though.

          • KaiserPro 1988 days ago
            > As of today it's looking more and more likely that the separation will be in name more than in function though.

            It's worse than that. It's what nobody wants: tied to some parts of the EU, with no say whatsoever. In the case of the finance industry, it'll be 60 days' notice to comply or access is withdrawn. Brilliant for the EU, as it means it can start creaming off the finance industry and the vast sums of money it generates in tax.

  • throw2016 1988 days ago
    This is the perfect example of why unconstrained self interest and greed are destructive for the ecosystems that sustain them.

    There are no limits to greed and that's why there are serious constraints on all sorts of things that can damage the commons and why the only legitimate force is the common good.

    And it's a lesson for the tech community, who have seen first-hand the rapid transformation of seemingly well-meaning, ethical actors into self-obsessed, exploitative bad actors completely divorced from ethics.

  • orbifold 1988 days ago
    What I really hate about this is how once great nations like Great Britain, which at one point ruled 1/5th of the planet, surrender more and more of their competencies to corporations. It is high time to fight this and stem the tide. The rulers of these corporations should tremble before the might of (realistically only some) nation states; instead it is the other way around.
    • Quarrelsome 1988 days ago
      > What I really hate about this is how once great nations like Great Britain

      What? We threw all that away 100 years ago. We've been selling all of Grandma's silverware for the past century to keep the lights on.

      To reach back into history and make things relative to Great Britain's former power is disingenuous. Britain as well as the rest of Europe decided to spend almost all of its "advantage" on killing each other. It got what it deserved.

      The process of stripping the country further back and selling the silverware kicked in harder around the Thatcher era (1980s). Let's not forget the earlier kicks, which include the Second World War, the US suddenly removing Lend-Lease because the British people voted in a socialist government, and the Winter of Discontent.

      If you chart the history it's not surprising. What's surprising is that people are still in a suspended state of disbelief about Britain's place in the world... including its own citizens (a la Brexit).

    • lawlessone 1988 days ago
      >once great nations like Great Britain, which at one point ruled 1/5th

      Nah, screw that empire: 29 million killed in India and 1 million killed in Ireland.

      • orbifold 1988 days ago
        Well, that is kind of my point: as a company you would think twice about screwing over a country that is capable of such brutality. If you look at Britain's leaders today, they are old, really poor, and hold second-class BA degrees in geography; compared to the highly skilled and wealthy adversaries they are up against, such as Eric Schmidt or Jeff Bezos, they are basically outcompeted in every respect. It is not a level playing field on either an individual or an organisational level.
  • altfredd 1988 days ago
    The most alarming part about this story is that DeepMind's involvement is largely unneeded and their "accomplishments" are superficial to say the least:

    > DeepMind Health went on to work with Moorfields Eye Hospital, with machine-learning algorithms scouring images of eyes for signs of conditions such as macular degeneration

    Sorry, but "scouring images of eyes for signs of conditions" on a scale of single hospital is a task for two CS graduates, easily accomplished with freely available machine learning tools. The hospital in question could have done that themselves at minuscule cost. Are UK hospitals legally prohibited from hiring non-medical staff or something? Instead they are partnering (conspiring) with international companies to... do what again? Write Android apps and feed images to neural networks? In exchange for their entire medical data??

    Is the UK becoming another India or something?

  • naaymoo 1988 days ago
    Whoever wrote this goes into detail about how Google should not be trusted with this app and will now have the power to post your personal information online. This app could prove helpful to doctors and nurses. I do agree with the author that it is scary that a multi-million-dollar internet company will have access to medical information, but the thing is that hospitals have been using internet-based devices for storing patient information for years. Technically, Google could already have had access to all of this information (they wouldn't, because that is illegal). We also have no idea how this app works; it could use end-to-end encryption, which would mean that Google could not get the information if they tried. We have no idea what Google's plans are, but I am fairly sure they will not be breaking any HIPAA laws.
    • KaiserPro 1988 days ago
      > could help doctors

      They are not bound by HIPAA, as it's the UK.

  • ocdtrekkie 1988 days ago
    I fail to understand how this is legal. It was found that the NHS gave DeepMind the data illegally, and the inquiry was only closed because of the assurance that Google would never get the data. Now that the inquiry is closed, they are giving Google the data. The ICO needs to reopen its investigation.
  • growlist 1988 days ago
    I'll be contacting Moorfields to request my eye scans are not passed to Google.
  • carapace 1988 days ago
    Compare and contrast this story about a staunchly Western nation and a capitalist corporation with the story that just appeared [1] about Venezuela and China's ZTE.

    In China the tech/data hegemony is part of the central government, while in the West it's separate: FAANG et al. are expected to keep their distance from the government and vice versa.

    I was trying to imagine a scenario for a science-fiction story set around 2040, and my brain conjured an image of the people of China chipped and managed by computer... It was chilling. As for the West, I imagine we're going to have to nationalize the data and infrastructure of the tech companies OR acknowledge them as the new technocratic form of government. Either that or bifurcate into Morlocks and Eloi, in which case it doesn't matter what form the control system takes.

    I guess what I'm asking is, which system do you think will be stable in the long term ("long" meaning 20 to 70 years, the time it takes for the weather to get really hard to ignore), and why? Or will something else happen?

    [1] "How ZTE helps Venezuela create China-style social control"

    https://www.reuters.com/investigates/special-report/venezuel...

    "A new Venezuelan ID, created with China's ZTE, tracks citizen behavior" (reuters.com)

    https://news.ycombinator.com/item?id=18451109

  • duxup 1988 days ago
    What does this AI do medically?

    Is it just playing the odds: "oh, this and this are probably this or this or this"?

    • EpicEng 1988 days ago
      > "oh this and this are probabbly this or this or this."

      What do you imagine a doctor does? They use their education and experience to make an educated guess as to a course of testing/treatment.

      ML models are developed under the supervision of doctors (often leaders in their field) and engineers and are validated against large/statistically significant cohorts.
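
      As a toy illustration of what "validated against a cohort" means mechanically (scikit-learn on synthetic data here, nothing like a real clinical pipeline): fit on a development set, then measure discrimination on a held-out cohort.

          from sklearn.datasets import make_classification
          from sklearn.linear_model import LogisticRegression
          from sklearn.metrics import roc_auc_score
          from sklearn.model_selection import train_test_split

          # Synthetic stand-in for a clinical dataset.
          X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
          X_dev, X_cohort, y_dev, y_cohort = train_test_split(
              X, y, test_size=0.4, random_state=0)  # held-out "validation cohort"

          model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
          auc = roc_auc_score(y_cohort, model.predict_proba(X_cohort)[:, 1])
          print(f"cohort AUC: {auc:.2f}")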

      Source: Spent five years working for a company which released an IF/ML prognostic model for late stage prostate cancer.

      • duxup 1988 days ago
        I don't doubt a doctor does the same thing with varying levels of success.

        I was just curious what the end game was with it.

        I am a bit skeptical of just playing the averages, but no more / less than of individual doctors.

        • EpicEng 1988 days ago
          >I am a bit skeptical of just playing the averages but no more / less than individual doctors

          That would be the minimum standard, but you gain efficiency and lower costs (well, theoretically... companies throw crap in just to hit a higher reimbursement tier). The models often do better than doctors (where applicable) because doctors often won't agree with each other. Like in any profession, you have varying levels of competence. My work has always been in the realm of pathology, so I don't know much about other areas. In pathology you will often see five different doctors give five different interpretations when looking at the same sample.

    • Ensorceled 1988 days ago
      That’s a ... very simplistic view of how ML works.
      • duxup 1988 days ago
        That's why I'm asking.
    • ewjordan 1988 days ago
      For a sufficiently advanced definition of "just playing the odds", that's pretty much what any model does.
    • misterman0 1988 days ago
      This.
  • ChrisSD 1988 days ago
    Btw, can the link be changed to the non-AMP URL[0]? I was wondering why the page was taking so long to load until I noticed.

    [0]: https://www.bbc.co.uk/news/technology-46206677

    • sp332 1988 days ago
      I'm not really pro-AMP, but your link takes twice as long to load for me as the AMP one. And I'm on Firefox.
      • ChrisSD 1988 days ago
        The link above loads more or less instantly for me. The AMP version shows nothing but white for a while before showing the page.
        • carapace 1988 days ago
          Is that what that was? I got a blank page too but then clicked "reader mode" by reflex before thinking about it. (I browse with JS disabled and a lot of sites are pretty broken anyway.)
        • josefx 1988 days ago
          I get a slow redirect from the co.uk to the .com version.
          • ChrisSD 1988 days ago
            Ha, now that the link has been changed to the .com site I get a slow redirect to .co.uk. In fairness it's still faster than AMP, and it's only the first redirect that was slow. But still, that's one slow redirect.
    • sctb 1988 days ago
      Updated, thanks!
    • SquareWheel 1988 days ago
      The amp version is easily five times faster here. Check your access settings.
      • ninkendo 1988 days ago
        What's an "Access setting"?
        • SquareWheel 1987 days ago
          If required content for the page is being blocked.
  • Clariti 1988 days ago
    I never used to give much thought to these forms in hospitals, but these days, with all the privacy issues, I am bothered when I get such a form.
  • sandworm101 1988 days ago
    The next wave of medical advancement may require us to give up some privacy. It has worked in the past.

    Once upon a time childhood cancer was a death sentence. Cancer in young kids moves very fast. Doctors in the 50s/60s did their best, but studied the problem as individuals, writing and presenting papers based on patients at their own hospitals. Nobody had enough data to discover the incremental improvements. Then docs started getting together and adding patients from multiple hospitals into larger and larger studies. From this larger pool of patients came trends and treatment advice that, today, mean childhood cancer is largely survivable. (It is still horrible, but today many childhood cancers are very treatable.) That movement required that patient data leave the hands of individual doctors. Today EVERY kid with cancer is part of multiple studies, and it is normal for their information to be shared far and wide. AI may be the next great thing, but it needs data. It may be necessary for patients to again give up a little privacy to enable progress.

    • bognition 1988 days ago
      There's a problem with your reasoning here. Most people do not object to the pooling of data for scientific discovery. They object to the pooling of data by a company that makes its money by exploiting personal information about individuals.
      • FussyZeus 1988 days ago
        ^ Exactly. I have an app on my phone from my insurance company that tracks my every move. I'm fine with it, because their agreement for its use specifically says that data is not sold to third parties, nor used for any purpose beyond specifically evaluating my driving. And apparently I'm a pretty safe driver, because they give me a fair amount of cash back on it.

        I would never in a million years let a company like Google or Facebook have that kind of info on me.

        • Spooky23 1988 days ago
          > "their agreement for it's use specifically says that data is not sold to third parties, nor used in any purpose beyond specifically evaluating my driving"

          Unlikely.

          The master contract for your insurance company almost certainly allows information that you provide to be used for a variety of business purposes. Insurance companies pool risk data, and it seems unlikely that any negative event captured via this mechanism would go unshared. Another key thing is to look at the precise wording around "We do not sell your data". That is weasel-wording that usually means "We will rent your data" or "We will provide your data at no cost to our business partners, for our business purposes".

          • sandworm101 1988 days ago
              Or: we won't sell/rent "your data", but we certainly sell/rent "our data", collected by the company and not specifically tied to individuals. (Insert long debate about anonymization and/or exactly how few people can be in a pool while still keeping it non-individualized.) It is amazing how many data links can be made with non-PII data points such as vehicle type, color, neighborhood, distance traveled, method of financing, etc.
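
              A quick sketch of how cheap that linking check is (pandas, with a hypothetical trips.csv and made-up column names): count how many records are unique on a handful of "non-PII" quasi-identifiers.

                  import pandas as pd

                  df = pd.read_csv("trips.csv")  # hypothetical per-trip export
                  quasi = ["vehicle_type", "vehicle_color", "home_zip", "financing"]

                  # A record that is unique on these columns is re-identifiable by
                  # anyone who holds one matching outside fact about the person.
                  group_sizes = df.groupby(quasi)["trip_id"].transform("count")
                  print(f"{(group_sizes == 1).mean():.0%} of records are unique")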
        • auslander 1988 days ago
          > specifically says that data is not sold to third parties

          Businesses are for profit. They'll find ways to monetise the data. The wording 'the data' can be bypassed by transforming it - extracting location heat maps, say, and selling those instead of the raw data. Or they may create a service that uses the data and sell that instead.

        • golergka 1988 days ago
          Wow. I had no idea this exists, but I'm very glad it does.
          • sandworm101 1988 days ago
            Wait until they issue you speeding tickets based on that data. It has happened before (2001)

            https://tech.slashdot.org/story/01/06/19/187210/rental-car--...

            • golergka 1988 days ago
              I'd be delighted if they made it mandatory. Speeding kills way too many people.
              • rocqua 1987 days ago
                We know that driving kills a lot of people; I consider that essentially common knowledge. But it is not 'common knowledge' to me that this is due to speeding.

                Not that I know otherwise; I'm just surprised to see so definite a statement.

    • Spooky23 1988 days ago
      I have no problem with data sharing for clinical or research purposes, especially since ethical standards for medical research provide meaningful privacy protection.

      I do have a problem with insurance organizations and pharmacies abusing data sharing agreements intended for subrogation and similar procedures to manage pharmaceutical sales quotas and conduct outbound marketing.

      Case in point: my wife was admitted to the hospital due to complications from what was an early miscarriage. The health insurer sells data that allows an advertiser to surmise that there was a hospital admission to the OB department. The PBM provides anyone paying with information regarding prescriptions before my insurer even gets the claim.

      Outcome: an advertiser (an infant formula company) determines that my wife is likely pregnant and likely to deliver on Month/Day/Year. Guess what arrives on that day? A FedEx care package of formula.

      That was a very hurtful event for us, and similar violations happen thousands of times every day.

      • zaroth 1988 days ago
        There’s a “funny” Target story where a family starts getting ads and coupons for baby products in their Target mailers after the teen daughter becomes pregnant (unbeknownst to the parents).

        In that case it had nothing to do with mining medical records for advertising purposes. The daughter’s browsing and shopping habits sent a strong enough signal to trigger the ad targeting.

        I don’t know anything about your case, and am very sorry to hear about your family’s loss. I don’t know if you can draw a line to the insurance company selling you out. But now I’m very curious to learn more about what data insurance companies are allowed to sell, and to whom.

        • Spooky23 1988 days ago
          I appreciate that. I've posted this a few times when these matters come up, because it was very impactful to us and really exposes the farce of medical privacy. It is my small way of perhaps inspiring a positive outcome from an awful event.

          I know the insurance companies, hospital and PBM sold pieces of the data because the formula company, immediately upon request, disclosed the list they obtained my name from, and I identified who had the relevant information by process of elimination. I don't know specifically all of the ways this is done.

          Basically, claims data is sold, but not diagnoses. There is other context (type of admission, source of claim) that can identify the reason with confidence (e.g. ER admission, hospital admission, claim from an OB/GYN). The prescription can strengthen the assumed condition, and your pharmacy provides that data in near real time to pharmaceutical companies, brokers, and others. That script is tied to the DEA number of the doctor and can be cross-referenced to the admission.

          The formula company takes that data and mashes it against people who have used their coupons in the past.

        • gowld 1988 days ago
          It may not be the insurance company specifically, but anyone who has been pregnant in the USA can confirm that getting medical care for pregnancy leads to marketing. They even give photography companies access to patients in the delivery ward.
      • aylmao 1988 days ago
        Wow I'm so sorry to hear this.

        It's infuriating that companies think they can insert themselves into people's lives like that. Why not keep it strictly at "they sell something" and "we will go and look for that something if we need it"? Why try to squeeze themselves into people's faces like that?

    • endorphone 1988 days ago
      "The next wave of medical advancement may require us to give up some privacy. It has worked in the past"

      Of course. But companies like Google, Facebook, Amazon, Apple (yes, I'm including Apple), and others should be kept a mile away from that data. Google and Facebook particularly are ill-equipped, from a moral and focus perspective, to have anything to do with this industry.

    • ceejayoz 1988 days ago
      > Then docs started getting together and adding patients from multiple hospitals into larger and larger studies.

      It's important to note that these studies are subject to pretty rigorous review by IRBs (https://en.wikipedia.org/wiki/Institutional_review_board).