Airbnb also uses your personal information for direct marketing. You can opt out by sending an email to: email@example.com
> Where permissible according to applicable law we may use certain limited personal information about you, such as your email address, to hash it and to share it with social media platforms, such as Facebook or Google, to generate leads, drive traffic to our websites or otherwise promote our products and services or the Airbnb Platform.
> Please note that you may, at any time ask Airbnb to cease processing your data for these direct marketing purposes by sending an e-mail to firstname.lastname@example.org.
> In some jurisdictions, applicable law may entitle you to require Airbnb and Airbnb Payments not to process your personal information for certain specific purposes (including profiling) where such processing is based on legitimate interest.
> Where your personal information is processed for direct marketing purposes, you may, at any time ask Airbnb to cease processing your data for these direct marketing purposes by sending an e-mail to email@example.com.
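The "hashing" described in the policy language quoted above typically means a normalized SHA-256 digest of the address, which the ad platform matches against its own hashed user list. A minimal illustrative sketch of that general technique (the function name and normalization rules here are assumptions, not Airbnb's actual pipeline):

```python
import hashlib

def hash_email_for_ad_platform(email: str) -> str:
    """Illustrative sketch: normalize an email the way ad platforms'
    custom-audience uploads commonly require (trim whitespace,
    lowercase), then return the SHA-256 hex digest. This is an
    assumption about the general technique, not Airbnb's real code."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# " User@Example.com " and "user@example.com" hash identically,
# so the platform can match the two without seeing a raw address.
```

The platform only ever receives the digest; matching works because both sides apply the same normalization before hashing.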
Immediately (about 30 minutes) after I got the first e-mail saying that my account was deleted, I replied to the operator and told her I just wanted to opt out, not to have my account deleted. She replied quickly (another 30 minutes), said that she was sorry for the misunderstanding, and reinstated my account.
Apparently my opt-out request got redirected to another operator who should have gotten in contact with me, but 17 days later I am still waiting.
"So, to get these perceptions, we’ll share Airbnb profile photos and the first names associated with them with an independent partner that’s not part of Airbnb."
I have to imagine they considered making this opt-in and realized the vast majority of users wouldn't take action to increase sharing of their personal data. Despite the good cause, it is incredibly shady that they would implement this as an opt-out.
IANAL, but I would think that this would be a flagrant violation of privacy laws in many regions outside the US (this policy is limited to US hosts and guests).
Edit: I also did not receive an email, but found the toggle active in my account preferences after reading this post.
Whatever good intention might be behind this, it's really off-putting to me. They send your photo and name to a third party who is tasked with evaluating those to determine your race.
"will look at these photos and first names and indicate their perceptions—or, what race they think the information we shared suggests. They’ll share these perceptions with a specialized team at Airbnb exclusively for anti-discrimination work"
Presumably that'll be correlated to booking data to identify rejections. I can't think of any other use case.
Gotta say, if someone truly doesn't want me to stay somewhere because of my race, I'll take that rejection rather than force myself into a shitty situation.
I absolutely do NOT want that. Then hosts will learn to conceal their racism (unless they wanted to be deplatformed, in which case why wouldn’t they deplatform themselves?) and I might accidentally book with them.
I believe most attempts to counter racism involve shaming racists and therefore result in the concealment of racism rather than lack of racism, unfortunately. Just as teaching a kid to stop picking his nose or similar activities results in concealment more often than stoppage.
An opt-in strategy has some challenges here, as the data may become biased toward a particular cohort of users and you wouldn’t be able to tell whether rejections are due to discrimination or the natural course of business. An opt-out strategy allows the data to be as representative of the platform as possible.
I think they're justified: they clearly flagged this (sent out emails), created it with some of the leading privacy researchers, provided an explicit opt-out, and did it in the service of revealing racism and discrimination on the platform. I got an email, read it, decided I didn't want to be involved, and opted out. Glad they were transparent and gave advance notice; I'm not sure what else it is reasonable to expect.
Not everybody will see that email, though, and they are probably relying on that. I am sure that, given the choice to opt in, most users would not (regardless of the cause) - similar to Apple's new anti-tracking in iOS 14.
Few people just opt in to having their rights violated
On the very last page of the process it said Airbnb might need to contact me to prove I made the request. I don't trust them either, but it's certainly better than the "deactivation" they were pushing.
Most companies don't delete accounts when you request deletion; they only disable your access to the account. So, before cancelling, I prefer to go into the account, remove as much info as possible, and overwrite all the remaining info before requesting deletion.
Airbnb wouldn't even let me access my account until I accepted their new T&C, and upon declining they just moved me to the account-cancellation flow with no chance to make any changes to existing info.
Even in Europe (where the GDPR is present) you can't always insist that they delete the data. If there is a legitimate interest, like a legal data retention requirement for transaction billing records, you can't force the covered data to be deleted while that interest remains valid. Also other data can often be acceptably anonymized by removing the linkage with your personal identifiers, rather than truly wiped from existence.
But yeah, a lot of companies don't even do what the GDPR actually requires, let alone what people think it does.
Yes, I deleted my Airbnb account a few months ago and had to go through their confirmation process. I had never booked anything, never provided any ID, etc., and yet I had to provide a copy of my ID to prove that I'm the owner.
Did you really delete your account because they want to measure whether people are being discriminated against (with an opt out available)? That seems pretty awful given all the racism that people are fighting hard to stop.
As far as the pandemic goes, Airbnb has a lot less turnover than hotels and there are mandated days between guests to ensure the virus cannot live on any surfaces. That’s much safer than you’re going to get at a hotel with a lobby that has hundreds of guests coming in and out every day. Half the things you pay extra for at a hotel are now useless (gym, room service) whereas the benefits of an Airbnb like having a real kitchen and more private space are worth more now.
> Did you really delete your account because they want to measure whether people are being discriminated against (with an opt out available)?
Man, this comment and some similar sibling comments betray this mentality where racism is the ultimate evil and, thus, anything done in the name of fighting racism is justified.
In particular, multiple people seem to think you're not allowed to criticise Airbnb's approach to handling private data _because Airbnb purportedly is using it to somehow fight discrimination_. Nevermind that:
* There's no transparency about how this 3rd party will use the data.
* There's no data on how effective their approach will be.
* The default is opt-out, not opt-in.
* Airbnb has done ethically dubious things before.
Yet, according to some comments, if you don't let Airbnb use your data, you're supporting the racists/being tone deaf.
Deleting or opting out is fine if that's what you really want to do. But making a big public announcement that you deleted your account because they started an optional study to try to stop discrimination is pretty awful. You're basically sending them (and everyone on this forum) the message that if they try to fight discrimination, they will lose users. It's a lose-lose for Airbnb: they either do nothing and people blame the platform for passive discrimination, or they try to improve it and then have to deal with people blaming them for being Big Brother and deleting their accounts in a fit of rage.
MIT just had to remove a major AI dataset that was used to teach hundreds of AIs. It had numerous examples of black people and monkeys labeled with the N-word and has trained systematic bias into our algorithms. This type of bias cannot be fought by hiding from the problem. What Airbnb is doing is courageous and you are free to opt-out if you don’t like it, but taking a big public stand against an organization for actually trying to improve things for minorities in 2020 is pretty tone deaf.
Airbnb is a for-profit company with a controversial record, not a credible research institution. If they wanted to do this with no controversy, they would have a) had an institution send out the email to provide credibility, b) provided clear opt-ins, and c) not hidden behind a "We know your privacy is important" cliché instead of actually acting that way.
> but taking a big public stand against an organization for actually trying to improve things for minorities in 2020 is pretty tone deaf
From the email: "we’ll use first names and profile photos from hosts and guests to help us understand the perceived race someone might associate with them...We’ll use this information to help us understand when and where racial discrimination is happening on our platform."
So if I get rejected for trying to book a room, my brown face will be labeled "minority" and be used as a datapoint to help improve the company. I have every right to disagree with how Airbnb is going about this especially from a privacy perspective, and do so vocally.
Yep you have every right, you just come across as an entitled white dude who doesn’t give a fuck about the problems in this country. You’re still arguing it should be opt-in which misses the obvious point that something like this could never get enough opt-ins to work because people are lazy and aren’t even going to bother to read it.
Didn’t say you were. Said it’s how you come across. It’s sadly true that it’s become a meme especially amongst many white males to hate every attempt to identify or prevent discrimination. The standard playbook is to say it’s an invasion of privacy, it’s a liberal company being too PC, and let’s start some right-wing version of the site where we can all be “free” which usually means a bunch of skinhead wannabes.
I apologize for being rude, but to be honest it's pretty frustrating that people would be so selfish as to make a big stink about it instead of just opting out. Do you know how many times Facebook or LinkedIn has opted me into research and sharing that I never agreed to? No one even bats an eye most of the time. Now that the research is about discrimination instead of advertising profits, it's some big invasion of privacy that we need to delete our accounts over. Not to mention that Facebook and LinkedIn typically don't let you opt out at all.
That's fair, but also a strawman. I haven't seen anyone make a statement against the actual research they want to do; the issue is the opt-out approach, where a forced enrollment hides behind "share your data for the good cause".
I don’t think anyone would complain if it was implemented as an opt-in.
No one would complain if it was opt-in because only 0.1% of people would even notice it was happening and sign up. It’s not a straw man because one can’t study racial discrimination at scale by having a few random users opt-in to sharing their race. You either design an effective program or why bother to pretend you’re helping?
Also, it launched and the email was sent on 30 June. Who knows whether opting out after that has any effect, because it's not clear what "launch" means in this circumstance. They might have sent all the data over on the 30th.
I've recently come to the realization that, more often than not, use of the phrase "virtue signalling" is actually a really strong signal in itself.
Essentially a good chunk of people do care so little about discrimination, that it's difficult for them to believe that others actually even care about it.
In this case, on its face: to see a statement that Airbnb is collecting data to investigate if their platform is being used for discrimination - and then reach the conclusion that the most logical reason Airbnb is collecting data is not that they want to know if their platform is being used to discriminate, but that it's just a ruse; they don't care and really just want to collect facial data in order to look like they care about discrimination to gain some side benefit.
Reasoning your way to that as the most logical conclusion is pretty amazing. What it so frequently conveys is: I care so little about discrimination that I can't even imagine that Airbnb actually wants this data and cares about discrimination - there must be some other reason.
And again, I can't draw that conclusion about the statement above (because we've not discussed it), but in other discussions with people, that's surprisingly often what ends up being the underlying thinking.
> Essentially a good chunk of people do care so little about discrimination, that it's difficult for them to believe that others actually even care about it.
Some of us care somewhat. (For instance, I'm proud to have convinced my boss to hire a female IT technician from another country who was mopping our floors. It took some convincing, but he gave in; she got paid as much as us and got a career here in the West (I know because she later married a friend of mine). I've also tried elsewhere to help others into a career in IT - I'm not sure how far I succeeded in other cases, but in at least one case I gave away my well-paid but boring job to make sure another, less privileged man got a fighting chance. I visited him last year and he had used his opportunity well. Oh, and I helped my neighbor down the street to a better job and tried to help his girlfriend; in the end she got a good job through other channels, but I tried, and in a way that was appreciated, not in a creepy way.)
All that said: that doesn't mean I won't say virtue signalling when something like this happens.
Oh. And the irony of me virtue signalling so heavily to get my point through :-) At least my identity is only known to me and Dan, so I can't cash out on any stupid internet points I get.
It's important to remember that corporations don't have motivations, only the people in them do. In a corporation like this you could have one person who really cares about racism, one who doesn't but thinks this is good PR, another who doesn't but thinks it's a good excuse to collect the data so they can do other things with it and so on. In a company that size it's indeed likely that people with all of those motivations exist within the company.
And someone may rightly want to call out the people who are only doing it for the PR or the data collection, but we don't have visibility inside the company to know who they are in particular, so the blame goes to the abstraction of the corporation itself. Which it deserves if it hired those people even if it also hired some separate people with better motivations, because those people are still then going to capitalize on racism by marketing their claimed virtue to customers, or do bad things with the data.
Your comment also reflects a common type of “parrot” response — any critique of activities in this space gets the critic branded as “uncaring” or even worse like “racist.” Don’t think that AirBNB should send your photo to a third party to be racially described, and think that AirBNB couldn’t give a shit about you and is just doing this for PR points? You must not care about “white supremacy.”
Do you think it is possible to be suspicious of the motivations of faceless corporations driven by a profit motive without being accused of "caring little about discrimination" or being labelled a racist / misogynist / anti-LGBTQ?
At no point was anyone labeled a racist, misogynist or anti-LGBTQ. I actually do have concerns about Airbnb using an opt-out policy for this. However, my point wasn't about "how do I feel about Airbnb's change in policy". Ok, so what was it about:
A guy getting a job at an animal shelter because he thinks it'll help him get laid is virtue signaling. Someone adhering to a religious practice by fasting is virtue signaling. Someone changing their facebook profile pic to have a logo for some cause is virtue signaling. A company prompted by recent events wanting to investigate if there is discrimination on their platform isn't.
So a person wanting to promote a cause through their FB profile is "virtue signalling" even if they do it regardless of recent events (let's imagine someone putting up a "stop racism" banner there 5 years ago), but a company doing that when the public eye is on it is not "virtue signalling".
While I believe one can be genuine in either case, I find it more likely to be "virtue signalling" when it's a company doing something on a hot topic right now.
> So a person wanting to promote a cause through their FB profile is "virtue signalling" even if they do it regardless of recent events (let's imagine someone putting up a "stop racism" banner there 5 years ago), but a company doing that when the public eye is on it is not "virtue signalling".
From your response, and others in this thread I'm starting to believe that a lot of people that use the phrase 'virtue signaling' may not actually know what it means.
> “In this case, on its face: to see a statement that Airbnb is collecting data to investigate if their platform is being used for discrimination - and then reach the conclusion that the most logical reason Airbnb is collecting data is not that they want to know if their platform is being used to discriminate, but that it's just a ruse; they don't care and really just want to collect facial data in order to look like they care about discrimination to gain some side benefit.”
This is a good model of most human behavior though. Creating elaborate systems of social norms then covertly coordinating on how to evade them or politically argue about how some group gets punished for violating them, while another group gets to violate the norms but reap benefits.
The economist Robin Hanson has developed a pretty comprehensive social science body of theory under the name “homo hypocritus” for this and uses it to explain a lot of behaviors (see just four examples below).
It’s totally reasonable to disagree with Hanson’s theory, but I think you are going way too far to treat it almost like paranoia or conspiracy theories or something.
In fact I think given the way corporate scandals, dark patterns, rampant privacy violations, etc. are so egregiously common and often go unpunished, it should be the norm to assume harmful intent from corporations unless proven otherwise. Classic studies like Moral Mazes would seem to confirm this.
No, the problem is that the same data can be used at least as easily to discriminate, and people who care about that don't like it.
Do you really want to build a race database?
Do you want Airbnb making determinations about the intention of your actions based on a secret race database?
The timing isn't accidental; they even state as much in the announcement. This is driven by current events. I wonder if anyone would give a pass to 23andme if they started oversharing DNA data "to battle covid."
I think it's more likely that Airbnb is indeed virtue signaling. Pretty much every corporation is virtue signaling these days, so I don't see what conclusion you can draw from the use of that phrase itself. It's an accurate descriptor.
A user can care about discrimination and still object to policies that overtly erode user privacy at the same time as they show solidarity with a certain movement.
Ultimately it's my name and my likeness, and I don't want it to be shared without my knowledge and consent. A company can't say "but I'm fighting discrimination!" and expect me to be cool with the violation of my privacy.
Not to mention that the data will be used to classify users based on race, defining people by their appearance rather than their actions, beliefs, and character. There is no guarantee that such data will never be used against the very same users it purports to protect.
'Virtue signalling' is not about whether someone 'cares' about racism; it's about whether their actions amount to anything material or relevant.
Almost everyone actually 'cares' about racism, at least in classical, crude forms. If someone calls a black person the 'n-word' - almost everyone 'cares' enough to know this is wrong.
But there are any number of people who tweet this, say that, or even take some action that is utterly irrelevant to materially advancing the cause, while at the same time embellishing their own position or status.
'Personal branding' is the full-time job of every actor, politician and public figure - that's 95% of their job. The statements they make are mostly about ingratiating themselves with an audience.
A good example is Gavin Newsom's wife, who has declared that she will be the 'First Partner' - and not the 'First Lady' - of California, because the term 'Lady' is 'exclusionary'.
I don't doubt there's some sincerity to her statement; she probably does 'care' about the issue on some level - but I doubt deeply that her actions matter, and this is mostly an opportunity for her to make a 'public statement' that the press will like, hop on, and propagate, which endows her with 'progressive credibility' among possible voters. I significantly doubt she would ever do such a thing outside the political lens. To boot, it's also counterproductive, because the term 'Wife' or 'First Lady' is in no way exclusionary. It's gendered, yes, but no more so than the gendered pronouns 'he' and 'she'. It's 'invented equality' that serves no purpose other than a nice bit of PR. That is 'virtue signalling'.
A very excessive, fascist example of 'virtue signalling' came from the Mayor of Oakland just this week. She initiated an FBI 'hate crime' investigation into some ropes hanging from a tree in a park. Those ropes were there to hold swings for kids - ironically, put there by someone in the Black community. Nobody had complained or thought this kids' equipment was symbolic of anything; there was no public intrigue - but the Mayor took it upon herself to make a big fuss about it. To make the situation scarier and more Orwellian, her public statement was that 'intentions don't matter'. Read that again and consider the consequences. A swing set in a park is, in her view, construed as a 'hate crime' - irrespective of the fact that it's merely a swing set, not indicative of anything, placed there by a black man, about which nobody had any complaints. This is the mayor of a major US city initiating FBI investigations, which could destroy people's lives, for absolutely no reason.
The reason this is 'virtue signalling' is that nobody outside of political theatre cares and the issue is irrelevant. Why was it only the mayor who had to invent non-existent hate crimes? If there were any legitimacy at all to the situation, others would clearly be concerned. But there was no concern. The excessive, fascist reaction by the mayor is therefore vapid; it's utterly political - an attempt to burnish her credibility as a 'force against racism', even at the cost of using the power and violence of the state over common sense. But just as Trump supporters wouldn't condemn him 'if he shot someone', her supporters won't condemn her for her ridiculous display either.
A less extreme but more relevant example would be Alexandria Ocasio-Cortez's total rejection of Amazon's bid for a major HQ in NYC. Both the Democratic Governor and the 'far left' Democratic Mayor were 'extremely' in favour of the opportunity, with Amazon set to bring in thousands of very good, high-paying jobs, along with a great deal of incremental surplus. They were to receive the same support from the city/state that any other company gets. AOC rejected the offer because of Amazon's unwillingness to work with her directly by making $$$ investments in schools and other things. Ironically, even the majority of people of colour in her district supported Amazon's bid, which puts her at odds with basically everyone that matters. Amazon is not some 'evil corporate devil' - they're a big, successful company, and the elevated taxation and incidental business from them is probably the closest thing any city could get to 'organic development', which is to say 'things working well as they should' without having to resort to arbitrary distributions of wealth etc. AOC knows this. In any other context, I believe she would probably have supported Amazon's bid - but in a political context, she defines herself as an antagonist of 'evil corporations'. She can't 'support Amazon'; that would be akin to making a deal with the Devil, at least in her popular, bombastic personal branding. From a marketing perspective, AOC is exactly on point to 'fight Amazon', even if in reality it's possibly the best thing that could ever happen for her district (I understand it's not all roses, but overall it would be good). So this is a toxic example of 'virtue signalling' wherein we don't doubt the ultimate sincerity of the individual, but where their motives are inconsistent with their actions, which are actually detrimental.
Finally - though we don't use it in this context too often, 'Virtue Signalling' could be equally applied to other things such as flag waving by fools who actually don't materially care about the nation, or those who espouse Christianity but really are the furthest thing from it. Donald Trump holding up a Bible (upside down no less) for a photo-op a few weeks ago is a repulsive form of 'Virtue Signalling' because, though surely he thinks he cares about America and probably thinks he's a 'good Christian' - the man probably hasn't been in a Church in 20 years and generally has nothing to do with it.
Every corporation is in the business of perception. Airbnb has a very expensive PR agency that will oversee its public announcements. Every politician and celebrity has power because they have carefully shaped their public image, not because they have necessarily 'done anything'. In many cases they are 'talented', or 'are businesses with good intentions', or 'have accomplishments', but it doesn't matter - if they are going to be playing the message-spinning game, we should absolutely be cynical. If they want something from you - sales, popularity for their career, votes - they are marketing to you.
I got this email and it was not exactly easy to turn it off considering I hadn't logged in to my Airbnb account in a while. Perhaps a better solution would have been to provide a click within the email that would allow you to turn it off from there.
Yes. There is a link in the email that leads you to the settings page, but I hadn't logged in in a while, so after entering my phone number along with the code they sent me, I also had to enter a code they sent to my email. My point being that clicking the link in the email should automatically turn off the "feature"; I shouldn't have to log in.
It was confirmed they were doing it in 2018. If you used social login with them at all, they have all that info, and they're supposed to delete it (according to Facebook's AUP), but that's effectively impossible to verify.
Why do hosts even get to reject people or see their picture or even name before making the decision? I don't see any legitimate use, only illegitimate use like denying based on race. Let hosts set a minimum stay length and such if they want but that should be it.
Perhaps so they can check your social media/criminal history. Someone might not want to rent to someone who has a history of criminality or posts lots of pictures of them partying on their social media. Not saying it's the right thing to do, but one of the things I first thought of. Also it might add more personality to the site and make it not as 'sketchy'. Just my opinion
If I see (in the photo/name) someone who has sexually assaulted me before, I'd like to be able to deny them from coming over to my house. Discrimination or not, people need to be allowed to deny people they would be uncomfortable with coming to their house. This is especially true for female hosts. My life > someone's hurt feelings that they don't get to Airbnb at my place
TLDR is that the information is stripped of Airbnb identifying information and asymmetrically encrypted before perceived race is determined; noise is then added when the data is returned to Airbnb, to prevent identifying individuals. This data is only used for identifying acceptance rate disparity.
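The "noise" step is in the spirit of differential privacy: aggregate counts get random Laplace noise so that no individual's presence in the data can be inferred. A minimal sketch of the idea (the function names and epsilon value are illustrative assumptions, not parameters from Airbnb's published method):

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two independent Exp(1) draws, multiplied by
    # `scale`, is a sample from the Laplace(0, scale) distribution.
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    # A counting query changes by at most 1 when one person is added
    # or removed (sensitivity 1), so Laplace noise with scale =
    # 1/epsilon gives epsilon-differential privacy for the count.
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means more noise and stronger privacy; released counts remain useful when they are large, because the noise is small relative to the count itself.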
> This data is only used for identifying acceptance rate disparity.
Isn't that going to make it basically impossible to investigate false positives? This is already a huge problem with racial disparity numbers in general. If you don't account for things like income or education level you see huge racial disparities everywhere, but if you do they either get a lot smaller or go away entirely. It would be naive not to expect similar factors affecting housing determinations.
You also have the problem that the average guest is going to stay for a few days, so even a listing that stays fully booked is going to have maybe a hundred guests a year. ~12% of the population is black, so that's about 12 people. How are they expecting to draw any meaningful conclusions from a sample size that small?
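The sample-size worry can be made concrete with the standard error of a binomial proportion. A quick sketch (the 80% acceptance rate is an invented illustrative number, not real data):

```python
import math

def acceptance_rate_stderr(p: float, n: int) -> float:
    # Standard error of an acceptance rate p estimated from n booking
    # requests, under the usual binomial approximation.
    return math.sqrt(p * (1.0 - p) / n)

# With ~12 guests from one group and a true 80% acceptance rate, one
# standard error is about 11.5 percentage points -- far wider than any
# plausible per-listing discrimination gap one would hope to detect.
se = acceptance_rate_stderr(0.8, 12)
```

At the per-listing level the noise swamps the signal, so disparities like this can realistically only be measured across the whole platform, where n is large.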
This is to measure the acceptance rate gap overall, not the gap for a specific host/guest/experience. Then, after changing the product, you can measure the gap post facto and see if you actually did anything. You can't change what you can't measure.
But then what do you do with that if you don't understand the causes? It could be a systemic bias in your algorithm. It could be that the apparent disparity is just confounders you haven't accounted for. It could be that some of the hosts are overtly racist. It could be any combination of any of these.
You can just A/B test it with random changes until it goes away, but that doesn't mean you solved the problem - only that you made the metric a target.
If the problem is that some of the hosts are overtly racist, you could make the disparity go away by creating an algorithmic bias the other way, but that's only increasing the overall unfairness. The racists are still there harming the people they harm, then you introduce an additional harm to some entirely different innocent people in unrelated transactions. That doesn't help the people originally being harmed because the undue beneficiaries of your changes are completely different people who just happen to be the same race as the original victims, which makes the numbers balance even though overall unfairness has gone up rather than down. The equivalent of "solving" cops killing disproportionately many innocent black people by having the cops kill more innocent white people.
Meanwhile if the problem was algorithmic bias to begin with then you still have to understand the specific means by which the bias operates, otherwise you're susceptible to doing the same thing. Your algorithm was improperly disadvantaging Chris to the benefit of Chaz, you modify it to additionally improperly disadvantage Anna to the benefit of Alicia, and now your numbers balance because Chaz and Anna are the same race and the harms cancel out for your metric, even though they don't cancel out in real life for the people affected because they're different people.
And if the problem was that you weren't correctly accounting for confounders then there wasn't a real racial disparity there to begin with, and by making the un-adjusted numbers balance you created one.
You have to actually understand the causes and mechanisms before you can devise a solution. Just having aggregate statistics doesn't do it.
I'm inclined to think that it actually makes it worse, because then people only care about the statistics. But the statistics can show a disparity when everything is fine because it's just confounders, and the statistics can show balance when everything is not fine when you're making multiple errors in different directions that sum to zero in aggregate but not for the people affected.
Plus they essentially admit that anonymizing the data makes it less useful and their claimed solution is to use a larger sample size. Which is like offering "buy more fuel" as a solution to poor fuel economy. It's no solution to the efficiency loss (at any given sample size it's still worse) and you may not always be able to get a larger sample size.
None of which addresses the problem I identified anyway, which is related to identifying the cause of the disparity once one is discovered, which is already almost intractably hard even without anonymized data.
The much better solution is to identify specific instances of discrimination and address them (and the mechanism of discrimination they represent) regardless of what the statistics say because, again, aggregate statistics can both say that something is wrong when it's not and that nothing is wrong when it is, and the only way to tell is by looking at the individual cases.
Your post is disingenuous. I got the same email and it very clearly lays out how you can opt out in the settings. This was not hidden in some obscure T&C. You can click the opt-out link in the email directly and opt out.
a) Saying this program was to fight discrimination, and
b) Saying they weren't collecting new data or doing anything nefarious.
What it left out was a clear explanation of what they _were_ doing. I had to parse the text carefully and use my own familiarity with the industry to understand that they were going to determine my race based on my photo/name.
Without a clear explanation of the new data use, you can't expect people to make an informed decision about opting out.
Here's the email copy. It very clearly states exactly what this is doing. Not sure what text you need to parse here:
> We’ll only use information you already share
> This project will address discrimination that's based on perception—so we’ll use first names and profile photos from hosts and guests to help us understand the perceived race someone might associate with them.
> We’ll use it to uncover patterns of discrimination
> We’ll use this information to help us understand when and where racial discrimination is happening on our platform. Any insights will be used to help develop new features and policies that create a more equitable experience for everyone.
> Information won’t be tied to your specific account
> We know your privacy is important, so we analyze trends in bulk and Airbnb won’t associate perceived race information with your account. We won’t use this information in marketing or advertising, and it will only be used for anti-discrimination work.
> We consulted with and solicited input from leading civil rights and privacy organizations to guide us
> We know how delicate this work is—so we developed this work with support and input from leading civil rights organizations like Color Of Change and Upturn, along with privacy organizations like Center for Democracy & Technology, to make sure our approach is both thoughtful and respectful of your privacy.