The credit system in the US is already a pretty horrific version of this, and nobody really seems to care, even though it’s been like this for decades. And it’s not even just used for credit anymore: renting an apartment now usually requires a credit check.
I had thought Credit Karma and similar services were at least adding transparency to the industry, but I was recently in for a rude awakening when trying to get preapproval for our first mortgage. It turns out, the whole idea of a credit score is kind of a lie. Your credit report can be pulled by creditors, and creditors can interpret it in different ways as they see fit. Services like Credit Karma just make up their own score based on the report, which can be drastically different from what the creditor decides it is. And credit pulled for different uses somehow ends up with different scores; e.g., for a credit card approval your score will often show up higher than for a mortgage. You can check your credit report for free, as required by law, but nobody is required to tell you your credit score. You can also basically just pay to remove many things from your credit report.
It’s insane, and the fact that this fairly old, low-tech institution has never been successfully regulated or made fair to the average person bodes very poorly, IMO, for regulation of privacy and fairness in future technology.
> You can check your credit report for free, as required by law, but nobody is required to tell you your credit score. You can also basically just pay to remove many things from your credit report.
There is no “the credit score”. Any lender is free to use whatever scoring algorithm they like. And they do. Some lenders may choose to use certain scores for certain things, but underwriting has no obligation to use one specific credit score.
Therefore there is no reason for people to care about their credit score, whether it be from FICO or credit karma or whoever.
The only thing you should do is make sure the information on the credit report is accurate. And what is your source for claiming you can pay to get things removed from your credit report? It doesn’t even make sense, as the report just shows the status of your lines of credit. You can’t just make a line of credit disappear; no credit reporting agency or financial institution is going to want to commit fraud for any amount of money an average person might offer.
Credit reporting agencies and financial institutions commit fraud all the time. They call it identity theft so that you think it is your fault instead of their fault. No one would care about a bank giving a loan to a person that was impersonating them except for the fact that the bank (credit card, mattress store, car dealership, etc) commits fraud and reports to the credit agency that you have defaulted on your loan when you have not. Then the credit reporting agency also commits fraud when they sell this false and damaging information to others.
This is the big lie. That "identity theft" should be the problem of the person who was impersonated. Pass laws that heavily fine entities that give false information to credit bureaus and fine credit bureaus who give out false information. "Identity theft" would no longer be something that people would worry about.
>Credit reporting agencies and financial institutions commit fraud all the time. They call it identity theft so that you think it is your fault instead of their fault. No one would care about a bank giving a loan to a person that was impersonating them except for the fact that the bank (credit card, mattress store, car dealership, etc) commits fraud and reports to the credit agency that you have defaulted on your loan when you have not.
That's literally not what fraud is.
>In law, fraud is intentional deception to secure unfair or unlawful gain, or to deprive a victim of a legal right
If some guy walks into a bank and claims he's you, and the bank believes it, the bank isn't gaining anything. If anything, they lost money. Wrongly reporting the default to the CRAs doesn't benefit them either; it's not like the CRAs compensate the banks based on how many default reports they send in. Finally, it's missing the "intentional" part. Lax security practices are negligence at best.
Maybe a better way of saying this is that the bank is committing libel when it reports to a credit bureau that you defaulted on a loan when you did not. They say you did something you did not do, and their declaring that lie does you damage. The important thing is that "identity theft" should be a problem for the entity that made the loan to a criminal, not for the person who was impersonated. Call it "bank libel" when someone's credit score is ruined by a bank that gives a loan to the wrong person, and you're closer to describing the truth of the situation.
Edit: mindslight mentions in this thread that: "The 'Fair' Credit Reporting Act explicitly immunizes the surveillance bureaus against the tort of libel". Amazing.
Yeah, that's what their books say, but that doesn't necessarily match reality. In reality they gave $1000 in cash to someone, and their expected return on that is a fraction of it. Same logic as if you paid $800 for bonds with a face value of $1000, but, unknown to you, the bonds are actually junk bonds with an expected value (factoring in repayment and default) of $500. In that case your paper gains are $200, but in reality you actually lost $300.
>Can't they sell the defaulted loan to collections?
They gave the bad guy a loan for $1000. They're out $1000. They sell the loan to collections for $300; they're still out $700.
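A quick sketch of that arithmetic (all dollar figures are the hypothetical ones from this thread, not real data):

```python
# Hypothetical numbers from the thread: the bank's realized loss on a
# fraudulent loan, even after selling the defaulted debt to collections.
def realized_loss(principal, recovery):
    """Cash the bank handed out, minus whatever it later recovers."""
    return principal - recovery

# $1000 handed to an impostor, defaulted loan sold to collections for $300:
print(realized_loss(1000, 300))  # -> 700

# Same logic as the junk-bond example: pay $800 for bonds actually worth $500.
print(realized_loss(800, 500))   # -> 300
```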
>Systemic, known, and long-standing negligence speaks to intention.
Security exists on a spectrum, and there are trade-offs to be made. Clearly the banks don't have an interest in selling fraudulent loans, and putting up too many barriers to authentication also has costs.
Does... interest stop being a factor? They gave an x-year loan for $1000 at x% APR (was it subprime?). Also, penalties. Collections can go after you for the full amount. They buy it from the bank for the original $1000+.
Or, the loan hasn't defaulted yet. The bank suspects it will. They load it into a bundle, get it highly-rated, sell the bundle. They make a profit. A whole bunch of other people get hosed.
>Clearly the banks don't have an interest in selling fraudulent loans
This is the wrong decade to be making that statement.
That's not fraud. The tort you're looking for is libel. The "Fair" Credit Reporting Act explicitly immunizes the surveillance bureaus against the tort of libel. This is actually another instance of regulatory capture.
Wow, I did not know that the surveillance bureaus are exempt from libel. What an incredible law against the interests of the people. That explains why I've never heard about class action suits against them. I'll have to read up on it. Thanks for the info.
"Pay for delete" in collections reporting has gone from a rarity to most RMA agencies offering it directly in their initial letter and on their websites. None of the credit bureaus has publicly pushed back against it in the last 3 years of wider adoption.
In my experience, I've been able to settle for less than the full balance owed (the collection agencies buy the debts in bundles for pennies on the dollar) and have all negative tradelines removed (original charge-off and collection account) under new (2018-ish) publicly listed policies of doing so.
This approach is mainly used by third-party debt buyers, like Encore/Midland/Cavalry. And doesn't apply to public judgments, if they decide to pursue that route. Thankfully, I have managed to avoid that in my credit recovery journey.
First-party (original) creditors will laugh or tell you it isn't possible if you ask for a PFD settlement, but there are documented instances of it happening as early as 2010, from my research in "making the case" before these policies became more widespread.
I don't know if this is still the case, but banks at one time would deny you based on poor credit history also. And so, you get to stay in the "unbanked" category which is just another factor which may keep you in poverty.
The FinTech industry has since grown to the point that being unbanked doesn't need to be a thing anymore. It's easy to pick up a prepaid debit card which you can receive ACH payments on.
>> And it’s not even just used for credit anymore, renting an apartment now usually requires a credit check.
Renting an apartment is credit; I'm not sure why you view it as something other than credit.
The owner of the property is loaning (credit) the use of their property to you for X amount of time in exchange for N amount of dollars, payable over monthly installments
How is that not credit?
A better example is employers using it in hiring decisions, which does happen as well; using an apartment as an example of a bad use of credit is, I think, misguided.
>> It turns out, the whole idea of a credit score is kind of a lie. Your credit report can be pulled by creditors, and creditors can interpret it in different ways as they see fit.
Yes the individual or organization that is loaning you a large amount of money can choose how they use the credit report they obtain for you. Again here I am not sure why this is a revelation or a bad thing.
There are also many different and competing credit scores; no person has "a credit score." There are at least 5, if not more, credit scores out there, and different institutions will use them in different ways, FICO being the most common but not the only one.
>You can check your credit report for free, as required by law, but nobody is required to tell you your credit score. You can also basically just pay to remove many things from your credit report.
Yeah, I believe these institutions should also have to release your personal score with your annual free report; Congress should fix that omission in the law.
>Renting an apartment is credit, I am not sure why you view it has something other than credit?
>The owner of the property is loaning (credit) the use of their property to you for X amount of time in exchange for N amount of dollars, payable over monthly installments
Not really. In most places you have to pay first and last month's rent, which means you're paying for the service before it's rendered. Therefore they're not extending credit, as you have already paid for the service in advance.
1. I would like you to define "most places," because for the first 35 years of my life I was a renter, and in that time I only once had to pay "first and last." Most of the time it was the first month's rent and a security deposit (often something small like $100 or $200), with no "last month's rent," so around here it was not "most places."
2. Even if you use that as the metric, it is more like a down payment than "services paid in advance," unless you are on some kind of month-to-month with no lease. Every lease I have ever signed shows the TOTAL of all payments, paid in 12 installments just like a loan; if in month 6 you just move out, you still owe the other 6 payments (less any down payment, aka the last month you prepaid).
The owner is absolutely extending you credit for the use of the property. If you sign a lease for an apartment at $1,500 a month for 12 months, you are agreeing to pay $18,000 to the owner for the use of the property, and you have both agreed that you'll pay it over 12 equal payments. In your hypothetical, the owner has asked for a $3,000 down payment on that loan and in exchange adjusted the terms to 10 equal payments.
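Running the numbers in that framing (a sketch; the rent, term, and upfront amounts are the hypothetical ones above):

```python
# Hypothetical lease from the comment above: $1,500/month for 12 months,
# with first and last month paid at signing.
monthly_rent = 1_500
term_months = 12

total_obligation = monthly_rent * term_months       # owed the moment you sign
upfront = 2 * monthly_rent                          # first + last month
remaining_payments = term_months - 2                # installments still due
balance_after_signing = total_obligation - upfront

print(total_obligation, upfront, remaining_payments, balance_after_signing)
# -> 18000 3000 10 15000
```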
I'm not sure I follow here. The transaction might be structured as the landlord giving you 12 months of housing service at the time of signing of the contract (or when the lease starts) and you paying him in installments of 12 payments (with the first one or two upfront), but you don't "take delivery" of the service all at once. It's given to you on a continuous basis. Therefore, at any moment during the 12 months, you don't owe the landlord anything.
The second you sign the lease you owe the landlord the full amount. It is a debt you owe.
It does not matter whether you realize the use of the property or not. For example, if you signed a lease on an apartment in January and then, due to a global pandemic, could not move in until July (through no fault of the landlord), you would still owe the rent for those 6 months even though you did not occupy the unit.
Renting is not credit, because you don’t get the thing all upfront and then pay for it later. In fact, if anything, the renter is giving the landlord up to one month of rent as credit, because rent is usually due on the first of the month in order to be able to live there that month. But that’s just an artifact of the discretization of the payment; as far as the arrangement goes, there is an ongoing equal trade of values between both parties.
As far as banks being able to make their own credit decisions, that’s all fine and good, but then they pretend that’s not what’s happening. If I ask a bank why they denied my loan, they won’t tell me about the specific activities that prove me uncreditworthy (sometimes I can press to ask what they can see on my report, and if they’re feeling nice they may tell me, but they don’t have to). The fact that there’s no party you can ask to evaluate ahead of time what the result will be, or to see what I can do to get myself above the threshold, means the individual is always at a disadvantage of information asymmetry, and that is what makes the system bogus. And then on top of that, just the act of seeing if you qualify for the thing lowers your credit score further!
If I’ve been burned by this as a highly paid tech worker with just a couple of mistakes in my autopay settings in the past, I can’t even imagine how big a problem this is for people who have had actual hardships.
>> Renting is not credit, because you don’t get the thing all upfront and then pay for it later. In fact, if anything, the renter is giving the landlord up to one month of rent as credit, because rent is usually due on the first of the month in order to be able to live there that month
You have a fundamental misunderstanding of how a lease works. You are equating it to something like a prepaid phone bill, and that is not at all what a lease is.
You are obligated to pay the full amount of the lease; if you up and move in the middle, you still owe the landlord the full amount.
The property owner runs a credit check for the same reason a lender would: to judge whether you are a responsible person who will repay this obligation under the terms of the agreement. It's far closer to a loan than you seem to want to give it credit for.
Further, the use of credit scores and other background checks only becomes more important the harder it becomes to evict bad tenants.
An arrangement with "an ongoing equal trade of values between both parties" would be terminable by either party the second that value proposition changes. This is not the case with rental property, where the interaction is governed not only by the terms of the lease but by layers of federal and local laws.
>If I ask a bank why they denied my loan, they won’t tell me about the specific activities I’ve done that prove me un creditworthy
If you scroll to my other comments I advocate for changing that, I am a big advocate of personal data ownership and believe any person should have the right at any time to request all data any company collected about them.
>> (sometimes I can press to ask what they can see on my report and if they’re feeling nice they may tell me, but they don’t have to).
You have the right to get a free credit report from every credit agency every year; that is what they would see.
>The fact that there’s no party you can ask to just evaluate ahead of time what the result will be, or to see what i can do to get myself above the threshold, so the individual is always at a disadvantage of information asymmetry, is what makes the system bogus.
Yes and no. Nothing in life is a 100% guarantee, but there are several ways to get a good fact-based analysis of your general creditworthiness. Can it predict whether a given institution will grant you a loan? No, but it can predict whether you have a good chance of some institution giving you a loan.
The lower your general score, obviously, the less reliable these tools will be. If you have a FICO of 810 plus provable long-term income, then chances are anyone will lend to you; if you have a FICO of 620, it becomes more of a crapshoot and will depend on many other factors than just your credit score. Similarly, if you have a high FICO score but unreliable income (self-employed), it also becomes more of a crapshoot.
>>If I’ve been burned by this as a high paid tech worker with just a couple of mistakes in my autopay settings in the past
I hear stories like this often, but it is not my personal experience. Not saying it can't happen, but the companies I do business with do not instantly report you if your autopay fails...
You have to be 60+ days overdue before it shows up on the credit report... and with all the modern alerting and other tools, I find it hard to believe that a "simple autopay mistake" is what caused someone to become 60+ days delinquent on a payment.
It’s already happening. Georgia had an anti-opioid program that could score every pregnant woman in the state for her likelihood of becoming dependent on opioids during pregnancy. Babies born with an addiction cost the state about $1M each.
There were a few similar programs a few years ago when federal grants were made available. IIRC, they were modeled on the systems built to identify people vulnerable to becoming extremist terrorist types. Insurance companies have databases of far more lifestyle and behavioral data than people realize (everything from sports and politics to porn and gambling habits; anything for sale). I’m pretty sure Georgia mashed that against Medicaid claim data to build the model.
It’s a reason why we all need to watch the “pre-existing condition” debates closely. If insurers know that a 45-year-old divorced father of 3 who moves every 3 months is a smoker, gambler, and drinker, they don’t want to write a policy: the guy is a trainwreck with no support system.
As a healthy homeless former professional working as a grocery bagger now, I can feel management watching me for signs of drug abuse to explain my ratty clothes and spotty hygiene. Nobody can guess why I’m not working a good job but seem very smart and thorough (slander from a boss destroyed my reputation). I’m glad to have a job and sorry to show up looking trashy, but it’s the best I can do right now. I am just thankful that they don’t have any data or scores for me yet to help them see a drug addict or boozer, because I know that I’d raise a lot of flags there and it’d be enough to color the suspicion. Metrics are scary. I know my current scores will make things harder for years to come, starting with credit having bombed, right down to those corporate/insurer metrics whose systems I was in as a $200K/yr professional. Lots of explaining the data to do if I make it back to professional life.
I suspect many here think differently, but I don't think the problem lies in the knowledge; it lies in how people with power are currently using the knowledge. Restricting the knowledge is simply our current best mitigation given current conventions. Another world might use that knowledge to better support you and others who have seen hard times, restore what sounds like a lack of justice, and help you find an environment where you could thrive, and perhaps even that boss of yours [edit: so that they could thrive but also have reduced negative impact].
In other words, rather than focusing on mitigating risk a sufficiently high quality system could help us maximize our lives. Unfortunately, the probability of something so pro-social being the outcome seems low.
The biggest problem today, be it credit scoring, social media censorship, or anything else using these vast databases of personal info, is the complete lack of transparency.
Denied a loan, denied a job, kicked off a platform: in none of these situations is the company required to justify its actions or be transparent about the policies and processes used to reach that conclusion.
This black box leaves people feeling powerless and out of control because they are.
One way to combat that is stronger data ownership laws, giving people the ability to get ALL information a company has collected about them.
So, for example, if you are denied a home loan, you should be able to request every single scrap of info that loan company collected about you (including any and all credit scores) and used to make that determination.
> The biggest problem today ... is the complete lack of transparency
While that's the biggest problem right now, transparency by itself isn't useful. I have transparency into my bills but I don't have the ability to change them. Negotiating power is being taken away from consumers. When's the last time you've seen anything which didn't have some form of liability limitation clause? When's the last time you've been able to negotiate that liability limitation?
"Fail fast and opaquely, good luck iterating ya bozo." Good advice, guys, thanks. /s
>So for example if you are denied a home loan, you should be able to request every single scrap of info that loan company collected about you (including any and all credit scores) they used to make that determination
The reason why that doesn't happen is because they're almost certainly discriminating in ways they shouldn't be.
I mean, it's already standard practice (All states except CA) to use credit score in things like pricing your auto insurance.
It's only a tiny leap to incorporate a similar magic number some company comes up with. Actually due to competition, if it correlates to risk, all companies will literally be forced to use the magic numbers or go into an adverse selection death spiral.
Unless there is regulation against it, like in CA. Not all regulation is bad.
I wonder at which point it becomes self-fulfilling: at which point do decisions based on data-driven pigeonholing actually lock people onto the paths "discovered" in the numbers?
E.g. if a young adult gets classified as "disorderly, drunk, unsuitable for reproduction, suitable only for low-skill work" based on their history of college partying, and then consequently denied work and social opportunities (as everyone doing background checks sees that summary), the prediction essentially becomes a sentence.
(The third season of Westworld, despite bad writing and even worse gunfights, was very good at bringing this point up.)
Read Weapons of Math Destruction by Cathy O'Neil. The book explores several ways that's already happening. Her main premise is that there's a feedback loop in many data-driven policies: you only get success results for the things you try, and you only try the things you already think are likely to succeed. As a result, algorithmic policies tend to reinforce the status quo.
Loan risk algorithms will favor people "similar to" those who have paid back loans before, a sample group biased towards people that banks have already loaned to before. As a result, a lot of the factors are biased towards "from a white upper-middle-class suburban background."
And recidivism estimators, which are used as jail sentencing guidelines in some places.
Screening algorithms for job resumes, and college applications.
Algorithms send police to where crimes are reported. Crimes are reported because the police are there to witness them. The area gets designated a high-crime area. Regular people are arrested more often, because regular activity is suspicious in a high-crime area, affecting their future prospects. The higher arrest rate is then used to justify the designation.
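A toy simulation of that loop (every number here is invented for illustration): two neighborhoods with identical true crime rates, where crime is only recorded where patrols already are, and next year's patrols follow this year's records.

```python
# Two neighborhoods with the SAME underlying crime rate.
true_rate = [0.1, 0.1]
# A small initial imbalance in where police patrol.
patrol_share = [0.6, 0.4]

for year in range(20):
    # Recorded crime = true crime * chance a patrol was there to record it.
    recorded = [r * s for r, s in zip(true_rate, patrol_share)]
    # Next year's patrols are allocated in proportion to recorded crime.
    total = sum(recorded)
    patrol_share = [c / total for c in recorded]

# Despite equal true rates, the data keeps "showing" neighborhood 0 as 50%
# more criminal, and the initial patrol imbalance never self-corrects.
print(patrol_share)  # stays at roughly [0.6, 0.4] forever
```

The point of the sketch is that the recorded data can never contradict the allocation that produced it, which is exactly the feedback loop described above.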
It's a continuous spectrum rather than a single point. But if I were to pick a single "point" where it became a self-fulfilling prophecy? 1994, due to the widespread passage of three-strikes laws.
Can't this be solved by randomly giving out the wrong prediction, and seeing how it turns out? eg. for 1% of applicants, pretend to give them 800+ credit score, then check the outcome compared to the "expected" score.
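That proposal is essentially the exploration trick from bandit algorithms. A minimal sketch (the threshold, exploration rate, and score range below are all made-up illustration values):

```python
import random

random.seed(42)  # deterministic for the example

EXPLORE_RATE = 0.01   # approve 1% of applicants regardless of score
THRESHOLD = 700       # hypothetical approval cutoff

def decide(score):
    """Approve by score, but randomly override for a small exploration slice."""
    if random.random() < EXPLORE_RATE:
        return "approved-exploration"
    return "approved" if score >= THRESHOLD else "denied"

applicants = [random.randint(300, 850) for _ in range(10_000)]
decisions = [decide(s) for s in applicants]

# The exploration slice later yields repayment outcomes for people the model
# would normally screen out, giving less biased training data.
explored = decisions.count("approved-exploration")
print(explored)  # roughly 1% of 10,000, i.e. about 100
```

The catch, of course, is that each exploratory approval has a real cost to the lender, which is one reason this is rarely done voluntarily.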
Yup, this is what "structural" social prejudice and discrimination are all about, once you strip away all the pointless and meaningless rhetoric that's somehow supposed to be "about" these issues. It's a self-perpetuating equilibrium of basic social conditions and superstructure (viz. discourse, supporting ideas, commonly-held worldviews etc.) that creates a nearly inescapable "trap" of invisible oppression.
Yes, this pattern of data-driven decisions about our lives is troubling. However, if we understand what these corporations are looking for, it is easy to exploit them for fun and profit. If you get a decent credit score, say hello to multiple $500 credit card opening balances, free plane tickets, free hotels.
> Actually due to competition, if it correlates to risk, all companies will literally be forced to use the magic numbers or go into an adverse selection death spiral.
To expand on this: adverse selection is where a consumer has hidden information about the cost they can inflict on a provider. Usually this is talked about in terms of insurance (especially health insurance), but risk is risk and so the principles are the same.
My recollection of what economists predict is that there are two stable equilibria that achieve Pareto efficiency. The first is that discrimination (in the general yes/no sense) is completely forbidden and risks are totally pooled. The second is that complete discrimination is possible without limitation.
The worst outcomes are all found in attempted compromises. Not only do you have the costs of whatever tradeoff you chose between pooling and discrimination being less efficient than the Pareto points, but you also introduce a great deal of dead weight due to complex regulation and oversight, plus efforts made to evade regulation and oversight. Collectively we are worse off, even if individuals think otherwise.
I don't think allowing total discrimination is a viable option in this day and age. Which means banning it wholesale and encouraging the formation of universal risk pools.
Last I checked, data collection by Western-bloc companies is pretty centralized. Extensive records of your online behavior are being used (hence the money made in their collection and sale) by HR departments, insurance companies, and loan desks to determine whether you qualify for a top-tier salary for the same work, affordable coverage, or approval for a loan.
People are complex. If some piece of code is used to predict my employability based on my facial expressions, there's not much to add. It's already as bad as it gets: you can't say or write anything non-vanilla in public, because 10 years later some HR snooping app will analyze the sentiment of the post and outright reject you. And we're all kind of blindly walking into this, no questions asked.
Many private companies surveilling everyone without anyone's knowledge is terrible. Companies using these scores are really only targeting a group of people that allow the collection to happen.
My car insurance company would sure love to put a tracker in my car but there is no way in hell that is going to happen. I'm sure my driving style would qualify me for higher rates, yet I've never caused an accident or made a claim where I was at fault.
Yes, they would, but the flip side of their not being able to price people based on actual driving behavior is charging all men more, all single people more than married people, and poor people more based on credit score.
Plus, why single out insurance companies and ban them from creepily tracking you wherever you go, and not all the other companies that do that with your cell phone?
Also, to be pedantic, they don't want to put trackers in the car; that is expensive. They want to use your cell phone like all the other apps tracking you, or they want the car manufacturers to let them in on their data, since most modern cars can already track and broadcast location.
>made a claim where I was at fault.
That's a big caveat. There are plenty of accidents that are truly not one party's fault, but under the law, even being judged 49% at fault still counts as "not at fault." And even in cases of 0% fault as determined by the insurance adjusters, there's a big chance one still had contributing factors.
Also, there's a big chance not-at-fault claims will be taken into account when pricing you and will raise your rates. Just right??
On the other hand, not-at-fault accidents do correlate with higher risk, and it's easy to see why. For example, people who get rear-ended are not at fault, but following the car ahead too closely leads to braking harder, increasing the chance of being rear-ended. Following too closely also increases the chance of rear-ending the vehicle in front.
I was rear-ended in stop-and-go traffic. The same person had been behind me for miles. He was looking at his phone and ran into me; totally and completely not my fault. He refused to give me his insurance info, so I made a claim against my own insurance. I'm pretty sure the insurance company went after him and recovered its costs. How exactly should that influence my rates? This is exactly what I pay insurance for, a service already factored into the costs. I pay extra for the zip code I live in as it is.
I have no driving record and I've caused no accidents. If I put a tracking device in my car today my rates would go up because I accelerate quickly and brake hard when conditions allow. That is not right.
Your insurance company would prefer to cover people who notice the reckless driver then pull over to get out of their path.
You did not do that, so they raise your rates.
Edit: not that you were at fault or did anything wrong.
My point is that the insurance company's strongest incentive is to pay out as little as possible. This includes payments made where their client was not at fault. If it were up to them, everyone would pay all their premiums on time and never drive.
They recovered their costs. I already pay for the service. Tracking is just another excuse to raise rates, which, as you note, is the goal (bring money in, don't let it out). When rates have nothing to do with liability, is it really insurance?
To quote wikipedia:
>If the likelihood of an insured event is so high, or the cost of the event so large, that the resulting premium is large relative to the amount of protection offered, then it is not likely that the insurance will be purchased, even if on offer. Furthermore, as the accounting profession formally recognizes in financial accounting standards, the premium cannot be so large that there is not a reasonable chance of a significant loss to the insurer. If there is no such chance of loss, then the transaction may have the form of insurance, but not the substance (see the U.S. Financial Accounting Standards Board pronouncement number 113: "Accounting and Reporting for Reinsurance of Short-Duration and Long-Duration Contracts").
What you are talking about would no longer be considered insurance.
> the accounting profession formally recognizes in financial accounting standards, the premium cannot be so large that there is not a reasonable chance of a significant loss to the insurer
The vast majority of automobiles do not hold their value; they depreciate over time. If I've had my car for, say, 5 years and paid premiums for 5 years, I may have already paid in the full value of my car as it stands today, due to depreciation. I probably pay for one minor incident per year in premiums. Unless I total my car within the first 5 years, I'm not really getting insurance; what I'm really getting is a payment plan for a payout I will most likely never receive.
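A back-of-the-envelope version of that point (every figure below is made up for illustration): compare cumulative premiums to the car's depreciated value.

```python
# Hypothetical: a $20,000 car, $2,000/year in premiums, losing ~20% of its
# value per year.
purchase_price = 20_000
annual_premium = 2_000
depreciation_rate = 0.20

value = purchase_price
premiums_paid = 0
for year in range(5):
    value *= 1 - depreciation_rate
    premiums_paid += annual_premium

# After 5 years, the most the insurer could ever owe for replacing the car
# (~$6,554) is well below the $10,000 already paid in.
print(round(value), premiums_paid)  # -> 6554 10000
```

Note this only covers the "replace my car" portion of the policy; as the reply below points out, liability coverage changes the picture.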
The payout is the risk you transferred to the insurance company. You're not “supposed” to get all your premiums back, because then why would anyone start an insurance company? It's not a bank account; it's more like a lottery ticket. The insurer assumes the risk of covering an accident in exchange for the guaranteed income of your monthly premiums. In return you exchange a fixed regular payment for protection against an unlikely event. In order for this arrangement to work, the insurer must make a profit.
You might understand this but many people today do not, which is why they expect medical insurance to cover 100% probability events like an annual checkup with the primary care physician.
I'm not asking for my premiums back and don't expect it.
Just saying that after 5 years of premiums the insurance company would break even on that policy. If premiums are high enough, it's not really insurance. Not that everyone has the same insurance company, but generally if everyone has paid premiums that meet the value of the car after 5 years, then all of the insurance company's risk is with cars less than 5 years old. The insurance company will keep pushing for more intrusion into your life to 'give you the best rate' but really you'll just pay more. Like you said, if it wasn't profitable then no one would do it. Making apps and integrating with telematics doesn't come for free, and it's not coming out of the insurance company's bottom line.
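The break-even claim above is easy to sanity-check with a toy calculation. All the numbers here are assumptions for illustration (not from the thread): a $25,000 car depreciating roughly 15% per year, with $2,500/year in premiums.

```python
# Toy numbers, assumed for illustration: a $25,000 car depreciating
# ~15%/year, with $2,500/year in comprehensive premiums.
car_value = 25_000.0
annual_premium = 2_500.0
depreciation = 0.15

paid = 0.0
for year in range(1, 6):
    paid += annual_premium
    car_value *= 1 - depreciation
    print(f"year {year}: premiums paid ${paid:,.0f}, car worth ${car_value:,.0f}")

# By year 5, cumulative premiums ($12,500) exceed the car's depreciated
# value (~$11,093), so even a total-loss payout returns less than was paid in.
```

Under these made-up numbers, the crossover happens around year 5, which is the commenter's point: past that, the policy on the car itself is mostly a payment plan.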
I believe most insurance also covers the damage done by your vehicle to other property. So the upper limit on the total collected through premiums is somewhere between 0 and the cost of an expensive house.
It's actually more along the lines of whatever a person's life is worth. Also where I live the ~3 cars around me in traffic could each cost more than a nearby home. You can pretty easily destroy $400,000 of property here in an otherwise uneventful accident.
However, it looks like that is mostly due to more lawyers being involved in bodily injury claims. More lawyers involved points to a failure of insurance companies' ability to properly adjudicate claims. When insurance is only about profits and finding tools to further squeeze your existing clients, no one benefits. Insurance companies do not need more ways to rate me; they just need to do the job they get paid to do.
> This is a private company looking for profit. They want to raise your rates.
And this is the problem with insurance being mandatory. Their business model is to force everyone to buy an expensive subscription, and then increase costs every time everyone actually uses their service in order to recoup costs.
Yes, I am oversimplifying, but holy fuck do I hate insurance companies since I have never gotten one to pay out a claim without suing first.
> If I put a tracking device in my car today my rates would go up because I accelerate quickly and brake hard when conditions allow. That is not right.
If statistically other drivers who do that are more likely to, on average, result in a loss for the insurer, they are right to raise your rates. "Past Performance is no guarantee of future results"
With enough data, maybe it would support your assertion that you are, in fact, a very safe driver, by realizing that you only drive fast at certain times, in certain locations that are statistically lower risk.
This argument would only be valid if there was a causal relationship between the “protected characteristic” and the increased risk. If the relationship was not causal, then the insurer would be mispricing the risk and thereby leaving profit on the table.
> my rates would go up because I accelerate quickly and brake hard when conditions allow
All it takes is for you to misjudge the conditions one time (are you claiming to be infallible?), or for another road user to do something you were unable to anticipate, and suddenly this driving habit of yours does contribute to a higher risk of an incident that results in a claim. (Even if the incident still isn't considered to be your fault; your driving style reduces safety margins for everyone.)
You cannot operate safely within the limits of your vehicle and yourself if you do not know what those limits are. My driving style contributes to safer driving because I know when I might exceed the limits. As I mentioned, I adjust my driving style for conditions. If my driving were so egregious, I would have a driving record.
Giving the insurance company more ways to judge me only enables them to charge me more, without my actual level of risk changing at all.
How so? My insurance company offers programs with apps or use of telematics. I don't use any apps and don't allow access to any telematics. After the location data scandal last year my cell phone company claims to have stopped selling location data. I don't knowingly allow any access to my location data.
Ok so I was being a little hyperbolic, but I wouldn’t trust that there is not some loophole of getting to your data - if not now then in the near future. Even very basic data (if you have enough of it) can be used to derive surprisingly complex patterns of behavior.
> Our machine learning technology has flagged hundreds of thousands of instances of misogyny, bigotry, racism, violence and criminal behavior in publicly available online content.
If you go and compare what the product actually seems to do - tracking down your Twitter account and grepping every Tweet you interacted with against a list of "thoughtcrime" words (like "hell" or "ass") - you can almost feel the next AI winter coming. How many more bullshit companies calling their fake, broken and trivial technology "AI" or "Machine Learning" will it take until the whole field of ML gets derailed by bad reputation? At least, (as much as I know history), the last AI winter involved companies trying but failing at AI. This time around, they're not even trying.
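To get a sense of how trivial the "technology" the parent describes could be, here is a sketch of naive substring matching against a blocklist. The word list and function are invented for illustration; the real product's internals are unknown.

```python
# Hypothetical blocklist -- the actual list is not public.
FLAGGED = ["hell", "ass", "damn"]

def flag_text(text: str) -> list[str]:
    """Naive substring grep: no context, no tokenization, no intent."""
    low = text.lower()
    return [w for w in FLAGGED if w in low]

print(flag_text("What the hell, that ref was blind"))  # ['hell']
print(flag_text("I passed my assessment"))             # ['ass'] -- a false positive
```

A few lines of string matching, dressed up as "machine learning technology", complete with the obvious false positives.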
We've entered an awkward twilight zone where companies can do what they want a la Laissez-faire capitalism, which means they can also do exactly the same thing a government would do at exactly the same scale and with similar levels of authority. It is a grey area that hasn't yet been stopped by laws because sometimes companies are so big they influence the laws themselves.
Everything you have said except for the last phrase, “sometimes companies are so big they influence the laws themselves”, is false. There is a fantastic amount of regulation in every western country today, much of which is applied arbitrarily or with enough discretion that it is essentially arbitrary. Even the most massive corporations' market caps are dwarfed by the monthly spending of government. The only cases where corporations exert the powers of the state over detention and violence are occasions where they do so at the bidding of the state.
It is true that some corporations are so big that they influence lawmakers just as ultra wealthy people always have. Indeed, if corporations were able to do the same things governments did and with the same authority, this process of influencing the government would be unnecessary.
Yes, and whether this reflects poorly on capitalism, democracy, regulations, the corporate structure in a given country, the people in a given corporation, or the system as a whole depends to a large extent on your priors.
Well, I wouldn't say it was blind. About every 2 years you'll find a movie about how insanely stupid it is to post info online. And those types of movies go back pretty far. The problem is how dense people are.
> If some piece of code is used to predict my employability based on my facial expressions
Funfact: when I first moved to USA I would practice emoting in the mirror so Americans could read my face and wouldn’t penalize me during interactions. Now people back home say I “grin like an american”
> can’t say anything in public
Ever noticed how Gen Z is switching back to private chatrooms and message groups? Public stuff is all polished and curated.
I remember once being in a branch of HSBC trying to sort out a loan. I saw on the screen for my account they had a score of something like "customer behaviour", which was like 54 something.
I had drunkenly called HSBC and ended up ranting at the person on the phone a few times (lost cards etc) so I think this was a rating of how well behaved I was towards their staff.
The manager I was speaking to changed his tone fairly quickly after that screen came up!
So this is nothing new, but I guess the scale and opportunities for data points are new.
Although, having said that, a lot of marketing-based scoring data is woefully inaccurate. I was once responsible for distributing a data set to company clients which covered interests and personal info for the entire UK population.
Not only was most of the data about my dad wrong, the things that were accurate were years out of date.
My information didn't exist. A colleague's email was completely wrong. It indicated he liked going on holiday, but he had never been outside of the UK.
I think that people wind up worried about the wrong thing on these systems. The worst case scenario, where some all-knowing system negatively impacts you because it's rating your behavior on some opaque scale, isn't the one we should be worried about.
That by itself is scary enough, but the much more likely case is the one where this system is rating you based on wrong information, and given Finagle's law...
Especially concerning when the justice system starts using this info; you really, really don't want to be the false positive.
The justice system has built into it the requirement to justify allegations against you, though. By design, everything in the justice system is transparent or it's not admissible. So an opaque "score" of any sort wouldn't ever be acceptable evidence.
Data that appears "good enough" under the purely economic lens can leave an awful lot of people unfairly out of service.
In a low margin business a single bad customer can easily cost more than what is earned from ten good customers. In that situation, an oracle that rejects eight good customers per rejected bad customer would already be good enough.
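The arithmetic behind this is worth spelling out. All the margin numbers here (the $10 profit per good customer, the $150 loss per bad one, the applicant counts) are assumptions for illustration:

```python
# Assumed margins: each good customer nets $10; each bad customer
# costs $150 -- i.e. more than ten good customers earn combined.
GOOD_PROFIT = 10
BAD_LOSS = 150

def net(good_kept: int, bad_kept: int) -> int:
    """Net profit from the customers actually served."""
    return good_kept * GOOD_PROFIT - bad_kept * BAD_LOSS

# Applicant pool: 1000 good, 20 bad.
no_screening = net(good_kept=1000, bad_kept=20)
# A sloppy oracle that catches all 20 bad customers but wrongly
# rejects 8 good customers per bad one caught (160 in total).
sloppy_oracle = net(good_kept=1000 - 8 * 20, bad_kept=0)

print(no_screening, sloppy_oracle)  # 7000 8400
```

Even with heavy collateral damage (160 good customers turned away), the sloppy oracle out-earns no screening at all, which is exactly why "good enough" data survives despite being unfair to many individuals.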
If anything, they seem to be doubling down on this. From my discussions on the subject with people "in marketing", they are really convinced that not only is the data "accurate enough", but that collecting and exploiting it is actually for the client's benefit.
I wonder how one would go about quantifying the usefulness of all this tracking, because I doubt it's cheap. The best I could get was "we're able to see changes in sales which correlate to marketing campaigns". I can totally buy that, and it seems somewhat more advanced of an answer than the more common one of "If it didn't work they wouldn't be doing it".
What I wonder though is how they would go about quantifying how much better a tracking-based campaign worked than a "traditional" one would have.
That's a pretty strong assumption. Data that is garbage isn't necessarily going to cause losses. In fact, making the market more opaque (less efficient) will add to profit margins. If all companies are using the same data/algorithm, then even incorrect decisions form a focal point. If all the companies think someone is a bad risk, then that person will end up paying more regardless. Even if one company defects and accurately prices their risk, that company will just lower their rate a little and that customer still won't be getting the rate they would have if full market competition were taking place.
There's a very good reason why organizations should be allowed to keep "fool me once/twice/thrice" records: it's what enables them to be nice to customers who don't abuse that niceness.
But there's no process to appeal and the data might be laughably wrong. Outlawing cross-organizational aggregation might be a reasonable middle ground. It gives a further advantage to giants like Amazon, but all the problems compound at interorganizational integration.
In 2006, I spent some time in the hospital after a bike accident. Someone who shared my recovery room was a guy with kidney failure & an extremely poor attitude. I learned he was running out of places to get dialysis because he kept treating the staff poorly & getting kicked out of the dialysis centers. Imagine having some poor newly hired receptionist trying to help this near-death, abusive asshole & reading a note on the computer screen that said “do not treat this person under any circumstances”.
I've had a similar experience. Had access to a dataset about which many breathless articles were written when it was leaked/breached a couple years later. Was able to find very little data on family members in the set and what was there was quite stale.
"2. In addition to the information referred to in paragraph 1, the controller shall provide the data subject with the following information necessary to ensure fair and transparent processing in respect of the data subject:
(g) the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject."
Case in point, I was just recently getting nowhere on an obviously automated decision, so I sent an email quoting GDPR Article 15(1)(h) (instead of Article 14) and got it resolved amicably within an hour. Maybe just coincidence, but their follow-up was a complete 180. I suppose it could be abused to get around certain automatic safeguards, but I made it clear that my intentions were just to ensure that my data was correct and being processed fairly, as I was sure it wasn't the case.
But that requires that the company even cares about following the GDPR correctly, or perhaps even the Data Protection Authority of a given EU country. I know of one small but popular enough data broker that's been operating for years in the EU and flouting the GDPR without any punishment so far. Not sure if this is the place to advertise it. But I suppose somebody whose GDPR rights were violated has to lodge a complaint first.
Aside from being a mostly clickbait puff piece, the solution to this is regulation. All companies collecting data this way need to be subject to exactly the same requirements as the credit bureaus. Citizens need to be told what data is collected, why, when it gets used, and they need easy access to seeing their own data, as well as a well-regulated method for correcting it.
And we could just dramatically limit how much of this can even be used for housing and employment decisions, and crank up the penalties for misuse.
It was reasonable at the time it was put into place. It's now so much easier to deal with information, reform is needed but it's not going to happen unless the Democrats are in control.
What I would like to see is a rule that if a company collects data for more than in-house use (so it doesn't apply to a company that simply has records of its customers), it must make a reasonable effort to notify you and allow you to examine the data and challenge anything you believe to be incorrect.
Both China and the US are oligarchies. You can argue that in China it is the government that owns the enterprises, and in the US it is the enterprises that own the government, but that does not change much.
A central premise of this article is not correct. If an employer, for example, is using tens of thousands of background data points to deny employment to a person, the employer would need to disclose that to the applicant. If the things in this article are secret, then the law may well be violated. Also, it's not really the best idea to apply secret proprietary algorithms to these kinds of decisions, because you're eventually going to have to disprove improper bias, which can easily and "systemically" be baked into a background screening tool.
> Also, not really the best idea to apply secret proprietary algorithms to these kinds of decisions because you're eventually going to have to disprove improper bias
You only have to disprove that if the applicant can first show the existence of bias, which, with a secret proprietary algo whose inputs and outputs they can't access and whose existence they may not even know of, is pretty hard to do.
Ya large corps with the tools and resources are beholden to the most rigid of ethical standards and furthermore wholly unable to commit crimes systematically for very long due precisely because of their scale, amiriteamirite?! Yukyuk where my high fives at, guys? Yukyukyuk
It seems that such close attention to detail and blind trust in third-party scores will decrease the chances of filling positions and getting the needed job done, which is supposedly the main business objective. This again will reduce hiring success back to good old chance, or buddy-referred chance.
> Why wouldn't an HR person or hiring staffer immediately out the company or even sue them?
They wouldn't out the company because hurting the company for no private gain doesn't help them, and they wouldn't sue the company because they’d be a beneficiary, not an injured party, and so would have no damages to claim.
And also because both acts would destroy their future employability in the field.
We're already "scored" all the time ... all types of insurance, mortgages, credit ratings, past criminal record, and so on. Now, we may wish to control the use of data being used on us or against us, and I support that, but to the degree with which we are being "scored" generally, we are already being scored all the time across tons of other dimensions and commercial and even governmental applications.
HR arguably has the most immediate potential gain, and from its new policies those in the employee pool have the most to lose.
There's a lot of use in applying insurance analysis techniques to filter out bad hires much better than conventional practices do. Instead of interviews, vetting, tests, headhunters, you can adopt the latest in data analysis to cut costs and improve your KPIs across the board.
Predict an employee's productivity by analyzing online browsing behavior. Fixed qualities like attention span and drift; escapism and procrastination; pleasure-seeking vs productive, curious, prosocial browsing habits. How do these overlap in a typical workday? Are you focused or dispersed? Does your attention cycle? Do you complete tasks to the end, what happens when you get stuck on a problem?
This doesn't even get into friendship networks, purchasing behavior, or public displays of attitudes.
Can you game the system? Probably not. These days that would require you to play a losing game of tweaking your personality down to the smallest meticulous detail, essentially going against the grain of your natural flow, ordering all aspects of your identity to suit an opaque and almost certainly fault-laden model.
People flip out about the Chinese social score, without realizing that it was pioneered in the US private-sector.
Do you guys think there's any realistic hope today (not accounting for what kind of horrible stuff might be possible in the future) of being able to keep your 'genuine' online activities separated from your public facing, sanitized ones? Such that no company's HR department would be able to tie your real life identity to your real online activities? I imagine it could be possible if you're serious and rigorous about it.
Or am I being naïve for even entertaining such a thought?
The more energy you put into being anonymous, the more anonymous you are, but it's not an on or off switch. It's like something that can constantly be improved. You can even go as far as changing the way you speak to avoid stylometrics.
The question to ask yourself is 'what can reasonably be found out about me?' You can be found on the smallest trace of evidence, but is it reasonable to expect someone to do that? You're not exactly a government spy. Just take obvious steps like not using your real name and you should be fine.
Yes. Ditch being connected when you don't need to be, and think several times before you connect: is it worth it?
That's what I have done for a very long time. Almost no data about me exists. I am also very heavy handed with permissions and apps on device. I don't even allow google to install the google app on my device.
When getting insurance quotes, there was a disclosure that a credit check had been conducted, listed the top 4 items affecting my Risk Score, and gave a link for questions. I found the guide to all the possible reasons and was amazed at some of the elements. 
* They prefer 84 years of credit history to put you in a lower risk category. Having 2 lines of credit paid every month versus 1 would make a huge difference in this regard.
* Oil company credit cards are seen as very high risk, and even having 1 is seen as a negative.
* Department store credit cards are also viewed negatively.
The weightings and mixing of variables are the secret sauce... but reading through it all made me very happy with my decision to turn down an offer from one of the major reporting agencies.
Article title: Data isn't just being collected from your phone [...]
I skimmed the article and there's little mention of phone data collection. There's one mention of insurers using phones to collect driving data, but that's it. I was expecting some vast surveillance network using your phone to score you.
One thing that could end this is someone hacking into the databases and changing scores. Say for all the US senators and a few CEO's and celebrities. It would basically prove that the scores are no longer trustworthy. Not that they are anyway, but if they start affecting "important" lives things might change.
I've been wondering if all this data is being used for stock market manipulation or timing the market. What exactly are those quants using for inputs?
There are people out there resisting these efforts. For example, I know people who are against smartphones and use a so-called 'dumbphone' or feature phone for their main number. If they need to buy groceries, they refuse to use a loyalty card, and always pay in cash. They typically have a secure and private laptop with something like Ubuntu on it, and use Firefox with all the anti-tracking features enabled, uBlock Origin installed, JS turned off by default, etc. These are the typical steps people take to minimize their footprint against these predictive algos.
When I graduated high school ~10 years ago, I remember one of our school's leaders gave a speech on making a good record of yourself on social media rather than avoiding it. This was when companies first started looking up people's Facebook. He was way ahead of his time, but he was missing a key part, which is the expansion of data collection into every aspect of our lives. It's not just social media you need to worry about.
Very much describes me.
I use cash for almost everything and have a credit score of 0 (zero)!
I don't use or have a use for credit.
18 months ago the company I worked for had a background check done on me. When the results came in I smiled, knowing I had been doing the right thing.
The results: this person barely exists. They went to school but there are no records of what happened, etc, etc, etc.
It is how some of us live our lives, always have always will.
I am over 50 y/o, born, raised, and still live in silicon valley.
Surveillance capitalism to the USA is like heroin to an addict: he thinks he can give up at any time, that he can manage the addiction and find the balance, until he realises that he is a slave to that addiction and can't change anything. Congress could destroy this "big data is the new oil" business model, but they don't want to give up the money from big tech. They think they can manage this addiction and find a balance between greed and freedom; they find excuses that surveillance is a matter of national security today, but in the end the USA will repeat the fate of that addict.
OK. Now let's look at it from another perspective. Now that we know this, how do we make the system score us high? How do we show the system what it scores as high and hide what it scores as low? How do we benefit from that system?
This should be illegal. The credit unions are bad enough but this is egregious. Software should not be allowed to affect humans without a complete explanation of how it works, how it reached its decision, and a human with authority to appeal to. Frankly, a bunch of the uses here already seem illegal like denying people the ability to rent or to get a job based on some black box software that provides hidden information on people. How can we even know the software isn't something like:
if (race === "black") denyApplication();
Artificial intelligence indeed. From 1820.