I think the imposition of this burden on engineers is the result of our belief that our legislators are too incompetent or unwilling to act in the face of clear moral hazards.
Tech has a very strong history of conscientious objectors. They're not the problem. Other institutions need to do their part.
But we do all have shared values. In fact, I think we all agree on most of everything. Imagine there was a button you could press where each and every person would suddenly receive at least a nice house, a bit of land to call their own, and a pleasant, good-paying job. There's not a single person, regardless of ideology, that wouldn't press this button. Another button to get rid of all crime? Yip, gonna be pushed. Another to get rid of all poverty? Pushed. Everybody wants the exact same things. The only thing we disagree on is the best route to get there.
That said, I completely agree with you that there is a perception of a lack of any shared values. So why might this be? The 2016 election was interesting. How many people voted for Hillary thinking 'Yes, this person truly stands for what I believe in and will make a phenomenal president'? How many voted for her because the alternative was completely unthinkable? And the exact same holds true for those that voted for Trump. Think about what a remarkable bit of social engineering that is. In a democracy, you've managed to get tens of millions of people to vote for people they don't actually like or even want in office. And all you had to do was to make people loathe, and fear, the alternative sufficiently.
Division helps entrench establishment forces. You can even see this in the choice of which issues get elevated to the national level. What is the weapon most typically used in mass homicides? What percent of homicides are rifles used in? I think the majority of Americans would get these questions completely wrong, because the issues that we elevate paint a picture that is not in accordance with statistical reality. That reality is that pistols are the primary weapon used in mass homicides, with rifles responsible for about 2% of all homicides. [1] Our homicide rate is driven by cheap little pistols. The year-to-year variance in pistol homicides is frequently larger than the entire sum of all rifle homicides. In other words, if we had a magical button to get rid of all rifles, "assault" or not, you wouldn't even notice a drop in the homicide rate. It'd be statistical noise.
But the issue is promoted because it's extremely divisive. It makes people fear and hate one another and further drive this perception that we have no shared values. And what that translates to is at the polls you won't vote for who you want, but will instead vote against who you do not want. And that translates to voting for the establishment candidate who, by definition, will have shown themselves to be the most 'electable'. And then we all end up disappointed, and then do this again 4 years later. Only this time the establishment candidate is truly different, honest! Or in the case of an incumbent, they'll actually do what they've been promising now - they just need 4 more years, honest!
> There's not a single person, regardless of ideology, that wouldn't press this button.
You have vastly more faith in humanity than I do.
There is an entire class of people who define their success based on how much more they have than everyone else. It isn't enough for these people to be successful, healthy, and financially stable; it is necessary, for their happiness, that other people not have these things.
Many of the problems we currently face have, if not solutions, proven mitigations. These mitigations are complete non-starters in America, though, because a significant minority of people have been convinced that "those people" haven't earned it.
> It makes people fear and hate one another and further drive this perception that we have no shared values.
A more important issue is the fact that our legislatures are structured to have exactly zero concern for our shared values. If we take your cheap little pistols example (which is entirely accurate, as far as I can tell), we have one of those clear mitigations: universal background checks. This mitigation enjoys somewhere around 90% public support. It will never, ever pass as long as the NRA is allowed to terrorize politicians into voting against the public good.
I think the examples you offer illustrate the point well. For instance, you're presumably alluding to welfare, and people being upset seeing things such as somebody paying for food with food stamps while browsing on their $400 iPhone or walking their groceries out to their new car. It's not unreasonable to characterize this as somebody being unhappy because other people do not not have things, if you'll excuse the double negative. But is this really what it is?
Should the purpose of welfare be to solve poverty or to sustain it? This is another one of those questions that I think we'd all agree on. Nobody wants poverty, and so welfare, as one of our primary means of combating it, should do exactly that. It should combat it. Is it succeeding? This is something we can look for objective information on. This [1] is a graph of the US poverty rate. It's clear that welfare is not effectively reducing poverty. One of the oldest proverbs, one that has appeared independently in cultures throughout the world, is 'give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime.'
Our capitalist society is fundamentally unfair in one way. The best way to make money is to have money. Start as a billionaire and unless you're a complete idiot (or alternatively voluntarily engaging in extremely high risk enterprise) you're going to die a billionaire. And this goes all the way down. It's much easier to earn $15,000 when starting with $5,000 than it is to earn $10,000 when starting with $0 - even though it's the exact same increase in capital.
If a person is so poor they cannot reasonably afford to even feed themselves, is it a wise investment of what little capital they have to buy a luxury electronic device? Or a new vehicle? Is this the sort of behavior that's going to help them get out of poverty? I think people see these behaviors as a failing of the welfare system. Instead of getting people out of poverty, it's simply sustaining it.
And there are major corporate interests that are invested in sustained poverty. For instance, WalMart in its SEC filings actually lists food stamps as one of the major factors affecting its profits [2]. About 20% of all food stamp outlays end up being spent at WalMart - around $13 billion in recent times. Quite an interesting system we've created. Welfare subsidizes low wages at WalMart, and then caps it off by directing massive amounts of money back to the company. Kraft Foods is another big advocate for welfare and its expansion, with about 1/6th of its entire revenue coming from food stamp purchases.
And there are also political gains to be had from sustained poverty. Today around 40,000,000 people are dependent on food stamps. Politicians who promise to expand these sorts of programs are likely to disproportionately receive their vote. This creates an incentive to simultaneously service these people, but also keep them dependent. See LBJ's quotes on how he viewed the Civil Rights Act (which he passed), or what he referred to it as privately, for an example of this issue. I will not repeat his language here.
This is really just scratching the surface, but this post is already unreasonably long so I'll cut it here. But I hope I've framed at least the start of a case showing that when people are not necessarily the biggest fans of programs such as welfare, there are reasons beyond disdainful views of those receiving it, or a desire for them to stay impoverished.
> The only thing we disagree on is the best route to get there.
So we agree on most of everything, except this massive, critical thing?
Even aside from that, your examples of shared values also don't strike me as universally agreeable. I find many of these apparently agreeable things just poorly defined. It's much more obvious why these issues are so divisive when you dig a little deeper.
What exactly would getting rid of all crime mean? Because people disagree on what should be considered a crime.
Getting rid of poverty? Well, what kind of poverty do you mean? Because poverty in many ways is relative. Getting rid of all poverty implies getting rid of all wealth inequality. Is that really something you believe everyone agrees on?
Everyone owning their own land? I'd be surprised if everyone agreed on private land ownership. Everyone has a nice house? What kind of house? I'm not sure everyone wants to live in suburbia.
>> Imagine there was a button you could press where each and every person would suddenly receive at least a nice house, a bit of land to call their own, and a pleasant good paying job. There's not a single person, regardless of ideology, that wouldn't press this button.
No I would not push that button. I think everyone should have a roof over their head, but only value-producing people should have a nice house.
Also, I think that the reward should be based purely on value production of the individual, without any regard for ownership of capital (the means of production).
Also, individuals should not be rewarded for capturing the value created by others (I.e. entrepreneurship).
There should be a clear distinction between activities which create value versus the ones which capture value.
We need a system which enables creators instead of entrepreneurs.
By the way, the word 'entrepreneur' is French; its roots literally mean 'one who takes (prendre) what is between (entre)', not someone who creates value.
>> Another button to get rid of all crime? Yip, gonna be pushed.
If we had a decent system, I would agree. But in the current system, I think that crime might as well just be legalized. The definition of crime is extremely unclear.
There are so many white-collar activities which are not considered crimes by the legal system but whose consequences are actually far worse for society than all the blue-collar crimes added up together. In fact, you could argue that blue-collar crime would not exist if it weren't for white-collar operators rigging the game and effectively forcing poor people into a life of blue-collar crime.
>> Another to get rid of all poverty?
Sure, but people need to be educated in order to be able to lead a socially conscientious life outside of poverty.
If you lifted everyone out of poverty with the push of a button, they'd eat the conscientious middle class for breakfast the next day.
I agree with your main thesis, but also: it's time that we start teaching kids at school about technology. And I mean actual technology, not how to make a pretty slide deck in PowerPoint/LibreOffice. I mean the importance of your data, how companies exploit it, what a CPU is, what RAM is, what an operating system is, what security is, cryptography, domain names, how ISPs work, etc.
I wrote a response to an email today, someone had sent me two ethics position papers. I pigeon-holed my own response after I wrote "it's time policy makers start assuming responsibility for understanding more logic than just policy logic. Math must be part of their domain expertise".
Mainly I shelved my response because I doubt the policy people on the thread would have taken that very well. But the truth is looming large these days.
Experts are a nice resource to have, but they're not a cure-all.
Not all of our legislature's representatives have systemically good incentives. Regardless of their ability to understand complexity, complexity offers them the ability to obfuscate. Ignoring any bad faith or corruption, though, our legislatures are also a limited compute resource.
If the complexity of modern society rises by orders of magnitude, the tools legislatures previously used to tackle complexity may not scale to the new challenges, or may simply be too slow to address the issues while they are relevant.
One of my first areas of research was mapping legislature throughput to variations in rates of technological change. It appears most western governments addressed social issues arising out of the industrial revolution, for instance, exceptionally slowly. To summarize a few years of reading in the library stacks: the relationship between social complexity and quality of governance is not well understood.
In the discourse that followed WW2 we put the same burden on soldiers, and we still today prosecute German soldiers who worked in concentration camps (see [1], for a bookkeeper sentenced to prison time in 2015 who was not actually involved in the killings).
Who, if not we, the engineers who really understand the implications of our actions, should stand up?
A moral hazard happens when an entity is somehow insured against something (e.g. health insurance), so is rationally more likely to behave in some "bad" way (e.g. driving recklessly).
I'm not seeing how this is a moral hazard; do you just mean "immoral"?
"Insured" can be interpreted very broadly; moral hazard is whenever the negative outcomes of a risky decision are directed away from oneself. A CEO considering the option of a layoff faces a moral hazard, as she will make her board happy at the expense of her employees. Either way she has nothing to lose.
>"Insured" can be interpreted very broadly; moral hazard is whenever the negative outcomes of a risky decision are directed away from oneself.
Nope! That's called an externality. A moral hazard is a type of externality, but is very focused on a particular set of instances in the definition I provided.
My comment used the term in the exact manner described by the comment you're replying to, to describe a state of affairs whereby incentives are systemically perverse because risk is not properly allocated.
A recent review I looked through indicates that the term has been historically used for different purposes in economics, insurance and probability literature. If the language of externalities is easier to understand for you, feel free to mentally substitute it in.
I could have couched the comment in the language of externalities and made a similar point, but it would lose the rhetorical flourish of hinting that legislatures themselves discount risk associated with their actions (or lack thereof).
Sure, though, to be pedantic, a common example of moral hazard is the increased likelihood of driving recklessly in the presence of mandatory seat belts. See https://web.stanford.edu/~leinav/pubs/RESTAT2003.pdf
That’s not a moral hazard. The word doesn’t even appear in the paper. It’s just an example of somewhat efficiently choosing a different point on the risk/reward continuum when the payoffs change.
A moral hazard is choosing a selfish course of action with negative external effects.
This is a burden that lies with us all, individually. We are all individually responsible for our actions. Its absurd to suggest that somehow we should outsource our morality and the responsibility for our actions to politicians or legislators. Whether or not our system (and society) is horribly broken on every level (it is) has no bearing on the personal responsibility we all bear for what we do as individuals.
That’s less sinister than you might think. New legislation is more common for new industries.
Consider that the internet began as the ARPANET just 50 years ago. For much of that time the idea that the average person would be online from their own home would have seemed like crazy science fiction.
The thing is, the legislation is tweaked to the benefit of the giants, not against them. At least in general.
Not even federal government with their delusions of absolute control can do anything. (Say, FISA insanity.)
Net neutrality was a contest between two big corps. Patent battles ruled to support American business despite clear violations? (Qualcomm et al.)
There used to be laws classifying encryption as munitions, plus the silly attempt to sabotage it known as Clipper. Yet somehow AI systems built to discriminate, explicitly sold to opposing regimes, are not restricted, despite being much more applicable.
On the other hand, there is Comcast and Time Warner.
I think the public benefits from that to some degree. A search engine index is inherently a derivative work and without updated legislation could easily qualify as copyright infringement.
I am not saying it's fair, but other industries get away with actually killing people (e.g. fine-particle pollution from coal power plants). So bias is somewhat expected from how our system works; it's just not all bad.
Assume that a hypothetical future government that you find yourself living under is not good.
Assume furthermore that a future hypothetical company is equally not good and the two collude to do not good things together.
Now assume you’re motivated to do something about it. Sucks for you: they have already predicted that and cleansed you, because they know everything you’ve ever read and there was a high probability that you’d be against them.
Consider another situation where hypothetical evil overlords know exactly what buttons to press in order to influence you to act a certain way. They press them regularly. You dance as expected.
Both circumstances undermine your personal autonomy and right to self determination. Furthermore they undermine the humanity that we share where our society is no longer controlled by the needs of humans, but instead arbitrary algorithms coded for whatever mundanely evil purpose.
And that’s the point (that you’ve more or less hit on), this is evil, but also so boring that it is actually a threat to society because no one will care enough to do anything.
For those that want a more practical example, Russia has just passed a law requiring that all internet traffic be sent to their equivalent of the NSA and FBI, with all traffic sent in the clear [1]. If you're an adtech company in Russia, with historical data you are probably being subpoenaed for that historical information. Think about Turkey, which has had mass purges since 2016. Academics, judges, ordinary citizens have been jailed or disappeared for having gone to the wrong school, having attended the wrong house of worship, for being friends with the wrong people. The government isn't tracking these people down via paper records, they're tracking them down using ordinary technology.
It's not about step 1, which is to sell you more Barbies. It's about step 4 or step 5, where suddenly a bad actor has access to all this data and can see that people who make this search have a 10% chance of opposing the current leader/president/ruling party, etc.
Does increased government authority over the tech industry make abusive access to its data and eyeballs less likely, or more likely? How would such a collaboration start? My money's on "It sure would be a shame if we audited the shit out of your infrastructure and hit you with everything we can find in the 500,000 page rulebook... or you could do me this one favor."
Doing something about a bad government is called terrorism. Rooting out such people before they act is called preventing radicalization, stemming the flow of hatred and bigotry (the #1 talking point in calls to regulate tech).
Knowing what buttons to press to influence populations to act a certain way is called campaigning. Belief in democracy is exactly equal to belief that whoever is best at that each election cycle should have the most power.
Abuses are just that. If a corrupt person is using the law corruptly in order to achieve something illegal, and getting away with it, that is something we should root out. Including your hypothetical racket.
Creating laws to prevent the population from being abused and manipulated via big data is somewhat orthogonal to that, as it would be a separate set of laws. We can’t not do something just because it might be abused by those implementing it. We have to include reasonable limits to the powers we grant.
We don’t do away with all rule of law because it is occasionally abused, we prosecute the abusers under said laws and create new laws when loopholes are inevitably found.
You make that sound trivial. It isn't just about Barbies. Technology has multiplied our communication bandwidth many times over. Giving government complete control over the personal data of people literally makes the government an all-seeing eye, which has never been possible before.
You might trust your government enough to be okay with this, but don't forget that government is made up of humans, with their own personal whims and idiosyncrasies. Do you really want to live in a world where people around you might have the power to completely end you? Will you be able to speak freely in such a world?
This level of power is a very new thing, and the people higher up (wrongly) think that it's all good and they can keep going about their business as before.
As the article repeatedly states, the problem is Dragonfly, things like Cisco's and Yahoo's famous lawsuits, and Apple's handing over of iCloud keys. Nobody is conscientiously objecting to your scenario.
I've long thought that tech needs like.. a guild or trade union, or a professional association of some kind. The problem though, is that it's so easy to get into, so there's no easy way to "police" ourselves, because tech has been democratized so well-- Anyone can do a code boot-camp and be up and running within weeks. Not that that's a bad thing, it just makes it harder to hold developers to a code of ethics or anything like that...
Maybe we just, as a society, need to adopt morality and ethics more deeply into our cultural DNA.
Canada has a professional software engineering organization, at least in some provinces. That said, they talk a lot about ethics but then award first place at a local engineering competition to a team that tracks people's position and movements without them knowing using their cell signals (for use in airports), so maybe it doesn't do any good. Great technical work by them mind you.
I think Canada has had a professional, and pretty serious, association of engineers (in the traditional sense) for a long, long time. Maybe the software thing is a spin-off of that.
The regulation is coming on very gradually since they're confused about how to define and regulate the title/practice.
The regulation won't necessarily help anything but to outlaw a lot of smart people from being able to write certain software. It will put us further behind the USA in our ability to create tech companies that matter (or that can compete with offshoring) and lead to higher unemployment as computers take away more jobs.
Luckily the restriction is barely enforced at this point, though it is getting stricter. I guess it will benefit me in the future when somebody will be forced to hire me because there weren't enough other applicants with professional membership... sort of a depressing thought
> The regulation won't necessarily help anything but to outlaw a lot of smart people from being able to write certain software. It will put us further behind the USA in our ability to create tech companies that matter (or that can compete with offshoring)
Has any of that happened with engineers in Canada? We still have plenty of engineers.
A lot of this comes off as any industry wanting to hold onto the engineer title without any of the accountability that should come along with it, or really any accountability at all.
I don't want a random self-taught engineer involved in building my house or a local highway overpass. So why would I want a random self-taught software engineer working on vital computer systems, or on autonomous vehicles, or on the software that controls the pitch controls on a Boeing 737? Software is not just some shit-tier SaaS app. Getting official accreditation should mean accepting high (legal) standards. If that means lots of people have to go around calling themselves a developer instead of an engineer, then I'm fine with that.
It might seem depressing that we hold engineers to high standards and expect legal accountability for the things they sign off on or create or approve, but I think more accountability in the tech industry is a good thing. The modern tech industry and software engineers hold themselves to ludicrously low standards because they basically operate on the idea that innovation = good and if-it-makes-money it must also be good. I know fast food workers who are held to higher standards than software engineers. That many continue to call themselves engineers is mainly just a hold over from a metaphor that describes computer systems as "architecture".
Software is a big deal and it affects every facet of daily life in 2019. We shouldn't treat it like it's ephemeral stuff that has no consequences beyond the next VC exit or going public.
I think the fear is that the same restrictions will affect shit-tier apps as well as aeronautic control software. After all, it's not like you can't already say "to work on aeronautic software you must have credential X or Y". So what's the point of restricting the word "engineer" as a whole?
The problem with that argument is that they can just call themselves developers and nothing would change. Currently, "software engineer" doesn't mean anything. There is no credential that obligates software engineers to the same standards as other engineers or more than developers. Right now it's just an empty title in the tech industry.
It's not a restricted word. It's a restricted accreditation that, should one carry it, obligates them to a certain set of standards and accountability.
The people making those apps don't need to call themselves engineers. They can call themselves developers. And the world will keep spinning. But a developer calling themselves an engineer is like a local contractor calling themselves a "residential engineer". An engineer can be a local contractor, but a local contractor can't simply call themselves an engineer, of any sort, unless they're accredited. That distinction matters because engineers have obligations that they are held to that a local contractor doesn't. That's the point.
The Canadian Association for IT Professionals (cips.ca) administers the only nationally recognized IT designation, called the I.S.P. (think of it like a P.Eng for computer scientists).
Their goal is to make this a requirement for the handling of certain types of sensitive data, as one must swear to a code of ethics in order to become a bearer.
A lot of sensitive data processing isn’t done by engineers or developers, it’s done by data scientists, statisticians and economists. How do they plan to handle that?
While it's true that getting "up and running" with a simple project is only a matter of getting a working laptop and internet connection (and a bootcamp if one is available to you), getting into the position where you could build (or not build) a system capable of broadly violating human rights on behalf of an oppressive government is not an opportunity most workers have. If you're interested in the sort of union that could help you take action in your workplace I'd recommend looking into the Tech Workers Coalition -- there's an active community online and probably a branch near you.
Further down in this thread the discussion has broadened to worker rights in tech-adjacent jobs (e.g. Amazon warehouses). It would be great if we could have one big union that fights for the rights of all workers, and if you think so too you should look into joining the IWW.
Yes, our society does need to adopt morality and ethics as guiding principles in our work, and organizing your workplace is not some orthogonal venture but is actually part of that goal.
> It would be great if we could have one big union that fights for the rights of all workers, and if you think so too you should look into joining the IWW.
Given how often the interests of various workers can and do conflict, this strikes me as an idea possessed of a wonderful opportunity to come into greater alignment with reality. As are those who advocate it.
I always ask would-be tech union organizers the same question: what's in it for me? Rarely do I get good answers back.
Unions aren't about moralizing. They're about self-interest. Show me that you're going to advance mine, and we can talk.
If your understanding of unions begins and ends in terms of what's "in it for me" then I'd urge you to rephrase the question: "what's in it for us?" And then expand "us" to the people you care about. As it turns out, quite a lot. In the original article, we're potentially talking about the safety and privacy of hundreds of millions.
In more day-to-day union campaigns, you'd be organizing so that your coworkers can have a place to work where they're not harassed, underpaid, or bullied by systems out of their control to create technology that harms people. Admittedly in the US it's not much leverage we have, but you could rest assured that if you were victim of an injustice by your boss, they'd have your back too.
It's about taking back just a little bit of control over what you use your skills to work on, and how. Some people would rather take orders from their boss until they die, but especially in software there seems to be a collective desire for more accountability from the bottom up, and maybe a union is what that will look like.
> If your understanding of unions begins and ends in terms of what's "in it for me" then I'd urge you to rephrase the question: "what's in it for us?"
Respectfully, and with a heavy heart, I must decline to do so. I am not interested in some nebulous "us", doubly so one that is poorly defined. I want to know how a union will improve my life. It's far too easy to sacrifice me in the interests of "us". After you, my friend.
I do not want to hear how a union will improve the safety and privacy of hundreds of millions. That's the job of governments. I want to hear about how a union will improve my privacy and safety. I want to hear about how this union will unconditionally have my back. I want to hear how you propose to make sure that a union doesn't turn into another way to bully, harass, and sit in arbitrary moralistic judgment of workers.
Alternatively, tell me how your union will expel - and render unemployable - practitioners who through ignorance, negligence, and recklessness endanger the safety and privacy of hundreds of millions. I have worked with more than a few engineers in my time who have been guilty of that. No warnings, no slap on the wrist, just up-front education and swift, harsh incentives to be careful and not fuck up. I know, I know, unrealistic and cruel, right? No need to wreck someone's livelihood, right?
You're absolutely right, of course. It's in every way about taking back control. You'll have to excuse me if I'm reluctant to take it back from one faceless and unaccountable group that doesn't care about me so I can hand it to another faceless and unaccountable group that doesn't care about me. That seems like a needlessly complex way to not solve the problems at hand.
These are good questions to ask of any union IMO. Workers wouldn't join a union they felt didn't advance their interests (and anecdotally, there's gotta be a reason union members are crazy about unions, right?) So this is good to investigate.
Perhaps purporting to look out for the interests of an entire nation is too broad for any one organization. Some questions about your assignment of responsibilities that I think will clear things up: what if, hypothetically, the government was run by a ruling class who would leverage your need for income, healthcare, etc., in order to make you build things you didn't want to build? And specifically speaking of surveillance technology, what if you suspected they were building these tools for other countries as a trial run to later use on their own citizens? It's not that far-fetched.
Otherwise, I think you have some great questions to ask of any union you're considering to represent you. But you're also representing them -- if you felt like you wouldn't get a say in what collective actions were taken, that's not a union I'd encourage you to work with. I specifically mentioned mine because I know their track record and philosophy and would like to see more of it. There's no hierarchy, no inaccessible higher realm of bureaucrats making the decisions, it's workers all the way down! Ask your local branch about it. Although there are often differences in opinion, bullying or harassing workers is not tolerated.
Will it immediately and permanently improve your life? This ain't a fad diet and I won't market it like one, it's a chore sometimes honestly. I don't go to meetings and pay dues because I love it all the time. But I've seen results, and seen how much those in power want to keep us from realizing our collective ability to win some of their power through organizing. And I think you'll agree that the trajectory we're currently on is grim. Think it over.
I've known union workers who love it and I've known union workers who didn't. Both are common enough to be impossible to generalize from beyond opinions being as diverse as people.
Anyone who says "there's no hierarchy" is, without exception, wrong. All they actually mean is "there is no explicit hierarchy". There's always an implicit one. The big difference between the two is generally accountability - explicit hierarchies can have it structured in. Implicit hierarchies get to say "We're all just workers here!" to dodge accountability.
> And I think you'll agree that the trajectory we're currently on is grim. Think it over.
You're right. I do agree. That said, I also grew up in a place where short-sighted unions helped turn a thriving industrial area into a post-industrial wasteland.
Just because one course is grim doesn't mean I want to trade it for any other course.
"getting into the position where you could build (or not build) a system capable of broadly violating human rights on behalf of an oppressive government is not an opportunity most workers have."
Come join the Evil Mastermind Technical Union! We have workshops on supertech, the latest coming advances, we sponsor work and mentoring programs for young Evil Geniuses (including internships!) as well as seasoned practitioners of Forbidden Tech!
Last year one of our top candidates improved the speed of facial recognition at checkpoints in a "repressive regime", leading to the arrest of a whole slew of dissidents who had previously eluded the authorities! These kids are going places - and the targets of their employers are going to the gallows/dungeon/gulag/graves, that's for sure!
Why put this on programmers? The guy making the decision to make Dragonfly is a Stanford MBA with a materials background. It's not some working class kid who made it through a Javascript bootcamp that is making decisions about the direction of future Google products. So, why don't we let the MBAs be the ones to form an ethical organization to police themselves?
Because you get paid to do the work. If a person X comes up to you and offers amount Y to do certain tasks that may be seen as unethical or gross, but Y is significant, it's still the programmer who chooses.
My impression of the ACM is that it’s basically a glorified academic publisher. Maybe a relatively non-evil academic publisher, but I’d rather not take ethical guidance from any of those bloodsuckers.
This! I think what I'm going to start doing is to not take any computer professional seriously unless they are an ACM or IEEE member (or similar org in your country).
EDIT: Getting downvoted, clearly not a popular opinion. Are people here not interested in having a strong corporate-independent organization to push the industry forward?
EFF, FSF, ACLU are all great organizations. ACM is unique here though because it's specifically a professional organization meant to push the industry forward. EFF, FSF, and ACLU are great and should be supported but aren't as broad as ACM.
If the ACM is really pushing professionalism and ethics now, that's great.
I've actually been a member of or contributed to all the organizations either of us mentioned, and EFF has seemed to me like a better predictor, but that's just a guess.
My biggest issue with EFF is that they often take pro-corporate positions and are led by many current and former executives (see the board of directors: https://www.eff.org/about/board), while the ACM comes more from an academic or tech-worker strain (see https://www.acm.org/about-acm/officer-bios).
So while I agree with the EFF on many policies, their corporate leanings are a turn-off and aren't, imo, a solid ground for people doing day-to-day work in computing.
Yeah, even if the ACM or IEEE are not the organizations we need (and I'm not sure if they are or are not), we would be better off if we had a professional organization dedicated to promoting responsible practices.
Just because there's a professional organization that pushes a set of ethics, does not mean they will be anything resembling the type of ethics you are envisioning. Building tools, your ethics generally get framed with regards to your client - not the subjects your tools will be used on.
Traditional engineering disciplines have longstanding concepts of ethics, plenty of professional organizations, and even professional licensure. That does not stop mechanical engineers from adding to the USG's weapons disparity or civil engineers from constructing more profit-center cages.
I'm a software engineer largely because my father and grandfather were union laborers. My and my cousins' generation was the first in our family to be able to go to college. And it was precisely because the union was so easy to get in to that my father and grandfather were able to get good paying jobs with benefits without a college education. A low barrier to entry to a profession is no reason not to build a union.
I think the challenge is a bit misunderstood online. The challenging part of forming something like a trade union is by far gaining the influence, not wielding it. Having a high participation rate, a strong organisation, institutions, and a war chest is the hard part. Actual influence will then follow largely automatically just by existing. In general, only weak organizations need "activist activities". A strong union's activities would be ongoing negotiations, legal cases, education, conferences, media production, etc. That would form the culture more than any sort of code.
The UK has the British Computer Society -- though I'm not actually sure what purpose they serve.
I was a student member for a couple of years, but beyond the occasional Government IT job advert which required BCS qualification and accreditation, they seem to be extremely quiet (or not very good at advertising).
> The problem though, is that it's so easy to get into, so there's no easy way to "police" ourselves, because tech has been democratized so well-- Anyone can do a code boot-camp and be up and running within weeks. Not that that's a bad thing, it just makes it harder to hold developers to a code of ethics or anything like that...
Moreover, "coding" is a cross-cutting concern in that a lot of people who aren't formally programmers have been sold on the idea of being able to automate parts of their job by writing code, and I'm not sure how you'd yoke all of those disparate fields (especially academics outside of industry) into a single labor organization.
It was about bringing divergent technical professionals together.
And the previous president, Denise, was also president of UNI, which acts to support workers worldwide - e.g. the rights of garment workers - and also supported the victimised organisers of the Google walkout.
To give an analogy, I as an engineer need to write a lot of documents sometimes, but nobody in their right mind would suggest that I should be forced to join the writers guild, in order to be allowed to write an essay.
Why does the fact that people can join easily mean there's less interest in a union? Should Amazon factory workers not form a union if they find it's easy to join the ranks?
>Should Amazon factory workers not form a union if they find it's easy to join the ranks?
Please mind the is–ought gap. Parent was talking about difficulties in unionizing, not about whether or not people in this or that sector should unionize.
"We are a small - as-of-yet unincorporated - nonprofit providing pro bono consulting on algorithmic and policy issues arising from the proliferation of: Statistical inference / Automated decision making - often called 'AI', ..."
From the author's website, listed at the bottom.
Imagine if journalists and other people posting to the www called "AI" what it actually is, instead of constantly portraying it as something futuristic to capture people's imaginations.
Even terms like "statistical inference" and "automated decision-making" could probably be explained using more common language to be comprehensible to the general public.
Compensation, including perks, is usually very good in tech in comparison to many other industries. Most people rationalize and convince themselves that they're not really doing bad things, and that it's someone else who's the problem.
In some companies, the culture is also that of being oblivious (or acting oblivious) to the harm caused by the work. Facebook is the best example of this. Has there been any conscientious objector in that company? People like the WhatsApp and Instagram founders, who grew a conscience after getting billions in their pockets, don't count (even if one or two left some money on the table when quitting).
Money tells many a true tale of what goes on in people's minds.
The article, sadly, does not mention Facebook and all the surveillance that it enables and grows, along with oppressing people by forcing them to use "real names".
Many software engineers conscientiously object to working at social or information platforms like Facebook, Twitter, Google. They simply have never worked there or tried to work there, just like how most wartime conscientious objectors avoid enlisting in the first place.
Objectors' reasons are diverse and not as unified as NYT makes it seem - some believe that big tech's arbitrary censorship and subtle political bias is immoral, while others believe that big tech's inability or lack of incentive to limit ads, fake news, or distasteful speech is immoral.
Usually in articles with a lot of facebook bashing, some facebook employee comes out of the woodwork and tries to spin it as "haters" being the problem somehow (for not doing anything, for (potentially) using the service?)
I doubt anyone sensitive to the privacy issue would apply to work at facebook in the first place. Even if the technology and know-how might be promising.
While I agree that individuals should consider the ethics of the projects that they work on, I think it's unfair to place the entire burden on them. Those in management who make the product requirements in order to sell the software are in the best position to evaluate the ethics of how software will be used. Why haven't I read a story about management conscientiously objecting to something?
I’m in management (at a company that doesn’t make anything where conscientious objection could be relevant). When we make other ethics-based judgment calls, we don’t go crowing about them generally. They’re simply “how things are done around here.”
Maybe you should tell the world about it. More people need inspiration to stand up to morally bankrupt leaders. Your story might just push them in the right direction.
I think this is a bit naive. Americans and EU citizens could conceivably follow some protocol such that what they produce will not go against certain principles (if we even presume no such thing as dual-use tech), but that would do little to stifle and control developers outside this sphere. Vladimir, Jinping, Jeong-Un, Narendra, etc., aren’t about to handicap themselves and have some realization, oh, you know what guys, we should be respecting human rights, we gotta stop.
Those export control policies affected access to strong encryption by citizens of repressive states, not just by their leadership. I'd say the balance is positive here, especially since many such states also (try to) regulate access to crypto.
I get the point you're making, and it's valid... but "Vladimir, Jinping, Jeong-Un, Narendra, etc" is a bit offensive. One's ethnic background, and even citizenship, does not define one's morality. Take a look at the list of names on http://neveragain.tech/ - it's not all Americans and Europeans.
I think you might have misunderstood me. Those leaders are not going to respect whatever ethics are set up and will ensure their own developers do what they are told irrespective of this effort. Calling out those leaders isn’t an affront to their citizens any more than calling out George W (or Obama or Trump) is an affront to Americans.
A conscientious objector is an "individual who has claimed the right to refuse to perform military service" [0]
I believe that it is dangerous to conflate job choice with conscientious objection.
Everyone should be held accountable for what they spend their day contributing to - this shouldn't be a special case.
Comparing this to a situation in which someone doesn't want to participate in compulsory murder, and in some cases risks being killed themselves, is not helpful.
On the one hand, I agree with you that it isn't helpful.
But on the other hand, there is a plausible parallel. The reason for strong controls on privacy is because every couple of generations governments tend to do a lot of damage to their own citizens.
If I thought the worst that was going to happen was the risk of being wrongly accused, public shaming or something, then privacy isn't really worth it. I trust law enforcement to be generally correct in assessing the situation, and if you are doing something shameful you may as well be shamed.
Privacy is important because it is one layer of protection against institutions participating in the murder of citizens, seen in Germany, China, Russia in the last generation and China + others this generation. Who knows for next generation? Change is very quick - there is no law of nature that says it won't be an Anglophone country. We can quibble about whether a conscientious objector has to be the last person pulling the trigger or not, but in practice that distinction is not an important one.
Worth noting that probably everyone here has seen something like this happen. Not sure if it is specific to tech, but some of the egos and pathologies are extreme.
I have never seen up close someone risking and losing their livelihood over moral issues with a business. It clearly does happen, but I would have assumed it to be rare, maybe more so in IT. So if you or GP have some details they can share about cases you witnessed, I'd love to hear.
Oh, I meant more generally seeing someone getting forced out by a single other person in the organization.
I think taking a moral stance usually leads people to move on of their own volition, as they tend to know the rules (and want them followed) and don't want to be associated with the fall out or with an organization that enforces them selectively.
This article is premised on the idea that basically everything the Chinese government does is unethical, which is debatable. For one, the Chinese government is largely supported by its population. There might actually be engineers working on Dragonfly who do not believe the project is unethical. It's not that black and white...
How would one know if the Chinese government was supported by the population or not, given that anyone expressing a contrary opinion is likely to be a thorn in the eye that is China's surveillance state?
It's quite apparent to anyone who has lived there for a while and talked to people. Some large scale anonymous surveys done by not particularly sympathetic outsiders also confirm it (e.g. https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/0...).
That's what happens when the party controls information to paint a very specific picture. You'll find that anonymous surveys in DPRK will also show that most support the Kim regime.
What became the Chinese government we know today also conducted "anonymous" surveys during the cultural revolution. Those who answered incorrectly were bundled up (along with their families) and never seen again.
> How would one know if the Chinese government was supported by the population or not
Or given that any attempt to gather opinion about the communist party on the ground would instantly land you in jail, it's an impossible claim to verify in the first place.
Being largely supported by its population is not a valid justification for human rights abuses. Consider Japan during WW2 for a concrete example of how far this can go.
How would one know if the Chinese government was supported by the population or not, given that anyone expressing a contrary opinion is likely to be killed?
Encouraging dissent is a small piece of what is needed. Developers must certainly consider the moral and ethical consequences of their work, but most situations are not going to be black and white, and no matter how diabolical a project might be, there is always a way to rationalize it. What is most needed is widespread public debate among those who understand the technologies in order to form consensus on appropriate directions of development. Any consensus then needs to be effectively communicated to the public and to policy makers. This helps everyone, including developers, make better decisions about what they should or should not support. Forums such as HN and various blogs can foster debate, but there seems to be a need for a more consolidated approach that someone needs to establish. As far as communicating consensus goes, I think that nonprofit organizations like the Electronic Frontier Foundation can be a good conduit, but they need more widespread support from the technical community.
Yea I frequently see sentiments like “if only everyone would stop working for X”. It’s true but when we’re all locked in a game of the prisoners dilemma, moralizing isn’t going to get us to move, laws and changed incentives will. I think americans finally figured that out about climate change, which is why we’re looking at carbon taxes instead of just public pressure.
It's worth remembering that most of the directly attributable harm we associate with tech was created by small entities like the NSO Group and Cambridge Analytica.
Instead of conscientious objectors applying pressure to large companies, it might be better for tech employees who come across small companies doing shady activities to report them. Many times, these smaller firms are more susceptible to pressure anyways, so this is a double benefit.
Being smaller, though, those companies can simply be up front about what they do during recruiting and people who don't like it won't sign up for them. Succinctly, prudes don't work in porn.
Is there a name for this phenomenon in general? Your phrasing is quite clear but apparently novel, and I think it happens a lot - anti-war clergy don't apply to be military chaplains, so soldiers and commanders hear biased opinions of whether their work is incompatible with their religion compared to civilians of the same faith; people who don't think self-driving cars are a good goal tend not to apply for self-driving car companies; people who think on-prem is better than public cloud tend not to apply for companies that are all public cloud; a company that is known for a certain work culture will tend to get applications who don't think that culture is hostile; etc.
Yes, it's very much like "adverse selection" in insurance; @pdimitar mentions some specific cases of it, but I think the most general might be "speciation" or "group selection".[1] I'm not sure of a non-biological term.
"Filter bubble" is almost right but some of those seem somewhat intentional - the phenomenon here is that even if a porn studio wants to have its employees' prudishness reflect the general population, it can't. Even if the military wants to hire chaplains whose views on just war reflect the general population, it can't. You can usually turn off a filter bubble and you can usually actively avoid self-selection because you're doing it yourself: the phenomenon here is that the selectees are doing the filtering for you.
As someone who has worked in sf tech for 5-6 years I feel like I’ve gotten to the point where I’ve become too critical of the industry to continue working in it.
I've always ended up with, and still reach the conclusion that there are only two tenable options which will work against a corrupt and unethical system.
1. Infiltrate, subvert, and implode the system from within (that's meant to be as Tzu-esque as it sounds).
2. Using parallel propaganda to get the public to realize that they have the ability to disable the system if they can collectively wield a moral, objective, and ethical compass (with all the McLuhan-esque difficulties that come with it).
I prefer the former. As dangerous as it is, I think it's strategically easier to wield covert action as a tool for effecting change than it is to attempt a unification of the masses. But, maybe a combination of both these approaches would be ideal.
It's tricky. It's really easy for "I'll change things from within" to be a convenient rationalization for $400K total comp.
I considered it once again myself last year, when I talked with a big-name company, and my version that time was "I can do good work on good stuff, promote that over the not-as-good stuff, and generally be a positive citizen of the company".
Of course, I was also thinking I could really use the big-corp money. But when they insisted that I'd have to do the new-grad hazing rituals, I decided I was kidding myself that I was being considered for any position where I'd have any input into goodness, and they were really only considering me as a junior frat pledge.
Option 2 gives the other side time to fight back, with much more resources than you have.
Option 3: pick the lesser evil, not the best option. Cut the money/power flow for the incumbent parties. Force them to compete. It is my hope for democracy's purpose - to force competition at government level.
For my part, I'm just not interested in working at facebook or google or certain other companies because I disagree with their practices. Luckily, I'm far enough along in my career that I don't need to take just any job, even if it is limiting to be a privacy conscious data scientist.
I do worry that people like me just staying out of these places leads to an echo chamber, and change from within is great. I wonder whether my individual impact on social good is greater trying to be moral at a moral company, or trying to be moral at an immoral company.
Luckily I don't have to twist my logic around to justify staying and vesting.
Yeah, it's hard to be a conscientious objector when you never put yourself into the position of working for a company whose goals you need to object to.
I respect the choices of tech sector workers who choose to protest against certain company policies and even refuse to perform certain types of work that enhance mass personalised surveillance and the enhancement of military lethality.
It’s a different world from 30 and 70 years ago.
70 years ago nearly all R&D spending was specifically military in nature. So it was pretty clear from the point of hire whether you were (or weren't) working on military or related (surveillance) projects.
30 years ago military and commercial R&D hit parity, and it was still quite easy to know going into an employment contract what side of the fence one was on.
Today, almost all R&D is commercial in nature, but with vast dual-use military/surveillance potential.
So it’s much more convoluted and nebulous to delineate project output, perhaps a major contributing factor to recent decisions and actions.
Combine that with the long-standing low supply of and high demand for talent, which enhances workers' leverage.
I agree that tech talent conscience is critical.
I only hope we see the same from Chinese software engineers working at Baidu, Alibaba, Tencent, Huawei, etc.
I wonder if the future will see a convoluted analog to the Manhattan Project, the Rosenbergs, and apex talent migration out of 1930’s fascist countries.
I remember drafting up a blog post a decade ago where I theorized/ranted that tech workers would end up being the bad guys in history. I didn't post it because it seemed so unrealistic at the time (and because I don't really know what I'll do when tech is killed... maybe some kind of appliance repair?).
What I object to is mass media and political activists trying to shame or force me to agree with their idea of "ethics" and political positions. I'll decide that for myself thank you very much. When you see activists making noise you can bet there are plenty of people who disagree with them. Unfortunately when you have outlets like the New York Times and mobs of Twitter journalists adding fuel to the fire (for one side of the debate only), speaking out for what you believe becomes incredibly dangerous if you don't go with the mainstream message.
So yeah, I object to everything this article says and stands for. Sadly I can't do that under my real name because I need to keep my job.
Discussions about ethical, privacy, surveillance issues often get hand waved and diluted here. See yesterday's thread on Google's retaliation as an example.
We need a diversity of communities that can represent the software industry in more dimensions than the VC-funded side of things. Growth hacking and a singular focus on success are often in heavy conflict with any kind of value dimension, and ultimately greed and opportunism crowd out every other concern.
There is also often an extremely selfish, mercenary view of the world that is disturbing; it can jade most of us, and it takes some maturity to handle without letting it poison one's worldview.
Entirely one-sided discussion (to put it kindly), with dissenters finding their comments invisible. For as good as the discussion around here usually is, there are some topics that even HN cannot have an open discussion on, and that happens to be one of them.
Try an experiment: Argue in a thread that Facebook/Google/etc. are anything less than mustache-twirling-villain levels of evil, or that the people that work for those companies sometimes make actual human mistakes.
Arguing against the mainstream position can even get your comment flagged. Doing it a few times (even about unrelated topics) will get your account banned for being "incendiary". HN is an echo chamber. But that's not surprising, all Internet communities are an echo chamber to some point.
Right - it's probably very relevant to discussions of, say, Google's AI council that Google's purpose in putting Kay Coles James on their board was not that the Heritage Foundation's views per se should be represented (note that they didn't do corresponding representation of other political factions), but that she and the foundation have significant legitimacy and influence with the politically powerful parts of the American right wing, with which Google is generally in political trouble. That made her appointment a good way for Google to execute on its AI-related goals, and therefore make money, without political backlash.
A genuine ethics board would not include people whose real job is to curry favor with government - it's sufficiently close to unethical that it's a bad way to get started - and would be generally fine with saying "This task is ethically permissible, but not an ethical imperative, and so if it's politically difficult to do it, we should just do other things."
Tech needs fewer conscientious objectors, not more. These people rarely know anything about what they're protesting, and resort to fearmongering and populism to get what they want without even attempting to resolve things civilly.
The most recent example is Google's AI ethics council debacle. Activist employees were so stubborn that they wouldn't even let people have a discussion about AI ethics because one member held views they disagreed with. Half the authors of the American Constitution thought that slavery should be legal, but they managed to draft the foundation of modern day democracy anyways.
> but they managed draft the foundation of modern day democracy anyways.
American democracy. It makes absolute sense that the founding fathers of America are the basis of the modern American democracy, but let's not pretend that they lay sole claim to modern democracy.
I agree they don't have a special understanding of things. I also rolled my eyes at the Heritage Foundation thing, among others.
On the other hand, what's the alternative? "Shut up and code"? Google isn't entitled to their work. We all have to operate by our own moral compass at the end of the day, and if you don't feel comfortable with what you're working on, it doesn't seem unreasonable to resign.
> Half the authors of the American Constitution thought that slavery should be legal, but they managed draft the foundation of modern day democracy anyways.
This is a story of democracy happening in spite of the founding fathers, not because of the founding fathers. It was, as you may know, a long (multiple centuries) and bloody (a pretty big war, plus lots of extrajudicial lynchings etc.) road from what they drafted to universal suffrage. It's still in progress in many ways, and a lot of people were hurt in the progress. So this seems like a pretty good argument that we should not let people who have views that are contrary to the long-term goal be setting the direction of how to get to the long-term goal.
It's easy to criticize someone when you're not the one doing the work. The world was a lot more racist a few centuries ago, so considering what they had to work with, the founding fathers did well. The alternative is that there still would have been civil war, and slavery could have continued to this day.
It may be easy, but it is worth saying anyway. Otherwise nostalgia based on myth will be used to guide our future actions and may lead to same problems.
Also, it just makes sense to keep history accurate. There were people both opposed to slavery and people for it. Some founding fathers had a financial interest in keeping slavery. The people who opposed it were disadvantaged in the competition for power.
Even if the Great British Empire was "better" than the early US (I'm sure some of their former colonies would say otherwise), it bears no relevance to the discussion of whether the American Constitution was a success or failure. The US isn't Great Britain, so it can't be compared as such. I also don't understand why you would hold the founding fathers responsible for things that happened several decades in the future.
> I also don't understand why you would hold the founding fathers responsible for things that happened several decades in the future.
The Constitution's explicit protections for both slavery and the slave trade were written in by the framers; they did not appear several decades later. The concrete divergence between UK and US policy on slavery began no later than the UK's 1807 ban on the slave trade, which was less than two decades after the adoption of the Constitution, while the framers were still politically active in many cases (Jefferson was President, for instance).
The UK may have banned slavery early on, but the US was the first of many British colonies to gain independence, something which took a few hundred years for the UK to relinquish world-wide. Colonization was a milder form of slavery, but oppressive nonetheless.
It's not like the American colonies gained independence because they were just that special compared to everyone else.
They did it because France (a major power) helped them to do it (France being at war with Britain at the time). Without France the American revolution would have failed.
If by "democracy" we mean "a certain privileged class of humans have shared influence in their government, usually by voting," then the US founding fathers were rather late to the game - plenty of republics had that already.
If by "democracy" we mean "there is no privileged class, and all people born in the land can vote if they are of age and have not had their privileges removed by due process of law," or we mean "that all Men are created equal, that they are endowed by their Creator with certain unalienable rights, and among these are Life, Liberty, and the Pursuit of Happiness" (the Declaration of Independence), or we mean "Everyone has the right to take part in the government of his country, directly or through freely chosen representatives.... The will of the people shall be the basis of the authority of government; this will shall be expressed in periodic and genuine elections which shall be by universal and equal suffrage" (Universal Declaration of Human Rights), then yes, I think the people responsible for the Constitution were quite adversarial to that goal.
The United Kingdom, in freedom from whom the US founding fathers supposedly established democracy, banned slavery over half a century before the US did and slowly but steadily allowed more people to vote for Parliament during the 1800s. At the time the US Constitution was established, only 6% of the US population could vote, according to https://web.archive.org/web/20160706144856/http://www.archiv... .
I think you are missing the point of the parent comment. They implied the U.S. founding fathers made progress toward universal suffrage, not perfected it.
While accurate, your point on slavery misses the point too. The parent comment was implying that significant democratic progress was made despite contentious disagreement on important issues like slavery.
Your "in spite of" comment comes across as if it's saying "their system was flawed, therefore it didn't make any meaningful contributions and society would have been better without it". Don't let perfect be the enemy of the good...
This exemplifies the parent comment's original point about Google's AI ethics committee: throwing the baby out with the bathwater because they were unable to meet the expectation that everyone agree on everything. If that maxim isn't met, they just take their ball and go home, and the overall system is no better than when it started.
...and all it took was 200 years of rancor & malice and one of the bloodiest wars in the history of our country to begin to resolve. That’s a pretty extreme reduction of the founding of America with regard to those deep disagreements among the founding fathers.
Those who “disagree with” the Heritage Foundation probably take offense at their propensity to outright lie and manipulate to achieve their objectives, most of which run contrary to the values of those working on Google’s AI teams. In light of that, I highly doubt they see Heritage as good-faith actors.
My point about the Constitutional convention is that they were able to postpone resolving a contentious issue in order to make progress on things they could agree on. That's what it truly means to be progressive.
IMO, the activists outright lie to and manipulate the public to achieve their objectives by overly simplifying the positions of the Heritage Foundation. Regardless, it doesn't really matter, because the Heritage Foundation represents the beliefs of a large proportion of the population, so their voice is important.
This just sounds not only like you gave up trying to influence the world around you for the better a long time ago, but for some perverse reason want everyone else to give up too.
The educational Youtube channel SmarterEveryDay made a video about his day job as a military rocket engineer. It's an insightful how-and-why explanation for those wondering why there aren't more conscientious objectors: https://www.youtube.com/watch?v=qOTYgcdNrXE
There are plenty of people who don't work at these companies for these reasons. There could be more of those people. But there will still be enough who are willing to do it for the right price or who actually support the morality and ethics of doing it, that it will still get built even if the vast majority of the tech industry conscientiously objected.
I worked at Google and frequently found myself acting in this capacity. I think I did well at it and was explicitly promoted for doing so. At the time I admired the way the company walked the walk in that regard.
There's no point in taking a stand if it means relinquishing responsibility.
You'd rather have weapons in the hands of pacifists than warmongers, but that won't happen if pacifists "conscientiously object" and throw down their arms; police brutality isn't eliminated by making it a job that appeals only to bullies; and software development doesn't become ethical by asking everyone with a sense of ethics to quit.
What you need is for people to communicate and understand the world well enough that they can find solutions that are as ethical as they are effective.
I don't see "asking everyone with a sense of ethics to quit" software development, though. I'm seeing a call to refuse working on things that are unethical. Most software engineering does not involve working on projects that have immediate side effects that significantly impact human rights and freedoms. To build on your police analogy, this is about asking police officers to refuse to enforce policies such as "stop and frisk".
You can't ask individuals to make that decision; they'll just be replaced with ones who are willing. Leadership needs to agree with you. You can't ask people to work at a place while undermining their company's business plan.
Of course you can ask individuals directly working on such stuff to make that decision. Implementers (who are aware what they're implementing) are just as much responsible for it as the people who gave them directions. In fact, this is just a rehash of the "just following orders" defense, except in this case it's not even orders, since you can walk out on them.
Not only that, I seriously doubt that Google would punish anybody who would simply request to not work on that particular project, instead of accommodating them by reassigning to a different team. So this is a very low bar to clear.
Does it mean that somebody else will do it? Probably, but responsibility matters. If you're the one who assists in illegal or unethical activities, then you're personally responsible for them.
I really wish people wielding the downvotes around here had the principles to avoid downvoting comments because they disagree with the argument. This is not reddit.
It's an uncharitable low-effort comment labeling HN users with differing opinions as mainly motivated by greed. That kind of comment is discouraged by the rules and is why the downvote feature exists.
It's low on words but dense with meaning. If you want to interpret it as calling out greed, so be it, but I think that good money is hard to turn down for many reasons besides greed. Is it greedy to want to pay off your student loans or provide for your family?
There are many people with stories of being unable to find work because they can't hack the modern coding interview process. One way to escape that trap is to take a job working for a defense contractor. I did it. I was over a barrel, facing an insane spousal support payment and I needed a job fast. I took a job with a defense contractor. I'm not proud of it, but I needed to live. Was I motivated by greed?
I see the point but it is a narrow and modernist characterization of other minds on the matter. A lot of perfectly decent people do defense work and don't really need the paychecks (the famous Youtuber SmarterEveryDay comes to mind). Here are a few reasons why:
1) Many work on oft-maligned technologies because they simply see no moral conundrum. It's common to see tools as amoral, and to believe that moral responsibility for a tool's use for good or evil lies solely with its wielder, not with any inventor or refiner.
2) Or, they may see their work as their moral duty as a competent professional in a society fending off morally worse alternatives. This is the patriotic or "necessary evil" view.
3) A few utilitarian engineers see the refinement of technology in and of itself as always a subtle moral good, as they believe the elimination of inefficiency is the primary reason why humanity no longer suffers as much as it used to. There's also the assumption that the development of any kind of tech leads to unanticipated benefits to other areas of life (this is a common demonstrated pattern in defense R&D).
4) There's also the classical Greek justification of eudaimonia. Simply doing what one does best is seen as the most moral thing one can do, as it eliminates any outward and internal dishonesty about one's self.
I'm very uncomfortable with how we are importing Chinese censorship into our lives, due in part to tech companies needing to build services that can operate in both regions.
At an object level, it's an important point to make and re-iterate.
On a meta-level, I am annoyed at how much special pleading I see from the NYtimes about the evils of tech, while it's simply par for the course for -- say -- Finance to act with near impunity when managing, fund raising, and dealing with autocratic and communist regimes.
When Google's internal turmoil surfaced about the work on Project Jedi, it did not ultimately kill the project[0].
As long as there's someone else still willing to compromise their moral grounds for the value returned, it's still going to get made. That's just how capitalist societies work: people need to make money, and someone is going to fulfill that objectionable need because they have a need to fill their own coffers.
One need only look at the fervor against Facebook and realise that they still employ over 35k people to understand that, at the end of the day, people are either willing to do the task or they're in a position that they have no other choice (considering that the US doesn't have much of a social safety net).
So, while "conscientious objectors" might be good for a single business direction, it - sadly - does nothing for the overall scheme of things.
...but even if all of the engineers in North America organized to create manifestos, under pseudo or actual unions, dictating what the future of tech should be used for, this does nothing to dissuade the partners (e.g., the Five Eyes) from doing the same and then sharing that technology back with their own governments.
I concede that maybe I'm misunderstanding the author's intent but to me, the posit seems too idyllic to be fruitful (overall) in modern society.
It really does. Look at engineering companies like Rolls-Royce: during the military takeover in Chile, a group of engineers in Scotland stopped working because they knew where their product was being sold. HN seems to think that programmers can and should get away with everything. Disturbing, to say the least.
I am honestly surprised that Dragonfly is the last straw for people at Google. The company is the embodiment of surveillance capitalism. Why is there no talk about changing that model? It just seems the activists there are piling on the "China Scary" bandwagon (see: Huawei and ZTE losing hella business due to US pressure) rather than changing the revenue source in order to improve privacy and reduce data collection.
Do people actually care about privacy or data collection these days, though? It's not clear that most do - see the amount of people who freely interact on Facebook, even though they surely must be aware that everything on that site will ultimately be something that Facebook knows about. Google just doesn't seem very different by that standard.
I feel like they could care, but the network effects are so strong, and the negatives are hidden for now. This is because for the most part the people designing and implementing the tech are not classically or obviously "bad" people, nor are they designing it for those purposes. Plus it's complicated, so many people defer to social proof, which is a stacked metric because strong network effects are involved. I don't know that it's a "care/not care" dynamic instead of a "what were the incentives acting on those who taught you" dynamic.
There is a small and very vocal privacy contingent that speaks so loudly that many people think it's representative, but it's not. Internally, with the exception of some product managers focused on growth, most googlers are very protective of privacy, probably more so than their users.
No, the only place I see this kind of rabid hatred of Google and Facebook for their data collection practices is here on HN. Of course these people want their services, actually expect them for free, continuously at the same standard. They just don't want to be tracked, don't want to pay for them, don't want to be advertised to, and don't have any real solutions for how these services would be paid for, other than some handwaving about breaking Google/Facebook up, which apparently will solve everything.
Every time I've pointed out that most people simply don't care about the level of tracking Facebook and Google do, I'm endlessly shouted down with words to the effect of "No! If they understand how much they were tracked they'd care!", but I don't believe that's the case; I think most people simply don't care, based on the value they get from these companies.
> these people want their services, actually expect them for free, continuously at the same standard.
Source? I'm sure most people would be willing to pay for a service in exchange for some reasonable level of privacy.
> don't have any real solutions on how they would be paid for other than some handwaving about breaking Google/Facebook up which apparently will solve everything.
The "breaking up Google/Facebook" argument is completely separate from the privacy argument. Strawman much?
> most people simply don't care about the level of tracking Facebook and Google do
Source?
> "No! If they understand how much they were tracked they'd care!", but I don't believe that's the case
Again, source? If you told the average person the amount of information that Facebook and Google have inferred from them, do you seriously think they would not care?
---
There's no doubt that the data collection practices of Google and Facebook are pathological. There is nothing stopping them from taking a more privacy-centric approach, like Apple does, and still making a ton of money.
But people have the mindset that corporations should be exempt from morality and that this is grounds for tossing considerations like privacy out the window. I'm getting the feeling you feel similarly.
Most people probably don't mind seeing advertisements. It's just completely ridiculous that Google or FB think they need to track individuals in order to provide targeted advertising.
If someone searches for 'football', then just show a fucking ad that is relevant to 'football' along with the search results.
What makes companies like Google valuable is that they can tell whether the user searching "football" should be shown an ad for "American football" or soccer.
How are you (and everyone else in this conversation) defining "tracking", and how does it differ from profiling? I can't seem to tell what people mean by "tracking" anymore.
Coding is a craft. I wouldn't work on something I had moral issues with. There are grey areas for sure, but for me, work for the defense industry or a local gun shop.. nope.. go find someone else.
There will always be someone that doesn't have any moral compass about their work or income. That person is simply not me.
Do you feel defense contractors should go out of business and that the work they do is immoral? Or is it just not for you?
I don’t know what country you’re from but would you prefer you had no military or weapons or defense? Like it or not there will always be other countries with weapons and they may not always be friendly towards you or your interests.
I wouldn’t want my work to have anything to do with killing people either, but I also think the defense industry (while unfortunate) is probably here to stay, for good reasons too.
Perhaps one day, we'll see lists like this [0] for companies that helped the Chinese government detain entire ethnic populations, or for those that provided assistance to the Turkish regime during its purges and disappearances of opponents, or those that facilitated states, such as Russia, to politically interfere in the democratic functioning of their rivals, and the extra-judicial killing of their opponents.
Lots of businesses do, there is nothing special about tech companies in this regard. NYT really could have used some conscientious objectors in 2003, just as one example.
I conscientiously object here on HN with shitposts, because I think HN is slightly too pretentious, and something important is lost in the dry demeanor.
At some point, @jcs of lobste.rs complained of, quote: "capricious modding" and I think that much rings true.
Perhaps most crucial is the undeniable reality that once you place a comment on HN, you are no longer the owner. You lose the ability to delete or edit very quickly, and the statements posted here become locked and permanent. With this, it is ill-advised that an authentic persona should ever become attached to any user name on HN. If your opinions run contrary to the political winds in this climate, real problems could come your way, and in an emergency there is no way to disentangle yourself from the data that exists on this site.
That's okay, since HN is decidedly lo-fi in many ways. But based on this, you'd hope for a lighter, more permissive hand on the moderation. Not so. Not at all.
So, with an iron fist in play, why pull one's own punches? Let the harshest opinions fly, if they'll assuredly be stamped out by such stifling interference. No need to feel guilty for resisting the iron fist. After all, opinions on the internet aren't really what's wrong with the world right now. It's just an oh-so-popular scapegoat.
How do you like my objections to HN's subculture? Conscientious enough?
Tech has a very strong history of conscientious objectors. They're not the problem. Other institutions need to do their part.
Without values there is no good or evil, there is only power and those too weak to seek it.
That said, I completely agree with you that there is a perception of the lack of any shared value. So why might this be? The 2016 election was interesting. How many people voted for Hillary thinking 'Yes, this person truly stands for what I believe in and will make a phenomenal president'? How many voted for her because the alternative was completely unthinkable? And the exact same holds true for those that voted for Trump. Think about what a remarkable bit of social engineering that is. In a democracy, you've managed to get tens of millions of people to vote for people they don't actually like or even want in office. And all you had to do was make people loathe, and fear, the alternative sufficiently.
Division helps entrench establishment forces. You can even see this in the choice of which issues get elevated to the national level. What is the weapon most typically used in mass homicides? What percent of homicides are rifles used in? I think the majority of Americans would get these questions completely wrong, because the issues that we elevate paint a picture that is not in accordance with statistical reality. That reality being that pistols are the primary weapon of mass homicides, with rifles responsible for about 2% of all homicides. [1] Our homicide rate is driven by cheap little pistols. The year-to-year variance in pistol homicide is frequently larger than the entire sum of all rifle homicides. In other words, if we had a magical button to get rid of all rifles, "assault" or not, you wouldn't even notice a drop in the homicide rate. It'd be statistical noise.
But the issue is promoted because it's extremely divisive. It makes people fear and hate one another and further drive this perception that we have no shared values. And what that translates to is at the polls you won't vote for who you want, but will instead vote against who you do not want. And that translates to voting for the establishment candidate who, by definition, will have shown themselves to be the most 'electable'. And then we all end up disappointed, and then do this again 4 years later. Only this time the establishment candidate is truly different, honest! Or in the case of an incumbent, they'll actually do what they've been promising now - they just need 4 more years, honest!
[1] - https://ucr.fbi.gov/crime-in-the-u.s/2014/crime-in-the-u.s.-...
You have vastly more faith in humanity than I do.
There is an entire class of people who define their success based on how much more they have than everyone else. It isn't enough for these people to be successful, healthy, and financially stable; it is necessary, for their happiness, that other people not have these things.
Many of the problems we currently face have, if not solutions, proven mitigations. These mitigations are complete non-starters in America, though, because a significant minority of people have been convinced that "those people" haven't earned it.
> It makes people fear and hate one another and further drive this perception that we have no shared values.
A more important issue is the fact that our legislatures are structured to have exactly zero concern for our shared values. If we take your cheap little pistols example (which is entirely accurate, as far as I can tell), we have one of those clear mitigations: universal background checks. This mitigation enjoys somewhere around 90% public support. It will never, ever pass as long as the NRA is allowed to terrorize politicians into voting against the public good.
Should the purpose of welfare be to solve poverty or to sustain it? This is another one of those questions that I think we'd all agree on. Nobody wants poverty, and so welfare, as one of our primary means of combating it, should do exactly that. It should combat it. Is it succeeding? This is something we can look for objective information on. This [1] is a graph of the US poverty rate. It's clear that welfare is not effectively reducing poverty. One of the oldest proverbs, which has appeared independently in cultures throughout the world, is 'give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime.'
Our capitalist society is fundamentally unfair in one way. The best way to make money is to have money. Start as a billionaire and unless you're a complete idiot (or alternatively voluntarily engaging in extremely high risk enterprise) you're going to die a billionaire. And this goes all the way down. It's much easier to earn $15,000 when starting with $5,000 than it is to earn $10,000 when starting with $0 - even though it's the exact same increase in capital.
If a person is so poor they cannot reasonably afford to even feed themselves, is it a wise investment of what little capital they have to buy a luxury electronic device? Or a new vehicle? Is this the sort of behavior that's going to help them get out of poverty? I think people see these behaviors as a failing of the welfare system. Instead of getting people out of poverty, it's simply sustaining it.
And there are major corporate interests that are invested in sustained poverty. For instance, WalMart, in its SEC filings, actually lists food stamps as one of the major factors affecting its profits [2]. About 20% of all food stamp outlays end up being spent at WalMart - around $13 billion in recent times. Quite an interesting system we've created. Welfare subsidizes low wages at WalMart, and then caps it off by directing massive amounts of money back to the company. Kraft Foods is another big advocate for welfare and its expansion, with about 1/6th of its entire revenue coming from food stamp purchases.
And there are also political gains to be had from sustained poverty. Today around 40,000,000 people are dependent on food stamps. Politicians who promise to expand these sorts of programs are likely to disproportionately receive their vote. This creates an incentive to simultaneously service these people, but also keep them dependent. See LBJ's quotes on how he viewed the Civil Rights Act (which he passed), or what he referred to it as privately, for an example of this issue. I will not repeat his language here.
This is really just scratching the surface, but this post is already unreasonably long, so I'll cut it here. But I hope I've framed at least the start of a case showing that when people are not the biggest fans of programs such as welfare, there are reasons beyond disdainful views of those receiving it, or a desire for them to stay impoverished.
[1] - http://poverty.ucdavis.edu/sites/main/files/imagecache/mediu...
[2] - https://en.wikipedia.org/wiki/Supplemental_Nutrition_Assista...
So we agree on most of everything, except this massive, critical thing?
Even aside from that, your examples of shared values also don't strike me as universally agreeable. I find many of these apparently agreeable things just poorly defined. It's much more obvious why these issues are so divisive when you dig a little deeper.
What exactly would getting rid of all crime mean? Because people disagree on what should be considered a crime.
Getting rid of poverty? Well, what kind of poverty do you mean? Because poverty in many ways is relative. Getting rid of all poverty implies getting rid of all wealth inequality. Is that really something you believe everyone agrees on?
Everyone owning their own land? I'd be surprised if everyone agreed on private land ownership. Everyone has a nice house? What kind of house? I'm not sure everyone wants to live in a suburbia.
No I would not push that button. I think everyone should have a roof over their head, but only value-producing people should have a nice house. Also, I think that the reward should be based purely on value production of the individual, without any regard for ownership of capital (the means of production). Also, individuals should not be rewarded for capturing the value created by others (I.e. entrepreneurship). There should be a clear distinction between activities which create value versus the ones which capture value. We need a system which enables creators instead of entrepreneurs. By the way, the word 'entrepreneur' is French and it literally means 'someone who takes what is between'; not someone who creates value.
>> Another button to get rid of all crime? Yip, gonna be pushed.
If we had a decent system, I would agree. But in the current system, I think that crime might as well just be legalized. The definition of crime is extremely unclear. There are so many white-collar activities which are not considered crimes by the legal system but whose consequences are actually far worse for society than all the blue collar crimes added up together - In fact, you could argue that blue collar crime would not exist if it weren't for white collar operators rigging the game and effectively forcing poor people into a life of blue collar crime.
>> Another to get rid of all poverty?
Sure, but people need to be educated in order to be able to lead a socially conscientious life outside of poverty. If you lifted everyone out of poverty with the push of a button, they'd eat the conscientious middle class for breakfast the next day.
* Tech is incredibly, incredibly important and powerful, in its implications and effects on politics, society, geopolitics, everything
* It seems plausible there's far more going on than anyone knows about
E.g. rather than how domain names work, I'd rather people learn about privacy in the internet era.
Rather than say, RAM, I'd rather people learn about how social media is addictive and why, etc.
All of the above would be good too :)
Mainly I shelved my response because I doubt the policy people on the thread would have taken that very well. But the truth is looming large these days.
Having technology that's too complex for most people to handle hasn't been a new thing for at least a couple centuries.
Not all of our legislature's representatives have systemically good incentives. Regardless of their ability to understand complexity, complexity offers them the ability to obfuscate. Ignoring any bad faith or corruption, though, our legislatures are also a limited compute resource.
If the complexity of modern society rises by orders of magnitude, the tools legislatures previously used to tackle complexity may not scale to the new challenges, or may simply be too slow to address the issues while they are relevant.
One of my first areas of research was mapping legislature throughput to variations in rates of technological change. It appears most western governments addressed social issues arising out of the industrial revolution, for instance, exceptionally slowly. To summarize a few years of reading in the library stacks: the relationship between social complexity and quality of governance is not well understood.
Who, if not we, the engineers who really understand the implications of our actions, should stand up?
[1] https://www.rt.com/news/421101-auschwitz-accountant-guard-di...
I'm not seeing how this a moral hazard, do you just mean "immoral"?
Nope! That's called an externality. A moral hazard is a type of externality, but is very focused on a particular set of instances in the definition I provided.
A recent review I looked through indicates that the term has been historically used for different purposes in economics, insurance and probability literature. If the language of externalities is easier to understand for you, feel free to mentally substitute it in.
I could have couched the comment in the language of externalities and made a similar point, but it would lose the rhetorical flourish of hinting that legislatures themselves discount risk associated with their actions (or lack thereof).
Smoking instead of reckless driving may make it (slightly) more understandable.
A moral hazard is choosing a selfish course of action with negative external effects.
Consider that the internet began as the Arpanet just 50 years ago. For much of that time, the idea that the average person would be online from their own home would have seemed like crazy science fiction.
Not even the federal government, with its delusions of absolute control, can do anything (say, the FISA insanity). Net neutrality was a contest between two sets of big corporations. Patent battles were ruled in favor of American business despite clear violations (Qualcomm et al.). There used to be laws classifying encryption as munitions, and a silly attempt to sabotage it, AKA Clipper. Yet somehow AI systems built to discriminate, explicitly sold to opposing regimes, are not regulated, despite being much more consequential.
On the other hand, there is Comcast and Time Warner.
I am not saying it's fair, but other industries get away with actually killing people (e.g., fine-particle pollution from coal power plants). So bias is somewhat expected from how our system works; it's just not all bad.
Such a belief is justified because they have a strong history of being both.
You used Google while signed in and now the world knows you like Barbies?
Serious question, because I do not feel violated.
Assume furthermore that a future hypothetical company is equally not good and the two collude to do not good things together.
Now assume you’re motivated to do something about it. Sucks for you, they have already predicted that and cleansed you because they know everything you’ve ever read and it was a high probability that you’d be against them.
Consider another situation where hypothetical evil overlords know exactly what buttons to press in order to influence you to act a certain way. They press them regularly. You dance as expected.
Both circumstances undermine your personal autonomy and right to self determination. Furthermore they undermine the humanity that we share where our society is no longer controlled by the needs of humans, but instead arbitrary algorithms coded for whatever mundanely evil purpose.
And that’s the point (that you’ve more or less hit on), this is evil, but also so boring that it is actually a threat to society because no one will care enough to do anything.
It's not about step 1, which is to sell you more barbies. It's about step 4, step 5 where suddenly a bad actor has access to all this data and can see that people who make this search have a 10% chance of opposing the current leader/president/ruling party etc.
[1] https://www.themoscowtimes.com/2019/04/21/russias-sovereign-...
Doing something about a bad government is called terrorism. Rooting out such people before they act is called preventing radicalization, stemming the flow of hatred and bigotry (the #1 talking point in calls to regulate tech).
Knowing what buttons to press to influence populations to act a certain way is called campaigning. Belief in democracy is exactly equal to belief that whoever is best at that each election cycle should have the most power.
Creating laws to prevent the population from being abused and manipulated via big data is somewhat orthogonal to that, as it would be a separate set of laws. We can't not do something just because it might be abused by those implementing it. We have to include reasonable limits to the powers we grant.
We don’t do away with all rule of law because it is occasionally abused, we prosecute the abusers under said laws and create new laws when loopholes are inevitably found.
Why punish google?
You might trust your government enough to be okay with this, but don't forget that government is made up of humans, with their own personal whims and idiosyncrasies. Do you really want to live in a world where people around you might have the power to completely end you? Will you be able to speak freely in such a world?
This level of power is a very new thing, and the people higher up (wrongly) think that it's all good and they can keep going about their business as before.
Something has to change.
2) you quit the browser
3) three days later you open the browser to search for a new job
4) Depending on your work's settings, they now know that you are looking for a new job.
-or-
1) you have work gmail on your phone (stupid idea, but hey)
2) Work now has admin control over your phone, with the ability to remote wipe, backup and inspect.
3) fatfinger/perv/abusive admin can now do whatever they will to your personal life.
See "Forces of Labor".
https://sites.google.com/view/tech-workers-coalition/topics/...
Oh? I know of Joseph Weizenbaum, who else?
Maybe we just, as a society, need to adopt morality and ethics more deeply into our cultural DNA.
http://www.peo.on.ca/
The regulation is coming on very gradually since they're confused about how to define and regulate the title/practice.
The regulation won't necessarily help anything except to bar a lot of smart people from being able to write certain software. It will put us further behind the USA in our ability to create tech companies that matter (or that can compete with offshoring) and lead to higher unemployment as computers take away more jobs.
Luckily the restriction is barely enforced at this point, though it is getting stricter. I guess it will benefit me in the future when somebody will be forced to hire me because there weren't enough other applicants with professional membership... sort of a depressing thought
Has any of that happened with engineers in Canada? We still have plenty of engineers.
A lot of this comes off as any industry wanting to hold onto the engineer title without any of the accountability that should come along with it, or really any accountability at all.
I don't want a random self-taught engineer involved in building my house or a local highway overpass. So why would I want a random self-taught software engineer working on vital computer systems, on autonomous vehicles, or on the software that controls the pitch controls on a Boeing 737? Software is not just some shit-tier SaaS app. Official accreditation being hard to get means accepting being held to high (legal) standards. If that means lots of people have to go around calling themselves a developer instead of an engineer, then I'm fine with that.
It might seem depressing that we hold engineers to high standards and expect legal accountability for the things they sign off on or create or approve, but I think more accountability in the tech industry is a good thing. The modern tech industry and software engineers hold themselves to ludicrously low standards because they basically operate on the idea that innovation = good, and if it makes money it must also be good. I know fast food workers who are held to higher standards than software engineers. That many continue to call themselves engineers is mainly just a holdover from a metaphor that describes computer systems as "architecture".
Software is a big deal and it affects every facet of daily life in 2019. We shouldn't treat it like it's ephemeral stuff that has no consequences beyond the next VC exit or going public.
I think the fear is that the same restrictions will affect shit-tier apps as well as aeronautic control software. After all, it's not like you can't already say "to work on aeronautic software you must have credential X or Y". So what's the point of restricting the word "engineer" as a whole?
It's not a restricted word. It's a restricted accreditation that, should one carry it, obligates them to a certain set of standards and accountability.
The people making those apps don't need to call themselves engineers. They can call themselves developers. And the world will keep spinning. But a developer calling themselves an engineer is like a local contractor calling themselves a "residential engineer". An engineer can be a local contractor, but a local contractor can't simply call themselves an engineer, of any sort, unless they're accredited. That distinction matters because engineers have obligations that they are held to that a local contractor doesn't. That's the point.
Looks at ring on the pinky of his working hand
Yep.
Their goal is to make this a requirement for the handling of certain types of sensitive data, as one needs to swear by a code of ethics in order to become a bearer.
Speaking as a social scientist, I was shocked at how little of this made it into industry.
Further down in this thread the discussion has broadened to worker rights in tech-adjacent jobs (eg. amazon warehouses). It would be great if we could have one big union that fights for the rights of all workers, and if you think so too you should look into joining the IWW.
Yes, our society does need to adopt morality and ethics as guiding principles in our work, and organizing your workplace is not some orthogonal venture but is actually part of that goal.
Given how often the interests of various workers can and do conflict, this strikes me as an idea possessed of a wonderful opportunity to come into greater alignment with reality. As are those who advocate it.
I always ask would-be tech union organizers the same question: what's in it for me? Rarely do I get good answers back.
Unions aren't about moralizing. They're about self-interest. Show me that you're going to advance mine, and we can talk.
In more day-to-day union campaigns, you'd be organizing so that your coworkers can have a place to work where they're not harassed, underpaid, or bullied by systems out of their control into creating technology that harms people. Admittedly, in the US we don't have much leverage, but you could rest assured that if you were the victim of an injustice by your boss, they'd have your back too.
It's about taking back just a little bit of control over what you use your skills to work on, and how. Some people would rather take orders from their boss until they die, but especially in software there seems to be a collective desire for more accountability from the bottom up, and maybe a union is what that will look like.
Respectfully, and with a heavy heart, I must decline to do so. I am not interested in some nebulous "us", doubly so one that is poorly defined. I want to know how a union will improve my life. It's far too easy to sacrifice me in the interests of "us". After you, my friend.
I do not want to hear how a union will improve the safety and privacy of hundreds of millions. That's the job of governments. I want to hear about how a union will improve my privacy and safety. I want to hear about how this union will unconditionally have my back. I want to hear how you propose to make sure that a union doesn't turn into another way to bully, harass, and sit in arbitrary moralistic judgment of workers.
Alternatively, tell me how your union will expel - and render unemployable - practitioners who through ignorance, negligence, and recklessness endanger the safety and privacy of hundreds of millions. I have worked with more than a few engineers in my time who have been guilty of that. No warnings, no slap on the wrist, just up-front education and swift, harsh incentives to be careful and not fuck up. I know, I know, unrealistic and cruel, right? No need to wreck someone's livelihood, right?
You're absolutely right, of course. It's in every way about taking back control. You'll have to excuse me if I'm reluctant to take it back from one faceless and unaccountable group that doesn't care about me so I can hand it to another faceless and unaccountable group that doesn't care about me. That seems like a needlessly complex way to not solve the problems at hand.
Perhaps purporting to look out for the interests of an entire nation is too broad for any one organization. Some questions about your assignment of responsibilities that I think will clear things up: what if, hypothetically, the government was run by a ruling class who would leverage your need for income, healthcare, etc., in order to make you build things you didn't want to build? And specifically speaking of surveillance technology, what if you suspected they were building these tools for other countries as a trial run to later use on their own citizens? It's not that far-fetched.
Otherwise, I think you have some great questions to ask of any union you're considering to represent you. But you're also representing them -- if you felt like you wouldn't get a say in what collective actions were taken, that's not a union I'd encourage you to work with. I specifically mentioned mine because I know their track record and philosophy and would like to see more of it. There's no hierarchy, no inaccessible higher realm of bureaucrats making the decisions, it's workers all the way down! Ask your local branch about it. Although there are often differences in opinion, bullying or harassing workers is not tolerated.
Will it immediately and permanently improve your life? This ain't a fad diet and I won't market it like one, it's a chore sometimes honestly. I don't go to meetings and pay dues because I love it all the time. But I've seen results, and seen how much those in power want to keep us from realizing our collective ability to win some of their power through organizing. And I think you'll agree that the trajectory we're currently on is grim. Think it over.
Anyone who says "there's no hierarchy" is, without exception, wrong. All they actually mean is "there is no explicit hierarchy". There's always an implicit one. The big difference between the two is generally accountability - explicit hierarchies can have it structured in. Implicit hierarchies get to say "We're all just workers here!" to dodge accountability.
> And I think you'll agree that the trajectory we're currently on is grim. Think it over.
You're right. I do agree. That said, I also grew up in a place where short-sighted unions helped turn a thriving industrial area into a post-industrial wasteland.
Just because one course is grim doesn't mean I want to trade it for any other course.
I'd love to see an example of a union using their structural philosophy as an excuse to evade accountability. I don't suppose you had one in mind?
Come join the Evil Mastermind Technical Union! We have workshops on supertech, the latest coming advances, we sponsor work and mentoring programs for young Evil Geniuses (including internships!) as well as seasoned practitioners of Forbidden Tech!
Last year one of our top candidates improved the speed of facial recognition at checkpoints in a "repressive regime", leading to the arrest of a whole slew of dissidents who had previously eluded the authorities! These kids are going places - and the targets of their employers are going to the gallows/dungeon/gulag/graves, that's for sure!
Join TODAY!
EDIT: Getting downvoted, clearly not a popular opinion. Are people here not interested in having a strong corporate-independent organization to push the industry forward?
I've actually been a member of or contributed to all the organizations either of us mentioned, and the EFF has seemed to me like a better predictor, but that's just a guess.
So while I agree with the EFF on many policies, their corporate leanings are a turn-off and aren't, imo, solid ground for people doing day-to-day work in computing.
The traditional engineering disciplines have longstanding codes of ethics, plenty of professional organizations, and even professional licensure. This does not stop mechanical engineers from adding to the USG's weapons disparity or civil engineers from constructing more profit-center cages.
https://www.youtube.com/watch?v=DwbzxemJZIc
I was a student member for a couple of years, but beyond the occasional Government IT job advert which required BCS qualification and accreditation, they seem to be extremely quiet (or not very good at advertising).
Moreover, "coding" is a cross-cutting concern in that a lot of people who aren't formally programmers have been sold on the idea of being able to automate parts of their job by writing code, and I'm not sure how you'd yoke all of those disparate fields (especially academics outside of industry) into a single labor organization.
Wikipedia says it’s a trade union for “scientists, managers, and engineers” with 140k members across quite a few industries.
No mention of what they actually do though.
And the previous president, Denise, was also president of Uni, which acts to support workers worldwide, e.g. the rights of garment workers, and also supported the victimised organisers of the google walkout.
The same thing applies to coding.
Please mind the is–ought gap. Parent was talking about difficulties in unionizing, not about whether or not people in this or that sector should unionize.
From the author's website, listed at the bottom.
Imagine if journalists and other people posting to the www called "AI" what it actually is, instead of constantly portraying it as something futuristic to capture people's imaginations.
Even terms like "statistical inference" and "automated decision-making" could probably be explained using more common language to be comprehensible to the general public.
"AI", if it means anything at all, is fast becoming a way of identifying snake-oil products. The explanation is necessary.
In some companies, the culture is also that of being oblivious (or acting oblivious) to the harm caused by the work. Facebook is the best example of this. Has there been any conscientious objector in that company? People like the WhatsApp and Instagram founders, who grew a conscience after getting billions in their pockets, don't count (even if one or two left some money on the table when quitting).
Money tells many a true tale of what goes on in people's minds.
The article, sadly, does not mention Facebook and all the surveillance that it enables and grows, along with oppressing people by forcing them to use "real names".
Objectors' reasons are diverse and not as unified as NYT makes it seem - some believe that big tech's arbitrary censorship and subtle political bias is immoral, while others believe that big tech's inability or lack of incentive to limit ads, fake news, or distasteful speech is immoral.
No it's a cult. They wouldn't hire you unless you drank the koolaid.
I believe that it is dangerous to conflate job choice with conscientious objection.
Everyone should be held accountable for what they spend their day contributing to - this shouldn't be a special case.
Comparing this to a situation in which someone doesn't want to participate in compulsory murder, and in some cases risks being killed themselves, is not helpful.
[0]: https://en.wikipedia.org/wiki/Conscientious_objector#cite_no...
But on the other hand, there is a plausible parallel. The reason for strong controls on privacy is because every couple of generations governments tend to do a lot of damage to their own citizens.
If I thought the worst that was going to happen was the risk of being wrongly accused, public shaming or something, then privacy isn't really worth it. I trust law enforcement to be generally correct in assessing the situation, and if you are doing something shameful you may as well be shamed.
Privacy is important because it is one layer of protection against institutions participating in the murder of citizens, seen in Germany, China, and Russia in the last generation, and China plus others this generation. Who knows for the next generation? Change is very quick - there is no law of nature that says it won't be an Anglophone country. We can get technical about whether a conscientious objector has to be the last person pulling the trigger or not, but in practice that distinction is not an important one.
Most of them drive trucks now. A few lucky ones became artists or writers, but mostly a lot of bad outcomes.
One of these failed truck drivers even makes deliveries to the house of the person who forced him out sometimes, a fact which he loves laughing about.
Worth noting that probably everyone here has seen something like this happen. Not sure if it is specific to tech, but some of the egos and pathologies are extreme.
I think taking a moral stance usually leads people to move on of their own volition, as they tend to know the rules (and want them followed) and don't want to be associated with the fall out or with an organization that enforces them selectively.
Or, given the fact that any attempt to gather opinion about the communist party on the ground would instantly land you in jail, this is an unverifiable statement if anything.
I think it's fair to say most of the surveillance technology they're buying is not being used to help people.
Do not forget, the Chinese government is, right now, using surveillance technology to imprison Uighurs.
https://www.npr.org/templates/transcript/transcript.php?stor...
https://ctsp.berkeley.edu/ethical-pledges-for-individuals-an...
[1]: https://en.wikipedia.org/wiki/Reinforcement_(speciation)
1. Infiltrate, subvert, and implode the system from within (that's meant to be as Tzu-esque as it sounds).
2. Use parallel propaganda to get the public to realize that they have the ability to disable the system if they can collectively wield a moral, objective, and ethical compass (with all the McLuhan-esque difficulties that come with it).
I prefer the former. As dangerous as it is, I think it's strategically easier to wield covert action as a tool for effecting change than it is to attempt a unification of the masses. But, maybe a combination of both these approaches would be ideal.
I considered it once again myself last year, when I talked with a big-name company, and my version that time was "I can do good work on good stuff, promote that over the not-as-good stuff, and generally be a positive citizen of the company".
Of course, I was also thinking I could really use the big-corp money. But when they insisted that I'd have to do the new-grad hazing rituals, I decided I was kidding myself that I was being considered for any position where I'd have any input into goodness, and they were really only considering me as a junior frat pledge.
Option 3: pick the lesser evil, not the best option. Cut the money/power flow for the incumbent parties. Force them to compete. That is my hope for democracy's purpose - to force competition at the government level.
I do worry that people like me just staying out of these places leads to an echo chamber, and change from within is great. I wonder whether my individual impact on social good is greater trying to be moral at a moral company, or trying to be moral at an immoral company.
Luckily I don't have to twist my logic around to justify staying and vesting.
It’s a different world from 30 and 70 years ago.
70 years ago nearly all R&D spending was specifically military in nature. So it was pretty clear from the point of hire whether you were (or weren't) working on military or related (surveillance) projects.
30 years ago military and commercial R&D hit parity, and it was still quite easy to know going into an employment contract which side of the fence one was on.
Today, almost all R&D is commercial in nature, but with vast dual-use military/surveillance potential.
So it’s much more convoluted and nebulous to delineate project output, perhaps a major contributing factor to recent decisions and actions.
Combine that with the long-standing low supply of and high demand for talent, which enhances that leverage.
I agree that tech talent conscience is critical.
I only hope we see the same from Chinese software engineers working at Baidu, Alibaba, Tencent, Huawei, etc.
I wonder if the future will see a convoluted analog to the Manhattan Project, the Rosenbergs, and apex talent migration out of 1930’s fascist countries.
It's quite amazing the difference 10 years makes!
So yeah, I object to everything this article says and stands for. Sadly I can't do that under my real name because I need to keep my job.
We need a diversity of communities that can represent the software industry in more dimensions than the VC-funded side of things. Growth hacking and the singular focus on success are often in heavy conflict with any kind of value dimension, and ultimately greed and opportunism crowd out every other concern.
There is also often an extremely selfish, mercenary view of the world that is disturbing and can jade most of us; it takes some maturity to handle without letting it poison one's worldview.
Do they? I tend to find that Hacker News often has a lot of discussion of ethics, especially with respect to technology.
Try an experiment: argue in a thread that Facebook/Google/etc. are anything less than mustache-twirling-villain levels of evil, or that the people who work for those companies sometimes make actual human mistakes.
A genuine ethics board would not include people whose real job is to curry favor with government - it's sufficiently close to unethical that it's a bad way to get started - and would be generally fine with saying "This task is ethically permissible, but not an ethical imperative, and so if it's politically difficult to do it, we should just do other things."
The most recent example is Google's AI ethics council debacle. Activist employees were so stubborn that they wouldn't even let people have a discussion about AI ethics because one of the members held views they disagreed with. Half the authors of the American Constitution thought that slavery should be legal, but they managed to draft the foundation of modern day democracy anyways.
American democracy. It makes absolute sense that the founding fathers of America are the basis of modern American democracy, but let's not pretend that they lay sole claim to modern democracy.
On the other hand, what's the alternative? "Shut up and code"? Google isn't entitled to their work. We all have to operate by our own moral compass at the end of the day, and if you don't feel comfortable with what you're working on, it doesn't seem unreasonable to resign.
This is a story of democracy happening in spite of the founding fathers, not because of the founding fathers. It was, as you may know, a long (multiple centuries) and bloody (a pretty big war, plus lots of extrajudicial lynchings etc.) road from what they drafted to universal suffrage. It's still in progress in many ways, and a lot of people were hurt in the progress. So this seems like a pretty good argument that we should not let people who have views that are contrary to the long-term goal be setting the direction of how to get to the long-term goal.
Also, it just makes sense to keep history accurate. There were people both opposed to slavery and those for slavery. Some founding fathers had a financial interest in keeping slavery going. The people who opposed it were disadvantaged in the competition for power.
And it is just as easy to praise someone when you're not the one they're hurting.
The Constitution's explicit protections for both slavery and the slave trade were written in by the framers; they did not appear several decades later. The concrete divergence between UK and US policy on slavery began no later than the UK's 1807 ban on the slave trade, which was less than two decades after the adoption of the Constitution, while the framers were in many cases still politically active (Jefferson was President, for instance).
They did it because France (a major power) helped them to do it (France being at war with Britain at the time). Without France the American revolution would have failed.
If by "democracy" we mean "there is no privileged class, and all people born in the land can vote if they are of age and have not had their privileges removed by due process of law," or we mean "that all Men are created equal, that they are endowed by their Creator with certain unalienable rights, and among these are Life, Liberty, and the Pursuit of Happiness" (the Declaration of Independence), or we mean "Everyone has the right to take part in the government of his country, directly or through freely chosen representatives.... The will of the people shall be the basis of the authority of government; this will shall be expressed in periodic and genuine elections which shall be by universal and equal suffrage" (Universal Declaration of Human Rights), then yes, I think the people responsible for the Constitution were quite adversarial to that goal.
The United Kingdom, in freedom from whom the US founding fathers supposedly established democracy, banned slavery over half a century before the US did and slowly but steadily allowed more people to vote for Parliament during the 1800s. At the time the US Constitution was established, only 6% of the US population could vote, according to https://web.archive.org/web/20160706144856/http://www.archiv... .
While accurate, your point on slavery misses the point too. The parent comment was implying that significant democratic progress was made despite contentious disagreement on important issues like slavery.
Your "in spite of" comment comes across as if it's saying "their system was flawed, therefore it didn't make any meaningful contributions and society would have been better without it". Don't let perfect be the enemy of the good...
This exemplifies the parent comment's original point about Google's AI ethics committee: throwing the baby out with the bathwater because they were unable to meet the expectation that everyone agree on everything. If that maxim isn't met, they just take their ball and go home, and the overall system is no better than when it started.
Those who “disagree with” the Heritage Foundation probably take offense by their propensity to outright lie and manipulate to achieve their objectives, most of which run contrary to the values of those working on Google’s AI teams. In light of that, I highly doubt they see Heritage as good-faith actors.
IMO, the activists outright lie to and manipulate the public to achieve their objectives by oversimplifying the positions of the Heritage Foundation. Regardless, it doesn't really matter, because the Heritage Foundation represents the beliefs of a large proportion of the population, so their voice is important.
Worst kind of defeatism.
You'd rather have weapons in the hands of pacifists than warmongers, but that won't happen if pacifists "conscientiously object" and throw down their arms; police brutality isn't eliminated by making it a job that appeals only to bullies; and software development doesn't become ethical by asking everyone with a sense of ethics to quit.
What you need is for people to communicate and understand the world well enough that they can find solutions that are as ethical as they are effective.
Not only that, I seriously doubt that Google would punish anybody who would simply request to not work on that particular project, instead of accommodating them by reassigning to a different team. So this is a very low bar to clear.
Does it mean that somebody else will do it? Probably, but responsibility matters. If you're the one who assists in illegal or unethical activities, then you're personally responsible for them.
There are many people with stories of being unable to find work because they can't hack the modern coding interview process. One way to escape that trap is to take a job working for a defense contractor. I did it. I was over a barrel, facing an insane spousal support payment and I needed a job fast. I took a job with a defense contractor. I'm not proud of it, but I needed to live. Was I motivated by greed?
1) Many work on oft maligned technologies because they simply see no moral conundrum. It's common to see tools as amoral, and believe moral responsibility in the use of tools for good or evil lies solely with its wielder - not any inventor or refiner.
2) Or, they may see their work as their moral duty as a competent professional in a society fending off morally worse alternatives. This is the patriotic or "necessary evil" view.
3) A few utilitarian engineers see the refinement of technology in and of itself as always a subtle moral good, as they believe the elimination of inefficiency is the primary reason why humanity no longer suffers as much as it used to. There's also the assumption that the development of any kind of tech leads to unanticipated benefits to other areas of life (this is a common demonstrated pattern in defense R&D).
4) There's also the classical Greek justification of Eudaimonia. Simply doing what one does best is seen as the most moral thing one can do, as it eliminates any outward and internal dishonesty about one's self.
PS: Here is a great video by SmarterEveryDay explaining what he does and why https://www.youtube.com/watch?v=qOTYgcdNrXE
Perhaps not greedy, but definitely selfish.
It's YOUR student loans and YOUR family (thus, YOUR genetic code) that you're helping without regard for anyone else.
Don't pretend like only helping yourself and your spouse/children is anything other than self-serving.
Self-serving is fine in moderation, but a lot of people foolishly seem to hold it as an axiomatic good.
> Perhaps not greedy, but definitely selfish.
This place is beyond parody sometimes.
At an object level, it's an important point to make and re-iterate.
On a meta-level, I am annoyed at how much special pleading I see from the NYtimes about the evils of tech, while it's simply par for the course for -- say -- Finance to act with near impunity when managing, fund raising, and dealing with autocratic and communist regimes.
As long as there's someone else still willing to compromise their moral grounds for the value returned, it's still going to get made. That's just how capitalist societies work: people need to make money, and someone is going to fulfill that objectionable need because they have a need to fill their own coffers.
One need only look at the fervor against Facebook, and realise that it still employs over 35k people, to understand that, at the end of the day, people are either willing to do the task or in a position where they have no other choice (considering that the US doesn't have much of a social safety net).
So, while "conscientious objectors" might be good for a single business direction, it - sadly - does nothing for the overall scheme of things.
...but even if all of the engineers in North America organised to create manifestos, under pseudo or actual unions, dictating what the future of tech should be used for, this does nothing to dissuade the partners (e.g., the Five Eyes) from doing the same and then sharing that technology back with their own governments.
I concede that maybe I'm misunderstanding the author's intent, but to me the premise seems too idealistic to be fruitful (overall) in modern society.
[0] - https://www.bloomberg.com/news/articles/2019-04-23/pentagon-...
Every time I've pointed out that most people simply don't care about the level of tracking Facebook and Google do, I'm endlessly shouted down with words to the effect of "No! If they understood how much they were tracked, they'd care!" But I don't believe that's the case; I think most people simply don't care, given the value they get from these companies.
> these people want their services, actually expect them for free, continuously at the same standard.
Source? I'm sure most people would be willing to pay for a service in exchange for some reasonable level of privacy.
> don't have any real solutions on how they would be paid for other than some handwaving about breaking Google/Facebook up which apparently will solve everything.
The "breaking up Google/Facebook" argument is completely separate from the privacy argument. Strawman much?
> most people simply don't care about the level of tracking Facebook and Google do
Source?
> "No! If they understand how much they were tracked they'd care!", but I don't believe that's the case
Again, source? If you told the average person the amount of information that Facebook and Google have inferred from them, do you seriously think they would not care?
---
There's no doubt that the data collection practices of Google and Facebook are pathological. There is nothing stopping them from taking a more privacy-centric approach, like Apple does, and still making a ton of money.
But people have the mindset that corporations should be exempt from morality and that this is grounds for tossing considerations like privacy out the window. I'm getting the feeling you feel similarly.
Most people probably don't mind seeing advertisements. It's just completely ridiculous that Google or FB think they need to track individuals in order to provide targeted advertising.
If someone searches for 'football', then just show a fucking ad that is relevant to 'football' along with the search results.
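The contextual approach described above can be sketched in a few lines: the ad is selected from the query text alone, so no user profile, cookie, or history is involved. This is a minimal illustration, not anyone's actual ad-serving system; the inventory and keywords are invented for the example.

```python
# Contextual (query-based) ad selection: ads are matched against the
# search terms only. Nothing about the user is stored or consulted.

AD_INVENTORY = {
    "football": ["Shop football jerseys", "Stream live matches"],
    "laptop": ["Back-to-school laptop deals"],
}

def pick_ads(query: str) -> list[str]:
    """Return ads whose keyword appears in the query string."""
    words = query.lower().split()
    ads = []
    for keyword, creatives in AD_INVENTORY.items():
        if keyword in words:
            ads.extend(creatives)
    return ads

print(pick_ads("latest football scores"))
```

A search for "latest football scores" matches the "football" creatives purely on the query text, which is the point of the argument: relevance here comes from the search term, not from tracking the individual.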
For that, you need tracking.
There will always be someone that doesn't have any moral compass about their work or income. That person is simply not me.
I don't know what country you're from, but would you prefer to have no military, weapons, or defenses? Like it or not, there will always be other countries with weapons, and they may not always be friendly towards you or your interests.
I wouldn’t want my work to have anything to do with killing people either, but I also think the defense industry (while unfortunate) is probably here to stay, for good reasons too.
[0] https://en.wikipedia.org/wiki/List_of_companies_involved_in_...
At some point, @jcs of lobste.rs complained of, quote: "capricious modding" and I think that much rings true.
Perhaps most crucial is the undeniable reality that once you place a comment on HN, you are no longer the owner. You lose the ability to delete or edit very quickly, and the statements posted here become locked and permanent. Given this, it is ill-advised to ever attach an authentic persona to any user name on HN. If your opinions run contrary to the political winds in this climate, real problems could come your way, and in an emergency there is no way to disentangle yourself from data that exists on this site.
That's okay, since HN is decidedly low-fi in many ways. But given that, you'd hope for a lighter, more permissive hand on the moderation. Not so. Not at all.
So, with an iron fist in play, why pull one's own punches? Let the harshest opinions fly, if they'll assuredly be stamped out by such stifling interference. No need to feel guilty for resisting the iron fist. After all, opinions on the internet aren't really what's wrong with the world right now. They're just an oh-so-popular scapegoat.
How do you like my objections to HN's subculture? Conscientious enough?