This essay is a modified version of a talk I presented at Daftarkhwan (a co-working space in Lahore).
If what has brought you here is a search for answers then unfortunately you will not find any. Best case scenario, some clarity shall be attained but most likely you will go away with further questions.
That, however, IS the goal.
To spawn a trajectory of thought about the state of affairs in the field of technology, and about how we ought to begin peeling away the flashy layers to look at its effects on society and the environment. This is the beauty of philosophy: no matter how far we dig, we can and will keep on digging. To some it might seem frustrating and futile. But that is the crux of our existence. Technology is a tool that extends our capabilities, and so it raises the same dilemmas and moral questions that have existed for us ever since we became Homo sapiens. Like any tool, it can be used in a variety of different ways, for good and for ill. Which is why it is important to think about what place ethics serves in the age of technology.
Negligent Design
In 2012 Apple launched its infamous Apple Maps to provide an alternative to Google’s own offering, which had a behemoth following, and to move away from reliance on a third-party/rival company's service. It soon became obvious that the app had been released earlier than it ought to have been. Complaints began to arise of inaccuracies in data, a lack of transit directions, mislabeled locations, bad renderings, and in some cases an odd choice of satellite imagery.
In Australia, for example, drivers were being directed off into the desert due to faulty location information, stranding people in the middle of nowhere during the hot summer months. Local authorities had to issue a statement urging people to stop using Apple Maps for navigation in order to avoid any untoward incidents. [1]
In the 1980s Canadian and American hospitals were using the Therac-25 as a therapeutic option for cancer patients. Nicknamed the ‘cancer zapper’, it was a radiation therapy device, then in its third generation. Its predecessors, the Therac-6 and Therac-20, had a computer terminal but were primarily manually operated. What differentiated the Therac-25 was that it relied solely on a computer terminal to operate the device. During development the manual overrides and the hardware interlocks from previous versions were removed, leaving the computer code to handle the safety mechanisms.
Image from https://hackaday.com/2015/10/26/killed-by-a-machine-the-therac-25/
Between 1985 and 1987 a series of incidents with the machine stirred up a controversy. While undergoing therapy, six patients complained of experiencing significant pain accompanied by a burning sensation. Three of those patients ended up dying; the others were left with permanent injuries. Upon investigation, a plethora of negligent behavior was uncovered.
A software bug stemming from a concurrency issue would, in certain circumstances, leave the device in a malfunctioning state. Because the hardware safety mechanisms had been removed and no software mechanisms existed to handle such a situation, the device would end up blasting patients with high doses of radiation. The software engineer who had written the code had no prior experience writing real-time systems, allowing such an error to slip through. Bugs are part of the process of programming and mistakes are only human, however there are methodologies in place - such as code reviews and unit/feature tests - to catch and mitigate most issues. Certainly a medical device would need to meet stringent quality and safety control requirements in order to be approved. Yet no experts were consulted, and neither code reviews nor unit tests were adopted during the development cycle of the Therac-25. It is baffling that such negligence occurred while building a life-critical medical device! [2]
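To make the failure mode more concrete, here is a deliberately simplified, hypothetical sketch in Python (the actual Therac-25 software was custom assembly; none of these names or details are from the real system): a slow setup task latches the treatment mode it sees at one moment, while the operator can still edit the prescription, so the beam power and the safety hardware can end up configured for two different modes.

```python
import threading
import time

# Hypothetical sketch of a Therac-25-style race condition, not the real code.
# A slow "setup" task latches the mode it sees when it starts, while the
# operator can still edit the prescription. With the hardware interlock gone,
# the beam power and the target position can reflect two different modes.

class TreatmentMachine:
    def __init__(self):
        self.mode = "electron"        # low-power mode, no target needed
        self.beam_power = "low"
        self.target_in_place = False

    def apply_power_setting(self):
        latched_mode = self.mode      # reads the mode *now*
        time.sleep(0.5)               # magnets/power supply take time to set
        self.beam_power = "high" if latched_mode == "xray" else "low"

    def position_target(self):
        # Fast task: positions the target based on the *current* mode.
        self.target_in_place = (self.mode == "xray")

    def fire(self):
        # Software-only check; no hardware interlock backs it up.
        if self.beam_power == "high" and not self.target_in_place:
            print("OVERDOSE: high-power beam fired without the target!")
        else:
            print("Dose delivered as prescribed.")

machine = TreatmentMachine()

machine.mode = "xray"                 # operator selects X-ray mode
setup = threading.Thread(target=machine.apply_power_setting)
setup.start()
time.sleep(0.1)                       # setup task latches "xray"

machine.mode = "electron"             # operator quickly edits the prescription
machine.position_target()             # sees "electron": no target inserted

setup.join()                          # still ramps power from the stale mode
machine.fire()                        # prints the overdose message
```

In this toy version, a unit test that exercised rapid edits, or a code review by someone versed in concurrent systems, would likely have flagged the latched mode as a hazard; a hardware interlock would have made the bug survivable.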
Both these examples highlight a question that I feel is not asked often enough:
As programmers and technologists, do we have a responsibility towards what we create, and also towards the people who use it?
An Oath to Well-Being
Sometime between the 5th and 4th centuries BC lived a man whose words resonate even today in the medical community: Hippocrates. His oath became the basis for the ethical standards expected of medical practitioners around the world to this day, defining principles such as respect for medical confidentiality, doing no harm to the patient, and making the patient's health and recovery the primary goal.
This is understandable. As a caregiver one forges an intimate relationship with the other person, especially at a time when they are vulnerable. It demands a responsibility to seek their best interest.
Technology forges a similarly intimate bond with the individual, and more broadly at a collective, societal level. Our reliance on, dependency on, and trust in such systems have risen to the point where not having them, or having them malfunction, can be outright detrimental. And unlike in the medical field, where the bond is limited to a time of ailment, technology is pervasive in our lives - "in sickness and in health" - through time and space.
So why would that not warrant a need for ethical standards?
“I promise to work for a better world, where science and technology are used in socially responsible ways. I will not use my education for any purpose intended to harm human beings or the environment. Throughout my career, I will consider the ethical implications of my work before I take action. While the demands placed upon me may be great, I sign this declaration because I recognize that individual responsibility is the first step on the path to peace.” - Joseph Rotblat [3]
“Technology, science, and business should be founded or created or innovated, shaped, on the basis of what is actually good for human beings. Part of the problem is this idea that an individual can create a better world by him or her (and usually him) self. This idea of rugged individualism actually goes back a long time in American history, and is one of the problems that we're seeing in science, technology, and business.” - Greg Epstein [4]
Both these quotes refer to having a holistic view of the world where we should not see ourselves as trailblazers looking to make a quick buck but as parts of a grander whole, an intricately tied system – society, the environment – where thought and empathy are essential ingredients for creating responsible and ethical solutions.
The Importance of Being an Ethicist
Currently a lot of the talk about ethics in technology is being driven by the increased application of artificial intelligence in our daily lives: autonomous vehicles, predictive policing, and targeted marketing being some examples. This is for good reason: computers making decisions and judgments based on a mix of algorithms and historical data open up a Pandora’s Box of ethical dilemmas which need to be addressed. Is the historical data inclusive of different demographics of society? Who is liable if a human is injured or killed by an autonomous machine?
However, as the prior examples of Apple Maps and the Therac-25 suggest, talking about ethics in technology is just as important regardless of the presence of AI. And as Douglas Rushkoff notes in his article, it is fancier to think and talk about science fiction scenarios of killer robots or a global zombie pandemic; but when it comes to ethics, justice, and the human condition, are we willing to ask even the most basic of questions?
The question of ethics spans not only the product or service being used by people (deployment), but also the way it is developed and produced (design and implementation), and the handling of the unintended negative outcomes (maintenance) that follow from the existence of such a system.
Take, for example, Google’s ‘Don’t Be Evil’ or Facebook’s ‘Move Fast and Break Things’ motto (both of which have incidentally since been changed). These words are a work ethic inculcated and espoused by employees at these companies. If we were to take the leap of faith and consider such taglines to be a dictate for an ethical road map, then we rather quickly fall into a quagmire of loose ends. One statement has a moral basis (Google's) whereas the other (Facebook's) is completely devoid of any morality. With no clear guidelines, it is only to be expected that selective interpretation and application will follow.
‘Move Fast and Break Things’, the ethos championed by Facebook, originates from a mold-smelling garage where a hacker is furtively working towards the next big thing. The idea is to quickly prototype a concept and test it out; if it breaks you chuck it, if it sticks you iterate on it and improve it. However such an ethos, applied to a product or service used by millions of people around the world, can have unexpected consequences (once again, Apple Maps).
Consider the ‘social experiment’ Facebook conducted back in 2012, where they manipulated the information people saw on their news feeds. By suppressing positive or negative emotional content for over 689,000 people, they were able to make people feel more or less positive. And although Facebook claimed it was done to improve their services (and I can imagine many people agreeing with this statement) and was consistent with their own data usage policies, internet activists, politicians, and researchers alike found such whimsical decision-making 'disturbing' and 'scandalous'. [5]
The issue here lies with ‘informed consent’, a prerequisite for carrying out any research on human subjects. A person who is to become a participant in the research must be informed, asked whether they would like to take part, and given the chance to actively opt in or out. In the case of Facebook's experiment this consent was assumed under their Terms of Use policy, which is clearly a violation of the aforementioned requirement.
However it also sheds light on a more troublesome work ethic: the willingness to use any means necessary in order to achieve the desired outcomes (increasing shareholder value and profits) without considering or taking on responsibility for the consequences that follow. Could it not have been possible that, due to this experiment, someone already in the throes of depression and contemplating suicide was led to follow through with their thoughts? Such a lack of empathy on the part of those working on this experiment (and features like it) is egregious, for it reduces the people using the product to the prevalent, dehumanized concept of a "user" rather than a being with inherent emotions and feelings.
Back in March, Gizmodo featured an article uncovering a contract between Google and the US Department of Defense under which Google was helping analyze drone footage using artificial intelligence. Google stated that its own infrastructure was not being used for the work, nor was the tech being used in any active drone warfare (as if to diminish complicity); instead it was lending its knowledge and expertise in computer vision and machine learning to carry out tasks such as to "identify vehicles and other objects in drone footage, taking that burden off analysts". [6]
Image from https://gizmodo.com/google-is-helping-the-pentagon-build-ai-for-drones-1823464533
Google attempted to downplay its involvement, but internal emails among upper management revealed a desire to forge closer ties with the Pentagon in order to win lucrative military contracts in the future as well. When this became public Google received a lot of flak, externally as well as internally. A petition was signed by over 3,000 employees demanding Google withdraw from the project, and a dozen employees even resigned over the company's inaction. Although Google bore the brunt of the public's ire for winning the bid for the contract, it should be noted that the likes of IBM, Microsoft, and Amazon had also been in the running.
One could argue it’s simply business and that the technology is not being used to make autonomous weaponry. But if a machine learning algorithm is currently being used to identify objects, is it not entirely possible that in the foreseeable future it could be expanded to make decisions to neutralize targets as well? After all, a machine learning algorithm does "learn", and the decisions that drone operators make could be fed back in a loop to give it more relevant targeting data. Are we willing to open that can of worms and pave the way for a future where drones decide who is an enemy combatant and a threat and neutralize them accordingly, or act after calculating what amount of collateral damage is acceptable? That would be the ultimate transfer of responsibility, where any human culpability is delegated to a surrogate that has no conception of guilt or emotion.
It won't stop at the military either. Such weaponry could eventually be added to the arsenal of police forces. How can we be sure an autonomous police drone will not make erroneous decisions at times and put innocent civilians in its crosshairs? Such a possibility is not far-fetched, nor is it impossible, for such wrongful justice exists even today with human cops. One could argue that it might reduce the erroneous decisions human cops make, but then how would we hold an autonomous drone liable if it does make a wrong decision?
Would the responsibility lie with the drone? How would we hold it accountable? Would it be decommissioned after a subsequent systems investigation? Or would attempts be made to rectify its heuristics before putting it back into the field? Would the data scientists who trained the drone be responsible? Or the programmers who wrote its AI? Or the company that built it? Or the police department that owns it?
All this lands in the realm of science fiction and conjecture, but if we do not start thinking along these lines then, as always with technology, we will lag behind in formulating a discourse around its political, social, economic, and ethical effects, and end up addressing them after the fact. For all we know it may even evolve the way we perceive justice, but the discourse needs to take place for that to happen.
However, there is a certain mythic aura around AI and machine learning that needs to be dispelled.
The Fault in Our Machines
Undeniably machines are faster at computation, sift through mountains of data with relative ease, and are generally better and more efficient at certain cognitive functions that we humans partake in. But it would be a mistake to conflate this with the belief that machines are better and more adept than humans at everything - for instance, that they will make judgments free of the cognitive biases that plague humans.
The truth is they are best at replicating and exacerbating the very biases that humans have. Being the creators of such systems, we end up imparting our very imperfections to our creations. The training data curated by data scientists and engineers to build the models contains the subjective reasoning its human teachers hold. Facial recognition systems harbor such issues: the gender of white males is identified with 99% accuracy, while dark-skinned females are identified with only 65% accuracy. Another finding was that facial recognition systems built in Asia were better at identifying Asian faces than white faces (similar to how systems built in Europe and America were better at identifying white faces). [7] These biases are not intentionally baked into the models; they arise from the use of non-inclusive training sets and from assumptions about race and gender that are simplified to a fault.
Similarly, even historical data carries such preconceived notions and biases. Take, for example, the concept of predictive policing, where historical records of reported crimes in a city are used to predict the time and location range of future crimes in order to deploy patrols more effectively. According to a report by Upturn, crime reporting is itself skewed (between 2006 and 2010, 52% of violent crimes and 60% of household crimes went unreported) and enforcement also ends up being subjective (even though marijuana usage is similar between white and black communities, more black Americans end up being apprehended for possession). The data is not only skewed but also unrepresentative, resulting in an inaccurate predictive model that only reinforces existing biases. [8]
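To illustrate how such a feedback loop can entrench itself, here is a small, hypothetical simulation in Python (not any vendor's actual algorithm; the neighborhoods, rates, and numbers are all made up): two areas have identical true crime rates, but one begins with more recorded incidents, so it receives more patrols, which record more incidents, which justify still more patrols.

```python
import random

random.seed(42)

# Hypothetical sketch of a predictive-policing feedback loop.
# Both neighborhoods have the SAME underlying crime rate, but
# neighborhood A starts with more recorded incidents (it was
# over-policed), so the "model" keeps sending patrols there.

TRUE_CRIME_RATE = 100          # actual incidents per week in each area
recorded = {"A": 10, "B": 5}   # skewed historical record
patrols = {"A": 0, "B": 0}

for week in range(20):
    # "Model": allocate 10 patrols proportionally to recorded crime.
    total = recorded["A"] + recorded["B"]
    patrols["A"] = round(10 * recorded["A"] / total)
    patrols["B"] = 10 - patrols["A"]

    # More patrols means a larger share of the identical true crime
    # gets observed and added to the record.
    for area in ("A", "B"):
        detection_rate = min(0.05 * patrols[area], 1.0)
        observed = sum(random.random() < detection_rate
                       for _ in range(TRUE_CRIME_RATE))
        recorded[area] += observed

print("Recorded totals after 20 weeks:", recorded)
print("Final patrol split:", patrols)
```

By construction nothing about the two areas differs except the historical record, yet the simulated model ends up convinced that one of them is the more dangerous place and keeps allocating patrols accordingly.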
As with Tesla’s Autopilot crash fatalities, an investigation by the NTSB (National Transportation Safety Board) was launched after a pedestrian crossing the street was fatally struck by an autonomous Uber vehicle in Tempe, AZ. A safety driver was present but had not been paying attention to the road. Blame was initially shifted onto her (similar to how Tesla claims that Autopilot users ought to remain attentive and keep their hands on the wheel), but with the NTSB’s report it seems there is more to it than meets the eye.
Based on the vehicle’s logs, an ‘unknown object’ was noticed on the road 6 seconds prior to impact. Due to low-light conditions the vehicle was unable to accurately ascertain the trajectory of the object, or what the object was for that matter. It was only about a second before impact, once the pedestrian was in view of the headlights, that it identified a bicycle (which she had been pushing along beside her).
Although the car had its own emergency braking system, it was deactivated while the car was in autonomous mode so as to reduce erratic vehicle behavior. On top of that, no visual or auditory cues were provided to alert the driver to take evasive maneuvers. The car’s sensors were also not effective at detecting objects in low-light conditions, something that could have been mitigated with the use of thermal sensors - and Uber is not the only autonomous vehicle company lagging in this regard. Such negligence on the part of Uber ought to have been addressed before putting the cars onto the streets for testing.
Dawn of the Planet of Ethical Tech
Not all is sour in the valley of silicon and glitter. In recent months greater scrutiny has been placed on companies like Facebook, Twitter, and Google. Although congressional hearings have focused more on the political effects of these technologies, a wider public debate is ensuing about the ethical issues that lie beyond the political. And work is being done across different companies to rectify prior transgressions.
Consider Jigsaw, a technology incubator founded in 2010 at Google (now a subsidiary of Alphabet Inc.), whose mandate lies at the intersection of technology and geopolitics. Termed a “think/do tank”, Jigsaw looks at issues such as extremist indoctrination over social media, DDoS attacks, and internet trolls, and ideates and implements solutions to counter them.
One such solution is the redirect method for countering extremist indoctrination. Instead of filtering out the extremist/objectionable content that people search for online - removing such content will not stop people from searching for it elsewhere - content that counters such views is surfaced around the search results. This way the person has the opportunity to read up, of their own volition, on views that could mitigate the indoctrination. And services like YouTube have been making use of the method in production. [9]
Such an effort is a recognition of the double-edged sword that tools like search engines and social media are: they can be used to democratize and disseminate information (which is what most techno-utopians solely focus on), but they can also be used to spread misinformation and hate. Recognizing the issue and trying to find ways to counter it is a step in the right direction.
NOTE: During the talk an audience member did share his disdain for Jigsaw’s methodology, stating that the way they define (and thus ascertain) extremist content is narrow and limited, and that as a result the redirect method would work in only certain cases. This is a valid concern and one that should not be taken lightly, as it is one of the things I am actually advocating against, i.e. creating half-baked solutions that work only in particular instances. It implies a lack of time and effort put into understanding the problem in its totality in order to create an effective solution, or, equally bad, an ignorance that is blinding.
Ever since the advent of the iPhone, the smartphone industry has seen yearly release cycles of new phones, each trying to outdo the other with better processors, slimmer form factors, and bigger screens. This of course has led to phones becoming less and less repairable, along with the use of materials that hinder easy recyclability. Consumers are enticed to buy a new phone each year, creating a plethora of waste as they discard their old ones. The questions ‘is this necessary?’ or ‘do I need this?’ are subdued as consumers are coddled by the mention of faster processors, better cameras, and other sleek and glamor-inducing visuals.
Claiming to be the world’s “first ethical and modular smartphone”, the Fairphone 2 is a paradigm shift in an industry plagued by hyper-consumerism and unsustainable practices. To achieve the aim of long-lasting design, the phone is built of modular components, enabling repairs or upgrades when necessary and making it easier to recycle. Striving to be ethical, the company made efforts to ensure that minerals such as tin, tantalum, tungsten, and gold are sourced from conflict-free mines. [10] Their efforts will hopefully spawn conversations among major players in the industry about the need for more discretion in acquiring raw materials so that communities are nurtured instead of left in strife, about building things to last rather than churning out products each year to garner more profits, and about adopting environmentally sustainable business practices rather than adding to the problem of e-waste.
Another rocking of the boat, mentioned earlier in this post, came in the form of the backlash Google received after signing the Project Maven contract with the US DoD. This resulted in an online petition signed by over 3,000 Google employees, and a few resignations as well. In June 2018, Google announced the contract would not be renewed once it expires in 2019. [11] The company also unveiled a set of ethical principles to guide the development of AI, stressing areas such as avoiding the reinforcement of bias, testing for safety, and not using AI to build weaponry or surveillance systems.
Recently another petition has been going around at Google, this time about a censored search engine for China. Irked by this latest venture, employees are protesting the work being done as they consider it a violation of human rights and free expression. Feeling that the ethics principles defined after the Project Maven petition were not encompassing enough, they have been demanding more transparency and oversight, because “Google employees need to know what we’re building.” [12]
Although this bit was not in the talk, I recently learned about a feature introduced by Facebook in fall 2017 and felt it would be important to mention here. It certainly is not absolution for their prior mistakes, but just as in the case of Jigsaw, it shows a desire to make more empathetic decisions.
The feature is an AI system that can ascertain whether a person has suicidal intent. It determines this based on different signals, such as a friend leaving a worried comment on a post, or someone expressing such intent in a post or a live video. When such a case is detected, authorities are notified in order to take appropriate preventative action. [13]
Consciences have been stirred and conversations, albeit delayed, have begun. As with the aforementioned examples, an article published in Vanity Fair last month corroborates a growing trend of people working at Silicon Valley companies taking stronger ethical stances in recent times.
However, for this to become part of our regular discourse and take hold in our collective consciousness, we need to start thinking about ethics much earlier on. It requires rethinking current utopian conceptions of technology, which always tend to frame it as an upward progression for human civilization. By dispelling that myth, technologists can start looking at technology for what it really is: an amoral tool that can be used in whatever creative capacity it affords. Without principles to guide the creation and usage of technology, it is not a certainty that progress (used in the broadest sense of the term) will follow.
Including ethics in the computer science discipline at universities is a first step towards ingraining such a discourse. The University of Texas at Austin has an offering on the ethical foundations of CS, while Harvard and MIT have formulated a joint course on ethics and AI. Stanford is also working to incorporate ethics into its tech curriculum by this fall semester. [14] Other universities, however, need to join in as well. Courses exploring ethics in technology need to become a standard and essential part of the computer science curriculum, just as Algorithms and Data Structures or Object Oriented Design are, in order for such conscientious thinking to arise in a technologist.
Ethics Inc.
Individual responsibility is a necessary ingredient for creating collective responsibility. However, it is also imperative that tech companies create standards that direct their employees in thinking about their work from an ethical standpoint, factoring in the social, psychological, economic, and political dimensions of technology.
Such a holistic approach towards the implications of technology may sound idealistic, a tall order, and debilitating to the ‘move fast’ motto. However, that is exactly what companies need to move away from. More thought needs to be put into, for example, how the introduction of a new technology affects people, what socioeconomic disruptions arise because of it, whether yearly release cadences for products/services are necessary, how the primary stakeholder of a technology ought to be the end user (and therefore any features added be geared towards their benefit), and what to do to mitigate or resolve the issue if a technology is misused for harmful purposes.
The Association for Computing Machinery (ACM) recently updated its code of ethics in order to provide students and professionals in the technology industry with clear guidelines on how to bear responsibility in the different facets of their work - from conceptualization to implementation to maintenance - ensuring it is driven to achieve the public good. It is a good basis upon which tech companies can build their own contextualized ethical frameworks.
I highly recommend going over it as it provides a strong base to understand one’s responsibilities to society and how to understand and apply those principles. Here are a few points that stood out to me:
Contribute to society and to human well-being, acknowledging that all people are stakeholders in computing.
Honor confidentiality
Strive to achieve high quality in both the processes and products of professional work.
Foster public awareness and understanding of computing, related technologies, and their consequences.
Recognize and take special care of systems that become integrated into the infrastructure of society.
I’ll end with a quote from a sage of 13th-century Persia whose spiritual poetry, read by millions the world over, bears the wisdom of love, harmony, and union. May it serve as a helpful reminder that for paradigms to shift we inevitably need to take that first step.
“Yesterday I was clever, so I wanted to change the world. Today I am wise, so I am changing myself.”
References
[1] Eördögh, F. (2012, December 11). Apple Maps Glitch in Australia Shows Why We Have To Stop Blindly Following GPS Navigators. Retrieved from http://www.slate.com/blogs/future_tense/2012/12/11/apple_maps_mildura_australia_glitch_strands_drivers_in_dangerous_area.html
[2] Fabio, A. (2015, October 26). Killed By A Machine: The Therac-25. Retrieved from https://hackaday.com/2015/10/26/killed-by-a-machine-the-therac-25/
[3] Rotblat, J. (1999, November 19). A Hippocratic Oath For Scientists. Retrieved from http://science.sciencemag.org/content/286/5444/1475
[4] Kraus, R. (2018, April 28). These 8 Books Are Required Reading For Anyone Who Wants To Change The World With Tech. Retrieved from https://mashable.com/2018/04/28/tech-leaders-reading-list-greg-epstein
[5] Booth, R. (2014, June 30). Facebook Reveals News Feed Experiment To Control Emotions. Retrieved from https://www.theguardian.com/technology/2014/jun/29/facebook-users-emotions-news-feeds
[6] Cameron, D. and Conger, K. (2018, March 06). Google Is Helping the Pentagon Build AI For Drones. Retrieved from https://gizmodo.com/google-is-helping-the-pentagon-build-ai-for-drones-1823464533
[7] Lohr, S. (2018, February 9). Facial Recognition Is Accurate, If You’re a White Guy. Retrieved from https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html
[8] Koepke, L and Robinson, D. (2016, August). Stuck In A Pattern: Early Evidence On "Predictive Policing" And Civil Rights. Retrieved from https://www.upturn.org/reports/2016/stuck-in-a-pattern
[9] Bort, J. (2018, August 25). Meet The Little-Known Group Inside Of Google That's Fighting Terrorists And Trolls All Across The Web. Retrieved from https://www.businessinsider.com/google-alphabet-jigsaw-terrorists-trolls-2018-8
[10] Fairphone: Our Goals. Retrieved from https://www.fairphone.com/en/our-goals/
[11] Conger, K. (2018, June 01). Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program. Retrieved from https://gizmodo.com/google-plans-not-to-renew-its-contract-for-project-mave-1826488620
[12] Dave, P. and Menn, J. (2018, August 16). Google Employees Demand More Oversight Of China Search Engine Plan. Retrieved from https://www.reuters.com/article/us-alphabet-china/google-employees-demand-more-oversight-of-china-search-engine-plan-idUSKBN1L128O
[13] Terdiman, D. (2017, November 27). How Facebook’s AI Is Helping Save Suicidal People’s Lives. Retrieved from https://www.fastcompany.com/40498963/how-facebooks-ai-is-helping-save-suicidal-peoples-lives
[14] Singer, N. (2018, February 12). Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It. Retrieved from https://www.nytimes.com/2018/02/12/business/computer-science-ethics-courses.html
Influences for the article
Rushkoff, D. (2018, July 24). How Tech's Richest Plan To Save Themselves After The Apocalypse. Retrieved from https://www.theguardian.com/technology/2018/jul/23/tech-industry-wealth-futurism-transhumanism-singularity
Pancake, C. (2018, August 11). Computer Programmers Get New Tech Ethics Code. Retrieved from https://www.scientificamerican.com/article/computer-programmers-get-new-tech-ethics-code/
West, D. (2018, April 19). Why Tech Companies Need a Code of Ethics for Software Development. Retrieved from https://www.entrepreneur.com/article/311410
Ito, J. (2017, November 15). Resisting Reduction: Designing Our Future With Machines. Retrieved from https://jods.mitpress.mit.edu/pub/resisting-reduction
Dvorsky, G. (2018, February 21). New Report on Emerging AI Risks Paints a Grim Future. Retrieved from https://gizmodo.com/new-report-on-ai-risks-paints-a-grim-future-1823191087