The AI Ethics industry (consulting firms, governments, quasi-governments, and the like) has produced mountains of documents, conferences, and individual hustles. Examine this corpus of work and you will see tremendous repetition about how AI can harm people individually, focusing on bias by gender, age, and other personal criteria.
What’s missing from this mountain of “research”?
Systemic harm from AI, which is far more dangerous. I attribute this gap to the preponderance of “ethicists” who approach the field from a moralistic, do-good point of view. Their recommendations for “ethics committees” and sensitivity training about unfairness to individuals have their place, but they don’t address the more significant problem.
For example, it is impossible to rid AI of bias without taking a historical perspective. Bias in decision-making systems has been around since the nineteenth century. Bias is not a result of data science or AI; these technologies didn’t create bias. We created them. People. Only by understanding the systemic aspects of bias can AI progress in eliminating it.
Bias isn’t one thing; it emerges from a spectrum of choices:
- where you set the decision threshold
- which metric(s) you choose to measure fairness
- what objective function you optimize
- how you weight variables
- who approves the model for deployment
- where the model is deployed
- what problem you’re trying to solve in the first place
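The first two choices on that list can be made concrete in a few lines. The sketch below, using made-up scores and group labels and a simple demographic-parity gap as the fairness metric (all illustrative assumptions, not a prescribed methodology), shows that moving the threshold alone, with the same model and the same data, changes how "fair" the outcomes look:

```python
# Hypothetical model scores and protected-group labels (illustrative only).
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.85, 0.7, 0.5, 0.45]
groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B"]

def selection_rate(threshold, group):
    """Fraction of a group's members selected at a given threshold."""
    member_scores = [s for s, g in zip(scores, groups) if g == group]
    return sum(s >= threshold for s in member_scores) / len(member_scores)

def parity_gap(threshold):
    """Demographic-parity gap: difference in selection rates between groups."""
    return abs(selection_rate(threshold, "A") - selection_rate(threshold, "B"))

# Same model, same data -- only the threshold moves, and the gap changes.
print(round(parity_gap(0.7), 2))  # gap is 0.0 at this threshold
print(round(parity_gap(0.6), 2))  # gap is ~0.17 at this one
```

Every other item on the list (objective function, variable weights, deployment context) shifts outcomes in the same way: none of these are neutral technical defaults.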
No significant burst of technology has ever appeared without trampling human rights, from Roman roads to the cotton gin. A similar phenomenon exists with AI, which affects not just people one at a time, as in credit or employment decisions, but whole civilizations. We face risks of harm and injustice on a far greater scale than a biased hiring algorithm:
The AI is programmed to do something devastating: autonomous weapons are artificial intelligence systems programmed to kill. In the wrong hands, these weapons could easily cause mass casualties, and an AI arms race could inadvertently escalate into an AI war with the same result. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is present even with narrow AI, but it grows as AI intelligence and autonomy increase.
The AI is programmed to do something beneficial but develops a destructive method for achieving its goal. This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. Ask an obedient, intelligent car to take you to the airport as fast as possible, and it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. Task a super-intelligent system with an ambitious geoengineering project, and it might wreak havoc on our ecosystem as a side effect, viewing human attempts to stop it as a threat to be met.
Getting back to the topic of unethical AI, whose reach extends well beyond individuals, here are some other use cases with harmful impacts to account for:
- Harming the environment: AI can pursue an organization’s goals of cutting costs and raising revenue at the environment’s expense, because guardrails weren’t embedded in the model (or because the organization didn’t care).
- Spreading disinformation in politics: damage to one person’s privacy or reputation is a concern, but when AI becomes a vehicle for molding public opinion based on lies, the problem is far larger.
- Creating an abusive or discriminatory environment (e.g., crime-stopper applications).
- Routing supply chains so that certain businesses are perennially served late, potentially causing their demise.
- Disrupting stable markets.
- Identifying hostile takeover targets at light speed.
- Accumulating data to analyze and use in harmful and discriminatory ways, in law enforcement or financial services, among others.
- Empowering hackers: AI has boosted their capacity to the extreme.
- Enabling ransomware: a similar story.
In a Techopedia story, an AI suggested 40,000 possible new chemical weapons in just six hours. The issue here isn’t that the AI would do something threatening or dangerous to humanity on its own; it hands bad human actors the keys to do those things themselves.
AI applications, as a rule, make weapons more powerful, and weapons, to anybody with a lick of common sense, are pretty scary in general. This has boiled to the surface as an area of great concern, even more so than bias in the social context. Even the Department of Defense has raised these questions.
The danger is moral deskilling and debility: allowing machines to make our decisions will, over time, erode our capacity for questioning and deciding. Airline pilots offer a good countermeasure. Today, the autopilot can handle everything from takeoff to landing, but pilots take over manual control at specific points to keep their piloting skills sharp.
We must be vigilant, because human skills degrade when machines take over human tasks. Consider the consequences of letting machines operate in their most extreme form: if AI starts making ethical and political decisions for us, we may shrink our moral center at the very moment our power is greatest and our choices matter most.
We have tools today for mathematically ferreting out bias, privacy invasion, discrimination, unfairness, and the other ethical failures that affect people individually. It’s not the lack of tools at our disposal that is the problem. It’s the will.
If the organization is not sufficiently committed to these “ethical” principles, all the workshops, white papers, books, and conferences aren’t worth a bucket of spit (as they say in Texas). The ethicists do a good job exposing the danger of biased AI applications, but the problem is much bigger than that: predictive policing, judicial systems, hacking and ransomware, and, probably worst of all, social media.