Navigating the Intersection of Technology and Collective Ethics

- Previously, the logic for policing and regulating tech has mostly fallen within the scope of moral philosophy and social science.
- Political philosophy introduces three important values to the mix: pluralism, agency, and legitimacy.
- Political philosophy can benefit other areas of the tech sector, such as AI, data privacy, and surveillance.
Think fast. We've got a trolley full of innocent people, a woman tied to one set of train tracks, and five tied to another. The only thing that decides whose blood is spilled is you and the lever in front of you.
Except it's 2025: think faster. You're not behind the lever coping with option paralysis anymore. You're inside the trolley and it's self-driving. How do we decide how the A.I. piloting this death trap should navigate the crisis?
Johannes Himmelreich, Ph.D. in philosophy and public policy, set out to solve this riddle. His critique of the approach to tech development, and his prescription for using political philosophy to move forward, are cutting-edge. And his is a voice I think the rest of the tech sector could benefit from.
I.
Up until now, the ethics of self-driving cars have only been considered through moral philosophy or social science. The way you're used to hearing the trolley problem, before a cyberpunk dystopian blogger challenged you to consider it as driverless, is an example of the moral philosophical perspective. It asks: what is the right thing for you to do as an individual?
The thought experiment is great for challenging the listener's own judgement inwardly. But just because you would start by railing the first five, then reverse to collect the body on the other tracks you missed before launching the trolley into the twin towers, doesn't mean that's how everyone would do it. And when we're developing and deploying driverless cars, we have to build judgements that work for everyone.
Social science, on the other hand, is a popular method in Silicon Valley for making decisions by surveying the populace. We can empirically study public attitudes, preferences, and behaviors to paint a bigger picture of what's acceptable to the masses. The only problem with that is sometimes you're all wrong. Don't play coy now. I already have the data showing more than half of you run red lights. So just because we know how the majority of folks might make a decision doesn't mean we've found an answer.
II.
So we can't govern driverless cars with moral philosophy alone, because it doesn't answer how we're going to develop for everyone, not just you. And even if we understand what everyone wants, sometimes (often, in fact, when it comes to matters of the road) you're all wrong. Himmelreich poses the idea that incorporating political philosophy might help us make the best decisions about these driverless trolley problems. Political philosophy relies on three key principles: reasonable pluralism, human agency, and legitimacy of authority.
Pluralism addresses the fact that differences pop up between diverse sets of people, and that those disagreements happen for good reason. Maybe bringing together voices from different walks of life to consider a problem from more angles, allowing us to envision the issue with more depth, is a good thing. Possibly even something to be cherished in the driverless industry.
Human agency is exactly what it sounds like: we can't compromise the autonomy of the individual for the views of the many. As long as you're not routing your Waymo into the 93rd floor of the World Trade Center, or past any reasonable boundary, really, everyone should be free to make their own decisions within reasonable limits. Are you allowed to grab the steering wheel and hit the gas? Absolutely not. But maybe setting options in the app to pre-select a driving style or route could restore some of your independence and preference to the experience without compromising safety.
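To make that agency-within-limits idea concrete, here's a toy sketch of how a preference system like that might work. Every name and number here is invented for illustration (this is not any real robotaxi's API): the rider picks a driving style, and the system honors it only within a safety envelope the rider can never override.

```python
# Hypothetical sketch: rider-selectable preferences clamped to a safety
# envelope. All names and values are invented for illustration.
from dataclasses import dataclass

# Hard limits set by the operator/regulator, not the rider.
MAX_SPEED_OVER_LIMIT_MPH = 0      # riders can't buy speeding
MIN_FOLLOWING_DISTANCE_S = 1.5    # minimum seconds of headway

# Driving styles the app might offer.
STYLES = {
    "relaxed":  {"speed_offset_mph": -5, "following_distance_s": 3.0},
    "standard": {"speed_offset_mph": 0,  "following_distance_s": 2.0},
    "brisk":    {"speed_offset_mph": 3,  "following_distance_s": 1.2},
}

@dataclass
class RidePlan:
    speed_offset_mph: int
    following_distance_s: float

def plan_for(style: str) -> RidePlan:
    """Honor the rider's preference only inside the safety envelope."""
    prefs = STYLES.get(style, STYLES["standard"])
    return RidePlan(
        # A preference can slow the car down, but never speed it up
        # past the legal limit.
        speed_offset_mph=min(prefs["speed_offset_mph"],
                             MAX_SPEED_OVER_LIMIT_MPH),
        # A preference can add following distance, but never shave it
        # below the safety floor.
        following_distance_s=max(prefs["following_distance_s"],
                                 MIN_FOLLOWING_DISTANCE_S),
    )
```

The point of the design is the asymmetry: preferences only move the car in the safer direction, so the "brisk" rider's request to tailgate gets silently clamped while the "relaxed" rider's extra caution is granted in full. That's agency with a reasonable boundary baked in.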
Legitimacy of authority is probably the most important principle in this situation because it becomes a synthesis of all the other perspectives we discussed. Why do we respect the decision? How was this decision made? Who came up with this idea? What if we developed unmanned cars to run red lights, citing the earlier-mentioned stat that 55.8% of drivers admit to running red lights? You, as the consumer, might not jump in the passenger seat of a Waymo if you found out its judgement was synthesized from the half of drivers that account for the majority of accidents. (Which is not the case at Waymo. That was just a hypothetical example. Waymo, don't sue me.) At the same time, regulations on the temperature you're allowed to set the A/C to in the vehicle would feel like an illegitimate overreach of power. Trying to regulate my climate control to go a degree above 72 is some real clown activity. Can't respect that.
III.
Political philosophy isn't something we have to limit to the confines of driverless trolley thought experiments. It's direly needed across the tech industry.
Take artificial intelligence. Since its inception, AI has been riddled with systemic bias, reproducing racism and sexism. It doesn't have to be that way, though. We can train AI with equitable design that understands societal structures and power dynamics. A form of democratic oversight by its users and shareholders, prioritizing pluralism, agency, and legitimacy, could curb AI's tendency to caricature the worst parts of the collective human psyche. In tandem with moral philosophy, political philosophy could help train machines to recognize sex and ethnicity in ways that respect the equal dignity of everyone.
Data privacy and surveillance are also years behind in applying the same philosophical approach. Who gets to make the rules about what these systems scan, when, and how? The answers to those questions inform whether we'll respect the authority designing these mechanisms. And through pluralism and agency, we can recognize a social push toward personal autonomy as we walk down public streets drowning in cameras.
Altogether, tech can be developed with superior philosophical judgement that protects human rights. After all, tech should be helping humanity forward rather than holding it back in the throes of the '50s. And after trying self-driving cars myself, I believe they're going to be a staple of security and comfort in the future. Without the driver, we're looking at private transportation that women and children can access with peace of mind that they're not about to have their face printed on the side of a milk carton. Political philosophy welds together the needs for safety, autonomy, and sensible authority, making your adventures smoother.
