There have been discussions about bias in algorithms related to demographics, but the issue goes beyond superficial characteristics. Learn from Facebook’s reported missteps.
Many of the recent questions about technology ethics focus on the role of algorithms in various aspects of our lives. As technologies like artificial intelligence and machine learning grow increasingly complex, it's legitimate to ask how algorithms powered by these technologies will react when human lives are at stake. Even someone who doesn't know a neural network from a social network may have pondered the hypothetical question of whether a self-driving car should crash into a barricade, killing its occupant, or run over a pregnant woman to save that occupant.
SEE: Artificial intelligence ethics policy (TechRepublic Premium)
As technology has entered the criminal justice system, less theoretical and more difficult discussions are taking place about how algorithms should be used as they're deployed for everything from providing sentencing guidelines to predicting crime and prompting preemptive intervention. Researchers, ethicists and citizens have questioned whether these algorithms are biased based on race or other demographic factors.
Leaders’ responsibilities when it comes to ethical AI and algorithm bias
The questions about racial and demographic bias in algorithms are important and necessary. Unintended outcomes can be created by everything from insufficient or one-sided training data to the skillsets of the people designing an algorithm. As leaders, it's our responsibility to understand where these potential traps lie and mitigate them by structuring our teams appropriately, including skillsets beyond the technical aspects of data science, and by ensuring appropriate testing and monitoring.
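As a concrete illustration of what "appropriate testing" can look like, one common starting point is a demographic parity check: compare the rate of favorable outcomes an algorithm produces across groups and flag large gaps for human review. The sketch below uses only hypothetical data and a hypothetical threshold; real teams would choose fairness metrics and thresholds suited to their domain.

```python
# A minimal sketch of a pre-deployment bias check: compare an algorithm's
# favorable-outcome rate across demographic groups. All group labels,
# decisions and thresholds here are hypothetical.

def positive_rate(decisions):
    """Fraction of decisions in a group that were favorable (True)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in favorable-outcome rate between any two groups."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs, keyed by fictional group labels.
decisions = {
    "group_a": [True, True, False, True, True, False, True, True],
    "group_b": [True, False, False, False, True, False, False, True],
}

gap = demographic_parity_gap(decisions)
print(f"Demographic parity gap: {gap:.2f}")

# A simple review rule: flag the model when the gap exceeds a threshold.
if gap > 0.2:
    print("Gap exceeds threshold; flag for human review.")
```

A check like this is deliberately crude; its value is that it is cheap to run continuously, so drift introduced by later tweaks gets caught rather than discovered after harm is done.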
Even more important is that we understand and attempt to mitigate the unintended consequences of the algorithms that we commission. The Wall Street Journal recently published a fascinating series on social media behemoth Facebook, highlighting all manner of unintended consequences of its algorithms. The list of frightening outcomes reported ranges from suicidal ideation among some teenage girls who use Instagram to enabling human trafficking.
SEE: AI and ethics: One-third of executives are not aware of potential AI bias (TechRepublic)
In nearly all cases, algorithms were created or adjusted to drive the seemingly benign metric of user engagement, and thus increase revenue. In one case, changes made to reduce negativity and emphasize content from friends created a means to rapidly spread misinformation and highlight angry posts. Based on the reporting in the WSJ series and the subsequent backlash, a notable detail about the Facebook case (in addition to the breadth and depth of unintended consequences from its algorithms) is the amount of painstaking research and frank conclusions that highlighted these ill effects, findings that leadership seemingly ignored or downplayed. Facebook apparently had the best tools in place to identify the unintended consequences, but its leaders failed to act.
How does this apply to your company? Something as simple as a tweak to the equivalent of "Likes" in your company's algorithms may have dramatic unintended consequences. With the complexity of modern algorithms, it might not be possible to predict all the outcomes of these types of tweaks, but our role as leaders requires that we consider the possibilities and put monitoring mechanisms in place to identify any potential and unforeseen adverse outcomes.
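One lightweight form such a monitoring mechanism can take is a guardrail that tracks an outcome metric after each algorithm change and alerts when it shifts sharply from its trailing baseline. This is an illustrative sketch only, not any company's actual tooling; the metric, counts and tolerance below are all hypothetical.

```python
# Illustrative guardrail: compare a tracked outcome metric -- say, daily
# user reports of harmful content -- against its trailing mean, and alert
# when the relative change exceeds a tolerance. All numbers are made up.

from statistics import mean

def metric_shifted(history, latest, tolerance=0.25):
    """Return True if `latest` deviates from the trailing mean of
    `history` by more than `tolerance` (as a fraction of the baseline)."""
    baseline = mean(history)
    deviation = abs(latest - baseline) / baseline
    return deviation > tolerance

# Hypothetical daily report counts before and after an algorithm tweak.
history = [100, 104, 98, 102, 96]   # days before the change
latest = 150                        # first day after the change

if metric_shifted(history, latest):
    print("Unexpected shift in outcome metric; investigate the recent change.")
```

The point is not the statistics, which are trivial, but the organizational commitment: deciding in advance which harm-related metrics get watched, and who is obligated to act when an alert fires.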
SEE: Don’t forget the human factor when working with AI and data analytics (TechRepublic)
Perhaps more problematic is mitigating those unintended consequences once they are discovered. As the WSJ series on Facebook implies, the business objectives behind many of its algorithm tweaks were met. However, history is littered with businesses and leaders that drove financial performance without regard to societal damage. There are shades of gray along this spectrum, but consequences that include suicidal thoughts and human trafficking don’t require an ethicist or much debate to conclude they are fundamentally wrong regardless of beneficial business outcomes.
Hopefully, few of us will have to deal with issues on this scale. However, blindly trusting the technicians, or considering demographic factors but little else, as you increasingly rely on algorithms to drive your business can be a recipe for unintended and sometimes negative consequences. It's too easy to dismiss the Facebook story as a big-company or tech-company problem; your job as a leader is to be aware of and preemptively address these issues, whether you run a Fortune 50 company or a local enterprise. If your organization is unwilling or unable to meet this need, perhaps it's better to reconsider some of these complex technologies regardless of the business outcomes they drive.