Ethical issues of facial recognition technology

An overview of facial recognition technology’s ethical issues, ranging from privacy to bias concerns.


Facial recognition technology has entered the mass market, with our faces now able to unlock our phones and computers. While giving machines the very human ability to identify a person at a glance is exciting, it's not without significant ethical concerns.

If your company is considering facial recognition technology, it's essential to be aware of these concerns and be ready to address them, which may even mean abandoning facial recognition altogether.

SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)

When assessing these ethical concerns, consider how your customers, employees and the general public would react if they fully knew how you’re using the technology. If that thought is disconcerting, you may be veering into an ethical “danger zone.”

What are the ethical issues of facial recognition technology?

Lack of transparency and consent

A foundational ethical issue of facial recognition is that these technologies are often employed without consent or notification. Having access to surveillance cameras or video feeds of employees, customers or the general public doesn’t mean it’s a good idea to use that data without informing the affected parties.

Identifying someone by their face further opens the potential to access all manner of other data, which can amplify ethical concerns. For example, if you use facial recognition to identify people coming into a store, should you use that identity to pull purchasing history? How about a credit report? Should you avoid serving people with a low credit score and focus on the “high value” customers?

A similar example involves consumer electronics like the Ring doorbell: according to Politico, Amazon has shared video from these devices with police without user consent.

Mass surveillance

While concerns over mass surveillance may seem like an overblown conspiracy theory, they've hit home rather dramatically in the United States. The removal of federal abortion protections has created concerns that large tech companies could be subpoenaed to identify users who have traveled to abortion clinics.

As the abortion clinic example illustrates, logging public movements on a massive scale might seem fine until it isn't. The ability of the public to assemble and freely express support for or opposition to the issues of the day is a fundamental element of most democracies. Mass surveillance and facial recognition could create a record of perceived "bad behavior" that's used against citizens. Elements of this ethical concern are already coming to fruition: according to the EFF, police in Los Angeles requested Ring camera footage from users during anti-police protests.

Even more benign examples create ethical concerns. Suppose your company runs a chain of ice cream shops and uses facial recognition to identify frequent customers and "surprise" them with rewards. Might those customers feel squeamish that their "guilty pleasure" is being monitored and tracked?

Bias and accuracy concerns

An often-cited ethical concern with facial recognition technology is the presence of racial bias in the algorithms. However, this worry reflects a more profound concern about the overall accuracy of facial recognition technologies. Claimed evidence of bias often originates from demographic classification algorithms that seek to use a face to determine characteristics like race, gender and age. A 2018 MIT study found that most classification algorithms misidentified darker-skinned women more than any other group.

Proponents of facial recognition argue that classification and identification are two different things and that algorithms have evolved both technologically and through better training in the intervening years. However, even under "perfect" conditions, facial identification technologies have a reported success rate of about 99.92%. That might seem impressive, but a 0.08% error rate applied to Manhattan's roughly 1.6 million residents still yields on the order of 1,300 potential false positives when searching for a single serial killer, and that's under "perfect" conditions.
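The false-positive arithmetic above can be sketched in a few lines. This is a back-of-the-envelope estimate only: the 99.92% accuracy figure comes from the text, Manhattan's ~1.6 million residents is a census-level approximation, and treating (1 − accuracy) as the per-person false-positive rate is a simplification of how real matchers are evaluated.

```python
def expected_false_positives(population: int, accuracy: float) -> int:
    """Expected number of people wrongly flagged when every member of
    a population is screened once, treating (1 - accuracy) as the
    per-person false-positive rate (a deliberate simplification)."""
    return round(population * (1.0 - accuracy))

# Assumed inputs: ~1.6M residents, 99.92% claimed accuracy.
manhattan_population = 1_600_000
flagged = expected_false_positives(manhattan_population, 0.9992)
print(flagged)  # roughly 1,300 innocent people flagged
```

The point of the exercise: even a seemingly tiny error rate scales into a large absolute number of misidentified people when applied to an entire city.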

Furthermore, a range of factors can skew algorithms, from the age of a photo versus an individual’s face to difficulties in discerning typical human features like doppelgängers or identical twins. This may seem like an overblown concern if you’re attempting to identify preferred shoppers, but it is deeply concerning if you make law enforcement decisions based on facial recognition technology.

The ethics of anonymity

There is a raging debate about anonymity online, evidenced by heated battles over everything from tracking cookies used for marketing to how and where data about our online activity is stored. Much of this debate comes down to the sentiment that just because we can track all manner of online behavior doesn't mean it is ethical or appropriate to do so.

This same debate applies to facial recognition technology, which could eventually make it possible to track our real-world behavior as comprehensively as our online behavior is tracked today. As the online debate evolves, it can provide lessons relevant to emerging technologies like facial recognition.

Technologists might feel that ethical concerns are "above their pay grade"; however, technologies like facial recognition carry immense power and ethical risk. Rather than rushing ahead with implementation and assuming someone else will worry about the ethics later, it's worth understanding and debating these concerns.
