The Biases Percolating Algorithms: Will AI Facilitate Disparity and Discrimination?

By Emily Lamm

Cryptically crafted and living behind the façade of technology, algorithms have escaped the standards we hold ourselves to. The allure of coding and quantum computing arouses a sense of intrigue and elevates the status of the underlying algorithms. Yet this charm should not obscure the fact that the authority afforded to technology is constructed and highly sensitive to context. For instance, when a deep learning neural network encounters an incongruous object, such as an elephant in a living room, its detections are thrown off and previously identified objects are mislabeled. Errors of this kind are not uncommon, and they can take forms far more sinister than an elephant-triggered kerfuffle. High-profile examples include LinkedIn's platform showing high-paying job ads to men more frequently than to women, and law enforcement officials and judges relying on patently racist AI-powered tools.

On one hand, the United States has developed a robust body of law combating discrimination. The Equal Protection Clause of the Fourteenth Amendment and Title VII of the Civil Rights Act have been paramount, and the Americans with Disabilities Act of 1990 is widely considered a success in protecting individuals with qualifying disabilities. On the other hand, the United States has no analogue offering protection from algorithmic bias. In effect, algorithms, just one step removed from humans, have escaped the rule of law despite being a reflection (or manifestation) of the implicit values of the very humans who created them.

Still, the absence of general legislation or a regulatory scheme controlling for algorithmic bias does not mean none is coming. Other countries have filled the gap with data protection regimes, most notably the European Union's General Data Protection Regulation. In due time, perhaps with a change of administration, we may see a drastically different American approach to artificial intelligence. Although Americans have been rather lackadaisical about data privacy (often trading their Facebook information for a quiz predicting what their child will look like), they have been quick to advocate against discrimination; just look to the sweeping civil rights, women's rights, and LGBT rights movements. Accordingly, numerous initiatives, launched by the likes of Facebook, IBM, Google, and Amazon, are researching algorithmic bias and announcing tools to bolster AI fairness.
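To make "AI fairness" concrete: many of these toolkits compute simple group-level metrics such as the disparate impact ratio, which echoes the EEOC's informal "four-fifths rule" that a protected group's selection rate below 80% of the highest group's rate can signal adverse impact. The sketch below is purely illustrative, with hypothetical data and function names; it is not any vendor's actual implementation.

```python
# Illustrative sketch of a disparate impact ratio, a common fairness
# metric. All names and data here are hypothetical.

def selection_rate(outcomes):
    """Fraction of candidates who received a favorable outcome (1)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical hiring-tool outputs: 1 = recommended, 0 = not recommended
men = [1, 1, 1, 0, 1, 1, 0, 1]      # selection rate 6/8 = 0.75
women = [1, 0, 0, 1, 0, 0, 1, 0]    # selection rate 3/8 = 0.375

ratio = disparate_impact(men, women)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("Below the four-fifths threshold: potential adverse impact")
```

A single ratio like this is, of course, only a screening heuristic; both the fairness literature and employment law recognize many competing definitions of discrimination that such tools must weigh.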

Lawyers are not immune from the mysterious nature of algorithms either. Indeed, most litigators interface with them regularly: every time we run a search in Lexis Advance or Westlaw, the results we see are the product of algorithms hard at work behind the scenes. Recently, Fastcase gave users the option to adjust its research algorithms through factors like relevancy and authoritativeness. Although this tool appears to have little influence on the results generated, it responds to a growing demand for algorithmic accountability. Undoubtedly, lawyers today must embrace and implement technology in order to remain at the forefront of the industry. Nevertheless, they must also remain skeptical, discerning, and autonomous thinkers who refuse to grow complacent with inadequate technology.

As the United States citizenry grows increasingly diverse, technology's "black box" must begin to encompass an intersectional awareness that accounts for the vast array of identities its users embody. Ensuring that technology is implemented and monitored responsibly should be at the forefront of everyone's mind. Whether by lobbying for new legislation or updating corporate policies, the time is ripe to seriously consider the role of law in addressing algorithmic bias.
