A.I. Fairness and Data Privacy
We are experiencing one of the defining developments of the 21st century: the rise of Artificial Intelligence (“A.I.”). From suggesting what movies to stream to determining which schools our children should attend, this technology has found its way into virtually every aspect of our lives. Yet although A.I. brings countless benefits, a critical question must be addressed: do A.I. systems benefit everyone?
Legal and tech experts alike have detected biases in A.I. systems that mirror the prejudices that exist in society. Without proper checks and balances, the deployment of biased A.I. systems presents serious risks in criminal justice, government surveillance, healthcare, credit scoring, online speech rights, and internet privacy.
To protect clients against these new and profound risks, lawyers must become experts in this rapidly evolving area. To help clients navigate this new territory, Eisenberg & Baum has launched its Artificial Intelligence Fairness and Data Privacy practice group to promote fairness and accountability in step with technological advancements. Through nationwide legal advocacy, our team of attorneys works tirelessly to rectify injustice stemming from unfair implementations of A.I. systems. Collaborating with experienced tech experts, we combine our legal expertise with rigorous scientific research to decode systemic bias in the A.I. systems used by private and public institutions.
While A.I. was once thought of as a niche field, rapid innovation has allowed it to embed itself in nearly every conceivable part of day-to-day life. This practice group brings together advocates, public organizations, and the scientific community to tackle the following issues:
- Criminal Justice: Automated risk assessment tools are used in courts to determine a defendant’s pre- and post-trial incarceration, such as in bail and sentencing hearings. Flawed algorithms and skewed data used in programming these A.I. systems deprive individuals of fundamental rights and liberties.
- AI is sending people to jail—and getting it wrong (MIT Technology Review)
- Bias in Criminal Risk Scores Is Mathematically Inevitable, Researchers Say (ProPublica)
- The accuracy, fairness, and limits of predicting recidivism (American Association for the Advancement of Science)
- Government Surveillance: Facial recognition technology is widely used in police searches, investigations, and surveillance cameras. However, there are growing concerns over the technology’s error rates and misidentifications, which disproportionately affect women and people of color and can gravely harm a person’s rights to liberty and privacy.
- NYPD Surveillance Technology Use: Eisenberg & Baum Hosts Public Forums with Experts and the Public to Facilitate the Public Commenting Process (Feb. 2021)
- Additional Resources:
- Wrongfully Accused by an Algorithm (New York Times)
- Facial-Recognition Software Might Have a Racial Bias Problem (The Atlantic)
- Healthcare: Algorithms are used in hospitals to prioritize the care of certain patients over others. Yet research shows that flaws in A.I. decision-making in life-and-death situations may result in discriminatory treatment based on economic status and race. Moreover, the abuse of sensitive health information and biometric and genetic data poses a threat to one’s privacy.
- Healthcare Algorithms and Discrimination (Eisenberg & Baum’s Current Investigations)
- Fixing Bias in Algorithms is Possible, And This Scientist is Doing It (CAI)
- New Research Finds “Significant Racial Bias” in Commonly Used Healthcare Algorithm (Emerging Tech Brew)
- Widely used health care prediction algorithm biased against black people (Berkeley Public Health)
- Is Artificial Intelligence Worsening COVID-19’s Toll on Black Americans? (Massive Science)
- Employment: A.I.-based hiring software is now commonplace. Employers screen and interview candidates by relying on algorithms embedded in hiring platforms. Web-based recruiting sites use algorithms to target advertisements to selected groups of candidates based on a wide range of personal and social data. However, researchers have detected biases in these platforms arising from faulty data and proxies that perpetuate discrimination based on gender, race, disability, and social class.
- An MIT researcher who analyzed facial recognition software found eliminating bias in AI is a matter of priorities (Business Insider)
- When the Robot Doesn’t See Dark Skin (New York Times)
- How AI Technology Discriminates Against Job Candidates With Disabilities (Texas Public Radio)
- For Some Employment Algorithms, Disability Discrimination by Default (Brookings)
- Credit Scoring in Finance and Housing: A.I.-based credit risk assessments that determine a person’s ability to obtain loans and housing are prone to explicit or implicit bias, depending on the data sources and proxies used in designing the system. Studies show that some risk assessment tools perpetuate discrimination based on race and gender and exacerbate the financial and information gap between the haves and have-nots in society.
- Housing Discrimination Goes High Tech (Curbed)
- Consumer-Lending Discrimination in the FinTech Era (UC Berkeley)
- Online Speech and Information: Content on social media platforms is surreptitiously censored by algorithms based on the content of speech and visual imagery. Online sources of (dis)information pose harm to public health, national security, voting rights, and free speech rights. Free speech and access to information are the bedrock of democracy, and we are developing strategies to protect these rights using our existing legal framework.
- Digital Disinformation and Voter Suppression (Brennan Center)
- Cyberbullying: Some social media platforms have stated policies asserting that they will remove, ban, and report abusive users, and they have the technological tools to follow through on those statements, yet they have failed to do so.
- Eisenberg & Baum files lawsuit on behalf of Carson Bride, a young cyberbullying victim, against app makers of Snapchat, YOLO, and LMK (Eisenberg & Baum lawsuit)
- Suit Against Snap Over Suicide May Test Platform Protections (LA Times)
- NY Phone: (212) 353-8700
- FL Phone: (305) 513-8700*
- Video Phone: (646) 807-4096
*The firm has a member of the Florida Bar who will meet with potential clients by appointment only.
ASL-fluent attorneys, staff, and CDIs available