Not just a tech policy debate.
Samantha Marcotte is a staff writer and second-year MPP student.
Artificial Intelligence (AI) is rapidly becoming embedded in the systems that determine who gets hired, who receives credit, who qualifies for public benefits, and who is flagged by law enforcement. Yet much of the public debate around AI focuses on productivity gains, job displacement, or consumer convenience. Far less attention has been paid to the civil rights implications of allowing unregulated algorithms to govern economic and social opportunity.
Employment, housing, credit, public benefits, healthcare access, and criminal justice are all areas where AI systems are already making decisions. AI can free up resources by taking on tedious administrative tasks, enhancing data analysis, solving complex problems, expanding access to information, and reducing human error. But whether it is screening out a job applicant, flagging a benefits recipient for fraud, or denying housing to a prospective tenant, AI is increasingly the gatekeeper to opportunity.
AI systems are trained on historical data that reflect existing racial, gender, and economic inequalities, embedding past patterns of discrimination into future decision-making. These systems do not operate in a vacuum: they learn from hiring practices shaped by occupational segregation, lending decisions influenced by redlining, and public programs with a long history of distributing resources unevenly. Even when explicit indicators such as race, gender, or wealth are removed, proxy variables such as zip code, employment history, and educational background allow these disparities to persist.

When deployed without transparency or accountability, AI does not just replicate bias; it can automate and scale discrimination across employment, housing, credit, healthcare access, and criminal justice. Unlike individual bias, algorithmic decision-making operates at a speed and scale that shield it from public scrutiny, affecting thousands or millions of people while remaining largely undetected. This opacity limits individuals’ ability to understand, challenge, or appeal decisions that shape their access to employment, housing, credit, healthcare, and public benefits. Meanwhile, AI has been rapidly integrated into business, government, and daily life while federal oversight has failed to keep pace, leaving these systems largely unregulated despite their growing influence on fundamental economic and social rights.
How the infrastructure of knowledge and opportunity is governed, and by whom, will drastically shape our futures. In the United States, the first major federal AI policy action was the 2019 Executive Order on Maintaining American Leadership in Artificial Intelligence, which framed AI as an opportunity for innovation and economic growth. It was followed by the 2023 Executive Order on Safe, Secure, and Trustworthy AI, which emphasized the need for regulation as civil rights and national security concerns grew, but that order was rescinded in 2025. The White House’s 2025 “Winning the AI Race: America’s AI Action Plan” focuses on removing barriers to building infrastructure and accelerating innovation to lead international AI diplomacy. Federal regulation thus leaves a gap in protecting citizens from abuse of these systems; states have tried to fill it, but additional policy protections are still needed.
While the regulatory landscape remains fragmented, several policies have been proposed to mitigate AI-caused harm and protect human rights. One recommendation is algorithmic transparency: requiring organizations to disclose how AI systems are trained and used to influence decision-making in areas such as housing, credit, public benefits, and criminal justice. In practice, transparency extends beyond simply acknowledging the use of AI; it includes providing meaningful explanations of how decisions are made, what data is used, and how factors are weighted. These requirements are critical to due process, since individuals cannot challenge or appeal decisions they do not understand. Transparency requirements also give the public a means of holding these systems accountable to rights that are already legally mandated. Transparency is not simply a technical preference; it is a foundational necessity for accountability and for protecting legally guaranteed human rights. Without regulation, those rights become theoretical rather than enforceable.
AI mirrors historical racial, gender, and economic inequalities, and as its use scales, there is a risk that discrimination will scale with it. The European Union has implemented a range of policies to regulate AI across industries, tailoring obligations to the level of risk a system poses to residents’ rights. The European Network of National Human Rights Institutions (ENNHRI) has recommended human rights impact assessments to verify that AI systems are not furthering harmful and discriminatory practices. Requiring such an assessment would help ensure that existing anti-discrimination laws are actually enforced. Other policy recommendations include third-party audits of AI systems for bias and fairness, and a right to human review of decisions for individuals who believe they have been harmed. Many government officials lack the expertise to navigate this rapidly changing technological landscape and ensure civil rights are upheld; third-party audits and the right to human review would create accountability that self-regulation cannot.
Critiques of transparency and civil rights protections often center on concerns that AI regulation will slow innovation and put the United States at a global technological disadvantage. Some also argue that AI can reduce human bias and error, suggesting that fears of discrimination are overstated. Others contend that policy will never keep pace with the rapid development of AI technologies; in practice, that would mean civil rights protections operate only retroactively, addressing harm after discrimination, exclusion, or abuse has already affected people’s lives. But unchecked AI systems can cause great harm at accelerated speed and scale, embedding bias into core institutions before safeguards are in place, a risk that ultimately outweighs any gains they promise.
The question is no longer whether AI will shape the future of human rights; it already does. The question is whether democratic institutions will set enforceable guardrails to protect due process, equal protection, and economic security in the digital age. AI governance is not simply a matter of tech policy; it is a human rights imperative.
Photo by Aerps.com on Unsplash
The views expressed in Policy Perspectives and Brief Policy Perspectives are those of the authors and do not represent the approval or endorsement of the Trachtenberg School of Public Policy and Public Administration, the George Washington University, or any employee of either institution.