AI Needs a Constitution

Ensuring civil rights in the age of AI: Why a constitution is needed to align artificial intelligence with civil rights and ethical governance.

 

June 4, 2024

 

Futuristic handshake between a human and a robot, symbolizing mutual trust and cooperation in the context of AI and data governance. The background features holographic data displays and digital connections, emphasizing the integration of AI in a transparent and trustworthy manner. Image generated by DALL-E.

Today you were passed over for a job. You were declined a loan. You, or perhaps your child, were rejected from a top university. And you never even knew it happened. Every day, across some of the deepest, most constitutionally protected parts of our lives, algorithms are making decisions in milliseconds that no face-to-face human could ever justify. Under pressure, companies are debating ethical frameworks and empaneling advisory councils. We don't need an AI manifesto—we need a constitution. As it is being used today, artificial intelligence is simply incompatible with civil rights.

How do I know? As Chief Scientist of one of the first companies to use AI in hiring, I built the system that passed you over for that job. The massive employers that were our customers didn’t need to wait for your job application; we effectively applied for you, whether you knew it or not. But when AI decides who gets hired, you’ll never know why it wasn’t you. A human being asking you about age, politics, or family plans during the hiring process would be an actionable violation of your civil rights. AI doing the same without your knowledge is just as wrong but completely hidden from view.

Beyond hiring, some employers are toying with systems to automatically fire low-productivity workers. But why fire people if you can “help” them choose to leave? Multiple companies have asked me to build AIs that can identify employees who might become unhappy at work. The plan was to then target those employees with internal messages encouraging them to happily and willingly “choose” to leave. Others are developing machine-learning-based personality assessments that screen out applicants predicted to agitate for increased wages, support unionization, or demonstrate job-hopping tendencies. It’s not clear that such systems could truly work, but predictive HR serving only the employer tips an already unbalanced relationship wildly in the bosses’ favor.

Loans drive economic mobility in America, even as they’ve been a historically powerful tool for discrimination. I’ve worked on multiple projects to reduce that bias using AI. What I learned, however, is that even if an AI works exactly as intended, avoiding the pitfalls of these tangled webs of historical correlation, it is still solely designed to optimize the financial returns to the lender who paid for it. The loan application process is already impenetrable to most, and now your hopes for home ownership or small business funding are dying in a 50-millisecond computation at an AI server farm in Wyoming.

Health and biotech bring mind-blowing new AI-driven breakthroughs every week. The field has been home to many of the most rewarding projects of my career: I built an AI to predict manic episodes in people with bipolar disorder and invented another to manage my own son’s diabetes. My current projects use AI to help treat Alzheimer's and to develop the first-ever biological test for postpartum depression. AI in health is meant to be unbiased, but if you are an outlier (hint: you absolutely are, as we all are, in a dozen ways that are medically meaningful), that amazing new diagnostic tool may not diagnose you accurately. How would you know whether some unknown company's proprietary algorithm was trained on patients like you? And if all the doctors in your region are using the same tool, you have nowhere to go for a second opinion. When an AI-powered doctor tells you that you’re just being hysterical, the AI-powered insurance company is unlikely to disagree.

In law, the right to a lawyer and judicial review are constitutional guarantees in the United States and established civil rights throughout much of the world. You have the right to cross-examine your accusers. You have the right to a lawyer working unambiguously in your self-interest. These are the foundations of your civil liberties. When algorithms act as an expert witness, testifying against you but immune to cross-examination, these rights aren’t simply eroded; they cease to exist.

People aren't perfect. Neither ethics training for AI engineers nor legislation by woefully uninformed politicians can change that simple truth. I don’t need to assume that Jeff Bezos or Mark Zuckerberg are bad actors, or that Citibank or Microsoft are malevolent, to understand that what is in their self-interest is not always in mine. The framers of the US Constitution recognized this simple truth and sought to leverage human nature for a greater good. The Constitution didn’t simply presume people would always act toward that greater good; rather, it defined a dynamic mechanism, self-interest and the balance of power, that would force compromise and good governance. Its vision of treating people as real actors rather than better angels produced one of the greatest frameworks for governance in history.

Today, you were offered an AI-powered test for postpartum depression. My company developed that very test and it has the power to change your life, but you may choose not to use it for fear that we might sell the results to data brokers or activist politicians. You have a right to our AI acting solely for your health. So, I founded an independent nonprofit, The Human Test, that holds all of the data and runs all of the algorithms with sole fiduciary responsibility to you. No mother should have to choose between a lifesaving medical test and her civil rights.

AI can do wonderful things—I want its benefits in my life. But civil rights can’t exist in a world of hidden calculations. Just as with a lawyer or doctor, we must have AI that acts in our self-interest. AI needs a constitution—or more accurately, we need a constitution that defines access to artificial intelligence acting solely on our behalf as a civil right.