About Us

We help the world’s largest and most innovative businesses manage the privacy, fairness, security, and transparency of their AI and data—including generative AI systems. Our clients rely on us for concrete, highly technical advice to manage the risks of AI models that affect millions of people around the world.

Leadership

Andrew Burt is co-founder and managing partner at Luminos.Law. He is also a visiting fellow at Yale Law School’s Information Society Project.

Previously, Andrew was chief legal officer at Immuta, where he founded and led the company’s legal engineering team, a group of lawyers focused on automating compliance requirements for data science environments. During his six years at Immuta, the company was named one of the 50 most innovative companies in the world by Fast Company and was valued at $1 billion. Prior to Immuta, Andrew served as special advisor for policy to the head of the FBI Cyber Division.

A frequent speaker and writer, Andrew has published articles on law and technology for the New York Times, the Financial Times, and Harvard Business Review. He holds a JD from Yale Law School.

Ellie Graeden, Ph.D., is a partner and chief data scientist at Luminos.Law. She also serves as an adjunct research professor with the Georgetown University Massive Data Institute and Center for Global Health Science and Security. Dr. Graeden spent the last decade as CEO of Talus Analytics, a company focused on designing and building data products to align legal, policy, and data science requirements.

Ellie has extensive experience developing quantitative approaches for global-scale decision-making. She has worked with the UN, the G20, the G7, the White House, and agency partners across the US Federal government. Ellie holds a doctorate from the Massachusetts Institute of Technology.

Brenda Leong is a partner at Luminos.Law. Brenda also serves as adjunct faculty teaching privacy and information security law at George Mason University. Previously, Brenda was senior counsel and director of AI and ethics at the Future of Privacy Forum, where she oversaw the implementation and analysis of AI and ML technologies for FPF member companies, which include many of the largest companies in the world. She is an expert speaker and instructor on the responsible use of biometrics and digital identity, with a focus on facial recognition, facial analysis, and emerging issues around voice-operated systems.

Prior to her work at FPF, Brenda served in the US Air Force. She is a 2014 graduate of George Mason University School of Law.

Contact Info
1717 K ST NW, Suite 900
Washington, DC 20006
(202) 787-5888
contact@luminos.law
press@luminos.law
resumes@luminos.law

“We’ve worked with nearly every law firm but none knows more about AI or managing its risks than Luminos.Law.”
— Fortune 500 Company & Luminos.Law Client

Our Story

Our firm began with two realizations. Here’s the first:

1. The biggest barriers to the adoption of AI and analytics are not technical—they are legal, ethical, and policy-related.

After spending years working in AI startups in Silicon Valley and deploying analytics systems in major financial institutions, healthcare organizations, and elsewhere, we heard the same questions again and again: Is my AI system fair? What does “fair” mean in practice? Can we even use this dataset to train our models? 

The legal, policy, and ethical questions related to AI and analytics were endless and often asked at the end of the model life cycle, jeopardizing time (sometimes years) and funds (sometimes millions of dollars). Data scientists and lawyers struggled to communicate with each other, and both needed help.

This brought us to our second realization:

2. The only way to manage the risks of AI and analytics systems is to blend legal, ethical, and policy knowledge with deep technical expertise.

That’s why we founded Luminos.Law (formerly known as BNH.AI)—the first and only law firm in the world jointly run by lawyers and data scientists. At Luminos.Law, we combine sophisticated legal counsel with tactical advice focused on managing the risks of AI and data. 

That means we don’t just tell our clients what their legal liabilities are—we advise them on the technical controls they should use to measure AI bias and reidentification risk, on the packages they should use to deploy their AI models, and more. Our clients appreciate our technical skill as much as our legal acumen, because they know that in real-world environments, the two simply need to go hand in hand.