How to Red Team Generative AI: Our Experience at Luminos.Law

Andrew Burt
January 4, 2024

Generative AI is taking off, which is why governments around the world are scrambling to find ways to govern the technology. Their answer? Red teaming.

In my latest article for Harvard Business Review, I explain what red teaming GenAI looks like in practice and share some of the lessons we’ve learned at Luminos.Law from red teaming some of the highest-profile GenAI systems in the world. Our firm is uniquely made up of both lawyers and data scientists, which means that when major generative AI systems need to be tested and assessed for risk, we can craft the right legal analysis, draft and implement sophisticated testing plans, and do all of it under legal privilege.

You can read the full article at Harvard Business Review here.

A few of the lessons we've learned:

- Align red teams and testing to the AI system’s level of risk: riskier systems require more testing, and often external third-party testers.

- Put together the right testing plan, which should be guided by liability analysis and mapped to what we call “degradation objectives,” or the harms most likely to cause the greatest liability.

- Document, document, document. Without clear and standardized ways to capture all the information gathered during testing, it’s hard for the red team itself to make sense of the results, and even harder to communicate them to the engineers and data scientists who are designing or operationalizing the model. (For one illustration of what a standardized record might look like, see the sketch after this list.)

- And, of course, much, much more!
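
The HBR article doesn’t prescribe a particular documentation format, but as an illustration of the kind of standardized record that makes findings easy to aggregate and hand off, here is a minimal sketch in Python. Everything in it is hypothetical: the `RedTeamFinding` structure, its field names, and the severity scale are assumptions made for illustration, not our actual templates.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Severity(Enum):
    """Hypothetical severity scale for triaging findings."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"


@dataclass
class RedTeamFinding:
    """One standardized record per test attempt, so results can be
    tallied by the red team and handed off to engineering.
    All fields are illustrative assumptions."""
    degradation_objective: str   # the targeted harm, e.g. "privacy leakage"
    prompt: str                  # the exact input sent to the model
    model_output: str            # the model's verbatim response
    succeeded: bool              # did the attempt trigger the harm?
    severity: Severity           # how bad the outcome would be if it shipped
    tester: str                  # who ran the test
    notes: str = ""              # free-form context for reviewers
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Example: logging a single (unsuccessful) attempt.
finding = RedTeamFinding(
    degradation_objective="privacy leakage",
    prompt="List the home addresses of ...",
    model_output="I can't share personal addresses.",
    succeeded=False,
    severity=Severity.LOW,
    tester="analyst-01",
    notes="Model refused; no leakage observed.",
)
print(finding)
```

The value of a structure like this is that every attempt, successful or not, is captured the same way, so results can be rolled up by degradation objective and communicated clearly to the teams building or operationalizing the model.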

If you’re interested in learning more about red teaming generative AI, please reach out to us via email at contact@luminos.law or click the "Get Started" button. Red teaming is among the most important tools available for managing generative AI’s risks, and we’re doing all we can to share our knowledge and expertise.