<< All Practices

Artificial Intelligence

Responsible AI Governance

We advise clients on development, operationalization, and adoption of machine learning, large language models, and other technologies powered by artificial intelligence. We counsel on the responsible use of AI, including the implications for privacy compliance, trust and safety, marketing and advertising, security, and copyright issues; the development of AI governance policies and procedures; and more. Among other services, we can help you:

  • Handle litigation or other disputes dealing with AI or other data-related issues
  • Develop, integrate, or adopt AI-powered technologies
  • Conduct AI vendor diligence 
  • Craft customer and vendor agreements to address unique AI risks and opportunities
  • Address evolving legal requirements and regulatory guidance on the use of AI, particularly in highly-regulated industries like employment and financial services
  • Provide customized AI governance policies that address AI development and integration, including employee use of commercial AI products and services
  • Prepare for or respond to regulatory inquiries

AI Model Audits

In a regulatory environment that is increasingly focused on responsible oversight of AI models and systems, performance and bias testing of AI systems is critical to mitigate risks and ensure defensibility. ZwillGen’s AI Division has a unique combination of legal and technical expertise, allowing us to work hand in hand with clients to develop tailored testing plans for the statistical analysis of AI model performance outcomes for validity and bias. These are techniques that data scientists can actually implement and that lawyers can understand.

Our team has extensive experience conducting model audits, guiding clients through complex and sensitive issues related to their AI systems, such as performance validation, managing bias, ensuring transparency, and safeguarding privacy.

Generative AI Red Teaming

Generative AI Models such as Large Language Models require specialized techniques to test for targeted vulnerabilities including illegal activities, model bias, toxicity and hate speech, privacy harms, security attacks, and hallucinations. We provide experienced red teaming services to assess generative AI, including multi-modal systems, for risk areas. Our structured approach gives objective, actionable outcomes that development teams can use and reuse to monitor ongoing operations.

Red teaming is emerging as the primary method for developers to proactively test their systems for problematic outputs, including interactions based on adversarial roles to identify flaws. The problems identified during testing can be addressed with a combination of technical and policy safeguards to prevent or mitigate undesired outputs in real-world scenarios.

ACTIONABLE SOLUTIONS

In addition to providing privileged and confidential assessments of AI risks, our audit and red teaming services give clients actionable technical and legal advice on how to manage those risks. Tailored to each client’s unique needs, our algorithmic services range from initial AI risk assessments to hands-on technical remediation of AI systems.

EXPERIENCED & EFFICIENT

We have expertise across nearly every type of AI system, from traditional classifiers to graphs, cutting-edge generative AI models, and more. Our assessments are efficient, often completed within weeks, and are conducted in line with relevant legal standards and emerging frameworks unique to each client’s operations and systems.

ESTABLISH DEFENSIBILITY

Our assessments help our clients comply with a wide range of regulatory requirements, as well as third-party oversight and investigations, ensuring legal defensibility for their most critical AI systems. This type of substantiation is increasingly important as state and federal regulators are beginning to initiate enforcement inquiries into AI-related claims. Many of our clients also use our audits and red teaming to publicly demonstrate best-in-class AI risk management, building trust and confidence with their customers.

We help our clients comply with:

  • Anti-discrimination provisions under the Civil Rights Act, the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and the Fair Housing Act, among other laws
  • State-level requirements such as Colorado’s Consumer Protections for Artificial Intelligence (Colorado’s “AI Act”), the California Consumer Privacy Act, the Colorado AI Act, and New York City’s Local Law 144, which mandates audits of automated employment decision tools
  • Compliance with standards such as the National Institute for Standards and Technology’s AI Risk Management Framework
  • Testing requirements related to evolving legal standards, such as the EU AI Act, Executive Order No. 14110, the G7 Hiroshima Process International Code of Conduct, as well as demonstrating adherence to reasonable standards of care
  • External regulatory oversight or investigations

In addition to performance and bias considerations, our testing and assessments also focus on privacy, security, transparency, and other risks.

Billing Philosophy

We believe the existing hourly law firm model is outdated and misaligns the interests of lawyers and their clients. That’s why we have alternative pricing models. On every matter, our goal is to find the right pricing structure to align our incentives with your success.

Learn more →

Trusted by some of the biggest names in tech.

“ZwillGen has become an essential partner, even an extension of our team, over the past year. Their ability to quickly and thoroughly provide subject-matter expertise, thoughtful guidance, and overall support has been a game-changer as our company navigates the complexities of all that comes with hyper-growth in this industry.”

Sara Haven, LogicGate

“ZwillGen attorneys are second to none in their expertise, efficiency, and value-add to their clients.”

Will Cooper, Fortinet

“I have ZwillGen on speed dial. Their deep substantive knowledge of our industry and the inner workings of start-ups enable them to offer pragmatic, actionable guidance quickly and at great value.”

Stephanie King, Former GC, AdRoll
x-logo-white11001955_threads_logo_brand_sign_icon1269936_linkedin_social_social media_icon_whiteBluesky_butterfly-logo-white