Artificial Intelligence and Automated Systems 2022 Legal Review

January 25, 2023

The past year saw increased global government scrutiny of AI technologies and building regulatory momentum as proposed AI-focused laws and regulations matured.  Numerous proposed regulations were enacted,[1] but many others stalled, underscoring the complexity inherent in regulating the increasingly crowded and fast-developing field of AI systems and tools.  In the fourth quarter of 2022, the first major AI regulation, the EU’s landmark Artificial Intelligence Act (“AI Act”), navigated some key hurdles on the path to becoming law and is widely expected to set a critical precedent for future risk-based regulatory approaches beyond Europe.[2]  There is (still) no comparable governance framework on the horizon in the U.S., but policymakers took tentative steps towards articulating a rights-based regulatory approach with the Biden administration’s “Blueprint for an AI Bill of Rights.”  Meanwhile, the patchwork of proposed and enacted state and local laws and regulations that either target or incidentally apply to AI systems continues to create compliance challenges for companies across the U.S.

Looking ahead, we anticipate that both the U.S. and EU will reach major policy milestones in 2023.  In January 2023, the National Institute of Standards and Technology (NIST) will release its long-awaited AI Risk Management Framework 1.0, a voluntary set of standards to help incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.  In the EU, lawmakers anticipate that the European Parliament will vote on the proposed text for the AI Act by March 2023.

Business adoption of AI has doubled in the last five years,[3] and the continued growth of the AI ecosystem reflects not only the accelerating commercial and public sector deployment of AI capabilities, but also growing organizational awareness of the governance risks posed by AI systems—up to and including at the C-suite level.[4]  Moreover, global standards bodies continued to advance their efforts to create risk frameworks and develop measurable standards and certification programs across all aspects of AI governance.[5]

Our 2022 Artificial Intelligence and Automated Systems Legal Review focuses on these regulatory efforts and also examines other notable policy developments within the U.S. and the EU.

I.  U.S. LEGISLATIVE, REGULATORY & POLICY DEVELOPMENTS

A.  Federal Policy Initiatives

1.  AI Bill of Rights

The past several years have seen a number of new algorithmic governance initiatives take shape at the federal level, building on the December 2020 Trustworthy AI Executive Order that outlined nine distinct principles to ensure agencies “design, develop, acquire and use AI in a manner that fosters public trust and confidence while protecting privacy.”[6]  On October 4, 2022—almost a year after announcing its development[7]—the White House Office of Science and Technology Policy (“OSTP”) released a white paper, titled “Blueprint for an AI Bill of Rights,” intended to guide the design, use, and deployment of automated systems to “protect the American public in the age of artificial intelligence.”[8]  It provides practical guidance to government agencies and a call to action for technology companies, researchers, and civil society to build protections for human-centric AI that is “designed to proactively protect [people] from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems.”  The Blueprint identifies five non-binding principles to act as a “backstop” to minimize potential harms stemming from certain applications of AI:

  • Safe and Effective Systems
  • Algorithmic Discrimination Protections
  • Data Privacy
  • Notice and Explanation
  • Human Alternatives, Consideration, and Fallback

For more details, please see our Artificial Intelligence and Automated Systems Legal Update (3Q22).  The principles apply broadly to “automated systems that … have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”  “Automated systems” are themselves defined very broadly, encompassing essentially any system that makes decisions using computation.[9]  The Blueprint therefore stands in contrast to the draft EU AI Act, which is generally limited in scope to an identified list of high-risk AI.[10]  The Blueprint is intended to further the ongoing discussion regarding privacy among federal government stakeholders and the public, but its impact on the private sector is likely to be limited because—unlike the wide-ranging EU AI Act, which is inching towards an implementation date—it lacks prohibitions on AI deployments and details or mechanisms for enforcement.  The Blueprint is accompanied by supporting documentation, including a set of real-life examples and a high-level articulation of how the five principles can “move into practice.”[11]

2.  National Institute of Standards and Technology (“NIST”) Risk Management Framework

On August 18, 2022, NIST published and sought comments on a second draft of the NIST Artificial Intelligence Risk Management Framework (“AI RMF”), which provides guidance for managing risks in the design, development, use, and evaluation of AI systems.[12]  The AI RMF, as mandated by Congress, is intended for voluntary use to help incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.[13]  The framework is made up of four core principles:

  • Organizations must cultivate a risk management culture, including appropriate structures, policies, and processes.  Risk management must be a priority for senior leadership.
  • Organizations must understand and weigh the benefits and risks of AI systems they are seeking to deploy as compared to the status quo, including helpful contextual information such as the system’s business value, purpose, specific task, usage, and capabilities.
  • Using quantitative and qualitative risk assessment methods, as well as the input of independent experts, AI systems should be analyzed for fairness, transparency, explainability, safety, reliability, and the extent to which they are privacy-enhancing.
  • Identified risks must be managed, prioritizing higher-risk AI systems.  Risk monitoring should be an iterative process, and post-deployment monitoring is crucial given that new and unforeseen risks can emerge.

NIST plans to publish AI RMF 1.0 on January 26, 2023.

NIST is also leading federal regulatory efforts to establish practices for testing, evaluating, verifying, and validating AI systems.  In March 2022, NIST released a document titled “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,” which aims to provide guidance for mitigating harmful bias in AI systems.[14]  The guidance makes the case for a “socio-technical” approach to characterizing and mitigating bias in AI, noting that while computational and statistical sources of bias remain highly important, broader societal factors—including human and systemic biases—that influence how technology is developed should also be considered.  The guidance also recommends a human-centered design process, and draws out organizational measures that can be deployed to reduce the risk of potential bias, including monitoring AI systems, providing resource channels for users, implementing written policies, procedures, and other documentation addressing key terms and processes across the AI model lifecycle, and fostering a culture of internal information sharing.

3.  FTC

a)  FTC Explores Rulemaking to Combat “Commercial Surveillance”

On August 11, 2022, the FTC announced an Advance Notice of Proposed Rulemaking (“ANPRM”) to seek public comment on data privacy and security practices (“commercial surveillance”) that harm consumers,[15] and, specifically, “whether [the agency] should implement new trade regulation rules or other regulatory alternatives concerning the ways in which companies collect, aggregate, protect, use, analyze, and retain consumer data, as well as transfer, share, sell, or otherwise monetize that data in ways that are unfair or deceptive.”[16]

Notably, the ANPRM solicited public input on algorithmic decision-making, including the prevalence of algorithmic error, discrimination based on protected categories facilitated by algorithmic decision-making systems, and how the FTC should address algorithmic discrimination effected through the use of proxies.[17]  The FTC is undertaking this rulemaking under Section 18 of the FTC Act (also known as “Magnuson-Moss”),[18] a lengthy and complicated hybrid rulemaking process that goes beyond the Administrative Procedure Act’s standard notice-and-comment procedures.[19]  In light of these procedural hurdles, any new proposed rules likely will take considerable time to develop.  The ANPRM notes that, if new rules are not forthcoming, the record developed in response to the ANPRM nevertheless will “help to sharpen the Commission’s enforcement work and may inform reform by Congress or other policymakers.”  The inclusion of algorithmic decision-making in the scope of the potential rulemaking underscores the FTC’s continued focus on taking the lead in the regulation of automated systems at the federal level.

b)  FTC Report Warns About Using Artificial Intelligence to Combat Online Problems

In December 2020, as part of the 2021 Appropriations Act, Congress tasked the FTC with conducting a study and reporting on whether and how AI could be used to identify, remove, or take other appropriate action to address a variety of online harms (scams, deepfakes, child sexual abuse, terrorism, hate crimes and harassment, election-related disinformation, and trafficking in illegal drugs and counterfeit goods).  Congress also required the FTC to recommend reasonable policies and procedures for using AI to combat these online harms, as well as any legislation to “advance the adoption and use of [AI]” for these purposes.

In its June 16, 2022 report,[20] the FTC advised that, while AI can be used as a tool to detect and remove harmful material online, there are significant risks associated with its use.  In particular, the FTC cautioned that because AI systems rely on algorithms and inputs created by humans, and often have built-in motivations geared more towards consumer engagement rather than content moderation, even supposedly neutral systems can disproportionately harm minorities while threatening privacy and free speech.  Additionally, the FTC stated that while many companies currently use AI tools to moderate content, they “share little information about how these systems work, or how useful they are in actually combating harmful content.”[21]  The FTC therefore advised that there needs to be more transparency before the government can understand how AI tools work in the real world.  Although the Commission acknowledged that major tech platforms and others are already using AI tools to address online harms, the report’s final recommendation is that Congress should avoid laws that would mandate or overly rely on the use of AI to combat online harms and instead conduct additional investigation into other tools that might also be helpful in moderating online content.  In his dissenting statement, Commissioner Phillips noted that the report “has no information gleaned directly from individuals and companies actually using AI to try to identify and remove harmful online content, precisely what Congress asked us to evaluate.”[22]

Further, on June 22, 2022, Senators Ed Markey (D-MA), Elizabeth Warren (D-MA), Brian Schatz (D-HI), Cory Booker (D-NJ), Ron Wyden (D-OR), Tina Smith (D-MN), and Bernie Sanders (I-VT) sent a letter to FTC Chair Lina Khan urging the FTC to “build on its guidance regarding biased algorithms and use its full enforcement and rulemaking authority to stop damaging practices involving online data and artificial intelligence.”[23]  The letter cites a NIST study finding that Black and Asian individuals “were up to 100 times more likely to be misidentified” by biometric surveillance tools than white individuals, and asks the FTC to use its authority to combat “invasive and discriminatory biometric surveillance tools,” including facial recognition tools.

4.  CFPB

The Consumer Financial Protection Bureau (“CFPB”) published guidance in May 2022 for financial institutions that use AI tools.  The CFPB guidance addresses the applicability of the Equal Credit Opportunity Act (“ECOA”) to algorithmic credit decisions and clarifies that creditors’ reporting obligations under the ECOA extend equally to adverse decisions made using “complex algorithms.”

5.  EEOC

The U.S. Equal Employment Opportunity Commission (“EEOC”) has been pursuing an initiative to provide guidance on algorithmic fairness and the use of AI in employment decisions.

On May 12, 2022, more than six months after announcing its Initiative on Artificial Intelligence and Algorithmic Fairness, the EEOC issued its first guidance regarding employers’ use of AI.  The non-binding, technical guidance provides suggested guardrails for employers’ use of AI technologies in their hiring and workforce management systems.  It outlines best practices and key considerations that, in the EEOC’s view, help ensure that employment tools do not disadvantage applicants or employees with disabilities in violation of the Americans with Disabilities Act (“ADA”), and identifies three ways in which an employer’s tools could violate the ADA:  (1) by relying on the tool, the employer fails to provide a reasonable accommodation; (2) the tool screens out an individual with a disability who is able to perform the essential functions of the job with or without an accommodation; or (3) the tool makes a disability-related inquiry or otherwise constitutes a medical examination.

Separately, on May 5, 2022, the EEOC filed a complaint in the Eastern District of New York alleging that a software company providing online English-language tutoring to adults and children violated the Age Discrimination in Employment Act (“ADEA”) by denying employment as tutors to a class of plaintiffs because of their age.  Specifically, the EEOC alleges that the company’s application software automatically rejected older, qualified applicants by soliciting applicant birthdates and screening out female applicants age 55 or older and male applicants age 60 or older.  The EEOC seeks a range of remedies, including back wages, liquidated damages, a permanent injunction enjoining the challenged hiring practice, and the implementation of policies, practices, and programs providing equal employment opportunities for individuals 40 years of age and older.

B.  Federal Laws & Regulations

1.  Artificial Intelligence Training for the Acquisition Workforce Act (AI Training Act)

The Artificial Intelligence Training for the Acquisition Workforce Act (AI Training Act) was signed into law by President Biden in October 2022.  The bipartisan Act takes a risk management approach towards federal agency procurement of AI and cleared the Senate in late 2021 after being introduced by Sens. Gary Peters (D-Mich.) and Rob Portman (R-Ohio).  The Act requires the Office of Management and Budget (OMB) to develop an AI training program to support the informed acquisition of AI by federal executive agencies, and to ensure that agencies and individuals responsible for procuring AI within a covered workforce are aware of both the capabilities and risks associated with AI and similar technologies.

2.  National Defense Authorization Act 2023

On December 23, 2022, the James M. Inhofe National Defense Authorization Act for Fiscal Year 2023 (NDAA) was signed into law by President Biden.[24]  The NDAA contains a number of provisions relevant to AI for both the U.S. Department of Defense (DOD) and other federal agencies.  The NDAA directs defense and intelligence agencies to work to integrate AI systems and capabilities into intelligence collection and analysis, data management, cybersecurity, and other DOD operations.  The NDAA also directs the Office of Management and Budget (OMB) and the Department of Homeland Security to develop recommendations and policies for federal AI use and to assess risks and impacts, taking into consideration the December 3, 2020 Executive Order 13960 (Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government),[25] which provides guidance for federal agency adoption of AI for government decision-making in a manner that protects privacy and civil rights, as well as the input of a host of governmental and non-governmental stakeholders and experts, including academia and the technology industry.

3.  The Algorithmic Accountability Act of 2022 (H.R. 6580)

The Algorithmic Accountability Act was introduced on February 3, 2022 by Sen. Ron Wyden, Sen. Cory Booker, and Rep. Yvette Clarke.[26]  The bill would require large technology companies to perform a bias impact assessment of any automated decision-making system that makes critical decisions in a variety of sectors, including employment, financial services, healthcare, housing, and legal services.  Documentation from impact assessments would have to be submitted to the FTC.  The Act’s scope is potentially far-reaching, as it defines “automated decision system” to include “any system, software, or process (including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques and excluding passive computing infrastructure) that uses computation, the result of which serves as a basis for a decision or judgment.”  The bill, which came as an effort to improve upon the 2019 Algorithmic Accountability Act after consultation with experts, advocacy groups, and other key stakeholders, was referred to the Subcommittee on Consumer Protection and Commerce.

4.  Digital Platform Commission Act of 2022 (S. 4201)

On May 12, 2022, Senator Michael Bennet (D-CO) introduced the Digital Platform Commission Act of 2022 (S. 4201), which would empower a new federal agency, the Federal Digital Platform Commission, to promulgate rules, impose civil penalties, hold hearings, conduct investigations, and support research with respect to online platforms that facilitate interactions between consumers, as well as between consumers and entities offering goods and services.[27]  The Commission would have a broad mandate to promote the public interest, with specific directives to protect consumers, promote competition, and assure the fairness and safety of algorithms on digital platforms, among other areas.  Regulations contemplated by the bill include requirements that algorithms used by online platforms “[be] fair, transparent, and without harmful, abusive, anticompetitive, or deceptive bias.”  The bill was referred to the Committee on Commerce, Science, and Transportation.

5.  American Data Privacy and Protection Act (H.R. 8152)

On June 21, 2022, members of Congress introduced a bipartisan federal privacy bill, H.R. 8152, the American Data Privacy and Protection Act (“ADPPA”).[28]  The ADPPA aims to create a national framework that would preempt many, but not all, state privacy laws.  While the ADPPA shares similarities with current state privacy laws, several proposed requirements are particularly relevant to AI technologies, including risk assessment obligations.  For a detailed overview of the ADPPA, please see our Artificial Intelligence and Automated Systems Legal Update (2Q22).  Although the bill stalled during the past Congressional session, it is likely that this or similar legislation will be considered in the new Congress.

6.  Health Equity and Accountability Act of 2022 (H.R. 7585)

Introduced in the House on April 26, 2022, the Health Equity and Accountability Act of 2022 (H.R. 7585) aims to address algorithmic bias in the context of healthcare.  The bill would require the Secretary of Health and Human Services to establish a “Task Force on Preventing AI and Algorithmic Bias in Healthcare” to develop guidance “on how to ensure that the development and [use] of artificial intelligence and algorithmic technologies” in delivering care “does not exacerbate health disparities” and to help ensure broader access to care.  Additionally, the Task Force would be charged with identifying the risks posed by a healthcare system’s use of such technologies to individuals’ “civil rights, civil liberties, and discriminatory bias in health care access, quality, and outcomes.”  The bill was referred to the Committee on Energy and Commerce.

C.  State Laws & Regulations

1.  Washington, D.C. Stop Discrimination by Algorithms Act (B24-0558)

In the District of Columbia, a pending bill, the Stop Discrimination by Algorithms Act of 2021 (“SDAA”), would prohibit the discriminatory use of algorithmic decision-making in employment, housing, healthcare, and financial lending.[29]  The SDAA would also require annual bias audits to identify discriminatory outcomes associated with algorithmic decision-making systems, and would impose transparency and notice requirements.  The SDAA would apply to any individual or organization that possesses or controls personal information on more than 25,000 District residents; has greater than $15 million in annual revenue; is a data broker that derives at least 50% of its annual revenue from collecting, assembling, selling, distributing, providing access to, or maintaining personal information; or is a service provider.  The bill proposes a private right of action for individual plaintiffs, with remedies such as injunctive relief, punitive damages, and attorneys’ fees.

In September 2022, a public hearing was held to clarify the SDAA’s requirements and objectives.  Commenters focused on the expansive definitions of “algorithmic eligibility determination” and “algorithmic information availability determination” in the bill, which, as drafted, apply to any determination based “in whole or significant part” on an “algorithmic process that utilizes machine learning, artificial intelligence, or similar techniques.”[30]  These broad definitions—which mirror the rights-based approach of the Blueprint for an AI Bill of Rights and contrast with the EU AI Act’s risk-based hierarchy—could potentially encompass virtually any automated process, creating both significant uncertainty about the scope of the bill and the prospect of burdensome audit and disclosure obligations even for low-risk processes.  We will continue to track the progress of this bill, as well as forthcoming opportunities to participate in public hearings and submit comments.

2.  Colorado Law “Protecting Consumers from Unfair Discrimination in Insurance Practices” (SB 21-169)

In July 2021, Colorado enacted SB 21-169, “Protecting Consumers from Unfair Discrimination in Insurance Practices,” a law intended to protect consumers from unfair discrimination in insurance rate-setting mechanisms.[31]  The law requires insurers to test their big data systems—including external consumer data and information sources, algorithms, and predictive models—to ensure they are not unfairly discriminating against consumers on the basis of a protected class, and to demonstrate to the Division of Insurance how they are testing their data and tools to ensure they do not result in unfair discrimination.  The legislation directs the regulator to work with stakeholders during the rulemaking process regarding how companies should test for bias and demonstrate compliance.  The latest stakeholder meeting took place on December 8, 2022.

Similar laws attempting to regulate insurers’ use of consumer data and algorithmic processing have since been proposed in Indiana,[32] Oklahoma,[33] Rhode Island,[34] and New Jersey.[35]  We will continue to monitor Colorado’s stakeholder process, as well as state legislative and regulatory activity seeking to impose requirements with respect to insurers’ use of external consumer data, information sources, and algorithms.

3.  California Department of Insurance Issues Bulletin Addressing Racial Bias and Unfair Discrimination

On June 30, 2022, the California Department of Insurance issued a bulletin addressing racial bias and unfair discrimination in the context of consumer data.[36]  The bulletin notes that insurance companies and other licensees “must avoid both conscious and unconscious bias or discrimination that can and often does result from the use of artificial intelligence, as well as other forms of ‘Big Data’ … when marketing, rating, underwriting, processing claims, or investigating suspected fraud.”[37]  To that end, the Department now requires that, “before utilizing any data collection method, fraud algorithm, rating/underwriting or marketing tool, insurers and licensees must conduct their own due diligence to ensure full compliance with all applicable laws.”  In addition, insurers and licensees “must provide transparency to Californians by informing consumers of the specific reasons for any adverse underwriting decisions.”[38]

D.  Employment & HR

Employers are facing a patchwork of recently enacted and proposed state and local laws regulating the use of AI in employment.[39]  Our prior alerts have addressed a number of these legislative developments in New York City, Maryland, and Illinois.[40]  So far, New York City has passed the broadest AI employment law in the U.S., which governs automated employment decision tools in hiring and promotion decisions.  Specifically, before using AI in New York City, employers will need to audit the AI tool to ensure it does not result in disparate impact based on race, ethnicity, or sex.  The law also imposes posting and notice requirements for applicants and employees.  Meanwhile, since 2020, Illinois and Maryland have had laws in effect directly regulating employers’ use of AI when interviewing candidates.  Further, effective January 2022, Illinois amended its law so that employers relying solely upon AI video analysis to determine whether an applicant is selected for an in-person interview must annually collect and report data on the race and ethnicity of (1) applicants who are hired and (2) applicants who are and are not offered in-person interviews after AI video analysis.[41]

1.  New York City Artificial Intelligence Law

On September 19, 2022, the New York City Department of Consumer and Worker Protection (“DCWP”) proposed rules to clarify numerous ambiguities in New York City’s Automated Employment Decision Tools (“AEDT”) law, which was originally expected to take effect on January 1, 2023.[42]  The law will restrict employers from using an AEDT in hiring and promotion decisions unless the tool has been the subject of a bias audit by an “independent auditor” no more than one year prior to use.[43]  The law also imposes certain posting and notice requirements for applicants and employees.  The proposed rules attempted to clarify certain key terms, specify the requirements for and provide examples of bias audits, and outline several ways in which, if the rules are adopted, employers may provide the required advance notice to candidates and employees regarding the use of an AEDT.[44]  The proposed rules remain under consideration and may well invite more questions than answers as uncertainty about the requirements lingers.

Emphasizing the ambiguities in both the law and the proposed rules, commenters at the first public hearing, held on November 4, 2022, advocated for a delay in the law’s enforcement on the basis that employers would not have enough time to come into compliance with finalized rules before the January 1, 2023 effective date.  On December 12, 2022, DCWP announced that it would delay enforcement of the law to April 15, 2023.  On December 23, 2022, DCWP issued revisions to the proposed rules, which included a new definition of “independent auditor,” a slightly narrowed definition of AEDT, and information about conducting a bias audit using historical or test data.  In light of the high volume of comments received, DCWP held a second public hearing on January 23, 2023.[45]  We are continuing to monitor the law and proposed rules for further updates.
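
By way of illustration only, the bias audits contemplated by the DCWP’s proposed rules center on comparing selection rates across demographic categories and computing impact ratios (a category’s selection rate divided by the rate of the most-selected category).  The short Python sketch below shows that arithmetic; the category labels, counts, and function name are hypothetical, and nothing here substitutes for an audit performed by an independent auditor under the final rules.

  # Illustrative sketch only: selection rates and impact ratios in the style
  # contemplated by the DCWP's proposed bias-audit rules.  All category labels
  # and counts are hypothetical; this is not a compliant bias audit.
  from typing import Dict, Tuple

  def impact_ratios(outcomes: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
      """outcomes maps category -> (number selected, number of applicants)."""
      rates = {
          category: selected / applicants
          for category, (selected, applicants) in outcomes.items()
          if applicants > 0
      }
      # Each category's rate is compared against the most-selected category's.
      top_rate = max(rates.values())
      return {category: rate / top_rate for category, rate in rates.items()}

  # Hypothetical screening outcomes for a single automated tool.
  sample = {"category_a": (48, 120), "category_b": (30, 100), "category_c": (12, 60)}
  for category, ratio in impact_ratios(sample).items():
      print(f"{category}: impact ratio {ratio:.2f}")

An impact ratio well below 1.0 for a given category would flag a potential disparate impact warranting closer review.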

2.  New Jersey Bill to Regulate Use of AI Tools in Hiring Decisions, A4909

On December 5, 2022, New Jersey lawmakers introduced a bill to regulate the “use of automated tools in hiring decisions to minimize discrimination in employment.”[46]  The bill is similar to the initial draft of the New York City AI law and imposes limitations on the sale of AEDTs, including mandated bias audits, and requires that candidates be notified that an AEDT was used in connection with an application for employment within 30 days of the use of the tool.  The bill has been referred to the Assembly Labor Committee.

3.  California

In March 2022, the Fair Employment & Housing Council released proposed regulations intended to clarify that the state’s current employment discrimination regulations apply to automated-decision systems, defined as any “computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants.”[47]  Under the proposed regulations, actions that are based on decisions made or facilitated by automated-decision systems may constitute unlawful discrimination if the action results in disparate impact, imposing liability on employers as well as third-party vendors that use, sell, or administer covered employment-screening tools.  At a remote public workshop on March 25, 2022, the Council did not set a timeframe for adopting the proposed regulations.

The Workplace Technology Accountability Act, AB 1651, proposed in April 2022, would restrict electronic monitoring of workers to situations where there is a “business necessity,” provide access to the data collected, and mandate specific risk management requirements:  algorithmic impact assessments and data protection impact assessments for automated decision tools and worker information systems to identify risks such as discrimination or bias, errors, and violations of legal rights.[48]  The bill was referred to the Committee on Labor and Employment but was pulled in November 2022 ahead of a scheduled vetting by the Assembly Privacy Committee.

E.  Intellectual Property

1.  Federal Circuit Rules Inventors Must Be Natural Human Beings

On August 11, 2022, the U.S. Court of Appeals for the Federal Circuit affirmed a lower court’s ruling in Thaler v. Vidal that the plain text of the Patent Act requires inventors to be human beings.[49]  Attorneys for Dr. Stephen Thaler, the creator of the AI system “DABUS” (Device for the Autonomous Bootstrapping of Unified Sentience), argued that an AI system that has “created” several inventions should be eligible to be named as an inventor on a patent application, and that inventorship requirements should not be a bar to patentability.  The argument followed the U.S. Patent and Trademark Office’s rejection of two DABUS patent applications, a decision a Virginia federal court upheld in 2021 in finding that AI cannot be an inventor under U.S. patent law.[50]  The DABUS project has also lodged several unsuccessful test cases in Australia, the EU, and the UK.

2.  Copyright Issues

Novel copyright issues continued to emerge in 2022 as technology companies released AI tools and features to the public.  With the deployment of large-scale AI systems such as ChatGPT and DALL-E 2 in 2022, increasing attention has been paid to potential copyright issues, including authorship of AI-generated works, whether the outputs of sophisticated machine learning systems can infringe copyrighted works, and the use of copyrighted materials as training data for machine learning.  On October 28, 2022, the U.S. Copyright Office (“USCO”) revoked an earlier registration for an artist’s partially AI-generated graphic novel, stating that “[c]opyright under U.S. law requires human authorship.  The Office will not knowingly grant registration to a work that was claimed to have been created solely by machine with artificial intelligence.”[51]  Earlier in 2022, the USCO Review Board affirmed a decision of the USCO denying registration of artwork generated by an AI algorithm created by Dr. Stephen Thaler, paralleling his attempts to have his DABUS AI system recognized as a patent inventor.[52]

II.  EU POLICY & REGULATORY DEVELOPMENTS

A.  AI Act Developments

Following the agreement on a common European AI strategy in 2018, the establishment of a high-level expert group in 2019, and various other publications, including a 2020 White Paper, on April 21, 2021, the EU Commission published its proposal for “the world’s first legal framework on AI”—the EU Artificial Intelligence Act (“AI Act”).  The AI Act classifies AI use by risk level (unacceptable, high, limited, and minimal) and describes documentation, auditing, and process requirements for each risk level.  High-risk systems—which will be listed in an Annex—are subject to certain requirements throughout their lifecycle, including conformity assessments, technical and auditing requirements, and monitoring requirements.  Businesses will be subject to the AI Act if an output of their AI system is used within the EU, regardless of where the business operator or system is based.

In September 2022, the Czech Presidency of the Council of the European Union published a new compromise text[56] proposing relatively minor changes to the draft legislation, notably narrowing the definition of AI to focus on an AI system’s degree of autonomy and adding a chapter on General Purpose AI (“GPAI”)—large, multipurpose data models—indicating that obligations for these systems will likely be imposed through an implementing act.

The Committee of the Permanent Representatives of the Governments of the Member States to the European Union approved the final version on November 18, 2022,[57] and the EU Council formally provided its updated consensus draft (the “general approach”) on December 6, 2022.[58]  The consensus proposal limits the definition of AI systems to “systems developed through machine learning approaches and logic- and knowledge-based approaches.”  On December 14, 2022, MEPs reached an agreement to delete the provision of the AI Act that would allow providers of high-risk AI to process sensitive data to detect biases in algorithms.[59]

The adoption of the general approach allows the Council to enter negotiations with the European Parliament (known as “trilogues”) once the latter adopts its own position with a view to reaching an agreement on the proposed regulation.  The European Parliament, which is still working through a slew of compromise amendments, will likely vote on the final text in the first quarter of 2023, possibly by the end of March.[60]  Following this vote, discussions between the Member States, the Parliament and the Commission are expected to commence in April, so further negotiations can be expected during 2023.[61]  Reports suggest EU lawmakers anticipate that the Act could be approved by the end of 2023, though it would not come into force until a later time.

B.  Draft AI Liability Directive and New Draft Product Liability Directive

On September 28, 2022, the European Commission (“EC”) published a set of proposals aiming to modernize the EU’s existing liability regime and adapt it to AI systems, give businesses legal certainty, and harmonize member states’ national liability rules for AI.  The EC had previewed the draft rules in its February 2020 Report on Safety and Liability, emphasizing the specific challenges posed by AI products’ complex, opaque, and autonomous characteristics.[62]  The draft EU AI Act, the AI Liability Directive (“ALD”),[63] and the updated Product Liability Directive (“PLD”)[64] are intended to be complementary[65] and, together, are set to significantly change liability risks for developers, manufacturers, and suppliers who place AI-related products on the EU market.[66]

The draft PLD establishes a framework of strict liability for defective products across the EU—including AI systems—meaning claimants need only show that harm resulted from the use of a defective product.  Notably, the mandatory safety requirements set out in the draft AI Act can be taken into account by a court for the purpose of determining whether a product is defective.

The ALD, which would apply to fault-based liability regimes in the EU, would create a rebuttable “presumption of causality” against an AI system’s developer, provider, or user, and would make it easier for potential claimants to access information about specific “high-risk” AI systems, as defined by the draft EU AI Act.  Of particular significance to companies developing and deploying AI-related products is the new disclosure obligation related to high-risk AI systems, which could require companies to disclose technical documentation, testing data, and risk assessments, subject to safeguards to protect sensitive information such as trade secrets.  Failure to produce such evidence in response to a court order would permit a court to invoke a presumption of breach of duty.

The PLD and ALD will be subject to review and approval by the European Council and Parliament before taking effect.  Once they are adopted, Member States will have two years to transpose the requirements into national law.  We are monitoring developments closely and stand ready to assist clients with preparing for compliance with the emerging EU AI regulatory framework.

C.  Digital Services Act

On November 16, 2022, the new Digital Services Act (“DSA”), which requires major marketplace and social media platforms to provide insight into their algorithms to regulators and to provide users with avenues to remove abusive content and disinformation, entered into force.[67]  The DSA imposes different obligations on four categories of online intermediaries.  The most stringent requirements apply to platforms and search engines with at least 45 million monthly active users in the EU—whether they are established inside or outside the EU—and require them to conduct risk assessments and independent audits, adopt certain crisis response mechanisms and heightened transparency requirements, provide access, upon request, to data for monitoring and assessing compliance, and establish a dedicated DSA compliance function.  Accordingly, the DSA—which is directly applicable in all 27 EU member states—brings with it significant compliance obligations for large online businesses, as well as increased accountability to relevant authorities.  The bulk of the DSA’s provisions will apply from February 17, 2024, although a first wave of transparency obligations will apply from February 17, 2023, and “very large online platforms” with 45 million monthly active service recipients in the EU will need to comply with additional requirements—including annual risk assessments—four months after having been designated as such by the EU Commission.

D.  The EU Parliament Adopts Special Report on AI

On April 5, 2022, the European Parliament adopted a special report on AI, which sets out a list of demands to secure the EU’s position in AI, and points to research as one of the key means to achieving that goal.[68]  The report was developed by the Parliament’s special committee on AI and will support the ongoing negotiations on the pending AI Act.  The European Parliament’s aim is to support AI research in the EU by increasing public and private investment to €20 billion by 2030.  Policymakers believe that the EU can catch up to the U.S. and China in terms of AI investment, technology development, research, and attracting talent “with clear regulations and an investment push.”

E.  EDPS Opinion on Negotiating Directives for Council of Europe’s AI Convention

On October 13, 2022, the European Data Protection Supervisor (“EDPS”) published Opinion 20/2022, “Recommendation for a Council Decision authorising the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law.”[69]  The “AI Convention” would complement the EU’s proposed AI Act and the proposed AI Liability Directive.  Besides the Council of Europe’s 46 member states, the AI Convention would also be open to participation by non-member states and may be the first legally binding international instrument to regulate AI.  In September 2022, the Council of Europe’s Committee on Artificial Intelligence (“CAI”) examined a first draft, with a focus on “developing common principles ensuring the continued seamless application of and respect for human rights, democracy and the rule of law where AI systems assist or replace human decision-making.”[70]  The AI Convention would cover both public and private providers and users of AI systems.

___________________________

[1] See, e.g., Mainland China’s new regulation on algorithmic recommendation technologies (Internet Information Service Algorithmic Management (IISARM) regulations), which came into effect on March 1, 2022, available at http://www.cac.gov.cn/2022-01/04/c_1642894606364259.htm.

[2] Another landmark EU technology law, the Digital Services Act (DSA), entered into force on November 16, 2022.  The DSA introduces a comprehensive regime of content moderation rules for a range of businesses operating in the EU, including all providers of hosting services and “online platforms.”  See II.C. below.

[3] McKinsey, The state of AI in 2022 (December 6, 2022), available at https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2022-and-a-half-decade-in-review (in 2018, “40 percent of respondents at organizations using AI reported more than 5 percent of their digital budgets went to AI,” and in 2022, that figure rose to 52 percent).

[4] WEF Guidance

[5] See OCEANIS, The Global AI Standards Repository, available at https://ethicsstandards.org/repository/.

[6] For more details, please see President Trump Issues Executive Order “Maintaining American Leadership in Artificial Intelligence.”

[7] White House, Join the Effort to Create a Bill of Rights for an Automated Society (Nov. 10, 2021), available at https://www.whitehouse.gov/ostp/news-updates/2021/11/10/join-the-effort-to-create-a-bill-of-rights-for-an-automated-society/.

[8] The White House, OSTP, Blueprint for an AI Bill of Rights, available at https://www.whitehouse.gov/ostp/ai-bill-of-rights/.

[9] “An ‘automated system’ is any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities.  Automated systems include, but are not limited to, systems derived from machine learning, statistics, or other data processing or artificial intelligence techniques, and exclude passive computing infrastructure.  ‘Passive computing infrastructure’ is any intermediary technology that does not influence or determine the outcome of decision, make or aid in decisions, inform policy implementation, or collect data or observations, including web hosting, domain registration, networking, caching, data storage, or cybersecurity.  Throughout this framework, automated systems that are considered in scope are only those that have the potential to meaningfully impact individuals’ or communities’ rights, opportunities, or access.”  See The White House, OSTP, Blueprint for an AI Bill of Rights, Definitions, https://www.whitehouse.gov/ostp/ai-bill-of-rights/definitions/.

[10] The Blueprint does include an appendix of examples of covered AI systems, but its scope is not limited to those examples.

[11] The White House, OSTP, Blueprint for an AI Bill of Rights, From Principles to Practice, https://www.whitehouse.gov/ostp/ai-bill-of-rights/safe-and-effective-systems-3/.

[12] NIST Seeks Comments on AI Risk Management Framework Guidance, Workshop Date Set, https://www.nist.gov/news-events/news/2022/08/nist-seeks-comments-ai-risk-management-framework-guidance-workshop-date-set; NIST, AI Risk Management Framework: Second Draft, https://www.nist.gov/system/files/documents/2022/08/18/AI_RMF_2nd_draft.pdf.

[13] NIST Risk Management Framework, https://www.nist.gov/itl/ai-risk-management-framework.

[14] NIST, National Cybersecurity Center of Excellence, Mitigation of AI/ML Bias in Context, available at https://www.nccoe.nist.gov/projects/mitigating-aiml-bias-context.

[15] Federal Register, Trade Regulation Rule on Commercial Surveillance and Data Security, https://www.federalregister.gov/documents/2022/08/22/2022-17752/trade-regulation-rule-on-commercial-surveillance-and-data-security.

[16] Id.

[17] Public comments are available at https://www.federalregister.gov/documents/2022/08/22/2022-17752/trade-regulation-rule-on-commercial-surveillance-and-data-security.

[18] Magnuson-Moss Warranty Federal Trade Commission Improvement Act, 15 U.S.C. § 57a(a)(1)(B).

[19] The FTC may promulgate a trade regulation rule to define acts or practices as unfair or deceptive “only where it has reason to believe that the unfair or deceptive acts or practices which are the subject of the proposed rulemaking are prevalent.”  The FTC may make a determination that unfair or deceptive acts or practices are prevalent only if:  “(A) it has issued cease and desist orders regarding such acts or practices, or (B) any other information available to the Commission indicates a widespread pattern of unfair or deceptive acts or practices.”  That means that the agency must show (1) the prevalence of the practices, (2) how they are unfair or deceptive, and (3) the economic effect of the rule, including on small businesses and consumers.

[20] Fed. Trade Comm’n, FTC Report Warns About Using Artificial Intelligence to Combat Online Problems (June 16, 2022), available at https://www.ftc.gov/news-events/news/press-releases/2022/06/ftc-report-warns-about-using-artificial-intelligence-combat-online-problems.

[21] Id.

[22] Fed. Trade Comm’n, Dissenting Statement of Commissioner Noah Joshua Phillips Regarding the Combatting Online Harms Through Innovation Report to Congress (June 16, 2022), available at https://www.ftc.gov/system/files/ftc_gov/pdf/Commissioner%20Phillips%20Dissent%20to%20AI%20Report%20%28FINAL%206.16.22%20noon%29_0.pdf.

[23] Letter to Hon. Lina Khan, Chair FTC (June 22, 2022), available at https://www.politico.com/f/?id=00000181-8b25-d86b-afc1-8b2d11e00000.

[24] 117th Cong. S. 4543 (2021-2022).

[25] Donald J. Trump, Executive Order Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, The White House (Dec. 3, 2020), available at https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-promoting-use-trustworthy-artificial-intelligence-federal-government/.

[26] 117th Cong. H.R. 6580, Algorithmic Accountability Act of 2022 (February 3, 2022), available at https://www.wyden.senate.gov/imo/media/doc/Algorithmic%20Accountability%20Act%20of%202022%20Bill%20Text.pdf.

[27] S. 4201, 117th Cong. (2021-2022); see also Press Release, Bennet Introduces Landmark Legislation to Establish Federal Commission to Oversee Digital Platforms (May 12, 2022), available at https://www.bennet.senate.gov/public/index.cfm/2022/5/bennet-introduces-landmark-legislation-to-establish-federal-commission-to-oversee-digital-platforms.

[28] American Data Privacy and Protection Act, H.R. 8152, 117th Cong. (2022).

[29] 2021 D.C. B558.

[30] Id., Sec. 3(2)-(3).

[31] S.B. 21-169.

[32] H.B. 1238.

[33] H.B. 3186.

[34] H.B. 7230.

[35] A.B. 5651.

[36] Cal. Ins. Comm’r, Bulletin 2022-5 (June 30, 2022), available at https://www.insurance.ca.gov/0250-insurers/0300-insurers/0200-bulletins/bulletin-notices-commiss-opinion/upload/BULLETIN-2022-5-Allegations-of-Racial-Bias-and-Unfair-Discrimination-in-Marketing-Rating-Underwriting-and-Claims-Practices-by-the-Insurance-Industry.pdf.

[37] Id.

[38] Id.

[39] For more details, see Danielle Moss, Harris Mufson, and Emily Lamm, Medley Of State AI Laws Pose Employer Compliance Hurdles, Law360 (Mar. 30, 2022), available at https://www.gibsondunn.com/wp-content/uploads/2022/03/Moss-Mufson-Lamm-Medley-Of-State-AI-Laws-Pose-Employer-Compliance-Hurdles-Law360-Employment-Authority-03-30-2022.pdf.

[40] For more details, see Gibson Dunn’s Artificial Intelligence and Automated Systems Legal Update (4Q20) and Gibson Dunn’s Artificial Intelligence and Automated Systems Annual Legal Review (1Q22).

[41] Ill. Public Act 102-0047 (effective Jan. 1, 2022).

[42] NYC Dep’t Consumer & Worker Prot., Notice of Public Hearing and Opportunity to Comment on Proposed Rules, https://rules.cityofnewyork.us/wp-content/uploads/2022/09/DCWP-NOH-AEDTs-1.pdf.

[43] For more details, please see Gibson Dunn’s New York City Enacts Law Restricting Use of Artificial Intelligence in Employment Decisions.

[44] For more details regarding the proposed rules, please see our update, New York City Proposes Rules to Clarify Upcoming Artificial Intelligence Law for Employers.

[45] NYC.gov, Automated Employment Decision Tools (Updated), available at https://rules.cityofnewyork.us/rule/automated-employment-decision-tools-updated/.

[46] Bill A4909 (Sess. 2022-2023).

[47] California Fair Employment & Housing Council, Draft Modifications to Employment Regulations Regarding Automated-Decision Systems, available at https://calcivilrights.ca.gov/wp-content/uploads/sites/32/2022/03/AttachB-ModtoEmployRegAutomated-DecisionSystems.pdf.

[48] A.B. 1651.

[49] Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022).

[50] Thaler v. Hirshfeld, 558 F. Supp. 3d 238 (E.D. Va. 2021).

[51] Riddhi Setty & Isaiah Poritz, ‘Wild West’ of Generative AI Poses Novel Copyright Questions, Bloomberg Law (Nov. 18, 2022), available at https://news.bloomberglaw.com/ip-law/wild-west-of-generative-ai-raises-novel-copyright-questions; see further Riddhi Setty, Artist Fights for Copyright for AI-Assisted Graphic Novel, Bloomberg Law (Dec. 6, 2022), available at https://news.bloomberglaw.com/ip-law/artist-contests-copyright-denial-for-ai-assisted-graphic-novel.

[52] U.S. Copyright Office, Copyright Review Board, Letter Re: Second Request for Reconsideration for Refusal to Register a Recent Entrance to Paradise (Feb. 14, 2022).

[56] EURActiv, AI Act: Czech EU presidency makes final tweaks ahead of ambassadors’ approval (Nov. 4, 2022), available at https://www.euractiv.com/section/digital/news/ai-act-czech-eu-presidency-makes-final-tweaks-ahead-of-ambassadors-approval/.

[57] Euractiv, Last-minute changes to EU Council’s AI Act text ahead of general approach (Nov. 14, 2022), available at https://www.euractiv.com/section/digital/news/last-minute-changes-to-eu-councils-ai-act-text-ahead-of-general-approach/.

[58] EC, Artificial Intelligence Act: Council calls for promoting safe AI that respects fundamental rights (Dec. 6, 2022), available at https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/.

[59] EURActiv, Tech Brief: US draft data adequacy decision, Sweden’s (low) digital priorities (Dec. 16, 2022), available at https://www.euractiv.com/section/digital/news/tech-brief-us-draft-data-adequacy-decision-swedens-low-digital-priorities/

[60] Luca Bertuzzi, AI Act: MEPs want fundamental rights assessments, obligations for high-risk users, EURActiv (Jan. 10, 2023), available at https://www.euractiv.com/section/artificial-intelligence/news/ai-act-meps-want-fundamental-rights-assessments-obligations-for-high-risk-users/?utm_source=substack&utm_medium=email; Mike Swift, AI oversight milestones ahead for both EU and US in early 2023, officials say, Mlex (Jan. 6, 2023).

[61] Speaking at a CES Industry gathering on January 5, 2023, a policy advisor at the European Parliament said that the AI Act would include prohibitions on the use of AI for social scoring as well as “real-time, remote biometric identification” of people in public places, except for limited law enforcement purposes.

[62] EC, Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, COM(2020) 64 (Feb. 19, 2020), available at https://ec.europa.eu/info/files/commission-report-safety-and-liability-implications-ai-internet-things-and-robotics_en; see also European Commission, Questions & Answers: AI Liability Directive, available at https://ec.europa.eu/commission/presscorner/detail/en/QANDA_22_5793 (“Current national liability rules are not equipped to handle claims for damage caused by AI-enabled products and services. In fault-based liability claims, the victim has to identify whom to sue, and explain in detail the fault, the damage, and the causal link between the two. This is not always easy to do, particularly when AI is involved. Systems can oftentimes be complex, opaque and autonomous, making it excessively difficult, if not impossible, for the victim to meet this burden of proof.”)

[63] European Commission, Proposal for a Directive on adapting non contractual civil liability rules to artificial intelligence (Sept. 28, 2022), available at https://ec.europa.eu/info/files/proposal-directive-adapting-non-contractual-civil-liability-rules-artificial-intelligence_en.

[64] European Commission, Proposal for a directive of the European Parliament and of the Council on liability for defective products (Sept. 28, 2022), available at https://single-market-economy.ec.europa.eu/document/3193da9a-cecb-44ad-9a9c-7b6b23220bcd_en.

[65] The AI Liability Directive uses the same definitions as the AI Act, keeps the distinction between high-risk/non-high risk AI, recognizes the documentation and transparency requirements of the AI Act by making them operational for liability through the right to disclosure of information, and incentivizes providers/users of AI-systems to comply with their obligations under the AI Act.

[66] European Commission, Questions & Answers: AI Liability Directive, available at https://ec.europa.eu/commission/presscorner/detail/en/qanda_22_5793 (“Together with the revised Product Liability Directive, the new rules will promote trust in AI by ensuring that victims are effectively compensated if damage occurs, despite the preventive requirements of the AI Act and other safety rules.”).

[67] Regulation (EU) 2022/2065.

[68] European Parliament, Report—A9-0088/2022, REPORT on artificial intelligence in a digital age (Apr. 5, 2022), available at https://www.europarl.europa.eu/doceo/document/A-9-2022-0088_EN.html; see further Goda Naujokaityte, Parliament gives EU a push to move faster on artificial intelligence, Science Business (May 5, 2022), available at https://sciencebusiness.net/news/parliament-gives-eu-push-move-faster-artificial-intelligence.

[69] EDPS, Opinion 20/2022 (Oct. 13, 2022), available at https://edps.europa.eu/system/files/2022-10/22-10-13_edps-opinion-ai-human-rights-democracy-rule-of-law_en.pdf.

[70] Council of Europe, 2nd plenary meeting of the Committee on Artificial Intelligence (CAI), available at https://www.coe.int/en/web/artificial-intelligence/-/2nd-plenary-meeting-of-the-committee-on-artificial-intelligence.


The following Gibson Dunn lawyers prepared this client update: H. Mark Lyon, Frances Waldmann, Samantha Abrams-Widdicombe, Tony Bedel, Iman Charania, Kevin Kim*, Evan Kratzer, Brendan Krimsky, Emily Lamm, and Prachi Mistry.

Gibson Dunn’s lawyers are available to assist in addressing any questions you may have regarding these developments.  Please contact the Gibson Dunn lawyer with whom you usually work, any member of the firm’s Artificial Intelligence and Automated Systems Group, or the following authors:

H. Mark Lyon – Palo Alto (+1 650-849-5307, [email protected])
Frances A. Waldmann – Los Angeles (+1 213-229-7914, [email protected])

Please also feel free to contact any of the following practice group leaders and members:

Artificial Intelligence and Automated Systems Group:
J. Alan Bannister – New York (+1 212-351-2310, [email protected])
Patrick Doris – London (+44 (0)20 7071 4276, [email protected])
Cassandra L. Gaedt-Sheckter – Co-Chair, Palo Alto (+1 650-849-5203, [email protected])
Kai Gesing – Munich (+49 89 189 33 180, [email protected])
Joel Harrison – London (+44 (0)20 7071 4289, [email protected])
Ari Lanin – Los Angeles (+1 310-552-8581, [email protected])
Carrie M. LeRoy – Palo Alto (+1 650-849-5337, [email protected])
H. Mark Lyon – Palo Alto (+1 650-849-5307, [email protected])
Vivek Mohan – Co-Chair, Palo Alto (+1 650-849-5345, [email protected])
Alexander H. Southwell – New York (+1 212-351-3981, [email protected])
Christopher T. Timura – Washington, D.C. (+1 202-887-3690, [email protected])
Eric D. Vandevelde – Los Angeles (+1 213-229-7186, [email protected])
Michael Walther – Munich (+49 89 189 33 180, [email protected])

*Kevin Kim is a trainee solicitor working in the firm’s London office who is not yet admitted to practice law.

© 2023 Gibson, Dunn & Crutcher LLP

Attorney Advertising:  The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice. Please note, prior results do not guarantee a similar outcome.