
Robot Regulations

Henry H. Perritt, Jr.[1]*

Introduction

The author asked ChatGPT, “Do you think you should be regulated by the government?”

ChatGPT responded:

As an AI language model, I don’t have personal opinions or desires. Whether or not AI models like myself should be regulated by the government is a matter of ongoing debate among experts, policymakers, and the public.

Advocates for regulation argue that AI technologies, including language models like me, have the potential to be used in harmful ways or to amplify existing biases. They argue that regulations can help ensure responsible and ethical use of AI, protect privacy and security, and address potential risks.

On the other hand, some argue that heavy regulation could stifle innovation and hinder the development of AI technologies. They believe that a collaborative approach involving industry self-regulation, ethical guidelines, and transparency measures can be more effective in addressing the concerns associated with AI.

Ultimately, the decision on whether to regulate AI models like myself lies with policymakers, who must carefully consider the potential benefits and risks of different approaches and strike a balance between encouraging innovation and ensuring societal well-being.[2]

The author finds ChatGPT pretty smart in its answer.

Others have stronger views:

Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.

. . . .

[W]e call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[3]

The White House avers:

Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services. These problems are well documented. In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.[4]

Proposals for regulation of generative AI should be disentangled from hostility to new technologies generally, from criticisms of big social networks, and from longer-standing proposals to regulate robots. If regulation of generative AI is appropriate, it should be risk-based, narrowly focused on particular probabilities of harm to legally recognized interests, and positioned in the mainstream of government regulation so that constitutional mechanisms of accountability through judicial review are available. Crafters of regulation should be wary of empowering stakeholders to decide how the market evolves new technologies because few stakeholders are accountable through the democratic process.

Greater transparency of learning models and censorship protocols is desirable, but if censorship of unpopular viewpoints is imposed and survives constitutional attack, the blacklist used by the censors should be public and available for public comment.

After this Introduction, Part II of this article explains the technology involved in generative AI. Part III then reviews the dangers that observers have noted.

Following that, Part IV assesses various regulatory approaches available to constrain generative AI, drawing on the rich U.S. experience with various forms of regulation of other industries and activities.

Parts V–VII consider how the debate over regulating generative AI intersects the ongoing and unresolved debate over section 230 immunity for Internet intermediaries, and assess the efficacy of other means of assuring civilized generative AI. Part VIII concludes by recommending a wait-and-see approach.

Technology

The National Security Agency (NSA) has been doing serious research into natural language processing since the earliest days of cryptography and cryptanalysis.[5] As its collection capabilities have expanded, the need to automate the processing of intercepted communications has expanded commensurately. Deep language models meet the need.[6] The likelihood that much of the advance in deep language models has been funded by the NSA[7] may explain the paucity of publicly available U.S. patents concerning generative AI. The Patent Act permits the patent office to classify patent applications when necessary to protect national security.[8]

The technologies described in this Part are equally relevant to all types of generative AI: music, image, and computer-code generators, as well as text generators.

Machine Learning

“Machine learning is the ability [of a computer] to learn without explicitly being programmed.”[9] Deep learning extracts patterns from data using neural networks.[10] Generative AI uses deep learning to develop a toolkit of very finely grained semantic elements of expression which it then can reassemble into coherent output shaped by a user’s instructions.[11] In all cases, machine learning works from a huge database of training samples.[12] It uses statistical techniques and mathematical equations associated with them to deconstruct the data into quantitative features and to predict how those features relate to each other.[13]

All deep learning involves multiple layers of feature extraction.[14] In face recognition, for example, the lowest layer recognizes and extracts clusters of a half-dozen or so pixels representing edges and lines.[15] The next higher layer recognizes combinations of those as corners and curves.[16] A higher intermediate layer recognizes colors, shadows, and texture.[17] Still higher layers discern eyes, noses, and ears.[18] The highest layer recognizes facial structure.[19] Each progressively higher layer associates lower-level features with semantic aspects of pictures, with increasing levels of generality as one proceeds upward through the layers.[20]
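To make the layered idea concrete, the following short Python sketch (using NumPy, an assumption; nothing in the systems described here is tied to any particular library) hand-codes the kind of primitive feature detector that a network's lowest layer learns automatically: a small filter slid across a tiny image that responds wherever a vertical edge appears.

# Illustrative sketch only: a hand-coded "lowest layer" that detects vertical
# edges in a tiny grayscale image, the kind of primitive feature a deep
# network's first layer learns automatically. NumPy is assumed to be installed.
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel across the image and record its response."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 6x6 image: dark on the left half, bright on the right half.
image = np.array([[0, 0, 0, 1, 1, 1]] * 6, dtype=float)

# A Sobel-style kernel that responds strongly to vertical edges.
vertical_edge = np.array([[-1, 0, 1],
                          [-2, 0, 2],
                          [-1, 0, 1]], dtype=float)

response = convolve2d(image, vertical_edge)
print(response)  # large values mark where the dark-to-bright edge falls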

Supplied with enough images in its learning database having the requisite subject matter and diversity,[21] the system can learn how particular aspects of all kinds of pictures can be represented digitally and how those aspects and features can be assembled so as to create an entirely new image of something meaningful: a face, a helicopter, or a wildfire.[22]

The state-of-the-art has been built up from techniques developed for more specialized types of image processing, such as those developed for compression of image files and video streams[23] and those associated with computer-aided tomography,[24] automated animation for video games,[25] and face[26] and object recognition.[27]

Image-processing, language-processing, and music-processing deep learning systems resemble each other in that they all work their way up from the finest-grained elements of their subject matter and use statistical techniques to build a semantic tree from which new image, language, or music artifacts can be generated.[28]

The content of the learning databases is completely different, however, as are the semantic representations. The content of the databases and the particular techniques used vary depending on subject matter.[29] A face-recognition program uses a database of faces.[30] A cattle-herding program uses images of cattle.[31] A music program uses a database of musical works. A computer coding program uses sections of computer code.[32] A graphical image-creation program uses millions of images, and a text-generation program uses millions of stories, articles, and tables of information.[33]

With some types of data, it is sufficient if the machine learns to predict what should come next after a particular element of information. Given the name of a procedure, a computer coding program can predict the lines of code that should follow to implement the procedure.[34] Given instructions about key signature and chord progression, a music composition program can write the bass line for a song by predicting what sequence of chords follows the tonic chord.[35] Told the time signature and instructed to syncopate, it knows not to start a note on the first beat of the measure.[36]
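A toy illustration of this kind of prediction, written in Python with invented chord progressions, counts which chord most often follows another in a handful of training examples and then predicts the likeliest successor. Real composition systems learn far richer structure, but the underlying idea of predicting what comes next is the same.

# Toy illustration of "what chord comes next": count transitions observed in a
# handful of (invented) training progressions, then predict the most likely
# successor of a given chord. Real music models learn far richer structure.
from collections import Counter, defaultdict

training_progressions = [
    ["I", "IV", "V", "I"],
    ["I", "vi", "IV", "V"],
    ["I", "IV", "I", "V"],
]

transitions = defaultdict(Counter)
for progression in training_progressions:
    for current, nxt in zip(progression, progression[1:]):
        transitions[current][nxt] += 1

def predict_next(chord):
    """Return the chord most often observed to follow `chord` in training."""
    if chord not in transitions:
        return None
    return transitions[chord].most_common(1)[0][0]

print(predict_next("I"))   # most common successor of the tonic chord
print(predict_next("IV"))  # most common successor of the subdominant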

Predicting what comes next in an image or an essay is a bit more subtle than predicting what comes next in computer code or a musical work. Constructing an image involves, at the lowest level, predicting whether the next pixel, moving from left to right across a raster image, should be light or dark; at a higher level, it involves determining whether the next feature should be a nose or a cheek.[37] In an essay, a generative AI system not only predicts whether a period or another character comes next, it also predicts what idea should come next.[38]

The systems learn by minimizing their loss functions—their errors in computing what comes next.[39] In image recognition systems,[40] for example, some of the samples contain the target image, and others contain something else.[41] Thus, a robocowboy might be trained to recognize cattle by being presented with hundreds of thousands of images of different kinds of animals, with only those depicting cows, bulls, steers, and calves tagged.[42] A computer system uses statistical analysis implemented through neural networks to evaluate which images match the tagged exemplars.[43] The loss function represents quantitatively how many times it tags a snake as a cow and a rabbit as a bull.[44] By recalculating the values in its layers, a model minimizes its loss function and converges on a set of quantitative templates that represent cattle.[45] Machine learning techniques accommodate challenges associated with recognizing the target image despite different orientations, different lighting conditions, and different backgrounds.[46] The multiple layers in a robocowboy system allow the robot to learn what a bovine is; learn how to reorient an image so that it more easily can compare it with others; and learn what features uniquely define a particular animal. It refines algorithms and templates so that it can apply these steps to an arbitrary set of images in production systems.[47]
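The following Python sketch, using NumPy and invented two-feature "cow versus not-cow" data, shows loss minimization in miniature: a logistic classifier repeatedly measures its error and adjusts its weights downhill, so the loss shrinks as it learns. Production systems apply the same principle across millions of parameters and many layers.

# Minimal sketch of "minimizing a loss function": logistic regression by
# gradient descent on invented data. Each example has two numeric features;
# label 1 means "cow," label 0 means "not a cow." Real deep learning systems
# apply the same idea across millions of parameters and many layers.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2.0, 2.0], 0.5, (50, 2)),     # "cow" feature cluster
               rng.normal([-2.0, -2.0], 0.5, (50, 2))])  # "not cow" cluster
y = np.array([1] * 50 + [0] * 50)

w = np.zeros(2)
b = 0.0
for step in range(200):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))                # predicted probability of "cow"
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad_w = X.T @ (p - y) / len(y)             # gradient of the loss
    grad_b = np.mean(p - y)
    w -= 0.1 * grad_w                           # step downhill
    b -= 0.1 * grad_b
    if step % 50 == 0:
        print(f"step {step}: loss {loss:.3f}")  # loss falls as the model learns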

The “machine learning” label applies to the process of identifying the distinguishing features, as many as have statistical significance.[48] There is nothing magical about the analysis: it is factor analysis,[49] which has been used as a social science methodology for more than 100 years.[50] What has changed is computing power, the availability of digital storage, cheap digital cameras, and an enormous inventory of digital representations of faces.[51]

Machine learning works similarly when the domain of interest is natural language rather than images.[52] Text programs parse the text in their databases and associate words, phrases, and sentences with each other and with a hierarchy of semantic concepts.[53] Then, given prompts that trigger particular semantic concepts, the programs assemble appropriate words and phrases according to grammar rules that they also have learned from their samples to output coherent stories and essays.

It is a bottom-up learning process: first distinguishing individual words, then evaluating the frequency with which words appear together in phrases, then associating the phrases with concepts, and then building a hierarchy of concepts, a semantic tree,[54] not unlike a conventional thesaurus.[55] A loss function reflects how well the network does in predicting. Sophisticated networks feed back their results to compare them with actual data, permitting loss functions to be computed and then improved.[56]
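A minimal Python sketch of the bottom rungs of that process, using a few invented sentences, splits text into words and counts which pairs of words occur together; statistics of this kind are the raw material from which higher-level associations and the semantic tree are built.

# Sketch of the bottom rungs of bottom-up language learning: split text into
# words, then count which word pairs occur together. Higher layers of a real
# system build on statistics like these; the sample text is invented.
import re
from collections import Counter

text = ("The court granted the motion. The court denied the appeal. "
        "The judge granted the motion to dismiss.")

words = re.findall(r"[a-z]+", text.lower())
pair_counts = Counter(zip(words, words[1:]))

for pair, count in pair_counts.most_common(5):
    print(pair, count)   # e.g. ('the', 'court') appears twice, and so on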

A recent patent for generative AI[57] explains how generative AI text systems work at the technical level. This 51-page patent explains in some detail how machine learning is used to train a system for natural language output in a style that “write[s] like me.”[58] At a fundamental level, natural language analyzers and generators use an ontological network: a sophisticated, computerized thesaurus, which classifies linguistic concepts and organizes them according to their relationship with each other.[59] The result is a semantic framework for a particular language.[60] Particular words are slotted into their appropriate ontological classes, thus organizing the entire vocabulary of the language.[61]

The system parses training text into sentences,[62] uses pattern matching to classify concepts expressed in each sentence, and then assigns semantic tokens accordingly.[63] Deictic context is developed by the use of anchor words, which signify that the surrounding syntactical units should be processed as teaching examples.[64] Examples of anchor words signifying comparison include “increase[,] reduction[,] decrease[,] decline[,] rise[,] fall[,] raise[,] [and] lower.”[65] Anchor words are specified for each semantic concept. A complete system may use scores of separately patented methods.[66]
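The anchor-word idea can be sketched in a few lines of Python. The sentences below are invented, and a real system would use many more anchors and many more semantic classes, but the sketch shows how sentences containing the comparison anchors listed above might be tagged with a "comparison" semantic token.

# Sketch of anchor-word tagging: sentences containing anchor words that signal
# comparison ("increase," "decrease," and so on) are tagged with a semantic
# token. The sentences are invented; real systems use many more anchors and
# many more semantic classes.
import re

COMPARISON_ANCHORS = {"increase", "reduction", "decrease", "decline",
                      "rise", "fall", "raise", "lower"}

sentences = [
    "Revenue showed a sharp increase over the prior quarter.",
    "The committee met on Tuesday.",
    "Costs continued to decline through the fall of the year.",
]

for sentence in sentences:
    tokens = set(re.findall(r"[a-z]+", sentence.lower()))
    tag = "COMPARISON" if tokens & COMPARISON_ANCHORS else "OTHER"
    print(tag, "-", sentence)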

The most sophisticated machine learning systems employ transformers—the “T” in ChatGPT.[67] Transformers are a type of neural network architecture that “remember” what came before, maybe long before, in a sequence of words, ideas, sounds, or images.[68] Transformers enable better and more efficient prediction of “what comes next” than the common alternative, recurrent neural networks.[69] Transformers use convolutional neural networks combined with attention models.[70]
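The attention computation at the heart of a transformer layer can be sketched briefly in Python (NumPy assumed; the dimensions and values are invented). Each token's "query" is compared against every token's "key," and the resulting weights determine how much of each token's "value" is carried forward, which is how the network "remembers" earlier parts of the sequence.

# Bare-bones sketch of scaled dot-product attention, the computation at the
# heart of a transformer layer. Queries are compared against keys, and the
# resulting weights decide how much of each token's value is carried forward.
# The dimensions and random values are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
seq_len, d_model = 4, 8                      # 4 tokens, 8-dimensional vectors
Q = rng.normal(size=(seq_len, d_model))      # queries
K = rng.normal(size=(seq_len, d_model))      # keys
V = rng.normal(size=(seq_len, d_model))      # values

scores = Q @ K.T / np.sqrt(d_model)          # relevance of each token to each other token
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row sums to one
output = weights @ V                         # weighted mix of values

print(weights.round(2))  # each row shows how much one token "attends" to the others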

Deep learning models use neural-network transformer architecture to learn the characteristics of their subject matter from enormous amounts of data. MuseNet, for example, “uses the recompute and optimized kernels of Sparse Transformer to train a 72-layer network with 24 attention heads—with full attention over a context of 4096 tokens. This long context may be one reason why it is able to remember long-term structure in a piece . . . .”[71]

Pattern Matching

AI systems use pattern matching at multiple levels. They use the technique at the bottom layer of their learning stages, to detect primitive features of relevant information, such as edges in an image, or delimiters like spaces or punctuation marks in text.[72]

They also use pattern matching at the highest level of their processing to make use of information that now is slotted into position in a semantic tree.[73] Semantic pattern matching involves determining whether two artifacts have the same meaning. The artifacts may be sentences; they may be passages of music; they may be images.[74] The pattern matching system encodes the artifacts as vectors whose variables correspond to what the system has learned from a large database through machine learning techniques.[75] The variables reflect entries in a high-level semantic tree.[76] It then uses statistical measures of similarity to determine if they have the same meaning.[77]
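A minimal Python sketch of that comparison step follows. The vectors here are invented by hand rather than learned from a database, but the cosine-similarity measure is a common statistical test of whether two encoded artifacts point in the same semantic direction.

# Sketch of semantic pattern matching: artifacts encoded as vectors are
# compared with cosine similarity. The vectors here are invented by hand;
# in a real system they would be learned from a large database.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sentence_a = np.array([0.9, 0.1, 0.8, 0.0])   # e.g. "the dog chased the cat"
sentence_b = np.array([0.8, 0.2, 0.9, 0.1])   # e.g. "a dog ran after a cat"
sentence_c = np.array([0.0, 0.9, 0.1, 0.8])   # e.g. "the court granted the motion"

print(cosine_similarity(sentence_a, sentence_b))  # high: similar meaning
print(cosine_similarity(sentence_a, sentence_c))  # low: different meaning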

Face-matching programs perform pattern matching at their highest level to select the closest possible matches with prompt faces.[78] The semantic possibilities are the actual faces from the systems’ enrolled databases.[79] None of the other generative AI applications, however, select entire works from a database—even the face-matching programs do not select images from the learning database.[80] Rather, they aggregate properties that they infer from the individual learning-database entries.[81] Then they assemble new creations based on matching those properties with user prompts.[82] It is possible, of course, that someone might build a generative AI program that appropriates particular inputs and presents them in its output, but that is not the norm for how the technology works.

The same basic techniques one might use in a semantic search engine coded in the Python language[83] are used to organize information derived from large learning databases,[84] to generate questions pertinent to database information,[85] and to enable conversational computerized tutors for students.[86] The basic idea of search based on pattern matching has been used for a decade or more to improve free-text search.[87] Both Westlaw and Lexis have natural language search features.[88] The advances in text-based systems permit searching for concepts (semantic categories) rather than just literal strings of text.

Learning Databases

Large language models learn from enormous databases of text such as CC-100[89] or Pile.[90] The databases are constructed from crawling the Web, from specialized sources such as FreeLaw and PubMed, from online discussion groups like Reddit, from the Gutenberg books collection, and from Wikipedia.[91] NVIDIA uses:

1. Web pages: . . . which can include articles, blogs, and news reports.

2. Books: . . . which can include works of fiction, non-fiction, and academic texts.

3. Scientific papers: . . . from various fields, including physics, biology, chemistry, and more.

4. Social media: . . . [including] text data from social media platforms such as Twitter, Facebook, and Instagram.

5. News articles: . . . from various sources to understand current events and trends.

6. Chat logs: . . . [including] customer service interactions to better understand natural language and improve conversational abilities.[92]

Creating Expression

When a generative AI system creates, it starts at its top layers and uses pattern matching to associate user prompts with concepts contained in the top layer of its semantic tree. It then works its way down through the layers, from top to bottom, pulling increasingly fine fragments of language, music, or imagery, depending on its purpose. “The models generate responses by predicting the next likely word in response to the user’s request, and then continuing to predict each subsequent word after that.”[93]

It is important to understand that when the system works its way downward through the transformer layers, it does not simply follow the same paths that it used working its way up when it was learning. Rather, it makes paragraph, sentence, and word choices (or their equivalents in music or imagery) indicated by the values determined statistically in its learning process as to what comes next.[94] It is extremely unlikely that it would follow a path back to verbatim content in the learning database.
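The generation loop itself can be caricatured in a few lines of Python. The probability table below is invented, and a real model conditions on the entire context through its transformer layers rather than on a single preceding word, but the basic cycle of predicting what comes next and appending it is the same.

# Toy sketch of the generation loop: repeatedly predict "what comes next" and
# append it. The probability table is invented; a real model conditions on the
# entire context through its transformer layers, not just the previous word.
import random

random.seed(0)

next_word_probabilities = {
    "the":       {"court": 0.5, "plaintiff": 0.3, "robot": 0.2},
    "court":     {"granted": 0.6, "denied": 0.4},
    "granted":   {"the": 0.7, "summary": 0.3},
    "denied":    {"the": 1.0},
    "plaintiff": {"appealed": 1.0},
    "robot":     {"wrote": 1.0},
    "summary":   {"judgment": 1.0},
    "wrote":     {"the": 1.0},
    "appealed":  {"the": 1.0},
    "judgment":  {"the": 1.0},
}

def generate(prompt, length=8):
    words = [prompt]
    for _ in range(length):
        options = next_word_probabilities.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))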

Hardware

Ordinary computer memory and CPU chips[95] are incapable of handling all the computations necessary for machine learning at anything close to an acceptable speed. So deep learning systems use hundreds of specialized chips developed initially for video gaming.[96] The market leader is Nvidia, which enjoyed an 80% market share in 2020.[97] Intel and Advanced Micro Devices are aggressively moving to catch up, and new entrants like Cerebras Systems claim to be leapfrogging Nvidia in the capability of new chips.[98]

Cerebras says that its WSE-2 (Wafer Scale Engine) is the largest chip ever built—it has 2.6 trillion transistors, 40 Gigabytes (“GB”) of high performance on-wafer memory, and 850,000 AI-optimized cores to accelerate AI work.[99] It is “independently programmable and optimized for the tensor-based sparse linear operations underpinning inference and neural network training for deep learning.”[100] The scope of the chip permits programming teams to build models at large-scale without wrestling with the distributed programming necessary for GPU clusters.[101]

The Nvidia chip has 76.3 billion transistors and 18,432 CUDA cores capable of running clocks over 2.5 GHz, maintaining a 450-watt TGP electric power draw.[102] Its architecture was developed to calculate things like lighting effects in a typical videogame,[103] which requires multiple ray-tracing operations per pixel, ranging as high as 600 in Nvidia’s latest chips’ RT overdrive mode.[104]

The most sophisticated deep learning engines use hundreds or thousands of these chips.[105] That has implications for both cost and power consumption.

Particular Engines

In 2023, the largest large language models included WuDao 2.0 (Beijing Academy of Artificial Intelligence), with 1.75 trillion parameters; MT-NLG (Nvidia and Microsoft), with 530 billion parameters; GPT-3 (OpenAI), with 175 billion parameters; LaMDA (Google), with 137 billion parameters; and ESMFold (Meta AI), with 15 billion parameters.[106] These models develop statistical predictions of what text comes next. Their numbers of parameters quantify the number of factors they consider in making predictions and generating output.[107] Each parameter is a variable, the value of which the model can vary as it learns.[108]

The models typically undergo expensive database and computationally intensive basic training by their developers and then a simpler “fine-tuning” by customers on their own, much smaller databases.[109]
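The division of labor between expensive basic training and inexpensive fine-tuning can be sketched conceptually in Python (NumPy assumed, all data invented): the "base model" is represented by frozen weights that are never updated, and the customer's fine-tuning pass adjusts only a small additional set of parameters on its own small dataset.

# Conceptual sketch of pre-training versus fine-tuning: the expensive base
# model is represented by frozen weights producing generic "features"; the
# customer's fine-tuning step trains only a small additional layer on its own
# (invented) data. Real fine-tuning adjusts many more parameters.
import numpy as np

rng = np.random.default_rng(2)

# "Base model": frozen weights standing in for expensive pre-training.
base_weights = rng.normal(size=(10, 4))          # never updated below

def base_features(x):
    return np.tanh(x @ base_weights)             # generic learned representation

# Customer's small fine-tuning dataset: 20 examples with binary labels.
X_customer = rng.normal(size=(20, 10))
y_customer = (X_customer[:, 0] > 0).astype(float)

head = np.zeros(4)                               # the only trainable parameters
for _ in range(300):
    feats = base_features(X_customer)
    p = 1.0 / (1.0 + np.exp(-(feats @ head)))
    head -= 0.1 * feats.T @ (p - y_customer) / len(y_customer)

preds = (1.0 / (1.0 + np.exp(-(base_features(X_customer) @ head))) > 0.5).astype(float)
print("fine-tuned accuracy on the customer's own data:", np.mean(preds == y_customer))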

A Censor Transformer Layer

The architecture of neural network transformers permits inclusion of a transformer layer that would be linked to a database of prohibited or cautionary subjects. When the other layers of an expressive robot are about to say something that the censor layer says it should not, the censor layer would intervene.
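At its crudest, the intervention might look like the following Python sketch, in which finished output is checked against an invented blocklist of prohibited subjects and withheld if it matches. A censor layer built into the network itself would operate on learned representations during generation rather than on finished text, but the effect on the user is similar.

# Crude sketch of an output-stage censor: finished text is checked against a
# blocklist of prohibited subjects and withheld if it matches. A censor layer
# built into the network would act on learned representations during
# generation rather than on finished text; the blocklist here is invented.
BLOCKLIST = {"how to build a weapon", "targeted harassment"}

def censor(generated_text):
    lowered = generated_text.lower()
    for subject in BLOCKLIST:
        if subject in lowered:
            return "[response withheld by censor layer]"
    return generated_text

print(censor("Here is a short essay about maritime law."))
print(censor("Step one of how to build a weapon is..."))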

This kind of censorship already is in operation in ChatGPT[110] and Bard. ChatGPT refuses outright to provide sources doubting global warming, while readily providing sources expressing concern about global warming. Likewise, it refuses to provide links to anti-vaccine groups.[111]

Other Robots

Robotic automation is occurring rapidly throughout the economy. Hardly a day passes without a news story about self-driving cars.[112] The aviation industry has met a mandate to equip all its aircraft with ADS-B technology by 2020, which enables every aircraft to transmit its position to other aircraft and the air traffic control system every second.[113] Amazon and other e-commerce providers are aggressively pushing for a low-level automated air navigation system that will permit small drones to deliver merchandise,[114] and the Federal Aviation Administration (“FAA”) has adopted a rule that permits drones to be flown with private-sector-provided airspace management systems.[115] The railroads have implemented a Congressionally mandated nationwide positive train control (PTC) system,[116] which permits passenger and freight trains, signals and switches, maintenance personnel, and dispatchers to exchange data on a real-time basis to avoid collisions and derailments.[117] Ship systems technology has reached the point where self-navigating seagoing vessels are entirely feasible.[118]

None of these systems function perfectly. Mishaps will occur, causing damage to property and loss of life. The high level of automation poses new challenges for tort law in deciding whether to blame man or machine. If the machine is to blame, how does the law translate that culpability into meaningful judgments?[119]

These robots are different in a fundamental way from generative AI robots, even though they all process information. The others, when they run amok, cause automobiles, airplanes, or railroad trains to crash, resulting in property damage and personal injury. Generative AI programs, when they run amok, rarely cause personal injury or property damage; they harm reputations, interfere with business relations, and cause emotional distress.

The Fears

Public discourse has been active in identifying the potential dangers of generative AI. The clamor to regulate generative AI is intellectually an extension of the clamor to regulate robots more generally, which has been going on for a decade or more. It also is a natural extension of the criticism and calls for regulation of social media, focused especially on Mark Zuckerberg in Congressional hearings.[120]

Copyright Infringement

Because generative AI learns from huge databases of material, some of which may be copyrighted, the neural network transformers behind it may cause it to infringe copyright when it generates new material. Indeed, two lawsuits have already been filed, one concerning the generation of computer programming code,[121] and the other concerning the generation of images.[122]

Copyright owners will have a hard time winning these lawsuits, however, because generative AI does not simply retrieve pre-existing artifacts from learning databases and output them; it is not a search engine on steroids. Instead, it learns ideas and facts from the material in its databases and puts them together in new forms. Rarely would that constitute infringement of the reproduction, derivative work, public display, public performance, or public distribution rights under the Copyright Act. Moreover, copyright owners will not get very far with infringement litigation unless they identify specific copyrighted works of theirs that are infringed by specific outputs of the generative AI system.[123]

Defamation

Considerable evidence exists that generative AI systems more or less randomly get facts wrong or make them up entirely.[124] Engineers knowledgeable about the systems refer to this as hallucination.[125] In some cases, these misstatements might be defamatory, invade privacy, create emotional distress, or interfere with contractual relations.[126] When the victim of the misstatements can establish the elements of one of these torts, she can recover damages.

On the other hand, it is not clear that remedies exist for misstatements of fact that are merely embarrassing or which lead those relying on them into economically damaging decisions.

Reducing the risk of misstatements technologically is challenging because generative AI systems are designed to hallucinate—to make up things that have never existed before. Only some kinds of hallucination are troublesome.[127]

Hate Speech and Disinformation

Many critics and commentators fear that generative AI will perpetuate and amplify hate speech and misinformation that proliferate on the Web and other Internet-accessible resources. Just as a child who grows up in a bigoted family environment, hearing constant racist, sexist, and xenophobic remarks, learns to view such expression as normal and legitimate, so also will a deep learning neural network transformer learn that what it finds in its database is acceptable. It may well make use of offensive content without any humanlike self-censorship.

It is not clear how generative AI magnifies the threat extant when anyone can sit at a keyboard or speak into a dictation transcription program and tell lies and invent facts. Indeed, this threat has existed ever since a monk could take up a quill pen or an apprentice could set lead type.

Nor does it seem that generative AI amplifies disinformation beyond that possible with instant replication on small computers and through Internet resources like Facebook or Twitter.

Bias

Many warnings about the perils of generative AI express concern about racial and other kinds of bias.[128] Critics are especially concerned when robots are used to screen applications for access to important resources such as employment, housing, and education. Much expression reflects racial and ethnic biases. Civil liberties advocates are concerned that generative AI systems will bias their outputs according to historical views of race, sex, sexual orientation, nationality, and religion rather than reflecting more enlightened egalitarian views.

“Bias is often baked into the outcomes the AI is asked to predict. Likewise, bias is in the data used to train the AI—data that is often discriminatory or unrepresentative for people of color, women, or other marginalized groups—and can rear its head throughout the AI’s design, development, implementation, and use.”[129]

The National Institute of Standards and Technology (NIST) put it this way:

Systemic and implicit biases such as racism and other forms of discrimination can inadvertently manifest in AI through the data used in training, as well as through the institutional policies and practices underlying how AI is commissioned, developed, deployed, and used. Statistical/algorithmic and human cognitive and perceptual biases enter the engineering and modeling processes themselves, and an inability to properly validate model performance leaves these biases exposed during deployment. These biases collide with the cognitive biases of the individuals interacting with the AI systems as users, experts in the loop, or other decision makers.[130]

Beyond that, a variety of movements, not least of which is the Critical Race Theory Movement, seeks to rewrite history to give greater prominence to certain historically disfavored groups and to accounts of misdeeds of those in control.[131] The learning databases for generative AI reflect conventional views of history and thus cause programs that learn from them to mute newer interpretations.

Hallucination and Invented Facts

“While ChatGPT can write credible scientific essays, the data it generates is a mix of true and completely fabricated ones.”[132] This phenomenon, known as artificial hallucination, is more likely to arise with generative AI systems trained on large amounts of unsupervised data.[133]

Confusion might be a better term than hallucination. Generative AI systems do not make up things completely randomly. They inappropriately associate items that are closely related in other contexts.

This undesired phenomenon arises from the way that deep learning systems use latent spaces to organize ideas. Related words and phrases receive numerical index values close to each other. Apples thus are grouped with oranges and bananas; clouds and blue skies are associated with rain and snow. When very large data sets are used to train systems, a great many concepts are grouped together because the system has seen so many examples of everything.[134]

Then, when the user prompts the system for output, it begins constructing the output by associating words, phrases, and concepts in the prompt with those that it already knows. Information artifacts about people with similar names are likely to be grouped together. Details regarding investigations are likely to be grouped together. So the system, eager to please its prompter, reaches for information that it thinks is related to the prompt but which in fact may not be. The system cannot tell the difference.

The challenge in preventing hallucination is that hallucination is what generative AI systems are designed to do, all the time.[135]

Labor Market Disruption

Authors, artists, musicians, reporters, and others in the creative professions join technology critics in fearing that deployment of generative AI will throw millions of people out of work, as robots use their capability to generate mundane stories about corporate financial results, athletic contests, celebrities, self-help, and self-improvement. Artists and animators fear that generative AI will replace them in creating movie trailers and animated movies, and musicians fear that robots will write the country songs and movie background music that are their bread and butter.[136]

The rhetorical tumult over generative AI being a job destroyer is grossly overblown. The cost of the technology and the unreliability of its results will limit its application. As with other potentially transformative technologies, it will create jobs as well as making others obsolete. Its macroeconomic effect in raising overall labor productivity will create new occupations and new demands for leisure activities.[137]

Regulatory Approaches

Government regulation of new technologies is hardly novel. The English Star Chamber in the time of Henry VIII licensed printing presses because of the concern that the new technology of printing could disrupt established religious and political institutions.[138]

More recently, the United States Congress has embraced scores of regulatory approaches. Sometimes, products or activities must be certified before they can be sold or conducted. This is the case with airworthiness certification of aircraft, Federal Communications Commission (FCC) certification of radio transmitters,[139] and Food and Drug Administration (FDA) approval of drugs.[140] As a precondition to approval by the FDA, applicants must disclose reports of safety investigations, lists of drug components, a description of the methods used for manufacturing, processing, and packaging the drug, samples, and proposed consumer labels.[141]

Closely related to that are licensing regimes, which prohibit engaging in certain activities such as flying,[142] operating steamboats, or operating radio transmitters without government licenses.[143] Some licensing regimes focus on qualifications of the holder; some are mechanisms for allocating scarce resources.

Regulators often establish standards, and either impose them as part of licensing regimes, such as in Environmental Protection Agency (EPA) water effluent discharge permits, offer them for voluntary adoption, or publish them as a starting point for per-se negligence liability in lawsuits. NIST’s AI risk assessment guidelines[144] for generative AI are an example.

Some regimes impose detailed prescriptive rules, such as the Federal Aviation Regulations,[145] the Federal Railroad Administration regulations for railroads,[146] and state regulations for embalmers[147] and music therapists.[148]

Censorship by government agencies or private entities has occurred in wartime and in the entertainment industries.

Governmental largess, in the form of subsidies or contracts, often is conditioned on compliance with certain conditions, as in the case of the Rehabilitation Act and Title IX of the Education Amendments of 1972.

Inspection of private facilities and activities is a regular feature of regulation, as in the case of the Occupational Safety and Health Act (OSHA),[149] and the Fair Labor Standards Act (FLSA).[150]

Legally mandated disclosure is required by the Employee Retirement Income Security Act (ERISA)[151] and by various warning requirements imposed by the FDA.

Some regulatory regimes simply enforce private bargains. This is the case with ERISA[152] and Federal Trade Commission (FTC) information privacy. [153]

Sometimes, the government imposes remediation and compensation obligations as under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA).[154]

Very often, regulatory regimes obligate those covered to keep certain records, as under OSHA,[155] FLSA,[156] and the ubiquitous Internal Revenue Code (IRC).[157]

In other cases, the government provides infrastructure and imposes conditions for access to it. FAA airspace, public lands regulated by the Department of Interior, mail service, public education, and Defense Advanced Research Projects Agency (DARPA) research and development results are examples.[158]

Proposals

The author asked ChatGPT: “What form should regulation take?”[159] ChatGPT said:

The specific form of regulation for AI models like myself can vary depending on the specific concerns and goals of policymakers. Here are some common areas that regulation could address:

1. Transparency and Explainability: Regulations could require AI models to provide clear explanations or justifications for their outputs, especially in critical domains like healthcare or finance. This can help increase trust and accountability, allowing users and affected individuals to understand how decisions are being made.

2. Data Privacy and Security: Regulations can focus on protecting personal data used by AI models, ensuring proper consent and secure storage of sensitive information. This can involve compliance with existing data protection laws, like the General Data Protection Regulation (GDPR), or the development of specific regulations for AI systems.

3. Fairness and Bias Mitigation: To address biases in AI models, regulations can require companies to demonstrate fairness in the design and deployment of their AI systems. This can involve auditing and testing for biases, ensuring diverse and representative training data, and monitoring for discriminatory impacts.

4. Accountability and Liability: Regulations can establish frameworks for determining legal responsibility when AI systems cause harm or make erroneous decisions. This can help define liability for AI developers, service providers, or end-users, depending on the specific context.

5. Ethical Guidelines and Standards: Governments can establish ethical guidelines and best practices for the development and use of AI. These guidelines can cover issues such as transparency, fairness, safety, and the avoidance of malicious uses of AI technology.

It’s important to note that any regulatory framework for AI should be flexible and adaptable, as technology evolves rapidly. It should also involve collaboration between governments, industry experts, researchers, and other stakeholders to ensure a comprehensive and balanced approach to regulation.

One commentator suggests three elements to AI regulation. “First, regulation should enforce more transparency on the part of developers. That would mean transparency about when AI is being used, as in the case of companies using AI algorithms to sort through job or rental applications . . . .”[160] Second, developers should “make available to researchers the data on which chat bots are trained.”[161] Third, “AI should declare that it is AI.”[162]

Most current proposals for regulation of generative AI reflect earlier proposals for regulation of robots. Now, as then, the proposals are vague, talking about “build[ing] AI systems that can interact with human norms, rules, and law,” and “build[ing] ‘a novel regulatory structure—third-party regulatory markets—to spur the development . . . [of regulatory technologies].’”[163] There is little about the content of regulation or how it relates to specific risks.[164] One 2017 assessment noted that the United States and Europe are more likely to regulate to protect worker safety, while China is more likely to regulate to provide advantages to robots originating in China.[165]

“[T]he White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.”[166] These principles are (1) “Safe and Effective Systems”; (2) “Algorithmic Discrimination Protection”; (3) “Data Privacy”; (4) “Notice and Explanation”; and (5) “Human Alternatives, Consideration and Fallback.”[167]

As another commenter reported:

The rapid deployment of increasingly powerful AI tools merits scrutiny from governments and civil society, but we must also be wary of conceding the terms of this scrutiny, notably what is ethical or desirable, to the makers of technology. A more robust deliberative process that invites a broad range of experiences and expertise—from civil rights advocates to educators to labor unions—into the conversation will ensure that when the moratorium ends, we’re left with a richer understanding of the social stake in our collective future, rather than a narrower one.[168]

As to generative AI, a number of regulatory tools might be employed. They are not mutually exclusive and, indeed, usually operate in tandem with one or more others.

President Biden’s October 2023 executive order purporting to mandate federal regulation of AI is similarly vague.[169]

Bottlenecks

Students of regulation long have understood the importance of identifying bottlenecks in the flow of whatever is to be regulated. In 1983, Massachusetts Institute of Technology (“MIT”) political science professor Ithiel de Sola Pool explained that effective regulation must focus on bottlenecks in the regulated activities in order to be effective.[170] As an example, he discussed how copyright regulation had always focused more on bottlenecks like printers, publishers, and booksellers, rather than end users.[171] The number of end users is so large, and their assets so small, that chasing them is infeasible. Instead, chasing a smaller number of bottlenecks with more assets is far more efficient. He advocated applying this principle to regulating emerging computer networking technologies.[172]

It is more efficient to regulate publishers and bookstores than individual authors, and search engines rather than those whose posts they retrieve and display. The Digital Millennium Copyright Act very much reflects this understanding by focusing on intermediaries who carry copyright-infringing material rather than leaving the copyright holders to their remedies against the direct infringers.[173]

Generative AI involves huge economies of scale because of the expense and difficulty of assembling and maintaining the large and diverse databases necessary for the machine learning process.[174] The impact of those economies of scale can be seen in the fact that there are only a handful of top generative AI engines.[175] Any regulatory regime for generative AI will be more effective and efficient if it focuses on these big engines rather than trying to regulate the conduct of thousands or millions of individual users.

Definition

Any effort to regulate generative AI must have a workable definition to determine the scope of the regulation. Fashioning such a definition is not trivial. Most computer systems assist human beings in making decisions. So definitions that focus on decision-making roles are overbroad. Presumably, generative AI regulation proponents do not want to regulate every spreadsheet that a business uses to develop, implement, and revise its business plan.

Computerized statistical models, including multiple regression analysis, are mainstays of social science research and marketing.[176] Their underlying statistical methods have been used for a century or more, and computerized applications are as old as computer technology itself.[177]

Sophisticated computer modeling of engineering phenomena, such as the behavior of structures, fluid flow around aircraft wings, helicopter rotor blades, and the hulls of ships, has been in wide use for sixty years.[178] So a definition based on the use of statistical modeling similarly is overbroad.

Generative AI systems create expression relatively autonomously, once they are trained. But other much simpler systems create expression as well, including basic word processing software and computer program development environments that make available preprogrammed libraries of procedures.[179]

Neural networks and transformers using them are relatively new software architectures for doing massive statistical calculation. Machine learning also is a relatively new technique enabled by the new software architectures, greatly increased computing power, and massive, ubiquitous data sources available through the Internet. It might be possible to craft a definition that focuses regulation on the use of large databases and transformer software technology to enable machine learning.

Such a definition might be expressed as follows:

A generative AI system is a computer system or collection of systems that:

  1. engages in machine learning by application of neural network transformers to large databases with more than 100,000 data entries, and
  2. uses the results of that machine learning to generate new expressive output according to relatively simple and short prompts by a user.

Outright Bans or Moratoria

The most basic regulatory stratagem is an outright ban or moratorium[180] on a particular activity. That is what the March 2023 open letter proposes.[181] In a few cases, bans are permanent, such as the one covering nuclear explosives. More often, legislatures or regulatory agencies impose moratoria until a defined event occurs. That might be development of a more comprehensive regulatory program; it might be approval of applications for licenses or certifications on a case-by-case basis. In a sense, that is the way radio and television regulation works: section 301 of the Communications Act of 1934 bans transmitting electromagnetic signals unless both the station and the operator have been licensed by the FCC.[182]

A far more aggressive approach for regulating generative AI would prohibit the use of certain types of algorithms until a federal agency determines that they are “safe and effective” and could be controlled to prevent their misuse.[183] Andrew Tutt proposes that the agency itself determine which algorithms are so “opaque, complex, and dangerous” as to be subject to “regulatory scrutiny.”[184] He gives examples of possible performance standards: requiring self-driving cars to be involved in fewer than a specified number of accidents per vehicle mile, requiring stock trading algorithms to predict and report volatility of returns on investment, and requiring job applicant screening algorithms not to underrepresent any protected class by more than 20%.[185] Tutt’s basic approach is so broad that it is hard to understand how far it would reach. Would it, for example, prohibit doing tutorials for PyTorch, available from pytorch.org/tutorials?

Sam Altman argued against a moratorium for generative AI, saying that OpenAI’s “iterative development” with wide public involvement helps everyone understand potential threats.[186]

John Villasenor identifies a number of problems with proposals for moratoria: they would delay the benefits of AI; they would be legally dubious, raising questions of legislative power and the First Amendment; and they would be difficult to enforce effectively.[187]

Licensing and Certification

Sam Altman said, “[T]he U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements.”[188]

Licensing and registration are common regulatory tools. They can be employed through a mixture of government pre-approval requirements and self-certification. For example, the Federal Aviation Act says that a “person may not . . . operate a civil aircraft in air commerce without an airworthiness certificate in effect or in violation of a term of the certificate.”[189] The statute obligates the FAA Administrator to prescribe aircraft airworthiness rules: “minimum standards required in the interest of safety for appliances and for the design, material, construction, quality of work, and performance of aircraft, aircraft engines, and propellers.”[190] The FAA relies on preapproval to certificate aircraft, operators, and airmen.

But self-certification is the prevailing way to assure compliance with safety standards in non-aviation industries, such as motor vehicles, consumer products, and consumer electronics.

No one may sell or distribute a motor vehicle unless it complies with federal safety standards.[191] The Secretary of Transportation is authorized to prescribe standards,[192] and:

may require a manufacturer of a motor vehicle or motor vehicle equipment to keep records, and a manufacturer, distributor, dealer, or rental company to make reports, to enable the Secretary to decide whether the manufacturer, distributor, dealer, or rental company has complied or is complying with this chapter or a regulation prescribed or order issued under this chapter.[193]

Manufacturers must certify compliance to the next entity in the stream of commerce—for example, to dealers[194]—and must also affix a certificate of compliance to the vehicle.[195]

The Consumer Product Safety Act[196] establishes an independent regulatory commission, the Consumer Products Safety Commission (CPSC), authorized to develop “uniform standards for consumer products” and to reduce conflict between federal regulation and state and local regulation.[197] It is unlawful to “sell, offer for sale, manufacture for sale, distribute in commerce, or import into the United States any consumer product . . . not in conformity with the applicable consumer product safety rule.”[198]

A consumer product subject to the applicable safety rule requires certification.[199] The certification includes testing of the product to ensure conformity with the safety regulations.[200] Specific consumer safety rules define certification procedures for the products they cover. Walk-behind lawn mower manufacturers and importers must certify their safety compliance by labeling their products accordingly.[201] The manufacturer or importer must issue certificates of safety based on a “reasonable testing program.”[202] Unlike children’s products, such as cribs, walk-behind lawn mowers do not require certification at an accredited laboratory.[203]

The Communications Act of 1934 prohibits transmitting electromagnetic emissions (radio signals) without station and operator licenses from the FCC.[204] Licensees must use equipment meeting FCC standards.[205] An electronic manufacturer can self-certify compliance with FCC standards for emissions of its product through a Supplier’s Declaration of Conformity,[206] or it can seek Certification[207] prior to marketing, pursuant to FCC requirements. All methods require manufacturers to test their products and take the necessary measurements to “ensure that the equipment complies with the appropriate technical standards.”[208] Certification, however, requires the applicant to submit measurements and test data for approval.[209] The FCC does not require manufacturers to send their products for approval unless specifically requested,[210] but the FCC requests test measurements for Certification. Whether a manufacturer requires a Declaration of Conformity (DoC) or Certification depends on the class of radiator and type of electronic device.

A regulatory regime for generative AI certainly can be fashioned that prohibits use of the technology until compliance with certain standards has been certified, either by a governmental approval process before deployment, or through self-certification.

Certification, preapproval, and licensing approaches presuppose the existence of standards to guide decisions as to what systems and what capabilities are acceptable. So any proposal for certification, licensing, or approval must be accompanied by a practicable proposal for standards development.

In the absence of clear, risk-based standards and careful definitions, pre-release certification requirements turn into moratoria or toothless rhetoric.

Disclosure, Transparency, and Audit Mandates

A much lighter regulatory touch requires disclosure by system designers. This is a particularly popular suggestion in the debate over generative AI regulation, usually expressed as assuring transparency. The simplest form of disclosure is labeling, similar to that required for foods by the Federal Food, Drug, and Cosmetic Act.[211] Generative AI systems can be required to disclose that they are machines rather than real human beings. This would follow the example of FDA food labeling and consumer product labeling requirements.

The justification for mandating disclosure usually focuses on the content of the databases from which generative AI systems learn.[212] If researchers, observers, and critics know what databases have been used in the learning process, they are better positioned to pick apart the learning that occurs and to isolate instances of infringement of intellectual property or undesirable viewpoints or sources to which the machine gives prominence.

Disclosure of the design of the neural network transformers and the algorithms they use permits scrutiny of how loss functions are managed and optimized and how censorship layers are implemented, if at all.

Transparency also means that developers and users should disclose any blacklists or censor layers they use.

Requirements for transparency usually contemplate some mechanism for scrutinizing whatever is disclosed. Formal scrutiny takes the form of audits. The National Telecommunications and Information Administration (NTIA) issued a request for comment on April 13, 2023.[213] The request:

focuse[d] on self-regulatory, regulatory, and other measures and policies that are designed to provide reliable evidence to external stakeholders—that is, to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy. NTIA will rely on these comments, along with other public engagements on this topic, to draft and issue a report on AI accountability policy development, focusing especially on the AI assurance ecosystem.[214]

Transparency can be a regulatory requirement without more; it need not be accompanied by any sort of licensing, approval, or certification requirement. Developers and users of generative AI systems would be obligated to disclose specific characteristics and features of their systems, but any scrutiny of that information would take place only through public discourse, and not formal legal review or action.

Most of the proposals for regulation of generative AI have called for increased transparency of the development of models through machine learning, of the databases used, and of precautionary measures implemented by developers and users. Notably, however, Sam Altman, in his May 2023 testimony before the Senate Judiciary Committee, did not embrace transparency as a regulatory goal, even though he went considerably further than other industry spokesmen in embracing the idea of regulation in general, through a new agency.[215]

AI innovators have generally been wary of transparency requirements because they want to protect their trade secrets in methods and strategies that give them a competitive advantage in the marketplace. Trade secret protection is only available so long as they keep these methods secret.[216] They may be entitled to copyright protection in their specific computer code, but copyright does not extend to ideas or algorithms.[217] Many of the systems may be entitled to patent protection, and that would indeed protect the generative AI entrepreneurs against misappropriation of their methods and ideas. But obtaining a patent costs tens of thousands of dollars, and it typically takes two to three years before one is granted.

So, despite the frequent calls for more transparency, it is not likely that the industry will jump on the bandwagon of exposing their algorithms and databases.

Antidiscrimination Screening

Antibias regulation can take the form of requiring generative AI developers and users to disclose statistics about how they treat and report on various identity classes, including acceptances and denials by race, sex, ethnicity, and national origin. Then, the mechanism could be expanded to impose something like disparate impact dispute resolution under Title VII of the Civil Rights Act of 1964, where if a challenger shows statistical disparities in decision-making, the burden of proof shifts to the proponent of the decision-making process to show that it uses nondiscriminatory legitimate factors rather than prohibited factors such as race, sex, ethnicity, or national origin.[218]
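The statistics such a regime would require can be illustrated with a short Python sketch. The applicant counts are invented; the ratio computed is the kind of comparison the EEOC's "four-fifths" rule of thumb uses to flag possible adverse impact under Title VII.

# Sketch of the statistics a disparate-impact disclosure regime would require:
# selection rates by group and the ratio of each group's rate to the highest
# rate (the EEOC's "four-fifths" rule of thumb flags ratios below 0.8).
# The applicant counts are invented for illustration.
outcomes = {
    # group: (applicants screened in by the AI system, total applicants)
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {g: selected / total for g, (selected, total) in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")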

The Equal Employment Opportunity Commission issued guidance on the use of generative AI in compliance with Title VII[219] and the Americans with Disabilities Act[220] in May 2023. Reinforcing the conclusions of this Section, both documents say, “the document applies principles already established in the Title VII statutory provisions as well as previously issued guidance.” Both simply reiterate requirements under Title VII and the ADA.

NIST prepared a report on how to address bias in AI systems.[221] It referred to disparate impact analysis as one useful approach.[222] But it opposes relying on technical solutions alone[223] and opposes what it calls “technochauvinism.”[224] It opines that “[t]he most accurate model is not necessarily the one with the least harmful impact.”[225]

The question of dataset fit or suitability requires attention to three factors: statistical methods for mitigating representation issues; processes to account for the socio-technical context in which the application is being deployed; and awareness of the interaction of human factors with the AI technical system at all stages of the AI lifecycle. When datasets are available, the set of metrics for demonstrating fairness are many, context-specific, and unable to be reduced to a concise mathematical definition.[226]

It recommends that model developers work with organizations deploying the models to update models and recalibrate them based on performance.[227] It cautions that algorithmic censorship can introduce bias.[228]

It recommends monitoring systems for bias, mechanisms for recourse when it is detected, and institutional structures for risk management.[229]

The seventy-seven-page report spends more effort reiterating the problem than suggesting solutions. Moreover, its emphasis on deeply embedded societal and structural bias makes it difficult to fashion mitigation in a largely technology-centric discourse. One can infer a preference by the authors that AI not be deployed at all.

A useful approach to limiting bias would focus on how generative AI systems are used. When they are used to make decisions allocating access to scarce resources such as jobs, housing, and public benefits, the systems should be subjected to the same scrutiny that already is required of selecting and testing measures under the Civil Rights Act of 1964.[230] No need exists for reinventing anti-discrimination rules out of whole cloth.

In broader use contexts, it is not clear that use of generative AI harms any protected legal interest. Whether a fictional account adequately balances ethnicities and faithfully presents them, or whether an essay gives an account of history or current affairs that a particular reader believes is balanced and adequately reflective of particular views, is the stuff of public criticism and debate. Governments in free societies generally have left it alone.

Mandated Training

Some regulatory systems, such as those imposed by the U.S. Department of Education under Title IX of the Education Amendments of 1972, mandate training by regulated entities.[231]

Training similarly could be required of developers and users of generative AI systems. Any training requirement, however, would have to be informed by the design features to be avoided and the risks of specific uses.

Censorship

The logical response to criticism of generative AI focused on hate speech and misinformation is censorship. Censorship is not a novel activity. It has been commonplace in wartime[232] and a routine tool used by authoritarian regimes to suppress dissent. The U.S. government currently restricts information regarding biosecurity and nuclear security.[233] Social media firms engage in it.

The U.S. movie industry used a decency rating system for many years,[234] the FCC imposed decency restrictions on television and radio stations,[235] and more recently, Facebook and other social media services censored undesirable content.[236] The Facebook Oversight Board adjudicates appeals of Facebook's content decisions.[237] Its charter[238] gives standing to request review by the content provider or by its accuser.[239] As of September 3, 2023, the board had published forty-four decisions.[240] In one decision, South Africa Slurs, it upheld a decision by Facebook to remove a post on South African society as violative of its hate speech standard.[241] In another, Wampum Belt, it overturned a Facebook decision to remove a post by an Indigenous artist that had been alleged to violate the hate speech standard.[242]

Censorship regulation requires deciding what should be censored, who should do it, and how it should be done.

One charged with censoring robot output, whether a public official or an employee of a private firm, can start with a list of words and phrases that have been deemed by a significant part of the population to be hateful, offensive, or dangerously untrue. The generality of the list can be broadened to include concepts and positions.[243]

Censorship has never been without controversy, however. Developing such a list would be daunting. Some on the left would clamor to include words like "field," which the school of social work at the University of Southern California famously decided should be banned.[244] The right would march against political correctness and "wokeness." Elon Musk is quoted as saying, "The danger of training AI to be woke—in other words, lie—is deadly."[245] Executives of the major generative AI firms are under constant political pressure to censor unpopular expression[246]—and not to censor it.[247]

But quite apart from the political conflicts over what speech should be allowed, censorship quickly runs up hard against the limits of the First Amendment to the United States Constitution. The First Amendment does not leave much room for censorship of speech on the grounds that it is offensive or false. In R.A.V. v. City of St. Paul, the Supreme Court invalidated a city ordinance that prohibited displays that would “arouse[] anger, alarm or resentment in others.”[248]

Even in a public school environment, where the state interest in regulating speech and conduct is thought to be substantial, the limits are narrow. In Saxe v. State College Area School District, the court said:

In any event, we need not map the precise boundary between permissible anti-discrimination legislation and impermissible restrictions on First Amendment rights today. Assuming for present purposes that the federal anti-discrimination laws are constitutional in all of their applications to pure speech, we note that the SCASD Policy’s reach is considerably broader.

For one thing, the Policy prohibits harassment based on personal characteristics that are not protected under federal law. Titles VI and IX, taken together with the other relevant federal statutes, cover only harassment based on sex, race, color, national origin, age and disability. The Policy, in contrast, is much broader, reaching, at the extreme, a catch-all category of “other personal characteristics” (which, the Policy states, includes things like “clothing,” “appearance,” “hobbies and values,” and “social skills”). Insofar as the policy attempts to prevent students from making negative comments about each others’ [sic] “appearance,” “clothing,” and “social skills,” it may be brave, futile, or merely silly. But attempting to proscribe negative comments about “values,” as that term is commonly used today, is something else altogether. By prohibiting disparaging speech directed at a person’s “values,” the Policy strikes at the heart of moral and political discourse—the lifeblood of constitutional self government (and democratic education) and the core concern of the First Amendment. That speech about “values” may offend is not cause for its prohibition, but rather the reason for its protection: “a principal ‘function of free speech under our system of government is to invite dispute. It may indeed best serve its high purpose when it induces a condition of unrest, creates dissatisfaction with conditions as they are, or even stirs people to anger.’” Texas v. Johnson, 491 U.S. 397, 408–09, 109 S.Ct. 2533, 105 L.Ed.2d 342 (1989) (quoting Terminiello v. Chicago, 337 U.S. 1, 4, 69 S.Ct. 894, 93 L.Ed. 1131 (1949)). No court or legislature has ever suggested that unwelcome speech directed at another’s “values” may be prohibited under the rubric of anti-discrimination.[249]

In Reno v. American Civil Liberties Union, the Supreme Court, striking down the original version of the Communications Decency Act as violative of the First Amendment, noted that its vague terms such as "indecent" and "patently offensive" could be read to cover "a serious discussion about birth control practices, homosexuality, the First Amendment issues raised by the Appendix to our Pacifica opinion, or the consequences of prison rape."[250] Twitter's exclusion of tweets making allegations of impropriety against Joe Biden's son Hunter does not appear justifiable on any ground except a conclusion by Twitter that the tweets contain false information.[251] Whether the claims against Hunter Biden were false, like the question of whether the allegations of collaboration between Russia and the Trump campaign were true, is a legitimate subject for robust public debate. The spirit of the First Amendment is that the public will decide which opposing factual claim is valid.

What is "offensive" exists only in the eye of the beholder. Any regime imposing obligations to remove offensive content imposes high transaction costs on decision-makers, who must weigh sharply conflicting evidentiary records. Such case-by-case adjudication is far beyond what is feasible for a platform like YouTube, to which users upload hundreds of hours of video every minute. So the censorship regime becomes one whose barriers to information are defined by the most thin-skinned—whoever wants to make a protest or file a challenge.

That means that proposals to censor the output of generative AI end up back in the lap of the private sector, which already is in the eye of the section 230 hurricane.[252]

The U.S. Department of Homeland Security (DHS) announced the establishment of a Disinformation Governance Board on April 27, 2022. The Board was to collect best practices for DHS units concerned about defending against disinformation threats. It would advise the Secretary of DHS on how its agencies should conduct analysis of online content.[253] After howls of protest from both left and right,[254] the Board was dissolved on August 24, 2022.[255]

Once a decision is made to censor, carrying out the decision is not overwhelming. A list of words and phrases, concepts and positions deemed to be hateful, offensive, or dangerously untrue can feed a censor layer in the neural network transformer as described in Section II.G, or it can be used more conventionally as a standard for licensing or certification. It might be appropriate to give notice and accept public comment on the contents of the list, under the Administrative Procedure Act.
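As a purely illustrative sketch of the simpler, more conventional use of such a list, the fragment below screens generated text against a blocklist before it is released. The list entries, function names, and matching strategy are hypothetical; the sketch does not depict the censor-layer architecture described in Section II.G, which would operate inside the model rather than on its output.

```python
import re

# Hypothetical blocklist of words and phrases. In the regime discussed above,
# its contents might be developed through notice-and-comment rulemaking.
BLOCKLIST = ["example slur", "dangerous claim x", "banned phrase y"]

# Precompile one case-insensitive pattern per entry, matched on word boundaries.
_PATTERNS = [
    (term, re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE))
    for term in BLOCKLIST
]

def screen_output(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_terms) for a candidate piece of generated text."""
    hits = [term for term, pattern in _PATTERNS if pattern.search(text)]
    return (not hits, hits)

if __name__ == "__main__":
    allowed, hits = screen_output("This draft contains a dangerous claim x.")
    print("allowed:", allowed, "matches:", hits)
```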

The tools of the trade of past censors dealing with purely human communication and publishing can be consulted as the starting point for censorship of generative AI. The same logic that argues for imposing liability for harmful content on conduits when the conduits are social media networks, search engines, or web portals extends to less transparent Internet assets: routers and domain name registrars. A disfavored originator of content can be blocked by denying it a domain name, just as effectively as by filtering its content from social networks and search engines. Disfavored content also can be blocked by programming Internet router firewalls to reject packets coming from the disfavored originator. If would-be censors of the Internet want to make censorship effective, they not only should be bashing Mark Zuckerberg for not doing enough to eliminate "hate speech" and false facts on Facebook; they should be imposing the same demand on everyone who runs a domain name registration service or operates a router. The Chinese Communist Party generally understands this,[256] and the European Union is on the verge of implementing widespread Internet surveillance,[257] but the larger idea has not yet percolated to the top of the U.S. policy debate.
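The origin-based blocking described in the preceding paragraph can likewise be reduced to a simple membership test, whether it is carried out by a registrar or in a router's firewall rules. The addresses and domain names below are placeholders chosen for illustration; real deployments would rely on router access-control lists, DNS registry action, or dedicated firewall appliances rather than application code like this.

```python
import ipaddress

# Hypothetical blocklists of disfavored originators, by network and by domain.
BLOCKED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]  # documentation range, placeholder only
BLOCKED_DOMAINS = {"disfavored-example.org"}

def drop_packet(src_ip: str) -> bool:
    """Router-style check: drop traffic whose source falls within a blocked network."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

def refuse_registration(domain: str) -> bool:
    """Registrar-style check: refuse to register or resolve a blocked domain."""
    return domain.lower().rstrip(".") in BLOCKED_DOMAINS

if __name__ == "__main__":
    print(drop_packet("203.0.113.42"))          # True: origin blocked at the router level
    print(refuse_registration("example.com"))   # False: not on the domain blocklist
```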

Censorship practices in China presumably use the state of the art to police the Internet. More than a dozen government agencies enforce restrictions on information flows and employ some two million analysts to carry out their work.[258] Chinese censors use bandwidth throttling, deep packet inspection technology, and keyword filtering, and they block access to certain websites.[259] Censorship is implemented technologically through firewalls at the router level and through filtering software deployed at the backbone and ISP level.[260]

The European Union Digital Services Act similarly calls for policing at the Internet-service-provider, Internet exchange, Domain Name System (DNS), and router levels.[261]

Censorship and policing of disinformation in generative AI can be a camel’s nose under the tent for censorship of all forms of Internet-enabled information and communication. Some forces want to censor things generally, including everything on the Internet and social media. Others want to erase history as it generally has been told and to rewrite it. This is their chance to get started.

Erecting further procedural protections for the accused content provider, such as the Facebook Oversight Board, is not the answer. Before long, one ends up with complex Rube Goldberg mechanisms rivaling those of the Interstate Commerce Commission in its worst innovation-squelching days.[262]

Enforcement of Private Commitments

Some regulatory regimes provide for the enforcement of private commitments, either by creating a private right of action under federal law or by sanctions imposed by an administrative agency. ERISA takes the former approach. It provides for the enforcement of the terms of pension plans and other employee benefit plans as those terms have been defined by plan sponsors—usually employers.[263] Enforcement of the terms of data privacy statements voluntarily adopted by cyberspace participants by the FTC is an example of the second approach.[264]

Federal law could call upon generative AI vendors and users to publish terms of service that disclose the machine learning methods the systems use and limit the purposes to which the systems are put. Publication of these terms could be mandatory, or it could be voluntary. But once published, the terms would be enforceable, either by private right of action, by administrative proceeding, or both.

New Regulatory Agency

Some AI regulatory proposals contemplate the establishment of a new agency. In his May 2023 testimony, Sam Altman proposed a new regulatory agency for AI, testing performance on various metrics, and independent audits.[265] Senator Blumenthal proposed a licensing regime.[266] A Brookings Institution report on section 230 recommended establishment of a new agency.[267] Past literature sometimes has proposed a new federal agency to regulate robots.[268] For example, University of Washington law professor Ryan Calo suggested creation of a Federal Robotics Commission.[269] His proposed commission would not “regulate” in the traditional sense, but it would advise public policy makers at all levels of government on robots, especially those that have the potential to cause physical harm.[270] He draws his examples primarily from automated features of driverless cars, drones, high-speed trading on securities exchanges, cognitive radio, and surgical robots.[271] Calo frames the responsibilities of his proposed commission, however, mostly to encourage the development and deployment of robot technology.[272] Calo stops short of proposing any particular regulatory approach or requirements, although he notes that much eventual robot regulation will be accomplished—or enforced—through code rather than through traditional rules and agency adjudication.[273]

He erroneously uses the 1927 Federal Radio Commission as a precedent. That commission, he says, responded to the “need to manage the impact of radio on society.”[274] It did not. The Federal Radio Commission responded to a much narrower problem: the need to avoid radio interference between stations operating on the same frequencies.[275] No one has suggested a similarly crisp problem created by robots, amounting to a need for government intervention, and it is not clear that the public would support such a step.[276]

In the generative AI field, the “National Artificial Intelligence Initiative Act of 2020” is Division E of the National Defense Authorization Act for Fiscal Year 2021. Its section 5104(c) establishes an advisory committee authorized to develop recommendations regarding:

(4) issues related to artificial intelligence and the United States workforce, including matters relating to the potential for using artificial intelligence for workforce training, the possible consequences of technological displacement, and supporting workforce training opportunities for occupations that lead to economic self-sufficiency for individuals with barriers to employment and historically underrepresented populations, including minorities, Indians (as defined in 25 U.S.C. 5304), low-income populations, and persons with disabilities.

. . . .

(10) whether ethical, legal, safety, security, and other appropriate societal issues are adequately addressed by the Initiative; [and]

. . . .

(12) accountability and legal rights, including matters relating to oversight of artificial intelligence systems using regulatory and nonregulatory approaches, the responsibility for any violations of existing laws by an artificial intelligence system, and ways to balance advancing innovation while protecting individual rights[.]

Section 5105 requires the National Research Council to conduct a study of “current and future impact of artificial intelligence on the workforce of the United States.”[277] In conducting the study, the researchers must seek input “from a wide variety of stakeholders in the public and private sectors.”[278]

Everything else in the act focuses on promoting AI and ensuring U.S. competitiveness in the field.

The proposals for a new agency are vague about what the agency would do. Altman suggests it might certify and license large AI models before they are allowed to be marketed to the public.[279] Marcus was even vaguer: “Ultimately, we may need something like CERN [European Council for Nuclear Research], global, international, and neutral, but focused on AI safety.”[280]

It might do no more than the advisory committee does under the National Artificial Intelligence Initiative Act of 2020: review industry practices and report publicly to the Congress and the executive branch. More actively, it might develop standards for generative AI systems, although proponents of such a limited approach should be called upon to explain why a new agency is needed, given NIST’s existing mandate to develop standards. The agency also might serve as a clearinghouse for databases, best practices, censorship blacklists, and complaints.

More ambitiously, it might be responsible for developing and maintaining an infrastructure, as discussed in Section III.M.

A more muscular agency might be authorized to investigate violations of standards or good practices, especially relating to bias and disinformation.

Finally, of course, a new agency might be clothed with enforcement powers. But that assumes that the regulatory regime defines prohibited conduct, for example:

[] An unlawful practice based on disparate impact is established under this act only if—

[] a complaining party demonstrates that a respondent uses a particular practice that causes a disparate impact on the basis of race, color, religion, sex, sexual orientation, or national origin and the respondent fails to demonstrate that the challenged practice is related for the purpose for which the system is used and consistent with business necessity.[281]

Robot Tax

When the debate was about robots generally, one popular regulatory suggestion was to impose a robot tax. This was justified as an economic disincentive to use robot technology to replace human employees. But the same approach also can be used to effect a ban.

Bill Gates proposed to tax robots, frankly acknowledging that the purpose of such a tax is to discourage automation that will displace workers.[282] He and others proposing such a tax advance two arguments to support their proposals: that the government loses payroll taxes when workers are displaced, and that the current tax code incentivizes automation even when it does not benefit the business investing in it.[283] Such proposals fly in the face of the generally recognized need of any economy to innovate to stay prosperous,[284] and they are ironic in the mouths of people like Bill Gates or Elon Musk.[285] One wonders what their reaction would be to a proposal to impose a surtax on small computers or on electric cars, both of which have had, and are still having, a substantial job-displacement effect.[286]

“It’s one of the more harebrained ideas. Just about every aspect of it’s wrong,” one commentator concludes.[287] “[T]he country should be trying to improve flagging productivity growth, not inhibiting it. ‘The problem that we’re ostensibly trying to fix isn’t there.’”[288]

Infrastructure

Regulation—perhaps better termed as government involvement—might take the form of providing infrastructure for generative AI. The most obvious form of such infrastructure would be standardized databases for machine learning. This might be combined with collective licensing regimes for intellectual property. Copyright holders would contribute their works to the database under a standard license, allowing its use for machine learning. On the other hand, the ideal scope of learning databases is very broad, encompassing most of what’s available through the Internet, and it is not clear that the federal government wants to be in the position of maintaining a parallel Google search engine tailored to meet the needs of machine learning.

Another aspect of infrastructure would be a standard blacklist of words, phrases, and ideas to support censorship. This could be developed with public input through notice and comment rulemaking.

Physical Inspection

A regulatory regime providing for physical inspection seems unsuitable for regulating generative AI, which involves computations distributed widely among different physical places and in the Internet cloud.

Stakeholder Involvement

The NIST Managing Bias report recommends stakeholder involvement and embracing “Diversity, Equity & Inclusion” in training and deploying AI systems.[289]

Sam Altman said:

The U.S. government should consider facilitating multi-stakeholder processes, incorporating input from a broad range of experts and organizations, that can develop and regularly update the appropriate safety standards, evaluation requirements, disclosure practices, and external validation mechanisms for AI systems subject to license or registration.[290]

Two Berkeley law professors, for example, have proposed a system for public participation in the design of machine learning systems used by government agencies.[291] They point to the Equal Employment Opportunity Commission's (EEOC) 1978 Uniform Guidelines on Employee Selection Procedures as a good starting point for validation of algorithms embedded in machine learning systems.[292] The EEOC guidelines, however, are used after the fact, to adjudicate claims of disparate impact discrimination. The proposed machine learning criteria would be used before systems are deployed. Three criteria would be employed: (1) "design should expose built-in values;" (2) "design should trigger human engagement;" and (3) "design should promote contestation about social and political values."[293] The problem with these suggestions is that they almost certainly would result in endless delay before machine learning is deployed as part of decision systems; in other words, they would provide a political lever for blocking the new technology.

In a complex society, with a democracy encompassing many competing interests, no one can get absolutely everything he wants. Governance is a process of compromise. Mandating involvement by stakeholders allows end runs by those disappointed in, or pessimistic about, the outcomes of the political process.

Compromise is the heart and soul of democracy. "Shouting 'No Compromise!' may fire up the crowd, but it's a recipe for failure when it comes to getting things done in office."[294] In Federalist No. 10, James Madison argues that the only practicable protection against the dangers of factionalism is government institutions structured to force compromise among competing interests by ensuring that no interest, even if it constituted a majority, could get its own way without considering opposing views.[295]

Giving stakeholders direct access to decision-making circumvents the Madisonian structures of government and corporation law’s rules of private governance.

Imagine how fraught a regime for stakeholder participation would be in regulating AI "misinformation" about the 2023 Israel-Hamas conflict in Gaza.

The Section 230 Debate

At common law, information intermediaries were liable for harmful information they published because they actively selected and edited the information, even if it was provided by someone else. Section 230 of the Communications Decency Act[296] was enacted in 1996 to shield new types of information intermediaries from liability merely because they exercise some degree of control and censorship over what appears through their services. Written around CompuServe and USENET newsgroups, section 230 has been adapted in the nearly three decades since its enactment to shield more sophisticated and much higher volume intermediaries like YouTube, Google, and a variety of social networks.[297]

Section 230(c) provides "[p]rotection for 'Good Samaritan[s]' blocking and screening of offensive material":

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

 

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

 

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).[298]

It defines “interactive computer service” as:

any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.[299]

It defines “information content provider” as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”[300]

By its terms, section 230 extends immunity only to information service providers that make available information provided by others.

So whether section 230 extends to generative AI systems depends on whether they qualify under that definition. It appears that they do, because generative AI programs can provide only the information that they have learned, and what they learn is, in all cases, provided by others—the contributors to their learning databases.

But these expressive robots are also information content providers because they generate information rather than merely retrieving it.[301] Generative AI systems are more than search engines.[302]

A party defending AI-generated output cannot be assured of the section 230 shield.

In Lewis v. Google LLC, the district court granted a motion to dismiss a lawsuit brought against Google and YouTube by a self-styled "societal, cultural, and political commentator" who owned and operated websites and a YouTube channel.[303] He claimed that the defendants were censoring his content and excluding him from advertising revenue from his videos. The case illustrates not only the application of section 230 but also the reach of private censorship.

YouTube’s guidelines defined “hateful content” as:

Content that incites hatred against, promotes discrimination, disparages, or humiliates an individual or group of people based on the following . . . :

• Race

• Ethnicity or ethnic origin

• Nationality

• Religion

• Disability

• Age

• Veteran status

• Sexual orientation

• Gender identity

• Any other characteristic associated with systemic discrimination or marginalization.[304]

It excluded such content from advertising.

The court briefly rejected the plaintiff’s argument that Section 230 is unconstitutional[305] and held that it shielded the defendants from his remaining claims.[306]

“Any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230,” the court said, quoting Fair Housing Council of San Fernando Valley v. Roommates.com, LLC.[307] Fair Housing Council does say that section 230 extends immunity to a conduit even when the conduit engages in some censorship. It discussed the facts of Stratton Oakmont,[308] which Congress sought to repudiate by enacting section 230, and quoted the legislative history in support of its conclusion that “Congress sought to immunize the removal of user-generated content, not the creation of content.”[309]

The Lewis district court treated the defendants as publishers, like newspapers, which, it said, had enjoyed First Amendment privileges with respect to their selection and rejection of content.[310]

It rejected his section 1983 claims based on the First Amendment because the defendants were not state actors.[311]

It also rejected public accommodations claims under the Civil Rights Act,[312] Lanham Act and fraud claims,[313] breach of the implied covenant of good faith and fair dealing,[314] and tortious interference with economic advantage.[315]

The court’s analysis is superficial with respect to the section 230 issues.

Section 230 has become controversial, with both major political parties proposing changes.[316] Democratic proposals seek to:

reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms.

“For too long, Section 230 has given cover to social media companies as they turn a blind eye to the harmful scams, harassment, and violent extremism that run rampant across their platforms,” said [co-sponsor Senator Mark] Warner.[317]

Republican proposals oppose censorship under the cover of section 230: "Sen[ator] Hawley's legislation removes the immunity big tech companies receive under Section 230 unless they submit to an external audit that proves by clear and convincing evidence that their algorithms and content-removal practices are politically neutral."[318]

In a late 2020 report, the Brookings Institution summarized the policy dilemma, stating “proposals for government agencies to direct social media companies to remove harmful material are non-starters. An appealing alternative starts with mandated transparency and accountability rules that require disclosure of content standards and follow through.”[319]

The report says that "[i]t is hard to see how a government agency can enforce this requirement without creating an opportunity to manipulate social media discourse to favor a partisan perspective."[320]

It proposes that Congress establish a “non-governmental industry-public authority under the supervision of a federal regulatory commission” to provide arbitration of disputes over content rules established by the private intermediaries.[321] It points to legislation introduced by Senators Brian Schatz and John Thune as an example.[322] The bill requires “providers of interactive computer service” to publish acceptable use policies,[323] which include a procedure for complaints of violation of those policies.[324] It requires the provider to notify the source of accused content, after the content is removed pursuant to a complaint.[325] After such notification, the source of the content is entitled to appeal the takedown decision and know the resolution of the appeal.[326]

It makes violations of these requirements an “unfair or deceptive trade practice” under the Federal Trade Commission Act.[327]

It mandates that NIST establish standards and recommendations for moderation of social media content, including automated detection tools.[328]

It removes section 230 immunity for providers who fail to remove “illegal content shared or illegal activity” within twenty-four hours of receiving notice of such content.[329]

The bill defines illegal content or activity as that determined by a federal or state court to violate “Federal criminal or civil law or State defamation law.”[330]

The direct requirement to remove material is much narrower than the requirement to provide internal dispute resolution machinery. The removal requirement applies only to content or activity judicially determined to be illegal; the dispute-resolution requirement applies to any content or activity that violates provider terms of service.

Interestingly, given the Brookings Institution Report’s reference to it, the bill does not establish any kind of private arbitration body.

Sam Altman said section 230 is not a useful framework.[331]

Risk-Based Regulation

The benefits of a market economy occur only when the government abstains from interfering in markets unless market failure exists. Market failure is of two types: the inability of markets to protect against injury because of externalities; and the failure of markets to preserve competition, which is the characteristic that produces all the advantages theoretically associated with markets in the first place.[332]

Despite this precept, instances are legion in which interest groups successfully press legislators and administrative agency personnel to regulate technologies and markets to discourage the use of new technologies in order to protect the vested positions of incumbents. Such efforts were prominent in the beef industry in the 1930s and 1940s, when the railroads and union-organized trucking companies sought regulation of cattle haulers under the Interstate Commerce Act to limit their competitive threat.[333] Efforts to limit Uber, Lyft, and other manifestations of the gig economy in order to protect the interests of taxicab medallion holders are prominent now.[334]

Already, irrational calls exist for regulation of robots and other techniques of industrial automation and for regulation of artificial intelligence on the general ground, entirely unsubstantiated, that allowing markets to guide evolution of these new families of technology will lead to unacceptable levels of job loss or loss of human control over society.[335]

Risk-based regulation should be the norm. Generative AI regulation should focus on actual, quantified risks posed by the technology, rather than imagined risks, and should impose performance standards, rather than detailed engineering standards.[336] Engineering standards freeze technology at a particular point in time and discourage innovation. Moreover, the burdens of any particular regulatory approach should be weighed against the benefit it produces to the public interest. All regulation necessarily excludes some low-probability risks with modest costs, when the burden of eliminating them is too high.[337]

IBM urged Congress to adopt a “precision regulation” approach to artificial intelligence. “This means establishing rules to govern the deployment of AI in specific use-cases, not regulating the technology itself.”[338]

The federal Occupational Safety and Health Administration's (OSHA) initial efforts regarding robots are a good model. OSHA has published guidance for robot safety in industrial workplaces.[339] The guidance suggests beginning with an assessment of the risks posed by the particular robotic system: "The proper selection of an effective robotic safeguarding system should be based upon a hazard analysis of the robot system's use, programming, and maintenance operations."[340]

The National Institute of Standards and Technology (NIST) produced an AI Risk Management Framework, which provides a voluntary process for managing a wide range of potential AI risks.[341]

The framework asserts:

[T]he risks posed by AI systems are in many ways unique (See Appendix B). AI systems, for example, may be trained on data that can change over time, sometimes significantly and unexpectedly, affecting system functionality and trustworthiness in ways that are hard to understand. AI systems and the contexts in which they are deployed are frequently complex, making it difficult to detect and respond to failures when they occur. AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behavior. AI risks—and benefits—can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed.[342]

It offers six desirable characteristics of trustworthy AI: (1) safe; (2) secure and resilient; (3) explainable and interpretable; (4) privacy-enhanced; (5) fair—with harmful bias managed; and (6) accountable and transparent.[343]

Other Means for Assuring Accountability

All that does not necessarily mean that it’s time to regulate generative AI. Existing law and markets are likely to do a quite creditable job of regulating it for the foreseeable future.

Most basically, generative AI is expensive and unreliable. It is far too early for anyone to make a credible business case for its serious use in any mainstream endeavor. It is likely that, as initial experimentation settles down and business decisionmakers explore use of the technology in artistic creation, factual analysis and reporting, and evaluation of the large data sets that businesses hold, they will realize that the technology is useful in relatively noncontroversial arenas such as civil discovery document management, search of legal and medical databases, and retrieval of stock art. They likely will conclude that serious reliance on it to summarize data or to craft advertisements, customer communications, or political messaging is fraught with inaccuracy and high costs for post-machine human fact-checking.

So the diffusion of generative AI will be much less than commonly thought.

But however widely generative AI is used, existing law provides reasonable protection against most of the advertised dangers. The Copyright Act prohibits infringement. The copyright owner who can show that a generative AI system took protected elements of his or her work and reproduced, published, distributed, or displayed it or created a derivative work from it can recover damages and get an injunction.[344]

An individual or a business entity claiming that a generative AI system has defamed him, her, or it can recover damages in an action for defamation or false light invasion of privacy, assuming that the plaintiff can establish at least negligence in the design, management, or prompting of the AI system.[345]

Similarly, one claiming emotional distress or interference with contractual relations can obtain a judgment for damages, assuming the usual elements of those torts can be established.[346]

Discriminatory decision-making with respect to employment, housing, or access to financial resources is prohibited by existing law, backed up by a combination of administrative agency and civil enforcement provisions. A victim who can show that she was discriminated against by a generative AI system is no worse off because of the technologies involved than she would be with purely human discrimination.

The law does not, outside of doctrines of defamation, false light, and fraudulent misrepresentation, provide remedies for simple falsehoods and disinformation. The received wisdom is that it should be left to the marketplace of free expression to explore what is false and what is true.

Markets, rather than regulation, are the best way to deal with other effects of generative AI technology. Despite repeated campaigns throughout history, even before the Industrial Revolution, the law has been wary of restricting innovation and technologies that might threaten jobs. Printing presses were free to displace monks with quill pens; water-powered looms and spinning jennies were allowed to displace hand weavers and spinners, though not without a fair amount of violent disruption in England;[347] steamboats displaced flatboat operators and bargemen;[348] railroads displaced steamboats;[349] the Morse telegraph displaced the Pony Express; the telephone displaced Morse code telegraphy and 35,000 Morse code telegraphers;[350] automated dialing replaced telephone operators;[351] and so on.[352]

Always in the past, labor markets have proven agile in adapting to new technologies, finding new kinds of work for those displaced by the evaporation of old.[353]

The Best Approach: Wait and See

The public has been surprised by the capability of generative AI applications like ChatGPT to generate fluent expression. Enthusiasts have stretched their imaginations to forecast how revolutionary the technology is, usually overlooking its firm roots in statistical, analytical, and computational techniques that are decades old. They usually also ignore its position in the gradual evolution of natural language and image processing technology over the last twenty or thirty years.

Interest- and identity-group advocates have played their role in sounding the alarm about what the new technologies might mean to their groups. Political science recognizes the need of interest groups to be entrepreneurial in exaggerating the effects of new phenomena.[354]

Elected officials and candidates for public office have done their part, perceiving that generative AI is a theme with public resonance and seeking prominence by aligning themselves with their constituencies in proposing governmental action to address the problem.

It is conceivable, some people claim, that this time is different, but it bears waiting a while to see what actually happens to determine where law should intervene, if anywhere.

It is often said that the law lags behind technology. That rarely is intended as a compliment. But it is a compliment in a market economy and a democracy. If the law tries to lead or to keep pace with technology, legislators and regulators must guess about problems that technology might create before they see what actually happens. Rarely do they do any better with their crystal balls than stock pickers do in trying to see the future of the stock market. Law should adopt a wait-and-see attitude and respond to what actually happens in the real world.

First, engineers mobilize science to create new possibilities for machines and systems. Some science may never get engineered and does not need the law’s attention.

Second, entrepreneurs look at what the engineers have come up with and pick what they think is promising in terms of public acceptance and profitability. Engineering fantasies that never get adopted by entrepreneurs do not need the law’s attention.

Third, entrepreneurs seek financing from capital markets. Rarely do they have enough capital on their own to turn technology into concrete products, hire the employees, finance the marketing, and cover working capital costs until they start to make a profit. The law can disregard the entrepreneurs who never obtain financing.

Fourth, consumers must embrace the product. Many well-financed entrepreneurial ideas never gain traction in the marketplace. Those that don’t do not need the law’s attention.

Fifth, things must go wrong. Consumers may be disappointed in their expectations about products they have bought; third parties may be injured economically or physically by product malfunctions or side effects. Things that never go wrong do not need the law’s attention.

Sixth, the victims of the disappointments and mishaps must care enough about it to file lawsuits. If they do not care, neither should the law.

Seventh, the courts must begin to decide cases. Often, they will decide them according to existing law in a way that seems fair to the society. When they do that, there’s no need for legislators or regulators to write new law.

Only when the judges get it wrong is there need for the law to catch up and for the legislators and regulators to get busy.

  1. * Professor of Law (retired) and former dean, Chicago-Kent College of Law. Member of the bar: Virginia, Pennsylvania (inactive), District of Columbia, Maryland, Illinois (inactive), Supreme Court of the United States. Author of more than 100 articles and 25 books on dispute resolution, technology and law, and labor law. Commercial helicopter and private instrument airplane pilot. Extra class radio amateur (K9KDF). The author has followed artificial-intelligence developments since the mid-1980s and wrote an expert system to evaluate the legality of employee terminations. He was a consultant to the Administrative Conference of the United States, taught administrative law, and has written about many aspects of regulating new technologies. This Article is the fourth in a quintet of articles about law and generative AI. The first, Copyright for Robots? 57 Ind. U. L. Rev. 139 (2023), considers whether the output of generative AI programs is eligible for copyright protection. The second, Robots as Pirates, 73 Cath. U. L. Rev. ___ (2024), evaluates possibilities that generative AI programs can be liable for copyright infringement of the works in their learning databases. The third, Robot Slanderers, 46 U. Ark. Little Rock L. Rev. ___ (2024), considers the possibilities for recovery on theories of defamation for misstatements of fact in generative AI system output. The fifth, Robot Job Destroyer, 84 LA. L. Rev. 207 (2023), evaluates claims that generative AI will displace large numbers of employees and concludes that the dire forecasts are considerably exaggerated.
  2. . ChatGPT session with the author (May 16, 2023) (on file with author) (responding to question: “do you think you should be regulated by the government[?]”).
  3. . Pause Giant AI Experiments: An Open Letter, Future of Life Inst. (Mar. 22, 2023), https://futureoflife.org/open-letter/pause-giant-ai-experiments/ [https://perma.cc/5BDM-RAXJ] (signed by Elon Musk, Steve Wozniak, and tens of thousands of others).
  4. . White House Off. of Sci. & Tech. Pol’y, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People 3 (2022) [hereinafter AI Bill of Rights], https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf [https://perma.cc/4V3H-E23B].
  5. . See Research Overview, Nat’l Sec. Agency/Cent. Sec. Serv., https://www.nsa.gov/Research/Overview/ [https://perma.cc/K5Y5-KES6] (identifying “science of analysis,” including “foundational research in human language technologies,” as one of five areas of NSA focus).
  6. . See Yoav Aviram, The NSA’s Large Language Models, Conscious Digit. (Mar. 7, 2023), https://consciousdigital.org/the-nsas-large-language-models/ [https://perma.cc/Q3NE-JY43] (speculating about NSA use of large language models). For articles discussing the application of machine learning by the NSA, see generally 22 Next Wave, no. 2, 2019. (The Next Wave is a regular publication by the NSA reviewing emerging technologies.)
  7. . See e.g., NSA and University of Texas: Joining Forces in Machine Learning, 22 Next Wave, no. 2, 2019, at 37, 37.
  8. . 35 U.S.C. § 181 (providing for secrecy of certain inventions related to national security and authorizing withholding of patent).
  9. . Alexander Amini, MIT 6.S191: Introduction to Deep Learning, YouTube, at 6:30 (Mar. 10, 2023), https://www.youtube.com/watch?v=QDX-1M5Nj7s&list=PLtBw6njQRU-rwp5__7C0oIVt26ZgjG9NI&index=2 [https://perma.cc/TC72-FLWW].
  10. . Id.
  11. . Owen Hughes, Generative AI Defined: How it Works, Benefits and Dangers, TechRepublic (Aug. 7, 2023, 11:40 AM), https://www.techrepublic.com/article/what-is-generative-ai/ [https://perma.cc/WN29-2QY6].
  12. . See Sara Brown, Machine Learning, Explained, MIT Mgmt. (Apr. 21, 2021), https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained [https://perma.cc/LEG7-2F2H].
  13. . What is Machine Learning?, Geeks for Geeks, https://www.geeksforgeeks.org/what-is-machie-learning/ [https://perma.cc/P3X4-8CD7].
  14. . See What is Deep Learning?, IBM, https://www.ibm.com/topics/deep-learning [https://perma.cc/86GZ-SKHH].
  15. . Amini, supra note 8, at 12:48 (illustrating what happens at different layers of face recognition).
  16. . Id. at 13:00.
  17. . Id.
  18. . Id.
  19. . Id.
  20. . See Method & Sys. for End-to-End Image Processing, U.S. Patent No. 10,997,690 (method for processing input image, extracting features, and generating image in final layer of deep learning network).
  21. . See Image Classification & Info. Retrieval Over Wireless Digit. Networks & the Internet, U.S. Patent No. 2014/0105467, at [0017]–[0026] (identifying preceding face recognition patents).
  22. . See Amini, supra note 8, at 12:48.
  23. . JPEG and MPEG are examples. See generally Ida Mengyi Pu, Fundamental Data Compression 189–210 (2006) (explaining image compression as dependent on predicting what kind of pixel comes next).
  24. . See Jie-Zhi Cheng et al., Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans, 6 Sci. Reps., No. 24454, 2016, at 1, 2 https://www.nature.com/articles/srep24454 [https://perma.cc/5CT4-678Z]; Comput. Visualizations of Anatomical Items, U.S. Patent No. 2015/0049081.
  25. . See infra Section I.E (explaining genesis of dominant deep-learning chip in video game processing).
  26. . See Amini, supra note 8, at 12:30 (offering illustrations from face recognition).
  27. . See Brown, supra note 11.
  28. . See generally, e.g., Richard Socher et al., Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, Stan. Nat. Language Processing Grp. (2013), https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf [https://perma.cc/77SC-866H] (explaining this process in greater detail).
  29. . See What is Deep Learning?, supra note 13.
  30. . See William Crumpler & James A. Lewis, Ctr. for Strategic & Int’l Stud., How Does Facial Recognition Work? 3 (2021).
  31. . See Henry H. Perritt, Jr., The 21st Century Cowboy: Robots on the Range, 43 U. Ark. Little Rock L. Rev. 149, 175 (2020).
  32. . See Matthew Hutson, AI Learns to Write Computer Code in ‘Stunning’ Advance, Science (Dec. 8, 2022, 2:00 PM), https://www.science.org/content/article/ai-learns-write-computer-code-stunning-advance [https://perma.cc/BU2J-F57J].
  33. . See Brown, supra note 11.
  34. . See Cade Metz, A.I. Can Now Write Its Own Computer Code. That's Good News for Humans., N.Y. Times (Sept. 10, 2021), https://www.nytimes.com/2021/09/09/technology/codex-artificial-intelligence-coding.html [https://perma.cc/88FH-5CTM].
  35. . See Zinheng Chen et al., Machine Learning in Automatic Music Chords Generation, Stan. Univ. (2015), http://cs229.stanford.edu/proj2015/136_report.pdf [https://perma.cc/NXE8-YEV5].
  36. . See id.
  37. . See Amini, supra note 8, at 12:48; Brown, supra note 11.
  38. . See Hughes, supra note 10.
  39. . Amini, supra note 8, at 32:56 (explaining loss functions in neural network learning).
  40. . This Article explains machine learning by reference to image-recognition, rather than natural-language, processing because image recognition is easier to understand.
  41. . Perritt, supra note 30, at 163–64.
  42. . See generally id. at 149–237 (exploring feasibility of robot cowboy who herds cattle; describing machine learning aimed at recognition of cattle).
  43. . See id. at 166.
  44. . See generally, e.g., James R. Clough et al., A Topological Loss Function for Deep-Learning Based Image Segmentation Using Persistent Homology, 44 IEEE Transactions on Pattern Analysis & Mach. Intel. 8766, 8766–69 (2022) (describing process and potential application in medical context).
  45. . See id.; Perritt, supra note 30, at 163–69.
  46. . This involves the second step in most typologies: alignment. See Perritt, supra note 30, at 165–66.
  47. . See id. at 163–69.
  48. . One popular method is the Viola/Jones approach. See The Viola/Jones Face Detector, UBC Comput. Sci. (2001), https://www.cs.ubc.ca/~lowe/425/slides/13-ViolaJones.pdf [https://perma.cc/6M62-PFLQ] (slides explaining method); Paul Viola & Michael Jones, Rapid Object Detection Using a Boosted Cascade of Simple Features, Accepted Conf. on Comput. Vision & Pattern Recognition, 2001, at 1, https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf [https://perma.cc/U8WY-P658] (describing method in greater detail).
  49. . See Factor Analysis, Complete Dissertation by Stats. Sols., https://www.statisticssolutions.com/factor-analysis-sem-factor-analysis/ [https://perma.cc/AL2V-L36V].
  50. . See, e.g., C. Spearman, Demonstration of Formulæ for True Measurement of Correlation, 18 Am. J. Psych. 161, 161–62 (1907) (discussing application of factor analysis in field of psychology).
  51. . See Barbara Rusnáková, How the Accuracy of Facial Recognition Technology Has Improved Over Time, Innovatrics: Tr. Rep., https://innovatrics.com/trustreport/how-the-accuracy-of-face-recognition-technology-has-improved-over-time/ [https://perma.cc/S3NG-NDPM].
  52. . See Julia Hirschberg & Christopher D. Manning, Advances in Natural Language Processing, 349 Science 261, 261–66 (2015) (explaining evolution of computerized natural language processing).
  53. . Hyejin Youn et al., On the Universal Structure of Human Lexical Semantics, 113 Proc. Nat’l Acad. Scis. U.S. 1766, 1766–70 (2016) (discussing creation of trees to measure cluster proximity based on semantical meaning).
  54. . See Jesús Giménez & Lluís Márquez, Linguistic Measures for Automatic Machine Translation Evaluation, 24 Mach. Translation 209, 232, 236 (2010) (noting use of semantic trees in machine translation of languages); Youn et al., supra note 52, at 1770; Jan-Pierre Koenig & Anthony R. Davis, The KEY to Lexical Semantic Representations, 42 J. Linguistics 71 (2006) (explaining that “the semantic content of a lexical entry determines to a large extent its syntactic subcategorization or other contexts of occurrence”).
  55. . Warren Sack, Conversation Map: An Interface for Very Large-Scale Conversations, J. Mgmt. Info. Sys., Winter 2000–2001, at 73, 79–80 (explaining analysis of email messages and Usenet posts with reference to semantic trees and thesauri).
  56. . See Jason Brownlee, Loss and Loss Functions for Training Deep Learning Neural Networks, Mach. Learning Mastery (Oct. 23, 2019), https://machinelearningmastery.com/loss-and-loss-functions-for-training-deep-learning-neural-networks/ [https://perma.cc/MY4T-BTT7] (explaining loss functions).
  57. . Applied A.I. Tech. for Using Nat. Language Processing to Train a Nat. Language Generation Sys., U.S. Patent No. 11,042,713.
  58. . Id. at col. 1 ll. 51–54.
  59. . Id. at col. 2 ll. 16–21, 30–35.
  60. . See id. at col. 2 l. 64–col. 3 l. 2.
  61. . See id. at col. 9 ll. 24–26, 45–65.
  62. . Id. at col. 16 ll. 39–46.
  63. . Id. at col. 17 ll. 64–67, col. 18 ll. 7–23.
  64. . Id. at col. 10 ll. 53–59.
  65. . Id. at col. 12 ll. 28–52.
  66. . See Applied Artificial Intelligence Technology for Using Natural Language Processing to Train a Natural Language Generation System, Google Patents https://patents.google.com/patent/US11042713B1/en?oq=U.S.+Patent+No.+11%2c042%2c713+ [https://perma.cc/XK2J-ATQN?type=image] (demonstrating that the ‘713 Patent cites 333 patents, and is cited by 26).
  67. . See Ben Dickson, The Power of Wide Transformers Models, TechTalks (Oct. 31, 2022), https://bdtechtalks.com/2022/10/31/wide-transformers-models/ [https://perma.cc/4AFD-6LKN]; Shomit Ghose, Opinion, The Biggest Innovation in ChatGPT? It's the "T", Not the Chat, Berkeley News (Mar. 29, 2023), https://news.berkeley.edu/2023/03/29/18160#:~:text=This%20is%20the%20%E2%80%9CT%E2%80%9D%20in,other%20word%20in%20the%20sentence [https://perma.cc/H5CC-KMCY].
  68. . See Giuliano Giacaglia, How Transformers Work, Medium: Toward Data Sci. (Mar. 10, 2019), https://towardsdatascience.com/transformers-141e32e69591 [https://perma.cc/FH87-EQ47].
  69. . See id.
  70. . Id. (explaining transformers and comparing them with recursive neural networks; giving examples of language translation systems). "A transformer model is a neural network that learns context and thus meaning by tracking relationships in sequential data like the words in this sentence." Rick Merritt, What Is a Transformer Model?, NVIDIA (Mar. 25, 2022), https://blogs.nvidia.com/blog/2022/03/25/what-is-a-transformer-model/ [https://perma.cc/7T9X-2YN7] (explaining how transformers "pay attention"). See also Alexander Amini, MIT 6.S191: Recurrent Neural Networks, Transformers, and Attention, YouTube (Mar. 17, 2023), https://www.youtube.com/watch?v=ySEx_Bqxvvo [https://perma.cc/9VP4-GNLH] (focusing on "how we can build neural networks that can handle and learn from sequential data").
  71. . MuseNet, OpenAI (Apr. 25, 2019), https://openai.com/research/musenet [https://perma.cc/JCV9-3CYM] (providing an example of a sample imitating Chopin).
  72. . Amini, supra note 70, at 12:00–13:20, 28:00.
  73. . See Andrew Guthrie Ferguson, Facial Recognition and the Fourth Amendment, 105 Minn. L. Rev. 1105, 1110–14 (2021) (discussing breaking images into smaller delimiters in context of facial recognition technology).
  74. . See id. (explaining this matching process in the context of facial recognition technology).
  75. . See Ken Gu & Rohit Saha, An Introduction to Semantic Matching Techniques in NLP and Computer Vision, Medium: Georgian Impact Blog (Sept. 8, 2021), https://medium.com/georgian-impact-blog/an-introduction-to-semantic-matching-techniques-in-nlp-and-computer-vision-c22bf3cee8e9 [https://perma.cc/2SQF-YHLC].
  76. . See Roland Kuhn & Renato De Mori, The Application of Semantic Classification Trees to Natural Language Understanding, 17 IEEE Transactions on Pattern Analysis & Mach. Intel. 449 (1995), https://www.researchgate.net/profile/Renato-De-Mori/publication/220182176_The_Application_of_Semantic_Classification_Trees_to_Natural_Language_Understanding/links/543befbb0cf24a6ddb97c86f/The-Application-of-Semantic-Classification-Trees-to-Natural-Language-Understanding.pdf [https://perma.cc/A9TA-HWRV?type=standard].
  77. . Ken Gu & Rohit Saha, An Introduction to Semantic Matching Techniques in NLP and Computer Vision, Medium: Georgian Impact Blog (Sept. 8, 2021), https://medium.com/georgian-impact-blog/an-introduction-to-semantic-matching-techniques-in-nlp-and-computer-vision-c22bf3cee8e9 [https://perma.cc/2SQF-YHLC] (explaining semantic pattern matching); Kaveti Naveenkumar & Shrutendra Harsola, Deep Learning for Semantic Text Matching, Medium: Towards Data Sci. (Nov. 5, 2020), https://towardsdatascience.com/deep-learning-for-semantic-text-matching-d4df6c2cf4c5 [https://perma.cc/BV3U-4TTJ] (giving examples); Brian Gallagher, Matching Structure and Semantics: A Survey on Graph-Based Pattern Matching, Ass’n for Advancement A.I. Fall Symp., 2006, at 1, https://cdn.aaai.org/Symposia/Fall/2006/FS-06-02/FS06-02-007.pdf [https://perma.cc/NBD6-2H7U].
  78. . See Ferguson, supra note 73.
  79. . Face-recognition terminology calls the learning databases the “training” databases and the databases of actual persons whose faces might match a candidate image the “enrolled” database. See Henry H. Perritt, Jr., Defending Face-Recognition Technology (And Defending Against It), 25 J. Tech. L. & Pol’y 41, 50–51 (2020) (describing two types of databases).
  80. . See Ferguson, supra note 73.
  81. . See id.
  82. . See Hughes, supra note 10.
  83. . See Tuana Çelik, How to Build a Semantic Search Engine in Python, deepset (Nov. 23, 2022), https://www.deepset.ai/blog/how-to-build-a-semantic-search-engine-in-python [https://perma.cc/AU8K-57MT].
  84. . See Semantic Graphs and Conversational Agents, U.S. Patent No. 9,196,245 col. 3 ll. 5–14 (applying semantic clustering techniques to group utterances according to topic and importance).
  85. . See Automated Tool for Question Generation, U.S. Patent No. 10,614,106 col. 5 ll. 13–28.
  86. . See Arthur C. Graesser et al., Learning by Communicating in Natural Language with Conversational Agents, 23 Current Directions Psych. Sci. 374, 375–76 (2014) (describing computerized natural language tutor).
  87. . See Method & Apparatus for Semantic Pattern Matching for Text Retrieval, Eur. Patent No. 0601550 (filed Dec. 7, 1993) (issued Aug. 22, 2001) (using semantic pattern recognition to locate relevant text rapidly). 
  88. . Paul D. Callister, Introduction to Online Legal Research (Westlaw and Lexis), Albert E. Jenner Jr. Mem’l L. Libr., https://www1.law.umkc.edu/faculty/callister/bootcamp/ot/nl01.html [https://perma.cc/W492-A2CA].
  89. . CC100, Papers with Code, https://paperswithcode.com/dataset/cc100 [https://perma.cc/LM4Z-5MH5].
  90. . The Pile: An 800GB Dataset of Diverse Text for Language Modeling, Pile, https://pile.eleuther.ai/ [https://perma.cc/YYS9-B74D].
  91. . See Matt Rickard, LAION, The Pile, and More Datasets, Matt Rickard (Dec. 14, 2022), https://matt-rickard.com/laion-the-pile-and-more-datasets [https://perma.cc/T2HM-BTAR] (listing and evaluating different training data sets); Kevin Schaul et al., Inside the Secret List of Websites that Make AI Like ChatGPT Sound Smart, Wash. Post (Apr. 19, 2023, 6:00 AM), https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/ [https://perma.cc/W34H-FNHT] (identifying patents.google.com, wikipedia.org, scribd.com, fool.com, and kickstarter.com as major sources).
  92. . ChatGPT session with the author (May 16, 2023) (on file with author) (responding to question: “what sources do large language models use[?]”).
  93. . Oversight of A.I.: Rules for Artificial Intelligence, Before the Subcomm. on Priv., Tech., & the L. of the S. Comm. on the Judiciary, 118th Cong. (2023) (written testimony of Sam Altman, CEO, OpenAI) [hereinafter Altman testimony], https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Bio%20&%20Testimony%20-%20Altman.pdf [https://perma.cc/2L95-ZM4C].
  94. . See Kim Martineau, What Is Generative AI, IBM (Apr. 20, 2023), https://research.ibm.com/blog/what-is-generative-AI [https://perma.cc/2D5X-N3HE].
  95. . A chip is an integrated semiconductor circuit, and a CPU is a central processing unit. Rahul Awati, Integrated Circuit (IC), TechTarget, https://www.techtarget.com/whatis/definition/integrated-circuit-IC [https://perma.cc/8R3H-KZKV].
  96. . See Aaron Tilley, The New Intel: How Nvidia Went from Powering Video Games to Revolutionizing Artificial Intelligence, Forbes (Nov. 30, 2016, 9:30 AM), https://www.forbes.com/sites/aarontilley/2016/11/30/nvidia-deep-learning-ai-intel/?sh=52d47f277ff1 [https://perma.cc/476M-PH9V].
  97. . See Asa Fitch, Nvidia, Other Chip Companies Race to Cash in on ChatGPT Frenzy, Wall St. J. (Feb. 23, 2023, 9:38 AM), https://www.wsj.com/articles/chip-makers-see-chatgpt-stirring-strong-demand-for-advanced-processors-76f152d1 [https://perma.cc/7H8G-TZ46].
  98. . See id.; Marisa Garanhel, Top 20 Artificial Intelligence Chips of Choice in 2022, AIAI (Oct. 14, 2022), https://www.aiacceleratorinstitute.com/top-20-chips-choice-2022/ [https://perma.cc/B3V4-WV33].
  99. . Garanhel, supra note 98.
  100. . Id.
  101. . Id.
  102. . NVIDIA, NVIDIA ADA GPU Architecture 4, https://images.nvidia.com/aem-dam/Solutions/geforce/ada/nvidia-ada-gpu-architecture.pdf [https://perma.cc/8T97-WFMX].
  103. . Sharon Goldman, How Nvidia Dominated AI – and Plans to Keep It That Way as AI Explodes, Venture Beat (Feb. 23, 2023, 5:00 AM), https://venturebeat.com/ai/how-nvidia-dominated-ai-and-plans-to-keep-it-that-way-as-generative-ai-explodes/ [https://perma.cc/3YRL-FVX3] (reporting on how chips developed for video games drove machine learning to a new level).
  104. . See NVIDIA, supra note 102.
  105. . See, e.g., Goldman, supra note 103 (reporting that OpenAI used 10,000 NVIDIA GPUs to train ChatGPT).
  106. . Cem Dilmegani, Large Language Model Training in 2023, AI Multiple (May 20, 2023), https://research.aimultiple.com/large-language-model-training/ [https://perma.cc/P9L5-53WB].
  107. . See id.
  108. . See Cem Dilmegani, Large Language Models: Complete Guide in 2023, AI Multiple (June 21, 2023), https://research.aimultiple.com/large-language-models/ [https://perma.cc/295D-DGN5].
  109. . See Dilmegani, supra note 106.
  110. . See Altman testimony, supra note 93.
  111. . ChatGPT session with the author (May 16, 2023) (on file with author) (responding to question: “where can [I] join an anti-vaxer group [?]”).
  112. . E.g., Mike Isaac, Uber Defies California Regulators with Self-Driving Car Services, N.Y. Times (Dec. 16, 2016), http://www.nytimes.com/2016/12/16/technology/uber-defies-california-regulators-with-self-driving-car-service.html?_r=0/ [https://perma.cc/2ZUC-KYM6].
  113. . See ADS-B FAQ, Fed. Aviation Admin., https://www.faa.gov/air_traffic/technology/adsb/faq [https://perma.cc/UM66-A2QE].
  114. . See Amazon Staff, Amazon Prime Air Prepares for Drone Deliveries, Amazon (June 13, 2022), https://www.aboutamazon.com/news/transportation/amazon-prime-air-prepares-for-drone-deliveries [https://perma.cc/6UFT-96KB].
  115. . See id.
  116. . See Positive Train Control (PTC), Fed. R.R. Admin., https://railroads.dot.gov/research-development/program-areas/train-control/ptc/positive-train-control-ptc [https://perma.cc/247A-XPRX]; see also 75 Fed. Reg. 2,598 (Jan. 15, 2010) (now codified at 49 C.F.R. §§ 229, 234–36 (2022)).
  117. . Positive Train Control (PTC), supra note 116.
  118. . See Jane Wakefield, Rolls-Royce Imagines a Future of Unmanned Ships, BBC News (Mar. 5, 2014), https://www.bbc.com/news/technology-26438661 [https://perma.cc/AH29-CCAQ].
  119. . See generally Henry H. Perritt, Jr., Who Pays When Drones Crash?, 21 UCLA J.L. & Tech., Spring 2017, at 1 (analyzing sources of liability for mishaps involving civilian drones; describing developments with other instances of transportation autopilots).
  120. . See Roger McNamee, Big Tech Needs to Be Regulated. Here Are 4 Ways to Curb Disinformation and Protect Our Privacy, Time (July 29, 2020, 10:05 AM), https://time.com/5872868/big-tech-regulated-here-is-4-ways/ [https://perma.cc/8FW3-5FJR] (reporting on campaign, in Congress and elsewhere, to rein in social media); Clare Duffy, Here’s How US Lawmakers Could Finally Rein in Facebook, CNN Bus. (Jan. 17, 2022, 8:38 AM), https://www.cnn.com/2022/01/17/tech/meta-facebook-regulation/index.html [https://perma.cc/KM8Y-ME69] (reporting on legislative initiatives targeting Facebook and other big tech enterprises).
  121. . Doe 1 v. GitHub, Inc., No. 22-cv-06823-JST, 2023 WL 3449131 (N.D. Cal. May 11, 2023).
  122. . Anderson v. Stability AI Ltd., No. 3:23-cv-00201-WHO (N.D. Cal. filed Jan. 13, 2023).
  123. . See Henry H. Perritt, Jr., Robots as Pirates, 73 Cath. U. L. Rev. (forthcoming 2024) (manuscript at 40–43) (on file with author).
  124. . See, e.g., Henry H. Perritt, Jr., Robot Slanderers, 46 U. Ark. Little Rock L. Rev. (forthcoming 2024) (manuscript at 2) (on file with author).
  125. . Id. (manuscript at 4).
  126. . See id. (manuscript at 6).
  127. . Id. (manuscript at 4).
  128. . See Olga Akselrod, How Artificial Intelligence Can Deepen Racial and Economic Inequities, ACLU (July 13, 2021), https://www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities [https://perma.cc/DW33-7UEJ].
  129. . Id.
  130. . Nat’l Inst. of Standards & Tech., U.S. Dep’t of Com., NIST Spec. Publ’n 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence 10 (2022) [hereinafter NIST Spec. Publ’n 1270], https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf [https://perma.cc/F8MT-SHG8].
  131. . See Steven Mintz, Rewriting the Narrative of U.S. History, Inside Higher Ed (June 24, 2020), https://www.insidehighered.com/blogs/higher-ed-gamma/rewriting-narrative-us-history [https://perma.cc/YT2H-JUSB] (“Each generation rewrites the past to meet its own needs. As the United States evolved into a more multicultural society, limitations of earlier, complacent and nationalistic historical narratives became flagrantly self-evident. The romanticizing of the Civil War; the downplaying of Indian removal, slavery and Jim Crow; and the nostalgia-laced sentimentalizing of immigration, evading the realities of nativism and ethnocentrism.”). 
  132. . Hussam Alkaissi & Samy I. McFarlane, Artificial Hallucinations in ChatGPT: Implications in Scientific Writing, 15 Cureus, Feb. 19, 2023, at 1, 4, https://www.cureus.com/articles/138667-artificial-hallucinations-in-chatgpt-implications-in-scientific-writing#!/ [https://perma.cc/XT2Y-XLG4]; see also Altman testimony, supra note 93 (“In some circumstances, the most likely words that appear near each other may not be the most accurate ones, and the outputs of ChatGPT or other AI tools may also be inaccurate.”).
  133. . Alkaissi & McFarlane, supra note 132, at 3.
  134. . See Devansh, Why ChatGPT Lies, Medium: Geek Culture (Mar. 13, 2023), https://medium.com/geekculture/why-chatgpt-lies-4d4e0c6e864e [https://perma.cc/X5ZL-KFFW] (explaining how latent-space organization and large datasets lead to hallucination).
  135. . See Bill Franks, Generative AI: It’s All a Hallucination!, CXOTECH Mag., https://cxotechmagazine.com/generative-ai-its-all-a-hallucination/ [https://perma.cc/QJA8-7EMD] (“[G]enerative AI is effectively producing hallucinations 100% of the time.”).
  136. . See Molly Crabapple, Opinion, Op-Ed: Beware a World Where Artists Are Replaced by Robots. It’s Starting Now, L.A. Times (Dec. 21, 2022, 3:20 AM), https://www.latimes.com/opinion/story/2022-12-21/artificial-intelligence-artists-stability-ai-digital-images [https://perma.cc/2FC3-X3TZ]; Vanessa Thorpe, ‘ChatGPT Said I Did Not Exist’: How Artists and Writers Are Fighting Back Against AI, Guardian (Mar. 18, 2023), https://www.theguardian.com/technology/2023/mar/18/chatgpt-said-i-did-not-exist-how-artists-and-writers-are-fighting-back-against-ai [https://perma.cc/NHV7-AHGW].
  137. . Henry H. Perritt, Jr., Robot Job Destroyer, 84 La. L. Rev. 4–5 (2023).
  138. . See John Rushworth, The Star Chamber on Printing, 1637, in Historical Collections of Private Passages of State: Volume 3, 1639–40 (1721), as digitized at Brit. Hist. Online, https://www.british-history.ac.uk/rushworth-papers/vol3/pp306-316 [https://perma.cc/2T28-CLHX].
  139. . See infra Section III.E (describing motor vehicle, consumer products, and radio transmitter certification).
  140. . 21 U.S.C. § 355(a) (prohibiting sale of new drug until FDA approves it).
  141. . Id. § 355(b).
  142. . See infra Section III.E (describing FAA licensing).
  143. . See infra Section III.E (describing FCC licensing and certification).
  144. . Nat’l Inst. of Standards & Tech., U.S. Dep’t of Com., NIST AI 100-1, Artificial Intelligence Risk Management Framework (AI RMF 1.0) (2023) [hereinafter AI RMF 1.0], https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf [https://perma.cc/Z38N-KWW9].
  145. . 14 C.F.R. §§ 1–198.17 (2022).
  146. . 49 C.F.R. §§ 200–99 (2022) (Federal Railroad Administration regulations).
  147. . E.g., 225 Ill. Comp. Stat. Ann. 41/10-5 (West 2023) (providing for the licensing of embalmers).
  148. . E.g., id. at 56/35 (restricting the activities of music therapists).
  149. . 29 U.S.C. § 657(a) (authorizing inspections by Secretary of Labor).
  150. . Id. § 211(a) (authorizing entry and inspection by Wage and Hour Administrator).
  151. . Id. § 1021 (requiring disclosure of terms of employee benefit plans). 
  152. . 29 U.S.C. § 1132. 
  153. . Privacy and Security Enforcement, Fed. Trade Comm’n, https://www.ftc.gov/news-events/topics/protecting-consumer-privacy-security/privacy-security-enforcement [https://perma.cc/2WKH-LFXL].
  154. . E.g., Niagara Mohawk Power Corp. v. Chevron U.S.A., Inc., 596 F.3d 112, 120 (2d Cir. 2010) (describing CERCLA’s cleanup and remediation requirements). 
  155. . 29 U.S.C. § 657(c) (requiring employers to keep records pertinent to causes and preventions of industrial accidents and diseases). 
  156. . See id. § 211(c) (requiring employers to keep records of wages, hours, and conditions of employment).
  157. . See 26 U.S.C. § 6001 (imposing recordkeeping requirements).
  158. . See 43 C.F.R. § 4120.2 (2006) (imposing management practice regulations on users of public lands); 14 C.F.R. § 91.225 (2021) (imposing ADS-B transponder requirements on aircraft operating in Class A airspace).
  159. . ChatGPT session with the author (May 16, 2023) (on file with author).
  160. . Daniel Van Boom, Elon Musk Is Right: We Need to Regulate AI Now, CNET (May 17, 2023, 5:17 PM), https://www.cnet.com/tech/elon-musk-is-right-we-need-to-regulate-ai-now/ [https://perma.cc/HDM5-5AXA].
  161. . Id.
  162. . Id.
  163. . Rules for Robots: The Path to Effective AI Regulation, MIT Initiative on Digit. Econ. (June 12, 2019), http://ide.mit.edu/news-blog/blog/rules-robots-path-effective-ai-regulation [https://perma.cc/L8CQ-DKE8] (summarizing presentation by University of Toronto professor Gillian Hadfield); see generally Gillian K. Hadfield, Rules for a Flat World 247–48, 353–54 (2017) (describing the current state of AI regulation and how it can be improved upon in the future).
  164. . See Andrea Bertolini, RoboLaw: Why and How to Regulate Robotics, Robohub (Oct. 29, 2014), https://robohub.org/robolaw-why-and-how-to-regulate-robotics/ [https://perma.cc/JRB8-EWAW] (describing European study on the philosophy of regulating robots; suggesting a case-by-case approach to particular applications).
  165. . See Abishur Prakash, Why Robot Law Around Industrial Automation Varies Worldwide, Robotics Bus. Rev. (Jan. 1, 2017), https://www.roboticsbusinessreview.com/manufacturing/why_robot_law_around_industrial_automation_varies_worldwide/ [https://perma.cc/L7VY-7ZVY] (providing a thoughtful discussion of different national approaches, some emphasizing worker safety, as in the United States and European Union, and some emphasizing national origin of robots, as in China).
  166. . AI Bill of Rights, supra note 4, at 3–7.
  167. . Id.
  168. . Jacob Moses & Gili Vidan, An AI Moratorium Probably Is Not the Answer, Wash. Post (May 8, 2023, 6:00 AM), https://www.washingtonpost.com/made-by-history/2023/05/08/artificial-intelligence-moratorium/ [https://perma.cc/3H5U-N4YP].
  169. . See Press Release, The White House, FACT SHEET: President Biden Issues Exec. Ord. on Safe, Secure, & Trustworthy A.I. (Oct. 30, 2023), https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ [https://perma.cc/C7HK-CZ6S] (requiring NIST standards, studies, and reports).
  170. . See generally Ithiel de Sola Pool, Technologies of Freedom 14–17, 215–16 (1983) (discussing enforcement of United States copyright law through effective targeting of bottlenecks).
  171. . See id. at 215–16.
  172. . See id. at 244–51.
  173. . See Viacom Int’l, Inc. v. YouTube, Inc., 676 F.3d 19, 27 (2d Cir. 2012) (quoting S. Rep. No. 105-190, at 2 (1998)) (“[T]he DMCA . . . was designed to ‘clarif[y] the liability faced by service providers who transmit potentially infringing material over their networks.’”).
  174. . See Jonathan Vanian & Kif Leswing, ChatGPT and Generative AI Are Booming, but the Costs Can Be Extraordinary, CNBC (Apr. 17, 2023, 2:09 AM), https://www.cnbc.com/2023/03/13/chatgpt-and-generative-ai-are-booming-but-at-a-very-expensive-price.html [https://perma.cc/V33L-GNVF]; Jim McGregor, Generative AI Breaks the Data Center: Data Center Infrastructure and Operating Costs Projected to Increase to Over $76 Billion by 2028, Forbes (May 12, 2023, 4:33 PM), https://www.forbes.com/sites/tiriasresearch/2023/05/12/generative-ai-breaks-the-data-center-data-center-infrastructure-and-operating-costs-projected-to-increase-to-over-76-billion-by-2028/?sh=33b013587c15 [https://perma.cc/GBD2-XVQS].
  175. . See supra Section I.F (describing major engines).
  176. . See, e.g., Charlotte H. Mason & William D. Perreault, Jr., Collinearity, Power, and Interpretation of Multiple Regression Analysis, 28 J. Mktg. Rsch. 268, 268 (1991); Ayodele Temitope Ajibade et al., Regression Analysis and Relevance to Research in Social Sciences, Acad. J. Acct. & Bus. Mgmt., Dec. 2021, at 1, 1–2.
  177. . See Dan Kopf, The Discovery of Statistical Regression, Priceonomics (Nov. 6, 2015), https://priceonomics.com/the-discovery-of-statistical-regression/ [https://perma.cc/PH8V-G6EU]; The Evolution of Statistical Computing: A Captivating Journey Through History, VSNI (Apr. 27, 2021), https://vsni.co.uk/blogs/evolution-of-statistical-computing [https://perma.cc/CD4R-VJFA].
  178. . See generally Arianna Borrelli & Janina Wellmann, 27 NTM Zeitschrift für Geschichte der Wissenschaften, Technik und Medizin [NTM J. Hist. Sci., Med., Tech.] 407 (2019) (describing early and continued use of computer modeling in engineering).
  179. . See Andrew Zola, Development Environment, TechTarget (Apr. 2022), https://www.techtarget.com/searchsoftwarequality/definition/development-environment [https://perma.cc/RS3F-VFYV].
  180. . A moratorium is essentially a temporary ban. Moratorium, Black’s Law Dictionary (11th ed. 2019).
  181. . See Pause Giant AI Experiments: An Open Letter, supra note 3.
  182. . See 47 U.S.C. § 301.
  183. . Andrew Tutt, An FDA for Algorithms, 69 Admin. L. Rev. 83, 83 (2017).
  184. . Id. at 106–07.
  185. . Id. at 108.
  186. . See Cat Zakrzewski et al., CEO Behind ChatGPT Warns Congress AI Could Cause ‘Harm to the World’, Wash. Post (May 16, 2023, 4:12 PM), https://www.washingtonpost.com/technology/2023/05/16/sam-altman-open-ai-congress-hearing/ [https://perma.cc/HXE8-ME7U].
  187. . John Villasenor, The Problems With a Moratorium on Training Large AI Systems, Brookings (Apr. 11, 2023), https://www.brookings.edu/articles/the-problems-with-a-moratorium-on-training-large-ai-systems/ [https://perma.cc/RJV5-2A32].
  188. . Altman testimony, supra note 92.
  189. . 49 U.S.C. § 44711(a)(1).
  190. . Id. § 44701(a)(1).
  191. . Id. § 30112(a)(1) (prohibiting sale of noncompliant vehicles).
  192. . Id. § 30111(a).
  193. . Id. § 30166(e).
  194. . See id. § 30115 (requiring certification of compliance).
  195. . See 49 C.F.R. § 567.4 (2022) (requiring certification placard).
  196. . 15 U.S.C. §§ 2051–89.
  197. . Id. §§ 2051, 2053.
  198. . Id. § 2068(a)(1).
  199. . See id. § 2068(a)(6).
  200. . See id. § 2068(a)(14).
  201. . 16 C.F.R. § 1205.30(a) (2022).
  202. . Id.
  203. . Compare Children’s Product Certificate (CPC), U.S. Consumer Prod. Safety Comm’n, https://www.cpsc.gov/Business--Manufacturing/Testing-Certification/Childrens-Product-Certificate-CPC [https://perma.cc/M8NG-5E5T], with 16 C.F.R. § 1205.33 (2022).
  204. . See 47 U.S.C. § 301.
  205. . See id.
  206. . 47 C.F.R. § 2.906 (2022).
  207. . Id. § 2.907.
  208. . Id. § 2.906.
  209. . Id. § 2.907(a).
  210. . Id. § 2.906(a).
  211. . See 21 U.S.C. § 343 (detailing requirements for food labeling).
  212. . See Alex Engler, The Case for AI Transparency Requirements, Brookings (Jan. 22, 2020), https://www.brookings.edu/articles/the-case-for-ai-transparency-requirements/ [https://perma.cc/7TQD-M646] (advocating for AI disclosure policies in the name of transparency).
  213. . AI Accountability Policy Request for Comment, 88 Fed. Reg. 22,433 (Apr. 13, 2023).
  214. . Id.
  215. . See Altman testimony, supra note 93.
  216. . See generally Henry H. Perritt, Jr., Trade Secrets for the Practitioner § 6 (2d ed. 2005 & Supp. 2022) (analyzing secrecy requirement for trade secret protection).
  217. . 17 U.S.C. § 102(b). See generally Henry H. Perritt, Jr., Copyright for Robots?, 57 Ind. L. Rev. 139 (2023) (analyzing possibility of copyright protection for generative AI).
  218. . See infra Section III.K (offering example of prohibition of disparate impact).
  219. . See U.S. Equal Emp. Opportunity Comm’n, EEOC-NVTA-2023-2, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 (2023), https://www.eeoc.gov/select-issues-assessing-adverse-impact-software-algorithms-and-artificial-intelligence-used [https://perma.cc/MC9U-GNQ9].
  220. . U.S. Equal Emp. Opportunity Comm’n, EEOC-NVTA-2022-2, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (2022), https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence [https://perma.cc/8WUX-32UZ].
  221. . NIST Spec. Publ’n 1270, supra note 130.
  222. . See id. at 4.
  223. . See id. at 11.
  224. . Id. at 12.
  225. . Id. at 14.
  226. . Id. at 17.
  227. . Id. at 27.
  228. . See id. at 29–30.
  229. . Id. at 42–45.
  230. . See generally 29 C.F.R. § 1607.1 (2022) (describing guidelines for employer compliance with federal law during hiring process).
  231. . See, e.g., Training and Certification, Ass’n of Title IX Adm’rs, https://www.atixa.org/training-certification/ [https://perma.cc/3VHM-893T] (offering 40 courses in Title IX training).
  232. . Howard B. Homonoff, The First Amendment and National Security: The Constitutionality of Press Censorship and Access Denial in Military Operations, 17 N.Y.U. J. Int’l L. & Pol. 369, 377–79 (1985) (describing functions of Director of Censorship during World War II and the Korean War); Christina E. Wells, Information Control in Times of Crisis: The Tools of Repression, 30 Ohio N.U. L. Rev. 451, 461–62 (2004) (describing direct censorship during World War I under Espionage Act and Sedition Act); see also, e.g., Exec. Order No. 9,182, 7 Fed. Reg. 4,468 (June 13, 1942) (establishing Office of War Information).
  233. . See generally Brian J. Gorman, Biosecurity and Secrecy Policy: Problems, Theory, and a Call for Executive Action, 2 I/S: J.L. & Pol’y for Info. Soc’y 53, 58–69 (2006) (summarizing existing controls on information about atomic science and life science research).
  234. . See Maria Lewis, Early Hollywood and the Hays Code, ACMI (Jan. 14, 2021), https://www.acmi.net.au/stories-and-ideas/early-hollywood-and-hays-code/ [https://perma.cc/FUK5-MRH8] (reporting on self-imposed industry code prohibiting profanity, suggestive nudity, graphic or realistic violence, sexual persuasions, and rape and limiting use of crime, costume, dance, religion, national sentiment, and morality).
  235. . See Fed. Commc’n Comm’n v. Fox Television Stations, Inc., 556 U.S. 502, 505–06 (2009) (describing FCC indecency regulations).
  236. . Rebecca Kern, 5th Circuit Upholds Texas Law Forbidding Social Media ‘Censorship’ — Again, Politico (Sept. 16, 2022, 6:40 PM), https://www.politico.com/news/2022/09/16/5th-circuit-upholds-texas-law-forbidding-social-media-censorship-again-00057316 [https://perma.cc/J6R9-QCQD].
  237. . See Ensuring Respect for Free Expression, Through Independent Judgment., Oversight Bd., https://www.oversightboard.com/ [https://perma.cc/9XSW-EBGA].
  238. . Oversight Board Charter, Oversight Bd. (Feb. 2023), https://oversightboard.com/attachment/494475942886876/ [https://perma.cc/8U2H-UW8N].
  239. . Id. at art. 2, sec. 1.
  240. . Case Decisions and Policy Advisory Opinions, Oversight Bd., https://oversightboard.com/decision/ [https://perma.cc/U9YL-K9HX].
  241. . South African Slurs, Oversight Bd., https://www.oversightboard.com/decision/FB-TYE2766G/ [https://perma.cc/L5KK-CBFH].
  242. . Wampum Belt, Oversight Bd., https://www.oversightboard.com/decision/FB-L1LANIA7/ [https://perma.cc/CW8D-4NMV].
  243. . See infra Part IV (giving examples of privately censored contents called into question under 47 U.S.C. § 230).
  244. . See Giulia Hayward, A USC Office Removes ‘Field’ From Its Curriculum, Citing Possible Racist Connotations, NPR (Jan. 14, 2023, 6:48 PM), https://www.npr.org/2023/01/14/1148470571/usc-office-removes-field-from-curriculum-racist [https://perma.cc/N2GP-LRJ7].
  245. . Darren Orf, Elon Musk Is Building an Anti-Woke AI, Whatever That Means, Popular Mechs. (Mar. 1, 2023), https://www.popularmechanics.com/technology/robots/a43126181/elon-musk-anti-woke-artificial-intelligence/ [https://perma.cc/9DTW-UB38].
  246. . See Altman testimony, supra note 93.
  247. . Joseph Clark, House Republicans Introduce Bill To Crack Down on Censorship, Wash. Times (Jan. 12, 2023), https://www.washingtontimes.com/news/2023/jan/12/house-republicans-introduce-bill-crack-down-censor/ [https://perma.cc/N7YR-ZUT6].
  248. . 505 U.S. 377, 391 (1992).
  249. . 240 F.3d 200, 210 (3d Cir. 2001).
  250. . 521 U.S. 844, 844, 870–71 (1997).
  251. . See generally Protecting Speech from Government Interference and Social Media Bias, Part 1: Twitter’s Role in Suppressing the Biden Laptop Story, Before the H. Comm. on Oversight & Accountability, 118th Cong. (2023) (“examining the coordination between the Federal Government and Big Tech to restrict protected speech and interfere in the democratic process” within context of “the Biden Laptop Story”).
  252. . See infra Part IV (discussing controversy over 47 U.S.C. § 230).
  253. . See Fact Sheet: DHS Internal Working Group Protects Free Speech and Other Fundamental Rights When Addressing Disinformation That Threatens the Security of the United States, Homeland Sec. (May 2, 2022), https://www.dhs.gov/news/2022/05/02/fact-sheet-dhs-internal-working-group-protects-free-speech-other-fundamental-rights [https://perma.cc/4Y9R-RH6L] (citing disinformation spread by cartels and coyotes to persuade people to cross the border; false information and scams during natural disasters; and foreign disinformation targeting critical infrastructure).
  254. . See Amanda Seitz & Nomaan Merchant, DHS Disinformation Board’s Work, Plans Remain a Mystery, AP News (May 5, 2022, 7:36 AM), https://apnews.com/article/russia-ukraine-europe-united-states-freedom-of-speech-alejandromayorkas69f658351103d4d049083ad20a713e2a [https://perma.cc/UZ3R-R69T] (reporting on controversy); Rob Garver, US Homeland Security’s ‘Disinformation Governance Board’ Assailed by Lawmakers, VOA (May 5, 2022, 2:55 AM), https://www.voanews.com/a/homeland-security-s-disinformation-governance-board-assailed-by-lawmakers-/6557453.html [https://perma.cc/SLG3-RQCC].
  255. . Geneva Sands, DHS Shuts Down Disinformation Board Months After Its Efforts Were Paused, CNN (Aug. 24, 2022 10:46 PM), https://www.cnn.com/2022/08/24/politics/dhs-disinformation-board-shut-down/index.html [https://perma.cc/5V7W-QRYJ].
  256. . See supra Section III.I (describing Chinese techniques for censoring the Internet).
  257. . See Les Législateurs Européens Doivent Rejeter Cette Proposition de Scanner les Chats Privés [EU Lawmakers Must Reject This Proposal to Scan Private Chats], Elec. Frontier Found. (Mar. 20, 2023), https://www.eff.org/deeplinks/2022/10/eu-lawmakers-must-reject-proposal-scan-private-chats [https://perma.cc/37F5-JEY7] (reporting on proposal before the European Commission to mandate scanning of every private message, photo, and video and checking against databases of images of child abuse); Kelvin Chan & Raf Casert, EU Law Targets Big Tech over Hate Speech, Disinformation, AP News (Apr. 23, 2022, 9:26 AM), https://apnews.com/article/technology-business-police-social-media-reform52744e1d0f5b93a426f966138f2ccb52 [https://perma.cc/C5QB-XKS2] (reporting on Digital Services Act, under which governments could ask for removal of a wide range of content deemed illegal, such as material that promotes terrorism, child sexual abuse, hate speech, and commercial scams); Regulation 2022/2065, of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC, O.J. (L 277) 1, 12 (giving examples of illegal content).
  258. . Beina Xu & Eleanor Albert, Media Censorship in China, Council on Foreign Rels. (Feb. 17, 2017), https://www.cfr.org/backgrounder/media-censorship-china [https://perma.cc/P56U-HBCH].
  259. . Id.
  260. . See “Race to the Bottom”: Corporate Complicity in Chinese Censorship: II. How Censorship Works in China: A Brief Overview, Hum. Rts. Watch (Aug. 2006), https://www.hrw.org/reports/2006/china0806/3.htm [https://perma.cc/44DL-TAKT].
  261. . Regulation 2022/2065, supra note 257, at paras. 5, 29 (applying regulation to “mere conduit” services and listing “internet exchange points, wireless access points, virtual private networks, DNS services and resolvers, top-level domain name registries, registrars, certificate authorities that issue digital certificates, voice over IP and other interpersonal communication services” as activities that might be covered).
  262. . See generally Am. Trucking Ass’n, Inc. v. United States, 755 F.2d 1292, 1294 (7th Cir. 1985) (noting movement toward deregulation); Michael Conant, Railroad Consolidations and the Antitrust Laws, 14 Stan. L. Rev. 489, 517–18 (1962) (criticizing ICC’s consolidation decisions). 

    The costs of complying with uniform standards, whether those of the Environmental Protection Agency or of the Interstate Commerce Commission, are similar in character. They often render marginal firms unprofitable, discourage new entry and investment, stifle innovation, regressively tax consumers when passed on in product prices, diminish consumer choice, surround capital investment decisions with great uncertainties, and violate norms of horizontal equity. This is not to deny that social regulation often generates benefits; doubtless it does, although available data do not establish conclusively or even persuasively that benefits always exceed costs. The point here is that those costs are no less problematic simply because the regulation is social rather than economic. If anything, the contrary is true.

Peter H. Schuck, Book Review: The Politics of Regulation, 90 Yale L.J. 702, 709–10 (1981) (internal footnotes omitted).

  263. . See 29 U.S.C. § 1132.
  264. . See Privacy and Security Enforcement, supra note 153.
  265. . Zakrzewski et al., supra note 186.
  266. . Gabby Miller, Transcript: US Senate Judiciary Hearing on Oversight of A.I., TechPolicy.Press (Sept. 13, 2023), https://techpolicy.press/transcript-us-senate-judiciary-hearing-on-oversight-of-a-i/ [https://perma.cc/Z4YN-X4PG].
  267. . See infra Part IV (regarding Brookings Institution report).
  268. . But see Bertolini, supra note 164 (arguing that robots already are regulated, by tort law if nothing else, and that the only legitimate question is how they should be regulated).
  269. . Ryan Calo, Brookings Inst., The Case for a Federal Robotics Commission 3 (2014) [hereinafter Calo, Case for Commission], https://www.brookings.edu/wp-content/uploads/2014/09/RoboticsCommissionR2_Calo.pdf [https://perma.cc/8CQG-GSBE] (proposing federal “agency dedicated to the responsible integration of robotics technologies into American society”); see also Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Cal. L. Rev. 513, 555–58 (2015) [hereinafter Calo, Robotics].
  270. . Calo, Case for Commission, supra note 269, at 3.
  271. . Id. at 6–10.
  272. . See id. at 11–12.
  273. . See Calo, Robotics, supra note 269, at 559.
  274. . Id. at 556.
  275. . See generally Fed. Commc’ns Comm’n, A Short History of Radio 4 (2003–2004), http://transition.fcc.gov/omd/history/radio/documents/short_history.pdf [https://perma.cc/R4A5-4ZDP] (providing timeline of radio history and listing 1927 as the year “[t]he Federal Radio Commission [was] established to bring order to chaotic airwaves”); Radio Chaos to End Tomorrow Night, Evening Star, Apr. 22, 1927, at 2, https://chroniclingamerica.loc.gov/lccn/sn83045462/1927-04-22/ed-1/seq-2/ [https://perma.cc/TS95-L8KR] (describing the problem of “wave jumping” and Commission efforts to clear the “chaos” by assigning frequencies).
  276. . See Ashleigh Garrison, What Americans Think About Creating a New Federal Agency to Oversee the Robots, CNBC (June 30, 2018, 11:00 AM), https://www.cnbc.com/2018/06/29/what-americans-think-about-a-new-federal-agency-to-oversee-robots.html [https://perma.cc/FK7D-CBHB] (reporting results of a poll showing 61% of respondents are “very uncomfortable or somewhat uncomfortable” with robots, but only 32% support creation of a Federal Robotics Commission).
  277. . National Artificial Intelligence Initiative Act of 2020, H.R. 6395, 116th Cong. § 5105(a), (c) (2020).
  278. . Id.
  279. . Zakrzewski et al., supra note 186 (“Altman advocated a number of regulations—including a new government agency charged with creating standards for the field—to address mounting concerns that generative AI could distort reality and create unprecedented safety hazards.”).
  280. . Oversight of A.I.: Rules for Artificial Intelligence, Before the Subcomm. on Priv., Tech., & the L. of the S. Comm. on the Judiciary, 118th Cong. (2023) (written testimony of Gary Marcus, Professor Emeritus, N.Y.U.) [hereinafter Marcus testimony], https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Testimony%20-%20Marcus.pdf [https://perma.cc/J989-XHQQ].
  281. . The quoted language is derived from 42 U.S.C. § 2000e-2(k)(1)(A) (allocating burdens of proof in disparate impact employment discrimination cases).
  282. . See The Robot That Takes Your Job Should Pay Taxes, Says Bill Gates, Quartz (Feb. 17, 2017), https://qz.com/911968/bill-gates-the-robot-that-takes-your-job-should-pay-taxes/ [https://perma.cc/HEP3-FUSN] (reporting on an interview with Bill Gates in which he proposed a tax to slow the pace of automation).
  283. . See Eduardo Porter, Don’t Fight the Robots. Tax Them., N.Y. Times (Feb. 23, 2019), https://www.nytimes.com/2019/02/23/sunday-review/tax-artificial-intelligence.html [https://perma.cc/R5VE-LLC6] (assessing Gates proposal to tax automation in light of tax incentives to automate; acknowledging difficulties of a direct robot tax but suggesting that tax incentives for automation might be reduced instead).
  284. . See Paul Ericksen, A Robot Tax Is a Very Bad Idea, IndustryWeek (Sept. 20, 2019), https://www.industryweek.com/technology-and-iiot/article/22028269/a-robot-tax-is-a-very-bad-idea [https://perma.cc/4L6C-CRRB] (arguing that the United States lags in automation and that taxing robots will increase the lag).
  285. . Musk has not proposed a tax but has proposed a universal basic income to offset job displacement effected by artificial intelligence. See Catherine Clifford, Elon Musk Says Robots Will Push Us to a Universal Basic Income—Here’s How It Would Work, CNBC make it (Nov. 21, 2016, 9:33 AM), https://www.cnbc.com/2016/11/18/elon-musk-says-robots-will-push-us-to-a-universal-basic-income-heres-how-it-would-work.html [https://perma.cc/85LF-ZZYC].
  286. . See Lawrence Hodge, EVs Are Going to Cost Jobs Across the Auto Industry, Jalopnik (Aug. 15, 2022), https://jalopnik.com/evs-are-going-to-cost-jobs-across-the-auto-industry-1849413566 [https://perma.cc/WHK6-EP84]. But see Milton Ezrati, A Robot Tax Will Help No One and Hurt Many, Forbes (Oct. 27, 2019, 6:33 PM), https://www.forbes.com/sites/miltonezrati/2019/10/27/a-robot-tax-will-help-no-one-and-hurt-many/#35b16133779f [https://perma.cc/LYH9-XPJV] (challenging proposals by Bill de Blasio and others for a tax on robots that displace workers, as determined by a new “Federal Automation and Worker Protection Agency (FAWPA)”; marshaling historical evidence that innovation does not result in net job loss because creation of new jobs compensates for those lost).
  287. . Richard Rubin, The ‘Robot Tax’ Debate Heats Up, Wall St. J.: The Future of Everything (Jan. 8, 2020, 10:00 AM), https://www.wsj.com/articles/the-robot-tax-debate-heats-up-11578495608 [https://perma.cc/84CF-5HDS] (quoting and summarizing Dean Baker, co-founder of the Center for Economic and Policy Research, a left-of-center think tank).
  288. . Id.
  289. . NIST Spec. Publ’n 1270, supra note 130, at 36–37.
  290. . Altman testimony, supra note 93.
  291. . Deirdre K. Mulligan & Kenneth A. Bamberger, Procurement as Policy: Administrative Process for Machine Learning, 34 Berkeley Tech. L.J. 773, 773–74 (2019).
  292. . See id. at 834–35.
  293. . Id. at 846–50 (original capitalization omitted).
  294. . Lee Hamilton, Compromise Is the Essence of Our Democracy, Ind. Univ. Ctr. on Representative Gov’t (June 5, 2019), https://corg.iu.edu/programs/hamilton-views/comments-oncongress/Compromise%20is%20the%20Essence%20of%20Our%20Democracy%20.html [https://perma.cc/ZH7M-J54S].
  295. . See The Federalist No. 10 (James Madison).
  296. . 47 U.S.C. § 230.
  297. . See David S. Ardia, Free Speech Savior or Shield for Scoundrels: An Empirical Study of Intermediary Immunity Under Section 230 of the Communications Decency Act, 43 Loy. L.A. L. Rev. 373 (2010); Zeran v. Am. Online, Inc., 129 F.3d 327, 332 (4th Cir. 1997) (finding that “AOL falls squarely within [§ 230’s] traditional definition of a publisher and, therefore, is clearly protected by § 230’s immunity.”).
  298. . 47 U.S.C. § 230(c).
  299. . Id. § 230(f)(2).
  300. . Id. § 230(f)(3).
  301. . See Gilmore v. Jones, 370 F. Supp. 3d 630, 662 (W.D. Va. 2019) (finding that posters of derogatory articles were information content providers, not entitled to section 230 immunity).
  302. . See Baldino’s Lock & Key Serv. v. Google, Inc., 88 F. Supp. 3d 543, 546–47 (E.D. Va. 2015) (finding Google to be immune under section 230).
  303. . 461 F. Supp. 3d 938, 945 (N.D. Cal. 2020), aff’d, 851 F. App’x 724 (9th Cir. 2021).
  304. . Id. at 949.
  305. . See id. at 952–53.
  306. . See id. at 955.
  307. . Id. at 953 (quoting Fair Hous. Council v. Roommates.com, LLC, 521 F.3d 1157, 1170–71 (9th Cir. 2008) (en banc)).
  308. . Stratton Oakmont, Inc. v. Prodigy Servs. Co., No. 31063/94, 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995).
  309. . Fair Hous. Council, 521 F.3d at 1163 (finding section 230 inapplicable to the defendant in Fair Housing because its search features enabled the display of certain content). “Where it is very clear that the website directly participates in developing the alleged illegality—as it is clear here with respect to Roommate’s questions, answers and the resulting profile pages—immunity will be lost.” Id. at 1174.
  310. . See Lewis, 461 F. Supp. 3d at 954–55. The court’s analysis in this regard is misplaced. Section 230 says that eligible intermediaries shall not be treated as publishers. Newspaper publishers are liable for third-party content they publish, subject to First Amendment limitations.
  311. . Id. at 955–56.
  312. . Id. at 956.
  313. . Id. at 957–61.
  314. . Id. at 961–62.
  315. . Id. at 962–63.
  316. . See Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand, EFF (Oct. 26, 2020), https://www.eff.org/deeplinks/2020/10/content-moderation-and-us-election-what-ask-what-demand [https://perma.cc/3RD9-NWQF].
  317. . Press Release, Mark R. Warner, U.S. Sen., Legislation to Reform Section 230 Reintroduced in the Senate, House (Feb. 28, 2023), https://www.warner.senate.gov/public/index.cfm/2023/2/legislation-to-reform-section-230-reintroduced-in-the-senate-house [https://perma.cc/V2XM-XP6Z].
  318. . Senator Hawley Introduces Legislation to Amend Section 230 Immunity for Big Tech Companies, Josh Hawley U.S. Sen. for Mo. (June 19, 2019), https://www.hawley.senate.gov/senator-hawley-introduces-legislation-amend-section-230-immunity-big-tech-companies [https://perma.cc/AKY2-YGDK].
  319. . Mark MacCarthy, A Dispute Resolution Program for Social Media Companies, Brookings (Oct. 9, 2020), https://www.brookings.edu/research/a-dispute-resolution-program-for-social-media-companies/ [https://perma.cc/CX25-XU57].
  320. . Id.
  321. . Id.
  322. . Id.; Platform Accountability and Consumer Transparency Act, S. 4066, 116th Cong. (2020).
  323. . S. 4066 § 5(a).
  324. . See id. § 5(b).
  325. . Id. § 5(c)(2)(A).
  326. . Id.
  327. . Id. § 5(g)(1)(A).
  328. . Id. § 5(i).
  329. . Id. § 6(a).
  330. . Id. § 6(b)(5)–(6) (amending Federal Communications Act).
  331. . Zakrzewski et al., supra note 186.
  332. . See Cary Coglianese et al., Seeking Truth for Power: Informational Strategy and Regulatory Policymaking, 89 Minn. L. Rev. 277, 281–82 (2004) (explaining that government regulation is justified by three types of market failure, including lack of competition and externalities); see also James A. Henderson, Jr., Learned Hand’s Paradox: An Essay on Custom in Negligence Law, 105 Cal. L. Rev. 165, 175 (2017) (explaining that judicial review or regulation can check some kinds of market failure).
  333. . See Perritt, supra note 30, at 193 (analyzing railroad, Teamster-union, and major-trucker lobbying to regulate cattle haulers).
  334. . See Henry H. Perritt, Jr., Don’t Burn the Looms—Regulation of Uber and Other Gig Labor Markets, 22 SMU Sci. & Tech. L. Rev. 51, 80–81 (2020) (describing political pressure to regulate ride-hailing enterprises to protect legacy industries).
  335. . See Devin Coldewey, AI Desperately Needs Regulation and Public Accountability, Experts Say, TechCrunch (Dec. 7, 2018, 4:44 PM), https://techcrunch.com/2018/12/07/ai-desperately-needs-regulation-and-public-accountability-experts-say/ [https://perma.cc/J2SU-MZV9] (“Artificial intelligence systems and creators are in dire need of direct intervention by governments and human rights watchdogs.”).
  336. . “Fundamentally, regulations should address risks—to health and safety, to the environment, to the economy, to consumers, etc.—and their causes. Rules and procedures that are based on science, focused, and proportionate are more effective, and less costly.” Org. for Econ. Coop. & Dev., OECD Regulatory Policy Outlook 2021, Risk-Based Regulation: Making Sure That Rules Are Science-Based, Targeted, Effective and Efficient (2021), https://www.oecd.org/gov/regulatory-policy/chapter-six-risk-based-regulation.pdf [https://perma.cc/Z7DV-L6D8].
  337. . Cary Coglianese, What Does Risk-Based Regulation Mean?, The Regul. Rev. (July 8, 2019), https://www.theregreview.org/2019/07/08/coglianese-what-does-risk-based-regulation-mean/ [https://perma.cc/P7PN-VEGE] (distinguishing between low-level and high-level risks as regulatory targets).
  338. . Oversight of A.I.: Rules for Artificial Intelligence, Before the Subcomm. on Priv., Tech., & the L. of the S. Comm. on the Judiciary, 118th Cong. (2023) (written testimony of Christina Montgomery, Chief Privacy and Trust Officer, IBM) [hereinafter Montgomery testimony], https://www.judiciary.senate.gov/imo/media/doc/2023-05-16%20-%20Testimony%20-%20Montgomery.pdf [https://perma.cc/EXR8-CEW7].
  339. . See Occupational Safety & Health Admin., U.S. Dep’t of Lab., OSHA Technical Manual (OTM) Section IV: Chapter 4, https://www.osha.gov/otm/section-4-safety-hazards/chapter-4 [https://perma.cc/AW4M-AWD4].
  340. . Id. § 6(E).
  341. . See AI RMF 1.0, supra note 144.
  342. . Id. at 1.
  343. . Id. at 2–3.
  344. . See Perritt, supra note 123 (manuscript at 25–26).
  345. . See Perritt, supra note 124 (manuscript at 86–87).
  346. . See id. (manuscript at 40).
  347. . See Why Did the Luddites Protest?, Nat’l Archives, https://www.nationalarchives.gov.uk/education/resources/why-did-the-luddites-protest/ [https://perma.cc/QG4Y-5JSJ].
  348. . See A History of Steamboats, U.S. Army Corps of Eng’rs 2, https://www.sam.usace.army.mil/Portals/46/docs/recreation/OP-CO/montgomery/pdfs/10thand11th/ahistoryofsteamboats.pdf [https://perma.cc/49UE-9BME] (explaining why steamboats replaced flatboats); Charles Henry Ambler, A History of Transportation in the Ohio Valley 150–51 (1932) (explaining how steamboats overcame opposition of those dependent on flatboats and canalboats).
  349. . See A History of Steamboats, supra note 348, at 6 (explaining why railroads replaced steamboats); Ambler, supra note 348, at 185–86 (reporting the demise of steamboats because of railroads).
  350. . Emergency Bd. No. 138, Report to the President 3 (1961), as digitized at HathiTrust (Mar. 14, 2020, 9:36 PM), https://babel.hathitrust.org/cgi/pt?id=hvd.hl26ji&view=1up&seq=11 [https://perma.cc/6BJH-DQCP] (reporting 35,000 national members of Order of Railroad Telegraphers).
  351. . See Greg Daugherty, The Rise and Fall of the Telephone Operators, History (June 1, 2023), https://www.history.com/news/rise-fall-telephone-switchboard-operators [https://perma.cc/DEL2-URK3] (reporting employment decline from peak of 235,000 to 5,000).
  352. . See Henry H. Perritt, Jr., Job Training Mythologies: Stitching up Labor Markets, 98 Neb. L. Rev. 795, 796 (2020) [hereinafter Perritt, Job Training Mythologies]; Perritt, supra note 334, at 53.
  353. . See Perritt, Job Training Mythologies, supra note 352, at 799–801 (explaining Schumpeter’s theory of Creative Destruction; technology creates new jobs as it destroys old).
  354. . See James Q. Wilson, The Politics of Regulation, in The Politics of Regulation 357, 368 (James Q. Wilson ed., 1980); Perritt, supra note 334, at 80–81 (discussing politics of regulating labor market disruption).