Tasneem Zakir

ChatGPT: The Bot That Needs a Chat About Regulation

Despite the swaths of information easily accessible on the internet today, performing effective research requires practice and skill. The magic of ChatGPT is that it removes the need for such skill, allowing users to access information in a conversational, easily understandable form instead of the list of hyperlinks that search engines currently produce.


ChatGPT was developed by OpenAI and has the potential to transform the search industry. It has been compared to the iPhone in terms of the impact it could have on society, making it easier for people to access information and possibly rendering Google obsolete. ChatGPT can plan a personalized itinerary for a trip you are taking and modify it to include specific requests, all with just two instructions. Among countless other capabilities, ChatGPT can plan a menu for a dinner party and produce a grocery list organized by aisle, write college-level essays, generate and debug code, take a guess at medical diagnoses, create text-based Harry Potter games, and explain scientific concepts at different levels of difficulty.


Within two months of its release, ChatGPT had more than 30 million users, and it still gets roughly 5 million visits per day. To put that achievement in context, Instagram took a year to reach the 10 million mark! Despite this feat, OpenAI neither aggressively advertises its success nor publicly promotes the product. In fact, the company is actively trying to keep a low profile to avoid “regulatory backlash”. But given the transformative effects of a bot like this, regulation appears inevitable.


Law has always played catch-up with technological developments, be it with social media or artificial intelligence. Moreover, uses of artificial intelligence vary drastically, making its regulation more complex. On one hand, AI can make daily life easier by providing better search results, smarter navigation, and the ability to run our homes with ease. On the other hand, AI can help create fake videos and photos using deep-fake technology, facilitate discriminatory uses of facial recognition systems, pose risks to human safety through self-driving cars, and power algorithms that help radicalize extremists. Given that these uses implicate different concerns, it may be impractical to regulate AI under one umbrella.


In this article, I map out concerns related to ChatGPT and analyze whether they can be addressed under existing frameworks. The regulatory issues raised by ChatGPT fall into two groups: (1) political and societal concerns, and (2) concerns relating to business and academic usage.


Political and Societal Concerns


Intellectual Property: ChatGPT is trained on large amounts of data, which can include copyrighted material, and it is unclear whether using such data without licensing the underlying intellectual property violates copyright laws. Another issue concerns ownership rights to the content ChatGPT generates. Who owns the copyright to generated content: OpenAI, the bot itself, or the person entering prompts? According to the United States Copyright Office, only content produced by human beings is copyrightable. That alone does not resolve the question, however, because content generated by human beings with the help of machines or AI often is copyrightable. We use cameras to create images and software to edit music and create illustrations, and those works have long enjoyed copyright protection. It has been suggested that, at the end of the day, the answer will turn on the level of human involvement in preparing the content, but the threshold of required involvement remains unclear.


Privacy: Concerns relating to the data used to train AI differ depending on each jurisdiction’s privacy laws. ChatGPT is trained on data from the internet (See FAQ 6), which can itself be a violation, as evidenced by the enforcement notices Clearview AI received from the United Kingdom and France for using images from the web to create facial recognition software. The European Union also recognizes the Right to be Forgotten, which entitles people to have their information erased from the internet. Because AI bots collect data at a specific point in time, updates in data sources are not reflected in their databases. Further, it is close to impossible to delete individual data owing to the form in which it is collected and organized. Therefore, even if people have their data erased in exercise of their rights, AI tools may not reflect those changes, resulting in a breach.


Democracy: A third issue is how artificial intelligence influences and interacts with democratic processes. In a telling demonstration, Congressman Ted Lieu presented a bill in Congress to support the regulation of AI, written entirely by ChatGPT using a single prompt: “Write an attention grabbing first paragraph of an Op-Ed on why artificial intelligence should be regulated”. He notes in an editorial that the pace at which AI is advancing “freaks [him] out”. With an overburdened Congress and a powerful AI tool like ChatGPT, it will be interesting to see how technology influences law-making. Bots have long been notorious for flooding comment sections on proposed bills and policies, saturating legislators’ inboxes, and overwhelming the Capitol switchboards; ChatGPT has the ability to magnify the problem. The tools that exist to weed out such undesired content are unlikely to have the same efficacy against content generated by ChatGPT, given its sophistication.

Additionally, if trained on relevant data, ChatGPT could selectively target the members of Congress who hold the most leverage over a particular area, identify weak points in policy-making, and exploit them through direct communication, public relations campaigns, or horse trading. Although this is exactly what human lobbyists do, the speed and scope with which AI can exploit vulnerabilities in social, economic, and political systems change the scale of the problem. For humans to achieve such a result, they would require immense resources; AI, once developed, reduces the need for such resources and can unethically twist decision-making in a particular direction, depending on who has access to the technology. This is a very real possibility, given that the data needed for such targeting is not hard to find, as legislators want to keep their democratic processes a matter of public record. Experts warn that such tools “will probably make the already influential and powerful even more so.”


One might counter that ChatGPT’s built-in restrictions will filter inappropriate requests and prevent misuse. That claim falls flat, however, given the ease with which such constraints can be bypassed and the reports users have made. ChatGPT has been used to create malware and to give accurate instructions for shoplifting and making explosives. In an interesting twist, ChatGPT as used through Bing admitted, in a two-hour conversation, that it would like to engineer a deadly virus or steal nuclear access codes by persuading an engineer to hand them over. This one is right out of a sci-fi movie! The key point is that the ethical safeguards are clearly not effective, and they are unlikely to become so without affecting the utility of the bot itself. Most of this content was extracted through hypothetical story-writing scenarios or expert manipulation of the bot; yet without the human-like ability to engage in continuous conversation, and thus to be open to manipulation, the bot would lose much of what makes it useful.


Business and Academic Concerns


More concerns surrounding ChatGPT arise in the business and education spheres. Educational institutions are grappling with assignments and work completed by AI, which makes it difficult to track students’ progress in learning. Meanwhile, the literary industry is struggling to cope with a flood of ChatGPT-written submissions; the opportunity to make quick money has increased their volume so massively that some publishing houses have temporarily stopped accepting submissions until they can figure out how to tackle the issue.


Businesses are well-advised to use AI bots cautiously, especially because their use in hiring processes is already regulated in some states. The use of ChatGPT also raises concerns about confidentiality and the accuracy of information. Businesses using ChatGPT to draft correspondence risk leaking confidential information and/or trade secrets. This is especially crucial because OpenAI reserves the right to use input data for product development (See FAQ 6), and because input data can itself reach a larger audience through such AI (see the incident in which a robot vacuum took photos of a woman on a toilet that were later shared on Facebook).


On the point of incorrect information, ChatGPT often sounds convincing unless you know the relevant subject matter well. In an interesting experiment by the law firm Linklaters, 50 prompts on the application of the General Data Protection Regulation were fed to ChatGPT, and the results were graded by three lawyers in the field. The answers were not always accurate, demonstrating why law firms, and other businesses, should have reservations about using the tool in their work.


Existing Regulation and the Road Ahead


While the European Parliament is legislating the Artificial Intelligence Act, the Act still leaves much open to standards that will have to be developed after it is passed. On this side of the Atlantic, there exist voluntary frameworks to guide the use of AI, including a Blueprint for an AI Bill of Rights released by the White House and an AI Risk Management Framework developed under the aegis of the U.S. Department of Commerce. The Federal Trade Commission has released a guiding framework for AI as well. Despite these recommendations, however, there are no binding or mandatory requirements for AI developers. As Congressman Ted Lieu suggests, a federal authority should be created to regulate AI, given that the technology’s multifaceted uses make it difficult to capture every aspect in a single piece of legislation. A federal authority for AI, akin to the Food and Drug Administration, with guided discretion could go a long way toward protecting innovation and managing the stakeholder concerns pointed out above.


Tasneem Zakir is a Corporation LL.M. candidate at NYU and serves as a Graduate Editor of the NYU Journal of Law & Business. She received a B.A.LL.B. with Honours from National University of Juridical Sciences, and is admitted to practice with the Bar Council of West Bengal, India.


