Legal Aspect of ChatGPT

August 7, 2023

The launch of ChatGPT by OpenAI in 2022 initiated a transformative phase in the domain of
Artificial Intelligence (AI). AI language models like ChatGPT exemplify this profound shift, given
their ability to generate human-like text and their wide-ranging applications in customer service and
content creation.[1] Nevertheless, as AI technology progresses, the legal implications associated with
its usage also evolve.


There are various ways in which an individual may initiate legal action in connection with ChatGPT.
The most frequent basis for such claims has been the infringement of legal rights. Some of the
potential concerns are:


  • Violation of copyright
  • Data privacy breaches
  • Presence and reproduction of biased or erroneous information


Because ChatGPT was trained on an enormous dataset gathered from the Internet, tracing the
sources of its information is virtually impossible. In essence, this implies that the model itself could
potentially be responsible for various copyright infringements. This was exhibited when authors
Mona Awad and Paul Tremblay jointly filed a class action complaint for copyright infringement at
the US District Court for the Northern District of California against OpenAI, the company which
created the AI tool ChatGPT. The plaintiffs alleged that OpenAI violated copyright laws by
“training” its AI model, i.e. ChatGPT, on novels without obtaining prior authorization from the
authors.[2]


OpenAI faced a further legal challenge in the form of an extensive lawsuit alleging that its AI
models, ChatGPT and DALL-E, were trained on the data of hundreds of millions of individuals
without proper consent. The lawsuit contended that OpenAI collected personal data directly from
individuals who interacted with its AI systems and with other applications that incorporated
ChatGPT.[3] The complainants argued that such data collection and usage violated privacy laws,[4]
particularly concerning the data of children.


It is important to highlight that ChatGPT retains conversations as training data for potential use in
future models to provide a better user experience. This practice could result in legal challenges if
users input confidential or sensitive information that is later reproduced by the AI tool. For example,
if a junior lawyer were to use ChatGPT to draft a contract, personal data such as clients’ names,
addresses, and other confidential information could be saved as future training material and
subsequently surfaced to other users of ChatGPT, thereby breaching such clients’ right to privacy.


Due to the limitations of its training data and available information, ChatGPT is prone to
inaccuracies. The AI tool can only provide responses based on the information it has at a given
time, lacking a deep understanding of the subject matter. In this regard, OpenAI’s website contains
a heading titled “Limitations” which specifically states that the tool sometimes generates plausible-
sounding yet incorrect or nonsensical answers.[5] Recently, ChatGPT incorrectly labelled Australian
politician Brian Hood as a criminal.[6] In the United States, OpenAI is facing a lawsuit involving
radio host Mark Walters.[7] ChatGPT falsely stated that Walters had been accused of embezzling
funds from a non-profit organization, the Second Amendment Foundation, despite no such
accusation ever being made against him.


Furthermore, all information or data uploaded to ChatGPT, as well as any output generated by the
system, remains the property of the user, provided the user adheres to OpenAI’s terms of use. Upon
perusal of OpenAI’s Terms of Use, it is apparent that OpenAI bears no responsibility or liability
for any of ChatGPT’s outputs. The sole responsibility for any resulting output lies with the user,
including any potential liability towards OpenAI itself.


The Government of Bangladesh is yet to implement any regulations to govern the use of AI in line
with local laws. Although there are no specific laws solely dedicated to regulating AI usage, some
existing laws, such as the Digital Security Act 2018, offer guidance on regulating digital activities
and address issues like the misuse of digital devices, cyber-terrorism, hacking, and the
dissemination of false information through digital media.


As mentioned before, the sole responsibility for any resulting outputs lies with the users of
ChatGPT. Therefore, should a user employ ChatGPT in a manner resulting in any form of
infringement of rights, they shall be held accountable for such actions. Sections 25 and 26 of the
Digital Security Act 2018 address specific offences related to digital data and information. Section
25 pertains to the transmission, publication, or propagation of offensive, false, or threatening data
via digital means, while Section 26 deals with the unauthorized collection, use, or possession of
identity information without lawful authority. Complying with these legal provisions is crucial to
ensure responsible usage of ChatGPT and to avoid any legal repercussions.


Written by Ameer Faysal Rohan (Associate) and Tahia Nur (Pupil) at Vertex Chambers

† Disclaimer: The opinions and comments expressed in this Blawg are not to be regarded or
construed as legal advice by and from Vertex Chambers or any of its members. It is highly advisable
that any person seek independent legal advice before relying on any of the contents of this Blawg.
[1] Siddharth K, ‘Explainer: ChatGPT – what is OpenAI’s chatbot and what is it used for?’, Reuters,
available at
[2] Tremblay et al v. OpenAI, Inc.
[3] PM v. OpenAI LP
[4] Gerrit De Vynck, ‘ChatGPT maker OpenAI faces a lawsuit over how it used people’s data’, The
Washington Post, available at
[7] Mark Walters v. OpenAI, LLC