Write-up Article

Artificial intelligence: India lacks clear IP laws around AI results

With recent advancements in artificial intelligence technologies such as ChatGPT, the days of sifting through several Google results for a single query are long gone; instead, material tailored to the user's query is provided directly. ChatGPT, an AI model created by OpenAI, is designed for conversational exchanges: it can produce content, answer questions, provide clarifications, and hold interactive text-based discussions on a wide variety of topics. With the launch of ChatGPT, the debate over copyright and AI-generated material has been revived, raising several issues. The copyright rules now in place in India do not appear sufficiently developed to protect or register AI-generated works or to specifically recognize AI as an author. ChatGPT itself acknowledges the intricacy of ownership under Indian law of content created by artificial intelligence: there are no specific laws in India that address who owns such content.

Is it possible to obtain copyright for AI-generated content?

On the question of whether content produced by AI may be protected by copyright, current copyright law provides that the author is the first owner of copyright in a work. The Copyright Act, 1957 makes no mention of AI-generated works and does not acknowledge AI as an author. A major restriction is that a work must be original to be eligible for copyright protection; because AI depends on pre-existing online sources and the data supplied during training, content created by AI may not meet that standard of originality. As for computer-generated works: in 1994, the Copyright Act was amended to cover computer-generated artistic, dramatic, musical, and literary works, and the newly introduced Section 2(d)(vi) defines the author of such a work as "the person who causes the work to be created."


Recently, The New York Times Company filed a lawsuit against Microsoft and OpenAI (the creators of the well-known AI model ChatGPT), alleging that their large language models ("LLMs") were built by copying and using millions of copyrighted news articles, in-depth analyses, opinion pieces, reviews, how-to guides, and other Times content. The U.S. District Court for the Southern District of New York has yet to decide whether an individual or organization may be identified as the creator of the content produced by ChatGPT in such circumstances.

Where Generative AI Fits into Today's Legal Landscape

Despite being relatively new to the market, generative AI is significantly affected by existing law, and courts are currently deciding how that law should be applied. Users can prompt these tools with direct references to other creators' copyrighted and trademarked works by name without obtaining permission, which raises concerns about infringement and rights of use, ownership of AI-generated works, and unlicensed content in training data. Lawsuits have already been filed over such accusations. In Andersen v. Stability AI et al., filed in late 2022, three artists brought a class action claiming that generative AI platforms had used their original works without permission to train AI in their styles, enabling users to generate works that might not be sufficiently transformative from the existing, protected works and would therefore amount to unauthorized derivative works. Substantial infringement penalties may be imposed if a court finds that the AI's outputs are unauthorized derivatives. Copyright law and technology have collided before: Google successfully defended a lawsuit over scraping book text to build its search engine by arguing that the use was transformative, and that ruling remains a guiding precedent for the time being.


Liability and Infringement

Determining who is liable for works created by AI is difficult; it requires understanding the respective roles of AI systems, users, and developers. Both the producers and the consumers of AI-created work must comply with copyright rules. Identifying the true copyright owner, however, becomes difficult if an AI system produces material without human input. Because AI is not considered to have legal personality, problems can arise when it infringes copyright: although AI is not recognized as a legal entity, it is normally people who are held accountable for infringement under the Copyright Act. Precise frameworks that assign accountability to the developers, owners, or operators of AI are needed to address these liability issues.

Several nations, including Ireland, New Zealand, and India, assign copyright in computer-generated works to the AI system's programmer, an approach that acknowledges the programmer's creative effort as the reason the AI's output exists. India has recently taken a more permissive line, recognizing the AI tool RAGHAV as co-author of its creation, Suryast, with the tool's owner named as the other co-author. Going further, some argue that if an AI system independently generates a completely original work, it should be treated as the author and hold sole copyright ownership. A different viewpoint holds that AI-generated works should be open and publicly owned, much like works under Creative Commons licenses. Although this strategy benefits the general public, it can deter tech corporations from funding AI initiatives if they cannot profit from the works generated.

Legal Scenario in India

In India, creative works are governed by the Copyright Act, 1957. When it comes to AI-generated art, the Act is not inclusive: Section 2(d) defines the "author" as the person, whether a human or a legal person, who causes the work to be created, and AI systems do not qualify as authors under this definition. Indian courts have reaffirmed this stance in several rulings, making it clear that AI systems cannot be regarded as authors of works protected by copyright.


The legal idea of fair use, adopted from the United States, permits limited use of copyrighted content without permission under certain conditions. The purpose, nature, amount, and effect of the use must all be taken into account when determining whether AI-generated work constitutes fair use.

Although India has no single legislation governing AI, the subject is touched upon by several other laws and regulations. A draft National Strategy on Artificial Intelligence, published by the Ministry of Electronics and Information Technology in 2020, provides a framework for policy development in the field. Further rules include the Information Technology Act, 2000, the Right to Information Act, 2005, and the forthcoming Digital Personal Data Protection Bill, 2022. The Digital Personal Data Protection Bill, 2022 governs the handling of personal data and mandates that AI systems be auditable, transparent, explainable, and free of bias. Under the Information Technology Act, 2000, intermediaries are not allowed to host, publish, or distribute harmful or defamatory content. Although several legislative frameworks have been developed to prevent bias in AI, there are substantial barriers to their enforcement. First, the absence of specific legislation addressing AI creates a regulatory vacuum that prevents AI developers from being held responsible for the biases in their products. Moreover, resources and expertise are insufficient to effectively assess and track the biases of AI systems. Furthermore, the opaque nature of AI decision-making makes it difficult to recognize and address biases in these systems. Lastly, stakeholders such as lawmakers, attorneys, and AI developers need to be made aware of the need to reduce AI bias.

Conclusion

The increasing use of AI in India has raised concerns about potential bias and discrimination in decision-making. A lack of diversity in training data, developer prejudices, and historical data are among the factors that can lead to AI bias. Several Indian laws touch on AI, but a lack of specialized resources and stakeholder awareness makes these rules difficult to enforce. India may address this by forming inclusive and diverse teams, maintaining transparency and accountability in AI decision-making procedures, and carrying out regular audits and assessments to find and remove bias.
To build ethical and responsible AI and deploy it in India within a legal framework, the Indian government must intervene and adopt comprehensive legislation and guidelines. The rise of ChatGPT raises serious intellectual property issues, and copyright rules may need to be amended to handle the particular difficulties that AI technology presents. Until clearer regulations are created, the legal ramifications of using such tools will likely remain complicated and uncertain.
