
6TH SURANA & SURANA & KLE LAW COLLEGE NATIONAL

CONSTITUTIONAL LAW MOOT COURT COMPETITION


2023-24

BEFORE THE SUPREME COURT OF BHARAT NADU

(CIVIL ORIGINAL JURISDICTION)

Under Article 32 of the Constitution of the Union of Bharat Nadu

Democratic Reform Project and Ors............................................................................Petitioner

V.

Union of Bharat Nadu.................................................................................................Respondent

TO THE HON’BLE CHIEF JUSTICE OF BHARAT NADU AND HIS LORDSHIP’S COMPANION JUSTICES OF THE HON’BLE SUPREME COURT OF BHARAT NADU

MEMORIAL FILED ON BEHALF OF RESPONDENT


Table of Contents

SL. NO. CONTENTS PG. NO.

1. TABLE OF CONTENTS 2

2. INDEX OF AUTHORITIES 3

3. STATEMENT OF JURISDICTION 4

4. STATEMENT OF FACTS 5

5. STATEMENT OF ISSUES 6

6. SUMMARY OF ARGUMENTS 7

7. ARGUMENTS ADVANCED 8

8. PRAYER 24


Index of Authorities

SL. NO. CASES
1. Coffin v. United States, 156 U.S. 432, 545 (1895)
2. Loomis v. Wisconsin, 881 N.W.2d 749 (Wis. 2016)
3. Quadrature du Net and Others v. Premier ministre and Others

SL. NO. STATUTES


1. European Convention on Human Rights, 1950
2. Constitution of India, 1950
3. Constitution of the United States of America, 1789
4. Treaty establishing a Constitution for Europe, 2004

SL. NO. BOOKS


1. Trial of Criminal Cases, Adv. Nayan Joshi
2. Race and Crime, Shaun L. Gabbidon & Helen Taylor Greene
3. Introduction to the International Criminal Court, William A. Schabas
4. Crime Prevention, Nick Tilley
5. Crime in India, Dr. Uttam Kumar Singh
6. Control of Organized Crime, Adv. R. Chakraborthy

SL. NO. DATABASE


1. EUR-Lex
2. US Courts.gov
3. BAILII
4. SSRN


Statement of Jurisdiction

The Union of Bharat Nadu, the respondent in this case, (Civ) No. of 2023, concerning the matter of Democratic Reform Project and Ors. v. Union of Bharat Nadu, humbly submits to the jurisdiction of the Hon’ble Supreme Court under Article 32 of the Constitution of Bharat Nadu. The present memorandum sets forth the facts, contentions, and arguments in the present case.


Statement of Facts

1. Bharat Nadu, a diverse and populous nation with limited resources to devote towards
traditional methods of law enforcement, has partnered with BNMIL in the creation of
“Yuga Drishti”, an AI algorithm that assists the authorities in minimizing crime rates,
improving public safety, saving resources, and improving decision making in the
criminal justice system.
2. Yuga Drishti is the result of an unprecedented association between luminaries of the legal
fraternity, law enforcement agencies, and the criminal and forensic sciences. The algorithm
is trained on historical criminal justice data from Bharat Nadu, including arrest records,
court documents, and other public data sources.
3. The AI has been rigorously tested and validated using the large dataset of historical
criminal justice data from Bharat Nadu with results of the tests being used to refine
and optimize the algorithm to ensure that it is highly accurate and reliable. A pilot
program using Yuga Drishti was run in three provinces of Bharat Nadu for a period of
one year and gave generally positive results.
4. AI models with bigger datasets make better decisions; hence, the government of Bharat
Nadu, in the interest of public welfare, has passed two legislations: one mandating the use
of Yuga Drishti in pre-trial detention decisions, bail hearings, and sentencing, and
another aimed at expanding the available database on delinquents. The task of
collecting, processing, preserving, storing, destroying, sharing, and disseminating the
data has been entrusted to the Crime Records Bureau of Bharat Nadu.
5. Yuga Drishti has been statistically proven to be more accurate in predicting recidivism
and has also substantially reduced administrative costs. This has allowed law
enforcement agencies and courts to allocate resources more efficiently, redirecting
funds towards crucial areas such as improved legal aid and welfare programs.
6. A group of civil rights activists (“the Democratic Reform Project”) and legal scholars
have challenged the constitutionality of the use of the AI algorithm, arguing that it
violates several fundamental rights guaranteed under the Constitution of Bharat Nadu,
and the matter is now before a constitutional bench of the Hon’ble Supreme Court.


Statement of Issues

1. Does the use of an AI algorithm in pre-trial detention decisions, bail hearings, and
sentencing violate the right to a fair trial guaranteed under Article 21 of the
Constitution of Bharat Nadu?

2. Does the use of an AI algorithm to predict criminal behavior violate the right to equality
guaranteed under Article 14 of the Constitution of Bharat Nadu by disproportionately
affecting certain communities?

3. Does the use of an AI algorithm in criminal justice decision-making raise concerns
about transparency, accountability, and due process, as guaranteed under the
Constitution of Bharat Nadu?

4. Does the use of an AI algorithm to predict criminal behavior violate the right to
privacy guaranteed under Article 21 of the Constitution of Bharat Nadu?


Summary of Arguments

1. Does the use of an AI algorithm in pre-trial detention decisions, bail hearings,
and sentencing violate the right to a fair trial guaranteed under Article 21 of the
Constitution of Bharat Nadu?
It is most humbly submitted that the Government of Bharat Nadu introduced “Yuga
Drishti” to help reduce crime rates and improve public safety. Bharat Nadu, being a
diverse and populous nation with limited resources, demands a better law enforcement
and criminal justice system. The use of AI for the same does not violate the right to a fair
trial guaranteed under Article 21, as it is a state-of-the-art system created by the nation’s
best legal and technical minds, with various fail-safes to avoid error.
2. Does the use of an AI algorithm to predict criminal behavior violate the right to
equality guaranteed under Article 14 of the Constitution of Bharat Nadu by
disproportionately affecting certain communities?
It is most humbly submitted that the AI algorithm is regularly updated and improved and
is merely a tool of suggestion, not incrimination. The tenet of equality is of great
importance to any justice system, and “Yuga Drishti” is in no way created with bias
towards any community’s benefit or at any community’s expense.
3. Does the use of an AI algorithm in criminal justice decision-making raise
concerns about transparency, accountability, and due process, as guaranteed
under the Constitution of Bharat Nadu?
It is most humbly submitted that the AI algorithm is designed with precision and
diligence. The decision-making process is based on careful evaluation of historical
criminal justice data from Bharat Nadu, including arrest records, court documents, and
other public data sources. The algorithm does not by itself convict any individual but
assists the authorities in the procedures to be followed once a crime is discovered.
4. Does the use of an AI algorithm to predict criminal behavior violate the right
to privacy guaranteed under Article 21 of the Constitution of Bharat Nadu?
The database on which the AI algorithm is based is maintained by the Crime Records
Bureau of Bharat Nadu. It is exclusive to delinquents and consists of data already
available at the disposal of various branches of the government. The AI aids the faster,
more efficient application of this data to crime prevention, thus creating a safer society
for citizens.


ARGUMENTS ADVANCED

A. Does the use of an AI algorithm in pre-trial detention decisions, bail hearings,
and sentencing violate the right to a fair trial guaranteed under Article 21 of the
Constitution of Bharat Nadu?

1. It is humbly submitted that Yuga Drishti is a tool of suggestion and not conviction. As per
the National Crime Records Bureau, prison overcrowding in 2019 was the highest in the past
10 years. The Prison Statistics India report 2021 revealed that the number of convicts in jails
decreased by 9.5 per cent, whereas the number of undertrial inmates increased by 45.8 per
cent between 2016 and 2021. This longstanding problem cannot be solved without police
and jail reforms. The most urgent of these reforms would be introducing substitutes for
money- or property-based bail systems in Indian courts. Jai Parkash, a 47-year-old differently
abled individual, spent over 22 years in judicial custody without a trial because there was no
one to furnish a Rs. 30,000 surety bond.1 An AI-assisted pretrial process would liberate
thousands, as this is a phase that involves relatively straightforward legal questions and its
outcomes are easy to measure.
2. It is a reality that many defendants are being held in jail not because of the risk they pose to
the community but because they are poor. And this is not a third-world problem. In the U.S.,
the nation with the world’s largest prison population, more than one-third of pretrial
detainees across the country are in jail because they cannot afford to post bail; and
nationwide, nine of ten felony defendants who were detained pretrial had bail set and would
have been released if they had posted it.2 The for-profit bail bond industry is a
two-billion-dollar-a-year industry that is a blot on humanity.
3. Even when tried, detention during pretrial is correlated with receiving a longer sentence. A
study using data from state courts within the U.S. found that defendants who were detained
for the entire pretrial period were at least three times more likely to be sentenced to jail or
prison3 and received significantly longer sentences than defendants who were released at
some point pending trial.

1 “Long forgotten: India’s pretrial and undertrial prisoners”, Ashutosh Sharma, The Hindu.
2 Megan Stevenson & Sandra G. Mayson, Pretrial Detention and Bail, in 3 REFORMING CRIMINAL JUSTICE:
PRETRIAL AND TRIAL PROCESSES 21, 22–23.
4. Detention during pretrial limits the ability of legal counsel to prepare properly for the
defendants’ cases in addition to the psychological and social effects on the detainee
and his/her family.4
5. AI involvement is in no way against the presumption of innocence.5 Justice Edward Douglass
White wrote in 1895, “The principle that there is a presumption of innocence in favor of the
accused is the undoubted law, axiomatic and elementary, and its enforcement lies at the
foundation of the administration of our criminal law.”6
6. In India, the decisions of a pretrial are entirely up to the discretion of the magistrate, which is
a bottleneck, as the judiciary is notoriously understaffed and overworked. Domestic
enactments like the Speedy Trial Act of 1974 of the U.S. were requisites that we needed decades
ago but never realized. Agencies to interview each person charged with any offense other
than a minor crime, verify background information, and present a report to the judicial
officer7 considering bail remain an unrealized dream, much like our directive principles.
7. Traditionally, judges relied on their own judgment and experience to assess the risk that each
defendant poses. But research shows that when judges rely on their intuition, they do not use
information reliably—they may assign weight to items that are in fact not predictive, or they
may be overly influenced by causal attributions.8
8. The U.S. was the pioneer in introducing actuarial and regression-based systems into
pretrial hearings. The Vera Point Scale, considered to be the first actuarial pretrial risk
assessment, was developed and adopted in New York City in 1961.9 Yuga Drishti is also an
amalgamation of decades of trial-and-error data collected from the many pretrial risk
assessment tools introduced by the justice systems of the world. Yuga Drishti’s source code
and training data cannot be presented before the Hon’ble Court, as that would jeopardize
decades of research and could lead to misuse of the same. But information available on its
many predecessors and beta models from around the world can provide this Court with the
necessary insight required to reach a decision.

3 Meghan Sacks & Alissa R. Ackerman, Bail and Sentencing: Does Pretrial Detention Lead to Harsher Punishment?,
25 CRIM. JUST. POL’Y REV. 59, 77.
4 Samuel R. Wiseman, Pretrial Detention and the Right to be Monitored, 123 YALE L.J. 1344, 1356.
5 Ramsingh Jalia v. State of M.P., (1996) 2 Crimes 275 (M.P.).
6 Coffin v. United States, 156 U.S. 432, 545 (1895).
7 PROBATION AND PRETRIAL SERVICES HISTORY, UNITED STATES COURTS,
http://www.uscourts.gov/services-forms/probation-and-pretrial-services/probation-and-pretrial-services-history.
8 Stephen D. Gottfredson & Laura J. Moriarty, Clinical Versus Actuarial Judgments in Criminal Justice Decisions:
Should One Replace the Other?, 70 FED. PROB. 15, 15 (2006).
9 See CYNTHIA MAMALIAN, PRETRIAL JUSTICE INST., STATE OF THE SCIENCE OF PRETRIAL RISK
ASSESSMENT 4–5 (2011), https://www.bja.gov/publications/pji_pretrialriskassessment.pdf; see also PRETRIAL
JUSTICE INST., PRETRIAL RISK ASSESSMENT: SCIENCE PROVIDES GUIDANCE ON ASSESSING
DEFENDANTS 3 (May 2015),
https://www.ncsc.org/~/media/Microsites/Files/PJCC/Pretrial%20risk%20assessment%20Science%20provides%20guidance%20on%20assessing%20defendants.ashx.
9. Studies show that 24% of pretrial agencies use tools based on static factors,
mainly criminal history; 12% rely on tools that include dynamic aspects and are based on
interviews with the defendant and data about employment, education, family status, and
the like; and 64% of pretrial agencies use tools that include a combination of the two.10
10. All of the following AI pretrial risk assessment systems were created abiding by the values of
fair trial and equality before the law. In preventive detention trials, an algorithm ranked
people as medium or high risk; thus, defendants who were charged with misdemeanors or
non-violent felony crimes were released without unnecessary procedures.11
11. The pretrial stage is unique because pretrial risk measures only two specific behaviors: court
appearance and re-arrest between initial arrest and the end of the trial. Because the aim of
the model is very specific, the result is more accurate. In contrast, determining the
appropriateness of sentences meted out requires predicting a complex set of factors, such as
long-term recidivism, with a great deal more subcategories and factors, so machine learning
algorithms will be less useful.12
12. The goal of risk assessment tools like Yuga Drishti is to improve the decision making of
judges, not to replace them; the risk to due process is manageable so long as judges
receive adequate training and gain a sophisticated understanding of how the algorithm
works.
13. Next, in order to ensure a fair trial, the difficulties the defense has to deal with in the light of
limitations on its rights have to be counterbalanced by the procedures the judicial authorities
follow (Rowe and Davis v. UK, ECtHR, 16 February 2000,13 par. 61; Doorson v. The
Netherlands, ECtHR, 26 March 1996, par. 72,14 where the Court dealt with the case of
anonymous witnesses; ECtHR 2020, p. 36). As will be explained below, promoting the use
of open-source data and codes may be a fair solution to this problem.

10 Arthur Rizer & Caleb Watney, Artificial Intelligence Can Make Our Jail System More Efficient, Equitable, and
Just, 23 TEX. REV. L. & POL. 181, 183 (2018).
11 SARAH PICARD ET AL., CTR. FOR COURT INNOVATION, BEYOND THE ALGORITHM: PRETRIAL
REFORM, RISK ASSESSMENT AND RACIAL FAIRNESS 12 (2019),
https://www.courtinnovation.org/sites/default/files/media/document/2019/Beyond_The_Algorithm.pdf.
12 Spike Bradford, (Evidence-Based, Actuarial Pretrial) Risk Assessment, U. PRETRIAL (Aug. 9, 2017),
https://university.pretrial.org/blogs/spike-bradford/2017/08/09/evidence-based-actuarial-pretrial-riskassessment.
13 ECHR 2000-II.
14. The Administrative Office of the U.S. Courts has created its own actuarial risk assessment
tool tailored to the characteristics of federal offenses, the Pretrial Risk Assessment (PTRA).
It found that the PTRA performs well at predicting pretrial violations. For example, of
defendants classified in risk category 1, only 5% violated their release with either failure to
appear or a new crime arrest. This number increased gradually as the risk categories
increased, so that in risk category 5, 36% had a violation. The tool includes 11 factors
divided into two categories: criminal history and others. In terms of gender, the PTRA was
equally accurate in its predictions for men and women.15
15. The Public Safety Assessment (PSA) is a pretrial risk assessment tool created in 2013 by the
nonprofit Laura and John Arnold Foundation. The PSA was created using a very large
dataset of over 750,000 cases drawn from more than 300 U.S. jurisdictions. It produces two
risk scores: one for failure to appear and one for committing a new crime. The PSA is
designed to be a national risk assessment tool, and to date, it has been adopted by more than
thirty-eight jurisdictions, including the states of Arizona, Kentucky, Utah, and New Jersey,
and cities like Phoenix, Chicago, and Houston. The PSA is offered for free to jurisdictions
that wish to implement it, and the foundation funds technical support to improve
implementation of the tool.16
16. Researchers have conducted validation studies involving more than 650,000 cases in
several jurisdictions, and many more are being planned. A 2018 study that examined the
validity of the PSA on a dataset from Kentucky found that, in general, its predictive validity
is aligned with what is considered the norm in criminal justice.17

14 App no 20524/92.
15 PRETRIAL SERVICES RISK ASSESSMENT (PTRA): FREQUENTLY ASKED QUESTIONS,
http://www.edwinwall.com/PTRA/Federal%20Pretrial%20Risk%20Assessment%20Instrument%20FAQ%202010.pdf.
16 Laura & John Arnold Found., Public Safety Assessment (PSA) – Background, PUBLIC SAFETY
ASSESSMENT, https://www.psapretrial.org/about/background.
17 Mathew DeMichele et al., The Public Safety Assessment: A Re-Validation and Assessment of Predictive Utility
and Differential Prediction by Race and Gender in Kentucky (Apr. 25, 2018),
https://craftmediabucket.s3.amazonaws.com/uploads/PDFs/3-Predictive-Utility-Study.pdf; Laura & John Arnold
Found., Intro, PUBLIC SAFETY ASSESSMENT, https://www.psapretrial.org/about/intro (last visited Aug. 31,
2019).


17. In terms of predictive accuracy by race, the PSA was found to be a fair predictor of new
crime arrest and new violent crime arrest, but there are disparities when it comes to
predicting failure to appear: The PSA assigns black defendants lower risk scores than
white defendants who fail to appear.
18. In another study of judges and public officials who use the PSA, 79% of judges reported that
the PSA often informed their decision making, and 61% of public officials often agreed with
its recommendations. Overall, research shows that jurisdictions that implemented the PSA
are experiencing decreases in the size of their jail populations without increasing the crime
rate.18
19. The Virginia Department of Criminal Justice Services developed the VPRAI in 2005.
Virginia has been professionally maintaining and revalidating the tool every few years. The
first revalidation study was conducted after two years of statewide use, and its purpose was
to examine whether factors that can change over time, such as crime patterns, law
enforcement practices, and demographic factors, affected the accuracy of the VPRAI. The
examination confirmed the tool’s general accuracy and led to minor revisions that were
implemented in early 2009. In 2014, a second thorough revalidation study was launched, and
in addition to examining again the impact of changing factors, it analyzed the race and
gender neutrality of the tool. The study confirmed that the VPRAI is statistically significant
in predicting failure to appear and new crime arrests.19
20. The current version of the VPRAI includes the following eight factors: (1) active
community criminal justice supervision; (2) charge is a felony drug, theft, or fraud; (3)
pending charge; (4) criminal history; (5) two or more failures to appear; (6) two or more
violent convictions; (7) unemployed at time of arrest, primary caregiver, full-time student,
or retired; and (8) history of drug abuse.20
21. Accompanying the VPRAI is Praxis, a decision grid that helps translate the VPRAI score
into the type of release and level of supervision; the VPRAI measures the risk, and Praxis
helps manage that risk. A revalidation study that analyzed the use of Praxis found that
judges using it released defendants 1.9 times more often than judges who did not use it. As a
result of the excellent documentation of all the stages of calculating the VPRAI and
developing final recommendations, as well as the extensive validation studies analyzing the
VPRAI since it was first implemented, the VPRAI has been adopted by counties in more
than twelve states or used as a model for other jurisdictions interested in implementing a
pretrial risk assessment tool.21

18 Laura & John Arnold Found., Research, PUBLIC SAFETY ASSESSMENT,
https://www.psapretrial.org/about/research (last visited Aug. 31, 2019).
19 Marie VanNostrand, Pretrial Risk Assessment—Perpetuating or Disrupting Racial Bias?, PRETRIAL JUST.
INST. (Dec. 6, 2016), http://www.pretrial.org/pretrial-risk-assessment-perpetuating-disrupting-racial-bias/.
20 VIRGINIA DEP’T OF CRIMINAL JUSTICE SERVS., VIRGINIA PRETRIAL RISK ASSESSMENT
INSTRUMENT (VPRAI) INSTRUCTION MANUAL 1–2 (2003),
http://www.pacenterofexcellence.pitt.edu/documents/VPRAI_Manual.pdf [hereinafter VPRAI INSTRUCTION
MANUAL].
22. The Colorado Pretrial Risk Assessment Tool (CPAT) was developed in 2013 as part of
the Colorado Pretrial Reform Act. Data collected from the city of Denver show an
increase in release without money bail.22
23. The Ohio Pretrial Assessment Tool (PAT) is part of the Ohio Risk Assessment System
(ORAS), begun in 2006 in a collaboration between the Ohio Department of
Rehabilitation and Correction and the University of Cincinnati Center for Criminal
Justice Research.
24. The interviews identified more than 100 potential factors that could be included in the tool,
and eventually, seven items were selected: (1) age at first arrest, (2) number of failure to
appear warrants in the past 24 months, (3) three or more prior jail incarcerations, (4)
employed at the time of arrest, (5) residential stability, (6) illegal drug use during the past
6 months, and (7) a severe drug use problem.
25. The tool predicted relatively well new arrests for white defendants but, for non-white
defendants, detected no significant correlation between the levels of risk and the rate of
rearrests. Similar findings were reported for new convictions.23
26. Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is an
empirical risk and needs assessment tool integrated into the Northpointe Suite, a web-based
assessment and case management system for criminal justice practitioners. It was developed
by the private company Northpointe, now owned by Equivant. It recently made headlines for
perpetuating systemic racial bias. The reason for this might be that COMPAS is the only
risk assessment tool in use that is based on machine learning. COMPAS is a product of a
for-profit company, and the inner workings of its algorithm and the way the scores are
calculated are not public information.24

21 KENNETH ROSE, VIRGINIA PRETRIAL RISK ASSESSMENT INSTRUMENT (VPRAI) & PRAXIS
OVERVIEW 12–13 (Jun. 11, 2018),
https://www.dcjs.virginia.gov/sites/dcjs.virginia.gov/files/announcements/vpraipraxisoverview6112018.pdf.
22 AUBREE COTE, SMART PRETRIAL PROGRAM CMTY. CORRS. DIV. CITY AND COUNTY OF DENVER,
PRETRIAL SERVS. (Jan. 9, 2018), https://cdpsdocs.state.co.us/ccjj/Committees/PRTF/Handout/2018-01-09_CCJJ-PRTF_DenverPretrialServices.pdf.
23 Edward J. Latessa et al., The Ohio Risk Assessment System, in HANDBOOK OF RECIDIVISM RISK/NEEDS
ASSESSMENT TOOLS 147, 159–60 (Jay P. Singh et al. eds., 2018).
27. It is important to mention that in addition to the media and academic interest in COMPAS,
the question regarding its validity was also examined in court. The issue was raised in State
v. Loomis, a case that made its way to the Supreme Court of Wisconsin. Eric Loomis, who
was allegedly involved in a drive-by shooting, was charged with first-degree recklessly
endangering safety, attempting to flee or elude a traffic officer, operating a motor vehicle
without the owner’s consent, possession of a firearm by a felon, and possession of a short-
barreled shotgun or rifle—in all cases as a repeat offender. The court accepted the
defendant’s plea, which was limited to two of the lesser charges, and ordered a pre-sentence
investigation—with the respective report including a COMPAS risk assessment. According
to the latter, Loomis was classified as a high risk of recidivism in all three categories in
which COMPAS produces an assessment, namely pre-trial release risk, general recidivism,
and violent recidivism, and was denied parole on this basis.
28. Loomis challenged the algorithm as a violation of his right to be sentenced using accurate
information and to have an individualized sentence, as well as of his right to
non-discrimination, considering that gender had been one of the risk predictors. The court
concluded that using a risk assessment tool in the sentencing phase did not violate the
defendant’s right to due process, because the output of the algorithm was not the
determinative factor in deciding the length of the sentence; it was one factor among many
others, and the judge has the discretion to diverge from it if needed. There was an attempt to
challenge this decision in the U.S. Supreme Court, but certiorari was not granted.25 The
court’s final decision was in favor of the State, but it stressed some limitations and cautions
courts shall observe to avoid due process violations when using a COMPAS risk assessment.
29. Jon Kleinberg (Cornell University) and colleagues are studying the use of the gradient
boosted decision tree technique in pretrial risk assessment. They built an algorithm based on
a large dataset of cases heard in New York City from 2008–2013. The data included these
factors: age, current offense, criminal record (including prior failures to appear), and the
outcome of the case: released, failed while awaiting trial, or rearrested while awaiting trial.
The algorithm only had three input variables—current offense, priors, and age—and the
outcome variable was the likelihood that the defendant would fail to appear.26

24 Julia Angwin et al., Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And
It’s Biased Against Blacks., PROPUBLICA (May 23, 2016).
25 Loomis, 881 N.W.2d 749, cert. denied, 137 S. Ct. 2290 (2017).
30. For each step, a similar split would be made based on the information gathered in the
previous splits. Thus, using gradient boosted decision trees enables a higher degree of
interactivity among the variables and yields a score that is more tailored to each defendant.
31. The algorithm focused on predicting only flight risk and not recidivism, because that is the
only factor that judges in New York are allowed to consider; however, the researchers
obtained qualitatively similar findings from a national dataset. The results of the study are
quite promising. They show that using machine learning, crime can be reduced by up to
24.7% with no change in the rate of detention, or the detention rate can be reduced by up to
41.9% with no increase in crime rates. Moreover, all categories of crime, including violent
crimes, showed reductions, and these gains can be achieved while simultaneously reducing
racial disparities.27
32. It is difficult to choose one AI algorithm as superior to the others, since each has its pros and
cons. Yet the Pretrial Justice Institute, a leading nonprofit organization in the field, published
in 2017 a comprehensive report, The State of Pretrial Justice in America, which gives all
fifty states a score on a scale of A–F. The only state that got an “A” is New Jersey, where
money bond has been eliminated statewide, except in instances where no other condition is
sufficient, and where the Arnold Foundation PSA has been implemented statewide. As a
result of these reforms, pretrial detention has dropped by 34%, and public safety has
improved, as shown by a reduction in all types of crime.28
33. Against this background, one may argue that, at the end of the day, AI-generated evidence
presents almost the same challenges DNA tests presented at the outset of their use in
judicial settings.

26 Jon Kleinberg et al., Human Decisions and Machine Predictions, 133 Q.J. ECON. 237, 237 (2017).
27 Id. at 1.
28 PRETRIAL JUSTICE INST., THE STATE OF PRETRIAL JUSTICE IN AMERICA 6–8 (Nov. 2017),
https://university.pretrial.org/HigherLogic/System/DownloadDocumentFile.ashx?DocumentFileKey=f9d452f6-ac5a-b8e7-5d68-0969abd2cc82&forceDialog=0.


34. If an algorithm is a ‘sequence of computational steps that transform the input into the
output’, then a recidivism algorithm transforms sets of data gathered from samples of a
relevant population into an assessment of the risk of re-offending relating to a concrete
individual that has already been accused or even found guilty of having committed a crime.
35. HART (Harm Assessment Risk Tool) is a software tool based on ML and trained with the
use of Durham Police archives dating from 2008 to 2012, in order to assess the risk of
reoffending on the basis of 34 risk predictors.
36. The most debated aspect is the set of variables considered as risk parameters in each
algorithm. For example, gender might seem an inherently biased parameter for a risk
assessment model. But this does not necessarily signal malicious intent; rather, (criminal)
law and computer science may approach these variables differently, with a programmer
considering the inclusion of gender as a risk parameter as a means to promote the accuracy
of the risk assessment and not as a discrimination enabler.

B. Does the use of an AI algorithm to predict criminal behavior violate the right to
equality guaranteed under Article 14 of the Constitution of Bharat Nadu by
disproportionately affecting certain communities?

1. Discussions of understandability sometimes assume that human decision makers are
themselves interpretable because they can explain their actions. But as described earlier,
studies show that judges consciously and unconsciously weigh more than just legal factors
when making decisions: they are influenced by unconscious biases, and their intuitions
can often be inaccurate.29
2. Imagine that it has been proven that having different algorithms for black and white
defendants will improve their predictive accuracy; should we allow that as a society? In the
context of criminal justice, the common view is that the use of race in any form is
unconstitutional and would violate the equal protection clause derived from the Fourteenth
Amendment. However, researchers show that this prohibition on considering race is
practically impossible to enforce because all the existing risk assessment tools include
factors that serve as proxies for race.30

29 Jennifer Doleac, Let Computers Be the Judge, MEDIUM (Apr. 20, 2017), https://medium.com/@jenniferdoleac/let-computers-be-the-judge-b9730f94f8c8.
30 Crystal S. Yang & Will Dobbie, Equal Protection Under Algorithms: A New Statistical and Legal Framework 7.

3. Judges can use their discretionary power to make decisions based on their biases,
stereotypes, and prejudices. This could happen unconsciously, because the human brain itself
is a black box, and psychology research shows that people who discriminate are usually not
aware of it, acting from rapid automatic responses that the brain generates before the
deliberative mind can intervene.31
4. Deploying an ML-based actuarial risk assessment tool can help filter out some of the
harmful effects of discretion, though it must be done cautiously: the engineers who build the
algorithm may themselves have unconscious biases, and shifting the harmful effects of
discretion to them is very hard to detect. It is a horizon the entire world is approaching, and a
point beyond which we must leave behind the mistakes of our past and their subsequent
ratchet effects.32
5. The researchers concluded that any additional information judges are exposed to, beyond
the factors necessary for prediction, acts as noise and distracts them from reaching a fair
decision. They attributed some of the distraction to what they call a 'selective labels
problem', meaning that judges rely on many factors that are hard to measure, such as
mood, or specific features of the case such as the defendant's appearance.33
6. It could be interesting to use machine learning to account for race in the design of the
algorithm but not in the prediction phase, as suggested by other researchers. It is important
to mention that the use of race here is intended to 'fix' prior discrimination. Although it might
be constitutionally challenging to implement such an approach, on the theoretical front a
growing number of researchers are showing that using race explicitly does not harm equal
protection and can help racially disadvantaged communities.34
7. Most criticisms of ML techniques center on the fact that the decision is untraceable and
therefore unappealable. But in practice, pretrial hearings are already opaque, and an
ML-based tool, if used properly, is unlikely to cause more harm and might even be
beneficial. There is huge variation between jurisdictions in the conditions of pretrial
hearings, but typically they last no more than a few minutes, they often take place by
video conference rather than in person, legal representation is not always provided, and the
official who makes the decision is often a magistrate and not necessarily a judge. In addition,
there is evidence showing that court officials spend very little time looking at each
defendant's file and determining the release or detention conditions.35

31 Jon Kleinberg et al., Discrimination in the Age of Algorithms 10–11.
32 Ben Green & Yiling Chen, Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessment, in FAT* '19: Proceedings of the Conference on Fairness, Accountability and Transparency 90, 96 (2019).
33 Jon Kleinberg et al., Human Decisions and Machine Predictions, 133 Q.J. ECON. 237, 237.
34 Deborah Hellman, Measuring Algorithmic Fairness, 106 VA. L. REV. at 38–39.
8. In the case of tools signalling future wrongdoers, predictive policing is also expected to
spare law-abiding citizens unnecessary encounters with the police and the attendant
violations of their fundamental rights, including privacy. Besides, it should lead to a 'more
equitable and non-discriminatory policing' by rendering the decisions reached by police
officers more objective and reducing reliance on their subjective judgment.36
9. Evidence generated by means of AI, whether in the context of predictive policing or in
the larger context of human-technology interaction, appears capable of enhancing
fact-finding.
10. It has been contended that AI pretrial risk assessment systems can be biased and can
magnify the disparities within a society. This hypothesis has found some backing in research,
yet there could be many explanations for that finding. Judges respond and interact differently
with risk assessment tools: in one jurisdiction, judges may use the tool to 'liberalize' their
practices, while in another, judges may use it to reinforce their internal biases and deviate
from the recommendation presented by the tool.37

C. Does the use of an AI algorithm in criminal justice decision-making raise
concerns about transparency, accountability, and due process, as guaranteed
under the Constitution of Bharat Nadu?

1. Machine learning has a set of unique strengths and weaknesses that challenge our
commitment to human judgment and basic concepts of law. Because of the way ML
algorithms operate, they require us to adopt new ways of understanding concepts such as
transparency, explainability, and fairness. However, a comparison between machine learning
and regression analysis shows us that there are more similarities than differences between
the two.

35 Stevenson & Mayson, supra note 1, at 24–25.
36 Ferguson 2015; Data Mining, Dog Sniffs and the Fourth Amendment 2014, p. 695.
37 Megan Stevenson, Assessing Risk Assessment in Action, 103 MINN. L. REV. 303, 309 (2018).

2. Successful implementation of ML algorithms that improve the criminal justice system
requires that the following conditions be met. First, collaboration between the engineers who
create the algorithms and the policy makers responsible for their implementation is needed to
ensure that both groups have a comprehensive understanding of the capabilities and
limitations of the algorithms. Second, a discussion, and then a consensus, about the
trade-offs between concepts such as fairness, accuracy, efficiency, transparency, justice,
and equity38 needs to be part of the system design. Third, proper safeguards are needed to
ensure that ML algorithms comply with legal principles such as due process and equal
protection.
3. Arguments in favour focus on the right to a hearing within a reasonable time and the right to
such a hearing by an independent and impartial tribunal as core elements of the fair trial
principle (Art. 6(1) ECHR). Algorithms are presented as a solution to the caseload of often
understaffed criminal courts, to lengthy and costly procedures and to human mistakes, and
as a source of new knowledge. Besides this, their output is expected to be "free" from
subjective criteria or sympathies for one side or the other and, as such, to decrease arbitrary
judgments, as well as to grant defendants equal opportunities before all criminal courts.39
4. In that sense, the use of open-source software is recommended where possible (cf. Sects.
3.2.1 and 3.2.2). Additionally, appropriate public procurement processes should be adopted
in order to ensure compliance with fundamental rights and applicable laws, and
public-private partnerships, contracts and acquisitions, and the purposes for which AI
systems are procured, should be disclosed to the public.
5. One should reconsider whether, at the end of the day, there is any space left for doubt in a
context where the algorithmic output is surrounded by objectivity and a scientific language
(EP 2021, par. 15) and, thus, "beats" those who are apt to lie. Should the algorithm leave
space for doubt, one should also decide "how much of this doubt" would actually benefit
the suspect or accused person. In other words, one should decide whether a 1% false positive
rate would be enough to exonerate the defendant.
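The weight such an error rate carries can be made concrete with a short, purely illustrative calculation (all figures are invented): even a 1% false positive rate, applied to a large pool of defendants, wrongly flags a substantial number of real people.

```python
def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN): the share of people who would not re-offend
    but are nevertheless flagged as high risk."""
    return false_positives / (false_positives + true_negatives)

# Hypothetical pool: 90,000 defendants who would not in fact re-offend.
fp = 900       # wrongly flagged as high risk
tn = 89_100    # correctly flagged as low risk
rate = false_positive_rate(fp, tn)  # a "1%" false positive rate
# Even at 1%, 900 real people in this invented pool are wrongly flagged.
```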

38 Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 51 U.C. DAVIS L. REV. 399, 405 (2017).
39 Dreyer and Schmees 2019, p. 759.

6. Periodic validation is the solution to this problem, and it is not something unheard of. It is
also useful for building trust in the tool among the public, litigants and judges,40 where there
are already great tensions over the many outdated legislations and precedents.
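One possible shape of such a periodic validation, sketched here with invented scores, outcomes and tolerance, is a recurring check of how far the tool's predicted risks drift from the outcomes actually observed:

```python
def calibration_gap(predicted, observed):
    """Mean absolute gap between predicted risk scores and observed
    outcomes (1 = re-offended, 0 = did not)."""
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)

# Hypothetical quarterly validation batch (scores and outcomes invented).
scores = [0.2, 0.7, 0.4, 0.9]
outcomes = [0, 1, 0, 1]
gap = calibration_gap(scores, outcomes)
# Invented tolerance: flag the tool for re-examination if drift exceeds it.
needs_review = gap > 0.3
```

Re-running such a check on fresh cases each period is one concrete way to make "validation" an ongoing institutional duty rather than a one-off certification.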
7. Another contention is that AI algorithms like Yuga Drishti become black-box decision
makers. To this, it is submitted that none of the risk assessment tools currently in use
operates like a black box. Even the most advanced pretrial risk assessment tools use machine
learning on a small scale and for a very specific task, which does not reduce explainability.
8. There are technical solutions to the explainability problem, such as introducing strong
auditing mechanisms that analyze the fairness and level of bias in the output of the
algorithm rather than in the process itself. In the pretrial context, an auditing mechanism
could analyze the results of an algorithm to determine whether it treated Dalit and Brahmin
defendants differently, or men and women differently.41
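A minimal sketch of such an output-side audit, using invented records and neutral group labels, might compare high-risk flag rates across groups without inspecting the algorithm's internals at all:

```python
from collections import defaultdict

def audit_flag_rates(decisions):
    """Compute the high-risk flag rate per group from (group, flagged)
    records, auditing only the algorithm's outputs, not its internals."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, is_flagged in decisions:
        total[group] += 1
        flagged[group] += int(is_flagged)
    return {g: flagged[g] / total[g] for g in total}

# Invented audit log of decisions: (group label, flagged high-risk?).
log = [("A", True), ("A", False), ("A", True), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]
rates = audit_flag_rates(log)
# A large gap between groups would be a disparity worth investigating.
```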

D. Does the use of an AI algorithm to predict criminal behavior violate the right to privacy
guaranteed under Article 21 of the Constitution of Bharat Nadu?

1. The EU, an outspoken critic of AI-assisted pretrial risk assessment tools, is ironically the
biggest investor in AI technology. Mass data mining for AI advancement has been
recognized as a necessity by the EU, subject to 'effective review, either by a court or by an
independent administrative body whose decision is binding'.42 By means of this review it
should specifically be verified that 'a situation justifying that measure exists and that the
conditions and safeguards that must be laid down are observed' (Quadrature du Net and
Others v. Premier ministre and Others, Joined Cases C-511/18, C-512/18 and C-520/18, CJEU, 6
October 2020, par. 179, 189, 192). In the case of automated data analysis, the CJEU also
highlighted that the algorithm must be based on specific and reliable pre-established models
and criteria, to be regularly re-examined, and not on sensitive data in isolation.
Additionally, such analyses must be subject to human re-examination before a measure that
may adversely affect the concerned individual is adopted.

40 Joint Tech. Comm., Using Technology to Improve Pretrial Release Decision-Making, JTC RESOURCE BULLETIN 1, 15 (2017).
41 Pauline T. Kim, Auditing Algorithms for Discrimination, 166 U. PA. L. REV. ONLINE 189, 189–90 (2017).
42 C-512/18 and C-520/18, CJEU, 6 October 2020, par. 179, 189, 192.

2. In the post-9/11 world, nations have contested which crimes can be considered serious
enough to justify such intrusive preventive measures; that is, the criterion of being at high
risk of committing any crime whatsoever is to be deemed invalid. They have likewise
contested exactly which persons can be the target of counter-terrorism preventive measures.
Besides this, concrete geographical criteria should be determined in a non-discriminatory way.43
3. In April 2018, the EU released its AI Strategy with a twofold aim: to make the EU a world-
class hub for AI and to ensure the human-centric and trustworthy character of AI (EC
2018a). Next, the AI HLEG presented in 2019 the 'Ethics Guidelines for Trustworthy Artificial
Intelligence', according to which (AI HLEG 2019b, p. 5) trustworthy AI systems are,
throughout their entire life cycle, lawful (that is, they comply with all applicable laws and
regulations), ethical (that is, they ensure adherence to ethical principles and values), and
robust from a technical and social perspective. To achieve this goal, they should meet the
following key requirements: human agency and oversight; technical robustness and safety;
privacy and data governance; transparency; diversity, non-discrimination and fairness;
environmental and societal well-being; and accountability.44
4. The Union of Bharat Nadu will have the largest working population in the world by 2060.
Should AI-generated evidence become mainstream in the future, the failure to lay down
specific criminal procedural rules concerning relevance and reliability tests of this
evidence as well as the means to contest the message it conveys would breach equality of
arms.
5. Mass surveillance is no longer a dystopian concept, nor necessarily a stepping stone to
systemic exploitation. Meanwhile, forensic evidence of the second generation has become
mainstream, with digital/electronic evidence as a core element of it. Taking data mining as
an example, the sources and the tools to gather information have multiplied: smartphones
enabling, inter alia, location tracking; automatic license plate readers, electronic toll
collection systems, speed cameras and car GPS devices recording travel patterns; social
media websites tracking communications; e-shops facilitating the creation of consumer
profiles; and digital databases storing financial data, to name a few.

43 Art. 83(1) of the Treaty on the Functioning of the EU could serve as guidance: Tracol (2021), p. 10. Even in this context, it remains problematic that newly established criminal offences, such as the ones included in the EU terrorism criminal legislation (see Directive (EU) 2017/541), often involve neutral acts (e.g., receiving training, travelling): Kaiafa (2019); Giannakoula et al. (2020), p. 52, 59.
44 Aiming to facilitate compliance with the seven key requirements, the AI HLEG (2020) also presented a detailed assessment list (Assessment List for Trustworthy AI (ALTAI)) and developed a prototype web-based tool for practical guidance purposes: https://futurium.ec.europa.eu/en/european-ai-alliance/pages/altai-assessment-list-trustworthy-artificial-intelligence
6. The need to invest greater intellectual and material resources may not be new, compared
to, for instance, the difficulties inherent in scrutinizing DNA evidence for the first time or in
analyzing and presenting the model behind traditional data mining software; but AI
systems distinguish themselves by operating in a way that is, at least for the time being, only
partly known and not always predictable even by their designers.
7. This new evidence category refers not only to tools designed to meet investigatory needs and
to produce tangible evidence, such as the image and video comparison software employed
by Interpol to connect victims, perpetrators and places in cases of child sexual abuse.45
8. To seize the benefits of predictive policing as a data-driven application, one needs to ensure
that the algorithm is supplied with accurate data, collected previously in an appropriate
context, and that it links the data properly, i.e., without leading to false positives or
negatives.
9. When employed for law enforcement purposes, data mining can be divided into two
categories: (1) subject-based data mining, where the focus lies on previously identified
individuals; and (2) pattern-based data mining, where the focus shifts onto non-suspect
individuals 'to identify patterns of transactions or behaviors that correlate with suspect
activity'.
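The two categories can be sketched, with invented records and an invented "suspect pattern", as follows:

```python
# Invented records and thresholds, for illustration only.
records = [
    {"person": "P1", "transactions": 3,  "night_activity": 0.9},
    {"person": "P2", "transactions": 40, "night_activity": 0.1},
    {"person": "P3", "transactions": 35, "night_activity": 0.8},
]

def subject_based(records, target):
    """Category (1): gather everything held on a previously identified person."""
    return [r for r in records if r["person"] == target]

def pattern_based(records, min_tx, min_night):
    """Category (2): scan non-suspect individuals for behaviour matching a
    pre-defined 'suspect' pattern (here an invented transaction pattern)."""
    return [r["person"] for r in records
            if r["transactions"] >= min_tx and r["night_activity"] >= min_night]

hits = pattern_based(records, min_tx=30, min_night=0.5)
```

The legally salient difference is visible in the code itself: the first function starts from a named individual, while the second sweeps the whole population against a behavioural pattern.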
10. As can be seen from the lists of factors, the main focus of all the tools is on criminal history
and its variance; other commonly used factors include age, community ties, residential
stability, employment, and substance abuse.
11. Machine learning allows researchers to try different combinations and different partitions, a
task that is complicated to execute using regression analysis. Studies show that criminal
history is the factor with the highest correlation with recidivism, and it can easily be
obtained and verified through criminal records.
12. In addition, models that are based on regression analysis depend on relatively few
predictors that have strong associations with the outcome. Predictors with weak associations
with the outcome are usually treated as "noise" and discarded. Yet machine learning enables
the inclusion of those predictors with weak associations, which, in the aggregate, can
dramatically improve forecasting accuracy.

45 More information about the International Child Sexual Exploitation (ICSE) image and video database is available at: https://www.interpol.int/en/Crimes/Crimes-against-children/International-Child-Sexual-Exploitation-database.
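The aggregation effect described above can be illustrated with a toy simulation on synthetic data (all accuracy figures are invented): a majority vote over many individually weak predictors can outperform a single stronger predictor used alone.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def simulate(n_weak: int, trials: int = 2000):
    """Compare a single moderately strong predictor against a majority
    vote that also aggregates many individually weak predictors."""
    strong_only_correct = 0
    aggregate_correct = 0
    for _ in range(trials):
        label = random.choice([0, 1])  # synthetic ground truth
        # One strong predictor: correct 65% of the time (invented figure).
        strong = label if random.random() < 0.65 else 1 - label
        # Many weak predictors: each correct only 55% of the time (invented).
        weak = [label if random.random() < 0.55 else 1 - label
                for _ in range(n_weak)]
        strong_only_correct += (strong == label)
        # Majority vote over the strong predictor plus all weak ones.
        votes = weak + [strong]
        majority = 1 if sum(votes) > len(votes) / 2 else 0
        aggregate_correct += (majority == label)
    return strong_only_correct / trials, aggregate_correct / trials

strong_acc, aggregate_acc = simulate(n_weak=50)
```

On this synthetic setup the aggregated vote is reliably more accurate than the strong predictor alone, which is the intuition behind letting ML retain weak predictors that regression analysis would discard.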
13. These tools are based on traditional statistical techniques, including regression analysis;
their increased capabilities have been enabled by enhanced computing power and a huge
amount of data that was not available in the past. It is only sensible to make use of an
available resource to serve the interests of the state and its citizens.


Prayer

WHEREFORE, in the light of the issues raised, arguments advanced, reasons given and
authorities cited, may this Hon'ble Court be graciously pleased to:

1. Uphold the legislation passed by the government of the Union of Bharat Nadu in regard to
the use of Yuga Drishti in pre-trial detention decisions, bail hearings and sentencing.

2. Recognize the authority of the Crime Records Bureau of Bharat Nadu over the data
collected on delinquents as absolute and constitutionally valid.

3. Recognize the use of AI in court procedures as a necessity and a solution to the many
bottlenecks within the system that require immediate attention.

AND/OR

Pass any other order that it deems fit in the interest of Justice, Equity and Good Conscience.
And for this, the respondent, as in duty bound, shall humbly pray.

- COUNSEL FOR THE RESPONDENT