SLHOIP2 20200401016 Prerna Bhansali LR
Literature Review
1. Trust in Automated Vehicles – By F. Walker, M. Martens, William Payre, Philip W.,
Dr. Hergeth and Dr. Forster
The challenge of trust in automated driving technology is multifaceted, with both
insufficient and excessive trust posing risks. Trust is a dynamic, multilayered
phenomenon involving dispositional, situational, and learned aspects, fluctuating
across varying time scales. Despite its significance, some facets of trust lack adequate
attention, such as understanding its evolution with experience or establishing
appropriate trust levels. The Research Topic aims to bridge these gaps, fostering
interdisciplinary Human Factors research on trust in automated vehicles. Prioritizing
original studies, it seeks to explore under-researched dimensions, including methods
for real-life trust measurement, longitudinal studies, key factors influencing trust and
vehicle design, and policy solutions. Emphasis is placed on specific trust layers, road
user categories, and automation levels. The goal is to contribute valuable insights for
future applications, improved policies, human-machine interfaces, and safer roads
through innovative hypotheses and methodologies in on-road and simulator studies.
Topics may span defining trust, measuring it reliably, understanding its evolution, and
exploring factors like situational variables, dispositional factors, trust calibration, and
design features impacting trust. The Research Topic aspires to uncover novel concepts
and hypotheses critical for advancing trust in automated vehicles.1
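One of the under-researched dimensions named above, trust calibration, can be made concrete by comparing a user's self-reported trust with the automation's observed reliability. The sketch below is purely illustrative and not a method from the editorial: the 0–1 scale, the tolerance threshold, and the three labels are all assumptions chosen for the example.

```python
def classify_trust(reported_trust, system_reliability, tolerance=0.15):
    """Label a trust rating as calibrated, overtrust, or undertrust.

    reported_trust: self-reported trust in [0, 1]
    system_reliability: observed success rate of the automation in [0, 1]
    tolerance: how far trust may deviate from reliability and still count
               as calibrated (an arbitrary illustrative value)
    """
    gap = reported_trust - system_reliability
    if gap > tolerance:
        return "overtrust"    # trusting more than performance warrants
    if gap < -tolerance:
        return "undertrust"   # trusting less than performance warrants
    return "calibrated"

# One simulated participant across repeated drives: trust grows with exposure
ratings = [0.3, 0.6, 0.8, 0.95]
reliability = 0.7
print([classify_trust(r, reliability) for r in ratings])
# ['undertrust', 'calibrated', 'calibrated', 'overtrust']
```

A longitudinal study of the kind the Research Topic calls for would track how quickly such ratings converge toward (or overshoot) the system's actual capability.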
2. MDPI. MPC-Based Trajectory Tracking Control of an Automated Guided Vehicle with Adaptive Look-Ahead Time. Retrieved from https://www.mdpi.com/2032-6653/12/2/62. Published on 23 April 2021.
3. IEEE. Proceedings of the 2017 21st International Conference on Process Control (PC). Retrieved from https://ieeexplore.ieee.org/abstract/document/7976252/authors#authors. Published in 2017.
4. Frontiers. The Role of Trust in Automated Vehicle Acceptance: A Meta-Analysis. Retrieved from https://www.frontiersin.org/articles/10.3389/fpsyg.2022.1021656/full. Published on 10 November 2022.
5. A Blockchain Based Liability Attribution Framework for Autonomous Vehicles – By
Chuka Oham, Salil S. Kanhere, Raja Jurdak, and Sanjay Jha
The rise of autonomous vehicles is poised to disrupt the traditional auto insurance
liability model, shifting liability from the driver alone to entities such as
manufacturers, software providers, technicians, and owners. Autonomous vehicles,
equipped with sensors and connectivity, can gather ample data for liability
attribution, but this connectivity also exposes them to potential attacks, creating
an incentive for implicated entities to deny involvement in accidents. To address
this, the paper proposes a blockchain-based framework that integrates these entities
into the liability model and provides tamper-proof evidence for attribution and
adjudication. The framework employs a permissioned blockchain, tailoring data access
to the relevant participants, and undergoes a security analysis to demonstrate
resilience against potential attacks.5
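The tamper-evidence property at the heart of the framework can be illustrated with a minimal hash-chained log, a greatly simplified stand-in for the paper's permissioned-blockchain ledger. The class, field names, and sample records below are hypothetical, not taken from the paper: the point is only that altering any stored record breaks the chain of hashes.

```python
import hashlib
import json

class EvidenceLog:
    """Append-only log where each record commits to its predecessor's hash.

    A toy model of a tamper-evident ledger: rewriting any stored record
    changes its hash, so verification of the chain fails.
    """

    def __init__(self):
        self.records = []  # each: {"data", "prev_hash", "hash"}

    @staticmethod
    def _digest(data, prev_hash):
        # Canonical serialization so the same data always hashes the same way
        payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def append(self, data):
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        self.records.append(
            {"data": data, "prev_hash": prev_hash,
             "hash": self._digest(data, prev_hash)}
        )

    def verify(self):
        """Return True only if no record has been altered since appending."""
        prev_hash = "0" * 64
        for rec in self.records:
            if rec["prev_hash"] != prev_hash:
                return False
            if rec["hash"] != self._digest(rec["data"], rec["prev_hash"]):
                return False
            prev_hash = rec["hash"]
        return True

log = EvidenceLog()
log.append({"sensor": "lidar", "event": "obstacle detected", "t": 1})
log.append({"actor": "software_provider", "event": "update installed", "t": 2})
print(log.verify())               # True: chain intact
log.records[0]["data"]["t"] = 99  # an entity tries to rewrite history
print(log.verify())               # False: tampering detected
```

A real deployment would add the pieces this sketch omits: digital signatures tying each record to a specific entity, and a permissioned consensus layer restricting who may read and append.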
5. arXiv. A Blockchain Based Liability Attribution Framework for Autonomous Vehicles. Retrieved from https://arxiv.org/abs/1802.05050. Published on 14 February 2018.
6. SpringerLink. Ethical Decision Making in Autonomous Vehicles: The AV Ethics Project, volume 26, pages 3285–3312 (2020). Retrieved from https://link.springer.com/article/10.1007/s11948-020-00272-8. Published on 13 October 2020.
This study tackles the ethical challenges faced by Autonomous Vehicles (AVs) in
unavoidable accidents, known as the AV moral dilemma. Despite technological
progress, accidents with serious consequences persist. The research aims to fill gaps
in understanding AV ethical decision-making processes from users' perspectives. The
proposed "Integrative ethical decision-making framework for the AV moral dilemma"
outlines four stages and incorporates variables like perceived moral intensity and
personal moral philosophies. It suggests cultural influences on ethical decisions,
introduces a dual-process theory involving intuitive and rational moral reasoning, and
emphasizes situation-dependent ethical behavioral intentions shaped by an
individual's perception of seriousness and personal moral philosophy. The framework
offers a concise, step-by-step explanation of pluralistic ethical decision-making in AV
moral dilemmas.7
7. Frontiers. Ethical Challenges in Artificial Intelligence: A Multi-Dimensional Review. Retrieved from https://www.frontiersin.org/articles/10.3389/frobt.2021.632394/full. Published on 04 May 2021.
8. Carlos, A. Ethics in Artificial Intelligence: Navigating New Frontiers. LinkedIn. Retrieved from https://www.linkedin.com/pulse/ethics-artificial-intelligence-navigating-new-frontiers-carlos. Published on September 12, 2023.
9. Artificial Intelligence and Liability
The article explores the complex relationship between artificial intelligence (AI) and
liability, addressing challenges and solutions. It defines AI and highlights its
pervasive applications, particularly in autonomous vehicles. The "black box paradox"
complicates assigning liability due to the opacity of AI decision-making. The global
landscape of autonomous vehicle liability is examined, revealing varying regulations.
AI bias is illustrated through concrete examples, underscoring the need to address
it. Ethical concerns, including job displacement and privacy violations, call for
responsible governance.
responsible governance. Existing regulations like GDPR and IEEE's Ethically
Aligned Design are discussed, along with key principles and strategies for ethical AI.
Collaboration between government and private sectors is crucial. Emerging trends and
challenges, such as AI in critical infrastructure and malicious use, are identified. The
article concludes by emphasizing ongoing efforts to navigate the legal complexities of
AI responsibly.9
10. From Automation to Autonomy: Legal and Ethical Responsibility Gaps in Artificial
Intelligence Innovation – By David Nersessian and Ruben Mancha
The article addresses legal and ethical concerns related to the increasing prevalence of
artificial intelligence (AI) systems. It identifies three key actors in the AI value chain
(innovators, providers, and users) and three primary types of AI (automation,
augmentation, and autonomy). The focus is on responsibility in AI innovation,
examining strict liability claims for products with embedded AI capabilities and
ethical practices in AI development. The consideration of both legal and ethical
perspectives provides a comprehensive understanding of the potential consequences
and impacts of AI innovation. The article suggests that companies and policymakers
should adopt strategies that encompass both realms for effective AI regulation.10
9. Juriscentre. Artificial Intelligence and Liability in the Context of Autonomous Vehicles. Retrieved from https://juriscentre.com/2023/10/25/artificial-intelligence-and-liability/. Published on October 25, 2023.
10. Michigan Law Review. Autonomous Vehicles and the Law: A Primer for Practitioners. Retrieved from https://repository.law.umich.edu/cgi/viewcontent.cgi?article=1023&context=mtlr. Published in 2021.