Will AI make accidental nuclear war more likely? If so, how might these risks be reduced? AI and the Bomb provides a coherent, innovative, and multidisciplinary examination of the potential effects of AI technology on nuclear strategy and escalation risk. It addresses a gap in the international relations and strategic studies literature, and its findings have significant theoretical and policy ramifications for using AI technology in the nuclear enterprise. The book advances an innovative theoretical framework to consider AI technology and atomic risk, drawing on insights from political psychology, neuroscience, computer science, and strategic studies. In this multidisciplinary work, James Johnson unpacks the seminal cognitive-psychological features of the Cold War-era scholarship, and offers a novel explanation of why these matter for AI applications and strategic thinking. The study offers crucial insights for policymakers and contributes to the literature that examines the impact of military force and technological change. Source: Syndetic Solutions
Call Number: Law Library: UG479 .A44 2021 and [electronic resource]
Publication Date: 2021
"AI at War is intended to provide a balanced and practical understanding for both national security professionals and the interested public of the application of AI to war fighting. Although the themes and findings of the chapters are generally applicable across the U.S. Department of Defense (DoD), to include all Services, Joint Staff and defense agencies, as well as allied and partner ministries of defense/defence, it is a "case study" of war fighting functions in the Naval Services-the United States Navy and United States Marine Corps"-- Provided by publisher.
"Autonomous weapons systems seem on the path of becoming accepted technologies of warfare. The weaponization of artificial intelligence raises questions about whether human beings will maintain control of the use of force. The notion of meaningful human control has become a focus of international debate on lethal autonomous weapons systems among members of the United Nations: many states have diverging ideas about various complex forms of human-machine interaction and the point at which human control stops being meaningful. In Autonomous Weapons Systems and International Norms Ingvild Bode and Hendrik Huelss present an innovative study of how testing, developing, and using weapons systems with autonomous features shapes ethical and legal norms, and how standards manifest and change in practice. Autonomous weapons systems are not a matter for the distant future - some autonomous features, such as in air defence systems, have been in use for decades. They have already incrementally changed use-of-force norms by setting emerging standards for what counts as meaningful human control. As UN discussions drag on with minimal progress, the trend towards autonomizing weapons systems continues. A thought-provoking and urgent book, Autonomous Weapons Systems and International Norms provides an in-depth analysis of the normative repercussions of weaponizing artificial intelligence."-- Provided by publisher.
"Because of the increasing use of Unmanned Aerial Vehicles (UAVs, also commonly known as drones) in various military and para-military (i.e., CIA) settings, there has been increasing debate in the international community as to whether it is morally and ethically permissible to allow robots (flying or otherwise) to decide when and where to take human life. In addition, there has been intense debate as to the legal aspects, particularly from a humanitarian law framework. In response to this growing international debate, the United States government released the Department of Defense (DoD) 3000.09 Directive (2012), which sets a policy for if and when autonomous weapons would be used in US military and para-military engagements. This US policy asserts that only "human-supervised autonomous weapon systems may be used to select and engage targets, with the exception of selecting humans as targets, for local defense ...". This statement implies that outside of defensive applications, autonomous weapons will not be allowed to independently select and then fire upon targets without explicit approval from a human supervising the autonomous weapon system. Such a control architecture is known as human supervisory control, where a human remotely supervises an automated system (Sheridan 1992). The defense caveat in this policy is needed because the United States currently uses highly automated systems for defensive purposes, e.g., Counter Rocket, Artillery, and Mortar (C-RAM) systems and Patriot anti-missile missiles. 
Due to the time-critical nature of such environments (e.g., soldiers sleeping in barracks within easy reach of insurgent shoulder-launched missiles), these automated defensive systems cannot rely upon a human supervisor for permission because of the short engagement times and the inherent human neuromuscular lag, which means that even if a person is paying attention, there is approximately a half-second delay in hitting a firing button, which can mean the difference between life and death for the soldiers in the barracks. So as of now, no US UAV (or any robot) will be able to launch any kind of weapon in an offensive environment without human direction and approval. However, the 3000.09 Directive does contain a clause that allows for this possibility in the future. This caveat states that the development of a weapon system that independently decides to launch a weapon is possible but first must be approved by the Under Secretary of Defense for Policy (USD(P)); the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)); and the Chairman of the Joint Chiefs of Staff. Not all stakeholders are happy with this policy that leaves the door open for what used to be considered science fiction. Many opponents of such uses of technologies call for either an outright ban on autonomous weaponized systems, or in some cases, autonomous systems in general (Human Rights Watch 2013, Future of Life Institute 2015, Chairperson of the Informal Meeting of Experts 2016). Such groups take the position that weapons systems should always be under "meaningful human control," but do not give a precise definition of what this means. One issue in this debate that often is overlooked is that autonomy is not a discrete state; rather, it is a continuum, and various weapons with different levels of autonomy have been in the US inventory for some time. Because of these ambiguities, it is often hard to draw the line between automated and autonomous systems. 
Present-day UAVs use the very same guidance, navigation and control technology flown on commercial aircraft. Tomahawk missiles, which have been in the US inventory for more than 30 years, are highly automated weapons with accuracies of less than a meter. These offensive missiles can navigate by themselves with no GPS, thus exhibiting some autonomy by today's definitions. Global Hawk UAVs can find their way home and land on their own without any human intervention in the case of a communication failure. The growth of the civilian UAV market is also a critical consideration in the debate as to whether these technologies should be banned outright. There is a $144.38B industry emerging for the commercial use of drones in agricultural settings, cargo delivery, first response, commercial photography, and the entertainment industry (Adroit Market Research 2019). More than $100 billion has been spent on driverless car development (Eisenstein 2018) in the past 10 years, and the autonomy used in driverless cars mirrors that inside autonomous weapons. So, it is an important distinction that UAVs are simply the platform for weapon delivery (autonomous or conventional), and that autonomous systems have many peaceful and commercial uses independent of military applications"-- Provided by publisher.
"The legal industry has been one of the slowest to adapt to today's data economy. It's a profession that is still bogged down in traditional, manual review processes, and it bottlenecks organizations of all sizes with the staggering levels of business dependencies that rely on legal expertise. But this is changing. Technology - particularly Artificial Intelligence - is rapidly transforming the legal profession and industry. Over $1.2B was invested in legal technology in 2019, a record for annual investment in the space. AI for Lawyers is a guide to using artificial intelligence to transform a law firm or legal department, and in the process add tremendous value to any type of business. It explains how AI can help any organization win more business, exceed client expectations, and drive greater efficiency, risk management, and employee engagement by enabling them to do more meaningful work"-- Provided by publisher.
"In shifting technological and regulatory quicksands, this book offers a toolkit for harnessing AI in the practice of law and for optimizing AI by society as a whole. This is ... [a] study of how AI will dramatically affect the law, both as a profession and a regulatory domain, as well as society at large. The urgency for this guide stems from the gap between the transformative forces of AI, on the one hand, and the lagging grasp by key stakeholders of the capabilities, limitations and effects of AI, on the other"-- Back cover.
This groundbreaking work offers a first-of-its-kind overview of legal informatics, the academic discipline underlying the technological transformation and economics of the legal industry. Edited by Daniel Martin Katz, Ron Dolin, and Michael J. Bommarito, and featuring contributions from more than two dozen academic and industry experts, chapters cover the history and principles of legal informatics and background technical concepts - including natural language processing and distributed ledger technology. The volume also presents real-world case studies that offer important insights into document review, due diligence, compliance, case prediction, billing, negotiation and settlement, contracting, patent management, legal research, and online dispute resolution. Written for both technical and non-technical readers, Legal Informatics is the ideal resource for anyone interested in identifying, understanding, and executing opportunities in this exciting field. Source: Syndetic Solutions
"Law today is incomplete, inaccessible, unclear, underdeveloped, and often perplexing to those whom it affects. In The Legal Singularity, Abdi Aidid and Benjamin Alarie argue that the proliferation of artificial intelligence-enabled technology--and specifically the advent of legal prediction--is on the verge of radically reconfiguring the law, our institutions, and our society for the better. Revealing the ways in which our legal institutions underperform and are expensive to administer, the book highlights the negative social consequences associated with our legal status quo. Given the infirmities of the current state of the law and our legal institutions, the silver lining is that there is ample room for improvement. With concerted action, technology can help us to ameliorate the law and our legal institutions. Inspired in part by the concept of the "technological singularity," The Legal Singularity presents a future state in which technology facilitates the functional "completeness" of law, where the law is at once extraordinarily more complex in its specification than it is today, and yet operationally vastly more knowable, fairer, and clearer for its subjects. Aidid and Alarie describe the changes that will culminate in the legal singularity and explore the implications for the law and its institutions."-- Provided by publisher.
"This book explores the intersection between artificial intelligence and two intellectual property rights: copyright and patents. The increasing use of artificial intelligence for generating creative and innovative output has an impact on copyright and patent laws around the world. The book aims to map and analyse that impact. The author considers how artificial intelligence systems may aid, or in some cases substitute, human creators and inventors in the creative process. It is from this angle that the copyright and patent regimes in four jurisdictions (Europe, United States, Australia and Japan) are investigated in depth. The author describes how these jurisdictions look at works and inventions generated through a process where artificial intelligence is present or prevalent, and examines how copyright and patent regimes should adapt to the reality of artificially intelligent creators and inventors. As the use of artificial intelligence to generate creative and innovative products becomes more common, this book will be a valuable resource to researchers, academics and policy-makers alike"-- Provided by publisher.
NOTE: Description based on print version record and CIP data provided by publisher.
"Technologies have always led to turning points for social development. In the past, different technologies have opened the doors towards a new phase of growth and change while influencing social values and principles. Algorithmic technologies fit within this framework. Although these technologies have positive effects for the entire society by increasing the capacity of individuals to exercise rights and freedoms, they have also led to new constitutional challenges. The opportunities of new algorithmic technologies clash with their troubling opacity and lack of accountability. We believe that constitutional law plays a critical role in addressing the challenges of the algorithmic society. New technologies have always challenged, if not disrupted, the social, economic, legal and, to an extent, the ideological status quo. Such transformations impact constitutional values, as the state formulates its legal response to the new technologies based on constitutional principles which meet market dynamics, and as it considers its own use of technologies in light of the limitations imposed by constitutional safeguards. The primary goal of this chapter is to introduce the constitutional challenges coming from the rise of the algorithmic society. The first part of this work examines the challenges for fundamental rights and democratic values with a specific focus on the right to freedom of expression, privacy and data protection. The second part looks at the role of constitutional law in relation to the regulation and policy of the algorithmic society. The third part examines the role and responsibilities of private actors, underlining the role of constitutional law in this field. The fourth part deals with the potential remedies which constitutional law can provide to face the challenges of the information society"-- Provided by publisher.
"This is a book about rights and powers in the digital age. It is an attempt to reframe the role of constitutional democracies in the information or network society, which, in the last twenty years, has transmuted into the algorithmic society as the current societal background where large, multinational social platforms 'sit between traditional nation states and ordinary individuals and the use of algorithms and artificial intelligence agents to govern populations'"-- Provided by publisher.
"AI threatens to disrupt the professions as it has manufacturing. Frank Pasquale argues that law and policy can avert this outcome and promote better ones: instead of replacing humans, technology can make our labor more valuable. Through regulation, we can ensure that AI promotes inclusive prosperity"-- Provided by publisher.
The Law and Economics of Privacy, Personal Data, Artificial Intelligence, and Incomplete Monitoring presents new findings and perspectives from leading international scholars on several emerging issues in legal and economic research.
This book is the first collective work devoted exclusively to the ethical and penal theoretical considerations of the use of artificial intelligence at sentencing. Jesper Ryberg and Julian V. Roberts bring together leading experts in the field to investigate to what extent, and under which conditions, justice and the social good may be promoted by allocating parts of the most important task of the criminal court--that of determining legal punishment--to computerized sentencing algorithms.
This open-access book synthesizes a supportive developer checklist for sustainable team and agile project management amid the challenges of artificial intelligence and the limits of image recognition. The study is based on technical, ethical, and legal requirements, with examples concerning autonomous vehicles. As the first of its kind, it analyzes all reported car accidents statewide (1.28 million) over a 10-year period. The integration of highly sensitive international court rulings and growing consumer expectations makes this book a helpful guide for product and team development from initial concept until market launch. The author, Thomas Winkle (Prof. Dr.-Ing., MBA Communication & Leadership), has contributed to best-selling Springer books such as "Autonomous Driving: Technical, Legal and Social Aspects" and the "Handbook of Driver Assistance Systems". His work is based on three decades of sustainable team consulting as an employee and researcher in the legal departments of three car manufacturers, as well as professorships at IU International University and TU Munich. Thomas Winkle received the Volkswagen research award for his significant human-centered design contribution to the development of the automatic emergency brake, and he was responsible for preparing the ADAS Code of Practice. As a consultant at international courts, he links artificial intelligence, ethics, sustainable agile management, mindful communication, and law, using autonomous vehicles as his example.
"Artificial Intelligence (AI) is transforming human society in fundamental and profound ways. Not since the Age of Reason have we changed how we approach security, economics, order, and even knowledge itself. In the Age of AI, three deep and accomplished thinkers come together to consider what AI will mean for us all" -- Book jacket.
Artificial intelligence (AI) is the latest technological evolution transforming the global economy and is a major part of the "Fourth Industrial Revolution." This book covers the meaning, types, subfields and applications of AI, including U.S. governmental policies and regulations, and ethical and privacy issues, particularly as they pertain to and affect facial recognition programs and the Internet of Things (IoT). There is a lengthy analysis of bias, AI's effect on the current and future job market, and how AI precipitated fake news. In addition, the text covers basics of intellectual property rights and how AI will transform their protection. The author then moves on to explore international initiatives from the European Union, China's New Generation Development Plan, other regional areas, and international conventions. The book concludes with a discussion of superintelligence and the question and applicability of consciousness in machines. The interdisciplinary scope of the text will appeal to any scholars, students and general readers interested in the effects of AI on our society, particularly in the fields of STS, economics, law and politics. Source: Syndetic Solutions
"What happens when artificial intelligence saturates political life and depletes the planet? How is AI shaping our understanding of ourselves and our societies? In this book Kate Crawford reveals how this planetary network is fueling a shift toward undemocratic governance and increased inequality. Drawing on more than a decade of research, the award-winning science and technology scholar reveals how AI is a technology of extraction: from the energy and minerals needed to build and sustain its infrastructure, to the exploited workers behind 'automated' services, to the data AI collects from us. Rather than taking a narrow focus on code and algorithms, Crawford offers us a political and a material perspective on what it takes to make artificial intelligence and where it goes wrong. While technical systems present a veneer of objectivity, they are always systems of power. This is an urgent account of what is at stake as technology companies use artificial intelligence to reshape the world."-- Provided by publisher
"In the past decade, artificial intelligence (AI) has become a disruptive force around the world, offering enormous potential for innovation but also creating hazards and risks for individuals and the societies in which they live. This volume addresses the most pressing philosophical, ethical, legal, and societal challenges posed by AI. Contributors from different disciplines and sectors explore the foundational and normative aspects of responsible AI and provide a basis for a transdisciplinary approach to responsible AI. This work, which is designed to foster future discussions to develop proportional approaches to AI governance, will enable scholars, scientists, and other actors to identify normative frameworks for AI to allow societies, states, and the international community to unlock the potential for responsible innovation in this critical field. This book is also available as Open Access on Cambridge Core"-- Provided by publisher.
Digital innovations influence every aspect of our lives in this increasingly technological world. Firms that pursue digital innovations must think carefully about how digital technologies shape the nature, process and outcomes of innovation as well as the long- and short-term social, economic and cultural consequences of their offerings. The Handbook contributes to building a transdisciplinary understanding of digital innovation by bringing together a diverse set of leading scholars from business, engineering, economics, science and public policy. Their distinct perspectives advance ideas and principles intended to set the agenda for future research on digital innovation in ways that inform not only firm-level strategies and practices but policy decisions and science-focused investments as well. The first of its kind, this Handbook provides scope and depth for scholars interested in information systems and digital technologies, innovation and entrepreneurship, strategy, and digital platforms and ecosystems. In addition, it is informative and enlightening to scholars and practitioners interested in the impact of digital technologies on organizations and the broader society. Contributors include: A. Aaltonen, C. Alaimo, E. Autio, N. Berente, C. Bubel, P.N. Courant, J. Cutcher-Gershenfeld, E.L. Echeverri-Carroll, A. Gawer, T.L. Griffith, V. Grover, J. Grudin, O. Henfridsson, S.L. Jarvenpaa, J. Kallinikos, M.J. Kim, J.L. King, R.J. Kulathinal, S. Kumar, K.A. Loparo, K. Lyytinen, A. Majchrzak, A. Malhotra, M.L. Markus, S. Nambisan, W. Nan, J.V. Nickerson, A. Pedraza-Avella, L.W. Rogowski, S. Seidel, L.D.W. Thomas, C. Velu, Y. Yoo, X. Zhang
The Routledge Social Science Handbook of AI is a landmark volume providing students and teachers with a comprehensive and accessible guide to the major topics and trends of research in the social sciences of artificial intelligence (AI), as well as surveying how the digital revolution - from supercomputers and social media to advanced automation and robotics - is transforming society, culture, politics and economy. The Handbook provides representative coverage of the full range of social science engagements with the AI revolution, from employment and jobs to education and new digital skills to automated technologies of military warfare and the future of ethics. The reference work is introduced by editor Anthony Elliott, who addresses the question of the relationship of the social sciences to artificial intelligence, and who surveys various convergences and divergences between contemporary social theory and the digital revolution. The Handbook is exceptionally wide-ranging in span, covering topics all the way from AI technologies in everyday life to single-purpose robots throughout home and work life, and from the mainstreaming of human-machine interfaces to the latest advances in AI, such as the ability to mimic (and improve on) many aspects of human brain function. A unique integration of social science on the one hand and new technologies of artificial intelligence on the other, this Handbook offers readers new ways of understanding the rise of AI and its associated global transformations. Written in a clear and direct style, the Handbook will appeal to a wide undergraduate audience.