The History of AI Rights Research

By Jamie_Harris @ 2022-08-27T08:14 (+48)

This is a linkpost to https://www.sentienceinstitute.org/the-history-of-ai-rights-research

Explanation for the Forum

I previously created a formal literature review of research on "The Moral Consideration of Artificial Entities" and a brief, informal primer on "The Importance of Artificial Sentience." TL;DR: seeking to expand the moral circle to include artificial sentient beings seems an important, neglected, and potentially tractable method for improving the long-term future and avoiding risks of astronomical suffering.

I wanted to follow up on those projects with a history of relevant research that was not constrained by a formal, systematic methodology; this would allow me to dive into citation trails and really get a sense of how the field has developed.

The report has a few goals.

The full report includes a detailed chronology of contributions. I expect this to be valuable for people seeking to reduce suffering risks via moral circle expansion, or otherwise exploring questions relating to artificial sentience/digital minds.

However, to keep the forum post brief and focus on the most relevant parts for people seeking to encourage the development of other research fields, I've stripped out the "methodology" and "results" sections, plus the reference list and any footnotes. Please refer to the full report if you're interested in those bits!

Thanks to Abby Sarfas, Ali Ladak, Elise Bohan, Jacy Reese Anthis, Thomas Moynihan, and Joshua Gellers for feedback.

Summary

This report documents the history of research on AI rights and other moral consideration of artificial entities. It highlights key intellectual influences on this literature as well as research and academic discussion addressing the topic more directly. 

We find that researchers addressing AI rights have often seemed to be unaware of the work of colleagues whose interests overlap with their own. 

Academic interest in this topic has grown substantially in recent years. This partly reflects wider trends in academic publishing, but certain influential publications, the growing ubiquity of AI and robotic technology, and relevant news events may all have encouraged increased interest in this specific topic.

We suggest four levers that, if pulled on in the future, might increase interest further: the adoption of publication strategies similar to those of the most successful previous contributors; increased engagement with adjacent academic fields and debates; the creation of specialized journals, conferences, and research institutions; and more exploration of legal rights for artificial entities.

Figure 1: Cumulative total of academic publications on the moral consideration of artificial entities, by date of publication (from Harris & Anthis, 2021)

Figure 2: A summary chronology of contributions to academic discussion of AI rights and other moral consideration of artificial entities

[Timeline chart spanning the pre-20th century to the 2010s and 2020s, showing when each stream of research emerged, from earliest to most recent: science fiction; artificial life and consciousness; environmental ethics; animal ethics; legal rights for artificial entities; transhumanism, EA, and longtermism; Floridi’s information ethics; machine ethics and roboethics; HCI and HRI; social-relational ethics; moral and social psych; synthesis.]

Discussion

Why has interest in this topic grown substantially in recent years?

The “Results” section above identifies a handful of initial authors who seem to have played a key role in sparking discussion relevant to AI rights in each new stream of research, such as Floridi for information ethics; Bostrom for transhumanism, effective altruism, and longtermism; and Gunkel and Coeckelbergh for social-relational ethics. Perhaps, then, some of the subsequent contributors who cited these authors were encouraged to address the topic because those writings sparked their interest in AI rights, or because the publication of those items reassured them that it was possible (and sufficiently academically respectable) to publish on the topic.

This seems especially plausible given that the beginning of exponential growth, sometime between the late ’90s and mid-’00s (Figure 1), coincides reasonably well with the first treatments of the topic by several streams of research (Figure 2). This hypothesis could be tested further through interviews with later contributors who cited those pioneering works. Of course, even if correct, this hypothetical answer to our question would raise another question: why did those pioneering authors themselves begin to address the moral consideration of artificial entities? Again, interviews (this time with the pioneering authors) may be helpful for further exploration.
 

A common theme in the introductions of and justifications for relevant publications is that the number, technological sophistication, and social integration of robots, AIs, computers, and other artificial entities are increasing (e.g. Lehman-Wilzig, 1981; Willick, 1983; Hall, 2000; Bartneck et al., 2005b). Some of these contributors and others (e.g. Freitas, 1985; McNally & Inayatullah, 1988; Bostrom, 2014) have been motivated by predictions about further developments in these trends. We might therefore hypothesize that academic interest in the topic has been stimulated by ongoing developments in the underlying technology.

Indeed, bursts of technical publications on AI in the 1950s and ’60s, artificial life in the ’90s, and synthetic biology in the ’00s seem to have sparked ethical discussions in which some contributors were largely unaware of previous, adjacent ethical debates.

Additionally, the “Results” section above details how several new streams of relevant research from the 1980s onwards seem to have arisen independently of one another; for example, Floridi’s information ethics and the early transhumanist writers did not cite each other, or the previous research on legal rights for artificial entities. Even within these categories of research there was sometimes little interaction, such as the absence of cross-citation amongst the earliest contributors to discussion of legal rights for artificial entities, HCI and HRI (where relevant to AI rights), and social-relational ethics. If these different publications addressing similar topics did indeed arise independently of one another, this suggests that one or more underlying factors were encouraging academic interest in the topic. The development and spread of relevant technologies is a plausible candidate for such an underlying factor.

However, the timing of the beginnings of exponential growth in publications on the moral consideration of artificial entities — seemingly from around the beginning of the 21st century (Figure 1) — does not match up very well with the timing and shape of technological progress. For example, there seems to have been only linear growth in industrial robot installations and AI job postings in the ’10s (Zhang et al., 2021), whereas exponential growth in computing power began decades earlier, in the 20th century (Roser & Ritchie, 2013). This suggests that while this factor may well have contributed to the growth of research on AI rights and other moral consideration of artificial entities, it cannot single-handedly explain it.
 

As noted in the “Synthesis and proliferation” subsection above, there have been a number of news events in the 21st century relevant to AI rights, and these have sometimes been mentioned by academic contributors to discussion on this topic. However, only a relatively small proportion of recent publications explicitly mention these events (Table 14). Additionally, the first relevant news event mentioned by multiple different publications was in 2006, whereas the exponential growth in publications seems to have begun prior to that (Figure 1). A particular news story also seems intuitively more likely to encourage a spike in publications than the start of an exponential growth trend.
 

If the growth in academic publications in general — i.e. across any and all topics — has a similar timing and shape to the growth in interest in AI rights and other moral consideration of artificial entities, then we need not seek explanations for growth that are unique to this specific topic. There is some evidence that this is indeed the case; Fire and Guestrin’s (2019) analysis of the Microsoft Academic Graph dataset identified exponential growth in the number of published academic papers throughout the 20th and early 21st century, and Ware and Mabe (2015) identified exponential growth in the numbers of researchers, journals, and journal articles, although their methodology for assessing the number of articles is unclear.

At a more granular level, however, the prevalence of certain topics can presumably deviate from wider trends in publishing. For example, Zhang et al. (2021) report “the number of peer-reviewed AI publications, 2000-19”; the growth appears to have been exponential in the ’10s, but not the ’00s. There was a similar pattern in the “number of paper titles mentioning ethics keywords at AI conferences, 2000-19.”

So it was not inevitable that the number of relevant publications would increase exponentially as soon as some of the earliest contributors had touched on the topic of the moral consideration of artificial entities. But science fiction, artificial life and consciousness, environmental ethics, and animal ethics all had some indirect implications for the moral consideration of artificial entities, even if they were not always stated explicitly. So it seems unsurprising that, in the context of exponential growth of academic publications, at least some scholars would begin to explore these implications more thoroughly and formally. Indeed, even though several of the new streams of relevant research from the 1980s onwards seem to have arisen largely independently of each other, they often owed something to one or more of these earlier, adjacent topics.

Which levers can be pulled on to further increase interest in this topic?

There seem to be two separate models for how the most notable and widely cited contributors to AI rights research have achieved influence.

Some, like Nick Bostrom, Mel Slater (and co-authors), and Lawrence Solum, have published relatively few items specifically on this topic, but where they have done so, they have integrated the research into debates or topics of interest to a broader audience. They've mostly picked up citations for those other reasons and topics, rather than for their discussion of the moral consideration of artificial entities. They've also tended to have strong academic credentials or a publication track record relevant to those other topics, which may be a necessary condition for success in pursuing this model of achieving influence.

Others, like David Gunkel and Luciano Floridi, published directly on this topic numerous times, continuing to build upon and revisit it. Many of their individual contributions attracted limited attention in the first few years after publication, but through persistent revisiting of the topic (and the passage of time) these authors have nonetheless accumulated impressive numbers of citations across their various publications relevant to AI rights. These authors continue to pursue other academic interests, however, and a substantial fraction of the interest in these authors (Floridi more so than Gunkel) seems to focus on how their work touches on other topics and questions, rather than its direct implications for the moral consideration of artificial entities.

Of course, these two models of paths to influence are simplifications. Some influential contributors, like Christopher Bartneck and Mark Coeckelbergh, fall in between these two extremes. There may be other publication strategies that could be even more successful, and it is possible that someone could adopt one of these strategies and still not achieve much influence. Nevertheless, new contributors could take inspiration from these two pathways to achieving academic influence — which seem to have been quite successful in at least some cases — when seeking to maximize the impact of their own research.
 

As noted above, a number of contributors have accrued citations from papers that addressed but did not focus solely on the moral consideration of artificial entities. Early contributions that addressed the moral consideration of artificial entities more directly, without reference to other debates, often languished in relative obscurity, at least for many years (e.g. Lehman-Wilzig, 1981; Willick, 1983; Freitas, 1985; McNally & Inayatullah, 1988). This suggests that engaging with adjacent academic fields and debates may help contributors increase the impact of their research relevant to AI rights. Relatedly, there is reason to believe that Fields’ first exposure to academic discussion relevant to AI rights may have been at an AI conference, perhaps encouraging them to write their 1987 article.

Although it seems coherent to distinguish between moral patiency and moral agency (e.g. Floridi, 1999; Gray et al., 2007; Gunkel, 2012), many successful publications have discussed both areas. For instance, much of the relevant literature in transhumanism, effective altruism, and longtermism has focused on threats posed to humans by intelligent artificial agents but has included some brief discussion of artificial entities as moral patients. Many writings address legal rights for artificial entities in tandem with discussion of those entities’ legal responsibilities to humans or each other. Before Gunkel (2018) wrote Robot Rights, he wrote The Machine Question (2012), which gave roughly equal weight to questions of moral agency and moral patiency. Even Floridi, who has often referred to information ethics as a “patient-oriented” ethics, has been cited numerous times by contributors interested in AI rights for “On the Morality of Artificial Agents,” his 2004 article co-authored with Jeff Sanders; 32 of the items in Harris and Anthis’ (2021) systematic searches (12.1%) have cited that article. Indeed, for some ethical frameworks, there is little meaningful distinction between agency and patiency. Similarly, some arguments both for (e.g. Levy, 2009) and against (e.g. Bryson, 2010) the moral consideration of artificial entities seem to be motivated by concern for indirect effects on human society. So contributors may be able to tie AI rights issues back to human concerns, discuss both the moral patiency and moral agency of artificial entities, or discuss both legal rights and legal responsibilities; doing so may increase the reach of their publications.

Artificial consciousness, environmental ethics, and animal ethics all had potentially important ramifications for the moral consideration of artificial entities. These implications were remarked upon at the time, including by some of the key thinkers who developed these ideas, but the discussion was often brief. Later, machine ethics and roboethics had great potential for including discussion relevant to AI rights, but some of the early contributors seem to have decided to mostly set aside such discussion. It seems plausible that if some academics had been willing to address these implications more thoroughly, AI rights research might have picked up pace much earlier than it did. There may be field-building potential from monitoring the emergence and development of new, adjacent academic fields and reaching out to their contributors to encourage discussion of the moral consideration of artificial entities.

As well as providing opportunities to advertise publications relevant to AI rights, engagement with adjacent fields and debates provides opportunities for inspiration and feedback. Floridi (2013) and Gunkel (2018) acknowledge discussion at conferences that had no explicit focus on AI rights as having been influential in shaping the development of their books. Additionally, several authors first presented initial drafts of their earliest relevant papers at such conferences (e.g. Putnam, 1960; Lehman-Wilzig, 1981; Floridi, 1999).
 

While the above points attest to the usefulness of engagement with adjacent fields and debates (e.g. by attending conferences, citing relevant publications), in order to grow further, it seems likely that AI rights research also needs access to its own specialized “organizational resources” (Frickel & Gross, 2005) such as research institutions, university departments, journals, and conferences (Muehlhauser, 2017; Animal Ethics, 2021). With a few exceptions (e.g. The Machine Question: AI, Ethics and Moral Responsibility symposium at the AISB / IACAP 2012 World Congress; Gunkel et al., 2012), the history of AI rights research reveals a striking lack of such specialized resources, events, and institutions. Indeed, it is only recently that whole books dedicated solely to the topic have emerged (Gunkel, 2018; Gellers, 2020; Gordon, 2020).

The creation of such specialized resources could also help to guard against the possibility that, as they intentionally engage with adjacent academic fields and debates, researchers drift away from their exploration of the moral consideration of artificial entities.
 

Detailed discussion of the legal rights of artificial entities was arguably the first area of academic enquiry to focus in much depth on the moral consideration of artificial entities. Articles that touch on the moral consideration of artificial entities from a legal perspective also seem more likely to accrue a substantial number of citations (e.g. Lehman-Wilzig, 1981; McNally & Inayatullah, 1988; Solum, 1992; Karnow, 1994; Allen & Widdison, 1996; Chopra & White, 2004; Calverley, 2008). Additionally, in recent years, there have been a number of news stories related to the legal rights of artificial entities (Harris, 2021). This could be due to differences in referencing norms between academic fields, but otherwise it weakly suggests that exploration of legal topics is more likely to attract interest, and to have immediate relevance to public policy, than more abstract philosophical or psychological topics.

Limitations

This report has relied extensively on inferences about authors’ intellectual influences based on explicit mentions and citations in their published works. These inferences may be incorrect, since there are a number of factors that may affect how an author portrays their influences.

For example, in order to increase the chances that their manuscript is accepted for publication by a journal or cited by other researchers, an author may make guesses about what others would consider to be most appealing and compelling, then discuss some ideas more or less extensively than they would like to. Scholars are somewhat incentivized to present their works as novel contributions, and so not to cite works with a substantial amount of overlap. Authors might also accidentally omit mention of previous publications or ideas that have influenced their own thinking. 

There are a few instances where a likely connection between authors has not been mentioned, although we cannot know in any individual case why not. One example is the work of Mark Coeckelbergh and Johnny Hartz Søraker, who were both advancing novel “relational” perspectives on the moral consideration of artificial entities while in the department of philosophy at the University of Twente, but who do not cite or acknowledge each other’s work. Another is that Nick Bostrom gained attention for his simulation argument, even though a similar point had been made earlier by fellow transhumanist Hans Moravec.

These examples suggest that the absence of mentions of particular publications does not prove that the author was not influenced by them. But there are also some reasons why the opposite may sometimes be true: an author might mention publications that had barely influenced their own thinking.

For example, they may be incentivized to cite foundational works in their field or works on adjacent, partly overlapping topics, in order to reassure publishers that there will be interest in their research. Alternatively, someone might come up with an idea relatively independently, but then conduct an initial literature review in order to contextualize their ideas; citing the publications that they identify would falsely convey the impression that their thinking had been influenced by those publications.

Since identified publications were sometimes filtered for relevance by title alone, it is likely that I have missed publications that contained relevant discussion but did not advertise this clearly in the title. Additionally, citations of included publications were often identified using the “Cited by…” tool on Google Scholar, but this tool seems to be imperfect, sometimes omitting items that I know to have cited the publication being reviewed.

This report initially used Harris and Anthis’ (2021) literature review as its basis, which relied on systematic searches using English-language keywords. This has likely led to a vast underrepresentation of relevant content published in other languages. There is likely at least some relevant work written in German, Italian, and other European languages. For example, Gunkel (2018) discussed some German-language publications that I did not see referenced in any other works (e.g. Schweighofer, 2001).

This language restriction has likely also led to a substantial neglect of relevant writings by Asian scholars. For Western scholars exploring the moral consideration of artificial entities, Asian religions and philosophies have variously been the focus of their research (e.g. Robertson, 2014), an influence on their own ethical perspectives (e.g. McNally & Inayatullah, 1988), a chance discovery, or an afterthought, if they are mentioned at all. However, very few items have been identified in this report that were written by Asian scholars themselves, and there may well be many more relevant publications.

This report has not sought to explore in depth the longer-term intellectual origins for academic discussion of the moral consideration of artificial entities, such as the precedents provided by various moral philosophies developed during the Enlightenment.

As I have discussed at length elsewhere (Harris, 2019), assessing causation from historical evidence is difficult; “we should not place too much weight on hypothesized historical cause and effect relationships in general,” or on “the strategic knowledge gained from any individual historical case study.” The commentary in the discussion section should therefore be treated as one interpretation of the identified evidence, rather than as established fact.

The keyword searches are limited to the items included from Harris and Anthis’ (2021) systematic searches. Those searches did not include all research papers with relevance to the topic. For example, the thematic discussion in this report includes a number of publications that could arguably have merited inclusion in that review, if they had been identified by the systematic searches.

The items identified in each keyword search were not manually checked to ensure that they referred to the keyword in the manner assumed. For example, the search for “environment” may have picked up mentions of that word that have nothing to do with environmental ethics (e.g. how a robot interacts with its “environment”), or items included just because they were published in, or cited another item published in, a journal with “environment” in its title.

Similarly, where multiple authors who might have been cited in the included publications share a surname (as is the case for at least the surnames Singer, Friedman, and Anderson), the keyword searches might overrepresent the number of citations of that author. In contrast, if an author’s name is sometimes misspelled by others (e.g. Putnam, Freitas, Lehman-Wilzig), the searches might underrepresent their citations.

Potential items for further study

What is the history of AI rights research written in languages other than English? This report almost exclusively included publications written in English, so relevant research in other languages may have been inadvertently excluded.

Given the difficulty in assessing causation through historical evidence and in making inferences about authors’ intellectual influences based solely on explicit mentions in their published works, it would be helpful to supplement this report with interviews of researchers and other stakeholders.

Previous studies and theoretical papers have identified certain features as potentially important for the emergence of “scientific/intellectual movements” (e.g. Frickel & Gross, 2005; Animal Ethics, 2021). A literature review of such contributions could be used to generate a list of potentially important features. The history of AI rights research could then be assessed against this list: which features appear to be present and which missing?

Histories of other research fields could be useful for better understanding which levers can be pulled on to further increase interest in AI rights. Such studies could focus on the research fields that have the most overlap in content and context (e.g. AI alignment, AI ethics, animal ethics) or that have achieved success most rapidly (e.g. computer science, cognitive science, synthetic biology).

There are numerous alternative historical research projects that could help to achieve the underlying goal of this report — to better understand how to encourage an expansion of humanity’s moral circle to encompass artificial sentient beings. For example, rather than focusing on academic research fields, historical studies could focus on technological developments that have already created or substantially altered sentient life, such as cloning, factory farming, and genetic editing.


Phil Tanny @ 2022-08-29T11:44 (+1)

Why has interest in this topic grown substantially in recent years?

 

Because it's a topic that offers academics another opportunity to position themselves as experts.

When intellectual inquiry becomes a business, the business agendas tend to take over the process. Academics will always try to make such investigations complicated, because it's only in complication that they can position themselves as experts, as being superior to the public, and thus meriting a salary, position, status, etc.

When intellectual inquiry is not a business, an opportunity arises to transcend never-ending complications in search of the fundamental pivot points at the heart of a problem, which are often quite simple in nature. For example...

PREMISE:  Unless we can gain control of the pace at which the knowledge explosion is generating new threats, AI is irrelevant, because if we're not destroyed by AI we'll be destroyed by something else.

Once intellectual inquiry is a business, we can't afford to focus on such fundamentals because to do so would sweep the entire subject of AI off the table, removing a great deal of business opportunity.