The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI
Birch (Jonathan)



Notes



"Birch (Jonathan) - The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI"

Source: Birch (Jonathan) - The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI


For a write-up, see this Note1.

Write-up2 (as at 04/02/2026 07:39:22): Birch - The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI

Introductory Notes
  1. This book was made available pre-publication (August 2024) and I’ve downloaded the free pdf – and now the free eBook – which seems to work fine on my Kindle for reading while walking Bertie. The eBook is unfortunately missing the diagrams, so I’m printing off the Chapter pdfs for subsequent review at my desk.
  2. I may buy the paperback when it comes out (or a cheap second-hand copy of the hardback).
  3. See the book’s website: Birch - The Edge of Sentience: Website.
  4. I’ve read the ‘Precautionary Framework’, repeated below, which is a summary of all the recommendations in the book, and I intend to add sundry footnotes to show where I do and don’t agree with the author. I may change my mind about these evaluations as I read and reflect on the text of the book, but I think it’s worth thinking hard about these issues before, during and after reading what the author has to say.
  5. As I read the Chapters, I added the TOCs and Summaries. Links are provided from the TOC below.

Publisher's Blurb
  • Can octopuses3 feel pain and pleasure? What about crabs, shrimps, insects, or spiders4? How do we tell whether a person unresponsive after severe brain injury5 might be suffering? When does a fetus6 in the womb start to have conscious experiences? Could there even be rudimentary feelings in miniature models of the human brain7, grown from human stem cells? And what about AI8?
  • These are questions about the edge of sentience, and they are subject to enormous, disorienting uncertainty. The stakes are immense, and neglecting the risks can have terrible costs. We need to err on the side of caution, yet it’s often far from clear what ‘erring on the side of caution’ should mean in practice9. When are we going too far10? When are we not doing enough?
  • The Edge of Sentience presents a comprehensive precautionary framework designed to help us reach ethically sound, evidence-based decisions despite our uncertainty.


Contents
  1. Summary of the Framework and Proposals
    1. A Walk along the Edge
    2. The Concept of Sentience
  2. Part I: The Zone of Reasonable Disagreement
    1. The Mind-Body Problem
    2. Ethics and Religion
    3. The Science of Consciousness and Emotion
  3. Part II: A Precautionary Framework
    1. Converging on Precautions
    2. Involving the Public
    3. Debating Proportionality
  4. Part III: Sentience and The Human Brain
    1. People with Disorders of Consciousness
    2. Fetuses and Embryos
    3. Neural Organoids
  5. Part IV: Sentience in Other Animals
    1. The Clearest Candidates
    2. Pushing the Boundaries
    3. Frontiers of Proportionality
  6. Part V: Preparing for Artificial Sentience
    1. Against Complacency
    2. Large Language Models and the Gaming Problem
    3. The Run-Ahead Principle
  7. Stepping Back


Summary of the Framework and Proposals

Precautionary Framework
  • A sentient being (in the sense relevant to the present framework) is a system with the capacity to have valenced experiences11, such as experiences of pain and pleasure.
  • A system S is a sentience candidate12 if there is an evidence base that (a) implies a realistic possibility of sentience in S that it would be irresponsible to ignore when making policy decisions that will affect S, and (b) is rich enough to allow the identification of welfare risks and the design and assessment of precautions.
  • A system S is an investigation priority if it falls short of the requirements for sentience candidature, yet (a) further investigation could plausibly lead to the recognition of S as a sentience candidate and (b) S is affected by human activity in ways that may call for precautions if S were a sentience candidate.
  • Framework Principle 1. A duty to avoid causing gratuitous suffering. We ought, at minimum, to avoid causing gratuitous suffering to sentient beings either intentionally or through recklessness/negligence. Suffering is not gratuitous if it occurs in the course of a defensible activity despite proportionate attempts to prevent it. Suffering is gratuitous if the activity is indefensible or the precautions taken fall short of what is proportionate.
  • Framework Principle 2. Sentience candidature can warrant precautions. If S is a sentience candidate, then it is reckless/negligent to make decisions that create risks of suffering for S without considering the question of what precautions are proportionate to those risks. Reasonable disagreement about proportionality is to be expected, but we ought to reach a policy decision rather than leaving the matter unresolved indefinitely.
  • Framework Principle 3. Assessments of proportionality should be informed, democratic, and inclusive. To reach decisions, we should use informed and inclusive democratic processes. These decisions should be revisited periodically and whenever significant new evidence emerges.
  • An example of an informed, inclusive, democratic process is a citizens’ panel or assembly that assesses the proportionality of possible responses by debating their permissibility-in-principle, adequacy, reasonable necessity, and consistency (the ‘PARC’ tests).

Proposals about Specific Cases
  • The following should be read as proposals for discussion and debate. There is a case for regarding the proposed measures as proportionate, so they merit consideration by an informed, inclusive democratic process, but my aim is not to pre-judge the outcome of such a process.
  • People with Disorders of Consciousness
    1. Proposal 1. Assume sentient (specific). Any signs that would be interpreted as signs of pain or distress in a conscious patient should still be so interpreted in a patient with any disorder of consciousness, and pain relief administered. This treatment should not be reserved for a subset of patients, e.g. those diagnosed as minimally conscious.
    2. Proposal 2. Assume sentient (general). A patient with a prolonged disorder of consciousness should not be assumed incapable of experience when an important clinical decision is made. All clinical decisions should consider the patient’s best interests as comprehensively as possible, working on the precautionary assumption that there is a realistic possibility of valenced experience and a continuing interest in avoiding suffering and in achieving well-being, but without taking this assumption to have implications regarding prognosis.
    3. Proposal 3. Avoid line-drawing (general). The diagnostic categories of PVS, MCS−, and MCS+ should be phased out in therapeutic and legal contexts. Clinicians should work with the broader category of ‘prolonged disorder of consciousness’ (PDOC) and with profiles of individual patients, tailoring care to the patient’s individual needs.
    4. Proposal 4. Avoid line-drawing (specific). Decisions about withdrawing life-sustaining treatment should be based on comprehensive assessment of the patient’s best interests, sensitive to the details of their case, and not on the PVS/MCS distinction.
    5. Proposal 5. The need for more humane options. Methods of hastening death other than through withdrawal of clinically assisted nutrition and hydration (CANH) are needed. At minimum, clinicians should not face any risk of punishment for administering large doses of sedatives or analgesics after a decision to withdraw CANH has been made.
    6. Proposal 6. Waiting for more information. If a humane method of hastening death becomes available, there will be no strong rationale for withdrawing life-sustaining treatment within days after injury, before the patient’s condition has stabilized. Given the great uncertainty about prospects for recovery at this stage, the default approach should be to delay the decision until the patient’s condition has stabilized.
  • Human Fetuses and Embryos
    1. Proposal 7. Sentience and abortion. The point at which a human fetus becomes sentient is not the point at which abortion becomes morally impermissible. We should separate these issues. The ethics of abortion depend primarily on questions of personhood and bodily autonomy, not on questions of sentience.
    2. Proposal 8. Human sentience candidature begins early. Human fetuses are sentience candidates from the beginning of the second trimester. This line may move as new evidence emerges, but it should always track the earliest scientifically credible, evidence-based estimate.
    3. Proposal 9. Fetal pain relief (in therapeutic contexts). Direct fetal anaesthesia and pain relief should be considered whenever therapeutic fetal surgery is performed. The public should be involved in discussions about general norms of medical practice (not specific decisions), and data regarding current practices should be collected and published to allow such discussions.
    4. Proposal 10. Fetal pain relief (in the context of abortion). Clinicians need to communicate uncertainty about fetal sentience honestly to patients. In some cases, fetal pain relief may be appropriate. Deliberative processes for setting clinical norms must give appropriate weight to the voices of women.
    5. Proposal 11. Sentience and the 14-day rule. If the main goal of setting a legal time limit on human embryo research were that of prohibiting research on human sentience candidates, a significant liberalization of the current 14-day rule would still be proportionate to that goal. However, the issue raises deep value conflicts that have little to do with sentience.
  • Human Neural Organoids
    1. Proposal 12. Brainstem rule. If a neural organoid develops or innervates a functioning brainstem (including the midbrain) that regulates arousal and leads to sleep-wake cycles, then it is a sentience candidate. An artificial functional equivalent of a brainstem would also suffice.
    2. Proposal 13. Targeted bans. If organoid research leads to the creation of organoids that are sentience candidates, a moratorium (time-limited ban) or indefinite ban on the creation of this particular type of organoid may be an appropriate response. Bans should avoid indiscriminate targeting of all organoid research.
    3. Proposal 14. Ethical review. When a neural organoid is a sentience candidate, research on it, if permitted at all, should be subject to ethical review and harm-benefit analysis, modelled on existing frameworks for regulating research on sentient animals.
  • Other Animals
    1. Proposal 15. All adult vertebrates are sentience candidates. Debates about proportionality are warranted in cases where human activities create risks of suffering to any adult vertebrate animal. Further investigation concerning sentience candidature in vertebrates should focus on juvenile/larval stages.
    2. Proposal 16. Sentience is neither intelligence nor brain size. We should be aware of the possibility of decouplings between intelligence, brain size, and sentience in the animal kingdom. Precautions to safeguard animal welfare should be driven by markers of sentience, not by markers of intelligence or by brain size.
    3. Proposal 17. Sentience is not pain. Although there are pragmatic reasons for the focus on pain in debates about animal sentience, we must be open to the possibility that the class of sentient animals is bigger than the class of animals that feel pain. Other forms of evidence can make an animal a sentience candidate, such as evidence of sophisticated forms of learning, attention, working memory, and planning.
    4. Proposal 18. Some invertebrates are sentience candidates. Coleoid cephalopod molluscs, decapod crustaceans of the suborder Pleocyemata, and insects (all when in the adult stage) are sentience candidates. Debates about proportionality are warranted in cases where human activities create risks of suffering to these animals. Decapod crustaceans of the suborder Dendrobranchiata, insect larvae, spiders, gastropods, and nematode worms are investigation priorities.
    5. Proposal 19. Codes of good practice and licensing. There should be a licensing scheme for companies attempting to farm sentience candidates or investigation priorities for which no welfare regulations yet exist (such as insects). Obtaining a license should be dependent on signing up to (and, where necessary, funding research leading to) a code of good practice concerning animal welfare.
    6. Proposal 20. Octopus farming. It is very unlikely that octopus farming can meet reasonable expectations regarding welfare and humane slaughter. It would be proportionate to ban octopus farming.
    7. Proposal 21. Towards humane slaughter. When an animal is a sentience candidate, it is proportionate to ban slaughter methods that needlessly risk extending and intensifying the suffering associated with dying, such as boiling animals alive without prior stunning.
  • Artificial Intelligence (AI)
    1. Proposal 22. Sentience is not intelligence (II). We should be aware of the possibility of a substantial decoupling between intelligence and sentience in the AI domain. Precautions to manage risks of suffering should be driven by markers of sentience, not markers of intelligence. For example, emulations of animal brains could achieve sentience without necessarily displaying impressive intelligence.
    2. Proposal 23. The gaming problem. For any set of criteria for sentience candidature, we need to be aware of the risk of the AI system or its designer learning (implicitly or explicitly) that they are regarded as criteria, leading to gaming of the criteria. We need to discount markers we have reason to think may have been gamed.
    3. Proposal 24. Deep computational markers. We can use computational functionalist theories (such as the global workspace theory and the perceptual reality monitoring theory) as sources of markers of sentience. If we find signs that an AI system, even if not deliberately equipped with such features, has implicitly learned ways of recreating them, this should lead us to regard it as a sentience candidate.
    4. Proposal 25. The run-ahead principle. At any given time, measures to regulate the development of sentient AI should run ahead of what would be proportionate to the risks posed by current technology, considering also the risks posed by credible future trajectories.
    5. Proposal 26. Codes of good practice and licensing (II). There should be a licensing scheme for companies attempting to create artificial sentience candidates, or whose work creates even a small risk of doing so, even if this is not an explicit aim. Obtaining a license should be dependent on signing up to (and, where necessary, funding the creation of) a code of good practice for this type of work that includes norms of transparency.

Chapter 01: A Walk along the Edge
  • Sections
    1. The Unmarked Border
    2. Decision Points
    3. When to Stop?
    4. Accidental Golems
    5. The Goal: A Precautionary Framework
    6. A Note on Influences
    7. Summary of Chapter 1
  • Summary
    • There is a family of cases at the edge of sentience. In these cases, grave decisions hinge on whether we regard sentience — initially introduced, informally, as ‘ethically significant experience’ — to be present or absent in a person, animal, or other cognitive system. The family includes people with disorders of consciousness, embryos and fetuses, neural organoids, other animals (especially invertebrates), and AI technologies that reproduce brain functions and/or mimic human behaviour.
    • It is worth studying these cases together not because there is a moral equivalence between them but because they present us with similar types of uncertainty. We need frameworks for helping us to manage that uncertainty and reach decisions. This book aims to develop a consistent precautionary framework that enshrines — but also goes beyond — the insight that we must err on the side of caution in these cases, take proportionate steps to manage risk, and avoid reckless or negligent behaviour. Where sentience is in doubt, we should give these systems the benefit of the doubt. What that means in practice will be considered in the rest of the book.


Chapter 02: The Concept of Sentience
  • Sections
    1. Sentience and Consciousness
    2. ‘Phenomenal Consciousness’ as Unstable Common Ground
    3. Affective Space
    4. Affective Spaces beyond the Human Case
    5. Three Views on the Nature of Valence
    6. The Ethical Significance of Valence
    7. Summary of Chapter 2
  • Summary
    • Sentience has broader and narrower senses. In a broad sense, it refers to any capacity for conscious experience. Conscious experience here refers to ‘phenomenal consciousness’, the idea that there is ‘something it’s like’ to be you. In a narrower sense, it refers specifically to the capacity to have valenced experiences — experiences that feel bad or feel good to the subject — such as experiences of pain and pleasure. This narrower sense of the term will be used as a working definition in the rest of the book (replacing the initial characterization of sentience in Chapter 1, where it was introduced as ‘ethically significant experience’).
    • A definition of sentience as the capacity for valenced experience can be common ground, despite continuing disagreement about the nature of conscious experience and the nature of valence. We can hope that the rather weak ‘something it’s like’ definition of phenomenal consciousness will eventually be superseded by a more theoretically loaded successor, but this must await much more theoretical consensus than currently exists. There are, at present, many theories of phenomenal consciousness, displaying massive variation, some of which will be discussed in Chapters 3, 5, and 6.
    • Valence, meanwhile, is one of the two major dimensions, along with arousal, that structure our affective experiences, and the one that matters most from an ethical point of view. Valenced experiences do not have to fall under traditional human categories. There may well be valenced experiences in other beings that our everyday categories (e.g. ‘pain’, ‘fear’, ‘anger’) fail to capture well.
    • There are three main accounts of the nature of valence (hedonic quality, non-conceptual representation of value, and imperative content), though all agree that valence is a real and general feature of affective experience. Meanwhile, three major ethical theories (classical utilitarianism, Korsgaard’s neo-Kantian view, and Nussbaum’s capabilities approach) have converged from different directions on the special ethical significance of valence. There are no obvious logical entailments between theories of the nature of valence and theories of its ethical significance, but some packages fit together better than others.


Chapter 03: The Mind-Body Problem
  • Sections
    1. Disagreeing Reasonably about Sentience
      → Baseless Recommendations
      → Dogmatism
      → Moral Views beyond the Pale
    2. Might Conscious Experience Leave No Trace on Behaviour?
    3. Is Conscious Experience One Kind or Many?
    4. Radical Alternatives
      → Interactionist Dualism
      → Russellian Monism
      → Biopsychism
      → The Integrated Information Theory
    5. Are Agency and Embodiment Necessary?
    6. The Relevant Scale of Functional Organization
    7. Are There Borderline (Indeterminate) Cases of Sentience?
    8. Summary of Chapter 3
  • Summary
    • Reasonable disagreement about sentience requires responsiveness to evidence and argument. It excludes baseless recommendations, dogmatic adherence to refuted theories, and morally abhorrent (e.g. sadistic) positions. However, the uncertainty in this area is such that many very different positions can be held by reasonable people who are well versed in the relevant science and philosophy.
    • This chapter has examined sources of disagreement that have their origins in the philosophy of mind. The focus in this literature has been on the ‘phenomenal consciousness’ aspect of sentience, not the valence aspect. Epiphenomenalism, the idea that consciousness leaves no causal imprint on the physical world, is a coherent possibility, but one that threatens to leave us unable to resolve disputes at the edge of sentience using physical evidence. Luckily, it is empirically implausible, because consciousness bears all the hallmarks of being an evolved adaptation.
    • There are three main ways to make sense of the idea that consciousness has a place in the causal order of the physical world: materialism, interactionist dualism, and Russellian monism. Materialism covers many different positions on which conscious experience has a physical basis. A significant fault-line is between materialists who identify phenomenal consciousness with a single, unified natural kind and those who suspect it gestures indeterminately towards two or more kinds. Interactionist dualism and Russellian monism have the status of radical alternatives, but cautious formulations of these views can be reasonable hypotheses.
    • Biopsychism can be reasonable too if advanced as a speculative hypothesis, rather than as a claim which receives support from evidence-based accounts of the nature of life. The integrated information theory of consciousness, which in its full form involves a metaphysical background picture with elements of Leibnizian idealism, has a similar status: a reasonable hypothesis to put forward, provided one is prepared to acknowledge its highly speculative nature. In both cases, the key is not to present a speculative metaphysical picture as a claim supported by scientific evidence.
    • There are several further axes of philosophical disagreement that lead people to disagree about the edge of sentience. One concerns the importance of agency and embodiment, real or virtual. Another concerns the scale of functional organization that matters: can small-scale (e.g. within-neuron) functions make a difference to the presence or absence of sentience, or is large-scale functional organization (e.g. possessing a global workspace) all that counts?
    • A third concerns whether the edge of sentience is sharp or blurred: must there be a sharp transition between the absence of sentience and its presence in its most minimal forms, or can there (as with baldness, tallness, oldness) be borderline cases in which there is no determinate fact of the matter one way or the other? Both positions are reasonable, and the arguments so far given for each position are inconclusive.


Chapter 04: Ethics and Religion
  • Sections
    1. Bio- and Ecocentrism
    2. Agency-centric Views
    3. Consciousness without Valence
    4. Rationality-centric Views
    5. Abrahamic Religions and the Stewardship Tradition
    6. Indian Religions and Ahimsa
    7. Summary of Chapter 4
  • Summary
    • Chapter 2 highlighted a convergence across classical utilitarianism, Korsgaard’s neo-Kantianism, and Nussbaum’s capabilities approach regarding the ethical significance of sentience. These ‘sentientist’ outlooks can be contrasted with those that deny the necessity and/or sufficiency of sentience for moral standing.
    • The clash between sentientist and biocentrist/ecocentrist positions is easily overstated: reasonable versions of these views will grant some significance to sentience, while insisting that other ecological properties such as biodiversity also matter. Meanwhile, the possibility of Vulcan-like beings who have the consciousness aspect of sentience without the valence aspect suggests that a qualification to pure sentientism may be needed. This qualification may matter in the future (if we develop conscious but valence-free AI) but has little immediate significance.
    • A more serious challenge to sentientism comes from agency-centric and rationality-centric positions. An example of the latter is orthodox Kantianism, which allows only indirect duties (formally owed to ourselves) concerning non-rational sentient beings. The need to bring rationality-centric viewpoints with us places a limit on how ambitious we can be when looking for ethical consensus.
    • Another challenge comes from the Abrahamic religions, which give only very limited moral standing to non-human sentient beings. We can, however, find in all of them support for duties of stewardship, including the duty to avoid causing gratuitous suffering. Indian religions, by contrast, share important elements of the sentientist outlook, though they have reached these overlapping conclusions by a very different path.


Chapter 05: The Science of Consciousness and Emotion
  • Sections
    1. Grades of Optimism
    2. The Conscious and the Unconscious: The Case of Blindsight
    3. The Conscious and the Unconscious: Wider Lessons
    4. Conscious and Unconscious Affect?
    5. How Important Is the Neocortex? Two Contrasting Pictures
    6. Looking beyond the Mammalian Case
    7. Summary of Chapter 5
  • Summary
    • This chapter has focused on sources of disagreement in the science of consciousness and emotion. To have a science of consciousness at all, we need reliable ways of disentangling conscious and unconscious processing. In the case of vision, long-running debates about blindsight epitomize two main sources of uncertainty: the criterion problem and the problem of confounders. These sources of uncertainty arise even more strongly in the case of valenced experience (including emotions like fear), since methods for eliciting unconscious analogues of valenced experiences are less mature.
    • In the absence of secure ways of dissociating valenced experience from its unconscious analogues, two rival pictures of the neural basis of valenced experience in the mammalian brain persist. On one picture, valenced experience wells up directly from subcortical mechanisms without the need for any further cortical processing. On the other, subcortical circuits produce coordinated behavioural responses without the involvement of any kind of conscious processing, with even the simplest, ‘anoetic’ level of consciousness requiring cortical ‘re-representation’ of subcortical activity. Current evidence does not allow us to choose confidently between these pictures.
    • These research programmes have all had a strong mammalian focus, with most of the evidence coming from humans, macaques, and rats. They have not yielded theories that apply in a clear, unambiguous way to non-mammalian animals. In these cases, we are typically left with a good amount of behavioural evidence, and some understanding of the underlying neural mechanisms, but no agreed theoretical basis on which to judge how similar the neural mechanisms must be in order to support sentience. The most appropriate way forward in these cases is to use the evidence that exists to guide precautionary thinking, a task that will be taken up in Part II of the book.


Chapter 06: Converging on Precautions
  • Sections
    1. Uncertainty, Inconclusiveness, and Dissensus
    2. A Scientific Meta-consensus on the Range of Realistic Possibilities
    3. Two Bridging Concepts: Sentience Candidates and Investigation Priorities
    4. In Search of Ethical Framework Principles
    5. Back to the Edge
    6. Relation to Other Precautionary Ideas
    7. Summary of Chapter 6
  • Summary
    • This chapter has sought points of consensus across the zone of reasonable disagreement. To do this, it introduced two crucial bridging concepts: sentience candidates and investigation priorities. The key idea is that the zone of reasonable disagreement is wide, but not so wide as to preclude a meta-consensus about what it takes to be a sentience candidate. Of particular relevance to later chapters is that the relatively inclusive theories of Panksepp, Damasio, and Merker, which link sentience to evolutionarily ancient mechanisms in the midbrain, describe realistic possibilities, so a system with these mechanisms is a sentience candidate. When the evidence falls short of showing that a system is a sentience candidate, but there are still welfare risks that may call for a precautionary response, the system should be classed as an investigation priority.
    • There is, moreover, an ethical consensus around a duty to, at minimum, avoid causing gratuitous suffering to sentient beings. Suffering is not gratuitous if it occurs in the course of a defensible activity despite proportionate attempts to prevent it. Suffering is gratuitous if the activity is indefensible or the precautions taken fall short of what is proportionate.
    • These ideas (the concept of a sentience candidate and the duty to avoid gratuitous suffering) are combined in the principle that precautions may be proportionate whenever a being is a sentience candidate. This leaves the issue of what proportionality requires, and we should expect disagreement about this. All can agree, however, on the unacceptability of a free-for-all in which everyone can treat sentience candidates however they wish. To avoid this, we need informed and inclusive democratic processes to reach decisions on proportionality, which can then be revisited periodically and whenever significant new evidence emerges. The question of what sort of process might fit the brief of being ‘informed, inclusive, and democratic’ will be taken up in the next two chapters.
    • This framework is ‘precautionary’ in the sense that it endorses several of the core tenets of precautionary thinking: we should not let scientific uncertainty paralyse decision-making, precautions should be proportionate to the threat, our procedures for debating precautions should be democratic and inclusive, and the evidential bar for triggering those procedures should be set reasonably low. However, the case for adopting this framework does not assume any prior sympathy for precautionary approaches to other problems.


Chapter 07: Involving the Public
  • Sections
    1. Citizens’ Assemblies and Panels: The Basic Idea
    2. Avoiding the Tyranny of Expert Values
    3. Three Alternatives
      → Elected Representatives
      → Referendums
      → A Common Currency
    4. Objections to Citizens’ Panels: Representativeness, Deference, Competence
      → Representativeness
      → Deference
      → Competence
    5. Summary of Chapter 7
  • Summary
    • A ‘tyranny of expert values’ occurs when the values of expert advisers determine a policy decision without those values being properly scrutinized by a democratic process. Citizens’ panels and assemblies, composed of random samples of the wider population, can be an attractive way to avoid the tyranny of expert values, when carefully designed.
    • Citizens’ panels can have advantages over elected assemblies and referendums. These advantages are especially clear when an issue generates deep value conflicts, requires sustained attention and regular revisiting, requires consideration of the interests of beings who cannot vote, and when there are reasons to departisanize the issue. Questions of proportionality at the edge of sentience have all of these properties.
    • To be adequately representative, citizens’ panels/assemblies should be sizeable (at least 150 citizens) but should fission into smaller groups of 10–25 for deliberation. They should have the power to co-opt new members to ensure that important but marginalized perspectives are represented in every deliberative group.
    • Elected assemblies should not defer to citizens’ panels in all circumstances. In cases where a panel makes a recommendation that contradicts the core policy platform of an elected government, a referendum may be justified as a way to resolve the deep value conflict.
    • Since members of citizens’ panels do not generally have scientific training, careful thought needs to be given to the structure of deliberation, so that they are not forced into a position of arbitrating scientific disagreement. Their focus should be on whether or not a proposed response can be publicly justified as proportionate to an identified risk, not on whether a being is a sentience candidate.


Chapter 08: Debating Proportionality
  • Sections
    1. Curating Options: Learning from the Climate Assembly UK
    2. A Pragmatic Analysis of Proportionality
      → Test 1: Permissibility-in-Principle
      → Test 2: Adequacy
      → Fig. 8.1 The PHIA probability yardstick. Versions of this yardstick can be found frequently in UK government documents.
      → Test 3: Reasonable Necessity
      → Test 4: Consistency
    3. The Division of Labour Implicit in the PARC Tests
      → Table 8.1 A summary of the PARC tests and the division of labour they induce between experts and members of the public.
      → Fig. 8.2 A procedure for assessing the proportionality of a proposed response to an identified risk.
    4. Public Policy and Private Policies
    5. Philosopher as Sage, Philosopher as Proposer
    6. Summary of Chapter 8
  • Summary
    • This chapter has set out a procedure for debating proportionality. In this procedure, a citizens’ panel deliberates following the structure laid out in Figure 8.2.
    • The panel is presented with a shortlist of feasible options on which stakeholders have been openly consulted. To each policy option, the panel applies four tests in sequence: permissibility-in-principle, adequacy, reasonable necessity, and consistency (Table 8.1). Proposals that fail a test are set aside. Proposals that pass all four tests are judged proportionate.
    • The four tests introduce a division of labour between the panel and its expert advisers. At each stage, the expert advisers provide on-demand input regarding the likely consequences of different policy options, but it falls to ordinary citizens to debate the central evaluative questions. These questions can be easily understood and do not require arbitrating scientific disagreements.
    • A government is the ideal implementing agent for such a process, but other organizations (e.g. universities) can conduct similar processes. The process may be approximated in other, non-ideal ways, such as by a small panel (or even a single individual) reflecting on whether their decisions could be justified to a more inclusive panel implementing the full process.
    • In these debates, those with expertise in philosophy can bring their expertise to bear by explaining and elucidating value conflicts, mapping out consistencies and inconsistencies between different positions, and making proposals. These roles can be played without claiming additional, sage-like expertise over the correct way to resolve conflicts between deeply held values.
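The sequential application of the four PARC tests can be sketched as a simple filter. This is a purely illustrative sketch: the function and parameter names are my own, and in the book the ‘tests’ are deliberative value-judgements made by a citizens’ panel with expert input, not computations.

```python
# Illustrative sketch of the PARC procedure (Chapter 8): each policy
# option must pass four tests in sequence; failing any test sets the
# option aside, and options passing all four are judged proportionate.
# Names are hypothetical, not drawn from the book.

PARC_TESTS = [
    "permissibility-in-principle",
    "adequacy",
    "reasonable necessity",
    "consistency",
]

def assess_proportionality(options, panel_verdict):
    """Return the options the panel judges proportionate.

    `options` is the curated shortlist of feasible policy options;
    `panel_verdict(option, test)` stands in for the panel's deliberative
    judgement on one test (True = passes). `all()` short-circuits, so an
    option failing an early test is set aside without the later tests
    being applied, mirroring the sequential structure of Figure 8.2.
    """
    return [
        option
        for option in options
        if all(panel_verdict(option, test) for test in PARC_TESTS)
    ]

# Hypothetical usage: an outright ban judged to fail the adequacy test.
verdict = lambda opt, test: not (opt == "ban" and test == "adequacy")
print(assess_proportionality(["ban", "licensing scheme"], verdict))
```

The point of the sketch is only the division of labour it makes visible: the shortlist and the consequences of each option come from experts and stakeholders, while the pass/fail verdicts are evaluative judgements reserved for the panel.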


Chapter 09: People with Disorders of Consciousness
  • Sections
    1. The Problem of Diagnostic Uncertainty
    2. The Search for Cognitive-Motor Dissociation
    3. The Realistic Possibility of Continuing Valenced Experience in the PVS
    4. Pain Management and the ‘Assume Sentient’ Principle
    5. Moving Past the PVS/MCS Distinction
    6. The Question of Treatment Withdrawal
    7. Summary of Chapter 9
  • Summary
    • Sometimes a person, after brain injury, displays sleep-wake cycles but has severely impaired, or entirely absent, responses to external stimuli. Traditionally, attempts have been made to distinguish the persistent vegetative state (PVS) from the minimally conscious state (MCS), itself subdivided into ‘MCS plus’ and ‘MCS minus’.
    • Diagnostic procedures for distinguishing PVS from MCS are subject to high error rates and high uncertainty. Moreover, there are cases of patients who, after being diagnosed as PVS through a very thorough application of behavioural criteria, show signatures of brain activity in response to commands. And there is a realistic possibility that midbrain mechanisms suffice for basic valenced experiences, even if cortical injury prevents a patient from reporting these experiences.
    • In the context of pain management, an ‘assume sentient’ principle is appropriate. Any signs that would be interpreted as signs of pain or distress in a conscious patient should still be so interpreted in a patient with any disorder of consciousness, and pain relief administered. This treatment should not be reserved for a subset of patients, e.g. those thought to be minimally conscious. In fact, this ‘assume sentient’ principle can be extended, with care, to all clinically important decisions. This must be understood as a precautionary assumption — a way of erring on the side of caution — and not evidence for or against any particular prognosis.
    • We should, in fact, aim for a situation in which the problematic PVS/MCS distinction carries no clinical significance and can be retired. All decisions should be based on the best interests of the patient, taking all relevant information into account, including information about their individual profile of responsiveness and capacity, and regarding continuing affective experiences as a realistic possibility.
    • In some cases, the decision-making team (including clinicians and families) is in agreement that sustaining life indefinitely is not in the patient’s best interests. Here too, the decision should be based on comprehensive best-interests assessment sensitive to the details of the case, and not on the PVS/MCS distinction.
    • The method of withdrawing clinically assisted nutrition and hydration (CANH) would not be acceptable for any other sentient being, and alternatives must be explored and discussed by inclusive, democratic processes. Part of the problem is that clinicians fear prosecution if they administer large doses of sedatives or analgesics after CANH withdrawal. In cases where CANH withdrawal has already been authorized, clinicians should be guaranteed that hastening death in this way will not lead to punishment. If alternative methods were available, clinicians and patients’ families would be better able to watch and wait during the early days after a traumatic brain injury, rather than moving quickly towards a decision to withdraw treatment before a patient’s condition has stabilized.


Chapter 10: Fetuses and Embryos
  • Sections
    1. The Cautionary Tale of Newborn Pain
    2. Fetal Sentience and Women’s Rights: Separating the Issues
    3. Fetuses as Sentience Candidates
    4. Taking a Precautionary Stance towards Fetuses
    5. Communicating Uncertainty in the Abortion Clinic
    6. Human Embryos and the 14-Day Rule
    7. The 14-Day Rule and Sentience
    8. Summary of Chapter 10
  • Summary
    • In the 1980s, growing public awareness that anaesthesia was not routinely being used on newborn babies during surgery led to a successful campaign to change clinical practice, vindicated by subsequent evidence. The story shows us the value of taking a precautionary attitude towards existing evidence and the value of involving the public in discussion of general clinical norms.
    • When we turn to fetuses, a concern arises that recognizing fetal sentience may be incompatible with recognizing a right to abortion. However, these are substantially separate issues. The time limit for abortions depends on when the fetus becomes a person and/or on the strength of a person’s right to bodily autonomy. Sentience is not sufficient for personhood, so this is not fundamentally a question about sentience.
    • We should recognize human fetuses as sentience candidates from the start of the second trimester. This aligns with the earliest scientifically credible, evidence-based estimates in the zone of reasonable disagreement. Future evidence could move the threshold for sentience candidature in either direction, but it should always align with the earliest scientifically credible, evidence-based estimate.
    • Whenever therapeutic surgery is performed on a fetus (and this happens for a variety of reasons), direct administration of anaesthesia and pain relief should be considered. This can be justified by the need to control the fetal stress response, even setting aside the possibility of sentience. Specific decisions need to be made by expert anaesthesiologists in discussion with surgeons and patients, but the public can and should be involved in discussions of the general norms of medical practice in this area and the value-judgements implicit in those norms.
    • Although recognizing second-trimester fetuses as sentience candidates does not give us a reason to change the legal time limit on abortions, it does require honest communication of uncertainty with patients. Clinical norms in this area need to be formulated by appropriately inclusive processes, and these processes need to give appropriate weight to the voices of women.
    • The anti-precautionary attitude that has sometimes been taken towards neonates and fetuses can be contrasted with the strongly precautionary attitude often taken towards human embryo research, encapsulated in the ‘14-day rule’. The original rationale for the 14-day rule (in the UK) rests on the dubious idea13 that the development of an individual human being begins with the appearance of the primitive streak. In practice, this has always represented a strategic compromise between deeply conflicting values, and the conflict has never been satisfactorily resolved.
    • If the main goal were simply that of avoiding research on human beings who are sentience candidates, a significantly liberalized legal limit would still be proportionate to that goal. However, sceptics of human embryo research are often driven by concerns unrelated to sentience. The best way to resolve this value conflict stably and for the long term is through democratic, inclusive processes in which those who oppose liberalization are properly represented.


Chapter 11: Neural Organoids
  • Sections
    1. The Promise of Organoid Research
    2. No Risk of Sentience?
    3. Early Warning Signs
    4. Assessing Sentience Candidature in Neural Organoids
    5. Analogies with Embryos
    6. The Brainstem Rule
    7. Possible Regulatory Frameworks
    8. Summary of Chapter 11
  • Summary
    • Human neural organoids are showing great promise as models of the human brain, models that could potentially replace a substantial amount of animal research. It would be hasty to dismiss the possibility they could develop sentience. However, scepticism about this idea is appropriate when considering current organoids (at the time of writing). This is not because of their size, but because of their organization: current organoids lack a functioning brainstem or anything equivalent to one. There are nonetheless some troubling early warning signs, suggesting that organoid research may create forms of sentient being in the future.
    • Researchers with very different views about the neural basis of sentience can unite behind the ‘brainstem rule’: if a neural organoid develops or innervates a functioning brainstem that regulates arousal and leads to sleep-wake cycles, then it is a sentience candidate. An artificial brainstem substitute may also be enough. This is proposed as a sufficient condition for sentience candidature. When a system is a sentience candidate, we should take the possibility of its sentience seriously and discuss proportionate steps to protect its welfare, despite continuing uncertainty and doubt.
    • What steps might be proportionate? If organoid research leads to the creation of organoids that are sentience candidates, a moratorium (time-limited ban) or indefinite ban on the creation of this particular type of organoid may be appropriate, but bans should avoid indiscriminate targeting of all organoid research. An alternative approach, consistent with existing approaches to animal research, is to require ethical review and harm-benefit analysis whenever a neural organoid is a sentience candidate.


Chapter 12: The Clearest Candidates
  • Sections
    1. Fishes and Invertebrates as the New Centre of the Debate
    2. Octopuses as ‘Honorary Vertebrates’
    3. The Story of the ‘Sentience Act’
    4. The Institute of Medical Ethics (IME) Criteria
    5. Problems with the IME Criteria
    6. My Team’s Revised Criteria
    7. The Question of Generalization
    8. What We Found, in Brief
      → Example 1: Conditioned Place Avoidance in Octopuses
      → Example 2: ‘Anxiety-Like States’ in Crayfish
      → Example 3: Integrative Brain Regions Linked to Learning and Memory
    9. From Grades of Evidence to Sharp Decisions
    10. Some Critical Reflections
    11. Summary of Chapter 12
  • Summary
    • Since all adult vertebrates are sentience candidates, debate in recent years has shifted towards invertebrates, where sentience candidature is more contestable. Octopuses are often regarded as sentient due to their large brains and impressive intelligence. Yet sentience is neither intelligence nor brain size, and we should be aware of the possibility of decouplings between intelligence, brain size, and sentience in the animal kingdom. Precautions to safeguard animal welfare should be driven by markers of sentience, not by markers of intelligence or by brain size14.
    • That said, even when we restrict our focus to markers of sentience, octopuses are clearly sentience candidates. I led a review in 2021 that considered the evidence of sentience in cephalopod molluscs (octopuses, squid, cuttlefish, and nautiluses) and decapod crustaceans (including many crabs, lobsters, crayfish, and shrimps). We constructed a framework based on eight theoretically well-motivated criteria. We reported with high confidence that octopuses satisfied at least seven of the eight criteria.
    • More broadly, we found a complicated evidential picture. The question of when enough evidence has amassed to imply a realistic possibility of sentience that it would be irresponsible to ignore is a difficult judgement call. In my view, the coleoid cephalopods and decapods of the Pleocyemata suborder are clear sentience candidates, whereas decapods of the Dendrobranchiata suborder (including penaeid shrimps) are investigation priorities.


Chapter 13: Pushing the Boundaries
  • Sections
    1. Insects: The Old Received Wisdom
    2. Insects: The Emerging New Picture
      → Judgement Bias
      → Evaluative Modelling in the Central Complex
      → Working Memory, Attention and Sophisticated Associative Learning
    3. Insects as Sentience Candidates
    4. Four Investigation Priorities
      → Gastropod Molluscs
      → Nematode Worms
      → Spiders
      → Insect Larvae
    5. Neither Sentience Candidates nor Investigation Priorities: Plants and Unicellular Organisms
    6. Summary of Chapter 13
  • Summary
    • The idea of pain in insects has often been dismissed on the grounds that insects will continue normal feeding and mating behaviours despite catastrophic injury. But this is compatible with insects having aversive experiences in response to other stimuli, such as heat, and there is emerging evidence that they do.
    • Importantly, sentience is not just pain, and there are ways to make a case for insect sentience that do not proceed via responses to noxious stimuli. There is evidence that insects have a behavioural core control unit functionally similar to that taken to be sufficient for sentience by Merker’s theory. There is also evidence, mainly from bees, of working memory and of forms of associative learning that seem to be facilitated by conscious experience in humans. These lines of evidence push us to take seriously the possibility of sentience in insects.
    • When we turn our attention to molluscs other than the cephalopods (such as gastropod molluscs, including snails) and other arthropods (such as arachnids, including spiders15) we find a frustrating paucity of studies looking for markers of sentience. There is some evidence of prospective simulation in Portia spiders, suggesting experiments aimed at exploring sentience in arachnids would be worthwhile. But we currently lack the sort of evidence base that would be needed to inform evidence-based discussions about proportionality. These animals, along with nematode worms and insect larvae16, should be seen as investigation priorities.


Chapter 14: Frontiers of Proportionality
  • Sections
    1. Taking Invertebrates Seriously
    2. Codes of Good Practice and Licensing Schemes
    3. Against Octopus Farming
    4. Towards Humane Slaughter
    5. Summary of Chapter 14
  • Summary
    • Taking invertebrate welfare seriously involves proposing and debating steps that may be proportionate (permissible-in-principle, adequate, non-excessive, consistent) in relation to specific welfare risks. The growing interest in invertebrates as potentially sustainable protein sources underlines the need for these debates.
    • A plausibly proportionate step would be to set up a licensing scheme that requires companies farming sentience candidates or investigation priorities to fund the creation (by independent experts) of codes of good practice, where none yet exists. There is one important exception, namely octopus farming. It is very unlikely that octopus farming could ever meet reasonable expectations regarding animal welfare, and so it would be proportionate to ban it outright.
    • Our current state of knowledge makes it difficult to be confident that slaughter is humane or that stunning is effective. It is easier to be confident about what is inhumane. We should not use live boiling without prior stunning in decapods, since this method needlessly risks extending and intensifying the suffering associated with dying.


Chapter 15: Against Complacency
  • Sections
    1. A Case against Complacency
    2. Sources of Risk 1: Whole-Brain Emulation
    3. Sources of Risk 2: Artificial Evolution
    4. Sources of Risk 3: Minimal Implementations of Cognitive Theories of Consciousness
    5. Summary of Chapter 15
  • Summary
    • We should not be complacent about the risks of developing sentient AI in the near future. Large language models (to be discussed in the next chapter) already present some risk, because they can implicitly acquire algorithms during training, we have no grip on how sophisticated these algorithms can be, and large-scale computational functionalism is generally considered a realistic possibility in consciousness science (see §3.6).
    • Three other pathways to artificial sentience candidates are also worth taking seriously. The first involves emulating the brains of sentience candidates such as insects, neuron by neuron, based on their connectomes. The resulting virtual brains are sentience candidates if they display the same pattern of behavioural markers that we take as sufficient for sentience candidature in the biological original. The second path involves evolving artificial agents that converge on similar patterns of behavioural markers to biological sentience candidates. The third involves deliberately implementing a minimal version of a large-scale computational feature (such as a global workspace) that is credibly linked to sentience in humans.
    • All three pathways present ways in which we might come to recognize a system as an artificial sentience candidate even though it does not display impressive feats of intelligence. We need to be mindful of the possibility of significant decouplings of sentience from intelligence in this area.


Chapter 16: Large Language Models and the Gaming Problem
  • Sections
    1. The Gaming Problem: When the System Knows the Criteria
    2. Boxing the AI: Schneider and Turner’s ‘Artificial Consciousness Test’
    3. The Need for Deep Computational Markers
    4. Summary of Chapter 16
  • Summary
    • When an artificial agent is able to intelligently draw upon huge amounts of human-generated training data (as in large language models, or LLMs), the result can be gaming of our criteria for sentience. Gaming occurs when systems mimic human behaviours that are likely to persuade human users of their sentience without possessing the underlying capacity. No intentional deception is needed for gaming. It could happen in service of benign, mundane objectives, such as maximizing user-satisfaction or maximizing interaction time.
    • The gaming problem initially leads to the thought that we should ‘box’ AI systems when assessing their sentience candidature: that is, the system must be denied access to a large corpus of human-generated training data. However, this would destroy the capabilities of any LLM, thereby setting an impossibly high bar.
    • This in turn leads to the thought that what we really need in the AI case are deep computational markers, not behavioural markers. We can use current computational functionalist theories of consciousness as a possible source of markers. If we find signs that an LLM, though not deliberately equipped with a global workspace or perceptual/evaluative reality monitoring system, has implicitly learned ways of recreating them, this should lead us to regard it as a sentience candidate. The main problem with this proposal is that we currently lack the sort of access to the inner workings of LLMs that would allow us to reliably ascertain which algorithms they have implicitly picked up during training.


Chapter 17: The Run-Ahead Principle
  • Sections
    1. Metzinger’s Call for a Moratorium
    2. A Moderate Alternative
    3. Will We Soon Need an AI Welfare Law?
    4. A Call for Democratic Debate
    5. Summary of Chapter 17
  • Summary
    • Given the rate at which AI is developing, and the risks associated with artificial sentience taking us by surprise, we should apply the run-ahead principle: at any given time, measures to regulate the development of sentient AI should run ahead of what would be proportionate to the risks posed by current technology, considering also the risks posed by credible future trajectories.
    • The run-ahead principle may potentially justify strong regulatory action, but a moratorium, such as that proposed by Metzinger, may go beyond what is reasonably necessary to manage risk. Meanwhile, Schneider’s more moderate alternative — involving regular testing to monitor the sentience of our AI creations — is currently unfeasible, given the absence of tests that can be applied to large language models and other systems with high potential for gaming our criteria. A third approach involves oversight by means of sector-wide codes of good practice and licensing schemes. Yet this path would require a greater level of transparency than we have seen from the AI industry to date.
    • The overarching imperative is to have democratic debate about these questions now, in the hope that we might be prepared for the upheaval of human lives that artificial sentience candidates will inevitably precipitate, if and when they arrive.


Stepping Back (Full Text)
  • In the woods near my home a tiny fly, about the size of Drosophila, landed on the middle finger of my left hand. I usually blow flies away. This time, this book weighing on my mind, I let the fly sit there. Like the octopus in My Octopus Teacher, the fly seemed oddly curious about my skin, exploring the strange textures and odours, raising and lowering its limbs. I walked with them for twenty minutes or so.
  • The longer you watch a fly, the easier it becomes to see them for what they are: an exploratory, unpredictable, inscrutable creature with a complex brain. A false perception of the fly as a nondescript nuisance gives way to something more accurate, more attuned to reality. You think at first that a being of that size could not be as complex or interesting as a dog or an octopus. Looking closely makes you think again.
  • I am enough of an optimist about human ingenuity to find it likely we will one day build systems as capable and complex as flies. We will create new sentience candidates. Will we use them or co-exist with them? The pressure to let instrumental relations dominate will be overwhelming. Perhaps the only way to avoid that temptation is to never attempt to build such beings in the first place. But there may be a different way: one that involves all of us reflecting intensely and honestly on how we might change our orientation towards sentient life to become more about co-existence than use. Maybe then, when we do work out how to create new forms of sentience, we will be ready.
  • This book has been about action, decision, policy: about how we can face up to real-life dilemmas at the edge of sentience and handle them appropriately. Some of these dilemmas are heading towards us at alarming speed. Some are already here. Some have always been with us. I hope the framework and set of proposals I have put forward (summarized at the beginning of the book) provide a platform for further debate about these issues. We need to decide what to do in these cases, and we need to do it calmly, despite the urgency of the problems. We need to think through our value conflicts, listen to experts, stakeholders, and advocates, look for points of consensus and meta-consensus, and design precautions that are permissible in principle, adequate, reasonably necessary, and consistent. Many readers may disagree with my specific proposals, but I hope everyone can agree with that general approach.
  • The need for action and decision focuses minds, puts limits on speculation, and forces us to make judgements—judgements about what counts as evidence of sentience and how that evidence should guide us. But the urgency of the problems does not imply a need to rush. Good decisions come not from rushing, nor from procrastinating, but from taking the right amount of time. When faced with the disturbing and often terrible problems at the edge of sentience, we need to be prepared to act fast — but we also need to make time to reflect, deliberate, and listen to views from across the full range of reasonable disagreement. And we must do this while ensuring that the sentience candidates themselves remain at the centre of the picture.




In-Page Footnotes ("Birch (Jonathan) - The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI")

Footnote 2:
  • This is the write-up as it was when this Abstract was last output, with text as at the timestamp indicated (04/02/2026 07:39:22).
  • Link to Latest Write-Up Note.
Footnote 3:
Footnote 4:
  • The author has all these down as ‘sentience candidates’.
  • His reasons include supposedly analogous brain-regions to those that some theories of consciousness claim to be involved in generating consciousness in humans and behaviour that would indicate ‘valenced experience in humans’.
  • I’m suspicious of all this – it seems somewhat anthropomorphising.
  • I agree that we need to be cautious, but have hitherto been influenced by neuron-counts – which the author takes to be largely irrelevant.
  • Also, I think we should factor in the likely quality of the ‘valenced experience’. Whatever it feels like to be a spider – if anything – won’t be very analogous to what it’s like to be one of us. Or so I would suspect.
Footnote 5:
  • This is – as the author notes – critically important.
  • There’s a clear distinction between PVS (a term which the author would like consigned to the dustbin of history) and brain death.
  • I agree that the situation of some potentially conscious but immobile and incommunicative individuals may be worse than death, and agree that the legally-acceptable method of bringing their miserable existence to an end (by withdrawing sustenance) might be unconscionably cruel.
  • I also agree that the precautionary measures suggested – analgesics where pain is possible; avoidance of careless talk where sentience is possible – are sensible (and compassionate).
Footnote 6:
  • See my Note on Fetuses.
  • The arrogance of surgeons in assuming lack of sentience because their pet theories of brain development discount it is well noted. The author is rightly appalled at the practice – now thankfully discontinued – of operating on newborns without anaesthetic.
  • I do agree, though, with those on the contrary side that – while sentience may be there – the mental pain usually associated with physical pain will not be, given lack of experience and physical and mental development.
  • These thoughts are also relevant when considering the likely distress caused to the less sophisticated forms of life (though arrogant presumptions are also to be rejected in these cases as well).
Footnote 7:
  • This is where things get silly. ‘Feelings’ is a slippery term, having a psychological as well as sensory aspect.
  • I suppose we shouldn’t underestimate the self-organising capacity of neural tissue, but in this case there is neither input (of ‘sense data’) nor communicative output. Maybe one day …
Footnote 8:
  • The author notes the ‘gaming problem’ for LLMs.
  • He gives an appreciative nod to "Bostrom (Nick) - Superintelligence: Paths, Dangers, Strategies" about the ultimate possibility of whole-brain simulations, but this seems somewhat inconsistent with his approach to simple animals – where neuron-count isn’t critical, partly because of the physical complexity of the neurons and axons themselves. If these aspects aren’t in the model, we may be missing a lot (even assuming Functionalism).
Footnote 9:
  • There are cases – dogs and most mammals – where the assumption of sentience makes sense and we should assume it.
  • But this is a slippery slope, and one that – if we follow it too far – causes more suffering rather than less, in that it takes our focus away from beings for which sentience is a serious burden to those for which it likely isn’t.
  • Also, there’s the issue of incommensurability along the lines of it being better to be Socrates dissatisfied than a pig satisfied.
Footnote 10:
  • This is an important point to bear in mind.
  • The author is not supportive of plant sentience, but is keen on fruit-flies. Either way, we – and all animals – are in competition with such forms of life for their livelihoods. What’s to be done?
Footnote 11:
  • The author admits there are aspects of sentience other than pleasure and pain, but chooses these because they have a clear moral imperative attached.
  • Elsewhere, I think, he includes animal flourishing – and relief from boredom – as important.
  • As for pain, there is – if I remember correctly – a human pathological condition in which pain is felt but – in one sense or other that’s difficult to fathom – it ‘doesn’t matter’ to the patient.
  • So … much of the distress of pain relates to reflecting on it – anticipating it, knowing it will continue and maybe get worse, knowing that it may be symptomatic of some impending doom.
  • Higher animals – such as abused dogs – anticipate future ill-treatment; but, I doubt this is true of lower animals which may live more in the moment.
Footnote 12:
  • This category – of sentience candidates – becomes very wide and seems to include almost all animals of any complexity, though the author stops short of including nematode worms and shrimp in this category (they are investigation priorities).
  • I think treating ‘valenced sentience’ as an ‘on/off’ switch is too simplistic. There may be something it’s like to be an oyster, but not much.
  • The author doesn’t – in my view – face up to the fact that nociception (perception and avoidance of noxious environments) is a necessary property of any organism that has evolved (they would not live long enough to propagate otherwise), and need not be associated with phenomenal consciousness of any sort.
Footnote 13:
Footnote 14:
  • I get the author’s point, but it seems to move the argument in the opposite direction to that which he thinks it should.
  • He points out that some large brain areas – such as the cerebellum in mammals – are taken up with somatic control and have nothing to do with sentience. So, large areas of large brains may not be related to sentience. Hence, simply having a large brain doesn’t in itself imply sentience.
  • But this doesn’t make it any more probable that a small brain is sentient. Rather, it makes it even less likely, as large areas – percentage-wise – of the small brain might also be earmarked for unconscious processing. I suspect that this is the case with bees’ relatively large brains. They are intelligent – in that they perform complex navigation and communicate with their ‘waggle dance’ – but none of this need be conscious.
  • Another point is that it seems to be a basic assumption of AI – of the ‘neural net’ variety – that pseudo-neuron count – as well as the number of layers in the network – is critical. The ‘neurons’ are taken as contentless points, and it’s the weighting of the connections between them that’s important. I think this is the wrong model of biological brains, but it’s worth bearing in mind.
Footnote 15:
  • I’ve noted spiders that stay in the same spot motionless for days on end waiting for something to happen, and if nothing does, they die. Whatever it’s like to be a spider is nothing like what it is to be a mammal, let alone one of us. Do they get bored?
  • The author makes great play of analogies between humans and lower animals, along the lines of ‘in us, this is evidence of sentience’. Well, if we were making a complex web – or a honeycomb – it surely would be. But does anyone think that spiders and bees suck their metaphorical pencils while working out how to do it?
  • Hunting spiders would seem to have more of a blank canvas, but unconscious algorithms would probably work just as well.
Footnote 16:
  • I think this is where things start to get absurd.
  • This applies also to fish and their spawn, and I’ll use them as an example.
  • If I understand it correctly, fish produce millions of eggs in the hope of producing one adult. All but one of these therefore get eaten along the way. If they are sentient in any way comparable to us, they die horrible deaths in the digestive bath of a predator’s stomach. Is this at all likely? What good would consciousness be to them? What plans can they undertake to avoid their almost inevitable fate?
  • If this really is the case, wouldn’t we – as some nihilists have argued – have a duty to bring all complex life to an end?


Text Colour Conventions (see disclaimer)
  1. Blue: Text by me; © Theo Todman, 2026
  2. Mauve: Text by correspondent(s) or other author(s); © the author(s)



© Theo Todman, June 2007 - March 2026. Please address any comments on this page to theo@theotodman.com.