Is consciousness required for AI welfare?
New Open Access Paper
Goldstein and Kirk–Giannini have recently argued that artificial language agents can possess well-being in the absence of phenomenal consciousness. Here, I challenge their position, contending that their arguments fail to establish that consciousness is dispensable for well-being. Moreover, their arguments generate counterintuitive implications that are more problematic than those they attribute to views requiring consciousness for welfare subjecthood. Thus, consciousness (or rather sentience) should still be treated as a requirement for AI welfare.
You can download a PDF of the journal article here: https://doi.org/10.1007/s44204-026-00382-3. For convenience, I’ve shared the whole thing below. I’m looking forward to your comments!
1 Introduction
The rapid development of artificial intelligence has raised serious philosophical concerns that we may soon be creating AI agents with welfare interests that could be harmed. But is consciousness a prerequisite for wellbeing in such systems?[1] In a thought-provoking paper titled “AI Wellbeing,” Goldstein and Kirk-Giannini argue that while large language models currently seem to lack phenomenal consciousness (that is, there is nothing it is like to be them), they nevertheless possess wellbeing under both desire-satisfaction and objective list theories of wellbeing (Goldstein & Kirk-Giannini, 2025). Although they dedicate an entire section to the inevitable objection that consciousness is required even under these accounts (what they call a Consciousness Requirement), I believe that their arguments ultimately fail to adequately defend their conclusion that AI agents can have welfare without consciousness.
In this reply, I will examine their arguments and defend what they describe as this Consciousness Requirement. My commentary is structured as follows. In the next section, I address their argument from the intuitive link between welfare theories and theories of welfare subjects. In the subsequent section, I turn to their philosophical zombie thought experiments against the Consciousness Requirement. In the final section, I respond to their argument from unconscious influences on welfare.
2 The link between welfare and welfare goods
That consciousness is required for wellbeing has been the working assumption for much of the history of philosophy. It has shaped animal welfare discussions about where we should draw the lines of our concern (Veit, 2023; 2025; Browning & Veit, 2022; 2023) and has likewise guided discussions about AI welfare. However, Goldstein and Kirk-Giannini question “whether the Consciousness Requirement is well motivated and free from unintuitive consequences” (Goldstein & Kirk-Giannini, 2025, p. 13). While I believe they prematurely reject hedonism, I will grant for the sake of this paper that they are correct in arguing that both desire-satisfaction and objective list theories of wellbeing appear to be applicable to non-conscious systems. They define a welfare subject as “an entity that possesses welfare or wellbeing” and a welfare good as “something which contributes to the welfare level of the welfare subjects that possess it” (Goldstein & Kirk-Giannini, 2025, p. 3). Further, they identify three ways in which the Consciousness Requirement may be defended:
First, it might be derived from experientialism — the view that “only what affects a subject’s conscious experience can matter for welfare” [(Bradford, 2023, p. 907)]. Second, it might be derived from the weaker claim that every welfare good itself requires phenomenal consciousness. Third, it might be held that though some welfare goods can be possessed by beings that lack phenomenal consciousness, such beings are nevertheless precluded from having wellbeing because phenomenal consciousness is necessary to be a welfare subject. (Goldstein & Kirk-Giannini, 2025, p. 13)
I will grant for the purpose of this discussion that their arguments against experientialism and against welfare goods needing to be experienced are successful. However, even if non-conscious entities, such as AIs, are capable of possessing preferences or knowledge, I shall argue that they cannot be welfare subjects without consciousness.
If we accept that these factors can change the wellbeing of a subject, however, Goldstein and Kirk-Giannini are right to ask: “If wellbeing can increase or decrease without conscious experience, why would consciousness be required for having wellbeing?” (Goldstein & Kirk-Giannini, 2025, p. 14). They cite Lin, who has similarly argued that it seems “implausible and ad hoc” to hold that desire-satisfaction and objective list theories require consciousness (Lin, 2021, p. 878). As Lin argues:
If a sentient being can become positive in welfare without undergoing a change in phenomenology, why isn’t the same true of non-sentient beings? If one sentient being can be better off than another even though they feel exactly the same, then why can’t one non-sentient being be better off than another even though it is trivially true that there is no difference in how they feel? […] One can reconcile [an Objective List Theory] with Sentience by claiming that although even non-sentient beings can have knowledge, only sentient beings directly benefit from having it. But this claim seems implausible and ad hoc. (Lin, 2021, p. 878)
Goldstein and Kirk–Giannini contend that their position, shared with Lin, rests on a natural intuition that our theory of welfare and our theory of welfare goods should hang together. They describe this intuition as a Simple Connection principle: “An individual is a welfare subject just in case it is capable of possessing one or more welfare goods” (Goldstein & Kirk–Giannini, 2025, p. 14). Given this principle, they reason as follows: if we reject both experientialism (the view that all welfare is constituted by conscious experience) and the claim that every individual welfare good requires being consciously experienced, then we are forced to abandon either the Consciousness Requirement or Simple Connection.
However, Lin’s argument aims only at sufficiency: it supports the claim that possessing welfare goods suffices for welfare subjecthood, not the biconditional that Simple Connection asserts. It should be apparent that such a principle requires justification rather than mere assertion. Goldstein and Kirk-Giannini thus face a dilemma. Either they endorse a weaker, Lin-esque version of the connection principle, which fails to support their conclusion, or they endorse the iff version, which cannot be justified solely by appealing to an intuitive connection between welfare subjecthood and welfare goods. Their argument appears to commit them to the stronger view, but the plausibility of the weaker principle provides no evidential support for the stronger one. Indeed, under their preferred methodology of reflective equilibrium, their principle could be deemed just as ad hoc and implausible as the Consciousness Requirement they seek to reject.
Unlike Goldstein and Kirk-Giannini, I hold “that consciousness is required to be a welfare subject even if it is not required for the possession of particular welfare goods” (Goldstein & Kirk-Giannini, 2025, p. 15), but they offer two additional objections to my view, in the form of thought experiments, that merit greater attention.
3 Minimally conscious zombies
Their first thought experiment, or rather series of thought experiments, involves an interesting series of amendments to traditional philosophical zombies. Goldstein and Kirk–Giannini begin by asking us to consider a being that is physically identical to a human and behaves exactly like a typical human in all respects but lacks any phenomenal experiential states whatsoever. They are right that those who endorse the Consciousness Requirement must deny that this zombie has wellbeing, even when “desires are satisfied or its life instantiates various objective goods” (Goldstein & Kirk–Giannini, 2025, p. 15).
Now suppose we give this philosophical zombie a single, unchanging conscious experience: the visual sensation “homogenous white visual field” (Goldstein & Kirk–Giannini, 2025, p. 15), which is an idea originally expressed by Kagan (2019). Goldstein and Kirk–Giannini argue that adding this minimal form of consciousness makes no intuitive difference to the being’s welfare status. If we previously thought its welfare goods made no difference to its welfare, this featureless white experience should not change our assessment. Again, I agree that such a zombie would not have welfare. What matters to be a welfare subject is the capacity for valenced experiences.
However, then they raise the question of why it would make a difference if the zombie just had a constant mild sensation of pleasure:
To our judgment, this should equally have no effect on whether the agent’s satisfied desires or possession of objective goods contribute to its wellbeing. Uniformly sprinkling a field of pleasure on top of the functional profile of a human does not make the crucial difference. These observations suggest that whatever consciousness adds to wellbeing must be connected to individual welfare goods, rather than some extra condition required for wellbeing. (Goldstein & Kirk-Giannini, 2025, p. 15)
Thus, in their view, proponents of the idea that consciousness is necessary to be a welfare subject face problems similar to those facing experientialism or the view that welfare goods need to be experienced to count toward welfare.
I find this argument unpersuasive for two reasons. First, the thought experiment rests on a conceivability argument according to which consciousness is a mere epiphenomenon (see also Bailey, 2009). A being with the complete functional organization of a human, including all the physical and behavioral dispositions associated with human belief and desire, would ipso facto have phenomenal consciousness on most leading theories of consciousness. The scenario of a genuine phenomenal zombie with full human functional organization is, on many views, metaphysically impossible (Kirk, 2021; Webster, 2006). Thus, the thought experiment is misleadingly designed to pump our intuitions in the direction of denying the importance of consciousness.
Second, even granting the coherence of the thought experiment, I draw the opposite conclusion. The reason adding a homogeneous white field or mild pleasure seems insufficient is not that consciousness per se is irrelevant to wellbeing, but rather that these impoverished forms of consciousness are not the right kind. What matters for wellbeing is not the existence of consciousness, but the fact that the subjective experience of valence brings about a welfare subject who can be benefitted in the first place. I agree with their view that a pure experience of white should not make a difference since we cannot really speak of a welfare subject here. On most plausible versions of the Consciousness Requirement, entities require sentience, i.e., the capacity for hedonically valenced conscious states, such as pleasure, pain, and fear, rather than any type of experience. In the case of a mild positive experience, I am thus happy to agree that we find ourselves with a welfare subject.
Yet, since I do not deny that desire-satisfaction and objective list theories can imbue such a system with welfare, Goldstein and Kirk-Giannini are not wrong to put pressure on me and others who hold onto a requirement for consciousness. What is it about a constant mild positive experience that would make desire-satisfaction matter in one case but not the other, if there is no connection to the functional architecture of our zombie’s desires? Here I reject their comparison. The problem is not that desires can sometimes benefit welfare subjects without being experienced, but rather that in this case there is no connection between desires or objective goods and the welfare subject at all.
Consider a modified version of the thought experiment: instead of adding an unchanging level of hedonic pleasure, we add phenomenal experiences that are appropriately connected to the agent’s functional states. When a sophisticated future AI has a desire to help us and achieves this goal, it experiences phenomenal motivation and satisfaction. With these additions, I submit, we would judge that the AI’s welfare is improved when its desires are satisfied.
This modified thought experiment highlights that what is doing the intuitive work is not the presence or absence of consciousness per se, but whether consciousness is appropriately connected to welfare goods. Goldstein and Kirk-Giannini’s original thought experiment severs this connection by adding pleasure in a completely disconnected way; they then conclude that consciousness does not matter for welfare.
Without having to rely on thought experiments, let us consider the case of plants. Plausibly, health is a welfare good under an objective list theory of wellbeing. Yet the fact that we can assess plants or even bacteria as more or less healthy does not mean that they thereby gain the capacity for welfare. Some welfare goods are morally relevant only because they belong to a conscious subject. While biological conceptions of health are irrelevant for AIs, we may well postulate that a well-functioning goal-directed organization is an objective good in AIs, similar to health in us. However, why should that matter to the system if there is no subject at all? Welfare goods are identified for welfare subjects. Their existence outside of welfare subjects should thus not be confused with welfare subjecthood being grounded in the existence of those goods themselves.
Love is plausibly another such good. While it seems that entities without consciousness cannot love, they may be the recipients of unconditional love, which could still be seen as a welfare good. Yet the fact that someone may love their orchid does not mean it has welfare, nor would we say that of AI agents designed to make people fall in love with them. In any case, it seems that subjecthood remains necessary for welfare, not because welfare goods need to involve experience. The argument is not that “states like knowledge and desire require phenomenal consciousness” (Goldstein & Kirk-Giannini, 2025, p. 14), but that it is only when they are in the domain of a conscious subject that they matter for its welfare. Goldstein and Kirk-Giannini’s conclusion to treat welfare as independent of consciousness brings with it far more counterintuitive implications than the view they try to reject. Their next argument will emphasize this even more.
4 Wellbeing of the unconscious
Goldstein and Kirk-Giannini’s final argument focuses on welfare changes during unconsciousness. They observe that the wellbeing of agents can improve while they sleep unconsciously, such as when their desires are being fulfilled. To address this case, they draw on a distinction made by Lee (2025) “between state and capacity versions of the Consciousness Requirement” (Goldstein & Kirk-Giannini, 2025, p. 16). The state version says a being is a welfare subject only when it is currently conscious. The capacity version says a being is a welfare subject only when it possesses the capacity to be conscious. Lee endorses the capacity version precisely because it can handle these sleeping cases without difficulty.
However, Goldstein and Kirk–Giannini object that this move comes at a significant theoretical cost. The capacity version allows that a being could qualify as a welfare subject simply by having the potential for consciousness, even if this potential is never actualized throughout its entire existence. Such a being would live and die without ever having a single conscious experience, yet still count as a welfare subject. Goldstein and Kirk–Giannini argue that this consequence undermines the very intuition that motivates the Consciousness Requirement in the first place. They suggest it would be preferable to simply reject the Consciousness Requirement entirely rather than accept this implausible implication.
Unlike Goldstein and Kirk-Giannini, I do not find this description unintuitive. Consider the following thought experiment. Imagine that, in the future, scientists are able to instantly duplicate humans using an atomic scanner and a sophisticated 3D printer. The resulting human is kept in a state of limbo through a future form of cryogenics, never brought to consciousness, and unceremoniously killed five years later. It strikes me as obvious that such a being would indeed constitute a welfare subject. However, this human’s welfare would sit at a level of zero, since there is never an active subject whose life could be described as going well or badly. Alternatively, one might hold that withholding the ability to experience at all from such a subject itself constitutes a harm. But in either case, we would not grant any of this for a copy of a non-conscious but knowledgeable AI agent with desires that is never activated.
Thus, anyone defending a consciousness requirement is not required to reject the view that the satisfaction of desires during sleep, or even after death, could contribute to a person’s wellbeing. What matters is that welfare goods such as desire-satisfaction or knowledge are properties of the welfare subject. Consider a human who has an unconscious desire they can never access, who is unaware that it influences their decision-making, and whose satisfaction brings about no hedonic experiences. Goldstein and Kirk-Giannini seem to force us to treat it as prudentially wrong not to care about the satisfaction of such unconscious desires. But this strikes me as deeply counterintuitive. The desires whose satisfaction matters to my welfare are those I have at least experienced once, for they are otherwise completely alien to me as a subject. If it were revealed to me that I had been unconsciously working toward achieving some desire toward which I feel no conscious attraction, I would not judge my welfare to be improved once it had been achieved. If anything, I would experience my subjecthood as diminished, not expanded to include purported non-conscious welfare goods. In conclusion, the view that sentience is required for welfare subjecthood better accommodates our intuitions while maintaining that beings lacking the capacity for consciousness cannot be welfare subjects. If AIs become welfare subjects, it will be only when they develop sentience.[2]
Data availability
This research involved no data.
Notes
1. I treat “wellbeing” and “welfare” as synonyms within this article.
2. Within the scope of this paper and due to limitations of space, I leave it open whether the satisfaction of this desire must always bring about pleasure to count as welfare-enhancing. My argument, in any case, does not require it.
References
Bailey, A. (2009). Zombies and epiphenomenalism. Dialogue, 48(1), 129–144. https://doi.org/10.1017/S0012217309090076
Bradford, G. (2023). Consciousness and welfare subjectivity. Noûs, 57(4), 905–921. https://doi.org/10.1111/nous.12434
Browning, H., & Veit, W. (2022). The sentience shift in animal research. The New Bioethics. https://doi.org/10.1080/20502877.2022.2077681
Browning, H., & Veit, W. (2023). Studying animal feelings: Bringing together sentience research and welfare science. Journal of Consciousness Studies, 30(7-8), 196–222. https://doi.org/10.53765/20512201.30.7.196
Goldstein, S., & Kirk-Giannini, C. D. (2025). AI wellbeing. Asian Journal of Philosophy, 4(1), Article 25. https://doi.org/10.1007/s44204-025-00246-2
Kagan, S. (2019). How to count animals, more or less. Oxford University Press.
Kirk, R. (2021). Zombies. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2021).
Lee, A. Y. (2025). Consciousness makes things matter. Philosophers’ Imprint, 25(0). https://doi.org/10.3998/phimp.1956
Lin, E. (2021). The experience requirement on well-being. Philosophical Studies, 178(3), 867–886. https://doi.org/10.1007/s11098-020-01463-6
Veit, W. (2023). A philosophy for the science of animal consciousness. Routledge.
Veit, W. (2025). Animal consciousness: Why it matters. Metode Science Studies Journal, 15(5), e29452. https://doi.org/10.7203/metode.16.29452
Webster, W. R. (2006). Human zombies are metaphysically impossible. Synthese, 151(2), 297–310. https://doi.org/10.1007/s11229-004-8006-4


Walter — I found this piece valuable, especially because if the position you’re dismantling gains traction, the downstream implications become genuinely dangerous.
Where I pause is not with your defence of a Consciousness Requirement, but with the zombie framing itself. The “white field” and “constant mild pleasure” cases seem to treat valence as a detachable scalar — something that can be stapled onto a system without being functionally integrated into perception, memory, learning, or action. At that point, I’m not sure we’re describing consciousness at all. We’re describing a label.
If valence is real, it must be doing work. It must bind across time, constrain behaviour, or participate in some internal regulatory architecture. Otherwise, “pleasure” becomes indistinguishable from a variable name in code.
That’s why I agree that welfare subjecthood requires sentience — but I read sentience as necessarily integrated, not epiphenomenal. A static scalar doesn’t generate moral status; a system for whom things can go better or worse in an ongoing, structured way might.
The inflation risk at the AI boundary is real. If we allow “preference” or “objective goods” to generate welfare subjecthood without grounding them in valenced experience, moral scope explodes. But if we ground valence without architectural coherence, we risk manufacturing subjects by stipulation.
Appreciate you pushing on this — it feels like a Sisyphean task, not because the issue is trivial, but because the performative nature of some of these thought experiments allows them to regenerate even after they’ve been dismantled. In that sense, your work here feels less like escalation and more like a return to gravity.
The sentience requirement you defend here cuts through the Simple Connection problem cleanly: welfare goods exist in functional space, but welfare *subjecthood* requires phenomenal grounding. Goldstein/Kirk-Giannini's zombie thought experiments fail precisely because they sever the connection between functional states and valenced experience—adding disconnected hedonic tone doesn't create a welfare subject any more than attaching a pleasure meter to a thermostat would. Your plant analogy is sharp: health can be a welfare good without making plants welfare subjects because there's no "for whom" the good obtains. The real test case is whether desire-satisfaction during dreamless sleep counts for welfare, but even there the subject's prior conscious engagement with those desires makes them *their* desires rather than alien functional states.