Theory in the New Humanities
Reviewed by: Kenneth Novis (University of Edinburgh, MScR Philosophy)
Manuel DeLanda is known to some for the experimental films he made before beginning his philosophical career; to others, as one of the leading interpreters of Gilles Deleuze’s philosophy; to others still, for his association with the so-called speculative realist turn in continental philosophy. His latest book is likely to disappoint those familiar with his earlier contributions to continental philosophy. Indeed, Deleuze’s name appears only once here (200-1n101), and DeLanda’s engagement with speculative realism continues only in the background of this book. Materialist Phenomenology sits more comfortably in the tradition of analytic philosophy of mind, alongside the works of Dennett and the Churchlands, with whom he engages substantially here. More notable is the absence of the many authors with whom DeLanda might be expected to have engaged, given his background in continental philosophy, including Jean-Paul Sartre, Trần Đức Thảo, Michel Henry, Jacques Derrida, and Maurice Merleau-Ponty.
The purpose of this book is to produce a non-reductive materialism, in opposition to the contemporary prominence of epiphenomenalist, panpsychist and eliminativist philosophies of mind. The leading motivation behind DeLanda’s own philosophy of mind is that, to paraphrase Deleuze, ‘modern neuroscience hasn’t found its metaphysics, the metaphysics it needs.’[i] Furthermore, DeLanda hopes to provide such a metaphysics by developing a novel theory of perception, drawing upon his close engagement with contemporary neuroscience, systems theory, and the science of artificial intelligence. One might be surprised to hear such grand intentions attributed to so brief a book. However, Materialist Phenomenology is extremely dense, requiring careful and deliberate navigation. For that reason, the largest part of this review is dedicated to laying out DeLanda’s arguments from each chapter, following which I will conclude by providing some brief, critical comments on the book.
The Contributions of the World
The first chapter of DeLanda’s book introduces the synthetic approach on which he will rely for the remainder of this work, as well as several key concepts for his theory of perception. However, the ponderous manner in which he navigates his subject matter here makes the overall trajectory of his argument sometimes difficult to discern. One might struggle, for instance, to see the relevance of his critique of Lewis and Kripke’s modal metaphysics on pages 23-4. Indeed, the explicit terms of DeLanda’s argument are not introduced until the conclusion of the chapter, where he clarifies that he is offering a “proof by construction” (38). I will briefly consider the relevance of such a proof and its meaning before proceeding. To begin, let us note that, to offer an account of perception, one option which lies open to philosophers (frequently adopted by phenomenologists) is to begin with the fact that we have a direct awareness of what perception is, as perceiving beings. But, after one has deduced the structures of perception from examining one’s own experience, it may remain a mystery how those structures emerged in the first place. Materialist Phenomenology pursues just such an explanation.
DeLanda’s argument proceeds via proof by construction. Such a proof, in this case, means simulating the origins of perception, and using this simulation to draw real lessons about the historical origins of consciousness. DeLanda calls his simulation the “Multi-Homuncular Model” [henceforth, MHM] (10). The name of this model follows from the method of construction on which DeLanda relies. This book pursues the possibility of modelling the origins of perception using “artificial neural nets” (10). Neural nets are artificial intelligences which, unlike systems programmed by a human designer to possess certain capacities, are simply fed information from which they develop associations autonomously. It may be objected that reliance on software developed by already-conscious beings cannot accurately simulate the origins of consciousness unless the existence of a designer, like the human software designer, is assumed.[ii] However, DeLanda does not consider this objection, and instead relies on a theory of natural signs, according to which the basic components of perception always-already exist in the external world (7-8).
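The distinction at stake here — between a system programmed with a rule and one that merely learns an association from the data it is fed — can be made concrete with a minimal sketch (my illustration, not DeLanda’s; the toy perceptron below is far simpler than the neural nets he discusses, and all names in it are mine):

```python
# A toy illustration of the distinction: instead of being programmed with
# a rule, this tiny perceptron is fed labelled examples and adjusts its
# weights until the association emerges on its own.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge the weights toward the observed regularity.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The logical AND relation is never written into the program;
# it is extracted from the training data alone.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Nothing in the code states the AND relation; it emerges from the examples, which is the sense in which such systems develop associations autonomously rather than by design.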
To initiate the construction of MHM, DeLanda begins by providing what he terms a “job description” for perception. This job description names the features which any rudimentary cognitive agent must possess to have access to perception in our sense. For this purpose, DeLanda thinks that perception must possess a causal and intentional connection to the external world (9), create meanings usable by other cognitive agents (11), and emerge from evolution alongside other, similarly perceiving cognitive agents (13). This cannot be all, however. Not only does human perception possess the aforementioned features, it also emerged in a very specific, terrestrial environment which provided the basic form that we use to discern features of the external world. Our world is populated by solid surfaces which appear in consistent shapes (16). Additionally, things in our usual environment have tendencies to behave in one way rather than another, such as the tendency of gases to maximally expand within a given container (22). Such facts about our material environment supply us, through evolution, with a basic datum of what to detect and what to ignore. With this as his description of perception, DeLanda attempts to show that we can move from mere hypotheses about the origins of such perception to demonstrate these origins by simulating each feature using neural nets.
If we accept DeLanda’s use of neural nets as analogues for the origins of human perception, a radically different view of perception presents itself to us. Such a view of perception is at great odds with the classical image of the Cartesian Theatre. On this old image, perception operates by a kind of master-operator or homunculus which observes the visual field as would a viewer in a theatre. Contrary to this, MHM is a non-hierarchical conception which posits a multitude of homunculi, represented by different neural nets, operating autonomously to build up perception as we know it through their spontaneous cooperation. In this case, when sense stimuli are received, they are imagined to activate an array of corresponding data-processing units which “all broadcast their signs to whatever other agents are capable of making use of their content” (29). DeLanda concludes this chapter by arguing for MHM’s use in contemporary debates around reductionism, according to which mental phenomena “are nothing but physical phenomena” (29). Unlike other models of perception, MHM is emergentist, allowing both the origin of perception in the material world, and the causal efficacy of the mind upon the body, represented in the way in which a program, when implanted in an artificial body, can issue commands to that body, making it move.
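The broadcast architecture the quotation describes — many homunculi publishing signs to whichever agents can use them, with no master operator — resembles what AI research calls a blackboard or publish-subscribe design. A minimal sketch (illustrative only; the agents and sign formats are my own invention, not DeLanda’s):

```python
# A minimal publish-subscribe sketch of the Multi-Homuncular Model:
# there is no master homunculus; each agent broadcasts its sign, and
# only the agents capable of using a sign's content respond to it.
class Blackboard:
    def __init__(self):
        self.subscribers = []  # (predicate, handler) pairs

    def subscribe(self, predicate, handler):
        self.subscribers.append((predicate, handler))

    def broadcast(self, sign):
        # Deliver the sign to every agent whose predicate accepts it.
        return [handler(sign) for predicate, handler in self.subscribers
                if predicate(sign)]

board = Blackboard()
# An "edge detector" homunculus only consumes luminance signs.
board.subscribe(lambda s: s["kind"] == "luminance",
                lambda s: f"edge at {s['where']}")
# A "motion detector" homunculus only consumes change signs.
board.subscribe(lambda s: s["kind"] == "change",
                lambda s: f"motion at {s['where']}")

# A luminance stimulus activates only the agent that can use it.
responses = board.broadcast({"kind": "luminance", "where": (3, 4)})
```

The point of the sketch is structural: coordination arises from many partial agents responding to whatever signs they can consume, not from a central viewer in a Cartesian Theatre.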
The Contributions of the Body
The second chapter provides a response to a possible criticism of MHM. MHM in chapter 1 conceives of perception as if it were performed by disembodied minds whose sole occupation is the processing of input-data. However, to accurately mirror perception as it occurs in us, “the brain must command and control, not only represent” (39). An intuitive view of embodied artificial intelligence would have it that “when we raise our arm the brain must specify the exact angle that each different joint (shoulder, elbow, and wrist) must end up having, as well as the precise degree of contraction that the attached muscles must have once the target position is reached” (39). Embodiment understood in this way would be unrecognisable to human cognitive agents, and if MHM entailed endorsement of such a view of embodiment, the analogy between neural nets and human perceivers would fail. DeLanda’s argument is that this view of embodied perception does not accurately describe the operations of neural nets outfitted with artificial bodies: the way neural nets occupy bodies is in fact extremely similar to our own.
To show the similarity of embodied artificial intelligence to human embodied perception, DeLanda makes significant use of insights from systems theory. This use of systems theory has been a staple of his philosophical work throughout his career, and it is given a central place in Intensive Science and Virtual Philosophy. There, he uses the “theory of dynamical systems where the dimensions of a manifold are used to represent properties of a particular physical process or system, while the manifold itself becomes the space of possible states which the physical system can have.”[iii] Imagining the transformations a system can undergo as a space of possible states (a state space), some of those states (attractors) are more statistically likely ones for the system to move into. For instance, due to the elasticity of tendons in human hands, the natural state which they occupy when no other influence is causing them to be otherwise is with the fingers curled slightly towards the palm. However, when a new system is created, such as one including a hand and a box which it is involved in pushing, the hand might be attracted into a flat and stretched-out configuration.
The attractors within a state space also set the parameters within which the system that the space represents can operate. Continuing with our example, the same elasticity that gives resting fingers their natural state also entails their inoperability when bent too far backwards and broken. Instead of issuing abstract commands, detailing precise dimensions for movement as would the operator of a Taylorist factory, neural nets implanted in an artificial body learn to navigate their environment through the mediation of the parameters and capacities of the body that they occupy. For this reason, the neural net does not need to specify the exact parameters of its intended movement, as the intuitive view of embodied artificial intelligence above claimed. Rather, “much of the computational work would still be offloaded to the dynamics of the robot’s legs interacting with the ground” (40). DeLanda also extends this view of embodiment to a theory of embeddedness, in which “the dynamical system is simply an extension of the body” (43). Dynamic systems theory conceives of systems as open and reconfigurable as they enter into contact with other systems. Because of this, the environment in which a system functions in a very real sense enters into the functioning of that system as well by adding new attractors.
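The point about offloading computation to bodily dynamics can be made concrete with a toy dynamical system (my illustration, not DeLanda’s, with all parameter values chosen arbitrarily): wherever the system starts in its state space, it relaxes toward the same attractor without any controller specifying the target configuration.

```python
# Toy dynamical system: a damped state variable (say, a finger's angle)
# relaxing toward its attractor. No controller specifies the end state;
# the dynamics themselves carry the system there from any starting point.
def relax(x0, attractor=0.2, stiffness=0.5, steps=200, dt=0.1):
    x = x0
    for _ in range(steps):
        x += dt * stiffness * (attractor - x)  # Euler step toward the attractor
    return x

# Different initial states converge on the same resting configuration,
# which is what it means for that state to be an attractor.
final_states = [round(relax(x0), 3) for x0 in (-1.0, 0.0, 1.5)]
```

This is the sense in which a neural net need not compute a target joint angle: the body’s own dynamics, like the elasticity of the tendons in the example above, already carry the system into its resting configuration.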
The above accounts only for how the body influences the commands issued by the mind; it is desirable as well to understand how the body is influenced by the structure of its environment. For this purpose, DeLanda introduces the concept of affordance, which describes features of the environment which are activated by the introduction of the right kind of system, such as “medium-sized elongated objects that afford wielding to a human hand” (45). Introducing the concept of affordance suggests the existence of an already well-structured environment in which only certain details become salient based on the kind of agent interacting with them. Summarising his use of affordance, DeLanda suggests that “[r]ather than thinking of the cook’s brain as containing a single unified model of the entire kitchen, and a detailed plan of the overall task we can think of multiple partial models each specific to a task, or even to a single stage in the task” (48). This addition further highlights the potential value of MHM: it removes the need to develop a single, complete cognitive agent to simulate perception, since the same job can be done by a team of only partially functional agents.
The remainder of this chapter returns to the inner sense of embodiment, before dealing at length with several objections to DeLanda’s account of embodiment and embeddedness, primarily those associated with what he calls the “enactive approach” and the “radical embodiment approach” (63). I will pass over the objections for the sake of this review. To explain the inner sense of embodiment, DeLanda focusses specifically on two kinds of internal perception: proprioception and interoception. While proprioception is responsible for the fact that I do not need to look at my hand to know it is raised above my head, interoception “keep[s] the brain informed about the current state of the body’s visceral environment in order to maintain its metabolic balance” (48). By accounting for both proprioception and interoception, DeLanda hopes to show how we also develop our “sense of ownership of our bodies [and] a sense of agency” (50). For this purpose, DeLanda once again relies on a constructivist approach. Internal representations of our body as situated in a three-dimensional environment, and as a complex system requiring biological regulation are “slowly developed as sensory experiences accumulate” (51). The body map built up through these sensory experiences is primarily a way of organising the “steady stream of signs from joints, muscles, tendons and viscera” (57), and this organisational map of internal perception gives “us a sense of ownership of our phenomenal experience” (57).
The Contributions of the Brain
The picture of perception which DeLanda has offered until this point intentionally avoids the question of how external signs become internal ones. The distinction between internal and external signs was explored in chapter 1 in the context of DeLanda’s attempt to offer a job description for internal signs. This job description required that internal signs be causally and intentionally connected to the external world, and that the intentional meaning of these signs be transmissible between cognitive agents who have undergone a similar process of evolution. The discussion of interoception at the end of chapter 2 adds a further condition to the success of DeLanda’s model, since here it is added that “interoceptive information, in turn, is transformed into lived hunger, thirst, and sexual arousal, or into a primordial feeling of anger, joy, sadness, or fear” (68). Now, MHM must not only be able to account for the communicable content of internal signs, but also their emotional content for the cognitive agent that produces them. In other words, the purpose of chapter 3 is to develop MHM to account for the change from signs that merely satisfy the job description in chapter 1, into signs that also have an emotional, lived significance.
To add this requisite complexity to his model, DeLanda begins by “removing, one at a time, the simplifying assumptions” (77) which allowed him to develop an initial, cogent articulation of MHM. This complexifying process begins by moving from hypothetical models of artificial cognitive agents to a “toy model of the brain” (76). Accordingly, this chapter marks a crucial shift in his argument. Until now, his proof by construction has only attempted to show that the qualities of perception with which we are familiar can be replicated with simulations involving artificial neural nets. This chapter “replace[s] artificial neural networks as our main example of a mindless cognitive agent with something more realistic” (79), that is, actual neuronal systems embedded in an organic brain and body. One simplification that DeLanda here discards imagines that the retina functions in a manner analogous to the optical array of a camera. Now, he stresses that the retina should be understood as “capturing not pictures resembling objects but an array of intensities isomorphic with the optical phenomena in the section” (80). This cunningly bypasses the classical problem of the correspondence between mental contents and the external world by acknowledging that the array of photoreceptors in the retina directly encodes the external world in a form isomorphic to itself (although this description of mind-world correspondence is complicated again on page 90).
The greatest part of this chapter is devoted to explaining how the features of neural nets as already described apply in similar terms to specific regions of the brain. Assuming DeLanda’s understanding of contemporary neuroscience to be sound, this is an impressive addition indeed, since it decisively shifts the discussion from artificial neural nets to, not only organic systems in the abstract, but the neuronal structure of the human brain as it realises perception in fact. After a prolonged discussion of how the anterior intraparietal area and the premotor areas of the frontal lobe interact in a dynamic loop that allows affordances to exist within perception, DeLanda returns to considering “the role that the biological value system we discussed in the previous chapter plays in this process” (92). For this purpose, he introduces the contribution to perception of two “relay station[s]”, the first comprised of the basal ganglia, cerebellum, and hippocampus, and the second comprised of the brain stem, hypothalamus, and amygdala, although DeLanda does concede that “from the point of view of the neural basis of the biological value system, the amygdala is probably the most important component” (93-4).
It may be wondered at this stage just how a description of the structure of the brain will yield an understanding of the emotive content of internal signs. Furthermore, it is just this emotive content that DeLanda earlier claimed this chapter is meant to explain. However, after describing the neurobiology of emotions, DeLanda concedes that his argument at this stage must be left “deliberately vague” (96) due to the current state of the science. Despite promising to offer an account of the emotive content of internal signs in perception, DeLanda’s conclusion that neuroscience has yet to provide a satisfying account of this may confirm what critics of physicalism have long suspected: that, however much neuroscientific data we acquire on the biological structure of the brain, it will always be insufficient until we have at least discovered a bridge principle adequate for closing what Nagel called “the gap between subjective and objective”.[iv] Be that as it may, DeLanda’s suggestions, although not obviously providing a satisfactory bridge principle, may give suggestions concerning the form that such a bridge principle could take.
The standard approach to dealing with the hard problem of consciousness begins with the qualitative difference between mind and body, and provides a description of how the latter realises the former. DeLanda’s attempt to provide an account of the emergence of internal signs in their emotive aspect should be seen as a variant on the hard problem of consciousness, to the extent that he is trying to explain how the functions performed by the brain come to acquire a lived significance. However, he does not begin to deal with this problem as most do. His way of beginning is instead similar to Searle’s. Searle writes: “I believe that the key to solving the mind-body problem is to reject the system of Cartesian categories in which it has traditionally been posed. And the first step in that rejection is to see that ‘mental,’ naively construed, does not imply ‘non-physical’ and ‘physical’ does not imply ‘non-mental.’”[v] Likewise, DeLanda states that “given that we need consciousness and intentionality to emerge gradually, we must give up any model that includes only two levels, such as the brain and the mind” (96). Accordingly, DeLanda’s suggestion is that what might serve as a bridge principle “will involve introducing intermediate levels between the two” (98) which do not involve qualitative differences but greater degrees of systematic unity and coordination between a plethora of non-conscious, rudimentary cognitive agents.
The Contributions of the Mind
Operating within the limits imposed on his explanation by the current state of science, the last chapter of DeLanda’s book offers some final additions to MHM to help with overcoming the hard problem of consciousness. In this context, DeLanda’s humility is certainly appreciated. At the end of this chapter, he clarifies that “[i]ntroducing intermediate levels between the brain and a subject who can issue reports does not solve the hard problem but it does break it down into three more tractable problems. And it points to the direction we must follow to find the solution: a methodology that combines analysis and synthesis, starting from the bottom and moving upward” (139). What these three problems are will be seen in the following. DeLanda begins here, surprisingly, by attempting to show that at least three forms of perception (the perception of properties, objects, and situations) do not involve the use of concepts. Why this matters is not at all obvious; and, as another reviewer has noted, it goes against much of the received wisdom on the theory-ladenness of experience.[vi] An explanation of this decision is offered in the book’s introduction. It might be objected that a neurobiological account of perception is sufficient only for explaining perception in organisms less complex than we are. However, “[t]here is no deep discontinuity between animal and human visual experience, as there would be if linguistically expressed concepts shaped perception” (2-3).
If not by means of concepts, how does perception access the sense-data presented to the retina? Most of this chapter is devoted to developing DeLanda’s alternative, that it is instead preferable “to view the perception of properties as performing a measurement function, to view the perception of objects as performing the function of separating the perspectival from the factual, and the perception of situations as having the function of allowing qualitative judgements about the relations between objects” (113). His attempt to prove that none of these kinds of perception involve concepts is unlikely to convince anyone committed to the contrary view. Consider, for instance, how DeLanda deals with the perception of situations. After dubiously asserting that there is no need to depend upon concepts to ask the question “What is that?” (125), he proceeds by appealing to vervet monkeys, who can perform a variety of tasks related to attention and specification “without possessing any sortal concepts” (126). However, DeLanda makes his case here using a doctored understanding of what a concept is. He takes as intuitive the view that a prototype which “does not stand for an essence or abstract universal” (126) is not a concept. But what of the concept of a game? It is well established that such a concept would stand for neither an essence nor an abstract universal, encoding instead a variety of mere family resemblances, and being much closer to what DeLanda calls “a construct capturing statistical regularities in the objects actually used for training” (126).[vii]
The above concerns DeLanda’s attempt to deal with the “easy problems of consciousness, that is, the problems that can be tackled by cognitive psychology and the neurosciences” (129). From this point on, he tries to build a solution to the hard problem of consciousness, on the principle that “the brain monitoring its own activity is the key to the solution to the hard problem” (129). By this, DeLanda means the following. It is strictly incorrect to speak of perception as the perception of qualities in the external world. Instead, wherever there is perception, it is perception by the mind of the various neuronal circuits and the way they behave when subjected to certain kinds of stimulation of the retinal array. Thus, “[f]ar from being an input from the world, perception is more like an intermediate output, and the volition behind an action is not an output to the world but an intermediate input to the motor areas of the brain” (102). Allowing that perception is not perception of the external world, but perception of changes within the brain “eliminates the idea that the visual field is like a veil separating us from reality, as well as the idea that the transparency of this veil must be accounted for” (130).
Let’s see how DeLanda’s multilayer approach deals with the hard problem of consciousness. For this purpose, DeLanda distinguishes between four different senses of the word ‘consciousness’: arousal, alertness, flow, and selective awareness (137-8). In each case, he shows that MHM can simulate the different kinds of conscious phenomena in question. What is most surprising is that qualitative experiences, sometimes called qualia, enter into none of the senses of consciousness DeLanda defines. This raises an important question: in showing that, beginning with a multiplicity of mindless cognitive agents, we can simulate arousal, alertness, flow, and selective awareness, has DeLanda shown how such systems realise consciousness? But there is no answer to the hard problem of consciousness which does not explain how it is possible for the operations of mindless things, even teams of mindless data-processing units, to produce conscious states. Along these lines, the three constitutive problems into which DeLanda analyses the hard problem concern the emergence through evolution of what he calls protoselves, core selves, and autobiographical selves. However, only the emergence of protoselves is analogous to the hard problem since the development of protoselves is the stage at which mindless systems “slowly get the ‘sentient’ part to emerge” (139). And the closest DeLanda comes to explaining how protoselves emerge is the following: “The emergence of protoselves in the course of evolution may be due to the fact that the internal milieu displays a greater degree of constancy than the body as a whole” (69).
Having laid out the core arguments of DeLanda’s book, I want to conclude by offering three considerations pertaining to the success and nature of his project. In the first case, consider the following. DeLanda’s book begins with the claim that “[t]raditionally, communication between philosophers of the materialist and phenomenological schools has been limited” (1). But this is not remotely true. It would be more correct to claim, as Derrida did, that attempts to synthesise materialism and phenomenology have consistently resulted in “impasse.”[viii] DeLanda’s attempt to bring materialism and phenomenology into dialogue with one another suffers from this same fault. However, his attempt does so differently than have many others. Rather than approaching this reconciliation through the mediation of social interaction and language, DeLanda decisively rejects the relevance of these “meso-scale” (143n4) phenomena for understanding consciousness itself. Instead of approaching the reconciliation of materialism and phenomenology through the highest order of material phenomena, social and economic factors, DeLanda’s approach begins from the lowest: the biological evolution of neuronal networks ultimately possessing rudimentary consciousness.
Secondly, for a work which purports to produce communication between materialism and phenomenology, it is remarkable that there are no discussions whatsoever of the core components of the phenomenological method, such as epoche or noesis. This calls into question the sincerity of the rapprochement being offered. DeLanda’s approach, apart from ignoring the work of actual phenomenologists, sides consistently with phenomenology’s critics. There is an unmistakable affinity between Dennett’s ‘hetero-phenomenology’ and DeLanda’s claim that the “tendency of the conscious mind to make sense of its decisions and actions, results in the fabrication of explanations after the fact” (97). Despite this, the book concludes by adding, “[a]nd this is why it is so important to adopt a materialist approach to phenomenology” (141). But the arguments of the book have nothing to do with adopting a materialist approach to the deduction of the transcendental categories of experience, as one might expect this closing statement to mean. Rather, DeLanda appears to mean ‘phenomenology’ in the sense in which it is more frequently used in analytic philosophy of mind, where it refers to the general quality of experiential states, instead of the rigorous study of experience which begins from within subjectivity itself.
Thirdly, a contrast should be made between the kind of account of perception that DeLanda attempts to provide, and the kind of account that he actually provides. He declares his theory to be a variety of “non-reductive materialism”, which for him means “first, that there are mental properties that are different from physical properties; second, that the existence of mental properties depends on the existence of physical properties; and third, that mental properties can confer causal powers on mental events” (1). However, MHM tries to solve the hard problem of consciousness by positing a continuum between the physical and the mental such that there are between the two “intermediate levels [which] implies a graded conception of both intentionality and consciousness” (134). This solution may be inconsistent with DeLanda’s commitment to a conception in which mental and physical properties really are different. Given some continuum, for instance between hot and cold, it can be granted that the properties lying at either end are really different in some sense. However, this kind of difference is not the kind that a non-reductive materialism requires. Representing differences along a continuum implies their representation as differing in terms of something continuous between the two, reducing the difference to a difference in degree. But the claim that between conscious and nonconscious things there is only a difference in degree is something that even reductive materialists can assent to. A truly nonreductive materialism must successfully maintain the difference in kind between minds and bodies, and since DeLanda’s theory does not offer this, it is perhaps incorrect to call it non-reductive.
The project behind Materialist Phenomenology is a highly ambitious one; and insofar as ambition and innovation themselves deserve praise, DeLanda’s work is clearly laudable. However, the careful reader of this book will inevitably discover many questionable inferences. I am greatly sympathetic to attempts to defend non-reductionism within materialism. I also agree with DeLanda’s initial premise that such a materialism must be brought about by cultivating communication between materialism and phenomenology. But, as with any worthwhile discussion, this communication must transpire among equals. This would mean taking seriously what phenomenologists have learned throughout the last century of their deliberations on the meaning and nature of consciousness. In Materialist Phenomenology, the materialist has been handed the megaphone, and the voice of the phenomenologist has been drowned out by the amplified orations of their interlocutor. Despite this, DeLanda must be applauded: even if the attempt to unify materialism and phenomenology has failed here (and, if Derrida is to be believed, may always fail), the attempt itself is something which unfortunately few philosophers today aspire to enact.
[i] Paraphrasing Deleuze, G. 2007. “Responses to a Series of Questions,” in Collapse, Volume III. London: MIT Press, p. 41.
[ii] For closer treatment of this objection, see Negarestani, R. 2018. Intelligence and Spirit. Falmouth: Urbanomic.
[iii] DeLanda, M. 2013. Intensive Science and Virtual Philosophy. London: Bloomsbury, p. 5.
[iv] Nagel, T. 1974. “What is it like to be a Bat?” The Philosophical Review (83:4), p. 449.
[v] Searle, J. 1991. “Response: The Mind-Body Problem,” in John Searle and His Critics, ed. Lepore, E. and Van Gulick, R. Oxford: Blackwell, p. 141.
[vi] Richmond, S. 2022. Manuel DeLanda, “Materialist Phenomenology: A Philosophy of Perception”, Philosophy in Review (42:2).
[vii] Cf. Wittgenstein, L. 2009. Philosophical Investigations, trans. Anscombe, G.E.M., Hacker, P.M.S., and Schulte, J. Oxford: Wiley Blackwell, §67.
[viii] Derrida, J. 1983. “The Time of a Thesis: Punctuations,” in Philosophy in France Today, ed. Montefiore, A. Cambridge: Cambridge University Press.