# [Contemplative Science and the Nature of Reality](releases/2025/Contemplative%20Science/Contemplative%20Science.md)

# Chapter 9: Brain as Processor

*Cognitive Architectures for Consciousness*

Having explored the rich phenomenology of contemplative states (Chapters 2-4), the methods used to cultivate them (Chapter 5), and the current neuroscientific understanding of their neural correlates (Chapters 7-8), we now continue our exploration within **Part II** by turning to the crucial theoretical challenge: developing explanatory frameworks that can account for consciousness itself, particularly in light of the data presented. How can we model the relationship between brain activity and subjective experience? What principles govern the emergence of awareness?

This chapter begins our examination of several major theoretical approaches, focusing on models grounded primarily in **cognitive science** and **computational neuroscience**. These frameworks attempt to explain consciousness in terms of specific information processing architectures, cognitive functions, or computational principles thought to be implemented in the brain. They often focus on identifying the functional prerequisites or computational mechanisms necessary for conscious awareness to occur, effectively viewing the brain as a sophisticated processor.

We will examine three influential families of theories: **Predictive Processing** (and its extension, Active Inference), which views the brain as a prediction engine; **Integrated Information Theory (IIT)**, which defines consciousness in terms of a system’s capacity for integrated information; and **Global Workspace Theory (GWT)**, which models consciousness as information being broadcast within a central cognitive workspace. For each, we will outline its core ideas, discuss its potential application to understanding contemplative states, and critically evaluate its strengths and limitations. We will also briefly consider the relevance and limits of drawing parallels between emergent behaviors in artificial intelligence and the emergence of consciousness in biological systems, a comparison often invoked in computational approaches. These cognitive and computational models provide essential groundwork before we explore information-centric and quantum perspectives in subsequent chapters.

## 9.1 Predictive Processing / Active Inference

One highly influential contemporary framework for understanding brain function, spanning perception, action, learning, and potentially consciousness, is **Predictive Processing (PP)**, often extended into the broader **Active Inference (AIF)** framework. Developed and championed by thinkers such as Karl Friston, Andy Clark, Jakob Hohwy, and others, this approach offers a unifying perspective that views the brain not as a passive feature detector processing bottom-up sensory input, but as an active, predictive inference engine constantly striving to model the causes of its sensory inputs and minimize the difference between its predictions and the actual incoming signals. This difference is termed **prediction error** (or, more formally within the underlying Bayesian mathematics, variational free energy or ‘surprise’).

In the PP view, perception itself is fundamentally a process of top-down prediction or hypothesis generation. Higher levels of the cortical hierarchy generate predictions about the expected patterns of activity at lower levels, ultimately predicting the sensory input itself. Incoming sensory signals serve primarily not to construct perception from scratch, but to correct or update these ongoing predictions, propagating prediction errors up the hierarchy when mismatches occur. What we consciously perceive, according to this model, is essentially the brain’s best hypothesis or “controlled hallucination”–the generative model that most successfully minimizes prediction error and accounts for the current sensory evidence. This inferential process occurs continuously and hierarchically across multiple levels of abstraction.

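To make the quantity at stake explicit, the minimization target can be written in its standard textbook form (a schematic summary of the formalism referenced above, not a derivation specific to this chapter). For a generative model $p(o, s)$ over observations $o$ and their hidden causes $s$, and an approximate posterior (the brain's "hypothesis") $q(s)$, the variational free energy is

$$
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right] \;=\; D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] \;-\; \ln p(o) \;\ge\; -\ln p(o).
$$

Because the Kullback-Leibler term is never negative, free energy is an upper bound on surprise ($-\ln p(o)$): revising the hypothesis $q(s)$ tightens the bound (perception and learning), while acting so that observations match predictions keeps the bound itself low (the move made explicit in Active Inference below).
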
**Attention** is often conceptualized within this framework as the mechanism for optimizing the **precision** (inverse variance, or reliability) assigned to prediction errors at different levels or from different sources. Higher precision assigned to prediction errors allows them to exert a stronger influence in updating the model, effectively selecting which information drives perception and learning.

Active Inference (AIF) extends this core principle of prediction error minimization to encompass action as well as perception. It proposes that organisms act in ways that fulfill their predictions or minimize their long-term expected free energy (prediction error). Actions are selected based on their predicted sensory consequences, effectively sampling the environment in ways that confirm the organism’s internal model of the world and its preferred or expected states (homeostasis, goals). This provides a unified account of perception, learning, and action grounded in the single imperative of minimizing prediction error or surprise.

The **self** can be modeled within this framework as a particularly deep, stable, and high-level set of generative models–priors or predictions about the enduring causes of interoceptive (internal bodily) and proprioceptive signals, as well as narrative or autobiographical regularities over time. The statistical boundary separating the internal states of the system (the self) from its external environment is formalized using the concept of a **Markov blanket**.

Predictive processing and active inference offer intriguing possibilities for modeling contemplative states and their effects. Meditation practices, particularly **focused attention (FA)**, can be interpreted as training the ability to precisely regulate attention (optimize precision weighting)–for example, increasing the precision assigned to breath sensations while decreasing the precision assigned to distracting thoughts (prediction errors from the Default Mode Network). This leads to stabilization of top-down predictions and reduced influence of irrelevant prediction errors, resulting in mental calm (*samādhi*).

**Insight meditation (Vipassanā)** might involve attending closely to the prediction errors themselves, learning to recognize the constructed and often inaccurate nature of habitual perceptions and beliefs (priors), thereby weakening maladaptive or overly rigid high-level priors about the self and the world. This aligns with the cognitive mechanism of deautomatization. Experiences of **ego dissolution** or non-dual awareness could potentially be modeled within PP/AIF as a profound alteration or even a controlled collapse of the high-level generative models constituting the self. This might occur through a radical down-weighting of the precision assigned to self-related predictions (reducing their influence), a fundamental revision of the self-model priors based on insight, or perhaps even a shift in the perceived boundaries of the Markov blanket separating self from world. States of deep absorption (*Jhana*) might correspond to states of extremely low prediction error, where the internal generative model perfectly predicts a stable, minimal sensory input (due to sensory withdrawal) or a highly focused internal state, leading to profound calm and potentially bliss (if bliss is a predicted state).

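A minimal numerical sketch can make the role of precision concrete. The following toy is hypothetical throughout (the function `update_belief`, the two "channels," and the numbers are illustrative, not drawn from an Active Inference model in the literature): a single Gaussian belief is updated by prediction errors from a breath-related channel and a distraction-related channel, and shifting precision toward the first, as focused attention is hypothesized above to do, changes which errors drive the update.

```python
# Hypothetical toy, not a model from the Active Inference literature:
# a scalar Gaussian belief is updated by precision-weighted prediction
# errors from two sources ("channels"). Re-weighting the precisions
# changes which channel's errors dominate the update.

def update_belief(prior, prior_precision, observations, precisions):
    """Update a scalar Gaussian belief with precision-weighted prediction errors."""
    errors = [o - prior for o in observations]            # prediction error per channel
    post_precision = prior_precision + sum(precisions)
    post_mean = prior + sum(p * e for p, e in zip(precisions, errors)) / post_precision
    return post_mean, post_precision

prior, prior_precision = 0.0, 1.0
obs = [1.0, -2.0]   # [breath-related signal, distraction-related signal]

# Ordinary state: both channels weighted equally.
baseline, _ = update_belief(prior, prior_precision, obs, precisions=[1.0, 1.0])

# "Focused attention": precision up-weighted on the breath channel,
# down-weighted on the distraction channel.
focused, _ = update_belief(prior, prior_precision, obs, precisions=[4.0, 0.1])

print(f"baseline posterior: {baseline:+.2f}  (pulled toward the distractor)")
print(f"focused posterior:  {focused:+.2f}  (dominated by the breath channel)")
```

In hierarchical formulations the same weighting is applied at every level, which is how attention is taken to select which prediction errors are allowed to revise which parts of the generative model.
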
While formal modeling of these advanced states is still in its infancy, the PP/AIF framework provides a powerful, mathematically grounded, and neurobiologically plausible approach for conceptualizing how contemplative practices might reshape perception, self-awareness, and the fundamental experience of reality by systematically altering the brain’s predictive modeling processes and the precision weighting of prediction errors.

## 9.2 Integrated Information Theory

Another prominent and mathematically sophisticated contemporary theory of consciousness is **Integrated Information Theory (IIT)**, primarily developed by neuroscientist Giulio Tononi and his collaborators. Unlike many other theories that start by asking about the functions or neural correlates of consciousness, IIT takes a fundamentally different approach. It begins by attempting to identify the essential phenomenological properties of subjective experience itself (its axioms) and then rigorously derives the necessary and sufficient physical properties a system must possess to generate such experience (its postulates). IIT aims to provide a fundamental theory of what consciousness *is*, not just what it *does*.

The **axioms** of IIT are derived from direct introspection into the nature of any possible experience. They state that consciousness intrinsically **exists** (it is actual); it is **structured** or **compositional** (it is composed of multiple phenomenal distinctions or aspects); it is **informative** (each experience differs in specific ways from other possible experiences); it is **integrated** (each experience is unified and irreducible to independent components–one cannot experience the left visual field independently of the right, for example); and it is **definite** or **excluded** (each experience has specific content and boundaries, being what it is and not something else, and having a particular spatio-temporal grain).

From these phenomenological axioms, IIT derives its central **postulates** about the physical substrate of consciousness. It postulates that consciousness is identical to a system’s capacity to generate **integrated information**, a quantity formally defined and measured by **Φ (“Phi”)**. Φ quantifies the extent to which a system’s current state specifies its own past and future states in a way that cannot be reduced to the information generated by its parts considered independently. In essence, Φ measures the causal irreducibility or “intrinsic causal power” of a system as a whole. A system is conscious, according to IIT, to the degree that it possesses a high value of Φ.

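Computing the official Φ of IIT is beyond a short example, but the underlying intuition, that the whole can specify its own past in a way its parts cannot, can be shown with a deliberately simplified "whole versus sum of parts" comparison (a didactic toy in the spirit of early formulations of the theory, not IIT's actual measure, which is defined over cause-effect repertoires and a minimum-information partition). Consider two binary nodes that simply swap states at each time step: each node in isolation carries no information about its own previous state, yet the whole system specifies its previous state completely.

```python
import numpy as np
from itertools import product

# Simplified toy, NOT the official Phi of IIT: compare how much a whole
# two-node system's state specifies its next state versus how much its
# parts do when each is considered in isolation. Update rule: swap,
# i.e. A' = B and B' = A, with a uniform distribution over current states.

def mutual_information(joint):
    """Mutual information (bits) of a 2-D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

states = list(product([0, 1], repeat=2))          # (A, B) at time t
joint_whole = np.zeros((4, 4))                    # rows: state at t, cols: state at t+1
for i, (a, b) in enumerate(states):
    joint_whole[i, states.index((b, a))] = 1 / 4  # deterministic swap

# Part A in isolation: A' = B is independent of A, so knowing A's present
# says nothing about A's own next state (and symmetrically for B).
joint_part = np.zeros((2, 2))
for a, b in states:
    joint_part[a, b] += 1 / 4                     # P(A_t = a, A_{t+1} = b), since A_{t+1} = B_t

whole = mutual_information(joint_whole)           # 2.0 bits
parts = 2 * mutual_information(joint_part)        # 0.0 bits
print(f"whole: {whole:.1f} bits, parts: {parts:.1f} bits, "
      f"whole-minus-parts 'integration': {whole - parts:.1f} bits")
```

The sketch reports 2.0 bits for the whole against 0.0 bits for the parts; that gap is the kind of irreducibility Φ is intended to quantify, albeit with far more mathematical machinery.
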
Consciousness corresponds specifically to the subsystem with the maximal Φ value within a larger system, termed the “main complex.” The specific quality or “quale” of any particular conscious experience (e.g., the redness of red) is proposed to be determined by the unique geometrical structure of the “conceptual structure” or “qualia space” unfolded by this main complex–the complete set of irreducible causal relationships (cause-effect repertoire) the complex has with itself.

IIT offers potential avenues for understanding contemplative states and different levels of consciousness, although applications are still largely speculative. States of reduced consciousness like dreamless sleep or general anesthesia would correspond to low values of Φ, reflecting a breakdown in the brain’s capacity for integrated information processing (e.g., due to disconnection between cortical areas). Ordinary waking consciousness would correspond to a high Φ value supported by the complex and highly interconnected corticothalamic system. Deep meditative absorption states like the Jhanas, characterized by profound unity and potentially reduced differentiation of content, might be hypothesized to correspond to states with high overall integration (Φ) but perhaps simpler or less differentiated conceptual structures compared to the complexity of normal waking experience. Experiences of boundless awareness or non-duality could potentially relate to states where the main complex undergoes a significant reconfiguration, perhaps expanding or losing its usual boundaries, leading to a maximal Φ state with a unique, highly integrated phenomenal quality.

However, IIT also faces significant challenges and ongoing critiques. Calculating Φ precisely is computationally extremely demanding, making it intractable for systems as complex as the human brain with current methods, thus limiting direct empirical testing of the theory’s core predictions. The theory’s implication that consciousness is graded (Φ can take continuous values) and potentially widespread–a form of **panpsychism**, where even relatively simple systems possessing some non-zero Φ might possess a minimal degree of consciousness–is counterintuitive to many and philosophically debated. Furthermore, whether the specific mathematical formalism of Φ and the geometry of conceptual structures truly capture the subjective essence of consciousness and qualia remains a subject of intense philosophical discussion and requires further development and validation.

Despite these challenges, IIT provides a unique, principled, mathematically rigorous framework that attempts to directly ground a theory of consciousness in the essential properties of subjective experience itself, offering a distinct and influential perspective on the physical basis of consciousness and its potential alterations in contemplative practice.

## 9.3 Global Workspace Theory

A third influential family of theories, often contrasted with IIT due to its focus on function rather than intrinsic properties, falls under the umbrella of **Global Workspace Theory (GWT)**. Originally proposed in a cognitive psychological form by Bernard Baars, GWT has been significantly developed and neuroscientifically grounded by researchers such as Stanislas Dehaene, Jean-Pierre Changeux, and others; this neuroscientific elaboration is often referred to as the **Global Neuronal Workspace (GNW)** model.

GWT focuses primarily on the functional role of consciousness, particularly what philosophers sometimes call **“access consciousness”**–the state where information becomes globally available within the cognitive system, allowing it to be reported verbally, used for deliberate reasoning and planning, stored in episodic memory, and used to guide voluntary behavior.

The core metaphor of GWT is that of a **“theater stage”** or a central **workspace** with limited capacity. While a vast amount of neural processing occurs unconsciously in specialized, parallel processors operating “in the audience” or “backstage,” certain salient or task-relevant information gains access to this central, limited-capacity global workspace (the “stage”). According to the theory, once information is represented in this workspace, it is **“broadcast”** widely throughout the brain, becoming available to a multitude of unconscious consumer systems (e.g., language centers, motor planning systems, memory systems). This global availability and broadcasting capability is what constitutes consciousness in the functional sense, enabling flexible integration of information and coordinated control of behavior. **Attention** plays a crucial role as a “spotlight” or gating mechanism, selecting which information from the competing unconscious processors gains access to the global workspace at any given moment.

The neural implementation proposed by the GNW model involves a distributed network of neurons, primarily located in high-level associative cortical areas, particularly **fronto-parietal regions** (including prefrontal cortex, anterior cingulate cortex, and posterior parietal cortex). These neurons are characterized by long-range excitatory axons capable of broadcasting information widely across the cortex. Conscious access, according to the GNW model, corresponds to a specific, dynamic neural signature: a sudden, non-linear **“ignition”** event where activity within this workspace network becomes strongly amplified, sustained over time (forming a stable representation), and globally coherent, often accompanied by large-scale synchronization of neural firing, particularly in the high-frequency gamma band. This ignition event makes the selected information globally available, distinguishing conscious processing from subliminal (unconscious) processing where information might activate specialized processors locally but fails to enter the global workspace and trigger the broadcast.

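The all-or-none character of ignition can be illustrated with a toy dynamical sketch. It is hypothetical throughout: the function `run_workspace` and its parameters `gain`, `threshold`, and `leak` are made up for illustration and do not reproduce the published GNW simulations. A single workspace variable receives a brief pulse of evidence; below threshold the activity decays as a subliminal trace, while above threshold recurrent self-amplification takes over and the representation is sustained after the stimulus ends.

```python
import numpy as np

# Hypothetical toy, not the published GNW simulations: one workspace unit
# with leak, noise, and a nonlinear self-amplification term that switches
# on above a threshold. A brief pulse of evidence either decays away
# (subliminal) or triggers runaway "ignition" that outlasts the stimulus.

def run_workspace(input_strength, steps=60, gain=1.6, threshold=0.5,
                  leak=0.3, noise=0.02, seed=0):
    """Simulate one workspace unit receiving a 10-step pulse of evidence."""
    rng = np.random.default_rng(seed)
    w, trace = 0.0, []
    for t in range(steps):
        drive = input_strength if t < 10 else 0.0         # stimulus on for 10 steps
        recurrent = gain * w if w > threshold else 0.0    # nonlinear self-amplification
        w = w + drive + recurrent - leak * w + rng.normal(0.0, noise)
        w = min(max(w, 0.0), 1.0)                         # keep activation in [0, 1]
        trace.append(w)
    return trace

weak = run_workspace(0.05)    # never reaches threshold: decays, no ignition
strong = run_workspace(0.20)  # crosses threshold: ignites and is sustained
print(f"weak input   -> final activation {weak[-1]:.2f} (no ignition)")
print(f"strong input -> final activation {strong[-1]:.2f} (ignited and sustained)")
```

The qualitative point is the nonlinearity: a modest difference in input strength yields a categorical difference between decay and sustained activity, mirroring the sharp divide between conscious and subliminal processing described above.
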
GWT and the GNW model offer plausible and empirically testable explanations for many functional aspects of consciousness and the critical role of attention. In the context of contemplation, meditation practices, particularly those involving focused attention, could be interpreted as training the attentional mechanisms that gate access to the global workspace. This training might lead to greater voluntary control over the contents of the workspace (reducing entry of distracting thoughts) and enhanced stability of representations within it. Practices involving open monitoring might alter the dynamics of information entering and leaving the workspace, perhaps allowing for a broader, less filtered awareness of ongoing processes or a more rapid updating of workspace contents.

However, GWT/GNW is often criticized for primarily addressing access consciousness rather than the subjective quality of experience (phenomenal consciousness or qualia) itself–the “what it’s like” aspect. It explains *what information becomes conscious* in a functional sense (i.e., globally available and reportable), but not necessarily *why it feels like something* to have that information broadcast. Furthermore, it is less immediately clear how GWT/GNW would account for states like deep meditative absorption where reportability might be minimal or absent, or non-dual states where the subject-object structure, potentially implicit in the workspace architecture (information *for* consumer systems), seems to dissolve.

## 9.4 Comparing IIT and GWT

Integrated Information Theory (IIT) and Global Workspace Theory (GWT/GNW) represent two of the leading, yet fundamentally different, neuroscientific and computational approaches to understanding consciousness today. Comparing their core claims, strengths, weaknesses, and domains of explanation highlights key issues and ongoing debates within the field.

Their primary difference lies in their starting points and ultimate goals. IIT begins with the essential properties of experience itself (phenomenology) and seeks the physical substrate capable of generating those properties, leading to integrated information (Φ) as the proposed identity of consciousness. GWT begins with the functional role of consciousness (global information availability for report and control) and seeks the neural architecture that implements this function, leading to the global neuronal workspace model.

Consequently, their **focus and explanatory targets differ**. IIT aims to explain the existence, intrinsic nature, level (quantity), and specific quality (quale) of phenomenal consciousness itself–why experience exists and feels the way it does. GWT primarily aims to explain the difference between conscious and unconscious information processing in terms of cognitive accessibility, reportability, and functional integration–what makes certain information available for flexible use by the cognitive system.

Their respective **strengths and weaknesses** reflect this divergence. IIT offers a principled, mathematically precise framework derived from phenomenology that directly confronts the hard problem and attempts to explain the integrated nature of experience and potentially qualia (via conceptual structures). Its main weaknesses lie in the current computational intractability of calculating Φ for complex systems like the brain, making direct empirical testing difficult, and its potentially counterintuitive implications regarding panpsychism. GWT, on the other hand, is more readily testable through experiments contrasting conscious and unconscious perception (e.g., using masking paradigms), aligns well with established concepts in cognitive psychology like attention and working memory, and has identified plausible neural correlates (workspace ignition in fronto-parietal networks). Its main weakness is that it is often seen as explaining the functional correlates, prerequisites, or consequences of consciousness (access consciousness) rather than subjective experience (phenomenal consciousness) itself; the link between global availability and subjective feeling remains largely unexplained.

A crucial question is whether these theories are **mutually exclusive competitors or potentially complementary** descriptions of different facets of consciousness.

Some researchers propose a complementary view: IIT might explain the underlying substrate and level of phenomenal consciousness (why any experience exists at all), while GWT might explain how specific contents gain access to cognitive processing and become reportable within that conscious field. However, formally integrating these two frameworks is challenging due to their different starting assumptions, mathematical formalisms (or lack thereof in GWT’s original formulation), and predicted neural substrates (IIT potentially emphasizing posterior cortical “hot zones,” GWT emphasizing fronto-parietal networks). The relationship between integrated information (Φ) and global neuronal broadcasting remains unclear and is a topic of active theoretical and empirical investigation. The ongoing debate between proponents of IIT, GWT, and other theories (like PP/AIF or recurrent processing theories) underscores the lack of scientific consensus on a definitive model of consciousness and highlights the complexity of the phenomenon they seek to explain.

## 9.5 Emergence in AI: Parallels and Limits

The remarkable progress in **artificial intelligence (AI)**, particularly the development of large language models (LLMs) and other complex deep learning systems, has brought the concept of **emergence** to the forefront of scientific and public discussion. Emergence refers to the arising of novel and often surprising properties, behaviors, or capabilities in a complex system that are not explicitly programmed into its individual components or rules but rather “emerge” from the interactions among those components at scale. Modern AI systems, trained on vast datasets using relatively simple underlying algorithms (like backpropagation and transformer architectures), can exhibit strikingly sophisticated capabilities–such as fluent language generation, complex reasoning, creative problem-solving, and even coding–that seem to transcend the sum of their parts.

This phenomenon naturally invites **parallels** to the long-standing question of how consciousness might arise in the brain. Could subjective awareness similarly emerge from the complex, dynamic interactions of billions of non-conscious neurons and synapses? The concept of emergence offers a potential conceptual framework for understanding consciousness that avoids both strict reductionism (where consciousness is simply identical to neuronal activity) and substance dualism (where consciousness is fundamentally non-physical). Philosophers often distinguish between **weak emergence**, where the emergent properties, though perhaps unexpected, are in principle derivable or predictable from the properties and interactions of the lower-level components (often associated with computational systems), and **strong emergence**, where genuinely new causal powers or properties arise at the higher level that are irreducible to and cannot be fully explained by the lower level. Whether consciousness, if emergent, represents weak or strong emergence is a major philosophical debate with significant implications for physicalism.

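A standard illustration of weak emergence (not discussed in the text, but useful as a minimal concrete case) is Conway's Game of Life: each cell obeys a trivial local rule, yet a "glider" pattern arises that propagates coherently across the grid, behavior that is derivable in principle from the rule but stated nowhere in it.

```python
import numpy as np

# Standard textbook illustration (not from the text): Conway's Game of Life.
# Each cell follows one local rule, yet a "glider" emerges that translates
# itself one cell diagonally every four steps.

def life_step(grid):
    """One synchronous update of the Game of Life on a wrapping (toroidal) grid."""
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

glider = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [1, 1, 1]])
grid = np.zeros((8, 8), dtype=int)
grid[0:3, 0:3] = glider

for _ in range(4):          # four steps later the same shape reappears,
    grid = life_step(grid)  # shifted one cell down and one cell right
print(grid[1:4, 1:4])       # identical to the original glider pattern
```

Nothing in the update rule mentions motion; the glider's travel is a higher-level description that observers find indispensable, which is roughly what weak emergence means here.
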
However, drawing direct parallels between the emergent capabilities observed in current AI systems and the emergence of subjective consciousness in biological brains requires extreme caution due to crucial **limits and disanalogies**. Despite their impressive performance on specific tasks, current AI systems show no credible evidence of possessing genuine subjective experience, sentience, understanding, intentionality, or the qualitative feel (qualia) characteristic of biological consciousness. They excel at statistical pattern recognition, prediction, and manipulation of symbols based on their training data, but arguably lack the embodied, affective, world-grounded, and biologically evolved basis of animal and human minds (related to Searle’s Chinese Room argument regarding syntax vs. semantics and the symbol grounding problem). There is a significant risk of **anthropomorphism**, projecting human-like mental states onto systems that operate on fundamentally different principles and architectures.

While AI serves as an invaluable tool for simulating brain processes, modeling cognitive functions (e.g., within PP/AIF frameworks), and potentially testing aspects of functional theories like GWT, its current state provides limited direct insight into the intrinsic nature of subjective awareness or the conditions necessary for its emergence. Understanding emergence in AI is a fascinating scientific endeavor in its own right, but its relevance to solving the hard problem of consciousness remains indirect and necessitates careful critical interpretation to avoid misleading analogies.

---

[10 It from Bit](releases/2025/Contemplative%20Science/10%20It%20from%20Bit.md)