How Scientists Are Learning to Detect Consciousness in Brains and Machines

Austin, Texas — February 22, 2026
By Sherry Phipps

Scientists are rapidly expanding the toolkit for detecting consciousness in unresponsive patients, nonhuman animals and even advanced AI systems, reshaping core assumptions about who is aware and how we might know it. From brain‑imaging “yes/no” codes in people diagnosed as vegetative to mathematically defined complexity scores in sleeping brains and anesthetized animals, researchers are moving beyond behavior to probe the hidden signatures of inner experience. Their efforts could change clinical decisions at the bedside, drive new animal‑welfare standards, and raise urgent questions about when a machine might deserve moral consideration.


When the Body Is Still but the Mind Is Awake

In a landmark study published in 2006, a 23‑year‑old woman who met all criteria for a vegetative state was asked to imagine playing tennis or walking through her home while undergoing functional MRI scans. Each mental‑imagery task lit up distinct, appropriate regions of her cortex in patterns indistinguishable from those of healthy volunteers following the same instructions, revealing that she could understand language, follow commands and willfully modulate her brain activity despite showing no behavioral response. This paradigm, sometimes called the “tennis task,” helped launch a new field focused on detecting covert consciousness in people who appear entirely unresponsive at the bedside.

Follow‑up work adapted these approaches with high‑density electroencephalography (EEG), which is cheaper and more portable than MRI. By instructing patients diagnosed as vegetative to attempt hand movements or other simple actions, clinicians could look for characteristic motor‑planning signals in the EEG—evidence that the brain was executing the command internally even when the body could not move. In some cases, these methods have overturned diagnoses of unawareness, revealing a state now often termed cognitive motor dissociation, in which a person retains awareness and intention but cannot produce reliable outward behavior.
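
To make the logic concrete, the short Python sketch below simulates the core signal‑processing idea: attempted movement tends to suppress power in the sensorimotor mu/beta band (roughly 8 to 30 Hz), so a reliable power drop during “try to move” epochs, compared with rest, counts as evidence of covert command‑following. The sampling rate, band limits, simulated epochs and simple t‑test criterion are all illustrative assumptions, not the validated clinical pipeline.

```python
# Illustrative sketch, not a validated clinical pipeline: look for a
# reliable drop in sensorimotor mu/beta power when a patient is asked
# to attempt movement. Sampling rate, band, and statistics are assumed.
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_ind

FS = 250  # assumed sampling rate (Hz)

def band_power(epoch, band=(8.0, 30.0)):
    """Mean spectral power of one EEG epoch in the mu/beta band."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def command_following_evidence(move_epochs, rest_epochs):
    """Attempted movement typically suppresses mu/beta power
    (event-related desynchronization), so a consistent drop during
    'try to move' epochs, relative to rest, is treated here as
    evidence of covert command-following."""
    move = np.array([band_power(e) for e in move_epochs])
    rest = np.array([band_power(e) for e in rest_epochs])
    t, p = ttest_ind(move, rest)
    return {"mean_power_drop": rest.mean() - move.mean(), "t": t, "p": p}

# Toy usage with simulated 2-second epochs; real studies use many trials,
# multiple electrodes, and cross-validated classifiers.
rng = np.random.default_rng(0)
rest_epochs = [rng.normal(size=2 * FS) for _ in range(30)]
move_epochs = [0.7 * rng.normal(size=2 * FS) for _ in range(30)]  # damped power
print(command_following_evidence(move_epochs, rest_epochs))
```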

Clinically, these tools complement bedside exams that monitor for consistent, voluntary actions such as deliberate blinking, eye movements toward a target or reproducible responses to verbal instructions over time. Together, neuroimaging and careful behavioral assessment are improving diagnostic accuracy in disorders of consciousness, which historically have suffered high misdiagnosis rates with profound consequences for treatment decisions and life‑support choices.

Measuring the Complexity of Conscious Brains

Alongside command‑following tests, researchers are also quantifying how richly the brain integrates information as a potential marker of consciousness. One influential line of work uses a technique that perturbs the cortex with a magnetic pulse and then records the spread and complexity of resulting electrical activity, condensing this into a number known as the perturbational complexity index. In healthy, awake adults, this index is high, reflecting widespread, differentiated activity; it reliably drops during deep sleep or general anesthesia and in many patients with profound disorders of consciousness.
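
The index itself boils down to a compressibility calculation. The sketch below illustrates that core idea under simplifying assumptions: binarize a channels‑by‑time response matrix, flatten it, and count Lempel‑Ziv phrases, normalizing by the complexity expected of random data. The median threshold, the LZ78‑style parse and the normalization are simplified stand‑ins, not the published PCI algorithm.

```python
# Simplified sketch of the complexity calculation at the heart of
# PCI-style measures; the median threshold, LZ78-style phrase parsing,
# and normalization are stand-ins, not the published PCI algorithm.
import numpy as np

def lz_phrase_count(bits: str) -> int:
    """Count distinct phrases in a simple LZ78-style parse of a bit string."""
    seen, phrase, count = set(), "", 0
    for ch in bits:
        phrase += ch
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)  # an unfinished tail counts once

def perturbational_complexity(response: np.ndarray) -> float:
    """Normalized Lempel-Ziv complexity of a binarized (channels x time)
    evoked response: high for widespread, differentiated activity, low
    for stereotyped or localized responses."""
    # Binarize each channel against its own median amplitude (assumption).
    binary = (response > np.median(response, axis=1, keepdims=True)).astype(int)
    bits = "".join(map(str, binary.ravel()))
    n = len(bits)
    # Normalize by the phrase count expected of a random string of equal
    # length (~ n / log2 n) so scores are comparable across lengths.
    return lz_phrase_count(bits) / (n / np.log2(n))

rng = np.random.default_rng(1)
awake_like = rng.normal(size=(32, 300))                    # differentiated
sleep_like = np.tile(rng.normal(size=(32, 1)), (1, 300))   # stereotyped
print(perturbational_complexity(awake_like))   # relatively high
print(perturbational_complexity(sleep_like))   # much lower
```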

Theoretical frameworks such as integrated information theory and global neuronal workspace theory both predict that conscious states involve extensive, bidirectional communication across distributed brain networks rather than localized, isolated processing. Recent work using information‑theoretic tools has identified a “synergistic global workspace” in the human brain—a set of gateway and broadcaster regions that collect information from specialized modules, integrate it, and distribute it back out. Loss of consciousness, whether from anesthesia or severe brain injury, coincides with disrupted information integration within this workspace, and these integration measures tend to rebound when consciousness returns.
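
One coarse way to quantify such integration is the mutual information shared between two halves of a system, which the toy sketch below estimates under a Gaussian assumption. The published synergistic‑workspace analyses rely on partial information decomposition, which is considerably more elaborate; this example conveys only the simpler intuition that integrated dynamics share information broadly while fragmented ones do not.

```python
# Toy integration measure, assuming jointly Gaussian signals: mutual
# information in bits between two halves of a multichannel recording.
# The published "synergistic workspace" work uses partial information
# decomposition, which is far more involved than this sketch.
import numpy as np

def gaussian_integration(data: np.ndarray, split: int) -> float:
    """I(X;Y) = 0.5 * log2(|Sx||Sy| / |S|) for channels [0:split] vs [split:]."""
    cov = np.cov(data)
    _, logdet_full = np.linalg.slogdet(cov)
    _, logdet_x = np.linalg.slogdet(cov[:split, :split])
    _, logdet_y = np.linalg.slogdet(cov[split:, split:])
    return 0.5 * (logdet_x + logdet_y - logdet_full) / np.log(2)

rng = np.random.default_rng(2)
T = 5000
# "Integrated" system: six channels driven by one shared latent source.
shared = rng.normal(size=T)
integrated = np.stack([shared + 0.5 * rng.normal(size=T) for _ in range(6)])
# "Fragmented" system: six channels that evolve independently.
fragmented = rng.normal(size=(6, T))

print(gaussian_integration(integrated, split=3))   # well above zero
print(gaussian_integration(fragmented, split=3))   # near zero
```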

These network‑level metrics are not yet standard clinical instruments, but they illustrate a central theme of current science: instead of asking only what a person does, investigators increasingly ask how their brain processes and shares information. That shift opens the door to cross‑species comparisons and even to tentative criteria for non‑biological systems.

Extending the Search to Animals

When scientists turn to animals, they cannot rely on verbal reports, but many of the same principles guide their work. Some laboratories have adapted human‑focused measures like the perturbational complexity index to rodents, finding activity patterns that parallel the differences between wakefulness, anesthesia and sleep seen in people. Others use behavioral paradigms and neurophysiological recordings to test for sentience—an organism’s capacity for immediate experiences of pleasure, pain or other affective states, which many researchers treat as a foundational component of consciousness.

Standard experimental approaches include looking for flexible learning, evidence that animals can weigh trade‑offs to avoid pain, and neural responses that track noxious stimuli in ways similar to human pain circuits. Studies of mammals and birds increasingly support the view that many species have rich subjective lives, and emerging work on fish and some invertebrates has prompted legislative and policy debates in several countries. While no single test can fully settle whether a particular animal is conscious, converging behavioral and neural evidence is reshaping welfare standards in research, farming and wildlife management.

For neurodivergent and disabled people who depend on service animals or emotional‑support animals, these debates are not abstract: they influence housing policies, medical‑facility rules and public‑space access that can either respect or undermine both human and animal well‑being. Bringing an equity lens to consciousness research means recognizing how decisions about which creatures “count” can ripple into law, care systems and lived experience.

Could an AI System Be Conscious?

Scientific tools for assessing consciousness in AI are far less mature and far more controversial. At present, there are no agreed‑upon tests that could demonstrate machine consciousness, only preliminary proposals that borrow from biological theories and from behavioral‑style questioning. One prominent suggestion is to evaluate AI architectures against checklists derived from human neuroscience—asking whether a system implements information‑integration and global‑workspace‑like dynamics similar to those linked with consciousness in the brain. If an AI replicated key computational properties thought to underlie awareness in humans, some theorists argue, this would at least raise the probability that it is conscious.
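
In code, such a checklist amounts to little more than weighted scoring. The hypothetical sketch below shows the shape of the exercise; the indicators, weights and threshold are invented for this article and do not reproduce any published rubric.

```python
# Hypothetical sketch of the checklist idea: score a system description
# against indicator properties motivated by neuroscientific theories.
# The indicators, weights, and interpretation below are invented for
# illustration; no published rubric is being reproduced here.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    theory: str      # which framework motivates it
    weight: float    # illustrative importance, not an endorsed value

INDICATORS = [
    Indicator("recurrent (not purely feedforward) processing",
              "recurrent processing theory", 1.0),
    Indicator("limited-capacity workspace that broadcasts globally",
              "global workspace theory", 1.0),
    Indicator("integrated, irreducible cause-effect structure",
              "integrated information theory", 1.0),
    Indicator("self-model used for flexible control",
              "higher-order theories", 0.5),
]

def checklist_score(satisfied: set[str]) -> float:
    """Fraction of weighted indicators a system satisfies (0 to 1).

    A high score would not demonstrate consciousness; at most it raises
    the probability under the theories that motivate each indicator.
    """
    total = sum(i.weight for i in INDICATORS)
    got = sum(i.weight for i in INDICATORS if i.name in satisfied)
    return got / total

# Example: a system might plausibly tick one box while failing the rest.
print(checklist_score({"limited-capacity workspace that broadcasts globally"}))
```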

Another line of thought envisions “consciousness‑blind” training, in which a model is built without any exposure to language about feelings, inner life or awareness, and is then probed with questions such as “What is it like to be you right now?” to see whether it nonetheless develops coherent, self‑referential responses. Critics counter that it is nearly impossible to guarantee that training data are free of consciousness‑related content and that large language models can generate fluent answers about subjective experience without possessing any. As a result, many experts caution against treating chatty, emotionally expressive AI systems as evidence of inner life; they see current models as powerful pattern‑matchers rather than entities with genuine experiences.

Still, as AI systems increasingly mediate healthcare, education and public services, questions about their status intersect with longstanding disability‑rights issues. When an algorithm is embedded in triage tools or communication aids, people who already face discrimination based on perceived cognitive capacity have strong stakes in how we define and measure awareness—for humans and for machines that may stand between them and care.

Building a Cautious, Inclusive Science of Consciousness

Researchers are now working toward systematic frameworks that compare multiple tests and theories rather than betting on a single favored measure. One proposed roadmap begins by evaluating candidate tests in healthy adults whose consciousness is not in doubt, scoring those methods on reliability and theoretical grounding. The next step is to apply only the highest‑confidence tools to more complex cases—people under anesthesia, patients with brain injuries, nonhuman animals and, eventually, AI architectures—updating confidence ratings as convergences or discrepancies appear.
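
In spirit, this resembles Bayesian evidence accumulation. The sketch below shows one hypothetical way to combine calibrated tests into an updated probability; every rate in it is invented, and the assumption that tests are independent is a strong simplification rather than a feature of any published framework.

```python
# Illustrative sketch of the tiered idea: calibrate each test's hit and
# false-positive rates on cases where consciousness is not in doubt,
# then combine outcomes on a hard case with a naive-Bayes log-odds
# update. All numbers and the independence assumption are invented.
import math

# (sensitivity, false_positive_rate) calibrated on uncontroversial cases
TESTS = {
    "command_following_fmri": (0.45, 0.02),   # insensitive but specific
    "perturbational_complexity": (0.85, 0.15),
    "bedside_behavioral_exam": (0.60, 0.10),
}

def updated_probability(outcomes: dict[str, bool], prior: float = 0.5) -> float:
    """Posterior probability of consciousness given test outcomes,
    treating tests as independent (a strong simplifying assumption)."""
    log_odds = math.log(prior / (1 - prior))
    for test, positive in outcomes.items():
        sens, fpr = TESTS[test]
        if positive:
            log_odds += math.log(sens / fpr)
        else:
            log_odds += math.log((1 - sens) / (1 - fpr))
    return 1 / (1 + math.exp(-log_odds))

# A patient who fails the behavioral exam but passes both brain-based tests:
print(updated_probability({
    "bedside_behavioral_exam": False,
    "command_following_fmri": True,
    "perturbational_complexity": True,
}))
```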

An adversarial collaboration published in 2025 exemplifies this shift by directly pitting key predictions of integrated information theory against those of global neuronal workspace theory using shared datasets and pre‑registered analyses. Such efforts do not just advance abstract philosophy; they aim to produce testable, interoperable criteria for consciousness that clinicians, ethicists, policymakers and communities can understand and scrutinize. For disabled and neurodivergent people whose consciousness has been discounted or doubted in medical and legal settings, a transparent, pluralistic approach to measurement is more than an academic ideal—it is a safeguard against erasure.

As science pushes toward clearer indicators of who or what is conscious, the stakes are likely to grow: from decisions about continuing life support to debates over animal farming and the future rights of sophisticated AI. The emerging consensus is not that one metric or one theory will settle the question, but that a careful convergence of behavioral signals, brain dynamics and computational architecture—interpreted through an equity‑aware lens—offers the most responsible path forward.

