EEG/fMRI and Human Echolocation: How the Body Accumulates Evidence Through Sound
Human echolocation may seem, at first, like an extraordinary ability: blind individuals produce mouth clicks and use returning echoes to perceive objects, directions, and spaces. But the study by García-Lázaro and Teng shows something even deeper: the brain does not perceive space all at once. It accumulates evidence through sound.
The scientific question of the article is beautiful: does spatial perception in echolocation depend on a single ideal echo, or does it emerge from the progressive integration of several echoes over time? To answer this, the researchers combined psychophysics with EEG in blind expert echolocators and untrained sighted participants.
The study deserves recognition because it takes a real human ability, used in daily life by some blind individuals, and transforms it into a precise experimental question. Instead of treating echolocation as a curiosity, the article asks how the brain builds a spatial representation when vision is not available.
The study included 4 blind expert echolocators, all male, and 21 sighted novices, 12 of them male. The experts were recruited based on active and long-term use of mouth-click echolocation in daily activities. The sighted controls had no echolocation training, and all participants had normal hearing as assessed by audiometry.
The stimuli were synthesized mouth clicks and spatialized echoes, simulating a virtual object 1 meter away at different horizontal positions: 5°, 10°, 15°, 20°, or 25° to the left or right. Each trial presented sequences of 2, 5, 8, or 11 clicks, separated by 750 ms. The task was to decide whether the virtual object was located to the left or right of the body midline. Figure 1 shows click generation, the task structure, and the EEG/MVPA pipeline.
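To make this design concrete, here is a minimal Python sketch of the condition grid. The azimuths, click counts, and the 750 ms interval come from the article; the variable names and the trial-duration calculation are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the trial grid described above.
import itertools

AZIMUTHS_DEG = [-25, -20, -15, -10, -5, 5, 10, 15, 20, 25]  # virtual object positions
N_CLICKS = [2, 5, 8, 11]                                    # clicks per trial
ISI_S = 0.750                                               # inter-click interval (s)

trials = [
    {"azimuth_deg": az, "n_clicks": n, "duration_s": (n - 1) * ISI_S}
    for az, n in itertools.product(AZIMUTHS_DEG, N_CLICKS)
]
print(len(trials), "unique trial conditions")  # 40
print(trials[0])
```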
EEG was recorded with a Brain Products actiCHamp Plus system, using 64 channels, an EasyCap in a modified 10–20 configuration, Fz as online reference, 1000 Hz digitization, and StimTrak, also from Brain Products, to precisely mark auditory stimulus onset. This is highly relevant for BrainLatam/Brain Support because it shows a high-temporal-precision auditory EEG design, compatible with studies of perception, decision-making, and ecological neuroscience.
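The article does not publish its analysis pipeline, but a minimal MNE-Python sketch suggests how BrainVision recordings like these could be epoched around StimTrak-marked click onsets. The file name, filter settings, and epoch window here are assumptions, not the authors' choices.

```python
# Minimal, assumption-laden sketch of epoching a BrainVision recording.
import mne

raw = mne.io.read_raw_brainvision("echolocation.vhdr", preload=True)  # hypothetical file
raw.filter(l_freq=0.1, h_freq=40.0)                  # assumed band-pass
events, event_id = mne.events_from_annotations(raw)  # click-onset markers
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.8,             # assumed epoch window (s)
                    baseline=(None, 0), preload=True)
```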
The behavioral results were clear. Early-blind expert echolocators performed far better than sighted controls. The three early-blind participants scored above 88% correct, while sighted controls were close to chance, with an average of 51.83%. The late-blind participant performed above chance but did not statistically differ from sighted controls.
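As a purely illustrative aside, a simple binomial test shows why 88% correct is so far from chance in a left/right task. The trial count below is a made-up placeholder, not the study's actual number of trials.

```python
# Toy demonstration: is 88% correct above chance (50%)? N is hypothetical.
from scipy.stats import binomtest

n_trials = 200                      # placeholder, not the study's N
n_correct = int(0.88 * n_trials)
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"p = {result.pvalue:.2e}")   # vanishingly small: clearly above chance
```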
This point is very important: echolocation does not depend only on hearing sounds. It involves experience, training, body, and sensory history. Early blindness seems to favor deeper reorganization of auditory spatial perception, while late blindness may preserve spatial references more strongly calibrated by vision.
The central finding appears when the authors analyze the number of clicks. For blind experts, especially EB2 and EB3, precision improved as the number of clicks increased. In other words: each click added information. The brain was not simply waiting for “the best echo.” It seemed to sum successive evidence until a more stable spatial representation emerged.
Figure 3 is very didactic: it shows that localization thresholds decreased with more clicks. In simple terms, the more acoustic samples the echolocator received, the smaller the spatial difference needed for localization. For EB2, each additional click improved precision by approximately 0.61°. For EB3, the improvement was even stronger, around 2° per click, until reaching a plateau.
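A toy calculation makes these per-click gains tangible. The slopes come from the text; the starting thresholds and the plateau value are invented placeholders.

```python
# Toy illustration of per-click threshold improvement with a plateau.
def threshold_deg(n_clicks, start, slope, floor=0.0):
    """Linear improvement with click count, clipped at a plateau (floor)."""
    return max(floor, start - slope * (n_clicks - 2))  # 2 clicks = shortest sequence

for n in (2, 5, 8, 11):
    eb2 = threshold_deg(n, start=12.0, slope=0.61)            # EB2-like slope
    eb3 = threshold_deg(n, start=20.0, slope=2.0, floor=5.0)  # EB3-like, plateaus
    print(f"{n:2d} clicks: EB2-like {eb2:5.2f}°, EB3-like {eb3:5.2f}°")
```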
Figure 4 shows another important point: when the object was more lateralized, for example at 20° or 25°, fewer clicks were needed for localization. When it was close to the midline, such as 5°, the task became harder and required more accumulated evidence. This confirms that spatial perception through sound depends on both the strength of acoustic cues and the number of available samples.
The EEG revealed the most fascinating part. Using MVPA with SVM classifiers, the authors decoded whether the echo came from the left or right. In early-blind experts, echo laterality could already be discriminated neurally from the first clicks. In sighted controls, this significant decoding did not appear.
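For readers who want to see the logic of time-resolved MVPA, here is a sketch in the spirit of the analysis: at each time point, a linear SVM tries to decode echo side from the 64-channel pattern. The data shapes, cross-validation settings, and synthetic data are assumptions, not the authors' pipeline.

```python
# Time-resolved decoding sketch: one linear SVM per time point.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 64, 50))  # trials x channels x time points (synthetic)
y = rng.integers(0, 2, size=120)        # 0 = left echo, 1 = right echo

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(X.shape[2])
])  # decoding accuracy per time point; ~0.5 here because the data are pure noise
```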
Figure 5 shows that the initial EEG decoding peak was related to behavioral performance: participants with stronger neural discrimination of laterality also localized echoes better. This directly connects brain and behavior. It is not merely “interesting brain activity”; it is neural activity related to the real ability to perceive space through sound.
Another strong result appears in the evolution of the sequence. Figure 6 shows that neural responses changed according to the ordinal position of the click: first, second, fifth, eighth, eleventh. This suggests that the brain does not respond to every click as if it were identical to the previous one. It updates its neural state as the sequence unfolds.
The computational models reinforce this idea. The authors compared neural readout rules: cumulative sum, single best sample, softmax, and slope across the sequence. In EB2, the best model was cumulative integration. In EB3, the best model was rate-sensitive integration. No participant with sufficient behavioral variability favored a strictly single-best-sample readout.
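To fix ideas, here are minimal formalizations of the four readout rules applied to one trial's per-click evidence. The rule names come from the article; these one-line implementations are deliberately simplified assumptions.

```python
# Simplified versions of the four readout rules, on synthetic evidence.
import numpy as np

evidence = np.array([0.2, 0.5, 0.4, 0.9, 0.7])  # signed evidence per click (synthetic)

cumulative = evidence.sum()                        # cumulative sum: integrate everything
best_sample = evidence[np.abs(evidence).argmax()]  # single best sample: trust the peak
weights = np.exp(evidence) / np.exp(evidence).sum()
softmax_readout = (weights * evidence).sum()       # softmax: soft emphasis on strong clicks
slope = np.polyfit(np.arange(len(evidence)), evidence, 1)[0]  # slope: rate of change

print(cumulative, best_sample, softmax_readout, slope)
```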
This is central for BrainLatam2026: the echolocating brain does not seem to function like an auditory camera that captures a ready-made image. It functions as a living system of sampling, comparison, accumulation, and decision. Each click is a question asked to the territory. Each echo is a partial answer. Perception is born from the conversation between body and world.
A microphysiological detail can deepen this BrainLatam2026 reading: sound localization depends on extremely fast comparisons between signals arriving at both ears. Tiny differences in arrival time, phase, and intensity are processed in auditory brainstem circuits, especially structures such as the medial and lateral superior olive. Interaural time differences and interaural level differences are fundamental cues for horizontal sound localization, and these computations depend on binaural circuits able to compare information from both ears.
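A back-of-envelope calculation shows the scale of these cues. Using the classic Woodworth spherical-head approximation, ITD ≈ (r/c)(θ + sin θ), the azimuths used in the study map to interaural delays of tens to hundreds of microseconds. The head radius and speed of sound below are textbook values, not taken from the paper.

```python
# Woodworth spherical-head ITD estimate for the study's azimuths.
import numpy as np

r, c = 0.0875, 343.0                     # head radius (m), speed of sound (m/s)
degrees = [5, 10, 15, 20, 25]            # study azimuths
theta = np.deg2rad(degrees)
itd_us = (r / c) * (theta + np.sin(theta)) * 1e6

for deg, itd in zip(degrees, itd_us):
    print(f"{deg:2d}°: {itd:6.1f} µs")   # tens to hundreds of µs: brainstem territory
```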
This is where electrical synapses become relevant. They are formed by gap junctions, often associated with connexin 36 (Cx36), and they are widely present in major auditory centers in mouse and rat, including regions with purely electrical synapses and mixed electrical-chemical synapses. These synapses do not “think” the direction of sound, but they help create conditions for coupling, synchronization, and precise temporal transmission, which are crucial for circuits that compare timing and phase at very small scales.
In BrainLatam2026 language, this deepens the idea of APUS: the body-territory does not begin only in the cortex. It begins in microtemporal differences between the ears, in phase, delay, intensity, chemical synapses, and electrical synapses. Before “knowing” where sound comes from, the body is already comparing times, phases, and echoes. Human echolocation shows this principle in action: sound leaves the body, returns from the territory, and is integrated by auditory circuits capable of accumulating spatial evidence click after click.
Although the article uses EEG alone, combined EEG/fMRI offers a natural future horizon. EEG shows time: when each echo begins to generate neural evidence. fMRI could show anatomical space: auditory, parietal, motor, attentional, and possibly occipital networks involved in echolocation. Together, EEG and fMRI could unite temporal dynamics and brain localization.
From the lens of the Damasian Mind, echolocation is a beautiful example of consciousness in action. Interoception, proprioception, and auditory perception are not separated. The person regulates mouth, jaw, breathing, head, posture, ear, and attention. The click leaves the body, touches the environment, and returns as territorial information.
Here, APUS enters as body-territory. The echolocator is not merely hearing the environment. They are extending the body through sound. The click is a bodily action that crosses space and returns as a map. The environment stops being purely external and begins to function as a sensitive extension of proprioception.
The avatar-lens for this blog can be APUS with Jiwasa. APUS perceives the body-territory; Jiwasa perceives synchrony between body, sound, and environment. In echolocation, the body does not dominate the world: it converses with the world. Each echo is a response from the territory.
This study also speaks to Tensional Selves. An experienced echolocator organizes a highly refined Tensional Self: emit, wait, listen, compare, decide, and move. It is not rigid tension. It is functional tension. It is a sensitive readiness state capable of transforming sound into direction.
The BrainLatam2026 question would be: does high performance in echolocation depend only on hearing, or also on an efficient integration of breathing, head movement, facial and cervical muscle activity, and autonomic regulation? To answer this, we could create a multimodal design with EEG + fNIRS + EMG + respiration + HRV/RMSSD + GSR + inertial sensors, plus a complementary fMRI study.
EEG would show rapid evidence accumulation click by click. fNIRS would observe prefrontal and parietal cortex in more natural tasks. fMRI could map auditory, visual, and spatial networks. EMG would record microactions of the face, jaw, and neck. Inertial sensors would measure small head movements. HRV and respiration would show autonomic regulation during exploration.
A Latin American experimental design could compare blind experts, blind beginners, trained sighted participants, and children in sound-localization tasks. We could also create tasks in real environments: corridors, classrooms, public squares, forests, trails, urban spaces, and cultural territories. The question would be: how does the brain-body learn to transform echo into spatial belonging?
The generous decolonial critique is that neuroscience often separates perception, body, and territory. But echolocation shows the opposite. The world is not only represented in the brain. It is explored through cycles of action and return. In Latin American contexts, this opens space to study bodily orientation in communities, listening practices, navigation in forests, rivers, cities, music, dance, and traditional embodied knowledge.
The bridge with DREX Cidadão appears when we think about accessibility and public policy. Blind people do not need only technical adaptation; they need accessible territory, safety, mobility, sensory education, and technologies that respect their autonomy. A society in Zone 2 creates conditions for different bodies to expand their perceptual intelligence.
Closing
This study shows that human echolocation is not magic. It is body, sound, brain, synapses, time, phase, and territory accumulating evidence. Each click expands the possible field of action. For BrainLatam2026, this is fundamental: perceiving is not receiving the world ready-made; it is building presence with the environment. Echolocation reveals a neuroscience of APUS: the body extends through sound, listens to the return of the territory, and transforms echo into direction.
References
García-Lázaro, H. G., & Teng, S. (2026). Neural and behavioral correlates of evidence accumulation in human click-based echolocation. eNeuro, 13(4). doi:10.1523/ENEURO.0342-25.2026.
Rubio, M. E., & Nagy, J. I. (2015). Connexin36 expression in major centers of the auditory system in the CNS of mouse and rat: Evidence for neurons forming purely electrical synapses and morphologically mixed synapses. Neuroscience, 303, 604–629. doi:10.1016/j.neuroscience.2015.07.026.
Pecka, M., Brand, A., Behrend, O., & Grothe, B. (2008). Interaural time difference processing in the mammalian medial superior olive: The role of glycinergic inhibition. Journal of Neuroscience, 28(27), 6914–6925. doi:10.1523/JNEUROSCI.1660-08.2008.
Keine, C., et al. (2025). Cellular and synaptic specializations for sub-millisecond sound localization in the mammalian auditory brainstem. Frontiers in Cellular Neuroscience.