Dreams can’t be recorded the way you’d record a video, but scientists have made surprising progress toward reconstructing what the brain “sees” during sleep. Using brain imaging and AI, researchers can now generate rough visual and linguistic approximations of mental activity. The technology is real, but it’s far from producing a playback of last night’s dream.
What Brain Imaging Can Actually Capture
The closest thing to “recording” a dream relies on functional MRI (fMRI), which measures changes in blood flow in the brain. When specific groups of brain cells activate, the surrounding vessels deliver more oxygen-rich blood, and fMRI detects that shift with millimeter-scale spatial precision. Researchers use this data to build models that link patterns of brain activity to specific visual features like edges, motion, color, and shape.
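To make that concrete, here is a minimal sketch of such an "encoding model": a linear map from stimulus features to voxel responses, fit with ridge regression. All of the data below is synthetic, and real studies use much richer features (such as motion energy) extracted from the frames a subject actually watched, so treat this as an illustration of the idea rather than any study's pipeline.

```python
import numpy as np

# Minimal encoding-model sketch: predict each voxel's response as a
# weighted sum of visual features (edges, motion, color...). Synthetic
# data throughout; sizes are arbitrary stand-ins.
rng = np.random.default_rng(0)
n_samples, n_features, n_voxels = 1000, 50, 200

X = rng.normal(size=(n_samples, n_features))        # stimulus features per time point
W_true = rng.normal(size=(n_features, n_voxels))    # unknown feature-to-voxel weights
Y = X @ W_true + rng.normal(scale=2.0, size=(n_samples, n_voxels))  # noisy "fMRI" responses

# Ridge regression fit: W = (X'X + lambda*I)^-1 X'Y
lam = 10.0
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Held-out prediction accuracy per voxel (correlation)
X_test = rng.normal(size=(200, n_features))
Y_test = X_test @ W_true + rng.normal(scale=2.0, size=(200, n_voxels))
Y_pred = X_test @ W_hat
r = [np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1] for v in range(n_voxels)]
print(f"median voxel prediction correlation: {np.median(r):.2f}")
```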
In a landmark study published in Current Biology, scientists at UC Berkeley recorded brain activity while people watched movie clips, then used those neural patterns to train a computer model. The system learned which brain activity patterns corresponded to which visual features. It could then take new brain scans and work backward, generating blurry but recognizable video reconstructions of what the person was watching. The reconstructions captured general movement, color, and shape, not fine detail.
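The "work backward" step can be sketched in the same spirit. Assuming an already-fitted encoding model, the decoder scores a library of candidate clips by how well each clip's predicted brain response matches the measured one, then blends the best matches into a blurry reconstruction. The library, weights, and scan below are simulated stand-ins, not the study's data.

```python
import numpy as np

# Reconstruction by candidate scoring: given a measured brain response,
# rank a library of candidate clips by how well the encoding model's
# PREDICTED response matches the MEASURED one, then average the best.
rng = np.random.default_rng(1)
n_features, n_voxels, n_candidates = 50, 200, 5000

W_hat = rng.normal(size=(n_features, n_voxels))          # fitted encoding weights
library = rng.normal(size=(n_candidates, n_features))    # features of candidate clips

true_idx = 42                                            # the clip "actually seen"
measured = library[true_idx] @ W_hat + rng.normal(scale=1.0, size=n_voxels)

predicted = library @ W_hat                              # predicted response per clip
# correlate each predicted response with the measured response
pz = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
mz = (measured - measured.mean()) / measured.std()
scores = pz @ mz / n_voxels

top_k = np.argsort(scores)[::-1][:10]
reconstruction = library[top_k].mean(axis=0)             # blurry average of best matches
print("true clip in top 10:", true_idx in top_k)
```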
This was done while subjects were awake and watching a screen. Applying the same approach to dreams, where there’s no external stimulus to verify against, is a much harder problem. But it established that the brain’s visual processing can, in principle, be decoded from the outside.
How AI Changed the Game
More recent work has paired brain imaging with generative AI models similar to those behind image generators like Stable Diffusion. Rather than trying to reconstruct images pixel by pixel, these systems take a two-step approach. First, they decode a high-level description of what the brain is processing (a face, an animal, a landscape). Then they use an AI image generator to produce a realistic picture that matches that description, refining it against the actual brain data through multiple rounds.
A 2023 study using 7-Tesla fMRI (a particularly powerful scanner) showed this method can produce reconstructions that clearly resemble the original images a person viewed. The system decoded a semantic “summary” from visual cortex activity, generated 250 candidate images, scored each one against the real brain data, and repeated the process ten times. The final images captured the right category, layout, and often the mood of the original, though details like text or exact facial features remained off.
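A toy version of that generate-score-refine loop looks like the following. A real system uses a diffusion model to propose candidate images; here the "generator" simply perturbs the current best guess in a feature space, and the "scan" is a noisy linear readout of the target image's features. Everything below is invented for illustration, not the study's pipeline.

```python
import numpy as np

# Iterative refinement: generate candidates, score each against the brain
# data via an assumed encoding model, blend the best, repeat.
rng = np.random.default_rng(2)
n_features, n_voxels = 64, 300
W = rng.normal(size=(n_features, n_voxels))      # assumed encoding model

target = rng.normal(size=n_features)             # features of the image actually seen
brain = target @ W + rng.normal(scale=3.0, size=n_voxels)  # noisy "scan"

def score(candidates):
    """How well each candidate's predicted brain response matches the scan."""
    pred = candidates @ W
    return np.array([np.corrcoef(p, brain)[0, 1] for p in pred])

guess = rng.normal(size=n_features)              # crude initial "semantic" decode
for round_ in range(10):                         # ten refinement rounds
    candidates = guess + rng.normal(scale=0.5, size=(250, n_features))
    s = score(candidates)
    guess = candidates[np.argsort(s)[::-1][:25]].mean(axis=0)  # keep best, blend

print(f"final similarity to true image features: "
      f"{np.corrcoef(guess, target)[0, 1]:.2f}")
```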
That same year, a team at the University of Texas published a semantic decoder in Nature Neuroscience that went beyond images entirely. Their system reconstructed continuous language from brain activity. A person could listen to a story, imagine telling a story, or even watch a silent video, and the decoder would generate word sequences that recovered the general meaning. It wasn’t a word-for-word transcript, but it captured the gist of what someone was thinking about with surprising fidelity.
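The core idea behind that kind of decoder can be sketched as a small beam search: propose candidate next words, predict the brain response each continuation would evoke through an encoding model, and keep the candidates whose predictions best match the actual recording. The vocabulary, embeddings, and scan below are toy stand-ins; the real system proposes words with a neural language model.

```python
import numpy as np

# Toy semantic decoding via beam search against a simulated brain recording.
rng = np.random.default_rng(3)
vocab = ["dog", "ran", "park", "rain", "house", "door", "open", "night"]
emb = {w: rng.normal(size=16) for w in vocab}     # invented word embeddings
W = rng.normal(size=(16, 100))                    # invented encoding model

true_story = ["dog", "ran", "park"]
scan = sum(emb[w] for w in true_story) @ W + rng.normal(scale=2.0, size=100)

beams = [([], 0.0)]                               # (word sequence, score)
for _ in range(3):                                # decode three words
    expanded = []
    for words, _ in beams:
        for w in vocab:
            cand = words + [w]
            pred = sum(emb[x] for x in cand) @ W  # predicted brain response
            s = np.corrcoef(pred, scan)[0, 1]     # match against the scan
            expanded.append((cand, s))
    beams = sorted(expanded, key=lambda b: -b[1])[:4]   # keep best 4 beams

print("decoded gist:", beams[0][0])
```

Note that the scoring here is order-invariant, so the decoder tends to recover which words were involved rather than their exact sequence, which loosely mirrors the gist-not-transcript behavior described above.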
The Gap Between Waking Thoughts and Dreams
Nearly all of these breakthroughs involve awake participants. Dreams present several unique challenges. The biggest is that fMRI requires you to lie perfectly still inside a massive, loud machine. People can fall asleep in an MRI scanner, and some studies have captured brain activity during the earliest, lightest stages of sleep. But reaching REM sleep, the stage in which most vivid dreaming occurs, is difficult in that environment, and any head movement corrupts the data.
There’s also no way to verify what someone actually dreamed. With waking studies, researchers can compare the reconstruction to the original image or audio clip. With dreams, the only reference point is the dreamer’s own report after waking, which is often fragmented and unreliable. This makes it nearly impossible to measure how accurate a dream “recording” really is.
EEG, which uses electrodes on the scalp, is far more sleep-friendly. People wear EEG caps routinely in sleep labs. But EEG captures electrical signals that are blurred by the skull and overlapping brain sources, a problem called volume conduction. It can reliably detect sleep stages and broad patterns of brain activity, but it lacks the spatial resolution needed to decode specific visual content. Researchers have used EEG to classify general categories of mental imagery (faces versus houses, for instance), but reconstructing anything resembling a dream scene from EEG alone remains out of reach.
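The kind of coarse decoding EEG does support might look like the sketch below: classifying which of two categories a person is viewing from band-power features. The data is simulated, and the channel counts and effect sizes are invented; the point is that the output is a single binary label per trial, not an image.

```python
import numpy as np

# Coarse EEG-style decoding: classify face vs. house from simulated
# per-trial band-power features across channels.
rng = np.random.default_rng(4)
n_trials, n_channels = 200, 32

labels = rng.integers(0, 2, size=n_trials)               # 0 = face, 1 = house
effect = rng.normal(size=n_channels) * 0.4               # small class-dependent shift
power = rng.normal(size=(n_trials, n_channels)) + np.outer(labels, effect)

# Nearest-class-mean classifier with leave-one-out evaluation
correct = 0
for i in range(n_trials):
    mask = np.arange(n_trials) != i
    m0 = power[mask & (labels == 0)].mean(axis=0)
    m1 = power[mask & (labels == 1)].mean(axis=0)
    pred = int(np.linalg.norm(power[i] - m1) < np.linalg.norm(power[i] - m0))
    correct += pred == labels[i]

print(f"leave-one-out accuracy: {correct / n_trials:.0%}")
# Well above chance, yet nowhere near enough information to render a scene.
```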
Communicating From Inside a Dream
While recording dream imagery is still theoretical, scientists have demonstrated something arguably stranger: two-way communication with lucid dreamers. In lucid dreams, the sleeper is aware they’re dreaming and can exert some voluntary control.
In a laboratory study published in the International Journal of Dream Research, participants entered verified lucid dreams (confirmed by pre-agreed eye movement signals during REM sleep) and controlled a virtual car on a screen using subtle muscle contractions in their arms and legs. Sensors on their bodies detected residual electrical activity in those muscles, and software translated the impulses into movements of the virtual car in real time. Red lights placed in front of their closed eyes signaled obstacles. In 12 of the 18 confirmed lucid dreams, participants successfully drove the car, made controlled turns, and avoided obstacles.
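A hedged sketch of that signal chain: rectify and smooth a raw muscle (EMG) trace into an activity envelope, detect threshold crossings as twitches, and map each twitch to a steering command. The trace, threshold, and turn angle below are all invented; the study's actual hardware and processing are not public code.

```python
import numpy as np

def emg_envelope(raw):
    """Rectify and smooth a raw EMG trace into an activity envelope."""
    rectified = np.abs(raw)
    kernel = np.ones(20) / 20
    return np.convolve(rectified, kernel, mode="same")

# Simulate 10 s of baseline noise with two deliberate left-arm twitches
rng = np.random.default_rng(5)
fs = 200                                         # samples per second
raw = rng.normal(scale=0.05, size=10 * fs)
raw[400:480] += rng.normal(scale=0.6, size=80)   # twitch around 2 s
raw[1400:1480] += rng.normal(scale=0.6, size=80) # twitch around 7 s

envelope = emg_envelope(raw)
active = envelope > 0.15                         # invented detection threshold

# Each rising edge of the "active" signal is one steering command
edges = np.flatnonzero(np.diff(active.astype(int)) == 1)
heading = 0.0
for e in edges:
    heading -= 15.0                              # left-arm twitch: turn 15 degrees left
    print(f"twitch at {e / fs:.1f}s -> heading {heading:+.0f} deg")
```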
This isn’t dream recording, but it shows that information can flow in both directions between a dreaming brain and the outside world. The dreamer receives signals (light cues) and sends signals back (muscle twitches), all while remaining asleep in REM.
How Much Training the Technology Requires
Every brain is wired slightly differently, which means these decoding systems must be custom-trained for each individual. The University of Texas semantic decoder, for example, required each participant to spend many hours in the fMRI scanner listening to stories so the algorithm could learn their unique neural patterns. Research consistently shows that decoding performance improves with more training data, and there’s no shortcut around this yet.
A large-scale study published in Nature Communications in 2025 tested decoding approaches across 723 participants and found that even with massive datasets, performance on individual word-level decoding remained limited. The researchers used both EEG and MEG (a related technique that measures magnetic fields from brain activity) and found that a 10-hour recording session for a single participant was needed to reach useful accuracy levels for language comprehension tasks. For the foreseeable future, any meaningful brain decoding requires lengthy, individualized calibration that makes casual or consumer use impractical.
Consumer Devices and Their Limits
A few startups have entered the space with consumer-oriented products. Prophetic, a US company, has marketed a headband called the Halo, which it claims uses ultrasound and AI to induce and stabilize lucid dreams. The company has taken reservations and projects delivery in 2025. Independent sleep researchers remain skeptical, noting the absence of peer-reviewed evidence supporting the device’s claims. No consumer product currently on the market can record dream content in any meaningful sense.
The gap between a research-grade fMRI setup (which costs millions of dollars and fills a room) and a wearable headband is enormous. The physics of brain measurement impose hard limits: the more portable and comfortable a device is, the less detailed the brain data it can collect.
Privacy Before the Technology Arrives
Even before dream recording becomes possible, lawmakers are already grappling with who owns your neural data. Chile passed a constitutional amendment in 2021 protecting “cerebral activity and the information drawn from it” as a fundamental right. In 2023, Chile’s Supreme Court unanimously ordered a company to delete a consumer’s neural data for violating those protections.
In the United States, Colorado, California, and Montana have added neural data to their definitions of sensitive personal information under existing privacy laws. Several other states are considering similar measures. At the international level, the OECD issued neurotechnology guidelines in 2019, UNESCO published a draft instrument in 2024, and the UN Special Rapporteur on privacy urged all countries in 2025 to create specific regulations for neurotechnology.
One reassuring finding from the University of Texas decoder study: their system only works when the person actively cooperates. Subjects who didn’t want to be decoded could easily disrupt the system by thinking about something else. At least with current, non-invasive technology, you can’t read someone’s mind without their willing participation.