In the blink of an eye, three weeks have passed since my last update. This transition in identity has brought new challenges, and at the same time prompted me to reflect on my years as a student. As mentioned in my previous blog post, in this article I aim to organize and summarize my master’s research, while also sharing some thoughts on the future development of this field.
Can we really read the mind?
At first glance, I believe many people would find this idea almost unbelievable. However, after two years of systematic research on brainwave signals, I began to feel, for the first time, that perhaps we can indeed extract something meaningful from them. Of course, this does not mean literally reading people’s inner thoughts. Such a notion still belongs, to some extent, in the realm of fantasy. But if the goal is to uncover neural mechanisms and deepen our understanding of the human brain, then I can say with confidence: yes—and in fact, this has never been particularly difficult. This is precisely what neuroscientists have been doing for decades.
For a long time, researchers have known that by analyzing the waveforms of electroencephalography (EEG) signals (leaving aside fMRI for now), we can gain insights into human perception. Taking vision as an example—my own area of research—scientists typically rely on a specific waveform known as the visual evoked potential (VEP) to study visual processing mechanisms. Different visual stimuli generate different VEPs. By comparing these waveforms under varying stimulus conditions, we can begin to glimpse how the brain processes visual information.
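To make the idea of a VEP concrete, here is a minimal simulation (the waveform shape, trial count, and noise level are all illustrative, not real recording parameters): averaging many stimulus-locked epochs cancels the random noise and leaves the evoked waveform behind.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: 200 trials of a single-channel EEG epoch,
# 300 time points (0-300 ms after stimulus onset), each containing
# the same small evoked waveform buried in much larger noise.
t = np.linspace(0.0, 0.3, 300)
evoked = 2.0 * np.exp(-((t - 0.1) ** 2) / 0.0005)   # toy "P100-like" bump at ~100 ms
trials = evoked + 5.0 * rng.standard_normal((200, 300))

# Averaging across trials attenuates random noise by roughly
# sqrt(n_trials), revealing the visual evoked potential.
vep = trials.mean(axis=0)

# The peak of the averaged waveform lands near 100 ms, where the
# simulated evoked response was placed.
peak_time = t[np.argmax(vep)]
```

Any single trial here is dominated by noise; only the average makes the waveform visible, which is exactly why VEP studies collect many repetitions per condition.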
Everything sounds quite straightforward, doesn’t it?
This intuitive and rather straightforward approach has long been the mainstream in the field. But have we overlooked something? Does the VEP truly capture all the relevant information? Personally, I would argue that it does not. The issue is that it focuses on too little: a handful of waveform features extracted from a rich, high-dimensional signal. Attempting to infer the whole picture from such a limited subset of the data is inherently imperfect.
So why not analyze all the data comprehensively?
If I told you that just one second of EEG data can contain thousands of data points, you would probably agree that this is no trivial task. It is certainly beyond what the human eye can discern.
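To make that count concrete, here is a back-of-the-envelope calculation (the sampling rate and channel count are typical illustrative values, not the specs of any particular setup):

```python
# Hypothetical but typical recording parameters
sampling_rate_hz = 1000   # samples per second per electrode
n_channels = 64           # electrodes on the scalp

# Data points produced by a single second of recording
points_per_second = sampling_rate_hz * n_channels   # 64,000 values
```

Tens of thousands of values per second, across dozens of conditions and hundreds of trials, is far more than any human eye can compare waveform by waveform.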
Fortunately, with the help of machine learning algorithms, we can let machines extract features from the data and compute something akin to a “distance” metric that quantifies the similarity between neural responses under different conditions. If two stimulus conditions produce similar neural responses, they very likely share underlying visual mechanisms.
Through this approach, we can construct a representation of the entire neural mechanism, revealing insights that cannot be fully captured by VEP waveforms alone. At this point, it almost feels as though we are truly beginning to “read” the brain. This is the power of machine learning.
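As a rough sketch of this idea (the correlation distance and the toy data below are my illustration, not the exact pipeline of my research), one can compare condition-averaged responses pairwise and collect the distances into a matrix:

```python
import numpy as np

def neural_rdm(responses):
    """Build a matrix of pairwise distances between neural responses.

    responses: array of shape (n_conditions, n_features), each row a
    flattened (channels x time) response for one stimulus condition.
    Returns an (n_conditions, n_conditions) matrix of correlation
    distances: 0 means identical response patterns, values near 1
    mean unrelated patterns.
    """
    n = responses.shape[0]
    rdm = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # 1 - Pearson correlation as a "distance" between responses
            r = np.corrcoef(responses[i], responses[j])[0, 1]
            rdm[i, j] = 1.0 - r
    return rdm

# Toy data: 3 hypothetical color conditions, 64 channels x 100 time points.
# Conditions A and B share an underlying response; C is unrelated.
rng = np.random.default_rng(0)
base = rng.standard_normal(64 * 100)
responses = np.stack([
    base + 0.1 * rng.standard_normal(64 * 100),  # condition A
    base + 0.1 * rng.standard_normal(64 * 100),  # condition B, similar to A
    rng.standard_normal(64 * 100),               # condition C, unrelated
])
rdm = neural_rdm(responses)
# A and B end up much closer to each other than either is to C.
```

In the literature this kind of matrix is often called a representational dissimilarity matrix; in practice, more robust distance measures or cross-validated decoders can be substituted for the plain correlation distance used here.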
My own research is centered on this idea: exploring how the human brain processes different color stimuli.

Looking back on these two years of research, perhaps the most profound realization I have had is this: humanity still understands itself far less than we imagine.
Today, we are capable of extraordinary feats—exploring space, traversing the depths of the ocean, and even developing transformative technologies such as large language models. But when it comes to ourselves, we still have only a limited understanding of how the brain processes information. Our grasp of the intelligence embedded within the brain remains superficial, like trying to glimpse a leopard through a narrow tube.
Many people may never have considered how the visual cortex processes color information. Yet even such seemingly ordinary aspects of perception remain far from fully understood. At one point, I regretted stepping away from traditional computer science; now, I can say with confidence that I am truly glad I chose this path. With the aid of machine learning, we are beginning to probe the deep neural representations hidden within high-dimensional signals. I am grateful to have had the opportunity to contribute, even in a small way, to our understanding of human visual mechanisms.
Of course, my research is not without its limitations. In particular, signals that should theoretically emerge from the temporal lobe have proven difficult to extract, for reasons that remain unclear. My personal hypothesis is that this may reflect inherent limitations of EEG itself.
But still, I believe this is only the beginning. As more researchers join this field, I am confident that we will one day decipher the code of the human brain.
Weihang Jiang