Lossy Compression and Coordinated Memory Replay in the Visual Cortex and Hippocampus during Sleep

Alexander Sexton

The Ji and Wilson article bridges a gap between experimental science and theory. Coordinated memory replay had never been convincingly demonstrated in the lab until Ji and Wilson performed these experiments on rats. The hippocampus and the visual cortex were known to be active during certain stages of sleep, such as slow-wave sleep (SWS) and rapid eye movement (REM) sleep, but the relationship between these two sub-systems was unknown. The experiments and data outlined in the paper show that the two brain areas are related and, in many cases, correspond with each other on a one-to-one basis.

The hippocampus is crucial for episodic memory. According to the theory of systems memory consolidation, episodic memories must be "compressed," or otherwise converted into long-term memory, which is stored in the cortex. The paper describes an experiment in which rats were placed in a figure-eight maze. Activity at key recording sites in each rat's hippocampus and cortex was recorded over the time the rat spent in the maze. The rats were then observed during natural sleep following the maze run, and the same sites were monitored for activity. Most strikingly, during SWS the rats showed activity at these sites. Theories have proposed that this activity is the brain "replaying" previous memories in order to store them in long-term memory. The data support this theory: the firing patterns during SWS were readily mapped back to the same firing sequences from the RUN portion of the experiment. The cortex appeared to begin firing just before the hippocampus, which was hypothesized to mean that the cortex was responsible for activating the process, but the data supporting this were inconclusive.
There is no evidence that the hippocampus did not initiate a request through some other channel, followed by a reaction from the cortex; such a signal could easily be missed, given the small fraction of firings captured compared with the vast number that actually occur. Looked at computationally, the process maps readily onto processes that occur regularly in the computational world. Episodic memory from the RUN stage is much more specific than the "compressed" long-term memory that is eventually stored in the cortex. According to the Ji and Wilson paper, exact synapse firings are remembered in nearly precise order until memory consolidation occurs during SWS. This is a remarkable discovery, because it means that an encoding of an exact event, as the brain perceives it, is stored in the hippocampus. Until now, we had not seen evidence that these firings were remembered in such detail (sequence and timing); much of the process might as well have been described as magic. It also explains the need for separate long-term and short-term memory, since the brain almost certainly lacks the capacity to remember the exact synapse firings of every event you ever experience. There seems to be an inherent need to compress these encodings in order to store more information.

Compression is well understood in the field of computation. Lack of space has always been a determining factor in storing data, and a few modern technologies appear to share the brain's goals when it comes to compression. Lossy MP3 compression, for example, takes an "experience" with an exact encoding (WAV) and turns it into a much smaller encoding that very closely represents the original experience (MP3).
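The lossy-encoding idea above can be sketched in a few lines. This is purely illustrative, not the real MP3 algorithm: it compresses a sampled "experience" by keeping only its strongest frequency components and discarding the rest, then reconstructs an approximation from what was kept.

```python
# Illustrative sketch (not real MP3): lossy compression of a sampled
# signal by keeping only its largest Fourier coefficients.
import numpy as np

def lossy_compress(signal, keep_fraction=0.1):
    """Keep only the strongest frequency components; zero out the rest."""
    coeffs = np.fft.rfft(signal)
    n_keep = max(1, int(len(coeffs) * keep_fraction))
    # Indices of the strongest components (the "important" details).
    strongest = np.argsort(np.abs(coeffs))[-n_keep:]
    compressed = np.zeros_like(coeffs)
    compressed[strongest] = coeffs[strongest]
    return compressed, strongest

def reconstruct(compressed, length):
    """Approximate the original from the retained components."""
    return np.fft.irfft(compressed, n=length)

# A toy "experience": two tones plus faint noise.
t = np.linspace(0, 1, 1000, endpoint=False)
original = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
original += 0.01 * np.random.default_rng(0).standard_normal(t.size)

compressed, kept = lossy_compress(original, keep_fraction=0.02)
approx = reconstruct(compressed, len(original))

# The gist survives even though ~98% of the encoding was discarded.
error = np.sqrt(np.mean((original - approx) ** 2))
print(f"kept {len(kept)} of {len(compressed)} components, RMS error {error:.3f}")
```

The reconstruction is not exact, but because the discarded components carried little energy, the approximation stays very close to the original; this is the trade at the heart of the essay's analogy.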
This kind of compression loses exactness, but the reduction in the size of the encoding far outweighs the loss. A typical WAV file is a "lossless" encoding of audio; it can be reduced to roughly 1/30th of its size as an MP3 that still sounds good to the average listener. The point is that not all of the data is necessary for the encoding to give an acceptable representation of the experience. More specifically, certain events require more attention to detail to remember, while others do not. The same is true of MP3 compression: a segment of audio containing few frequencies can be encoded at a lower bit-rate than one containing many. This type of encoding is called variable bit-rate (VBR) MP3. During compression, the frequencies are analyzed and a bit-rate is chosen that best represents the data with minimal loss in quality. This is done by removing components that humans cannot hear and approximating waveforms at a given sample rate. The waveforms shown in the figures of the Ji and Wilson paper could, in principle, be compressed with the same technique. One way this might happen in the brain is through an approximation of the firing sequence, a filling in of the blanks: the brain could cut out irrelevant details that happened to make synapses fire while keeping the ones that were key to the situation. For instance, you probably do not remember what the room smelled like during your last lecture, but you almost certainly remember what the room smelled like the last time someone baked brownies. In the first case, the smell of the room is analogous to a frequency the human ear cannot hear, which is removed in an MP3 encoding. It would be interesting to see how these same experiences are recalled after they are stored in long-term memory.
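The VBR idea, analyze a segment's frequency content and then choose a bit-rate to match, can be sketched as follows. This is a hypothetical toy model, not an actual MP3 encoder: spectral entropy stands in for the encoder's psychoacoustic analysis, and the kbps range is only loosely MP3-like.

```python
# Hypothetical sketch of variable bit-rate selection: segments with
# richer frequency content get a higher "bit-rate".
import numpy as np

def spectral_complexity(segment):
    """Rough complexity measure: how spread out the spectrum's energy is."""
    power = np.abs(np.fft.rfft(segment)) ** 2
    power = power / power.sum()
    # Spectral entropy: low for a pure tone, high for rich or noisy audio.
    return -np.sum(power * np.log2(power + 1e-12))

def choose_bitrate(segment, low=32, high=320):
    """Map complexity onto a bit-rate range (kbps, loosely MP3-like)."""
    # Normalize entropy by its maximum for this segment length.
    max_entropy = np.log2(len(segment) // 2 + 1)
    score = spectral_complexity(segment) / max_entropy
    return int(low + score * (high - low))

t = np.linspace(0, 1, 2048, endpoint=False)
pure_tone = np.sin(2 * np.pi * 440 * t)                        # few frequencies
rich_audio = np.random.default_rng(1).standard_normal(t.size)  # many

print(choose_bitrate(pure_tone), choose_bitrate(rich_audio))
```

The pure tone lands near the bottom of the range and the noisy segment near the top, which mirrors the essay's point: uneventful stretches of an experience may warrant far less storage than eventful ones.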
If the encoding of these memories from the hippocampus into the cortex works like MP3 compression, or any lossy VBR encoding for that matter, the replay of the firings might occur at a slower rate and could also show a more rounded, representative waveform. This would be due to the chosen "bit-rate" of the memory, as well as the interpolation between the stored firings. This new lossy-format memory might recover the gist of the experience, but it may be less vivid than the real experience because of the approximations. Another interesting consideration from computational compression is multiple-pass encoding. The hippocampus and cortex are active during several stages of sleep, including REM; these stages could be encoding different things, or perhaps making different passes over the same experiences to achieve maximum compression.

While there is no proof that any of the relationships drawn here about compression are accurate, they serve as a good basis for designing new tests of the interaction between the hippocampus and the cortex. The fact that memories are replayed in such high detail suggests that the brain may need some method of compression in order to store more information, and the community would be well served by taking cues from an area of science that has already studied compression extensively. The data are in hand; theories now need to be drawn up to stimulate growth in the field. This example of lossy compression in the brain is one of many ways to map what occurs in the brain onto something the field of computation has already studied heavily. It seems only logical to develop experiments that test for mechanisms we have already designed for ourselves to do the same job.
It seems natural that we would design things computationally in ways similar to those evolution arrived at in the brain.