There is good evidence that static structures in the brain contain information about valued cognitive functions such as long-term memories and personality traits. However, we still do not have complete models of how the information for these cognitive functions is stored. Despite this, we can use existing knowledge in neuroscience to evaluate the hypothetical process by which this structural information could be mapped in the future. In this section, I describe a framework for predicting the structural basis of long-term memories, known as engrams. Until an actual revival has been performed, this prediction process will always contain a degree of uncertainty; indeed, it is one of the major sources of uncertainty in the brain preservation project.
A classic distinction is often made in biology between structure and function. Functional properties of a biological system are dynamic (i.e. time-varying) and produce actions or outputs that are generally relevant for organism survival. Structural features of a biological system are static (i.e. time-invariant) and contain the information to produce functions. There are some problems with this dichotomy, but overall I think it has been a useful framework.
Ideally, a brain preservation procedure would preserve functional properties of the brain and the rest of the body. If this were true, then we would not need to speculate about which structural properties are necessary to preserve. Alas, there are no long-term preservation procedures available today that can preserve large mammals such as humans and allow for the reactivation of global cognitive functions such as long-term memory recall upon reanimation. Most likely, this will continue to be the case for all preservation procedures until actual suspended animation is possible.
As a result, we must predict what structural information future medicine will require to revive a person on the basis of our relatively limited current knowledge. As the saying goes: it is difficult to make predictions, especially about the future. This prediction is a major source of uncertainty about whether brain preservation with maintenance of memories will be possible.
Based on the previous essays, we have established that static structures in the brain almost certainly contain the information for important cognitive functions like memories. But which static structures? Sadly, we don’t know. If we did, it would greatly simplify the discussion of brain preservation. We could test different methods to conserve these structures and see whether they worked or not.
Because we don’t know what these static structures are, we are forced to play a guessing game. This introduces considerable uncertainty into the whole project and generally makes people uncomfortable. For example, as Ken Miller has pointed out: “What are odds that that uncertain preservation just happens to capture all of the unknown info needed?” But in my opinion, it doesn’t mean that brain preservation is not worth trying.
Instead, a meliorist approach is to take a hard look at the neuroscience literature and make an honest best guess about which structures contain the information for long-term cognitive functions like memories, and to commit to staying on top of new neuroscience literature about these structures, so as to continually update our beliefs and, if needed, our brain preservation procedures.
In this essay, I don’t give my hypothesis about which static structures store encoded memories – that will come in a later section. Instead, I lay out the framework for how I think about the problem.
Scale separation is a property of some physical systems in which a portion of the events on lower spatial or time scales need not be captured in order to produce the functionality at a higher spatial or time scale (Sandberg, 2013). Events on the lower spatial or time scales can therefore be abstracted away. For example, for pure gases in certain environmental conditions, it is possible to predict macroscale behavior through statistical mechanics analysis that abstracts away the exact molecular interactions. It is also possible to abstract away the underlying electrical currents when emulating the bit-based logic operations in a transistor-based computer.
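To make this concrete, here is a minimal Python sketch, with illustrative numbers, of how a macroscale property of an ideal gas can be computed from a single statistic of its molecules (their temperature), abstracting away every individual trajectory:

```python
import numpy as np

# Toy illustration of scale separation: the pressure of an ideal gas depends
# only on a statistic of the molecular velocities (the temperature), not on
# any individual molecule's trajectory. All quantities are illustrative.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K
m = 6.6e-27          # approximate mass of a helium atom, kg
N = 1_000_000        # number of simulated molecules
V = 1e-3             # container volume, m^3

rng = np.random.default_rng(0)
# Sample x-velocities from the Maxwell-Boltzmann distribution.
v_x = rng.normal(0.0, np.sqrt(k_B * T / m), size=N)

# Kinetic-theory estimate that uses the molecular statistics directly.
p_micro = (N / V) * m * np.mean(v_x**2)
# Macroscale law that abstracts the molecules away entirely: P = NkT/V.
p_macro = N * k_B * T / V

print(f"pressure from molecular statistics: {p_micro:.4e} Pa")
print(f"pressure from ideal gas law:        {p_macro:.4e} Pa")
```

The two numbers agree to within sampling noise, which is the sense in which the lower-scale detail can be abstracted away.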
Scale separation is helpful to think about in brain preservation because it helps us imagine how some structures in the brain might not be necessary for the information in a particular cognitive function. For instance, if scale separation occurs in long-term memory recall so that one can abstract away underlying structures such as the particular atoms in a molecule, then the particular atoms in the brain need not be conserved in order to retain the information content for long-term memory recall.
Scale separation has primarily been discussed when considering the whole brain emulation revival approach, which is the idea that the information in a brain could be represented and re-instantiated in an artificial emulation (Sandberg et al., 2008). However, the concept of scale separation is more general than any one revival strategy. A further difference is that in brain preservation, as opposed to whole brain emulation, we are less focused on how the information is instantiated and more on how it can be maintained by a preservation procedure.
I don’t think that scale separation is the whole story in understanding what structural information we need to focus on in brain preservation. But it is a way to draw upon useful analogies from other fields.
Another way to think about scale separation is model simplification. (Wybo et al., 2021) describe model simplification as finding the “lowest level of complexity at which computational features are preserved. Conceptually, the simplification thus extracts the essential elements required for the computation from the underlying biophysics.”
It’s also important to point out that scale separation is related to the distinction between information necessary for an individual-level model of a brain and a species-generic model of a brain. The idea here is that the amount of information that will be necessary to build accurate species-generic models of the brain is likely to be immense and much more detailed than would be required to model a specific person with their important cognitive functions intact (once species-generic models have already been built). What we want in brain preservation is to retain the information necessary to revive a unique individual person, not the information required to understand how the human brain works in general. This is obviously an important task and one that will be necessary prior to the possibility of revival, but it is a separate one.
A distinction that follows from this principle is that we are interested in the structural features that make an individual unique, rather than the structural features that define a species. For example, imagine, hypothetically, that all humans have generally the same type of ribosomal structure and resulting function. In that case, while it may be necessary to build a model of a species-generic ribosome, it is unlikely to be essential to preserve each individual ribosome’s structure in its exact in vivo shape. This could be inferred on the basis of the species-generic model of a ribosome.
It is common for people to point out that a given level of structural information, such as some level of description of the structural connectome, is not a sufficient amount of information to understand the functional organization of a nervous system (Brembs, 2020). Of course this is true. Yet most of the information needed to build or repair a brain is likely species-generic. Understanding this species-generic information will require extensive molecular biology, cellular biology, and neuroscience research to be able to predict the function of cells on the basis of structural information. However, if the brain is a physical system and a brain is sufficient to produce a mind, then an understanding must be at least theoretically possible. Moreover, this process need not be repeated from scratch for each revived person. It must only be done initially to understand how human brains in general work and the ways in which the structural basis of cognitive and neural processes can vary between people.
In other words, our primary goal is not to understand the mechanisms of the brain on the basis of these static structures; it is to preserve the structural information with the reasonable expectation that later, when we understand the rules of the brain based on countless other experiments, we will be able to use this knowledge to revive people based on their preserved brain tissue.
The cognitive function that we will focus on here is rapid long-term memory recall. The focus is on this cognitive function because: (a) long-term memories tend to be highly valued, (b) it is a well-studied topic in neuroscience, (c) it serves as a model for other cognitive information that is capable of being stored for a long period of time and rapidly accessed.
The term “engram” refers to the physical representation of a memory in the brain (Ryan et al., 2021; Josselyn et al., 2017). It is the set of enduring structural information that is necessary for long-term memory storage and is activated in order to recall a memory.
Engrams were first described in the early 1900s by an independent researcher named Richard Semon. Semon introduced the term engram to refer to the lasting physical changes in the brain’s structure that come about as a result of an experience. In Semon’s conception an engram is dormant after formation, but can be awakened by retrieval cues related to the experience through the process of memory recall (Josselyn et al., 2017).
Semon declined to speculate upon the neural substrates of engrams, instead stating “to follow this into the molecular field seems to me… a hopeless undertaking at the present stage of our knowledge; and for my part I renounce the task” (Josselyn et al., 2017). A century later, armed with much more information about neuroscience, we are much better equipped to follow engrams into the field of molecules in the brain.
Memory encoding, also known as consolidation, is the process of converting information about a memory into a stable long-term format as an engram. After the process of encoding, we can say that the engram has been “encoded.”
There are many different types of memory. One classic division in neuroscience is between short-term and long-term memories. Long-term memories are those that have already been consolidated into a stable form. The precise dividing line between short-term and long-term memory is unclear. But at the extremes, we can define long-term memory as the type of memory that can persist on the time scale of at least years (Cowan, 2008).
Short-term memories are likely to be lost by many brain preservation procedures as they are not yet encoded in long-term structural forms. This means that if anyone is ever revived from a brain preservation procedure, they almost certainly wouldn’t remember the procedure or the period immediately before it. As Andrew Hires has pointed out, this is probably a feature, not a bug. Instead, what we are most focused on is long-term memories.
Long-term memory recall, also known as retrieval, is the dynamic process of accessing those memories. Sometimes, this process can cause the phenomenological experience of the long-term memory being re-experienced. The process of recalling a memory is equivalent to reawakening an engram, and is called “ecphory.” Ecphory can be artificially induced by electrically stimulating the cells in the brain that make up an engram.
As opposed to the slower process of memory formation, memory recall can occur quite quickly, as rapidly as 500 ms (Staresina et al., 2019). Because memory recall can also occasionally take longer than this, I specify that we are particularly interested in rapid forms of memory recall. When I use the term “memory recall,” I am referring to rapid long-term memory recall.
Engram storage refers to processes which promote the long-term maintenance of the encoded engram. In the absence of retrieval cues, a stored engram can remain dormant for years.
It’s also helpful to recognize that there are different types of long-term memory in the brain. One influential paradigm for classifying memory systems is by (Squire, 2004), an adaptation of which is shown here:
As discussed in a previous essay, in these essays I am focusing on information stored in the brain, which eliminates certain implicit pain and reflex memories that seem to be stored in the spinal cord. Beyond this, there is an obvious temptation to focus on declarative memories, because, for most people, these are probably the most valued type of memories. But this is problematic, because even the theoretical prospect of decoding a declarative memory would require other memory systems as well, such as perceptual memory, emotional memory, and procedural memory.
For example, some have speculated that one could “download” an isolated memory based on patterns elicited from electrophysiological recordings from certain brain regions. But this seems likely impossible, in part because you can’t take an isolated electrophysiological pattern from one brain and have it make sense in another brain. The systems are much too intertwined.
There may be differences in the way that declarative memories are stored in the brain compared to other types of memories. For example, in their epic review of what they call the “synaptic plasticity and memory” hypothesis, (Martin et al., 2000) describe the case that synaptic connectivity patterns mediate memory information in the brain. They primarily focus on evidence from studies of the types of memory mediated by the hippocampus, amygdala, and neocortex, which includes declarative memories. Their discussion therefore leaves open the possibility that other types of memory primarily stored in other brain regions could be encoded in different ways.
My main focus is on declarative memories, since this seems to be the most valuable memory system to most people. But in order to focus on this, we need to consider a broader class of memory systems that includes declarative memories as a particular type. This broader class would theoretically be required to decode declarative memory information. As a result, it is prudent to cast a wide net when trying to determine what structures are important for engrams.
Some of the text so far in this section might make it seem like memories can be isolated from other parts of one’s cognitive functions. This is misleading and I want to be clear that I am not making this claim. Isolated from other cognitive functions, the concept of a memory is highly problematic. Memories are deeply intertwined with other aspects of our cognitive functions. It’s unclear whether we will ever be able to extract memories without interfacing with other cognitive functions – it depends on how memories are represented in the brain and, currently, it seems unlikely.
The advantage of focusing on long-term memory recall and the associated engrams is that, at least theoretically, it’s more experimentally testable than other types of cognitive functions. One can ask questions such as what is the probability that the memory for a particular password is preserved by a given preservation procedure. I will define sufficiency for engram preservation as a level of detail that would allow for the high-fidelity read out of a precise memory such as a memorized password, a learned olfactory preference, or trained behavior to complete a maze.
In practice, we are nowhere close to the level of technology that would be required to actually read-out a static engram without a dynamic behavioral experiment. We can imagine different possible engram read-out methods from unlabeled tissue such as one would be able to access in humans, but none of them are close to being possible today.
First, one could imagine profiling preserved brain tissue to compare between two groups of animals, one of which had learned a particular type of memory and one of which had not. There could be a between-group comparison to see whether a particular type of engram could be identified on the basis of a microscopic imaging technique. However, such a comparison would severely lack statistical power due to a multiple comparisons problem. All of the other engrams, and other neural connectivity patterns that share characteristics, would not be able to be distinguished a priori. Moreover, engrams for the same memory would likely not be stored in the same way in different brains. Perhaps this sort of analysis could be done for a particular type of unique memory that is highly identifiable for some reason, but it remains to be seen whether such memories exist in humans. (While this discussion refers to unlabeled tissue, with engram labeling techniques that are possible in animal models, it may be possible to identify the information for an engram in static brain tissue today. For example, it has been shown that functional measurements of synaptic strength in postmortem brain slices can distinguish between groups of animals trained in different auditory tasks (Xiong et al., 2015); there should also be a corresponding structural measure.)
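As a rough, hedged sketch of this statistical power problem, consider a normal-approximation power calculation for a two-sample comparison under Bonferroni correction; the effect size, group size, and candidate counts below are assumptions chosen purely for illustration:

```python
import numpy as np
from scipy.stats import norm

# Illustrative only: power to detect one "true" engram difference collapses
# as the number of candidate structures tested grows.
effect_size = 0.8   # assumed standardized between-group difference (Cohen's d)
n_per_group = 20    # assumed animals per group
alpha = 0.05

for n_candidates in [1, 100, 10_000, 1_000_000]:
    alpha_corrected = alpha / n_candidates        # Bonferroni correction
    z_crit = norm.ppf(1 - alpha_corrected / 2)    # two-sided threshold
    # Normal approximation to two-sample power.
    power = 1 - norm.cdf(z_crit - effect_size * np.sqrt(n_per_group / 2))
    print(f"{n_candidates:>9} candidates tested -> power ~ {power:.3f}")
```

With these assumed numbers, power falls from roughly 0.7 with a single pre-specified candidate to well under 1% with a million candidates.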
Second, it is plausible that there is a type of neural “code” that we will uncover in the future that will make structural identification of engrams in preserved tissue much easier. For example, there may be a type of epigenetic marker that can be found in all cells that are involved in a particular engram, or in engrams that were formed at a particular time in a person’s life. One could draw an analogy to biological tissue preserved in the early 1900s. Scientists at that time had reason to suspect that heritable information was likely present in this material; however, the DNA-based genetic code as the vehicle of heritable information had not yet been discovered, let alone the technologies to cheaply sequence and decode it at scale. On the other hand, there are good reasons to believe that no similar type of neural code will be identified. For example, there does not appear to be such a clearly identifiable code in artificial neural networks.
Third, one could also imagine “reanimating” the engrams by emulating the brain on the basis of the static fixed tissue (Morgan et al., 2017). Because dissociating long-term memory retrieval from executive functioning, personality, affective processing, physiologic context, environmental context, and other factors may be impossible, it seems likely that this would require whole brain emulation or something similar to it (Sandberg et al., 2008). For example, one could imagine making a computational model of a nervous system that could reproduce realistic electrophysiology output and associated cognitive functions or behavior. This would likely first be possible in a smaller nervous system, for example for an olfactory memory behavior in C. elegans. However, emulating the nervous system of C. elegans would require dramatically more advanced neural imaging technologies, models of C. elegans neurobiology, neural simulation technology, and world modeling technology, among other required technologies. If whole brain emulation is ever possible in C. elegans, let alone in humans, it would only be possible far in the future.
In summary, engram read-out technology from unlabeled preserved brain tissue is highly unlikely to be developed anytime soon, if ever. It either requires sophisticated labeling of the engram during formation in a non-human animal study or something similar to whole brain emulation technology. This is one of the reasons that there is considerable uncertainty in this step of the brain preservation project.
There are three reasons that an animal might not be able to recall a memory: (a) it may not have been learned, (b) it may have been learned but forgotten, or (c) it may have been learned and still be present in the brain but not successfully retrieved (Frankland et al., 2019). The information for a memory that has been learned and is present in the brain can be called an “available” engram, whereas a memory that can be retrieved in a known way can be called “accessible.”
Even when an engram can be shown to be available in the brain, such as via electrical stimulation or the presentation of specific recall cues, it can be more or less accessible by known means. Engrams can be accessible via natural cues such as certain environmental stimuli, only accessible via highly specific artificial stimuli, or not accessible at all. Engrams that are available but not accessible by any known means are called “silent.”
It is unclear how to distinguish between a silent engram and the loss of the memory information altogether (Frankland et al., 2019). Given our finite resources to present recall cues or stimulate certain neural ensembles, we can’t test them all. So, given our current state of knowledge, memories can never truly be shown to be unavailable as opposed to merely inaccessible.
While it is hard to identify them, there are surely circumstances in which memories are truly lost. A traumatic injury or stroke that damages a large part of the brain will almost certainly lead to the loss of available (not just inaccessible) memories. And even in the absence of cell damage, forgetting curves (Murre et al., 2015) for most memories tend to approach zero regardless of the number of recall cues or behavioral tests assessed. At this point, the corresponding engrams may be truly unavailable.
As an even more speculative aspect of engram preservation, we can imagine that engrams might be partially preserved by a given brain preservation procedure. For example, data have shown that larger engrams are associated with more precise memories (Leake et al., 2021). By analogy, if only part of an engram were preserved, the associated memory might be recalled with less precision rather than lost outright.
One could imagine a future field of memory archaeology that would stimulate unconscious neural circuits or measure morphomolecular maps in brains in order to identify many or even all of the available engrams. This field would likely be better able to answer our current questions about the difference between engram silencing and forgetting; for now, the best we can do is guess.
There are numerous reviews about the mechanisms of engrams and memory storage in the brain. It is a very active area of research and one of the most fundamental questions in neuroscience. What engrams are made of remains an actively debated question.
Here is a small sample of articles discussing this topic, each presenting somewhat different theories: (Crick, 1984; Martin et al., 2000; Tonegawa et al., 2015; Chaudhuri et al., 2016; Poo et al., 2016; Si et al., 2016; Lisman, 2017; Ryan et al., 2021; Goult, 2021).
How can I say for sure that the neuroscience community doesn’t understand the neural substrates of long-term memories yet? I can’t. As Tal Yarkoni points out, such a statement is really about my personal knowledge rather than a statement about what is collectively known by the field at large.
But my impression is that if we knew better how memory worked, we would be able to build much more detailed simulations of memory behavior that make more specific predictions. For example, if we understood the brain better in general, we would be able to build an artificial animal brain that could remember things in the same way a biological animal brain does. (Even if we could do this, maybe we shouldn’t until we understand better what type of experience that emulated mind might have. Personally, I feel we need to be very cautious about the ethics of creating digital minds, insect and other non-human minds very much included, because they could undergo significant suffering.)
In these essays, I do not attempt to distinguish between different plausible hypotheses for the structural components of engrams. Instead, I address the question of whether any currently plausible theory of engrams is consistent with the ability to conserve them in postmortem brains.
An engram is a theoretical entity; however, that doesn’t mean that it is not real. Positing a theoretical entity without knowing the details yet has a rich tradition in biology, such as the gene, the receptor, and the enzyme–substrate complex (Gunawardena, 2013). We can reason about high-level properties of engrams that will guide us in how to conserve them without understanding their detailed, more elusive mechanisms.
Brain mapping is a growing and active field. It involves the study of the anatomy of the brain at multiple levels, including the microscopic level. It is worth pointing out that brain mapping includes biomolecular studies as well. Molecular information can be conceptualized as made up of static structures.
The reason that I titled this essay with the word cartography is that we are not discussing the maps themselves but rather the process for how future brain maps might be created.
Maps can be topographic or topological. A topographic map contains detailed information about the scale and features of the items in the map. A topological map omits details and scale information to focus on important relational information. So more specifically, we are discussing the topological cartography of engrams – which details would be necessary to include for a hypothetical future brain map of engrams and which could be omitted.
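As a toy illustration of this distinction, here is how the same three-neuron circuit might be represented both ways; the neuron names, coordinates, and weights are invented for illustration:

```python
# Topographic map: detailed spatial information about each element.
topographic = {
    "A": {"position_um": (12.0, 40.5, 3.2), "soma_diameter_um": 15.0},
    "B": {"position_um": (55.1, 38.9, 7.7), "soma_diameter_um": 12.5},
    "C": {"position_um": (90.3, 45.0, 1.1), "soma_diameter_um": 14.2},
}

# Topological map: only the relational (wiring) information is retained.
topological = {
    ("A", "B"): 0.8,   # strong synapse from A to B
    ("B", "C"): 0.3,   # weaker synapse from B to C
    ("C", "A"): 0.5,   # recurrent connection closing the loop
}
```

On the view sketched in the following quote, it is the second kind of structure, possibly annotated with weights, that carries the engram-relevant information, while much of the topographic detail could be omitted or inferred.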
What I have presented is certainly not a heterodox view of engrams. For example, (Ryan et al., 2021) describe engrams as consisting of topological information:
> Considering the distributed, sparse, and stable nature of memory; it may be misleading to think of engrams as lying within cells or synapses or molecules. It seems more realistic that these components contribute to making up an engram, which functionally lies in the topology of the brain’s wiring diagram. An engram for a given memory may be considered as a difference in the brain’s connectome such that information is acquired and recall can occur.
This might seem to imply that only a brain map with a certain type of information could describe the information in an engram. In reality, there are likely numerous such topological brain maps, because of the highly correlated nature of structures in the brain. For example, one can imagine a map of all of the RNA molecules in the brain and their compositions; one might speculate that this would be sufficient for describing the information content of engrams. On the other hand, one might imagine a map of “only” all of the proteins; this, too, could potentially contain a sufficient amount of information to describe engrams. This relies on the idea that the information contained in other biomolecules can be inferred, which will be discussed more in a later essay.
We will not be creating the maps ourselves in the near term. If it were possible to do so, one might imagine that an alternative to preserving the brain would be to map the information in the brain and conserve the information files. This would make the process of storage much, much easier. Unfortunately, we do not have the technology to perform single molecule-level mapping throughout large regions of the brain, and this will not be possible for many years. Some might question whether single molecule-level mapping will ever be possible across the whole brain, although it does not seem to violate any known laws of physics, so this is mostly a question of what one thinks about the future of technological civilization.
Any given proposal to perform engram mapping will always have unknown unknowns until it is actually performed. And it very much has not been performed yet. As a result, engram cartography is currently an exercise involving uncertainty. This is one of many reasons that brain preservation has uncertain prospects. How much uncertainty one places on this step of the proposal for brain preservation depends in part on how one feels about the current state of the science. The Aspirational Neuroscience Prize is designed to honor and highlight neuroscience research that tells us how memory is encoded in the brain based on static structural information.
With the previous section aside, here are the four constraints that I will use when attempting to predict what level of structural detail might be necessary to produce a given cognitive process. These constraints are not meant to be fundamental categories that carve nature at its joints. Instead, they are meant to be helpful heuristics for discussing the level of detail at which model simplification can be performed for a given cognitive process (Wybo et al., 2021).
The first criterion is the robustness of the cognitive process to perturbations that destroy structural features. This means that if a natural or experimental event destroys a structural feature but not the information necessary for a cognitive process, then that structural feature alone cannot be necessary for the information in that cognitive process. Perturbational robustness is the strongest criterion, although the assumptions in applying it in any given case always need to be carefully examined.
Independent perturbations cannot be stacked. For any given structural feature, we can only choose one perturbation. This is because independent perturbations might destroy different types of correlated structural information, either of which is sufficient to maintain the information content about the cognitive process. But if both of the independent perturbations were to occur at once, then the structural information for the cognitive function would be lost.
The second criterion is longevity: whether the structural feature lasts for as long as the cognitive process is known to last. The idea behind this criterion is that if a structural feature does not last over a period of time for which the cognitive process does, then that structural feature cannot solely encode that cognitive process. This criterion has been described previously in the field of memory research. For example, (Ryan et al., 2021): “[a]ny putative mechanism of memory storage needs to satisfy the criterion of longevity.”
A key exception to the longevity constraint is the existence of structural cycles, which can maintain structural states over time even if the individual components are replaced. Structural cycles are common for biomolecular features. Cellular states can be maintained by self-regulating biomolecular cycles, for example in the biorhythms that drive circadian clocks, cell division cycles, and metabolic cycles (Mellor, 2016). As another example, the self-reinforcing autophosphorylation of the regulatory protein CaMKII can also act as a molecular switch (Hayer et al., 2005). However, it is not clear that these biomolecular cycles can last for years, as engrams can.
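To illustrate how a structural cycle can outlive its components, here is a toy simulation, loosely inspired by the CaMKII switch, of a self-activating species whose active state persists despite continual first-order turnover; the rate constants are invented:

```python
# Toy bistable switch: Hill-type autocatalytic activation minus first-order
# decay/turnover. States above the unstable threshold converge to a stable
# "on" level; states below it decay to "off".
def step(x, dt=0.01, k_auto=4.0, K=1.0, k_decay=1.0):
    dx = k_auto * x**2 / (K**2 + x**2) - k_decay * x
    return x + dt * dx

for x0, label in [(0.1, "below threshold"), (1.0, "above threshold")]:
    x = x0
    for _ in range(10_000):   # integrate for 100 time units (Euler method)
        x = step(x)
    print(f"start {x0} ({label}) -> steady state {x:.3f}")
```

The state variable is constantly being degraded and resynthesized, yet the "on" state is maintained; whether real biomolecular cycles like this can persist for years is, as noted above, unclear.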
Another example of a structural cycle is network-level stability of neural networks. Individual structural features that are part of a neural network, such as the strength of the functional connection at a synapse, might drift over time. However, these synaptic strength drifts could be associated with changes in other synaptic strengths that allow the maintenance of the same functional properties of the network. When applied to long-term memory storage, this phenomenon has been called the “restless engram” (Dudai, 2012). As a result, while the synaptic strengths at any given time are only one of many ways of encoding the same network-level properties, it still might be sufficient to capture the full set of synaptic strengths at any given time in order to accurately infer the network-level properties.
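Here is a minimal sketch of the restless engram idea: two synaptic weights drift in a correlated, compensating way while the network-level readout stays fixed, so a snapshot of all the weights at any single moment still determines the functional property:

```python
import numpy as np

# Correlated drift: when one synapse strengthens, the other weakens by the
# same amount, so the readout (the weighted sum) is invariant.
rng = np.random.default_rng(1)
w = np.array([0.6, 0.4])     # two synapses onto one readout neuron
x = np.array([1.0, 1.0])     # a fixed input pattern

for t in range(5):
    drift = rng.normal(0.0, 0.05)
    w = w + np.array([drift, -drift])
    print(f"t={t}: weights={np.round(w, 3)}, readout={w @ x:.3f}")
```

The individual weights change at every step, but any single snapshot of them suffices to reconstruct the invariant network-level property.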
Structural cycles are a less robust way of storing information, so they are not likely to be favored by evolution as the sole form of long-term storage of information that is important for an organism. However, because of the possibility of structural cycles, we cannot say that a short turnover time of a biomolecule or a short lifecycle of an abstract structural feature logically constrains their ability to encode cognitive processes that last longer time periods. When structural cycles are a possibility, the longevity criterion is much weaker than the robustness criterion and should only be thought of as a clue.
The third constraint is the characteristic spatiotemporal scale at which structural features operate, relative to that of the cognitive process. Every cognitive process can be said to have a characteristic timescale over which it occurs (Papo, 2013). For example, perceptual processes such as visual recognition are fast, requiring only hundreds of milliseconds (Dering et al., 2011). Long-term memory recall is also fast: the phenomenological experience of long-term memory recall can occur in less than one second, with a reported range of 500-1500 ms (Staresina et al., 2019).
In addition to characteristic timescales, cognitive functions also have characteristic spatial scales in the brain over which they operate. For example, long-term memory recall involves communication between multiple brain regions such as the hippocampus, entorhinal cortex, and parietal cortex, which are millimeters to centimeters apart (Staresina et al., 2019). Specifically, memory recall seems to require particular frequencies of electrochemical activity to be communicated across brain regions.
Combining these two properties, we come to the idea of each cognitive function having a characteristic spatiotemporal scale. The spatiotemporal criterion claims that if a structural feature is to play a role in producing a cognitive process, then it must play a role in some sort of neural process that can mediate neural information flow (Seeliger et al., 2021) as fast or faster than the cognitive process over a large enough spatial scale.
For example, microglia, the brain’s resident immune cells, undergo movements of their cellular processes in a way that appears to be important for various cognitive processes, including memory consolidation. However, the movement of microglial processes occurs on the timescale of seconds to minutes and on a highly local spatial scale. This is too slow for microglial process motility to be necessary for long-term memory recall, which can occur in less than one second. Because of the spatial aspect of this constraint, we can also say that the neural processes involved in long-term memory recall must not only be rapid themselves but also capable of transmitting information across large distances in the brain, on the scale of millimeters to centimeters.
While the spatiotemporal scale of functioning is theoretically a logical constraint, one of the tricky aspects of this form of reasoning is that each structural feature in the brain must be assigned to functions, and we know most structural features are involved in more than one biological process. For example, most biomolecules are pleiotropic, which means they can play a role in multiple biological processes, such as different signaling pathways, cell types, subcellular compartments, or protein complexes (Sivakumaran et al., 2011). As another example, even if the movement of a cell membrane takes a relatively long time to occur, its shape will still affect local electrochemical ion diffusion, which is involved in many different neural processes in the brain. As a result, associating structural features with neural processes is a probabilistic form of reasoning, based on our current knowledge of neuroscience, which makes the spatiotemporal constraint weaker.
Another exception to this criterion is that there might be variability in the characteristic spatiotemporal scales of the cognitive functions and neural processes. As an example, while long-term memory recall is usually fast, there are instances in which it takes longer than expected. Many of us have had the experience of having a piece of information “on the tip of our tongues.” We know that we know it, but it takes us a few moments longer than normal, perhaps up to a few minutes, to actually recall it. At least for that one engram, our long-term memory recall can be said to require a few minutes. From the perspective of the spatiotemporal constraint, the universe of possible neural processes that could be involved in the information content of that long-term memory recall ability would be much expanded. This is why, for our purposes, we will focus on rapid long-term memory recall, which we will define as that which takes only 500-1500 ms to occur.
To make the discussion of this criterion more concrete, here are some examples of timescales of neural processes:
| Process | Timescale | Reference |
|---|---|---|
| Rate of ion transport in voltage-gated Na+ channels | ~10-1000 nanoseconds | https://bionumbers.hms.harvard.edu/bionumber.aspx?&id=103163 |
| Movements of protein subdomains | Nanoseconds to microseconds | As cited in (Hartel et al., 2018) |
| Movements of larger protein domains | Microseconds to seconds | As cited in (Hartel et al., 2018) |
| Electrochemical signaling via cell membranes | Microseconds to milliseconds | (Herbst et al., 2017) |
| Presynaptic calcium-triggered neurotransmitter release | 200 microseconds to 50 milliseconds | https://web.williams.edu/imput/synapse/pages/IIA2.htm |
| Calcium wave signaling through the endoplasmic reticulum from the nucleus to synapse | Seconds | (Herbst et al., 2017) |
| Endothelial dilation | Seconds | (Miezin et al., 2000) |
| Transcription of a 1000 base pair RNA molecule | ~16 seconds | http://book.bionumbers.org/what-is-faster-transcription-or-translation |
| Translation of a 333 amino acid protein | ~67 seconds | http://book.bionumbers.org/what-is-faster-transcription-or-translation |
| Movement of signaling molecules from the nucleus to synapse | Minutes to hours | (Herbst et al., 2017) |
| Polyribosome accumulation in dendritic shafts and spines | < 5 minutes | (Ostroff et al., 2018) |
| Oligodendrocyte precursor cell differentiation | Hours | (Xiao et al., 2016) |
| Myelin remodeling | Hours | (Czopka et al., 2013) |
| Myelin internode extension and retraction | Days to weeks | (Hill et al., 2018) |
And here are some examples of timescales of cognitive functions:
| Process | Timescale | Reference |
|---|---|---|
| Top sprinter reaction time in response to sound of gun | 100-160 ms | https://bionumbers.hms.harvard.edu/bionumber.aspx?&id=111450 |
| Recognition and response to visual stimulus | 400-500 ms | (Tovée, 1994) |
| Ecphory in episodic memory retrieval | <500 ms | (Waldhauser et al., 2016) |
| Short-term memory consolidation | Minutes to hours | (Vertes, 2004) |
| Habit formation | Weeks to months | (Lally et al., 2010) |
| Long-term memory consolidation | Up to years | (Vertes, 2004) |
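As a crude worked example of how the spatiotemporal criterion uses these tables, one can filter the neural processes against the lower bound of the rapid recall window. The representative timescales below collapse the ranges in the first table into coarse upper bounds, so this filter is only a rough sketch:

```python
# Coarse upper-bound timescales (seconds), simplified from the table above.
process_timescale_s = {
    "ion transport in Na+ channels": 1e-6,
    "electrochemical signaling via membranes": 1e-3,
    "presynaptic neurotransmitter release": 0.05,
    "ER calcium wave, nucleus to synapse": 10.0,
    "transcription of a 1 kb RNA": 16.0,
    "translation of a 333 aa protein": 67.0,
    "polyribosome accumulation": 300.0,
    "myelin remodeling": 3600.0,
}

recall_window_s = 0.5   # lower end of rapid long-term memory recall

fast_enough = [p for p, t in process_timescale_s.items() if t <= recall_window_s]
too_slow = [p for p, t in process_timescale_s.items() if t > recall_window_s]
print("could mediate rapid recall:", fast_enough)
print("too slow to mediate rapid recall:", too_slow)
```

By this criterion, processes like transcription, translation, and myelin remodeling are ruled out as the mediating mechanisms of rapid recall itself, although they may of course still matter for encoding and maintenance.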
The fourth constraint is the uniqueness of the structural feature relative to other structural features that encode the same information about the cognitive process. If one structural feature is perfectly correlated with another, then the uniqueness criterion states that we only need to preserve and/or measure one of these two structural features in order to capture the information.
The uniqueness criterion will never be completely valid, because no two structural features will ever be perfectly correlated. For example, the relative concentration of two biomolecules in a cellular compartment may be close to perfectly correlated, but their location information at any given time will still be slightly uncorrelated as a result of diffusion about the compartment. However, as a result of the longevity criterion, the exact relative locations of the diffusing biomolecules at any given time are exceptionally unlikely to encode any long-term cognitive process. More generally, many of the precise structural features of engrams will change over time as the brain’s molecular components turn over, yet the information remains generally intact, suggesting that the uniqueness criterion need not be completely valid in order to be useful.
As one example of the uniqueness criterion, consider the structure of cell membranes, which can be made up of thousands of different biomolecules. If you preserve and label any one of these thousands of biomolecules with antibody staining, you can get a very good sense of the shape of the cell membrane. In this sense, the structural information about the cell membrane is not unique to any one of the biomolecules. It is in this abstract sense, that you only need to preserve a small subset of arbitrarily chosen biomolecules, that we can speak of the lack of uniqueness of an abstract structural feature.
Correlated biomolecular information is not a phenomenon distinctive to cell membranes. It is a general rule that biomolecule distributions in brain cells (and cells of other organs) are highly correlated. As one example, brain cells tend to have different relative levels of gene expression “programs” that can be usefully modeled at the network level rather than the level of individual RNA molecules (McKenzie et al., 2018). Higher-level structural features tend to be correlated as well. For example, the diameter of an axon and the amount of myelin enclosing it tend to be correlated, albeit not perfectly, along the length of a myelinated axon (Lee et al., 2019). Therefore, if one segment of a myelinated axon were distorted by a preservation procedure, such as via cracking, it would still be possible to infer the original structure to a reasonably high degree of precision.
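Here is a hedged sketch of that kind of inference, using a synthetic (entirely invented) linear relationship between myelin thickness and axon diameter:

```python
import numpy as np

# Synthetic data: myelin thickness is generated as a noisy linear function of
# axon diameter, standing in for the imperfect correlation reported in
# (Lee et al., 2019). Units and parameters are invented for illustration.
rng = np.random.default_rng(2)
axon_diameter = rng.uniform(0.5, 3.0, size=200)               # micrometers
myelin = 0.3 * axon_diameter + rng.normal(0.0, 0.02, 200)     # micrometers

# Fit the relationship, then infer a distorted axon's original diameter
# from its (undistorted) myelin measurement.
slope, intercept = np.polyfit(myelin, axon_diameter, 1)
observed_myelin = 0.6
inferred_diameter = slope * observed_myelin + intercept
print(f"inferred axon diameter: {inferred_diameter:.2f} um (true value ~2.0 um)")
```

The stronger the correlation between two features, the less information is lost when only one of them is preserved accurately.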
I have mostly focused on long-term memories in this section, but personality traits and other aspects of personal identity are also clearly essential to preserve as well.
Because of the spatiotemporal constraint, it is helpful to first focus on rapid long-term memory recall. But slower forms of memory recall might also be important for personal identity. While there is some additional uncertainty, these are very likely encoded via the same or very similar structural mechanisms. It would be extremely surprising if there were particular mechanisms of engram storage based on how long it takes for the associated memories to be recalled.
Let’s consider the tip of the tongue phenomenon. Most likely, memories that take a while to be recalled due to the tip of the tongue phenomenon are encoded in similar structural forms as long-term memories that are rapidly recalled. The difference causing the tip of the tongue phenomenon is likely that the retrieval cue presented was weaker, so it takes longer for the brain to spin its wheels to instantiate the proper attractor-like electrophysiological activity within the encoding neuronal ensemble. If the retrieval cue had been stronger, recall would likely have been faster.
As with long-term memories, long-term personality traits can be maintained over many years and can have rapid effects that occur within seconds. (Ryan et al., 2021) have defined ingrams as similar to engrams in their neural structure, but related to instinctual functions of the brain as created by evolution. An ingram might correspond to the tendency of an animal to have a flight behavior when exposed to a cue indicating that a predator is nearby. These innate circuits could differ from person to person in important ways. Long-lasting cognitive functions that can be accessed rapidly are subject to many of the same constraints as rapid long-term memory recall and are likely encoded via similar structural mechanisms.
If we consider symbolic models of cognition, we can reason a bit further about which cognitive processes are likely encoded by similar structural mechanisms as engrams. Production system models of cognitive architecture such as ACT-R formalize the spatiotemporal scale over which cognitive functions operate (Anderson et al., 2004). In ACT-R, a production rule is basically an IF/THEN statement: if a particular representation of the world is pattern recognized in a module of the brain, then it is updated in a particular way as defined by the production rule. This operates over a spatial loop from the cerebral cortex to sub-cortical regions and back. For some context, here is a quote from (Anderson et al., 2004):
> An important function of the production rules is to update the buffers in the ACT-R architecture. The organization of the brain into segregated, cortico-striatal-thalamic loops is consistent with this hypothesized functional specialization. Thus, the critical cycle in ACT-R is one in which the buffers hold representations determined by the external world and internal modules, patterns in these buffers are recognized, a production fires, and the buffers are then updated for another cycle. The assumption in ACT-R is that this cycle takes about 50 ms to complete – this estimate of 50 ms as the minimum cycle time for cognition has emerged in a number of cognitive architectures including Soar (Newell, 1990), CAPS (Just & Carpenter, 1992), and EPIC (Meyer & Kieras, 1997). Thus, a production rule in ACT-R corresponds to a specification of a cycle from the cortex, to the basal ganglia, and back again. The conditions of the production rule specify a pattern of activity in the buffers that the rule will match and the action specifies changes to be made to buffers.
ACT-R is meant to define the basic cognitive operations that allow the mind to work. It makes the empirically grounded assumption that the minimum time for a cognitive cycle in the brain – likely operating through cortical-striatal-thalamic loops – is 50 ms. If taken at face value, then by the spatiotemporal criterion, even complex cognitions that take minutes to unfold must be built from basic operations that rely on neurobiological processes occurring on the timescale of 50 ms or less and on the spatial scale of cortical-subcortical communication.
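For concreteness, here is a minimal, hypothetical production-system sketch in the spirit of the ACT-R cycle described above; the rule names, buffer fields, and toy task are invented for illustration:

```python
# Each cycle: match buffer contents against IF/THEN production rules, fire at
# most one rule, update the buffers, and advance the clock by ~50 ms.
CYCLE_MS = 50

buffers = {"goal": "add", "operand_a": 2, "operand_b": 3, "result": None}

def rule_add(b):
    # IF the goal is "add" and there is no result yet, THEN write the sum.
    if b["goal"] == "add" and b["result"] is None:
        b["result"] = b["operand_a"] + b["operand_b"]
        b["goal"] = "report"
        return True
    return False

def rule_report(b):
    # IF the goal is "report", THEN output the result and halt.
    if b["goal"] == "report":
        print("answer:", b["result"])
        b["goal"] = "done"
        return True
    return False

elapsed_ms = 0
while buffers["goal"] != "done":
    for rule in (rule_add, rule_report):
        if rule(buffers):    # at most one production fires per cycle
            break
    elapsed_ms += CYCLE_MS
print(f"total simulated time: {elapsed_ms} ms")
```

Even this two-step toy cognition takes 100 ms of simulated time; chaining many such cycles is how slower, complex cognitions would be built from fast basic operations.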
In summary, while they are less testable and less easily reasoned about, most other valued long-term aspects of personal identity are probably encoded in a structural form that meets similar spatiotemporal constraints as engrams.
The robustness, spatiotemporal scale, and uniqueness criteria all provide relatively strong constraints on what structural features could be responsible for engram information, although applying them in any specific case warrants thoughtfulness. Because of the possibility of structural cycles, the longevity criterion is weaker, but still listed here because it can be a helpful clue.
There are certainly problems with the constraint-based approach described here. It is qualitative, not quantitative. I am not trying to logically prove or disprove that a particular brain preservation method definitely retains the information content of long-term memories. I don’t think that such a proof is possible given our state of uncertainty. Rather, I am trying to build off of current consensus models of how the brain works to come up with a logical way of reasoning about the problem, with premises that can be queried. I hope that others can poke holes in how this framework is applied or come up with better alternatives in the future.
We can now apply these four constraints to the problem of which structural features are necessary for retaining the information content of long-term memory recall via the preservation of engrams in the brain.