Because of the correlated nature of structural information in the brain, it is likely that there are numerous topological maps of the biomolecule-annotated connectome that could sufficiently describe engrams. Even if many of these maps were damaged or destroyed by aspects of the brain preservation procedure, if at least one could still be ascertained, then the information content for engrams would still be present. Important structural features in the brain that are correlated with the map that is retained could be inferred, while structural features not necessary for valued cognitive functions could be replaced.
To make the discussion more concrete, we describe different possible inference channels for different levels of the connectome. Primarily, the inference process relies on mapping the biomolecular breakdown products of abstract structural features that might be damaged. I currently think that the high-level goal of brain preservation should be preserving enough structural information to infer the biomolecule-annotated connectome to a sufficient degree to describe engrams.
The difficulty of inferring the biomolecule-annotated connectome is highly unclear and it is correlated with all of the other areas of uncertainty in brain preservation. This is why naïve Drake-like equations to estimate the probability of brain preservation success can be highly problematic. Our current lack of knowledge cuts both ways: many of the proponents of brain preservation, as well as its critics, often underestimate the uncertainty about how much inference of the biomolecule-annotated connectome will be possible.
The connectome need not be perfectly preserved in order to be inferred. If there is damage to the connectome during the dying and/or preservation processes, but the decomposition products of the connectome could only have come about from one original connectome, then there is still enough information about the connectome in that brain to perform repair and revival. The connectome also does not need to be perfectly inferred, just well enough for ordinary survival. This means that the types of alterations that occur to the connectome on a daily basis are acceptable in brain preservation.
The inference step of figuring out what the original neural structure was, based on its decomposed parts, is the process of neural archaeology, first described by Thomas Donaldson in 1987; Aschwin de Wolf later termed the same idea reconstructive connectomics in 2013. Neural archaeology is not currently much of a research field, but I expect it will be in the future, in the same way that biomolecular archaeology is now (Hunter, 2007).
The possibility of extensive neural archaeology is a highly controversial point within and outside of the cryonics/brain preservation community. People can simply claim that any amount of preservation damage could potentially be repaired in the future following a neural archaeology investigation. This allows for all sorts of wishful thinking, lack of urgency, and swindling. At the same time, I do expect that neural archaeology will be an essential component of any potential revival process.
In information theory, a channel is a medium used to communicate a signal. For our purposes, we can consider an inference channel to be the measurement of a particular type of structural feature in the brain as a proxy of the information in the brain. Information loss in this channel includes changes in the perimortem period, the brain conservation process, and any inaccuracies in the process of measurement.
As an example of a high-capacity inference channel, let us imagine that one performed wide-scale, high-resolution spatial RNA mapping (which is not currently possible) to identify the location and sequence of all of the RNA molecules in a brain. For some context, it’s important to recognize that translation in brain cells is often local. As a result, thousands of types of messenger RNA (mRNA) molecules, alongside ribosomes, can be found in dendrites, axons, and synapses (Holt et al., 2019).
Because so much of the translation in brain cells is local, even if you only had a map of RNA molecules, you could likely predict to a high degree of certainty the protein components in local subcellular compartments. Because such a map would let you identify RNA molecules specific to different organelles and other sub-cellular compartments, it would also provide a type of ultrastructural feature level of connectome detail. It could be harder to predict the copy number of each translated protein, but this might be regulated in a relatively predictable fashion.
As an example of the capacity of RNA as an inference channel, the odorant receptors of olfactory sensory neurons have already been predicted to a high degree of accuracy using the transcriptional profile of cells, even when the olfactory receptor transcript itself is removed from the analysis (Bast et al., 2022).
Would this inference channel be sufficient to describe engrams? That’s an open question, which depends on the presence of long-lived proteins, the presence of subcellular areas in which local translation is not widespread, how much local protein copy number varies based on local mRNA content, and other factors. It’s also somewhat of an artificial hypothetical question because other biomolecules would likely be present in the preserved brain as well. However, the information capacity of mRNA is relevant in brain preservation, because some data suggests that RNA molecules are relatively stable in the postmortem brain (Zhu et al., 2017), due in part to the relative dearth of endonucleases.
Over the years, various people have claimed that brain preservation will not be possible because a particular structural feature of the brain will likely not be preserved by a particular brain preservation procedure. For example, Ken Miller made various claims of this sort in a debate with Ken Hayworth.
Specific claims about the lack of structural feature preservation need to be examined carefully. Yet, even if true, that doesn’t mean that there is necessarily a loss of information sufficient to destroy engrams. The reason for this is that an inference channel may still be available to infer the information in this damaged or destroyed structural feature.
This possibility of inference is a counterbalancing force in weighing the uncertainty involved in brain preservation. While many people who critique brain preservation feel that the sources of damage to preserved brains are numerous, many proponents of brain preservation likewise feel that the available inference channels are numerous. Critics of brain preservation often do not grapple with the possibility of inference channels.
In my experience, there is a strong correlation between one’s knowledge of information theory and how likely one thinks brain preservation is to succeed. For example, in the 2021 ACX Biostasis Survey, academic computer scientists – perhaps the profession most likely to be knowledgeable about information theory – were the most likely among different professions to be signed up for cryonics.
As the survey author wrote: “The vast majority of people in the world are not signed up for cryonics. Only about 2000 in 7.6 billion people, or 0.0000263%. Whereas if you’re an academic computer scientist who responds to ACX surveys, the probability is about 10%, which is 380,000 times higher.”
One way to integrate the bottom-up and top-down approaches to engram structural information is to imagine a connectome inference process in the setting of damage. Because we have established that anchored biomolecules are the most likely building blocks of engrams, we can imagine that if the connectome is damaged, it could be inferred via measuring its component anchored biomolecules. I will discuss how inference might be performed for the information contained at each level of description of the connectome.
An implication of my suggestion that inference of the damaged connectome will require brain-wide biomolecule mapping is that biomolecule mapping will almost certainly be necessary as a component of any hypothetical revival process. The only possible exception to this is if it turns out that the ultrastructural feature level connectome or above is sufficient to describe engrams, and there is so little damage to the preserved brain that it is not necessary to do connectome inference. I consider both of these possibilities to be unlikely, so I expect that precise biomolecule-level mapping in the brain will be necessary for connectome inference and revival.
As with all other structural features, the most straightforward way to capture connectivity information content would be to preserve the connections intact, so that they can be seen under the electron microscope. If they are no longer intact, the next best approach might be an inference method based on the mapping of numerous individual biomolecules throughout the preserved brain tissue and modeling their decomposition during the dying and/or conservation processes.
Chemical synaptic connections are made up of many thousands of biomolecules. Some of the most important ones are proteins, such as ephrins (Patrizio et al., 2016). These markers would allow the inference of whether there was a chemical synapse in a particular location, based on modeling the decomposition and diffusion processes of the nearby biomolecules.
Because each synapse and each cell will also contain a unique set of biomolecules, such as protocadherins and neurexins, it should also be feasible to determine the pre- and postsynaptic connections of a synapse on the basis of the biomolecules present nearby (O’Rourke et al., 2012). Electrical synapses are likely to be more difficult to infer than chemical synapses, because they are smaller and contain fewer constituent biomolecules, but they are still made of biomolecules that could theoretically be mapped and triangulated to infer their original locations and properties.
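To make the idea of triangulation a bit more concrete, here is a minimal sketch, assuming (unrealistically) that the breakdown products of a single synapse diffused isotropically and that each product could be individually mapped. Under that toy model, the best estimate of the original location is simply the centroid of the observed positions; the coordinates below are invented purely for illustration.

```python
import numpy as np

# Hypothetical mapped 3D positions (in nm) of breakdown products assumed to
# have diffused isotropically away from a single original synapse site.
observed_positions = np.array([
    [120.0, 85.0, 40.0],
    [132.0, 90.0, 35.0],
    [118.0, 78.0, 52.0],
    [141.0, 95.0, 47.0],
])

# Under an isotropic Gaussian diffusion model, the maximum-likelihood estimate
# of the original source location is the centroid of the observations.
estimated_origin = observed_positions.mean(axis=0)

# The spread of the products gives a rough sense of how far diffusion has
# progressed, and therefore how uncertain the inferred location is.
estimated_spread = observed_positions.std(axis=0)

print("Inferred original location (nm):", estimated_origin)
print("Per-axis spread (nm):", estimated_spread)
```

A real inference procedure would of course need to handle anisotropic diffusion, mixtures of sources, and decomposition chemistry, but the underlying logic of pooling many noisy observations of correlated products is the same.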
mRNA molecules will also be trafficked to local areas of the cell. As previously discussed, there is evidence that local mRNA translation is a widespread phenomenon, for example occurring within axons (Deglincerti et al., 2012). mRNA molecules are even more likely to be unique to each cell as a result of unique alternative splicing patterns. For example, the three Nrxn genes are alternatively spliced to create more than 3000 forms, which is thought to help define synapse specificity (Vuong et al., 2016). mRNA molecules are also transferred between cells, which could allow the probabilistic inference of a connection between two cells if an mRNA molecule known to be synthesized in one cell is found in another cell. Therefore, mRNA is likely to be a very useful inference channel for the adjacency connectome if there is damage present.
Another possible inference channel is perineuronal nets. These are lattice-like structures that are primarily made of chondroitin sulfate proteoglycans and are found in the extracellular space (Heo et al., 2018). They wrap dendrites and synapses in an inflexible manner. During one’s lifespan, perineuronal nets may help to maintain the structure of dendritic spines, axonal boutons, and synapses in stable positions over time to aid in the maintenance of cognitive functions, including engrams. While perineuronal nets may not play a significant role in rapid ion flow, from the perspective of engram inference, preserving them may help infer synaptic connections that are damaged during the dying and/or preservation processes.
The key question for inference of the adjacency connectome is the degree to which the biomolecules found in nearby synapses have been altered from their original states. If there is too much decomposition and/or diffusion of these biomolecules, then it will no longer be possible to infer the original location of the synapse and its cellular origins with a high enough degree of accuracy to retain the information in engrams.
Because the cell membrane connectome seems to be such an important aspect of connectome information content, extensive cell membrane damage is a big problem from a brain preservation perspective. For example, if damage causes cell membranes to come apart into droplets and vesicles formed from the original membrane, it is likely to be extremely difficult to infer the original shape of the membrane, which, in turn, will make it difficult to infer how information originally flowed across the cell in the form of ions.
Cell membranes can rearrange during the decomposition process. For example, if a cell undergoes osmotic stress, causing it to swell sufficiently, it can eventually burst. If this happens, the lipids associated with the cell membrane can rearrange into lipid droplets and vesicles. This will make it much more difficult to infer where the cell membranes originally were than if the damaged pieces did not rearrange.
During my PhD, I was working in the lab with an immortalized cell line of oligodendrocyte precursor cells. Because I did the calculations wrong, I accidentally added a 1000x higher dose of one chemical to the cells on one of the plates. When I came in to look at them under the microscope the next morning, the membranes were completely fractured into pieces. I can’t imagine that it would have been possible to computationally reconstruct what the shapes originally were after the membranes were destroyed and the decomposition products had so much time to diffuse about. This is the concern in brain preservation and it’s a real concern. People who have done cell biology know that cells can be extremely fragile.
That said, if the damage to cell membranes is not too severe, there are two key channels of information for inferring the original structure. The first is biomolecules present in each cell membrane/surface that are relatively unique to each cell (Ray et al., 2020). For example, protocadherin proteins seem to have a locally unique expression pattern in each cell, allowing for self vs non-self neurite recognition and therefore functioning as a unique identity code (Wu et al., 2021).
Wu et al. (2021) describe the potential for variable protocadherin isoform expression to form a locally unique barcode for each neuron:
Each cortical neuron stochastically expresses up to 2 alternate Pcdhα genes, 4 Pcdhβ genes, and 4 alternate Pcdhγ genes as well as all of the 5 C-type Pcdh genes (up to 15 in total). These combinatorial expression patterns could generate the large number of address codes required for neuronal identity. For example, the 22 encoded Pcdhγ proteins have been predicted to form up to 234,256 distinct tetramers of cell-surface assemblies. In conjunction with the encoded 15 Pcdhα and 22 Pcdhβ proteins, Pcdh proteins could generate the enormous diversity of cell-surface assemblies required for coding single neurons in the brain.
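As a quick sanity check of the combinatorics in this quote, and assuming the quoted tetramer figure refers to ordered four-subunit assemblies:

```python
# Sanity check of the quoted Pcdh combinatorics (illustrative only).
n_gamma_isoforms = 22
print(n_gamma_isoforms ** 4)   # 234256 ordered tetramers, matching the quote

# Maximum isoforms expressed per neuron: 2 alpha + 4 beta + 4 gamma + 5 C-type.
print(2 + 4 + 4 + 5)           # 15
```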
A second major channel for inference is cytoskeletal biomolecules. Cell membrane location is highly constrained by the cytoskeleton, especially its outermost area, which is called the actin cortex. The actin cortex tethers the cell membrane via particular proteins that cross-link the plasma membrane with the cytoskeleton (Hohmann et al., 2019). In axons, the diffusion of proteins and lipids along the plasma membrane has been found to be constrained by actin ring-like structures that are present periodically along its length (Yihao Zhang et al., 2019).
Because the actin cytoskeleton is highly crosslinked in vivo, it forms a gel-like structure and tends to be relatively stable during the dying and/or preservation processes. However, these cytoskeleton biomolecules are likely not as unique to each cell as cell surface proteins. Some proteins that help to stabilize dendritic spines could also likely help with the inference of which dendritic spines were most stable in vivo (Shaw et al., 2021).
By measuring the locations of these cell surface and cytoskeleton biomolecules and modeling their decomposition and diffusion, it might be possible to infer where the cell membranes were originally located. Another factor for inference is that we know how cell membranes “should look.” Chana Phaedra has discussed how we could theoretically use our prior knowledge of in vivo cell membrane shapes to infer the original structures from damaged samples of neural tissue. For example, we know that cell membranes should be relatively continuous. Discontinuities such as cell membrane blebs are generally thought to be artifacts due to the lack of adhesion between the cell cortex and the cell membrane biomolecules. As a result, they could likely be repaired relatively easily without substantial loss of information.
Cell process diameter information can potentially be inferred even if aspects of it are damaged. For example, the diameter of the axon and the amount of myelin enclosing the axon tends to be correlated, albeit not perfectly, along the length of a myelinated axon (Lee et al., 2019). Therefore, if one segment of a myelinated axon were distorted by a preservation procedure, such as via cracking, it would likely still be possible to infer the original structure to a reasonably high degree of precision. This is especially true if the biophysics of the myelin cracking process during the preservation procedure could be modeled in order to help with inference of the original structure.
This is not a theoretical concern. Shawn Mikula’s brain preservation procedure submission was deemed insufficient to win the Brain Preservation Foundation’s prize in part because of myelin artifacts. John Smart has pointed out that there is a strong argument that this would not have been a major concern from an information conservation perspective.
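As a toy illustration of the kind of inference described above, one could imagine fitting the relationship between myelin thickness and axon diameter from intact segments of an axon and then using it to estimate the original diameter of a distorted segment. This is only a sketch with invented numbers, standing in for the much richer biophysical modeling of the cracking process that would actually be required.

```python
import numpy as np

# Hypothetical measurements from intact segments of one myelinated axon:
# myelin sheath thickness (um) and inner axon diameter (um). Values invented.
myelin_thickness = np.array([0.20, 0.25, 0.22, 0.28, 0.24])
axon_diameter    = np.array([0.85, 1.05, 0.95, 1.15, 1.00])

# Fit a simple linear relationship (diameter ~ a * thickness + b) from the
# intact segments, exploiting the known (imperfect) correlation between them.
a, b = np.polyfit(myelin_thickness, axon_diameter, deg=1)

# Predict the original diameter of a distorted segment whose myelin thickness
# can still be measured despite a cracking artifact.
distorted_segment_thickness = 0.26
inferred_diameter = a * distorted_segment_thickness + b
print(f"Inferred original diameter: {inferred_diameter:.2f} um")
```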
Another important question is the degree of information content of lipids in cell membranes. Lipids may affect cell membrane diameter more than they affect cell membrane shape, because cell membrane shape is sculpted primarily by the cytoskeleton. From a brain preservation perspective, lipids are probably the most controversial biomolecules. Because they are hydrophobic and not easily cross-linked, they are a constant thorn in the side of preservation procedures and tend to be extracted by agents that otherwise conserve proteins and nucleic acids. Less is also known about the biological functions of lipids than about those of proteins, which should not comfort us; instead, it should concern us.
Lipids are synthesized, metabolized, shuttled, and tethered in their sub-cellular locations by protein molecules. As a result, the degree to which they store unique information that is not already captured by proteins is unclear. One of the key questions is whether the cell membrane has enough proteins in it that it can be visualized as an abstract structure even without lipids. This may differ between different types of cell membranes that are either protein-rich or lipid-rich (Carlemalm et al., 1985).
It is unclear how stable membrane thickness is over time, but at least one review notes that the cell membrane lipid composition is closely regulated by the cell, suggesting that there are multiple inference mechanisms if this structure is damaged (Ingólfsson et al., 2017). Overall, my guess is that if proteins are retained but the lipids are lost during a brain preservation procedure, it is likely that the original locations of the cell membranes and their diameters could be inferred by the presence of the proteins. This is true for myelin as well, which also contains an abundant number of proteins. The bigger question with lipids seems to be their local effects on protein conformation and function, which, as described in a previous essay, could affect ion flow.
As discussed, because changes in extracellular space are largely driven by predictable osmotic forces such as intracellular swelling, it appears to be relatively straightforward to predict the original extracellular space volume based on the distorted structural information. One study has already performed this type of computational inference on nervous system tissue preserved with a fixation method that led to distorted extracellular space (Kinney et al., 2013). They used a computational algorithm to expand the extracellular space volume fraction from 8% to the expected in vivo estimate of approximately 20%. This study could be considered one of the first examples of the field of reconstructive connectomics. And because the extracellular space is not particularly stable during life, long-term memory recall ability should not be dramatically affected even if this inference process is not exact.
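A back-of-envelope calculation suggests why this kind of correction is relatively modest. If the total tissue volume is held fixed and the extracellular space is restored from roughly 8% to 20% by shrinking the swollen cell profiles, the required change in linear cell dimensions is only a few percent (the numbers below are illustrative):

```python
# Back-of-envelope: how much would swollen cell profiles need to shrink to
# restore the in vivo extracellular space (ECS) fraction, assuming the total
# tissue volume stays fixed?
ecs_fixed   = 0.08   # ECS volume fraction after fixation (Kinney et al., 2013)
ecs_in_vivo = 0.20   # approximate in vivo ECS volume fraction

cell_fraction_fixed   = 1.0 - ecs_fixed     # 0.92
cell_fraction_in_vivo = 1.0 - ecs_in_vivo   # 0.80

volume_scale = cell_fraction_in_vivo / cell_fraction_fixed   # ~0.87
linear_scale = volume_scale ** (1.0 / 3.0)                    # ~0.95

print(f"Cell volumes shrink by a factor of {volume_scale:.2f}")
print(f"Linear cell dimensions shrink by a factor of {linear_scale:.3f}")
```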
As with cell membranes, from the perspective of engram building blocks, abstract structural features are made up of many biomolecules. Therefore, even if the abstract structure itself is damaged and the biomolecules that make it up have decomposed and/or diffused away from their original locations, it may still be possible to triangulate their original positions. However, it is likely to be more difficult to infer the original location of abstract structural features on the basis of biomolecules than that of cell membranes, because unique biomolecules are less likely to exist for each organelle. For example, if there is a lot of damage to the synaptic vesicles, it may be possible to estimate approximately how many synaptic vesicles were originally in the synapse, but more difficult to infer their original locations within the synapse.
That said, there is still a wide set of biomolecules associated with each of these structural features to help with inference. For example, the location and composition of the readily releasable pool of synaptic vesicles is dependent upon a wide range of biomolecules, as can be seen when one considers all of the biomolecules involved in their docking and recycling (Kaeser et al., 2017). So, based on the number of nearby scaffolding biomolecules, it may be possible to infer the approximate number of synaptic vesicles that were originally in a particular relative location of that synapse at its steady state.
If there is damage to the epigenome, one could attempt to model and reverse the breakdown and diffusion of nuclear biomolecules. The precise relative locations and binding patterns of DNA and protein molecules in the epigenome might be relatively fragile and more easily damaged. There also may be conformational data in the shape of the chromosomes that stores information and could be lost in the dying and/or preservation processes. However, there is a lot of highly correlated information about the epigenome present in cells, in DNA, nucleosomes, RNA, proteins, and other biomolecules, so if there is damage to one, the information could most likely be adequately inferred based on the others.
The epigenome likely does not play a significant direct role in rapid electrochemical ion flow, cordoned off as it is in the nucleus and requiring the relatively slow process of transcription to exert its major functions. However, to the extent that brain cells can store memory information within their epigenomes as a stable storage site, DNA and its associated epigenetic modifications may be an important way to infer what the rest of the cellular biomolecule distributions were, if they happen to be damaged. One could also imagine using the epigenome as a latent variable in an inference process to reconstruct the original biomolecule distribution across damaged cells.
If the other levels of connectome detail are damaged, then we can rely upon biomolecular information for inference. What can the biomolecular level rely on for inference? The main thing is other biomolecular information that has not been damaged. This type of inference relies primarily on the uniqueness criterion.
First, if biomolecules have degraded or diffused, one can attempt to infer their original states based on the locations of their mapped breakdown and diffusion products. Second, one could infer the original state of a damaged biomolecule based on the mapped breakdown and diffusion products of other biomolecules whose information content is correlated with that biomolecule during life. Third, information at different connectome levels can be used for inference in a bidirectional manner. For example, one could potentially predict the original locations of synaptic biomolecules by combining epigenetic information with cytoskeletal trafficking data in neurites and synapses, which together indicate where the products of gene expression would have been captured.
As an example of this, consider the protein family CaM kinase II (CaMKII). In their 2019 Twitter debate about brain preservation, Ken Miller pointed out:
Miller: “What if any disruptions would be expected at the molecular level? Is the idea that it would freeze every molecule in place?? e.g., every CamKII molecule and its phosphorylation state? I also wonder if there could be dynamical interactions that get lost in freezing a snapshot…?”
But as Ken Hayworth responded:
Hayworth: “But you know that CamKII is not in a position to effect millisecond neuronal transmission directly. It is part of feedback loops (http://learnmem.cshlp.org/content/26/5/133.short …) that ultimately stabilize the true functional synaptic weight -dependent on receptor proteins like AMPA.” “These feedback loops contain a plethora of molecular and structural modifications that all correlate with the functional strength of a synapse. GA [AM: glutaraldehyde] would have to erase ALL of this correlated information to prevent the possibility of future decoding.”
The only quibble I have with Hayworth’s response is that synapses could theoretically have multiple properties, not just one weight (although whether this matters in practice is – as far as I can tell – an open question). But his general point that functional information is redundantly encoded via multiple biomolecules remains valid and critical for inference of the sparse biomolecule-annotated connectome.
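To make this redundancy-based inference a bit more concrete, here is a minimal sketch of the second strategy listed above: imputing the local density of a damaged biomolecule from the mapped densities of biomolecules that co-vary with it during life, using a simple linear model fit on a well-preserved subset of synapses. All of the numbers are synthetic and the linear model is a deliberate oversimplification.

```python
import numpy as np

# Toy illustration: infer the local density of a damaged biomolecule from the
# mapped densities of biomolecules whose levels correlate with it during life.
rng = np.random.default_rng(42)

n_synapses = 500
# Densities of two intact, mapped biomolecules at each synapse (arbitrary units).
intact = rng.normal(loc=1.0, scale=0.2, size=(n_synapses, 2))
# The damaged biomolecule's in vivo density is assumed to co-vary with them.
true_damaged = 0.6 * intact[:, 0] + 0.3 * intact[:, 1] + rng.normal(0, 0.05, n_synapses)

# Suppose the damaged biomolecule was still measurable in a subset of synapses
# (e.g., a well-preserved region); fit a linear model there.
train = slice(0, 400)
X = np.column_stack([intact[train], np.ones(400)])
coef, *_ = np.linalg.lstsq(X, true_damaged[train], rcond=None)

# Impute the damaged biomolecule everywhere else from its correlates.
X_all = np.column_stack([intact, np.ones(n_synapses)])
imputed = X_all @ coef

test = slice(400, None)
corr = np.corrcoef(imputed[test], true_damaged[test])[0, 1]
print(f"Correlation between imputed and true values on held-out synapses: {corr:.2f}")
```

In reality the correlations would have to be learned from undamaged reference tissue and the models would be far more complex, but the basic move of exploiting correlated, redundantly encoded information is the one Hayworth is pointing to.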
Inference problems often lack unique solutions, in part as a result of the curse of dimensionality: there are numerous ways in which the same data could be fit. One way to address this is to bring in external data and attempt to use it to help constrain the inference problem. There are a couple of types of such data that are commonly discussed:
1. Mindfiles: One source of data that might be helpful is records of how one behaved during life. We can take a behaviorist approach here and think of self-report measures of cognition, which might be included, as simply another type of behavior. A collection of such data is often called a mindfile or lifelog.
Mindfile data points could include writings, decisions, audio recordings, first-person videos, or records of one’s interactions with computers such as keystrokes. The idea is that these data points could be helpful in inferring how neural circuitry was originally wired if it is in a damaged state after brain conservation.
Among other publications, Seymour (2013) discusses this problem in depth. More recently, Matthew Barnett and Mati Roy have been pursuing the idea of lifelogging as a form of life extension.
2. Imaging: Another possible source of external data is neuroimaging. One could imagine a detailed structural model of the brain during life being helpful for inferring the original structure. Some people also think that functional imaging data, such as functional MRI scans, might be helpful. To me, extant non-invasive neuroimaging/neurophysiology techniques seem to have too low a spatiotemporal resolution to be very helpful. But I’m not an expert in this area and could certainly be wrong, so this is worthy of consideration.
Alternatively, one might imagine that a whole body scan could serve as an external reference point when inferring the rest of the body to synthesize or simulate it during revival. This could be especially useful in a brain-only preservation approach.
The idea of incorporating external information to aid in revival is an old one; it appears in Ettinger’s 1964 book on cryopreservation (Ettinger, 1964):
We normally think of information about the body as being preserved in the body - but this is not the only possibility. It is conceivable that ordinary written records, photographs, tapes, etc. may give future technicians enough clues to fill in missing or damaged areas in the brain of the frozen.
The time will certainly come when the brain’s method of coding memories is thoroughly understood, and messages can be “read” directly from nervous tissue, and also “read” into it. It is not likely that the relation will be a simple one, nor will it necessarily even be exactly the same for every brain; nevertheless, by knowing that the frozen had a certain item of information, it may be possible to infer helpful conclusions about the character of certain regions in his brain and its cells and molecules.
Similarly, a mass of detailed information about what he did may allow advanced physiological psychologists to deduce important conclusions about what he was, once more providing opportunity to fill in gaps in brain structure.
It follows that we should all make reasonable efforts to obtain and preserve a substantial body of data concerning what we have seen, heard, felt, thought, said, written, and done in the course of our lives. These should probably include a battery of psychological tests. Encephalograms might also be useful.
Like anything else, this notion can be carried too far. Pushing this kind of reasoning to the extreme, one might say that one need only preserve a single cell of his body, for its genetic content; from this he could be regrown, and the original personality and memories, at least in coarse outline, implanted from the records. But this sort of connection is both too difficult and too tenuous and unsatisfying for most people.
A critical point is that, in my opinion, collecting this type of information alone would almost certainly not be enough to reconstruct engrams to a degree sufficient for ordinary survival. Lifelogging data is underdetermined in terms of the actual internal brain states that could generate the external behaviors. There is too much information that is lost. For example, in my view, Ray Kurzweil has been too Pollyannaish about the potential benefits of the external information-only approach.
In other words, I agree with Ettinger: external information might be helpful, but it is almost certainly not sufficient. However, the extent to which incorporating external information might be helpful for revival alongside structural information in the preserved brain is an area of uncertainty for me. Currently, I think effort is better spent on brain preservation instead.
Related to the idea of incorporating external information is the idea of imperfect reconstruction. We can imagine a few possible levels of revival, with increasing degrees of fidelity to the person who is preserved:
1. DNA-based cloning - This would not require the preservation of the brain at all. In fact, it could be done on DNA samples of people who died tens of thousands of years ago. Some people have claimed that this would be sufficient for revival. Others feel that it’s basically like a completely estranged identical twin – even if you don’t know them at all, you might value their life more than average – but nothing even close to personal healthspan extension. This person wouldn’t contain any of the same memories, or even the same patterns of stochastic neural development.
2. Brain epigenetic-based cloning - Much of the divergence between genetics and behavior/cognition seems to be due to randomness and environmental influences in neural development. Plausibly, a large percentage of this might be encoded by epigenetic information that could be present in the brain even if synaptic connectivity information is destroyed. Speculatively, this seems like it might allow for the revival of some type of a “super clone” who would share a lot of personality with the person whose brain was preserved. But at this level, it’s hard to see how one would retain memories, which seem to almost certainly require synaptic connectivity information. So my guess is that most people would not consider this to be healthspan extension.
3. Above + small number of memories or limited information content of memories - It’s possible that synaptic connectivity information could be partially destroyed during brain preservation but still partially inferable. In this case, it’s theoretically possible to imagine that only a fraction of engrams/long-term memories could be inferred, or only a fraction of the information content per engram/long-term memory.
4. Above + close to the full information content of one’s memories - With a high-fidelity brain preservation procedure, it seems that most of one’s engrams could theoretically be preserved. At this point, this seems to me like it reaches the level of “ordinary survival,” as described in a previous essay.
5. Above + short-term cognition information from prior to the preservation - This would include information in one’s short-term memory or working memory prior to the procedure. This information is quite possibly lost in brain preservation procedures that fall short of suspended animation – or potentially even those that meet suspended animation standards, such as in amnesia due to anesthesia; however, there is considerable uncertainty about that. I don’t focus on short-term cognition information, because it seems like a much harder problem to preserve than #4, and #4 seems sufficient to me, but others might reasonably disagree.
The dividing line between #3 and #4 is an interesting question that could potentially be answered in the future. Some introspection might be helpful here. I know that for myself, there are some memories I can recall that are quite rich. For these rich memories, I can remember minute details of the event as well as details of how I felt at the time. On the other hand, there are other memories for which the details I can recall are fuzzy, or for which I can remember factual aspects of the event without remembering my feelings about it. Perhaps it is similar for you.
If some brain regions were well preserved but others were not, then it seems possible to be able to infer a large percentage of engrams, but only partial information content for those memories. This is because particular brain regions seem to play particular roles in holding the information content of memories (Roy et al., 2022). For example, amygdala engrams seem to mediate valence information for particular memories.
Engrams are likely stored on a continuum of richness based on how redundantly and how strongly they are stored within a brain region and across brain regions. Therefore, we can imagine that if there is damage during the brain preservation process, some of the details of memories, or some of the emotional content of memories, would be lost. There is likely a spectrum of how much memory information content will be preserved following a brain preservation procedure. While this may sound insurmountable, partial memory loss can already happen today to some of the people who suffer strokes – and yet, many of these people can go on to live fulfilling lives (Al-Qazzaz et al., 2014).
There is still considerable uncertainty about this idea of partial engram inference, as we are still in the early days of our knowledge about engrams.
Arguably, people should list in their preservation preferences the extent to which they would or would not want to be restored with different options. This will be discussed more in a subsequent essay on “Directives.”
In discussions of brain preservation, people frequently invoke an adaptation of the Drake equation to create component-wise estimates of the probability that people will be revived. The components are supposed to be independent. For example, the components could include:
1. The probability that the information that defines the cognitive functions you value is possible to preserve.
2. The probability that enough information is retained during the agonal and postmortem stages.
3. The probability that the preservation procedure that you receive retains the information defining you over the long-term.
4. The probability that the organization where you are conserved is stable until the revival technology can be used.
5. The probability that you are not removed from conservation during that time period.
6. The probability that revival technology is ever developed.
7. The probability that revival technology is used on you in an effective way.
Theoretically, you could imagine trying to come up with probability estimates for each component and then multiplying them together to estimate an overall probability.
In practice, the probabilities are highly correlated. And one of the major ways that they are correlated is that they all depend on how easy it is to infer the information content of engrams from partially degraded structural components in the brain. While this is prima facie obvious for #1, #2, #3, and #6, it also relates to #4 and #7, because how valuable society comes to see the brain preservation project over the coming decades will depend on what we learn about how engram information content is stored in the brain. This is very much an active area of neuroscience research. If neuroscience reveals that engrams are more redundantly encoded than we realize, then there is likely to be an increased societal interest in brain preservation, which will make #4 and #7 more likely. On the other hand, if neuroscience tells us that engrams are more fragile than we realize, then the converse will be true, and #4 and #7 become less likely.
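A toy simulation can illustrate how this kind of shared dependence breaks the naive multiplication. If several components are all partly driven by a common factor – say, how easy engram inference turns out to be – then the product of the marginal probabilities can differ substantially from the probability that all of the components succeed together. The component definitions and weights below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 200_000

# Shared latent factor: e.g., how easily engram information can be inferred
# from partially degraded structure. It influences several components at once.
z = rng.normal(size=n_draws)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy component probabilities, each partially driven by the shared factor z.
# Offsets and weights are invented purely for illustration.
p_preservable = sigmoid(1.5 + 1.0 * z)   # component 1
p_postmortem  = sigmoid(1.0 + 1.0 * z)   # component 2
p_procedure   = sigmoid(0.5 + 1.0 * z)   # component 3
p_revival     = sigmoid(0.0 + 1.0 * z)   # component 6

components = [p_preservable, p_postmortem, p_procedure, p_revival]

# Naive Drake-style estimate: multiply the marginal (average) probabilities.
naive = np.prod([p.mean() for p in components])

# Joint estimate that respects the shared dependence on z.
joint = np.prod(components, axis=0).mean()

print(f"Naive product of marginals: {naive:.3f}")
print(f"Correlation-aware estimate: {joint:.3f}")
```

With positively correlated components, the correlation-aware estimate comes out higher than the naive product, which is one concrete way of seeing why component-wise multiplication can mislead.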
Another relevant point is that the success criterion is not necessarily binary. It’s plausible that only a certain amount of one’s memories or personality traits are likely to be preserved.
As a result of the problems with the Drake-like equation, I do not favor the use of percentages to estimate the overall probability that a brain preservation procedure will be successful. There are too many unknown unknowns, too many assumptions about the goals of the procedure that will vary by the individual, and, of course, limited available data. I certainly understand the appeal of such a question. I used to ask prominent researchers in cryonics what their percentage estimates of success were, but I now think it is highly unproductive, and worse, potentially harmful. When percentages are speculated upon, they are often anchored on in unproductive ways. Point estimates of probabilities also do not highlight the ways that people can improve the success probabilities for themselves and others based on the actions they take.
An example is to ask the question: What is the probability that a heart transplant conducted in 150 years will be successful? This brings up a large number of sub-questions, such as: Is there a heart available for transplant at the appropriate time? What is the condition of the person who needs the heart transplant? What are the technical parameters of the operation? Do most humans even have biological bodies in 150 years? It’s just a strange question to ask. You probably wouldn’t ask it. You’d just say: yes, in the circumstance that it needs to be done, it seems worth trying.
My opinion is that, rather than speculating upon probabilities, it makes more sense to say that brain preservation is currently a reasonable idea that makes biophysical sense but is completely unproven.
Here are two key areas of uncertainty in how engram inference might work:
1. One obvious area of uncertainty is what level of biomolecular mapping will be necessary to annotate the connectome. We know that a rich biomolecule-annotated connectome, preserved in a high-fidelity way, would be sufficient to describe engrams, because there is effectively no other type of structural information in the brain. But producing a complete biomolecule-annotated connectome would require measuring all of the biomolecules throughout the brain at the atomic level.
While mapping at the complete biomolecule level does not seem to violate known laws of physics, there’s certainly no technology that can currently accomplish this and it’s unclear if it will ever be practical. However, producing a sparse biomolecule-annotated connectome by profiling only a relatively small number of biomolecules might be more feasible over the coming decades, for example with the use of high-throughput immunoelectron microscopy and similar technologies. For some people, the question of how much biomolecular mapping would be sufficient to describe engrams bears on how feasible they consider revival from brain conservation to be.
The fundamental principle behind this is that no biomolecule involved in long-term information storage can do so as an island. It is almost certain that the biomolecule information important for engrams is redundantly coded in the location information of multiple biomolecules. Therefore, we most likely don’t need to preserve and measure every biomolecule; many can be inferred. And we don’t need to infer every biomolecule, just the ones that make brains unique in a way that affects engrams, such as the biomolecules involved in the organization of the synaptic active zones. And they don’t need to be inferred perfectly, just well enough to sufficiently describe the engram information that allows for ordinary survival. Exactly how much detail is sufficient for engram information is, of course, an open question.
Because biomolecular information is so interdependent and correlated, my guess is that mapping at the richest level of detail is exceptionally unlikely to be necessary. The eventual answer to the question of what amount of biomolecular annotation is necessary will depend on how much information can be gleaned from abstract structural features, how much biomolecular composition variation there is between and within brains, and how much information can be predicted by more parsimonious methods such as profiling the epigenome. Querying the extent to which different biomolecular features are necessary or sufficient for engram information content is a basic science question that could be addressed in the coming decades, to help reduce our uncertainty about how well connectome inference might work.
2. Another key area of uncertainty is about biomolecular conformation: (a) to what degree it matters for long-term memory recall, (b) to what degree it is predictable by the anchored biomolecules that are preserved by a given procedure, and (c) to what degree conformations are directly preserved by different brain preservation methods. This relates to the degree to which lipid preservation is essential.
While long-term suspended animation is not yet possible, we don’t want to just preserve brains without having any idea of whether the method is likely to lead to sufficient structural preservation. That would allow for all sorts of magical thinking and associated problems. Following the Baconian method, we want to expose brain preservation methods to criticisms based on empirical findings and get an actual sense of structural preservation quality.
A critical question, then, is how we can measure the information content in a preserved brain that is available to infer the biomolecule-annotated connectome. One obvious answer is to measure whether we have preserved the location information of as many biomolecules involved in rapid electrochemical information flow as possible. The more diverse the types of biomolecules preserved, the more will be available for inference. But given the limitations of our current biomolecule mapping technology, that doesn’t necessarily lend itself to testing in the way that the ultrastructural feature connectome does, via electron microscopy.
One approach might be to try to directly measure a few biomolecules that seem to be particularly important in information flow, such as certain ion channels along a dendrite, neurotransmitter receptors at synapses, or connexin subunits at gap junctions. As previously discussed, it also seems important to further dissect the biomolecule correlation landscape in preserved postmortem brains. This will help us to understand which aspects of biomolecular networks are more isolated and fragile than others and thus more important to directly preserve.
Generally speaking, the possibility of inference is usually not considered by critics of cryonics/brain preservation. It is an area of uncertainty that complicates any attempt to claim that brain preservation is not possible or will not work. Unlike most areas of uncertainty discussed in these essays, the possibility of inference cuts both ways: it could turn out to be harder or easier than we think. Given the history of archaeological research, such as the molecular archaeology of ancient DNA, it seems unwise to dismiss the capabilities of future neural archaeology technology without a detailed argument for why it will never be possible.
It’s worthwhile to point out that inference would require detailed mapping and modeling of biomolecular locations and diffusion patterns. The mapping will almost certainly need to be done by advanced imaging technology that does not exist today. The whole procedure, including the modeling to determine the original states, will almost certainly require advanced machine intelligence capabilities. But just because inference will require machine intelligence, it does not necessitate a certain type of revival. Modeling could be done in a computer to specify how repairs should be done, and then repair could be performed in the physically preserved brain.
My personal feeling is that in the end, the process of engram archaeology isn’t going to be that complicated. It just seems really complicated now because we’re still in the infancy of science. As technological civilization advances and we become sufficiently data- and compute-rich, it’s likely to come down to applying a series of deterministic equations to infer the decomposition and diffusion patterns of biomolecules important for the biomolecule-annotated connectome.
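As a cartoon of what “applying a series of deterministic equations” might look like, consider a one-dimensional toy problem in which diffusion over a known interval is modeled as convolution with a Gaussian kernel; the original concentration profile can then be partially recovered by regularized (Wiener-style) deconvolution. Everything here, from the kernel width to the noise level, is invented for illustration.

```python
import numpy as np

# Toy 1D illustration: if diffusion over a known time can be modeled as
# convolution with a Gaussian kernel, an original concentration profile can be
# partially recovered by regularized deconvolution.
n = 256
x = np.arange(n)

# "Original" profile: two sharp biomolecule concentration peaks.
original = np.zeros(n)
original[[80, 150]] = 1.0

# Diffusion kernel with standard deviation sigma (proportional to sqrt(2*D*t)).
sigma = 4.0
kernel = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
kernel /= kernel.sum()

# Forward model: diffusion (circular convolution) plus measurement noise.
H = np.fft.fft(np.fft.ifftshift(kernel))
blurred = np.real(np.fft.ifft(np.fft.fft(original) * H))
observed = blurred + np.random.default_rng(1).normal(0, 0.005, n)

# Wiener-style deconvolution: divide in Fourier space with regularization.
eps = 1e-2
restored = np.real(np.fft.ifft(np.fft.fft(observed) * np.conj(H) / (np.abs(H) ** 2 + eps)))

print("Indices of the two largest restored values:", sorted(np.argsort(restored)[-2:]))
```

Real neural archaeology would involve three-dimensional geometry, reaction chemistry, and far messier noise, but the structure of the problem – a forward model of decomposition and diffusion, inverted with regularization – would be similar.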
Here is a comment by Robert McIntyre about the utility of external information with the goal of recreating someone:
I also agree that with “sufficient information” you can of course recreate someone. Though I don’t think that I’m attacking a strawman either, since there are actual projects such as Terasem (https://terasemmovementfoundation.com/) which work to make a “Mindfile” and eventually recreate a person through, essentially, a detailed personality quiz.
One thing that I think is key here is that language is almost certainly incapable of pinning down what a person is in enough detail to recreate them, mostly because the subconscious is relatively inaccessible to language. It’s not that the details don’t EXIST in some physical form, just that language is inadequate at getting at those details. It would be the equivalent of taking a photograph of a ROM chip in our metaphorical plane, but not being able to read the firmware off of the chip.
The only thing I can think of in today’s world that can capture enough detail would be to preserve a person’s body and physically retain all of that language-inaccessible information encoded in their brain / nervous system.