Summary

Measuring the quality of brain preservation, both in experimental settings and in each individual preservation case, is an essential part of doing our best to preserve information in the brain. Without evaluating the procedures in empirical tests, it is too easy to throw up one’s hands and say that “our friends from the future” will solve everything.

There is debate about what type of metric is best to optimize for during preservation: structural metrics or viability/electrophysiological/functional metrics. While there are trade-offs to each approach, I prefer to focus on structural metrics because they are simpler, seem likely to be sufficient, and are possible to measure today.

Next, I address the practical question of how to assess structural preservation quality. As opposed to the more theoretical discussions in previous essays, in this essay I give my current best guesses about what structural features to preserve and how we can most effectively measure the quality of their preservation.

I focus on measuring cell membrane morphology because it is a potentially fragile structure, it is straightforward to measure, it is a proxy for the preservation of other gel-like structures in the brain, and it seems to be a high-information aspect of the biomolecule-annotated connectome.

A multimodal approach using microscopy data and biophysical reasoning, while also considering practical factors, is in my opinion the best way to evaluate different possible brain preservation methods. I explain my reasoning through a discussion of the achievements and limitations of the Brain Preservation Foundation prize, which was proposed in 2010 and finally won in 2018.

The no feedback problem

Since it was first proposed in the 1960s, one of the major debates about cryonics has been how good it must be before it is offered to the public. More people are on board with the idea of long-term suspended animation: applying a technique known to be reversible at the time of a person’s legal death, with the goal of reversing the preservation once the cause of death can be fixed. But cryonics, and brain preservation more generally, is not long-term suspended animation. By definition, we don’t know whether it is reversible. This makes the threshold for when it should be offered much less clear.

The first proponents of cryonics in the early 1960s, Ettinger and Cooper, had read the experimental data from cryobiology studies suggesting that cryopreservation made biophysical sense (Ettinger, 1964) (Duhring, 1962). Ettinger’s The Prospect of Immortality describes how the cryoprotectants glycerol and ethylene glycol can allow small organisms and cells to survive cooling to freezing temperatures. The book also describes evidence that liquid nitrogen temperature storage would likely allow maintenance for centuries. For Ettinger and the first cryonicists, despite the problems and likely damage that would occur, biophysical reasoning based on indirect experimental data was enough to start preserving people via cold temperatures right away. This position was known as “freeze now” (Shoffstall, 2016).

Ideally, prior to offering it to the public, there would be more direct evidence suggesting that a cryonics or brain preservation procedure preserves enough structure to plausibly allow for revival with long-term memories intact. Ideally, we would have evidence based on prospective human studies. But that data would take years to decades to gather and require a significant amount of funding that – historically – has not been available. And in the meantime, people who want preservation would legally die and not be able to access it. So, for many people, it seems reasonable to compromise on ideal research standards and allow people to offer the best methods that are available now. From a compassionate use perspective, there is a clear argument that people deserve the right to choose the best preservation methods available at the time of their legal deaths.

But this compromise causes a big problem, which Mike Darwin has called the “no feedback problem.” As an analogy, Darwin notes that if you go to a hospital for surgery, or bring your car to a mechanic, you and your family can get a reasonable sense of whether the procedure was successful. The knee replacement improves the pain from osteoarthritis or it doesn’t. The car is fixed or it isn’t.

In cryonics, there isn’t that same type of obvious feedback. This was especially true in the early days of cryonics in the late 1960s, when cryonics bootstrapped from nothing. At that time, getting people actually stored in liquid nitrogen dewars and keeping them there was a hard enough problem that there were effectively no metrics on preservation quality. People had little to no idea about how the procedures went and what the condition of the brain was. This caused all sorts of problems.

One example of the problems arising from the relative lack of metrics and feedback in cryonics is macroscale fractures. In 1983, multiple people who had been preserved were converted from whole body preservation to head-only preservation due to insufficient funding and the need to transfer them from one cryonics organization to another. This allowed Mike Darwin and colleagues to perform the first gross examination of the actual effects of human cryopreservation. They found extensive macroscale fractures throughout the tissues of the bodies. Although the team did not examine the brains, they reasoned that there was no reason for the brain to be spared from these fractures.

In hindsight, it is not surprising that fractures could occur, because this is a common phenomenon in cryobiology that occurs as a result of thermomechanical stress. That it took so long to anticipate or identify this basic type of severe damage is a sign of how little research – theoretical or experimental – had been done into preservation quality.

Thermomechanical stress fractures can occur due to differences in temperature in two locations of the material during cooling; this is a model of thermal stress simulation from (Li et al., 2017)

Another problem that arises from the “preserve now, research whether the method is reasonably likely to allow for revival later” compromise is that it creates incentives against doing methods research.

First, methods research becomes unnecessary for offering the procedure. A similar dynamic is seen in pharmacologic research, where most research is performed prior to regulatory approval – once something is approved, there is much less incentive to keep studying it.

Second, doing methods research on preservation procedures that have already been applied becomes scary. You might discover something suggesting that the preservations you have done thus far have been flawed. What if someone you loved was preserved with that method? This causes psychological resistance to empirical testing.

Third, from a cynical perspective, if members are already willing to pay for the procedure in the absence of research, doing methods research that might invalidate the procedure risks the loss of one’s income stream.

All of these incentives contribute to lock-in of whatever preservation methods were initially chosen. This happens in conventional medicine all the time: if a type of medical management or treatment starts to be performed before it is known whether it actually works, it can become entrenched regardless of whether it is effective.

Part of the problem is the double-edged sword of hope. Hope can be profoundly motivating and it is the basis of most of the interest in brain preservation. But the desire to maintain hope can prevent forthright inquiry into the quality of preservation and long-term protection.

Surrogate outcomes for brain preservation

Right now cryonics is able to operate in many countries because of what many consider a wonderful property of a liberal society: most things are legal by default unless the government says they are not. Brain preservation hasn’t been formally approved by the US FDA as a medical product, so it is practiced not as a type of medicine but as an anatomical donation to science, usually operating under the Uniform Anatomical Gift Act.

But let’s imagine that investigators were to run an actual human trial to test one or more types of preservation procedure and present the results to the FDA for approval as a medical procedure. The conventional outcome metric – revival – would be clearly impossible; otherwise it would be a trial of long-term suspended animation, not brain preservation.

In circumstances where the clinical outcome metric would take too long to adjudicate, the FDA accepts an alternative metric called a surrogate outcome. This is a biomarker, such as a molecular, histologic, or radiographic measure.

Usually, a biomarker needs to be validated in separate studies to predict the actual clinical outcome metric before it can be accepted for use in such a trial. But in some circumstances, a biomarker may be accepted as a surrogate outcome for the approval of a medical product even if it hasn’t been validated to actually predict clinical outcomes. This can happen when the biomarker is considered “reasonably likely to predict a clinical benefit,” meaning that it is supported by sound mechanistic or epidemiological reasoning. An approval can be made on this basis if data is collected in the post-approval setting to ensure that the reasonably likely surrogate endpoint actually predicts the clinical outcome.

If we stretch the timescale of this “post-approval” setting to decades or centuries, then arguably one or more surrogate outcomes of preservation quality could be used for FDA approval of brain preservation. This seems quite unlikely to happen anytime soon, in part because of societal prejudices, although one could imagine that it might happen in a few decades.

For now, this exercise is most beneficial in helping us to recognize that, in the absence of clinical outcomes, surrogate biomarkers are absolutely critical in brain preservation. Arguably, by its own standards, this is how society ought to be judging brain preservation. Surrogate biomarkers of preservation quality need to be carefully chosen and diligently measured.

What surrogate outcome metrics would be required for society’s stamp of approval is a separate question from what standards individuals would require before they think that brain preservation is a worthwhile endeavor for themselves and/or their families.

What type of preservation quality metrics are we actually talking about? Let’s get a sense of them so that the discussion is more grounded. We can distinguish four types of metrics: performance metrics, gross neuroanatomic metrics, neuroimaging metrics, and tissue biospecimen metrics.

1. Performance metrics.

Performance metrics are measurable aspects of the preservation process. One obvious performance metric is a quantification of the amount of time the brain spent in conditions under which it is likely to decompose, along with the estimated temperatures during those periods. This includes the amount of time with poor or no brain perfusion during the agonal period and the postmortem interval, weighted by the temperature during those periods, since time spent at cold temperatures causes less decomposition. Ideally, this time would be minimized, but it’s important to know either way.

Measuring the time period of decomposition is essential for improving outcomes. For example, in stroke care, there is a “door to needle” time that quantifies how long it takes from when a patient enters the hospital (“door”) to the start of the infusion of tPA (“needle”) (Man et al., 2020). Stroke teams track this to reduce the time it takes to administer the therapy, because faster treatment leads to better outcomes (Saver, 2006).
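To make this kind of performance metric concrete, here is a minimal sketch of how a temperature-weighted decomposition exposure could be tallied. It assumes a simple Q10-style model in which decomposition slows by a fixed factor for every 10 °C of cooling; the Q10 value, the time segments, and the function itself are illustrative assumptions, not a validated metric.

```python
# Hypothetical illustration: a temperature-weighted ischemic exposure metric.
# Assumes a simple Q10 model in which the decomposition rate changes by a
# fixed factor (here 2x) for every 10 degrees C relative to body temperature.

Q10 = 2.0           # assumed temperature coefficient (illustrative value)
BODY_TEMP_C = 37.0

def weighted_ischemic_minutes(segments, q10=Q10):
    """segments: list of (duration_minutes, mean_temperature_c) tuples
    covering the agonal period and postmortem interval."""
    total = 0.0
    for minutes, temp_c in segments:
        # Relative decomposition rate at this temperature versus 37 C.
        rate = q10 ** ((temp_c - BODY_TEMP_C) / 10.0)
        total += minutes * rate
    return total

# Made-up example case: 30 min of warm ischemia, 120 min on ice during
# transport, and 60 min of cold perfusion at 4 C.
case = [(30, 37.0), (120, 10.0), (60, 4.0)]
print(round(weighted_ischemic_minutes(case), 1), "equivalent warm-ischemia minutes")
```

A single number like this could then be tracked across successive cases, in the same spirit as the door-to-needle metric in stroke care.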

Performance metrics can also be data gathered during the preservation procedure. Obviously, this will vary significantly based on the type of preservation procedure. For cryopreservation procedures, it includes perfusion outcomes such as the color of the venous effluent. It also includes the cooling curve, which can indicate temperature changes suggestive of ice nucleation (Tan et al., 2021).

Model of the time vs temperature curve during the cooling of water; (Tan et al., 2021)
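To illustrate how a cooling curve might be screened for nucleation events, here is a small sketch that flags sudden temperature rebounds, the signature of exothermic ice formation like that shown in the model above. The detection threshold and the simulated readings are arbitrary assumptions rather than values from any real protocol.

```python
# Hypothetical sketch: flagging possible ice nucleation in a cooling curve.
# Freezing is exothermic, so nucleation typically appears as a sudden
# temperature rebound during an otherwise steady cooling ramp.

def find_rebounds(temps_c, threshold_c=0.5):
    """Return indices where the temperature rises by more than threshold_c
    between consecutive readings of a cooling curve."""
    events = []
    for i in range(1, len(temps_c)):
        if temps_c[i] - temps_c[i - 1] > threshold_c:
            events.append(i)
    return events

# Simulated readings (degrees C, one per minute): smooth cooling with a
# rebound at index 5, as would occur when supercooled water nucleates.
curve = [10.0, 5.0, 0.0, -4.0, -8.0, -1.5, -2.0, -5.0, -10.0]
print(find_rebounds(curve))  # -> [5]
```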

Performance metrics are relatively easy and important to measure. But they don’t tell us all that much about actual brain preservation quality, and shouldn’t be used as surrogate outcomes by themselves.

2. Gross neuroanatomic metrics.

In many preservation procedures, such as those that take the brain out of the skull, create a skull window, or use a burr hole, it is possible to view at least part of the brain tissue. If the brain can be visualized, then pictures or videos should be taken. The stiffness can potentially be measured via minimally invasive tactile sensors. In this way, proxies for the amount of decomposition and preservation quality, such as color changes, can be assessed.

Skull window creation to visualize a mouse brain; (Guo et al., 2017)

Another aspect of the gross anatomic condition of the brain that can be measured indirectly is the presence of fractures. It seems that this can be measured in part by acoustic measurements during cooling. The cryonics organization Alcor uses a device called the “crackphone” to measure fracture events, although it is unclear if it has been validated. While this is likely not specific to the brain, if there are fractures anywhere in the body, it is reasonable to assume that there are also fractures in the brain.

As with performance metrics, gross neuroanatomic metrics are important, but are not very dispositive of the preservation quality of the biomolecule-annotated connectome. So they could not be used as good surrogate outcome metrics by themselves.

3. Neuroimaging metrics.

In 2011, Alcor began to use CT scans of brain tissue to assess preservation quality. This was accidentally noticed to be possible in the course of a different experiment to measure fractures. Because of logistical considerations related to scanning the brain inside of a liquid nitrogen dewar, as far as I know, it has currently only been reported on people who opted for neuropreservation (i.e. head-only preservation).

Frozen brain tissue has a lower density on CT because ice is less dense than water (Sugimoto et al., 2016). Tissue cryopreserved with Alcor’s cryoprotectant has a higher density on CT because of its sulfur content. This provides very important feedback about the degree to which the brain has actually been cryoprotected and whether ice has formed.

A CT scan can also be used for other measures of brain tissue quality. For example, it can measure the amount of brain shrinkage, which is an almost inevitable consequence of current methods that use cryopreservation by vitrification.
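As a rough illustration of how CT attenuation could be turned into a preservation-quality readout, the sketch below classifies the mean Hounsfield units of a region of interest. The cutoff values are hypothetical placeholders chosen only to show the logic (water is defined as 0 HU, ice reads lower, and cryoprotectant-loaded tissue reads higher); they are not Alcor’s values or validated thresholds.

```python
# Illustrative sketch only: a crude three-way classification of mean CT
# attenuation in a brain region of interest. The cutoffs below are
# hypothetical placeholders, not validated values.

def classify_roi(mean_hu, ice_cutoff_hu=-40.0, cpa_cutoff_hu=80.0):
    """Classify a region of interest based on its mean Hounsfield units."""
    if mean_hu < ice_cutoff_hu:
        return "low density: suggestive of ice formation"
    if mean_hu > cpa_cutoff_hu:
        return "high density: suggestive of cryoprotectant uptake"
    return "intermediate density: indeterminate / water-like"

# Made-up readings for two regions of interest.
for roi_name, hu in [("frontal cortex", 110.0), ("thalamus", -65.0)]:
    print(roi_name, "->", classify_roi(hu))
```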

In my view, neuroimaging is a helpful surrogate outcome metric, because it actually measures brain tissue, but it is not sufficient on its own, because it cannot visualize the biomolecule-annotated connectome.

4. Tissue biospecimen metrics.

Ultimately, engrams are not encoded in the morphologies of macroanatomic features such as the size of the cerebral ventricles. Instead, engrams seem to be encoded by microanatomical structures that make up the biomolecule-annotated connectome, such as synapses and dendritic spines. Measuring these microanatomic features requires examination of tissue biospecimens. As a result, examination of tissue biospecimens is clearly the gold standard for surrogate outcomes.

There are two major potential sources of brain tissue biospecimens: whole brains that are not preserved with the long-term goal of revival and small biopsy samples from those that are.

When new methods are first being developed, it clearly makes the most sense to test these methods on brains where there is no intention of potential revival. These brain samples could either be from animal studies or from people who want to donate their brains to science but do not want to be potentially revived themselves. Because these whole brains can be fully dissected and evaluated, this source should make up the lion’s share of the information about the expected microscopic outcomes of a brain preservation procedure.

The other option is to perform a very small biopsy from the brain or upper spinal cord of people who are preserving their brains and do have the long-term goal of revival. At least in the initial application of brain preservation procedures in practical settings, it seems crucial to measure the tissue quality via biopsy samples during or after the preservation process.

As discussed in a previous essay, performing a small brain biopsy is a common neurosurgical procedure. It is exceptionally unlikely that this biopsy alone would lead to any significant damage to one’s memories. But it is likely that it would help with organizational accountability and methodological improvements that might allow for the consistent preservation of long-term memories.

I don’t want to take the idea of performing brain biopsies lightly. A brain biopsy is widely known as one of the most delicate diagnostic procedures in medicine. Biopsy procedures have often been resisted by cryonics organizations and by many cryonicists opting for preservation. It’s also true that performing central nervous system biopsy procedures will not always be possible depending on the resources of the organization.

Ultimately, if someone doesn’t want to preserve their brain with an organization that performs biopsies, then that is their choice. But when it is possible, performing tissue biopsies would allow stakeholders to get a much higher level of direct feedback about the microanatomic quality of the brain preservation procedure.

The viability vs structure debate

We have established that revival is not an appropriate outcome metric for brain preservation, because brain preservation is by definition not long-term suspended animation.

What should be the surrogate outcome metric, then? This has been a hotly debated topic over the years. There are two major camps: those who think it is best to optimize procedures for viability preservation metrics and those who think it is best to optimize for structural preservation metrics.

Viability and structure are, of course, not themselves brain preservation methods or even metrics. They are categories of metrics. The debate is over whether we should research and/or use brain preservation methods that optimize for viability metrics or for structural metrics.

Let’s define these terms. First, what is viability? It can be a slippery term.

For example, consider fetal viability. This was defined in Roe v. Wade as the “interim point at which the fetus becomes … potentially able to live outside the mother’s womb, albeit with artificial aid.” But this clearly depends upon factors that affect the quality of medical care for that fetus, including the level of available medical technology, which is likely to change over time and will differ in different areas of the world. Randomness in outcomes due to our limited understanding of biology also plays a role in fetal viability, as it does in all areas of medicine. The “potential” point of fetal viability could be defined as the point when all fetuses will be able to survive outside of the uterus, or when fifty percent will, or when one fetus has been shown to survive, or when no fetus has yet survived, but a statistical model based on the available data suggests that a fetus has greater than a probability ε (epsilon) of surviving, where ε is some extremely small number. It’s hard to know where to draw this line.

Although the topic is much less controversial, similar challenges are found in discussions of viability following cryopreservation. For example, one definition of cell viability is whether cells can sustainably divide in cell culture. But the cell division metric in cryopreservation is tricky for a reason that Goetz and Goetz pointed out in 1938: this type of cell viability experiment only tells us whether at least one cell has survived the process (Alexander Goetz et al., 1938). But the survival of one cell could be due to random factors that don’t affect most of the cells in the tissue. Our question in brain preservation is whether all or nearly all of the cells are viable following the procedure.

Another problem is that we might call cells treated with certain chemicals “non-viable” today, correctly, because they would not be able to divide and multiply in cell culture. However, this doesn’t mean that technology won’t advance in the future to render them able to divide and thus viable. Thus, viability metrics are dependent on our current understanding and technology. This potential for improvements in future technology is the hope that underlies all of brain preservation.

Instead of cell division, contemporary metrics for viability tend to assess other functional outcomes, such as the presence of active metabolism in cells or electrophysiological responses in the brain tissue. The most useful viability metric relevant for cognitive functions seems to be retaining global electrophysiology functions across the brain, such as those seen on electroencephalography (EEG).

Structural metrics, on the other hand, are easier to define. We can define a structural metric as an observable, static property of brain tissue, for example as seen under a microscope or using a biomolecule profiling method.

Before we go further, I should say that another approach is to optimize for both structural and viability metrics at once. However, there are deep methodological trade-offs between optimizing for these categories of metrics that will be explained in later essays, so I do think that one must choose given our current technology.

Arguments for focusing on the preservation of viability metrics

First, we already know that preservation methods which retain measures of viability, such as vitrification by cryopreservation, can be reversible on small biospecimens via simple rewarming. We can use cryopreservation methods to preserve and revive human cells, human tissues, and small animals. Vitrification by cryopreservation has already been shown to preserve a type of long-term memory in C. elegans (Vita-More et al., 2015). There are many people walking around today who were previously cryopreserved as embryos.

There is a sense in which maybe all that needs to be done is to scale the existing cryopreservation procedures to the size of the human brain or whole body. This seems like an engineering problem that may not be all too difficult to solve.

Second, functional viability tests such as electrophysiology have the potential to “screen off” the need to guess about what the structural components of meaningful neural activity are. In other words, if meaningful neural activity can be directly shown to be present in reanimated brain tissue, then we know that it must have been retained by the brain preservation procedure. This could potentially render several of the earlier essays about the structural correlates of engrams superfluous and avoid the seemingly endless debates about this topic. There would be far fewer unknown unknowns in brain preservation if one were able to demonstrably preserve functional viability metrics such as global electrophysiological patterns.

We wouldn’t even need to understand how a viability-optimizing preservation method works, if it works well enough. In medicine, we are often able to use interventions to improve health outcomes before we understand how those interventions work mechanistically – arguably, this is the more common scenario.

Third, in brain preservation procedures that focus on viability, it is easier to see how to iteratively improve upon those methods towards long-term suspended animation. Because a long-term suspended animation procedure is likely to be much more appealing to society than an uncertain brain preservation procedure, as research advances closer to this point, it seems likely that more societal interest and investment will follow.

Fourth, brain preservation methods focusing on viability are more easily translated to other areas in biomedicine, such as reproductive medicine, organ transplantation, tissue engineering, agriculture, and many other fields. As a result, there is broader societal interest and investment in preservation methods focusing on viability metrics.

Arguments for focusing on the preservation of structural metrics

First, while it would be great if long-term suspended animation were possible, it’s not. Long-term suspended animation advocates often seem to think that it is right around the corner, but people have been saying that since the 1960s (Prehoda, 1969). It’s hard to tell how far away we are. We can’t yet reversibly preserve a single human brain region with functional properties intact, let alone a whole human brain, let alone a whole human body. There’s no guarantee that such a procedure would ever be practical, especially in realistic cases with agonal or postmortem damage.

Second, many commonly used viability metrics are potentially irrelevant to the preservation of long-term memory recall. For example, individual cells could theoretically retain their metabolic activity – a viability metric – even if the structural connectivity patterns between them that seem to be necessary for long-term memory are lost. The information for memories has much more to do with patterns of connectivity between cells than it does with, say, ribosomes, yet connectivity is not strictly required for cell metabolism while ribosomes are.

Even if local or global electrophysiology could be produced on reanimated tissue, it would still be hard to tell if engrams are preserved. Anything short of actually reviving an organism with long-term memory recall intact would require guesswork based on our knowledge of neuroscience about whether engrams are still present. And any procedure to accomplish that would meet the criteria of long-term suspended animation. So in the absence of long-term suspended animation, any procedure that optimizes for the preservation of viability metrics is going to be shrouded in uncertainty, just as with procedures that optimize for the preservation of structural metrics.

Third, structural preservation metrics are more straightforward than viability metrics. The idea behind a microscope is pretty simple. You literally look at the tissue and see whether the most likely substrates of engrams look like they normally do.

Fourth, preservation methods optimizing for structural metrics have the potential to be much cheaper. For example, it’s much more plausible to imagine a method that optimizes for structural preservation being compatible with room temperature storage. This could help significantly with financial access to brain preservation.

Summary of arguments for focusing on viability or structural metrics

| Consideration when using procedures that optimize for this class of metrics | Viability metrics | Structural metrics |
| --- | --- | --- |
| Difficulty in defining the goal of the procedure | High; “viability” is a notoriously slippery concept, and reproducible measurement of viability is challenging | Moderate; most of the difficulty is in choosing which structural metrics are best to focus on |
| Reversible preservation procedures possible on small biospecimens | Yes; e.g. in embryos, brain cells, ovarian tissue, and small animals | No; methods that best optimize for structural metrics are not reversible with today’s technology, even on small biospecimens |
| Need to debate what is required to retain the information for valued cognitive functions, such as engrams | Today, yes; theoretically, this debate may be avoidable in the future if reversible preservation can be achieved with an excellent proxy of memory recall function intact | Yes, unless an actual revival method becomes possible |
| Ability to iteratively improve the method until long-term suspended animation is possible | Potentially, yes; methods preserving viability are much more plausibly on a path towards this, although whether it will ever be possible is still unknown | No; much less plausible |
| Societal incentive to develop procedures aside from brain preservation | High; numerous potential applications of related technologies in organ transplantation, tissue engineering, reproductive medicine, etc. | Low to moderate; helpful for certain niche pathology applications or biology studies, but not as much broader societal interest |
| Potential to achieve low-cost methods | Low; it is difficult to imagine how this could be possible without an expensive procedure and continuous low-temperature storage over the long term | High; it is possible to imagine a procedure compatible with room temperature storage |
| Example of a high-quality outcome metric on brain tissue | Retaining global electrophysiologic functions across the brain, such as those seen using electroencephalography (EEG) | Retaining nanoscale-level topological arrangements across the brain, such as those seen using volume electron microscopy |
| Already a procedure to achieve this high-quality outcome on large brains today | No | Yes; aldehyde-stabilized cryopreservation |

Inclusionist perspective

This is an important debate with real consequences. But as with many debates within cryonics, it often ends up so heated that the groups involved forget that they are both tiny compared to general society, which at best doesn’t care about cryonics and at worst is trying to impede it. So as an inclusionist, I try to respect both perspectives, and often wish that we could argue about it less. I think both perspectives have merit.

It is often said that brain preservation procedures optimizing for structural preservation metrics would only be compatible with the revival method of whole brain emulation. Because all potential revival methods are highly speculative at this point, it is hard to know anything for sure about this topic, but this does not seem to be true. For example, both pure cryopreservation and aldehyde-based approaches seem to be compatible with the potential revival method of molecular nanotechnology-based repair. In fact, one of the key conceptual innovators in molecular nanotechnology-based repair, Eric Drexler, was among the first to propose an aldehyde-based approach for preservation, in his 1986 book Engines of Creation (Drexler, 1986).

Even though structural preservation is not as good with brain preservation methods that attempt to optimize for viability metrics, some make the argument that it may be good enough. We know there is some leeway, given the likely capabilities of future technology to infer the original states. For example, when asked about distorted cell membranes in cryopreserved brain tissue at the conference Biostasis 2020, one researcher pointed out that aldehyde-stabilized cryopreservation had better ultrastructure preservation than pure cryopreservation approaches, but argued that it didn’t matter because sufficient structural information was still present in the brains preserved with pure cryopreservation. Personally, I do not agree, because I am uncertain about what degree of structural brain preservation is “good enough,” but I can see why others might share this perspective.

Viability or structural preservation in research or practice

I find the arguments for pursuing brain preservation procedures that optimize for viability metrics in a research setting to be reasonable. But in terms of what type of metric we should optimize for when preserving brains in practice today, to me there is no contest. All that we can achieve on human-sized brains today is structural preservation. The quality of outcomes on viability metrics that I have seen with all currently available brain preservation methods is very poor. It seems misguided to me to focus on improvements in viability-focused preservation technology – which might be possible in the future – when we are talking about what we can achieve with preservation today.

As a result, I prefer to use structural preservation metrics to compare between current methods for preserving brains. I like that they are straightforward, relatively easier to measure, and possible to accomplish today. So now let’s shift gears and talk about the most useful structural preservation metrics to target.

What structural information should we focus on preserving?

In this section, I provide my current best guesses about what makes the most sense to try to preserve. These are provisional and subject to change based on new information.

The abstract structural features I focus on are:

Synapse shape and myelin shape are widely studied special cases of cell membrane shape. I will consider them separately because each has its own dedicated literature.

The biomolecule features I focus on are:

For the first five biomolecule features, which are different types of biomolecules, I focus on their composition and location information content. Biomolecular conformation states, as discussed in a previous essay, should be predictable based on composition and location information; however, they may be more liable to be lost in a brain preservation procedure, so they are considered separately.

I consider chromatin separately because it is a combination of biomolecules, including nucleic acids, proteins, small molecules, and lipids (Fernandes et al., 2018). It is also a widely studied structure in cells.

On a lower level, chromatin is primarily made up of histone octamers (which are proteins) and strands of DNA, which are organized into nucleosomes:

Low-level chromatin and nucleosome organization principles; (Morgan, 2007)

On a higher level, chromatin is organized in the nucleus into territories by chromosome, organized into primarily open/transcriptionally active (compartment A) and transcriptionally repressed (compartment B) regions, and further organized into topologically associated domains (TADs) (Bak et al., 2021):

High-level chromatin organization principles; (Bak et al., 2021)

Chromatin information content consists of cell-to-cell heterogeneity in the way that these structures are organized. Chromatin state variability helps to dictate whether particular DNA regions are transcribed into RNA in a given cell.

Chromatin states do not directly participate in electrochemical information flow through the connectome. However, because they regulate gene expression in individual cells, chromatin is a potentially powerful inference channel if aspects of the biomolecule-annotated connectome are damaged – which is why it’s included in this discussion.

What structures are most important to directly test the preservation of?

Of all the structures mentioned above, cell membrane morphology seems to me clearly the most valuable structural feature to directly measure in brain preservation.

First, there is the straightforward reason that much of the electrochemical information flow in the nervous system occurs at cell membranes. To a large extent, this is where the action is.

Second, cell membranes are relatively easy to measure, for example via various staining techniques. They also serve as a good proxy for the states of other gel-like structures in cells, such as organelles.

Measuring cell membrane shape gives us a good sense of whether we have achieved a cell membrane level of connectome detail. As discussed in a previous essay, my guess is that this level of connectome detail will not be sufficient to describe engrams. So why do I think cell membranes are the most important type of structure to measure?

Part of my rationale is that once you have morphology data, you can use biophysical reasoning to assess how the biomolecules associated with the gel-like networks that make up the abstract structural features visualized under the microscope are likely to have been preserved.

On the other hand, it’s very difficult to make reasoned biophysical arguments about how morphology will change in a brain preservation procedure in the absence of actual data. A priori, based on biophysical mechanisms alone, a preservation procedure could result in excellent in vivo-like states or it could result in near-complete destruction/liquefaction of cellular morphologies. The effects of different preservation procedures on neural morphology in different situations are simply empirical questions.

Considerations on biomolecular preservation

As discussed in a previous essay, evidence suggests that it is the distributed activity of neuronal ensembles communicating through the biomolecule-annotated connectome that instantiates long-term memory recall. What we need to do is to preserve enough information in the brain that we can predict how this communication occurs. So the question of biomolecular preservation becomes: What biomolecular information is needed to predict neuronal communication patterns above and beyond cell morphology and connectivity?

As far as I can tell, the answer to this question is not known. This is a major source of uncertainty in brain preservation.

Given our uncertainty, all we can do is make our best guesses. My own best guess is based on contemporary computational neuroscience models.

To model neuronal communication, we want to know – to a sufficient degree of accuracy – the transfer function of any individual neuron (Carlu et al., 2020) (Cook et al., 2007). The transfer function of a single neuron is the set of rules that determines how its presynaptic excitatory and inhibitory inputs are converted to its outputs. Those outputs come in the form of the firing rate of its action potentials and, ultimately, the current that reaches the presynaptic terminals it forms onto other neurons.

Modeling the transfer function of a neuron requires accurately predicting the postsynaptic potential functions for its synapses, how those synaptic currents combine with others in the active dendrite, how the result is summed and compared to a threshold at the axon hillock, how rapidly the action potential travels down the axon and into its presynaptic terminals, and any other complex electrophysiology such as bursting (Spruston et al., 2016).
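To give a sense of what modeling a transfer function means at its very simplest, here is a leaky integrate-and-fire sketch that maps a constant input current onto an output firing rate. It is a deliberately minimal stand-in rather than one of the models cited above, and every parameter value is a generic, textbook-style assumption.

```python
# Minimal stand-in for a neuronal transfer function: a leaky integrate-and-
# fire unit mapping a constant input current to a firing rate. Real models
# add dendritic nonlinearities, synaptic dynamics, bursting, and more.

import math

def lif_firing_rate(input_current_na, tau_ms=20.0, resistance_mohm=10.0,
                    v_rest_mv=-70.0, v_threshold_mv=-50.0, refractory_ms=2.0):
    """Return the firing rate (Hz) for a constant input current (nA),
    or 0 if the membrane never reaches threshold."""
    v_inf = v_rest_mv + resistance_mohm * input_current_na  # steady-state voltage
    if v_inf <= v_threshold_mv:
        return 0.0
    # Time to reach threshold from rest, from the membrane equation solution.
    t_spike_ms = tau_ms * math.log((v_inf - v_rest_mv) / (v_inf - v_threshold_mv))
    return 1000.0 / (refractory_ms + t_spike_ms)

for current_na in [1.0, 2.5, 5.0]:
    print(f"{current_na} nA -> {lif_firing_rate(current_na):.1f} Hz")
```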

Theoretically, a lot of biomolecules could be necessary for specifying the transfer function. For example, computational models suggest that dendritic Na+ spikes are critical to determine the input/output properties of an active dendrite, which in turn seem to be dependent on the underlying voltage-gated Na+ channels in the dendrite (Lea Goetz et al., 2021). But that doesn’t necessarily mean that we need to directly preserve voltage-gated Na+ channels, because the information for them is likely able to be predicted based on other biomolecular information, and all we need is the information about them, not the channels themselves.

Given that we don’t know which biomolecules, if any, are necessary to directly preserve and map in order to specify the neuronal transfer function, it doesn’t seem wise to put very much weight on the preservation of particular biomolecules when evaluating a brain preservation procedure, especially because our current biomolecule mapping techniques are relatively limited. Instead, I think it makes the most sense to accept a degree of uncertainty in our reasoning process and consider the preservation of classes of biomolecules when evaluating brain preservation procedures.

If a brain preservation procedure is a standard technique in neuroscience or a slight variation of one, then we can use our knowledge of the preservation mechanisms and a review of the existing literature to determine what classes of biomolecules we expect to be preserved.

Even if we expect that a particular class of biomolecules is generally preserved based on this sort of reasoning, it’s still possible that there could be exceptions where certain biomolecules in this class are not preserved. However, because it is very unlikely in my view that any single type of biomolecule is necessary for engrams on its own, the loss of a single type of biomolecule almost certainly wouldn’t be sufficient to destroy engram information content. Even if a small subset of the biomolecules is lost, damaged, or inaccessible to our profiling methods, if the overall morphology is still present, then my best guess is that other biomolecules will likely be available to infer the lost information content to a sufficient degree of accuracy.

The ability to do biophysical reasoning and refer to the existing literature is a major advantage of using standard techniques that are already used in neuroscience. To the extent that the brain preservation procedure used is non-standard, biophysical reasoning becomes less trustworthy, as there will be less literature and we will likely understand the preservation mechanisms less well. So in the case of non-standard techniques, it seems more essential to directly measure the preservation of biomolecule classes.

As discussed in a previous essay, another advantage of using a method with more biomolecular preservation capability is that biomolecules are an inference channel for morphology if cell membranes or other abstract structural features are damaged. But this is likely only possible if morphologic structures are partially damaged, such as if they look “hazy” or indistinct under the microscope; in this case, mapping the diffusion of the decomposed biomolecules that typically made up that structure might be helpful. On the other hand, if the constituent biomolecules of an abstract structural feature have totally liquefied and either leaked out of the tissue or have become jumbled up into a soup, then reconstructing the original state of that abstract structural feature seems like a much more difficult – likely impossible – problem to solve based on our current understanding of physics.

What are the best ways to assess the preservation of these structures?

“Faith” is a fine invention
For Gentlemen who see!
But Microscopes are prudent
In an Emergency!
- Emily Dickinson

Volume electron microscopy

Volume electron microscopy, or volume EM for short, is a technique that starts with a large block of preserved tissue and then uses systematic techniques to section and image it under the electron microscope.

Overview of a volume EM brain tissue processing pipeline used to image a Drosophila brain; (a) shows an x-ray micro-CT image; (c) shows a mounted hot knife section; (d) shows FIB-SEM imaged volumes; Figure 5 from (Xu et al., 2017)

Volume EM is clearly one of the best methods available to measure the structural preservation of the brain, because it uses the highest-resolution imaging modality we have (electron microscopy) and applies it at large scale, so that we can look at many cells.

The Brain Preservation Foundation prize

Perhaps the best way to see how volume EM can operate as a structural preservation metric is by looking at the history of the Brain Preservation Foundation (BPF) prize. I was previously a volunteer at the BPF and was therefore able to observe how this played out. In my opinion, one of the key lessons from the BPF prize is how a focus on metrics can help people to re-examine their approaches to brain preservation.

Consider the use of cold temperature for preservation at all. In the early 1960s, Ettinger and Cooper originally suggested the use of “freezers” and cold temperature storage, which is what caught on (Ettinger, 1964) (Duhring, 1962). Today, the field is usually called “cryonics.” But not all of the possible methods for brain preservation must necessarily employ cold temperatures. Ben Franklin’s proposal (“being immersed with a few friends in a cask of Madeira”) certainly did not. Plastic embedding might not. However, in the absence of clear metrics for what one is trying to achieve, it is hard to know which of the possible methods is best.

Along comes the BPF prize. In 2010, the BPF offered two innovative prizes: one (~$25,000) for a method that could preserve and maintain the connectome in a small mammalian brain, such as a mouse, and a larger prize (~$75,000) for applying such a method to a larger mammalian brain, such as a pig. Primarily, the focus was on a cell membrane or ultrastructural level of connectome detail.

The BPF proposed to visualize the connectome in preserved brain tissue using electron microscopy; the organization’s president, Ken Hayworth, is a pioneer in this technique and was able to use his expertise to perform the evaluations. The way the prize worked is that competitors preserved brains and sent them to the BPF for evaluation. The key points that Hayworth and the other evaluator, Sebastian Seung, assessed were:

  1. Whether there was “well preserved ultrastructure” on the surfaces of cut sections of the brain block. “Well preserved ultrastructure” is a holistic measure that includes whether cell membranes are intact and visible, organelles are intact and in their typical locations, and synapses contain visible vesicles and synaptic densities.

  2. Whether it was possible to trace the connectivity from cell to cell in randomly sampled volumes of the brain block. The way that Hayworth evaluated this was a stepwise process (Ken Hayworth, personal communication). First, he would pick a random dendritic spine synapse in a volume electron microscopy image. On the postsynaptic side of the synapse, he would attempt to trace the dendritic spine back to its parent dendrite; if this were not possible, then it would be evidence that the connectivity was disrupted. Next, on the presynaptic side of the synapse, he would attempt to trace the axon to the outside of the volume. Axons travel further, so preserving and tracing them was the more challenging part of the connectivity test. (A toy sketch of this tracing logic follows below.)
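Purely to make the stepwise logic concrete, here is a toy sketch of the traceability check using an invented graph of reconstructed segments. The data structure and names are hypothetical, and the actual prize evaluation was done by manual expert tracing, not by code like this.

```python
# Toy sketch of the two-step traceability check. Nodes are hypothetical
# reconstructed segments; edges are continuity links recovered from images.

from collections import deque

def connected(graph, start, goal):
    """Breadth-first search for a continuity path from start to goal."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Invented segment graph for one sampled synapse: the spine should trace back
# to its parent dendrite, and the axon should trace out to the volume boundary.
graph = {
    "spine_head": ["spine_neck"], "spine_neck": ["dendrite_shaft"],
    "bouton": ["axon_1"], "axon_1": ["axon_2"], "axon_2": ["volume_boundary"],
}
spine_ok = connected(graph, "spine_head", "dendrite_shaft")
axon_ok = connected(graph, "bouton", "volume_boundary")
print("synapse fully traceable:", spine_ok and axon_ok)
```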

In addition to passing this connectome preservation test, the other major requirement for the winning procedure was that it would allow long-term storage for at least 100 years. There was no stipulation that required the preservation of any type of viability metric.

From my perspective, the simplicity of this prize allowed for a “back to basics” moment for the field. Prior to this, aside from a few voices over the years who had suggested alternative methods (primarily fixation), the majority of people in cryonics assumed that pure cryopreservation was the only real option.

Three groups of people submitted preserved brain tissue for electron microscopy evaluation, each with different preservation methods.

1. The first method submitted for evaluation was a pure chemical preservation method called BROPA (Mikula et al., 2015). This method was submitted by Shawn Mikula, working with Winfried Denk in Heidelberg, Germany.

Up until BROPA was proposed, the tissue staining and embedding methods most compatible with electron microscopy only worked on small pieces of brain tissue. This is in part because the chemicals used, such as osmium, have poor diffusion properties.

Mikula’s method set out to solve this problem by improving the ability of the embedding chemicals to penetrate deep into tissue. One of Mikula’s innovations was adding a new chemical to the preservation procedure, formamide, which prevented the formation of barriers to diffusion.

The use of BROPA to preserve small mammalian brains, visualized with x-ray micro-CT (Mikula, 2016)

Mikula submitted three mouse brains to the BPF for evaluation. One submission was rejected because there were cracks in the cortex that were seen when imaged via x-ray. These developed after the resin dried. The second submission was rejected because the osmium penetration was uneven, leading to inferior preservation in deep interior regions such as the thalamus. The third submission was evaluated in-depth and found to have good preservation in parts of the tissue:

High preservation quality in one part of Mikula’s submission

However, in other parts of the brain, apparently deeper regions, the tissue that should have contained myelinated axons was obliterated:

Poor preservation quality in another part of Mikula’s submission

These destroyed areas would prevent high-quality tracing of the full connectome. As a result, this submission was also rejected.

According to BPF co-founder John Smart, while the diffusion problem and the likely associated obliteration could have potentially been solved given more effort, the cracking problem was arguably the more fundamental reason that the prize submission was rejected. Under the prize rules as written, Mikula’s mouse brain submissions could not be judged consistently well preserved because of the cracking.

Smart called this a “minor tragedy”, because cracks would likely not be a significant problem for revival methods that use software-based inference of the original neural states as part of their procedure. The software would almost certainly be able to stitch cracked brain tissue back together. But regardless, the method was not accepted as the winner.

The other problem with BROPA is that it relied on immersion in chemicals that have poor diffusion properties, such as osmium. Because of this, scaling the method up from the size of a mouse brain to a pig brain would be difficult, not to mention the roughly sevenfold additional scaling required to go from a pig brain (180 grams) to a human brain (1300 grams). So while Mikula’s method may have won the small mammal prize with slight improvements or a change in evaluation criteria, it likely wouldn’t have won the large mammal prize or have been possible on intact whole human brains regardless.

2. The second method submitted for evaluation was a pure cryopreservation protocol. Cryopreserved rabbit brains were submitted for evaluation by a team from 21st Century Medicine, in Fontana, California. During this procedure, cryoprotective agents were perfused into the brains of rabbits as they were cooled down to very low temperatures.

This method had previously shown good functional preservation results in thin (0.5 mm) slices of brain tissue. Specifically, they reported that they had recovery of some electrophysiological function upon rewarming, including a long-term potentiation response (Fahy et al., 2013).

However, in whole rabbit brains, the method ran into problems. The main problem was that they couldn’t get the cryoprotective agent into and out of the whole brain as easily as they were able to in the tissue slices. This caused significant dehydration and shrinkage, which made the brain tissue look warped and difficult to assess with volume electron microscopy.

3. The third method submitted for evaluation was also performed at 21st Century Medicine. It was the technique of aldehyde-stabilized cryopreservation. To simplify a complex procedure, in aldehyde-stabilized cryopreservation, the fixative glutaraldehyde is first perfused into the tissue, and then cryoprotective agents are perfused into the tissue. This allows for much better structural preservation upon low-temperature cooling, avoiding the significant dehydration and shrinkage seen in the pure cryopreservation procedure.

Robert McIntyre, who spearheaded the development of this method at 21st Century Medicine, actually came up with the idea in 2013 when he was volunteering for the BPF and trying to make a visualization of the different possible approaches.

This technique was able to win both BPF prizes: the small mammal prize in 2016 and the large mammal prize in 2018.

Aldehyde-stabilized cryopreservation BPF large mammal prize evaluation data, from 2018

Although it may seem obvious with the benefit of hindsight, I don’t think anyone predicted that something like aldehyde-stabilized cryopreservation would be the method that would win the BPF prizes. Similar approaches had been published before, but they weren’t really on the radar as brain preservation methods. This shows the clear value of the volume electron microscopy metric and the BPF prize.

Limitations of volume electron microscopy and the BPF prize

“A man with one watch always knows what time it is. A man with two watches is never sure.” - Segal’s law

One of the other advantages of the in-depth investigation of the BPF prize is that it helped illuminate some of the limitations of volume EM and of the prize itself. Each of these limitations suggests an alternative structural metric we can use in brain preservation:

1. Volume electron microscopy is expensive and complex to perform. Instead, one could consider using light microscopy, which trades off imaging resolution for decreased cost and increased accessibility, while still potentially allowing for sufficient visualization of cell membrane morphology.

2. Individual biomolecules were not directly mapped. Instead, one could consider targeted biomolecular mapping studies as a complementary metric to directly investigate biomolecule preservation.

3. The brains submitted for the BPF prize evaluation were preserved in ideal experimental settings. The procedures were performed on healthy animals, with no significant agonal state or postmortem interval prior to preservation. Instead, one could consider testing preservation quality in more realistic conditions that are relevant in human brain preservation.

4. The prize evaluation was done using qualitative assessment. Instead, one could imagine a more quantitative approach, although this would require some basic research to develop such quantitative metrics first.

5. Other than the requirement for a procedure that could maintain the preserved structure for at least 100 years, the BPF prize did not specify any additional constraints beyond high-quality volume EM imaging results. Instead, one could imagine adding additional constraints, such as on the cost, storage fidelity, and/or environmental impact of the brain preservation method.

Light microscopy

There is an argument to be made that for a useful structural preservation metric, it may not be necessary to go down to the level of resolution provided by electron microscopy.

Light microscopy, which is much cheaper and more accessible, could be a good enough proxy for assessing information-theoretic death in brain tissue. It would tell you if cell membranes are generally intact and whether the cells are generally in the location where you expect them to be. With certain staining methods, it would also be possible to achieve coarse-grained information about synapse preservation.

Compared to light microscopy, the electron microscopy level gives you: (a) detailed information about synapses, which seem to be fairly stable postmortem anyway; (b) the precise textures of cell membranes, which probably aren’t crucial unique stores of information from an ordinary survival perspective; and (c) organelle shape information, which does not seem as critical for engrams as cell membrane morphology and may have more available inference channels.

This is relevant in part because light microscopy is a more widely used technique, so when evaluating the literature about different brain preservation methods, there is much more light microscopy data available.

At the very least, light microscopy could be used as a screening test to rule out brain tissue as clearly not sufficiently preserved prior to spending the resources on electron microscopy.

Biomolecule profiling

Theoretically, a brain preservation method could have led to the loss of a large percentage of the underlying biomolecules and still won the BPF prize. That’s because ultrastructure imaging relies on abstract structural features, which tend to be made of many different types of biomolecules.

In the case of aldehyde-stabilized cryopreservation, we have theoretical reasons to think that the majority of biomolecules will also be preserved. So directly testing biomolecule preservation does not seem mission-critical given our limited resources. But ideally, brain tissue preserved with any method would be tested for biomolecule preservation in addition to looking at the ultrastructure under electron microscopy.

Biomolecule mapping could be done with any number of techniques, such as immunohistochemistry, spatial transcriptomics, or immuno-electron microscopy.

The most common method for biomolecule mapping is probably immunohistochemistry. Immunohistochemistry uses antibodies that target specific epitopes on proteins or other biomolecules. The antibodies that are produced are ideally specific and only bind to the biomolecule of interest. The tissue is then stained, and the antibody-epitope binding can be detected using light microscopy.

A problem with immunohistochemical staining is that it’s difficult to interpret a negative result, because the biomolecule could be truly lost or merely unable to be visualized. Unfortunately, it’s hard to distinguish between these possibilities, although sometimes antigen retrieval techniques can tell us that the problem is one of accessibility rather than the loss of the actual biomolecule’s information content (Shi et al., 2011). If the biomolecule is merely unable to be visualized with today’s technology, that is much less of a problem, since we expect biomolecule mapping technology to be dramatically improved, and not limited by today’s constraints, by the time an eventual revival might be possible.

Related to biomolecule profiling, the BPF prize didn’t really take into account the possibility of structural inference in damaged tissue. I currently struggle to see how to account for the possibility of connectome inference in a rigorous way, but it seems important to figure this out. One way to do so might be via mapping of biomolecules that seem to have high structural information content, such as protocadherins.

Realistic brain preservation settings

It is a common refrain for seemingly intractable diseases like cancer or Alzheimer’s that we have already found the cure for them – in mice. Actually, in most cases, this is not entirely true. But it is certainly the case that across the biomedical sciences, it is much easier to show that a treatment or procedure is helpful in a controlled experimental setting such as an animal model than it is to do so in the real world on actual patients. We must keep in mind that this will certainly be true in brain preservation as well.

From the perspective of brain preservation, laboratory animal data is biased in numerous ways. Animals can have controlled deaths with no agonal or postmortem period; they do not die of compromising conditions such as a brain bleed; they tend to be younger so their blood vessels are in better condition and more amenable to perfusion; and if the procedure does not work on one laboratory animal, it can simply be performed on another one.

Any promising data we identify in laboratory animal studies should be considered very much an upper bound on what is likely to be possible in practical human settings. In fact, our default expectation should be that results from animal studies will not translate to realistic human cases until demonstrated otherwise.

As a result, a major confound of the BPF’s evaluation metric is that it was measured in an ideal research setting in laboratory animals rather than in practical, real-world preservation cases. I don’t want to dismiss the importance of the BPF’s prize. The experimental setting was an amazing start, but it doesn’t go far enough. We also need to see whether structural preservation is sufficient in real-world cases.

One caveat I want to point out here is that the distinction I have drawn between ideal experimental settings and realistic settings is not a strict dichotomy; in reality there is a spectrum between the two. For example, in laboratory animals, one could imagine simulating a postmortem period prior to the start of the preservation procedure. But in the end, surrogate outcomes in realistic cases are the most important metric of a preservation procedure.

One major upside of fixation methods is that they have been shown to lead to high-quality structural preservation in the neuroscience literature in general and in the human brain banking literature in particular. The latter could be considered analogous to realistic cases from a brain preservation perspective.

Quantitative scoring

The BPF prize evaluation was done in a non-quantitative way. Ken Hayworth and an additional evaluator looked at the images and decided that the ultrastructure was well-preserved and the connectivity was intact. But just because it looks good enough to a human eye doesn’t mean that there aren’t some subtle problems with the brain tissue. Additionally, there is more of a risk of bias in human measurements.

Ideally, the metrics for quality would be more quantitative and more objective. This could potentially be achieved with machine learning-based computer vision, but it is a difficult problem. Most histology scoring is semi-quantitative at best (Lindblom et al., 2021):

Analysis of histological sections is difficult and by necessity subject to semiquantitative scales, not always corroborated by international classification and consensus standards.

What does semi-quantitative mean? Here is the NIST definition:

Use of a set of methods, principles, or rules for assessing risk based on bins, scales, or representative numbers whose values and meanings are not maintained in other contexts.

There have been some attempts at histology quality scoring systems. For example, here is a semi-quantitative light microscopy scoring system described by Kiernan (2009) and applied to brain tissue:

The quality of fixation was considered at two levels, as discussed by Gabe (1976). Microanatomical fixation was considered optimal when there was no differential shrinkage of cells, tubules, extracellular material, neuropil or other components. Thus, badly fixed specimens contained various artifactual spaces with tears and breaks in epithelia and other structures. Cytological fixation was optimal when interphase nuclei exhibited their characteristically organized patterns of heterochromatin, the cytoplasm was intact without fractures or coarse flocculation, and erythrocytes appeared as intact biconcave discs. With the scoring system described here, microanatomical and cytological fixation are assessed separately.
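To make the spirit of this rubric concrete, here is a minimal sketch of how the two axes might be encoded for consistent reporting. The 0–3 bins and their descriptions are hypothetical paraphrases for illustration, not Kiernan’s published point values:

    MICROANATOMICAL_BINS = {
        0: "widespread artifactual spaces, tears, and breaks",
        1: "frequent differential shrinkage between tissue components",
        2: "occasional shrinkage spaces, structures otherwise in register",
        3: "no differential shrinkage of cells, neuropil, or extracellular material",
    }

    CYTOLOGICAL_BINS = {
        0: "nuclei and cytoplasm unrecognizable or coarsely flocculated",
        1: "cytoplasm fractured, chromatin pattern largely lost",
        2: "minor cytoplasmic fracturing, chromatin pattern mostly retained",
        3: "organized heterochromatin, intact cytoplasm, intact erythrocytes",
    }

    def score_section(microanatomical: int, cytological: int) -> dict:
        """Keep the two axes separate rather than collapsing them into one number."""
        return {
            "microanatomical": (microanatomical, MICROANATOMICAL_BINS[microanatomical]),
            "cytological": (cytological, CYTOLOGICAL_BINS[cytological]),
        }

    print(score_section(3, 2))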

It’s also possible to imagine a more quantitative connectivity metric in volume electron microscopy data. For example, using automated tissue segmentation and connectivity tracing algorithms, one could quantify the number of ambiguous axon tracing errors in a block of preserved brain tissue.
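Here is a minimal sketch of what such a metric might look like, assuming a segmentation and tracing pipeline that reports its ambiguous decisions and proofreading corrections; the input fields and any acceptable error rate are hypothetical:

    def tracing_error_rate(ambiguous_decisions: int,
                           split_errors: int,
                           merge_errors: int,
                           reconstructed_path_length_mm: float) -> float:
        """Tracing problems of any kind per millimeter of reconstructed axon path."""
        total_problems = ambiguous_decisions + split_errors + merge_errors
        return total_problems / reconstructed_path_length_mm

    rate = tracing_error_rate(ambiguous_decisions=12, split_errors=3,
                              merge_errors=5, reconstructed_path_length_mm=40.0)
    print(f"{rate:.2f} tracing problems per mm of axon")  # 0.50 with these made-up numbers

Lower rates for one preservation method versus another would then be a quantitative, if imperfect, signal of better connectomic preservation.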

Accessibility constraints on the brain preservation procedure

While the BPF prize had a constraint that the winning method must allow for storage of at least 100 years, it didn’t have a strictly defined constraint on the cost, storage fidelity, or environmental impact of the method. This is important because one of the biggest problems with brain preservation is that a lot of people want it but – tragically – cannot access it at the time of their legal death.

The BPF’s 2010 prize rules did stipulate that “as this preservation procedure is meant to eventually become a widely accessible medical procedure, a case must be made that it could eventually be implemented inexpensively enough that it might become broadly adopted by all who might choose it.” It also stated that the winning method would use “a procedure that, with minor modifications, might potentially be offered for less than US$20,000 by appropriately trained medical professionals.” But the cost requirement was not a major focus of the prize evaluation.

The winning method, aldehyde-stabilized cryopreservation, gets us towards higher-quality structural brain preservation. It also helps with storage fidelity, because even if the cryopreservation fails, there would still be chemical preservation via fixatives that may be sufficient, at least in the short term.

But aldehyde-stabilized cryopreservation may not help much with the cost of brain preservation, as it still requires long-term sub-zero storage. So it may not meaningfully move the needle on financial access to high-quality brain preservation among those who want it. Cost is certainly not the only barrier to consistently high-quality brain preservation, but it is one of the major factors.

One way to try to address this would be a prize for a room temperature preservation method that achieved similar structural preservation quality on volume electron microscopy. In 2020, BPF co-founder John Smart proposed a room temperature brain preservation prize.

Perhaps an even more direct way to address the problem of financial accessibility would be to stipulate the maximum cost of the procedure. For example, another issue with Mikula’s BROPA method, which might allow for long-term room temperature preservation, is that osmium is quite expensive. As Mikula described it in a BPF interview in 2015:

The extension of the BROPA method to a pig brain faces a financial hurdle due to the high cost of osmium tetroxide, which is about $30/g. For a typical pig brain weight of about 180 g (according to online sources), about the same weight of osmium tetroxide is required for the BROPA protocol and thus would cost $5400. For a 1.2 kg human brain, this is $36,000… In terms of accelerating the process of extending BROPA or a related epoxy embedding-based method to large brains, this is mostly a question of money for chemical costs and having an adequate supply of well-fixed brains.
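As a sanity check on the arithmetic in that quote, here is a minimal sketch assuming, as Mikula describes, roughly one gram of osmium tetroxide per gram of brain tissue at about $30 per gram:

    OSMIUM_TETROXIDE_USD_PER_G = 30.0
    OSMIUM_G_PER_G_OF_BRAIN = 1.0   # "about the same weight ... is required"

    def osmium_cost_usd(brain_mass_g: float) -> float:
        """Reagent cost alone, ignoring labor, equipment, and storage."""
        return brain_mass_g * OSMIUM_G_PER_G_OF_BRAIN * OSMIUM_TETROXIDE_USD_PER_G

    print(osmium_cost_usd(180))    # pig brain: 5400.0
    print(osmium_cost_usd(1200))   # human brain: 36000.0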

Because room temperature brain preservation methods such as BROPA can be expensive as well, it seems better to directly assess the cost of the procedure rather than only specifying the storage temperature.

There is plenty of room for improvement on methods that optimize for structural preservation quality

One higher-level point about the limitations of the BPF metric is that there is still plenty of space left to innovate. It is certainly not the case that optimizing for structural preservation is a dead end or already accomplished. Hopefully, the field is just getting started.

Also, as progress in neuroscience and towards revival technology improves, our understanding of what is necessary to preserve and measure will continue to improve. This will help us to design better metrics to test and optimize our preservation methods.

Structural preservation metrics are independent of the method used for revival

BPF president Ken Hayworth primarily discusses his motivation for developing brain preservation as allowing people to pursue mind uploading (Hayworth, 2015). Mind uploading, more formally known as whole brain emulation, is the idea that we could scan a brain accurately enough to create an emulation of a person’s mental state and then instantiate this emulation in digital form.

But mind uploading is certainly not the only proposed revival method compatible with fixation-based brain preservation approaches. Molecular nanotechnology-based revival, which aims to perform molecule-by-molecule repair of the brain using hypothetical atom-manipulation capabilities, would also be compatible with fixation-based approaches, if it ever becomes feasible. So the focus on mind uploading has more to do with Hayworth’s emphasis than with the fundamental neuroscience.

More generally, optimizing for structural preservation seems to be largely independent of the method for revival. In my view, there’s no plausible revival method for any current preservation method that doesn’t involve complex repair at the molecular level, either in situ or in silico. Perhaps that will change if we eventually become very close to long-term suspended animation, but even then it would likely only apply to ideal cases with minimal agonal and postmortem interval damage.

Open questions

Conclusion

Some in the cryonics community would prefer less focus on preservation quality metrics and more focus on marketing or some other aspect of cryonics. Arguably, they have a point: relative to other factors, variation between preservation methods may matter less than we think for the overall likelihood of successful revival following brain preservation. However, it’s not clear that they are right that more marketing would help expand interest in and access to cryonics/brain preservation.

Surveys, such as the 2015 BPF survey and the 2020 European Biostasis Foundation survey, show that a majority of people in wealthy nations such as the United States have heard of cryonics (Gillett et al., 2021). The main problem is not lack of awareness but, to a rough approximation, that the product is perceived as low quality and therefore not worth the costs. The topic of why more people don’t pursue cryonics is a complicated one that will be discussed later. For now, I will say that even if you do believe in marketing, the best marketing strategy must put delivering a high-quality, high-fidelity, trustworthy service at its forefront. Focusing on structural preservation quality metrics is, in my opinion, the best way to get there.

Over the last decade or so, there has been more of a push in cryonics and brain preservation to develop and apply preservation quality metrics. In addition to the BPF prize, a key development has been CT brain scanning at Alcor, which can be done through the dewar and gives a sense of the tissue condition, including the amount of ice formation. This is very helpful and worthy of praise.
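As an illustration only, here is a minimal sketch of how such through-the-dewar CT data might be summarized into a single indicator: the fraction of brain voxels whose attenuation falls below a cutoff suggestive of ice. The cutoff value, the use of one global threshold, and the synthetic data are all hypothetical simplifications, not Alcor’s actual analysis:

    import numpy as np

    def estimated_ice_fraction(hounsfield_units: np.ndarray,
                               ice_cutoff_hu: float = -50.0) -> float:
        """Fraction of voxels at or below a (hypothetical) ice-like attenuation cutoff."""
        return float(np.mean(hounsfield_units.ravel() <= ice_cutoff_hu))

    rng = np.random.default_rng(0)
    fake_scan = rng.normal(loc=20.0, scale=40.0, size=(64, 64, 64))  # synthetic attenuation values
    print(f"estimated ice fraction: {estimated_ice_fraction(fake_scan):.1%}")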

But a sustained focus on metrics is still lacking. Ideally, there would also be histologic quality control measurements in addition to radiologic ones. Measuring and being transparent about outcomes data is tough. It can be discouraging and scary. Yet, without a more consistent focus on surrogate outcomes, the field is not going to advance.

Of course, any time one discusses metrics, it is also important to keep in mind Goodhart’s law: when a measure becomes a target, it ceases to be a good measure. Our actual, long-term goal is not to achieve good preservation metrics. Rather, our long-term goal is to allow for eventual revival. So our metrics need to be carefully chosen, multimodal, and thoughtfully applied towards the actual goal. Being aware of Goodhart’s law is not an excuse to ignore metrics, but rather a reminder to be thoughtful about applying them and willing to update them.

Metrics allow us to create a feedback mechanism to incentivize performance, iterate our methods to identify better ones, and be transparent about what we are achieving. By using metrics, we embed an appropriate amount of skepticism into the brain preservation project and proceed in a more scientific manner. As the saying goes, in God we trust; all others must bring data.

Endnotes

Further reading

Viability vs structural preservation trade-offs are difficult to avoid

The major debate today about this trade-off is often between those who favor aldehyde-based methods and those who favor pure cryopreservation methods. But this debate also exists within methods. For example, there is sometimes a viability vs structural preservation trade-off within pure cryopreservation methods. Slower cooling can lead to a higher likelihood of cellular survival upon rewarming but worse cellular structural preservation (Robards et al., 1985):

“It is usually necessary to make a choice between using a freezing protocol that changes the internal structure and solute concentrations of the cells but allows them to survive after thawing and a method that preserves ‘good’ cell structure even though the cell will not be viable if brought back to its normal growth temperature. The reasons for this distinction are relatively straightforward and arise from a consideration of what happens when cells are frozen at different rates.” - (Robards et al., 1985)

As another study points out (Farrant et al., 1977):

“[T]he conditions used for the freezing of cells for ultrastructural examination in the frozen state are so different from those used when the recovery of function is the aim, that there is a distinct possibility that the cells have been damaged both functionally and structurally.” - (Farrant et al., 1977)

As a result, the viability vs structure metrics debate is difficult to ignore, even if we imagine restricting ourselves to a pure cryopreservation approach for brain preservation.

References

Bak, J. H., Kim, M. H., Liu, L. and Hyeon, C., A Unified Framework for Inferring the Multi-Scale Organization of Chromatin Domains from Hi-C, PLOS Computational Biology, vol. 17, no. 3, p. e1008834, March 2021. DOI: 10.1371/journal.pcbi.1008834
Carlu, M., Chehab, O., Dalla Porta, L., Depannemaecker, D., Héricé, C., Jedynak, M., Köksal Ersöz, E., et al., A Mean-Field Approach to the Dynamics of Networks of Complex Neurons, from Nonlinear Integrate-and-Fire to Hodgkin-Huxley Models, Journal of Neurophysiology, vol. 123, no. 3, pp. 1042–51, March 2020. DOI: 10.1152/jn.00399.2019
Cook, E. P., Wilhelm, A. C., Guest, J. A., Liang, Y., Masse, N. Y. and Colbert, C. M., The Neuronal Transfer Function: Contributions from Voltage- and Time-Dependent Mechanisms, Progress in Brain Research, vol. 165, pp. 1–12, 2007. DOI: 10.1016/S0079-6123(06)65001-2
Drexler, K. E., Engines of Creation, Anchor Press/Doubleday, 1986.
Duhring, N., Immortality: Physically, Scientifically, Now: A Reasonable Guarantee of Bodily Preservation, a General Discussion, & Research Targets, Alcor Life Extension Foundation, 1962.
Ettinger, R. C. W., The Prospect of Immortality, Ria University Press, 1964.
Fahy, G. M., Guan, N., de Graaf, I. A. M., Tan, Y., Griffin, L. and Groothuis, G. M. M., Cryopreservation of Precision-Cut Tissue Slices, Xenobiotica, vol. 43, no. 1, pp. 113–32, January 2013. DOI: 10.3109/00498254.2012.728300
Farrant, J., Walter, C. A., Lee, H., Morris, G. J. and Clarke, K. J., Structural and Functional Aspects of Biological Freezing Techniques, Journal of Microscopy, vol. 111, no. 1, pp. 17–34, September 1977. DOI: 10.1111/j.1365-2818.1977.tb00044.x
Fernandes, V., Teles, K., Ribeiro, C., Treptow, W. and Santos, G., Fat Nucleosome: Role of Lipids on Chromatin, Progress in Lipid Research, vol. 70, pp. 29–34, April 2018. DOI: 10.1016/j.plipres.2018.04.003
Gillett, C. R., Brame, T. and Kendiorra, E. F., Comprehensive Survey of United States Internet Users’ Sentiments Towards Cryopreservation, PLOS ONE, vol. 16, no. 1, p. e0244980, January 2021. DOI: 10.1371/journal.pone.0244980
Goetz, A. and Goetz, S. S., Vitrification and Crystallization of Organic Cells at Low Temperatures, Journal of Applied Physics, vol. 9, no. 11, pp. 718–29, November 1938. DOI: 10.1063/1.1710381
Goetz, L., Roth, A. and Häusser, M., Active Dendrites Enable Strong but Sparse Inputs to Determine Orientation Selectivity, Proceedings of the National Academy of Sciences, vol. 118, no. 30, p. e2017339118, July 2021. DOI: 10.1073/pnas.2017339118
Guo, D., Zou, J., Rensing, N. and Wong, M., In Vivo Two-Photon Imaging of Astrocytes in GFAP-GFP Transgenic Mice, PLOS ONE, vol. 12, no. 1, p. e0170005, January 2017. DOI: 10.1371/journal.pone.0170005
Hayworth, K., An Argument for the Scientific and Technical Plausibility of Mind Uploading, 2015.
Kiernan, J., A System for Quantitative Evaluation of Fixatives for Light Microscopy Using Paraffin Sections of Kidney and Brain, Biotechnic & Histochemistry, vol. 84, no. 1, pp. 1–10, January 2009. DOI: 10.1080/10520290802646619
Li, C., Zhu, D., Li, X., Wang, B. and Chen, J., Thermal-Stress Analysis on the Crack Formation of Tungsten During Fusion Relevant Transient Heat Loads, Nuclear Materials and Energy, vol. 13, pp. 68–73, December 2017. DOI: 10.1016/j.nme.2017.06.008
Lindblom, R. P. F., Tovedal, T., Norlin, B., Hillered, L., Englund, E. and Thelin, S., Mechanical Reperfusion Following Prolonged Global Cerebral Ischemia Attenuates Brain Injury, Journal of Cardiovascular Translational Research, vol. 14, no. 2, pp. 338–47, April 2021. DOI: 10.1007/s12265-020-10058-9
Man, S., Xian, Y., Holmes, D. N., Matsouaka, R. A., Saver, J. L., Smith, E. E., Bhatt, D. L., Schwamm, L. H. and Fonarow, G. C., Association Between Thrombolytic Door-to-Needle Time and 1-Year Mortality and Readmission in Patients With Acute Ischemic Stroke, JAMA, vol. 323, no. 21, pp. 2170–84, June 2020. DOI: 10.1001/jama.2020.5697
Mikula, S., Progress Towards Mammalian Whole-Brain Cellular Connectomics, Frontiers in Neuroanatomy, vol. 10, 2016.
Mikula, S. and Denk, W., High-Resolution Whole-Brain Staining for Electron Microscopic Circuit Reconstruction, Nature Methods, vol. 12, no. 6, pp. 541–46, June 2015. DOI: 10.1038/nmeth.3361
Morgan, D. O., The Cell Cycle: Principles of Control, New Science Press, 2007.
Prehoda, R. W., Suspended Animation: The Research Possibility That May Allow Man to Conquer the Limiting Chains of Time, Chilton Book Company, 1969.
Robards, A. W. and Sleytr, U. B., Low Temperature Methods in Biological Electron Microscopy, Elsevier Science, 1985.
Saver, J. L., Time Is Brain–Quantified, Stroke, vol. 37, no. 1, pp. 263–66, January 2006. DOI: 10.1161/01.STR.0000196957.55928.ab
Shi, S.-R., Shi, Y. and Taylor, C. R., Antigen Retrieval Immunohistochemistry, Journal of Histochemistry and Cytochemistry, vol. 59, no. 1, pp. 13–32, January 2011. DOI: 10.1369/jhc.2010.957191
Shoffstall, G. W., Failed Futures, Broken Promises, and the Prospect of Cybernetic Immortality: Toward an Abundant Sociological History of Cryonic Suspension, 1962-1979, PhD thesis, University of Illinois at Urbana-Champaign, 2016.
Spruston, N., Stuart, G. and Häusser, M., Principles of Dendritic Integration, Dendrites, vol. 351, no. 597, p. 1, 2016.
Sugimoto, M., Hyodoh, H., Rokukawa, M., Kanazawa, A., Murakami, R., Shimizu, J., Okazaki, S., Mizuo, K. and Watanabe, S., Freezing Effect on Brain Density in Postmortem CT, Legal Medicine (Tokyo, Japan), vol. 18, pp. 62–65, January 2016. DOI: 10.1016/j.legalmed.2015.12.007
Tan, M., Mei, J. and Xie, J., The Formation and Control of Ice Crystal and Its Impact on the Quality of Frozen Aquatic Products: A Review, Crystals, vol. 11, no. 1, p. 68, January 2021. DOI: 10.3390/cryst11010068
Vita-More, N. and Barranco, D., Persistence of Long-Term Memory in Vitrified and Revived Caenorhabditis Elegans, Rejuvenation Research, vol. 18, no. 5, pp. 458–63, October 2015. DOI: 10.1089/rej.2014.1636
Xu, C. S., Hayworth, K. J., Lu, Z., Grob, P., Hassan, A. M., García-Cerdán, J. G., Niyogi, K. K., Nogales, E., Weinberg, R. J. and Hess, H. F., Enhanced FIB-SEM Systems for Large-Volume 3d Imaging, eLife, vol. 6, p. e25916, May 2017. DOI: 10.7554/eLife.25916