Living systems need to remember information about their environment in order to make decisions that ultimately ensure survival. But storing information about past experiences costs energy, while only a fraction of the vast amount of available information is actually useful to the living system. An intelligent memory-formation strategy should take this into account by retaining only the meaningful bits. To develop an abstract theory of memory making, one needs to quantify the cost-benefit trade-off, but it is not always easy to specify the benefits an organism could potentially derive from a memory. On a fundamental level, however, the cost-benefit trade-off can be quantified by noting that part of the thermodynamic cost incurred could, in principle, be recovered. The remaining dissipation is, on average, proportional to how much information is memorized about quantities that cannot be controlled.
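In symbols (our own notation, a sketch inferred from the abstract rather than a quoted result): write X for the observed data, M for the memory formed from it, and Y for the part of the data that can actually be leveraged. The statement above then reads, up to constants,

    <W_diss>  ∝  I[M;X] − I[M;Y],

where I[·;·] denotes mutual information: the irrecoverable dissipation is governed by the memorized information I[M;X] in excess of the usable information I[M;Y].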
In this talk, we will step back to the most basic level. We will recall the physical basis of information, for which Szilard laid the foundation by pointing out that Maxwell's "demon" can be replaced by machinery [1]. To this day, Szilard's "information engine" serves as a canonical example, despite significant growth of the field over the past decade, accelerated by increased control over small systems, which has enabled experimental demonstrations of these thought experiments. An information engine consists of two parts: a system that does work on its environment, and an observer of this system. The role of the observer (the "demon") can be reduced to mapping available data onto a metastable memory, which the engine then uses to do work. While Szilard showed that this map can be implemented mechanistically, he chose the map a priori. The choice of how to construct a meaningful memory reflects the demon's intelligence. I will show that this choice can be automated as well. To that end, I introduce generalized, partially observable information engines. They can do work at a temperature T' > T that differs from the temperature T of the memory-forming process, thus allowing for a combined treatment of heat engines and information engines. Partial observability is ubiquitous in living systems, which have limited sensor types and information-acquisition bandwidths. I will show that minimizing the lower bound on engine dissipation (equivalently, maximizing the upper bound on net average work output) over all possible memories yields a method for finding those memories that optimally trade off thermodynamic cost against benefit. To illustrate how the demon's intelligence can be automated, I will discuss a simple model that nonetheless displays physical richness. A minor change to Szilard's engine, inserting the divider at an angle, results in a family of partially observable Szilard engines. At fixed angle, there is an optimal memory for each value of T'/T, enabling maximal engine efficiency. These optimal memories are probabilistic maps, computed by the Information Bottleneck algorithm [2], which is here derived from minimizing dissipation in the quasi-static limit.
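To make the last step concrete: under the notation sketched after the first paragraph, minimizing the dissipation bound over all memory maps p(m|x) in the quasi-static limit amounts to

    min over p(m|x):  I[M;X] − (T'/T) I[M;Y],

which is the Information Bottleneck variational problem with trade-off parameter beta = T'/T. Below is a minimal Python/NumPy sketch of the standard self-consistent IB iteration of Tishby, Pereira, and Bialek [2]; the function name and the placeholder joint distribution p_xy are ours, and the identification beta = T'/T is inferred from the abstract rather than taken from the papers' detailed derivations.

import numpy as np

def information_bottleneck(p_xy, n_clusters, beta, n_iter=200, seed=0):
    # Self-consistent Information Bottleneck iteration (Tishby et al. [2]).
    # p_xy: joint distribution over (x, y), shape (nx, ny), entries summing to 1.
    # beta: trade-off parameter; here read as T'/T (an assumption, see above).
    # Returns the soft memory map p(m|x), shape (n_clusters, nx).
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)                       # marginal p(x)
    p_y_given_x = p_xy / p_x[:, None]            # conditional p(y|x)

    # Random soft initialization of the memory map p(m|x).
    p_m_given_x = rng.random((n_clusters, p_x.size))
    p_m_given_x /= p_m_given_x.sum(axis=0)

    eps = 1e-12
    for _ in range(n_iter):
        p_m = p_m_given_x @ p_x                  # p(m) = sum_x p(m|x) p(x)
        # p(y|m) = sum_x p(y|x) p(m, x) / p(m)
        p_y_given_m = (p_m_given_x * p_x) @ p_y_given_x / p_m[:, None]
        # Kullback-Leibler divergence D[p(y|x) || p(y|m)] for every (m, x) pair.
        log_ratio = (np.log(p_y_given_x[None, :, :] + eps)
                     - np.log(p_y_given_m[:, None, :] + eps))
        kl = (p_y_given_x[None, :, :] * log_ratio).sum(axis=2)
        # Self-consistent update: p(m|x) proportional to p(m) exp(-beta * KL).
        p_m_given_x = p_m[:, None] * np.exp(-beta * kl)
        p_m_given_x /= p_m_given_x.sum(axis=0, keepdims=True)
    return p_m_given_x

For a discretized engine model, one would tabulate p_xy from the engine's geometry and sweep beta = T'/T to trace out the family of optimal memories, e.g. p_m_given_x = information_bottleneck(p_xy, n_clusters=2, beta=1.5).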
This talk covers two recent papers: Phys. Rev. Lett. 124, 050601 (2020), and arXiv:2103.15803.
[1] L. Szilard. Z. Phys., 53:840–856, 1929.
[2] N. Tishby, F. Pereira, and W. Bialek. Proc. 37th Annual Allerton Conference on Communication, Control, and Computing, 1999. arXiv:physics/0004057