Theoretical work is necessary for the proper interpretation of data, for modelling biological systems, for understanding the physical phenomena exploited in sensing applications, and for engineering optimal detection systems. Fundamental questions that can be addressed with theoretical work include, for instance: what is the information content of images in fluorescence microscopy? How many photons are required to correctly estimate a fluorescence lifetime? In recent years, I have worked on using biophysical imaging techniques as tools for the systems biology of cancer. The next big theoretical question I wish to address is: how do biochemical networks encode cellular decisions and maintain functional states? To answer this question, we will have to develop accurate models of both our data and the biology under study.

Over the last few years, we have developed several theoretical models:
  • Photon-partitioning theorem: definition and optimization of biochemical resolving power in fluorescence microscopy (~2013)
  • High (super?) resolution volume rendering of confocal data (~2010)
  • Quantifying analyte concentrations by FRET imaging and phasor transforms (~2007)
  • Maximization of photon-economy and acquisition throughput in FLIM applications
  • Lifetime Moment Analysis (LiMA): graphical representations and missing analytical solutions for FLIM analysis in the frequency domain

Theory Pipeline

Photon partitioning theorem and biochemical resolving power

Read the paper at PLOS ONE

After my 2007 theoretical work on photon economy and acquisition throughput, I occasionally worked on a more general framework to demonstrate that multi-channel or multi-parametric imaging can deliver better results than simpler techniques.

My proposal to develop instrumentation to achieve spectrally and polarization resolved lifetime imaging (later defined as HDIM) was met with scepticism by many. The recurrent question was: if you struggle to do a double exponential fit with the small photon budget we have available in biological applications, how could you possibly dilute these photons over several channels and analyse them with more complex algorithms?

Here, there are a few fundamental misunderstandings. First, the analysis should not be carried out on each “detection channel” independently; global analysis algorithms should be used to exploit all the information present in the dataset at once. Second, the use of dispersive optics rather than filters permits the acquisition of a higher number of photons. We can then dismiss the “bad” photons during post-processing (e.g., autofluorescence in a spectral band of no interest), optimizing the imaging conditions rather than relying on fixed optics that may not be optimal for a specific experiment. Third, limitations of current technologies (e.g., speed or photon-collection efficiency) should not be an obstacle to the development of these techniques: these are not conceptual flaws, but simply technological obstacles that can be removed.
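To make the first point concrete, here is a minimal sketch of a global fit: two detection channels are fitted simultaneously with a shared lifetime but independent amplitudes. The mono-exponential model, the channel amplitudes, and the noise level are my illustrative assumptions, not the published method.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 12.0, 60)   # time bins (ns), illustrative
tau_true = 3.0                   # shared lifetime (ns)
amps_true = (400.0, 150.0)       # per-channel amplitudes (assumed)

# Two spectral channels share the same decay but differ in brightness.
data = [a * np.exp(-t / tau_true) + rng.normal(0.0, 5.0, t.size)
        for a in amps_true]

def residuals(params):
    # One shared lifetime, one amplitude per channel: the fit pools
    # the photons of both channels into a single estimation problem.
    tau, a1, a2 = params
    return np.concatenate([
        data[0] - a1 * np.exp(-t / tau),
        data[1] - a2 * np.exp(-t / tau),
    ])

fit = least_squares(residuals, x0=[1.0, 100.0, 100.0],
                    bounds=([0.1, 0.0, 0.0], [np.inf, np.inf, np.inf]))
tau_hat, a1_hat, a2_hat = fit.x
print(tau_hat)
```

The dimmer channel would give a poor lifetime estimate on its own, but because the lifetime parameter is shared, every detected photon in either channel contributes to it.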

Although I have a lot of (unpublished) work describing the performance of multi-channel systems, the breakthrough for me was the realization that I should describe the general properties of Fisher information in fluorescence detection rather than the Fisher information of a specific experiment. Fisher information is the information content that an experiment provides about an unknown we wish to estimate. Its inverse is the smallest variance attainable in that experiment, the so-called Cramér-Rao limit. In other words, Fisher information permits us to understand how much noise we get in an experiment. By maximizing Fisher information, we maximize the precision of our experiments.

Photon-partitioning theorem

The second breakthrough was the understanding that the best description of precision in biophysical imaging techniques requires the concept of biochemical resolving power, a generalization of the resolving power of a spectrograph to any measured photophysical parameter, and then its application to biochemistry. The biochemical resolving power is proportional to the square root of the product of the photon efficiency of a microscopy technique and the number of detected photons. Maximization of Fisher information leads to the maximization of photon efficiency and, therefore, to net improvements in biochemical resolving power. This definition complements the definition of spatial resolution in microscopy and allows us to define when two objects are spatially and/or biochemically distinct. It is worth mentioning that this is equivalent to stating that two objects are spatially and photophysically distinct, but we use the photophysics of fluorophores to do biochemistry, hence my nomenclature. I see possible implications for other techniques, including super-resolution, and perhaps this will be the subject of future work.
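As an illustrative numerical example of these quantities (not taken from the paper), the sketch below estimates by Monte Carlo the photon economy F of a simple two-gate rapid lifetime determination (RLD) estimator, using the common definitions F = √N·σ(τ̂)/τ, photon efficiency E = 1/F², and resolving power √(E·N) = τ/σ(τ̂). The gate width and photon numbers are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rld_estimate(tau, n_photons, gate):
    """Two-gate RLD: count photons in [0, gate) and [gate, 2*gate)
    and estimate the lifetime as gate / ln(N1/N2)."""
    t = rng.exponential(tau, n_photons)
    n1 = np.sum(t < gate)
    n2 = np.sum((t >= gate) & (t < 2.0 * gate))
    if n2 == 0 or n1 <= n2:
        return None   # degenerate trial, discard
    return gate / np.log(n1 / n2)

tau, N = 2.5, 1000
gate = 2.5 * tau   # assumed near-optimal width for two equal gates
trials = (rld_estimate(tau, N, gate) for _ in range(2000))
estimates = np.array([e for e in trials if e is not None])

# Photon economy F (F = 1 is the ideal, shot-noise-limited case).
F = np.sqrt(N) * estimates.std() / tau
# Biochemical resolving power scaling: sqrt(E * N) = sqrt(N) / F.
resolving_power = np.sqrt(N) / F
print(F, resolving_power)
```

The simulation gives F around 1.5, i.e. the two-gate scheme needs roughly F² ≈ 2.3 times more photons than an ideal detector to reach the same resolving power, which is exactly the kind of trade-off the theorem quantifies.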

The third breakthrough was the use of numerical computation of Fisher information rather than analytical solutions of equations, which are not always available. This approach is very common in engineering, but not in our field. We can now optimize the properties of any detection scheme in order to attain the highest performance.
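A toy version of this numerical optimization, assuming a hypothetical detector that only reports which of two adjacent time gates a photon fell in (or that it arrived later); the three-outcome model and the search range are illustrative assumptions:

```python
import numpy as np

def two_gate_fisher_info(gate, tau):
    """Per-photon Fisher information about tau for a three-outcome
    detector: first gate [0, gate), second gate [gate, 2*gate),
    or 'arrived later'."""
    edges = np.array([0.0, gate, 2.0 * gate, np.inf])

    def g(x):
        # d/dtau of exp(-x/tau) = (x/tau^2) exp(-x/tau); limit 0 as x -> inf
        out = np.zeros_like(x)
        finite = np.isfinite(x)
        out[finite] = (x[finite] / tau**2) * np.exp(-x[finite] / tau)
        return out

    p = np.exp(-edges[:-1] / tau) - np.exp(-edges[1:] / tau)
    dp = g(edges[:-1]) - g(edges[1:])
    return np.sum(dp**2 / p)

# Grid-search the gate width that maximizes Fisher information.
tau = 2.5   # ns, illustrative
gates = np.linspace(0.2 * tau, 5.0 * tau, 400)
info = np.array([two_gate_fisher_info(g_, tau) for g_ in gates])
best_gate = gates[info.argmax()]
efficiency = info.max() * tau**2   # relative to the ideal 1/tau^2
print(best_gate / tau, efficiency)
```

No closed-form optimum is needed: the same brute-force scan (or a proper optimizer) works unchanged for detection schemes with many channels, where analytical solutions quickly become intractable.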

This is very specialist work and I assume not many people will be interested in it, although the implications of this piece of theory for everyone’s experiments are significant. I believe this is my most elegant theoretical work, but I guess that is a matter of opinion. During the refereeing process, the paper had to be expanded well beyond what I wished to publish, and it now includes examples, software, etc. I think the theoretical introduction and the mathematical demonstrations are the best part, and the description of the numerical optimization of Fisher information the most useful.
NOTE: there are two typographical errors in the published manuscript, within the definitions of photon economy and separability. These are described in a comment on PLOS ONE.

Maximization of photon-economy and acquisition throughput in FLIM applications

SECTION TO BE COMPLETED - Follow me on Twitter for updates

Simple analysis of lifetime images by linear transforms

SECTION TO BE COMPLETED - Follow me on Twitter for updates

Volume rendering: is this localization-based super-resolution?

SECTION TO BE COMPLETED - Follow me on Twitter for updates