When it comes to generating 3D digital geometric models of historical buildings, the automation of methods is still limited. Existing research has focused on sacral structures, on 3D model generation of the exterior envelope of buildings, and on segmentation of interior spaces. The goal of this project is (i) to develop a data acquisition and post-processing pipeline for deriving the exterior and interior geometry of historical buildings in terms of 3D point clouds derived from spherical RGB images, (ii) to augment these data with information extracted from historical architectural drawings, and (iii) to approximate the 3D point clouds by geometric primitives describing the architectural and structural elements of historical buildings. Because interior spaces are often highly privacy-sensitive, privacy issues will be considered from the start, limiting the stored data to data containing architectural and structural elements. The derived models can be used for a wide range of research applications, such as 4D modelling showing the evolution of a historical building over time and structural modelling of historical buildings, which requires a geometric model of the building as input.
A key tool for studying the dynamics of living systems is the light microscope. Microscopes allow real-time recording of spontaneous or evoked spatio-temporal dynamics, data that can be used to develop models of how complex systems function. Today, cutting-edge microscopes can image below the diffraction limit of light (super-resolution microscopy), or over days, gently enough to allow an organism to develop and walk away (light-sheet microscopy). Yet, microscopy studies of biological systems largely rely on human control or pre-defined acquisition parameters to identify features of interest, perturb the system, and collect data in a given location and at a given timescale. This is because subtle changes in protein dynamics and assembly patterns often herald events of interest – too subtle and unreliable to act as inputs to existing microscope automation.
Advances in intelligent systems and adaptive control have the potential to revolutionize how microscopy data is collected, and to then enable breakthroughs in our understanding of biological systems. We propose to develop a neural network-based microscope controller that is capable of detecting image signatures related to biological activity and, in response, adapting illumination patterns at multiple locations across an imaging field of view. The proposed project aims to build upon a neural-network microscope control framework previously developed in the Manley group, making it suitable for spatially and temporally adaptive control. We will apply this to push beyond the state of the art, using as proof of concept organismal studies performed in the Oates group and biofilm studies in the Manley group.
Many questions in biology, from development to neuroscience and medicine, require the identification of fine-grained behaviors. We will develop novel computer vision and natural language processing technology to improve behavioral analysis in biology and medicine. Specifically, we will build deep learning models that can efficiently learn joint representations from video and heterogeneous data sources (e.g., textual descriptions, knowledge graphs). To do so, we will mine the written literature as well as video sharing platforms to extract a knowledge graph of behavior, and then learn tri-modal models based on vision, language, and this knowledge graph. We believe that these models will be able to generalize more robustly and efficiently to various applications in biology.
Next-generation radio telescopes such as the Square Kilometre Array (SKA) will observe the sky with unprecedented resolution, sensitivity, and survey speed. However, this precise instrument will demand reliable, precise, and high dynamic range deconvolution techniques to form images. The popular CLEAN algorithm, while efficient, often produces images of suboptimal quality. In recent years, convex and nonconvex optimization algorithms have been demonstrated to produce images of superior quality, but at the cost of efficiency and scalability. Deep learning solutions offer a compromise between speed and quality, but at the cost of reliability and generalizability. In this project, the researchers will leverage the complementary expertise of the astronomy and signal processing research groups at EPFL to develop an end-to-end imaging solution that is precise, robust, and scalable.
Scanning probe methods – and in particular, the combination of scanning ion conductance microscopy (SICM) and scanning electrochemical microscopy (SECM) – have emerged as unique tools for studying materials and mechanisms in complex, multistep chemical reactions such as CO2 reduction. However, they are notoriously slow in image acquisition, making them ill-suited for studying the dynamics of energy conversion processes. In this project, these two EPFL labs will develop advanced hardware and software components for a unique, fast SICM-SECM imaging method that can be easily deployed within the EPFL community, and beyond. Their method has great potential for the design of energy devices, as well as emerging cross-disciplinary applications such as the nanoelectrochemistry of single-cell signaling.
3D image reconstruction or depth estimation is at the core of applications in navigation as well as Earth system science. Significant advances have been made in the field of computer vision to obtain 3D information from various types of cameras. Yet, these techniques still face limitations for a number of applications. In this project, the VITA (Prof. Alahi) and EERL (Prof. Schmale) groups will pool their complementary skills to develop new machine-learning-based methods that can estimate depth from a large number of camera configurations, including from a novel 360° camera application. With their work, the teams will spearhead developments in the domain of 3D wave reconstruction and sea-ice classification, as well as autonomous navigation on water. The learning framework will be available as an open-source library that caters to the needs of many imaging applications.
Each human cell contains around two meters of DNA tightly packaged in its nucleus. This exquisite organization is critical to ensuring that the DNA remains accessible to the many important genetic processes. It is achieved by wrapping the DNA around millions of tiny protein spindles, forming a complex called chromatin. Chromatin governs many key cellular functions, and malfunctions in its organization can lead to serious diseases. As of now, no imaging method allows scientists to observe chromatin organization directly in the nucleus without seriously interfering with its local structure. In this project, two EPFL labs from different schools will develop novel methods for imaging the ultrastructure of chromatin at the level of individual genes and their regulatory regions in cells, using in situ fluorescent chemical labeling, 3D nanoscopy, and sequencing-based methods.
In this project, scientists from two EPFL labs will combine their know-how to develop a new high-speed microscopy system that can reveal single-molecule dynamics with unprecedented detail, including in liquids. The system will also allow scientists to assess how individual molecules behave, interact and self-organize at the solid-liquid interface. More specifically, they will enhance the system’s high-speed temporal acquisition capability by incorporating the single-photon avalanche diode (SPAD) arrays developed in Prof. Charbon’s lab into the state-of-the-art widefield super-resolution microscope developed in Prof. Radenovic’s lab.
When it comes to characterizing mechanics at the cellular scale, the accuracy and precision of current methods are still limited. In this project, Prof. Kolinski and Prof. Persat will connect imaging to mechanical measurements by developing a set of hardware and software tools that can measure microscale 3D force fields and surface stresses. The goal will be to improve our understanding of the function of forces in the physiology of biological systems. The new tools can then be used to study the mechanobiology of bacterial pathogens and will be widely applicable in other fields as well, such as microscale mechanics and the study of soft matter.
Spatial transcriptomics – a nascent field arising from the combination of cutting-edge microscopy with gene-specific in-situ labeling – can be used to generate large gene expression profiles of messenger RNA. This gives scientists an indication of the relative expression rates of different genes in the same environment. EPFL scientists at these two labs are profiling the expression of up to two hundred genes simultaneously in the developing brain using Hybridization In Situ Sequencing (HybISS). This cutting-edge method relies on computational processing methods that are still immature, hard to use, and error-prone.
In this project, Dr. La Manno and Dr. Weigert will develop a computational spatial transcriptomics framework – called Codebook-Aware ILP Detection and Tracking (CBAIDT) – that leverages modern computer vision techniques while using a novel tracking approach to substantially increase the accuracy and robustness of gene expression map generation.