The field of image denoising is currently dominated by discriminative deep learning methods that are trained on pairs of noisy input and clean target images. Recently it has been shown that such methods can also be trained without clean targets. Instead, independent pairs of noisy images can be used, in an approach known as Noise2Noise (N2N). Here, we introduce Noise2Void (N2V), a training scheme that takes this idea one step further. It requires neither noisy image pairs nor clean target images. Consequently, N2V allows us to train directly on the body of data to be denoised and can therefore be applied when other methods cannot. Especially interesting is the application to biomedical image data, where the acquisition of training targets, clean or noisy, is frequently not possible. We compare the performance of N2V to approaches that have either clean target images and/or noisy image pairs available. Intuitively, N2V cannot be expected to outperform methods that have more information available during training. Still, we observe that the denoising performance of Noise2Void drops only moderately and compares favorably to training-free denoising methods.
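The core of the N2V training scheme is blind-spot masking: a few pixels of the noisy input are replaced by the value of a random neighbour, and the loss is evaluated only at those positions against the original noisy values, so the network can never learn the identity map. The following is a minimal NumPy sketch of this idea; the function names `n2v_mask` and `n2v_loss` are illustrative and are not the API of the official n2v package.

```python
import numpy as np

def n2v_mask(img, n_masked=16, radius=2, rng=None):
    """Blind-spot masking (sketch): replace randomly chosen pixels by a
    random neighbouring pixel value and return the masked positions."""
    rng = np.random.default_rng(rng)
    h, w = img.shape
    masked = img.copy()
    ys = rng.integers(0, h, n_masked)
    xs = rng.integers(0, w, n_masked)
    for y, x in zip(ys, xs):
        # pick a random offset inside a (2r+1)x(2r+1) window, excluding (0, 0)
        while True:
            dy, dx = rng.integers(-radius, radius + 1, 2)
            if (dy, dx) != (0, 0):
                break
        ny = int(np.clip(y + dy, 0, h - 1))
        nx = int(np.clip(x + dx, 0, w - 1))
        masked[y, x] = img[ny, nx]
    return masked, (ys, xs)

def n2v_loss(pred, target, coords):
    """MSE evaluated only at the masked (blind-spot) positions."""
    ys, xs = coords
    return float(np.mean((pred[ys, xs] - target[ys, xs]) ** 2))
```

During training, `masked` would be fed to the network and `n2v_loss` computed between the network output and the original noisy image at the masked coordinates.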
Deep Learning (DL) methods are powerful analytical tools for microscopy and can outperform conventional image processing pipelines. Despite the enthusiasm and innovations fuelled by DL technology, the need to access powerful and compatible resources to train DL networks leads to an accessibility barrier that novice users often find difficult to overcome. Here, we present ZeroCostDL4Mic, an entry-level platform simplifying DL access by leveraging the free, cloud-based computational resources of Google Colab. ZeroCostDL4Mic allows researchers with no coding expertise to train and apply key DL networks to perform tasks including segmentation (using U-Net and StarDist), object detection (using YOLOv2), denoising (using CARE and Noise2Void), super-resolution microscopy (using Deep-STORM), and image-to-image translation (using Label-free prediction - fnet, pix2pix and CycleGAN). Importantly, we provide suitable quantitative tools for each network to evaluate model performance, allowing model optimisation. We demonstrate the application of the platform to study multiple biological processes.
Multiple approaches to use deep learning for image restoration have recently been proposed. Training such approaches requires well-registered pairs of high- and low-quality images. While this is easily achievable for many imaging modalities, e.g. fluorescence light microscopy, for others it is not. Cryo-transmission electron microscopy (cryo-TEM) could profoundly benefit from improved denoising methods; unfortunately, it is one of the latter. Here we show how recent advances in network training for image restoration tasks, i.e. denoising, can be applied to cryo-TEM data. We describe our proposed method and show how it can be applied to single cryo-TEM projections and whole cryo-tomographic image volumes. Our proposed restoration method dramatically increases contrast in cryo-TEM images, which improves the interpretability of the acquired data. Furthermore, we show that automated downstream processing on restored image data, demonstrated on a dense segmentation task, leads to improved results.
Cilia or eukaryotic flagella are microtubule-based organelles found across the eukaryotic tree of life. Their very high aspect ratio and crowded interior are unfavorable to diffusive transport of most components required for their assembly and maintenance. Instead, a system of intraflagellar transport (IFT) trains moves cargo rapidly up and down the cilium (Figure 1A).1–3 Anterograde IFT, from the cell body to the ciliary tip, is driven by kinesin-II motors, whereas retrograde IFT is powered by cytoplasmic dynein-1b motors.4 Both motors are associated with long chains of IFT protein complexes, known as IFT trains, and their cargoes.5–8 The conversion from anterograde to retrograde motility at the ciliary tip involves (1) the dissociation of kinesin motors from trains,9 (2) a fundamental restructuring of the train from the anterograde to the retrograde architecture,8,10,11 (3) the unloading and reloading of cargo,2 and (4) the activation of the dynein motors.8,12 A prominent hypothesis is that there is dedicated calcium-dependent protein-based machinery at the ciliary tip to mediate these processes.4,13 However, the mechanisms of IFT turnaround have remained elusive. In this study, we use mechanical and chemical methods to block IFT at intermediate positions along the cilia of the green alga Chlamydomonas reinhardtii, in normal and calcium-depleted conditions. We show that IFT turnaround, kinesin dissociation, and dynein-1b activation can consistently be induced at arbitrary distances from the ciliary tip, with no stationary tip machinery being required. Instead, we demonstrate that the anterograde-to-retrograde conversion is a calcium-independent intrinsic ability of IFT.
•Anterograde IFT trains can change direction without the aid of a ciliary tip
•IFT trains turn around normally in the absence of free calcium ions
•Disengagement from the microtubule likely triggers anterograde-to-retrograde conversion
Intraflagellar transport (IFT) allows the assembly of cilia by moving components from the cell to the ciliary tip and back. Nievergelt et al. demonstrate that, contrary to what was previously hypothesized, no stationary machinery at the ciliary tip is required for IFT turnaround, which is instead a calcium-independent intrinsic ability of IFT trains.
Multiple approaches to use deep neural networks for image restoration have recently been proposed. Training such networks requires well-registered pairs of high- and low-quality images. While this is easily achievable for many imaging modalities, e.g., fluorescence light microscopy, for others it is not. Here we summarize a number of recent developments in the fast-paced field of Content-Aware Image Restoration (CARE) in particular, and the associated area of neural network training more generally. We then give specific examples of how electron microscopy data can benefit from these new technologies.
Deep learning (DL) has arguably emerged as the method of choice for the detection and segmentation of biological structures in microscopy images. However, DL typically needs copious amounts of annotated training data that is rarely available for biomedical projects and excessively expensive to generate. Additionally, tasks become harder in the presence of noise, requiring even more high-quality training data. Hence, we propose to use denoising networks to improve the performance of other DL-based image segmentation methods. More specifically, we present ideas on how state-of-the-art self-supervised CARE networks can improve cell/nuclei segmentation in microscopy data. Using two state-of-the-art baseline methods, U-Net and StarDist, we show that our ideas consistently improve the quality of resulting segmentations, especially when only limited training data for noisy micrographs are available.
Transformer architectures show spectacular performance on NLP tasks and have recently also been used for tasks such as image completion or image classification. Here we propose to use a sequential image representation, where each prefix of the complete sequence describes the whole image at reduced resolution. Using such Fourier Domain Encodings (FDEs), an auto-regressive image completion task is equivalent to predicting a higher resolution output given a low-resolution input. Additionally, we show that an encoder-decoder setup can be used to query arbitrary Fourier coefficients given a set of Fourier domain observations. We demonstrate the practicality of this approach in the context of computed tomography (CT) image reconstruction. In summary, we show that Fourier Image Transformer (FIT) can be used to solve relevant image analysis tasks in Fourier space, a domain inherently inaccessible to convolutional architectures.
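The key property behind such a sequential representation can be illustrated with plain NumPy: ordering the 2D Fourier coefficients of an image from low to high frequency yields a sequence whose every prefix reconstructs the whole image at reduced resolution. This is a minimal sketch of that idea only, assuming a 2D single-channel image; the function names `fde_sequence` and `image_from_prefix` are illustrative and do not reproduce the exact encoding used by FIT.

```python
import numpy as np

def fde_sequence(img):
    """Order the 2D DFT coefficients of `img` by radial frequency (sketch).

    Any prefix of the returned sequence keeps only the lowest frequencies,
    i.e. a reduced-resolution description of the whole image."""
    F = np.fft.fftshift(np.fft.fft2(img))        # DC term moved to the center
    h, w = F.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2)       # radial distance from DC
    order = np.argsort(r, axis=None)             # low frequencies first
    return F.ravel()[order], order, F.shape

def image_from_prefix(seq, order, shape, k):
    """Reconstruct an image from the first k Fourier coefficients only."""
    flat = np.zeros(shape[0] * shape[1], dtype=complex)
    flat[order[:k]] = seq[:k]
    return np.real(np.fft.ifft2(np.fft.ifftshift(flat.reshape(shape))))
```

With the full sequence the reconstruction is exact; truncating it to a prefix discards only high-frequency detail, which is what makes "predict the next coefficients" equivalent to super-resolving a low-resolution input.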