Adversarial Domain Adaptation and Pseudo-Labeling for Cross-Modality Microscopy Image Quantification

Abstract
Cell and nucleus quantification has recently achieved state-of-the-art performance with convolutional neural networks (CNNs). However, training CNNs typically requires a large amount of annotated microscopy image data, which is prohibitively expensive or even impossible to obtain in some applications. Moreover, applying a supervised deep model to a new dataset commonly requires annotating individual cells in that target dataset for re-training or fine-tuning, which leads to low-throughput image analysis. In this paper, we propose a novel adversarial domain adaptation method for cell/nucleus quantification across multi-modality microscopy image data. Specifically, we learn a fully convolutional network (FCN) detector with task-specific cycle-consistent adversarial learning, which performs pixel-level adaptation between the source and target domains and then completes the cell/nucleus detection task. Next, we generate pseudo-labels on the target training data using the detector trained on adapted source images, and further fine-tune the detector towards the target domain to boost performance. We evaluate the proposed method on multiple cross-modality microscopy image datasets and obtain a significant improvement in cell/nucleus detection over the reference baselines and a recent state-of-the-art deep domain adaptation approach. In addition, our method is competitive with fully supervised models trained with all real target training labels.
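To make the two-stage pipeline concrete, the following is a minimal PyTorch sketch of the procedure the abstract describes: train a fully convolutional detector on source images translated into the target style by a cycle-consistent generator, then pseudo-label unlabeled target images and fine-tune. All module names (CycleGenerator, FCNDetector), the tiny architectures, the placeholder tensors, and the 0.5 confidence threshold are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CycleGenerator(nn.Module):
    """Stand-in for a source-to-target generator G_{S->T} trained with
    cycle-consistent adversarial learning (training omitted here)."""
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class FCNDetector(nn.Module):
    """Stand-in fully convolutional detector producing a per-pixel
    cell/nucleus heatmap (logits)."""
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(detector, opt, images, labels):
    """One detection update with a pixel-wise binary cross-entropy loss."""
    opt.zero_grad()
    loss = F.binary_cross_entropy_with_logits(detector(images), labels)
    loss.backward()
    opt.step()
    return loss.item()

# Stage 1: detector training on pixel-level-adapted source images.
g_s2t = CycleGenerator().eval()      # assumed pre-trained generator
detector = FCNDetector()
opt = torch.optim.Adam(detector.parameters(), lr=1e-4)

src_images = torch.rand(4, 3, 64, 64)                    # placeholder source batch
src_labels = (torch.rand(4, 1, 64, 64) > 0.95).float()   # placeholder annotations
with torch.no_grad():
    adapted = g_s2t(src_images)      # source images rendered in the target style
train_step(detector, opt, adapted, src_labels)

# Stage 2: pseudo-label unlabeled target images with the adapted-source
# detector, then fine-tune the detector on those pseudo-labels.
tgt_images = torch.rand(4, 3, 64, 64)                    # placeholder target batch
with torch.no_grad():
    probs = torch.sigmoid(detector(tgt_images))
pseudo_labels = (probs > 0.5).float()                    # assumed confidence threshold
train_step(detector, opt, tgt_images, pseudo_labels)
```

The key design point, under these assumptions, is that no target annotations enter the loop: source labels supervise stage 1 because pixel-level adaptation preserves cell locations, and stage 2 reuses the detector's own confident predictions on target data as training signal.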