2D Measurement of Retinal Pigment Epithelium Function Using Quantitative Bright-Field Microscopy

View Example: Deep Zoom view of Retinal Pigment Epithelium (RPE) cell images (absorbance, fluorescent ZO-1, and manual segmentation) for 1,032 images.

Download the data shown in the Deep Zoom view:

  1. Zip file download of raw absorbance images
  2. Zip file download of ZO-1 fluorescent images
  3. Zip file download of manually segmented images

Purpose: Demonstrate that the biological function of a tissue can be predicted from bright-field microscope images using deep neural networks and other machine learning algorithms.

Paper Title: Quantitative Bright-Field Microscopy Combined with Deep Neural Networks Predict Live Tissue Function

Abstract: Progressive increases in the number of cell therapies in the preclinical and clinical phases have prompted the need for reliable and non-invasive assays to validate transplant function in clinical biomanufacturing. Here, we developed a robust characterization platform composed of quantitative bright-field absorbance microscopy (QBAM) and deep neural networks (DNNs) to non-invasively predict tissue function and cellular donor identity. The platform was validated using clinical-grade induced pluripotent stem cell (iPSC)-derived retinal pigment epithelial cells (iRPE). QBAM images of iRPE were used to train DNNs that predicted iRPE monolayer transepithelial resistance (TER) (R² = 0.97), predicted polarized vascular endothelial growth factor (VEGF) secretion (R² = 0.89), and matched iRPE monolayers to their iPSC donors (accuracy = 85%). DNN predictions were supplemented with traditional machine learning algorithms that identified shape and texture features of single cells that were used to predict tissue function and iPSC donor identity. These results demonstrate that non-invasive cell therapy characterization can be achieved with quantitative bright-field microscopy and machine learning.

  1. iRPE from Healthy Donors

    Figure 1: An illustration of how the iRPE culture plate was organized. The Blank Well (blue) was filled with iRPE culture medium and used for benchmarking and calibration. The iRPE Wells (green) are locations in the plate that contained iRPE cells. Not Used (red) indicates empty wells that were not used in the experiment. The grid in the figure indicates that a 4x3 grid of overlapping images was captured near the center of each well. Wells were always imaged in the order B2, B1, C1, C2, C3, C4.

    iRPE were developed from two healthy donors (Healthy-1 and Healthy-2). Live iRPE were imaged using QBAM every week during the in vitro maturation process. The data are organized by culture plate, imaging date and time, and the color filter used to capture the images; the well ID and grid position are encoded in each image's file name. Both Healthy-1 and Healthy-2 iRPE were cultured in 12-well plates as outlined in Figure 1, with only half of the 12-well plate containing cells (green circles, Figure 1). The Blank Well (blue circles) was filled with culture medium but contained no cells, and was used for the benchmarking and calibration protocols that are part of the QBAM process. The grid in each well indicates that a 4x3 grid of overlapping images (~10-15% overlap) was captured for each well. One unique characteristic of this dataset is that each plate contains iRPE treated with maturation inhibitors (negative control, HPI4), maturation promoters (positive control, Aphidicolin), or neither. The imaging parameters and functional data collected for Healthy-1 and Healthy-2 differ; the details of data collection and contents are described in the “DataSummary.txt” file in each subfolder.
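    As an illustration of the absorbance calculation that underlies QBAM, the sketch below converts a raw bright-field image to absorbance using the Blank Well as the incident-light reference (Beer-Lambert relation). This is a minimal sketch, assuming TIFF files and a simple mean-blank reference; the file paths and dark-frame handling are hypothetical, and the paper's full QBAM calibration and benchmarking protocol is more involved.

```python
import numpy as np
from tifffile import imread  # assumes the raw images are stored as TIFFs

def absorbance(sample_path, blank_path, dark_path=None):
    """Convert a raw bright-field image to absorbance using a blank-well
    reference: A = -log10(I_sample / I_blank) (Beer-Lambert relation).

    The file paths and optional dark-frame subtraction are illustrative
    assumptions, not the exact QBAM calibration protocol from the paper.
    """
    sample = imread(sample_path).astype(np.float64)
    blank = imread(blank_path).astype(np.float64)
    if dark_path is not None:  # optional camera dark-frame correction
        dark = imread(dark_path).astype(np.float64)
        sample -= dark
        blank -= dark
    # Use the mean blank-well intensity as the incident-light estimate,
    # and clip the ratio to avoid taking the log of zero.
    ratio = np.clip(sample / blank.mean(), 1e-6, None)
    return -np.log10(ratio)
```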

    Folder: Healthy1_Data

    Access to Healthy1_Data (Live RPE - Broadband, 232 GB)

    Data from this donor include end-point functional measurements, and the images were collected using broadband color filters, i.e., filters that pass a wide range of wavelengths. This dataset contains 8 replicates of each experimental condition (no treatment, HPI4, Aphidicolin).

    Folder: Healthy2_Data (Used to train DNN-F)

    Access to Healthy2_Data (Live RPE - Narrowband, 291 GB)
    Figure 2: CNN training. An explanation of how the Healthy2 data acquired from the iRPE culture plate were partitioned to train DNN-F to predict TER.

    Data from this donor include weekly transepithelial resistance (TER) measurements for all but 6 samples, out of 216 possible measurements (36 wells x 6 timepoints) associated with 2580 fields of view. In addition, VEGF-Ratio data are included for 12 wells at five different timepoints. The images were collected using narrowband (bandpass) color filters. This dataset contains 12 replicates of each experimental condition (no treatment, HPI4, Aphidicolin).
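    For orientation, the sketch below shows the general shape of training a regression CNN to map QBAM image tiles to a TER value. The architecture, tile shape, and `loader` are illustrative placeholders, not the DNN-F configuration used in the paper.

```python
import torch
import torch.nn as nn

class TinyRegressor(nn.Module):
    """A deliberately small stand-in for DNN-F: conv features -> scalar TER."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):  # x: (N, 1, H, W) absorbance tiles
        return self.head(self.features(x).flatten(1))

def train(model, loader, epochs=10, lr=1e-3):
    """Minimise mean-squared error between predicted and measured TER."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for tiles, ter in loader:  # ter: (N, 1) measured resistances
            opt.zero_grad()
            loss_fn(model(tiles), ter).backward()
            opt.step()
    return model
```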

  2. iRPE from Donors with Age-Related Macular Degeneration (AMD)

    Folder: AMD_Data

    Access to AMD_Data (3.97 GB)

    iRPE were developed from three patients with AMD in a current good manufacturing practices (cGMP) facility. The entire process of developing iRPE for each patient was repeated multiple times, resulting in multiple iRPE clones per AMD donor. Images were captured after the cells had matured and were fixed. Additional information on this dataset is described in the “DataSummary.txt” file within the dataset.

    Subfolder: AnalysisOfSegmentedData (Used to train DNN-I)

    This dataset includes QBAM images of all donors and clones, segmentation images that indicate the locations of cell borders (generated using DNN-S), features of the cells in each image (extracted using WIPP), and functional data.
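    To give a flavor of the kind of per-cell measurements such a feature table might contain, the sketch below extracts a few basic shape and intensity features from a border segmentation mask with scikit-image. This is not the WIPP feature set from the paper, and the file names are placeholders.

```python
import numpy as np
from skimage.measure import label, regionprops_table
from tifffile import imread

# Border segmentation mask and matching QBAM image (placeholder file names).
# Assume borders are foreground, so cell interiors are the connected
# components of the inverted mask.
borders = imread("segmentation_mask.tif") > 0
cells = label(~borders)

# A handful of basic shape and intensity features per cell; the WIPP
# feature set used in the paper is considerably larger.
features = regionprops_table(
    cells,
    intensity_image=imread("qbam_image.tif").astype(np.float64),
    properties=("label", "area", "eccentricity", "perimeter",
                "mean_intensity"),
)
for name, values in features.items():
    print(name, np.asarray(values)[:5])  # preview the first five cells
```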

    Subfolder: DonorMatching

    This subfolder contains all of the outputs from the algorithms developed in the paper to associate cells with donors (DNN-I and LSVM).
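    As a sketch of the classical machine learning half of that comparison, the snippet below trains a linear support vector machine on per-cell features to predict donor identity. The feature table and its column names are hypothetical, and the paper's LSVM pipeline (feature selection, validation splits) is not reproduced here.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Hypothetical per-cell feature table with a "donor" label column.
df = pd.read_csv("cell_features.csv")
X = df.drop(columns=["donor"]).to_numpy()
y = df["donor"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Standardise the features, then fit a linear support vector machine.
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_train, y_train)
print(f"held-out donor accuracy: {clf.score(X_test, y_test):.2f}")
```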

  3. iRPE from Donors with Oculocutaneous Albinism (OCA)

    Folder: OCA_Data

    Access to OCA_Data (Diseased RPE, 35.7 GB)

    iRPE from five different OCA patients were imaged after the cells had matured and were fixed. Cells were fluorescently labeled to identify cell borders (ZO-1) and imaged using QBAM. Some of the data in this dataset also contain phase contrast images, bright-field images captured at the same time as the fluorescent images, and/or images of cell nuclei (DAPI). QBAM images are not registered to the other images in these datasets, but the non-QBAM images are registered to the fluorescent cell border images. Registration can therefore be performed between the raw bright-field images captured during QBAM and either the phase contrast or bright-field images.

  4. Fluorescent and QBAM Images for Neural Network Segmentation (NN_Data)

    Folder: NN_Data

    Access to NN_Data (Neural Network Segmentation Data, 4.12 GB)

    This folder contains the data, partitioned into training and test sets, used to train networks to segment RPE cells in fluorescent or QBAM images.

    Subfolder: ZO1_DNNZ (Used to train DNN-Z)

    This dataset contains fluorescent images of RPE from humans and mice, used to train a deep neural network to segment cell borders. The image labels are hand-corrected border segmentations. All images are 256x256 pixels in size. Additional information on this dataset is described in the “DataSummary.txt” file within each dataset's folder.
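    The sketch below is a minimal encoder-decoder of the kind commonly used for border segmentation, sized for the 256x256 tiles in this folder. It is a generic stand-in for illustration only, not the DNN-Z architecture from the paper.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinySegNet(nn.Module):
    """Minimal U-Net-style stand-in: 1-channel fluorescent tile in,
    per-pixel cell-border probability out."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, x):                        # x: (N, 1, 256, 256)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.up(e2)
        d = self.dec(torch.cat([d, e1], dim=1))  # skip connection
        return torch.sigmoid(self.out(d))        # border probability map

# Train with binary cross-entropy against the hand-corrected border labels:
# loss = nn.functional.binary_cross_entropy(model(tiles), labels)
```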

    Subfolder: QBAM_DNNS (Used to train DNN-S)

    This dataset contains QBAM images of iRPE from patients with AMD. The image labels were generated by segmenting fluorescently labeled cell borders that were registered to the QBAM images. All images are 256x256 pixels in size and contain three color channels. Additional information on this dataset is described in the “DataSummary.txt” file within each dataset's folder.

    Subfolder: Unregistered_Images

    This dataset contains the original, unregistered QBAM, bright-field, and fluorescent cell border (ZO-1) images used in the paper to register QBAM images with ZO-1 fluorescent images. Two sets of bright-field images are included: one set is registered to the QBAM images and the other set is registered to the ZO-1 fluorescent images. In the paper, the two bright-field sets were registered to each other by cross-correlation, and the resulting alignment was used to overlay the ZO-1 fluorescent images on the QBAM images. The data in this unregistered image set were also used to train a neural network to associate cells with a donor (DNN-I). Additional information on this dataset is described in the “DataSummary.txt” file within each dataset's folder.
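    A minimal sketch of that bright-field-to-bright-field registration step follows, assuming a pure translational offset and using scikit-image's phase cross-correlation; the file names are placeholders, and the paper's registration pipeline may differ.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation
from tifffile import imread

# Bright-field image registered to the QBAM stack, and bright-field image
# registered to the ZO-1 fluorescent stack (placeholder file names).
bf_qbam = imread("brightfield_qbam.tif").astype(np.float64)
bf_zo1 = imread("brightfield_zo1.tif").astype(np.float64)

# Estimate the translation that aligns the two bright-field images.
offset, error, _ = phase_cross_correlation(bf_qbam, bf_zo1)

# Apply the same translation to the ZO-1 image so it overlays the QBAM image.
zo1 = imread("zo1.tif").astype(np.float64)
zo1_on_qbam = nd_shift(zo1, shift=offset)
print("estimated (row, col) offset:", offset, "error:", error)
```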

Credit

A DOI has been minted for this data record: https://doi.org/10.18434/T4/1503229

If you use this data, please cite the DOI and the appropriate publication:

Quantitative Bright-Field Microscopy Combined with Deep Neural Networks Predict Live Tissue Function
Nicholas J. Schaub, Nathan A. Hotaling, Sarala Padi, Petre Manescu, Qin Wan, Ruchi Sharma, Joe Chalfoun, Mylene Simon, Mohamed Ouladi, Carl G. Simon, Jr., Peter Bajcsy, Kapil Bharti
(under review)