Paper Title: Quantitative Bright-Field Microscopy Combined with Deep Neural Networks Predicts Live Tissue Function
Abstract: A progressive increase in the number of cell therapies in the preclinical and clinical phases has prompted the need for reliable and non-invasive assays to validate transplant function in clinical biomanufacturing. Here, we developed a robust characterization platform composed of quantitative bright-field absorbance microscopy (QBAM) and deep neural networks (DNNs) to non-invasively predict tissue function and cellular donor identity. The platform was validated using clinical-grade induced pluripotent stem cell-derived retinal pigment epithelial cells (iRPE). QBAM images of iRPE were used to train DNNs that predicted iRPE monolayer transepithelial resistance (R2=0.97), predicted polarized VEGF secretion (R2=0.89), and matched iRPE monolayers to their iPSC donors (accuracy=85%). DNN predictions were supplemented with traditional machine learning algorithms that identified shape and texture features of single cells that were used to predict tissue function and iPSC donor identity. These results demonstrate that non-invasive cell therapy characterization can be achieved with quantitative bright-field microscopy and machine learning.
iRPE were developed from two healthy donors (Healthy-1 and Healthy-2). Live iRPE were imaged using QBAM every week during the in vitro maturation process. The data are organized by culture plate, then by imaging date and time, then by the color filter used to capture the images; individual image file names encode the well ID and grid position. Both Healthy-1 and Healthy-2 iRPE were cultured in 12-well plates as outlined in Figure 1, with only half of each 12-well plate containing cells (green circles, Figure 1). The Blank Well (blue circles) was filled with culture medium but contained no cells and was used for the benchmarking and calibration protocols that are part of the QBAM process. The grid in each well indicates that a 4x3 grid of overlapping images (~10-15% overlap) was captured for each well. One unique characteristic of this dataset is that each plate contains iRPE treated with maturation inhibitors (negative control, HPI4), maturation promoters (positive control, Aphidicolin), or neither. The imaging parameters and functional data collected for the Healthy-1 and Healthy-2 datasets differ; details of data collection and contents are included in the “DataSummary.txt” file in each subfolder.
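The 4x3 overlapping acquisition grid described above can be sketched as a tile-layout computation. The tile size and exact overlap fraction below are illustrative assumptions; the dataset's actual values may differ.

```python
# Sketch of the 4x3 overlapping acquisition grid. TILE_W/TILE_H and OVERLAP
# are assumptions for illustration, not values taken from the dataset.
TILE_W, TILE_H = 1024, 1024   # hypothetical image size in pixels
OVERLAP = 0.12                # within the stated ~10-15% overlap range

def grid_origins(cols=4, rows=3, w=TILE_W, h=TILE_H, overlap=OVERLAP):
    """Return a (col, row) -> (x, y) map of pixel origins for each tile."""
    step_x = int(w * (1 - overlap))   # horizontal step between tile origins
    step_y = int(h * (1 - overlap))   # vertical step between tile origins
    return {(c, r): (c * step_x, r * step_y)
            for r in range(rows) for c in range(cols)}

origins = grid_origins()
# extent of the stitched mosaic for one well
mosaic_w = max(x for x, _ in origins.values()) + TILE_W
mosaic_h = max(y for _, y in origins.values()) + TILE_H
```

Neighboring tiles share `OVERLAP * TILE_W` columns (or rows), which is what makes stitching and registration of adjacent fields of view possible.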
Data from this donor have end-point functional data, and the images were collected using broadband color filters, i.e., filters that pass a broad range of wavelengths. This dataset contains 8 replicates of each experimental condition (no treatment, HPI4, Aphidicolin).
Data from this donor have weekly transepithelial resistance (TER) measurements for all but 6 samples. In total, there are 216 TER measurements (36 wells x 6 timepoints) associated with 2580 fields of view. In addition, VEGF-Ratio data are included for 12 wells at five different timepoints. The images were collected using narrowband (bandpass) color filters. This dataset contains 12 replicates of each experimental condition (no treatment, HPI4, Aphidicolin).
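The counts above can be checked with simple bookkeeping. The assumption that every well visit yields a complete 4x3 grid is ours for illustration, not a statement from the dataset.

```python
# Bookkeeping sketch for the counts quoted above.
wells, timepoints = 36, 6                        # 12 replicates x 3 conditions
ter_measurements = wells * timepoints            # 216 TER measurements
grid_images = 4 * 3                              # images per well per visit
expected_fovs = ter_measurements * grid_images   # 2592 if every grid is complete
reported_fovs = 2580                             # from the dataset description
missing_fovs = expected_fovs - reported_fovs     # 12, one full grid's worth
```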
iRPE were developed from three patients with AMD in a current good manufacturing practices (cGMP) facility. The entire process of developing iRPE for each patient was repeated multiple times, resulting in multiple iRPE clones per AMD donor. Images were captured after the cells had matured and were fixed. Additional information on this dataset is described in the “DataSummary.txt” file within the dataset.
This dataset includes QBAM images of all donors and clones, segmentation images that indicate the locations of cell borders (generated using DNN-S), features of the cells in each image (extracted using WIPP), and functional data.
This dataset contains all of the outputs from the algorithms developed in the paper to associate cells with donors (DNN-I and LSVM).
iRPE from five different OCA patients were imaged after the cells had matured and were fixed. Cells were fluorescently labeled to identify cell borders (ZO-1) and imaged using QBAM. Some of the data in this dataset also contain phase contrast images, bright-field images captured at the same time as the fluorescent images, and/or images of cell nuclei (DAPI). QBAM images are not registered to the other images in these datasets, but the non-QBAM images are registered to the fluorescent cell border images. Registration may therefore be performed between the raw bright-field images captured during QBAM and either the phase contrast or bright-field images.
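One common way to estimate the translation between two such bright-field image sets is phase correlation. A minimal numpy sketch, assuming a pure integer pixel shift (real data may also require rotation, scaling, or subpixel refinement, which this does not handle):

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the (dy, dx) to np.roll `mov` by so it aligns with `ref`,
    using the normalized cross-power spectrum (phase correlation)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    cross /= np.abs(cross) + 1e-12               # keep phase information only
    corr = np.fft.ifft2(cross).real              # peak marks the offset
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size into negative offsets
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

For example, if `mov` is `ref` circularly shifted by (5, -3), the function returns (-5, 3): the shift that rolls `mov` back onto `ref`.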
The datasets in this folder contain the data (partitioned into training and test sets) used to segment RPE cells in fluorescent or QBAM images.
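A training/test partition of image files like the one described above can be sketched as a shuffled split of file names. The file names, seed, and 80/20 ratio below are illustrative assumptions, not the actual split used in the paper.

```python
import random

# Hypothetical file names standing in for the dataset's image tiles.
files = [f"tile_{i:04d}.png" for i in range(100)]

rng = random.Random(0)         # fixed seed so the split is reproducible
rng.shuffle(files)

split = int(0.8 * len(files))  # 80/20 train/test ratio (an assumption)
train, test = files[:split], files[split:]
```

Shuffling before splitting avoids ordering artifacts (e.g., all tiles from one well landing in the same partition is still possible here; a well-aware split would group tiles by well first).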
This dataset contains QBAM images of iRPE from patients with AMD. The image labels were generated by segmenting fluorescently labeled cell borders that were registered to the QBAM images. All images are 256x256 pixels in size and contain three color channels. Additional information on this dataset is described in the “DataSummary.txt” file within each dataset's folder.
This dataset contains unregistered QBAM images, bright-field images, and fluorescent images of cell borders (ZO-1). This is the original, unregistered data used in the paper to register QBAM images with ZO-1 fluorescent images. Two sets of bright-field images are included: one set is registered to the QBAM images and the other set is registered to the ZO-1 fluorescent images. In the paper, these images were registered by correlating the two sets of bright-field images with each other, then overlaying the ZO-1 fluorescent images on the QBAM images. The data in the uncorrelated images dataset were used to train a neural network to associate cells with a donor (DNN-I). Additional information on this dataset is described in the “DataSummary.txt” file within each dataset's folder.
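The two-step alignment described above (bright-field to bright-field correlation, then overlaying ZO-1 onto QBAM) amounts to composing two estimated offsets. A minimal sketch, assuming pure integer translations; the actual registration may involve more than translation, and the frame names in the comments are hypothetical.

```python
def compose(shift_ab, shift_bc):
    """Compose two (dy, dx) translations: A->B followed by B->C gives A->C."""
    return (shift_ab[0] + shift_bc[0], shift_ab[1] + shift_bc[1])

# Hypothetical offsets: QBAM -> its bright-field set, composed with that
# bright-field set -> the bright-field set registered to the ZO-1 images,
# yields the offset for overlaying ZO-1 borders on the QBAM images.
qbam_to_zo1 = compose((2, -1), (3, 4))
```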