We will discuss the segmentation of single cells from GFP (Green Fluorescent Protein)
images and use the histogram of cell sizes to detect and count the objects that contain
more than one cell.
Challenges
Cell seeding on a plate is a common practice in many laboratories. The operator has
limited control over cell placement, leading to a largely random spatial distribution of cells.
Within 24–48 h after cell seeding, cells are often in contact. Segmentation techniques can
detect single cells in images with higher accuracy and confidence if cells are well separated.
The confidence in the cell count decreases when the cells in a FOV are clumped together.
Inputs
We will analyze one well on a plate that is randomly seeded with A10 cells. The well is
imaged using a phase contrast microscope as a grid of 23 × 29 tiles with 10% overlap.
Each tile has a dimension of 1392 × 1040 pixels, with intensities represented by 16 bits per
pixel (BPP). The images were acquired with a 10X objective.
Analyses
We begin by stitching the single image tiles into a large FOV image and then use segmentation
to detect all cells in the well. Next, we use WIPP to distinguish single cells from groups
of cells. Once their locations in the dish are identified, the biologist can perform additional
manual validation of the results.
Image Processing Pipeline
The pipeline to extract cell count consists of:
1. creating a new collection and uploading images,
2. stitching image tiles,
3. intensity scaling and pyramid building,
4. assembling tiles into a large FOV image,
5. segmentation, and
6. feature extraction.
This diagram displays the detailed pipeline used to solve the cell counting problem.
The items marked with long dotted orange lines are for visualization purposes only.
Note that the intensity scaling is applied only for visualization, while the
image assembly is applied to the raw input tiles.
Upload Dataset
From the “Main Page”, click on the “Image Collections” tab to
access the “Manage Image Collections” page.
Press the “Create new collection” button and enter the name of the dataset.
This name will be tagged to that dataset.
Click the “add files to collection” button and browse for the saved files, or
drag and drop the files into the browser area.
Note: Once the dataset is locked, it is available to the algorithms (or jobs) as input.
Jobs are accessed from the “Image Processing” tab.
Image Processing - Stitching
Cell culture microscopy must address the spatial scale mismatch between
the microscope's FOV (Field of View) and the size of the specimen under study.
For example, the area of a standard 6-well plate well is approximately 1000
times larger than the FOV acquired with a 10X objective. Automated microscopy
overcomes this issue by acquiring a grid of partially overlapping images (tiles)
that cover most of the experimental area. Stitching is the term used in the literature
for combining single images into one large mosaic.
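To illustrate the core idea behind stitching, the sketch below estimates the translation between the nominal overlap regions of two neighboring tiles using phase correlation in scikit-image. This is an illustrative stand-in only (the file names and the assumed horizontal 10% overlap are hypothetical); WIPP's stitching job computes these translations for the entire tile grid.

# Illustrative only: estimate the shift between the overlap regions of two
# neighboring tiles with phase correlation (file names are hypothetical).
import numpy as np
from skimage import io
from skimage.registration import phase_cross_correlation

tile_a = io.imread("tile_r01_c01.tif").astype(np.float32)
tile_b = io.imread("tile_r01_c02.tif").astype(np.float32)

# Compare the right edge of tile A with the left edge of tile B,
# assuming roughly 10% horizontal overlap.
overlap_px = int(0.10 * tile_a.shape[1])
shift, error, _ = phase_cross_correlation(tile_a[:, -overlap_px:],
                                           tile_b[:, :overlap_px])
print("Estimated (row, col) offset between overlap regions:", shift)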
The “Image processing” page gives access to all the image processing
algorithms in WIPP.
Click on “Stitching jobs” and then “Create new job”.
Enter the parameters into the “Create new stitching job” form
as shown here and then click “Submit”.
When stitching is done, the job metadata can be found by
searching for the job name, which gives access to the following page.
Image Processing - Intensity Scaling and Pyramid Building
Most microscopy images are in the 16 BPP (uint16) format. Web browsers
can only render 8 BPP images, so the original images must be scaled
before launching the pyramid building to enable visualization of the large FOV image.
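The sketch below shows one way such a scaling could be done offline with scikit-image: the 16-bit intensities are clipped to a percentile range and rescaled to 8 bits. The percentile choice and file names are assumptions for illustration; WIPP's intensity scaling job performs the equivalent conversion on the whole collection.

# Sketch: convert a 16-bit image to 8 bits for display (percentiles and
# file names are illustrative assumptions).
import numpy as np
from skimage import io, exposure

img16 = io.imread("phase_tile.tif")
lo, hi = np.percentile(img16, (1, 99))     # clip extreme outliers
img8 = exposure.rescale_intensity(img16, in_range=(lo, hi),
                                   out_range=(0, 255)).astype(np.uint8)
io.imsave("phase_tile_8bpp.png", img8)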
Click on “Intensity scaling jobs” and then “Create new job”.
Enter the name of the job and select the corresponding collection to be scaled.
Go to “Pyramid building jobs” and click on “Create new job”.
Enter the name of the job, the name of the stitching vector
created from the stitching job and the scaled collection.
View the large image by clicking on “Pyramids” and selecting the newly created job.
Use the left mouse button to pan around the image and the scroll wheel
to zoom in and out.
Image Processing - Image assembly
Quantitative analyses are performed on the original raw intensity images.
We need to assemble the large FOV image before segmenting the cells.
Click on “Image Assembling jobs” and then “Create new job”.
Enter the name of the job and select the corresponding stitching
vector and the original collection (not the scaled one).
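For reference, the sketch below shows the basic idea of image assembly: each raw tile is copied into a large canvas at its globally computed position. The tile_positions.csv file (columns file, x, y) is a hypothetical stand-in for the information carried by the stitching vector, which WIPP's image assembling job uses directly.

# Sketch of assembling raw tiles into one large FOV image from a table of
# global tile positions (the CSV layout is a hypothetical stand-in for the
# stitching vector). Later tiles simply overwrite the overlap regions.
import csv
import numpy as np
from skimage import io

tile_w, tile_h = 1392, 1040
mosaic = np.zeros((27248, 28954), dtype=np.uint16)   # ~1.5 GB in memory

with open("tile_positions.csv") as f:                # columns: file, x, y
    for row in csv.DictReader(f):
        tile = io.imread(row["file"])
        x, y = int(row["x"]), int(row["y"])
        mosaic[y:y + tile_h, x:x + tile_w] = tile

io.imsave("assembled_image.tif", mosaic)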
Image Processing - Segmentation
Click on “EGT Segmentation Jobs” and then “Create new job”.
Enter the name of the job and select the assembled image.
Input 250 as minimum object size and submit.
Visualize and verify the segmentation output by navigating
to “Stitching jobs” from the “Image processing” panel and
selecting the “Time sequence of 1 FOV” option from the algorithms
drop-down menu. This operation creates a stitching vector
for the binary output of the EGT segmentation. Input this stitching
vector into the pyramid building job to create a pyramid for the binary
image as described in step 10.
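For intuition about what the segmentation job does, the sketch below is a simplified stand-in for gradient-based foreground detection, not the actual EGT algorithm: the gradient magnitude is thresholded (here with Otsu's method as a placeholder for EGT's empirically derived threshold), holes are filled, and objects smaller than the 250-pixel minimum are removed.

# Simplified stand-in for gradient-based segmentation (NOT the actual EGT
# algorithm): threshold the gradient magnitude, fill holes, and discard
# objects smaller than the 250-pixel minimum used in the WIPP job.
import numpy as np
from skimage import io, filters, morphology
from scipy import ndimage as ndi

img = io.imread("assembled_image.tif").astype(np.float32)

grad = filters.sobel(img)                       # gradient magnitude
mask = grad > filters.threshold_otsu(grad)      # Otsu as a stand-in threshold
mask = ndi.binary_fill_holes(mask)
mask = morphology.remove_small_objects(mask, min_size=250)

io.imsave("segmentation_mask.png", (mask * 255).astype(np.uint8))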
Image Processing - Visualization
The Visualizations tab can be used to inspect multiple layers overlaid
on top of each other. It can be used to scan around the large image and
inspect the segmentation results.
Click on “Visualizations” and then “Create a new visualization”.
Enter the name of the job.
Enter the name of the group to visualize and click the
“+” sign to add the group.
Enter the name of the layer to display (GFP in this case),
the name of the pyramid and click the “+” sign to add the layer.
Repeat this process for the second layer (image segmentation).
The user can now scan around the image and choose to
display one layer or overlay multiple layers on top of
each other. Use the slider bar to change the transparency
of the two layers and visually check the segmentation result.
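Outside of WIPP, a similar overlay can be produced with matplotlib, which may help when spot-checking a cropped region locally. The file names below refer to the outputs of the earlier sketches and are assumptions.

# Sketch: overlay the binary mask on the intensity image with adjustable
# transparency, mimicking the layer slider in the WIPP visualization.
import numpy as np
import matplotlib.pyplot as plt
from skimage import io

img = io.imread("assembled_image.tif")
mask = io.imread("segmentation_mask.png") > 0

overlay = np.ma.masked_where(~mask, mask)       # draw only foreground pixels
plt.imshow(img, cmap="gray")
plt.imshow(overlay, cmap="autumn", alpha=0.4)   # alpha plays the slider role
plt.axis("off")
plt.show()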
Image Processing - Binary Image Labeling
The output from EGT Segmentation is a binary image with the label set to 1 for
all foreground pixels and 0 for all background pixels. To distinguish single
segmented objects (cells), we need to run the “mask labeling job”.
This operation assigns a unique label to each image region whose pixels
are connected via either 4 or 8 neighbors.
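The scikit-image equivalent of this operation is connected-component labeling, sketched below with both connectivity options (the mask file name is an assumption carried over from the earlier sketches).

# Connected-component labeling of a binary mask: each connected region gets
# a unique integer label. connectivity=1 means 4-connected neighbors,
# connectivity=2 means 8-connected neighbors.
from skimage import io, measure

mask = io.imread("segmentation_mask.png") > 0

labels_4 = measure.label(mask, connectivity=1)
labels_8 = measure.label(mask, connectivity=2)
print("4-connected objects:", labels_4.max())
print("8-connected objects:", labels_8.max())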
Select “Mask Labeling Jobs” and click on “Create new job”.
Enter the name of the job, select the binary collection, and click “Submit”.
A pyramid can be built for the labeled mask using the same
stitching vector created from the binary image.
Image Processing - Feature Extraction and Single Cell Detection
To detect single cells, we will compute the area of each object
in the labeled image and display some population statistics on the web.
Go to the “Feature Extraction” tab, select “Feature Extraction jobs”
and click on “Create new job”. Enter the name of the job,
an optional email address and click on “Next step”.
Input the name of the stitched image collection.
Check the box that says “pyramid-optional”.
This option allows the feature extraction job to create
the web statistics tool, populated with the current dataset.
Input the name of the labeled image collection and click “Next step”.
Under “Search” type “Area” to narrow down the search,
and then select the first option.
Scroll down to the end of the menu and click “Add selected features”.
When the job is complete, we can view the population statistics
by clicking on “Stat modeling”
and selecting “Area” as the feature to analyze.
We can now sort cell areas in the large image.
We can visually choose the area threshold beyond which an
object is considered a group of cells rather than a single cell.
The confidence in detecting isolated cells is higher than
for cells in contact with others. By finding spatial regions with groups
of cells, we can either exclude them from the analysis or inspect
them visually.
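To make the last step concrete, the sketch below computes per-object areas from a labeled mask, plots their histogram, and flags objects above an area cutoff as likely clumps. The labeled-mask file name and the 2000-pixel cutoff are illustrative assumptions; in practice the cutoff is read off the area histogram shown by the Stat modeling page.

# Sketch: per-object areas, their histogram, and a simple single-cell vs.
# clump split (the 2000-pixel cutoff is a placeholder to be chosen visually).
import matplotlib.pyplot as plt
from skimage import io, measure

labels = io.imread("labeled_mask.tif")               # hypothetical file name
props = measure.regionprops_table(labels, properties=("label", "area"))

plt.hist(props["area"], bins=50)
plt.xlabel("Object area (pixels)")
plt.ylabel("Count")
plt.title("Cell size distribution")
plt.show()

single_cell_max_area = 2000                          # assumed threshold
clumps = [lab for lab, area in zip(props["label"], props["area"])
          if area > single_cell_max_area]
print(f"{len(clumps)} objects flagged as likely cell clumps")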