Image Processing

The Image Processing module is designed to run image processing jobs and inspect their resulting artifacts in the WIPP system.
It is accessible from the "Image Processing" tab in the top menu bar, and allows:
  • image calibration (flat field and background correction)
  • image tile stitching
  • image segmentation to obtain a binary mask or labeled image
  • image intensity scaling
  • object tracking
  • image pyramid building for deep zoom viewing
  • configuration of multiple image pyramids into a multiple-layer deep zoom visualization
  • image tessellation to create rectangular or hexagonal image partitions
  • image assembly of tiles
  • object labeling from binary masks
WIPP Image Processing module screenshot

Click on one of the tabs in the left menu to access the corresponding list of jobs or artifacts.

Below are explanations of how to configure and run image processing jobs, per category.

To run a job using an Image collection, wait until its images have been uploaded and converted to ome.tif before launching the job.

Stitching jobs

This processing step generates position information about each small field of view image in the coordinate system of a large field of view image.

From the Image Processing view, click on the "Stitching jobs" tab to access the Stitching jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP Stitching jobs screenshot
To create a new Stitching job, click on the "Create new job" button. Below is a description of how to configure each input parameter.
WIPP Stitching job with MIST screenshot
WIPP Stitching job with metadata screenshot
WIPP Stitching job Mosaic screenshot
WIPP Stitching job Time series of 1 FOV screenshot

Job name

Unique name for the job. The resulting stitching vector will be named after the stitching job name.

Grid collection

Input image collection.

Algorithm

MIST

A stitching algorithm for small and large two-dimensional (2D) image grid collections. This method optimizes the translations computed by the phase-correlation method based on the Fourier transform. Learn more by clicking here.

Stage metadata

This option uses the microscope stage information recorded during the acquisition of each small image tile (one field of view) to compute its global position in the grid of image tiles (one large field of view).

No Overlap Mosaic

This option creates a stitching vector for image tiles without any spatial overlap using the MIST algorithm.

Time Sequence of 1 FOV

This option is useful when building a pyramid from a sequence of images. It does not perform any stitching because the input collection has only one field of view per time point. The images are sorted alphabetically if no filename pattern is provided, or sequentially according to a temporal filename pattern.

File name pattern type

This field is used to specify the type of filename pattern used in the acquired images. Possible options: Sequential and Row-Column.

Sequential Filename Pattern Types

Sequential Filename Pattern Types have only one set of curly brackets "{}", which denotes the image position in the image grid. The special character "p" must be used between the curly brackets. Therefore, a valid sequential filename pattern must have one "{p}" block with at least one "p" character between the curly brackets.
Examples:
  • 1. Img_pos001.ome.tif = Img_pos{ppp}.ome.tif
  • 2. ImageName01.ome.tif = ImageName{pp}.ome.tif

Row-column Filename Pattern Types

Row-column Filename Pattern Types have two sets of curly brackets "{r}" "{c}". One block denotes the image row index within the image grid and uses the special character "r". The other block denotes the image column index within the image grid and uses the special character "c". For a valid row-column Filename Pattern there must be one "{r}" block with at least one "r" between the curly brackets and one "{c}" block with at least one "c" between the curly brackets.
Examples:
  • 1. Img_row01_col01.ome.tif = Img_row{rr}_col{cc}.ome.tif
  • 2. ImageName_001_001.ome.tif = ImageName_{rrr}_{ccc}.ome.tif
The usage of "{p}" or "{r}" and "{c}" must match the Filename Pattern Type selected. If the Filename Pattern contains "{p}" the sequential Filename Pattern Type must be selected. If the Filename Pattern contains "{r}" and "{c}" then row-column Filename Pattern Type must be selected.
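Both pattern types can be matched mechanically: each curly-bracket block stands for a fixed-width run of digits. A minimal Python sketch (the `pattern_to_regex` helper is illustrative, not part of WIPP):

```python
import re

def pattern_to_regex(pattern):
    """Convert a WIPP-style filename pattern to a compiled regex.

    Each "{ppp}"-, "{rr}"-, "{cc}"-, or "{ttt}"-style block becomes a
    fixed-width digit group of the same length; the rest of the name
    is matched literally.
    """
    parts = re.split(r"(\{[prct]+\})", pattern)
    out = []
    for part in parts:
        if part.startswith("{") and part.endswith("}"):
            out.append(r"(\d{%d})" % (len(part) - 2))  # e.g. {rr} -> (\d{2})
        else:
            out.append(re.escape(part))                # literal filename text
    return re.compile("".join(out) + r"$")

rx = pattern_to_regex("Img_row{rr}_col{cc}.ome.tif")
print(rx.match("Img_row01_col05.ome.tif").groups())  # ('01', '05')
```

Note that the digit group is fixed-width: `{ppp}` matches "001" but not "1", which is why the pattern length must agree with the zero-padding of the filenames.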

Filename pattern

The Filename Pattern is used to match specific image files within the Image Collection. The stage metadata algorithm does not support this field.

MIST and No Overlap Mosaic

There are two types of Filename Pattern, Sequential and Row-Column (see Filename Pattern Types above), both of which can handle time slices.

Note: Time Slices

The MIST algorithm can stitch a series of independent 2D image grids, for example, a time-lapse series of image grids. The time-slice stitching is controlled by an additional set of curly brackets in the Filename Pattern with the "{ttt}" special text. The special text "{ttt}" must be used regardless of whether the independent 2D image grids are time slices or z-stack slices.
Examples:
  • 1. Img_pos001_time01.ome.tif = Img_pos{ppp}_time{tt}.ome.tif
  • 2. Img_r0001_c0001_t001.ome.tif = Img_r{rrrr}_c{cccc}_t{ttt}.ome.tif
  • 3. ImageName_001_001_01.ome.tif = ImageName_{rrr}_{ccc}_{tt}.ome.tif
  • 4. Dataset01_000001.ome.tif = Dataset{tt}_000{ppp}.ome.tif
  • 5. Img_t001.ome.tif = Img_t{ttt}.ome.tif

Time Sequence of 1 FOV

Only the time index, as in "{ttt}", is accepted when using this algorithm. Since this option does not stitch the images of the input collection, grid indices used in MIST such as "{rrr}", "{ccc}" or "{ppp}" are not handled, and a warning message is displayed when they are used in the filename pattern.
Examples:
  • 1. Img_time01.ome.tif = Img_time{tt}.ome.tif
  • 2. Img_t001.ome.tif = Img_t{ttt}.ome.tif
The file name pattern can also be left blank. In this case, the images will be sorted alphabetically and time slices are created according to that order.

Time slices

Leave this field blank to stitch all time slices (starting from 0 or 1). To stitch specific time slices, you must add the special format text "{ttt}" to the Filename Pattern. If there is no "{ttt}" special format text in the Filename Pattern, then this field must be blank. This input supports a single value or a range using a '-'.
Examples:
  • 1. "1-25" stitches timeslices 1 through 25 (Note: pyramid building does not support time slices that are not contiguous)
  • 2. "" stitches all available timeslices
  • 3. "3" stitches timeslice 3
  • 4. "0" stitches timeslice 0
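The accepted values can be summarized with a small helper (`parse_time_slices` is a hypothetical name used only to illustrate the field's semantics):

```python
def parse_time_slices(spec, available):
    """Interpret the "Time slices" field.

    "" selects every available time slice, "3" a single slice,
    and "1-25" an inclusive range.
    """
    spec = spec.strip()
    if not spec:                      # blank: stitch all time slices
        return sorted(available)
    if "-" in spec:                   # "start-end" inclusive range
        start, end = (int(v) for v in spec.split("-", 1))
        return [t for t in sorted(available) if start <= t <= end]
    return [int(spec)] if int(spec) in available else []

print(parse_time_slices("1-3", {0, 1, 2, 3, 4}))  # [1, 2, 3]
```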

Starting point

The starting point of the microscope scan. Possible options: Top Left, Top Right, Bottom Left, Bottom Right. This specifies the scanning origin for the input collection (a grid of images).

Direction

The direction and pattern of the microscope motion during acquisition. Possible options: Vertical Combing, Vertical Continuous, Horizontal Combing, and Horizontal Continuous.

Stage repeatability

Sets the stage repeatability variable when computing the global optimization. This value is used to represent the repeatability of the microscope stage movement (A to B and then back to A ~ delta A). It is used for determining the search space of the hill climbing algorithm. This value is specified in pixels.

Number of columns

The number of images in one row of the grid (The number of columns, the width of the image grid).

Number of rows

The number of images in one column of the grid (The number of rows, the height of the image grid).

Horizontal overlap

Sets the horizontal spatial overlap of adjacent image tiles. The value is used when optimizing the global position of image tiles to filter translations as good or bad. Good translations serve as starting positions for the hill climbing algorithm. Setting this value can improve the accuracy of stitching. This value is specified in percent and must be between 0 and 100.

Vertical overlap

Sets the vertical spatial overlap of adjacent image tiles. The value is used the same way as the horizontal overlap. This value is specified in percent and must be between 0 and 100.

Filtering jobs

Microscopy imaging introduces a variety of artifacts including noise. This processing step is designed to reduce the effect of noise on image quality.

From the Image Processing view, click on the "Filtering jobs" tab to access the Filtering jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP Filtering jobs screenshot
To create a new Filtering job, click on the "Create new job" button. Below is a description of how to configure each input parameter.
WIPP Filtering job (mean filter) screenshot

Job name

Unique name for the job. The resulting image collection will be named after the filtering job name.

Image collection

Input image collection.

Filter type

Choose a filter type among the available options: Mean, Median, Min, Max and Gaussian blur.

Radius

Kernel size in pixels.

Flat Field Correction jobs

This processing step corrects the distortions introduced by the optics in microscopes.

From the Image Processing view, click on the "Flat Field Correction jobs" tab to access the Flat Field Correction jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP Flat Field Correction jobs screenshot
To create a new Flat Field Correction job, click on the "Create new job" button. Below is a description of how to configure each input parameter.
WIPP Flat Field Correction job screenshot

Job name

Unique name for the job. The resulting image collection will be named after the job name.

Image collection

Input image collection to be corrected.

Dark collection

Image collection containing one image acquired by the microscope with a closed camera shutter.

Fluorescein collection

Image collection containing one image of a solution without cells acquired by the microscope.

Background Correction jobs

This processing step corrects intensity values in fluorescent microscopy images by subtracting the fluorescent signal introduced by the surrounding media from the signal measured at the cell locations.

From the Image Processing view, click on the "Background Correction jobs" tab to access the Background Correction jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP Background correction jobs screenshot
To create a new Background Correction job, click on the "Create new job" button. Below is a description of how to configure each input parameter.
WIPP Background Correction job screenshot

Job name

Unique name for the job. The resulting image collection will be named after the job name.

Image collection

Input image collection to be corrected.

Binary collection

Image collection of binary images. Binary values are 0 for background and 1 for foreground. Image file names in the binary image collection should match those from the input image collection.

Gap size

Distance in pixels between the circumference of the region of interest to the closest inner circumference of the doughnut area to be considered as background, in pixels.

Ring size

Thickness of the doughnut area or the distance in pixels between the inner and outer rings.
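Together, Gap size and Ring size define a doughnut of background pixels around each region of interest. A sketch using morphological dilation (the `background_ring` helper and the use of SciPy are illustrative assumptions, not the WIPP implementation):

```python
import numpy as np
from scipy import ndimage

def background_ring(roi_mask, gap, ring):
    """Doughnut-shaped background region around a binary ROI mask.

    gap  - pixels between the ROI boundary and the doughnut's inner edge
    ring - thickness of the doughnut, in pixels
    """
    inner = ndimage.binary_dilation(roi_mask, iterations=gap)   # ROI + gap
    outer = ndimage.binary_dilation(inner, iterations=ring)     # + ring
    return outer & ~inner   # keep only the doughnut itself
```

The mean intensity over this doughnut would then serve as the background estimate subtracted from the signal measured inside the ROI.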

For more details, click here.

Image Assembling jobs

This step builds a large field of view image from many small field of view images. The resulting assembled images can then be used, for instance, to compute features using the Web Feature Extraction.

From the Image Processing view, click on the "Image Assembling jobs" tab to access the Image Assembling jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP Image Assembling jobs screenshot
To create a new Image Assembling job, click on the "Create new job" button. Below is a description of how to configure each input parameter.
WIPP Image Assembling job screenshot

Job name

Unique name for the job. The resulting image collection will be named after the job name.

Stitching vector

Metadata of the location of each image in the acquired grid, after stitching is performed.

Grid collection

Collection of images as acquired on the microscope as a grid.

Intensity Scaling jobs

Images with a bit depth of more than 8 bits per pixel (BPP) cannot be rendered in current web browsers. The intensity scaling job redistributes the intensity values so that the 8 BPP image rendering delivers sufficient contrast for visual inspection. Since microscopy images are acquired with bit depths larger than 8 BPP and the intensities might not be evenly distributed, this processing step can be executed before pyramid building.

From the Image Processing view, click on the "Intensity Scaling jobs" tab to access the Intensity Scaling jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP Intensity Scaling jobs screenshot
To create a new Intensity Scaling job, click on the "Create new job" button. Below is a description of how to configure each input parameter. This computational job creates a new image collection to store the intensity-rescaled results.
WIPP Intensity Scaling job screenshot

Job name

Unique name for the job. The resulting image collection will be named after the job name.

Image collection

Input image collection.

Intensity scaler type

Use truncation unless there is a compelling reason not to. There are two types of intensity scaling:

Truncation (preferred)

By default, it saturates the bottom and top 1% of intensities and linearly maps the remaining intensities into the 8 BPP range. This is the same as Fiji/ImageJ auto-contrast. If the range start and end are specified, those values are used instead of computing the range start as the 1st percentile and the end as the 99th percentile.
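The default behavior can be sketched with NumPy (`truncate_to_8bit` is an illustrative helper, not the WIPP code):

```python
import numpy as np

def truncate_to_8bit(img, low=None, high=None):
    """Linear rescale to 8 BPP, saturating outside [low, high].

    When no range is given, the 1st and 99th intensity percentiles
    are used, matching the job's default behavior.
    """
    if low is None:
        low = np.percentile(img, 1)
    if high is None:
        high = np.percentile(img, 99)
    clipped = np.clip(img.astype(np.float64), low, high)
    return ((clipped - low) / (high - low) * 255).astype(np.uint8)
```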

Gamma Correction

This performs a non-linear (exponential) rescaling. By default the start and end values are the 1st percentile and the 99th percentile.
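A comparable sketch for the gamma path; the exponent value here is arbitrary, and only the percentile defaults come from the text:

```python
import numpy as np

def gamma_to_8bit(img, gamma=0.5, low=None, high=None):
    """Non-linear (power-law) rescale to 8 BPP.

    The 1st/99th percentile defaults mirror the truncation scaler;
    a gamma below 1 brightens mid-range intensities.
    """
    if low is None:
        low = np.percentile(img, 1)
    if high is None:
        high = np.percentile(img, 99)
    x = np.clip((img.astype(np.float64) - low) / (high - low), 0.0, 1.0)
    return (255 * x ** gamma).astype(np.uint8)
```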

EGT Segmentation jobs

This processing step labels image pixels as background and foreground.

From the Image Processing view, click on the "EGT Segmentation jobs" tab to access the EGT Segmentation jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP EGT Segmentation jobs screenshot
To create a new EGT Segmentation job, click on the "Create new job" button. Below is a description of how to configure each input parameter.
WIPP EGT Segmentation job screenshot

Job name

Unique name for the job. The resulting image collection will be named after the job name.

Image collection

Input image collection to segment.

Min object size

The size above which a detected foreground object is considered a region of interest. All objects below this value are considered noise and are eliminated from the segmented mask.

Keep Holes with

  • Minimum hole size [pixels]: minimum size of holes, below which a hole will be filled.
  • Maximum hole size [pixels]: maximum size of holes, above which a hole will be filled.
  • Join operator: Boolean operation of "AND" or "OR".
  • Minimum intensity of a hole [percentile]: the minimum average intensity for a hole, below which it will be filled.
  • Maximum intensity of a hole [percentile]: the maximum average intensity for a hole, above which it will be filled.
  • Greedy: controls how greedy the foreground is with respect to the background. If the segmentation mislabels some foreground pixels as background pixels, then increasing the greedy parameter in the positive direction labels more image pixels as foreground.

For more details, click here.

Fogbank Segmentation jobs

This processing step labels image pixels with a unique label and has been specifically designed for cell segmentation from phase contrast images (including mitotic events).

From the Image Processing view, click on the "Fogbank Segmentation jobs" tab to access the Fogbank Segmentation jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP Fogbank Segmentation jobs screenshot
To create a new Fogbank Segmentation job, click on the "Create new job" button. Below is a description of how to configure each input parameter.
WIPP Fogbank Segmentation job screenshot

Job name

Unique name for the job. The resulting image collection will be named after the job name.

Image collection

Input image collection to segment.

Labeled collection

Binary image collection which contains the background and foreground labels.

Use Border Mask

The geodesic distance dI (a,b) between two pixels a and b in the image I is the minimum of the lengths L of the path(s) P = (c1, c2, ..., cl) joining a and b in I.
Fogbank geodesic distance
dI (a,b) = ∞, if a and b are not connected in I. The geodesic distance prevents pixels that are close to a cell but separated by a border from being assigned to that cell. Those pixels are instead assigned to a different cell that is further away in terms of number of pixels on the image, but closer in terms of geodesic distance as shown in the following picture.
Allocation of an unassigned pixel (x marked) to the closest seed point
A schematic figure to display the allocation of an unassigned pixel (x marked) to the closest seed point (yellow path) by means of the minimum geodesic distance between that pixel and the seed points in the image. The yellow path has a geodesic distance smaller than the orange or green path. The red pixels represent cell boundaries that cannot be traversed.
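The definition above amounts to a shortest-path search restricted to traversable pixels. A small breadth-first sketch, assuming 4-connectivity (the helper name is illustrative):

```python
from collections import deque

def geodesic_distance(mask, a, b):
    """BFS geodesic distance between pixels a and b.

    mask[y][x] == 1 marks traversable pixels; 0 marks boundaries.
    Returns None (infinite distance) when a and b are not connected.
    """
    h, w = len(mask), len(mask[0])
    dist = {a: 0}
    q = deque([a])
    while q:
        y, x = q.popleft()
        if (y, x) == b:
            return dist[(y, x)]
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and (ny, nx) not in dist:
                dist[(ny, nx)] = dist[(y, x)] + 1
                q.append((ny, nx))
    return None

wall = [[1, 0, 1],
        [1, 0, 1],
        [1, 1, 1]]
# the boundary column forces a detour around the wall
print(geodesic_distance(wall, (0, 0), (0, 2)))  # 6, not the straight-line 2
```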
There are two choices to define the border mask: (1) all pixels can be traversed, or (2) the geodesic mask is used. The geodesic mask is a binary image where pixels with value equal to zero represent boundaries that cannot be traversed, and pixels with value equal to one are paths that can connect two pixels of interest together. Borders are defined through the input Percentile Threshold. This mask can help separate single cells with boundaries close to manually drawn ones.
Geodesic mask

Minimum Seed Size

The detection of seed points determines whether an image is over or under-segmented. There are three different methods for automatic detection of seed points that minimize over-segmentation: (1) Apply Minimum Seed Size Threshold on every histogram percentile binning quantization that filters the small noisy seed points. (2) Generate a fixed number of seed points per frame, which incorporates biological insight to locate the seeds. (3) Import the seed mask. The choice depends on the problem being solved.
This method computes seed points as a function of histogram percentile binning quantization with seed size constraint. In contrast with other techniques, intensity thresholds are not defined at every unique intensity value in the image but rather on each percentile value of the image. Using every unique value leads to multiple local peaks and thus to over-segmentation, but binning the pixel intensities reduces the over-segmentation.

Minimum Object Size

This parameter represents the minimum size that any cellular object must have in order to be recognized as a single cell. Any object with the size smaller than this threshold will be deleted from the mask and its corresponding pixels will be assigned to the closest neighboring cell.

Fogbank Direction

This selects the direction between seed points and boundaries. If low intensity pixels correspond to seed points and high intensity pixels correspond to boundaries, then the Fogbank direction should be from Min to Max, and vice versa.

Foreground masks can be generated by running the EGT segmentation job on the image collection to segment. For more details, click here.

CNN Segmentation jobs (beta)

This processing step labels image pixels as background and foreground using Convolutional Neural Networks (CNN) in TensorFlow.

From the Image Processing view, click on the "CNN Segmentation jobs" tab to access the CNN Segmentation jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP CNN Segmentation jobs screenshot
To create a new CNN Segmentation job, click on the "Create new job" button. Below is a description of how to configure each input parameter.
WIPP CNN Segmentation job screenshot
WIPP CNN Trained Params screenshot

Job name

Unique name for the job. The resulting image collection will be named after the job name.

Image collection

Input image collection to segment.

Trained Model

Choose the trained network model parameters to load.
The trained parameter details are available from the CNN Trained Parameters tab on the left menu. Two sets of pre-trained parameters are currently available in WIPP, and new ones can also be created by running a CNN Training job, using a U-Net model.
U-Net model
A schematic of the architecture of the U-Net model
The architecture has four convolutional layers and four deconvolutional layers. The depth of the first convolutional layer is fixed at 64 and the following layers double in depth with each layer. The size of the convolutional kernels is fixed at 5x5 with a stride of 2. The model was implemented in TensorFlow 1.2 and currently runs on Python 2.7. The implementation is based on the original paper: Ronneberger, O., Fischer, P., & Brox, T. (2015, October). U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 234-241).

CNN Training jobs (beta)

This processing step trains a deep learning model (U-Net, see above) for segmentation using TensorFlow.

From the Image Processing view, click on the "CNN Training jobs" tab to access the CNN Training jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP CNN Training jobs screenshot
To create a new CNN Training job, click on the "Create new job" button. Below is a description of how to configure each input parameter.
WIPP CNN Training job screenshot

Job name

Unique name for the job. The resulting Trained Parameters and image collection will be named after the job name.

Image collection

Input image collection to use for training.

Ground truth image collection

Ground truth image collection to use for training.

Number of epochs

Number of epochs (20 by default).

Learning rate

Learning rate (0.0002 by default).

Foreground pixel weighting

Foreground pixel weighting (from 0 to 1).

Thresholding jobs

This processing step converts an input image into a binary image by assigning each pixel a dark or bright value (0 or 255) depending on whether its intensity is below or above a threshold value.
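In Manual mode the conversion is a simple per-pixel comparison. A one-line NumPy sketch (illustrative, not the ImageJ-derived code used by the job):

```python
import numpy as np

def apply_threshold(img, t):
    """Binary image: 255 where the intensity exceeds t, 0 elsewhere."""
    return np.where(img > t, 255, 0).astype(np.uint8)

out = apply_threshold(np.array([10, 128, 200]), 128)
# 10 -> 0, 128 -> 0, 200 -> 255
```

The automatic methods listed below differ only in how the threshold value t is chosen from the image histogram.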

From the Image Processing view, click on the "Thresholding jobs" tab to access the Thresholding jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP Thresholding jobs screenshot
To create a new Thresholding job, click on the "Create new job" button. Below is a description of how to configure each input parameter.
WIPP Thresholding job screenshot

Job name

Unique name for the job. The resulting image collection will be named after the job name.

Image collection

Input image collection to threshold.

Thresholding type

Thresholding method for choosing a threshold value. The methods were adapted from ImageJ and include Manual, IJDefault, Huang, Huang2, Intermodes, IsoData, Li, MaxEntropy, Mean, MinError, Minimum, Moments, Otsu, Percentile, RenyiEntropy, Shanbhag, Triangle, and Yen. If the selected Thresholding type is "Manual", then the edit box for entering the threshold value is visible and it is a mandatory input (red asterisk). The Java code and the list of contributors can be found here.

Tracking jobs

The Lineage-Mapper tracks a series of labeled segmented images. A labeled image is a segmented image where the regions of interest (ROI) are labeled from 1 to the maximum number of objects per image. The ROI numbering does not need to reflect any organization. The labeled ROIs in the segmented images all consist of pixels that have the value of the ROI label. For example, every pixel in the ROI labeled 5 has a pixel value of 5. Background pixels have the value 0. For more details, click here.
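The labeling convention can be illustrated with a tiny frame (the values are made up for illustration):

```python
import numpy as np

# A 4x4 labeled frame: background pixels are 0 and every pixel of
# ROI k has the value k.
frame = np.array([
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 2, 2],
    [0, 0, 2, 2],
])
n_objects = int(frame.max())   # labels run from 1 to the object count
sizes = [int((frame == k).sum()) for k in range(1, n_objects + 1)]
print(n_objects, sizes)        # 2 [3, 4]
```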

From the Image Processing view, click on the "Tracking jobs" tab to access the Tracking jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP Tracking jobs screenshot
To create a new Tracking job, click on the "Create new job" button. Below is a description of how to configure each input parameter.
WIPP Tracking job screenshot

Job name

Unique name for the job. The resulting image collection will be named after the job name.

Image collection

Input image collection to track.

File name pattern

Used to select which images to load from a directory. Important: the character "%" cannot be used as part of a filename pattern.
Example: img_001.ome.tif = img_{iii}.ome.tif
{iii} - Special text that represents index numbering between 0 and 999. Increase the number of i's to represent larger numbers.

Prefix

The prefix given to output files.

Minimum Object Size

The minimum size a labeled foreground object must be in order to be recognized as a cell. Objects smaller than this threshold will be deleted if they were created by splitting a group of cells into single-cell segments. This parameter is only used when a fusion case is detected and the cells are cut apart. Otherwise, no minimum object size is enforced on the input labeled masks.

Maximum Centroid Displacement

The maximum centroid distance (in pixels) determines which cells could possibly be tracked together. A circle of this radius around a cell centroid represents the area of possible cell migration. In the example figure, the red cell represents the current frame, whereas the blue cells represent the next frame. The Cell Tracker would consider the upper blue cell as a possible tracking option for the red cell, while the lower blue cell would be ignored.
WIPP Tracking Maximum Centroid Displacement

Enable Cell Division

If cell division is enabled, then the daughter cells of a mitotic event will be assigned new cell labels that are different from their mother cell label. If disabled, then the daughters will keep the same label as the mother cell and no mitotic event is considered. This functionality is helpful when dealing with particle tracking or colony tracking.
The cell tracker detects mitotic events using the cell overlap between the mother and its two daughter cells. If the cell overlap between the current frame and the next frame is above the Min Division Overlap threshold, then the Cell Tracker labels that as a possible mitotic event. The Cell Tracker then tests the Daughter Size Similarity, Daughter Aspect Ratio Similarity, and Mother Circularity Index thresholds to determine if a mitotic event has occurred. If all of the tests pass, then the Cell Tracker records the mitotic event in the division table.

Minimum Division Overlap

If the cell overlap in percent is above this threshold between the current frame and the next frame, then the Cell Tracker records a possible mitotic event. The following table illustrates the value of this parameter with respect to the overlapping positions between a red cell from the current frame and the blue cell from the next frame. If this parameter is set to 0%, then all cases are considered as potential mitotic events. If this parameter is set to 100%, then cell mitosis is discarded. In this case, the daughter cell that overlaps the most with the mother cell keeps its unique global ID label and the other one is assigned a new label.
WIPP Tracking Minimum Division Overlap
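The overlap test can be sketched as follows (the `overlap_percent` helper and the toy masks are illustrative; percentages are measured relative to the previous-frame cell's area):

```python
import numpy as np

def overlap_percent(prev_cell, next_cell):
    """Overlap between two binary cell masks, in percent of the
    previous-frame cell's area."""
    inter = np.logical_and(prev_cell, next_cell).sum()
    return 100.0 * inter / prev_cell.sum()

mother = np.zeros((4, 4), dtype=bool)
mother[1:3, 1:3] = True          # 4-pixel mother cell
daughter = np.zeros((4, 4), dtype=bool)
daughter[1:3, 0:2] = True        # shifted left, overlaps 2 pixels
print(overlap_percent(mother, daughter))  # 50.0
```

A mitotic event becomes a candidate when this percentage exceeds the Minimum Division Overlap threshold.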

Daughter Size Similarity

This parameter is a measure of the size similarity between daughter cells. In a real mitotic event, the sizes of the daughter cells should be very similar to each other. A mother cell does not really produce a large daughter and a small one. Set this parameter to 0% to discard it.
WIPP Tracking Daughter Size Similarity

Daughter Aspect Ratio Similarity

This parameter is a measure of the aspect ratio similarity between daughter cells. In a real mitotic event daughter cells should have similar shapes to each other. Set this parameter to 0% to discard it.
WIPP Tracking Daughter Aspect Ratio Similarity

Mother Circularity Threshold

For a cell to be considered a mother cell in a possible mitotic event, it must have had a round shape during the previous frames specified by the Number of Frames to Check Circularity parameter. This circularity threshold determines what is round enough to be considered a mitotic cell. Set this parameter to 0% to discard it.
WIPP Tracking Mother Circularity Threshold

Number of Frames to Check Circularity

The Cell Tracker will determine whether the cell had a circularity above the Mother Circularity Index threshold between the current frame and the previous number of frames. If the cell's circularity is not above the threshold for at least one frame within this range, then the mitotic event will not be recorded.
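A common circularity index is 4πA/P², which is 1.0 for a perfect circle; whether Lineage-Mapper uses exactly this formula is an assumption, but it illustrates the kind of roundness test applied over the checked frames:

```python
import math

def circularity(area, perimeter):
    """Circularity index 4*pi*A / P**2: 1.0 for a perfect circle,
    smaller for elongated shapes."""
    return 4 * math.pi * area / perimeter ** 2

r = 10.0
print(round(circularity(math.pi * r**2, 2 * math.pi * r), 6))  # 1.0 for a circle
```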

Enable Fusion

If cell fusion is enabled, the cell tracker will assign a new unique global ID number to the fused region and will consider all the cells from the previous frame as dead. If disabled, the cell tracker will separate the cellular area in the current frame into a group of single cells by relying on the previous frames information.
Cell fusion occurs when multiple cells get together and form one cellular object. It can come from an actual fusion where, for example, two colonies merge into one, or from cells migrating so close together that the segmentation technique considers them a single cell.

Minimum Fusion Overlap

This parameter represents the amount of overlap in percent of cell area, above which an area at the current frame is considered as a group of cells from the previous frame. In this case, this area needs to be split into multiple single cells.
For example: if two cells A and B at frame t have tracks to the same cell C at frame t+1, the overlap between A and C is 45% of A's size, and the overlap between B and C is 50% of B's size, then C should be split into two single cells.

Mask Labeling jobs

This step is used for assigning unique labels to contiguous sets of pixels in binary images (such as the ones generated by EGT segmentation or by thresholding). The labeled images (masks) can be used by the Web Feature Extraction module for computing features.

From the Image Processing view, click on the "Mask Labeling jobs" tab to access the Mask Labeling jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP Mask Labeling jobs screenshot
WIPP Mask Labeling jobs screenshot
To create a new Mask Labeling job, click on the "Create new job" button. Below is a description on how to configure each input parameter.
WIPP Mask Labeling job screenshot
WIPP Mask Labeling job screenshot

Job name

Unique name for the job. The resulting image collection will be named after the job name.

Binary collection

Input image collection to label.

Connectivity

Pixel connectivity used for labeling, specified as the value 4, for 4-connected objects, or 8, for 8-connected objects (recommended).
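The difference between the two options can be illustrated with a minimal flood-fill labeling sketch (the `label_mask` helper is hypothetical and not WIPP's actual implementation): with 4-connectivity only edge neighbors join, so diagonally touching pixels form separate objects.

```python
from collections import deque

def label_mask(mask, connectivity=8):
    """Label connected foreground pixels in a binary mask.
    connectivity: 4 (edge neighbors) or 8 (edges + diagonals)."""
    if connectivity == 4:
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:
        nbrs = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)]
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                current += 1                      # new object found
                labels[y][x] = current
                queue = deque([(y, x)])
                while queue:                      # flood fill from the seed
                    cy, cx = queue.popleft()
                    for dy, dx in nbrs:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

# Diagonally touching pixels: one object with 8-connectivity, three with 4
mask = [[1, 0, 0],
        [0, 1, 0],
        [0, 0, 1]]
print(label_mask(mask, 8)[1], label_mask(mask, 4)[1])  # 1 3
```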

Pyramid Building jobs

In order to visually inspect very large images using the Deep Zoom viewer, microscopy image tiles need to be converted into pyramids that can be viewed in the browser.

From the Image Processing view, click on the "Pyramid Building jobs" tab to access the Pyramid Building jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP Pyramid jobs screenshot
WIPP Pyramid jobs screenshot
To create a new Pyramid Building job, click on the "Create new job" button. Below is a description on how to configure each input parameter.
WIPP Pyramid job screenshot
WIPP Pyramid job screenshot

Job name

Unique name for the job. The resulting pyramid will be named after the job name.

Stitching vector

Input stitching vector to be used for assembling the pyramid.

Image collection

Input image collection from which the pyramid will be built.

If the image names in the image collection and in the stitching vector differ, you must specify the image name patterns. The system will try to guess the patterns automatically, but you should double-check them.

Scale Input Images

Recommended if the images from the input collection are not 8 bpp. Checking this option will scale the images before creating the pyramid, using a 1-99 percentile truncation scaling.
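As a sketch of what such a scaling does (assuming a simple linear rescale between the 1st and 99th percentiles; WIPP's exact implementation may differ):

```python
import numpy as np

def percentile_scale_to_8bpp(img, lo_pct=1, hi_pct=99):
    """Truncate intensities at the given percentiles, then linearly
    rescale to the 8-bit range [0, 255]. A sketch of the scaling
    described above, not necessarily WIPP's exact implementation."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    clipped = np.clip(img.astype(np.float64), lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

# A synthetic 16-bit gradient image ends up spanning the full 8-bit range
img16 = np.arange(0, 60000, 6, dtype=np.uint16).reshape(100, 100)
out = percentile_scale_to_8bpp(img16)
print(out.dtype, out.min(), out.max())  # uint8 0 255
```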

Deep Zoom Visualizations

Multiple pyramids can be combined into a single visualization with multiple layers. This step must be executed once all pyramids have been built.

From the Image Processing view, click on the "Visualizations" tab to access the Visualizations view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP Visualizations screenshot
WIPP Visualizations screenshot
To create a new Visualization, click on the "Create a new visualization" button. Below is a description on how to configure each input parameter.
WIPP Visualization screenshot
WIPP Visualization screenshot

Visualization name

Unique name for the visualization.

Groups

Create one or more groups to organize pyramids. Give each group a label and click "+".

Layers

Add layers (pyramids) to the group with "+".
For example, add Transmitted, Excitation, and Segmentation layers (if a pyramid was built from the masks).

Export and download

Visualizations can be exported using the download button (top-right corner) and the WebDeepZoomToolkit can be used to explore them.

Tessellation jobs

This processing step is executed to generate a mask that sub-divides any image into rectangular or hexagonal partitions (tessellations). Tessellation masks are used together with segmentation masks to compute spatially local image features for studying spatial heterogeneity. This step is independent of all other steps.

From the Image Processing view, click on the "Tessellation jobs" tab to access the Tessellation jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP Tessellation jobs screenshot
WIPP Tessellation jobs screenshot
To create a new Tessellation job, click on the "Create new job" button. Below is a description on how to configure each input parameter.
WIPP Tessellation job screenshot
WIPP Tessellation job for a collection screenshot
WIPP Tessellation job screenshot
WIPP Tessellation job for one image screenshot

Job name

Unique name for the job. The resulting image collection will be named after the job name.

Tile shape

Specify the shape of each partition in a tessellation image (square or hexagon).

Radius

Specify the radius of the shape in a tessellation:
- For square shapes, the radius is one half of the square's width.
- For hexagonal shapes, the radius is the distance from the center to any of its vertices.
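These two radius definitions can be illustrated with a short geometry sketch (the `hexagon_vertices` and `square_from_radius` helpers are hypothetical, for illustration only):

```python
import math

def hexagon_vertices(cx, cy, radius):
    """Vertices of a regular hexagon: each lies `radius` away from the
    center, matching the center-to-vertex definition above."""
    return [(cx + radius * math.cos(math.pi / 3 * k),
             cy + radius * math.sin(math.pi / 3 * k)) for k in range(6)]

def square_from_radius(cx, cy, radius):
    """Bounding box (x0, y0, x1, y1) of a square tile whose radius is
    one half of its width."""
    return (cx - radius, cy - radius, cx + radius, cy + radius)

# Every hexagon vertex is exactly one radius away from the center
verts = hexagon_vertices(0, 0, 10)
print(all(math.isclose(math.hypot(x, y), 10.0) for x, y in verts))  # True
```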

Option: Generate for an image collection

Choose this option to generate one tessellation image for each image of an image collection. Mask width and height will be automatically determined from the image.

Image collection

Specify the input image collection.

Option: Generate a single mask

Choose this option to generate a single tessellation image with fixed width and height. A visualization PNG image will also be created.

Mask width

Specify the mask width of the final tessellation image.

Mask height

Specify the mask height of the final tessellation image.

Image Type Conversion jobs

This step converts the images of an input image collection to a target image type: 8 bpp, 16 bpp, or 32 bpp.

From the Image Processing view, click on the "Image Type Conversion jobs" tab to access the Image Type Conversion jobs view. The list of jobs can be sorted by name, status, creation date, start time and end time, and filtered by name or status.

WIPP Image Type Conversion jobs screenshot
WIPP Image Type Conversion jobs screenshot
To create a new Image Type Conversion job, click on the "Create new job" button. Below is a description on how to configure each input parameter.
WIPP Image Type Conversion job screenshot
WIPP Image Type Conversion job screenshot

Job name

Unique name for the job. The resulting image collection will be named after the job name.

Image collection

Collection of images to be converted.

Convert to

Target image type: 8 bpp, 16 bpp, or 32 bpp.
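One plausible way to perform such a conversion (a linear rescale between the full dtype ranges; not necessarily WIPP's exact method) is:

```python
import numpy as np

def convert_depth(img, target_bpp):
    """Linearly rescale the full source dtype range onto the full target
    dtype range. One plausible conversion; WIPP's may differ."""
    targets = {8: np.uint8, 16: np.uint16, 32: np.uint32}
    dtype = targets[target_bpp]
    src_max = np.iinfo(img.dtype).max
    dst_max = np.iinfo(dtype).max
    return (img.astype(np.float64) / src_max * dst_max).round().astype(dtype)

# 16 bpp -> 8 bpp: mid-gray and white map to their 8-bit counterparts
img16 = np.array([[0, 32768, 65535]], dtype=np.uint16)
print(convert_depth(img16, 8))  # [[  0 128 255]]
```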

Pyramids and visualizations

For an in-depth user manual of the deep zoom view of pyramids, see this webpage (please note that some of the deep zoom tools, such as colony searching and features, are not available in WIPP).

Example Workflow

The figure below illustrates possible workflows of computational steps that have to be executed in order to visually inspect two image collections of overlapping fields of view (FOV). The web image processing always includes stitching, pyramid building, and visualization creation. If the images have a pixel depth greater than 8 bits per pixel (bpp), then intensity rescaling has to be executed. Flat field correction and segmentation steps are optional; however, they are important for quantitative analyses.

Example of a workflow using the Web Image Processing Pipeline
Example of a workflow using the Web Image Processing Pipeline to process two image collections (transmitted and excitation channels) acquired at the same time (the channels are registered).