SoImageRegistrationTransform Class Reference
[Registration]

ImageViz SoImageRegistrationTransform image filter More...

#include <ImageViz/Engines/GeometryAndMatching/Registration/SoImageRegistrationTransform.h>

Inheritance diagram for SoImageRegistrationTransform:
SoImageVizEngine SoEngine SoFieldContainer SoBase SoRefCounter SoTypedObject

List of all members.

Classes

struct  RegistrationEvent
 This event describes the evolution of the registration process. More...

Public Types

enum  TransformationType {
  TRANSLATION = 4,
  RIGID = 0,
  RIGID_ISOTROPIC_SCALING = 1,
  RIGID_ANISOTROPIC_SCALING = 2,
  AFFINE = 3
}
enum  MetricType {
  EUCLIDIAN = 0,
  CORRELATION,
  NORMALIZED_MUTUAL_INFORMATION
}

Public Member Functions

 SoImageRegistrationTransform ()
SbMatrix getOutputTransformation ()

Public Attributes

SbEventHandler< RegistrationEvent& >  onProgressRegistration
SoSFEnum computeMode
SoSFImageDataAdapter inMovingImage
SoSFImageDataAdapter inFixedImage
SoSFMatrix initialTransform
SoSFBool autoParameterMode
SoSFVec2f optimizerStep
SoSFVec3i32 coarsestResampling
SoSFEnum transformType
SoSFBool ignoreFinestLevel
SoSFEnum metricType
SoImageVizEngineOutput< SoSFFieldContainer, SoRegistrationResult* >  outTransform

Detailed Description

ImageViz SoImageRegistrationTransform image filter

SoImageRegistrationTransform computes the best transformation for the co-registration of two images, using an iterative optimization algorithm.

The goal of registration is to find a transformation that aligns a model image, which moves during processing, with a reference image, which remains fixed. The search starts from an initial transformation and optimizes a similarity criterion between the two images.

The estimated transformation can be a single translation, rigid (translation and rotation only), rigid with scale factors (isotropic or anisotropic along axis directions) or affine (including shear transformation).

[Figure SoImageRegistrationTransform_image01.png: Types of transformations]

A hierarchical strategy is applied, starting at a coarse resampling of the data set, and proceeding to finer resolutions later on. Different similarity measurements like Euclidean distance, mutual information, and correlation can be selected. After each iteration a similarity score is computed, and the transformation is refined according to an optimizer algorithm. If this score cannot be computed, for instance when the resampling or step parameters are not adapted, it remains at its default value -1000.
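The coarse-to-fine strategy described above can be sketched as follows. This is an illustrative reconstruction, not the library implementation: `buildPyramid` is a hypothetical helper, and the assumption that per-axis factors halve between levels is mine (the page only states that registration proceeds from a coarse resampling to finer resolutions).

```cpp
#include <array>
#include <cassert>
#include <vector>

// Illustrative sketch (not the library code): derive a resolution
// pyramid from a coarsest per-axis resampling factor by halving each
// factor until every axis reaches full resolution (factor 1).
std::vector<std::array<int, 3>> buildPyramid(std::array<int, 3> coarsest)
{
    std::vector<std::array<int, 3>> levels;
    std::array<int, 3> f = coarsest;
    while (true) {
        levels.push_back(f);
        if (f[0] == 1 && f[1] == 1 && f[2] == 1)
            break;
        for (int& v : f)
            v = (v > 1) ? v / 2 : 1;  // halve, clamped at full resolution
    }
    return levels;  // coarsest level first, finest (1,1,1) last
}
```

With the default coarsestResampling of (8,8,8), this sketch yields four levels: (8,8,8), (4,4,4), (2,2,2), (1,1,1); setting ignoreFinestLevel would skip the last one.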

The optimizer behavior depends on the optimizerStep parameter, which affects the search extent, precision, and computation time. A small optimizerStep is recommended when a pre-alignment has been performed: it makes the search more precise and avoids moving the transformation to a wrong location.

Two different optimization strategies are used for the coarsest and finest resolution levels. The Extensive Direction optimizer is used at coarse levels; it is well suited to coarse resolutions and can search for the registration over a wider range. A Quasi-Newton optimizer is used on the finest computed level, except when there is only one level; it is better suited to fine resolutions, where its role is to refine the transformation.

By default, the coarsestResampling and optimizerStep parameters are automatically estimated from the reference image properties. If the model and reference have different resolutions or sizes, for instance in a multi-modality case, these settings may be inappropriate and cause the registration to fail. In this case, the autoParameterMode parameter should be set to false and both parameters should be manually set to relevant values, so that the coarsest resolution level generates a representative volume (i.e., not made of too few voxels) and the displacement step is precise enough not to skip the searched transformation.

The SoImagePreAlignmentTransform3d engine can be used beforehand to estimate a rough initial transformation.

If the two input images have been carefully pre-aligned, it is not recommended to start the registration at too coarse a resolution level. Doing so would not only perform useless computations but could also move the transformation to a wrong location and thus miss the right transformation. In this case, consider disabling autoParameterMode and setting a smaller optimizerStep and a finer coarsestResampling.

This engine can report information during processing (progression, similarity) through the onProgressRegistration event and can be interrupted. Note that intercepting these events slows down the algorithm execution.
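A minimal usage sketch of the engine follows. The class, field, and enum names are taken from this page, but everything else is an assumption: the include path for SoImageDataAdapter, the use of setValue on the fields, and the surrounding image-loading code (not shown) are illustrative, not taken from the reference.

```cpp
#include <ImageViz/Engines/GeometryAndMatching/Registration/SoImageRegistrationTransform.h>
// Assumed header for the image adapter; check your SDK layout.
#include <ImageViz/Nodes/Images/SoImageDataAdapter.h>

// Hedged sketch: fixedImg and movingImg are assumed to be valid
// SoImageDataAdapter instances obtained elsewhere.
SbMatrix registerImages(SoImageDataAdapter* fixedImg, SoImageDataAdapter* movingImg)
{
    SoImageRegistrationTransform* reg = new SoImageRegistrationTransform;
    reg->ref();
    reg->inFixedImage.setValue(fixedImg);    // reference image (remains fixed)
    reg->inMovingImage.setValue(movingImg);  // model image (moved during processing)
    reg->transformType.setValue(SoImageRegistrationTransform::RIGID);
    reg->metricType.setValue(SoImageRegistrationTransform::CORRELATION);
    reg->autoParameterMode.setValue(TRUE);   // let the engine estimate step/resampling
    // Optionally assign a pre-alignment (e.g., from SoImagePreAlignmentTransform3d)
    // to reg->initialTransform before reading the result.
    SbMatrix m = reg->getOutputTransformation();  // aligns model onto reference
    reg->unref();
    return m;
}
```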

References

The Correlation Ratio metric is explained in the following publication:

The Normalized Mutual Information metric is based on the following publication:

Further references include:

FILE FORMAT/DEFAULT

SEE ALSO

SoImagePreAlignmentTransform3d

See related examples:

Registration


Member Enumeration Documentation

This enum defines the different types of metric used to compute the similarity value.

Enumerator:
EUCLIDIAN 

Euclidean means the Euclidean distance, i.e., the mean squared difference between the gray values of model and reference.

This metric computes values between $-\infty$ and 0. Images closer to each other will have a metric closer to 0.

\[Similarity=\frac{\sum -(Image_{fixed} - Image_{moving})^2}{n}\]

where $Image$ are the individual voxels and $n$ is the voxel count.

Warning: this mode is temporarily disabled; selecting it will run the mutual information metric instead.
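As an illustration of the formula above (not the library code), the metric can be computed over the overlapping gray values as follows; `euclideanSimilarity` is a hypothetical helper and both images are assumed to be flattened to equal-length arrays:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch of the EUCLIDIAN similarity: negative mean
// squared difference of the gray values. Assumes both arrays have the
// same (non-zero) length.
double euclideanSimilarity(const std::vector<double>& fixed,
                           const std::vector<double>& moving)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < fixed.size(); ++i) {
        const double d = fixed[i] - moving[i];
        sum += -d * d;  // -(Image_fixed - Image_moving)^2
    }
    // Closer to 0 means more similar; identical images give exactly 0.
    return sum / static_cast<double>(fixed.size());
}
```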

CORRELATION 

Correlation measures the correlation ratio of the registered images.

This metric computes values between 0 and 1. A correlation ratio of 1 corresponds to two identical, registered images.

\[Similarity=\frac{\sum (Image_{moving} - \overline{Image_{moving}} ).(Image_{fixed} - \overline{Image_{fixed}} )} { \sqrt{\sum (Image_{moving} - \overline{Image_{moving}} )^2}.\sqrt{\sum(Image_{fixed} - \overline{Image_{fixed}} )^2}}\]

where $\overline{Image}$ is the voxel mean.
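The formula above can be sketched directly over flattened gray-value arrays; this is an illustration, not the library code, and `correlationSimilarity` is a hypothetical helper:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative sketch of the CORRELATION similarity: covariance of the
// two gray-value sets divided by the product of their standard
// deviations. Assumes equal-length, non-constant arrays.
double correlationSimilarity(const std::vector<double>& fixed,
                             const std::vector<double>& moving)
{
    const std::size_t n = fixed.size();
    double meanF = 0.0, meanM = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        meanF += fixed[i];
        meanM += moving[i];
    }
    meanF /= static_cast<double>(n);
    meanM /= static_cast<double>(n);

    double num = 0.0, varF = 0.0, varM = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double f = fixed[i] - meanF;
        const double m = moving[i] - meanM;
        num  += m * f;   // (moving - mean) * (fixed - mean)
        varF += f * f;
        varM += m * m;
    }
    return num / (std::sqrt(varM) * std::sqrt(varF));
}
```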

NORMALIZED_MUTUAL_INFORMATION 

The mutual information measures the information shared by both images.

This metric computes values between 0 and $\infty$. For instance, if images are independent, knowing $Image_{fixed}$ does not give any information about $Image_{moving}$ and vice versa, so their mutual information is zero.

The mutual information metrics, especially the normalized one, are recommended when images from different modalities, e.g., CT and MRI, are to be registered.

The mutual information is:

\[Similarity=\frac{H(Image_{moving})+H(Image_{fixed})}{H(Image_{moving,fixed})}\]

where $H(Image_{moving})$ and $H(Image_{fixed})$ are the marginal entropies of the image histograms and $H(Image_{moving,fixed})$ is the joint entropy of the image histograms.
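The formula above can be illustrated with discrete gray-value histograms; this is a sketch of the entropy arithmetic, not the library code, and `normalizedMutualInformation` is a hypothetical helper. Note that a constant image has zero joint entropy, so non-constant inputs are assumed.

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <utility>
#include <vector>

// Shannon entropy of a discrete histogram (natural log).
double entropy(const std::map<int, int>& hist, int total)
{
    double h = 0.0;
    for (const auto& bin : hist) {
        const double p = static_cast<double>(bin.second) / total;
        h -= p * std::log(p);
    }
    return h;
}

// Illustrative sketch of normalized mutual information:
// (H(moving) + H(fixed)) / H(moving, fixed), estimated from
// per-image and joint gray-value histograms.
double normalizedMutualInformation(const std::vector<int>& fixed,
                                   const std::vector<int>& moving)
{
    std::map<int, int> histF, histM;
    std::map<std::pair<int, int>, int> histJoint;
    const int n = static_cast<int>(fixed.size());
    for (int i = 0; i < n; ++i) {
        ++histF[fixed[i]];
        ++histM[moving[i]];
        ++histJoint[{moving[i], fixed[i]}];
    }
    double hJoint = 0.0;
    for (const auto& bin : histJoint) {
        const double p = static_cast<double>(bin.second) / n;
        hJoint -= p * std::log(p);
    }
    return (entropy(histM, n) + entropy(histF, n)) / hJoint;
}
```

With this normalization, identical images give 2 (joint entropy equals each marginal entropy), while independent images give 1 (joint entropy equals the sum of the marginals).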

This enum defines the types of transforms that can be computed.

Default value is RIGID.

Enumerator:
TRANSLATION 

Translation only.

RIGID 

Rigid transformation consisting of translation and rotation.

RIGID_ISOTROPIC_SCALING 

Transformation consisting of translation, rotation, and scale (only one scale for all dimensions).

RIGID_ANISOTROPIC_SCALING 

Transformation consisting of translation, rotation, and scales (one scale per dimension).

AFFINE 

Affine transformation consisting of translation, rotation, scale, and shear.


Constructor & Destructor Documentation

SoImageRegistrationTransform::SoImageRegistrationTransform (  ) 

Constructor.


Member Function Documentation

SbMatrix SoImageRegistrationTransform::getOutputTransformation (  )  [inline]

Returns the output transformation matrix that aligns the model image with the reference image.


Member Data Documentation

The way to determine the coarsestResampling and optimizerStep parameters.

If true, these parameters are automatically computed.

In this case, the optimizerStep for the coarsest resolution is 1/5 of the size of the reference image bounding box, and for the finest resolution it is 1/6 of the reference image voxel size.

For the coarsestResampling, if the voxels of the reference image are anisotropic, i.e., have a different size in X, Y, and Z directions, the default resampling rates are around 8 and adapted in order to achieve isotropic voxels on the coarsest level.

If the voxels of the reference image are isotropic, i.e., have the same size in X, Y, and Z directions, the default resampling rate is computed in order to get at least 30 voxels along each direction.

Default value is true.
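The heuristics described above can be sketched as follows for the isotropic case. This is an illustrative reconstruction, not the library code: `estimateAutoParams` is a hypothetical helper, and the restriction to power-of-two rates is my assumption (the page only states the 1/5, 1/6, and at-least-30-voxels rules).

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct AutoParams {
    double coarseStep;               // optimizerStep at coarsest level
    double fineStep;                 // optimizerStep at finest level
    std::array<int, 3> resampling;   // coarsestResampling per axis
};

// Illustrative sketch of the automatic heuristics for an isotropic
// reference image: bboxSize is the bounding-box extent, voxelSize the
// voxel edge length, dims the voxel counts per axis.
AutoParams estimateAutoParams(double bboxSize, double voxelSize,
                              const std::array<int, 3>& dims)
{
    AutoParams p;
    p.coarseStep = bboxSize / 5.0;   // 1/5 of reference bounding box
    p.fineStep   = voxelSize / 6.0;  // 1/6 of reference voxel size
    for (int a = 0; a < 3; ++a) {
        // Largest power-of-two rate keeping at least 30 voxels per axis
        // (power-of-two progression is an assumption of this sketch).
        int rate = 1;
        while (dims[a] / (rate * 2) >= 30)
            rate *= 2;
        p.resampling[a] = rate;
    }
    return p;
}
```

For a 256-voxel-wide image this yields a resampling rate of 8 (256/8 = 32 >= 30 voxels, while 256/16 = 16 would be too few), matching the "around 8" default mentioned above.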

The sub-sampling factor along each axis.

This parameter defines the resampling rate for the coarsest resolution level where registration starts. The resampling rate refers to the reference data set.

If the voxel sizes of model and reference differ, the resampling rates for the model are adapted in order to achieve similar voxel sizes as for the reference on the same level.

A coarsest resampling factor of 8 means that one voxel at the coarsest level is equal to 8 voxels at the finest level for the related dimension.

This resampling factor is specified for each dimension of the input volume.

This parameter is ignored if autoParameterMode is set to true.

Default value is SbVec3i32( 8, 8, 8 ).

Select the compute mode (2D, 3D, or AUTO). Use enum ComputeMode.

Default is MODE_AUTO.

Skip the finest level of the pyramid.

Default value is false.

The input reference image.

Default value is NULL. Supported types include: grayscale, binary, label.

The initial transformation that pre-aligns the model onto the reference.

Default value is SbMatrix::identity(). The SoImagePreAlignmentTransform3d engine can be used to compute an initial transform.

The input model image.

Default value is NULL. Supported types include: grayscale, binary, label.

Select the metric type.

Use enum MetricType. Default is CORRELATION.

Specific event handler for registration.

The step sizes, in world coordinates, used by the optimizer at coarsest and finest scales.

These step sizes refer to translations. For rotations, scalings, and shearings appropriate values are chosen accordingly. The first parameter is applied to the coarsest resolution level, the second to the finest level. Steps at intermediate levels are deduced from them. High step values cover a larger registration area but increase the risk of failure.

If the input transformation already provides a reasonable alignment, the steps can be set smaller than the values given by the automatic mode in order to reduce computation time and risk of failure.

Assuming a voxel size of (1,1,1) and a coarsestResampling of SbVec3i32(8,8,8), these default parameters correspond to a displacement of half a voxel at the coarsest and at the finest level. As this is rarely the case, it is essential to set this parameter in relation to the reference image voxel size if the automatic mode is disabled.

This parameter is ignored if autoParameterMode is set to true.

Default value is SbVec2f( 4.0f, 1.0f / 2.0f ).
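The half-voxel relation above amounts to simple arithmetic; the following sketch (a hypothetical helper, not the library code) scales a manual optimizerStep by the reference voxel size so that each level's step moves the image by about half a voxel:

```cpp
#include <array>
#include <cassert>

// Illustrative arithmetic for the half-voxel heuristic: returns
// { coarsest step, finest step } in world coordinates, given the
// reference voxel size and the coarsest per-axis resampling factor.
std::array<double, 2> halfVoxelSteps(double voxelSize, int coarsestFactor)
{
    const double coarse = 0.5 * voxelSize * coarsestFactor;  // half a coarse voxel
    const double fine   = 0.5 * voxelSize;                   // half a fine voxel
    return {coarse, fine};
}
```

With voxel size 1 and a coarsest factor of 8 this reproduces the default SbVec2f(4.0f, 0.5f); for other voxel sizes the steps scale accordingly.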

Output structure storing registration results.

Select the type of transform.

Use enum TransformationType. Default is RIGID.


The documentation for this class was generated from the following file:

Open Inventor Toolkit reference manual, generated on 12 Feb 2024
Copyright © Thermo Fisher Scientific All rights reserved.
http://www.openinventor.com/