viqa.fr_metrics.mad.MAD

class viqa.fr_metrics.mad.MAD(data_range=255, normalize=False, **kwargs)[source]

Class to calculate the most apparent distortion (MAD) between two images.

score_val

MAD score value of the last calculation.

Type:

float

parameters

Dictionary containing the parameters for MAD calculation.

Type:

dict

Parameters:
  • data_range ({1, 255, 65535}, optional) – Data range of the returned data in data loading. It is used for image loading when normalize is True and for the MAD calculation. Passed to viqa.utils.load_data() and viqa.fr_metrics.mad.most_apparent_distortion().

  • normalize (bool, default False) – If True, the input images are normalized to the data_range argument.

  • **kwargs (optional) – Additional parameters for data loading. The keyword arguments are passed to viqa.utils.load_data().

  • chromatic (bool, default False) – If True, the input images are expected to be RGB images. If False, the input images are converted to grayscale images if necessary.

Raises:

ValueError – If data_range is None.

Warning

The metric is not yet validated. Use with caution.

Notes

The data_range for image loading is also used for the MAD calculation if the image type is integer and therefore must be set. The parameter is set through the constructor of the class and is passed on to score().

MAD [1] is a full-reference IQA metric. It is based on the human visual system and is designed to predict the perceived quality of an image.

References

[1] Larson, E. C., & Chandler, D. M. (2010). Most apparent distortion: full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging, 19(1), 011006.
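As a usage sketch (parameter values are illustrative; chromatic is included only because it appears in the parameter list above):

```python
from viqa.fr_metrics.mad import MAD

# data_range must not be None; it is used for image loading (when normalize=True)
# and for the MAD calculation itself.
mad = MAD(data_range=255, normalize=False, chromatic=False)
```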

score(img_r, img_m, dim=None, im_slice=None, **kwargs)[source]

Calculate the MAD between two images.

The metric can be calculated for 2D and 3D images. If the images are 3D, the metric can be calculated for the full volume or for a given slice of the image by setting dim to the desired dimension and im_slice to the desired slice number.

Parameters:
  • img_r (np.ndarray or Tensor or str or os.PathLike) – Reference image to calculate score against.

  • img_m (np.ndarray or Tensor or str or os.PathLike) – Distorted image to calculate score of.

  • dim ({0, 1, 2}, optional) – For 3D images, MAD is calculated as the mean over all slices of the given dimension.

  • im_slice (int, optional) – If given, MAD is calculated only for the given slice of the 3D image.

  • **kwargs (optional) – Additional parameters for MAD calculation. The keyword arguments are passed to viqa.fr_metrics.mad.most_apparent_distortion().

Returns:

score_val – MAD score value.

Return type:

float

Raises:

ValueError – If an invalid dimension is given in dim. If the images are neither 2D nor 3D. If the images are 3D but dim is not given. If im_slice is given but is not an integer.

Warns:

RuntimeWarning – If dim or im_slice is given for 2D images.

If im_slice is not given, but dim is given for 3D images, MAD is calculated for the full volume.

Notes

For 3D images, if dim is given but im_slice is not, the MAD is calculated for the full volume of the 3D image. This is implemented as the mean of the MAD values of all slices of the given dimension. If both dim and im_slice are given, the MAD is calculated for the given slice of the given dimension (i.e. as a 2D metric of that slice).
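A minimal sketch of calling score() for the 2D and 3D cases described above (the random arrays are placeholders for real image data):

```python
import numpy as np
from viqa.fr_metrics.mad import MAD

mad = MAD(data_range=255)

# 2D case: reference and modified image of the same shape.
img_r = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
img_m = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
score_2d = mad.score(img_r, img_m)

# 3D case, full volume: mean MAD over all slices along dimension 0.
vol_r = np.random.randint(0, 256, (32, 128, 128), dtype=np.uint8)
vol_m = np.random.randint(0, 256, (32, 128, 128), dtype=np.uint8)
score_3d = mad.score(vol_r, vol_m, dim=0)

# 3D case, single slice: a 2D metric of slice 16 along dimension 0.
score_slice = mad.score(vol_r, vol_m, dim=0, im_slice=16)
```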

print_score(decimals=2)[source]

Print the MAD score value of the last calculation.

Parameters:

decimals (int, default=2) – Number of decimal places for the printed score value.

Warns:

RuntimeWarning – If score_val is not available.
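Continuing the sketch above (print_score() assumes a score has already been calculated; otherwise it warns):

```python
# Print the stored score_val with three decimal places.
mad.print_score(decimals=3)
```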

export_results(path, filename)

Export the score to a csv file.

Parameters:
  • path (str) – The path where the csv file should be saved.

  • filename (str) – The name of the csv file.

Notes

The arguments get passed to viqa.utils.export_results().
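Continuing the same sketch; path and filename below are placeholder values:

```python
# Export the last score to a csv file; both arguments are passed on to
# viqa.utils.export_results().
mad.export_results(path="results", filename="mad_scores.csv")
```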

load_images(img_r, img_m)

Load the images and perform checks.

Parameters:
  • img_r (np.ndarray, viqa.ImageArray, torch.Tensor, str or os.PathLike) – The reference image.

  • img_m (np.ndarray, viqa.ImageArray, torch.Tensor, str or os.PathLike) – The modified image.

Returns: