Melanoma Detection
Melanoma is the deadliest form of skin cancer. In a report published in 2000, the World Health Organization (WHO) estimated that approximately 65,000 melanoma-related deaths occurred worldwide that year. If caught early, a simple excision of the cancerous tissue can completely cure the patient of melanoma. However, if identified late, the cancer can spread and the prognosis is bleak. Thus, early detection is critical to prognosis.
The objective of this project is to develop a computational framework to analyze and assess the risk of melanoma using cutaneous images. The skin cancer detection framework consists of novel algorithms to perform the following:
- illumination correction preprocessing
- lesion segmentation
- intuitive feature extraction
- image classification
Illumination Correction Preprocessing
The first step in the proposed framework is a preprocessing step in which the image is corrected for illumination variation (i.e., the presence of shadows and brightly illuminated areas). Our approach to this correction problem is a multi-stage illumination modeling algorithm. We assume a multiplicative illumination-reflectance model, where each pixel in the Value colour channel (of the HSV colour space) can be decomposed into an illumination component and a reflectance component. The algorithm first estimates the illumination component and then calculates the reflectance component using that illumination estimate.
In the VIP lab, the proposed multi-stage illumination modeling algorithm uses the following stages to estimate and correct for illumination variation (shown in Fig. 1):
- Initial Monte Carlo illumination estimate
- Final parametric illumination estimate
- Reflectance component calculation
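The multiplicative decomposition behind these stages can be sketched in a few lines. The following is a minimal, hypothetical Python illustration: a simple box blur stands in for the Monte Carlo and parametric illumination estimates of the actual multi-stage algorithm, and only the multiplicative model V = I · R is taken from the text above.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box filter (edge-padded); a crude smoother used here as
    a stand-in for the Monte Carlo / parametric illumination estimates."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(np.convolve, 1, padded, kernel, mode="valid")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="valid")

def correct_illumination(value_channel, radius=15, eps=1e-6):
    """Correct a Value-channel image under the multiplicative model
    V(x, y) = I(x, y) * R(x, y): estimate I, then recover R."""
    v = value_channel.astype(np.float64)
    # Work in the log domain, where the product becomes a sum.
    log_v = np.log(v + eps)
    # Heavy smoothing approximates the slowly varying illumination field.
    log_illum = box_blur(log_v, radius)
    # Reflectance is the residual once the illumination is removed.
    reflectance = np.exp(log_v - log_illum)
    # Rescale to [0, 1] for display.
    reflectance -= reflectance.min()
    reflectance /= reflectance.max() + eps
    return reflectance
```

The log-domain step turns the multiplicative model into an additive one, so the illumination estimate can simply be subtracted before exponentiating back.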
Skin Lesion Segmentation
The objective of the skin lesion segmentation step is to find the border of the skin lesion. It is important that this step is performed accurately because many features used to assess the risk of melanoma are derived based on the lesion border. Our approach to finding the lesion border is a texture distinctiveness-based lesion segmentation.
The first stage of the skin lesion segmentation algorithm is to learn representative texture distributions and calculate the texture distinctiveness metric for each distribution (shown in Fig. 2). A texture vector is extracted for each pixel in the image. Then, a Gaussian mixture model is fitted to learn the texture distributions from the set of texture vectors. Finally, the dissimilarity between each texture distribution and all other texture distributions is measured and quantified as the texture distinctiveness metric.
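The stage above can be sketched in simplified form. In this hypothetical illustration, the texture vector is just a flattened pixel neighbourhood, and a small k-means loop replaces the Gaussian mixture model; the distinctiveness score, as in the text, measures how far each learned texture cluster sits from the others.

```python
import numpy as np

def texture_vectors(gray, size=3):
    """Extract a texture vector (a flattened size x size neighbourhood)
    for every pixel of a grayscale image."""
    r = size // 2
    pad = np.pad(gray, r, mode="edge")
    h, w = gray.shape
    vecs = np.empty((h * w, size * size))
    idx = 0
    for i in range(h):
        for j in range(w):
            vecs[idx] = pad[i:i + size, j:j + size].ravel()
            idx += 1
    return vecs

def learn_distinctiveness(vecs, k=4, iters=20, seed=0):
    """Cluster texture vectors (k-means as a simplified stand-in for the
    Gaussian mixture model) and score each cluster's distinctiveness as
    its weight-averaged distance to the other cluster centres."""
    rng = np.random.default_rng(seed)
    centres = vecs[rng.choice(len(vecs), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((vecs[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            members = vecs[labels == c]
            if len(members):
                centres[c] = members.mean(axis=0)
    weights = np.bincount(labels, minlength=k) / len(vecs)
    # Distinctiveness of cluster c: how far it sits from the other
    # clusters, weighted by how common those clusters are.
    dist = np.linalg.norm(centres[:, None] - centres[None], axis=-1)
    distinctiveness = (dist * weights[None]).sum(axis=1)
    return labels, distinctiveness
```

A texture cluster that is both far from the others and surrounded by common clusters scores high, which is the intuition behind flagging lesion textures as "distinct" from the surrounding skin.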
In the second stage, the pixels in the image are classified as being part of the normal skin or lesion class (shown in Fig. 3). To do this,
the image is divided into a number of regions. These regions are combined with the texture distinctiveness map to find the skin lesion.
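The second stage can be illustrated with a deliberately simplified sketch: square grid blocks stand in for the oversegmentation regions used in the actual pipeline, and each region is labelled lesion or normal skin by comparing its mean texture distinctiveness against a midpoint threshold (an assumption made here for illustration).

```python
import numpy as np

def classify_regions(distinct_map, block=8):
    """Divide the image into square regions (a stand-in for the
    oversegmentation used in practice) and label each region as lesion
    or normal skin by its mean texture distinctiveness."""
    h, w = distinct_map.shape
    mask = np.zeros((h, w), dtype=bool)
    means, coords = [], []
    for i in range(0, h, block):
        for j in range(0, w, block):
            means.append(distinct_map[i:i + block, j:j + block].mean())
            coords.append((i, j))
    means = np.array(means)
    # Lesion regions are assumed to be the more distinctive ones:
    # threshold at the midpoint between the extreme region scores.
    thresh = (means.min() + means.max()) / 2
    for (i, j), m in zip(coords, means):
        if m > thresh:
            mask[i:i + block, j:j + block] = True
    return mask
```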
Feature Extraction
In order to use classification techniques, the image must be transformed such that it represents a point in some n-dimensional feature space. The axes in this feature space represent calculations that are relevant to describing the observed phenomenon (e.g., malignancy).
We propose a set of what we call high-level intuitive features (HLIFs). An HLIF is defined as follows (Amelard 2013):
- High-Level Intuitive Feature (HLIF): A mathematical model that has been carefully designed to describe some human-observable characteristic, and whose score can be intuited in a natural way.
Fig. 5 shows an example interface. Since the features were designed according to the HLIF framework, the system is able to convey to the user why it detected colour asymmetry.
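As an illustration of the idea, a colour-asymmetry score in the spirit of an HLIF could be sketched as below. This is a hypothetical simplification, not the published HLIF formulation: it reflects the lesion about its horizontal centre of mass and measures how much the colour on one side disagrees with its mirror image.

```python
import numpy as np

def colour_asymmetry_score(channel, mask):
    """Illustrative colour-asymmetry score: reflect the lesion across its
    vertical midline and measure colour disagreement between the two
    sides. 0 means perfectly symmetric colour; larger values mean more
    asymmetry. (Hypothetical sketch, not the published HLIF.)"""
    ys, xs = np.nonzero(mask)
    # Reflect about the lesion's horizontal centre of mass.
    cx = xs.mean()
    flipped_xs = np.round(2 * cx - xs).astype(int)
    valid = (flipped_xs >= 0) & (flipped_xs < mask.shape[1])
    ys, xs, flipped_xs = ys[valid], xs[valid], flipped_xs[valid]
    # Only compare pixels whose mirror image is also inside the lesion.
    inside = mask[ys, flipped_xs]
    diffs = np.abs(channel[ys[inside], xs[inside]]
                   - channel[ys[inside], flipped_xs[inside]])
    return float(diffs.mean()) if diffs.size else 0.0
```

Because the score is a direct measurement of left-right colour disagreement, its value can be explained to a clinician in plain terms, which is the point of the HLIF framework.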
Related Publications
Journal Publications
R. Amelard, J. Glaister, A. Wong, D. A. Clausi, “High-level intuitive features (HLIFs) for intuitive skin lesion description,” IEEE Transactions on Biomedical Engineering, vol. 62, no. 3, pp. 820-831, 2015.
J. Glaister, R. Amelard, A. Wong, D. A. Clausi, “MSIM: Multi-stage illumination modeling of dermatological photographs for illumination-corrected skin lesion analysis,” IEEE Transactions on Biomedical Engineering, vol. 60, no. 7, pp. 1873-1883, 2013.
Book Chapters
R. Amelard, J. Glaister, A. Wong, D. A. Clausi, “Melanoma decision support using lighting-corrected intuitive feature models,” Computer Vision Techniques for the Diagnosis of Skin Cancer (Springer 2014).
Conference Publications
D. S. Cho, S. Haider, R. Amelard, A. Wong, D. A. Clausi, “Quantitative features for computer-aided melanoma classification using spatial heterogeneity of eumelanin and pheomelanin concentrations,” IEEE International Symposium on Biomedical Imaging, New York, Apr 2015.
D. S. Cho, S. Haider, R. Amelard, A. Wong, D. A. Clausi, “Physiological characterization of skin lesion using non-linear random forest regression model,” 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, Aug 2014.
S. Haider, D. S. Cho, R. Amelard, A. Wong, D. A. Clausi, “Enhanced classification of malignant melanoma lesions via the integration of physiological features from dermatological photographs,” 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, Aug 2014.
R. Amelard, A. Wong, D. A. Clausi, “Extracting morphological high-level intuitive features (HLIF) for enhancing skin lesion classification,” 34th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, Aug 2012.
R. Amelard, A. Wong, D. A. Clausi, “Extracting high-level intuitive features (HLIF) for classifying skin lesions using standard camera images,” 9th Conference on Computer and Robot Vision, Toronto, pp. 396-403, May 2012.
Theses
R. Amelard, “High-Level Intuitive Features (HLIFs) for Melanoma Detection,” M.A.Sc. thesis, Systems Design Engineering, University of Waterloo, Canada, 2013.