segmentation
skimage.segmentation.random_walker(data, labels)
    Random walker algorithm for segmentation from markers.
skimage.segmentation.active_contour(image, snake)
    Active contour model.
skimage.segmentation.felzenszwalb(image[, …])
    Computes Felzenszwalb's efficient graph based image segmentation.
skimage.segmentation.slic(image[, …])
    Segments image using k-means clustering in Color-(x,y,z) space.
skimage.segmentation.quickshift(image[, …])
    Segments image using quickshift clustering in Color-(x,y) space.
skimage.segmentation.find_boundaries(label_img)
    Return bool array where boundaries between labeled regions are True.
skimage.segmentation.mark_boundaries(image, …)
    Return image with boundaries between labeled regions highlighted.
skimage.segmentation.clear_border(labels[, …])
    Clear objects connected to the label image border.
skimage.segmentation.join_segmentations(s1, s2)
    Return the join of the two input segmentations.
skimage.segmentation.relabel_from_one(…)
    Deprecated function.
skimage.segmentation.relabel_sequential(…)
    Relabel arbitrary labels to {offset, …, offset + number_of_labels}.
skimage.segmentation.watershed(image, markers)
    Find watershed basins in image flooded from given markers.
skimage.segmentation.chan_vese(image[, mu, …])
    Chan-Vese segmentation algorithm.
skimage.segmentation.morphological_geodesic_active_contour(…)
    Morphological Geodesic Active Contours (MorphGAC).
skimage.segmentation.morphological_chan_vese(…)
    Morphological Active Contours without Edges (MorphACWE).
skimage.segmentation.inverse_gaussian_gradient(image)
    Inverse of gradient magnitude.
skimage.segmentation.circle_level_set(…[, …])
    Create a circle level set with binary values.
skimage.segmentation.checkerboard_level_set(…)
    Create a checkerboard level set with binary values.
skimage.segmentation.random_walker(data, labels, beta=130, mode='bf', tol=0.001, copy=True, multichannel=False, return_full_prob=False, spacing=None)
Random walker algorithm for segmentation from markers.
The random walker algorithm is implemented for gray-level or multichannel images.
Parameters:
    data : array_like
    labels : array of ints, of same shape as data without channels dimension
    beta : float
    mode : string, available options {'cg_mg', 'cg', 'bf'}
    tol : float
    copy : bool
    multichannel : bool, default False
    return_full_prob : bool, default False
    spacing : iterable of floats

Returns:
    output : ndarray
See also
skimage.morphology.watershed
Notes
Multichannel inputs are scaled with all channel data combined. Ensure all channels are separately normalized prior to running this algorithm.
The spacing argument is specifically for anisotropic datasets, where data points are spaced differently in one or more spatial dimensions. Anisotropic data is commonly encountered in medical imaging.
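As a hedged illustration of the normalization note above (multichannel_img is a hypothetical float image with the channel axis last), each channel can be rescaled separately before calling random_walker with multichannel=True:
>>> import numpy as np
>>> multichannel_img = np.random.rand(64, 64, 3)           # hypothetical RGB-like data
>>> mean = multichannel_img.mean(axis=(0, 1), keepdims=True)
>>> std = multichannel_img.std(axis=(0, 1), keepdims=True)
>>> normalized = (multichannel_img - mean) / std           # zero mean, unit variance per channel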
The algorithm was first proposed in Random walks for image segmentation, Leo Grady, IEEE Trans Pattern Anal Mach Intell. 2006 Nov;28(11):1768-83.
The algorithm solves the diffusion equation at infinite times for sources placed on markers of each phase in turn. A pixel is labeled with the phase that has the greatest probability to diffuse first to the pixel.
The diffusion equation is solved by minimizing x.T L x for each phase, where L is the Laplacian of the weighted graph of the image, and x is the probability that a marker of the given phase arrives first at a pixel by diffusion (x=1 on markers of the phase, x=0 on the other markers, and the remaining values are solved for). Each pixel is assigned the label for which it has the maximal value of x. The Laplacian L of the image is defined as:
- L_ii = d_i, the number of neighbors of pixel i (the degree of i)
- L_ij = -w_ij if i and j are adjacent pixels
The weight w_ij is a decreasing function of the norm of the local gradient. This ensures that diffusion is easier between pixels of similar values.
When the Laplacian is decomposed into blocks of marked and unmarked pixels,

    L = [ M    B.T ]
        [ B    A   ]

with the first indices corresponding to marked pixels and the remaining ones to unmarked pixels, minimizing x.T L x for one phase amounts to solving:
A x = - B x_m
where x_m = 1 on markers of the given phase, and 0 on other markers. This linear system is solved in the algorithm using a direct method for small images, and an iterative method for larger images.
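To make the block decomposition concrete, here is a hedged toy illustration (a sketch only, not the library's implementation, which works with sparse matrices): a 1-D "image" of four pixels with hand-picked edge weights, markers at pixels 0 and 3, and the reduced system A x = -B x_m solved for the phase-1 probabilities of the two unmarked pixels.
>>> import numpy as np
>>> # Edge weights w_01, w_12, w_23; the weak middle weight acts as an "edge".
>>> w01, w12, w23 = 1.0, 0.2, 1.0
>>> L = np.array([[ w01,      -w01,       0.0,   0.0],
...               [-w01, w01 + w12,      -w12,   0.0],
...               [ 0.0,      -w12, w12 + w23, -w23],
...               [ 0.0,       0.0,      -w23,  w23]])
>>> A = L[np.ix_([1, 2], [1, 2])]    # unmarked-unmarked block
>>> B = L[np.ix_([1, 2], [0, 3])]    # unmarked-marked block
>>> x_m = np.array([1.0, 0.0])       # phase-1 marker at pixel 0, phase-2 at pixel 3
>>> x = np.linalg.solve(A, -B @ x_m)   # approx. [0.857, 0.143]: pixel 1 joins phase 1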
Examples
>>> import numpy as np
>>> from skimage.segmentation import random_walker
>>> np.random.seed(0)
>>> a = np.zeros((10, 10)) + 0.2 * np.random.rand(10, 10)
>>> a[5:8, 5:8] += 1
>>> b = np.zeros_like(a)
>>> b[3, 3] = 1 # Marker for first phase
>>> b[6, 6] = 2 # Marker for second phase
>>> random_walker(a, b)
array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 2, 2, 2, 1, 1],
[1, 1, 1, 1, 1, 2, 2, 2, 1, 1],
[1, 1, 1, 1, 1, 2, 2, 2, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)
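A hedged follow-up to the example above (reusing a and b): with return_full_prob=True the function returns one probability map per phase rather than a hard labelling.
>>> probs = random_walker(a, b, return_full_prob=True)   # shape (2, 10, 10)
>>> hard = probs.argmax(axis=0) + 1                      # recovers the labelling shown above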
skimage.segmentation.active_contour(image, snake, alpha=0.01, beta=0.1, w_line=0, w_edge=1, gamma=0.01, bc='periodic', max_px_move=1.0, max_iterations=2500, convergence=0.1)
Active contour model.
Active contours by fitting snakes to features of images. Supports single and multichannel 2D images. Snakes can be periodic (for segmentation) or have fixed and/or free ends. The output snake has the same length as the input boundary. As the number of points is constant, make sure that the initial snake has enough points to capture the details of the final contour.
Parameters:
    image : (N, M) or (N, M, 3) ndarray
    snake : (N, 2) ndarray
    alpha : float, optional
    beta : float, optional
    w_line : float, optional
    w_edge : float, optional
    gamma : float, optional
    bc : {'periodic', 'free', 'fixed'}, optional
    max_px_move : float, optional
    max_iterations : int, optional
    convergence : float, optional

Returns:
    snake : (N, 2) ndarray
References
[R441] Kass, M.; Witkin, A.; Terzopoulos, D. "Snakes: Active contour models". International Journal of Computer Vision 1 (4): 321 (1988).
Examples
>>> import numpy as np
>>> from skimage.segmentation import active_contour
>>> from skimage.draw import circle_perimeter
>>> from skimage.filters import gaussian
Create and smooth image:
>>> img = np.zeros((100, 100))
>>> rr, cc = circle_perimeter(35, 45, 25)
>>> img[rr, cc] = 1
>>> img = gaussian(img, 2)
Initialize spline:
>>> s = np.linspace(0, 2*np.pi,100)
>>> init = 50*np.array([np.cos(s), np.sin(s)]).T+50
Fit spline to image:
>>> snake = active_contour(img, init, w_edge=0, w_line=1)
>>> dist = np.sqrt((45-snake[:, 0])**2 +(35-snake[:, 1])**2)
>>> int(np.mean(dist))
25
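As a further hedged sketch (reusing img, np, and active_contour from above), a non-periodic snake with both end points held fixed can be obtained by passing bc='fixed'; the parameter values here are illustrative only:
>>> r = np.linspace(10, 90, 100)        # rows spanned by the open snake
>>> c = np.full(100, 45.0)              # constant column
>>> init_open = np.array([c, r]).T      # (x, y) pairs, as in the example above
>>> snake_open = active_contour(img, init_open, bc='fixed', alpha=0.1, beta=1.0)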
skimage.segmentation.felzenszwalb(image, scale=1, sigma=0.8, min_size=20, multichannel=True)
Computes Felzenszwalb's efficient graph based image segmentation.
Produces an oversegmentation of a multichannel (i.e. RGB) image using a fast, minimum spanning tree based clustering on the image grid. The parameter scale sets an observation level: higher scale means fewer and larger segments. sigma is the diameter of a Gaussian kernel, used for smoothing the image prior to segmentation.
The number of produced segments as well as their size can only be controlled indirectly through scale. Segment size within an image can vary greatly depending on local contrast.
For RGB images, the algorithm uses the Euclidean distance between pixels in color space.
Parameters:
    image : (width, height, 3) or (width, height) ndarray
    scale : float
    sigma : float
    min_size : int
    multichannel : bool, optional (default: True)

Returns:
    segment_mask : (width, height) ndarray
References
[R442] Efficient graph-based image segmentation, Felzenszwalb, P.F. and Huttenlocher, D.P. International Journal of Computer Vision, 2004
Examples
>>> from skimage.segmentation import felzenszwalb
>>> from skimage.data import coffee
>>> img = coffee()
>>> segments = felzenszwalb(img, scale=3.0, sigma=0.95, min_size=5)
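A hedged follow-up (reusing img and segments from above): the result can be inspected with mark_boundaries, which is documented further down this page.
>>> from skimage.segmentation import mark_boundaries
>>> overlay = mark_boundaries(img, segments)   # float RGB image with segment boundaries drawn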
skimage.segmentation.slic(image, n_segments=100, compactness=10.0, max_iter=10, sigma=0, spacing=None, multichannel=True, convert2lab=None, enforce_connectivity=True, min_size_factor=0.5, max_size_factor=3, slic_zero=False)
Segments image using k-means clustering in Color-(x,y,z) space.
Parameters:
    image : 2D, 3D or 4D ndarray
    n_segments : int, optional
    compactness : float, optional
    max_iter : int, optional
    sigma : float or (3,) array-like of floats, optional
    spacing : (3,) array-like of floats, optional
    multichannel : bool, optional
    convert2lab : bool, optional
    enforce_connectivity : bool, optional
    min_size_factor : float, optional
    max_size_factor : float, optional
    slic_zero : bool, optional

Returns:
    labels : 2D or 3D array

Raises:
    ValueError
Notes
If sigma is scalar and spacing is provided, the kernel width is divided by the spacing along each dimension: with sigma=1 and spacing=[5, 1, 1], the effective sigma is [0.2, 1, 1]. This ensures sensible smoothing for anisotropic images.
References
[R443] Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Süsstrunk, SLIC Superpixels Compared to State-of-the-art Superpixel Methods, TPAMI, May 2012.
[R444] http://ivrg.epfl.ch/research/superpixels#SLICO
Examples
>>> from skimage.segmentation import slic
>>> from skimage.data import astronaut
>>> img = astronaut()
>>> segments = slic(img, n_segments=100, compactness=10)
Increasing the compactness parameter yields more square regions:
>>> segments = slic(img, n_segments=100, compactness=20)
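A hedged sketch for single-channel input (reusing img from above): convert to grayscale and set multichannel=False so the last axis is not treated as color; compactness typically needs to be much smaller for grayscale intensities.
>>> from skimage.color import rgb2gray
>>> gray = rgb2gray(img)
>>> segments_gray = slic(gray, n_segments=100, compactness=0.1, multichannel=False)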
skimage.segmentation.quickshift(image, ratio=1.0, kernel_size=5, max_dist=10, return_tree=False, sigma=0, convert2lab=True, random_seed=42)
Segments image using quickshift clustering in Color-(x,y) space.
Produces an oversegmentation of the image using the quickshift mode-seeking algorithm.
Parameters:
    image : (width, height, channels) ndarray
    ratio : float, optional, between 0 and 1
    kernel_size : float, optional
    max_dist : float, optional
    return_tree : bool, optional
    sigma : float, optional
    convert2lab : bool, optional
    random_seed : int, optional

Returns:
    segment_mask : (width, height) ndarray
Notes
The authors advocate converting the image to Lab color space prior to segmentation, though this is not strictly necessary. For this to work, the image must be given in RGB format.
References
[R445] Quick shift and kernel methods for mode seeking, Vedaldi, A. and Soatto, S. European Conference on Computer Vision, 2008
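The docstring provides no usage example; a minimal hedged sketch, mirroring the other functions on this page, might look like this (parameter values are illustrative only):
>>> from skimage.data import astronaut
>>> from skimage.segmentation import quickshift
>>> img = astronaut()
>>> segments = quickshift(img, kernel_size=3, max_dist=6, ratio=0.5)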
skimage.segmentation.find_boundaries(label_img, connectivity=1, mode='thick', background=0)
Return bool array where boundaries between labeled regions are True.
Parameters:
    label_img : array of int or bool
    connectivity : int in {1, …, label_img.ndim}, optional
    mode : string in {'thick', 'inner', 'outer', 'subpixel'}
    background : int, optional

Returns:
    boundaries : array of bool, same shape as label_img
Examples
>>> import numpy as np
>>> from skimage.segmentation import find_boundaries
>>> labels = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 5, 5, 5, 0, 0],
... [0, 0, 1, 1, 1, 5, 5, 5, 0, 0],
... [0, 0, 1, 1, 1, 5, 5, 5, 0, 0],
... [0, 0, 1, 1, 1, 5, 5, 5, 0, 0],
... [0, 0, 0, 0, 0, 5, 5, 5, 0, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=np.uint8)
>>> find_boundaries(labels, mode='thick').astype(np.uint8)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 1, 1, 1, 1, 1, 0, 1, 1, 0],
[0, 1, 1, 0, 1, 1, 0, 1, 1, 0],
[0, 1, 1, 1, 1, 1, 0, 1, 1, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> find_boundaries(labels, mode='inner').astype(np.uint8)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 0, 1, 0, 0],
[0, 0, 1, 0, 1, 1, 0, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> find_boundaries(labels, mode='outer').astype(np.uint8)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 0, 0, 1, 0],
[0, 1, 0, 0, 1, 1, 0, 0, 1, 0],
[0, 1, 0, 0, 1, 1, 0, 0, 1, 0],
[0, 1, 0, 0, 1, 1, 0, 0, 1, 0],
[0, 0, 1, 1, 1, 1, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> labels_small = labels[::2, ::3]
>>> labels_small
array([[0, 0, 0, 0],
[0, 0, 5, 0],
[0, 1, 5, 0],
[0, 0, 5, 0],
[0, 0, 0, 0]], dtype=uint8)
>>> find_boundaries(labels_small, mode='subpixel').astype(np.uint8)
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 0],
[0, 0, 0, 1, 0, 1, 0],
[0, 1, 1, 1, 0, 1, 0],
[0, 1, 0, 1, 0, 1, 0],
[0, 1, 1, 1, 0, 1, 0],
[0, 0, 0, 1, 0, 1, 0],
[0, 0, 0, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0]], dtype=uint8)
>>> bool_image = np.array([[False, False, False, False, False],
... [False, False, False, False, False],
... [False, False, True, True, True],
... [False, False, True, True, True],
... [False, False, True, True, True]], dtype=bool)
>>> find_boundaries(bool_image)
array([[False, False, False, False, False],
[False, False, True, True, True],
[False, True, True, True, True],
[False, True, True, False, False],
[False, True, True, False, False]], dtype=bool)
skimage.segmentation.mark_boundaries(image, label_img, color=(1, 1, 0), outline_color=None, mode='outer', background_label=0)
Return image with boundaries between labeled regions highlighted.
Parameters:
    image : (M, N[, 3]) array
    label_img : (M, N) array of int
    color : length-3 sequence, optional
    outline_color : length-3 sequence, optional
    mode : string in {'thick', 'inner', 'outer', 'subpixel'}, optional
    background_label : int, optional

Returns:
    marked : (M, N, 3) array of float
See also
skimage.segmentation.find_boundaries
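No example is given in the docstring; a minimal hedged sketch combining slic (documented above) with mark_boundaries might look like:
>>> from skimage.data import astronaut
>>> from skimage.segmentation import slic, mark_boundaries
>>> img = astronaut()
>>> labels = slic(img, n_segments=100, compactness=10)
>>> marked = mark_boundaries(img, labels, color=(1, 0, 0))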
skimage.segmentation.clear_border(labels, buffer_size=0, bgval=0, in_place=False)
Clear objects connected to the label image border.
Parameters:
    labels : (M[, N[, …, P]]) array of int or bool
    buffer_size : int, optional
    bgval : float or int, optional
    in_place : bool, optional

Returns:
    out : (M[, N[, …, P]]) array
Examples
>>> import numpy as np
>>> from skimage.segmentation import clear_border
>>> labels = np.array([[0, 0, 0, 0, 0, 0, 0, 1, 0],
... [0, 0, 0, 0, 1, 0, 0, 0, 0],
... [1, 0, 0, 1, 0, 1, 0, 0, 0],
... [0, 0, 1, 1, 1, 1, 1, 0, 0],
... [0, 1, 1, 1, 1, 1, 1, 1, 0],
... [0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> clear_border(labels)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 1, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 1, 1, 1, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0]])
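A hedged follow-up (reusing labels from the example): a positive buffer_size widens the border region that is examined, so objects within that distance of the edge are cleared as well.
>>> cleared = clear_border(labels, buffer_size=1)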
skimage.segmentation.join_segmentations(s1, s2)
Return the join of the two input segmentations.
The join J of S1 and S2 is defined as the segmentation in which two voxels are in the same segment if and only if they are in the same segment in both S1 and S2.
Parameters:
    s1, s2 : numpy arrays

Returns:
    j : numpy array
Examples
>>> import numpy as np
>>> from skimage.segmentation import join_segmentations
>>> s1 = np.array([[0, 0, 1, 1],
... [0, 2, 1, 1],
... [2, 2, 2, 1]])
>>> s2 = np.array([[0, 1, 1, 0],
... [0, 1, 1, 0],
... [0, 1, 1, 1]])
>>> join_segmentations(s1, s2)
array([[0, 1, 3, 2],
[0, 5, 3, 2],
[4, 5, 5, 3]])
skimage.segmentation.relabel_sequential(label_field, offset=1)
Relabel arbitrary labels to {offset, …, offset + number_of_labels}.
This function also returns the forward map (mapping the original labels to the reduced labels) and the inverse map (mapping the reduced labels back to the original ones).
Parameters:
    label_field : numpy array of int, arbitrary shape
    offset : int, optional

Returns:
    relabeled : numpy array of int, same shape as label_field
    forward_map : numpy array of int, of length label_field.max() + 1
    inverse_map : 1D numpy array of int, of length offset + number of labels
Notes
The label 0 is assumed to denote the background and is never remapped.
The forward map can be extremely big for some inputs, since its length is given by the maximum of the label field. However, in most situations, label_field.max() is much smaller than label_field.size, and in these cases the forward map is guaranteed to be smaller than either the input or output images.
Examples
>>> import numpy as np
>>> from skimage.segmentation import relabel_sequential
>>> label_field = np.array([1, 1, 5, 5, 8, 99, 42])
>>> relab, fw, inv = relabel_sequential(label_field)
>>> relab
array([1, 1, 2, 2, 3, 5, 4])
>>> fw
array([0, 1, 0, 0, 0, 2, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 5])
>>> inv
array([ 0, 1, 5, 8, 42, 99])
>>> (fw[label_field] == relab).all()
True
>>> (inv[relab] == label_field).all()
True
>>> relab, fw, inv = relabel_sequential(label_field, offset=5)
>>> relab
array([5, 5, 6, 6, 7, 9, 8])
skimage.segmentation.watershed(image, markers, connectivity=1, offset=None, mask=None, compactness=0, watershed_line=False)
Find watershed basins in image flooded from given markers.
Parameters:
    image : ndarray (2-D, 3-D, …) of integers
    markers : int, or ndarray of int, same shape as image
    connectivity : ndarray, optional
    offset : array_like of shape image.ndim, optional
    mask : ndarray of bools or 0s and 1s, optional
    compactness : float, optional
    watershed_line : bool, optional

Returns:
    out : ndarray
See also
skimage.segmentation.random_walker
Notes
This function implements a watershed algorithm [R446] [R447] that apportions pixels into marked basins. The algorithm uses a priority queue to hold the pixels, where the priority is the pixel value followed by the time of entry into the queue; this settles ties in favor of the closest marker.
Some ideas taken from Soille, “Automated Basin Delineation from Digital Elevation Models Using Mathematical Morphology”, Signal Processing 20 (1990) 171-182
The most important insight in the paper is that entry time onto the queue solves two problems: a pixel should be assigned to the neighbor with the largest gradient or, if there is no gradient, pixels on a plateau should be split between markers on opposite sides.
This implementation converts all arguments to specific, lowest common denominator types, then passes these to a C algorithm.
Markers can be determined manually, or automatically using for example the local minima of the gradient of the image, or the local maxima of the distance function to the background for separating overlapping objects (see example).
References
[R446] http://en.wikipedia.org/wiki/Watershed_%28image_processing%29
[R447] http://cmm.ensmp.fr/~beucher/wtshed.html
[R448] Peer Neubert & Peter Protzel (2014). Compact Watershed and Preemptive SLIC: On Improving Trade-offs of Superpixel Segmentation Algorithms. ICPR 2014, pp 996-1001. DOI:10.1109/ICPR.2014.181 https://www.tu-chemnitz.de/etit/proaut/forschung/rsrc/cws_pSLIC_ICPR.pdf
Examples
The watershed algorithm is useful to separate overlapping objects.
We first generate an initial image with two overlapping circles:
>>> import numpy as np
>>> from skimage.segmentation import watershed
>>> x, y = np.indices((80, 80))
>>> x1, y1, x2, y2 = 28, 28, 44, 52
>>> r1, r2 = 16, 20
>>> mask_circle1 = (x - x1)**2 + (y - y1)**2 < r1**2
>>> mask_circle2 = (x - x2)**2 + (y - y2)**2 < r2**2
>>> image = np.logical_or(mask_circle1, mask_circle2)
Next, we want to separate the two circles. We generate markers at the maxima of the distance to the background:
>>> from scipy import ndimage as ndi
>>> distance = ndi.distance_transform_edt(image)
>>> from skimage.feature import peak_local_max
>>> local_maxi = peak_local_max(distance, labels=image,
... footprint=np.ones((3, 3)),
... indices=False)
>>> markers = ndi.label(local_maxi)[0]
Finally, we run the watershed on the image and markers:
>>> labels = watershed(-distance, markers, mask=image)
The algorithm works also for 3-D images, and can be used for example to separate overlapping spheres.
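A hedged variant of the example above (reusing distance, markers, and image): a positive compactness gives the compact watershed of [R448], which favors more regularly shaped basins.
>>> labels_compact = watershed(-distance, markers, mask=image, compactness=0.01)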
skimage.segmentation.chan_vese(image, mu=0.25, lambda1=1.0, lambda2=1.0, tol=0.001, max_iter=500, dt=0.5, init_level_set='checkerboard', extended_output=False)
Chan-Vese segmentation algorithm.
Active contour model by evolving a level set. Can be used to segment objects without clearly defined boundaries.
Parameters:
    image : (M, N) ndarray
    mu : float, optional
    lambda1 : float, optional
    lambda2 : float, optional
    tol : float, positive, optional
    max_iter : uint, optional
    dt : float, optional
    init_level_set : str or (M, N) ndarray, optional
    extended_output : bool, optional

Returns:
    segmentation : (M, N) ndarray, bool
    phi : (M, N) ndarray of floats
    energies : list of floats
Notes
The Chan-Vese Algorithm is designed to segment objects without clearly defined boundaries. It is based on level sets that are evolved iteratively to minimize an energy defined by three weighted terms: the sum of squared intensity differences from the average value outside the segmented region, the sum of squared intensity differences from the average value inside the segmented region, and a term that depends on the length of the boundary of the segmented region.
This algorithm was first proposed by Tony Chan and Luminita Vese, in a publication entitled "An Active Contour Model Without Edges" [R449].
This implementation of the algorithm is somewhat simplified in the sense that the area factor ‘nu’ described in the original paper is not implemented, and is only suitable for grayscale images.
Typical values for lambda1 and lambda2 are 1. If the ‘background’ is very different from the segmented object in terms of distribution (for example, a uniform black image with figures of varying intensity), then these values should be different from each other.
Typical values for mu are between 0 and 1, though higher values can be used when dealing with shapes with very ill-defined contours.
The ‘energy’ that this algorithm tries to minimize is defined as the sum of the squared differences from the average within each region, weighted by the ‘lambda’ factors, plus the length of the contour multiplied by the ‘mu’ factor.
Supports 2D grayscale images only, and does not implement the area term described in the original article.
References
[R449] An Active Contour Model without Edges, Tony Chan and Luminita Vese, Scale-Space Theories in Computer Vision, 1999, DOI:10.1007/3-540-48236-9_13
[R450] Chan-Vese Segmentation, Pascal Getreuer, Image Processing On Line, 2 (2012), pp. 214-224, DOI:10.5201/ipol.2012.g-cv
[R451] The Chan-Vese Algorithm - Project Report, Rami Cohen, http://arxiv.org/abs/1107.2782, 2011
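The docstring provides no example; a minimal hedged sketch on a built-in grayscale image might look like this (parameter values are illustrative):
>>> from skimage import img_as_float
>>> from skimage.data import camera
>>> from skimage.segmentation import chan_vese
>>> image = img_as_float(camera())
>>> segmentation = chan_vese(image, mu=0.25, lambda1=1, lambda2=1, tol=1e-3, max_iter=200)
>>> # With extended_output=True the final level set and the list of energies are also returned.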
skimage.segmentation.morphological_geodesic_active_contour(gimage, iterations, init_level_set='circle', smoothing=1, threshold='auto', balloon=0, iter_callback=<function <lambda>>)
Morphological Geodesic Active Contours (MorphGAC).
Geodesic active contours implemented with morphological operators. It can be used to segment objects with visible but noisy, cluttered, broken borders.
Parameters:
    gimage : (M, N) or (L, M, N) array
    iterations : uint
    init_level_set : str, (M, N) array, or (L, M, N) array
    smoothing : uint, optional
    threshold : float, optional
    balloon : float, optional
    iter_callback : function, optional

Returns:
    out : (M, N) or (L, M, N) array
Notes
This is a version of the Geodesic Active Contours (GAC) algorithm that uses morphological operators instead of solving partial differential equations (PDEs) for the evolution of the contour. The set of morphological operators used in this algorithm has been proved to be infinitesimally equivalent to the GAC PDEs (see [R452]). However, morphological operators do not suffer from the numerical stability issues typically found in PDEs (e.g., it is not necessary to find the right time step for the evolution), and are computationally faster.
The algorithm and its theoretical derivation are described in [R452].
References
[R452] A Morphological Approach to Curvature-based Evolution of Curves and Surfaces, Pablo Márquez-Neila, Luis Baumela, Luis Álvarez. In IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2014, DOI:10.1109/TPAMI.2013.106
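No example is given here; a hedged sketch using inverse_gaussian_gradient (documented below) as the preprocessing step might look like this (parameter values are illustrative only):
>>> from skimage import img_as_float
>>> from skimage.data import coins
>>> from skimage.segmentation import (inverse_gaussian_gradient,
...                                   morphological_geodesic_active_contour)
>>> image = img_as_float(coins())
>>> gimage = inverse_gaussian_gradient(image)
>>> level_set = morphological_geodesic_active_contour(gimage, 200, init_level_set='circle',
...                                                   smoothing=1, balloon=-1, threshold=0.69)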
skimage.segmentation.morphological_chan_vese(image, iterations, init_level_set='checkerboard', smoothing=1, lambda1=1, lambda2=1, iter_callback=<function <lambda>>)
Morphological Active Contours without Edges (MorphACWE).
Active contours without edges implemented with morphological operators. It can be used to segment objects in images and volumes without well defined borders. It is required that the inside of the object looks different on average than the outside (i.e., the inner area of the object should be darker or lighter than the outer area on average).
Parameters:
    image : (M, N) or (L, M, N) array
    iterations : uint
    init_level_set : str, (M, N) array, or (L, M, N) array
    smoothing : uint, optional
    lambda1 : float, optional
    lambda2 : float, optional
    iter_callback : function, optional

Returns:
    out : (M, N) or (L, M, N) array
Notes
This is a version of the Chan-Vese algorithm that uses morphological operators instead of solving a partial differential equation (PDE) for the evolution of the contour. The set of morphological operators used in this algorithm has been proved to be infinitesimally equivalent to the Chan-Vese PDE (see [R453]). However, morphological operators do not suffer from the numerical stability issues typically found in PDEs (it is not necessary to find the right time step for the evolution), and are computationally faster.
The algorithm and its theoretical derivation are described in [R453].
References
[R453] A Morphological Approach to Curvature-based Evolution of Curves and Surfaces, Pablo Márquez-Neila, Luis Baumela, Luis Álvarez. In IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2014, DOI:10.1109/TPAMI.2013.106
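No example is given here; a minimal hedged sketch might look like this (parameter values are illustrative only):
>>> from skimage import img_as_float
>>> from skimage.data import camera
>>> from skimage.segmentation import morphological_chan_vese
>>> image = img_as_float(camera())
>>> level_set = morphological_chan_vese(image, 35, init_level_set='checkerboard', smoothing=3)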
skimage.segmentation.inverse_gaussian_gradient(image, alpha=100.0, sigma=5.0)
Inverse of gradient magnitude.
Computes the magnitude of the gradients in the image and then inverts the result into the range [0, 1]. Flat areas are assigned values close to 1, while areas close to borders are assigned values close to 0.
This function or a similar one defined by the user should be applied over the image as a preprocessing step before calling morphological_geodesic_active_contour.
Parameters:
    image : (M, N) or (L, M, N) array
    alpha : float, optional
    sigma : float, optional

Returns:
    gimage : (M, N) or (L, M, N) array
skimage.segmentation.circle_level_set(image_shape, center=None, radius=None)
Create a circle level set with binary values.
Parameters:
    image_shape : tuple of positive integers
    center : tuple of positive integers, optional
    radius : float, optional

Returns:
    out : array with shape image_shape
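A minimal hedged sketch: a binary circle level set that could serve as the init_level_set argument of the morphological functions documented above.
>>> from skimage.segmentation import circle_level_set
>>> init_ls = circle_level_set((100, 100), center=(50, 50), radius=25)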
skimage.segmentation.checkerboard_level_set(image_shape, square_size=5)
Create a checkerboard level set with binary values.
Parameters:
    image_shape : tuple of positive integers
    square_size : int, optional

Returns:
    out : array with shape image_shape