source/py_tutorials/py_calib3d/py_depthmap/py_depthmap.rst (+1 −1)
@@ -48,7 +48,7 @@ Below code snippet shows a simple procedure to create disparity map.
    plt.imshow(disparity,'gray')
    plt.show()

-Below image contains the original image (left) and its disparity map (right). As you can see, result is contaminated with high degree of noise. By adjusting the values of numDisparities and blockSize, you can get more better result.
+The image below contains the original image (left) and its disparity map (right). As you can see, the result is contaminated with a high degree of noise. By adjusting the values of numDisparities and blockSize, you can get a better result.
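For intuition about what numDisparities and blockSize control, here is a toy pure-NumPy sketch of SAD block matching, the kind of search ``cv2.StereoBM`` performs internally (the function name, image sizes, and parameters here are illustrative, not OpenCV's API):

```python
import numpy as np

def block_match_disparity(left, right, num_disparities=16, block_size=5):
    """Toy SAD block matcher: for each pixel, find the horizontal shift
    d in [0, num_disparities) that minimizes the sum of absolute
    differences over a block_size x block_size window."""
    h, w = left.shape
    half = block_size // 2
    disp = np.zeros((h, w), dtype=np.int32)
    left = left.astype(np.int64)
    right = right.astype(np.int64)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1]
            best, best_d = None, 0
            # only shifts that keep the window inside the right image
            for d in range(min(num_disparities, x - half + 1)):
                cand = right[y-half:y+half+1, x-d-half:x-d+half+1]
                sad = np.abs(patch - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

# Synthetic stereo pair: the right view is the left view shifted by 3 px.
rng = np.random.default_rng(0)
base = rng.integers(0, 256, (20, 43))
left, right = base[:, :-3], base[:, 3:]
disp = block_match_disparity(left, right, num_disparities=8)
```

A larger blockSize smooths the map but blurs depth edges; a larger numDisparities widens the depth range but slows the search.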
source/py_tutorials/py_core/py_basic_ops/py_basic_ops.rst (+6 −5)
@@ -49,7 +49,7 @@ You can modify the pixel values the same way.
.. warning:: Numpy is an optimized library for fast array calculations. Simply accessing and modifying every pixel value individually will be very slow and is discouraged.

-.. note:: Above mentioned method is normally used for selecting a region of array, say first 5 rows and last 3 columns like that. For individual pixel access, Numpy array methods, ``array.item()`` and ``array.itemset()`` is considered to be more better. But it always returns a scalar. So if you want to access all B,G,R values, you need to call ``array.item()`` separately for all.
+.. note:: The method above is normally used for selecting a region of an array, say the first 5 rows and last 3 columns. For individual pixel access, the Numpy array methods ``array.item()`` and ``array.itemset()`` are considered better, but they always operate on a scalar. So if you want to access all the B,G,R values, you need to call ``array.item()`` separately for each channel.
Better pixel accessing and editing method:
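A small pure-NumPy sketch of the difference between slice-based region access and scalar access (note: ``array.itemset()`` was removed in recent NumPy releases, so the write below uses plain index assignment, while the read uses ``array.item()``):

```python
import numpy as np

img = np.zeros((4, 4, 3), dtype=np.uint8)

# Slice indexing selects a region: first 2 rows, last 3 columns.
roi = img[:2, -3:]

# Scalar access: array.item() reads one element at a time, so each
# channel (B=0, G=1, R=2) needs its own call.
img[1, 2, 2] = 200          # write (itemset() is gone in recent NumPy)
red = img.item(1, 2, 2)     # read a single scalar back
```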
@@ -94,7 +94,7 @@ Image datatype is obtained by ``img.dtype``:
Image ROI
===========

-Sometimes, you will have to play with certain region of images. For eye detection in images, first face detection is done all over the image and when face is obtained, we select the face region alone and search for eyes inside it instead of searching whole image. It improves accuracy (because eyes are always on faces :D ) and performance (because we search for a small area)
+Sometimes, you will have to work with certain regions of an image. For eye detection, face detection is first performed over the whole image; once a face is found, we search for eyes within the face region alone instead of searching the whole image. This approach improves accuracy (because eyes are always on faces :D ) and performance (because we search a smaller area).
ROI is again obtained using Numpy indexing. Here I am selecting the ball and copying it to another region in the image:
::
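The snippet the tutorial refers to does something like the following (the coordinates are illustrative, and a synthetic image stands in for the photo):

```python
import numpy as np

# Synthetic "image" with a bright square standing in for the ball.
img = np.zeros((400, 400, 3), dtype=np.uint8)
img[280:340, 330:390] = 255

# Slicing selects the ROI; assigning it into another same-sized
# region copies the pixel data there.
ball = img[280:340, 330:390]
img[273:333, 100:160] = ball
```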
@@ -111,7 +111,7 @@ Check the results below:
Splitting and Merging Image Channels
======================================
-Sometimes you will need to work separately on B,G,R channels of image. Then you need to split the BGR images to single planes. Or another time, you may need to join these individual channels to BGR image. You can do it simply by:
+The B,G,R channels of an image can be split into their individual planes when needed, and the individual channels can be merged back together to form a BGR image. This can be done as follows:
::

    >>> b,g,r = cv2.split(img)
@@ -121,15 +121,16 @@ Or
    >>> b = img[:,:,0]
-Suppose, you want to make all the red pixels to zero, you need not split like this and put it equal to zero. You can simply use Numpy indexing, and that is more faster.
+Suppose you want to set all the red pixels to zero. You do not need to split the channels first and assign to one of them; Numpy indexing is faster:

::

    >>> img[:,:,2] = 0
-.. warning:: ``cv2.split()`` is a costly operation (in terms of time). So do it only if you need it. Otherwise go for Numpy indexing.
+.. warning:: ``cv2.split()`` is a costly operation (in terms of time), so only use it if necessary. Otherwise, use Numpy indexing, which is much more efficient.
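Since these operations are plain array manipulations, their effect can be sketched with NumPy alone (a sketch of what ``cv2.split``/``cv2.merge`` compute for a BGR array):

```python
import numpy as np

img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
original = img.copy()

# cv2.split(img) is equivalent to slicing out each plane:
b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]

# cv2.merge((b, g, r)) is equivalent to stacking the planes back:
merged = np.dstack((b, g, r))

# Zeroing the red channel needs no split at all:
img[:, :, 2] = 0
```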
Making Borders for Images (Padding)
====================================
+
If you want to create a border around an image, something like a photo frame, you can use the **cv2.copyMakeBorder()** function. It also has applications in convolution operations, zero padding, etc. This function takes the following arguments:
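As a sketch of what the common border types produce, ``np.pad`` can emulate them for a grayscale array (``'constant'`` corresponds to ``cv2.BORDER_CONSTANT``, ``'reflect'`` to ``cv2.BORDER_REFLECT_101``):

```python
import numpy as np

img = np.arange(16, dtype=np.uint8).reshape(4, 4)

# A 2-pixel constant (zero) border on every side:
padded = np.pad(img, 2, mode='constant', constant_values=0)

# A reflected border without duplicating the edge row/column
# (np.pad 'reflect' ~ cv2.BORDER_REFLECT_101):
reflected = np.pad(img, 2, mode='reflect')
```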
source/py_tutorials/py_imgproc/py_filtering/py_filtering.rst (+20 −18)
@@ -7,21 +7,21 @@ Goals
=======

Learn to:
-* Blur the images with various low pass filters
+* Blur images with various low pass filters
 * Apply custom-made filters to images (2D convolution)
2D Convolution ( Image Filtering )
====================================
-As in one-dimensional signals, images also can be filtered with various low-pass filters(LPF), high-pass filters(HPF) etc. LPF helps in removing noises, blurring the images etc. HPF filters helps in finding edges in the images.
+As with one-dimensional signals, images can also be filtered with various low-pass filters (LPF), high-pass filters (HPF), etc. An LPF helps in removing noise or blurring an image, while an HPF helps in finding edges in an image.
-OpenCV provides a function **cv2.filter2D()** to convolve a kernel with an image. As an example, we will try an averaging filter on an image. A 5x5 averaging filter kernel will look like below:
+OpenCV provides a function, **cv2.filter2D()**, to convolve a kernel with an image. As an example, we will try an averaging filter on an image. A 5x5 averaging filter kernel can be defined as follows:

-Operation is like this: keep this kernel above a pixel, add all the 25 pixels below this kernel, take its average and replace the central pixel with the new average value. It continues this operation for all the pixels in the image. Try this code and check the result:
+Filtering with the above kernel works as follows: for each pixel, a 5x5 window is centered on it, all the pixels falling within this window are summed, and the result is divided by 25. This is equivalent to computing the average of the pixel values inside the window. The operation is repeated for every pixel in the image to produce the filtered output. Try this code and check the result:
::

    import cv2
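The elided demo uses ``cv2.filter2D`` with that 5x5 kernel; the per-pixel operation it performs can be sketched in pure NumPy (naive loops, borders left untouched, purely illustrative):

```python
import numpy as np

def average_filter(img, k=5):
    """What cv2.filter2D(img, -1, np.ones((k, k), np.float32)/k**2)
    computes for interior pixels: the mean of the k x k window
    centered on each pixel."""
    h, w = img.shape
    half = k // 2
    out = img.astype(np.float64).copy()
    for y in range(half, h - half):
        for x in range(half, w - half):
            out[y, x] = img[y-half:y+half+1, x-half:x+half+1].mean()
    return out

# A single bright spike spreads into a flat 5x5 plateau of its average.
img = np.zeros((7, 7))
img[3, 3] = 25.0
smoothed = average_filter(img)
```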
@@ -48,20 +48,20 @@ Result:
Image Blurring (Image Smoothing)
==================================
-Image blurring is achieved by convolving the image with a low-pass filter kernel. It is useful for removing noises. It actually removes high frequency content (eg: noise, edges) from the image. So edges are blurred a little bit in this operation. (Well, there are blurring techniques which doesn't blur the edges too). OpenCV provides mainly four types of blurring techniques.
+Image blurring is achieved by convolving the image with a low-pass filter kernel. It is useful for removing noise. It actually removes high-frequency content (e.g. noise, edges) from the image, resulting in edges being blurred when this filter is applied. (Well, there are blurring techniques which do not blur edges.) OpenCV provides mainly four types of blurring techniques.
1. Averaging
--------------
-This is done by convolving image with a normalized box filter. It simply takes the average of all the pixels under kernel area and replace the central element. This is done by the function **cv2.blur()** or **cv2.boxFilter()**. Check the docs for more details about the kernel. We should specify the width and height of kernel. A 3x3 normalized box filter would look like below:
+This is done by convolving the image with a normalized box filter. It simply takes the average of all the pixels under the kernel area and replaces the central element with this average. This is done by the function **cv2.blur()** or **cv2.boxFilter()**. Check the docs for more details about the kernel. We should specify the width and height of the kernel. A 3x3 normalized box filter would look like this:
-.. note:: If you don't want to use normalized box filter, use **cv2.boxFilter()**. Pass an argument ``normalize=False`` to the function.
+.. note:: If you don't want to use a normalized box filter, use **cv2.boxFilter()** and pass the argument ``normalize=False`` to the function.
-Check a sample demo below with a kernel of 5x5 size:
+Check the sample demo below with a kernel of 5x5 size:
::

    import cv2
@@ -85,10 +85,10 @@ Result:
    :align: center
-2. Gaussian Blurring
+2. Gaussian Filtering
----------------------
-In this, instead of box filter, gaussian kernel is used. It is done with the function, **cv2.GaussianBlur()**. We should specify the width and height of kernel which should be positive and odd. We also should specify the standard deviation in X and Y direction, sigmaX and sigmaY respectively. If only sigmaX is specified, sigmaY is taken as same as sigmaX. If both are given as zeros, they are calculated from kernel size. Gaussian blurring is highly effective in removing gaussian noise from the image.
+In this approach, instead of a box filter consisting of equal filter coefficients, a Gaussian kernel is used. It is done with the function, **cv2.GaussianBlur()**. We should specify the width and height of the kernel, which should be positive and odd. We also should specify the standard deviations in the X and Y directions, sigmaX and sigmaY respectively. If only sigmaX is specified, sigmaY is taken as equal to sigmaX. If both are given as zeros, they are calculated from the kernel size. Gaussian filtering is highly effective in removing Gaussian noise from an image.
If you want, you can create a Gaussian kernel with the function, **cv2.getGaussianKernel()**.
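For intuition, a 1-D Gaussian kernel like the one ``cv2.getGaussianKernel()`` returns can be built by hand (a sketch; OpenCV additionally derives a default sigma from the kernel size when sigma is non-positive):

```python
import numpy as np

def gaussian_kernel(ksize, sigma):
    """1-D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(ksize) - (ksize - 1) / 2.0
    k = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

g = gaussian_kernel(5, 1.0)

# A 2-D Gaussian kernel is separable: the outer product of two 1-D ones.
g2d = np.outer(g, g)
```

Separability is why Gaussian filtering can be implemented as two cheap 1-D passes instead of one 2-D convolution.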
@@ -105,12 +105,12 @@ Result:
    :align: center
-3. Median Blurring
+3. Median Filtering
--------------------
-Here, the function **cv2.medianBlur()** takes median of all the pixels under kernel area and central element is replaced with this median value. This is highly effective against salt-and-pepper noise in the images. Interesting thing is that, in the above filters, central element is a newly calculated value which may be a pixel value in the image or a new value. But in median blurring, central element is always replaced by some pixel value in the image. It reduces the noise effectively. Its kernel size should be a positive odd integer.
+Here, the function **cv2.medianBlur()** computes the median of all the pixels under the kernel window, and the central pixel is replaced with this median value. This is highly effective in removing salt-and-pepper noise. One interesting thing to note is that, in the Gaussian and box filters, the filtered value for the central element can be a value which may not exist in the original image. However, this is not the case in median filtering, since the central element is always replaced by some pixel value in the image. This reduces the noise effectively. The kernel size must be a positive odd integer.
-In this demo, I added a 50% noise to our original image and applied median blur. Check the result:
+In this demo, we add 50% noise to our original image and apply a median filter. Check the result:
::

    median = cv2.medianBlur(img,5)
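The salt-and-pepper behavior described above can be sketched in pure NumPy (a naive version of ``cv2.medianBlur``, borders left untouched, purely illustrative):

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each interior pixel with the median of its k x k
    neighborhood (what cv2.medianBlur computes, ignoring borders)."""
    h, w = img.shape
    half = k // 2
    out = img.copy()
    for y in range(half, h - half):
        for x in range(half, w - half):
            out[y, x] = np.median(img[y-half:y+half+1, x-half:x+half+1])
    return out

# A lone "salt" pixel in a flat region vanishes completely, because
# the median of its neighborhood is the background value.
img = np.full((7, 7), 10, dtype=np.uint8)
img[3, 3] = 255
out = median_filter(img)
```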
@@ -125,11 +125,11 @@ Result:
4. Bilateral Filtering
-----------------------
-**cv2.bilateralFilter()**is highly effective in noise removal while keeping edges sharp. But the operation is slower compared to other filters. We already saw that gaussian filter takes the a neighbourhood around the pixel and find its gaussian weighted average. This gaussian filter is a function of space alone, that is, nearby pixels are considered while filtering. It doesn't consider whether pixels have almost same intensity. It doesn't consider whether pixel is an edge pixel or not. So it blurs the edges also, which we don't want to do.
+As we noted, the filters presented earlier tend to blur edges. This is not the case for the bilateral filter, **cv2.bilateralFilter()**, which is highly effective at noise removal while preserving edges, though the operation is slower compared to the other filters. We already saw that a Gaussian filter takes a neighborhood around the pixel and finds its Gaussian weighted average. This Gaussian filter is a function of space alone; that is, nearby pixels are considered while filtering. It does not consider whether pixels have almost the same intensity value, nor whether the pixel lies on an edge. The resulting effect is that Gaussian filters tend to blur edges, which is undesirable.
-Bilateral filter also takes a gaussian filter in space, but one more gaussian filter which is a function of pixel difference. Gaussian function of space make sure only nearby pixels are considered for blurring while gaussian function of intensity difference make sure only those pixels with similar intensity to central pixel is considered for blurring. So it preserves the edges since pixels at edges will have large intensity variation.
+The bilateral filter also uses a Gaussian filter in the space domain, but it additionally uses one more (multiplicative) Gaussian filter component which is a function of pixel intensity differences. The Gaussian function of space makes sure that only pixels that are 'spatial neighbors' are considered for filtering, while the Gaussian component applied in the intensity domain (a Gaussian function of intensity differences) ensures that only those pixels with intensities similar to that of the central pixel ('intensity neighbors') are included in the blurred value. As a result, this method preserves edges: for pixels lying near an edge, neighboring pixels on the other side of the edge exhibit large intensity differences from the central pixel and are therefore excluded from the blurring.
-Below samples shows use bilateral filter (For details on arguments, visit docs).
+The sample below demonstrates the use of bilateral filtering (for details on the arguments, see the OpenCV docs).
::

    blur = cv2.bilateralFilter(img,9,75,75)
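The two-Gaussian weighting described above can be sketched in pure NumPy for a grayscale image (a naive, illustrative implementation, not OpenCV's optimized one; parameter names are made up):

```python
import numpy as np

def bilateral_filter(img, d=5, sigma_color=30.0, sigma_space=2.0):
    """Toy grayscale bilateral filter: each output pixel is a weighted
    average of its d x d neighborhood, where the weight is the product
    of a spatial Gaussian and an intensity-difference Gaussian."""
    img = img.astype(np.float64)
    h, w = img.shape
    half = d // 2
    yy, xx = np.mgrid[-half:half+1, -half:half+1]
    space_w = np.exp(-(xx**2 + yy**2) / (2 * sigma_space**2))
    out = img.copy()
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = img[y-half:y+half+1, x-half:x+half+1]
            # pixels with very different intensity get near-zero weight
            color_w = np.exp(-((patch - img[y, x])**2) / (2 * sigma_color**2))
            wgt = space_w * color_w
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out

# A sharp step edge survives filtering almost unchanged, because
# pixels on the far side of the edge are excluded by the color term.
img = np.zeros((9, 9))
img[:, 5:] = 200.0
out = bilateral_filter(img)
```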
@@ -140,12 +140,14 @@ Result:
    :alt: Bilateral Filtering
    :align: center
-See, the texture on the surface is gone, but edges are still preserved.
+Note that the texture on the surface is gone, but the edges are still preserved.
Additional Resources
======================
-1. Details about the `bilateral filtering <http://people.csail.mit.edu/sparis/bf_course/>`_
+1. Details about bilateral filtering can be found in `this course <http://people.csail.mit.edu/sparis/bf_course/>`_
Exercises
===========
+
+Take an image, add Gaussian noise and salt-and-pepper noise, and compare the effect of blurring via box, Gaussian, median and bilateral filters for both noisy images, as you change the level of noise.
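For the exercise, the two kinds of noise can be synthesized with small NumPy helpers like these (illustrative names; grayscale images assumed):

```python
import numpy as np

def add_salt_pepper(img, amount=0.05, rng=None):
    """Set a random `amount` fraction of pixels to 0 or 255 (grayscale)."""
    rng = np.random.default_rng(rng)
    noisy = img.copy()
    mask = rng.random(img.shape) < amount
    noisy[mask] = rng.choice([0, 255], size=mask.sum())
    return noisy

def add_gaussian_noise(img, sigma=10.0, rng=None):
    """Add zero-mean Gaussian noise, clipped back to the uint8 range."""
    rng = np.random.default_rng(rng)
    noisy = img.astype(np.float64) + rng.normal(0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

img = np.full((50, 50), 128, dtype=np.uint8)
sp = add_salt_pepper(img, amount=0.1, rng=0)
gn = add_gaussian_noise(img, sigma=10.0, rng=1)
```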
source/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.rst (+1 −1)
@@ -21,7 +21,7 @@ Morphological transformations are some simple operations based on the image shap
1. Erosion
--------------
-The basic idea of erosion is just like soil erosion only, it erodes away the boundaries of foreground object (Always try to keep foreground in white). So what it does? The kernel slides through the image (as in 2D convolution). A pixel in the original image (either 1 or 0) will be considered 1 only if all the pixels under the kernel is 1, otherwise it is eroded (made to zero).
+The basic idea of erosion is just like soil erosion: it erodes away the boundaries of the foreground object (always try to keep the foreground in white). So what does it do? The kernel slides through the image (as in 2D convolution). A pixel in the original image (either 1 or 0) will be considered 1 only if all the pixels under the kernel are 1; otherwise it is eroded (made zero).
So what happens is that all the pixels near the boundary will be discarded, depending upon the size of the kernel. So the thickness or size of the foreground object decreases, or simply, the white region decreases in the image. It is useful for removing small white noise (as we have seen in the colorspace chapter), detaching two connected objects, etc.
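The rule described above can be sketched in pure NumPy for a binary image (a naive version of what ``cv2.erode`` does with an all-ones kernel; border pixels are simply left at zero here):

```python
import numpy as np

def erode(binary, k=3):
    """A pixel stays 1 only if every pixel under the k x k kernel is 1;
    min() over the window implements exactly that rule."""
    h, w = binary.shape
    half = k // 2
    out = np.zeros_like(binary)
    for y in range(half, h - half):
        for x in range(half, w - half):
            out[y, x] = binary[y-half:y+half+1, x-half:x+half+1].min()
    return out

# A 5x5 white square shrinks by one pixel on each side with a 3x3 kernel.
img = np.zeros((9, 9), dtype=np.uint8)
img[2:7, 2:7] = 1
eroded = erode(img)
```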