
Commit e206272
Merge remote-tracking branch 'origin/master'
2 parents: ab6dcc5 + f0ce3c2

7 files changed: 32 additions, 29 deletions

source/py_tutorials/py_calib3d/py_depthmap/py_depthmap.rst

+1 -1

@@ -48,7 +48,7 @@ Below code snippet shows a simple procedure to create disparity map.
     plt.imshow(disparity,'gray')
     plt.show()
 
-Below image contains the original image (left) and its disparity map (right). As you can see, result is contaminated with high degree of noise. By adjusting the values of numDisparities and blockSize, you can get more better result.
+Below image contains the original image (left) and its disparity map (right). As you can see, the result is contaminated with a high degree of noise. By adjusting the values of numDisparities and blockSize, you can get better results.
 
 .. image:: images/disparity_map.jpg
     :alt: Disparity Map
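The block matching that the tutorial's snippet performs with cv2.StereoBM can be illustrated without OpenCV. Below is a minimal NumPy sketch of SAD (sum of absolute differences) block matching on a synthetic stereo pair; it is a toy stand-in, not the OpenCV implementation, and the `block` and `max_disp` parameters here loosely play the roles of blockSize and numDisparities:

```python
import numpy as np

def naive_disparity(left, right, block=3, max_disp=4):
    # Naive SAD block matching: for each left-image pixel, try horizontal
    # shifts d = 0..max_disp and keep the one whose block x block window
    # in the right image best matches (minimum sum of absolute differences).
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int64)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch.astype(np.int64) - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic stereo pair: the right image is the left one shifted 2 px,
# so the true disparity is 2 everywhere the search range allows.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(12, 16))
right = np.roll(left, -2, axis=1)
disp = naive_disparity(left, right)
```

A larger block smooths the disparity map at the cost of detail, which is why tuning blockSize (and the disparity search range) changes the amount of noise in the result.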

source/py_tutorials/py_core/py_basic_ops/py_basic_ops.rst

+6 -5

@@ -49,7 +49,7 @@ You can modify the pixel values the same way.
 
 .. warning:: Numpy is a optimized library for fast array calculations. So simply accessing each and every pixel values and modifying it will be very slow and it is discouraged.
 
-.. note:: Above mentioned method is normally used for selecting a region of array, say first 5 rows and last 3 columns like that. For individual pixel access, Numpy array methods, ``array.item()`` and ``array.itemset()`` is considered to be more better. But it always returns a scalar. So if you want to access all B,G,R values, you need to call ``array.item()`` separately for all.
+.. note:: Above mentioned method is normally used for selecting a region of array, say first 5 rows and last 3 columns like that. For individual pixel access, Numpy array methods, ``array.item()`` and ``array.itemset()`` is considered to be better. But it always returns a scalar. So if you want to access all B,G,R values, you need to call ``array.item()`` separately for all.
 
 Better pixel accessing and editing method :
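The ``array.item()`` access pattern discussed in the note above can be sketched with plain NumPy; the 4x4 "image" here is made up for illustration:

```python
import numpy as np

# A made-up 4x4 "BGR image" for illustration.
img = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)

# array.item() returns a single Python scalar, so one call per channel:
b = img.item(0, 0, 0)
g = img.item(0, 0, 1)
r = img.item(0, 0, 2)

# For modifying a single value, plain indexed assignment works on all
# NumPy versions (note that array.itemset() was removed in NumPy 2.0).
img[0, 0, 2] = 100
```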

@@ -94,7 +94,7 @@ Image datatype is obtained by ``img.dtype``:
 Image ROI
 ===========
 
-Sometimes, you will have to play with certain region of images. For eye detection in images, first face detection is done all over the image and when face is obtained, we select the face region alone and search for eyes inside it instead of searching whole image. It improves accuracy (because eyes are always on faces :D ) and performance (because we search for a small area)
+Sometimes, you will have to play with certain region of images. For eye detection in images, first perform face detection over the image until the face is found, then search within the face region for eyes. This approach improves accuracy (because eyes are always on faces :D ) and performance (because we search for a small area).
 
 ROI is again obtained using Numpy indexing. Here I am selecting the ball and copying it to another region in the image:
 ::
@@ -111,7 +111,7 @@ Check the results below:
 Splitting and Merging Image Channels
 ======================================
 
-Sometimes you will need to work separately on B,G,R channels of image. Then you need to split the BGR images to single planes. Or another time, you may need to join these individual channels to BGR image. You can do it simply by:
+The B,G,R channels of an image can be split into their individual planes when needed. Then, the individual channels can be merged back together to form a BGR image again. This can be performed by:
 ::
 
     >>> b,g,r = cv2.split(img)
@@ -121,15 +121,16 @@ Or
 
     >>> b = img[:,:,0]
 
-Suppose, you want to make all the red pixels to zero, you need not split like this and put it equal to zero. You can simply use Numpy indexing, and that is more faster.
+Suppose, you want to make all the red pixels to zero, you need not split like this and put it equal to zero. You can simply use Numpy indexing which is faster.
 ::
 
     >>> img[:,:,2] = 0
 
-.. warning:: ``cv2.split()`` is a costly operation (in terms of time). So do it only if you need it. Otherwise go for Numpy indexing.
+.. warning:: ``cv2.split()`` is a costly operation (in terms of time), so only use it if necessary. Numpy indexing is much more efficient and should be used if possible.
 
 Making Borders for Images (Padding)
 ====================================
+
 If you want to create a border around the image, something like a photo frame, you can use **cv2.copyMakeBorder()** function. But it has more applications for convolution operation, zero padding etc. This function takes following arguments:
 
 * **src** - input image
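The NumPy-side operations touched by this file's hunks (ROI copying, channel zeroing, splitting/merging, and border padding) can be sketched without OpenCV. The coordinates below are arbitrary, and np.pad is only a NumPy analogue of cv2.copyMakeBorder with a constant border, not the same function:

```python
import numpy as np

img = np.arange(8 * 8 * 3, dtype=np.uint8).reshape(8, 8, 3)

# ROI via NumPy slicing: copy one region of the image onto another
# (coordinates here are made up, unlike the tutorial's ball example).
roi = img[2:4, 2:4].copy()
img[5:7, 5:7] = roi

# Zero out the red channel (index 2 in BGR order) with indexing,
# as the warning recommends instead of cv2.split():
no_red = img.copy()
no_red[:, :, 2] = 0

# Split and merge channels without cv2.split()/cv2.merge():
b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]
merged = np.dstack([b, g, r])

# A 1-pixel constant (zero) border, similar in spirit to
# cv2.copyMakeBorder(img, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0):
padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode='constant')
```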

source/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.rst

+1 -1

@@ -58,7 +58,7 @@ Below is the code which are commented in detail :
     upper_blue = np.array([130,255,255])
 
     # Threshold the HSV image to get only blue colors
-    mask = cv2.inRange(hsv, lower_green, upper_green)
+    mask = cv2.inRange(hsv, lower_blue, upper_blue)
 
     # Bitwise-AND mask and original image
     res = cv2.bitwise_and(frame,frame, mask= mask)
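The mask that cv2.inRange produces (255 where every channel lies within [lower, upper], 0 elsewhere) can be reproduced with plain NumPy comparisons; the tiny 2x2 "HSV image" below is made up so the expected mask is obvious:

```python
import numpy as np

# A 2x2 "HSV image": left column inside the blue range, right column outside.
hsv = np.array([[[120, 200, 200], [0, 200, 200]],
                [[130, 255, 255], [90, 10, 10]]], dtype=np.uint8)

lower_blue = np.array([110, 50, 50])
upper_blue = np.array([130, 255, 255])

# Equivalent of cv2.inRange(hsv, lower_blue, upper_blue): a uint8 mask
# that is 255 only where all three channels fall within the bounds.
in_range = ((hsv >= lower_blue) & (hsv <= upper_blue)).all(axis=2)
mask = (in_range * 255).astype(np.uint8)
```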

source/py_tutorials/py_imgproc/py_contours/py_contours_begin/py_contours_begin.rst

+1 -1

@@ -43,7 +43,7 @@ To draw the contours, ``cv2.drawContours`` function is used. It can also be used
 To draw all the contours in an image:
 ::
 
-    img = cv2.drawContour(img, contours, -1, (0,255,0), 3)
+    img = cv2.drawContours(img, contours, -1, (0,255,0), 3)
 
 To draw an individual contour, say 4th contour:
 ::

source/py_tutorials/py_imgproc/py_filtering/py_filtering.rst

+20 -18

@@ -7,21 +7,21 @@ Goals
 =======
 
 Learn to:
-    * Blur the images with various low pass filters
+    * Blur images with various low pass filters
     * Apply custom-made filters to images (2D convolution)
 
 2D Convolution ( Image Filtering )
 ====================================
 
-As in one-dimensional signals, images also can be filtered with various low-pass filters(LPF), high-pass filters(HPF) etc. LPF helps in removing noises, blurring the images etc. HPF filters helps in finding edges in the images.
+As for one-dimensional signals, images can also be filtered with various low-pass filters (LPF), high-pass filters (HPF), etc. An LPF helps in removing noise, or blurring the image. An HPF helps in finding edges in an image.
 
-OpenCV provides a function **cv2.filter2D()** to convolve a kernel with an image. As an example, we will try an averaging filter on an image. A 5x5 averaging filter kernel will look like below:
+OpenCV provides a function, **cv2.filter2D()**, to convolve a kernel with an image. As an example, we will try an averaging filter on an image. A 5x5 averaging filter kernel can be defined as follows:
 
 .. math::
 
     K = \frac{1}{25} \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{bmatrix}
 
-Operation is like this: keep this kernel above a pixel, add all the 25 pixels below this kernel, take its average and replace the central pixel with the new average value. It continues this operation for all the pixels in the image. Try this code and check the result:
+Filtering with the above kernel is performed as follows: for each pixel, a 5x5 window is centered on it, all pixels falling within this window are summed up, and the result is divided by 25. This equates to computing the average of the pixel values inside the window. This operation is performed for all the pixels in the image to produce the output filtered image. Try this code and check the result:
 ::
 
     import cv2
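The averaging operation described above (sum the 25 pixels under the 5x5 window, divide by 25) can be written out directly in NumPy. This toy loop is for illustration only; it is far slower than cv2.filter2D() and leaves the borders untouched for simplicity:

```python
import numpy as np

def average_filter(img, ksize=5):
    # For each interior pixel, center a ksize x ksize window on it, sum
    # the pixels inside the window and divide by ksize*ksize.
    h, w = img.shape
    half = ksize // 2
    out = img.astype(np.float64)
    for y in range(half, h - half):
        for x in range(half, w - half):
            window = img[y - half:y + half + 1, x - half:x + half + 1]
            out[y, x] = window.sum() / (ksize * ksize)
    return out

# On a constant image, averaging changes nothing.
flat = np.full((9, 9), 7.0)
smoothed = average_filter(flat)

# A single bright impulse of 25 spreads into a 5x5 patch of 1s.
impulse = np.zeros((9, 9))
impulse[4, 4] = 25.0
resp = average_filter(impulse)
```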
@@ -48,20 +48,20 @@ Result:
 Image Blurring (Image Smoothing)
 ==================================
 
-Image blurring is achieved by convolving the image with a low-pass filter kernel. It is useful for removing noises. It actually removes high frequency content (eg: noise, edges) from the image. So edges are blurred a little bit in this operation. (Well, there are blurring techniques which doesn't blur the edges too). OpenCV provides mainly four types of blurring techniques.
+Image blurring is achieved by convolving the image with a low-pass filter kernel. It is useful for removing noise. It actually removes high frequency content (e.g. noise, edges) from the image, resulting in edges being blurred when this filter is applied. (Well, there are blurring techniques which do not blur edges.) OpenCV provides mainly four types of blurring techniques.
 
 1. Averaging
 --------------
 
-This is done by convolving image with a normalized box filter. It simply takes the average of all the pixels under kernel area and replace the central element. This is done by the function **cv2.blur()** or **cv2.boxFilter()**. Check the docs for more details about the kernel. We should specify the width and height of kernel. A 3x3 normalized box filter would look like below:
+This is done by convolving the image with a normalized box filter. It simply takes the average of all the pixels under the kernel area and replaces the central element with this average. This is done by the function **cv2.blur()** or **cv2.boxFilter()**. Check the docs for more details about the kernel. We should specify the width and height of the kernel. A 3x3 normalized box filter would look like this:
 
 .. math::
 
     K = \frac{1}{9} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}
 
-.. note:: If you don't want to use normalized box filter, use **cv2.boxFilter()**. Pass an argument ``normalize=False`` to the function.
+.. note:: If you don't want to use a normalized box filter, use **cv2.boxFilter()** and pass the argument ``normalize=False`` to the function.
 
-Check a sample demo below with a kernel of 5x5 size:
+Check the sample demo below with a kernel of 5x5 size:
 ::
 
     import cv2
@@ -85,10 +85,10 @@ Result:
     :align: center
 
 
-2. Gaussian Blurring
+2. Gaussian Filtering
 ----------------------
 
-In this, instead of box filter, gaussian kernel is used. It is done with the function, **cv2.GaussianBlur()**. We should specify the width and height of kernel which should be positive and odd. We also should specify the standard deviation in X and Y direction, sigmaX and sigmaY respectively. If only sigmaX is specified, sigmaY is taken as same as sigmaX. If both are given as zeros, they are calculated from kernel size. Gaussian blurring is highly effective in removing gaussian noise from the image.
+In this approach, instead of a box filter consisting of equal filter coefficients, a Gaussian kernel is used. It is done with the function, **cv2.GaussianBlur()**. We should specify the width and height of the kernel, which should be positive and odd. We should also specify the standard deviation in the X and Y directions, sigmaX and sigmaY respectively. If only sigmaX is specified, sigmaY is taken as equal to sigmaX. If both are given as zeros, they are calculated from the kernel size. Gaussian filtering is highly effective in removing Gaussian noise from an image.
 
 If you want, you can create a Gaussian kernel with the function, **cv2.getGaussianKernel()**.
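The 1-D kernel that cv2.getGaussianKernel() returns can be approximated with plain NumPy. This is a sketch of the standard construction (a sampled, normalized Gaussian); the sigma-from-ksize formula in the comment is the one the OpenCV docs give for the sigma=0 case, taken here as an assumption:

```python
import numpy as np

def gaussian_kernel_1d(ksize):
    # Sampled, normalized 1-D Gaussian. Sigma is derived from ksize the
    # way the OpenCV docs describe when sigma is not given:
    #   sigma = 0.3*((ksize-1)*0.5 - 1) + 0.8
    sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8
    x = np.arange(ksize) - (ksize - 1) / 2.0
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

k1d = gaussian_kernel_1d(5)
# The Gaussian kernel is separable: a 2-D kernel is the outer product
# of two 1-D kernels, which is why sigmaX and sigmaY can differ.
k2d = np.outer(k1d, k1d)
```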

@@ -105,12 +105,12 @@ Result:
     :align: center
 
 
-3. Median Blurring
+3. Median Filtering
 --------------------
 
-Here, the function **cv2.medianBlur()** takes median of all the pixels under kernel area and central element is replaced with this median value. This is highly effective against salt-and-pepper noise in the images. Interesting thing is that, in the above filters, central element is a newly calculated value which may be a pixel value in the image or a new value. But in median blurring, central element is always replaced by some pixel value in the image. It reduces the noise effectively. Its kernel size should be a positive odd integer.
+Here, the function **cv2.medianBlur()** computes the median of all the pixels under the kernel window and the central pixel is replaced with this median value. This is highly effective in removing salt-and-pepper noise. One interesting thing to note is that, in the Gaussian and box filters, the filtered value for the central element can be a value which may not exist in the original image. However, this is not the case in median filtering, since the central element is always replaced by some pixel value in the image. This reduces the noise effectively. The kernel size must be a positive odd integer.
 
-In this demo, I added a 50% noise to our original image and applied median blur. Check the result:
+In this demo, we add 50% noise to our original image and apply a median filter. Check the result:
 ::
 
     median = cv2.medianBlur(img,5)
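What cv2.medianBlur computes per pixel can be sketched in NumPy. The toy image below has a single "salt" outlier; the median discards it entirely, and the output value is always some pixel value that exists in the window, exactly as the text notes:

```python
import numpy as np

def median_filter(img, ksize=3):
    # Replace each interior pixel with the median of the ksize x ksize
    # window centered on it (borders left unchanged for simplicity).
    h, w = img.shape
    half = ksize // 2
    out = img.copy()
    for y in range(half, h - half):
        for x in range(half, w - half):
            out[y, x] = np.median(img[y - half:y + half + 1,
                                      x - half:x + half + 1])
    return out

# A flat gray image with one salt-and-pepper style outlier.
img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255
filtered = median_filter(img)
```

A mean filter over the same window would instead spread the 255 into its neighborhood, which is why the median is preferred for this kind of noise.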
@@ -125,11 +125,11 @@ Result:
 4. Bilateral Filtering
 -----------------------
 
-**cv2.bilateralFilter()** is highly effective in noise removal while keeping edges sharp. But the operation is slower compared to other filters. We already saw that gaussian filter takes the a neighbourhood around the pixel and find its gaussian weighted average. This gaussian filter is a function of space alone, that is, nearby pixels are considered while filtering. It doesn't consider whether pixels have almost same intensity. It doesn't consider whether pixel is an edge pixel or not. So it blurs the edges also, which we don't want to do.
+As we noted, the filters we presented earlier tend to blur edges. This is not the case for the bilateral filter, **cv2.bilateralFilter()**, which is highly effective at noise removal while preserving edges. But the operation is slower compared to other filters. We already saw that a Gaussian filter takes a neighborhood around the pixel and finds its Gaussian weighted average. This Gaussian filter is a function of space alone, that is, nearby pixels are considered while filtering. It does not consider whether pixels have almost the same intensity value, and does not consider whether the pixel lies on an edge or not. The resulting effect is that Gaussian filters tend to blur edges, which is undesirable.
 
-Bilateral filter also takes a gaussian filter in space, but one more gaussian filter which is a function of pixel difference. Gaussian function of space make sure only nearby pixels are considered for blurring while gaussian function of intensity difference make sure only those pixels with similar intensity to central pixel is considered for blurring. So it preserves the edges since pixels at edges will have large intensity variation.
+The bilateral filter also uses a Gaussian filter in the space domain, but it also uses one more (multiplicative) Gaussian filter component which is a function of pixel intensity differences. The Gaussian function of space makes sure that only pixels that are 'spatial neighbors' are considered for filtering, while the Gaussian component applied in the intensity domain (a Gaussian function of intensity differences) ensures that only those pixels with intensities similar to that of the central pixel ('intensity neighbors') are included to compute the blurred intensity value. As a result, this method preserves edges, since for pixels lying near edges, neighboring pixels placed on the other side of the edge, and therefore exhibiting large intensity variations when compared to the central pixel, will not be included for blurring.
 
-Below samples shows use bilateral filter (For details on arguments, visit docs).
+The sample below demonstrates the use of bilateral filtering (for details on the arguments, see the OpenCV docs).
 ::
 
     blur = cv2.bilateralFilter(img,9,75,75)
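The edge-preserving behavior described above can be demonstrated with a tiny from-scratch bilateral filter on a 1-D step edge. This is a sketch, not cv2.bilateralFilter(), and the parameter names sigma_space/sigma_intensity are made up for this example:

```python
import numpy as np

def bilateral_1d(signal, radius=2, sigma_space=2.0, sigma_intensity=10.0):
    # Weight each neighbor by BOTH spatial distance and intensity
    # difference, so neighbors across an edge get essentially zero weight.
    out = np.empty_like(signal, dtype=np.float64)
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        idx = np.arange(lo, hi)
        spatial = np.exp(-((idx - i) ** 2) / (2 * sigma_space ** 2))
        intensity = np.exp(-((signal[idx] - signal[i]) ** 2)
                           / (2 * sigma_intensity ** 2))
        w = spatial * intensity
        out[i] = (w * signal[idx]).sum() / w.sum()
    return out

# A step edge: a purely spatial Gaussian blur would smear the jump,
# but the bilateral filter leaves it essentially intact.
step = np.array([0, 0, 0, 0, 100, 100, 100, 100], dtype=np.float64)
filtered = bilateral_1d(step)
```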
@@ -140,12 +140,14 @@ Result:
     :alt: Bilateral Filtering
     :align: center
 
-See, the texture on the surface is gone, but edges are still preserved.
+Note that the texture on the surface is gone, but edges are still preserved.
 
 Additional Resources
 ======================
 
-1. Details about the `bilateral filtering <http://people.csail.mit.edu/sparis/bf_course/>`_
+1. Details about bilateral filtering can be found `here <http://people.csail.mit.edu/sparis/bf_course/>`_
 
 Exercises
 ===========
+
+Take an image, add Gaussian noise and salt and pepper noise, and compare the effect of blurring via box, Gaussian, median and bilateral filters for both noisy images, as you change the level of noise.
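For the exercise above, the two noise types can be generated with NumPy alone (the amounts below are arbitrary); each of the four OpenCV blurs can then be applied to the resulting arrays:

```python
import numpy as np

rng = np.random.default_rng(42)
img = np.full((64, 64), 128.0)

# Additive Gaussian noise (sigma chosen arbitrarily), clipped to [0, 255]:
gaussian_noisy = np.clip(img + rng.normal(0, 20, img.shape), 0, 255)

# Salt-and-pepper noise: force a random ~10% of pixels to 0 or 255.
sp_noisy = img.copy()
coords = rng.random(img.shape)
sp_noisy[coords < 0.05] = 0      # "pepper"
sp_noisy[coords > 0.95] = 255    # "salt"
```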

source/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.rst

+1 -1

@@ -21,7 +21,7 @@ Morphological transformations are some simple operations based on the image shap
 
 1. Erosion
 --------------
-The basic idea of erosion is just like soil erosion only, it erodes away the boundaries of foreground object (Always try to keep foreground in white). So what it does? The kernel slides through the image (as in 2D convolution). A pixel in the original image (either 1 or 0) will be considered 1 only if all the pixels under the kernel is 1, otherwise it is eroded (made to zero).
+The basic idea of erosion is just like soil erosion only, it erodes away the boundaries of the foreground object (always try to keep the foreground in white). So what does it do? The kernel slides through the image (as in 2D convolution). A pixel in the original image (either 1 or 0) will be considered 1 only if all the pixels under the kernel are 1; otherwise it is eroded (made to zero).
 
 So what happends is that, all the pixels near boundary will be discarded depending upon the size of kernel. So the thickness or size of the foreground object decreases or simply white region decreases in the image. It is useful for removing small white noises (as we have seen in colorspace chapter), detach two connected objects etc.
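The erosion rule described above (a pixel stays 1 only if every pixel under the kernel is 1) can be written out directly in NumPy. This is a toy sketch of what cv2.erode does with an all-ones kernel, not the OpenCV implementation:

```python
import numpy as np

def erode(binary, ksize=3):
    # A pixel survives only if *all* pixels under the ksize x ksize
    # kernel are 1; border pixels are simply zeroed for this sketch.
    h, w = binary.shape
    half = ksize // 2
    out = np.zeros_like(binary)
    for y in range(half, h - half):
        for x in range(half, w - half):
            window = binary[y - half:y + half + 1, x - half:x + half + 1]
            out[y, x] = 1 if window.all() else 0
    return out

# A 5x5 white square on a 7x7 black background erodes to a 3x3 square,
# illustrating how the foreground boundary shrinks by the kernel radius.
img = np.zeros((7, 7), dtype=np.uint8)
img[1:6, 1:6] = 1
eroded = erode(img)
```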

source/py_tutorials/py_ml/py_svm/py_svm_basics/py_svm_basics.rst

+2 -2

@@ -62,9 +62,9 @@ Let us define a kernel function :math:`K(p,q)` which does a dot product between
 
 .. math::
 
-    K(p,q) = \phi(p).\phi(q) &= \phi(p)^T \phi(q) \\
+    K(p,q) = \phi(p).\phi(q) &= \phi(p)^T \, \phi(q) \\
              &= (p_{1}^2,p_{2}^2,\sqrt{2} p_1 p_2).(q_{1}^2,q_{2}^2,\sqrt{2} q_1 q_2) \\
-             &= p_1 q_1 + p_2 q_2 + 2 p_1 q_1 p_2 q_2 \\
+             &= p_{1}^2 q_{1}^2 + p_{2}^2 q_{2}^2 + 2 p_1 q_1 p_2 q_2 \\
              &= (p_1 q_1 + p_2 q_2)^2 \\
     \phi(p).\phi(q) &= (p.q)^2
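The corrected derivation above, with the feature map phi(p) = (p1^2, p2^2, sqrt(2) p1 p2), can be checked numerically: the dot product in feature space must equal (p.q)^2. A quick sketch with arbitrary sample points:

```python
import numpy as np

def phi(v):
    # The explicit 2D -> 3D feature map used in the derivation above:
    # phi(p) = (p1^2, p2^2, sqrt(2) * p1 * p2)
    p1, p2 = v
    return np.array([p1 ** 2, p2 ** 2, np.sqrt(2) * p1 * p2])

# Arbitrary test points (chosen for this check, not from the tutorial).
p = np.array([1.0, 2.0])
q = np.array([3.0, 0.5])

lhs = phi(p) @ phi(q)   # dot product in the 3-D feature space
rhs = (p @ q) ** 2      # kernel trick: (p . q)^2, no feature map needed
```

This is exactly why the kernel trick works: K(p,q) = (p.q)^2 gives the feature-space dot product without ever computing phi explicitly.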
