What Is Image Processing? – Vision Campus

Welcome to our Vision Campus! Today, I will focus on image processing. There are endless situations calling for image processing done by software. In general, they can be clustered into a few groups: checking for presence, object detection and localization, measurement, and identification and verification.

In most cases, the image acquired from the camera is not processed directly within the application. Instead, it is preprocessed to enhance the image for the specific task. Examples of preprocessing are noise reduction as well as brightness and contrast enhancement. Some of these steps can be done directly by the camera itself, which saves CPU load on the host side.

To use the camera as a measuring device, it must be calibrated to the physical world. Camera calibration can actually refer to two things: geometric calibration and color calibration. With geometric calibration, we correct the lens distortion. Furthermore, we can also determine the relationship between the camera’s natural units – meaning pixels – and real-world units, like millimeters or inches, for example. With color calibration, we ensure an accurate reproduction of colors. The better the preprocessing, the better the image quality and the results of the image processing within your application.

Now let’s have a closer look at image processing. When it comes to locating parts, matching is usually involved. This means looking for regions that are similar to or the same as a predefined template. This template can either be an image itself or a geometric pattern which contains information about edges and geometric features. These methods are called correlation pattern matching and geometric pattern matching, respectively. Let’s have a look at cookie inspection. Your template would be the image of a perfectly shaped cookie. A camera takes images of all cookies on the conveyor belt. As soon as there is a cookie that doesn’t match the template, the cookie is rejected.

The main use of measurement with the help of image processing is in alignment or inspection applications. Most measurement techniques rely on edge-detection algorithms. An edge is an area in an image displaying a significant change in image intensity – or in other words, a high local contrast. This means your software analyzes the grey levels of the image and, based on this, identifies shapes, measures distances, and calculates the geometry. This measurement and calculation is made possible by the camera calibration that established the relationship between pixels and real-world units. Take the label of a bottle, for example. With measurement, you can check whether the label has been placed correctly.

Typical applications for identification are barcode and 2D matrix code reading or optical character recognition, also called OCR. One way to manage optical character recognition is to separate the characters in the image and compare them with a set of templates. Afterwards, the software can convert the captured data into editable and searchable data. A popular example of OCR is automatic number plate recognition – also known as ANPR.

Each of these processing techniques covers a wide spectrum of machine vision applications, but combining them can give you even more possibilities. Imagine a car entering a parking garage: a camera takes an image of the license plate and the car. When the car exits the garage, the camera takes another image. Then the software compares those two images. The gate only opens when the license plate and the car model are the same as in the first image.

This was only a short overview of the information you can gain from your images by using image processing, but it offers a taste of the many different possibilities. Thanks for watching!
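For readers who want to experiment, the correlation pattern matching described above can be sketched in a few lines of NumPy. This is a minimal illustration under made-up assumptions – a tiny synthetic grayscale scene, a 3x3 "cookie" template, and a brute-force search – not a production matcher, which would use an optimized library routine.

```python
import numpy as np

def match_template(image, template):
    """Return the (row, col) where a 2-D grayscale template best matches
    the image, using normalized cross-correlation over every window."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat window: no contrast to correlate against
            score = (w * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Hypothetical "perfect cookie" template placed in a larger empty scene:
cookie = np.array([[0., 1., 0.],
                   [1., 1., 1.],
                   [0., 1., 0.]])
scene = np.zeros((8, 8))
scene[3:6, 2:5] = cookie

pos, score = match_template(scene, cookie)
```

A perfect match scores 1.0; in an inspection setting, a cookie whose best score falls below some threshold would be rejected.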
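The measurement idea – find edges as points of high local contrast, then convert pixel distances to real-world units via the calibration – can be sketched the same way. The scan-line intensities and the millimeters-per-pixel factor below are invented for illustration; a real factor would come from geometric calibration.

```python
import numpy as np

# One scan line across a bottle label: dark label (20) on a bright
# bottle (200). Both values are made up for this example.
scan = np.array([200, 200, 200, 20, 20, 20, 20, 20, 200, 200], dtype=float)
MM_PER_PIXEL = 0.25  # hypothetical result of geometric calibration

# An edge is a high local contrast: threshold the absolute gradient.
gradient = np.abs(np.diff(scan))
edges = np.flatnonzero(gradient > 50)  # indices where intensity jumps

label_width_px = edges[-1] - edges[0]
label_width_mm = label_width_px * MM_PER_PIXEL
```

Comparing the measured width and edge positions against tolerances is then enough to decide whether the label has been placed correctly.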
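Finally, the template-comparison approach to OCR can be illustrated in miniature: each character, once separated from the image, is classified as the template it differs from in the fewest pixels. The 3x3 binary glyphs here are made-up stand-ins for real character templates, and the segmentation step is omitted.

```python
import numpy as np

# Tiny binary glyphs standing in for a real font's templates:
TEMPLATES = {
    "0": np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]]),
    "1": np.array([[0, 1, 0],
                   [0, 1, 0],
                   [0, 1, 0]]),
    "7": np.array([[1, 1, 1],
                   [0, 0, 1],
                   [0, 0, 1]]),
}

def read_characters(glyphs):
    """Classify each segmented glyph as the template with the
    fewest mismatching pixels, and join the results into text."""
    text = ""
    for g in glyphs:
        text += min(TEMPLATES, key=lambda ch: np.sum(TEMPLATES[ch] != g))
    return text

# Characters already separated from a (hypothetical) plate image:
plate = [TEMPLATES["1"], TEMPLATES["0"], TEMPLATES["7"]]
```

The string returned by `read_characters` is exactly the editable, searchable data mentioned above – which is what an ANPR system would then look up or compare.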
