Deep Learning - Machine Vision
"Deep Learning" and "Convolutional Neural Network" are currently the most popular topics in the field of artificial vision. Artificial intelligence studies, which have their roots in the 1950s and reached theoretical maturity in the 1990s, are currently experiencing a golden age with the increasing power of processors, access to big data, and advances in imaging hardware.
The basic motivation behind deep learning applications in artificial vision is to enable automatic identification of objects that are difficult to identify using traditional image processing methods. Instead of describing each object based on its individual features, we provide the system with numerous examples of what the objects are beforehand and expect the system to extract the common and different aspects of these examples. This way, when a new image is presented to the system, it can tell us which of the previous examples it most closely resembles.
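The idea of asking "which previous example does this new image most resemble?" can be sketched as a toy nearest-neighbour classifier. The feature vectors and model names below are made-up illustrations, not values from any real system:

```python
import math

# Hypothetical training examples: hand-crafted feature vectors
# [hole_count, groove_count, diameter_mm] paired with class labels.
training_set = [
    ([6, 8, 300.0], "disc_model_A"),
    ([5, 0, 280.0], "disc_model_B"),
    ([8, 12, 330.0], "disc_model_C"),
]

def classify(features):
    """Return the label of the stored example closest to `features`."""
    def distance(example):
        return math.dist(example[0], features)
    return min(training_set, key=distance)[1]

print(classify([6, 9, 305.0]))  # closest to disc_model_A
```

Real deep learning systems learn the features themselves instead of using hand-picked ones, but the decision principle, similarity to previously seen examples, is the same.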
Example 1: consider an image processing application that identifies the model of each brake disc in the image above. With a classical approach, we could distinguish the discs by features such as the number of large holes, small holes, and notches on the surface, and this would likely work. However, every time a new brake disc model is introduced, someone would have to study it, identify its distinguishing features, and code them into a new version of the software. For more complex applications such as facial recognition, emotion analysis, or behavioral tests, classical methods may not suffice at all. Deep learning offers a solution to these problems: the system learns the object classes itself and then identifies them in new images. The more images the system is fed, the more it learns, and the easier it becomes to introduce new objects.
Example 2: classifying the objects in an image of a beach and sky is harder than classifying geometric shapes, because the colors of the sky, sea, and sand change with lighting, weather, and other factors. For problems like this, deep learning techniques such as CNN classification are used: the system is trained on a large number of example images with corresponding labels, so that it can identify the classes of objects in new images based on what it has learned from previous examples.
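The core operation of a CNN, convolution, can be sketched in a few lines of plain Python. The tiny "image" and edge-detecting kernel below are illustrative assumptions, not data from a real application:

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution (technically cross-correlation, as in
    most deep learning libraries) of nested-list `image` and `kernel`."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A tiny image containing a vertical edge, and a vertical-edge kernel.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 1],
    [-1, 1],
]
print(convolve2d(image, kernel))  # [[0, 2, 0], [0, 2, 0]]
```

The strong response in the middle column marks where the edge lies. During training, a CNN learns thousands of such kernels automatically instead of having them hand-designed.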
In the above image, each region is marked with a label to inform the system. An operation called offline training is then performed, which extracts information about the labeled classes from all of the fed images. Training is generally a time-consuming process, and since most of the workload runs on the graphics processor, a powerful GPU provides a clear advantage. Fortunately, the long training process only has to be done once; once trained, the system makes its decisions quickly at run time.
In summary, deep learning works with classes and labels, and there can be as many classes as needed. In the above image there are three: sea, cloud, and beach. If we also wanted to separate green areas, people, ships, and other objects, we would have to find images containing those objects and teach the system the regions with those labels, increasing the number of classes.
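During training, class labels like these are typically encoded as one-hot vectors. A minimal sketch for the three classes above (the class order here is an arbitrary choice):

```python
CLASSES = ["sea", "cloud", "beach"]

def one_hot(label):
    """Encode a class name as a one-hot vector over CLASSES."""
    return [1 if c == label else 0 for c in CLASSES]

print(one_hot("cloud"))  # [0, 1, 0]
# Adding a new class (e.g. "ship") simply grows the vector by one
# dimension -- this is the "increase in dimensions" described above.
```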
The detailed working principles and sample projects of deep learning, CNN, and other classifiers will be included in our blog pages.
At Pi Robotics, we also use deep learning methods to detect defect types that are difficult to model explicitly, since their area, size, color, and shape constantly vary.
We use deep learning in projects such as:
- Detecting fabric defects (oil stains, warp and weft defects, tears, and the like, which look different in every fabric, making it difficult to write a general application with classical models),
- Finding defects on leather (difficult to catch with classical methods because of its natural texture),
- Detecting cracks in glass bottle necks (difficult to model with classical methods because the cracks can occur in any shape).
"Not OK" set consisting of bottle mouth breakages. At least 1 photo containing every type of error is shown to the system.
Our deep learning application that detects bottle mouth breakages.
The use of deep learning is becoming widespread in fields such as facial recognition in crowded areas (stadiums, metro entrances, etc.), object recognition in images of streets and roads, and environment learning for autonomous vehicles.
At Pi Robotics, we develop deep learning-based machines specifically for the textile industry, to teach and find all types of fabric defects. In addition to the methods used by classical systems, we achieve much higher success rates by supporting them with texture inspection, background estimation, and deep learning techniques.
- Deep Learning
- Artificial Intelligence
Do you have questions? Get in touch with us. Request a Project