Posted 2022-12-27 00:00:00 +0000 UTC
According to foreign media reports, Waymo's vehicles use computer vision and artificial intelligence to identify their surroundings and make real-time decisions about how the vehicle should react and move. When the vehicle's cameras and sensors perceive objects, those objects are matched against a large database compiled by Alphabet for recognition.

(Photo source: Waymo)

Large datasets are essential for training self-driving cars, because more data makes the AI better and improves its performance. However, engineers need an effective way to match objects in the dataset against the objects they are querying, so that they can study how the AI handles specific types of images. To address this, Waymo recently developed a tool called "content search", which works much like Google Image Search and Google Photos: it matches a query against the semantic content of images, making image retrieval based on natural-language queries far easier.

Before content search, if Waymo's researchers wanted to retrieve a specific example from the driving logs, they had to describe the object with heuristics. The logs had to be searched with rule-based commands, meaning objects were queried with rules such as "below X height" or "moving at Y miles per hour". The results of such rule-based searches were usually very broad, and researchers had to comb through them manually. Content search solves this by building a catalog of object data and running similarity searches across the catalog to find the most similar category when an object is presented.

If a truck or a tree is shown to the content search tool, it returns other trucks or trees that Waymo's self-driving cars have encountered. Because the vehicles record images of surrounding objects as they drive and store those objects as embeddings, that is, in mathematical form, the tool can compare object categories and rank responses by how similar the stored object images are to the query, much like Google's embedding-based similarity matching services (a rough sketch of this idea follows below).

Although the objects Waymo encounters come in many shapes and sizes, they must be distilled into basic components and categorized for content search to work. To achieve this, Waymo uses multiple AI models trained on a variety of objects. Different models learn to recognize different objects and back the content search tool, letting it determine whether an object of a specific category can be found in a given image. In addition to the main models, Waymo also uses an optical character recognition (OCR) model, which lets its vehicles attach additional identifying information to objects in an image based on any text in the image. For example, a truck carrying a logo will have that text included in its content search description. Because the models work together, Waymo's researchers and engineers can search the image logs for specific objects, such as particular kinds of trees or particular brands of cars.
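To make the embedding-and-ranking idea above concrete, here is a minimal sketch, not Waymo's actual implementation: stored object embeddings are compared to a query embedding and ranked by similarity. The embedding step itself is assumed to happen elsewhere; random vectors stand in for real image or text embeddings, and cosine similarity is used as the ranking metric.

import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def content_search(query_embedding, catalog, top_k=5):
    """Rank catalogued object embeddings by similarity to the query."""
    scored = [(obj_id, cosine_similarity(query_embedding, emb))
              for obj_id, emb in catalog.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Example usage: random vectors stand in for real embeddings of logged objects.
rng = np.random.default_rng(0)
catalog = {f"log_object_{i}": rng.standard_normal(128) for i in range(1000)}
query_embedding = rng.standard_normal(128)
for obj_id, score in content_search(query_embedding, catalog):
    print(obj_id, round(score, 3))

In practice a system like this would use learned embeddings and an approximate nearest-neighbor index rather than a brute-force scan, but the ranking principle is the same.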
This is not the first time Waymo has used multiple machine learning models to improve vehicle reliability and accuracy. In the past, Waymo worked with Alphabet/Google's DeepMind to develop AI technology together. That AI system draws inspiration from evolutionary biology: it first creates a variety of machine learning models, and after training, underperforming models are removed and replaced by offspring models. This technique is reported to have reduced the number of false positives, as well as the computing resources and training time required.
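As an illustration of the evolutionary selection loop described above, here is a hedged sketch in the spirit of population-based training. It is not DeepMind's or Waymo's actual code; the train_and_score function and the hyperparameter being mutated are illustrative placeholders.

import copy
import random

def train_and_score(config):
    """Placeholder: train a model with this config and return a validation score."""
    # Fake objective: configurations with a learning rate near 1e-3 score best.
    return -abs(config["learning_rate"] - 1e-3) + random.gauss(0, 1e-5)

def evolve(population_size=8, generations=5):
    """Evolve a population of model configs, replacing the worst with offspring."""
    population = [{"learning_rate": 10 ** random.uniform(-5, -1)}
                  for _ in range(population_size)]
    for _ in range(generations):
        ranked = sorted(population, key=train_and_score, reverse=True)
        survivors = ranked[: population_size // 2]        # keep the best half
        offspring = []
        for parent in survivors:                          # underperformers are
            child = copy.deepcopy(parent)                 # replaced by offspring
            child["learning_rate"] *= 10 ** random.gauss(0, 0.2)  # small mutation
            offspring.append(child)
        population = survivors + offspring
    return max(population, key=train_and_score)

print(evolve())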