MIT scientists trick Google AI into misidentifying a cat as guacamole

2024-09-22 15:25:26 [News] Source: Anhui News

Scientists at MIT's LabSix, an artificial intelligence research group, tricked Google's image-recognition AI called InceptionV3 into thinking that a baseball was an espresso, a 3D-printed turtle was a firearm, and a cat was guacamole.

The experiment might seem outlandish initially, but the results demonstrate why relying on machines to identify objects in the real world could be problematic. For example, the cameras on self-driving cars use similar technology to identify pedestrians while in motion and in all sorts of weather conditions. If an image of a stop sign was blurred (or altered), an AI program controlling a vehicle could theoretically misidentify it, leading to terrible outcomes.

The results of the study, which were published online today, show that AI programs are susceptible to misidentifying objects in the real world that are slightly distorted, whether manipulated intentionally or not.

AI scientists call these manipulated objects or images, such as a turtle with a textured surface that mimics the surface of a rifle, "adversarial examples."

"Our work demonstrates that adversarial examples are a significantly larger problem in real world systems than previously thought," the scientists wrote in the published research.

The example of the 3D-printed turtle illustrates their point. In the first experiment, the team presents a typical turtle to Google's AI program, and it correctly classifies it as a turtle. Then, the researchers modify the texture on the shell in minute ways — almost imperceptible to the human eye — which makes the machine identify the turtle as a rifle.

The striking observation in LabSix's study is that the manipulated or "perturbed" turtle was misclassified at most angles, even when they flipped the turtle over.

To create this nuanced design trickery, the MIT researchers used their own program specifically designed to generate "adversarial" images. The program simulated conditions an AI system could plausibly encounter in the real world, such as blurred or rotating objects — perhaps like the input an AI might get from the cameras on a fast-moving self-driving car.
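The article doesn't include the researchers' code, but the core idea behind adversarial examples can be sketched in a few lines. Below is a minimal, illustrative toy in Python with NumPy: a simple logistic classifier whose input is nudged a small amount in the gradient direction that increases its loss, the basic "fast gradient sign" trick. LabSix's actual approach additionally averages over simulated transformations (rotation, blur, and so on), which this sketch does not do; all names and numbers here are hypothetical, not from the study.

```python
import numpy as np

# Toy gradient-based adversarial perturbation.
# Everything here is illustrative; it is NOT LabSix's actual program.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy linear "classifier": p(class=1 | x) = sigmoid(w . x + b)
w = rng.normal(size=8)
b = 0.1

x = rng.normal(size=8)              # a "clean" input
p_clean = sigmoid(w @ x + b)
label = 1.0 if p_clean >= 0.5 else 0.0

# Fast-gradient-sign-style step: move x a small amount (eps) in the
# direction that increases the log-loss for its current label, so the
# change stays small while confidence in the original label drops.
# For a logistic model, d(loss)/dx = (p - y) * w.
eps = 0.25
grad = (p_clean - label) * w
x_adv = x + eps * np.sign(grad)
p_adv = sigmoid(w @ x_adv + b)

print(f"clean confidence: {p_clean:.3f}, perturbed: {p_adv:.3f}")
```

Because the step follows the sign of the loss gradient exactly, the perturbed input is guaranteed to be scored less confidently toward its original label than the clean one — the same failure mode, in miniature, as the turtle-to-rifle example.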

With the seemingly incessant progression of AI technologies and their application in our lives (cars, image generation, self-taught programs), it's important that some researchers are attempting to fool our advanced AI programs; doing so exposes their weaknesses.

After all, you wouldn't want a camera on your autonomous vehicle to mistake a stop sign for a person — or a cat for guacamole.
