Written by Jessie Cordwell
Account Executive
On October 2, in the midst of the SAG-AFTRA strike and growing concern over A.I. likenesses of actors being used in video production, the New York Times published an article about actor Tom Hanks and TV host Gayle King both having their likenesses used in advertisements for products they had not agreed to promote. Meta, the parent company of Facebook and Instagram, did not comment directly on the ads, but said it was against its policy to run ads that use the likeness of public figures in a deceptive manner.
On September 27, less than a week earlier, Meta introduced multiple new A.I. experiences for users to interact with. These include new A.I. stickers generated from any combination of words a user might come up with, A.I. image editing, an A.I. virtual assistant, and most notably, 28 A.I. “characters.” These characters have personalities, interests and opinions of their own, and resemble the physical appearance and voices of some of the world’s most prominent public figures, including Kendall Jenner, Tom Brady, and Mr. Beast. Apart from the real-life people they’re based on, these A.I. characters have their own social media accounts and operate as their own entities.
Deepfakes have paved the way for an uncanny valley that is becoming increasingly canny. For years the uncanny valley separated A.I. and computers from reality and humanity; now that line is blurring, but was it inevitable? The anthropomorphism of computers used to be easily distinguishable from life, yet the gap is starting to close. And despite the discomfort associated with the uncanny valley, it acted as a way to distinguish the real from the fake.
The “uncanny valley” refers to a phenomenon described by Masahiro Mori, a robotics professor, in 1970. His original hypothesis holds that as a robot’s appearance becomes more humanlike, it can elicit a positive or even empathic response from people, until that likeness becomes so “uncanny” that it instead provokes distaste and discomfort.
An October 2022 article from the Wall Street Journal anticipated a rise in the use of deepfake technology in advertising: “...experts and practitioners say deepfake technology will become increasingly popular in advertising, because it can help brands and agencies produce more content faster while eliminating many of the expenses involved in production.”
A.I.-generated content, ranging from fake product reviews on Amazon to fake presidential speeches and even virtual influencers, has been around for nearly a decade. The central concerns are that A.I. is developing faster than we can regulate it or maintain authority over it, and that the online world is merging with, and even being prioritized over, reality. Beyond the problem of regulation, scientists like Dr. Geoffrey Hinton, widely considered the “Godfather of A.I.,” are concerned about the embrace of A.I. not only in military use but also in communication. Hinton in particular predicts a mass inundation of fake imagery, text and video that will become more and more difficult for the average person to distinguish as real.
Was this inevitable? According to a Wired article from 2017, we were already there then, and evidently we’re even deeper in now; it’s a matter of adapting to a new symbiotic relationship with artificial intelligence.