Growing concerns about deepfakes as AI technology advances
[Photo credit: Pixabay]
With the rapid advancement in Artificial Intelligence (AI), many have voiced their concerns about an emerging technology called deepfake.
As its name suggests, AI refers to a machine simulating the essence of human intelligence and cognitive functions.
It allows the machine to perceive, reason, learn, and improve its capabilities by collecting and processing large amounts of data.
There are numerous contemporary applications of AI to real-world problems that much of the population uses or interacts with on a regular basis.
Some examples of AI that many may be familiar with include digital maps, voice assistants like Siri or Alexa, facial recognition, and chatbots like the recently trending ChatGPT.
Despite AI’s promising innovations and applications, it is still a developing field, yielding known and unknown risks.
Deepfakes are a widely known risk that has arisen from AI’s newer capabilities, prompting criticism and discussion among experts and the public.
The word “deepfake” is a combination of “deep learning” and “fake,” a name that captures what it essentially is.
A deepfake is media, such as a video, that has been manipulated by deep learning (a process in which a machine uses data and algorithms to “learn” in a way loosely analogous to the human brain) to make an individual appear as someone they are not.
The manipulation works by deconstructing the person’s facial features and recombining what the model “learns” from the data to produce the fake.
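For technically inclined readers, the sketch below illustrates, in Python with the PyTorch library, the shared-encoder, two-decoder idea behind many face-swap deepfakes. The network sizes, training loop, and random stand-in images are simplifying assumptions for illustration, not the pipeline of any particular deepfake tool.

```python
# Minimal sketch (not a production system) of the shared-encoder /
# per-identity-decoder idea used by many face-swap deepfakes.
# The random tensors below are stand-ins for cropped face images.
import torch
import torch.nn as nn

def make_encoder():
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
    )

def make_decoder():
    return nn.Sequential(
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 16 -> 32
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),   # 32 -> 64
    )

encoder = make_encoder()    # shared: learns face structure, pose, expression
decoder_a = make_decoder()  # learns to reconstruct person A's appearance
decoder_b = make_decoder()  # learns to reconstruct person B's appearance

faces_a = torch.rand(8, 3, 64, 64)  # stand-in batch of person A's face crops
faces_b = torch.rand(8, 3, 64, 64)  # stand-in batch of person B's face crops

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    # Each identity is trained only to reconstruct itself through its own decoder.
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The "swap": encode a face of person A, then decode with person B's decoder,
# yielding B's appearance in A's pose and expression.
swapped = decoder_b(encoder(faces_a))
print(swapped.shape)  # torch.Size([8, 3, 64, 64])
```

Because the encoder is shared, it learns the pose and expression common to both faces, while each decoder learns one person’s appearance; the swap is simply encoding with one identity and decoding with the other.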
The creation and use of this fraudulent media can carry serious societal disadvantages and may even pose security risks.
For example, an individual could use this technology to create a “morph,” a blend of several faces into a single new one, to fabricate official identifying documents such as passports.
These passports could then be exploited to bypass border security.
There are ample societal disadvantages as well: false information and propaganda, harassment through malicious content, hoaxes, and intentional reputational damage.
Figures like Brad Smith, the president of Microsoft, have vocalized their apprehension regarding the advancement of deepfakes.
Smith claimed that one of his greatest concerns surrounding AI is deepfake media.
He stated this month, “We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI.”
Smith’s statement is timely: as AI as a whole advances, so do the quality and refinement of deepfake technology.
In its earlier stages, imperfections in deepfake media, such as blurriness or video corruption, made the deceitful content easier to identify than it is today.
Over time, however, improvements in deepfake implementation have made it increasingly difficult to differentiate between legitimate and fraudulent media.
Additionally, because deepfakes are difficult to moderate, experts warn that they will only become harder to identify.
As this occurs, society may reach a point where the public eye fails to detect propaganda among legitimate news.
As AI technology continues its rapid development, the severity of the deepfake problem will likely grow alongside it.
- Katie Lee / Grade 11
- Episcopal Academy