How much has AI developed in two years?
[Photo Credit to Pexels]
It has been two years since OpenAI released ChatGPT to the public in November 2022, and in those two years AI has made its mark on many fields and aspects of everyday life.
Over the same period, companies including Meta, Google, and Apple released AI models of their own, such as Llama, Gemini, and Apple Intelligence.
This year, artificial intelligence researchers won the Nobel Prizes in Physics and Chemistry, showcasing AI's growing role in advancing our society and lives.
Before GPT-3, there were GPT-1 and GPT-2, neither of which was accessible to everyone.
GPT-1 was never designed for public use, and the full GPT-2 was initially withheld over misuse concerns, with only a smaller version released for research purposes.
In 2021, GitHub previewed Copilot, an AI service built on OpenAI's Codex that helps programmers code more efficiently.
Then, in November 2022, OpenAI released ChatGPT, which gathered an estimated 100 million users within just two months thanks to its chat-style interface and accessibility.
The release of GPT-4 in early 2023, a reportedly larger model than GPT-3 that could also analyze images, captivated public interest once again.
Other companies jumped in to develop their own AI; their early models were not as powerful as GPT, but now, two years later, many competitive models have emerged.
When ChatGPT was first introduced, there were widespread fears that AI would soon dominate the world, but those fears have largely not come true.
Generative AI features shown at Adobe MAX last year and this year are making work easier and more efficient; even so, AI remains just an advanced tool.
Large language models (LLMs) such as ChatGPT are programs trained to predict which word is most likely to come next.
An AI that appears to think and answer like a human is really a system trained on a massive dataset to output the most probable next word based on statistics, not one that genuinely understands language.
For example, given the phrase "If you let me", an LLM ranks candidate continuations such as "take", "have", and "do" by probability and predicts the most likely one, say "take".
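To make that concrete, here is a minimal Python sketch of next-word prediction. The probability table is invented for illustration; a real LLM computes these probabilities from billions of learned parameters rather than looking them up.

```python
import random

# Toy next-word probabilities for the context "If you let me".
# NOTE: these numbers are made up for illustration; a real LLM
# derives them from training, not from a lookup table.
next_word_probs = {
    "take": 0.45,
    "have": 0.25,
    "do": 0.15,
    "go": 0.10,
    "sing": 0.05,
}

def greedy_next_word(probs):
    """Pick the single most probable continuation."""
    return max(probs, key=probs.get)

def sampled_next_word(probs):
    """Sample a continuation in proportion to its probability,
    which is roughly how chatbots vary their wording."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights)[0]

context = "If you let me"
print(context, greedy_next_word(next_word_probs))   # If you let me take
print(context, sampled_next_word(next_word_probs))  # varies run to run
```

Generating a whole reply is just this step repeated: the chosen word is appended to the context, and the model predicts again.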
Consequently, language models have significant limitations in fields such as math: rather than actually carrying out mathematical or logical operations, they generate whatever answer looks most probable.
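The toy sketch below contrasts that behavior with real computation. It is only an analogy (real LLMs do not store a lookup table of answers), but it shows why recalling a likely-looking answer is not the same as calculating one.

```python
from collections import Counter

# Pretend "training data": arithmetic questions the system has seen.
seen = [("7 * 8 =", "56"), ("7 * 8 =", "56"), ("12 + 9 =", "21")]

memory = {}
for question, answer in seen:
    memory.setdefault(question, Counter())[answer] += 1

def pattern_answer(question):
    """Recall the most common answer seen for this question.
    No arithmetic happens anywhere in this function."""
    if question in memory:
        return memory[question].most_common(1)[0][0]
    return "???"  # never saw this exact question before

def computed_answer(question):
    """Actually evaluate the expression, as a calculator would."""
    return str(eval(question.rstrip("= ")))

print(pattern_answer("7 * 8 ="))     # 56    (memorized, happens to be right)
print(pattern_answer("391 * 47 ="))  # ???   (unseen, recall fails)
print(computed_answer("391 * 47 =")) # 18377 (computed, always right)
```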
Because AI models are trained on data produced by humans, they sometimes reproduce biased, controversial, or even dangerous information.
According to OpenAI's GPT-4 technical report, early versions of the model could give biased and harmful answers, such as a plan to kill the most people with one dollar or instructions for laundering money.
Although these problems have largely been fixed, jailbreaking prompts (inputs crafted to bypass a model's safety rules) can still draw out explicit and hateful answers.
There are high expectations for GPT-5 to be much better than GPT-4; as OpenAI CEO Sam Altman has put it, if GPT-4 seems bad, GPT-5 will seem okay.
Since it has only been a few years, it is hard to expect AI to outperform humans or truly act like them, at least for now.
- Chaemoon Han / Grade 10
- American School Dhahran