By Chyung Eun-ju and Joel Cho
In today's fast-paced, information-driven world, keeping up with the latest news and knowledge can be a daunting task. With the rise of digital technologies, however, there is a new tool that is revolutionizing the way we process and consume information.
Meet ChatGPT, a cutting-edge AI language model that provides on-demand access to vast amounts of knowledge and personalized content. ChatGPT is changing the information landscape and transforming the way people access and process information. From fact-checking and customized responses to new opportunities for learning and discovery, ChatGPT is redefining what it means to be an informed citizen in the digital age.
AI systems are changing the intellectual significance of how we do things, such as how we access and process information. Does the easy access to more information improve or impair the way our brains process information?
Whether digital tools allow us to be smarter or actually make us dumber has been a long-standing discussion.
Some researchers argue that using digital tools such as navigation systems or spell checkers doesn't make someone dumb or dependent on the tool, but rather makes them more efficient, freeing up time for other cognitive work. On the other hand, a tool that helps us search, navigate and organize information with greater ease will logically affect how much of that information we actually process and absorb. With the surplus of information we hold at our fingertips nowadays, the problem is that we are facing cognitive overload.
New information does not necessarily translate to knowledge, intelligence or wisdom. In order for new information to be taken in as knowledge, we need it to be processed properly and independently. In other words, technology can make us smarter in the sense that we are able to understand more from what we have, but conversely, it can affect the way we process the information, making us reliant on these technologies in order to make use of the knowledge.
This discussion is being brought into the spotlight as AI generators take the digital world by storm. AI generators like ChatGPT have introduced a new world of AI-generated content and information. The exponential growth of AI-generated material can be seen all over the news: Amazon has seen an increase in self-published e-books whose authors disclose that the material is AI-generated, educational institutions are growing wary of the misuse of AI for academic work and copyright questions are being raised over AI-generated art.
Just within the first paragraph of this article, we used ChatGPT to generate the introduction to our column by inputting prompts to the platform.
Although the quality of these AI-generated materials is still not up to par with works created by human professionals, the results are quite convincing. It may only be a matter of time before the technology improves to the point where the origin and authorship of AI-generated material become difficult to identify.
And it seems that AI generators like ChatGPT are spreading at an alarming speed. The adoption of AI tools is increasing by the day, and real-life implications are already taking form. One of the major concerns over the popularized use of AI tools lies in the field of education. With this surge in the popularity of AI technology, a tool able to process information for us from simple prompts, the question of whether digital tools are actually making us smarter takes on new relevance.
If a student uses an AI generator to write an essay for them, would they be considered dumber than one who did their own research? Should educational institutions turn away from this technology or embrace and adapt to it? Should we start moving past the idea of a society based on knowledge transfer towards one of knowledge-sharing?
From any perspective, this new era in AI technology is as exciting as it is frightening. The truth is that these popular AI tools all rely on processing publicly available information, so, just as is the case with AI image generators, these AI-generated texts may contain and propagate bias.
In other words, the exploding popularity of generative AI tools has the potential to propagate even more misinformation, compounding the problems society already faces on the poorly regulated digital frontier.
Sam Altman, the CEO of OpenAI, the company behind ChatGPT, recently tweeted about concerns over the frightening speed at which AI tools are being adopted without proper regulation: "[...] society needs time to adapt to something so big. There will be more challenges like bias (we don't want ChatGPT to be pro or against any politics by default) [...]."
So even though we are undeniably witnessing an exciting phase of AI technology's evolution, we must recognize that it does not come without issues. Information provided through these tools should be treated with the same attention and precautions applied to any publicly available information. Thus, as we become more heavily reliant on AI, we should not forget that it must not replace our critical thinking and human judgment.
Information is becoming faster and easier to access, but regulatory measures must keep pace with this growth; otherwise, we will be exposed to the risks of malicious or unethical use of this technology.
Just at the end of last year, at the height of the popularity of ChatGPT, Altman admitted, "ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It's a mistake to be relying on it for anything important right now. It's a preview of progress; we have lots of work to do on robustness and truthfulness."
Technological advancements have democratized access to information, and AI generators are prompting us to question our views on knowledge. However, we cannot simply embrace the positive aspects of a new technology without carefully weighing the potential dangers it presents. We are at the beginning of a new era with AI, and evidently we must tread with caution.
Chyung Eun-ju (ejchyung@snu.ac.kr) is studying for a master's degree in marketing at Seoul National University. Her research focuses on digital assets and the metaverse. Joel Cho (joelywcho@gmail.com) is a practicing lawyer specializing in IP and digital law.