
OPINION

Issue No. 27: What is the truth?

  • Date: 2025-10-01
강명관



What is the truth?

Separating Fact from AI Fiction in Modern Media

By Jung-Hyun Kang, Trainee Reporter

eyesens1262@naver.com


            Until just a few years ago, AI was considered a distant technology of the future, out of reach for the average person. Now we use it so often that it is hard to imagine daily life without it. How many times a day do you use AI? For college students, using AI effectively in class and on assignments can noticeably improve academic results. But there is a problem: is the information you use for those assignments truly trustworthy? One drawback of AI's advancement is that it can produce believable information about things that never happened. Many people, regardless of age, remain unaware that AI can fabricate news, so they tend to accept any text, photo, or video as true. AI has become convincing enough that memes asking "Is this AI?" now circulate on social media whenever users doubt that something actually happened. Just how convincing is fake news, and why has this phenomenon arisen?




Real-World Cases


            In May 2024, a Korean woman lost approximately $55,000 in a romance scam after video chatting with someone impersonating Elon Musk. The scammer used deepfake technology to create a convincing video call, leading the victim to believe she was in a romantic relationship with the Tesla CEO. The case illustrates how AI-generated content can exploit human emotions and trust. Deepfake technology is also being weaponized for investment fraud: what began as malicious manipulation of celebrity images has evolved into sophisticated fake advertisements in which deepfake versions of famous executives endorse fraudulent investment schemes, causing significant financial losses to unsuspecting investors. A South Korean media outlet recently aired troubling AI-generated content, including a video of a sparrow supposedly saying "The lovebug's natural enemy is the sparrow," photos of fake environmental activists, and images of alleged pollution from North Korean factories, all created by artificial intelligence. Perhaps most shocking was the discovery that "The Velvet Sundown," a band with over one million monthly Spotify listeners, was entirely AI-generated and had never existed.



The Detection Challenge


            Paradoxically, young people who grew up in the digital age are more vulnerable to AI-generated misinformation despite being technologically savvy. Their rapid consumption of information through social media prioritizes speed over verification. Social media algorithms compound the problem by promoting engaging content regardless of its authenticity, creating echo chambers where misinformation spreads unchecked. In a 2024 survey by South Korea's Ministry of Science and ICT, citizens ranked financial fraud and voice phishing as the third most concerning consequence of deepfake-based fake news, with 16.75% of respondents citing it, a sign of growing public awareness of and concern about AI-generated misinformation. The psychological impact extends beyond individual deception: if people begin questioning every piece of content they encounter, trust in legitimate news sources and institutions could erode. Meanwhile, the speed of AI content generation far exceeds existing fact-checking capacity; AI systems can produce hundreds of persuasive but false articles in the time it takes a human fact-checker to verify a single claim.



Building Information Literacy


            Addressing AI-generated misinformation requires a fundamental shift in how we consume and verify information. Traditional media literacy, while still important, is insufficient when misinformation quality rivals authentic content. Educational institutions must prioritize critical thinking about information sources, teaching students to verify content across multiple channels before accepting or sharing it. This includes understanding AI systems, recognizing artificially generated content, and cultivating healthy skepticism toward convenient or bias-confirming information. Media organizations bear critical responsibility in this challenge. News outlets must invest in better verification tools, train staff to recognize AI-generated content, and maintain transparency about fact-checking procedures. Technology companies must develop detection systems that can identify and flag AI-generated content appropriately. Individual information consumers must adapt their habits and develop new navigation skills. This includes cross-checking sources, verifying publication dates and claims through official channels, and remaining aware of personal biases.




Moving Forward


            The challenge of distinguishing fact from AI-generated falsehoods represents a defining issue of our technological age. Successfully addressing it requires coordinated efforts from educators, technologists, journalists, policymakers, and citizens. The ability to critically evaluate information will become as essential as traditional literacy skills. The stakes are too high to ignore this challenge. We must act now—before AI-generated misinformation becomes so sophisticated and widespread that distinguishing truth from fabrication becomes virtually impossible for average people. Our response to this challenge will determine whether technology serves to inform and enlighten society or becomes a tool for widespread deception and manipulation.




Sources:

https://www.chosun.com/economy/tech_it/2024/06/11/OQQJM3JH4JFNBLTTUTBZQBOXWE/

https://www.kci.go.kr/kciportal/ci/sereArticleSearch/ciSereArtiView.kci?sereArticleSearchBean.artiId=ART003093079

https://www.msit.go.kr/bbs/view.do?sCode=user&mId=307&mPid=208&bbsSeqNo=94&nttSeqNo=3185217

https://www.youtube.com/watch?v=QXG5afPjGME

https://www.youtube.com/watch?v=YS87K647XM4