Two years ago, a CEO that I respect demonstrated to me how he used ChatGPT for coding. At the time, I wasn’t impressed. GPT could handle simple arithmetic and basic snippets of code, but when it came to complex logic or frontend development, it hit clear limitations. I dismissed it as little more than a toy.
A year later, I was running a downsized company and using GPT only to generate marketing data or draft text. Occasionally, when interviewing developer candidates, I noticed that many young applicants had become overly dependent on ChatGPT; without it, they struggled to write even the simplest code. To me, they looked like addicts incapable of independent thought, which deepened my skepticism toward LLMs.
By 2025, however, the situation had changed. In South Korea’s procurement market, SMEs seeking to sell patented products had to obtain a “Patent Application Report” from a government agency. For software, this required disclosing the source code—and, absurdly, providing a Korean-language comment for every single line. Such a task was practically impossible for a working developer, yet without it, our products couldn’t be recognized in the market. I found myself forced to annotate line by line, across four patents and their corresponding source code.
In desperation, I finally turned to the very tool I had once dismissed: ChatGPT.
To my surprise, it generated accurate, line-by-line Korean comments at incredible speed. Thanks to this, I completed the overwhelming documentation work. From that point on, I began to use LLMs for coding as well.
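The workflow itself was simple to script around. A minimal sketch of the mechanical half of the task (the `annotate` helper and the stub comment generator are hypothetical; in practice the generator would wrap a call to an LLM API):

```python
def annotate(source: str, comment_for) -> str:
    """Interleave a generated comment above each non-blank line of code.

    `comment_for` is any callable mapping a code line to a comment
    string; in a real pipeline it would call out to an LLM.
    """
    out = []
    for line in source.splitlines():
        if line.strip():
            # Preserve the code line's indentation on its comment.
            indent = line[: len(line) - len(line.lstrip())]
            out.append(f"{indent}# {comment_for(line.strip())}")
        out.append(line)
    return "\n".join(out)

# Stub standing in for the LLM call (hypothetical).
demo = annotate("x = 1\ny = x + 1", lambda code: f"설명: {code}")
print(demo)
```

Batching the source file through a helper like this, with the model producing each Korean comment, turned weeks of manual annotation into a loop.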
Looking back, in 2018 Satya Nadella acquired GitHub for $7.5 billion, even though the platform was unprofitable. Around the same time, Microsoft teams around the world were building Visual Studio Code, which steadily gained popularity. Then, in 2019, Microsoft invested in OpenAI.
These moves converged to shape the present. The GPT I had dismissed evolved at an astonishing pace within just a year. When Microsoft bought GitHub, ChatGPT didn't even exist; the state of the art was early transformer models like GPT-1 and BERT.
Perhaps Nadella never imagined using GitHub’s massive codebase to train an LLM, but the acquisition turned out to be a masterstroke. By 2023, after Microsoft’s additional multi-billion-dollar investment, GPT’s performance leapt forward.
Many in the industry suspect that Microsoft leveraged not only public GitHub repositories but also private ones to train its models. With GitHub, Bing’s crawled web data, and VS Code’s growing user base, Microsoft had secured the world’s richest sources of developer data. If this was Nadella’s plan all along, he is not just brilliant—he is a genius of historic proportions.
Copilot, integrated into VS Code and GitHub, launched with data-collection settings enabled by default. This meant user code, debugging traces, edits, and feedback could all flow back to Microsoft. The editor thus doubled as a massive data pipeline for refining LLMs.
Two years ago, GPT struggled with complex programming tasks. Today, it writes GPU code and modern C++ templates better than developers like me, who are more familiar with older standards and libraries. LLMs are improving at a staggering pace. Every six months, their coding abilities noticeably advance. While a human developer may write a few hundred lines of code per day, GPT can generate hundreds in minutes and run tirelessly through the night.
The old days of wandering Stack Overflow for hours to fix a bug are over. Even my mentor, who once taught me that "debugging is perseverance," now admits that perseverance alone no longer defines success.
Without GPT or Claude, productivity collapses, and I too rely on LLMs to fill the gap left by limited human resources.
In my university years, I spent sleepless nights debugging floating-point assembly code, drawing Karnaugh maps, wrestling with the Pintos scheduler, or writing Racket’s endless parentheses.
Despite my efforts, live coding exams rarely earned me more than a B+ or A-. In Korea’s competitive academic system, that put me above average, but all of that effort now feels devalued—reduced to mere tokens for an LLM.
Humans get tired, forget syntax, and lose focus. LLMs do not. Today's students finish in an hour with GPT what took me weeks of painful work. Assignments like Pintos or Coq proofs, once grueling rites of passage, have been trivialized by AI tools.
Some still insist LLMs won’t replace developers. I once thought the same. But from my own experience, I now see that this belief is dangerously complacent. GPT, for instance, logs prompts, timestamps, metrics, and debugging data. By analyzing patterns in user interactions and corrections, it can infer whether its generated code ultimately succeeded. With GitHub and Copilot feeding even more telemetry into the loop, these models continuously improve.
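One plausible form that feedback signal could take: treat a generated suggestion as successful when the user keeps it largely unchanged. A toy sketch of such a heuristic (entirely speculative; it assumes nothing about how any vendor actually scores completions):

```python
from difflib import SequenceMatcher

def acceptance_score(suggested: str, committed: str) -> float:
    """Similarity between what the model suggested and what the user kept.

    A ratio near 1.0 means the generated code survived review mostly
    intact; near 0.0 means it was rewritten. Purely illustrative.
    """
    return SequenceMatcher(None, suggested, committed).ratio()

def infer_success(suggested: str, committed: str,
                  threshold: float = 0.8) -> bool:
    # Label the interaction a success if most of the suggestion survived.
    return acceptance_score(suggested, committed) >= threshold

print(infer_success("def add(a, b): return a + b",
                    "def add(a, b): return a + b"))  # prints True
```

Aggregated over millions of sessions, even a crude signal like this would let a model learn which of its outputs developers actually shipped.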
Friends of mine in AI research predict that within two years, LLMs will replace even more developer roles.
Evidence already points that way: U.S. companies hesitate to hire junior developers, and the same is happening in Korea. While some argue that LLMs won’t erase jobs, nearly everyone agrees they dramatically boost software productivity.
The data flows are relentless—prompts, logs, code, debugging traces. With every iteration, LLMs become more complete, more tireless, more precise. Six months of progress already makes a difference; human developers cannot keep up with that pace.
These days, my role is not to out-code GPT, but to guide it, debug its errors, and set boundaries for what it creates. The long nights of study and struggle, the late-night coding marathons of my youth, seem increasingly irrelevant. For now, humans still validate and correct what LLMs produce—but even that role may diminish over time.
Computers don’t sleep. And with Google exploring “Agent-to-Agent” systems where multiple agents work in parallel without rest, the contrast with human effort will only grow sharper. As agents carry on endlessly while I eat, sleep, or take care of life’s routines, I remind myself: all I can do is keep moving forward, doing what I can.