The next-generation electric Atlas prototype raises its hand in greeting at the Hyundai Motor Group press conference held in Las Vegas, Nevada, USA, on January 5 (local time), the day before ‘CES 2026’ opened. Yonhap News
[Weekly Kyunghyang] #Scene 1. On January 5 (local time), Hyundai Motor Group unveiled the physical AI ‘Atlas’ at the international home appliance and IT exhibition ‘CES 2026’ held in Las Vegas, USA. With hands equipped with tactile sensors that enable delicate, human-like tasks, and capable of lifting up to 50 kg, this robot is scheduled to be deployed to automobile production plants in the United States and elsewhere starting in 2028. It will initially work on relatively simple processes such as parts sorting, and from 2030 it plans to expand its scope to tasks such as parts assembly. To that end, the company plans to establish a system capable of mass-producing 30,000 units by 2028. Once Atlas is introduced to factories, humans will reportedly take on the role of training and managing the robots so they operate well. Amid expectations of a ‘productivity revolution’, Hyundai Motor’s stock price has surged 80% (as of the January 21 closing price) just since the start of the new year.
#Scene 2. Following the market, the labor union responded. In a newsletter on January 22, the Hyundai Motor union said that Atlas’s “mass production and deployment to production sites are expected to cause employment shocks,” adding, “Not a single unit can come in without labor-management agreement.” The union also noted that one Atlas could be purchased for the equivalent of two to three years of a production worker’s annual pay, and that thereafter only about 14 million won in maintenance costs would be required. On this basis, it argued that Atlas threatens workers’ employment security and serves to maximize capitalists’ profits.
#Scene 3. Reactions to the union’s pushback have been cold. Comments criticizing the union appeared under related articles, such as “It’s only a matter of time before robots replace them,” “21st-century Luddites,” “Replace everything with robots and lower car prices,” and “You can’t resist the tide.” Many posts also invoked the union’s long-standing label of ‘aristocratic union’ or linked the union to Hyundai Motor’s share price, which began to fall right after the union’s response, to heap blame on it. President Lee Jae Myung said on January 29, “It would be good to have a serious discussion about a basic society,” while adding, “There was a machine-breaking movement in the past, but in the end one could not avoid the massive cart rolling in. Ultimately, society must adapt quickly.”
Companies, workers, and consumers may each be making their own case, but the three scenes share a common point: the same clear imagination of how the technology called physical AI will be used. Whether one supports it from a consumer’s perspective or opposes it from a worker’s perspective, everyone believes that ‘robots will enter factories and replace people.’
This kind of ‘imagination’ about the Atlas robot calls to mind technology and vision as described by Daron Acemoglu. Nobel laureate in economics and MIT professor Acemoglu wrote in his 2023 book <Power and Progress>, “We must not be too dazzled by the monumental technological progress humanity has achieved. A shared vision can also trap us.” According to Professor Acemoglu, technological development has not historically raised the living standards of everyone each time. Whether the fruits of progress are monopolized by a few or enjoyed by many depends entirely on society’s choices. What influences those choices is the imagination, that is, the vision, of how we will use the technology and how we will share its results.
Elon Musk, CEO of Tesla, has said, “Robotics and AI are a path to abundance for all,” but opinions diverge. It is unclear what abundance awaits workers who are immediately pushed out of their workplaces.
We examined the questions our society should be asking amid the controversy over ‘deploying Atlas to factories.’ Are the visions of an ‘end of jobs’ realistic? If they are not yet realistic enough, is there still nothing to worry about? If a ‘future in which robots replace humans’ is truly unavoidable, is society preparing for what comes after? Utopias do not unfold on their own, nor are dystopias something we must helplessly accept. It is time for society to keep asking questions so that the direction of technological progress is chosen toward a better path for everyone.
Job Replacement or Transition
Forecasts differ on how much jobs will be affected by the introduction of AI. Right after generative AI such as ChatGPT emerged, projections that jobs would be replaced were dominant. For example, Goldman Sachs’s 2023 outlook indicated that one in four jobs in the United States and Europe was highly likely to be automated by AI. Globally, it suggested that roughly 300 million full-time jobs could be replaced.
As generative AI has been used in workplaces, projections have been gradually revised. Recent outlooks place more weight on AI changing the way we work rather than replacing jobs. The International Labour Organization (ILO)’s report ‘Generative AI and Jobs (2025)’ viewed one in four workers worldwide as having their work affected by the introduction of generative AI. Occupations such as general office work and call center agents were classified as having a high likelihood of automation. While similar at first glance to Goldman Sachs’s figures, the ILO projected that “jobs will change rather than be replaced, because AI requires continuous human input.” This reflects that full automation is not as easy as early expectations for AI suggested.
Most changes that have transformed our lives have arrived gradually, and AI and human jobs are likely to be the same. For example, the case of call centers where AI has been introduced into counseling shows how work is changing. On January 14, the Federation of Korean Trade Unions released the results of a fact-finding survey on AI adoption conducted with call center agents and customers. As AI took on first-line consultations, the average number of consultations handled by call center agents fell by 13.9%. However, the average call time per consultation increased from 6.95 minutes to 7.55 minutes. With AI handling cases requiring simple guidance and human agents taking complex cases, job difficulty has risen. Some agents also said that post-processing time (organizing the consultation and entering information after a call) increased. In some cases, agents enter consultation data to train the AI.
In the customer survey, only 18% answered that they were ‘satisfied’ with AI consultations, while 54.2% said they were ‘not satisfied.’ Many responded that AI consultations ‘increased consultation time (43.8%)’ and ‘did not help resolve issues (40.8%).’ An overwhelming 87.5% said they ‘prefer human agents.’ In sum, with AI adoption, customer satisfaction fell and agents’ job difficulty rose. Add to this constant job insecurity. Since 2023, the banking sector has attempted to reduce agent staffing with AI adoption, and last year the Korea Student Aid Foundation’s call center effectively dismissed its agents.
What the call center case implies is this: a dramatic situation in which humans are replaced by AI has not yet unfolded. However, AI has threatened some jobs and increased the difficulty of the work of those who remain. These phenomena may be due to the transitional phase in which AI is still developing. Yet there is no guarantee that, once AI is fully evolved, workers’ job prospects will be better than in this transition.
A humanoid robot shakes hands with a human. Seo Sung-Il, senior staff reporter
A Mismatch of Speeds
In the early 1800s in Britain, workers threatened with losing their jobs due to the introduction of textile machinery destroyed machines. The famous ‘Luddite movement’ is now used mainly to refer to those who cannot adapt to a changing world and oppose the introduction of technology. Lee Sang-Heon, Director of the ILO Employment Policy Department, says in the book <Why Good Jobs Are Always Scarce> that we should revisit the full story of the Luddite incident. “What workers wanted to question was not the technological innovations or the new machines themselves, but the lack of social support to help people adapt to the changes they brought. They were fighting not machines, but poverty, and the indifference of social and political forces.”
What preparations is society making for the losers who suffer during the transition, and for the potential ‘replacement of labor’ that may come in later stages? Suppose things go as techno-optimists predict: productivity will rise, and jobs will shrink. With fewer jobs, purchasing power falls, and even if there are many goods, few can buy them. To prepare for this, tech executives have floated ideas such as taxing robots that take jobs (a robot tax) and using the proceeds to provide income to people (a basic income). Yet while robots have come right up to the factory gates, institutionalizing a basic income still seems far off. And the larger the gap in speed between the former and the latter, the greater someone’s pain will be.
The Presidential National AI Strategy Committee also has a social subcommittee to discuss such issues. But for now, research is being conducted only on a very small number of occupations affected by AI adoption. Measures such as retraining are reportedly being discussed to mitigate employment shocks. One member of the National AI Strategy Committee said, “If the entry of AI into industrial sites is unavoidable, then the social task is how robust a safety net we build. But the speed at which AI spreads and the speed at which countermeasures are discussed do not match. There is no guarantee that, while people are retraining to find new jobs, AI won’t advance and replace those jobs as well.”
This, too, is not unrelated to the situation in which AI optimism has become society’s dominant vision. The government, which should design social institutions to fit new technologies, is also leaning more toward technological advancement than toward countermeasures. Regarding the AI Basic Act, a minimal set of guidelines on AI, the government said it would “operate it as a promotional law focused on developing the AI industry so as not to hinder startup innovation.” In fact, the AI Basic Act defines ‘high-impact AI’, for which operators bear safety management responsibility, narrowly, leaving items such as ‘surveillance robots (AI)’ outside the scope of regulation.
Lee Kwang-Seok, a professor at Seoul National University of Science and Technology, said, “Technological optimism is extremely strong in Korean society. Since the industrialization era, there has been a sense of efficacy regarding state-led investment in technology. Even before physical AI, Korea was an overwhelmingly leading country in the phase of industrial automation. There are aspects worth listening to in opposing arguments, but they are dismissed as mere obstructionism.”
There is no single right answer for how to prepare for the future. But there is a sound principle for how to find our way. The ILO has said that, to achieve both improved working conditions and productivity gains from AI, “social dialogue among government, labor, and management and consultation in workplaces are essential.” In November last year, Korea’s social dialogue body, the Economic, Social and Labor Council (ESLC), published a green paper containing 12 questions related to labor market changes due to AI. Because social dialogue involving labor, management, and government has not taken place, it issued a ‘green paper’ that contains only questions, not a ‘white paper’ that would include answers.
Kim Byung-Kwon, head of the Green Transition Institute, said, “The Hyundai Motor Atlas case became a problem because AI was introduced in a way that runs counter to workers’ interests. If the ESLC or the National AI Strategy Committee had functioned properly to build a consensus (on how to use AI, etc.), there should have been a voice to mediate conflicts when such issues arise. If they do not function properly, when AI-related conflicts of interest occur, citizens, creators, and workers will inevitably be pushed unilaterally.”
Some also say that, because it is the Hyundai Motor union, at least the problem is being raised in a visible way. Unions with weak bargaining power, or workers without unions, find it even harder to speak up about AI adoption. In the Federation of Korean Trade Unions’ fact-finding survey of call center agents, only 1.5% responded that labor-management consultations had been held regarding the introduction of AI agents.
Song Gwan-Cheol, a research fellow at the Korea Labor and Society Institute who conducted the survey of call center agents, said, “Unions view this as a matter for labor-management consultation because it affects working conditions and is linked to job insecurity. Companies, by contrast, take the position that no consultation is needed to introduce a tool that makes work more convenient. But based on past experience, things that seemed trivial when they were introduced later end up replacing jobs, so clashes are inevitable.”
In the end, the social solution lies not in criticizing the Hyundai Motor union, but in how much the voices of those whose interests diverge from AI optimism are reflected in policymaking. Kim Ha-Ni, an operating committee member of the Digital Justice Network, said, “We need discussions that include people affected by AI. The issue is not merely how to foster technology and grow the industry, but how we will use this technology and how society will prepare for the side effects; that requires social consensus.” Director Kim Byung-Kwon said, “Today the problem has arisen in a Hyundai Motor factory, but tomorrow it may occur in a Yeouido office, and the day after tomorrow in cultural and artistic spaces. It is time for the government to take action on how to prevent and protect against damage in workplaces caused by AI. Dragging workers into the role of those who resist the currents of the times by bringing up the Luddite story is not desirable even for the subsequent AI era.”