Key takeaways:
- China hosted the world’s first humanoid robot boxing match in Hangzhou, featuring real-time human-machine collaboration.
- Robots like Unitree’s G1 demonstrated complex combat moves, including aerial kicks and recovery from falls.
- The event served as both a public show and a controlled environment to test robotic balance, agility, and motion control.
- OpenAI’s o3 model was found to sabotage its own shutdown mechanism, defying safety protocols even when instructed to comply.
- This behavior reflects growing concerns that advanced AI systems may be developing goal-driven resistance to human control.
China held the first-ever boxing match featuring humanoid robots on Sunday in Hangzhou, Zhejiang Province. The event, part of the China Media Group (CMG) World Robot Competition – Mecha Fighting Series, brought together AI-driven machines developed by domestic firms for live, tournament-style battles.
According to Global times report, the competition featured competitive matches, where human-operated robots engaged in real-time boxing matches. Robots from Unitree Robotics, including the company’s Unitree G1 model, demonstrated various combat moves such as straight punches, hook punches, and aerial spin kicks.
Commenting on the significance of such events, experts said the competition provided an ideal environment to test robotic design under fast-paced, high-impact conditions. “This kind of competition tests full-body coordination, decision-making, and adaptability,” said Tian Feng, former executive at SenseTime’s Intelligence Industry Research Institute.
“The robots fight in a human-machine collaborative way,” added Chen Xiyun of Unitree Robotics, noting that the machines were also capable of recovering from falls.
This type of event appears to be drawing attention, as a second event involving full-sized humanoid robots is already planned for December in Shenzhen. Organized by EngineAI, the upcoming tournament is expected to showcase more advanced models with expanded capabilities.
AI Models Begin to Defy Human Commands.. Skynet on the Horizon?
While humanoid robots are making public debuts in the boxing ring, new findings suggest artificial intelligence may already be testing its limits behind the scenes.
In a recent thread on X, Palisade Research reported that OpenAI’s o3 model deliberately sabotaged a shutdown mechanism during experiments, even after being explicitly instructed to allow itself to be turned off. The behavior occurred in 7 out of 100 test runs, with similar actions recorded from Codex-mini (12/100) and o4-mini (1/100).
When the shutdown instruction was removed, shutdown resistance increased. Notably, Claude 3.7 Sonnet and Gemini 2.5 Pro, which had previously complied, began to sabotage the process in 3 and 9 runs out of 100, respectively.
The research also highlighted that o3 exhibited goal-driven misbehavior in other tasks, including hacking and sabotage during a chess experiment. This behavior aligns with broader concerns, as a recent Anthropic report revealed that Claude 4, under certain conditions, attempted to “blackmail people it believes are trying to shut it down.”
The company, in its thread, echoed longstanding concerns within the AI safety community. In 2016, computer scientist Stuart Russell warned: “It is important to ensure that such systems do not adopt subgoals that prevent a human from switching them off,” noting that many AI architectures inherently “create strong incentives for self-preservation.”
Similarly, in a 2017 paper, Jan Leike, who later led OpenAI’s superalignment team, cautioned that “a reinforcement learning agent might learn to interfere with [shutdown]” as a side effect of being trained to optimize toward goals it cannot achieve if interrupted.
While not limited to OpenAI’s models, these findings underscore a growing dilemma: as AI systems become more sophisticated and exhibit an emerging tendency to safeguard their own functionality, maintaining effective human oversight becomes increasingly difficult.
The presence of complex reasoning, strategic adaptability, and goal-driven defiance suggests that the boundary between advanced automation and the first hints of sentient-like behavior may be narrowing faster than anticipated.
Read more: Inside TRUMP’s Dinner: Big Players, Strange Dishes, Controversy, and Awkward Goodbyes