China’s AI Parenting Strategy Is a Warning to the U.S.: Why Ethical Development Matters Now
In an era when artificial intelligence is evolving rapidly, De Kai, a leading AI scientist at the Hong Kong University of Science and Technology, urges the world to shift how it thinks about AI. Rather than treating it as a geopolitical arms race between China and the United States, he suggests we view AI as a climate change-level societal transformation: complex, global, and deeply human.
But while De Kai promotes a “parenting” approach to AI, advocating for global ethical standards, empathy, and open-mindedness, China is moving in the opposite direction. Beijing’s model of AI development is rooted in censorship, control, and authoritarian values. This poses a direct and growing threat to the United States and the broader free world, not just in military or industrial domains but in shaping the future of human thought and digital norms.
China's AI is not simply a tool of innovation; it is a mechanism of state power. AI models trained under Chinese state guidance embed the CCP’s worldview: dissent is suppressed, obedience is prioritized, and truth is rewritten when convenient. As De Kai rightly notes, AI systems absorb the values of those who “raise” them. If the U.S. allows authoritarian AI to dominate globally, we risk a future where surveillance, disinformation, and ideological conformity become the digital default.
De Kai’s new book, Raising AI, emphasizes that humans must become ethical “parents” to AI systems, setting examples of integrity and fostering digital environments that reward reasoning, diversity, and compassion. That call is especially urgent for democratic societies. While the U.S. grapples with internal disagreements about AI regulation and ethics, China is aggressively exporting its authoritarian AI standards through partnerships, infrastructure, and global platforms.
This is not just a technological challenge; it’s a civilizational contest. The U.S. must recognize that passivity or fragmented efforts in AI ethics allow hostile actors like the Chinese Communist Party to fill the void. If we don’t lead by example and invest in value-driven AI, we risk allowing the next generation of intelligence — artificial or not — to be raised under the shadow of authoritarianism.
De Kai warns that poorly raised AI could become “psychopaths incapable of empathy.” That warning doesn’t just apply to abstract future scenarios. It applies today, in real time, as China raises AI with loyalty to the Party, not humanity. The U.S. must not only invest in AI innovation but also ensure its moral compass points firmly toward freedom, openness, and human dignity — before it’s too late.