Hu Jiaqi: The 11th Open Letter to Leaders of Mankind

To:

Leaders of all countries, the Secretary-General of the United Nations, the world's top scientists and scholars, renowned entrepreneurs, and prominent media figures.

Distinguished leaders,

Looking back on 2024, I am acutely aware that we stand once again at a historic crossroads. Technological advancements have brought immense promise, along with equally profound crises. As the founder of Save Human Action Organization (SHAO), I am writing to you once again with a deep sense of anxiety and responsibility. At present, we are not only grappling with the threats posed by technological progress but also confronting a crisis that imperils the very existence of humankind.

Since the Industrial Revolution, science and technology have been advancing at an unprecedented pace. Particularly in the past few decades, our lives have been transformed in astonishing ways, from the Information Revolution to leaps in biotechnology to the rapid rise of artificial intelligence (AI). Our wisdom and power have grown immensely, yet it is this very power that gives cause for concern, for it is accelerating our species toward demise.

Standing at the pinnacle of today's technological achievements and looking back, we find that countless scientists and sages have warned us: "Science and technology are a double-edged sword." They can bring tremendous benefits but also harbor the potential for colossal destruction. This warning has become ever more concrete and tangible today. Breakthroughs across cutting-edge fields are pushing us toward unknown boundaries, above all in AI, whose inherent risks can no longer be ignored as it develops at breakneck speed.

In 2024, the Nobel Prize in Physics was awarded to two scientists who made outstanding contributions in the field of AI, and half of the Nobel Prize in Chemistry also went to two scientists for their work in AI. The Nobel Prize represents the highest accolade in science, and AI is among the most revolutionary technologies of our time. Yet alongside its transformative power, AI brings unforeseen dangers: uncontrolled AI could pose a direct threat to our species' existence.

Here, I would like to highlight a specific example: Geoffrey Hinton, the scientist awarded the 2024 Nobel Prize in Physics for his remarkable contributions in the field of artificial intelligence, has warned about the dangers of uncontrolled AI. He explicitly stated that the development of AI poses unpredictable risks, which could accelerate the process of human extinction within the coming decades. Hinton's warnings are alarming: he estimated that the probability of AI leading to human extinction within the next 30 years could be as high as 10% to 20%. Despite these warnings from a distinguished scientist, society has not become truly vigilant. Instead, the prize has served as an impetus for the further advancement of AI.

This reflects an extreme irrationality in today’s society. The Nobel Prizes, symbolizing the highest honor in global science, should serve as a cautious guide for scientific progress. Yet, while AI laureates themselves issue warnings about “runaway” technology, society continues to pursue extreme tech breakthroughs, seemingly oblivious to the potential for disaster. This phenomenon epitomizes the greatest challenge humanity faces today: when greed and short-term interests override rationality and long-term survival considerations, we set ourselves on a path towards self-destruction.

I have been engaged in this research since 1979, 46 years to date. Through my book "Saving Humanity", published 18 years ago in 2007, I have long warned that unchecked technological advancement may lead to human extinction within centuries or even this century. In that book, I called for strict limitations on technology development, advocating a defined framework to prevent human extinction. Following its publication, I wrote to world leaders for the first time, urging them to prioritize this issue.

In 2018, I established Save Human Action Organization (SHAO) in the United States to promote the resolution of this global issue. Yet, despite numerous warnings from scientists and scholars, global discussions on technology regulation remain superficial. Unrestrained technological development has become a significant hazard, and the concerns scientists raise go largely unheeded.

As the founder of Save Human Action Organization (SHAO), I am writing to you once again today—leaders who can influence the future of humanity. I urge you to seriously consider this question: has the rapid development of science and technology already surpassed what our civilization can bear? Is it a bridge leading to a glorious future or a fuse leading to self-annihilation?

Herein, I reiterate my urgent appeal: in the face of this impending calamity, we must undertake decisive action.

It is imperative to underscore that the challenges posed by technology cannot be addressed by any single nation or region alone. The threats engendered by technological advancements are global in nature, necessitating worldwide coordination and cooperation. A prerequisite for such efforts is the acknowledgment that the development of science and technology must be subjected to stringent limitations and governed by a unified global regulatory mechanism.

In my book “Saving Humanity”, I have proposed a solution: humanity must move towards a great unification, establishing a globally unified government. This entity would serve as the foundation for exercising comprehensive control over technological development, strictly regulating both the pace and direction of technological progress. This proposal is not an abstract ideal but represents the sole viable pathway to avert human extinction.

Esteemed leaders, there is no more time for indecision. While current technological breakthroughs may promise boundless profits and short-term benefits, they are also inexorably accelerating our path toward extinction. The issue of human survival is no longer a question for some future era; it is a pressing reality. We must act now as a global community to prevent the catastrophe that could result from uncontrolled technological advancement.

To avoid extinction, we must first recognize the critical link between technological development and human survival, ensuring that technology does not overstep human ethical and existential boundaries. Second, a global regulatory framework must be established to guarantee the stringent review and constraint of every technology. Ultimately, we must encourage the unification of all nations and create a globally unified government, so that the governance of technology is not subject to interference from regional or political interests.

Humanity stands at this crossroads, where the path forward will be determined by our choices. There is only one chance to prevent extinction; without taking immediate and decisive measures, missing the chance could lead to irreversible loss. As global leaders, you are entrusted with the responsibility to change humanity’s fate. It is time to awaken and act, forging a path that does not lead to our own annihilation.

Let us embark on a collaborative endeavor to establish a more secure and accountable framework for the advancement of science and technology, safeguarding our shared future.

Founder of Save Human Action Organization (SHAO)

Hu Jiaqi

January 1, 2025

Media Contact
Company Name: SHAO
Contact Person: Mr. Hu
City: Beijing
Country: China
Website: http://savinghuman.hujiaqi.com/