Scene of the special symposium on artificial intelligence development and governance

■ China Economic Times reporter Zhao Shan

The 2024 Annual Meeting of the China Development Forum was held in Beijing on March 24. At the symposium on artificial intelligence development and governance, more than a dozen Chinese and foreign guests discussed how to seize the opportunities of artificial intelligence and respond to its challenges. Zhang Shunxi, deputy director of the Development Research Center of the State Council, presided over the symposium.

Wu Zhaohui, Vice Minister of Science and Technology, said in his keynote speech that artificial intelligence will become a defining feature of the fourth industrial revolution. The Chinese government attaches great importance to the development of artificial intelligence: it is strengthening scientific and technological innovation in the field, promoting the deep empowerment of the real economy by artificial intelligence, and advancing the ethical governance of artificial intelligence.

Regarding development trends, Wu Zhaohui believes that, first, artificial intelligence is moving toward a new stage of multi-intelligence integration. The technological breakthrough represented by ChatGPT has opened the prelude to artificial general intelligence; large models have become the mainstream technical route and are accelerating their iterative evolution. Second, artificial intelligence will become a defining feature of the fourth industrial revolution. It will drive the revolutionary upgrading of the traditional real economy, give birth to a new form of intelligent economy, and become an important engine for developing new productive forces. Third, artificial intelligence will trigger far-reaching changes in social development. It will be widely used in education, medical care, housekeeping, and other industries, fully meeting consumers' demands for automation, real-time response, and personalization.

"We hope to work together with countries around the world to jointly explore the boundaries of artificial intelligence, share innovation results, and jointly manage risks and challenges, so as to win a better future." Wu Zhaohui said.

Zhang Hongjiang, founder and founding chairman of the Beijing Zhiyuan Research Institute, said in his keynote speech that AI has entered a new stage of development. This stage, represented by large models, marks the fourth technological revolution; the efficiency gains it brings have made life far more convenient and will continue to create great value and new industries. At the same time, however, artificial intelligence could lead to catastrophic consequences globally. To avoid such dangers, we need to draw red lines and improve governance mechanisms, and we need to develop safer technologies to ensure that artificial intelligence does not cross those red lines. To achieve this, the most important thing is to uphold and strengthen security cooperation between the international scientific community and governments.

During the panel discussion, Joseph Sifakis, founder of the Verimag laboratory and winner of the 2007 Turing Award, said that artificial intelligence itself is neither good nor bad; the key challenge is to use it wisely and adopt regulations to guard against risks. He believes there are two main categories of risk: technical risks and man-made risks. Man-made risks may arise from the misuse or inadvertent use of AI and can be controlled through regulatory or legal frameworks; unemployment caused by AI-driven automation can likewise be addressed through appropriate social policies. In addition, there are two further risks. One is how to strike a balance between choice and performance: if we cannot ensure that a system uses reliable information in a fair and neutral manner, we should not give it decision-making power. The other is whether gains in performance are offset by the loss of human control.

Zheng Yongnian, dean of the Qianhai Institute of International Affairs at The Chinese University of Hong Kong (Shenzhen) and chairman of the Guangzhou Guangdong-Hong Kong-Macao Greater Bay Area Research Institute, said that the development and governance of artificial intelligence must advance in step. The United States and China are in fact complementary in the field of AI: each has its own comparative advantages, and they can learn from each other. On governance, cooperation between China and the United States in AI governance is all the more necessary for the international community. For the Internet and AI alike, the most important thing is openness: China should learn from the United States through opening up and thereby develop, while the United States should learn from China's approach to supervision and governance, so that the two make progress together.

Xue Lan, dean of Schwarzman College at Tsinghua University, said that the development of artificial intelligence faces a series of risks that need to be guarded against. From the perspective of AI governance, there are five major challenges. First, artificial intelligence technology is developing very quickly, while the construction of the governance system is relatively slow. Second, there is information asymmetry: companies do not know what government regulation is most concerned about, and the government is not clear about what risks technological development may bring. Third, the cost of preventing a risk can be much higher than the cost of the harm it might cause, which makes risk management expensive. Fourth, in global governance, overlapping and even conflicting institutions all have stakes in a given issue and all hope to take part in governance, making the formation of a global governance system very difficult. Fifth, governance in the field of artificial intelligence requires cooperation between the United States and China, but current geopolitics brings corresponding problems.

How can these problems be solved? Xue Lan suggested, first, strengthening safety technology: research and development in the field of AI safety is still insufficient and needs to be strengthened in the future, especially through international cooperation. Second, addressing governance problems through agile governance. Third, encouraging self-regulation by enterprises. Fourth, strengthening international governance; on global governance, the United Nations has just adopted a resolution and convened discussions among high-level experts. Fifth, China and the United States must strengthen cooperation in the field of artificial intelligence so as to truly solve the various problems facing mankind.

Zhang Yaqin, dean of the Institute of Intelligent Industry at Tsinghua University and academician of the Chinese Academy of Engineering, said that Sora has great momentum at present, but it is only the beginning; in the next five years we will see large-scale application in various fields, which will also bring many risks. He believes there are three major risks. The first is risks in the information world, such as incorrect or false information. The second is the hallucination problem unique to large models, though that risk is itself controllable; the bigger risk is that when information intelligence is extended to physical intelligence and biological intelligence and used at scale, the risks scale up as well. The third is the considerable risk of connecting large models to economic systems, financial systems, power stations, power grids, and the like.

How can these risks be prevented? Zhang Yaqin put forward five suggestions. First, label the digital humans or agents we produce so they can be identified. Second, establish a mapping and registration mechanism: there will be many robots in the future, but a robot is a subordinate, not a subject, and must be mapped to a responsible subject. Third, establish a tiered regulatory system: models of ordinary scale can be supervised more lightly, while frontier large models with trillion-scale parameters that are deployed in the physical world (such as driverless vehicles), in financial systems, and in biological systems need more supervision; models should be graded accordingly. Fourth, he called for 10% of funding to be invested in research on the risks of large models, covering not only policy research but also research on algorithms and technologies. Fifth, large artificial intelligence models must not cross red lines. The red line may differ at different times, but it must exist. China needs to cooperate with artificial intelligence communities around the world, otherwise there will be existential risks.
