
Si Xiao: Building an Ethical "Ark" to Make Artificial Intelligence Knowable, Controllable, Available, and Reliable (3)

Time: 2020-10-14 02:29 Source: 8N.org.Cn

I would like to share some of my thoughts on the recent development of Artificial Intelligence from an ethics perspective, under the title "Towards an Ethical Framework for Artificial Intelligence".

The recent AI boom was largely built upon the tremendous amount of data we have piled up through the internet. The internet now connects more than half of the world's population, and 800 million netizens live in China's cyberspace alone.

Along with the convenience and efficiency brought by the internet come risks. This is especially true in an age when a good deal of our daily life is driven by big data and artificial intelligence. Algorithms are widely used to determine what we read, where we go and how we get there, what music we listen to, and what we buy at what price. Self-driving cars, automated cancer diagnosis, and machine writing have never been so close to large-scale commercial application.

It is therefore, to a certain extent, fitting to call data the new oil and AI the new drill. Following this analogy, malfunctioning algorithms are the new carbon dioxide emissions.

Note that malfunction does not mean malevolence. Good intent does not guarantee freedom from legal, ethical, and social troubles. In AI, we have observed a fairly large number of such troubles: unintended behaviors, lack of foresight, difficulty of monitoring and supervision, distributed liability, privacy violations, algorithmic bias, and abuse. Moreover, some researchers have started to worry about a potential rise in unemployment caused by smart machines that are bound to replace human labor.

Troubles are looming.

A series of AI misbehaviors has occupied media headlines lately. A facial recognition app tagged African Americans as gorillas; another matched a US Congressman with a criminal mugshot. A risk-assessment tool used by US courts was alleged to be biased against African Americans. More seriously, Uber's self-driving car killed a pedestrian in Arizona. Facebook and other big companies were sued over discriminatory advertising practices. And we have just learned from Angel that some AI-empowered machines are designed to kill.

We are marching into uncharted territory. We need rules and principles as a compass to orient this great voyage. The tradition of technology ethics surely sits at the core of this set of rules and principles.

The study of technology ethics has gone through three phases over the past several decades. The first phase focused on computers: ethical codes and laws concerning computer use, fraud, crime, and abuse were enacted during this period, and most of them still apply today.

The second phase came with the internet, when ethical norms and laws concerning the creation, protection, and dissemination of information, and the prevention of its abuse, were established.

Now, this research field has quietly entered a third phase, which I call "data and algorithm ethics". New ethical frameworks and laws concerning the development and application of AI will be gravely needed in the coming years.

Here we should acknowledge some early-stage efforts by government bodies and industry collaborations to build such a framework. Notable examples include the Asilomar AI Principles and IEEE's ethics standards and certification program.

In September of this year, speaking at the Shanghai World AI Conference, Mr. Pony Ma, Chairman and CEO of Tencent, challenged the high-tech industry to build available, reliable, comprehensible, and controllable AI systems.

Available, Reliable, Comprehensible, and Controllable: ARCC, pronounced "ark".

Pony's call seems to have laid a foundation for the further development of an AI ethical framework. Just as Noah's Ark preserved human civilization thousands of years ago, the ARCC for AI may secure a friendly and healthy relationship between humanity and machinery for the thousands of years to come.

It is therefore worth studying these four terms carefully.

Available. AI should be available to the many, not the few. We are so used to the benefits brought by smartphones, the internet, apps, and the like that we often forget that half the world is still cut off from this digital revolution.

The advance of AI should fix this problem, not exacerbate it. We should bring residents of underdeveloped areas, the elderly, and the disabled back into the digital world, rather than accept the digital divide as an admitted failure. The AI we need is inclusive and broadly shared. The wellbeing of humanity as a whole should be the sole purpose of AI development. Only then can we be sure that AI will not advance the interests of some humans over those of the rest.

Take the recent development of medical bots as an example, such as "Looking for Shadow" (miying), developed by Tencent's AI team, which currently works with radiologists in hundreds of local hospitals. This cancer pre-screening system has by now processed billions of medical images and detected hundreds of thousands of high-risk cases, which it then passes on to experts. In doing so, it frees doctors from the daily labor of reviewing images and gives them more time to attend to patients. Let the machine do what the machine is good at, and humans do what humans are good at. In this case, doctors are at peace with the seemingly job-threatening machines.
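The division of labor described above — the machine pre-screens every image and only high-risk cases reach a radiologist — can be sketched as a simple triage step. This is a hypothetical illustration of the workflow, not Miying's actual API; all names and the threshold are assumptions.

```python
# Illustrative sketch of a pre-screening triage step: a model assigns each
# image a risk score, and only cases above a threshold are routed to an
# expert for review. Names and threshold values are hypothetical.

from dataclasses import dataclass
from typing import List


@dataclass
class ScanResult:
    image_id: str
    risk_score: float  # model's estimated probability that the scan is high-risk


def triage(scans: List[ScanResult], threshold: float = 0.5) -> List[str]:
    """Return the IDs of scans flagged for expert (radiologist) review."""
    return [s.image_id for s in scans if s.risk_score >= threshold]


scans = [
    ScanResult("img-001", 0.03),  # low risk: no manual review needed
    ScanResult("img-002", 0.91),  # high risk: routed to a radiologist
    ScanResult("img-003", 0.47),  # below threshold: not flagged
]
flagged = triage(scans, threshold=0.5)
print(flagged)  # prints ['img-002']
```

The point of the sketch is the routing itself: the machine handles the bulk screening, while the expert's time is reserved for the small flagged subset.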
