
Si Xiao: Building an Ethical "Ark" to Make AI Comprehensible, Controllable, Available, and Reliable (Part 4)

Published: 2020-10-14 | Source: 8N.org.Cn

Moreover, an available AI is a fair AI. A machine guided entirely by rationality should be impartial, free of human weaknesses such as emotion or prejudice. This should not be taken for granted, however. Recent incidents, such as the vulgar language produced by a chatbot developed by Microsoft, have shown that AI can go seriously wrong when fed inaccurate, outdated, incomplete, or humanly flawed data. An ethics-by-design approach is preferable here: carefully identifying, addressing, and eliminating bias throughout the AI development process.

Regulatory bodies, such as government branches and internet industry organizations, are already formulating guidelines and principles for addressing bias and discrimination. Big tech companies like Google and Microsoft have set up their own internal ethics boards to guide their AI research.

Reliable.

Since AI has already entered millions of households, we need it to be safe, reliable, and capable of withstanding cyberattacks and other accidents.

Take autonomous vehicles as an example. Tencent is currently developing a Level 3 autonomous driving system and has obtained a license to test its self-driving cars on certain public roads in Shenzhen. Before receiving that license, however, its self-driving cars had been tested on closed courses for more than a thousand kilometers. Today, no truly self-driving car is in commercial use on our roads, because the standards and regulations governing its certification have yet to be established.

Besides, for AI to be reliable, it must ensure digital, physical, and political security, with particular attention to privacy protection. Because AI companies collect personal data to train their systems, they should comply with privacy requirements, protect privacy by design, and safeguard against data abuse.

Comprehensible.

Easy to say, hard to do. The popularity of AI methods such as deep learning has increasingly sunk the detailed underlying mechanisms into a black box. The hidden layers between the input and output of a deep neural network make it impenetrable even to its developers. As a result, when a car accident is caused by an algorithm's guidance, it may take years to trace what led to the accident.

Fortunately, the AI industry has already done some research on explainable AI models. Algorithmic transparency is one way to achieve comprehensible AI. While users may not care about the algorithm behind a product, regulators need a deep knowledge of the technical details in order to supervise. A good practice, then, is to provide users with easy-to-understand information and explanations of decisions assisted or made by AI systems.
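To make the idea of an explainable decision concrete, here is a minimal sketch of a model that is transparent by construction: a linear scoring model whose output can be decomposed into per-feature contributions that a user or regulator can inspect. The feature names, weights, and the loan-scoring scenario are purely hypothetical illustrations, not any company's actual system.

```python
# A hypothetical linear scoring model: because the score is a weighted sum,
# every decision can be fully decomposed into per-feature contributions.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
BIAS = 0.1

def score(features):
    """Return the model's raw decision score for one applicant."""
    return BIAS + sum(WEIGHTS[name] * value for name, value in features.items())

def explain(features):
    """List each feature's contribution to the score, largest first."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income": 1.2, "debt_ratio": 0.8, "years_employed": 0.5}
print("score:", round(score(applicant), 3))
for name, contrib in explain(applicant):
    print(f"  {name}: {contrib:+.3f}")
```

Deep neural networks do not offer this decomposition for free, which is why explainable-AI research often approximates their behavior with simpler, interpretable surrogates like the one above.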

To develop comprehensible AI, public engagement and the exercise of individual rights should be guaranteed and encouraged. AI development should not be a secret undertaking by commercial companies. The public, as end users, can provide valuable feedback that is critical to developing high-quality AI. And individuals should have the right to challenge bot decisions that may cause them harm or embarrassment.

This argument leads back to the requirement that AI developers release information to the public: tech companies should provide their customers with sufficient information about an AI system's purpose, function, limitations, and impact.

Controllable.

The last, but not least, principle is to make sure that we, human beings, are in charge. Always.

From the dawn of civilization until now, we have been in charge of all our inventions. AI must not be an exception; in fact, no technology should be. This is the precondition for stopping the humming machine whenever something goes wrong and damages our interests.

Only by strictly following the controllability principle can we avoid the sci-fi-style nightmare pictured by prominent figures like Stephen Hawking and Elon Musk. Every innovation comes with risks, but we should not let worries about human extinction at the hands of some artificial general intelligence or super bot prevent us from pursuing a better future with new technologies. What we should do is make sure that the benefits of AI substantially outweigh the potential risks. What we should also do is take the reins and set up appropriate precautionary measures against the foreseeable risks.

For now, people often trust a stranger more than they trust an AI, without good reason.

We frequently come across claims that self-driving cars are unsafe, filters are unfair, recommendation algorithms restrict our choices, and pricing bots charge us more. This deeply embedded suspicion is rooted in an information shortage, since most of us either do not care about, or lack the knowledge needed to understand, an AI system.

What to do?

I would like to propose a spectrum of rules, starting from an ethical framework, that may help AI developers and their products earn the trust they deserve.
