
Goodbye, "Black Box": Researchers Teach AI to Explain Itself!

The black box problem, meaning that we cannot see exactly how an AI system works internally, is a major obstacle to trusting artificial intelligence. Recently, a team of researchers from UC Berkeley, the University of Amsterdam, MPI for Informatics, and Facebook AI Research has been making the "black box" transparent. The team taught an AI to explain its reasoning and to point to evidence when making a decision. The researchers developed an AI model that answers plain-language queries about images: it can answer questions about the objects and actions in a given scene, and it explains its answers by describing what it saw and highlighting the relevant parts of the image.


The original article follows:

Bye bye black box: Researchers teach AI to explain itself

A team of international researchers recently taught AI to justify its reasoning and point to evidence when it makes a decision. The "black box" is becoming transparent, and that's a big deal.

Figuring out why a neural network makes the decisions it does is one of the biggest concerns in the field of artificial intelligence. The black box problem, as it's called, essentially keeps us from trusting AI systems.

The team comprised researchers from UC Berkeley, the University of Amsterdam, MPI for Informatics, and Facebook AI Research. The new research builds on the group's previous work, but this time around they've taught the AI some new tricks.

Like humans, it can "point" at the evidence it used to answer a question and, through text, it can describe how it interpreted that evidence. It's been developed to answer questions that require the average intellect of a nine-year-old child.

According to the team's recently published white paper, this is the first time anyone has created a system that can explain itself in two different ways:

Our model is the first to be capable of providing natural language justifications of decisions as well as pointing to the evidence in an image.

The researchers developed the AI to answer plain language queries about images. It can answer questions about objects and actions in a given scene. And it explains its answers by describing what it saw and highlighting the relevant parts of the image.
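To make that idea concrete, here is a minimal sketch, in PyTorch, of how a question-answering model can "point" at its evidence: an attention layer scores each image region against the question, and the resulting weights double as a heatmap over the image. This is an illustration of the general attention technique, not the team's actual architecture; the class and parameter names (ToyPointingVQA, feat_dim, and so on) are invented for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyPointingVQA(nn.Module):
    """Hypothetical toy model: answers a question about an image and
    'points' at the regions that drove the answer via attention weights."""

    def __init__(self, vocab_size=1000, num_answers=100,
                 feat_dim=512, hidden_dim=256):
        super().__init__()
        self.q_embed = nn.Embedding(vocab_size, hidden_dim)   # question words -> vectors
        self.img_proj = nn.Linear(feat_dim, hidden_dim)       # image regions -> same space
        self.att_score = nn.Linear(hidden_dim, 1)             # relevance of each region
        self.classifier = nn.Linear(hidden_dim * 2, num_answers)

    def forward(self, img_feats, question_ids):
        # img_feats: (batch, regions, feat_dim), e.g. a flattened CNN feature grid
        # question_ids: (batch, tokens) integer word ids
        q = self.q_embed(question_ids).mean(dim=1)                # (B, H) bag-of-words question
        v = self.img_proj(img_feats)                              # (B, R, H)
        scores = self.att_score(torch.tanh(v + q.unsqueeze(1)))   # (B, R, 1) region-question match
        attn = F.softmax(scores, dim=1)                           # weights sum to 1 over regions
        ctx = (attn * v).sum(dim=1)                               # (B, H) attended image summary
        logits = self.classifier(torch.cat([ctx, q], dim=-1))     # (B, num_answers)
        # attn is the "pointing" evidence: reshaped to the grid, it is a heatmap
        return logits, attn.squeeze(-1)

# Usage with dummy inputs: a 7x7 feature grid flattened to 49 regions.
model = ToyPointingVQA()
img = torch.randn(1, 49, 512)
question = torch.randint(0, 1000, (1, 8))
logits, heatmap = model(img, question)
print(logits.shape, heatmap.shape)  # torch.Size([1, 100]) torch.Size([1, 49])
```

The sketch covers only the pointing half; a full system of the kind described would also add a text decoder conditioned on the same attended features to generate the natural-language justification.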

It doesn't always get things right. During experiments, the AI got confused about whether a person was smiling or not, and couldn't tell the difference between a person painting a room and someone using a vacuum cleaner.

But that's sort of the point: when a computer gets things wrong, we need to know why.

For the field of AI to reach any measurable sense of maturity, we'll need methods to debug, error-check, and understand the decision-making process of machines. This is especially true as neural networks advance and become our primary source of data analysis.

Creating a way for AI to show its work and explain itself in layman's terms is a giant leap towards avoiding the robot apocalypse everyone seems to be so worried about.


Source: https://thenextweb.com/artificial-intelligence/2018/02/27/bye-bye-black-box-researchers-teach-ai-to-explain-itself/
