
Microsoft makes a big move: facial recognition without discrimination, as Face API more accurately recognizes human skin tones


Reprint Notice

This article is original content from 燈塔大數據. Individuals are welcome to repost it to their WeChat Moments; other organizations reposting it must note at the top of the article: "Reposted from: 燈塔大數據; WeChat: DTbigdata".

Introduction: In a recent blog post, Microsoft announced a major update to Face API that improves the facial recognition platform's ability to recognize gender across different skin tones, long a challenge for computer vision platforms. This article walks through the update in detail.

In a blog post, Microsoft announced a major update to Face API that improves the facial recognition platform's ability to recognize gender across different skin tones, which has long been a challenge for computer vision platforms.
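For context on what the platform being updated looks like from a developer's perspective, here is a minimal sketch of a call to the Face API v1.0 detect REST endpoint, requesting the age and gender attributes discussed in the article. The subscription key, region, and image URL are placeholder assumptions, and the snippet follows the publicly documented interface rather than anything specific to this update.

```python
# Minimal sketch of a Face API v1.0 detect call; the key, region, and image
# URL below are placeholders, not working credentials.
import requests

SUBSCRIPTION_KEY = "<your-face-api-key>"  # hypothetical placeholder
ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

def detect_faces(image_url: str):
    """Send an image URL to the detect endpoint and return per-face results."""
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/json",
    }
    params = {
        # Ask the service to return age and gender estimates for each face.
        "returnFaceAttributes": "age,gender",
        "returnFaceId": "false",
    }
    response = requests.post(
        ENDPOINT, headers=headers, params=params, json={"url": image_url}
    )
    response.raise_for_status()
    return response.json()  # a list of faces, each with a faceAttributes dict

if __name__ == "__main__":
    for face in detect_faces("https://example.com/photo.jpg"):
        attrs = face["faceAttributes"]
        print(attrs["gender"], attrs["age"])
```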

With these improvements, the Redmond company said, it reduced error rates for darker-skinned men and women by up to 20 times, and for women by 9 times.

For years, researchers have demonstrated that facial recognition systems are susceptible to racial bias. A 2011 study found that algorithms developed in China, Japan, and South Korea had more trouble recognizing Caucasian faces than East Asian ones, and a separate study showed that widely deployed facial recognition technology from security vendors performed 5 to 10 percent worse on African American faces.

To tackle the problem, Microsoft's researchers revised and expanded Face API's training and benchmark datasets, collecting new data across skin tones, genders, and ages. The team also worked with experts in artificial intelligence (AI) fairness to improve the precision of the algorithm's gender classifier.
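Measuring the improvement described above requires breaking classification error rates out by demographic group, since a model can look accurate in aggregate while failing badly on one group. The sketch below is a hypothetical illustration of that disaggregated measurement, not Microsoft's own benchmark code; the group labels and toy records are invented for the example.

```python
# Hypothetical illustration of per-group error-rate measurement for a gender
# classifier; not Microsoft's actual evaluation code.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (skin_tone_group, true_gender, predicted_gender)."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    # Report the error rate separately for each group rather than in aggregate.
    return {group: errors[group] / totals[group] for group in totals}

# Toy data: the disparity below is invented purely for illustration.
sample = [
    ("darker", "female", "male"),
    ("darker", "female", "female"),
    ("darker", "male", "male"),
    ("lighter", "female", "female"),
    ("lighter", "male", "male"),
]
print(error_rates_by_group(sample))
# {'darker': 0.333..., 'lighter': 0.0}
```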

"We had conversations about different ways to detect bias and operationalize fairness," Hanna Wallach, a senior researcher at Microsoft's New York research lab, said in a statement. "We talked about data collection efforts to diversify the training data. We talked about different strategies to internally test our systems before we deploy them."

The enhanced Face AI technology is only the start of a company-wide effort to minimize bias in AI. Microsoft is developing tools that help engineers identify blind spots in training data that might result in algorithms with high gender classification error rates. According to the blog post, the company is also establishing best practices for detecting and mitigating unfairness in the course of AI systems development.

More concretely, Microsoft's Bing team is collaborating with ethics experts to explore ways to surface search results that reflect "the active discussion in boardrooms, throughout academia and on social media about the dearth of female CEOs" without bias. Microsoft notes that less than 5 percent of Fortune 500 CEOs are women, and that web search results for "CEO" today largely turn up images of men.

"If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases," Wallach said. "This is an opportunity to really think about what values we are reflecting in our systems, and whether they are the values we want to be reflecting in our systems."

Microsoft isn't the only company attempting to minimize algorithmic bias. In May, Facebook announced Fairness Flow, which automatically warns if an algorithm is making an unfair judgment about a person based on their race, gender, or age. Recent work from IBM's Watson and Cloud Platforms group has also focused on mitigating bias in AI models, particularly as it relates to facial recognition.


Article editor: 小柳


