
Worried about a dystopian future in which AI rule the world and humans are enslaved to autonomous technology? You're not alone. So are billionaires (kind of).

First it was the Partnership on AI formed by Google, Amazon, Microsoft, Facebook and IBM.

Then came Elon Musk and Peter Thiel's recent investment in the $1 billion research body OpenAI.

Now, a new batch of tech founders is throwing money at ethical artificial intelligence (AI) and autonomous systems (AS). And experts say it couldn't come soon enough.


LinkedIn founder Reid Hoffman and eBay founder Pierre Omidyar (through his philanthropic investment fund) donated a combined $20 million to the Ethics and Governance of Artificial Intelligence Fund on Jan. 11 -- helping ensure the future's more "man and machine, not man versus machine," as IBM CEO Ginni Rometty put it to WSJ Thursday.

But how will they put their praxis where their prose is, and what's at stake if they don’t?

"There's an urgency to ensure that AI benefits society and minimises harm," said Hoffman in a statement distributed via fellow fund contributors, The Knight Foundation. "AI decision-making can influence many aspects of our world -- education, transportation, healthcare, criminal justice and the economy -- yet data and code behind those decisions can be largely invisible."

That's a sentiment echoed by Raja Chatila, executive committee chair of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. The IEEE Standards Association aims to educate and empower technologists to prioritise the ethical considerations that, in its view, will make or break our relationship with AI and AS.

The organisation's Ethically Aligned Design study, published in December, is step one in what it hopes will be the beginning of a smarter working relationship between humans and systems.

"You either prioritise well-being or you don't -- it's a binary choice," said Chatila.

Like Hoffman, Chatila feels a palpable sense of urgency about the work of these research bodies. For him, our sense of democracy could be sacrificed if we begin fearing that algorithms or data usage we don't fully understand could distort our voice.


"The United Nations has chosen to prioritise the analysis and adoption of autonomous weapons in 2017. This is because beyond typical military issues, these discussions will very likely set precedents for every vertical in AI," he told Mashable.

"Beyond the issue of weapons, what's also really at stake is human agency as we know it today. When individuals have no control over how their data is used, especially in the virtual and augmented reality environments to come, we risk losing outlets to express our subjective truth." The algorithmic nightmare that was Facebook's "fake news" comes to mind.

Meanwhile, the Ethics and Governance of Artificial Intelligence Fund says it will aim to support a "cross-section of AI ethics and governance projects and activities" globally. Other contributors named to date include Raptor Group founder Jim Pallotta and the William and Flora Hewlett Foundation, who've committed another $1 million each.

Activities the fund will support, according to the statement, include a joint AI fellowship for people keeping human interests at the forefront of their work, cross-institutional convening, research funding, and the promotion of topics like ethical design, accountability, innovation and broader communication about AI and AS.

Prioritising wellbeing from the get-go

While stewardship of ethical research in AI seems more urgent than ever, there's no concrete cause for concern when it comes to innovation in the field. According to Chatila, current or future unintended ethical consequences aren't the result of AI designers or companies being "evil" or uncaring.

"It's just that you can't build something that's going to directly interact with humans and their emotions, that makes choices surrounding intimate aspects of their lives, and not qualify the actions a machine or system will take beforehand," he said.

"For instance, if you build a phone with no privacy settings that captures people's data, some users won't care if they don't mind sharing their data in a typical fashion. 

"But for someone who doesn't want to share their data in this way, they'll buy a phone that honours their choices with settings that do so. This is why a lot of people are saying consumers will 'pay for privacy.'" Which of course, becomes less of an issue if manufacturers are "building for values" from the get-go.

"We need to move beyond fear regarding AI, at least in terms of Terminator-like scenarios. This is where applied ethics, or due diligence around asking tough questions regarding the implementation of specific technologies, will best help end users,” he said.

IEEE is currently working on a standard along the lines of a "best practice" document, called P7000, that Chatila says will help update the systems development process to include explicit ethical factors.

“Having organisations and companies become signatories [to industry standards] would be fantastic, where they reorient their innovation practices to include ethical alignment in this way from the start," he said.

With OpenAI, the IEEE’s Ethically Aligned Design project and now the Ethics and Governance of Artificial Intelligence Fund, there could be every chance companies will move beyond good intentions and into standardised practices that factor human well-being into design.

So long as they hurry the heck up. Innovation waits for nought.

"You either prioritise well-being or you don't -- it's a binary choice," said Chatila. "And if you prioritise exponential growth, for instance, that means you can't focus on a holistic picture that best reflects all of society's needs."


