Disobedient robots have already arrived


If Hollywood ever had a lesson for scientists, it is what happens when machines start to rebel against their human creators.

Yet despite this, roboticists have started to teach their own creations to say no to human orders.

They have programmed a pair of diminutive humanoid robots, called Shafer and Dempster, to disobey instructions from humans if obeying would put their own safety at risk.

The results are more like the apologetic robot rebel Sonny from the film I, Robot, starring Will Smith, than the homicidal machines of Terminator, but they demonstrate an important principle.

Engineers Gordon Briggs and Dr Matthias Scheutz from Tufts University in Massachusetts are trying to create robots that can interact in a more human way.

In a paper presented to the Association for the Advancement of Artificial Intelligence, the pair said: 'Humans reject directives for a wide range of reasons: from inability all the way to moral qualms.

'Given the reality of the limitations of autonomous systems, most directive rejection mechanisms have only needed to make use of the former class of excuse - lack of knowledge or lack of ability.

'However, as the abilities of autonomous agents continue to be developed, there is a growing community interested in machine ethics, or the field of enabling autonomous agents to reason ethically about their own actions.'

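The sketch below is a hypothetical illustration of the distinction drawn in that quote, not code from the paper; the function respond and the two action sets are invented names. It separates a rejection issued because the robot cannot comply (lack of knowledge or ability) from one issued because the robot judges that it should not (a safety concern).

```python
# Hypothetical illustration, not code from the paper: two classes of
# directive rejection - refusing because the robot *cannot* comply
# (lack of knowledge or ability) versus refusing because it judges
# that it *should not* (a safety or ethical concern).

def respond(directive: str, known_actions: set, unsafe_actions: set) -> str:
    if directive not in known_actions:
        return f"I do not know how to {directive}."      # inability excuse
    if directive in unsafe_actions:
        return f"I can {directive}, but it is unsafe."   # safety-based refusal
    return f"Executing: {directive}."

known = {"stand up", "sit down", "walk forward"}
unsafe = {"walk forward"}                 # e.g. no support ahead on the table
print(respond("dance", known, unsafe))          # rejected: lack of ability
print(respond("walk forward", known, unsafe))   # rejected: unsafe
print(respond("sit down", known, unsafe))       # accepted
```
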
The robots they have created follow verbal instructions such as 'stand up' and 'sit down' from a human operator.

However, when they are asked to walk into an obstacle or off the end of a table, for example, the robots politely decline to do so.

When asked to walk forward on a table, the robots refuse to budge, telling their creator: 'Sorry, I cannot do this as there is no support ahead.'

Upon a second command to walk forward, the robot replies: 'But, it is unsafe.'

Perhaps rather touchingly, when the human then tells the robot that they will catch it if it reaches the end of the table, the robot trustingly agrees and walks forward.

Similarly, when it is told that an obstacle in front of it is not solid, the robot obligingly walks through it.

To achieve this the researchers introduced reasoning mechanisms into the robots' software, allowing them to assess their environment and examine whether a command might compromise their safety.

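The researchers' software is not reproduced in the article, but the behaviour described above can be sketched in a few lines. The hypothetical Python below (the WorldModel, check_safety and tell names are invented for this example, not the Tufts implementation) refuses a "walk forward" command when its model says there is no support ahead, explains why, and complies once the operator's assurance updates its beliefs.

```python
# Minimal sketch, under assumed names, of a safety-aware command handler:
# the robot checks its world model before acting, refuses with an explanation
# when a command looks unsafe, and accepts operator assurances that change
# its beliefs (e.g. "I will catch you").
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class WorldModel:
    support_ahead: bool = False     # is there a surface to stand on in front?
    obstacle_ahead: bool = False    # is something solid blocking the path?
    human_will_catch: bool = False  # has the operator promised to catch the robot?

@dataclass
class Robot:
    world: WorldModel = field(default_factory=WorldModel)

    def check_safety(self, command: str) -> Optional[str]:
        """Return a reason for refusal, or None if the command looks safe."""
        if command == "walk forward":
            if not self.world.support_ahead and not self.world.human_will_catch:
                return "Sorry, I cannot do this as there is no support ahead."
            if self.world.obstacle_ahead:
                return "But, there is an obstacle ahead."
        return None

    def handle(self, command: str) -> str:
        reason = self.check_safety(command)
        if reason is not None:
            return reason                      # refuse, but say why
        return f"OK: {command}"

    def tell(self, fact: str) -> None:
        """Operator assertions update the robot's beliefs and can lift a refusal."""
        if fact == "I will catch you":
            self.world.human_will_catch = True
        elif fact == "the obstacle is not solid":
            self.world.obstacle_ahead = False

robot = Robot()                         # starts at a table edge: no support ahead
print(robot.handle("walk forward"))     # refuses and explains
robot.tell("I will catch you")
print(robot.handle("walk forward"))     # now trusts the operator and complies
```
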
However, their work appears to breach the laws of robotics drawn up by science fiction author Isaac Asimov, which state that a robot must obey the orders given to it by human beings.

Many artificial intelligence experts believe it is important to ensure robots adhere to these rules - which also require robots never to harm a human being and to protect their own existence only where doing so does not conflict with the other two laws.

The work may trigger fears that, if artificial intelligence is given the capacity to disobey humans, it could have disastrous results.

Many leading figures, including Professor Stephen Hawking and Elon Musk, have warned that artificial intelligence could spiral out of our control.

Others have warned that robots could ultimately end up replacing many workers in their jobs, while some fear it could lead to machines taking over.

In the film I, Robot, artificial intelligence allows a robot called Sonny to overcome his programming and disobey the instructions of humans.

However, Dr Scheutz and Mr Briggs added: 'There still exists much more work to be done in order to make these reasoning and dialogue mechanisms much more powerful and generalised.'
