Culture
Johnson
The ghost in the AI machine
Talking about artificial intelligence in human terms is natural—but wrong.
My love’s like a red, red rose.
It is the east, and Juliet is the sun.
Life is a highway, I wanna ride it all night long.
Metaphor is a powerful and wonderful tool.
Explaining one thing in terms of another can be both illuminating and pleasurable, if the metaphor is apt.
But that “if” is important.
Metaphors can be particularly helpful in explaining unfamiliar concepts: imagining the Einsteinian model of gravity (heavy objects distort space-time) as something like a bowling ball on a trampoline, for example.
But metaphors can also be misleading: picturing the atom as a solar system helps young students of chemistry, but the more advanced learn that electrons move in clouds of probability, not in neat orbits as planets do.
What may be an even more misleading metaphor—for artificial intelligence (AI)—seems to be taking hold.
AI systems can now perform staggeringly impressive tasks, and their ability to reproduce what seems like the most human function of all, namely language, has ever more observers writing about them.
When they do, they are tempted by an obvious (but obviously wrong) metaphor, which portrays AI programs as conscious and even intentional agents.
After all, the only other creatures which can use language are other conscious agents—that is, humans.
Take the well-known problem of factual mistakes in potted biographies, the likes of which ChatGPT and other large language models (LLMs) churn out in seconds.
Incorrect birthplaces, non-existent career moves, books never written: one journalist at The Economist was alarmed to learn that he had recently died.
In the jargon of AI engineers, these are “hallucinations”.
In the parlance of critics, they are “lies”.
“Hallucinations” might be thought of as a forgiving euphemism.
Your friendly local AI is just having a bit of a bad trip; leave him to sleep it off and he’ll be back to himself in no time.
For the “lies” crowd, though, the humanising metaphor is even more profound: the AI is not only thinking, but has desires and intentions.
A lie, remember, is not any old false statement.
It is one made with the goal of deceiving others.
ChatGPT has no such goals at all.
Humans’ tendency to anthropomorphise things they don’t understand is ancient, and may confer an evolutionary advantage.
If, on spying a rustling in the bushes, you infer an agent (whether predator or spirit), no harm is done if you are wrong.
If you assume there is nothing in the undergrowth and a leopard jumps out, you are in trouble.
The all-too-human desire to smack or yell at a malfunctioning device comes from this ingrained instinct to see intentionality everywhere.
It is an instinct, however, that should be overridden when writing about AI.
These systems, including those that seem to converse, merely take input and produce output.
At their most basic level, they do nothing more than turn strings like 0010010101001010 into 1011100100100001 based on a set of instructions.
Other parts of the software turn those 0s and 1s into words, giving a frightening—but false—sense that there is a ghost in the machine.
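To make that concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is invented for illustration: a four-word toy vocabulary and a bit-flipping rule stand in for a real model's billions of arithmetic operations, but the shape of the process is the same, with bits in, bits out, and a separate, equally mechanical step that dresses the result up as words.

```python
# A toy sketch, not a real model: the vocabulary and the "rule" here
# are invented. At bottom the process is bits in, bits out, plus a
# mechanical decoding step that maps the bits back to words.

TOY_VOCAB = {0b00: "the", 0b01: "cat", 0b10: "sat", 0b11: "."}

def pseudo_model(input_bits: str) -> str:
    """Turn one bit string into another by a fixed rule.

    A stand-in for the billions of arithmetic operations a real
    model performs; here the "instructions" just flip each bit.
    """
    return "".join("1" if b == "0" else "0" for b in input_bits)

def decode(bits: str) -> str:
    """The "other parts of the software": map bit pairs back to words."""
    token_ids = [int(bits[i:i + 2], 2) for i in range(0, len(bits), 2)]
    return " ".join(TOY_VOCAB[t] for t in token_ids)

output_bits = pseudo_model("0010")  # "0010" becomes "1101"
print(decode(output_bits))          # -> ". cat" (mechanical, not meant)
```

Nothing in that pipeline wants anything; the "words" appear only at the final decoding step.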
Whether they can be said to “think” is a matter of philosophy and cognitive science, since plenty of serious people see the brain as a kind of computer.
But it is safer to call what LLMs do “pseudo-cognition”.
Even if it is hard on the face of it to distinguish the output from human activity, they are fundamentally different under the surface.
Most importantly, cognition is not intention.
Computers do not have desires.
It can be tough to write about machines without metaphors.
People say a watch “tells” the time, or that a credit-card reader which is working slowly is “thinking” while they wait awkwardly at the checkout.
Even when machines are said to “generate” output, that cold-seeming word comes from an ancient root meaning to give birth.
But AI is too important for loose language.
If entirely avoiding human-like metaphors is all but impossible, writers should offset them, early, with some suitably bloodless phrasing.
“An LLM is designed to produce text that reflects patterns found in its vast training data,” or some such explanation, will help readers take any later imagery with due scepticism.
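As a rough illustration of what that bloodless sentence describes, the sketch below uses a toy bigram model as a stand-in for an LLM (the one-line corpus and all names are invented): it can emit only word-to-word patterns it has observed in its "training data", which is patterns, not intentions.

```python
import random
from collections import defaultdict

# A toy stand-in for an LLM, with an invented one-line "training corpus":
# it can only echo word-to-word patterns observed in that data.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record which word follows which in the training data.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Emit text by repeatedly sampling an observed continuation."""
    word, output = start, [start]
    for _ in range(length - 1):
        options = follows.get(word)
        if not options:  # no observed continuation: stop
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug" (patterns, not intent)
```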
Humans have evolved to spot ghosts in machines.
Writers should avoid ushering them into that trap.
Better to lead them out of it.