Gary Marcus, a professor of psychology at New York University, told me AI is susceptible to being duped in this way because “the machine doesn’t understand the scene as a whole.” AI can recognize objects, but it fails to comprehend what an object is or what it’s used for. It is not “truly understanding the causal relationships between things, truly understanding who’s doing what to whom and why.”


After headlines about AI systems acing reading-comprehension tests, Marcus disparaged the results, saying that what the machines were doing had nothing to do with true comprehension. Marcus tweeted: “The SQuAD test shows that machines can highlight relevant passages in text, not that they understand those passages.”
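Marcus’s point can be made concrete with a toy sketch (this is not how SQuAD systems actually work, just an illustration of the distinction): a program can “highlight” the passage sentence most relevant to a question using nothing but shallow word overlap, with no grasp of what either one means.

```python
import re

def tokens(text: str) -> set:
    """Lowercase word set -- a deliberately shallow representation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def highlight(passage: str, question: str) -> str:
    """Return the passage sentence sharing the most words with the question."""
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(tokens(s) & tokens(question)))

passage = ("The Normans were descended from Norse raiders. "
           "They settled in Normandy in the 10th century. "
           "Their leader swore fealty to the king of France")

print(highlight(passage, "When did the Normans settle in Normandy?"))
# Picks the sentence mentioning "Normandy" purely by lexical overlap.
```

Selecting the right span this way demonstrates relevance matching, not comprehension, which is exactly the gap Marcus is pointing at.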


Instead of training an AI system on hundreds of thousands of examples, Marcus thinks the field should take its cues from cognitive psychology to develop software with a deeper understanding. Whereas deep learning can identify a dog and even classify its breed from an image it has never seen before, it does not know the person should be walking the dog instead of the dog walking the person. It does not comprehend what a dog really is and how it is supposed to interact with the world. “We need a different kind of AI architecture that’s about explanation, not just about pattern recognition,” Marcus says.
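The dog-walking example has a simple mechanical analogue (a toy sketch, not a claim about any particular deep-learning system): an order-free bag-of-words feature vector is identical for “the person walks the dog” and “the dog walks the person,” so any model built solely on such pattern-level features literally cannot represent who is doing what to whom.

```python
from collections import Counter

def bag_of_words(sentence: str) -> Counter:
    """Count word occurrences, discarding word order entirely."""
    return Counter(sentence.lower().split())

a = bag_of_words("the person walks the dog")
b = bag_of_words("the dog walks the person")

print(a == b)  # True: the two opposite scenes are indistinguishable
```

Richer architectures do encode word order, but the sketch shows why pattern statistics alone fall short of the causal, who-did-what-to-whom understanding Marcus is calling for.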


Until it can do that, our jobs are safe—at least for a while.


Reference: AI models beat humans at reading comprehension, but they’ve still got a ways to go

Reference: How to Hack an Intelligent Machine