ChatGPT makes a lot of mistakes. But it’s fun to talk to, and it knows its limitations. ["Knows" as in "understands"? No.]
...One primary criticism of systems like ChatGPT, which are built using a computational technique called “deep learning,” is that they are little more than souped-up versions of autocorrect — that all they understand is the statistical connections between words, not the concepts underlying words. Gary Marcus, a professor emeritus in psychology at New York University and a skeptic of deep learning, told me that while an A.I. language model like ChatGPT makes for “nifty” demonstrations, it’s “still not reliable, still doesn’t understand the physical world, still doesn’t understand the psychological world and still hallucinates.”
... https://www.nytimes.com/2022/12/16/opinion/conversation-with-chatgpt.html?smid=em-share