NYU Professor Emeritus Gary Marcus, a frequent critic of the hype that surrounds artificial intelligence, recently sat down with ZDNET to offer a rebuttal to remarks by Yann LeCun, Meta's chief AI scientist, made in a ZDNET interview with LeCun in September.
LeCun had cast doubt on Marcus' argument in favor of symbol manipulation as a path to more sophisticated AI. LeCun also remarked that Marcus had no peer-reviewed papers in AI journals.
Marcus has, in fact, published peer-reviewed papers, a list of which appears in context in the interview below. But Marcus' rebuttal deals more substantively with the rift between the two, who have sparred with one another on social media for years.
"There's a space of possible architectures for AI," says NYU Professor Emeritus Gary Marcus. "Most of what we've studied is in one little tiny corner of that space."
Marcus claims LeCun has not really engaged with his ideas, simply dismissing them. He argues, too, that LeCun has not given other scholars a fair hearing, such as Judea Pearl, whose views about AI and causality form a noteworthy body of work.
Marcus argues LeCun's behavior is part of a pattern of deep learning researchers dismissing peers from outside of deep learning who voice criticism or press for other avenues of inquiry.
"You have some people who have a ton of money, and a bunch of recognition, who are trying to crowd other people out," Marcus said of LeCun and other deep learning scholars. They are, he said, borrowing a term from computational linguist Emily Bender, "sucking the oxygen from the room" by not engaging with competing ideas.
The rift between the two is odd, in Marcus's view, given that LeCun, he contends, has finally come around to agreeing with many of the criticisms Marcus has made for years.
"It basically seemed like he was saying that all the things that I had said, which he had said were wrong, were the truth," said Marcus. Marcus has expressed his strong views on deep learning both in books, the most recent being 2019's Rebooting AI, with Ernie Davis, although there are elements in a much earlier work, The Algebraic Mind;and in numerous papers, including his most extensive critique, in 2018, "Deep Learning:A Critical Appraisal."
In fact, the points of common ground between the two scholars are such that, "In a different world, LeCun and I would be allies," Marcus said.
Also: Meta's AI guru LeCun: Most of today's AI approaches will never lead to true intelligence
"The No. 1 point on which LeCun and I are in alignment is that scaling alone is not enough," said Marcus, by which he means that making ever-larger versions of neural nets such as GPT-3 will not, in and of itself, lead to the kind of intelligence that matters.
There also remain fundamental disagreements between the two scholars. Marcus has, as far back as The Algebraic Mind, argued passionately for what he calls "innateness," something that is wired into the mind to give structure to intelligence.
"My view is if you look at biology that we are just a huge mix of innate structure," Marcus said. LeCun, he said, would like everything to be learned.
"I think the great irony is that LeCun's own greatest contribution to AI is the innate prior of convolution, which some people call translation invariance," said Marcus, alluding to convolutional neural networks.
The one thing that is bigger than either researcher, and bigger than the dispute between them, is that AI is at an impasse, with no clear direction to achieving the kind of intelligence the field has always dreamed of.
"There's a space of possible architectures for AI," said Marcus. "Most of what we've studied is in one little tiny corner of that space; that corner of the space is not quite working. The question is, How do we get out of that corner and start looking at other places?"
What follows is a transcript of the interview edited for length.
If you'd like to dip into Marcus's current writing on AI, check out his Substack.
ZDNET: This conversation is in response to the recent interview with Yann LeCun of Meta in which you were mentioned. And so, first of all, what is important to mention about that interview with LeCun?
Gary Marcus: LeCun's been critiquing me a lot lately, in the interview, in an article in Noema, and on Twitter and Facebook, but I still don't know how much LeCun has actually read of what I've said. And I think part of the tension here is that he has sometimes criticized my work without reading it, just on the basis of things like titles. I wrote this 2018 piece, "Deep Learning: A Critical Appraisal," and he smacked it down, publicly, the first chance he got on Twitter. He said it was "mostly wrong." And I tried to push him on what about it was wrong. He never said.
I believe that he thinks that that article says that we should throw away deep learning. And I've corrected him on that numerous times. He again made that error [in the interview]. If you actually read the paper, what it says is that I think deep learning is just one tool among many, and that we need other things as well.
So anyway, he attacked this paper previously, and he's a big senior guy. At that time [2018], he was running Facebook AI. Now he's the chief AI scientist at Facebook and a vice president there. He is a Turing Award winner. So, his words carry weight. And when he attacks somebody, people follow suit.
Of course, we don't all have to read each other's articles, but we shouldn't be saying they're mostly wrong unless we've read them. That's not really fair. And to me it felt like a little bit of an abuse of power. And then I was really astounded by the interview that you ran with him because it sounded like he was arguing for all the things I had put out there in that paper that he ridiculed: We're not going to get all the way there, at least with current deep learning techniques. There were many other, kind of, fine points of overlap such that it basically seemed like he was saying that all the things that I had said, which he had said were wrong, were the truth.
And that would be, sort of, irritating enough for me - no academic likes to not be cited - but then he took a pot shot at me and said that I'd never published anything in a peer-reviewed AI journal. Which isn't true. He must not have fact-checked that. I'm afraid you didn't either. You kindly corrected it.
ZDNET: I apologize for not fact-checking it.
[Marcus points out several peer-reviewed articles in AI journals: "Commonsense Reasoning about Containers using Radically Incomplete Information," in Artificial Intelligence; "Reasoning from Radically Incomplete Information: The Case of Containers," in Advances in Cognitive Systems; "The Scope and Limits of Simulation in Automated Reasoning," in Artificial Intelligence; "Commonsense Reasoning and Commonsense Knowledge," in Communications of the ACM; and "Rethinking Eliminative Connectionism," in Cognitive Psychology.]
GM: This stuff happens. I mean, part of it, it's like an authority says something and you just believe it. Right. I mean, he's Yann LeCun.
ZDNET: It should be fact-checked. I agree with you.
GM: Anyway. He said it. I corrected him. He never apologized publicly. So, anyway, what I saw there, the combination of basically saying the same things that I've been saying for some time, and attacking me, was part of a repositioning effort. And I really lay out the case for that in this Substack piece: "How New Are Yann LeCun's 'New' Ideas?"
And the case I made there is that he's, in fact, trying to rewrite history. I gave numerous examples; as they say nowadays, I brought receipts. People who are curious can go read it. I don't want to repeat all the arguments here, but I see this on multiple dimensions. Now, some people saw that and were like, "Will LeCun be punished for this?" And, of course, the answer is, no, he won't be. He's powerful. Powerful people are never punished for things, or rarely.
Also: Resisting the urge to be impressed; what we talk about when we talk about AI
But there's a deeper set of points. You know, aside from me personally being pissed and startled, I'm not alone. I gave one example [in the Substack article] of [J