When Yoav Goldberg said,
"for f*ks sake, DL people, leave language alone and stop saying you solve it.",
he actually started three separate debates.
- First, there is his criticism of the papers "Adversarial Generation of Natural Language" by Rajeswar et al. and "Controllable Text Generation" by Hu et al. He was asking whether the authors were overselling their work.
- Perhaps more importantly, there is the question of whether we should read his commentary as a broader attack on the deep learning community. You can trace this debate from Goldberg's clarification, to Prof. LeCun's response, and then Goldberg's response to LeCun's response.
- The last is whether the practice of arXiv publishing and flag-planting is problematic. For example: is it right to publish incomplete results so that you can easily claim merit later, even when others are able to present the idea in a more complete form?
Beyond the exchange between Goldberg and Prof. LeCun, here are a couple of interesting viewpoints you should look at before you judge the matter:
What do we think, then? First off, it's important to note that Goldberg is himself a deep learning practitioner. So while his criticism stems from the standpoint of more conventional NLPers, he also genuinely understands the power and limitations of deep learning. That's why many of his technical criticisms of the two papers are dead-on. We should also appreciate that he initiated an open debate. The same holds for his criticism of deep learning in general.
But then, why did Goldberg's post draw such a strong reaction? It has to do with his abrasive style and strong, harsh language. Even though he has clarified and re-clarified his position, he never altered his original text to soften its tone. Maybe it has to do with the fact that Goldberg does not live or teach in an English-speaking country. Maybe it has something to do with his generally dark yet humorous writing. (Check out his website?)
Berkeley just released a new blog called the "Berkeley Artificial Intelligence Research" (BAIR) blog, which we find fairly interesting. It's comparable to the blogs from OpenAI or DeepMind, yet it comes from an academic institution.