This post is a condensed version of a talk I gave at my research group.
I've been lucky enough to attend both EMNLP 2018 and INLG 2018, and I thought it would be useful to share my main impressions of the cool things that happened during the conferences. I've split them into four sections: General tendencies about the direction that modern NLG research is taking, Practical considerations to be aware of for your own daily research, Exciting research areas where interesting results are being produced, and Fun ideas that I stumbled upon while walking around.
For me, one of the most important aspects of going to conferences is getting a sense of what has changed and what is coming. In that spirit, this conference season taught me that...
It is now okay to make a presentation full of memes: Are you leading a half-day tutorial for over 1K researchers? Put some minions on your slides, it's cool.
The text produced by neural networks is fine: For the last few years, computational linguists have been worrying that, while neural networks produce text that looks okay-ish, it is still far from natural. This year we have finally decided to stop worrying about it. Or, as Yoav Goldberg's slides put it, "Yay it looks readable!".
BLEU is bad: It is not every day that you see presenters apologizing
for the metric they've used in their paper. It's even less common when
they do it unprompted, and yet here we are. BLEU is a poor fit for
modern NLG tasks, and yet everyone is still waiting for someone to come
up with something better.
Further reading: BLEU is not suitable for the evaluation of text simplification, and, for a human-evaluation platform, ChatEval: A tool for the systematic evaluation of Chatbots.
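To see why n-gram overlap is such a blunt instrument, here's a toy illustration (pure Python, made-up sentences): a perfectly reasonable paraphrase can share almost no n-grams with its reference, so its modified n-gram precision, the core ingredient of BLEU, collapses to zero.

```python
from collections import Counter

def ngram_precision(hyp, ref, n):
    """Modified n-gram precision over token lists, BLEU's core ingredient."""
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    matches = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    total = sum(hyp_ngrams.values())
    return matches / total if total else 0.0

reference = "the cat sat on the mat".split()
paraphrase = "a cat was sitting on a mat".split()

print(ngram_precision(paraphrase, reference, 1))  # some unigram overlap survives
print(ngram_precision(paraphrase, reference, 2))  # no bigram overlap at all
```

The paraphrase means essentially the same thing as the reference, yet its bigram precision is exactly zero, which is the kind of failure people keep apologizing for.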
Need something done? Templates are your friends: We are all having a
lot of fun with Neural this and Deep that. The companies that are
actually using NLP, though? Templates, all of them. So don't discount them.
Further reading: Learning Neural Templates for text generation.
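For the skeptics, here is roughly what a template-based generator looks like, a minimal sketch with made-up templates and slot values:

```python
# A toy template-based NLG system: pick a template, fill the slots.
# Templates and data are invented for illustration.
TEMPLATES = {
    "weather": "It will be {condition} in {city} with a high of {high} degrees.",
    "sports": "{team} beat {opponent} {score} last night.",
}

def generate(kind, **slots):
    """Render the chosen template with the given slot values."""
    return TEMPLATES[kind].format(**slots)

print(generate("weather", condition="sunny", city="Brussels", high=21))
# It will be sunny in Brussels with a high of 21 degrees.
```

No training data, no GPU, and the output is guaranteed to be grammatical, which is exactly why companies keep shipping it.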
Classical NLP is still cool, as long as you call it anything else:
Everyone knows that real scientists train end-to-end neural networks.
But did you know that tweaking your data just a bit is still fine? All
you have to do is pick one of the many principles that the NLP community
developed in the last 50+ years, call it something else, and you're good to go.
Further reading: Handling rare items in data-to-text generation.
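One classical trick of this kind is delexicalization: replace rare entities with placeholder tokens before training, and restore them after generation. A minimal sketch, with made-up example data:

```python
# Delexicalization: swap rare entities for placeholders so the model
# only has to learn the sentence pattern. Example strings are invented.

def delexicalize(text, entities):
    """Replace each entity string with a placeholder token; keep a mapping back."""
    mapping = {}
    for i, ent in enumerate(entities):
        token = f"ENT_{i}"
        text = text.replace(ent, token)
        mapping[token] = ent
    return text, mapping

def relexicalize(text, mapping):
    """Restore the original entities after generation."""
    for token, ent in mapping.items():
        text = text.replace(token, ent)
    return text

delex, mapping = delexicalize(
    "Zlatan Ibrahimovic scored for LA Galaxy",
    ["Zlatan Ibrahimovic", "LA Galaxy"],
)
print(delex)                          # ENT_0 scored for ENT_1
print(relexicalize(delex, mapping))   # the original sentence, restored
```

The model only ever sees "ENT_0 scored for ENT_1", so a player it has never encountered stops being a rare-word problem.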
Be up to date in your embeddings: It is now widely accepted that you
should know what fastText and GloVe embeddings are, and ELMo and BERT
are here to stay too. So if you're not up to date with those yet, then
it's time to start reading.
Further reading: want to use fastText, but at a fraction of the cost? Generalizing word embeddings using bag of subwords.
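The "bag of subwords" idea is the same one fastText uses: represent a word by its character n-grams, so even a word you've never seen gets a representation. A toy sketch:

```python
# fastText-style character n-grams: a word is decomposed into the
# subword units it contains, with "<" and ">" marking word boundaries.

def subwords(word, n_min=3, n_max=5):
    """Return the set of character n-grams of the padded word."""
    padded = f"<{word}>"  # boundary markers, as in fastText
    grams = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(padded) - n + 1):
            grams.add(padded[i:i + n])
    return grams

print(sorted(subwords("where", 3, 3)))
```

An out-of-vocabulary word can then be embedded simply by summing the vectors of its subwords, most of which it shares with words that were in the training data.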
Data is a problem, and transfer learning is here to help: Transfer Learning is a technique in which you take a previously trained model and tweak it to work on something else. Seeing as how difficult it is to collect data for specific domains, starting from a simpler domain may be more feasible than training everything end-to-end yourself.
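As a caricature of the idea, here's transfer learning on the world's smallest model: pre-train a linear regressor on a data-rich source task, then fine-tune it on a handful of target examples. All numbers and the tiny model are made up for illustration.

```python
# Transfer learning in miniature: pre-train on a source domain with
# plenty of data, then fine-tune on a scarce, related target domain.

def grad_step(w, b, data, lr=0.1):
    """One gradient-descent step on mean squared error for y ~ w*x + b."""
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    return w - lr * gw, b - lr * gb

def mse(w, b, data):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# Source domain: lots of examples of y = 2x.
source = [(x, 2 * x) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
# Target domain: only two examples of the related task y = 2x + 1.
target = [(0.0, 1.0), (1.0, 3.0)]

# Pre-train on the source domain...
w, b = 0.0, 0.0
for _ in range(200):
    w, b = grad_step(w, b, source)

# ...then fine-tune briefly on the scarce target data.
before = mse(w, b, target)
for _ in range(20):
    w, b = grad_step(w, b, target, lr=0.05)
after = mse(w, b, target)
print(after < before)  # fine-tuning improves the target fit
```

The pre-trained slope is already roughly right, so the few target examples only have to nudge the model, which is the whole appeal when domain data is scarce.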
Exciting research areas
If you are working on NLG, as I am, then you might be interested in a couple of specific research directions:
Understanding what neural networks do: This topic has been going on
for a couple years, and shows no end in sight. With neural methods
everywhere, it only makes sense to try and understand what exactly it is
that our models are learning.
Further reading: Want to take a look at the kind of nonsense that a neural network might do? Pathologies of Neural Models Make Interpretations Difficult.
Copy Networks and Coverage: The concepts of Copy Networks (a neural
network that can choose between generating a new word or copying one
from the input) and Coverage (mark which sections of the input have
already been used) were very well put together in a summarization paper
titled "Get To The Point: Summarization with Pointer-Generator
Networks". These techniques are still being explored, and everyone working on
summarization should be at least familiar with them.
Further reading: On the abstractiveness of Neural Document Summarization explores what copy networks are actually doing.
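The mixing step at the heart of a pointer-generator network fits in a few lines: the final word distribution interpolates the decoder's vocabulary softmax with the attention distribution over the input, which is what lets the model emit out-of-vocabulary source words. A sketch with made-up numbers:

```python
# Pointer-generator mixing: P_final(w) = p_gen * P_vocab(w)
#                                      + (1 - p_gen) * sum of attention on w.

def final_distribution(p_gen, p_vocab, attention, source_tokens, vocab):
    """Blend the vocabulary softmax with an attention-based copy distribution."""
    final = {w: p_gen * p for w, p in zip(vocab, p_vocab)}
    for a, tok in zip(attention, source_tokens):
        final[tok] = final.get(tok, 0.0) + (1 - p_gen) * a
    return final

vocab = ["the", "cat", "sat"]
p_vocab = [0.5, 0.3, 0.2]           # decoder's softmax over the vocabulary
source = ["the", "cat", "meowed"]   # "meowed" is out of vocabulary
attention = [0.1, 0.2, 0.7]         # attention over the source positions

dist = final_distribution(p_gen=0.8, p_vocab=p_vocab, attention=attention,
                          source_tokens=source, vocab=vocab)
print(dist["meowed"])  # the OOV word can still be produced, by copying it
```

Coverage extends the same picture by accumulating the attention over decoding steps and penalizing the model for attending to the same source positions twice.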
AMR Generation is coming: Abstract Meaning Representation (AMR) is a
type of parsing in which we obtain formal representations of the meaning
of a sentence. No one has yet managed to successfully parse an entire
document (as far as I know), but once that's done the following steps
are mostly obvious: obtain the main nodes in the text, feed them to a
neural network, and obtain a summary of your document. Work on this has
already begun, and I look forward to it.
Further reading: Guided Neural Language Generation for Abstractive Summarization using AMR.
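If you've never seen an AMR, here is the classic PENMAN-notation graph for "The boy wants to go", plus a naive regex that pulls out its concept nodes (a real pipeline would of course use a proper AMR parser):

```python
import re

# The standard textbook AMR for "The boy wants to go": "want-01" and
# "go-01" are PropBank frames, and the variable "b" is reused because
# the boy is both the wanter and the (would-be) goer.
amr = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-01 :ARG0 b))"

# Naive concept extraction: every "/ concept" pair names a node.
concepts = re.findall(r"/ ([\w-]+)", amr)
print(concepts)  # ['want-01', 'boy', 'go-01']
```

Those concept nodes are exactly the kind of "main nodes" a summarizer would want to select from before generating text.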
I don't want to finish this post without including a couple extra papers that caught my eye:
- Here's an interesting idea: why do a single pass of the input through your encoder when a multi-pass approach may do better? If you like the idea, here's a paper for you: Iterative Document Representation Learning towards Summarization with Polishing.
- How do you detect insults when the people doing the insulting avoid the actual insulting words? These two papers have ideas on how to do it: Determining code words in euphemistic hate speech using Word Embedding Networks and Improving moderation of Online Discussions via Interpretable Neural Models.
- As told by someone during the conference: "The job of news agencies is to publish news in an objective, unbiased way, and the role of newspapers is to add bias. This paper does the opposite of that": Learning to flip the bias of news headlines.
- This paper, titled Assisted nominalization for Academic English Writing, is an example of two interesting yet unrelated phenomena. First: you can do fun things with language tools. Second: just because you can doesn't mean you should. Active voice, everyone!