Coarse-to-Fine Decoding for Neural Semantic Parsing

by Kyriakos Chatzidimitriou | Nov 29, 2018 09:54 | research

mlsw reading group · software · machine learning · review


I decided to run a machine learning on software (MLSW) paper reading group on my own :) and start writing short summaries (bits) on papers I read on the subject. I want them to serve as long-term memory for me and to push me to write better summaries and reviews. I am not sure they will be of use to anyone else, but in any case I am making them public. I will start by reading all the papers from the awesome machine learning on source code repo.


The coarse2fine method learns semantic parsers from instances of natural language expressions paired with machine-interpretable structured meaning representations. Specifically, the structured meaning representations are logical forms (λ-calculus), Django (Python) expressions, and SQL queries. As an example, the goal is to transform:

What record company did conductor Mikhail Snitko record for after 1996?


SELECT Record Company WHERE (Year of Recording > 1996) AND (Conductor = Mikhail Snitko)

To do that, coarse2fine transforms the input x into a meaning sketch a and then into the final meaning representation y. Bi-directional LSTMs encode the input x, and RNNs with an attention mechanism decode the encoded input into the abstract sketch a. A similar encode-decode scheme transforms a into y. Of course, certain fine-tunings are added to account for differences among the tasks.
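To make the two-stage idea concrete, here is a minimal toy sketch (plain Python, not the paper's neural model): stage one produces an abstract sketch with typed placeholders, and stage two fills those placeholders to obtain the final SQL query. The placeholder names and fill order are my own illustrative choices.

```python
# Toy illustration of coarse-to-fine decoding (not the paper's neural model).
# Stage 1 (coarse): abstract away arguments, keeping only the query structure.
SKETCH = "SELECT <col> WHERE (<col> > <num>) AND (<col> = <ent>)"

def fill_sketch(sketch, values):
    """Stage 2 (fine): decode the final form conditioned on the sketch,
    filling each placeholder left-to-right."""
    out = sketch
    for placeholder, value in values:
        out = out.replace(placeholder, value, 1)  # fill first occurrence only
    return out

query = fill_sketch(SKETCH, [
    ("<col>", "Record Company"),
    ("<col>", "Year of Recording"),
    ("<num>", "1996"),
    ("<col>", "Conductor"),
    ("<ent>", "Mikhail Snitko"),
])
print(query)
# SELECT Record Company WHERE (Year of Recording > 1996) AND (Conductor = Mikhail Snitko)
```

In the paper both stages are learned sequence decoders; the point of the sketch is that the second decoder only has to fill in low-level details once the coarse structure is fixed.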

The experimental results show that coarse2fine does a pretty good job and is worth a closer look.

The paper can be found here and the code is provided here.