Comparing LSTM and FOFE-based Architectures for Named Entity Recognition
LSTM architectures (Hochreiter and Schmidhuber, 1997) have become the standard for named entity recognition (NER) in text (Lample et al., 2016; Chiu and Nichols, 2016). Nonetheless, Zhang et al. (2015) recently proposed an approach based on a fixed-size ordinally forgetting encoding (FOFE) that translates variable-length contexts into fixed-size representations. This encoding method can be used with feed-forward neural networks.
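For reference, Zhang et al. (2015) define the FOFE code of a word sequence $w_1, \dots, w_T$ through a simple recursion over one-hot word vectors; a minimal statement of the encoding, in notation chosen here, is

$$
z_t = \alpha \, z_{t-1} + e_t, \qquad z_0 = \mathbf{0}, \quad t = 1, \dots, T,
$$

where $e_t$ is the one-hot vector of word $w_t$, $\alpha \in (0, 1)$ is a constant forgetting factor, and the final state $z_T$ serves as the fixed-size representation of the entire variable-length context. Zhang et al. (2015) show that this encoding is unique for $0 < \alpha \leq 0.5$ and almost surely unique for $0.5 < \alpha < 1$, so in practice little contextual information is lost.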
