What I read in August
by Vagrant Gautam

If you think I read too much, you haven't met my friends.
For fun and edification
Books
- McPherron, P., & Ramanathan, V. (Eds.). (2011). Language, body, and health.
- Yergeau, M. (2017). Authoring autism: on rhetoric and neurological queerness.
- McRuer, R. (2006). Crip theory: cultural signs of queerness and disability.
- Feinberg, L. (1993). Stone butch blues.
Papers
- Irvine, J. T., & Gal, S. (2000). Language Ideology and Linguistic Differentiation.
- Cuboniks, L. (2015). Xenofeminism: A Politics for Alienation.
- Toze, M. (2019). Developing a critical trans gerontology.
- Bucholtz, M., & Hall, K. (2016). Embodied sociolinguistics.
- Ahearn, L. M. (2012). Language, Thought, and Culture.
- Schoenebeck, S., & Blackwell, L. (2021). Reimagining Social Media Governance: Harm, Accountability, and Repair.
- Heilker, P., & Yergeau, M. (2011). Autism and Rhetoric.
- Konnelly, L. (2021). Nuance and normativity in trans linguistic research.
- Andrus, J. (2012). Language ideology, fractal recursivity, and discursive agency in the legal construction of linguistic evidence.
- Bauer, H. (2010). 'Race', normativity and the history of sexuality: Magnus Hirschfeld's racism and early-twentieth-century sexology.
Other
- 150 pages of readings for the University of Alberta's Paleontology: Theropod Dinosaurs and the Origin of Birds course
For pay
Papers
- Singh, M., Oualil, Y., & Klakow, D. (2017). Approximated and domain-adapted LSTM language models for first-pass decoding in speech recognition.
- Deena, S., Hasan, M., Doulaty, M., Saz, O., & Hain, T. (2016). Combining feature and model-based adaptation of RNNLMs for multi-genre broadcast speech recognition.
- Huang, Z., Zweig, G., & Dumoulin, B. (2014). Cache based recurrent neural network language model inference for first pass speech recognition.
- Klakow, D. (2006). Language model adaptation for tiny adaptation corpora.
- Kneser, R., Peters, J., & Klakow, D. (1997). Language model adaptation using dynamic marginals.
- Besling, S., & Meier, H.-G. (1995). Language Model Speaker Adaptation.
- Kneser, R., & Ney, H. (1993). Improved clustering techniques for class-based statistical language modelling.
- Moore, G., & Young, S. (2000). Class-based language model adaptation using mixtures of word-class weights.
- Shi, Y., Wiggers, P., & Jonker, C. M. (2012). Towards recurrent neural networks language models with linguistic and contextual features.
- Bengio, Y., Ducharme, R., & Vincent, P. (2003). A Neural Probabilistic Language Model.
- Klakow, D. (2000). Selecting articles from the language model training corpus.
- Adel, H., Kirchhoff, K., Vu, N. T., Telaar, D., & Schultz, T. (2014). Comparing approaches to convert recurrent neural networks into backoff language models for efficient decoding.
- Jalalvand, S. (2013). Improving Language Model Adaptation using Automatic Data Selection and Neural Network.
- Bacchiani, M., Roark, B., & Saraclar, M. (2004). Language model adaptation with MAP estimation and the perceptron algorithm.
- Sanchis-Trilles, G., & Cettolo, M. (2010). Online language model adaptation via n-gram mixtures for statistical machine translation.
- Vapnik, V. (1991). Principles of Risk Minimization for Learning Theory.
- Mikolov, T., Karafiat, M., Burget, L., Cernocky, J., & Khudanpur, S. (2010). Recurrent Neural Network Based Language Model.
- Chen, X., Liu, X., Gales, M. J. F., & Woodland, P. C. (2015). Improving the training and evaluation efficiency of recurrent neural network language models.
- Chen, X., Liu, X., Wang, Y., Gales, M. J. F., & Woodland, P. C. (2016). Efficient Training and Evaluation of Recurrent Neural Network Language Models for Automatic Speech Recognition.
- Mikolov, T., Yih, W., & Zweig, G. (2013). Linguistic Regularities in Continuous Space Word Representations.
- Chen, X., Tan, T., Liu, X., Lanchantin, P., Wan, M., Gales, M. J. F., & Woodland, P. C. (2015). Recurrent neural network language model adaptation for multi-genre broadcast speech recognition.
- Gangireddy, S. R., Swietojanski, P., Bell, P., & Renals, S. (2016). Unsupervised Adaptation of Recurrent Neural Network Language Models.
- Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A Simple Way to Prevent Neural Networks from Overfitting.
- Li, K., Xu, H., Wang, Y., Povey, D., & Khudanpur, S. (2018). Recurrent Neural Network Language Model Adaptation for Conversational Speech Recognition.
- Kuhn, R., & De Mori, R. (1990). A cache-based natural language model for speech recognition.
- Grave, E., Joulin, A., & Usunier, N. (2017). Improving neural language models with a continuous cache.
- Grave, E., Cisse, M. M., & Joulin, A. (2017). Unbounded cache model for online language modeling with open vocabulary.
- Lau, R., Rosenfeld, R., & Roukos, S. (1993). Trigger-based language models: a maximum entropy approach.
- Singh-Miller, N., & Collins, M. (2007). Trigger-Based Language Modeling using a Loss-Sensitive Perceptron Algorithm.
- Deoras, A., Mikolov, T., Kombrink, S., Karafiat, M., & Khudanpur, S. (2011). Variational approximation of long-span language models for LVCSR.
- Chu, C., & Wang, R. (2018). A Survey of Domain Adaptation for Neural Machine Translation.
- Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space.
- Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed Representations of Words and Phrases and their Compositionality.
- Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors.
- Bojanowski, P., Grave, E., Joulin, A., & Mikolov, T. (2017). Enriching Word Vectors with Subword Information.
- Singh, M., Virpioja, S., Smit, P., & Kurimo, M. (2019). Subword RNNLM Approximations for Out-Of-Vocabulary Keyword Search.
- Pennington, J., Socher, R., & Manning, C. (2014). GloVe: Global Vectors for Word Representation.