What I read in September
by Vagrant Gautam

If you think I read too much, you haven't met my friends.
For fun and edification
Books
- hooks, bell. (2000). All About Love: New Visions.
- Wright, K. E. (2022). Black Professionalism: Perception and Metalinguistic Assessment of Black American Speakers' Sociolinguistic Labor.
- Graeber, D. (2018). Bullshit Jobs.
Papers
- Rogoff, I. (1996). Gossip as testimony: a postmodern signature.
- de Sousa, R. (1994). In Praise of Gossip: Indiscretion as a Saintly Virtue.
- Collins, L. (1994). A Feminist Defense of Gossip.
- Ayim, M. (1994). Knowledge Through the Grapevine: Gossip as Inquiry.
- Hammer, J., & Reig, S. (2022). From Individual Rights to Community Obligations: A Jewish Approach to Speech.
- Spiel, K. (2022). Transreal tracing: Queer-feminist speculations on disabled technologies.
- Vieth, R. (2022). Addressing Serious Harm, Reconsidering Policy and Building Towards Repair.
- Warren, D. C. (1928). Inheritance of Earlobe Color in Poultry.
- Goldinger, S. D. (1996). Words and Voices: Episodic Traces in Spoken Word Identification and Recognition Memory.
- Schmidt, M. (2019). Creating Worlds: Fan Modifications of Civilization 4.
For pay
Papers
- Sennrich, R., Haddow, B., & Birch, A. (2016). Neural Machine Translation of Rare Words with Subword Units.
- Kudo, T. (2018). Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates.
- Seppänen, J. J. (1982). Recursive Functions for Computation of Natural Secret Languages.
- Gaspari, F. (2006). Look who's translating.
- Somers, H. (2005). Round-Trip Translation: What Is It Good For?
- Tilk, O., & Alumäe, T. (2014). Multi-Domain Recurrent Neural Network Language Model for Medical Speech Recognition.
- Shi, Y., Larson, M., & Jonker, C. M. (2015). Recurrent neural network language model adaptation with curriculum learning.
- Varjokallio, M., & Klakow, D. (2016). Unsupervised morph segmentation and statistical language models for vocabulary expansion.
- Durrani, N., Dalvi, F., Sajjad, H., Belinkov, Y., & Nakov, P. (2019). One Size Does Not Fit All: Comparing NMT Representations of Different Granularities.
- Mager, M., Oncevay, A., Mager, E., Kann, K., & Vu, N. T. (2022). BPE vs. Morphological Segmentation: A Case Study on Machine Translation of Four Polysynthetic Languages.
- Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language Models are Unsupervised Multitask Learners.
- Smit, P., Virpioja, S., Grönroos, S.-A., & Kurimo, M. (2014). Morfessor 2.0: Toolkit for statistical morphological segmentation.
- Arisoy, E., Chen, S. F., Ramabhadran, B., & Sethy, A. (2013). Converting Neural Network Language Models into back-off language models for efficient decoding in automatic speech recognition.
- Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.
- Creutz, M., & Lagus, K. (2002). Unsupervised discovery of morphemes.
- Kohonen, O., Virpioja, S., & Lagus, K. (2010). Semi-Supervised Learning of Concatenative Morphology.
- Koehn, P. (2005). Europarl: A Parallel Corpus for Statistical Machine Translation.
- Feng, S. Y., Gangal, V., Wei, J., Chandar, S., Vosoughi, S., Mitamura, T., & Hovy, E. (2021). A Survey of Data Augmentation Approaches for NLP.
- Li, B., Hou, Y., & Che, W. (2022). Data augmentation approaches in natural language processing: A survey.
- Liu, P., Wang, X., Xiang, C., & Meng, W. (2020). A Survey of Text Data Augmentation.
- Shorten, C., & Khoshgoftaar, T. M. (2019). A survey on Image Data Augmentation for Deep Learning.
- Bayer, M., Kaufhold, M.-A., & Reuter, C. (2022). A Survey on Data Augmentation for Text Classification.
- Jia, S., Wang, P., Jia, P., & Hu, S. (2017). Research on data augmentation for image classification based on convolution neural networks.
- Park, D. S., Chan, W., Zhang, Y., Chiu, C.-C., Zoph, B., Cubuk, E. D., & Le, Q. V. (2019). SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition.
- Chang, C.-H., Kayed, M., Girgis, M. R., & Shaalan, K. F. (2006). A Survey of Web Information Extraction Systems.
- Mollá, D., & Vicedo, J. L. (2007). Question Answering in Restricted Domains: An Overview.
- Gupta, P., & Gupta, V. (2012). A Survey of Text Question Answering Techniques.
- Grishman, R. (2015). Information Extraction.
- Almansor, E. H., & Hussain, F. K. (2020). Survey on Intelligent Chatbots: State-of-the-Art and Future Research Directions.
- Chen, H., Liu, X., Yin, D., & Tang, J. (2017). A Survey on Dialogue Systems: Recent Advances and New Frontiers.
- Shum, H., He, X., & Li, D. (2018). From Eliza to XiaoIce: challenges and opportunities with social chatbots.
- Hussain, S., Ameri Sianaki, O., & Ababneh, N. (2019). A Survey on Conversational Agents/Chatbots Classification and Design Techniques.
- Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., et al. (2020). Language Models are Few-Shot Learners.
- Wang, T., Zhao, C., Wang, M., Li, L., & Xiong, D. (2021). Autocorrect in the Process of Translation — Multi-task Learning Improves Dialogue Machine Translation.
- Wermter, S., Riloff, E., & Scheler, G. (1996). Learning Approaches for Natural Language Processing.
- Marcus, G. F. (1998). Rethinking Eliminative Connectionism.
- Garoufi, K., & Koller, A. (2011). Combining symbolic and corpus-based approaches for the generation of successful referring expressions.
- Marcus, G. (2018). Deep Learning: A Critical Appraisal.
- Gonen, H., & Goldberg, Y. (2019). Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them.
- Blodgett, S. L., Barocas, S., Daumé III, H., & Wallach, H. (2020). Language (Technology) is Power: A Critical Survey of "Bias" in NLP.