Reading List and Presentation Schedule (CS 636 - Topics in Data Mining Research - Fall 2019-2020)
…ings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019, p. 504.

[5] A. Bhowmick and S. M. Hazarika, “An insight into assistive technology for the visually impaired and blind people: State-of-the-art and future trends,” Journal on Multimodal User Interfaces, vol. 11, no. 2, pp. 149–172, 2017.

[6] H. Kacorri, “Teachable machines for accessibility,” ACM SIGACCESS Accessibility and Computing, no. 119, pp. 10–18, 2017.

[7] S. Wu, L. Reynolds, X. Li, and F. Guzmán, “Design and evaluation of a social media writing support tool for people with dyslexia,” in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019, p. 516.

[8] A. Guo, E. Kamar, J. W. Vaughan, H. Wallach, and M. R. Morris, “Toward fairness in AI for people with disabilities: A research roadmap,” arXiv preprint arXiv:1907.02227, 2019.

[9] J. Yang, J. Fan, Z. Wei, G. Li, T. Liu, and X. Du, “Cost-effective data annotation using game-based crowdsourcing,” Proceedings of the VLDB Endowment, vol. 12, no. 1, pp. 57–70, 2018.

[10] A. Nottamkandath, J. Oosterman, D. Ceolin, G. K. D. de Vries, and W. Fokkink, “Predicting quality of crowdsourced annotations using graph kernels,” in IFIP International Conference on Trust Management, 2015, pp. 134–148.

[11] O. Inel, K. Khamkham, T. Cristea, A. Dumitrache, A. Rutjes, J. van der Ploeg, L. Romaszko, L. Aroyo, and R.-J. Sips, “CrowdTruth: Machine-human computation framework for harnessing disagreement in gathering annotated data,” in International Semantic Web Conference, 2014, pp. 486–504.

[12] N. B. Shah and D. Zhou, “Double or nothing: Multiplicative incentive mechanisms for crowdsourcing,” in Advances in Neural Information Processing Systems, 2015, pp. 1–9.

[13] R. A. Krishna, K. Hata, S. Chen, J. Kravitz, D. A. Shamma, L. Fei-Fei, and M. S. Bernstein, “Embracing error to enable rapid crowdsourcing,” in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2016, pp. 3167–3179.

[14] Y. Fu, T. M. Hospedales, T. Xiang, J. Xiong, S. Gong, Y. Wang, and Y. Yao, “Robust subjective visual property prediction from crowdsourced pairwise labels,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 3, pp. 563–577, 2015.

[15] J. C. Chang, S. Amershi, and E. Kamar, “Revolt: Collaborative crowdsourcing for labeling machine learning datasets,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017, pp. 2334–2346.

[16] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in Proceedings of Neural Information Processing Systems (NIPS), 2013, pp. 3111–3119.

[17] Y. Goldberg and O. Levy, “word2vec explained: Deriving Mikolov et al.’s negative-sampling word-embedding method,” arXiv, 2014.

[18] J. Pennington, R. Socher, and C. Manning, “GloVe: Global vectors for word representation,” in Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014, pp. 1532–1543.

[19] M.-R. Bouguelia, S. Nowaczyk, K. Santosh, and A. Verikas, “Agreeing to disagree: Active learning with noisy labels without crowdsourcing,” International Journal of Machine Learning and Cybernetics, vol. 9, no. 8, pp. 1307–1319, 2018.

[20] S. Wan, Y. Lan, J. Guo, J. Xu, L. Pang, and X. Cheng, “A deep architecture for semantic matching with multiple positional sentence representations,” in Proceedings of AAAI, 2016, pp. 2835–2841.

[21] B. Hu, Z. Lu, H. Li, and Q. Chen, “Convolutional neural network architectures for matching natural language sentences,” in Proceedings of Neural Information Processing Systems (NIPS), 2014, pp. 2042–2050.

[22] T. Bosc and P. Vincent, “Auto-encoding dictionary definitions into consistent word embeddings,” in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018, pp. 1522–1532.

[23] C. N. Dos Santos and M. Gatti, “Deep convolutional neural networks for sentiment analysis of short texts,” in Proceedings of the Conference on Computational Linguistics (COLING), 2014, pp. 69–78.

[24] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Proceedings of Neural Information Processing Systems (NIPS), 2014, pp. 3104–3112.

[25] A. Piktus, N. B. Edizel, P. Bojanowski, E. Grave, R. Ferreira, and F. Silvestri, “Misspelling oblivious word embeddings,” in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019, pp. 3226–3234.

[26] J. Guo, Y. Fan, Q. Ai, and W. B. Croft, “A deep relevance matching model for ad-hoc retrieval,” in Proceedings of the ACM International Conference on Information and Knowledge Management. ACM, 2016, pp. 55–64.

[27] S. Rajeswar, S. Subramanian, F. Dutil, C. Pal, and A. Courville, “Adversarial generation of natural language,” arXiv preprint arXiv:1705.10929, 2017.

[28] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018.

[29] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, “RoBERTa: A robustly optimized BERT pretraining approach,” arXiv preprint arXiv:1907.11692, 2019.

[30] N. Parmar, A. Vaswani, J. Uszkoreit, Ł. Kaiser, N. Shazeer, A. Ku, and D. Tran, “Image transformer,” arXiv preprint arXiv:1802.05751, 2018.

[31] M. Jaderberg, K. Simonyan, A. Zisserman et al., “Spatial transformer networks,” in Advances in Neural Information Processing Systems, 2015, pp. 2017–2025.

[32] Z. Dai, Z. Yang, Y. Yang, W. W. Cohen, J. Carbonell, Q. V. Le, and R. Salakhutdinov, “Transformer-XL: Attentive language models beyond a fixed-length context,” arXiv preprint arXiv:1901.02860, 2019.

[33] M. Dehghani, S. Gouws, O. Vinyals, J. Uszkoreit, and Ł. Kaiser, “Universal transformers,” arXiv preprint arXiv:1807.03819, 2018.

[34] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, “Deep contextualized word representations,” in Proceedings of NAACL-HLT, 2018, pp. 2227–2237.