My Qualifying Exam (Oral)

I’m taking my qualifying exam this Tuesday—it may surprise some of you that I haven’t done it already! This is mostly due to logistical kerfuffles as I transferred Ph.D. programs, and also because I tend to avoid coursework like the plague.

Each university has its own culture around an oral or qualifying exam. Columbia’s Computer Science department describes it as follows:

The committee, after consideration of the student’s input, selects a syllabus of the 20-30 most significant documents that encompass the state of the art in the area. […] The oral exam begins with the student’s 30 minute critical evaluation of the syllabus materials, and is followed by no more than 90 minutes of questioning by the committee on any subject matter related to their contents. The student is judged primarily on the oral evidence, but the content and style of the presentation can account for part of the decision. [url]

My syllabus concerns Bayesian deep learning, the synthesis of modern Bayesian analysis with deep learning. It includes 29 papers published in 2014 or later, representing “the most significant documents that encompass the state of the art in the area.” Several friends asked me to share the list, so I decided to post it publicly.

Probabilistic programming & AI systems

  1. Mansinghka, Selsam, & Perov (2014)

  2. Tristan et al. (2014)

  3. Schulman, Heess, Weber, & Abbeel (2015)

  4. Narayanan, Carette, Romano, Shan, & Zinkov (2016)

  5. Abadi et al. (2016)

  6. Carpenter et al. (2016)

  7. Tran et al. (2016)

  8. Kucukelbir, Tran, Ranganath, Gelman, & Blei (2017)

  9. Tran et al. (2017)

  10. Neubig et al. (2017)
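The common thread in these papers is that a model is just a program that makes random choices, and inference is a generic procedure applied to that program. As a flavor of the idea—a minimal sketch of my own in NumPy, not code from any of the papers above—here is a generative program for a coin’s bias, with inference by likelihood weighting:

```python
import numpy as np

rng = np.random.default_rng(0)

def model():
    """Generative program: make a random choice for the coin's bias."""
    return rng.beta(1.0, 1.0)  # uniform prior over [0, 1]

def log_likelihood(theta, data):
    """Score the observed flips under a candidate bias theta."""
    heads = data.sum()
    tails = len(data) - heads
    return heads * np.log(theta) + tails * np.log1p(-theta)

data = np.array([1, 1, 0, 1, 1, 1, 0, 1])  # observed coin flips

# Generic inference: run the program forward many times, weight each
# run by how well it explains the data, and self-normalize.
samples = np.array([model() for _ in range(10_000)])
log_w = log_likelihood(samples, data)
w = np.exp(log_w - log_w.max())
posterior_mean = (w * samples).sum() / w.sum()
print(f"posterior mean bias ~ {posterior_mean:.3f}")  # exact: 7/10 = 0.7
```

The systems above scale this idea up—Stan and Hakaru with compiled or transformed inference, Edward and the deep-learning toolkits with gradient-based inference on computation graphs.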

Variational inference

  1. Kingma & Welling (2014)

  2. Ranganath, Gerrish, & Blei (2014)

  3. Rezende, Mohamed, & Wierstra (2014)

  4. Mnih & Gregor (2014)

  5. Rezende & Mohamed (2015)

  6. Salimans, Kingma, & Welling (2015)

  7. Tran, Ranganath, & Blei (2016)

  8. Ranganath, Tran, & Blei (2016)

  9. Maaløe, Sønderby, Sønderby, & Winther (2016)

  10. Johnson, Duvenaud, Wiltschko, Datta, & Adams (2016)

  11. Ranganath, Altosaar, Tran, & Blei (2016)

  12. Gelman et al. (2017)
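Many of these papers build on two generic gradient estimators for the ELBO: the score-function estimator of black box VI (Ranganath, Gerrish, & Blei, 2014) and the reparameterization trick (Kingma & Welling, 2014; Rezende, Mohamed, & Wierstra, 2014). As a minimal sketch—mine, not any paper’s reference code—here is reparameterized VI for the conjugate model z ~ N(0, 1), x_i | z ~ N(z, 1), where the exact posterior is available to check against:

```python
import numpy as np

rng = np.random.default_rng(1)

# Model: z ~ N(0, 1), x_i | z ~ N(z, 1).
x = rng.normal(2.0, 1.0, size=20)
n = len(x)

def dlogp_dz(z):
    # d/dz [-z**2 / 2 - sum((x - z)**2) / 2] = x.sum() - (n + 1) * z
    return x.sum() - (n + 1) * z

# Variational family q(z) = N(m, s**2), parameterized by (m, log s).
m, log_s = 0.0, 0.0
lr = 0.005
for _ in range(5000):
    eps = rng.normal(size=32)
    z = m + np.exp(log_s) * eps  # reparameterize: z = m + s * eps
    g = dlogp_dz(z)              # pathwise derivative through the sample
    m += lr * g.mean()
    log_s += lr * ((g * np.exp(log_s) * eps).mean() + 1.0)  # +1: entropy term

print(f"VI:    mean={m:.3f}, sd={np.exp(log_s):.3f}")
print(f"Exact: mean={x.sum() / (n + 1):.3f}, sd={np.sqrt(1 / (n + 1)):.3f}")
```

The trick is the line `z = m + s * eps`: the sampling noise is pushed outside the variational parameters, so gradients flow through the sample itself, typically with far lower variance than the score-function estimator.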

Implicit probabilistic models & adversarial training

  1. Goodfellow et al. (2014)

  2. Dziugaite, Roy, & Ghahramani (2015)

  3. Li, Swersky, & Zemel (2015)

  4. Radford, Metz, & Chintala (2016)

  5. Mohamed & Lakshminarayanan (2016)

  6. Arjovsky, Chintala, & Bottou (2017)

  7. Tran, Ranganath, & Blei (2017)
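An implicit model is one you can sample from but whose density you cannot evaluate; learning then leans on density-ratio estimation, which is exactly what a GAN discriminator computes (Goodfellow et al., 2014; Mohamed & Lakshminarayanan, 2016). As a minimal sketch of my own (assuming scikit-learn is available), a classifier trained to separate “real” from “fake” samples recovers the log density ratio through its logits:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

real = rng.normal(1.0, 1.0, size=(5000, 1))  # samples from p_data = N(1, 1)
fake = rng.normal(0.0, 1.0, size=(5000, 1))  # samples from p_model = N(0, 1)

X = np.vstack([real, fake])
y = np.concatenate([np.ones(5000), np.zeros(5000)])
clf = LogisticRegression().fit(X, y)

# D(x) / (1 - D(x)) estimates p_data(x) / p_model(x).
xs = np.array([[-1.0], [0.0], [1.0], [2.0]])
d = clf.predict_proba(xs)[:, 1]
print("estimated log ratio:", np.round(np.log(d / (1 - d)), 2))
print("exact log ratio:    ", np.round(xs[:, 0] - 0.5, 2))  # log N(x;1,1) - log N(x;0,1)
```

A GAN runs this in a loop: the discriminator re-estimates the ratio, and the generator updates to push that ratio toward one.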

Committee: David Blei, Andrew Gelman, Daniel Hsu.

Disclaimer: I favored papers that have proven to be, or are most likely to be, long-lasting in influence (this means fewer papers from 2017); papers on methodology rather than applications (only to narrow the scope); original papers over surveys; and my own papers (because it’s my oral). If I did not cite you, or if you have strong opinions about a missing paper, recall Hanlon’s razor. E-mail me your suggestions.

Update (08/08/2017): I passed the oral. :-)

References

  1. Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., … Zhang, X. (2016). TensorFlow: A system for large-scale machine learning. ArXiv.org.

  2. Arjovsky, M., Chintala, S., & Bottou, L. (2017). Wasserstein GAN. In International Conference on Machine Learning.

  3. Carpenter, B., Gelman, A., Hoffman, M. D., Lee, D., Goodrich, B., Betancourt, M., … Riddell, A. (2016). Stan: A probabilistic programming language. Journal of Statistical Software.

  4. Dziugaite, G. K., Roy, D. M., & Ghahramani, Z. (2015). Training generative neural networks via Maximum Mean Discrepancy optimization. In Uncertainty in Artificial Intelligence.

  5. Gelman, A., Vehtari, A., Jylänki, P., Sivula, T., Tran, D., Sahai, S., … Robert, C. (2017). Expectation propagation as a way of life: A framework for Bayesian inference on partitioned data. ArXiv.org.

  6. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., … Bengio, Y. (2014). Generative Adversarial Nets. In Neural Information Processing Systems.

  7. Johnson, M. J., Duvenaud, D., Wiltschko, A. B., Datta, S. R., & Adams, R. P. (2016). Composing graphical models with neural networks for structured representations and fast inference. In Neural Information Processing Systems.

  8. Kingma, D. P., & Welling, M. (2014). Auto-Encoding Variational Bayes. In International Conference on Learning Representations.

  9. Kucukelbir, A., Tran, D., Ranganath, R., Gelman, A., & Blei, D. M. (2017). Automatic Differentiation Variational Inference. Journal of Machine Learning Research, 18, 1–45.

  10. Li, Y., Swersky, K., & Zemel, R. (2015). Generative Moment Matching Networks. In International Conference on Machine Learning.

  11. Maaløe, L., Sønderby, C. K., Sønderby, S. K., & Winther, O. (2016). Auxiliary Deep Generative Models. In International Conference on Machine Learning.

  12. Mansinghka, V., Selsam, D., & Perov, Y. (2014). Venture: A higher-order probabilistic programming platform with programmable inference. ArXiv.org.

  13. Mnih, A., & Gregor, K. (2014). Neural Variational Inference and Learning in Belief Networks. In International Conference on Machine Learning.

  14. Mohamed, S., & Lakshminarayanan, B. (2016). Learning in Implicit Generative Models. ArXiv.org.

  15. Narayanan, P., Carette, J., Romano, W., Shan, C.-c., & Zinkov, R. (2016). Probabilistic Inference by Program Transformation in Hakaru (System Description). In International Symposium on Functional and Logic Programming.

  16. Neubig, G., Dyer, C., Goldberg, Y., Matthews, A., Ammar, W., Anastasopoulos, A., … Yin, P. (2017). DyNet: The Dynamic Neural Network Toolkit. ArXiv.org.

  17. Radford, A., Metz, L., & Chintala, S. (2016). Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. In International Conference on Learning Representations.

  18. Ranganath, R., Altosaar, J., Tran, D., & Blei, D. M. (2016). Operator Variational Inference. In Neural Information Processing Systems.

  19. Ranganath, R., Gerrish, S., & Blei, D. M. (2014). Black Box Variational Inference. In Artificial Intelligence and Statistics.

  20. Ranganath, R., Tran, D., & Blei, D. M. (2016). Hierarchical Variational Models. In International Conference on Machine Learning.

  21. Rezende, D. J., & Mohamed, S. (2015). Variational Inference with Normalizing Flows. In International Conference on Machine Learning.

  22. Rezende, D. J., Mohamed, S., & Wierstra, D. (2014). Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In International Conference on Machine Learning.

  23. Salimans, T., Kingma, D. P., & Welling, M. (2015). Markov Chain Monte Carlo and Variational Inference: Bridging the Gap. In International Conference on Machine Learning.

  24. Schulman, J., Heess, N., Weber, T., & Abbeel, P. (2015). Gradient Estimation Using Stochastic Computation Graphs. In Neural Information Processing Systems.

  25. Tran, D., Hoffman, M. D., Saurous, R. A., Brevdo, E., Murphy, K., & Blei, D. M. (2017). Deep Probabilistic Programming. In International Conference on Learning Representations.

  26. Tran, D., Kucukelbir, A., Dieng, A. B., Rudolph, M., Liang, D., & Blei, D. M. (2016). Edward: A library for probabilistic modeling, inference, and criticism. ArXiv.org.

  27. Tran, D., Ranganath, R., & Blei, D. M. (2017). Deep and Hierarchical Implicit Models. ArXiv.org.

  28. Tran, D., Ranganath, R., & Blei, D. M. (2016). The Variational Gaussian Process. In International Conference on Learning Representations.

  29. Tristan, J.-B., Huang, D., Tassarotti, J., Pocock, A. C., Green, S., & Steele, G. L. (2014). Augur: Data-Parallel Probabilistic Modeling. In Neural Information Processing Systems.