Interface to ‘JSON-stat’ (jsonstat): an interface to ‘JSON-stat’ https://…/, a simple lightweight ‘JSON’ format for data dissemination.
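To give a feel for the format, here is a minimal sketch of a JSON-stat 2.0 “dataset” parsed and queried in Python. The field names (“id”, “size”, “dimension”, “value”) follow the public JSON-stat spec; the dimensions and data values are invented for illustration, and the `cell` helper is a hypothetical convenience, not part of any package.

```python
import json

# A minimal JSON-stat 2.0 "dataset": dimensions are listed in "id",
# their lengths in "size", and "value" is the data cube flattened in
# row-major order (the last dimension varies fastest).
jsonstat_doc = json.loads("""
{
  "version": "2.0",
  "class": "dataset",
  "label": "Toy dataset (illustrative values)",
  "id": ["year", "area"],
  "size": [2, 2],
  "dimension": {
    "year": {"category": {"index": {"2017": 0, "2018": 1}}},
    "area": {"category": {"index": {"north": 0, "south": 1}}}
  },
  "value": [10, 20, 30, 40]
}
""")

def cell(doc, **coords):
    """Look up one value by category, e.g. cell(doc, year="2018", area="south")."""
    pos = 0
    for dim_id, n in zip(doc["id"], doc["size"]):
        idx = doc["dimension"][dim_id]["category"]["index"][coords[dim_id]]
        pos = pos * n + idx  # row-major position in the flattened cube
    return doc["value"][pos]

print(cell(jsonstat_doc, year="2018", area="south"))  # -> 40
```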
Some thoughts after reading “Bad Blood: Secrets and Lies in a Silicon Valley Startup”
I just read the above-titled John Carreyrou book, and it’s as excellent as everyone says it is. I suppose it’s the mark of any compelling story that it will bring to mind other things you’ve been thinking about, and in this case I saw many connections between the story of Theranos—a company that raised billions of dollars based on fake lab tests—and various examples of junk science that we’ve been discussing for the past ten years or so.
The Trillion Dollar Question
Roger Peng, 2018/08/09
What’s new on arXiv
Rethinking Numerical Representations for Deep Neural Networks
“Richard Jarecki, Doctor Who Conquered Roulette, Dies at 86”
[relevant video]
In case you missed it: July 2018 roundup
In case you missed them, here are some articles from July of particular interest to R users.
How to Overcome That Awkward Silence in Interviews
Document worth reading: “Examining the Use of Neural Networks for Feature Extraction: A Comparative Analysis using Deep Learning, Support Vector Machines, and K-Nearest Neighbor Classifiers”
Neural networks in many varieties are touted as powerful machine learning tools because of their ability to distill large amounts of information from different forms of data, extracting complex features and enabling powerful classification. In this study, we use neural networks to extract features from both images and numeric data and use these extracted features as inputs for other machine learning models, namely support vector machines (SVMs) and k-nearest neighbor classifiers (KNNs), to see whether neural-network-extracted features enhance the capabilities of these models. We tested 7 neural network architectures in this manner, 4 for images and 3 for numeric data, training each for varying lengths of time. We then compared the results of each neural network on its own to those of an SVM and a KNN trained on the raw data, and finally to SVM and KNN models trained on features extracted by the neural network. This process was repeated on 3 image datasets and 2 numeric datasets. The results show that, in many cases, features extracted with a neural network significantly improve the performance of SVMs and KNNs compared to running these algorithms on the raw features, and in some cases also surpass the performance of the neural network alone. This in turn suggests that, for some datasets, it is reasonable practice to use neural networks as a means to extract features for classification by other machine learning models. Examining the Use of Neural Networks for Feature Extraction: A Comparative Analysis using Deep Learning, Support Vector Machines, and K-Nearest Neighbor Classifiers
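The pipeline the abstract describes (train a network, then reuse its hidden-layer activations as inputs to an SVM and a KNN) is easy to sketch. Below is a minimal scikit-learn illustration on synthetic numeric data; the architecture, dataset, and hyperparameters are placeholders for exposition, not the ones used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the paper's numeric datasets.
X, y = make_classification(n_samples=2000, n_features=40,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train a small MLP, which doubles as the feature extractor.
mlp = MLPClassifier(hidden_layer_sizes=(32,), activation="relu",
                    max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)

def hidden_features(X):
    # Manual forward pass to the single hidden layer: ReLU(X W + b).
    return np.maximum(0, X @ mlp.coefs_[0] + mlp.intercepts_[0])

H_tr, H_te = hidden_features(X_tr), hidden_features(X_te)

# Compare each classifier on raw inputs vs. NN-extracted features.
for name, model in [("SVM", SVC()), ("KNN", KNeighborsClassifier())]:
    raw = model.fit(X_tr, y_tr).score(X_te, y_te)
    ext = type(model)().fit(H_tr, y_tr).score(H_te, y_te)
    print(f"{name}: raw features {raw:.3f} vs NN-extracted features {ext:.3f}")
print(f"MLP alone: {mlp.score(X_te, y_te):.3f}")
```

Whether the extracted features help, as the abstract notes, depends on the dataset; the point of the sketch is only the plumbing: the network's hidden activations become the design matrix for the downstream models.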
Distilled News
Speech-to-Text Benchmark
Document worth reading: “Mathematics of Deep Learning”
Recently there has been a dramatic increase in the performance of recognition systems due to the introduction of deep architectures for representation learning and classification. However, the mathematical reasons for this success remain elusive. This tutorial will review recent work that aims to provide a mathematical justification for several properties of deep networks, such as global optimality, geometric stability, and invariance of the learned representations. Mathematics of Deep Learning