Q-Kernel-Based and Conditionally Negative Definite Kernel-Based Machine Learning Tools (qkerntool)
Nonlinear machine learning tool for classification, clustering and dimensionality reduction. It integrates 12 q-kernel functions and 14 conditional neg …
Generating data to explore the myriad causal effects that can be estimated in observational data analysis
I’ve been inspired by two recent talks describing the challenges of using instrumental variable (IV) methods. IV methods are used to estimate the causal effects of an exposure or intervention when there is unmeasured confounding. This estimated causal effect is very specific: the complier average causal effect (CACE). But the CACE is just one of several possible causal estimands that we might be interested in. For example, there’s the average causal effect (ACE) that represents a population average (not just based on the subset of compliers). Or there’s the average causal effect for the exposed or treated (ACT) that allows for the fact that the exposed could be different from the unexposed.
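As a toy illustration of why the CACE and ACE diverge under noncompliance, here is a minimal potential-outcomes simulation. All numbers (the 60% complier share, the effect sizes) are invented for illustration, and this is not code from the post:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical population: 60% compliers (take treatment only when
# assigned) and 40% never-takers. Proportions are made up.
complier = rng.random(n) < 0.6

# Potential outcomes: treatment effect 2.0 for compliers, 0.5 otherwise,
# so the two estimands necessarily differ.
y0 = rng.normal(0.0, 1.0, n)
y1 = y0 + np.where(complier, 2.0, 0.5)

ace = (y1 - y0).mean()             # population average causal effect
cace = (y1 - y0)[complier].mean()  # complier average causal effect

# A randomized instrument recovers the CACE (not the ACE) via the
# Wald / IV estimator: intention-to-treat effect / compliance rate.
z = rng.random(n) < 0.5            # random assignment (the instrument)
d = z & complier                   # never-takers ignore assignment
y = np.where(d, y1, y0)
wald = (y[z].mean() - y[~z].mean()) / (d[z].mean() - d[~z].mean())

print(f"ACE = {ace:.2f}, CACE = {cace:.2f}, IV estimate = {wald:.2f}")
```

With these numbers the ACE is about 1.4 while the CACE is 2.0, and the IV (Wald) estimate lands near the CACE, not the ACE, which is exactly the distinction the talks highlight.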
Introducing Octoparse New Version 7.1 – web scraping for dummies is official
Sponsored Post.
“The hype economy”
Palko writes:
Understanding object detection in deep learning
What is object detection?
Document worth reading: “A Learning Approach to Secure Learning”
Deep Neural Networks (DNNs) have been shown to be vulnerable against adversarial examples, which are data points cleverly constructed to fool the classifier. Such attacks can be devastating in practice, especially as DNNs are being applied to ever increasing critical tasks like image recognition in autonomous driving. In this paper, we introduce a new perspective on the problem. We do so by first defining robustness of a classifier to adversarial exploitation. Next, we show that the problem of adversarial example generation and defense both can be posed as learning problems, which are duals of each other. We also show formally that our defense aims to increase robustness of the classifier. We demonstrate the efficacy of our techniques by experimenting with the MNIST and CIFAR-10 datasets.
A Learning Approach to Secure Learning
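For readers new to the topic, an adversarial example can be produced with something as simple as a fast-gradient-sign step. This is a generic sketch of the attack family the abstract refers to, not the paper's method, and the toy classifier weights are made up:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic classifier (weights invented for illustration).
w = np.array([2.0, -1.0])
b = 0.0

x = np.array([0.5, 0.5])           # clean input, predicted class 1
p_clean = sigmoid(w @ x + b)

# Gradient of the log-loss for true label y=1 w.r.t. x is (p - 1) * w;
# stepping in its sign direction increases the loss (FGSM-style).
grad = (p_clean - 1.0) * w
eps = 0.5
x_adv = x + eps * np.sign(grad)

p_adv = sigmoid(w @ x_adv + b)
print(p_clean > 0.5, p_adv > 0.5)  # the predicted class flips
```

The same idea scales to DNNs, where the gradient is obtained by backpropagation; the point is simply that a small, targeted perturbation can flip the prediction.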
UnitedHealth Group: Sr Manager, Data Engineering [Minnetonka, MN]
At: UnitedHealth Group
Location: Minnetonka, MN
Web: www.unitedhealthgroup.com
Position: Sr Manager, Data Engineering
How Important Is It That a Machine Learning Model Be Understandable? We analyze poll results
Change over time is not “treatment response”
This will be a non-technical post illustrating the problems with identifying treatment responders or non-responders using inappropriate within-group analyses. Specifically, I will show why it is pointless to try to identify a subgroup of non-responders using a naïve analysis of data from one treatment group only, even though we have weekly measures over time.
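To see why, here is a tiny simulation (all numbers invented for illustration): every subject has exactly the same true improvement, yet measurement noise alone makes a sizable fraction look like "non-responders" in a within-group analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical trial arm: every subject truly improves by 5 points,
# but each weekly measurement carries noise (sd = 8). All numbers
# are made up for illustration.
true_change = 5.0
baseline = rng.normal(50, 10, n)
week0 = baseline + rng.normal(0, 8, n)
week8 = baseline + true_change + rng.normal(0, 8, n)

observed_change = week8 - week0
apparent_nonresponders = (observed_change <= 0).mean()
print(f"{apparent_nonresponders:.0%} look like non-responders")
```

Roughly a third of subjects show no observed improvement despite an identical true response, so labelling them "non-responders" from one treatment group's change scores identifies nothing but noise.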
Build Your Own Natural Language Models on AWS (no ML experience required)
At AWS re:Invent last year we announced Amazon Comprehend, a natural language processing service which extracts key phrases, places, peoples’ names, brands, events, and sentiment from unstructured text. Comprehend – which is powered by sophisticated deep learning models trained by AWS – allows any developer to add natural language processing to their applications without requiring any machine learning skills.