At: Lehigh University
Location: Bethlehem, PA
Web: www1.lehigh.edu
Position: Tenure Track Positions in Foundations of Data Science
Document worth reading: “Resource Management in Fog/Edge Computing: A Survey”
In contrast to using distant, centralized cloud data center resources, employing decentralized resources at the edge of a network to process data closer to user devices, such as smartphones and tablets, is an emerging computing paradigm referred to as fog/edge computing. Fog/edge resources are typically resource-constrained, heterogeneous, and dynamic compared to the cloud, making resource management an important challenge that needs to be addressed. This article reviews publications dating back to 1991, with 85% published between 2013 and 2018, to identify and classify the architectures, infrastructure, and underlying algorithms for managing resources in fog/edge computing. Resource Management in Fog/Edge Computing: A Survey
Site Migration
Distilled News
A comparison of machine learning classifiers for energy-efficient implementation of seizure detection
Introduction to Deep Learning with Keras
By Derrick Mwiti, Data Analyst
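To give a flavor of what such an introduction typically covers, here is a minimal Keras model sketch. It is illustrative only and not taken from the article; the toy data is a hypothetical stand-in for a real dataset.

```python
# A minimal Keras binary classifier (illustrative; not from the article).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data: 1000 samples, 20 features (hypothetical stand-in).
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype("float32")

model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(20,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability of the positive class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```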
If you did not already know
Calibrated Boosting-Forest
Excellent ranking power along with well-calibrated probability estimates is needed in many classification tasks. In this paper, we introduce a technique, Calibrated Boosting-Forest, that captures both. This novel technique is an ensemble of gradient boosting machines that can support both continuous and binary labels. While offering superior ranking power over any individual regression or classification model, Calibrated Boosting-Forest is able to preserve well-calibrated posterior probabilities. Along with these benefits, we provide an alternative to the tedious step of tuning gradient boosting machines. We demonstrate that tuning Calibrated Boosting-Forests can be reduced to a simple hyper-parameter selection. We further establish that increasing this hyper-parameter improves the ranking performance with diminishing returns. We examine the effectiveness of Calibrated Boosting-Forest on ligand-based virtual screening, where both continuous and binary labels are available, and compare the performance of Calibrated Boosting-Forest with logistic regression, gradient boosting machine, and deep learning. Calibrated Boosting-Forest achieved an approximately 4% improvement compared to a state-of-the-art deep learning model and has the potential to achieve an 8% improvement after tuning the single hyper-parameter. Moreover, it achieved around 98% improvement on probability quality measurement compared to the best individual gradient boosting machine. Calibrated Boosting-Forest offers a benchmark demonstration that, in the field of ligand-based virtual screening, deep learning is not the universally dominant machine learning model, and well-calibrated probabilities can better facilitate the virtual screening process. …
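As a rough sketch of the idea described in the abstract (my reading, not the authors' code): average the scores of gradient boosting models trained on the same labels, then calibrate the averaged score on held-out data. The scikit-learn components below are assumptions standing in for the paper's implementation.

```python
# Sketch: ensemble of GBMs + isotonic calibration (not the authors' code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

# Mix classification and regression GBMs, as the abstract says the ensemble
# supports both binary and continuous labels (0/1 targets for the regressor).
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
reg = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

def raw_score(X):
    # Average the two models' scores; clip keeps the regressor in [0, 1].
    return (clf.predict_proba(X)[:, 1] + np.clip(reg.predict(X), 0, 1)) / 2

# Isotonic calibration on held-out data gives well-calibrated probabilities.
iso = IsotonicRegression(out_of_bounds="clip").fit(raw_score(X_cal), y_cal)
calibrated_probs = iso.predict(raw_score(X_cal))
```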
How do I visualise the results of a Bayesian Model: Rugby models in Arviz
I’ve recently been playing around with ‘arviz’. For those of you who don’t know, ArviZ is a library for exploratory analysis of Bayesian models.
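A minimal ArviZ sketch along these lines: the library ships a bundled ‘rugby’ example dataset (fetched on first use) that matches the post’s theme, though the exact variable names used here (e.g. “atts”) are assumptions about that dataset rather than taken from the post.

```python
# Minimal ArviZ exploration of a fitted Bayesian model (illustrative sketch).
import arviz as az

idata = az.load_arviz_data("rugby")        # posterior samples as InferenceData
az.plot_trace(idata, var_names=["atts"])   # trace + density plots per team ("atts" assumed)
az.plot_forest(idata, var_names=["atts"])  # compare team-level effects at a glance
```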
R Packages worth a look
Bayesian Meta-Analysis via ‘Stan’ (MetaStan)
Performs Bayesian meta-analysis using ‘Stan’. Includes binomial-normal hierarchical models and an option to use weakly informative priors for the heteroge …
Bootstrap Testing with MCHT
Introduction
Now that we’ve seen MCHT basics, how to make MCHTest() objects self-contained, and maximized Monte Carlo (MMC) testing with MCHT, let’s talk about bootstrap testing. Not much changes when we do bootstrap testing; the main difference is that the replicates used to generate test statistics depend on the data we feed to the test, and thus are not completely independent of it. You can read more about bootstrap testing in [1].
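MCHT itself is an R package, but the dependence of replicates on the observed data is easy to see in a generic sketch. The following Python example (my illustration, not MCHT’s API) bootstraps a one-sample test of the mean: the null distribution is built by resampling the observed data after re-centering it to satisfy the null.

```python
# Generic bootstrap hypothesis test (illustration of the idea, not MCHT).
import numpy as np

def bootstrap_pvalue(x, mu0=0.0, n_boot=1000, seed=0):
    """One-sample bootstrap test of H0: mean == mu0 via a t-like statistic."""
    rng = np.random.default_rng(seed)
    t_obs = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(len(x)))
    # Resample from the data re-centered to satisfy the null, so the replicate
    # distribution depends on the observed sample rather than being independent of it.
    centered = x - x.mean() + mu0
    t_boot = np.empty(n_boot)
    for i in range(n_boot):
        xb = rng.choice(centered, size=len(x), replace=True)
        t_boot[i] = (xb.mean() - mu0) / (xb.std(ddof=1) / np.sqrt(len(xb)))
    # Two-sided p-value: fraction of replicates at least as extreme as observed.
    return np.mean(np.abs(t_boot) >= np.abs(t_obs))

p = bootstrap_pvalue(np.random.default_rng(1).normal(0.3, 1.0, size=50))
```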
Amazing consistency: Largest Dataset Analyzed / Data Mined – Poll Results and Trends
What was the largest dataset you analyzed / data mined?