We are very excited to announce that you can deploy your computer vision model trained using TensorFlow (version 1.4) to AWS DeepLens. Head pose detection is part of the AWS DeepLens sample projects. In this blog post, we will show you how to train a model from scratch using an Amazon SageMaker P2 training instance. We will use a ResNet-50 model and save the trained model in the “frozen” protobuf format. Although TensorFlow offers a variety of formats for saving a model graph (such as checkpoint, .ckpt-XXX.meta, .ckpt-XXX.index, .ckpt-XXX.data-00000-of-00001, .pbtxt, optimized protobuf, frozen protobuf, etc.), the AWS DeepLens model optimizer supports only the frozen protobuf format.
It should be ok to just publish the data.
Gur Huberman asked for my reaction to a recent manuscript, “Are CEOs Different? Characteristics of Top Managers,” by Steven Kaplan and Morten Sorensen. The paper begins:
Cool tennis-tracking app
R Packages worth a look
A Clean API for Lazy and Non-Standard Evaluation (nseval): Facilities to capture, inspect, manipulate, and create lazy values (promises), ‘…’ lists, and active calls.
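The blurb above names the ideas nseval works with but not its functions, so the short base-R sketch below only illustrates those concepts (capturing an unevaluated argument, collecting a ‘…’ list, and creating a lazy promise); the helpers capture_expr and capture_dots are hypothetical names for this illustration, not part of nseval's API.

# Base-R illustration of lazy and non-standard evaluation, not nseval's own API.
capture_expr <- function(x) substitute(x)                        # grab the caller's expression, unevaluated
capture_dots <- function(...) as.list(substitute(list(...)))[-1] # collect '...' as unevaluated expressions

capture_expr(a + b)          # returns the call `a + b`, not its value
capture_dots(a + b, log(y))  # a list of two unevaluated expressions

# A promise: `z` is bound lazily and evaluated only when first used.
delayedAssign("z", { message("evaluating now"); 42 })
z                            # forces evaluation; prints the message, then returns 42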
TINT uses Amazon Comprehend to find and aggregate the best social media content for customers
TINT is a simple, DIY platform that helps brands find, curate, and display their most effective customer-generated content from social media on marketing channels such as websites, mobile apps, and event displays. Businesses can link their Twitter, YouTube, Pinterest, Instagram, Facebook, and RSS feeds to their TINT accounts.
It was the weeds that bothered him.
Bill Jefferys points to this news article by Denise Grady. Bill noticed the following bit, “In male rats, the studies linked tumors in the heart to high exposure to radiation from the phones. But that problem did not occur in female rats, or any mice,” and asked:
Building a Linear Regression Model for Real World Problems, in R
Distill Update 2018
Document worth reading: “Does putting your emotions into words make you feel better? Measuring the minute-scale dynamics of emotions from online data”
Studies of affect labeling, i.e. putting your feelings into words, indicate that it can attenuate positive and negative emotions. Here we track the evolution of individual emotions for tens of thousands of Twitter users by analyzing the emotional content of their tweets before and after they explicitly report having a strong emotion. Our results reveal how emotions and their expression evolve at the temporal resolution of one minute. While the expression of positive emotions is preceded by a short but steep increase in positive valence and followed by short decay to normal levels, negative emotions build up more slowly, followed by a sharp reversal to previous levels, matching earlier findings of the attenuating effects of affect labeling. We estimate that positive and negative emotions last approximately 1.25 and 1.5 hours from onset to evanescence. A separate analysis for male and female subjects is suggestive of possible gender-specific differences in emotional dynamics.
data.table is Really Good at Sorting
The data.table R package is really good at sorting. Below is a comparison of it versus dplyr for a range of problem sizes.
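The benchmark chart from the original post is not reproduced in this excerpt, so the sketch below only shows the kind of comparison it describes at a single problem size: sorting one numeric column with data.table versus dplyr. The column name x, the row count n, and the use of the microbenchmark package are illustrative choices, not details taken from the post.

library(data.table)
library(dplyr)
library(microbenchmark)

n  <- 1e6                        # one example problem size
df <- data.frame(x = runif(n))
dt <- as.data.table(df)

microbenchmark(
  data.table = dt[order(x)],     # data.table optimizes order() to its fast radix sort
  dplyr      = arrange(df, x),   # dplyr's arrange()
  times      = 10
)

Repeating the timing over several values of n gives the “range of problem sizes” comparison the post refers to.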