The Developer Coefficient
While businesses today face myriad issues – security vulnerabilities, trade tariffs, complex government regulations, increased global competition – how they deploy their developers may be the single biggest factor impacting their future success. Developers act as force multipliers and, if used effectively, have the collective potential to raise global GDP by $3 trillion over the next ten years. While many people posit that a lack of developers is the primary problem, this study – which surveyed thousands of C-level executives and developers across five different countries – found that businesses need to better leverage their existing software engineering talent if they want to move faster, build new products, and tap into new and emerging trends. Senior executives report that the lack of quality developer talent is one of the biggest potential threats to their businesses; they now worry about access to skilled developers more than they worry about access to capital, immigration concerns, and other challenges. Even as the number of developers increases year-over-year at most companies, it is the best developers working on the right things who can accelerate a company's move into new markets or product areas and help companies differentiate themselves at disproportionate rates. This underscores the most important point about developers as force multipliers: it's not how many devs companies have; it's how they're being leveraged.
10 Bits: the Data News Hotlist
This week's list of data news highlights covers September 8-14, 2018, and includes articles about an AI system that can identify images of child abuse and a new search engine for open data.

1. Teaching Machines to Figure Out New Objects
2. Taming Scientific Literature with AI
3. AI Learns to Predict Aneurysm Risk
4. Facebook is Using AI to Understand Memes
5. AI Can Hear Depression In Your Voice
6. This App Can Help You Figure Out What That Animal Is
7. AI Can Create New Video Games
8. AI Learns to Track Images Over Time
9. AI Isn't Saying It's Aliens, but…
10. Predicting Radiation Treatments with AI
Monotonicity constraints in machine learning
In practical machine learning and data science tasks, an ML model is often used to quantify a global, semantically meaningful relationship between two or more values. For example, a hotel chain might want to use ML to optimize its pricing strategy, using a model to estimate the likelihood of a room being booked at a given price and day of the week. For a relationship like this, the assumption is that, all other things being equal, a cheaper price is preferred by a user, so demand is higher at a lower price. However, what can easily happen is that, upon building the model, the data scientist discovers that it behaves unexpectedly: for example, the model predicts that on Tuesdays clients would rather pay $110 than $100 for a room! The reason is that while there is an expected monotonic relationship between price and the likelihood of booking, the model is unable to (fully) capture it due to noise and confounds in the data.
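Gradient-boosting libraries expose this idea directly. As a minimal sketch (the data, feature layout, and parameter values here are invented for illustration, not taken from the article), XGBoost's monotone_constraints parameter can force the predicted booking likelihood to be non-increasing in price:

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 5000
price = rng.uniform(80, 160, n)      # hypothetical room price
dow = rng.integers(0, 7, n)          # day of week, 0 = Monday

# True booking probability falls with price, with a mild weekend bump
p_book = np.clip(1 / (1 + np.exp(0.06 * (price - 120))) + 0.02 * (dow >= 5), 0, 1)
booked = (rng.random(n) < p_book).astype(int)

X = np.c_[price, dow]
# -1: prediction must be non-increasing in feature 0 (price);
#  0: no constraint on feature 1 (day of week)
model = XGBClassifier(monotone_constraints="(-1,0)", n_estimators=200)
model.fit(X, booked)
```

With the constraint in place, the fitted model can no longer predict a higher booking likelihood at $110 than at $100 for the same day, regardless of noise in the training data.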
Cognitive bias cheat sheet
I've spent many years referencing Wikipedia's list of cognitive biases whenever I have a hunch that a certain type of thinking is an official bias but I can't recall the name or details. It's been an invaluable reference for helping me identify the hidden flaws in my own thinking. Nothing else I've come across seems to be both as comprehensive and as succinct.
You Aren’t So Smart: Cognitive Biases are Making Sure of It
Cognitive biases are tendencies to think in certain ways that can lead to systematic deviations from a standard of rationality or good judgment. They have all sorts of practical impacts on our lives, whether we want to admit it or not.
Machine learning in the cloud
Hagay Lupesko explores key trends in machine learning, the importance of designing models for scale, and the impact that machine learning innovation has had on startups and enterprises alike.
Data visualization with statistical reasoning: seeing uncertainty with the bootstrap
One of the most common concerns that I hear from dataviz people is that they need to visualise not just a best estimate about the behaviour of their data, but also the uncertainty around that estimate. Sometimes, the estimate is a statistic, like the risk of side effects of a particular drug, or the percentage of voters intending to back a given candidate. Sometimes, it is a prediction of future data, and sometimes it is a more esoteric parameter in a statistical model. The objective is always the same: if they just show a best estimate, some readers may conclude that it is known with 100% certainty, and generally that's not the case.
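As a minimal illustration of the bootstrap idea (with made-up data, not the article's): resample the data with replacement, recompute the statistic each time, and read an uncertainty interval off the resulting distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=200)  # hypothetical sample

# Bootstrap: resample with replacement, recompute the statistic each time
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(5000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {data.mean():.2f}, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
```

The spread of the bootstrap distribution, not just its centre, is what the visualisation should convey.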
Why Vectorize?
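The question in the title is usually answered with a comparison like the following (my own sketch, not taken from the linked post): whole-array operations push the loop into optimized native code instead of the Python interpreter.

```python
import time
import numpy as np

x = np.random.rand(1_000_000)

# Element-by-element Python loop: interpreter overhead on every iteration
t0 = time.perf_counter()
total = 0.0
for v in x:
    total += v * 2.0
loop_time = time.perf_counter() - t0

# Vectorized: one NumPy call, the loop runs in compiled C
t0 = time.perf_counter()
total_vec = (x * 2.0).sum()
vec_time = time.perf_counter() - t0

print(f"loop: {loop_time:.3f}s, vectorized: {vec_time:.4f}s")
```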
How to create a sequential model in Keras for R
This tutorial will introduce the Deep Learning classification task with Keras. We will particularly focus on the shape of the arrays, which is one of the most common pitfalls.
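The tutorial targets Keras for R, but the array-shape pitfall it highlights is the same in any Keras frontend. As a rough Python sketch of the same idea (the data and layer sizes here are hypothetical): note that input_shape describes one sample and excludes the batch dimension.

```python
import numpy as np
from tensorflow import keras

# Hypothetical data: 1000 samples, 20 features, 3 classes
x = np.random.rand(1000, 20).astype("float32")
y = keras.utils.to_categorical(np.random.randint(0, 3, size=(1000,)), num_classes=3)

model = keras.Sequential([
    # input_shape is (20,), not (1000, 20): the batch dimension is implicit
    keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=32)
```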
How to generate a lot more leads and reduce costs? A/B Testing vs Reinforcement Learning.
Marketers often use A/B testing, a technique of comparing two or more versions of a product (ad, design, landing page, etc.) to assess which of them performs better. But what if you could achieve the same goal with much better results and lower costs? In this article I will tell you about Reinforcement Learning.
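To make the contrast concrete: where an A/B test splits traffic evenly for the whole experiment, a bandit-style reinforcement learner shifts traffic toward the winner as evidence accumulates. A minimal Thompson-sampling sketch (the conversion rates and counts are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
true_ctr = np.array([0.04, 0.05, 0.07])  # hypothetical conversion rates

# Thompson sampling: keep a Beta posterior per variant, serve the variant
# whose sampled conversion rate is highest, update with the observed outcome
successes = np.ones(3)
failures = np.ones(3)
for _ in range(10_000):
    arm = int(np.argmax(rng.beta(successes, failures)))
    reward = rng.random() < true_ctr[arm]
    successes[arm] += reward
    failures[arm] += 1 - reward

pulls = successes + failures - 2  # subtract the Beta(1, 1) prior
print("impressions per variant:", pulls.astype(int))
print("estimated rates:", (successes / (successes + failures)).round(3))
```

Most impressions end up on the best variant long before a fixed-split A/B test would have finished, which is where the cost reduction comes from.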
Complete guide to Association Rules (2/2)
In this blog, I will discuss the algorithms that enable efficient extraction of association rules from a list of transactions. Part 1 of this blog covers the terminology and concepts that form the foundation of association rule mining; the motivation behind the whole concept and the meaning of some basic terms are explained there. I highly recommend going through part 1 to make the most of part 2. However, here are very brief definitions of some terms from the previous part.
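For readers who want the two core terms in code form, here is a small sketch (with a toy transaction list of my own, not the article's data) of how the support and confidence of a rule are computed:

```python
# Toy transaction list (invented for illustration)
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer", "eggs"},
    {"milk", "diapers", "beer", "cola"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "cola"},
]

def support(itemset):
    # Fraction of transactions that contain every item in the itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

# Rule {diapers} -> {beer}
antecedent, consequent = {"diapers"}, {"beer"}
confidence = support(antecedent | consequent) / support(antecedent)
print(f"support = {support(antecedent | consequent):.2f}")  # 0.60
print(f"confidence = {confidence:.2f}")                     # 0.75
```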
What’s New in Deep Learning Research: How DeepMind Builds Multi-Task Reinforcement Learning…
Reinforcement learning (RL) has been at the center of some of the most publicized milestones of artificial intelligence (AI) in the last few years. From systems like AlphaGo to the recent progress on multi-player games such as OpenAI Five or DeepMind's Quake III, RL has shown incredible progress mastering complex domains. Despite the impressive results, most widely adopted RL algorithms focus on learning a single task and face many challenges when used in multi-task environments. Recently, researchers from Alphabet's subsidiary DeepMind published a paper in which they proposed a new method called PopArt to improve RL in multi-task environments.
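The core of PopArt (as introduced in "Learning values across many orders of magnitude"; this sketch is my reading of that idea, not DeepMind's code) is to normalize value targets adaptively, which lets tasks with wildly different reward scales share one network, while rescaling the output layer so predictions are preserved exactly across each normalization update:

```python
import numpy as np

class PopArtHead:
    # Sketch of PopArt's "preserve outputs precisely" trick for one task:
    # adaptively rescale value targets, then adjust the output layer so the
    # unnormalized predictions are unchanged by the rescaling.
    def __init__(self, n_features, beta=3e-4):
        self.mu, self.nu = 0.0, 1.0            # running 1st/2nd moments of returns
        self.beta = beta
        self.w = np.random.randn(n_features) * 0.1  # hypothetical output layer
        self.b = 0.0

    @property
    def sigma(self):
        return np.sqrt(max(self.nu - self.mu ** 2, 1e-4))

    def observe(self, target):
        mu_old, sigma_old = self.mu, self.sigma
        self.mu = (1 - self.beta) * self.mu + self.beta * target
        self.nu = (1 - self.beta) * self.nu + self.beta * target ** 2
        # Rescale weights so sigma * (w @ x + b) + mu is identical
        # before and after the statistics update
        self.w *= sigma_old / self.sigma
        self.b = (sigma_old * self.b + mu_old - self.mu) / self.sigma

    def value(self, x):
        return self.sigma * (self.w @ x + self.b) + self.mu  # unnormalized value
```

In the multi-task setting, one such head (with its own statistics) is kept per task, which is what stops a high-reward task from dominating the shared gradients; the gradient step on the normalized head is omitted here.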
Basic concepts of neural networks
After presenting in two previous posts (post 1, post 2) the factors that have contributed to unleashing the potential of Artificial Intelligence and related technologies such as Deep Learning, it is now time to start reviewing the basic concepts of neural networks.
“Deep” Independent Component Analysis in Tensorflow
We can capture local changes using Independent Component Analysis (ICA); however, the image data we encounter in real life lives in a very high-dimensional space. I wanted to see whether we could perform ICA in combination with deep learning.
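For context on the classical, non-deep version of the technique, here is a minimal sketch using scikit-learn's FastICA to unmix two invented signal sources (my example, not the article's TensorFlow approach):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1, s2 = np.sin(2 * t), np.sign(np.sin(3 * t))        # independent sources
S = np.c_[s1, s2] + 0.05 * rng.normal(size=(2000, 2))  # add a little noise
A = np.array([[1.0, 0.5], [0.5, 2.0]])                 # mixing matrix
X = S @ A.T                                            # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)  # recovered sources (up to sign/scale/order)
```

The article's point is that this linear unmixing breaks down in high-dimensional image space, which is what motivates combining it with a deep network.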
A Density-based algorithm for outlier detection
Outlier detection (also known as anomaly detection) is the process of finding data objects with behaviors that are very different from expectation. Such objects are called outliers or anomalies. The most interesting objects are those that deviate significantly from normal objects; outliers are not generated by the same mechanism as the rest of the data.
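As a minimal sketch of density-based outlier detection (using scikit-learn's Local Outlier Factor, one common density-based method, on invented data; the article's specific algorithm may differ):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(1)
inliers = rng.normal(0, 1, size=(100, 2))     # dense cluster of normal points
outliers = rng.uniform(-6, 6, size=(5, 2))    # hypothetical anomalies
X = np.vstack([inliers, outliers])

# Points in regions much less dense than their neighbors' get flagged
lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)                   # -1 flags outliers
print("flagged as outliers:", np.where(labels == -1)[0])
```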
Explore and get value out of your raw data: An Introduction to Splunk
You just got your hands on some raw data files (JSON, CSV, etc.). What happens now? How do you make sense of them? You open a console and start using less, grep, jq, and other tools. That's great at the start, but it quickly becomes complex and hard to do anything beyond the basics. Does this sound familiar? Great! Keep reading and learn how Splunk can help you out.
Dive into PCA (Principal Component Analysis) with Python
Stuck in a sea of variables when analyzing your data? Feeling lost deciding which features to choose so that your model is safe from overfitting? Is there any way to reduce the dimension of the feature space? Well, PCA can surely help you.
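A minimal sketch of the workflow the article covers (using scikit-learn and the standard Iris dataset as a stand-in, and standardizing first because PCA is scale-sensitive):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = load_iris().data                        # 4 features per sample
X_std = StandardScaler().fit_transform(X)   # zero mean, unit variance

pca = PCA(n_components=2)                   # project 4 dimensions down to 2
X_2d = pca.fit_transform(X_std)
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))
```

The explained variance ratio tells you how much of the original spread the retained components capture, which is the usual guide for choosing how many dimensions to keep.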