Distilled News

Improving Neural Networks – Hyperparameter Tuning, Regularization, and More

Building that first model – isn’t that what we strive for in the deep learning field? That feeling of euphoria when we see our model running successfully is unparalleled. But the journey doesn’t stop there. How can we improve the accuracy of the model? Is there any way to speed up the training process? These are critical questions to ask, whether you’re in a hackathon setting or working on a client project. And these aspects become even more prominent when you’ve built a deep neural network. Techniques like hyperparameter tuning, regularization, and batch normalization come to the fore during this process. This is part 2 of the deeplearning.ai course (deep learning specialization) taught by the great Andrew Ng. We covered the basics of neural networks and how to implement them in part 1, and I recommend going through that if you need a quick refresher. In this article, we will explore the inner workings of these neural networks, including how we can improve their performance and reduce the overall training time. These techniques have helped data scientists climb machine learning competition leaderboards (among other things) and earn top accolades. Yes, these concepts are invaluable!
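
The course itself walks through these techniques in detail; purely as a hedged illustration (not the course's implementation), here is a minimal PyTorch sketch of a feed-forward network that applies two of the techniques mentioned above, batch normalization and dropout regularization, with arbitrary placeholder layer sizes:

```python
import torch.nn as nn

# Minimal sketch only: layer sizes and dropout rate are illustrative placeholders.
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.BatchNorm1d(128),   # batch normalization: stabilizes and speeds up training
    nn.ReLU(),
    nn.Dropout(p=0.5),     # regularization: randomly zeroes activations during training
    nn.Linear(128, 10),
)
```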

SSpace: A Toolbox for State Space Modeling

SSpace is a MATLAB toolbox for state space modeling. State space modeling is in itself a powerful and flexible framework for dynamic system modeling, and SSpace is conceived in a way that tries to maximize this flexibility. One of its most salient features is that users implement their models by coding a MATLAB function. In this way, users have complete flexibility when specifying their systems and absolute control over parameterizations, constraints among parameters, and so on. In addition, the toolbox offers ways to implement non-standard models, or standard models with non-standard extensions, such as heteroskedasticity, time-varying parameters, arbitrary nonlinear relations with inputs, and transfer functions, without needing to write out the state space form explicitly. The toolbox may be used to build state space systems from scratch, but it is supplied with a number of templates for standard, widespread models. A full help system and documentation are provided. The way the toolbox is built allows for extensions in many directions, and an online forum has been launched to fuel such extensions and discussions.

RATest. A Randomization Tests package is available on CRAN

This blog post introduces the RATest package, which I released a while back on CRAN with my colleague and good friend Mauricio Olivares-Gonzalez. The package contains a collection of randomization tests, data sets, and examples. The current version focuses on two testing problems and their implementation in empirical work, mostly related to economics. First, it allows the empirical researcher to test particular hypotheses, such as comparisons of means, medians, and variances from k populations, using robust permutation tests whose asymptotic validity holds under very weak assumptions, while retaining the exact rejection probability in finite samples when the underlying distributions are identical. Second, it implements the Canay and Kamat (2017) permutation test for the continuity assumption of the baseline covariates in the sharp regression discontinuity design (RDD). In this post we’ll focus on the implementation of the Canay and Kamat (2017) test, presenting a summarized version of the vignette that is available on CRAN. For Stata enthusiasts, there is an implementation of this test in Stata; see RDPerm.
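
RATest is an R package, so the code in the post and vignette is R; purely as a language-agnostic illustration of the underlying idea of a permutation test, here is a minimal Python sketch of a two-sample permutation test for a difference in means, with made-up sample data and an illustrative function name:

```python
import numpy as np

def permutation_test_mean_diff(x, y, n_perm=10000, seed=0):
    """Two-sample permutation test for a difference in means.

    Repeatedly shuffles the pooled sample, reassigns group labels, and
    compares the observed mean difference with the permutation
    distribution to obtain a p-value.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = np.mean(x) - np.mean(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = np.mean(pooled[:len(x)]) - np.mean(pooled[len(x):])
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)

# Example with two small, made-up samples.
x = np.array([2.1, 1.9, 2.4, 2.0, 2.2])
y = np.array([1.8, 2.3, 2.1, 1.7, 2.0])
print(permutation_test_mean_diff(x, y))
```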

A Two Stage Approach to Counting Dice Values with TensorFlow and PyTorch

In a previous blog post I discussed how I built a 12-class dice detector in TensorFlow using around 400 annotated images as training data. The goal of the model was to detect the presence of a dice face for a 6, 8, 10, or 12 sided die and then also determine what the face value was. Once this was done I could get the total value of the dice on the screen. This model did decently at the task, but had the issue of either not identifying dice faces or misclassifying the face values that it did detect. So at the end of that previous post I mentioned that another approach I would take to this problem is to build a first-stage object detector that specializes in just detecting dice faces, and then a second-stage CNN which uses the outputs from the first model to determine the numbers. While this adds complexity to the training and implementation pipelines, I felt that it could improve the overall performance.
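
The post describes the full pipeline; purely as a hedged sketch of the second-stage idea (crop each detected face, classify it with a small CNN, and sum the values), here is a minimal PyTorch illustration in which the box format, crop size, and class-to-value mapping are all assumptions rather than the author's code:

```python
import torch
import torch.nn as nn

class FaceValueCNN(nn.Module):
    """Small CNN that classifies a cropped dice-face image into one of
    n_values face values (the second stage of the pipeline)."""
    def __init__(self, n_values=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_values)

    def forward(self, x):            # x: (batch, 3, 64, 64) crops
        x = self.features(x)
        return self.classifier(x.flatten(1))

def total_dice_value(image, boxes, model):
    """Crop each detected face (first-stage output) out of a (3, H, W)
    image tensor, classify the crops, and sum the predicted values."""
    crops = []
    for x1, y1, x2, y2 in boxes:                      # hypothetical pixel-box format
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)    # (1, 3, h, w)
        crops.append(nn.functional.interpolate(crop, size=(64, 64)))
    logits = model(torch.cat(crops))
    return int((logits.argmax(dim=1) + 1).sum())      # assumes class k means face value k+1
```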

Is Artificial Intelligence Racist? (And Other Concerns)

When we think of concerns in artificial intelligence, the two most obvious are job loss and lethal autonomous weapons. While killer robots might be an actual threat in the future, the consequences of automation are a complicated phenomenon that experts are still actively analyzing. Very likely, as in any major industrial revolution, the market will gradually stabilize. Advances in technology will create new types of jobs, inconceivable at the moment, which will later be disrupted by the next major technology takeover. We have seen this multiple times in modern history, and we are probably going to see it again, and again.

The long tail of medical data

Power law distributions and computer-aided diagnosis

Automatic question answering

Querying information from structured and unstructured data has become very important. There is a lot of textual data: FAQs, newspapers, articles, documentation, user cases, customer service requests, etc. It is quite hard to keep all that information in mind. Who won the last football Euro Cup? What is Bitcoin? When is the birthday of a famous singer? To find such information, we need to search a document and spend a couple of minutes reading before we find the answer. Nazar Grycshuk, Data Scientist at Sigma Software and the author of this article, believes that an automatic QnA system could save the most valuable resource in our lives: time.

Feature Engineering: What Powers Machine Learning

How to Extract Features from Raw Data for Machine Learning.
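
The linked article covers the topic in depth; as a tiny, hedged illustration of the idea only (not taken from the article), here is a Python sketch that turns a raw timestamp and amount into model-ready features, using entirely made-up data:

```python
import numpy as np
import pandas as pd

# Hypothetical raw transaction data: timestamps and amounts.
raw = pd.DataFrame({
    "timestamp": pd.to_datetime(["2018-11-30 09:15", "2018-12-01 22:40", "2018-12-02 13:05"]),
    "amount": [12.5, 80.0, 45.3],
})

# Derive features a model can actually use from the raw columns.
features = pd.DataFrame({
    "hour": raw["timestamp"].dt.hour,
    "day_of_week": raw["timestamp"].dt.dayofweek,
    "is_weekend": raw["timestamp"].dt.dayofweek >= 5,
    "log_amount": np.log1p(raw["amount"]),
})
print(features)
```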

The Dunning-Kruger effect: When Data science becomes Data ‘sigh’ence

So you just read the Harvard article on how data science is the sexiest job of the 21st century and you are super excited about the field. After reading a few more articles, you figure it should be pretty easy. After all, in college you had an A in math and a B in statistics, and you had picked up the basics of programming as a kid. You put in some effort for a few weeks and even complete 80 percent of a 10-week course in 10 days. As you approach the end of the course, you realise there is more than you had ever imagined, and on a scale of 10 you rate yourself a 1.5. Then, weeks down the line, you hear about how tough it can be to get your first data science job. Frustration sets in and all the initial excitement dies down as you suddenly realise that data science is a broad and constantly evolving field. Now data science has become data ‘sigh’ence.

Tic Tac Toe – Creating Unbeatable AI with Minimax Algorithm

In today’s article, I am going to show you how to create an unbeatable AI agent that plays the classic Tic Tac Toe game. You will learn the concept of the Minimax algorithm, which is widely and successfully used across fields like Artificial Intelligence, Economics, Game Theory, Statistics, and even Philosophy.
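
The article builds this up step by step; as a self-contained, hedged sketch of the core idea only (not the article's implementation), here is a minimal Python minimax for Tic Tac Toe in which 'X' is the maximizing player and the board representation is an assumption:

```python
# Board: a list of 9 cells holding 'X', 'O', or None.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) from the point of view of 'X' (the maximizer)."""
    win = winner(board)
    if win == 'X':
        return 1, None
    if win == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None                       # draw
    best_move = None
    best_score = -2 if player == 'X' else 2
    for move in moves:
        board[move] = player                 # try the move
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[move] = None                   # undo it
        if player == 'X' and score > best_score:
            best_score, best_move = score, move
        elif player == 'O' and score < best_score:
            best_score, best_move = score, move
    return best_score, best_move

# 'X' to move and can win immediately by completing the top row.
board = ['X', 'X', None,
         'O', 'O', None,
         None, None, None]
print(minimax(board, 'X'))   # -> (1, 2): playing cell 2 wins the game
```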

HandySpark: bringing pandas-like capabilities to Spark DataFrames

HandySpark is a new Python package designed to improve the PySpark user experience, especially when it comes to exploratory data analysis, including visualization capabilities.

How to Use and Create a Z Table (Standard Normal Table)

To be able to utilize a z-score table and answer these questions, you have to turn the scores on the different tests into a standard normal distribution N(mean = 0, std = 1). Since the scores on these tests are normally distributed, we can convert each of them to a standard normal distribution using the z-score formula z = (x - mean) / std, that is, subtract the mean and divide by the standard deviation.
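
As a quick worked illustration (the numbers below are made up, not from the article), the same lookup a printed z table provides can be reproduced in Python with scipy:

```python
from scipy.stats import norm

# Hypothetical test: mean 70, standard deviation 10, observed score 85.
x, mu, sigma = 85, 70, 10

z = (x - mu) / sigma          # standardize: z = (x - mean) / std
p = norm.cdf(z)               # same value a z table gives for P(Z <= z)

print(f"z = {z:.2f}, P(Z <= z) = {p:.4f}")   # z = 1.50, P(Z <= z) ~ 0.9332
```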

The Ultimate Meta-Learning Challenge: OpenAI Teaches Agents to Learn Concepts from Experience

The ability to generalize concepts from experience is one of the foundational hallmarks of human intelligence. From the time we are babies, we constantly formulate new concepts based on limited experiences, and over time these become the foundation of our knowledge across different subjects. Many of the mysteries of human intelligence, such as creative problem solving or abstract reasoning, have their roots in abstract concept formulation. Is it possible to recreate this neuroscientific miracle in artificial intelligence (AI) agents? Recently, researchers from OpenAI published a paper proposing a technique for concept learning based on a deep learning method known as energy functions.

Data Visualization using Matplotlib

Data visualization is an important part of business activities, as organizations nowadays collect a huge amount of data. Sensors all over the world are collecting climate data, user data through clicks, car data for predicting steering, and more. All of this collected data holds key insights for businesses, and visualizations make those insights easy to interpret.
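
As a minimal, hedged illustration of the kind of chart the article builds (the data below is entirely made up), here is a basic matplotlib line plot:

```python
import matplotlib.pyplot as plt
import numpy as np

# Made-up hourly sensor readings for illustration only.
hours = np.arange(24)
temperature = 20 + 5 * np.sin(hours / 24 * 2 * np.pi) \
              + np.random.default_rng(0).normal(0, 0.5, 24)

plt.plot(hours, temperature, marker="o")
plt.xlabel("Hour of day")
plt.ylabel("Temperature (°C)")
plt.title("Hourly sensor readings")
plt.show()
```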

Variational Autoencoders Explained in Detail

In the previous post of this series I introduced the Variational Autoencoder (VAE) framework and explained the theory behind it. In this post I’ll explain the VAE in more detail, or in other words, I’ll provide some code 🙂 After reading this post, you’ll understand the technical details needed to implement a VAE. As a bonus, I’ll show you how, by imposing a special role on some of the latent vector’s dimensions, the model can generate images conditioned on the digit type.
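
The post supplies its own implementation; purely as a hedged reminder of the two pieces every VAE implementation needs, here is a minimal PyTorch sketch of the reparameterization trick and the VAE loss (reconstruction plus KL divergence), with illustrative names:

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) in a differentiable way:
    z = mu + sigma * eps, with eps ~ N(0, I)."""
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    return mu + std * eps

def vae_loss(x_recon, x, mu, log_var):
    """Reconstruction term (binary cross-entropy for pixel values in [0, 1])
    plus the closed-form KL divergence between N(mu, sigma^2) and N(0, I)."""
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl
```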

Single-shot Person Pose Estimation and Instance Segmentation, Part 1

In this article, we are going to talk about a breakthrough in AI that may change your life as a developer and consumer.

Approaches to Multi-label Classification

Multi-label classification for an image deals with the situation where an image can belong to more than one class. For example, the image in the original article has a train, a woman, a girl, and a Jacuzzi all in the same photo.
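
As a hedged sketch of how this differs from ordinary multi-class classification (independent per-class sigmoids with binary cross-entropy instead of a single softmax), here is a minimal PyTorch illustration with made-up labels:

```python
import torch
import torch.nn as nn

# Each class gets an independent probability, so an image can carry several labels.
n_classes = 4                                # e.g. train, woman, girl, jacuzzi
logits = torch.randn(2, n_classes)           # raw scores for a batch of 2 images
targets = torch.tensor([[1., 1., 1., 1.],    # image containing all four objects
                        [0., 1., 0., 0.]])   # image containing only a woman

loss = nn.BCEWithLogitsLoss()(logits, targets)   # binary cross-entropy per label
probs = torch.sigmoid(logits)                    # independent per-class probabilities
predicted = probs > 0.5                          # multiple labels may be active at once
print(loss.item(), predicted)
```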

Seq2SQL: Generating Structured Queries

Relational databases store a significant amount of the world’s data. However, accessing this data currently requires users to understand a query language such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model uses rewards from in-the-loop query execution over the database to learn a policy to generate the query, which contains unordered parts that are less suitable for optimization via cross entropy loss. Moreover, Seq2SQL leverages the structure of SQL to prune the space of generated queries and significantly simplify the generation problem. In addition to the model, we release WikiSQL, a dataset of 80,654 hand-annotated examples of questions and SQL queries distributed across 24,241 tables from Wikipedia that is an order of magnitude larger than comparable datasets. By applying policy-based reinforcement learning with a query execution environment to WikiSQL, Seq2SQL outperforms a state-of-the-art semantic parser, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%.

The Future of Visualization for Business Intelligence

The future of visualization for BI applications may not be visualization at all. Most companies have created sophisticated Business Intelligence systems, complete with thematic and role-based dashboards. These visualizations should, in theory, provide the company’s executives with a snapshot of the past, present, and future of the company. But executives hardly open these dashboards. And even if they do, they can seldom go beyond a post-mortem of an earlier period. The challenge for the executive is to know what to look for. Modern BI systems allow one to slice and dice data across various dimensions and levels of granularity. So a diligent executive would have to look at all possible combinations to understand the true picture of the organization. This task is difficult, if not impossible, to do.
