Sequence Modeling with Neural Networks – Part I
Guest blog post by Zied HY. Zied is Senior Data Scientist at Capgemini Consulting. He specializes in building predictive models using both traditional statistical methods (Generalized Linear Models, Mixed Effects Models, Ridge, Lasso, etc.) and modern machine learning techniques (XGBoost, Random Forests, Kernel Methods, neural networks, etc.). Zied runs workshops for university students (ESSEC, HEC, Ecole polytechnique) interested in Data Science and its applications, and he is the co-founder of Global International Trading (GIT), a central purchasing office based in Paris.
Using Confusion Matrices to Quantify the Cost of Being Wrong
There are so many confusing and sometimes even counter-intuitive concepts in statistics. I mean, come on…even explaining the differences between Null Hypothesis and Alternative Hypothesis can be an ordeal. All I want to do is to understand and quantify the cost of my analytical models being wrong.
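As a quick illustration of what "quantifying the cost of being wrong" can look like, here is a minimal Python sketch that weights a binary confusion matrix by a hypothetical cost matrix; all of the counts and cost figures are invented for the example.

```python
import numpy as np

# Confusion matrix layout (binary case):
# rows = actual class, columns = predicted class
#                  pred: negative  positive
conf_matrix = np.array([[850, 50],    # actual negative: TN, FP
                        [30, 70]])    # actual positive: FN, TP

# Hypothetical business costs of each outcome (illustrative only):
# here a false negative hurts much more than a false positive.
cost_matrix = np.array([[0, 10],      # TN costs 0, FP costs 10
                        [100, 0]])    # FN costs 100, TP costs 0

total_cost = np.sum(conf_matrix * cost_matrix)
avg_cost_per_prediction = total_cost / conf_matrix.sum()

print(f"Total cost of being wrong: {total_cost}")
print(f"Average cost per prediction: {avg_cost_per_prediction:.2f}")
```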
Convolutional Neural Network – In a Nut Shell
In a regular neural network, the input is transformed through a series of hidden layers, each containing multiple neurons. Each neuron is connected to all the neurons in the previous and the following layers. This arrangement is called a fully connected layer, and the last layer is the output layer. In Computer Vision applications where the input is an image, we use convolutional neural networks because regular fully connected networks don't work well: if every pixel of the image is an input, the number of parameters grows very quickly as we add more layers.
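To make the parameter blow-up concrete, here is a small back-of-the-envelope Python sketch comparing a fully connected layer with a convolutional layer on a hypothetical 224x224 RGB image; the layer sizes are assumptions chosen only for illustration.

```python
# Rough parameter-count comparison for a hypothetical 224x224 RGB input.
height, width, channels = 224, 224, 3
n_pixels = height * width * channels             # 150,528 input values

# Fully connected layer: every input connects to every neuron.
n_hidden = 1000
fc_params = n_pixels * n_hidden + n_hidden       # weights + biases
print(f"Fully connected layer: {fc_params:,} parameters")   # ~150 million

# Convolutional layer: small filters shared across the whole image.
n_filters, kernel = 64, 3
conv_params = n_filters * (kernel * kernel * channels) + n_filters
print(f"Conv layer (64 3x3 filters): {conv_params:,} parameters")  # 1,792
```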
Introducing VisualData
Computer vision is without doubt going to change almost every aspect of how machines interact with us and our environment in the near future. It is still a young field (it originated in the 1960s), but with recent advances in machine learning the technology is starting to work for the first time in its history. Applications like self-driving cars, robotics, and VR/AR have motivated people to enter the field and apply the technology to much broader areas. It has become one of the most active subfields of Artificial Intelligence.
Unfolding Naïve Bayes from Scratch!
Whether you are a beginner in Machine Learning, or you have been trying hard to understand seemingly supernatural machine learning algorithms and still feel that the dots do not quite connect, this post is definitely for you!
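The post walks through Naive Bayes step by step; as a rough companion (not the author's code), here is a minimal Gaussian Naive Bayes written from scratch in Python, with made-up data for the usage example.

```python
import numpy as np

class SimpleGaussianNB:
    """A bare-bones Gaussian Naive Bayes classifier, for illustration only."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = {c: np.mean(y == c) for c in self.classes_}
        self.means_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        self.vars_ = {c: X[y == c].var(axis=0) + 1e-9 for c in self.classes_}
        return self

    def _log_likelihood(self, X, c):
        # Log Gaussian density, summed over (conditionally independent) features.
        mean, var = self.means_[c], self.vars_[c]
        return np.sum(-0.5 * np.log(2 * np.pi * var)
                      - (X - mean) ** 2 / (2 * var), axis=1)

    def predict(self, X):
        # Pick the class with the highest log prior + log likelihood.
        scores = np.column_stack([np.log(self.priors_[c]) + self._log_likelihood(X, c)
                                  for c in self.classes_])
        return self.classes_[np.argmax(scores, axis=1)]

# Tiny usage example with made-up data.
X = np.array([[1.0, 2.1], [1.2, 1.9], [3.0, 3.5], [3.2, 3.7]])
y = np.array([0, 0, 1, 1])
print(SimpleGaussianNB().fit(X, y).predict(np.array([[1.1, 2.0], [3.1, 3.6]])))
```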
Litigating Algorithms: Challenging Government use of Algorithmic Decision Systems
There is currently much debate over the use of algorithmic decision systems in our core social institutions. From criminal justice to health care to education and employment, we are seeing computational and predictive technologies deployed into or supplanting private and governmental decision-making procedures and processes. As a result, many advocates, academics, and policymakers have begun to raise concerns, urging adequate safeguards, oversight, appeal, and redress mechanisms for protecting vulnerable populations from harm. For example, New York City recently established the first city-wide Automated Decision System Task Force to study and recommend policies, practices, standards or other guidelines on the use of such systems across all of its public agencies.
R developer’s guide to Azure
If you want to run R in the cloud, you can of course run it in a virtual machine in the cloud provider of your choice. And you can do that in Azure too. But Azure provides seven dedicated services that provide the ability to run R code, and you can learn all about them in the new R Developer’s Guide to Azure at Microsoft Docs.
Building Online Interactive Simulators for Predictive Models in R
Correctly interpreting predictive models can be tricky. One solution to this problem is to create interactive simulators, where users can manipulate the predictor variables and see how the predictions change. This post describes a simple approach for creating online interactive simulators. It works for any model where there is a predict method. Better yet, if the model's not top secret, you can build and share the model for no cost, using the free version of Displayr!
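The linked post builds its simulators in R and Displayr; purely to illustrate the underlying idea that a simulator is a thin interactive layer over a model's predict method, here is a sketch in Python with a toy model and invented data.

```python
# Illustration only: any model with a predict method can be wrapped so users
# change the inputs and immediately see new predictions.
import numpy as np
from sklearn.linear_model import LinearRegression

# Fit a toy model on made-up data: price ~ size + age.
X = np.array([[50, 10], [80, 5], [120, 2], [65, 20]])
y = np.array([150, 260, 410, 170])
model = LinearRegression().fit(X, y)

def simulate(size, age):
    """One 'simulator' call: user-chosen inputs in, prediction out."""
    return float(model.predict(np.array([[size, age]]))[0])

# A UI layer (Displayr, Shiny, ipywidgets, ...) would call this on every change.
print(simulate(size=100, age=3))
print(simulate(size=100, age=15))
```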
Application of RNN for customer review sentiment analysis
In my previous blog post I wrote about using BeautifulSoup to scrape over two thousand Flixbus customer reviews and identify the company's strengths and weaknesses through NLP analysis. Building on that story, I decided to use the collected text data to train a Recurrent Neural Network model for predicting customers' sentiment, which proved highly effective, scoring 95.93% accuracy on the test set. What is sentiment analysis?
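The post's exact architecture and preprocessing are not reproduced here; as a hedged illustration, a typical Keras recurrent setup for binary sentiment classification on tokenized reviews might look like the following (the vocabulary size, sequence length, and dummy data are assumptions).

```python
# Not the author's exact model; a typical recurrent setup for binary
# sentiment classification on integer-encoded, padded reviews.
import numpy as np
import tensorflow as tf

vocab_size, max_len = 10_000, 200   # assumed preprocessing choices

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=64),
    tf.keras.layers.LSTM(64),                       # recurrent layer reads the sequence
    tf.keras.layers.Dense(1, activation="sigmoid")  # probability of positive sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy data standing in for real tokenized reviews and 0/1 sentiment labels.
X = np.random.randint(0, vocab_size, size=(32, max_len))
y = np.random.randint(0, 2, size=(32,))
model.fit(X, y, epochs=1, batch_size=8, verbose=0)
```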
DCGANs (Deep Convolutional Generative Adversarial Networks)
One of the most interesting parts of Generative Adversarial Networks is the design of the Generator network. The Generator network is able to take random noise and map it into images such that the discriminator cannot tell which images came from the dataset and which images came from the generator.
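To show what "mapping random noise into images" looks like in practice, here is a sketch of a DCGAN-style generator in PyTorch; the layer sizes and 64x64 output resolution are illustrative choices, not the configuration of any particular paper or post.

```python
# A DCGAN-style generator sketch: transposed convolutions upsample a random
# noise vector into a 64x64 RGB image (shapes are illustrative).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),     # 1x1  -> 4x4
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),  # 4x4  -> 8x8
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),  # 8x8  -> 16x16
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),      # 16x16 -> 32x32
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),             # 32x32 -> 64x64
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

noise = torch.randn(16, 100, 1, 1)   # a batch of random noise vectors
fake_images = Generator()(noise)     # shape: (16, 3, 64, 64)
print(fake_images.shape)
```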
Ridge and Lasso Regression: A Complete Guide with Python Scikit-Learn
Moving on from a very important unsupervised learning technique that I discussed last week, today we will dig deep into supervised learning through linear regression, specifically two special linear regression models: Lasso and Ridge regression.
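As a small preview of the kind of comparison the post covers, here is a minimal scikit-learn sketch fitting ordinary least squares, Ridge, and Lasso on a synthetic dataset; the alpha values are arbitrary and only illustrate how Lasso drives some coefficients to exactly zero.

```python
# Compare OLS, Ridge, and Lasso on a toy regression problem.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("OLS", LinearRegression()),
                    ("Ridge", Ridge(alpha=1.0)),    # alpha = regularization strength
                    ("Lasso", Lasso(alpha=1.0))]:
    model.fit(X_train, y_train)
    zero_coefs = sum(abs(c) < 1e-6 for c in model.coef_)
    print(f"{name:5s}  R^2 = {model.score(X_test, y_test):.3f}  "
          f"coefficients driven to zero: {zero_coefs}")
```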
Facebook Believes in Omni-Supervised Learning
Semi-supervised learning is one of the areas of machine learning that has received a lot of attention in recent years. Conceptually, semi-supervised learning is a variation of supervised learning that combines labeled and unlabeled data for training. The principle of semi-supervised learning is that leveraging a small amount of labeled data through supervised learning together with a larger amount of unlabeled data through unsupervised learning can, in many scenarios, yield better accuracy than completely supervised models. Despite its promise, most semi-supervised learning methods haven't produced tangible benefits compared to supervised alternatives. Part of the challenge is the fact that semi-supervised learning techniques depend on simulating labeled/unlabeled data by splitting a fully annotated dataset, and are therefore likely to be upper-bounded by fully supervised learning with all annotations. In other words, semi-supervised learning will only be as good as the equivalent fully supervised method run against a fully labeled dataset.
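For context (this is not Facebook's omni-supervised data distillation approach itself), here is a sketch of self-training, a common semi-supervised baseline: train on the small labeled subset, pseudo-label the unlabeled data where the model is confident, and retrain. The dataset, thresholds, and split are made up.

```python
# Illustrative self-training (pseudo-labeling) loop, a common semi-supervised baseline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:50] = True                                # pretend only 50 points are labeled

model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])

for _ in range(5):
    probs = model.predict_proba(X[~labeled])
    confident = probs.max(axis=1) > 0.95           # keep only confident pseudo-labels
    if not confident.any():
        break
    pseudo_X = X[~labeled][confident]
    pseudo_y = model.classes_[probs[confident].argmax(axis=1)]
    model = LogisticRegression(max_iter=1000).fit(
        np.vstack([X[labeled], pseudo_X]),
        np.concatenate([y[labeled], pseudo_y]),
    )

print("accuracy on the full dataset:", round(model.score(X, y), 3))
```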
The ambiguity of p-value; What is it?
In the academic syllabus I was taught in the United Kingdom, the concept of the p-value was introduced in year 12 without any emphasis on how much we should appreciate its existence in Statistics, or on how easily it can be misleading. Even though the mass media exaggerate the power of Machine/Deep Learning (it is incredible, though), I believe the ideas of Probability and Statistics should never be buried. The p-value is definitely one of those ideas, having lasted over three centuries. The p-value in hypothesis testing is the probability, for a given statistical model and when the null hypothesis is true, that the statistical summary would be the same as or of greater magnitude than the actual observed results. (Wikipedia)
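A tiny worked example of the quoted definition, with invented numbers: if the null hypothesis is a fair coin, the p-value for observing 62 heads in 100 flips is the probability of a result at least that extreme under the null.

```python
# How often would a fair coin give a result at least as extreme as 62/100 heads?
from scipy import stats

n_flips, n_heads = 100, 62
# Two-sided exact binomial test against p = 0.5 (the null hypothesis).
result = stats.binomtest(n_heads, n_flips, p=0.5, alternative="two-sided")
print(f"p-value = {result.pvalue:.4f}")   # small p-value => data unlikely under the null
```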
Reinforcement Learning: An Introduction to the Concepts, Applications and Code
In this series of reinforcement learning blog posts, I will try to create a simplified explanation of the concepts required to understand reinforcement learning and its applications. In this initial post, I highlight some of the main concepts and terminology in reinforcement learning. These concepts will be explained further in future blog posts, along with applications and implementations for real-world problems.
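As a taste of the concepts to come (agent, action, reward, exploration versus exploitation), here is a minimal sketch of the reinforcement learning loop on a toy multi-armed bandit; the reward probabilities and epsilon are invented for illustration.

```python
# An epsilon-greedy agent learns action values for a 3-armed bandit.
import numpy as np

rng = np.random.default_rng(42)
true_reward_probs = [0.2, 0.5, 0.8]      # unknown to the agent
n_actions, epsilon, n_steps = 3, 0.1, 2000

q_values = np.zeros(n_actions)           # estimated value of each action
counts = np.zeros(n_actions)

for _ in range(n_steps):
    # Exploration vs exploitation: sometimes try a random action.
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(q_values))

    reward = float(rng.random() < true_reward_probs[action])   # environment feedback

    # Incremental update of the action-value estimate.
    counts[action] += 1
    q_values[action] += (reward - q_values[action]) / counts[action]

print("learned action values:", np.round(q_values, 2))   # approaches [0.2, 0.5, 0.8]
```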