Prescriptive Maintenance for the Manufacturing Industry
Today’s trend toward Artificial Intelligence (AI) and the increased level of automation in manufacturing allow firms to flexibly connect assets and improve productivity through data-driven insights that were not possible before. As more automation is used in manufacturing, maintenance issues will have to be dealt with ever faster, and automated decisions about the most economical course of action are becoming more complex.
Best books on Artificial Intelligence and Deep Learning for October 2018
Machine Learning with C++ – Faster R-CNN with MXNet C++ Frontend
I published an implementation of Faster R-CNN with the MXNet C++ frontend. You can use it as a comprehensive example of working with the MXNet C++ frontend: it has a custom data loader for the MS COCO dataset, implements a custom target proposal layer as part of the project without modifying the MXNet library, contains error-checking code (missing from the current C++ API), and includes Eigen and NDArray integration samples. Feel free to leave comments and proposals.
A deep dive into glmnet: standardize
I'm writing a series of posts on the various options of the glmnet function (from the package of the same name), hoping to give more detail and insight beyond R's documentation.
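As a quick illustration of the option in question (a minimal sketch, not the post's own code): glmnet's standardize argument controls whether predictors are scaled to unit variance before the penalized fit.

```r
library(glmnet)

# Simulated data (hypothetical, for illustration only)
set.seed(1)
x <- matrix(rnorm(100 * 5), nrow = 100)
y <- as.numeric(x %*% c(2, -1, 0, 0, 1) + rnorm(100))

# standardize = TRUE (the default): each column of x is scaled to unit
# variance before fitting; coefficients are reported on the original scale
fit_std <- glmnet(x, y, standardize = TRUE)

# standardize = FALSE: the penalty acts on the raw scale, so predictors
# with larger variance are effectively penalized less
fit_raw <- glmnet(x, y, standardize = FALSE)

# The two fits generally give different coefficients at the same lambda
coef(fit_std, s = 0.1)
coef(fit_raw, s = 0.1)
```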
Make Beautiful Tables with the Formattable Package
I love the formattable package, but I always struggle to remember its syntax. A quick Google search reveals that I’m not alone in this struggle. This post is intended as a reminder for myself of how the package works – and hopefully you’ll find it useful too!
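For a flavour of that syntax, here is a minimal sketch built from the package's documented helpers (the data frame is hypothetical):

```r
library(formattable)

# Hypothetical sales table
df <- data.frame(
  product = c("A", "B", "C"),
  revenue = c(1200, 950, 1430),
  growth  = c(0.12, -0.03, 0.08)
)

formattable(df, list(
  # Shade the revenue cells from white to light green by value
  revenue = color_tile("white", "lightgreen"),
  # Render growth as a percentage, green when positive, red when negative
  growth = formatter("span",
                     style = x ~ style(color = ifelse(x >= 0, "green", "red")),
                     x ~ percent(x))
))
```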
What is RQDA and what are its features?
RQDA is an R package for Qualitative Data Analysis, a free (free as in freedom) qualitative analysis software application (BSD license). It works on Windows, Linux/FreeBSD and Mac OSX platforms. RQDA is an easy-to-use tool to assist in the analysis of textual data. At the moment it only supports plain-text formatted data. All the information is stored in a SQLite database via the RSQLite package. The GUI is based on RGtk2, with the aid of gWidgetsRGtk2. It includes a number of standard Computer-Aided Qualitative Data Analysis features. In addition, it integrates seamlessly with R, which means that a) statistical analysis on the coding is possible, and b) functions for data manipulation and analysis can easily be extended by writing R functions. To some extent, RQDA and R make an integrated platform for both quantitative and qualitative data analysis.
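A hedged sketch of that R integration (the project file and codes are hypothetical; function names follow RQDA's documented API):

```r
library(RQDA)

# Launch the RGtk2-based GUI for importing and coding plain-text files
RQDA()

# Everything is stored in a SQLite database, so coding results can be
# pulled back into R for quantitative analysis
openProject("my_interviews.rqda")
codings <- getCodingTable()   # one row per coded text segment
table(codings$codename)       # how often each code was applied
```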
Discourse Network Analysis: Undertaking Literature Reviews in R
Literature reviews are the cornerstone of science. Keeping abreast of developments within any given field of enquiry has become increasingly difficult given the enormous amount of new research. Databases and search technology have made finding relevant literature easy, but keeping a coherent overview of the discourse within a field of enquiry is an ever more encompassing task. Scholars have proposed many approaches to analysing literature, which can be placed along a continuum from traditional narrative methods to systematic analytic syntheses of text using machine learning. Traditional reviews are biased because they rely entirely on the interpretation of the researcher. Analytical approaches follow a process that is more like scientific experimentation: they are reproducible in the way literature is searched and collated, but still rely on subjective interpretation. Machine learning provides new methods to analyse large swaths of text, yet these methods cannot by themselves provide insight: machine learning cannot interpret a text; it can only summarise and structure a corpus, and human interpretation is still required to make sense of the information. This article introduces a mixed-method technique for reviewing literature, combining qualitative and quantitative methods. I used this method to analyse literature published by the International Water Association as part of my dissertation into water utility marketing. You can read the code below, or download it from GitHub. Detailed information about the methodology is available through FigShare.
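As a sketch of the quantitative half of such a mixed-method review (hypothetical coding data; the article's actual code is linked above), themes can be connected whenever they are coded in the same article, and the resulting discourse network analysed with igraph:

```r
library(igraph)

# Hypothetical coding table: each row links an article to a coded theme
codes <- data.frame(
  article = c("A1", "A1", "A2", "A2", "A3"),
  theme   = c("pricing", "branding", "branding", "service", "pricing")
)

# Bipartite article-theme graph; theme vertices get type = TRUE
g <- graph_from_data_frame(codes, directed = FALSE)
V(g)$type <- V(g)$name %in% codes$theme

# Project onto themes: two themes are linked when they co-occur in an
# article, with edge weight equal to the number of co-occurrences
themes <- bipartite_projection(g)$proj2
plot(themes, edge.width = E(themes)$weight)
```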
Is the Food Here Yet?
Delivery time prediction has long been a part of city logistics, but refining accuracy has recently become very important for on-demand food delivery services such as Deliveroo, Foodpanda and Uber Eats. These services and similar ones must receive an order and have it delivered within ~30 minutes to satisfy their users. In these situations, +/- 5 minutes can make a big difference, so it is very important for customer satisfaction that the initial prediction is highly accurate and that any delays are communicated effectively.
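A toy example of the accuracy target this implies (hypothetical numbers, not the services' data):

```r
# Hypothetical predicted vs. actual delivery times, in minutes
predicted <- c(28, 31, 35, 40, 25, 33)
actual    <- c(30, 29, 41, 38, 27, 45)

# Share of orders arriving within +/- 5 minutes of the initial estimate
mean(abs(predicted - actual) <= 5)

# Mean absolute error, a common headline metric for ETA models
mean(abs(predicted - actual))
```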
How does Scaling change Principal Components? – Part 1
Assume that we have some data that looks like the figure above, where the red arrow shows the first principal component and the blue arrow shows the second principal component. How would normalization or standardization change those two? Also, can we perform some kind of normalization with respect to the variance we have? (I will make more posts as I go along.)
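This is easy to explore directly in R (a minimal sketch with simulated data; the post itself works through it visually):

```r
# Simulated correlated 2-D data (hypothetical)
set.seed(42)
x <- rnorm(200, sd = 3)
y <- 0.5 * x + rnorm(200)
dat <- cbind(x, y)

# Principal components of the centred but unscaled data
pca_raw <- prcomp(dat, center = TRUE, scale. = FALSE)

# Principal components after standardization (each column to unit variance)
pca_std <- prcomp(dat, center = TRUE, scale. = TRUE)

# Scaling changes both the component directions and the share of
# variance explained by each component
pca_raw$rotation
pca_std$rotation
```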
Algorithmic Complexity [101]
Computers are fast: they store and manipulate data using electronic signals that travel across their silicon internals at hundreds of millions of miles per hour. For comparison, the fastest signals in the human nervous system travel at about 250mph, which is about 3 million times slower, and those speeds are only possible for unconscious signals – signal speeds observed for conscious thought and calculation are typically orders of magnitude slower still. Basically, we’re never going to be able to out-calculate a computer.
Speed up your deep learning language model up to 1000% with the adaptive softmax, Part 1
How would you like to speed up your language modeling (LM) tasks by 1000%, with nearly no drop in accuracy? A recent paper from Grave et al. (2017), called ‘Efficient softmax approximation for GPUs’, shows how you can gain a massive speedup in one of the most time-consuming aspects of language modeling, the computation-heavy softmax step, through their ‘adaptive softmax’. The giant speedup from the adaptive softmax comes with only minimal costs in accuracy, so anyone doing language modeling should definitely consider using it. Here in Part 1 of this blog post, I’ll fully explain the adaptive softmax; in Part 2, I’ll walk you step by step through a PyTorch implementation (with an accompanying Jupyter notebook), which uses PyTorch’s built-in AdaptiveLogSoftmaxWithLoss function.
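To give a feel for the two-level idea before the PyTorch walkthrough (a toy illustration in R, not Grave et al.'s implementation): frequent words sit in a small 'head' softmax alongside cluster tokens, and rare words are reached through their cluster, so most predictions never touch the full vocabulary.

```r
softmax <- function(z) { e <- exp(z - max(z)); e / sum(e) }

# Toy vocabulary: three frequent words in the head plus one tail cluster
head_logits <- c(the = 2.1, of = 1.3, and = 0.8, TAIL_CLUSTER = -0.5)
tail_logits <- c(zebra = 0.2, quark = -0.1, fjord = 0.4, yurt = -0.3)

p_head <- softmax(head_logits)

# A frequent word needs only the small head softmax
p_head["the"]

# A rare word's probability is its cluster's head probability times its
# within-cluster probability; the full-vocabulary softmax is never formed
unname(p_head["TAIL_CLUSTER"] * softmax(tail_logits)["zebra"])
```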
Speed up your deep learning language model up to 1000% with the adaptive softmax, Part 2: PyTorch implementation