Machine Learning & AI Main Developments in 2018 and Key Trends for 2019

At KDnuggets, we try to keep our finger on the pulse of main events and developments in industry, academia, and technology. We also do our best to look forward to key trends on the horizon.

In previous years, we have brought you collections of predictions and analyses from experts. This year we posed the question:

What were the main developments in Machine Learning and Artificial Intelligence in 2018, and what key trends do you expect in 2019?


Below are the responses from Anima Anandkumar, Andriy Burkov, Pedro Domingos, Ajit Jaokar, Nikita Johnson, Zachary Chase Lipton, Matthew Mayo, Brandon Rohrer, Elena Sharova, Rachel Thomas, and Daniel Tunkelang.

Key themes singled out by these experts include deep learning advancements, transfer learning, the limitations of machine learning, the changing landscape of natural language processing, and much more.

Be sure to check out the collected opinions we shared last week, when we asked a group of experts the related question, “What were the main developments in Data Science and Analytics in 2018, and what key trends do you expect in 2019?”

 Anima Anandkumar (@AnimaAnandkumar) is Director of ML research at NVIDIA and Bren Professor at Caltech.

What were the main developments in Machine Learning and Artificial Intelligence in 2018?

“Low hanging fruits of deep learning have been mostly plucked”

Focus started shifting away from standard supervised learning to more challenging machine learning problems such as semi-supervised learning, domain adaptation, active learning, and generative models. GANs continued to be very popular, with researchers attempting harder tasks such as photo-realism (BigGAN) and video-to-video synthesis. Alternative generative models (e.g., the neural rendering model) were developed to combine generation and prediction in a single network to help semi-supervised learning. Researchers expanded the application of deep learning to many scientific areas such as earthquake prediction, materials science, protein engineering, high-energy physics, and control systems. In these cases, domain knowledge and constraints were combined with learning. For example, to improve the autonomous landing of drones, the ground-effect model was learned to correct the base controller, and the learning was guaranteed to be stable, which is important in a control system.

Predictions:

“AI will bridge simulation and reality to become safer and more physically aware”

We will see the development of new domain-adaptation techniques to seamlessly transfer knowledge from simulations to the real world. The use of simulations will help us overcome data scarcity and speed up learning in new domains and problems. Adapting AI from simulations to real data (sim2real) will have a significant impact in robotics, autonomous driving, medical imaging, earthquake forecasting, etc. Simulations are a great way to account for all possible scenarios in safety-critical applications like autonomous driving. The knowledge built into sophisticated simulators will be used in novel ways to make AI more physically aware, more robust, and able to generalize to new and unseen scenarios.

 Andriy Burkov (@burkov) is Machine Learning Team Leader at Gartner.

This is my own perception as a practitioner, and not Gartner’s official statement, which is based on research. Here are my thoughts:

What were the main developments in Machine Learning and Artificial Intelligence in 2018?

TensorFlow lost ground to PyTorch in the academic world. Sometimes the immense influence and reach of Google can push the market in a suboptimal direction, as happened before with MapReduce and the subsequent Hadoop mania.

Deepfakes (and their audio counterparts) undermined the most trusted source of information: video footage. Nobody will be able to say anymore, “I saw a video of that guy saying those words.” We stopped believing printed words decades ago, but video was unshakeable until now.

The comeback of reinforcement learning, in its deep learning form, was quite unexpected and cool!

Google’s system that calls restaurants on your behalf and pretends (successfully) to be a real human being is quite a milestone. However, it raises lots of questions about ethics and AI.

Personal assistants and chatbots reached their limits quite quickly. They are better than ever before but not as good as everyone hoped last year.

What key trends do you expect in 2019?

1) I expect everybody to get even more excited about the promise of AutoML than this year. I also expect it to fail (with the exception of some very specific and well-defined use cases, like image recognition, machine translation, and text classification, where handcrafted features aren’t needed or are standard, raw data is close to what the machine expects as input, and data is abundant).

2) Marketing automation: with mature generative adversarial networks and variational autoencoders, it is becoming possible to generate thousands of pictures of the same person or landscape with small differences in facial expression or mood between the images (see the sketch after this list). Based on how consumers react to those pictures, we can generate optimal advertising campaigns.

3) Real-time speech generation on mobile devices that is indistinguishable from a real human.

4) Self-driving taxis will remain in the test/PoC phase.
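
As a minimal sketch of the mechanism behind prediction 2): in a trained VAE, nearby points in latent space decode to small variations of the same image. The toy PyTorch decoder below is untrained and purely illustrative; it is not from Burkov’s text, and a real system would first train an encoder/decoder pair on photos.

    import torch
    import torch.nn as nn

    # Toy VAE decoder (untrained, for illustration only).
    latent_dim, image_dim = 16, 64 * 64
    decoder = nn.Sequential(
        nn.Linear(latent_dim, 256),
        nn.ReLU(),
        nn.Linear(256, image_dim),
        nn.Sigmoid(),  # pixel intensities in [0, 1]
    )

    # Latent code for one "person"; in practice this would come from
    # encoding a reference photo.
    z = torch.randn(latent_dim)

    # Thousands of variations: perturb z slightly and decode. A well-trained
    # VAE maps nearby latent points to images that differ only subtly.
    variations = decoder(z + 0.1 * torch.randn(1000, latent_dim))
    print(variations.shape)  # torch.Size([1000, 4096])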

 Pedro Domingos (@pmddomingos) is a Professor in the Dept. of Computer Science & Engineering, University of Washington.

After years of hype, 2018 was the year of overblown fears about AI. To listen to the media and even some researchers, you’d think that Cambridge Analytica threw the 2016 election to Trump, machine learning algorithms are a cesspool of bias and discrimination, and robots are coming to take our jobs and then our lives. It’s not just talk, either: Europe and California have passed draconian privacy laws, the UN is debating a ban on intelligent weapons, etc. The public increasingly has a dark view of AI, which is both dangerous and unfair. Hopefully 2019 will be the year sanity returns.

 Ajit Jaokar (@AjitJaokar) is Principal Data Scientist and Creator of University of Oxford Data Science for Internet of Things Course.

In 2018, a number of trends started to take off. Automated machine learning is one, and reinforcement learning is another. Both of these nascent trends will expand significantly in 2019. As part of my teaching at Oxford University (Data Science for Internet of Things course), I see IoT being increasingly interwoven into larger ecosystems such as autonomous cars, robotics, and smart cities. Through work with Dobot, I see a new class of robotics, i.e., collaborative robots (cobots), as a key trend in 2019. Unlike the assembly-line robots of before, the new robots will be autonomous and able to understand emotion (in my course we also work with Emotion Research Labs in this area). Finally, a bit of a controversial view: in 2019, the role of the data scientist as we know it will tend to move out of research and into product development. I see artificial intelligence becoming much more closely tied to the creation of the next generation of data products. The role of the data scientist will change accordingly.

 Nikita Johnson (@nikitaljohnson) is the Founder of RE.WORK.

One development we witnessed in 2018 is the increase in the number of open-source tools that are lowering the barrier to entry, making AI more accessible to all, and enabling closer collaboration between organizations. These communities are essential to ensuring the spread of AI across all areas of society and business.

Similarly, in 2019 we will see an increase in the number of companies focusing on ‘AI for Good’, building on Google’s recently announced AI for Social Good program, as well as Microsoft’s AI for Good initiative. This shift towards AI for a positive impact is gaining traction as society is demanding that companies have a higher social purpose.

 Zachary Chase Lipton (@zacharylipton) is an Assistant Professor of machine learning at Carnegie Mellon and Founder of the Approximately Correct Blog.

Let’s start off with the deep learning scene, which accounts for the lion’s share of the public discourse on machine learning and AI. Perhaps I’ll annoy some people with this observation, but here’s one reasonable read on 2018: the biggest development was that there were no developments! Of course that’s too simplistic a take, but allow me to unpack the point. A substantial fraction of the biggest developments were more in the nature of “tuning” than of qualitatively new ideas. The BigGAN is a GAN, but bigger. Progressive growing of GANs produced really visually intriguing results, a big step in some senses, but methodologically it’s just a GAN with a clever curriculum-learning trick. Over on the NLP side of the fence, the biggest stories of the year were the contextualized embeddings of ELMo and BERT. And these are absolutely fantastic advances empirically. But we’ve been pre-training language models and fine-tuning them on downstream classification tasks since at least 2015-16, when Andrew Dai and Quoc Le did this on a smaller scale. So perhaps the more cynical spin is that this was not a year dominated by new “big ideas”. On the flip side, the positive spin might be that the full capabilities of existing technologies haven’t yet been reached, and the rapid development in hardware, systems, and tools may have a second act to play here in squeezing all of the juice out of these 3-4-year-old ideas.
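
As a concrete illustration of the pre-train-then-fine-tune recipe Lipton refers to, here is a minimal sketch using a recent version of the Hugging Face transformers library (my choice of tooling, not something named in this piece): load an encoder pre-trained on unlabeled text, then attach and train a small task-specific head.

    import torch
    from transformers import AutoModel, AutoTokenizer

    # Step 1: a language model pre-trained on large unlabeled corpora.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    encoder = AutoModel.from_pretrained("bert-base-uncased")

    # Step 2: a small task-specific head, e.g. for binary sentiment.
    head = torch.nn.Linear(encoder.config.hidden_size, 2)

    batch = tokenizer(["a great film", "a dull, plodding mess"],
                      padding=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state  # (batch, seq_len, hidden)
    logits = head(hidden[:, 0])                  # classify from the [CLS] position

    # Fine-tuning: train the head (and usually the encoder too) on labels.
    loss = torch.nn.functional.cross_entropy(logits, torch.tensor([1, 0]))
    loss.backward()

The same recipe, at a smaller scale, is what Dai and Le demonstrated in their 2015 semi-supervised sequence learning work.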

I think a lot of fresh ideas are brewing now in the emerging theory of deep learning. There’s a whole community of researchers, including Sanjeev Arora, Tengyu Ma, Daniel Soudry, Nati Srebro, and many more, doing some very exciting work. For too long, we’ve had first-principles theory that’s rigorous but often oblivious to practice, and “experimental” ML that is concerned not with science but with leaderboard chasing. Now a new mode of inquiry is emerging where theory and experiment are more tightly coupled. You’re starting to see theory papers inspired by experiments, and theory papers that conduct experiments. Recently, I had the inspiring experience of getting an idea from a theory paper that really uncovered a natural phenomenon I hadn’t expected to find.

For 2019 and beyond, I think there will be a reckoning for applied machine learning. We’re rushing into all these practical domains claiming to “solve” problems, but the only reliable hammer in our toolbox so far has been supervised learning, and there are just some limits to what we can do with pattern matching alone. Supervised models find associations, but they don’t provide justification. They don’t know what information is safe to rely on vs. what is brittle, in the sense that it’s likely to change over time. These models don’t tell us the effects of interventions. And when we deploy automated systems based on supervised learning in human-interacting systems, we don’t anticipate the way they distort incentives and thus alter their environments, breaking the very patterns they rely upon. I think that over the next year we’ll see many more cases of ML projects getting scrapped, or getting into trouble, over precisely these kinds of limitations, and we’ll see a bit of a shift among the more creative members of the community away from the function-fitting leaderboards and toward problems related to bridging the gap between representation learning and causal reasoning.

 Matthew Mayo (@mattmayo13) is the Editor of KDnuggets.

To me, 2018 in machine learning seemed like a year of refinement. For example, there was wider application of, and interest in, transfer learning, notably in natural language processing thanks to techniques such as Universal Language Model Fine-tuning for Text Classification (ULMFiT) and Bidirectional Encoder Representations from Transformers (BERT). And these weren’t the only advances in NLP last year; of additional note is Embeddings from Language Models (ELMo), a deep contextualized word representation model, which made considerable improvements on every task it was applied to. Other breakthroughs for the year seemed to be centered on refinements of existing technologies, BigGAN being one example. Also, non-technical discussions of inclusion and diversity in machine learning went mainstream, thanks to the voices of numerous advocating community members (refer to NeurIPS as one example).

I believe that in 2019, research attention will turn away from supervised learning toward areas such as reinforcement learning and semi-supervised learning, as their potential applications are increasingly realized. We are now at the point where image recognition and generation have been “solved” (for lack of a less-loaded term), for example, and what has been learned along the way can aid researchers in pursuing more complex applications of machine learning.

As an amateur automated machine learning (AutoML) evangelist, I think we will keep seeing incremental advances in AutoML, to the point that algorithm selection and hyperparameter optimization for run-of-the-mill supervised learning tasks can be confidently handled by available and under-development methods. I think the widespread perception of automated machine learning will turn (or maybe has already reached the tipping point) from replacing practitioners to augmenting them. AutoML will no longer be feared as a replacement for the machine learning toolbox, but embraced as another tool within it. Moreover, I feel it will be a foregone conclusion that practitioners will regularly reach for these tools in everyday scenarios, and will be expected to know how to do so.
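
As a rough sketch of the kind of everyday AutoML Mayo describes, here is hyperparameter optimization with scikit-learn’s built-in random search; full AutoML systems (auto-sklearn, TPOT, and others) extend the same idea to algorithm selection as well. The dataset and search space are illustrative choices of mine, not from the text.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RandomizedSearchCV

    X, y = load_breast_cancer(return_X_y=True)

    # The practitioner declares a search space; the tool does the tuning.
    search = RandomizedSearchCV(
        RandomForestClassifier(random_state=0),
        param_distributions={
            "n_estimators": [50, 100, 200, 400],
            "max_depth": [None, 4, 8, 16],
            "min_samples_leaf": [1, 2, 4],
        },
        n_iter=20,   # try 20 sampled configurations
        cv=5,        # score each by 5-fold cross-validation
        random_state=0,
    )
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))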

 Brandon Rohrer (@brohrer) is a Data Scientist at Facebook.

One important trend of 2018 is the proliferation and maturation of data science education opportunities. Online courses are the original data science education venue. They continue to be popular at all levels with more students, variants, and topics every year.

In the academic world, new data science master’s programs are being kicked off at a rate of about a dozen per year. Our institutes of higher learning are responding to the pleas of companies and students to provide dedicated programs for data-related fields. (This year, 18 industry co-authors and I, with 11 academic contributors, created a virtual industry advisory board to help support this explosive growth.)

On the informal end of the spectrum, tutorial blog posts are ubiquitous. They contribute a great deal to the collective understanding of data science, both for their readers and their authors.

In 2019 and beyond, academic data science programs will become a more common way to collect the baseline skills necessary to land a first data science position. This is a good thing. Institutions that are subject to accreditation will fill a long-standing gap. So far, data science qualifications have largely been demonstrated through previous work experience. This creates a Catch-22. New data scientists can’t show their qualifications because they’ve never had a data science job, and they can’t get a data science job because they can’t show their qualifications. Credentials from educational institutions are one way to break this cycle.

However, online courses aren’t going anywhere. There are many for whom the time and financial commitments of a university education make it unattainable. Now that it has been established, data science education will always have a practical track. Through dogged demonstration of project work, relevant experience, and online training, new data scientists will be able to demonstrate their skills even without a degree. Online courses and tutorials will continue to become more common, more sophisticated, and more important to data science education. In fact, several prominent data science and machine learning programs go so far as to put their courses online, and offer enrollment options even for non-matriculated students. I expect that the line between a data science university degree and an online training curriculum will continue to blur. To my mind, this is the truest form of the “democratization of data science.”

 Elena Sharova is Senior Data Scientist at ITV.

What were the main developments in Machine Learning (ML) and Artificial Intelligence (AI) in 2018?

In my view, 2018 will be remembered in the AI and ML communities for the following three events.

Firstly, the coming into force of the EU General Data Protection Regulation (GDPR), designed to increase the fairness and transparency of personal data use. The regulation brought to light individuals’ rights to control their personal data and to access information on how it is used, but it also caused some confusion in interpreting the law. The end result to date is that a large number of companies believe themselves to be compliant after making a few superficial changes to data handling, while ignoring the fundamental need to redesign their infrastructure for data storage and processing.

Secondly, there was the Cambridge Analytica scandal, which cast a dark shadow over the entire data science (DS) community. If previously the debate was mostly about ensuring fairness in AI and ML products, this scandal attracted questions of a deeper ethical nature. The latest enquiry into Facebook’s involvement means that it is not going to go away soon. As the data science field matures, developments like these will take place across many industries, beyond politics. Some will be more tragic, like the Uber self-driving car case in Arizona, and they will be followed by strongly charged public reactions. Technology is power, and with power comes responsibility. As Noam Chomsky said: “It is only in folk tales, children’s stories, and the journals of intellectual opinion that power is used wisely and well to destroy evil. The real world teaches very different lessons, and it takes wilful and dedicated ignorance to fail to perceive them.”

Lastly, on a more positive note, Amazon’s recent development of its own server processor chip means that we may be getting closer to the day when general access to cloud computing is no longer a cost issue.

What key trends do you expect in 2019?

The role and responsibilities of a data scientist are growing beyond building models that achieve accurate predictions. The main trend of 2019 for ML, AI, and DS practitioners will be a growing responsibility to follow established software development practices, particularly around testing and maintenance. The end product of data science has to co-exist with the rest of a company’s technology stack. The requirements for the efficient running and maintenance of proprietary software will apply equally to the models and solutions we build. This means that the best software development practices will underpin the rules of machine learning that we will need to follow.
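
To make this concrete, here is a hypothetical sketch of what treating a model like any other software component might look like: plain pytest-style unit tests around a scikit-learn model. The model and the specific tests are my illustrative choices, not Sharova’s.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_model(X, y):
        """The 'data science product' under test."""
        return LogisticRegression(max_iter=1000).fit(X, y)

    def _toy_data(seed=0):
        rng = np.random.default_rng(seed)
        X = rng.normal(size=(200, 5))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)  # learnable signal
        return X, y

    def test_predictions_have_expected_shape_and_labels():
        X, y = _toy_data()
        preds = train_model(X, y).predict(X)
        assert preds.shape == (200,)
        assert set(preds) <= {0, 1}

    def test_training_is_reproducible():
        X, y = _toy_data()
        m1, m2 = train_model(X, y), train_model(X, y)
        assert np.allclose(m1.coef_, m2.coef_)

    def test_model_beats_trivial_baseline():
        X, y = _toy_data()
        assert train_model(X, y).score(X, y) > max(y.mean(), 1 - y.mean())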

 Rachel Thomas (@math_rachel) is fast.ai Founder, and a USF Assistant Professor.

Two main AI developments in 2018 were:

1. The successful application of transfer learning to NLP.
2. Growing attention to dystopian misuses of AI (including surveillance and manipulation by hate groups and authoritarians).

Transfer learning is the practice of applying a pre-trained model to a new data set. Transfer learning was a key factor in the explosion of computer vision progress, and in 2018 it was successfully applied to NLP in work including ULMFiT from fast.ai and Sebastian Ruder, the Allen Institute’s ELMo, the OpenAI transformer, and Google’s BERT. These advances are cause for both excitement and concern, as discussed in this NY Times article.

Ongoing issues such as Facebook’s determining role in the genocide in Myanmar, YouTube’s disproportionately recommending conspiracy theories (many of which promote white supremacy), and the use of AI for surveillance by governments and law enforcement agencies finally began getting more mainstream media attention in 2018. While these misuses of AI are severe and scary, it is good that more people are becoming aware of them and increasingly pushing back.
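
As a minimal sketch of transfer learning in the computer vision setting Thomas mentions, here is how it is commonly done with torchvision (a recent version with the weights argument is assumed, and the 10-class task is hypothetical): start from an ImageNet-pre-trained network and retrain only a new final layer.

    import torch
    import torchvision

    # Start from a model pre-trained on ImageNet ...
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

    # ... freeze its learned features, and replace only the final layer
    # for the new task (here: a hypothetical 10-class dataset).
    for param in model.parameters():
        param.requires_grad = False
    model.fc = torch.nn.Linear(model.fc.in_features, 10)

    # Only the new head is trained on the new data set.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    images = torch.randn(4, 3, 224, 224)  # stand-in for a real batch
    loss = torch.nn.functional.cross_entropy(model(images),
                                             torch.tensor([0, 1, 2, 3]))
    loss.backward()
    optimizer.step()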

I expect these trends to continue in 2019, with rapid advances in NLP (as Sebastian Ruder wrote this summer, “NLP’s ImageNet moment has arrived”), as well as more dystopian developments in how technology is used for surveillance, inciting violence, and manipulation by dangerous political movements.

 Daniel Tunkelang (@dtunkelang) is an independent consultant specializing in search, discovery, and ML/AI.

2018 saw two major advances in the sophistication of word embeddings for natural language processing and understanding.

The first was in March. Researchers from the Allen Institute for AI and the University of Washington published “Deep contextualized word representations,” introducing ELMo (Embeddings from Language Models), an open-source deep contextualized word representation that improves on context-free embeddings like word2vec or GloVe. The authors demonstrated improvements to existing NLP systems simply by substituting vectors from ELMo’s pre-trained models.

The second was in November. Google open-sourced BERT (Bidirectional Encoder Representations from Transformers), a bidirectional, unsupervised language representation, pre-trained on Wikipedia. As the authors demonstrated in “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”, they achieved significant improvements across a wide variety of NLP benchmarks, even relative to ELMo.
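
To see what “contextualized” buys over context-free embeddings like word2vec or GloVe, here is a sketch using the Hugging Face transformers library (a later tool, not mentioned by Tunkelang): the same word gets different vectors in different sentences.

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def embed(sentence, word):
        """Return the contextual vector for `word` within `sentence`."""
        tokens = tokenizer(sentence, return_tensors="pt")
        idx = tokens.input_ids[0].tolist().index(
            tokenizer.convert_tokens_to_ids(word))
        with torch.no_grad():
            return model(**tokens).last_hidden_state[0, idx]

    # The same surface form gets different vectors in different contexts;
    # a context-free embedding would assign one fixed vector to "bank".
    river = embed("he sat on the bank of the river", "bank")
    money = embed("she deposited money at the bank", "bank")
    print(torch.cosine_similarity(river, money, dim=0))  # < 1.0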

Between the rapid adoption of smart speakers (~100M by the end of 2018) and the ubiquity of digital assistants on mobile phones, advances in natural language understanding are moving instantly from the lab to the field. These are exciting times for the research and practice of NLP.

But we still have a long way to go.

Also this year, researchers at the Allen Institute released “SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference,” a dataset for a sentence-completion task that requires common-sense understanding. Their experiments showed that state-of-the-art NLP still lags far behind human performance.

But hopefully we’ll see more NLP breakthroughs in 2019. Many of the best minds in computer science are working on it, and industry is eager to apply their results.
