The Mobility Robotics course is finally done, and I just started Perception. It seems to be way more concept-heavy than any of the other courses, but I like the content from Week 1 so far! I did not like Mobility as much, since it focussed exclusively on theory, and the content assumed a fair amount of comfort with kinematics/dynamics (which I don’t have anymore). Anyway, off to the articles for this week:
AI & the Blockchain
This article gives a quick introduction to Blockchain technologies, and then delves into the relationship between Artificial Intelligence and cryptocurrencies.
It discusses various ways in which AI could transform blockchain tech, such as:

1. Improving the energy efficiency of mining centers (like DeepMind's algorithms do for Google's data centers).
2. Increasing scalability using Federated Learning.
3. Predicting which nodes could solve a particular block, so as to 'free up' the others.
Federated Learning
Coming across the mention of Federated Learning made me realise that I did not remember what it was, so I revisited the old(ish) post on Google’s Research blog.
Federated Learning works by decentralizing the training process for ML models (unlike most other on-device ML technologies, which only run inference on end devices). This is useful in cases where continuously sending raw data from devices would cause bandwidth and latency problems for both the user and the training server.
It works like this: every device downloads the latest version of the model from the central server. Then, as it sees more data in deployment, it trains the local model to compute small 'focussed' updates based on the user's usage. All these small updates (and not the raw data that produced them) are then sent to the central server, which aggregates them using the FederatedAveraging algorithm. Privacy is ensured primarily by updating the central model only after a certain number of device updates have been received and averaged.
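To make the aggregation step concrete, here's a minimal NumPy sketch of the weighted-averaging idea behind FederatedAveraging. The function name and the single-vector model are my simplifications; the real algorithm averages full parameter sets after several local SGD epochs per device.

```python
import numpy as np

def federated_averaging(global_weights, client_updates, client_sizes):
    """One aggregation round of FederatedAveraging (simplified sketch).

    global_weights: current server model parameters (np.ndarray)
    client_updates: list of per-device weight deltas (np.ndarray each)
    client_sizes:   number of local training examples on each device
    """
    total = sum(client_sizes)
    # Weight each device's update by its share of the total data,
    # then sum: updates from data-rich devices count for more.
    aggregate = sum(
        (n / total) * delta
        for delta, n in zip(client_updates, client_sizes)
    )
    # The server only ever sees aggregated deltas, never raw user data.
    return global_weights + aggregate

# Toy round: three devices send updates for a 4-parameter model.
server = np.zeros(4)
updates = [np.full(4, 0.1), np.full(4, 0.3), np.full(4, -0.2)]
sizes = [100, 50, 25]
server = federated_averaging(server, updates, sizes)
print(server)  # each entry ~0.114: a data-weighted average of the deltas
```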
AlphaZero Chess
Some time back, DeepMind unveiled AlphaGo Zero, an algorithm that learned to play Go purely by playing against itself (given only the basic rules of the game). They then went on to try the MCTS-based algorithm on chess, and it seems to work really well! The AlphaZero algorithm apparently defeated Stockfish (the reigning computer chess champion) 28 wins to none, with the remaining games drawn.
Of course, the superior hardware that AlphaZero runs on does make a huge difference, but the very fact that such powerful computers can be put to such effective use to 'meta-learn' is in itself a game-changer. Do read the original paper to get an idea of their method (especially the section on input/output representations for the deep network).
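For a flavour of how the search works, here's a small Python sketch of the PUCT selection rule that AlphaZero-style MCTS uses to pick which move to explore next. The Node structure and the c_puct value are illustrative assumptions on my part; the full method also needs expansion, network evaluation, and value backup, which the paper describes.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    N: int = 0          # visit count
    Q: float = 0.0      # mean value from completed simulations
    P: float = 0.0      # prior probability from the policy network
    children: dict = field(default_factory=dict)  # move -> Node

def select_move(node, c_puct=1.5):
    """One PUCT selection step, as used in AlphaZero-style MCTS (sketch)."""
    total_visits = sum(child.N for child in node.children.values())
    best_move, best_score = None, -math.inf
    for move, child in node.children.items():
        # Exploration bonus: large when the policy network's prior P
        # is high but the move has been visited rarely (small N).
        u = c_puct * child.P * math.sqrt(total_visits) / (1 + child.N)
        score = child.Q + u
        if score > best_score:
            best_move, best_score = move, score
    return best_move
```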
DeepVariant
High-Throughput Sequencing (HTS) is a method used in genome sequencing. HTS produces multiple reads of an individual’s genome, which are then compared to some ‘reference’ to explore variations.
To achieve this, the reads must first be properly aligned with the reference genome, while accounting for errors in measurement. Essentially, every nucleotide position that does not match the reference could be either a genuine variant or a measurement error. Deciding which, using evidence pooled across all the reads, is known as the 'Variant Calling Problem'.
DeepVariant, an algorithm co-developed by Google Brain & Verily, converts the variant-calling problem into an image classification problem to achieve state-of-the-art results. It was unveiled at NIPS 2017, and they have open-sourced the code.
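To see what 'variant calling as image classification' could look like, here's a toy Python sketch that encodes a read pileup as an image-like tensor. The channel layout and function names are my assumptions; DeepVariant's real encoding packs in more information (strand, mapping quality, etc.) and feeds it to an Inception-style CNN that labels each candidate site as homozygous-reference, heterozygous, or homozygous-variant.

```python
import numpy as np

BASE_INDEX = {"A": 0, "C": 1, "G": 2, "T": 3}

def encode_pileup(reads, qualities, reference, max_reads=100):
    """Encode a read pileup at a candidate site as an image-like tensor.

    reads:      aligned read strings, each the same length as `reference`
    qualities:  per-base quality scores in [0, 1], one list per read
    reference:  the reference sequence for the same window
    Returns a (max_reads, window, 3) array whose channels encode
    base identity, base quality, and agreement with the reference.
    """
    window = len(reference)
    image = np.zeros((max_reads, window, 3), dtype=np.float32)
    for row, (read, qual) in enumerate(zip(reads, qualities)):
        if row >= max_reads:
            break
        for col, base in enumerate(read):
            image[row, col, 0] = (BASE_INDEX.get(base, 0) + 1) / 4.0  # which base
            image[row, col, 1] = qual[col]                            # how reliable
            image[row, col, 2] = float(base == reference[col])        # agrees with ref?
    return image

# A 3-read pileup over a 5-base window; the middle read carries a mismatch.
ref = "ACGTA"
reads = ["ACGTA", "ACGAA", "ACGTA"]
quals = [[0.9] * 5, [0.8] * 5, [0.95] * 5]
print(encode_pileup(reads, quals, ref).shape)  # (100, 5, 3)
```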
Funny Programming Jargon
This is not really an 'article', but more of comic relief :-). It lists various programming terms invented by real developers that mock common software engineering pitfalls in a typical workplace. Do read it if you appreciate programming humor!