Shiny Interface for the CTT Package (CTTinShiny)
A Shiny interface developed in close coordination with the CTT package, providing a GUI that guides the user through CTT analyses.
If you did not already know
Output Masks
In this paper we propose a novel method for achieving average consensus in a multiagent network without disclosing the initial states of the individual agents. In order to achieve privacy protection of the state variables, we introduce maps, called output masks, which alter the value of the states before publicly broadcasting them. These output masks are local (i.e., implemented independently by each agent), deterministic, time-varying and converging asymptotically to the true state. The resulting masked system is also time-varying and has the original (unmasked) system as its limit system. It is shown in the paper that the masked system has the original average consensus value as a global attractor. However, in order to preserve privacy, it cannot share an equilibrium point with the unmasked system, meaning that in the masked system the global attractor cannot also be stable. …
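A minimal numerical sketch of the idea, assuming an additive, exponentially decaying output mask (the mask form, the offsets `c`, the decay rate `r` and the ring topology are illustrative assumptions, not the paper's construction):

```python
import numpy as np

# Hypothetical sketch of output-masked average consensus.  Each agent i
# broadcasts a masked value y_i = x_i + c_i * r**t instead of its private
# state x_i.  The mask is local, deterministic, time-varying, and converges
# to the true state as t -> infinity, matching the properties described
# above; the additive exponential form itself is an assumption.

rng = np.random.default_rng(0)
n = 5
x = rng.uniform(0.0, 10.0, size=n)   # private initial states
true_avg = x.mean()

c = rng.uniform(-5.0, 5.0, size=n)   # private mask offsets (assumed)
r = 0.9                              # mask decay rate (assumed)
eps = 0.2                            # consensus step size

# Symmetric adjacency matrix of a ring graph; symmetry keeps the running
# average of the true states invariant under the update below.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0

for t in range(300):
    y = x + c * r**t                           # publicly broadcast masked values
    x = x + eps * (A @ y - A.sum(axis=1) * y)  # x_i += eps * sum_j a_ij (y_j - y_i)

print("final states:", np.round(x, 4))         # all entries close to the true average
print("true average:", round(true_avg, 4))
```

With symmetric weights the pairwise terms cancel in the sum, so the average of the true states is preserved exactly while the decaying mask hides the initial values; the states still converge to the original average consensus value.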
Document worth reading: “Quantizing deep convolutional networks for efficient inference: A whitepaper”
We present an overview of techniques for quantizing convolutional neural networks for inference with integer weights and activations. Per-channel quantization of weights and per-layer quantization of activations to 8 bits of precision post-training produces classification accuracies within 2% of floating point networks for a wide variety of CNN architectures. Model sizes can be reduced by a factor of 4 by quantizing weights to 8 bits, even when 8-bit arithmetic is not supported. This can be achieved with simple post-training quantization of weights. We benchmark latencies of quantized networks on CPUs and DSPs and observe a speedup of 2x-3x for quantized implementations compared to floating point on CPUs. Speedups of up to 10x are observed on specialized processors with fixed-point SIMD capabilities, like the Qualcomm QDSPs with HVX. Quantization-aware training can provide further improvements, reducing the gap to floating point to 1% at 8-bit precision. Quantization-aware training also allows for reducing the precision of weights to four bits with accuracy losses ranging from 2% to 10%, with higher accuracy drops for smaller networks. We introduce tools in TensorFlow and TensorFlow Lite for quantizing convolutional networks and review best practices for quantization-aware training to obtain high accuracy with quantized weights and activations. We recommend that per-channel quantization of weights and per-layer quantization of activations be the preferred quantization scheme for hardware acceleration and kernel optimization. We also propose that future processors and hardware accelerators for optimized inference support precisions of 4, 8 and 16 bits. Quantizing deep convolutional networks for efficient inference: A whitepaper
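For readers who want to try the simplest of these recipes, here is a minimal sketch of post-training weight quantization with the TensorFlow Lite converter the whitepaper refers to; the saved-model path is a placeholder, and exact converter options vary across TensorFlow releases:

```python
import tensorflow as tf

# Minimal sketch: post-training quantization via the TensorFlow Lite
# converter.  "saved_model_dir" is a placeholder path, not from the paper.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# The default optimization quantizes weights to 8 bits post-training,
# the scheme the whitepaper reports as staying within ~2% of float accuracy.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```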
What if a big study is done and nobody reports it?
Prostate cancer screening: massive study gets minimal coverage. Why?
Document worth reading: “Concept Tagging for Natural Language Understanding: Two Decadelong Algorithm Development”
Concept tagging is a type of structured learning needed for natural language understanding (NLU) systems. In this task, meaning labels from a domain ontology are assigned to word sequences. In this paper, we review the algorithms developed over the last twenty-five years. We perform a comparative evaluation of generative, discriminative and deep learning methods on two public datasets. We report on the statistical variability of the performance measurements. The third contribution is the release of a repository of the algorithms, datasets and recipes for NLU evaluation. Concept Tagging for Natural Language Understanding: Two Decadelong Algorithm Development
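To make the task concrete: concept tagging is usually cast as sequence labeling, with span labels drawn from a domain ontology, often in BIO notation. A tiny hand-made example in the style of a flight-booking domain; the utterance and labels are illustrative, not taken from the paper:

```python
# Illustrative only: concept tagging as sequence labeling (BIO notation).
# The utterance and the ontology labels below are hand-made examples.
utterance = ["show", "me", "flights", "from", "Boston", "to", "Denver"]
tags = ["O", "O", "O", "O", "B-fromloc.city", "O", "B-toloc.city"]

for word, tag in zip(utterance, tags):
    print(f"{word:10s} {tag}")
```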
Why Would Prosthetic Arms Need to See or Connect to Cloud AI?
*Based on “Connected Arms”, a keynote talk at the O’Reilly AI Conference delivered by Joseph Sirosh, CTO for AI at Microsoft. Content reposted from this O’Reilly Media website.*
Magister Dixit
“Start small and go big: Analytical projects should not be planned across an entire company, or even division-wide. Initial pilots should focus on small, identifiable challenges and work to resolve those challenges. Once a project has been successfully piloted and measured, other teams within the organization will see the value in the new analytical technologies, and also understand the organizational changes required to adopt a new mindset and technology.” Dell (2014)
“Check out table 4.”
A colleague sent along this article and writes:
If you did not already know
Discrete Choice
In economics, discrete choice models, or qualitative choice models, describe, explain, and predict choices between two or more discrete alternatives, such as entering or not entering the labor market, or choosing between modes of transport. Such choices contrast with standard consumption models in which the quantity of each good consumed is assumed to be a continuous variable. In the continuous case, calculus methods (e.g. first-order conditions) can be used to determine the optimum amount chosen, and demand can be modeled empirically using regression analysis. On the other hand, discrete choice analysis examines situations in which the potential outcomes are discrete, such that the optimum is not characterized by standard first-order conditions. Thus, instead of examining ‘how much’ as in problems with continuous choice variables, discrete choice analysis examines ‘which one.’ However, discrete choice analysis can also be used to examine the chosen quantity when only a few distinct quantities must be chosen from, such as the number of vehicles a household chooses to own [1] and the number of minutes of telecommunications service a customer decides to purchase.[2] Techniques such as logistic regression and probit regression can be used for empirical analysis of discrete choice.…
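As a minimal empirical sketch, the binary case (e.g., entering or not entering the labor market) can be fit with a logit model; the data, covariates and coefficients below are synthetic and purely illustrative:

```python
import numpy as np
import statsmodels.api as sm

# Minimal sketch: binary discrete choice (enter the labor market or not)
# estimated with a logit model.  All data here are synthetic.
rng = np.random.default_rng(0)
n = 1000
wage_offer = rng.normal(20, 5, size=n)       # invented covariate
childcare_cost = rng.normal(10, 3, size=n)   # invented covariate

# Invented latent utility: enter if utility plus logistic noise exceeds 0,
# which is exactly the data-generating process a logit model assumes.
utility = 0.3 * wage_offer - 0.4 * childcare_cost - 1.0
enter = (utility + rng.logistic(size=n) > 0).astype(int)

X = sm.add_constant(np.column_stack([wage_offer, childcare_cost]))
model = sm.Logit(enter, X).fit(disp=False)
print(model.params)   # roughly recovers the invented coefficients
```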
Document worth reading: “Accelerating CNN inference on FPGAs: A Survey”
Convolutional Neural Networks (CNNs) are currently adopted to solve an ever greater number of problems, ranging from speech recognition to image classification and segmentation. The large amount of processing required by CNNs calls for dedicated and tailored hardware support methods. Moreover, CNN workloads have a streaming nature, well suited to reconfigurable hardware architectures such as FPGAs. The amount and diversity of research on the subject of CNN FPGA acceleration within the last 3 years demonstrates the tremendous industrial and academic interest. This paper presents the state of the art in CNN inference accelerators on FPGAs. The computational workloads, their parallelism and the involved memory accesses are analyzed. At the level of neurons, optimizations of the convolutional and fully connected layers are explained and the performance of the different methods is compared. At the network level, approximate computing and datapath optimization methods are covered and state-of-the-art approaches are compared. The methods and tools investigated in this survey represent the recent trends in FPGA CNN inference accelerators and will fuel future advances in efficient hardware deep learning. Accelerating CNN inference on FPGAs: A Survey