What's new on arXiv

InfoSSM: Interpretable Unsupervised Learning of Nonparametric State-Space Model for Multi-modal Dynamics

The goal of system identification is to learn the underlying physical dynamics behind observed time-series data. Gaussian process state-space models (GPSSMs) have been widely studied for modeling nonparametric, probabilistic dynamics; GPs are not only capable of representing nonlinear dynamics but also estimate the uncertainty of predictions and avoid over-fitting. Traditional GPSSMs, however, are based on a Gaussian transition model and thus often have difficulty describing multi-modal motions. To resolve this challenge, this thesis proposes a model using multiple GPs and extends the GPSSM to an information-theoretic framework by introducing a mutual information regularizer that helps the model learn an interpretable and disentangled representation of the multi-modal transition dynamics. Experimental results show that the proposed model not only successfully represents the observed system but also distinguishes the dynamics mode that governs a given observation sequence.

Educational Note: Paradoxical Collider Effect in the Analysis of Non-Communicable Disease Epidemiological Data: a reproducible illustration and web application

Parameter Estimation of Heavy-Tailed AR Model with Missing Data via Stochastic EM

The autoregressive (AR) model is widely used to understand time series data. Traditionally, the innovation noise of the AR model is assumed Gaussian. However, many time series applications, for example financial time series, are non-Gaussian, so AR models with more general heavy-tailed innovations are preferred. Another issue that frequently occurs in time series is missing values, due to system record failures or unexpected data loss. Although there are numerous works on Gaussian AR time series with missing values, to the best of our knowledge no existing work addresses missing data for the heavy-tailed AR model. In this paper, we consider this issue for the first time and propose an efficient framework for parameter estimation from incomplete heavy-tailed time series based on stochastic approximation expectation maximization (SAEM) coupled with a Markov chain Monte Carlo (MCMC) procedure. The proposed algorithm is computationally cheap and easy to implement. The convergence of the proposed algorithm to a stationary point of the observed-data likelihood is rigorously proved. Extensive simulations on synthetic and real datasets demonstrate the efficacy of the proposed framework.
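The model this abstract describes can be made concrete with a small numpy sketch: an AR(1) series with Student-t innovations, a random missing-value mask, and one crude stochastic imputation sweep. The Gaussian-case conditional mean used below is my simplification, not the paper's exact SAEM/MCMC sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series with heavy-tailed Student-t innovations:
#   y[t] = phi * y[t-1] + eps[t],  eps[t] ~ t(nu)
phi, nu, n = 0.8, 3.0, 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_t(nu)

# Remove roughly 10% of the observations at random.
missing = rng.random(n) < 0.10
missing[0] = missing[-1] = False          # keep the endpoints observed
y_obs = np.where(missing, np.nan, y)

# One stochastic imputation sweep in the spirit of SAEM/MCMC: draw each
# missing value around its (Gaussian-case) conditional mean given the
# neighbours; the paper's actual sampler handles the t-innovations exactly.
y_imp = y_obs.copy()
for t in np.flatnonzero(missing):
    left = y_imp[t - 1] if not np.isnan(y_imp[t - 1]) else 0.0
    right = y_imp[t + 1] if not np.isnan(y_imp[t + 1]) else 0.0
    mean = phi * (left + right) / (1.0 + phi**2)
    y_imp[t] = mean + 0.5 * rng.standard_t(nu)

# Re-estimate phi from the completed series by least squares.
phi_hat = float(y_imp[:-1] @ y_imp[1:] / (y_imp[:-1] @ y_imp[:-1]))
```

A full SAEM run would alternate such imputation draws with parameter updates until convergence.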

A Methodology for Search Space Reduction in QoS Aware Semantic Web Service Composition

The semantic information regulates the expressiveness of a web service. State-of-the-art approaches in web services research have used the semantics of a web service for different purposes, mainly for service discovery, composition, and execution. In this paper, our main focus is on semantics-driven Quality of Service (QoS) aware service composition. Most contemporary approaches to service composition use semantic information to combine services appropriately into a composition solution. In this paper, however, our intention is to use the semantic information to expedite the service composition algorithm itself. We present a service composition framework that uses the semantic information of web services to generate clusters in which the services are semantically related. Our final aim is to construct a composition solution from these clusters that scales efficiently to large service spaces while ensuring solution quality. Experimental results show the efficiency of our proposed method.

FastDeepIoT: Towards Understanding and Optimizing Neural Network Execution Time on Mobile and Embedded Devices

Data-Driven Clustering via Parameterized Lloyd’s Families

Algorithms for clustering points in metric spaces are a long-studied area of research. Clustering has seen a multitude of work both theoretically, in understanding the approximation guarantees possible for many objective functions such as k-median and k-means clustering, and experimentally, in finding the fastest algorithms and seeding procedures for Lloyd's algorithm. The performance of a given clustering algorithm depends on the specific application at hand, and this may not be known up front. For example, a 'typical instance' may vary depending on the application, and different clustering heuristics perform differently depending on the instance. In this paper, we define an infinite family of algorithms generalizing Lloyd's algorithm, with one parameter controlling the initialization procedure, and another parameter controlling the local search procedure. This family of algorithms includes the celebrated k-means++ algorithm, as well as the classic farthest-first traversal algorithm. We design efficient learning algorithms which receive samples from an application-specific distribution over clustering instances and learn a near-optimal clustering algorithm from the class. We show the best parameters vary significantly across datasets such as MNIST, CIFAR, and mixtures of Gaussians. Our learned algorithms never perform worse than k-means++, and on some datasets we see significant improvements.
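The seeding side of such a parameterized family can be sketched in a few lines. The exponent `alpha` below is my assumed parameterization (the paper's exact family may differ): weighting each candidate by its distance to the nearest chosen center raised to a power interpolates between uniform seeding, k-means++, and farthest-first traversal.

```python
import numpy as np

def seed_centers(X, k, alpha, rng):
    """Distance-weighted seeding: each new center is drawn with probability
    proportional to d(x)**alpha, where d(x) is the distance to the nearest
    chosen center.  alpha = 0 is uniform seeding, alpha = 2 recovers
    k-means++, and large alpha approaches farthest-first traversal."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        w = d ** alpha
        centers.append(X[rng.choice(len(X), p=w / w.sum())])
    return np.array(centers)

rng = np.random.default_rng(1)
# Three well-separated Gaussian blobs in the plane.
X = np.concatenate([rng.normal(m, 0.3, size=(60, 2)) for m in (0.0, 5.0, 10.0)])
centers = seed_centers(X, 3, alpha=2.0, rng=rng)
```

Learning a good `alpha` from sampled instances, as the paper proposes, would wrap this routine in an outer evaluation loop.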

The Key Concepts of Ethics of Artificial Intelligence – A Keyword based Systematic Mapping Study

The growing influence and decision-making capacities of autonomous systems and Artificial Intelligence in our lives force us to consider the values embedded in these systems. But how should ethics be implemented in these systems? In this study, the solution is seen in philosophical conceptualization as a framework for forming a practical implementation model for the ethics of AI. To take the first steps toward conceptualization, the main concepts used in the field need to be identified. A keyword-based Systematic Mapping Study (SMS) of the keywords used in AI and ethics was conducted to help identify, define and compare the main concepts used in current AI ethics discourse. Out of 1062 papers retrieved, the SMS discovered 37 re-occurring keywords in 83 academic papers. We suggest that this focus on keywords is the first step in guiding and providing direction for future research in the AI ethics field.

Characterising Across-Stack Optimisations for Deep Convolutional Neural Networks

Convolutional Neural Networks (CNNs) are extremely computationally demanding, presenting a large barrier to their deployment on resource-constrained devices. Since such systems are where some of their most useful applications lie (e.g. obstacle detection for mobile robots, vision-based medical assistive technology), significant bodies of work from both machine learning and systems communities have attempted to provide optimisations that will make CNNs available to edge devices. In this paper we unify the two viewpoints in a Deep Learning Inference Stack and take an across-stack approach by implementing and evaluating the most common neural network compression techniques (weight pruning, channel pruning, and quantisation) and optimising their parallel execution with a range of programming approaches (OpenMP, OpenCL) and hardware architectures (CPU, GPU). We provide comprehensive Pareto curves to instruct trade-offs under constraints of accuracy, execution time, and memory space.
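Two of the compression techniques named above, weight pruning and quantisation, can be illustrated on a raw weight matrix. This numpy sketch is illustrative only; the sparsity level and 8-bit scheme are example choices, and real deployments tune thresholds per layer and often retrain after pruning.

```python
import numpy as np

rng = np.random.default_rng(7)
W = rng.normal(size=(64, 64))   # a stand-in convolutional/dense weight matrix

# Magnitude-based weight pruning: zero out the 90% of weights with the
# smallest absolute value.
sparsity = 0.9
thresh = np.quantile(np.abs(W), sparsity)
W_pruned = np.where(np.abs(W) >= thresh, W, 0.0)

# Simple linear quantisation of the surviving weights to 8-bit integers.
scale = np.abs(W_pruned).max() / 127.0
W_q = np.round(W_pruned / scale).astype(np.int8)
W_deq = W_q.astype(np.float64) * scale   # dequantised for accuracy checks
```

The Pareto curves in the paper trace how choices like `sparsity` and bit width trade accuracy against execution time and memory.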

FRAGE: Frequency-Agnostic Word Representation

Continuous word representation (aka word embedding) is a basic building block in many neural network-based models used in natural language processing tasks. Although it is widely accepted that words with similar semantics should be close to each other in the embedding space, we find that word embeddings learned in several tasks are biased towards word frequency: the embeddings of high-frequency and low-frequency words lie in different subregions of the embedding space, and the embedding of a rare word and a popular word can be far from each other even if they are semantically similar. This makes learned word embeddings ineffective, especially for rare words, and consequently limits the performance of these neural network models. In this paper, we develop a neat, simple yet effective way to learn FRequency-AGnostic word Embedding (FRAGE) using adversarial training. We conducted comprehensive studies on ten datasets across four natural language processing tasks, including word similarity, language modeling, machine translation and text classification. Results show that with FRAGE, we achieve higher performance than the baselines in all tasks.

HDTCat: let’s make HDT scale

HDT (Header, Dictionary, Triples) is a serialization format for RDF. HDT has become very popular in recent years because it stores RDF data with a small disk footprint while remaining queryable. For this reason HDT is often used when scalability becomes an issue. Once RDF data is serialized into HDT, the disk footprint to store it and the memory footprint to query it are very low. However, generating HDT files from raw text RDF serializations (such as N-Triples) is a time-consuming and, especially, memory-consuming task. In this publication we present HDTCat, an algorithm and command-line tool to join two HDT files with a low memory footprint. HDTCat can be used in a divide-and-conquer strategy to generate HDT files from huge datasets with a low memory footprint.
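The streaming idea behind such a merge can be illustrated on plain sorted triple streams. HDTCat itself operates on HDT's compressed dictionary and triple structures; this sketch only shows why merging two sorted inputs needs memory independent of their total size.

```python
import heapq

# Two already-sorted streams of RDF-like triples.  HDT keeps its triples
# sorted, which is what makes a lazy streaming merge possible.
a = [("s1", "p1", "o1"), ("s2", "p1", "o3")]
b = [("s1", "p1", "o2"), ("s3", "p2", "o1")]

def cat_streams(*streams):
    """Merge sorted triple streams lazily, dropping consecutive duplicates,
    so memory use stays bounded regardless of the number of triples."""
    last = None
    for triple in heapq.merge(*streams):
        if triple != last:
            yield triple
        last = triple

merged = list(cat_streams(iter(a), iter(b)))
```

Because `heapq.merge` consumes its inputs one element at a time, the same pattern scales to streams read directly from disk.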

Improving Moderation of Online Discussions via Interpretable Neural Models

The growing volume of comments makes online discussions difficult to moderate by human moderators alone. Antisocial behavior is a common occurrence that often discourages other users from participating in discussions. We propose a neural-network-based method that partially automates the moderation process. It consists of two steps. First, we detect inappropriate comments for moderators to review. Second, we highlight the inappropriate parts within these comments to make moderation faster. We evaluated our method on data from a major Slovak news discussion platform.

Exploration vs. Exploitation in Team Formation

An online labor platform faces an online learning problem in matching workers with jobs and using the performance on these jobs to create better future matches. This learning problem is complicated by the rise of complex tasks on these platforms, such as web development and product design, that require a team of workers to complete. The success of a job is now a function of the skills and contributions of all workers involved, which may be unknown to both the platform and the client who posted the job. These team matchings result in a structured correlation between what is known about the individuals and this information can be utilized to create better future matches. We analyze two natural settings where the performance of a team is dictated by its strongest and its weakest member, respectively. We find that both problems pose an exploration-exploitation tradeoff between learning the performance of untested teams and repeating previously tested teams that resulted in a good performance. We establish fundamental regret bounds and design near-optimal algorithms that uncover several insights into these tradeoffs.
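The weakest-member setting described above can be simulated with a toy bandit over whole teams. The epsilon-greedy rule below is a simple stand-in of my own, not one of the paper's near-optimal algorithms, which additionally exploit the structured correlation between teams that share members.

```python
import numpy as np

rng = np.random.default_rng(2)
skills = np.array([0.9, 0.6, 0.8, 0.3])   # hidden worker skill levels
teams = [(0, 1), (0, 2), (2, 3), (1, 3)]  # candidate two-worker teams

def play(team):
    # Weakest-member setting: the job succeeds with probability equal
    # to the lowest skill on the team.
    return float(rng.random() < skills[list(team)].min())

# Epsilon-greedy exploration over teams: mostly repeat the empirically
# best team, occasionally try an untested one.
counts = np.zeros(len(teams))
wins = np.zeros(len(teams))
for _ in range(2000):
    if rng.random() < 0.1:
        i = int(rng.integers(len(teams)))
    else:
        i = int(np.argmax(wins / np.maximum(counts, 1.0)))
    counts[i] += 1
    wins[i] += play(teams[i])
best = int(np.argmax(wins / np.maximum(counts, 1.0)))
```

Here team `(0, 2)` has the highest weakest-member skill (0.8), so a sound explore-exploit strategy should converge toward it.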

Argumentation Mining: Exploiting Multiple Sources and Background Knowledge

The field of Argumentation Mining has arisen from the need to determine the underlying causes of an expressed opinion and from the urgency to extend the established fields of Opinion Mining and Sentiment Analysis. Recent progress in the wider field of Artificial Intelligence, in combination with the data available through the Social Web, has created great potential for every sub-field of Natural Language Processing, including Argumentation Mining.

Compressed sensing with a jackknife and a bootstrap

Compressed sensing proposes to reconstruct more degrees of freedom in a signal than the number of values actually measured. Compressed sensing therefore risks introducing errors — inserting spurious artifacts or masking the abnormalities that medical imaging seeks to discover. The present case study of estimating errors using the standard statistical tools of a jackknife and a bootstrap yields error ‘bars’ in the form of full images that are remarkably representative of the actual errors (at least when evaluated and validated on data sets for which the ground truth and hence the actual error is available). These images show the structure of possible errors — without recourse to measuring the entire ground truth directly — and build confidence in regions of the images where the estimated errors are small.
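The jackknife half of this recipe can be sketched for a scalar statistic; for the sample mean it reproduces the textbook standard error exactly. This generic numpy sketch is mine, not the paper's image-space estimator, which applies the same leave-one-out idea to full reconstructions.

```python
import numpy as np

def jackknife_se(x, stat):
    """Leave-one-out jackknife estimate of the standard error of stat(x)."""
    n = len(x)
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, size=200)
se = jackknife_se(x, np.mean)
# For the sample mean, the jackknife SE equals the usual s / sqrt(n).
```

In the paper's setting, `stat` would be an entire compressed-sensing reconstruction, so the "error bar" becomes a full image rather than a scalar.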

Multi-Task Learning for Machine Reading Comprehension

We propose a multi-task learning framework to jointly train a Machine Reading Comprehension (MRC) model on multiple datasets across different domains. Key to the proposed method is to learn robust and general contextual representations with the help of out-domain data in a multi-task framework. Empirical study shows that the proposed approach is orthogonal to the existing pre-trained representation models, such as word embedding and language models. Experiments on the Stanford Question Answering Dataset (SQuAD), the Microsoft MAchine Reading COmprehension Dataset (MS MARCO), NewsQA and other datasets show that our multi-task learning approach achieves significant improvement over state-of-the-art models in most MRC tasks.

Interpretable Reinforcement Learning with Ensemble Methods

We propose to use boosted regression trees as a way to compute human-interpretable solutions to reinforcement learning problems. Boosting combines several regression trees to improve their accuracy without significantly reducing their inherent interpretability. Prior work has focused independently on reinforcement learning and on interpretable machine learning, but there has been little progress in interpretable reinforcement learning. Our experimental results show that boosted regression trees compute solutions that are both interpretable and match the quality of leading reinforcement learning methods.

Focused econometric estimation for noisy and small datasets: A Bayesian Minimum Expected Loss estimator approach

Central to many inferential situations is the estimation of rational functions of parameters. The mainstream in statistics and econometrics estimates these quantities based on the plug-in approach without consideration of the main objective of the inferential situation. We propose the Bayesian Minimum Expected Loss (MELO) approach focusing explicitly on the function of interest, and calculating its frequentist variability. Asymptotic properties of the MELO estimator are similar to the plug-in approach. Nevertheless, simulation exercises show that our proposal is better in situations characterized by small sample sizes and noisy models. In addition, we observe in the applications that our approach gives lower standard errors than frequently used alternatives when datasets are not very informative.

Using Eigencentrality to Estimate Joint, Conditional and Marginal Probabilities from Mixed-Variable Data: Method and Applications

The ability to estimate joint, conditional and marginal probability distributions over some set of variables is of great utility for many common machine learning tasks. However, estimating these distributions can be challenging, particularly in the case of data containing a mix of discrete and continuous variables. This paper presents a non-parametric method for estimating these distributions directly from a dataset. The data are first represented as a graph consisting of object nodes and attribute value nodes. Depending on the distribution to be estimated, an appropriate eigenvector equation is then constructed. This equation is then solved to find the corresponding stationary distribution of the graph, from which the required distributions can then be estimated and sampled from. The paper demonstrates how the method can be applied to many common machine learning tasks including classification, regression, missing value imputation, outlier detection, random vector generation, and clustering.
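The graph-plus-eigenvector construction can be illustrated on a tiny object-attribute matrix. The alternating random walk and power iteration below are my simplified stand-in for the paper's eigenvector equations, which are chosen per target distribution.

```python
import numpy as np

# Toy dataset as a bipartite graph: rows are object nodes, columns are
# attribute-value nodes; an entry marks that the object has that value.
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 0, 0]], dtype=float)

# Random walk alternating object -> attribute -> object.  Its stationary
# distribution over objects is a leading-eigenvector problem, solved here
# by plain power iteration.
P_oa = A / A.sum(axis=1, keepdims=True)        # object -> attribute steps
P_ao = (A / A.sum(axis=0, keepdims=True)).T    # attribute -> object steps
P = P_oa @ P_ao                                # object -> object chain

pi = np.full(len(P), 1.0 / len(P))
for _ in range(1000):
    pi = pi @ P
pi /= pi.sum()
```

The resulting `pi` can then be read off as (or marginalized into) the probability estimates the method needs, and sampled from directly.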

Removing the Feature Correlation Effect of Multiplicative Noise

Multiplicative noise, including dropout, is widely used to regularize deep neural networks (DNNs), and is shown to be effective in a wide range of architectures and tasks. From an information perspective, we consider injecting multiplicative noise into a DNN as training the network to solve the task with noisy information pathways, which leads to the observation that multiplicative noise tends to increase the correlation between features, so as to increase the signal-to-noise ratio of information pathways. However, high feature correlation is undesirable, as it increases redundancy in representations. In this work, we propose non-correlating multiplicative noise (NCMN), which exploits batch normalization to remove the correlation effect in a simple yet effective way. We show that NCMN significantly improves the performance of standard multiplicative noise on image classification tasks, providing a better alternative to dropout for batch-normalized networks. Additionally, we present a unified view of NCMN and shake-shake regularization, which explains the performance gain of the latter.

NAIS: Neural Attentive Item Similarity Model for Recommendation

Item-to-item collaborative filtering (aka item-based CF) has long been used for building recommender systems in industrial settings, owing to its interpretability and efficiency in real-time personalization. It builds a user's profile from her historically interacted items, recommending new items that are similar to the user's profile. As such, the key to an item-based CF method is the estimation of item similarities. Early approaches use statistical measures such as cosine similarity and the Pearson coefficient to estimate item similarities, which are less accurate since they lack tailored optimization for the recommendation task. In recent years, several works attempt to learn item similarities from data, by expressing the similarity as an underlying model and estimating model parameters by optimizing a recommendation-aware objective function. While extensive efforts have been made to use shallow linear models for learning item similarities, there has been relatively little work exploring nonlinear neural network models for item-based CF. In this work, we propose a neural network model named Neural Attentive Item Similarity model (NAIS) for item-based CF. The key to our design of NAIS is an attention network, which is capable of distinguishing which historical items in a user profile are more important for a prediction. Compared to the state-of-the-art item-based CF method Factored Item Similarity Model (FISM), NAIS has stronger representation power with only a few additional parameters brought by the attention network. Extensive experiments on two public benchmarks demonstrate the effectiveness of NAIS. This work is the first attempt to design neural network models for item-based CF, opening up new research possibilities for future developments of neural recommender systems.
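The attention idea can be sketched in a few lines: score a target item against each history item, then weight those similarities by learned importances. The dot-product attention and random embeddings below are illustrative stand-ins; NAIS itself uses a trained attention network over embedding products.

```python
import numpy as np

rng = np.random.default_rng(6)
n_items, d = 10, 4
P = rng.normal(size=(n_items, d))   # target-item embeddings
Q = rng.normal(size=(n_items, d))   # history-item embeddings

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def score(target, history):
    """Attention-weighted item-based CF score: each history item's
    similarity to the target is weighted by an attention value (here a
    simple dot-product attention standing in for NAIS's attention MLP)."""
    sims = Q[history] @ P[target]   # similarity per history item
    attn = softmax(sims)            # importance of each history item
    return float(attn @ sims)

s = score(target=3, history=[0, 1, 5, 7])
```

Because the attention weights increase with similarity, the attended score never falls below the plain (FISM-style) average of the similarities.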

Adversarial Training Towards Robust Multimedia Recommender System

Prosocial or Selfish? Agents with different behaviors for Contract Negotiation using Reinforcement Learning

We present an effective technique for training deep learning agents capable of negotiating on a set of clauses in a contract agreement using a simple communication protocol. We use Multi-Agent Reinforcement Learning to train both agents simultaneously as they negotiate with each other in the training environment. We also model selfish and prosocial behavior to varying degrees in these agents. Empirical evidence is provided showing consistency in agent behaviors. We further train a meta agent with a mixture of behaviors by learning an ensemble of different models using reinforcement learning. Finally, to ascertain the deployability of the negotiating agents, we conducted experiments pitting the trained agents against human players. Results demonstrate that the agents are able to hold their own against human players, often emerging as winners in the negotiation. Our experiments demonstrate that the meta agent is able to reasonably emulate human behavior.

Latent Topic Conversational Models

Latent variable models have been a preferred choice in conversational modeling over sequence-to-sequence (seq2seq) models, which tend to generate generic and repetitive responses. Even so, training latent variable models remains difficult. In this paper, we propose the Latent Topic Conversational Model (LTCM), which augments seq2seq with a neural latent topic component to better guide response generation and make training easier. The neural topic component encodes information from the source sentence to build a global 'topic' distribution over words, which is then consulted by the seq2seq model at each generation step. We study in detail how the latent representation is learnt in both the vanilla model and LTCM. Our extensive experiments contribute to a better understanding and training of conditional latent models for language. Our results show that by sampling from the learnt latent representations, LTCM can generate diverse and interesting responses. In a subjective human evaluation, the judges also confirm that LTCM is the overall preferred option.

Novelty-organizing team of classifiers in noisy and dynamic environments

In the real world, the environment is constantly changing, with the input variables under the effect of noise. However, few algorithms have been shown to work under those circumstances. Here, Novelty-Organizing Team of Classifiers (NOTC) is applied to the continuous-action mountain car as well as two variations of it: a noisy mountain car and an unstable-weather mountain car. These problems take, respectively, noise and changing problem dynamics into account. Moreover, NOTC is compared with NeuroEvolution of Augmenting Topologies (NEAT) on these problems, revealing a trade-off between the approaches. While NOTC achieves the best performance in all of the problems, NEAT needs fewer trials to converge. It is demonstrated that NOTC achieves better performance because of its division of the input space (creating easier problems). Unfortunately, this division of the input space also requires some time to bootstrap.

A simple test for constant correlation matrix

We propose a simple procedure to test for changes in the correlation matrix at an unknown point in time. The test requires constant expectations and variances, but only mild assumptions on the serial dependence structure. We test for a breakdown in the correlation structure using eigenvalue decomposition. We derive the asymptotic distribution under the null hypothesis and apply the test to stock returns. We compute the power of our test and compare it with the power of other known tests.
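The raw ingredient of such a test, comparing the eigenvalues of the correlation matrix on each side of a candidate break point, can be shown on simulated data. This sketch only exhibits the eigenvalue gap; the paper builds a formal statistic with an asymptotic null distribution on top of it.

```python
import numpy as np

rng = np.random.default_rng(5)

def corr_sample(rho, n):
    # Bivariate normal draws with correlation rho.
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

# The correlation breaks from 0.2 to 0.8 halfway through the series.
x = np.vstack([corr_sample(0.2, 500), corr_sample(0.8, 500)])

# Eigenvalues of the sample correlation matrix before and after the
# candidate break point; a large gap signals a breakdown in structure.
eig_pre = np.linalg.eigvalsh(np.corrcoef(x[:500].T))
eig_post = np.linalg.eigvalsh(np.corrcoef(x[500:].T))
gap = np.abs(eig_pre - eig_post).max()
```

For a 2x2 correlation matrix the eigenvalues are 1 ± rho, so the break from 0.2 to 0.8 should produce a gap near 0.6.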

Capacity Control of ReLU Neural Networks by Basis-path Norm

Recently, path norm was proposed as a new capacity measure for neural networks with Rectified Linear Unit (ReLU) activation function, which takes the rescaling-invariant property of ReLU into account. It has been shown that the generalization error bound in terms of the path norm explains the empirical generalization behaviors of ReLU neural networks better than that of other capacity measures. Moreover, optimization algorithms which take the path norm as the regularization term of the loss function, like Path-SGD, have been shown to achieve better generalization performance. However, the path norm counts the values of all paths, and hence the capacity measure based on path norm could be improperly influenced by the dependency among different paths. It is also known that each path of a ReLU network can be represented by a small group of linearly independent basis paths with multiplication and division operations, which indicates that the generalization behavior of the network depends on only a few basis paths. Motivated by this, we propose a new norm, the Basis-path Norm, based on a group of linearly independent paths to measure the capacity of neural networks more accurately. We establish a generalization error bound based on this basis-path norm, and show that it explains the generalization behaviors of ReLU networks more accurately than previous capacity measures via extensive experiments. In addition, we develop optimization algorithms which minimize the empirical risk regularized by the basis-path norm. Our experiments on benchmark datasets demonstrate that the proposed regularization method achieves clearly better performance on the test set than previous regularization approaches.
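The path norm and the rescaling invariance that motivates it are easy to verify on a bias-free two-layer ReLU network. This sketch shows the plain path norm only, not the basis-path norm the paper constructs from linearly independent paths.

```python
import numpy as np

# Path norm of a bias-free two-layer ReLU network f(x) = W2 @ relu(W1 @ x):
# the sum over all input -> hidden -> output paths of |W1[j, i] * W2[k, j]|.
rng = np.random.default_rng(4)
W1 = rng.normal(size=(5, 3))   # hidden x input
W2 = rng.normal(size=(2, 5))   # output x hidden

path_norm = np.sum(np.abs(W2) @ np.abs(W1))

# Rescaling invariance of ReLU: scaling one hidden unit's incoming weights
# by c and its outgoing weights by 1/c leaves both the network function
# and the path norm unchanged.
c = 3.0
W1s, W2s = W1.copy(), W2.copy()
W1s[0, :] *= c
W2s[:, 0] /= c
path_norm_scaled = np.sum(np.abs(W2s) @ np.abs(W1s))
```

The paper's observation is that these path values are not independent: a few basis paths determine the rest, which is what the Basis-path Norm exploits.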

Efficient sampling of conditioned Markov jump processes

We consider the task of generating draws from a Markov jump process (MJP) between two time points at which the process is known. The resulting draws are typically termed bridges, and the generation of such bridges plays a key role in simulation-based inference algorithms for MJPs. The problem is challenging due to the intractability of the conditioned process, necessitating the use of computationally intensive methods such as weighted resampling or Markov chain Monte Carlo. An efficient implementation of such schemes requires an approximation of the intractable conditioned hazard/propensity function that is both cheap and accurate. In this paper, we review some existing approaches to this problem before outlining our novel contribution. Essentially, we leverage the tractability of a Gaussian approximation of the MJP and suggest a computationally efficient implementation of the resulting conditioned hazard approximation. We compare and contrast our approach with existing methods using three examples.

Bayesian functional optimisation with shape prior

Real-world experiments are expensive, and thus it is important to reach a target in a minimum number of experiments. Experimental processes often involve control variables that change over time. Such problems can be formulated as functional optimisation problems. We develop a novel Bayesian optimisation framework for such functional optimisation of expensive black-box processes. We represent the control function using the Bernstein polynomial basis and optimise in the coefficient space. We derive the theory and practice required to dynamically adjust the degree of the polynomial, and show how prior information about shape can be integrated. We demonstrate the effectiveness of our approach on short polymer fibre design and on optimising learning-rate schedules for deep networks.
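The Bernstein representation at the heart of this framework is a small formula: B_{k,n}(t) = C(n,k) t^k (1-t)^(n-k), with the control function written as a coefficient-weighted sum of these basis polynomials. The example coefficients below are arbitrary; the optimiser would search over them.

```python
import numpy as np
from math import comb

def bernstein_basis(n, t):
    """All Bernstein basis polynomials B_{k,n}(t) = C(n,k) t^k (1-t)^(n-k)
    for k = 0..n, evaluated at the points t in [0, 1]."""
    t = np.asarray(t)
    return np.array([comb(n, k) * t**k * (1 - t)**(n - k) for k in range(n + 1)])

# A time-varying control function represented by its Bernstein coefficients;
# Bayesian optimisation then operates in this coefficient space.
coeffs = np.array([0.0, 1.0, 0.5, 0.2])   # degree-3 example coefficients
t = np.linspace(0.0, 1.0, 101)
control = coeffs @ bernstein_basis(3, t)

# The basis is a partition of unity, so the curve stays inside the convex
# hull of the coefficients -- convenient for encoding shape priors.
ones = bernstein_basis(3, t).sum(axis=0)
```

Endpoint interpolation (the curve starts at the first coefficient and ends at the last) is another property that makes shape constraints easy to impose.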

Interpretable Textual Neuron Representations for NLP

Input optimization methods, such as Google Deep Dream, create interpretable representations of neurons for computer vision DNNs. We propose and evaluate ways of transferring this technology to NLP. Our results suggest that gradient ascent with a Gumbel-Softmax layer produces n-gram representations that outperform naive corpus search in terms of target neuron activation. The representations highlight differences in syntax awareness between the language and visual models of the Imaginet architecture.

Generative Adversarial Network in Medical Imaging: A Review

Generative adversarial networks have gained a lot of attention in the general computer vision community due to their capability of data generation without explicitly modelling the probability density function, and their robustness to overfitting. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into training and imposing higher-order consistency that has proven useful in many cases, such as domain adaptation, data augmentation, and image-to-image translation. These nice properties have attracted researchers in the medical imaging community, and we have seen quick adoption in many traditional tasks and some novel applications. Based on our observations, this trend will continue to grow, so we conducted a review of the recent advances in medical imaging using the adversarial training scheme, in the hope of benefiting researchers who are interested in this technique.

The Multi-Round Sequential Selection Problem

• Distributed transient frequency control in power networks
• Parameter Synthesis Problems for one parametric clock Timed Automata
• Testing SensoGraph, a geometric approach for fast sensory evaluation
• Binary Proportional Pairing Functions
• Projective Splitting with Forward Steps only Requires Continuity
• Computing Wasserstein Distance for Persistence Diagrams on a Quantum Computer
• A new Fibonacci identity and its associated summation identities
• Weighted Aleksandrov estimates: PDE and stochastic versions
• Pan-disease clustering analysis of the trend of period prevalence
• General Equitable Decompositions for Graphs with Symmetries
• The Kirchhoff Index of Enhanced Hypercubes
• Unbalanced Three-Phase Distribution Grid Topology Estimation and Bus Phase Identification
• Parametric randomization, complex symplectic factorizations, and quadratic-exponential functionals for Gaussian quantum states
• Better Conversations by Modeling, Filtering, and Optimizing for Coherence and Diversity
• SilhoNet: An RGB Method for 3D Object Pose Estimation and Grasp Planning
• Retrieval analysis of 38 WFC3 transmission spectra and resolution of the normalisation degeneracy
• Testing Selective Influence Directly Using Trackball Movement Tasks
• Non-Stationary Covariance Estimation using the Stochastic Score Approximation for Large Spatial Data
• Identification of FIR Systems with Binary Input and Output Observations
• Distributed Robust Dynamic Average Consensus with Dynamic Event-Triggered Communication
• Optimal lower bounds for multiple recurrence
• Predictive Collective Variable Discovery with Deep Bayesian Models
• Non-intersecting Ryser hypergraphs
• The Archive and Package (arcp) URI scheme
• Chain lengths in the type $B$ Tamari lattice
• Categories of Two-Colored Pair Partitions, Part I: Categories Indexed by Cyclic Groups
• Finding cliques using few probes
• Mind Your POV: Convergence of Articles and Editors Towards Wikipedia’s Neutrality Norm
• PAIM: Platoon-based Autonomous Intersection Management
• Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
• A Study on Deep Learning Based Sauvegrain Method for Measurement of Puberty Bone Age
• Astrophysical S-factors, thermonuclear rates, and electron screening potential for the $^3$He(d,p)$^{4}$He Big Bang reaction via a hierarchical Bayesian model
• Approximate Nash Region of the Gaussian Interference Channel with Noisy Output Feedback
• Wearable-based Mediation State Detection in Individuals with Parkinson’s Disease
• A Study of Energy Trading in a Low-Voltage Network: Centralised and Distributed Approaches
• Decentralized P2P Energy Trading under Network Constraints in a Low-Voltage Network
• Sublinear Time Low-Rank Approximation of Distance Matrices
• Aligning Manifolds of Double Pendulum Dynamics Under the Influence of Noise
• Deep-learning models improve on community-level diagnosis for common congenital heart disease lesions
• Leveraging Contact Forces for Learning to Grasp
• Tail redundancy and its characterization of compression of memoryless sources
• Light Field Neural Network
• Positive-Unlabeled Classification under Class Prior Shift and Asymmetric Error
• On the least upper bound for the settling time of a class of fixed-time stable systems
• Extremal curves on Stiefel and Grassmann manifolds
• Extreme Scale De Novo Metagenome Assembly
• Generating 3D Adversarial Point Clouds
• Optimal Deployment of Drone Base Stations for Cellular Communication by Network-based Localization
• Deployment of Drone Base Stations for Cellular Communication Without Apriori User Distribution Information
• NICT’s Neural and Statistical Machine Translation Systems for the WMT18 News Translation Task
• A revisit of the Borch rule for the Principal-Agent Risk-Sharing problem
• Exploring Visual Relationship for Image Captioning
• NICT’s Corpus Filtering Systems for the WMT18 Parallel Corpus Filtering Task
• Asymptotic exponential law for the transition time to equilibrium of the metastable kinetic Ising model with vanishing magnetic field
• New approach for solar tracking systems based on computer vision, low cost hardware and deep learning
• How locating sensors in thermo-acoustic tomography?
• Encoding two-dimensional range top-k queries revisited
• Measurement error in continuous endpoints in randomised trials: problems and solutions
• Faster Training of Mask R-CNN by Focusing on Instance Boundaries
• What Role Can NOMA Play in Massive MIMO?
• Detect, anticipate and generate: Semi-supervised recurrent latent variable models for human activity modeling
• The Aqualoc Dataset: Towards Real-Time Underwater Localization from a Visual-Inertial-Pressure Acquisition System
• Deep Learning Based Rib Centerline Extraction and Labeling
• Dynamical Optimal Transport on Discrete Surfaces
• Analyzing behavioral trends in community driven discussion platforms like Reddit
• Ultrafast Calculation of Diffuse Scattering from Atomistic Models
• Monochromatic trees in random tournaments
• On the Computation of the Weight Distribution of Linear Codes over Finite Fields
• Counting the uncountable: deep semantic density estimation from Space
• Points of infinite multiplicity of planar Brownian motion: measures and local times
• Dual Reconstruction Nets for Image Super-Resolution with Gradient Sensitive Loss
• A unifying Bayesian approach for preterm brain-age prediction that models EEG sleep transitions over age
• One-shot Capacity bounds on the Simultaneous Transmission of Classical and Quantum Information
• A topological obstruction to the controllability of nonlinear wave equations with bilinear control term
• Simple, fast and accurate evaluation of the action of the exponential of a rate matrix on a probability vector
• Pommerman: A Multi-Agent Playground
• The topological support of the z-measures on the Thoma simplex
• Convergence and Open-Mindedness of Discrete and Continuous Semantics for Bipolar Weighted Argumentation (Technical Report)
• Bifurcation in the angular velocity of a circular disk propelled by symmetrically distributed camphor pills
• A survey of advances in epistemic logic program solvers
• The distribution of information for sEMG signals in the rectal cancer treatment process
• Thermal coupling of silicon oscillators in cryogen-free dilution refrigerators
• Survey: Sixty Years of Douglas–Rachford
• String Transduction with Target Language Models and Insertion Handling
• Direct Reconstruction of Saturated Samples in Band-Limited OFDM Signals
• TStarBots: Defeating the Cheating Level Builtin AI in StarCraft II in the Full Game
• Multi-agent structured optimization over message-passing architectures with bounded communication delays
• Some remarks on combinatorial wall-crossing
• 3D Human Pose Estimation with Siamese Equivariant Embedding
• Exploring the Impact of Password Dataset Distribution on Guessing
• Noise Statistics Oblivious GARD For Robust Regression With Sparse Outliers
• Non-Orthogonal Multiple Access: Common Myths and Critical Questions
• Deterministic limit of temporal difference reinforcement learning for stochastic games
• Sensitivity Function Trade-offs for Networks with a String Topology
• Modelling the data and not the images in FMRI
• Unsupervised cross-lingual matching of product classifications
• Counterexample to Equivalent Nodal Analysis for Voltage Stability Assessment
• Pose Estimation for Non-Cooperative Spacecraft Rendezvous Using Convolutional Neural Networks
• Graph magnitude homology via algebraic Morse theory
• A threshold for cutoff in two-community random graphs
• Prime-Residue-Class of Uniform Charges on the Integers
• Algorithmic aspects of broadcast independence
• The Measure Aspect of Quantum Uncertainty, of Entanglement, and Respective Entropies
• Distributionally Robust Chance Constrained Optimal Power Flow Assuming Unimodal Distributions with Misspecified Modes
• Audio Based Disambiguation Of Music Genre Tags
• MTLE: A Multitask Learning Encoder of Visual Feature Representations for Video and Movie Description
• DPPy: Sampling
Determinantal Point Processes with Python• A Novel Warehouse Multi-Robot Automation System with Semi-Complete and Computationally Efficient Path Planning and Adaptive Genetic Task Allocation Algorithms• Markov selection for the stochastic compressible Navier–Stokes system• LFRic: Meeting the challenges of scalability and performance portability in Weather and Climate models• Towards Dialogue-based Navigation with Multivariate Adaptation driven by Intention and Politeness for Social Robots• Music Mood Detection Based On Audio And Lyrics With Deep Neural Net• Feedback Control of a Cassie Bipedal Robot: Walking, Standing, and Riding a Segway• Modeling Online Discourse with Coupled Distributed Topics• Time-varying Projected Dynamical Systems with Applications to Feedback Optimization of Power Systems• Online control of the false discovery rate in biomedical research• Clustering students’ open-ended questionnaire answers• A Game-Theoretic Analysis of Shard-Based Permissionless Blockchains• Combinatorial and Structural Results for gamma-Psi-dimensions• An Information Matrix Approach for State Secrecy• Symmetric Shannon capacity is the independence number minus 1• Analog Coding Frame-work• Towards Large-Scale Video Video Object Mining
