Bandit algorithms for real-time data capture on large social media
We study the problem of real-time data capture on social media. Because of the restrictions these media impose, and of the sheer volume of information produced, it is impossible to collect all the data generated by social networks such as Twitter. To gather enough relevant information related to a predefined need, it is therefore necessary to focus on a subset of the information sources. In this work, we focus on user-centered data capture and consider each account of a social network as a source that can be listened to at each iteration of a data capture process, in order to collect the contents it produces. This process, whose aim is to maximize the quality of the information gathered, is constrained by the number of users that can be monitored simultaneously. The problem of selecting a subset of accounts to listen to over time is a sequential decision problem under constraints, which we formalize as a bandit problem with multiple selections. We therefore propose several bandit models to identify the most relevant users in real time. First, we study the case of the stochastic bandit, in which each user corresponds to a stationary distribution. Then, we introduce two contextual bandit models, one stationary and one non-stationary, in which the utility of each user can be estimated by assuming some underlying structure in the reward space. The first approach introduces the notion of profile, which corresponds to the average behavior of a user. The second approach takes into account the activity of a user in order to predict their future behavior. Finally, we investigate models able to capture complex temporal dependencies between users, using a latent space within which information transits from one iteration to the next. Each of the proposed approaches is validated on both artificial and real datasets.
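The first of these models is a classical stochastic bandit with multiple plays per round. As a rough illustration of the setting (not the authors' exact algorithm), here is a minimal UCB-style sketch in which k accounts are listened to at each iteration and the reward is the relevance of the collected content; all names and constants are ours:

```python
import numpy as np

def ucb_multiple_plays(rewards_fn, n_users, k, horizon, c=2.0):
    """Listen to k of n_users accounts per round with a UCB rule."""
    counts = np.zeros(n_users)   # how often each user has been listened to
    total = np.zeros(n_users)    # cumulative reward per user
    for t in range(1, horizon + 1):
        means = total / np.maximum(counts, 1)
        ucb = means + np.sqrt(c * np.log(t) / np.maximum(counts, 1))
        ucb[counts == 0] = np.inf          # try every user at least once
        chosen = np.argsort(ucb)[-k:]      # k largest upper confidence bounds
        rewards = rewards_fn(chosen)       # observed content relevance
        counts[chosen] += 1
        total[chosen] += rewards
    return total / np.maximum(counts, 1)

# toy run: 50 users with fixed Bernoulli relevance rates, 5 listened to per step
true_p = np.random.rand(50)
estimates = ucb_multiple_plays(lambda idx: np.random.binomial(1, true_p[idx]),
                               n_users=50, k=5, horizon=2000)
```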
Application of Self-Play Reinforcement Learning to a Four-Player Game of Imperfect Information
We introduce a new virtual environment for simulating a card game known as ‘Big 2’. This is a four-player game of imperfect information with a relatively complicated action space (players may play combinations of 1, 2, 3, 4 or 5 cards from an initial starting hand of 13 cards). As such, it poses a challenge for many current reinforcement learning methods. We then use the recently proposed ‘Proximal Policy Optimization’ algorithm to train a deep neural network to play the game, learning purely via self-play, and find that it reaches a level which outperforms amateur human players after only a relatively short amount of training time and without needing to search a tree of future game states.
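At the heart of PPO is the clipped surrogate objective, which keeps each policy update close to the data-collecting policy. A self-contained sketch of that loss (illustrative values, not the paper's network or hyperparameters):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate: ratio = pi_new(a|s) / pi_old(a|s), negated so a
    generic minimizer can be used."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.mean(np.minimum(unclipped, clipped))

# a tiny illustrative batch of probability ratios and advantage estimates
print(ppo_clip_loss(np.array([0.9, 1.3, 1.05]), np.array([0.5, -1.0, 2.0])))
```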
Syntactic Scaffolds for Semantic Structures
We introduce the syntactic scaffold, an approach to incorporating syntactic information into semantic tasks. Syntactic scaffolds avoid expensive syntactic processing at runtime, only making use of a treebank during training, through a multitask objective. We improve over strong baselines on PropBank semantics, frame semantics, and coreference resolution, achieving competitive performance on all three tasks.
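Read as a loss function, the scaffold is simply a weighted auxiliary term added during training only; the trade-off weight δ is our notation, not necessarily the paper's:

```latex
% Multitask training objective: the syntactic head and treebank are used
% only for the second term, which is dropped entirely at test time.
\[
\mathcal{L} \;=\; \mathcal{L}_{\text{semantic}}
            \;+\; \delta\,\mathcal{L}_{\text{scaffold}},
\qquad \delta > 0.
\]
```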
Spectral Collaborative Filtering
Uniform Inference in High-Dimensional Gaussian Graphical Models
Graphical models have become a very popular tool for representing dependencies within a large set of variables and are key for representing causal structures. We provide results for uniform inference on high-dimensional graphical models, with the number of target parameters possibly much larger than the sample size. This is particularly important when certain features or structures of a causal model are to be recovered. Our results highlight how, in high-dimensional settings, graphical models can be estimated and recovered with modern machine learning methods on complex data sets. We also demonstrate in a simulation study that our procedure has good small-sample properties.
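The paper's contribution is uniformly valid inference, which goes beyond point estimation; for orientation, a plain point estimate of a sparse Gaussian graphical model can be obtained with the graphical lasso (a baseline, not the authors' procedure):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Estimate a sparse precision matrix; nonzero off-diagonal entries are the
# edges of the estimated Gaussian graphical model.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))          # n = 200 samples, p = 10 variables
model = GraphicalLasso(alpha=0.1).fit(X)
edges = np.abs(model.precision_) > 1e-4     # adjacency (point estimate only)
```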
A Self-Attention Network for Hierarchical Data Structures with an Application to Claims Management
Insurance companies must manage millions of claims per year. While most of these claims are non-fraudulent, fraud detection is a core task for insurance companies. The ultimate goal is a predictive model that singles out the fraudulent claims and pays out the non-fraudulent ones immediately. Modern machine learning methods are well suited for this kind of problem. Health care claims often have a data structure that is hierarchical and of variable length. We propose one model based on piecewise feed-forward neural networks (deep learning) and another model based on self-attention neural networks for the task of claims management. We show that the proposed methods outperform bag-of-words based models, hand-designed features, and models based on convolutional neural networks on a data set of two million health care claims. The proposed self-attention method performs best.
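A single head of scaled dot-product self-attention, the building block the second model relies on, handles variable-length claim sequences naturally because the attention matrix is built from the sequence itself. A generic numpy sketch (dimensions and names are ours):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
    return weights @ V

# a "claim" of 7 positions with 16-dim features, projected to 8 dimensions
X = np.random.randn(7, 16)
out = self_attention(X, *(np.random.randn(16, 8) for _ in range(3)))
```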
Penalized Component Hub Models
Social network analysis presupposes that observed social behavior is influenced by an unobserved network. Traditional approaches to inferring the latent network use pairwise descriptive statistics that rely on a variety of measures of co-occurrence. While these techniques have proven useful in a wide range of applications, the literature does not describe the generating mechanism of the observed data from the network. In a previous article, the authors presented a technique that used a finite mixture model as the connection between the unobserved network and the observed social behavior. This model assumed that each group was the result of a star graph on a subset of the population, i.e., that each group was formed by a leader who selected members of the population to be in the group; they called these hub models. This approach treats the network values as parameters of a model, which raises a general estimation challenge: for small datasets there can be far more parameters to estimate than there are observations, and under these conditions the estimated network can be unstable. In this article, we propose a solution that penalizes the number of nodes that can exert a leadership role. We implement this as a pseudo-Expectation-Maximization algorithm. We demonstrate this technique through a series of simulations, which show that when the number of leaders is sparse, parameter estimation is improved. Further, we apply this technique to a dataset of animal behavior and an example from recommender systems.
Multi-Hop Knowledge Graph Reasoning with Reward Shaping
Multi-hop reasoning is an effective approach for query answering (QA) over incomplete knowledge graphs (KGs). The problem can be formulated in a reinforcement learning (RL) setup, where a policy-based agent sequentially extends its inference path until it reaches a target. However, in an incomplete KG environment, the agent receives low-quality rewards corrupted by false negatives in the training data, which harms generalization at test time. Furthermore, since no golden action sequence is used for training, the agent can be misled by spurious search trajectories that incidentally lead to the correct answer. We propose two modeling advances to address both issues: (1) we reduce the impact of false-negative supervision by adopting a pretrained one-hop embedding model to estimate the reward of unobserved facts; (2) we counter the sensitivity of on-policy RL to spurious paths by forcing the agent to explore a diverse set of paths using randomly generated edge masks. Our approach significantly improves over existing path-based KGQA models on several benchmark datasets and is comparable to or better than embedding-based models.
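Point (1) amounts to replacing the hard 0/1 terminal reward with a soft score from the pretrained embedding model whenever the agent lands on a fact not present in the KG. A minimal sketch of that reward function (function names are placeholders):

```python
def shaped_reward(entity, target, embedding_score):
    """Terminal reward with shaping: exact hits keep reward 1.0; otherwise a
    pretrained one-hop embedding model scores the fact's plausibility
    (embedding_score is assumed to map an entity to a value in [0, 1])."""
    if entity == target:
        return 1.0                    # observed answer: full reward
    return embedding_score(entity)    # unobserved fact: soft shaped reward
```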
The Causal Effect of Answer Changing on Multiple-Choice Items
The causal effect of changing initial answers on final scores is a long-standing puzzle in the educational and psychological measurement literature. This paper formalizes the question using the standard framework for causal inference, the potential outcomes framework. Our clear definitions of the treatment and the corresponding counterfactuals, expressed with potential outcomes, allow us to estimate the causal effect of answer changing without special study designs or models of examinees’ answer-changing behavior. We separately define the average treatment effect and the average treatment effect on the treated, and show that each effect can be directly computed from the proportions of examinees’ answer-changing patterns. Our findings show that the traditional method of comparing the proportions of ‘wrong to right’ and ‘right to wrong’ patterns (a method that has recently been criticized) indeed correctly estimates the sign of the average answer-changing effect, but only for those examinees who actually changed their initial responses; it does not take into account those who retained their responses. We illustrate our procedures by reanalyzing van der Linden, Jeon, and Ferrara’s (2011) data. The results show that the answer-changing effect is heterogeneous: it is positive for examinees who changed their initial responses but negative for those who did not. We discuss the theoretical and practical implications of our findings.
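Our reading of the identification result, written out: among the treated (examinees who changed at least one answer), the sign of the effect is recovered by the classical comparison of pattern proportions:

```latex
% Sign of the average treatment effect on the treated, computed directly
% from answer-changing pattern proportions; the population-wide ATE also
% involves examinees who retained their responses.
\[
\operatorname{sign}\!\left(\mathrm{ATT}\right)
  \;=\; \operatorname{sign}\!\Big(
    \Pr(\text{wrong} \to \text{right}) - \Pr(\text{right} \to \text{wrong})
  \Big).
\]
```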
Proximity Forest: An effective and scalable distance-based classifier for time series
Research into the classification of time series has made enormous progress in the last decade. The UCR time series archive has played a significant role in challenging and guiding the development of new learners for time series classification. The largest dataset in the UCR archive holds only 10 thousand time series, which may explain why the primary research focus has been on creating algorithms that have high accuracy on relatively small datasets. This paper introduces Proximity Forest, an algorithm that learns accurate models from datasets with millions of time series and classifies a time series in milliseconds. The models are ensembles of highly randomized Proximity Trees. Whereas conventional decision trees branch on attribute values (and usually perform poorly on time series), Proximity Trees branch on the proximity of time series to one exemplar time series or another, allowing us to leverage decades of work on developing relevant measures for time series. Proximity Forest gains both efficiency and accuracy by stochastic selection of both exemplars and similarity measures. Our work is motivated by recent time series applications that provide orders of magnitude more time series than the UCR benchmarks. Our experiments demonstrate that Proximity Forest is highly competitive on the UCR archive: it ranks among the most accurate classifiers while being significantly faster. We demonstrate on a 1M time series Earth observation dataset that Proximity Forest retains this accuracy on datasets that are many orders of magnitude greater than those in the UCR repository, while learning its models at least 100,000 times faster than the current state-of-the-art models, Elastic Ensemble and COTE.
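The core split rule is easy to state: pick two exemplar series at random and route every series to the nearer exemplar. A sketch with Euclidean distance (the real trees also sample the distance measure, e.g. DTW, from a pool of elastic measures):

```python
import numpy as np

def proximity_split(series, rng=np.random.default_rng(0)):
    """One Proximity Tree split: route each series to the nearest of two
    randomly chosen exemplars (Euclidean stand-in for an elastic measure)."""
    a, b = rng.choice(len(series), size=2, replace=False)
    dist = lambda x, y: np.linalg.norm(x - y)
    left = [i for i, s in enumerate(series)
            if dist(s, series[a]) <= dist(s, series[b])]
    right = [i for i in range(len(series)) if i not in left]
    return left, right

left, right = proximity_split([np.random.randn(50) for _ in range(20)])
```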
Explicit State Tracking with Semi-Supervision for Neural Dialogue Generation
The task of dialogue generation aims to automatically provide responses given previous utterances. Tracking dialogue states is an important ingredient in dialogue generation for estimating users’ intentions. However, the expensive nature of state labeling and the weak interpretability make dialogue state tracking a challenging problem for both task-oriented and non-task-oriented dialogue generation: for task-oriented dialogues, state tracking is usually learned from manually annotated corpora, where the human annotation is expensive; for non-task-oriented dialogues, most existing work neglects explicit state tracking due to the unlimited number of dialogue states. In this paper, we propose the semi-supervised explicit dialogue state tracker (SEDST) for neural dialogue generation. Our approach has two core ingredients: CopyFlowNet and posterior regularization. Specifically, we propose an encoder-decoder architecture, named CopyFlowNet, to represent an explicit dialogue state with a probability distribution over the vocabulary space. To optimize the training procedure, we apply a posterior regularization strategy to integrate indirect supervision. Extensive experiments conducted on both task-oriented and non-task-oriented dialogue corpora demonstrate the effectiveness of our proposed model. Moreover, we find that our proposed semi-supervised dialogue state tracker achieves performance comparable to state-of-the-art supervised learning baselines on the state tracking procedure.
Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items
In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon well discussed in formal linguistics: (negative) polarity items. We briefly discuss the leading hypotheses about the licensing contexts that allow negative polarity items and evaluate to what extent a neural language model has the ability to correctly process a subset of such constructions. We show that the model finds a relation between the licensing context and the negative polarity item and appears to be aware of the scope of this context, which we extract from a parse tree of the sentence. With this research, we hope to pave the way for other studies linking formal linguistics to deep learning.
Retrieve-and-Read: Multi-task Learning of Information Retrieval and Reading Comprehension
This study considers the task of machine reading at scale (MRS) wherein, given a question, a system first performs the information retrieval (IR) task of finding relevant passages in a knowledge source and then carries out the reading comprehension (RC) task of extracting an answer span from the passages. Previous MRS studies, in which the IR component was trained without considering answer spans, struggled to accurately find a small number of relevant passages in a large set of passages. In this paper, we propose a simple and effective approach that integrates the IR and RC tasks using supervised multi-task learning, so that the IR component can be trained by considering answer spans. Experimental results on the standard benchmark, answering SQuAD questions using the full Wikipedia as the knowledge source, showed that our model achieved state-of-the-art performance. Moreover, we thoroughly evaluated the individual contributions of our model components with our new Japanese dataset and SQuAD. The results showed significant improvements in the IR task and provided a new perspective on IR for RC: it is effective to teach which part of the passage answers the question rather than to give only a relevance score for the whole passage.
A novel graph-based model for hybrid recommendations in cold-start scenarios
Cold-start is a very common and still open problem in the Recommender Systems literature. Since cold-start items do not have any interactions, collaborative algorithms are not applicable. One of the main strategies is to use pure or hybrid content-based approaches, which usually yield lower recommendation quality than collaborative ones. Techniques to optimize the performance of this type of approach have been studied in the recent past. One of them is feature weighting, which assigns to every feature a real value, called a weight, that estimates its importance. Statistical techniques for feature weighting commonly used in Information Retrieval, like TF-IDF, have been adapted for Recommender Systems, but they often do not provide sufficient quality improvements. More recent approaches, FBSM and LFW, estimate weights by leveraging collaborative information via machine learning, in order to learn the importance of a feature based on other users’ opinions. This type of model has shown promising results compared to the classic statistical analyses cited previously. We propose a novel graph-based, feature-aware machine learning model to tackle the cold-start item scenario, learning the relevance of features from the probabilities of item-based collaborative filtering algorithms.
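One way to make "learning feature relevance from collaborative information" concrete: fit one non-negative weight per feature so that the weighted content similarity reproduces an item-based CF similarity matrix. This gradient sketch is our simplification, not the exact FBSM/LFW or the proposed graph model:

```python
import numpy as np

def learn_feature_weights(F, S_cf, lr=0.01, epochs=200):
    """F[i, f]: binary item-feature matrix; S_cf: item-item similarity from
    an item-based collaborative filter. Fits w so (F diag(w) F^T) ~ S_cf."""
    n_items, n_feat = F.shape
    w = np.ones(n_feat)
    for _ in range(epochs):
        err = (F * w) @ F.T - S_cf                 # residual similarity
        grad = np.array([np.sum(err * np.outer(F[:, f], F[:, f]))
                         for f in range(n_feat)])
        w = np.maximum(w - lr * grad / n_items**2, 0.0)  # non-negative weights
    return w

F = (np.random.rand(40, 12) < 0.3).astype(float)   # toy feature matrix
S_cf = F @ F.T + 0.1 * np.random.rand(40, 40)      # stand-in CF similarity
weights = learn_feature_weights(F, S_cf)
```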
Extracting Keywords from Open-Ended Business Survey Questions
Open-ended survey data constitute an important basis for research as well as for making business decisions. Collecting and manually analysing free-text survey data is generally more costly than collecting and analysing survey data consisting of answers to multiple-choice questions. Yet free-text data allow new content to be expressed beyond predefined categories and are a very valuable source of new insights into people’s opinions. At the same time, surveys always make ontological assumptions about the nature of the entities being researched, and this has vital ethical consequences. Human interpretations and opinions can only be properly ascertained in their richness using textual data sources; if these sources are analysed appropriately, the essential linguistic nature of humans and social entities is safeguarded. Natural Language Processing (NLP) offers possibilities for meeting this ethical business challenge by automating the analysis of natural language and thus allowing for insightful investigations of human judgements. We present a computational pipeline for analysing large amounts of responses to open-ended survey questions and extracting keywords that appropriately represent people’s opinions. This pipeline addresses the need to perform such tasks outside the scope of both commercial software and bespoke analysis, exceeds the performance of state-of-the-art systems, and performs this task in a transparent way that allows for scrutinising and exposing potential biases in the analysis. Following the principle of Open Data Science, our code is open-source and generalizable to other datasets.
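As a transparent baseline for such a pipeline, candidate keywords can be ranked by TF-IDF weight per response; the paper's system is more elaborate, but the skeleton looks like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

responses = [
    "delivery was late and support never answered my emails",
    "great support, quick delivery and friendly staff",
]
vec = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
X = vec.fit_transform(responses)
terms = vec.get_feature_names_out()
for row in X.toarray():                    # top-3 keywords per response
    top = row.argsort()[-3:][::-1]
    print([terms[i] for i in top])
```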
Learning Data-adaptive Nonparametric Kernels
Traditional kernels, and combinations of them, are often not flexible enough to fit the data in complicated practical tasks. In this paper, we present a Data-Adaptive Nonparametric Kernel (DANK) learning framework that imposes an adaptive matrix on the kernel/Gram matrix in an entry-wise strategy. Since we do not specify the form of the adaptive matrix, each entry in it can be directly and flexibly learned from the data. Therefore, the solution space of the learned kernel is largely expanded, which allows DANK to adapt to the data. Specifically, the proposed kernel learning framework can be seamlessly embedded into support vector machines (SVM) and support vector regression (SVR), enlarging the margin between classes and reducing the model’s generalization error. Theoretically, we demonstrate that the objective function of our devised model is gradient-Lipschitz continuous, so that the training process for kernel and parameter learning in SVM/SVR can be efficiently optimized in a unified framework. Further, to address the scalability issue in DANK, a decomposition-based scalable approach is developed, whose effectiveness is demonstrated by both empirical studies and theoretical guarantees. Experimentally, our method outperforms other representative kernel learning algorithms on various classification and regression benchmark datasets.
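Our reading of the entry-wise construction (an assumption on our part): an adaptive matrix acts on the base Gram matrix via a Hadamard product and is learned jointly with the SVM/SVR parameters, rather than being fixed in advance:

```latex
% Entry-wise adaptive kernel: F is unconstrained in form, so every entry of
% the effective Gram matrix G can be adjusted to the data.
\[
\mathbf{G} \;=\; \mathbf{F} \odot \mathbf{K},
\qquad K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j).
\]
```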
Autonomous Configuration of Network Parameters in Operating Systems using Evolutionary Algorithms
By default, the Linux network stack is not configured for high-speed, large-file transfer; the reason is to save memory resources. It is possible to tune the Linux network stack by increasing the network buffer sizes for the high-speed networks connecting server systems, so that more network packets can be handled. However, there are also several other TCP/IP parameters that can be tuned in an Operating System (OS). In this paper, we leverage Genetic Algorithms (GAs) to devise a system that learns from the history of the network traffic and uses this knowledge to optimize the current performance by adjusting the parameters. This can be done for a standard Linux kernel using sysctl or /proc. For a Virtual Machine (VM), virtually any type of OS can be installed, and an image can swiftly be compiled and deployed. Because a VM is a sandboxed environment, risky configurations can be tested without the danger of harming the host system. Different scenarios for network parameter configurations are thoroughly tested, and an increase of up to 65% in throughput is achieved compared to the default Linux configuration.
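A compact sketch of such a GA loop over TCP parameters; the parameter names are real sysctl keys, but measure_throughput is a stand-in that a real deployment would replace with an actual benchmark run inside the sandboxed VM:

```python
import random

PARAMS = {"net.core.rmem_max": (65536, 67108864),   # receive buffer bounds
          "net.core.wmem_max": (65536, 67108864)}   # send buffer bounds

def random_config():
    return {k: random.randint(lo, hi) for k, (lo, hi) in PARAMS.items()}

def measure_throughput(cfg):       # placeholder fitness: replace with a real
    return sum(cfg.values())       # file-transfer benchmark applied via sysctl

def evolve(pop_size=20, generations=30, mutate_p=0.2):
    pop = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=measure_throughput, reverse=True)
        parents = pop[: pop_size // 2]                   # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = {k: random.choice((a[k], b[k])) for k in PARAMS}  # crossover
            if random.random() < mutate_p:                            # mutation
                k = random.choice(list(PARAMS))
                child[k] = random.randint(*PARAMS[k])
            children.append(child)
        pop = parents + children
    return max(pop, key=measure_throughput)

best = evolve()
```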
Scalable Manifold Learning for Big Data with Apache Spark
Non-linear spectral dimensionality reduction methods, such as Isomap, remain an important technique for learning manifolds. However, due to computational complexity, exact manifold learning using Isomap is currently infeasible on large-scale data. In this paper, we propose a distributed-memory framework implementing end-to-end exact Isomap under the Apache Spark model. We show how each critical step of the Isomap algorithm can be efficiently realized using the basic Spark model, without the need to provision data in secondary storage. We show how the entire method can be implemented using PySpark, offloading compute-intensive linear algebra routines to BLAS. Through experimental results, we demonstrate excellent scalability of our method, and we show that it can process datasets orders of magnitude larger than what is currently possible, using a 25-node parallel cluster.
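For orientation, these are the three exact-Isomap stages the paper distributes over Spark, shown here locally on a small sample (our condensed sketch, not the PySpark implementation):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

X = np.random.randn(300, 10)                              # toy input data
G = kneighbors_graph(X, n_neighbors=8, mode="distance")   # stage 1: kNN graph
D = shortest_path(G, method="D", directed=False)          # stage 2: geodesics
n = D.shape[0]                                            # stage 3: classical MDS
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J                # double-centered squared distances
vals, vecs = np.linalg.eigh(B)
embedding = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0.0))   # 2-D output
```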
Bottom-Up Abstractive Summarization
Neural network-based methods for abstractive summarization produce outputs that are more fluent than other techniques but can be poor at content selection. This work proposes a simple technique for addressing this issue: use a data-efficient content selector to over-determine phrases in a source document that should be part of the summary. We use this selector as a bottom-up attention step to constrain the model to likely phrases. We show that this approach improves the ability to compress text while still generating fluent summaries. This two-step process is both simpler and higher performing than other end-to-end content selection models, leading to significant improvements on ROUGE for both the CNN-DM and NYT corpora. Furthermore, the content selector can be trained with as little as 1,000 sentences, making it easy to transfer a trained summarizer to a new domain.
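The bottom-up step itself can be seen as a simple masking operation: selection probabilities from the content selector zero out unselected source tokens in the attention (copy) distribution, which is then renormalized. A sketch with invented values:

```python
import numpy as np

def bottom_up_mask(attention, select_prob, threshold=0.5):
    """Constrain an attention/copy distribution to tokens the content
    selector scored above threshold, then renormalize."""
    masked = attention * (select_prob >= threshold)
    return masked / masked.sum()

attention = np.array([0.1, 0.4, 0.3, 0.2])      # over 4 source tokens
select_prob = np.array([0.9, 0.2, 0.8, 0.6])    # content selector outputs
print(bottom_up_mask(attention, select_prob))   # mass moves to selected tokens
```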
Improve Blockchain Performance using Graph Data Structure and Parallel Mining
Blockchain technology is ushering in another breakout year, yet key challenges of blockchain remain unsolved. This paper analyzes the features of the blockchain-based Bitcoin and Bitcoin-NG systems and proposes an improved way of implementing blockchain systems: replacing the original chain structure with a graph data structure, named GraphChain. Each block represents a transaction and contains the balance status of the traders. Additionally, since all transactions in the Bitcoin system are packaged by a single winning miner, much mining effort is wasted; resource utilization can therefore be improved by replacing the competition for mining rights with miner election and parallel mining. We simulate the graph-structured blockchain and parallel mining in Python and propose a new conceptual graph model that can improve both capacity and performance.
Boosting Binary Optimization via Binary Classification: A Case Study of Job Shop Scheduling
Many optimization techniques evaluate solutions consecutively, where the next candidate for evaluation is determined by the results of previous evaluations. Examples include iterative methods, ‘black box’ optimization algorithms, simulated annealing, evolutionary algorithms, and tabu search, to name a few. When solving an optimization problem, these algorithms evaluate a large number of solutions, which raises the following question: is it possible to learn something about the optimum from these solutions? In this paper, we frame this ‘learning’ question in terms of a logistic regression model and explore its predictive accuracy computationally. The proposed model uses a collection of solutions to predict the components of the optimal solutions. To illustrate the utility of such predictions, we embed the logistic regression model into the tabu search algorithm for the job shop scheduling problem. The resulting framework is simple to implement, yet provides a significant boost to the performance of standard tabu search.
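The ‘learning’ question can be prototyped in a few lines: treat each evaluated binary solution as a feature vector, label it by whether its objective value was relatively good, and read per-component predictions off the fitted coefficients. Toy data below, not a job shop instance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
solutions = rng.integers(0, 2, size=(500, 30))   # binary solutions seen so far
good = solutions[:, :5].sum(axis=1) >= 3         # toy "high quality" label

clf = LogisticRegression(max_iter=1000).fit(solutions, good)
scores = clf.coef_.ravel()   # large positive weight: component likely 1 in
                             # good solutions, a hint for guiding tabu search
```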
Seeing Colors: Learning Semantic Text Encoding for Classification
The question we answer with this work is: can we convert a text document into an image in order to exploit the best image classification models for document classification? To answer this question, we present a novel text classification method that converts a text document into an encoded image, using word embeddings and the capabilities of Convolutional Neural Networks (CNNs), which have been successfully employed in image classification. We evaluate our approach, obtaining promising results on several well-known benchmark datasets for text classification. This work allows many of the advanced CNN architectures developed for Computer Vision to be applied to Natural Language Processing. We also test the proposed approach on a multi-modal dataset, showing that it is possible to use a single deep model to represent text and images in the same feature space.
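A minimal version of such an encoding: concatenate the word embeddings of a document, fit them into a fixed square, and rescale to pixel intensities; the result can be fed to any image CNN. This is our simplified reading of the approach, with a toy random vocabulary:

```python
import numpy as np

def text_to_image(tokens, embed, width=24):
    """Encode a token list as a width x width grayscale image built from
    word-embedding values (tiled or cropped to fit the square)."""
    vecs = [embed[t] for t in tokens if t in embed]
    flat = np.concatenate(vecs) if vecs else np.zeros(width * width)
    flat = np.resize(flat, width * width)              # tile/crop to the square
    img = (flat - flat.min()) / (np.ptp(flat) + 1e-9)  # rescale to [0, 1]
    return img.reshape(width, width)

vocab = {w: np.random.randn(16) for w in "the cat sat on the mat".split()}
image = text_to_image("the cat sat on the mat".split(), vocab)   # 24 x 24 input
```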
Tensor Embedding: A Supervised Framework for Human Behavioral Data Mining and Prediction
Today’s densely instrumented world offers tremendous opportunities for continuous acquisition and analysis of multimodal sensor data, providing temporal characterization of an individual’s behaviors. Is it possible to efficiently couple such rich sensor data with predictive modeling techniques to provide contextual and insightful assessments of individual performance and wellbeing? Predicting different aspects of human behavior from these noisy, incomplete, and heterogeneous bio-behavioral temporal data is a challenging problem, beyond unsupervised discovery of latent structures. We propose a Supervised Tensor Embedding (STE) algorithm for high-dimensional multimodal data, with joint decomposition of the input and target variables. Furthermore, we show that feature selection helps reduce contamination in the prediction and increases performance. The efficiency of the methods was tested on two different real-world datasets.
Generalized probabilistic principal component analysis of correlated data
Principal component analysis (PCA) is a well-established tool in machine learning and data processing. Tipping and Bishop (1999) proposed a probabilistic formulation of PCA (PPCA) by showing that the principal axes in PCA are equivalent to the maximum marginal likelihood estimator of the factor loading matrix in a latent factor model for the observed data, assuming that the latent factors are independently distributed as standard normal distributions. However, the independence assumption may be unrealistic for many scenarios, such as modeling multiple time series, spatial processes, and functional data, where the output variables are correlated. In this paper, we introduce generalized probabilistic principal component analysis (GPPCA) to study the latent factor model for multiple correlated outcomes, where each factor is modeled by a Gaussian process. The proposed method provides a probabilistic solution of the latent factor model with scalable computation. In particular, we derive the maximum marginal likelihood estimator of the factor loading matrix and the predictive distribution of the output. Based on the explicit expression of the precision matrix in the marginal likelihood, the number of computational operations is linear in the number of output variables. Moreover, with the use of the Matérn covariance function, the number of computational operations is also linear in the number of time points when modeling multiple time series, without any approximation to the likelihood function. We discuss the connection of GPPCA with other approaches such as PCA and PPCA, and highlight the advantages of GPPCA in terms of practical relevance, estimation accuracy, and computational convenience. Numerical studies confirm the excellent finite-sample performance of the proposed approach.
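The model in symbols, as we read the abstract: correlated outputs share a loading matrix, and each latent factor is a Gaussian process rather than i.i.d. standard normal (the PPCA special case):

```latex
% GPPCA latent factor model: A is the factor loading matrix estimated by
% maximum marginal likelihood; PPCA is recovered when each z_j(t) is
% independent standard normal across t.
\[
\mathbf{y}(t) = \mathbf{A}\,\mathbf{z}(t) + \boldsymbol{\epsilon}(t),
\qquad
z_j(\cdot) \sim \mathcal{GP}\!\left(0,\, \sigma_j^2 K_j\right),
\qquad
\boldsymbol{\epsilon}(t) \sim \mathcal{N}\!\left(\mathbf{0},\, \sigma_0^2 \mathbf{I}\right).
\]
```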
• Forecasting solar radiation during dust storms using deep learning
• Victory Probability in the Fire Emblem Arena
• The Hyper-Zagreb Index of Trees and Unicyclic Graphs
• Matching preclusion number of graphs
• Goodness-of-fit tests for the bivariate Poisson distribution
• The Asymmetric Index of a Graph
• On RAC Drawings of Graphs with one Bend per Edge
• A New Scheme of Gradient Flow and Saddle-Point Dynamics with Fixed-time Convergence Guarantees
• Impact of Device Orientation on Error Performance of LiFi Systems
• Simulation-Selection-Extrapolation: Estimation in High-Dimensional Errors-in-Variables Models
• The number of crossings in multigraphs with no empty lens
• Permutation tests of non-exchangeable null models
• Optimum window length of Savitzky-Golay filters with arbitrary order
• Superferromagnetism and domain-wall topologies in artificial ‘pinwheel’ spin ice
• Contributions to the Problems of Recognizing and Coloring Gammoids
• Distribution of inter-event avalanche times in disordered and frustrated spin systems
• Deep learning, quantum chaos, and pseudorandom evolution
• On self-avoiding polygons and walks: the snake method via polygon joining
• Data-Driven Debugging for Functional Side Channels
• Iterative Recursive Attention Model for Interpretable Sequence Classification
• Maximum Entropy Principle Analysis in Network Systems with Short-time Recordings
• On the Structure of Isometrically Embeddable Metric Spaces
• A Heuristic Approach towards Drawings of Graphs with High Crossing Resolution
• Bayesian Model Averaging for Model Implied Instrumental Variable Two Stage Least Squares Estimators
• Total Recall: Understanding Traffic Signs using Deep Hierarchical Convolutional Neural Networks
• Hashing-Based-Estimators for Kernel Density in High Dimensions
• Randomized Polynomial-Time Root Counting in Prime Power Rings
• Orthogonal and Smooth Orthogonal Layouts of 1-Planar Graphs with Low Edge Complexity
• Gaussian process regression for survival time prediction with genome-wide gene expression
• Hallucinating Dense Optical Flow from Sparse Lidar for Autonomous Vehicles
• LUCSS: Language-based User-customized Colourization of Scene Sketches
• Repair and Resource Scheduling in Unbalanced Distribution Systems using Neighborhood Search
• Fair Algorithms for Learning in Allocation Problems
• Securing Tag-based recommender systems against profile injection attacks: A comparative study
• Dynamic mode decomposition in vector-valued reproducing kernel Hilbert spaces for extracting dynamical structure among observables
• Directed Exploration in PAC Model-Free Reinforcement Learning
• Speaker Fluency Level Classification Using Machine Learning Techniques
• An explicit mean-covariance parameterization for multivariate response linear regression
• Multi-Cell Multi-Task Convolutional Neural Networks for Diabetic Retinopathy Grading
• Exploring Time Flexibility in Wireless Data Plan
• AISHELL-2: Transforming Mandarin ASR Research Into Industrial Scale
• Learning to Describe Differences Between Pairs of Similar Images
• On the Minimal Supervision for Training Any Binary Classifier from Only Unlabeled Data
• Real and quaternionic second-order free cumulants and connections to matrix cumulants
• Value of Information Systems in Routing Games
• A global model for predicting the arrival of imported dengue infections
• Ensemble Sequence Level Training for Multimodal MT: OSU-Baidu WMT18 Multimodal Machine Translation System Report
• Asymptotic Seed Bias in Respondent-driven Sampling
• Content-based feature exploration for transparent music recommendation using self-attentive genre classification
• The Limiting Behavior of the FTASEP with Product Bernoulli Initial Distribution
• Understanding the Characteristics of Frequent Users of Emergency Departments: What Role Do Medical Conditions Play?
• Uniqueness of martingale solutions for the stochastic nonlinear Schrödinger equation on 3d compact manifolds
• An exact copositive programming formulation for the Discrete Ordered Median Problem: Extended version
• Learning in Memristive Neural Network Architectures using Analog Backpropagation Circuits
• A novel extension of Generalized Low-Rank Approximation of Matrices based on multiple-pairs of transformations
• TenDSuR: Tensor-Based 4D Sub-Nyquist Radar
• A Unified Mammogram Analysis Method via Hybrid Deep Supervision
• The integer homology threshold in $Y_d(n, p)$
• Adaptation and Robust Learning of Probabilistic Movement Primitives
• Graph reduction by local variation
• Identifying the Discount Factor in Dynamic Discrete Choice Models
• An Empirical Analysis of the Role of Amplifiers, Downtoners, and Negations in Emotion Classification in Microblogs
• Gibson Env: Real-World Perception for Embodied Agents
• Single-Source Bottleneck Path Algorithm Faster than Sorting for Sparse Graphs
• Sparse and Switching Infinite Horizon Optimal Control with Nonconvex Penalizations
• Sup-norm adaptive simultaneous drift estimation for ergodic diffusions
• Enhanced arc-flow formulations to minimize weighted completion time on identical parallel machines
• A Multi-layer Gaussian Process for Motor Symptom Estimation in People with Parkinson’s Disease
• Determining the signal dimension in second order source separation
• Product and Moment Formulas for Iterated Stochastic Integrals (associated with Lévy Processes)
• The coagulation-fragmentation hierarchy with homogeneous rates and underlying stochastic dynamics
• Compact packings of the plane with three sizes of discs
• Multilevel Monte Carlo for uncertainty quantification in structural engineering
• Beyond Weight Tying: Learning Joint Input-Output Embeddings for Neural Machine Translation
• APES: a Python toolbox for simulating reinforcement learning environments
• How agents see things: On visual representations in an emergent language game
• Large Deviations of Convex Hulls of the ‘True’ Self-Avoiding Random Walk
• Imitation Learning for Neural Morphological String Transduction
• State bounding for positive coupled differential-difference equations with bounded disturbances
• Bayesian Classifier for Route Prediction with Markov Chains
• From nonlinear Fokker-Planck equations to solutions of distribution dependent SDE
• MobiBits: Multimodal Mobile Biometric Database
• Influence Dynamics and Consensus in an Opinion-Neighborhood based Modified Vicsek-like Social Network
• Bayesian quadrature and energy minimization for space-filling design
• The ballistic annihilation threshold is 1/4
• An improved upper bound on the integrality ratio for the $s$-$t$-path TSP
• Pole Dancing: 3D Morphs for Tree Drawings
• Full-Duplex Energy-Harvesting Enabled Relay Networks in Generalized Fading Channels
• Extreme event quantification in dynamical systems with random components
• Spoofing PRNU Patterns of Iris Sensors while Preserving Iris Recognition
• Improved Chebyshev inequality: new probability bounds with known supremum of PDF
• Using a Game Engine to Simulate Critical Incidents and Data Collection by Autonomous Drones
• Multimodal Continuous Turn-Taking Prediction Using Multiscale RNNs
• Univariate Ideal Membership Parameterized by Rank, Degree, and Number of Generators
• Data-driven discovery of PDEs in complex datasets
• Cognate-aware morphological segmentation for multilingual neural translation
• The two-type Richardson model in the half-plane
• The Namer-Claimer game
• The MeMAD Submission to the WMT18 Multimodal Translation Task
• Spherical Latent Spaces for Stable Variational Autoencoders
• Risk averse stochastic programming: time consistency and optimal stopping
• The Evolving Moran Genealogy
• An inertial upper bound for the quantum independence number of a graph
• Upward Planar Morphs
• On Second Order Conditions in the Multivariate Block Maxima and Peak over Threshold Method
• Finite LTL Synthesis with Environment Assumptions and Quality Measures
• Diversity, Topology, and the Risk of Node Re-identification in Labeled Social Graphs
• Queue Layouts of Planar 3-Trees
• General lemmas for Berge-Turán hypergraph problems
• Tropical Gaussians: A Brief Survey
• Single Channel ECG for Obstructive Sleep Apnea Severity Detection using a Deep Learning Approach
• Deep Neural Networks with Weighted Averaged Overnight Airflow Features for Sleep Apnea-Hypopnea Severity Classification
• The exclusion process mixes (almost) faster than independent particles
• Ordinary planes, coplanar quadruples, and space quartics
• Fully Dense UNet for 2D Sparse Photoacoustic Tomography Artifact Removal
• On sets defining few ordinary hyperplanes
• Towards Asynchronous Motor Imagery-Based Brain-Computer Interfaces: a joint training scheme using deep learning
• Automatic Lung Cancer Prediction from Chest X-ray Images Using Deep Learning Approach
• Open Source Dataset and Machine Learning Techniques for Automatic Recognition of Historical Graffiti
• On the Area-Universality of Triangulations
• Genetic Algorithms Applied to the Vehicle Routing Problem