What's new on arXiv

SAFE: A Neural Survival Analysis Model for Fraud Early Detection

Many online platforms have deployed anti-fraud systems to detect and prevent fraudulent activities. However, there is usually a gap between the time a user commits a fraudulent action and the time the user is suspended by the platform. How to detect fraudsters in time is a challenging problem. Most existing approaches adopt classifiers to predict fraudsters from their activity sequences over time. The main drawback of classification models is that the prediction results between consecutive timestamps are often inconsistent. In this paper, we propose a survival-analysis-based fraud early detection model, SAFE, that maps dynamic user activities to survival probabilities that are guaranteed to be monotonically decreasing over time. SAFE adopts a recurrent neural network (RNN) to handle user activity sequences and directly outputs a hazard value at each timestamp; the survival probability derived from the hazard values is then used to achieve consistent predictions. Because the training data records only the time a user was suspended rather than the time of the fraudulent activity, we revise the loss function of the regular survival model to achieve fraud early detection. Experimental results on two real-world datasets demonstrate that SAFE outperforms both a survival analysis model and a recurrent neural network model alone, as well as state-of-the-art fraud early detection approaches.
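
A minimal sketch of the core idea (assumed architecture and dimensions, not the authors' code): an RNN emits a hazard value per timestamp, and the survival probability is the running product of (1 - hazard), which is monotonically non-increasing by construction and therefore gives consistent predictions across timestamps.

```python
# Sketch: RNN hazards -> monotone survival probabilities (illustrative only).
import torch
import torch.nn as nn

class HazardRNN(nn.Module):
    def __init__(self, input_dim, hidden_dim=32):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):                                   # x: (batch, T, input_dim)
        h, _ = self.rnn(x)                                   # (batch, T, hidden_dim)
        hazard = torch.sigmoid(self.head(h)).squeeze(-1)     # per-step hazard in (0, 1)
        survival = torch.cumprod(1.0 - hazard, dim=1)        # monotone decreasing over time
        return hazard, survival

model = HazardRNN(input_dim=8)
x = torch.randn(4, 20, 8)                                    # 4 users, 20 timestamps, 8 features
hazard, survival = model(x)
assert (survival[:, 1:] <= survival[:, :-1] + 1e-6).all()    # consistency across timestamps
```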

Does Your Model Know the Digit 6 Is Not a Cat? A Less Biased Evaluation of ‘Outlier’ Detectors

In the real world, a learning system can receive an input that looks nothing like anything it has seen during training, and this can lead to unpredictable behaviour. We therefore need to know whether a given input belongs to the population distribution of the training data to prevent unpredictable behaviour in deployed systems. A recent surge of interest in this problem has led to the development of sophisticated techniques in the deep learning literature. However, due to the absence of a standardized problem formulation or an exhaustive evaluation, it is not evident whether we can rely on these methods in practice. What makes this problem different from a typical supervised learning setting is that we cannot model the diversity of out-of-distribution samples in practice. The distribution of outliers used in training may not be the same as the distribution of outliers encountered in the application. Therefore, classical approaches that learn inliers vs. outliers with only two datasets can yield optimistic results. We introduce OD-test, a three-dataset evaluation scheme, as a practical and more reliable strategy to assess progress on this problem. The OD-test benchmark provides a straightforward means of comparison for methods that address the out-of-distribution sample detection problem. We present an exhaustive evaluation of a broad set of methods from related areas on image classification tasks. Furthermore, we show that for realistic applications with high-dimensional images, the existing methods have low accuracy. Our analysis reveals areas of strength and weakness of each method.
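
The three-dataset idea can be summarized in a short sketch; `score_fn` and the three datasets are placeholders, and the threshold-selection rule below is one plausible choice rather than the benchmark's exact procedure. The point is that the detector is calibrated against one outlier set but scored against a different one.

```python
# Sketch of a three-dataset outlier-detection evaluation (assumed interfaces).
import numpy as np

def od_test(score_fn, D_source_val, D_outlier_calib, D_outlier_eval):
    """score_fn(x) -> scalar; larger means 'more likely out-of-distribution'."""
    # 1) Pick the rejection threshold using inliers plus one outlier dataset.
    s_in = np.array([score_fn(x) for x in D_source_val])
    s_out = np.array([score_fn(x) for x in D_outlier_calib])
    thresholds = np.concatenate([s_in, s_out])
    accs = [(np.mean(s_in < t) + np.mean(s_out >= t)) / 2 for t in thresholds]
    best_t = thresholds[int(np.argmax(accs))]
    # 2) Report balanced accuracy against a *different* outlier dataset.
    s_eval = np.array([score_fn(x) for x in D_outlier_eval])
    return (np.mean(s_in < best_t) + np.mean(s_eval >= best_t)) / 2
```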

Anomaly Detection with Generative Adversarial Networks for Multivariate Time Series

Today's Cyber-Physical Systems (CPSs) are large, complex, and equipped with networked sensors and actuators that are targets for cyber-attacks. Conventional detection techniques are unable to deal with the increasingly dynamic and complex nature of CPSs. On the other hand, the networked sensors and actuators generate large amounts of data streams that can be continuously monitored for intrusion events. Unsupervised machine learning techniques can be used to model the system behaviour and classify deviant behaviours as possible attacks. In this work, we propose a novel Generative Adversarial Networks-based Anomaly Detection (GAN-AD) method for such complex networked CPSs. We use an LSTM-RNN in our GAN to capture the distribution of the multivariate time series of the sensors and actuators under normal working conditions of a CPS. Instead of treating each sensor's and actuator's time series independently, we model the time series of multiple sensors and actuators in the CPS concurrently to take into account potential latent interactions between them. To exploit both the generator and the discriminator of our GAN, we combine the GAN-trained discriminator with the residuals between generator-reconstructed data and the actual samples to detect possible anomalies in the complex CPS. We apply GAN-AD to distinguish abnormal attacked situations from normal working conditions on a complex six-stage Secure Water Treatment (SWaT) system. Experimental results show that the proposed strategy is effective in identifying anomalies caused by various attacks, with a high detection rate and a low false positive rate compared to existing methods.
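
As an illustration of the scoring idea only (not the authors' implementation), the sketch below inverts an assumed `generator` for a test window and mixes the reconstruction residual with an assumed `discriminator`'s output; both models, the latent dimension, and the mixing weight are placeholders.

```python
# Sketch: GAN-based anomaly score = reconstruction residual + discriminator term.
import torch

def anomaly_score(x, generator, discriminator, z_dim=16, steps=100, lam=0.5):
    """x: (1, T, n_sensors) multivariate window; returns a scalar anomaly score."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.01)
    for _ in range(steps):                                  # invert the generator for this window
        opt.zero_grad()
        loss = torch.mean((generator(z) - x) ** 2)
        loss.backward()
        opt.step()
    with torch.no_grad():
        residual = torch.mean(torch.abs(generator(z) - x))  # reconstruction error
        d_score = 1.0 - discriminator(x).mean()             # low discriminator output = abnormal
    return (lam * residual + (1 - lam) * d_score).item()
```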

Superstatistics with cut-off tails for financial time series

Financial time series are known to follow fat-tailed distributions. Furthermore, an empirical probability distribution sometimes shows cut-off shapes in its tails. To describe this stylized fact, we incorporate the cut-off effect into superstatistics. We then confirm that the presented stochastic model is capable of describing the statistical properties of real financial time series. In addition, we present an option pricing formula under superstatistics.
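
One schematic way to write such a cut-off, assuming local Gaussian statistics and a mixing density f(beta) restricted to a finite support (the paper's exact construction may differ):

```latex
% Schematic superstatistical mixture with a cut-off (illustrative form only):
% restricting the support of f(\beta) suppresses the far tails of p(x).
\[
  p(x) \;=\; \int_{\beta_{\min}}^{\beta_{\max}}
        f(\beta)\,\sqrt{\frac{\beta}{2\pi}}\;
        e^{-\frac{1}{2}\beta x^{2}}\, d\beta ,
  \qquad
  \int_{\beta_{\min}}^{\beta_{\max}} f(\beta)\, d\beta = 1 .
\]
```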

Uncertainty in the Design Stage of Two-Stage Bayesian Propensity Score Analysis

The two-stage process of propensity score analysis (PSA) includes a design stage where propensity scores are estimated and implemented to approximate a randomized experiment and an analysis stage where treatment effects are estimated conditional upon the design. This paper considers how uncertainty associated with the design stage impacts estimation of causal effects in the analysis stage. Such design uncertainty can derive from the fact that the propensity score itself is an estimated quantity, but also from other features of the design stage tied to choice of propensity score implementation. This paper formalizes a Bayesian framework for obtaining the posterior distribution of causal effects after marginalizing over a distribution of design-stage outputs, lending underlying formality to Bayesian methods for PSA (BPSA) that have gained attention in recent literature. Formulation of a probability distribution for the design-stage output depends on how the propensity score is implemented in the design stage, and propagation of uncertainty into causal estimates depends on how the treatment effect is estimated in the analysis stage. We explore these differences within a sample of commonly-used propensity score implementations (quantile stratification, nearest-neighbor matching, caliper matching, inverse probability of treatment weighting, and doubly robust estimation) and compare operating characteristics with standard Frequentist PSA in a simulation study. The methods are then deployed in an investigation of the association between levels of fine particulate air pollution and elevated exposure to emissions from coal-fired power plants.
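
A schematic sketch of the two-stage idea, with hypothetical helpers `draw_ps_params` and `predict_ps` standing in for a posterior sampler of the propensity score model, and inverse probability of treatment weighting as one of the implementations mentioned above; the analysis-stage estimator here is a simple weighted difference in means.

```python
# Sketch: marginalize the causal effect over design-stage (propensity score) draws.
import numpy as np

def bayesian_psa(X, treat, y, draw_ps_params, predict_ps, n_design_draws=200):
    effects = []
    for _ in range(n_design_draws):
        theta = draw_ps_params(X, treat)             # design stage: one posterior draw
        ps = predict_ps(X, theta)                    # propensity scores under that draw
        w = treat / ps + (1 - treat) / (1 - ps)      # IPTW weights
        # analysis stage: weighted difference in means (one simple estimator)
        mu1 = np.average(y[treat == 1], weights=w[treat == 1])
        mu0 = np.average(y[treat == 0], weights=w[treat == 0])
        effects.append(mu1 - mu0)
    return np.array(effects)                         # draws from the marginal posterior of the effect
```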

IncSQL: Training Incremental Text-to-SQL Parsers with Non-Deterministic Oracles

We present a sequence-to-action parsing approach for the natural language to SQL task that incrementally fills the slots of a SQL query with feasible actions from a pre-defined inventory. To account for the fact that typically there are multiple correct SQL queries with the same or very similar semantics, we draw inspiration from syntactic parsing techniques and propose to train our sequence-to-action models with non-deterministic oracles. We evaluate our models on the WikiSQL dataset and achieve an execution accuracy of 83.7% on the test set, a 2.1% absolute improvement over the model trained with traditional static oracles assuming a single correct target SQL query. When further combined with the execution-guided decoding strategy, our model sets a new state-of-the-art performance at an execution accuracy of 87.1%. This is a work-in-progress technical report.
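
A minimal sketch of a single training step with a non-deterministic oracle, under assumed interfaces (the real parser, action inventory and loss are more involved): when several actions are acceptable at a step, the model is trained toward its own highest-scoring acceptable action instead of one arbitrary gold action.

```python
# Sketch: non-deterministic-oracle step loss for a sequence-to-action parser.
import torch
import torch.nn.functional as F

def nd_oracle_step_loss(action_logits, valid_actions):
    """action_logits: (n_actions,) scores; valid_actions: indices the oracle
    accepts at this step (e.g. WHERE conditions in any order)."""
    valid = torch.tensor(valid_actions)
    # Follow the model's most confident valid action rather than a fixed target.
    target = valid[torch.argmax(action_logits[valid])]
    return F.cross_entropy(action_logits.unsqueeze(0), target.unsqueeze(0))
```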

Fairness-aware Classification: Criterion, Convexity, and Bounds

Fairness-aware classification is receiving increasing attention in the machine learning field. Recent research proposes to formulate fairness-aware classification as a constrained optimization problem. However, several limitations exist in previous works due to the lack of a theoretical framework guiding the formulation. In this paper, we propose a general framework for learning fair classifiers which addresses these limitations. The framework formulates various commonly-used fairness metrics as convex constraints that can be directly incorporated into classic classification models. Within the framework, we propose a constraint-free criterion on the training data which ensures that any classifier learned from the data is fair. We also derive constraints which ensure that the real fairness metric is satisfied when surrogate functions are used to achieve convexity. Our framework can be used to formulate fairness-aware classification with fairness guarantees and computational efficiency. Experiments using real-world datasets demonstrate our theoretical results and show the effectiveness of the proposed framework and methods.
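
As a concrete, hedged illustration of "fairness metric as convex constraint" (one common convex relaxation, not necessarily the paper's exact constraint set): logistic loss minimized subject to a bound on the covariance between the sensitive attribute and the signed distance to the decision boundary.

```python
# Sketch: logistic regression with a convex fairness constraint (illustrative).
import cvxpy as cp
import numpy as np

def fair_logreg(X, y, s, cov_bound=0.01):
    """X: (n, d) features; y in {-1, +1} labels; s in {0, 1} sensitive attribute."""
    n, d = X.shape
    w = cp.Variable(d)
    loss = cp.sum(cp.logistic(cp.multiply(-y, X @ w))) / n
    # Covariance between the sensitive attribute and the boundary score.
    fairness = cp.abs(cp.sum(cp.multiply(s - s.mean(), X @ w))) / n
    prob = cp.Problem(cp.Minimize(loss), [fairness <= cov_bound])
    prob.solve()
    return w.value
```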

Adversarial Examples: Opportunities and Challenges

With the advent of the era of artificial intelligence (AI), deep neural networks (DNNs) have shown great superiority over humans in image recognition, speech processing, autonomous vehicles and medical diagnosis. However, recent studies indicate that DNNs are vulnerable to adversarial examples (AEs), which are designed by attackers to fool deep learning models. Unlike real examples, AEs can hardly be distinguished by the human eye, yet they mislead the model into predicting incorrect outputs and therefore threaten security-critical deep learning applications. In recent years, the generation of and defense against AEs have become a research hotspot in the field of AI security. This article reviews the latest research progress on AEs. First, we introduce the concept, causes, characteristics and evaluation metrics of AEs, then survey state-of-the-art AE generation methods and discuss their advantages and disadvantages. After that, we review existing defenses and discuss their limitations. Finally, we outline future research opportunities and challenges.
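
For readers unfamiliar with how AEs are generated, here is a short sketch of one classic method covered by surveys of this kind, the Fast Gradient Sign Method (FGSM): a single gradient-sign step of size epsilon pushes the input toward a misclassification while keeping it visually almost unchanged.

```python
# Sketch: FGSM adversarial example generation (illustrative PyTorch version).
import torch
import torch.nn.functional as F

def fgsm(model, x, label, epsilon=0.03):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()   # ascend the loss
        x_adv = x_adv.clamp(0.0, 1.0)                 # stay in the valid pixel range
    return x_adv.detach()
```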

DPP-PMRF: Rethinking Optimization for a Probabilistic Graphical Model Using Data-Parallel Primitives

We present a new parallel algorithm for probabilistic graphical model optimization. The algorithm relies on data-parallel primitives (DPPs), which provide performance portability across hardware architectures. We evaluate results on CPUs and GPUs for an image segmentation problem. Compared to a serial baseline, we observe runtime speedups of up to 13X (CPU) and 44X (GPU). We also compare our performance to a reference OpenMP-based algorithm and find speedups of up to 7X (CPU).

The closed loop between opinion formation and personalised recommendations

In social media, recommender systems are responsible for directing users to relevant content. In order to enhance user engagement, recommender systems adapt their output to the expected reactions of the users, which are in turn affected by the recommended content. In this work, we model a single user who interacts with an online news aggregator, with the purpose of making explicit the feedback loop between the evolution of the user's opinion and the personalised recommendation of content. We assume that the user has a scalar opinion on a certain issue; this opinion is influenced by all received news items, each of which is characterized by a binary position on the issue at hand. The user has a confirmation bias, that is, a preference for news that confirms her current opinion. At the same time, we assume that the recommender has the goal of maximizing the number of the user's clicks (as a measure of her engagement): in order to fulfil its goal, the recommender has to compromise between exploring the user's preferences and exploiting them. After defining suitable metrics for the effectiveness of the recommender system and for its impact on the opinion, we perform both extensive numerical simulations and a mathematical analysis of the model. We find that personalised content and confirmation bias do affect the evolution of opinions, and that the extent of these effects is inherently related to the effectiveness of the recommender. We also show that by tuning the amount of randomness in the recommendation algorithm, one can reduce the impact of the recommendation system on the opinions.
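
A toy simulation of the closed loop with illustrative parameters (the update rules and probabilities below are stand-ins, not the paper's exact model): a scalar opinion, two news positions, a confirmation-biased click rule, and an epsilon-greedy recommender that favours the position that was clicked more often.

```python
# Sketch: opinion/recommender feedback loop with confirmation bias (toy model).
import random

def simulate(steps=1000, eps=0.1, step_size=0.02, seed=0):
    rng = random.Random(seed)
    opinion, clicks = 0.0, {+1: 1, -1: 1}
    for _ in range(steps):
        # Recommender: exploit the better-clicked position, explore with probability eps.
        position = rng.choice([+1, -1]) if rng.random() < eps else max(clicks, key=clicks.get)
        # Confirmation bias: news agreeing with the current opinion is clicked more often.
        p_click = 0.5 + 0.4 * opinion * position
        if rng.random() < p_click:
            clicks[position] += 1
            opinion = max(-1.0, min(1.0, opinion + step_size * position))
    return opinion, clicks

print(simulate())   # more exploration (larger eps) weakens the feedback loop
```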

Semantic WordRank: Generating Finer Single-Document Summarizations

We present Semantic WordRank (SWR), an unsupervised method for generating an extractive summary of a single document. Built on a weighted word graph with semantic and co-occurrence edges, SWR scores sentences using an article-structure-biased PageRank algorithm with a Softplus function adjustment, and promotes topic diversity using spectral subtopic clustering under the Word Mover's Distance metric. We evaluate SWR on the DUC-02 and SummBank datasets and show that SWR produces better summaries than state-of-the-art algorithms on DUC-02 under common ROUGE measures. We then show that, under the same measures on SummBank, SWR outperforms each of the three human annotators (judges) and compares favorably with the combined performance of all judges.
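
A schematic reduction of the pipeline, using only co-occurrence edges and omitting SWR's semantic edges, structural bias and subtopic clustering: run PageRank over a word graph, pass word scores through a Softplus, and rank sentences by their adjusted word scores.

```python
# Sketch: word-graph PageRank with Softplus adjustment for extractive summarization.
import math
import networkx as nx

def summarize(sentences, top_k=3, window=2):
    tokenized = [s.lower().split() for s in sentences]
    g = nx.Graph()
    for words in tokenized:                              # co-occurrence edges within a window
        for i, w in enumerate(words):
            for v in words[i + 1 : i + 1 + window]:
                g.add_edge(w, v)
    rank = nx.pagerank(g)
    softplus = {w: math.log1p(math.exp(r)) for w, r in rank.items()}
    scores = [sum(softplus.get(w, 0.0) for w in set(words)) for words in tokenized]
    best = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:top_k]
    return [sentences[i] for i in sorted(best)]          # keep original sentence order
```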

A Unified Batch Online Learning Framework for Click Prediction

We present a unified framework for Batch Online Learning (OL) for click prediction in search advertising. Machine learning models, once deployed, show non-trivial accuracy and calibration degradation over time due to model staleness. It is therefore necessary to update models regularly, and to do so automatically. This paper presents two paradigms of Batch Online Learning: one which incrementally updates the model parameters via an early stopping mechanism, and another which does so through proximal regularization. We show how both of these schemes naturally trade off between old and new data. We then show, theoretically and empirically, that these two seemingly different schemes are closely related. Through extensive experiments, we demonstrate the utility of our OL framework, how the two OL schemes relate to each other, and how they trade off between new and historical data. We then compare batch OL to full model retrains and show that online learning is more robust to data issues. We also demonstrate the long-term impact of online learning, the role of the initial models in OL, and the impact of delays in the update, and conclude with implementation details and challenges in deploying a real-world online learning system in production. While this paper focuses mostly on click prediction for search advertising, we hope that the lessons learned here can be carried over to other problem domains.
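
The proximal flavour of batch OL has a particularly compact form; the sketch below is illustrative and assumes a differentiable `loss_fn` over the new batch. Fitting the new data while penalizing movement away from the previously deployed weights is what trades off new against historical data, with larger regularization keeping more of the old model.

```python
# Sketch: proximally regularized batch online update (illustrative).
import torch

def proximal_update(w_prev, new_batch, loss_fn, lam=1.0, lr=0.1, steps=100):
    w = w_prev.clone().requires_grad_(True)
    opt = torch.optim.SGD([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # New-data loss plus a pull toward the previously deployed weights.
        obj = loss_fn(w, new_batch) + lam * torch.sum((w - w_prev) ** 2)
        obj.backward()
        opt.step()
    return w.detach()
```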

Distilled Wasserstein Learning for Word Embedding and Topic Modeling

We propose a novel Wasserstein method with a distillation mechanism, yielding joint learning of word embeddings and topics. The proposed method is based on the fact that the Euclidean distance between word embeddings may be employed as the underlying distance in the Wasserstein topic model. The word distributions of topics, their optimal transports to the word distributions of documents, and the embeddings of words are learned in a unified framework. When learning the topic model, we leverage a distilled underlying distance matrix to update the topic distributions and smoothly calculate the corresponding optimal transports. Such a strategy provides the updating of word embeddings with robust guidance, improving the algorithmic convergence. As an application, we focus on patient admission records, in which the proposed method embeds the codes of diseases and procedures and learns the topics of admissions, obtaining superior performance on clinically-meaningful disease network construction, mortality prediction as a function of admission codes, and procedure recommendation.
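
The key ingredient, using Euclidean distances between word embeddings as the ground cost of an optimal transport between a topic's and a document's word distributions, can be illustrated with standard entropic Sinkhorn iterations; this is a generic sketch, not the paper's distilled learning loop.

```python
# Sketch: entropic OT between word distributions with an embedding-based ground cost.
import numpy as np

def sinkhorn_ot(p, q, embeddings, reg=0.1, iters=200):
    """p, q: distributions over the vocabulary; embeddings: (V, d) word vectors."""
    C = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    K = np.exp(-C / reg)
    u = np.ones_like(p)
    for _ in range(iters):                      # Sinkhorn fixed-point updates
        v = q / (K.T @ u)
        u = p / (K @ v)
    T = u[:, None] * K * v[None, :]             # transport plan between topic and document
    return np.sum(T * C)                        # entropically regularized transport cost
```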

Semantically Enhanced Models for Commonsense Knowledge Acquisition

Commonsense knowledge is paramount to enable intelligent systems. Typically, it is characterized as being implicit and ambiguous, thereby hindering the automation of its acquisition. To address these challenges, this paper presents semantically enhanced models that enable reasoning by resolving part of this commonsense ambiguity. The proposed models operate within a knowledge graph embedding (KGE) framework for knowledge base completion. Experimental results show the effectiveness of the new semantic models in commonsense reasoning.

Evaluation of Semantic Metadata Pair Modelling Using Data Clustering

Metadata presents a medium for connection, elaboration, examination, and comprehension of the relationship between two datasets. Metadata can be enriched to assess the existence of a connection between different disintegrated datasets. In order to do so, the very first task is to attain a generic metadata representation for domains. This representation narrows down the metadata search space, which consists of attributes, tags, semantic content, annotations, etc., used to perform classification. Existing technologies limit the metadata bandwidth, i.e., the operation set available for matching purposes is restricted. This research focuses on generating a mapper function, called cognate, that can find mathematical relevance based on pairs of attributes between disintegrated datasets. Each pair is designed from one of the datasets under consideration using the existing metadata and available meta-tags. After pairs have been generated, samples are constructed using different combinations of pairs. The similarity and relevance between two or more pairs are obtained by using a data clustering technique that builds large groups from smaller groups based on a similarity index. The search space is divided using a domain divider function, and smaller search spaces are created using relativity and tagging as the main concepts. For this research, the initial datasets have been limited to textual information. Once all disjoint meta-collections have been generated, the approximation algorithm calculates the center of each meta-set. These centers serve as meta-pointers, i.e., a collection of meta-domain representations. Each pointer can then join a cluster based on its meta-content. This also facilitates the discovery of possible synonyms across cross-functional domains, which can be examined using meta-pointers and graph pools.

Receiver Operating Characteristic (ROC) Curves

Receiver operating characteristic (ROC) curves are used ubiquitously to evaluate covariates, markers, or features as potential predictors in binary problems. We distinguish raw ROC diagnostics and ROC curves, elucidate the special role of concavity in interpreting and modelling ROC curves, and establish an equivalence between ROC curves and cumulative distribution functions (CDFs). These results support a subtle shift of paradigms in the statistical modelling of ROC curves, which we view as curve fitting. We introduce the flexible two-parameter beta family for fitting CDFs to empirical ROC curves, derive the large sample distribution of the minimum distance estimator and provide software in R for estimation and testing, including both asymptotic and Monte Carlo based inference. In a range of empirical examples the beta family and its three- and four-parameter ramifications that allow for straight edges fit better than the classical binormal model, particularly under the vital constraint of the fitted curve being concave.
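
A small sketch of the fitting idea, using a plain least-squares variant rather than the paper's formal minimum distance estimator and accompanying R software: an empirical ROC curve is treated as a CDF and approximated by a two-parameter beta CDF.

```python
# Sketch: fit a two-parameter beta CDF to an empirical ROC curve (illustrative).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

def fit_beta_roc(fpr, tpr):
    """fpr, tpr: points of the empirical ROC curve, both in [0, 1]."""
    def objective(log_params):
        a, b = np.exp(log_params)                  # keep both shape parameters positive
        return np.mean((beta.cdf(fpr, a, b) - tpr) ** 2)
    res = minimize(objective, x0=np.log([0.5, 2.0]), method="Nelder-Mead")
    return np.exp(res.x)                           # fitted (a, b); ROC(t) ~ Beta_cdf(t; a, b)
```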

Graph powering and spectral robustness

Bayesian Structure Learning by Recursive Bootstrap

We address the problem of Bayesian structure learning for domains with hundreds of variables by employing non-parametric bootstrap recursively. We propose a method that covers both model averaging and model selection in the same framework. The proposed method deals with the main weakness of constraint-based learning, namely sensitivity to errors in the independence tests, by a novel way of combining bootstrap with constraint-based learning. Essentially, we provide an algorithm for learning a tree in which each node represents a scored CPDAG for a subset of variables, and the level of the node corresponds to the maximal order of conditional independencies that are encoded in the graph. Since higher-order independencies are tested in deeper recursive calls, they benefit from more bootstrap samples and are therefore more resistant to the curse of dimensionality. Moreover, the re-use of stable low-order independencies allows greater computational efficiency. We also provide an algorithm for sampling CPDAGs efficiently from their posterior given the learned tree. We empirically demonstrate that the proposed algorithm scales well to hundreds of variables and learns better MAP models and more reliable causal relationships between variables than other state-of-the-art methods.

Unsupervised Machine Commenting with Neural Variational Topic Model

Article comments can provide supplementary opinions and facts for readers, thereby increasing the attractiveness and engagement of articles. Therefore, automatic commenting is helpful for improving the activeness of communities such as online forums and news websites. Previous work shows that training an automatic commenting system requires large parallel corpora. Although some articles are naturally paired with comments on certain websites, most articles and comments on the Internet are unpaired. To fully exploit the unpaired data, we completely remove the need for parallel data and propose a novel unsupervised approach to train an automatic article commenting model, relying on nothing but unpaired articles and comments. Our model is based on a retrieval-based commenting framework, which uses the news article to retrieve comments based on the similarity of their topics. The topic representation is obtained from a neural variational topic model, which is trained in an unsupervised manner. We evaluate our model on a news comment dataset. Experiments show that our proposed topic-based approach significantly outperforms previous lexicon-based models. The model also profits from paired corpora and achieves state-of-the-art performance in semi-supervised scenarios.

• Tire Noise Optimization Problem: a Mixed Integer Linear Program Approach
• IPAD: Stable Interpretable Forecasting with Knockoffs Inference
• Fixed set search applied to the traveling salesman problem
• Reductive property of new fuzzy reasoning method based on distance measure
• Efficient Global Optimization using Deep Gaussian Processes
• Linear Complexity of Geometric Sequences Defined by Cyclotomic Classes and Balanced Binary Sequences Constructed by the Geometric Sequences
• Correcting Differential Correlation functions for Collision Centrality bin width effects
• Cyclic Demazure modules and positroid varieties
• Quantum criticality in Ising chains with random hyperuniform couplings
• Bayesian sparse reconstruction: a brute-force approach to astronomical imaging and machine learning
• Magic-angle semimetals
• Pyaneti: a fast and powerful software suite for multi-planet radial velocity and transit fitting
• Discrepancies between extinction events and boundary equilibria in reaction networks
• Nodal points of Weyl semimetals survive the presence of moderate disorder
• Global Convergence of Stochastic Gradient Hamiltonian Monte Carlo for Non-Convex Stochastic Optimization: Non-Asymptotic Performance Bounds and Momentum-Based Acceleration
• A Two-Step Learning Method For Detecting Landmarks on Faces From Different Domains
• Visual-Quality-Driven Learning for Underwater Vision Enhancement
• Iterative phase retrieval in coherent diffractive imaging: practical issues
• Correlation and Simpson conversion in $2 \times 2 \times 2$ contingency tables
• Connection Coefficients for Higher-order Bernoulli and Euler Polynomials: A Random Walk Approach
• Jump to better conclusions: SCAN both left and right
• Deep Spectral Correspondence for Matching Disparate Image Pairs
• Comment on ‘Mind the Gap: Boltzmannian versus Gibbsian Equilibrium’
• Characterization of queer supercrystals
• Are object detection assessment criteria ready for maritime computer vision?
• Compact QC-LDPC Block and SC-LDPC Convolutional Codes for Low-Latency Communications
• Creating Fair Models of Atherosclerotic Cardiovascular Disease Risk
• PARyOpt: A software for Parallel Asynchronous Remote Bayesian Optimization
• Prediction and estimation consistency of sparse multi-class penalized optimal scoring
• A scalable method to find the shortest path in a graph with circuits of memristors
• Automatic Program Synthesis of Long Programs with a Learned Garbage Collector
• Fair lending needs explainable models for responsible recommendation
• Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation
• Global and short-range entanglement properties in excited, many-body localized spin chains
• Hyperplane Sections of Determinantal Varieties over Finite Fields and Linear Codes
• An Online Plug-and-Play Algorithm for Regularized Image Reconstruction
• Geometric Image Synthesis
• Learning to Summarize Radiology Findings
• Reconciling Similar Sets of Data
• Do-It-Yourself Single Camera 3D Pointer Input Device
• Random walks with local memory
• Linear Algebra and Duality of Neural Networks
• Towards the linear arboricity conjecture
• Singularity of random symmetric matrices — a combinatorial approach to improved bounds
• Exponential tail estimates in the Law of Ordinary Logarithm (LOL) for arrays of random variables
• Sim-to-Real Transfer Learning using Robustified Controllers in Robotic Tasks involving Complex Dynamics
• Perspective from the Literature on the Role of Expert Judgment in Scientific and Statistical Research and Practice
• The Paulsen Problem Made Simple
• Text-based Passwords Generated From Topological Graphic Passwords
• Adapting Semantic Segmentation Models for Changes in Illumination and Camera Perspective
• PPtaxi: Non-stop Package Delivery via Multi-hop Ridesharing
• DispSegNet: Leveraging Semantics for End-to-End Learning of Disparity Estimation from Stereo Imagery
• SafeCity: Understanding Diverse Forms of Sexual Harassment Personal Stories
• Adversarial Feature Sampling Learning for Efficient Visual Tracking
• A Coupled Compressive Sensing Scheme for Uncoordinated Multiple Access
• On equivalence of the LKJ distribution and the restricted Wishart distribution
• Geodesic Clustering in Deep Generative Models
• Strategies to Inject Spoofed Measurement Data
• Head Reconstruction from Internet Photos
• 3D Face Hallucination from a Single Depth Frame
• Video to Fully Automatic 3D Hair Model
• Real-Time Joint Semantic Segmentation and Depth Estimation Using Asymmetric Annotations
• Deep Learning for Waveform Estimation and Imaging in Passive Radar
• Domination Parameters of the Unitary Cayley Graph of $\mathbb{Z}/n\mathbb{Z}$
• Towards Coinductive Theory Exploration in Horn Clause Logic: Position Paper
• Disorder-driven transition and intermediate phase for $\nu=5/2$ fractional quantum Hall effect
• Generative adversarial network-based image super-resolution using perceptual content losses
• Deep Learning-based Image Super-Resolution Considering Quantitative and Perceptual Quality
• Focus Group on Artificial Intelligence for Health
• Multi-Dimensional Spatially-Coupled Code Design Through Informed Relocation of Circulants
• Robust Densest Subgraph Discovery
• Canonical and Compact Point Cloud Representation for Shape Classification
• Mahler’s method in several variables I: The theory of regular singular systems
• Optimal stopping for measure-valued piecewise deterministic Markov processes
• Mahler’s method in several variables II: Applications to base change problems and finite automata
• Projected Dynamical Systems on Irregular, Non-Euclidean Domains for Nonlinear Optimization
• Nonautonomous driving induces stability in network of identical oscillators
• Image Captioning based on Deep Reinforcement Learning
• Tübingen-Oslo system: Linear regression works the best at Predicting Current and Future Psychological Health from Childhood Essays in the CLPsych 2018 Shared Task
• On Offline Evaluation of Vision-based Driving Models
• Bifocal-Lens Antenna Based OAM Communications System
• Convergence of discrete period matrices and discrete holomorphic integrals for ramified coverings of the Riemann sphere
• Bayesian shrinkage in mixture of experts models: Identifying robust determinants of class membership
• Stochastic Variational Optimization
• Relevance in Structured Argumentation
• The Covering Radius of the Reed–Muller Code $RM(2,7)$ is 40
• The stochastic viscous Cahn-Hilliard equation: well-posedness, regularity and vanishing viscosity limit
• The Zeckendorf Game
• The Generalized Zeckendorf Game
• Varimax rotation based on gradient projection needs between 10 and more than 500 random start loading matrices for optimal performance
• Error estimates for space-time discretization of parabolic time-optimal control problems with bang-bang controls
• Deep Network Uncertainty Maps for Indoor Navigation
• Data Rates for Stabilizing Control under Denial-of-Service Attacks
• Rigid foldability of the augmented square twist
• Query-Efficient Black-Box Attack by Active Learning
• Algorithms for Optimal Control with Fixed-Rate Feedback
• Coordination-driven learning in multi-agent problem spaces
• Active Damping of a DC Network with a Constant Power Load: An Adaptive Passivity-based Control Approach
• A Self-Stabilizing Hashed Patricia Trie
• Measuring Systematic Risk with Neural Network Factor Model
• Identifying Real Estate Opportunities using Machine Learning
• LiveBot: Generating Live Video Comments Based on Visual and Textual Contexts
• Optimal timescale of community detection in growing networks
• Valid Simultaneous Inference in High-Dimensional Settings (with the hdm package for R)
• Computational complexity of the Rydberg blockade in two dimensions
• Placement Optimization for UAV-Enabled Wireless Networks with Multi-Hop Backhaul
• Gaussian process classification using posterior linearisation
• Branching random walk in random environment with random absorption wall
• Simulation-based Distributed Coordination Maximization over Networks
• Sparse Label Smoothing for Semi-supervised Person Re-Identification
• On Existence, Mixtures, Computation and Efficiency in Multi-objective Games
• High-Accuracy Inference in Neuromorphic Circuits using Hardware-Aware Training
• Part-based Graph Convolutional Network for Action Recognition
• SiftingGAN: Generating and Sifting Labeled Samples to Improve the Remote Sensing Image Scene Classification Baseline in vitro
• Synthetic Occlusion Augmentation with Volumetric Heatmaps for the 2018 ECCV PoseTrack Challenge on 3D Human Pose Estimation
• Sequential Coordination of Deep Models for Learning Visual Arithmetic
• Learning Hybrid Models to Control a Ball in a Circular Maze
• Efficient Graph Cut Optimization for Full CRFs with Quantized Edges
• Clipped Matrix Completion: a Remedy for Ceiling Effects
• From Nonlinear Identification to Linear Parameter Varying Models: Benchmark Examples
• Parallel Machine Scheduling with a Single Resource per Job
• Tracer diffusion in crowded narrow channels. Topical review
• Linear Parameter Varying Representation of a class of MIMO Nonlinear Systems
• Relays: A New Approach for the Finite Departure Problem in Overlay Networks
• Minimax Learning of Ergodic Markov Chains
• A Hodge filtration of logarithmic vector fields for well-generated complex reflection groups
• Weighted Turan Problems with Applications
• Reduced critical Bellman-Harris branching processes for small populations
• Discovering Features in Sr$_{14}$Cu$_{24}$O$_{41}$ Neutron Single Crystal Diffraction Data by Cluster Analysis
• Hamiltonian Descent Methods
• PhD Dissertation: Generalized Independent Components Analysis Over Finite Alphabets
• Sparsity of solutions for variational inverse problems with finite-dimensional data
• Learning to Group and Label Fine-Grained Shape Components
• A general theory for preferential sampling in environmental networks
• XNLI: Evaluating Cross-lingual Sentence Representations
• Simulated Tempering Method in the Infinite Switch Limit with Adaptive Weight Learning
• Seeing Tree Structure from Vibration
• Learning Shape Priors for Single-View 3D Completion and Reconstruction
• Physical Primitive Decomposition
• Channel Polarization through the Lens of Blackwell Measures
• Derivative-free online learning of inverse dynamics models
• Where Does Haydn End and Mozart Begin? Composer Classification of String Quartets
• Computer Vision-aided Atom Tracking in STEM Imaging
• MSc Dissertation: Exclusive Row Biclustering for Gene Expression Using a Combinatorial Auction Approach
• Competition and partnership between conformity and payoff-based imitations in social dilemmas
• Submodular Secretary Problem with Shortlists
• Quotients of the magmatic operad: lattice structures and convergent rewrite systems
• Estimating Historical Functional Linear Models with a Nested Group Bridge Approach
• Polynomial mixing time of edge flips on quadrangulations
