Personalized Embedding Propagation: Combining Neural Networks on Graphs with Personalized PageRank
DPASF: A Flink Library for Streaming Data Preprocessing
Data preprocessing techniques are devoted to correcting or alleviating errors in data. Discretization and feature selection are two of the most widely used data preprocessing techniques. Although there are many proposals for static Big Data preprocessing, little research addresses the continuous, streaming Big Data problem. Apache Flink is a recent Big Data framework, following the MapReduce paradigm, focused on distributed stream and batch data processing. In this paper we propose DPASF, a data stream library for Big Data preprocessing under Apache Flink. We have implemented six of the most popular data preprocessing algorithms: three for discretization and three for feature selection. The algorithms have been tested using two Big Data datasets. Experimental results show that preprocessing not only reduces the size of the data but also maintains, or even improves, the original accuracy in a short time. DPASF contains useful algorithms for Big Data streams. The preprocessing algorithms included in the library can tackle large datasets efficiently and correct imperfections in the data.
Adaptive Low-Nonnegative-Rank Approximation for State Aggregation of Markov Chains
This paper develops a low-nonnegative-rank approximation method to identify the state aggregation structure of a finite-state Markov chain, under the assumption that the state space can be mapped into a handful of meta-states. The number of meta-states is characterized by the nonnegative rank of the Markov transition matrix. Motivated by the success of the nuclear norm relaxation in low-rank minimization problems, we propose an atomic regularizer as a convex surrogate for the nonnegative rank and formulate a convex optimization problem. Because the atomic regularizer itself is not computationally tractable, we instead solve a sequence of problems involving a nonnegative factorization of the Markov transition matrices, using the proximal alternating linearized minimization (PALM) method. Two methods for adjusting the rank of the factorization are developed to escape local minima. One appends an additional column to the factorized matrices, which can be interpreted as approximating a negative subgradient step. The other reduces redundant dimensions by means of linear combinations. Overall, the proposed algorithm very likely converges to the global solution. The efficiency and statistical properties of our approach are illustrated on synthetic data. We also apply our state aggregation algorithm to a Manhattan transportation dataset and make extensive comparisons with an existing method.
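As an illustration of the factorization subproblem only (not the paper's full algorithm, which adds the atomic regularizer and the rank-adjustment steps), here is a minimal numpy sketch of PALM-style alternating proximal gradient steps for a nonnegative factorization of a toy transition matrix; the block structure of `P` stands in for two meta-states.

```python
import numpy as np

def palm_nmf(P, r, iters=500, seed=0):
    """Approximate P (n x n) by nonnegative U (n x r) and V (r x n) with
    PALM: alternating gradient steps on 0.5*||P - UV||_F^2 with Lipschitz
    step sizes, where the prox of the nonnegativity indicator is simply
    projection onto the nonnegative orthant."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    U, V = rng.random((n, r)), rng.random((r, n))
    for _ in range(iters):
        L_U = np.linalg.norm(V @ V.T, 2) + 1e-12   # Lipschitz const of grad_U
        U = np.maximum(U - ((U @ V - P) @ V.T) / L_U, 0.0)
        L_V = np.linalg.norm(U.T @ U, 2) + 1e-12   # Lipschitz const of grad_V
        V = np.maximum(V - (U.T @ (U @ V - P)) / L_V, 0.0)
    return U, V

# Toy chain whose two diagonal blocks play the role of meta-states
P = np.array([[0.45, 0.45, 0.05, 0.05],
              [0.40, 0.50, 0.05, 0.05],
              [0.05, 0.05, 0.50, 0.40],
              [0.05, 0.05, 0.45, 0.45]])
U, V = palm_nmf(P, r=2)
print(np.round(U @ V, 2))  # should be close to P
```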
Accelerator Virtualization in Fog Computing: Moving From the Cloud to the Edge
Hardware accelerators are available on the Cloud for enhanced analytics. Next generation Clouds aim to bring enhanced analytics using accelerators closer to user devices at the edge of the network for improving Quality-of-Service by minimizing end-to-end latencies and response times. The collective computing model that utilizes resources at the Cloud-Edge continuum in a multi-tier hierarchy comprising the Cloud, the Edge and user devices is referred to as Fog computing. This article identifies challenges and opportunities in making accelerators accessible at the Edge. A holistic view of the Fog architecture is key to pursuing meaningful research in this area.
On the relationship between Dropout and Equiangular Tight Frames
Dropout is a popular regularization technique in neural networks. Yet, the reason for its success is still not fully understood. This paper provides a new interpretation of Dropout from a frame theory perspective. This leads to a novel regularization technique for neural networks that minimizes the cross-correlation between filters in the network. We demonstrate its applicability in convolutional and fully connected layers in both feed-forward and recurrent networks.
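The abstract does not spell out the regularizer, so the following numpy sketch shows one natural reading rather than the paper's exact loss: normalize the filters, form their Gram matrix, and penalize the squared off-diagonal correlations, which is precisely what an equiangular tight frame minimizes.

```python
import numpy as np

def coherence_penalty(W):
    """Cross-correlation penalty for a filter bank W (k filters x d dims):
    normalize each filter, form the Gram matrix of pairwise cosine
    similarities, and penalize its squared off-diagonal entries."""
    Wn = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
    G = Wn @ Wn.T                      # pairwise cosine similarities
    off = G - np.eye(W.shape[0])       # zero out the diagonal
    return np.sum(off ** 2)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 32))
print(coherence_penalty(W))  # add this (suitably scaled) to the training loss
```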
Distributed learning of deep neural network over multiple agents
In domains such as health care and finance, a shortage of labeled data and computational resources is a critical issue when developing machine learning algorithms. To address the scarcity of labeled data in training and deploying neural-network-based systems, we propose a new technique to train deep neural networks over several data sources. Our method allows deep neural networks to be trained using data from multiple entities in a distributed fashion. We evaluate our algorithm on existing datasets and show that it obtains performance similar to that of a regular neural network trained on a single machine. We further extend it to incorporate semi-supervised learning when training with few labeled samples, and analyze the security concerns that may arise. Our algorithm paves the way for distributed training of deep neural networks in data-sensitive applications when raw data may not be shared directly.
PRETZEL: Opening the Black Box of Machine Learning Prediction Serving Systems
Machine Learning models are often composed of pipelines of transformations. While this design makes it efficient to execute single model components at training time, prediction serving has different requirements, such as low latency, high throughput, and graceful performance degradation under heavy load. Current prediction serving systems consider models as black boxes, whereby prediction-time-specific optimizations are ignored in favor of ease of deployment. In this paper, we present PRETZEL, a prediction serving system introducing a novel white-box architecture enabling both end-to-end and multi-model optimizations. Using production-like model pipelines, our experiments show that PRETZEL is able to introduce performance improvements over several dimensions; compared to state-of-the-art approaches, PRETZEL is on average able to reduce 99th percentile latency by 5.5x while reducing memory footprint by 25x and increasing throughput by 4.7x.
Learning to fail: Predicting fracture evolution in brittle materials using recurrent graph convolutional neural networks
Understanding dynamic fracture propagation is essential to predicting how brittle materials fail. Various mathematical models and computational applications have been developed to predict fracture evolution and coalescence, including finite-discrete element methods such as the Hybrid Optimization Software Suite (HOSS). While such methods achieve high fidelity results, they can be computationally prohibitive: a single simulation takes hours to run, and thousands of simulations are required for a statistically meaningful ensemble. We propose a machine learning approach that, once trained on data from HOSS simulations, can predict fracture growth statistics within seconds. Our method uses deep learning, exploiting the capabilities of a graph convolutional network to recognize features of the fracturing material, along with a recurrent neural network to model the evolution of these features. In this way, we simultaneously generate predictions for qualitatively distinct material properties. Our prediction for total damage in a coalesced fracture, at the final simulation time step, is within 3% of its actual value, and our prediction for total length of a coalesced fracture is within 2%. We also develop a novel form of data augmentation that compensates for the modest size of our training data, and an ensemble learning approach that enables us to predict when the material fails, with a mean absolute error of approximately 15%.
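To make the architecture concrete, here is a deliberately small numpy sketch of the two ingredients named above: a graph-convolution step over a fracture-network adjacency, followed by a recurrent update over time steps. The layer sizes, pooling, and readout are illustrative assumptions, not the HOSS-trained model.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: a symmetrically normalized adjacency
    propagates node features before a learned linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

def rnn_step(h, x, Wh, Wx):
    """Plain tanh RNN cell evolving a pooled graph embedding over time."""
    return np.tanh(h @ Wh + x @ Wx)

rng = np.random.default_rng(0)
A = (rng.random((10, 10)) < 0.3).astype(float); A = np.maximum(A, A.T)
W, Wh, Wx = rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
h = np.zeros(8)
for t in range(5):                               # fracture snapshots over time
    X_t = rng.normal(size=(10, 4))               # per-node (crack) features
    g_t = gcn_layer(A, X_t, W).mean(axis=0)      # pool nodes to a graph vector
    h = rnn_step(h, g_t, Wh, Wx)                 # update the temporal state
print(h)  # would feed a readout head predicting damage statistics
```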
Variational Neural Networks: Every Layer and Neuron Can Be Unique
The choice of activation function can significantly influence the performance of neural networks, yet there are few guiding principles for selecting one. We address this issue by introducing variational neural networks, in which the activation function is represented as a linear combination of candidate functions and an optimal activation is obtained by minimizing a loss function with gradient descent. The gradient formulae for the loss function with respect to these expansion coefficients are central to implementing the gradient descent algorithm, and we derive them here.
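Since the construction is fully specified by the abstract, a short numpy sketch follows; the candidate set {tanh, ReLU, sigmoid} is an assumed example, and the comment records the gradient formula that makes the coefficients trainable.

```python
import numpy as np

def variational_activation(x, alpha):
    """Activation as a learned linear combination of candidate functions:
    f(x) = sum_k alpha_k * phi_k(x). By linearity, the gradient of the loss
    w.r.t. each coefficient is dL/dalpha_k = dL/df(x) * phi_k(x), so the
    alpha_k can be trained by ordinary gradient descent."""
    candidates = [np.tanh(x), np.maximum(x, 0.0), 1.0 / (1.0 + np.exp(-x))]
    return sum(a * phi for a, phi in zip(alpha, candidates))

x = np.linspace(-2, 2, 5)
alpha = np.array([0.5, 0.3, 0.2])   # learnable expansion coefficients
print(variational_activation(x, alpha))
```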
ABACUS: Unsupervised Multivariate Change Detection via Bayesian Source Separation
Change detection involves segmenting sequential data such that observations in the same segment share some desired properties. Multivariate change detection continues to be a challenging problem due to the variety of ways change points can be correlated across channels and the potentially poor signal-to-noise ratio on individual channels. In this paper, we are interested in locating additive outliers (AO) and level shifts (LS) in the unsupervised setting. We propose ABACUS (Automatic BAyesian Changepoints Under Sparsity), a Bayesian source separation technique to recover latent signals while also detecting changes in model parameters. Multi-level sparsity achieves both dimension reduction and modeling of signal changes. We show ABACUS has competitive or superior performance in simulation studies against state-of-the-art change detection methods and established latent variable models. We also illustrate ABACUS on two real applications: modeling genomic profiles and analyzing household electricity consumption.
An Optimal Control Approach to Sequential Machine Teaching
Given a sequential learning algorithm and a target model, sequential machine teaching aims to find the shortest training sequence to drive the learning algorithm to the target model. We present the first principled way to find such shortest training sequences. Our key insight is to formulate sequential machine teaching as a time-optimal control problem. This allows us to solve sequential teaching by leveraging key theoretical and computational tools developed over the past 60 years in the optimal control community. Specifically, we study the Pontryagin Maximum Principle, which yields a necessary condition for optimality of a training sequence. We present analytic, structural, and numerical implications of this approach on a case study with a least-squares loss function and gradient descent learner. We compute optimal training sequences for this problem, and although the sequences seem circuitous, we find that they can vastly outperform the best available heuristics for generating training sequences.
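For contrast with the optimal-control solution, the following numpy sketch implements the kind of greedy heuristic the paper outperforms, on the same case study: a gradient descent learner on the squared loss, with the teacher picking whichever candidate example moves the learner closest to the target. Pool size, step size, and horizon are illustrative assumptions.

```python
import numpy as np

def teach_greedy(w0, w_star, pool, eta=0.5, steps=30):
    """Greedy baseline teacher: at each step, simulate the learner's update
    w <- w - eta * (w.x - y) * x for every (x, y) in the candidate pool and
    commit to the one landing closest to the target model w_star."""
    w = w0.copy()
    for _ in range(steps):
        cands = [w - eta * (w @ x - y) * x for x, y in pool]
        w = min(cands, key=lambda c: np.linalg.norm(c - w_star))
    return w

rng = np.random.default_rng(0)
w0, w_star = np.zeros(2), np.array([1.0, -2.0])
pool = [(rng.normal(size=2), rng.normal()) for _ in range(50)]
print(teach_greedy(w0, w_star, pool))  # moves toward w_star, but greedily
```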
Revisit Batch Normalization: New Understanding from an Optimization View and a Refinement via Composition Optimization
Batch Normalization (BN) has been used extensively in deep learning to achieve faster training and better-performing models. However, whether BN works strongly depends on how the batches are constructed during training, and it may not converge to a desired solution if the statistics on a batch are not close to the statistics over the whole dataset. In this paper, we try to understand BN from an optimization perspective by formulating the optimization problem that motivates BN. We show when BN works and when it does not by analyzing this optimization problem. We then propose a refinement of BN based on compositional optimization techniques, called Full Normalization (FN), to alleviate the issues of BN when the batches are not constructed ideally. We provide a convergence analysis for FN and empirically study its effectiveness in refining BN.
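A tiny numpy experiment makes the failure mode explicit: normalizing a badly constructed batch with its own statistics (BN) hides the fact that the batch is biased, whereas whole-dataset statistics (the quantity FN approximates via composition optimization) expose it. The sketch only contrasts the statistics; it does not implement FN's solver.

```python
import numpy as np

def normalize(x, mu, var, eps=1e-5):
    """Normalize activations x with the given statistics: BN uses the
    mini-batch's own mu/var, FN targets whole-dataset statistics."""
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=2.0, size=(10_000, 4))
batch = data[data[:, 0] > 4.0][:32]                  # badly constructed batch
bn = normalize(batch, batch.mean(0), batch.var(0))   # BN: batch statistics
fn = normalize(batch, data.mean(0), data.var(0))     # FN-style: full statistics
print(bn.mean(0))  # forced to ~0: the batch bias is invisible to BN
print(fn.mean(0))  # clearly nonzero in coordinate 0: the bias is revealed
```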
Stopping times in the game Rock-Paper-Scissors
In this paper we compute the stopping times of the game Rock-Paper-Scissors. By exploiting a recurrence relation, we compute the mean values of the stopping times. By constructing the transition matrix of a Markov chain associated with the game, we also obtain the distribution of the stopping times and thereby recover the mean stopping times. We then show that the mean stopping times increase exponentially fast as the number of participants grows.
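As a sanity check on the growth claim, here is a Monte Carlo sketch under one common elimination convention (if exactly two gestures appear in a round, the players holding the losing gesture are eliminated; otherwise the round is a draw); the paper's precise rules and its exact Markov chain computation may differ.

```python
import numpy as np

BEATS = {0: 2, 1: 0, 2: 1}  # rock(0) beats scissors(2), paper(1) beats rock, ...

def rps_stopping_time(n, rng):
    """Simulate one game: all remaining players throw uniformly each round;
    with exactly two gestures on the table, the losers are eliminated.
    Returns the number of rounds until a single winner remains."""
    rounds = 0
    while n > 1:
        rounds += 1
        throws = rng.integers(0, 3, size=n)
        gestures = set(throws.tolist())
        if len(gestures) == 2:
            a, b = gestures
            loser = b if BEATS[a] == b else a
            n = int(np.sum(throws != loser))
    return rounds

rng = np.random.default_rng(0)
for n in (2, 3, 4, 5):
    mean = np.mean([rps_stopping_time(n, rng) for _ in range(20_000)])
    print(n, round(mean, 2))  # the mean stopping time grows rapidly with n
```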
Robust descent using smoothed multiplicative noise
To improve the off-sample generalization of classical procedures minimizing the empirical risk under potentially heavy-tailed data, new robust learning algorithms have been proposed in recent years, with generalized median-of-means strategies being particularly salient. These procedures enjoy performance guarantees in the form of sharp risk bounds under weak moment assumptions on the underlying loss, but typically suffer from a large computational overhead and substantial bias when the data happens to be sub-Gaussian, limiting their utility. In this work, we propose a novel robust gradient descent procedure which makes use of a smoothed multiplicative noise applied directly to observations before constructing a sum of soft-truncated gradient coordinates. We show that the procedure has competitive theoretical guarantees, with the major advantage of a simple implementation that does not require an iterative sub-routine for robustification. Empirical tests reinforce the theory, showing more efficient generalization over a much wider class of data distributions.
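The estimator's details, in particular the multiplicative noise smoothing, are not given in the abstract, so this numpy sketch shows only an assumed soft-truncation backbone: a Catoni-style influence function applied coordinate-wise to per-sample gradients before averaging.

```python
import numpy as np

def psi(x):
    """Catoni-style soft truncation: roughly linear near 0, logarithmic in
    the tails, so extreme coordinates contribute only boundedly."""
    return np.sign(x) * np.log1p(np.abs(x) + 0.5 * x * x)

def robust_gradient(per_sample_grads, s):
    """Aggregate per-sample gradient coordinates by soft truncation at
    scale s instead of a plain mean, blunting heavy-tailed outliers."""
    return s * np.mean(psi(per_sample_grads / s), axis=0)

rng = np.random.default_rng(0)
g = rng.standard_t(df=2, size=(256, 4))   # heavy-tailed per-sample gradients
print(np.mean(g, axis=0))                 # noisy empirical mean
print(robust_gradient(g, s=3.0))          # stabilized estimate
```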
Comparing Temporal Graphs Using Dynamic Time Warping
The connections within many real-world networks change over time. Thus, there has been a recent boom in studying temporal graphs. Recognizing patterns in temporal graphs requires a similarity measure to compare different temporal graphs. To this end, we initiate the study of dynamic time warping (an established concept for mining time series data) on temporal graphs. We propose the dynamic temporal graph warping distance (dtgw) to determine the (dis-)similarity of two temporal graphs. Our novel measure is flexible and can be applied in various application domains. We show that computing the dtgw-distance is a challenging (NP-hard) optimization problem and identify some polynomial-time solvable special cases. Moreover, we develop a quadratic programming formulation and an efficient heuristic. Preliminary experiments indicate that the heuristic performs very well and that our concept yields meaningful results on real-world instances.
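For readers unfamiliar with the time-series building block, here is the classic dynamic time warping recurrence in numpy; dtgw generalizes the pointwise distance `dist` to a vertex-matching cost between graph snapshots, which this sketch does not implement.

```python
import numpy as np

def dtw(s, t, dist=lambda a, b: abs(a - b)):
    """Classic dynamic time warping: D[i, j] is the cheapest alignment cost
    of s[:i] and t[:j], where each step matches the current pair and then
    advances in s, in t, or in both."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = dist(s[i-1], t[j-1]) + min(D[i-1, j], D[i, j-1], D[i-1, j-1])
    return D[n, m]

print(dtw([1, 2, 2, 3], [1, 2, 3]))  # 0.0: the warp absorbs the repeated 2
```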
CURIOUS: Intrinsically Motivated Multi-Task, Multi-Goal Reinforcement Learning
In open-ended and changing environments, agents face a wide range of potential tasks that may or may not come with associated reward functions. Such autonomous learning agents must be able to generate their own tasks through a process of intrinsically motivated exploration, some of which might prove easy, others impossible. For this reason, they should be able to actively select which task to practice at any given moment, to maximize their overall mastery on the set of learnable tasks. This paper proposes CURIOUS, an extension of Universal Value Function Approximators that enables intrinsically motivated agents to learn to achieve both multiple tasks and multiple goals within a unique policy, leveraging hindsight learning. Agents focus on achievable tasks first, using an automated curriculum learning mechanism that biases their attention towards tasks maximizing the absolute learning progress. This mechanism provides robustness to catastrophic forgetting (by refocusing on tasks where performance decreases) and distracting tasks (by avoiding tasks with no absolute learning progress). Furthermore, we show that having two levels of parameterization (tasks and goals within tasks) enables more efficient learning of skills in an environment with a modular physical structure (e.g. multiple objects) as compared to flat, goal-parameterized RL with hindsight experience replay.
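The curriculum mechanism can be sketched in a few lines; the proportional-to-|LP| sampling rule with epsilon-uniform exploration below is an assumed simplification of CURIOUS's task selection, not the paper's exact procedure.

```python
import numpy as np

def sample_task(lp, eps=0.2, rng=None):
    """Curriculum in the spirit of CURIOUS: sample tasks with probability
    proportional to absolute learning progress |LP|, with an epsilon of
    uniform exploration so stale LP estimates get refreshed. A negative LP
    (forgetting) attracts attention just as strongly as a positive one."""
    rng = rng or np.random.default_rng()
    lp = np.abs(np.asarray(lp, dtype=float))
    if rng.random() < eps or lp.sum() == 0:
        return int(rng.integers(len(lp)))
    return int(rng.choice(len(lp), p=lp / lp.sum()))

lp = [0.05, -0.30, 0.0, 0.10]   # task 1 is being forgotten -> large |LP|
print(sample_task(lp, rng=np.random.default_rng(0)))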
Dimensionality Reduction and (Bucket) Ranking: a Mass Transportation Approach
Improving Topic Models with Latent Feature Word Representations
Probabilistic topic models are widely used to discover latent topics in document collections, while latent feature vector representations of words have been used to obtain high performance in many NLP tasks. In this paper, we extend two different Dirichlet multinomial topic models by incorporating latent feature vector representations of words trained on very large corpora to improve the word-topic mapping learnt on a smaller corpus. Experimental results show that by using information from the external corpora, our new models produce significant improvements on topic coherence, document clustering and document classification tasks, especially on datasets with few or short documents.
Deep Reinforcement Learning
We discuss deep reinforcement learning in an overview style, drawing a big picture filled with details. We discuss six core elements, six important mechanisms, and twelve applications, focusing on contemporary work and historical context. We start with the background of artificial intelligence, machine learning, deep learning, and reinforcement learning (RL), with resources. Next we discuss RL core elements, including value function, policy, reward, model, exploration vs. exploitation, and representation. Then we discuss important mechanisms for RL, including attention and memory, unsupervised learning, hierarchical RL, multi-agent RL, relational RL, and learning to learn. After that, we discuss RL applications, including games, robotics, natural language processing (NLP), computer vision, finance, business management, healthcare, education, energy, transportation, computer systems, and science, engineering, and art. Finally we summarize briefly, discuss challenges and opportunities, and close with an epilogue.
Unsupervised Ensemble Learning via Ising Model Approximation with Application to Phenotyping Prediction
I can see clearly now: reinterpreting statistical significance
Null hypothesis significance testing remains popular despite decades of concern about misuse and misinterpretation. We believe that much of the problem is due to language: significance testing has little to do with other meanings of the word ‘significance’. Despite the limitations of null-hypothesis tests, we argue here that they remain useful in many contexts as a guide to whether a certain effect can be seen clearly in that context (e.g. whether we can clearly see that a correlation or between-group difference is positive or negative). We therefore suggest that researchers describe the conclusions of null-hypothesis tests in terms of statistical ‘clarity’ rather than statistical ‘significance’. This simple semantic change could substantially enhance clarity in statistical communication.
Measuring religious morality using very limited poll responses: Implementing ‘big-data analytics’ to small data
Opinion polls remain among the most efficient and widespread methods for capturing psycho-social data at large scales. However, limitations on the logistics and structure of opinion polls restrict the amount and type of information that can be collected. As a consequence, data from opinion polls are often reported as simple percentages and analyzed non-parametrically. In this paper, response data on just four questions from a national opinion poll are used to demonstrate that a parametric scale can be constructed using item response modeling approaches. Developing a parametric scale yields interval-level measures, which are more useful than the strictly ordinal-level measures obtained from the Likert-type scales common in opinion polls. The metric developed in this paper, a measure of religious morality, can be processed and used in a wider range of statistical analyses than the conventional approach of simply reporting item-level percentages. Finally, this paper reports the item parameters so that researchers can adapt these items for future instruments and place their own results on the same scale, thereby allowing responses from future samples to be compared to the results from the representative data in this paper.
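As a minimal illustration of item response modeling (assuming the simplest dichotomous Rasch model; the poll's Likert-type items would in practice call for a polytomous variant), the latent trait theta below is the interval-level measure the paper advocates over raw percentages.

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch (one-parameter IRT) model: probability that a respondent with
    latent trait theta endorses an item of difficulty b. Fitting theta and
    b by maximum likelihood places respondents on an interval-level scale."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Four items of increasing difficulty, mirroring a four-question poll
b = np.array([-1.0, -0.2, 0.5, 1.3])
for theta in (-1.0, 0.0, 1.0):
    print(theta, np.round(rasch_prob(theta, b), 2))
```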
Predictor-Corrector Policy Optimization
We present a predictor-corrector framework, called PicCoLO, that can transform a first-order model-free reinforcement or imitation learning algorithm into a new hybrid method that leverages predictive models to accelerate policy learning. The new ‘PicCoLOed’ algorithm optimizes a policy by recursively repeating two steps: In the Prediction Step, the learner uses a model to predict the unseen future gradient and then applies the predicted estimate to update the policy; in the Correction Step, the learner runs the updated policy in the environment, receives the true gradient, and then corrects the policy using the gradient error. Unlike previous algorithms, PicCoLO corrects for the mistakes of using imperfect predicted gradients and hence does not suffer from model bias. The development of PicCoLO is made possible by a novel reduction from predictable online learning to adversarial online learning, which provides a systematic way to modify existing first-order algorithms to achieve the optimal regret with respect to predictable information. We show, in both theory and simulation, that the convergence rate of several first-order model-free algorithms can be improved by PicCoLO.
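The two-step structure can be sketched abstractly; the loop below is an optimistic-update analogy under assumed toy gradients, not the paper's reduction to adversarial online learning.

```python
import numpy as np

def predictor_corrector_update(theta, predict_grad, true_grad, eta=0.1):
    """Two-step update in the spirit of the framework: step on a
    model-predicted gradient (Prediction Step), then correct with the
    error between the true gradient and the prediction (Correction Step),
    so a biased model cannot permanently mislead the learner."""
    g_hat = predict_grad(theta)
    theta = theta - eta * g_hat              # Prediction Step
    g = true_grad(theta)
    theta = theta - eta * (g - g_hat)        # Correction Step
    return theta

true_grad = lambda th: 2 * (th - 1.0)        # toy objective (th - 1)^2
predict_grad = lambda th: 1.8 * (th - 1.0)   # imperfect gradient model
theta = np.array([5.0])
for _ in range(50):
    theta = predictor_corrector_update(theta, predict_grad, true_grad)
print(theta)  # converges to the optimum at 1.0 despite the model bias
```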
Successor Uncertainties: exploration and uncertainty in temporal difference learning
We consider the problem of balancing exploration and exploitation in sequential decision making problems. To explore efficiently, it is vital to consider the uncertainty over all consequences of a decision, and not just those that follow immediately; the uncertainties involved need to be propagated according to the dynamics of the problem. To this end, we develop Successor Uncertainties, a probabilistic model for the state-action value function of a Markov Decision Process that propagates uncertainties in a coherent and scalable way. We relate our approach to other classical and contemporary methods for exploration and present an empirical analysis.
Deep Imitative Models for Flexible Inference, Planning, and Control
Imitation learning provides an appealing framework for autonomous control: in many tasks, demonstrations of preferred behavior can be readily obtained from human experts, removing the need for costly and potentially dangerous online data collection in the real world. However, policies learned with imitation learning have limited flexibility to accommodate varied goals at test time. Model-based reinforcement learning (MBRL) offers considerably more flexibility, since a predictive model learned from data can be used to achieve various goals at test time. However, MBRL suffers from two shortcomings. First, the predictive model does not help to choose desired or safe outcomes; it reasons only about what is possible, not what is preferred. Second, MBRL typically requires additional online data collection to ensure that the model is accurate in those situations that are actually encountered when attempting to achieve test time goals. Collecting this data with a partially trained model can be dangerous and time-consuming. In this paper, we aim to combine the benefits of imitation learning and MBRL, and propose imitative models: probabilistic predictive models able to plan expert-like trajectories to achieve arbitrary goals. We find this method substantially outperforms both direct imitation and MBRL in a simulated autonomous driving task, and can be learned efficiently from a fixed set of expert demonstrations without additional online data collection. We also show that our model can flexibly incorporate user-supplied costs at test time, can plan to sequences of goals, and can even perform well with imprecise goals, including goals on the wrong side of the road.
• Existence of differentiable curves in convex sets and the concept of direction of the flow in mass transportation
• Minimal time problem for crowd models with a localized vector field
• The variance of the number of sums of two squares in $\mathbb{F}_q[T]$ in short intervals
• Linear Coded Caching Scheme for Centralized Networks
• AAAI FSS-18: Artificial Intelligence in Government and Public Sector Proceedings
• Rethinking the Reynolds Transport Theorem, Liouville Equation, and Perron-Frobenius and Koopman Operators: Spatial and Generalized Parametric Forms
• Hierarchical Attention Networks for Knowledge Base Completion via Joint Adversarial Training
• The Contact Process on Random Graphs and Galton-Watson Trees
• Dexterous Manipulation with Deep Reinforcement Learning: Efficient, General, and Low-Cost
• Morphologies, metastability and coarsening of quantum nanoislands on the surfaces of the annealed Ag(110) and Pb(111) thin films
• $q$-Stirling numbers arising from vincular patterns
• Yet another note on the arithmetic-geometric mean inequality
• A Simple Change Comparison Method for Image Sequences Based on Uncertainty Coefficient
• Fine-Grained Classification of Cervical Cells Using Morphological and Appearance Based Convolutional Neural Networks
• Schrödinger Approach to Optimal Control of Large-Size Populations
• Robust Neural Abstractive Summarization Systems and Evaluation against Adversarial Information
• A Novel Extension to Fuzzy Connectivity for Body Composition Analysis: Applications in Thigh, Brain, and Whole Body Tissue Segmentation
• Comparison of control strategies for the temperature control of a refrigeration system based on vapor compression
• Dimension Reduction for Origin-Destination Flow Estimation: Blind Estimation Made Possible
• Assessing the Potential of Classical Q-learning in General Game Playing
• Stability of Dynamic Feedback Optimization with Applications to Power Systems
• Super Strong ETH is False for Random $k$-SAT
• Mixing Times and Hitting Times for General Markov Processes
• A New Theory for Sketching in Linear Regression
• A Feynman-Kac approach to a paper of Chung and Feller on fluctuations in the coin-tossing game
• The stochastic geometry of unconstrained one-bit data compression
• A sharp estimate for the first Robin-Laplacian eigenvalue with negative boundary parameter
• Novel deep learning methods for track reconstruction
• Symbolic Dynamics of Music from Europe and Japan
• Every Pixel Counts ++: Joint Learning of Geometry and Motion with 3D Holistic Understanding
• Distribution of a tagged particle position in the one-dimensional symmetric simple exclusion process with two-sided Bernoulli initial condition
• SpotNet – Learned iterations for cell detection in image-based immunoassays
• A unified approach to calculation of information operators in semiparametric models
• The largest spectral radius of uniform hypertrees with a given size of matching
• Bandit Inspired Beam Searching Scheme for mmWave High-Speed Train Communications
• Finite-Time Distributed State Estimation over Time-Varying Graphs: Exploiting the Age-of-Information
• Distribution of S-matrix poles for one-dimensional disordered wires
• Traffic Signs in the Wild: Highlights from the IEEE Video and Image Processing Cup 2017 Student Competition [SP Competitions]
• Higher Dimensional Lattice Walks: Connecting Combinatorial and Analytic Behavior
• 3D Feature Pyramid Attention Module for Robust Visual Speech Recognition
• Squared distance matrix of a weighted tree
• Mapping Web Pages by Internet Protocol (IP) addresses: Analyzing Spatial and Temporal Characteristics of Web Search Engine Results
• Coloring graphs with no induced five-vertex path or gem
• Crystal constructions in Number Theory
• Inverse Problems and Data Assimilation
• Learning to Jointly Translate and Predict Dropped Pronouns with a Shared Reconstruction Mechanism
• SPECMAR: Fast Heart Rate Estimation from PPG Signal using a Modified Spectral Subtraction Scheme with Composite Motion Artifacts Reference Generation
• Sticky Brownian motions and a probabilistic solution to a two-point boundary value problem
• Solution for Large-Scale Hierarchical Object Detection Datasets with Incomplete Annotation and Data Imbalance
• Dynamic Connected Cooperative Coverage Problem
• Robust Transmission Constrained Unit Commitment: A Column Merging Method
• Energy Efficiency Fairness for Multi-Pair Wireless-Powered Relaying Systems
• The Focus-Aspect-Polarity Model for Predicting Subjective Noun Attributes in Images
• Dodgson polynomial identities
• Supervised COSMOS Autoencoder: Learning Beyond the Euclidean Loss!
• Characterizations of continuous univariate probability distributions with applications to goodness-of-fit testing
• A simple proof of Pitman-Yor’s Chinese restaurant process from its stick-breaking representation
• Sparse-View CT Reconstruction via Convolutional Sparse Coding
• Playing on a Level Field: Sincere and Sophisticated Players in the Boston Mechanism with a Coarse Priority Structure
• Bounding Entities within Dense Subtensors
• A Context-aware Capsule Network for Multi-label Classification
• UMONS Submission for WMT18 Multimodal Translation Task
• About kernel-based estimation of conditional Kendall’s tau: finite-distance bounds and asymptotic behavior
• Self-Organized Scheduling Request for Uplink 5G Networks: A D2D Clustering Approach
• Bringing back simplicity and lightliness into neural image captioning
• An Output-Feedback Control Approach to the $H_{\infty}$ Consensus Integrated with Transient Performance Improvement Problem
• A Polynomial-Time Method for Testing Admissibility of Uncertain Power Injections in Microgrids
• Small Space Stream Summary for Matroid Center
• Playing for Depth
• Functional limit theorems for random walks
• Eigenvalue Analysis via Kernel Density Estimation
• Feature Representation Analysis of Deep Convolutional Neural Network using Two-stage Feature Transfer -An Application for Diffuse Lung Disease Classification-
• Hyperparameter Learning via Distributional Transfer
• Regret vs. Bandwidth Trade-off for Recommendation Systems
• Unified Statistical Channel Model for Turbulence-Induced Fading in Underwater Wireless Optical Communication Systems
• Joint Optimization of Opportunistic Predictive Maintenance and Multi-location Spare Part Inventories for a Deteriorating System Considering Imperfect Actions
• Compressively Sensed Image Recognition
• Polyphonic Sound Event Detection by using Capsule Neural Network
• Deep Photovoltaic Nowcasting
• Diffusion in small time in incomplete sub-Riemannian manifolds
• Fault Adaptive Routing in Metasurface Controller Networks
• Randomly switched vector fields sharing a zero on a common invariant face
• Exploratory Mediation Analysis with Many Potential Mediators
• No-Key Semi-Quantum Direct Communication Protocol with Low Quantum Resource Requirements
• Towards Providing Explanations for AI Planner Decisions
• Scaling limits in divisible sandpiles: a Fourier multiplier approach
• Inference When There is a Nuisance Parameter under the Alternative and Some Parameters are Possibly Weakly Identified
• (Self-Attentive) Autoencoder-based Universal Language Representation for Machine Translation
• Independence number of the double vertex graph and the pair graph of some graphs
• CADbots: Algorithmic Aspects of Manipulating Programmable Matter with Finite Automata
• An Efficient Fault Tolerant Workflow Scheduling Approach using Replication Heuristics and Checkpointing in the Cloud
• Replica Analysis for Maximization of Net Present Value
• Neural Adaptation Layers for Cross-domain Named Entity Recognition
• Indistinguishability of collections of trees in the uniform spanning forest
• A Priori Estimates of the Generalization Error for Two-layer Neural Networks
• Real-Time Visual Tracking and Identification for a Team of Homogeneous Humanoid Robots
• Virtualization of tissue staining in digital pathology using an unsupervised deep learning approach
• Quantitative homogenization of the disordered $\nabla φ$ model
• Calibration procedures for approximate Bayesian credible sets
• Hedging Algorithms and Repeated Matrix Games
• Orders of convergence in the averaging principle for SPDEs: the case of a stochastically forced slow component
• Channel Splitting Network for Single MR Image Super-Resolution
• Refacing: reconstructing anonymized facial features using GANs
• Small One-Dimensional Euclidean Preference Profiles
• CSV: Image Quality Assessment Based on Color, Structure, and Visual System
• Power computation for the triboelectric nanogenerator
• Image edges resolved well when using an overcomplete piecewise-polynomial model
• Unsupervised Deep Features for Remote Sensing Image Matching via Discriminator Network
• Population Symbolic Covariance Matrices for Interval Data
• Caching in Heterogeneous Networks with Per-File Rate Constraints
• Asymptotic adaptive threshold for connectivity in a random geometric social network
• Generalised hypergeometric ensembles of random graphs: the configuration model as an urn problem
• Elementary Polynomial Identities Involving $q$-Trinomial Coefficients
• SynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth
• Simulation Framework for Cooperative Adaptive Cruise Control with Empirical DSRC Module
• Diagonal Orbits in a Double Flag Variety of Complexity One, Type A
• Deep Surface Light Fields
• Machine Self-Confidence in Autonomous Systems via Meta-Analysis of Decision Processes
• Structured Content Preservation for Unsupervised Text Style Transfer
• On omega-categorical structures with few finite substructures
• Visual Semantic Navigation using Scene Priors
• Poincaré GloVe: Hyperbolic Word Embeddings
• Seemingly stable chemical kinetics can be stable, marginally stable, or unstable