What's new on arXiv

HCqa: Hybrid and Complex Question Answering on Textual Corpus and Knowledge Graph

Question Answering (QA) systems provide easy access to vast amounts of knowledge without requiring users to know the underlying complex structure of that knowledge. The research community has provided ad hoc solutions to the key QA tasks, including named entity recognition and disambiguation, relation extraction, and query building. Furthermore, some work has integrated and composed these components to carry out many of these tasks automatically and efficiently. In general, however, existing solutions are limited to simple and short questions and do not yet address complex questions composed of several sub-questions. Answering complex questions is further challenging when it requires integrating knowledge from unstructured data sources, i.e., textual corpora, as well as structured data sources, i.e., knowledge graphs. In this paper, an approach (HCqa) is introduced for dealing with complex questions that require federating knowledge from a hybrid of heterogeneous data sources (structured and unstructured). We contribute (i) a decomposition mechanism that extracts sub-questions from potentially long and complex input questions, (ii) a novel comprehensive schema, the first of its kind, for extracting and annotating relations, and (iii) an approach for answering the sub-questions and aggregating their answers. The evaluation of HCqa shows superior accuracy on the fundamental tasks, such as relation extraction, as well as on the federation task.
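
As a rough illustration of what decomposing a complex question into sub-questions can look like, the toy sketch below splits on a few conjunction and relative-clause markers. The markers and the example question are hypothetical; this is not the decomposition mechanism developed for HCqa.

```python
import re

# Toy decomposition: split a complex question on a few coordinating
# conjunctions and relative markers. Purely illustrative; HCqa's actual
# decomposition mechanism is more sophisticated.
SPLIT_MARKERS = r"\band\b|\bwho\b|\bwhich\b|\bwhose\b"

def decompose(question: str) -> list:
    parts = [p.strip(" ,?") for p in re.split(SPLIT_MARKERS, question)]
    return [p + "?" for p in parts if p]

print(decompose("Which actor starred in Titanic and was born in Los Angeles?"))
# -> ['Which actor starred in Titanic?', 'was born in Los Angeles?']
```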

Arena Model: Inference About Competitions

The authors propose a parametric model, called the arena model, for prediction in paired competitions, i.e., paired comparisons with eliminations and bifurcations. The arena model has a number of appealing advantages. First, it predicts the results of competitions without rating many individuals. Second, it takes full advantage of the structure of competitions. Third, it provides an easy method for quantifying the uncertainty in competitions. Fourth, some of the methods can be directly generalized to comparisons among three or more individuals. Furthermore, the authors identify a Bayes estimator that is invariant with regard to the prior distribution and prove the consistency of the uncertainty estimates. Currently, the arena model is not effective at tracking changes in the strengths of individuals, but its basic framework provides a solid foundation for future study of such cases.
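
The abstract does not spell out the model's form, so the sketch below uses a generic Bradley-Terry-style win probability (an assumption, not the arena model itself) simply to show how simulating a single-elimination bracket yields uncertainty estimates, here the probability that each competitor wins the whole competition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent strengths for 8 players (Bradley-Terry stand-in).
strength = {"A": 1.8, "B": 1.5, "C": 1.2, "D": 1.0,
            "E": 0.9, "F": 0.7, "G": 0.5, "H": 0.3}

def win_prob(p, q):
    return strength[p] / (strength[p] + strength[q])

def play_bracket(players):
    # Single-elimination bracket: pair adjacent seeds, keep the winners.
    while len(players) > 1:
        players = [p if rng.random() < win_prob(p, q) else q
                   for p, q in zip(players[::2], players[1::2])]
    return players[0]

seeds = list(strength)
wins = {p: 0 for p in seeds}
n = 20000
for _ in range(n):
    wins[play_bracket(seeds)] += 1

for p in seeds:
    print(f"P({p} wins the bracket) ~ {wins[p] / n:.3f}")
```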

Cross-Validated Kernel Ensemble: Robust Hypothesis Test for Nonlinear Effect with Gaussian Process

The R package CVEK introduces a robust hypothesis test for nonlinear effects based on Gaussian processes. CVEK is an ensemble-based estimator that adaptively learns the form of the main-effect kernel from data and constructs a companion variance component test. The package implements the estimator for two testing procedures, namely an asymptotic test and a bootstrap test. Additionally, it implements a variety of tuning-parameter criteria, including the Akaike Information Criterion, Generalized Cross-Validation, Generalized Maximum Profile Marginal Likelihood, and leave-one-out Cross-Validation. Moreover, three ensemble strategies are available for creating the final ensemble kernel: Simple Averaging, Empirical Risk Minimization, and Exponential Weighting. The null distribution of the test statistic can be approximated using a scaled chi-square distribution, so statistical inference based on the results of this package, such as hypothesis testing, can be performed. Extensive simulations demonstrate the robustness and correct implementation of the estimator.
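
A minimal sketch of the ensemble-kernel idea, written in Python rather than R and using simplified formulas (the exponential-weighting rule and leave-one-out shortcut below are generic assumptions, not CVEK's exact estimator): candidate kernels are weighted by their cross-validated kernel ridge error and averaged into one ensemble kernel.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(X, ls):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

X = rng.normal(size=(60, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)
lam = 0.1                                    # ridge penalty (illustrative value)

errors, kernels = [], []
for ls in (0.5, 1.0, 2.0):                   # candidate kernel length-scales
    K = rbf(X, ls)
    H = K @ np.linalg.inv(K + lam * np.eye(len(y)))   # hat matrix of kernel ridge
    loo = (y - H @ y) / (1.0 - np.diag(H))            # leave-one-out residuals
    errors.append(np.mean(loo ** 2))
    kernels.append(K)

w = np.exp(-np.array(errors)); w /= w.sum()           # exponential weighting
K_ensemble = sum(wi * Ki for wi, Ki in zip(w, kernels))  # ensemble kernel
print("kernel weights:", np.round(w, 3))
```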

A combined network and machine learning approaches for product market forecasting

Sustainable financial markets play an important role in the functioning of human society. Still, the detection and prediction of risk in financial markets remain challenging and draw much attention from the scientific community. Here we develop a new approach that combines network theory and machine learning to study the structure and operations of financial product markets. Our network links are based on the similarity of firms’ products and are constructed using the Securities and Exchange Commission (SEC) filings of US listed firms. We find that several features of our network can serve as good precursors of financial market risk. We then combine the network topology with machine learning methods to predict both successful and failed firms. We find that the forecasts made using our method are substantially better than those obtained with other well-known regression techniques. The framework presented here not only facilitates the prediction of financial markets but also provides insight into, and demonstrates the power of, combining network theory and machine learning.
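
As a rough sketch of the general recipe (with random stand-ins for the SEC-derived product similarities and firm outcomes, so the numbers mean nothing), one can link firms whose products are similar, compute network features per firm, and feed them to a classifier:

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

n_firms = 100
sim = rng.random((n_firms, n_firms)); sim = (sim + sim.T) / 2   # placeholder similarities
G = nx.Graph()
G.add_nodes_from(range(n_firms))
for i in range(n_firms):
    for j in range(i + 1, n_firms):
        if sim[i, j] > 0.9:                  # link firms with similar products
            G.add_edge(i, j, weight=sim[i, j])

# Per-firm topological features (an illustrative choice, not the paper's set).
deg = nx.degree_centrality(G)
clu = nx.clustering(G)
pr = nx.pagerank(G)
X = np.array([[deg[i], clu[i], pr[i]] for i in range(n_firms)])
y = rng.integers(0, 2, size=n_firms)         # placeholder failed/successful labels

clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```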

Abduction-Based Explanations for Machine Learning Models

The growing range of applications of Machine Learning (ML) in a multitude of settings motivates the need to compute small explanations for the predictions made, since small explanations are generally accepted as easier for human decision makers to understand. Most earlier work on computing explanations is based on heuristic approaches and provides no guarantees of quality in terms of how close such solutions are to cardinality- or subset-minimal explanations. This paper develops a constraint-agnostic solution for computing explanations for any ML model. The proposed solution exploits abductive reasoning and imposes the requirement that the ML model can be represented as a set of constraints in some target constraint reasoning system for which the decision problem can be answered with an oracle. The experimental results, obtained on well-known datasets, validate the scalability of the proposed approach as well as the quality of the computed solutions.
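
A small sketch of the loop that oracle-based explanation extraction shares: keep only the features whose values are needed to entail the prediction, querying an oracle at each deletion step. The oracle below is a toy monotone threshold model over features in [0, 1], a stand-in for the constraint/SMT encoding the paper relies on.

```python
# Toy entailment oracle: in the paper this would be a constraint (e.g. SMT/SAT)
# query over the encoded ML model; here it is a linear threshold model over
# features assumed to lie in [0, 1], purely for illustration.
def entails(model, instance, fixed, prediction):
    # Worst-case score over all completions of the free (non-fixed) features.
    worst = sum(model["w"][f] * instance[f] if f in fixed else min(model["w"][f], 0.0)
                for f in instance)
    return (worst >= model["b"]) == prediction

def minimal_explanation(model, instance, prediction):
    explanation = set(instance)                 # start with every feature fixed
    for f in list(instance):                    # linear deletion pass
        if entails(model, instance, explanation - {f}, prediction):
            explanation.discard(f)              # prediction holds without f: drop it
    return explanation                          # subset-minimal w.r.t. this oracle

model = {"w": {"age": 1.0, "bmi": 0.5, "smoker": 2.0}, "b": 2.5}
instance = {"age": 1.0, "bmi": 1.0, "smoker": 1.0}
print(minimal_explanation(model, instance, prediction=True))   # {'bmi', 'smoker'}
```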

Embedding Uncertain Knowledge Graphs

Embedding models for deterministic Knowledge Graphs (KG) have been extensively studied, with the purpose of capturing latent semantic relations between entities and incorporating structured knowledge into machine learning. However, many KGs model uncertain knowledge, typically representing the inherent uncertainty of relation facts with a confidence score, and embedding such uncertain knowledge remains an unresolved challenge. Capturing uncertain knowledge will benefit many knowledge-driven applications, such as question answering and semantic search, by providing a more natural characterization of the knowledge. In this paper, we propose a novel uncertain KG embedding model, UKGE, which aims to preserve both structural and uncertainty information of relation facts in the embedding space. Unlike previous models that characterize relation facts with binary classification techniques, UKGE learns embeddings according to the confidence scores of uncertain relation facts. To further enhance the precision of UKGE, we also introduce probabilistic soft logic to infer confidence scores for unseen relation facts during training. We propose and evaluate two variants of UKGE based on different learning objectives. Experiments are conducted on three real-world uncertain KGs via three tasks, i.e., confidence prediction, relation fact ranking, and relation fact classification. UKGE effectively captures uncertain knowledge, achieving promising results and consistently outperforming the baselines on these tasks.
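
A minimal sketch of the core idea of confidence-targeted embedding: a DistMult-style triple score squashed to [0, 1] and regressed onto observed confidences. This approximates but is not identical to UKGE's objective, and it omits the probabilistic-soft-logic component.

```python
import torch

# Sketch: learn entity/relation embeddings whose (squashed) triple score
# matches the observed confidence, instead of a binary true/false label.
n_ent, n_rel, dim = 1000, 20, 32
ent = torch.nn.Embedding(n_ent, dim)
rel = torch.nn.Embedding(n_rel, dim)
opt = torch.optim.Adam(list(ent.parameters()) + list(rel.parameters()), lr=1e-2)

def predicted_confidence(h, r, t):
    return torch.sigmoid((ent(h) * rel(r) * ent(t)).sum(-1))   # DistMult-style score

# Toy batch: (head, relation, tail, confidence in [0, 1]); values are made up.
h = torch.tensor([0, 1, 2]); r = torch.tensor([0, 1, 2]); t = torch.tensor([3, 4, 5])
conf = torch.tensor([0.9, 0.4, 0.7])

for _ in range(200):
    opt.zero_grad()
    loss = ((predicted_confidence(h, r, t) - conf) ** 2).mean()  # confidence regression
    loss.backward()
    opt.step()
print(float(loss))
```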

GANsfer Learning: Combining labelled and unlabelled data for GAN based data augmentation

Medical imaging is a domain which suffers from a paucity of manually annotated data for the training of learning algorithms. Manually delineating pathological regions at a pixel level is a time-consuming process, especially in 3D images, and often requires the time of a trained expert. As a result, supervised machine learning solutions must make do with small amounts of labelled data, despite there often being additional unlabelled data available. Whilst of less value than labelled images, these unlabelled images can contain potentially useful information. In this paper we propose combining both labelled and unlabelled data within a GAN framework, before using the resulting network to produce images for use when training a segmentation network. We explore the task of deep grey matter multi-class segmentation in an AD dataset and show that the proposed method leads to a significant improvement in segmentation results, particularly in cases where the amount of labelled data is restricted. We show that this improvement is largely driven by a greater ability to segment the structures known to be the most affected by AD, thereby demonstrating the benefits of exposing the system to more examples of pathological anatomical variation. We also show how a shift in the domain of the training data, from young and healthy towards older and more pathological examples, leads to better segmentations of the latter cases, and that this in turn significantly improves the ability of the computed segmentations to stratify cases of AD.

Environments for Lifelong Reinforcement Learning

To achieve general artificial intelligence, reinforcement learning (RL) agents should learn not only to optimize returns for one specific task but also to constantly build more complex skills and scaffold their knowledge about the world, without forgetting what has already been learned. In this paper, we discuss the desired characteristics of environments that can support the training and evaluation of lifelong reinforcement learning agents, review existing environments from this perspective, and propose recommendations for devising suitable environments in the future.

Automatic Induction of Neural Network Decision Tree Algorithms

This work presents an approach to the automatic induction of non-greedy decision trees constructed from a neural network architecture. This construction can be used to transfer weights when growing or pruning a decision tree, allowing non-greedy decision tree algorithms to automatically learn and adapt to the ideal architecture. We examine the underpinning ideas from ensemble modelling and Bayesian model averaging that allow our neural network to asymptotically approach the ideal architecture through weight transfer. Experimental results demonstrate that this approach improves over models with a fixed set of hyperparameters, for both decision tree and decision forest models.

DONUT: CTC-based Query-by-Example Keyword Spotting

Keyword spotting, or wakeword detection, is an essential feature for hands-free operation of modern voice-controlled devices. With such devices becoming ubiquitous, users might want to choose a personalized custom wakeword. In this work, we present DONUT, a CTC-based algorithm for online query-by-example keyword spotting that enables custom wakeword detection. The algorithm works by recording a small number of training examples from the user, generating a set of label sequence hypotheses from these training examples, and detecting the wakeword by aggregating the scores of all the hypotheses given a new audio recording. Our method combines the generalization and interpretability of CTC-based keyword spotting with the user adaptation and convenience of a conventional query-by-example system. DONUT has low computational requirements and is well suited for both learning and inference on embedded systems, without requiring private user data to be uploaded to the cloud.
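
A sketch of the query-by-example scoring step under stated assumptions (random per-frame posteriors stand in for the acoustic model, and the hypothesis label sequences are made up): each enrollment hypothesis is scored with the CTC likelihood of the new recording, and the scores are aggregated with a log-sum-exp.

```python
import torch
import torch.nn.functional as F

T, C = 50, 30                                    # frames, output labels (0 = blank)
torch.manual_seed(0)
# Stand-in for the acoustic model's per-frame label posteriors on a new recording.
log_probs = F.log_softmax(torch.randn(T, 1, C), dim=-1)

# Hypotheses decoded from the user's enrollment recordings (illustrative values).
hypotheses = [torch.tensor([5, 12, 7]), torch.tensor([5, 12, 9]), torch.tensor([5, 13, 7])]

scores = []
for hyp in hypotheses:
    # F.ctc_loss returns the negative CTC log-likelihood of the hypothesis.
    nll = F.ctc_loss(log_probs, hyp.unsqueeze(0),
                     input_lengths=torch.tensor([T]),
                     target_lengths=torch.tensor([len(hyp)]),
                     blank=0, reduction="sum")
    scores.append(-nll)                          # log P(hypothesis | audio)

wakeword_score = torch.logsumexp(torch.stack(scores), dim=0)
print(float(wakeword_score))                     # compare against a detection threshold
```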

MATCH-Net: Dynamic Prediction in Survival Analysis using Convolutional Neural Networks

Accurate prediction of disease trajectories is critical for early identification and timely treatment of patients at risk. Conventional methods in survival analysis are often constrained by strong parametric assumptions and limited in their ability to learn from high-dimensional data, while existing neural network models are not readily adapted to the longitudinal setting. This paper develops a novel convolutional approach that addresses these drawbacks. We present MATCH-Net: a Missingness-Aware Temporal Convolutional Hitting-time Network, designed to capture temporal dependencies and heterogeneous interactions in covariate trajectories and patterns of missingness. To the best of our knowledge, this is the first investigation of temporal convolutions in the context of dynamic prediction for personalized risk prognosis. Using real-world data from the Alzheimer’s Disease Neuroimaging Initiative, we demonstrate state-of-the-art performance without making any assumptions regarding the underlying longitudinal or time-to-event processes, attesting to the model’s potential utility in clinical decision support.
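
The sketch below is one plausible reading of a missingness-aware temporal convolution, not the published MATCH-Net architecture: the covariate sequence and its missingness mask enter as separate channels of a 1-D convolutional network that outputs event probabilities over a few prediction horizons.

```python
import torch
import torch.nn as nn

class MaskedTemporalCNN(nn.Module):
    """Illustrative missingness-aware temporal CNN (assumed design, not MATCH-Net)."""
    def __init__(self, n_covariates, hidden=32, horizon_bins=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2 * n_covariates, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(hidden, horizon_bins),     # event probability per horizon
        )

    def forward(self, x, mask):
        # x, mask: (batch, covariates, time); missing entries of x are zero-filled.
        return torch.sigmoid(self.net(torch.cat([x * mask, mask], dim=1)))

x = torch.randn(8, 10, 24)                       # 8 patients, 10 covariates, 24 visits
mask = (torch.rand(8, 10, 24) > 0.3).float()     # 1 = observed, 0 = missing
model = MaskedTemporalCNN(n_covariates=10)
print(model(x, mask).shape)                      # torch.Size([8, 4])
```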

Adaptive Wavelet Clustering for High Noise Data

In this paper we make progress on the unsupervised task of mining arbitrarily shaped clusters in highly noisy datasets, a task that arises in many real-world applications. Building on the foundational work that first applied the wavelet transform to data clustering, we propose an adaptive clustering algorithm, denoted AdaWave, which exhibits favorable characteristics for clustering. Thanks to a self-adaptive thresholding technique, AdaWave is parameter-free and can handle data in various situations. It is deterministic, runs in linear time, is insensitive to input order and cluster shape, is robust to highly noisy data, and requires no prior knowledge of data models. Moreover, AdaWave inherits from the wavelet transform the ability to cluster data at different resolutions. We adopt a ‘grid labeling’ data structure to drastically reduce the memory consumption of the wavelet transform, so that AdaWave can be used for relatively high-dimensional data. Experiments on synthetic as well as natural datasets demonstrate the effectiveness and efficiency of the proposed method.
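
A condensed sketch of the wavelet-grid idea (with a crude fixed threshold standing in for AdaWave's self-adaptive thresholding, and without its 'grid labeling' memory optimization): quantize points onto a grid, keep the low-frequency band of a 2-D Haar transform, threshold it, and label connected dense cells as clusters.

```python
import numpy as np
import pywt
from scipy import ndimage

rng = np.random.default_rng(3)

# Two Gaussian blobs plus uniform background noise (synthetic illustration).
pts = np.vstack([rng.normal([2, 2], 0.3, (300, 2)),
                 rng.normal([6, 6], 0.3, (300, 2)),
                 rng.uniform(0, 8, (200, 2))])

grid, xe, ye = np.histogram2d(pts[:, 0], pts[:, 1], bins=64, range=[[0, 8], [0, 8]])
approx, _ = pywt.dwt2(grid, "haar")              # low-frequency (approximation) band
dense = approx > approx.mean() + approx.std()    # crude threshold (assumption)
labels, n_clusters = ndimage.label(dense)        # connected dense cells = clusters
print("clusters found:", n_clusters)
```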

Unsupervised Image Captioning

Deep neural networks have achieved great success on the image captioning task. However, most of the existing models depend heavily on paired image-sentence datasets, which are very expensive to acquire. In this paper, we make the first attempt to train an image captioning model in an unsupervised manner. Instead of relying on manually labeled image-sentence pairs, our proposed model merely requires an image set, a sentence corpus, and an existing visual concept detector. The sentence corpus is used to teach the captioning model how to generate plausible sentences. Meanwhile, the knowledge in the visual concept detector is distilled into the captioning model to guide it to recognize the visual concepts in an image. To further encourage the generated captions to be semantically consistent with the image, the image and caption are projected into a common latent space so that they can reconstruct each other. Given that existing sentence corpora are mainly designed for linguistic research and thus bear little relation to image contents, we crawl a large-scale image description corpus of 2 million natural sentences to facilitate the unsupervised image captioning scenario. Experimental results show that our proposed model is able to produce quite promising results without using any labeled training pairs.

A Fully Sequential Methodology for Convolutional Neural Networks

Recent work has shown that the performance of convolutional neural networks can be significantly improved by increasing the depth of the representation. We propose a fully sequential methodology to construct and train extremely deep convolutional neural networks. We first introduce a novel sequential convolutional layer to construct the network. The proposed layer is capable of constructing trainable and highly efficient feedforward networks that consist of thousands of vanilla convolutional layers with a rather limited number of parameters. The layer extracts each feature of the produced representation in sequence, allowing feature reuse within the layer. This form of feature reuse introduces an in-layer hierarchy to the extracted features, which greatly increases the depth of the representation and enables richer structures to be explored. Furthermore, we employ the progressive growing training method to optimize each module of the network in sequence. This training scheme progressively increases the network capacity, allowing later modules to be optimized conditioned on prior knowledge from earlier modules. Thus, it encourages long-term dependencies to be established among the modules of the network, which increases the effective depth of networks with skip connections and alleviates multiple optimization difficulties for deep networks.
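
One possible reading of the sequential convolutional layer, sketched below as an assumption rather than the paper's exact construction: each output feature map is produced by its own convolution over the input concatenated with all previously produced feature maps, giving in-layer feature reuse and an in-layer hierarchy.

```python
import torch
import torch.nn as nn

class SequentialConvLayer(nn.Module):
    """Sketch of in-layer sequential feature extraction (assumed interpretation)."""
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        # The i-th feature map sees the input plus the i feature maps before it.
        self.convs = nn.ModuleList(
            nn.Conv2d(in_channels + i, 1, kernel_size, padding=kernel_size // 2)
            for i in range(out_channels)
        )

    def forward(self, x):
        feats = []
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat([x] + feats, dim=1))))
        return torch.cat(feats, dim=1)

layer = SequentialConvLayer(in_channels=3, out_channels=8)
print(layer(torch.randn(2, 3, 16, 16)).shape)    # torch.Size([2, 8, 16, 16])
```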

What is Interpretable? Using Machine Learning to Design Interpretable Decision-Support Systems

Recent efforts in Machine Learning (ML) interpretability have focused on creating methods for explaining black-box ML models. However, these methods rely on the assumption that simple approximations, such as linear models or decision trees, are inherently human-interpretable, which has not been empirically tested. Additionally, past efforts have focused exclusively on comprehension, neglecting to explore the trust component necessary to convince non-technical experts, such as clinicians, to utilize ML models in practice. In this paper, we posit that reinforcement learning (RL) can be used to learn what is interpretable to different users and, consequently, to build their trust in ML models. To validate this idea, we first train a neural network to provide risk assessments for heart failure patients. We then design an RL-based clinical decision-support system (DSS) around the neural network model, which can learn from its interactions with users. We conduct an experiment involving a diverse set of clinicians from multiple institutions in three different countries. Our results demonstrate that ML experts cannot accurately predict which system outputs will maximize clinicians’ confidence in the underlying neural network model, and we report additional findings that have broad implications for the future of research into ML interpretability and the use of ML in medicine.

• Unsupervised Post-processing of Word Vectors via Conceptor Negation
• Correcting the Common Discourse Bias in Linear Representation of Sentences using Conceptors
• Energy Efficiency Maximization in mmWave Wireless Networks with 3D Beamforming
• Dynamic Non-Orthogonal Multiple Access (NOMA) and Orthogonal Multiple Access (OMA) in 5G Wireless Networks
• Hyperspectral Super-Resolution with Coupled Tucker Approximation: Identifiability and SVD-based algorithms
• Facilitating the Manual Annotation of Sounds When Using Large Taxonomies
• Network Abstractions of Prescription Patterns in a Medicaid Population
• Spectrum Sharing Protocols based on Ultra-Narrowband Communications for Unlicensed Massive IoT
• Vacancy-induced Fano resonances in zigzag phosphorene nanoribbons
• Application of Machine Learning in Fiber Nonlinearity Modeling and Monitoring for Elastic Optical Networks
• Application of Clinical Concept Embeddings for Heart Failure Prediction in UK EHR data
• Collective social behavior in a crowd controlled game
• 100G Data Center Interconnections with Silicon Dual-Drive Mach-Zehnder Modulator and Direct Detection
• Caching to the Sky: Performance Analysis of Cache-Assisted CoMP for Cellular-Connected UAVs
• Cooperative Transmission and Probabilistic Caching for Clustered D2D Networks
• Latent Dirichlet Allocation with Residual Convolutional Neural Network Applied in Evaluating Credibility of Chinese Listed Companies
• Polynomial-time algorithms for 2-edge-connected subgraphs on fundamental classes by top-down coloring
• A Central Limit Theorem for First Passage Percolation in the Slab
• Sentiment Analysis of Financial News Articles using Performance Indicators
• What is meant by ‘P(R Yobs)’?
• 100% Reliable Frequency-Resolved Optical Gating Pulse-Retrieval Algorithmic Approach
• On some properties of the new Sine-skewed Cardioid Distribution
• Avalanche Dynamics and Correlations in Neural Systems
• A new variational approach to linearization of traction problems in elasticity
• Accelerating Alternating Least Squares for Tensor Decomposition by Pairwise Perturbation
• Quantum Shockwave Communication
• Quantized frequency-domain polarization of driven phases of matter
• Evolving Space-Time Neural Architectures for Videos
• The defining properties of the Kontsevich unoriented graph complex
• Poset models for Weyl group analogs of symmetric functions and Schur functions
• Understanding Image Quality and Trust in Peer-to-Peer Marketplaces
• Noisy Computations during Inference: Harmful or Helpful?
• Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions
• HELOC Applicant Risk Performance Evaluation by Topological Hierarchical Decomposition
• Stepping Stones to Inductive Synthesis of Low-Level Looping Programs
• Art2Real: Unfolding the Reality of Artworks via Semantically-Aware Image-to-Image Translation
• AI Fairness for People with Disabilities: Point of View
• Adversarial Video Compression Guided by Soft Edge Detection
• Exact Penalization of Generalized Nash Equilibrium Problems
• LM-BIC Model Selection in Semiparametric Models
• Matching Features without Descriptors: Implicitly Matched Interest Points (IMIPs)
• Beyond ‘How may I help you?’: Assisting Customer Service Agents with Proactive Responses
• On Composition Tableaux basis for the Plücker algebra
• Sequence Alignment with Dirichlet Process Mixtures
• Estimation of a Nonseparable Heterogeneous Demand Function with Shape Restrictions and Berkson Errors
• Attentive Relational Networks for Mapping Images to Scene Graphs
• LSTA: Long Short-Term Attention for Egocentric Action Recognition
• Time-Aware and View-Aware Video Rendering for Unsupervised Representation Learning
• On an almost all version of the Balog-Szemeredi-Gowers theorem
• Combining High-Level Features of Raw Audio Waves and Mel-Spectrograms for Audio Tagging
• Learning Robust Representations for Automatic Target Recognition
• Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks
• Learning View Priors for Single-view 3D Reconstruction
• IGNOR: Image-guided Neural Object Rendering
• Solving Directed Laplacian Systems in Nearly-Linear Time through Sparse LU Factorizations
• MIST: Multiple Instance Spatial Transformer Network
• Optimization of Information-Seeking Dialogue Strategy for Argumentation-Based Dialogue System
• Day-to-Day Dynamic Traffic Assignment with Imperfect Information, Bounded Rationality and Information Sharing
• DynamicGEM: A Library for Dynamic Graph Embedding Methods
• Mixture of Regression Experts in fMRI Encoding
• Joint Monocular 3D Vehicle Detection and Tracking
• EnResNet: ResNet Ensemble via the Feynman-Kac Formalism
• Best play in Dots and Boxes endgames
• A Unified Coded Deep Neural Network Training Strategy Based on Generalized PolyDot Codes for Matrix Multiplication
• A composition theorem for randomized query complexity via max conflict complexity
• Towards Long-Term Memory for Social Robots: Proposing a New Challenge for the RoboCup@Home League
• Speaker Diarization With Lexical Information
• A Coarse-to-fine Deep Convolutional Neural Network Framework for Frame Duplication Detection and Localization in Video Forgery
• Quality-Aware Multimodal Saliency Detection via Deep Reinforcement Learning
• On Bollobás-Riordan random pairing model of preferential attachment graph
• Synaptic Plasticity Dynamics for Deep Continuous Local Learning
• The Batched Set Cover Problem
• Generating Attention from Classifier Activations for Fine-grained Recognition
• Event-Based Structured Light for Depth Reconstruction using Frequency Tagged Light Patterns
• Verb Argument Structure Alternations in Word and Sentence Embeddings
• Joint Representation Learning of Cross-lingual Words and Entities via Attentive Distant Supervision
• Tackling Early Sparse Gradients in Softmax Activation Using Leaky Squared Euclidean Distance
• A True Random Number Generator Method Embedded in Wireless Communication Systems
• Reconstruction Loss Minimized FCN for Single Image Dehazing
• Flexible Attributed Network Embedding
• High-dimensional Index Volatility Models via Stein’s Identity
• Accurate, Data-Efficient Learning from Noisy, Choice-Based Labels for Inherent Risk Scoring
• Stochastic Gradient Push for Distributed Deep Learning
• Anatomy of a six-parameter fit to the $b\to s \ell^+\ell^-$ anomalies
• Adaptive-similarity node embedding for scalable learning over graphs
• Probability-based Detection Quality (PDQ): A Probabilistic Approach to Detection Evaluation
• Perceptual Conditional Generative Adversarial Networks for End-to-End Image Colourization
• Successive Convexification for Real-Time 6-DoF Powered Descent Guidance with State-Triggered Constraints
• Movie Recommendation System using Sentiment Analysis from Microblogging Data
• A Scalable Optimization Mechanism for Pairwise based Discrete Hashing
• Uncertainty aware multimodal activity recognition with Bayesian inference
• Large-scale Speaker Retrieval on Random Speaker Variability Subspace
• Noise-tolerant Audio-visual Online Person Verification using an Attention-based Neural Network Fusion
• User Support for the Combinator Logic Synthesizer Framework
• Improving the Visualization of Alloy Instances
• Experience Report on Formally Verifying Parts of OpenJDK’s API with KeY
• Finite-time Heterogeneous Cyclic Pursuit with Application to Target Interception
