What's new on arXiv

Study of Set-Membership Adaptive Kernel Algorithms

In the last decade, a considerable research effort has been devoted to developing adaptive algorithms based on kernel functions. One of the main features of these algorithms is that they form a family of universal approximation techniques, solving problems with nonlinearities elegantly. In this paper, we present data-selective adaptive kernel normalized least-mean square (KNLMS) algorithms that can increase their learning rate and reduce their computational complexity. In fact, these methods deal with kernel expansions, creating a growing structure also known as the dictionary, whose size depends on the number of observations and their innovation. The algorithms described herein use an adaptive step size to accelerate learning and can offer an excellent tradeoff between convergence speed and steady-state performance, which allows them to solve nonlinear filtering and estimation problems with a large number of parameters without requiring a large computational cost. The data-selective update scheme also limits the number of operations performed and the size of the dictionary created by the kernel expansion, saving computational resources and addressing one of the major problems of kernel adaptive algorithms. A statistical analysis is carried out along with a computational complexity analysis of the proposed algorithms. Simulations show that the proposed KNLMS algorithms outperform existing algorithms in examples of nonlinear system identification and prediction of a time series originating from a nonlinear difference equation.
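
To make the data-selective idea concrete, here is a minimal sketch of a set-membership kernel NLMS filter, assuming a Gaussian kernel, a coherence test for dictionary growth, and an error-bound test that skips the update when the current estimate is already within the bound. The parameter names (gamma_bar, coherence_max) and the specific adaptive step size are illustrative, not the paper's exact algorithm.

import numpy as np

class SetMembershipKNLMS:
    def __init__(self, kernel_width=1.0, gamma_bar=0.1,
                 coherence_max=0.9, eps=1e-6):
        self.kw = kernel_width              # Gaussian kernel bandwidth
        self.gamma_bar = gamma_bar          # error bound: update only if |e| exceeds it
        self.coherence_max = coherence_max  # dictionary sparsification threshold
        self.eps = eps                      # regularization for the normalized step
        self.dictionary = []                # stored inputs (kernel expansion)
        self.alpha = np.empty(0)            # expansion coefficients

    def _kappa(self, x):
        """Kernel evaluations between x and every dictionary member."""
        D = np.asarray(self.dictionary)
        return np.exp(-np.sum((D - x) ** 2, axis=1) / (2 * self.kw ** 2))

    def predict(self, x):
        if not self.dictionary:
            return 0.0
        return float(self.alpha @ self._kappa(np.asarray(x, dtype=float)))

    def update(self, x, d):
        """One filtering step; returns the a-priori output estimate."""
        x = np.asarray(x, dtype=float)
        if not self.dictionary:
            self.dictionary.append(x)
            self.alpha = np.zeros(1)
        k = self._kappa(x)
        y = float(self.alpha @ k)
        e = d - y
        # Data-selective rule: skip the update when the error is within the bound.
        if abs(e) <= self.gamma_bar:
            return y
        # Grow the dictionary only if x is sufficiently novel (low coherence).
        if k.max() <= self.coherence_max:
            self.dictionary.append(x)
            self.alpha = np.append(self.alpha, 0.0)
            k = self._kappa(x)
        # Adaptive step size shrinks as |e| approaches the error bound.
        mu = 1.0 - self.gamma_bar / abs(e)
        self.alpha += mu * e * k / (self.eps + k @ k)
        return y

With this kind of rule, samples whose error already satisfies the bound trigger neither a coefficient update nor dictionary growth, which is what keeps both the operation count and the dictionary size down.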

Benchmarking Automatic Machine Learning Frameworks

AutoML serves as the bridge between varying levels of expertise when designing machine learning systems and expedites the data science process. A wide range of techniques has been proposed to address this; however, no objective comparison of these techniques exists. We present a benchmark of current open source AutoML solutions using open source datasets. We test auto-sklearn, TPOT, auto_ml, and H2O's AutoML solution against a compiled set of regression and classification datasets sourced from OpenML, and find that auto-sklearn performs best across the classification datasets and TPOT performs best across the regression datasets.
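
The benchmarking loop itself is simple because these frameworks follow the scikit-learn estimator API, which is the only property the sketch below relies on. The OpenML dataset IDs, search budgets, and single train/test split are placeholders rather than the paper's protocol, and auto_ml and H2O are omitted here for brevity.

from sklearn.datasets import fetch_openml
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tpot import TPOTClassifier
from autosklearn.classification import AutoSklearnClassifier

OPENML_IDS = [37, 1464]          # placeholder OpenML dataset ids
FRAMEWORKS = {
    "TPOT": lambda: TPOTClassifier(generations=5, population_size=20,
                                   random_state=0, verbosity=0),
    "auto-sklearn": lambda: AutoSklearnClassifier(time_left_for_this_task=300),
}

results = {}
for data_id in OPENML_IDS:
    X, y = fetch_openml(data_id=data_id, return_X_y=True, as_frame=False)
    y = LabelEncoder().fit_transform(y)           # numeric class labels
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    for name, make_model in FRAMEWORKS.items():
        model = make_model()
        model.fit(X_tr, y_tr)                     # each framework runs its own search
        acc = accuracy_score(y_te, model.predict(X_te))
        results[(data_id, name)] = acc
        print(f"dataset {data_id:>5}  {name:<12} accuracy={acc:.3f}")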

Optimizing Deep Neural Network Architecture: A Tabu Search Based Approach

The performance of a feedforward neural network (FNN) fully depends upon the selection of architecture and training algorithm. FNN architecture can be tweaked using several parameters, such as the number of hidden layers, the number of hidden neurons at each hidden layer, and the number of connections between layers. There may be exponentially many combinations of these architectural attributes, which makes manual exploration unmanageable, so an algorithm is required that can automatically design an optimal architecture with high generalization ability. Numerous optimization algorithms have been utilized for FNN architecture determination. This paper proposes a new methodology for estimating the hidden layers and their respective neurons for an FNN. This work combines the advantages of Tabu search (TS) and the gradient descent with momentum backpropagation (GDM) training algorithm to demonstrate how Tabu search can automatically select the best architecture from the populated architectures based on a minimum testing error criterion. The proposed approach has been tested on four classification benchmark datasets of different sizes.
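
A tabu search over hidden-layer configurations can be sketched as follows. The paper trains candidates with gradient descent with momentum; here scikit-learn's MLPClassifier (SGD solver with momentum) stands in for that trainer, and the neighbourhood moves, tabu tenure, iteration budget, and the digits dataset are assumptions made for the sketch.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def test_error(arch, X_tr, y_tr, X_te, y_te):
    """Train one candidate architecture and return its test error."""
    net = MLPClassifier(hidden_layer_sizes=arch, solver="sgd", momentum=0.9,
                        learning_rate_init=0.01, max_iter=300, random_state=0)
    net.fit(X_tr, y_tr)
    return 1.0 - net.score(X_te, y_te)

def neighbours(arch):
    """Moves: grow/shrink one hidden layer, or add/drop a layer."""
    moves = []
    for i, n in enumerate(arch):
        for delta in (-8, 8):
            if n + delta >= 4:
                moves.append(arch[:i] + (n + delta,) + arch[i + 1:])
    if len(arch) < 3:
        moves.append(arch + (16,))
    if len(arch) > 1:
        moves.append(arch[:-1])
    return moves

def tabu_search(X_tr, y_tr, X_te, y_te, iters=15, tenure=5):
    current = best = (16,)
    best_err = test_error(best, X_tr, y_tr, X_te, y_te)
    tabu = [current]
    for _ in range(iters):
        candidates = [a for a in neighbours(current) if a not in tabu]
        if not candidates:
            break
        errs = {a: test_error(a, X_tr, y_tr, X_te, y_te) for a in candidates}
        current = min(errs, key=errs.get)       # best admissible neighbour
        tabu = (tabu + [current])[-tenure:]     # fixed-tenure tabu list
        if errs[current] < best_err:
            best, best_err = current, errs[current]
    return best, best_err

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
print(tabu_search(X_tr, y_tr, X_te, y_te))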

Attainment Ratings for Graph-Query Recommendation

The video game industry is larger than the film and music industries combined. Recommender systems for video games have received relatively scant academic attention, despite the uniqueness of the medium and its data. In this paper, we introduce a graph-based recommender system that makes use of interactivity, arguably the most significant feature of video gaming. We show that implicit data that tracks user-game interactions and levels of attainment (e.g. Sony PlayStation Trophies, Microsoft Xbox Achievements) has high predictive value when making recommendations. Furthermore, we argue that the characteristics of the video gaming hobby (low cost, high duration, socially relevant) make clear the necessity of personalized, individual recommendations that can incorporate social networking information. We demonstrate the natural suitability of graph-query-based recommendation for this purpose.
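
As a toy illustration of the idea (not the paper's system), the sketch below stores user-game interactions in a bipartite graph whose edge weights encode attainment, such as the fraction of trophies earned, and answers a recommendation query by walking user-game-coplayer-game paths. All names and weights are invented.

import networkx as nx
from collections import defaultdict

G = nx.Graph()
plays = [  # (user, game, attainment in [0, 1])
    ("alice", "Bloodborne", 0.9), ("alice", "FIFA 18", 0.2),
    ("bob", "Bloodborne", 0.8), ("bob", "Dark Souls III", 0.7),
    ("carol", "FIFA 18", 0.6), ("carol", "Rocket League", 0.5),
]
for user, game, w in plays:
    G.add_edge(user, game, weight=w)

def recommend(user, k=3):
    scores = defaultdict(float)
    for game in G[user]:                          # games the user has played
        for other in G[game]:                     # users who also played them
            if other == user:
                continue
            overlap = G[user][game]["weight"] * G[game][other]["weight"]
            for candidate in G[other]:            # their other games
                if candidate not in G[user]:
                    scores[candidate] += overlap * G[other][candidate]["weight"]
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))   # ['Dark Souls III', 'Rocket League']

Weighting each path by attainment means a game a co-player merely owns contributes far less than one they have played to completion, which is the intuition behind using attainment ratings as implicit feedback.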

Learning-based Automatic Parameter Tuning for Big Data Analytics Frameworks

Big data analytics frameworks (BDAFs) have been widely used for data processing applications. These frameworks provide a large number of configuration parameters to users, which leads to a tuning problem that overwhelms users. To address this issue, many automatic tuning approaches have been proposed. However, it remains a critical challenge to generate enough samples in a high-dimensional parameter space within a time constraint. In this paper, we present AutoTune, an automatic parameter tuning system that aims to optimize application execution time on BDAFs. AutoTune first constructs a smaller-scale testbed from the production system so that it can generate more samples, and thus train a better prediction model, under a given time constraint. Furthermore, the AutoTune algorithm produces a set of samples that provides wide coverage of the high-dimensional parameter space, and searches for more promising configurations using the trained prediction model. AutoTune is implemented and evaluated using the Spark framework and the HiBench benchmark deployed on a public cloud. Extensive experimental results show that AutoTune improves on the default configurations by 63.70% on average, and on five state-of-the-art tuning algorithms by 6%-23%.
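
The overall pipeline (sample configurations, time them, fit a prediction model, then use the model to screen a large candidate pool) can be sketched as below. The Spark-style parameter names, their ranges, and the run_job() stub are illustrative placeholders, not AutoTune's actual search space, sampling scheme, or testbed.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
PARAM_RANGES = {                      # illustrative Spark-style knobs
    "executor_cores": (1, 8),
    "executor_memory_gb": (1, 16),
    "shuffle_partitions": (8, 512),
}

def sample_config():
    return np.array([rng.uniform(lo, hi) for lo, hi in PARAM_RANGES.values()])

def run_job(config):
    """Stand-in for running the workload on the testbed and timing it."""
    cores, mem, parts = config
    return 100.0 / cores + 50.0 / mem + abs(parts - 200) * 0.05 + rng.normal(0, 1)

# 1) Generate training samples within the measurement budget.
X = np.array([sample_config() for _ in range(30)])
y = np.array([run_job(c) for c in X])

# 2) Fit the execution-time prediction model.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# 3) Screen a large candidate pool with the model, then verify the best few.
candidates = np.array([sample_config() for _ in range(5000)])
top = candidates[np.argsort(model.predict(candidates))[:5]]
best = min(top, key=run_job)
print(dict(zip(PARAM_RANGES, np.round(best, 1))))

The point of the surrogate model is that the 5000 candidate configurations are never actually run; only the handful the model ranks highest are verified on the (small-scale) testbed.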

A Stepwise Approach for High-Dimensional Gaussian Graphical Models

We present a stepwise approach to estimating high-dimensional Gaussian graphical models. We exploit the relation between the partial correlation coefficients and the distribution of the prediction errors, and parametrize the model in terms of the Pearson correlation coefficients between the prediction errors of the nodes’ best linear predictors. We propose a novel stepwise algorithm for detecting pairs of conditionally dependent variables. We show that the proposed algorithm outperforms existing methods such as the graphical lasso and CLIME in simulation studies and real-life applications. In our comparison we report several performance measures that capture different desirable features of the recovered graph, and we consider several model settings.
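
A simplified forward-stepwise sketch of the idea: regress each node on its current neighbourhood, correlate the resulting prediction errors, and add the edge with the largest residual correlation. The threshold and stopping rule are placeholders, and the authors' algorithm also includes deletion steps and a principled stopping criterion, so this is only meant to convey the flavour.

import numpy as np

def residual(X, j, neighbours):
    """Prediction error of node j given its current neighbourhood."""
    if not neighbours:
        return X[:, j] - X[:, j].mean()
    Z = np.column_stack([np.ones(len(X)), X[:, sorted(neighbours)]])
    beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
    return X[:, j] - Z @ beta

def stepwise_ggm(X, threshold=0.2, max_steps=50):
    p = X.shape[1]
    nbrs = {j: set() for j in range(p)}
    for _ in range(max_steps):
        res = {j: residual(X, j, nbrs[j]) for j in range(p)}
        best, best_val = None, threshold
        for i in range(p):
            for j in range(i + 1, p):
                if j in nbrs[i]:
                    continue
                r = abs(np.corrcoef(res[i], res[j])[0, 1])
                if r > best_val:
                    best, best_val = (i, j), r
        if best is None:            # no residual correlation above threshold
            break
        i, j = best
        nbrs[i].add(j); nbrs[j].add(i)
    return {(i, j) for i in nbrs for j in nbrs[i] if i < j}

# Toy example: a 3-node chain 0 - 1 - 2.
rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x0 = x1 + rng.normal(scale=0.5, size=500)
x2 = x1 + rng.normal(scale=0.5, size=500)
print(stepwise_ggm(np.column_stack([x0, x1, x2])))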

SeVeN: Augmenting Word Embeddings with Unsupervised Relation Vectors

We present SeVeN (Semantic Vector Networks), a hybrid resource that encodes relationships between words in the form of a graph. Unlike in traditional semantic networks, these relations are represented as vectors in a continuous vector space. We propose a simple pipeline for learning such relation vectors, which is based on word vector averaging in combination with an ad hoc autoencoder. We show that by explicitly encoding relational information in a dedicated vector space we can capture aspects of word meaning that are complementary to what is captured by word embeddings. For example, by examining clusters of relation vectors, we observe that relational similarities can be identified at a more abstract level than with traditional word vector differences. Finally, we test the effectiveness of semantic vector networks in two tasks: measuring word similarity and neural text categorization. SeVeN is available at bitbucket.org/luisespinosa/seven.
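
A toy sketch of the averaging stage of such a pipeline: the relation vector for a word pair is the average embedding of the words occurring between them in corpus sentences. The corpus, vocabulary, and 4-dimensional embeddings are invented, and the autoencoder refinement step described above is omitted.

import numpy as np

emb = {w: np.random.default_rng(hash(w) % 2**32).normal(size=4)
       for w in ["paris", "is", "the", "capital", "of", "france",
                 "berlin", "germany"]}
corpus = [
    "paris is the capital of france".split(),
    "berlin is the capital of germany".split(),
]

def relation_vector(w1, w2):
    """Average embedding of the words appearing between w1 and w2."""
    middles = []
    for sent in corpus:
        if w1 in sent and w2 in sent and sent.index(w1) < sent.index(w2):
            middles.extend(sent[sent.index(w1) + 1:sent.index(w2)])
    if not middles:
        return None
    return np.mean([emb[w] for w in middles], axis=0)

# Pairs standing in the same relation get similar relation vectors here,
# because the words between them ("is the capital of") coincide.
r1 = relation_vector("paris", "france")
r2 = relation_vector("berlin", "germany")
print(np.allclose(r1, r2))   # True for this toy corpus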

Learning to Compose over Tree Structures via POS Tags

The Recursive Neural Network (RecNN), a type of model that composes words or phrases recursively over syntactic tree structures, has proven to have a superior ability to obtain sentence representations for a variety of NLP tasks. However, RecNN suffers from a thorny problem: a single compositional function shared across all tree nodes cannot capture complex semantic compositionality, which limits the expressive power of the model. In this paper, to address this problem, we propose Tag-Guided HyperRecNN/TreeLSTM (TG-HRecNN/TreeLSTM), which introduces a hypernetwork into RecNNs to take Part-of-Speech (POS) tags of words/phrases as input and generate the semantic composition parameters dynamically. Experimental results on five datasets for two typical NLP tasks show that the proposed models consistently obtain significant improvements over RecNN and TreeLSTM. Our TG-HTreeLSTM outperforms all existing RecNN-based models and achieves or is competitive with the state of the art on four sentence classification benchmarks. The effectiveness of our models is also demonstrated by qualitative analysis.
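
The hypernetwork idea can be sketched as follows: a small network maps a POS-tag embedding to the parameters of the composition function used at that tree node, so different syntactic categories compose their children with different weights. The dimensions, tag ids, and the plain tanh cell below are simplifications of the paper's TreeLSTM-based model.

import torch
import torch.nn as nn

class TagGuidedComposer(nn.Module):
    def __init__(self, hidden=16, tag_vocab=4, tag_dim=8):
        super().__init__()
        self.hidden = hidden
        self.tag_emb = nn.Embedding(tag_vocab, tag_dim)
        # Hypernetwork: tag embedding -> flattened composition matrix and bias.
        self.hyper = nn.Linear(tag_dim, hidden * (2 * hidden) + hidden)

    def forward(self, h_left, h_right, tag_id):
        params = self.hyper(self.tag_emb(tag_id))
        W = params[: self.hidden * 2 * self.hidden].view(self.hidden,
                                                         2 * self.hidden)
        b = params[self.hidden * 2 * self.hidden:]
        children = torch.cat([h_left, h_right], dim=-1)
        return torch.tanh(children @ W.T + b)   # parent representation

composer = TagGuidedComposer()
h_parent = composer(torch.randn(16), torch.randn(16),
                    torch.tensor(2))            # e.g. tag id standing for "NP"
print(h_parent.shape)                           # torch.Size([16])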

Tangent-Normal Adversarial Regularization for Semi-supervised Learning

The ever-increasing size of modern datasets, combined with the difficulty of obtaining label information, has made semi-supervised learning of significant practical importance in modern machine learning applications. Compared with supervised learning, the key difficulty in semi-supervised learning is how to make full use of the unlabeled data. In order to utilize the manifold information provided by unlabeled data, we propose a novel regularization called tangent-normal adversarial regularization, which is composed of two parts. One part is applied along the tangent space of the data manifold, aiming to enforce local invariance of the classifier on the manifold, while the other is applied on the normal space orthogonal to the tangent space, intending to make the classifier robust against the noise that causes the observed data to deviate from the underlying data manifold. The two terms complement each other and jointly enforce smoothness along two different directions that are crucial for semi-supervised learning. Both regularizers are realized through the strategy of virtual adversarial training. Our method achieves state-of-the-art performance on semi-supervised learning tasks on both an artificial dataset and the FashionMNIST dataset.
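
Both regularizers are built on virtual adversarial training, whose core step (finding, via one power-iteration-style gradient step, the perturbation that most changes the classifier's predictive distribution) is sketched below. Projecting that perturbation onto the tangent and normal spaces of the data manifold, as the paper does, additionally requires a learned manifold model and is omitted here; the classifier, xi, and eps are placeholders.

import torch
import torch.nn.functional as F

def virtual_adversarial_direction(model, x, xi=1e-6, eps=1.0):
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)                   # current predictions
    d = torch.randn_like(x)
    d = xi * F.normalize(d.flatten(1), dim=1).view_as(x) # tiny random probe
    d.requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model(x + d), dim=1), p,
                  reduction="batchmean")
    grad, = torch.autograd.grad(kl, d)                   # most sensitive direction
    return eps * F.normalize(grad.flatten(1), dim=1).view_as(x)

# Usage on unlabeled data: penalize the prediction change under r_adv.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x_unlabeled = torch.randn(32, 1, 28, 28)
r_adv = virtual_adversarial_direction(model, x_unlabeled)
p = F.softmax(model(x_unlabeled), dim=1).detach()
vat_loss = F.kl_div(F.log_softmax(model(x_unlabeled + r_adv), dim=1), p,
                    reduction="batchmean")
print(float(vat_loss))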

Pangea: Monolithic Distributed Storage for Data Analytics

Storage and memory systems for modern data analytics are heavily layered, managing shared persistent data, cached data, and non-shared execution data in separate systems such as distributed file systems like HDFS, in-memory file systems like Alluxio, and computation frameworks like Spark. Such layering introduces significant performance and management costs: data is redundantly copied across layers, and proper resource allocation must be decided for every layer. In this paper we propose a single system called Pangea that manages all data (both intermediate and long-lived), together with its buffering/caching, data placement optimization, and failure recovery, in one monolithic storage system, without any layering. We present a detailed performance evaluation of Pangea and show that its performance compares favorably with several widely used layered systems such as Spark.

Multi-dimensional Graph Convolutional Networks

Convolutional neural networks (CNNs) have shown great power in representation learning on regular grid data such as images and videos. Recently, increasing attention has been paid to generalizing CNNs to graph or network data, which is highly irregular. Some works focus on graph-level representation learning while others aim to learn node-level representations. These methods have been shown to boost the performance of many graph-level tasks such as graph classification and node-level tasks such as node classification. Most of these methods have been designed for single-dimensional graphs, where a pair of nodes can only be connected by one type of relation. However, many real-world graphs have multiple types of relations and can be naturally modeled as multi-dimensional graphs, with each type of relation as a dimension. Multi-dimensional graphs bring about richer interactions between dimensions, which poses tremendous challenges to graph convolutional neural networks designed for single-dimensional graphs. In this paper, we study the problem of graph convolutional networks for multi-dimensional graphs and propose a multi-dimensional graph convolutional network model, mGCN, which aims to capture rich information when learning node-level representations for multi-dimensional graphs. Comprehensive experiments on real-world multi-dimensional graphs demonstrate the effectiveness of the proposed framework.
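
As a rough illustration of the setting (not mGCN itself), the sketch below runs an ordinary GCN propagation step separately in each relation dimension with its own weights and then combines the per-dimension representations; mGCN replaces the naive averaging used here with learned interactions between dimensions.

import numpy as np

def normalize_adj(A):
    """Symmetric normalization of A + I, as in standard GCNs."""
    A_hat = A + np.eye(A.shape[0])
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def multi_dim_gcn_layer(adjs, X, weights):
    """adjs: list of (n, n) adjacency matrices, one per relation dimension."""
    per_dim = [np.maximum(normalize_adj(A) @ X @ W, 0.0)   # ReLU(A_hat X W_d)
               for A, W in zip(adjs, weights)]
    return np.mean(per_dim, axis=0)      # combine dimensions (simplified)

rng = np.random.default_rng(0)
n, f, h, n_dims = 5, 8, 4, 3
adjs = [(rng.random((n, n)) < 0.4).astype(float) for _ in range(n_dims)]
adjs = [np.triu(A, 1) + np.triu(A, 1).T for A in adjs]      # undirected
X = rng.normal(size=(n, f))
weights = [rng.normal(size=(f, h)) for _ in range(n_dims)]
print(multi_dim_gcn_layer(adjs, X, weights).shape)          # (5, 4)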

Linked Recurrent Neural Networks

Recurrent Neural Networks (RNNs) have proven effective in modeling sequential data and have been applied to boost a variety of tasks such as document classification, speech recognition, and machine translation. Most existing RNN models have been designed for sequences assumed to be identically and independently distributed (i.i.d.). However, in many real-world applications, sequences are naturally linked. For example, web documents are connected by hyperlinks, and genes interact with each other. On the one hand, linked sequences are inherently not i.i.d., which poses tremendous challenges to existing RNN models. On the other hand, linked sequences offer link information in addition to the sequential information, which enables unprecedented opportunities to build advanced RNN models. In this paper, we study the problem of RNNs for linked sequences. In particular, we introduce a principled approach to capture link information and propose the Linked Recurrent Neural Network (LinkedRNN), which models sequential and link information coherently. We conduct experiments on real-world datasets from multiple domains, and the experimental results validate the effectiveness of the proposed framework.
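
The general recipe, encoding each node's sequence with an RNN and then mixing in the representations of linked neighbours, can be sketched as follows. The GRU encoder, mean-neighbour aggregation, dimensions, and toy graph are assumptions made for illustration, not the LinkedRNN architecture itself.

import torch
import torch.nn as nn

class LinkedSequenceEncoder(nn.Module):
    def __init__(self, in_dim=8, hidden=16):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.combine = nn.Linear(2 * hidden, hidden)

    def forward(self, sequences, adj):
        # sequences: (num_nodes, seq_len, in_dim); adj: (num_nodes, num_nodes)
        _, h = self.rnn(sequences)                    # sequential information
        h = h.squeeze(0)                              # (num_nodes, hidden)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh = (adj @ h) / deg                       # link information (mean)
        return torch.tanh(self.combine(torch.cat([h, neigh], dim=1)))

# Toy example: 4 linked nodes, each with a length-5 feature sequence.
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 0, 1],
                    [1, 0, 0, 0],
                    [0, 1, 0, 0]], dtype=torch.float32)
model = LinkedSequenceEncoder()
out = model(torch.randn(4, 5, 8), adj)
print(out.shape)                                      # torch.Size([4, 16])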

• Inter-IC for Wearables (I2We): Power and Data Transfer over Double-sided Conductive Textile
• Optical wavelength conversion of high bandwidth phase-encoded signals in a high FOM 50cm CMOS compatible waveguide
• Spatial Filtering for Brain Computer Interfaces: A Comparison between the Common Spatial Pattern and Its Variant
• Sliding Z Transform: Applications to convolutive blind source separation
• Target Image Video Search Based on Local Features
• Epidemic spreading on time-varying multiplex networks
• Learning Discriminative Hashing Codes for Cross-Modal Retrieval based on Multiorder Statistical Features
• The perceived quality of process discovery tools
• predictSLUMS: A new model for identifying and predicting informal settlements and slums in cities from street intersections using machine learning
• Buildings Detection in VHR SAR Images Using Fully Convolution Neural Networks
• Boundaries of Operation for Refurbished Parallel AC-DC Reconfigurable Links in Distribution Grids
• Ricean K-factor Estimation based on Channel Quality Indicator in OFDM Systems using Neural Network
• Divergence functions in dually flat spaces and their properties
• Some New Results on l1-Minimizing Nullspace Kalman Filtering for Remote Sensing Applications
• Target And Background Separation in Hyperspectral Imagery for Automatic Target Detection
• Analyzing within Garage Fuel Economy Gaps to Support Vehicle Purchasing Decisions – A Copula-Based Modeling & Forecasting Approach
• Bitstream-Based JPEG Image Encryption with File-Size Preserving
• Disambiguating fine-grained place names from descriptions by clustering
• Spin-mediated particle transport in the disordered Hubbard model
• High-Accuracy and Fault Tolerant Stochastic Inner Product Design
• A study on speech enhancement using exponent-only floating point quantized neural network (EOFP-QNN)
• Collaborative Pressure Ulcer Prevention: An Automated Skin Damage and Pressure Ulcer Assessment Tool for Nursing Professionals, Patients, Family Members and Carers
• The Mittag-Leffler function in the thinning theory for renewal processes
• Weak Measurements Limit Entanglement to Area Law
• Measurement-Induced Phase Transitions in the Dynamics of Entanglement
• Mobility edge and intermediate phase in one-dimensional incommensurate lattice potentials
• On Geometric Analysis of Affine Sparse Subspace Clustering
• Stationary points in coalescing stochastic flows on $\mathbb{R}$
• Node-Level Resilience Loss in Dynamic Complex Networks
• Revisiting the proton-radius problem using constrained Gaussian processes
• Memristor – The fictional circuit element
• Bernoulli actions of amenable groups with weakly mixing Maharam extensions
• Quantifying the Computational Advantage of Forward Orthogonal Deviations
• Heuristics for publishing dynamic content as structured data with schema.org
• Data-driven framework for real-time thermospheric density estimation
• Ultra Reliable, Low Latency Vehicle-to-Infrastructure Wireless Communications with Edge Computing
• Optimized Path Planning for Inspection by Unmanned Aerial Vehicles Swarm with Energy Constraints
• What do the US West Coast Public Libraries Post on Twitter?
• Characterizing Transgender Health Issues in Twitter
• Support Neighbor Loss for Person Re-Identification
• A stronger connection between the Erdös-Burgess and Davenport constants
• Concept Mask: Large-Scale Segmentation from Semantic Concepts
• A general approach to detect gene (G)-environment (E) additive interaction leveraging G-E independence in case-control studies
• Optimal proposals for Approximate Bayesian Computation
• CellLineNet: End-to-End Learning and Transfer Learning For Multiclass Epithelial Breast cell Line Classification via a Convolutional Neural Network
• Cyclic sieving, necklaces, and branching rules related to Thrall’s problem
• Distractor-aware Siamese Networks for Visual Object Tracking
• Well-Posedness, Stability, and Sensitivities for Stochastic Delay Equations: A Generalized Coupling Approach
• The Capacity of Some Pólya String Models
• Observations of Turbulent Magnetic Reconnection Within a Solar Current Sheet
• Odometer of long-range sandpiles in the torus: mean behaviour and scaling limits
• Community detection in networks with unobserved edges
• Generalized Mullineux involution and perverse equivalences
• Clock theorems for triangulated surfaces
• Skew RSK and coincidence of Littlewood-Richardson commutors
• A regularity condition in polynomial optimization
• Spanning tree packing, edge-connectivity and eigenvalues of graphs with given girth
• Exact Passive-Aggressive Algorithms for Learning to Rank Using Interval Labels
• Bayesian Hidden Markov Tree Models for Clustering Genes with Shared Evolutionary History
• Emoji Sentiment Scores of Writers using Odds Ratio and Fisher Exact Test
• Accelerated search and design of stretchable graphene kirigami using machine learning
• Energy Efficiency of Server-Centric PON Data Center Architecture for Fog Computing
• Impact of Link Failures on the Performance of MapReduce in Data Center Networks
• A Recipe for Arabic-English Neural Machine Translation
• Effect of secular trend in drug effectiveness study in real world data
• Polyhedral geometry for lecture hall partitions
• In Defense of Single-column Networks for Crowd Counting
• Quantum Zeno Effect and the Many-body Entanglement Transition
• Generalized Bregman and Jensen divergences which include some f-divergences
• On Design of Problem Token Questions in Quality of Experience Surveys
• On the mixing time of the Diaconis–Gangolli random walk on contingency tables over $\mathbb{Z}/ q \mathbb{Z}$
• The 4-Component Connectivity of Alternating Group Networks
• Hierarchical Neural Networks for Sequential Sentence Classification in Medical Scientific Abstracts
• Domino Tile Placing on Graphs
• Source-Critical Reinforcement Learning for Transferring Spoken Language Understanding to a New Language
• Sharing within limits: Partial resource pooling in loss systems
• Deep Multiple Instance Learning for Airplane Detection in High Resolution Imagery
• Elements of the q-Askey scheme in the algebra of symmetric functions
• Generalizations of the associative operad and convergent rewrite systems
• The number of multiplicative Sidon sets of integers
• Non-Asymptotic and Asymptotic Fundamental Limits of Guessing Subject to Distortion
• Fourier analysis perspective for sufficient dimension reduction problem
• A Fast and Robust Matching Framework for Multimodal Remote Sensing Image Registration
• Möbius orthogonality for $q$-semimultiplicative sequences
• Lower bound for the cost of connecting tree with given vertex degree sequence
• Ensemble-based Overlapping Community Detection using Disjoint Community Structures
• TLR: Transfer Latent Representation for Unsupervised Domain Adaptation
• Haze Density Estimation via Modeling of Scattering Coefficients of Iso-depth Regions
• GridFace: Face Rectification via Learning Local Homography Transformations
• Let CONAN tell you a story: Procedural quest generation
• Adapting the Neural Encoder-Decoder Framework from Single to Multi-Document Summarization
• Automatic Detection of Vague Words and Sentences in Privacy Policies
• Deep Multi-View Clustering via Multiple Embedding
