SunJackson Blog



What's new on arXiv

Reposted from: https://advanceddataanalytics.net/2018/08/01/whats-new-on-arxiv-723/

Michael Laux


Published on 2018-08-01

Neural Mesh: Introducing a Notion of Space and Conservation of Energy to Neural Networks

Read more »

Thanks, NVIDIA

Reposted from: http://andrewgelman.com/2018/08/01/thanks-nvidia/

Bob Carpenter


Published on 2018-08-01


Read more »

If you did not already know

Reposted from: https://advanceddataanalytics.net/2018/08/01/if-you-did-not-already-know-439/

Michael Laux


Published on 2018-08-01

SPSA-FSR This manuscript presents the following: (1) an improved version of the Binary Simultaneous Perturbation Stochastic Approximation (SPSA) method for feature selection in machine learning (Aksakalli and Malekipirbazari, Pattern Recognition Letters, Vol. 75, 2016) based on non-monotone iteration gains computed via the Barzilai and Borwein (BB) method, (2) its adaptation for feature ranking, and (3) comparison against popular methods on public benchmark datasets. The improved method, which we call SPSA-FSR, dramatically reduces the number of iterations required for convergence without impacting solution quality. SPSA-FSR can be used for both feature ranking and feature selection, in classification as well as regression problems. After a review of the current state of the art, we discuss our improvements in detail and present three sets of computational experiments: (1) comparison of SPSA-FSR as a (wrapper) feature selection method against sequential methods as well as genetic algorithms, (2) comparison of SPSA-FSR as a feature ranking method in a classification setting against random forest importance, chi-squared, and information gain methods, and (3) comparison of SPSA-FSR as a feature ranking method in a regression setting against minimum redundancy maximum relevance (MRMR), RELIEF, and linear correlation methods. The number of features in the datasets we use ranges from a few dozen to a few thousand. Our results indicate that SPSA-FSR converges to a good feature set in no more than 100 iterations and is therefore quite fast for a wrapper method. SPSA-FSR also outperforms popular feature selection and feature ranking methods in the majority of test cases, sometimes by a large margin, and it stands as a promising new feature selection and ranking method. …
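The core SPSA idea is simple to sketch: keep a continuous importance weight per feature, estimate a gradient from just two loss evaluations under a random simultaneous perturbation, and round the weights to a binary feature mask. Below is a minimal illustrative sketch in that spirit, assuming a scikit-learn style model; it is not the authors' SPSA-FSR implementation (which adds the Barzilai-Borwein non-monotone gains, among other refinements), and the function name and gain constants are placeholders.

```python
# Minimal sketch of SPSA-style wrapper feature selection (illustrative only).
import numpy as np
from sklearn.model_selection import cross_val_score

def spsa_feature_selection(model, X, y, n_iter=100, a=0.75, c=0.05, seed=0):
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    w = np.full(p, 0.5)                       # feature-importance weights in [0, 1]

    def loss(weights):
        mask = weights >= 0.5                 # round weights to a binary feature set
        if not mask.any():
            return 1.0
        return 1.0 - cross_val_score(model, X[:, mask], y, cv=3).mean()

    for k in range(n_iter):
        delta = rng.choice([-1.0, 1.0], size=p)      # simultaneous random perturbation
        # Two loss evaluations give a full gradient estimate, regardless of p.
        g_hat = (loss(np.clip(w + c * delta, 0, 1)) -
                 loss(np.clip(w - c * delta, 0, 1))) / (2 * c * delta)
        w = np.clip(w - a / (k + 1) * g_hat, 0, 1)   # decaying gain (not BB gains)
    return w >= 0.5, w                        # selected mask and final weights

```

Sorting the returned weights gives a feature ranking, mirroring the dual selection/ranking use described in the abstract.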

Read more »

Is it really true that babies should sleep on their backs?

Reposted from: http://andrewgelman.com/2018/07/31/really-true-babies-sleep-backs/

Andrew


Published on 2018-07-31

Arnold Kling is a well-regarded economics blogger. Here he expresses skepticism about the strength of the evidence behind recommending that babies sleep on their backs.

Read more »

What Makes Python Cool

Reposted from: https://www.codementor.io/shankarj67/what-makes-the-python-cool-lyerw0u19

SHANKAR JHA (SKHK634)


Published on 2018-07-31

As the title says, we will look at some of the cool features Python provides; a few are sketched below.
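Here is a short runnable sketch of the kind of features such posts typically cover; the specific examples in the linked article may differ.

```python
# List comprehensions: build a list in one expression.
squares = [n * n for n in range(10) if n % 2 == 0]

# Tuple unpacking: swap values without a temporary variable.
a, b = 1, 2
a, b = b, a

# f-strings: inline expressions in string literals (Python 3.6+).
name = "Python"
print(f"{name} has {len(squares)} even squares below 100: {squares}")

# Generators: lazy sequences that compute values on demand.
def fibonacci():
    x, y = 0, 1
    while True:
        yield x
        x, y = y, x + y

import itertools
print(list(itertools.islice(fibonacci(), 10)))
```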

Read more »

New Dynamics for Topic Models

Reposted from: http://blog.fastforwardlabs.com/2018/07/31/new-dynamics-for-topic-models.html

Unknown


Published on 2018-07-31

Topic models can extract key themes from large collections of documents in an unsupervised manner, which makes them one of the most powerful tools for organizing, searching, and understanding the vast troves of text data produced by humanity. Their power derives, in part, from their built-in assumptions about the nature of text; specifically, to identify topics, the model has to give the notion of a topic a mathematical structure that echoes its significance to a human reader. In their recent paper, Scalable Generalized Dynamic Topic Models, Patrick Jähnichen, Florian Wenzel, Marius Kloft, and Stephan Mandt present scalable models that allow topics to change over time in more general ways than was previously possible, extracting new forms of patterns from large-scale datasets.
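For readers unfamiliar with topic models, a minimal static LDA sketch with gensim illustrates the unsupervised topic extraction the post describes; it does not capture the scalable dynamic extensions of Jähnichen et al., and the toy corpus below is invented for illustration.

```python
# Minimal static-LDA sketch with gensim (illustrative only).
from gensim import corpora
from gensim.models import LdaModel

docs = [
    "neural networks learn representations from data".split(),
    "deep learning uses neural networks with many layers".split(),
    "topic models extract themes from document collections".split(),
    "latent dirichlet allocation is a classic topic model".split(),
]

dictionary = corpora.Dictionary(docs)              # map tokens to integer ids
corpus = [dictionary.doc2bow(d) for d in docs]     # bag-of-words vectors

lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=20, random_state=0)
for topic_id, words in lda.print_topics():         # top words per discovered topic
    print(topic_id, words)
```

A dynamic topic model additionally ties each topic's word distribution across time slices, letting the same topic drift as the corpus evolves.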

Read more »

Document worth reading: “Are Efficient Deep Representations Learnable”

Reposted from: https://advanceddataanalytics.net/2018/08/01/document-worth-reading-are-efficient-deep-representations-learnable/

Michael Laux


Published on 2018-07-31

Many theories of deep learning have shown that a deep network can require dramatically fewer resources to represent a given function than a shallow network. But a question remains: can these efficient representations be learned using current deep learning techniques? In this work, we test whether standard deep learning methods can in fact find the efficient representations posited by several theories of deep representation. Specifically, we train deep neural networks to learn two simple functions with known efficient solutions: the parity function and the fast Fourier transform. We find that using gradient-based optimization, a deep network does not learn the parity function unless initialized very close to a hand-coded exact solution. We also find that a deep linear neural network does not learn the fast Fourier transform, even in the best-case scenario of infinite training data, unless the weights are initialized very close to the exact hand-coded solution. Our results suggest that not every element of the class of compositional functions can be learned efficiently by a deep network, and further restrictions are necessary to understand what functions are both efficiently representable and learnable. Are Efficient Deep Representations Learnable
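As a rough illustration of the parity experiment described above, here is a minimal PyTorch sketch: a small MLP trained from random initialization on the n-bit parity function. The architecture and hyperparameters are placeholder choices, not the paper's setup; the point is only that such a harness lets one observe whether gradient-based training finds the known efficient solution.

```python
# Sketch of training an MLP on n-bit parity (illustrative, not the paper's code).
import torch
import torch.nn as nn

n_bits, n_samples = 10, 4096
torch.manual_seed(0)
X = torch.randint(0, 2, (n_samples, n_bits)).float()
y = (X.sum(dim=1) % 2).unsqueeze(1)        # parity label: 1 if an odd number of ones

model = nn.Sequential(                     # depth alone does not guarantee learning
    nn.Linear(n_bits, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = ((model(X) > 0).float() == y).float().mean()
print(f"training accuracy: {acc:.3f}")     # the abstract reports failure to learn
                                           # parity unless initialized near a
                                           # hand-coded exact solution
```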

Read more »

Recent top-selling books in AI and Machine Learning

Reposted from: http://rocketdatascience.org/?p=659

Kirk Borne


Published on 2018-07-31

Here are some Artificial Intelligence and Machine Learning books that are top sellers on Amazon.

Read more »

Magister Dixit

Reposted from: https://advanceddataanalytics.net/2018/07/31/magister-dixit-1301/

Michael Laux


Published on 2018-07-31

“I can see at least two options where methods from Data Science will benefit from Linked Data technologies and vice versa:

  • Machine learning algorithms benefit from the linking of various data sets by using ontologies and common vocabularies as well as reasoning, which leads to a broader data basis with (sometimes) higher data quality.

  • Linked Data based knowledge graphs benefit from Graph Data Analyses to identify data gaps and potential links (find an example for a semantic knowledge graph about ‘Data Science’ here: http://vocabulary.semantic-web.at/data-science).”

Andreas Blumauer (October 28, 2014)

Read more »

Neural reinterpretations of movie trailers

Reposted from: http://blog.fastforwardlabs.com/2018/07/31/neural-reinterpretations-of-movie-trailers.html

Grant


Published on 2018-07-31

In his latest project, artist and coder Mario Klingemann uses a neural network to match archival movie footage with the content of recent movie trailers. He regularly posts the resulting “neural reinterpretations” on his Twitter. The results are technically impressive. They also offer a fascinating look at how to explore the creative possibilities of a machine learning technique.

Read more »
SunJackson

3974 posts
5 categories
© 2018 - 2019 SunJackson