SunJackson Blog



The Main Approaches to Natural Language Processing Tasks

Reposted from: http://feedproxy.google.com/~r/kdnuggets-data-mining-analytics/~3/agjkFDZ57sM/main-approaches-natural-language-processing-tasks.html

Matthew Mayo


Published 2018-10-17

Source: Top 5 Semantic Technology Trends to Look for in 2017 (ontotext).

Read more »

If you did not already know

Reposted from: https://advanceddataanalytics.net/2018/10/17/if-you-did-not-already-know-516/

Michael Laux


Published 2018-10-17

Support Neighbor (SN) Person re-identification (re-ID) has recently been tremendously boosted by the advancement of deep convolutional neural networks (CNNs). The majority of deep re-ID methods focus on designing new CNN architectures, while less attention is paid to investigating the loss functions. Verification loss and identification loss are two types of losses widely used to train deep re-ID models, but both have limitations. Verification loss guides the network to generate feature embeddings whose intra-class variance is decreased while the inter-class variance is enlarged. However, networks trained with verification loss tend to converge slowly and perform unstably when the number of training samples is large. On the other hand, identification loss has good separability and scalability, but its failure to explicitly reduce intra-class variance limits its performance on re-ID, because the same person may show significant appearance disparity across different camera views. To avoid the limitations of both losses, we propose a new loss, called support neighbor (SN) loss. Rather than being derived from sample pairs or triplets, SN loss is calculated from the positive and negative support neighbor sets of each anchor sample, which contain more valuable contextual information and neighborhood structure, beneficial for more stable performance. To ensure scalability and separability, a softmax-like function is formulated to push apart the positive and negative support sets. To reduce intra-class variance, the distance between the anchor's nearest positive neighbor and its furthest positive sample is penalized. Integrating SN loss on top of ResNet50, re-ID results superior to the state of the art are obtained on several widely used datasets. …
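The ingredients described above (k-nearest positive/negative support sets, a softmax-like separation term, and a compactness penalty on the positives) can be sketched for a single anchor as below. This is an illustrative toy, not the paper's exact formulation: the function name, the use of Euclidean distance, the size `k` of the support sets, and the temperature `tau` are all assumptions for the sake of the sketch.

```python
import numpy as np

def sn_loss_sketch(embeddings, labels, anchor_idx, k=3, tau=1.0):
    """Toy support-neighbor-style loss for one anchor (illustrative only)."""
    anchor = embeddings[anchor_idx]
    dists = np.linalg.norm(embeddings - anchor, axis=1)
    others = np.arange(len(labels)) != anchor_idx
    pos = others & (labels == labels[anchor_idx])
    neg = others & (labels != labels[anchor_idx])
    # k-nearest positive / negative "support neighbor" sets of the anchor
    pos_idx = np.where(pos)[0][np.argsort(dists[pos])[:k]]
    neg_idx = np.where(neg)[0][np.argsort(dists[neg])[:k]]
    # softmax-like separation term: probability mass on the positive
    # support set should dominate that on the negative support set
    logits = -np.concatenate([dists[pos_idx], dists[neg_idx]]) / tau
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    separation = -np.log(probs[: len(pos_idx)].sum())
    # compactness: penalize the gap between the anchor's furthest
    # and nearest positive support neighbors
    compactness = dists[pos_idx].max() - dists[pos_idx].min()
    return separation + compactness
```

When the anchor's class forms a tight cluster far from the negatives, both terms are near zero; when negatives sit closer to the anchor than the positives, the separation term grows.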

Read more »

KDnuggets™ News 18:n39, Oct 17: 10 Best Mobile Apps for Data Scientist; Vote in new poll: Largest dataset you analyzed?

Reposted from: http://feedproxy.google.com/~r/kdnuggets-data-mining-analytics/~3/E4Y3GOEUn2o/n39.html

Gregory PS Editor


Published 2018-10-17

Read more »

Four machine learning strategies for solving real-world problems

Reposted from: https://blogs.sas.com/content/subconsciousmusings/2018/10/17/four-machine-learning-strategies-for-solving-real-world-problems/

Susan Kahler


Published 2018-10-17

There are four widely recognized styles of machine learning: supervised, unsupervised, semi-supervised and reinforcement learning. These styles have been discussed in great depth in the literature and are included in most introductory lectures on machine learning algorithms. As a recap, the table below summarizes these styles. For a comprehensive mapping of machine learning algorithms to machine learning styles, check out this blog post.
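The summary table referenced above is not reproduced in this excerpt; as a minimal stand-in, the four styles with a couple of commonly cited example algorithms each (the particular examples are this sketch's choice, not the post's table):

```python
# The four widely recognized machine learning styles, each mapped to
# a couple of representative algorithms (examples chosen for illustration).
ml_styles = {
    "supervised":      ["linear regression", "random forest"],
    "unsupervised":    ["k-means clustering", "PCA"],
    "semi-supervised": ["label propagation", "self-training"],
    "reinforcement":   ["Q-learning", "policy gradients"],
}
```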

Read more »

Music for Data Scientists? Music by Data Scientists? …What…?!

Reposted from: http://feedproxy.google.com/~r/kdnuggets-data-mining-analytics/~3/ZF-jZIdtZGY/music-data-scientists.html

Dan Clark


Published 2018-10-17

By Foster Provost, NYU

Read more »

Citizen Data Scientists | Why Not DIY AI?

Reposted from: http://feedproxy.google.com/~r/kdnuggets-data-mining-analytics/~3/wUVqKun2Ge8/citizen-data-scientists-automl-webinar.html

Gregory PS Editor


Published 2018-10-17

Thursday, November 8

Read more »

Fitting the Besag, York, and Mollie spatial autoregression model with discrete data

Reposted from: https://andrewgelman.com/2018/10/17/fitting-besag-york-mollie-spatial-autoregression-model-discrete-data/

Andrew


Published 2018-10-17

Rudy Banerjee writes:

Read more »

SatRday talks recordings

Reposted from: http://feedproxy.google.com/~r/RBloggers/~3/VZcsIWaW9SY/

Longhow Lam


Published 2018-10-17

Read more »

Slides from my talk at the R-Ladies Meetup about Interpretable Deep Learning with R, Keras and LIME

Reposted from: http://feedproxy.google.com/~r/RBloggers/~3/T9MIY4Ec524/

Dr. Shirin Glander


Published 2018-10-17

During my stay in London for the m3 conference, I also gave a talk at the R-Ladies London Meetup on Tuesday, October 16th, about one of my favorite topics: Interpretable Deep Learning with R, Keras and LIME.
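The core idea behind LIME that the talk covers can be sketched in a few lines: perturb the input, weight the perturbed samples by proximity, and fit a weighted linear surrogate whose coefficients act as local feature importances. The sketch below is a from-scratch toy, not the `lime` package's API; the function name, Gaussian perturbation scale, and kernel width are assumptions.

```python
import numpy as np

def lime_local_surrogate(predict_fn, x, n_samples=500, width=0.75, seed=0):
    """LIME-style local surrogate sketch: returns per-feature local weights
    of a proximity-weighted linear fit around the point x."""
    rng = np.random.default_rng(seed)
    # perturb the instance with Gaussian noise
    X = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict_fn(X)
    # proximity kernel: nearby perturbations get higher weight
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / width ** 2)
    # weighted least squares with an intercept column
    Xb = np.hstack([np.ones((n_samples, 1)), X])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(Xb * sw, y * sw[:, 0], rcond=None)
    return coef[1:]  # drop the intercept: local feature importances
```

For a globally linear model the surrogate recovers the true coefficients exactly; for a deep network it recovers the local linear behavior around `x`.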

Read more »

Estimating Control Chart Constants with R

Reposted from: http://feedproxy.google.com/~r/RBloggers/~3/YtSB_jSmV1o/

Kenith Grey


Published 2018-10-17

In this post, I will show you how a very basic R code can be used to estimate quality control constants needed to construct X-Individuals, X-Bar, and R-Bar charts. The value of this approach is that it gives you a mechanical sense of where these constants come from and some reinforcement on their application.
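The original post does this in R; the same "mechanical" idea can be sketched in a few lines of Python. The constant d2 is defined as the expected range of a subgroup of n standard normal observations, so a Monte Carlo estimate is just the average range over many simulated subgroups (the function name and simulation sizes here are this sketch's choices):

```python
import numpy as np

def estimate_d2(n, reps=200_000, seed=42):
    """Monte Carlo estimate of the control-chart constant d2 = E(R)/sigma:
    draw `reps` subgroups of size n from a standard normal distribution
    and average the subgroup ranges."""
    rng = np.random.default_rng(seed)
    samples = rng.standard_normal((reps, n))
    ranges = samples.max(axis=1) - samples.min(axis=1)
    return ranges.mean()
```

The estimates land close to the tabulated values (d2 is about 1.128 for n = 2 and about 2.326 for n = 5), which is exactly the reinforcement the post aims for.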

Read more »
SunJackson

3974 posts · 5 categories
© 2018 - 2019 SunJackson