If you did not already know
Support Neighbor (SN)
Person re-identification (re-ID) has recently been boosted tremendously by the advance of deep convolutional neural networks (CNNs). The majority of deep re-ID methods focus on designing new CNN architectures, while less attention is paid to the loss functions. Verification loss and identification loss are two types of losses widely used to train deep re-ID models, both of which, however, have limitations. Verification loss guides the network to generate feature embeddings whose intra-class variance is decreased while the inter-class variance is enlarged. However, training networks with verification loss tends to converge slowly and perform unstably when the number of training samples is large. On the other hand, identification loss has good separating and scaling properties. But because it does not explicitly reduce intra-class variance, its re-ID performance is limited, since the same person may have significant appearance disparity across different camera views. To avoid the limitations of both types of losses, we propose a new loss, called support neighbor (SN) loss. Rather than being derived from pairs or triplets of samples, SN loss is calculated from the positive and negative support neighbor sets of each anchor sample, which capture more valuable contextual information and neighborhood structure and thereby yield more stable performance. To ensure scalability and separability, a softmax-like function is formulated to push apart the positive and negative support sets. To reduce intra-class variance, the distance between the anchor’s nearest positive neighbor and furthest positive sample is penalized. Integrating SN loss on top of ResNet50, we obtain re-ID results superior to the state of the art on several widely used datasets. …
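As a rough illustration of the two ingredients the abstract describes, here is a minimal base-R sketch of the loss for a single anchor. The K-nearest-neighbour construction of the support sets, the exact form of the softmax-like term, and all names are assumptions made for illustration; they are not the paper's definitions.

```r
# Sketch of the two SN loss ingredients for one anchor (illustration only;
# the support-set construction and softmax form are assumptions, not the paper's).
sn_loss_anchor <- function(emb, labels, anchor, K = 5, temperature = 1) {
  d <- as.matrix(dist(emb))[anchor, ]   # Euclidean distances from the anchor to all samples
  d[anchor] <- Inf                      # exclude the anchor itself

  pos <- which(labels == labels[anchor] & seq_along(labels) != anchor)
  neg <- which(labels != labels[anchor])

  # Positive / negative support neighbour sets: the K closest samples of each kind.
  pos_sn <- pos[order(d[pos])][seq_len(min(K, length(pos)))]
  neg_sn <- neg[order(d[neg])][seq_len(min(K, length(neg)))]

  # Softmax-like separation term: favour the positive support set over the negative one.
  logits     <- exp(-d[c(pos_sn, neg_sn)] / temperature)
  separation <- -log(sum(logits[seq_along(pos_sn)]) / sum(logits))

  # Compactness term: penalise the gap between the anchor's nearest positive
  # neighbour and its furthest positive sample, shrinking intra-class variance.
  compactness <- max(d[pos]) - min(d[pos])

  separation + compactness
}

# Toy usage: 20 random embeddings for 4 identities with 5 images each.
set.seed(1)
emb    <- matrix(rnorm(20 * 8), nrow = 20)
labels <- rep(1:4, each = 5)
sn_loss_anchor(emb, labels, anchor = 1)
```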
KDnuggets™ News 18:n39, Oct 17: 10 Best Mobile Apps for Data Scientist; Vote in new poll: Largest dataset you analyzed?
Four machine learning strategies for solving real-world problems
There are four widely recognized styles of machine learning: supervised, unsupervised, semi-supervised and reinforcement learning. These styles have been discussed in great depth in the literature and are included in most introductory lectures on machine learning algorithms. As a recap, the table below summarizes these styles. For a comprehensive mapping of machine learning algorithms to machine learning styles, check out this blog post.
Music for Data Scientists? Music by Data Scientists? …What…?!
By Foster Provost, NYU
Citizen Data Scientists | Why Not DIY AI?
Thursday, November 8
Fitting the Besag, York, and Mollie spatial autoregression model with discrete data
Rudy Banerjee writes:
SatRday talks recordings
Slides from my talk at the R-Ladies Meetup about Interpretable Deep Learning with R, Keras and LIME
During my stay in London for the m3 conference, I also gave a talk at the R-Ladies London Meetup on Tuesday, October 16th, about one of my favorite topics: Interpretable Deep Learning with R, Keras and LIME.
Estimating Control Chart Constants with R
In this post, I will show you how some very basic R code can be used to estimate the quality control constants needed to construct X-Individuals, X-Bar, and R-Bar charts. The value of this approach is that it gives you a mechanical sense of where these constants come from and some reinforcement of how they are applied.
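The simulation idea behind those constants is easy to sketch. The snippet below estimates d2 and d3 (and the derived A2, D3, D4 factors) by Monte Carlo under standard-normal subgroups; it is a sketch of the general approach, not the code from the post, and the function name and defaults are my own.

```r
# Monte Carlo sketch of where the control chart constants come from.
set.seed(42)

estimate_constants <- function(n, reps = 1e5) {
  # Draw `reps` subgroups of size n from a standard normal (sigma = 1),
  # so the mean and sd of the subgroup range estimate d2 and d3 directly.
  ranges <- replicate(reps, diff(range(rnorm(n))))
  d2 <- mean(ranges)   # E(R) / sigma
  d3 <- sd(ranges)     # sd(R) / sigma
  c(d2 = d2,
    d3 = d3,
    A2 = 3 / (d2 * sqrt(n)),        # X-bar chart limits: xbar +/- A2 * Rbar
    D3 = max(0, 1 - 3 * d3 / d2),   # lower limit factor for the range chart
    D4 = 1 + 3 * d3 / d2)           # upper limit factor for the range chart
}

# n = 2 corresponds to the moving range used by X-Individuals charts
# (d2 should come out near 1.128); n = 5 is the classic X-bar/R subgroup size.
round(rbind(`n = 2` = estimate_constants(2),
            `n = 5` = estimate_constants(5)), 3)
```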