SunJackson Blog



“Principles of posterior visualization”

Reposted from: https://andrewgelman.com/2019/01/01/principles-posterior-visualization/

Andrew


Published on 2019-01-01

What better way to start the new year than with a discussion of statistical graphics.

Read more »

Your and my 2019 R goals

Reposted from: http://feedproxy.google.com/~r/RBloggers/~3/Z3_gxMn2ZzE/

Posts on Maëlle's R blog


Published on 2019-01-01

Here we go again, using a Twitter trend as blog fodder! Colin Fay launched an inspiring movement by sharing his R goals for 2019.

Read more »

Seeing the wood for the trees

Reposted from: http://feedproxy.google.com/~r/RBloggers/~3/oT9QoC-AjNk/

Carl Goodwin


Published on 2019-01-01

Visualising “bigger data”: In the blog post “Criminal goings-on in a random forest”, we used supervised machine learning to see how well we could predict crime in London. We began by rendering and exploring some of the many facets of the recorded crime summary data at London borough level.

Read more »

New Year's Resolutions 2019

Reposted from: http://korbonits.github.io/2019/01/01/New-Years-Resolutions-2019.html

Unknown


Published on 2019-01-01

A year back, a year ahead.

Read more »

If you did not already know

Reposted from: https://analytixon.com/2019/01/01/if-you-did-not-already-know-596/

Michael Laux


Published on 2019-01-01

Gradient Adversarial Training We propose gradient adversarial training, an auxiliary deep learning framework applicable to different machine learning problems. In gradient adversarial training, we leverage a prior belief that in many contexts, simultaneous gradient updates should be statistically indistinguishable from each other. We enforce this consistency using an auxiliary network that classifies the origin of the gradient tensor, and the main network serves as an adversary to the auxiliary network in addition to performing standard task-based training. We demonstrate gradient adversarial training for three different scenarios: (1) as a defense against adversarial examples, we classify gradient tensors and tune them to be agnostic to the class of their corresponding example; (2) for knowledge distillation, we do binary classification of gradient tensors derived from the student or teacher network and tune the student gradient tensor to mimic the teacher’s gradient tensor; and (3) for multi-task learning, we classify the gradient tensors derived from different task loss functions and tune them to be statistically indistinguishable. For each of the three scenarios, we show the potential of the gradient adversarial training procedure. Specifically, gradient adversarial training increases the robustness of a network to adversarial attacks, is able to better distill the knowledge from a teacher network to a student network compared to soft targets, and boosts multi-task learning by aligning the gradient tensors derived from the task-specific loss functions. Overall, our experiments demonstrate that gradient tensors contain latent information about whatever tasks are being trained, and can support diverse machine learning problems when intelligently guided through adversarialization using an auxiliary network. …

Read more »

What's new on arXiv

Reposted from: https://analytixon.com/2019/01/02/whats-new-on-arxiv-856/

Michael Laux


Published on 2019-01-01

Generic adaptation strategies for automated machine learning

Read more »

If you did not already know

Reposted from: https://analytixon.com/2019/01/01/if-you-did-not-already-know-597/

Michael Laux


Published on 2019-01-01

Deep Echo State Network (deepESN) The study of deep recurrent neural networks (RNNs) and, in particular, of deep Reservoir Computing (RC) is gaining increasing research attention in the neural networks community. The recently introduced deep Echo State Network (deepESN) model opened the way to an extremely efficient approach for designing deep neural networks for temporal data. At the same time, the study of deepESNs has shed light on the intrinsic properties of state dynamics developed by hierarchical compositions of recurrent layers, i.e. on the bias of depth in RNN architectural design. In this paper, we summarize the advancements in the development, analysis and applications of deepESNs. …
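The deepESN builds on the standard echo state network, whose reservoir is a fixed random recurrence and only the readout is trained. A minimal single-layer sketch in Python (NumPy only; the layer sizes, weight ranges, and spectral radius below are illustrative assumptions, not settings from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 1, 100   # input and reservoir sizes (illustrative)
rho = 0.9              # spectral radius < 1, for the echo state property

# Fixed random weights: in an ESN only a linear readout would be trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rho / max(abs(np.linalg.eigvals(W)))   # rescale to spectral radius rho

def run_reservoir(inputs):
    """Collect reservoir states x(t) = tanh(W_in u(t) + W x(t-1))."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x)
    return np.array(states)

states = run_reservoir(np.sin(np.linspace(0, 6, 50)))
print(states.shape)   # (50, 100)
```

A deepESN stacks several such reservoirs, feeding each layer's state sequence to the next, with the trained readout operating on the (concatenated) layer states.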

Read more »

Document worth reading: “Instance-Level Explanations for Fraud Detection: A Case Study”

Reposted from: https://analytixon.com/2019/01/01/document-worth-reading-instance-level-explanations-for-fraud-detection-a-case-study/

Michael Laux


Published on 2019-01-01

Fraud detection is a difficult problem that can benefit from predictive modeling. However, the verification of a prediction is challenging; for a single insurance policy, the model only provides a prediction score. We present a case study where we reflect on different instance-level model explanation techniques to aid a fraud detection team in their work. To this end, we designed two novel dashboards combining various state-of-the-art explanation techniques. These enable the domain expert to analyze and understand predictions, dramatically speeding up the process of filtering potential fraud cases. Finally, we discuss the lessons learned and outline open research issues.

Instance-Level Explanations for Fraud Detection: A Case Study

Read more »

Simulating Multi-state Models with R

Reposted from: http://feedproxy.google.com/~r/RBloggers/~3/8F833z6UXpg/

Health Economics with R


Published on 2019-01-01

Introduction Multi-state models are used to model a trajectory through multiple states. Survival models are a special case in which there are two states, alive and dead. Multi-state models are therefore useful in clinical settings because they can be used to predict or simulate disease progression in detail. Putter et al. provide a helpful tutorial.
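The post works in R; as a language-neutral illustration of the idea, here is a minimal discrete-time simulation of the classic three-state illness-death model (healthy → sick → dead) in Python. The transition probabilities are made up for the sketch, not taken from the post:

```python
import random

# Per-step transition probabilities (illustrative values only).
P = {
    "healthy": {"healthy": 0.90, "sick": 0.08, "dead": 0.02},
    "sick":    {"sick": 0.70, "healthy": 0.10, "dead": 0.20},
    "dead":    {"dead": 1.0},   # absorbing state
}

def simulate(n_steps, seed=None):
    """Simulate one trajectory through the multi-state model."""
    rng = random.Random(seed)
    state, path = "healthy", ["healthy"]
    for _ in range(n_steps):
        states, probs = zip(*P[state].items())
        state = rng.choices(states, weights=probs)[0]
        path.append(state)
        if state == "dead":     # stop once the absorbing state is reached
            break
    return path

print(simulate(50, seed=1))
```

A survival model is the special case where the dictionary has only "alive" and "dead"; adding intermediate states is what lets such a simulation track disease progression in detail.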

Read more »

Nimble tweak to use specific python version or virtual environment in RStudio

Reposted from: http://feedproxy.google.com/~r/RBloggers/~3/BKiTF6xpiOQ/

Pradeep Mavuluri


Published on 2019-01-01

Reticulate has made switching between R and Python easy, and does its best to bring together both worlds of data science.

Read more »
SunJackson

3974 posts · 5 categories

© 2018 - 2019 SunJackson