By Charles Brecque, Mind Foundry
Loops and Pizzas
An Introduction to Loops in R
I’m an Analyst and the software engineers made fun of my code!
My friend JD Long has been a steady source of inspiration over the years as I have learned more about analytics, reporting, software, and data science.
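The post this entry links to introduces loops in R. As a minimal sketch of the idea (the pizza names below are illustrative, not taken from the original post), a `for` loop walks a vector element by element, while `vapply` expresses the same per-element work in a more idiomatic, vectorized style:

```r
# A small character vector to iterate over (hypothetical example data).
pizzas <- c("margherita", "pepperoni", "quattro formaggi")

# Classic for loop: visit each element and act on it.
for (p in pizzas) {
  cat("Baking:", p, "\n")
}

# The same per-element work, written the more idiomatic R way:
# vapply applies the function to each element and guarantees the
# result type (one character string per input).
labels <- vapply(pizzas, function(p) paste("Baking:", p), character(1))
print(labels)
```

Both forms do the same work; `vapply` just makes the output shape and type explicit, which is often preferred in R code that others will read.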
Time-Sharing Experiments for the Social Sciences Is Holding a Special Competition for Young Investigators
Statistics Challenge Invites Students to Tackle Opioid Crisis Using Real-World Data
Document worth reading: “Review of Deep Learning”
In recent years, countries such as China and the United States, along with high-tech companies such as Google, have increased their investment in artificial intelligence. Deep learning is one of the key areas of current artificial intelligence research. This paper analyzes and summarizes the latest progress and future research directions of deep learning. First, three basic models of deep learning are outlined: multilayer perceptrons, convolutional neural networks, and recurrent neural networks. On this basis, we further analyze emerging new variants of convolutional and recurrent neural networks. The paper then summarizes deep learning’s applications across many areas of artificial intelligence, including speech, computer vision, and natural language processing. Finally, it discusses the open problems of deep learning and suggests possible solutions. Review of Deep Learning
Holy Grail of AI for Enterprise — Explainable AI
By Saurabh Kaushik.
Gold-Mining Week 7 (2018)
The post Gold-Mining Week 7 (2018) appeared first on Fantasy Football Analytics.
If you did not already know
Factorized Adversarial Network (FAN)
In this paper, we propose Factorized Adversarial Networks (FAN) to solve unsupervised domain adaptation problems for image classification tasks. Our networks map the data distribution into a latent feature space that is factorized into a domain-specific subspace, which captures domain-specific characteristics, and a task-specific subspace, which retains category information, for both the source and target domains. Unsupervised domain adaptation is achieved by adversarial training that minimizes the discrepancy between the distributions of the two task-specific subspaces from the source and target domains. We demonstrate that the proposed approach outperforms state-of-the-art methods on multiple benchmark datasets used in the literature for unsupervised domain adaptation. Furthermore, we collect two real-world tagging datasets that are much larger than existing benchmark datasets and achieve significant improvements over baselines, demonstrating the practical value of our approach. …
Will Models Rule the World? Data Science Salon Miami, Nov 6-7
Sponsored Post.