Deep learning classifiers make the ladies (and gentlemen) swoon, but they often misclassify novel data outside the training set with high confidence. This has serious real-world consequences! In medicine, it could mean misdiagnosing a patient. In autonomous vehicles, it could mean ignoring a stop sign. Machines are increasingly tasked with making life-or-death decisions like these, so it's important that we figure out how to correct this problem! I found a new, relatively obscure yet extremely fascinating paper out of Google Research that tackles this problem head-on. In this episode, I'll explain the work of these researchers, we'll write some code, do some math, do some visualizations, and by the end I'll freestyle rap about AI and genomics. I had a lot of fun making this, so I hope you enjoy it!
Likelihood Ratios for Out-of-Distribution Detection paper:
The researchers' code:
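To give a feel for the paper's core idea: it scores an input by the log-likelihood ratio between a model trained on in-distribution data and a "background" model trained on perturbed (noised) copies of that data, so that generic background statistics cancel out. Here's a minimal sketch using simple 1-D Gaussian densities in place of the paper's deep generative models; all names and parameters below are illustrative, not from the researchers' repo.

```python
# Likelihood-ratio OOD score, sketched with toy Gaussian density models.
import numpy as np

def gaussian_logpdf(x, mean, std):
    """Log-density of a univariate Gaussian."""
    return -0.5 * np.log(2 * np.pi * std**2) - (x - mean) ** 2 / (2 * std**2)

# "Full" model: fit on in-distribution data (semantics + background).
in_dist = np.random.default_rng(0).normal(loc=5.0, scale=1.0, size=1000)
full_mean, full_std = in_dist.mean(), in_dist.std()

# "Background" model: fit on noise-perturbed inputs so it captures only
# background statistics (the paper perturbs inputs with random mutations).
perturbed = in_dist + np.random.default_rng(1).normal(scale=3.0, size=1000)
bg_mean, bg_std = perturbed.mean(), perturbed.std()

def llr_score(x):
    """Likelihood ratio: higher for in-distribution, lower for OOD inputs."""
    return gaussian_logpdf(x, full_mean, full_std) - gaussian_logpdf(x, bg_mean, bg_std)

print(llr_score(5.0))   # near the training distribution: high score
print(llr_score(50.0))  # far-off OOD input: much lower score
```

The point of the subtraction is that both models assign similar likelihood to background structure, so it cancels, leaving a score driven by the semantic content the full model learned.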
My notes from the paper (the images are from across the Web):
Visualization script:
Generative Modeling:
Defense Against Adversarial Attacks:
LSTM Networks Explained:
Recurrent Networks Explained:
Are you a total beginner to machine learning? Watch this:
Join us in the Wizards Slack channel:
Hit the Join button above to sign up to become a member of my channel for access to exclusive live streams!
Signup for my newsletter for exciting updates in the field of AI: