Video ID (string) | Channel ID (string) | Title (string) | Time Created (string) | Time Published (string) | Duration (string) | Description (string) | Category (string) | Like Count (float64) | Dislike Count (float64) |
---|---|---|---|---|---|---|---|---|---|
j3psNM5y-eA | UCv83tO5cePwHMt1952IVVHw | Implementing Filters in the New Haystack Doc Store | 2022-04-06 15:53:46 UTC | 2022-04-06 16:26:54 UTC | 1695 seconds | Released with Haystack v1.3! Install it directly from PyPI with:
pip install 'farm-haystack[pinecone]'
Join me as I work through the final few PR issues on the latest Haystack document store, and figure out how Haystack's filter_utils work.
PR:
https://github.com/deepset-ai/haystack/pull/2254
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
02:41 Filtering
05:36 Testing Existing Filter Utils
07:57 Making Sense of Filter Utils
10:35 Writing the First Filter
16:26 First Working Filter
18:24 Testing New Filters
21:27 Implementing in the Doc Store
24:02 Testing Pipeline Filters
27:11 Final Issue and Outro | Science & Technology | 3 | 0 |
ok0SDdXdat8 | UCv83tO5cePwHMt1952IVVHw | Spotify's Podcast Search Explained | 2022-04-13 15:02:31 UTC | 2022-04-14 13:14:50 UTC | 2998 seconds | The market for podcasts has grown tremendously in recent years.
Driving the charge in podcast adoption is Spotify. In a few short years, they have become the undisputed leaders in podcasting. Despite only entering the game in 2018, by late 2021, Spotify had already usurped Apple, the long-reigning leader in podcasts, with more than 28M monthly podcast listeners.
To back their podcast investments, Spotify has worked on making the podcast experience as seamless and accessible as possible. From their all-in-one podcast creation app (Anchor) to podcast APIs and their latest natural language enabled podcast search.
Spotify's natural language search for podcasts is a fascinating use case. In the past, users had to rely on keyword/term matching to find the podcast episodes they wanted. Now, they can search in natural language, in much the same way we might ask a real person where to find something.
In this video, we will take a look under the hood of Spotify's podcast search, and learn how to implement a similar system ourselves.
Pinecone article:
https://www.pinecone.io/learn/spotify-podcast-search
Code and tests:
https://github.com/pinecone-io/examples/tree/spotify-podcast-search/learn/search-in-wild/spotify-podcast-search
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
04:16 NLP in Semantic Search
08:35 Why Now?
09:29 Transformer Models
11:52 Sentence Transformers
13:12 Vector Search
15:56 How Spotify Built Podcast Search
17:35 Data Source, Fine-tuning, and Eval
22:58 Code Implementation, Dataset
24:44 Data Preparation
26:39 Query Generation
29:54 Fine-tuning a Podcast Model
41:40 Evaluation
48:05 Does it Scale?
49:00 Sharing Your Work | Science & Technology | 58 | 1 |
gVAJ_l_S7uQ | UCv83tO5cePwHMt1952IVVHw | How to learn NLP for free | 2022-04-24 16:41:28 UTC | 2022-04-26 13:05:48 UTC | 1402 seconds | Knowing what to learn is one of the hardest parts of self-learning. Imagine being thrown into the wilderness and being told to find a specific landmark. Without a map, you will end up wandering the wilderness with no better option than taking one step after another.
I spent a long time wandering step-by-step and eventually found my way into working with deep learning and NLP full-time.
Here I will share many of the resources I used or wish I had used in the past. You can use this "curriculum" as a rough guideline for self-learning ML and working towards a full-time position.
ALL LINKS in article/friend link below:
Medium article:
https://jamescalam.medium.com/the-self-taught-nlp-engineer-curriculum-c425c3fc3ff6
Friend link:
https://jamescalam.medium.com/the-self-taught-nlp-engineer-curriculum-c425c3fc3ff6?sk=986263c644d9b36699d800713faa478a
---
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:53 ML 101 + Prerequisites
04:58 Sentdex + Neural Nets from Scratch
07:32 ML Coursera
09:31 100 Page ML Book
11:14 Applied ML + Daniel Bourke
13:17 Origin of Modern NLP
13:41 CS224N
14:44 NLP Specialization Coursera
15:57 Modern NLP + Transformers Intro
16:54 Transformer Courses
18:14 Doing Projects
19:18 Semantic + Vector Search
19:54 NLP for Semantic Search
20:44 Mining of Massive Datasets
22:27 Final Points | Science & Technology | 165 | 1 |
fb7LENb9eag | UCv83tO5cePwHMt1952IVVHw | BERTopic Explained | 2022-05-10 14:13:06 UTC | 2022-05-11 15:10:23 UTC | 2714 seconds | 90% of the world's data is unstructured. It is built by humans, for humans. That's great for human consumption, but it is *very* hard to organize when we begin dealing with the massive amounts of data abundant in today's information age.
Organization is complicated because unstructured text data is not intended to be understood by machines, and having humans process this abundance of data is wildly expensive and *very slow*.
Fortunately, there is light at the end of the tunnel. More and more of this unstructured text is becoming accessible and understood by machines. We can now search text based on *meaning*, identify the sentiment of text, extract entities, and much more.
Transformers are behind much of this. These transformers are (unfortunately) not Michael Bay's Autobots and Decepticons and (fortunately) not buzzing electrical boxes. Our NLP transformers lie somewhere in the middle: they're not sentient Autobots (yet), but they can understand language in a way that existed only in sci-fi until just a few years ago.
Machines with a human-like comprehension of language are pretty helpful for organizing masses of unstructured text data. In machine learning, we refer to this task as *topic modeling*, the automatic clustering of data into particular topics.
BERTopic takes advantage of the superior language capabilities of these (not yet sentient) transformer models and uses some other ML magic like UMAP and HDBSCAN (more on these later) to produce what is one of the most advanced techniques in language topic modeling today.
Pinecone article:
https://www.pinecone.io/learn/bertopic
Code notebooks:
https://github.com/pinecone-io/examples/tree/master/learn/algos-and-libraries/bertopic
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:40 In this video
02:58 BERTopic Getting Started
08:48 BERTopic Components
15:21 Transformer Embedding
18:33 Dimensionality Reduction
25:07 UMAP
31:48 Clustering
37:22 c-TF-IDF
40:49 Custom BERTopic
44:04 Final Thoughts | Science & Technology | 153 | 3 |
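The entry above only describes BERTopic at a high level. As a rough illustration, a minimal usage sketch might look like the following; it assumes the bertopic and scikit-learn packages are installed, and the 20 Newsgroups sample is just an example corpus, not the data used in the video.

```python
# Minimal BERTopic sketch (illustrative, not the video's notebook).
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

# Small example corpus; any list of strings works.
docs = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes")).data[:2000]

topic_model = BERTopic()                        # embeddings -> UMAP -> HDBSCAN -> c-TF-IDF
topics, probs = topic_model.fit_transform(docs)

print(topic_model.get_topic_info().head())      # overview of discovered topics
print(topic_model.get_topic(0))                  # top c-TF-IDF terms for topic 0
```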
O9lrWt15wH8 | UCv83tO5cePwHMt1952IVVHw | Long Form Question Answering (LFQA) in Haystack | 2022-05-17 15:22:17 UTC | 2022-05-17 15:46:21 UTC | 2159 seconds | Question-Answering (QA) has exploded as a subdomain of Natural Language Processing (NLP) in the last few years. QA is a widely applicable use case in NLP yet was out of reach until the introduction of [transformer models](/learn/transformers/) in 2017.
Without transformer models, the level of language comprehension required to make something as complex as QA work simply was not possible.
Although QA is a complex topic, it comes from a simple idea: the automatic retrieval of information via a more human-like interaction. The task of information retrieval (IR) is performed by almost every organization in the world. Without other options, organizations rely on person-to-person IR and rigid keyword search tools. This haphazard approach to IR generates a lot of friction, particularly for larger organizations.
QA offers a solution to this problem. Rather than these documents being lost in an abyss, they can be stored within a space where an intelligent QA agent can access them. Unlike humans, our QA agent can scan millions of documents in seconds and return answers from these documents almost instantly.
With QA tools, employees can stop wasting time searching for snippets of information and focus on their *real*, value-adding tasks.
A small investment in QA is, for most organizations, a no-brainer.
Pinecone article:
https://www.pinecone.io/learn/haystack-lfqa
Code notebooks:
https://github.com/pinecone-io/examples/blob/master/integrations/haystack/haystack_lfqa.ipynb
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
04:20 Approaches to Question Answering
05:43 Components of QA Pipeline
08:58 LFQA Generator
09:40 Haystack Setup
10:32 Initialize Document Store
13:02 Getting Data
17:53 Indexing Embeddings
21:51 Initialize Generator
24:10 Asking Questions
26:12 Common Problems
29:32 Generator Memory
31:30 Few More Questions
34:54 Outro | Science & Technology | 55 | 1 |
uYas6ysyjgY | UCv83tO5cePwHMt1952IVVHw | New GPU-Acceleration for PyTorch on M1 Macs! + using with BERT | 2022-05-22 16:37:37 UTC | 2022-05-24 13:00:34 UTC | 1140 seconds | GPU-acceleration on Mac is finally here!
Today's deep learning models owe a great deal of their exponential performance gains to ever increasing model sizes. Those larger models require more computations to train and run.
These models are simply too big to run on CPU hardware, which performs computations largely step by step. Instead, they need massively parallel computations. That leaves us with either GPU or TPU hardware.
Our home PCs aren't coming with TPUs anytime soon, so we're left with the GPU option. GPUs use a highly parallel structure, originally designed for visually intensive workloads like image processing. They became essential components in gaming for rendering real-time 3D graphics.
GPUs are essential for the scale of today's models. Using CPUs makes many of these models too slow to be useful, which can make deep learning on M1 machines rather disappointing.
Fortunately, this is changing with GPU support on M1 machines arriving in PyTorch v1.12. In this video we will explain the new integration and how to implement it yourself.
Article:
https://towardsdatascience.com/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1
Friend Link (free access):
https://towardsdatascience.com/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1?sk=a88acd35f600858093c177b97d690b03
Code notebooks:
https://github.com/jamescalam/pytorch-mps
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:34 PyTorch MPS
04:57 Installing ARM Python
09:09 Using PyTorch with GPU
12:14 BERT on PyTorch GPU
13:51 Best way to train LLMs on Mac
16:01 Buffer Size Bug
17:24 When we would use Mac M1 GPU | Science & Technology | 115 | 3 |
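As a quick illustration of the integration described above, here is a minimal sketch, assuming PyTorch 1.12 or later built with MPS support; the tensor shapes are arbitrary placeholders.

```python
# Minimal sketch: running a computation on Apple's Metal (MPS) backend.
import torch

# Fall back to CPU if the MPS backend is not available on this machine/build.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

x = torch.rand(8, 512, device=device)            # tensor created directly on the GPU
y = (x @ x.transpose(-2, -1)).softmax(dim=-1)    # runs on the same device
print(y.device)                                   # e.g. mps:0 on an M1 Mac
```

Any `nn.Module`, including a Hugging Face BERT model, can be moved the same way with `model.to(device)`.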
FzLIIwiaXSU | UCv83tO5cePwHMt1952IVVHw | How to Build an AI-Powered Video Search App | 2022-06-01 12:37:21 UTC | 2022-06-01 16:29:43 UTC | 1343 seconds | Technology and culture have advanced and become ever more entangled. Some of the most significant technological breakthroughs are integrated so tightly into our culture that we never even notice they're there.
One of those is AI-powered search. It powers your Google results, Netflix recommendations, and the ads you see everywhere. It is being rapidly woven throughout all aspects of our lives. Further, this is a new technology; its full potential is unknown.
This technology weaves directly into the cultural phenomenon of YouTube. Imagine a search engine like Google that allows you to rapidly access the billions of hours of YouTube content. No other source in the world offers that much highly engaging video content.
Pinecone article:
https://www.pinecone.io/learn/youtube-search
Code:
https://github.com/pinecone-io/examples/tree/master/learn/projects/yt-search
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
02:56 YouTube Search App
04:43 Getting Data
07:58 Enhancing the Data
12:45 Scraping Other Metadata
14:52 Loading Data from Hugging Face
15:42 Index and Query the Data
20:43 Streamlit App Code | Science & Technology | 58 | 0 |
xXsDIK9z_fg | UCv83tO5cePwHMt1952IVVHw | Using Semantic Search to Find GIFs | 2022-06-06 09:17:01 UTC | 2022-06-07 12:05:40 UTC | 1050 seconds | Vector search powers some of the most popular services in the world. It serves your Google results, delivers the best podcasts on Spotify, and accounts for at least 35% of consumer purchases on Amazon.
In this article, we will use vector search applied to language, called semantic search, to build a GIF search engine. Unlike more traditional search where we rely on keyword matching, semantic search enables search based on the human meaning behind text and images. That means we can find highly relevant GIFs with natural language prompts.
The pipeline for a project like this is simple, yet powerful. It can easily be adapted to tasks as diverse as video search or answering Super Bowl questions, or as we'll see, finding GIFs.
Pinecone article:
https://www.pinecone.io/learn/gif-search
Code:
https://github.com/pinecone-io/examples/tree/master/learn/projects/gif-search
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
00:17 GIF Search Demo
01:56 Pipeline Overview
05:33 Data Preparation
08:17 Vector Database and Retriever
12:37 Querying
15:42 Streamlit App Code | Science & Technology | 20 | 1 |
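To make the semantic search pipeline described above concrete, here is a minimal sketch using sentence embeddings; it relies on in-memory cosine similarity rather than a vector database, and the model name and captions are only examples.

```python
# Minimal semantic search sketch (illustrative; not the article's code).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model

captions = [
    "a cat knocks a glass off the table",
    "fireworks over a city at night",
    "a dog catches a frisbee on the beach",
]
caption_embs = model.encode(captions, convert_to_tensor=True)

query_emb = model.encode("dog playing outside", convert_to_tensor=True)
scores = util.cos_sim(query_emb, caption_embs)[0]   # cosine similarity per caption
best = int(scores.argmax())
print(captions[best], float(scores[best]))
```

In the full project, the caption embeddings would live in a vector database and queries would be answered with an approximate nearest-neighbour lookup.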
_OAU1kQdmgE | UCv83tO5cePwHMt1952IVVHw | How to Learn Data Science | ML | Programming | 2022-06-15 10:37:57 UTC | 2022-06-15 13:11:47 UTC | 992 seconds | In this video I share five of the approaches/thoughts I have regarding learning, in particular for learning data science, machine learning, or programming.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:33 Scale of Theory vs. Applied
02:55 Shape of Learning
05:52 Courses vs. Projects
08:37 Open Source
10:44 Writing
12:44 Following Interests
15:42 Final Notes | Education | 24 | 0 |
BD9TkvEsKwM | UCv83tO5cePwHMt1952IVVHw | Evaluation Measures for Search and Recommender Systems | 2022-06-25 14:35:27 UTC | 2022-06-28 15:06:40 UTC | 1885 seconds | In this video you will learn about popular offline metrics (evaluation measures) like Recall@K, Mean Reciprocal Rank (MRR), Mean Average Precision@K (MAP@K), and Normalized Discounted Cumulative Gain (NDCG@K). We will also demonstrate how each of these metrics can be replicated in Python.
Evaluation of information retrieval (IR) systems is critical to making well-informed design decisions. From search to recommendations, evaluation measures are paramount to understanding what does and does not work in retrieval.
Many big tech companies attribute much of their success to well-built IR systems. One of Amazon's earliest iterations of the technology was reportedly driving more than 35% of their sales. Google attributes 70% of YouTube views to their IR recommender systems.
IR systems power some of the greatest companies in the world, and behind every successful IR system is a set of evaluation measures.
Pinecone article:
https://www.pinecone.io/learn/offline-evaluation
Code notebooks:
https://github.com/pinecone-io/examples/tree/master/learn/algos-and-libraries/offline-evaluation
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
00:51 Offline Metrics
02:38 Dataset and Retrieval 101
06:08 Recall@K
07:57 Recall@K in Python
09:03 Disadvantages of Recall@K
10:21 MRR
13:32 MRR in Python
14:18 MAP@K
18:17 MAP@K in Python
19:27 NDCG@K
29:26 Pros and Cons of NDCG@K
29:48 Final Thoughts | Science & Technology | 48 | 0 |
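Since the description above mentions replicating each metric in Python, here is a minimal sketch of two of them, Recall@K and MRR, written against toy inputs; the variable names and values are purely illustrative.

```python
# Minimal sketch of Recall@K and Mean Reciprocal Rank (illustrative, not the article's notebook).
def recall_at_k(relevant: set, ranked: list, k: int) -> float:
    """Fraction of relevant items that appear in the top-k ranked results."""
    return len(relevant & set(ranked[:k])) / len(relevant)

def mean_reciprocal_rank(queries: list) -> float:
    """queries: list of (relevant_set, ranked_list) pairs."""
    total = 0.0
    for relevant, ranked in queries:
        for rank, item in enumerate(ranked, start=1):
            if item in relevant:
                total += 1.0 / rank
                break
    return total / len(queries)

print(recall_at_k({"a", "b"}, ["c", "a", "d", "b"], k=2))                 # 0.5
print(mean_reciprocal_rank([({"a"}, ["c", "a"]), ({"x"}, ["x", "y"])]))   # (0.5 + 1.0) / 2 = 0.75
```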
coaaSxys5so | UCv83tO5cePwHMt1952IVVHw | How to build next-level Q&A with OpenAI | 2022-07-06 19:48:54 UTC | 2022-07-07 13:24:35 UTC | 1168 seconds | Walkthrough of the OpenAI x Pinecone Q&A app I built for a webinar with OpenAI. This is the coolest Q&A app I've ever built thanks to Pinecone vector search and OpenAI's incredible embeddings and generation endpoints.
LINKS:
App:
https://pinecone-io-playground-beyond-search-openaisrcserver-h65vzl.streamlitapp.com
Code and Data:
https://github.com/pinecone-io/examples/tree/master/integrations/openai/beyond_search_webinar
OpenAI x Pinecone Webinar:
https://www.youtube.com/watch?v=HtI9easWtAA
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP | Science & Technology | 36 | 0 |
I3na13AESjw | UCv83tO5cePwHMt1952IVVHw | How to use Color Histograms for Image Retrieval | 2022-07-11 07:01:31 UTC | 2022-07-13 16:22:08 UTC | 1864 seconds | Browsing, searching, and retrieving images has never been easy. Traditionally, many technologies relied on manually appending metadata to images and searching via this metadata. This approach works for datasets with high-quality annotation, but most datasets are too large for manual annotation.
That means any large image dataset must rely on Content-Based Image Retrieval (CBIR). Search with CBIR focuses on comparing the *content* of an image rather than its metadata. Content can be color, shapes, textures, or (with some of the latest advances in ML) the "human meaning" behind an image.
Color histograms represent one of the first CBIR techniques, allowing us to search through images based on their color profiles rather than metadata.
Pinecone article:
https://pinecone.io/learn/color-histograms
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:23 What are Color Histograms?
08:39 How to Build Color Histograms
16:56 Using OpenCV calcHist
20:36 Image Retrieval
27:37 Pros and Cons
30:40 Final Points | Science & Technology | 23 | 0 |
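As a rough illustration of the technique above, a color histogram descriptor with OpenCV might look like the following minimal sketch; the image path and bin count are placeholders.

```python
# Minimal color histogram sketch for image retrieval (illustrative).
import cv2
import numpy as np

img = cv2.imread("image.jpg")            # placeholder path; OpenCV loads images as BGR
assert img is not None, "image not found"

channels = []
for c in range(3):                       # one histogram per B, G, R channel
    h = cv2.calcHist([img], [c], None, [64], [0, 256])
    channels.append(h)

descriptor = np.concatenate(channels).flatten()
descriptor /= descriptor.sum()           # normalise so differently sized images are comparable
print(descriptor.shape)                  # (192,) -> compare descriptors with e.g. chi-square distance
```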
UzkdOg7wWmI | UCv83tO5cePwHMt1952IVVHw | Hugging Face just released *Diffusers* - for models like DALL-E 2 and Imagen! | 2022-07-23 21:33:08 UTC | 2022-07-26 15:27:46 UTC | 934 seconds | Hugging Face, of transformers fame, has created a whole new Python library for diffusion models! Diffusion models are a key component of models like OpenAI's DALL-E 2, Google's Imagen, and Midjourney's image generation service. Hugging Face Diffusers brings these models to a new level of accessibility (and open source!).
Article:
https://towardsdatascience.com/hugging-face-just-released-the-diffusers-library-846f32845e65
Friend Link (free access):
https://towardsdatascience.com/hugging-face-just-released-the-diffusers-library-846f32845e65?sk=9ec4027460defa1fd25178af9a55da13
Diffusers:
https://github.com/huggingface/diffusers
Discord:
https://discord.gg/c5QtDB9RAP
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
00:00 What are Diffusers?
01:55 Getting started
04:20 Prompt engineering
09:34 Testing other diffusers | Science & Technology | 61 | 0 |
szfG55juoJE | UCv83tO5cePwHMt1952IVVHw | How I work from anywhere | 2022-07-24 14:01:51 UTC | 2022-08-16 13:55:16 UTC | 767 seconds | Overview of how I deal with travel and work. Remote desk setup for staying as ergonomic and productive as possible, enjoy!
Links to products (mostly affiliate):
Laptop stand: https://amzn.to/3bZqMHM
Second screen: https://amzn.to/3w6IT5B
Cable bag (international): https://amzn.to/3QBH7S7
... or UK: https://amzn.to/3ps5lT2
Peak Design backpacks: https://www.peakdesign.com/products/everyday-backpack
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP | Science & Technology | 32 | 1 |
jjQetJtQDS4 | UCv83tO5cePwHMt1952IVVHw | Bag of *Visual* Words for Image Classification and Retrieval | 2022-08-02 20:39:30 UTC | 2022-08-03 13:00:35 UTC | 3367 seconds | In computer vision, bag of visual words (BoVW) is one of the pre-deep learning models used for building image embeddings. It allows us to retrieve images from a database that are similar to another "query" image, perform object detection, and classify images.
Pinecone article:
https://www.pinecone.io/learn/bag-of-visual-words/
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP | Science & Technology | 33 | 0 |
989aKUVBfbk | UCv83tO5cePwHMt1952IVVHw | Fast intro to multi-modal ML with OpenAI's CLIP | 2022-08-11 06:17:14 UTC | 2022-08-11 13:03:08 UTC | 1374 seconds | OpenAI's CLIP is a "multi-modal" model capable of understanding the relationships and concepts between both text and images. As we'll see, CLIP is very capable, and when used via the Hugging Face library, could not be easier to work with.
Article:
https://towardsdatascience.com/quick-fire-guide-to-multi-modal-ml-with-openais-clip-2dad7e398ac0
Friend Link (free access):
https://towardsdatascience.com/quick-fire-guide-to-multi-modal-ml-with-openais-clip-2dad7e398ac0?sk=89bb2d8b8e583ed109d8a05e00366645
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
00:15 What is CLIP?
02:13 Getting started
05:38 Creating text embeddings
07:23 Creating image embeddings
10:26 Embedding a lot of images
15:08 Text-image similarity search
21:38 Alternative image and text search | Science & Technology | 31 | 0 |
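As a quick, hedged illustration of the text-image similarity idea described above, a minimal sketch with the Hugging Face implementation of CLIP might look like this; the checkpoint name and image URL are just examples, not the article's exact code.

```python
# Minimal CLIP text-image similarity sketch (illustrative).
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"   # example image (two cats)
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of two cats", "a photo of a dog"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

probs = outputs.logits_per_image.softmax(dim=-1)   # similarity of the image to each prompt
print(dict(zip(texts, probs[0].tolist())))
```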
c_u4AHNjOpk | UCv83tO5cePwHMt1952IVVHw | AlexNet and ImageNet Explained | 2022-08-23 22:13:25 UTC | 2022-08-24 13:00:22 UTC | 2180 seconds | Today's deep learning revolution traces back to the 30th of September, 2012. On this day, a Convolutional Neural Network (CNN) called AlexNet won the ImageNet 2012 challenge. AlexNet didn't just win; it dominated.
AlexNet was unlike the other competitors. This new model demonstrated unparalleled performance on the largest image dataset of the time, ImageNet. This event made AlexNet the first widely acknowledged, successful application of deep learning. It caught people's attention with a 9.8 percentage point advantage over the nearest competitor.
Until this point, deep learning was a nice idea that most deemed impractical. AlexNet showed that deep learning was more than a pipe dream, and the authors showed the world how to make it practical. Yet, the surge of deep learning that followed was not fueled solely by AlexNet. Indeed, without the huge ImageNet dataset, there would have been no AlexNet.
The future of AI was to be built on the foundations set by the ImageNet challenge and the novel solutions that enabled the synergy between ImageNet and AlexNet.
Pinecone article:
https://pinecone.io/learn/imagenet
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:06 Birth of Deep Learning
02:52 ImageNet
07:56 Lack of Readiness for Big Datasets
09:57 ImageNet Challenge (ILSVRC)
11:47 AlexNet
19:30 PYTORCH IMPLEMENTATION
19:55 Data Preprocessing
27:06 Class Prediction with AlexNet
31:50 Goldfish Results
34:27 Closing Notes | Science & Technology | 20 | 0 |
pfwBut7E60Q | UCv83tO5cePwHMt1952IVVHw | Ultra-efficient Classifier Fine-tuning with Vector Search | 2022-08-31 00:32:14 UTC | 2022-08-31 13:00:26 UTC | 1932 seconds | Learn how to use vector search to create highly targeted training for any classification model using a final linear classification layer. Easily fine-tune models in 10 minutes with less than 100 labeled examples.
Pinecone article:
https://pinecone.io/learn/classifier-train-vector-search/
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:14 Classification
02:49 Better Classifier Training
06:33 Classification as Vector Search
08:47 How Fine-tuning Works
10:50 Identifying Important Samples
12:39 CODE IMPLEMENTATION
13:13 Indexing
18:59 Fine-tuning the Classifier
27:37 Classifier Predictions
30:43 Closing Notes | Science & Technology | 49 | 0 |
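The description above is high-level; as a rough sketch of the core idea (training only a final linear layer on a small set of labelled embedding vectors), something like the following works, with random tensors standing in for real sentence embeddings and labels.

```python
# Minimal sketch: fine-tune just a linear classification head on ~60 labelled embeddings.
import torch
import torch.nn as nn

emb_dim, n_classes = 384, 3
X = torch.randn(60, emb_dim)               # stand-in for pre-computed sentence embeddings
y = torch.randint(0, n_classes, (60,))     # stand-in for a small set of labels

clf = nn.Linear(emb_dim, n_classes)        # the only trainable component
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(clf(X), y)
    loss.backward()
    opt.step()

print(f"final training loss: {loss.item():.3f}")
```

In the article's setting, the labelled examples themselves are surfaced with vector search rather than sampled at random.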
-S20nblUuNw | UCv83tO5cePwHMt1952IVVHw | Hugging Face Datasets #1 - Hosting your datasets | 2022-09-09 12:52:32 UTC | 2022-09-09 14:18:34 UTC | 1382 seconds | Introduction to Hugging Face datasets, how it works, and how to host your own simple datasets (JSONL, TSV, CSV, etc) for free via Hugging Face Datasets Hub
Warp download:
https://app.warp.dev/referral/7G3N39
Git LFS Install:
Mac:
$ brew install git-lfs
Debian/Ubuntu:
$ curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
$ sudo apt-get install git-lfs
Windows:
Get the installer from https://github.com/git-lfs/git-lfs/releases
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
04:36 Creating our own Datasets
08:29 Creating JSONL for Hugging Face
15:15 Uploading Datasets for Git
19:10 LFS for Large Files
21:56 Closing Notes | Science & Technology | 14 | 0 |
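To complement the Git/LFS route listed above, here is a minimal, hedged sketch of hosting a small JSONL dataset programmatically with the datasets library; the file name and repo id are placeholders, and it assumes you are logged in via `huggingface-cli login` or an HF token.

```python
# Minimal sketch: load a local JSONL file and push it to the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("json", data_files="train.jsonl", split="train")   # placeholder file
print(ds)                                                            # features + number of rows

ds.push_to_hub("your-username/your-dataset")                         # placeholder repo id
```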
fGwH2YoQkDM | UCv83tO5cePwHMt1952IVVHw | CLIP Explained | Multi-modal ML | 2022-09-14 23:08:40 UTC | 2022-09-15 13:00:22 UTC | 2013 seconds | Language models (LMs) cannot rely on language alone. That is the idea behind the "Experience Grounds Language" paper, which proposes a framework to measure LMs' current and future progress. A key idea is that, beyond a certain threshold, LMs need other forms of data, such as visual input.
The next step beyond well-known language models like BERT, GPT-3, and T5 is "World Scope 3". In World Scope 3, we move from large text-only datasets to large multi-modal datasets. That is, datasets containing information from multiple forms of media, like *both* images and text.
The world, both digital and real, is multi-modal. We perceive the world as an orchestra of language, imagery, video, smell, touch, and more. This chaotic ensemble produces an inner state, our "model" of the outside world.
AI must move in the same direction. Even specialist models that focus on language or vision must, at some point, have input from the other modalities. How can a model fully understand the concept of the word "person" without *seeing* a person?
OpenAI's Contrastive Language-Image Pre-training (CLIP) is a World Scope 3 model. It can comprehend concepts in both text and images and even connect concepts between the two modalities. In this video we will learn about multi-modality, how CLIP works, and how to use CLIP for different use cases like encoding, classification, and object detection.
Pinecone article:
https://pinecone.io/learn/clip/
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP | Science & Technology | 50 | 1 |
ODdKC30dT8c | UCv83tO5cePwHMt1952IVVHw | Hugging Face Datasets #2 - Dataset Builder Scripts | 2022-09-23 14:06:51 UTC | 2022-09-23 14:45:22 UTC | 1404 seconds | How to work with dataset builder scripts, intro to the download manager, and Apache Arrow datatypes used in Hugging Face Datasets.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
00:49 Creating Compressed Files
02:41 Creating Dataset Build Script
04:49 Download Manager
08:59 Finishing Split Generator
10:13 Generate Examples Method
14:47 Add Dataset to Hugging Face
17:49 Apache Arrow Features
22:52 What's Next? | Science & Technology | 13 | 0 |
98POYg2HZqQ | UCv83tO5cePwHMt1952IVVHw | Zero-Shot Image Classification with OpenAI's CLIP | 2022-10-04 05:29:02 UTC | 2022-10-05 14:00:03 UTC | 1303 seconds | State-of-the-art (SotA) computer vision (CV) models are characterized by a *restricted* understanding of the visual world specific to their training data [1].
These models can perform *very well* on specific tasks and datasets, but they do not generalize well. They cannot handle new classes or images beyond the domain they have been trained with.
Ideally, a CV model should learn the contents of images without excessive focus on the specific labels it is initially trained to understand.
Fortunately, OpenAI's CLIP has proved itself as an incredibly flexible CV classification model that often requires *zero* retraining. In this chapter, we will explore CLIP in zero-shot image classification.
Pinecone article:
https://pinecone.io/learn/clip-classification/
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP | Science & Technology | 10 | 0 |
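As a brief illustration of the zero-shot idea in the entry above, the Hugging Face pipeline can score an image against arbitrary class names without any retraining; this is a minimal sketch, and the checkpoint, image URL, and labels are only examples.

```python
# Minimal zero-shot image classification sketch with CLIP (illustrative).
import requests
from PIL import Image
from transformers import pipeline

classifier = pipeline("zero-shot-image-classification", model="openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"   # example image (two cats)
image = Image.open(requests.get(url, stream=True).raw)

# Labels the model was never explicitly trained on as "classes".
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
for result in classifier(image, candidate_labels=labels):
    print(f"{result['label']}: {result['score']:.3f}")
```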
efPrtcLdcdM | UCZHmQk67mSJgfCCTn7xBfew | This is the worst AI ever | null | 2022-06-03T15:25:58Z | null | gpt4chan #4chan #ai GPT-4chan was trained on over 3 years of posts from 4chan's "politically incorrect" (/pol/) board. (and no ... | null | null | null |
TrdevFK_am4 | UCZHmQk67mSJgfCCTn7xBfew | An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Paper Explained) | null | 2020-10-04T11:22:34Z | null | ai #research #transformers Transformers are Ruining Convolutions. This paper, under review at ICLR, shows that given enough ... | null | null | null |
6MUpWGeGMxs | UCZHmQk67mSJgfCCTn7xBfew | NeuralHash is BROKEN - How to evade Apple's detection & craft hash collisions (w/ Open Source Code) | null | 2021-08-19T14:03:52Z | null | apple #icloud #neuralhash Send your Apple fanboy friends to prison with this one simple trick ;) We break Apple's NeuralHash ... | null | null | null |
n622girLRNM | UCZHmQk67mSJgfCCTn7xBfew | [ML News] Microsoft combines Images & Text | Meta makes artificial skin | Russians replicate DALL-E | null | 2021-11-12T09:29:59Z | null | mlnews #turing #reskin The latest and greatest from the Machine Learning world OUTLINE: 0:00 - Intro 0:15 - Sponsor: Weights ... | null | null | null |
W3mrgqtm5R4 | UCZHmQk67mSJgfCCTn7xBfew | [ML News] BLOOM: 176B Open-Source | Chinese Brain-Scale Computer | Meta AI: No Language Left Behind | null | 2022-07-27T20:22:22Z | null | mlnews #bloom #ai Today we look at all the recent giant language models in the AI world! OUTLINE: 0:00 - Intro 0:55 - BLOOM: ... | null | null | null |
iDulhoQ2pro | UCZHmQk67mSJgfCCTn7xBfew | Attention Is All You Need | null | 2017-11-28T08:04:38Z | null | https://arxiv.org/abs/1706.03762 Abstract: The dominant sequence transduction models are based on complex recurrent or ... | null | null | null |
TrLrBL1U8z0 | UCZHmQk67mSJgfCCTn7xBfew | [ML News] GitHub Copilot - Copyright, GPL, Patents & more | Brickit LEGO app | Distill goes on break | null | 2021-07-08T12:14:37Z | null | copilot #copyright #gpl GitHub and OpenAI release Copilot, an AI-powered code autocomplete system that can generate entire ... | null | null | null |
P_xeshTnPZg | UCZHmQk67mSJgfCCTn7xBfew | Perceiver: General Perception with Iterative Attention (Google DeepMind Research Paper Explained) | null | 2021-03-22T17:26:39Z | null | perceiver #deepmind #transformer Inspired by the fact that biological creatures attend to multiple modalities at the same time, ... | null | null | null |
rl4nUngiR2k | UCZHmQk67mSJgfCCTn7xBfew | BLEURT: Learning Robust Metrics for Text Generation (Paper Explained) | null | 2020-06-07T14:11:39Z | null | Proper evaluation of text generation models, such as machine translation systems, requires expensive and slow human ... | null | null | null |
19Q-vMd9bYg | UCZHmQk67mSJgfCCTn7xBfew | Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment (Paper Explained) | null | 2021-09-27T13:22:14Z | null | neurips #peerreview #nips The peer-review system at Machine Learning conferences has come under much criticism over the last ... | null | null | null |
rHQPBqMULXo | UCZHmQk67mSJgfCCTn7xBfew | Machine Learning PhD Survival Guide 2021 | Advice on Topic Selection, Papers, Conferences & more! | null | 2021-03-30T14:09:13Z | null | machinelearning #phd #howto This video is advice for new PhD students in the field of Machine Learning in 2021 and after. | null | null | null |
GWt6Fu05voI | UCZHmQk67mSJgfCCTn7xBfew | [Classic] Deep Residual Learning for Image Recognition (Paper Explained) | null | 2020-07-14T13:00:04Z | null | ai #research #resnet ResNets are one of the cornerstones of modern Computer Vision. Before their invention, people were not ... | null | null | null |
pH2jZun8MoY | UCZHmQk67mSJgfCCTn7xBfew | Involution: Inverting the Inherence of Convolution for Visual Recognition (Research Paper Explained) | null | 2021-05-08T10:17:15Z | null | involution #computervision #attention Convolutional Neural Networks (CNNs) have dominated computer vision for almost a ... | null | null | null |
mIZLGBD99iU | UCZHmQk67mSJgfCCTn7xBfew | Did Google's LaMDA chatbot just become sentient? | null | 2022-06-15T22:16:05Z | null | lamda #google #ai Google engineer Blake Lemoine was put on leave after releasing proprietary information: An interview with the ... | null | null | null |
RJwPN4qNi_Y | UCZHmQk67mSJgfCCTn7xBfew | [ML News] Google's 540B PaLM Language Model & OpenAI's DALL-E 2 Text-to-Image Revolution | null | 2022-04-10T09:08:24Z | null | mlnews #palm #dalle2 Google releases PaLM and OpenAI releases DALL-E 2 (and more news). Sponsor: Weights & BIases Start ... | null | null | null |
YQ2QtKcK2dA | UCZHmQk67mSJgfCCTn7xBfew | The Man behind Stable Diffusion | null | 2022-08-13T10:52:40Z | null | stablediffusion #ai #stabilityai An interview with Emad Mostaque, founder of Stability AI. OUTLINE: 0:00 - Intro 1:30 - What is ... | null | null | null |
CRlN-cYFxTk | UCZHmQk67mSJgfCCTn7xBfew | NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (ML Research Paper Explained) | null | 2021-04-19T15:21:29Z | null | nerf #neuralrendering #deeplearning View Synthesis is a tricky problem, especially when only given a sparse set of images as an ... | null | null | null |
_Z9ZP1eiKsI | UCZHmQk67mSJgfCCTn7xBfew | Curiosity-driven Exploration by Self-supervised Prediction | null | 2018-03-18T12:07:52Z | null | https://arxiv.org/abs/1705.05363 Authors: Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, Trevor Darrell Abstract: In many ... | null | null | null |
a-VQfQqIMrE | UCZHmQk67mSJgfCCTn7xBfew | mixup: Beyond Empirical Risk Minimization (Paper Explained) | null | 2020-05-27T14:13:12Z | null | Neural Networks often draw hard boundaries in high-dimensional space, which makes them very brittle. Mixup is a technique that ... | null | null | null |
af6WPqvzjjk | UCZHmQk67mSJgfCCTn7xBfew | [ML News] Text-to-Image models are taking over! (Imagen, DALL-E 2, Midjourney, CogView 2 & more) | null | 2022-08-07T12:54:12Z | null | mlnews #dalle #imagen All things text-to-image models like DALL-E and Imagen! OUTLINE: 0:00 - Intro 0:30 - Imagen: Google's ... | null | null | null |
vLTmnaMpQCs | UCZHmQk67mSJgfCCTn7xBfew | Learning to summarize from human feedback (Paper Explained) | null | 2020-09-07T11:56:46Z | null | summarization #gpt3 #openai Text Summarization is a hard task, both in training and evaluation. Training is usually done ... | null | null | null |
We20YSAJZSE | UCZHmQk67mSJgfCCTn7xBfew | MuZero: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | null | 2019-11-21T12:23:05Z | null | MuZero harnesses the power of AlphaZero, but without relying on an accurate environment model. This opens up planning-based ... | null | null | null |
6dvcYx9hcbE | UCZHmQk67mSJgfCCTn7xBfew | Spurious normativity enhances learning of compliance and enforcement behavior in artificial agents | null | 2022-03-08T16:24:37Z | null | deepmind #rl #society This is an in-depth paper review, followed by an interview with the papers' authors! Society is ruled by ... | null | null | null |
PuOASKpiThY | UCZHmQk67mSJgfCCTn7xBfew | I'm taking a break | null | 2021-07-11T12:51:06Z | null | I'll be back, don't worry :) Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: ... | null | null | null |
OioFONrSETc | UCZHmQk67mSJgfCCTn7xBfew | Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift | null | 2019-02-02T15:16:38Z | null | https://arxiv.org/abs/1502.03167 Abstract: Training Deep Neural Networks is complicated by the fact that the distribution of each ... | null | null | null |
M2-BE5JotjA | UCZHmQk67mSJgfCCTn7xBfew | PAIR AI Explorables | Is the problem in the data? Examples on Fairness, Diversity, and Bias. | null | 2021-04-07T12:38:57Z | null | In the recurring debate about bias in Machine Learning models, there is a growing argument saying that "the problem is not in the ... | null | null | null |
7K4Z8RqjWIk | UCZHmQk67mSJgfCCTn7xBfew | MLP-Mixer: An all-MLP Architecture for Vision (Machine Learning Research Paper Explained) | null | 2021-05-06T11:53:49Z | null | mixer #google #imagenet Convolutional Neural Networks have dominated computer vision for nearly 10 years, and that might ... | null | null | null |
dND-7llwrpw | UCZHmQk67mSJgfCCTn7xBfew | Grokking: Generalization beyond Overfitting on small algorithmic datasets (Paper Explained) | null | 2021-10-06T20:12:52Z | null | grokking #openai #deeplearning Grokking is a phenomenon when a neural network suddenly learns a pattern in the dataset and ... | null | null | null |
T35ba_VXkMY | UCZHmQk67mSJgfCCTn7xBfew | DETR: End-to-End Object Detection with Transformers (Paper Explained) | null | 2020-05-28T15:09:01Z | null | Object detection in images is a notoriously hard task! Objects can be of a wide variety of classes, can be numerous or absent, they ... | null | null | null |
_8KNb5iqblE | UCZHmQk67mSJgfCCTn7xBfew | Longformer: The Long-Document Transformer | null | 2020-04-20T14:07:56Z | null | The Longformer extends the Transformer by introducing sliding window attention and sparse global attention. This allows for the ... | null | null | null |
T9XSU0pKX2E | UCZHmQk67mSJgfCCTn7xBfew | OpenAI CLIP: Connecting Text and Images (Paper Explained) | null | 2021-01-12T14:52:03Z | null | ai #openai #technology Paper Title: Learning Transferable Visual Models From Natural Language Supervision CLIP trains on 400 ... | null | null | null |
h3ij3F3cPIk | UCZHmQk67mSJgfCCTn7xBfew | DINO: Emerging Properties in Self-Supervised Vision Transformers (Facebook AI Research Explained) | null | 2021-05-01T19:53:03Z | null | dino #facebook #selfsupervised Self-Supervised Learning is the final frontier in Representation Learning: Getting useful features ... | null | null | null |
dmH1ZpcROMk | UCZHmQk67mSJgfCCTn7xBfew | Reward Is Enough (Machine Learning Research Paper Explained) | null | 2021-05-31T13:27:21Z | null | reinforcementlearning #deepmind #agi What's the most promising path to creating Artificial General Intelligence (AGI)? This paper ... | null | null | null |
rFwQDDbYTm4 | UCZHmQk67mSJgfCCTn7xBfew | [Classic] Playing Atari with Deep Reinforcement Learning (Paper Explained) | null | 2020-07-26T13:00:23Z | null | ai #dqn #deepmind After the initial success of deep neural networks, especially convolutional neural networks on supervised ... | null | null | null |
aX8phGhG8VQ | UCZHmQk67mSJgfCCTn7xBfew | Does GPT-3 lie? - Misinformation and fear-mongering around the TruthfulQA dataset | null | 2021-09-21T13:43:28Z | null | gpt-3 #truth #conspiracy A new benchmark paper has created quite an uproar in the community. TruthfulQA is a dataset of 817 ... | null | null | null |
Elxn8rS88bI | UCZHmQk67mSJgfCCTn7xBfew | Pretrained Transformers as Universal Computation Engines (Machine Learning Research Paper Explained) | null | 2021-03-16T18:15:38Z | null | universalcomputation #pretrainedtransformers #finetuning Large-scale pre-training and subsequent fine-tuning is a common ... | null | null | null |
jltgNGt8Lpg | UCZHmQk67mSJgfCCTn7xBfew | Neural Ordinary Differential Equations | null | 2019-02-19T05:12:20Z | null | https://arxiv.org/abs/1806.07366 Abstract: We introduce a new family of deep neural network models. Instead of specifying a ... | null | null | null |
eYgPJ_7BkEw | UCZHmQk67mSJgfCCTn7xBfew | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence | null | 2020-04-15T18:01:56Z | null | FixMatch is a simple, yet surprisingly effective approach to semi-supervised learning. It combines two previous methods in a ... | null | null | null |
DEh1GR0t29k | UCZHmQk67mSJgfCCTn7xBfew | Peer Review is still BROKEN! The NeurIPS 2021 Review Experiment (results are in) | null | 2021-11-25T18:26:52Z | null | neurips #peerreview #machinelearning A look at the results of the 2021 NeurIPS peer review experiment. | null | null | null |
hsOMCwvFv80 | UCZHmQk67mSJgfCCTn7xBfew | I'm out of Academia | null | 2021-05-04T13:25:48Z | null | machinelearning #ai #phd Done with my PhD in Machine Learning at ETH Zurich. On to new lands! Links: TabNine Code ... | null | null | null |
VgqHitvEbR0 | UCZHmQk67mSJgfCCTn7xBfew | [Rant] REVIEWER #2: How Peer Review is FAILING in Machine Learning | null | 2020-08-18T11:16:07Z | null | ai #research #peerreview Machine Learning research is in dire straits as more people flood into the field and competent reviewers ... | null | null | null |
kOy49NqZeqI | UCZHmQk67mSJgfCCTn7xBfew | IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures | null | 2019-11-01T13:08:17Z | null | Policy Gradient RL on a massively distributed scale with theoretical guarantees! Abstract: In this work we aim to solve a large ... | null | null | null |
xbxe-x6wvRw | UCZHmQk67mSJgfCCTn7xBfew | [ML News] Stable Diffusion Takes Over! (Open Source AI Art) | null | 2022-09-19T00:06:24Z | null | stablediffusion #aiart #mlnews Stable Diffusion has been released and is riding a wave of creativity and collaboration. But not ... | null | null | null |
0PAiQ1jTN5k | UCZHmQk67mSJgfCCTn7xBfew | How to make your CPU as fast as a GPU - Advances in Sparsity w/ Nir Shavit | null | 2022-09-17T12:19:00Z | null | ai #sparsity #gpu Sparsity is awesome, but only recently has it become possible to properly handle sparse models at good ... | null | null | null |
rNkHjZtH0RQ | UCZHmQk67mSJgfCCTn7xBfew | NFNets: High-Performance Large-Scale Image Recognition Without Normalization (ML Paper Explained) | null | 2021-02-14T16:51:10Z | null | nfnets #deepmind #machinelearning Batch Normalization is a core component of modern deep learning. It enables training at ... | null | null | null |
dWGjoInRaAs | UCZHmQk67mSJgfCCTn7xBfew | [ML News] DeepMind fails to get independence from Google | null | 2021-05-26T20:24:08Z | null | deepmind #google #mlnews DeepMind has reportedly failed to negotiate for greater independence from Google/Alphabet. | null | null | null |
iAR8LkkMMIM | UCZHmQk67mSJgfCCTn7xBfew | Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity | null | 2021-01-22T19:38:10Z | null | ai #technology #switchtransformer Scale is the next frontier for AI. Google Brain uses sparsity and hard routing to massively ... | null | null | null |
_9aN1-0T8hg | UCZHmQk67mSJgfCCTn7xBfew | [ML News] AI models that write code (Copilot, CodeWhisperer, Pangu-Coder, etc.) | null | 2022-08-10T20:55:14Z | null | mlnews #ai #copilot OUTLINE: 0:00 - Intro 0:20 - Copilot Now Generally Available 3:20 - FOSS Org leaves GitHub 6:45 - Google's ... | null | null | null |
pwSnC8jlh50 | UCZHmQk67mSJgfCCTn7xBfew | [ML News] Meta's OPT 175B language model | DALL-E Mega is training | TorToiSe TTS fakes my voice | null | 2022-05-10T12:54:25Z | null | mlnews #dalle #gpt3 An inside look of what's happening in the ML world! Sponsor: Weights & Biases https://wandb.me/yannic ... | null | null | null |
-9evrZnBorM | UCZHmQk67mSJgfCCTn7xBfew | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | null | 2019-01-30T08:51:15Z | null | https://arxiv.org/abs/1810.04805 Abstract: We introduce a new language representation model called BERT, which stands for ... | null | null | null |
awyuuJoHawo | UCZHmQk67mSJgfCCTn7xBfew | Dream to Control: Learning Behaviors by Latent Imagination | null | 2020-04-03T12:46:59Z | null | Dreamer is a new RL agent by DeepMind that learns a continuous control task through forward-imagination in latent space. | null | null | null |
FNDVy_BR8aA | UCZHmQk67mSJgfCCTn7xBfew | Can Wikipedia Help Offline Reinforcement Learning? (Author Interview) | null | 2022-02-28T15:24:13Z | null | wikipedia #reinforcementlearning #languagemodels Original paper review here: https://youtu.be/XHGh19Hbx48 Machel Reid and ... | null | null | null |
yPjuAo53uNI | UCZHmQk67mSJgfCCTn7xBfew | [Rant] The Male Only History of Deep Learning | null | 2020-04-22T11:58:28Z | null | This casting of our field in terms of ideological narrow-sighted group-think is disgusting. Keep Science about ideas! | null | null | null |
a4VvcmqnkhY | UCZHmQk67mSJgfCCTn7xBfew | What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study (Paper Explained) | null | 2020-08-20T09:46:35Z | null | ai #research #machinelearning Online Reinforcement Learning is a flourishing field with countless methods for practitioners to ... | null | null | null |
dPsXxLyqpfs | UCZHmQk67mSJgfCCTn7xBfew | World Models | null | 2018-04-07T13:36:31Z | null | Authors: David Ha, Jürgen Schmidhuber Abstract: We explore building generative neural network models of popular ... | null | null | null |
q6Kyvy1zLwQ | UCZHmQk67mSJgfCCTn7xBfew | BERTology Meets Biology: Interpreting Attention in Protein Language Models (Paper Explained) | null | 2020-07-02T13:27:53Z | null | Proteins are the workhorses of almost all cellular functions and a core component of life. But despite their versatility, all proteins ... | null | null | null |
5skIqoO3ku0 | UCZHmQk67mSJgfCCTn7xBfew | OpenAI Embeddings (and Controversy?!) | null | 2022-02-07T20:10:42Z | null | mlnews #openai #embeddings COMMENTS DIRECTLY FROM THE AUTHOR (thanks a lot for reaching out Arvind :) ): 1. | null | null | null |
FC-R4MlIqrc | UCZHmQk67mSJgfCCTn7xBfew | [ML News] Cedille French Language Model | YOU Search Engine | AI Finds Profitable MEME TOKENS | null | 2021-11-18T14:54:52Z | null | mlnews #cedille #wmt Only the greatest of news from the world of Machine Learning. OUTLINE: 0:00 - Sponsor: Weights & Biases ... | null | null | null |
56GW1IlWgMg | UCZHmQk67mSJgfCCTn7xBfew | Learning model-based planning from scratch | null | 2017-08-09T06:02:41Z | null | https://arxiv.org/abs/1707.06170 Abstract: Conventional wisdom holds that model-based planning is a powerful approach to ... | null | null | null |
AJwnbSP_rq8 | UCZHmQk67mSJgfCCTn7xBfew | GPT-NeoX-20B - Open-Source huge language model by EleutherAI (Interview w/ co-founder Connor Leahy) | null | 2022-02-04T16:49:40Z | null | eleuther #gptneo #gptj EleutherAI announces GPT-NeoX-20B, a 20 billion parameter open-source language model, inspired by ... | null | null | null |
fvctpYph8Pc | UCZHmQk67mSJgfCCTn7xBfew | Do ImageNet Classifiers Generalize to ImageNet? (Paper Explained) | null | 2020-04-27T12:42:47Z | null | Has the world overfitted to ImageNet? What if we collect another dataset in exactly the same fashion? This paper gives a ... | null | null | null |
ZVVnvZdUMUk | UCZHmQk67mSJgfCCTn7xBfew | The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | null | 2020-04-13T14:51:36Z | null | Stunning evidence for the hypothesis that neural networks work so well because their random initialization almost certainly ... | null | null | null |
xnChXNUNS2A | UCZHmQk67mSJgfCCTn7xBfew | [ML News] This AI completes Wikipedia! Meta AI Sphere | Google Minerva | GPT-3 writes a paper | null | 2022-07-31T13:33:13Z | null | mlnews #ai #minerva This episode is all about models that reason. OUTLINE: 0:00 - Intro 0:35 - Meta AI learns Wikipedia citations ... | null | null | null |
ZfDZRX3WiJg | UCZHmQk67mSJgfCCTn7xBfew | VirTex: Learning Visual Representations from Textual Annotations (Paper Explained) | null | 2020-06-12T17:28:15Z | null | Pre-training a CNN backbone for visual transfer learning has recently seen a big push into the direction of incorporating more data ... | null | null | null |
CA8JPbJ75tY | UCZHmQk67mSJgfCCTn7xBfew | CornerNet: Detecting Objects as Paired Keypoints (Paper Explained) | null | 2020-06-05T13:31:46Z | null | Many object detectors focus on locating the center of the object they want to find. However, this leaves them with the secondary ... | null | null | null |
wTzvKB6D_34 | UCZHmQk67mSJgfCCTn7xBfew | How far can we scale up? Deep Learning's Diminishing Returns (Article Review) | null | 2021-10-02T14:24:54Z | null | deeplearning #co2 #cost Deep Learning has achieved impressive results in the last years, not least due to the massive increases ... | null | null | null |
pZyxlf6l0N8 | UCZHmQk67mSJgfCCTn7xBfew | Thinking While Moving: Deep Reinforcement Learning with Concurrent Control | null | 2020-04-23T13:26:07Z | null | Classic RL "stops" the world whenever the Agent computes a new action. This paper considers a more realistic scenario where ... | null | null | null |
hMO6rbMAPew | UCZHmQk67mSJgfCCTn7xBfew | Adversarial Examples Are Not Bugs, They Are Features | null | 2019-05-14T13:45:57Z | null | Abstract: Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and ... | null | null | null |
WknN4E-y44E | UCZHmQk67mSJgfCCTn7xBfew | Research Conference ICML drops their acceptance rate | Area Chairs instructed to be more picky | null | 2021-05-11T17:47:17Z | null | icml #machinelearning #conference In a controversial move, ICML Area Chairs were instructed to raise the bar on acceptance to ... | null | null | null |
o75ybZ-6Uu8 | UCZHmQk67mSJgfCCTn7xBfew | Dreamer v2: Mastering Atari with Discrete World Models (Machine Learning Research Paper Explained) | null | 2021-02-19T16:11:18Z | null | dreamer #deeprl #reinforcementlearning Model-Based Reinforcement Learning has been lagging behind Model-Free RL on Atari ... | null | null | null |