Video ID (string) | Channel ID (string) | Title (string) | Time Created (string) | Time Published (string) | Duration (string) | Description (string) | Category (string) | Like Count (float64) | Dislike Count (float64)
---|---|---|---|---|---|---|---|---|---
I3na13AESjw | UCv83tO5cePwHMt1952IVVHw | How to use Color Histograms for Image Retrieval | 2022-07-11 07:01:31 UTC | 2022-07-13 16:22:08 UTC | 1864 seconds | Browsing, searching, and retrieving images has never been easy. Traditionally, many technologies relied on manually appending metadata to images and searching via this metadata. This approach works for datasets with high-quality annotation, but most datasets are too large for manual annotation.
That means any large image dataset must rely on Content-Based Image Retrieval (CBIR). Search with CBIR focuses on comparing the *content* of an image rather than its metadata. Content can be color, shapes, textures, or, with some of the latest advances in ML, the "human meaning" behind an image.
Color histograms represent one of the first CBIR techniques, allowing us to search through images based on their color profiles rather than metadata.
Pinecone article:
https://pinecone.io/learn/color-histograms
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:23 What are Color Histograms?
08:39 How to Build Color Histograms
16:56 Using OpenCV calcHist
20:36 Image Retrieval
27:37 Pros and Cons
30:40 Final Points | Science & Technology | 23 | 0 |
UzkdOg7wWmI | UCv83tO5cePwHMt1952IVVHw | Hugging Face just released *Diffusers* - for models like DALL-E 2 and Imagen! | 2022-07-23 21:33:08 UTC | 2022-07-26 15:27:46 UTC | 934 seconds | Hugging Face, of transformers fame, has created a whole new Python library for diffusion models! Diffusion models are a key component of models like OpenAI's DALL-E 2, Google's Imagen, and Midjourney's image generation service. Hugging Face Diffusers brings these models to a new level of accessibility (and open source!).
Article:
https://towardsdatascience.com/hugging-face-just-released-the-diffusers-library-846f32845e65
Friend Link (free access):
https://towardsdatascience.com/hugging-face-just-released-the-diffusers-library-846f32845e65?sk=9ec4027460defa1fd25178af9a55da13
Diffusers:
https://github.com/huggingface/diffusers
Discord:
https://discord.gg/c5QtDB9RAP
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
00:00 What are Diffusers?
01:55 Getting started
04:20 Prompt engineering
09:34 Testing other diffusers | Science & Technology | 61 | 0 |
szfG55juoJE | UCv83tO5cePwHMt1952IVVHw | How I work from anywhere | 2022-07-24 14:01:51 UTC | 2022-08-16 13:55:16 UTC | 767 seconds | Overview of how I deal with travel and work. Remote desk setup for staying as ergonomic and productive as possible, enjoy!
Links to products (mostly affiliate):
Laptop stand: https://amzn.to/3bZqMHM
Second screen: https://amzn.to/3w6IT5B
Cable bag (international): https://amzn.to/3QBH7S7
... or UK: https://amzn.to/3ps5lT2
Peak Design backpacks: https://www.peakdesign.com/products/everyday-backpack
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP | Science & Technology | 32 | 1 |
jjQetJtQDS4 | UCv83tO5cePwHMt1952IVVHw | Bag of *Visual* Words for Image Classification and Retrieval | 2022-08-02 20:39:30 UTC | 2022-08-03 13:00:35 UTC | 3367 seconds | In computer vision, bag of visual words (BoVW) is one of the pre-deep learning models used for building image embeddings. It allows us to retrieve images from a database that are similar to a given "query" image, perform object detection, and classify images.
Pinecone article:
https://www.pinecone.io/learn/bag-of-visual-words/
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP | Science & Technology | 33 | 0 |
989aKUVBfbk | UCv83tO5cePwHMt1952IVVHw | Fast intro to multi-modal ML with OpenAI's CLIP | 2022-08-11 06:17:14 UTC | 2022-08-11 13:03:08 UTC | 1374 seconds | OpenAI's CLIP is a "multi-modal" model capable of understanding the relationships and concepts between both text and images. As we'll see, CLIP is very capable, and when used via the Hugging Face library, could not be easier to work with.
Article:
https://towardsdatascience.com/quick-fire-guide-to-multi-modal-ml-with-openais-clip-2dad7e398ac0
Friend Link (free access):
https://towardsdatascience.com/quick-fire-guide-to-multi-modal-ml-with-openais-clip-2dad7e398ac0?sk=89bb2d8b8e583ed109d8a05e00366645
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
00:15 What is CLIP?
02:13 Getting started
05:38 Creating text embeddings
07:23 Creating image embeddings
10:26 Embedding a lot of images
15:08 Text-image similarity search
21:38 Alternative image and text search | Science & Technology | 31 | 0 |
c_u4AHNjOpk | UCv83tO5cePwHMt1952IVVHw | AlexNet and ImageNet Explained | 2022-08-23 22:13:25 UTC | 2022-08-24 13:00:22 UTC | 2180 seconds | Today's deep learning revolution traces back to the 30th of September, 2012. On this day, a Convolutional Neural Network (CNN) called AlexNet won the ImageNet 2012 challenge. AlexNet didn't just win; it dominated.
AlexNet was unlike the other competitors. This new model demonstrated unparalleled performance on the largest image dataset of the time, ImageNet. This event made AlexNet the first widely acknowledged, successful application of deep learning. It caught people's attention with a 9.8 percentage point advantage over the nearest competitor.
Until this point, deep learning was a nice idea that most deemed impractical. AlexNet showed that deep learning was more than a pipe dream, and the authors showed the world how to make it practical. Yet, the surge of deep learning that followed was not fueled solely by AlexNet. Indeed, without the huge ImageNet dataset, there would have been no AlexNet.
The future of AI was to be built on the foundations set by the ImageNet challenge and the novel solutions that enabled the synergy between ImageNet and AlexNet.
Pinecone article:
https://pinecone.io/learn/imagenet
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:06 Birth of Deep Learning
02:52 ImageNet
07:56 Lack of Readiness for Big Datasets
09:57 ImageNet Challenge (ILSVRC)
11:47 AlexNet
19:30 PYTORCH IMPLEMENTATION
19:55 Data Preprocessing
27:06 Class Prediction with AlexNet
31:50 Goldfish Results
34:27 Closing Notes | Science & Technology | 20 | 0 |
pfwBut7E60Q | UCv83tO5cePwHMt1952IVVHw | Ultra-efficient Classifier Fine-tuning with Vector Search | 2022-08-31 00:32:14 UTC | 2022-08-31 13:00:26 UTC | 1932 seconds | Learn how to use vector search to create highly targeted training for any classification model using a final linear classification layer. Easily fine-tune models in 10 minutes with less than 100 labeled examples.
Pinecone article:
https://pinecone.io/learn/classifier-train-vector-search/
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:14 Classification
02:49 Better Classifier Training
06:33 Classification as Vector Search
08:47 How Fine-tuning Works
10:50 Identifying Important Samples
12:39 CODE IMPLEMENTATION
13:13 Indexing
18:59 Fine-tuning the Classifier
27:37 Classifier Predictions
30:43 Closing Notes | Science & Technology | 49 | 0 |
-S20nblUuNw | UCv83tO5cePwHMt1952IVVHw | Hugging Face Datasets #1 - Hosting your datasets | 2022-09-09 12:52:32 UTC | 2022-09-09 14:18:34 UTC | 1382 seconds | Introduction to Hugging Face datasets, how it works, and how to host your own simple datasets (JSONL, TSV, CSV, etc) for free via Hugging Face Datasets Hub
Warp download:
https://app.warp.dev/referral/7G3N39
Git LFS Install:
Mac:
$ brew install git-lfs
Debian/Ubuntu:
$ curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
$ sudo apt-get install git-lfs
Windows:
Get install from https://github.com/git-lfs/git-lfs/releases
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
04:36 Creating our own Datasets
08:29 Creating JSONL for Hugging Face
15:15 Uploading Datasets for Git
19:10 LFS for Large Files
21:56 Closing Notes | Science & Technology | 14 | 0 |
fGwH2YoQkDM | UCv83tO5cePwHMt1952IVVHw | CLIP Explained | Multi-modal ML | 2022-09-14 23:08:40 UTC | 2022-09-15 13:00:22 UTC | 2013 seconds | Language models (LMs) cannot rely on language alone. That is the idea behind the "Experience Grounds Language" paper, which proposes a framework to measure LMs' current and future progress. A key idea is that, beyond a certain threshold, LMs need other forms of data, such as visual input.
The next step beyond well-known language models like BERT, GPT-3, and T5 is "World Scope 3". In World Scope 3, we move from large text-only datasets to large multi-modal datasets. That is, datasets containing information from multiple forms of media, like *both* images and text.
The world, both digital and real, is multi-modal. We perceive the world as an orchestra of language, imagery, video, smell, touch, and more. This chaotic ensemble produces an inner state, our "model" of the outside world.
AI must move in the same direction. Even specialist models that focus on language or vision must, at some point, have input from the other modalities. How can a model fully understand the concept of the word "person" without *seeing* a person?
OpenAI's Contrastive Language-Image Pre-training (CLIP) is a World Scope 3 model. It can comprehend concepts in both text and image and even connect concepts between the two modalities. In this video we will learn about multi-modality, how CLIP works, and how to use CLIP for different use cases like encoding, classification, and object detection.
Pinecone article:
https://pinecone.io/learn/clip/
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP | Science & Technology | 50 | 1 |
ODdKC30dT8c | UCv83tO5cePwHMt1952IVVHw | Hugging Face Datasets #2 - Dataset Builder Scripts | 2022-09-23 14:06:51 UTC | 2022-09-23 14:45:22 UTC | 1404 seconds | How to work with dataset builder scripts, intro to the download manager, and Apache Arrow datatypes used in Hugging Face Datasets.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
00:49 Creating Compressed Files
02:41 Creating Dataset Build Script
04:49 Download Manager
08:59 Finishing Split Generator
10:13 Generate Examples Method
14:47 Add Dataset to Hugging Face
17:49 Apache Arrow Features
22:52 What's Next? | Science & Technology | 14 | 0 |
98POYg2HZqQ | UCv83tO5cePwHMt1952IVVHw | Zero-Shot Image Classification with OpenAI's CLIP | 2022-10-04 05:29:02 UTC | 2022-10-05 14:00:03 UTC | 1303 seconds | State-of-the-art (SotA) computer vision (CV) models are characterized by a *restricted* understanding of the visual world specific to their training data [1].
These models can perform *very well* on specific tasks and datasets, but they do not generalize well. They cannot handle new classes or images beyond the domain they have been trained with.
Ideally, a CV model should learn the contents of images without excessive focus on the specific labels it is initially trained to understand.
Fortunately, OpenAI's CLIP has proved itself as an incredibly flexible CV classification model that often requires *zero* retraining. In this chapter, we will explore CLIP in zero-shot image classification.
Pinecone article:
https://pinecone.io/learn/clip-classification/
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
Discord:
https://discord.gg/c5QtDB9RAP | Science & Technology | 12 | 0 |
Jk1YP4Y_U_0 | UCv83tO5cePwHMt1952IVVHw | Stoic Philosophy Text Generation with TensorFlow | 2020-04-19 11:33:45 UTC | 2020-04-19 13:52:43 UTC | 1859 seconds | Explanation of the key parts of an RNN text generator built in TensorFlow with Python.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
I've written a couple of Medium articles on this project, if you're interested check them out here:
Stoic Philosophy - Built by Algorithms
https://towardsdatascience.com/stoic-philosophy-built-by-algorithms-9cff7b91dcbd
Supercharged Prediction with Ensemble Learning
https://towardsdatascience.com/recurrent-ensemble-learning-caffdcd94092
Music used by Lakey Inspired.
1 - Blue Boi
2 - Falling
https://www.youtube.com/channel/UCOmy8wuTpC95lefU5d1dt2Q | People & Blogs | 10 | 0 |
gXqHd6-NKBo | UCv83tO5cePwHMt1952IVVHw | How to Build TensorFlow Pipelines with tf.data.Dataset | 2020-11-02 08:23:38 UTC | 2020-11-02 08:57:48 UTC | 1853 seconds | Link to updated version (without video freeze): https://youtu.be/f6XVfgJTbp4
An introduction to building better input pipelines for Machine Learning in TF2.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Link to tf.data API docs: https://www.tensorflow.org/guide/data | People & Blogs | 46 | 9 |
yYEPNla4tlQ | UCv83tO5cePwHMt1952IVVHw | Every New Feature in Python 3.10.0a2 | 2020-11-08 18:09:49 UTC | 2020-11-10 16:44:05 UTC | 883 seconds | Every new feature in the early release alpha 2 preview of Python 3.10
There is video lag 5:00 - 9:55 covering the Type Alias section (sorry!) - the audio is okay though
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5 | People & Blogs | 88 | 5 |
GYDFBfx8Ts8 | UCv83tO5cePwHMt1952IVVHw | How-to Build a Transformer for Language Classification in TensorFlow | 2020-11-19 09:57:27 UTC | 2020-11-19 12:20:35 UTC | 2299 seconds | Free NLP for Semantic Search Course:
https://www.pinecone.io/learn/nlp
How to build a transformer model for sentiment analysis (language classification) using HuggingFace's Transformers library in TensorFlow 2 with Python.
We cover the full process from downloading data all the way through to building and training the transformer model.
This is a multi-class classification problem using both TensorFlow and Transformers to build a multiclass sentiment classifier.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Article version is here:
https://betterprogramming.pub/build-a-natural-language-classifier-with-bert-and-tensorflow-4770d4442d41
Or here (free link if you don't have Medium membership):
https://betterprogramming.pub/build-a-natural-language-classifier-with-bert-and-tensorflow-4770d4442d41?sk=346cd4ce5ee019c400835588b56d8574
Article extract:
"High-performance transformer models like BERT and GPT-3 are transforming a huge array of previously menial, language-based tasks, into the work of a few clicks, saving a lot of time.
In most industries, the newest wave of language optimization is just getting started - taking its first baby steps. But these seedlings are widespread, and sprouting quickly.
Much of this adoption is thanks to the incredibly low barrier-to-entry. If you know the basics of TensorFlow or PyTorch, and take a little time to get to grips with the Transformers library - you're already halfway there.
With the Transformers library, it takes just three lines of code to initialize a cutting-edge ML model - a model built from the billions of research dollars spent by the likes of Google, Facebook, and OpenAI.
This article will take you through the steps to build a classification model that leverages the power of transformers, using Google's BERT.
Transformers
- Finding Models
- Initializing
- Bert Inputs and Outputs
Classification
- The Data
- Tokenization
- Data Prep
- Train-Validation Split
- Model Definition
- Train" | People & Blogs | 384 | 12 |
DgGFhQmfxHo | UCv83tO5cePwHMt1952IVVHw | How-to use the Kaggle API in Python | 2020-11-22 20:19:30 UTC | 2020-11-22 20:29:27 UTC | 462 seconds | Simple step-by-step tutorial covering the setup and use of the Kaggle API for downloading datasets using the Kaggle library in Python.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5 | People & Blogs | 121 | 6 |
YvVQgvAz9dY | UCv83tO5cePwHMt1952IVVHw | Language Generation with OpenAI's GPT-2 in Python | 2020-11-23 12:36:44 UTC | 2020-11-24 14:22:46 UTC | 498 seconds | Easy natural language generation with Transformers and PyTorch. We apply OpenAI's GPT-2 model to generate text in just a few lines of Python code.
Language generation is one of those natural language tasks that can really produce an incredible feeling of awe at how far the fields of machine learning and artificial intelligence have come.
GPT-1, 2, and 3 are OpenAI's top language models - well known for their ability to produce incredibly natural, coherent, and genuinely interesting language.
In this article, we will take a small snippet of text and learn how to feed that into a pre-trained GPT-2 model using PyTorch and Transformers to produce high-quality language generation in just eight lines of code. We cover:
PyTorch and Transformers
- Data
Building the Model
- Initialization
- Tokenization
- Generation
- Decoding
Results
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Medium Article:
https://towardsdatascience.com/text-generation-with-python-and-gpt-2-1fecbff1635b
Friend Link (free access):
https://towardsdatascience.com/text-generation-with-python-and-gpt-2-1fecbff1635b?sk=930367d835f15abb4ef3164f7791e1b1
Thumbnail background by gustavo centurion on Unsplash
https://unsplash.com/photos/O6fs4ablxw8 | People & Blogs | 133 | 1 |
egDIqQIjDCI | UCv83tO5cePwHMt1952IVVHw | Text Summarization with Google AI's T5 in Python | 2020-11-24 21:26:27 UTC | 2020-11-27 06:00:07 UTC | 419 seconds | Easy text summarization using Google AI's T5 model using HuggingFace transformers and PyTorch in Python.
Automatic text summarization allows us to shorten long pieces of text into easy-to-read, short snippets that still convey the most important and relevant information of the original text.
In this video, we'll build a simple but incredibly powerful text summarizer using Google's T5. We'll be using the PyTorch and HuggingFace's Transformers frameworks.
This is split into three parts:
1. Import and Initialization
2. Data and Tokenization
3. Summary Generation
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
You can read the article version of this on Medium here:
https://betterprogramming.pub/how-to-summarize-text-with-googles-t5-4dd1ae6238b6
(And for those of you without Medium membership, here's a free link):
https://betterprogramming.pub/how-to-summarize-text-with-googles-t5-4dd1ae6238b6?sk=740d3009282cb2c4f7478a0c073dedb3
Thumbnail background by gustavo centurion on Unsplash
https://unsplash.com/photos/O6fs4ablxw8 | People & Blogs | 115 | 1 |
DFtP1THE8fE | UCv83tO5cePwHMt1952IVVHw | How-to do Sentiment Analysis with Flair in Python | 2020-12-04 11:15:10 UTC | 2020-12-04 14:00:03 UTC | 848 seconds | Learn how to perform powerful sentiment analysis with no fine-tuning or pre-training required using the Flair NLP library in Python.
With the real-time information available to us on massive social media platforms like Twitter, we have all the data we could ever need to create these accurate and up-to-date sentiment metrics for different companies.
But then comes the question, how can our computer understand what this unstructured text data means?
That is where sentiment analysis comes in. Sentiment analysis is a particularly interesting branch of Natural Language Processing (NLP), which is used to rate the language used in a body of text.
Through sentiment analysis, we can take thousands of tweets about a company and judge whether they are generally positive or negative (the sentiment) in real-time!
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Medium article:
https://towardsdatascience.com/sentiment-analysis-for-stock-price-prediction-in-python-bed40c65d178
(Free link if you don't have Medium membership):
https://towardsdatascience.com/sentiment-analysis-for-stock-price-prediction-in-python-bed40c65d178?sk=1cbf33a5d1fd2ed841f9487972c1cbed
Thumbnail photo by Alexander London on Unsplash
https://unsplash.com/@alxndr_london | People & Blogs | 64 | 2 |
8o3jvkK2GGU | UCv83tO5cePwHMt1952IVVHw | Python Environment Setup for Machine Learning | 2020-12-23 13:50:07 UTC | 2020-12-23 13:53:02 UTC | 754 seconds | Everything you need for a Python environment set up for Machine Learning and Data Science!
Article:
https://towardsdatascience.com/how-to-setup-python-for-machine-learning-173cb25f0206
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Thumbnail background by Christian Wiediger on Unsplash
https://unsplash.com/@christianw | People & Blogs | 38 | 1 |
BYbJ_HH788U | UCv83tO5cePwHMt1952IVVHw | Functional API - TensorFlow Essentials #2 | 2020-12-28 16:41:11 UTC | 2020-12-29 10:04:40 UTC | 341 seconds | A look at the functional API method for building models in TensorFlow 2 for Python.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Thumbnail background by Darius Bashar on Unsplash
https://unsplash.com/@dariusbashar?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText | Education | 20 | 0 |
_8Bydxud1XU | UCv83tO5cePwHMt1952IVVHw | Training Parameters - TensorFlow Essentials #3 | 2020-12-28 19:30:23 UTC | 2020-12-29 23:37:57 UTC | 450 seconds | Learn how to set up model training parameters and compile the model before training.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Thumbnail background by Alex McCarthy on Unsplash
https://unsplash.com/@4lexmccarthy?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText | Education | 17 | 0 |
f6XVfgJTbp4 | UCv83tO5cePwHMt1952IVVHw | Input Data Pipelines - TensorFlow Essentials #4 | 2020-12-28 23:25:54 UTC | 2020-12-30 11:30:02 UTC | 751 seconds | Learn how to set up efficient and clean input data pipelines using tf.data.Dataset
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Thumbnail background by Daria Nepriakhina on Unsplash
https://unsplash.com/@epicantus?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText | Education | 54 | 0 |
MQD1yMnZ_jk | UCv83tO5cePwHMt1952IVVHw | Sequential Model - TensorFlow Essentials #1 | 2020-12-29 09:46:00 UTC | 2020-12-29 09:50:23 UTC | 391 seconds | Learn how to use the sequential model building approach in TensorFlow 2.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Background thumbnail by Aryan Dhiman on Unsplash
https://unsplash.com/@mylifeasaryan_?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText | Education | 84 | 1 |
KTFWNI0qL28 | UCv83tO5cePwHMt1952IVVHw | 6 of Python's Newest and Best Features (3.7-3.9) | 2021-01-12 23:31:26 UTC | 2021-01-12 23:58:12 UTC | 1084 seconds | A rundown of the six most recent, and coolest features added to Python in the past few years!
2018 brought us a plethora of new features with the release of Python 3.7, followed by 3.8 in 2019, and 3.9 in 2020.
Many of those changes were behind the scenes. Optimizations and upgrades that the vast majority of us will never notice, despite their benefits.
Others are more obvious, additions to syntax or functionality that can change how we write our code. But even these visible changes can be hard to keep up with.
In this video, we will run through the more apparent upgrades to provide a brief but hopefully invaluable refresher on everything new to Python from the past few years.
- Python 3.7
- Breakpoints
- Python 3.8
- Walrus Operator
- F-string '=' Specifier
- Positional-only Parameters
- Python 3.9
- More Type Hinting
- Dictionary Unions
Medium Article:
https://towardsdatascience.com/amazing-features-added-to-python-from-3-7-to-now-4f35f0bb1ea6
(Free access link):
https://towardsdatascience.com/amazing-features-added-to-python-from-3-7-to-now-4f35f0bb1ea6?sk=bda3cb7717caa969b81619f85191f241
Thumbnail background by Martin Sanchez on Unsplash:
https://unsplash.com/photos/4PDPLw1flgE
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5 | Education | 15 | 2 |
GyJtxd14DTc | UCv83tO5cePwHMt1952IVVHw | Novice to Advanced RegEx in Less-than 30 Minutes + Python | 2021-01-27 09:06:42 UTC | 2021-01-27 09:51:32 UTC | 1769 seconds | A full tutorial covering everything you need to know about Regular Expressions - an essential for anyone learning to code - and even more so for anyone interested in Natural Language Processing.
This video includes:
- metacharacters
- quantifiers
- capture groups
- using capture groups in Python
- character sets
- look-ahead and look-behind assertions
- negative look-ahead and look-behind assertions
- inline modifiers
- passing modifiers as function parameters in Python
- conditionals (if-else statements for RegEx)
- re.match
- re.search
- re.findall
We cover all of this in-depth in this tutorial, incl. examples all the way through on RegEx101 (an interactive debugging/regex building tool) and also in Python.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5 | Education | 239 | 8 |
1ZcXmjZtJJ8 | UCv83tO5cePwHMt1952IVVHw | Building a PlotLy $GME Chart in Python | 2021-02-02 13:38:16 UTC | 2021-02-07 13:24:45 UTC | 4492 seconds | A code-along video covering the coding process from imagination to Python.
Something a little different, I'm not overly keen on this format - it's pretty long - but I've recorded it and I think maybe this can be useful for a few of you.
I haven't prepared anything beforehand, this is just going into the coding process with a rough outline of wanting to build a stock chart for GME (GameStop) and adding a few technical indicators - to get more familiar with PlotLy and the AlphaVantage API.
So, it's a weird one, but I hope a few of you enjoy it - thanks :)
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5 | Education | 10 | 0 |
ZIRmXkHp0-c | UCv83tO5cePwHMt1952IVVHw | How to Build Custom Q&A Transformer Models in Python | 2021-02-09 20:42:56 UTC | 2021-02-12 13:30:03 UTC | 4216 seconds | In this video, we will learn how to take a pre-trained transformer model and train it for question-and-answering. We will be using the HuggingFace transformers library with the PyTorch implementation of models in Python.
Transformers are one of the biggest developments in Natural Language Processing (NLP) and learning how to use them properly is basically a data science superpower - they're genuinely amazing I promise!
I hope you enjoy the video :)
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Medium article:
https://towardsdatascience.com/the-ultimate-performance-metric-in-nlp-111df6c64460
(Free link):
https://towardsdatascience.com/how-to-fine-tune-a-q-a-transformer-86f91ec92997?sk=9344fd51afe71a0905db833d0183d436
Code:
https://gist.github.com/jamescalam/55daf50c8da9eb3a7c18de058bc139a3
Photo in thumbnail by Lorenzo Herrera on Unsplash
https://unsplash.com/@lorenzoherrera | Education | 163 | 5 |
FdjVoOf9HN4 | UCv83tO5cePwHMt1952IVVHw | How-to Use The Reddit API in Python | 2021-02-12 11:36:48 UTC | 2021-02-12 12:02:48 UTC | 1401 seconds | Learn how to use the Reddit API in Python, including setup, authorization, and pulling data from subreddits.
Reddit API docs:
https://www.reddit.com/dev/api/
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Medium article:
https://towardsdatascience.com/how-to-use-the-reddit-api-in-python-5e05ddfd1e5c
Free link:
https://towardsdatascience.com/how-to-use-the-reddit-api-in-python-5e05ddfd1e5c?sk=0295f297c1365bee7cc7a32bdff21b61
Extract from article:
"Reddit is a huge ecosystem brimming with data that is readily available at our very fingertips. As a data-minded person, I wanted to take advantage of this and perform some analysis using this vast repository of open-source data.
Initially, it turned out that getting to grips with Reddit's API wasn't as clear-cut as expected - despite being a straightforward process, it can be a little confusing at first.
So, after figuring everything out, I wrote this article - which I hope will help a few of you to get familiar with using the Reddit API in Python. We will cover:
Getting Access
Making Requests
- Reading the Data
- Streaming New Posts
Parameters
Getting Access
First, we need access. Unlike most popular services, the Reddit API was somewhat difficult to figure out initially. There are several steps:
1. Go to App Preferences and click create another app... at the bottom.
2. Fill out the required details, make sure to select script - and click create app.
3. Make a note of the personal use script and secret tokens.
4. Request a temporary OAuth token from Reddit. We need our username and password for this.
5. Add headers=headers to every request. The OAuth token will expire after ~2 hours, and a new one will need to be requested.
"
And so on, check it out if you're interested in reading (rather than watching).
Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Education | 627 | 11 |
scJsty_DR3o | UCv83tO5cePwHMt1952IVVHw | How to Build Q&A Models in Python (Transformers) | 2021-02-17 21:03:29 UTC | 2021-02-19 15:00:21 UTC | 1189 seconds | In this video we'll cover how to build a question-answering model in Python using HuggingFace's Transformers.
You will need to install the transformers library with:
pip install transformers
Alongside either TensorFlow or PyTorch (to follow this video exactly you will need PyTorch). To install TensorFlow just type:
pip install tensorflow
OR
conda install tensorflow
And for PyTorch follow the instructions under 'Install PyTorch' here:
https://pytorch.org/
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Link to Q&A fine-tuning video:
https://youtu.be/ZIRmXkHp0-c
You can find the Medium article link below here:
https://towardsdatascience.com/question-and-answering-with-bert-6ef89a78dac | Education | 151 | 1 |
QJq9RTp_OVE | UCv83tO5cePwHMt1952IVVHw | How-to Decode Outputs From NLP Models (Python) | 2021-02-21 18:02:42 UTC | 2021-02-24 15:00:10 UTC | 577 seconds | In this video, we will cover three ways to decode the output probabilities from NLP models - greedy search, random sampling, and beam search.
Learning how to decode outputs can make a huge difference in diagnosing model issues and improving text output quality - and as an added bonus it's super easy.
One of the often-overlooked parts of sequence generation in natural language processing (NLP) is how we select our output tokens - otherwise known as decoding.
You may be thinking - we select a token/word/character based on the probability of each token assigned by our model.
This is half-true - in language-based tasks, we typically build a model which outputs a set of probabilities to an array where each value in that array represents the probability of a specific word/token.
At this point, it might seem logical to select the token with the highest probability. Well, not really - this can create some unforeseen consequences - as we will see soon.
When we are selecting a token in machine-generated text, we have a few alternative methods for performing this decode - and options for modifying the exact behavior too.
In this video we will explore three different methods for selecting our output token, these are:
- Greedy Decoding
- Random Sampling
- Beam Search
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Link to the article version on Medium:
https://towardsdatascience.com/the-three-decoding-methods-for-nlp-23ca59cb1e9d
Free link (if you don't have membership):
https://towardsdatascience.com/the-three-decoding-methods-for-nlp-23ca59cb1e9d?sk=64fbb0204c174dc520af027a69f88030 | Education | 28 | 0 |
TCZgXFPNnbc | UCv83tO5cePwHMt1952IVVHw | Identify Stocks on Reddit with SpaCy (NER in Python) | 2021-03-01 21:47:29 UTC | 2021-03-03 14:27:48 UTC | 1307 seconds | We will learn how to process unstructured text data from Reddit and extract organization names so that any further analysis is automatically classified and results assigned to the correct stocks.
Organizations are mentioned in each subreddit in a variety of formats. Typically we will find two formats:
- Organization name, e.g. Tesla/Tesla Motors
- Ticker symbol, e.g. TSLA, tsla, or $TSLA
We also need to be able to differentiate between tickers and other abbreviations/slang - some of these are unclear, like AI (which can mean both artificial intelligence and the ticker symbol for C3.ai).
So, we need a reasonably competent NER process to accurately classify our data.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Reddit API video: https://youtu.be/FdjVoOf9HN4
/r/investing data: https://github.com/jamescalam/transformers/blob/main/course/named_entity_recognition/data/reddit_investing.csv
Medium article: https://towardsdatascience.com/ner-for-extracting-stock-mentions-on-reddit-aa604e577be
(Free version if you don't have Medium membership): https://towardsdatascience.com/ner-for-extracting-stock-mentions-on-reddit-aa604e577be?sk=d16305d40b18e7955a0665633182d2b4
Thanks for watching! | Education | 33 | 0 |
yDGo9z_RlnE | UCv83tO5cePwHMt1952IVVHw | Sentiment Analysis on ANY Length of Text With Transformers (Python) | 2021-03-10 08:15:21 UTC | 2021-03-10 13:15:03 UTC | 1630 seconds | The de-facto standard in many natural language processing (NLP) tasks nowadays is to use a transformer. Text generation? Transformer. Question-and-answering? Transformer. Language classification? Transformer!
However, one of the problems with many of these models (a problem that is not just restricted to transformer models) is that we cannot process long pieces of text.
Almost every article I write on Medium contains 1000+ words, which, when tokenized for a transformer model like BERT, will produce 1000+ tokens. BERT (and many other transformer models) will consume 512 tokens max - truncating anything beyond this length.
Although I think you may struggle to find value in processing my Medium articles, the same applies to many useful data sources - like news articles or Reddit posts.
We will take a look at how we can work around this limitation. In this article, we will find the sentiment for long posts from the /r/investing subreddit. This video will cover:
High-Level Approach
Getting Started
- Data
- Initialization
Tokenization
Preparing The Chunks
- Split
- CLS and SEP
- Padding
- Reshaping For BERT
Making Predictions
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Here's a link to the Medium article:
https://towardsdatascience.com/how-to-apply-transformers-to-any-length-of-text-a5601410af7f
And a free access link if you don't have Medium membership:
https://towardsdatascience.com/how-to-apply-transformers-to-any-length-of-text-a5601410af7f?sk=d4e717eb2ff31fb27ea67019bbb63ad6 | Education | 111 | 2 |
9Od9-DV9kd8 | UCv83tO5cePwHMt1952IVVHw | Unicode Normalization for NLP in Python | 2021-03-16 09:27:24 UTC | 2021-03-17 13:30:00 UTC | 927 seconds | βπ -π ππ ππ π₯ππππ£ π£ππππ₯ ππππ π¨π π¦ππ ππ§ππ£ π¦π€π π₯πππ€π ππππ πͺπππ ππ ππ₯ π§ππ£ππππ₯π€. πππ π¨π π£π€π₯ π₯ππππ, ππ€ ππ πͺπ π¦ ππ πππͺ ππ π£π π π βπβ πππ πͺπ π¦ πππ§π ππππ£πππ₯ππ£π€ ππππ π₯πππ€ ππ πͺπ π¦π£ πππ‘π¦π₯, πͺπ π¦π£ π₯ππ©π₯ ππππ πππ€ ππ ππ‘πππ₯πππͺ π¦ππ£πππππππ.
We also find that text like this is incredibly common - particularly on social media.
Another pain-point comes from diacritics (the little glyphs in characters like Ç and é) that you'll find in almost every European language.
These characters have a hidden property that can trip up any NLP model - take a look at the Unicode for two versions of Ç:
Latin capital letter C with cedilla: \u00C7
Latin capital letter C + combining cedilla: \u0043\u0327
Both are completely different, despite rendering as the same character.
To deal with all of these text variants we need to use Unicode normalization - which we will cover in this video.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Medium article:
https://towardsdatascience.com/what-on-earth-is-unicode-normalization-56c005c55ad0
Friend link (free access):
https://towardsdatascience.com/what-on-earth-is-unicode-normalization-56c005c55ad0?sk=0cd19a9ad9f5d948b33179bab3c3b7cd | Education | 43 | 0 |
2qJavL-VX9Y | UCv83tO5cePwHMt1952IVVHw | The NEW Match-Case Statement in Python 3.10 | 2021-03-17 20:37:52 UTC | 2021-03-19 16:00:03 UTC | 1088 seconds | Python 3.10 is beginning to fill out with plenty of fascinating new features. One of those, in particular, caught my attention - structural pattern matching - or as most of us will know it, switch/case statements.
Switch-statements have been absent from Python despite being a common feature of most languages. Python is leapfrogging ahead of those languages by introducing the match-case statement as a switch-case v2.0.
Back in 2006, PEP 3103 was raised, recommending the implementation of a switch-case statement. However, after a poll at PyCon 2007 received no support for the feature, the Python devs dropped it.
Fast-forward to 2020, and Guido van Rossum, the creator of Python, committed the first documentation showing the new match-statements, which have been named Structural Pattern Matching, as found in PEP 634.
Let's take a look at how this new logic works.
Medium Article:
https://towardsdatascience.com/switch-case-statements-are-coming-to-python-d0caf7b2bfd3
Friend Link (free access):
https://towardsdatascience.com/switch-case-statements-are-coming-to-python-d0caf7b2bfd3?sk=363e0f7696502647e007f91910b4c817
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
00:00 Intro
00:58 Switch-Case
02:37 Flow of Logic
03:21 Second Example (Tuples)
05:00 Final Example Setup
11:30 Final Example If-Else Version
15:22 Final Example Match-Case Version | Education | 310 | 11 |
pjtnkCGElcE | UCv83tO5cePwHMt1952IVVHw | Multi-Class Language Classification With BERT in TensorFlow | 2021-03-24 17:51:01 UTC | 2021-03-25 16:00:15 UTC | 2604 seconds | Chapters for each section of the video (preprocessing, model build, prediction) are in the video timeline.
Transformers have been described as the fourth pillar of deep learning [1], alongside the three big neural net architectures of CNNs, RNNs, and MLPs.
However, from the perspective of natural language processing - transformers are much more than that. Since their introduction in 2017, they've come to dominate a majority of NLP benchmarks - and continue to impress daily.
What I'm saying is, transformers are damn cool. And with libraries like HuggingFace's transformers - it has become too easy to build incredible solutions with them.
So, what's not to love? Incredible performance paired with the ultimate ease-of-use.
In this video, we'll work through building a multi-class classification model using transformers - from start-to-finish.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Medium article:
https://towardsdatascience.com/multi-class-classification-with-transformers-6cf7b59a033a
Free access:
https://towardsdatascience.com/multi-class-classification-with-transformers-6cf7b59a033a?sk=544872025c2283c54cf4294814b8cae3
Link to Kaggle video:
https://youtu.be/DgGFhQmfxHo
[1] Fourth Pillar of AI:
https://ark-invest.com/articles/analyst-research/transformers-comprise-the-fourth-pillar-of-deep-learning/
00:00 Intro
01:21 Pulling Data
01:47 Preprocessing
14:33 Data Input Pipeline
24:14 Defining Model
33:29 Model Training
35:36 Saving and Loading Models
37:37 Making Predictions | Education | 264 | 1 |
JkeNVaiUq_c | UCv83tO5cePwHMt1952IVVHw | How to Build Python Packages for Pip | 2021-04-02 14:51:14 UTC | 2021-04-02 15:19:32 UTC | 1267 seconds | The most powerful feature of Python is its community. Almost every use-case out there has a package built specifically for it.
Need to send mobile/email alerts? pip install knockknock - Build ML apps? pip install streamlit - Bored of your terminal? pip install colorama - It's too easy!
I know this is obvious, but those libraries didn't magically appear. For each package, there is a person, or many persons, that actively developed and deployed that package.
Every single one.
All 300K+ of them.
That is why Python is Python, the level of support is phenomenal - mindblowing.
In this video, we will learn how to build our own packages and add them to the Python Package Index (PyPI). Afterward, we will be able to install our packages using pip install!
GitHub Repo:
https://github.com/jamescalam/aesthetic_ascii
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Medium Article:
https://towardsdatascience.com/how-to-package-your-python-code-df5a7739ab2e
Here's a free link:
https://towardsdatascience.com/how-to-package-your-python-code-df5a7739ab2e?sk=04d9f67c0654445bbcbbf6825f535900 | Education | 390 | 11 |
4Jmq28RQ3hU | UCv83tO5cePwHMt1952IVVHw | How-to Structure a Q&A ML App | 2021-04-09 15:02:44 UTC | 2021-04-09 15:22:50 UTC | 585 seconds | Stoic Q&A App Playlist: https://www.youtube.com/playlist?list=PLIUOU7oqGTLixb-CatMxNCO-mJioMmZEB
I'm planning on doing something different, a series of videos where we work through the steps - from start-to-finish - of (attempting) to build a Q&A web app that answers our questions with Stoic answers.
In this video, I'm outlining the idea and describing the high-level setup that I think we'll need to put together. It should be cool!
We'll be using the Haystack framework for 'Q&A at scale', which uses Hugging Face transformers under the hood, and the Elasticsearch document store.
Find the repo here:
https://github.com/jamescalam/aurelius
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5 | Education | 46 | 0 |
Vwq7Ucp9UCw | UCv83tO5cePwHMt1952IVVHw | How to Index Q&A Data With Haystack and Elasticsearch | 2021-04-11 21:30:32 UTC | 2021-04-12 15:00:11 UTC | 807 seconds | Stoic Q&A App Playlist: https://www.youtube.com/playlist?list=PLIUOU7oqGTLixb-CatMxNCO-mJioMmZEB
The second video in 'Building a Stoic Q&A App' - here we're setting up Elasticsearch and Haystack to store the data (Meditations) ready for retrieval when we ask our app questions.
Find the code here:
https://github.com/jamescalam/aurelius/tree/main/code/labs
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5 | Education | 79 | 3 |
DBsxUSUhfRg | UCv83tO5cePwHMt1952IVVHw | Q&A Document Retrieval With DPR | 2021-04-12 14:44:59 UTC | 2021-04-15 15:00:10 UTC | 890 seconds | Stoic Q&A App Playlist: https://www.youtube.com/playlist?list=PLIUOU7oqGTLixb-CatMxNCO-mJioMmZEB
The third video in building our Stoic Q&A app.
In open-domain question answering, we typically design a model architecture that contains a data source, retriever, and reader/generator.
The first of these components is typically a document store. The two most popular stores we use here are Elasticsearch and FAISS.
Next up is our retriever - the topic of this video. The job of the retriever is to filter through our document store for relevant chunks of information (the documents) and pass them to the reader/generator model.
DPR (dense passage retriever) is a dense vector retriever that is trained on question-context pairs. Encoding both accordingly - enabling super accurate similarity indexing.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
If you're interested in learning more about DPR, I wrote about it on Medium here:
https://towardsdatascience.com/how-to-create-an-answer-from-a-question-with-dpr-d76e29cc5d60
(Free link):
https://towardsdatascience.com/how-to-create-an-answer-from-a-question-with-dpr-d76e29cc5d60?sk=1bdd7c1bff80bf51410962691c690c69
Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Education | 57 | 0 |
QrzHImDEq_w | UCv83tO5cePwHMt1952IVVHw | How to Use Type Annotations in Python | 2021-04-23 21:44:38 UTC | 2021-04-27 14:53:25 UTC | 907 seconds | Type annotations - also known as type signatures - are used to indicate the datatypes of variables and input/outputs of functions and methods.
In many languages, datatypes are explicitly stated. In these languages, if you don't declare your datatype - the code will not run.
Type annotations have a long and convoluted history with Python, going all the way back to the first release of Python 3 with the initial implementation of function annotations.
Type annotations in Python are not make-or-break like in other languages (like C). They're optional chunks of syntax that we can add to make our code more explicit.
Erroneous type annotations will do nothing more than highlight the incorrect annotation in our code editor - no errors are ever raised due to annotations.
So, if type annotations are not enforced, why use them?
Well, as we touched upon already - declaring types makes our code more explicit, and if done well, easier to read - both for ourselves and others.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Read the Medium article here:
https://towardsdatascience.com/type-annotations-in-python-d90990b172dc
Here's a free link:
https://towardsdatascience.com/type-annotations-in-python-d90990b172dc?sk=29bc29ab5478a842363963b421781b47
Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
00:00 Intro
00:55 Datatypes Example in C
2:53 Static and Dynamic Typed Languages
3:47 Type Annotations in Python
4:25 How to Define Simple Types
6:04 IDE Warnings
8:20 More Complex Types
9:53 dict[str, int]
11:07 Multiple Types
11:38 Union Operator (Py 3.9)
12:34 Union Operator (Py 3.10)
13:21 Optional Operator | Education | 132 | 3 |
2tdLYIKPafc | UCv83tO5cePwHMt1952IVVHw | Extractive Q&A With Haystack and FastAPI in Python | 2021-04-26 22:03:55 UTC | 2021-04-29 15:00:04 UTC | 1058 seconds | Stoic Q&A App Playlist: https://www.youtube.com/playlist?list=PLIUOU7oqGTLixb-CatMxNCO-mJioMmZEB
In this video we work through building an extractive Q&A stack using Haystack, and embedding it within a FastAPI instance in Python.
We use the BERT transformer for our reader model, alongside Elasticsearch and the BM25 retriever algorithm.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Education | 71 | 1 |
jVPd7lEvjtg | UCv83tO5cePwHMt1952IVVHw | Sentence Similarity With Transformers and PyTorch (Python) | 2021-05-04 15:25:17 UTC | 2021-05-05 15:00:20 UTC | 1270 seconds | Easy mode: https://youtu.be/Ey81KfQ3PQU
All we ever seem to talk about nowadays are BERT this, BERT that. I want to talk about something else, but BERT is just too good - so this video will be about BERT for sentence similarity.
A big part of NLP relies on similarity in highly-dimensional spaces. Typically an NLP solution will take some text, process it to create a big vector/array representing said text - then perform several transformations.
It's highly-dimensional magic.
Sentence similarity is one of the clearest examples of how powerful highly-dimensional magic can be.
The logic is this:
- Take a sentence, convert it into a vector.
- Take many other sentences, and convert them into vectors.
- Find sentences that have the smallest distance (Euclidean) or smallest angle (cosine similarity) between them - more on that here.
- We now have a measure of semantic similarity between sentences - easy!
At a high level, there's not much else to it. But of course, we want to understand what is happening in a little more detail and implement this in Python too.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Medium article:
https://towardsdatascience.com/bert-for-measuring-text-similarity-eec91c6bf9e1
Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
If membership is too expensive - here's a free link:
https://towardsdatascience.com/bert-for-measuring-text-similarity-eec91c6bf9e1?sk=c0f2990b4660210b447e52d55bd0f4e5
Discord
https://discord.gg/c5QtDB9RAP
Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
00:00 Intro
00:16 BERT Base Network
1:11 Sentence Vectors and Similarity
1:47 The Data and Model
3:01 Two Approaches
3:16 Tokenizing Sentences
9:11 Creating last_hidden_state Tensor
11:08 Creating Sentence Vectors
17:53 Cosine Similarity | Education | 233 | 2 |
Ey81KfQ3PQU | UCv83tO5cePwHMt1952IVVHw | Sentence Similarity With Sentence-Transformers in Python | 2021-05-04 19:55:42 UTC | 2021-05-05 15:00:09 UTC | 370 seconds | Free NLP for Semantic Search Course:
https://www.pinecone.io/learn/nlp
Hard mode: https://youtu.be/jVPd7lEvjtg
All we ever seem to talk about nowadays are BERT this, BERT that. I want to talk about something else, but BERT is just too good - so this video will be about BERT for sentence similarity.
A big part of NLP relies on similarity in highly-dimensional spaces. Typically an NLP solution will take some text, process it to create a big vector/array representing said text - then perform several transformations.
It's highly-dimensional magic.
Sentence similarity is one of the clearest examples of how powerful highly-dimensional magic can be.
The logic is this:
- Take a sentence, convert it into a vector.
- Take many other sentences, and convert them into vectors.
- Find sentences that have the smallest distance (Euclidean) or smallest angle (cosine similarity) between them - more on that here.
- We now have a measure of semantic similarity between sentences - easy!
At a high level, there's not much else to it. But of course, we want to understand what is happening in a little more detail and implement this in Python too.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Medium article:
https://towardsdatascience.com/bert-for-measuring-text-similarity-eec91c6bf9e1
Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
If membership is too expensive - here's a free link:
https://towardsdatascience.com/bert-for-measuring-text-similarity-eec91c6bf9e1?sk=c0f2990b4660210b447e52d55bd0f4e5
Discord
https://discord.gg/c5QtDB9RAP
Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Education | 370 | 4 |
W8ZPQOcHnlE | UCv83tO5cePwHMt1952IVVHw | NER With Transformers and spaCy (Python) | 2021-05-09 20:57:10 UTC | 2021-05-11 15:00:28 UTC | 567 seconds | Named entity recognition (NER) consists of extracting 'entities' from text - what we mean by that is given the sentence:
"Apple reached an all-time high stock price of 143 dollars this January."
We might want to extract the key pieces of information - or 'entities' - and categorize each of those entities. Like so:
- Apple: Organization
- 143 dollars: Monetary Value
- this January: Date
For us humans, this is easy. But how can we teach a machine to distinguish between a granny smith apple and the Apple we trade on NASDAQ?
(No, we can't rely on the 'A' being capitalized...)
This is where NER comes in - using NER, we can extract keywords like apple and identify that it is, in fact, an organization - not a fruit.
The go-to library for NER is spaCy, which is incredible. But what if we added transformers to spaCy? Even better - we'll cover exactly that in this video.
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5 | Education | 120 | 2 |
q9NS5WpfkrU | UCv83tO5cePwHMt1952IVVHw | Training BERT #1 - Masked-Language Modeling (MLM) | 2021-05-19 09:31:26 UTC | 2021-05-19 14:51:39 UTC | 984 seconds | Free NLP for Semantic Search Course:
https://www.pinecone.io/learn/nlp
BERT, everyone's favorite transformer, costs Google ~$7K to train (and who knows how much in R&D costs). From there, we write a couple of lines of code to use the same model - all for free.
BERT has enjoyed unparalleled success in NLP thanks to two unique training approaches, masked-language modeling (MLM), and next sentence prediction (NSP).
MLM consists of giving BERT a sentence and optimizing the weights inside BERT to output the same sentence on the other side.
So we input a sentence and ask that BERT outputs the same sentence.
However, before we actually give BERT that input sentence - we mask a few tokens.
So we're actually inputting an incomplete sentence and asking BERT to complete it for us.
How to train BERT with MLM:
https://youtu.be/R6hcxMMOrPE
70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
Medium article:
https://towardsdatascience.com/masked-language-modelling-with-bert-7d49793e5d2c
Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
If membership is too expensive - here's a free link:
https://towardsdatascience.com/masked-language-modelling-with-bert-7d49793e5d2c?sk=17a19eca8dc8280bea4138802580ffe0
70% Discount on the NLP With Transformers in Python course:
https://www.udemy.com/course/nlp-with-transformers/?couponCode=MEDIUM3
Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Education | 277 | 3 |
R6hcxMMOrPE | UCv83tO5cePwHMt1952IVVHw | Training BERT #2 - Train With Masked-Language Modeling (MLM) | 2021-05-19 11:38:10 UTC | 2021-05-19 14:51:49 UTC | 1666 seconds | Free NLP for Semantic Search Course:
https://www.pinecone.io/learn/nlp
BERT has enjoyed unparalleled success in NLP thanks to two unique training approaches, masked-language modeling (MLM), and next sentence prediction (NSP).
In many cases, we might be able to take the pre-trained BERT model out-of-the-box and apply it successfully to our own language tasks.
But often, we might need to pre-train the model for a specific use case even further.
Further training with MLM allows us to tune BERT to better understand the particular use of language in a more specific domain.
Out-of-the-box BERT - great for general purpose use. Fine-tuned with MLM BERT - great for domain-specific use.
In this video, we'll cover exactly how to fine-tune BERT models using MLM in PyTorch.
Code:
https://github.com/jamescalam/transformers/blob/main/course/training/03_mlm_training.ipynb
Meditations data:
https://github.com/jamescalam/transformers/blob/main/data/text/meditations/clean.txt
Understanding MLM:
https://youtu.be/q9NS5WpfkrU
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Medium article:
https://towardsdatascience.com/masked-language-modelling-with-bert-7d49793e5d2c
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
π If membership is too expensive - here's a free link:
https://towardsdatascience.com/masked-language-modelling-with-bert-7d49793e5d2c?sk=17a19eca8dc8280bea4138802580ffe0
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Education | 223 | 1 |
1gN1snKBLP0 | UCv83tO5cePwHMt1952IVVHw | Training BERT #3 - Next Sentence Prediction (NSP) | 2021-05-23 18:14:04 UTC | 2021-05-25 14:56:47 UTC | 823 seconds | Next sentence prediction (NSP) is one-half of the training process behind the BERT model (the other being masked-language modeling, MLM).
Where MLM teaches BERT to understand relationships between words, NSP teaches BERT to understand relationships between sentences.
In the original BERT paper, it was found that without NSP, BERT performed worse on every single metric - so it's important.
Now, when we use a pre-trained BERT model, training with NSP and MLM has already been done, so why do we need to know about it?
Well, we can actually further pre-train these pre-trained BERT models so that they better understand the language used in our specific use-cases. To do that, we can use both MLM and NSP.
So, in this video, we'll go into depth on what NSP is, how it works, and how we can implement it in code.
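A minimal sketch of the NSP head in action, assuming bert-base-uncased and two made-up sentences:

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

sentence_a = "The weather was beautiful today."
sentence_b = "So we decided to go for a walk in the park."
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")

logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)  # index 0 = "is next", index 1 = "not next"
print(probs)
```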
Training with NSP:
https://youtu.be/x1lAcT3xl5M
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Medium article:
https://towardsdatascience.com/bert-for-next-sentence-prediction-466b67f8226f
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
π If membership is too expensive - here's a free link:
https://towardsdatascience.com/bert-for-next-sentence-prediction-466b67f8226f?sk=3595968413abde1c5833e1a96e449673
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Education | 94 | 6 |
x1lAcT3xl5M | UCv83tO5cePwHMt1952IVVHw | Training BERT #4 - Train With Next Sentence Prediction (NSP) | 2021-05-27 15:52:57 UTC | 2021-05-27 16:15:39 UTC | 2205 seconds | Next sentence prediction (NSP) is one-half of the training process behind the BERT model (the other being masked-language modeling, MLM).
Although NSP (and MLM) are used to pre-train BERT models, we can use these exact methods to further pre-train our models to better understand the specific style of language in our own use cases.
So, in this video, we'll cover exactly how we take an unstructured body of text, and use it to pre-train a BERT model using NSP.
Meditations data:
https://github.com/jamescalam/transformers/blob/main/data/text/meditations/clean.txt
Jupyter Notebook
https://github.com/jamescalam/transformers/blob/main/course/training/06_nsp_training.ipynb
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Medium article:
https://towardsdatascience.com/bert-for-next-sentence-prediction-466b67f8226f
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
π If membership is too expensive - here's a free link:
https://towardsdatascience.com/bert-for-next-sentence-prediction-466b67f8226f?sk=3595968413abde1c5833e1a96e449673
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Education | 95 | 1 |
5-A435hIYio | UCv83tO5cePwHMt1952IVVHw | New Features in Python 3.10 | 2021-06-03 16:41:56 UTC | 2021-06-08 15:00:02 UTC | 800 seconds | The Python 3.10 release has several new features like structural pattern matching, a new typing Union operator, and parenthesized context managers!
Python 3.10 has now been released; here we test all of the best new features introduced.
We'll cover some of the most interesting additions to Python: structural pattern matching, parenthesized context managers, more typing, and the new and improved error messages.
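A few toy snippets (assumed examples, requiring Python 3.10+) illustrating three of those features:

```python
def describe(value: int | float) -> str:       # new typing union operator (PEP 604)
    match value:                                # structural pattern matching (PEP 634)
        case 0:
            return "zero"
        case int():
            return "an integer"
        case _:
            return "a float"

# parenthesized context managers
with (
    open("input.txt") as fin,
    open("output.txt", "w") as fout,
):
    fout.write(fin.read())

print(describe(3))
```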
Download the latest release:
https://www.python.org/downloads/release/python-3100/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Medium article:
https://towardsdatascience.com/whats-new-in-python-3-10-a757c6c69342
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
π If membership is too expensive - here's a free link:
https://towardsdatascience.com/whats-new-in-python-3-10-a757c6c69342?sk=648ae12c1025a83affba4eecec0d46c6
πΎ Discord
https://discord.gg/c5QtDB9RAP
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
00:00 Intro
00:45 Type Annotations in Python
01:10 Typing Union Operator
02:07 Parenthesized Context Managers
05:07 Structural Pattern Matching
09:31 Better Error Messages | Education | 375 | 2 |
IC9FaVPKlYc | UCv83tO5cePwHMt1952IVVHw | Training BERT #5 - Training With BertForPretraining | 2021-06-04 05:13:06 UTC | 2021-06-15 15:00:19 UTC | 1306 seconds | NSP Logic
https://youtu.be/1gN1snKBLP0
MLM Logic
https://youtu.be/q9NS5WpfkrU
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Medium article:
https://towardsdatascience.com/how-to-train-bert-aaad00533168
π Here's a free link:
https://towardsdatascience.com/how-to-train-bert-aaad00533168?sk=5ad4e5e44a6c573b3be1967c9abdcc35
πΎ Discord
https://discord.gg/c5QtDB9RAP
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Education | 128 | 1 |
fA0dFQacmic | UCv83tO5cePwHMt1952IVVHw | FREE 11 Hour NLP Transformers Course (Next 3 Days Only) | 2021-06-04 07:56:44 UTC | 2021-06-04 13:00:19 UTC | 267 seconds | The offer has now expired! You can find the final 70% discount here:
https://bit.ly/3DFvvY5
In total, 10,823 people redeemed the code, which is incredible. I'm very happy that so many of you were interested in the course, and I hope it will help many of you learn about transformers and NLP where it may otherwise have been too expensive - so thank you all!
πΎ Discord
https://discord.gg/c5QtDB9RAP
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Education | 51 | 0 |
GhGUZrcB-WM | UCv83tO5cePwHMt1952IVVHw | How-to Use HuggingFace's Datasets - Transformers From Scratch #1 | 2021-06-21 21:56:31 UTC | 2021-06-22 13:00:07 UTC | 861 seconds | How can we build our own custom transformer models?
Maybe we'd like our model to understand a less common language - how many transformer models out there have been trained on Piemontese or Nahuatl?
In that case, we need to do something different. We need to build our own model - from scratch.
In this video, we'll learn how to use HuggingFace's datasets library to download multilingual data and prepare it for training our custom transformer tokenizer and model.
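A minimal sketch of what that looks like with the datasets library - the OSCAR subset name is an assumption for illustration:

```python
from datasets import load_dataset

# stream the Italian portion of OSCAR rather than downloading it all at once
oscar_it = load_dataset(
    "oscar", "unshuffled_deduplicated_it", split="train", streaming=True
)

for i, sample in enumerate(oscar_it):
    print(sample["text"][:80])  # preview the first few samples
    if i == 2:
        break
```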
---
Part 2: https://youtu.be/JIeAB8vvBQo
Part 3: https://youtu.be/heTYbpr9mD8
Part 4: https://youtu.be/35Pdoyi6ZoQ
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Medium article:
https://towardsdatascience.com/transformers-from-scratch-creating-a-tokenizer-7d7418adb403
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
π If membership is too expensive - here's a free link:
https://towardsdatascience.com/transformers-from-scratch-creating-a-tokenizer-7d7418adb403?sk=aea909609f41be43bdb2dbbd75a801f2
πΎ Discord
https://discord.gg/c5QtDB9RAP
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Education | 147 | 3 |
JIeAB8vvBQo | UCv83tO5cePwHMt1952IVVHw | Build a Custom Transformer Tokenizer - Transformers From Scratch #2 | 2021-06-22 20:07:37 UTC | 2021-06-24 14:00:06 UTC | 857 seconds | How can we build our own custom transformer models?
Maybe we'd like our model to understand a less common language - how many transformer models out there have been trained on Piemontese or Nahuatl?
In that case, we need to do something different. We need to build our own model - from scratch.
In this video, we'll learn how to use HuggingFace's tokenizers library to build our own custom transformer tokenizer.
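A minimal sketch of training a byte-level BPE tokenizer with the tokenizers library - the file paths and parameter values are assumptions for illustration:

```python
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["data/text_0.txt", "data/text_1.txt"],   # hypothetical plain-text files
    vocab_size=30_522,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
tokenizer.save_model("custom-tokenizer")  # writes vocab.json and merges.txt
```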
Part 1: https://youtu.be/GhGUZrcB-WM
---
Part 3: https://youtu.be/heTYbpr9mD8
Part 4: https://youtu.be/35Pdoyi6ZoQ
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Medium article:
https://towardsdatascience.com/transformers-from-scratch-creating-a-tokenizer-7d7418adb403
π If membership is too expensive - here's a free link:
https://towardsdatascience.com/transformers-from-scratch-creating-a-tokenizer-7d7418adb403?sk=aea909609f41be43bdb2dbbd75a801f2
πΎ Discord
https://discord.gg/c5QtDB9RAP
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Education | 80 | 3 |
ziiF1eFM3_4 | UCv83tO5cePwHMt1952IVVHw | 3 Vector-based Methods for Similarity Search (TF-IDF, BM25, SBERT) | 2021-06-28 13:25:28 UTC | 2021-06-29 13:00:23 UTC | 1764 seconds | Vector similarity search is one of the fastest-growing domains in AI and machine learning. At its core, it is the process of matching relevant pieces of information together.
Similarity search is a complex topic and there are countless techniques for building effective search engines.
In this video, we'll cover three vector-based approaches for comparing languages and identifying similar 'documents', covering both vector similarity search and semantic search:
- TF-IDF
- BM25
- Sentence-BERT
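As a rough illustration of the first approach in the list above, a TF-IDF sketch using scikit-learn on made-up documents:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "purple is the best city in the forest",
    "there is an art to getting your way",
    "it is not often you find soggy bananas",
]
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)          # sparse TF-IDF document vectors

# similarity of the first document to all documents
print(cosine_similarity(tfidf[0], tfidf))
```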
π° Original article:
https://www.pinecone.io/learn/semantic-search/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
Mining Massive Datasets Book (Similarity Search):
π https://amzn.to/3CC0zrc (3rd ed)
π https://amzn.to/3AtHSnV (1st ed, cheaper)
πΎ Discord
https://discord.gg/c5QtDB9RAP
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
00:00 Intro
01:37 TF-IDF
11:44 BM25
20:30 SBERT | Education | 415 | 1 |
AY62z7HrghY | UCv83tO5cePwHMt1952IVVHw | 3 Traditional Methods for Similarity Search (Jaccard, w-shingling, Levenshtein) | 2021-06-28 17:44:01 UTC | 2021-06-29 12:00:04 UTC | 1520 seconds | Similarity search is one of the fastest-growing domains in AI and machine learning. At its core, it is the process of matching relevant pieces of information together.
Similarity search is a complex topic and there are countless techniques for building effective search engines.
In this video, we'll cover three traditional approaches for comparing languages and identifying similar 'documents':
- Jaccard Similarity
- w-shingling
- Levenshtein distance
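As a tiny illustration of the first method in the list above (the example sentences are made up):

```python
def jaccard(a: str, b: str) -> float:
    # word-level Jaccard similarity: |intersection| / |union|
    x, y = set(a.split()), set(b.split())
    return len(x & y) / len(x | y)

print(jaccard("his thought process was on so many levels",
              "the process of his thought was at many levels"))
```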
π° Original article:
https://www.pinecone.io/learn/semantic-search/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
Mining Massive Datasets Book (Similarity Search):
π https://amzn.to/3CC0zrc (3rd ed)
π https://amzn.to/3AtHSnV (1st ed, cheaper)
πΎ Discord
https://discord.gg/c5QtDB9RAP
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
00:00 Intro
00:23 Jaccard Similarity
02:39 w-shingling
07:17 Levenshtein Distance | Education | 86 | 0 |
heTYbpr9mD8 | UCv83tO5cePwHMt1952IVVHw | Building MLM Training Input Pipeline - Transformers From Scratch #3 | 2021-07-02 15:28:46 UTC | 2021-07-05 14:00:30 UTC | 1392 seconds | The input pipeline of our training process is the more complex part of the entire transformer build. It consists of us taking our raw OSCAR training data, transforming it, and preparing it for Masked-Language Modeling (MLM). Finally, we load our data into a DataLoader ready for training!
Part 1: https://youtu.be/GhGUZrcB-WM
Part 2: https://youtu.be/JIeAB8vvBQo
---
Part 4: https://youtu.be/35Pdoyi6ZoQ
π Medium article:
https://towardsdatascience.com/how-to-train-a-bert-model-from-scratch-72cfce554fc6
π Free link:
https://towardsdatascience.com/how-to-train-a-bert-model-from-scratch-72cfce554fc6?sk=9db6224efbd4ec6fd407a80b528e69b0
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
πΎ Discord
https://discord.gg/c5QtDB9RAP
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Education | 69 | 0 |
ee71R4Cqb5o | UCv83tO5cePwHMt1952IVVHw | Angular App Setup With Material - Stoic Q&A #5 | 2021-07-05 08:50:04 UTC | 2021-07-20 14:00:28 UTC | 814 seconds | βΆοΈ Stoic Q&A App Playlist: https://www.youtube.com/playlist?list=PLIUOU7oqGTLixb-CatMxNCO-mJioMmZEB
The fifth video in our Stoic Q&A series - setting up our Angular app with Angular Material.
Prerequisites:
Installation of Node.js and NPM - https://nodejs.org/en/
Angular - https://angular.io/guide/setup-local
πΎ Discord
https://discord.gg/c5QtDB9RAP | Science & Technology | 17 | 0 |
35Pdoyi6ZoQ | UCv83tO5cePwHMt1952IVVHw | Training and Testing an Italian BERT - Transformers From Scratch #4 | 2021-07-05 18:22:41 UTC | 2021-07-06 13:00:03 UTC | 1838 seconds | We need two things for training: our DataLoader and a model. The DataLoader we have - but no model.
For training, we need a raw (not pre-trained) RobertaForMaskedLM. To create that, we first need to create a RoBERTa config object to describe the parameters we'd like to initialize FiliBERTo with.
Once we have our model, we set up our training loop and train!
Post-training, we'll test the model with Laura, who is Italian - and hope for the best.
Part 1: https://youtu.be/GhGUZrcB-WM
Part 2: https://youtu.be/JIeAB8vvBQo
Part 3: https://youtu.be/heTYbpr9mD8
---
π Medium article:
https://towardsdatascience.com/how-to-train-a-bert-model-from-scratch-72cfce554fc6
π If membership is too expensive - here's a free link:
https://towardsdatascience.com/how-to-train-a-bert-model-from-scratch-72cfce554fc6?sk=9db6224efbd4ec6fd407a80b528e69b0
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
πΎ Discord
https://discord.gg/c5QtDB9RAP
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
00:00 Intro
00:35 Review of Code
02:02 Config Object
06:28 Setup For Training
10:30 Training Loop
14:57 Dealing With CUDA Errors
16:17 Training Results
19:52 Loss
21:18 Fill-mask Pipeline For Testing
21:54 Testing With Laura | Science & Technology | 94 | 1 |
sKyvsdEv6rk | UCv83tO5cePwHMt1952IVVHw | Faiss - Introduction to Similarity Search | 2021-07-09 13:47:26 UTC | 2021-07-13 15:00:19 UTC | 1896 seconds | Full Similarity Search Playlist:
https://www.youtube.com/watch?v=AY62z7HrghY&list=PLIUOU7oqGTLhlWpTz4NnuT3FekouIVlqc&index=1
Facebook AI Similarity Search (FAISS) is one of the most popular implementations of efficient similarity search, but what is it - and how can we use it?
What is it that makes FAISS special? How do we make the best use of this incredible tool?
Fortunately, it's a brilliantly simple process to get started with. And in this video, we'll explore some of the options FAISS provides, how they work, and, most importantly, how FAISS can make our semantic search faster.
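A minimal, self-contained sketch of the simplest Faiss index (an exact, flat L2 index) on random vectors - real usage would swap in sentence embeddings such as the dataset linked below:

```python
import faiss
import numpy as np

d = 128                                                  # vector dimensionality
xb = np.random.random((10_000, d)).astype("float32")    # toy database vectors
xq = np.random.random((1, d)).astype("float32")         # a toy query vector

index = faiss.IndexFlatL2(d)   # exact (flat) L2 index, no training required
index.add(xb)
D, I = index.search(xq, 4)     # distances and ids of the 4 nearest neighbors
print(I)
```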
π² Pinecone Article:
https://www.pinecone.io/learn/faiss-tutorial/
π Data:
https://github.com/jamescalam/data/tree/main/sentence_embeddings_15K
Notebook:
https://gist.github.com/jamescalam/7117aa92235a7f52141ad0654795aa48
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
πΎ Discord
https://discord.gg/c5QtDB9RAP
Mining Massive Datasets Book (Similarity Search):
π https://amzn.to/3CC0zrc (3rd ed)
π https://amzn.to/3AtHSnV (1st ed, cheaper)
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Science & Technology | 354 | 5 |
bWLvGGJLzF8 | UCv83tO5cePwHMt1952IVVHw | Why are there so many Tokenization methods in HF Transformers? | 2021-07-27 07:12:07 UTC | 2021-07-27 14:00:10 UTC | 1080 seconds | HuggingFace's transformers library is the de facto standard for NLP. Used by practitioners worldwide, it's powerful, flexible, and easy to use. It achieves this through a fairly large (and complex) code-base, which has resulted in the question:
"Why are there so many tokenization methods in HuggingFace transformers?"
Tokenization is the process of encoding a string of text into transformer-readable token ID integers. In this video we cover five different methods for this - do these all produce the same output, or is there a difference between them?
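As a rough sketch of what comparing those methods looks like in practice (model name assumed), a few of the tokenizer entry points can be tried side by side:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
text = "hello world!"

ids_call   = tokenizer(text)["input_ids"]            # __call__ adds [CLS]/[SEP]
ids_encode = tokenizer.encode(text)                   # same output as __call__ here
tokens     = tokenizer.tokenize(text)                 # wordpieces only, no special tokens
ids_manual = tokenizer.convert_tokens_to_ids(tokens)  # no special tokens either

print(ids_call, ids_encode, tokens, ids_manual)
```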
π Medium article:
https://towardsdatascience.com/why-are-there-so-many-tokenization-methods-for-transformers-a340e493b3a8
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
π If membership is too expensive - here's a free link:
https://towardsdatascience.com/why-are-there-so-many-tokenization-methods-for-transformers-a340e493b3a8?sk=4a7e8c88d331aef9103e153b5b799ff5
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
πΎ Discord
https://discord.gg/c5QtDB9RAP
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Science & Technology | 51 | 0 |
B7wmo_NImgM | UCv83tO5cePwHMt1952IVVHw | Choosing Indexes for Similarity Search (Faiss in Python) | 2021-08-09 14:33:47 UTC | 2021-08-09 15:04:10 UTC | 1893 seconds | Facebook AI Similarity Search (Faiss) is a game-changer in the world of search. It allows us to efficiently search a huge range of media, from GIFs to articles - with incredible accuracy in sub-second timescales for billion+ size datasets.
The success of Faiss is due to many reasons. One of those, in particular, is its flexibility. Faiss recognizes that there is no 'one-size-fits-all' in similarity search.
Instead, Faiss comes with a wide range of search indexes, which we can mix and match as we choose.
However, this great flexibility produces a question: how do we know which size fits our use case?
Which index do we choose? Should we use multiple indexes, or is one enough?
This video will explore the pros and cons of some of the most important indexes - Flat, LSH, HNSW, and IVF. We will learn how to decide which to use and how the parameters of each index affect performance, so we can build some of the best indexes for semantic search.
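As one rough example of the trade-offs involved, an IVF index narrows the search to a subset of cells - the parameter values below are purely illustrative:

```python
import faiss
import numpy as np

d = 128
xb = np.random.random((10_000, d)).astype("float32")

nlist = 128                                   # number of IVF cells (partitions)
quantizer = faiss.IndexFlatL2(d)              # coarse quantizer assigns vectors to cells
index = faiss.IndexIVFFlat(quantizer, d, nlist)

index.train(xb)                               # IVF indexes must be trained before adding
index.add(xb)
index.nprobe = 8                              # cells searched at query time (speed/recall knob)
D, I = index.search(xb[:1], 5)
```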
π² Pinecone Article:
https://www.pinecone.io/learn/vector-indexes/
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
Download script for Sift1M dataset:
https://gist.github.com/jamescalam/a09a16c17b677f2cf9c019114711f3bf
Similarity Search Series:
https://www.youtube.com/playlist?list=PLIUOU7oqGTLhlWpTz4NnuT3FekouIVlqc
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
πΎ Discord
https://discord.gg/c5QtDB9RAP
Mining Massive Datasets Book (Similarity Search):
π https://amzn.to/3CC0zrc (3rd ed)
π https://amzn.to/3AtHSnV (1st ed, cheaper)
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Science & Technology | 122 | 1 |
e_SBq3s20M8 | UCv83tO5cePwHMt1952IVVHw | Locality Sensitive Hashing (LSH) for Search with Shingling + MinHashing (Python) | 2021-08-19 16:53:50 UTC | 2021-08-20 16:00:16 UTC | 1627 seconds | Locality sensitive hashing (LSH) is a widely popular technique used in approximate nearest neighbor (ANN) search. The solution to efficient similarity search is a profitable one - it is at the core of several billion (and even trillion) dollar companies.
LSH consists of a variety of different methods. In this video, we'll be covering the traditional approach, which consists of multiple steps: shingling, MinHashing, and the final banded LSH function.
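As a tiny sketch of the first step, shingling, on two made-up sentences:

```python
def shingle(text: str, k: int = 2) -> set:
    # character k-shingles of the input text
    return {text[i:i + k] for i in range(len(text) - k + 1)}

a = shingle("flying fish flew by the space station")
b = shingle("he will not allow you to bring your sticks")

vocab = a | b  # the shingle vocabulary
# one-hot sparse vectors built from this vocab are what MinHashing then compresses
print(len(a), len(b), len(vocab))
```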
π² Pinecone article:
https://www.pinecone.io/learn/locality-sensitive-hashing/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
00:00 Intro
01:21 Overview
05:58 Shingling
08:45 Vocab
09:27 One-hot Encoding
11:10 MinHash
15:51 Signature Info
18:08 LSH
22:20 Tuning LSH | Science & Technology | 208 | 19 |
8bOrMqEdfiQ | UCv83tO5cePwHMt1952IVVHw | How LSH Random Projection works in search (+Python) | 2021-08-24 05:09:11 UTC | 2021-08-24 16:00:04 UTC | 1148 seconds | Locality sensitive hashing (LSH) is a widely popular technique used in approximate similarity search. The solution to efficient similarity search is a profitable one - it is at the core of several billion (and even trillion) dollar companies.
The problem with similarity search is scale. Many companies deal with millions-to-billions of data points every single day. Given a billion data points, is it feasible to compare all of them with every search?
Further, many companies are not performing single searches - Google deals with more than 3.8 million searches every minute.
Billions of data points combined with high-frequency searches are problematic - and we haven't even considered the dimensionality or the similarity function itself. Clearly, an exhaustive search across all data points is unrealistic for larger datasets.
The solution to searching impossibly huge datasets? Approximate search. Rather than exhaustively comparing every pair, we approximate - restricting the search scope only to high-probability matches.
π² Pinecone article:
https://www.pinecone.io/learn/locality-sensitive-hashing-random-projection/
Download Sift1M:
https://gist.github.com/jamescalam/a09a16c17b677f2cf9c019114711f3bf
IndexLSH for Fast Similarity Search in Faiss:
https://youtu.be/ZLfdQq_u7Eo
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Science & Technology | 66 | 3 |
ZLfdQq_u7Eo | UCv83tO5cePwHMt1952IVVHw | IndexLSH for Fast Similarity Search in Faiss | 2021-08-24 05:25:21 UTC | 2021-08-24 16:00:12 UTC | 1119 seconds | Faiss - or Facebook AI Similarity Search - is an open-source framework built for enabling similarity search.
Faiss has many super-efficient implementations of different indexes that we can use in similarity search. That long list of indexes includes IndexLSH - an easy-to-use implementation of everything we have covered so far in LSH.
π² Pinecone article:
https://www.pinecone.io/learn/locality-sensitive-hashing-random-projection/
Download Sift1M:
https://gist.github.com/jamescalam/a09a16c17b677f2cf9c019114711f3bf
How LSH Random Projection works in search (+Python):
https://youtu.be/8bOrMqEdfiQ
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Science & Technology | 27 | 0 |
BMYBwbkbVec | UCv83tO5cePwHMt1952IVVHw | Faiss - Vector Compression with PQ and IVFPQ (in Python) | 2021-08-30 14:35:01 UTC | 2021-08-30 15:30:04 UTC | 1161 seconds | So far we've worked through the logic behind a simple, readable implementation of product quantization (PQ) in Python for semantic search. Realistically we wouldn't use this because it is not optimized and we already have excellent implementations elsewhere. Instead, we would use a library like Faiss (Facebook AI Similarity Search) - or a production-ready service like Pinecone.
We'll take a look at how we can build a PQ index in Faiss, and even at combining PQ with an Inverted File (IVF) step to improve search speed.
Before we start, we need to get data. We will be using the Sift1M dataset. It can be downloaded and opened using this script:
https://gist.github.com/jamescalam/928a374b85daffa49a565f3dc18d059c#file-get_sift1m-ipynb
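A minimal sketch of an IVF+PQ index in Faiss - random vectors and parameter values are used purely for illustration (the video itself uses Sift1M, linked above):

```python
import faiss
import numpy as np

d = 128
m, nbits = 8, 8                     # 8 subvectors, 2**8 centroids per subquantizer
nlist = 256                         # number of IVF cells
xb = np.random.random((50_000, d)).astype("float32")

quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, nbits)

index.train(xb)                     # PQ codebooks and IVF cells are learned here
index.add(xb)
index.nprobe = 8                    # cells visited per query
D, I = index.search(xb[:1], 10)
```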
π² Pinecone article:
https://www.pinecone.io/learn/product-quantization/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Science & Technology | 36 | 1 |
t9mRf2S5vDI | UCv83tO5cePwHMt1952IVVHw | Product Quantization for Vector Similarity Search (+ Python) | 2021-08-30 15:22:47 UTC | 2021-08-30 15:37:46 UTC | 1777 seconds | Vector similarity search can require huge amounts of memory. Indexes containing 1M dense vectors (a small dataset in today's world) will often require several GBs of memory to store. When building recommendation systems or semantic search engines, this is not acceptable.
The problem of excessive memory usage is exacerbated by high-dimensional data, and with ever-increasing dataset sizes, this can very quickly become unmanageable.
Product quantization (PQ) is a popular method for dramatically compressing high-dimensional vectors to use 97% less memory, and for making nearest-neighbor search speeds 5.5x faster in our tests.
A composite IVF+PQ index speeds up the search by another 16.5x without affecting accuracy, for a whopping total speed increase of 92x compared to non-quantized indexes.
π² Pinecone article:
https://www.pinecone.io/learn/product-quantization/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
πΉοΈ Free AI-Powered Code Refactoring with Sourcery:
https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff | Science & Technology | 116 | 2 |
GEhmmcx1lvM | UCv83tO5cePwHMt1952IVVHw | Composite Indexes and the Faiss Index Factory | 2021-09-11 17:27:12 UTC | 2021-09-24 12:53:58 UTC | 1063 seconds | In the world of vector search, there are many indexing methods and vector processing techniques that allow us to prioritize between recall, latency, and memory usage.
Using specific methods such as IVF, PQ, or HNSW, we can often return good results. But for best performance we will usually want to use composite indexes.
We can view a composite index as a step-by-step process of vector transformations and one or more indexing methods, allowing us to place multiple indexes and/or processing steps together to create our 'ideal' index.
For example, we can use an inverted file (IVF) index to reduce the scope of our search (increasing search speed), and then add a compression technique such as product quantization (PQ) to keep larger indexes within a reasonable size limit.
Where there is the ability to customize indexes, there is the risk of producing indexes with unnecessarily poor recall, latency, or memory usage.
We must know how composite indexes work if we want to build robust and high-performance vector similarity search applications. It is essential to understand where different indexes or vector transformations can be used - and when they are not needed.
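For illustration, the Faiss index factory lets us declare such a composite index with a single string - the particular combination below (OPQ transform, IVF coarse quantizer, PQ compression) is just one example:

```python
import faiss

d = 128
# OPQ rotation -> IVF with 256 cells -> PQ with 32 subquantizers
index = faiss.index_factory(d, "OPQ32,IVF256,PQ32")

# the composite index must still be trained and populated before searching:
#   index.train(xb); index.add(xb)
```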
Part 2: https://youtu.be/3Wqh4iUupbM
π² Pinecone article:
https://www.pinecone.io/learn/composite-indexes/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Sign-up For New Articles Every Week on Medium!
https://jamescalam.medium.com/subscribe (it's free!)
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:54 Composite Indexes
06:43 Faiss Index Factory
11:34 Why we use Index Factory
17:11 Outro | Science & Technology | 21 | 0 |
3Wqh4iUupbM | UCv83tO5cePwHMt1952IVVHw | Best Indexes for Similarity Search in Faiss | 2021-09-12 07:02:26 UTC | 2021-09-24 12:54:07 UTC | 1582 seconds | In the world of vector search, there are many indexing methods and vector processing techniques that allow us to prioritize between recall, latency, and memory usage.
Using specific methods such as IVF, PQ, or HNSW, we can often return good results. But for best performance we will usually want to use composite indexes.
We can view a composite index as a step-by-step process of vector transformations and one or more indexing methods, allowing us to place multiple indexes and/or processing steps together to create our 'ideal' index.
For example, we can use an inverted file (IVF) index to reduce the scope of our search (increasing search speed), and then add a compression technique such as product quantization (PQ) to keep larger indexes within a reasonable size limit.
Where there is the ability to customize indexes, there is the risk of producing indexes with unnecessarily poor recall, latency, or memory usage.
We must know how composite indexes work if we want to build robust and high-performance vector similarity search applications. It is essential to understand where different indexes or vector transformations can be used - and when they are not needed.
Part 1: https://youtu.be/GEhmmcx1lvM
π² Pinecone article:
https://www.pinecone.io/learn/composite-indexes/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Sign-up For New Articles Every Week on Medium!
https://jamescalam.medium.com/subscribe (it's free!)
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
00:30 IVFADC
03:30 IVFADC in Faiss
07:29 Multi-D-ADC
09:17 Multi-D-ADC in Faiss
14:43 IVF-HNSW
21:39 IVF-HNSW in Faiss
25:58 Outro | Science & Technology | 31 | 0 |
cR4qMSIvX28 | UCv83tO5cePwHMt1952IVVHw | How to Build a Bert WordPiece Tokenizer in Python and HuggingFace | 2021-09-13 20:13:08 UTC | 2021-09-14 13:30:06 UTC | 1880 seconds | Building a transformer model from scratch can often be the only option for many more specific use cases. Although BERT and other transformer models have been pre-trained for a vast number of languages and domains, they do not cover everything.
Often, it is these less common use cases that stand to gain the most from having someone come along and build a specific transformer model. It could be for an uncommon language or less tech-savvy domain.
BERT is the most popular transformer for a wide range of language-based machine learning - from sentiment analysis to question answering, BERT has enabled a diverse range of innovation across many borders and industries.
The first step for many in designing a new BERT model is the tokenizer. In this article, we'll take a look at the WordPiece tokenizer used by BERT - and see how we can build our own from scratch.
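A minimal sketch of training a WordPiece tokenizer with the HuggingFace tokenizers library - file paths and parameter values are assumptions for illustration:

```python
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(
    clean_text=True,
    handle_chinese_chars=False,
    strip_accents=False,
    lowercase=True,
)
tokenizer.train(
    files=["data/text_0.txt"],       # hypothetical plain-text training files
    vocab_size=30_000,
    min_frequency=2,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)
tokenizer.save_model("./bert-tokenizer")   # writes vocab.txt
```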
π Medium article:
https://towardsdatascience.com/how-to-build-a-wordpiece-tokenizer-for-bert-f505d97dddbb
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Sign-up For New Articles Every Week on Medium!
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
πΉοΈ Free Article link (if you don't have Medium membership):
https://towardsdatascience.com/how-to-build-a-wordpiece-tokenizer-for-bert-f505d97dddbb?sk=eea06e01c9faecd939e10589e9de1291 | Science & Technology | 95 | 1 |
H_kJDHvu-v8 | UCv83tO5cePwHMt1952IVVHw | Metadata Filtering for Vector Search + Latest Filter Tech | 2021-09-20 12:23:11 UTC | 2021-09-20 14:04:27 UTC | 2054 seconds | Vector similarity search makes massive datasets searchable in fractions of a second. Yet despite the brilliance and utility of this technology, often what seem to be the most straightforward problems are the most difficult to solve. Such as filtering.
Filtering takes the top spot for being seemingly simple - but actually incredibly complex. Applying fast-but-accurate filters when performing a vector search (i.e., nearest-neighbor search) on massive datasets is a surprisingly stubborn problem.
This article explains the two common methods for adding filters to vector search, and their serious limitations. Then we will explore Pinecone's solution to filtering in vector search.
π£ Get the API key!
https://www.pinecone.io/start/
π² Pinecone article:
https://www.pinecone.io/learn/vector-search-filtering/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
00:24 Vector Search Recap
02:03 Why Filter?
02:56 Metadata Filtering 101
07:48 Pre-filtering
09:37 Post-filtering
11:30 Single-Stage Filtering
12:22 Vectors and Metadata Code
13:58 Connecting to Pinecone
14:55 Building Query Vector
16:47 Querying
21:37 First Filter
24:40 Adding More Conditions
27:03 Filtering with Numbers
30:55 Search Speed and Filtering
33:44 Outro | Science & Technology | 20 | 0 |
r-zQQ16wTCA | UCv83tO5cePwHMt1952IVVHw | Build NLP Pipelines with HuggingFace Datasets | 2021-09-20 14:58:03 UTC | 2021-09-23 13:30:07 UTC | 2030 seconds | HF Datasets is an essential tool for NLP practitioners - hosting over 1.4K (mostly) high-quality language-focused datasets, and an easy-to-use treasure trove of functions for building efficient pre-processing pipelines.
In this article, we will take a look at the massive repository of datasets available, and explore some of the library's brilliant data processing capabilities.
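A minimal sketch of the kind of processing covered - the dataset name and mapping function are assumptions for illustration:

```python
from datasets import load_dataset

# load a small split of a public dataset
squad = load_dataset("squad", split="validation")

# map applies a function across the whole dataset (optionally batched)
squad = squad.map(lambda x: {"question": x["question"].lower()})
print(squad[0]["question"])
```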
π Medium article:
https://towardsdatascience.com/build-nlp-pipelines-with-huggingface-datasets-d597ff5f68ad
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Sign-up For New Articles Every Week on Medium!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
π Free Article Access (if you don't have Medium membership!):
https://towardsdatascience.com/build-nlp-pipelines-with-huggingface-datasets-d597ff5f68ad?sk=948106e47e64bc3e9e8a1358b0568d48 | Science & Technology | 53 | 1 |
QvKMwLjdK-s | UCv83tO5cePwHMt1952IVVHw | HNSW for Vector Search Explained and Implemented with Faiss (Python) | 2021-09-29 08:13:49 UTC | 2021-10-05 13:00:23 UTC | 2075 seconds | Hierarchical Navigable Small World (HNSW) graphs are among the top-performing indexes for vector similarity search. HNSW is a hugely popular technology that time and time again produces state-of-the-art performance with super-fast search speeds and flawless recall - HNSW is not to be missed.
Despite being a popular and robust algorithm for approximate nearest neighbors (ANN) searches, understanding how it works is far from easy.
This video helps demystify HNSW and explains this intelligent algorithm in an easy-to-understand way. Towards the end of the video, we'll look at how to implement HNSW using Faiss and which parameter settings give us the performance we need.
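As a rough sketch of the Faiss side, an HNSW index on random vectors can be built like this (the parameter values are only illustrative):

```python
import faiss
import numpy as np

d = 128
M = 32                                # neighbors per node in the graph
xb = np.random.random((10_000, d)).astype("float32")

index = faiss.IndexHNSWFlat(d, M)
index.hnsw.efConstruction = 64        # build-time search depth (quality of the graph)
index.hnsw.efSearch = 32              # query-time search depth (speed/recall knob)

index.add(xb)                         # no training step needed for HNSW flat
D, I = index.search(xb[:1], 5)
```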
π² Pinecone article:
https://www.pinecone.io/learn/hnsw/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Sign-up For New Articles Every Week on Medium!
https://jamescalam.medium.com/subscribe (it's free!)
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
00:41 Foundations of HNSW
08:41 How HNSW Works
16:38 The Basics of HNSW in Faiss
21:40 How Faiss Builds an HNSW Graph
26:49 Building the Best HNSW Index
33:33 Fine-tuning HNSW
34:30 Outro | Science & Technology | 131 | 3 |
g_yMowQikOE | UCv83tO5cePwHMt1952IVVHw | Intro to APIs in Python - API Series #1 | 2021-09-29 12:21:47 UTC | 2021-09-29 14:00:18 UTC | 1704 seconds | Taking those first steps into interacting with the web using Python can seem daunting - but it need not be. It is a surprisingly simple process, with well-established rules and guidelines.
We'll cover the absolute essentials for getting started, including:
- Application Program Interfaces (APIs)
- Javascript Object Notation (JSON)
- Requests with Python
- Real world use-cases
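A quick, hedged example of the basics - the GitHub endpoint below is just a convenient public API for demonstrating a GET request and JSON parsing:

```python
import requests

# send a GET request to a public REST API
res = requests.get("https://api.github.com/users/octocat")
print(res.status_code)    # 200 on success, 4xx on client errors

data = res.json()         # parse the JSON response body into a dict
print(data["name"])
```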
π Article:
https://towardsdatascience.com/quick-fire-guide-to-apis-in-python-891dd98c8877
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Sign-up For New Articles Every Week on Medium!
https://jamescalam.medium.com/subscribe (it's free!)
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
π Free Access Link (if you don't have Medium membership):
https://towardsdatascience.com/quick-fire-guide-to-apis-in-python-891dd98c8877?sk=7c159ba45154db23abcc6a7f9de4f910
Geocoding Docs:
https://developers.google.com/maps/documentation/geocoding/cloud-setup
GitHub Docs:
https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token
00:00 Intro
00:20 What is an API?
01:47 RESTful APIs
05:26 API Methods
07:20 HTTP Codes (200s)
08:14 HTTP Codes (400s)
10:00 JSON Format
11:21 Talking to APIs in Python
14:30 Google Geocoding API
22:08 GitHub API
27:48 Outro | Science & Technology | 119 | 0 |
bVZJ_O_-0RE | UCv83tO5cePwHMt1952IVVHw | Intro to Dense Vectors for NLP and Vision | 2021-10-04 08:28:38 UTC | 2021-10-12 17:47:15 UTC | 2629 seconds | There is perhaps no greater component to the success of modern Natural Language Processing (NLP) technology than vector representations of language. The meteoric early 2010s rise of NLP was ignited with the introduction of word2vec by a team led by Tomáš Mikolov in 2013.
Word2vec is one of the most iconic and earliest examples of dense vectors representing text. But since the days of word2vec, developments in representing language have advanced at ludicrous speeds.
This video will explore *why* we use dense vectors - and some of the best approaches to building dense vectors available today.
π² Pinecone article:
https://www.pinecone.io/learn/dense-vector-embeddings-nlp/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:50 Why Dense Vectors?
03:55 Word2vec and Representing Meaning
08:40 Sentence Transformers
09:58 Sentence Transformers in Python
15:08 Question-Answering
18:18 DPR in Python
29:55 Vision Transformers
33:22 OpenAI's CLIP in Python
42:49 Review and What's Next | Science & Technology | 92 | 0 |
MF75aNH3Gjs | UCv83tO5cePwHMt1952IVVHw | API Series #2 - Building an API with Flask in Python | 2021-10-05 07:01:25 UTC | 2021-10-07 14:52:32 UTC | 1902 seconds | Next video - how to deploy to the cloud: https://youtu.be/3fsIcMgUOY8
How can we set up a way to communicate from one software instance to another? It sounds simple, and - to be completely honest - it is.
All we need is an API.
An API (Application Programming Interface) is a simple interface that defines the types of requests (demands/questions, etc.) that can be made, how they are made, and how they are processed.
In our case, we will be building an API that allows us to send a range of GET/POST/PUT/PATCH/DELETE requests (more on this later), to different endpoints, and return or modify data connected to our API.
We will be using the Flask framework to create our API and Insomnia to test it.
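A rough sketch (not the exact app built in the video) of a minimal Flask API with a couple of methods - the /users endpoint and its data are made up:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/users", methods=["GET", "POST"])
def users():
    if request.method == "POST":
        data = request.get_json()              # read the JSON request body
        return jsonify({"created": data}), 201
    return jsonify({"users": ["alice", "bob"]}), 200

if __name__ == "__main__":
    app.run(debug=True)   # serves on http://127.0.0.1:5000 by default
```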
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
πΉοΈ Medium article:
https://towardsdatascience.com/the-right-way-to-build-an-api-with-python-cd08ab285f8f
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
Free article link:
https://towardsdatascience.com/the-right-way-to-build-an-api-with-python-cd08ab285f8f?sk=6e2dda4c8b6012767114e12ff34b1464
Download Insomnia:
https://insomnia.rest/download | Science & Technology | 117 | 2 |
WS1uVMGhlWQ | UCv83tO5cePwHMt1952IVVHw | Intro to Sentence Embeddings with Transformers | 2021-10-19 09:44:58 UTC | 2021-10-20 17:06:20 UTC | 1866 seconds | Transformers have wholly rebuilt the landscape of natural language processing (NLP). Before transformers, we had okay translation and language classification thanks to recurrent neural nets (RNNs) - their language comprehension was limited and led to many minor mistakes, and coherence over larger chunks of text was practically impossible.
Since the introduction of the first transformer model in the 2017 paper "Attention Is All You Need", NLP has moved from RNNs to models like BERT and GPT. These new models can answer questions, write articles (maybe GPT-3 wrote this), enable incredibly intuitive semantic search - and much more.
In this video, we will explore how these embeddings have been adapted and applied to a range of semantic similarity applications by using a new breed of transformers called "sentence transformers".
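As a rough illustration (the model name and the cosine-similarity utility are assumptions that depend on the sentence-transformers version), producing and comparing sentence embeddings looks roughly like this:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # model name assumed for illustration

sentences = [
    "it is a beautiful day outside",
    "the weather is lovely today",
    "he drove the car to the stadium",
]
embeddings = model.encode(sentences, convert_to_tensor=True)
scores = util.cos_sim(embeddings, embeddings)     # pairwise cosine similarity matrix
print(scores)
```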
π² Pinecone article:
https://www.pinecone.io/learn/sentence-embeddings/
Vectors in ML:
https://www.youtube.com/playlist?list=PLIUOU7oqGTLgz-BI8bNMVGwQxIMuQddJO
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP | Science & Technology | 188 | 1 |
aSx0jg9ZILo | UCv83tO5cePwHMt1952IVVHw | Fine-tune Sentence Transformers the OG Way (with NLI Softmax loss) | 2021-10-22 14:16:49 UTC | 2021-10-22 14:39:46 UTC | 2223 seconds | Sentence embeddings with transformers can be used across a range of applications, such as semantic textual similarity (STS), semantic clustering, or information retrieval (IR) using concepts rather than words.
This video dives deeper into the training process of the first sentence transformer, sentence-BERT, more commonly known as SBERT. We will explore the Natural Language Inference (NLI) training approach of softmax loss to fine-tune models for producing sentence embeddings.
Be aware that softmax loss is no longer the preferred approach to training sentence transformers and has been superseded by other methods such as MSE margin and multiple negatives ranking loss. But we're covering this training method as an important milestone in the development of ever-improving sentence embeddings.
π² Pinecone article:
https://www.pinecone.io/learn/train-sentence-transformers-softmax/
Check out the Sentence Transformers library:
https://github.com/UKPLab/sentence-transformers
Talk by Nils Reimers (one of the SBERT creators) on training:
https://www.youtube.com/watch?v=RHXZKUr8qOY
He does more NLP vids too:
https://www.youtube.com/channel/UC1zCuTrfpjT6Sv2kJk-JkvA
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
00:42 NLI Fine-tuning
01:44 Softmax Loss Training Overview
05:47 Preprocessing NLI Data
12:48 PyTorch Process
19:48 Using Sentence-Transformers
30:45 Results
35:49 Outro | Science & Technology | 83 | 0 |
or5ew7dqA-c | UCv83tO5cePwHMt1952IVVHw | Fine-tune High Performance Sentence Transformers (with Multiple Negatives Ranking) | 2021-10-25 20:18:30 UTC | 2021-10-26 13:00:22 UTC | 2213 seconds | Transformer-produced sentence embeddings have come a long way in a very short time. Starting with the slow but accurate similarity prediction of BERT cross-encoders, the world of sentence embeddings was ignited with the introduction of SBERT in 2019. Since then, many more sentence transformers have been introduced. These models quickly made the original SBERT obsolete.
How did these newer sentence transformers manage to outperform SBERT so quickly? The answer is multiple negatives ranking (MNR) loss.
This video will cover what MNR loss is, the data it requires, and how to implement it to fine-tune our own high-quality sentence transformers.
Implementation will cover two approaches. The first is more involved, and outlines the exact steps to fine-tune the model (we'll just run over it quickly). The second approach makes use of the sentence-transformers library's excellent utilities for fine-tuning.
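A minimal sketch of the second, sentence-transformers-based approach - the base model name and the toy anchor/positive pairs are assumptions for illustration:

```python
from sentence_transformers import SentenceTransformer, models, InputExample, losses
from torch.utils.data import DataLoader

# build a sentence transformer from a plain BERT checkpoint + mean pooling
bert = models.Transformer("bert-base-uncased")
pooling = models.Pooling(bert.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[bert, pooling])

# MNR loss only needs (anchor, positive) pairs; other pairs in the batch act as negatives
train_examples = [
    InputExample(texts=["how do I open a bank account", "steps to open a bank account"]),
    InputExample(texts=["what is the capital of france", "paris is the capital of france"]),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=2)
loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
```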
π² Pinecone article:
https://www.pinecone.io/learn/fine-tune-sentence-transformers-mnr/
Check out the Sentence Transformers library:
https://github.com/UKPLab/sentence-transformers
Talk by Nils Reimers (one of the SBERT creators) on training:
https://www.youtube.com/watch?v=RHXZKUr8qOY
He does more NLP vids too:
https://www.youtube.com/channel/UC1zCuTrfpjT6Sv2kJk-JkvA
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:02 NLI Training Data
02:56 Preprocessing
10:11 SBERT Finetuning Visuals
14:14 MNR Loss Visual
16:37 MNR in PyTorch
23:04 MNR in Sentence Transformers
34:20 Results
36:14 Outro | Science & Technology | 86 | 0 |
iCkftKsnQgg | UCv83tO5cePwHMt1952IVVHw | Hybrid Search Walkthrough in Pinecone | 2021-10-29 01:44:06 UTC | 2021-10-29 15:05:00 UTC | 1040 seconds | Pinecone offers a production-ready vector database for high performance and reliable *semantic search* at scale. But did you know Pinecone's semantic search can be paired with the more traditional keyword search?
Semantic search is a compelling technology allowing us to search using abstract concepts and *meaning* rather than relying on specific words. However, sometimes a simple keyword search can be just as valuable - especially if we know the exact wording of what we're searching for.
In this video, we will explore these features through a start-to-finish example of basic keyword search in Pinecone.
π² Check the docs:
https://www.pinecone.io/docs/examples/basic-hybrid-search/
π Free API key:
https://app.pinecone.io
00:52 How Hybrid Search Works
01:25 Preprocessing
03:01 Creating Keywords
05:34 Creating an Index
06:50 Data Upsert
08:33 Query Setup
10:52 Keyword Search
12:31 OR Logic
14:49 AND Logic
15:10 Negation
17:04 Outro | Science & Technology | 17 | 1 |
3fsIcMgUOY8 | UCv83tO5cePwHMt1952IVVHw | API Series #3 - How to Deploy Flask APIs to the Cloud (GCP) | 2021-11-01 23:16:31 UTC | 2021-11-02 14:30:00 UTC | 806 seconds | Building that first API is for many of us, a significant step towards creating impactful tools that may one day be used by many developers. But often those APIs don't make it out of our local machines.
Fortunately, it's incredibly easy to deploy APIs. Assuming you have no idea what you're doing right now, you will probably be deploying your first API in around ten minutes.
I'm not joking, it's super easy. Let's get started.
π Article:
https://towardsdatascience.com/how-to-deploy-a-flask-api-8d54dd8d8b8a
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
π Free article link:
TO ADD | Science & Technology | 75 | 2 |
NNS5pOpjvAQ | UCv83tO5cePwHMt1952IVVHw | All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages) | 2021-11-04 11:27:18 UTC | 2021-11-04 13:00:10 UTC | 2392 seconds | Weβve learned about how sentence transformers can be used to create high-quality vector representations of text. We can then use these vectors to find similar vectors, which can be used for many applications such as semantic search or topic modeling.
These models are very good at producing meaningful, information-dense vectors. But they don't allow us to compare sentences across different languages.
Often this may not be a problem. However, the world is becoming increasingly interconnected, and many companies span across multiple borders and languages. Naturally, there is a need for sentence vectors that are language agnostic.
Unfortunately, very few textual similarity datasets span multiple languages, particularly for less common languages. And the standard training methods used for sentence transformers would require these types of datasets.
Different approaches need to be used. Fortunately, some techniques allow us to extend models to other languages using more easily obtained language translations.
In this video, we will cover how multilingual models work and are built. Weβll learn how to develop our own multilingual sentence transformers, the datasets to look for, and how to use high-performing pretrained multilingual models.
π² Pinecone article:
https://www.pinecone.io/learn/multilingual-transformers/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:19 Multilingual Vectors
05:55 Multi-task Training (mUSE)
09:36 Multilingual Knowledge Distillation
11:13 Knowledge Distillation Training
13:43 Visual Walkthrough
14:53 Parallel Data Prep
20:23 Choosing a Student Model
24:55 Initializing the Models
30:05 ParallelSentencesDataset
33:54 Loss and Fine-tuning
36:59 Model Evaluation
39:23 Outro | Science & Technology | 30 | 0 |
-td57YvJdHc | UCv83tO5cePwHMt1952IVVHw | Question-Answering in NLP (Extractive QA and Abstractive QA) | 2021-11-13 19:09:02 UTC | 2021-11-16 12:06:13 UTC | 2886 seconds | Search is a crucial functionality in many applications and companies globally. Whether in manufacturing, finance, healthcare, or *almost* any other industry, organizations have vast internal information and document repositories.
Unfortunately, the scale of many companies' data means that the organization and accessibility of information can become incredibly inefficient. The problem is exacerbated for language-based information. Language is a tool for people to communicate often abstract ideas and concepts. Naturally, ideas and concepts are harder for a computer to comprehend and store in a meaningful way.
How do we minimize this problem? The answer lies with *semantic search*, specifically with the question-answering (QA) flavor of semantic search.
This article will introduce the different forms of QA, the components of these 'QA stacks', and where we might use them.
π² Pinecone article:
https://www.pinecone.io/learn/question-answering/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Meaningful Search
01:23 Use-case
02:22 Open Domain QA (ODQA)
06:41 SQuAD Format
10:45 Quick Preprocessing
15:18 Creating Context Vectors Database
23:24 Open-book Extractive QA
32:50 Open-book Abstractive QA
41:53 Closed-book Abstractive QA
47:27 Final Thoughts | Science & Technology | 72 | 0 |
pNvujJ1XyeQ | UCv83tO5cePwHMt1952IVVHw | Today Unsupervised Sentence Transformers, Tomorrow Skynet (how TSDAE works) | 2021-11-24 14:20:20 UTC | 2021-11-24 16:24:24 UTC | 2661 seconds | To adapt a pretrained transformer to produce meaningful sentence vectors, we typically need a more supervised fine-tuning approach. We can use datasets like natural language inference (NLI) pairs, labeled semantic textual similarity (STS) data, or parallel data (pairs of translations).
For some domains and languages, such as finance and English, this data is fairly easy to find or gather. But many domains and many languages have very little labeled data. If you can find semantic similarity pairs for the agriculture industry, please let me know. There are many languages, such as Dhivehi, where unlabeled data is hard to find and labeled data is practically non-existent.
This means you either spend a very long time gathering tens of thousands of labeled samples or you can try an unsupervised fine-tuning approach.
Unsupervised training methods for sentence transformers are not as effective as their supervised counterparts, but they do work. And if you have no other choice, why not?
In this video, we will introduce the concept of unsupervised fine-tuning for sentence transformers. We will learn to train these models using the unsupervised Transformer-based Sequential Denoising Auto-Encoder (TSDAE) approach.
π² Pinecone article:
https://www.pinecone.io/learn/unsupervised-training-sentence-transformers/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Why Language Embedding Matters
05:12 Supervised Methods
05:29 Natural Language Inference
07:15 Semantic Textual Similarity
07:43 Multilingual Training
10:00 TSDAE (Unsupervised)
18:50 Data Preparation
29:05 Initialize Model
32:39 Model Training
36:25 NLTK Error
37:15 Evaluation
41:01 TSDAE vs Supervised Methods
42:42 Why TSDAE is Cool | Science & Technology | 70 | 0 |
3IPCEeh4xTg | UCv83tO5cePwHMt1952IVVHw | Making The Most of Data: Augmented SBERT | 2021-12-16 15:46:03 UTC | 2021-12-17 14:24:40 UTC | 3310 seconds | π Free NLP for Semantic Search Course:
https://www.pinecone.io/learn/nlp
ML models are data-hungry. They consume massive amounts of data to identify generalized patterns and apply those learned patterns to new data.
As models get bigger, so do datasets. And although we have seen an explosion of data in the past decade, it is often not accessible or in an ML-friendly format, especially in niche domains.
For many niche, low-resource domains, finding or annotating a substantial dataset manually is practically impossible.
Fortunately, we don't need to label (or even find) this new data. Instead, we can automatically generate or label data using one or more *data augmentation* techniques.
In this video, we will introduce data augmentation and its application to the field of NLP. We will focus on the 'in-domain' flavor of a particular data-augmentation strategy named augmented SBERT (AugSBERT).
π² Pinecone article:
https://www.pinecone.io/learn/data-augmentation/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP | Science & Technology | 42 | 0 |
mjKqP3kRxbQ | UCv83tO5cePwHMt1952IVVHw | Building Transformer Tokenizers (Dhivehi NLP #1) | 2021-12-28 15:02:22 UTC | 2021-12-28 15:45:03 UTC | 1982 seconds | π Free NLP for Semantic Search Course:
https://www.pinecone.io/learn/nlp
Get in touch with Ashraq:
https://www.linkedin.com/in/ismailashraq/
The language of Dhivehi (or Maldivian) is fascinating. It uses a complex writing system known as Thaana, and I absolutely cannot comprehend any of it. It is so wildly different from anything I know - but, like the archipelago, it looks wonderful.
Ashraq described the difficulty of applying NLP to his native tongue of Dhivehi. There are several reasons for this, which we will explore in this video, and learn how to build an effective Dhivehi WordPiece tokenizer.
π Article:
https://towardsdatascience.com/designing-tokenizers-for-low-resource-languages-7faa4ab30ef4
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
π Article Friend Link (Free Access):
https://towardsdatascience.com/designing-tokenizers-for-low-resource-languages-7faa4ab30ef4?sk=c0c16de9eea7dbe1d2a9c106abf38e1a
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:06 Dhivehi Project
02:28 Hurdles for Low Resource Domains
04:21 Dhivehi Dataset
04:52 Download Dhivehi Corpus
08:25 Tokenizer Components
08:44 Normalizer Component
11:55 Pre-tokenization Component
14:59 Post-tokenization Component
16:26 Decoder Component
17:41 Tokenizer Implementation
21:04 Tokenizer Training
24:22 Post-processing Implementation
27:12 Decoder Implementation
28:07 Saving for Transformers
30:33 Tokenizer Test and Usage
31:36 Download Dhivehi Models
32:21 First Steps | Science & Technology | 49 | 0 |
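The tokenizer build above assembles normalizer, pre-tokenizer, post-processing, and decoder components before training. A minimal WordPiece training sketch with the HuggingFace `tokenizers` library, where the corpus path, vocabulary size, and normalizer choice are placeholders rather than the video's exact settings:

```python
# Minimal WordPiece tokenizer training sketch (corpus path and settings are placeholders).
from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, decoders, trainers

tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))
tokenizer.normalizer = normalizers.NFKD()               # unicode normalization
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()   # split on whitespace and punctuation
tokenizer.decoder = decoders.WordPiece()                 # re-join "##" subword pieces

trainer = trainers.WordPieceTrainer(
    vocab_size=30_000,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)
tokenizer.train(files=["dhivehi_corpus.txt"], trainer=trainer)  # hypothetical corpus file
tokenizer.save("dhivehi-wordpiece.json")
```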
a8jyue22SJM | UCv83tO5cePwHMt1952IVVHw | AugSBERT: Domain Transfer for Sentence Transformers | 2022-01-04 05:14:16 UTC | 2022-01-04 14:59:50 UTC | 1750 seconds | π Free NLP for Semantic Search Course:
https://www.pinecone.io/learn/nlp
When building language models, we can spend months optimizing training and model parameters, but it's useless if we don't have the correct data.
The success of our language models relies first and foremost on data. When little or no labeled data exists in our target domain, the augmented SBERT (AugSBERT) training strategy can help.
In this scenario, we can transfer information from an out-of-domain (or *source*) dataset to our target domain. We will learn how to do this here. First, we will learn to quickly assess which source datasets align best with our target domain. Then we will explain and work through the AugSBERT domain-transfer training strategy.
π² Pinecone article:
https://www.pinecone.io/learn/augsbert-domain-transfer/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
π n-gram Similarity Script: https://gist.github.com/jamescalam/b73f37017ae32bd6094747c4b0fca94a
π AugSBERT In-Domain Article: https://www.pinecone.io/learn/data-augmentation/
00:00 Why Use Domain Transfer
04:08 Strategy Outline
06:05 Train Source Cross-Encoder
12:44 Cross-Encoder Outcome
15:12 Labeling Target Data
20:31 Training Bi-encoder
23:58 Evaluator Bi-encoder Performance
28:08 Final Points | Science & Technology | 41 | 0 |
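The final step of the domain-transfer strategy above is fitting a bi-encoder on target-domain pairs that the source-domain cross-encoder has already scored. A hedged sketch of that step, with illustrative data and a placeholder base model:

```python
# Sketch of the final AugSBERT domain-transfer step: train a bi-encoder on
# cross-encoder-labeled target pairs (data and base model are illustrative).
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

labeled_target = [
    ("query about refunds", "text describing the refund policy", 0.82),
    ("query about refunds", "text describing shipping times", 0.10),
]
train_examples = [InputExample(texts=[a, b], label=score) for a, b, score in labeled_target]
loader = DataLoader(train_examples, batch_size=16, shuffle=True)

bi_encoder = SentenceTransformer("bert-base-uncased")   # mean pooling added automatically
loss = losses.CosineSimilarityLoss(bi_encoder)
bi_encoder.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)
```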
w1dMEWm7jBc | UCv83tO5cePwHMt1952IVVHw | How to build a Q&A AI in Python (Open-domain Question-Answering) | 2022-01-10 07:19:13 UTC | 2022-01-11 14:00:20 UTC | 2364 seconds | π Free NLP for Semantic Search Course:
https://www.pinecone.io/learn/nlp
How can we design natural, human-like Q&A interfaces? The answer is open-domain question-answering (ODQA). ODQA allows us to use natural language to query a database.
That means that, given a dataset like a set of internal company documents, online documentation, or, as is the case with Google, everything on the world's internet, we can retrieve relevant information in a natural, more human way.
π² Pinecone article:
https://www.pinecone.io/learn/retriever-models/
π Nils YT Talk: https://youtu.be/XNJThigyvos?t=118
π MNR Loss Article:
π Free Pinecone API Key: https://app.pinecone.io/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Why QA
04:05 Open Domain QA
08:24 Do we need to fine-tune?
11:44 How Retriever Training Works
12:59 SQuAD Training Data
16:29 Retriever Fine-tuning
19:32 IR Evaluation
25:58 Vector Database Setup
33:42 Querying
37:41 Final Notes | Science & Technology | 66 | 1 |
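The retrieval step described above boils down to encoding the query and passages with a bi-encoder and returning the nearest passages. The video stores vectors in a vector database; as a self-contained stand-in, here is an in-memory sketch using sentence-transformers' semantic search utility (model and passages are placeholders):

```python
# Minimal dense-retrieval sketch (in-memory stand-in for the vector database used in the video).
from sentence_transformers import SentenceTransformer, util

retriever = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")

passages = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Python is a popular programming language for machine learning.",
]
passage_embeddings = retriever.encode(passages, convert_to_tensor=True)

query_embedding = retriever.encode("When was the Eiffel Tower built?", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, passage_embeddings, top_k=1)
print(passages[hits[0][0]["corpus_id"]])  # best-matching passage
```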
-fzCSPsfMic | UCv83tO5cePwHMt1952IVVHw | How to build a Q&A Reader Model in Python (Open-domain QA) | 2022-01-18 12:17:09 UTC | 2022-01-18 16:37:37 UTC | 1504 seconds | π Free NLP for Semantic Search Course:
https://www.pinecone.io/learn/nlp
Open-domain question-answering (ODQA) is a wildly popular *pipeline* of databases and language models that allows us to ask a machine human-like questions and get back comprehensible, even intelligent, answers.
Despite the outward guise of simplicity, ODQA requires a reasonably advanced set of components placed together to enable the *extractive* Q&A functionality.
We call this *extractive* Q&A because the models are not generating an answer. Instead, the answer already exists but is hidden somewhere within potentially thousands, millions, or even more data sources.
By enabling extractive Q&A, we enable a more *intelligent* and *efficient* way to retrieve information from what can be massive stores of data.
π² Pinecone article:
https://www.pinecone.io/learn/reader-models/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
00:13 ODQA Components
03:09 Data Preprocessing
22:35 Fine-tuning | Science & Technology | 26 | 0 |
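The reader model described above extracts an answer span from retrieved passages rather than generating text. A minimal sketch using the transformers question-answering pipeline with a common public checkpoint (not necessarily the exact model fine-tuned in the video):

```python
# Minimal extractive "reader" sketch (checkpoint is a common public model, used as a placeholder).
from transformers import pipeline

reader = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = (
    "Open-domain QA pipelines first retrieve candidate passages, "
    "then a reader model extracts the answer span from those passages."
)
answer = reader(question="What does the reader model do?", context=context)
print(answer["answer"], answer["score"])  # extracted span and its confidence
```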
JLKUV-LiXjk | UCv83tO5cePwHMt1952IVVHw | Streamlit for ML #1 - Installation and API | 2022-01-25 12:04:00 UTC | 2022-01-25 16:00:09 UTC | 735 seconds | βΆοΈ Streamlit for ML Part 2:
https://www.youtube.com/watch?v=U0EoaFFGyTg&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=2
Streamlit has proven itself as an incredibly popular tool for quickly putting together high-quality ML-oriented web apps. More recently, it has seen wider adoption in production environments by ever-larger organizations.
All of this means that there is no better time to pick up some experience with Streamlit. Fortunately, the basics of Streamlit are incredibly easy to learn, and for most tools, this will be more than you need!
In this series, we will introduce Streamlit by building a general knowledge Q&A interface. We will learn about key Streamlit components like write, text_input, and container; how to use external libraries like Bootstrap to quickly create new app components; and how to use caching to speed up our app.
π Article:
https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
π Friend link to article:
https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec?sk=ac5e0b7c39938f52162862411a66a58b
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
00:39 App Outline
03:36 Streamlit Installation
06:15 Streamlit API Basics | Science & Technology | 32 | 0 |
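The API basics covered above (write, text_input, container) fit in a few lines. A minimal sketch of a Streamlit script, with the Q&A logic stubbed out:

```python
# app.py - minimal sketch of the Streamlit basics mentioned above (Q&A logic is stubbed out).
# Run with: streamlit run app.py
import streamlit as st

st.write("# Q&A Demo")  # write renders markdown, dataframes, and more

query = st.text_input("Ask a question", "")

with st.container():     # groups elements so they render together
    if query:
        st.write(f"You asked: {query}")
    else:
        st.write("Type a question above to get started.")
```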
U0EoaFFGyTg | UCv83tO5cePwHMt1952IVVHw | Streamlit for ML #2 - ML Models and APIs | 2022-01-26 16:07:51 UTC | 2022-01-26 16:30:36 UTC | 911 seconds | βΆοΈ Streamlit for ML Part 3:
https://www.youtube.com/watch?v=lYDiSCDcxmc&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=3
Streamlit has proven itself as an incredibly popular tool for quickly putting together high-quality ML-oriented web apps. More recently, it has seen wider adoption in production environments by ever-larger organizations.
All of this means that there is no better time to pick up some experience with Streamlit. Fortunately, the basics of Streamlit are incredibly easy to learn, and for most tools, this will be more than you need!
In this series, we will introduce Streamlit by building a general knowledge Q&A interface. We will learn about key Streamlit components like write, text_input, and container; how to use external libraries like Bootstrap to quickly create new app components; and how to use caching to speed up our app.
π Code to Create Index:
https://gist.github.com/jamescalam/2123ce0bb8a871f48a151a023a7ece67
π Article:
https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
π Friend link to article:
https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec?sk=ac5e0b7c39938f52162862411a66a58b
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
00:47 Creating the Vector DB
08:56 Implementing Retrieval | Science & Technology | 19 | 0 |
lYDiSCDcxmc | UCv83tO5cePwHMt1952IVVHw | Streamlit for ML #3 - Make Apps Fast with Caching | 2022-01-27 13:13:14 UTC | 2022-01-27 15:00:36 UTC | 584 seconds | βΆοΈ Streamlit for ML Part 4:
https://www.youtube.com/watch?v=XdxeKiY2UXg&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=4
Streamlit has proven itself as an incredibly popular tool for quickly putting together high-quality ML-oriented web apps. More recently, it has seen wider adoption in production environments by ever-larger organizations.
All of this means that there is no better time to pick up some experience with Streamlit. Fortunately, the basics of Streamlit are incredibly easy to learn, and for most tools, this will be more than you need!
In this series, we will introduce Streamlit by building a general knowledge Q&A interface. We will learn about key Streamlit components like write, text_input, and container; how to use external libraries like Bootstrap to quickly create new app components; and how to use caching to speed up our app.
βΆοΈ Streamlit for ML Playlist:
https://www.youtube.com/watch?v=JLKUV-LiXjk&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=1
π Article:
https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
π Friend link to article:
https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec?sk=ac5e0b7c39938f52162862411a66a58b
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
02:35 Streamlit Caching
06:56 Experimental Caching Primitives | Science & Technology | 24 | 0 |
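The caching covered above keeps expensive objects and computations from re-running on every script rerun. A hedged sketch: at the time of this video the standard decorator was st.cache and the "experimental caching primitives" included st.experimental_singleton (newer Streamlit releases renamed these to st.cache_data and st.cache_resource), with a placeholder model name:

```python
# Caching sketch (decorator names reflect Streamlit as of early 2022; model name is a placeholder).
import streamlit as st
import pandas as pd
from sentence_transformers import SentenceTransformer

@st.experimental_singleton
def load_retriever():
    # heavy, non-serializable object: created once per process and shared across reruns
    return SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")

@st.cache
def load_data(url: str) -> pd.DataFrame:
    # re-run only when `url` changes; the returned DataFrame is cached
    return pd.read_csv(url)

retriever = load_retriever()
st.write("Model loaded:", type(retriever).__name__)
```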
XdxeKiY2UXg | UCv83tO5cePwHMt1952IVVHw | Streamlit for ML #4 - Adding Bootstrap Components | 2022-01-28 10:05:43 UTC | 2022-01-28 15:11:42 UTC | 590 seconds | βΆοΈ Streamlit for ML Part 5.1:
https://www.youtube.com/watch?v=SGazDb8o-to&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=5
Streamlit has proven itself as an incredibly popular tool for quickly putting together high-quality ML-oriented web apps. More recently, it has seen wider adoption in production environments by ever-larger organizations.
All of this means that there is no better time to pick up some experience with Streamlit. Fortunately, the basics of Streamlit are incredibly easy to learn, and for most tools, this will be more than you need!
In this series, we will introduce Streamlit by building a general knowledge Q&A interface. We will learn about key Streamlit components like write, text_input, and container; how to use external libraries like Bootstrap to quickly create new app components; and how to use caching to speed up our app.
βΆοΈ Streamlit for ML Playlist:
https://www.youtube.com/watch?v=JLKUV-LiXjk&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=1
π Article:
https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
π Friend link to article:
https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec?sk=ac5e0b7c39938f52162862411a66a58b
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
02:35 Streamlit Caching
06:56 Experimental Caching Primitives | Science & Technology | 38 | 1 |
JydpRavoJqI | UCv83tO5cePwHMt1952IVVHw | Adding New Doc Stores to Haystack | 2022-02-15 04:56:36 UTC | 2022-03-15 15:00:14 UTC | 1825 seconds | π₯³ Released with Haystack v1.3! Install direct from PyPI with:
pip install 'farm-haystack[pinecone]'
PR:
https://github.com/deepset-ai/haystack/pull/2254
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
02:15 Contributing or Testing
03:31 ODQA
06:20 What is Haystack?
08:13 Haystack QA Workflow
14:52 Contributing to Open Source
22:54 Haystack Doc Stores
26:09 Doc Store Core Methods
29:31 Final Notes, Contribute/Test | Science & Technology | 14 | 0 |
SGazDb8o-to | UCv83tO5cePwHMt1952IVVHw | Streamlit for ML #5.1 - Custom React Components in Streamlit Setup | 2022-02-17 15:24:47 UTC | 2022-02-17 15:45:58 UTC | 1158 seconds | βΆοΈ Streamlit for ML Part 5.2:
https://www.youtube.com/watch?v=mxm8ihWoVbk&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=6
There are plenty of prebuilt components designed by Streamlit themselves, and if you can't find what you need, there are even community-built components.
If you're still stuck, and there is just no component that covers what you need, we can build our own custom components.
To do this we do need to start playing with the lower-level web technologies that Streamlit itself is built upon. So it isn't as simple as using a prebuilt component. However, thanks to pre-made templates, it isn't too hard to create a new component.
In this sub-series, we'll learn exactly how to create custom components. We'll focus on designing an interactive card component using Material UI design elements.
βΆοΈ Streamlit for ML Playlist:
https://www.youtube.com/watch?v=JLKUV-LiXjk&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=1
π Article:
Coming soon
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
π Friend link to article:
Coming soon
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
02:19 Environment Setup
03:42 Starting with a Template
07:41 Naming for Card Component
11:31 Installing Node Packages
15:12 Running the Component | Science & Technology | 26 | 1 |
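On the Python side, a custom component of the kind described above is wired up with Streamlit's declare_component. A sketch of that glue code, where the component name, dev-server URL, and card arguments are placeholders for whatever the template uses:

```python
# Python-side wiring for a custom component (name, URL, and arguments are placeholders).
import streamlit.components.v1 as components

# during development, point at the React dev server; for release, pass path= to the build dir
_card_component = components.declare_component(
    "card_component",
    url="http://localhost:3001",
)

def card(title: str, text: str, key=None):
    # kwargs are forwarded to the React component; its return value comes back to Python
    return _card_component(title=title, text=text, key=key, default=None)
```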
mxm8ihWoVbk | UCv83tO5cePwHMt1952IVVHw | Streamlit for ML #5.2 - MUI Card Component Build | 2022-02-20 15:25:56 UTC | 2022-02-21 14:00:31 UTC | 1619 seconds | βΆοΈ Streamlit for ML Part 5.3:
https://www.youtube.com/watch?v=lZ2EaPUnV7k&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=7
There are plenty of prebuilt components designed by Streamlit themselves, and if you can't find what you need, there are even community-built components.
If you're still stuck, and there is just no component that covers what you need, you can build your own custom components.
To do this we do need to start playing with the lower-level web technologies that Streamlit itself is built upon. So it isn't as simple as using a prebuilt component. However, thanks to pre-made templates, it isn't too hard to create a new component.
In this sub-series, we'll learn exactly how to create custom components. We'll focus on designing an interactive card component using Material UI design elements.
βΆοΈ Streamlit for ML Playlist:
https://www.youtube.com/watch?v=JLKUV-LiXjk&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=1
π Article:
Coming soon
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
π Friend link to article:
Coming soon
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:59 Clearing Card Component
04:59 Building the Component
14:22 Pulling in MUI Code
24:08 Adding Roboto Font
26:05 Final Points | Science & Technology | 16 | 1 |
lZ2EaPUnV7k | UCv83tO5cePwHMt1952IVVHw | Streamlit for ML #5.3 - Publishing Components to Pip | 2022-02-27 16:28:49 UTC | 2022-02-28 17:00:29 UTC | 858 seconds | π Free NLP for Semantic Search Course:
https://www.pinecone.io/learn/nlp
There are plenty of prebuilt components designed by Streamlit themselves, and if you can't find what you need, there are even community-built components.
If you're still stuck, and there is just no component that covers what you need, we can build our own custom components.
To do this we do need to start playing with the lower-level web technologies that Streamlit itself is built upon. So it isn't as simple as using a prebuilt component. However, thanks to pre-made templates, it isn't too hard to create a new component.
In this sub-series, we'll learn exactly how to create custom components. We'll focus on designing an interactive card component using Material UI design elements.
β Python Packaging Video:
https://youtu.be/JkeNVaiUq_c
βΆοΈ Streamlit for ML Playlist:
https://www.youtube.com/watch?v=JLKUV-LiXjk&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=1
π Article:
Coming soon
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
π Friend link to article:
Coming soon
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:09 PyPI
02:41 Preparing for Distribution
05:43 Build React Component
06:39 Create Python Package
11:57 Pip Install
13:58 Ending | Science & Technology | 10 | 0 |
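Publishing the component, as outlined above, means bundling the built React frontend inside a pip-installable Python package. A hedged setup.py sketch under assumed names and paths (the key detail is shipping the frontend build output as package data):

```python
# setup.py sketch for distributing a custom component (package name and layout are hypothetical).
import setuptools

setuptools.setup(
    name="st-card-component",                 # hypothetical package name
    version="0.1.0",
    packages=setuptools.find_packages(),
    include_package_data=True,                # include the React build files (listed via MANIFEST.in)
    install_requires=["streamlit>=1.0"],
)
```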
J0cntjLKpmU | UCv83tO5cePwHMt1952IVVHw | Train Sentence Transformers by Generating Queries (GenQ) | 2022-03-08 03:10:28 UTC | 2022-03-08 14:52:23 UTC | 1634 seconds | π Free NLP for Semantic Search Course:
https://www.pinecone.io/learn/nlp
Fine-tuning effective dense retrieval models is challenging. Bi-encoders (sentence transformers) are the current best models for dense retrieval in semantic search. Unfortunately, they're also notoriously data-hungry models that typically require a particular type of labeled training data.
Hard problems like this attract attention, and as expected there is plenty of work on building ever-better techniques for training retrievers.
One of the most impressive is GenQ. This approach to building bi-encoder retrievers uses the latest text generation techniques to synthetically generate training data. In short, all we need are passages of text. The generation model then augments these passages with synthetic queries, giving us the exact format we need to train an effective bi-encoder model.
π² Pinecone article:
https://www.pinecone.io/learn/genq/
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
00:32 Why GenQ?
02:23 GenQ Overview
04:28 Training Data
06:48 Asymmetric Semantic Search
07:54 T5 Query Generation
13:52 Finetuning Bi-encoders
16:02 GenQ Code Walkthrough
21:40 Finetuning Bi-encoder Walkthrough
26:48 Final Points | Science & Technology | 39 | 0 |
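The query-generation step described above uses a T5 model trained for passage-to-query generation to produce synthetic queries for plain text. A sketch using one public checkpoint (not necessarily the exact model used in the video), with an illustrative passage:

```python
# GenQ query-generation sketch (checkpoint is one public doc-to-query model, passage is illustrative).
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("BeIR/query-gen-msmarco-t5-base-v1")
model = T5ForConditionalGeneration.from_pretrained("BeIR/query-gen-msmarco-t5-base-v1")

passage = "Bi-encoders map queries and passages into the same vector space for retrieval."
inputs = tokenizer(passage, return_tensors="pt")
outputs = model.generate(
    **inputs, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=3
)

for query in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(query)  # each (query, passage) pair becomes a synthetic training example
```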
Dn8OYkatiU0 | UCv83tO5cePwHMt1952IVVHw | Testing the New Haystack Doc Store | 2022-03-22 17:15:10 UTC | 2022-03-22 19:26:00 UTC | 1399 seconds | π₯³ Released with Haystack v1.3! Install direct from PyPI with:
pip install 'farm-haystack[pinecone]'
PR:
https://github.com/deepset-ai/haystack/pull/2254
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:19 Demo Start and Install
03:25 Initialization
06:30 Download and Write Documents
10:55 Extractive QA Pipeline
11:23 Fetch by ID
19:01 Metadata Filtering
22:24 Get All Documents | Science & Technology | 5 | 0 |
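The initialization and document-writing steps demoed above roughly follow this shape. The import path and constructor arguments below follow the Haystack v1.3-era document store as I recall it; treat the parameter names and values as assumptions and check the linked PR for the definitive API:

```python
# PineconeDocumentStore sketch (parameter names/values are assumptions; API key and index are placeholders).
from haystack.document_stores import PineconeDocumentStore

document_store = PineconeDocumentStore(
    api_key="YOUR_PINECONE_API_KEY",   # placeholder
    index="haystack-demo",             # placeholder index name
    similarity="cosine",
    embedding_dim=768,
)

document_store.write_documents([
    {"content": "Haystack is an open-source framework for building search systems."},
])
print(document_store.get_document_count())
```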
uEbCXwInnPs | UCv83tO5cePwHMt1952IVVHw | Is GPL the Future of Sentence Transformers? | Generative Pseudo-Labeling Deep Dive | 2022-03-29 10:46:39 UTC | 2022-03-30 12:52:39 UTC | 3175 seconds | π Free NLP for Semantic Search Course:
https://www.pinecone.io/learn/nlp
Training sentence transformers is hard; they need vast amounts of labeled data. On one hand, the internet is full of data, and, on the other, this data is *not* in the format we need. We usually need to use a supervised training method to train a high-performance bi-encoder (sentence transformer) model.
Research keeps producing techniques that place us ever closer to fine-tuning high-performance bi-encoder models with unlabeled text data. One of the most promising is GPL. At its core, GPL allows us to take unstructured text data and use it to build models that can understand this text. These models can then intelligently respond to natural language queries regarding this same text data.
It is a fascinating approach, with massive potential across innumerable use cases spanning all industries and borders. With that in mind, let's dive into the details of GPL and how we can implement it to build high-performance LMs with nothing more than plain text.
π² Pinecone article:
https://www.pinecone.io/learn/gpl/
π Notebooks:
https://github.com/pinecone-io/examples/tree/master/learn/nlp_course/gpl
π€ 70% Discount on the NLP With Transformers in Python course:
https://bit.ly/3DFvvY5
π Subscribe for Article and Video Updates!
https://jamescalam.medium.com/subscribe
https://medium.com/@jamescalam/membership
πΎ Discord:
https://discord.gg/c5QtDB9RAP
00:00 Intro
01:08 Semantic Web and Other Uses
04:36 Why GPL?
07:31 How GPL Works
10:37 Query Generation
12:08 CORD-19 Dataset and Download
13:27 Query Generation Code
21:53 Query Generation is Not Perfect
22:39 Negative Mining
26:28 Negative Mining Implementation
27:21 Negative Mining Code
35:19 Pseudo-Labeling
35:55 Pseudo-Labeling Code
37:01 Importance of Pseudo-Labeling
41:20 Margin MSE Loss
43:40 MarginMSE Fine-tune Code
46:30 Choosing Number of Steps
48:54 Fast Evaluation
51:43 What's Next for Sentence Transformers? | Science & Technology | 76 | 2 |
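The pseudo-labeling step covered above scores each (query, positive) and (query, negative) pair with a cross-encoder; the score margin becomes the label the bi-encoder learns to reproduce via MarginMSE. A small sketch of that margin computation, with an illustrative query and passages and a commonly used public cross-encoder checkpoint:

```python
# GPL pseudo-labeling sketch: the cross-encoder score margin becomes the training label
# (query/passages are illustrative; in practice the negative is mined from the corpus).
from sentence_transformers.cross_encoder import CrossEncoder

cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "how does a vaccine trigger immunity?"
positive = "Vaccines expose the immune system to a harmless form of a pathogen."
negative = "The weather in spring is often unpredictable."

pos_score, neg_score = cross_encoder.predict([(query, positive), (query, negative)])
margin = float(pos_score - neg_score)  # pseudo-label used by the MarginMSE loss
print(margin)
```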