---
task_categories:
- text-generation
- summarization
language:
- en
- hi
- ja
- fr
tags:
- textdataset
- text
- youtube
- web scraped data
- youtube transcripts
- llm training
- transformer models
size_categories:
- 1B<n<10B
- 100M<n<1B
---

# Dataset Card for YouTubeTranscriptData

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->
This dataset contains transcripts from around 167K YouTube videos, including coding lectures, podcasts, interviews, news videos, commentary, and song lyrics. It also includes several files generated by web scraping.



- **Curated by:** [Shivendra Singh](https://linktr.ee/shivendrra_)
- **License:** None specified

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** [SmallLanguageModel](https://github.com/shivendrra/SmallLanguageModel-project)
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->
- Can be used to train transformer models or BPE tokenizers
- Suitable for learning and research purposes
- Or anything else you can think of
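To illustrate the tokenizer use case, here is a minimal sketch of byte-pair-encoding (BPE) training in pure Python. This is an illustrative toy, not the project's actual code; in practice a library such as Hugging Face `tokenizers` would be trained on the dataset's transcript files instead of the tiny inline sample used here.

```python
from collections import Counter

def train_bpe(text: str, num_merges: int):
    """Learn up to `num_merges` BPE merge rules from raw text."""
    # Start with each word represented as a tuple of single characters.
    words = Counter(tuple(w) for w in text.split())
    merges = []
    for _ in range(num_merges):
        # Count all adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for word, freq in words.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        merges.append(best)
        # Apply the merge everywhere it occurs.
        merged = {}
        for word, freq in words.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            merged[tuple(out)] = merged.get(tuple(out), 0) + freq
        words = Counter(merged)
    return merges

# Toy corpus: the first learned merges are ('l','o') then ('lo','w').
merges = train_bpe("low low low lower lowest", 3)
```

On transcript-scale data, the same loop (run for tens of thousands of merges) yields the subword vocabulary a transformer is trained on.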

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->
Used to train a 76-million-parameter transformer model.

[Github repo](https://github.com/shivendrra/SmallLanguageModel-project)

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
Not intended for fine-tuning existing base or pre-trained models; it is meant for NLP experiments and for training base models from scratch.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
I'll add some fine-tuning data later and update this section then.

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->
I wanted to create an app that would help me write scripts for my YouTube videos. I experimented a little with GPT-3.5 fine-tuning, LangChain, and the YouTube/Google APIs, and that gave me the idea to build a model and train it from scratch, all by myself.

[Youtube video](https://youtu.be/PVpyN_2z5II?si=Q1yl-sVp8kxaGyre)

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
YouTube videos:

- podcasts such as Lex Fridman's, Waveform, Joe Rogan's, The Vergecast, Bill Gates, etc.
- videos from Canadian Lad, Aevy TV, SNL, Lemmino, Mrwhosetheboss, Johnny Harris, and many more.
- news videos from Vox, The Wall Street Journal, The New York Times, The Guardian, etc.
- interviews from Variety, WIRED, Y Combinator, EO, etc.
- lectures from MIT OpenCourseWare, CS50, freeCodeCamp, CrashCourse, etc.
- tech and science videos from Kurzgesagt, Real Engineering, Arvin Ash, Vsauce, Veritasium, etc.

Britannica.com:

- articles on various topics such as Covid, nuclear reactions, Antarctica, the Nobel Prize, great leaders, countries, etc.

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
Used the [YouTube Data API v3](https://console.cloud.google.com/apis/api/youtube.googleapis.com/) to fetch the video IDs of a particular YouTube channel and generate a target URL for each video. Then used the [YouTube Transcript API](https://pypi.org/project/youtube-transcript-api/) to fetch each video's transcript and write it to a .txt file.
A JSON file containing the channel IDs of around 45 channels was used to fetch transcripts from around 167K videos.
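A rough sketch of that per-video step might look like the following. This is an assumption-laden illustration, not the project's actual script: the video ID is an arbitrary example, the file name is made up, and the `get_transcript` call matches the classic `youtube-transcript-api` interface (newer releases of the package use an instance-based API instead).

```python
def video_url(video_id: str) -> str:
    """Build the target watch URL for a fetched video ID."""
    return f"https://www.youtube.com/watch?v={video_id}"

def save_transcript(video_id: str, out_path: str) -> None:
    """Fetch one video's transcript and append its text to a .txt file."""
    # Imported lazily so the URL helper above works without the package.
    from youtube_transcript_api import YouTubeTranscriptApi
    transcript = YouTubeTranscriptApi.get_transcript(video_id)
    with open(out_path, "a", encoding="utf-8") as f:
        for entry in transcript:  # entries carry 'text', 'start', 'duration'
            f.write(entry["text"] + "\n")

# Example video ID, purely illustrative.
url = video_url("dQw4w9WgXcQ")
```

Looping `save_transcript` over every video ID returned by the Data API for each of the ~45 channels produces the raw .txt corpus.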

The web-scraped data was generated with a web scraper that collected text from britannica.com and other sites returned by the Google Custom Search API.
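The core of such a scraper is extracting paragraph text from fetched HTML. A minimal, standard-library-only sketch (the inline HTML sample is a stand-in for a fetched britannica.com page; the real scraper's selectors and fetching code are not part of this card):

```python
from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    """Collect the text content of every <p> element."""
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p = True
            self.paragraphs.append("")

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False

    def handle_data(self, data):
        if self.in_p:
            self.paragraphs[-1] += data  # accumulate text inside the <p>

# Stand-in for a page fetched from britannica.com.
page = "<html><body><p>Antarctica is Earth's southernmost continent.</p></body></html>"
parser = ParagraphExtractor()
parser.feed(page)
```

The collected paragraphs would then be written out to the dataset's .txt files alongside the transcript data.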

[Medium article: Build your own LLM using YouTube transcript data](https://medium.com/@shivendrra_/build-your-own-llm-using-youtube-transcript-data-87c04469c5e2)