shivendrra committed
Commit 9aa57be
Parent: bff6350

Create README.md

Files changed (1)
  1. README.md +99 -0
README.md ADDED
@@ -0,0 +1,99 @@

---
task_categories:
- text-generation
- summarization
language:
- en
- hi
- ja
- fr
tags:
- textdataset
- text
- youtube
- web-scraped data
- youtube transcripts
- llm training
- transformer models
size_categories:
- 1B<n<10B
- 100M<n<1B
---

# Dataset Card for YouTubeTranscriptData

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->
This dataset contains transcripts of around 167K YouTube videos, including coding lectures, podcasts, interviews, news videos, commentary, and song lyrics. It also contains several files generated through web scraping.

- **Curated by:** [Shivendra Singh](https://linktr.ee/shivendrra_)
- **License:** [none]

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** [SmallLanguageModel](https://github.com/shivendrra/SmallLanguageModel-project)
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->
- Can be used to train transformer models and BPE tokenizers from scratch (see the tokenizer sketch after this list)
- Useful for learning and research purposes
- Or whatever else you can think of; do whatever the fuck you want.
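
As a rough illustration of the first use case, here is a minimal sketch that trains a byte-level BPE tokenizer on the raw transcript text with the Hugging Face `tokenizers` library. The file name `transcripts.txt` and the vocabulary size are placeholders, not part of this dataset's actual layout.

```python
# Minimal sketch: train a byte-level BPE tokenizer on the raw transcript text.
# "transcripts.txt" is a hypothetical path to the concatenated transcripts.
from tokenizers import Tokenizer, models, trainers, pre_tokenizers, decoders

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
tokenizer.decoder = decoders.ByteLevel()

trainer = trainers.BpeTrainer(
    vocab_size=32_000,  # placeholder size, tune for your model
    special_tokens=["[UNK]", "[PAD]", "[BOS]", "[EOS]"],
)

tokenizer.train(files=["transcripts.txt"], trainer=trainer)
tokenizer.save("bpe_tokenizer.json")

print(tokenizer.encode("transcripts of around 167K youtube videos").tokens)
```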

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->
Used to train a 76-million-parameter transformer model.

[Github repo](https://github.com/shivendrra/SmallLanguageModel-project)

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
Not suitable for fine-tuning existing base or pre-trained models; it is meant for NLP experiments and for training base models from scratch.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
I'll add some fine-tuning data and then update this section.

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->
I wanted to create an app that would help me write scripts for my YouTube videos. I fucked around a little with GPT-3.5 fine-tuning, LangChain, and the YouTube/Google APIs, and got the idea to build a model and train it from scratch, all by myself.

[Youtube video](https://youtu.be/PVpyN_2z5II?si=Q1yl-sVp8kxaGyre)

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
YouTube videos:
- podcasts like Lex Fridman's, Waveform, Joe Rogan, The Vergecast, Bill Gates, etc.
- videos from Canadian Lad, Aevy TV, SNL, LEMMiNO, Mrwhosetheboss, Johnny Harris, and many more.
- news videos from Vox, The Wall Street Journal, The New York Times, The Guardian, etc.
- interviews from Variety, WIRED, Y Combinator, EO.
- lectures from MIT OpenCourseWare, CS50, freeCodeCamp, CrashCourse, etc.
- tech and science from Kurzgesagt, Real Engineering, Arvin Ash, Vsauce, Veritasium, etc.

Britannica.com:
- articles on various topics like Covid, nuclear reactions, Antarctica, the Nobel Prize, great leaders, countries, etc.

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
Used the [Youtube V3 API](https://console.cloud.google.com/apis/api/youtube.googleapis.com/) to fetch video IDs from a particular YouTube channel and generate a target URL for each video, then used the [Youtube Transcript API](https://pypi.org/project/youtube-transcript-api/) to fetch the transcripts and write them to a .txt file.
Made a JSON file containing the channel IDs of around 45 channels and fetched transcripts from around 167K videos.
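
The pipeline described above can be approximated with something like the sketch below, using the `google-api-python-client` and `youtube-transcript-api` packages. The API key, channel ID, and output path are placeholders, not the exact values or script used for this dataset.

```python
# Rough sketch of the collection step (placeholder key/channel, not the exact script used).
# pip install google-api-python-client youtube-transcript-api
from googleapiclient.discovery import build
from youtube_transcript_api import YouTubeTranscriptApi

API_KEY = "YOUR_YOUTUBE_DATA_API_KEY"    # placeholder
CHANNEL_ID = "UC_PLACEHOLDER_CHANNEL_ID" # placeholder

youtube = build("youtube", "v3", developerKey=API_KEY)

# 1. Fetch video IDs from the channel via the YouTube Data API v3.
video_ids, page_token = [], None
while True:
    response = youtube.search().list(
        part="id", channelId=CHANNEL_ID, maxResults=50,
        type="video", pageToken=page_token,
    ).execute()
    video_ids += [item["id"]["videoId"] for item in response["items"]]
    page_token = response.get("nextPageToken")
    if not page_token:
        break

# 2. Fetch each transcript and append it to a plain .txt file.
with open("transcripts.txt", "a", encoding="utf-8") as f:
    for vid in video_ids:
        try:
            transcript = YouTubeTranscriptApi.get_transcript(vid)
        except Exception:
            continue  # some videos have no transcript available
        f.write(" ".join(chunk["text"] for chunk in transcript) + "\n")
```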

Web-scraped data was generated with a scraper that pulled text from britannica.com and from sites returned by the GoogleCustomSearch API.
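
A minimal sketch of that scraping flow, assuming the Google Custom Search JSON API plus `requests` and `BeautifulSoup`, is shown below. The API key, search-engine ID, query, and output file are placeholders; the actual scraper may differ.

```python
# Rough sketch of the scraping step (placeholder keys and query, not the exact scraper used).
# pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

API_KEY = "YOUR_GOOGLE_API_KEY"  # placeholder
CX = "YOUR_SEARCH_ENGINE_ID"     # placeholder
QUERY = "site:britannica.com nuclear reactions"

# 1. Get candidate URLs from the Custom Search JSON API.
search = requests.get(
    "https://www.googleapis.com/customsearch/v1",
    params={"key": API_KEY, "cx": CX, "q": QUERY},
    timeout=30,
).json()
urls = [item["link"] for item in search.get("items", [])]

# 2. Download each page and keep only the paragraph text.
with open("scraped_articles.txt", "a", encoding="utf-8") as f:
    for url in urls:
        page = requests.get(url, timeout=30)
        soup = BeautifulSoup(page.text, "html.parser")
        text = "\n".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))
        f.write(text + "\n")
```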

[Build your own LLM using YouTube transcript data (blog post)](https://medium.com/@shivendrra_/build-your-own-llm-using-youtube-transcript-data-87c04469c5e2)