---
dataset_info:
  features:
  - name: title
    dtype: string
  - name: link
    dtype: string
  - name: article
    dtype: string
  splits:
  - name: train
    num_bytes: 503475
    num_examples: 428
  download_size: 218936
  dataset_size: 503475
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
This dataset was created by scraping the Hugging Face community blog index and each linked post with the following code:
```py
!pip install -Uq datasets beautifulsoup4 requests pandas

import requests
from bs4 import BeautifulSoup, Comment
import pandas as pd
from datasets import Dataset


def get_content(url):
    """Fetch a page and return its parsed soup, raising on HTTP errors."""
    response = requests.get(url)
    response.raise_for_status()
    return BeautifulSoup(response.text, "html.parser")


url = "https://huggingface.co/blog/community"
soup = get_content(url)

# Each community blog post is listed as an <article> element
articles = soup.find_all("article")
titles = [article.h4.text for article in articles]
links = [
    f'https://hf.co{article.find("a", class_="block px-3 py-2 cursor-pointer").get("href")}'
    for article in articles
]


def get_article(soup):
    """Extract the post body, which sits between two HTML comments."""
    # Find all comments in the document
    comments = soup.find_all(string=lambda text: isinstance(text, Comment))

    # Identify the start and end comments that delimit the article body
    start_comment = None
    end_comment = None
    for comment in comments:
        comment_text = comment.strip()
        if comment_text == "HTML_TAG_START":
            start_comment = comment
        elif comment_text == "HTML_TAG_END":
            end_comment = comment

    # Check if both comments were found
    if start_comment and end_comment:
        # Collect all elements between the start and end comments
        contents = []
        current = start_comment.next_sibling
        while current and current != end_comment:
            contents.append(current)
            current = current.next_sibling
        # Convert the collected elements to a single HTML string
        return "".join(str(item) for item in contents)
    return "Start or end comment not found."


article_soups = [get_content(link) for link in links]
articles = [get_article(article_soup) for article_soup in article_soups]

# Assemble titles, links, and article bodies into a DataFrame
df = pd.DataFrame({
    "title": titles,
    "link": links,
    "article": articles,
})

# Create a Hugging Face Dataset object and push it to the Hub
dataset = Dataset.from_pandas(df)
dataset.push_to_hub("ariG23498/community-blogs")
```
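
Once pushed, the dataset can be loaded back from the Hub with `datasets`. A minimal sketch, assuming the single `train` split and the `title`/`link`/`article` columns declared in the metadata above:

```py
from datasets import load_dataset

# Load the pushed dataset; the "train" split holds all 428 scraped posts
ds = load_dataset("ariG23498/community-blogs", split="train")

# Each row has the post title, its URL, and the raw HTML body
print(ds[0]["title"], ds[0]["link"])
```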