---
title: Tweet NLP Sentiment Analysis
emoji: 🐦
colorFrom: indigo
colorTo: gray
sdk: streamlit
app_file: app.py
pinned: true
---

# Configuration

`title`: _string_
Display title for the Space

`emoji`: _string_
Space emoji (emoji-only character allowed)

`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`sdk`: _string_
Can be either `gradio` or `streamlit`

`sdk_version`: _string_
Only applicable for `streamlit` SDK. See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.

`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code).
Path is relative to the root of the repository.

`pinned`: _boolean_
Whether the Space stays on top of your list.

# HISTORY OF THIS PROJECT

1. This project began when I saw Facebook's announcement on Twitter that it was rebranding to Meta, and I was curious what the public sentiment toward the announcement would be. I suspected it would be more negative than positive for various reasons, but I wanted to find out nonetheless.

2. Tweet replies to the Facebook/Meta announcement were extracted using Twitter's API, then saved to a CSV file.

3. Due to Twitter's developer policies, I cannot share that file of extracted Tweet replies and their associated metadata, so I ran sentiment analysis on the replies outside of this app and saved the results to `df_redacted.csv`. That file DOES contain the IDs of the analyzed Tweets, which is allowed under Twitter's policies. The sentiment model I used was VADER (a rough sketch of that scoring step appears after this list).

4. In future revisions, I plan to find a way to publish the code from the extraction portion of the project, along with demonstrating various methods of cleaning the Tweets and how they affect the outcome of the analysis.
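
The offline scoring step described in item 3 was run outside this app, so it is not part of the repository. The sketch below shows roughly how such a step could look using the `vaderSentiment` package and pandas. The input file name `tweets_raw.csv` and its `id`/`text` columns are assumptions for illustration only; the actual extraction output cannot be shared, and only the resulting `df_redacted.csv` ships with the Space.

```python
# Minimal sketch of VADER scoring on extracted Tweet replies.
# Assumes a local CSV (hypothetical name: tweets_raw.csv) with "id" and "text" columns.
# Requires: pip install pandas vaderSentiment
import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Load the raw replies extracted via the Twitter API (not distributable).
df = pd.read_csv("tweets_raw.csv")

# Score each reply; VADER's compound score ranges from -1 (most negative) to +1 (most positive).
df["compound"] = df["text"].apply(lambda t: analyzer.polarity_scores(str(t))["compound"])

# Bucket the compound score using VADER's conventional thresholds.
def label(score: float) -> str:
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

df["sentiment"] = df["compound"].apply(label)

# Keep only the Tweet IDs and derived scores, dropping the Tweet text and metadata
# so the saved file complies with Twitter's redistribution rules.
df[["id", "compound", "sentiment"]].to_csv("df_redacted.csv", index=False)
```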