Uses tweet data to classify sentiment as positive or negative.
In this NLP project, I perform tweet sentiment classification using logistic regression. The code takes a dataset of tweets and their corresponding sentiment labels and performs various preprocessing steps, feature extraction, and training of a logistic regression model.
- The code starts by importing the necessary libraries and setting up the file paths for the dataset.
- The tweet text is read from the file `train_text.txt`, and the corresponding labels are read from `train_labels.txt`.
- Preprocessing steps are applied to the tweet text:
  - Removing `@user` mentions and the characters before them.
  - Tokenizing the tweets into individual words.
  - Removing stop words from the tweets.
- Tweets that are empty after preprocessing are removed from the dataset.
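The preprocessing steps above can be sketched as follows. This is an illustrative version, not the project's exact code: the `preprocess` helper name, the regex for mentions, and the small hardcoded stop-word set are all assumptions (the real project may use a full stop-word list).

```python
import re

# Small illustrative stop-word set; the actual project likely uses a larger list.
STOP_WORDS = {"the", "a", "an", "is", "to", "and", "of", "in", "rt"}

def preprocess(tweet):
    """Strip @user mentions (and the text before them), tokenize, drop stop words."""
    match = None
    # Find the last @user mention and discard everything up to and including it.
    for match in re.finditer(r"@\w+", tweet):
        pass
    if match:
        tweet = tweet[match.end():]
    tokens = tweet.lower().split()
    return [w for w in tokens if w not in STOP_WORDS]

tweets = ["@user I love this movie", ""]
processed = [preprocess(t) for t in tweets]
# Drop tweets that are empty after preprocessing.
processed = [p for p in processed if p]
```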
- Three initial features (f10, f11, f12) are computed:
- f10: Log count of words in each tweet.
- f11: Log of the length of the longest word in each tweet.
- f12: Log count of words with 5 or more characters in each tweet.
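The three log-scaled features could be computed roughly as below. The function name is hypothetical, the natural log base and the zero-count guard for f12 are assumptions; check the actual code for the exact behavior.

```python
import numpy as np

def log_features(tokens):
    """Compute f10, f11, f12 for one tokenized tweet (log-scaled counts)."""
    f10 = np.log(len(tokens))                       # f10: log word count
    f11 = np.log(max(len(w) for w in tokens))       # f11: log length of longest word
    n_long = sum(1 for w in tokens if len(w) >= 5)  # words with 5+ characters
    f12 = np.log(n_long) if n_long > 0 else 0.0     # f12, guarding against log(0)
    return f10, f11, f12

f10, f11, f12 = log_features(["great", "movie", "fun"])
```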
- Two feature sets (f1, f2) are created by matching words from the tweets with pre-defined word lists:
  - f1: Scores from an adjective word list (`adj_2000.tsv`).
  - f2: Scores from a general word list (`f_words_2000.tsv`).
- Seven additional feature sets (f3-f9) are created by matching words from the tweets with pre-defined word lists specific to each feature set.
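The word-list features (f1 through f9) all follow the same pattern: sum the scores of tweet words that appear in a given list. A minimal sketch, where the function name and the example score entries are hypothetical (the real scores come from files such as `adj_2000.tsv`):

```python
def word_list_score(tokens, word_scores):
    """Sum the scores of tweet words that appear in a pre-defined word list."""
    return sum(word_scores.get(w, 0.0) for w in tokens)

# Hypothetical entries; real lists are loaded from the TSV files.
adj_scores = {"great": 1.2, "terrible": -1.5, "fun": 0.8}
score = word_list_score(["great", "movie", "fun"], adj_scores)
```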
- All the feature sets are combined into a single feature matrix (Xtrain).
- The labels (sentiment values) are reshaped into a row vector (Ytrain).
- The logistic regression model is trained using batch gradient descent.
- The weights (W) of the logistic regression model are updated iteratively using the training data and learning rate (l_rate) for a specified number of iterations (iters).
- The sigmoid function is used to map the linear combination of features and weights to a probability score between 0 and 1.
- The cost function (log loss) is minimized to optimize the model's predictions.
- The trained logistic regression model is saved, along with the learned weights.
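The training procedure described above (sigmoid, log loss, batch gradient descent with learning rate `l_rate` over `iters` iterations) can be sketched as below. This is a generic implementation under the stated assumptions, not the project's exact code; in particular, the feature-rows-by-sample-columns layout of `Xtrain` and the row-vector `Ytrain` follow the description above, and no bias term is shown.

```python
import numpy as np

def sigmoid(z):
    """Map a linear score to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(Xtrain, Ytrain, l_rate=0.1, iters=1000):
    """Batch gradient descent on the log loss.
    Xtrain: (n_features, n_samples); Ytrain: (1, n_samples) row vector."""
    n_features, n_samples = Xtrain.shape
    W = np.zeros((1, n_features))
    for _ in range(iters):
        A = sigmoid(W @ Xtrain)                      # predicted probabilities
        grad = (A - Ytrain) @ Xtrain.T / n_samples   # gradient of the log loss
        W -= l_rate * grad                           # weight update
    return W

# Tiny synthetic example: one feature whose sign separates the classes.
X = np.array([[-2.0, -1.0, 1.0, 2.0]])
Y = np.array([[0, 0, 1, 1]])
W = train_logistic_regression(X, Y, l_rate=0.5, iters=1000)
preds = (sigmoid(W @ X) > 0.5).astype(int)
```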
To use this code for tweet sentiment classification:
- Prepare the dataset:
  - Create a `train_text.txt` file containing the tweet text.
  - Create a `train_labels.txt` file containing the corresponding sentiment labels.
- Modify the file paths in the code to match the locations of your dataset.
- Run the code to perform preprocessing, feature extraction, and training.
- The trained model and learned weights will be saved, which can be used for sentiment classification on new tweets.
Note: The code assumes the availability of pre-defined word lists for feature extraction. You may need to adjust the code or provide your own word lists depending on your specific requirements.
This README provides an overview of the code's functionality and usage. For more detailed and current information, refer to the `Tweet_Classification.py` code, its comments, and variable names.
Happy sentiment classification!