Naive Bayes for Spam Classification
There are a lot of tutorials and YouTube videos out there on using Naive Bayes for document classification. None of these tutorials are wrong, but they often gloss over subtle points that will confuse you if you think about them too hard. In this post I want to explain what is really going on in Naive Bayes for spam classification.
This post assumes that you are already familiar with Bayes’ theorem.
Rather foolishly I did all the calculations in this post by hand. If you find any errors then please report them to me.
Our data set
To make things more concrete we will work on a very small data set where we can do the calculations directly. We are classifying micro-tweets of exactly 3 words. Our training set is as follows. \(S\) indicates that a message is spam and \(\overline{S}\) indicates that a message is not spam.
Number | Tweet | Spam (\(S\) or \(\overline{S}\)) |
---|---|---|
1 | money aardvark boondoggle | \(S\) |
2 | money money money | \(S\) |
3 | money money world | \(S\) |
4 | money world world | \(S\) |
5 | viagra money back | \(S\) |
6 | viagra heart honey | \(S\) |
7 | aardvark boondoggle world | \(\overline{S}\) (not spam) |
8 | honey honey honey | \(\overline{S}\) (not spam) |
9 | viagra heart money | \(\overline{S}\) (not spam) |
10 | money honey now | \(\overline{S}\) (not spam) |
Background: Classifying with only one word.
As a warm up let’s just build a classifier that uses one particular word \(w = \mathrm{money}\).
Using Bayes' theorem we can calculate the probability that a message containing the word \(w\) is spam:
$$ P(S|w) = \frac{P(w|S)P(S)}{P(w)} $$
Here \(P(S|w)\) is the probability of the event that a message is spam given that the word \(w\) occurs in it. We can estimate the values \(P(w|S)\), \(P(S)\) and \(P(w)\) from our data set.
$$ P(S) = \frac{\mathrm{number\ spam}}{\mathrm{total\ messages}} = \frac{6}{10} \qquad P(\mathrm{money}|S) = \frac{5}{6} \qquad P(\mathrm{money}) = \frac{7}{10} $$
$$ P(S|\mathrm{money}) = \frac{P(\mathrm{money}|S)P(S)}{P(\mathrm{money})} = \frac{\frac{5}{6}\cdot\frac{6}{10}}{\frac{7}{10}} = \frac{5}{10}\cdot\frac{10}{7} = \frac{5}{7} \approx 0.71 $$
Note that \(P(\mathrm{money})\) can also be computed by the law of total probability:
$$ P(\mathrm{money}) = P(\mathrm{money}|S)P(S) + P(\mathrm{money}|\overline{S})P(\overline{S}) = \frac{5}{6}\cdot\frac{6}{10} + \frac{2}{4}\cdot\frac{4}{10} = \frac{7}{10} $$
First Pitfall: Estimating the probabilities
So how do we estimate the probabilities \(P(w|S)\), \(P(S)\) and \(P(w)\)? What do they really mean? The probabilities \(P(S)\) and \(P(\overline{S})\) are unambiguous: they are just the probabilities that a tweet is spam or not. But \(P(w|S)\), \(P(w|\overline{S})\), and \(P(w)\) can mean different things depending on exactly which model we use to calculate them.
There are two models:
- (A): To calculate \(P(\mathrm{money}|S)\): there are 6 messages that are spam and 5 of them (1, 2, 3, 4, 5) contain the word 'money', so \(P(\mathrm{money}|S) = 5/6\). Of the 10 messages, 7 (1, 2, 3, 4, 5, 9, 10) contain the word 'money', so \(P(\mathrm{money}) = 7/10\). This is exactly what we did above.
- (B): Count word occurrences rather than messages. The data set contains \(30\) words in total, \(18\) of them in spam tweets; 'money' occurs \(8\) times in spam tweets and \(10\) times overall. Then
$$ P(\mathrm{money}) = \frac{10}{30} \qquad P(\mathrm{money}|S) = \frac{8}{3 \times 6} = \frac{8}{18} = \frac{4}{9} $$
$$ P(S|\mathrm{money}) = \frac{P(\mathrm{money}|S)P(S)}{P(\mathrm{money})} = \frac{\frac{8}{18}\cdot \frac{6}{10}}{\frac{10}{30}} = \frac{8}{10} $$
It seems that if spammers are prone to repeat words in their message then this increases the probability that a message containing that word is spam.
So how do you calculate the probability that the message 'money money mammon' is spam? In model (A) it does not matter how many times 'money' appears in a message: you only count the number of messages that 'money' appears in. In model (B) there is some weighting by the number of times that a word appears. But to calculate \(P(S|\mathrm{money}^2)\) (where \(\mathrm{money}^2\) is shorthand for 'money' appearing twice) we have to calculate \(P(\mathrm{money}^2|S)\). How you do this depends a bit on your model and the assumptions underlying it. We'll get to that later.
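The two counting schemes can be checked directly in code. This is a minimal sketch using Python's exact fractions so the results match the hand calculations; the function names are my own.

```python
from fractions import Fraction

TWEETS = [("money aardvark boondoggle", True), ("money money money", True),
          ("money money world", True), ("money world world", True),
          ("viagra money back", True), ("viagra heart honey", True),
          ("aardvark boondoggle world", False), ("honey honey honey", False),
          ("viagra heart money", False), ("money honey now", False)]

def p_spam_given_word_a(word):
    # Model (A): count messages that contain the word at least once.
    spam = [t.split() for t, is_spam in TWEETS if is_spam]
    everything = [t.split() for t, _ in TWEETS]
    p_w_given_s = Fraction(sum(word in t for t in spam), len(spam))
    p_w = Fraction(sum(word in t for t in everything), len(everything))
    p_s = Fraction(len(spam), len(everything))
    return p_w_given_s * p_s / p_w

def p_spam_given_word_b(word):
    # Model (B): count word occurrences out of all word occurrences.
    spam_words = [w for t, is_spam in TWEETS if is_spam for w in t.split()]
    all_words = [w for t, _ in TWEETS for w in t.split()]
    p_w_given_s = Fraction(spam_words.count(word), len(spam_words))
    p_w = Fraction(all_words.count(word), len(all_words))
    p_s = Fraction(6, 10)
    return p_w_given_s * p_s / p_w

print(p_spam_given_word_a("money"))  # 5/7
print(p_spam_given_word_b("money"))  # 4/5
```

The model (B) answer \(4/5\) is the \(8/10\) computed above.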
Take home message
So the first take home message is be careful how you count the words and how you calculate the probabilities. If you confuse model (A) and model (B) while writing your code you will get strange answers (as I did at one point).
Naive Bayes: first version
We are going to use model (A). That is, we are going to ignore how many times a word appears in a message; we are only interested in whether the word appears in the message or not. One word is not going to make much of a spam classifier: even in our little data set above, the word 'money' appears in both spam and non-spam messages. We will get a better classifier if we take more words into account. Our data set is small enough that for each word we can count the number of spam tweets and the number of non-spam tweets that it appears in.
Word | spam tweets containing the word | non-spam tweets containing the word |
---|---|---|
\(w_1 = \mathrm{money}\) | 5 | 2 |
\(w_2 = \mathrm{world}\) | 2 | 1 |
\(w_3 = \mathrm{viagra}\) | 2 | 1 |
\(w_4 = \mathrm{aardvark}\) | 1 | 1 |
\(w_5 = \mathrm{heart}\) | 1 | 1 |
\(w_6 = \mathrm{boondoggle}\) | 1 | 1 |
\(w_7 = \mathrm{honey}\) | 1 | 2 |
\(w_8 = \mathrm{back}\) | 1 | 0 |
\(w_9 = \mathrm{now}\) | 0 | 1 |
You can turn these counts into probabilities, and thus you can calculate quantities like \(P(\mathrm{money}|S) = 5/6\), \(P(\mathrm{money}|\overline{S}) = 2/4\).
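The counting itself is where model (A) and model (B) diverge, so it is worth being explicit in code. A sketch of the model (A) counts behind the table above (the variable names are mine):

```python
from collections import Counter

SPAM = ["money aardvark boondoggle", "money money money", "money money world",
        "money world world", "viagra money back", "viagra heart honey"]
HAM = ["aardvark boondoggle world", "honey honey honey",
       "viagra heart money", "money honey now"]

# Model (A) counts: in how many tweets of each class does a word appear?
# The set() is what makes this model (A): repeats inside a tweet count once.
spam_counts = Counter(w for t in SPAM for w in set(t.split()))
ham_counts = Counter(w for t in HAM for w in set(t.split()))

print(spam_counts["money"], ham_counts["money"])  # 5 2
print(spam_counts["honey"], ham_counts["honey"])  # 1 2
```

Dropping the `set()` would silently give you model (B) occurrence counts instead, which is exactly the confusion the first pitfall warns about.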
To classify a message containing the words 'viagra', 'money' and 'boondoggle' we will need
$$P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle}|S)$$
where \(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle}\) is the event that the words 'viagra', 'money' and 'boondoggle' all appear in a message.
The Naive in Naive Bayes
The naive assumption is that words occur independently of each other, conditional on the class of the message:
$$ P(w_1 \land w_2 \land \cdots \land w_n | S) = P(w_1|S)P(w_2|S) \cdots P(w_n|S) $$
$$ P(w_1 \land w_2 \land \cdots \land w_n | \overline{S}) = P(w_1|\overline{S})P(w_2|\overline{S}) \cdots P(w_n|\overline{S}) $$
Note that this does not make the words unconditionally independent; in general
$$ P(w_1 \cdots w_n) \neq \prod_{1\leq i \leq n} P(w_i) $$
Instead the denominator has to be computed by the law of total probability:
$$ P(w_1 \cdots w_n) = P(w_1 \cdots w_n|S)P(S) + P(w_1 \cdots w_n|\overline{S})P(\overline{S}) $$
Take home message
The independence assumption is why Naive Bayes is referred to as naive. It seems this model could be improved: the probability of one word appearing in a message should not be independent of another word. If a spammer writes 'money' then he is likely to also include 'viagra' in the message. Even so, assuming independence works very well in practice.
Calculating the spam probability.
By Bayes' theorem, \(P(S|\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle})\) equals
$$ \frac{ P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle}|S) P(S)}{P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle})} $$
and by the independence assumption the numerator factorises, giving
$$ \frac{ P(\mathrm{viagra}|S)P(\mathrm{money}|S)P(\mathrm{boondoggle}|S)P(S)} {P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle})} $$
For the denominator you might be tempted to use
$$P(\mathrm{viagra})P(\mathrm{money})P(\mathrm{boondoggle})$$
but that is the wrong answer. Instead we get that \(P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle})\) equals \(P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle}|S)P(S)\) plus \(P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle}|\overline{S})P(\overline{S})\).
Then, by the independence assumption, \(P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle}|S)\) equals
$$ P(\mathrm{viagra}|S)P(\mathrm{money}|S)P(\mathrm{boondoggle}|S) = \frac{2}{6}\cdot\frac{5}{6}\cdot\frac{1}{6} = \frac{5}{108} $$
and the denominator works out to
$$ \left(\frac{2}{6}\cdot \frac{5}{6}\cdot \frac{1}{6}\right)\frac{6}{10} + \left(\frac{1}{4}\cdot \frac{2}{4}\cdot \frac{1}{4}\right)\frac{4}{10} \approx 0.04 $$
So \(P(S|\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle})\) equals
$$ \frac{\frac{5}{108}\cdot\frac{6}{10}}{0.04} \approx 0.69 $$
Not calculating the whole probability.
When implementing the spam filter we do not actually need to calculate the denominator. We just compare the expressions \(P(\mathrm{viagra}|S)P(\mathrm{money}|S)P(\mathrm{boondoggle}|S)P(S)\) and \(P(\mathrm{viagra}|\overline{S})P(\mathrm{money}|\overline{S})P(\mathrm{boondoggle}|\overline{S})P(\overline{S})\) and see which one is bigger. This also matters numerically: with many words the products get smaller and smaller and you can end up with floating point underflow errors. If the numbers get too small then you should work with the logarithms of the probabilities and add rather than multiply.
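The compare-the-numerators idea, with the log trick, can be sketched as follows. This is a minimal model (A) classifier for our data set; the function names are mine, and it assumes every word of the message has a non-zero count in both classes (no smoothing; see the next section).

```python
import math

SPAM = ["money aardvark boondoggle", "money money money", "money money world",
        "money world world", "viagra money back", "viagra heart honey"]
HAM = ["aardvark boondoggle world", "honey honey honey",
       "viagra heart money", "money honey now"]

def p_word_given_class(word, docs):
    # Model (A): the fraction of messages in the class containing the word.
    return sum(word in d.split() for d in docs) / len(docs)

def log_score(words, docs, prior):
    # Compare log P(class) + sum_i log P(w_i | class) instead of the raw
    # product; adding logs avoids floating point underflow on long messages.
    return math.log(prior) + sum(
        math.log(p_word_given_class(w, docs)) for w in words)

def classify(message):
    # Model (A) ignores repeats, hence the set().
    words = set(message.split())
    spam_score = log_score(words, SPAM, 6 / 10)
    ham_score = log_score(words, HAM, 4 / 10)
    return "spam" if spam_score > ham_score else "not spam"

print(classify("viagra money boondoggle"))  # spam
```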
Laplacian Smoothing
Suppose we want to classify the message 'now money viagra'. We need to calculate
$$ \frac{P(\mathrm{now}|S)P(\mathrm{money}|S)P(\mathrm{viagra}|S)P(S)}{P(\mathrm{now}\land\mathrm{money}\land\mathrm{viagra})} $$
But the word 'now' never appears in a spam tweet in our training set, so \(P(\mathrm{now}|S) = 0\) and the numerator becomes
$$\frac{0 \cdot \frac{5}{6} \cdot \frac{2}{6}\cdot\frac{6}{10}}{P(\mathrm{now}\land\mathrm{money}\land\mathrm{viagra})} = 0$$
So even though the words 'money' and 'viagra' are pretty good indicators of a message being spam, we get probability 0.
The fix is to add one to each count so that no estimate is ever exactly zero. Instead of estimating \(P(\mathrm{now}|S)\) as
$$ \frac{0}{6} = 0 $$
we use
$$ \frac{0 + 1}{6 + 1} = \frac{1}{7} $$
Note that if you had a word that appeared in all 6 of the spam tweets then this would still give an estimate \(\frac{6+1}{6+1} = 1\), which is just as problematic as a probability of \(0\). A common convention is therefore to also add one to the denominator for each possible outcome (word present or absent), giving \(\frac{6+1}{6+2} = \frac{7}{8}\) instead.
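As a sketch, here is the smoothed estimate as a small helper (the function name is mine; `outcomes=2` is the variant that adds one to the denominator per possible outcome):

```python
from fractions import Fraction

def smoothed(count, total, outcomes=2):
    # Add-one (Laplace) smoothing: pretend each outcome was observed once
    # more than it actually was.  With two outcomes per word (present or
    # absent) the denominator grows by 2.
    return Fraction(count + 1, total + outcomes)

# 'now' appears in 0 of the 6 spam tweets:
print(smoothed(0, 6))  # 1/8 instead of 0
# A word appearing in all 6 spam tweets no longer gets probability 1:
print(smoothed(6, 6))  # 7/8 instead of 1
```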
I leave it an exercise to work out the correct thing to do in model (B).
Feature Modelling
Almost all machine learning algorithms require numbers and vectors as inputs. Our Naive Bayes classifier does not really work with words but with feature vectors. There are different possible models, but we use something similar to model (A). First we take our data set and find the \(n\) most popular words. The most popular word in a data set of English messages is typically 'the'. You can improve things by filtering out popular words that don't contribute much to a message being spam or not (these are referred to as stop words). We will not worry about that here, but a word like 'the' is equally likely to appear in a spam tweet or a non-spam tweet, so it is better to ignore it.
Each message is then represented as a vector
$$ f = (f_1, \ldots , f_i, \ldots, f_n) $$
where \(f_i\) is \(1\) if the \(i\)th most popular word \(w_i\) occurs in the message and \(0\) otherwise.
It is easy to write a function that takes a message and turns it into a feature vector. Given our training set we can estimate, for our two classes \(S\) and \(\overline{S}\), the probabilities \(P(w_i|S)\), \(P(\overline{w}_i|S)\), \(P(w_i| \overline{S})\) and \(P(\overline{w}_i | \overline{S})\), where \(w_i\) is the event that word \(i\) occurs in the message and \(\overline{w}_i\) is the event that word \(i\) does not occur in the message.
For example, a message containing exactly the words 'money', 'aardvark' and 'now' has feature vector
$$ f = (1,0,0,1,0,0,0,0,1) $$
which corresponds to the event
$$ w_1\overline{w}_2\overline{w}_3w_4\overline{w}_5\overline{w}_6\overline{w}_7\overline{w}_8w_9 $$
and the probability that the message is spam is
$$ \frac{P(w_1\overline{w}_2\overline{w}_3w_4\overline{w}_5\overline{w}_6\overline{w}_7\overline{w}_8w_9|S)P(S)}{P(w_1\overline{w}_2\overline{w}_3w_4\overline{w}_5\overline{w}_6\overline{w}_7\overline{w}_8 w_9)} $$
Calculating \(P(w_1\overline{w}_2\overline{w}_3w_4\overline{w}_5\overline{w}_6\overline{w}_7\overline{w}_8w_9|S)\) is easily done with the independence assumption: it is the product of the terms \(P(w_1|S)\), \(P(\overline{w}_2|S)\), \(P(\overline{w}_3|S)\), \(P(w_4|S)\), \(P(\overline{w}_5|S)\), \(P(\overline{w}_6|S)\), \(P(\overline{w}_7|S)\), \(P(\overline{w}_8|S)\) and \(P(w_9|S)\). All these values are easily estimated from our data set. For example \(P(\overline{w}_3|S)\) is the probability that the word 'viagra' does not appear in a spam tweet; 'viagra' appears in 2 of the 6 spam tweets, so \(P(\overline{w}_3|S)\) equals \(4/6\). (Note that the raw count for \(P(w_9|S)\) is \(0\), which is exactly where the Laplacian smoothing above comes in.)
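This per-word present/absent product can be sketched as follows. The names are mine, and I use the smoothing variant that adds one per outcome to the denominator so that the zero count for 'now' in spam does not collapse the whole product to 0.

```python
from fractions import Fraction

WORDS = ["money", "world", "viagra", "aardvark", "heart",
         "boondoggle", "honey", "back", "now"]
SPAM = ["money aardvark boondoggle", "money money money", "money money world",
        "money world world", "viagra money back", "viagra heart honey"]
HAM = ["aardvark boondoggle world", "honey honey honey",
       "viagra heart money", "money honey now"]

def features(message):
    tokens = message.split()
    return [1 if w in tokens else 0 for w in WORDS]

def p_f_given_class(f, docs):
    # Multiply P(word present | class) or P(word absent | class) over the
    # whole vocabulary, with add-one smoothing (plus one per outcome in the
    # denominator) so no factor is exactly 0 or 1.
    p = Fraction(1)
    for word, fi in zip(WORDS, f):
        present = Fraction(sum(word in d.split() for d in docs) + 1,
                           len(docs) + 2)
        p *= present if fi else 1 - present
    return p

f = features("money aardvark now")
print(f)  # [1, 0, 0, 1, 0, 0, 0, 0, 1]
spam_score = p_f_given_class(f, SPAM) * Fraction(6, 10)
ham_score = p_f_given_class(f, HAM) * Fraction(4, 10)
print(spam_score > ham_score)  # False: narrowly classified as not spam
```

Interestingly, with these smoothed estimates the negative evidence (none of 'viagra', 'world' or 'back' present) narrowly tips this particular message to not spam.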
Model (A) and Feature Vectors
In our original model (A) calculation we would instead have computed
$$ \frac{P(w_1|S)P(w_4|S)P(w_9|S)P(S)}{P(w_1\land w_4 \land w_9)} $$
since \(w_1\) is 'money', \(w_4\) is 'aardvark' and \(w_9\) is 'now'. We are throwing away information about the words that do not occur in the tweet. Does this matter? More importantly, is this calculation incorrect?
To simplify things imagine that we only had two words in our feature vector, \(W_1\) and \(W_2\). Then given a message there are 4 possible atomic events:
$$ W_1 W_2, \quad W_1 \overline{W}_2, \quad \overline{W}_1 W_2, \quad \overline{W}_1 \overline{W}_2 $$
The event that \(W_1\) occurs, if we ignore \(W_2\), is \(W_1 W_2 \lor W_1\overline{W}_2\), and by the independence assumption
$$ P( W_1 W_2 \lor W_1\overline{W}_2|S) = P(W_1|S)P(W_2|S) + P(W_1|S)P(\overline{W}_2|S) = P(W_1|S)\left(P(W_2|S) + P(\overline{W}_2|S)\right) $$
and since \(P(W_2|S) + P(\overline{W}_2|S)\) equals \(1\) we get \(P(W_1|S)\).
So if we ignore any information about \(W_2\) then it contributes a factor of \(1\); if we use what information we have about \(W_2\) then we get a better estimate of the probability. You can show that the same applies when there are many words in the feature vector. So our original model (A) is not wrong, but the feature vector model, which takes into account whether each word appears or not, gives a better estimate of whether a message is spam. Note that the above argument depends on our independence assumption (the naive in Naive Bayes).
If you did not look at any words then the only information you would have is \(P(S)\) and \(P(\overline{S})\); as you look at more words in your feature vector you get a better estimate of the probability that the message is spam or not.
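The marginalisation argument above can be checked numerically: summing over every present/absent configuration of the ignored words gives back exactly the probability of the word we kept. A sketch with hypothetical per-word probabilities (the numbers are illustrative, not from the data set):

```python
from fractions import Fraction
from itertools import product

# Hypothetical values of P(W_i | S) for a three-word vocabulary.
p = [Fraction(5, 6), Fraction(2, 6), Fraction(1, 6)]

def p_config(config):
    # Probability of one full present/absent configuration under the
    # independence assumption.
    result = Fraction(1)
    for p_i, present in zip(p, config):
        result *= p_i if present else 1 - p_i
    return result

# Sum over every configuration in which word 0 is present, i.e. ignore all
# information about words 1 and 2:
total = sum(p_config((1,) + rest) for rest in product((0, 1), repeat=2))
print(total)  # 5/6, i.e. exactly P(W_0 | S): the ignored words contribute 1
```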
Model (B) with multiple words
If we assume that each occurrence of a word is independent, so that
$$ P(w^k|S) = P(w|S)^k $$
then we can calculate the probability that a message containing multiple occurrences of a word is spam or not, but only if you use model (B) to calculate the probabilities. You should not mix up model (A) and model (B). It does not make sense in model (A) to ask what the probability of a word occurring \(k\) times in a spam message is; we can only ask whether a message contains the word or not.
If you want to take into account multiple words then do not use model (A) to calculate your probabilities.
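Under this assumption we can sketch how the posterior grows with repeated occurrences of 'money', using the model (B) counts from our data set (18 words in the spam tweets, 8 of them 'money'; 12 words in the non-spam tweets, 2 of them 'money'). The function name is mine.

```python
from fractions import Fraction

# Model (B) occurrence counts from the training set.
P_MONEY_SPAM = Fraction(8, 18)
P_MONEY_HAM = Fraction(2, 12)
P_SPAM, P_HAM = Fraction(6, 10), Fraction(4, 10)

def posterior_spam(k):
    # P(S | 'money' appears k times), using P(money^k | class) =
    # P(money | class)**k, i.e. treating each occurrence as independent.
    numerator = P_MONEY_SPAM**k * P_SPAM
    return numerator / (numerator + P_MONEY_HAM**k * P_HAM)

for k in (1, 2, 3):
    print(k, float(posterior_spam(k)))
```

For \(k = 1\) this reproduces the \(8/10\) we calculated for model (B) earlier, and each repetition of 'money' pushes the posterior higher, matching the earlier observation that repeated words increase the spam probability.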
Model (B) with multiple words and feature vectors.
The feature vector approach for model (A) considered vectors where the entries are \(0\) or \(1\): given a feature vector \(f\), the \(i\)th entry is \(1\) if the word appears in the message and \(0\) otherwise.
In model (B) the \(i\)th entry instead counts how many times the word \(w_i\) appears. For example, a message with 'money' twice and 'boondoggle' once would have feature vector
$$ (2,0,0,0,0,1,0,0,0) $$
Again it is not hard to also use the information that a word appears \(0\) times.
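The count-valued feature vector is a one-liner with a `Counter`; a sketch (the function name is mine):

```python
from collections import Counter

WORDS = ["money", "world", "viagra", "aardvark", "heart",
         "boondoggle", "honey", "back", "now"]

def features_b(message):
    # Model (B) feature vector: the i-th entry counts how many times the
    # i-th vocabulary word occurs in the message.
    counts = Counter(message.split())
    return [counts[w] for w in WORDS]

print(features_b("money money boondoggle"))  # [2, 0, 0, 0, 0, 1, 0, 0, 0]
```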
Take home messages
Are you using model (A) or model (B)? Don't get them confused, especially when you are estimating the probabilities \(P(w_i|S)\) from the training data.
Are you using the negative information, i.e. the information that a word does not occur? If not, your maths is still correct, but you are not using all the information that you have.
To understand how this relates to other machine learning algorithms, note that we take a message and construct a feature vector. Depending on whether you are using model (A) or (B), your feature vector has either \(0\)/\(1\) entries or non-negative integer entries telling you how many times each word occurs in the message. Feature vectors are an example of the modelling that you often have to do in machine learning: your data set does not always come ready-packaged as a set of vectors.
If you watch some random video on the internet, it is not always clear which model is being used to calculate the probabilities.
The scikit-learn documentation has a nice entry on naive Bayes, which discusses the various modelling options as well as linking to various interesting articles on the different modelling approaches to naive Bayes.