
Tuesday, February 17, 2009

Paper Review: Detecting Spam Web Pages through Content Analysis

This paper was written by Ntoulas (UCLA) et al. (Microsoft Research) and appeared in the Proceedings of the 15th International Conference on World Wide Web (WWW 2006).

This paper continues work from two earlier papers on detecting spam web pages by the same group of authors. It focuses on content analysis as opposed to link analysis. The authors propose 10 heuristics and investigate how well each correlates with spam web pages using a dataset of 17,168 pages. These heuristics/metrics are then combined with 28 other features to build a training dataset, so that machine learning classifiers can be used to classify spam web pages. Of the several classifiers they experimented with, the C4.5 decision tree algorithm performed best, so bagging and boosting were used to improve its performance, and the results are reported in terms of accuracy and a precision/recall matrix.
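
To make that pipeline concrete, here is a minimal sketch of my own (not the authors' code): stack per-page heuristic scores into a feature matrix and train a bagged decision-tree ensemble. All data below is a random placeholder, and scikit-learn ships CART rather than C4.5, so this only approximates the paper's setup.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: one row per page, one column per feature
# (word count, average word length, compression ratio, ...); values here
# are random placeholders standing in for real heuristic scores.
rng = np.random.default_rng(0)
X = rng.random((1000, 38))         # 10 proposed heuristics + 28 other features
y = rng.integers(0, 2, size=1000)  # 1 = spam, 0 = non-spam

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# BaggingClassifier bags decision trees by default; note scikit-learn's
# trees are CART, not the C4.5 used in the paper, so this is only an
# approximation of the paper's classifier.
clf = BaggingClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```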

The main contributions of this paper include the detailed analysis of the 10 proposed heuristics and the idea of using machine learning classifiers to combine them for the specific application of spam web page detection. Taking advantage of a large web page collection (over 105 million pages) and a good-sized labeled dataset (17,168 pages), the paper is able to show some nice statistical properties of web documents (spam or non-spam) and good performance from existing classification methods when these properties are used as features of a training set.
Not being an expert in the IR field, I cannot tell which of the 10 proposed heuristics are novel with respect to spam web page detection. However, the fraction of visible content and the compression ratio seem to be very creative ideas and look very promising. Using any single heuristic by itself does not produce good performance, so the paper combines them into a multi-dimensional feature space; note that this method has been used in many research domains and applications.
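
The compression-ratio heuristic is easy to illustrate. A rough sketch of my own (not the paper's code): keyword-stuffed pages are highly repetitive, so they compress far better than genuine prose.

```python
import zlib

def compression_ratio(text: str) -> float:
    """Ratio of raw size to zlib-compressed size; higher means more redundant."""
    raw = text.encode("utf-8")
    return len(raw) / len(zlib.compress(raw))

genuine = "The quick brown fox jumps over the lazy dog near the river bank."
stuffed = "cheap flights cheap hotels " * 50  # keyword stuffing

print(compression_ratio(genuine))  # near 1: ordinary prose, little redundancy
print(compression_ratio(stuffed))  # much higher: repetition compresses away
```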

One common question IR researchers tend to ask is: how good is your dataset? In section 2, the paper does a good job acknowledging the biases of the document collection and then provides good justifications, which makes the paper more sincere and convincing. The paper also does a good job explaining things clearly. For instance, in section 4.8, the example provided makes it very easy to distinguish “fraction of page drawn from globally popular words” from “fraction of globally popular words”. Another example is in section 4.6, where the paper explains how some pages actually inflate during compression. I especially liked how the authors briefly explained the concepts of bagging and boosting. They could simply have directed readers to the references, but the brief introduction dramatically improves the experience for readers who have not worked with these concepts (or are rusty on them, as in my case).
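
To restate that section 4.8 distinction with a toy example of my own (the popular-word list below is a made-up stand-in for one derived from corpus-wide counts):

```python
# Stand-in for the globally popular word list (the real one would come
# from corpus-wide frequency counts).
POPULAR = {"the", "a", "of", "to", "and"}

page_words = "the cat sat on the mat".split()  # a toy page

# "Fraction of page drawn from globally popular words":
# of the page's words, how many are popular?
frac_of_page = sum(w in POPULAR for w in page_words) / len(page_words)

# "Fraction of globally popular words":
# of the popular words, how many appear in the page?
frac_of_popular = len(POPULAR & set(page_words)) / len(POPULAR)

print(frac_of_page)     # 2/6 ~= 0.33 ("the" appears twice among six words)
print(frac_of_popular)  # 1/5 = 0.20 (only "the" from the popular list appears)
```
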
Although well written, the paper still has some drawbacks and limitations. Firstly, section 6 (related work) should really have been placed right after the introduction. That way, readers could get a better picture of how this problem has been tackled in the IR community and easily see how this paper differs. This section also gives a good definition of “content spam”, and it makes much more sense to talk about possible solutions once a clear definition is in place.

Secondly, section 3 seems to say that 80% of all pages (drawn by uniform random sampling) were manually classified. I strongly doubt that is what the authors meant: manually classifying 80% of over 105 million pages would take a very long time, period! Apparently this collection is not the same as the DS dataset mentioned in section 4, because the DS dataset contains only English pages. So what is this collection? It is apparently a larger labeled dataset than the DS dataset. In Figures 6, 8, 10, and 11, the line graph touches the x-axis, possibly due to insufficient data; using the English portion of this larger labeled dataset might have produced better graphs. Another thing I’d like to mention here is that spam web page classification is subjective (at least to me). Naturally, I would assume the large collection was labeled with a divide-and-conquer approach, so that each document was examined by only one evaluator. If that is true, then each evaluator’s subjectivity plays an important role in the labels. A better approach would have been to have multiple evaluators label the same set of web pages and take the majority vote, minimizing any single evaluator’s subjectivity (sketched below).
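
A minimal sketch of the majority-vote scheme I am suggesting (my own illustration, not anything from the paper):

```python
from collections import Counter

def majority_label(votes: list[str]) -> str:
    # Use an odd number of evaluators per page to avoid ties.
    return Counter(votes).most_common(1)[0][0]

# Three evaluators disagree; the majority verdict becomes the label.
print(majority_label(["spam", "non-spam", "spam"]))  # -> spam
```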

Thirdly, when building the training set, the 10 proposed heuristics are combined with 28 other features before the classifiers are applied. I think it would be better to compare the results of using only the 10 proposed features, using only the original 28 features, and using all 38 combined. That way, we could better evaluate how much the 10 additional heuristics contribute to the classifiers’ improvement.
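
The ablation I have in mind would look something like this sketch (placeholder data again, and the column split is a hypothetical assumption about which features are which):

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((1000, 38))         # placeholder features, as before
y = rng.integers(0, 2, size=1000)  # placeholder spam/non-spam labels

# Assume (hypothetically) the first 10 columns hold the proposed heuristics.
subsets = {
    "10 proposed heuristics": X[:, :10],
    "28 original features": X[:, 10:],
    "all 38 combined": X,
}
for name, features in subsets.items():
    scores = cross_val_score(BaggingClassifier(random_state=0), features, y, cv=5)
    print(f"{name}: mean CV accuracy {scores.mean():.3f}")
```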

Additionally, in section 4.1 the paper says that “there is a clear correlation between word count and prevalence of spam” according to Figure 4, but I failed to see that correlation.

Lastly, the experimental results cover only English web pages. Since the analysis in section 3 (Figure 3) clearly indicates that French and German web pages contain larger proportions of spam, it would be great to see how the proposed solution works for those languages. I understand the difficulty of working with other languages, but even some very preliminary experiments and reported results would really improve the paper.

There are other minor problems with the paper as well. For example, for each heuristic the paper reports the mode, median, and mean. I think it is also necessary to provide the variance (or standard deviation), because it is an important descriptor of a distribution. I would also suggest using a much lighter color for the bars so that the line graph stays readable where it overlaps with the bar graph. Dr. Snell once said that we should always print our papers in black and white to make sure they still look okay, and I am a strong believer in that! Also, in section 4.3, the authors presumably meant that the horizontal axis represents the average “word length” within a page rather than the “number of words”.
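
For what it’s worth, the standard deviation is a one-liner to report alongside the other descriptors; a made-up example:

```python
import numpy as np

word_counts = np.array([120, 340, 95, 880, 410, 150])  # made-up per-page values
print("mean:", word_counts.mean())
print("median:", np.median(word_counts))
print("std dev:", word_counts.std())  # the spread the paper leaves out
```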

I think it’s worth mentioning that the authors did an awesome job in the conclusions and future work section. Detecting web spam really is an “arms race” between spam-filter designers and spammers: as new technologies are developed to filter spam, spammers will always work hard to find ways around them. This is an ongoing battle, and degradation of classification performance over time is simply unavoidable.

This is a well-written paper that showed excellent performance, and I certainly enjoyed reading it. I’d like to end this report with a quote directly from the paper, which puts it very well:

“Victory does not require perfection, just a rate of detection that alters the economic balance for a would-be spammer. It is our hope that continued research on this front can make effective spam more expensive than genuine content.”






