Aspect Ranking: Identifying Important Product Aspects From Online Consumer Reviews
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1496–1505,
Portland, Oregon, June 19-24, 2011. © 2011 Association for Computational Linguistics
these aspects and thus enhance the product reputation effectively. However, it is impractical for people to identify the important aspects from the numerous reviews manually. Thus, there is a compelling need to automatically identify the important aspects from consumer reviews.

A straightforward solution for important aspect identification is to select the aspects that are frequently commented on in consumer reviews as the important ones. However, consumers' opinions on the frequent aspects may not influence their overall opinions on the product, and thus may not influence their purchase decisions. For example, most consumers frequently criticize the bad "signal connection" of iPhone 4, but they may still give high overall ratings to iPhone 4. On the other hand, some aspects, such as "design" and "speed," may not be frequently commented on, but are usually more important than "signal connection." Hence, the frequency-based solution is not able to identify the truly important aspects.

Motivated by the above observations, in this paper we propose an effective approach to automatically identify the important product aspects from consumer reviews. Our assumption is that the important aspects of a product should be the aspects that are frequently commented on by consumers, and that consumers' opinions on the important aspects greatly influence their overall opinions on the product. Given the online consumer reviews of a specific product, we first identify the aspects in the reviews using a shallow dependency parser (Wu et al., 2009), and determine consumers' opinions on these aspects via a sentiment classifier. We then design an aspect ranking algorithm to identify the important aspects by simultaneously taking into account the aspect frequency and the influence of consumers' opinions given to each aspect on their overall opinions. Specifically, we assume that a consumer's overall opinion rating on a product is generated based on a weighted sum of his/her specific opinions on multiple aspects of the product, where the weights essentially measure the degree of importance of the aspects. A probabilistic regression algorithm is then developed to derive these importance weights by leveraging the aspect frequency and the consistency between the overall opinions and the weighted sum of opinions on various aspects. We conduct experiments on 11 popular products in four domains. The consumer reviews on these products are crawled from prevalent forum Web sites (e.g., cnet.com and viewpoint.com). More details of our review corpus are discussed in Section 3. The experimental results demonstrate the effectiveness of our approach on important aspect identification. Furthermore, we apply the aspect ranking results to the application of document-level sentiment classification by carrying out term-weighting based on the aspect importance. The results show that our approach can improve the performance significantly.

The main contributions of this paper include:
1) We dedicate this work to the topic of aspect ranking, which aims to automatically identify important aspects of a product from consumer reviews.
2) We develop an aspect ranking algorithm to identify the important aspects by simultaneously considering the aspect frequency and the influence of consumers' opinions given to each aspect on their overall opinions.
3) We apply the aspect ranking results to the application of document-level sentiment classification, and improve the performance significantly.

There is another work named aspect ranking (Snyder et al., 2007), but its task is different from ours: it mainly focuses on predicting opinionated ratings on aspects rather than identifying important aspects.

The rest of this paper is organized as follows. Section 2 elaborates our aspect ranking approach. Section 3 presents the experimental results, while Section 4 introduces the application of document-level sentiment classification. Section 5 reviews related work and Section 6 concludes this paper with future work.

2 Aspect Ranking Framework

In this section, we first present some notations and then elaborate the key components of our approach, including the aspect identification, sentiment classification, and aspect ranking algorithm.

2.1 Notations and Problem Formulation

Let R = {r_1, ..., r_{|R|}} denote a set of online consumer reviews of a specific product. Each review r ∈ R is associated with an overall opinion rating
O_r, and covers several aspects with consumer comments on these aspects. Suppose there are m aspects A = {a_1, ..., a_m} involved in the review corpus R, where a_k is the k-th aspect. We define o_{rk} as the opinion on aspect a_k in review r. We assume that the overall opinion rating O_r is generated based on a weighted sum of the opinions on specific aspects o_{rk} (Wang et al., 2010). The weights are denoted as {ω_{rk}}_{k=1}^{m}, each of which essentially measures the degree of importance of the aspect a_k in review r. Our task is to derive the importance weights of the aspects, and thus to identify the important aspects.

Next, we introduce the key components of our approach, including aspect identification, which identifies the aspects a_k in each review r; aspect sentiment classification, which determines consumers' opinions o_{rk} on the various aspects; and the aspect ranking algorithm, which identifies the important aspects.

2.2 Aspect Identification

As illustrated in Figure 1, there are usually two types of reviews on the Web: Pros and Cons reviews and free-text reviews. For Pros and Cons reviews, the aspects are identified as the frequent noun terms in the reviews, since aspects are usually nouns or noun phrases (Liu, 2009), and it has been shown that simply extracting the frequent noun terms from the Pros and Cons reviews can obtain highly accurate aspect terms (Liu et al., 2005). To identify the aspects in free-text reviews, we first parse each review using the Stanford parser¹ and extract the noun phrases (NP) from the parsing tree as aspect candidates. Since these candidates may contain much noise, we leverage the Pros and Cons reviews to assist in identifying aspects from the candidates. In particular, we explore the frequent noun terms in Pros and Cons reviews as features, and train a one-class SVM (Manevitz et al., 2002) to identify aspects among the candidates. Since the obtained aspects may contain some synonym terms, such as "earphone" and "headphone," we further perform synonym clustering to get unique aspects. Specifically, we first expand each aspect term with its synonym terms obtained from a synonym Web site², and then cluster the terms to obtain unique aspects based on the unigram feature.

2.3 Aspect Sentiment Classification

Since the Pros and Cons reviews explicitly express positive and negative opinions on the aspects, respectively, our task is to determine the opinions in free-text reviews. To this end, we utilize the Pros and Cons reviews to train an SVM sentiment classifier. Specifically, we collect sentiment terms in the Pros and Cons reviews as features and represent each review as a feature vector using Boolean weighting. Note that we select as sentiment terms those that appear in the sentiment lexicon provided by the MPQA project (Wilson et al., 2005). With these features, we then train an SVM classifier based on the Pros and Cons reviews. Given a free-text review, since it may cover various opinions on multiple aspects, we first locate the opinionated expression modifying each aspect, and determine the opinion on the aspect using the learned SVM classifier. In particular, since the opinionated expression on each aspect tends to contain sentiment terms and to appear close to the aspect (Hu and Liu, 2004), we select the expressions that contain sentiment terms and are at a distance of less than 5 from the aspect NP in the parsing tree.

2.4 Aspect Ranking

Generally, a consumer's opinion on each specific aspect in a review influences his/her overall opinion on the product. Thus, we assume that the consumer gives the overall opinion rating O_r based on the weighted sum of his/her opinions o_{rk} on each aspect a_k: \sum_{k=1}^{m} \omega_{rk} o_{rk}, which can be rewritten as \omega_r^T o_r, where ω_r and o_r are the weight and opinion vectors. Inspired by the work of Wang et al. (2010), we view O_r as a sample drawn from a Gaussian distribution with mean \omega_r^T o_r and variance σ²,

    p(O_r) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{(O_r - \omega_r^T o_r)^2}{2\sigma^2}\right].    (1)

To model the uncertainty of the importance weights ω_r in each review, we assume ω_r is a sample drawn from a multivariate Gaussian distribution, with µ as the mean vector and Σ as the covariance matrix,

    p(\omega_r) = \frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}} \exp\left[-\frac{1}{2}(\omega_r - \mu)^T \Sigma^{-1} (\omega_r - \mu)\right].    (2)

¹ http://nlp.stanford.edu/software/lex-parser.shtml
² http://thesaurus.com
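The two densities above can be sketched directly in numpy. The snippet below is an illustrative reading of Eqs. (1) and (2), not code from the paper; the function names and toy values are ours, and the prior dimension is taken to be the number of aspects m.

```python
import numpy as np

def rating_likelihood(O_r, w_r, o_r, sigma2):
    """Eq. (1): density of the overall rating O_r ~ N(w_r^T o_r, sigma^2)."""
    mean = w_r @ o_r
    return np.exp(-(O_r - mean) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

def weight_prior(w_r, mu, Sigma):
    """Eq. (2): density of the weight vector w_r ~ N(mu, Sigma)."""
    d = w_r - mu
    m = len(mu)
    norm = (2 * np.pi) ** (m / 2) * np.sqrt(np.linalg.det(Sigma))
    return float(np.exp(-0.5 * d @ np.linalg.inv(Sigma) @ d) / norm)

# Toy example: two aspects with opinion scores o_r and importance weights w_r.
o_r = np.array([1.0, 0.5])
w_r = np.array([0.6, 0.4])
mu, Sigma = np.zeros(2), np.eye(2)
p_O = rating_likelihood(4.0, w_r, o_r, sigma2=1.0)   # likelihood of rating 4.0
p_w = weight_prior(w_r, mu, Sigma)                   # prior density of the weights
```

The product p(O_r) · p(ω_r) is the quantity the regression algorithm trades off: fit to the observed overall rating against closeness of the weights to the shared prior.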
We further incorporate the aspect frequency as prior knowledge to define the distribution of µ and Σ. Specifically, the distribution of µ and Σ is defined based on its Kullback-Leibler (KL) divergence to a prior distribution with a mean vector µ_0 and an iden-

    \frac{\partial\Gamma(\Psi)}{\partial\sigma^2} = \sum_{r \in R}\left(-\frac{1}{\sigma^2} + \frac{(O_r - \omega_r^T o_r)^2}{\sigma^4}\right) = 0,    (9)

which lead to the following solutions:

    \hat{\omega}_r = \left(\frac{o_r o_r^T}{\sigma^2} + \Sigma^{-1}\right)^{-1}\left(\frac{O_r o_r}{\sigma^2} + \Sigma^{-1}\mu\right);
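The estimate ω̂_r is the standard Bayesian least-squares (MAP) solution that follows from the Gaussian likelihood in Eq. (1) and the Gaussian prior in Eq. (2). A minimal numpy sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

def map_aspect_weights(O_r, o_r, mu, Sigma, sigma2):
    """MAP estimate of the per-review weight vector:
    w_hat = (o_r o_r^T / sigma^2 + Sigma^{-1})^{-1} (O_r o_r / sigma^2 + Sigma^{-1} mu)."""
    Sigma_inv = np.linalg.inv(Sigma)
    A = np.outer(o_r, o_r) / sigma2 + Sigma_inv       # posterior precision
    b = O_r * o_r / sigma2 + Sigma_inv @ mu           # precision-weighted evidence
    return np.linalg.solve(A, b)                      # solve A w = b (avoids explicit inverse)

# Toy example: the second aspect carries no opinion in this review, so the
# rating evidence is attributed entirely to the first aspect.
w_hat = map_aspect_weights(O_r=4.0, o_r=np.array([1.0, 0.0]),
                           mu=np.zeros(2), Sigma=np.eye(2), sigma2=1.0)
```

Solving the linear system with `np.linalg.solve` rather than forming the inverse of the posterior precision is the usual numerically stabler choice.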
Table 4: Evaluations on Aspect Ranking. @5, @10, @15 denote the evaluation metrics of NDCG@5, NDCG@10,
and NDCG@15, respectively. * significant t-test, p-values<0.05.
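The NDCG@k metrics in Table 4 follow Jarvelin and Kekalainen (2002), cited in the references. The sketch below uses one common gain/discount formulation, (2^rel − 1) / log2(i + 1); the exact variant used in the paper may differ in these conventions.

```python
import math

def dcg_at_k(rels, k):
    # Discounted cumulative gain over the top-k relevance grades, in ranked order.
    return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels[:k]))

def ndcg_at_k(rels, k):
    # Normalize by the DCG of the ideal (descending-relevance) ordering.
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

# A perfectly ordered aspect ranking scores 1.0; putting the most important
# aspect last is penalized by the log-position discount.
score = ndcg_at_k([3, 2, 1], 3)
```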
provide product comparison on the important aspects to users, so that users can make wise purchase decisions conveniently.

In the following, we apply the aspect ranking results to assist document-level review sentiment classification. Generally, a review document contains the consumer's positive/negative opinions on various aspects of the product. It is difficult to get an accurate overall opinion of the whole review without knowing the importance of these aspects. In addition, when we learn a document-level sentiment classifier, the features generated from unimportant aspects lack discriminability and thus may deteriorate the performance of the classifier (Fang et al., 2010). Since the important aspects and the sentiment terms on these aspects can greatly influence the overall opinions of the review, they are highly likely to be discriminative features for sentiment classification. These observations motivate us to utilize the aspect ranking results to assist in classifying the sentiment of review documents.

Specifically, we randomly sampled 100 reviews of each product as the testing data and used the remaining reviews as the training data. We first utilized our approach to identify the important aspects from the training data. We then explored the aspect terms and sentiment terms as features, based on which each review is represented as a feature vector. Here, we give more emphasis to the important aspects and the sentiment terms that modify these aspects. In particular, we set the term weight to 1 + φ · ϖ_k, where ϖ_k is the importance score of the aspect a_k; φ is set to 100. Based on the weighted features, we then trained an SVM classifier using the training reviews to determine the overall opinions on the testing reviews. For the performance comparison, we compared our approach against two baselines that do not utilize the importance of aspects: the Boolean weighting method and the frequency weighting (tf) method (Paltoglou et al., 2010). The comparison results are shown in Table 6. We can see that our approach (IA) significantly outperforms the other methods in terms of average F1-measure, by over 2.79% and 4.07%, respectively. The results also show that the Boolean weighting method outperforms the frequency weighting method in terms of average F1-measure by over 1.25%, which is consistent with the previous research by Pang et al. (2002). On the other hand, from the IA weighting formula, we observe that without using the important aspects, our term-weighting function is equal to Boolean weighting. Thus, we can speculate that the identification of important aspects is beneficial to improving the performance of document-level sentiment classification.

5 Related Work

Existing research mainly focused on determining opinions on the reviews, or identifying aspects from these reviews, and viewed all aspects equally without distinguishing the important ones. In this section, we review existing research related to our work.

Data set      SVM+Boolean           SVM+tf                SVM+IA
              P      R      F1      P      R      F1      P      R      F1
Canon EOS     0.689  0.663  0.676   0.679  0.654  0.666   0.704  0.721  0.713
Fujifilm      0.700  0.687  0.693   0.690  0.670  0.680   0.731  0.724  0.727
Panasonic     0.659  0.717  0.687   0.650  0.693  0.671   0.696  0.713  0.705
MacBook       0.744  0.700  0.721   0.768  0.675  0.718   0.790  0.717  0.752
Samsung       0.755  0.690  0.721   0.716  0.725  0.720   0.732  0.765  0.748
iPod Touch    0.686  0.746  0.714   0.718  0.667  0.691   0.749  0.726  0.737
Sony NWZ      0.719  0.652  0.684   0.665  0.646  0.655   0.732  0.684  0.707
BlackBerry    0.763  0.719  0.740   0.752  0.709  0.730   0.782  0.758  0.770
iPhone 3GS    0.777  0.775  0.776   0.772  0.762  0.767   0.820  0.788  0.804
Nokia 5800    0.755  0.836  0.793   0.744  0.815  0.778   0.805  0.821  0.813
Nokia N95     0.722  0.699  0.710   0.695  0.708  0.701   0.768  0.732  0.750
Table 6: Evaluations on Term Weighting methods for Document-level Review Sentiment Classification. IA denotes the term weighting based on the important aspects. * significant t-test, p-values<0.05.

Analysis of the opinion on whole review text had
been extensively studied (Pang and Lee, 2008). Earlier research studied unsupervised (Kim et al., 2004), supervised (Pang et al., 2002; Pang et al., 2005), and semi-supervised approaches (Goldberg et al., 2006) to the classification. For example, Mullen et al. (2004) proposed an unsupervised classification method which exploited pointwise mutual information (PMI) with syntactic relations and other attributes. Pang et al. (2002) explored several machine learning classifiers, including Naïve Bayes, Maximum Entropy, and SVM, for sentiment classification. Goldberg et al. (2006) classified the sentiment of a review using graph-based semi-supervised learning techniques, while Li et al. (2009) tackled the problem using matrix factorization techniques with lexical prior knowledge.

Since consumer reviews usually express opinions on multiple aspects, some works have drilled down to aspect-level sentiment analysis, which aims to identify the aspects in the reviews and to determine the opinions on the specific aspects instead of the overall opinion. For the topic of aspect identification, Hu and Liu (2004) presented an association mining method to extract the frequent terms as the aspects. Subsequently, Popescu et al. (2005) proposed their system OPINE, which extracted the aspects based on the KnowItAll Web information extraction system (Etzioni et al., 2005). Liu et al. (2005) proposed a supervised method based on language pattern mining to identify the aspects in the reviews. Later, Mei et al. (2007) proposed a probabilistic topic model to capture the mixture of aspects and sentiments simultaneously. Afterwards, Wu et al. (2009) utilized a dependency parser to extract the noun phrases and verb phrases from the reviews as aspect candidates. They then trained a language model to refine the candidate set and obtain the aspects. On the other hand, for the topic of sentiment classification on specific aspects, Snyder et al. (2007) considered the situation where consumers' opinions on one aspect could influence their opinions on others. They thus built a graph to analyze the meta-relations between opinions, such as agreement and contrast, and proposed a Good Grief algorithm that leverages such meta-relations to improve the prediction accuracy of aspect opinion ratings. In addition, Wang et al. (2010) proposed the topic of latent aspect rating analysis, which aims to infer the opinion rating on each aspect. They first employed a bootstrapping-based algorithm to identify the major aspects via a few seed aspect words. They then proposed a generative Latent Rating Regression (LRR) model to infer aspect opinion ratings based on the review content and the associated overall rating.

Since there are usually huge collections of reviews, some works have addressed the topic of aspect-based sentiment summarization to combat the information overload. They aim to summarize all the reviews and integrate the major opinions on the various aspects of a given product. For example, Titov et al. (2008) explored a topic modeling method to generate a summary based on multiple aspects. They utilized topics to describe aspects and incorporated a regression model fed by the ground-truth opinion ratings. Additionally, Lu et al. (2009) proposed a structured PLSA method, which models the dependency structure of terms, to extract the aspects in the reviews. They then aggregated opinions on each specific aspect and selected representative text segments to generate a summary.

In addition, some work has proposed the topic of product ranking, which aims to identify the best products for each specific aspect (Zhang et al., 2010). They used a PageRank-style algorithm to mine the aspect-opinion graph and rank the products for each aspect.

Different from previous research, we dedicate our work to identifying the important aspects from the consumer reviews of a specific product.

6 Conclusions and Future Works

In this paper, we have proposed to identify the important aspects of a product from online consumer reviews. Our assumption is that the important aspects of a product should be the aspects that are frequently commented on by consumers, and that consumers' opinions on the important aspects greatly influence their overall opinions on the product. Based on this assumption, we have developed an aspect ranking algorithm to identify the important aspects by simultaneously considering the aspect frequency and the influence of consumers' opinions given to each aspect on their overall opinions. We have conducted experiments on 11 popular products in four domains. Experimental results have demonstrated the effectiveness of our approach on important aspect identification. We have further applied the aspect ranking results to the application of document-level sentiment classification, and have significantly improved the classification performance. In the future, we will apply our approach to support other applications.

Acknowledgments

This work is supported in part by the NUS-Tsinghua Extreme Search (NExT) project under grant number R-252-300-001-490. We give warm thanks to the project and the anonymous reviewers for their comments.

References

P. Beineke, T. Hastie, C. Manning, and S. Vaithyanathan. An Exploration of Sentiment Summarization. AAAI, 2003.
G. Carenini, R.T. Ng, and E. Zwart. Extracting Knowledge from Evaluative Text. K-CAP, 2005.
G. Carenini, R.T. Ng, and E. Zwart. Multi-document Summarization of Evaluative Text. ACL, 2006.
China Unicom 100 Customers iPhone User Feedback Report, 2009.
Y. Choi and C. Cardie. Hierarchical Sequential Learning for Extracting Opinions and Their Attributes. ACL, 2010.
H. Cui, V. Mittal, and M. Datar. Comparative Experiments on Sentiment Classification for Online Product Reviews. AAAI, 2006.
S. Dasgupta and V. Ng. Mine the Easy, Classify the Hard: A Semi-supervised Approach to Automatic Sentiment Classification. ACL, 2009.
K. Dave, S. Lawrence, and D.M. Pennock. Opinion Extraction and Semantic Classification of Product Reviews. WWW, 2003.
A. Esuli and F. Sebastiani. A Publicly Available Lexical Resource for Opinion Mining. LREC, 2006.
O. Etzioni, M. Cafarella, D. Downey, A. Popescu, T. Shaked, S. Soderland, D. Weld, and A. Yates. Unsupervised Named-entity Extraction from the Web: An Experimental Study. Artificial Intelligence, 2005.
J. Fang, B. Price, and L. Price. Pruning Non-Informative Text Through Non-Expert Annotations to Improve Aspect-Level Sentiment Classification. COLING, 2010.
O. Feiguina and G. Lapalme. Query-based Summarization of Customer Reviews. AI, 2007.
Forrester Research. State of Retailing Online 2009: Marketing Report. http://www.shop.org/soro, 2009.
A. Goldberg and X. Zhu. Seeing Stars when There aren't Many Stars: Graph-based Semi-supervised Learning for Sentiment Categorization. ACL, 2006.
M. Gamon, A. Aue, S. Corston-Oliver, and E. Ringger. Pulse: Mining Customer Opinions from Free Text. IDA, 2005.
M. Hu and B. Liu. Mining and Summarizing Customer Reviews. SIGKDD, 2004.
K. Jarvelin and J. Kekalainen. Cumulated Gain-based Evaluation of IR Techniques. TOIS, 2002.
S. Kim and E. Hovy. Determining the Sentiment of Opinions. COLING, 2004.
J. Kim, J.J. Li, and J.H. Lee. Discovering the Discriminative Views: Measuring Term Weights for Sentiment Analysis. ACL, 2009.
Kelsey Research and comScore. Online Consumer-Generated Reviews Have Significant Impact on Offline Purchase Behavior.
K. Lerman, S. Blair-Goldensohn, and R. McDonald. Sentiment Summarization: Evaluating and Learning User Preferences. EACL, 2009.
B. Li, L. Zhou, S. Feng, and K.F. Wong. A Unified Graph Model for Sentence-Based Opinion Retrieval. ACL, 2010.
T. Li, Y. Zhang, and V. Sindhwani. A Non-negative Matrix Tri-factorization Approach to Sentiment Classification with Lexical Prior Knowledge. ACL, 2009.
B. Liu, M. Hu, and J. Cheng. Opinion Observer: Analyzing and Comparing Opinions on the Web. WWW, 2005.
B. Liu. Sentiment Analysis and Subjectivity. Handbook of Natural Language Processing. Marcel Dekker, New York, NY, USA, 2009.
Y. Lu, C. Zhai, and N. Sundaresan. Rated Aspect Summarization of Short Comments. WWW, 2009.
L.M. Manevitz and M. Yousef. One-class SVMs for Document Classification. Journal of Machine Learning Research, 2002.
R. McDonald, K. Hannan, T. Neylon, M. Wells, and J. Reynar. Structured Models for Fine-to-coarse Sentiment Analysis. ACL, 2007.
Q. Mei, X. Ling, M. Wondra, H. Su, and C.X. Zhai. Topic Sentiment Mixture: Modeling Facets and Opinions in Weblogs. WWW, 2007.
H.J. Min and J.C. Park. Toward Finer-grained Sentiment Identification in Product Reviews Through Linguistic and Ontological Analyses. ACL, 2009.
T. Mullen and N. Collier. Sentiment Analysis using Support Vector Machines with Diverse Information Sources. EMNLP, 2004.
N. Nanas, V. Uren, and A.D. Roeck. Building and Applying a Concept Hierarchy Representation of a User Profile. SIGIR, 2003.
H. Nishikawa, T. Hasegawa, Y. Matsuo, and G. Kikui. Optimizing Informativeness and Readability for Sentiment Summarization. ACL, 2010.
B. Ohana and B. Tierney. Sentiment Classification of Reviews Using SentiWordNet. IT&T Conference, 2009.
G. Paltoglou and M. Thelwall. A Study of Information Retrieval Weighting Schemes for Sentiment Analysis. ACL, 2010.
B. Pang, L. Lee, and S. Vaithyanathan. Thumbs up? Sentiment Classification using Machine Learning Techniques. EMNLP, 2002.
B. Pang, L. Lee, and S. Vaithyanathan. A Sentimental Education: Sentiment Analysis using Subjectivity Summarization based on Minimum Cuts Techniques. ACL, 2004.
B. Pang and L. Lee. Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales. ACL, 2005.
B. Pang and L. Lee. Opinion Mining and Sentiment Analysis. Foundations and Trends in Information Retrieval, 2008.
A.-M. Popescu and O. Etzioni. Extracting Product Features and Opinions from Reviews. HLT/EMNLP, 2005.
R. Prabowo and M. Thelwall. Sentiment Analysis: A Combined Approach. Journal of Informetrics, 2009.
G. Qiu, B. Liu, J. Bu, and C. Chen. Expanding Domain Sentiment Lexicon through Double Propagation. IJCAI, 2009.
M. Sanderson and B. Croft. Document-word Co-regularization for Semi-supervised Sentiment Analysis. ICDM, 2008.
B. Snyder and R. Barzilay. Multiple Aspect Ranking using the Good Grief Algorithm. NAACL HLT, 2007.
S. Somasundaran, G. Namata, L. Getoor, and J. Wiebe. Opinion Graphs for Polarity and Discourse Classification. ACL, 2009.
Q. Su, X. Xu, H. Guo, X. Wu, X. Zhang, B. Swen, and Z. Su. Hidden Sentiment Association in Chinese Web Opinion Mining. WWW, 2008.
I. Titov and R. McDonald. A Joint Model of Text and Aspect Ratings for Sentiment Summarization. ACL, 2008.
C. Toprak, N. Jakob, and I. Gurevych. Sentence and Expression Level Annotation of Opinions in User-Generated Discourse. ACL, 2010.
P. Turney. Thumbs up or Thumbs down? Semantic Orientation Applied to Unsupervised Classification of Reviews. ACL, 2002.
H. Wang, Y. Lu, and C.X. Zhai. Latent Aspect Rating Analysis on Review Text Data: A Rating Regression Approach. KDD, 2010.
B. Wei and C. Pal. Cross Lingual Adaptation: An Experiment on Sentiment Classifications. ACL, 2010.
T. Wilson, J. Wiebe, and P. Hoffmann. Recognizing Contextual Polarity in Phrase-level Sentiment Analysis. HLT/EMNLP, 2005.
T. Wilson and J. Wiebe. Annotating Attributions and Private States. ACL, 2005.
Y. Wu, Q. Zhang, X. Huang, and L. Wu. Phrase Dependency Parsing for Opinion Mining. ACL, 2009.
K. Zhang, R. Narayanan, and A. Choudhary. Voice of the Customers: Mining Online Customer Reviews for Product Feature-based Ranking. WOSN, 2010.
J. Zhu, H. Wang, and B.K. Tsou. Aspect-based Sentence Segmentation for Sentiment Summarization. TSA, 2009.
L. Zhuang, F. Jing, and X.Y. Zhu. Movie Review Mining and Summarization. CIKM, 2006.