Artificial Intelligence Review: Arabic Query-Based Update-Summarization System


Artificial Intelligence Review

Arabic Query-Based Update-Summarization System


--Manuscript Draft--

Manuscript Number: AIRE-D-18-00173

Full Title: Arabic Query-Based Update-Summarization System

Article Type: Manuscript

Keywords: Automatic summarization; Arabic update summarization; Graph-ranking model; Arabic WordNet


Arabic Query-Based Update-Summarization System

Muneera Alhoshan · Najwa Altwaijry

Received: date / Accepted: date

Abstract Update summarization is the problem of extracting new information on a specific topic from a collection of documents, with the assumption that the
reader has prior knowledge of that topic. It is useful for users who want to track
topic development over time. The availability of systems that provide update
summarization is important to save the time and effort of users. Unfortunately,
such resources are lacking for the Arabic language. In this paper, we provide a
query-based update-summarization system. The summary is generated from multiple documents based on the similarity between sentences, and between sentences and the user query. We use a graph-based ranking model to represent this similarity, through a combination of lexical and semantic relations between words.
Keywords Automatic summarization · Arabic update summarization · Graph-
ranking model · Arabic WordNet

1 Introduction

? estimates that the number of Internet users will reach around seven billion by 2018. With this rapid growth, the problem of information overload arises, and users face difficulties in assimilating information. These difficulties result in users not reading important documents, giving rise to a need for an automated process that addresses this problem.
Research studies in this area have shown strong progress lately, especially in
the English language (??). Unfortunately, research in Arabic text summarization

Muneera Alhoshan
King Abdulaziz City for Science and Technology
Riyadh, Saudi Arabia
E-mail: malhawshan@kacst.edu.sa
Najwa Altwaijry
Department of Computer Science, College of Computer and Information Sciences
King Saud University
Riyadh, Saudi Arabia
E-mail: ntwaijry@ksu.edu.sa

is still at an early stage, and most of the published work falls under the heading
of generic summarization, which summarizes a static collection of documents on a
given topic (?). Natural language processing (NLP) tools made for other languages are ineffective for Arabic, due to the complexity of its structure and morphology, which requires special handling (?).
This paper provides an update-summarization system that can generate an update summary from multiple web documents related to the user query, containing new information on the user's requested topic. The system assumes that the user already has information about the topic and wants to track topic developments over time, so it does not repeat information. This work is a first attempt at such a system for the Arabic language.
This system relies on similarity calculations to generate summaries using a
graph-based ranking model to represent the documents. Moreover, the similarity
is computed based on a combination of lexical and semantic features using the
Arabic WordNet dictionary (?).
The remainder of the paper is organized as follows: Section 2 provides an overview of available update-summarization systems. Section 3 explains the system methodology. Section 4 discusses the evaluation of the system. Finally, Section 5 summarizes the paper and discusses system limitations and future work.

2 Related Works

Update summarization was proposed by the Document Understanding Conference1 (DUC) in 2007. Several approaches have been used to generate update summaries
for the English language. Update summaries are always related to time, so some
approaches were focused on getting the most recent facts. One of these approaches
searched for time expressions in the documents (?). Another approach detected
topics from a time-tagged corpus and represented them in a timeline (?). These
methods are not always accurate, as recent information is not necessarily new and
could be repeated information (?).
Many other approaches depend on sentence ranking. ? applied a graph-based
sentence-ranking algorithm (PNR2) that is an extension of the TextRank algo-
rithm (?). PNR2 performs negative and positive reinforcements. Positive reinforce-
ments determine the importance of a sentence, while negative reinforcements avoid
sentence redundancy. ? and ? use the Maximal Marginal Relevance (MMR) algo-
rithm to rank sentences. In the ? application, the algorithm selects the summary
sentences based on relevance to a query and their dissimilarity to old sentences
based on TF-IDF. ? uses a graph-based Marginal Ranking model to avoid the problems of the reinforcement method. ? uses Manifold Ranking with sink points to select non-redundant, topic-relevant, and important sentences. ? propose a
graph-based ranking method that reformulates the update summarization as a
quadratically constrained quadratic programming problem.
? propose a generative Hierarchical Tree Model (HTM) based on the Latent
Dirichlet Allocation model (LDA) (?). HTM views the update summarization task
as a topic-detection problem, in contrast to the aforementioned works that view
the update-summarization task as a redundancy-removal problem. ? propose a hierarchical sequential update-summarization system consisting of two levels: one level finds the most suitable topic and its most representative keywords using the LDA technique, and the other level scores sentences using three methods: the keyword diversity, sentence length, and sentence position (KLP) method; the short-length sentence with larger keyword diversity (SKD) method; and the keyword shooting (KS) method. KLP scores a sentence based on its position and length. SKD favors short sentences with a wide variety of topic keywords, while KS is concerned only with a sentence's keyword variety.

1 http://www-nlpir.nist.gov/projects/duc/duc2007/tasks.html
As Arabic has no such systems (?), the main contribution of this paper is to
provide an update-summarization system for the Arabic language.

3 Methods

The system follows a hybrid approach to create an extractive, query-based update text-summarization system for the Arabic language, combining a graph-based approach with a semantic-based approach. The system consists of four components:
1. Documents retrieval
2. Documents preprocessing
3. Graph-based ranking model creation
4. Summary generation

3.1 Documents retrieval

The system takes the user query as input, and then retrieves relevant documents that concern the specific event or situation in which the user is interested. Each document is accessed and parsed to extract its text for the further analysis described below. The retrieval engine employed at this stage is the Google Custom Search API2.

3.2 Documents preprocessing

Analyzing Arabic text is a very challenging process due to the complexity of Arabic grammatical rules (?). The preprocessing step is essential in any text-mining task, as it helps improve runtime efficiency and increases the accuracy of the task.
The text documents are preprocessed using several techniques:

– Tokenization: the process of splitting the text into small units (sentences and
words).
– Number and non-Arabic words removal.
– Punctuation-mark and symbol removal: the process of removing punctuation marks and symbols from each sentence.
2 https://developers.google.com/custom-search/

– Stop-words removal: the process of removing words that have no specific meaning. The list of stop words used in this study appears in Appendix A.
– Word stemming: using the Khoja stemmer (?), which uses a root-based approach and has good accuracy (?).
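As a rough illustration, the preprocessing steps above can be sketched as follows. The stop-word list here is a tiny stand-in for the full list in Appendix A, and the identity stemmer stands in for the Khoja root stemmer, which is an external tool:

```python
import re

# Stand-in stop-word list; the system uses the full list in Appendix A.
STOP_WORDS = {"في", "من", "إلى", "على"}

def preprocess(text, stem=lambda w: w):
    """Illustrative preprocessing pipeline: tokenize, drop numbers and
    non-Arabic tokens (which also removes punctuation), remove stop words,
    and stem (identity stemmer here; Khoja root stemmer in the paper)."""
    sentences = re.split(r"[.!؟?]+", text)
    processed = []
    for sent in sentences:
        words = sent.split()
        # Keep only tokens made entirely of Arabic letters.
        words = [w for w in words if re.fullmatch(r"[\u0621-\u064A]+", w)]
        words = [w for w in words if w not in STOP_WORDS]
        words = [stem(w) for w in words]
        if words:
            processed.append(words)
    return processed

print(preprocess("ذهب الولد إلى المدرسة. 123 hello"))
```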

3.3 Graph-based ranking model creation

The graph-based summarization approach represents a set of documents or sentences as a graph. Multiple studies have used the graph-based approach for update summarization (???). This system uses a model similar to the one proposed by ?, with some modifications, including relying on the similarity of sentences for redundancy elimination and summary generation. The textual documents are represented in a graph whose vertices (nodes) represent the document sentences and the query, and the edges between them represent similarity. Each vertex has multiple weighted edges that represent similarities among vertices. Each vertex also has a timestamp that represents the document date (see Fig. 1). Our model differs from the above-mentioned model in three areas: we use a different redundancy-checking algorithm and different similarity measures, and it generates a query-based update summary instead of a query-based summary only.

Fig. 1 Illustration of the graph-based ranking model

3.3.1 Measuring Similarity

As in ?, the similarity between each sentence and the query, and between the sentences themselves, is measured on two levels: the lexical level and the semantic level; however, this system uses a different dictionary. The lexical similarity (SL) is measured by the Jaccard coefficient over the common terms between sentences, using the following equation:

SL(Si, Sj) = MC / (MSi + MSj − MC) (1)

where:
MC : the number of common words between the two sentences
MSi : the total number of words in sentence Si
MSj : the total number of words in sentence Sj
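Equation 1 can be computed directly from two tokenized sentences. Treating each sentence as a set of distinct words is an assumption here, since the paper does not state how repeated words are counted:

```python
def lexical_similarity(si, sj):
    """Jaccard coefficient of Equation 1 over two tokenized sentences,
    treating each sentence as a set of distinct words (an assumption)."""
    wi, wj = set(si), set(sj)
    common = len(wi & wj)               # MC
    denom = len(wi) + len(wj) - common  # MSi + MSj - MC
    return common / denom if denom else 0.0

print(lexical_similarity(["drink", "child", "milk"],
                         ["drink", "boy", "milk"]))  # → 0.5
```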

The second level of similarity is measured semantically, based on the Arabic WordNet (AWN) dictionary (??). On this level the sentences are represented by semantic vectors (SVs). The vectors contain the distinct words from both sentences and their similarity scores. The similarity score for each word is calculated as follows: if the word exists in the sentence, its score is set to 1; otherwise, a score is computed between that word and each word in the sentence using Equation 2, and the word's score is set to the highest of these scores. The score in Equation 2 is computed by finding the synonym set of each word and detecting the common synonyms between the two words. The synonyms are acquired from the AWN dictionary, searching either by the word itself or, if it is not found, by its root. Once the two synonym sets have been found, the similarity score between them is computed using the Jaccard coefficient:

sim(Wi, Wj) = MC / (MWi + MWj − MC) (2)


where:
MC : the number of common words between the two synonym sets
Mwi : the total number of words in Wi synonym set
Mwj : the total number of words in Wj synonym set
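A minimal sketch of Equation 2, assuming a synonym-lookup function that stands in for the AWN query (by word, falling back to the word's root in the paper); the `get_synonyms` callable and the toy synset table are illustrative, not part of AWN:

```python
def word_similarity(wi, wj, get_synonyms):
    """Jaccard coefficient between the two words' synonym sets (Equation 2).
    `get_synonyms` stands in for an Arabic WordNet lookup."""
    syn_i, syn_j = set(get_synonyms(wi)), set(get_synonyms(wj))
    common = len(syn_i & syn_j)
    denom = len(syn_i) + len(syn_j) - common
    return common / denom if denom else 0.0

# Toy lookup table standing in for AWN synsets.
synsets = {"milk_a": {"milk_a", "milk_b"},
           "milk_b": {"milk_a", "milk_b", "dairy"}}
print(word_similarity("milk_a", "milk_b", lambda w: synsets[w]))  # 2/3
```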

Example:
Sentence 1: شرب الطفل الحليب (The child drank the milk)
Sentence 2: احتسى الولد اللبن (The boy sipped the milk)

Two semantic vectors Vi and Vj are created for the distinct words from both sentences. In Vi the similarity score is set to 1 for the words from Sentence 1; then Equation 2 is applied for the words from Sentence 2. For Vj the opposite occurs: the score is set to 1 for the words from Sentence 2, and Equation 2 is applied for the words from Sentence 1 (see Table 1).

Table 1 The example similarity scores

      شرب   الطفل   الحليب   احتسى   الولد   اللبن
Vi    1     1       1        0.5     0.32    1
Vj    0.5   0.32    1        1       1       1

The semantic vectors generated as described above are used to calculate the overall semantic similarity (SM) using the cosine similarity:

SM(Si, Sj) = Vi · Vj / (‖Vi‖ ∗ ‖Vj‖) (3)

where:
Vi : the semantic vector of sentence Si
Vj : the semantic vector of sentence Sj
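Equation 3 applied to the scores of Table 1 looks like this:

```python
import math

def semantic_similarity(vi, vj):
    """Cosine similarity between two semantic vectors (Equation 3)."""
    dot = sum(a * b for a, b in zip(vi, vj))
    norm = math.sqrt(sum(a * a for a in vi)) * math.sqrt(sum(b * b for b in vj))
    return dot / norm if norm else 0.0

# Scores from Table 1 for the example sentence pair.
vi = [1, 1, 1, 0.5, 0.32, 1]
vj = [0.5, 0.32, 1, 1, 1, 1]
print(round(semantic_similarity(vi, vj), 3))  # → 0.836
```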
6 M. Alhoshan, N. Altwaijry

Finally, the sentence similarity values are combined into an overall sentence similarity, a weighted sum of the lexical and semantic similarities, calculated as follows:

sim(Si, Sj) = λ ∗ SL(Si, Sj) + β ∗ SM(Si, Sj) (4)

where λ and β are weighting parameters with λ, β < 1. β should be greater than λ, as we are chiefly concerned with semantic similarity. The weights used are λ = 0.2 and β = 0.8.
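With the paper's weights as defaults, Equation 4 is simply:

```python
def overall_similarity(sl, sm, lam=0.2, beta=0.8):
    """Weighted combination of lexical (SL) and semantic (SM) similarity
    (Equation 4), with the paper's weights λ=0.2, β=0.8 as defaults."""
    return lam * sl + beta * sm

# Example: SL = 0.5, SM = 0.836 -> 0.2*0.5 + 0.8*0.836
print(round(overall_similarity(0.5, 0.836), 4))  # → 0.7688
```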
Moreover, the similarity threshold between two sentences varies from one context to another; based on the empirical experiments explained in Section 4.1, it should be greater than or equal to 0.7. The similarity threshold between the query and the other sentences was set to 0.2: since the Google Search API is used, all retrieved results are relevant to the user query, and the most relevant sentences are identified by at least a 0.2 similarity to the query.

3.4 Summary generation

3.4.1 Sentences selection

Selecting a sentence for inclusion in the summary requires considering two aspects. First, the sentence score: all sentences that achieve high similarity scores with the query are considered candidate summary sentences. Second, to comply with update-summary characteristics, the method is time based and selects sentences from the latest documents that do not appear in older ones. This is accomplished by considering the date of each sentence and its similarity scores. For example, assume two sentences A and B, where Sentence A appears in Document 1 with timestamp T1 (an old sentence from an old document) and Sentence B appears in Document 2 with timestamp T2 (a new sentence from a new document). The summary will be generated from sentences in Document 2; if A and B have high similarity, then B will be excluded from the summary. If all sentences from the latest documents have high similarity to the old documents' sentences, then no summary is generated, as no new information was found. The timestamp can be set either by the user providing a time, or by the system using the current time.
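The time-based selection described above can be sketched as follows; the function name, the flat sentence lists, and the pluggable `sim` callable are illustrative simplifications, not the paper's implementation:

```python
def select_update_sentences(new_sents, old_sents, sim, threshold=0.7):
    """Keep sentences from the newest documents that are not highly similar
    to any sentence from older documents. `sim` is assumed to be the
    combined similarity of Equation 4."""
    selected = []
    for s in new_sents:
        if all(sim(s, o) < threshold for o in old_sents):
            selected.append(s)
    return selected  # empty list => no new information, no summary

# Toy similarity: identical strings count as redundant.
sim = lambda a, b: 1.0 if a == b else 0.0
print(select_update_sentences(["new fact", "old fact"], ["old fact"], sim))
# → ['new fact']
```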

3.4.2 Redundancy checking

When multiple documents are summarized, redundant sentences must be removed before generating the final summary. Table 2 shows the algorithm used for this purpose.

Table 2 Redundancy removal and sentence selection algorithm

Get all nodes of the graph
Iterate over the graph's nodes; for each node:
    Get all of the node's edges
    For each edge:
        IF the edge weight >= 0.7
            IF the source node and the target node are in the same document
                Remove the current node and exit the current loop
            ELSE IF both nodes' dates are equal to or later than the latest date
                Remove the current node and exit the current loop
    Otherwise (no edge weight reached 0.7):
        Create a scores list holding the current node's edge weights (similarity scores)
        Find the maximum score in the list; IF it is < 0.7
            Get the time of the latest document if the user did not specify a time
            IF the node's date is equal to or later than the specified time
                ADD the sentence to the nominated-sentences list
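Assuming each graph node carries its sentence, source document, date, and weighted edges, the nomination step of Table 2 might look roughly like this; the node structure and the non-mutating simplification are ours, not the paper's:

```python
def nominate_sentences(nodes, latest_time, threshold=0.7):
    """Sketch of the redundancy-removal step of Table 2. Each node is assumed
    to be a dict with 'sentence', 'doc', 'date', and 'edges' (a list of
    (other_node, weight) pairs); this structure is illustrative."""
    nominated = []
    for node in nodes:
        redundant = False
        for other, weight in node["edges"]:
            if weight >= threshold:
                # Same-document duplicate, or both sentences are recent:
                # drop the current node.
                if other["doc"] == node["doc"] or (
                    node["date"] >= latest_time and other["date"] >= latest_time
                ):
                    redundant = True
                    break
        if not redundant and node["date"] >= latest_time:
            nominated.append(node["sentence"])
    return nominated

old = {"sentence": "old", "doc": 1, "date": 1, "edges": []}
new = {"sentence": "new", "doc": 2, "date": 2, "edges": [(old, 0.3)]}
print(nominate_sentences([new], latest_time=2))  # → ['new']
```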

3.4.3 Sentence ordering and summary generating

For the tokenization process, both the order of the retrieved documents and the order of the sentences within each document are retained. After all sentences nominated for the summary have been found, they are sorted based on their appearance in the document. When the nominated sentences come from different documents, they are ordered in paragraphs: each group of sentences from a specific document is placed together in one paragraph and sorted by location in that document, and the paragraphs are ordered by the retrieval order of the documents. The maximum length of the summary is set at 30% of the average length of the documents. Table 3 shows the algorithm used for this purpose.

3.5 System evaluation

Due to the lack of gold-standard summaries (reference summaries) for update summarization in the Arabic language, as all available references are dedicated to the generic type (????), a small reference corpus was created, consisting of five reference summaries produced by a human expert. Each summary was created from ten documents, and each group of documents concerns a specific subject.

Table 3 Summary generation algorithm

Sort the nominated sentences based on their source document and their location in the document
IF the nominated list contains sentences from a single document
    Iterate over the list; for each sentence:
        IF the sentence's similarity with the user query > 0.2
            Keep it in the most-relevant list
    IF the most-relevant list has items
        Generate the summary from this list until the specified length is reached
        or no sentences remain in the list
    ELSE
        Generate the summary from the nominated list until the specified length
        is reached or no sentences remain in the list
ELSE IF the nominated sentences come from different documents
    Follow the same procedure as above, except that at most two sentences are
    taken from each document

The quality of the generated summaries is evaluated automatically using ROUGE-N (?), which compares two summaries (human-generated and system-generated) based on the overlapping units between them, calculated as follows:

ROUGE-N = Σ_{S ∈ ReferenceSummaries} Σ_{gram_n ∈ S} count_match(gram_n) / Σ_{S ∈ ReferenceSummaries} Σ_{gram_n ∈ S} count(gram_n) (5)

where:
n: the length of the n-gram
count_match(gram_n): the total number of n-grams co-occurring in a reference summary and a system summary
count(gram_n): the number of n-grams in the reference summaries.
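Equation 5 can be sketched for a single reference summary as follows; the clipped (minimum) overlap count used here is the standard reading of the co-occurring n-gram count:

```python
from collections import Counter

def rouge_n(reference, system, n=1):
    """ROUGE-N as in Equation 5: overlapping n-grams between the reference
    and system summaries, over the n-grams of the reference (clipped counts)."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    ref, sys_ = ngrams(reference), ngrams(system)
    overlap = sum(min(c, sys_[g]) for g, c in ref.items())
    total = sum(ref.values())
    return overlap / total if total else 0.0

print(round(rouge_n("the cat sat on the mat".split(),
                    "the cat lay on the mat".split()), 3))  # → 0.833
```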

The system depends entirely on the performance of the similarity method, so the following procedure was used to set the best similarity threshold. First, a human expert classified a dataset (similar or not similar) consisting of a list of sentences ordered in pairs according to their similarity. The list contains 236 distinct pairs. After the manual classification process, different similarity thresholds were set and the results of the manual classification were compared with the system's predictions. The performance of the system's predictions is evaluated using the F-measure, calculated as follows:

F-measure = 2 × precision × recall / (precision + recall) (6)

where:
precision: the percentage of sentences labeled as similar by the classifier that are actually similar
recall: the percentage of similar sentences that the classifier labeled as similar
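Equation 6 is the harmonic mean of precision and recall:

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall (Equation 6)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(round(f_measure(0.75, 0.6), 3))  # → 0.667
```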

4 Evaluation results

4.1 Similarity method evaluation

To set the similarity threshold, several experiments evaluating the performance of the similarity method at different threshold scores were performed. Figure 2 shows the similarity method's performance at each threshold.

Fig. 2 Similarity method performance

The best performance was achieved at 0.7; as the threshold value approaches 1, the performance remains stable. Setting the threshold value to 0.7 minimizes the misclassification rate. Table 4 shows samples of the dataset used. Note that the queries used in this evaluation are the same as those used in the summary-evaluation process.

Table 4: Sentence-similarity dataset sample

[The sample sentence pairs are Arabic text that did not survive extraction; only the system similarity scores and the human judgments are recoverable.]

Pair   System similarity score   Human judgment
1      0.8                       similar
2      0.76                      similar
3      0.63                      dissimilar
4      0.42                      similar
5      0.44                      dissimilar
6      0.5                       similar
7      0.56                      dissimilar
8      0.3                       similar
9      0.35                      dissimilar

4.2 Summarization method evaluation

The summarization method evaluation utilized ROUGE 2.0, a language-independent implementation of ROUGE that operates on Unicode characters. ROUGE 2.0 currently supports Unicode-based texts; however, it only supports the ROUGE-N evaluation metrics. Three ROUGE metrics were tested: ROUGE-1, ROUGE-2, and ROUGE-4. Table 5 and Figure 3 report the average recall, precision, and F-measure scores for these metrics. As the gram size increases, the ROUGE scores decrease.

Table 5 Summaries’ evaluation results

Recall Precision F-measure


ROUGE-1 0.877988 0.672933 0.754258
ROUGE-2 0.843025 0.643858 0.72251
ROUGE-4 0.81354 0.621458 0.69705

Fig. 3 Summaries’ evaluation results

Comparing this method with approaches for other languages is very difficult, and we are unable to compare it with other Arabic approaches, as none exist. Moreover, the reliability and stability of the ROUGE-N measure are affected by the size of the dataset used in the evaluation process (?), as well as by the number of reference (standard) summaries used for each summary (?). The dataset created here is relatively small compared with the one used to evaluate ROUGE-N (?). Therefore, while the effectiveness of this summarization approach cannot yet be confirmed, evaluating on a larger dataset in the future should make this possible.
Appendix B shows a sample of the system summaries and their corresponding
human summaries (reference summaries).

5 Conclusion

This paper presents a query-based update-summarization system for the Arabic language that can summarize multiple recent documents, assuming users have read some related previous documents. The Arabic language suffers from a lack of such systems: most available summarization systems are concerned with generating a generic summary from a static collection of documents on a given topic. This paper is the first attempt to promote the idea of update summarization for the Arabic language. The proposed summarization system uses a graph-based ranking model to represent similarity between sentences. That similarity is calculated by finding lexical and semantic relations between sentences using the Arabic WordNet lexical database. The similarity-measuring method is used both to eliminate redundancy and to generate summaries.
The tool was evaluated on the performance of the similarity-measuring method and on the quality of the generated summaries. The results were promising, but further investigation is strongly recommended.
This work suffers from two major limitations. First, the performance of the similarity-measuring method is affected by the AWN dictionary; indeed, AWN suffers from some serious limitations, such as its limited number of words and synonyms (?). The second limitation concerns the summary-evaluation results: since the reliability and stability of the ROUGE-N metrics depend on the sample size used in the evaluation (?), five summaries with their corresponding standard summaries are not enough. Future work requires further investigation of the summarization method, such as using additional methods to capture semantic similarity alongside AWN similarity, to overcome the dictionary's limited coverage.

References
A Stop words list

Below is the list of stop words that have been used in this system:

[The Arabic stop-word list did not survive text extraction from the source document and is omitted here.]

B Sample of system summaries and their document

B.1 Original document:


'   
XAªK. A K. ú
æ” ®K
ém.'
@ Qm'. Q.« á 
Jk CË@ ‡ ¯Y
.
K ém Ì 'AªÖÏ AJ»QK ©Ó A¯A ® K@ éªÒm
.

 Ì '@ ÐñJË@ úG ðPð B@


.

. XAm B@ èXA¯ Q¯ @
 
   
ÈA¯ð . éJ
K. ðPð @ð éJ
»QK PXA’Ó éKY» @ AÓ ‡¯ð ,ÉJ.®ÖÏ@ Yg B@ YJÓ AJ
»QK úÍ@ àAKñJ
Ë@ Ë áK
Y¯@ ñË@ XYm.Ì '@ á 
Jk
  CË@
.
Èñk AJ
»QK ©Ó †A ® KB@ úΫ @ñ® ¯@ ð XAm' B@ èXA¯ à@ QK
ñK úΫ èYK
Qª K ú
¯ CJ
J.
ƒ AëñK
@YJÊ J ¯ Z@P Pð 
KP
@ Xð@X YÔg @ AJ»QK Z@P Pð 
KP ù® JË@ à @ YªK. ½Ë Xð , á 
Jk
PAƒ @ð . ɂ»ðQK. ú
¯ ú
G. ðPð B@ Êj.ÖÏ@ 
KP ñÊ«ð

CË@
.
ÕæJƒ P@ X @/ €PAÓ 20 Yg B@
YJÓ àAKñJ
Ë@ úÍ@ àñʒJ
ƒ áK
YË@ 
J
ÓA¢JË@ 
« áK
Qk. AêÖÏ@ð 
Jk. CË@ à @ úÍ@ A’
@
á Q á

.P QK
ðP éJ
Ê« IªÊ£@  ú
»QK ú
G. ðPð @ ¼Q‚Ó àAJ
K. A’
@ éJ
Ê« Y» @ AÓ ñëð ,AJ
»QK úÍ@ ÑîEXA«@ 

         
½ƒñK YËAKðX úG ðPð B@ Êj.ÖÏ@ 
KP Y» @ð
AJ
»QK ©Ó †A®K@ úÍ@ ɓñJË@ RJK
ñK úΫ èYK
QªK ù
® éJêk. áÓ
.

XAm B@ ú
¯ 82 Ë@ ÈðYË@ èXA¯ éªÒm. '@ úæ•ð @ Y¯ ½ƒñK àA¿ñ.êm.
@ Qm . .« áK
Qk. AêÖÏ@ ‡ ¯Y
 '    Ì   '  ' Q K ­ ¯ñK  úæ” ®K
.

‡¯ð , éJ KA KñJ     




Ë@ Ég@ñ‚Ë@ úÍ@ á 
Jk. CË@ ‡¯YK ­¯ð ¬YîE. AJ
»QK ©Ó YK
Yg. †A®K@ ¨ðQå„Ó úΫ 鮯@ñÖÏAK. ú
G. ðPð B@
Qê¢Ë@ ÉJ.¯ é«AÒJk. @ YªJ. èXA®Ë@  úΫ Q« ½ƒñK à@ PY’ÖÏ@ ÈA¯ð.  éJ ‚Q ®Ë@ é¯Aj’Ë@  úG ðPð @ PY’Ó XA¯ @ AÓ
éËA¿ñË


.

33 Õ¯P ɒ®Ë@ iJ¯ A’
@ ½ƒñK h ¯@ð . éJ
Ê« 鮯@ñÖÏAK. ÑëA“ð @ BYªÓ A¯A®K@ BJ
»QK Z@P Pð 
KP ©Ó ɂ»ðQK. ú
¯
   Q       
 ñJKñK  ( éJ
K@ Q 
ÖÏ@ð éJ
ËAÖÏ@) PñÓ BAK     HA ®Ó Èñ’¯ áÓ
, ÐXA®Ë@

30 úæk . é®ÊªJÖÏ@ð ,ú
G. ðPð B@ XAm' BAK. AJ
»QK éK
ñ’«  “ðA
 Ì '@ ÐñJË@ éJ K ðPð @ éËðX
. éªÒm  28 ¨AÒJk @ ú¯ h Q®ÖÏ@  Q® K à @ Q¢ JJÖÏ@ ð
.

. .

CË@ É¿ XQ£
á 
Jk  @ ÕæJ ƒ éK @ RJK ñK ú¯ 骯ñÓ  úΪ A¾KñK ñƒ ¬CƒñëñK  
KP áÊ« @ , éJêk. áÓ
.


. . ½J
‚ Ë@ Z@P Pð
XYmÌ '@
†A ® KB@ I.k. ñÖß. Yg B@ áÓ @P AJ.J«@ AJ
»QK úÍ@ éJ
»QË@ Ég@ñ‚Ë@ áÓ á 
ÓXA¯ éJ
KA KñJ
Ë@ P Qm .Ì '@ àñʒ
áK
YË@ .
     
 ß ø YË@ úG ðPð B@ ú»QË@
©J
¯P ú
»QK Èð ñ‚Ó Y» @ ,øQk @ éêk. áÓ .ɂ»ðQK. ú
¯ éªÒm.Ì '@ éJ
Ê« é¯XA’ÖÏ@ IÖ

.

 CË@ à @ P QK ðQË


ÉJ
kQË@ H@Z@Qk. @ à @ð ,AJ
»QK úÍ@ ÑîEXA«@ ÕæJƒ ú
æ•AÖÏ@ Yg B@ YªK. àAKñJ
Ë@ úÍ@ á
ʓ@ñË@ á 
Jk
     .

  
. éËñ¯ Yg úΫ ,ÉJ.®ÖÏ@ ÉK
QK. @ áÓ ©K. @QË@ YJÓ @YJ.ƒ
CË@ èXA«@
á
Jk .
 
Q
« é®K
Q¢. àAKñJ
Ë@ úÍ@ @ð Q.« áK
YË@ áK
Qk. AêÖÏ@ð á
Jk. CË@ ©J
Ôg. èQ®K @ YJ
ªJ‚ƒ ,h Q®ÖÏ@ †A®KB@ I.k. ñÖß.ð        
éJ
ËAÓ AK
@QÓ AJ
»QK iJÓð á

K Pñ‚Ë@ á
Jk CË@ ‘ªJ
. . Ë AK. ðPð @ ÈAJ.® Jƒ@ ÉK. A®Ó  ú¯ , àñK
Pñƒ àñ Jk B ÑîDJ K éJ «Qå…
.
.

'
. ú
G. ðPð B@ XAm B@ ú
¯ AîDK
ñ’« HA  KXAm× ©K
Qå„ð ÉJºJË@ ÈðYË ÈñkYË@ èQ
ƒ AK áÓ ¼@QK B@ á 
J£@ ñÖÏ@ ZA®«@ ð

   P AJ
ÊÓ éKCK ©ÊJ     
. Ó ¬Qå• ©K
Qå„ úΫ †AJ
‚Ë@ @ Yë ú¯ úG ðPð B@ XAm' B@ ‡¯@ð ð  
úÍ@ AꪯYK. A®K. Aƒ YêªK ðPñK
H@

.

.2018 ÐA« ÈñÊm'. øQk @ H@ P AJ
ÊÓ éKCK Q
¯ñ Kð AJ»QK

16 M. Alhoshan, N. Altwaijry

ú
¯ Qå „J.Ë@ I.K
QîDË Yg ©“ñË èYJ
k. é“Q  ¯ @ àñºJ
ƒ †A ® KB@ à@ IËA    
 ¯ É¿Q
Ó CJ
m.' @ éJ
KAÖÏ B@ èPA‚ ‚ÖÏ@ I KA¿ð

. èYJ
ªƒ éÖ ßA g úÍ@ ɓñJË@ àAÖÞ • úæ JºÖß
B@ Èñ®Ë@  ú¯ XXQK ÕË YKBñë

@ñ‚@ Q¯ ú
æ„Q ®Ë@ KQË@

à @ B@ . ém.'
@ Qm'.

‘m … ­Ë @ 143 áÓ Q» @ ø PAm.Ì '@ ÐAªË@ ©Ê¢Ó Y JÓ É“ð Y® ¯ , àA‚ B @ †ñ

 ®m Ì èYjJÖÏ@ Õ× B@  
éJ
“ñ®Ó ‡¯ðð

@ H@
   
 á Jk
á 
®ËAªË@ CË@ ¬B  Q å„«ð
àA
KñJ Ë@ àA ®ÊJ Ë@ ‡K Q£ @  «@ ð á Jk
†C CË@ ‡ ¯Y K ©’ ð .AJ»QK áÓ á 
ÓXA¯ àA KñJ

ÊË

.
.

.

. AJ
»QK ©Ó Ég úÍ@ ɓñJË@ Ég. @ áÓ
á 
J
K. ðPð B@ úΫ  ñª ’Ë@
YK
QK
éK @ AÒ» ,ÉÒJm
' B ©“ð ú¯ AîD¯
àA¿ð


Am Ì '@ AêÊm.… ú
¯ Q¢ JË@ úÍ@ ék  Am' AîE@ ÈA¯ð
. .
 ,ÐñJË@ AK ðPð @ Ñk Aë Y¯ àA «ðXP

. .
@ IJ£ Ik P ú»QË@ KQË@
.
. .


ú
¯ ñÊ«ð @ Xð@X Z@P PñË@ 
KP ÈñkX ©Ó áÓ@ Q ËAK. ½Ë X ZAg. ð , éʪ® K AÓ èQ® K @ úΫ úÎÖß à @ ÉJ.¯ á 
Jk CËAK

. .
éKCK èXCK éJ¯ ­J ’ ‚   ø YË@ I ¯ñË@  ú¯ éK @ àA «ðXP @ Y» @ð . ɂ»ðQK ú¯ á JK ðPð B@  
èXA®Ë@ ©Ó HAJkAJ.Ó  
.



.

.
Q  á  Q   B á

K CÓ
ZBñë àñ» K
áK
YË@ð ,
Jk. CË@ áÓ ÉJ
ʯ XYªË àA¾Ó
¯ñK àñªJ
¢‚
B áK
YË@ ½JËð @ úΫ , úk .
  
. èQ
J.ªK Yg úΫ Bð @ ÑîD„®K @ úÍ@ Q¢ JË @ , éK
Q m× ¨A“ð @ ú¯ AK. ðPð @ ¡ƒð ZAK
QK. B@

B.2 Manually generated summary 1:


EU leaders agreed with Turkey today, Friday, on handling the flow of refugees across the Aegean Sea by returning the new refugees arriving in Greece to Turkey as of next Sunday, according to what Turkish and European sources confirmed. Finnish Prime Minister Juha Sipila said in a tweet on Twitter that the Union's leaders had agreed on the deal with Turkey concerning the refugees, after Turkish Prime Minister Ahmet Davutoglu met the President of the European Council in Brussels. He also indicated that the refugees and irregular migrants who reach Greece from Sunday 20 March onwards will be returned to Turkey, which was confirmed by a joint European-Turkish statement seen by Reuters. Tusk had recommended on Friday that the leaders of the 28 states of the European Union approve the draft of a new agreement with Turkey aimed at stopping the flow of refugees to the Greek coasts, according to what a European source told Agence France-Presse. Tusk also proposed opening Chapter 33 of Turkey's EU membership negotiations, the chapter concerned with financial and budgetary matters. The European Union agreed to speed up the disbursement of the three billion euros it had previously pledged to pay Turkey and to provide a further three billion by 2018, with the proposal expected to be endorsed at the meeting of 28 to 30 June.

B.3 Manually generated summary 2:



He also indicated that the refugees and irregular migrants who reach Greece from Sunday 20 March onwards will be returned to Turkey, which was confirmed by a joint European-Turkish statement seen by Reuters. Separately, a senior Turkish official confirmed to Reuters that refugees arriving in Greece after last Sunday will be returned to Turkey, and that the removal procedures will begin on the fourth of April, as he put it. Under the proposed agreement, Ankara will take back all refugees and migrants who crossed to Greece irregularly, among them Syrian refugees, in exchange for Europe receiving some of the Syrian refugees, granting Turkey financial benefits, exempting Turkish citizens from entry visas to the bloc's states, and accelerating the talks on its membership of the European Union. According to the United Nations commission for human rights, more than 143,000 people coming from Turkey have arrived since the beginning of the current year.



B.4 System-generated summary:

Finnish Prime Minister Juha Sipila said in a tweet on Twitter that the Union's leaders had agreed on the deal with Turkey concerning the refugees, after Turkish Prime Minister Ahmet Davutoglu met the President of the European Council in Brussels. He also indicated that the refugees and irregular migrants who reach Greece from Sunday 20 March onwards will be returned to Turkey, which was confirmed by a joint European-Turkish statement seen by Reuters. Separately, a senior Turkish official confirmed to Reuters that refugees arriving in Greece after last Sunday will be returned to Turkey, and that the removal procedures will begin on the fourth of April, as he put it. Returning the refugees: Commission President Jean-Claude Juncker (centre) in a conversation with Davutoglu and European Council President Donald Tusk (Associated Press). Under the proposed agreement, Ankara will take back all refugees and migrants who crossed to Greece irregularly, in exchange for Europe receiving some of the Syrian refugees, granting Turkey financial benefits, exempting Turkish citizens from entry visas to the bloc's states, and accelerating the talks on its membership of the European Union. The flow of refugees has left the ten thousand refugees stranded there, after the closure of the Balkan route, in an untenable situation, and it also increases the pressure on the Europeans to reach a solution with Turkey.
