
Extraction of Main Event Descriptors from News Articles by Answering the Journalistic Five W and One H Questions


Felix Hamborg, Corinna Breitinger, Moritz Schubotz, Soeren Lachnit, Bela Gipp
Department of Computer and Information Science
University of Konstanz, Germany
{firstname.lastname}@uni-konstanz.de

ABSTRACT
The identification and extraction of the events that news articles report on is a commonly performed task in the analysis workflow of various projects that analyze news articles. However, due to the lack of universally usable and publicly available methods for news articles, many researchers must redundantly implement methods for event extraction to be used within their projects. Answers to the journalistic five W and one H questions (5W1H) describe the main event of a news story, i.e., who did what, when, where, why, and how. We propose Giveme5W1H, an open-source system that uses syntactic and domain-specific rules to extract phrases answering the 5W1H. In our evaluation, we find that the extraction precision of 5W1H phrases is p = 0.64, and p = 0.79 for the first four W questions, which discretely describe an event.

CCS CONCEPTS
• Computing methodologies → Information extraction • Information systems → Question answering • Information systems → Summarization

KEYWORDS
News Event Detection, 5W1H Extraction, 5W1H Question Answering, Reporter's Questions, Journalist's Questions, 5W QA

ACM Reference format:
F. Hamborg, C. Breitinger, M. Schubotz, S. Lachnit, B. Gipp. 2017. Extraction of Main Event Descriptors from News Articles by Answering the Journalistic Five W and One H Questions. In Proc. of the 18th ACM/IEEE-CS Joint Conf. on Digital Libraries, Fort Worth, USA, June 2018 (JCDL'18), 2 p.

This work has been supported by the Carl Zeiss Foundation.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.
JCDL '18, June 3–7, 2018, Fort Worth, TX, USA
© 2018 Copyright is held by the owner/author(s).
ISBN 978-1-4503-5178-2/18/06.
https://doi.org/10.1145/3197026.3203899

1 INTRODUCTION AND RELATED WORK
Extraction of a news article's main event is a fundamental analysis task required for a broad spectrum of use cases. For instance, news aggregators must identify the main event to cluster related articles [2, 3], e.g., articles reporting on the same event. Other disciplines also analyze the events reported on in articles, e.g., in so-called frame analyses, social scientists identify how the media is reporting on certain events. However, no method is publicly available that extracts explicit descriptors of the main event. We define explicit event descriptors as the properties that occur in a text describing an event, e.g., the text phrases in a news article that enable a news reader to understand what the article is reporting on.

The clear majority of current approaches suffer from at least one of three shortcomings. First, they detect events only implicitly, e.g., by employing topic modeling, but do not extract phrases or properties that explicitly describe the article's main event [3]. The second category of approaches does not extract universally usable descriptors but is specialized in the extraction of task-specific event properties, such as the number of protestors in a demonstration [4]. Approaches of the third category extract explicit event descriptors but are not publicly available [5].

Journalists typically answer the journalistic five W and one H questions (5W1H), i.e., who did what, when, where, why, and how, within the first few sentences of an article to inform the readers of the main event. For instance, the headline of a news article reporting on a terrorist attack in Afghanistan answers four of the 5W1H questions: "Taliban attacks German consulate in northern Afghan city of Mazar-i-Sharif with truck bomb." The phrases in this headline answer the questions who did what, where, and how; 'when' and 'why' are answered in the remainder of the article. Due to their descriptiveness of an article's main event, we focus our research on the extraction of the journalistic 5W1H.

2 EXTRACTION OF 5W1H PHRASES
Giveme5W1H is an open-source main event retrieval system for news articles. The system uses syntactic and domain-specific rules to extract the 5W1H phrases in a three-phase analysis pipeline depicted in Figure 1. The system builds on Giveme5W [1] and improves the extraction performance by addressing multiple of its future work directions: Giveme5W1H uses coreference resolution, question-specific semantic distance measures, and combined scoring of candidates, and it extracts phrases for the 'how' question.

In the first phase, preprocessing, Giveme5W1H performs state-of-the-art NLP and canonicalization to bring all named entities (NE) into their normalized form. During canonicalization, we parse dates written in natural language into canonical dates (TIMEX3), perform geocoding, and perform NE recognition and disambiguation (NERD) to link all NEs to concepts in the YAGO graph.
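To illustrate the canonicalization idea for 'where' candidates, the following minimal Python sketch geocodes a location phrase with Nominatim, the geocoding service shown in Figure 1, via the third-party geopy client. It is an illustrative approximation, not the implementation used in Giveme5W1H; the function name and the example phrase are ours.

# Minimal sketch of canonicalizing a 'where' candidate with Nominatim
# (the service named in Figure 1). Illustration only, not the authors' code.
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="giveme5w1h-canonicalization-demo")

def canonicalize_location(phrase):
    """Return (latitude, longitude, canonical address) for a location phrase,
    or None if Nominatim cannot resolve it."""
    location = geolocator.geocode(phrase)
    if location is None:
        return None
    return location.latitude, location.longitude, location.address

# Example 'where' candidate taken from the headline discussed in Section 1:
print(canonicalize_location("Mazar-i-Sharif, Afghanistan"))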
[Figure 1: pipeline diagram. A raw article enters via the Python interface or the RESTful API and flows through Preprocessing (sentence splitting, tokenization, POS tagging and parsing, NER, coreference resolution; third-party libraries: CoreNLP, SUTime, Nominatim, AIDA; results are cached), Phrase Extraction (canonicalization, enrichment; chains: action, environment, cause, method), and Candidate Scoring (who & what, when, where, why, how; combined scoring), yielding the 5W1H phrases.]

Figure 1: The three-phase analysis pipeline (1) preprocesses a news text, (2) finds candidate phrases for each of the 5W1H questions, and (3) scores these. Giveme5W1H can easily be accessed in Python and via a RESTful API.
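As the caption notes, Giveme5W1H can be used as a Python library or through its RESTful API. The following usage sketch follows the module and method names in the project's README; treat these names (Document.from_text, MasterExtractor, get_top_answer, get_parts_as_text) as assumptions, since they may differ between versions.

# Usage sketch for the Python interface of Giveme5W1H. Module paths and
# method names follow the project's README and may differ between versions.
from Giveme5W1H.extractor.document import Document
from Giveme5W1H.extractor.extractor import MasterExtractor

extractor = MasterExtractor()  # sets up the CoreNLP-backed preprocessing environment

title = ("Taliban attacks German consulate in northern Afghan city of "
         "Mazar-i-Sharif with truck bomb")
doc = Document.from_text(title, "2016-11-10")  # article text and publication date

doc = extractor.parse(doc)  # preprocessing, phrase extraction, candidate scoring

# Print the top-scored phrase for each of the 5W1H questions.
for question in ("who", "what", "when", "where", "why", "how"):
    answer = doc.get_top_answer(question)
    print(question, "->", answer.get_parts_as_text())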
The second phase, phrase extraction, uses four extraction chains to retrieve candidate phrases: (1) the action chain extracts phrases for the journalistic 'who' and 'what' questions, (2) environment extracts 'when' and 'where', (3) cause extracts 'why', and (4) method extracts 'how'. We extract as 'who' candidates all subjects, i.e., the first noun phrase (NP) of each sentence, and as 'what' candidates their respective predicates. To determine the 'when' candidates, we take the TIMEX3 instances that Giveme5W1H extracts during preprocessing. Similarly, we take the geocodes as 'where' candidates. To find 'why' candidates, we look for three types of cause-effect indicators: causal conjunctions, causative adverbs, and causative verbs. We then use a set of syntax rules to find the respective causal phrase. To find 'how' candidates, we analyze copulative conjunctions, adjectives, and adverbs. Often, sentences with a copulative conjunction, such as "after [the train came off the tracks]", contain a method phrase in the clause that follows the copulative conjunction. To improve extraction quality, we post-process all candidates for each question, e.g., by truncating overly long phrases or discarding phrases that are too short.
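The action and cause chains can be approximated with any syntactic parser. The sketch below uses spaCy dependency parses to collect subject noun phrases as 'who' candidates, their predicates as 'what' candidates, and clauses introduced by a small, hand-picked list of causal cue words as 'why' candidates. The cue list, the heuristics, and the demo sentence are our simplifications for illustration; Giveme5W1H itself works on CoreNLP parses with a richer rule set.

# Illustrative approximation of the action and cause extraction chains using
# spaCy dependency parses. Requires:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

CAUSAL_CUES = {"because", "since", "therefore", "hence"}  # toy cue list, not the paper's

def extract_candidates(text):
    """Collect rough 'who', 'what', and 'why' candidates per sentence."""
    candidates = {"who": [], "what": [], "why": []}
    for sent in nlp(text).sents:
        for token in sent:
            # 'who': a subject noun phrase; 'what': its governing predicate
            if token.dep_ in ("nsubj", "nsubjpass"):
                candidates["who"].append(" ".join(t.text for t in token.subtree))
                predicate = [t for t in token.head.subtree if t.i >= token.head.i]
                candidates["what"].append(" ".join(t.text for t in predicate))
            # 'why': the clause following a causal cue word (rough heuristic)
            if token.lower_ in CAUSAL_CUES:
                clause = [t for t in sent if t.i > token.i]
                if clause:
                    candidates["why"].append(" ".join(t.text for t in clause))
    return candidates

# Constructed demo sentence:
print(extract_candidates(
    "Taliban attacked the German consulate with a truck bomb because "
    "the city hosts foreign troops."))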
The third phase, candidate scoring, aims to determine the best candidate for each 5W1H question. First, we score candidates independently for each of the 5W1H questions. To this end, we devise domain-specific scoring rules. For instance, to score 'who' candidates, we define three scoring factors, which we motivate from journalistic writing concepts, such as the inverted pyramid and journalistic hooks: the candidate shall occur in the article (1) early and (2) often, and (3) contain an NE. If a candidate contains an NE, we score it higher, since in news articles the actors involved in events are often NEs, e.g., politicians. Lastly, we perform a combined scoring that adjusts the score of a given candidate depending on the properties of the top candidates of other questions, e.g., the method by which an action was performed ('how') is usually described in the same or an adjacent sentence as the action ('what').

Giveme5W1H returns the top candidate phrase for each question, including the normalized data from canonicalization.
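The scoring described above can be sketched as follows. The three factors for 'who' candidates (early position, frequent occurrence, presence of an NE) and the proximity-based combined adjustment come from this section; the weights, the normalization, and the bonus value are placeholder assumptions, since the paper does not state the exact formula used by Giveme5W1H.

# Sketch of the independent and combined scoring ideas from Section 2.
# Weights, normalization, and bonus are placeholders, not the system's values.
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    sentence_index: int   # position of the containing sentence in the article
    frequency: int        # how often the candidate occurs in the article
    contains_ne: bool     # whether the phrase contains a named entity

def score_who(cand, num_sentences, w_pos=0.4, w_freq=0.4, w_ne=0.2):
    """Independent score: occur (1) early, (2) often, and (3) contain an NE."""
    position_score = 1.0 - cand.sentence_index / max(1, num_sentences - 1)
    frequency_score = min(1.0, cand.frequency / 5.0)  # crude normalization
    ne_score = 1.0 if cand.contains_ne else 0.0
    return w_pos * position_score + w_freq * frequency_score + w_ne * ne_score

def combined_adjustment(how_cand, top_what, bonus=0.1):
    """Combined scoring: boost a 'how' candidate that appears in the same or
    an adjacent sentence as the top 'what' candidate."""
    return bonus if abs(how_cand.sentence_index - top_what.sentence_index) <= 1 else 0.0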
3 RESULTS
We use the evaluation dataset by Hamborg et al. [1], which consists of 60 articles in the categories business (Bus), entertainment (Ent), politics (Pol), sport (Spo), and tech (Tec). We asked three assessors to judge the relevance of each answer on a 3-point scale (non-relevant, partially relevant, and relevant). Table 1 shows the mean average generalized precision (MAgP). The MAgP over all categories and questions was 0.64. When considering only the first four Ws, which are sufficient to uniquely represent an event, the overall MAgP was 0.79. Extracting the answers to 'why' and 'how' performed worse, since news articles often only imply causes and methods. The extraction performance is similar to that of state-of-the-art approaches [5], but a direct comparison is not possible due to the non-availability of methods and datasets (see Section 1).

Table 1: MAgP performance of Giveme5W1H

Question   Bus   Ent   Pol   Spo   Tec   Avg.
Who        .98   .88   .85   .97   .86   .91
What       .77   .67   .89   .83   .63   .75
When       .55   .91   .79   .77   .82   .77
Where      .82   .63   .85   .77   .68   .75
Why        .36   .18   .32   .33   .40   .32
How        .25   .36   .45   .27   .46   .36
Avg. all   .62   .61   .69   .66   .64   .64
Avg. 4W    .78   .65   .84   .83   .75   .79

4 CONCLUSION
We proposed Giveme5W1H, the first open-source system that extracts answers to the journalistic 5W1H questions, i.e., who did what, when, where, why, and how, to describe a news article's main event. Giveme5W1H achieved a mean average generalized precision (MAgP) of 0.64 over all questions, and an MAgP of 0.79 in answering the questions who, what, when, and where, which can uniquely represent an event. The code of Giveme5W1H and the evaluation dataset are available under an Apache 2 license on GitHub: https://github.com/fhamborg/Giveme5W1H

REFERENCES
[1] Hamborg, F. et al. 2018. Giveme5W: Main Event Retrieval from News Articles by Extraction of the Five Journalistic W Questions. Proceedings of the iConference 2018 (Sheffield, UK, 2018).
[2] Hamborg, F. et al. 2017. Identification and Analysis of Media Bias in News Articles. Proceedings of the 15th International Symposium of Information Science (2017).
[3] Hamborg, F. et al. 2017. Matrix-based News Aggregation: Exploring Different News Perspectives. Proceedings of the ACM/IEEE Joint Conference on Digital Libraries (2017).
[4] Oliver, P.E. and Maney, G.M. 2000. Political Processes and Local Newspaper Coverage of Protest Events: From Selection Bias to Triadic Interactions. American Journal of Sociology. 106, 2 (2000), 463–505.
[5] Parton, K. et al. 2009. Who, what, when, where, why? Comparing multiple approaches to the cross-lingual 5W task. Proc. of the Joint Conf. of the 47th Annual Meeting of the ACL and the 4th Int. Joint Conference on Natural Language Processing of the AFNLP: Volume 1 (2009), 423–431.
