
Answering Complex Questions Using Open Information Extraction

Tushar Khot Ashish Sabharwal Peter Clark


Allen Institute for AI Allen Institute for AI Allen Institute for AI
tushark@allenai.org ashishs@allenai.org peterc@allenai.org

arXiv:1704.05572v1 [cs.AI] 19 Apr 2017

Abstract

While there has been substantial progress in factoid question-answering (QA), answering complex questions remains challenging, typically requiring both a large body of knowledge and inference techniques. Open Information Extraction (Open IE) provides a way to generate semi-structured knowledge for QA, but to date such knowledge has only been used to answer simple questions with retrieval-based methods. We overcome this limitation by presenting a method for reasoning with Open IE knowledge, allowing more complex questions to be handled. Using a recently proposed support graph optimization framework for QA, we develop a new inference model for Open IE, in particular one that can work effectively with multiple short facts, noise, and the relational structure of tuples. Our model significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty, while also removing the reliance on manually curated knowledge.

1 Introduction

Effective question answering (QA) systems have been a long-standing quest of AI research. Structured curated KBs have been used successfully for this task (Berant et al., 2013; Berant and Liang, 2014). However, these KBs are expensive to build and typically domain-specific. Automatically constructed open-vocabulary (subject; predicate; object) style tuples have broader coverage, but have only been used for simple questions where a single tuple suffices (Fader et al., 2014; Yin et al., 2015).

Our goal in this work is to develop a QA system that can perform reasoning with Open IE (Banko et al., 2007) tuples for complex multiple-choice questions that require tuples from multiple sentences. Such a system can answer complex questions in resource-poor domains where curated knowledge is unavailable. Elementary-level science exams are one such domain, requiring complex reasoning (Clark, 2015). Due to the lack of a large-scale structured KB, state-of-the-art systems for this task either rely on shallow reasoning with large text corpora (Clark et al., 2016; Cheng et al., 2016) or deeper, structured reasoning with a small amount of automatically acquired (Khot et al., 2015) or manually curated (Khashabi et al., 2016) knowledge.

Consider the following question from an Alaska state 4th grade science test:

    Which object in our solar system reflects light and is a satellite that orbits around one planet? (A) Earth (B) Mercury (C) the Sun (D) the Moon

This question is challenging for QA systems because of its complex structure and the need for multi-fact reasoning. A natural way to answer it is by combining facts such as (Moon; is; in the solar system), (Moon; reflects; light), (Moon; is; satellite), and (Moon; orbits; around one planet).
A candidate system for such reasoning, and one from which we draw inspiration, is the TABLEILP system of Khashabi et al. (2016). TABLEILP treats QA as a search for an optimal subgraph that connects terms in the question and answer via rows in a set of curated tables, and solves the optimization problem using Integer Linear Programming (ILP). We similarly want to search for an optimal subgraph. However, a large, automatically extracted tuple KB makes the reasoning context different on three fronts: (a) unlike reasoning with tables, chaining tuples is less important and less reliable, as join rules aren't available; (b) conjunctive evidence becomes paramount, as, unlike a long table row, a single tuple is less likely to cover the entire question; and (c) again, unlike table rows, tuples are noisy, making the combination of redundant evidence essential. Consequently, a table-knowledge-centered inference model isn't the best fit for noisy tuples.

To address this challenge, we present a new ILP-based model of inference with tuples, implemented in a reasoner called TUPLEINF. We demonstrate that TUPLEINF significantly outperforms TABLEILP by 11.8% on a broad set of over 1,300 science questions, without requiring manually curated tables, using a substantially simpler ILP formulation, and generalizing well to higher grade levels. The gains persist even when both solvers are provided identical knowledge. This demonstrates for the first time how Open IE based QA can be extended from simple lookup questions to an effective system for complex questions.
2 Related Work

We discuss two classes of related work: retrieval-based web question-answering (simple reasoning with a large-scale KB) and science question-answering (complex reasoning with a small KB).

Web QA: There exist several systems for retrieval-based Web QA problems (Ferrucci et al., 2010; Brill et al., 2002). While structured KBs such as Freebase have been used in many of them (Berant et al., 2013; Berant and Liang, 2014; Kwiatkowski et al., 2013), such approaches are limited by the coverage of the data. QA systems using semi-structured Open IE tuples (Fader et al., 2013, 2014; Yin et al., 2015) or automatically extracted web tables (Sun et al., 2016; Pasupat and Liang, 2015) have broader coverage but are limited to simple questions with a single query.

Science QA: Elementary-level science QA tasks require reasoning to handle complex questions. Markov Logic Networks (Richardson and Domingos, 2006) have been used to perform probabilistic reasoning over a small set of logical rules (Khot et al., 2015). Simple IR techniques have also been proposed for science tests (Clark et al., 2016) and Gaokao tests (equivalent to the SAT exam in China) (Cheng et al., 2016).

The work most related to TUPLEINF is the aforementioned TABLEILP solver. This approach focuses on building inference chains using manually defined join rules for a small set of curated tables. While it can also use open-vocabulary tuples (as we assess in our experiments), its efficacy is limited by the difficulty of defining reliable join rules for such tuples. Further, each row in some complex curated tables covers all relevant contextual information (e.g., each row of the adaptation table contains (animal, adaptation, challenge, explanation)), whereas recovering such information requires combining multiple Open IE tuples.
3 Tuple Inference Solver

We first describe the tuples used by our solver. We define a tuple as (subject; predicate; objects) with zero or more objects. We refer to the subject, predicate, and objects as the fields of the tuple.

3.1 Tuple KB

We use the text corpora (S) from Clark et al. (2016) to build our tuple KB. For each test set, we use the corresponding training questions Qtr to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question (q, A) ∈ Qtr and each choice a ∈ A, we use all non-stopword tokens in q and a as an ElasticSearch (https://www.elastic.co/products/elasticsearch) query against S. We take the top 200 hits, run Open IE v4 (http://knowitall.github.io/openie), and aggregate the resulting tuples over all a ∈ A and over all questions in Qtr to create the tuple KB, T (available at http://anonymized).
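As a concrete illustration of this retrieval-and-extraction loop, the sketch below shows one plausible way to implement it in Python. It is only a sketch: the index name, the stop-word list, and the run_openie wrapper around Open IE v4 are hypothetical stand-ins rather than the pipeline actually used, and the Elasticsearch call assumes the standard Python client with a simple match query over a sentence field.

    # Hypothetical sketch of building the tuple KB T from training questions Q_tr.
    # run_openie, the index name, and STOPWORDS are illustrative stand-ins.
    from elasticsearch import Elasticsearch

    STOPWORDS = {"a", "an", "the", "of", "is", "to", "which", "what"}

    def run_openie(sentence):
        """Stand-in for Open IE v4; should return (subject, predicate, *objects) tuples."""
        raise NotImplementedError

    def build_tuple_kb(es, training_questions, index="sentence-corpus"):
        kb = set()
        for question, choices in training_questions:        # (q, A) in Q_tr
            for choice in choices:                           # each answer choice a in A
                tokens = [w for w in f"{question} {choice}".lower().split()
                          if w not in STOPWORDS]             # non-stopword tokens of q and a
                hits = es.search(index=index, size=200,      # top 200 hits from S
                                 query={"match": {"text": " ".join(tokens)}})
                for hit in hits["hits"]["hits"]:
                    kb.update(run_openie(hit["_source"]["text"]))
        return kb                                            # aggregated tuple KB T

Aggregating over every choice of every training question is what makes T domain-targeted rather than specific to a single question.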
3.2 Tuple Selection

Given a multiple-choice question qa with question text q and answer choices A = {a_i}, we select the most relevant tuples from T and S as follows.

Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with the question tokens tok(qa) (all tokens are stemmed and stop-word filtered). We also filter out any tuples that overlap only with tok(q), as they do not support any answer. We compute a normalized tf-idf score, treating the question q as a query and each tuple t as a document:

    tf(x, q) = 1 if x ∈ q;    idf(x) = log(1 + N / n_x)

    tf-idf(t, q) = Σ_{x ∈ t ∩ q} idf(x)
where N is the number of tuples in the KB and n_x is the number of tuples containing x. We normalize the tf-idf score by the number of tokens in t and q. We finally take the 50 top-scoring tuples, T_qa (available at http://allenai.org/data.html).
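A minimal sketch of this scoring step, assuming both the tuple and the question are already represented as stemmed, stop-word-filtered token lists; the helper names are illustrative, and dividing by the product of the two token counts is one reading of "normalize by the number of tokens in t and q".

    import math

    def tfidf_score(tuple_tokens, question_tokens, num_tuples, doc_freq):
        """Normalized tf-idf of a tuple t for a question q.
        num_tuples: total number of tuples N in the KB.
        doc_freq: maps a token x to n_x, the number of tuples containing x."""
        overlap = set(tuple_tokens) & set(question_tokens)        # tokens with tf(x, q) = 1
        score = sum(math.log(1 + num_tuples / doc_freq[x]) for x in overlap)
        # One reading of the normalization: divide by the token counts of t and q.
        return score / (len(tuple_tokens) * len(question_tokens))

    def select_tuples(kb_tuples, question_tokens, num_tuples, doc_freq, k=50):
        """Keep the k top-scoring tuples (T_qa in the paper, with k = 50)."""
        return sorted(kb_tuples,
                      key=lambda t: tfidf_score(t, question_tokens, num_tuples, doc_freq),
                      reverse=True)[:k]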
On-the-fly tuples from text: To handle questions from new domains not covered by the training set, we extract additional tuples on the fly from S (similar to Sharma et al. (2015)). We perform the same ElasticSearch query described earlier for building T. We ignore sentences that cover none or all of the answer choices, as they are not discriminative. We also ignore long sentences (>300 characters) and sentences containing negation (not, 'nt, or except), as they tend to lead to noisy inference. We then run Open IE on these sentences and, due to the lossy nature of Open IE, re-score the resulting tuples using the Jaccard score |tok(t) ∩ tok(qa)| / |tok(t) ∪ tok(qa)|, finally taking the 50 top-scoring tuples, T'_qa.
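The Jaccard re-scoring is simple enough to state directly; a minimal sketch, again assuming stemmed, stop-word-filtered token sets:

    def jaccard_score(tuple_tokens, question_answer_tokens):
        """|tok(t) ∩ tok(qa)| / |tok(t) ∪ tok(qa)|, used to re-rank the noisier
        on-the-fly tuples extracted from raw sentences."""
        t, qa = set(tuple_tokens), set(question_answer_tokens)
        return len(t & qa) / len(t | qa) if (t | qa) else 0.0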
3.3 Support Graph Search

[Figure 1: An example support graph linking a question (top), two tuples from the KB (colored) and an answer option (nitrogen).]

Similar to TABLEILP, we view the QA task as searching for a graph that best connects the terms in the question (qterms) with an answer choice via the knowledge; see Figure 1 for a simple illustrative example. Unlike standard alignment models used for tasks such as Recognizing Textual Entailment (RTE) (Dagan et al., 2010), however, we must score alignments between a set T_qa ∪ T'_qa of structured tuples and a (potentially multi-sentence) multiple-choice question qa.

The qterms, answer choices, and tuple fields form the set of possible vertices, V, of the support graph. Edges connecting qterms to tuple fields and tuple fields to answer choices form the set of possible edges, E. The support graph, G(V, E), is a subgraph of this full graph, where V and E denote the "active" nodes and edges, respectively. We define the desired behavior of an optimal support graph via an ILP model as follows (see Appendix A for more details).

    Active field must have < w1 connected edges
    Active choice must have < w2 edges
    Active qterm must have < w3 edges
    Support graph must have < w4 active tuples
    Active tuple must have ≥ w5 active fields
    Active tuple must have an edge to some qterm
    Active tuple must have an edge to some choice
    Active tuple must have active subject
    If a tuple predicate aligns to q, the subject (object) must align to a term preceding (following, resp.) q

    Table 1: High-level ILP constraints; we report results for w = (2, 4, 4, 4, 2); the model can be improved with more careful parameter selection.

Objective Function

Similar to TABLEILP, we score the support graph based on the weight of the active nodes and edges. Each edge e(t, h) is weighted based on a word-overlap score, w(t, h) = |tok(t) ∩ tok(h)| / |tok(h)|. While TABLEILP used WordNet (Miller, 1995) paths to compute the weight, this measure results in unreliable scores when faced with the longer phrases found in Open IE tuples.

Compared to a curated KB, it is easy to find Open IE tuples that match irrelevant parts of the question. To mitigate this issue, we improve the scoring of qterms in our ILP objective to focus on important terms. Since the later terms in a question tend to provide the most critical information, we scale qterm coefficients based on their position. Also, qterms that appear in almost all of the selected tuples tend not to be discriminative, as any tuple would support such a qterm. Hence we scale the coefficients by the inverse frequency of the tokens in the selected tuples.

Constraints

Since Open IE tuples do not come with schema and join rules, we can define a substantially simpler model compared to TABLEILP. This reduces the reasoning capability but also eliminates the reliance on hand-authored join rules and regular expressions used in TABLEILP. We discovered (see the empirical evaluation) that this simple model can achieve the same score as TABLEILP on the Regents test (the target test set used by TABLEILP) and generalizes better to different grade levels.

We define active vertices and edges using ILP constraints: an active edge must connect two active vertices, and an active vertex must have at least one active edge.
To avoid positive edge coefficients in the objective function resulting in spurious edges in the support graph, we limit the number of active edges from an active tuple, question choice, tuple fields, and qterms (first group of constraints in Table 1). Our model is also capable of using multiple tuples to support different parts of the question, as illustrated in Figure 1. To avoid spurious tuples that only connect with the question (or choice) or ignore the relation being expressed in the tuple, we add constraints that require each tuple to connect a qterm with an answer choice (second group of constraints in Table 1).

We also define new constraints based on the Open IE tuple structure. Since an Open IE tuple expresses a fact about the tuple's subject, we require the subject to be active in the support graph. To avoid issues such as (Planet; orbit; Sun) matching the sample question in the introduction ("Which object ... orbits around a planet"), we also add an ordering constraint (third group in Table 1).
It's worth mentioning that TUPLEINF only combines parallel evidence, i.e., each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using Open IE tuples, we could add inter-tuple connections to the support graph search, controlled by a small number of rules over the Open IE predicates. Learning such rules for the science domain is an open problem and a potential avenue of future work.
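To make the support-graph optimization concrete, the sketch below encodes a heavily simplified version of the model with the open-source PuLP modeler (the actual system uses SCIP; see Appendix B). It keeps only binary activity variables, a word-overlap-weighted objective, and a few representative constraints: exactly one active choice, edge-endpoint consistency, a per-tuple edge budget, the support-graph tuple budget, and the requirement that an active tuple reach both a qterm and a choice. Tuple fields are collapsed into a single tuple node, and every helper name and coefficient here is illustrative rather than taken from the released implementation.

    import pulp

    def solve_support_graph(qterms, choices, tuples, edge_weight, w=(2, 4, 4, 4, 2)):
        """Simplified support-graph ILP. `tuples` is a list of tuple identifiers and
        edge_weight(u, v) returns a word-overlap score for a potential edge."""
        prob = pulp.LpProblem("support_graph", pulp.LpMaximize)

        x_q = {q: pulp.LpVariable(f"q_{i}", cat="Binary") for i, q in enumerate(qterms)}
        x_a = {a: pulp.LpVariable(f"a_{i}", cat="Binary") for i, a in enumerate(choices)}
        x_t = {t: pulp.LpVariable(f"t_{i}", cat="Binary") for i, t in enumerate(tuples)}
        e_qt = {(q, t): pulp.LpVariable(f"eqt_{i}_{j}", cat="Binary")
                for i, q in enumerate(qterms) for j, t in enumerate(tuples)}
        e_ta = {(t, a): pulp.LpVariable(f"eta_{j}_{k}", cat="Binary")
                for j, t in enumerate(tuples) for k, a in enumerate(choices)}

        # Objective: reward active nodes and high-overlap edges (coefficients simplified).
        prob += (pulp.lpSum(x_q.values()) + pulp.lpSum(x_t.values())
                 + pulp.lpSum(edge_weight(q, t) * v for (q, t), v in e_qt.items())
                 + pulp.lpSum(edge_weight(t, a) * v for (t, a), v in e_ta.items()))

        # Exactly one answer choice is active.
        prob += pulp.lpSum(x_a.values()) == 1
        # The support graph may use at most w4 active tuples.
        prob += pulp.lpSum(x_t.values()) <= w[3]

        for t, xt in x_t.items():
            # An active tuple must reach some qterm and some choice (Table 1, group 2).
            prob += pulp.lpSum(e_qt[(q, t)] for q in qterms) >= xt
            prob += pulp.lpSum(e_ta[(t, a)] for a in choices) >= xt
            # Per-tuple edge budget, standing in for the field/qterm/choice limits (group 1).
            prob += pulp.lpSum(e_qt[(q, t)] for q in qterms) <= w[0] + w[2]

        # An edge can be active only if both of its endpoints are active.
        for (q, t), v in e_qt.items():
            prob += v <= x_q[q]
            prob += v <= x_t[t]
        for (t, a), v in e_ta.items():
            prob += v <= x_t[t]
            prob += v <= x_a[a]

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        return max(choices, key=lambda a: pulp.value(x_a[a]))

A full implementation would also add the ordering and active-subject constraints from Table 1 and score each answer choice separately by pinning its variable, as described in Appendix B.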
4 Experiments

Comparing our method with two state-of-the-art systems on 4th and 8th grade science exams, we demonstrate that (a) TUPLEINF with only automatically extracted tuples significantly outperforms TABLEILP with its original curated knowledge as well as with additional tuples, and (b) TUPLEINF's complementary approach to IR leads to an improved ensemble. Numbers in bold indicate statistical significance based on the Binomial exact test (Howell, 2012) at p = 0.05.

We consider two question sets. (1) The 4th Grade set (1220 train, 1304 test) is a 10x larger superset of the NY Regents questions (Clark et al., 2016), and includes professionally written licensed questions. (2) The 8th Grade set (293 train, 282 test) contains 8th grade questions from various states (http://allenai.org/data/science-exam-questions.html).

We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted 80K sentences and 280 GB of plain text extracted from web pages used by Clark et al. (2016). This corpus is used by the IR solver and is also used to create the tuple KB T and the on-the-fly tuples T'_qa. Additionally, TABLEILP uses ∼70 curated tables (C) designed for 4th grade NY Regents exams.

We compare TUPLEINF with two state-of-the-art baselines. IR is a simple yet powerful information-retrieval baseline (Clark et al., 2016) that selects the answer option with the best matching sentence in a corpus. TABLEILP is the state-of-the-art structured inference baseline (Khashabi et al., 2016) developed for science questions.

4.1 Results

    Solvers            4th Grade    8th Grade
    TABLEILP(C)        39.9         34.1
    TUPLEINF(T+T')     51.7         51.6
    TABLEILP(C+T)      42.1         37.9
    TUPLEINF(C+T)      47.5         48.0

    Table 2: TUPLEINF is significantly better at structured reasoning than TABLEILP.

Table 2 shows that TUPLEINF, with no curated knowledge, outperforms TABLEILP on both question sets by more than 11%. The lower half of the table shows that even when both solvers are given the same knowledge (C+T), the improved selection and simplified model of TUPLEINF result in a statistically significant improvement (see Appendix B for how tables and tuples are converted for each solver). On average, TABLEILP has 3,403 constraints and 982 variables, versus 1,628 constraints and 588 variables for TUPLEINF; TUPLEINF's ILP can be solved in half the time taken by TABLEILP, a 68.6% reduction in overall question answering time. Our simple model, TUPLEINF(C+T), also achieves scores comparable to TABLEILP on the latter's target Regents questions (61.4% vs TABLEILP's reported 61.5%) without any specialized rules.

    Solvers                  4th Grade    8th Grade
    IR(S)                    52.0         52.8
    IR(S) + TABLEILP(C)      53.3         54.5
    IR(S) + TUPLEINF(T+T')   55.3         55.1

    Table 3: TUPLEINF is complementary to IR, resulting in a strong ensemble.

Table 3 shows that while TUPLEINF achieves scores similar to the IR solver, the approaches are complementary (structured but lossy knowledge reasoning vs. lossless sentence retrieval). The two solvers, in fact, differ on 47.3% of the training questions.
To exploit this complementarity, we train an ensemble system (Clark et al., 2016) which, as shown in Table 3, provides a substantial boost over the individual solvers. Further, IR + TUPLEINF is consistently better than IR + TABLEILP. Finally, in combination with IR and the statistical-association-based PMI solver of Clark et al. (2016) (which scores 54.1% by itself), TUPLEINF achieves a score of 58.2%, compared to TABLEILP's ensemble score of 56.7% on the 4th grade set, again attesting to TUPLEINF's strength.
5 Error Analysis

We describe four classes of failures that we observed, and the future work they suggest.

Missing Important Words: Which material will spread out to completely fill a larger container? (A) air (B) ice (C) sand (D) water
In this question, we have tuples that support that water will spread out and fill a larger container, but they miss the critical word "completely". An approach capable of detecting salient question words could help avoid this.

Lossy IE: Which action is the best method to separate a mixture of salt and water? ...
The IR solver correctly answers this question using the sentence: Separate the salt and water mixture by evaporating the water. However, TUPLEINF is not able to answer it, as Open IE is unable to extract tuples from this imperative sentence. While the additional structure from Open IE is useful for more robust matching, converting sentences to Open IE tuples may lose important bits of information.

Bad Alignment: Which of the following gases is necessary for humans to breathe in order to live? (A) Oxygen (B) Carbon dioxide (C) Helium (D) Water vapor
TUPLEINF returns "Carbon dioxide" as the answer because of the tuple (humans; breathe out; carbon dioxide). The chunk "to breathe" in the question has a high alignment score to the "breathe out" relation in the tuple, even though they have completely different meanings. Improving the phrase alignment can mitigate this issue.

Out of scope: Deer live in forest for shelter. If the forest was cut down, which situation would most likely happen? ...
Questions such as this, which require modeling a state presented in the question and reasoning over that state, are out of scope for our solver.

6 Conclusion

We presented a new QA system, TUPLEINF, that can reason over a large, potentially noisy tuple KB to answer complex questions. Our results show that TUPLEINF is a new state-of-the-art structured solver for elementary-level science that does not rely on curated knowledge and generalizes to higher grades. Errors due to lossy IE and misalignments suggest future work in incorporating context and distributional measures.

References

Tobias Achterberg. 2009. SCIP: Solving constraint integer programs. Mathematical Programming Computation 1(1):1–41.

Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI.

J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP).

Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In ACL.

Eric Brill, Susan Dumais, and Michele Banko. 2002. An analysis of the AskMSR question-answering system. In Proceedings of EMNLP, pages 257–264.

Gong Cheng, Weixi Zhu, Ziwei Wang, Jianghui Chen, and Yuzhong Qu. 2016. Taking up the gaokao challenge: An information retrieval approach. In IJCAI.

Peter Clark. 2015. Elementary school science and math tests as a driver for AI: Take the Aristo challenge! In 29th AAAI/IAAI, Austin, TX, pages 4019–4021.

Peter Clark, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Turney, and Daniel Khashabi. 2016. Combining retrieval, statistics, and inference to answer elementary science questions. In 30th AAAI.

Ido Dagan, Bill Dolan, Bernardo Magnini, and Dan Roth. 2010. Recognizing textual entailment: Rational, evaluation and approaches - erratum. Natural Language Engineering 16(01):105–105.

Anthony Fader, Luke S. Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In ACL.

Anthony Fader, Luke S. Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In KDD.
David Ferrucci, Eric Brown, Jennifer Chu-Carroll,
James Fan, David Gondek, Aditya A Kalyanpur,
Adam Lally, J William Murdock, Eric Nyberg, John
Prager, et al. 2010. Building Watson: An overview
of the DeepQA project. AI Magazine 31(3):59–79.
David Howell. 2012. Statistical methods for psychol-
ogy. Cengage Learning.
Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Pe-
ter Clark, Oren Etzioni, and Dan Roth. 2016. Ques-
tion answering via integer programming over semi-
structured knowledge. In IJCAI.
Tushar Khot, Niranjan Balasubramanian, Eric
Gribkoff, Ashish Sabharwal, Peter Clark, and Oren
Etzioni. 2015. Exploring Markov logic networks
for question answering. In EMNLP.
Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and
Luke S. Zettlemoyer. 2013. Scaling semantic
parsers with on-the-fly ontology matching. In
EMNLP.
George Miller. 1995. WordNet: A lexical database for English. Communications of the ACM 38(11):39–41.
Panupong Pasupat and Percy Liang. 2015. Composi-
tional semantic parsing on semi-structured tables. In
ACL.
Matthew Richardson and Pedro Domingos. 2006.
Markov logic networks. Machine learning 62(1–
2):107–136.
Arpit Sharma, Nguyen Ha Vo, Somak Aditya, and
Chitta Baral. 2015. Towards addressing the wino-
grad schema challenge - building and using a seman-
tic parser and a knowledge hunting module. In IJ-
CAI.
Huan Sun, Hao Ma, Xiaodong He, Wen-tau Yih, Yu Su, and Xifeng Yan. 2016. Table cell search for question answering. In WWW.
Pengcheng Yin, Nan Duan, Ben Kao, Jun-Wei Bao, and
Ming Zhou. 2015. Answering questions with com-
plex semantic constraints on open knowledge bases.
In CIKM.
A Appendix: ILP Model Details

To build the ILP model, we first need to get the question terms (qterms) from the question by chunking the question with an in-house chunker based on the POS tagger from FACTORIE (http://factorie.cs.umass.edu/).

Variables

The ILP model has an active vertex variable for each qterm (x_q), tuple (x_t), tuple field (x_f), and question choice (x_a). Table 4 describes the coefficients of these active variables. For example, the coefficient of each qterm is a constant value (0.8) scaled by three boosts. The idf boost, idfB, for a qterm x is calculated as log(1 + (|T_qa| + |T'_qa|)/n_x), where n_x is the number of tuples in T_qa ∪ T'_qa containing x. The science term boost, scienceB, boosts coefficients of qterms that are valid science terms based on a list of 9K terms. The location boost, locB, of a qterm at index i in the question is given by i/|tok(q)| (where i = 1 for the first term).
the first term). curated tables and tuples, we converted them into
Similarly each edge, e has an associated active the expected format of each solver as follows.
edge variable with the word overlap score as its
coefficient, ce . For efficiency, we only create B.1 Using curated tables with T UPLE I NF
qterm→field edge and field→choice edge if the For each question, we select the 7 best matching
coefficient is greater than a certain threshold tables using the tf-idf score of the table w.r.t. the
(0.1 and 0.2, respectively). Finally the objective question tokens and top 20 rows from each table
function of our ILP model can be written as: using the Jaccard similarity of the row with the
question. (same as Khashabi et al. (2016)). We
X X X
cq xq + ct xt + ce xe then convert the table rows into the tuple structure
q∈qterms t∈tuples e∈edges using the relations defined by TABLE ILP. For ev-
ery pair of cells connected by a relation, we cre-
ate a tuple with the two cells as the subject and
Description Var. Coefficient (c) primary object with the relation as the predicate.
Qterm xq 0.8·idfB·scienceB·locB The other cells of the table are used as additional
Tuple xt -1 + jaccardScore(t, qa) objects to provide context to the solver. We pick
Tuple Field xf 0 top-scoring 50 tuples using the Jaccard score.
Choice xa 0
B.2 Using Open IE tuples with TABLE ILP
Table 4: Coefficients for active variables. We create an additional table in TABLE ILP with
all the tuples in T . Since TABLE ILP uses
Constraints Next we describe the constraints in fixed-length (subject; predicate; object) triples,
our model. We have basic definitional constraints we need to map tuples with multiple objects to
over the active variables. this format. For each object, Oi in the input
Open IE tuple (S; P ; O1 ; O2 . . .), we add a triple
Active variable must have an active edge (S; P ; Oi ) to this table.
Active edge must have an active source node
Active edge must have an active target node
Exactly one answer choice must be active
Active field implies tuple must be active
13
http://factorie.cs.umass.edu/
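A minimal sketch of this flattening step (the example tuple is invented purely for illustration):

    def flatten_openie_tuple(subject, predicate, objects):
        """Map an Open IE tuple (S; P; O1; O2; ...) with multiple objects to the
        fixed-length (subject; predicate; object) triples expected by TABLEILP."""
        return [(subject, predicate, obj) for obj in objects]

    # Example: (Moon; orbits; one planet; every month) -> two triples.
    rows = flatten_openie_tuple("Moon", "orbits", ["one planet", "every month"])
    # [("Moon", "orbits", "one planet"), ("Moon", "orbits", "every month")]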
