Grounded Semantic Role Labeling: January 2016
6 authors, including:
Changsong Liu
Michigan State University
All content following this page was uploaded by Changsong Liu on 20 July 2016.
Abstract

Semantic Role Labeling (SRL) captures semantic roles (or participants) such as agent, patient, and theme associated with verbs […]

[Figure 1: Grounding for the sentence "The woman takes out a cucumber from the refrigerator." Predicate: "takes out": track 1; Agent: "The woman": track 2; Patient: "a cucumber": track 3; Source: "from the refrigerator": track 4; Destination: "": track 5.]
…ing has been applied to ground phrases to image regions (Karpathy and Fei-Fei, 2015).

3 Method

We first describe our problem formulation and then provide details on the learning and inference algorithms.

3.1 Problem Formulation

Given a sentence S and its corresponding video clip V, our goal is to ground the explicit/implicit roles associated with a verb in S to video tracks in V. In this paper, we focus on the following set of semantic roles: {predicate, patient, location, source, destination, tool}. In the cooking domain, as actions always involve hands, the predicate is grounded to the hand pose represented by a trajectory of the relevant hand(s). Normally the agent would be grounded to the person who performs the action; as there is only one person in the scene, we ignore the grounding of the agent in this work.

Video tracks capture tracks of objects (including hands) and locations. For example, in Figure 1, there are 5 tracks: human, hand, cucumber, refrigerator and cutting board. To represent locations, instead of discretizing the whole image into many small regions (a large search space), we create locations corresponding to five spatial relations (center, up, down, left, right) with respect to each object track, which gives us 5 times as many locations as objects. For instance, in Figure 1, the source is grounded to the center of the bounding boxes of the refrigerator track, and the destination is grounded to the center of the cutting board track.

We use a Conditional Random Field (CRF) to model this problem. An example CRF factor graph is shown in Figure 2. The CRF structure is created based on information extracted from language. More specifically, s_1, ..., s_6 refer to the observed text and its semantic roles. Notice that s_6 is an implicit role, as there is no text in the sentence describing the destination. Also note that the whole prepositional phrase "from the drawer" is identified as the source rather than "the drawer" alone. This is because prepositions play an important role in specifying location information. For example, "near the cutting board" describes a location that is near to, but not exactly at, the location of the cutting board. Here v_1, ..., v_6 are grounding random variables which take values from the object tracks and locations in the video clip, and φ_1, ..., φ_6 are binary random variables which take values {0, 1}. When φ_i equals 1, v_i is the correct grounding of the corresponding linguistic semantic role; otherwise it is not. The introduction of the random variables φ_i follows previous work by Tellex and colleagues (Tellex et al., 2011), and makes CRF learning more tractable.

3.2 Learning and Inference

In the CRF model, we do not directly model the objective function as

    p(v_1, ..., v_k | S, V).

Instead, we model

    P(φ | s_1, s_2, ..., s_k, v_1, v_2, ..., v_k, V)
[Figure 2: The CRF structure of the sentence "the person takes out a cutting board from the drawer". Each phrase is labeled with its semantic role: The person [Agent], takes out [Predicate], a cutting board [Patient], from the drawer [Source]; the [Destination] is implicit.]

where φ is a binary random vector [φ_1, ..., φ_k] indicating whether the grounding is correct. In this way, the objective function factorizes according to the structure of language, with local normalization at each factor.
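A minimal sketch of this locally normalized model and of the gradient-ascent update described in Section 3.2: each factor is treated as a binary log-linear model over features, P(φ | s, v, V) is the product of per-factor probabilities, and the gradient is the observed-minus-expected feature count with L2 regularization. All feature names, weights, and hyperparameters below are illustrative assumptions, not values from the paper.

```python
import math

def factor_prob(weights, features, phi):
    """P(phi_i | s_i, v_i, V) for one factor: a binary log-linear
    model whose features fire only when phi_i = 1."""
    score = sum(weights.get(f, 0.0) for f in features)
    p1 = 1.0 / (1.0 + math.exp(-score))      # P(phi_i = 1)
    return p1 if phi == 1 else 1.0 - p1

def grounding_prob(weights, factors):
    """P(phi | s_1..s_k, v_1..v_k, V): product of locally
    normalized per-factor probabilities."""
    p = 1.0
    for features, phi in factors:
        p *= factor_prob(weights, features, phi)
    return p

def sgd_step(weights, features, phi, lr=0.1, l2=0.01):
    """One gradient-ascent step on log P(phi_i | s_i, v_i, V):
    observed feature counts minus expected counts, then L2 shrinkage
    (the update rule of Section 3.2, assumed hyperparameters)."""
    score = sum(weights.get(f, 0.0) for f in features)
    p1 = 1.0 / (1.0 + math.exp(-score))
    for f in features:
        observed = 1.0 if phi == 1 else 0.0  # F(phi_i, s_i, v_i, V)
        weights[f] = weights.get(f, 0.0) + lr * (observed - p1)
    for f in list(weights):                   # L2 regularization
        weights[f] -= lr * l2 * weights[f]
    return weights
```

Training with random groundings as negative samples then amounts to calling `sgd_step` with `phi = 1` for annotated groundings and `phi = 0` for randomly sampled ones.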
Gradient ascent with L2 regularization was used for parameter learning to maximize the objective function:

    ∂L/∂w = Σ_i F(φ_i, s_i, v_i, V) − Σ_i E_{P(φ_i | s_i, v_i, V)}[F(φ_i, s_i, v_i, V)]

where F refers to the feature function. During training, we also use random groundings as negative samples for discriminative training.

During inference, the search space can be very large when the number of objects in the world increases. To address this problem, we apply beam search to first ground roles including patient and tool, and then the other roles including location, source, destination and predicate.

4 Evaluation

4.1 Dataset

We conducted our investigation based on a subset of the TACoS corpus (Regneri et al., 2013). This dataset contains a set of video clips paired with natural language descriptions related to several cooking tasks. The natural language descriptions were collected through crowd-sourcing on top of the "MPII Cooking Composite Activities" video corpus (Rohrbach et al., 2012). In this paper, we selected two tasks, "cutting cucumber" and "cutting bread", as our experimental data. Each task has 5 videos showing how different people perform the same task. Each video is segmented into a sequence of video clips, where each video clip comes with one or more language descriptions. The original TACoS dataset does not contain annotations for grounded semantic roles.

To support our investigation and evaluation, we made a significant effort to add the following annotations. For each video clip, we annotated the objects' bounding boxes, their tracks, and their labels (cucumber, cutting board, etc.) using VATIC (Vondrick et al., 2013). On average, each video clip is annotated with 15 object tracks. For each sentence, we annotated the ground-truth parsing structure and the semantic frame for each verb. The ground-truth parsing structure is the representation of dependency parsing results. The semantic frame of a verb includes slots, fillers, and their groundings. For each semantic role (including both explicit and implicit roles) of a given verb, we also annotated the ground-truth grounding in terms of object tracks and locations. In total, our annotated dataset includes 976 pairs of video clips and corresponding sentences, 1,094 verb occurrences, and 3,593 groundings of semantic roles. To check annotation agreement, 10% of the data was annotated by two annotators. The kappa statistic is 0.83 (Cohen, 1960).

From this dataset, we selected the 11 most frequent verbs (i.e., get, take, wash, cut, rinse, slice, place, peel, put, remove, open) for our current investigation, for the following reasons. First, they are used more frequently, so we have sufficient samples of each verb to learn the model. Second, they cover different types of actions: some are more related to a change of state, such as take, and some are more related to a process, such as wash. As it turns out, these verbs also have different semantic role patterns, as shown in Table 1. The patient roles of all these verbs are explicitly specified. This is not surprising, as all these verbs are transitive. There is a large variation for the other roles. For example, for the verb take, the destination is rarely specified by linguistic expressions (i.e., only 2 instances); however, it can be inferred from the video. For the verb cut, the location and the tool are also rarely specified by linguistic expressions. Nevertheless, these implicit roles contribute to the overall understanding of actions and should also be grounded.

Table 1: Statistics for a set of verbs and their semantic roles in our annotated dataset. Each entry indicates the number of explicit/implicit roles for each category. "–" denotes no such role is observed in the data.¹

    Verb     Patient   Source      Destn     Location   Tool
    take     251 / 0   102 / 149   2 / 248   –          –
    put      94 / 0    –           75 / 19   –          –
    get      247 / 0   62 / 190    0 / 239   –          –
    cut      134 / 1   64 / 64     –         3 / 131    5 / 130
    open     23 / 0    –           –         0 / 23     2 / 21
    wash     93 / 0    –           –         26 / 58    2 / 82
    slice    69 / 1    –           –         2 / 68     2 / 66
    rinse    76 / 0    0 / 74      –         8 / 64     –
    place    104 / 1   –           105 / 7   –          –
    peel     29 / 0    –           –         1 / 27     2 / 27
    remove   40 / 0    34 / 6      –         –          –

¹ For some verbs (e.g., get), there is a slight discrepancy between the sum of implicit/explicit roles across different categories. This is partly because some verb occurrences take more than one object as the grounding of a role. It is also possibly due to missed/duplicated annotation for some categories.

4.2 Automated Processing

To build the structure of the CRF as shown in Figure 2 and extract features for learning and inference, we applied the following approaches to process language and vision.

Language Processing. Language processing consists of three steps to build a structure containing syntactic and semantic information. First, the Stanford Parser (Manning et al., 2014) is applied to create a dependency parse tree for each sentence. Second, Senna (Collobert et al., 2011) is applied to identify semantic role labels for the key verb in the sentence. The linguistic entities with semantic roles are matched against the dependency nodes in the tree, and the corresponding semantic role labels are added to the tree. Third, for each verb, the Propbank (Palmer et al., 2005) entries are searched to extract all relevant semantic roles. The implicit roles (i.e., those not specified linguistically) are added as direct children of verb nodes in the tree. Through these three steps, the resulting tree from language processing has both explicit and implicit semantic roles. These trees are further transformed into CRF structures based on a set of rules.

Vision Processing. A set of visual detectors is first trained for each type of object. Here a random forest classifier is adopted. More specifically, we use 100 trees with HoG features (Dalal and Triggs, 2005) and color descriptors (Van De Weijer and Schmid, 2006). Both HoG and color descriptors are used because some objects are more structural, such as knives and humans, while some are more textured, such as towels. With the learned object detectors, given a candidate video clip, we run the detectors at every 10th frame (less than 0.5 second apart) and find the candidate windows for which the detector score corresponding to the object is larger than a threshold (set at 0.5). Then, using the detected window as a starting point, we adopt tracking-by-detection (Danelljan et al., 2014) to go forward and backward to track this object and obtain the candidate track with this object label.

Feature Extraction. Features in the CRF model can be divided into the following three categories:

1. Linguistic features include word occurrence and semantic role information. They are extracted by language processing.

2. Track label features are the label information for tracks in the video. The labels come from human annotation or automated visual processing, depending on the experimental setting (described in Section 4.3).

3. Visual features are a set of features involving geometric relations between tracks in the video. One important feature is the histogram comparison score, which measures the similarity between distance histograms. Specifically, histograms of distance values between the tracks of the predicate and other roles for each verb are first extracted from the training video clips. For an incoming distance histogram, we calculate its Chi-Square distances (Zhang et al., 2007) from the pre-extracted training histograms with the same verb and the same role. Its histogram comparison score is set to be the average of the top 5 smallest Chi-Square distances. Other visual features include geometric information for single tracks and geometric relations between two tracks. For example, size, average speed, and moving direction are extracted for a single track. Average distance, size ratio, and relative direction are extracted between two tracks. Features that are continuous are discretized into uniform bins.

To ground language into tracks from the video, instead of using track label features or visual features alone, we use a Cartesian product of these features with linguistic features. To learn the behavior of different semantic roles of different verbs, visual features are combined with the presence of both verbs and semantic roles through a Cartesian product. To learn the correspondence between track labels and words, track label features are combined with the presence of words, also through a Cartesian product.
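The histogram comparison score described above can be sketched as follows, assuming aligned, equal-length histograms; the epsilon guard against empty bins is an illustrative assumption.

```python
def chi_square(h1, h2, eps=1e-10):
    """Chi-Square distance between two aligned distance histograms."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def histogram_comparison_score(hist, training_hists, top_k=5):
    """Average of the top-k smallest Chi-Square distances between an
    incoming histogram and the training histograms stored for the
    same verb and semantic role."""
    dists = sorted(chi_square(hist, h) for h in training_hists)
    top = dists[:top_k]
    return sum(top) / max(1, len(top))
```

A small score means the observed predicate-to-role distance pattern closely matches what was seen in training for that verb and role.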
To train the model, we randomly selected 75% of the 976 annotated pairs of video clips and corresponding sentences as the training set. The remaining 25% were used as the testing set.

4.3 Experimental Setup

Comparison. To evaluate the performance of our approach, we compare it with two approaches.

• Baseline: To identify the grounding for each semantic role, the first baseline chooses the most probable track based on the object-type conditional distribution given the verb and semantic role. If an object type corresponds to multiple tracks in the video, e.g., multiple drawers or knives, we randomly select one of the tracks as the grounding. We ran this baseline method five times and report the average performance.

• Tellex (2011): The second approach we compare with is based on an implementation of (Tellex et al., 2011). The difference is that they do not explicitly model fine-grained semantic role information. For a better comparison, we map the grounding results from this approach to the different explicit semantic roles according to the SRL annotation of the sentence. Note that this approach is not able to ground implicit roles.

Our experiments vary along two dimensions: (1) language parsing is based on annotation versus automated language parsing; (2) object tracking and labeling is based on annotation versus automated processing. These lead to four different experimental configurations.

Evaluation Metrics. For experiments that are based on annotated object tracks, we can simply use the traditional accuracy, which directly measures the percentage of grounded tracks that are correct. However, for experiments using automated tracking, evaluation can be difficult, as tracking itself poses significant challenges. The grounding results (to tracks) cannot be directly compared with the annotated ground-truth tracks. To address this problem, we have defined a new metric called approximate accuracy. This metric is motivated by previous computer vision work that evaluates tracking performance (Bashir and Porikli, 2006). Suppose the ground-truth grounding for a role is track gt and the predicted grounding is track pt. The two tracks gt and pt are often not the same (although they may overlap). Suppose the number of frames in the video clip is k. For each frame, we calculate the distance between the centroids of the two tracks; if the distance is below a predefined threshold, we consider the two tracks to overlap in that frame. We consider the grounding correct if the ratio of overlapping frames between gt and pt exceeds 50%. As can be seen, this is a lenient, approximate measure of accuracy.
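The approximate accuracy check for one grounding can be sketched as below; the pixel threshold is an assumed value, since the paper only says "predefined".

```python
import math

def approx_correct(gt_track, pt_track, dist_threshold=50.0, ratio=0.5):
    """gt_track, pt_track: per-frame centroids [(x, y), ...] of the
    ground-truth and predicted tracks over the same k frames.
    The grounding counts as correct if the fraction of frames whose
    centroids lie within dist_threshold exceeds `ratio` (50%)."""
    overlapping = sum(
        1 for (gx, gy), (px, py) in zip(gt_track, pt_track)
        if math.hypot(gx - px, gy - py) < dist_threshold
    )
    return overlapping / len(gt_track) > ratio
```

Averaging this boolean over all role groundings in the test set yields the approximate accuracy.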
[Table 2: Evaluation results based on annotated language parsing.]
…it is statistically significant (p < 0.05) compared to the approach of (Tellex et al., 2011).

For experiments based on automated object tracking, we also calculated an upper bound to assess the best possible performance which can be achieved by a perfect grounding algorithm given the current vision processing results. This upper bound is calculated by grounding each role to the track which is closest to the ground-truth annotated track. For the experiments based on annotated tracking, the upper bound would be 100%. This measure provides some understanding of how good the grounding approach is given the limitations of vision processing. Notice that the grounding results in the gold/automatic language processing settings are not directly comparable, as the automatic SRL can misidentify frame elements.

4.5 Discussion

As shown in Table 2 and Table 3, our approach consistently outperforms the baseline (for both explicit and implicit roles) and the Tellex (2011) approach. Under the configuration of gold recognition/tracking, the incorporation of visual features further improves the performance. However, this performance gain is not observed when automated object tracking and labeling is used. One possible explanation is that, as we only had limited data, we did not use separate data to train models for object recognition/tracking. So the GSRL model was trained with gold recognition/tracking data and tested with automated recognition/tracking data.

By comparing our method with Tellex (2011), we can see that by incorporating fine-grained semantic role information, our approach achieves better performance on almost all the explicit roles (except for the patient role under the automated tracking condition).

The results have also shown that some roles are easier to ground than others in this domain. For example, the predicate role is grounded to the hand tracks (either left hand or right hand); as there are not many variations, the simple baseline can achieve quite high performance, especially when annotated tracking is used. The same situation applies to the location role, as most locations are near the sink when the verb is wash, and near the cutting board for verbs like cut. However, for the patient role, there is a large difference between our approach and the baseline approaches, as there is a larger variation in the types of objects that can participate in this role for a given verb.

For experiments with automated tracking, the upper bound for each role also varies. Some roles (e.g., patient) have a fairly low upper bound. The accuracy of our full GSRL model is already quite close to the upper bound. For other roles such as predicate and destination, there is a larger gap between the current performance and the upper bound. This difference reflects the model's capability in grounding different roles.

[Figure 3: The relation between the accuracy and the entropy of each verb's patient from the gold language, gold visual recognition/tracking setting. The entropy for the patient role of each verb is shown below the verb.]

Figure 3 shows a close-up look at the grounding performance for the patient role of each verb under the gold parsing and gold tracking configuration. We only show results for the patient role here because every verb has this role to be grounded. For each verb, we also calculated its entropy based on the distribution of the different types of objects that can serve as the patient role in the training data. The entropy is shown at the bottom of the figure. For verbs such as take and put, our full GSRL model leads to much better performance compared to the baseline. As the baseline approach relies on the entropy of the potential groundings for a role, we further measured the improvement in performance and calculated the correlation between the improvement and the entropy of each verb. The Pearson coefficient between the entropy and the improvement of GSRL over the baseline is 0.614. This indicates that the improvement from GSRL is positively correlated with the entropy value associated with a role, implying the GSRL model can deal with
more uncertain situations. For the verb cut, the GSRL model performs slightly worse than the baseline. One explanation is that the possible objects that can participate as the patient of cut are relatively constrained, so simple features might be sufficient; a large number of features may introduce noise and thus jeopardize the performance.

We further compare the performance of our full GSRL model with Tellex (2011) (also shown in Figure 3) on the patient role of different verbs. Our approach outperforms Tellex (2011) on most of the verbs, especially put and open. A close look at the results shows that in those cases, the patient roles are often specified by pronouns. Therefore, the track label features and linguistic features are not very helpful, and the correct grounding mainly depends on visual features. Our full GSRL model can better capture the geometric relations between different semantic roles by incorporating fine-grained role information.

5 Conclusion and Future Work

This paper investigates a new problem on grounded semantic role labeling. Besides semantic roles explicitly mentioned in language descriptions, our approach also grounds implicit roles which are not explicitly specified. As implicit roles also capture important participants related to an action (e.g., tools used in the action), our approach provides a more complete representation of action semantics which can be used by artificial agents for further reasoning and planning in the physical world. Our empirical results on a complex cooking domain have shown that, by incorporating semantic role information with visual features, our approach can achieve better performance compared to baseline approaches. Our results have also shown that grounded semantic role labeling is a challenging problem which often depends on the quality of automated visual processing (e.g., object tracking and recognition).

There are several directions for future improvement. First, the current alignment between a video clip and a sentence is generated by heuristics which are error-prone. One way to address this is to treat alignment and grounding as a joint problem. Second, our current visual features have not proven effective, especially when they are extracted based on automatic visual processing. This is partly due to the complexity of the scenes in the TACoS dataset and the lack of depth information. Recent advances in object tracking algorithms (Yang et al., 2013; Milan et al., 2014), together with 3D sensing, can be explored in the future to improve visual processing. Moreover, linguistic studies have shown that action verbs such as cut and slice often denote some change of state as a result of the action (Hovav and Levin, 2010; Hovav and Levin, 2008). The change of state can be perceived from the physical world. Thus another direction is to systematically study the causality of verbs. Causality models for verbs can potentially provide top-down information to guide intermediate representations for visual processing and improve grounded language understanding.

The capability of grounding semantic roles to the physical world has many important implications. It will support the development of intelligent agents which can reason and act upon the shared physical world. For example, unlike traditional action recognition in computer vision (Wang et al., 2011), grounded SRL will provide deeper understanding of activities which involve participants in the actions, guided by linguistic knowledge. For agents that can act upon the physical world such as robots, grounded SRL will allow the agents to acquire the grounded structure of human commands and thus perform the requested actions through planning (e.g., to follow the command "put the cup on the table"). Grounded SRL will also contribute to robot action learning, where humans can teach the robot new actions (e.g., simple cooking tasks) through both task demonstration and language instruction.

6 Acknowledgement

The authors are grateful to Austin Littley and Zach Richardson for their help on data annotation, and to the anonymous reviewers for their valuable comments and suggestions. This work was supported in part by IIS-1208390 from the National Science Foundation and by N66001-15-C-4035 from the DARPA SIMPLEX program.
References

Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. TACL, 1:49–62.

Faisal Bashir and Fatih Porikli. 2006. Performance evaluation of object detection and tracking systems. In Proceedings of the 9th IEEE International Workshop on PETS, pages 7–14.

Emanuele Bastianelli, Giuseppe Castellucci, Danilo Croce, and Roberto Basili. 2013. Textual inference and meaning representation in human robot interaction. In Joint Symposium on Semantic Processing, page 65.

Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. J. Artif. Intell. Res. (JAIR), 49:1–47.

Angel Chang, Will Monroe, Manolis Savva, Christopher Potts, and Christopher D. Manning. 2015. Text to 3D scene generation with rich lexical grounding. arXiv preprint arXiv:1505.06289.

David L. Chen and Raymond J. Mooney. 2008. Learning to sportscast: a test of grounded language acquisition. In Proceedings of the 25th International Conference on Machine Learning, pages 128–135. ACM.

Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37–46.

Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537.

Navneet Dalal and Bill Triggs. 2005. Histograms of oriented gradients for human detection. In CVPR 2005, volume 1, pages 886–893. IEEE.

Martin Danelljan, Fahad Shahbaz Khan, Michael Felsberg, and Joost van de Weijer. 2014. Adaptive color attributes for real-time visual tracking. In CVPR 2014, pages 1090–1097. IEEE.

Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, and Margaret Mitchell. 2015. Language models for image captioning: The quirks and what works. arXiv preprint arXiv:1505.01809.

Bradley Efron and Robert J. Tibshirani. 1994. An Introduction to the Bootstrap. CRC Press.

Desmond Elliott and Arjen de Vries. 2015. Describing images using inferred visual dependency representations. In Proceedings of ACL-IJCNLP (Volume 1: Long Papers), pages 42–52, Beijing, China.

Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288.

Malka Rappaport Hovav and Beth Levin. 2008. Reflections on manner/result complementarity. Lecture notes.

Malka Rappaport Hovav and Beth Levin. 2010. Reflections on manner/result complementarity. In Lexical Semantics, Syntax, and Event Structure, pages 21–38.

Andrej Karpathy and Li Fei-Fei. 2015. Deep visual-semantic alignments for generating image descriptions. June.

Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. ReferItGame: Referring to objects in photographs of natural scenes. In EMNLP 2014, pages 787–798, Doha, Qatar.

Jayant Krishnamurthy and Thomas Kollar. 2013. Jointly learning to parse and perceive: Connecting natural language to the physical world. TACL, 1:193–206.

Polina Kuznetsova, Vicente Ordonez, Alexander C. Berg, Tamara L. Berg, and Yejin Choi. 2013. Generalizing image captions for image-text parallel corpus. In ACL (2), pages 790–796.

Angeliki Lazaridou, Nghia The Pham, and Marco Baroni. 2015. Combining language and vision with a multimodal skip-gram model. arXiv preprint arXiv:1501.02598.

Beth Levin. 1993. English Verb Classes and Alternations: A Preliminary Investigation. University of Chicago Press.

Changsong Liu and Joyce Y. Chai. 2015. Learning to mediate perceptual differences in situated human-robot dialogue. In AAAI-15.

Changsong Liu, Rui Fang, and Joyce Chai. 2012. Towards mediating shared perceptual basis in situated dialogue. In SIGDIAL 2012, pages 140–149, Seoul, South Korea.

Jonathan Malmaud, Jonathan Huang, Vivek Rathod, Nick Johnston, Andrew Rabinovich, and Kevin Murphy. 2015. What's cookin'? Interpreting cooking videos using text, speech and vision. arXiv preprint arXiv:1503.01558.

Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the ACL: System Demonstrations, pages 55–60.

Anton Milan, Stefan Roth, and Konrad Schindler. 2014. Continuous energy minimization for multitarget tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(1):58–72.

Iftekhar Naim, Young C. Song, Qiguang Liu, Liang Huang, Henry Kautz, Jiebo Luo, and Daniel Gildea. 2015. Discriminative unsupervised alignment of natural language instructions with corresponding video segments. In NAACL-HLT 2015, pages 164–174, Denver, Colorado.

Luis Gilberto Mateos Ortiz, Clemens Wolff, and Mirella Lapata. 2015. Learning to interpret and describe abstract scenes. In NAACL-HLT 2015, pages 1505–1515.

Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106.

Vignesh Ramanathan, Percy Liang, and Li Fei-Fei. 2013. Video event understanding using natural language descriptions. In ICCV 2013, pages 905–912. IEEE.

Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. 2013. Grounding action descriptions in videos. TACL, 1:25–36.

Marcus Rohrbach, Michaela Regneri, Mykhaylo Andriluka, Sikandar Amin, Manfred Pinkal, and Bernt Schiele. 2012. Script data for attribute-based recognition of composite activities. In ECCV 2012, pages 144–157. Springer.

Deb Roy. 2005. Grounding words in perception and action: computational insights. Trends in Cognitive Sciences, 9(8):389–396.

Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question answering. In EMNLP-CoNLL, pages 12–21.

Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R. Walter, Ashis Gopal Banerjee, Seth J. Teller, and Nicholas Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. In AAAI.

Stefanie Tellex, Pratiksha Thaker, Joshua Joseph, and Nicholas Roy. 2014. Learning perceptually grounded word meanings from unaligned parallel data. Machine Learning, 94(2):151–167.

Kewei Tu, Meng Meng, Mun Wai Lee, Tae Eun Choe, and Song-Chun Zhu. 2014. Joint video and text parsing for understanding events and answering queries. IEEE MultiMedia, 21(2):42–70.

Joost Van De Weijer and Cordelia Schmid. 2006. Coloring local feature extraction. In ECCV 2006, pages 334–348. Springer.

Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, and Kate Saenko. 2015. Translating videos to natural language using deep recurrent neural networks. In NAACL-HLT 2015, pages 1494–1504, Denver, Colorado.

Carl Vondrick, Donald Patterson, and Deva Ramanan. 2013. Efficiently scaling up crowdsourced video annotation. International Journal of Computer Vision, 101(1):184–204.

Heng Wang, Alexander Kläser, Cordelia Schmid, and Cheng-Lin Liu. 2011. Action recognition by dense trajectories. In CVPR 2011, pages 3169–3176. IEEE.

Yezhou Yang, Cornelia Fermuller, and Yiannis Aloimonos. 2013. Detection of manipulation action consequences (MAC). In CVPR 2013, pages 2563–2570.

Haonan Yu and Jeffrey Mark Siskind. 2013. Grounded language learning from video described with sentences. In ACL (1), pages 53–63.

Jianguo Zhang, Marcin Marszałek, Svetlana Lazebnik, and Cordelia Schmid. 2007. Local features and kernels for classification of texture and object categories: A comprehensive study. International Journal of Computer Vision, 73(2):213–238.

Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent neural networks. In ACL-IJCNLP (Volume 1: Long Papers), pages 1127–1137, Beijing, China.