
(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2008/0192116 A1
Tamir et al.    (43) Pub. Date: Aug. 14, 2008
(54) REAL-TIME OBJECTS TRACKING AND MOTION CAPTURE IN SPORTS EVENTS

(75) Inventors: Michael Tamir, Tel Aviv (IL); Gal Oz, Kfar Saba (IL)

Correspondence Address:
DR. MARK M. FRIEDMAN
C/O BILL POLKINGHORN - DISCOVERY DISPATCH
9003 FLORIN WAY
UPPER MARLBORO, MD 20772

(73) Assignee: SPORTVU LTD., Holon (IL)

(21) Appl. No.: 11/909,080

(22) PCT Filed: Mar. 29, 2006

(86) PCT No.: PCT/IL2006/000388

§ 371 (c)(1), (2), (4) Date: Sep. 19, 2007

Related U.S. Application Data

(60) Provisional application No. 60/666,468, filed on Mar. 29, 2005.

Publication Classification

(51) Int. Cl.
H04N 7/18 (2006.01)
G06K 9/00 (2006.01)

(52) U.S. Cl. .................. 348/157; 382/103; 348/E07.085

(57) ABSTRACT

Non-intrusive peripheral systems and methods to track, identify various acting entities and capture the full motion of these entities in a sports event. The entities preferably include players belonging to teams. The motion capture of more than one player is implemented in real-time with image processing methods. Captured player body organ or joint location data can be used to generate a three-dimensional display of the real sporting event using computer games graphics.

Patent Application Publication Aug. 14, 2008 Sheet 1 of 15 US 2008/0192116 A1

FIG. 1
[Drawing of the entities in an exemplary soccer game; only the labels "Entities" and "goalkeeper" are legible in this extraction.]
FIG. 2a (block diagram labels):
Cameras 202a, 202b, ..., 202n (video, also to broadcast station)
Image processing unit (IPU) 204
Central System Server 206
Graphical overlay server 208 (to broadcast station)
Teams/players' statistics server 210 (to web site / to broadcast station)
Other applications servers 212

FIG. 2a
Schematic template with player icons:
Field, player icons for Team A, player icons for Team B

FIG. 2b
FIG. 3

Calculate dynamic background of playing field 302

Create the foreground image by background subtraction 304

Perform connection analysis and segmentation of foreground objects such as players and referees 306

Assign each player to a team (using e.g. histogram correlation and the player's history) 308

Localize players in the camera's field of view

Calculate team statistics 312 (to broadcast)

Create a top schematic view of the playing field with player icons 314 (to broadcast)

Determine the current on-air camera's field of view from the calculated broadcast camera's pan/tilt/zoom sensors 316

Display the "on air" camera's field of view on the schematic playing field view 318 (to broadcast)
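The steps 302-308 above can be sketched in code. The following is an illustrative sketch only, not the patent's implementation: the temporal-median background model, the fixed difference threshold, the SciPy connected-component labelling and the 4x4x4 colour histogram are all assumptions made for the example.

```python
import numpy as np
from scipy import ndimage

def dynamic_background(frames):
    # Step 302: a per-pixel temporal median approximates the empty field
    return np.median(np.stack(frames), axis=0)

def foreground_mask(frame, background, thresh=30):
    # Step 304: background subtraction; a pixel is foreground if any
    # colour channel deviates by more than `thresh`
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff.max(axis=-1) > thresh

def segment_players(mask, min_area=20):
    # Step 306: connection analysis via connected-component labelling;
    # small blobs (noise) are discarded
    labels, n = ndimage.label(mask)
    blobs = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_area:
            blobs.append((ys, xs))
    return blobs

def assign_team(frame, blob, team_hists):
    # Step 308: correlate the blob's colour histogram with per-team
    # reference histograms and pick the best match
    ys, xs = blob
    pix = frame[ys, xs].astype(float)
    hist, _ = np.histogramdd(pix, bins=(4, 4, 4), range=((0, 256),) * 3)
    hist = hist.ravel() / hist.sum()
    corrs = {team: float(np.corrcoef(hist, ref)[0, 1])
             for team, ref in team_hists.items()}
    return max(corrs, key=corrs.get)
```

In a real deployment the background would be updated continuously (hence "dynamic") and the team histograms learned during the setup phase of FIG. 4.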
FIG. 4

Setup: automatically calculate intrinsic and extrinsic camera parameters 400

Calculate dynamic background of playing field 402

Create foreground image by background subtraction 404

Perform connection analysis and segmentation of foreground objects such as players and referees

Analyze teams' colors/textures based on segmented object locations and count (number of objects assigned to a team) 408
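The patent does not specify how the setup step 400 computes the camera parameters. For a fixed camera viewing a planar pitch, one common realization is to fit a homography from a few known field landmarks (a direct linear transform), after which image positions of segmented players can be mapped to pitch coordinates. The sketch below illustrates that idea and is an assumption, not the disclosed method.

```python
import numpy as np

def estimate_homography(img_pts, field_pts):
    # Direct linear transform: solve A h = 0 from >= 4 image <-> field
    # correspondences (e.g. corner flags, penalty-box corners)
    A = []
    for (x, y), (X, Y) in zip(img_pts, field_pts):
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def image_to_field(H, pt):
    # Project an image point (e.g. a player's feet) to pitch coordinates
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```

With the homography in hand, the "localize players" steps of FIGS. 3 and 4 reduce to applying `image_to_field` to each blob's foot point.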
FIG. 5a

System 500:
Cameras 202a, 202b (video, also to broadcast station)
Image Processing Unit 204
Graphical overlay server 208
Teams/players' statistics server 210 (to web site / to broadcast station)
Other applications servers 212
ID subsystem 505 (video): Display 504, Robotic camera 502
System alerts that a tracked player has been "lost" or that his ID certainty has been lost due to collision or occlusion 520

A dedicated robotic camera automatically locks on the last or predicted location of the lost player and zooms in to provide a high magnification video image 522

An operator identifies the lost player using the robotic camera's video feed and indicates the player's identity to the system 524

FIG. 5b
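The "last or predicted location" in step 522 can be illustrated with a minimal constant-velocity predictor plus a pan/tilt conversion for aiming the robotic camera. This is a hedged sketch under assumed conventions (field coordinates in metres, camera position given as (x, y, z), an assumed target height of 0.9 m); the patent does not disclose a specific prediction model.

```python
import math

def predict_location(track, dt=1 / 25):
    # track: recent (x, y) field positions, newest last, one per frame;
    # extrapolate one frame ahead at constant velocity
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * dt, y1 + vy * dt)

def pan_tilt_to(cam_pos, target, target_height=0.9):
    # Convert a field-plane target into pan/tilt angles (degrees) for a
    # robotic camera mounted at cam_pos = (x, y, z)
    dx, dy = target[0] - cam_pos[0], target[1] - cam_pos[1]
    dist = math.hypot(dx, dy)
    pan = math.degrees(math.atan2(dy, dx))
    tilt = math.degrees(math.atan2(target_height - cam_pos[2], dist))
    return pan, tilt
```

Because the prediction is refreshed every frame, the aiming error stays small, which is the assumption paragraph 0010 of the body text relies on.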
System 600:
Camera 202a (video, also to broadcast station)
Graphical overlay server 208
Teams/players' statistics server 210 (to web site / to broadcast station)
Other applications servers 212
Identification Processing Unit (IDPU) 602
Robotic cameras 604a, ..., 604n

FIG. 6a
FIG. 6b

System alerts that a tracked player has been "lost" or that his ID certainty has been lost due to collision or occlusion 620

Multiple robotic cameras, each deployed in another sports arena location to provide a "lost" player location from different directions, lock on the last location of the lost player, and zoom in to provide a high magnification video image of the player 622

The multiple video streams from different cameras are fed into the IDPU and each is processed to identify the lost player by automatically recognizing his jersey number or another pattern on his outfit 624

The recovered ID is relayed to the system server via Ethernet or similar link 626
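Step 624's jersey-number recognition is stated to use well known pattern recognition methods. As one minimal illustration (an assumption for this example, not the method of the patents cited in the body text), a binarized number patch can be matched against digit templates by normalized correlation:

```python
import numpy as np

def best_digit(patch, templates):
    # Match a binarized jersey-number patch against digit templates
    # using normalized cross-correlation; return the best-scoring digit
    scores = {}
    p = patch.astype(float).ravel()
    p = (p - p.mean()) / (p.std() + 1e-9)  # z-score the observation
    for digit, t in templates.items():
        t = t.astype(float).ravel()
        t = (t - t.mean()) / (t.std() + 1e-9)
        scores[digit] = float(np.dot(p, t)) / p.size
    return max(scores, key=scores.get)
```

A production system would first rectify and segment the number region; here the patch is assumed already cropped and binarized.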
System 700:
Cameras 702a, 702b, ..., 702n (video, also to broadcast station)
Image Processing and Player Identification Unit (IPPU) 704
Graphical overlay server 208
Central Server 206
Teams/players' statistics server 210 (to web site / to broadcast station)
Other applications servers 212

FIG. 7a
FIG. 7b

IPPU 704 detail: video feeds from cameras 702 enter as digital video; a digital image processing unit performs segmentation, connectivity analysis and object localization 722, followed by a jersey number/pattern recognition unit 724.
Capture high resolution video frames from each real camera 730

Separate each video frame into foreground objects and an empty playing field 732

For each player, perform automatic blob segmentation to identify player body organs 734

Extract the player's organ directions from the viewpoint of each camera 736

Match the player's organs obtained from the different camera viewpoints and calculate the 3D location of all the organs 738

Automatically select a dynamic player behavior that most likely fits the player's body organ locations over a time period, thereby creating temporal player characteristics 740

Generate, on a user's device, a 3D graphical dynamic environment that combines the temporal player characteristics with a real or virtual playing field image 742

By the user, select a static or dynamic viewpoint to view the game 744

By the user, view the 3D environment from the selected viewpoint 746

FIG. 7c
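Step 738 matches organs across camera viewpoints and computes their 3D locations; the body text (paragraph 0015) says the location is calculated by triangulation when the same organ is seen by two overlapping cameras. A standard midpoint triangulation of two viewing rays can sketch this; the camera centres and ray directions are assumed to come from the calibration of FIG. 4.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    # Midpoint of the shortest segment between two rays p = c + t*d,
    # a standard closest-point construction for two skew lines
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b  # near zero when the rays are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return ((c1 + t1 * d1) + (c2 + t2 * d2)) / 2
```

When the two rays intersect exactly the midpoint is the intersection; with noisy organ detections it is the least-squares compromise between the two sightlines.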
FIG. 8
[Block diagram of the "virtual camera flight" system embodiment; the drawing text is not legible in this extraction.]
FIG. 9
[Schematic of the generation of a virtual camera flight clip; only fragments such as "CAM 1", "CAM 2" and "CAMERA FLIGHT" are legible in this extraction.]
Generate an "empty" playing field from at least one real video camera 1002

Segment foreground objects in each real camera frame 1004

Correlate the frames of adjacent real cameras and perform a motion vector analysis using these frames 1006

Calculate n synthesized frames for a "virtual camera" located in between the adjacent real cameras, according to a calculated location of the virtual camera 1008

Calculate a background "empty field" from the viewpoint of the virtual camera 1010

Compose a synthesized foreground over the calculated background empty field to obtain a composite virtual flight replay clip 1012

Display composite replay clip 1014

FIG. 10
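Steps 1006-1008 correlate frames of adjacent real cameras and synthesize intermediate "virtual camera" frames. As a deliberately simplified sketch (a single global integer motion vector found by exhaustive search, plus a cross-fade, rather than the dense per-block motion vector analysis a real implementation would use):

```python
import numpy as np

def global_motion(f1, f2, search=5):
    # Step 1006 (simplified): exhaustive search for the integer shift
    # that minimizes the mean absolute frame difference
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = np.abs(np.roll(f1, (dy, dx), axis=(0, 1)).astype(int)
                         - f2.astype(int)).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def synthesize(f1, f2, alpha):
    # Step 1008 (simplified): shift f1 a fraction alpha along the motion
    # vector and cross-fade toward f2 to get an intermediate frame
    dy, dx = global_motion(f1, f2)
    moved = np.roll(f1, (round(dy * alpha), round(dx * alpha)), axis=(0, 1))
    return ((1 - alpha) * moved + alpha * f2).astype(np.uint8)
```

Sweeping `alpha` from 0 to 1 yields the n intermediate frames that, interleaved with the real frames, form the virtual flight replay clip.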

REAL-TIME OBJECTS TRACKING AND MOTION CAPTURE IN SPORTS EVENTS

FIELD OF THE INVENTION

0001. The present invention relates in general to real-time object tracking and motion capture in sports events, and in particular to "non-intrusive" methods for tracking, identifying and capturing the motion of athletes and objects like balls and cars using peripheral equipment.

BACKGROUND OF THE INVENTION

0002. Current sport event object monitoring and motion capture systems use mounted electrical or optical devices in conjunction with arena-deployed transceivers for live tracking and identification, or image processing based "passive" methods for non-real-time match analysis and delayed replays. The existing tracking systems are used mainly to generate athlete/animal/player performance databases and statistical event data, mainly for coaching applications. Exemplary systems and methods are disclosed in U.S. Pat. Nos. 5,363,897, 5,513,854, 6,124,862 and 6,483,511.

0003. Current motion capture methods use multiple electro-magnetic sensors or optical devices mounted on the actor's joints to measure the three-dimensional (3D) location of body organs (also referred to herein as body sections, joints or parts). "Organs" refer to head, torso, limbs and other segmentable body parts. Some organs may include one or more joints. Motion capture methods have in the past been applied to isolated (single) actors viewed by dedicated TV cameras, using pattern recognition algorithms to identify, locate and capture the motion of the body parts.

0004. The main disadvantage of all known systems and methods is that none provides a "non-intrusive" way to track, identify and capture the full motion of athletes, players and other objects on the playing field in real-time. Real-time non-intrusive motion capture (and related data) of multiple entities such as players in sports events does not yet exist. Consequently, to date, such data has not been used in computer games to display the 3D representation of a real game in real time.

0005. There is therefore a need for, and it would be advantageous to have, "non-intrusive" peripheral systems and methods to track, identify and capture the full motion of athletes, players and other objects on the playing field in real-time. It would further be advantageous to have the captured motion and other attributes of the real game be transferable in real time to a computer game, in order to provide much more realistic, higher fidelity computer sports games.

SUMMARY OF THE INVENTION

0006. The present invention discloses "non-intrusive" peripheral systems and methods to track, identify various acting entities and capture the full motion of these entities (also referred to as "objects") in a sports event. In the context of the present invention, "entities" refer to any human figure involved in a sports activity (e.g. athletes, players, goalkeepers, referees, etc.), motorized objects (cars, motorcycles, etc.) and other inanimate objects (e.g. balls) on the playing field. The present invention further discloses real-time motion capture of more than one player implemented with image processing methods. Inventively and unique to this invention, captured body organ data can be used to generate a 3D display of the real sporting event using computer games graphics.

0007. The real-time tracking and identification of various acting entities and the capture of their full motion is achieved using multiple TV cameras (either stationary or pan/tilt/zoom cameras) peripherally deployed in the sports arena. This is done in such a way that any given point on the playing field is covered by at least one camera and a processing unit performing object segmentation, blob analysis and 3D object localization and tracking. Algorithms needed to perform these actions are well known and described, for example, in J. Pers and S. Kovacic, "A system for tracking players in sports games by computer vision", Electrotechnical Review 67(5):281-288, 2000, and in a paper by T. Matsuyama and N. Ukita, "Real time multi target tracking by a cooperative distributed vision system", Dept. of Intelligent Science and Technology, Kyoto University, Japan, and references therein.

0008. Although the invention disclosed herein may be applied to a variety of sporting events, in order to ease its understanding it will be described in detail with respect to soccer games.

0009. Most real-time tracking applications require live continuous identification of all players and other objects on the playing field. The continuous identification is achieved either "manually", using player tracking following an initial manual identification (ID) and manual re-marking by an operator when a player's ID is lost, or automatically by the use of general game rules and logic, pattern recognition for ball identification and, especially, identification of the players' jersey (shirt) numbers or other textures appearing on their uniforms. In contrast with prior art, the novel features provided herein regarding object identification include:

0010 (1) In an embodiment in which identification is done manually by an operator, providing the operator with a good quality, high magnification image of a "lost" player to re-mark the player's identification (ID). The provision is made by a robotic camera that can automatically aim onto the last known location or a predicted location of the lost player. It is assumed that the player could not have moved too far away from the last location, since the calculation is done in every frame, i.e. in a very short period of time. The robotic camera is operative to zoom in on the player.

0011 (2) In an automatic identification, operator-free embodiment, automatically extracting the ID of the lost player by capturing his jersey number or another pattern on his outfit. This is done through the use of a plurality of robotic cameras that aim onto the last location above. In this case, more than one robotic camera is needed because the number is typically on the back side of the player's shirt. The "locking" on the number, capturing and recognition can be done by well known pattern recognition methods, e.g. the ones described in U.S. Pat. No. 5,353,392 to Luquet and Rebuffet and U.S. Pat. No. 5,264,933 to Rosser et al.

0012 (3) In another automatic identification, operator-free embodiment, assigning an automatic ID by using multiple fixed high resolution cameras (the same cameras used for motion capture) and pattern recognition methods to recognize players' jersey numbers as before.

0013. These features, alone or in combination, appear in different embodiments of the methods disclosed herein.

0014. It is within the scope of the present invention to identify and localize the different body organs of the players in real-time using high resolution imaging and pattern recognition methods. Algorithms for determination of body pose and real time tracking of head, hands and other organs, as well as gesture recognition of an isolated human video image, are

known, see e.g. C. Wren et al., "Pfinder: real time tracking of the human body", IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):780-785, 1997, and A. Agarwal and B. Triggs, "3D human pose from silhouettes by relevance vector regression", International Conference on Computer Vision & Pattern Recognition, pages II 882-888, 2004, and references therein. The present invention advantageously discloses algorithms for automatic segmentation of all players on the playing field, followed by pose determination of all segmented players in real time. A smooth dynamic body motion from sequences of multiple two-dimensional (2D) views may then be obtained using known algorithms, see e.g. H. Sidenbladh, M. Black and D. Fleet, "Stochastic tracking of 3D human figures using 2D image motion", in Proc. of the European Conference on Computer Vision, pages 702-718, 2000.

0015. It is also within the scope of the present invention to automatically create a 3D model representing the player's pose and to assign a dynamic behavior to each player based on the 2D location (from a given camera viewpoint) of some of his body organs, or based on the 3D location of these organs. The location is calculated by triangulation when the same organ is identified by two overlapping TV cameras.

0016. It is further within the scope of the present invention to use the real-time extracted motion capture data to generate instant 3D graphical replays deliverable to all relevant media (TV, web, cellular devices), where players are replaced by their graphical models to which the real player's pose and dynamic behavior are assigned. In these graphical replays, the 3D location of the capturing virtual camera can be dynamically changed.

0017. The player and ball locations and motion capture data can also be transferred via a telecommunications network such as the Internet (in real-time or as a delayed stream) to users of known sports computer games such as "FIFA 2006" of Electronic Arts (P.O. Box 9025, Redwood City, Calif. 94063), in order to generate in real-time a dynamic 3D graphical representation of the "real" match currently being played, with the computer game's player and stadium models. A main advantage of such a representation over a regular TV broadcast is its being 3D and interactive. The graphical representation of player and ball locations and motion capture data performed in a delayed and non-automatic way (in contrast to the method described herein) is described in patent application WO98/46029 by Sharir et al.

0018. Also inventive to the current patent application is the automatic real time representation of a real sports event on a user's computer using graphical and behavioral models of computer games. The user can for example choose his viewpoint and watch the entire match live from the eyes of his favorite player. The present invention also provides a new and novel reality-based computer game genre, letting the users guess the players' continued actions starting with real match scenarios.

0019. It is further within the scope of the present invention to use the player/ball location data extracted in real-time for a variety of applications, as follows:

0020 (1) (Semi-)automatic content based indexing, storage and retrieval of the event video (for example, automatic indexing and retrieval of the game's video according to players possessing the ball, etc.). The video can be stored in the broadcaster's archive, web server or in the viewer's Personal Video Recorder.

0021 (2) Rigid model 3D or 2D graphical live (or instant replay) representations of plays.

0022 (3) Slaving a directional microphone to the automatic tracker to "listen" to a specific athlete (or referee) and generation of an instant "audio replay".

0023 (4) Slaving a robotic camera onto an identified and tracked player to generate single player video clips.

0024 (5) Generation of a "telestrator clip" with automatic "tied to objects" graphics for the match commentator.

0025 (6) Automatic creation of team and player performance databases for sports computer game developers and for "fantasy games", to increase a game's fidelity through the usage of real data collected in real matches.

0026. According to the present invention there is provided a system for real-time object localization and tracking in a sports event comprising a plurality of fixed cameras positioned at a single location relative to a sports playing field and operative to capture video of the playing field including objects located therein, an image processing unit operative to receive video frames from each camera and to detect and segment at least some of the objects in at least some of the frames using image processing algorithms, thereby providing processed object information, and a central server operative to provide real-time localization and tracking information on the detected objects based on respective processed object information.

0027. In an embodiment, the system further comprises a graphical overlay server coupled to the central server and operative to generate a graphical display of the sports event based on the localization and tracking information.

0028. In an embodiment, the system further comprises a statistics server coupled to the central server and operative to calculate statistical functions related to the event based on the localization and tracking information.

0029. According to the present invention there is provided a system for real-time object localization, tracking and personal identification of players in a sports event comprising a plurality of cameras positioned at multiple locations relative to a sports playing field and operative to capture video of the playing field including objects located therein, an image processing unit operative to receive video frames including some of the objects from at least some of the cameras and to detect and segment the objects using image processing algorithms, thereby providing processed object information, a central server operative to provide real-time localization and tracking information on detected objects based on respective processed object information, and at least one robotic camera capable to pan, tilt and zoom and to provide detailed views of an object of interest.

0030. In some embodiments, the system includes a plurality of robotic cameras, the object of interest is a player having an identifying shirt detail, and the system is operative to automatically identify the player from at least one detailed view that captures and provides the identifying shirt item.

0031. In an embodiment, at least one robotic camera may be slaved onto an identified and tracked player to generate single player video clips.

0032. In an embodiment, the system further comprises a graphical overlay server coupled to the central server and operative to generate a schematic playing field template with icons representing the objects.

0033. In an embodiment, the system further comprises a statistics server coupled to the central server and operative to

calculate statistical functions related to the sports event based on the localization and tracking information.

0034. In an embodiment, the system further comprises a first application server operative to provide automatic or semi-automatic content based indexing, storage and retrieval of a video of the sports event.

0035. In an embodiment, the system further comprises a second application server operative to provide rigid model two-dimensional (2D) or three-dimensional (3D) graphical representations of plays in the sports event.

0036. In an embodiment, the system is operative to generate a telestrator clip with automatic tied-to-objects graphics for a match commentator.

0037. In an embodiment, the system is operative to automatically create team and player performance databases for sports computer game developers and for fantasy games, whereby the fidelity of the computer game is increased through the usage of real data collected in real matches.

0038. In an embodiment, the system further comprises a graphical overlay server coupled to the central server and operative to generate a schematic playing field template with icons representing the objects.

0039. In an embodiment, the system further comprises a statistics server coupled to the central server and operative to calculate statistical functions related to the event based on the localization and tracking information.

0040. According to the present invention there is provided a system for automatic object tracking and motion capture in a sports event comprising a plurality of fixed high resolution video cameras positioned at multiple locations relative to a sports playing field, each camera operative to capture a portion of the playing field including objects located therein, the objects including players, an image processing unit (IPU) operative to provide full motion capture of moving objects based on the video streams, and a central server coupled to the video cameras and the IPU and operative to provide localization information on player parts, whereby the system provides real time motion capture of multiple players and other moving objects.

0041. In an embodiment, the IPU includes a player identification capability and the system is further operative to provide individual player identification and tracking.

0042. In an embodiment, the system further comprises a three-dimensional (3D) graphics application server operative to generate a three-dimensional (3D) graphical representation of the sports event for use in a broadcast event.

0043. According to the present invention there is provided a system for generating a virtual flight clip (VFC) in a sports event comprising a plurality of fixed video cameras positioned at multiple locations relative to a sports playing field, each camera operative to capture a portion of the playing field including objects located therein, the objects including players, a high resolution video recorder coupled to each camera and used for continuously recording respective camera real video frames, and a VFC processor operative to select recorded real frames of various cameras, to create intermediate synthesized frames and to combine the real and synthesized frames into a virtual flight clip of the sports game.

0044. According to the present invention there is provided, in a sports event taking place on a playing field, a method for locating, tracking and assigning objects to a respective identity group in real-time comprising the steps of providing a plurality of fixed cameras positioned at a single location relative to the playing field and operative to capture a portion of the playing field and objects located therein, providing an image processing unit operative to receive video frames from each camera and to provide image processed object information, and providing a central server operative to provide real-time localization and tracking information on each detected player based on respective image processed object information.

0045. According to the present invention there is provided, in a sports event taking place on a playing field, a method for locating, tracking and individually identifying objects in real time comprising the steps of providing a plurality of fixed cameras positioned at multiple locations relative to the playing field and operative to capture a portion of the playing field and objects located therein, providing an image processing unit operative to receive video frames from each camera and to provide image processed object information, providing a central server operative to provide real-time localization and tracking information on each identified player based on respective image processed object information, and providing at least one robotic camera capable to pan, tilt and zoom and to provide detailed views of an object of interest.

0046. According to the present invention there is provided, in a sports event taking place on a playing field, a method for real-time motion capture of multiple moving objects comprising the steps of providing a plurality of fixed high resolution video cameras positioned at multiple locations relative to a sports playing field, and using the cameras to capture the full motion of multiple moving objects on the playing field in real-time.

0047. According to the present invention there is provided a method for generating a virtual flight clip (VFC) of a sports game, comprising the steps of: at a high resolution recorder coupled to a plurality of fixed video cameras positioned at multiple locations relative to a sports playing field, each camera operative to capture a portion of the playing field including objects located therein, the objects including players, continuously recording respective real camera video frames, and using a VFC processor coupled to the high resolution recorder to select recorded real frames of various cameras, to create intermediate synthesized frames and to combine the real and synthesized frames into a virtual flight clip.

BRIEF DESCRIPTION OF THE DRAWINGS

0048. For a better understanding of the present invention and to show more clearly how it could be applied, reference will now be made, by way of example only, to the accompanying drawings, in which:

0049. FIG. 1 shows the various entities and objects appearing in an exemplary soccer game;

0050. FIG. 2a shows a general block diagram of a system for real-time object tracking and motion capture in sports events according to the present invention;

0051. FIG. 2b shows a schematic template of the playing field with player icons;

0052. FIG. 3 shows a flow chart of a process to locate and track players in a team and assign each player to a particular team in real-time;

0053. FIG. 4 shows a flow chart of automatic system setup steps;

0054. FIG. 5a shows a block diagram of an objects tracking and motion capture system with a single additional robotic camera used for manual player identification;

0055. FIG. 5b shows a flow chart of a method for player identification, using the system of FIG. 5a;

0056. FIG. 6a shows a block diagram of an objects tracking and motion capture system including means for automatic player identification using additional robotic cameras and a dedicated Identification Processing Unit;

0057. FIG. 6b shows a flow chart of a method for individual player identification, using the system of FIG. 6a;

0058. FIG. 7a shows a block diagram of an objects tracking and motion capture system including means for automatic player identification using high-resolution fixed cameras only (no robotic cameras);

0059. FIG. 7b shows schematically details of an Image Processing and Player Identification Unit used in the system of FIG. 7a;

0060. FIG. 7c shows the process of full motion capture of a player;

0061. FIG. 8 shows an embodiment of a system of the present invention used to generate a "virtual camera flight" type effect;

0062. FIG. 9 shows schematically the generation of a virtual camera flight clip;

0063. FIG. 10 shows a flow chart of a process of virtual camera flight frame synthesizing.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

0064. The following description is focused on soccer as an exemplary sports event. FIG. 1 shows various entities (also referred to as "objects") that appear in an exemplary soccer game: home and visitor (or "first and second", or A and B) goalkeepers and players, one or more referees and the ball. The teams are separated and identifiable on the basis of their outfits (also referred to herein as "jerseys" or "shirts").

0065. FIG. 2a shows a general block diagram of a system 200 for real-time object tracking and motion capture in sports events according to the present invention. System 200 comprises a plurality of cameras 202a-n (n being any integer greater than 1) arranged in a spatial relationship to a sports playing field (not shown). The cameras are operative to provide video coverage of the entire playing field, each camera further operative to provide a video feed (i.e. a video stream including frames) to an image processing unit (IPU) 204. In some embodiments, IPU 204 may include added functions and may be named image processing and player identification

on viewers' TV sets. Outputs of team/player statistics server 210 are fed to a web site or to a broadcast station.

0067. In a first embodiment used for player assignment to teams and generation of a schematic template, cameras 202 are fixed cameras deployed together at a single physical location ("single location deployment") relative to the sports arena, such that together they view the entire arena. Each camera covers one section of the playing field. Each covered section may be defined as the camera's field of view. The fields of view of any two cameras may overlap to some degree. In a second embodiment, the cameras are deployed in at least two different locations ("multiple location deployment") so that each point in the sports arena is covered by at least one camera from each location. This allows calculation of the 3D locations of objects that are not confined to the flat playing field (like the ball in a soccer match) by means of triangulation. Preferably, in this second embodiment, the players are individually identified by an operator with the aid of an additional remotely controlled pan/tilt/zoom camera ("robotic camera"). The robotic camera is automatically aimed to the predicted location of a player "lost" by the system (i.e. that the system cannot identify any more) and provides a high magnification view of the player to the operator. In a third embodiment, robotic cameras are located in multiple locations (in addition to the fixed cameras that are used for object tracking and motion capture). The robotic cameras are used to automatically lock on a "lost" player, to zoom in and to provide high magnification views of the player from multiple directions. These views are provided to an additional identification processor (or to an added function in the IPU) that captures and recognizes the player's jersey number (or another pattern on his outfit) from at least one view. In a fourth embodiment, all cameras are fixed high resolution cameras, enabling the automatic real time segmentation and localization of each player's body organs and extraction of a full 3D player motion. Preferably, in this fourth embodiment, the player's identification is performed automatically by means of a "player ID" processor that receives video inputs from all the fixed cameras. Additional robotic cameras are therefore not required. In a fifth embodiment, used for the generation of a "virtual camera flight" (VCF) effect, the outputs of multiple high resolution cameras deployed in multiple locations (typically a single camera in
unit (IPPIU). IPU 204 communicates through an Ethernet or each location) are continuously recorded onto a multi-chan
similar local area network (LAN) with a central server 206, nel Video recorder. A dedicated processor is used to create a
which is operative to make “system level decisions where virtual camera flight clip and display it as an instant replay.
information from more than a single camera is required, like 0068 Player Localization and Tracking Using Cameras
decision on a “lost player, 3D localization and tracking, Deployed in a Single Location
object history considerations, etc.; with a graphical overlay 0069. In one embodiment, system 200 is used to locate and
server 208 which is operative to generate a graphical display track players in a team and assign each object to a particular
Such as a top view of the playing field with player icons (also team in real-time. The assignment is done without using any
referred to herein as a “schematic template); with a team/ personal identification (ID). The process follows the steps
player statistics server 210 which is operative to calculate shown in FIG. 3. The dynamic background of the playing
team or player statistical functions like speed profiles, or field is calculated by IPU 204 in step 302. The dynamic
accumulated distances based on object location information; background image is required in view of frequent lighting
and with a plurality of other applications servers 212 which changes expected in the sports arena. It is achieved by means
are operative to perform other applications as listed in the of median filter processing (or other appropriate methods)
Summary below. For example, a "3D graphics server 212 used to avoid the inclusion of moving objects in the back
may be implemented using a DVG (Digital Video Graphics), ground image being generated. The calculated background is
a PC cluster based rendering hardware with 3Designer, an subtracted from the video frame by IPU 204 to create a
on-air software module of Orad Hi-Tech Systems of Kfar foreground image in step 304. Separation of the required
Saba, Israel. foreground objects (players, ball, referees, etc) from the back
0066 An output of graphical overlay server 208 feeds a ground scene can be done using a chroma-key method for
Video signal to at least one broadcast station and is displayed cases where the playing field has a more or less uniform color
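The median-filter background modeling and subtraction of steps 302-304 can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the frame size, window length and threshold are arbitrary, and a real system would operate per color channel on full-resolution video.

```python
from statistics import median

def dynamic_background(frames):
    """Per-pixel temporal median over a window of frames (cf. step 302).

    A moving object covers any given pixel only briefly, so the median
    converges to the slowly varying background intensity.
    """
    h, w = len(frames[0]), len(frames[0][0])
    return [[median(f[y][x] for f in frames) for x in range(w)]
            for y in range(h)]

def foreground_mask(frame, background, threshold=30):
    """Threshold |frame - background| to binarize the foreground (cf. step 304)."""
    return [[1 if abs(p - b) > threshold else 0 for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

# Toy example: a 1x5 strip of grayscale 'field' pixels with a bright object
# moving one pixel per frame; all values are hypothetical.
frames = [[[10, 10, 10, 10, 10]] for _ in range(5)]
for i in range(5):
    frames[i][0][i] = 200               # the moving object
bg = dynamic_background(frames)         # median ignores the transient 200s
mask = foreground_mask(frames[2], bg)   # [[0, 0, 1, 0, 0]]
```

Because each pixel sees the bright object in only one of the five frames, the median background stays at the field intensity and the subtraction isolates the mover, which is the property the patent relies on under changing stadium lighting.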
US 2008/0192116 A1 Aug. 14, 2008
(like grass in a typical soccer field), by subtracting a dynamically updated "background image" from the live frame for the case of stationary cameras, or by a combination of both methods. The foreground/background separation step is followed by thresholding, binarization, morphological noise cleaning processes and connection analysis (connecting isolated pixels in the generated foreground image to clusters) to specify "blobs" representing foreground objects. This is performed by IPU 204 in step 306. Each segmented blob is analyzed in step 308 by IPU 204 to assign the respective object to an identity group. Exemplarily, in a soccer match there are six identity groups: first team, second team, referees, ball, first goalkeeper, second goalkeeper. The blob analysis is implemented by correlating either the vertical color and/or intensity profiles or just the blob's color content (preferably all attributes) with pre-defined templates representing the various identity groups. Another type of blob analysis is the assignment of a given blob to other blobs in previous frames and to blobs identified in neighboring cameras, using methods like block matching and optical flow. This analysis is especially needed in cases of players' collisions and/or occlusions, when a "joint blob" of two or more players needs to be segmented into its "components", i.e. the individual players. The last step in the blob analysis is the determination of the object's location in the camera's field of view. This is done in step 310.

[0070] Once the assignment stage is finished, system 200 can perform additional tasks. Exemplarily, team statistics (e.g. team players' average speed, the distance accumulated by all players from the beginning of the match, and field coverage maps) may be calculated from all players' location data provided by the IPU in step 312. The team statistics are calculated after first assigning the players to respective teams. The schematic template (shown in FIG. 2b) may be created from the localization/team assignment data inputs by the graphical overlay server 208 in step 314.

[0071] Another task that may be performed by system 200 includes displaying the current "on-air" broadcast camera's field of view on the schematic template. The process described exemplarily in FIG. 3 continues as follows. Knowledge of the pan, tilt and zoom readings of the current "on-air" camera enables the geometric calculation and display (by system server 206 or another processor) of the momentary "on-air" camera's field of view on the schematic playing field in step 316. The "on-air" broadcast camera's field of view is then displayed on the template in step 318.

[0072] Yet another task that may be performed by system 200 includes an automatic system setup process, as described exemplarily in FIG. 4. System server 206 may automatically learn "who is who" according to game rules, location and number of objects wearing the same outfit, etc. In the game preparation stage, there is no need for an operator to provide the system with any indication of the type "this is goalkeeper A", "this is the referee", etc. The first setup procedure, as described in step 400, includes the automatic calculation of the intrinsic (focal length, image center in pixel coordinates, effective pixel size and radial distortion coefficient of the lens) and extrinsic (rotation matrix and translation vector) camera parameters using known software libraries such as Intel's OpenCV package. Steps 402, 404 and 406 are identical with steps 302, 304 and 306 in FIG. 3. In step 408, the team colors and/or uniform textures are analyzed by the IPU based on the locations of each segmented object and their count. For example, the goalkeeper of team 1 is specified by (a) being a single object and (b) a location near goal 1. The color and intensity histograms, as well as their vertical distributions, are then stored into the IPU to be later used for the assignment step of blobs to teams.

[0073] Players and Ball Localization, Tracking and Identification Using Cameras Deployed in Multiple Locations

[0074] FIG. 5a shows a block diagram of a tracking system 500 in which cameras are deployed in at least two different locations around the sports field in order to detect and localize an object not confined to the flat playing field (e.g. a ball) by means of triangulation (measuring directions from two separated locations). System 500 comprises, in addition to the elements of system 200, a robotic video camera 502 with a remotely controlled zoom mechanism, the camera mounted on a remotely controlled motorized pan and tilt unit. Such robotic cameras are well known in the art, and manufactured for example by Vinten Inc., 709 Executive Blvd, Valley Cottage, N.Y. 10989, USA. System 500 further comprises a display 504 connected to the robotic camera 502 and viewed by an operator 506. Camera 502 and display 504 form an ID subsystem 505.

[0075] The ball is segmented from the other objects on the basis of its size, speed and shape and is then classified as possessed, flying or rolling on the playing field. When possessed by a player, the system is not likely to detect and recognize the ball and it has to guess, based on history, which player now possesses the ball. A rolling ball is situated on the field and its localization may be estimated from a single camera. A flying ball's 3D location may be calculated by triangulating two cameras that have detected it in a given frame. The search zone for the ball in a given frame can be determined based on its location in previous frames and ballistic calculations. Preferably, in this embodiment, players are personally identified by an operator to generate an individual player statistical database.

[0076] FIG. 5b shows a flow chart of a method for individual player identification implemented by sub-system 505, using a manual ID provided by the operator with the aid of the robotic camera. The tracking system provides an alert that a tracked player is either "lost" (i.e. the player is not detected by any camera) or that his ID certainty is low in step 520. The latter may occur e.g. if the player is detected but his ID is in question due to a collision between two players. The robotic camera automatically locks on the predicted location of this player (i.e. the location where the player was supposed to be based on his motion history) and zooms in to provide a high magnification video stream in step 522. The operator identifies the "lost" player using the robotic camera's video stream (displayed on a monitor) and indicates the player's identity to the system in step 524. As a result, the system now knows the player's ID and can continue the accumulation of personal statistics for this player as well as the performance of various related functions.

[0077] Note that the system knows a player's location in previous frames, and it is assumed that a player cannot move much during a frame period (or even during a few frame periods). The robotic camera field of view is adapted to this uncertainty, so that the player will always be in its frame.

[0078] FIG. 6a shows an automatic players/ball tracking and motion capture system 600 based on multiple (typically 2-3) pan/tilt/zoom robotic cameras 604a . . . n for automatic individual player identification. FIG. 6b shows a flow chart of a method of use. The system in FIG. 6a comprises, in addition to the elements of system 200, an Identification Processing
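The blob-to-team assignment prepared in step 408 (matching a segmented blob's stored color statistics against per-group templates) can be sketched as follows. The three-bin histograms, the histogram-intersection score and all numeric values are illustrative assumptions; the patent correlates full color/intensity profiles and their vertical distributions, not a toy RGB summary.

```python
def normalize(hist):
    total = sum(hist) or 1
    return [v / total for v in hist]

def similarity(h1, h2):
    # Histogram intersection: 1.0 for identical normalized histograms.
    return sum(min(a, b) for a, b in zip(normalize(h1), normalize(h2)))

def assign_blob(blob_hist, templates):
    """Return the identity group whose stored template best matches the blob."""
    return max(templates, key=lambda group: similarity(blob_hist, templates[group]))

# Hypothetical 3-bin (red/green/blue content) templates stored during setup.
templates = {
    "team_1": [80, 10, 10],    # predominantly red shirts
    "team_2": [10, 10, 80],    # predominantly blue shirts
    "referee": [5, 90, 5],
}
group = assign_blob([70, 15, 15], templates)   # mostly-red blob -> "team_1"
```

Normalizing before comparison makes the score insensitive to blob size, so a partially occluded player still correlates with the right team template.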
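The two-camera triangulation used above to locate a flying ball can be sketched as follows: given each camera's position and its viewing direction toward the ball, the 3D location is estimated as the midpoint of the shortest segment between the two rays. This is a sketch only; the coordinates are hypothetical, and a real system would first derive the direction vectors from the calibrated intrinsic/extrinsic camera parameters mentioned in step 400.

```python
def triangulate(p1, d1, p2, d2):
    """Estimate a 3D point from two viewing rays (midpoint method).

    p1, p2: camera positions; d1, d2: direction vectors toward the object.
    Finds ray parameters s, t minimizing |(p1 + s*d1) - (p2 + t*d2)|.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = [x - y for x, y in zip(p1, p2)]
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b                      # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = [p + s * u for p, u in zip(p1, d1)]   # closest point on ray 1
    q2 = [p + t * u for p, u in zip(p2, d2)]   # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Two cameras on opposite sidelines both sighting a ball 5 m above midfield.
ball = triangulate([-10, 0, 0], [10, 0, 5], [10, 0, 0], [-10, 0, 5])
# ball is approximately [0.0, 0.0, 5.0]
```

Taking the midpoint of the shortest inter-ray segment tolerates the small direction errors of real measurements, for which the two rays never intersect exactly.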
Unit (IDPU) 602 connected through a preferably Ethernet connection to system server 206 and operative to receive video streams from multiple robotic cameras 604.

[0079] In use, as shown in FIG. 6b, the method starts with step 620, which is essentially identical with step 520 above. Step 622 is similar to step 522, except that multiple robotic cameras (typically 2-3) are used instead of a single one. In step 624, the multiple video streams are fed into IDPU 602 and each stream is processed to identify a player by automatically recognizing his shirt's number or another unique pattern on his outfit. The assumption is that the number or unique pattern is exposed by at least one of the video streams, preferably originating from different viewpoints. The recognized player's ID is then conveyed to the system server (206) in step 626.

[0080] FIG. 7a shows an automatic objects tracking and motion capture system 700 based on multiple high-resolution fixed cameras 702a . . . 702n. System 700 comprises the elements of system 200, except that cameras 702 are coupled to and operative to feed video streams to an image processing and player identification unit (IPPIU) 704, which replaces IPU 204 of FIG. 2a. Alternatively, the added functions of IPPIU 704 may be implemented in IPU 204. FIG. 7b shows schematically details of IPPIU 704. IPPIU 704 comprises a frame grabber 720 coupled to an image processor 722 and to a jersey number/pattern recognition (or simply "recognition") unit 724. In use, frame grabber 720 receives all the frames in the video streams provided by cameras 702 and provides two digital frame streams, one to unit 722 and another to unit 724. Unit 722 performs the actions of object segmentation, connectivity, blob analysis, etc., and provides object locations on the playing field as described above. Unit 722 may also provide complete motion capture data composed of the 3D locations of all players' body parts. Recognition unit 724 uses pattern recognition algorithms to extract and read the player's jersey number or another identifying pattern and provides the player's ID to the system server. This process is feasible when the resolution of cameras 702 is so chosen as to enable jersey number/pattern recognition.

[0081] In contrast with the prior embodiments above, system 700 does not use robotic cameras for player identification. Fixed high resolution cameras 702a . . . 702n are used for both tracking/motion capture and individual player identification.

[0082] Generation of a 3D Graphical Representation of the Real Match in Real Time in a Computer Game

[0083] The information obtained by system 700 may be used for generation of a 3D graphical representation of the real match in real time in a computer game. The resolution of the cameras shown in FIG. 7a can be chosen in such a way as to enable a spatial resolution of at least 1 cm at each point on the playing field. Such resolution enables full motion capture of the player, as shown in FIG. 7c. The high resolution video from each camera is first captured in step 730 by frame grabber 720. The video is then separated into foreground objects and an empty playing field in step 732, as explained in steps 302 and 304 in FIG. 3, by IPPIU 704. Automatic foreground blob segmentation into a player's head, torso, hands and legs is then performed in step 734 by IPPIU 704, using pattern recognition algorithms that are well known in the art (see e.g. J. M. Buades et al., "Face and hands segmentation in color images and initial matching", Proc. International Workshop on Computer Vision and Image Analysis, Palmas de Gran Canaria, December 2003, pp. 43-48). The player's organs or joints directions from the viewpoint of each camera are extracted in step 736 by IPPIU 704. Specific player's joints or organs detected by different cameras are then matched one to another based on their locations on the playing field and on some kinematic data (general morphological knowledge of the human body) in step 738 by central server 206. A triangulation based calculation of the locations of all body organs of all players is then done in step 738 as well by central server 206.

[0084] An automatic selection of a player's dynamic (temporal) behavior that most likely fits his body's joints locations over a time period is then performed in step 740, using least squares or similar techniques, by 3D graphics applications server 212. This process can be done locally at the application server 212 side or remotely at the user end. In the latter case, the joints positions data may be distributed to users using any known communication link, preferably via the World Wide Web.

[0085] In step 742, a dynamic graphical environment may be created at the user's computer. This environment is composed of 3D specific player models having the temporal behaviors selected in step 740, composed onto a 3D graphical model of the stadium or onto the real playing field separated in step 732. In step 744, the user may select a static or dynamic viewpoint to watch the play. For example, he/she can decide that they want to watch the entire match from the eyes of a particular player. The generated 3D environment is then dynamically rendered in step 746 to display the event from the chosen viewpoint. This process is repeated for every video frame, leading to the generation of a 3D graphical representation of the real match in real time.

[0086] Virtual Camera Flight

[0087] FIG. 8 shows an embodiment of a system 800 of the present invention used to generate a "virtual camera flight" type effect (very similar to the visual effects shown in the movie "The Matrix") for a sports event. The effect includes generation of a "virtual flight clip" (VFC). System 800 comprises a plurality of high-resolution fixed cameras 802a-n arranged in groups around a sports arena 804. Each group includes at least one camera. All cameras are connected to a high resolution video recorder 806. The cameras can capture any event in a game on the playing field from multiple directions at a very high spatial resolution (~1 cm). All video outputs of all the cameras are continuously recorded on recorder 806. A VFC processor 808 is then used to pick selected recorded "real" frames of various cameras, create intermediate synthesized frames, arrange all real and synthesized frames in a correct order and generate the virtual flight clip intended to mimic the effect in "The Matrix" movie as an instant replay in sports events. The new video clip is composed of the real frames taken from the neighboring cameras (either simultaneously, if we "freeze" the action, or at progressing time periods when we let the action move slowly) as well as many synthesized (interpolated) frames inserted between the real ones.

[0088] In another embodiment, system 800 may comprise the elements of system 700 plus video recorder 806 and VFC processor 808 and their respective added functionalities.

[0089] The process is schematically described in FIG. 9. Three symbolic representations of recorded frame sequences of 3 consecutive cameras, CAM_i, CAM_i+1 and CAM_i+2, are shown as 902, 904 and 906, respectively. The VFC processor first receives a production requirement as to the temporal dynamics with which the play event is to be replayed. The VFC processor then calculates the identity of real frames that should be picked from consecutive real cameras (frames j, k,
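The frame-picking logic of FIG. 9 can be sketched as a scheduling routine: the VFC processor selects one real frame per camera and inserts n synthesized frames between each consecutive pair. The tuple format and the fractional "position of the virtual camera" are illustrative assumptions; in the patent, the synthesized frames themselves are built from block-matching and optical-flow motion vectors, not listed as placeholders.

```python
def vcf_schedule(real_picks, n_synth):
    """Order real and synthesized frames for a virtual camera flight clip.

    real_picks: (camera_index, frame_index) pairs chosen by the VFC
                processor, e.g. frames j, k, m from cameras i, i+1, i+2.
    n_synth:    number of synthesized frames between consecutive real ones.
    """
    clip = []
    for a, b in zip(real_picks, real_picks[1:]):
        clip.append(("real",) + a)
        for step in range(1, n_synth + 1):
            # Virtual camera sits step/(n_synth+1) of the way from a to b.
            clip.append(("synth", (a[0], b[0]), step / (n_synth + 1)))
    clip.append(("real",) + real_picks[-1])
    return clip

# "Frozen" action: the same frame index 7 picked from three cameras,
# with two synthesized frames in each camera-to-camera gap.
clip = vcf_schedule([(1, 7), (2, 7), (3, 7)], n_synth=2)
# 7 entries: real, synth, synth, real, synth, synth, real
```

Letting the frame indices advance (j, k, m instead of 7, 7, 7) yields the slow-motion variant described above, while equal indices reproduce the "frozen action" fly-around.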
and m from cameras i, i+1 and i+2, respectively, in this example) to create the sequences of intermediate synthesized frames, 908 and 910 respectively, to generate the virtual camera flight clip symbolically represented as 920.

[0090] FIG. 10 shows a functional flow chart of the process of FIG. 9. An "empty" playing field is generated as described in step 302 above, using a sequence of video frames from at least one of the cameras, in step 1002. Foreground objects are segmented in step 1004. The frames from CAM_i and CAM_i+1 are spatially correlated using known image processing methods like block matching, and a motion vector analysis is performed using optical flow algorithms in step 1006. Both types of algorithms are well known in the art. A virtual camera having the same optical characteristics as the real ones then starts a virtual flight between the locations of real cameras CAM_i and CAM_i+1. Both the location of the virtual camera (at the exact video frame timing) and the predicted foreground image for that location are calculated in step 1008, using pixel motion vector analysis and the virtual camera location determined according to the pre-programmed virtual camera flight. The virtual camera background "empty field" is calculated from the same viewpoint in step 1010, and the synthesized foreground and background portions are then composed in step 1012. n such synthesized frames are generated between the real frames of CAM_i and CAM_i+1. The same procedure is now repeated between real CAM_i+1 and CAM_i+2, and so on. A video clip composed of such multiple synthesized frames between real ones is generated and displayed to TV viewers in step 1014 as an instant replay showing the play as if it was continuously captured by a flying real camera.

[0091] All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention.

[0092] While the invention has been described with respect to a limited number of embodiments, it will be appreciated that many variations, modifications and other applications of the invention may be made.

1-61. (canceled)

62. A system for real-time object localization and tracking in a sports event comprising:
a. a plurality of fixed cameras positioned at a single location relative to a sports playing field and operative to capture video of the playing field including objects located therein;
b. an image processing unit operative to receive video frames from each camera and to detect and segment at least some of the objects in at least some of the frames using image processing algorithms, thereby providing processed object information; and
c. a central server operative to provide real-time localization and tracking information on the detected objects based on respective processed object information.

63. The system of claim 62, operative to assign each detected object to an object group.

64. The system of claim 63, wherein the detected object is a player, wherein the object group is a team, and wherein the assignment of the player to a team is automatic, without need for an operator to mark the player.

65. The system of claim 63, operative to perform an automatic setup and calibration process, without need for an operator to mark the player during a preparatory stage.

66. A system for real-time object localization, tracking and personal identification of players in a sports event comprising:
a. a plurality of cameras positioned at multiple locations relative to a sports playing field and operative to capture video of the playing field including objects located therein;
b. an image processing unit operative to receive video frames including some of the objects from at least some of the cameras and to detect and segment the objects using image processing algorithms, thereby providing processed object information;
c. a central server operative to provide real-time localization and tracking information on detected objects based on respective processed object information; and
d. at least one robotic camera capable to pan, tilt and zoom and to provide detailed views of an object of interest.

67. The system of claim 66, further comprising a display operative to display the detailed views to an operator.

68. The system of claim 67, wherein the object of interest is a player, and wherein the operator can identify the player from the detailed view.

69. The system of claim 66, wherein one of the objects is a ball, wherein the processed image information includes a location and tracking of the ball provided by the plurality of cameras.

70. The system of claim 68, wherein the player is either not detected or its identity is uncertain and wherein the system is operative to allow the operator to manually remark the lost player.

71. The system of claim 66, wherein the at least one robotic camera includes a plurality of robotic cameras, wherein the object of interest is a player having an identifying shirt detail, and wherein the system is operative to automatically identify the player from at least one detailed view that captures and provides the identifying shirt item.

72. The system of claim 71, wherein the identifying shirt detail is a shirt number.

73. The system of claim 66, wherein at least one robotic camera may be slaved onto an identified and tracked player to generate single player video clips.

74. The system of claim 67, further comprising a first application server coupled to elements b and c and operative to provide automatic or semiautomatic content based indexing, storage and retrieval of a video of the sports event.

75. The system of claim 67, further comprising a second application server coupled to elements b and c and operative to provide a rigid model two dimensional (2D) or three dimensional (3D) graphical representations of plays in the sports event.

76. The system of claim 67, operative to generate a telestrator clip with automatic tied-to-objects graphics for a match commentator.

77. The system of claim 67, operative to automatically create team and player performance databases for sports computer game developers and for fantasy games, whereby the fidelity of the computer game is increased through the usage of real data collected in real matches.

78. A system for automatic objects tracking and motion capture in a sports event comprising:
a. a plurality of fixed high resolution video cameras positioned at multiple locations relative to a sports playing field, each camera operative to capture a portion of the playing field including objects located therein, the objects including players;
b. an image processing unit (IPU) operative to provide full motion capture of moving objects based on the video streams; and
c. a central server coupled to the video cameras and the IPU and operative to provide localization information on player parts,
whereby the system provides real time motion capture of multiple players and other moving objects.

79. The system of claim 78, wherein the IPU includes a player identification capability and wherein the system is further operative to provide individual player identification and tracking.

80. The system of claim 79, wherein the player identification is based on automatically identifying shirt detail.

81. The system of claim 78, further comprising a three dimensional (3D) graphics application server coupled to elements a-c and operative to generate a three dimensional (3D) graphical representation of the sports event for use in a broadcast event.

82. The system of claim 78, further comprising a three dimensional (3D) graphics application server coupled to elements a-c and used for providing temporal player behavior inputs to a user computer game.

83. A system for generating a virtual flight clip (VFC) in a sports event comprising:
a. a plurality of fixed video cameras positioned at multiple locations relative to a sports playing field, each camera operative to capture a portion of the playing field including objects located therein, the objects including players;
b. a high resolution video recorder coupled to each camera and used for continuously recording respective camera real video frames; and
c. a VFC processor operative to select recorded real frames of various cameras, to create intermediate synthesized frames and to combine the real and synthesized frames into a virtual flight clip of the sports game.

84. In a sports event taking place on a playing field, a method for real-time motion capture of multiple moving objects comprising the steps of:
a. providing a plurality of fixed high resolution video cameras positioned at multiple locations relative to a sports playing field; and
b. using the cameras to capture the full motion of multiple moving objects on the playing field in real-time.

85. The method of claim 84, wherein the objects include players having body organs, and wherein the step of using the cameras to capture the full motion of multiple moving objects includes capturing the full motion of each of multiple players based on image processing of at least some of the body organs of the respective player.

86. The method of claim 85, wherein the capturing of the full motion of each respective player further includes, using a processing unit:
i. capturing high resolution video frames from each camera,
ii. separating each video frame into foreground objects and an empty playing field,
iii. performing automatic blob segmentation to identify the respective player's body organs, and
iv. extracting the respective player's body organs directions from a viewpoint of each camera.

87. The method of claim 86, wherein the capturing of the full motion further includes:
vi. matching the player's body organs received from the different camera viewpoints, and
vii. calculating a three-dimensional location of all the player's organs including joints.

88. The method of claim 87, wherein the capturing of the full motion further includes automatically selecting a dynamic player's behavior that most likely fits the respective player's body organ location over a time period, thereby creating respective player temporal characteristics.

89. The method of claim 88, further comprising the step of generating, on a user's device, a 3D graphical dynamic environment that combines the temporal player characteristics with a real or virtual playing field image.

90. The method of claim 86, wherein the processing unit is an image processing and player identification unit (IPPIU), the method further comprising the step of using the IPPIU to identify a player from a respective player shirt detail.

91. A method for generating a virtual flight clip (VFC) of a sports game, comprising the steps of:
a. at a high resolution recorder coupled to a plurality of fixed video cameras positioned at multiple locations relative to a sports playing field, each camera operative to capture a portion of the playing field including objects located therein, the objects including players, continuously recording respective real camera video frames; and
b. using a VFC processor coupled to the high resolution recorder to select recorded real frames of various cameras, to create intermediate synthesized frames and to combine the real and synthesized frames into a virtual flight clip.

92. The method of claim 91, wherein the step of using a VFC processor includes:
i. generating an empty playing field from at least one camera CAM_i,
ii. segmenting foreground objects in each real camera frame,
iii. correlating real frames of two consecutive cameras CAM_i and CAM_i+1 and performing a motion vector analysis using these frames,
iv. calculating n synthesized frames for a virtual camera located between real cameras CAM_i and CAM_i+1, according to a calculated location of the virtual camera,
v. calculating a background empty field from each viewpoint of the virtual camera,
vi. composing a synthesized foreground over the background empty field to obtain a composite replay clip that represents the virtual flight clip, and
vii. displaying the composite replay clip to a user.

* * * * *
