(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2012/0060077 A1
     Mate et al.                       (43) Pub. Date: Mar. 8, 2012

(54) METHOD AND APPARATUS FOR VIDEO SYNTHESIS
(75) Inventors: Sujeet Shyamsundar Mate, Tampere (FI), et al.
(73) Assignee: Nokia Corporation, Espoo (FI)
(21) Appl. No.: 12/877,288
(22) Filed: Sep. 8, 2010

Publication Classification
(51) Int. Cl.: G06F 17/00 (2006.01)

(57) ABSTRACT
An approach is provided for generating a compilation of media items. Context vectors, including orientation information, geo-location information, and timing information associated with the creation of the media items, are determined and used to generate the compilation.

[Sheets 1-18, FIGS. 1-12: drawings. Recoverable figure text: FIG. 3 flowchart steps (301 RECEIVE MEDIA ITEMS; 303 DETERMINE CONTEXT VECTORS FOR MEDIA ITEMS; 305 DETERMINE CRITERIA FOR SELECTING MEDIA ITEMS; 307 GENERATE COMPILATION OF MEDIA ITEMS BASED ON CONTEXT VECTORS; DISTRIBUTE COMPILATION); FIG. 4: SYSTEM OVERVIEW ILLUSTRATION; FIG. 7C: INTERSECTION OF POINTING DIRECTIONS; FIG. 8: CONTENT SYNTHESIS STEPS; FIG. 9A: AREA COVERED BY OTHER USERS, THE ARROW INDICATES POINTING DIRECTION.]

US 2012/0060077 A1

METHOD AND APPARATUS FOR VIDEO
SYNTHESIS

BACKGROUND

[0001] Service providers and device manufacturers (e.g., wireless, cellular, etc.) are continually challenged to deliver value and convenience to consumers by, for example, providing compelling network services. Important differentiators in the industry are applications and network services that offer entertainment (e.g., media) and location services. In particular, media sharing services allow for distribution of content to other users of the media sharing service. Traditionally, the content distributed on such media sharing services is uploaded by one or more users. Interesting transformations of the content can be utilized to improve user experience. However, such transformations are generally limited due to technical challenges and limitations that exist for enabling the transformations. For example, only basic transformations may be accomplished utilizing basic media.

SOME EXAMPLE EMBODIMENTS

[0002] Therefore, there is a need for an approach for generating a compilation of media items.

[0003] According to one embodiment, a method comprises receiving a plurality of media items. The method also comprises determining respective context vectors for the plurality of media items. The context vectors include, at least in part, orientation information, geo-location information, timing information, or a combination thereof associated with the creation of respective media items. The method further comprises determining to generate a compilation of at least a portion of the media items based, at least in part, on the context vectors.

[0004] According to another embodiment, an apparatus comprises at least one processor, and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause, at least in part, the apparatus to receive a plurality of media items. The apparatus is also caused to determine respective context vectors for the plurality of media items.
The context vectors include, at least in part, orientation information, geo-location information, timing information, or a combination thereof associated with the creation of respective media items. The apparatus is further caused to determine to generate a compilation of at least a portion of the media items based, at least in part, on the context vectors.

[0005] According to another embodiment, a computer-readable storage medium carries one or more sequences of one or more instructions which, when executed by one or more processors, cause, at least in part, an apparatus to receive a plurality of media items. The apparatus is also caused to determine respective context vectors for the plurality of media items. The context vectors include, at least in part, orientation information, geo-location information, timing information, or a combination thereof associated with the creation of respective media items. The apparatus is further caused to determine to generate a compilation of at least a portion of the media items based, at least in part, on the context vectors.

[0006] According to another embodiment, an apparatus comprises means for receiving a plurality of media items. The apparatus also comprises means for determining respective context vectors for the plurality of media items. The context vectors include, at least in part, orientation information, geo-location information, timing information, or a combination thereof associated with the creation of respective media items. The apparatus further comprises means for determining to generate a compilation of at least a portion of the media items based, at least in part, on the context vectors.
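By way of illustration, the method summarized in paragraphs [0003]-[0006] can be sketched in Python. The class names, the single-bearing selection criterion, and the time-ordering rule below are illustrative assumptions for this sketch only; the embodiments do not prescribe a particular data structure or selection rule.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ContextVector:
    """Capture-time context for one media item."""
    bearing_deg: float  # orientation, degrees clockwise from north
    lat: float          # geo-location
    lon: float
    timestamp: float    # timing information, seconds since epoch

@dataclass
class MediaItem:
    item_id: str
    context: ContextVector

def generate_compilation(items: List[MediaItem],
                         start: float,
                         end: float,
                         target_bearing_deg: Optional[float] = None,
                         max_bearing_diff_deg: float = 45.0) -> List[MediaItem]:
    """Select media items whose context vectors fall inside a time window
    and, optionally, point roughly toward a chosen bearing, then order
    them by capture time to form a simple compilation."""
    selected = []
    for item in items:
        ctx = item.context
        if not (start <= ctx.timestamp <= end):
            continue
        if target_bearing_deg is not None:
            # Smallest absolute difference between two compass bearings.
            diff = abs((ctx.bearing_deg - target_bearing_deg + 180.0) % 360.0 - 180.0)
            if diff > max_bearing_diff_deg:
                continue
        selected.append(item)
    return sorted(selected, key=lambda i: i.context.timestamp)
```

A richer implementation would weight the criteria described later (e.g., overlap of views, sub-event focus), but the shape of the computation is the same: filter on context vectors, then order the survivors.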
[0007] Still other aspects, features, and advantages of the invention are readily apparent from the following detailed description, simply by illustrating a number of particular embodiments and implementations, including the best mode contemplated for carrying out the invention. The invention is also capable of other and different embodiments, and its several details can be modified in various obvious respects, all without departing from the spirit and scope of the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] The embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings:

[0009] FIG. 1 is a diagram of a system capable of generating a compilation of media items based on context vectors associated with the media items, according to one embodiment;

[0010] FIG. 2 is a diagram of the components of user equipment that can be utilized in generating media items, according to one embodiment;

[0011] FIG. 3 is a flowchart of a process for generating a compilation of media items, according to one embodiment;

[0012] FIG. 4 is a diagram of utilizing a media platform to generate a compilation of media items, according to one embodiment;

[0013] FIG. 5 is a diagram of views of media items collected by user equipment utilized to generate a compilation, according to one embodiment;

[0014] FIG. 6 is a diagram of signaling between user equipment and a media platform for generating a compilation, according to one embodiment;

[0015] FIGS. 7A-7C are map diagrams displaying example maps of locations utilized to generate context vector information, according to various embodiments;

[0016] FIG. 8 is a flow diagram utilized in generating a compilation of media items, according to one embodiment;

[0017] FIGS. 9A-9E are diagrams of user interfaces utilized in the processes of FIGS. 4 and 8, according to various embodiments;

[0018] FIG.
10 is a diagram of hardware that can be used to implement an embodiment of the invention;

[0019] FIG. 11 is a diagram of a chip set that can be used to implement an embodiment of the invention; and

[0020] FIG. 12 is a diagram of a mobile terminal (e.g., handset) that can be used to implement an embodiment of the invention.

DESCRIPTION OF SOME EMBODIMENTS

[0021] Examples of a method, apparatus, and computer program for generating a compilation of media items are disclosed. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It is apparent, however, to one skilled in the art that the embodiments of the invention may be practiced without these specific details or with an equivalent arrangement. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the embodiments of the invention.

[0022] FIG. 1 is a diagram of a system capable of generating a compilation of media items based on context vectors associated with the media items, according to one embodiment. In a mobile world, an increasing number of services and applications are targeted at providing social services and distributing media captured by individual users. As such, advances in mobile multimedia technology have given rise to an increase in user generated content. These users can share the content with others using one or more platforms (e.g., via the Internet). With increases in accessibility of the Internet and bandwidth available to wireless and wired users, consumers can easily be distributed such media.

[0023] Individual users commonly record media (e.g., video, audio, images, etc.) at events that the users find interesting. Examples of events include concerts, dance festivals, carnivals, etc. Moreover, events can have one or more focus points and can be attended by more than one person.
Many events are recorded by more than one user using a personal recording device (e.g., a mobile phone, a camcorder, a digital camera, etc.). Users may wish to view media associated with the events. These users may wish to view the media using different views corresponding to uploaded media of the event.

[0024] A more advantageous way to view such content would be to automatically enhance or customize media to generate a synthesized or machine-generated compilation of media items. As used herein, the term media item refers to media that can be created or generated. Media items can be uploaded (via a stream or file transfer) to a platform for generating the compilation. However, many technical challenges are present in generating such a synthesized compilation. One such challenge is to determine what view is covered by each media item associated with an event. Another technical challenge includes determining what content to include in a synthesized compilation. For example, one or more media items may include similar or the same content at the same time. Which one of the media items should be included in the media compilation? Further, more than one interesting activity or event (e.g., a sub-event) can occur near a particular location. It can be challenging to determine which sub-event to focus upon when generating the compilation. In certain embodiments, a synthesized compilation is a compilation that is generated automatically or otherwise without need for manual instructions.

[0025] FIG. 1 is a diagram of a system capable of generating a compilation of media items based on context vectors associated with the media items, according to one embodiment. Information collected from mobile devices (e.g., user equipment (UE) 101a-101c) can be utilized to capture the media items. In one embodiment, the UEs 101a-101c capture media items (e.g., photos, video clips, audio clips, etc.) and transmit the media items and related information (e.g., context vectors) to a media platform 103 via a communication network 105.
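One challenge noted in paragraph [0024] is determining what view each media item covers. Assuming positions projected onto a flat local plane (x = east, y = north, in metres) and the hypothetical function name below, a coverage test for a single capture can be sketched as:

```python
import math

def covers_point(cam_xy, bearing_deg, fov_deg, range_m, point_xy):
    """Return True when point_xy lies within the capture's horizontal
    field of view (centered on bearing_deg, degrees clockwise from
    north) and within its range of interest."""
    dx = point_xy[0] - cam_xy[0]
    dy = point_xy[1] - cam_xy[1]
    if math.hypot(dx, dy) > range_m:
        return False
    # Bearing from the camera to the point, clockwise from north.
    to_point = math.degrees(math.atan2(dx, dy)) % 360.0
    diff = abs((to_point - bearing_deg + 180.0) % 360.0 - 180.0)
    return diff <= fov_deg / 2.0
```

Applying such a test to the context vectors of every media item indicates which items cover a given focus point, which is one way the overlap questions raised above could be answered.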
The media items can be captured, for instance, to explicitly transmit information about an event or may be captured for other purposes (e.g., sightseeing, general interest, etc.) but then co-opted for use in the approach described herein. The media may be analyzed to determine information about the existence of an event, which can be transmitted to the UEs 101a-101c. Further, media items can be combined to generate the compilation based on one or more criteria and a plurality of media items. The compilation can thus be a customized director's cut combining various media items for a user.

[0026] The UE 101 is any type of mobile terminal, fixed terminal, or portable terminal including a mobile handset, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system (PCS) device, personal navigation device, personal digital assistants (PDAs), audio/video player, digital camera/camcorder, positioning device, television receiver, radio broadcast receiver, electronic book device, game device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof. It is also contemplated that the UE 101 can support any type of interface to the user (such as "wearable" circuitry, etc.). The UE 101 may include a sensor module 107a-107c to determine context vectors (e.g., location information, timing information, etc.) of the UE 101. The sensor module 107 may be utilized by one or more applications 109 (e.g., media applications 109a-109n, event applications, etc.).
The system 100 has been simplified to include three UEs 101a-101c to capture the event; however, it is contemplated that any multiple number of UEs 101 (e.g., more than two UEs 101) can be utilized in capturing information about the event.

[0027] The UEs 101a-101c may utilize respective media applications 109 to capture media of an event 111 as well as the location, via a location sensor of the sensor module 107, and other information (e.g., compass information, accelerometer tilt information, etc.) about the UE 101 during the event. In certain embodiments, the event may include a static event (e.g., a normal occurrence such as media capture around a monument), a sudden incident (e.g., a spontaneous occurrence such as an accident or an impromptu folk festival that users determine is a good reason to capture media), a special event (e.g., an occurrence that is indicated to be more important by the media platform 103 based on certain criteria), a combination thereof, or the like.

[0028] When the media is captured, a context vector can be determined and associated with the media. In certain embodiments, a context vector is one or more data items that can be associated with the media. As such, a context vector can include time information, a position (Pi) of the UE 101, an altitude (Ai) of the UE 101, a tilt (Ti) of the UE 101, an orientation (Oi) of the UE 101, a zoom level (Zi) of the UE 101, a focal length (Fi) of the UE 101, a field of view (FOVi) of the UE 101, a radius of interest (RadiusOIi) of the UE 101 while capturing the media content, a range of interest (RangeOIi) of the UE 101 while capturing the media content, or a combination thereof. The position can be detected from one or more sensors of the UE 101 (e.g., via a Global Positioning System (GPS)). Further, the altitude can be detected from one or more sensors such as an altimeter and/or GPS. The tilt of the UE 101 can be based on a reference point (e.g.,
a camera sensor location) with respect to the ground based on accelerometer information. Moreover, the orientation can be based on compass (e.g., magnetometer) information and may be based on reference to north. One or more zoom levels, a focal length, and a field of view can be determined according to a camera sensor. Further, the radius of interest and/or range of interest can be determined based on one or more of the other parameters or another sensor (e.g., a range detection sensor).

[0029] In certain embodiments, capture of a media item can include corresponding context vector information. For example, one or more frames of a video can have associated with it an audio component as well as a context vector. Thus, the context vector can be associated with one or more segments of the media item. Further, the context vector can be stored in one or more data structures or forms, such as RDF/XML (Resource Description Framework/Extensible Markup Language).

[0030] Further, the media platform 103 can automatically determine or infer information about the occurrence of an event or other event related information by evaluating one or more of the context vectors received from one or more UEs 101a-101c during an event. By way of example, a context vector can be transmitted as a data structure of information (e.g., independent of media content and/or with the media content). The context vector may be utilized to determine one or more focal points of a plurality of UEs 101 by using the multiple UEs 101 (e.g., the sensors of the UEs 101). This focus may be determined to be a center or other important point of the event. In one scenario, the context vector is separated from the media to conserve bandwidth when transmitting the context vector to the media platform 103. Under this scenario, the context vectors may be transmitted in real time or near real time while the actual media is streamed or transmitted at a later time. With this approach, guidance can be provided to other users capturing the event 111.
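Paragraph [0029] notes that a context vector can be stored in forms such as RDF/XML. A minimal sketch of serializing the parameters from paragraph [0028] for one media segment follows; the element names and sample values are illustrative assumptions, as the description does not specify a schema.

```python
import xml.etree.ElementTree as ET

def context_vector_to_xml(fields: dict) -> str:
    """Serialize one context vector as a plain XML fragment.
    Element names mirror the parameters of paragraph [0028] but are
    otherwise hypothetical."""
    root = ET.Element("contextVector")
    for name, value in fields.items():
        ET.SubElement(root, name).text = str(value)
    return ET.tostring(root, encoding="unicode")

# Example values (illustrative only): position Pi, altitude Ai, tilt Ti,
# orientation Oi, zoom Zi, focal length Fi, field of view FOVi,
# radius of interest RadiusOIi, range of interest RangeOIi.
vector = {
    "position": "61.4978,23.7610",
    "altitude": 120,
    "tilt": 10,
    "orientation": 275,
    "zoom": 2.0,
    "focalLength": 4.9,
    "fieldOfView": 54,
    "radiusOfInterest": 50,
    "rangeOfInterest": 80,
}
xml_fragment = context_vector_to_xml(vector)
```

A fragment like this could accompany each segment of a media item, or be sent separately in real time while the media itself is uploaded later, per the bandwidth-conserving scenario above.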
Moreover, a media application 109 may include options to participate or not participate in a service provided by the media platform 103 to determine the existence of events and/or publish media of the user of the application 109. In certain embodiments, to encourage participation, the user may be recognized or compensated if the user's media is presented to other users.

[0031] The media platform 103 may receive the context vectors and the media from UEs 101 and store the information in one or more databases. The context vectors can be stored in a context data database 113. The context data database 113 may be utilized to store current and historical data about events. Moreover, the media platform 103 may have access to additional historical data (e.g., historical sensor data or additional historical information about a region that may or may not be associated with events) to determine if an event is occurring or has occurred at a particular time. This feature can be useful in determining if newly uploaded media items can be associated with one or more events. A media data database 115 can be utilized for collecting and storing media items. The media items may include metadata including associated context vectors.

[0032] The events or historical data may be sorted using the geo-location of the UEs 101 or a determined geo-location of events. Further, the media may be published from the media data database 115 to one or more UEs 101. The media platform 103 may additionally extract an identifier associated with a particular UE 101 from a received context vector and associate the user and/or UE 101 with a profile. The user profile may be utilized to collect historical event information about a particular UE 101 or user. This data may be used in determining how useful associated media items may be in a compilation.

[0033] The media platform 103 can utilize criteria from a criteria data database 117 to generate one or more compilations.
As previously noted, the compilations can represent a director's cut of media items associated with an event 111. Generation of the compilations is further detailed in FIGS. 3 and 4.

[0034] By way of example, the communication network 105 of system 100 includes one or more networks such as a data network (not shown), a wireless network (not shown), a telephony network (not shown), or any combination thereof. It is contemplated that the data network may be any local area network (LAN), metropolitan area network (MAN), wide area network (WAN), a public data network (e.g., the Internet), short range wireless network, or any other suitable packet-switched network, such as a commercially owned, proprietary packet-switched network, e.g., a proprietary cable or fiber-optic network, and the like, or any combination thereof. In addition, the wireless network may be, for example, a cellular network and may employ various technologies including enhanced data rates for global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., worldwide interoperability for microwave access (WiMAX), Long Term Evolution (LTE) networks, code division multiple access (CDMA), wideband code division multiple access (WCDMA), wireless fidelity (WiFi), wireless LAN (WLAN), Bluetooth®, Internet Protocol (IP) data casting, satellite, mobile ad-hoc network (MANET), and the like, or any combination thereof.

[0035] By way of example, the UEs 101 and media platform 103 communicate with each other and other components of the communication network 105 using well known, new or still developing protocols. In this context, a protocol includes a set of rules defining how the network nodes within the communication network 105 interact with each other based on information sent over the communication links.
The protocols are effective at different layers of operation within each node, from generating and receiving physical signals of various types, to selecting a link for transferring those signals, to the format of information indicated by those signals, to identifying which software application executing on a computer system sends or receives the information. The conceptually different layers of protocols for exchanging information over a network are described in the Open Systems Interconnection (OSI) Reference Model.

[0036] Communications between the network nodes are typically effected by exchanging discrete packets of data. Each packet typically comprises (1) header information associated with a particular protocol, and (2) payload information that follows the header information and contains information that may be processed independently of that particular protocol. In some protocols, the packet includes (3) trailer information following the payload and indicating the end of the payload information. The header includes information such as the source of the packet, its destination, the length of the payload, and other properties used by the protocol. Often, the data in the payload for the particular protocol includes a header and payload for a different protocol associated with a different, higher layer of the OSI Reference Model. The header for a particular protocol typically indicates a type for the next protocol contained in its payload. The higher layer protocol is said to be encapsulated in the lower layer protocol. The headers included in a packet traversing multiple heterogeneous networks, such as the Internet, typically include a physical (layer 1) header, a data-link (layer 2) header, an internetwork (layer 3) header and a transport (layer 4) header, and various application headers (layer 5, layer 6 and layer 7) as defined by the OSI Reference Model.

[0037] In one embodiment, the UE 101 and media platform 103 interact according to a client-server model.
According to the client-server model, a client process sends a message including a request to a server process, and the server process responds by providing a service. The server process may also return a message with a response to the client process. Often the client process and server process execute on different computer devices, called hosts, and communicate via a network using one or more protocols for network communications. The term "server" is conventionally used to refer to the process that provides the service, or the host computer on which the process operates. Similarly, the term "client" is conventionally used to refer to the process that makes the request, or the host computer on which the process operates. As used herein, the terms "client" and "server" refer to the processes, rather than the host computers, unless otherwise clear from the context. In addition, the process performed by a server can be broken up to run as multiple processes on multiple hosts (sometimes called tiers) for reasons that include reliability, scalability, and redundancy, among others.

[0038] FIG. 2 is a diagram of the components of a UE 101, according to one embodiment. By way of example, the UE 101 includes one or more components for collecting and transmitting media items and context vectors. It is contemplated that the functions of these components may be combined in one or more components or performed by other components of equivalent functionality. In this embodiment, the UE 101 includes a media platform interface 201 to communicate with the media platform 103, a sensor module 107 that includes a location module 203 to determine the location of a UE 101, a range module 204 to detect the range of an object from the UE 101 while capturing media, a magnetometer module 205 to determine the horizontal orientation of the UE
101, an accelerometer module 207 to determine the vertical orientation of the UE 101, an altimeter module 209 to determine altitude, a camera module 210 to capture images, and other sensor modules (not shown), a media module 211 that may be used to capture media, a runtime module 213 to execute applications on the UE 101, a user interface 215, and a communication interface 217. Information from the location module 203, range module 204, magnetometer module 205, accelerometer module 207, and media module 211 may be used to determine the direction or vector along which the UE 101 is aligned when, for instance, capturing event related media or information (e.g., the direction or vector along which a camera of the UE 101 is pointed when capturing an image of an event). In this way, the UE 101 may generate and transmit a context vector to the media platform 103 that includes the directional and location information. Further, the UE 101 may embed the context vector in media transmitted to the media platform 103 to determine the existence of a current event, existence of a past event, or a combination thereof.

[0039] The media platform interface 201 is used by the runtime module 213 to communicate with the media platform 103. In some embodiments, the interface is used to upload media and context vectors for processing at the media platform 103. Further, the media platform interface 201 may be utilized by an application 109 to receive event information from the media platform 103. In certain embodiments, the event information includes a determination that an event 111 is occurring, an extent of the event 111, a face of the event 111, a structure of the event 111, a type of the event 111, or a combination thereof. In certain embodiments, the face of the event 111 is the direction that a focus point of the event 111 points towards (e.g., a front stage at a concert). As such, the face of the event 111 may be the outward presentation of the event from which the UEs 101 capture media regarding the event.
The location module 203, magnetometer module 205, accelerometer module 207, and media module 211 may be utilized to create context vectors to transmit to the media platform 103.

[0040] Moreover, in certain embodiments, UEs 101 may additionally communicate with other UEs 101 and devices via the communication interface 217. In these scenarios, information may be transmitted between UEs 101 via a peer-to-peer network topology. The UE 101 may communicate with other UEs 101 utilizing an application 109 based on proximity to the other UEs 101 (e.g., via a Near Field Communication (NFC) connection). In certain embodiments, NFC technology is a short-range technology that enables two-way interactions between devices. NFC technology can be used to communicate with smartcards, readers, and other NFC devices (e.g., another UE 101). NFC can utilize magnetic field induction (e.g., using antennas) to communicate with other NFC devices that are located within a certain distance. For example, an NFC device can transmit on a radio band (e.g., the radio band of 13.56 MHz). In another embodiment, a first UE 101b may utilize a second UE 101a as a conduit to communicate with the media platform 103. In this scenario, the second UE 101a may collect information (e.g., context vectors and/or media) from the first UE 101b and upload the information to the media platform 103. This can be useful when there is a crowd of UEs 101 (which may regularly occur during an event) and the network is a bottleneck or congested because of the crowd.

[0041] In one embodiment, the location module 203 can determine a user's location. The user's location can be determined by a triangulation system such as a GPS, assisted GPS (A-GPS), Cell of Origin, wireless local area network triangulation, or other location extrapolation technologies.
Standard GPS and A-GPS systems can use satellites to pinpoint the location (e.g., longitude, latitude, and altitude) of the UE 101. A Cell of Origin system can be used to determine the cellular tower that a cellular UE 101 is synchronized with. This information provides a coarse location of the UE 101 because the cellular tower can have a unique cellular identifier (cell-ID) that can be geographically mapped. The location module 203 may also utilize multiple technologies to detect the location of the UE 101. GPS coordinates can provide finer detail as to the location of the UE 101. The location module 203 may be utilized by the application 109 to capture location information as part of a context vector to transmit to the media platform 103.

[0042] The range module 204 can include one or more sensors that sense the range of an object. For example, an infrared sensor, a radio sensor, a sonic sensor, a laser, a LIDAR (Light Detection and Ranging) sensor, a radar, etc. can be utilized to determine a range between the UE 101 and an object. The range detection can further be guided to determine how far an object centered by the UE 101 is. The range module 204 can thus detect what is in view and whether the view includes one or more obstructions to an event 111. Range detection is further detailed in FIG. 7B.

[0043] The magnetometer module 205 can include an instrument that can measure the strength and/or direction of a magnetic field. Using the same approach as a compass, the magnetometer is capable of determining the direction of a UE 101 using the magnetic field of the Earth. The front of a media capture device (e.g., a camera module 210) can be marked as a reference point in determining direction. Thus, if the magnetic field points north compared to the reference point, the angle of the UE 101 reference point from the magnetic field is known. Simple calculations can be made to determine the direction of the UE 101.
In one embodiment, horizontal directional data obtained from a magnetometer is stored in a context vector when media is captured. This directional information may be correlated with the location information of the UE 101 and other UEs 101 to determine a focus point (e.g., where multiple vectors associated with the determined locations cross paths) for the event 111.

[0044] Further, the accelerometer module 207 may include an instrument that can measure acceleration. Using a three-axis accelerometer, with axes X, Y, and Z, provides the acceleration in three directions with known angles. Once again, the front of a media capture device can be marked as a reference point in determining direction. Because the acceleration due to gravity is known, when a UE 101 is stationary, the accelerometer module 207 can determine the angle the UE 101 is pointed as compared to Earth's gravity. In one embodiment, vertical directional data obtained from an accelerometer is stored in the context vector when media is captured.

[0045] Moreover, the altimeter module 209 may be utilized to determine the altitude of the UE 101 during the event. Altitude information may be included in the event vector to determine a vantage of the user while capturing media. Moreover, altitude information may be used to determine events happening at a single longitude and latitude location, but at a different elevation (e.g., on a roof of a building, edge of a cliff, etc.).
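Correlating the stored compass headings with UE locations to find a focus point — the spot where the pointing vectors cross — reduces, in the flat two-dimensional case, to intersecting two rays. This sketch assumes a local east/north coordinate frame and compass headings in degrees clockwise from north; none of the names come from the application:

```python
import math

def focus_point(p1, heading1_deg, p2, heading2_deg):
    """Intersect the lines of sight of two capture devices.

    p1, p2 are (east, north) positions; headings are compass degrees
    clockwise from north. Returns the crossing point, or None if the
    lines of sight are parallel.
    """
    d1 = (math.sin(math.radians(heading1_deg)), math.cos(math.radians(heading1_deg)))
    d2 = (math.sin(math.radians(heading2_deg)), math.cos(math.radians(heading2_deg)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # parallel pointing directions: no single focus point
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

With more than two UEs, a least-squares version of the same idea would yield a single best-fit focus point from all the pointing vectors.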
In certain embodiments, the altimeter module 209 includes a pressure altimeter that determines barometric pressure to determine the altitude. In another embodiment, the UE 101 may include a temperature sensor that is used to infer altitude based on the ambient temperature (e.g., temperature decreases at a known rate with increasing altitude). Additionally or alternatively, GPS information may be utilized to determine altitude information.

[0046] Media can be captured using a media capture device associated with the media module 211. A media capture device may include a camera module 210, an audio recorder, a video camera, a combination thereof, etc. In one embodiment, visual media is captured in the form of an image or a series of images. The media module 211 can obtain the image from a camera and embed the image within an event vector also containing location data, timing data, and orientation data. Moreover, the event vector may additionally include air-pressure sensor data, temperature sensor data, other such sensor data, or a combination thereof. Timing information can be synchronized between UEs 101 utilizing one or more services. In certain embodiments, the UEs 101 include a cellular radio. The cellular radio can be utilized to synchronize the UE 101 to a particular time associated with a wireless carrier. Carrier information can be included as metadata because different carriers can include different timing clocks. As such, the timing information can be synchronized based on carrier. Additionally, one or more offsets can be determined between UEs 101 associated with different carriers. Further, content processing (e.g., audio processing) may be utilized to synchronize timing of associated media items.

[0047] Information that is collected to transmit to the media platform 103 may be controlled and viewed using the user interface 215, which can include various methods of communication.
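For the pressure altimeter mentioned above, a common way to turn barometric pressure into altitude is the standard-atmosphere approximation sketched below. This is a generic formula, not one given in the application, and a real device would calibrate the sea-level reference pressure:

```python
def pressure_altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    """Approximate altitude (metres) from barometric pressure (hPa)
    using the international standard atmosphere."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))
```

At 900 hPa this yields roughly 990 m — the kind of coarse vantage information an event vector needs (roof versus street level), not survey-grade altitude.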
For example, the user interface 215 can have outputs including a visual component (e.g., a screen), an audio component (e.g., verbal instructions), a physical component (e.g., vibrations), and other methods of communication. User inputs can include a touch-screen interface, a microphone, a camera, a scroll-and-click interface, a button interface, etc. Further, the user may input a request to start an application 109 and utilize the user interface 215 while capturing media. Additionally or alternatively, the UE 101 may include an application 109 that can be presented using the user interface 215. Utilizing the user interface 215, the user may select to view one or more views of the event 111 and/or request that events nearby the user be presented to the user. Further, the user may, while capturing media items, receive output describing where to focus and/or other guidance information.

[0048] Additionally, the user interface 215 can be utilized to present compilations of media items. For example, the runtime module 213 can request the compilation from the media platform via the media platform interface 201. The runtime module 213 can then receive the compilation (e.g., via download or a stream) and present the content via the user interface 215. In certain embodiments, the user can enter parameters for determining criteria to control the user experience of the compilation via the user interface 215. These criteria and/or parameters for determining the criteria can be sent to the media platform 103. The media platform 103 can process the compilation according to the criteria and send the compilation. The runtime module 213 can then receive the compilation and present it via the user interface 215. The compilation can be made dynamically according to one or more interactive responses.

[0049] FIG. 3 is a flowchart of a process for generating a compilation of media items, according to one embodiment.
In one embodiment, control logic of the media platform 103 performs the process 300 and is implemented in, for instance, a chip set including a processor and a memory as shown in FIG. 11. Additionally or alternatively, one or more portions of the process 300 can be implemented via another device (e.g., the UE 101). As such, control logic of the media platform 103 and/or other devices can be utilized to implement one or more steps of the process 300.

[0050] At step 301, the control logic of the media platform 103 receives a plurality of media items. These media items can be utilized for generating a compilation media item (e.g., a director's cut compilation). Further, these media items can be stored in the media data database 115 of the media platform 103. The media items can be in the format of one or more video items, audio items, image items, etc. For example, the media items can be streamed to the media platform 103 and/or sent as a file, e.g., in Moving Picture Experts Group (MPEG) format, Windows® media formats (e.g., Windows Media Video (WMV), Audio Video Interleave (AVI)) format, as well as new and/or proprietary formats. In certain embodiments, these media items can additionally include associated context vectors as part of the media items, as metadata in the media items, or as a separate file associated with the media items.

[0051] The media platform 103 determines respective context vectors associated with the media items (step 303). The context vectors can include orientation information, geo-location information, timing information, or a combination thereof, and can be associated with the creation of the respective media items. For example, the context vector associated with a frame of a video can include the time the frame was captured as well as the location of the UE 101 capturing the media item. Moreover, as noted above, the context vector may include accelerometer data, magnetometer data, altimeter data, zoom level data, focal length data, field of view data, or combinations thereof.
The media platform 103 can determine the context vectors by using one or more identifiers linking the media items to respective context vectors, by extracting the context vectors from the media items, or the like.

[0052] At step 305, the media platform 103 determines one or more criteria, user preferences, or a combination thereof for selecting from among the media items to include in a compilation. This step can occur multiple times as one or more criteria can be updated (e.g., based on user input, a change in a service performed by the media platform 103, etc.). The criteria can include one or more specifications and/or filters associated with the context vectors and/or media items. For example, one criterion may include a preference or threshold as to what quality of media item to include in the compilation (e.g., a resolution for a video file). Another criterion can be to utilize an advantageous view in the compilation. This can be based on determined face(s) of an event associated with the context vectors. The determination of the faces is further detailed in FIGS. 7A-7C. One such advantageous criterion includes a determination that the media item is capturing the event from an elevated location. Another criterion includes a determination that the media item has an unobstructed view of the event. As such, the criteria can include one or more rules specifying a beneficial range of one or more parameters in a context vector. Additionally, these criteria can be determined based on one or more algorithms associated with determining views based on multiple UEs 101 capturing media items during an event. For example, in one scenario, the criteria include parameters associated with panning of a UE 101 while capturing an associated media item. For example, it may be undesirable to watch a media item with a large amount of minor panning, but it may be desirable to watch media items that change focus to a particular sub-event occurring during an event.
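A concrete, purely illustrative reading of this selection step is a filter over per-item metadata: each criterion — minimum quality, elevated vantage, unobstructed view — becomes a predicate on the media item and its context vector. All field names and thresholds here are assumptions:

```python
def passes_criteria(item, min_height_px=720, min_altitude_m=None,
                    require_unobstructed=True):
    """Return True if a media item satisfies the selection criteria.

    `item` is assumed to carry a (width, height) resolution, a context
    dict with the altimeter reading, and an obstruction flag derived
    from range-sensor data.
    """
    if item["resolution"][1] < min_height_px:        # quality threshold
        return False
    if min_altitude_m is not None and item["context"]["altitude_m"] < min_altitude_m:
        return False                                 # not an elevated vantage
    if require_unobstructed and item.get("obstructed", False):
        return False                                 # view blocked per range data
    return True
```

Updated criteria (e.g., from user input) would simply change the arguments on the next pass over the stored items.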
As noted above, the panning can be detected via the context vectors (e.g., based on the magnetometer module 205 and/or accelerometer module 207). The associated magnetometer information and/or accelerometer information can be analyzed for detecting the panning. One or more panning events can be detected from the context vectors (e.g., based on criteria and/or patterns).

[0053] Moreover, the criteria can be changed based on user input. For example, if a user is viewing the compilation, the viewer may be provided one or more options to specify criteria. In this example, the compilation can be augmented based on the specified criteria. In one scenario, the user is provided the compilation on a touch screen device. The user may input additional criteria using the touch screen. For example, if the user is interested in a particular view of the event from another angle, this can be selected as a criterion. Additionally or alternatively, the user may be provided options for specifying criteria that the user is interested in those views. When selected, the preference can be added as a criterion. As such, if a media item includes such a view, the media item will receive preference for viewing. If no such media items exist, then the closest view and/or angle to the selection can receive preference for viewing.

[0054] In certain embodiments, the criteria can be utilized to determine a score to decide what segment from what media item to incorporate in a compilation. The scoring mechanism can be based on corresponding segments of media items and is further detailed in FIGS. 4 and 8.

[0055] At step 307, the media platform 103 generates a compilation of at least a portion of the media items based, at least in part, on the context vectors. For example, multiple media items can be selected for incorporation in the compilation based on the one or more criteria.
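The panning analysis described above — distinguishing distracting minor jitter from a deliberate swing toward a sub-event — could be sketched over time-stamped magnetometer headings from the context vectors. The sample format and both thresholds are illustrative assumptions:

```python
def detect_pans(headings, jitter_deg=3.0, pan_deg=20.0):
    """Find deliberate pans in a list of (time, heading_deg) samples.

    Heading changes at or below jitter_deg are treated as hand shake;
    a run of larger changes whose total swing reaches pan_deg is
    reported as (start_time, end_time, total_swing_deg).
    """
    pans, start, swing = [], None, 0.0
    for i in range(1, len(headings)):
        # wrap-aware heading difference, mapped into (-180, 180]
        delta = (headings[i][1] - headings[i - 1][1] + 180) % 360 - 180
        if abs(delta) > jitter_deg:
            if start is None:
                start = headings[i - 1][0]
            swing += delta
        else:
            if start is not None and abs(swing) >= pan_deg:
                pans.append((start, headings[i - 1][0], swing))
            start, swing = None, 0.0
    if start is not None and abs(swing) >= pan_deg:
        pans.append((start, headings[-1][0], swing))
    return pans
```

Runs that never reach `pan_deg` are discarded, which is one way to score down the "large amount of minor panning" case while keeping deliberate refocusing toward a sub-event.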
The incorporation can be further based on one or more events, sub-events, and/or focal points (e.g., regions of interest) of the media items based, at least in part, on the respective context vectors. In one example, a sub-event can be detected based on a detected panning and/or focus change in one or more media items associated with an event. When this occurs, the compilation can include a view from a media item focused on the sub-event. In another example, the compilation can be based, at least in part, on a viewing of one media item. Additionally or alternatively, one or more media items can be incorporated at the same time. For example, multiple views can be incorporated in the compilation for the same time segment. In this manner, a view focused on a sub-event can be presented simultaneously with a view focused on another portion of the event. The compilation incorporation is further detailed in FIGS. 4 and 8.

[0056] The media platform 103 distributes the compilation of media items (step 309). In certain scenarios, the distribution is determined by the control logic to be provided via a web portal. For example, the media platform 103 may host a web page and/or may determine to post the media items and/or compilation on another web portal (e.g., via a social networking hosting site). Further, the compilation can be based, at least in part, on a media item that a user is viewing.

[0057] For example, the user of a UE 101 selects a media item to view. The media platform 103 presents the media item and generates a director's cut based, at least in part, on the media item. This director's cut can be presented to the user as a selectable option. As such, the media platform 103 searches the context data database 113 for media items that include context vectors associated with an event associated with the presented media item. The director's cut can then select criteria based, at least in part, on the media item (step 305) and generate the compilation (step 307).
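The scoring-and-selection mechanism described above might, in outline, score each media item per time segment from its context vector (and, where available, user ratings) and keep the best item for each segment. The weights, signal names, and normalizations below are assumptions, not the application's actual algorithm:

```python
def score_item(ctx, rating=0.0, w_elev=0.4, w_stable=0.4, w_rating=0.2):
    """Score one media item for one segment from context-derived signals."""
    elevated = min(ctx.get("altitude_m", 0.0) / 50.0, 1.0)           # favor elevated vantage
    stable = 1.0 - min(ctx.get("pan_rate_deg_s", 0.0) / 30.0, 1.0)   # penalize restless panning
    return w_elev * elevated + w_stable * stable + w_rating * rating

def assemble_compilation(candidates):
    """candidates: {segment_index: {item_id: score}} -> ordered (segment, item) cut list."""
    return [(seg, max(items, key=items.get)) for seg, items in sorted(candidates.items())]
```

A director's cut is then just the per-segment argmax; simultaneous sub-event views could be supported by emitting more than one item per segment.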
Then, the compilation can be transmitted to the UE 101 of the user for presentation (step 309). This compilation can be presented as a stream and/or media files. As previously noted, the user can be presented with one or more options to modify the compilation (e.g., by selecting additional criteria). In certain scenarios, the media items can have associated user ratings (e.g., scores). These scores may additionally be utilized in determining which media items to utilize in the compilation (e.g., the best scores can be weighted with a preference for being included in the compilation).

[0058] FIG. 4 is a diagram of utilizing a media platform to generate a compilation of media items, according to one embodiment. In this embodiment, users utilize UEs 101a-101n at an event 401 (e.g., a soccer event). At the event, media items as well as associated context vectors are sent from the UEs 101 to the media platform 103. The media platform 103 can store this information in the context data database 113 and/or media data database 115. In certain embodiments, the context data database 113 can be populated utilizing one or more scripts to import serialized context data. The serialization helps facilitate access of the context vectors.

[0059] During the event 401, or at a later time, the media platform 103 can output a compilation (e.g., a director's cut) based on the context vectors, media items, and criteria stored in a criteria data database 117. As previously noted, sensor data can be stored as context vectors and may be stored in any format, such as RDF/XML files, similar suitable formats, or proprietary formats. The UE 101 clock can be synchronized via a common shared clock between all of the UEs 101 recording content that is used for automatic video editing at the media platform 103. The common shared clock may be a GPS clock, wall clock time, or any suitable clock that is accurate and stable. For example, a cellular UE 101 can be synchronized based on a network clock.
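Where UEs are synchronized to different carrier clocks, the per-carrier offsets mentioned earlier could be applied as readings arrive, shifting every timestamp onto the common shared clock. The record schema and the source of the offsets are illustrative assumptions:

```python
def normalize_timestamps(readings, carrier_offsets_s):
    """Map sensor/media timestamps onto a common shared clock.

    Each reading carries the carrier whose clock stamped it;
    carrier_offsets_s holds the measured offset of that carrier's
    clock from the reference clock, in seconds.
    """
    return [
        {**r, "t": r["t"] + carrier_offsets_s.get(r["carrier"], 0.0)}
        for r in readings
    ]
```

Unknown carriers fall back to a zero offset, leaving their timestamps untouched until an offset has been measured (e.g., via the audio-based content processing the application mentions).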
Also, a software implementation may record accurate timestamps for each sensor data reading in order to correctly interpret the actions of the content capturer (e.g., panning, tilt, etc.). The sensor data from the compass, accelerometer, altimeter, etc. can be transferred either in real time, in a non-real-time fashion, or at any convenient time to the media platform 103 along with the captured media items.

[0060] The media platform 103 can analyze 403 the media items and context data to determine one or more events asso-
