Utility Pole Geotagger

United States Patent                    (10) Patent No.:      US 8,818,031 B1
Kelly et al.                            (45) Date of Patent:  Aug. 26, 2014

(54) UTILITY POLE GEOTAGGER

(75) Inventors: James Fintan Kelly, Mountain View, CA (US), et al.

(73) Assignee: Google Inc., Mountain View, CA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b).

(21) Appl. No.: 13/410,657

(22) Filed: Mar. 2, 2012

(58) Field of Classification Search: see application file for complete search history.

(74) Attorney, Agent, or Firm: Lerner, David, Littenberg, Krumholz & Mentlik, LLP

(57) ABSTRACT

Aspects and embodiments provide a method for determining the geographic location of an object, the method comprising retrieving, at a vision system, a plurality of images from a database, each image related to a respective vantage point; detecting, with the vision system, the object in at least two of the plurality of images; generating a vector in relation to the respective vantage point for each image in which the object was detected; and triangulating, based on the intersection of at least two vectors, the geographic location of the object.

20 Claims, 11 Drawing Sheets

[Drawing sheets 1-11. FIG. 1 (flow chart 100): Start Process (102); Retrieve Street View Images (104); Identify Presence of Specific Object Within Images (106); Define Object Detection Vectors (108); Combine Vectors from Multiple Images (110); Triangulate Utility Pole Positions (112); Run Clustering Algorithm (114); Additional Filtering (116); Output Utility Pole Locations (118). FIG. 7 (flow chart 700): Receive Street View Images (702); Break Down Image into Pixels (704); Apply Line Detector to Each Pixel at Different Orientations (706); Encode Pixel as Color Based on Orientation Response (708); Add Color Thresholding (710); Apply Three Column Filter (712); Threshold Response to Obtain Pole Hypotheses (714); Derive Horizontal Value (716). FIGS. 2-6 and 8-11 are images and block diagrams described in the Brief Description of Drawings below.]

UTILITY POLE GEOTAGGER

BACKGROUND OF THE DISCLOSURE
1. Field of the Invention

Aspects of the present invention relate to a system and method for determining the geographic location of a specific object.

2. Discussion of Related Art

Utility companies and municipal organizations often require the geographic location of specific objects when designing their utility networks and/or construction plans. For example, a utility company installing a fiber optic network may need to know the location of preexisting utility poles within a given area, a street planner may want to know the location of stop signs within a certain area, and a water company may want to know the location of fire hydrants within a certain area. One current method for identifying the location of specific objects (utility poles, stop signs, fire hydrants, etc.) within an area is to have an individual manually inspect the area for the specific object and record the geographic locations of any identified objects. Another current method for identifying the geographic location of specific objects is to review aerial or satellite images of the area for the specific objects.

SUMMARY

Aspects in accord with the present invention are directed to a method for determining the geographic location of an object, the method comprising retrieving, at a vision system, a plurality of images from a database, each image related to a respective vantage point, detecting, with the vision system, the object in at least two of the plurality of images, generating a vector in relation to the respective vantage point for each image in which the object was detected, and triangulating, based on the intersection of at least two vectors, the geographic location of the object.

According to one embodiment, the method further comprises combining, in a single representation, vectors from each image in which the object was detected. In one embodiment, the method further comprises outputting to an interface a representation of the geographic location of the object.

According to another embodiment, the method further comprises filtering the geographic location of the object. In one embodiment, filtering the geographic location of the object comprises clustering intersections of vectors into groups based on proximity, identifying a first group of vectors containing the highest number of intersections, disregarding vectors included within the first group, identifying, in response to the disregarding, a second group of vectors which no longer contains intersections, and eliminating, in response to the identifying, vectors included within the second group. In another embodiment, filtering the geographic location of the object comprises confirming the presence of the object in reference to aerial or satellite imagery. In one embodiment, filtering the geographic location of the object comprises confirming the presence of the object in reference to point cloud data of the geographic location.

According to one embodiment, generating a vector comprises calculating a horizontal position for each image in which the object was detected, and associating each horizontal position with a rotational angle in relation to the respective vantage point of the related image. In another embodiment, detecting comprises breaking down each image into a plurality of pixels, and individually analyzing each pixel for a structure representative of the object.

According to another embodiment, individually analyzing comprises applying a single line detector to each one of the plurality of pixels at a plurality of different orientations and recording each pixel's response to the line detector at each one of the plurality of different orientations.
In one embodiment, the method further comprises encoding each pixel with a color indicative of whether the pixel's response to the line detector signifies the presence of the structure representative of the object.

According to one embodiment, calculating the horizontal position comprises filtering the encoded pixels to identify a grouping of pixels indicative of the presence of the structure representative of the object, and calculating an average horizontal position of the grouping of pixels.

Another aspect in accord with the present invention is directed to a system for determining the geographic location of an object, the system comprising an interface configured to receive a defined geographic area from a user, a database configured to store a plurality of images in relation to the defined geographic area, a vision system configured to receive the plurality of images from the database and identify the presence of the object within any of the plurality of images, and a processor configured to generate a vector for each image in which the object was detected and triangulate, based on the intersection of at least two vectors, the geographic location of the object.

According to one embodiment, the system further comprises an output display configured to provide the geographic location of the object to the user. In another embodiment, the vision system comprises a single line detector and an encoder, wherein the vision system is configured to break down each one of the plurality of images into a plurality of pixels and apply the single line detector to each one of the plurality of pixels at a plurality of different orientations, and wherein the encoder is configured to encode each pixel with a color indicative of whether the pixel's response to the line detector signifies the presence of a structure representative of the object.

According to another embodiment, in generating the vector for each image, the processor is further configured to calculate a horizontal position for each image in which the object was detected, and associate each horizontal position with a rotational angle in reference to the related image.

Aspects in accord with the present invention are also directed to a computer readable medium comprising computer-executable instructions that, when executed on a processor, perform a method for determining the geographic location of an object, the method comprising acts of retrieving, at a vision system, a plurality of images from a database, each image related to a respective vantage point, detecting, with the vision system, the object in at least two of the plurality of images, generating a vector in relation to the respective vantage point for each image in which the object was detected, and triangulating, based on the intersection of at least two vectors, the geographic location of the object.

According to one embodiment, generating a vector comprises calculating a horizontal position for each image in which the object was detected, and associating each horizontal position with a rotational angle in relation to the respective vantage point of the related image. In another embodiment, detecting comprises breaking down each image into a plurality of pixels, applying a single line detector to each one of the plurality of pixels at a plurality of different orientations, and recording each pixel's response to the line detector at each one of the plurality of different orientations. In one embodiment, the method further comprises encoding each pixel with a representation indicative of whether the pixel's response to the line detector signifies the presence of a structure representative of the object.
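By way of a non-limiting illustration, the triangulation act recited above may be sketched as follows, assuming vantage points already projected into planar (e.g., Mercator) coordinates and bearings measured clockwise from north; the function names and the compass convention are illustrative assumptions, not part of the claims.

```python
import numpy as np

def bearing_vector(heading_deg, rotation_deg):
    """Unit direction (x east, y north) for a detection at rotation_deg
    about a camera whose heading is heading_deg clockwise from north."""
    theta = np.radians(heading_deg + rotation_deg)
    return np.array([np.sin(theta), np.cos(theta)])

def triangulate(p1, d1, p2, d2):
    """Intersect rays p1 + t*d1 and p2 + s*d2 (t, s >= 0).
    Returns the intersection point, or None for parallel or
    behind-camera geometry."""
    a = np.column_stack((d1, -d2))
    if abs(np.linalg.det(a)) < 1e-9:
        return None                      # (nearly) parallel: no stable fix
    t, s = np.linalg.solve(a, p2 - p1)
    if t < 0 or s < 0:
        return None                      # crossing lies behind a vantage point
    return p1 + t * d1

# Example: two cameras 30 m apart, both heading north, each sighting the
# same pole 45 degrees to one side; the pole triangulates to (15, 15).
pole = triangulate(np.array([0.0, 0.0]), bearing_vector(0, 45),
                   np.array([30.0, 0.0]), bearing_vector(0, -45))
```

Treating the rays as finite, forward-pointing segments mirrors the intersection test described in the detailed description below.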
BRIEF DESCRIPTION OF DRAWINGS

The accompanying drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various FIGs. is represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. In the drawings:

FIG. 1 is a flow chart of a process for determining the geographic location of a specific object in accordance with one embodiment of the present invention;

FIG. 2 is a split view image illustrating a street view image of a street and a graphical representation of the street in accordance with one embodiment of the present invention;

FIG. 3 is a split view image illustrating a street view image of a street and a graphical representation of the street including an object detection vector in accordance with one embodiment of the present invention;

FIGS. 4A and 4B are split view images illustrating a street view image of a street and a graphical representation of the street including combinations of object detection vectors in accordance with one embodiment of the present invention;

FIG. 5 is a graphical representation of a clustering algorithm in accordance with one embodiment of the present invention;

FIG. 6 illustrates an output map of utility pole locations in accordance with one embodiment of the present invention;

FIG. 7 is a flow chart of a process for operating a vision system in accordance with one embodiment of the present invention;

FIG. 8 is a split view image illustrating a street view image of a street and a color encoded image of the street in accordance with one embodiment of the present invention;

FIG. 9 is a split view image illustrating a street view image of a street, a color encoded image of the street, and a filtered image of the street in accordance with one embodiment of the present invention;

FIG. 10 is a block diagram of a general-purpose computer system upon which various embodiments of the invention may be implemented; and

FIG. 11 is a block diagram of a computer data storage system with which various embodiments of the invention may be practiced.

DETAILED DESCRIPTION

Embodiments of the invention are not limited to the details of construction and the arrangement of components set forth in the following description or illustrated in the drawings. Embodiments of the invention are capable of being practiced or of being carried out in various ways. Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.
The use of "including," "comprising," "having," "containing," "involving," and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.

As described above, utility and construction companies often desire to learn the geographic location of specific objects (utility poles, stop signs, fire hydrants, etc.) within a given area. However, the current methods of either visually inspecting the area in person or reviewing aerial or satellite imagery of the area can prove to be slow and inaccurate. For example, an individual visually inspecting a given area for specific objects may be relying on incomplete or inaccurate documentation, may inaccurately record the location of the objects, may unintentionally miss a portion of the given area, and/or may take a long time to do a thorough inspection of the area. In another example, the review of aerial or satellite imagery may also prove ineffective, as the specific objects may be difficult to identify from above. For instance, utility poles may appear as small dots in a satellite image and may be very difficult for a user to identify. In addition, poor resolution of aerial or satellite imagery may prevent an accurate review of the imagery.

As such, embodiments described herein provide a more efficient and accurate system and method for determining the geographic location of an object by analyzing a plurality of street view images, identifying the presence of objects within any of the plurality of images, and, based on the identified objects, determining the geographic location of a specific object.

FIG. 1 is a flow chart 100 of a process for determining the geographic location of a specific object (e.g., a utility pole). At block 102, the process is initiated by a user wishing to identify the geographic location of utility poles within a defined area. According to one embodiment, a user selects the defined area by using an interface (e.g., a mouse, keyboard, or touchscreen) to select a portion of a map displayed on the output display of a computer. For instance, in one example, the user operates a computer mouse to drag a box over a portion of the displayed map, thereby indicating the desired area within which the user wishes to identify the presence of utility poles. In other embodiments, a user selects the defined area by entering street names or GPS coordinates into an interface of a computer.

At block 104, a plurality of street view images, each one previously recorded on a street located within the area selected by the user, is retrieved from a database. According to one embodiment, the database is located locally to the user; however, in another embodiment, the database is located at a different location than the user and the user communicates with the database over an external network (e.g., via the internet).

Each one of the retrieved street view images is a street level image (e.g., a ground level view) of the street (and adjacent area) on which the camera recording the image is located. According to one embodiment, the street view images are 360 degree panoramas of the street and areas immediately adjacent to the street on which the camera is recording the images. For example, in one embodiment, street view images may be recorded by a camera located on a vehicle. As the vehicle traverses a street within the defined area, the camera located on the vehicle automatically records images of the street and adjacent areas. The images are uploaded to the database for later image processing and analysis. According to one embodiment, in addition to the street view images, geographic position and orientation information of the vehicle and camera (e.g., obtained from a Global Positioning System (GPS) within the vehicle) is also associated with each street view image and uploaded to the database.
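As a minimal sketch, the per-image record described above might be organized as follows; the field names and the rectangular area query are illustrative assumptions rather than the schema actually employed.

```python
from dataclasses import dataclass

@dataclass
class StreetViewPano:
    """One 360 degree panorama plus the capture metadata the process uses
    (block 104): the vehicle's GPS position and camera orientation.
    Field names are illustrative, not the patent's schema."""
    image_id: str
    latitude: float
    longitude: float
    heading_deg: float  # camera/vehicle heading, degrees clockwise from north

def panos_in_area(panos, lat_min, lat_max, lon_min, lon_max):
    # Select panoramas recorded inside the user-defined rectangle (block 102).
    return [p for p in panos
            if lat_min <= p.latitude <= lat_max
            and lon_min <= p.longitude <= lon_max]
```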
For example, FIG. 2 is a split view image illustrating a street view image 200 of the street 204 on which the vehicle 203 (and hence camera) is located and a graphical representation 202 of the street 204 on which the vehicle 203 (camera) is currently located. The camera records images 200 of the street 204 and areas 206 adjacent to (i.e., on each side of) the street 204 and creates a 360 degree panorama image 200 of the street 204 and the areas 206 adjacent to the street 204.

At block 106, analysis of the retrieved plurality of street view images is begun. Each one of the plurality of street view images is analyzed separately by a vision system to determine if at least one of the specified objects (e.g., a utility pole) is identified in the image. A more detailed description of the vision system is provided below in reference to FIGS. 7-9. Upon identifying the presence of at least one utility pole in a street view image, the vision system calculates a horizontal position within the image space for each detected utility pole. Due to the 360 degree nature of the panorama street view image, the horizontal position corresponds to a rotation angle about the origin of the vehicle (i.e., the camera).

At block 108, using the horizontal position (i.e., rotation angle) information of each utility pole identified in the street view images, in addition to the position and orientation information of the vehicle associated with each street view image, an object detection vector is generated for each identified utility pole in relation to the camera.

For instance, FIG. 3 is a split view image illustrating the street view image 200 of the street 204 on which the vehicle 203 (and hence camera) is located and the corresponding graphical representation 202 of the street 204 on which the vehicle 203 (camera) is currently located. According to one example, upon analyzing the street view image 200 for utility poles, the vision system identifies a utility pole represented by vertical line 300. The utility pole 300 is associated with a horizontal position within the image space of the street view image 200. The horizontal position corresponds to a rotation angle about the origin of the camera 203. Based on the rotation angle and the position and orientation information of the camera 203, an object detection vector 302, identifying the location of the utility pole in relation to the camera 203, is generated. According to one embodiment, Mercator coordinates are utilized in the computation of the object detection vectors; however, any other known coordinate system may be used.
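A minimal sketch of the block 108 conversion follows, assuming a full 360 degree equirectangular panorama whose horizontal pixel position maps linearly to a rotation angle about the camera, and a vantage point already projected to planar x/y (e.g., Mercator) coordinates; the 50 meter segment length is an illustrative assumption.

```python
import numpy as np

def detection_vector(cam_xy, heading_deg, x_pixel, image_width, length_m=50.0):
    """Block 108: convert a detection's horizontal pixel position in a full
    360 degree equirectangular panorama into an object detection vector,
    i.e. a finite ray anchored at the camera position.

    cam_xy       -- camera position in planar (e.g., Mercator) coordinates
    heading_deg  -- camera heading, degrees clockwise from north
    x_pixel      -- horizontal position of the detected pole in the panorama
    """
    rotation_deg = (x_pixel / image_width) * 360.0   # pixel column -> rotation angle
    theta = np.radians(heading_deg + rotation_deg)
    direction = np.array([np.sin(theta), np.cos(theta)])  # x east, y north
    origin = np.asarray(cam_xy, dtype=float)
    return origin, origin + length_m * direction     # treated as a finite segment
```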
At block 110, the individually generated object detection vectors from each of the plurality of street view images are combined into a single representation. For example, FIGS. 4A and 4B are split view images illustrating street view images, 400 and 410 respectively, taken at different locations along a street 402, and corresponding graphical representations, 411 and 413 respectively. Each street view image 400, 410 is analyzed, as discussed above, for utility poles and appropriate object detection vectors are generated. In addition to the object detection vectors generated in relation to the associated street view image, each graphical representation 411, 413 also includes object detection vectors previously generated in relation to other street view images. For example, as seen in FIG. 4A, object detection vectors 404, 406 are generated in response to the identification of utility poles 405, 407 in street view image 400, as discussed above. The object detection vectors 404, 406 are combined with an object detection vector 401, previously generated during the analysis of another street view image. As seen in FIG. 4B, object detection vectors 412, 414 are generated in response to the identification of utility poles 405, 407 in street view image 410, as discussed above. The object detection vectors 412, 414 are combined with the object detection vectors 404, 406 associated with street view image 400 and the object detection vector 401, previously generated during the analysis of another street view image. According to one embodiment, all of the object detection vectors generated from each of the plurality of retrieved street view images are combined into a single representation.

At block 112, any intersections of object detection vectors (treated as finite line segments) are analyzed to triangulate utility pole positions. For example, as seen in FIG. 4A, the intersection of the object detection vectors 404 and 401 indicates that the utility pole 405 is located at point 415. As seen in FIG. 4B, the intersection of the object detection vectors 412, 404 and 401 indicates that the utility pole 405 is located at point 415. The intersection of the object detection vectors 406, 414 indicates that the utility pole 407 is located at point 417.

After computing the different intersection points in relation to the generated object detection vectors, some intersection points may be redundant as a result of multiple pole detections occurring for the same pole. In addition, the intersection points may also contain "false" pole locations. Therefore, at block 114, an iterative clustering algorithm is performed to identify "true" pole locations. One example of the clustering algorithm can be seen in FIG. 5.

FIG. 5 is a graphical representation 500 of a plurality of object detection vectors 502 generated as a result of analyzing a plurality of retrieved street view images, as discussed above. The plurality of object detection vectors 502 forms a variety of intersection points (i.e., indicating the potential presence of utility poles). The clustering algorithm groups the intersection points based on their proximity to each other and iteratively analyzes each cluster to identify which clusters should be dismissed as false and which clusters should be labeled as legitimate.

For example, with regard to FIG. 5, the clustering algorithm groups together all intersections within a proximity threshold. As shown in FIG. 5, this results in four clusters (e.g., cluster one 504, cluster two 506, cluster three 508 and cluster four 510). The algorithm then determines which cluster is the most legitimate by looking at the number of intersections contained in the cluster. For instance, as shown in FIG. 5, cluster one 504 is identified as the most legitimate as it contains five detection vectors 502 passing through it. Upon identifying cluster one 504 as the most legitimate, the five detection vectors 502 that contributed to intersections within cluster one 504 are disregarded, as they are confirmed as associated with cluster one 504. Thus, cluster three 508, which relied on one of the disregarded vectors to form its cluster, is dismissed, as without the disregarded vector, the cluster no longer exists. The algorithm next iteratively analyzes the remaining two clusters (cluster two 506 and cluster four 510) to determine which one of the remaining clusters is the most legitimate. As cluster two 506 has the most intersections, it is labeled as legitimate, and its detection vectors are disregarded. Similarly, as with cluster three 508, with the removal of the vectors associated with cluster two 506, cluster four 510 is dismissed as it no longer contains any intersections. As no clusters remain to analyze, the clustering algorithm has identified cluster one 504 and cluster two 506 as confirmed utility pole locations.
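A minimal sketch of this iterative selection follows, assuming intersections have already been grouped by the proximity threshold; the data layout (each intersection recorded with the pair of vector identifiers that produced it) is an illustrative assumption.

```python
def select_pole_clusters(clusters):
    """Greedy pass over proximity clusters of vector intersections.
    Each cluster is a list of intersections; each intersection is a pair
    (frozenset of the two vector ids that crossed, intersection point).
    Repeatedly keep the cluster with the most intersections, retire its
    vectors, and dismiss any cluster left with no surviving intersections."""
    confirmed, retired = [], set()
    live = list(clusters)
    while live:
        # Drop intersections that depend on a retired vector, then dismiss
        # clusters that no longer contain any intersections.
        live = [[(vs, pt) for vs, pt in c if not vs & retired] for c in live]
        live = [c for c in live if c]
        if not live:
            break
        best = max(live, key=len)          # most intersections wins
        confirmed.append(best)
        retired |= set().union(*(vs for vs, _ in best))
        live.remove(best)
    return confirmed
```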
At block 116, additional filters may be applied to the confirmed utility pole locations to further improve the accuracy of the utility pole location data. In one embodiment, the confirmed utility pole locations are matched against satellite or aerial imagery to confirm the location of a utility pole. In another embodiment, a human operator reviews the confirmed utility pole locations (either by reviewing the street view images or the street itself) to ensure the accuracy of the utility pole locations. In other embodiments, any other type of appropriate quality assurance measures may be utilized (e.g., crowd sourcing).

In another embodiment, in addition to the visual imagery (i.e., the street view images), additional data recorded by the vehicle may be utilized to further confirm the location of a utility pole. For example, laser point cloud data may be gathered by the vehicle as it traverses a street. In one embodiment, the laser point cloud data may include distance information, such as how far an object is away from the street or the distance from the car to an object. This data may be useful in making the object detection vectors more accurate. This data may also be useful in confirming the presence of a desired object. For example, if an identified object is located too far from the street, it may be rejected as a potential utility pole. In this way, additional "false" utility pole locations may be further eliminated.

At block 118, either directly following the clustering algorithm or after any additional filtering, confirmed utility pole locations are output to the user. As illustrated in FIG. 6, a map 600 of the area originally defined by the user may be provided to the user. The map 600 includes identifying markers 602, each representing the location of an identified utility pole. Utilizing the accurate positions of the utility poles identified on the map 600, a user may be able to more efficiently and accurately design a utility network or construction project within the defined area.

According to one embodiment, a list of utility pole locations (e.g., a list of street addresses or GPS coordinates) may also be provided to the user. In another embodiment, in addition to the utility pole location data, a confidence number related to each utility pole location may also be provided to the user. The confidence number may indicate to the user how likely it is that the utility pole location is accurate. If a utility pole location was confirmed via clustering, laser point cloud data, and aerial imagery, it will have a higher confidence number than a utility pole location that was only confirmed through clustering.
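A minimal sketch of such a distance filter and confidence count follows, assuming each candidate carries boolean confirmation flags and an optional laser-derived distance from the street; the 15 meter threshold and all field names are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PoleCandidate:
    # Illustrative record for a triangulated pole; not the patent's schema.
    x: float
    y: float
    distance_from_street_m: Optional[float]
    confirmed_by_clustering: bool = True
    confirmed_by_point_cloud: bool = False
    confirmed_by_aerial: bool = False

def filter_and_score(pole, max_street_distance_m=15.0):
    """Reject candidates the laser point cloud places too far from the
    street (block 116), then count independent confirmations as a simple
    confidence number (block 118)."""
    if (pole.distance_from_street_m is not None
            and pole.distance_from_street_m > max_street_distance_m):
        return None  # too far from the street to be a utility pole
    confidence = sum([pole.confirmed_by_clustering,
                      pole.confirmed_by_point_cloud,
                      pole.confirmed_by_aerial])
    return pole, confidence
```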
As described above, each one of the retrieved plurality of street view images is analyzed separately by a vision system to determine if at least one of the specified objects (e.g., a utility pole) is identified in the image. The operation of the vision system is now described in relation to FIGS. 7-9.

FIG. 7 is a flow chart 700 of a process for operating a vision system to analyze a street view image in order to determine if at least one of a specified object (e.g., a utility pole) is identified in the image.

At block 702, the vision system receives the plurality of street view images retrieved from the database. The vision system treats each image as an individual vision problem and, at block 704, breaks down each image into individual pixels in order to identify specific features (e.g., vertical lines correlating to utility poles) within each image.

A common method for identifying features within an image includes using a single line detector (e.g., a vertical line detector). Such detectors respond strongly when an analyzed pixel matches the desired orientation. For example, a vertical line detector will respond strongest at a pixel that has strong vertical structure above and below it. Vertical structure in this case may be defined by the length in pixels one can travel vertically from the given pixel such that the difference between one pixel and the next is within a specified error threshold. In effect, a vertical line detector is looking for vertical consistency.

However, in a situation where the identification of a specific object, such as a utility pole, is desired, a single vertical line detector may not be useful. Because the vertical line detector analyzes each pixel for vertical consistency, it would respond the same way to a horizontal wall as it would to a utility pole (i.e., both would have vertical consistency).

Therefore, embodiments described herein provide a detector capable of responding strongly to pixels having vertical structure, poorly to horizontal structure, and somewhere in between for diagonal structures.

At block 706, the vision system applies a line detector to each pixel at multiple orientations. For example, according to one embodiment, the vision system rotates each pixel sixteen times and records the response of the line detector to the pixel for each different orientation. In another embodiment, more or fewer than sixteen different orientations may be used. In another embodiment, rather than rotating the pixel, a plurality of line detectors, each oriented differently, may be used.

In rotating the pixel and tracking the line detector's response at each different orientation, the vision system is able to determine which pixels are likely to be part of a vertical structure, such as a utility pole. For example, the vision system identifies which pixels respond strongest to a vertical orientation while also responding poorest to a horizontal orientation.

At block 708, each pixel is encoded with a representation based on how the pixel responds to each orientation. For example, in one embodiment, each pixel is encoded as a color based on how the pixel responds to each orientation. Pixels that respond more strongly to vertical structure are encoded with a color closer to one end of the color spectrum and pixels that respond more strongly to horizontal structure are encoded with a color closer to the opposite end of the spectrum. For example, according to one embodiment, pixels that respond strongest to vertical structure are encoded black, pixels that respond strongest to horizontal structure are encoded white, and pixels that correspond to a structure somewhere in between (e.g., a diagonal structure) are encoded as an appropriate shade of grey depending on whether the pixel is closer to being vertical (i.e., a darker shade of grey) or horizontal (i.e., a lighter shade of grey). As described herein, each pixel is encoded with a representative color; however, in other embodiments, any type of representation indicating how the pixel responded to the desired orientation (e.g., a number scale or ranking) may be used.
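A minimal sketch of blocks 706 and 708 follows, implemented here with a bank of oriented kernels applied by convolution rather than by rotating pixels; using along-line variance as the "consistency" response and the particular grey-level mapping are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def line_kernel(theta, length=9):
    # Unit-mass kernel sampling pixels along a line at angle theta.
    kern = np.zeros((length, length))
    c = length // 2
    for t in range(-c, c + 1):
        kern[c + int(round(t * np.cos(theta))),
             c + int(round(t * np.sin(theta)))] = 1.0
    return kern / kern.sum()

def oriented_consistency(gray, n_orientations=16, length=9):
    """Block 706: per-pixel consistency along each orientation, measured as
    negative local variance along a short oriented line; a pixel on a pole
    responds strongly at theta = 0 (vertical) and weakly at 90 degrees."""
    gray = gray.astype(float)
    responses = []
    for k in range(n_orientations):
        kern = line_kernel(np.pi * k / n_orientations, length)
        mean = ndimage.convolve(gray, kern)
        mean_sq = ndimage.convolve(gray ** 2, kern)
        responses.append(-(mean_sq - mean ** 2))  # low variance -> high response
    return np.stack(responses)                    # (n_orientations, H, W)

def encode_verticality(responses):
    """Block 708: grey-encode each pixel, black where vertical consistency
    dominates horizontal consistency, white where it does not."""
    score = responses[0] - responses[len(responses) // 2]  # vertical - horizontal
    norm = (score - score.min()) / (np.ptp(score) + 1e-9)
    return ((1.0 - norm) * 255).astype(np.uint8)           # strongest vertical -> black
```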
For example, FIG. 8 is a split view image illustrating a street view image 800 and a corresponding orientation based color encoded image 810. Upon breaking the street view image 800 down into pixels and applying the line detector to each pixel at different orientations, the pixels are color encoded based on their response to the different orientations, resulting in the orientation based color encoded image 810. As can be seen, the encoding of the utility pole 802 in the street view image 800 results in a line of encoded black pixels 812 having strong vertical structure and weak horizontal structure (i.e., a line indicating the presence of the utility pole 802).

At block 710, after generating an orientation based color encoded image, the color encoded image is further enhanced by adding color thresholding to remove pixels that were too green or blue (i.e., clearly not indicative of a utility pole). For example, FIG. 9 is a split view image illustrating a street view image 900 including a utility pole 901. After color encoding each pixel of the street view image 900 in relation to the line detector orientation response and also applying a color threshold, the intermediate image 902 is produced. The intermediate image 902 includes a line of encoded black pixels 903 having strong vertical structure and weak horizontal structure.

At block 712, a three column haar-like feature filter is applied to the intermediate image 902. The filter groups pixels into columns and analyzes three columns at a time. For example, the filter works by subtracting the summed intensity of the pixels in the middle column from the summed intensity of the pixels in the side columns. Thus, an area of strong dark vertical pixels with strong light vertical pixels on either side will respond best.

At block 714, a threshold is applied to the response from the three column filter to obtain pole hypotheses, indicated by the white pixels 906 in the bottom image 904. The line of white pixels 906 indicates a line of pixels with strong vertical structure and weak horizontal structure (i.e., an object such as a utility pole). In other embodiments, any other appropriate methods of thresholding and filtering may be utilized to accurately identify pixels indicating the presence of a desired object (e.g., a utility pole).

At block 716, a single horizontal value is derived from the group of white pixels 906. According to one embodiment, the group of white pixels 906 is clustered to a single value by finding the average horizontal location of the group of white pixels 906. The resulting horizontal value is used, as discussed above, to generate a rotational angle (and vector) corresponding to the identified utility pole.
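A minimal sketch of blocks 712 through 716 follows, assuming a fixed column width and threshold (both illustrative) and, for brevity, collapsing each column over the full image height; the response is the haar-like side-minus-middle difference described above, and surviving columns are averaged to a single horizontal value.

```python
import numpy as np

def pole_hypothesis_x(encoded, col_width=3, threshold=40.0):
    """Blocks 712-716 in miniature on a grey-encoded image (pole = dark):
    a three column haar-like response, a threshold to keep pole
    hypotheses, and an average horizontal pixel position."""
    img = encoded.astype(float)
    h, w = img.shape
    n_cols = w // col_width
    col_sums = img[:, :n_cols * col_width].reshape(h, n_cols, col_width).sum(axis=(0, 2))
    # Haar-like response per column triple: (left + right) - 2 * middle;
    # strongest for a dark middle column flanked by lighter columns.
    response = col_sums[:-2] + col_sums[2:] - 2 * col_sums[1:-1]
    hits = np.where(response / h > threshold)[0] + 1   # middle-column indices
    if hits.size == 0:
        return None
    # Block 716: collapse the hit columns to one horizontal pixel value
    # (a single pole is assumed here; multiple poles would be grouped first).
    return (hits.mean() + 0.5) * col_width
```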
Various embodiments according to the present invention may be implemented on one or more computer systems or other devices capable of automatically identifying an object as described herein. A computer system may be a single computer that may include a minicomputer, a mainframe, a server, a personal computer, or a combination thereof. The computer system may include any type of system capable of performing remote computing operations (e.g., a cell phone, PDA, set-top box, or other system). A computer system used to run the operation may also include any combination of computer system types that cooperate to accomplish system-level tasks. Multiple computer systems may also be used to run the operation. The computer system also may include input or output devices, displays, or storage units. It should be appreciated that any computer system or systems may be used, and the invention is not limited to any number, type, or configuration of computer systems.

These computer systems may be, for example, general-purpose computers such as those based on Intel PENTIUM-type processors, Motorola PowerPC, Sun UltraSPARC, Hewlett-Packard PA-RISC processors, or any other type of processor. It should be appreciated that one or more of any type of computer system may be used to partially or fully automate operation of the described system according to various embodiments of the invention. Further, the system may be located on a single computer or may be distributed among a plurality of computers attached by a communications network.

For example, various aspects of the invention may be implemented as specialized software executing in a general-purpose computer system 1000 such as that shown in FIG. 10. The computer system 1000 may include a processor 1002 connected to one or more memory devices 1004, such as a disk drive, memory, or other device for storing data. Memory 1004 is typically used for storing programs and data during operation of the computer system 1000. Components of computer system 1000 may be coupled by an interconnection mechanism 1006, which may include one or more busses (e.g., between components that are integrated within a same machine) and/or a network (e.g., between components that reside on separate discrete machines). The interconnection mechanism 1006 enables communications (e.g., data, instructions) to be exchanged between system components of system 1000. Computer system 1000 also includes one or more input devices 1008, for example, a keyboard, mouse, trackball, microphone, or touchscreen, and one or more output devices 1010, for example, a printing device, display screen, and/or speaker. In addition, computer system 1000 may contain one or more interfaces (not shown) that connect computer system 1000 to a communication network (in addition to or as an alternative to the interconnection mechanism 1006).

The storage system 1012, shown in greater detail in FIG. 11, typically includes a computer readable and writeable nonvolatile recording medium 1102 in which signals are stored that define a program to be executed by the processor, or information stored on or in the medium 1102 to be processed by the program. The medium may, for example, be a disk or flash memory. Typically, in operation, the processor causes data to be read from the nonvolatile recording medium 1102 into another memory 1104 that allows for faster access to the information by the processor than does the medium 1102. This memory 1104 is typically a volatile, random access memory such as a dynamic random access memory (DRAM) or static memory (SRAM). It may be located in storage system 1012, as shown, or in memory system 1004. The processor 1002 generally manipulates the data within the integrated circuit memory 1004, 1104 and then copies the data to the medium 1102 after processing is completed. A variety of mechanisms are known for managing data movement between the medium 1102 and the integrated circuit memory element 1004, 1104, and the invention is not limited thereto. The invention is not limited to a particular memory system 1004 or storage system 1012.
The computer system may include specially-programmed, special-purpose hardware, for example, an application-specific integrated circuit (ASIC). Aspects of the invention may be implemented in software, hardware or firmware, or any combination thereof. Further, such methods, acts, systems, system elements and components thereof may be implemented as part of the computer system described above or as an independent component.

Although computer system 1000 is shown by way of example as one type of computer system upon which various aspects of the invention may be practiced, it should be appreciated that aspects of the invention are not limited to being implemented on the computer system as shown in FIG. 10. Various aspects of the invention may be practiced on one or more computers having a different architecture or components than that shown in FIG. 10.

Computer system 1000 may be a general-purpose computer system that is programmable using a high-level computer programming language. Computer system 1000 may also be implemented using specially programmed, special purpose hardware. In computer system 1000, processor 1002 is typically a commercially available processor such as the well-known Pentium class processor available from the Intel Corporation. Many other processors are available. Such a processor usually executes an operating system which may be, for example, the Windows 95, Windows 98, Windows NT, Windows 2000 (Windows ME), Windows XP, or Windows Vista operating systems available from the Microsoft Corporation, MAC OS System X available from Apple Computer, the Solaris Operating System available from Sun Microsystems, or UNIX available from various sources. Many other operating systems may be used.

The processor and operating system together define a computer platform for which application programs in high-level programming languages are written. It should be understood that the invention is not limited to a particular computer system platform, processor, operating system, or network. Also, it should be apparent to those skilled in the art that the present invention is not limited to a specific programming language or computer system. Further, it should be appreciated that other appropriate programming languages and other appropriate computer systems could also be used.

One or more portions of the computer system may be
