
An Optimization Based Framework for Pose Estimation of Human Lower Limbs from a Single Image

Priyanshu Agarwal December 12, 2010

Abstract The problem of human pose estimation from a single image is a challenging one in computer vision. Most work dealing with pose estimation from a single image requires user input in the form of a few clicked points on the image. In this work, we propose a method to estimate the initial pose of the human lower limbs from a single image without any user assistance. Our method is based on the fact that if an image can be generated from a 3D model of a human, a metric can be established by comparing this model-generated image with the original image. We use image subtraction as the means to evaluate this metric for measuring the accuracy of the estimated pose, and minimize it using the Method of Multipliers in conjunction with Powell's conjugate direction method.

Keywords: human pose estimation, constrained optimization, single image

Contents

1 Introduction
2 System Overview
 2.1 Human Lower Limb Detection
 2.2 Model Based Silhouette Generation
 2.3 Variables
 2.4 Optimization Based Body Position Estimation
  2.4.1 Problem Statement
  2.4.2 Assumptions
  2.4.3 Z Coordinate Estimation
  2.4.4 X,Y Coordinate Estimation
 2.5 Optimization Based Pose Estimation
  2.5.1 Problem Statement
  2.5.2 Objective Function
  2.5.3 Constraints
  2.5.4 Optimization Problem
  2.5.5 Optimization Problem in Standard Form
3 Optimization Approach
4 Results and Discussion
 4.1 Initial Design Points
 4.2 Optimization Parameters
 4.3 Convergence Criteria
  4.3.1 Augmented Lagrangian Method
  4.3.2 Powell's Conjugate Direction Method
  4.3.3 Golden Section
5 Conclusion
6 Future Work

Appendices

A MATLAB Code for overall system
B MATLAB Code for Augmented Lagrangian Method implemented for Objective 1
C MATLAB Code for Augmented Lagrangian Method implemented for Objective 2
D MATLAB Code for Augmented Lagrangian Method implemented for Objective 3
E MATLAB Code for evaluation of Objective 1
F MATLAB Code for evaluation of Objective 2
G MATLAB Code for evaluation of Objective 3
H MATLAB Code for Powell's Conjugate Direction Method
I MATLAB Code for Golden Section Method
J MATLAB Code for Swann's Bounding Method
K Objective Function Output for different inputs
L The inputs used and output at each stage of the image processing
M The detailed output obtained for images in KTH dataset

1 Introduction

Human pose estimation from images is an active area of research in computer vision, with applications ranging from human-computer interfaces and motion capture for character animation to biometrics and intelligent surveillance [1]. Figure 1 shows the overview of a system for estimating human shape and pose from a single image. The initialization step in the system requires some initial user input for determining the pose. In order to fully automate the system, an automatic initialization of the human pose is a must. In this work we address this problem for images in which the human pose is nearly parallel to the camera plane.

Figure 1: Overview of a system for estimating human shape and pose from a single image [2].

2 System Overview

2.1 Human Lower Limb Detection

We extract the silhouette from the original image by converting it into a binary image. The binary image is then searched for a connected region, and the centroid of this region is evaluated; the centroid roughly lies on the waist of the human. The portion of the region lying above the centroid is then removed to retain the lower limbs. We use standard MATLAB functions for image processing; a condensed sketch of these steps follows.
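The sketch below is a minimal illustration of this step; the 0.2 binarization threshold and the 500-pixel area cutoff are the values used in the full listing in Appendix A, and the file name is only an example:

% Sketch of the lower-limb silhouette extraction (full version in Appendix A).
Ireal = imread('frame194.jpg');              %example input frame
Ibin  = im2bw(Ireal, 0.2);                   %binarize to get the silhouette
Stats = regionprops(Ibin, 'Area', 'Centroid');
human = find([Stats(:).Area] > 500);         %keep the large connected region
yc    = Stats(human).Centroid(2);            %centroid row, roughly the waist
Ibin(1:uint8(yc), :) = 0;                    %remove everything above the waist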

2.2 Model Based Silhouette Generation

We have a VRML lower-limb model which can be positioned and articulated to realize any given pose. In order to generate a comparable silhouette from the VRML model, we first initialize the model with the required input and then capture a synthetic image from it. This image is then converted into a binary image, which provides us with the model-based silhouette.
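A minimal sketch of this step, condensed from the Virtual Reality Toolbox calls used in Appendix A (the world and node names follow the model used in this work; bx, by, bz are assumed to be set):

% Sketch of model-based silhouette generation (full version in Appendix A).
w = vrworld('lowerbodyu2'); open(w);     %load the VRML lower-limb model
c = vr.canvas(w, gcf);                   %canvas the model is rendered into
body = vrnode(w, 'bodycf');
body.translation = [bx by bz];           %position the model
vrdrawnow;
Ivrml = im2bw(capture(c), 0.1);          %synthetic image -> binary silhouette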

2.3 Variables

Figure 2: Variables used in the model. (a) Side view with respect to the camera plane, (b) front view with respect to the camera plane.

Our system consists of 3 position variables and 4 angular variables. The x, y position variables are the location of the human waist in the image coordinate frame. The z position variable is the depth of the human waist with respect to the camera, as shown in Figure 2. The four angular variables are the right hip flexion, right knee flexion, left hip flexion, and left knee flexion.

2.4 Optimization Based Body Position Estimation

We formulate the problem of determining the position of the body as two optimization subproblems.

2.4.1 Problem Statement

Given a single real image with unknown camera parameters, determine the pose of the human lower limbs in the sagittal plane.

2.4.2 Assumptions

We assume the following conditions for the developed framework:

1. Human limbs have roughly the same aspect ratio for all humans.
2. Illumination conditions in the actual image are such that the human silhouette can be easily extracted.
3. The model geometry closely represents the human body geometry. This ensures that the model silhouette area is roughly the same as that of the actual human silhouette, and that the centroids of the two silhouettes lie close to each other in an image. However, even with a slightly different geometry the method will work, as the framework operates by minimizing the difference.
4. The human limbs are oriented in the sagittal plane.
5. The direction of movement of the human is known, so the model is oriented appropriately. This could also be determined using optimization; however, in this work we are interested in determining the actual orientation of the limbs.
6. The human silhouette centroid is such that the area below it represents the human lower-limb silhouette along with the waist.

2.4.3 Z Coordinate Estimation

The z coordinate estimation is based on the fact that the actual body will occupy roughly the same area in the actual image as the model does in the synthetic image. We set up an optimization problem based on the difference between the silhouette areas of the original image and the model-generated image, and minimize the square of this difference. We also set upper and lower bounds on the z coordinate.

Problem Statement Given a single real image with unknown camera parameters, determine the z coordinate of the model, i.e. of the human waist.

Objective Function We choose the objective function to be the square of the difference between the silhouette area in the original image and the model-generated silhouette area in the synthetic image:

$A(b_z) = (A_o - A_m(b_z))^2$    (1)

Constraints We specify the limits on the z coordinate after some experimentation. The model generates a reasonable area in the synthetic image if the z coordinate is bounded within the following limits:

$b_{z,lower} \le b_z \le b_{z,upper}$    (2)

where $b_{z,lower}$ = -30 meters and $b_{z,upper}$ = 30 meters.

Optimization Problem The optimization problem is then set up as follows:

Minimize: $A(b_z) = (A_o - A_m(b_z))^2$    (3)
Subject to: $b_{z,lower} \le b_z \le b_{z,upper}$    (4)
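In code, this objective reduces to a pixel count; a minimal sketch of the evaluation is given below (the full version, which also repositions the model, is Appendix E):

% Sketch of Objective 1: squared difference of silhouette areas.
Areal = sum(sum(Ireal));      %area of the real silhouette (pixels)
Avrml = sum(sum(Ivrml));      %area of the model silhouette (pixels)
A = (Areal - Avrml)^2;        %objective value A(bz)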

2.4.4 X,Y Coordinate Estimation

The estimation of the x, y coordinates is based on the fact that, once the camera depth is about right, the centroid of the silhouette in the original image and that in the model-generated image should roughly coincide. To estimate the x, y coordinates of the body we set up another optimization problem, in which we minimize the distance between the centroid of the original silhouette and that of the model-generated silhouette.

Problem Statement Given a single real image with unknown camera parameters, determine the x, y coordinates of the model, i.e. of the human waist.

Objective Function The objective function for this problem is the square of the distance between the centroid of the original silhouette and that of the model-generated silhouette:

$C(b_x, b_y) = (x_{co} - x_{cm}(b_x, b_y))^2 + (y_{co} - y_{cm}(b_x, b_y))^2$    (5)

Constraints The x, y coordinates can be constrained based on the size of the original image. Let the size of the original image be (m × n). Then x, y must lie within the following limits:

$b_{x,lower} \le b_x \le b_{x,upper}$    (6)
$b_{y,lower} \le b_y \le b_{y,upper}$    (7)

where $b_{x,lower}$ = -45 meters, $b_{x,upper}$ = 45 meters, $b_{y,lower}$ = -45 meters, and $b_{y,upper}$ = 45 meters.

Optimization Problem We now set up an optimization problem based on the above objective function and constraints.

Minimize: $C(b_x, b_y) = (x_{co} - x_{cm}(b_x, b_y))^2 + (y_{co} - y_{cm}(b_x, b_y))^2$    (8)
Subject to: $b_{x,lower} \le b_x \le b_{x,upper}$    (9)
            $b_{y,lower} \le b_y \le b_{y,upper}$    (10)
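A minimal sketch of this objective, assuming each binary image contains a single dominant region (the full version is Appendix F):

% Sketch of Objective 2: squared distance between silhouette centroids.
So = regionprops(Ireal, 'Centroid');   %centroid of the real silhouette
Sm = regionprops(Ivrml, 'Centroid');   %centroid of the model silhouette
co = So(1).Centroid;
cm = Sm(1).Centroid;
C = (co(1)-cm(1))^2 + (co(2)-cm(2))^2; %objective value C(bx,by)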

2.5 Optimization Based Pose Estimation

This is the primary problem addressed in this work. The basic idea is that if the limbs are oriented exactly as in the original image, the difference between the original image and the model-generated image should be least. Here we estimate the joint angles of the two legs by setting up an optimization problem.

2.5.1 Problem Statement

Given a single real image with unknown camera parameters, determine the pose of the human lower limbs in the sagittal plane.

2.5.2 Objective Function

The objective function here has to be evaluated by actually articulating the model with the current variable values and then capturing an image. The original image is used to obtain the binary image of the lower limbs as explained in Section 2.1, and the VRML model is used to generate a model-based silhouette as explained in Section 2.2. These two silhouettes are then subtracted to obtain a subtracted image, which is now converted into an image of the absolute difference between the two. The area of the nonzero region in this absolute subtracted image is counted, which gives a measure of the extent of mismatch between the two images. We use this absolute area as the objective function for estimating the pose.

$f(\theta_1, \theta_2, \theta_3, \theta_4)$ = absolute area of subtracted image    (11)
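A minimal sketch of this evaluation (the full version, which also articulates the model, is Appendix G):

% Sketch of Objective 3: area of the absolute subtracted image.
Isub = abs(Ivrml - Ireal);    %absolute difference of the two silhouettes
f = sum(sum(Isub));           %mismatch area in pixels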

2.5.3 Constraints

We limit the human joint angles based on the constraints set by the human body. We refer to an example [3] in the open-source software OpenSim [4] for setting the following limits:

$\theta_{1,lower} \le \theta_1 \le \theta_{1,upper}$    (12)
$\theta_{2,lower} \le \theta_2 \le \theta_{2,upper}$    (13)
$\theta_{3,lower} \le \theta_3 \le \theta_{3,upper}$    (14)
$\theta_{4,lower} \le \theta_4 \le \theta_{4,upper}$    (15)

where $\theta_{1,lower}$ = -0.6981 radians (-40°), $\theta_{1,upper}$ = 1.6581 radians (95°), $\theta_{2,lower}$ = -2.0943951 radians (-120°), $\theta_{2,upper}$ = 0 radians (0°), $\theta_{3,lower}$ = -0.6981 radians (-40°), $\theta_{3,upper}$ = 1.6581 radians (95°), $\theta_{4,lower}$ = -2.0943951 radians (-120°), and $\theta_{4,upper}$ = 0 radians (0°).

2.5.4 Optimization Problem

The optimization problem is then set up using the above objective function and constraints.

Minimize: $f(\theta_1, \theta_2, \theta_3, \theta_4)$ = absolute area of subtracted image    (16)
Subject to: $\theta_{1,lower} \le \theta_1 \le \theta_{1,upper}$    (17)
            $\theta_{2,lower} \le \theta_2 \le \theta_{2,upper}$    (18)
            $\theta_{3,lower} \le \theta_3 \le \theta_{3,upper}$    (19)
            $\theta_{4,lower} \le \theta_4 \le \theta_{4,upper}$    (20)

2.5.5 Optimization Problem in Standard Form

The optimization problem in standard form can be expressed as the following three optimization subproblems:

Minimize: $A(x) = (A_o - A_m(x))^2$    (21)
Subject to: $g_{11}(x) = x_3 - b_{z,lower} \ge 0$    (22)
            $g_{12}(x) = b_{z,upper} - x_3 \ge 0$    (23)

Minimize: $C(x) = (x_{co} - x_{cm}(x))^2 + (y_{co} - y_{cm}(x))^2$    (24)
Subject to: $g_{21}(x) = x_1 - b_{x,lower} \ge 0$    (25)
            $g_{22}(x) = b_{x,upper} - x_1 \ge 0$    (26)
            $g_{23}(x) = x_2 - b_{y,lower} \ge 0$    (27)
            $g_{24}(x) = b_{y,upper} - x_2 \ge 0$    (28)

Minimize: $f(\theta_1, \theta_2, \theta_3, \theta_4)$ = absolute area of subtracted image    (29)
Subject to: $g_{31}(x) = x_4 - \theta_{1,lower} \ge 0$    (30)
            $g_{32}(x) = \theta_{1,upper} - x_4 \ge 0$    (31)
            $g_{33}(x) = x_5 - \theta_{2,lower} \ge 0$    (32)
            $g_{34}(x) = \theta_{2,upper} - x_5 \ge 0$    (33)
            $g_{35}(x) = x_6 - \theta_{3,lower} \ge 0$    (34)
            $g_{36}(x) = \theta_{3,upper} - x_6 \ge 0$    (35)
            $g_{37}(x) = x_7 - \theta_{4,lower} \ge 0$    (36)
            $g_{38}(x) = \theta_{4,upper} - x_7 \ge 0$    (37)

where $x = [b_x, b_y, b_z, \theta_1, \theta_2, \theta_3, \theta_4]^T$, $b_{x,lower}$ = -45 meters, $b_{x,upper}$ = 45 meters, $b_{y,lower}$ = -45 meters, $b_{y,upper}$ = 45 meters, $b_{z,lower}$ = -30 meters, $b_{z,upper}$ = 30 meters, $\theta_{1,lower}$ = -0.6981 radians (-40°), $\theta_{1,upper}$ = 1.6581 radians (95°), $\theta_{2,lower}$ = -2.0943951 radians (-120°), $\theta_{2,upper}$ = 0 radians (0°), $\theta_{3,lower}$ = -0.6981 radians (-40°), $\theta_{3,upper}$ = 1.6581 radians (95°), $\theta_{4,lower}$ = -2.0943951 radians (-120°), and $\theta_{4,upper}$ = 0 radians (0°).

3 Optimization Approach

The initial approach to solving the optimization problem is presented in Figure 3. However, experimentation shows that the nature of the underlying objective function itself changes with the results of the first two optimization subproblems. This raises the challenge of how to deal with the three subproblems simultaneously. Experiments showed that the optimization is unstable when all the parameters are optimized at once using a single objective function. To avoid this instability of the combined objective function, we choose another approach in which optimization is carried out individually at each stage. The overall system consists of three optimization subroutines, as shown in Figure 4. We run the optimization routines iteratively and minimize the three objective functions sequentially. We choose to optimize the functions sequentially because the nature of the problem is such that the optimization of the individual functions does not place contradictory requirements on the design variables. This makes it possible to solve the problems sequentially and reach a minimum.

[Flowchart: initialize all variables; optimize the waist position $(b_x, b_y, b_z)$ and pose $(\theta_1$-$\theta_4)$ simultaneously by minimizing $F = \mathrm{AreaSubtractedImage} + \lambda_1 (A_o - A_m(b_z))^2 + \lambda_2 \{(x_{co} - x_{cm}(b_x, b_y))^2 + (y_{co} - y_{cm}(b_x, b_y))^2\}$ subject to all the variable bounds; repeat until the optimization has converged.]

Figure 3: Flowchart of the initial optimization approach.

We use the Augmented Lagrangian method, i.e. the Method of Multipliers, for solving the optimization problem, considering its advantage over penalty methods, which are less robust owing to their sensitivity to the chosen penalty parameter. The formulations for the three optimization subproblems are given as follows:

$P_1(x) = A(x) + R_1 \sum_{j=1}^{2} \left( \langle g_{1j}(x) + \sigma_{1j} \rangle^2 - \sigma_{1j}^2 \right)$

$P_2(x) = C(x) + R_2 \sum_{j=1}^{4} \left( \langle g_{2j}(x) + \sigma_{2j} \rangle^2 - \sigma_{2j}^2 \right)$

$P_3(x) = f(x) + R_3 \sum_{j=1}^{8} \left( \langle g_{3j}(x) + \sigma_{3j} \rangle^2 - \sigma_{3j}^2 \right)$

where $\langle a \rangle = a$ if $a < 0$ and $\langle a \rangle = 0$ otherwise, and the $\sigma$'s are the multiplier estimates updated after each unconstrained minimization.
To solve each N-dimensional unconstrained subproblem, we use Powell's conjugate direction method, as it does not require any derivative information about the objective function. For the 1D line-search subproblem we employ golden section search with Swann's bounding. However, the nature of the problem is such that there are two possible minima, because the two human limbs can be interchanged to achieve a very similar silhouette. To avoid missing the actual pose, we set up the problem such that we end up with two probable poses. Figure 5 shows the flowchart of the overall system, sketched in code after this paragraph. The system takes an image from the data set and extracts the silhouette corresponding to the lower limbs of the human body as described in Section 2.1. The area of the silhouette is then evaluated, and the model z coordinate is changed such that the difference in the areas of the model-generated silhouette and the actual human silhouette is minimized. The centroids of both silhouettes are then located and the model position is optimized so that they lie as close as possible. Finally, the optimization of the limb pose is carried out such that the absolute area of the subtracted silhouette image is minimized. Once a solution is obtained, the pose of the lower limbs is swapped and the problem is solved once again to obtain a solution with the two legs interchanged in pose. Both poses are then saved as the probable poses. Some other technique that takes into account the edges in the image needs to be used for identifying the true pose. In case the described optimization technique is being used on a sequence of images, the poses before and after the current image can help in establishing the true current pose; in this work, we do not consider this problem. We have coded all optimization routines in MATLAB, using functions from the Image Processing Toolbox for silhouette extraction and centroid determination and functions from the Virtual Reality Toolbox for VRML-model-based image generation. Appendices A-J contain the MATLAB code implemented to solve the optimization problem.
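The sequential structure can be sketched as follows (condensed from Appendix A; ALM1-ALM3 are the three subproblem solvers listed in Appendices B-D):

% Sketch of the sequential solution loop: re-solve the three subproblems
% until the full pose vector x settles.
px = x; k = 0;
while ((norm(x - px)/norm(x)) > epsilon_x | k == 0)
    px = x;
    [x, flag]        = ALM1(x, c, nodes, Ireal);          %depth bz
    [x, flag]        = ALM2(x, c, nodes, Ireal, xc, yc);  %waist bx, by
    [x, score, flag] = ALM3(x, c, nodes, Ireal);          %joint angles
    k = k + 1;
end
xswap = [x(1:3); x(6:7); x(4:5)];  %swap the legs and solve again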

4 Results and Discussion

We test the developed optimization approach on images from the KTH data set [5] with a person walking parallel to the camera plane.

[Flowchart: initialize all variables; optimize the z coordinate of the human waist ($b_z$) by minimizing $A(b_z)$ subject to its bounds; optimize the x, y coordinates ($b_x$, $b_y$) by minimizing $C(b_x, b_y)$ subject to their bounds; optimize the pose by minimizing $f(\theta_1$-$\theta_4)$ subject to the joint limits; repeat until all three optimization problems have converged.]

Figure 4: A flowchart of the adopted optimization approach.

[Flowchart: obtain an image from the data set; obtain the human lower-limb silhouette and its centroid; solve all three optimization subproblems using the discussed approach; swap the variables $\theta_1$, $\theta_3$ and $\theta_2$, $\theta_4$ respectively; again solve all three optimization subproblems with this new initial pose; save the solutions from both passes.]
Figure 5: Flowchart of the overall system.

4.1 Initial Design Points

We choose an arbitrary feasible pose vector and test it on different images, which act as different inputs for the system. The chosen pose vector is x = [3, 4, 0, 0.5359, -0.9626, -0.5277, -0.1271]^T.

4.2 Optimization Parameters

We chose the following values for the optimization parameters:


1. All initial multiplier estimates are zero: $\sigma_{11}^0 = \sigma_{12}^0 = \sigma_{21}^0 = \sigma_{22}^0 = \sigma_{23}^0 = \sigma_{24}^0 = \sigma_{31}^0 = \sigma_{32}^0 = \sigma_{33}^0 = \sigma_{34}^0 = \sigma_{35}^0 = \sigma_{36}^0 = \sigma_{37}^0 = \sigma_{38}^0 = 0$.

2. $\Delta$ = 0.1 for Swann's bounding.

3. $R_1$ = 2, $R_2$ = 2, $R_3$ = 100. We choose a high value for $R_3$ to avoid any violation of the constraints on the pose. However, the final solution is found to be independent of the value of R.

4.3 Convergence Criteria

We choose the following convergence criteria at the different levels of the solution:

4.3.1 Augmented Lagrangian Method

The convergence criterion consists of both the change in the value of the pseudo-objective function and the change in the pose variables:

$|P^k - P^{k-1}| \le \varepsilon_f$
$\|x^k - x^{k-1}\| \le \varepsilon_x$

where k is the iteration number, $\varepsilon_{f1}$ = 10, $\varepsilon_{f2}$ = 5, $\varepsilon_{f3}$ = 100, $\varepsilon_{x1}$ = 0.1, $\varepsilon_{x2}$ = 0.1, and $\varepsilon_{x3}$ = 0.2. The subscripts represent the values for the three optimization subproblems.
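These two tests appear in the code as a single combined check; a one-line sketch of the stopping condition as implemented in Appendices B-D (P_k, P_km1, x_k, x_km1 are illustrative names for successive iterates):

% Stop when both the pseudo-objective and the variables have settled.
converged = (abs(P_k - P_km1) <= epsilon_f) & (norm(x_k - x_km1) <= epsilon_x);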

4.3.2 Powell's Conjugate Direction Method

For Powell's conjugate direction method the convergence criterion consists of the change in the value of the function and a limit on the number of iterations:

$|P^k - P^{k-1}| \le \varepsilon_f$
$k \le k_{max}$

where k is the iteration number, $\varepsilon_{f1}$ = 10, $\varepsilon_{f2}$ = 5, $\varepsilon_{f3}$ = 100, and $k_{max}$ = 100.

4.3.3 Golden Section

The convergence criterion consists of the change in the value of the function, the change in the variables, and a limit on the number of iterations:

$|P^k - P^{k-1}| \le \varepsilon_f$
$\|x^k - x^{k-1}\| \le \varepsilon_x$
$k \le k_{max}$

where k is the iteration number, $\varepsilon_{f1}$ = 10, $\varepsilon_{f2}$ = 5, $\varepsilon_{f3}$ = 100, and $k_{max}$ = 100. Such large values for the change in the function value are chosen as the convergence criterion because the function is evaluated in terms of pixels, which makes it difficult for the optimization to converge with smaller values.

Figure 6 shows the input images used and the corresponding solutions obtained. For every pose two possible solutions are obtained. The results show that the optimization predicts the pose of the human lower limbs quite well. While the performance is very good for images in which no limb is occluded, the output is satisfactory for images having occlusion. Appendix M contains the detailed output for images in the KTH dataset.

Figure 6: The two outputs obtained for different input images. The first column shows the input images; the second and third columns show the output model images for the estimated pose.

5 Conclusion

The presented optimization-based framework is an effective way of estimating human pose from a single image. The technique provides the most probable poses, i.e. it capitalizes on finding the local minima of a multi-modal function; the global minimum must then be chosen using some other methodology not discussed here. It also performs fairly well in case of limb occlusion. Most techniques available today require some user input, but the approach presented in this work alleviates all such requirements. However, the computation time to arrive at a solution was observed to be large for certain images (a few hours on a standard PC with 3 GB of RAM) that took longer to converge. We believe a more efficient implementation of the subroutines in C/C++ can reduce the solution time.

6 Future Work

The present work does not address the issue of selecting the most probable pose; instead it provides two probable poses. In the future, a technique that incorporates the edges in the original image as a metric can be used to finally choose one pose out of the two probable ones. This can also be achieved by using inference or move limits from the previous pose in case a sequence of images is being processed. The technique can also be extended to images in which the human limbs are oriented in 3D, and the problem can be extended to the full human body. In addition, an optimization framework can be set up to optimize the geometry of the model simultaneously in case a video, i.e. a sequence of images, is being processed. This adaptability in the model shape will result in more plausible pose estimation.

References

1. Sminchisescu, C. and Telea, A. (2002). Human pose estimation from silhouettes: a consistent approach using distance level sets. In International Conference on Computer Graphics, Visualization and Computer Vision (WSCG).

2. Guan, P., Weiss, A., Balan, A. and Black, M. (2009). Estimating human shape and pose from a single image. In ICCV.

3. Delp, S.L., Anderson, F.C., Arnold, A.S., Loan, P., Habib, A., John, C.T., Guendelman, E. and Thelen, D.G. (2007). OpenSim: open-source software to create and analyze dynamic simulations of movement. IEEE Trans. Biomed. Eng. 54, pp. 1940-1950.

4. Anderson, F.C. and Pandy, M.G. (2001). Dynamic optimization of human walking. Journal of Biomechanical Engineering.

5. KTH dataset. Available: http://www.nada.kth.se/cvap/actions/

A MATLAB Code for overall system

%MATLAB Code for the overall system
close all
clc
if (exist('w') & isopen(w))
    close(w); delete(w);
end
clear all

epsilon_x = 0.1;
i = 185;
F = []; X = []; X1 = []; X2 = [];
SCORE = []; SCORE1 = []; SCORE2 = [];
bx = 3; by = 4; bz = 0;
rhf = 30.70623249*pi/180;   %right hip flexion (rad)
rkf = -55.15546834*pi/180;  %right knee flexion (rad)
lhf = -30.23302178*pi/180;  %left hip flexion (rad)
lkf = -7.28018554*pi/180;   %left knee flexion (rad)
dir = [cd '\KTH\frame'];
Ireal = imread([dir num2str(i) '.jpg']);
S = size(Ireal);

figure('Position',[400 400 S(2) S(1)]);
w = vrworld('lowerbodyu2');
open(w);
c = vr.canvas(w, gcf);
body = vrnode(w,'bodycf');
rthigh = vrnode(w,'rthigh');
rleg = vrnode(w,'rleg');
lthigh = vrnode(w,'lthigh');
lleg = vrnode(w,'lleg');
nodes = [body rthigh rleg lthigh lleg];
x0 = [bx; by; bz; rhf; rkf; lhf; lkf]; %bodyx, bodyy, bodyz, right hip flexion,
                                       %right knee flexion, left hip flexion, left knee flexion
figure;
for i = 194:234
    x = x0;
    Ireal = imread([dir num2str(i) '.jpg']);
    subplot(2,3,2); imshow(Ireal);
    imwrite(Ireal,['Iorg' num2str(i-184) '.jpg']);
    Ireal = im2bw(Ireal,0.2);                        %binarize the frame
    subplot(2,3,3); imshow(Ireal);
    imwrite(Ireal,['Ibin' num2str(i-184) '.jpg']);
    Stats = regionprops(Ireal,'Area','BoundingBox','Centroid');
    human = find([Stats(:).Area]>500);               %large connected region
    Ireal(1:uint8(Stats(human).Centroid(2)),:) = 0;  %remove region above the waist
    subplot(2,3,4); imshow(Ireal);
    imwrite(Ireal,['Ilimb' num2str(i-184) '.jpg']);
    hold on;
    plot(Stats(human).Centroid(1),Stats(human).Centroid(2),'r*');
    Stats = regionprops(Ireal,'Area','BoundingBox','Centroid');
    human = find([Stats(:).Area]>250);
    plot(Stats(human).Centroid(1),Stats(human).Centroid(2),'r*');

    %First pass: solve the three subproblems sequentially until the pose settles
    px = x;
    k = 0;
    while ((norm(x-px)/norm(x))>epsilon_x | k==0)
        px = x;
        x = [x0(1:2); x(3:7)];
        [x, flag] = ALM1(x,c,nodes,Ireal);
        if (flag==1)
            disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Swans Bounding Failed');
            break;
        elseif (flag==2)
            disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Linear Search Failed');
            break;
        elseif (flag==3)
            disp('Constrained Optimization Failed->Powells Method Failed');
            break;
        end
        x = [px(1:2); x(3:7)];
        [x, flag] = ALM2(x,c,nodes,Ireal,Stats(human).Centroid(1),...
            Stats(human).Centroid(2));
        if (flag==1)
            disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Swans Bounding Failed');
            break;
        elseif (flag==2)
            disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Linear Search Failed');
            break;
        elseif (flag==3)
            disp('Constrained Optimization Failed->Powells Method Failed');
            break;
        end
        [x,Score1,flag] = ALM3(x,c,nodes,Ireal);
        if (flag==1)
            disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Swans Bounding Failed');
            break;
        elseif (flag==2)
            disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Linear Search Failed');
            break;
        elseif (flag==3)
            disp('Constrained Optimization Failed->Powells Method Failed');
            break;
        end
        k = k+1;
    end
    X2 = [X2, x];
    SCORE1 = [SCORE1 Score1];
    bx = x(1); by = x(2); bz = x(3);
    rhf = x(4); rkf = x(5); lhf = x(6); lkf = x(7);
    body.translation = [bx by bz];
    rthigh.rotation = [1 0 0 -rhf];
    rleg.rotation = [1 0 0 -rkf];
    lthigh.rotation = [1 0 0 -lhf];
    lleg.rotation = [-1 0 0 lkf];
    vrdrawnow;
    Ivrml = capture(c);
    imwrite(Ivrml,['Output1_' num2str(i) '.jpg'],'jpg');
    x1 = x;
    x = [x(1:3); x(6:7); x(4:5)]; %Swapping the angles of left and right leg

    %Second pass: re-solve with the swapped initial pose
    k = 0;
    while ((norm(x-px)/norm(x))>epsilon_x | k==0)
        px = x;
        x = [x0(1:2); x(3:7)];
        [x, flag] = ALM1(x,c,nodes,Ireal);
        if (flag==1)
            disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Swans Bounding Failed');
            break;
        elseif (flag==2)
            disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Linear Search Failed');
            break;
        elseif (flag==3)
            disp('Constrained Optimization Failed->Powells Method Failed');
            break;
        end
        x = [px(1:2); x(3:7)];
        [x, flag] = ALM2(x,c,nodes,Ireal,Stats(human).Centroid(1),...
            Stats(human).Centroid(2));
        if (flag==1)
            disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Swans Bounding Failed');
            break;
        elseif (flag==2)
            disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Linear Search Failed');
            break;
        elseif (flag==3)
            disp('Constrained Optimization Failed->Powells Method Failed');
            break;
        end
        [x,Score2,flag] = ALM3(x,c,nodes,Ireal);
        if (flag==1)
            disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Swans Bounding Failed');
            break;
        elseif (flag==2)
            disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Linear Search Failed');
            break;
        elseif (flag==3)
            disp('Constrained Optimization Failed->Powells Method Failed');
            break;
        end
        k = k+1;
    end
    X2 = [X2, x];
    SCORE2 = [SCORE2 Score2]; %the original listing appended the undefined "Score" here
    bx = x(1); by = x(2); bz = x(3);
    rhf = x(4); rkf = x(5); lhf = x(6); lkf = x(7);
    body.translation = [bx by bz];
    rthigh.rotation = [1 0 0 -rhf];
    rleg.rotation = [1 0 0 -rkf];
    lthigh.rotation = [1 0 0 -lhf];
    lleg.rotation = [-1 0 0 lkf];
    vrdrawnow;
    Ivrml = capture(c);
    imwrite(Ivrml,['Output2_' num2str(i) '.jpg'],'jpg');
end
pause;
close(w);
delete(w);

B MATLAB Code for Augmented Lagrangian Method implemented for Objective 1

function [x1, flag] = ALM1(x,c,nodes,Ireal)
%Augmented Lagrangian Method
bzu = 30;   %Upper limit on body z coordinate
bzl = -30;  %Lower limit on body z coordinate
R = 2;      %ALM penalty coefficient
x0 = x;     %x = [bx by bz rhf rkf lhf lkf]
sigma0 = 0;
epsilon_f = 10;
epsilon_x = 0.1;

%Optimization 1
f = @(x) Objective1(x,c,nodes,Ireal);
g1 = @(x) bzu-x(3);
g2 = @(x) x(3)-bzl;
g_penalty = @(g,x,sigma) (g(x)+sigma<0)*(g(x)+sigma)^2-sigma^2;
P = @(x,sigma) f(x)+R*(g_penalty(g1,x,sigma(1))+g_penalty(g2,x,sigma(2)));
CX = @(x) x(3,:);                  %extract the optimized variable (bz)
AX = @(x,cx) [x(1:2); cx; x(4:7)]; %reassemble the full variable vector
sigma = [sigma0 sigma0];
pP = P(x0,sigma);
x1 = x0;
i = 0;
px = x0;
S = [sigma]; I = [i]; X = [x0];
PF = [P(x0,sigma)]; F = [f(x0)]; G = [g1(x0) g2(x0)];
while (abs(pP-P(x1,sigma))>epsilon_f | norm(x1-px)>epsilon_x | i==0)
    i = i+1;
    pP = P(x1,sigma);
    px = x1;
    [x1, flag] = Powell(@(x) P(AX(px,x),sigma),CX(px),epsilon_f);
    x1 = AX(px,x1);
    if (flag==1)
        disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Swans Bounding Failed');
        break;
    elseif (flag==2)
        disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Linear Search Failed');
        break;
    elseif (flag==3)
        disp('Constrained Optimization Failed->Powells Method Failed');
        break;
    end
    S = [S; sigma]; I = [I; i]; X = [X, x1];
    PF = [PF; P(x1,sigma)]; F = [F; f(x1)]; G = [G; g1(x1), g2(x1)];
    sigma = [((g1(x1)+sigma(1))<0)*(g1(x1)+sigma(1)),...
             ((g2(x1)+sigma(2))<0)*(g2(x1)+sigma(2))]; %multiplier update
end
disp('Results using Augmented Lagrangian Method');
disp('i sigma1 sigma2 x1 P(x) f(x) g1(x) g2(x)');
fprintf('%d %7.4f %7.4f %7.4f %7.4f %7.4f %7.4f %7.4f\n',...
    [I S CX(X)' PF F G]'); %transposed so each iteration prints on one row
if (flag==1)
    disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Swans Bounding Failed');
elseif (flag==2)
    disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Linear Search Failed');
elseif (flag==3)
    disp('Constrained Optimization Failed->Powells Method Failed');
end
fprintf('\n\n');

C MATLAB Code for Augmented Lagrangian Method implemented for Objective 2

function [x1, flag] = ALM2(x,c,nodes,Ireal,xc,yc)
%Augmented Lagrangian Method
bxu = 45;   %Upper limit on body x coordinate
bxl = -45;  %Lower limit on body x coordinate
byu = 45;   %Upper limit on body y coordinate
byl = -45;  %Lower limit on body y coordinate
R = 2;      %ALM penalty coefficient
x0 = x;     %x = [bx by bz rhf rkf lhf lkf]
sigma0 = 0;
epsilon_f = 5;
epsilon_x = 0.1;

%Optimization 2
f = @(x) Objective2(x,c,nodes,Ireal,xc,yc);
g1 = @(x) x(1)-bxl;
g2 = @(x) bxu-x(1);
g3 = @(x) x(2)-byl;
g4 = @(x) byu-x(2);
g_penalty = @(g,x,sigma) (g(x)+sigma<0)*(g(x)+sigma)^2-sigma^2;
P = @(x,sigma) f(x)+R*(g_penalty(g1,x,sigma(1))+...
    g_penalty(g2,x,sigma(2))+g_penalty(g3,x,sigma(3))+...
    g_penalty(g4,x,sigma(4)));
CX = @(x) x(1:2,:);            %extract the optimized variables (bx, by)
AX = @(x,cx) [cx; x(3:7)];     %reassemble the full variable vector
sigma = [sigma0 sigma0 sigma0 sigma0];
pP = P(x0,sigma);
x1 = x0;
i = 0;
px = x0;
S = [sigma]; I = [i]; X = [x0];
PF = [P(x0,sigma)]; F = [f(x0)]; G = [g1(x0) g2(x0) g3(x0) g4(x0)];
while (abs(pP-P(x1,sigma))>epsilon_f | norm(x1-px)>epsilon_x | i==0)
    i = i+1;
    pP = P(x1,sigma);
    px = x1;
    [x1, flag] = Powell(@(x) P(AX(px,x),sigma),CX(px),epsilon_f);
    x1 = AX(px,x1);
    if (flag==1)
        disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Swans Bounding Failed');
        break;
    elseif (flag==2)
        disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Linear Search Failed');
        break;
    elseif (flag==3)
        disp('Constrained Optimization Failed->Powells Method Failed');
        break;
    end
    S = [S; sigma]; I = [I; i]; X = [X, x1];
    PF = [PF; P(x1,sigma)]; F = [F; f(x1)];
    G = [G; g1(x1), g2(x1), g3(x1), g4(x1)];
    sigma = [((g1(x1)+sigma(1))<0)*(g1(x1)+sigma(1)),...
             ((g2(x1)+sigma(2))<0)*(g2(x1)+sigma(2)),...
             ((g3(x1)+sigma(3))<0)*(g3(x1)+sigma(3)),...
             ((g4(x1)+sigma(4))<0)*(g4(x1)+sigma(4))];
end
disp('Results using Augmented Lagrangian Method');
disp('i sigma1 sigma2 sigma3 sigma4 x1 x2 P(x) f(x) g1(x) g2(x) g3(x) g4(x)');
fprintf('%d %7.4f %7.4f %7.4f %7.4f %7.4f %7.4f %7.4f %7.4f %7.4f %7.4f %7.4f %7.4f\n',...
    [I S CX(X)' PF F G]'); %transposed so each iteration prints on one row
if (flag==1)
    disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Swans Bounding Failed');
elseif (flag==2)
    disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Linear Search Failed');
elseif (flag==3)
    disp('Constrained Optimization Failed->Powells Method Failed');
end
fprintf('\n\n');

D MATLAB Code for Augmented Lagrangian Method implemented for Objective 3

function [x1,Score,flag] = ALM3(x,c,nodes,Ireal)
%Augmented Lagrangian Method
rhfu = 95*pi/180;   %Upper limit on right hip flexion angle
rhfl = -40*pi/180;  %Lower limit on right hip flexion angle
rkfu = 0*pi/180;    %Upper limit on right knee flexion angle
rkfl = -120*pi/180; %Lower limit on right knee flexion angle
lhfu = 95*pi/180;   %Upper limit on left hip flexion angle
lhfl = -40*pi/180;  %Lower limit on left hip flexion angle
lkfu = 0*pi/180;    %Upper limit on left knee flexion angle
lkfl = -120*pi/180; %Lower limit on left knee flexion angle
R = 100;            %ALM penalty coefficient
x0 = x;             %x = [bx by bz rhf rkf lhf lkf]
sigma0 = 0;
epsilon_f = 100;
epsilon_x = 0.2;

%Optimization 3
f = @(x) Objective3(x,c,nodes,Ireal);
g1 = @(x) x(4)-rhfl;
g2 = @(x) rhfu-x(4);
g3 = @(x) x(5)-rkfl;
g4 = @(x) rkfu-x(5);
g5 = @(x) x(6)-lhfl;
g6 = @(x) lhfu-x(6);
g7 = @(x) x(7)-lkfl;
g8 = @(x) lkfu-x(7);
g_penalty = @(g,x,sigma) (g(x)+sigma<0)*(g(x)+sigma)^2-sigma^2;
P = @(x,sigma) f(x)+R*(g_penalty(g1,x,sigma(1))+g_penalty(g2,x,sigma(2))+...
    g_penalty(g3,x,sigma(3))+g_penalty(g4,x,sigma(4))+...
    g_penalty(g5,x,sigma(5))+g_penalty(g6,x,sigma(6))+...
    g_penalty(g7,x,sigma(7))+g_penalty(g8,x,sigma(8)));
CX = @(x) x(4:7,:);          %extract the optimized variables (joint angles)
AX = @(x,cx) [x(1:3); cx];   %reassemble the full variable vector
sigma = [sigma0 sigma0 sigma0 sigma0 sigma0 sigma0 sigma0 sigma0];
pP = P(x0,sigma);
x1 = x0;
i = 0;
px = x0;
S = [sigma]; I = [i]; X = [x0];
PF = [P(x0,sigma)]; F = [f(x0)];
G = [g1(x0) g2(x0) g3(x0) g4(x0) g5(x0) g6(x0) g7(x0) g8(x0)];
while (abs(pP-P(x1,sigma))>epsilon_f | norm(x1-px)>epsilon_x | i==0)
    i = i+1;
    pP = P(x1,sigma);
    px = x1;
    [x1, flag] = Powell(@(x) P(AX(px,x),sigma),CX(px),epsilon_f);
    x1 = AX(px,x1);
    if (flag==1)
        disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Swans Bounding Failed');
        break;
    elseif (flag==2)
        disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Linear Search Failed');
        break;
    elseif (flag==3)
        disp('Constrained Optimization Failed->Powells Method Failed');
        break;
    end
    S = [S; sigma]; I = [I; i]; X = [X, x1];
    PF = [PF; P(x1,sigma)]; F = [F; f(x1)];
    G = [G; g1(x1), g2(x1), g3(x1), g4(x1), g5(x1), g6(x1), g7(x1), g8(x1)];
    sigma = [((g1(x1)+sigma(1))<0)*(g1(x1)+sigma(1)),...
             ((g2(x1)+sigma(2))<0)*(g2(x1)+sigma(2)),...
             ((g3(x1)+sigma(3))<0)*(g3(x1)+sigma(3)),...
             ((g4(x1)+sigma(4))<0)*(g4(x1)+sigma(4)),...
             ((g5(x1)+sigma(5))<0)*(g5(x1)+sigma(5)),... %was sigma(3) in the original listing: a typo
             ((g6(x1)+sigma(6))<0)*(g6(x1)+sigma(6)),...
             ((g7(x1)+sigma(7))<0)*(g7(x1)+sigma(7)),...
             ((g8(x1)+sigma(8))<0)*(g8(x1)+sigma(8))];
end
Score = f(x1);
disp('Results using Augmented Lagrangian Method');
disp('i sigma1 sigma2 sigma3 sigma4 sigma5 sigma6 sigma7 sigma8 x1 x2 x3 x4 P(x) f(x) g1(x) g2(x) g3(x) g4(x) g5(x) g6(x) g7(x) g8(x)');
fprintf(['%d' repmat(' %7.4f',1,22) '\n'],...
    [I S CX(X)' PF F G]'); %transposed so each iteration prints on one row
if (flag==1)
    disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Swans Bounding Failed');
elseif (flag==2)
    disp('Constrained Optimization Failed->Unconstrained Optimization Failed->Linear Search Failed');
elseif (flag==3)
    disp('Constrained Optimization Failed->Powells Method Failed');
end
fprintf('\n\n');

E MATLAB Code for evaluation of Objective 1

%MATLAB Function to evaluate the Objective Function 1 value
function [val] = Objective1(x,c,nodes,Ireal)
bx = x(1); by = x(2); bz = x(3);
rhf = x(4); rkf = x(5); lhf = x(6); lkf = x(7);
body = nodes(1);
rthigh = nodes(2);
rleg = nodes(3);
lthigh = nodes(4);
lleg = nodes(5);
body.translation = [bx by bz];
rthigh.rotation = [1 0 0 -rhf];
rleg.rotation = [1 0 0 -rkf];
lthigh.rotation = [1 0 0 -lhf];
lleg.rotation = [-1 0 0 lkf];
vrdrawnow;
Ivrml = im2bw(capture(c),0.1);  %synthetic image -> binary silhouette
Areal = sum(sum(Ireal));        %real silhouette area (pixels)
Avrml = sum(sum(Ivrml));        %model silhouette area (pixels)
val = (Areal-Avrml)^2;
figure(2); subplot(2,3,5); imshow(Ivrml);
hold on;
text(60,10,num2str(val),'color','white');

F MATLAB Code for evaluation of Objective 2

%MATLAB Function to evaluate the Objective Function 2 value
function [val] = Objective2(x,c,nodes,Ireal,xc,yc)
bx = x(1); by = x(2); bz = x(3);
rhf = x(4); rkf = x(5); lhf = x(6); lkf = x(7);
body = nodes(1);
rthigh = nodes(2);
rleg = nodes(3);
lthigh = nodes(4);
lleg = nodes(5);
body.translation = [bx by bz];
rthigh.rotation = [1 0 0 -rhf];
rleg.rotation = [1 0 0 -rkf];
lthigh.rotation = [1 0 0 -lhf];
lleg.rotation = [-1 0 0 lkf];
vrdrawnow;
Ivrml = im2bw(capture(c),0.1);
Stats = regionprops(Ivrml,'Centroid','Area');
human = find([Stats(:).Area]>250);
figure(2); subplot(2,3,5); imshow(Ivrml);
hold on;
if (~isempty(human)) %the tilde appears to have been lost in the original listing
    plot(Stats(human).Centroid(1),Stats(human).Centroid(2),'r*');
    x = Stats(human).Centroid(1);
    y = Stats(human).Centroid(2);
else
    x = 0;
    y = 0;
end
val = (x-xc)^2+(y-yc)^2; %squared distance between the two centroids
text(60,10,num2str(val),'color','white');

G MATLAB Code for evaluation of Objective 3

%MATLAB Function to evaluate the Objective Function 3 value
function [val] = Objective3(x,c,nodes,Ireal)
bx = x(1); by = x(2); bz = x(3);
rhf = x(4); rkf = x(5); lhf = x(6); lkf = x(7);
body = nodes(1);
rthigh = nodes(2);
rleg = nodes(3);
lthigh = nodes(4);
lleg = nodes(5);
body.translation = [bx by bz];
rthigh.rotation = [1 0 0 -rhf];
rleg.rotation = [1 0 0 -rkf];
lthigh.rotation = [1 0 0 -lhf];
lleg.rotation = [-1 0 0 lkf];
vrdrawnow;
Ivrml = im2bw(capture(c),0.1);
Isub = abs(Ivrml-Ireal);  %absolute subtracted image
val = sum(sum(Isub));     %mismatch area (pixels)
figure(2);
subplot(2,3,5); imshow(Ivrml);
subplot(2,3,6); imshow(Isub);
hold on;
text(60,10,num2str(val),'color','white');

H MATLAB Code for Powell's Conjugate Direction Method

function [x1, flag] = Powell(f,x,epsilon_f)
%Powell's Conjugate Direction Method
x1 = x;
px = x1;
a = 0;     %initial value for alpha
d = 0.1;   %delta for Swann's method
kMax = 10;
S = eye(length(x1)); %start with the coordinate directions
disp('Results using Powells Conjugate Direction Method');
k = 1;
j = 1;
px = x1;
while ((k==1) | (abs(f(px)-f(x1))>epsilon_f))
    px = x1;
    J = []; X = [x1]; F = [f(x1)]; Alpha = [a];
    if (size(S,1)==1)
        start = 1;
    else
        start = size(S,1)+1;
    end
    for j = start:-1:1
        if j==1
            s = S(:,size(S,1));  %the last direction is searched again
        else
            s = S(:,j-1);
        end
        x2 = @(a) x1+a*s;        %line through x1 along direction s
        [lb,ub,flag] = swans(a,d,f,x2);
        if (flag==1)
            disp('Unconstrained Optimization Failed->Swans Bounding Failed');
            break;
        end
        [oa,flag] = goldensection(lb,ub,f,x2,epsilon_f);
        if (flag==2)
            disp('Unconstrained Optimization Failed->Linear Search Failed');
            break;
        end
        J = [J; j];
        Alpha = [Alpha; oa];
        x1 = x2(oa);
        X = [X, x1];
        F = [F; f(x1)];
    end
    if (size(S,1)==1)
        S = s;
    else
        s = (X(:,end)-X(:,1))./norm(X(:,end)-X(:,1)); %new conjugate direction
        S = [S(:,2:end), s];                          %replace the oldest direction
    end
    J = [J; 0];
    disp(['Iteration ' num2str(k)]);
    disp(' j a x1->xn f(x)');
    disp([flipud(J) Alpha X' F]);
    k = k+1;
    if (flag==1)
        disp('Unconstrained Optimization Failed');
        break;
    elseif (flag==2)
        disp('Unconstrained Optimization Failed->Linear Search Failed');
        break;
    elseif (k==kMax)
        flag = 3;
        disp('Powells Method Failed');
        break;
    end
end

I MATLAB Code for Golden Section Method

function [oa, flag] = goldensection(lb,ub,f,x2,epsilon_f)
%Golden Section Method
t = 0.618;    %golden section ratio
tola = 0.01;  %convergence tolerance on the alpha value
ilimit = 1000;
flag = 0;
glb = lb;
gub = ub;
if (glb==gub)
    oa = 0;
    return
end
delf = 1000;
dela = 1000;
niterate = 1;
ga1 = glb+t*(gub-glb);
ga2 = glb+(1-t)*(gub-glb);
gf1 = f(x2(ga1));
gf2 = f(x2(ga2));
while ((delf>epsilon_f | dela>tola) & niterate<ilimit)
    if (gf1>gf2 & ga1>ga2)       %minimum lies in [glb, ga1]
        gub = ga1;
        ga1 = ga2;
        gf1 = gf2;
        ga2 = glb+(1-t)*(gub-glb);
        gf2 = f(x2(ga2));
        oa = ga2;
        delf = abs(gf1-gf2);
    elseif (gf1<gf2 & ga1>ga2)   %minimum lies in [ga2, gub]
        glb = ga2;
        ga2 = ga1;
        gf2 = gf1;
        ga1 = glb+t*(gub-glb);
        gf1 = f(x2(ga1));
        oa = ga1;
        delf = abs(gf1-gf2);
    elseif (gf1==gf2 & ga1>ga2)  %minimum lies in [ga2, ga1]
        glb = ga2;
        gub = ga1;
        ga1 = glb+t*(gub-glb);
        ga2 = glb+(1-t)*(gub-glb);
        gf1 = f(x2(ga1));
        gf2 = f(x2(ga2));
        oa = (ga1+ga2)/2;
        delf = abs(gf1-gf2);
    end
    dela = abs(ga1-ga2);
    niterate = niterate+1;
end
if (niterate==ilimit)
    oa = 0;
    disp('Linear Search Failed');
    flag = 2;
end

J MATLAB Code for Swann's Bounding Method

function [lb, ub, flag] = swans(a,d,f,x2)
%Swann's Bounding Method
flag = 0;
lb = a;
ub = a;
n = 0;
a0 = a-d;
a1 = a;
a2 = a+d;
fa0 = f(x2(a0));
fa1 = f(x2(a1));
fa2 = f(x2(a2));
if (fa2>fa1 & fa0<fa1)        %function decreases toward a0: search downhill
    d = -d;
    temp = a2; a2 = a0; a0 = temp;
    temp = fa2; fa2 = fa0; fa0 = temp;
elseif (fa2>=fa1 & fa0>=fa1)  %minimum already bracketed by [a0, a2]
    lb = min(a0,a2);
    ub = max(a0,a2);
end
while (fa2<=fa1)
    n = n+1;
    a0 = a1;
    a1 = a2;
    a2 = a1+2^n*d;            %expand the step geometrically
    fa0 = fa1;
    fa1 = fa2;
    fa2 = f(x2(a2));
    if (d<0)
        lb = a2; ub = a0;
    else
        lb = a0; ub = a2;
    end
    if (fa0==inf | fa1==inf | fa2==inf)
        disp('Bounding Diverged');
        flag = 1;
        break
    end
end

K Objective Function Output for different inputs

Figure 7: Objective function values obtained for a sequence of subtracted images; (1)-(4) are shown in Figure 8.

L The inputs used and output at each stage of the image processing

Figure 8: Subtracted images obtained for different inputs. (a)-(d) original images, (e)-(h) human-separated images, (i)-(l) human lower limbs, (m)-(p) VRML model images, (q)-(t) binary model images, (u)-(x) subtracted images.

M The detailed output obtained for images in KTH dataset

Figure 9: The two outputs obtained for different input images. The first column shows the input images; the second and third columns show the output model images for the estimated pose.
