
F. What are the uses of the concept of standard deviation and skewness in management?

In financial mathematics and financial risk management, Value at Risk (VaR) is a widely used risk measure of the risk of loss on a specific portfolio of financial assets. For a given portfolio, probability and time horizon, VaR is defined as a threshold value such that the probability that the mark-to-market loss on the portfolio over the given time horizon exceeds this value (assuming normal markets and no trading in the portfolio) is the given probability level.[1] For example, if a portfolio of stocks has a one-day 95% VaR of $1 million, there is a 0.05 probability that the portfolio will fall in value by more than $1 million over a one day period, assuming markets are normal and there is no trading. Informally, a loss of $1 million or more on this portfolio is expected on 1 day in 20. A loss which exceeds the VaR threshold is termed a VaR break.[2]
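To make the definition concrete, here is a minimal Python sketch of one common way to estimate a one-day 95% VaR by historical simulation, i.e. as a quantile of past daily profit-and-loss figures. The P&L series is made up for illustration; it is not from the text.

```python
import numpy as np

# Minimal sketch: one-day 95% VaR by historical simulation.
# daily_pnl holds made-up past daily profit/loss figures (negative = loss).
rng = np.random.default_rng(0)
daily_pnl = rng.normal(loc=0.0, scale=500_000, size=1_000)

confidence = 0.95
# VaR is the loss threshold exceeded with probability 1 - confidence,
# i.e. the 5th percentile of the P&L distribution, reported as a positive loss.
var_95 = -np.percentile(daily_pnl, (1 - confidence) * 100)
print(f"one-day 95% VaR ~ {var_95:,.0f}")
```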

[Figure: The 5% Value at Risk of a hypothetical profit-and-loss probability density function.]

VaR has five main uses in finance: risk management, risk measurement, financial control, financial reporting and computing regulatory capital. VaR is sometimes used in nonfinancial applications as well.[3] Important related ideas are economic capital, backtesting, stress testing, expected shortfall, and tail conditional expectation.[4]

VAR in Governance
An interesting takeoff on VaR is its application in governance for endowments, trusts, and pension plans. Essentially, trustees adopt portfolio Value-at-Risk metrics for the entire pooled account and for the diversified parts individually managed. Instead of probability estimates they simply define maximum levels of acceptable loss for each. Doing so provides an easy metric for oversight and adds accountability, as managers are directed to manage, but with the additional constraint of avoiding losses within a defined risk parameter. VaR utilized in this manner adds relevance as well as an easy-to-monitor risk measurement control far more intuitive than standard deviation of return. Use of VaR in this context, as well as a worthwhile critique of board governance practices as they relate to investment management oversight in general, can be found in "Best Practices in Governance".[12]

Risk measure and risk metric


The term VaR is used both for a risk measure and a risk metric. This sometimes leads to confusion. Sources earlier than 1995 usually emphasize the risk measure; later sources are more likely to emphasize the metric. The VaR risk measure defines risk as mark-to-market loss on a fixed portfolio over a fixed time horizon, assuming normal markets. There are many alternative risk measures in finance. Instead of mark-to-market, which uses market prices to define loss, loss is often defined as change in fundamental value. For example, if an institution holds a loan that declines in market price because interest rates go up, but has no change in cash flows or credit quality, some systems do not recognize a loss. Or we could try to incorporate the economic cost of things not measured in daily financial statements, such as loss of market confidence or employee morale, impairment of brand names or lawsuits.[4] Rather than assuming a fixed portfolio over a fixed time horizon, some risk measures incorporate the effect of expected trading (such as a stop loss order) and consider the expected holding period of positions. Finally, some risk measures adjust for the possible effects of abnormal markets, rather than excluding them from the computation.[4] The VaR risk metric summarizes the distribution of possible losses by a quantile, a point with a specified probability of greater losses. Common alternative metrics are standard deviation, mean absolute deviation, expected shortfall and downside risk.[1]

VaR risk management


Supporters of VaR-based risk management claim the first and possibly greatest benefit of VaR is the improvement in systems and modeling it forces on an institution. In 1997, Philippe Jorion wrote:[13]

[T]he greatest benefit of VAR lies in the imposition of a structured methodology for critically thinking about risk. Institutions that go through the process of computing their VAR are forced to confront their exposure to financial risks and to set up a proper risk management function. Thus the process of getting to VAR may be as important as the number itself.

Publishing a daily number, on time and with specified statistical properties, holds every part of a trading organization to a high objective standard. Robust backup systems and default assumptions must be implemented. Positions that are reported, modeled or priced incorrectly stand out, as do data feeds that are inaccurate or late and systems that are too frequently down. Anything that affects profit and loss that is left out of other reports will show up either in inflated VaR or excessive VaR breaks. A risk-taking institution that does not compute VaR might escape disaster, but an institution that cannot compute VaR will not.[14]

The second claimed benefit of VaR is that it separates risk into two regimes. Inside the VaR limit, conventional statistical methods are reliable. Relatively short-term and specific data can be used for analysis. Probability estimates are meaningful, because there are enough data to test them. In a sense, there is no true risk because you have a sum of many independent observations with a left bound on the outcome. A casino doesn't worry about whether red or black will come up on the next roulette spin. Risk managers encourage productive risk-taking in this regime, because there is little true cost. People tend to worry too much about these risks, because they happen frequently, and not enough about what might happen on the worst days.[15]

Outside the VaR limit, all bets are off. Risk should be analyzed with stress testing based on long-term and broad market data.[16] Probability statements are no longer meaningful.[17] Knowing the distribution of losses beyond the VaR point is both impossible and useless. The risk manager should concentrate instead on making sure good plans are in place to limit the loss if possible, and to survive the loss if not.[1]

One specific system uses three regimes.[18]

1. One to three times VaR are normal occurrences. You expect periodic VaR breaks. The loss distribution typically has fat tails, and you might get more than one break in a short period of time. Moreover, markets may be abnormal and trading may exacerbate losses, and you may take losses not measured in daily marks, such as lawsuits, loss of employee morale and market confidence and impairment of brand names. So an institution that can't deal with three times VaR losses as routine events probably won't survive long enough to put a VaR system in place.

2. Three to ten times VaR is the range for stress testing. Institutions should be confident they have examined all the foreseeable events that will cause losses in this range, and are prepared to survive them. These events are too rare to estimate probabilities reliably, so risk/return calculations are useless.

3. Foreseeable events should not cause losses beyond ten times VaR. If they do, they should be hedged or insured, or the business plan should be changed to avoid them, or VaR should be increased. It's hard to run a business if foreseeable losses are orders of magnitude larger than very large everyday losses. It's hard to plan for these events, because they are out of scale with daily experience.

Of course there will be unforeseeable losses more than ten times VaR, but it's pointless to anticipate them; you can't know much about them, and it results in needless worrying. Better to hope that the discipline of preparing for all foreseeable three-to-ten times VaR losses will improve chances for surviving the unforeseen and larger losses that inevitably occur. "A risk manager has two jobs: make people take more risk the 99% of the time it is safe to do so, and survive the other 1% of the time. VaR is the border."[14]

VaR risk measurement


The VaR risk measure is a popular way to aggregate risk across an institution. Individual business units have risk measures such as duration for a fixed income portfolio or beta for an equity business. These cannot be combined in a meaningful way.[1] It is also difficult to aggregate results available at different times, such as positions marked in different time zones, or a high frequency trading desk alongside a business holding relatively illiquid positions. But since every business contributes to profit and loss in an additive fashion, and many financial businesses mark-to-market daily, it is natural to define firm-wide risk using the distribution of possible losses at a fixed point in the future.[4] In risk measurement, VaR is usually reported alongside other risk metrics such as standard deviation, expected shortfall and greeks (partial derivatives of portfolio value with respect to market factors). VaR is a distribution-free metric, that is, it does not depend on assumptions about the probability distribution of future gains and losses.[14] The probability level is chosen deep enough in the left tail of the loss distribution to be relevant for risk decisions, but not so deep as to be difficult to estimate with accuracy.[19] Risk measurement VaR is sometimes called parametric VaR. This usage can be confusing, however, because it can be estimated either parametrically (for example, variance-covariance VaR or delta-gamma VaR) or nonparametrically (for example, historical simulation VaR or resampled VaR). The inverse usage makes more logical sense, because risk management VaR is fundamentally nonparametric, but it is seldom referred to as nonparametric VaR.[4]
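As a contrast to the historical-simulation sketch earlier, a variance-covariance (parametric) estimate under a normality assumption can be sketched as follows. The portfolio value, daily volatility and square-root-of-time scaling are illustrative assumptions, not figures from the text.

```python
from math import sqrt
from statistics import NormalDist

# Minimal sketch: variance-covariance (parametric) VaR assuming normal returns.
# Made-up inputs: portfolio value and the daily volatility of its return.
portfolio_value = 10_000_000    # currency units
daily_vol       = 0.012         # standard deviation of daily return
confidence      = 0.99

z = NormalDist().inv_cdf(confidence)        # one-sided normal quantile (~2.33)
var = z * daily_vol * portfolio_value       # zero-mean approximation
ten_day_var = var * sqrt(10)                # common square-root-of-time scaling
print(f"1-day 99% VaR ~ {var:,.0f}, 10-day ~ {ten_day_var:,.0f}")
```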

Post-modern portfolio theory[1] (or "PMPT") is an extension of the traditional modern portfolio theory (MPT, which is an application of mean-variance analysis or MVA). Both theories propose how rational investors should use diversification to optimize their portfolios, and how a risky asset should be priced.

Harry Markowitz laid the foundations of MPT, the greatest contribution of which is the establishment of a formal risk/return framework for investment decision-making. By defining investment risk in quantitative terms, Markowitz gave investors a mathematical approach to asset selection and portfolio management. But there are important limitations to the original MPT formulation.

Downside risk

Downside risk (DR) is measured by target semi-deviation (the square root of target semivariance) and is termed downside deviation. It is expressed in percentages and therefore allows for rankings in the same way as standard deviation. An intuitive way to view downside risk is the annualized standard deviation of returns below the target. Another is the square root of the probability-weighted squared below-target returns. The squaring of the below-target returns has the effect of penalizing failures at an exponential rate. This is consistent with observations made on the behavior of individual decision-making under uncertainty.

Volatility skewness

Volatility skewness is another portfolio-analysis statistic introduced by Rom and Ferguson under the PMPT rubric. It measures the ratio of a distribution's percentage of total variance from returns above the mean to the percentage of the distribution's total variance from returns below the mean. Ignoring kurtosis risk will cause any model to understate the risk of variables with high kurtosis. For instance, Long-Term Capital Management, a hedge fund cofounded by Myron Scholes, ignored kurtosis risk to its detriment. After four successful years, this hedge fund had to be bailed out by major investment banks in the late 90s because it understated the kurtosis of many financial securities underlying the fund's own trading positions.[1]
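The downside-deviation definition above (square root of the probability-weighted squared below-target returns) can be sketched numerically. The return series and the 1% per-period target below are made-up illustrative values.

```python
import numpy as np

# Minimal sketch: downside deviation (target semi-deviation).
# Made-up monthly returns and a made-up target return of 1% per month.
returns = np.array([0.020, -0.015, 0.004, 0.031, -0.022, 0.012, 0.008, -0.005])
target = 0.01

shortfalls = np.minimum(returns - target, 0.0)      # only below-target part counts
downside_deviation = np.sqrt(np.mean(shortfalls ** 2))
print(f"downside deviation ~ {downside_deviation:.4f} per period")
```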

G. Distinguish between simple and multiple regression. Give examples. Also state the formulas for solving simple and multiple regression.

Simple Linear Regression (SLR) is a special regression model where:
* There is only one (numerical) independent variable.
* The model is linear both in the independent variable and, more importantly, in the parameters.

Differences between Simple and Multiple Linear Regressions

There are at least three reasons that argue for a special treatment of Multiple Linear Regression.
Matrix calculations, linear algebra

We studied Simple Linear Regression using "ordinary" equations and indeed, the same could be done with Multiple Linear Regression. But because there are now p independent variables instead of just one, calculations become exceedingly cumbersome. Fortunately, Linear Algebra is here to give us a helping hand, and provides for extremely compact and elegant calculations. As a matter of fact, Linear Algebra will be our almost exclusive tool throughout the Tutorials, and Multiple Linear Regression is an excellent pedagogic means for a soft introduction to several important aspects of Linear Algebra.
Variable selection

This point is more important for the analyst, and Multiple Linear Regression may very well be the first example he will meet of the bias-variance tradeoff, which we now summarize in a few words. Any model (whether predictive or descriptive) built from a data sample should not incorporate "too many" parameters (here, the βj). Beyond a certain limit:
* Its performance will keep improving if measured on the available data set, but will degrade if measured on new data, because of an increased variance of its predictions.
* The values of its parameters will become unstable (very sensitive to small changes in data), and therefore meaningless, again because of too large a variance of the estimated parameters.
These issues are of great practical importance, and the analyst will therefore have to spend considerable effort to select, among the numerous candidate independent variables, those that will ultimately be retained in the final model.
Collinearity of the predictors

Another source of variance of both the parameters and the predictions of a Multiple Linear Regression model is the possible collinearity of the predictors. Even a thorough selection of the independent variables cannot completely eradicate collinearity. Consequently, Multiple Linear Regression has developed specific techniques, like Ridge Regression, for circumventing this problem.

Formula for solving simple and multiple regression


Formula for simple regression:

Regression equation: y = a + bx
Slope: b = (NΣXY − (ΣX)(ΣY)) / (NΣX² − (ΣX)²)
Intercept: a = (ΣY − b(ΣX)) / N

where x and y are the variables,
b = the slope of the regression line
a = the intercept point of the regression line and the y axis
N = number of values or elements
X = first score
Y = second score
ΣXY = sum of the products of first and second scores
ΣX = sum of first scores
ΣY = sum of second scores
ΣX² = sum of squared first scores
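As a rough illustration of the formulas above, the following Python sketch computes the slope and intercept from the sums ΣX, ΣY, ΣXY and ΣX². The x and y values are made up for the example.

```python
# Minimal sketch: simple linear regression via the summation formulas above.
# The x/y data are made-up illustrative values, not from the text.

def simple_regression(x, y):
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))
    sum_x2 = sum(xi ** 2 for xi in x)
    b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)  # slope
    a = (sum_y - b * sum_x) / n                                   # intercept
    return a, b

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.0, 9.8]
a, b = simple_regression(x, y)
print(f"y = {a:.3f} + {b:.3f}x")
```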

Multiple regression analysis is similar to simple regression analysis. However, it is more complex conceptually and computationally. The general equation for the probabilistic multiple regression model is given by

y = β0 + β1x1 + β2x2 + ... + βkxk + ε

where
y = the value of the dependent variable
β0 = the regression constant
β1 = the partial regression coefficient for independent variable 1
β2 = the partial regression coefficient for independent variable 2
.....
βk = the partial regression coefficient for independent variable k
k = the number of independent variables
ε = the error term

The procedure for determining the formulas to solve for multiple regression coefficients is similar to that for simple regression coefficients. The formulas are established to meet the objective of minimizing the sum of squares of error for the model. Hence, the regression analysis shown here is referred to as least squares analysis. Methods of calculus are applied, resulting in k + 1 equations in k + 1 unknowns for a regression analysis with k independent variables. For multiple regression models with two independent variables, the result is three simultaneous equations with three unknowns (b0, b1 and b2).
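To make the least-squares idea concrete, here is a small NumPy sketch (with made-up data and two independent variables) that solves the three simultaneous normal equations for b0, b1 and b2; in matrix form this is the (XᵀX)b = Xᵀy system alluded to in the linear-algebra discussion above.

```python
import numpy as np

# Minimal sketch: least-squares coefficients for a model with two
# independent variables, using the normal equations (X'X) b = X'y.
# The data below are made-up illustrative values, not from the text.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
y  = np.array([5.1, 6.9, 12.2, 12.8, 17.1])

X = np.column_stack([np.ones_like(x1), x1, x2])  # column of 1s for b0
b = np.linalg.solve(X.T @ X, X.T @ y)            # solves the 3 simultaneous equations
print("b0, b1, b2 =", b)
```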

H. What is the difference between correlation and regression? Give an example of zero correlation and perfect correlation.

(1) Correlation answers the STRENGTH of linear association between paired variables, say X and Y. On the other hand, regression tells us the FORM of linear association that best predicts Y from the values of X.

(2a) Correlation is calculated whenever:
* both X and Y are measured in each subject and we want to quantify how much they are linearly associated;
* in particular, the Pearson product moment correlation coefficient is used when the assumption that both X and Y are sampled from normally distributed populations is satisfied;
* or the Spearman rank order correlation coefficient is used if the assumption of normality is not satisfied;
* correlation is not used when the variables are manipulated, for example, in experiments.

(2b) Linear regression is used whenever:
* at least one of the independent variables (Xi's) is used to predict the dependent variable Y. Note: some of the Xi's may be dummy variables, i.e. Xi = 0 or 1, which are used to code nominal variables;
* one manipulates the X variable, e.g. in an experiment.

(3) Linear regression is not symmetric in terms of X and Y. That is, interchanging X and Y will give a different regression model (i.e. X in terms of Y) from the original Y in terms of X. On the other hand, if you interchange variables X and Y in the calculation of the correlation coefficient you will get the same value of the correlation coefficient.

(4) The "best" linear regression model is obtained by selecting the variables (X's) with at least strong correlation to Y, i.e. >= 0.80 or <= -0.80.

(5) The same underlying distribution is assumed for all variables in linear regression. Thus, linear regression will underestimate the correlation of the independent and dependent variables when they (X's and Y) come from different underlying distributions.

Correlation and linear regression are not the same.

What is the goal? Correlation quantifies the degree to which two variables are related. Correlation does not fit a line through the data points. You simply are computing a correlation coefficient (r) that tells you how much one variable tends to change when the other one does. When r is 0.0, there is no relationship. When r is positive, there is a trend that one variable goes up as the other one goes up. When r is negative, there is a trend that one variable goes up as the other one goes down. Linear regression finds the best line that predicts Y from X.

What kind of data? Correlation is almost always used when you measure both variables. It rarely is appropriate when one variable is something you experimentally manipulate. Linear regression is usually used when X is a variable you manipulate (time, concentration, etc.).

Does it matter which variable is X and which is Y? With correlation, you don't have to think about cause and effect. It doesn't matter which of the two variables you call "X" and which you call "Y". You'll get the same correlation coefficient if you swap the two. The decision of which variable you call "X" and which you call "Y" matters in regression, as you'll get a different best-fit line if you swap the two. The line that best predicts Y from X is not the same as the line that predicts X from Y (however, both those lines have the same value for R2).

Assumptions: The correlation coefficient itself is simply a way to describe how two variables vary together, so it can be computed and interpreted for any two variables. Further inferences, however, require an additional assumption -- that both X and Y are measured, and both are sampled from Gaussian distributions. This is called a bivariate Gaussian distribution. If those assumptions are true, then you can interpret the confidence interval of r and the P value testing the null hypothesis that there really is no correlation between the two variables (and any correlation you observed is a consequence of random sampling). With linear regression, the X values can be measured or can be a variable controlled by the experimenter. The X values are not assumed to be sampled from a Gaussian distribution. The distances of the points from the best-fit line are assumed to follow a Gaussian distribution, with the SD of the scatter not related to the X or Y values.

Relationship between results: Correlation computes the value of the Pearson correlation coefficient, r. Its value ranges from -1 to +1. Linear regression quantifies goodness of fit with r2, sometimes shown in uppercase as R2. If you put the same data into correlation (which is rarely appropriate; see above), the square of r from correlation will equal r2 from regression.
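A quick numerical check of that last point, that the square of Pearson's r matches the regression R² on the same data, can be sketched as follows (made-up data, NumPy only):

```python
import numpy as np

# Made-up illustrative data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.3, 2.9, 4.1, 4.8, 6.2, 6.8])

r = np.corrcoef(x, y)[0, 1]                   # Pearson correlation coefficient

b, a = np.polyfit(x, y, 1)                    # least-squares slope and intercept
y_hat = a + b * x
ss_res = np.sum((y - y_hat) ** 2)             # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)          # total sum of squares
r_squared = 1 - ss_res / ss_tot               # regression R^2

print(round(r ** 2, 6), round(r_squared, 6))  # the two agree
```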

Example of zero correlation and perfect correlation


Correlation is a statistical measurement of the relationship between two variables. Possible correlations range from +1 to −1. A zero correlation indicates that there is no relationship between the variables. A correlation of −1 indicates a perfect negative correlation, meaning that as one variable goes up, the other goes down. A correlation of +1 indicates a perfect positive correlation, meaning that both variables move in the same direction together.

B. What is rank correlation? Give an example. When should rank correlation be used?

In statistics, Spearman's rank correlation coefficient or Spearman's rho, named after Charles Spearman and often denoted by the Greek letter ρ (rho) or as rs, is a nonparametric measure of statistical dependence between two variables. It assesses how well the relationship between two variables can be described using a monotonic function. If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other. The Spearman correlation coefficient is often thought of as being the Pearson correlation coefficient between the ranked variables. In practice, however, a simpler procedure is normally used to calculate ρ. The n raw scores Xi, Yi are converted to ranks xi, yi, and the differences di = xi − yi between the ranks of each observation on the two variables are calculated. If there are no tied ranks, then ρ is given by:[1][2]

ρ = 1 − (6 Σ di²) / (n(n² − 1))

Example
In this example, we will use the raw data in the table below to calculate the correlation between the IQ of a person and the number of hours spent in front of the TV per week.

IQ, Xi: 106, 86, 100, 101, 99, 103, 97, 113, 112, 110
Hours of TV per week, Yi: 7, 0, 27, 50, 28, 29, 20, 12, 6, 17

To do so we use the following steps.

First, we must find the value of the term di², as reflected in the table below.

1. Sort the data by the first column (Xi). Create a new column xi and assign it the ranked values 1, 2, 3, ..., n.
2. Next, sort the data by the second column (Yi). Create a fourth column yi and similarly assign it the ranked values 1, 2, 3, ..., n.
3. Create a fifth column di to hold the differences between the two rank columns (xi and yi).
4. Create one final column di² to hold the value of column di squared.

IQ, Xi | Hours of TV per week, Yi | rank xi | rank yi | di | di²
86     | 0   | 1  | 1  | 0  | 0
97     | 20  | 2  | 6  | −4 | 16
99     | 28  | 3  | 8  | −5 | 25
100    | 27  | 4  | 7  | −3 | 9
101    | 50  | 5  | 10 | −5 | 25
103    | 29  | 6  | 9  | −3 | 9
106    | 7   | 7  | 3  | 4  | 16
110    | 17  | 8  | 5  | 3  | 9
112    | 6   | 9  | 2  | 7  | 49
113    | 12  | 10 | 4  | 6  | 36

The value of n is 10.

With the di² values found, we can add them to find Σdi² = 0 + 16 + 25 + 9 + 25 + 9 + 16 + 9 + 49 + 36 = 194. These values can now be substituted back into the equation:

This evaluates to ρ = 1 − (6 × 194) / (10 × (10² − 1)) = −0.175757575..., with a P-value = 0.6864058 (using the t distribution). This low value shows that the correlation between IQ and hours spent watching TV is very low. In the case of ties in the original values, this formula should not be used. Instead, the Pearson correlation coefficient should be calculated on the ranks (where ties are given ranks, as described above).
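The same computation can be sketched in Python; this reproduces the ranking, the di differences and the formula above for the IQ and TV-hours data (there are no ties, so the simple formula applies).

```python
# Minimal sketch: Spearman's rho for the IQ / TV-hours data above (no ties).

iq    = [106, 86, 100, 101, 99, 103, 97, 113, 112, 110]
hours = [7, 0, 27, 50, 28, 29, 20, 12, 6, 17]

def ranks(values):
    # rank 1 for the smallest value, n for the largest (no ties in this data)
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

rx, ry = ranks(iq), ranks(hours)
d2 = [(a - b) ** 2 for a, b in zip(rx, ry)]
n = len(iq)
rho = 1 - 6 * sum(d2) / (n * (n ** 2 - 1))
print(round(rho, 6))   # approximately -0.175758
```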

The Spearman's Rank Correlation Coefficient is used to discover the strength of a link between two sets of data.

C. What are growth rates? Give examples of different types of growth rates.

What Do Growth Rates Mean? The amount of increase that a specific variable has gained within a specific period and context. For investors, this typically represents the compounded annualized rate of growth of a company's revenues, earnings, dividends and even macro concepts, such as the economy as a whole.

Investopedia explains Growth Rates: Different types of industries have different benchmarks for rates of growth. For instance, companies on the cutting edge of technology are more likely to have higher annual rates of growth compared to a mature industry, like retail sales. The use of historical growth rates is one of the simplest methods of estimating future growth. However, historically high growth rates don't always mean a high rate of growth looking into the future, because industrial and economic conditions change constantly. For example, the auto industry has higher rates of revenue growth during good economic times. However, in times of recession, consumers are more inclined to be frugal and not spend disposable income on a new car.

A percent growth rate (sometimes referred to as percent change, growth rate, or rate of change) is a useful indicator to look at how much a population is growing or declining in a particular area. It is also useful when comparing the growth or decline of populations in two different areas or regions. But percent growth rate can be used in other studies besides population (such as employment, unemployment, economic factors, etc.). Any number from one time and any number from another time can be put into the calculation to determine growth rate. The rate of change (percent change, growth rate) from one period to another is calculated as follows:

Percent change = (value at end of period − value at beginning of period) / value at beginning of period × 100

Another way of expressing the equation for growth rate or percent change is:

Percent change = (Vpresent − Vpast) / Vpast × 100

In this formula, Vpresent = present or future value, and Vpast = past or present value.

Example to calculate growth rate or percent change: a particular city has a population of 800,000 in 1990 and a population of 1,500,000 in 2008. To find the growth rate of the population in this city, do the following:

Growth Rate = (1,500,000 − 800,000) / 800,000 × 100 = 87.5 percent
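As a small sketch, the same calculation in Python (the percent_change helper and its argument names are just illustrative):

```python
def percent_change(v_past, v_present):
    # Percent change = (Vpresent - Vpast) / Vpast * 100
    return (v_present - v_past) / v_past * 100

print(percent_change(800_000, 1_500_000))  # 87.5
```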

Types of growth rates

Exponential growth, a growth rate classification.

Exponential growth represents items that grow by a certain percentage each time period rather than by a fixed amount. For example, if you left money in a savings account at a bank instead of in a glass jar at home, the bank would pay you interest. The amount of interest would depend on how much money you had in the account, so as the balance of the account increased, so would the amount of interest earned. Exponential growth rates can be identified by larger increases in the amount over time.

Compound annual growth rate or CAGR, a measure of financial growth

Compound annual growth rate (CAGR) is a business and investing specific term for the smoothed annualized gain of an investment over a given time period:

CAGR(t0, tn) = (V(tn) / V(t0))^(1 / (tn − t0)) − 1

where V(t0) is the start value, V(tn) the end value, and tn − t0 the number of years. Suppose the revenues of a company over four years, V(t) in the formula above, have been:

Year: 2004, 2005, 2006, 2007
Revenues: 100, 115, 150, 200

tn − t0 = 2007 − 2004 = 3

Then, calculating CAGR gives:

CAGR = (200 / 100)^(1/3) − 1 = 0.2599 = 25.99%
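A minimal Python check of that figure:

```python
# Minimal sketch: CAGR for the revenue example above (100 -> 200 over 3 years).
def cagr(v_start, v_end, years):
    return (v_end / v_start) ** (1 / years) - 1

print(round(cagr(100, 200, 3), 4))  # 0.2599, i.e. about 25.99% per year
```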

Economic growth, the increase in value of the goods and services produced by an economy

Economic growth is the increase of per capita gross domestic product (GDP) or other measures of aggregate income, typically reported as the annual rate of change in real GDP. Economic growth is primarily driven by improvements in productivity, which involves producing more goods and services with the same inputs of labor, capital, energy and materials. Economists draw a distinction between short-term economic stabilization and long-term economic growth. The topic of economic growth is primarily concerned with the long run. The short-run variation of economic growth is termed the business cycle.

Growth rate (group theory), a property of a group in group theory

In group theory, the growth rate of a group with respect to a symmetric generating set describes the size of balls in the group. Every element in the group can be written as a product of generators, and the growth rate counts the number of elements that can be written as a product of length n. Example: the triangle groups include 3 finite groups (the spherical ones, corresponding to the sphere), 3 groups of quadratic growth (the Euclidean ones, corresponding to the Euclidean plane), and infinitely many groups of exponential growth (the hyperbolic ones, corresponding to the hyperbolic plane).

Population growth rate, change in population over time.

Population growth is the change in a population over time, and can be quantified as the change in the number of individuals of any species in a population using "per unit time" for measurement. In biology, the term population growth is likely to refer to any known organism, but this article deals mostly with the application of the term to human populations in demography. In demography, population growth is used informally for the more specific term population growth rate, and is often used to refer specifically to the growth of the human population of the world.

D. What are the uses of growth rates in management ?

The BCG matrix (aka B.C.G. analysis, BCG-matrix, Boston Box, Boston Matrix, Boston Consulting Group analysis, portfolio diagram) is a chart created by Bruce Henderson for the Boston Consulting Group in 1968 to help corporations analyze their business units or product lines. This helps the company allocate resources and is used as an analytical tool in brand marketing, product management, strategic management, and portfolio analysis.[1]

USING THE SUSTAINABLE GROWTH RATE

The concept of sustainable growth can be helpful for planning healthy corporate growth. This concept forces managers to consider the financial consequences of sales increases and to set sales growth goals that are consistent with the operating and financial policies of the firm. Often, a conflict can arise if growth objectives are not consistent with the value of the organization's sustainable growth. If a company's sales expand at any rate other than the sustainable rate, one or some combination of the four ratios must change. If a company's actual growth rate temporarily exceeds its sustainable rate, the required cash can likely be borrowed. When actual growth exceeds sustainable growth for longer periods, management must formulate a financial strategy from among the following options: (1) sell new equity; (2) permanently increase financial leverage (i.e., the use of debt); (3) reduce dividends; (4) increase the profit margin; or (5) decrease the percentage of total assets to sales.
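The passage above does not spell out the four ratios; in the common Higgins-style formulation (an assumption here, not something stated in the text) they are taken to be the profit margin, the retention ratio, the asset turnover and the financial leverage, whose product gives a simplified sustainable growth rate. A rough sketch with illustrative numbers:

```python
# Hedged sketch (assumes the common Higgins-style formulation; the text above
# does not spell out the four ratios). Illustrative numbers only.
profit_margin  = 0.06   # net income / sales
retention      = 0.60   # 1 - dividend payout ratio
asset_turnover = 1.50   # sales / total assets
leverage       = 2.00   # total assets / equity

sustainable_growth = profit_margin * retention * asset_turnover * leverage
print(f"sustainable growth rate ~ {sustainable_growth:.1%}")  # ~ 10.8%
```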

The Gordon growth model is a tool that is commonly used to value stocks. Originally developed by Professor Myron Gordon and also known as Gordon's growth model, it aims to value a stock or company in today's terms, using discounted cash flows to take into account the present value of future dividends.
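In its standard constant-growth form (a textbook formulation, not spelled out in the passage above), the model values a stock as the next dividend divided by the difference between the required return and the dividend growth rate, P0 = D1 / (r − g), with r > g. A tiny sketch with illustrative numbers:

```python
# Minimal sketch of the constant-growth (Gordon) dividend discount model.
# Illustrative numbers; requires r > g.
def gordon_value(next_dividend, r, g):
    return next_dividend / (r - g)

print(gordon_value(next_dividend=2.00, r=0.08, g=0.03))  # 40.0
```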

Fundamental analysis of a business involves analyzing its financial statements and health, its management and competitive advantages, and its competitors and markets. When applied to futures and forex, it focuses on the overall state of the economy, interest rates, production, earnings, and management. When analyzing a stock, futures contract, or currency using fundamental analysis, there are two basic approaches one can use: bottom-up analysis and top-down analysis.[1] The term is used to distinguish such analysis from other types of investment analysis, such as quantitative analysis and technical analysis.
