Group 1


BUSINESS RESEARCH ISSUES

Business Research Methods

1/10/2011
KIIT School of Management

Submitted to:
Prof. R.N. Subudhi

Submitted by:
Avik Sarkar, 10202134
Biswajit Nayak, 10202137
Pralina Jena, 10202145
Om Prakash Panda, 10202164
Bimla Acharya, 10202165
Abhirup Chakraborty, 10202183
1. WHY DON'T YOUTH TAKE UP POLITICS AS A CAREER?
• Politics Is Wrong for a Career Starter

Politics, theoretically and practically, is probably the world's most biased career. A politician, therefore, needs to care about nothing but power and to be meticulously biased toward his or her political party. That is why I strongly advise anyone against entering politics at the start of a career.

As a career-starter, you need to be curious about everything: opening your eyes and ears to learn in a neutral way, making time to get involved in the activities of your workplace and of society as a whole, thinking and working like a researcher (learning as well as analysing neutrally), and, finally, continually sharpening your skills through reading and other forms of socializing. Politics is an absolute barrier to these self-improving activities.

In business-oriented nations like Singapore, the United States of America, the United Kingdom, France, Japan, Hong Kong, South Korea and Taiwan, young graduates have more options that allow them not to enter politics at the start of a career, since these nations offer diversified careers in business and other non-governmental sectors. In low-income and developing countries in Southeast Asia, like Laos, Vietnam, Cambodia and Burma, and in many countries in Africa, young graduates and youths may accept any job, low-paying or unpaid, for experience that might help boost their future job prospects. I deeply regret that in Cambodia political parties are using youth and young graduates to campaign for them, since youth and young graduates have the greatest potential and the greatest numbers.

To me, reading books in the library, browsing the internet, and participating in public lectures and presentations offered by NGOs, universities, governments, business institutions and other interest groups are more beneficial than working for any political party at the start of a career.

This letter is intended as an appeal to all young graduates and youth in the world not to enter politics at the start of their careers, and to all political parties and governments in the world to keep these youth and young graduates out of politics.

Source: http://www.ehow.com/how_2060883_make-career-politics.html

• Politics: An interesting career opportunity


This interesting career opportunity is an umbrella that can accommodate people of many different backgrounds. The basic job positions in this field begin with MLA (Member of the Legislative Assembly) and MP (Member of Parliament).

With the Indian economy scaling new heights with every passing day, it offers numerous lucrative careers for youngsters. Apart from established careers such as medicine, engineering and chartered accountancy (CA), there are many new avenues where a career can be carved out and dreams can be materialised, including SEO, BPO, KPO, clinical research, sports, hotel management, retail, fashion design, beauty services and content writing.

However, only a few give a thought to an age-old opportunity: becoming a politician. One can also begin in freelance mode and contest elections at the corporation level.

While entry to both legislative houses is permitted only to those above 25 years of age, it doesn't mean that one cannot enter this sacred profession earlier! One can begin with university and college elections too.

The field has no prescribed minimum educational qualification; some political parties prefer candidates who have acquired degrees or diplomas from fake universities and colleges, as this will help such people launch fake universities and colleges of their own once they are elected.

For professional experience, while many parties simply demand leadership qualities, there are still a few who want a minimum of five years of experience in the trade. The latter group looks for opportunists who have never shown faith in one party and who will go to any lengths: bribing other politicians to buy their votes during a no-confidence motion, or bribing officers to get their work done at any cost, especially for building flats illegally and getting government jobs for relatives.

A select few parties look for honest souls; a few others look for fraudsters, or for unpunctual and undisciplined people with past criminal records in kidnapping, murder and extortion.

Source: www.merinews.com/article/politics-an-interesting-career-opportunity/15786351.shtml

• Career in Politics in India

It is wonderful that young people like you are showing an interest in participating in our
country's political process. The number of people taking part in the panchayats or other local
or municipal bodies in the past few years has grown more than ever before.

Sure, belonging to a political family helps you to get your foot in the door but beyond that it
is entirely up to your performance and capability. If you are not up to the job, no party or
voters will re-elect you.

Here is a good opportunity to test the waters and get close to the scene of action.

Designed for exceptional young Indians like you who are seeking opportunities to widen their understanding of the nature of politics and policy-making in India, the Legislative Assistants to Members of Parliament programme is modelled on similar exposure offered to youth in other democracies such as the US and the UK.

Besides providing valuable experience, the opportunity to assist MPs in their work in Parliament will serve as a launch-pad for future careers in management, law, public policy, journalism and other fields.

Each Legislative Assistant (LA) will be assigned to a specific MP, catering to that MP's individual research needs and duties. LAs will help the MP frame questions, raise issues and prepare for committee meetings and speeches.
The Indian Parliament has about 790 members affiliated to over 40 political parties. Each
Member of Parliament (MP) represents close to two million citizens.

MPs pass about 60 bills a year on average, but do not have dedicated research staff or institutional support to help them understand the implications of various pieces of legislation. This makes it difficult for them to prepare for debates and committee meetings.

The programme is coordinated by PRS Legislative Research, an independent research initiative that works with Members of Parliament (MPs) across party lines to provide research support on legislative and policy issues.

As the focus of the programme is on Parliament, the LAs need to be based in Delhi for the entire period of the programme: 10 months full-time (July 2010 – May 2011).

You will receive a stipend of Rs 10,000 pm + Rs 2,000 pm for internet, telephone and conveyance costs. Motivated fresh graduates in any discipline can send in their CV, along with a 500-word description of why they would like to participate in the programme, to talent@prsindia.org with 'Legislative Assistants for MPs Programme' in the subject line. Shortlisted candidates will be interviewed in the week starting June 21. Offers will be made on June 28. The programme commences on July 12. The application deadline is June 16.

Source: http://careers.icbse.com/career-in-politics-in-india/

2. DOES TECHNOLOGY CREATE UNEMPLOYMENT?

• Does More Technology Create Unemployment?

Each new generation brings the reemergence of many of the fears of the past, requiring the
repetition of old explanations to put them to rest. Today there is a renewed concern that
technological advancement may displace much of the manufacturing (and other) work force,
creating widespread unemployment, social disruption, and human hardship. For example, in
1983 the Upjohn Institute for Employment Research forecast the existence of 50,000 to
100,000 industrial robots in the United States by 1990, resulting in a net loss of some 100,000
jobs. Barry Bluestone, perhaps foremost among today's gloomy economists, is also worried
about the future. He argues that "capital hypermobility" requires that America "reestablish the
social safety net and extend the range of the regulatory system to make that net even more
secure." Harvard's Robert Reich completes the theme that government must act by arguing
that America's industrial policy "is the by-product of individual corporate strategies whose
goals may have little to do with enhancing the standard of living of Americans." He further
states that our current industrial policy creates jobs that are "lower-skilled and routine,
eventually to be replaced by robots and computers."

What are we to make of all these claims and predictions and the rhetoric that surrounds them?
Conservative economic thinkers tend to disparage persons who fear the rapid advance of
technology by labeling them "Luddites." This term is both unfair and inaccurate. The real
Luddites, of the early 1800s, were uneducated working people who destroyed textile
machinery and other symbols of advancing technology, which, despite their efforts, were to
move the broad spectrum of humanity above the subsistence level for the first time. Today's
proponents of economic activism are typically not of the working class and are usually quite
well educated. Nobel laureate Wassily Leontief, who gave the keynote speech for the
National Academy of Engineering at its 1983 symposium "The Long-Term Impact of
Technology on Employment and Unemployment," cannot fairly be called a Luddite, yet he
expressed concern about what he saw as technological advancement's undesirable
distributional effects across income groups.

R. H. Mabry is professor of finance at Clemson University. A. D. Sharplin is professor of management at Northeast Louisiana University.

In part, opposition to technology springs simply from a more or less visceral fear of
scientism, which is often taken to imply the dehumanization of humankind. Mainly, though,
the warnings heard today are thoughtful and well intentioned, even if often in error or
somewhat self-serving. Flatly in error are those that predict no more jobs for a very large
sector of the population as a result of advancing technology, creating a massive problem of
involuntary unemployment. It is not at all clear that a large number of jobs are about to be
destroyed; even if they were, such long-run unemployment as would occur would certainly
not be involuntary. Rather, it would take the form of even shorter workdays, shorter workweeks, and fewer working members in the family, as it has throughout our history.

Some who correctly anticipate that technological change may produce short-run employment-
adjustment problems overstate those problems. They also often fail to mention that the short-
run unemployment that occurs is primarily the result of artificial imperfections--a lack of
competition--in certain labor and product markets. The amount of short-run unemployment
created by advancing technology, as well as the amount of howling (or lobbying), is directly
related to the degree of artificiality in the particular labor markets affected. It will be argued
below that the workers harmed by technological advancement are those who have been
receiving wages in excess of the amount they would receive in a fully competitive labor
market. In other words, they have been receiving economic rent. It will be further argued that
those workers remain unemployed when displaced by technology because they seek to regain
their former employment or seek employment in another industry that pays excessive wages.
In other words, they are unemployed because they are rent seekers. Finally, the effects of
slow and rapid technological change will be discussed. The rate of change can serve as a
basis for reasoned debate of some of the legitimate social concerns facing our society as a
result of technological advancement, given the institutional imperfections already existing in
the labor market.

Source: R. H. Mabry and A. D. Sharplin

• Structural Unemployment and Technology

Previously, I've argued here that job automation technology might someday advance to the
point where most routine or repetitive jobs will be performed by machines or software, and
that, as a result, we may end up with severe structural unemployment. The latest weekly
report shows an increase of 25,000 in new unemployment claims--instead of the decrease
expected by economists. Clearly, the economy continues to struggle with job creation, and I
think that automation is playing a significant role.

Catherine Rampell recently wrote an article in the New York Times that delves into the
impact this issue is having in the lives of typical workers. As the article points out:
For the last two years, the weak economy has provided an opportunity for employers to do
what they would have done anyway: dismiss millions of people — like file clerks, ticket
agents and autoworkers — who were displaced by technological advances and international
trade.

Rampell's piece does an especially good job of capturing the denial that is likely to continue
to be associated with this issue:

Ms. Norton is reluctant to believe that her three decades of experience and her typing talents,
up to 120 words a minute, are now obsolete. So she looks for other explanations.

"You can't replace the human thought process," she says. "I can anticipate people's needs. Usually, I give them what they want before they even know they need it. There will never be a machine that can do that."

The fact is that there will very soon be machines and software algorithms that can very
effectively anticipate needs and perform increasingly complex (and high-paying) jobs that
require higher and higher skill levels. The unwillingness to acknowledge this reality and
confront its implications extends not just to impacted workers, but also to economists and
policy makers—virtually all of whom are either in denial or oblivious to this issue.

Many mainstream economists are projecting that unemployment will remain high for years to
come—but there is a near universal expectation that eventually, the problem will correct itself
and we will gravitate back to something close to full employment.

In my book, The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future (now available as a free PDF eBook), I predict that advancing automation will lead to severe and persistent structural unemployment.

Most economists are likely to dismiss this prediction. Many will suggest that new technology will result in the creation of new industries and new employment sectors. The problem with that assumption is that the new industries created nearly always tend to be very capital-intensive and employ relatively few workers--while at the same time making available technologies that are highly disruptive to more traditional labor-intensive sectors that employ millions of people.

The worker that Rampell interviewed for her article has ended up taking what is often the job of last resort: a part-time position at Wal-Mart. But what will happen when Wal-Mart begins to employ significant automation? Is that really unthinkable?

To see the difference in employment between a traditional industry and a new-technology industry, compare Wal-Mart (over 2 million employees and revenue of about $180,000 per worker) with Google (20,000 employees and over a million dollars of revenue per worker).
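The contrast can be made concrete with a quick back-of-the-envelope calculation. This is only a sketch using the article's approximate figures quoted above, not audited financials:

```python
# Rough revenue-per-worker comparison using the article's approximate figures.
companies = {
    # name: (employees, approximate revenue per worker in USD)
    "Wal-Mart": (2_000_000, 180_000),
    "Google": (20_000, 1_000_000),
}

for name, (employees, rev_per_worker) in companies.items():
    total_revenue = employees * rev_per_worker
    print(f"{name}: ~${total_revenue / 1e9:.0f}B total revenue from "
          f"{employees:,} employees (~${rev_per_worker:,} per worker)")

ratio = companies["Google"][1] / companies["Wal-Mart"][1]
print(f"Google's revenue per worker is roughly {ratio:.1f}x Wal-Mart's")
```

On these rough numbers, Google generates more than five times the revenue per worker while employing about 1% of Wal-Mart's workforce, which is exactly the labor-intensity gap the comparison is meant to illustrate.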
A similar story can be seen in the DVD rental industry. How many workers are (or have been) employed by Blockbuster in its thousands of retail locations, as compared with Netflix in a few highly automated distribution centers? The inevitable migration from delivering DVDs through the mail to instantly available streaming video can only accelerate that trend.

The fact is that structural unemployment is here to stay. It will very likely get worse, and it
will increasingly impact workers with college educations and high skill levels. Those with
few skills and little education have been the first to feel the brunt, but machines are getting
better and smarter.

Source: http://www.angrybearblog.com/2010/05/structural-unemployment-and-technology.html

• Technology and Unemployment in the Short Run: Distributional Effects and Rent Seeking

While technological advancement over the long run does not lead to unemployment
problems, but rather is the engine for higher standards of living with either more or less
employment at the discretion of individuals, short-run problems may certainly arise from
technological advancement if there are imperfections in labor and product markets. If there
are no such imperfections, technological advancement in a given industry will not lead to
prolonged or significant unemployment.

With perfectly competitive markets, technological advancement in one industry simply releases labor resources for other uses, in that industry or in another. Will jobs be available in
other industries? Yes. Several scenarios demonstrating this principle are possible in
completely free markets. First, when technical change lowers costs in a given industry, the
competitive firms comprising that industry must lower their prices, generating larger sales
and an even greater need for employment. In this case, employment goes up, not down, and
with the increased competition for workers, wages rise in all industries capturing some of the
value of the technological change for workers.

Second, when technical change in a given industry is labor saving, but its downward effect on
product prices does not result in larger quantities sold sufficient to provide the same amount
of employment in the industry as before the change, then temporary unemployment occurs.
However, jobs are available elsewhere in competitive markets. If nothing else, wages are bid down enough in other industries to absorb the released labor. But the savings in the industry
where the advancement occurs must also be taken into account. Either more money goes to
remaining workers in that industry, so that they raise the demand for other products, thus
enabling the released labor to be employed in other industries without lower wages; or
product prices are lower in the automated industry, so that consumers can buy the same
amount and have income left over to demand more products from other industries, again
enabling the released labor to be employed in those expanding industries without lower
wages.

Thus, while some unemployment may occur when there is technological advancement in
competitive markets, it is both temporary and a natural consequence of the ability to change
jobs freely. It is certainly not a social problem requiring any sort of government action. There
is an unemployment problem in the short run only when markets are not freely competitive.
The interesting point is that most labor markets and product markets that are restricted or are
not competitive in some way are imperfect precisely because of past and current government
action.

In light of the definition of "economic rent" given above, the reason that rents earned by
workers cause unemployment in the short run is really quite straightforward. If workers
receive some $22 per hour (as they do in the auto industry, counting benefits) when the
comparable figure in similar jobs is $12, then it is difficult for those workers to accept the
$12 alternative when they are laid off for any reason, including a technological advancement
that allows the industry to produce the same output with fewer workers. These workers
remain unemployed for longer periods than they otherwise would because they hope to obtain
employment again at the noncompetitive, higher wage rate to which they are accustomed.
They are rent seekers, and the rent they seek is the $10 difference between their alternative
competitive wage and the wage formerly obtained through some degree of monopoly power
in the original labor market.

It is important to understand at this point that rents cannot be obtained by workers unless
there is monopoly power in both the labor and product markets of an industry. A strong union
that raises wages above the competitive level in a firm that faces competition in its product
market will soon drive the company out of business. On the other hand, restrictions in the
selling market--tariffs on imported goods, for example-- will do workers little good without
some monopoly power over hiring. In order to raise wages, workers must be able to restrict
employment or limit entry into the particular labor market, which is most conveniently
accomplished in manufacturing settings through unions. It should be added that firms in an
industry and their trade groups also actively support workers' efforts to obtain rents if these
efforts are aimed at restricting competition in product markets, thus helping the firms to earn
rents, too. This is what happened in the automobile industry, for example. Further, successful
rent seeking is possible only with the help of government enforcement. Hence, a partnership
is forged between labor and industry management, with the help of a new "visible hand"--
government.

It is no small matter that the rents obtained by workers through labor- and product-market
restrictions provide a strong and perverse incentive for the industry to develop and employ
new labor-saving technology. Knowing that, unions strive to retard technological advancements,
often seeking government help in doing so. The partnership between labor and industry
managers may begin to fall apart if wages are pushed higher than the restrictions in the
product market can support through above-competitive product prices. In order to continue
paying wages that include rents, firms and labor must continually press for added monopoly
restrictions in the product market. In the auto industry, for example, more tariffs and quotas
must be requested from government "to save auto jobs."

Government intervention leads to an even larger problem. Additional restrictions, protecting ever-higher prices and wages, lead to two reactions that ultimately cause the demise of the
labor-industry partnership that the rent seekers have formed. First, American markets present
themselves as increasingly attractive targets for foreign competition. Japanese manufacturers,
for instance, knew they could make better cars and steel products at far lower cost than their
American counterparts. In the 1970s, Japan began to press for reductions in U.S. tariffs and quotas, using its considerable bargaining power as an importer of U.S. farm and other products.

Second, American consumers begin to realize that they are paying prices well above
competitive levels and no longer support their labor and industry brethren in the halls of
Congress. Consequently, product barriers begin to fall, as they did in 1981 for the auto and
steel industries. At that point, the industries could no longer pay the same rents to workers as
before and began to renegotiate wage packages downward and match foreign competitors' use
of technology. While "voluntary quotas" in the automobile industry remained in effect until
1985, the industry is now on a path toward fuller international competition, the use of
appropriate new technology, and perhaps the use of much less labor. Though more capital
will be used relative to labor, it is possible that sales will expand enough to employ the same
amount of labor as before, or perhaps more, in the long run. There will be some short-run
unemployment, however.

Further complicating the problem is the fact that when the artificial barriers protecting either
the workers (industry-wide bargaining) or the products (trade restrictions) in an industry do
break down, there is likely to be a great deal of unemployment all at once. While sizable
unemployment ought to make it apparent to workers that they cannot all obtain jobs paying
their former rents, and thus should make the adjustment period go more quickly as workers
accept lower-paying jobs, many of them will persist in seeking rent-paying jobs anyway.
Large numbers of rent seekers band together to reverse the situation, doing so at the expense
of other workers and of consumers in general. These workers are fully supported by workers
who remain in the affected industry, usually through their unions, who fear they may lose
their rent-paying jobs, too. It is important to bear in mind that the sum of these rents is large
and that each individual has the incentive to spend up to the value of his share to preserve or
regain his rent-paying job. That is why major lobbying efforts can be mounted to get
government to reestablish trade restrictions or to uphold such labor-market restrictions as
industry-wide bargaining agreements.

Many of the cries for government intervention in the economic system undoubtedly come
from rent seekers who promote the interests of one or another of America's pluralities against
the interests of society as a whole. Farmers and their lobbyists, using such nice words as
"parity," call for continued price supports, loan subsidies, and cash or in-kind payments for
withholding crop land from production. Other groups are said to be on welfare when they
receive payments for not producing. Economic studies done for the Associated General
Contractors of America by Georgetown University's Douglas Brown and by Data Resources,
Inc., "prove" that increasing government spending on construction would increase long-term
employment and output. Automobile, steel, and textile industry executives--supported by
workers, labor unions, and suppliers, with whom they share the rents they extract from
society--want import quotas and tariffs. In general, these groups know that their efforts to get
a larger slice of the American pie necessarily make that pie smaller. The pie shrinks not only
because such efforts consume resources that could be better spent in the production of goods
and services, but because government-- broadly construed--does respond, creating rents and
causing resources to be misallocated. Of course, most economists consider such government
responses ill advised.

Beyond the question of efficiency, many believe such specialized intrusions by government
violate the constitutional rights of Americans to equal treatment under the law. Consider the
following case. An amendment to the Fair Labor Standards Act in 1942 allowed the
development of regulations that over time made New England's cottage textile industry less
feasible. These regulations imposed several burdensome restrictions, effectively prohibiting
certain types of knitting in the home, for example. The regulations were successfully opposed
in 1984, however, on the basis that they set an unacceptable precedent.[15] Opponents of the
regulations argued before the Supreme Court in Breen v. International Ladies Garment
Workers' Union that if textile unions received constitutional protection against competition
from families working in their homes, then any other industry might demand similar
protection. Perhaps commercial picture framers could shut down family picture-framing
enterprises operating from garages.
It is important to note that opposition to such restrictive laws is not usually successful. One of
the main arguments for government protection of various industry segments today is the
claim that markets, especially labor markets, are not free and that each group therefore has a
right to the same protections and resulting subsidies its competitors receive. Joseph Schumpeter recognized the insidious nature of this kind of thinking and forecast that it would inevitably destroy capitalism as we know it.

Source: http://www.cato.org/pubs/pas/pa068.html

• Technological Advancement and the Depreciation-of-Human-Capital Argument

Schumpeter argued that innovation drives any capitalist economy. He also noted that
innovation is a process of "creative destruction," in which new capital equipment renders old
capital equipment obsolete. Implicit in Schumpeter's writings is the idea that technological
advancement can also make human skills obsolete or at least depreciate them significantly.

Much of the fear of human-capital depreciation may be grounded in the recent massive
layoffs in the U.S. automobile and steel industries. In both cases, oligopolistic structures,
coupled with government-supported multi-company collective bargaining and trade barriers,
restrained innovation for many years. Companies were able to saddle American consumers
with the rents they paid workers and themselves. They were also able to pass along the costs
of using suboptimal combinations of labor and machines, namely, too few machines and too
much labor for the given output level.

The automobile makers met their comeuppance in 1981 and 1982, when circumstances
required them to take giant steps toward optimality. The lowering of trade barriers (though
softened by "voluntary" quotas in their place), the decreased demand for U.S. cars caused by
high interest rates and high fuel costs, and the targeting of U.S. markets by Japanese
producers resulted in extensive layoffs of workers and large losses for the "big four"
automobile producers. As noted earlier, the companies began to recover by automating
aggressively, by bargaining away some of the rents they had been paying, and by continuing
to request and get some important protection. A similar sequence of events occurred in the
steel industry.

When technological advancement is incremental, as it usually is when forms of monopoly power do not inhibit it, such disruptions do not occur. In an economy where huge rents are
allowed to exist--nearly half of a typical automobile worker's income is clearly economic
rent--there is the continual threat that market forces will cause those rents to be eliminated
rapidly or, as happened in the automobile industry, cause the replacement of much of the
human capital with physical capital. Persons who oppose technological advancement on this
basis argue that it is inequitable to allow market forces to depreciate human capital. Taken to
an extreme, of course, that view would negate all innovation.

Even in its milder forms, the argument is insidious. As in the auto industry, restrictions on
free markets, including labor markets, produce upheavals that serve as the pretext for further
restrictions. In fact, because labor continues to demand its rents, with government support but
now without the previous level of protection in the product market, it may be that the capital-
labor equation has been altered to overly favor capital. With high wages and less protection to
enable industry to pass the cost of these rents on to consumers, industry now must use even
more capital to replace labor than if government allowed wages to fall to competitive levels.
It is now apparent that there is and will be less employment in the automobile and steel
industries because of government intervention-- its support of union bargaining--in these
markets. Harvard professor William Abernathy provides strong anecdotal evidence for this
point, stating in 1983 that "Ford Motor Company to be as efficient as Japanese auto firms . . .
[can] afford to keep only half the 256,000 American employees it had in 1978."[18] Such a
large shift, if it does indeed occur, would not have been necessary had there not been a closed
system and payments of rent in earlier periods. Nor would the decline in Ford employment be
so extensive now if wage levels were allowed to return to competitive levels without rents.
Past government intervention has been helpful only to those who received rents over the past two decades.

 Unemployment is an inevitable by-product of technological advancement.
Technology is increasingly becoming an important part of our daily lives. Recent research
shows that technology features constantly in society (Luleå University of Technology,
2002-2003, Internet). Generally, there are three main terms which it is necessary to define
before discussing the issue. Unemployment is an economic condition generated by the fact
that individuals actively seeking jobs remain unhired (Online Business Dictionary, 2007).
Likewise, "inevitable" means something that is certain to happen or cannot be avoided or
prevented (Oxford Advanced Learner's Dictionary, 2000). Moreover, the change brought about
by automation of the workplace, with its consequent impacts on employment such as training
and the skills of jobs, is generally named technological advancement (The CCH Macquarie
Dictionary of Business, 1993). In discussing whether or not unemployment is an "output" of
technological advancement, it is necessary to consider both sides, with reference to two
countries, in order to reach a judgment. At first, mechanised industries greatly diminish
the labour force because of advancement. Nevertheless, it can be seen that this decline in
workers is only a short-term effect. In addition, the relatively high cost of advancing
techniques generates a chain reaction such as unemployment, whereas devising further
advances creates abundant job opportunities.

Firstly, technology is now applied to most industries as a general trend. Whatever the
vocation, the use of technology is an indispensable factor for industries, even for a small
food store. It is widely believed that productive efficiency is thereby achieved. In the same
manner, the running cost of a business might be economised on a large scale compared with
before. As White points out, computers, machines or microprocessors are consequently
required wherever technological advancement occurs (1991, 94). Additionally, unemployment
in the auto industry arose from the application of automation, as there was no longer any
demand for the services those employees were able to provide (Parry & Kemp, 2004, 212).
Indeed, technological advancement implies that higher-skilled workers are needed to operate
the new equipment and implement new ideas. For example, Department of Employment and
Workplace Relations (DEWR) data indicate that Australian skilled job vacancies rose 0.1 per
cent recently (The Daily Telegraph, 2007, 11). Accordingly, from an economic viewpoint, it
is generally believed that the displacement of much of the original labour force is an
inevitable by-product of technological change.

On the other hand, the unemployment caused by technological progress lasts only a short
period of time. In fact, less skilled or educated workers are released and then redirected
toward more skilled employment, since most of the unemployed will undergo retraining to
learn the skills associated with the new technology (Parry & Kemp, 2004, 212). In addition,
a revolution in the nation's workplaces is hard to bring about if labour quality remains
unchanged (Healey, 2000, 2); this is called structural unemployment. At the same time, the
resulting vacancies can be filled by part-time workers in the short term. After retraining
in the new technology, the workers become highly skilled labour, so the unemployment rate
observably returns to its normal level. Australia's unemployment rate over the last ten
years illustrates this. According to an Australian Bureau of Statistics report and recent
newspaper coverage, the unemployment rate in August 1997 was 8.5 per cent and dropped to
4.2 per cent in August 2007, the lowest level in 33 years (1997 & 2007, 1) (Rollins, 2007,
8). There is no doubt that unemployed workers can be re-absorbed after technological
training. From these facts, it may be demonstrated that the unemployment caused by
technological progress is merely a short-term phenomenon.
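The cited ABS figures reduce to simple arithmetic. As an illustrative sketch (the 8.5 and 4.2 per cent rates come from the sources cited above; the `rate_change` helper is our own, not drawn from any reference):

```python
def rate_change(start_rate, end_rate):
    """Absolute (percentage-point) and relative change between two
    unemployment rates, both given in per cent."""
    absolute_drop = start_rate - end_rate             # percentage points
    relative_drop = absolute_drop / start_rate * 100  # % of the original rate
    return absolute_drop, relative_drop

# Australian unemployment, August 1997 vs August 2007 (the cited ABS figures)
points, relative = rate_change(8.5, 4.2)
print(f"Drop: {points:.1f} percentage points ({relative:.1f}% relative decline)")
# prints: Drop: 4.3 percentage points (50.6% relative decline)
```

In other words, the fall of 4.3 percentage points amounts to roughly halving the 1997 rate, which is the sense in which the essay treats the displacement as a short-term effect.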

Secondly, the cost of adding the latest technological equipment is certainly high, and its
adoption is time-consuming. It is stated that cost strains will be placed on numerous
community institutions, such as the health and education systems, which do not have
sufficient resources to keep up with current developments (Parry & Kemp, 2005, 74).
Therefore, social harmony will be affected as unemployment increases. Furthermore, the
labour force in primary occupations, jobs concerned with the extraction of natural resources
from the land, will drop steadily. For instance, Washington State Employment Security in
America shows that total primary-sector labour decreased from 4,710 to 4,195 between 2005
and 2006 (2007, internet). In support, the production industries' share of employment in
Australia fell from 46 per cent in 1966 to 28 per cent in 1999 (Healey, 1999, 1). Since the
huge running costs of mechanisation and computerisation lead to a reduced workforce, it is
possible to deduce that this effect of technological change cannot be avoided.

In contrast, job opportunities will be created through the exploitation of new technology.
As Healey stresses, educated workers will be preferred in future markets owing to modern
automated industries (1999, 3). Undoubtedly, technological advancement is a fact of life.
Although not everyone is able to use or understand technology, it still plays an important
role in daily life. For example, National Australia Bank (NAB) today faces a shortage of
skilled financial planners. According to The Daily Telegraph, there is a shortage of
planners with financial education at NAB owing to the rapid advancement in society, or even
worldwide (Tucker, 2007, 82). It can be argued that this progress has shifted the main types
of occupation toward the professional. Similarly, patterns of production and employment will
change over time (Parry & Kemp, 2004, 212). As a result, professionals are needed to create
and improve ideas for industry in order to achieve economic goals. In other words, no new
invention or improvement would originate unless there were certain specialists in the labour
market. In fact, all technological advancement is generated by humans. Therefore, a certain
number of jobs are reserved for the "inventors". It seems that new technology not only
supplies a substantial number of job opportunities but also improves the whole society's
living standard.

In summary, although technological advancement causes only a short term of unemployment and
even produces a certain amount of skilled employment, society will suffer a downturn when
the outbreak of unemployment first occurs. Furthermore, the effects of unemployment are
always unpredictable and shocking. In fact, unemployment leads to various economic and
social problems. As a result, total national production will drop, and so will the standard
of living. Recovery from unemployment is relatively difficult and costly for governments.
Once the foregoing problems are taken into account, it is widely believed that unemployment
cannot be avoided after technological advancement. Therefore, it is recommended that some
preventive measures against unemployment be planned before it breaks out across society.
3. Impact of Information Technology on
Organized Crime
 Cyber-Organised Crime — The Impact of Information Technology on
Organised Crime

Some have argued that organised crime is a problem of the last quarter of the 20th century
and in the case of most states is a new phenomenon. Of course, so much depends upon what
is meant by organised crime. Groups of individuals formed and managed to perpetrate acts
against the law are nothing new. Nor is it novel that the primary motivation of such
enterprises is economic gain — spurred on by 'plain old-fashioned' greed and corruption.
Banditry, smuggling, racketeering and piracy were just as much a problem for the praetors
and the vigiles of ancient Rome as they are today for the Italian authorities. What has changed
is the criminals' ability to operate beyond the reach of the domestic legal system and,
therefore, be able to conduct an enterprise in crime that is not so amenable to the traditional
criminal justice system and its agents. Of course, in truth, thinking criminals have always
sought to place themselves beyond the reach of the law and it was not just a matter of having
a faster horse. Corruption of officials and the patronage of powerful individuals whose
interests, for whatever reason, might be at variance with those of the state are tried and tested
tools. It has always been recognised that even if as a matter of theory jurisdiction was
unfettered, the practicalities are such as to render enforcement parochial. Thus, when Henry
II of England was asked towards the end of the 12th century how far his writ ran, he
responded 'as far as my arrows reach'. While developments in ballistic technology might
render this a relatively useful approach to dealing with the international criminal, in the vast
majority of cases the criminal law in its application, or at least administration, will be
confined within domestic borders. Developments in technology, communication, travel and
the liberalisation of movement, whether of persons, things or wealth, have all combined to
give the criminal enterprise of today the same ability as any other business to move from one
jurisdiction to another, or involve in a single act two or more different jurisdictions.

Source: Journal of Financial Crime, Volume: 8

 Terrorism and Technology

Roots of the notion of cyber terrorism can be traced back to the early 1990s, when the
rapid growth in internet use and the debate on the emerging "information society" sparked
several studies on the potential risks faced by the highly networked, high-tech-dependent
United States. As early as 1990, the US National Academy of Sciences began a report on
computer security with the words, "We are at risk. Increasingly, America depends on
computers... Tomorrow's terrorist may be able to do more damage with a keyboard than
with a bomb." At the same time, the prototypical term "electronic Pearl Harbor" was coined,
linking the threat of a computer attack to an American historical trauma. Psychological,
political and economic forces have combined to promote the fear of cyber terrorism. From a
psychological perspective, two of the greatest fears of modern times are combined in the
term "cyber terrorism".
After 9/11, the security and terrorism discourse soon featured cyber terrorism
prominently. This was understandable, given that more nightmarish attacks were expected
and that cyber terrorism seemed to offer Al Qaeda opportunities to inflict enormous damage.
The evolution of cyber terrorism can be analysed by examining how access to cyber technology
is perceived and used by terrorist groups. Two views of the subject are presented by Schmitt
and Rathmell. Schmitt distinguishes between information operations and computer network
attacks. The former is defined as encompassing "virtually any non-consensual actions
intended to discover, alter, destroy, disrupt or transfer data stored in a computer,
manipulated by a computer or transmitted through a computer network". Information operations
can be either defensive or offensive in nature. Computer network attacks are considered
offensive information operations that "disrupt, deny, degrade, or destroy information
resident in computers and computer networks". Cyber terrorism is
the convergence of terrorism and cyberspace. It is generally understood to mean unlawful
attacks and threats of attack against computers, networks, and the information stored therein
when done to intimidate or coerce a government or its people in furtherance of political or
social objectives. Further, to qualify as cyber terrorism, an attack should result in violence
against persons or property, or at least cause enough harm to generate fear. Attacks that lead
to death or bodily injury, explosions, plane crashes, water contamination, or severe economic
loss would be examples. Serious attacks against critical infrastructures could be acts of
cyber terrorism, depending on their impact. Attacks that disrupt nonessential services or that are
mainly a costly nuisance would not.

Cyberspace is constantly under assault. Cyber spies, thieves, saboteurs, and thrill
seekers break into computer systems, steal personal data and trade secrets, vandalize Web
sites, disrupt service, sabotage data and systems, launch computer viruses and worms,
conduct fraudulent transactions, and harass individuals and companies. These attacks are
facilitated with increasingly powerful and easy-to-use software tools, which are readily
available for free from thousands of Web sites on the Internet. Many of the attacks are serious
and costly. The recent ILOVEYOU virus and its variants, for example, were estimated to have hit
tens of millions of users and caused billions of dollars in damage. The February denial-of-service
attacks against Yahoo, CNN, eBay, and other e-commerce Web sites were estimated to have caused
over a billion dollars in losses. They also shook the confidence of businesses and individuals
in e-commerce.
 In 1998, ethnic Tamil guerrillas swamped Sri Lankan embassies with 800 e-mails a day
over a two-week period. The messages read "We are the Internet Black Tigers and
we're doing this to disrupt your communications." Intelligence authorities
characterized it as the first known attack by terrorists against a country's computer
systems.
 During the Kosovo conflict in 1999, NATO computers were blasted with e-mail
bombs and hit with denial-of-service attacks by hacktivists protesting the NATO
bombings. In addition, businesses, public organizations, and academic institutes
received highly politicized virus-laden e-mails from a range of Eastern European
countries, according to reports. Web defacements were also common. After the
Chinese Embassy was accidentally bombed in Belgrade, Chinese hacktivists posted
messages such as "We won't stop attacking until the war stops!" on U.S. government
Websites.
 Since December 1997, the Electronic Disturbance Theater (EDT) has been
conducting Web sit-ins against various sites in support of the Mexican Zapatistas. At
a designated time, thousands of protestors point their browsers to a target site using
software that floods the target with rapid and repeated download requests. EDT's
software has also been used by animal rights groups against organizations said to
abuse animals. The Electrohippies, another group of hacktivists, conducted Web sit-ins
against the WTO when it met in Seattle in late 1999. These sit-ins all require mass
participation to have much effect, and thus are more suited to use by activists than by
terrorists.
Many studies have dealt with terrorism by examining the nature of terrorist group
characteristics and their resultant behaviour. Few, however, have taken into account how the
cyber environment will impact this behaviour. In order to assess accurately the nature of the
terrorist threat in this decade, it is imperative to factor its significance into future
analyses.
Perhaps the most profound environmental change to affect the course of terrorism in
the last decade has been the collapse of the former Soviet Union. The end of the cold war and
the rise of the present American hegemony have had important implications for the many
political ideologies that have encouraged and sustained terrorist campaigns over the last forty
years. In particular, the goal of a left-backed global revolution is no longer tenable.
Cyber terrorism is an attractive option for modern terrorists for several reasons.
First, it is cheaper than traditional terrorist methods. All that the terrorist needs is a
personal computer and an online connection.
Second, cyber terrorism is more anonymous than traditional methods.
Third, the variety and number of targets are enormous. The cyber terrorist could target
the computers and computer networks of governments, individuals, public utilities,
private airlines, and so forth.
Fourth, cyber terrorism can be conducted remotely, a feature that is especially appealing
to terrorists. Cyber terrorism requires less physical training, psychological investment, risk
of mortality, and travel than conventional forms of terrorism, making it easier for terrorist
organisations to recruit and retain followers.
Fifth, as the I Love You virus showed, cyber terrorism has the potential to directly affect
a larger number of people than traditional terrorist methods, thereby generating greater
media coverage, which is ultimately what terrorists want.
"The modern thief can steal more with a computer than with a gun. Tomorrow's
terrorist may be able to do more damage with a keyboard than with a bomb."
Source:
www.lawyersclubindia.com/mobile/articles/display_article_list_mobile.asp?article_id=2783

 The Impact of September 11 on Information Technology

During the late 1980s and early 1990s, I made several business-related trips to Japan to work
with some of the leading computer companies in the country. One of those trips took place in
early December 1991; at the end of a long, busy work-week, I awoke in my Tokyo hotel room
on a Saturday morning, and realized with a shock that it was December 7th.

December 7th—a date, as my parents had been informed by a somber President Franklin
Delano Roosevelt, that would forever live in infamy. And this was not just any run-of-the-mill
December 7th, but the 50th anniversary of the attack on Pearl Harbor that launched America
into the second World War. I am one generation removed from that event, and was not even
born when it occurred; but I couldn't help feeling tense and nervous as I left my hotel room to
stroll around the streets of Tokyo on that quiet, gray Saturday morning. How will people
behave? I wondered. What will they say to me? How will I respond?

I tried to anticipate a number of different attitudes and interactions from Japanese citizens that
I expected to encounter in the hotel lobby, on the streets, and in the shops. I anticipated seeing
thoughtful editorials in the newspapers, and earnest spokesmen on the television news
programs; and I tried to sort out my own feelings about a date that had taken on mythic
proportions throughout my entire life. But the one attitude and reaction that I did not expect,
and that I saw consistently throughout the city of Tokyo that day, was: nothing. No remorse,
no belligerence, no moment of quiet reflection, no jingoistic speeches of pride for the Japanese
military forces, no mention whatsoever. Nothing. Nada. Zip. Everywhere I went, I was
surrounded by a young generation of earnest, well-groomed, polite Japanese men and women
who were completely preoccupied with the day-to-day tasks of shopping, or taking their
children to the park, or (as they do so often in Japan, even on Saturday) hurrying into their
offices to put in a full day of work. Pearl Harbor, for all its significance to me, apparently
meant nothing to them.1

And perhaps a grandchild of mine, 50 years from now, will have the same experience—and
perhaps that experience will be magnified, as it was for me, by the unexpected coincidence of
celebrating the 50th anniversary of September 11, 2001 on the streets of Kabul or Kandahar.
But as I write these words, just a few short weeks after the attack on the Pentagon and the
World Trade Center, it's hard to imagine that anyone could ever forget the horror, and also the
significance, of that Tuesday morning in New York City and Washington. It was, as Obi-Wan
Kenobi remarked in the original Star Wars movie, when he sensed that one of the rebel
planets had been obliterated by Darth Vader's Death Star, a "disturbance in the Force."

More than just a disturbance in the Force, September 11th represents a paradigm shift—a
fundamental transformation in our understanding of how things work, and why things happen.
Part, though by no means all, of that paradigm shift will involve the field of information
technology (IT); and the details of the IT paradigm shift are the subject of this book.

The paradigm shift

The term "paradigm shift" was popularized by the late Thomas Kuhn in a classic book, The
Structure of Scientific Revolutions. My Microsoft Word dictionary says that one of the
definitions of "paradigm" is "a generally accepted model of how ideas relate to one another,
forming a conceptual framework within which scientific research is carried out." And a
paradigm shift occurs when the existing framework requires so many exceptions and special
cases, and when it fails to address so many important problems, that it literally collapses under
its own weight and is replaced by a new, simpler, more persuasive paradigm.

It's reassuring to know that, in the aftermath of September 11th, the paradigms of the physical
sciences, which we all learned in high school and college, still seem intact; even though I feel
disoriented and confused much of the time these days, I know that the law of gravity still
holds. Indeed, even though most of us were so astounded by the real-time experience of
watching the World Trade Center collapse that we whispered, in awe and terror, "This can't be
happening!", the architects and engineers who replayed those awful videos over and over
again have convinced themselves that the buildings did obey the laws of physics and
thermodynamics—and that, in many ways, they did exactly what they were intended to do,
when subjected to a sudden and catastrophically violent shock.

But in addition to paradigms of physics and astronomy and mechanical engineering, there are
also business paradigms and political paradigms and social paradigms that we've become
accustomed to. And those political/social paradigms have a great deal of influence on the
plans, strategies, and—without meaning to overuse the word completely—the paradigms of
the information-technology (IT) profession to which I've devoted my career. Those paradigms
have indeed shifted; and while we may need to wait for several more months or years before
our politicians and philosophers can understand and articulate the quantity and quality of the
social/political paradigm shift, we should start doing our own thinking about the IT
ramifications.

For example: Many of us were gratified by the success of the ad hoc communication networks
that were stitched together by desperate family members and business colleagues in the hours
immediately following the WTC attack; they were grass-roots, bottom-up, and emergent (as
opposed to pre-planned, and hierarchically managed) in nature. Independently of what
corporations and their IT departments might plan, by way of an effective post-9-11
environment, I firmly believe that individual citizens and corporate employees will put more
and more faith in these ad hoc networks in the coming months, as they continue to cope with
the sluggish, ambiguous, contradictory, and sometimes untrustworthy flow of information
from "official" channels.

If things like this were only taking place at the individual level, it might not be worth writing
about. But it's not just individuals, and not just corporate IT departments, who might be
thinking about such issues: the military is taking it seriously, too. In an article by Leslie
Walker entitled "Uncle Sam Wants Napster!" (Washington Post, Nov. 8, 2001), we learn that
the Pentagon is looking at peer-to-peer (P2P) file-sharing and collaboration tools like Napster
and Groove—not because they want to download pirated music, but because the classic top-
down, hierarchically-developed communication systems often turn out to be incompatible
(e.g., Navy communication systems can't talk to Army communication systems), and too
cumbersome to cope with the chaotic, fast-moving world of terrorist warfare.

If this is relevant for the military community, could it be relevant for corporate IT
environments too? And even if corporate executives think it isn't, is it possible that their front-
line workers—the sales reps, and field-service technicians, and work-at-home
telecommuters—might disagree? Is it possible that the urgent, time-critical need to
communicate will cause them to completely abandon and ignore their company's high-
security, firewall-protected email systems, and resort instead to SMS-messaging on their cell
phones, and AOL instant messaging on their Palm Pilots? Even if it's against company policy,
is it possible—perhaps even likely—that they'll do it anyway? And if they're going to do it
anyway, because they believe they absolutely have to, does it make sense for companies to
design their systems to support ad hoc, emergent, P2P communications in the first place?
Apparently, the U.S. Defense Department thinks it's worth considering; perhaps other
companies should, too.

In addition to thinking about communication networks, there are numerous other paradigm
shifts that IT organizations will have to accommodate in the coming years. For example,
within a week after September 11th, IT industry journals reported that Ford Motor Company
was seriously rethinking the lean inventory system that its IT organization worked so hard to
enable;2 after all, the likelihood of chaotic disruptions necessitates a more resilient supply
chain, with more "buffers" and more inventory. In the weeks since then, I've heard about
numerous other companies which are also re-thinking their inventory systems, and re-thinking
the assumptions upon which their entire supply chain is based.

Similarly, the whole notion of globalization is being called into question: If tensions increase
and various parts of the world become more hostile and isolationist in nature, it might be far
too risky to present a homogeneous corporate image in a hundred different countries. But if
there is a strong move, within today's multinational companies, towards autonomy and
heterogeneity, what does that imply for the integrated Enterprise Resource Planning (ERP)
systems that our IT organizations have been grappling with for the past decade?

Interestingly, some of these paradigm shifts were already underway before September 11th;
after all, terrorism was already a fact of life, and many corporations were already concerned
about security. And beyond the obvious and direct threat posed by terrorism, more and more
companies have been realizing that change—competitive change, regulatory change,
technological change, market-preference change, etc.—is occurring at an ever-increasing
speed, and in ever more disruptive forms. To cite just one non-terrorism example, consider
Napster: Without any warning or fanfare, one individual college dropout was able to create an
Internet-based music-sharing technology that threatened to wreak havoc upon the mammoth
music industry.

Napster and the September 11th terrorists share a characteristic that corporations and
government agencies are likely to see more and more often in the future: disruptive threats
caused by "stateless actors," whose technology makes them disproportionately powerful. In
the past, nations expected to face threats from other nations; corporations expected
competition from other corporations. But now, if the official reports are accurate,3 a handful of
individuals, armed with box-cutters and funded with less than a million dollars in "seed
money," has managed to wreck four commercial airplanes and cause over a hundred billion
dollars in physical destruction. And one college student, whose only objective seems to have
been the achievement of a "cool" technology for sharing music with his friends, nearly
brought the recording industry to its knees. This is indeed a new world.

As it turns out, Napster has probably been put out of business by the legal counter-attack
posed by the recording industry; but the battle may go on for years into the future, as
rebellious teenagers use Gnutella, LimeWire, and a dozen other derivative technologies to
circumvent the "legitimate" practice of buying CDs and tape cassettes in stores for their
listening pleasure. As for the World Trade Center attack: The President, the Secretary of
Defense, the Secretary of State, and numerous other high officials have told us that we are
engaged in a war that will go on for years, and possibly decades. The established order of
things has been upset by new paradigms, and we're being told that we should expect them to
continue being upset for years into the future.

There is one other aspect of the paradigm shift that is exemplified in a particularly stark
fashion by the World Trade Center attack, but also by the Napster phenomenon and many of
the other disruptive changes we're facing today: The war is no longer "over there," it's here. In
the past, our military forces expected attacks upon the United States to emanate from other
parts of the world—e.g., in the form of Russian ICBM missiles flying over the North Pole to
attack us via Canada, or by enemy submarines popping up in the middle of the Atlantic and
Pacific Oceans and lobbing missiles at our cities. Meanwhile, our publishers typically
expected their copyright threats to emanate from China, or Russia, or Third-World countries
where copyright laws were ignored, or flouted openly. But Napster was created by an
American student at Northeastern University in Boston; and the World Trade Center, along with the Pentagon,
was attacked using commercial U.S. airliners piloted by individuals living in the U.S. on
student/tourist visas.

IT will be one of the likely battlefields of the future

The battlefield between Napster and the recording industry was the courtroom; and, as this
book is being written, the battlefield in the war against terrorism exists largely in Afghanistan.
But in a larger sense, information technology (IT) is the battlefield upon which many of the
conflicts are likely to be fought in the next several years.

Some of these conflicts will be obvious and direct, in terms of their association with IT;
"cyber-warfare" is the catch-all term that's being used to describe various forms of hacking,
viruses, physical attacks on computer centers or the Internet backbone, etc. In some cases, the
attack may not be on the computers per se, but on the ability of computer systems to support
such critical functions as telephone switching centers, stock-market trading systems, air-traffic
control, etc.

It's also important to realize that IT is involved indirectly in almost every other aspect of
hostility we're likely to face in the coming years—including the "hostile competition" that
private-sector organizations face, even without the terrorism associated with September 11th.
One of the stumbling blocks in implementing a more comprehensive air-travel security
system, for example, is the lack of adequate information systems to identify potential terrorists
before they get on the airplane. And one of the concerns that the health-care community has,
in the face of potential anthrax/smallpox attacks, is that it lacks the kind of real-time tracking
systems that UPS and Federal Express use to monitor the movement of packages through their
organizations.

On a somewhat more subtle, philosophical level, information technology determines the
degree to which we live in an "open" versus a "closed" society. It may seem overly
melodramatic to suggest that the September 11th attack launched a war between the "open"
society of the United States and the "closed" society of the fundamentalist Taliban movement.
But it definitely is true that a large dimension of the American response to that attack has been
a reassessment of the very openness that allowed terrorists to enter the country and board
civilian airlines with little or no trouble. Now we find ourselves discussing and debating such
questions as: what information does the government have a right to know about citizens and
visitors to this country? What information is it obliged to disclose, when it monitors and
eavesdrops upon citizens and visitors, and when it arrests them for suspected terrorist
activities? What information are citizens allowed to access and publish on the Internet? What
rights do we have to encrypt the private messages we wish to send our personal friends and
business colleagues? What obligations do our banks, our hospitals, our tax-collection
agencies, and numerous other private-sector and public-sector organizations have to maintain
the security and privacy of personal information they collect about us?

Obviously, questions like these are not going to be answered exclusively by IT professionals
or computer companies like IBM and Microsoft. On the other hand, IT professionals are likely
to have a more realistic assessment of the feasibility and practicality of various privacy and
security policies and regulations being contemplated by government authorities. Furthermore,
government and the legal profession tend to be reactive rather than proactive; and their time-
frame for reacting to problems and opportunities is measured in months or years. Meanwhile,
the private sector—and, in particular, the high-tech startup companies in places like Silicon
Valley—is proactive, opportunistic, and fast-moving. If you're concerned about issues of
privacy and security, you'd better start talking about it with IT professionals now, because
companies like Microsoft and Sun and Apple are likely to do something tomorrow to exploit
whatever opportunities they see available.

Source: Edward Yourdon, May 10, 2002


4. EFFECTS OF SMOKING ON CHILDREN
 Cigarette Smoking and the Effects on Children

Today, children are at great health risk from cigarette and tobacco smoke, quite apart from
the teenagers and adults who are active or passive smokers. Children belong to a high-risk
group because they are exposed to passive smoke, also called environmental tobacco smoke
(ETS) or second-hand smoke. The effect of cigarette smoke on children is especially
dangerous because their bodies are still developing and their breathing rate is faster than
that of adults.

Children breathe much faster than adults. An adolescent or young adult breathes around 16
times a minute, while a child breathes considerably more often: a normal five-year-old can
breathe more than 20 times a minute, rising at times to 60 times a minute. Because children
breathe more, the effect of cigarette smoke on them is more intense, as they take in more air
filled with smoke. As a result, children's lungs receive a higher percentage of toxins and
poisons than those of young adults.

The harmful effects of cigarette smoking on children are numerous. One detrimental effect is
that babies born to mothers who smoke during pregnancy are much more likely to be born
below normal weight than those born to mothers who do not smoke. Cigarette smoking
significantly affects a baby's weight and leaves the baby with a less resistant body. Another
effect is the higher incidence of sudden infant death syndrome among babies whose mothers
are smokers. Likewise, babies with smoking mothers are at greater risk of suffering from
learning disabilities and cerebral palsy.

In young children, one effect of cigarette smoke is the development of respiratory
difficulties and illnesses such as asthma. If children are already asthmatic, their condition
can be worsened by second-hand smoke, which is one of the leading causes of new asthma
cases and other respiratory complications every year. Another very serious effect is the
development of pneumonia or pulmonary bronchitis; in the United States alone, many
pneumonia sufferers whose illness is caused by cigarette smoke are children.

Other effects of cigarette smoke on children are as follows:

Children accumulate more fluid in the middle ear, which can develop into hearing and even
speech problems.

Children's lungs function less efficiently. Likewise, their immune system becomes weaker
and less protective than that of young adults.

Another effect of cigarette smoke on children is the inability of the child's body to develop
fully; height and weight development is adversely affected. A child exposed to cigarette
smoke has a strong tendency not to achieve full physical and intellectual development, and
babies whose mothers smoke during pregnancy are born with a deficiency in height and
weight.
Source: http://EzineArticles.com/?expert=Dr._Mark_Clayson

 Harmful effects of smoking around children


What is third-hand smoke?

We know about the effects of second-hand smoke which causes approximately 3,400 lung
cancer deaths and 22,700 to 69,600 heart disease deaths in adult nonsmokers in the United
States each year, as reported by the American Lung Association.

But now, experts say it's more than just the smoke that can harm a non-smoker. As reported
in the January 2009 issue of the journal Pediatrics, toxins from tobacco cling to a smoker's
hair and clothing, and to other surfaces within the home, including carpets and cushions,
long after a cigarette is put out. Children may then ingest these particles while playing,
crawling, or just snuggling up to the smoker.

Real dangers of third-hand smoke

And just what, exactly, could a child come into contact with? Researchers say tobacco smoke
carries 250 poisonous gases and chemicals and several harmful metals. These compounds may
remain within a home long after smoking has stopped (nursing mothers who smoke may also
transfer the toxins to their babies via breast milk). Over time, children exposed to these
low levels of tobacco particles may develop cognitive deficits and psychological problems
such as ADHD.

Awareness is key to protecting kids

According to the authors of the Pediatrics report, awareness is the first major step towards
stamping out third-hand smoke. After surveying more than 1,500 households in the United
States, they found that fewer than half of smokers agreed that third-hand smoke was harmful
to children. Additionally, only about 25 percent had strict rules about not smoking in the
house.

Source: http://www.sheknows.com/health-and-wellness/articles/807271/harmful-effects-of-
smoking-around-children

 How Secondhand Smoke Affects a Child


Children face a higher risk than adults from the negative effects of secondhand smoke. Not only is a
child's body still developing physically, but their breathing rate is faster than that of adults:
adults breathe in and out approximately 14 to 18 times a minute, whereas newborns can breathe as
many as 60 times a minute.

When the air is tainted with cigarette smoke, young, developing lungs receive a higher concentration
of inhaled toxins than do older lungs. And think about it: young children have less control over their
surroundings than the rest of us. Babies can't move to another room because the air is smoky. They
depend on us to provide them with clean air to breathe.
Facts about Secondhand Smoke and Children

 Babies whose mothers smoked during pregnancy often weigh less when they are born than
those born to non-smoking mothers.
 Babies whose mothers smoked during pregnancy are at an increased risk for developmental
issues such as learning disabilities and cerebral palsy.
 SIDS (Sudden Infant Death Syndrome) - fetuses exposed to chemicals in cigarettes through
the placenta are thought to be at an increased risk of SIDS. There are a variety of opinions
about the role secondhand smoke plays after birth in SIDS deaths, but a California EPA study
has estimated that between 1,900 and 2,700 children die annually of SIDS due to secondhand
smoke exposure.
 Children who spend one hour in an extremely smoky room inhale enough toxic chemicals to
equal smoking 10 cigarettes.
 Asthma - the EPA estimates that between 200,000 and 1,000,000 kids with asthma have
their condition worsened by secondhand smoke. Passive smoking may also be responsible
for thousands of new cases of asthma every year.
 Among children under 18 months of age in the United States, secondhand smoke is
associated with as many as 300,000 cases of bronchitis or pneumonia each year.
 Children in smoking households experience more middle ear infections. Inhaled cigarette
smoke irritates the Eustachian tube, and the subsequent swelling leads to infections, which
are the most common cause of hearing loss in children.
 It has been estimated that between 50 and 75 percent of children in the United States have
detectable levels of cotinine in their bloodstream.

Source: http://quitsmoking.about.com/od/secondhandsmoke/a/smokeandkids.htm

 The effects of smoking on the unborn child and complications of pregnancy and birth.

Tobacco smoke contains more than 4,000 harmful chemicals, a number of which are known
carcinogens in humans, while others are highly toxic and poisonous.

When an expecting mother inhales tobacco smoke from a cigarette, some of the chemicals are
exhaled immediately and leave the body, but others stay in the body and make their way into the
placenta. As well as the mainstream smoke that the mother breathes in from the cigarette, the
unborn child may also be exposed to any secondhand smoke in the air. This means that the
growing foetus can be negatively affected by two different types of smoke.
The unborn child in the womb relies on the mother for its food, nutrients and oxygen in order to
develop and grow healthily before the birth. The placenta is the tissue that connects the foetus to its
mother and from where it receives all it needs for its correct development whilst it is in the mother's
womb.

When the mother smokes, several things happen. Firstly, there is a reduced supply of oxygen,
due to the increase of nicotine and carbon monoxide in the mother's bloodstream: less oxygen
is available to the baby, as the harmful substances replace it. The baby will begin to move
more slowly after the mother has smoked a cigarette, and the baby's heart will have to work
faster as it tries to take in more oxygen. In addition to the reduced oxygen, nicotine
constricts the blood vessels on the mother's side of the placenta, restricting the blood
supply and preventing the necessary amounts of oxygen, nutrients and food from reaching the
baby, which results in slow growth of the foetus.

Moreover, once the mother has given birth, the supply of nicotine to the child is cut off,
and shortly afterwards the baby will begin to suffer the effects of nicotine withdrawal.

Even if the mother does not smoke but the baby is exposed to passive smoking from the
father, the growth and development of the foetus can be affected.
Smoking throughout pregnancy does affect both mother and child and can lead to
complications that could have been prevented had the mother stopped smoking.

Fortunately some mothers suddenly develop a strong distaste for smoking when they become
pregnant and are easily able to give up smoking for the nine-month period or longer.

If you quit smoking within the first 3 months of being pregnant, you are greatly increasing
the probability of giving birth to a normal and healthy baby.

Below is a list of possible pregnancy complications that have been associated with women
who smoke:

 Ectopic pregnancy - this can be life-threatening for the mother and can lead to
difficulties in becoming pregnant again. In an ectopic pregnancy, the egg usually
becomes implanted in one of the fallopian tubes and begins to grow there. In the
majority of cases, this type of pregnancy will never result in the live birth of a child,
as there is not enough room for the baby to grow fully, and the cells must be removed
as soon as the ectopic pregnancy is diagnosed by either an injection of drugs or by
surgery.
 Foetal death - this is when the baby is still a foetus (less than 28 weeks) and dies in
the uterus. Maternal smoking has been linked to 5-10% of all foetal and neonatal
deaths.
 Stillbirth and death of the baby in the first week - this risk is increased by a third if the
mother smokes.
 Miscarriage - the risk of suffering a miscarriage is increased by 25% for a smoker.
 Placenta previa - the placenta lies extremely low in the uterus and blocks or covers the
opening of the cervix. This can result in a difficult delivery and puts the mother's and
baby's lives at risk.
 Early detachment of the placenta from the wall of the uterus before delivery, which
could result in heavy bleeding.
 Increase of heart rate and blood pressure in the mother due to the effects of the
nicotine.
 Blood clots
 Vomiting
 Vaginal bleeding
 Thrush
 Urinary tract infections
 Premature rupture of the membranes, which may lead to a premature birth as well as
infection.
 Lack of necessary vitamins and folic acid.
 Decreased lung function of the developing baby, caused by the nicotine that crosses
the placenta to the foetus and alters the cells of the unborn child's developing lungs.
 Premature birth, which could result in a low-weight baby. Full-term babies are
healthier and stronger. Going into labour prematurely is twice as common in smokers
as it is in non-smokers. The risks are even higher if the mother is still smoking
throughout the latter half of her pregnancy.
 Respiratory problems in the mother.
 Less energy and therefore tiring more easily and less able to cope well with the
pregnancy.

Remember, the more cigarettes you smoke throughout your pregnancy, the greater the risks
of harm to the foetus, complications with the pregnancy and harm to your health.

Source: http://www.helpwithsmoking.com/smoking-and-pregnancy/effects-on-foetus-
pregnancy.php

5. PRODUCT RESPONSIBILITY IN A
GLOBAL ECONOMY
 Reasonable product-liability reform

*It has long been apparent that the legal procedures established to compensate
individuals injured by defective products are badly flawed. Among the defects:

* While 70 percent of the products made and sold in this country cross state lines,
each of the 50 states has its own product-liability law, which adds substantially to the
complexity and cost of business defenses against personal-injury lawsuits.

* Product-liability laws of many states lack guidelines for assessing punitive damages,
which are added to compensatory damages covering economic costs such as lost
wages and medical expenses resulting from an injury.

* In 30 states, a business that sells a product containing defects of which it is not
aware can be held liable for damages if people are harmed by that product.

* A company bearing only 1 percent responsibility for an injury can be forced to pay
virtually all of an award for damages.

* Eleven states permit liability suits by people who were under the influence of
alcohol or illegal drugs when they used the products allegedly responsible for their
injuries.

* In some states, the gross misuse or alteration of a product does not preclude suits for
damages related to that product.

* Fear of liability lawsuits has caused manufacturers to withhold from the market a
wide range of products that could make important contributions to the health, safety,
and convenience of the American people.
*The total cost of the current tort liability system, which covers all types of personal-
injury claims, is an estimated $150 billion a year, which represents resources that
could otherwise be put into developing new products, creating jobs, and improving
U.S. competitiveness in global markets.

*Business has been trying for nearly 20 years to bring order and fairness to the laws
for resolving product-liability claims. Legislation to that end cleared Congress last
year for the first time, but President Clinton vetoed it on the ground that it did not
address consumer interests adequately. He said he would support "reasonable"
legislation.

*Product-liability-reform legislation is again pending in Congress. The measure was
drawn up to meet many of the objections the president stated in his veto and to offer
room for compromise on others.

*These are among the key provisions of the measure:

* A federal liability law would replace the complex and frequently conflicting
patchwork of 51 individual arrangements in the states and Washington, D.C.

* Product sellers would be liable only for their own negligence or failure to comply
with an express warranty.

* The influence of alcohol or illegal drugs and the misuse or alteration of a product
would be factors in determining liability.

* Punitive damages could be awarded if "clear and convincing evidence" proves that
harm was caused by a defendant's "conscious, flagrant indifference to the safety of
others." Such damages would be limited to two times compensatory damages or
$250,000, whichever is greater.

* Defendants would be liable for noneconomic damages only in direct proportion to
each defendant's responsibility for the claimant's harm.
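The punitive-damages provision above is a simple greater-of rule: twice compensatory damages, or $250,000, whichever is greater. A minimal, purely illustrative sketch of that arithmetic (the function name is hypothetical):

```python
# Cap on punitive damages under the proposed federal measure:
# two times compensatory damages or $250,000, whichever is greater.
def punitive_cap(compensatory: int) -> int:
    return max(2 * compensatory, 250_000)

print(punitive_cap(100_000))  # 250000  (the $250,000 floor applies)
print(punitive_cap(500_000))  # 1000000 (twice compensatory damages)
```

So for small compensatory awards the $250,000 floor governs, while for larger awards the two-times multiple becomes the binding limit.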

*Product-liability reform enjoys bipartisan support in Congress. The pending
legislation surely qualifies as the "reasonable" approach that the president has pledged
to support.

*There is no reason why he should continue to stand in the way of changes to a
system that supposedly benefits victims of product-related injuries but is itself the
cause of serious injury to the economy, to U.S. competitiveness abroad, to the
consuming public, and to the sense of fairness on which the nation prides itself.

 Products Liability Law - Defective and Dangerous Products

 Clients often sit down with me at an initial meeting, describe their accident and a
feature or component of a "product" and ask "do I have a case?" The answer to that
question most often turns on whether the product or its component can be
characterized as "defective". A recent product recall by a large manufacturer provides
an excellent learning tool on this topic.
 In November 2009, Maclaren USA, Inc. issued a "voluntary" recall of several
models of "umbrella" style baby strollers. As designed, manufactured and sold, the
strollers had exposed "shear points" at the hinges. This design defect caused twelve
reported cases in which infants' fingertips were amputated in the shear point.
According to news reports, these strollers were manufactured in China and distributed
by Maclaren USA, Inc.
 The Maclaren baby stroller defect is a classic example of a design defect. A product is
defective when it contains any component or feature which makes it unsafe for its
intended use or anticipated misuse, or if it is lacking any feature necessary to make it
safe. The exposed shear point obviously is a feature which makes the stroller unsafe.
This is a classic "design defect" case.
 Products are defective due to a "manufacturing defect" or "malfunction" when there is
a flaw in the manufacture or construction of the particular product involved in the
accident. A hypothetical example is a lightweight aluminum product such as a bicycle
handlebar. Some of these products are designed to be as light as possible. The design
involves portions of the handlebars which have very thin aluminum walls. These are
generally safe if the manufacturing process is perfect. However, if an air bubble, grain
of sand or other impurity gets into the aluminum at a stress point, this can cause a
sudden, catastrophic failure of the bar. While the design is arguably sound, the
presence of the flaw in the manufacturing process of the particular item makes that
product defective.
 Another way in which a product can be deemed defective is the failure to warn of
potential dangers in the use of a product or the failure to provide the proper
instructions as to the product's safe use. An example here would be a manufacturing
machine that has moving parts that can snag on loose clothing such as shirtsleeves or
frayed work gloves. While such a product would likely be deemed defective in
design, the failure to warn the users to avoid loose clothing and frayed gloves could
also be considered a defect.
 The examples of product defects are as varied as products themselves. Placing an
emergency stop switch more than an arm's length from an in-running pinch point on
an industrial machine is a design defect. Designing a child's toy with small pieces that
can break off and choke a child is a design defect. Designing a farm tractor without an
operator presence control that would shut the engine down if the operator left the
operating station of the tractor is a design defect.
 A wooden stepstool which is perfectly proper in design is defective if insufficient glue
is used to assemble it. Metal scaffolding materials are defective if the welds are
improper, causing the framing to fail under stress.
 With the increasingly global nature of our economy, more and more products and
component parts are being manufactured in foreign countries where labor costs are
very low. Quality control systems in these countries and these factories are often
below United States standards. With this, the incidence of manufacturing defects and
malfunctions increases.
 Also, big box discount stores often sell low-price products that are designed and
manufactured overseas. These products are often packaged with the intention that they
will be sold in many countries, so the warnings and instructions are sometimes
formatted to apply across many countries and languages. This can result in
instructions and warnings that are very difficult to understand, which can itself
constitute a warnings defect.
 Bottom line, it is difficult to provide a universal description of a product defect. Each
case, and each product, ordinarily must be evaluated on its own terms. An
experienced attorney familiar with products liability law will often be able to give you
a preliminary evaluation of whether a product is or is not "defective". However, the
opinion of a qualified expert witness or consultant is sometimes required before even
a preliminary opinion or evaluation of a given case can be given.

Mr. O'Brien is the founder and Chair of the firm's Plaintiff's Personal Injury Practice Group. Mr.
O'Brien is a partner at White and Williams LLP and has been with the firm for over 25 years. He
has the highest possible rating in the Martindale-Hubbell ratings, and has been selected in a
survey of his peers as a "Pennsylvania Super Lawyer" multiple times.

Mr. O'Brien dedicates 100% of his professional time to representing individuals who have been
seriously and catastrophically injured in accidents. He has extensive experience handling cases
in a wide range of areas including products liability, industrial accidents, premises liability
accidents, construction site accidents, sports and recreation accidents, motor vehicle and
trucking accidents and dog attacks.

 Fundamental Facts About Product Liability Insurance

Every manufacturing company wants cover on its goods for protection: whatever happens to
someone who uses your product is covered. This is the reason for the growing need for product
liability insurance coverage.

Product liability insurance defends the manufacturer in the event that the company is sued.
With this type of cover, the insurance company pays any sum awarded to the claimant. By
obtaining product liability insurance, the producer does not have to worry about the losses
that might accompany a lawsuit.

Product liability insurance for a small business is different from the product liability
insurance marketed to large firms. Many restrictions and provisions may apply, and the
package a company obtains with its product liability coverage will differ depending on the
product it manufactures. For example, a business that produces blankets will not always need
the same type of coverage as a company that produces circular saws.

Getting product liability insurance can be a hard job, as many insurance agencies do not
offer this sort of coverage. The applicant will need to make inquiries to find an insurance
agency that offers product liability coverage for his or her form of business. A product
liability insurance agent, on the other hand, will be standing by to help answer any
questions you have about this type of insurance, and may also be able to work out what type
of product liability insurance and what amount of cover you require.

Insurance companies or product liability insurance dealers will obtain a range of information
about your business and take it into consideration in working out the outline of product
liability insurance your business really needs. They will look at the scope of the business
you carry out and at the kind of product your business manufactures. They will also take into
account the outlets through which you sell your product.

Product liability insurance may mean the difference between your business staying afloat and
having to apply for bankruptcy. Without product liability insurance, the company is legally
responsible: in the event that a court action occurs and judgment is entered in favour of the
claimant, the company is required to pay. Depending on the amount of money awarded by the
court, this commonly results in bankruptcy. With product liability insurance, you would be
able to keep your business operating, fix the flaw in the merchandise, and settle the suit.

Product liability insurance is a must for all manufacturing companies and is one variety of
insurance that should not be ignored. A manufacturing company should not start selling its
products before obtaining this type of insurance. If you would like to be sure that your
operation is covered no matter what might happen, make certain that you possess all of the
feasible insurance coverage required, such as product liability insurance.

 Product Liability Insurance Quote


Product liability insurance is a type of cover that offers protection against damages paid if
your product is responsible for injury to people or damage to property. Some insurers don't offer
this as a standalone product, but it will often be available as part of a business insurance
package. This insurance may be helpful if you produce, repair or supply any sort of 'product', as
you may be considered liable if the product is deemed 'not fit for purpose'.

Product liability insurance policies insure you against damages paid out because a product you
have supplied, manufactured or serviced causes property damage or injury. In the case of personal
injury, you will be covered if the NHS attempts to recover the cost of hospital treatment and
ambulance costs.

Who is responsible?

The manufacturer of the product is typically responsible, but claims are sometimes brought
against the supplier first. Even if you did not manufacture the product, you will still be liable
for damages if the maker has gone out of business or you cannot identify the maker. You will also
be liable if you change, repair or refurbish the product, if you imported it from outside the EU,
or if the name of your business is on the product.

Any sales contract that you give to your consumer must have included terms for the return of
flawed goods, and your supply contract must include details of product safety, quality control
and product returns.

Cover levels
Most businesses have cover of almost £2 million. The common range is between £1 million and
£5 million, and the level will be decided by the kind of product and its intended use.

An insurance broker or adviser will be able to help you choose a policy that fits your needs and
also check that your business has the correct contracts and terms in place for the cover to be
effective.

6. ROLE OF GOVERNMENT ON THE 40%
GROWTH OF THE TELECOM SECTOR
 Growth of Telecom Sector

The opening of the sector has not only led to rapid growth but also helped a great deal towards
maximization of consumer benefits as tariffs have been falling across the board. From only 54.6
million telephone subscribers in 2003, the number increased to 621.28 million at the end of
March 2010 and further to 742.13 million at the end of October 2010 showing an addition of
120.85 million during the period from March 2010 to October 2010. Wireless telephone
connections have contributed to this growth as the number of wireless connections rose from
3.57 million in March 2001 to 13.29 million in 2003, 101.86 million in March 2006, 584.32
million in March 2010 and 706.70 million at the end of October, 2010. The year also witnessed
two more telecom companies crossing the 100 million mark in terms of wireless connections.
Bharti Airtel was the first Indian Operator to achieve the landmark in 2009. It was followed by
Vodafone and Reliance Communication in 2010.

Growth of Telephones over the years (in million)

                 Mar'04  Mar'05  Mar'06  Mar'07  Mar'08  Mar'09  Mar'10  Oct'10
Wireline          40.92   41.42   40.23   40.77   39.41   37.96   36.96   35.43
Wireless          35.62   56.95  101.87  165.09  261.08  391.76  584.32  706.70
Gross Total       76.54   98.37  142.09  205.87  300.49  429.73  621.28  742.13
Annual Growth %     40%     29%     44%     45%     46%     43%     45%     19%
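As an arithmetic check, the annual growth percentages can be recomputed from the gross totals (an illustrative sketch, not part of the source; the 54.6 million base figure for 2003 is taken from the text earlier in this section):

```python
# Recompute the "Annual Growth %" row from the "Gross Total" row of the
# table above. Figures are total telephone connections in millions;
# the 2003 base of 54.6 million subscribers is quoted in the text.
totals = {
    "Mar'04": 76.54, "Mar'05": 98.37, "Mar'06": 142.09, "Mar'07": 205.87,
    "Mar'08": 300.49, "Mar'09": 429.73, "Mar'10": 621.28, "Oct'10": 742.13,
}

def growth_rates(series, base):
    """Year-on-year percentage growth, rounded to the nearest whole percent."""
    rates, prev = {}, base
    for period, total in series.items():
        rates[period] = round((total / prev - 1) * 100)
        prev = total
    return rates

print(growth_rates(totals, 54.6))
```

The computed values match the table: 40% for March 2004 is the growth figure cited in the section heading, and the lower 19% for October 2010 reflects only seven months of growth over March 2010.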

Teledensity

Teledensity is an important indicator of telecom penetration in the country. The teledensity,
which was 2.32% in March 1999, increased to 12.7% in March 2006 and 52.74% in March 2010, and
further to 62.51% in October 2010. Thus there has been continuous improvement in the overall
teledensity of the country. The rural teledensity, which was around 1.21% in March 2002,
increased to 24.31% in March 2010 and further to 29.25% in October 2010. The urban teledensity
increased from 66.39% in March 2008 to 119.45% in March 2010 and stood at 140.06% at the end
of October 2010.
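Teledensity is defined as telephone connections per 100 inhabitants, so the figures above imply a population base that can be back-computed (an illustrative inference; the population figure itself is not stated in the text):

```python
# Teledensity = telephone connections per 100 inhabitants.
# Back-compute the implied population from the October 2010 figures above.
connections_millions = 742.13   # total connections, October 2010
teledensity_percent = 62.51     # overall teledensity, October 2010

population_millions = connections_millions / teledensity_percent * 100
print(round(population_millions))  # roughly 1187 million people
```

The implied base of roughly 1.19 billion is consistent with India's population around 2010, which is why urban teledensity can exceed 100%: many urban subscribers hold more than one connection.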

Focus on Rural Telephone

While urban subscriber numbers have been growing significantly, similar growth has not
occurred on the rural front, though with the introduction of mobile services in rural areas,
rural subscribers have recently shown an increase. Rural telephone connections have gone up
from 3.6 million in 1999 to 12.3 million in March 2004 and further to 200.77 million in March
2010, and their share of total telephones has steadily increased from around 14% in 2005 to
32.75% at the end of October 2010. Rural subscribers had grown to 243.04 million at the end of
October 2010. Wireless connections have contributed substantially to total rural telephone
connections, standing at 233.95 million in October 2010. During 2010-11, the growth rate of
rural telephones was 21.05% as against 18.69% for urban telephones. The private sector has
contributed to the growth of rural telephony, providing about 84.27% of rural telephones as
of October 2010.

 Manufacturing

The Indian telecom industry manufactures a complete range of telecom equipment using
state-of-the-art technology. Given the growth of telecom, there are excellent opportunities
for domestic and foreign investors in the manufacturing sector, and the last five years saw
many renowned telecom companies setting up manufacturing bases in India. The production of
telecom equipment in value terms is expected to increase from Rs. 4,88,000 million during
2008-09 to Rs. 5,35,000 million in 2010-11. Favourable factors such as policy moves taken by
the Government, incentives offered, a large talent pool in R&D and low labour costs can
provide an impetus to the industry. Exports increased from Rs. 4,020 million in 2002-03 to
Rs. 1,35,000 million in 2009-10, accounting for 26 per cent of the total equipment produced
in the country, and are expected to increase to Rs. 1,50,000 million in 2010-11.

Foreign Direct Investment (FDI)

Liberalization of the financial sector has had beneficial results for telecom. Liberalization, by
allowing entry to private firms, has produced unprecedented growth in the telecom sector, and
greater participation by foreign investors has helped the sector expand. Today, telecom is the
third major sector attracting FDI inflows, after services and computer software. At present, 74%
to 100% FDI is permitted for various telecom services. This investment has helped the telecom
sector grow: total FDI equity inflows in telecom were US$ 1,057 million during 2010-11
(April-September).

3G AND BWA TELECOM SERVICES

The phenomenal growth of the telecom industry in India is being followed by the urge to move
towards better technology and the next level of service delivery. While the last 5 years have
been transformational for Indian telecom industry, the next few years look even more exciting.

3G spectrum was allotted to successful bidders for commercial use on September 1, 2010, as
per the timelines indicated in the Notice Inviting Application (NIA) and in the Letter of Intent
issued after the bid amounts were deposited. The 3G spectrum was allotted to Airtel, Aircel,
Vodafone, S Tel, Reliance, Idea Cellular and Tata Teleservices, who won the bids through an
electronic auction spread over 34 days for 3G and 16 days for BWA. The BWA spectrum has
also been assigned to the successful bidders: Aircel, Augere, Tikona, Qualcomm, Infotel and
Bharti. 3G and BWA spectrum will enable users to have value-added services like video
streaming, mobile internet access, and higher and faster data downloads. With the allotment of
spectrum, the Department of Telecommunications met all the timelines in strict adherence to
the NIA, from issuing the NIA and conducting the auction to earning revenue of Rs.1,06,000
crore and assigning spectrum on the due date. The electronic auction conducted for 3G and
BWA spectrum, the first of its kind in the country, has been historic in its success. The auctions
took place in a fair and transparent manner, satisfying all the stakeholders, including the bidders
who won spectrum for pan-India and individual circles. The success of this auction has been
unparalleled, and the Government intends to replicate this model in other sectors involving
large stakes.

Newer access technologies like BWA and 3G will completely transform the internet/broadband
scenario in India. BWA will overcome the key hindrance of Right of Way (ROW) in India,
while 3G has the potential to make the mobile phone a ubiquitous device for accessing the
internet.

Mobile Number Portability (MNP)

MNP allows a subscriber to change service provider without changing his or her mobile phone
number. The much-awaited mobile number portability was launched on November 25, 2010 in
Haryana and will be available to more than 700 million subscribers across the country from
January 20, 2011. As part of the Government's continued efforts to increase competition in the
market and to provide wider choice to customers, Mobile Number Portability is an important
step. The networks in all the remaining 21 Licensed Service Areas have started migration to
work in the MNP environment. For orderly technical migration of complex interconnected
networks, each of the remaining service areas will be migrated one by one on alternate days.
This allows simultaneous validation of technical parameters and removal of any problems
arising from the migration activity, ensuring the successful and smooth migration of each
service area.
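
Conceptually, MNP works by routing traffic for a ported number to its current operator rather than the operator that originally issued it. A toy sketch of that lookup, with hypothetical operator names (real deployments resolve this through signalling-level queries against a central number-portability database, not an in-memory dictionary):

```python
# ported_db maps a number to its current operator; numbers that were
# never ported still belong to the operator that originally issued
# them (the "donor").
ported_db = {}

def port(number, recipient_operator):
    """Record that the subscriber has moved to a new operator."""
    ported_db[number] = recipient_operator

def route(number, donor_operator):
    """Return the operator that should receive traffic for this number."""
    return ported_db.get(number, donor_operator)

# A subscriber keeps the number but switches operators:
port("9812345678", "Operator B")
print(route("9812345678", "Operator A"))  # traffic now goes to Operator B
print(route("9898989898", "Operator A"))  # never ported: stays with donor
```

The key design point this illustrates is that every call or SMS to a ported number requires a lookup against the portability database, which is why the migration of each service area has to be validated so carefully.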

Inclusive Telecom Growth and Broadband for all

Telecom connects people across the length and breadth of the country irrespective of income
bracket, and it provides immense benefit to all of society. It contributes significantly to India's
GDP and particularly benefits the poor. The mobile phone has revolutionized the Indian
economy by making it more inclusive, enabling greater participation by the poorer sections of
society. People can now transact business more economically, saving the incremental cost of
physical movement; they no longer have to travel from place to place in order to do business.
An expanding broadband base will only improve the scenario.

Source: http://pib.nic.in/newsite/mainpage.aspx

7. Is technology integration the solution to biotechnology's slow research and
development productivity?
The environment of drug discovery and development has made the innovative capabilities of a
biotechnology company ever more uncertain, since it is shaped by financing and competitive
constraints requiring enormous R&D investments with uncertain pay-offs. Moreover,
innovation is essential to competitive survival and lays the foundation of a biotechnology
company's strategy for transforming knowledge-based assets into marketed products (Nicholls-
Nixon and Woo, 2003; Schweizer, 2005). However, in a constantly changing environment,
implementing a successful strategy depends to a significant degree on learning in new
directions and on recognizing opportunities that materialize during the process. In such a
context, external sourcing helps multiply the opportunities of discovery, provides the flexibility
that enables the firm to win the race against time, and increases the chances of product success.

The particular challenge facing early-stage biotechnology companies is that a wide range of
tools, technologies and approaches must be combined and applied in the search for new and
improved therapeutics. To complicate things further, a high proportion of these technologies
and related intangible assets have unproven R&D productivity. In an R&D outsourcing
environment, discrepancies in data integrity and harmonization mean that strategic alliances
contribute mainly to the dispersion of product information from one stage to another within the
value chain of the drug discovery process. This presents a dramatic challenge for an early-stage
biotechnology company, since the largest value to be captured resides in innovations that affect
the later stages of drug development. The main issue to be resolved is improving the probability
of success of clinical trials, which manifests itself only many years later. Firms need to avoid
misconceptions about accelerated drug development that have compromised the completeness
of development and favoured diminished quality.

Implications of technology sourcing on business strategy

The management of knowledge would be the systematic process of sourcing, validating,
selecting, organizing, differentiating and generating knowledge. In other words, it attempts to
generate and utilize spaces of technology interaction that allow the development of the
intangible assets that support the firm in achieving its objectives. Thus, gaining and sustaining a
competitive advantage requires that a company understand the entire value delivery system, not
just the segment of the value chain in which it participates. This necessitates managing
competitive capabilities at a wider scale, no longer at the firm level on the basis of a sole
technology. As complex as it might seem to implement platforms that incorporate the role of
other value-adding technologies, the option of ignoring these wider issues is quickly becoming
unrealistic for biotechnology companies. This evolution requires a fundamental recasting of
intra- and inter-firm organizational frameworks, business models and strategic orientations.
Firms can considerably increase their operational efficiency and R&D productivity by
providing direction capabilities within the technology maze of the drug discovery process.

In the early stage of the drug discovery process innovation is based on an open network where
firms engage in efforts to establish direct contacts with all their partners. This creates a situation
that ultimately contributes to the dispersion of innovation and is characterized by high
managerial involvement. In effect, different partners have access to different flows of
information, whilst at this stage value is essentially created through a closed network where
information is exchanged and transformed based on shared norms.

Achieving this goal is the primary objective of the biotechnology sector's efforts to renovate
and reinvigorate its R&D. Developing technology internally meets the firm's capability-
building needs, but it requires more time and greater resources. Acquiring technology
externally constitutes the best alternative for a sector that suffers from rapidly changing
technology and high investor expectations. Such an approach will provide access to new
knowledge and organizational structures that are essential for serving markets of unmet
medical need in the most cost-effective and rational way.

Source: Ahuja, G. and Katila, R. (2001) Technological acquisitions and the innovation
performance of acquiring firms: a longitudinal study. Strategic Management Journal, 22,
197-220.

BIOTECHNOLOGY'S INCURSION INTO THE PHARMACEUTICAL INDUSTRY

That first generation of biotech firms actually launched the industry, just as they changed the
dominant business vision (even if this was largely speculative at the time). The working of
financial markets contributed significantly to the boom of dedicated biotechnology firms
(DBFs) (see Figure 1), even if the role of venture financing has progressively changed over
time. The initial DBFs were founded by academic researchers, and a high percentage of these
firms still have academic scientists among their founders. But the role of financial capital has
gradually shifted away from funding scientific ventures, which existed only in the form of ideas
and plans, towards guiding young, yet already established, firms towards their stock market
listings (Ostro and Esposito 1999). In that respect, the working of financial markets has played
a very powerful role in the reduction in the number of small firms that characterizes the current
biotechnology industry.

A second stage emerged in the 1990s and was characterized by a boom of economic
opportunities resulting from the huge research effort aimed at an analytical understanding of the
working of living organisms. The sequencing of genomes (human and non-human) appeared as
an emblematic event that implied a change in knowledge regime that affected pharmaceutical
research in many ways. In its impact on the pharmaceutical industry, this second stage rested
much more on interdisciplinary and combinatorial knowledge drawn from the experience of
different but complementary disciplines (biology and chemistry of course, but also materials,
mechanics, robotics, equipment suppliers, and the software and computer industries). This
interdisciplinary learning implied the discovery of new industrial activities, new methods and
techniques, new instruments, new equipment, new computer and software applications, and the
like (Hoch et al. 1996). In that landscape, DBFs began to play
another role, moving from translators to explorers of scientific and technological opportunities
for large pharmaceutical corporations (see Pyka and Saviotti 2001). Consequently, alliances
between DBFs and large pharmaceutical corporations are currently emerging as the backbone of
a competitive pharmaceutical industry (Malloy 1999). Very few DBFs have so far managed to
become vertically integrated producers even if a consolidation phase is currently noticeable in
the US context (see Table 1). Most of them remain collaborators or suppliers to large
pharmaceutical corporations through their participation in complex networks of collaborations.
DBFs have become a main source of knowledge generation for the pharmaceutical industry.

Source: Industry and Innovation, Sep 2003 by Quere, Michel


Biotechnology's growth-innovation paradox and the new model for success

The biotech sector is facing a paradox. Traditionally, the strength of the sector has been based
on the ability of companies to innovate while its future success will be based on the ability of
companies to grow. Innovation and increasing size, however, do not always go hand in hand. In
fact, in recent Accenture research, almost 50% of pharma and biotech executives believe that
biotechs become less innovative as they grow, while 72% believe that the traditional pharma
operating model may not be the best one for biotechs to emulate.

So what options do biotechs have if they need to grow but must remain nimble and flexible?
One option, highly favoured by the investment community, is for biotechs to move beyond
dependence on revenues generated by technology platforms alone and instead become product-
driven companies. For companies that choose to become product-driven, the winners will be
those that can distinguish themselves from the crowd at the earliest possible stage and grow
without sacrificing the very things at which they are best. They will be the companies that:

- demonstrate the capability to deliver repeat innovation
- generate value early and quickly
- spread risk by developing multiple products
- minimise operational inefficiencies
- are expert at managing investor relations.

But what would this nimble, product-driven biotech company look like? To truly enable
innovation, flexibility and rapid delivery of value, the model would have two main elements - an
innovation centre and a capability network.

The innovation centre, embedded within discovery, would be separate from other activities
associated with bringing products to market in order to enable it to focus on fostering and
delivering continued innovation to drive growth. In addition to its internal activities, the
innovation centre would proactively identify new technologies outside of the parent company
and apply them to increase the speed, effectiveness and efficiency of the discovery process.

Combined, the two elements of the proposed model can allow biotechs to gain rapid access to
the capabilities required to bring products to market while retaining key attributes of innovation,
scientific excellence, entrepreneurial spirit and flexibility. By adopting this model, biotechs will
be better able to foster repeat innovation and enhance operational efficiencies.

Our research shows that 71% of pharma/biotech executives believe this operating model could
be a success and 72% of biotech executives believe their company would consider adopting the
model. Biotechs adopting this model will need to focus on four core competencies to deliver the
benefits of innovation, speed and flexibility.

8. 4G LAUNCH IN THE MARKET


The world is progressing in terms of technology. Modern technology has good features, but the
best is always in progress, and each generation builds on the advances of the one before it. The
1G and 2G technologies came first, introduced in the early 1990s; 3G followed with
considerably more advanced capabilities; and 4G is the next step. 4G is the name given to the
fourth-generation technology, where "G" stands for generation. The term generally refers to
fourth-generation mobile technology, the wireless technology now coming into use.

Implementing 4G Technology for Organizational Development

Corporate work culture has changed a great deal over the past couple of years. With growing
competition in the market, it has become essential to incorporate advanced technologies for the
proper growth and development of an organization. Assembling the right technology resources
for a strong networking system plays an important role here, helping deliver better and faster
performance.

This is possible with the introduction of the fourth-generation wireless communication system
known as the 4G network. It is a step ahead of basic networking technology: it can provide a
better internet protocol system with higher security and faster data transmission, and thus a
higher quality of service. It can also be implemented in mobile technology, where it reduces
transmission malfunctions. There are two main types of 4G wireless network.

Types of 4G Wireless Network

1. Long Term Evolution (LTE)
2. Worldwide Interoperability for Microwave Access (WiMAX)

Features of 4G wireless technology

1. IPv6 support, enabling a large number of wireless-enabled devices
2. Advanced access schemes
3. Advanced antenna systems
4. Software-defined radio (SDR)
5. WiMAX or LTE and other packet-based data transmission systems

Advantages of 4G mobile technology

1. Better reception and transmission of data, at rates of at least 100 Mbit/s
2. Secure IP-based solutions such as ultra-broadband internet access, gaming services,
streamed multimedia and IP telephony
3. Flexible channel bandwidth, varying from 5-20 MHz up to 40 MHz
4. Smooth handoff across heterogeneous networks
5. Femtocell support, giving the capacity for better coverage

The special antenna arrangement used in this process enables a technique popularly known as
spatial multiplexing, which makes the technology efficient by transmitting several data streams
simultaneously and thereby accelerates the data exchange rate.
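
The throughput gain from spatial multiplexing can be sketched with a simple idealized model: with sufficiently rich scattering, a link with N transmit and M receive antennas can carry up to min(N, M) independent data streams at once. The per-stream rate and antenna counts below are illustrative assumptions, not figures from any standard:

```python
# Idealized spatial-multiplexing model: peak rate scales with the
# number of independent streams, which is bounded by min(tx, rx).
# Real links fall short of this under poor channel conditions.

def peak_rate_mbps(per_stream_mbps: float, tx_antennas: int, rx_antennas: int) -> float:
    """Peak rate when min(tx, rx) independent spatial streams are sent."""
    streams = min(tx_antennas, rx_antennas)
    return per_stream_mbps * streams

# Four 25 Mbit/s streams over a 4x4 antenna configuration reach the
# 100 Mbit/s floor cited in the advantages above.
print(peak_rate_mbps(25, 4, 4))
```

This is why adding antennas at both ends, rather than only widening the channel, is central to how 4G multiplies throughput over 3G.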

Source: Daisey Brown


Get Around a New City Using 4G Technology

Moving to a new city is exciting, yet overwhelming for so many reasons. There is a lot of new
information to research and to learn. With 4G technology on your mobile device, you can learn
as you go, making getting around your new city easier than ever.

Moving to a new city entails learning all the roads, the restaurants, the shops, the parks, the bars,
and services like doctors and beauty salons. Not only do you need to learn where these things
are, but you also need to learn which are the best, which match your needs, and how to get there.
All of these things in your old city were like second nature so learning them all over again in a
new place can be very daunting and seem like such a huge task.

Thank goodness for the internet, where you can find a world of information on just about
anything you need. You can do all of your research on your new city from your house, on your
computer, visiting local websites or simply searching in your web browser. Either way, you
will come up with a ton of helpful information. Or you can learn as you go. This may seem
scary at first: walking around a strange city with no destination or idea of where you are? But
with 4G technology on your phone, you will stay connected as you work your way around the
city.

If your phone has 4G technology on it, you will be connected no matter where in the city you
are. As you are walking down the street and are hungry or thirsty, you can look up nearby
restaurants, bars and cafes. Between the ones you find, you can look up their menu, prices,
atmosphere, location, and reviews to help you make a decision of where to go. If you are
walking around and want to find a library, a nail salon, a grocery store, or need to look up
directions to a specific location, you can do it right on your phone. With 4G technology you do
not have to worry about getting lost or confused in your new city. Before you know it, your new
area will feel like home and you will be able to get around to all your favorite new places with
ease.

4G will change the way you live, especially if you are in a new area. Whether you find yourself
in a new city because you moved or because you are on vacation, a constant internet connection
on your cell phone will come in handy. You will never get lost, and will always be able to find
exactly what you need, information on the place, and how to get there.

4G Technology Is Real Wireless Internet

It is always important to have some type of communicative connection to your life even when
you are far away from your home. There are so many things that you must stay in touch with
now due to how fast our modern day world moves. Everything continues to go faster and faster,
and due to that fast pace, it is important to find ways to stay connected to all of the things and
people that are important in your life.

But this can be very difficult, especially when you leave your home, whether that means the
actual dwelling in which you live or the city or town in which you live. 4G is a great way to
stay connected to the things and people that are involved in your life on a daily basis. It can be
the one technology that lets you keep in touch with the world no matter where you are, within
reason of course.

If you are taking a trip to Antarctica, the Sahara Desert, or adventuring through the jungle in
Thailand, it should be obvious that the only form of communication you have while you are
there is your own voice, which only the people and animals in your immediate vicinity can
hear. And if you are extremely lucky, your tour guide will have some sort of satellite
communication device, just in case something bad happens to you or a fellow tourist.

But the idea is that with 4G technology you will have a real, and quite fast, mobile broadband
Internet connection that fits in your pants pocket or backpack. This is revolutionary, because
just a few years ago it was impossible: the only way to get any type of Internet connection was
by using a computer. And while you are on the move, it goes without saying that a desktop
computer is completely out of the question.

The only option left was to bring a laptop computer with wireless Internet capability, but this
was not guaranteed to work either, because it required finding a location with a wireless access
point you could connect to. This is a very limited option: you must stay within a certain
distance of that access point to reach the network, which means that you cannot truly use the
Internet connection on the go.

You had to commit to staying wherever that connection was for a fixed amount of time and do
the tasks you had in mind. The system had its glitches: if you forgot to do something important
while you were there and had already left, you were basically out of luck and had to move on
without the answer to whatever that inquiry was. Now, with 4G, you have a viable option for
true on-the-go, unfettered Internet. All you need is cell phone service to do whatever it is you
have to do.
