Policy Informatics Assignment Q3
Information technologies such as big data analytics, social media, and sensor networks are making a dramatic impact, or are perceived as potential game changers, in many applications. Take big data as an example: related technologies promise to change how the scientific enterprise operates and innovates. In industry settings, new waves of products and services based on these technologies, many by startups touted as tomorrow’s Google and Facebook, are entering the marketplace, improving productivity in old industry sectors and opening up new opportunities for future businesses yet to be defined. In the public sector, these technologies are starting to make similarly profound impacts. Compared with applications stemming from the scientific community or the private sector, public-sector applications tend to take a more conservative approach (likely rightfully so) when adapting and adopting technology. As a
result, it’s still too early to pinpoint full-scale redesign of policy-making approaches in major
public decision-making areas, or to discuss completed success stories. Yet, bits and pieces of the
next generation of IT-driven policy-making have been emerging for some time. It’s already
common knowledge that social media provides great potential for policy makers to gauge public
opinion. Researchers and analysts routinely study social media content, such as Twitter feeds, to
characterize patterns of public sentiment on various issues with policy relevance, and in some
cases, to identify how emotion or influence propagates through online social networks.
Understanding gained through such analyses complements the traditional approach, which is
largely based on polling, and can inform policy-making. In more specific public-sector application contexts, various information sources, including but not limited to sensor networks and social media, are being integrated to provide
faster and finer-granularity assessment of a situation. In addition, these technologies serve as the
enabling mechanism for coordination among people as well as resources, often in a distributed
fashion, and provide timely feedback to decision makers either during the planning phase or after
the implementation of involved policies and decisions. In public health crisis management, most notably during the recent Ebola epidemic and the 2009 H1N1 (swine flu) pandemic, social media has been heavily studied, and all kinds of simulation-based predictive models have been developed.
Despite the fact that most of these models performed poorly (and were often outrageously
wrong), the public health community has argued for the value of such models, from the policy
standpoint.
Policy Informatics
Policy makers have traditionally relied on intuition, experience, small-sample human contact,
polls, and media outlets to gauge society’s “pulse.” As I argued above, IT is providing innovative alternatives to these traditional channels.
The field of policy informatics is emerging to cross-fertilize between computational sciences and
public administration and policy, and to advance the framework of and infrastructural support for
informatics research in the context of public policy and administration. New domain-specific
information collection and analysis approaches are being developed to meet the needs of
complex policy and administration problems. Work on social media, population-scale big open
data, and temporal-spatial-network visualization has received a lot of attention lately; more
applied research investigates technology adoption issues in governance processes. Also gaining
momentum is research on new data-driven decision-making models; complex-systems views of governance processes; new channels of interaction among policy makers, stakeholders, and the public; and persuasive technologies.
From Informed Policy-Making to Smart Policy-Making
Policy informatics is in its early stage of development, yet many of its concepts
and techniques have already met with success in the policy research community and in practice.
It’s safe to characterize the state of the art as being largely informatics and data science based, offering effective, real-time situational awareness and analysis capabilities to make use of data.
In this sense, policy informatics is already using technological means to enable informed policy-
making. Granted, providing principled informed policy-making frameworks and tools to policy
makers can have enormous implications; concerted efforts from multiple disciplines are still
needed in the years and decades to come, to perfect such frameworks and tools, and promote
their adoption. In the meantime, it makes a lot of sense for the research community to look
beyond informed policy-making to explore how policy informatics can help build the foundation
for even better and more advanced policy-making, something more akin to smart policy-making.
What advanced capabilities, then, will differentiate smart policy-making? Coming up with a definitive set of such differentiators isn’t possible, given the topic’s emerging nature. But from the current literature and ongoing discussions among academics and practitioners, it isn’t too difficult to venture some novel aspects, or even pillars, of next-generation smart policy-making:
Informed policy-making focuses on what has happened and what is happening. Smart
policy-making needs to take into consideration additional information, such as what
might happen down the road. In other words, smart policy-making entails a much more
proactive framework.
Informed policy-making today offers mostly a generic data-analytics capability, emphasizing various engineering aspects of data sharing and mining. In smart policy-making, the integration between data and the domain is expected to be much tighter. Various kinds of behavioral, affect, and root-cause analyses, at both the individual and group/population levels, would need to be carried out in particular policy contexts.
With a significantly improved understanding of the policy environment, and much more detailed, data-driven models, we can expect possible major changes as to the framing, formulation, and evaluation of policies. It’s conceivable that new decision-making tools, in addition to informatics and situational awareness tools, will play a crucial role in smart policy-making. Since the development of smart policy-making is expected to create research problems for AI as exciting and novel as those behind applications such as self-driving cars, robotics, Siri, and Cortana, it’s important for the AI community to pay close attention to this emerging field.
Data Mining
Data is a collection of raw measurements of things; for it to be processed and made meaningful, some data mining techniques need to be applied. Data mining has many techniques that can be used effectively depending on the type of data set, and it is used to come up with trends and patterns. The main techniques are described below.
Tracking patterns.
One must be able to track patterns in a given data set; this is one of the most fundamental skills a data miner should have. It usually involves recognizing repetitions or trends in your data that happen at regular intervals. For example, you might see that a certain route is taken by most people during summer, or that more NUST students board taxis in the morning than in the evening.
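As a rough illustration, here is a minimal sketch, assuming a hypothetical pandas DataFrame of taxi boardings with a timestamp column, that counts boardings by hour of day to surface the morning-versus-evening pattern:

```python
import pandas as pd

# Hypothetical boarding records; in practice these would come from a file or database.
boardings = pd.DataFrame({
    "student_id": [1, 2, 3, 4, 5, 6],
    "boarded_at": pd.to_datetime([
        "2024-03-04 07:45", "2024-03-04 08:10", "2024-03-04 08:30",
        "2024-03-04 17:50", "2024-03-05 07:55", "2024-03-05 08:20",
    ]),
})

# Count boardings per hour of day; a recurring morning spike is the pattern.
by_hour = boardings["boarded_at"].dt.hour.value_counts().sort_index()
print(by_hour)
```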
Classification.
Classification is a more complex data mining technique that forces you to collect various attributes together into discernible categories, which you can then use to draw further conclusions or to serve some function. For example, if you’re evaluating data on individual students’ attendance and the number of courses passed, you might be able to classify them as “low,” “moderate,” or “high” performers. You could then use these classifications to learn even more about those students.
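As a hedged sketch of how this could look in code, the snippet below uses scikit-learn (an assumption, not something the text prescribes) to learn the low/moderate/high labels from made-up attendance and courses-passed figures:

```python
from sklearn.tree import DecisionTreeClassifier

# Made-up training data: [attendance %, courses passed] per student.
X = [[95, 8], [90, 7], [70, 5], [65, 4], [40, 2], [35, 1]]
y = ["high", "high", "moderate", "moderate", "low", "low"]

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Classify a new student with 80% attendance and 6 courses passed.
print(clf.predict([[80, 6]]))  # likely ['moderate'] given these made-up thresholds
```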
Association.
Association is related to tracking patterns, but is more specific to dependently linked variables: one has to identify certain patterns that are linked to one another. For example, a shop might realize that a person who buys bread almost always buys coffee as well. Because such items are highly correlated with one another, this technique is often used for shelf arrangement in shops; you look for specific events or attributes that go hand in hand.
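A minimal sketch, using entirely made-up shopping baskets, of how the support and confidence of the hypothetical rule “bread implies coffee” could be computed:

```python
# Made-up transactions; each set is one customer's basket.
baskets = [
    {"bread", "coffee", "milk"},
    {"bread", "coffee"},
    {"bread", "butter"},
    {"coffee", "sugar"},
    {"bread", "coffee", "butter"},
]

bread = sum(1 for b in baskets if "bread" in b)
both = sum(1 for b in baskets if {"bread", "coffee"} <= b)

support = both / len(baskets)  # how often bread and coffee appear together
confidence = both / bread      # how often coffee is bought, given bread
print(f"support={support:.2f}, confidence={confidence:.2f}")
```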
Outlier detection.
Most data sets contain outliers, and in many cases simply recognizing the overarching pattern can’t give you a clear understanding of your data set. You also need to be able to identify anomalies, or outliers, in your data. For example, suppose that in a class it is usually males who excuse themselves during lecture hours, but one strange week during the month there’s a huge spike in females excusing themselves during the lectures. You’ll want to investigate the spike and see what drove it, so you can either replicate it or better understand what caused it.
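One simple way to flag such a spike (a sketch with made-up weekly counts, not the only approach) is to mark any week that falls more than two standard deviations from the mean:

```python
import statistics

# Made-up counts of excusals per week; week 5 is the suspicious spike.
weekly_excusals = [3, 2, 4, 3, 15, 3, 2, 4]

mean = statistics.mean(weekly_excusals)
stdev = statistics.stdev(weekly_excusals)

# Flag weeks more than two standard deviations from the mean.
outliers = [(week, count)
            for week, count in enumerate(weekly_excusals, start=1)
            if abs(count - mean) > 2 * stdev]
print(outliers)  # [(5, 15)]
```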
Clustering.
Clustering is very similar to classification, but involves grouping chunks of data together based on their similarities. For example, you might choose to cluster different members of your audience into different groups based on how often they tend to shop at your store.
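A minimal sketch, assuming scikit-learn and hypothetical visit counts, that groups customers by how often they shop:

```python
from sklearn.cluster import KMeans

# Hypothetical shopping frequencies: store visits per month for ten customers.
visits = [[1], [2], [2], [3], [8], [9], [10], [20], [22], [21]]

# Group customers into three clusters (e.g. occasional, regular, frequent).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(visits)
print(km.labels_)           # cluster index assigned to each customer
print(km.cluster_centers_)  # average visit count per cluster
```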
Regression.
Regression, used primarily as a form of planning and modeling, is used to identify the likely value of a certain variable given the presence of other variables. For example, you could use it to project a certain price based on other factors like availability, consumer demand, and competition. More specifically, regression’s main focus is to help you uncover the exact relationship between two or more variables in a given data set.
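As a rough sketch with made-up numbers, a straight line can be fitted to demand and price observations and then used to project a price:

```python
import numpy as np

# Hypothetical observations: consumer demand (units) versus price.
demand = np.array([10, 20, 30, 40, 50])
price = np.array([110, 118, 131, 139, 152])

# Fit price = a * demand + b by ordinary least squares.
a, b = np.polyfit(demand, price, deg=1)

# Project the price at a demand level of 60 units.
print(a * 60 + b)
```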
Prediction.
Prediction is one of the most valuable data mining techniques, since it’s used to project the types of data you’ll see in the future. In many cases, just recognizing and understanding historical trends is enough to chart a somewhat accurate prediction of what will happen. For example, you might review students’ payment histories and past payments to predict whether they will pay on time in the future.
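A minimal sketch, with made-up payment histories and scikit-learn as an assumed tool, that predicts whether a student’s next payment will be late:

```python
from sklearn.linear_model import LogisticRegression

# Made-up features per student: [past payments made, payments that were late].
X = [[10, 0], [8, 1], [12, 2], [6, 4], [9, 7], [5, 5]]
y = [0, 0, 0, 1, 1, 1]  # 1 = the next payment turned out late

model = LogisticRegression().fit(X, y)

# Predict for a new student with 7 past payments, 3 of them late.
print(model.predict([[7, 3]]))        # predicted class
print(model.predict_proba([[7, 3]]))  # probability of on-time vs. late
```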
Data Visualization
Data visualization is the graphical representation of information; data in a data set can be represented using different visualization tools. If you are working with massive amounts of data, one challenge is how to display output in a way that isn’t overwhelming. Data visualization can help you create better and more appealing business reports, maximizing the potential of your analysis. If you want the attention of your clients and colleagues, you need to learn modern ways of turning information into visually engaging images and stories. It enables you to highlight the most relevant conclusions from what would otherwise be considered a huge pile of worthless documents.
In data visualization you need to match the audience, so you need to know how data-literate your audience is. Some people cannot read a simple bar graph or pie chart, so you have to make the visuals as simple as possible. Even those who want to interpret the data themselves should be able to draw conclusions from what you present. Neely (2007) explained it concisely: “If you are dealing with inexperienced clients, stay away from advanced solutions. But if you are meeting highly skilled professionals, going beyond pies and charts is mandatory.” Therefore, you must get to know the audience you face and give them materials they can digest successfully.
There are four basic presentation types you can use to approach data visualization (a sketch covering all four follows this list):
Relationships: shows the connections and mutual impact between specific elements (such as course and program); a scatter plot is the best choice in this case.
Timeframe: a line graph suits perfectly if you want to show how a certain phenomenon develops over time.
Composition: this technique is developed to reveal the structure of a single unit, showing its constitutive elements; a pie chart is the simplest way to do this, but if you want a more distinguished data visualization, go for a 100% stacked horizontal bar graph or a slope graph.
Comparisons: bar charts are the usual suspect if you want to compare two or more values.
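The sketch below, assuming matplotlib is available and using made-up numbers throughout, draws one example of each presentation type:

```python
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 2, figsize=(8, 6))

# Relationship: scatter plot of two made-up variables.
axes[0, 0].scatter([1, 2, 3, 4, 5], [2, 4, 5, 4, 6])
axes[0, 0].set_title("Relationship (scatter)")

# Timeframe: line graph of a value developing over time.
axes[0, 1].plot([2019, 2020, 2021, 2022], [10, 14, 13, 18])
axes[0, 1].set_title("Timeframe (line)")

# Composition: pie chart of a whole split into its parts.
axes[1, 0].pie([45, 30, 25], labels=["A", "B", "C"])
axes[1, 0].set_title("Composition (pie)")

# Comparison: bar chart comparing values across categories.
axes[1, 1].bar(["X", "Y", "Z"], [7, 3, 5])
axes[1, 1].set_title("Comparison (bar)")

fig.tight_layout()
plt.show()
```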
Although it might seem irrelevant, the colors you choose have a strong impact on the overall effectiveness of your data visualization. Contrasts between opposing elements emphasize the differences among those features. People mostly use red, green, blue, and yellow because they can be recognized and distinguished easily, but you should not mix too many colors, because that creates confusion among viewers and interferes with already established patterns. For example, if you use red to mark negative trends and green to highlight positive ones, you should keep that scheme consistent throughout your report.
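A small sketch of that convention, with made-up monthly figures: the color of each bar is derived from the sign of the value, so the red/green scheme stays consistent:

```python
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
change = [5, -3, 2, -7]  # made-up month-over-month changes

# One consistent rule: green for positive trends, red for negative ones.
colors = ["green" if value >= 0 else "red" for value in change]

plt.bar(months, change, color=colors)
plt.title("Month-over-month change")
plt.show()
```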
Data visualization can also become a source of valuable digital content, which demands adding interactive elements to the presentation. Interactive maps play a major role in that regard because they allow users to engage with the content and look only for the information they really need. Interactive maps enable users to wander around the chart, zoom in and out, identify specific elements on click, get a 360-degree overview, and use many other interesting features. Creating such maps is a highly complex process, but it will definitely leave a great impression on your clients or customers.
As a matter of fact, interactive maps have already become a standard technique for the vast majority of companies and websites, with the likes of Google, Booking.com, and National Geographic leading the way.
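A minimal sketch, assuming the folium library is installed and using illustrative coordinates, that writes a pannable, zoomable map with a clickable marker to an HTML file:

```python
import folium

# Center the map on illustrative coordinates (roughly Windhoek, Namibia).
m = folium.Map(location=[-22.56, 17.08], zoom_start=13)

# Add a marker that reveals a label when clicked.
folium.Marker(
    location=[-22.565, 17.083],
    popup="NUST campus (illustrative location)",
).add_to(m)

m.save("interactive_map.html")  # open in a browser to pan and zoom
```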