Tools For The Smart Grid


Source: Standard Handbook for Electrical Engineers, 17th Edition

ISBN: 9781259642586
Authors: Surya Santoso Ph.D., H. Wayne Beaty

24.4. TOOLS FOR THE SMART GRID

24.4.1. Introduction
The smart grid uses a wide variety of technologies and performs functions ranging from wide-area monitoring at transmission
levels to home automation at low-voltage distribution levels. The design, analysis, and optimization of the smart grid have to be
carried out taking into account, among others, the following aspects:

Power system planning, operation, and control require a large amount of computational work due to large mathematical
models and, in certain applications, fast time response.

Traditional computational resources do not meet the emerging requirements of a grid model that is evolving into a more
dynamic, probabilistic, and complex representation. New computing techniques and solutions should enable faster and
more comprehensive analysis. In addition, the focus is moving to real-time simulation platforms.

In the foreseen smart grid scenario, the manipulation of the large amounts of data collected from smart meters and sensors will
require the application of new computing capabilities, like those provided by graphics processing units (GPUs), and
geographically distributed processing techniques, like cloud computing.

The study of the smart grid requires a combined simulation of power systems and information and communications
technology (ICT) infrastructures. Since the operation of the power system increasingly depends on communication and data
networks [65], it is crucial to understand the impact of the ICT infrastructures on the operation of the power grid.

The operation of the smart grid also involves issues such as safety and security (including protection against potential
cyber attacks) [65].

To deal with those aspects and achieve the goals mentioned above, new analytical methods and simulation tools are required.
Simulation has always been an important tool for the design of power systems; in a smart grid context it can be useful to
reduce the costs associated with upgrades to both power and communication infrastructures, analyze the potential loss of
service that can occur as a consequence of a failure or a cyber attack, or enable the design and evaluation of different solutions
before deploying them in the field [65]. Simulation faster than real time can be crucial for the development and implementation
of smart grids [66].

Some features of the tools required for the design, analysis and optimization of the smart grid are discussed below; the rest of
Sec. 24.4 covers in some detail the combined simulation of energy and communication systems, the fields in which HPC has
been applied to date, the development and capabilities of real-time simulation platforms, the advantages that big data and
analytics will bring to the smart grid operation, and the application of cloud computing to smart grid analysis and operation.

Simulation of Very Large-Scale Systems. Several large-scale grid concerns and threats cannot be adequately modeled using
present capabilities. Among these concerns are wide-area disruptive events, including natural events, cascading accidents,
coordinated cyber and physical attacks, interdependencies of the power grid system and critical infrastructures, or scenarios
including wide-scale deployment of intermittent distributed generation. For instance, understanding the interdependencies of
the electric power grids with other critical infrastructures is a serious need, since disruptions in one infrastructure (e.g., the
electric grid system) can have severe consequences for other infrastructures (e.g., natural gas and water supply systems). New
modeling approaches could span different applications (operations, planning, training, and policymaking) and concerns
(security, reliability, economics, resilience, and environmental impact) on a set of spatial and temporal scales wider than those
now available. To fulfill this role, new simulation tools are needed. Such tools could be built through a combination of existing
distributed and new capabilities, and take either the form of a single, centralized facility or a virtual, integrated environment.
Several issues would have to be addressed, however, in their development and implementation; in particular, acquisition of and
access to validated electric infrastructure data, physical and administrative protection of controlled information (including
protection of sensitive information generated as model output), opportunities for data sharing, data verification and validation,
identification of data uses, and an environment that simplifies the integration of diverse system models. For a discussion of this
topic see reference [67]. However, the main challenges will not be in the size of the systems to be analyzed but in the
effectiveness of the analytical methods and the accuracy of the implemented models. Some experience is already available in
the implementation of tools for simulating huge power systems in a reasonable time; see for instance [68] and [69].

Multidomain Simulation Tools. An accurate modeling of some generation and energy storage technologies may require the
application of simulation tools capable of connecting and interfacing applications from different types of physical systems
(mechanical, thermal, chemical, electrical, electronics). Several packages offer a flexible and adequate environment for these
purposes. They can be used to develop custom-made models not implemented in specialized packages. Open connectivity for
coupling to other tools, a programming language for development of custom-made models and a powerful graphical interface
are capabilities available in some circuit-oriented tools that can be used for expanding their own applications and for
developing more sophisticated models. These tools can be applied for the development and testing of highly detailed and
accurate device models, or linked to other tools to expand their modeling capabilities [70].

Interfacing Techniques. Power system studies are numerous and each has its own modeling requirements and solution
techniques. Over time, several software applications have been developed to meet the individual needs of each study. However,
a need is also recognized to exploit the complementary strengths by interfacing two or more applications for model validation
or to exchange data between different tools. Examples are the interface of time- and frequency-domain tools, the interface of a
circuit-oriented tool and a tool based on the finite element method (FEM), or the interface for analysis of interactions between
communication and power systems, each system being represented by a specialized tool [70–76]. Although there have been
significant improvements in tools that can simultaneously reproduce different physical systems (e.g., multidomain simulation
tools), the interfacing will become an increasing necessity for the simulation of complex systems whose modeling
requirements cannot be met by a single tool.

Agent-Based Simulation. Agent-based simulation (ABS) is a powerful approach for simulating real-world systems as a group of
autonomous agents, modeled as computer programs, that interact with each other. The design of an agent-based simulator
involves communication protocols and languages, negotiation strategies, software architecture, and formalisms. ABS can
conveniently model the complex behavior of system participants and is particularly suitable for large-scale systems involving
various types of interacting participants with distinct roles, functionalities, behaviors, and decisions. ABS is well suited to
several types of studies, such as power market simulation [42, 43], system vulnerability assessment [77], or co-simulation of
power and communication systems [78]. ABS has been applied in power system studies for many years, and its use will
continue to grow, mainly owing to the increasing demand for large-scale system simulators.
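
As an illustration only, the following Python sketch shows the basic structure of an agent-based market simulation: generator
agents submit bids and a market agent clears them by merit order. All class names, bidding strategies, and numerical values
are hypothetical and are not taken from the references cited above.

# Minimal agent-based market sketch; agents and clearing rule are illustrative only.
import random

class GeneratorAgent:
    def __init__(self, name, capacity_mw, marginal_cost):
        self.name = name
        self.capacity_mw = capacity_mw
        self.marginal_cost = marginal_cost

    def bid(self):
        # Simple strategy: bid marginal cost plus a small random markup.
        return self.marginal_cost * (1.0 + 0.1 * random.random())

class MarketAgent:
    def clear(self, agents, demand_mw):
        # Merit-order dispatch: cheapest bids are accepted first.
        offers = sorted(((a.bid(), a) for a in agents), key=lambda x: x[0])
        dispatched, served = [], 0.0
        for price, agent in offers:
            if served >= demand_mw:
                break
            take = min(agent.capacity_mw, demand_mw - served)
            dispatched.append((agent.name, take, price))
            served += take
        clearing_price = dispatched[-1][2] if dispatched else None
        return dispatched, clearing_price

agents = [GeneratorAgent("G1", 100, 20.0),
          GeneratorAgent("G2", 80, 35.0),
          GeneratorAgent("G3", 50, 50.0)]
market = MarketAgent()
for hour in range(3):                      # repeated interaction over simulated hours
    schedule, price = market.clear(agents, demand_mw=150)
    print(hour, price, schedule)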

24.4.2. Combined Modeling of Power Grids and Communication Systems

Communication technology will have a prominent role in the future smart grid; therefore, an accurate prediction of the power
grid behavior will require accurate models of the communication infrastructure and the integrated simulation of generators,
transmission and distribution systems, control processes, loads, and data networks. Current approaches to hybrid system
simulation focus on integrating available software packages. However, the results often have limited application or make
significant sacrifices of precision and accuracy; the analysis of integrated power and ICT infrastructures requires integrated
simulation frameworks [65]. The scenarios to be considered for the electric grid require the systematic construction of nontrivial
hybrid models, which should include complex continuous and discrete dynamics [78–80].

The Role of Communication Networks in Smart Grids. Communication networks already play an important role in the power
system, and they will play an even more crucial role in the future smart grid. The smart grid communications layer can be seen
as consisting of several types of networks, each having a distinct scale and range. For instance, wide-area networks (WANs) are
high-bandwidth communication networks that operate at the scale of the medium-voltage network and beyond, handle
long-distance data transmission, and provide communication between the electric utility and substations; advanced metering
infrastructures (AMIs) interconnect WANs and end-user networks, and provide communication for low-voltage power
distribution areas; home area networks (HANs) provide communication between electrical appliances and smart meters within
a home, building, or industrial complex. Power line
communication (PLC) is an option that uses the existing power wires for data communication (i.e., the power grid itself
becomes the communication network); narrowband PLC technologies that operate over distribution systems are also used for
monitoring (e.g., AMI) or grid control. However, the power grid was not designed for communication purposes; from a
communication perspective, existing power grid networks suffer from several drawbacks [65]: fragmented architectures, lack of
adequate bandwidth for two-way communications, lack of interoperability between system components, and inability to handle
the increasing amounts of data from smart devices.

Continuous Time and Discrete Event Simulation Models. Power system and communication network simulators use different
modeling approaches and solution techniques:

Dynamic power system simulation uses continuous time, where variables are described as continuous functions of time;
since some discrete dynamics must also be represented, a time-stepped approach is used (numerical algorithms with
discrete time slots are applied).

Communication networks are packet-switching networks, which can be adequately modeled by discrete events that are
unevenly distributed in time. This is a different approach from that used for power system dynamic simulation, which is
based on a fixed interval between events.

Figure 24-11 illustrates the difference between the two types of simulators: it is evident that synchronizing the time of the
different components is a crucial aspect when combining several tools. An option for dealing with both approaches is the use of
predefined synchronization points: each simulator pauses when its simulation clock reaches a synchronization point, and once
all simulators are paused, information is exchanged. However, this approach has a drawback: messages that need to be
exchanged between the simulators are delayed if they occur between synchronization points. A solution to this problem is to
reduce the time between synchronization points, although this degrades performance. Consequently, the co-simulation
needs to balance accuracy and simulation speed, and take into account that not all time instants at which
communication between the different simulators must occur are known a priori. See [65] and [81].
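
The synchronization-point scheme described above can be sketched in a few lines of Python. The power system solver and the
communication network simulator are reduced to placeholders (power_step and a simple event queue); the step sizes and
message contents are assumed for illustration, and the sketch only shows how messages generated between synchronization
points are delayed until the next exchange.

# Sketch of co-simulation with predefined synchronization points.
# power_step() stands in for a specialized power system simulator;
# the event queue stands in for a discrete-event communication simulator.
import heapq

POWER_DT = 0.001            # fixed time step of the power system simulator (assumed)
STEPS_PER_SYNC = 10         # synchronization every 10 power steps (assumed)
N_SYNC_INTERVALS = 5        # total simulated time: 5 x 10 x 1 ms = 50 ms

def power_step(t, messages):
    # Placeholder for one continuous-time (time-stepped) solution step.
    for msg in messages:
        print(f"power simulator applies '{msg}' at t = {t*1000:.1f} ms")

# Messages produced by the communication network model (time stamp in s, content).
event_queue = [(0.004, "open breaker B12"), (0.023, "tap change on T3")]
heapq.heapify(event_queue)

pending = []
for k in range(N_SYNC_INTERVALS):
    # Continuous-time side: advance the power model up to the next sync point.
    for i in range(STEPS_PER_SYNC):
        t = (k * STEPS_PER_SYNC + i) * POWER_DT
        power_step(t, pending)
        pending = []            # each message is applied only once
    # Both simulators pause here and exchange data: messages issued since the
    # last exchange are delivered now, i.e., delayed to the synchronization point.
    sync_time = (k + 1) * STEPS_PER_SYNC * POWER_DT
    while event_queue and event_queue[0][0] <= sync_time:
        pending.append(heapq.heappop(event_queue)[1])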

Figure 24-11 Continuous time versus discrete event simulation. (a) Continuous time simulation—
evenly distributed time steps; (b) discrete event simulation—unevenly distributed time steps.

Combined Simulation of Power and Communication Systems. The combined simulation of the power systems and
communication networks can be achieved by two approaches [65].

Co-simulation. An approach that combines existing specialized simulators can be a faster and cheaper solution than
constructing a new environment that combines power and communication systems in a single tool. In addition, using
existing models and algorithms that have already been implemented and validated reduces the risk of errors. In the smart
grid context, a co-simulator would consist of a specialized communication network simulator and a specialized power
system simulator. When multiple simulators are interfaced, the main challenge is to connect, handle and synchronize data
between simulators using their respective interfaces: time management between two simulators can be challenging if each
simulator manages its time individually, and the necessary synchronization between simulators running separately implies
penalties, such as start-up times and the reading and processing of input data. A significant effort has been made in this field
to date; see [82–88].

Integrated Simulation. An approach in which the power system and the communication network are simulated in a single
environment simplifies the interface between tasks and allows sharing the management of time, data, and
power/communication system interactions among the simulator parts.

Several tutorials on the combined simulation of power systems and communication networks have been presented in [65, 89,
90].

24.4.3. High Performance Computing


Definition. Simulation, optimization, and control of the smart grid can be included in the category of highly computationally
intensive problems. Smart grid studies require very complex mathematical models due to the increasing use of power electronic
based control devices, the implementation of deregulation policies, or the co-simulation of energy and communications systems.
All these facts increase the computational requirements of power system applications. The term high-performance computing
(HPC), usually associated with supercomputing, is now used to denote a multidisciplinary field that combines powerful computer
architectures (e.g., computer clusters) with powerful computational techniques (e.g., algorithms and software) [91–93]. The
availability of affordable medium-size computer clusters and multicore processors, together with the increased complexity of
power system studies, is promoting the application of HPC [94], which may involve grid computing [95] or multicore computing.

Early attempts focused on two categories: naturally parallel applications, like Monte Carlo simulation and contingency analysis,
and coupled problems, like the simulation of large-scale electromechanical transients [96, 97]. Parallel computing is a type of
computation in which calculations are carried out simultaneously: a large problem is divided into smaller ones, which are then
solved at the same time. Parallelism has been employed for many years but interest in it has grown lately, and parallel
computing is becoming a dominant paradigm in computer architecture, mainly in the form of multicore processors. Parallel
computers can be roughly classified according to the level at which the hardware supports parallelism, with multicore
computers having multiple processing elements within a single machine, while clusters and grids use multiple computers to
work on the same task [98–102]. The availability of desktop computers with parallel computing capabilities raises new
challenges in software development, since these capabilities can be exploited for developing tools that could support the smart
grid simulation requirements. A first option is to adapt current methods, since some current algorithms can be easily adapted to
parallel computing environments; however, the real challenge is the development of new solution methods specially adapted to
a parallel processing environment.
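
A minimal sketch of a naturally parallel application is given below: a Monte Carlo adequacy study whose independent trials are
distributed over the available processor cores with Python's multiprocessing module. The two-unit reliability model and all
numerical values are deliberately trivial placeholders.

# Sketch: naturally parallel Monte Carlo study spread over CPU cores.
import random
from multiprocessing import Pool

UNITS = [(200.0, 0.05), (150.0, 0.08)]   # (capacity in MW, forced outage rate) - assumed
PEAK_LOAD = 300.0                        # assumed peak load in MW

def one_trial(seed):
    rng = random.Random(seed)
    # A unit is available when its random draw exceeds the forced outage rate.
    available = sum(cap for cap, foru in UNITS if rng.random() > foru)
    return 1 if available < PEAK_LOAD else 0   # 1 = loss-of-load event

def main():
    n_trials = 100_000
    with Pool() as pool:                        # one worker per available core
        outcomes = pool.map(one_trial, range(n_trials), chunksize=1000)
    print("estimated loss-of-load probability:", sum(outcomes) / n_trials)

if __name__ == "__main__":
    main()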

Application to Power System Analysis. A first classification of power system computations may distinguish between steady-
state and dynamic analysis: steady-state analysis determines a snapshot of the power grid, without considering the transition
from a snapshot to the next; dynamic analysis solves the set of differential-algebraic equations that represent the power system
to determine its evolving path. Current power system operation is based on steady-state analysis. Central to power grid
operations is state estimation. State estimation typically receives telemetered data from the SCADA system every few seconds
and extrapolates a full set of conditions based on the grid configuration and a theoretically based power flow model. State
estimation provides the current power grid status and drives other key functions such as contingency analysis, economic
dispatch, OPF, and AGC.
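
The core of state estimation is a weighted least-squares (WLS) fit of the measurements to a network model. The following
sketch solves the normal equations of a linear (DC) WLS problem; the measurement matrix, weights, and values are illustrative
and do not correspond to any real network or EMS implementation.

# Sketch of weighted least-squares state estimation on a linear (DC) model:
#   z = H x + e,   x_hat = (H' W H)^-1 H' W z
# H, W, and z below are illustrative numbers, not from a real network.
import numpy as np

H = np.array([[ 1.0, 0.0],      # measurement sensitivities w.r.t. the state variables
              [-1.0, 1.0],
              [ 0.0, 1.0],
              [ 1.0, 1.0]])
z = np.array([0.10, -0.04, 0.06, 0.17])                     # telemetered measurements
W = np.diag([1/0.01**2, 1/0.02**2, 1/0.01**2, 1/0.02**2])   # inverse error variances

G = H.T @ W @ H                             # gain matrix of the normal equations
x_hat = np.linalg.solve(G, H.T @ W @ z)     # estimated state
residuals = z - H @ x_hat                   # basis for bad-data detection
print("estimated state:", x_hat)
print("measurement residuals:", residuals)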

A computational effort aimed at obtaining adequate simulators must account for the following aspects [91–93]: (1) power grid
operation functions are built on complex mathematical algorithms, which require significant time to solve; (2) a low
computational efficiency in grid operations can lead to inability to respond to adverse situations; (3) a power grid can become
unstable and collapse within seconds; (4) studies are usually conducted for areas within individual utility boundaries and
examine an incomplete set of contingencies prioritized from experience as the most significant; (5) dynamic analysis remains
an offline application, but grid operation margins determined by offline analysis do not always reflect real-time
operating conditions. Consequently, power system operations demand the application of HPC technologies to transform grid
operations with improved computational efficiency and dynamic analysis capabilities.

Potential areas of HPC application in power systems are dynamic simulation, real-time assessment, planning and optimization,
or probabilistic assessment. For a review of HPC development and applications in power system analysis see references [92,
93].

24.4.4. Real-Time Simulation

Introduction. Continuous advances in hardware and software have led to the replacement of expensive analog simulators by
fully digital real-time simulators (RTSs) [103–107]. Due to advances in digital processors, parallel processing, and
communication technology, RTSs are becoming increasingly popular for a variety of applications [108–110]. Field
programmable gate arrays (FPGAs) are also making significant inroads into real-time simulators: this technology can offer
high-speed high-precision simulations in standalone configurations, and work as accelerator components in PC-cluster
simulators [111, 112].

Real-time simulators can be used for testing equipment in a hardware-in-the-loop (HIL) configuration or for rapid control
prototyping (RCP), where a model-based controller interacts in real time with the actual hardware; see Fig. 24-12 [35, 103]. To
achieve their goals, RTSs must be able to simulate all phenomena within a specified time step and maintain real-time
performance, so all signals must be updated exactly at the specified time step; a failure to update outputs and inputs at the
specified time step causes distortion that affects the simulation or the performance of the equipment under test. The rest
of this subsection summarizes the main features of real-time simulators and their applications.

Figure 24-12 Applications of real-time digital simulators. (a) Hardware-in-the-loop simulation; (b)
controller prototyping [103]. (©IEEE 2011.)

Main Features of Real-Time Simulation. The implementation and development of real-time simulation platforms are based on
the features detailed below [106].

Real-Time Constraint. A real-time computation must be fast enough to keep up with real time; this constraint must be
respected at any time step. In general, the differential-algebraic equations (DAEs) of the system under study are discretized and
computed along a sequence of equal time-spaced points. Then, all these points must be computed and completed within the
specified time step. If a time step is not completed in time, there is an overrun. Overruns create distortion of the waveforms
injected into the equipment under test, and can lead to equipment misoperation.
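
The bookkeeping behind overrun detection can be sketched as follows. This is only an illustration of the concept on a
general-purpose operating system, not a real-time implementation; the 50 μs step and the placeholder compute_step function
are assumptions.

# Sketch: fixed-step loop with overrun detection.
# compute_step() stands in for the solver work done inside one time step.
import time

TIME_STEP = 50e-6          # 50 microsecond simulation time step (assumed)
N_STEPS = 10_000

def compute_step():
    pass                   # placeholder for solving the discretized DAEs

overruns = 0
next_deadline = time.perf_counter() + TIME_STEP
for _ in range(N_STEPS):
    compute_step()
    if time.perf_counter() > next_deadline:
        overruns += 1      # step not finished in time: source of waveform distortion
    else:
        while time.perf_counter() < next_deadline:
            pass           # busy-wait until the step boundary (I/O would be updated here)
    next_deadline += TIME_STEP

print(f"overruns: {overruns} out of {N_STEPS} steps")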

© McGraw-Hill Education. All rights reserved. Any use is subject to the Terms of Use, Privacy Notice and copyright information.
Bandwidth. The time step selected for a simulation must be compatible with the frequency range of the phenomena to be
simulated. Electric systems are difficult to simulate because their bandwidth is high. Mechanical systems with slow dynamics
generally require a simulation time-step between 1 and 10 ms, although a smaller time-step may be required to maintain
numerical stability in stiff systems. A common practice is to use a simulation time-step below 50 μs to provide acceptable
results for transients up to 2 kHz, a time-step of approximately 10 μs for phenomena with frequency content up to 10 kHz, and
time steps shorter than 1 μs for simulating fast-switching power electronic devices used in transmission and distribution
systems, or used to interface distributed generators. Power electronic converters with a higher PWM carrier frequency in the
range of 10 kHz may require time-steps of less than 0.5 μs.
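
A small helper encoding the rules of thumb quoted above might look as follows; the thresholds simply restate the indicative
values given in the text and should not be read as firm requirements.

# Sketch: choosing a simulation time step from the highest frequency of interest,
# following the indicative values quoted above.
def suggested_time_step(max_frequency_hz):
    if max_frequency_hz <= 2e3:
        return 50e-6        # transients up to about 2 kHz
    if max_frequency_hz <= 10e3:
        return 10e-6        # phenomena with content up to about 10 kHz
    return 0.5e-6           # fast-switching converters, high PWM carrier frequencies

for f in (1e3, 5e3, 20e3):
    print(f"{f/1e3:.0f} kHz -> {suggested_time_step(f)*1e6:.1f} us")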

Parallel Processing. Real-time constraints require the use of highly optimized solvers that can take advantage of parallel
processing. The application of parallel processing can be facilitated by using distributed-parameters transmission lines to make
the admittance matrix block-diagonal (i.e., creating subsystems that can be solved independently from each other). This
approach allows dividing the network into subsystems with smaller admittance matrices; however, an effective implementation
of this technique assumes that the processing time for data exchange between processors is much smaller than the time used
to simulate each subsystem. Some software packages have been specifically developed to take advantage of parallel
processing, with great improvements in simulation speed. Linkage is performed to generate different executable modules that
can be assigned to different processor cores and increase the simulation speed according to the number of assigned processor
cores. Using hundreds of cores to simulate very large grids requires the use of efficient and automatic processor allocation
software to facilitate the use of such powerful parallel computers.
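
The decoupling idea can be illustrated with a toy example: when the admittance matrix is block-diagonal, the nodal equations
of each subsystem can be solved independently (and therefore on different cores). The matrices below are arbitrary illustrative
values.

# Sketch: a block-diagonal admittance matrix lets subsystems be solved independently.
import numpy as np

# Two subsystems decoupled by a transmission-line model (no mutual terms).
Y1 = np.array([[ 4.0, -2.0],
               [-2.0,  3.0]])
Y2 = np.array([[ 5.0, -1.0],
               [-1.0,  2.0]])
I1 = np.array([1.0, 0.5])
I2 = np.array([0.8, 0.2])

# Full system: block-diagonal Y, so the joint solve...
Y = np.zeros((4, 4))
Y[:2, :2], Y[2:, 2:] = Y1, Y2
V_full = np.linalg.solve(Y, np.concatenate([I1, I2]))

# ...equals two smaller solves that could run on different processor cores.
V_split = np.concatenate([np.linalg.solve(Y1, I1), np.linalg.solve(Y2, I2)])
print(np.allclose(V_full, V_split))   # True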

Latency. In power electronic converter simulation, latency can be defined as the time elapsed between the semiconductor firing
pulses sent by the controller under test and the reception by the controller of the voltage and current signals sent back by the
simulator. The use of voltage source converters with PWM frequency higher than 10 kHz as well as modular multilevel
converters (MMC) with a very large number of levels may require time step values below 1 μs to achieve a total latency below 2
μs. Reaching such a low latency requires the use of FPGA chips. The difficulty of implementing low-latency inter-processor
communications capable of fast data transfer without overloading processors is one of the main impediments to the
development of very high performance simulators.

Solvers. Two important aspects to be addressed in real-time solvers are parallelization and stiffness. Parallel solvers for
real-time applications are generally based on the presence of lines or cables that split the system into smaller subsystems,
each of which can be simulated within the specified time step using only one processor. If the computational time for one of
these subsystems becomes too long because of its size, the common practice is to add artificial delays to obtain a new
reduction of size and computational time. An artificial delay is usually implemented with a stub line, which is a line with a length
adjusted to obtain a propagation time of one time step. Large capacitors and inductors can also be used to split a large system
into several smaller subsystems to take advantage of parallel processing; however, the addition of these artificial delays may be
problematic as parasitic Ls and Cs could be large compared to actual component values. Consequently, circuit solvers capable
of simulating large circuits in parallel without adding parasitic elements can be very useful to increase simulation speed and
accuracy. This is the key feature of the state-space-nodal algorithm [113], a nodal admittance based solver that minimizes the
number of nodes (and thus the size of the nodal admittance matrix) to achieve a faster simulation. Another important issue is
the stiffness exhibited by the equations of electric circuits: the circuit has a spread of natural frequencies. Nonstiff solvers (e.g.,
explicit Runge-Kutta) will be unstable due to the high-frequency components of the DAEs unless a very short time step
corresponding to the highest frequency is used; under such circumstances the simulation becomes extremely long. Some stiff
solvers are able to cut through the high-frequency components and are less influenced by frequency components higher than
the sampling frequency. For switching transient studies, the concern is usually for components below 2 kHz, but the equations may
have eigenvalues in the MHz range that cannot easily be eliminated. This is the reason why most solvers are based on the A-
stable order-2 trapezoidal rule of integration, although it can exhibit numerical oscillations and in many cases users must use
snubber circuits across the switches to avoid such oscillations [114]. Other more stable and accurate rules, such as the order-5
L-stable discretization rule, enable the use of larger time step values [106].
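
The behavior described above can be reproduced with the scalar test equation dx/dt = λx. With a time step much larger than
1/|λ|, the A-stable trapezoidal rule yields an amplification factor close to −1 and therefore oscillates, while an L-stable rule
damps the fast component; backward Euler is used here as the simplest L-stable rule, not the order-5 rule mentioned in the text.

# Sketch: A-stable trapezoidal integration oscillates on a stiff test equation,
# while an L-stable rule (backward Euler, order 1) damps the fast component.
lam = -1.0e6        # fast eigenvalue (rad/s), far above the step rate (assumed)
h = 1.0e-4          # time step much larger than 1/|lam| (assumed)
x_trap, x_be = 1.0, 1.0

print("step   trapezoidal      backward Euler")
for k in range(1, 6):
    # Trapezoidal rule: x_{n+1} = x_n * (1 + h*lam/2) / (1 - h*lam/2)  -> factor ~ -1
    x_trap *= (1 + h * lam / 2) / (1 - h * lam / 2)
    # Backward Euler:  x_{n+1} = x_n / (1 - h*lam)                     -> factor ~ 0
    x_be /= (1 - h * lam)
    print(f"{k:4d}   {x_trap:+.6f}      {x_be:+.2e}")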

Input/Output Requirements. Real-time simulators are built around multicore PCs with extensive input/output (I/O) capabilities.
FPGA chips mounted on electronic boards can provide a direct and rapid interface and be used to implement models and
solvers of moderate complexity, as well as fast control systems and signal processing. I/O requirements for real-time
simulation are increasing mainly due to the increasing complexity of current power electronics systems. Real-time simulators
must be able to physically interface with communication protocols (e.g., DNP3 protocol used for power system relays and
substations) with an appropriate driver. The user can also program the FPGA board to interface to the desired protocol. The
simulator must also provide proper signal conditioning for all I/Os such as filtering and isolation.

Applications of Real-Time Simulators. The applications of real-time simulators can be classified into the following categories
[106]:

1. Rapid Control Prototyping (RCP). A real-time simulator is used to quickly implement a controller prototype that is connected
to either the real or a simulated plant.

2. Hardware-in-the-Loop (HIL). This approach acts in an opposite manner; its main purpose is to test actual hardware (e.g., a
controller) connected to a simulated plant. See also [115].

3. Software-in-the-Loop (SIL). It is applied when controller object code can be embedded in the simulator to analyze the global
system performance and to perform tests prior to the use of actual controller hardware (HIL).

4. Power-Hardware-in-the-Loop (PHIL). This option consists of using an actual power component in the loop with the simulator.

5. Rapid Simulation (RS). It takes advantage of parallel processing to accelerate simulation in massive batch run tests (e.g., a
Monte-Carlo simulation). RS is very useful for reducing the simulation time of large systems using detailed models.

The fields in which RTSs have been applied include, among others, FACTS and HVDC applications, Monte Carlo
simulations, and protection system studies [106]. Research is also being done to develop real-time and faster-than-real-time
transient stability simulators. Although real-time simulation of power system transients is well established, the need for
real-time phasor-domain simulation has been recognized for a long time; real-time phasor-type simulators can be useful for
testing the functionality of controllers and protective devices in large-scale power systems, as operator training tools in energy
management centers, for online prediction of system instability, or for implementing corrective actions to prevent
system collapse. For a more detailed list of applications of real-time simulation, see [110].

The future generation of real-time simulation platforms will be capable of simulating long-term phenomena simultaneously
with very short transients and fast switching events requiring sub-microsecond time steps, performing multidomain and
multirate simulation (i.e., capable of simulating the dynamic response of all aspects and components affecting the system
performance and security assessment), and integrating high-end general purpose processors with reconfigurable processor
technologies, such as FPGAs, to achieve the best performance at a low cost [116].

24.4.5. Big Data and Analytics


Introduction. Big data is a term coined to denote large and complex data sets for which traditional data processing
applications are inadequate. The term often refers to the use of advanced methods to collect, manipulate and extract value
from huge collections of data. To achieve this goal, it is important to account for the following aspects:

Big data manipulation includes capture, search, storage, transfer, visualization, analysis, curation, sharing, updating, and
information privacy. The list of challenges includes how to characterize the uncertainty in large and complex data sets,
reconcile information from multiple sources, and extract information from large volumes of data.

Data can be complex in many ways (volume, noise, heterogeneity, multisource, collected over a range of temporal and
spatial scales). Deducing information from such complexity can be useful to optimize performance, improve design, or
create predictive models.

The data are usually generated as random information from an unknown probability distribution. Understanding the data means
deducing the probabilistic model from which they have been generated. The information to be gained is the relationship
between the variables of the model.

Accuracy in big data may lead to more confident decision making, and better decisions can result in greater operational
efficiency, as well as reduction of cost and risk. However, there are always sources of error (e.g., noise in the
measurements, lossy data compression, mistakes in model assumptions, unknown failures in algorithm executions) in the
process from data generation to inference.

For more details on big data and data engineering, see references [117]–[119].

Data is increasing at exponential rates, which requires new frameworks for modeling uncertainty and predicting how it changes.
Powerful techniques are needed to efficiently process very large volumes of data within a limited time. The term analytics is
generally used to denote the discovery, interpretation, and communication of meaningful patterns in data. Big data analytics
refers to the set of technologies, such as statistics, data mining, machine learning, signal processing, pattern recognition,
optimization, and visualization methods, that can be used to capture, organize, and analyze massive quantities of information
as they are produced, and to obtain meaningful insight in order to provide a better service, at a lower cost, taking advantage
of multiple data sources. The main challenges faced by analytics software (i.e., massive and complex data sets in a constant
state of change) are driving the development of tools that will be useful for application in the smart grid.

Two concepts closely related to big data are data mining and machine learning [120–122]. Data mining is the process of
discovering patterns in large data sets: it is aimed at extracting information from data and transforming it into knowledge for
further use. The goal is the extraction of patterns and knowledge from large amounts of data, not the extraction (mining) of data
itself; the actual task of data mining is the analysis of large quantities of data to extract previously unknown patterns such as
groups of data records (cluster analysis), unusual records (anomaly detection), and dependencies (association rule mining).
These patterns can then be seen as a kind of summary of the input data, and may be used in further analyses. Machine learning
explores the study and construction of algorithms that can learn from and make predictions on data: such algorithms build a
model from inputs in order to make predictions or decisions expressed as outputs. Machine learning is employed where
designing and programming explicit algorithms is infeasible. Within the field of data analytics, machine learning is used to
devise complex models and algorithms that lend themselves to prediction. These models allow users to produce reliable
decisions and uncover hidden insights through learning from relationships and trends in the data.
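
As a toy example of anomaly detection on smart meter data, the following sketch flags unusual daily consumption values with
a robust z-score; the synthetic data, the injected anomalies, and the threshold are all assumptions chosen only to illustrate the
idea.

# Sketch: flagging anomalous daily consumption records from smart meter data
# with a simple robust z-score; data and threshold are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
daily_kwh = rng.normal(loc=24.0, scale=3.0, size=365)   # synthetic one-year series
daily_kwh[100] = 80.0                                   # injected anomaly (e.g., faulty meter)
daily_kwh[200] = 0.5                                    # injected anomaly (e.g., outage)

median = np.median(daily_kwh)
mad = np.median(np.abs(daily_kwh - median))             # median absolute deviation
robust_z = 0.6745 * (daily_kwh - median) / mad          # ~N(0,1) under normality

anomalies = np.flatnonzero(np.abs(robust_z) > 3.5)
print("days flagged as anomalous:", anomalies)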

The Role of Big Data in Smart Grid Operation [123–130]. Utilities are collecting an increasing amount of real-time information
from homes, factories, power plants, and transmission/distribution infrastructures. Big data is generated in the grid from
various sources [130]: (1) synchrophasor-measured data; (2) condition-based measurements acquired by IEDs; (3) data from
smart meters and other customer interaction channels (e.g., voice, Internet, or mobile); (4) data from home automation and
intelligent home devices; (5) data from new technologies requiring additional monitoring (e.g., electric vehicles, wind generation,
photovoltaic panels, microgrids); (6) energy market pricing and bidding data; (7) offline entered nameplate and maintenance
data; (8) data on customers, service connection and assets; and (9) management, control, and maintenance data in the power
generation, transmission, and distribution networks acquired by IEDs. Some data widely used in decision making, such as
weather and GIS information, are not directly obtained through grid measurements. Utilities can also collect large amounts of
data from computer simulations based on reliable power system models and powerful software tools. Figure 24-13 shows the
main sources of big data in the smart grid.

Figure 24-13 Sources of big data in the smart grid.

Key characteristics of data from power grids are volume (e.g., immense amounts of data), velocity (i.e., some information is
made available in real or near real time), and variety (i.e., either structured or unstructured data can come from multiple data
sources, such as network monitors, meters, or even images and videos).

The data from AMI generate information on how individual customers respond to requests for consumption reduction. Both
the supply of power and the demand for it can be managed more efficiently if utilities and consumers get accurate information
about power use (e.g., consumers could be guided to move some electricity consumption to off-peak hours and reduce the
need for nonrenewable power plants to be activated at peak hours). This information can also be used to improve grid reliability
and outage response, reduce the cost of distribution operations, or measure the impact of a demand response program.

For efficient asset management and outage prevention, real-time operation can be crucial, often requiring fast processing of
large volumes of data. To unlock the full value of this information, utilities need complex event-processing engines that are still
under development. However, many utilities do not have the capabilities to transmit, store and manipulate such information. In
addition, focusing on high volumes of data rather than on data management can lead to inadequate decisions; although
volume is a significant challenge in managing big data, data variety and velocity must be addressed as well.

A challenge for utilities is that they run their central control operations on a mix of legacy computer systems, many of which
cannot communicate with each other. Past experience, particularly from large blackouts, has shown the need for better
situational awareness about network disturbances such as faults and dynamic events, sudden changes of intermittent power
from renewable resources such as wind generation, outage management tasks such as fault location and restoration, and
monitoring of system operating conditions such as voltage stability. These and other tasks have been handled reasonably well
by existing solutions, but improvements in decision making are highly desirable in order to produce more cost-effective and
timely decisions, facilitating more efficient and secure grid operation.

Appropriate big data analytics capabilities will deliver value to (1) anticipate failures of distribution assets, such as
transformers, to organize maintenance operations; (2) improve balance of generation and demand through better forecasting
and demand response management; (3) improve energy planning forecasts to decrease energy costs; (4) improve customer
service quality, identifying power cuts more accurately at the moment they occur for faster restoration; (5) enable optimization
and control of delocalized generation to facilitate the connection of a large number of electrical vehicles and renewable energy
sources; (6) support distribution network operations, such as voltage optimization or outage management; (7) structure energy
saving policies; (8) improve customer service with error-free metering and billing services; (9) allow utilities to identify and
reduce losses.
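
As an illustration of the forecasting applications listed above, the following sketch fits a very simple day-ahead peak-load
regression on temperature with ordinary least squares; the data points, the single explanatory variable, and the model form are
placeholders rather than a realistic forecasting method.

# Sketch: a very simple day-ahead peak-load forecast by least-squares regression
# on temperature; all data are illustrative placeholders.
import numpy as np

temp_c  = np.array([28, 30, 33, 35, 31, 29, 36, 38, 34, 32], dtype=float)
peak_mw = np.array([410, 432, 470, 505, 445, 420, 520, 545, 488, 460], dtype=float)

# Fit peak_mw ~ a + b*temp_c by ordinary least squares.
A = np.column_stack([np.ones_like(temp_c), temp_c])
coef, *_ = np.linalg.lstsq(A, peak_mw, rcond=None)

tomorrow_temp = 37.0
forecast = coef[0] + coef[1] * tomorrow_temp
print(f"forecast peak for {tomorrow_temp} C: {forecast:.0f} MW")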

The incorporation of big data analytics into utilities faces some evident barriers, such as the heterogeneity of data types and
formats, security, confidentiality and privacy issues [131], the ownership of private data, or the unavailability of adequate tools.

Some emerging technologies, such as the IoT and cloud computing, are closely related to big data in the smart grid [132]. The
IoT represents an enormous number of networked sensors that collect various kinds of data (e.g., environmental,
geographical, operational, and customer data). These new technologies offer new opportunities, as they can contribute to
creating the value that big data will bring to the smart grid by efficiently integrating, managing, and mining large quantities of
complex data.

24.4.6. Cloud Computing


Definition. Cloud computing is a model for enabling ubiquitous, on-demand access to a pool of configurable computing
resources (e.g., networks, servers, storage, applications, and services) [133]. Cloud computing provides users with capabilities
to process data in third-party data centers. The current availability of high-capacity networks, low-cost computers and storage
devices, as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility
computing, have led to a growth in cloud computing, which is becoming a highly demanded service due to high computing
power, low cost of services, high performance, scalability, accessibility, and availability. For more details on cloud computing
see [134–139].

Cloud computing is often compared to the following technologies with which it shares certain aspects [134]:

Virtualization abstracts away the details of physical hardware and provides virtualized resources for high-level applications.
Virtualization forms the foundation of cloud computing, as it provides the capability of pooling computing resources from
clusters of servers and dynamically assigning or reassigning virtual resources to applications on demand.

Grid computing coordinates networked resources to achieve a common computational objective. Cloud computing and grid
computing employ distributed resources to achieve application-level objectives; however, cloud computing takes one step
further by leveraging virtualization technologies at multiple levels to realize resource sharing and dynamic resource
provisioning.

Utility computing provides resources on demand and charges customers based on usage rather than a flat rate. Cloud
computing can be perceived as a realization of utility computing, and adopts a utility-based pricing scheme for economic
reasons. With on-demand resource provisioning and utility-based pricing, service providers can maximize resource
utilization and minimize their operating costs.

Autonomic computing aims at building computing systems capable of self-management (i.e., they react without human
intervention). Its goal is to overcome the management complexity of current computer systems. Although cloud computing
exhibits certain autonomic features, such as automatic resource provisioning, its objective is to reduce cost rather than
complexity.

Deployment Models. Several models of cloud computing, with variations in physical location and distribution, have been
adopted. Basically, cloud computing can be grouped into four subcategories: public, private, community, or hybrid [133, 137]. A
public cloud is a cloud made available to the general public in a pay-as-you-go manner, while a private cloud is an internal data
center not made available to the general public. In most cases, establishing a private cloud means restructuring an existing
infrastructure by adding virtualization and cloud-like interfaces to allow users to interact with the local data center while
experiencing the same advantages of public clouds, most notably self-service interface, privileged access to virtual servers, and
per-usage metering and billing. A community cloud is shared by several organizations with common concerns (e.g., goal,
security requirements, policy, compliance considerations) [137]. A hybrid cloud takes shape when a private cloud is
supplemented with computing capacity from public clouds.

Types of Clouds. From the perspective of service model, cloud computing is divided into three classes, depending on the
abstraction level of the capability provided and the service model of providers [133–135] (see Fig. 24-14):

Infrastructure as a Service (IaaS). This option offers virtualized resources (i.e., computation, storage, and communication)
on demand. A cloud infrastructure provisions servers running several choices of operating systems and a customized
software stack. Users are given privileges to perform numerous activities on the server (e.g., starting and stopping it,
customizing it by installing software packages, attaching virtual disks to it, and configuring access permissions and
firewall rules).

Platform as a Service (PaaS). This option offers a higher level of abstraction to make a cloud easily programmable: an
environment on which developers create and deploy applications and do not necessarily need to know how many
processors or how much memory applications will be using. In addition, multiple programming models and specialized
services (e.g., data access, authentication, and payments) are offered as building blocks to new applications.

Software as a Service (SaaS). Services provided by this option can be accessed by end users through Web portals.
Consumers are increasingly shifting from locally installed computer programs to online software services that offer the
same functionality; traditional desktop applications such as word processors and spreadsheets can now be accessed as a
service on the Web. This model alleviates the burden of software maintenance for customers and simplifies development
and testing for providers.

Figure 24-14 Cloud computing classes.

A Practical Experience. Utilities will have to provide new business capabilities in the face of rapidly changing technologies.
Cloud computing solutions promise great technological capabilities and benefits; however, not much experience is yet available
on their application by utilities. The experience of ISO-NE presented in [140] is a good example from which some important
conclusions were derived.

The list of challenges associated with migrating power system simulations to the cloud-computing platform includes topics
related to development, software licensing, cost management, and security. The main findings provided in reference [140] can be
summarized as follows:

A platform developed on the basis of an open architecture should not only accommodate different resource management
and job balancing tools, but also support different power system simulation programs.

The cloud-computing platform can comply with current cybersecurity and data privacy requirements, comparable to existing
on-premise infrastructures.

Deploying power system applications to the cloud can result in significant cost savings and performance improvement
without the compromise of system security and quality of service.

Utilities need to develop a strong business case when deciding which data and applications will be deployed in the cloud, so as
to take advantage of all its benefits.

The way cloud computing functions is significantly different from the traditional on-premise infrastructure, so understanding
all the implications is important to equip utilities with the information needed to make the appropriate assessment.

Cloud computing can achieve significant cost savings by avoiding unnecessary over-expenditure on IT infrastructure and
eliminating capital investment expenditures. Public cloud computing can provide a robust solution far beyond the economic
reach of many organizations' own infrastructures. The high resiliency and availability of services provided by cloud computing
are critical to utilities as their daily operations become increasingly dependent on IT services; this makes cloud computing an
attractive solution for meeting growing internal computational needs. In addition, migrating power system applications to public
cloud services can comply with current cybersecurity and data privacy requirements.

The potential cloud computing applications in power system analysis and operations have been analyzed in several works; see
[141–145].
