
Harnessing AI and Expert Systems for Electric Vehicle V2G Functionality


Mahmoud Elnady
VSB-TU OSTRAVA
mahmoud.elnady.czv@vsb.cz

Introduction

The widespread adoption of electric vehicles (EVs), as shown in Figure 1, presents a potential
strain on current power infrastructure unless their charging and discharging activities are
efficiently coordinated. Dynamic pricing, a facet of demand response strategies, offers a
compelling avenue to incentivize EV owners to engage in scheduling programs. Consequently,
the realms of EV charging and discharging scheduling, coupled with dynamic pricing models,
emerge as vital arenas for investigation. While numerous scholars have delved into artificial
intelligence-driven EV charging demand forecasting and scheduling models, showcasing their
superiority over conventional optimization techniques, such as linear and exponential models,
scant attention has been paid to EV discharging scheduling, particularly in the context of
vehicle-to-grid (V2G) systems. Given the nascent nature of EVs feeding electricity back into
the grid, a comprehensive review of existing research in EV charging and discharging becomes
imperative to discern prevailing gaps and pave the way for future advancements.

Fig. 1. Electric vehicles market, global, annual sales (‘000), 2018–2035. Credit: GlobalData.

In recent years, numerous countries have ramped up the adoption of electric vehicles (EVs) as
part of their efforts to address energy crises and environmental issues, notably the surge in CO2
emissions and concerns about climate change. The first quarter of 2022 witnessed a significant
acceleration in this trend, with approximately two million EVs sold globally, marking a
staggering 75% increase compared to the same period in 2021 [1]. With governments
worldwide offering incentives and implementing supportive policies, the trajectory suggests a
continued growth in EV numbers. However, while this surge presents a promising solution, the
uncoordinated proliferation of EV charging poses challenges to the existing power grid.
Conversely, EV batteries represent mobile energy storage units capable of offering ancillary
services to power grids, including peak-shaving, valley-filling, and voltage and frequency
regulation. Moreover, if managed effectively, EV batteries can serve as flexible loads,
optimizing renewable energy utilization by aligning charging and discharging cycles with
renewable generation profiles [2].

The concept of utilizing electric vehicle (EV) batteries to supply electricity back to the power
grid, known as vehicle-to-grid (V2G), was pioneered by Kempton and Letendre in 1997 [3].
Engaging in V2G programs offers a plethora of environmental, economic, and socio-
technological advantages to various stakeholders, including EV owners, grid operators,
government entities, and aggregators. For instance, participation in V2G services enables EV
owners to mitigate the total ownership costs by selling stored energy back to the power grids
during peak demand periods. Additionally, V2G initiatives alleviate power grid congestion,
reduce emissions, and bolster renewable energy utilization for grid operators by strategically
timing EV charging during periods of high renewable generation and discharging surplus
energy during periods of low renewable energy generation and heightened demand [4].

However, the adoption rate of electric vehicles (EVs) remains relatively modest, and the
concept of vehicle-to-grid (V2G) integration is still in its nascent stages, undergoing continual
evolution. Numerous V2G initiatives are currently confined to pilot projects, yet to transition
to full-scale implementation. Notably, the majority of these pilot projects have been initiated
only in recent years, with limited scope and scale [5]. Furthermore, existing literature
predominantly emphasizes EV charging methodologies, ranging from classification of
charging techniques to the design of charging strategies aimed at cost minimization. Specific
studies delve into topics such as EV charging scheduling, clustering, and forecasting
techniques, optimal charging strategies under dynamic pricing schemes, and various aspects of
V2G implementation, including customer acceptance, renewable energy integration, economic
feasibility, energy trade coordination, and optimal energy management systems [6].
Encouraging EV owners to engage in the V2G program without monetary incentives poses
challenges, as it entails both energy expenditure and time investment to feed electricity back
to the grid. Consequently, the economic and operational considerations of V2G adoption
become increasingly pertinent as the penetration of V2G-capable EVs rises. While several EV
charging scheduling approaches have proven effective in responding to Time of Use (ToU)
and Real-Time Pricing (RTP) signals to reduce peak demand, mitigate load fluctuations, and
minimize charging costs [7], the extent to which these pricing policies accurately reflect power
system conditions remains a topic that is yet to be extensively explored in the literature.

Furthermore, only a limited number of papers have addressed the economic and operational
dimensions of the V2G program, including discharging scheduling and pricing mechanisms.
Al-Ogaili et al. [8] asserted the superiority of artificial intelligence models over probabilistic
models. With the availability of extensive datasets and the growing computational power,
artificial intelligence algorithms like neural networks and reinforcement learning have gained
widespread popularity and efficacy across various applications, including forecasting and
optimization tasks. The data-driven learning capabilities inherent in artificial intelligence
models render them more adept than traditional optimization models, such as linear and
exponential models. Additionally, artificial intelligence models often do not necessitate expert
knowledge of complex systems, which can be challenging to acquire. Consequently, artificial
intelligence models have found extensive utility in diverse EV-related studies, spanning from
EV battery design and management to V2G applications. A comprehensive examination of the
multifaceted roles that artificial intelligence plays in facilitating the mass adoption of EVs can
be found in [9]. Shahriar et al. [10] conducted a review on machine learning-based prediction
and classification of EV charging behavior, highlighting the potential of reinforcement learning
for EV scheduling. Moreover, numerous artificial intelligence models have been deployed to
address various EV charging scheduling tasks, including predicting EV charging electricity
prices, discerning EV driving patterns, estimating aggregated battery capacities, forecasting
charging load demand, analyzing charging patterns, and optimizing EV charging/discharging
schedules. However, the interconnections and gaps among these studies have not been
comprehensively explored in the existing literature.

Main Idea

The primary focus of this essay is to review, condense, and assess the current artificial
intelligence-driven algorithms concerning three key components of EV charging/discharging:
forecasting, scheduling, and dynamic pricing. Additionally, this essay highlights the research
deficiencies in the realm of efficient EV discharging scheduling and pricing strategies, based
on insights gleaned from existing literature. The relationship between forecasting, scheduling,
and dynamic pricing is shown in Figure 2.

Fig. 2. Relationship between EV charging and discharging scheduling, forecasting, and dynamic pricing.

When it comes to using V2G in practice, the most important thing is to make sure that EV drivers have enough energy in their car batteries when they need it. For example, a driver must be able to make a trip to work and back at any time. This is the basic requirement of V2G and of any other charging technology: the EV driver must be able to communicate when they want to unplug the car and how full the battery should be at that time.
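
As a concrete illustration of that basic requirement, the sketch below models such a driver request as a small data structure, assuming Python; the field names, the departure time, and the SOC values are illustrative assumptions, not drawn from any charging standard.

```python
# Minimal sketch of the information a driver communicates to a V2G-capable
# charger: plug-out time and the battery level required at that time.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class V2GSession:
    departure_time: datetime      # when the driver wants to unplug
    required_soc: float           # battery level needed at departure (0.0-1.0)
    current_soc: float            # battery level at plug-in (0.0-1.0)
    battery_capacity_kwh: float   # usable capacity of the pack

    def energy_needed_kwh(self) -> float:
        """Net energy that must be in the pack by departure time."""
        return max(self.required_soc - self.current_soc, 0.0) * self.battery_capacity_kwh

session = V2GSession(datetime(2024, 6, 3, 7, 30), 0.8, 0.45, 60.0)
print(session.energy_needed_kwh())   # 21.0 kWh must be delivered by 07:30
```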

Discussion

The evolution of Vehicle-to-Grid (V2G) technology can be categorized into three distinct
phases [11], as illustrated in Figure 3 alongside corresponding EV charging/discharging
techniques. Initially, in the preliminary phase, EV charging loads represent only a small
proportion of the overall power grid demand. Currently, V2G development predominantly
resides within this initial phase. Consequently, prevalent charging control strategies include
uncontrolled charging and controlled charging. Many V2G pilot projects are still in the
planning and testing stages, with only a minimal portion of commercial EVs equipped with
V2G capability. Furthermore, the existing Time of Use (ToU) tariff, typically applied to other
load demands, can influence EV charging behaviors. However, traditional ToU tariffs may
struggle to accommodate the widespread adoption of EVs if the stochastic nature of EV
charging loads is not considered in their design.

During the second phase, the electric vehicle (EV) charging load is expected to constitute a
more substantial portion of the power grids' overall load, attributable to the heightened
penetration of EVs. The proliferation of uncoordinated and stochastic EV charging demands
during peak periods poses a significant challenge to power grid stability. Consequently,
aggregators are tasked with a crucial role in orchestrating EVs to offer Demand-Side
Management (DSM) services, such as optimizing renewable energy utilization, peak shaving,
valley filling, and mitigating distribution network congestion [11]. Smart charging/discharging
control strategies become imperative during this phase, with existing literature predominantly
focused on EV smart charging/discharging strategies.

The majority of prediction models employed in forecasting future electricity prices, EV load
demand, renewable energy generation, and the availability of battery state of charge (SOC) for
V2G services are supervised learning-based models. These models are then utilized in
optimization frameworks to generate EV charging/discharging decisions. Given that even a
small forecasting error can significantly impact decision-making, the accuracy of forecasting
models becomes paramount for optimal EV charging scheduling models.

Fig. 3. Schema of V2G development phases and corresponding EV charging techniques: (a) uncontrolled charging, (b) controlled charging, (c) smart charging, (d) indirectly controlled charging. The solid black arrows indicate power flow, the dashed red arrows indicate charging control, and the dashed blue arrows indicate information flow.

While various artificial intelligence methods are widely utilized due to abundant datasets and
robust computational capabilities, long short-term memory (LSTM) and gated recurrent units
(GRUs) stand out as the most popular methods for forecasting EV charging/discharging tasks.
Their effectiveness lies in their capacity to handle nonlinearities and long-term dependencies.
However, despite the adequacy of supervised learning models in providing forecasting
accuracy, uncertainty remains an inherent characteristic of forecasting models. The stochastic
and unpredictable nature of EVs' charging and discharging patterns further complicates
forecasting efforts.
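
For illustration, the following minimal sketch shows how an LSTM forecaster of this kind is typically set up, assuming PyTorch; the 24-hour input window, layer sizes, and synthetic training batch are placeholders rather than a model from the cited studies.

```python
# Minimal LSTM forecaster for aggregated EV charging load (one-step-ahead).
import torch
import torch.nn as nn

class LoadForecaster(nn.Module):
    """Maps a window of past hourly loads to a forecast of the next hour."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                  # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # use the last hidden state

model = LoadForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a synthetic batch (24-hour input window).
x = torch.randn(16, 24, 1)    # past 24 hourly loads for 16 samples
y = torch.randn(16, 1)        # next-hour load to predict
loss = loss_fn(model(x), y)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```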

To address these challenges, hybrid and ensemble techniques amalgamate the strengths of
different forecasting methods or model configurations to bolster forecasting performance [12].
Additionally, online-based forecasting and prediction confidence interval techniques emerge
as potential solutions to mitigate the impact of inherent forecasting uncertainty. Online-based
forecasting models leverage the latest available data to continuously update the model, while
prediction confidence interval techniques enable decision-makers to account for uncertainty
ranges in their planning processes.
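
For example, one simple way to obtain such uncertainty ranges is to train quantile models around the point forecast; the sketch below assumes scikit-learn, and the features, synthetic data, and 10%/90% bounds are illustrative.

```python
# Point forecast plus an empirical 80% prediction interval via quantile models.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

lower = GradientBoostingRegressor(loss="quantile", alpha=0.1)   # 10th percentile
point = GradientBoostingRegressor()                             # squared-error point forecast
upper = GradientBoostingRegressor(loss="quantile", alpha=0.9)   # 90th percentile

rng = np.random.default_rng(0)
X = rng.random((500, 4))            # e.g. hour, weekday, temperature, lagged load
y = X @ np.array([2.0, 0.5, 1.0, 3.0]) + rng.normal(0, 0.2, 500)

for m in (lower, point, upper):
    m.fit(X, y)

x_new = X[:1]
print("forecast:", point.predict(x_new)[0],
      "80% interval:", (lower.predict(x_new)[0], upper.predict(x_new)[0]))
```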

Furthermore, relying solely on individual forecasted information proves insufficient for optimal EV charging/discharging decisions. For instance, making optimal charging/discharging decisions based solely on forecasted EV charging demand is challenging. Instead, decisions should be informed by a comprehensive consideration of all pertinent power system conditions, such as renewable generation, system constraints, supply and demand dynamics, and charging/discharging dynamic pricing signals. Thus, optimization models are essential for processing critical forecasted information to guide optimal decision-making.
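
As a simple illustration of this forecast-then-optimize step, the sketch below schedules a day of charging against forecasted hourly prices with a linear program, assuming SciPy; the price vector, 7 kW charger limit, and 40 kWh energy requirement are illustrative assumptions.

```python
# Minimize day-ahead charging cost given forecasted hourly prices.
import numpy as np
from scipy.optimize import linprog

prices = np.random.uniform(0.05, 0.30, size=24)   # forecasted price per kWh, each hour
p_max = 7.0                                       # charger power limit in kW
energy_needed = 40.0                              # kWh required before departure

# Decision variables: charging power in each hour (1-hour steps, so kW == kWh).
res = linprog(
    c=prices,                                     # cost of energy drawn in each hour
    A_eq=[np.ones(24)], b_eq=[energy_needed],     # total energy must be delivered
    bounds=[(0, p_max)] * 24,
    method="highs",
)
schedule = res.x   # optimal kW per hour; energy is shifted into the cheapest hours
```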

In recent years, reinforcement learning-based models have gained prominence for scheduling
EVs' charging/discharging operations. However, traditional Q-learning poses challenges due
to its reliance on a lookup table, rendering it impractical for problems like EV charging
scheduling characterized by numerous states and actions [13]. Deep-Q networks (DQN), which
merge deep neural networks with Q-learning, have emerged as a solution adopted by many
researchers to address the curse of dimensionality inherent in conventional Q-learning
methods. Consequently, DQN-based models are deemed more suitable for navigating high-
dimensional and intricate environments [14]. Nonetheless, DQN's discrete action spaces
impose limitations on action space exploration, and issues related to the overestimation of
action values have been encountered. To mitigate these concerns, Double DQN has been
proposed to alleviate the overestimation problem associated with DQN [15]. Furthermore, deep
deterministic policy gradient (DDPG) and soft-actor-critic (SAC) have been leveraged to offer
continuous state and action spaces, thereby enhancing DQN's performance. Despite their
capability to handle continuous and complex problems, DQN, DDPG, and SAC typically
necessitate prolonged training periods. Research has indicated that the asynchronous advantage
actor-critic (A3C) algorithm requires less training time than DQN [16]. Notably, despite its
advantages, A3C has yet to be employed for EV charging/discharging tasks. Nevertheless, A3C, with its capability for parallel
training of multiple agents, offers benefits such as rapid convergence towards optimal policies
and stabilization of the learning process. Hence, it merits consideration for future research
endeavors in this domain.
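
To make the value-based approach concrete, the following minimal DQN sketch combines a Q-network, an experience replay buffer, and a target network, assuming PyTorch; the three-element state (hour, SOC, price) and the idle/charge/discharge action set are illustrative placeholders, not the formulation of any cited study.

```python
# Minimal DQN components for a toy EV charging/discharging task.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 3      # e.g. [hour of day, battery SOC, electricity price]
N_ACTIONS = 3      # 0 = idle, 1 = charge, 2 = discharge (V2G)

class QNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS))

    def forward(self, x):
        return self.net(x)

policy_net, target_net = QNetwork(), QNetwork()
target_net.load_state_dict(policy_net.state_dict())
optimizer = optim.Adam(policy_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)       # replay buffer of (s, a, r, s') tuples
gamma, epsilon = 0.95, 0.1

def select_action(state):
    """Epsilon-greedy choice over the discrete idle/charge/discharge actions."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return policy_net(torch.tensor(state, dtype=torch.float32)).argmax().item()

def train_step(batch_size=64):
    """One gradient step on the temporal-difference error."""
    if len(replay) < batch_size:
        return
    s, a, r, s2 = zip(*random.sample(replay, batch_size))
    s = torch.tensor(s, dtype=torch.float32)
    a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
    r = torch.tensor(r, dtype=torch.float32)
    s2 = torch.tensor(s2, dtype=torch.float32)
    q = policy_net(s).gather(1, a).squeeze(1)
    with torch.no_grad():                         # target network stabilizes learning
        target = r + gamma * target_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```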

Moreover, techniques like episodic memory and meta-reinforcement learning hold promise for
accelerating training speed and enhancing meta-learning capabilities. Further details regarding
episodic memory and meta-reinforcement learning can be found in [17]. Additionally, fine-
tuning reward functions and hyperparameters of reinforcement learning models is crucial.
Techniques such as adjustable reward functions and Bayesian Optimization can be employed
for tuning reward functions and hyperparameters, respectively. Furthermore, given the ongoing
evolution and improvement of reinforcement learning (RL) methods, it is imperative to apply
the latest advanced RL techniques to EV charging/discharging scheduling and dynamic pricing
tasks to explore potential performance enhancements. Various parameters of RL-based models,
including reward functions, learning rates, exploration-exploitation trade-offs, and model
structures, should also be systematically tested and compared to identify optimal combinations
suitable for the tasks at hand.
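
As one possible way to carry out such tuning, the sketch below uses Optuna's default TPE (Bayesian-style) sampler to search over a learning rate, discount factor, and exploration rate, assuming Optuna is available; train_and_evaluate is a hypothetical placeholder standing in for training the charging/discharging agent and returning its evaluation reward.

```python
# Hyperparameter search for an RL agent with Optuna's TPE sampler.
import optuna

def train_and_evaluate(lr, gamma, epsilon):
    # Hypothetical placeholder: in practice, train the RL agent with these
    # settings and return its average evaluation reward. A synthetic surrogate
    # is used here so the sketch runs on its own.
    return -(lr - 1e-3) ** 2 - (gamma - 0.97) ** 2 - (epsilon - 0.1) ** 2

def objective(trial):
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
    gamma = trial.suggest_float("discount_factor", 0.90, 0.999)
    epsilon = trial.suggest_float("exploration_rate", 0.01, 0.3)
    return train_and_evaluate(lr, gamma, epsilon)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)   # best combination found by the search
```
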
While simulation outcomes for the majority of EV charging/discharging scheduling models
demonstrate satisfactory performance, many of these models have been evaluated using
historical or synthetic data. However, historical or synthetic electricity price data may not
adequately capture real-time operational dynamics, including EV charging load demand, V2G
discharging requests, and renewable energy generation. Hence, the development of a suitable
dynamic pricing model that accurately mirrors power system operating conditions is
imperative.

Finally, as the V2G technology advances into the third phase, robust technological
infrastructure and efficient management practices will enable numerous EVs to contribute
ancillary services to power grids through bidirectional power flow, ultimately realizing the
vision of smart grids [11]. At this stage, substantial communication resources will be
indispensable for facilitating interactions between EV owners and power system operators. An
event-triggered communication mechanism is advocated as it conserves bandwidth and signal-
processing resources compared to periodic communication methods. Furthermore, minimizing
battery degradation and charging/discharging losses is essential for expediting the adoption of
V2G technology. Consequently, research in artificial intelligence-based battery design and
management assumes significance. Despite the manifold benefits of V2G, incentivizing EV
owners with diverse driving and charging preferences to participate in a unified V2G program
poses a challenge. Moreover, developing tailored V2G programs to accommodate all
participants is impractical. Hence, dynamic pricing emerges as a pivotal tool for influencing
EV owners' charging and discharging behaviors, serving as a specialized form of demand
response.
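
Returning to the event-triggered communication mechanism mentioned above, the sketch below illustrates its core idea: the EV transmits a state update to the aggregator only when its SOC has drifted beyond a threshold since the last report, rather than on every time step. The 5% threshold and the send callback are illustrative assumptions.

```python
# Event-triggered reporting: communicate only when the state changes enough.
class EventTriggeredReporter:
    def __init__(self, send, soc_threshold=0.05):
        self.send = send                     # callback that transmits to the aggregator
        self.soc_threshold = soc_threshold
        self.last_reported_soc = None

    def update(self, current_soc):
        """Call every time step; transmits only when the trigger condition fires."""
        if (self.last_reported_soc is None
                or abs(current_soc - self.last_reported_soc) >= self.soc_threshold):
            self.send({"soc": current_soc})  # event-triggered transmission
            self.last_reported_soc = current_soc

reporter = EventTriggeredReporter(send=print)
for soc in (0.50, 0.51, 0.52, 0.56, 0.57, 0.62):
    reporter.update(soc)   # only 3 of the 6 updates are actually transmitted
```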

Dynamic pricing strategies, widely utilized in various sectors such as airline ticketing, e-
commerce product pricing, and advertising bid pricing, entail adjusting prices based on demand
and supply dynamics. The approach involves raising prices during periods of high demand and
lowering them when supply exceeds demand. However, when designing dynamic pricing
models for power systems with high EV penetration, additional factors like power system
constraints, renewable energy generation, and EV charging/discharging preferences must be
taken into account. Thus, a well-designed dynamic pricing strategy that accurately reflects
power grid conditions is essential for effectively guiding EV charging and discharging to
optimize energy utilization. Nevertheless, there is limited literature dedicated to designing
dynamic pricing schemes specifically for EV charging/discharging [18]. Most research focuses
on developing scheduling models that incorporate dynamic pricing to minimize charging costs.
Furthermore, existing dynamic pricing schemes offered by utility companies may not
adequately address the impact of high EV penetration and EV owners' preferences. Although
the Peer-to-Peer (P2P) electricity transaction mechanism allows sellers and buyers to jointly
determine tariffs, the auction-based mechanism may not always reach convergence.
Additionally, EV owners may struggle to make pricing decisions without a comprehensive
understanding of power system conditions and the electricity market. For instance, they may
undervalue their stored battery power during power system constraints and overvalue it when
renewable energy production is high. Therefore, further research is warranted to design
dynamic pricing schedules that address these challenges [19].
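
To illustrate the kind of signal such a scheme would produce, the sketch below computes a rule-based dynamic price that rises with net demand relative to available capacity; the coefficients, base tariff, and the 90% discharge payment are illustrative assumptions rather than a scheme from the cited literature.

```python
# Rule-based dynamic price signal that tracks grid conditions.
def dynamic_price(base_tariff, demand_kw, renewable_kw, capacity_kw, k=0.5):
    """Return (charging price, discharging payment) per kWh for one interval."""
    net_demand = max(demand_kw - renewable_kw, 0.0)
    stress = net_demand / capacity_kw           # 0 when renewables cover demand
    charge_price = base_tariff * (1.0 + k * stress)
    discharge_payment = 0.9 * charge_price      # rewards V2G discharge at peak stress
    return charge_price, discharge_payment

# Peak evening hour with little renewable output vs. a sunny midday hour.
print(dynamic_price(0.20, demand_kw=900, renewable_kw=100, capacity_kw=1000))  # high prices
print(dynamic_price(0.20, demand_kw=500, renewable_kw=450, capacity_kw=1000))  # near base tariff
```
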
Conclusions

This essay provides an overview of three critical facets of EV charging and discharging:
forecasting, scheduling, and dynamic pricing, highlighting their interconnected relationship.
The efficacy of scheduling models primarily hinges on the accuracy of forecasting outcomes
and pricing strategies. Conversely, the precision of forecasting and the performance of
scheduling significantly impact the effectiveness of dynamic pricing strategies in reflecting
real-time power system conditions.

The majority of forecasting models discussed herein are supervised learning-based, with LSTM
and GRU emerging as the most favored methods owing to their adeptness at handling nonlinearities
and long-term dependencies. Nonetheless, uncertainty remains inherent in forecasting models,
necessitating ongoing enhancements in their performance. Apart from hybrid and ensemble
techniques, alternatives such as utilizing the latest data for updating forecasting models and
incorporating uncertainty intervals can aid decision-making processes.

Numerous researchers have employed reinforcement learning-based optimization models to
make optimal EV charging and discharging decisions based on forecasted outcomes, including
charging and discharging prices. DQN, DDPG, and SAC are among the popular reinforcement
learning models, each with its strengths and weaknesses. While DQN addresses the curse of
dimensionality associated with conventional Q-learning, issues such as action value
overestimation and lengthy training times persist. Double-DQN and A3C offer potential
solutions to these challenges.

Ensuring that scheduling models can effectively optimize power grids necessitates that the
information they rely on for decision-making accurately reflects real-time power grid
conditions. Hence, both forecasting outcomes and dynamic pricing signals, capable of
reflecting these conditions, are pivotal. Although many studies have delved into the forecasting
and scheduling aspects of EV charging, the focus on EV discharging and dynamic pricing
design remains limited. Additionally, existing dynamic pricing is often designed by system
operators without due consideration for EV owners' preferences, and not all key factors related
to EV charging/discharging are factored into dynamic pricing models.

Dynamic pricing strategies play a crucial role in indirectly controlled charging/discharging and
can incentivize greater EV owner participation in V2G programs. Consequently, more attention
should be devoted to designing dynamic pricing schemes that accurately reflect real-time
power system conditions while striking a balance between system operators and EV owners.

Beyond the technical realm of EV charging/discharging scheduling and dynamic pricing,
research on the social and economic aspects is imperative. Socially, understanding EV owners'
and system operators' responses to dynamic pricing through surveys and analysis can inform
dynamic pricing policy enhancements. Economically, investigating the feasibility and
profitability of dynamic pricing models for the overall system, encompassing individual EV
owners, system operators, and power systems, is essential.
References
[1] IEA. Global EV Outlook 2022. IEA 2022. May 2022. Available online:
https://www.iea.org/reports/global-ev-outlook-2022 (accessed on 5 October 2022).
[2] Kempton, W.; Tomic, J. Vehicle-to-grid power implementation: From stabilizing the grid to supporting
large-scale renewable energy. J. Power Sources 2005, 144, 280–294. [CrossRef]
[3] Kempton, W.; Letendre, S.E. Electric vehicles as a new power source for electric utilities. Transp. Res.
Part D Transp. Environ. 1997, 2, 157–175. [CrossRef]
[4] Al-Awami, A.T.; Sortomme, E. Coordinating vehicle-to-grid services with energy trading. IEEE Trans.
Smart Grid 2012, 3, 453–462. [CrossRef]
[5] V2G Hub Insights. Available online: https://www.v2g-hub.com/insights (accessed on 4 November
2022).
[6] Shariff, S.M.; Iqbal, D.; Alam, M.S.; Ahmad, F. A state of the art review of electric vehicle to grid (V2G)
technology. IOP Conf. Ser. Mater. Sci. Eng. 2019, 561, 012103. [CrossRef]
[7] Cao, Y.; Tang, S.; Li, C.; Zhang, P.; Tan, Y.; Zhang, Z.; Li, J. An optimized EV charging model
considering TOU price and SOC curve. IEEE Trans. Smart Grid 2012, 3, 388–393. [CrossRef]
[8] Al-Ogaili, A.S.; Hashim, T.J.T.; Rahmat, N.A.; Ramasamy, A.K.; Marsadek, M.B.; Faisal, M.; Hannan,
M.A. Review on scheduling, clustering, and forecasting strategies for controlling electric vehicle
charging: Challenges and recommendations. IEEE Access 2019, 7, 128353–128371. [CrossRef]
[9] Ahmed, M.; Zheng, Y.; Amine, A.; Fathiannasab, H.; Chen, Z. The role of artificial intelligence in the
mass adoption of electric vehicles. Joule 2021, 5, 2296–2322. [CrossRef]
[10] Shahriar, S.; Al-Ali, A.R.; Osman, A.H.; Dahou, S.; Nijim, M. Machine learning approaches for EV
charging behavior: A review. IEEE Access 2020, 8, 168980–168993. [CrossRef]
[11] Shi, L.; Lv, T.; Wang, Y. Vehicle-to-grid service development logic and management formulation. J.
Mod. Power Syst. Clean Energy 2019, 7, 935–947. [CrossRef]
[12] Raju, M.P.; Laxmi, A.J. IoT based online load forecasting using machine learning algorithms. Procedia
Comput. Sci. 2020, 171, 551–560. [CrossRef]
[13] Mhaisen, N.; Fetais, N.; Massoud, A. Real-time scheduling for electric vehicles charging/discharging
using reinforcement learning. In Proceedings of the 2020 IEEE International Conference on Informatics,
IoT, and Enabling Technologies (ICIoT), Doha, Qatar, 2–5 February 2020.
[14] Wang, X.; Wang, J.; Liu, J. Vehicle to grid frequency regulation capacity optimal scheduling for battery
swapping station using deep q-network. IEEE Trans. Ind. Inform. 2021, 17, 1342–1351. [CrossRef]
[15] Hasselt, H.v.; Guez, A.; Silver, D. Deep reinforcement learning with double Q-learning. arXiv 2015,
arXiv:1509.06461. [CrossRef]
[16] Mnih, V.; Badia, A.P.; Mirza, M.; Graves, A.; Harley, T.; Lillicrap, T.P.; Silver, D.; Kavukcuoglu, K.
Asynchronous methods for deep reinforcement learning. arXiv 2016, arXiv:1602.01783.
[17] Gershman, S.J.; Daw, N.D. Reinforcement learning and episodic memory in humans and animals: An
integrative framework. Annu. Rev. Psychol. 2017, 68, 101–128. [CrossRef]
[18] Maestre, R.; Duque, J.; Rubio, A.; Arevalo, J. Reinforcement learning for fair dynamic pricing. In
Proceedings of the Intelligent Systems Conference, London, UK, 6–7 September 2018.
[19] Chen, Q.; Folly, K.A. Application of artificial intelligence for EV charging and discharging scheduling
and dynamic pricing: A review. Energies 2022, 16, 146.
