A Likelihood-Based Generative Approach for Spatially Consistent Precipitation Downscaling

Jose González-Abad 1

arXiv:2407.04724v1 [physics.ao-ph] 26 Jun 2024

Abstract

Deep learning has emerged as a promising tool for precipitation downscaling. However, current models rely on likelihood-based loss functions to properly model the precipitation distribution, leading to spatially inconsistent projections when sampling. This work explores a novel approach by fusing the strengths of the likelihood-based and adversarial losses used in generative models. As a result, we propose a likelihood-based generative approach for precipitation downscaling, leveraging the benefits of both methods.

1. Introduction

Global Climate Models (GCMs) simulate the spatio-temporal evolution of climate by numerically solving the set of physical equations describing its constituent components and interconnections (Chen et al., 2021). By running these models under various emission scenarios, it is possible to generate future projections under climate change conditions (O'Neill et al., 2016). However, GCMs have coarse resolutions due to computational and physical constraints, limiting their use in regional-scale studies.

Statistical Downscaling (SD) attempts to overcome this limitation by modeling the relationship between the coarse (low-resolution) and regional (high-resolution) scales. Under the Perfect Prognosis (PP) approach (Maraun & Widmann, 2018), an empirical link is established between a set of large-scale synoptic variables (representing the state of the atmosphere) and the local variables of interest (e.g., precipitation). This link is learned via a statistical model using observational datasets. The model is then applied to the large-scale variables from a GCM to obtain the corresponding regional projections for future scenarios.

Recently, Deep Learning (DL) has emerged as a promising technique for Perfect Prognosis Statistical Downscaling (PP-SD), given its ability to model non-linear relationships and handle spatial data (Goodfellow et al., 2016). The resulting DL-based regional projections have proven useful for several climate change applications, even forming the first continental-wide contribution of a PP-based technique to the CORDEX initiative (Baño-Medina et al., 2022).

Unfortunately, regression-based DL models tend to focus on capturing the expected value of the output distribution, leading to the underrepresentation of extremes. This can be problematic for variables such as precipitation, which is highly characterized by these events (e.g., heavy rainfall). To address this, recent DL models for PP-SD maximize the likelihood with respect to an explicit distribution reflecting the dynamics of precipitation (Baño-Medina et al., 2022). However, because these distributions are learned independently for each grid-point at the regional scale, the resulting projections may lack spatial consistency when sampling (González-Abad et al., 2021).

In this work, we explore generative models to address this issue. For the first time, we combine likelihood-based training with a conditional Generative Adversarial Network (cGAN) for the downscaling of precipitation. Through our experimental setup, we show how the proposed approach enables the DL model to sample spatially consistent precipitation fields while allowing an explicit probability distribution to be defined over the target data.

2. Background

2.1. Deep Learning for Precipitation Downscaling

Inspired by advances in the Super Resolution (SR) field (Wang et al., 2020), recent studies have explored SR models for the downscaling of precipitation (Vandal et al., 2017; Cheng et al., 2020a; Passarella et al., 2022; Sharma & Mitra, 2022). In this context, SR models aim to establish a link between the coarse and regional versions of a specific field. However, using a surface variable from a GCM often results in reproducing the regional biases caused by its coarse resolution, limiting their effectiveness for climate downscaling.

1 Instituto de Física de Cantabria (IFCA), CSIC-Universidad de Cantabria, Santander, Spain. Correspondence to: Jose González-Abad <gonzabad@ifca.unican.es>.

ICML 2024 Machine Learning for Earth System Modeling workshop, Vienna, Austria. Copyright 2024 by the author(s).
For this reason, PP-SD models constitute the standard in the climate context, as they rely on large-scale synoptic variables representing the state of the atmosphere, which are properly reproduced at the coarse resolution of GCMs. Several architectures have been explored, such as recurrent networks (Misra et al., 2018), combinations of convolutional and dense layers (Pan et al., 2019; Baño-Medina et al., 2020), and fully-convolutional models (Adewoyin et al., 2021; Quesada-Chacón et al., 2022), some of them even generating projections for future scenarios (Baño-Medina et al., 2021; Soares et al., 2023). Among these architectures, the U-Net (Ronneberger et al., 2015) has recently shown promising results (Quesada-Chacón et al., 2022; Adewoyin et al., 2021), even in related fields such as emulation (Doury et al., 2023; 2024).

2.2. Extreme Precipitation

Due to the dynamics of precipitation, which typically adheres to exponential probability distributions, and its non-continuous nature (occurrence and amount), regression-based DL models encounter difficulties in accurately modeling it. This often leads to significant issues, such as the underestimation of extreme precipitation events (Rampal et al., 2024b).

To address this challenge, and drawing inspiration from previous work (Dunn, 2004; Cannon, 2008), the authors of (Baño-Medina et al., 2022) train a DL model by minimizing the Negative Log-Likelihood (NLL) of Bernoulli and gamma distributions for the occurrence and amount, respectively. By working under this assumption, which aligns with the dynamics of precipitation (Williams, 1997), they are able to model the whole distribution, including the extremes. Its success has led to its application across various regions (Sun & Lan, 2021; Rampal et al., 2022; Kheir et al., 2023; Hosseini Baghanam et al., 2024), making it the most widespread DL-based PP-SD model. Unfortunately, this approach models a different probability distribution for each of the grid-points forming the downscaled variable, resulting in spatial inconsistency when sampling from these distributions and leading to unrealistic projections (González-Abad et al., 2021).

2.3. Generative Precipitation Downscaling

Recently, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have attracted significant attention in the SD field. Unlike regression-based DL models, which generally minimize loss functions aimed at capturing the mean (e.g., the Mean Squared Error, MSE), GANs minimize an adversarial loss which encourages the generator to better reproduce the underlying distribution of the data in order to fool the discriminator. This leads to improved reproduction of extremes and finer details in precipitation downscaling (Rampal et al., 2024b). In addition, GANs allow computing ensembles of predictions, although unlike the NLL approach, these are computed by passing noise as input to the generator, not by explicitly modeling the corresponding distribution. In the context of PP-SD, GANs have not been explored yet, although they show promise in related areas such as SR downscaling (Leinonen et al., 2020; Cheng et al., 2020b), meteorological downscaling (Price & Rasp, 2022; Harris et al., 2022) and emulation (Rampal et al., 2024a).

3. Experimental Framework

3.1. Region of Study and Data

In this work, we focus on daily precipitation downscaling over a domain centered on the Alps (37.6°N-50.4°N and 3.6°E-16.4°E), a region of interest due to its prominent orography, which significantly influences local precipitation. As we frame this study within the PP approach, we rely on the ERA5 reanalysis dataset (Hersbach et al., 2020) (quasi-observational) at 1° resolution for the large-scale variables (predictors) and on the observational dataset E-OBS (Cornes et al., 2018) at 0.1° resolution for the local-scale variable (predictand). Following previous works (Baño-Medina et al., 2022; Soares et al., 2023), we select as predictors the air temperature, specific humidity, geopotential height, and the meridional and zonal wind components at 500, 700, and 850 hPa.

3.2. Standard Deep Learning Models

Following the aforementioned recent advances in PP-SD, we rely on the U-Net, a fully-convolutional model. Specifically, we adhere to the implementation details described in (Doury et al., 2023). We train two different versions of this model: U-Net (MSE) and U-Net (NLL). The former minimizes the MSE, whereas the latter minimizes the NLL of a Bernoulli-gamma distribution, as proposed in (Baño-Medina et al., 2022). While U-Net (MSE) directly computes the downscaled precipitation, U-Net (NLL) computes, for each grid-point in the predictand, the parameters p, α and β defining the corresponding Bernoulli-gamma distribution. Consequently, for NLL-based models, the final prediction corresponds to a random sample from the modeled distributions.

We divide the observational dataset into a training (1980-2010) and a test (2011-2022) period. These models are trained using the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 10^-4 and a batch size of 64.

3.3. Likelihood-Based Generative Approach

The main contribution of this work involves cGAN models (Mirza & Osindero, 2014) for PP-SD. Unlike standard GANs, cGANs allow conditioning the generation process on specific data by feeding it to both the generator and the discriminator.
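As a concrete reference for the NLL-based models above, the Bernoulli-gamma negative log-likelihood and the per-grid-point sampling step can be sketched as follows. This is a minimal NumPy sketch under our own conventions: the function names, the clipping constant `eps`, and the array shapes are assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.special import gammaln  # log of the gamma function


def bernoulli_gamma_nll(y, p, alpha, beta, eps=1e-6):
    """Negative log-likelihood of a Bernoulli-gamma model, averaged over
    grid-points. y: observed precipitation (mm/day); p: rain probability;
    alpha, beta: gamma shape and scale. All arrays share the same shape."""
    p = np.clip(p, eps, 1.0 - eps)
    wet = y > 0  # occurrence indicator
    # Dry grid-points contribute log(1 - p); wet ones the gamma density term.
    ll_dry = np.log(1.0 - p)
    ll_wet = (np.log(p)
              + (alpha - 1.0) * np.log(np.maximum(y, eps))
              - y / beta
              - alpha * np.log(beta)
              - gammaln(alpha))
    return -np.mean(np.where(wet, ll_wet, ll_dry))


def sample_bernoulli_gamma(p, alpha, beta, rng):
    """Draw one precipitation field: occurrence from the Bernoulli part,
    amount from the gamma part, independently at each grid-point."""
    occurrence = rng.uniform(size=p.shape) < p
    amount = rng.gamma(shape=alpha, scale=beta)
    return np.where(occurrence, amount, 0.0)
```

Because `sample_bernoulli_gamma` draws each grid-point independently, fields sampled this way lack the spatial structure of real precipitation, which is precisely the inconsistency that the adversarial loss introduced below is meant to correct.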

In addition to the adversarial loss, cGANs also minimize a content loss that ensures that conditionally generated samples align with the true target values. In the context of precipitation downscaling, combining these loss functions allows the generator to produce precipitation fields that both fool the discriminator and accurately reproduce the daily precipitation conditioned on the specific large-scale synoptic state (Harris et al., 2022; Rampal et al., 2024a).

To represent the state of the art in generative precipitation downscaling, we follow (Ravuri et al., 2021; Harris et al., 2022) and implement what we denote as cGAN (MSE), as this model relies on the MSE as its content loss. However, to avoid the so-called blurry effect, where the model smooths out fine details and produces predictions resembling the mean, the content loss is applied to the mean of an ensemble of predictions. Note that, although the content loss is computed over an ensemble, at inference time the prediction of this model corresponds to a single sample.

The main contribution of this work is the introduction of the cGAN (NLL) model, which combines likelihood-based training with the cGAN framework. Unlike other approaches, this model uses the NLL of the Bernoulli-gamma distribution as the content loss. As a result, the generator produces a set of probability distributions, similarly to U-Net (NLL). However, by passing a random sample from these distributions to the discriminator, the generator is forced to improve the spatial consistency to minimize the adversarial loss, as spatially inconsistent precipitation fields are easily detectable by the discriminator. Therefore, adversarial training should lead the generator to learn spatially-aware distributions across grid-points in the downscaled field, addressing the drawbacks of standard likelihood-based DL models.

For both the cGAN (MSE) and cGAN (NLL) models, we use the same generator architecture as that of the U-Net (MSE) and U-Net (NLL) models. For the discriminator, we implement a fully convolutional network that processes both the large- and regional-scale data through a series of convolutional and dense layers. Following previous works (Leinonen et al., 2020; Harris et al., 2022; Rampal et al., 2024a), we rely on the Wasserstein formulation of GANs (Arjovsky et al., 2017) with a gradient penalty term (Gulrajani et al., 2017). This training framework is popular due to its theoretical properties, such as the possibility of training the discriminator to optimality. For both cGAN models, and following (Arjovsky et al., 2017), we use RMSprop as the optimizer with a learning rate of 10^-5 and a batch size of 64. The generator and discriminator are trained in an adversarial manner, with the discriminator being updated five times for each update of the generator. The training and test sets cover the years detailed in Section 3.2.

4. Results

Figure 1 shows violin plots for four different metrics computed on the test set: the relative bias of the mean; the relative bias of the Simple Daily Intensity Index (SDII), which corresponds to the mean precipitation amount on rainy days (≥ 1 mm); the Root Mean Square Error (RMSE); and the ratio of standard deviations. These metrics are shown for the four intercompared models: U-Net (MSE), U-Net (NLL), cGAN (MSE), and cGAN (NLL).

Except for the U-Net (MSE), which overestimates it, all models reproduce the mean. However, for the SDII, which represents values further from the mean, MSE-based models, particularly the U-Net, tend to underestimate it, whereas NLL-based models are able to reproduce it accurately. As expected, MSE-based models show superior performance for the RMSE due to its relation to the minimized loss function. Although NLL-based models are expected to exhibit higher RMSE due to sampling-induced errors, cGAN (NLL) achieves performance comparable to the MSE-based models. In terms of the ratio of standard deviations, NLL-based models show superior performance, primarily attributed to sampling from the Bernoulli-gamma distribution. However, the cGAN (MSE) model, despite sampling, falls short in accurately reproducing this aspect, which may indicate that explicitly defining the distribution of precipitation is key.

Figure 1. Violin plot showing the results for four different metrics computed on the test set: the relative bias of the mean and the SDII, the RMSE, and the ratio of standard deviations. Each metric displays the results corresponding to the different DL models intercompared: U-Net (MSE), U-Net (NLL), cGAN (MSE), and cGAN (NLL).

Figure 2 displays the precipitation histogram during the test period across all grid-points in the predictand. The logarithmic scale of the y-axis facilitates the assessment of model performance, particularly for less frequent extreme events.
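For concreteness, the four Figure 1 metrics could be computed per grid-point along these lines. This is a sketch under our own conventions; the paper gives no implementation details, and the array layout with time on the first axis is our assumption.

```python
import numpy as np


def downscaling_metrics(y_true, y_pred, wet_threshold=1.0):
    """Grid-point evaluation metrics for downscaled precipitation.
    y_true, y_pred: arrays of shape (time, lat, lon) in mm/day."""

    def sdii(y):
        # SDII: mean precipitation amount on rainy days (>= wet_threshold mm).
        wet = y >= wet_threshold
        return (y * wet).sum(axis=0) / np.maximum(wet.sum(axis=0), 1)

    rel_bias_mean = 100.0 * (y_pred.mean(axis=0) - y_true.mean(axis=0)) / y_true.mean(axis=0)
    rel_bias_sdii = 100.0 * (sdii(y_pred) - sdii(y_true)) / sdii(y_true)
    rmse = np.sqrt(((y_pred - y_true) ** 2).mean(axis=0))
    ratio_std = y_pred.std(axis=0) / y_true.std(axis=0)
    return rel_bias_mean, rel_bias_sdii, rmse, ratio_std
```

A perfect prediction yields zero relative biases, zero RMSE, and a standard-deviation ratio of one at every grid-point.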

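Stepping back to the training scheme of Section 3.3, its two objectives can be sketched as follows, given the critic (discriminator) outputs for a batch. This is a schematic NumPy sketch: the autograd machinery, the sampling of interpolated points for the gradient penalty, and the content-loss weight `content_weight` are our assumptions or omissions, not details from the paper.

```python
import numpy as np


def critic_loss(d_real, d_fake, grad_norms, lam=10.0):
    """Wasserstein critic objective with gradient penalty:
    E[D(fake)] - E[D(real)] + lam * E[(||grad D|| - 1)^2].
    grad_norms holds the critic's gradient norms at interpolated samples,
    which a framework's autograd would compute (omitted here)."""
    penalty = lam * ((grad_norms - 1.0) ** 2).mean()
    return d_fake.mean() - d_real.mean() + penalty


def generator_loss(d_fake, content_nll, content_weight=1.0):
    """cGAN (NLL) generator objective: adversarial term plus the
    Bernoulli-gamma NLL content loss. The relative weight is a
    placeholder; the paper does not state how the terms are balanced."""
    return -d_fake.mean() + content_weight * content_nll


# Update schedule from Section 3.3: five critic steps per generator step.
N_CRITIC = 5
```

With unit gradient norms the penalty vanishes and the critic loss reduces to the negated Wasserstein estimate, which is what allows the critic to be trained close to optimality.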

Examining the histogram within the 0-50 mm interval (top-right corner), we observe a notable decline in precipitation values from 0 to approximately 1 mm in the target dataset. The U-Net (MSE) model fails to adapt to this pattern, resulting in an overestimation of precipitation. In contrast, the other models accurately replicate this decrease. This discrepancy explains the U-Net (MSE) model's overestimation of the mean in the corresponding violin plot. The adversarial training of the cGAN (MSE) allows it to adjust to this decrease, while NLL-based models effectively handle it due to the underlying distributional assumption. Additionally, the histogram reveals that MSE-based models underestimate the distribution beyond approximately 15 mm, as minimizing the MSE loss function mainly involves fitting the mean. Conversely, NLL-based models accurately reproduce this segment of the distribution, as also evidenced in the SDII violin plot.

Expanding our focus to the histogram spanning the full 0-350 mm interval, which includes precipitation extremes, a similar pattern emerges: only NLL-based models successfully replicate these extreme values, including those surpassing 200 mm. This underscores the effectiveness of assuming a gamma distribution for modeling the precipitation amount.

Figure 2. Histogram of the precipitation distribution for the test period, aggregated across all grid-points in the predictand. The black line represents the target observational dataset, while the different colors correspond to the various DL models being compared. A zoomed-in view for values in the 0-50 mm interval is provided in the top-right corner of the histogram.

Figure 3 depicts the predictions of each DL model for a specific day of the test period. As anticipated, the U-Net (MSE) exhibits the blurry effect, lacking fine details in the generated field. In contrast, the cGAN (MSE) addresses this issue by leveraging the adversarial loss, which leads the generator to better capture these details to fool the discriminator, along with the effect of computing the MSE over the ensemble of predictions. As previously discussed, the independent nature of the distributions resulting from minimizing the NLL loss function leads to spatial inconsistencies in the generated field, evident in the U-Net (NLL) model's prediction. However, incorporating the adversarial loss enables the cGAN (NLL) to achieve greater consistency while still adhering to the Bernoulli-gamma distribution. In fact, the prediction of the cGAN (NLL) model falls between that of the cGAN (MSE) and the U-Net (NLL).

Figure 3. Comparison of the predictions generated by the intercompared DL models for a day in the test period.

5. Conclusions

In this work, we have introduced a novel likelihood-based generative approach for precipitation downscaling. This method leverages a combination of likelihood and adversarial losses, enabling the model to properly reproduce the target distribution while generating spatially consistent precipitation fields, addressing a main challenge for standard likelihood-based DL models. Furthermore, likelihood-based loss functions enable generative models to produce explicit probability distributions (e.g., Bernoulli-gamma) for precipitation. This capability is crucial when downscaling future GCM projections, as estimating the probabilities of extreme events is vital for risk assessment.

Future work includes evaluating the likelihood-based generative model in the GCM space and comparing the resulting projections with those of established PP-SD models. We also plan to employ eXplainable Artificial Intelligence (XAI) techniques to understand the impact of the adversarial loss on the modeled probability distributions and learned patterns.


Acknowledgements

We acknowledge support from grant CPP2021-008510 funded by MICIU/AEI/10.13039/501100011033 and by the "European Union" and the "European Union NextGenerationEU/PRTR".

References

Adewoyin, R. A., Dueben, P., Watson, P., He, Y., and Dutta, R. TRU-NET: a deep learning approach to high resolution prediction of rainfall. Machine Learning, 110:2035–2062, 2021.

Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein generative adversarial networks. In International Conference on Machine Learning, pp. 214–223. PMLR, 2017.

Baño-Medina, J., Manzanas, R., and Gutiérrez, J. M. Configuration and intercomparison of deep learning neural models for statistical downscaling. Geoscientific Model Development, 13(4):2109–2124, 2020.

Baño-Medina, J., Manzanas, R., and Gutiérrez, J. M. On the suitability of deep convolutional neural networks for continental-wide downscaling of climate change projections. Climate Dynamics, 57(11):2941–2951, 2021.

Baño-Medina, J., Manzanas, R., Cimadevilla, E., Fernández, J., González-Abad, J., Cofiño, A. S., and Gutiérrez, J. M. Downscaling multi-model climate projection ensembles with deep learning (DeepESD): contribution to CORDEX EUR-44. Geoscientific Model Development Discussions, 2022:1–14, 2022.

Cannon, A. J. Probabilistic multisite precipitation downscaling by an expanded Bernoulli–gamma density network. Journal of Hydrometeorology, 9(6):1284–1300, 2008.

Chen, D., Rojas, M., Samset, B., Cobb, K., Diongue Niang, A., Edwards, P., Emori, S., Faria, S., Hawkins, E., Hope, P., Huybrechts, P., Meinshausen, M., Mustafa, S., Plattner, G.-K., and Tréguier, A.-M. Framing, Context, and Methods. In Masson-Delmotte, V., Zhai, P., Pirani, A., Connors, S., Péan, C., Berger, S., Caud, N., Chen, Y., Goldfarb, L., Gomis, M., Huang, M., Leitzell, K., Lonnoy, E., Matthews, J., Maycock, T., Waterfield, T., Yelekçi, O., Yu, R., and Zhou, B. (eds.), Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change, pp. 147–286. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 2021.

Cheng, J., Kuang, Q., Shen, C., Liu, J., Tan, X., and Liu, W. ResLap: Generating high-resolution climate prediction through image super-resolution. IEEE Access, 8:39623–39634, 2020a.

Cheng, J., Liu, J., Xu, Z., Shen, C., and Kuang, Q. Generating high-resolution climate prediction through generative adversarial network. Procedia Computer Science, 174:123–127, 2020b.

Cornes, R. C., van der Schrier, G., van den Besselaar, E. J., and Jones, P. D. An ensemble version of the E-OBS temperature and precipitation data sets. Journal of Geophysical Research: Atmospheres, 123(17):9391–9409, 2018.

Doury, A., Somot, S., Gadat, S., Ribes, A., and Corre, L. Regional climate model emulator based on deep learning: Concept and first evaluation of a novel hybrid downscaling approach. Climate Dynamics, 60(5):1751–1779, 2023.

Doury, A., Somot, S., and Gadat, S. On the suitability of a convolutional neural network based RCM-emulator for fine spatio-temporal precipitation. Toulouse School of Economics Repository, 2024.

Dunn, P. K. Occurrence and quantity of precipitation can be modelled simultaneously. International Journal of Climatology, 24(10):1231–1239, 2004.

González-Abad, J., Baño-Medina, J., and Cachá, I. H. On the use of deep generative models for perfect prognosis climate downscaling. arXiv preprint arXiv:2305.00974, 2021.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2014.

Goodfellow, I., Bengio, Y., and Courville, A. Deep Learning. MIT Press, 2016.

Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. C. Improved training of Wasserstein GANs. Advances in Neural Information Processing Systems, 30, 2017.

Harris, L., McRae, A. T., Chantry, M., Dueben, P. D., and Palmer, T. N. A generative deep learning approach to stochastic downscaling of precipitation forecasts. Journal of Advances in Modeling Earth Systems, 14(10):e2022MS003120, 2022.

Hersbach, H., Bell, B., Berrisford, P., Hirahara, S., Horányi, A., Muñoz-Sabater, J., Nicolas, J., Peubey, C., Radu, R., Schepers, D., et al. The ERA5 global reanalysis. Quarterly Journal of the Royal Meteorological Society, 146(730):1999–2049, 2020.

Hosseini Baghanam, A., Nourani, V., Bejani, M., and Ke, C.-Q. Improving the statistical downscaling performance of climatic parameters with convolutional neural networks. Journal of Water and Climate Change, pp. jwc2024592, 2024.
Kheir, A. M., Elnashar, A., Mosad, A., and Govind, A. An improved deep learning procedure for statistical downscaling of climate data. Heliyon, 9(7), 2023.

Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Leinonen, J., Nerini, D., and Berne, A. Stochastic super-resolution for downscaling time-evolving atmospheric fields with a generative adversarial network. IEEE Transactions on Geoscience and Remote Sensing, 59(9):7211–7223, 2020.

Maraun, D. and Widmann, M. Statistical Downscaling and Bias Correction for Climate Research. Cambridge University Press, 2018.

Mirza, M. and Osindero, S. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.

Misra, S., Sarkar, S., and Mitra, P. Statistical downscaling of precipitation using long short-term memory recurrent neural networks. Theoretical and Applied Climatology, 134:1179–1196, 2018.

O'Neill, B. C., Tebaldi, C., Van Vuuren, D. P., Eyring, V., Friedlingstein, P., Hurtt, G., Knutti, R., Kriegler, E., Lamarque, J.-F., Lowe, J., et al. The Scenario Model Intercomparison Project (ScenarioMIP) for CMIP6. Geoscientific Model Development, 9(9):3461–3482, 2016.

Pan, B., Hsu, K., AghaKouchak, A., and Sorooshian, S. Improving precipitation estimation using convolutional neural network. Water Resources Research, 55(3):2301–2321, 2019.

Passarella, L. S., Mahajan, S., Pal, A., and Norman, M. R. Reconstructing high resolution ESM data through a novel fast super resolution convolutional neural network (FSRCNN). Geophysical Research Letters, 49(4):e2021GL097571, 2022.

Price, I. and Rasp, S. Increasing the accuracy and resolution of precipitation forecasts using deep generative models. In International Conference on Artificial Intelligence and Statistics, pp. 10555–10571. PMLR, 2022.

Quesada-Chacón, D., Barfus, K., and Bernhofer, C. Repeatable high-resolution statistical downscaling through deep learning. Geoscientific Model Development, 15(19):7353–7370, 2022.

Rampal, N., Gibson, P. B., Sood, A., Stuart, S., Fauchereau, N. C., Brandolino, C., Noll, B., and Meyers, T. High-resolution downscaling with interpretable deep learning: Rainfall extremes over New Zealand. Weather and Climate Extremes, 38:100525, 2022.

Rampal, N., Gibson, P. B., Sherwood, S., Abramowitz, G., and Hobeichi, S. A robust generative adversarial network approach for climate downscaling and weather generation. Authorea Preprints, 2024a.

Rampal, N., Hobeichi, S., Gibson, P. B., Baño-Medina, J., Abramowitz, G., Beucler, T., González-Abad, J., Chapman, W., Harder, P., and Gutiérrez, J. M. Enhancing regional climate downscaling through advances in machine learning. Artificial Intelligence for the Earth Systems, 3(2):230066, 2024b.

Ravuri, S., Lenc, K., Willson, M., Kangin, D., Lam, R., Mirowski, P., Fitzsimons, M., Athanassiadou, M., Kashem, S., Madge, S., et al. Skilful precipitation nowcasting using deep generative models of radar. Nature, 597(7878):672–677, 2021.

Ronneberger, O., Fischer, P., and Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III, pp. 234–241. Springer, 2015.

Sharma, S. C. M. and Mitra, A. ResDeepD: A residual super-resolution network for deep downscaling of daily precipitation over India. Environmental Data Science, 1:e19, 2022.

Soares, P. M., Johannsen, F., Lima, D. C., Lemos, G., Bento, V., and Bushenkova, A. High resolution downscaling of CMIP6 earth system and global climate models using deep learning for Iberia. Geoscientific Model Development Discussions, 2023:1–46, 2023.

Sun, L. and Lan, Y. Statistical downscaling of daily temperature and precipitation over China using deep learning neural models: Localization and comparison with other methods. International Journal of Climatology, 41(2):1128–1147, 2021.

Vandal, T., Kodra, E., Ganguly, S., Michaelis, A., Nemani, R., and Ganguly, A. R. DeepSD: Generating high resolution climate change projections through single image super-resolution. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1663–1672, 2017.

Wang, Z., Chen, J., and Hoi, S. C. Deep learning for image super-resolution: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(10):3365–3387, 2020.
Williams, P. Modelling seasonality and trends in daily rainfall data. In Advances in Neural Information Processing Systems, volume 10. MIT Press, 1997.
