
Decision-making tool for the operation of a stormwater harvesting system

DOCTORAL THESIS

Submitted to the Faculty of Engineering, Pontificia Universidad Javeriana

By Sandra Lorena Galarza Molina

Supervised by Andrés Torres, CE, MSc, PhD

Examination Committee:

PhD. Virginia Stovin – University of Sheffield – Sheffield – United Kingdom

PhD. Alberto Campisano – Università degli Studi di Catania – Catania – Italy

PhD. Juan Pablo Rodríguez – Universidad de los Andes – Bogotá – Colombia

PhD. Diego Méndez Chaves – Pontificia Universidad Javeriana – Bogotá – Colombia

November 16th 2017, Bogotá D.C.


ABSTRACT

Urban drainage issues presented by massive world-wide urban sprawl impact catchment hydrology, and this is evident in increased runoff rates and volumes as well as decreased infiltration and base-flow. Likewise, urbanization contributes to potential water use losses, threatening drinking water supplies, increasing flooding frequency and adversely affecting human health and biodiversity, all of which stem from higher pollutant concentrations in watercourses. A shift in urban drainage management has been taking place since the 1970s. There is a transition from a flooding control approach to a more holistic approach, where multiple objectives are taken into account in the design and decision-making processes. This holistic approach is known as sustainable urban drainage systems (SUDS) or water sensitive urban design (WSUD).

Despite research carried out on SUDS performance, there remains a need to analyze SUDS performance for stormwater harvesting (SWH), especially in light of the lack of specific knowledge on design, monitoring, operation and maintenance. The following questions remain open: (i) What are the minimum monitoring requirements for a SUDS used for SWH? (ii) What is the necessary equipment for optimal system management (operation and maintenance)? (iii) How can online monitoring be used to infer operational rules for the system? To contribute to reducing the identified knowledge gaps, a better understanding of the performance and operation of SUDS used for SWH in an Andean tropical city is proposed, through the assessment of the performance of SUDS used for SWH and the determination of operating protocols. The overall aim of this PhD thesis is the development of a decision-making (DM) tool that uses online quantity and quality data to propose operational protocols with an eye towards end-use. To fulfill the overarching research goal, a case study in Bogotá, Colombia is used: a constructed-wetland/reservoir-tank (CWRT) system.

For the development of the DM tool, several methods were applied and developed in order to: (i) calibrate water flow and water quality on-line equipment; (ii) detect the first flush before the end of the runoff event; (iii) predict final uses with as few monitoring requirements as possible; (iv) evaluate DM scenarios (Ckc, the Kappa coefficient, and reliabilities). We considered that changes in water quality translate into changes in water use, which should be taken into account in an on-line decision-making tool. Therefore, three DM tool scenarios with different monitoring options were built and evaluated. To simplify the number of water uses, two water use groups are defined based on the concepts of restricted and unrestricted water uses. The defined water use groups are: (i) USES A – unrestricted: urban reuse, agricultural reuse, recreational impoundments; (ii) USES B – restricted: agricultural reuse (processed food and non-food crops), landscape impoundments, restricted-area irrigation.

In the first scenario, we employed an ultrasonic level sensor and a spectrometer probe at the entrance of the system. With the aim of evaluating less costly monitoring options, a second scenario was developed following the same methodology used for the first scenario, but instead of using a spectrometer probe at the entrance of the system, a turbidity probe was used. For the third scenario, we implemented a Support Vector Machine (SVM) model that allowed us to simulate the water uses from the storm parameters and hydraulic performance variables. Therefore, quality measurements are no longer needed to define the water uses.
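As an illustration of the third scenario's approach, the following is a minimal sketch of how an SVM classifier could map storm parameters and hydraulic variables to a water use group. The feature names (rain depth, rain duration, Δhmax), the toy data and the library choice (scikit-learn) are assumptions for illustration only, not the exact implementation used in this thesis.

```python
# Minimal sketch (assumed feature names and scikit-learn API), not the thesis implementation.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical training set: one row per storm event.
# Columns: rain depth (mm), rain duration (min), Delta h_max (m) over the entrance weir.
X = np.array([
    [12.0, 45.0, 0.08],
    [ 3.5, 20.0, 0.02],
    [25.0, 90.0, 0.15],
    [ 7.0, 30.0, 0.04],
])
# Labels: observed water use group per event ("A" unrestricted, "B" restricted, "NU" no use).
y = np.array(["A", "B", "A", "NU"])

# Standardize the features and fit an SVM with a radial basis function kernel.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X, y)

# Predict the water use group for a new event from storm/hydraulic variables only.
new_event = np.array([[10.0, 40.0, 0.06]])
print(model.predict(new_event))  # e.g. ['A']
```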

In the first and second scenarios, we were able to detect an event exhibiting the first flush phenomenon. For this purpose, we implemented two methods: the rules method and the SVM method. The method with the highest Ckc value was the rules method of the first scenario, which obtained a Ckc value of 0.49, indicating moderate strength of agreement. Regarding water use at a general level, the first scenario tool obtained a Ckc value of 0.11 (slight strength of agreement), a water use B reliability of 0.83 and a No Use (NU) reliability of 0.29 for the general level execution. On the other hand, the second scenario did not achieve a better Ckc value (-0.08, poor strength of agreement), whereas in the third scenario the Ckc value improved to 0.28 (fair strength of agreement). Therefore, using the SVM method, we observed an improvement in the Ckc value from slight strength of agreement (0.11) to fair strength of agreement (0.28).
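For reference, Cohen's kappa coefficient (Ckc) compares the observed agreement between predicted and observed classes with the agreement expected by chance; the sketch below computes it from a confusion matrix and maps it onto the usual Landis and Koch strength-of-agreement bands used above. The example confusion matrix is hypothetical and for illustration only.

```python
# Minimal sketch of Cohen's kappa (Ckc) from a confusion matrix; the example data are hypothetical.
import numpy as np

def cohen_kappa(confusion):
    """Ckc = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from the marginals."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    p_o = np.trace(confusion) / n
    p_e = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
    return (p_o - p_e) / (1.0 - p_e)

def strength_of_agreement(kappa):
    """Landis and Koch interpretation bands, as used in the text."""
    bands = [(0.0, "poor"), (0.2, "slight"), (0.4, "fair"),
             (0.6, "moderate"), (0.8, "substantial"), (1.01, "almost perfect")]
    for upper, label in bands:
        if kappa < upper:
            return label
    return "almost perfect"

# Hypothetical confusion matrix: rows = observed class, columns = predicted class.
cm = [[8, 2],
      [3, 7]]
k = cohen_kappa(cm)
print(round(k, 2), strength_of_agreement(k))  # prints: 0.5 moderate
```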

With the decision-making tools, the final water use can be predicted per minute (on-line) or per event (deferred). Likewise, we observed that it is feasible to relate hydraulic variables and storm parameters to the final water uses, in line with Sandoval et al. (2013, 2014). This allows us to simplify the process, and quality measurements are no longer needed to define the water uses. Even though the proposed methods could be applied to a wide range of case studies, in the present PhD study they were applied to a specific SWH system located on a university campus in an Andean tropical city: a CWRT system in Bogotá, Colombia. It is worth mentioning that these methods were tested in real time and at real scale. Therefore, this process allows us to contribute knowledge based on a case study. For other SWH systems, it is recommended to apply the proposed methodologies in order to assess the feasibility of using them.

Another point worth remarking is that the monitoring of SWH opens the way to simplifying the detection of the first flush (FF) phenomenon. Hence, this methodology identifies the FF phenomenon without having to record the entire event and without measured quality indicators. With only one hydraulic variable (in our case Δhmax, the difference between the head over the entrance weir at time i and time i+1), a specific storm event can be related to a first flush effect.
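As a minimal illustration of this idea, the sketch below computes Δhmax from a series of head readings over the entrance weir and flags a possible first flush when it exceeds a threshold; the threshold value, the sample readings and the function names are hypothetical placeholders, not the rule calibrated in this thesis.

```python
# Minimal sketch: flag a possible first flush (FF) event from the head over the entrance weir.
# The 0.05 m threshold is a hypothetical value for illustration, not the calibrated rule.

def delta_h_max(heads):
    """Largest difference between consecutive head readings h(i+1) - h(i), in metres."""
    return max(h_next - h for h, h_next in zip(heads, heads[1:]))

def possible_first_flush(heads, threshold_m=0.05):
    """True if the maximum rise between consecutive readings exceeds the threshold."""
    return delta_h_max(heads) > threshold_m

# Example: head readings (m) over the entrance weir at a fixed time step (e.g. 1 min).
event_heads = [0.01, 0.02, 0.09, 0.12, 0.10, 0.06]
print(round(delta_h_max(event_heads), 3))   # 0.07
print(possible_first_flush(event_heads))    # True
```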


RESUMEN

Los problemas de drenaje urbano derivados de la expansión urbana masiva a nivel mundial tienen un impacto en la hidrología de la cuenca, lo cual se hace evidente en el aumento de las tasas y volúmenes de escorrentía, así como en la disminución de la tasa de infiltración y el flujo de base. Asimismo, la urbanización contribuye a las pérdidas potenciales de uso del agua, amenazando el suministro de agua potable, aumentando la frecuencia de las inundaciones y afectando negativamente la salud humana y la biodiversidad, todo lo cual se deriva del aumento de las concentraciones de contaminantes en los cursos de agua. Desde la década de 1970, ha tenido lugar un cambio en la gestión del drenaje urbano. Se observa una transición de un enfoque de control de inundaciones a un enfoque más holístico, donde se tienen en cuenta múltiples objetivos en los procesos de diseño y toma de decisiones. Este enfoque holístico se conoce como sistemas urbanos de drenaje sostenible (SUDS) o diseño urbano sensible al agua (por sus siglas en inglés, WSUD).

A pesar de la investigación realizada sobre el comportamiento de los SUDS, sigue existiendo la necesidad de analizar el rendimiento de SUDS para el aprovechamiento del agua lluvia (AAL), especialmente a la luz de la falta de conocimiento específico sobre diseño, monitoreo, operación y mantenimiento. Las siguientes preguntas permanecen abiertas: (i) ¿Cuáles son los requisitos mínimos de monitoreo para un SUDS utilizado para el AAL? (ii) ¿Cuál es el equipo necesario para una gestión óptima del sistema (operación y mantenimiento)? (iii) ¿Cómo se puede usar el monitoreo en línea para inferir las reglas operacionales del sistema? Para contribuir a reducir las brechas de conocimiento identificadas, se propone proporcionar una mejor comprensión del desempeño y la operación de los SUDS utilizados para AAL en una ciudad tropical andina, mediante la evaluación del desempeño de los SUDS utilizados para AAL y la determinación de los protocolos de operación. El objetivo general de esta tesis doctoral es el desarrollo de una herramienta de toma de decisiones que utiliza datos en línea de cantidad y calidad de agua lluvia para proponer protocolos operativos con miras al uso final. Para cumplir con el objetivo general de investigación, se utiliza un estudio de caso en Bogotá, Colombia: un sistema de humedal construido/tanque de regulación (HCTR).

Para el desarrollo de la herramienta de toma de decisiones (TD), se aplicaron y desarrollaron algunos métodos para: (i) calibrar los elementos de medición de caudal y calidad del agua en línea; (ii) detectar el fenómeno de primer lavado antes del fin del evento de escorrentía; (iii) predecir los usos finales con los mínimos requisitos de monitoreo posibles; (iv) evaluar los escenarios de TD (Ckc, Coeficiente Kappa, y confiabilidades). Consideramos que los cambios en la calidad del agua se traducen en un cambio en el uso del agua que debe tenerse en cuenta para una herramienta de TD en línea. Por lo tanto, se construyeron y evaluaron tres escenarios de TD con diferentes opciones de monitoreo. Para simplificar la cantidad de usos de agua, se definen dos grupos de uso de agua basados en los conceptos de uso restringido y sin restricciones del agua. Los grupos definidos de uso de agua son: (i) USOS A – sin restricciones: reutilización urbana, reutilización agrícola, embalses recreativos; (ii) USOS B – restringidos: reutilización agrícola (alimentos procesados y cultivos no alimenticios), fuentes en el paisaje, riego de áreas restringidas.

En el primer escenario empleamos un sensor de nivel ultrasónico y una sonda de espectrómetro a la entrada del sistema. Con el objetivo de evaluar opciones de monitoreo menos exigentes en términos de costo, se desarrolló un segundo escenario siguiendo la misma metodología utilizada para el primer escenario, pero en lugar de usar una sonda de espectrómetro en la entrada del sistema, se utilizó una sonda de turbidez. Para el tercer escenario, implementamos máquinas de soporte vectorial (SVM), que nos permitieron simular los usos del agua con los parámetros de la tormenta y las variables de rendimiento hidráulico. Por lo tanto, las mediciones de calidad ya no son necesarias para la definición de los usos del agua.

En el primer y segundo escenario fuimos capaces de detectar un evento con el fenómeno de primer lavado. Para este propósito implementamos dos métodos: el método de reglas y el método SVM. El método con el mayor valor de Ckc fue el método de las reglas del primer escenario. Este método obtuvo un valor de Ckc de 0.49, lo que significa una fuerza de acuerdo moderada. En términos de uso de agua a nivel general, la herramienta del primer escenario obtuvo un valor de Ckc de 0.11 (fuerza de acuerdo leve), una confiabilidad de uso de agua B de 0.83 y una confiabilidad de No Uso de 0.29 para la ejecución a nivel general. Por otro lado, el segundo escenario no logró un mejor valor de Ckc (-0.08, fuerza de acuerdo pobre), mientras que en el tercer escenario el valor de Ckc mejoró a 0.28 (fuerza de acuerdo razonable). Por lo tanto, utilizando el método SVM observamos que hubo una mejora en el valor de Ckc desde una fuerza de acuerdo leve (0.11) hasta una fuerza de acuerdo razonable (0.28).

Con las herramientas de toma de decisiones, el uso final del agua puede predecirse por minuto (en línea) o por evento (diferido). Asimismo, observamos que es factible relacionar variables hidráulicas y parámetros de tormentas con los usos finales del agua, en línea con Sandoval et al. (2013, 2014). Esto nos permite simplificar el proceso, y las mediciones de calidad ya no son necesarias para la definición de los usos del agua. Aunque los métodos propuestos podrían aplicarse a una amplia gama de casos de estudio, para el presente estudio de doctorado se aplicaron a un sistema específico de AAL ubicado en un campus universitario de una ciudad tropical andina: un sistema HCTR en Bogotá, Colombia. Vale la pena mencionar que estos métodos fueron probados en tiempo real y a escala real. Por lo tanto, este proceso nos permite aportar conocimiento basado en un caso de estudio. Para otros sistemas de AAL, se recomienda aplicar las metodologías propuestas a fin de evaluar la viabilidad de su uso.

Otro punto que vale la pena resaltar es que con el monitoreo de AAL se abre el panorama para simplificar la detección del fenómeno de primer lavado (FF). Por lo tanto, esta metodología identifica el fenómeno FF sin la necesidad de haber registrado todo el evento y sin la medición de indicadores de calidad. Con solo una variable hidráulica (en nuestro caso Δhmax, la diferencia entre la cabeza sobre el vertedero de entrada en el tiempo i y el tiempo i+1), un evento de tormenta específico podría relacionarse con el fenómeno FF.


Dedicated to all the people with whom I interacted during the development of this study, especially to my loved ones.


ACKNOWLEDGEMENTS

The development of this work has been made possible with the financial support of Colciencias under the PhD National Students scholarship, call 567 (2013–2016), and the scholarship from Fundación CEIBA (2011–2012). Also, this project received financial support for the acquisition of the monitoring equipment from the Pontificia Universidad Javeriana within the call for internal research projects: (i) Project 5666 “Valoración de la oferta y la demanda hídrica del sistema humedal-construido/tanque-regulador para el aprovechamiento de aguas lluvias en el campus de la Pontificia Universidad Javeriana, sede Bogotá (PUJB)” (Assessment of the water supply and demand of the constructed-wetland/reservoir-tank (CWRT) system for rainwater harvesting on the PUJB campus); (ii) Project 6308 “Metodología para soportar la toma de decisiones de la operación del sistema humedal-construido/tanque-regulador (HCTR) basada en el monitoreo en continuo de calidad de aguas” (Methodology to support decision-making in the operation of the CWRT system based on continuous water quality monitoring).

I would like to express my gratitude to my supervisor, Doctor Andrés Torres, for selling me the idea of being a PhD student, giving me his support and guidance, encouraging me to step into the scientific world and helping me with the development of this research. I am very grateful for his commitment and dedication, and he has been an example to follow.

I am grateful to the members of the jury: Virginia Stovin for her important suggestions and contributions, for taking her time to come to Colombia and for being part of this process; Alberto Campisano for accepting to be an examiner in two stages of the process (doctoral candidacy and dissertation defense) and for his contributions; Juan Pablo Rodríguez and Diego Méndez Chaves for accepting to be part of the examination committee and for their comments and contributions; Tim Fletcher, Manfred Schütze and Pedro Avellaneda for accepting to be examiners in the comprehensive exam and for their comments and contributions; Sylvie Barraud and Juan Diego Giraldo for accepting to be examiners in the doctoral candidacy and for their contributions.

Thanks to Tim Fletcher for allowing me to stay at the facilities of the University of Melbourne and for his comments and ideas about the behavior of the constructed wetland, and to Matthew Burns at the University of Melbourne for helping me to identify the problem with the inflow measurements.

I would like to thank the Physical Resources Office (PRO) of the PUJB for taking the decision to build the CWRT and for allowing us to enter and install the monitoring equipment.

Thanks to my great friends: Nathalie Hernández for her technical support during and after the stormwater sampling as a student of the master's degree in Hydrosystems; Camilo Otalora for helping me with the installation of the monitoring equipment within the system and for his technical and non-technical advice; Andrés Baquero for sharing part of this process, for his technical advice and for his support; Jaime Lara for his advice and technical support; Sandra Fajardo for her support, example and advice during the process; and Leonardo Plazas for his technical advice.

Thanks to the staff of the civil engineering department's testing laboratory, especially the water quality laboratory team. Thanks to all the staff of the Pontificia Universidad Javeriana for their collaboration in each administrative process.

Thanks to my family: my parents for their love and company, my siblings for their moral support and for helping me to get through, and my closest friends for their love and kindness.


CONTENTS

ABSTRACT i
RESUMEN iii
ACKNOWLEDGEMENTS vi
CONTENTS viii

GENERAL INTRODUCTION 1
REFERENCES 7

PART A 16

THEORETICAL FRAMEWORK 16
CHAPTER 1 18
STORMWATER HARVESTING AND CONSTRUCTED WETLANDS 18
1.1 STORMWATER HARVESTING 18
1.2 MONITORING OF CONSTRUCTED WETLANDS SYSTEMS 19
CHAPTER 2 23
WATER QUALITY ON-LINE MEASUREMENTS AND WATER QUANTITY CONCEPTS AND FUNDAMENTALS 23
2.1 UV-VIS ABSORBANCE SPECTROMETRY 23
2.2 TURBIDITY 24
2.3 FIRST FLUSH PHENOMENON 25
2.4 HYDROLOGIC AND HYDRAULIC METHODS 26
CHAPTER 3 28
STATISTICAL TOOLS 28
3.1 LINEAR REGRESSION 28
3.2 PEARSON AND SPEARMAN CORRELATION METHODS 28
3.3 SHAPIRO-WILK AND BARTLETT'S TESTS 29
3.4 BOXPLOT AND WILCOXON SIGNED RANK TEST 29
3.5 KRUSKAL-WALLIS TEST 30
3.6 KAPPA COEFFICIENT 30
3.7 LOGISTIC REGRESSION 31
3.8 PLS – PARTIAL LEAST SQUARES 31
3.9 KERNEL DENSITY ESTIMATION 32
3.10 SVM 32
CONCLUSIONS PART A 34
REFERENCES PART A 35

PART B 47

MATERIALS AND METHODS 47
CHAPTER 4 49
CONSTRUCTED-WETLAND/RESERVOIR-TANK (CWRT) SYSTEM 49
4.1 GENERAL DESCRIPTION 49
4.2 MONITORING SYSTEM 49
4.3 LABORATORY ANALYSIS 54
4.4 MONITORING EQUIPMENT DATA INVENTORY 55
CHAPTER 5 56
AVAILABLE METHODS FOR WATER QUALITY ANALYSIS 56
5.1 UV-VIS SPECTROMETER PROBES CALIBRATION 56
5.2 TURBIDITY PROBE CALIBRATION BASE METHOD 57
5.3 FIRST FLUSH OCCURRENCE PROBABILITY METHOD 58
CHAPTER 6 59
WATER USE AND REUSE QUALITY GUIDELINES AND ACTS 59
CHAPTER 7 62
COMPUTER BASED TOOLS 62
CONCLUSIONS – PART B 65
REFERENCES – PART B 66

PART C 70

DEVELOPED METHODS 70
CHAPTER 8 72
WATER QUANTITY MONITORING SYSTEM CALIBRATION METHOD 72
CHAPTER 9 76
WATER QUALITY MONITORING SYSTEM CALIBRATION METHOD: TURBIDITY PROBES 76
CHAPTER 10 79
DEFINITION OF WATER USE GROUPS AND WATER USE LIMITS EMPLOYING SPECTROMETER ABSORBANCES 79
10.1 WATER USE GROUPS 79
10.2 WATER USES LIMITS EMPLOYING SPECTROMETER ABSORBANCES 80
CHAPTER 11 82
RULES FOR FIRST FLUSH IDENTIFICATION AND EVALUATION METHODS 82
11.1 RULES FOR FIRST FLUSH IDENTIFICATION 82
11.2 USING RELIABILITY AS AN EVALUATION METHOD 83
11.3 EVALUATION METHOD TO IDENTIFY THE REDUNDANT VARIABLES 84
CHAPTER 12 88
EFFICIENCY OF THE CONSTRUCTED WETLAND SYSTEM 88
CONCLUSIONS – PART C 91
REFERENCES – PART C 93

PART D 94

SUPPORTING RESULTS 94
CHAPTER 13 96
SAMPLING CAMPAIGNS RESULTS 96
CHAPTER 14 99
WATER QUANTITY MONITORING SYSTEMS: CALIBRATION RESULTS 99
CHAPTER 15 102
WATER QUALITY MONITORING SYSTEMS: CALIBRATION RESULTS 102
CHAPTER 16 106
FIRST FLUSH ANALYSIS 106
CHAPTER 17 109
CONSTRUCTED-WETLAND EFFICIENCY PERFORMANCE 109
CONCLUSIONS PART D 112
REFERENCES PART D 113

PART E 114

DECISION-MAKING TOOL 114
CHAPTER 18 116
OBSERVED WATER USES: USING ON-LINE UV-VIS ABSORBANCES 116
CHAPTER 19 119
TOWARDS A DECISION-MAKING TOOL 119
19.1. DM TOOL FIRST SCENARIO – USING SPECTROMETER 119
19.2. DM TOOL SECOND SCENARIO – USING TURBIDITY PROBE 130
19.3. DM TOOL THIRD SCENARIO – USING SVM 136
19.4. COMPARISON OF THREE SCENARIOS RESULTS 142
CHAPTER 20 147
CHANGING MEASUREMENT FREQUENCY OF THE RECORDED DATA 147
CHAPTER 21 151
DECISION-MAKING TOOL GENERAL CONCLUSIONS 151
REFERENCES PART E 153

GENERAL CONCLUSIONS 154
GENERAL CONCLUSIONS REFERENCES 157

APPENDICES

GENERAL INTRODUCTION
Over the last few decades, problems associated with urbanization have begun to gain traction
in the field of urban drainage (Harremoës 1997). Urban drainage issues presented by massive
world-wide urban sprawl (United Nations 2010) impact catchment hydrology, evident in
increased runoff rates and volumes as well as decreased infiltration and base-flow (Chocat et al.
2001; Fletcher et al. 2013). Likewise, urbanization contributes to potential water use losses in
the process of threatening drinking water supplies, increasing flooding frequency and adversely
affecting human health and biodiversity, all of which stem from higher pollutant
concentrations in watercourses (Chocat 1997; Pahl-Wostl et al. 2010).
As far as stormwater pollutants are concerned, a wide variety—both inorganic and organic
(Hvitved-Jacobsen et al. 2010)—have been detected; together, these pollutants have a
deleterious effect on human health and stream ecological health (Göbel et al. 2007; McCarthy
et al. 2008; Walsh and Kunapo 2009), in addition to groundwater (Gromaire-Mertz et al. 1999).
Also, stormwater runoff is recognized as the main source of heavy metals, while wastewater
proves to be the main source of organic and nitrogenous pollution (Eriksson et al. 2007;
Gasperi et al. 2010; German et al. 2005; Hvitved-Jacobsen et al. 2010; Zhang et al. 2012).
According to Burton and Pitt (2001), Goonetilleke et al. (2005) and Hathaway and Hunt
(2010), the most relevant factors for stormwater runoff characteristics are site specificity,
climate and other local variables such as frequency of street cleaning. Hence, local
characterization (qualitative and quantitative) of stormwater runoff with monitoring studies
becomes paramount (Barbosa et al. 2012). Another factor that must be considered is the role
played by geomorphologic changes on receiving waters spurred by urbanization; such factors
include channel enlargement, bed and bank erosion (Booth 1990; Booth and Jackson 1997;
Konrad et al. 2005) and negative impacts on the benthic habitat due to sediment excess (Lenat
and Crawford 1994; Olthof, unpublished data, 1994; Wang et al. 1997).
Setting the stage for the present study entails a look at the history of urban drainage systems.
Traditionally, their design is aimed at meeting basic population needs, i.e. stormwater and
wastewater transportation (collect, convey and discharge as efficiently and quickly as possible)
(Cherrared et al. 2007, 2010; C.N.E.S 2000; Niemczynowicz 1999). Yet, the implementation of
these projects neglected key aspects such as environmental protection, economic and financial
management, drainage system maintenance, regulation and design standards, and information
management. These problems are only exacerbated as drainage systems come under
increasing pressure due to climate change, demographic growth, urbanization development,
environmental contamination, resource constraints and infrastructure obsolescence (Cherrared
et al. 2007, 2010; C.N.E.S 2000; Delleur 2003; Roesner et al. 2001; Wong et al. 2000).
Faced with these complications, it is clear that traditional urban drainage practices need to be
reconceived. Traditional practices are viewed as removed from society's environmental values
and as impediments to the broader pursuit of advancing more sustainable urban environments
(Newman and Kenworthy 1999; Thomas et al. 1997; Wong and Eadie 2000). To this end, a
shift in urban drainage management has been taking place since the 1970s (Barlow et al. 1977;
Chocat et al. 2001; Marsalek and Chocat 2002). There is a transition from a flooding control
approach to a more holistic approach, where multiple objectives are taken into account in the
design and decision-making processes (Fletcher et al. 2014). Although this holistic approach to
urban hydrology management possesses many facets, they are all interrelated (Fletcher et al.
2013): low impact development (Prince George's County (Md.) et al. 1999), sustainable urban
drainage systems (SUDS) (CIRIA 2000), water sensitive urban design (WSUD) (Whelans et al.
1994; Wong 2007), best management practices (BMPs) (Schueler 1987), alternative techniques
(Azzout et al. 1995) and alternatives to stormwater drainage (Grotehusmann et al. 1994; Sieker
1996; Uhl 1990). Up to this point, SUDS have been discussed insofar as the academic field is
concerned. It is worth mentioning that these techniques have been successfully put into place
in countries such as the United States (DeBusk et al. 2012), China (Li et al. 2000), South Korea
(Han and Park 2007), Malaysia (Lariyah et al. 2011), Australia (Coombes et al. 2006) and Brazil
(Ghisi et al. 2009).
In this document, the acronyms WSUD and SUDS are used. Looking at SUDS techniques, a
wide variety is observed, including permeable pavements, constructed wetlands and infiltration
basins (Scholes et al. 2005). SUDS collect, store and improve the quality of stormwater, trying to
emulate the hydrologic conditions extant before urbanization (Durrans et al. 2003). In other
words, these systems help minimize the effects of anthropic activity on waterways
(Berndtsson et al. 2006; Durrans et al. 2003; Mentens et al. 2006).
In recent years it has been found that design focused on SUDS generates opportunities for
stormwater harvesting (SWH) (Mitchell et al. 2006; Wong 2007). The implementation of SWH
systems helps to control urban flooding (Fletcher et al. 2008; van Roon 2007a; Zhu et al.
2004), decreases the amount of runoff that enters the drainage system (Werner and Collins
2012) and decreases the potable water demand (Coombes et al. 2000, 2003; Ghisi et al. 2009;
Wong 2007). Therefore, SWH can help address current water shortages and stormwater pollution
(Fletcher et al. 2008; van Roon 2007b; Walsh et al. 2005; Zhu et al. 2004). Some research
focuses on rainwater harvesting (RWH), which collects rainwater primarily from roofs (e.g.
Abdulla and Al-Shareef 2009; Burns et al. 2015; Coombes et al. 2006; Jones and Hunt 2010).
Moreover, SWH has greater public acceptance compared with other alternative water sources
such as wastewater recycling, reuse and seawater desalination (Brown and Davies 2007;
Coombes et al. 2003). Although many SWH studies present positive results, there is resistance
to adopting such systems on a larger scale, possibly due to a lack of information on SWH
effectiveness (Imteaz et al. 2011). Therefore, the enhanced deployment of SWH depends on
establishing concrete answers to the following questions (Mitchell et al. 2008): “How much
stormwater can be harvested?” “How reliable is this supply source?” (Farreny et al. 2011), and
“How much storage is required?”
In spite of research carried out worldwide on SUDS performance (e.g. Boogaard et al. 2014;
Brown and Hunt 2012), there remains a pressing need to analyze SUDS performance for SWH,
especially in light of the lack of specific knowledge on design, monitoring, operation and
maintenance. In Colombia, few research projects have taken SUDS into account for SWH
purposes. Although there have been approaches to SWH or RWH (Ballén et al. 2006; Lara
Borrero et al. 2007; Palacio Castañeda, 2010; Ramírez 2009; Sanchez and Caicedo 2003;
Torres et al. 2011a, b, 2013) and to SUDS (Álvarez and Celedón 2012; Devia et al. 2012;
Galarza and Garzón 2005; Gómez-González et al. 2010; Torres et al. 2011c, 2012), there is
no research on SUDS for SWH.

SUDS performance can be difficult to quantify and sustain without proper maintenance,
due to the complex “man-made” nature of these systems. In addition, these systems need
stable financing throughout their life, especially for continuous maintenance (Marsalek and
Chocat 2002). The deficit of SUDS construction, operation and maintenance programs is one
of the main issues that remains open (Backhaus and Fryd 2013; Hatt et al. 2006; Sharma et al.
2012). The lack of these requirements devalues the use of SUDS (Houle et al. 2013; Kirby
2005; McKissock et al. 1999; O'Sullivan et al. 2012) and hinders the SUDS decision-making
process (Dierkes et al. 2015).
Monitoring is a key component that supports the validation of SUDS performance and effectiveness
(Brown et al. 2007; Erickson et al. 2013; Sharma et al. 2012; Taylor and Fletcher 2007). Sharma
et al. (2012) developed a study to understand the challenges that SUDS face in general; they
found that there is a real need for long-term monitoring studies. Stormwater monitoring is
a main area that is still under debate, because there is no perfect equipment or method that fits
all monitoring studies (Barbosa et al., 2012). Some researchers have contributed to this field
(e.g. Barraud et al. 2002; Dechesne et al. 2004; Maniquiz et al. 2010). Also, Hatt et al. (2006)
identified a lack of adequate monitoring in SWH systems. This leads to less background
data to support the design, maintenance and operation of these systems in order to guarantee a
good holistic performance (Hatt et al. 2004, 2006). Therefore, Barbosa et al. (2012) suggest
that each case study must take into account local constraints, the available budget and relevant
time constraints. On top of that, two main aspects require consideration: (i) little information is
better than no information; (ii) it is better to have some reliable data than a lot of inaccurate data
(Barbosa et al. 2012). Lucas et al. (2014) identify a lack of constructed wetland (CW)
performance data and monitoring results in the UK because these systems are usually part of
the SUDS train, meaning that only the performance of the entire treatment train is calculated.
In contrast, in the United States and other countries, CW systems are stand-alone.
As regards stormwater online monitoring, with the emergence of sensor technology for
continuous water quality monitoring, various authors (e.g. Barraud et al. 2002; Bertrand-
Krajewski et al. 2008; Gruber et al. 2005; Grüning and Orth 2002; Hochedlinger et al. 2006;
Lacour et al. 2009; Veldkamp et al. 2002) have developed studies to assess the
implications of using this technology. One key point is the local calibration needed for the
online probes, in order to increase measurement quality and reduce systematic errors
(Gamerith et al. 2011; Lepot et al. 2016). Furthermore, the use of on-line measurements can
provide information on storm events that allows detailed analysis of the processes
involved (Métadier and Bertrand-Krajewski 2012). This can provide reliable data to support
real-time decision-making systems.
Over the last two decades, Real Time Control (RTC) has been implemented to improve urban
drainage system performance (e.g. Pleau et al. 2001; Puig et al. 2009). RTC refers to the
monitoring of urban drainage system behavior with real time measurements, allowing swift
actions to be taken in order to adjust that behavior according to specific goals. That is to say, RTC
optimizes system operation (Seggelke et al., 2013). In order to achieve RTC, on-line
monitoring equipment, data acquisition and data transmission are necessary (Campisano
et al. 2013). Some RTC tools for stormwater harvesting (SWH) systems include on-line
decision-making methods, which seem to be essential (Poch et al. 2012) for improving the
real-time operation of SWH systems. On-line decision-making could be supported by on-line
monitoring, modelling or forecasting results.
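To make the RTC idea concrete, the following is a minimal sketch of an on-line decision loop that reads a measurement at each time step and adjusts an actuator to meet a goal; the sensor reading, the threshold and the actuator functions are hypothetical placeholders for illustration, not components of any actual RTC system discussed in this work.

```python
# Minimal RTC-style decision loop sketch; read_level, open_gate and close_gate are
# hypothetical placeholders standing in for real sensor and actuator interfaces.
import random
import time

def read_level():
    """Placeholder for an on-line level measurement (m) from the monitoring equipment."""
    return random.uniform(0.0, 0.3)

def open_gate():
    print("Gate opened: diverting inflow away from the storage tank.")

def close_gate():
    print("Gate closed: routing inflow to the storage tank.")

MAX_LEVEL_M = 0.2   # hypothetical control goal: keep the level below this value
TIME_STEP_S = 60    # one decision per minute, matching an on-line data frequency

def control_loop(n_steps=5):
    """Simple rule: act on each new measurement to keep the level below the goal."""
    for step in range(n_steps):
        level = read_level()
        if level > MAX_LEVEL_M:
            open_gate()
        else:
            close_gate()
        if step < n_steps - 1:
            time.sleep(TIME_STEP_S)

if __name__ == "__main__":
    control_loop(n_steps=1)
```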
In sum, understanding the importance of SUDS translates into progress in terms of WSUD
objectives by promoting water conservation via SWH, reducing peak runoff and improving
stormwater quality (thereby protecting surface water and groundwater). In spite of research
carried out on SUDS performance, there remains a need to analyze SUDS performance for
SWH, especially in light of the lack of specific knowledge on design, monitoring, operation and
maintenance. The following questions remain open: (i) What are the minimum
monitoring requirements for a SUDS used for stormwater harvesting? (ii) What is the
necessary equipment for optimal system management (operation and maintenance)? (iii)
How can online monitoring be used to infer operational rules for the system? To phrase it
another way, operational and maintenance protocols destined to manage end-use and storage
time (storage time is inversely correlated with water quality) are of the utmost significance in
the field at this time. Additionally, the dearth of experience in Colombia related to this topic,
given the South American country's tropical mountain climate and culture, means that the study
of SUDS for SWH is particularly pertinent. The proposal described here takes as one of its
primary goals the identification and filling of gaps in international research done on SWH
using SUDS. Moreover, this doctoral project emphasizes online sensors in general and RTC,
along with their respective technical approaches. In so doing, this project reinforces an area
that would benefit from greater methodological contributions.
In order to contribute to reducing the identified knowledge gaps, it is proposed to provide a
better understanding of the performance and operation of SUDS used for SWH in an Andean
tropical city, through the assessment of the performance of SUDS used for SWH and the determination
of operation protocols. The overall aim of this PhD thesis is the development of a decision-
making tool that uses online quantity and quality data to propose operational protocols with an
eye towards end-use. In order to fulfill the overarching research goal, a case study in Bogotá,
Colombia is used: a constructed-wetland/reservoir-tank (CWRT) system. The CWRT was built
on the Pontificia Universidad Javeriana-Bogotá campus between 2012 and 2013 as part of this
PhD project. This project was the result of a research process (see Galarza-Molina et al.
2015a). One of the projects involved was the development of a decision-making tool to
support the selection of rainwater harvesting scenarios (Galarza-Molina et al. 2015b). After the
results were reported to the Physical Resources Office of the University, they decided to
design and construct the recommended scenario, beginning with the SWH of one of the suggested
basins. The entire system receives runoff from a parking lot, a soccer field and numerous green
areas. As far as constructed wetland (CW) specifics are concerned, it is a horizontal subsurface
flow (HSSF) wetland on top of an underlying gravel bed equipped with different gravel sizes to
minimize clogging. The CW is specifically designed to enhance the quality of runoff from the
parking lot.
For the development of the decision-making tool, several methods were applied and developed
in order to: (i) calibrate water flow and water quality on-line equipment; (ii) detect the first flush
before the end of the runoff event; (iii) predict final uses with as few monitoring requirements as
possible. We brought in tools from other disciplines and implemented them in the operation of the SWH system.

This document consists of five parts. Part A presents the literature review concerning the
main topics of this thesis (e.g. stormwater harvesting, fundamental concepts of spectrometry).
Part B presents the general description of our case study, the water quality probe calibration
methods that were developed by other authors and a method for determining the
occurrence probability of the first flush phenomenon. Additionally, this Part shows the water
use and reuse guidelines that were considered, and we close this Part with the computer-
based tools that supported the data analysis and the development of the methods presented in
Part C. Part C illustrates the developed methods (e.g. the water quantity calibration method,
the definition of two water use groups). Part D shows the results obtained using the methods
described in Parts B and C. Finally, Part E presents the development of a decision-making (DM)
tool based on the results of Part D. First, we show the results observed for water uses employing
the methodology proposed in Part C. Then, three DM tool scenarios are presented for the
operation of the stormwater harvesting system with as little monitoring equipment as possible.
We close this Part with the general conclusions of the decision-making tool. Sandra Lorena
Galarza Molina's curriculum vitae is shown in Appendix F.

REFERENCES

Abdulla, F. A., and Al-Shareef, A. W. (2009). “Roof rainwater harvesting systems for
household water supply in Jordan.” Desalination, 243(1–3), 195–207.
Álvarez, J., and Celedón, E. (2012). “Evaluación de las capacidades hidráulicas y de retención
de contaminantes de un modelo de trinchera de retención construida con una canastilla en
PVC (Aquacell) acoplada con capa filtrante en geotextil, arena y grava utilizada como
componente del drenaje urbano.” Trabajo de grado para la obtención del título de Magíster en
Ingeniería Civil, Pontificia Universidad Javeriana, Bogotá.
Azzout, Y., Barraud, S., Cres, F. N., and Alfakih, E. (1995). “Decision aids for alternative
techniques in urban storm management.” Water Science and Technology, Innovative Technologies
in Urban Storm Drainage, Selected Proceedings of the 2nd NOVATECH Conference on
Innovative Technologies in Urban Storm Drainage, 32(1), 41–48.
Backhaus, A., and Fryd, O. (2013). “The aesthetic performance of urban landscape-based
stormwater management systems: a review of twenty projects in Northern Europe.” Journal of
Landscape Architecture, 8(2), 52–63.
Ballén, J. A., Galarza, M. Á., and Ortiz, R. O. (2006). “Sistemas de aprovechamiento de agua
lluvia para vivienda urbana.” João Pessoa, Brazil.
Barbosa, A. E., Fernandes, J. N., and David, L. M. (2012). “Key issues for sustainable urban
stormwater management.” Water Research, Special Issue on Stormwater in urban areas, 46(20),
6787–6798.
Barlow, D., Burrill, G., and Nolfi, J. (1977). Research report on developing a community level natural
resource inventory system: Center for Studies in Food Self-Sufficiency.
Barraud, S., Gibert, J., Winiarski, T., and Bertrand Krajewski, J. L. (2002). “Implementation of
a monitoring system to measure impact of stormwater runoff infiltration.” Water Science and
Technology: A Journal of the International Association on Water Pollution Research, 45(3), 203–210.
Berndtsson, J. C., Emilsson, T., and Bengtsson, L. (2006). “The influence of extensive
vegetated roofs on runoff water quality.” Science of The Total Environment, 355(1–3), 48–63.
Bertrand-Krajewski, J.-L., Barraud, S., Gilbert, J., Malard, F., Winiarski, T., and Delolme, C.
(2008). “The OTHU case study: integrated monitoring of stormwater in Lyon, France
(Chapter 23).” Data Requirements for Integrated Urban Water Management: Urban Water Series -
UNESCO-IHP, T. Fletcher and A. Deletic, eds., Taylor & Francis, London (UK), 303–314.
Boogaard, F., Lucke, T., van de Giesen, N., and van de Ven, F. (2014). “Evaluating the
Infiltration Performance of Eight Dutch Permeable Pavements Using a New Full-Scale
Infiltration Testing Method.” Water (20734441), 6(7), 2070–2083.
Booth, D. B. (1990). “Stream channel incision following drainage basin urbanization.” Water
Resources Bulletin, 26, 407–417.
Booth, D. B., and Jackson, C. R. (1997). “Urbanization of Aquatic Systems: Degradation
Thresholds, Stormwater Detection, and the Limits of Mitigation.” JAWRA Journal of the
American Water Resources Association, 33(5), 1077–1090.
Brown, R. A., and Hunt, W. F. (2012). “Improving bioretention/biofiltration performance
with restorative maintenance.” Water Science and Technology: A Journal of the International Association
on Water Pollution Research, 65(2), 361–367.
Brown, R., Farrelly, M., and Keath, N. (2007). Summary Report: Perceptions of Institutional Drivers
and Barriers to Sustainable Urban Water Management in Australia. National Urban Water
Governance Program; Monash University, Calyton, Melbourne, Australia.
Brown, R. R., and Davies, P. (2007). “Understanding community receptivity to water re-use:
Ku-ring-gai Council case study.” Water Science and Technology: A Journal of the International
Association on Water Pollution Research, 55(4), 283–290.
Burns, M. J., Fletcher, T. D., Duncan, H. P., Hatt, B. E., Ladson, A. R., and Walsh, C. J.
(2015). “The performance of rainwater tanks for stormwater retention and water supply at the
household scale: an empirical study.” Hydrological Processes, 29(1), 152–160.
Burton, A. G., and Pitt, R. E. (2001). Stormwater Effects Handbook: A Toolbox for Watershed
Managers, Scientists, and Engineers. Lewis Publishers, CRC Press, Boca Raton, Fla.
Campisano, A., Ple, J. C., Muschalla, D., Pleau, M., and Vanrolleghem, P. A. (2013). “Potential
and limitations of modern equipment for real time control of urban wastewater systems.”
Urban Water Journal, 10(5), 300–311.
Cherrared, M., Chocat, B., and Benzerra, A. (2007). “Problématique et faisabilité du
développement durable en matière d’assainissement urbain.” Lyon, France.
Cherrared, M., Zekiouk, T., and Chocat, B. (2010). “Durabilité des systèmes d’assainissement
algériens, étude de l’aspect fonctionnel du système de la ville de Jije.” Lyon, France.
Chocat, B. (1997). Encyclopédie de l’hydrologie urbaine et de l’assainissement. (H. Lavoisier, ed.), Tec &
doc-Lavoisier, Paris.
Chocat, B., Krebs, P., Marsalek, J., Rauch, W., and Schilling, W. (2001). “Urban drainage
redefined: from stormwater removal to integrated management.” Water Science & Technology,
43(5), 61.
CIRIA. (2000). Sustainable Urban Drainage Systems: Design Manual for Scotland and Northern Ireland.
Construction Industry Research and Information Association.
C.N.E.S. (2000). “L’eau en Algérie: le grand défi de demain.” Travaux de la Commission
Aménagement du Territoire et de l’Environnement, Alger, 105–149.
Coombes, P. J., Argue, J. R., and Kuczera, G. (2000). “Figtree Place: a case study in water
sensitive urban development (WSUD).” Urban Water, 1(4), 335–343.
Coombes, P. J., Dunstan, H., Spinks, A., Evans, C., and Harrison, T. (2006). “Key Messages
from a Decade of Water Quality Research into Roof Collected Rainwater Supplies.” Perth,
Western Australia, 1–9.
Coombes, P. J., Kuczera, G., and Kalma, J. D. (2003). “Economic, water quantity and quality
impacts from the use of a rainwater tank in the inner city.” Australian Journal of Water Resources,
7(2), 101–110.

DeBusk, K., Hunt, W. F., Quiley, M., Jeray, J., and Bedig, A. (2012). “Rainwater Harvesting:
Integrating Water Conservation and Stormwater Management through Innovative
Technologies.” World Environmental and Water Resources Congress 2012, American Society of Civil
Engineers, 3703–3710.
Dechesne, M., Barraud, S., and Bardin, J.-P. (2004). “Spatial distribution of pollution in an
urban stormwater infiltration basin.” Journal of Contaminant Hydrology, 72(1–4), 189–205.
Delleur, J. W. (2003). “The Evolution of Urban Hydrology: Past, Present, and Future.” Journal
of Hydraulic Engineering, 129(8), 563–573.
Devia, C., Puentes, Á., Oviedo, N., Torres, A., and Angarita, H. (2012). “Cubiertas verdes y
dinámica hídrica en la ciudad.” San José, Costa Rica.
Dierkes, C., Lucke, T., and Helmreich, B. (2015). “General Technical Approvals for
Decentralised Sustainable Urban Drainage Systems (SUDS)—The Current Situation in
Germany.” Sustainability, 7(3), 3031–3051.
Durrans, S. R., Dietrich, K., Ahmad, M., and Inc, H. M. (2003). Stormwater conveyance modeling
and design. Haestad Press.
Erickson, A. J., Weiss, P. T., and Gulliver, J. S. (2013). “Introduction.” Optimizing Stormwater
Treatment Practices, Springer New York, 1–10.
Eriksson, E., Baun, A., Mikkelsen, P. S., and Ledin, A. (2007). “Risk assessment of xenobiotics
in stormwater discharged to Harrestrup Å, Denmark.” Desalination, MEDAWATER
International Conference on Sustainable Water Management, Rational Water Use, Wastewater
Treatment and Reuse June 8–10, 2006, Marrakech, Morocco, 215(1–3), 187–197.
Farreny, R., Gabarrell, X., and Rieradevall, J. (2011). “Cost-efficiency of rainwater harvesting
strategies in dense Mediterranean neighbourhoods.” Resources, Conservation and Recycling, 55(7),
686–694.
Fletcher, T. D., Andrieu, H., and Hamel, P. (2013). “Understanding, management and
modelling of urban hydrology and its consequences for receiving waters: A state of the art.”
Advances in Water Resources, 35th Year Anniversary Issue, 51, 261–279.
Fletcher, T. D., Deletic, A., Mitchell, V. G., and Hatt, B. E. (2008). “Reuse of urban runoff in
Australia: a review of recent advances and remaining challenges.” Journal of Environmental
Quality, 37(5 Suppl), S116-127.
Fletcher, T. D., Shuster, W., Hunt, W. F., Ashley, R., Butler, D., Arthur, S., Trowsdale, S.,
Barraud, S., Semadeni-Davies, A., Bertrand-Krajewski, J.-L., Mikkelsen, P. S., Rivard, G., Uhl,
M., Dagenais, D., and Viklander, M. (2014). “SUDS, LID, BMPs, WSUD and more – The
evolution and application of terminology surrounding urban drainage.” Urban Water Journal,
0(0), 1–18.
Galarza, S., and Garzón, F. (2005). “Estudio de viabilidad técnica de los sistemas urbanos de
drenaje sostenible para las condiciones tropicales de Colombia.” Epiciclos, 4(1), 59–70.
Galarza-Molina, S., Torres, A., Lara-Borrero, J., Méndez-Fajardo, S., Solarte, L., and Gonzales,
L. (2015a). “Towards a constructed wetland/reservoir-tank system for rainwater harvesting in
an experimental catchment in Colombia.” Revista Ingeniería y Universidad, 19(2), 169–185.

Galarza-Molina, S., Torres, A., Moura, P., and Lara-Borrero, J. (2015b). “CRIDE: A Case
Study in Multi-Criteria Analysis for Decision-Making Support in Rainwater Harvesting.”
International Journal of Information Technology & Decision Making, 14(1), 43–67.
Gamerith, V., Steger, B., Hochedlinger, M., and Gruber, G. (2011). “Assessment of UV/VIS-
spectrometry performance in combined sewer monitoring under wet weather conditions.”
Proceedings of the 12th International Conference on Urban Drainage, Porto Alegre, Brazil.
Gasperi, J., Gromaire, M. C., Kafi, M., Moilleron, R., and Chebbo, G. (2010). “Contributions
of wastewater, runoff and sewer deposit erosion to wet weather pollutant loads in combined
sewer systems.” Water Research, 44(20), 5875–5886.
German, J., Vikstro, M., Svensson, G., and Gustafsson, L.-G. (2005). “Integrated stormwater
strategies to reduce impact on receiving waters.” Copenhagen/Denmark.
Ghisi, E., Tavares, D. da F., and Rocha, V. L. (2009). “Rainwater harvesting in petrol stations
in Brasília: Potential for potable water savings and investment feasibility analysis.” Resources,
Conservation and Recycling, 54(2), 79–85.
Göbel, P., Dierkes, C., and Coldewey, W. G. (2007). “Storm water runoff concentration matrix
for urban areas.” Journal of Contaminant Hydrology, Issues in urban hydrology: The emerging field
of urban contaminant hydrology, 91(1–2), 26–42.
Gómez-González, G. A., Rodriguez-Benavides, A. F., and Torres, A. (2010). “Durabilidad de
las capacidades filtrantes de la capa de rodadura de un Pavimento Poroso Rígido.” Punta del
Este, Argentina, 1–11.
Goonetilleke, A., Thomas, E., Ginn, S., and Gilbert, D. (2005). “Understanding the role of
land use in urban stormwater quality management.” Journal of Environmental Management, 74(1),
31–42.
Gromaire-Mertz, M. C., Garnaud, S., Gonzalez, A., and Chebbo, G. (1999). “Characterisation
of urban runoff pollution in Paris.” Water Science and Technology, Innovative Technologies in
Urban Storm Drainage 1998 (Novatech ’98) Selected Proceedings of the 3rd NOVATECH
Conference on Innovative Technologies in Urban Storm Drainage, 39(2), 1–8.
Grotehusmann, D., Khelil, A., Sieker, F., and Uhl, M. (1994). “Alternative Urban Drainage
Concept and Design.” Water Science & Technology, 29(1–2), 227–282.
Gruber, G., Winkler, S., and Pressl, A. (2005). “Continuous monitoring in sewer networks an
approach for quantification of pollution loads from CSOs into surface water bodies.” Water
Science and Technology: A Journal of the International Association on Water Pollution Research, 52(12),
215–223.
Grüning, H., and Orth, M. (2002). “Investigations of the dynamic behaviour of the
composition of combined sewage using on-line analyzers.” Water Science and Technology: A
Journal of the International Association on Water Pollution Research, 45(4–5), 77–83.
Han, M., and Park, J. (2007). “Innovative rainwater harvesting and management in the republic
of Korea.” Rainwater and Urban Design 2007, Engineers Australia, Sydney, Australia, 329–339.
Harremoës, P. (1997). “Integrated water and waste management.” Water Science and Technology,
Sustainable Sanitation Selected Papers on the Concept of Sustainability in Sanitation and Water
and Wastewater Management, 35(9), 11–20.
Hathaway, J. M., and Hunt, W. F. (2010). “Evaluation of First Flush for Indicator Bacteria and
Total Suspended Solids in Urban Stormwater Runoff.” Water, Air, & Soil Pollution, 217(1–4),
135–147.
Hatt, B. E., Deletic, A., and Fletcher, T. D. (2004). Integrated storm water treatment and re-use
systems– an inventory of Australian practice. Cooperative Research Centre for Catchment
Hydrology, Melbourne, Australia.
Hatt, B. E., Deletic, A., and Fletcher, T. D. (2006). “Integrated treatment and recycling of
stormwater: a review of Australian practice.” Journal of Environmental Management, 79(1), 102–
113.
Hochedlinger, M., Kainz, H., and Rauch, W. (2006). “Assessment of CSO loads based on
UV/VIS-spectroscopy by means of different regression methods.” Water Science and Technology:
A Journal of the International Association on Water Pollution Research, 54(6–7), 239–246.
Houle, J., Roseen, R., Ballestero, T., Puls, T., and Sherrard, J., Jr. (2013). “Comparison of
Maintenance Cost, Labor Demands, and System Performance for LID and Conventional
Stormwater Management.” Journal of Environmental Engineering, 139(7), 932–938.
Hvitved-Jacobsen, T., Vollertsen, J., and Haaning Nielsen, A. (2010). Urban and Highway
Stormwater Pollution: Concepts and Engineering. CRC Press, Boca Raton, Fla.
Imteaz, M. A., Shanableh, A., Rahman, A., and Ahsan, A. (2011). “Optimisation of rainwater
tank design from large roofs: A case study in Melbourne, Australia.” Resources, Conservation and
Recycling, 55(11), 1022–1029.
Jones, M. P., and Hunt, W. F. (2010). “Performance of rainwater harvesting systems in the
southeastern United States.” Resources, Conservation and Recycling, 54(10), 623–629.
Kirby, A. (2005). “SuDS—innovation or a tried and tested practice?” Proceedings of the ICE -
Municipal Engineer, 158(2), 115–122.
Konrad, C. P., Booth, D. B., and Burges, S. J. (2005). “Effects of urban development in the
Puget Lowland, Washington, on interannual streamflow patterns: Consequences for channel
form and streambed disturbance.” Water Resources Research, 41(7), 15.
Lacour, C., Joannis, C., and Chebbo, G. (2009). “Assessment of annual pollutant loads in
combined sewers from continuous turbidity measurements: sensitivity to calibration data.”
Water Research, 43(8), 2179–2190.
Lara Borrero, J. A., Torres, A., Campos Pinilla, M. C., Duarte Castro, L., Echeverri Robayo, J.
I., and Villegas González, P. A. (2007). “Aprovechamiento del agua lluvia para riego y para el
lavado de zonas duras y fachadas en el campus de la Pontificia Universidad Javeriana
(Bogotá).” Ingenieria y Universidad, 11(2).
Lariyah, M. S., Mohd Nor, M. D., Mohamed Roseli, Z. A., Zulkefli, M., and Amirah Hanim,
M. (2011). “Application of Water Sensitive Urban Design at Local Scale in Kuala Lumpur.”
Porto Alegre, Brazil, 1–14.
Lenat, D. R., and Crawford, J. K. (1994). “Effects of land use on water quality and aquatic
biota of three North Carolina Piedmont streams.” Hydrobiologia, 294(3), 185–199.
Lepot, M., Torres, A., Hofer, T., Caradot, N., Gruber, G., Aubin, J.-B., and Bertrand-
Krajewski, J.-L. (2016). “Calibration of UV/Vis spectrophotometers: A review and comparison
of different methods to estimate TSS and total and dissolved COD concentrations in sewers,
WWTPs and rivers.” Water Research, 101, 519–534.
Li, F., Cook, S., Geballe, G. T., and Burch Jr, W. R. (2000). “Rainwater Harvesting Agriculture:
An Integrated System for Water Management on Rainfed Land in China’s Semiarid Areas.”
AMBIO: A Journal of the Human Environment, 29(8), 477–483.
Lucas, R., Earl, E. R., Babatunde, A. O., and Bockelmann-Evans, B. N. (2014). “Constructed
wetlands for stormwater management in the UK: a concise review.” Civil Engineering and
Environmental Systems, 0(0), 1–18.
Maniquiz, M. C., Lee, S.-Y., and Kim, L.-H. (2010). “Long-Term Monitoring of Infiltration
Trench for Nonpoint Source Pollution Control.” Water, Air, & Soil Pollution, 212(1–4), 13–26.
Marsalek, J., and Chocat, B. (2002). “International report: Stormwater management.” Water
Science and Technology: A Journal of the International Association on Water Pollution Research, 46(6–7),
1–17.
McCarthy, D. T., Deletic, A., Mitchell, V. G., Fletcher, T. D., and Diaper, C. (2008).
“Uncertainties in stormwater E. coli levels.” Water Research, 42(6–7), 1812–1824.
McKissock, G., Jefferies, C., and D’Arcy, B. J. (1999). “An Assessment of Drainage Best
Management Practices in Scotland.” Water and Environment Journal, 13(1), 47–51.
Mentens, J., Raes, D., and Hermy, M. (2006). “Green roofs as a tool for solving the rainwater
runoff problem in the urbanized 21st century?” Landscape and Urban Planning, 77(3), 217–226.
Métadier, M., and Bertrand-Krajewski, J.-L. (2012). “The use of long-term on-line turbidity
measurements for the calculation of urban stormwater pollutant concentrations, loads,
pollutographs and intra-event fluxes.” Water Research, Special Issue on Stormwater in urban
areas, 46(20), 6836–6856.
Mitchell, V. G., Hatt, B. E., Deletic, A., Fletcher, T. D., McCarthy, D., and Magyar, M. (2006).
Integrated Stormwater Treatment and Harvesting: Technical Guidance Report. Institute for Sustainable
Water Resources, Monash University, Melbourne, Australia.
Mitchell, V. G., McCarthy, D. T., Deletic, A., and Fletcher, T. D. (2008). “Urban stormwater
harvesting – sensitivity of a storage behaviour model.” Environmental Modelling & Software, 23(6),
782–793.
Newman, P., and Kenworthy, J. (1999). Sustainability and Cities: Overcoming Automobile
Dependence. Island Press, Washington, D.C.
Niemczynowicz, J. (1999). “Urban hydrology and water management – present and future
challenges.” Urban Water, 1(1), 1–14.
O’Sullivan, J. J., Bruen, M., Purcell, P. J., and Gebre, F. (2012). “Urban drainage in Ireland –
embracing sustainable systems.” Water and Environment Journal, 26(2), 241–251.
Pahl-Wostl, C., Holtz, G., Kastens, B., and Knieper, C. (2010). “Analyzing complex water
governance regimes: the Management and Transition Framework.” Environmental Science &
Policy, Special issue: Water governance in times of change, 13(7), 571–581.
Palacio Castañeda, N. (2010). “Propuesta de un sistema de aprovechamiento de agua lluvia
como alternativa para el ahorro de agua potable, en la institución educativa María Auxiliadora
de Caldas, Antioquia.” Revista Gestión y Ambiente, 13(2), 25–40.
Pleau, M., Pelletier, G., Colas, H., Lavallée, P., and Bonin, R. (2001). “Global predictive real-
time control of Quebec Urban Community’s westerly sewer network.” Water Science and
Technology: A Journal of the International Association on Water Pollution Research, 43(7), 123–130.
Poch, M., Cortés, U., Comas, J., Rodriguez-Roda, I., and Sànchez-Marrè, M. (2012). Decisions on
urban water systems: some support. Universitat de Girona.
Prince George’s County (Md.), Department of Environmental Resources, and Programs and
Planning Division. (1999). Low-impact development: an integrated design approach. Prince George’s
County Dept. of Environmental Resources, Largo, Md. (9400 Peppercorn Pl., Largo 20774).
Puig, V., Cembrano, G., Romera, J., Quevedo, J., Aznar, B., Ramón, G., and Cabot, J. (2009).
“Predictive optimal control of sewer networks using CORAL tool: application to Riera Blanca
catchment in Barcelona.” Water Science and Technology: A Journal of the International Association on
Water Pollution Research, 60(4), 869–878.
Ramírez, J. (2009). “Construcción verde en concreto.” Noticreto, 93, 20–27.
Roesner, L. A., Bledsoe, B. P., and Brashear, R. W. (2001). “Are Best-Management-Practice
Criteria Really Environmentally Friendly?” Journal of Water Resources Planning and Management,
127(3), 150–154.
van Roon, M. (2007a). “Water localisation and reclamation: Steps towards low impact urban
design and development.” Journal of Environmental Management, 83(4), 437–447.
van Roon, M. (2007b). “Water localisation and reclamation: Steps towards low impact urban
design and development.” Journal of Environmental Management, 83(4), 437–447.
Sanchez, L. S., and Caicedo, E. (2003). “Uso del Agua Lluvia en la Bocana- Buenaventura.”
Cinara, Cartagena, Colombia, 1–9.
Scholes, L., Revitt, M., and Ellis, J. B. (2005). The fate of stormwater priority pollutants in BMPs.
Schueler, T. R. (1987). Controlling Urban Runoff: A Practical Manual for Planning and Designing
Urban BMPs. Metropolitan Information Center.
Sharma, A. K., Cook, S., Tjandraatmadja, G., and Gregory, A. (2012). “Impediments and
constraints in the uptake of water sensitive urban design measures in greenfield and infill
developments.” Water Science and Technology: A Journal of the International Association on Water
Pollution Research, 65(2), 340–352.
Sieker, F. (1996). “Dezentrale Regenwasserbewirtschaftung als Beitrag der
Siedlungswasserwirtschaft zur Hochwasserdämpfung.” Zeitschrift für Stadtentwässerung und
Gewässerschutz (SuG), Universität Hannover, 34.
Taylor, A. C., and Fletcher, T. D. (2007). “Nonstructural Urban Stormwater Quality Measures:
Building a Knowledge Base to Improve Their Use.” Environmental Management, 39(5), 663–677.

Thomas, J. F., Gomboso, J., Oliver, J. E., and Ritchie, V. A. (1997). Wastewater re-use, stormwater
management and the national water reform agenda: report to the Sustainable Land and Water Resources
Management Committee and to the Council of Australian Governments National Water Reform Task Force.
Research position paper; 1. 1329-5713, CSIRO Land and Water, Canberra.
Torres, A., Estupiñán Perdomo, J.L., and Zapata García, H.O. (2011a). “Proposal and
assessment of rainwater harvesting scenarios on the Javeriana University campus, Bogota.”
Porto Alegre, Brazil, 1–8.
Torres, A., Mendez-Fajardo, S., Gutiérrez-Torres, A., and Sandoval, S. (2013). “Quality of
Rainwater Runoff on Roofs and Its Relation to Uses and Rain Characteristics in the Villa
Alexandra and Acacias Neighborhoods of Kennedy, Bogota, Colombia.” Journal of
Environmental Engineering, 139(10), 1273–1278.
Torres, A., Méndez-Fajardo, S., López-Kleine, L., Marín, V., González, J. A., Suárez, J. C.,
Pinzón, J. D., and Ruiz, A. (2011b). “Preliminary assessment of roof runoff rain water quality
for potential harvesting in bogota’s peri-urban areas.” Revista U.D.C.A Actualidad &
Divulgación Científica, 14(1), 127–135.
Torres, A., Ortega Suescun, D. H., and Herrera Daza, E. (2011c). “Propiedades filtrantes de
los pavimentos porosos rígidos.” Gestión integrada del recurso hídrico frente al cambio climático, L. D.
Sánchez, A. Gálvis, I. Restrepo, and M. R. Peña, eds., Programa Editorial U-Valle, Cali, 39–48.
Torres, A., Santa, M. A., and Quintero, J. A. (2012). “Desempeño hidráulico de un modelo de
trinchera de retención utilizada como componente del drenaje urbano.” Revista Acodal, 229(1),
19–27.
Uhl, M. (1990). “Alternativen zur Regenwasserableitung.” Technische Berichte über
Ingenieurhydrologie und Hydraulik, Institut für Wasserbau, T. H. Darmstadt, ed., Institute of River
and Coastal Engineering, Hamburg, Germany, 47–90.
United Nations. (2010). World urbanization prospects: the 2009 revision. Department of Economic
and Social Affairs (Population Division), New York, 47.
Veldkamp, R., Henckens, G., Langeveld, J., and Clemens, F. (2002). “Field data on time and
space scales of transport processes in sewer systems.” Portland, Oregon, United States, 15.
Walsh, C. J., Fletcher, T. D., and Ladson, A. R. (2005). “Stream restoration in urban
catchments through redesigning stormwater systems: looking to the catchment to save the
stream.” Journal of the North American Benthological Society, 24(3), 690–705.
Walsh, C. J., and Kunapo, J. (2009). “The importance of upland flow paths in determining
urban effects on stream ecosystems.” Journal of the North American Benthological Society, 28(4),
977–990.
Wang, L., Lyons, J., Kanehl, P., and Gatti, R. (1997). “Influences of Watershed Land Use on
Habitat Quality and Biotic Integrity in Wisconsin Streams.” Fisheries, 22(6), 6–12.
Werner, B., and Collins, R. (2012). Towards efficient use of water resources in Europe — European
Environment Agency (EEA). ISSN 1725-9177, Publication, Denmark.
Whelans, C., Maunsell, H. G., and Thompson, P. (1994). Planning and management guidelines for

water sensitive urban (residential) design. Department of Planning and Urban Development of
Western Australia, Perth, Western Australia.
Wong, T. H. F. (2007). “Water Sensitive Urban Design - the Journey Thus Far.” Australian
Journal of Water Resources, 10(3).
Wong, T. H. F., Breen, P., and Lloyd, S. (2000). Water Sensitive Road Design - Design Options for
Improving Stormwater Quality of Road Runoff. Technical Report, Cooperative Research Centre for
Catchment Hydrology, NSW, Australia, 1–69.
Wong, T. H. F., and Eadie, M. L. (2000). “Water sensitive urban design: A paradigm shift in
urban design.” <http://search.informit.com.au/documentSummary;dn=519426635188728;res=IELENG> (Apr. 24, 2015).
Zhang, Z., Cui, B., and Fan, X. (2012). “Removal mechanisms of heavy metal pollution from
urban runoff in wetlands.” Frontiers of Earth Science, 6(4), 433–444.
Zhu, K., Zhang, L., Hart, W., Liu, M., and Chen, H. (2004). “Quality issues in harvested
rainwater in arid and semi-arid Loess Plateau of northern China.” Journal of Arid Environments,
57(4), 487–505.

PART A
THEORETICAL FRAMEWORK
Part A presents the concepts, definitions, relevant literature and existing theories used in the
development of this thesis. These concepts and definitions are well known and frequently used
by the scientific community.
Chapter 1 presents the literature review concerning stormwater harvesting and constructed
wetlands, and introduces what has been done to monitor constructed wetlands. Chapter 2
presents the fundamental concepts of spectrometry, turbidity, first flush, and hydrologic and
hydraulic methods. Finally, Chapter 3 illustrates the statistical methods employed for data
analysis.
CHAPTER 1

STORMWATER HARVESTING AND CONSTRUCTED WETLANDS

This Chapter presents a literature review on stormwater harvesting and constructed
wetlands.

1.1 STORMWATER HARVESTING


Recent studies in urban drainage management have come to view urban water runoff as an
opportunity (Mitchell et al. 2006). In other words, researchers have recognized that the additional
water supply, reduced potable water demand, increased biodiversity and improved
microclimate offered by this source make it less a problem and more a boon (Ashley et al. 2013).
Hence, further attention is being paid to stormwater harvesting (SWH) as an alternative source
of water for non-potable purposes (Hatt et al. 2006) such as landscape irrigation, toilet
flushing, washing floors and washing building facades (Coombes et al. 2000; Ghimire et al.
2012; Ghisi et al. 2009; Shuster et al. 2013).

The implementation of SWH techniques in urban areas is considered a multi-beneficial strategy


in light of the fact that it also provides urban flooding control (Fletcher et al. 2008; van Roon
2007a; Zhu et al. 2004) and lower demand to meet potable water consumption needs
(Coombes et al. 2000, 2003; Ghisi et al. 2009; Wong 2007). Further still, it helps relieve
pressure on urban drainage systems during strong rain events (Werner and Collins 2012),
which reduces and/or solves current water shortages and tackles urban natural waterway
pollution (Fletcher et al. 2008; van Roon 2007b; Walsh et al. 2005; Zhu et al. 2004).

In addition, SWH embodies adaptation strategies that must be heeded by the water sector to
confront the challenges of climate change (Aladenola and Adeboye 2009; Boelee et al. 2012).
On top of the policy and water benefits, this technique enjoys greater public acceptance than
wastewater recycling / reuse / seawater desalinization (Brown and Davies 2007; Coombes et
al. 2003). Despite these positives, and the fact that many studies in SWH have yielded good
results, there remains a general reluctance to adopt this technique on a wider scale, primarily
attributable to the lack of information regarding SWH effectiveness (Imteaz et al. 2011).
Nevertheless, it has been successfully implemented as an alternative water source in some
countries (DeBusk et al. 2010; Lariyah et al. 2011). The enhanced deployment of SWH
depends on establishing concrete answers to the following questions (Mitchell et al. 2008):
“How much stormwater can be harvested?” “How reliable is this supply source?” (Farreny et
al. 2011), and “How much storage is required?” Some studies focus on rainwater harvesting
(RWH), collecting only rainwater, primarily from roofs (e.g. Abdulla and Al-Shareef 2009; Burns
et al. 2015; Coombes et al. 2006; Jones and Hunt 2010).

Australia leads the world in SUDS used for SWH. Hatt et al. (2006) review Australian
practice, in which runoff generated from all urban surfaces is harvested primarily for
non-potable uses. In 2008, Fletcher et al. identified areas of SWH that remain undeveloped, as
well as impediments to the adoption of SWH as a source of water. In short, the authors hold
that some of those impediments are: (i) “Data requirements for assessing reliability and proper
system function risk”; (ii) “Data requirements for assessing life cost”; (iii) “Stormwater end use
water quality guidelines”; (iv) “Water-Energy Tradeoffs”; (v) “Retrofit technologies”.
Recently, Kazemi and Hill (2015) studied the effect of storing stormwater in different
permeable pavement base courses. They compared the results with the Australian guidelines,
finding that the stored water generally met the requirements for irrigation of green spaces.
Similarly, in the UK, Nnadi et al. (2015) studied the capability of permeable pavements to treat
stormwater. The authors found that this kind of SUDS can meet the chemical standards for use
in agricultural irrigation regardless of the type of sub-base used.
In Colombia, although some research has looked into SWH or RWH (Ballén et al. 2006; Lara
Borrero et al. 2007; Palacio Castañeda, 2010; Ramírez 2009; Sanchez and Caicedo 2003; Torres
et al. 2011b; a, 2013) and SUDS (Álvarez and Celedón 2012; Devia et al. 2012; Galarza and
Garzón 2005; Galarza-Molina et al. 2016; Gómez-González et al. 2010; Torres et al. 2011c,
2012), virtually no research projects have taken SUDS into account for SWH purposes. Ballén
et al. (2006) assess the feasibility of RWH in Colombia and enumerate five critical variables for
the success of SUDS for SWH: precipitation, house coverage area, water supply availability,
water price per cubic meter and investment needed to build, maintain and operate the system.
A sixth aspect, storage tank capacity, should also be added to this list, for it forms an
indispensable part of the design of such systems (Fewkes 1999).

1.2 MONITORING OF CONSTRUCTED WETLANDS SYSTEMS


Constructed Wetland systems (CWs) play a part in the significant changes that urban drainage
management and the urban water cycle have experienced in the last few decades. CWs
implementation within stormwater management began in the late 1980s (Somes and Wong
1994; Vymazal 2005), precisely when researchers saw the fundamental role of natural wetlands
in the environment (Leibowitz et al. 2000) and the successful use of CWs in treating different
kinds of wastewater (e.g. domestic wastewater, mine discharge, and industrial and agricultural
effluents). Additionally, these systems quickly became recognized for their effective removal of
pollutants, low energy requirements, ease of maintenance and low construction and operating
costs (Cooper 2007; Gottschall et al. 2007; Langergraber et al. 2010; Soda et al. 2012). CWs are
split into (Kadlec and Wallace 2008): (i) free water surface flow systems (FWS), (ii) horizontal
subsurface flow systems (HSSF), (iii) vertical subsurface flow systems (VF) and (iv) hybrid
systems.
Regarding Constructed Wetlands (CWs) for SWH, Hatt et al. (2006) classify 14% of SUDS
used for stormwater treatment as CWs. In fact, the end uses of the water treated in these
systems are irrigation, environmental flows, fire-fighting, toilet flushing, miscellaneous uses
(e.g. industrial recycling, industrial backup supply) and other outdoor uses (e.g. car washing,
window washing). Yet, the authors fail to locate adequate information about system

19
construction, maintenance and operation. They conclude that clear specifications of operation
and maintenance programs must be developed. Also, Hatt et al. (2006) raise the concern that the
design of these systems is aimed at stormwater pollution control practices, and may therefore
lack assurance of the necessary water quality.
According to studies carried out by Barten (1987), Carleton et al. (2001), Meiorin (1989), Tilley and
Brown (1998), Constructed Wetland (CW) performance for stormwater depends on: inflow,
hydraulic loading rate and detention time. These variables are functions of storm intensity,
runoff volume and CW size. CW treatment processes consist of a combination of physical,
chemical, and biological processes, which include sedimentation, precipitation, adsorption to
soil particles, assimilation by plant tissue and microbial transformations (Brix 1993). Studies on
CWs for treating stormwater show that sedimentation seems to be the principal treatment
process compared with plant uptake (Lung and Light 1996; Mays and Edwards
2001; Walker and Hurl 2002).

A further, more concise review of CWs for stormwater management can be found in Lucas
et al. (2014). The authors observe variations in removal efficiency and effluent pollutant
concentrations, indicating that some CWs are more successful than others; the differences are
attributed to site-specific circumstances. In short, UK CWs perform as efficiently as other
systems around the world. Generally speaking, low phosphate removal efficiencies and a wide
range of removal efficiencies (highlighting the importance of good system design) are reported,
with negative values resulting from re-suspension of particles (in the case of FWS) or
saturation of the system (in the case of HSSF) (Lucas et al. 2014).
In addition to the previously discussed literature review, CW treatment performance in
countries around the world is summarized by Malaviya and Singh (2012). In order to compare
these findings to those of Lucas et al. (2014), Table A-1 was developed. This table was
complemented with a literature review done by Mungasavalli and Viraraghavan (2006), as well
as recent research not included (e.g. Al-Rubaei et al. 2016; Mohammadpour et al. 2014).
On the whole, CW performance is observed to be variable and site-specific. According to
Holland et al. (2004), these aspects are the result of sensitivity to rainfall characteristics, as well
as CW hydraulic conditions.
According to the CW review by Lucas et al. (2014), the lack of performance data and
monitoring results remains an issue. This observation was also reported by
Fletcher et al. (2008) in their review of stormwater harvesting systems. This is
possibly due to the fact that CWs are typically part of a SUDS treatment train (as in the UK)
rather than a stand-alone treatment (Lucas et al. 2014). On the contrary, in the USA CWs are
normally used as a stand-alone treatment, so information about their performance is more
readily available (Moore and Hunt 2012).

Table A-1 CWs removal efficiency (%) (adapted from Malaviya and Singh, 2012). *Mean removal efficiency; ** For TTC Thermotolerant Coliforms; *** Synthetic runoff
Pollutant | Removal efficiency (%) | Reference
Basic parameters
TSS | 7 to 96 | Al-Rubaei et al. (2016); Birch et al. (2004); Carleton et al. (2000); Choi et al. (2012)***; Mangangka et al. (2013); Mohammadpour et al. (2014); Rushton et al. (1995); Terzakis et al. (2008)
BOD5 | 25 to 97 | Ko et al. (2010); Lee and Scholz (2007); Mohammadpour et al. (2014)
TP | 12 to 96 | Al-Rubaei et al. (2016); Birch et al. (2004)*; Carleton et al. (2000); Choi et al. (2012)***; Farrell and Scheckenberger (2003); Ham et al. (2010); Headley and Tanner (2012); Hey et al. (1994); Ko et al. (2010); Livingston (1989); Mangangka et al. (2013); Maristancy and Bartel (1989); Reinelt and Horner (1995); Terzakis et al. (2008)
TN (Total Nitrogen) | 10 to 84 | Al-Rubaei et al. (2016); Birch et al. (2004)*; Carleton et al. (2000); Choi et al. (2012)***; Ham et al. (2010); Headley et al. (2001); Livingston (1989); Mangangka et al. (2013); Maristancy and Bartel (1989); Raisin et al. (1997); Sim et al. (2008); Terzakis et al. (2008)
Metals
Zn | 4 to 99 | Al-Rubaei et al. (2016); Birch et al. (2004); Carleton et al. (2000); Choi et al. (2012); Daukas et al. (1989); Farrell and Scheckenberger (2003); Mungur et al. (1999); Terzakis et al. (2008)
Pb | 57 to 97 | Al-Rubaei et al. (2016); Birch et al. (2004); Carleton et al. (2000); Choi et al. (2012)***; Daukas et al. (1989); Farrell and Scheckenberger (2003); Mungur et al. (1999); Terzakis et al. (2008); Walker and Hurl (2002)
Cd | 0 to 96 | Al-Rubaei et al. (2016); Carleton et al. (2000); Daukas et al. (1989)
Cu | -4 to 97 | Al-Rubaei et al. (2016); Birch et al. (2004)*; Carleton et al. (2000); Daukas et al. (1989); Farrell and Scheckenberger (2003); Lee and Scholz (2007); Maristancy and Bartel (1989); Mungur et al. (1999); Silverman (1989); Terzakis et al. (2008); Walker and Hurl (2002)
Ni | 22 to 33 | Birch et al. (2004); Terzakis et al. (2008)
Cr | 40 to 91 | Birch et al. (2004)*; Daukas et al. (1989); Maristancy and Bartel (1989); Silverman (1989)
Bacterial indicators
Fecal coliforms (FC) | 76 to 79** | Birch et al. (2004); Davies and Bavor (2000)

To characterize CW hydraulic and treatment performance, authors such as Al-Rubaei et al.
(2016) have used automatic samplers (flow-weighted composite samples), ultrasonic flow
meters and a rain gauge with 0.2 mm resolution. Flow-weighted composite samples
represent the pollutant concentration over the entire stormwater event; if the pollutant
concentration changes drastically, the measured concentration may not be representative
(Erickson et al. 2010). The monitoring periods for the CW studied by Al-Rubaei et al. (2016)
were summer 1997 (Johansson 1997; Växjö Municipality 1998), winter 2003 (Semadeni-Davies
2006) and from May 2013 to April 2014 (Al-Rubaei et al. 2016). Carleton et al. (2000) had the
same goal as Al-Rubaei et al. (2016), but the former implemented automatic flow gauging–
sampling stations (flow-weighted composite samples) that worked with Palmer–Bowlus flumes
and a submerged pressure transducer inside the flume. For the precipitation measurements the
authors used a tipping-bucket rain gauge with 0.2 mm resolution. The monitoring period of
this research was from April 1996 to May 1997. Similar monitoring equipment was
implemented by Birch et al. (2004) for the evaluation of the efficiency of a CW in Sydney. The
monitoring period of this study was between April and June 2000.
During 2009 and 2010, Thomas et al. (2016) quantified the performance of a CW
located in Sydney. In order to improve on the accuracy observed in other studies,
the authors employed high temporal resolution monitoring and automatic sampling. Five water
quality monitoring stations were installed, each with an auto-sampler, a flow monitor (ultrasonic
level sensor), a turbidity sensor, and a conductivity and temperature sensor. The authors used the collected
data to calibrate and validate a model to estimate the reduction of the pollutants
concentrations. The results showed that the proposed model could be used as a standard
method for estimating the pollutant reduction in terms of TN (Total Nitrogen) and TP (Total
Phosphorus) performance of other similar systems. However, the TSS predictions were too
uncertain to be used (Thomas et al. 2016).
Other authors were interested in determining the background concentration of suspended
solids and nutrients in a CW for stormwater treatment. They employed a grab sampling method,
with samples collected at daily intervals. Two monitoring periods were defined, from
August 2002 to January 2003 and June 2003 to February 2004 (Kasper and Jenkins 2007).
Most of the studies presented in this Chapter implemented sampling campaigns and laboratory
analyses to assess the water quality performance of CWs (e.g. Al-Rubaei et al. 2016; Carleton et
al. 2001; Ellis et al. 2003). However, this approach has restrictions: few samples can be
collected per event because of high costs, the dynamics of pollutographs are poorly represented
because of inadequate time steps, and too few data are acquired to enable
adequate statistical analyses (Métadier and Bertrand-Krajewski 2012). Since the late 1990s
various researchers (e.g. Barraud et al. 2002; Bertrand-Krajewski et al. 2008; Gruber et al. 2005;
Grüning and Orth 2002; Hochedlinger et al. 2006; Lacour et al. 2009; Veldkamp et al. 2002)
have used newly developed sensor technology for continuous water quality monitoring. The
purposes of these studies range from real-time control of sewer systems to the analysis and
modelling of catchments (Métadier and Bertrand-Krajewski 2012). Despite the benefits
of online monitoring sensors (e.g. measurement of common water quality indicators at
high temporal resolution) (Gruber et al. 2006), online probes require local calibration to
increase measurement quality and reduce systematic errors (Gamerith et al. 2011). In recent
years, some authors have focused on the analysis of large databases (Métadier and Bertrand-Krajewski
2012) with the aim of identifying pollutant behavior and the potential of online
monitoring, while others have focused on the influence of local calibration on the quality of online
measurements (Caradot et al. 2015). Furthermore, on-line measurements can
provide information on storm events that allows detailed analysis of the processes
involved (Métadier and Bertrand-Krajewski 2012), yielding reliable data that can support
real-time decision-making systems.

CHAPTER 2

WATER QUALITY ON-LINE MEASUREMENTS AND WATER QUANTITY CONCEPTS AND FUNDAMENTALS

This Chapter covers the fundamental concepts of spectrometry, turbidity and the first flush
phenomenon. The hydrologic and hydraulic methods implemented during the development of
this thesis are also presented.
2.1 UV-VIS ABSORBANCE SPECTROMETRY

Numerous procedures have been developed based on UV-Vis spectrophotometry. In recent
years, several developments have led to many applications based on the exploitation of a large
part of the UV spectrum. One of these applications is the direct examination of water and
wastewater. The quality measurements can be carried out with UV colorimetry or with newer
multi-wavelength approaches (Thomas 2007). Hence, this growing technology allows direct
on-field water quality measurements.

In a water sample there is a variety of dissolved substances. The substances of the measured
medium weaken the light that is emitted by a lamp. Each molecule of a dissolved substance
absorbs radiation at a certain and known wavelength. Hence, the concentration of the
substances contained determines the magnitude of the absorption of the sample: the higher the
concentration of a certain substance, the more it weakens the light beam. The absorbance is
represented by the ratio of two light intensities: the intensity of light after the beam has passed
through the medium to be measured and the intensity after passing through a reference medium
of distilled water. Absorption increases linearly with higher concentrations (s::can Messtechnik
GmbH 2007).
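The linear relation between absorbance and concentration described above corresponds to the Beer-Lambert law. A minimal statement of it is added here for reference only; the symbols are generic and are not taken from the thesis text:

    % Beer-Lambert law (sketch): A is the absorbance (dimensionless), I_0 and I are the
    % incident and transmitted light intensities, \varepsilon is the molar absorption
    % coefficient, c the concentration and l the optical path length.
    A = \log_{10}\frac{I_0}{I} = \varepsilon \, c \, l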
Absorption spectroscopy in the UV and the visible (VIS) range can be classified according to
Figure A-1 (Perkampus 1986). The ranges shown in Figure A-1 are not strict limits, due to
absorption of the molecules beneath 200 nm and above 50000 cm-1, respectively. These
boundaries are device and experiment dependent, but the upper boundary (800 nm) is not
limited by the device (Hochedlinger 2005).

Figure A-1. Classification of absorption spectroscopy (adapted from: Hochedlinger 2005; Perkampus 1986)
During the 1990s, UV-Vis technology for water analysis moved from the laboratory to the
field, but the off-line analyzers were large, expensive and complicated (Ojeda and Rojas 2009).

2.2 TURBIDITY

Turbidity is an expression of the optical property that causes light to be scattered and absorbed
by particles and molecules. The suspended matter or impurities (e.g. clay, silt, finely divided
organic matter, plankton and other microscopic organisms) that cause turbidity make water
appear cloudy (ASTM International 2003; EPA 1999).

During the 1970s, the standard turbidity measurement changed from the Jackson candle
turbidimeter method (Sadar 1996) to the nephelometric method (see Figure A-2) (EPA 1999).
The nephelometric method determines turbidity from the light scattered at an angle of 90° from
the incident beam. A 90° detection angle is considered to be the least sensitive to variations in
particle size. This method is calibrated using suspensions of formazin polymer such that a
value of 40 nephelometric turbidity units (NTU) is approximately equal to 40 JTU (Jackson
Turbidity Units) (EPA 1999).

Figure A-2. Nephelometric measurement (source: Sadar 1996)

The scattered-light optical measurement is the measuring principle for turbidity. This
measurement method requires specific care in the placement and installation of the sensor. The
infrared light penetrates the water sample, so if a wall or floor is close to the sensor the
light can be reflected or scattered, simulating a higher turbidity. Direct sunlight can also
interfere with the measurement, as can coloration and air bubbles in the sample.

During the first ten years of the 2000s, great advances in turbidity measurement were observed
(Bin Omar and Bin MatJafri 2009). The improvements have focused on the elimination of stray
light interference, while software improvements allow more accurate and stable
measurements. Additionally, the new technology admits a wide dynamic range, admits water
with more complex matrices and minimizes interferences (Association 2011).

2.3 FIRST FLUSH PHENOMENON
Runoff that washes urbanized areas exhibits higher levels of pollution (Torres et al. 2016; Zhang
et al. 2010), sometimes even greater than those of wastewater. The fact that the greatest mass of
pollutants is contained in a small fraction of the runoff volume is called the first flush (FF)
phenomenon (Artina et al. 1999; Novotný and Chesters 1981). Therefore, we decided to take it
into account in this thesis.
The initial definitions of the FF phenomenon were not precise. These definitions
were vague, which led to interpretations that varied from person to person. This situation
was of concern because the design of treatment facilities could be a function of this
phenomenon. Consequently, a better understanding of the first flush phenomenon was
needed. Bertrand-Krajewski et al. (1998) studied the distribution of the pollutant load vs. the
volume in stormwater events in order to clarify the notions and to arrive at a new
definition of the FF phenomenon.
With the aim of comparing the pollutant mass flow rate curves from different storm events,
M(V) curves of cumulative pollutant mass vs. cumulative volume were used. The M(V) curve of a
storm event shows the variation of the pollutant mass, allowing the analysis of the
phenomenon. If the pollutant concentration remains constant during the storm event, the
pollutant mass is proportional to the volume and the M(V) curve coincides with the bisector
(Bertrand-Krajewski et al. 1998). Each M(V) curve was then fitted approximately by a power
function of the form M(V) = V^b. The b parameter quantifies the gap between the M(V) curve and the
bisector. Bertrand-Krajewski et al. (1998) observed that the parameter b varies significantly
from one event to another, and that the lower the parameter b, the more pronounced the FF,
since the main part of the total pollutant load is then transported by the first part of the total
volume. Therefore, Bertrand-Krajewski et al. (1998) proposed a new definition for the FF
phenomenon based on the behavior of the M(V) curves and the values of the b parameter.
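As an illustration only (not part of the original methodology description), the following Python sketch computes the dimensionless M(V) curve of an event from hypothetical intra-event flow and concentration series and estimates the exponent b of the power function M(V) = V^b by least squares in log-log space:

import numpy as np

def first_flush_b(flow, conc, dt):
    """Estimate the first flush exponent b from intra-event time series.

    flow: runoff flow rate per time step (e.g. L/s); conc: pollutant concentration
    per time step (e.g. mg/L); dt: time step duration (s). All inputs are
    hypothetical example variables, not thesis data."""
    flow = np.asarray(flow, dtype=float)
    conc = np.asarray(conc, dtype=float)
    volume = np.cumsum(flow * dt)                 # cumulative runoff volume
    mass = np.cumsum(flow * conc * dt)            # cumulative pollutant mass
    V = volume / volume[-1]                       # dimensionless cumulative volume
    M = mass / mass[-1]                           # dimensionless cumulative mass
    # Fit M = V**b, i.e. ln(M) = b * ln(V), by least squares through the origin,
    # using only interior points where the logarithms are defined and non-trivial.
    inside = (V > 0.0) & (V < 1.0) & (M > 0.0)
    b = np.sum(np.log(V[inside]) * np.log(M[inside])) / np.sum(np.log(V[inside]) ** 2)
    return b

In this reading, a b value close to 0 indicates a pronounced first flush, whereas a value close to 1 indicates a pollutant mass roughly proportional to the runoff volume.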
Six zones for the M(V) curves were defined, based on the fit of the M(V) curve to the power
function and adapting the typology proposed by Geiger (1987). Figure A-3 shows the six FF
zones and the b values for each zone. An M(V) curve can have a positive or a negative
gap with respect to the bisector. When the gap is positive, the M(V) curve is classified in Zone I, II or
III; when the gap is negative, it is classified in Zone IV, V or VI. The closer the parameter b is
to 0, the greater the first flush effect; hence the zones associated with a pronounced first flush are
zones I and II.

b value | Zone | Gap between M(V) curve and bisector
0 < b ≤ 0.185 | I | high positive gap
0.185 < b ≤ 0.862 | II | medium positive gap
0.862 < b ≤ 1.000 | III | insignificant positive gap
1.000 < b ≤ 1.159 | IV | insignificant negative gap
1.159 < b ≤ 5.395 | V | medium negative gap
5.395 < b < ∞ | VI | high negative gap
Figure A-3. First flush zones as a function of the parameter b and typology of the M(V) curve zones proposed by Bertrand-Krajewski et al. (1998) (adapted from: Bertrand-Krajewski et al. 1998)

2.4 HYDROLOGIC AND HYDRAULIC METHODS

The hydrologic and hydraulic methods will help us to make an initial assessment of the runoff
rates that entered the system, as well as to compute the inflow and outflow of the system. We
decided to implement the following basic notions.

The rational method is a frequently used tool for initial assessments of runoff rates (CIRIA 2000).
Equation E2-1 expresses the peak rate of storm runoff as the product of the catchment
area, the peak rate of rainfall and the runoff coefficient (WMO 2012).

Q = 2.78 ⋅ C ⋅ I ⋅ A (E2-1)

where Q is the design peak runoff (L/s), C is the non-dimensional runoff coefficient, which
depends on the catchment characteristics, I is the rainfall intensity for the design return
period (mm/h) and for a duration equal to the “time of concentration” of the catchment, A is
the total catchment area being drained (ha), and 2.78 is the factor that converts mm·ha/h into
L/s. This method is recommended for small, highly impervious drainage areas such as parking
lots and roadways draining into inlets and gutters (AMEC Earth and Environmental et al. 2001).

The rational method is characterized by the consideration of the entire drainage area as a single
unit, the estimation of the flow at the most downstream point only, and the assumption that
rainfall is uniformly distributed over the drainage area and is constant over time. The method
also assumes that the predicted peak discharge has the same probability of occurrence as the
rainfall intensity (I) used and that the runoff coefficient (C) is constant during the storm event
(AMEC Earth and Environmental et al. 2001).
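A minimal Python sketch of equation E2-1 is given below; the input values are hypothetical, and the factor 2.78 simply converts mm·ha/h into L/s, consistently with the units defined above:

def rational_peak_flow(c, i_mm_h, a_ha):
    """Rational method (E2-1): design peak runoff Q in L/s.

    c: dimensionless runoff coefficient; i_mm_h: rainfall intensity (mm/h) for the
    time of concentration; a_ha: drained catchment area (ha)."""
    return 2.78 * c * i_mm_h * a_ha

# Example with hypothetical values: C = 0.9 (highly impervious), I = 40 mm/h, A = 1.5 ha
q_peak = rational_peak_flow(0.9, 40.0, 1.5)   # about 150 L/s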
According to ASTM International (2003), V-notch weirs measure small discharges accurately:
small changes in the discharge produce large changes in depth. Hence, the uncertainty
associated with the depth measurement of this weir has less effect on the estimation of the
discharge compared to other weirs. The discharge equation for triangular weirs is given in
equation E2-2 (Finnemore and Franzini 2001; Shen 1981).

Q = Cd ⋅ (8/15) ⋅ tan(θ/2) ⋅ √(2g) ⋅ H^(5/2) (E2-2)

where Q (m3/s) is the measured flow rate, H (m) is the head over the weir, calculated from the
measured water levels, θ is the V-notch angle (degrees), Cd is the dimensionless discharge
coefficient and g is the acceleration of gravity (9.81 m/s2).
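The following Python sketch evaluates equation E2-2 for a measured head; the discharge coefficient of 0.58 and the 90° notch angle are illustrative assumptions, not values taken from this thesis:

import math

def v_notch_discharge(h_m, cd=0.58, theta_deg=90.0, g=9.81):
    """Triangular (V-notch) weir discharge (E2-2), Q in m3/s.

    h_m: head over the weir (m); cd: discharge coefficient; theta_deg: notch
    angle (degrees); g: acceleration of gravity (m/s2)."""
    theta = math.radians(theta_deg)
    return cd * (8.0 / 15.0) * math.tan(theta / 2.0) * math.sqrt(2.0 * g) * h_m ** 2.5

# Example: a 5 cm head over a 90 degree notch gives roughly 0.8 L/s
q = v_notch_discharge(0.05)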
Erickson et al. (2013) recommend that when a weir is used to determine a discharge, it is
important to ensure that all the flow passes over the weir, not around or underneath it. It is
also important to ensure critical flow over the crest so as to maintain a “free outfall” over the
weir. The weir requires inspection and maintenance at least once per month.

CHAPTER 3

STATISTICAL TOOLS

This chapter presents the statistical tools applied during the development of this thesis for the
analysis of the collected data. It begins with basic univariate analysis and then increases in
complexity towards multivariate statistics.

3.1 LINEAR REGRESSION


A linear regression is a statistical method that studies the relationship between two continuous
(quantitative) variables and attempts to model it. The model has an equation of the form
Y = a + b·X, where X is the explanatory variable and Y is the dependent variable; b is the slope
of the line and a is the intercept (The Pennsylvania State University 2017; Yale University 1997).
It is recommended to draw a scatterplot before trying to fit a linear model to observed data, as
this helps to observe the behaviour of the variables (any increasing or decreasing trends). If the
observed relationship does not show any association between the proposed explanatory and
dependent variables, then this model would not be useful. The least-squares method is the most
common method for fitting a regression line (Yale University 1997). This method was used
within this thesis during the development of calibration methods.
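As a minimal sketch (Python, scipy), where x and y stand for hypothetical paired observations such as a probe reading versus a laboratory value:

from scipy import stats

x = [1.0, 2.1, 2.9, 4.2, 5.1]     # hypothetical explanatory variable (e.g. probe reading)
y = [2.0, 4.3, 5.9, 8.1, 10.2]    # hypothetical dependent variable (e.g. laboratory value)

fit = stats.linregress(x, y)       # least-squares line Y = a + b*X
a, b = fit.intercept, fit.slope    # intercept and slope
r_squared = fit.rvalue ** 2        # coefficient of determination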

3.2 PEARSON AND SPEARMAN CORRELATION METHODS


The strength of a relation between two variables can be quantified using a correlation
coefficient. This coefficient measures the extent to which two variables tend to change
together, describing both the direction and the strength of the relationship (Minitab 2016; Yale
University 1997). The Pearson coefficient evaluates the linear relationship between two
variables; a linear relationship means that a change in one variable is associated with a
proportional change in the other. It also assumes that the variables follow a Gaussian
distribution. This coefficient always takes a value between -1 and 1, where -1 or 1 indicates a
perfect correlation (Minitab 2016; Yale University 1997).
On the other hand, when the variables do not follow a Gaussian distribution, the Spearman
coefficient (a non-parametric method) is employed. This method evaluates the monotonic
relation between two continuous or ordinal variables. A monotonic relationship means that the
variables tend to change together, but not necessarily at a constant rate (Minitab 2016). For
these two methods, we use the statistical significance (p-value) to decide whether a correlation
exists at all. The null hypothesis of these tests is that there is no (monotonic) association
between the two variables; if the p-value is greater than 0.05 the null hypothesis cannot be
rejected (Minitab 2016). These methods were implemented during data analysis and the
development of calibration methods.
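Both coefficients and their p-values can be obtained as in the sketch below (Python, scipy; the two series are hypothetical):

from scipy import stats

x = [12.0, 15.5, 20.1, 25.0, 31.2, 40.8]   # hypothetical series (e.g. turbidity, NTU)
y = [30.0, 41.2, 55.0, 61.3, 80.1, 98.7]   # hypothetical series (e.g. TSS, mg/L)

r, p_pearson = stats.pearsonr(x, y)        # linear association (assumes Gaussian variables)
rho, p_spearman = stats.spearmanr(x, y)    # monotonic association (rank-based)
# A p-value above 0.05 means the null hypothesis of no association cannot be rejected.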

3.3 SHAPIRO-WILK AND BARTLETT’S TESTS


The Shapiro-Wilk test is used to test whether a random sample comes from a normal distribution.
This test calculates a W statistic; small values of W are evidence of departure from normality
(NIST/SEMATECH 2012). The Shapiro-Wilk test is based on the correlation between the
data and the corresponding normal scores (Peat and Barton 2005). The null hypothesis is that a
sample x1, ..., xn came from a normally distributed population. If the p-value is lower than the
chosen significance level (e.g. 0.05), the null hypothesis is rejected and the sample is therefore not
considered to come from a normal distribution.
Bartlett’s test is employed to test whether k samples have homogeneity of variances, i.e. equal
variances across samples. Some statistical methods assume homogeneity of variances, and
Bartlett’s test can verify this assumption. The test is sensitive to the normality assumption, so if
the residuals do not appear normal this test should not be used. The null hypothesis is that all k
variances are equal, against the alternative that at least two are different (NIST/SEMATECH
2012; Snedecor and Cochran 1980; The Pennsylvania State University 2017). If the p-value is
greater than 0.05 the null hypothesis cannot be rejected, and the variances can be considered the
same for all k samples. These tests were executed within this thesis during the data analysis,
before the correlation methods.
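Both tests can be run as sketched below (Python, scipy; the samples are hypothetical):

from scipy import stats

sample_a = [4.1, 5.2, 4.8, 5.5, 4.9, 5.1, 4.7]   # hypothetical measurements
sample_b = [6.0, 6.4, 5.9, 6.8, 6.1, 6.5, 6.2]

w, p_normality = stats.shapiro(sample_a)                 # H0: the sample is normally distributed
stat, p_equal_var = stats.bartlett(sample_a, sample_b)   # H0: the variances are equal
# In both tests H0 is rejected only when the p-value falls below the significance level (e.g. 0.05).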

3.4 BOXPLOT AND WILCOXON SIGNED RANK TEST


The boxplot is a statistical graphic technique for observing the location and variation of
different groups of data. The graphic is built from the lower whisker, the bottom of the box
(the lower quartile, 25th percentile), the middle line (the median), the top of the box (the third
quartile, 75th percentile), the upper whisker and the data outliers. The lower whisker is
calculated as lw = q1 - 1.5·IQR, where q1 is the lower quartile and IQR = q3 - q1 is the
interquartile range, q3 being the third quartile. The upper whisker is calculated as
uw = q3 + 1.5·IQR. The outliers are the data that are lower than lw and/or higher than uw. Thus
the boxplot shows a graphical summary of the distribution of a sample
(NIST/SEMATECH 2012).
The Wilcoxon signed rank test is the non-parametric alternative to the t-test for paired
samples, that is to say, when it cannot be assumed that the population is normally distributed.
The Wilcoxon signed rank test evaluates whether the population means of two paired
samples are equal. This test is based on the differences between the paired observations, which
are then ranked. If the two population means are in fact equal, then the sums of
the ranks should also be nearly equal. If the difference between the sums of the ranks is too
great, the null hypothesis that the population means are equal has to be rejected (NIST 2015).
Therefore, if the p-value is greater than 0.05 the null hypothesis cannot be rejected.
These tests were used within this thesis during the data analysis (e.g. calibration methods, the
first flush phenomenon, among others).
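The whisker limits and the paired test can be computed as in the sketch below (Python; the paired samples are hypothetical, e.g. the same events measured by two methods):

import numpy as np
from scipy import stats

before = np.array([10.2, 12.5, 9.8, 14.1, 11.3, 13.0])   # hypothetical paired sample 1
after = np.array([9.9, 11.8, 9.5, 13.2, 11.0, 12.1])     # hypothetical paired sample 2

q1, q3 = np.percentile(before, [25, 75])
iqr = q3 - q1                        # interquartile range
lower_whisker = q1 - 1.5 * iqr       # lw = q1 - 1.5*IQR
upper_whisker = q3 + 1.5 * iqr       # uw = q3 + 1.5*IQR

stat, p = stats.wilcoxon(before, after)   # H0: no difference between the paired samples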

3.5 KRUSKAL-WALLIS TEST


The Kruskal-Wallis test is a non-parametric test for situations where the ANOVA normality
assumptions may not apply. ANOVA is a data analysis technique for examining the
significance of the factors in a multi-factor model. Kruskal and Wallis introduced this test in
1952. The test evaluates the equality of medians among groups for a given factor and is
usually applied to identify whether one variable has a significant influence over another (Walpole
et al. 1999). The null hypothesis of this test is that the medians of all groups are equal; hence, if
the p-value is lower than 0.05 the null hypothesis is rejected.
In order to apply this test the following assumptions should be met: 1- the dependent variable
should be measured at the ordinal or continuous level, 2- the independent variable should
consist of two or more categorical independent groups, 3- there should be no relationship
between the observations in each group or between the groups themselves and 4- the
distributions in each group should have the same shape (Lund Research Ltd 2013). The
Kruskal-Wallis test was used to identify which variables (e.g. storm parameters: Antecedent
Dry Weather Period, rainfall depth, among others) have a significant influence over others.
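A sketch of the test for three hypothetical groups (Python, scipy):

from scipy import stats

# Hypothetical groups, e.g. event pollutant loads grouped by antecedent dry weather period
short_adwp = [1.2, 1.5, 1.1, 1.8]
medium_adwp = [2.0, 2.4, 1.9, 2.2]
long_adwp = [3.1, 2.8, 3.4, 3.0]

h, p = stats.kruskal(short_adwp, medium_adwp, long_adwp)   # H0: all group medians are equal
# A p-value below 0.05 would indicate that at least one group differs from the others.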

3.6 KAPPA COEFFICIENT


In 1960, Cohen proposed an observer agreement statistic for categorical data (Cohen 1960;
Landis and Koch 1975a; b): the Kappa coefficient, calculated with equation E3-1. This statistic
can be applied to different kinds of studies and is relatively easy to calculate (Uebersax 1987).

k = (Po - Pe) / (1 - Pe) (E3-1)

Po is the relative observed agreement among the judges and Pe is the proportion of units for
which agreement is expected by chance. The maximum value of k is 1, which indicates total
agreement between the judges; k values lower than 0 are of no practical interest
(Cohen 1960). The Kappa coefficient was used in this thesis to compare the observed
monitoring data results with the simulated results, each one with two choices. In order to
describe the relative strength of agreement associated with kappa statistics, Landis and
Koch (1977) assigned the following labels to the ranges of Cohen's kappa coefficient (Table A-2):

Table A-2 Labels assigned to corresponding ranges of kappa (adapted from: Landis and Koch 1977)
Kappa Statistic Strength of Agreement
<0.00 Poor
0.00–0.20 Slight
0.21–0.40 Fair
0.41–0.60 Moderate
0.61–0.80 Substantial
0.81–1.00 Almost perfect
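Equation E3-1 can be computed directly, as in the Python sketch below; the two label sequences are hypothetical observed versus simulated binary outcomes (e.g. restricted/unrestricted use), not thesis data:

import numpy as np

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa (E3-1) for two raters giving categorical labels."""
    a = np.asarray(labels_a)
    b = np.asarray(labels_b)
    po = np.mean(a == b)                                              # observed agreement Po
    categories = np.union1d(a, b)
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in categories)   # chance agreement Pe
    return (po - pe) / (1.0 - pe)

observed = ["restricted", "unrestricted", "restricted", "restricted", "unrestricted"]
simulated = ["restricted", "unrestricted", "unrestricted", "restricted", "unrestricted"]
k = cohen_kappa(observed, simulated)   # interpreted with the labels of Table A-2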

3.7 LOGISTIC REGRESSION


Logistic regression is a statistical method for modelling the relationship between
predictor variables and a categorical (dichotomous, binary) response variable. The model
estimates the probability of falling into a given level of the categorical response for a given set of
predictors (The Pennsylvania State University 2017). Logistic regression is comparable to linear
regression, the difference being that the dependent variable is nominal and is not measured
directly; instead, the probability of obtaining a particular value of the nominal variable is
modelled (McDonald 2014).
The null hypothesis of this test is that the probability of a particular value of the nominal
variable is not associated with the value of the measurement variable; if the p-value is lower
than 0.05 the null hypothesis is rejected. We implemented this method to explore whether the
water uses can be predicted with it (McDonald 2014).
The major assumptions of this statistical method are: 1- the outcome must be discrete, i.e. the
dependent variable should be dichotomous in nature, 2- the data should not
contain outliers, and 3- there should be no high inter-correlations among the predictors (Statistics
Solutions n.d.).
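A minimal sketch of such a model in Python (scikit-learn); the predictors (turbidity and conductivity) and the binary water-use labels are hypothetical examples, not the variables actually used in this thesis:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [turbidity (NTU), conductivity (uS/cm)] for each event
X = np.array([[12.0, 150.0], [45.0, 310.0], [8.0, 120.0],
              [60.0, 400.0], [15.0, 180.0], [55.0, 350.0]])
y = np.array([1, 0, 1, 0, 1, 0])        # 1 = unrestricted use, 0 = restricted use (illustrative)

model = LogisticRegression().fit(X, y)
p_unrestricted = model.predict_proba([[20.0, 200.0]])[0, 1]   # probability for a new event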

3.8 PLS - PARTIAL LEAST SQUARES


Partial least squares (PLS) is a statistical technique that generalizes and fuses principal
component analysis (PCA) and multiple regression (Abdi 2003). PLS was used as
a calibration method for the spectrometer data. It is a useful technique when the number of
variables is comparable to or greater than the number of observations, or when other factors
lead to correlations between variables (VCCL 2005). The main
objective of PLS is to predict the variable Y from a matrix X of independent variables and to
describe their common structure: the matrix X is summarized by latent vectors that represent the
largest share of the data variance related to the variable Y. Thereby, the PLS model contains the
smallest necessary number of factors (latent vectors) (Höskuldsson 1988). Furthermore, the PLS
approach allows one to detect relationships between Y and the independent variables even if key
variables contribute little to the first few principal components. The regression
model is then obtained from the relation between the latent vectors and the variable Y (Torres and
Bertrand-Krajewski 2008).
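A sketch of a PLS calibration in Python (scikit-learn); here X plays the role of absorbance spectra and Y of laboratory concentrations, but all values are synthetic and purely illustrative:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 200))                               # 30 samples x 200 wavelengths (synthetic)
Y = (X[:, :3] @ np.array([0.5, 0.3, 0.2])).reshape(-1, 1)    # synthetic concentrations
Y = Y + rng.normal(scale=0.05, size=Y.shape)                 # add measurement noise

pls = PLSRegression(n_components=2)                          # small number of latent vectors
pls.fit(X, Y)
y_pred = pls.predict(X)                                      # concentrations predicted from the spectra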

3.9 KERNEL DENSITY ESTIMATION
Kernel density estimation is a statistical technique for estimating the probability
density function (PDF) of a continuous random variable. We used this technique to determine
the constructed wetland efficiency. It belongs to a class of estimators called non-parametric
density estimators, because it does not assume any underlying distribution for the variable.
The method has no fixed structure: all the data points contribute to the estimate. There are
various options for the kernel (e.g. uniform, triangular), the Gaussian kernel being the most
often used. The motivation for kernel estimators was the problems associated with histograms,
namely the selection of bin widths and end points (Cai 2013; The University of
Edinburgh n.d.).
Kernel estimators center a kernel function at each data point, and smoothness or
continuity can be added by using a suitable kernel. The kernel density estimate depends on
the selection of the most appropriate bandwidth (h). Figure A-5 shows the effect of different
values of h on the quality of a kernel estimate: small values of h lead to very spiky estimates (see
Figure A-5a), i.e. not much smoothing, while larger h values lead to over-smoothing (see Figure
A-5b). In order to choose an optimal h, the AMISE (Asymptotic Mean Integrated Squared Error)
method is used (see Figure A-5c). AMISE is calculated from the data, which means that the
selected h is an estimate of an asymptotic approximation that recovers all the important features
whilst maintaining smoothness (Cai 2013; The University of Edinburgh n.d.).
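A minimal sketch (Python, scipy) estimating a PDF from hypothetical removal efficiency values; note that scipy's gaussian_kde selects its bandwidth with Scott's rule by default, not with the AMISE criterion mentioned above:

import numpy as np
from scipy.stats import gaussian_kde

efficiencies = np.array([0.55, 0.60, 0.62, 0.70, 0.72, 0.75, 0.80, 0.82, 0.90])  # hypothetical
kde = gaussian_kde(efficiencies)            # Gaussian kernel density estimator
grid = np.linspace(0.4, 1.0, 200)
density = kde(grid)                         # estimated PDF evaluated on the grid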

3.10 SVM
Recently, Support Vector Machines (SVM) have become a standard methodology in the
computer science and engineering communities (Moguerza and Muñoz 2006). SVMs are
employed for classification, clustering and regression analysis. These models use learning
algorithms that analyze data and recognize patterns (Kandananond 2013; Vapnik et al. 1996).
SVMs are a non-linear and non-parametric classification technique. Compared with
Neural Networks, SVMs are more robust in their learning results and their prediction accuracy is
generally higher. We employed this method for the calibration of the spectrometer data. We also
used SVM algorithms to identify which of the input variables are able to
classify or predict the output variables.
SVM algorithms are based on the principles of statistical learning theory (Boser et al. 1992).
From a training data set in which each example is labelled as belonging to one of two categories,
an SVM training algorithm builds a model that assigns new examples to one category or the
other. Hence, the model is a non-probabilistic binary linear classifier (Kandananond 2013;
Vapnik et al. 1996).
SVM solves the adjustment of a function that describes the relationship between X (the object)
and the response Y using the data set S. This method, for classification and regression, allows a
compromise between parametric and non-parametric approaches: a linear classifier can be
adjusted in a high-dimensional feature space (Lopez-Kleine and Torres 2014). SVM
operation is based on finding the hyperplane that gives the largest minimum distance to the
training examples; the margin is twice this distance. Hence, the optimal hyperplane
maximizes the margin over the training data (opencv dev team 2014).
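A sketch of an SVM classifier in Python (scikit-learn); the two summary input variables and the binary output classes are hypothetical, chosen only to illustrate the training and prediction steps:

import numpy as np
from sklearn.svm import SVC

# Hypothetical training set: two summary absorbance variables per sample
X = np.array([[0.10, 0.30], [0.12, 0.35], [0.80, 0.90],
              [0.75, 0.95], [0.15, 0.25], [0.85, 0.88]])
y = np.array([0, 0, 1, 1, 0, 1])             # two illustrative output classes

clf = SVC(kernel="rbf").fit(X, y)            # non-linear SVM with a radial basis function kernel
predicted = clf.predict([[0.2, 0.4]])        # class assigned to a new sample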

CONCLUSIONS PART A

In spite of the research carried out on SUDS performance, there remains a pressing need to
analyze SUDS performance for SWH, especially in light of the lack of specific design,
monitoring, operational and maintenance knowledge. Specifically for CWs, the lack of
CW performance data and monitoring results is still an issue (Lucas et al. 2014). The following
questions remain open (Mitchell et al. 2008): (i) what are the minimum monitoring
requirements for a SUDS used for stormwater harvesting? (ii) What is the necessary equipment
for an optimal system management (operation and maintenance)? (iii) How can online
monitoring be used to infer operational rules for the system? To phrase it another way,
operational and maintenance protocols intended to manage end-use and storage time (storage
time is inversely correlated with water quality) are of the utmost significance in the field at this
time. Additionally, the dearth of experience in Colombia related to this topic, given the South
American country's tropical mountain climate and culture, means that the study of SUDS for
SWH is particularly pertinent.
Once the research problem was identified, topics of metrology and data analysis were reviewed as a basis for addressing it. Thanks to the growing development of UV-Vis absorbance spectrometry, quality probes now allow direct on-field water quality measurements. The use of such a probe requires no sampling, no sample preparation and no reagents. Through the spectra (fingerprints) of the water sample, the water composition can be characterized. Moreover, additional specific parameters (e.g. turbidity, nitrate concentration, and sum parameters such as the spectral absorption coefficient at 254 nm (SAC254)) can be obtained. Regarding the turbidity measurement, thanks to the great advances in this field, water turbidity can now be measured on-site with greater stability and accuracy. These probes allow a high measuring frequency, which is needed to compute the system performance and to capture phenomena such as the first flush.
Regarding the first flush phenomenon, this method gives the opportunity to identify the storm events in which the greatest mass of pollutants is contained in a small fraction of the runoff volume. This knowledge of the system allows us to propose operational actions, such as by-passing the first part of certain storm events in order to improve the system performance.
Tools such as the Rational method will help us to obtain an initial assessment of the runoff rates entering the system. These estimates will be compared with the flows measured with the sharp-crested weirs and continuous ultrasonic level sensors (explained in Part B Section 4.2) using the weir equation. Finally, statistical methods give us the opportunity to analyze the recorded monitoring data, propose models and make decisions concerning the system operation.
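As a simple illustration of the Rational method mentioned above, the sketch below (in R) estimates a peak runoff rate as Q = C·i·A for the parking-lot sub-catchment described in Part B Section 4.1; the runoff coefficient and the rainfall intensity are illustrative assumptions only.

    # Rational method peak runoff estimate, Q = C * i * A (illustrative values)
    C_runoff <- 0.85                 # runoff coefficient of a paved area (assumed)
    i_mm_h   <- 40                   # rainfall intensity in mm/h (assumed)
    A_m2     <- 3776                 # parking-lot area (Part B Section 4.1)
    Q_m3_s   <- C_runoff * (i_mm_h / 1000 / 3600) * A_m2   # intensity converted to m/s
    Q_m3_s                           # about 0.036 m3/s for these values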
The methods and tools presented will be applied to the case study and/or will serve as a basis for proposing new tools or methods.

REFERENCES PART A

Abdi, H. (2003). “Partial least squares (PLS) regression.” Encyclopedia of Social Sciences Research
Methods, M. Lewis-Beck, A. Bryman, and T. Futing, eds., Sage, Thousand Oaks, CA (USA),
792–795.
Abdulla, F. A., and Al-Shareef, A. W. (2009). “Roof rainwater harvesting systems for
household water supply in Jordan.” Desalination, 243(1–3), 195–207.
Adhikari, A. R., Acharya, K., Shanahan, S. A., and Zhou, X. (2011). “Removal of nutrients and
metals by constructed and naturally created wetlands in the Las Vegas Valley, Nevada.”
Environmental Monitoring and Assessment, 180(1–4), 97–113.
Aladenola, O. O., and Adeboye, O. B. (2009). “Assessing the Potential for Rainwater
Harvesting.” Water Resources Management, 24(10), 2129–2137.
Al-Rubaei, A. M., Engström, M., Viklander, M., and Blecken, G.-T. (2016). “Long-term
hydraulic and treatment performance of a 19-year old constructed stormwater wetland—
Finally maturated or in need of maintenance?” Ecological Engineering, 95, 73–82.
Álvarez, J., and Celedón, E. (2012). “Evaluación de las capacidades hidráulicas y de retención
de contaminantes de un modelo de trinchera de retención construida con una canastilla en
PVC (Aquacell) acoplada con capa filtrante en geotextil, arena y grava utilizada como
componente del drenaje urbano.” Trabajo de grado para la obtención del título de Magíster en
Ingeniería Civil, Pontificia Universidad Javeriana, Bogotá.
AMEC Earth and Environmental, C., Center for Watershed Protection, Debo and Associates,
Jordan Jones and Goulding, and Atlanta Regional Commission. (2001). “Georgia Stormwater
Management Manual – Volume 2:Technical Handbook.” Georgia Stormwater Management Manual,
Georgia, USA.
Artina, S., Bardasi, G., Borea, F., Franco, C., Maglionico, M., Paoletti, A., and Sanfilippo, U.
(1999). “Water quality modelling in ephemeral streams receiving urban overflows. The pilot
study in Bologna.” J. Ball and I. Joliffe, eds., Sydney, Australia, 1589–1596.
Ashley, R., Lundy, L., Ward, S., Shaffer, P., Walker, L., Morgan, C., Saul, A., Wong, T., and
Moore, S. (2013). “Water-sensitive urban design: opportunities for the UK.” Proceedings of the
ICE - Municipal Engineer, 166(2), 65–76.
Association, A. W. W. (2011). Operational Control of Coagulation and Filtration Processes. American
Water Works Association.
ASTM International. (2003). “D1889–00 Standard test method for turbidity of water.” ASTM
International, Annual Book of ASTM Standards, Water and Environmental Technology, West
Conshohocken, Pennsylvania, 6.
Ballén, J. A., Galarza, M. Á., and Ortiz, R. O. (2006). “Sistemas de aprovechamiento de agua
lluvia para vivienda urbana.” João Pessoa, Brazil.
Barraud, S., Gibert, J., Winiarski, T., and Bertrand Krajewski, J. L. (2002). “Implementation of
a monitoring system to measure impact of stormwater runoff infiltration.” Water Science and
Technology: A Journal of the International Association on Water Pollution Research, 45(3), 203–210.
Barten, J. M. (1987). “Stormwater Runoff Treatment in a Wetland Filter: Effects on the Water
Quality of Clear Lake.” Lake and Reservoir Management, 3(1), 297–305.
Bertrand-Krajewski, J.-L., Barraud, S., Gilbert, J., Malard, F., Winiarski, T., and Delolme, C.
(2008). “The OTHU case study: integrated monitoring of stormwater in Lyon, France
(Chapter 23).” Data Requirements for Integrated Urban Water Management: Urban Water Series -
UNESCO-IHP, T. Fletcher and A. Deletic, eds., Taylor & Francis, London (UK), 303–314.
Bertrand-Krajewski, J.-L., Chebbo, G., and Saget, A. (1998). “Distribution of pollutant mass vs
volume in stormwater discharges and the first flush phenomenon.” Water Research, 32(8), 2341–
2356.
Bin Omar, A. F., and Bin MatJafri, M. Z. (2009). “Turbidimeter Design and Analysis: A
Review on Optical Fiber Sensors for the Measurement of Water Turbidity.” Sensors, 9(10),
8311–8335.
Birch, G. F., Matthai, C., Fazeli, M. S., and Suh, J. Y. (2004). “Efficiency of a constructed
wetland in removing contaminants from stormwater.” Wetlands, 24(2), 459.
Boelee, E., Yohannes, M., Poda, J.-N., McCartney, M., Cecchi, P., Kibret, S., Hagos, F., and
Laamrani, H. (2012). “Options for water storage and rainwater harvesting to improve health
and resilience against climate change in Africa.” Regional Environmental Change, 13(3), 509–519.
Boser, B., Guyon, I. M., and Vapnik, V. N. (1992). “A Training Algorithm for Optimal Margin
Classifiers.” Proceedings of Fifth Annual Workshop on Computational Learning Theory, Pittsburgh,
ACM, USA., 144 – 152.
Brix, H. (1993). “Wastewater treatment in constructed wetlands: system design, removal
processes and treatment performance.” Constructed Wetlands for Water Quality Improvement, G. A.
Moshiri, ed., Lewis,CRC Press, Boca Raton, Fla., 9–22.
Brown, R. R., and Davies, P. (2007). “Understanding community receptivity to water re-use:
Ku-ring-gai Council case study.” Water Science and Technology: A Journal of the International
Association on Water Pollution Research, 55(4), 283–290.
Bulc, T., and Slak, A. S. (2003). “Performance of constructed wetland for highway runoff
treatment.” Water Science and Technology, 48(2), 315–322.
Burns, M. J., Fletcher, T. D., Duncan, H. P., Hatt, B. E., Ladson, A. R., and Walsh, C. J.
(2015). “The performance of rainwater tanks for stormwater retention and water supply at the
household scale: an empirical study.” Hydrological Processes, 29(1), 152–160.
Cai, E. (2013). “Exploratory Data Analysis: Kernel Density Estimation – Conceptual
Foundations.” The Chemical Statistician.
Caradot, N., Sonnenberg, H., Rouault, P., Gruber, G., Hofer, T., Torres, A., Pesci, M., and
Bertrand-Krajewski, J.-L. (2015). “Influence of local calibration on the quality of online wet
weather discharge monitoring: feedback from five international case studies.” Water Science and
Technology: A Journal of the International Association on Water Pollution Research, 71(1), 45–51.
Carleton, J. N., Grizzard, T. J., Godrej, A. N., and Post, H. E. (2001). “Factors affecting the
performance of stormwater treatment wetlands.” Water Research, 35(6), 1552–1562.
Carleton, J. N., Grizzard, T. J., Godrej, A. N., Post, H. E., Lampe, L., and Kenel, P. P. (2000).
“Performance of a Constructed Wetlands in Treating Urban Stormwater Runoff.” Water
Environment Research, 72(3), 295–304.
Cheng, S., Grosse, W., Karrenbrock, F., and Thoennessen, M. (2002). “Efficiency of
constructed wetlands in decontamination of water polluted by heavy metals.” Ecological
Engineering, 18(3), 317–325.
Choi, J. Y., Maniquiz, M. C., Geronimo, F. K., Lee, S. Y., Lee, B. S., and Kim, L. H. (2012).
“Development of a horizontal subsurface flow modular constructed wetland for urban runoff
treatment.” Water Science and Technology: A Journal of the International Association on Water Pollution
Research, 66(9), 1950–1957.
CIRIA. (2000). Sustainable Urban Drainage Systems: Design Manual for Scotland and Northern Ireland.
Construction Industry Research and Information Association.
Cohen, J. (1960). “A Coefficient of Agreement for Nominal Scales.” Educational and Psychological
Measurement, 20(37), 37–46.
Coombes, P. J., Argue, J. R., and Kuczera, G. (2000). “Figtree Place: a case study in water
sensitive urban development (WSUD).” Urban Water, 1(4), 335–343.
Coombes, P. J., Dusntan, H., Spinks, A., Evans, C., and Harrison, T. (2006). “Key Messages
from a Decade of Water Quality Research into Roof Collected Rainwater Supplies.” Perth,
Western Australia, 1–9.
Coombes, P. J., Kuczera, G., and Kalma, J. D. (2003). “Economic, water quantity and quality
impacts from the use of a rainwater tank in the inner city.” Australian Journal of Water Resources,
7(2), 101–110.
Daukas, R., Dowry, L., and Walker Jr., W. W. (1989). “Design of wet detention basins and
constructed wetlands for treatment of stormwater runoff from a regional shopping mall in
Massachusetts.” Constructed wetlands for wastewater treatment: Municipal, industrial, agricultural, D. A.
Hammer, ed., Lewis Publishers, CRC Press, Michigan, U.S.A, 686–694.
Davies, C. M., and Bavor, H. J. (2000). “The fate of stormwater-associated bacteria in
constructed wetland and water pollution control pond systems.” Journal of Applied Microbiology,
89(2), 349–360.
DeBusk, K. M., Wright, J. D., and Hunt, W. F. (2010). “Demonstration and Monitoring of
Rainwater Harvesting Technology in North Carolina.” Low Impact Development 2010: Redefining
Water in the City, S. Struck and K. Lichten, eds., American Society of Civil Engineers, San
Francisco, California, United States, 1–10.
Devia, C., Puentes, Á., Oviedo, N., Torres, A., and Angarita, H. (2012). “Cubiertas verdes y
dinámica hídrica en la ciudad.” San José, Costa Rica.
Dittmer, U., Meyer, D., and Langergraber, G. (2005). “Simulation of a subsurface vertical flow
constructed wetland for CSO treatment.” Water Science and Technology, 51(9), 225–232.
Ellis, J. B., Shutes, R. B. E., and Revitt, D. M. (2003). Constructed wetlands and links with sustainable
drainage systems. Environment Agency.
EPA. (1999). “Importance of turbidity.” Guidance Manual for Compliance with the Interim Enhanced
Surface Water Treatment Rule: Turbidity Provisions, 815-R-99-010.
Erickson, A. J., Gulliver, J. S., Kang, J.-H., Weiss, P. T., and Wilson, C. B. (2010).
“Maintenance for Stormwater Treatment Practices.” Journal of Contemporary Water Research &
Education, 146(1), 75–82.
Erickson, A. J., Weiss, P. T., and Gulliver, J. S. (2013). Optimizing Stormwater Treatment Practices:
A Handbook of Assessment and Maintenance. Springer, New York; London.
Eriksson, E., Baun, A., Scholes, L., Ledin, A., Ahlman, S., Revitt, M., Noutsopoulos, C., and
Mikkelsen, P. S. (2007). “Selected stormwater priority pollutants — a European perspective.”
Science of The Total Environment, 383(1–3), 41–51.
Farrell, A. C., and Scheckenberger, R. B. (2003). “An assessment of long-term monitoring data
for constructed wetlands for urban highway runoff control.” Water Quality Research Journal of
Canada, 38(2), 283–315.
Farreny, R., Gabarrell, X., and Rieradevall, J. (2011). “Cost-efficiency of rainwater harvesting
strategies in dense Mediterranean neighbourhoods.” Resources, Conservation and Recycling, 55(7),
686–694.
Fewkes, A. (1999). “The use of rainwater for WC flushing: the field testing of a collection
system.” Building and Environment, 34(6), 765–772.
Finnemore, E., and Franzini, J. (2001). Fluid Mechanics With Engineering Applications. McGraw-
Hill Education, Boston.
Fletcher, T. D., Deletic, A., Mitchell, V. G., and Hatt, B. E. (2008). “Reuse of urban runoff in
Australia: a review of recent advances and remaining challenges.” Journal of Environmental
Quality, 37(5 Suppl), S116-127.
Frechen, F.-B., Schier, W., and Felmeden, J. (2006). “The Plant-Covered Retention Soil Filter
(RSF): The Mechanical and Biological Combined Sewer Overflow (CSO) Treatment Plant.”
Engineering in Life Sciences, 6(1), 74–79.
Galarza, S., and Garzón, F. (2005). “Estudio de viabilidad técnica de los sistemas urbanos de
drenaje sostenible para las condiciones tropicales de Colombia.” Epiciclos, 4(1), 59–70.
Galarza-Molina, S., Torres, A., Rengifo, P., Puentes, A., Cárcamo-Hernández, E., Méndez-
Fajardo, S., and Devia, C. (2016). “The benefits of an eco-productive green roof in Bogota,
Colombia.” Indoor and Built Environment, 1420326X16665896.
Gamerith, V., Steger, B., Hochedlinger, M., and Gruber, G. (2011). “Assessment of UV/VIS-
spectrometry performance in combined sewer monitoring under wet weather conditions.”
Proceedings of the 12th International Conference on Urban Drainage, Porto Alegre, Brazil.
Geiger, W. F. (1987). “Flushing effects in combined sewer systems.” Lausanne, Switzerland,
40–46.
Ghimire, S. R., Watkins, D. W., and Li, K. (2012). “Life cycle cost assessment of a rain water
harvesting system for toilet flushing.” Water Science & Technology: Water Supply, 12(3), 309.

Ghisi, E., Tavares, D. da F., and Rocha, V. L. (2009). “Rainwater harvesting in petrol stations
in Brasília: Potential for potable water savings and investment feasibility analysis.” Resources,
Conservation and Recycling, 54(2), 79–85.
Gill, L. W., Ring, P., Higgins, N. M. P., and Johnston, P. M. (2014). “Accumulation of heavy
metals in a constructed wetland treating road runoff.” Ecological Engineering, 70, 133–139.
Gómez-González, G. A., Rodriguez-Benavides, A. F., and Torres, A. (2010). “Durabilidad de
las capacidades filtrantes de la capa de rodadura de un Pavimento Poroso Rígido.” Punta del
Este, Argentina, 1–11.
Greenway, M., Jenkins, G., and Polson, C. (2006). “Macrophyte zonation in stormwater
wetlands: getting it right!” Lisbon, Portugal.
Gruber, G., Bertrand-Krajewski, J.-L., De Beneditis, J., Hochedlinger, M., and Lettl, W. (2006).
“Practical aspects, experiences and strategies by using UV/VIS sensors for long-term sewer
monitoring.” Water Practice & Technology, 1(1).
Gruber, G., Winkler, S., and Pressl, A. (2005). “Continuous monitoring in sewer networks an
approach for quantification of pollution loads from CSOs into surface water bodies.” Water
Science and Technology: A Journal of the International Association on Water Pollution Research, 52(12),
215–223.
Grüning, H., and Orth, M. (2002). “Investigations of the dynamic behaviour of the
composition of combined sewage using on-line analyzers.” Water Science and Technology: A
Journal of the International Association on Water Pollution Research, 45(4–5), 77–83.
Ham, J., Yoon, C. G., Kim, H.-J., and Kim, H.-C. (2010). “Modeling the effects of constructed
wetland on nonpoint source pollution control and reservoir water quality improvement.”
Journal of Environmental Sciences (China), 22(6), 834–839.
Hatt, B. E., Deletic, A., and Fletcher, T. D. (2006). “Integrated treatment and recycling of
stormwater: a review of Australian practice.” Journal of Environmental Management, 79(1), 102–
113.
Headley, T. R., Huett, D. O., and Davison, L. (2001). “The removal of nutrients from plant
nursery irrigation runoff in subsurface horizontal-flow wetlands.” Water Science and Technology: A
Journal of the International Association on Water Pollution Research, 44(11–12), 77–84.
Headley, T. R., and Tanner, C. C. (2012). “Constructed Wetlands With Floating Emergent
Macrophytes: An Innovative Stormwater Treatment Technology.” Critical Reviews in
Environmental Science and Technology, 42(21), 2261–2310.
Hey, D. L., Kenimer, A. L., and Barrett, K. R. (1994). “Water quality improvement by four
experimental wetlands.” Ecological Engineering, 3(4), 381–397.
Hochedlinger, M. (2005). “Assessment of Combined Sewer Overflow Emissions.” PhD
Thesis, Faculty of Civil Engineering, University of Technology Graz, Austria.
Hochedlinger, M., Kainz, H., and Rauch, W. (2006). “Assessment of CSO loads--based on
UVIVIS-spectroscopy by means of different regression methods.” Water Science and Technology:
A Journal of the International Association on Water Pollution Research, 54(6–7), 239–246.

Holland, J. F., Martin, J. F., Granata, T., Bouchard, V., Quigley, M., and Brown, L. (2004).
“Effects of wetland depth and flow rate on residence time distribution characteristics.”
Ecological Engineering, 23(3), 189–203.
Höskuldsson, A. (1988). “PLS regression methods.” Journal of Chemometrics, 2(3), 211–228.
Imteaz, M. A., Shanableh, A., Rahman, A., and Ahsan, A. (2011). “Optimisation of rainwater
tank design from large roofs: A case study in Melbourne, Australia.” Resources, Conservation and
Recycling, 55(11), 1022–1029.
Johansson, M. (1997). “Avskiljning av dagvattenföroreningar-driftuppfölning av Bäckaslövs
avsättningsdamm och våtmark (Treatment of stormwater pollution of the Bäckaslöv
sedimentation basin and wetland).” MSc Thesis, Department of Sanitary Engineering,
Chalmers University of Technology, Gothenburg, Sweden (In Swedish).
Jones, M. P., and Hunt, W. F. (2010). “Performance of rainwater harvesting systems in the
southeastern United States.” Resources, Conservation and Recycling, 54(10), 623–629.
Kadlec, R. H., and Wallace, S. (2008). Treatment Wetlands, Second Edition. CRC Press.
Kandananond, K. (2013). “Applying 2k Factorial Design to Assess the Performance of ANN
and SVM Methods for Forecasting Stationary and Non-stationary Time Series.” Procedia
Computer Science, 17th International Conference in Knowledge Based and Intelligent
Information and Engineering Systems - KES2013, 22, 60–69.
Kasper, T. M., and Jenkins, G. A. (2007). “Measuring the background concentration in a
constructed stormwater treatment wetland.” Urban Water Journal, 4(2), 79–91.
Kazemi, F., and Hill, K. (2015). “Effect of permeable pavement basecourse aggregates on
stormwater quality for irrigation reuse.” Ecological Engineering, 77, 189–195.
Ko, C.-H., Chang, F.-C., Lee, T.-M., Chen, P.-Y., Chen, H.-H., Hsieh, H.-L., and Guan, C.-Y.
(2010). “Impact of flood damage on pollutant removal efficiencies of a subtropical urban
constructed wetland.” The Science of the Total Environment, 408(20), 4328–4333.
Lacour, C., Joannis, C., and Chebbo, G. (2009). “Assessment of annual pollutant loads in
combined sewers from continuous turbidity measurements: sensitivity to calibration data.”
Water Research, 43(8), 2179–2190.
Landis, J. R., and Koch, G. G. (1975a). “A review of statistical methods in the analysis of data
arising from observer reliability studies (Part I)*.” Statistica Neerlandica, 29(3), 101–123.
Landis, J. R., and Koch, G. G. (1975b). “A review of statistical methods in the analysis of data
arising from observer reliability studies (Part II)*.” Statistica Neerlandica, 29(4), 151–161.
Landis, J. R., and Koch, G. G. (1977). “The Measurement of Observer Agreement for
Categorical Data.” Biometrics, 33(1), 159–174.
Lara Borrero, J. A., Torres, A., Campos Pinilla, M. C., Duarte Castro, L., Echeverri Robayo, J.
I., and Villegas González, P. A. (2007). “Aprovechamiento del agua lluvia para riego y para el
lavado de zonas duras y fachadas en el campus de la Pontificia Universidad Javeriana
(Bogotá).” Ingenieria y Universidad, 11(2).
Lariyah, M. S., Mohd Nor, M. D., Mohamed Roseli, Z. A., Zulkefli, M., and Amirah Hanim,
M. (2011). “Application of Water Sensitive Urban Design at Local Scale in Kuala Lumpur.”
Porto Alegre, Brazil, 1–14.
Lee, B.-H., and Scholz, M. (2007). “What is the role of Phragmites australis in experimental
constructed wetland filters treating urban runoff?” Ecological Engineering, 29(1), 87–95.
Leibowitz, S. G., Loehle, C., Li, B.-L., and Preston, E. M. (2000). “Modeling landscape
functions and effects: a network approach.” Ecological Modelling, 132(1–2), 77–94.
Livingston, E. H. (1989). “Use of wetlands for urban stormwater management.” Constructed
wetlands for wastewater treatment: Municipal, industrial, agricultural, D. A. Hammer, ed., Lewis
Publishers, CRC Press, Michigan, U.S.A, 253–262.
Lopez-Kleine, L., and Torres, A. (2014). “UV-vis in situ spectrometry data mining through
linear and non linear analysis methods.” DYNA, 81(185), 182–188.
Lucas, R., Earl, E. R., Babatunde, A. O., and Bockelmann-Evans, B. N. (2014). “Constructed
wetlands for stormwater management in the UK: a concise review.” Civil Engineering and
Environmental Systems, 0(0), 1–18.
Lund Research Ltd. (2013). “Kruskal-Wallis H Test in SPSS Statistics | Procedure, output and
interpretation of the output using a relevant example.” <https://statistics.laerd.com/spss-
tutorials/kruskal-wallis-h-test-using-spss-statistics.php> (Jan. 18, 2017).
Lung, W.-S., and Light, R. N. (1996). “Modelling copper removal in wetland ecosystems.”
Ecological Modelling, 93(1–3), 89–100.
Malaviya, P., and Singh, A. (2012). “Constructed Wetlands for Management of Urban
Stormwater Runoff.” Critical Reviews in Environmental Science and Technology, 42(20), 2153–2214.
Mangangka, I. R., Egodawatta, P., Parker, N., Gardner, T., and Goonetilleke, A. (2013).
“Performance characterisation of a constructed Wetland.” Water Science and Technology, 68(10),
2195–2201.
Maristancy, A. E., and Bartel, R. (1989). “Wetlands and stormwater management: A case
study of Lake Munson. Part I: Long-term treatment efficiencies.” Proceedings of the Symposium on
Wetlands: Concerns and Successes, D. W. Fisk, ed., American Water Resources Association,
Bethesda, Maryland, U.S.A, Tampa, Florida, 215–229.
Mays, P. A., and Edwards, G. S. (2001). “Comparison of heavy metal accumulation in a natural
wetland and constructed wetlands receiving acid mine drainage.” Ecological Engineering, 16(4),
487–500.
McDonald, J. (2014). “Simple logistic regression - Handbook of Biological Statistics.”
<http://www.biostathandbook.com/simplelogistic.html> (May 29, 2017).
Meiorin, E. C. (1989). “Urban Runoff Treatment in a Fresh/Brackish Water Marsh in
Fremont, California.” Constructed Wetlands for Wastewater Treatment: Municipal, Industrial and
Agricultural, D. A. Hammer, ed., Lewis, CRC Press, Chelsea, MI, 677–685.
Métadier, M., and Bertrand-Krajewski, J.-L. (2012). “The use of long-term on-line turbidity
measurements for the calculation of urban stormwater pollutant concentrations, loads,
pollutographs and intra-event fluxes.” Water Research, Special Issue on Stormwater in urban
areas, 46(20), 6836–6856.
Minitab. (2016). “A comparison of the Pearson and Spearman correlation methods.”
mtbconcept, <http://support.minitab.com/en-us/minitab-express/1/help-and-how-
to/modeling-statistics/regression/supporting-topics/basics/a-comparison-of-the-pearson-
and-spearman-correlation-methods/> (Jan. 16, 2017).
Mitchell, V. G., Hatt, B. E., Deletic, A., Fletcher, T. D., McCarthy, D., and Magyar, M. (2006).
Integrated Stormwater Treatment and Harvesting: Technical Guidance Report. Institute for Sustainable
Water Resources, Monash University, Melbourne, Australia.
Mitchell, V. G., McCarthy, D. T., Deletic, A., and Fletcher, T. D. (2008). “Urban stormwater
harvesting – sensitivity of a storage behaviour model.” Environmental Modelling & Software, 23(6),
782–793.
Moguerza, J. M., and Muñoz, A. (2006). “Support vector machines with applications.” Statistical
Science, 21(3), 322–336.
Mohammadpour, R., Shaharuddin, S., Chang, C. K., Zakaria, N. A., and Ab Ghani, A. (2014).
“Spatial pattern analysis for water quality in free-surface constructed wetland.” Water Science and
Technology: A Journal of the International Association on Water Pollution Research, 70(7), 1161–1167.
Moore, T. L. C., and Hunt, W. F. (2012). “Ecosystem service provision by stormwater
wetlands and ponds – A means for evaluation?” Water Research, Special Issue on Stormwater in
urban areas, 46(20), 6811–6823.
Mungasavalli, D. P., and Viraraghavan, T. (2006). “Constructed wetlands for stormwater
management: A review.” Fresenius Environmental Bulletin, 15(11), 1363–1372.
Mungur, A. S., Shutes, R. B. E., Revitt, D. M., House, M. A., and Fallon, C. (1999). “A
constructed wetland for the treatment of urban runoff. In: Wong, M.H., Wong, J.W.C. and
Baker, A.J. M.(ed.).” Remediation and Management of degraded lands, M. H. Wong, J. W. C. Wong,
and A. J. M. Baker, eds., Lewis Publishers, Boca Raton, FL, U.S.A, 329–341.
NIST. (2015). “Signed Rank Test.”
<http://www.itl.nist.gov/div898//software/dataplot/refman1/auxillar/signrank.htm> (Jan.
16, 2017).
NIST/SEMATECH. (2012). e-Handbook of Statistical Methods.
Nnadi, E. O., Newman, A. P., Coupe, S. J., and Mbanaso, F. U. (2015). “Stormwater
harvesting for irrigation purposes: An investigation of chemical quality of water recycled in
pervious pavement system.” Journal of Environmental Management, 147, 246–256.
Novotný, V., and Chesters, G. (1981). Handbook of nonpoint pollution: sources and management. Van
Nostrand Reinhold, New York.
Ojeda, C. B., and Rojas, F. S. (2009). “Process Analytical Chemistry: Applications of
Ultraviolet/Visible Spectrometry in Environmental Analysis: An Overview.” Applied
Spectroscopy Reviews, 44(3), 245–265.
opencv dev team. (2014). “Introduction to Support Vector Machines — OpenCV 2.4.13.2
documentation.”
<http://docs.opencv.org/2.4/doc/tutorials/ml/introduction_to_svm/introduction_to_svm.h
tml> (May 30, 2017).
Palacio Castañeda, N. (2010). “Propuesta de un sistema de aprovechamiento de agua lluvia
como alternativa para el ahorro de agua potable, en la institución educativa María Auxiliadora
de Caldas, Antioquia.” Revista Gestión y Ambiente, 13(2), 25–40.
Peat, J., and Barton, B. (2005). “Medical Statistics: A Guide to Data Analysis and Critical
Appraisal.” Medical Statistics, Blackwell Publishing Inc., 317–324.
Perkampus, H.-H. (1986). UV-VIS-Spektroskopie und ihre Anwendungen | Heinz-Helmut Perkampus
| Springer. Springer publishing house Berlin Heidelberg, Berlin, Germany.
Raisin, G. W., Mitchell, D. S., and Croome, R. L. (1997). “The effectiveness of a small
constructed wetland in ameliorating diffuse nutrient loadings from an Australian rural
catchment.” Ecological Engineering, 9(1), 19–35.
Ramírez, J. (2009). “Construcción verde en concreto.” Noticreto, 93, 20–27.
Reinelt, L. E., and Horner, R. R. (1995). “Pollutant removal from stormwater runoff by
palustrine wetlands based on comprehensive budgets.” Ecological Engineering, 4(2), 77–97.
Revitt, D. M., Shutes, R. B. E., Jones, R. H., Forshaw, M., and Winter, B. (2004). “The
performances of vegetative treatment systems for highway runoff during dry and wet
conditions.” Science of The Total Environment, Highway and Urban Pollution, 334–335, 261–270.
Revitt, D. M., Worral, P., and Brewer, D. (2001). “The integration of constructed wetlands into
a treatment system for airport runoff.” Water Science and Technology: A Journal of the International
Association on Water Pollution Research, 44(11–12), 469–476.
van Roon, M. (2007a). “Water localisation and reclamation: Steps towards low impact urban
design and development.” Journal of Environmental Management, 83(4), 437–447.
van Roon, M. (2007b). “Water localisation and reclamation: Steps towards low impact urban
design and development.” Journal of Environmental Management, 83(4), 437–447.
Rushton, B., Miller, C., and Hull, C. (1995). “Residence time as a pollutant removal mechanism
in stormwater detention ponds.” Proceedings of the 4th Biennial Stormwater Research Conference,
Brooksville, FL. Southwest Florida Water Management District, Clearwater, FL, 210–221.
Sadar, M. J. (1996). “Understanding Turbidity Science - Booklet No. 11.” Hach Company
Technical Information Series.
Salem, Z. B., Laffray, X., Ashoour, A., Ayadi, H., and Aleya, L. (2014). “Metal accumulation
and distribution in the organs of Reeds and Cattails in a constructed treatment wetland
(Etueffont, France).” Ecological Engineering, 64, 1–17.
Sanchez, L. S., and Caicedo, E. (2003). “Uso del Agua Lluvia en la Bocana- Buenaventura.”
Cinara, Cartagena, Colombia, 1–9.
s::can Messtechnik GmbH. (2007). “Manual s::can spectrometer probe Version 1.0 January
2007 Release.” Vienna, Austria.
Semadeni-Davies, A. (2006). “Winter performance of an urban stormwater pond in southern
Sweden.” Hydrological Processes, 20(1), 165–182.
Shen, J. (1981). Discharge characteristics of triangular-notch thin-plate weirs: studies of flow of water over
weirs and dams. Water Supply Paper, USGS Numbered Series, U.S. Geological Survey; for sale
by Supt. of Docs., U.S. G.P.O.
Shuster, W. D., Lye, D., De La Cruz, A., Rhea, L. K., O’Connell, K., and Kelty, A. (2013).
“Assessment of Residential Rain Barrel Water Quality and Use in Cincinnati, Ohio1.” JAWRA
Journal of the American Water Resources Association, 49(4), 753–765.
Silverman, G. S. (1989). “Development of an urban runoff treatment wetlands in Fremont,
California.” Constructed wetlands for wastewater treatment: Municipal, industrial, agricultural, D. A.
Hammer, ed., Lewis Publishers, CRC Press, Michigan, U.S.A, 669–676.
Sim, C. H., Yusoff, M. K., Shutes, B., Ho, S. C., and Mansor, M. (2008). “Nutrient removal in
a pilot and full scale constructed wetland, Putrajaya city, Malaysia.” Journal of Environmental
Management, 88(2), 307–317.
Snedecor, G. W., and Cochran, W. G. (1980). Statistical Methods. Seventh Edition. isbn
0813815606. Iowa State.
Somes, and Wong, T. H. F. (1994). “Design criteria for a trial wetland to improve the water
quality of runoff from a small rural catchment.” International Hydrology and Water Resources
Symposium of the Institution of Engineers, Adelaide, Australia, 403–409.
Statistics Solutions. (n.d.). “What is Logistic Regression?” Statistics Solutions.
Terzakis, S., Fountoulakis, M. S., Georgaki, I., Albantakis, D., Sabathianakis, I., Karathanasis,
A. D., Kalogerakis, N., and Manios, T. (2008). “Constructed wetlands treating highway runoff
in the central Mediterranean region.” Chemosphere, 72(2), 141–149.
The Pennsylvania State University. (2017). “Lesson 1: Simple Linear Regression | STAT 501.”
<https://onlinecourses.science.psu.edu/stat501/node/250> (Jan. 16, 2017).
The University of Edinburgh. (n.d.). “Kernel Density Estimators.”
<http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/AV0405/MISHRA/kde.h
tml> (Jan. 17, 2017).
Thomas, A. O., Morrison, R. J., Gangaiya, P., Miskiewicz, A. G., Chambers, R. L., and Powell,
M. (2016). “Constructed Wetlands as Urban Water Quality Control Ponds - Studies on
Reliability and Effectiveness.” Wetlands Australia Journal, 28(1).
Thomas, O. (2007). “Preface.” Techniques and Instrumentation in Analytical Chemistry, UV-Visible
Spectrophotometry of Water and Wastewater, O. T. and C. Burgess, ed., Elsevier, v.
Tilley, D. R., and Brown, M. T. (1998). “Wetland networks for stormwater management in
subtropical urban watersheds1.” Ecological Engineering, 10(2), 131–158.
Torres, A., and Bertrand-Krajewski, J.-L. (2008). “Partial Least Squares local calibration of a
UV-visible spectrometer used for in situ measurements of COD and TSS concentrations in
urban drainage systems.” Water Science and Technology: A Journal of the International Association on
Water Pollution Research, 57(4), 581–588.
Torres, A., Estupiñán Perdomo, J.L., and Zapata García, H.O. (2011a). “Proposal and
assessment of rainwater harvesting scenarios on the Javeriana University campus, Bogota.”
Porto Alegre, Brazil, 1–8.
Torres, A., Mendez-Fajardo, S., Gutiérrez-Torres, A., and Sandoval, S. (2013). “Quality of
Rainwater Runoff on Roofs and Its Relation to Uses and Rain Characteristics in the Villa
Alexandra and Acacias Neighborhoods of Kennedy, Bogota, Colombia.” Journal of
Environmental Engineering, 139(10), 1273–1278.
Torres, A., Méndez-Fajardo, S., López-Kleine, L., Marín, V., González, J. A., Suárez, J. C.,
Pinzón, J. D., and Ruiz, A. (2011b). “Preliminary assessment of roof runoff rain water quality
for potential harvesting in Bogota's peri-urban areas.” Revista U.D.C.A Actualidad &
Divulgación Científica, 14(1), 127–135.
Torres, A., Ortega Suescun, D. H., and Herrera Daza, E. (2011c). “Propiedades filtrantes de
los pavimentos porosos rígidos.” Gestión integrada del recurso hídrico frente al cambio climático, L. D.
Sánchez, A. Gálvis, I. Restrepo, and M. R. Peña, eds., Programa Editorial Universidad Del
Valle, Cali, 39–48.
Torres, A., Salamanca-López, C. A., Prieto-Parra, S. F., and Galarza-Molina, S. (2016).
“OFFIS: a method for the assessment of first flush occurrence probability in storm drain
inlets.” Desalination and Water Treatment, 57(52), 24913–24924.
Torres, A., Santa, M. A., and Quintero, J. A. (2012). “Desempeño hidráulico de un modelo de
trinchera de retención utilizada como componente del drenaje urbano.” Revista Acodal, 229(1),
19–27.
Uebersax, J. S. (1987). “Diversity of decision-making models and the measurement of
interrater agreement.” Psychological bulletin, 101(1), 140–146.
Vapnik, V., Golowich, S. E., and Smola, A. (1996). “Support Vector Method for Function
Approximation, Regression Estimation, and Signal Processing.” Advances in Neural Information
Processing Systems 9, MIT Press, 281–287.
Växjö Municipality. (1998). “Bäckaslövs våtmark—Kontroll och uppföljning (Bäckaslöv
CSW—Inspection and monitoring). Tekniska förvaltningen 1998-10-30 (In Swedish).”
VCCL. (2005). “Partial Least Squares Regression (PLSR).”
<http://146.107.217.178/lab/pls/> (Jan. 16, 2017).
Veldkamp, R., Henckens, G., Langeveld, J., and Clemens, F. (2002). “Field data on time and
space scales of transport processes in sewer systems.” Portland, Oregon, United States, 15.
Vymazal, J. (2005). “Horizontal sub-surface flow and hybrid constructed wetlands systems for
wastewater treatment.” Ecological Engineering, Constructed wetlands for wastewater treatment,
25(5), 478–490.
Walker, D. J., and Hurl, S. (2002). “The reduction of heavy metals in a stormwater wetland.”
Ecological Engineering, 18(4), 407–414.
Walpole, R. E., Myers, R. H., and Myers, S. L. (1999). Probabilidad y estadística para ingenieros.
Pearson Educación.
Walsh, C. J., Fletcher, T. D., and Ladson, A. R. (2005). “Stream restoration in urban
catchments through redesigning stormwater systems: looking to the catchment to save the
stream.” Journal of the North American Benthological Society, 24(3), 690–705.
Welker, A. (2006). “Vertical flow constructed wetlands for enhanced CSO treatment- an
alternative for elimination of organic pollutants?” Lisbon, Portugal.
Werner, B., and Collins, R. (2012). Towards efficient use of water resources in Europe — European
Environment Agency (EEA). ISSN 1725-9177, Publication, Denmark.
WMO. (2012). International Glossary of Hydrology. Chair, Publications Board, Geneva,
Switzerland.
Wong, T. H. F. (2007). “Water Sensitive Urban Design - the Journey Thus Far.” Australian
Journal of Water Resources, 10(3).
Yale University. (1997). “Online courses - Linear Regression.”
<http://www.stat.yale.edu/Courses/1997-98/101/linreg.htm> (Jan. 16, 2017).
Zhang, M., Chen, H., Wang, J., and Pan, G. (2010). “Rainwater utilization and storm pollution
control based on urban runoff characterization.” Journal of Environmental Sciences (China), 22(1),
40–46.
Zhu, K., Zhang, L., Hart, W., Liu, M., and Chen, H. (2004). “Quality issues in harvested
rainwater in arid and semi-arid Loess Plateau of northern China.” Journal of Arid Environments,
57(4), 487–505.

PART B
MATERIALS AND METHODS
Part B establishes the case study and the methods proposed by other authors, accepted by the scientific community, that were implemented in this thesis.
Chapter 4 gives the general description of our case study with the sampling points, the implemented monitoring system, the water quality indicators and the laboratory equipment and methods. This Chapter ends with the inventory of data obtained during the monitoring period. Then, Chapter 5 explains the water quality probe calibration methods that were developed by other authors. Additionally, this chapter includes a method for the determination of the occurrence probability of the first flush phenomenon introduced in Part A Section 2.3.
Chapter 6 shows all the water use and reuse guidelines that were considered during the development of this thesis and specifies the water uses and the quality parameter thresholds suggested by each guideline. Finally, Chapter 7 shows the computer-based tools that supported the data analysis, the development of the methods explained in this Part and the methods developed in Part C.
CHAPTER 4

CONSTRUCTED-WETLAND/RESERVOIR-TANK (CWRT) SYSTEM

This Chapter provides the general description of our case study, showing the sampling points and the implemented monitoring system. Furthermore, we include a description of the water quality indicators measured and of the laboratory equipment and methods. We end this Chapter by presenting the inventory of data obtained during the monitoring period.

4.1 GENERAL DESCRIPTION


The constructed-wetland/reservoir-tank (CWRT) system of PUJB (Pontificia Universidad
Javeriana Bogotá) (Figure B-1a) was built between 2012 and 2013. This system was the result
of a research process for stormwater harvesting (SWH) that started on the campus in 2007, led by the research group Ciencia e Ingeniería del Agua y el Ambiente (from the same university), within the framework of the PUJB Environmental Management Plan and with the collaboration of the Physical Resources Office (PRO) of the University. In this process, an inventory of the PUJB water uses was developed, which showed that the main PUJB water use is for non-potable purposes (floor cleaning and landscape irrigation) (Galarza-Molina et al. 2015b).
The CWRT system has two inlets: one receives runoff from the parking lot (3776 m2) (Figure B-1b), and the second receives the runoff from the soccer field and green areas (14816 m2). During the design phase, we proposed a method for the reservoir-tank design that takes into account the probability of supplying the water demand, as well as the most probable required time step and their respective variabilities (for more information see Galarza-Molina and Torres 2017). The constructed wetland (CW) was specifically designed to enhance the quality of the runoff from the parking lot, because the water quality campaigns showed that heavy metals were coming from this area. It is a horizontal subsurface flow (HSSF) wetland (Figure B-1c) that was planted with locally available Cyperus papyrus, and the underlying gravel bed was built with different gravel sizes to minimize possible clogging. In order to avoid potential clogging of the CW, it is divided into three zones and the gravel is organized by decreasing size: the first zone has a gravel size of 2.54 cm (1 in); the second zone has a gravel size of 1.9 cm (¾ in); and the third zone has 1.3 cm (½ in). The CWRT system has two settling tanks (Figure B-1c): one before the CW and the other receiving the stormwater runoff from the soccer field. For more information see Galarza-Molina et al. (2015b).

4.2 MONITORING SYSTEM


To gauge the system's performance, one sampling point was located at the inlet of the CW and another at the outlet of the CW (see Figure B-1c). The hydraulic performance was monitored
by means of two triangular sharp-crested weirs and continuous ultrasonic level sensors (Figure B-2b). The weirs were installed at the inlet of the settling tank (Figure B-1c, point 1-Inflow) and at the outlet of the CW (Figure B-1c, point 2-Outflow). The ultrasonic level sensors were produced and installed by the Colombian company Instrucing S.A.S, based on Airmar® products (Airmar 2017). This type of level sensor was chosen because of its easy installation, its resolution (1 mm), its installation range (0.10 m to 1.5 m) and because its price was within the budget. The data recorded by the continuous ultrasonic level sensors were logged with Lookout, a web-enabled automation software (National Instruments 1989). Following the recommendations of Instrucing S.A.S, this software was chosen because it allows the recorded data to be handled and controlled easily; in addition, its interface is user-friendly, it allows exporting data in different formats and it handles calibrations.
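As a sketch of how a level reading is turned into a discharge, the standard triangular sharp-crested (V-notch) weir equation Q = (8/15) Cd sqrt(2g) tan(θ/2) h^(5/2) can be applied to the heads measured by the ultrasonic sensors; the discharge coefficient and notch angle below are illustrative assumptions, not the characteristics of the installed weirs.

    # V-notch (triangular sharp-crested) weir: Q = (8/15) * Cd * sqrt(2 g) * tan(theta/2) * h^(5/2)
    weir_discharge <- function(h_m, theta_deg = 90, Cd = 0.60) {  # Cd and theta are assumed values
      g <- 9.81                                                   # gravitational acceleration (m/s2)
      (8 / 15) * Cd * sqrt(2 * g) * tan(theta_deg * pi / 360) * h_m^2.5
    }
    h <- c(0.05, 0.10, 0.15)   # heads over the notch derived from the ultrasonic level readings (m)
    weir_discharge(h)          # corresponding discharges in m3/s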

Figure B-1. CWRT system: (a) Location of the CWRT system at PUJB; (b) Parking building location; (c) Plan view of the
CWRT system: 1-Inflow from the parking lot; 2-Outflow from the CW; 3- Inflow from the soccer field and green areas, 4-
Outflow of the CWRT (Galarza-Molina et al. 2016)

In terms of water quality performance, three probes were available for the project. Two UV-Vis spectrometer probes (sold by the company s::can and called spectro::lyser) were installed at the CWRT inlet (Figures B-2a and B-2b) and at the CW outlet (Figure B-2d). Additionally, a turbidity probe (sold by the company WTW) was installed at the CWRT inlet (Figure B-2c). The quality data measured with these probes were recorded every minute. Sections 4.2.1 and 4.2.2 describe the UV-Vis spectrometer and turbidity probes in detail.
Rainfall was monitored using a WatchDog 2000 Weather Station (Spectrum Technologies, Inc. 2000), which has a tipping-bucket rain gauge with a resolution of 0.25 mm. The weather station was already installed on the School of Engineering's building, approximately 100 m from the system (Figure B-3). The rainfall data were recorded every minute and downloaded from the weather station every week.

Figure B-2. (a) System's inlet general view and inlet spectrometer location. (b) Zoom of the spectrometer within the inlet. (c) Aerial view of the system's inlet: location of the spectrometer and turbidity probes. (d) System's outlet general view and outlet spectrometer location (white arrows show the flow direction) (adapted from: Galarza-Molina et al. 2015b; a).

Figure B-3. WatchDog Weather Station photo. Distance from the School of Engineering's building to the CWRT system (approximately 100 m).

4.2.1 UV-Vis spectrometer probes
The UV-Vis spectrometer probes are part of the latest techniques in continuous measurement
for water quality. This technique reduces the disadvantages associated with water sampling
campaigns and laboratory testing, because the measurement with the probe is performed on-
site (in-situ) (Langergraber et al. 2003).
The spectrometer probe is 547 mm long with a 44 mm diameter, with an optical path window located at one third (181.5 mm) from one end of the probe (Appendix B-4-1 – Figure 1a). The measuring window can be adapted to optical path lengths of 5 mm or 35 mm (Appendix B-4-1 – Figure 1b), depending on the water quality to be measured. For our case study the 35 mm optical path length was used (s::can Messtechnik GmbH 2007, n.d.). Additionally, the probe has a self-cleaning system, based on compressed air, that ensures that no particles stick to the measuring window (s::can Messtechnik GmbH 2007).
Regarding the probe installation, we followed the recommendations of the probe manual. The probe was placed in a horizontal orientation, 15 cm above the bottom of the inlet and with the measuring window facing the flow. This position prevents sedimentation of particles on, and the adherence of gas bubbles to, the optical window (s::can Messtechnik GmbH 2007).
The UV-Vis spectrometer probe is capable of providing TSS, COD and BOD results in real time with a high time resolution (e.g. one measurement per minute) (Langergraber et al. 2004). The measuring principle of the UV-Vis spectrometer probes is the absorbance of light by suspended or dissolved particles at wavelengths ranging from the ultraviolet (200-400 nm) to the visible (400-732.5 nm). The wavelength steps are spaced every 2.5 nm, so each measurement has 214 absorbance values, one for each wavelength (Langergraber et al. 2004; s::can Messtechnik GmbH 2006). The spectra (see Figure B-4), known as fingerprints, characterize the water sample. The water composition can be known by analyzing the general shape of the spectrum or the absorbance at a specific wavelength. These fingerprints are employed to obtain more specific parameters (e.g. turbidity, nitrate concentration, and sum parameters such as the spectral absorption coefficient at 254 nm (SAC254)) (Langergraber et al. 2003; Rieger et al. 2004).
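The wavelength grid implied by these figures can be reproduced directly in R, which confirms the 214 absorbance values per fingerprint:

    wavelengths <- seq(200, 732.5, by = 2.5)  # 200-732.5 nm in 2.5 nm steps
    length(wavelengths)                       # 214 wavelengths per measurement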

Figure B-4. UV-Vis spectrum, and examples of parameters derived out of this spectrum together with their characteristic absorbance profile (source: van den Broeke 2007).

The data measured by the spectrometer probe were recorded and visualized using ana::proTM, which is a software provided by the s::can company. This software offers two options: one with the results employing a calibration option (based on the statistical technique of PLS) in order to assess the concentration of different pollutant indicators according to the absorbance of the UV-Visible spectra (Hochedlinger 2005; s::can Messtechnik GmbH 2006); and the raw data option, that is, the absorbance spectra (fingerprint: 214 wavelengths) measurements. These options are saved in two files of 1254 records: .par and .fp, respectively (s::can Messtechnik GmbH 2006). For more details of the operation and maintenance of this sensor see Gamerith (2011), Gruber et al. (2006), Hochedlinger (2005) and Langergraber et al. (2004). For calibration purposes, a sampling slide was used (Appendix B-4-1 – Figure 1c).

4.2.2 Turbidity probe

The turbidity probes are part of the online sensors that are currently available in the market. Like the spectrometer sensor, the use of turbidity probes minimizes the problems associated with water sampling campaigns and laboratory testing (Henckens et al. 2002). The VisioTurb® 700 IQ probe produced by the WTW company was installed at the system's inlet. The turbidity measurements are conducted nephelometrically and the probe has an AutoRange function that selects the optimal resolution over the wide range of 0 to 4000 NTU (Nephelometric Turbidity Units). This sensor has an ultrasound cleaning system that minimizes maintenance activities and ensures low-maintenance, long-term reliable measurement operation. The structure of the turbidity sensor is shown in Appendix B-4-1 – Figure 2a.
The turbidity sensor was installed at the inlet following the manufacturer's installation recommendations, illustrated in Appendix B-4-1 – Figure 2b. Because the measurement infrared beam emerges at 45° from the front of the sensor, care must be taken with the orientation of the turbidity probe. The probe therefore has to be installed taking into account the minimum immersion depth (7 cm) and the minimum distance to the wall and floor (10 cm) (WTW GmbH 2013).
The data of the turbidity sensor were captured and stored using the IQ SENSOR NET system with the MIQ/TC 2020 XT terminal (Appendix B-4-1 – Figure 3a) and the MIQ/JB branching module (Appendix B-4-1 – Figure 3b). Figure 2b in Appendix B-4-1 shows how the turbidity probe was connected to the terminal module through the branching module (MIQ/JB).
The calibration procedure of this probe followed the manual recommendations. For this procedure a black plastic bucket (12 L) was used and the sensor was placed inside the bucket with the sample water. Figure 4 in Appendix B-4-1 shows the measurement environment for the calibration procedure (WTW GmbH 2013). The detailed calibration process is explained in the next chapter.
Figure 5 in Appendix B-4-1 shows the internal components of the sensor. The scattered light is measured at an angle of 90 degrees. An infrared light with a wavelength of 860 nm is used, so that potential coloration of the sample does not affect the measurements (WTW GmbH 2013; Xylem n.d.).

4.3 LABORATORY ANALYSIS


Inflow and outflow samples were collected during seven rainfall events (Apr-22-14, May-6-14,
Oct-9-14, Mar-3-15, Mar-16-15, Nov-5-15 and Nov-19-15). The grab sampling method and
sub-sampling for triplicated analysis was used. In sum, 65 water samples (36 at the inflow and
29 at the outflow) were obtained with their sub-sampling for triplicated analysis: 195 sub-
samples. The water samples were analyzed for TSS, COD, Turbidity (T) and pH water quality
indicators (WQI). One additional sampling campaign (May 19th 2016) was done at the outlet of
the system. For this campaign we collected four samples. The WQI tested were Fecal
Coliforms, Total Coliforms and BOD5.
The analyses of the above-mentioned water quality indicators were done in the water quality laboratory of the School of Engineering at the PUJB, following the procedures established by the Standard Methods (Rice et al. 2012). The microbiological analyses were done following the
procedures established by Colilert® -IDEXX, which are included in the Standard Methods for
the Examination of Water and Wastewater (Rice et al. 2012). Table B-1 summarizes the
equipment, specifications, precision and standards employed for the analysis of the WQI.
Table B-1. Description of the WQI equipment, specifications, precision and standard used.
Quality indicators Equipment Specifications Precision Standard
Drying Oven Memert N° series 810049 Model: U50
Vacuum pump Gast Max pressure: 4.08 bar/60 psi
Sartorius N° series 28003741
TSS Analytical balance 0.0001 mg SM2540D
Model: Cubis MSE 2245-000DU
250 mL 2 mL
Test tube
500 mL 5 mL
Oxygen probe YSI Ref: ProBOD 0.02 mg/l
Optical Dissolved N° series 12A100554
0.01 mg/L SM5210B
BOD5 Oxygen Meter Model: YSI 76540 P
Test tube 250 mL 2 mL
Pipette 5 mL 0.1 mL
COD Thermo-reactor Serie 9502376 HACH 1ºC SM5220C
Titrator Serie 10003870 TRITROLINE 6000 0.01 mL
Orbeco – Hellige Model: 965 – 10
Turbidity Turbidimeter 0.2 NTU SM 2130 B
(115 /230v AC)
YSI N° series JCO 1323
pH phmeter 0.01 SM 4500-H+ B
Model: pH100
Quanti-Tray Sealer® Serie 171097 Model 2X IDEXX
UV- Lamp Spectroline Model EA-160
Portable Viewing
Spectroline Model CM-10A
Cabinet
Laboratory incubator Incucell Serie 1709 Colilert® -IDEXX
Fecal Coliforms
Total Coliforms Test bags: 100 mL Whirl-Pak® Coli
Reagent for 100 mL
Colilert® reagent
samples
Trays containing 97
Quanti-Tray®/2000 IDEXX
wells each

4.4 MONITORING EQUIPMENT DATA INVENTORY


Table B-2 summarizes the timeline of the data recorded with the installed monitoring equipment (Section 4.2). IEB denotes the rain data, Qin and Qout the flow measurements at the inlet and outlet, respectively, Specin and Specout the spectrometer probes installed at the inlet and outlet, respectively, and Tin the turbidity probe installed at the inlet.
The ultrasonic level sensors (for flow measurement) were installed at the end of 2013. At the same time, we began to download the rain data from the weather station. Then, at the beginning of 2014, one spectrometer probe was installed at the inlet; the other one was installed at the outlet in August 2014, together with the turbidity probe.
Grey frames and bold X's in Table B-2 show the data period chosen for the development of this thesis: August to November 2014 and January to July 2015. These time windows were chosen because they are the periods in which most of the monitoring equipment data coincide.
Table B-2. Time line of the data recorded with the installed monitoring equipment. X: recorded month; IEB: rain data; Qin
and Qout: the flow measurement at the inlet and outlet; Specin and Specout: the spectrometer probes installed at the inlet and
outlet; Tin: turbidity probe installed at the inlet. Grey frame and X in bold show the chosen period.
Year 2013 2014 2015 2016
Month 10 11 12 01 02 03 04 05 06 07 08 09 10 11 12 01 02 03 04 05 06 07 08 09 10 11 05
IEB X X X X X X X X X X X X X X X X X
Q in X X X X X X X X X X X X X X X X X X X X X X X
Q out X X X X X X X X X X X X X X X X X X X X X X
Spec in X X X X X X X X X X X X X X X X X
Spec o u t X X X X X X X X X
T in X X X X X X X X X X
Grab
X X X X X X
samples

CHAPTER 5

AVAILABLE METHODS FOR WATER QUALITY ANALYSIS

This Chapter explains the water quality probes calibration methods that were developed by
other authors and a method for the determination of the occurrence probability of the first
flush phenomenon.

5.1 UV-VIS SPECTROMETER PROBES CALIBRATION


For the UV-Vis spectrometer probes (Section 4.2.1), because the composition of stormwater is highly variable, the manufacturer suggests adapting the global calibration to the specific quality of the water of the studied system through a local calibration (Fleischmann et al. 2001; Hochedlinger 2005). We developed the local calibration of the UV-Vis spectrometer probes using laboratory reference concentrations of 65 water samples (from seven rain events) coupled with the measurement of the absorbance spectra of these samples. TSS concentrations were determined in the water quality laboratory of the School of Engineering at the PUJB, following the procedures established by the Standard Methods (Section 4.3) (Rice et al. 2012). Table B-3 shows the water samples per event used for the spectrometer calibration.
Table B-3. Water samples (WS) per event
used for the spectrometer calibration
Events Inflow Outflow
Apr-22-14 5 2
May-6-14 5 5
Oct-9-14 5 5
Mar-3-15 5 3
Mar-16-15 5 5
Nov-5-15 5 5
Nov-19-15 6 4
Total of WS 36 29

The experiment requires ensuring the reliability of the calibration data set (spectra and concentrations). Therefore, we used the methodologies proposed by Torres (2011) and Torres et al. (2013a) for the assessment of uncertainties and the detection of multivariate outliers, respectively. First, the uncertainty associated with the laboratory TSS concentrations and the UV-Vis spectrometer probe data was calculated, and data with relative uncertainty above 25% were discarded (Baleo et al. 2003). The following precisions were taken into account: for TSS, the precision of the scale used to measure the masses (0.001 g) and the precision of the test tube used to measure the volume of each sample (0.01 L); for the UV-Vis spectrometer probes, the precision of the probe (0.001 abs/m).
Regarding the outlier detection, the methodology was based on the estimation of partial outliers from sub-data sets (Torres et al. 2013a) and on outlier identification in high dimensions by Filzmoser et al. (2008). Any data point detected as a possible outlier in more than 95% of the runs was cataloged as an outlier. The script developed for the assessment of uncertainties and detection of multivariate outliers is presented in Appendix B-5-1.
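As a minimal sketch (not the script of Appendix B-5-1), the following R lines illustrate the two filtering steps just described: a Monte Carlo estimate of the relative uncertainty of one laboratory TSS concentration, using the scale and test-tube precisions mentioned above, and the flagging of multivariate outliers in the coupled absorbance spectra with pcout() (Filzmoser et al. 2008). The sample mass, filtered volume and synthetic spectra are hypothetical placeholders.

library(mvoutlier)

set.seed(1)
n.mc <- 5000                                  # number of Monte Carlo replicas

# 1. Relative uncertainty of one laboratory TSS concentration (Monte Carlo)
mass.g <- 0.012                               # hypothetical dry-residue mass (g)
vol.L  <- 0.250                               # hypothetical filtered volume (L)
u.mass <- 0.001                               # precision of the scale (g)
u.vol  <- 0.01                                # precision of the test tube (L)
tss.mc  <- rnorm(n.mc, mass.g, u.mass) / rnorm(n.mc, vol.L, u.vol) * 1000   # mg/L
rel.unc <- sd(tss.mc) / mean(tss.mc)
keep    <- rel.unc <= 0.25                    # discard samples with relative uncertainty > 25%
cat(sprintf("TSS = %.1f mg/L, relative uncertainty = %.1f%% -> keep: %s\n",
            mean(tss.mc), 100 * rel.unc, keep))

# 2. Multivariate outlier detection on the coupled absorbance spectra (PCOut method)
spectra <- matrix(rnorm(65 * 214, mean = 30, sd = 5), nrow = 65)  # 65 samples x 214 wavelengths
flags   <- pcout(spectra)$wfinal01            # 0 = potential outlier, 1 = regular observation
which(flags == 0)                             # samples flagged as possible outliers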
In order to calibrate the online sensor data with laboratory results, various methodologies can be used. According to the review by Lepot et al. (2016), the best methods are Partial Least Squares (PLS) (Part A Section 3.8) and Support Vector Machines (SVM) (Part A Section 3.10). The PLS method seems to be the most frequently used calibration method. On the other hand, Torres et al. (2013b) found that SVM models seem to be more parsimonious and to give more robust results than PLS models. For the development of this thesis, both methods were implemented.
For the PLS regression, the program OPP (OTHU PLS Program) developed by Torres and Bertrand-Krajewski (2008), with the modifications added by Zamora and Torres (2014), was implemented. This algorithm uses the Partial Least Squares and Principal Component Regression (pls) package with the wide kernel algorithm and leave-one-out cross-validation (Mevik and Wehrens 2007). The PLS calibration was run for 5000 simulations, changing the validation and calibration data sets each time. The correlation coefficient (r) and the RMSE (root mean square error) between predicted and observed values were calculated. The script developed for this calibration method is presented in Appendix B-5-2.
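As an illustration only (the OPP script itself is given in Appendix B-5-2), the sketch below shows one PLS calibration and validation run with the pls package, using the wide kernel algorithm and leave-one-out cross-validation; the absorbance matrix and TSS vector are synthetic placeholders and the 5000 repetitions of the random split are omitted.

library(pls)

set.seed(1)
n.samples <- 50; n.lambda <- 214                      # hypothetical filtered data set
abs.spectra <- matrix(runif(n.samples * n.lambda, 0, 100), nrow = n.samples)
tss.lab     <- 0.3 * abs.spectra[, 10] + rnorm(n.samples, 0, 2)   # synthetic TSS (mg/L)

# random split into calibration (70%) and validation (30%) sets
idx.cal <- sample(n.samples, round(0.7 * n.samples))
cal <- data.frame(tss = tss.lab[idx.cal],  spec = I(abs.spectra[idx.cal, ]))
val <- data.frame(tss = tss.lab[-idx.cal], spec = I(abs.spectra[-idx.cal, ]))

# wide-kernel PLS with leave-one-out cross-validation (Mevik and Wehrens 2007)
fit   <- plsr(tss ~ spec, ncomp = 20, data = cal,
              method = "widekernelpls", validation = "LOO")
ncomp <- which.min(RMSEP(fit)$val["CV", 1, -1])       # number of components with lowest CV error

pred <- predict(fit, newdata = val, ncomp = ncomp)[, 1, 1]
r    <- cor(val$tss, pred)
rmse <- sqrt(mean((val$tss - pred)^2))
cat(sprintf("ncomp = %d, r = %.2f, RMSE = %.2f mg/L\n", ncomp, r, rmse))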
Regarding SVM, the methodology proposed by Torres et al. (2013b) was implemented. We used the kernlab (Kernel-Based Machine Learning Lab) package in R (R Core Team 2016). This package offers the most popular kernel functions: (a) rbfdot, Radial Basis "Gaussian" kernel; (b) polydot, Polynomial kernel; (c) vanilladot, Linear kernel; (d) tanhdot, Hyperbolic tangent kernel; (e) laplacedot, Laplacian kernel; (f) besseldot, Bessel kernel; (g) anovadot, ANOVA RBF kernel; (h) splinedot, Spline kernel. For more information about SVM models see Lopez-Kleine and Torres (2014) and Torres et al. (2013b). Then, using the Monte Carlo method for the uncertainty analysis, we generated 5000 random replicas of the TSS laboratory concentrations (mg/L) per sample and of the absorbance spectra measurements. After running 5000 simulations with each of the kernel functions, we estimated the correlation coefficient (r) and the RMSE (root mean square error) between predicted and observed values. The script developed for this calibration method is presented in Appendix B-5-3.
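A minimal sketch of one kernel run is shown below (the full script is in Appendix B-5-3); the synthetic spectra, TSS values and their standard uncertainties are hypothetical, and the Monte Carlo loop is shortened to 50 replicas instead of 5000.

library(kernlab)

set.seed(1)
n.samples <- 50; n.lambda <- 214
abs.spectra <- matrix(runif(n.samples * n.lambda, 0, 100), nrow = n.samples)
tss.lab     <- 0.3 * abs.spectra[, 10] + rnorm(n.samples, 0, 2)   # synthetic TSS (mg/L)
u.tss       <- 0.05 * tss.lab          # hypothetical standard uncertainties (mg/L)

n.mc <- 50
rmse <- r <- numeric(n.mc)
for (i in seq_len(n.mc)) {
  tss.rep <- rnorm(n.samples, tss.lab, u.tss)        # Monte Carlo replica of the lab values
  idx.cal <- sample(n.samples, round(0.7 * n.samples))
  fit  <- ksvm(x = abs.spectra[idx.cal, ], y = tss.rep[idx.cal],
               type = "eps-svr", kernel = "rbfdot")  # Gaussian radial basis kernel
  pred <- as.numeric(predict(fit, abs.spectra[-idx.cal, ]))
  obs  <- tss.rep[-idx.cal]
  r[i]    <- cor(obs, pred)
  rmse[i] <- sqrt(mean((obs - pred)^2))
}
cat(sprintf("median r = %.2f, median RMSE = %.2f mg/L\n", median(r), median(rmse)))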

5.2 TURBIDITY PROBE CALIBRATION BASE METHOD


For the turbidity probe calibration process (Chapter 9 Part C) we used the turbidity-based method proposed by Torres et al. (2013a). This method allows us to obtain real-time TSS concentrations from turbidity measurements. The method is divided into three parts: 1- determination of the uncertainty associated with each sample (turbidity and TSS laboratory concentration); 2- detection of samples with low representativeness by means of a multivariate outlier analysis; 3- development of regression functions between TSS and turbidity values. The first and second parts were included in the method developed in this thesis, which is shown in Part C Chapter 9. Instead of using the third part, we propose a different approach for the relation between TSS and turbidity values, which is explained in the following paragraphs.
First, the uncertainty associated with the laboratory TSS concentrations and the turbidity probe data was calculated, and data with relative uncertainty greater than 25% were discarded (Baleo et al. 2003). We used the same precisions mentioned above (Section 5.1), adding the precision of the turbidity probe (0.01 NTU). For the outlier detection, the methodology was based on the estimation of partial outliers from sub-data sets (Torres et al. 2013a) and on outlier identification in high dimensions by Filzmoser et al. (2008). Data detected as a possible outlier in more than 95% of the runs were cataloged as outliers (Torres et al. 2013a). These procedures were run 5000 times. Finally, in order to evaluate the method, two statistics were calculated: the correlation coefficient (r) and the RMSE (root mean square error) between predicted and observed values. The script developed for the assessment of uncertainties is presented in Appendix B-5-4.

5.3 FIRST FLUSH OCCURRENCE PROBABILITY METHOD


The method proposed by Torres et al. (2016) was used to assess the occurrence probability of the first flush phenomenon (Part A Section 2.3) (Bertrand-Krajewski et al. 1998). This method accounts for the experimental uncertainties associated with the collection of temporal data, using concepts such as standard uncertainty (for first flush uncertainties) and the Monte Carlo method. It consists of a five-stage procedure put forth to assess first flush occurrence probability during rainfall events (see Figure 1 in Appendix B-5-5) (Torres et al. 2016).
The occurrence probability of first flush was defined as the probability that a rainfall event falls within the first flush zones defined by Bertrand-Krajewski et al. (1998). Since an uncertainty analysis was done during the calibration procedures, we could compute 5000 possible M(V) curves (pollutant mass distribution vs. volume) for each event. Each M(V) curve was fitted using the power function (Bertrand-Krajewski et al. 1998) and the parameter b (which characterizes the gap between the M(V) curve and the bisector, the line obtained when pollutant mass is proportional to volume) was determined. For more information about this method see Torres et al. (2016). The script developed for the FF assessment is presented in Appendix B-5-5.
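The central step of the method, the fit of one M(V) curve with the power function M(V) = V^b using nls(), can be sketched as follows (the complete implementation is in Appendix B-5-5); the per-minute flows and TSS concentrations are synthetic and stand for one Monte Carlo replica of an event.

set.seed(1)
n.min <- 60                                       # hypothetical 60-minute event
q.in  <- c(seq(0.5, 8, length.out = 20), seq(8, 0.5, length.out = 40))   # inflow (L/s)
c.in  <- 20 + 100 * exp(-(1:n.min) / 15) + rnorm(n.min, 0, 3)            # inflow TSS (mg/L)

vol  <- cumsum(q.in * 60) / sum(q.in * 60)                 # cumulative volume fraction V
mass <- cumsum(q.in * 60 * c.in) / sum(q.in * 60 * c.in)   # cumulative mass fraction M(V)

# power-function fit of the M(V) curve (Bertrand-Krajewski et al. 1998)
fit <- nls(mass ~ vol^b, start = list(b = 1), control = nls.control(maxiter = 200))
b   <- coef(fit)[["b"]]
cat(sprintf("fitted b = %.3f (b < 1: pollutant mass arrives ahead of the volume)\n", b))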

CHAPTER 6

WATER USE AND REUSE QUALITY GUIDELINES AND ACTS

The water use and reuse guidelines used during the development of this thesis were:
• 2004 Guidelines for Water Reuse (EPA 2004).
• Directive 2006/7/EC of the European Parliament and of the Council of 16 February
2006 Concerning the Management of Bathing Water Quality and Repealing Directive
76/160/EEC (EU 2006).
• Wastewater treatment and use in agriculture – FAO irrigation and drainage paper 47
(Pescod 1992).
• Manual on Water Quality for Reuse of Treated Municipal Wastewater of Japan (MLIT
2005).
• Colombia Water Uses and Liquid Waste Act: DECRETO 1594 DE 1984. Usos del agua y
residuos líquidos (Colombia 1985).
A wide range of standards was selected with the aim of having different points of view. The guidelines from the EPA (EPA 2004), the European Union (EU) (EU 2006) and the Food and Agriculture Organization (FAO) (Pescod 1992) were chosen because they are world references. The manual on water quality for reuse of Japan (MLIT 2005) was chosen because of Japan's rapid development of water reuse during the 1960s (Asano et al. 2007). Finally, in order to have a local reference, Colombia's Act of water uses and liquid waste (Colombia 1985) was included.
The EPA recommended quality indicators are pH, BOD5, Fecal coliform and Chlorine
residual. The EPA guidelines reuse groups are: 1- Urban reuse (all types of landscape irrigation,
also vehicle washing, toilet flushing, use in fire protection systems and commercial air
conditioners, and other uses with similar access or exposure to the water); 2- Restricted access
area irrigation (sod farms, silviculture sites, and other areas where public access is prohibited,
restricted, or infrequent); 3- Agricultural reuse—food crops not commercially processed
(surface or spray irrigation of any food crop, including crops eaten raw); 4- Agricultural
reuse—food crops commercially processed (Surface irrigation of orchards and vineyards); 5-
Agricultural reuse—nonfood crops (Pasture for milking animals; fodder, fiber, and seed crops);
6- Recreational impoundments (incidental contact (e.g., fishing and boating) and full body
contact with reclaimed water allowed); 7- Landscape impoundments (aesthetic impoundments
where public contact with reclaimed water is not allowed). Table B6-1 in Appendix B-6-1
shows the comments for each reuse group included in the EPA guidelines.
The EU guidelines, concerning the management of bathing water quality, classify the bathing water as poor, sufficient, good or excellent. These classifications depend on the established limits of microbiological indicators. Table B6-2 (Appendix B-6-1) specifies the microbiological indicator limits for each class (EU 2006). This guideline takes into account the microbiological indicator Intestinal enterococci. The difference between Intestinal enterococci and Escherichia coli (E. coli) is that the former tend to survive longer in water environments than the latter; Intestinal enterococci are also more resistant to drying and to chlorination (OPS-Organización Panamericana de la Salud 2006).
Regarding the FAO guidelines for wastewater treatment and use in agriculture, the water uses are agricultural irrigation of nonfood crops, agricultural sprinkler irrigation and agricultural surface irrigation. The quality parameters are pH, TDS (total dissolved solids), Manganese (Mn), Iron (Fe) and Hydrogen Sulphide (H2S). This guideline also suggests microbiological quality limits for water reuse in agriculture, see Table B6-3 (Appendix B-6-1) (Pescod 1992).
The manual on water quality for reuse of Japan recommends the following parameters: E. coli, total coliform, turbidity, pH, color, appearance, odor and chlorine residual. The limits of these parameters are given for toilet flushing, landscape irrigation and recreational use (see Table B6-4 in Appendix B-6-1) (MLIT 2005).
Colombia's Act of water uses and liquid waste covers the following water uses: agricultural, recreational and landscape enhancement. Table B6-5 (Appendix B-6-1) shows the quality parameter limits for agricultural use and Table B6-6 (Appendix B-6-1) illustrates the quality indicators for recreational use. The landscape enhancement use refers to water used to contribute to the harmonization and beautification of the landscape. For this use the guideline recommends: 1- absence of floating material and foams from human activity; 2- absence of fats and oils that form a visible film; 3- absence of substances that produce odor (Colombia 1985).
For our case study, the initial water uses of interest were toilet flushing and showers, agricultural and non-agricultural irrigation, floor and facade cleaning, and water features. The selected water quality indicators were pH, TSS, BOD5, turbidity, total coliform and fecal coliform (E. coli). These indicators were chosen because they are the most common parameters (EPA 2004) and the techniques for their determination are available in the university laboratory. The quality limits for these water uses were therefore identified in the selected guidelines (Ardila-Quintero et al. 2016). Table B-4 shows the suggested guidelines for the water uses of interest.

Table B-4. Suggested guidelines for the water uses of interest (Colombia 1985 (1); EPA 2004 (2); EU 2006 (3); MLIT 2005 (4); Pescod 1992 (5)) (adapted from Ardila-Quintero et al. 2016)
Parameter | Toilet flushing and showers | Agricultural and non-agricultural irrigation | Floor and facade cleaning | Water features
pH | 5.8–8.6 (toilet flushing (2)); 5.8–8.6 (4) | 6.0–9.0 (2); <7 (5); 5.0–9.0 (1) | 6.0–9.0 (2) | 6.0–9.0 (2); 5.0–9.0 (1)
BOD5 | <= 10 mg/L (2) | <= 10 mg/L (non-agricultural irrigation (2)); <= 30 mg/L (2) | <= 10 mg/L (2) |
TSS | <= 5 mg/L (2) | <= 5 mg/L (non-agricultural irrigation (2)); <= 30 mg/L (2) | | <= 30 mg/L (2)
T | <= 5 NTU (toilet flushing (2)); <= 2 NTU (2)&(4) | <= 5 NTU (non-agricultural irrigation (2)) | | <= 2 NTU (4)
E. coli | ND (2)&(1); < 500 cfu/100 mL (showers (3)) | ND (non-agricultural irrigation (2)); < 500 cfu/100 mL (non-agricultural irrigation (5)) | |
Total coliform | ND (toilet flushing (4)); < 1000 microorganisms/100 mL (1) | ND (4) | | < 5000 microorganisms/100 mL (1)

CHAPTER 7

COMPUTER BASED TOOLS

For the development of this thesis we used RStudio (RStudio Team 2016). This software is an Integrated Development Environment (IDE) for the R statistical computing environment that facilitates the use of R. The methods mentioned in this Part (Chapter 5 – Available methods for water quality analysis) and in Part C were implemented using these programs. R was chosen because it is one of the most used programs for data analysis (Vance 2009) and for the development of statistical software (Fox and Andersen 2005; Vance 2009). R is free software supported by the R Foundation for Statistical Computing and is part of the GNU system (an operating system that is free software) (R Core Team 2016).
The production of high-quality graphs in R is another positive point of this program, as is the fact that the user has full control over the plot (Lewin-Koh 2015). R functions and datasets are organized in packages. Fourteen packages are included with the basic installation of R (base, compiler, datasets, grDevices, graphics, grid, methods, parallel, splines, stats, stats4, tcltk, tools and utils). These packages contain the basic functions that allow R to work, together with standard statistical and graphical functions (e.g. linear models, classical tests, high-level plotting functions, etc.) (Venables et al. 2015). Many more packages are available, covering a very wide range of modern statistics; these can be found through the CRAN family of Internet sites ("R: What is R?" n.d.). For more information about RStudio see RStudio Team (2016) and for R see The R Foundation (n.d.). The following paragraphs describe the packages used in this thesis.
Stats package (version 3.1-1; 2014): This package contains functions for statistical calculations and random number generation (R Core Team 2016). Many of its functions were used in the development of this thesis. Table B-5 shows the principal functions of this package that were used.
Table B- 5. Functions of the Stats package (adapted from R Core Team 2016).
Function Description Used for Part - Section
anova Compute analysis of variance (or deviance) Identifying a variable or variables that C – 11.1
tables for one or more fitted model objects. represent the first flush phenomenon

diff Returns suitably lagged and iterated differences. Simulation of the first flush C–8
phenomenon and to find ΔHmax C – 11.1
cor Computes the variance of x and the correlation Determination of the correlation B – 5.1, 5.2 and 5.3
of x and y if these are vectors. coefficient C–9

lowess Performs the computations for the LoWeSS Adding smoothness to the level signal C–8
smoother, which uses locally-weighted
polynomial regression.
plot Generic function for plotting of R objects. Plotting all the time series used in this B – 5.1, 5.2 and 5.3
thesis C

predict Generic function for predictions from the Prediction of the results of PLS and B – 5.1 and 5.2
results of various model fitting functions SVM calibration methods
rnorm Random generation for the normal distribution Random generation number used for B – 5.1 and 5.2
with mean equal to mean and standard uncertainty analysis
deviation equal to sd.
sd Computes the standard deviation of the values Uncertainty analysis B – 5.1 and 5.2
in x.
nls Determine the nonlinear (weighted) least- Determination of the parameter b of B – 5.3
squares estimates of the parameters of the first flush phenomenon
nonlinear model
nls.control Allows setting some characteristics of the nls Determination of the parameter b of B – 5.3
nonlinear least squares algorithm the first flush phenomenon
lm It is used to fit linear models. It can be used to Identifying a variable or variables that C – 11.1
carry out regression, single stratum analysis of represent the first flush phenomenon
variance and analysis of covariance.
pairwise.wilcox.test Calculate pairwise comparisons between group Evaluation method to identify the C – 11.3
levels with corrections for multiple testing redundant variables

grDevices package (version 3.1.1; 2014): Graphics devices and support for base and grid graphics. Table B-6 illustrates the function of this package that was used.
Table B- 6. Functions of the grDevices package (adapted from R Core Team 2016).
Function Description Used for Part - Section
boxplot.stats This function is typically called by another Obtaining the quartiles and the lower B–5
function to gather the statistics necessary for and upper whisker. Used for C – 11
producing box plots, but may be invoked uncertainty analysis and rules for first C – 12
separately. flush identification, evaluation methods
and efficiency model

sgeostat package (version 1.0-25; 2013): An object-oriented framework for geostatistical modeling in S+, containing functions for variogram estimation, variogram fitting and kriging as well as some plot functions. The mvoutlier package depends on this package (R Core Team 2016). Table B-7 illustrates the function that was used.
Table B- 7. Functions of the Mvoutlier package (adapted from R Core Team 2016).
Function Description Used for Part - Section
pcout PCOut Method for Outlier Identification in Outlier detection for the calibration B – 5.1 and 5.2
High Dimensions procedures

mvoutlier package (version 2.0-6; 2015): Various methods for multivariate outlier detection (R Core Team 2016). Table B-8 shows the function of this package that was used.
Table B- 8. Functions of the Mvoutlier package (adapted from R Core Team 2016).
Function Description Used for Part - Section
pcout PCOut Method for Outlier Identification in Outlier detection for the calibration B – 5.1 and 5.2
High Dimensions procedures

data.table package (version 1.9-4; 2014): Fast aggregation of large data (e.g. 100 GB in RAM), fast ordered joins, fast add/modify/delete of columns by group using no copies at all, list columns, a fast and friendly file reader and a parallel file writer. It offers a natural and flexible syntax for faster development (R Core Team 2016). Table B-9 shows the function of this package that was used.
Table B- 9. Functions of the data.table package (adapted from R Core Team 2016).

Function Description Used for Part - Section
fread Similar to read.table but faster and more Read large files (e.g. spectrometer data B – 5.1 and 5.2
convenient. matrix 1400 x 214) for the calibration
procedures

pls package (version 2.4-3; 2013): Multivariate regression methods: Partial Least Squares Regression (PLSR), Principal Component Regression (PCR) and Canonical Powered Partial Least Squares (CPPLS) (R Core Team 2016). Table B-10 illustrates the function of this package that was used.
Table B- 10. Functions of the pls package (adapted from R Core Team 2016).
Function Description Used for Part - Section
plsr Function to perform partial least squares PLS calibration methodology B – 5.1
regression (PLSR)

kernlab package (version 0.9-20; 2015): Kernel-based machine learning methods for classification, regression, clustering, novelty detection, quantile regression and dimensionality reduction. Among other methods, kernlab includes Support Vector Machines, Spectral Clustering, Kernel PCA, Gaussian Processes and a QP solver. Table B-11 shows the function of this package that was used.
Table B- 11. Functions of the kernlab package (adapted from R Core Team 2016).
Function Description Used for Part - Section
ksvm ksvm supports the well known C-svc, nu-svc, SVM calibration methodology B – 5.1
(classification) one-class-svc (novelty) eps-svr, C – 11.3
nu-svr (regression) formulations along with
native multi-class classification formulations
and the bound-constraint SVM formulations.

Graphics package (version 3.1.1; 2014): This package contains functions for 'base' graphics. Base graphics are traditional S-like graphics, as opposed to the more recent grid graphics. Table B-12 shows the principal functions of this package that were used.
Table B- 12. Functions of the graphics package (adapted from R Core Team 2016).
Function Description Used for Part - Section
boxplot Produce box-and-whisker plot(s) of the given Uncertainty analysis and rules for first B–5
(grouped) values. flush identification, evaluation methods C – 11 and 12
and efficiency model
barplot Creates a bar plot with vertical or horizontal Efficiency model C – 12
bars

ks package (version 1.9.2; 2014): Kernel smoothers for univariate and multivariate data. Table B-13 shows the function of this package that was used.
Table B- 13. Functions of the ks package (adapted from R Core Team 2016).
Function Description Used for Part - Section
kde Kernel density estimate for 1- to 6- Efficiency model C –12
dimensional data

CONCLUSIONS – PART B

Having a constructed-wetland/reservoir-tank (CWRT) as a pilot-scale case study allowed us to study its performance through the implementation of online monitoring equipment. According to a review of constructed wetlands (CWs), the lack of CW performance data and monitoring results is still an issue (Lucas et al. 2014). Therefore, there is an opportunity to contribute through the monitoring of this case study.
The use of online monitoring equipment allows a detailed understanding of the water quality variations during storm events. Furthermore, the inconveniences associated with manual sampling can be avoided. However, it is important to develop local calibrations of these probes because the composition of stormwater is highly site-specific (Hochedlinger 2005). These local calibrations are based on laboratory reference concentrations coupled with the probe measurements.
The uncertainty analysis and the outlier detection allow us to filter the database and improve the calibration process. The PLS and SVM spectrometer calibration methods translate the raw spectrometer data into field concentrations, minimizing the problems associated with the use of global calibrations. These are the most sophisticated methods and they perform the calibrations satisfactorily (Lepot et al. 2016). However, a calibration method for the turbidity data is still needed. The implementation of the first flush occurrence probability method accounts for the variability observed during storm events by means of uncertainty analysis.
The R and RStudio software allowed us to analyze the recorded monitoring data using a wide variety of statistical methods. R also helped us to develop and implement the methods discussed in this Part and those presented in Part C. The R programming language was used to build the decision-making tool for the operation of the CWRT system (stormwater harvesting system) with less monitoring equipment and to evaluate different monitoring options.

REFERENCES – PART B

Airmar. (2017). "AT225 Airducer." <http://www.airmartechnology.com/productdescription.html?id=18> (Jan. 5, 2017).
Ardila-Quintero, C., León-Ramírez, R., Galarza-Molina, S., and Torres, A. (2016). “Usos
Potenciales del Efluente de un Humedal-Construido en Bogotá.” Revista U.D.C.A Actualidad
&amp; Divulgación Científica, 19(1), 237–242.
Asano, T., Burton, F., Leverenz, H., Tsuchihashi, R., and Tchobanoglous, G. (2007). Water
Reuse: Issues, Technologies, and Applications. McGraw-Hill Professional.
Baleo, J.-N., Bourges, B., Courcoux, P., Faur-Brasquet, C., and Le Cloirec, P. (2003).
Méthodologie expérimentale : Méthodes et outils pour les expérimentations scientifiques. Editions Tec et Doc
/ Lavoisier.
Bertrand-Krajewski, J.-L., Chebbo, G., and Saget, A. (1998). “Distribution of pollutant mass vs
volume in stormwater discharges and the first flush phenomenon.” Water Research, 32(8), 2341–
2356.
van den Broeke, J. (2007). “On-line and In-situ UV/Vis Spectroscopy: Real time multi
parameter measurements with a single instrument.” AWE International, (10), 54–59.
Colombia. (1985). DECRETO 1594 DE 1984. Usos del agua y residuos líquidos. DECRETO 1594
DE 1984.
EPA. (2004). “Guidelines for Water Reuse.” Washington, DC.
EU. (2006). Directive 2006/7/EC of the European Parliament and of the Council of 16 February 2006
Concerning the Management of Bathing Water Quality and Repealing Directive 76/160/EEC. L64, 37–
51.
Filzmoser, P., Maronna, R., and Werner, M. (2008). “Outlier Identification in High
Dimensions.” Comput. Stat. Data Anal., 52(3), 1694–1711.
Fleischmann, N., Langergraber, G., Weingartner, A., Hofstaedter, F., Nusch, S., and Maurer, P.
(2001). “On-line and in-situ measurement of turbidity and COD in wastewater using UV/VIS
spectrometry.” Berlin, Germany, Paper No. B1375.
Fox, J., and Andersen, R. (2005). R: Using the R statistical computing environment to teach social
statistics courses.
Galarza-Molina, S., Gómez, A., Hernández, N., Burns, M. J., Fletcher, T. D., and Torres, A.
(2015a). “Towards a Stormwater Harvesting Constructed Wetland Performance for Real Time
Operation.” RESURBE II - International Conference on Urban Resilience, Bogotá D.C.

Galarza-Molina, S., Gómez, A., Hernández, N., Burns, M. J., Fletcher, T. D., and Torres, A.
(2016). “On-line equipment installed in a stormwater harvesting system: calibration
procedures, first performance results and applications.” NOVATECH 2016, Lyon, France.
Galarza-Molina, S., and Torres, A. (2017). “Sizing method for stormwater harvesting tanks
using daily resolution rainfall and water demand data sets.” Luna Azul, (45), 107–122.
Galarza-Molina, S., Torres, A., Lara-Borrero, J., Méndez-Fajardo, S., Solarte, L., and Gonzales,
L. (2015b). “Towards a constructed wetland/reservoir-tank system for rainwater harvesting in
an experimental catchment in Colombia.” Revista Ingeniería y Universidad, 19(2), 169–185.
Gamerith, V. (2011). “High Resolution Online Data in Sewer Water Quality Modeling.” PhD
Thesis, Faculty of Civil Engineering, University of Technology Graz, Austria.
Gruber, G., Bertrand-Krajewski, J.-L., De Beneditis, J., Hochedlinger, M., and Lettl, W. (2006).
“Practical aspects, experiences and strategies by using UV/VIS sensors for long-term sewer
monitoring.” Water Practice & Technology, 1(1).
Henckens, G. J. R., Veldkamp, R. G., and Schuit, T. D. (2002). “On Monitoring of Turbidity in
Sewers.” American Society of Civil Engineers, 1–13.
Hochedlinger, M. (2005). “Assessment of Combined Sewer Overflow Emissions.” PhD
Thesis, Faculty of Civil Engineering, University of Technology Graz, Austria.
Langergraber, G., Fleischmann, N., and Hofstädter, F. (2003). “A multivariate calibration
procedure for UV/VIS spectrometric quantification of organic matter and nitrate in
wastewater.” Water Science and Technology: A Journal of the International Association on Water Pollution
Research, 47(2), 63–71.
Langergraber, G., Fleischmann, N., Hofstaedter, F., and Weingartner, A. (2004). “Monitoring
of a paper mill wastewater treatment plant using UV/VIS spectroscopy.” Water Science and
Technology: A Journal of the International Association on Water Pollution Research, 49(1), 9–14.
Lepot, M., Torres, A., Hofer, T., Caradot, N., Gruber, G., Aubin, J.-B., and Bertrand-
Krajewski, J.-L. (2016). “Calibration of UV/Vis spectrophotometers: A review and comparison
of different methods to estimate TSS and total and dissolved COD concentrations in sewers,
WWTPs and rivers.” Water Research, 101, 519–534.
Lewin-Koh, N. (2015). “CRAN Task View: Graphic Displays & Dynamic Graphics & Graphic
Devices & Visualization.”
Lopez-Kleine, L., and Torres, A. (2014). “UV-vis in situ spectrometry data mining through
linear and non linear analysis methods.” DYNA, 81(185), 182–188.
Lucas, R., Earl, E. R., Babatunde, A. O., and Bockelmann-Evans, B. N. (2014). “Constructed
wetlands for stormwater management in the UK: a concise review.” Civil Engineering and
Environmental Systems, 0(0), 1–18.
Mevik, B.-H., and Wehrens, R. (2007). “The pls Package: Principal Component and Partial
Least Squares Regression in R.” Journal of Statistical Software, 18(2), 1–24.

MLIT. (2005). Manual on Water Quality for Reuse of Treated Municipal Wastewater. MLIT-Ministry
of Land, Infrastructure, and Transportation, Tokyo, Japan.
National Instruments. (1989). Lookout - Industrial Automation Software System. English.
OPS-Organización Panamericana de la Salud. (2006). “INTESTINAL ENTEROCOCCI.”
Pescod, M. B. (1992). Wastewater treatment and use in agriculture - FAO irrigation and drainage paper
47. FAO, Rome.
R Core Team. (2016). R: A language and environment for statistical computing. R Foundation for
Statistical Computing, Vienna, Austria.
“R: What is R?” (n.d.). <https://www.r-project.org/about.html> (Jan. 11, 2017).
Rice, E. W., Bridgewater, L., American Public Health Association, American Water Works
Association, and Water Environment Federation. (2012). Standard methods for the examination of
water and wastewater. American Public Health Association, Washington D.C.
Rieger, L., Langergraber, G., Thomann, M., Fleischmann, N., and Siegrist, H. (2004). “Spectral
in-situ analysis of NO2, NO3, COD, DOC and TSS in the effluent of a WWTP.” Water Science
and Technology: A Journal of the International Association on Water Pollution Research, 50(11), 143–152.
RStudio Team. (2016). “RStudio.” RStudio, <https://www.rstudio.com/> (Jan. 11, 2017).
s::can Messtechnik GmbH. (2006). “Manual ana::pro Version 5.3 September 2006 Release.”
Vienna, Austria.
s::can Messtechnik GmbH. (2007). "Manual s::can spectrometer probe Version 1.0 January
2007 Release." Vienna, Austria.
s::can Messtechnik GmbH. (n.d.). "spectro::lyser." Vienna, Austria.
Spectrum Technologies, Inc. (2000). “WatchDog 2000 Series Weather Stations | Spectrum
Technologies.” <http://www.specmeters.com/weather-monitoring/weather-stations/2000-
full-stations/> (Jan. 6, 2017).
The R Foundation. (n.d.). “R: The R Project for Statistical Computing.” <https://www.r-
project.org/> (Jan. 11, 2017).
Torres, A. (2011). “Metodología para la Estimación de Incertidumbres Asociadas a
Concentraciones de Sólidos Suspendidos Totales Mediante Métodos de Generación Aleatoria.”
Rev. Tecno Lógicas, 26, 181–200.
Torres, A., Araújo Acosta, J. M., González Acosta, M., Vargas Luna, A., and Lara-Borrero, J.
A. (2013a). "Metodología para Estimar Concentraciones de SST en Tiempo Real en
Hidrosistemas Urbanos a Partir de Mediciones de Turbiedad. Turbidity-Based Methodology
for Real-Time TSS Concentrations Estimates in Urban Water Systems." Ciencia e Ingeniería
Neogranadina, 23(1).
Torres, A., and Bertrand-Krajewski, J.-L. (2008). “Partial Least Squares local calibration of a

UV-visible spectrometer used for in situ measurements of COD and TSS concentrations in
urban drainage systems.” Water Science and Technology: A Journal of the International Association on
Water Pollution Research, 57(4), 581–588.
Torres, A., Lepot, M., and Bertrand-Krajewski, J.-L. (2013b). “Local calibration for a UV/Vis
spectrometer: PLS vs. SVM. A case study in a WWTP.” 7th International Conference on Sewer
Processes & Networks, Sheffield, United Kingdom, 1–8.
Torres, A., Salamanca-López, C. A., Prieto-Parra, S. F., and Galarza-Molina, S. (2016).
“OFFIS: a method for the assessment of first flush occurrence probability in storm drain
inlets.” Desalination and Water Treatment, 0(0), 1–12.
Vance, A. (2009). “R, the Software, Finds Fans in Data Analysts.” The New York Times.
Venables, W., Smith, D., and R Core Team. (2015). “An Introduction to R, Notes on R: A
Programming Environment for Data Analysis and Graphics.” Vienna, Austria.
WTW GmbH. (2013). “OPERATING MANUAL - IQ SENSOR NET VisoTurb (R) 700 IQ
(SW).” Weilheim.
Xylem. (n.d.). “ Turbidity and Suspended Solids.”
Zamora, D., and Torres, A. (2014). “Method for outlier detection: a tool to assess the
consistency between laboratory data and ultraviolet-visible absorbance spectra in wastewater
samples.” Water Science and Technology: A Journal of the International Association on Water Pollution
Research, 69(11), 2305–2314.

PART C
DEVELOPED METHODS
Part B presented fundamental methods developed by other authors and accepted by the scientific community that were implemented in this thesis. Part C shows the methods that we developed for the specific case study and the installed monitoring equipment.
Chapter 8 shows the water quantity calibration method, used to translate measured water levels into flow rates. Then, Chapter 9 describes the water quality calibration method for the turbidity probe, which was installed at the system's inlet. This calibration allows translating turbidity measurements into TSS concentrations. Chapter 10 illustrates the definition of two water use groups, based on the water use and reuse guidelines presented in Part B Chapter 6. Also in Chapter 10, a method is proposed with the aim of knowing the water uses directly from the information recorded by the spectrometer probe located at the wetland's outlet.
Section 11.1 shows a method that allows us to identify a first flush event with a variable or variables without waiting until the end of the event. A reliability evaluation method (Section 11.2) is proposed in order to complement the Kappa coefficient method (Section 3.6 Part A). Section 11.3 explains an evaluation method to identify the redundant variables that are not needed in the Support Vector Machine (SVM) method (Section 3.10 Part A) to simulate water uses. This method employs the Wilcoxon signed-rank test.
Finally, Chapter 12 shows the construction of an efficiency model / wavelength relation that uses on-line spectrometer data and the total load removal method. This general efficiency model would be employed to predict the outlet spectrum, which can be related to the final water use.
CHAPTER 8

WATER QUANTITY MONITORING SYSTEM CALIBRATION METHOD

The system was monitored by means of two triangular sharp-crested weirs and two continuous ultrasonic level sensors, in order to know the hydraulic behavior of the CW (constructed wetland). The weirs and the sensors were installed at the inlet and the outlet of the CW system (see Part B Chapter 4, Figure B-1 and Figure B-2). Since the constructor of the CW system did not deliver the weirs calibrated, we were not able to use the standard weir equation directly. Therefore, a calibration procedure was developed. For this purpose, and with the aim of having a constant flow, the pumping system of the CWRT and a pipeline system were used. A portable ultrasonic flow meter (Portaflow C by Fuji Electric) was used to measure the input flows, while the water levels were simultaneously recorded with the ultrasonic sensors. Figure C-1 shows the portable ultrasonic flow meter, the ultrasonic level sensor and the pipeline configuration at the system's inlet and outlet.

Figure C-1. Pipeline configuration for the inlet weir (left figure) and for the outlet weir (right figure)

From preliminary tests, a large variability of the measured data was observed, both in flows and water levels. Additionally, the flow at a given setting stabilized only after 10 minutes. Therefore, four calibration campaigns were done in order to account for that variability: two for the inflow and two for the outflow. Each flow was measured during 30 minutes and recorded every 10 seconds. During the calibration campaigns (April 16 and 17 for the inflow, September 25 and October 2, 2015 for the outflow) the flow was changed four times, in ascending and descending order, with the aim of detecting the variations mentioned above.
In sum, we obtained 180 pairs of flow rates and levels. After reviewing the recorded data, we observed dispersion in the measured flows: different values of flow were obtained for similar levels. Hence, groups or so-called clusters of flow rates and levels (one cluster for each given value of flow rate) were obtained. Figures C-2a and C-2b show the recorded levels and flows from the developed campaigns for the inlet and outlet, respectively. The closest levels were grouped as data clusters, shown as red-dash ellipses (see Figures C-2a and C-2b).

Figure C-2. (a) Inlet recorded flows vs. water level. The red-dash ellipses show an example of data clusters. (b) Outlet
recorded flows vs. water level. The red-dash ellipses show an example of data clusters.

Figure C-2a shows that the recorded inlet flows varied between 0.025 L/s and 7.54 L/s. Regarding the outlet, Figure C-2b shows that the flows varied between 0.66 L/s and 3.03 L/s. We could not measure flows lower than 0.66 L/s because of the huge dispersion of the results. Hence, instead of defining an explicit model that relates water flow and level, we developed a methodology that uses all the measured data. The method translates the water level into flow based on random selections and interpolation of the data. In detail, for levels within the measured range, the level is located between the two closest clusters of measured levels; then, two possible levels are chosen randomly and the corresponding flow is computed by linear interpolation. For levels that are out of the measured range, we computed the flow rate based on weir hydraulics using the field coefficient of discharge (Cd). Cd was calculated using the data collected in the calibration campaigns and equation E8-1, based on equation E2-1 for triangular sharp-crested weirs shown in Part A Chapter 4.

Q = (8/15) Cd tan(θ/2) (2g)^(1/2) H^(5/2)    (E8-1)

where Q is the measured flow rate, H is the head over the weir calculated with the measured levels, θ is the weir angle (90°) and g is the acceleration of gravity (9.81 m/s²). Figures C-3a and C-3b show Cd versus H for the inlet and outlet, respectively. For the inlet, the Cd values used are shown in blue. In the case of the outlet, we defined two Cd groups, shown in blue and red. These groups help us to determine flows lower or higher than the measured ones. So, for a level that was out of the measured range, a Cd was chosen randomly from all the selected Cd values and the flow was computed using equation E8-1.

Figure C-3. Coefficients of discharge (Cd) obtained vs. head over the weir (H) at (a) the inlet; blue circles show the Cd used to compute the flows for the levels that were out of the measured range. (b) Cd vs. H at the outlet; blue and red circles show the Cd used to compute the flows for the levels that were out of the measured range.

Figure C-4 shows the procedure explained above, which was repeated 5000 times (beyond this number of repetitions no variation in the computed flows was observed). Consequently, inflow and outflow confidence bands were obtained for each month of recorded data. As an example, Figure C-5 illustrates the flow rate confidence bands for the event of January 27th 2015.

[Flow diagram: for each minute t, if the level X is within the measured range, X is located between the two closest clusters and the flow is computed by linear interpolation; otherwise, a measured Cd is chosen at random and the flow is computed with equation E8-1.]
Figure C-4. Calibration method flow diagram for the quantity monitoring system: t: time; X: level in cm

Figure C-5a illustrates the inflow confidence bands and Figure C-5b shows the outflow confidence bands. Figure C-5a shows that the maximum flow at the inlet in January was 27.3 L/s. Regarding the outlet, Figure C-5b shows that the maximum flow was 0.62 L/s. We observed that the outflow from the CW starts before the inflow. There are various explanations for this phenomenon: two days before the illustrated event there was another rain event, hence there was water within the CW; the CW itself receives rainwater directly and, if it is full, it begins to drain, perhaps before any flow is registered at the entrance of the system because of the delay of the input flow (before flow reaches the inlet there are losses within the catchment basin); and, because of the nature of the CW, it holds water most of the time. These findings illustrate the hydraulic performance of the CW.



Figure C-5. Flow rate confidence bands of January, (a) inflow and (b) outflow

With the method presented above, we obtained flow rates at the inlet and outlet for all the recorded months. This method relates the flow directly to the recorded levels using measured field data. Finally, with the calculated flows and the pollutant concentrations per minute (methods described in Part B Sections 5.1 and 5.2), we are able to compute the system's efficiency and the first flush phenomenon. The script developed for this calibration method is presented in Appendix C-8-1.
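A minimal sketch of this translation is shown below (the full script is in Appendix C-8-1). The (level, flow) pairs and the Cd values are hypothetical stand-ins for the 180 calibration measurements, the draw of one pair below and one above the target level is a simplification of the two-closest-clusters selection, and only one random draw per level is shown, whereas the thesis repeats the procedure 5000 times to obtain the confidence bands.

set.seed(1)
h.meas  <- rep(c(2, 4, 6, 8, 10), each = 10) + rnorm(50, 0, 0.2)   # level clusters (cm)
q.meas  <- 0.05 * h.meas^2.5 * runif(50, 0.9, 1.1)                 # measured flows (L/s)
cd.meas <- runif(50, 0.55, 0.65)                                   # hypothetical field Cd values

level.to.flow <- function(x.cm) {
  if (x.cm >= min(h.meas) && x.cm <= max(h.meas)) {
    # within the measured range: draw one measured pair below and one above the target
    # level (the thesis restricts the draw to the two closest clusters) and interpolate
    lo <- which(h.meas <= x.cm); hi <- which(h.meas >= x.cm)
    i  <- lo[sample(length(lo), 1)]; j <- hi[sample(length(hi), 1)]
    if (h.meas[i] == h.meas[j]) return(q.meas[i])
    approx(h.meas[c(i, j)], q.meas[c(i, j)], xout = x.cm)$y
  } else {
    # outside the measured range: random field Cd and the 90-degree triangular
    # sharp-crested weir equation E8-1 (head converted from cm to m, Q in L/s)
    cd <- cd.meas[sample(length(cd.meas), 1)]
    8 / 15 * cd * tan(pi / 4) * sqrt(2 * 9.81) * (x.cm / 100)^2.5 * 1000
  }
}

sapply(c(3.2, 7.5, 14.0), level.to.flow)     # two in-range levels and one above the range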

CHAPTER 9

WATER QUALITY MONITORING SYSTEM CALIBRATION METHOD: TURBIDITY PROBES

The turbidity probes do not give a TSS measurement directly, so a calibration methodology was developed. This methodology was based on the work by Torres (2011) and Torres et al. (2013) (see Part B Section 5.2). It also uses the method explained in the previous chapter (Chapter 8), but instead of relating flows with levels, field turbidity values were related with TSS laboratory concentrations. As was mentioned in Part B Chapter 4, the turbidity probe was located at the constructed wetland (CW) system's inlet (see Figure B-2 in Part B Section 4.2).
Twenty-eight water samples from three rain events were used for this local calibration (see Table C-1). These samples were coupled with the corresponding turbidity field measurements. In line with Caradot et al. (2015), between 15 and 20 samples are required for the calibration. From preliminary tests, we observed that it was better to develop the calibration separately, one for the inflow and one for the outflow. Hence, 14 samples were used for the inflow turbidity calibration and 14 for the outflow.
Table C-1. Water samples per event for
turbidity probe calibration
Events Inflow Outflow
Mar-16-2015 5 5
Nov-5-2015 3 5
Nov-19-2015 6 4
Total of WS 14 14

Figure C-6 shows field turbidity vs. laboratory TSS concentrations. The relation between these variables is not linear; the slope changes when TSS values are greater than 10 mg/L. This figure illustrates the behavior of TSS concentrations and turbidity values at the inlet (red circles) and at the outlet (blue rhombi). At the inlet, the minimum (4.4 NTU and 7.4 mg/L) and maximum (28.3 NTU and 33.6 mg/L) values of these indicators were obtained. In contrast, at the outlet most of the values are concentrated in the area below 5 NTU of turbidity and 10 mg/L of TSS. This behavior can be related to the variation of water quality at the inlet during the event: higher pollutant loads are expected at the beginning of the event than at the end of it. Regarding the outlet, an initial wash-off of the pollutants retained by the CW during previous events could be the cause of high TSS and turbidity values.
The experiment requires the consideration of the uncertainty associated with the laboratory
TSS concentrations and the turbidity probe data. The methodologies proposed by Torres
(2011) and Torres et al. (2013) were used for the assessment of uncertainties and detection of
multivariate outliers, respectively. For the case of TSS, the precision of the scale used to
measure the masses was 0.001 g, and the precision of the test tube used to measure the volume
of each sample was 0.01 L. The precision of the turbidity probe was 0.01 NTU.

Figure C-6. Field turbidity measurement coupled with TSS lab concentrations, the red circles are the measurements at the
inlet and the blue rhombus are the measurements of the outlet.

In order to calibrate the online turbidity sensor data with laboratory results, and with the aim of using all the measured data, a methodology similar to the one described in Chapter 8 was developed: random selections and linear interpolation of the measured data. In this case, the field turbidity values and the TSS laboratory concentrations were the related variables. For turbidity values within the measured range, the value was located between the two closest clusters; then, two possible turbidity values were chosen randomly and the corresponding TSS was computed by linear interpolation. For turbidity values out of the measured range, the treatment was different from the one used in Chapter 8: two turbidity values closest to the value of interest were chosen randomly and then the TSS was calculated by extrapolating from the chosen turbidity values. Figure C-7 shows the procedure explained above, which was repeated 5000 times (after which no variation was observed). The script developed for this calibration method is presented in Appendix C-9-1.
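A minimal sketch of this translation is shown below (the full script is in Appendix C-9-1). The 14 coupled (turbidity, TSS) pairs are hypothetical placeholders and the two measured pairs are drawn among the three closest to the value of interest, a slight simplification of the selection described above; the 5000 repetitions are kept to show how a confidence band is obtained.

set.seed(1)
turb.meas <- c(4.4, 5.1, 6.3, 7.0, 8.2, 9.5, 11.0, 13.2, 15.8, 18.4, 21.0, 24.5, 26.9, 28.3)
tss.meas  <- c(7.4, 7.9, 8.8, 9.3, 10.1, 11.5, 13.4, 16.9, 20.2, 23.8, 26.5, 29.7, 31.8, 33.6)

turb.to.tss <- function(t.ntu) {
  # pick two measured turbidities among the three closest to the value of interest,
  # then inter- or extrapolate the corresponding TSS linearly
  ord  <- order(abs(turb.meas - t.ntu))
  pick <- sort(ord[sample(3, 2)])
  x <- turb.meas[pick]; y <- tss.meas[pick]
  y[1] + (t.ntu - x[1]) * diff(y) / diff(x)
}

tss.mc <- replicate(5000, turb.to.tss(31.0))   # turbidity above the measured range
quantile(tss.mc, c(0.025, 0.5, 0.975))         # TSS confidence band (mg/L)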
Figure C-8 shows an example of the TSS median values (in blue) and confidence bands (in grey) of August 2014 that were translated from the turbidity measurements (red line). Most of the time, the TSS values were above the turbidity values, except for TSS values above 5 mg/L. This observation can be explained by the slope changes observed in Figure C-6: the slope for TSS values greater than 10 mg/L and turbidity values greater than 11 NTU is steeper than for lower values of these indicators.
Consequently, inflow TSS concentration confidence bands were obtained for each month of recorded turbidity data using the explained methodology. Additionally, with the TSS values calculated from the turbidity measurements coupled to the flow rates (Chapter 8), the first flush phenomenon (Part A Section 2.3) can be quantified for each event.

[Flow diagram: for each minute t, if the inlet turbidity is within the measured range, it is located between the two closest clusters and the TSS is computed by linear interpolation; otherwise, the two closest measured values are chosen and the TSS value is extrapolated.]
Figure C-7. Methodology for turbidity probes calibration (t: minute value; Turb: inlet turbidity).

Figure C-8. Turbidity values are shown in red for August 2014. TSS median values are shown in blue and confidence bands in
grey.

CHAPTER 10

DEFINITION OF WATER USE GROUPS AND WATER USE LIMITS EMPLOYING SPECTROMETER ABSORBANCES

This Chapter illustrates the definition of two water use groups, based on the water use and reuse guidelines presented in Part B Chapter 6. Then, a method is proposed with the aim of knowing the water uses directly from the information recorded by the UV-Vis spectrometer probe located at the wetland's outlet, using the defined water use groups.
10.1 WATER USE GROUPS


In Chapter 6, the water use and reuse guidelines and the initial water uses of interest were presented. In order to simplify the number of water uses, two water use groups are defined based on the restricted and unrestricted water use concepts (EPA 2004). Restricted water use means that public exposure to the treated water is controlled; for unrestricted water use, in contrast, no limitations are imposed on body-contact activities (Asano 2007; EPA 2004). Therefore, the quality requirements for restricted water uses may not be as strict as those for unrestricted water uses (EPA 2004). The defined water use groups are: (i) USES A – unrestricted: urban reuse, agricultural reuse, recreational impoundments; (ii) USES B – restricted: agricultural reuse (food processed & non-food crops), landscape impoundments, restricted area irrigation. With the definition of the water use groups we are able to simplify the water use evaluation process: instead of handling multiple water uses, the decision is summarized in only two general groups.
Table C-2 shows the water quality indicators for the defined water use groups. These indicators were chosen because they are the most common parameters (EPA 2004) and the techniques for their determination were available at the university laboratory. The indicator values were selected as the most restrictive limits of the named guidelines; therefore, we are on the safe side.
Table C-2. Defined water use groups with the selected indicator values; ND: not detected (adapted from: Colombia 1985 (1); EPA 2004 (2); EU 2006 (3); MLIT 2005 (4); Pescod 1992 (5))
Group | Description | TSS (mg/L) | BOD5 (mg/L) | T (NTU) | pH | Total coliform (MPN/100 mL) | Fecal coliform (MPN/100 mL)
A | Urban reuse, agricultural reuse, recreational impoundments | <= 5 (2) | <= 10 (2) | <= 2 (2)&(4) | 5.0 (1) – 8.6 (4) | ND (2) | ND (1)&(2)
B | Agricultural reuse (food processed & non-food crops), landscape impoundments, restricted areas irrigation | <= 30 (2) | <= 30 (2) | | 6.0 – 9.0 (2)&(5) | | <= 200 (2)

10.2 WATER USES LIMITS EMPLOYING SPECTROMETER ABSORBANCES
According to Section 4.2 (Monitoring System), two spectrometer probes were installed: one at the inlet of the system and the other at the outlet. Additionally, in Section 5.1 (UV-Vis spectrometer probes calibration) it was explained that, in order to develop the spectrometer probe calibration, the laboratory reference concentrations have to be coupled with the measured absorbance spectra of the corresponding samples. Therefore, the water uses at the outlet could be known if a direct relation between samples and UV-Vis spectra can be established. The following method was proposed with the aim of knowing the water uses directly, employing only the information recorded by the spectrometer probe located at the outlet.
Each laboratory concentration from the sampling campaigns was compared with the water quality indicator limits proposed in the previous section for uses A (urban reuse, agricultural reuse, recreational impoundments) and B (agricultural reuse (food processed & non-food crops), landscape impoundments, restricted areas irrigation) (Table C-2). The samples that met at least three of the four requirements (TSS, BOD5, T, pH) for each use were grouped in use A and use B respectively, together with the UV-Vis spectra coupled with these samples. The selection of the UV-Vis spectra limits is explained as follows. The quartiles and box plot values of the first wavelength (200 nm) of the UV-Vis spectra of each use group were calculated. This wavelength was chosen because it is the first recorded wavelength of each spectrum and it corresponds to the UV range. For use A the upper whisker was chosen in order to increase the chance of being in this group (the wavelength values associated with the microbiological indicators are close to the upper whisker). Regarding use group B, the third quartile was chosen because it allows us to include more samples and also keeps us on the safe side: choosing wrongly between water use B and no use (NU) has more serious implications than choosing wrongly between water uses A and B. Then, among the 200 nm absorbances of the use A spectra, the value closest to the upper whisker was identified, and the limit of use A was defined as the UV-Vis spectrum corresponding to that 200 nm absorbance. For the selection of the use B limit the same procedure was used, but instead of identifying the 200 nm absorbance closest to the upper whisker, the one closest to the third quartile was used. Therefore, the spectra corresponding to these two 200 nm absorbances were chosen as the limits for uses A and B. We preferred to use the quartile method in order to choose one measured spectrum, instead of building a synthetic spectrum from the measured ones.
Then, the recorded spectra of each month were compared with the defined limits. If 95% of the absorbances were below the defined limit, the water fulfills the corresponding water use. Figure C-9 shows the defined absorbance spectra limits and an example of the proposed method. Red illustrates an absorbance spectrum that does not fulfill any use. An absorbance spectrum that achieves uses A and B is shown in blue, and an absorbance spectrum that achieves only use B is shown in cyan.
For most of the months, the absorbance spectra were measured every minute. Therefore, using this method we were able to identify when (in time) and for how long (during the measured period) the treated water meets a certain use. With this method the water use at the outlet can be known in a direct way, without taking water samples or using local or global calibrations. The script developed for this method is presented in Appendix C-10-1.
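The check itself can be sketched in a few lines of R (the full script is in Appendix C-10-1). The two limit spectra and the outlet spectrum below are hypothetical placeholders on an assumed grid of 214 wavelengths between 200 and 732.5 nm; the rule is the one stated above, a use being met when at least 95% of the absorbances lie below the corresponding limit spectrum.

set.seed(1)
lambda   <- seq(200, 732.5, by = 2.5)                # assumed wavelength grid (nm), 214 values
lim.A    <- 40 * exp(-(lambda - 200) / 150)          # hypothetical limit spectrum for use A
lim.B    <- 80 * exp(-(lambda - 200) / 150)          # hypothetical limit spectrum for use B
spec.out <- 55 * exp(-(lambda - 200) / 150) * runif(length(lambda), 0.9, 1.1)

use.of <- function(spec, lim.A, lim.B, frac = 0.95) {
  if (mean(spec <= lim.A) >= frac) "A and B"         # meeting use A also satisfies use B
  else if (mean(spec <= lim.B) >= frac) "B"
  else "No use"
}
use.of(spec.out, lim.A, lim.B)      # expected: "B" for this synthetic outlet spectrum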

Figure C-9. UV-Vis Spectra limits for use A (black-solid line) and use B (black-dash line); λ is the number of wavelength; Out
– Abs 1/m is the absorption at the outlet. Red color illustrates the absorbance spectrum that does not fulfill any use. As an
example of the proposed method the absorbance spectrum that achieves water use A and B is shown in blue. In cyan is shown
the absorbance spectrum that achieves the water use B.

CHAPTER 11

RULES FOR FIRST FLUSH IDENTIFICATION AND EVALUATION METHODS

This Chapter explains the methods developed for first flush (FF) identification, as well as evaluation methods that support techniques such as the Kappa coefficient (k) and SVM. For the FF identification, a rule-based method is proposed. Then, we present a reliability method that complements the Kappa coefficient evaluation method shown in Section 3.6. Finally, Section 11.3 shows an evaluation method that identifies redundant variables, with the aim of using only the needed variables in the Support Vector Machine (SVM) method (Section 3.10).

11.1 RULES FOR FIRST FLUSH IDENTIFICATION


As seen in Section 4.2 (Monitoring System), two spectrometer probes and two ultrasonic level sensors were installed at the system's inlet and outlet, respectively. With this monitoring installation the first flush (FF) phenomenon can be quantified, using the distribution of pollutant mass and volume in the stormwater discharge events (see Part A Section 2.3 and Part B Section 5.3). This can be useful to determine whether the FF is relevant to the system's performance and to take it into account in an eventual decision-making system for the end use of the water. Hence, the first flush (FF) phenomenon was determined with the recorded data for each event at the inlet of the constructed wetland.
In order to propose a decision-making (DM) tool for the operation of the constructed-wetland/reservoir-tank system with less monitoring equipment, the identification of an event as a first flush was used as a tool. As mentioned in Section 5.3, in order to determine the FF phenomenon it is necessary to have recorded the entire event. Hence, looking for a variable or variables that represent the first flush phenomenon became indispensable.
A hydrologic and hydraulic characterization was done for each analysed event. The evaluated
storm parameters were: Antecedent Dry Weather Period (ADWP), rainfall depth (P),
maximum storm intensity (Imax), mean storm intensity (Imean) and rain duration (Dplu). The
hydraulic performance variables defined were: maximum head over the entrance weir (Hmax),
mean head over the entrance weir (Hmean), difference between the head over the entrance weir at time i and time i+1 (Δhmax), and storm event duration according to the head over the entrance weir (Dweir). These variables were measured during the development of each event.
Therefore, an analysis of variance was done to identify a variable or variables that represent the first flush phenomenon. Tests were conducted with each one of the storm parameters and hydraulic variables versus the zones of the M(V) curves (Bertrand-Krajewski et al. 1998); Zones 1 and 2 are related to the first flush phenomenon. After running the analysis of variance, the variable or variables that represent the first flush phenomenon at the beginning of the event were found. These variables can therefore differentiate the zones related to the first flush phenomenon (Zones 1 and 2) from the others.
A model was proposed using the variables identified above. The model allows classifying an event as a first flush or not from the value of the chosen variable or variables. We then proceeded to define rules based on the quartile and box plot values of these variables, and each event was simulated with these rules. The rules are combinations of the quartile and box plot values of the chosen variables (q: quartile; lw: lower whisker; uw: upper whisker).
For example, let us consider that the chosen variables were ADWP and Hmean. Table C-3 shows five rules that were established in order to identify a first flush event. Rule A means that an event with Hmean greater than or equal to q1 and with an ADWP greater than or equal to q2 is a first flush event. Rules B and C are similar to Rule A, but instead of Hmean(q1) they use Hmean(lw), and instead of ADWP(q2) they use ADWP(q3) and ADWP(uw), respectively. Rules D and E use only one variable, Hmean(q3) and ADWP(q1) respectively. Finally, the results obtained with these rules were evaluated using Cohen's kappa coefficient (Ckc) (see Part A Section 3.6) and the reliabilities of the FF and No FF events (see Section 11.2).
Table C-3. Rules that identify a first flush event (q: quartile; lw: lower whisker; uw: upper whisker). Hmean: mean head over the entrance weir; ADWP: Antecedent Dry Weather Period
Rule Description
Rule A First quartile (q1) of Hmean and second quartile (q2) of ADWP
Rule B Lower whisker (lw) of Hmean and third quartile (q3) of ADWP
Rule C Lower whisker (lw) of Hmean and upper whisker (uw) of ADWP
Rule D Third quartile (q3) of Hmean
Rule E First quartile (q1) of ADWP

Hence, with this method we are able to identify a first flush event with a variable or variables, without waiting for the end of the event. The script developed for this method is presented in Appendix C-11-1.
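As an illustration of the rule evaluation (assuming a data frame events with per-event Hmean and ADWP values and the reference first flush classification; names are hypothetical), the quartile and box plot thresholds can be computed and applied as follows:

# Minimal sketch (assumed inputs): 'events' has per-event Hmean and ADWP values and a
# logical 'ff_obs' column with the reference first flush classification. Thresholds
# come from the quartiles and box plot statistics of the recorded events (Table C-3).
hmean_q  <- quantile(events$Hmean, probs = c(0.25, 0.50, 0.75))
adwp_q   <- quantile(events$ADWP,  probs = c(0.25, 0.50, 0.75))
hmean_bp <- boxplot.stats(events$Hmean)$stats  # lower whisker, hinges, median, upper whisker
adwp_bp  <- boxplot.stats(events$ADWP)$stats

# Rule A: Hmean >= q1(Hmean) and ADWP >= q2(ADWP) flags a first flush event
ff_rule_A <- events$Hmean >= hmean_q[1] & events$ADWP >= adwp_q[2]
# Rule D: only one variable, Hmean >= q3(Hmean)
ff_rule_D <- events$Hmean >= hmean_q[3]

table(observed = events$ff_obs, simulated = ff_rule_A)  # confusion matrix for Rule A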

11.2 USING RELIABILITY AS AN EVALUATION METHOD


The kappa coefficient (k) (see Part A Section 3.6) was used in this thesis to compare the observed monitoring data with the simulated results when both choices are present. However, this coefficient does not work well when the observed monitoring data contain only one choice. Table C-4 shows an example of this situation.
Table C-4. Observed results versus simulated results.
                     Observed FF    Observed No FF
Simulated FF         10 (a)         0 (b)
Simulated No FF      5 (c)          0 (d)

FF (choice 1) denotes a FF event; No FF (choice 2) denotes an event that did not present the FF phenomenon. According to Table C-4, the observed FF events were 15 (a plus c), the simulated FF events were 10 and the simulated No FF events were 5. The k of this example is equal to 0. This value does not give a real evaluation of the simulation, because the method evaluates the agreement between the two options. Therefore, the use of another indicator becomes necessary.
As a solution, the concept of reliability was implemented in this case. It was calculated using equations E11-1 and E11-2:

R_simulated-choice1 = a / (a + c)        (E11-1)

R_simulated-choice2 = d / (b + d)        (E11-2)

Continuing with the example, the reliabilities of the FF and No FF events are (Equations E11-3 and E11-4):

R_FF = a / (a + c) = 10 / (10 + 5) = 0.67        (E11-3)

R_NoFF = d / (b + d) = 0 / (0 + 0), which is undefined        (E11-4)

For this example, an R_FF of 0.67 means that the implemented tool was able to simulate 67% of the observed FF events. R_NoFF cannot be computed because no No FF events were observed (the denominator is zero).
When the results of the observed monitoring data were all of the same choice, the reliability of that choice was calculated in order to have an evaluation of the simulation. This method was implemented in the first flush and water uses simulations. The script developed for this method is presented in Appendix C-11-2.
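The two indicators can be sketched together as follows (assumed inputs: logical vectors of observed and simulated classifications per event; names are hypothetical):

# Minimal sketch: 'obs' and 'sim' are logical vectors per event (TRUE = FF, FALSE = No FF).
# Cohen's kappa and the two reliabilities are computed from the 2x2 matrix of Table C-4.
evaluate_ff <- function(obs, sim) {
  n_a <- sum(sim & obs);  n_b <- sum(sim & !obs)    # a, b: simulated FF
  n_c <- sum(!sim & obs); n_d <- sum(!sim & !obs)   # c, d: simulated No FF
  n  <- n_a + n_b + n_c + n_d
  po <- (n_a + n_d) / n                                                 # observed agreement
  pe <- ((n_a + n_b) * (n_a + n_c) + (n_c + n_d) * (n_b + n_d)) / n^2   # chance agreement
  c(kappa = (po - pe) / (1 - pe),
    R_FF   = n_a / (n_a + n_c),    # E11-1: share of observed FF events reproduced
    R_NoFF = n_d / (n_b + n_d))    # E11-2: share of observed No FF events reproduced
}

# Example of Table C-4: 15 observed FF events, 10 simulated as FF and 5 as No FF
evaluate_ff(obs = rep(TRUE, 15), sim = c(rep(TRUE, 10), rep(FALSE, 5)))
# kappa = 0, R_FF = 0.67, R_NoFF = NaN (no observed No FF events)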

11.3 EVALUATION METHOD TO IDENTIFY THE REDUNDANT VARIABLES


Support Vector Machine (SVM) (see Section 3.10 Part A) was used to classify water uses. This method allows us to simulate the water uses from the storm parameters and hydraulic performance variables; therefore, the quality measurements are no longer needed to define the water uses. The SVM was run 1000 times, randomly changing the validation and calibration data in each execution.
The output variables were the water uses (uses A and B, see Section 10.1) and the input variables were the storm parameters and hydraulic performance variables (18 input variables). The evaluated storm parameters were: Antecedent Dry Weather Period (ADWP), rainfall depth (P), maximum storm intensity (Imax), mean storm intensity (Imean) and rain duration (Dplu). The hydraulic performance variables defined were: maximum head over the entrance weir (Hmax), mean head over the entrance weir (Hmean), maximum difference between the heads over the entrance weir at times i and i+1 (Δhmax) and storm event duration according to the head over the entrance weir (Dweir).
The results obtained with SVM were evaluated using Cohen's kappa coefficient (Ckc) (see Section 3.6 Part A) and the reliabilities of use A and use B (see Section 11.2). After obtaining these results, a redundant variable analysis was carried out with the aim of identifying which variables were not essential, that is, the minimum set of variables needed to obtain the water use. Hence, the Wilcoxon signed-rank test (see Section 3.4 Part A) was used to identify the variables that can be removed. This procedure is explained as follows.
First, the SVM model was run without one variable (v1) of the 18 variables mentioned above (the storm parameters and hydraulic performance variables). Then, the Ckc was calculated for this simulation using 17 variables. This Ckc-v1 value was compared with the Ckc-18 obtained from the simulation with all the variables, using the Wilcoxon signed-rank test. A p-value higher than 0.05 means that the removed variable (v1) does not generate a significant difference, so the variable v1 can be put aside. All 18 variables were removed one at a time.
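As an illustration, the following sketch outlines this comparison under assumed inputs: a data frame dat with one row per event, a factor column use and the 18 input variables (column names such as Dweir are placeholders), with Cohen's kappa computed over repeated random calibration/validation splits.

library(kernlab)   # ksvm(), with the vanilladot kernel used in Part B Section 5.1

ckc <- function(obs, pred) {                       # Cohen's kappa coefficient (Ckc)
  tab <- table(obs, pred)
  po  <- sum(diag(tab)) / sum(tab)
  pe  <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2
  (po - pe) / (1 - pe)
}

run_svm_ckc <- function(dat, drop = NULL, n_runs = 1000) {
  vars <- setdiff(names(dat), c("use", drop))
  replicate(n_runs, {
    cal  <- sample(nrow(dat), size = round(2 / 3 * nrow(dat)))  # random calibration set
    fit  <- ksvm(use ~ ., data = dat[cal, c("use", vars)], kernel = "vanilladot")
    pred <- predict(fit, dat[-cal, vars, drop = FALSE])
    ckc(dat$use[-cal], pred)
  })
}

ckc_18 <- run_svm_ckc(dat)                   # Ckc with all 18 variables
ckc_v1 <- run_svm_ckc(dat, drop = "Dweir")   # Ckc without one variable (v1)

# Wilcoxon signed-rank test over the 1000 runs: a p-value > 0.05 means that removing
# the variable does not produce a significant difference, so it can be put aside.
wilcox.test(ckc_18, ckc_v1, paired = TRUE)$p.value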
Figure C-10 summarizes the method explained above. After recognizing the variables that can be put aside, a second simulation was done removing two variables at a time (e.g. v1&v2). Then, Ckc-v1&v2 was compared with Ckc-v1 using the Wilcoxon signed-rank test. This second simulation was done for all the possible pairs, a total of 160 pairs (simulations). Once again, the pairs of variables that can be removed were identified. Figure C-11 shows the second simulation procedure.

Figure C- 10. Process of the first simulation removing one variable at a time

Figure C- 11. Process of the second simulation removing two variables at a time

Continuing to remove four, five or six variables at a time would be a long and time-consuming procedure. Therefore, the following method was proposed. A third simulation was done removing three variables at a time, starting from the pairs identified above (e.g. v1&v2&v3). The trios of variables that can be removed were then identified using the Wilcoxon signed-rank test, comparing Ckc-v1&v2&v3 with Ckc-v1&v2. The pairs that could not become a trio were marked; these variables were identified as the "must be in the simulation" variables. The trios formed by the remaining variables were searched and the most repeated trios were identified. The variables that formed the most trios were marked and classified as possible redundant variables. Figure C-12 sketches the procedure explained above.

Figure C- 12. Process of the third simulation removing three variables at a time

The ideal variables to remove would be those whose values are only known at the end of the event (e.g. Dweir, the storm event duration according to the head over the entrance weir, or Dplu, the rain duration, among others). The chosen trio would be the one that contains more of these variables. Therefore, the SVM classification of water uses would use fewer variables while keeping the same performance as with the 18 variables.
The evaluation method that employs the Wilcoxon signed-rank test helped to identify the
essential variables that are needed in the SVM method to simulate water uses. The script
developed for this method is presented in Appendix C-11-3.

CHAPTER 12

EFFICIENCY OF THE CONSTRUCTED WETLAND SYSTEM

In order to simulate the water quality at the system outlet using as little monitoring equipment as possible, the efficiency of the constructed wetland (CW) was calculated. On-line spectrometer data and the total load removal method were used to evaluate the efficiency of the CW. As mentioned in Chapter 6, the spectrometer probes were located at the system's inlet and outlet. With the aim of employing the method explained in Section 10.2 (water use limits employing spectrometer absorbances), the UV-Vis spectra were used instead of Event Mean Concentrations (EMC). Usually, the total load removal is calculated using EMC; however, as we are interested in on-line decisions, the EMC was not used because it groups the pollutant concentrations of the entire event into one single number and this time resolution does not provide information per minute. Therefore, to evaluate the efficiency of the CW, equation E12-1 was implemented.

Eff(λ, t) = 1 − [Abs_out(λ, t+19) · Vout(t+19)] / [Abs_in(λ, t) · Vin(t)]        (E12-1)

where Abs_in(λ, t) and Abs_out(λ, t+19) are the absorbances of the UV-Vis spectra at the inlet and outlet, and Vin(t) and Vout(t+19) are the runoff volumes at the inlet and outlet of the system, respectively. Nineteen minutes was defined as the retention time within the CW, according to the results found by Hernández et al. (2017) using the Cross-correlation Function (CCF) applied to the first principal components of the affluent and effluent absorbance data of the CW. The efficiency was calculated for each storm event, and equation E12-1 was used only when the following conditions were met: Vin > Vout and Vout ≠ 0. Only 31 of the 80 events (39%) met these conditions for the efficiency calculation. During the monitoring period, the ultrasonic level sensor located at the inlet worked most of the time without failures. On the contrary, the sensor at the outlet did not work well, because it was located farther away than the one at the inlet and its cables and joints presented corrosion. Thus, most of the time it was not possible to obtain reliable outflow volume measurements. Hence, for those months, a relation (rel) between the inlet and the outlet absorbances was used instead of equation E12-1.
After calculating the efficiency per event using the method described above, a general efficiency model was determined in order to simulate the water quality at the outlet of the system. A random variable's density function, estimated with a non-parametric kernel function (Tsybakov 2009) (see Part B Section 3.9), was assessed with the efficiencies and the absorbance relations computed for each storm event. Therefore, a non-parametric function of the efficiency and of the absorbance relation for each wavelength was obtained. As the UV-Vis spectrum has 214 wavelengths, 214 kernel models were calculated. The script developed for the general efficiency model is presented in Appendix C-12-1.
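As a minimal sketch of this step (assuming a matrix eff of per-event efficiencies with one column per wavelength and hypothetical column names), a kernel density model can be fitted per wavelength with R's density() function and then sampled:

# 'eff' is a matrix of per-event efficiencies obtained with E12-1, one column per
# wavelength (214 columns, hypothetical names such as "X437.5").
kernel_models <- apply(eff, 2, density, na.rm = TRUE)  # one density() model per wavelength

# Drawing possible efficiencies for one wavelength treats the efficiency as a random
# variable instead of a single global number.
d437    <- kernel_models[["X437.5"]]
sim_eff <- sample(d437$x, size = 5000, replace = TRUE, prob = d437$y)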
With the general efficiency model, the water quality at the outlet of the system can be simulated; when the efficiency could not be computed, the direct relation (rel) between the inlet and the outlet absorbances per wavelength was used instead.
With the methodology explained in this Chapter, the outlet spectra can be predicted and can therefore be related to the final water use. The script developed for this simulation is presented in Appendix C-12-2. As an example, Figures C-13 and C-14 show the calculated absorbance evolution at the outlet (λout) for λ = 437.5 nm of the event of September 18th 2014 12:33 using the explained methodology. The wavelength 437.5 nm was selected because the Spearman correlation between this wavelength and TSS is about 0.93 (p-value < 0.05). The upper left part shows the absorbance at the inlet (λin) for 437.5 nm (Figure C-13a). Figure C-13b shows the volumes at the inlet and outlet during the event; in this figure it can be observed when the following conditions are met: Vin greater than Vout and Vout greater than Vin. Figure C-13c illustrates the kernel model for the wavelength 437.5 nm, and Figure C-14 shows the simulated λout for λ = 437.5 nm.
Figure C-13a shows how the values of λin increase during the event. Figure C-13b illustrates the inlet (black line) and the outlet volume (red line). This particular event shows the problems associated with the ultrasonic level sensor at the outlet: note the abrupt change of the volume close to the 200-minute mark. The kernel model efficiency for the wavelength 437.5 nm (Figure C-13c) prioritizes the positive efficiency values. From the simulated λout, it can be observed that the values for the first 100 minutes are negative, probably because of high positive efficiency values. The peak of λout occurred between 100 and 200 minutes, whereas for λin it was close to 200 minutes. This observation can be related to pollutant wash-off within the CW. More results will be shown in Chapter 19 of this thesis.

Figure C-13. Process to obtain the wavelength 437.5 nm at the outlet during the event of September 18th 2014 12:33: (a) λin = 437.5 nm; (b) Vin (black line) and Vout (red line); (c) kernel model efficiency for the wavelength 437.5 nm.

Figure C-14. Simulated wavelength 437.5 nm (λout) at the outlet during the event of September 18th 2014 12:33.

CONCLUSIONS – PART C

The proposed water quantity calibration method can be used to translate water levels directly into flows in the CWRT (constructed-wetland/reservoir-tank) system. It was necessary to develop this method because the constructor of the CWRT system did not deliver calibrated weirs, so the standard weir equation could not be used. The described method was the simplest way to calibrate the installed weirs with the available monitoring system and under budget restrictions. This method uses the observed data instead of defining a single explicit model: during the calibration process all the measured data can be used, so the relation between levels and flows is not compressed into an explicit model.
It is worth mentioning that the water quantity calibration method has a limitation: it depends on the measured field data. For the outlet weir it is therefore recommended to carry out additional calibration campaigns, ensuring that the pumping system is working well or using another system. Also, if investment is possible, it is recommended to repair the weir leaks or to install calibrated weirs. We also suggest measuring the flow in another way, for example directly in the inflow pipes without using the weir.
Regarding the water quality calibration using the turbidity probe, the aforementioned method allows us to obtain TSS concentrations per minute and TSS confidence bands directly for each month of recorded turbidity data. As this method depends on the field data, it is recommended to undertake more sampling campaigns focused on the turbidity probe. The first flush phenomenon can be quantified for each event using the TSS concentrations and the inflow (Chapter 13) per minute. Identifying an event with the first flush phenomenon could be a determinant of the quality of the water entering the constructed wetland (CW). Furthermore, the inlet TSS concentrations calculated from the turbidity can be compared with the inlet TSS concentrations obtained with the spectrometer (Part D Chapter 15); this comparison gives relevant information on the behaviour of these different probes.
Regarding the definition of the water use groups A (urban reuse, agricultural reuse, recreational impoundments) and B (agricultural reuse –food processed & non-food crops–, landscape impoundments, restricted areas irrigation), this approach lets us simplify the water use evaluation process and gives the opportunity to know the water use directly at the outlet of the CW. Therefore, with the UV-Vis spectra recorded every minute for most of the monitored months, we are able to know the outflow water use without taking water samples or using local or global probe calibrations. As mentioned, this method only uses five water quality indicators. It is recommended to add more water quality indicators (e.g. heavy metals, colour, among others) to these defined groups to strengthen the obtained results.
Concerning the rules method for first flush identification, this method lets us use one or two variables (hydrologic and/or hydraulic parameters) to identify an event with the first flush phenomenon. This approach simplifies the first flush detection; otherwise, to know whether an event presents this phenomenon we would have to wait until the end of the event and an on-line (real time) decision-making would not be possible. This method depends on the number of storm events with hydrologic and hydraulic parameters related to the detection of the first flush phenomenon; therefore, it is recommended to increase the database to strengthen the results.
The kappa coefficient (k) was used in this thesis to compare the observed monitoring data with the simulated results when both choices are present. Reliability as an evaluation method becomes essential when the observed monitoring data are all of the same choice and the kappa coefficient does not give an interpretable result. With these two indicators we are able to evaluate the simulation in all possible cases: when the observed monitoring data contain only one choice or when they contain the two possible choices. These two methods were implemented in the first flush and water uses simulations (Part E – Decision-making tool).
To simulate water uses, the Support Vector Machine (SVM) method (Section 3.10 Part A) was used. With the aim of reducing the number of variables (storm parameters and hydraulic performance variables), the Wilcoxon signed-rank test (see Section 3.4 Part A) was used to identify the variables that can be removed. Therefore, this evaluation method gives us the chance to employ fewer variables to simulate the water uses with SVM.
Finally, with the aim of simulating the water quality at the system outlet using less monitoring equipment, the assessment of the CW's efficiency becomes relevant. A general efficiency model was built using on-line spectrometer data, the total load removal method and a random variable's density function estimated with a non-parametric kernel function. Usually, the total load removal is calculated using event mean concentrations (EMC) of pollutants; the proposed method employs UV-Vis spectra instead, in order to use the method developed in Section 10.2 (water uses limits employing spectrometer wavelengths). Therefore, with the general efficiency model we are able to predict the outlet spectra, which can be related to the final water uses. Since this method depends on the inflow and outflow volumes, it is recommended to improve the reliability of the water quantity data by enhancing the flow measurement.

REFERENCES – PART C

Asano, T. (2007). Water reuse: issues, technologies, and applications. McGraw-Hill Professional.
Bertrand-Krajewski, J.-L., Chebbo, G., and Saget, A. (1998). “Distribution of pollutant mass vs
volume in stormwater discharges and the first flush phenomenon.” Water Research, 32(8), 2341–
2356.
Caradot, N., Sonnenberg, H., Rouault, P., Gruber, G., Hofer, T., Torres, A., Pesci, M., and
Bertrand-Krajewski, J.-L. (2015). “Influence of local calibration on the quality of online wet
weather discharge monitoring: feedback from five international case studies.” Water Science and
Technology: A Journal of the International Association on Water Pollution Research, 71(1), 45–51.
Colombia. (1985). DECRETO 1594 DE 1984. Usos del agua y residuos líquidos. DECRETO 1594
DE 1984.
EPA. (2004). “Guidelines for Water Reuse.” Washington, DC.
EU. (2006). Directive 2006/7/EC of the European Parliament and of the Council of 16 February 2006
Concerning the Management of Bathing Water Quality and Repealing Directive 76/160/EEC. L64, 37–
51.
Hernández, N., Camargo, J., Moreno, F., Plazas-Nossa, L., and Torres, A. (2017). “ARIMA as
a Forecasting Tool for Water Quality Time Series Measured With Uv-Vis Spectrometers in a
Constructed Wetland.” Tecnología y Ciencia del Agua, III(5), 127–139.
MLIT. (2005). Manual on Water Quality for Reuse of Treated Municipal Wastewater. MLIT-Ministry
of Land, Infrastructure, and Transportation, Tokyo, Japan.
Pescod, M. B. (1992). Wastewater treatment and use in agriculture - FAO irrigation and drainage paper
47. FAO, Rome.
Torres, A. (2011). “Metodología para la Estimación de Incertidumbres Asociadas a
Concentraciones de Sólidos Suspendidos Totales Mediante Métodos de Generación Aleatoria.”
Rev. Tecno Lógicas, 26, 181–200.
Torres, A., Araújo Acosta, J. M., González Acosta, M., Vargas Luna, A., and Lara-Borrero, J.
A. (2013). “Metodología para Estimar Concentraciones de SST en Tiempo Real en
Hidrosistemas Urbanos a Partir de Mediciones de Turbiedad. Turbidity-Based Methodology
For Real-Time TSS Concentrations Estimates In Urban Water Systems.” Ciencia e Ingeniería
Neogranadina, 23–1.
Tsybakov, A. B. (2009). Introduction to Nonparametric Estimation. Springer Series in Statistics,
Springer New York, New York, NY.

PART D
SUPPORTING RESULTS
Part D shows the results obtained using the methods described in Parts B and C, among them the probes' calibration and the quantification of the first flush phenomenon. These methods generated a base from which the system's performance could be quantified.
This Part begins with the results from the sampling campaigns (Chapter 13), which were used in Chapter 15 for the development of the quality probes' calibration. Chapter 14 then illustrates the water quantity calibration results. With the calibration results for water quantity (Chapter 14) and water quality (Chapter 15), the first flush phenomenon was assessed using the spectrometer and the turbidity probes, respectively (Chapter 16). Finally, Chapter 17 compiles the system's efficiency performance using the results from Chapters 14 and 15.
CHAPTER 13

SAMPLING CAMPAIGNS RESULTS

Table D-1 shows the number of samples taken per campaign and a summary of the water quality indicator (WQI) results. The complete results are presented in Appendix D-13-1. In total, 65 water samples were analysed: 36 at the constructed wetland's inflow and 29 at the outflow. During each event, samples were collected every two to five minutes. Triplicate analyses were carried out, for a total of 195 sub-samples. All the sub-samples were analysed for TSS, BOD5, turbidity (T) and pH in the water quality laboratory of the School of Engineering at the PUJ, following the procedures established by the Standard Methods (Rice et al. 2012).
Table D-1. Water samples per campaign and summary of the laboratory results at the system's inlet and outlet. Minimum and maximum values for TSS, BOD5, turbidity (T) and pH.
Campaign | Samples inflow | Samples outflow | TSS in (mg/L) | TSS out (mg/L) | BOD5 in (mg/L) | BOD5 out (mg/L) | T in (NTU) | T out (NTU) | pH in | pH out
Apr-22-14 | 5 | 2 | 1.7–23.2 | 5.2–11.7 | 12.8–1.0 | 0.6–2.7 | 2.8–19.6 | 8.8–10.9 | – | –
May-6-14 | 5 | 5 | 11.5–50.7 | 6.1–7.3 | 7.0–2.8 | 0.3–6.1 | 12.6–14.5 | 8.9–9.7 | – | –
Oct-9-14 | 5 | 5 | 28.4–40.3 | 3.9–5.8 | 5.9–3.9 | – | 36–47 | 6.8–7.3 | 7.5–7.7 | 6.5–6.8
Mar-3-15 | 5 | 3 | 96–172 | 4.8–5.4 | 5.6–3.1 | 0.5–1.2 | 40–70 | 5.7–6.7 | 6.4–7.2 | 6.6–6.7
Mar-16-15 | 5 | 5 | 20–28.9 | 4.3–6.5 | 8.9–10.8 | 5.9–8.5 | 26–32 | 9.7–12 | 7.7–8.0 | 6.8–7
Nov-5-15 | 5 | 5 | 7.4–90.8 | 6.5–8.7 | 6.3–4.3 | 3.6–3.8 | 8.4–39 | 5–6 | – | 6.5–6.6
Nov-19-15 | 6 | 4 | 18–33.6 | 3.6–4.6 | 1.8–2.0 | 0.9 | 23–32 | 8.1–8.8 | 7.7–8.3 | 6.3–6.6

Corrales and Malagón (2014) conducted a study of pathogen removal within the CW. After five sampling campaigns (carried out from October to November 2014), the authors found that, at the outlet, the Total Coliforms varied between 5.20 MPN and 2612.5 MPN and the E. coli (Fecal Coliforms) varied between 1 MPN and 152 MPN. Based on their results, we decided to carry out one additional campaign to corroborate these findings. From this campaign (May 19th 2016), the results show that E. coli were below the measurement limit (<1 MPN), Total Coliforms ranged between 104.6 MPN and 325.5 MPN and BOD5 between 6 mg/L and 14 mg/L. The spectra that correspond to the microbiological samples were compared with the spectra limits defined in Part C Section 10.2. We found that the spectra from these samples were below the defined spectra limits. Therefore, with these results we decided to focus on the other parameters rather than on the microbiological ones.
For the calibration procedure of the water quality monitoring system using the UV-Vis spectrometer, all the campaigns were used. Regarding the turbidity, three campaigns were used: Mar-16-15, Nov-5-15 and Nov-19-15, for a total of 28 samples, which is in line with the recommendation of Caradot et al. (2015) that between 15 and 20 samples are required for the calibration.

On the other hand, all the water quality indicator results were compared with the guideline limits defined for uses A and B (see Part C Section 10.1). Use A was defined as: urban reuse, agricultural reuse and recreational impoundments. Use B was defined as: agricultural reuse (food processed & non-food crops), landscape impoundments and restricted areas irrigation. From this comparison, the samples that met at least three of the four WQI (TSS, BOD5, T, pH, see Part C Section 10.2) were grouped into water use A and water use B respectively. For example, Table D-2 illustrates two events, Mar-03-15 and Nov-19-15, with the WQI TSS, BOD5, T and pH. Only one sub-sample of water sample #33 (Mar-03-15) fulfils water uses A and B (see Table D-2, in bold); the other two fulfil water use B. In the case of water sample #62, all the sub-samples fulfil use A.
Table D-2. Mar-03-15 and Nov-19-15 outflow water sub-samples. In bold are the sub-samples that fulfil water uses A and B.
Event | Sample | TSS (mg/L) | BOD5 (mg/L) | T (NTU) | pH
Mar-03-15 | 33 | 5.4 | 1.19 | 5.7 | 6.7
Mar-03-15 | 33 | 4.8 | 1.08 | 5.8 | 6.7
Mar-03-15 | 33 | 5.0 | 0.74 | 6.7 | 6.7
Nov-19-15 | 62 | 3.57 | 0.90 | 8.1 | 6.4
Nov-19-15 | 62 | 3.57 | 0.86 | 8.3 | 6.3
Nov-19-15 | 62 | 3.93 | 0.85 | 8.5 | 6.3
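The grouping rule can be sketched as follows; the guideline limits shown are hypothetical placeholders (the actual limits are those referenced in Part C Section 10.1), and a sub-sample is assigned to a use when at least three of its four indicators meet the corresponding limits.

# Minimal sketch (assumed inputs and limits): each sub-sample has TSS, BOD5, T and pH.
meets_use <- function(sample, limits) {
  ok <- c(sample$TSS  <= limits$TSS,
          sample$BOD5 <= limits$BOD5,
          sample$T    <= limits$T,
          sample$pH >= limits$pH_min & sample$pH <= limits$pH_max)
  sum(ok) >= 3                      # at least three of the four indicators satisfied
}

limits_A <- list(TSS = 5,  BOD5 = 10, T = 2,  pH_min = 6, pH_max = 9)   # hypothetical values
limits_B <- list(TSS = 30, BOD5 = 30, T = 10, pH_min = 6, pH_max = 9)   # hypothetical values

sub_sample <- list(TSS = 3.57, BOD5 = 0.90, T = 8.1, pH = 6.4)          # first sub-sample of #62
c(use_A = meets_use(sub_sample, limits_A), use_B = meets_use(sub_sample, limits_B))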

Table D-3 shows the sub-samples that met water uses A and B. A total of 22 sub-samples fulfil the requirements for use A and 128 sub-samples fulfil the requirements for use B. There were more samples (the entire triplicate) that fit use B (e.g. May 6th 2014, samples #9,10,11,12,13,14 and November 5th 2015, samples #48,49,50,51,52,53,54,55) than use A (e.g. March 16th 2015, sample #41).
Table D-3. Water sub-samples per campaign that met the guideline limits for uses A and B. The sample ids are in parentheses (a: one sub-sample; b: two sub-samples; no letter: the entire triplicate).
Campaigns USE A USE B
Apr-22-14 15 (1,3,4,5,7)
May-6-14 23 (9,10,11,12,13,14,15a,16a,17)
Oct-9-14 4 (19, 21a)
Mar-3-15 4 (33a,34b,35a) 5 (33a,33b,34a,34b,35a,35b)
Mar-16-15 5 (41,42b) 25 (36,37,38,39,40,41,42a,42b,43,44,45)
Nov-5-15 1 (49a) 24 (48,49,49a,50,51,52,53,54,55)
Nov-19-15 12 (62,63,64,65) 16 (56,57b,58b,59,60,61,62,63,64,65)
May-3-16 16 (67b,68b,69b,70b,71b,72b,73b,74b)
TOTAL of sub-samples 22 128

We can observe that the water quality (and hence the water use) can change from one event to another, and we even observe the same behaviour within a single event. This variability should be taken into account for a possible decision-making system; therefore, it seems pertinent to consider on-line water quality measurements. Some WQI can be measured directly on-line, such as turbidity and pH, but other WQI require indirect measurements (e.g. absorbances) that have to be coupled with a calibration in order to arrive at a WQI. Finally, it is possible that this variability is associated with phenomena already identified in the literature, such as the first flush. Hence, it seems pertinent to study such phenomena in this case.

CHAPTER 14

WATER QUANTITY MONITORING SYSTEMS: CALIBRATION


RESULTS

The water levels (Part B Section 4.2) database is composed of 11 months of levels continuously recorded from June 2014 to June 2015. After analysing this database, the beginning of the storm events could be identified: levels above 101 cm marked the event start (Figure D-1: in red, the events of October 2014). We measured the distance between the floor and the inlet constructed wetland weir crest to verify the identified limit of 101 cm. The end of an event was then identified when the level returned to 101 cm or below. For the 11 months, 175 storm rain events were identified.
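A minimal sketch of this event identification, assuming a data frame wl with a time column and the per-minute level h_cm at the system entrance (hypothetical names), is the following:

# An event starts when the level rises above 101 cm and ends when it returns to it.
threshold <- 101
wet <- wl$h_cm > threshold
event_start <- which(diff(c(FALSE, wet)) ==  1)  # upward crossings of the threshold
event_end   <- which(diff(c(wet, FALSE)) == -1)  # return to or below the threshold

events <- data.frame(start = wl$time[event_start],
                     end   = wl$time[event_end])
nrow(events)   # number of storm events identified in the record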

Figure D-1. Water levels at the system entrance: October 2014.

Then, using the weir calibration developed (see Part C Chapter 8), the levels were translated into flow rates. Table D-4 shows the rainfall characteristics and the CWRT system performance for 35 events. The complete database is presented in Appendix D-14-1.
For the analysed period, rain intensities range between 15 mm/h and 92 mm/h. Results show that during 2014 the CW handled flow peaks between 0.04 L/s and 50.6 L/s (e.g. Figure D-2, recorded flows of October 2014 and March 2015). In addition, outflow runoff peaks vary between 2% and 92% (see Cp in Table D-4) of those observed for the inflow. In terms of lag time, the CW delays the runoff hydrographs between 9 and 696 minutes (see k in Table D-4); this variation is mainly a function of the events' total duration (significant Spearman correlation with a p-value < 0.05). During the first semester of 2015 the behaviour was similar. For the first period (second semester of 2014), we had problems with the ultrasonic level sensor located at the outlet, and thus it was not possible to obtain outflow cumulated volumes for long periods. For the second period (first semester of 2015), we could compute the volume retained by the CW: during this semester, the system was able to retain up to 92% of the total inflow volume.
Now, with the flows per month and per event, the first flush phenomenon (Chapter 16) and the system efficiency (Chapter 17) could be assessed.
Table D-4. Rainfall Characteristics and Constructed-Wetland/Reservoir-Tank System Performance.
Event start | Rain duration (hh:mm) | Imax (mm/h) | Imean (mm/h) | Event duration (min) | k (min) | Cp | Cvol | Max inflow (L/s) | Max outflow (L/s)
Sep-03-14 17:55 3:38:00 15.36 2.11 381 222 0.13 0.21 3.00 0.39
Sep-14-14 00:43 5:00:00 15.36 0.61 300 63 0.34 - 1.30 0.44
Sep-18-14 12:16 4:01:00 15.36 0.51 435 19 0.35 - 1.29 0.45
Sep-20-14 05:42 4:05:00 15.36 1.32 547 265 0.50 0.76 1.42 0.71
Oct-08-14 13:36 0:19:00 15.36 15.36 211 85 0.14 - 3.16 0.45
Oct-08-14 17:10 8:01:00 46.08 4.66 2824 696 0.02 0.03 32.82 0.75
Oct-22-14 15:08 3:17:00 15.36 0.62 367 112 0.33 - 1.44 0.48
Oct-25-14 05:20 1:24:00 15.36 3.66 370 151 0.11 0.84 5.58 0.64
Oct-25-14 23:49 5:34:00 15.36 1.33 6303 393 0.02 0.07 30.07 0.75
Oct-30-14 22:44 1:27:00 15.36 0.71 287 64 0.82 - 0.70 0.57
Oct-31-14 17:10 5:19:00 92.16 2.21 2365 576 0.04 0.09 31.90 1.25
Jan-18-15 14:24 0:31:00 15.36 9.41 368 156 0.07 0.85 7.66 0.57
Jan-23-15 18:41 2:17:00 15.36 1.35 384 139 0.15 - 3.37 0.49
Jan-24-15 14:53 0:12:00 30.72 15.36 289 122 0.05 0.92 11.30 0.53
Jan-27-15 14:07 7:39:00 76.8 1.81 1324 157 0.02 0.04 27.34 0.62
Feb-10-15 16:13 0:20:00 30.72 7.68 336 142 0.04 0.67 10.94 0.46
Feb-24-15 15:39 1:51:00 46.08 4.43 298 156 0.03 0.07 9.71 0.33
Mar-03-15 15:00 1:06:00 15.36 4.19 225 100 0.03 0.08 10.04 0.34
Mar-05-15 15:48 0:45:00 46.08 6.83 209 111 0.07 0.42 7.96 0.56
Mar-13-15 16:55 1:28:00 15.36 2.79 251 124 0.10 0.47 5.91 0.57
Mar-19-15 11:15 5:41:00 46.08 5.09 2415 521 0.02 0.03 30.04 0.69
Mar-21-15 16:35 0:39:00 15.36 5.12 134 101 0.06 0.70 6.97 0.43
Mar-22-15 16:33 0:15:00 30.72 9.22 217 91 0.11 - 3.92 0.45
Mar-23-15 13:11 3:55:00 30.72 2.88 2221 563 0.02 0.07 26.66 0.63
Apr-03-15 8:46 3:16:00 15.36 1.18 426 241 0.17 0.09 3.77 0.65
Apr-16-15 21:49 4:09:00 30.72 3.45 1110 38 0.16 0.17 26.82 4.31
Apr-17-15 22:06 4:23:00 15.36 0.88 531 193 0.34 0.80 2.09 0.71
Apr-18-15 15:56 5:50:00 30.72 0.92 566 215 0.59 - 5.36 3.14
Jun-01-15 1:08 3:49:00 15.36 2.68 534 259 0.10 0.48 5.65 0.54
Jun-11-15 2:31 6:54:00 15.36 0.41 486 158 0.02 0.15 22.09 0.38
Jun-13-15 1:15 6:54:00 15.36 1.08 1107 474 0.18 - 2.97 0.54
Jun-14-15 11:31 0:12:00 15.36 2.56 213 51 0.73 - 0.65 0.47
Jun-15-15 17:16 1:07:00 15.36 4.59 277 142 0.10 0.33 3.96 0.40
Jun-21-15 22:02 10:11:00 15.36 0.85 956 198 0.12 0.75 4.94 0.60

Figure D-2. Flow at the system inlet (black) and outlet (red) for October 2014 (upper part) and March 2015 (lower part).

CHAPTER 15

WATER QUALITY MONITORING SYSTEMS: CALIBRATION


RESULTS

This Chapter shows the results obtained by applying the calibration methodology for the spectrometer (Part B Section 5.1) and the turbidity probe (Part B Section 5.2 and Part C Chapter 9).
For the spectrometer, the input data were the TSS results from the sampling campaigns (Chapter 13) and the absorbance spectra measured for these samples. Then, the Monte Carlo method was applied for the uncertainty analysis, generating 5000 random replicates of the TSS laboratory concentrations (mg/L) per sample and of the absorbance spectra measurements. From the uncertainty analysis (Part B Section 5.1), five water samples with an uncertainty above 25% were discarded: samples #2 (u = 32%) and #6 (u = 30%) of April 22nd 2014, and samples #23 (u = 28%), #24 (u = 29%) and #27 (u = 25%) of October 9th 2014. The calibration methods were applied with the remaining 60 samples. One model that included the inlet and outlet samples was developed. For calibration we chose 40 of the 60 samples (66%) and for validation the remaining 20 samples.
The PLS method described in Part B Section 5.1 was implemented using the OPP methodology (Torres and Bertrand-Krajewski 2008; Zamora and Torres 2014) adapted to R using the pls library (R Core Team 2016). Figure D-3a shows the observed versus predicted TSS, with an RMSE of 16 mg/L and an r of 0.92 for validation.
In the case of the SVM method, we used the methodology proposed by Torres et al. (2013) (see Part B Section 5.1). The results show that the best r (0.97 for calibration and 0.95 for validation) and the lowest RMSE (10 mg/L for calibration and 12 mg/L for validation) were obtained for the following kernel functions: polydot (polynomial kernel) and vanilladot (linear kernel). Since the results were the same, Figure D-3b shows the vanilladot linear kernel function. For the other models, the RMSE varies between 2 mg/L and 309 mg/L for calibration and between 3 mg/L and 297 mg/L for validation; r varies between -0.29 and 1.0 for calibration and between -0.6 and 1.0 for validation. As an example, Figure D-3c shows the results for the kernel function rbfdot (radial basis "Gaussian" kernel), with an r of 0.95 for calibration and 0.76 for validation and an RMSE of 19 mg/L for calibration and 28 mg/L for validation.
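As a sketch of these two calibrations under assumed inputs (absorbance matrices abs_cal/abs_val and laboratory TSS vectors tss_cal/tss_val; the number of PLS components is an arbitrary placeholder), the pls and kernlab packages can be used as follows:

library(pls)       # plsr(): PLS regression (Figure D-3a)
library(kernlab)   # ksvm(): SVM regression with the vanilladot / rbfdot kernels

cal     <- data.frame(TSS = tss_cal, abs = I(abs_cal))       # keep the spectra as a matrix column
pls_fit <- plsr(TSS ~ abs, ncomp = 10, data = cal, validation = "CV")
tss_pls <- drop(predict(pls_fit, newdata = data.frame(abs = I(abs_val)), ncomp = 10))

svm_fit <- ksvm(x = abs_cal, y = tss_cal, kernel = "vanilladot", type = "eps-svr")
tss_svm <- as.numeric(predict(svm_fit, abs_val))

rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))
c(RMSE_PLS = rmse(tss_val, tss_pls), r_PLS = cor(tss_val, tss_pls),
  RMSE_SVM = rmse(tss_val, tss_svm), r_SVM = cor(tss_val, tss_svm))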
We can observe that better results were obtained with the SVM method than with the PLS method. After the calibration, one of the kernel models (vanilladot, linear kernel) was implemented to obtain the TSS time series. As an example of the chosen model for the spectrometer calibration, Figure D-4 presents 12 days of January 2015. Using the model with the best fit (vanilladot, linear kernel), the TSS mean equivalent concentrations at the inlet and outlet were calculated. For this period, the maximum TSS concentration is 146 mg/L for the inflow and 20 mg/L for the outflow. Table D-5 summarizes the maximum TSS concentration values for each month. According to this table, the TSS in vary between 16 mg/L and 450 mg/L, and the TSS out vary between 3 mg/L and 45 mg/L.
Figure D-3. TSS observed vs. TSS predicted: (a) PLS method; (b) SVM method, kernel function vanilladot (linear); (c) SVM method, kernel function rbfdot (radial basis "Gaussian" kernel). TSSp: TSS predicted; blue dots: calibration samples; red asterisks: validation samples.

Figure D-4. Simulated TSS mean values using the vanilladot (linear) kernel. TSS concentration at the inflow (blue) and at the outflow (red), January 2015.

Table D-5. TSS maximum concentration values for each month (mg/L)
Month | TSS in | TSS out
August 2014 | 16 | 3
September 2014 | 210 | 25
October 2014 | 110 | 30
January 2015 | 167 | 29
February 2015 | 110 | 45
March 2015 | 130 | 24
April 2015 | 118 | 25
May 2015 | 98 | 12
June 2015 | 450 | 31

Regarding the turbidity calibration, as mentioned in Chapter 13, 28 water samples were used. Table D-6 shows a summary of the samples, the TSS and the turbidity values measured in the field. After applying the methodology proposed in Part C Chapter 9, Figure D-5a presents the calibration results.
Table D-6. Samples used for the turbidity calibration
Campaign | Sample ids | TSS (mg/L) | T measured in field (NTU)
Mar-16-15 | 36-45 | 4.29–28.25 | 4.1–32.2
Nov-5-15 | 48-55 | 4.4–22.9 | 6.5–26.5
Nov-19-15 | 56-65 | 4.5–26.9 | 3.57–33.6

An RMSE of 51 mg/L and an r of 0.89 were obtained for validation. This was due to the high variation among the turbidity values during the calibration process. Although the results were not as good as expected, this was the model implemented for the turbidity data. As an example, Figure D-5b shows the TSS values obtained from the turbidity data of September 2014.
After the calibration procedures, the TSS measurements from the spectrometer and the turbidity probes for the same periods were compared. As an example, Figure D-6a shows the TSS mean measurements during the month of February. We can observe that during this month the two probes detected the rain events. At a glance it can be seen that the two measurements differ in their maximum values. This can be confirmed with Figure D-6b, which shows a zoom of the month, specifically the event of February 6th 13:23. In this figure we can observe that the maximum TSS concentration with the spectrometer (blue line) is roughly 110 mg/L, while the concentration obtained with the turbidity probe reaches up to 1000 mg/L. This behaviour is replicated in the other comparable periods (August and September of 2014, March and May of 2015). According to these results, it seems that the TSS values obtained with the spectrometer probe using the SVM calibration are more reliable than those obtained with the turbidity probe.

Figure D-5. (a) TSS observed vs. TSS predicted for the turbidity calibration; TSSp: TSS predicted; blue dots: calibration samples; red asterisks: validation samples. (b) TSS from turbidimeter measurements at the inlet, September 2014; in red the median value and in grey the minimum and maximum values.

To sum up, for the spectrometer probe the SVM calibration method obtained the best results in terms of RMSE (12 mg/L) and r (0.95) (validation data). This method will be used for the first flush analysis with the spectrometer probe (Section 16.1). The spectrometer data will also be used as the point of reference for the comparisons shown in Part E. In the case of the turbidity probe, the TSS predicted with its calibration method will be used for the first flush analysis (Section 16.2).

Figure D-6. (a) TSS mean concentrations from the turbidimeter (red) and the spectrometer (blue) at the inlet, February 2015. (b) Zoom of the event of February 6th 13:23.

CHAPTER 16

FIRST FLUSH ANALYSIS

In order to assess the first flush (FF) effect, it is necessary to analyse the variation of the pollutant mass transported in the total volume of a stormwater event. The methodologies proposed by Bertrand-Krajewski et al. (1998) (Part A Section 2.3) and Torres et al. (2016) (see Part B Section 5.3) were implemented. From the recorded data, nine months of continuous pollutant plus flow data were identified for the spectrometer (August to October 2014 and January to June 2015). For the turbidity probe, five months fit the criteria: August and September 2014, and February, March and May 2015. First, we explain the procedure to compute the FF phenomenon and the results using the spectrometer probe data; then, we show the FF determination and the results using the turbidity probe.
After the weir (Chapter 14) and spectrometer probe (Chapter 15) calibrations were done, we studied the distribution of the pollutant load versus the volume at the CW (constructed wetland) inlet for different storm events. This analysis helped us to understand the phenomenon and to quantify the first flush.
Since an uncertainty analysis was done during the calibration procedures, we could compute 5000 possible M(V) curves (pollutant mass distribution vs. volume) for each event. Therefore, for the 63 storm events we calculated the M(V) curves using the 5000 flows and TSS concentrations (e.g. Figure D-7). Each M(V) curve was fitted using the power function (Bertrand-Krajewski et al. 1998) and the parameter b (which characterizes the gap between the M(V) curve and the bisector, where pollutant mass is proportional to volume) was determined. This parameter b gives a typology of the M(V) curve, which helps to classify each event into a zone (see the first three columns of Table D-7). The zones related to the FF effect are zones I and II. For more information see Part A Section 2.3.
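A minimal sketch of the M(V) computation and the power-function fit, assuming per-minute flow and TSS series for one event at the CW inlet (names hypothetical), is:

# The M(V) curve gives the cumulative pollutant mass fraction versus the cumulative
# volume fraction; b is the exponent of the fitted power function M(V) = V^b
# (Bertrand-Krajewski et al. 1998).
mv_curve_b <- function(Q, C, dt = 60) {            # Q in L/s, C in mg/L, dt in seconds
  V <- cumsum(Q * dt);     V <- V / tail(V, 1)     # cumulative volume fraction
  M <- cumsum(Q * C * dt); M <- M / tail(M, 1)     # cumulative mass fraction
  ok <- V > 0 & M > 0                              # avoid log(0) at the event start
  b  <- unname(coef(lm(log(M[ok]) ~ 0 + log(V[ok]))))
  list(V = V, M = M, b = b)
}
# Applied to the 5000 Monte Carlo replicates of Q and C, this yields 5000 values of b
# per event, from which the occurrence probability of falling in zones I and II is computed.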
The classification of the events into zones was done by calculating the occurrence probability of an event falling within the first flush zones (Torres et al. 2016). Table D-7 shows a summary of the occurrence probability of FF for some events (results for all the events are shown in Appendix D-16-1). As a result, two events were classified in zone I (3.2%), 36 in zone II (57.1%), 9 in zone III (14.3%) and 16 in zones IV to VI (25.4%).
In the case of the turbidity probe, the data recorded with this probe were used to compute the M(V) curves (Figure D-8) and to determine the occurrence probability of FF. From the turbidity database we were able to analyse 43 events, of which 30 coincide with the ones identified with the spectrometer. Table D-8 shows a summary of the FF occurrence probability for some events (results for all the events are shown in Appendix D-16-2).

Figure D-7. M(V) curves of event EV-2 August 23rd 13:17 2014 and event EV-42 March 22nd 16:33 2015

Table D-7. Typology of the M(V) curve zones proposed by Bertrand-Krajewski et al. (1998) and the distribution of the occurrence probability of FF for some events. EV: event, spectrometer probe.
b value | Zone | Space between M(V) curve and bisector | EV-1 | EV-24 | EV-39 | EV-46 | EV-49 | EV-60 | EV-62
b<1: 0<b≤0.185 | I | Positive, high | 0 | 0 | 11% | 4% | 0 | 87% | 0
b<1: 0.185<b≤0.862 | II | Positive, medium | 1% | 45% | 85% | 4% | 20% | 13% | 96%
b<1: 0.862<b≤1.000 | III | Positive, insignificant | 48% | 49% | 2% | 13% | 67% | 0 | 4%
b>1: 1.000<b≤1.159 | IV | Negative, insignificant | | | | | | |
b>1: 1.159<b≤5.395 | V | Negative, medium | 50% | 6% | 3% | 79% | 13% | 0 | 0
b>1: 5.395<b<∞ | VI | Negative, high | | | | | | |
Table D-8. Distribution of the occurrence probability of FF for some events, using the turbidity probe. EVT: event, turbidity probe. In parentheses are the events that coincide with the events of the spectrometer probe.
Zone | EVT-5 | EVT-8 (EV-1) | EVT-11 | EVT-21 | EVT-22 | EVT-34 (EV-39) | EVT-38 | EVT-39 | EVT-42
I | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
II | 0 | 16% | 98% | 1% | 20% | 100% | 97% | 0 | 4%
III | 0 | 52% | 0 | 0 | 35% | 0 | 0.8% | 92% | 76%
IV | 100% | 31% | 2% | 99% | 45% | 0 | 2.2% | 8% | 20%

Figure D-8. M(V) curves predicted of event EV-A August 6th 7:53 2014 and event EV-40 March 19th 11:15 2015

With the turbidity probe, seven events were classified in zone II (16.3%), 12 in zone III (27.9%) and 24 in zone IV (55.8%). In sum, seven events were classified as FF events. Five of these seven coincided with events identified with the spectrometer; of the remaining two, one was classified as a no FF event by the spectrometer and the other was not identified by the spectrometer.
The FF results from the spectrometer will be referred to as the reference execution. It can be observed that the FF phenomenon seems to be relevant to the system (60.3% of the events were classified in zones I and II). The detection of this phenomenon can be a determinant of the quality of the water entering the constructed wetland. Therefore, it seems relevant to take it into account in an eventual decision-making system for the end use of the water.

CHAPTER 17

CONSTRUCTED-WETLAND EFFICIENCY PERFORMANCE

The efficiency of the constructed wetland (CW) was calculated in order to simulate the water quality at the system outlet using as little monitoring equipment as possible. The efficiencies per event were obtained using the methodology proposed in Part C Chapter 12. In total, 80 events were analysed. Figure D-9 shows the efficiency values per wavelength for two events: Figure D-9a shows the case of an event without outflow volume and Figure D-9b an event with an outflow volume different from zero (Table D-9 summarizes the analysed events per month and those that have outflow volume).

Figure D-9. Event efficiencies per wavelength: (a) EV-1, August 25th 3:49 2014; (b) EV-40, March 27th 14:07 2015.

Table D-9. Summary of the events per month used for the efficiency analysis.
Month | Events | With outflow volume
AUG-14 4 0
SEP-14 8 3
OCT-14 5 5
JAN-15 4 3
FEB-15 5 2
MAR-15 15 9
APR-15 12 3
MAY-15 10 0
JUN-15 17 6

Figure D-10 shows the boxplots of the efficiencies for each event in terms of TSS concentration, which is equivalent to the wavelength 437.5 nm (the Spearman correlation between the wavelength 437.5 nm and TSS is about 0.93, p-value < 0.05). The description of these events is presented in Appendix D-17-1. Of the 80 events used for the efficiency analysis, 29 have negative values; all these events have outflow volume. From Figure D-10, and without considering the events with negative efficiencies, we can observe that 16 events obtained efficiencies between 5% and 50%, and the remaining 35 events obtained efficiencies above 50%. These findings are in line with the CW performance literature reviews by Malaviya and Singh (2012) and Lucas et al. (2015), in which TSS efficiencies varied between 7% and 95%.

Figure D-10. Boxplot of the events’ efficiencies for TSS concentrations

In order to have a general efficiency model, a random variable's density function estimated with a non-parametric kernel function (Tsybakov 2009) (see Part C Chapter 12) was assessed with the efficiencies computed for each storm event. Therefore, a non-parametric function of the efficiency for each wavelength was obtained (214 kernel models). Figure D-11 shows the kernel model for the wavelength 437.5 nm.
According to Figure D-11, this model prioritizes the positive values of efficiency. With this kernel model we are able to simulate the efficiencies per wavelength: the efficiency is not a global number but is treated as a random variable. Therefore, we can obtain the outlet fingerprint and, with the water-quality limits (Part C Chapter 10), predict a final water use.

Figure D-11. Kernel model efficiency for the wavelength λ = 437.5 nm
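A minimal sketch of how these kernel models can be used to propagate an inlet spectrum to a simulated outlet fingerprint and then to a water use is shown below; the object names and the per-unit-volume simplification of E12-1 are assumptions.

# 'abs_in' is the inlet absorbance spectrum at one time step (214 wavelengths),
# 'kernel_models' the list of per-wavelength density() models of the efficiency and
# 'limit_B' the use B limit spectrum (Part C Chapter 10).
simulate_outlet <- function(abs_in, kernel_models) {
  mapply(function(a, d) {
    eff <- sample(d$x, size = 1, prob = d$y)  # draw one efficiency value per wavelength
    a * (1 - eff)                             # outlet absorbance implied by E12-1 (per unit volume)
  }, abs_in, kernel_models)
}

abs_out <- simulate_outlet(abs_in, kernel_models)
mean(abs_out <= limit_B) >= 0.95   # 95% of the wavelengths below the limit: use B is met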

CONCLUSIONS PART D

The base results that support the development of the decision-making tool have been presented. The results from the sampling campaigns showed that the water quality (water uses) could change from one event to another, and even within the same event. Therefore, this variability should be taken into account for a possible decision-making system, and it seems pertinent to consider on-line water quality measurements. These samples also helped us to identify which samples met the requirements for use A or B. Use A was defined as: urban reuse, agricultural reuse and recreational impoundments. Use B was defined as: agricultural reuse (food processed & non-food crops), landscape impoundments and restricted areas irrigation. Since each sample is coupled with a UV-Vis absorbance spectrum, this gives limits related directly to the measurement. Therefore, with this finding, the final use of the water can be known from the fingerprint measurement.
To sum up, the calibration of the turbidity probe will be used for the first flush analysis (Section 16.2). In the case of the spectrometer probe, the best results were obtained with the SVM method in terms of RMSE (12 mg/L) and r (0.95) (validation data); this method will be used for the first flush analysis with the spectrometer probe (Section 16.1). The spectrometer data will also be used as the point of reference for the comparisons shown in Part E.
The water quantity calibration reveals that it is possible to relate flow rates directly to the water levels over the weir, using the method proposed in Part C Chapter 8. In the case of the spectrometer water quality calibration, SVM proved to be more accurate than PLS, which allows better TSS estimations. On the other hand, for the turbidity probe the results were not as good as expected (high RMSE of 51 mg/L); therefore, the TSS prediction using the turbidity data has a lower reliability.
The variability observed among the water samples (Chapter 13) can be associated with phenomena already identified in the literature, such as the first flush. Hence, the first flush phenomenon was assessed for our case study with the calibrated data (water quantity and quality). The detection of this phenomenon can be a determinant of the quality of the water entering the constructed wetland; therefore, it seems relevant to take it into account in an eventual decision-making system for the end use of the water. For example, from the measured events the time needed to reach 40% or 50% of the total pollutant mass can be detected, and a bypass time can be suggested for future events.
Finally, an efficiency kernel model was developed to simulate the efficiencies per wavelength. Therefore, we can obtain the outlet fingerprint and, with the water-quality limits (Chapter 10 Part C), predict a final water use. The computed TSS efficiencies, above 5% for most of the events, are in line with the CW performance literature reviews by Malaviya and Singh (2012) and Lucas et al. (2015), who compiled TSS efficiencies between 7% and 95%.

REFERENCES PART D

Bertrand-Krajewski, J.-L., Chebbo, G., and Saget, A. (1998). “Distribution of pollutant mass vs
volume in stormwater discharges and the first flush phenomenon.” Water Research, 32(8), 2341–
2356.
Caradot, N., Sonnenberg, H., Rouault, P., Gruber, G., Hofer, T., Torres, A., Pesci, M., and
Bertrand-Krajewski, J.-L. (2015). “Influence of local calibration on the quality of online wet
weather discharge monitoring: feedback from five international case studies.” Water Science and
Technology: A Journal of the International Association on Water Pollution Research, 71(1), 45–51.
Corrales, A., and Malagón, L. (2014). “Remoción de Patógenos con Humedales Construidos
Para Aprovechamiento de Aguas Lluvias en la Pontificia Universidad Javeriana.” Pontificia
Universidad Javeriana., Bogotá D.C.
Lucas, R., Earl, E. R., Babatunde, A. O., and Bockelmann-Evans, B. N. (2015). “Constructed
wetlands for stormwater management in the UK: a concise review.” Civil Engineering and
Environmental Systems, 32(3), 251–268.
Malaviya, P., and Singh, A. (2012). “Constructed Wetlands for Management of Urban
Stormwater Runoff.” Critical Reviews in Environmental Science and Technology, 42(20), 2153–2214.
R Core Team. (2016). R: A language and environment for statistical computing. R Foundation for
Statistical Computing, Vienna, Austria.
Rice, E. W., Bridgewater, L., American Public Health Association, American Water Works
Association, and Water Environment Federation. (2012). Standard methods for the examination of
water and wastewater. American Public Health Association, Washington D.C.
Torres, A., and Bertrand-Krajewski, J.-L. (2008). “Partial Least Squares local calibration of a
UV-visible spectrometer used for in situ measurements of COD and TSS concentrations in
urban drainage systems.” Water Science and Technology: A Journal of the International Association on
Water Pollution Research, 57(4), 581–588.
Torres, A., Lepot, M., and Bertrand-Krajewski, J.-L. (2013). “Local calibration for a UV/Vis
spectrometer: PLS vs. SVM. A case study in a WWTP.” 7th International Conference on Sewer
Processes & Networks, Sheffield, United Kingdom, 1–8.
Torres, A., Salamanca-López, C. A., Prieto-Parra, S. F., and Galarza-Molina, S. (2016).
“OFFIS: a method for the assessment of first flush occurrence probability in storm drain
inlets.” Desalination and Water Treatment, 0(0), 1–12.
Tsybakov, A. B. (2009). Introduction to Nonparametric Estimation. Springer Series in Statistics,
Springer New York, New York, NY.
Zamora, D., and Torres, A. (2014). “Method for outlier detection: a tool to assess the
consistency between laboratory data and ultraviolet-visible absorbance spectra in wastewater
samples.” Water Science and Technology: A Journal of the International Association on Water Pollution
Research, 69(11), 2305–2314.

PART E
DECISION-MAKING TOOL
Part E illustrates the development of a decision-making tool based on the results of Part D. In Part D, we observed that the water quality throughout the system changes during an event and from one event to another. This quality change translates into a water-use change, which should be considered for an on-line decision-making tool. The results from the sampling campaigns helped us to identify which samples met the requirements for certain uses. We obtained water-use limits related directly to the measurement, because each sample was coupled with a UV-Vis absorbance fingerprint. Therefore, based on on-line UV-Vis spectrometry measurements, the final use of the water can be known on-line and without further signal treatment.
The water quality and quantity calibrations introduced in Part D supported the characterization of the first flush (FF) phenomenon. The FF quantification gives us tools to make real-time operational decisions. Finally, the definition of an efficiency model allowed us to consider less demanding monitoring options.
Chapter 18 shows the water uses observed by employing the methodology proposed in Section 10.2 with the absorbance measurements. Then, Chapter 19 proposes three decision-making (DM) tool scenarios for the operation of the stormwater harvesting system with as little monitoring equipment as possible. These scenarios have different probes and methods in order to evaluate less demanding monitoring options (in terms of cost). Chapter 20 illustrates the evaluation of the maximum measurement frequency of the best scenario in order to reduce operational costs. Finally, Chapter 22 shows the general conclusions on the decision-making tool.
CHAPTER 18

OBSERVED WATER USES: USING ON-LINE UV-Vis


ABSORBANCES

For the definition of the observed water uses, the methodology proposed in Section 10.2 was implemented. This methodology relates the measured spectrometer UV-Vis absorbances to the defined water uses A and B. Use A was defined as: urban reuse, agricultural reuse and recreational impoundments. Use B was defined as: agricultural reuse (food processed & non-food crops), landscape impoundments and restricted areas irrigation.
In Chapter 13 Part D, the laboratory concentrations from the sampling campaigns that met the criteria for water uses A or B were identified. As these samples are coupled with spectrometer UV-Vis absorbances, these absorbances could be related to the defined water uses. Therefore, with the absorbance measurements, the final use of the water can be known without carrying out water sample laboratory analyses.
Following the methodology (Section 10.2 Part C), two absorbance limits for water uses A and B (see Part D Chapter 13, Table D-2) were chosen from the dataset (water samples coupled with UV-Vis absorbances). As a reminder of Chapter 13 Part D, a total of 22 water sub-samples fulfilled the requirements for use A and 128 sub-samples fulfilled the requirements for use B. The selection of these UV-Vis spectra limits is explained as follows. The quartile and box plot values of the first wavelength (200 nm) of the UV-Vis spectra of each water use group were calculated. The 200 nm wavelength was chosen because it is the first recorded wavelength of each spectrum and it is located in the UV region of the spectrum, which corresponds to organic compounds (Hochedlinger 2005). For use A, the spectrum of sample 49 corresponds to the upper whisker of wavelength 200 nm; for use B, the spectrum of sample 56 corresponds to the third quartile of wavelength 200 nm. We chose these limits in order to include the spectra coupled with the microbiological samples. Hence, sample 49 was chosen as the limit for water use A and sample 56 as the limit for water use B. Then, the recorded spectra of each month were compared one by one with the defined limits: if 95% of the absorbances of a spectrum were below the defined limit, the water fulfilled the corresponding water use. For most of the months the absorbance spectra were measured every minute. Therefore, using this method we were able to identify when (in time) and for how long (during the measured period) the treated water met a certain use. These results will be named the observed water uses.
In sum, the on-line water quality data set is made up of nine months of spectra measured every minute with the spectrometer probe located at the outlet of the system. This data set is called the field data set, and the recorded months were August to October 2014 and January to June 2015. As an example, Figure E-1 shows the spectrometer UV-Vis absorbances of two months (September 2014 and March 2015). The spectra in blue represent the moments when the water fulfils water uses A and B, the cyan spectra show when the water fulfils water use B, and the red spectra (warning spectra) indicate that the water cannot be used.

Figure E- 1. Observed spectra during September 2014 and March 2015. Spectra in blue: the water fulfils water use A and
B. Spectra in cyan: the water fulfils water use B. Spectra in red: (warning spectrum) the water cannot be used.

Table E-1 shows a summary of the results obtained for the nine months. For example, during August 2014, 100% of the harvested stormwater could have been used for use B: agricultural reuse (food processed & non-food crops), landscape impoundments and restricted areas irrigation. In contrast, for June 2015, only 25% of the harvested water fulfilled this requirement. On the other hand, for the more demanding uses (use A), September 2014, October 2014 and April 2015 had the lowest percentage (0%) and August 2014 the highest (100%). The longest consecutive period that fulfilled use A was 14.1 days, during May 2015, and for use B it was 5.4 days. As an example, Figure E-2 illustrates the observed uses during the rain event of March 5th 2015 at 15:51. We can observe that during the first 30 minutes of the event the treated water could not be used (red lines); after 45 minutes the water fulfilled use B.

Table E-1. Recorded spectra of each month that fulfill uses A and B. P: total precipitation per month.
Columns: Month & Year; P (mm); minutes fulfilling uses A&B; minutes fulfilling use B; total minutes recorded; total days; % for use A; % for use B; longest consecutive period (days) with use A&B; longest consecutive period (days) with only use B.
August 2014 8.4 13945 13956 13956 9.7 100 100 9.7 9.7
September 2014 25.9 0 5383 19245 13.4 0 28 0 2.9
October 2014 91.2 0 7313 23250 16.1 0 31 0 4.9
January 2015 28.4 9211 14994 16344 11.35 56 91 4.7 4.7
February 2015 51.1 31611 38778 40320 28 78 96 6 6
March 2015 78.7 4574 27536 44640 31 10 62 1.9 5.4
April 2015 29.7 0 0 28198 19.6 0 0 0 0
May 2015 14.0 22646 6600 31507 21.9 72 21 14.1 14.1
June 2015 38.0 326 10612 43200 30 0.8 25 0.2 5.3

Figure E-2. Observed water uses during the rain event of March 5th 2015 15:51 (inlet flow). The red lines represent the time
during which the treated water did not fulfil a defined water use; the yellow ones represent the time during which the treated
water fulfilled water use B.

Additionally, Table E-1 shows the total precipitation (P) per month. The rainiest months
(P > 70 mm) were October and March, and the most frequent use was B (31–62%). The
months with medium precipitation (25 mm < P < 40 mm) were September, January, February,
April and June; most of them, four of five, fulfilled use B (25–96%), and in the case of use A
three of these months fulfilled this use (0.8–69%). Finally, the least rainy months were August
and May: use B was the most frequent use during these two months (93–100%), and use A was
fulfilled 72–100% of the time. These findings suggest a relationship between the rainy seasons
and the water quality.
In general, the implemented method allows the final water use to be identified in real time
using a spectrometer probe. Therefore, this approach has the potential to be implemented in
other applications (e.g. the operation of a wastewater treatment plant). Finally, it is worth
mentioning a potential limitation of the proposed methodology, related to the selection of the
water-use absorbance limits; it is therefore recommended to undertake more sampling
campaigns coupled with spectrometer measurements.

CHAPTER 19

TOWARDS A DECISION-MAKING TOOL

The present Chapter aims to propose a decision-making (DM) tool for the operation of the
stormwater harvesting system with as few monitoring requirements as possible. Based on the
results so far, the water quality at the system's outlet can change suddenly; therefore, the use
of on-line probes becomes relevant. Another observation is that the first flush phenomenon
can be exploited by the DM tool, because it provides useful information such as the amount of
stormwater that could be bypassed.
Three DM tool scenarios were proposed. These scenarios have different requirements in terms
of probes and methods, in order to evaluate less demanding monitoring options (in terms of
costs). We proposed to build the scenarios with the monitoring equipment installed at the
system's inlet, with the aim of knowing the possible water use before the end of the storm
event and of detecting the first flush phenomenon with the same equipment. The first scenario
uses an ultrasonic level sensor and a spectrometer probe at the entrance of the system. The
second scenario employs the same methods as the first scenario but with a different
monitoring configuration: instead of a spectrometer probe at the entrance of the system, a
turbidity probe is used. Finally, taking advantage of having observed uses per event and of the
characterization of the events with the storm parameters and hydraulic performance variables,
we proposed a third scenario that implements Support Vector Machines (SVM) (see Section
3.10 Part A) for water use classification. This method allows the water uses to be simulated
from these variables, so water quality measurements are no longer needed to define the water
uses. Each scenario is explained as follows.
19.1. DM TOOL FIRST SCENARIO – USING SPECTROMETER
For this scenario we employ an ultrasonic level sensor and a spectrometer probe at the entrance
of the system. The beginning of a storm event can be identified with the ultrasonic level sensor
(levels above 101 cm). Then, for the definition of a bypass time, we evaluated two methods for
the identification of the first flush (FF) phenomenon. Lastly, the spectrometer data of the inlet
and the efficiency model were used to simulate the absorbance fingerprints at the outlet of the
system. These steps are explained as follows.
19.1.1 First Flush identification and definition of the bypass time
As mentioned in Chapter 18 Part D, in order to identify the FF phenomenon it is necessary
to have recorded the entire event. Therefore, two methods were applied to identify an event as
FF or No FF at the beginning of the event. The first method employs the methods explained
in Part C, Sections 11.1 (rules for first flush identification) and 11.2 (using reliability as an
evaluation method). The second method applies Support Vector Machines (SVM) to predict
whether an event is FF or No FF using the storm event characterization variables.

The first method is explained as follows. Initially, an analysis of variance (Section 3.3 Part A)
was done to recognize the variables that influence whether an event is identified as FF or No
FF. The studied variables were the FF results presented in Section 18.1 and the storm event
characterization. The evaluated storm parameters were: Antecedent Dry Weather Period
(ADWP), rainfall depth (P), maximum storm intensity (Imax), mean storm intensity (Imean)
and rain duration (Dplu). The hydraulic performance variables were: maximum head over the
entrance weir (Hmax), mean head over the entrance weir (Hmean), difference between the
head over the entrance weir at time i and time i+1 (Δhmax) and storm event duration
according to the head over the entrance weir (Dweir). Additionally, for each event we also
took into account these variables for the preceding event, denoted with the letter b. These
data were also transformed into categorical data.
Since some of the variables did not satisfy normality (Shapiro-Wilk test) and variance
homogeneity (Bartlett test), the nonparametric Kruskal-Wallis test was used. In sum, 18
Kruskal-Wallis tests were conducted, one for each storm parameter and hydraulic variable,
versus the zones of the M(V) curves defined by Bertrand-Krajewski et al. (1998). We decided
to merge Zones I and II in order to classify an event as FF. Table E-2 shows the p-values
obtained for each variable, and Appendix 19-1 shows the complete test results. The variables
Hmean and Δhmax (p-value < 0.05) were identified as the influential factors in the differences
between the zones.
Table E-2. p-values of the 18 Kruskal-Wallis tests conducted with each one of the storm parameters and hydraulic
variables versus the zones for the M(V) curves. (*) means the variables with p-value <0.05. Antecedent Dry Weather
Period (ADWP), rainfall depth (P), maximum storm intensity (Imax), mean storm intensity (Imean) and rain duration (Dplu),
maximum head over the entrance weir (Hmax), mean head over the entrance weir (Hmean), difference between head over the
entrance weir (Δhmax) and storm event duration according to the head over the entrance weir (Dweir). b: corresponds to the
variables of the event before.
Variables p-value Variables p-value Variables p-value
Hmean 0.0166* Hmeanb 0.6484 Imean 0.0750
Dhmax 0.0045* Dhmaxb 0.5333 Imeanb 0.1313
Hmax 0.0622 Hmaxb 0.7883 Dweir 0.1480
ADWP 0.5308 ADWPb 0.5026 Dweirb 0.6611
P 0.2114 Pb 0.9801 Dplu 0.4554
Imax 0.6330 Imaxb 0.5670 Dplub 0.8471
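A minimal sketch of this screening step, assuming R: the data frame below is a hypothetical stand-in for the 18-variable event characterization, but the sequence of tests (Shapiro-Wilk, Bartlett, Kruskal-Wallis against the M(V) zones) follows the procedure described above.

```r
# Minimal sketch (R): screening of storm/hydraulic variables against the
# M(V)-curve zones. `events` is a hypothetical data frame.
set.seed(2)
events <- data.frame(
  zone  = factor(sample(c("I-II", "III", "IV"), 70, replace = TRUE)),
  Hmean = rnorm(70, 102.8, 0.6),
  Dhmax = rnorm(70, -1.9, 1.2)
)

# Assumption checks that motivated the nonparametric choice
shapiro.test(events$Dhmax)$p.value
bartlett.test(Dhmax ~ zone, data = events)$p.value

# One Kruskal-Wallis test per candidate variable versus the FF zones
vars  <- setdiff(names(events), "zone")
pvals <- sapply(vars, function(v) kruskal.test(events[[v]] ~ events$zone)$p.value)
round(pvals, 4)   # variables with p-value < 0.05 are kept as influential
```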

Figure E-3 shows the Hmean and Δhmax boxplots per zone. The Figure illustrates that the
Δhmax median values per zone show a clear tendency, whereas the Hmean median values do
not. Nevertheless, according to the Kruskal-Wallis tests these variables were identified as the
ones that can differentiate the zones related to FF (Zones I and II) from the others.
Then, with the identified variables Hmean and Δhmax, we proceeded to define rules (see
Section 11.1 Part C) based on the quartiles of the chosen variables (see Table E-3). These rules
allow an event to be classified as FF or not based on its Hmean or Δhmax values. With these
rules, we simulated the FF detection during each event. Then, we compared the simulated FF
or No FF with the observed FF or No FF (results from Chapter 16 Part D) using Cohen's
kappa coefficient (Ckc) (see Section 3.6 Part A) and the reliabilities of FF and No FF events
(Section 11.2 Part C), as sketched below. It is worth mentioning that to calculate the observed
FF and No FF we have to wait until the end of each event. The Ckc was used in this thesis to
compare the observed monitoring data results with the simulated results.
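The evaluation step can be sketched as follows (R); the observed and simulated label vectors are hypothetical examples, and the Ckc and reliability definitions follow Sections 3.6 Part A and 11.2 Part C as summarized above.

```r
# Minimal sketch (R): Cohen's kappa coefficient (Ckc) and FF / No FF
# reliabilities for a simulated-versus-observed comparison.

cohen_kappa <- function(obs, sim) {
  tab <- table(obs, sim)
  po  <- sum(diag(tab)) / sum(tab)                      # observed agreement
  pe  <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # chance agreement
  (po - pe) / (1 - pe)
}

reliability <- function(obs, sim, class) {
  # fraction of observed events of a given class recovered by the simulation
  sum(obs == class & sim == class) / sum(obs == class)
}

set.seed(3)
obs <- c(rep("FF", 36), rep("No FF", 34))                # hypothetical labels
sim <- obs
sim[sample(70, 17)] <- sample(c("FF", "No FF"), 17, replace = TRUE)  # add errors
cohen_kappa(obs, sim)
reliability(obs, sim, "FF")
reliability(obs, sim, "No FF")
```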

Table E-4 shows the results obtained for the eleven rules that were established in order to
identify a first flush event. For example, with rule A an event would be classified as a FF
event if its Hmean is equal to or greater than 102.25 cm and its Δhmax is equal to or greater
than -1.80 cm (see Table E-3 and Table E-4).

Figure E-3. Boxplot per FF zone of variables Hmean and Δhmax

Table E-3. Hmean and Δhmax quartiles variables (q: quartile; lw: lower whisker; uw:
upper whisker).
Variables lw q1 q2 q3 uw
Hmean (cm) 101.60 102.25 102.79 103.49 104.17
Δhmax (cm) -5.31 -3.24 -1.80 -0.87 -0.11

Table E-4. Rules to identifying a first flush event and Cohen's kappa coefficient (Ckc) values after running the simulations. (the
variables that obtained the highest Ckc are highlighted)
Rule Description Ckc
Rule A First quartile (q1) of Hmean and second quartile (q2) of Δhmax 0.15
Rule B Lower whisker of Hmean and third quartile (q3) Δhmax 0.49
Rule C Lower whisker of Hmean and upper whisker Δhmax 0.15
Rule D Lower whisker of Hmean and second quartile (q2) Δhmax 0.29
Rule E First quartile (q1) of Hmean and third quartile (q3) of Δhmax 0.49
Rule F Second quartile (q2) of Hmean and third quartile (q3) of Δhmax 0.49
Rule G Lower whisker of Hmean 0.12
Rule H Upper whisker Δhmax 0.15
Rule I Third quartile (q3) Δhmax 0.49
Rule J Second quartile (q2) Δhmax 0.29
Rule K First quartile (q1) Hmean 0.33

Table E-4, in the column Ckc, shows the results after running the simulation for FF detection.
In sum, 70 of the 80 events were simulated as FF or No FF. According to Table E-4, the
highest Cohen's kappa coefficient was 0.49 – a moderate strength of agreement (see Table A-4
Section 3.6 Part A) (Landis and Koch 1977). This coefficient corresponds to Rules B, E, F
and I (rules in bold in Table E-4). In these rules, Δhmax (-0.87 cm) was the common factor.
The tool was able to simulate 27 events as FF out of the 36 observed (reliability of FF: 0.75).
In the case of No FF, 25 of 34 events agreed (reliability of No FF: 0.74). Hence, a Δhmax
lower than -0.87 cm was chosen to classify an event as having a first flush effect. This opens
the possibility of classifying an event as FF without having to wait until the event's end and
without water quality measurements.
The second method is explained as follows. Taking advantage of having an observed FF
classification per event and of the characterization of the events with the storm parameters
and hydraulic performance variables, Support Vector Machines (SVM) (see Section 3.10 Part A)
were implemented for FF detection. First, the SVM was executed 1000 times, changing the
calibration and validation events for each of the seven kernel functions (rbfdot, polydot,
vanilladot, tanhdot, laplacedot, besseldot, anovadot), as sketched below. Because the SVM
method does not allow NA values in the database, we could simulate 60% of the events
(38 of 63). Appendix E-19-2 shows the database used with the SVM tool.
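A sketch of this repeated execution, assuming the kernlab package in R (the kernel names above are those of kernlab's kernel functions); the data frame is hypothetical, and a simple agreement rate is used here as a stand-in for the Ckc and reliability evaluation described next.

```r
# Minimal sketch (R, kernlab): repeated SVM classification of FF / No FF,
# changing the calibration and validation events at each run for several
# kernel functions. `events` is a hypothetical stand-in for Appendix E-19-2.
library(kernlab)

set.seed(4)
events <- data.frame(
  FF    = factor(sample(c("FF", "NoFF"), 38, replace = TRUE)),
  Hmean = rnorm(38, 102.8, 0.6),
  Dhmax = rnorm(38, -1.9, 1.2),
  ADWP  = rexp(38, 1 / 48),
  P     = rexp(38, 1 / 10)
)

kernels <- c("rbfdot", "polydot", "vanilladot", "tanhdot",
             "laplacedot", "besseldot", "anovadot")

run_once <- function(kern) {
  cal  <- sample(nrow(events), round(0.7 * nrow(events)))   # calibration events
  fit  <- ksvm(FF ~ ., data = events[cal, ], kernel = kern)
  pred <- predict(fit, events[-cal, ])
  mean(pred == events$FF[-cal])        # agreement on the validation events
}

# e.g. 100 runs per kernel (1000 in the thesis)
res <- sapply(kernels, function(k) replicate(100, run_once(k)))
boxplot(res, las = 2, ylab = "agreement on validation events")
```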
In order to select a kernel function, these simulations were evaluated using the Ckc and the
reliabilities of FF and No FF. Figures E-4, E-5 and E-6 show the values of Ckc and of the
reliabilities of FF and No FF for each function. Figure E-4 illustrates that the Ckc median
values for most of the kernel functions are close to zero, except for the anovadot function
(0.22, a fair strength of agreement – see Table A-4 Section 3.6 Part A (Landis and Koch 1977)).
A Ckc close to zero means that, for all the events, these functions simulate only one class (FF
or No FF), not both. The Ckc values of the anovadot kernel function varied between negative
values (-0.26) and 0.70. Figure E-5 shows that the highest median reliabilities of FF were
obtained with the functions besseldot, laplacedot and rbfdot. On the other hand, Figure E-6
shows that reliabilities of No FF of zero were obtained with these same functions, and that the
median reliabilities of No FF were below 0.4 for most of the kernel functions. The only
function that showed a good performance for the Ckc and for the reliabilities of both FF and
No FF was anovadot. Therefore, we selected this function for the simulation.

Figure E-4. Ckc values for the seven kernel functions

Figure E-5. Reliabilities of FF for the seven kernel functions

Figure E-6. Reliabilities of No FF for the seven kernel functions

Then, the next step was to evaluate the redundant variables. For this step, and using the
selected kernel function, we employed the method explained in Section 11.3 Part C (evaluation
method to identify the redundant variables); a sketch is given after Table E-5. This method
begins by removing one variable at a time, and the Ckc values are calculated for each
execution. These values are then compared with the Ckc values obtained with all the variables
using the Wilcoxon signed-rank test. For this case, we obtained p-values greater than 0.05 for
the following variables: P, Dplu, Hmeanb, Dhmaxb, ADWPb, Pb, Imaxb and Dplub. This
finding means that removing these variables does not generate significant differences between
the Ckc values obtained with all the variables and the Ckc values obtained after removing
them; hence, any of these variables can be put aside. Then, a second simulation was done
removing two variables at a time, using the variables identified in the previous step. This
second simulation was done with all the possible pairs, a total of 109 pairs or simulations. The
Ckc of each pair was then compared with the Ckc of the variable identified in the first
simulation using the Wilcoxon signed-rank test and, once again, the pairs of variables that can
be removed were identified. Table E-5 shows these pairs of variables, a total of 13 pairs.
Table E-5. Pairs of variables that can be removed.
Pairs Pairs
1 Dhmaxb Hmax 8 Dhmaxb Hmeanb
2 Dhmaxb Hmean 9 Dhmaxb Imaxb
3 Dhmaxb ADWP 10 ADWPb Hmax
4 Dhmaxb P 11 ADWPb P
5 Dhmaxb Imean 12 ADWPb Dhmaxb
6 Dhmaxb Dplu 13 Pb Imeanb
7 Dhmaxb Hmaxb
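The redundancy check announced above can be sketched as follows, assuming R: the two Ckc vectors are placeholders for the 1000 Ckc values obtained with and without a candidate variable, and the decision on the Wilcoxon p-value is the one described in the text.

```r
# Minimal sketch (R): redundancy check of one candidate variable by comparing
# the Ckc values of repeated SVM runs with all variables against the runs
# without the variable, using the Wilcoxon signed-rank test (paired runs).
set.seed(5)
ckc_all     <- rnorm(1000, mean = 0.22, sd = 0.15)  # Ckc with all variables (placeholder)
ckc_without <- ckc_all + rnorm(1000, 0, 0.02)       # Ckc without the candidate (placeholder)

wilcox.test(ckc_all, ckc_without, paired = TRUE)$p.value
# p-value > 0.05: removing the variable does not change the Ckc distribution
# significantly, so the candidate variable can be put aside
```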

A third simulation was done removing three variables at a time, starting from the pairs
identified above (e.g. Dhmaxb & Hmax & Hmean). The trios of variables that can be removed
were then identified using the Wilcoxon signed-rank test, comparing Ckc-v1&v2&v3 with Ckc-
v1&v2. The pairs that cannot become a trio were marked as variables that "must be in the
simulation". The trios formed by the remaining variables were searched and the most repeated
trios were identified. The variables that formed more trios were marked and classified as
possible redundant variables. Table E-6 shows the identified trios.
Table E-6. Trios of variables that can be removed and Ckc obtained (q: quartile; lw:
lower whisker; uw: upper whisker).
Ckc values
lw q1 q2 q3 uw
All variables -0.26 0.05 0.22 0.35 0.61
Trios of variables
1 Dhmaxb P Imaxb -0.19 0.11 0.26 0.38 0.73
2 Dhmaxb P Hmaxb -0.22 0.13 0.25 0.38 0.73
3 Dhmaxb P Dplu -0.24 0.15 0.30 0.40 0.70
4 Dhmaxb P ADWPb -0.27 0.11 0.26 0.38 0.73
5 Dhmaxb Dplu Hmeanb -0.25 0.10 0.28 0.38 0.73

Then, with the selected trios, we ran an SVM simulation for FF classification without these
variables. Table E-6 shows the Ckc values obtained when removing each trio. According to
this table, the Ckc values (median: q2) improved when the variables of the trios were removed.
The ideal variables to remove would be those that depend on the end of the event (e.g. Dplu,
rain duration), because then we do not have to wait until the end of the event. Therefore, for
this case trio number 3 captured our attention; this trio contains the variables Dhmaxb, P and
Dplu.
The Ckc value improved with the removal of these variables using the SVM method, but the
rules method obtained a better performance: the former obtained a Ckc value of 0.30 and the
latter a Ckc value of 0.49. Therefore, the rules method was chosen.
After identifying a FF event, a bypass time was determined. By bypass we mean the action of
preventing a portion of the stormwater from entering the CW during a rainfall event. For the
63 events that could be classified into first flush Zones II to IV, the time needed to reach 40%
and 30% of the total pollutant mass was calculated. For a total mass of 40% the median time
was 19 minutes, and in the case of 30% the median time was 16 minutes. For practical
purposes, it was decided to round the time to 20 minutes. Then, the process was carried out in
reverse; that is, with 20 minutes, the mass that enters the system was computed. Figure E-7
shows that this mass fraction varies between 11% and 44%, with a median value of 23% (q2).
In other words, with a bypass time of 20 minutes there is a 50% probability that the system
would not receive between 11% and 44% of the total pollutant mass; therefore, the treated
water would have a better quality. Appendix E-19-3 shows the simulated FF and the bypass
time for each event.

Figure E-7. Total mass at the first 20 minutes of the FF events
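The mass-based computation above can be sketched as follows (R); the 1-min inflow and concentration series of the example event are simulated placeholders, not monitoring data.

```r
# Minimal sketch (R): time needed to route a given fraction of the event
# pollutant mass, and the mass fraction contained in the first 20 minutes.
# `flow` (L/s) and `conc` (mg/L) are hypothetical 1-min series for one event.
set.seed(6)
n    <- 90                                        # event duration, min
flow <- 30 * dgamma(1:n, shape = 3, rate = 0.15)  # inflow hydrograph, L/s
conc <- 300 * exp(-(1:n) / 25) + 20               # TSS-like pollutograph, mg/L

mass     <- flow * conc * 60                      # mass per minute, mg
cum_frac <- cumsum(mass) / sum(mass)              # cumulative mass fraction

time_to_fraction <- function(frac) which(cum_frac >= frac)[1]
time_to_fraction(0.40)    # minutes to route 40% of the event mass
time_to_fraction(0.30)    # minutes to route 30% of the event mass
cum_frac[20]              # mass fraction entering during the first 20 minutes
```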

19.1.2 Prediction of the spectra at the outlet


Then, with the bypass time defined, the next step was the prediction of the spectra at the
outlet. For this step, the time that the water takes to arrive at the outlet of the constructed
wetland has to be taken into account. We followed the recommendations of Hernández et al.
(2015), who suggested 19 minutes as the retention time.
As mentioned before, the proposed tool uses a spectrometer probe at the entrance of the
system. Hence, the spectra at the outlet of the system were simulated using the developed
efficiency model (Chapter 12 Part C) for each event. Then, these spectra were compared with
the spectra limits for uses A and B (defined in Section 11.1 Part C). As a reminder, use A
grouped: urban reuse, agricultural reuse and recreational impoundments; use B grouped:
agricultural reuse (food processed & non-food crops), landscape impoundments and restricted
areas irrigation. As an example, Figures E-8 and E-9 show the spectrometer absorbances of
two simulated events (January and March of 2015): the spectra in blue represent the moments
when the water fulfils water uses A and B, the cyan spectra show when the water fulfils water
use B, and the red spectra (warning spectra) indicate that the water cannot be used.
Figures E-8 and E-9 illustrate the observed and predicted spectra for two rain events: March
31st 13:47 (Figure E-8) and January 27th 14:12 2015 (Figure E-9). It can be observed that the
predicted spectra (right side of Figures E-8 and E-9) sometimes overestimated or
underestimated the water use. For instance, for the event of March 31st 13:47 (Figure E-8) the
tool predicted 70 spectra as use A and B, 63 spectra as use B and 29 spectra as restricted to use
(NU); the tool therefore overestimated the use for this event, because the observed use was
NU for the entire event (162 spectra). On the other hand, for the event of January 27th 14:12
(Figure E-9) the tool simulated 531 spectra as use A and B, 238 as use B and 551 as restricted
to use. In this case, the tool underestimates the water quality, because the water fulfils use A
and B for 1308 spectra and use B for 12 spectra. The results from all the evaluated events are
shown later in this section.

Figure E-8 Observed (left) and simulated (right) fingerprints of March 31st 13:47 2015. Spectra in blue: the water fulfils water
use A and B. Spectra in cyan: the water fulfils water use B. Spectra in red: (warning spectrum) the water cannot be used.

Figure E-9 Observed (left) and simulated (right) fingerprints of January 27th 14:12 2015. Spectra in blue: the water fulfils
water use A and B. Spectra in cyan: the water fulfils water use B. Spectra in red: (warning spectrum) the water cannot be used.

19.1.3 Proposed DM tool and Results
Figure E-10 shows a flow diagram with the proposed on-line decision-making tool (first
scenario).

Event start: inlet water level H > 101 cm, checked every minute (t = 1 min).
Compute ΔH = Hi+1 - Hi.
If ΔH < -0.87 cm: FF event, bypass the influent for 20 min. Otherwise: No FF event.
Wait 19 min (retention time) to use the effluent water.
If absevent < absUSE-A: the treated water can be used for urban reuse, agricultural reuse and recreational impoundments.
Otherwise, if absevent < absUSE-B: the treated water can be used for agricultural reuse (food processed & non-food crops), landscape impoundments and restricted areas irrigation.
Otherwise: the treated water cannot be used.
Figure E-10 Flowchart of the On-line Decision-Making Tool for the first scenario
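A minimal sketch of this flowchart logic, assuming R: the thresholds (101 cm level trigger, ΔH < -0.87 cm, 20 min bypass, 19 min retention time, 95% of absorbances below the limit) come from the text, while the function and object names and the example data are hypothetical.

```r
# Minimal sketch (R) of the first-scenario decision logic of Figure E-10.
# Thresholds are taken from the text; names and example data are hypothetical.

classify_effluent <- function(abs_out, lim_A, lim_B, frac = 0.95) {
  if (mean(abs_out < lim_A) >= frac)
    return("use A & B: urban reuse, agricultural reuse, recreational impoundments")
  if (mean(abs_out < lim_B) >= frac)
    return("use B: agricultural reuse, landscape impoundments, restricted irrigation")
  "the treated water cannot be used"
}

check_first_flush <- function(level_cm, i) {
  if (level_cm[i] <= 101) return("no storm event")   # event not started
  dH <- level_cm[i + 1] - level_cm[i]                # ΔH = Hi+1 - Hi
  if (!is.na(dH) && dH < -0.87)
    return("FF event: bypass the influent for 20 min")
  "No FF event: wait 19 min (retention time), then classify the effluent"
}

# Example calls with hypothetical data
level_cm <- c(100.5, 101.8, 103.2, 101.9, 101.1)
check_first_flush(level_cm, 3)
classify_effluent(runif(200, 0, 1.2), lim_A = rep(0.8, 200), lim_B = rep(1.5, 200))
```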

The On-line Decision-Making Tool was tested with a database of nine months. From this
database, 94 storm events were detected and simulated. The simulations were done at a general
level (final use per event) and at a specific level (use per minute per event). For the general
level, the use that fulfils more minutes than the other uses is taken as the use for the entire
event, and the simulated uses per event were compared against the observed uses using the
Ckc. For the specific level, each use simulated per minute was compared with the observed
one and a Ckc was then calculated per event from this minute-by-minute comparison, whereas
for the general level just one Ckc was calculated.
The first simulation at the general level showed that most of the simulated events do not fulfil
use A. The Ckc value was -0.52, which means a poor strength of agreement (see Table A-4
Section 3.6 Part A) (Landis and Koch 1977). Of the 21 events that fulfil use A, the model
could predict only one (reliability of use A: 0.06; reliability of use B: 0.40). Table E-7 supports
this statement: it shows the events that fulfil use A (column #3, obs.) and the results of the
prediction (column #4, predi.), as well as the minutes that fulfil each use for the observed and
predicted uses. Therefore, we decided to simulate only two levels of use: use B and no use
(NU). Appendix E-19-3 shows the complete results, including events that fulfil observed use
B and NU.
The Ckc value obtained for the simulation of use B and NU at the general level was 0.11 – a
slight strength of agreement (see Table A-4 Section 3.6 Part A) (Landis and Koch 1977). Of
the 42 events that fulfil use B, the tool was able to simulate 35 (reliability of use B: 0.83). In the
case of NU, 15 of 52 events agreed (reliability of NU: 0.29). Table E-8 shows the minutes that
fulfil each use for the observed and predicted uses for some of the simulated events, and
Appendix E-19-4 shows the complete table with the simulated events.
Table E-7. Minutes per use for events that fulfill observed (obs.) use A and predicted (predi.) use A, B and NU.
Columns: Day; Beginning time; final use per event, observed (obs.) and predicted (predi.); observed minutes per use (A, B, NU); predicted minutes per use (A, B, NU).
Aug-25-2014 04:33:00 A NU 476 0 0 180 31 265
Aug-25-2014 15:42:00 A B 107 0 0 41 22 44
Aug-26-2014 13:29:00 A B 173 0 0 67 28 78
Aug-27-2014 11:33:00 A B 75 0 0 31 32 12
Aug-28-2014 12:36:00 A A 44 0 0 22 16 6
Jan-23-2015 19:04:00 A B 362 0 0 164 150 48
Jan-24-2015 14:54:00 A B 289 0 0 133 109 47
Jan-27-2015 14:12:00 A B 1308 12 0 531 238 551
Feb-02-2015 22:31:00 A B 65 0 0 30 26 9
Feb-06-2015 13:23:00 A NU 4891 179 0 1749 514 2807
Feb-10-2015 16:22:00 A B 328 0 0 172 109 47
Feb-19-2015 16:55:00 A B 221 0 0 83 30 108
Feb-22-2015 07:01:00 A B 492 2 0 208 197 89
Feb-24-2015 15:39:00 A B 290 0 0 125 112 53
Mar-02-2015 19:29:00 A B 166 0 0 75 61 30
Mar-03-2015 15:10:00 A B 216 0 0 88 65 63
May-02-2015 14:40:00 A B 332 0 0 158 65 109
May-05-2015 10:27:00 A B 129 0 0 60 49 20
May-06-2015 09:51:00 A B 47 0 0 24 17 6
May-13-2015 20:40:00 A NU 206 0 0 72 14 120
May-15-2015 11:23:00 A B 353 0 0 202 109 42

Regarding the specific level, Table E-8 shows the Ckc and reliability values for some events,
and Table E-9 summarizes the number of events that fall in each kappa coefficient category
(e.g. 91 events fall in the slight strength of agreement category and two in the almost perfect
category). For this simulation, 90.4% of the events (85/94) obtained a Ckc value of 0. Since
the Ckc measures the level of agreement between two options (see Section 3.6 Part A), a Ckc
value of 0 means that the observed or predicted values contain only one option, so there are
not two options to compare; in our case, some events kept the same observed use during the
whole event. Only six events (6.4%) achieved a Ckc value between 0.05 and 0.06 – a slight
strength of agreement (see Table A-4 Section 3.6 Part A) (Landis and Koch 1977). One event
(1.1%) fell in the category of moderate strength of agreement with a Ckc value of 0.57, and
two events (2.1%) achieved an almost perfect strength of agreement with Ckc values of 0.93
and 1.
The reliability was implemented in the cases where the Ckc value was equal to 0. Of the 85
events with a Ckc value of 0, 66 (78%) were predicted as use B. Of these 66 events, almost
half (32 events, 48%) coincided with the observed use; the reliability values for these events
ranged between 0.51 and 0.96, and 23 of these 32 events were predicted as use B with a
reliability above 0.75.
In the case of the NU option, 19 events (22%) were predicted as NU. Thirteen of these (68%)
coincided with the observed use; the reliability for these events ranged between 0.10 and 0.72,
and 8 of the 19 events predicted as NU obtained this use with a reliability above 0.50.
Therefore, for these 85 events we were able to simulate 40 events as use B or NU with a
reliability above 0.50. Appendix E-19-5 shows the events with the Ckc and reliability values.
Table E-8. Observed (obs.) and predicted (predi.) B and NU water use, minutes per use, and kappa coefficients and reliabilities for some events.
Columns: Day; Beginning time; observed use (Obs); predicted use (Predi.); observed minutes per use (B, NU); predicted minutes per use (B, NU); Ckc; RB; RNU.
Aug-25-2014 04:33:00 B NU 476 0 211 265 0 0.44 0
Aug-28-2014 12:36:00 B B 44 0 38 6 0 0.86 0
Sep-16-2014 11:11:00 NU B 0 224 191 33 0 0.0 0.15
Sep-22-2014 14:35:00 B B 137 0 126 11 0 0.92 0
Oct-12-2014 19:23:00 B B 295 0 262 33 0 0.89 0
Oct-18-2014 17:50:00 NU B 0 2593 1408 1185 0 0.0 0.46
Jan-23-2015 19:04:00 B B 362 0 314 48 0 0.87 0
Jan-31-2015 15:08:00 B B 17 0 14 3 0 0.82 0
Feb-06-2015 13:23:00 B NU 5070 0 2263 2807 0 0.45 0
Feb-10-2015 16:22:00 B B 328 0 281 47 0 0.86 0
Mar-02-2015 19:29:00 B B 166 0 136 30 0 0.82 0
Mar-03-2015 15:10:00 B B 216 0 153 63 0 0.71 1
Mar-21-2015 16:43:00 B B 227 0 209 18 0 0.92 0
Mar-29-2015 11:30:00 NU B 0 335 287 48 0 0 0.14
Apr-03-2015 08:59:00 NU B 0 414 397 17 0 0 0.4
Apr-03-2015 18:43:00 NU B 0 143 136 7 0 0 0.5
Apr-17-2015 22:57:00 NU B 0 481 466 15 0 0 0.3
Apr-19-2015 01:23:00 NU B 0 76 72 4 0 0 0.5
May-02-2015 14:40:00 B B 332 0 223 109 0 0.67 0
May-16-2015 23:02:00 NU B 0 69 63 6 0 0 0.9
Jun-07-2015 09:49:00 B NU 111 0 35 76 0 0.32 0
Jun-15-2015 17:16:00 NU B 0 266 231 35 0 0 0.13
Jun-18-2015 08:21:00 NU NU 0 442 125 317 0 0 0.72
Jun-27-2015 06:11:00 NU B 0 592 495 97 0 0 0.16
Jun-28-2015 21:37:00 NU B 0 264 203 61 0 0 0.23
Table E-9. Classification of events as a function of the kappa coefficient (strength of agreement, see Table A-4 Section 3.6 Part A - Landis and Koch 1977).
Poor (<0): 0; Slight (0–0.20): 91; Fair (0.21–0.40): 0; Moderate (0.41–0.60): 1; Substantial (0.61–0.80): 0; Almost perfect (0.81–1.0): 2; Total events: 94.

According to the results obtained, the tool is more accurate at predicting use B than the NU
option. In sum, the specific level of simulation allowed a better evaluation of the predicted
uses: at this level, the reliability of the predicted use of 40 events was above or equal to 0.50,
whereas at the general level the reliability of the NU option was 0.29. Since this scenario
employs the spectrometer, a remarkable point is that more quality indicators can be added to
assess the water use prediction more accurately.

19.2. DM TOOL SECOND SCENARIO – USING TURBIDITY PROBE
With the aim of evaluating less demanding monitoring options in terms of cost, a second
scenario was developed following the same methodology described in Section 19.1 but,
instead of using a spectrometer probe at the entrance of the system, a turbidity probe was
used.
For this case, we had five months of continuous turbidity and level measurements at 1-min
intervals: August to September 2014, February to March 2015 and May 2015. From this
database, a total of 43 storm events were recorded; compared with the database used in
Section 19.1, 35 storm events coincided. TSS values were obtained following the calibration
procedure described in Chapter 9 Part C, with the results shown in Chapter 15 Part D.
19.2.1 First Flush identification and definition of the bypass time
As in Section 19.1, two methods were evaluated for FF detection: the rules method (Section
11.1 Part C) and the SVM method (Section 3.10 Part A). In this case, we found that the
influential factors on the differences between the FF zones were the variables Hmean, Hmax,
Δhmax and Δhmaxb (p-value < 0.05) (see Figures E-11 and E-12). The decision rules were
built using and combining the quartiles of the chosen variables (see Table E-10). With these
rules we proceeded to simulate each event. Then, the simulation was evaluated using Cohen's
kappa coefficient (Ckc) (see Section 3.6 Part A) and the reliabilities of FF and No FF events
(Section 11.2 Part C). The Ckc was used in this thesis to compare the observed monitoring
data results with the simulated results. The reliability was implemented in the cases where the
Ckc value was equal to 0, which means that the observed or predicted values contain only one
option. Table E-10 shows the rules that were proposed and the Ckc after running the FF
simulations.

Figure E-11. Boxplot per FF zone of variables Hmean and Hmax using the turbidity probe

According to Table E-10, the highest Cohen's kappa coefficient was 0.31 – a fair strength of
agreement (see Table A-4 Section 3.6 Part A) (Landis and Koch 1977). This coefficient
corresponded to Rules C and G, whose common factor was quartile four of Hmax (117 cm).
In detail, of the seven events detected as FF events, the tool was able to simulate three
(reliability of FF: 0.29). In contrast, the tool was able to simulate 35 of the 36 events observed
as No FF (reliability of No FF: 0.97). Although it was not possible to obtain a better Ckc, it
was decided to use an Hmax of 117 cm to classify an event as having a strong first flush effect.

Figure E-12. Boxplot per FF zone of variables Dhmax and Dhmaxb using the turbidity probe

Table E-10. Rules to identify a first flush event (turbidity data), and Cohen's kappa coefficient (Ckc) values after running the
FF simulations (q: quartile; the variables that obtained the highest Ckc are highlighted).
Rule Description Ckc
Rule A Using q3 Hmean or q3 Hmax or q4 Dhmax or q2 Dhmaxb for FF identification 0.31 (Hmax)
Rule B Using q2 Hmean or q2 Hmax or q5 Dhmax or q3 Dhmaxb for FF identification 0.26 (Hmax)
Rule C Using q4 Hmean or q4 Hmax or q3 Dhmax or q1 Dhmaxb for FF identification 0.33 (Hmax)
Rule D Using q3 Hmean or q3 Hmax or q4 Dhmax or q3 Dhmaxb for FF identification 0.31 (Hmax)
Rule E Using q3 Hmean and q4 Dhmaxb for FF identification 0.30
Rule F Using q3 Hmax and q4 Dhmax for FF identification 0.31
Rule G Using q4 Hmean and q4 Hmax for FF identification 0.33
Rule H Using q4 Hmean and q3 Hmax for FF identification 0.31

Here as well, the SVM method (Section 3.10 Part A) was implemented for FF detection
(executed 1000 times). For this case, we were able to simulate 26 of the 43 events, because the
SVM method does not allow NA values in the database. Appendix E-19-6 shows the database
used in this process. In order to select a kernel function, these simulations were evaluated
using the Ckc and the reliabilities of FF and No FF. Figures E-13, E-14 and E-15 show the
values of Ckc and of the reliabilities of FF and No FF for each function. Figure E-13 illustrates
that four of the seven kernel functions obtained Ckc values of zero, which means that, for all
the events, these functions simulate only one class (FF or No FF), not both. The Ckc values of
the kernel functions anovadot, polydot and vanilladot varied between negative values (-0.40)
and 0.80. Figure E-14 shows that the functions that obtained the highest median reliability of
FF were polydot and vanilladot. Figure E-15 illustrates that most of the kernel functions
obtained a median reliability of No FF of one, with the exception of polydot and vanilladot.
Therefore, the selected functions were anovadot, polydot and vanilladot.

Figure E- 13. Ckc values for the simulation of FF for the seven kernel functions

Figure E-14. Reliabilities of FF for the seven kernel functions

The next step was to evaluate the redundant variables. For this case, we obtained p-values
lower than 0.05, which means that the removed variables do generate significant differences;
hence, none of the variables can be put aside. After the evaluation of the redundant variables,
the SVM model was executed using the chosen kernel functions. Table E-11 shows the results
for each function. Then, the observed FF per event was compared with the predicted FF. The
Ckc values obtained were 0 for anovadot (slight strength of agreement) and 0.24 (fair strength
of agreement) for polydot and vanilladot. Therefore, the rules method, with a Ckc of 0.31, was
chosen as the FF detection method.

Figure E-15. Reliabilities of No FF for the seven kernel functions

Table E-11. Observed and predicted values for FF and No FF, using the anovadot, polydot and vanilladot kernel functions.
Columns: Date; observed Prob-FF; observed FF; anovadot Prob-FFp and FFp; polydot Prob-FFp and FFp; vanilladot Prob-FFp and FFp.
Aug-13-2014 1.00 No FF 0.33 No FF 0.11 No FF 0.10 No FF
Aug-27-2014 0.89 No FF 0.32 No FF 0.31 No FF 0.30 No FF
Sep-03-2014 0.98 FF 0.00 No FF 0.05 No FF 0.05 No FF
Sep-06-2014 0.58 FF 0.00 No FF 0.00 No FF 0.00 No FF
Sep-14-2014 0.82 No FF 0.32 No FF 0.24 No FF 0.22 No FF
Sep-16-2014 1.00 No FF 0.27 No FF 0.28 No FF 0.27 No FF
Sep-20-2014 1.00 No FF 0.25 No FF 0.25 No FF 0.25 No FF
Sep-22-2014 0.89 No FF 0.25 No FF 0.28 No FF 0.28 No FF
Feb-02-2015 0.80 No FF 0.33 No FF 0.18 No FF 0.19 No FF
Feb-06-2015 1.00 FF 0.00 No FF 0.22 No FF 0.22 No FF
Feb-10-2015 1.00 No FF 0.32 No FF 0.33 No FF 0.31 No FF
Feb-19-2015 0.00 No FF 0.32 No FF 0.30 No FF 0.28 No FF
Feb-24-2015 0.94 FF 0.10 No FF 0.16 No FF 0.16 No FF
Mar-02-2015 1.00 No FF 0.12 No FF 0.15 No FF 0.13 No FF
Mar-03-2015 0.98 No FF 0.18 No FF 0.30 No FF 0.29 No FF
Mar-05-2015 0.00 No FF 0.33 No FF 0.31 No FF 0.34 No FF
Mar-13-2015 0.98 No FF 0.35 No FF 0.30 No FF 0.33 No FF
Mar-16-2015 1.00 No FF 0.13 No FF 0.14 No FF 0.14 No FF
Mar-16-2015 0.89 No FF 1.00 No FF 0.97 No FF 0.98 No FF
Mar-17-2015 1.00 No FF 0.88 No FF 0.79 No FF 0.76 No FF
Mar-19-2015 1.00 FF 0.14 No FF 0.21 No FF 0.24 No FF
May-13-2015 1.00 No FF 0.92 No FF 0.64 No FF 0.64 No FF
May-15-2015 0.97 FF 0.10 No FF 0.53 FF 0.52 FF
May-16-2015 1.00 No FF 1.00 No FF 0.97 No FF 0.97 No FF
May-17-2015 1.00 No FF 1.00 No FF 0.96 No FF 0.95 No FF
May-17-2015 1.00 No FF 1.00 No FF 1.00 No FF 0.99 No FF

The bypass time was defined with the same methodology explained in Section 19.1. We
obtained similar results: for a total mass of 40% the median time was 28 minutes and, in the
case of 30%, the median time was 25 minutes; the bypass time was therefore also defined as
20 minutes. Then, with the bypass time defined, the next step was the prediction of the water
quality at the outlet. Here as well, we used the retention time within the CW of 19 minutes
found by Hernández et al. (2015).
19.2.2 Prediction of the water quality at the outlet
Taking into account that this scenario uses a turbidity probe at the entrance of the system, the
water quality at the outlet of the system was predicted using the kernel model for the 437.5 nm
wavelength. This wavelength is correlated with TSS concentrations (Spearman's rho equal to
0.93, p-value < 0.05), as defined in Chapter 17. The TSS values predicted at the outlet of the
system were compared with the TSS guideline limits for uses A and B, defined in Section 10.1,
Table C-2, as sketched below.
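This comparison can be sketched as follows (R); the TSS limit values used here are placeholders, not the values of Table C-2.

```r
# Minimal sketch (R): mapping a predicted outlet TSS value to a water use.
# The limit values are placeholders for the Table C-2 guideline limits.
tss_to_use <- function(tss_mg_l, lim_A = 10, lim_B = 30) {
  if (tss_mg_l < lim_A) "A" else if (tss_mg_l < lim_B) "B" else "NU"
}
sapply(c(5, 18, 60), tss_to_use)   # -> "A" "B" "NU"
```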
19.2.3 Proposed DM tool and Results
Figure E-16 shows a flow diagram with the proposed on-line decision-making tool employing
the turbidity probe at the system inlet.

Event start: inlet water level H > 101 cm, checked every minute (t = 1 min).
Compute ΔH = Hi+1 - Hi.
If ΔH < -0.87 cm: FF event, bypass the influent for 20 min. Otherwise: No FF event.
Wait 19 min to use the effluent water.
If TSSsimu < TSSlimitUSE-A: the treated water can be used for urban reuse, agricultural reuse and recreational impoundments.
Otherwise, if TSSsimu < TSSlimitUSE-B: the treated water can be used for agricultural reuse (food processed & non-food crops), landscape impoundments and restricted areas irrigation.
Otherwise: the treated water cannot be used.
Figure E-16. Flowchart of the On-line Decision-Making Tool using a turbidity probe at the entrance of the system

For this scenario, the tool was tested with a database of five months, from which 34 storm
events were detected and simulated. We employed the methodology mentioned in Section
19.1.3 for the simulation levels and for the water use definition in each case, evaluated using
the Ckc.
We observed a behavior similar to that of Section 19.1.3: most of the simulated events (general
level) do not fulfil use A. The Ckc value was lower than 0 (-0.23), which means a poor strength
of agreement (see Table A-4 Section 3.6 Part A) (Landis and Koch 1977). Of the 16 events
that fulfil use A, the model predicted four (reliability of use A: 0.24). In the case of use B, the
model predicted three of eight events (reliability of use B: 0.38), and for NU (no use) the
model predicted only one of ten events (reliability of NU: 0.10). Table E-12 supports these
observations. Hence, we simulated only water use B and no use (NU).
Table E-12. Observed (obs.) and predicted (predi.) final use, employing the turbidity probe at the inlet, for uses A, B and NU.
Columns (table in two halves): Day; Beginning time; observed use (Obs); predicted use (Predi.).
Aug-25-2014 04:33:00 A B Mar-02-2015 19:29:00 A A
Aug-25-2014 15:42:00 A B Mar-03-2015 15:10:00 A B
Aug-26-2014 13:29:00 A B Mar-05-2015 15:51:00 B B
Aug-27-2014 11:33:00 A A Mar-13-2015 17:12:00 B NU
Sep-16-2014 11:11:00 NU A Mar-16-2015 15:12:00 NU B
Sep-17-2014 22:22:00 NU A Mar-16-2015 22:11:00 NU B
Sep-18-2014 12:33:00 B B Mar-17-2015 23:29:00 NU B
Sep-20-2014 05:53:00 B B Mar-19-2015 11:23:00 A A
Sep-21-2014 17:46:00 B A May-02-2015 14:40:00 A NU
Sep-22-2014 14:35:00 B A May-02-2015 17:23:00 A NU
Sep-26-2014 15:06:00 NU NU May-13-2015 20:40:00 A B
Sep-28-2014 19:20:00 NU A May-15-2015 11:23:00 A B
Feb-02-2015 22:31:00 A A May-16-2015 23:02:00 NU B
Feb-06-2015 13:23:00 A NU May-17-2015 00:44:00 NU B
Feb-10-2015 16:22:00 A B May-17-2015 17:53:00 NU B
Feb-19-2015 16:55:00 A B May-19-2015 12:11:00 B A
Feb-24-2015 15:39:00 A B May-19-2015 20:59:00 B A

For the general level simulation of water uses B and NU, the Ckc value obtained was -0.08 – a
poor strength of agreement (see Table A-4 Section 3.6 Part A) (Landis and Koch 1977).
Although the Ckc falls in the same category as in the previous execution, the reliability of use
B improved to 0.83; however, the reliability of NU did not improve (0.10).
Table E-13 shows the Ckc and reliability values for all the events obtained from the specific
level simulation. For this simulation, 91% of the events (31/34) obtained a Ckc value of 0.
Since the Ckc measures the level of agreement between two options (see Section 3.6 Part A),
a Ckc value of 0 means that the observed or predicted values contain only one option; in our
case, some events kept the same observed use during the whole event. One event obtained a
Ckc value lower than zero, another achieved a Ckc value between 0.05 and 0.06 – a slight
strength of agreement (see Table A-4 Section 3.6 Part A) (Landis and Koch 1977) – and one
event fell in the category of fair strength of agreement with a Ckc value of 0.37.
The reliability was used to evaluate the simulation when the events obtained a Ckc value of 0.
Of the 31 events with a Ckc value of 0, 26 (84%) were predicted as use B. Of these 26 events,
most (19 events, 73%) coincided with the observed use; as mentioned, the reliability of B
improves in this simulation, with values ranging between 0.57 and 1.00. Only one event was
simulated as NU and coincided with the observed use, obtaining a reliability of NU of 0.82.

Table E-13. Observed (obs.) and predicted (predi.) final use, kappa coefficients and reliabilities, employing the turbidity probe at the inlet, for uses B and NU.
Columns (table in two halves): Day; Beginning time; Obs.; Predi.; then, in the left half, RB; RNU; Ckc and, in the right half, Ckc; RB; RNU.
Aug-25-2014 04:33:00 B B 1.00 0.00 0.00 Mar-02-2015 19:29:00 B B 0.00 1.00 0.00
Aug-25-2014 15:42:00 B B 1.00 0.00 0.00 Mar-03-2015 15:10:00 B B 0.00 0.89 0.00
Aug-26-2014 13:29:00 B B 1.00 0.00 0.00 Mar-05-2015 15:51:00 B B -0.23 0.01 0.00
Aug-27-2014 11:33:00 B B 1.00 0.00 0.00 Mar-13-2015 17:12:00 B NU 0.00 0.02 0.00
Sep-16-2014 11:11:00 NU B 0.00 0.00 0.00 Mar-16-2015 15:12:00 NU B 0.01 1.00 0.09
Sep-17-2014 22:22:00 NU B 0.00 0.00 0.00 Mar-16-2015 22:11:00 NU B 0.00 0.00 0.00
Sep-18-2014 12:33:00 B B 0.71 0.00 0.00 Mar-17-2015 23:29:00 NU B 0.37 1.00 0.47
Sep-20-2014 05:53:00 B B 1.00 0.00 0.00 Mar-19-2015 11:23:00 B B 0.00 0.68 0.00
Sep-21-2014 17:46:00 B B 1.00 0.00 0.00 May-02-2015 14:40:00 B NU 0.00 0.00 0.00
Sep-22-2014 14:35:00 B B 1.00 0.00 0.00 May-02-2015 17:23:00 B NU 0.00 0.00 0.00
Sep-26-2014 15:06:00 NU NU 0.00 0.82 0.00 May-13-2015 20:40:00 B B 0.00 1.00 0.00
Sep-28-2014 19:20:00 NU B 0.00 0.00 0.00 May-15-2015 11:23:00 B B 0.00 0.98 0.00
Feb-02-2015 22:31:00 B B 1.00 0.00 0.00 May-16-2015 23:02:00 NU B 0.00 0.00 0.00
Feb-06-2015 13:23:00 B NU 0.60 0.00 0.00 May-17-2015 00:44:00 NU B 0.00 0.00 0.00
Feb-10-2015 16:22:00 B B 0.99 0.00 0.00 May-17-2015 17:53:00 NU B 0.00 0.00 0.00
Feb-19-2015 16:55:00 B B 0.57 0.00 0.00 May-19-2015 12:11:00 B B 0.00 1.00 0.00
Feb-24-2015 15:39:00 B B 0.69 0.00 0.00 May-19-2015 20:59:00 B B 0.00 1.00 0.00

As with the first scenario (Section 19.1), the second scenario is more accurate at predicting use
B than the NU option. In this case as well, the specific level of simulation allowed a better
evaluation of the predicted uses: at this level, the reliability of the predicted use of 20 events
was above or equal to 0.50.
19.3. DM TOOL THIRD SCENARIO – USING SVM
Support Vector Machines (SVM) (see Section 3.10 Part A) allow the water uses (A, B and NU
– no use) to be simulated from the storm parameters and hydraulic performance variables;
therefore, quality measurements are no longer needed to define the water uses. The output
variables were the water uses and the input variables were the storm parameters and hydraulic
performance variables. The SVM tool was executed 1000 times, changing the calibration and
validation events, for each of the seven kernel functions (rbfdot, polydot, vanilladot, tanhdot,
laplacedot, besseldot, anovadot), as sketched below.
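A sketch of this execution, assuming kernlab in R; the data frame is hypothetical, and prob.model = TRUE is what later allows class probabilities (as reported in Table E-16) to be returned.

```r
# Minimal sketch (R, kernlab): SVM classification of the water use per event
# from storm and hydraulic variables, with class probabilities.
# `events` is a hypothetical stand-in for the Appendix E-19-7 database.
library(kernlab)

set.seed(7)
events <- data.frame(
  use   = factor(sample(c("A", "B"), 43, replace = TRUE)),
  Hmean = rnorm(43, 102.8, 0.6),
  Dhmax = rnorm(43, -1.9, 1.2),
  ADWP  = rexp(43, 1 / 48),
  Imean = rexp(43, 1)
)

cal <- sample(nrow(events), 30)                        # calibration events
fit <- ksvm(use ~ ., data = events[cal, ],
            kernel = "anovadot", prob.model = TRUE)    # anovadot: kernel selected for uses A and B
probs <- predict(fit, events[-cal, ], type = "probabilities")
pred  <- predict(fit, events[-cal, ])                  # predicted use per validation event
head(round(probs, 2))
table(observed = events$use[-cal], predicted = pred)
```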
For the case of uses A and B, we simulated 49% of the events (43 of 88), because the events
with NA values were removed. Appendix E-19-7 shows the complete database used in this
section. The Ckc and the reliabilities of each water use were evaluated to select one of the
kernel functions. Figures E-17 to E-19 show the values of Ckc and of the reliabilities of use A
and use B for each function. Figure E-17 illustrates a slight strength of agreement between the
predicted and observed water uses, with Ckc median values between 0 and 0.20 for most of
the kernel functions. The function with the highest Ckc median value is anovadot (0.18),
whose Ckc values varied between -0.10 and 0.46. For the case of water use A, the functions
that obtained the highest median reliability were besseldot, laplacedot and rbfdot (see Figure
E-18). On the other hand, Figure E-19 shows that these functions obtained values between 0
and 0.2 for the reliability of use B; in this Figure, the functions with median values above 0.4
are polydot, vanilladot and anovadot. Anovadot was identified as the only function that
obtained a good performance for the Ckc and for the reliabilities of use A and use B. Hence,
this function was selected for the simulation.

Figure E-17. Ckc values for the simulation of use A and B for the seven kernel functions

Figure E-18. Reliabilities of use A for the seven kernel functions

Figure E-19. Reliabilities of use B for the seven kernel functions

The redundancy analysis was done using the method explained in Section 11.3 Part C
(evaluation method to identify the redundant variables) with the selected kernel function. As
explained in the previous sections, this method helps to identify the variables that can be put
aside. For this scenario, we obtained p-values greater than 0.05 for the following variables:
Hmax, Hmean, Imax, Dplu, Hmeanb, ADWPb and Dplub, which means that any of these
variables can be taken out of the analysis. A second simulation was done removing two
variables at a time, using the identified variables (98 pairs or simulations), and the Ckc of each
pair was compared with the Ckc of the variable identified in the first simulation. For this case,
we obtained p-values lower than 0.05, which means that the removed pairs of variables do
generate significant differences; hence, none of the pairs of variables can be put aside.
On the other hand, we simulated water uses B and NU, using the same methodology employed
for the simulation of uses A and B. Figures E-20, E-21 and E-22 show the values of Ckc and
of the reliabilities of use B and NU for each function. Despite the Ckc values (Figure E-20)
not performing better than in the previous simulation, the reliabilities of B and NU improved.
Figure E-20 illustrates that the vanilladot function obtains the highest Ckc median value (0.29)
with the fewest negative values. Figure E-21 illustrates that the functions with median
reliabilities of use B above 0.6 are polydot, rbfdot and vanilladot, and in the case of the
reliabilities of NU the functions with values above 0.6 are anovadot, rbfdot and vanilladot.
Vanilladot shows a good performance for the Ckc and for the reliabilities of use B and NU,
which indicates that it should be the function used for the simulation of water uses with SVM.
For this case, in the evaluation of the redundant variables we obtained p-values greater than
0.05 for the following variables: Hmax, Dhmax, Imean, Dweir, Dplu, Dhmaxb, Pb, Imaxb,
Dweirb and Dplub. Then, the second simulation was executed removing two variables at a
time. A total of 125 pairs were evaluated and 123 pairs were identified as possible redundant
pairs of variables; Appendix E-19-7 shows these 123 pairs. A third simulation was executed by
removing three variables at a time, using the pairs identified above. The trios of variables that
can be removed were then identified using the Wilcoxon signed-rank test. Table E-14 lists the
pairs that cannot become a trio, identified as the variables that "must be in the simulation",
and Table E-15 shows the most repeated trios formed by the remaining variables. Finally, the
variables that formed more trios were marked and classified as possible redundant variables.

Figure E-20. Ckc values for the simulation of use B and NU for the seven kernel functions

Figure E-21. Reliabilities of use B for the seven kernel functions

Figure E-22. Reliabilities of NU for the seven kernel functions

Table E-14. Pairs of variables that cannot become a trio: variables that "must be in the simulation".
Columns: Hmax; Hmean; Dhmaxb; ADWP; P; Imax; ADWPb; Imaxb; Hmaxb; Imeanb; Hmeanb.
Hmax X X X X X X X
Dhmax X X X X X
Imean X X X X X X
Dweir X X X X X X
Dplu X X X X
Dhmaxb X X X X X
Pb X X X X X
Imaxb X X X X X
Dplub X X X X X X
Table E-15. Trios of variables that can be removed and Ckc obtained. (q: quartile;
lw: lower whisker; uw: upper whisker).
Ckc values
lw q1 q2 q3 uw
All variables -0.29 0.13 0.29 0.43 0.86
Trios of variables
1 Imean Dplu Dweir -0.32 0.13 0.29 0.43 0.86
2 Imean Dplu Dplub -0.32 0.13 0.29 0.43 0.86
3 Dweirb Imean Dweir -0.29 0.14 0.29 0.43 0.86

The SVM tool was executed without the trios of variables that can be removed (Table E-15).
The Ckc values obtained by removing each trio are also shown in Table E-15. These results
show that the Ckc values without the trios are almost the same as the Ckc values obtained
with all the variables; only a slight difference in the lower whisker values was observed. The
ideal variables to remove would be those for which we have to wait until the end of the event
(e.g. Dplu, rain duration). Therefore, for this case trio number 1 captured our attention; this
trio contains the variables Imean, Dplu and Dweir. SVM for the classification of water uses
would therefore use 15 variables with the same performance as using all 18 variables. Table
E-16 shows the results of the SVM tool with all the variables and without the chosen trio of
variables; it shows the probability of use B or NU for each event, and the predicted use was
defined with a probability of up to 90%.
Table E-16. Observed (obs.) and predicted (predi.) B and NU final use. Predicted use employing the SVM tool with all the variables and without the identified trio.
Columns: #; Day; Time; observed final use (Obs); predicted with all variables (Prob-B, Prob-NU, predicted use); predicted without 3 variables (Prob-B, Prob-NU, predicted use).
1 Aug-26-2014 13:29:00 B 0.81 0.19 NU 0.90 0.10 B
2 Sep-16-2014 11:11:00 NU 0.85 0.15 NU 0.90 0.10 B
3 Sep-20-2014 05:53:00 NU 0.94 0.06 B 0.70 0.30 NU
4 Sep-22-2014 14:35:00 B 0.27 0.73 NU 0.20 0.80 NU
5 Oct-10-2014 16:16:00 B 0.89 0.11 B 0.72 0.28 NU
6 Oct-12-2014 19:23:00 B 0.37 0.63 NU 0.54 0.46 NU
7 Oct-16-2014 23:24:00 NU 0.11 0.89 NU 0.21 0.79 NU
8 Oct-18-2014 17:50:00 NU 0.88 0.12 B 0.74 0.26 NU
9 Oct-22-2014 15:08:00 NU 0.84 0.16 NU 0.88 0.12 NU
10 Jan-23-2015 19:04:00 B 0.59 0.41 NU 0.71 0.29 NU
11 Jan-24-2015 14:54:00 B 0.93 0.07 B 0.69 0.31 NU
12 Jan-27-2015 14:12:00 B 0.87 0.13 NU 0.95 0.05 B
13 Jan-31-2015 11:47:00 B 0.96 0.04 B 0.94 0.06 B
14 Feb-02-2015 22:31:00 B 0.63 0.37 NU 0.19 0.81 NU
15 Feb-06-2015 13:23:00 B 0.04 0.96 NU 0.91 0.09 B
16 Feb-10-2015 16:22:00 B 0.83 0.17 NU 0.80 0.20 NU
17 Feb-19-2015 16:55:00 B 0.98 0.02 B 0.97 0.03 B
18 Feb-24-2015 15:39:00 B 1.00 0.00 B 1.00 0.00 B
19 Mar-02-2015 19:29:00 B 0.96 0.04 B 1.00 0.00 B
20 Mar-03-2015 15:10:00 B 0.09 0.91 NU 0.05 0.95 NU
21 Mar-05-2015 15:51:00 B 0.67 0.33 NU 0.56 0.44 NU
22 Mar-13-2015 17:12:00 B 0.99 0.01 B 0.99 0.01 B
23 Mar-16-2015 15:12:00 NU 0.99 0.01 B 1.00 0.00 B
24 Mar-16-2015 22:11:00 NU 0.57 0.43 NU 0.26 0.74 NU
25 Mar-17-2015 23:29:00 NU 0.58 0.42 NU 0.71 0.29 NU
26 Mar-19-2015 11:23:00 B 0.64 0.36 NU 0.63 0.37 NU
27 Mar-21-2015 16:43:00 B 0.98 0.02 B 0.95 0.05 B
28 Mar-22-2015 16:39:00 B 0.27 0.73 NU 0.15 0.85 NU
29 Mar-23-2015 13:11:00 B 0.40 0.60 NU 0.63 0.37 NU
30 Apr-04-2015 09:28:00 NU 0.07 0.93 NU 0.13 0.87 NU
31 Apr-05-2015 16:23:00 NU 0.00 1.00 NU 0.00 1.00 NU
32 Apr-17-2015 22:57:00 NU 0.44 0.56 NU 0.79 0.21 NU
33 Apr-18-2015 16:02:00 NU 0.27 0.73 NU 0.64 0.36 NU
34 May-13-2015 20:40:00 B 0.56 0.44 NU 0.49 0.51 NU
35 May-15-2015 11:23:00 B 0.52 0.48 NU 0.71 0.29 NU
36 May-16-2015 23:02:00 NU 0.36 0.64 NU 0.02 0.98 NU
37 May-17-2015 00:44:00 NU 0.19 0.81 NU 0.23 0.77 NU
38 May-17-2015 17:53:00 NU 0.00 1.00 NU 0.00 1.00 NU
39 Jun-14-2015 11:45:00 NU 0.04 0.96 NU 0.05 0.95 NU
40 Jun-15-2015 17:16:00 NU 0.03 0.97 NU 0.02 0.98 NU
41 Jun-18-2015 08:21:00 NU 0.09 0.91 NU 0.13 0.87 NU
42 Jun-21-2015 23:58:00 NU 0.06 0.94 NU 0.17 0.83 NU
43 Jun-22-2015 07:32:00 NU 0.52 0.48 NU 0.14 0.86 NU

Table E-17 summarizes the kappa coefficients and reliabilities for the two executions. We can
observe an improvement in the Ckc value from 0.19 (all variables) to 0.28 (without the chosen
trio); this result changes the Ckc label from a slight strength of agreement to a fair strength of
agreement (see Table A-4 Section 3.6 Part A (Landis and Koch 1977)). The use reliabilities
also improved: the reliability of use B changed from 0.35 to 0.39 and that of NU from 0.85 to
0.90. Hence, the removed variables make the model adjustment more complex due to their
low contribution to the model.
Table E-17. Kappa coefficients (Ckc) and reliabilities (RB and RNU) for the SVM executions.
Kappa Coefficient and Reliabilities
Execution of SVM with Ckc RB RNU
All variables 0.19 0.35 0.85
Without 3 variables 0.28 0.39 0.90

19.4. COMPARISON OF THREE SCENARIOS RESULTS


Section 19.1 presented the results of the DM tool first scenario for the prediction of water
uses, employing the spectrometer at the system's entrance. Section 19.2 showed the results of
a DM tool that uses a turbidity probe instead, and Section 19.3 presented the use of the SVM
method as a DM tool for the prediction of water uses. In this section we present a comparison
of the results of these scenarios.
Table E-18 summarizes the level of agreement (Ckc) per month between the observed and
predicted water uses of each scenario, as well as the reliability of use B (RB) and of NU (RNU)
per month. Most of the Ckc values are lower than or equal to zero; therefore, the strength of
agreement per month is poor. For most of the events of each month the water uses were of
the same option, which is the reason for obtaining Ckc values equal to zero. In the case of RB,
the highest values were obtained with the first scenario; for RNU, the best scenario was the
third one and the second scenario was the worst one. NA means that the prediction could not
be made due to lack of data.
Table E-18. Kappa coefficients and reliabilities per month.
Columns: Month; Scenario 1 (Ckc, RB, RNU); Scenario 2 (Ckc, RB, RNU); Scenario 3 (Ckc, RB, RNU).
Aug-14 0.00 0.80 0.00 0.00 0.60 0.00 NA 0.20 0.00
Sep-14 -0.05 0.75 0.20 -0.11 0.50 0.20 -0.50 0.00 0.20
Oct-14 -0.33 0.67 0.00 NA 0.00 0.00 0.00 0.00 1.00
Jan-15 NA 1.00 0.00 NA 0.00 0.00 0.00 0.33 1.00
Feb-15 0.00 0.83 0.00 0.00 0.67 0.00 0.00 0.50 0.00
Mar-15 -0.01 0.88 0.11 -0.23 0.50 0.00 0.03 0.38 0.22
Apr-15 0.00 0.00 0.92 NA 0.00 0.00 NA 0.00 0.31
May-15 -0.16 0.88 0.00 -0.23 0.50 0.00 0.00 0.00 1.00
Jun-15 0.06 0.67 0.47 NA 0.00 0.00 NA 0.00 0.32

Figure E-23 shows the observed water uses versus the general level results of the first (Pred-
S1), second (Pred-S2) and third (Pred-S3) scenarios per event (Ev.). The event details, date and
time, are shown in Appendix 19-4. The illustrated periods are September 2014 and February
2015, showing the observed water uses B and NU and their predictions.

(a) September 2014 (b) February 2015
Figure E-23. September 2014 and February 2015 observed water uses (Obs-WU) per event versus general results of the first
(Pred-S1), second (Pred-S2), and third (Pred-S3) scenarios.

Figure E-23a shows nine storm events of September 2014. In the case of the first scenario
(Pred-S1), the tool made four accurate predictions (Ev. 10 to 13) (RB: 0.75; RNU: 0.20; Ckc:
-0.05). The second scenario (Pred-S2) correctly predicted three events (Ev. 11 to 13) (RB:
0.50; RNU: 0.20; Ckc: -0.11), and the third scenario (Pred-S3) two events (Ev. 9 and 14)
(RB: 0; RNU: 0.20; Ckc: -0.5). The best tool in this case was Pred-S1. Figure E-23b illustrates
the events of February 2015. The first scenario accurately predicted five (Ev. 27, 29 to 32) of
the six events (RB: 0.83; RNU: 0; Ckc: 0). Pred-S2 predicted five events and agreed on four
(Ev. 27, 29, 30 and 32) (RB: 0.67; RNU: 0; Ckc: 0), and Pred-S3 agreed on three events (Ev.
28, 30 and 32) (RB: 0.5; RNU: 0; Ckc: 0). For this month, the best tool was also Pred-S1.
Appendix 19-8 shows the figures for the remaining periods.
Table E-19 summarizes the characteristics of each scenario and the general results obtained.
For first flush detection, the method with the highest Ckc value was the rules' method of the
first scenario, which obtained a Ckc value of 0.49, i.e. a moderate strength of agreement. In
terms of water use at the general level, the first scenario tool obtained a Ckc value of 0.11
(slight strength of agreement), a reliability of use B of 0.83 and a reliability of NU of 0.29.
The second scenario did not achieve a better Ckc value (-0.08, poor strength of agreement),
whereas with the third scenario the Ckc value improved to 0.28 (fair strength of agreement).
Therefore, using the SVM method we observed an improvement in the Ckc value from a slight
strength of agreement (0.11) to a fair strength of agreement (0.28).
Table E-19. Kappa coefficients (Ckc) and reliabilities (RB and RNU) for all the scenarios
Section  DM tool                               Database   FF Ckc results              Water use Ckc and reliabilities
                                               used       Rules' method  SVM method   Ckc     RB     RNU
19.1     First scenario – spectrometer probe   94 events  0.49           0.30         0.11    0.83   0.29
19.2     Second scenario – turbidity probe     34 events  0.24           0.31         -0.08   0.83   0.10
19.3     Third scenario – SVM method           43 events  -              -            0.28    0.39   0.90

Looking at the reliabilities in detail, the first and second scenarios differ in the reliability
of NU, which decreased from 0.29 to 0.10. For the reliability of use B, we obtained a high value
of 0.83 with the tools proposed in Sections 19.1 and 19.2, whereas with the SVM tool (Section
19.3) we obtained a lower value of 0.39. A similar contrast, but reversed, was observed for the
reliability of NU: the SVM tool obtained a high value of 0.90, while the tool proposed in
Section 19.1 obtained a low value of 0.29. Hence, we can affirm that the prediction of the third
scenario (SVM method) is on the safe side.
On the other hand, at the specific level we could only compare the first and second scenarios,
because the third scenario only makes predictions at the general level. A total of 34 events are
shown in Table E-20. With the first scenario we obtained five events with a high reliability of
water use B, whereas with the second scenario we obtained 19 such events. Conversely, with the
first scenario we obtained 12 events with a high reliability of water use NU, while with the
second scenario we obtained only one. This observation confirms that the water use prediction
with the first scenario is on the safe side. In terms of Ckc value, most of the events obtained
a Ckc equal to zero (30 of 34), and the first tool obtained 4 events with a Ckc greater than
zero.
For illustrative purposes, Figure E-24 shows the specific-level observed water uses and the
water uses predicted by the DM tool first and second scenarios for two storm events, using the
input flow (L/s) as reference. On the left side of Figure E-24 is the event of February 6th 2015
13:23 and on the right side is the event of March 17th 2015 23:29. The yellow color indicates
the water use B and the red color indicates when the water cannot be used (NU). The blue line
indicates when the bypass ends. The predicted panels (DM tool first and second scenarios)
display the Ckc, RB (reliability of water use B) and RNU (reliability of no water use, NU).
From Figure E-24 we can observe that for the event of February 6th 2015 13:23 (left side of the
figure), the RB is better for the second scenario (0.66) than for the first one (0.45). In the
case of the event of March 17th 2015 23:29, the RNU of the first scenario (0.66) is better than
that of the second scenario (0.47).
Within this chapter, three DM tool scenarios were proposed for the operation of the stormwater
harvesting system. According to the results presented so far, the best scenario depends on
whether the water use prediction is needed in real time or in deferred time. In real time, the
most accurate scenario is the first one, using the spectrometer probe, with a Ckc of 0.11, a RB
of 0.83 and a RNU of 0.29. Compared with the second scenario, the first scenario keeps us on
the safe side, with a RNU of 0.29 instead of 0.10; with the second scenario, using the turbidity
probe, the decision-making is at high risk in terms of water uses. Another advantage of the
first scenario is that more events could be analyzed, whereas the second scenario was the one
with the fewest events to analyze. If decisions are taken in deferred time, which means waiting
until the end of the storm event, the third scenario is the indicated one, with a Ckc of 0.28,
a RB of 0.39 and a RNU of 0.90. In financial terms, the scenario that requires the least
economic resources is the third one. This method allows us to simulate the water uses with 15
variables; to measure them we only need an ultrasonic level sensor, for the hydraulic
performance variables, and a pluviometer for the storm parameters. Therefore, the quality
measurements are no longer needed for the definition of water uses.
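
As an illustration of this third-scenario idea, the sketch below trains an SVM classifier on 15 quantity-related variables to predict the general water use per event. The data frame events, its column names, the synthetic values and the choice of the Gaussian (rbfdot) kernel are assumptions made for the example only, not necessarily those retained in Section 19.3.

library(kernlab)

# Toy stand-in for the event database: 15 storm/hydraulic variables (hypothetical
# names v1 to v15) and the observed general water use per event (B or NU).
set.seed(1)
n <- 80
events <- data.frame(matrix(rnorm(n * 15), n, 15,
                            dimnames = list(NULL, paste0("v", 1:15))))
events$use <- factor(ifelse(events$v1 + events$v2 + rnorm(n, sd = 0.5) > 0, "NU", "B"))

train <- sample(n, round(0.7 * n))                      # 70% of events for training
fit <- ksvm(use ~ ., data = events[train, ],
            type = "C-svc", kernel = "rbfdot", cross = 5)

pred <- predict(fit, events[-train, ])
table(observed = events$use[-train], predicted = pred)  # confusion matrix per use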

Table E-20. DM tool 1st and 2nd scenarios results: observed (Obs.) and predicted (Predi.) final use, Ckc, RB and RNU
Day  Beginning Time  |  1st scenario: Obs.  Predi.  Ckc  RB  RNU  |  2nd scenario: Obs.  Predi.  Ckc  RB  RNU
Aug-25-2014 04:33:00 B NU 0.00 0.44 0.00 B B 0.00 1.00 0.00
Aug-25-2014 15:42:00 B B 0.00 0.59 0.00 B B 0.00 1.00 0.00
Aug-26-2014 13:29:00 B B 0.00 0.55 0.00 B B 0.00 1.00 0.00
Aug-27-2014 11:33:00 B B 0.00 0.84 0.00 B B 0.00 1.00 0.00
Sep-16-2014 11:11:00 NU B 0.00 0.00 0.15 NU B 0.00 0.00 0.00
Sep-17-2014 22:22:00 NU B 0.00 0.00 0.42 NU B 0.00 0.00 0.00
Sep-18-2014 12:33:00 B NU 0.00 0.43 0.00 B B 0.00 0.71 0.00
Sep-20-2014 05:53:00 NU B 0.02 0.26 0.06 B B 0.00 1.00 0.00
Sep-21-2014 17:46:00 B B 0.00 0.96 0.00 B B 0.00 1.00 0.00
Sep-22-2014 14:35:00 B B 0.00 0.92 0.00 B B 0.00 1.00 0.00
Sep-26-2014 15:06:00 NU NU 0.00 0.00 0.61 NU NU 0.00 0.00 0.82
Sep-28-2014 19:20:00 NU B 0.00 0.00 0.21 NU B 0.00 0.00 0.00
Feb-02-2015 22:31:00 B B 0.00 0.86 0.00 B B 0.00 1.00 0.00
Feb-06-2015 13:23:00 B NU 0.00 0.45 0.00 B NU 0.00 0.60 0.00
Feb-10-2015 16:22:00 B B 0.00 0.86 0.00 B B 0.00 0.99 0.00
Feb-19-2015 16:55:00 B B 0.00 0.51 0.00 B B 0.00 0.57 0.00
Feb-24-2015 15:39:00 B B 0.00 0.82 0.00 B B 0.00 0.69 0.00
Mar-02-2015 19:29:00 B B 0.00 0.82 0.00 B B 0.00 1.00 0.00
Mar-03-2015 15:10:00 B B 0.00 0.71 1.00 B B 0.00 0.89 0.00
Mar-05-2015 15:51:00 B B 0.04 0.81 1.00 B B -0.23 0.01 0.00
Mar-13-2015 17:12:00 B NU 0.00 0.50 0.00 B NU 0.00 0.02 0.00
Mar-16-2015 15:12:00 NU B 0.03 1.00 0.42 NU B 0.01 1.00 0.09
Mar-16-2015 22:11:00 NU B 0.00 0.00 0.14 NU B 0.00 0.00 0.00
Mar-17-2015 23:29:00 NU B 0.57 1.00 0.66 NU B 0.37 1.00 0.47
Mar-19-2015 11:23:00 B B 0.00 0.57 0.00 B B 0.00 0.68 0.00
May-02-2015 14:40:00 B B 0.00 0.67 0.00 B NU 0.00 0.00 0.00
May-02-2015 17:23:00 B NU 0.00 0.43 0.00 B NU 0.00 0.00 0.00
May-13-2015 20:40:00 B NU 0.00 0.42 0.00 B B 0.00 1.00 0.00
May-15-2015 11:23:00 B B 0.00 0.88 0.00 B B 0.00 0.98 0.00
May-16-2015 23:02:00 NU B 0.00 0.00 0.09 NU B 0.00 0.00 0.00
May-17-2015 00:44:00 NU B 0.00 0.00 0.09 NU B 0.00 0.00 0.00
May-17-2015 17:53:00 NU B 0.00 0.00 0.10 NU B 0.00 0.00 0.00
May-19-2015 12:11:00 B B 0.00 0.95 0.00 B B 0.00 1.00 0.00
May-19-2015 20:59:00 B B 0.00 0.94 0.00 B B 0.00 1.00 0.00

Figure E-24. Observed water uses (top) and water uses predicted by the DM tool first scenario (middle) and second scenario (bottom) for two storm events (left: February 6th 2015, 13:23; right: March 17th 2015, 23:29), using the input flow (L/s) as reference. Yellow: water use B; red: the water cannot be used (NU); blue line: end of the bypass.

CHAPTER 20

CHANGING MEASUREMENT FREQUENCY OF THE RECORDED DATA

According to Chapter 19, the best on-line decision-making tool scenario for water use prediction
is scenario number 1. This scenario uses an ultrasonic level sensor and a spectrometer probe at
the entrance of the system. We evaluated the maximum measurement frequency of the spectrometer
probe in order to reduce operational costs, that is, the largest recording interval beyond which
the water-use prediction is affected. Figure E-25 shows the methodology developed.
Figure E-25. Methodology for the measurement frequency analysis of the recorded data. Starting with X = 1 min, the event is simulated with the database (DB) records without X minutes, the uses are predicted with the DM tool first scenario and compared with the observed uses and with the full-DB reference by means of a Wilcoxon test; if the p-value is not below 0.05, X is increased by one minute, otherwise X is the total of minutes that can be removed. X is the minute or minutes to remove; Ckcwm is the Cohen's kappa coefficient without minute X; RBwm is the reliability of water use B without minute X and RNUwm is the reliability of no water use without minute X.

The database (DB) has records at 1-min intervals for the 85 events. The process of Figure E-25
begins by increasing the measurement interval, removing X minutes between consecutive records.
For example, if we have records at 13:15, 13:16, 13:17, 13:18 and 13:19 and X is equal to one,
we can remove the 13:16 and 13:18 records (or, alternatively, the 13:15, 13:17 and 13:19
records), so that the data have records every two minutes. If X increases to two, the data have
records every three minutes, and so on. Each execution was evaluated using Cohen's kappa
coefficient (Ckc) (see Section 3.6 Part A) and the reliabilities of B and NU (no use) (Section
11.2 Part C). Then, the DM tool proposed in Chapter 19 was executed and the simulated uses were
compared with the observed ones. This simulation was named without minute (wm). As a partial
result we obtained Ckcwm, the reliability of use B (RBwm) and that of NU (RNUwm). The reference
evaluation indicators (Ckcall, RBall and RNUall) are the values obtained from the comparison
between the uses simulated with the complete DB and the observed ones.
Later, the Ckcwm, RBwm and RNUwm were compared with the reference values (Ckcall, RBall and
RNUall) using the Wilcoxon signed-rank test. If the p-value was greater than 0.05, the change of
measurement frequency does not generate a significant difference, so we can keep changing the
frequency by removing another minute. Otherwise, the process must stop and the frequency cannot
be changed further. At the end of the process, the method gives the total number of minutes that
can be removed. Table E-21 shows the results obtained with this method. The number of minutes
that can be removed is between 3 and 16 minutes for 25% of the events, 30 minutes corresponds to
the median and 61 minutes to 75% of the events. The tool was not able to compute the frequency
analysis for nine events (e.g. Oct-16-2014 23:24:00, Jun-25-2015 02:20:00), because these events
had a Ckc value of 0.
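
A minimal sketch of this loop, under our reading of Figure E-25, is given below. The functions predict_uses and indicators are toy stand-ins (a moving-average rule on a hypothetical absorbance column and a per-minute agreement score), not the routines of the thesis, and the paired Wilcoxon test is applied at the kept minutes so that the two indicator vectors have the same length.

# Toy stand-ins for the first-scenario DM tool and the evaluation indicators.
predict_uses <- function(ev) {
  sm <- as.numeric(stats::filter(ev$abs254, rep(1/3, 3), sides = 1))  # 3-record moving average
  ifelse(is.na(sm) | sm > 20, "NU", "B")
}
indicators <- function(obs, pred) as.numeric(obs == pred)             # 1 if the minute agrees

max_removable <- function(ev) {                      # ev: 1-min records of one event
  ref <- indicators(ev$use, predict_uses(ev))        # reference with the complete DB
  X <- 1
  while (X < nrow(ev)) {
    keep <- seq(1, nrow(ev), by = X + 1)             # keep one record every X + 1 minutes
    wm   <- indicators(ev$use[keep], predict_uses(ev[keep, ]))
    p <- tryCatch(wilcox.test(wm, ref[keep], paired = TRUE, exact = FALSE)$p.value,
                  error = function(e) NA)            # NA when all paired differences are zero
    if (!is.na(p) && p < 0.05) return(X - 1)         # significant change: stop
    X <- X + 1
  }
  X
}

# Toy event with 200 one-minute records
ev <- data.frame(abs254 = c(runif(60, 25, 40), runif(140, 5, 15)),
                 use    = rep(c("NU", "B"), c(60, 140)))
max_removable(ev)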
Table E-21. Analysed events for definition of the optimal frequency of the recorded data. Ed is the event duration in minutes
and wm is the maximum minutes that can be removed.
Day          Beginning Time   Ed (min)  wm (min)  |  Day          Beginning Time   Ed (min)  wm (min)  |  Day          Beginning Time   Ed (min)  wm (min)
Aug-25-2014 04:33:00 475 3 Mar-02-2015 19:29:00 165 77 May-06-2015 09:51:00 46 24
Aug-25-2014 15:42:00 106 30 Mar-03-2015 15:10:00 215 91 May-13-2015 20:40:00 205 91
Aug-26-2014 13:29:00 172 15 Mar-05-2015 15:51:00 206 27 May-15-2015 11:23:00 352 91
Aug-27-2014 11:33:00 74 40 Mar-13-2015 17:12:00 234 91 May-16-2015 23:02:00 68 6
Aug-28-2014 12:36:00 43 27 Mar-16-2015 15:12:00 261 15 May-17-2015 00:44:00 324 31
Sep-16-2014 11:11:00 223 40 Mar-16-2015 19:38:00 26 3 May-17-2015 17:53:00 222 26
Sep-17-2014 22:22:00 190 86 Mar-16-2015 22:11:00 165 26 May-19-2015 12:11:00 269 40
Sep-18-2014 12:33:00 418 12 Mar-17-2015 23:29:00 1807 69 May-19-2015 20:59:00 34 13
Sep-20-2014 05:53:00 510 32 Mar-19-2015 11:23:00 2407 37 Jun-04-2015 06:48:00 198 NA
Sep-20-2014 14:25:00 24 32 Mar-21-2015 16:43:00 226 67 Jun-04-2015 10:32:00 218 23
Sep-21-2014 17:46:00 82 16 Mar-22-2015 16:39:00 211 66 Jun-05-2015 06:11:00 139 31
Sep-22-2014 14:35:00 136 35 Mar-23-2015 13:11:00 2213 23 Jun-05-2015 10:00:00 363 11
Sep-26-2014 15:06:00 174 70 Mar-29-2015 05:42:00 286 42 Jun-07-2015 09:49:00 110 5
Sep-28-2014 19:20:00 18 5 Mar-29-2015 11:30:00 334 27 Jun-11-2015 07:43:00 196 91
Oct-09-2014 08:29:00 1905 3 Mar-30-2015 13:42:00 1386 NA Jun-12-2015 02:37:00 26 30
Oct-10-2014 16:16:00 1861 11 Mar-31-2015 13:47:00 161 30 Jun-13-2015 01:56:00 1063 6
Oct-12-2014 19:23:00 294 9 Mar-31-2015 16:29:00 23 6 Jun-14-2015 11:45:00 188 11
Oct-16-2014 23:24:00 236 NA Apr-03-2015 08:59:00 413 61 Jun-14-2015 17:34:00 41 15
Oct-18-2014 17:50:00 2592 NA Apr-03-2015 18:43:00 142 27 Jun-15-2015 17:16:00 265 30
Oct-22-2014 15:08:00 341 NA Apr-04-2015 09:28:00 245 43 Jun-18-2015 08:21:00 431 37
Jan-23-2015 19:04:00 361 91 Apr-05-2015 16:23:00 166 69 Jun-20-2015 00:04:00 42 26
Jan-24-2015 14:54:00 288 91 Apr-08-2015 15:35:00 109 43 Jun-21-2015 23:58:00 449 28
Jan-27-2015 14:12:00 1319 91 Apr-13-2015 12:40:00 135 43 Jun-22-2015 07:32:00 504 43
Jan-31-2015 11:47:00 130 58 Apr-16-2015 04:09:00 175 56 Jun-22-2015 22:34:00 422 66
Jan-31-2015 14:00:00 27 15 Apr-16-2015 10:32:00 414 58 Jun-23-2015 06:11:00 695 65
Jan-31-2015 15:08:00 16 15 Apr-16-2015 21:54:00 1105 27 Jun-24-2015 00:48:00 997 91
Feb-02-2015 22:31:00 64 27 Apr-17-2015 22:57:00 480 53 Jun-25-2015 02:20:00 916 NA
Feb-06-2015 13:23:00 5069 34 Apr-18-2015 09:52:00 165 13 Jun-26-2015 19:53:00 3001 NA
Feb-10-2015 16:22:00 327 19 Apr-18-2015 16:02:00 557 6 Jun-27-2015 06:11:00 591 NA
Feb-19-2015 16:55:00 220 91 Apr-19-2015 01:23:00 74 16 Jun-28-2015 21:37:00 263 NA
Feb-22-2015 07:01:00 493 91 May-02-2015 14:40:00 329 30
Feb-24-2015 15:39:00 289 91 May-05-2015 10:27:00 124 30

Figure E-26 shows the water uses of two storm events, February 6th 2015 13:23 and March 17th
2015 23:29. In the upper part of this figure are the observed water uses. In the middle of
Figure E-26 are the predicted water uses with a measurement frequency of 16 minutes, and in the
lower part are the predicted water uses with a measurement frequency of 35 minutes for February
6th and 70 minutes for March 17th. For the storm event of February 6th, we can observe that with
a measurement frequency of 16 minutes the RB (reliability of water use B) remains almost the
same as with a 1-minute measurement frequency (see Figure E-24, DM tool first scenario).
Something similar happens with the storm event of March 17th for the same measurement frequency.
In the case of measurement frequencies of 35 and 70 minutes, the water use prediction indicators
decrease: for February 6th the Ckc value changes to 0.42 and for March 17th the Ckc and RNU
(reliability of no water use, NU) values decrease to 0.47 and 0.52 respectively. Therefore, we
can notice how, in these storm events, changing the measurement frequency beyond the defined
maximum affects the water use prediction.
The above results suggest that the decision-making tool for water uses does not require a
high-frequency record, such as 1-min intervals, for the water use prediction. For example, the
monitoring equipment can be programmed to record every 3 minutes. This finding has the following
implications: a smaller amount of stored data, shorter processing times and lower operating
costs in the long term.

Figure E-26. Observed (top) and predicted water uses of two storm events, February 6th 2015, 13:23 and March 17th 2015, 23:29. The predicted water uses correspond to different measurement frequencies: removing 16 minutes (middle) and removing 35 minutes (February 6th) or 70 minutes (March 17th) (bottom). Yellow: water use B; red: the water cannot be used (NU).

CHAPTER 21

DECISION-MAKING TOOL GENERAL CONCLUSIONS

The present part of the thesis aimed to propose a decision-making (DM) tool for the operation
of the stormwater harvesting system, in terms of water use, with as few monitoring requirements
as possible. First, we considered that changes in water quality that translate into a change of
water use should be taken into account by an on-line decision-making tool. Finally, three DM
tool scenarios with different monitoring options were built and evaluated.
The DM tool scenarios are based on the method proposed in Chapter 18, which allows the final
water use to be known in real time using a spectrometer probe. This method has the potential to
be implemented in other applications. However, it is worth mentioning a potential limitation of
the proposed methodology, related to the selection of the water use absorbance limits. This
issue can be addressed by undertaking more sampling campaigns coupled with spectrometer
measurements.
The first and second DM tool scenarios proposed the identification of the first flush (FF)
phenomenon without the need to record the entire event. This FF identification allows a bypass
time to be defined. Two methods were evaluated, the rules' method and SVM. The method with the
highest Ckc value was the rules' method of the first scenario (0.49, a moderate strength of
agreement). Therefore, an event with a Δhmax lower than -0.87 cm is classified as a first flush
event.
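
For illustration, the sketch below applies this rule to a series of head readings. The sign convention (Δh computed as h at time i minus h at time i+1, so that a rapid rise of the head gives a negative value) and the example numbers are our assumptions; only the -0.87 cm threshold comes from the results above.

# Rules' method for first flush detection, as a minimal sketch.
# h: head over the entrance weir (cm), one reading per minute.
is_first_flush <- function(h, threshold = -0.87) {
  dh <- head(h, -1) - tail(h, -1)     # h[i] - h[i+1], in cm (assumed sign convention)
  min(dh, na.rm = TRUE) < threshold   # TRUE: classify the event as first flush
}

# Example: a 1.2 cm rise of the head between two consecutive minutes
is_first_flush(c(10.0, 10.1, 11.3, 11.5))   # TRUE, so the bypass would be activated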
The first DM tool scenario simulated 94 storm events. This scenario allows on-line
decision-making because the water use prediction can be known per minute. At the general level,
this scenario obtained a high reliability of use B (0.83) but a lower reliability of NU (0.29).
The specific level of simulation allowed a better evaluation of the predicted uses: with this
level, the reliability of the predicted use was above or equal to 0.50 for 40 events, whereas at
the general level the reliability of the NU option was 0.29. We identified two possible
limitations of this scenario: one is the higher cost of the monitoring equipment (spectrometer
probe) and the other is the specific training required for the probe operation and maintenance.
A remarkable point is that with this scenario we have the capacity to add more quality
indicators in order to assess the water use prediction more accurately.
As with the first scenario, with the second one we can take on-line decisions concerning the
water use. Regarding the number of analyzed events, this scenario was the one with the fewest
simulated storm events (34). For this scenario, we obtained a high reliability of use B (0.83)
and the lowest reliability of NU (0.10) at the general level compared with the other scenarios.
The reliability of B improves in the specific-level simulation: the reliability values for these
events ranged between 0.57 and 1.00, and one event was simulated as NU with a reliability of NU
of 0.82. According to the results obtained, the tool is more accurate in predicting the B use
than the NU option. In financial terms, this scenario has a lower monitoring equipment cost
(turbidity probe) in comparison with the first scenario, and in terms of probe operation and
maintenance it demands less specific training. One limitation is that this scenario only has the
capacity to address one quality indicator; hence, we cannot add other indicators to assess the
water use prediction more accurately.
Table E-22. Classification of events as a function of the kappa coefficient (strength of agreement: see Table A-4 Section 3.6 Part A - Landis and Koch 1977).
Poor (<0)  Slight (0–0.20)  Fair (0.21–0.40)  Moderate (0.41–0.60)  Substantial (0.61–0.80)  Almost perfect (0.81–1.0)  Total events
0          91               0                 1                     0                        2                          94

Regarding the third scenario, the SVM method obtained the highest Ckc (0.28, a fair strength of
agreement) compared to the other two scenarios. In this case, the reliability of NU improves
(0.90) while the reliability of use B decreases to 0.39. These results suggest that the tool
performs better at predicting NU; hence, with this tool the water use prediction is on the safe
side. One limitation is that we have to wait until the end of the event to obtain the water use
prediction; therefore, we cannot have on-line decision-making, only deferred decision-making. In
terms of operational and maintenance costs, this scenario requires the least economic resources,
because with 15 variables (storm parameters and hydraulic performance variables) we are able to
predict water uses: to simulate the water uses we only need an ultrasonic level sensor, for the
hydraulic performance variables, and a pluviometer for the storm parameters. Another positive
point is that this scenario requires less training for the operation of the ultrasonic level
sensor and the pluviometer, compared to the other two scenarios. Two limitations of this
scenario are that the SVM tool requires all 15 variables to be recorded, and that improving the
SVM prediction requires more events with the 15 variables coupled with the observed water uses.
The first and second scenarios are more versatile. In terms of water use prediction in real
time, the best scenario is the first one. For this reason we evaluated the maximum measurement
frequency of the spectrometer probe in order to reduce operational costs. We observed that if
the equipment is programmed to record every 3 minutes, the operational costs could be lower in
the long term and the amount of stored data and the processing time will be smaller, while
obtaining the same results as with 1-min intervals.

REFERENCES PART E

Bertrand-Krajewski, J.-L., Chebbo, G., and Saget, A. (1998). “Distribution of pollutant mass vs
volume in stormwater discharges and the first flush phenomenon.” Water Research, 32(8), 2341–
2356.
Hernández, N., Camargo, J., Moreno, F., Plazas-Nossa, L., and Torres, A. (2015). “ARIMA as
a Forecasting Tool for Water Quality Time Series Measured With Uv-Vis Spectrometers in a
Constructed Wetland.” UDM - Urban Drainage Modelling, Québec- Canada, 1–18.
Hochedlinger, M. (2005). “Assessment of Combined Sewer Overflow Emissions.” PhD
Thesis, Faculty of Civil Engineering, University of Technology Graz, Austria.
Landis, J. R., and Koch, G. G. (1977). “The Measurement of Observer Agreement for
Categorical Data.” Biometrics, 33(1), 159–174.

GENERAL CONCLUSIONS
The objective of this PhD thesis was to propose a decision-making tool that provides operational
protocols for SUDS (sustainable urban drainage systems) used for SWH (stormwater harvesting),
with an eye towards water end-use. The estimation relies on on-line high-frequency quantity and
quality data (Campisano et al. 2013).
Several methods were applied and developed in order to: (i) calibrate water flow and water
quality on-line equipment; (ii) detect the first flush before the end of the runoff event;
(iii) predict final uses with as few monitoring requirements as possible. We brought tools from
other disciplines and implemented them in the SWH operation. Decision-making (DM) tools were
developed based on the above-mentioned methods. The final water use can be predicted per minute
(on-line) or per event (deferred). We also observed that it is feasible to relate hydraulic
variables and storm parameters to the final water uses, in line with Sandoval et al. (2013,
2014). This allows us to simplify the process, so that the quality measurements are no longer
needed for the definition of water uses. Even if the proposed methods could be applied to a wide
range of study cases, for the present PhD study they were applied to a specific SWH system
located on a university campus in an Andean tropical city: a constructed-wetland/reservoir-tank
system (CWRT) in Bogotá, Colombia. It is worth mentioning that these methods were tested in real
time and at real scale. Therefore, this process allows us to provide knowledge based on a case
study. For other SWH systems it is recommended to apply the proposed methodologies in order to
assess the feasibility of using them.
The on-line DM tool provides a predicted water use every minute. This tool uses a spectrometer
probe and an ultrasonic level sensor at the entrance of the system. The specific level of
simulation allowed a better evaluation of the predicted uses. With this scenario, we observed
that if the measurement frequency is programmed to record every 3 minutes, the operational costs
could be lower in the long term and the amount of stored data and the processing time will be
smaller, while obtaining the same results as with 1-min intervals. It is recommended to add more
quality indicators to the on-line DM tool in order to assess the water use prediction more
accurately.
The deferred DM tool (we have to wait until the end of the event to obtain the water use
prediction), based on the Support Vector Machine (SVM) methodology, provides the general
predicted water use per event. This tool uses 15 hydraulic performance variables and storm
parameters for the water use prediction; hence, it focuses on the water quantity measurements
rather than on the water quality ones. The results obtained suggest that this tool is more
efficient at predicting the NU option than the on-line one. To improve the tool prediction, it
is necessary to add more events to the database with the 15 variables and the observed water
use. It is also recommended to work on the possible prediction of storm parameters, with the aim
of changing the deferred DM tool into an on-line one. Aulinas et al. (2011) developed a DM
system based on another intelligent approach, specifically a knowledge-based one. The aim of
that tool was to assess the management of a wastewater discharge in order to reduce the
pollutants entering the river. The tool depends on the on-line measurements of certain quality
parameters (e.g. pH, COD); its main objective was to organize the information and define
reliable actions. Other authors (David et al. 2013; Matos et al. 2014) developed, at a larger
scale (Alcântara basin, Lisbon, Portugal), a real-time urban warning system for flooding and
pollution events. This system is based on the integration of different model approaches and
forcing predictions. As the authors' interest is not to simplify the amount of monitoring
equipment, the system is supervised by on-line and real-time monitoring equipment located at
strategic points over the basin. The system uses a 48-hour precipitation forecast for the water
quantity and quality prediction and then, based on the on-line quality measurements, it predicts
the water quality in the drainage network and the estuary (David et al. 2013; Matos et al.
2014).
More specifically, we proposed a methodology that allows us to know the final water use in real
time using a spectrometer probe. This result gives the water use directly, without local
calibrations and laboratory analyses. The UV-Vis fingerprint can give more details than
individual water quality indicators. There still remain various quality indicators (e.g.
pathogens, micro-pollutants) that cannot be measured with on-line probes (Campisano et al.
2013); with the proposed methodology this gap can be reduced. It is worth mentioning a potential
limitation of the proposed methodology, related to the selection of the water use absorbance
limits: the water quality indicators are limited and part of the information contained in the
spectra remains underused. This issue can be addressed by undertaking more sampling campaigns
and adding more quality indicators coupled with spectrometer measurements. For other
applications it is recommended to follow the proposed methodology in order to define
site-specific spectra limits.
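
Purely as an illustration of how such site-specific limits could be applied on-line (the actual limits and decision rules are those of Chapter 18 and are not reproduced here), a measured fingerprint could be screened against an upper absorbance envelope for the unrestricted-use group B; the object limits_B, the 214-wavelength dimension and the numeric values are hypothetical.

# Hypothetical screening of a UV-Vis fingerprint against absorbance limits.
classify_spectrum <- function(abs_spectrum, limits_B) {
  if (all(abs_spectrum <= limits_B, na.rm = TRUE)) "B" else "NU"
}

limits_B     <- rep(30, 214)                   # upper limits in abs/m (hypothetical)
abs_spectrum <- runif(214, min = 5, max = 25)  # measured fingerprint (abs/m, simulated)
classify_spectrum(abs_spectrum, limits_B)      # "B"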
Another remarkable point is that the monitoring of SWH opens the possibility of simplifying the
detection of the first flush (FF) phenomenon. The methodology identifies the FF phenomenon
without the need to record the entire event and without measured quality indicators. With only
one hydraulic variable (in our case Δhmax, the difference between the head over the entrance
weir at time i and time i+1), a specific storm event can be related to a first flush effect.
The efficiency of the constructed wetland (CW) was calculated with the UV-Vis spectra instead of
Event Mean Concentrations (EMC), which group the pollutant concentrations of the entire event
into one single number. This method allows us to obtain the efficiency per minute, which is the
time resolution needed for on-line decision-making. As the efficiency is a function of the
volume (measured flow), it is recommended to guarantee the accuracy of the flow measurement at
the outflow and to explore other methodologies that can model the CW efficiency.
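
As a rough sketch under our own assumptions (the exact per-minute formulation used in the thesis is not repeated here), such an efficiency could be computed from the inflow and outflow absorbance at a given wavelength, used as a pollutant surrogate, and the measured flows:

# Per-minute removal efficiency from surrogate loads (assumed formulation).
cw_efficiency <- function(abs_in, q_in, abs_out, q_out) {
  load_in  <- abs_in  * q_in     # surrogate pollutant load entering the CW
  load_out <- abs_out * q_out    # surrogate pollutant load leaving the CW
  1 - load_out / load_in         # per-minute efficiency (can be negative)
}

# Example for three consecutive minutes (invented values; absorbance in abs/m, flow in L/s)
cw_efficiency(abs_in  = c(40, 42, 38), q_in  = c(3.0, 3.2, 2.9),
              abs_out = c(12, 15, 14), q_out = c(2.8, 3.0, 2.7))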
The developed methods could be part of real-time control systems (e.g. operation of sewer
systems, stormwater basins) (Lacour and Schütze 2011; Muschalla et al. 2014). These methods were
conceived with the aim of using as little monitoring equipment as possible; this translates into
less specific training on the equipment and lower monitoring and operational costs.

GENERAL CONCLUSIONS REFERENCES

Aulinas, M., Nieves, J. C., Cortés, U., and Poch, M. (2011). “Supporting decision making in
urban wastewater systems using a knowledge-based approach.” Environmental Modelling &
Software, 26(5), 562–572.
Campisano, A., Ple, J. C., Muschalla, D., Pleau, M., and Vanrolleghem, P. A. (2013). “Potential
and limitations of modern equipment for real time control of urban wastewater systems.”
Urban Water Journal, 10(5), 300–311.
David, L. M., Oliveira, A., Rodrigues, M., Jesus, G., Povoa, P., David, M., Costa, R., Fortunato,
A., Menaia, J., Frazao, M., and Matos, R. (2013). “Development of an integrated system for
early warning of recreational waters contamination.” GRAIE, Lyon, France.
Lacour, C., and Schütze, M. (2011). “Real-time control of sewer systems using turbidity
measurements.” Water Science and Technology: A Journal of the International Association on Water
Pollution Research, 63(11), 2628–2632.
Matos, R., Ferreira, F., Matos, J. S., Oliveira, A., David, L., Rodrigues, M., Jesus, G., Rogeiro,
J., Costa, J., Mota, T., Brito, R., Povoa, P., David, C., and Santos, J. (2014). “Implementation of
an early warning system in urban drainage infrastructures for direct discharges and flood risk
management.” 329–338.
Muschalla, D., Vallet, B., Anctil, F., Lessard, P., Pelletier, G., and Vanrolleghem, P. A. (2014).
“Ecohydraulic-driven real-time control of stormwater basins.” Journal of Hydrology, 511, 82–91.
Sandoval, S., Torres, A., Duarte, M., and Velasco, A. (2014). “Assessment of rainfall influence
over water quality effluent of an urban catchment: a data driven approach.” Urban Water
Journal, 11(2), 116–126.
Sandoval, S., Torres, A., Pawlowsky-Reusing, E., Riechel, M., and Caradot, N. (2013). “The
evaluation of rainfall influence on combined sewer overflows characteristics: the Berlin case
study.” Water Science and Technology: A Journal of the International Association on Water Pollution
Research, 68(12), 2683–2690.

APPENDICES

APPENDIX B-4-1
MONITORING SYSTEM

Commissioning VisoTurb® 700 IQ (SW)

Figure 1. (a) Spectrometer probe dimensions (s::can spectro::lyzer™) (b) Zoom of the spectrometer probe with 35 mm and 5 mm measuring path length (c) Slide for spectrometer probe up to 35 mm (adapted from s::can Messtechnik GmbH 2007, n.d.)

Figure 2. (a) Structure of the turbidity sensor (b) Installation of the turbidity sensor in an open channel (adapted from WTW GmbH 2013)

Figure 3. (a) Sketch of the MIQ/TC 2020 XT (b) IQ SENSOR NET system configuration (adapted from WTW GmbH 2012)

Figure 4. Ideal measurement environment for the turbidity probe calibration offset (adapted from WTW GmbH 2013)

Figure 5. Internal components of the turbidity sensor VisoTurb® 700 IQ (Source: Xylem n.d.)
APPENDIX B-5-1
1. UNCERTAINTY ANALYSIS
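
The listing in this appendix propagates the measurement uncertainty of the laboratory TSS values (and, further down, of the s::can absorbances) by Monte Carlo simulation. A condensed sketch of the TSS part is given first; the TSS formula and the precision values are taken from the listing, while the quantile-based summary, the data frame lab and the example values are our own simplifications.

# Condensed Monte Carlo propagation of the laboratory TSS uncertainty.
# lab: data frame with columns mi (initial mass), mf (final mass) and vol (mL);
# pb: balance precision; pp1/pp2: graduated cylinder precision (vol <= 500 mL or larger).
tss_uncertainty <- function(lab, numsim = 5000, pb = 0.001, pp1 = 2, pp2 = 5) {
  sims <- replicate(numsim, {
    mf  <- rnorm(nrow(lab), lab$mf,  sd = pb / 2)
    mi  <- rnorm(nrow(lab), lab$mi,  sd = pb / 2)
    pp  <- ifelse(lab$vol <= 500, pp1, pp2)
    vol <- rnorm(nrow(lab), lab$vol, sd = pp / 2)
    1e6 * (mf - mi) / vol                          # simulated TSS per sample (mg/L)
  })
  q <- apply(sims, 1, quantile, probs = c(0.025, 0.5, 0.975), na.rm = TRUE)
  100 * (q[3, ] - q[1, ]) / (4 * q[2, ])           # relative uncertainty (%) per sample
}

lab <- data.frame(mi = c(1.1000, 1.1020), mf = c(1.1173, 1.1185), vol = c(500, 250))
tss_uncertainty(lab, numsim = 1000)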
#setwd("C:/Users/soporte/Documents/UTSS")#
#C:\Users\soporte\Documents\UTSS#
#E:\Documentos#Sandra#Galarza\Documents\Parte2_Doctorado\calibracionTSS%
MCverAT_definitivo\u_incfertidumbre\UTSS#
setwd("E:/Documentos#Sandra#Galarza/Documents/Parte2_Doctorado/calibracionTSS%
MCverAT_definitivo/u_incertidumbre/UTSS")#
###########################################################
###############PROPAGACIoN#DE#INCERTIDUMBRE################
rm(list=ls(all=TRUE))#
numsim=5000#
#
####################CALCULO#DE#LA#TSS######################
#
labo=read.table("tssdata.txt",header=TRUE)#con#las#nuevas#muestras#de#TSS#
attach(labo)#
View(labo)#
#
###precisiOn#probetas#
#precisiOn#probeta### p#(ml)# #
pp1=2##precision#probeta#en#mL#
pp2=5#
pb=0.001##precision#balanza#en#mg#
#INCERTIDUMBRE#RELATIVA#
#######
j=1#
while(j<=numsim){#
##i=1#i==muestra#
##while#(i<=max(muestra)){#
####filas=which(muestra==i)#
#####peso#final#e#inicial#
####mfg=rnorm(length(filas),labo[filas,6],sd=pb/2)#
####mig=rnorm(length(filas),labo[filas,5],sd=pb/2)#
#####volumen#
####if(vol[filas][1]<=500){vg=rnorm(length(filas),vol[filas],pp1/2)}#
####else{vg=rnorm(length(filas),vol[filas],pp2/2)}#
###
####TSSs=1000000*(mfg%mig)/(vg)#
####TSSs[which(TSSs<0)]=NA#
####TSSg=mean(TSSs,na.rm=TRUE)#promedio#de#las#tres#replicas#
####if(i==1){MTSSg=TSSg}else{MTSSg=c(MTSSg,TSSg)}#
####i=i+1#
##}#
##if(j==1){MMTSSg=MTSSg}else{MMTSSg=cbind(MMTSSg,MTSSg)}#
##j=j+1#
}#
#
#Generamos#vectores#con#los#lim#superior(Msup),#mediana(Mmed),##
#lim#inferior(Minf)por#muestra#
i=1#
while(i<=dim(MMTSSg)[1]){#

##if(i==1){Msup=boxplot.stats(MMTSSg[i,])$stats[5]#
###########Mmed=boxplot.stats(MMTSSg[i,])$stats[3]#
###########Minf=boxplot.stats(MMTSSg[i,])$stats[1]#
##}else{#
####Msup=c(Msup,boxplot.stats(MMTSSg[i,])$stats[5])#
####Mmed=c(Mmed,boxplot.stats(MMTSSg[i,])$stats[3])#
####Minf=c(Minf,boxplot.stats(MMTSSg[i,])$stats[1])#
##}#
##i=i+1##
}#
#
#CAlculo#de#la#incertidumbre#relativa#por#muestra#
Mu=100*((Msup%Minf)/4)/Mmed#
which(Mu>=25)#
write.csv(MMTSSg,"MMTSSg.csv",row.names=FALSE)#TSS#simuladas#dim(55,numsim)#
write.csv(Mu,"MuTSS.csv",row.names=FALSE)#incertidumbre#relativa#
#
#INCERTIDUMBRE#POR#MUESTRA#
######
#INCERTIDUMBRE#POR#MUESTRA#
#prueba#
##St=173.333333333236#
##vL=30#
##pbal=0.1#
##ppro=0.03#
##um=1/(vL/1000)*sqrt((pbal/2)^2*2+(St^2*(ppro/2/1000)^2))#
St=1000000*(mf%mi)/(vol)#mg/L#
#
#um=1/(vol/1000)*sqrt((pb/2)^2*2+(St^2*(pp1/2/1000)^2))#
#incertidumbre#por#muestra#depende#del#precisiOn#volumen#y#peso#y#SST#
i=1#
while(i<=dim(labo)[1]){#
##if(vol[i]<=500){um=1/(vol[i]/1000)*sqrt((pb/2)^2*2+(St[i]^2*(pp1/2/1000)^2))}#
##else{um=1/(vol[i]/1000)*sqrt((pb/2)^2*2+(St[i]^2*(pp2/2/1000)^2))}#
##if(i==1){uM=um}#
##else{uM=rbind(uM,um)}#
##i=i+1#
}#
uMm=cbind(muestra,uM)#
write.csv(uMm,"uporMuestra%tss.csv",row.names=FALSE)#
View(uMm)#
up=uM/St*100#
labo2=cbind(labo,St,uM,up)#
View(labo2)#
#Prueba#t%test#
i=1#
while(i<=max(muestra)){#
##filas=which(muestra==i)#
##cf=combn(filas,2)#
##j=1#
##while(j<=dim(cf)[2]){#
####x1=labo2[cf[1,j],7]#Tss#muestra#1#
####x2=labo2[cf[2,j],7]#Tss#muestra#2#
####s1=labo2[cf[1,j],8]#incertidumbre#Tss#muestra#1#
####s2=labo2[cf[2,j],8]#incertidumbre#Tss#muestra#2#

####n1=3#no#de#medidas#utilizadas#para#obtener#x1=>#TSS#son#3=#Vol,#peso#i,#peso#f##
####n2=3#
####t=(x1%x2)/(s1^2/n1+s2^2/n2)^(0.5);#
####df=(s1^2/n1+s2^2/n2)^2/((s1^2/n1)^2/(n1%1)+(s2^2/n2)^2/(n2%1));#
####pvalue12=2*(pt(%abs(t),df))##se#multiplica#por#dos#porque#es##
#####de#dos#colas#y#se#usa#t#negativo#porque#siempre#cosulta#el#area#
#####a#la#izquierda#del#valor#
####if(j==1){rtest=cbind(i,cf[1,j],cf[2,j],t,pvalue12)}#
####else{rtest=rbind(rtest,cbind(i,cf[1,j],cf[2,j],t,pvalue12))}#
####j=j+1#
##}#
if(i==1){rT=rtest}#
else{rT=rbind(rT,rtest)}#
i=i+1#
}#
View(rT)#
write.csv(rT,"result_t%test.csv",row.names=FALSE)#
which(rT[,5]<0.05)#
#
###ejemplo#
#####comparacion#1#con#2#
##x1=173.33#
##x2=170#
##s1=2.358615#
##s2=2.358555#
##n1=3#
##n2=3#
###
##t=(x1%x2)/(s1^2/n1+s2^2/n2)^(0.5);#
##df=(s1^2/n1+s2^2/n2)^2/((s1^2/n1)^2/(n1%1)+(s2^2/n2)^2/(n2%1));#
##pvalue12=2*(pt(%abs(t),df))###se#multiplica#por#dos#porque#es##
#de#dos#colas#y#se#usa#t#negativo#porque#siempre#cosulta#el#area#
#a#la#izquierda#del#valor#
#
setwd("C:/Users/soporte/Documents/Uscan")#
#setwd("E:/Documentos#Sandra#Galarza/Documents/Parte2_Doctorado/calibracionTSS%
MCverAT_definitivo/u_incertidumbre/Uscan")#
#E:\Documentos#Sandra#Galarza\Documents\Parte2_Doctorado\calibracionTSS%
MCverAT_definitivo\u_incertidumbre\Uscan#
#
###########################################################
###############PROPAGACI╙N#DE#INCERTIDUMBRE################
rm(list=ls(all=TRUE))#
numsim=5000#
#
######################S::CAN############################
scan=read.table("scan%R_all_uni_fine.txt",header=TRUE)##
attach(scan)#
puv=.001###precision#del#s::can#en#abs/m##
scan=as.matrix(scan)#
#Generamos#matriz#scanpromM#de#numsim#X#(214Xnum_muestras)#
l=1#
while(l<=max(numscan)){#
##filass=which(numscan==l)#
###

##scantot=rnorm(length(filass)*214*numsim,t(as.matrix(scan[filass,2:215])),puv/2)#
##scant=t(matrix(scantot,214*length(filass),numsim))#
###
##cont=1#
###
##while(cont<=dim(scant)[1]){#
####cont2=1#
####while(cont2<=dim(scant)[2]){#
######if(cont2==1){scanM=scant[cont,cont2:(cont2+213)]}else{#
########scanM=rbind(scanM,scant[cont,cont2:(cont2+213)])#
######}#
######cont2=cont2+214#
####}#
####if(cont==1){scanprom=apply(scanM,2,mean)}else{scanprom=rbind(scanprom,apply(scanM,2,mean))}#
####cont=cont+1#
##}#
###
##k=1#
###CAlculo#de#la#incertidumbre#relativa#(uscan)#por#muestra#
##while#(k<=214){#
####if#(k==1)#{estad=boxplot.stats(scanprom[,k])$stats#
###############uscan=100*((estad[5]%estad[1])/4)/estad[3]}else{#
#################estad=boxplot.stats(scanprom[,k])$stats#
#################uscan=max(uscan,100*((estad[5]%estad[1])/4)/estad[3])#
###############}#
####k=k+1#
##}#
##if(l==1){scanpromM=scanprom#
###########uscanM=uscan}else{scanpromM=cbind(scanpromM,scanprom)#
#############################uscanM=c(uscanM,uscan)}#
##l=l+1}#
#
#absin#es#una#matriz#de#(num_muestrasXnumsim)x214#
#cada#55#filas#tenemos#una#matriz#(55X214)#para#cada#simulaciOn#
j=1#
while(j<=numsim){#
##
if(j==1){absin=t(matrix(scanpromM[j,],214,max(numscan)))}else{absin=rbind(absin,t(matrix(scanpromM[j,],214,ma
x(numscan))))}#
##j=j+1#
}#
write.csv(absin,"absin.csv",row.names=FALSE)#abs#simuladas#dim(55xnumsim,214)#
write.csv(uscanM,"uscanM.csv",row.names=FALSE)#incertidumbre#relative#
#

2. OUTLIER ANALYSIS
################################################################
#########PROGRAMA#PARA#IDENTIFICACIoN#DE#OUTLIERS###############
#########PARA#DATOS#DEL#SCAN#%#ABSORBANCIAS#################
#
rm(list=ls(all=TRUE))#
#####Cargar#los#datos#
#
spectra=as.matrix(read.csv("absin.csv",header=TRUE))#(45*numsim)x214#
#
#
#while#para#colocar#las#ejecuciones#por#muestra#matriz#de##45*(214*10000)##
##m=1#
##while(m<dim(spectra)[1]){#
####if(m==1){#
######WL=spectra[m:(m+44),]}#
####else{WL=cbind(WL,spectra[m:(m+44),])}#
####m=m+45##
##}#
###
##write.csv(WL,"absin_col.csv",row.names=FALSE)##45*(214*10000)#
#Matriz#45#X#214#cada#fila#es#la#mediana#de#las#10000#respeticiones#de#cada#muestra#
numsim=10000#
m=1#
while(m<46){#
##i=1##
while(i<=(numsim%2)){#
##if(i==1){#
####WLm=rbind(spectra[m,],spectra[(45+m)],spectra[45*(i+1)+m,])}#absorbancias#por#muestra#
##else{WLm=rbind(WLm,spectra[45*(i+1)+m,])}#
i=i+1#
}#
if(m==1){#
WLmm=apply(WLm,2,median)}#
else{WLmm=rbind(WLmm,#apply(WLm,2,median))}#
m=m+1#
}#
write.csv(WLmm,"absin_median.csv",row.names=FALSE)##45*(214)#
#
#Sacamos#las#muestras#identificadas#por#incertidumbre!!!#
WLmms=WLmm[%37,]##sin#muestra#37#
WLmms=WLmms[%33:%34,]##sin#muestras#33#y#34#
WLmms=WLmms[%23:%27,]##sin#muestras#23#a#la#27#
WLmms=WLmms[%13:%15,]#sin#muestras#13#a#la#15#
WLmms=WLmms[%2,]#
#
write.table(WLmms,"absin_median_sin_m.txt",row.names=FALSE)##45*(214)#
#
################################################################
#####IDENTIFICACIoN#DE#OUTLIERS#
library(sgeostat)#
library(mvoutlier)#
#

#######Outliers#de#la#matriz#de#absorbancias#
#
#datos#de#entrada#
WLmms=read.table("absin_median_sin_m.txt",header=TRUE)#
attach(WLmms)#
#
M=as.data.frame(WLmms)#
Mo=M[order(M[,1]),]##Matriz#de#absorbancias#con#los#datos#ordenados#para#la#primera#columna#
ind=seq(1,dim(Mo)[1],1)##Vector#con#la#secuencia#del#tama?o#de#filas#del#WL#
indesord=ind[order(M[,1])]##vector#con#el###de#fila##de#'M'#ordenada#x#columna#1#
#
wind=3#
while#(wind<=length(M[,1]))#{#
##m=1#
##VMout=matrix(NA,length(M[,1]),length(M[,1])%wind+1)###Matriz#de#'NA'#con#las#dim#indicadas#
##while#(m<=(length(M[,1])%wind+1))#{VMout[m:(m+wind%1),m]=pcout(Mo[m:(m+wind%1),],makeplot=FALSE)$wfinal#
#####################################m=m+1}#
##cont=matrix(0,dim(VMout)[1],dim(VMout)[2])#
##cont[which(is.na(VMout)==FALSE)]=1#
##vcont=apply(cont,1,sum)#
##if#(wind==3)#{Mvcont=vcont#
################VMout[which(is.na(VMout)==TRUE)]=0#
################MVMout=apply(VMout,1,sum)#
##}else{Mvcont=cbind(Mvcont,vcont)#
########VMout[which(is.na(VMout)==TRUE)]=0#
########MVMout=cbind(MVMout,apply(VMout,1,sum))#
##}#
##wind=wind+1#
}#
##
##Guarda#las#probabilidades#de#outliers#de#cada#espectro#
probout=round(apply(MVMout,1,sum)/apply(Mvcont,1,sum),2)#
probout=probout[order(indesord)]#
print(probout)#
write.table(probout,"prob_outlier_spectra_sin_m.txt",row.names=FALSE,col.names=TRUE,quote=FALSE)#
lim=0.95#
probout>lim#
#
#
#####ANaLISIS#DE#OUTLIERS#CON#LA#MATRIZ#DE#ABSORBANCIAS#+#resultados#LAB#
######
labo=read.table("labo_median.txt",header=TRUE)#
attach(labo)#
WLmms=read.table("absin_median_sin_m.txt",header=TRUE)#
attach(WLmms)#
#
ALL=cbind(labo,WLmms)#
#
M=as.data.frame(ALL[,%3])#
Mo=M[order(M[,1]),]##Matriz#de#absorbancias#con#los#datos#ordenados#para#la#primera#columna#
ind=seq(1,dim(Mo)[1],1)##Vector#con#la#secuencia#del#tama?o#de#filas#del#WL#
indesord=ind[order(M[,1])]##vector#con#el###de#fila##de#'M'#ordenada#x#columna#1#
#
wind=3#
while#(wind<=length(M[,1]))#{#

##m=1#
##VMout=matrix(NA,length(M[,1]),length(M[,1])%wind+1)###Matriz#de#'NA'#con#las#dim#indicadas#
##while#(m<=(length(M[,1])%wind+1))#{VMout[m:(m+wind%1),m]=pcout(Mo[m:(m+wind%1),],makeplot=FALSE)$wfinal#
#####################################m=m+1}#
##cont=matrix(0,dim(VMout)[1],dim(VMout)[2])#
##cont[which(is.na(VMout)==FALSE)]=1#
##vcont=apply(cont,1,sum)#
##if#(wind==6)#{Mvcont=vcont#
################VMout[which(is.na(VMout)==TRUE)]=0#
################MVMout=apply(VMout,1,sum)#
##}else{Mvcont=cbind(Mvcont,vcont)#
########VMout[which(is.na(VMout)==TRUE)]=0#
########MVMout=cbind(MVMout,apply(VMout,1,sum))#
##}#
##wind=wind+1#
}#
#
##Guarda#las#probabilidades#de#outliers#de#cada#espectro#
probout=round(apply(MVMout,1,sum)/apply(Mvcont,1,sum),2)#
probout=probout[order(indesord)]#
print(probout)#
write.table(probout,"prob_outlier_ALL.txt",row.names=FALSE,col.names=TRUE,quote=FALSE)#
lim=0.95#
probout>lim#
#
#

APPENDIX B-5-2
PLS CALIBRATION METHOD
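
A condensed sketch of one repetition of the calibration/validation procedure in the listing below is given first. The synthetic objects spec (one row of 214 absorbances per sample) and tss, as well as the way the optimal number of components is picked from the adjusted cross-validation RMSEP, are our own simplifications of the listing.

# One PLS calibration/validation repetition (condensed sketch).
library(pls)

set.seed(1)
spec <- matrix(runif(45 * 214, 0, 30), 45, 214)            # 45 samples x 214 wavelengths (synthetic)
tss  <- 5 * rowMeans(spec[, 1:30]) + rnorm(45, sd = 2)     # synthetic TSS (mg/L)

ical <- sort(sample(seq_along(tss), round(length(tss) * 2/3)))   # 2/3 of samples for calibration
cal  <- data.frame(tss = tss[ical],  spec = I(spec[ical, ]))
val  <- data.frame(tss = tss[-ical], spec = I(spec[-ical, ]))

fit  <- plsr(tss ~ spec, ncomp = 20, data = cal,
             validation = "LOO", method = "widekernelpls")
nopt <- max(1, which.min(RMSEP(fit)$val["adjCV", 1, ]) - 1)      # optimal number of components
tss_pred <- as.numeric(predict(fit, ncomp = nopt, newdata = val))

round(c(r = cor(val$tss, tss_pred),
        RMSE = sqrt(mean((val$tss - tss_pred)^2))), 2)           # validation fit measures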
#
##################################################
#######PROCESOS#DE#CALIBRACION#VALIDACION#########
###################usando#PLS####################
##################################################
###################PARA#TSS#######################
#
#
rm(list=ls(all=TRUE))#
DQO=read.csv("MMTSSgsm%new%sinmue.csv",header=TRUE)#matriz#de#45xnumsim#
#X=as.matrix(read.csv("absin.csv",header=TRUE))#matriz#de#(45*numsim#x#215)#
#system.time(read.csv("absin.csv",header=TRUE))#
#
library(data.table)##para#leer#grandes#archivos#
##system.time(X<%fread("absin.csv",header=TRUE))#
##dim(X)#
system.time(X1<%fread("absin_col%new%sinmue.csv",header=TRUE))#
dim(X1)#
#
##m=1#
###while#para#colocar#las#ejecuciones#por#muestra#matriz#de##45*(214*10000)#
##while(m<dim(X)[1]){#
####if(m==1){#
######WL=X[m:(m+44),]}#
####else{WL=cbind(WL,X[m:(m+44),])}#
####m=m+45##
##}#
#
#save(WL,file=paste0("WL",m,".txt"))##
##dim(WL)#
##Wabsin=WL#
#Sacamos#las#muestras#identificadas#por#incertidumbre!!!#
##WLu=X1[%37,]##sin#muestra#37#
##WLu=WLu[%33:%34,]##sin#muestras#33#y#34#
##WLu=WLu[%23:%27,]##sin#muestras#23#a#la#27#
##WLu=WLu[%13:%15,]#sin#muestras#13#a#la#15#
##WLu=WLu[%2,]#
##dim(WLu)#
WLu=as.matrix(X1)#
#######
#
#
#X=read.table("absin2.txt",header=TRUE)#matriz#de#(45*numsim#x#215)#
library(pls)#
numsim=1000#
n=1#
m=1#
#
while(n<=numsim){#
DQO1=DQO[,n]#TSS#de#cada#simulacion#

#X1=Wlu[m:(m+44),]#absorbancia#para#cada#numsim#
X1=WLu[,m:(m+213)]##
#conformacion#datos#de#calibracion#y#validacion#
ical=sort(sample(c(1:length(DQO1)),round(length(DQO1)*2/3)))#
DQOcal=DQO1[ical]#
DQOval=DQO1[%ical]#
X1=as.data.frame(X1)#
Xcal=as.matrix(X1[ical,])#
Xval=as.matrix(X1[%ical,])#
#
#PLS##
regm=plsr(DQOcal~.,ncomp=20,data=as.data.frame(Xcal),validation="LOO",method="widekernelpls")#
nopt=which.min(RMSEP(regm)$val)/2%1#
if(nopt<1){nopt=1}#
nopt#
#nopt=6#
DQOpc=predict(regm,ncomp=nopt,newdata=Xcal)#
DQOpv=predict(regm,ncomp=nopt,newdata=Xval)#
#
save(regm,file=paste0("mod%",n,".RData"))##
#
#Agrupamos#resultados#por#cada#simulaciOn#ical#
if(n==1){#
##icalPls=ical#
##DQOpcPls=DQOpc##TSS#de#calibraciOn#
##DQOpvPls=DQOpv#TSS#de#validaciOn#
##noptPls=nopt#
}else{icalPls=cbind(icalPls,ical)#
######DQOpcPls=cbind(DQOpcPls,DQOpc)#
######DQOpvPls=cbind(DQOpvPls,DQOpv)#
######noptPls=rbind(noptPls,nopt)#guardar#nopt#tambiEn!!!!#
}#
#
save.image(file="PLS%TSS.RData")#
n=n+1#
#m=m+45#
m=m+214#para#matriz#de#[nummuestrasX214*numsim]#
}#
#
#
####MEDIDAS#DE#AJUSTE###
#obtener#TSScal:datos#usados#para#calibraciOn##
i=1#
while(i<=numsim){#
##if(i==1){#
####TSScal=DQO[icalPls[,i],i]}#
##else{TSScal=cbind(TSScal,DQO[icalPls[,i],i])}#
##i=i+1#
}#
#obtener#TSSval:datos#usados#para#validaciOn##
i=1#
while(i<=numsim){#
##if(i==1){#
####TSSval=DQO[%icalPls[,i],i]}#
##else{TSSval=cbind(TSSval,DQO[%icalPls[,i],i])}#

##i=i+1#
}#
#
rcG=round(cor(matrix(TSScal,#ncol#=#1),#
##############matrix(DQOpcPls,#ncol#=#1)),2)#
rvG=round(cor(matrix(TSSval,#ncol#=#1),#
##############matrix(DQOpvPls,#ncol#=#1)),2)#
RMSEcG=round(((sum((matrix(TSScal,#ncol#=#1)%#
matrix(DQOpcPls,#ncol#=#1))^2))/length(matrix(TSScal,#ncol#=#1)))^0.5)#
RMSEvG=round(((sum((matrix(TSSval,#ncol#=#1)%#
matrix(DQOpvPls,#ncol#=#1))^2))/length(matrix(TSSval,#ncol#=#1)))^0.5)#
#
TRMA=rbind(cbind(RMSEcG,RMSEvG,rcG,rvG))#
write.csv(TRMA,"TOTALr_majuste_pls_tss.csv",row.names=FALSE)#
#####
#
#fig#PLS#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename=paste0("fig_pls%tss",numsim,".tif"),#width#=#wth,#height#=#wth,#compression#=#"lzw",#pointsize#=#10,#bg#=#
"white",#res#=#300)#
#
par(mar=c(5,5,2,2))#
#
plot(DQO[icalPls[,1],1],#DQOpcPls[,1],xlab="TSS#(mg/L)",ylab="TSSp#(mg/L)",#
#####main="PLS:#Calibration#(blue)#%#Validation#(red)",#
#####col="blue",xlim=c(min(DQO,DQOpcPls,DQOpvPls),max(DQO,DQOpcPls,DQOpvPls)),#
#####ylim=c(min(DQO,DQOpcPls,DQOpvPls),max(DQO,DQOpcPls,DQOpvPls)))#
lines(c(min(DQO,DQOpcPls,DQOpvPls),max(DQO,DQOpcPls,DQOpvPls)),c(min(DQO,DQOpcPls,DQOpvPls),max(DQO,D
QOpcPls,DQOpvPls)),lty=2)#
points(DQO[(%icalPls[,1]),1],DQOpvPls[,1],col="red",pch=3)#
#
#las#demAs#numsim#
#calibraciOn#
i=2#
while(i<=numsim){#
##points(DQO[icalPls[,i],i],DQOpcPls[,i],pch=20,cex=.1,#col="blue")#
##i=i+1#
}#
#validaciOn#
i=2#
while(i<=numsim){#
##points(DQO[%icalPls[,i],i],DQOpvPls[,i],pch=3,cex=.1,#col="red")#
##i=i+1#
}#
#
#
legend("topleft",c(paste("cal:#r=",round(mean(rcG),2),";#RMSE=",round(mean(RMSEcG),2),"#mg/L"),#
paste("val:#r=",round(mean(rvG),2),";#RMSE=#",round(mean(RMSEvG),2),"#mg/L")),#
#######pch=c(20,8),col=c("blue","red"),bty="n")#
#
dev.off()#

APPENDIX B-5-3
SVM CALIBRATION METHOD
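
A condensed sketch of one repetition of the SVM calibration in the listing below is given first: a support vector regression of TSS on the UV-Vis absorbances with the Gaussian kernel (rbfdot); the listing additionally loops over the other kernel functions (polydot, vanilladot, tanhdot, etc.). The synthetic spec and tss objects are the same kind of stand-ins as in the PLS sketch of Appendix B-5-2.

# One SVM calibration/validation repetition (condensed sketch).
library(kernlab)

set.seed(1)
spec <- matrix(runif(45 * 214, 0, 30), 45, 214)          # 45 samples x 214 wavelengths (synthetic)
tss  <- 5 * rowMeans(spec[, 1:30]) + rnorm(45, sd = 2)   # synthetic TSS (mg/L)

ical <- sort(sample(seq_along(tss), round(0.7 * length(tss))))   # 70% of samples for calibration
fit  <- ksvm(x = spec[ical, ], y = tss[ical],
             type = "eps-svr", kernel = "rbfdot", cross = 5)

tss_pred <- as.numeric(predict(fit, spec[-ical, ]))
round(c(r = cor(tss[-ical], tss_pred),
        RMSE = sqrt(mean((tss[-ical] - tss_pred)^2))), 2)        # validation fit measures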
##################################################
#######  CALIBRATION / VALIDATION PROCESS  #######
##################################################
###################  FOR TSS  ####################

numsim=5000

# scan absorbance files including uncertainty
library(data.table)   # to read large files
system.time(absin<-fread("absin.csv",header=TRUE))
dim(absin)
absin=as.matrix(absin)
dim(absin)

# while loop to arrange the runs per sample: matrix of 55*(214*10000)
m=1
while(m<dim(absin)[1]){
  if(m==1){
    WL=absin[m:(m+64),]}
  else{WL=cbind(WL,absin[m:(m+64),])}
  m=m+65
}
dim(WL)
# save WL
write.csv(WL,"absin_col-new.csv",row.names=FALSE)
# remove the samples flagged by the uncertainty analysis
WLu=WL[-23:-27,]    # without samples 23 to 27
WLu=WLu[-14:-16,]   # without samples 14 to 16
WLu=WLu[-6,]
WLu=WLu[-2,]
dim(WLu)
# save WLu
# scan file including uncertainty, with the samples excluded for
# high uncertainty removed
write.csv(WLu,"absin_col-new-sinmue.csv",row.names=FALSE)

# load WLu
system.time(WLu<-fread("absin_col-new-sinmue.csv",header=TRUE))
dim(WLu)
WLu=as.matrix(WLu)

# TSS files including uncertainty, without the samples excluded for
# high uncertainty
MMTSSg=as.matrix(read.csv("MMTSSg.csv", header = TRUE))
dim(MMTSSg)
# remove the samples flagged by the uncertainty analysis
MMTSSgsm=MMTSSg[-23:-27,]     # without samples 23 to 27
MMTSSgsm=MMTSSgsm[-14:-16,]   # without samples 14 to 16
MMTSSgsm=MMTSSgsm[-6,]
MMTSSgsm=MMTSSgsm[-2,]
dim(MMTSSgsm)
# save MMTSSgsm
write.csv(MMTSSgsm,"MMTSSgsm-new-sinmue.csv",row.names=FALSE)

library(kernlab)

# run all the kernel functions
n=1
m=1
while(n<=numsim){
  ical=sort(sample(seq(1,dim(WLu)[1],1),round(0.7*dim(WLu)[1])))
  TSSo=MMTSSgsm[,n]
  # for a matrix of [nsamples x 214*numsim]
  WL=WLu[,m:(m+213)]

  write.table(WL,"WL.txt",row.names=FALSE)

  source("KUVis_v11a.R")
  save(regm,file=paste0("modA",n,".RData"))
  source("KUVis_v11b.R")
  save(regm,file=paste0("modB",n,".RData"))
  source("KUVis_v11c.R")
  save(regm,file=paste0("modC",n,".RData"))
  source("KUVis_v11d.R")
  save(regm,file=paste0("modD",n,".RData"))
  source("KUVis_v11e.R")
  save(regm,file=paste0("modE",n,".RData"))
  source("KUVis_v11f.R")
  save(regm,file=paste0("modF",n,".RData"))
  source("KUVis_v11g.R")
  save(regm,file=paste0("modG",n,".RData"))

  save.image(file="TSS.RData")
  n=n+1
  m=m+214   # for a matrix of [nsamples x 214*numsim]
}

load("TSS.RData")
##  RESULTS  ##
# Kernels: A rbfdot (Gaussian RBF), B polydot (polynomial), C vanilladot (linear),
# D tanhdot (hyperbolic tangent), E laplacedot (Laplacian), F besseldot (Bessel),
# G anovadot (ANOVA RBF)
for(k in c("A","B","C","D","E","F","G")){
  # samples used in each simulation for calibration
  write.csv(get(paste0("ical",k)),paste0("ical",k,".csv"),row.names=FALSE)
  # predicted TSS, calibration and validation
  write.csv(get(paste0("DQOpc",k)),paste0("TSSpc",k,".csv"),row.names=FALSE)
  write.csv(get(paste0("DQOpv",k)),paste0("TSSpv",k,".csv"),row.names=FALSE)
  # goodness-of-fit measures per simulation
  write.csv(cbind(get(paste0("RMSEc",k)),get(paste0("RMSEv",k)),
                  get(paste0("rc",k)),get(paste0("rv",k))),
            paste0("r_majuste",k,"_TSS.csv"),row.names=FALSE)
}
########  GOODNESS-OF-FIT MEASURES OVER ALL RUNS  ####
# For each kernel (A rbfdot, B polydot, C vanilladot, D tanhdot, E laplacedot,
# F besseldot, G anovadot) rebuild the observed TSS used for calibration and
# validation, then compute r and RMSE against the predictions.
for(k in c("A","B","C","D","E","F","G")){
  icalK=get(paste0("ical",k))
  # observed TSS used for calibration
  i=1
  while(i<=numsim){
    if(i==1){TSScalK=MMTSSgsm[icalK[,i],i]}
    else{TSScalK=cbind(TSScalK,MMTSSgsm[icalK[,i],i])}
    i=i+1
  }
  # observed TSS used for validation
  i=1
  while(i<=numsim){
    if(i==1){TSSvalK=MMTSSgsm[-icalK[,i],i]}
    else{TSSvalK=cbind(TSSvalK,MMTSSgsm[-icalK[,i],i])}
    i=i+1
  }
  DQOpcK=get(paste0("DQOpc",k))
  DQOpvK=get(paste0("DQOpv",k))
  assign(paste0("TSScal",k),TSScalK)
  assign(paste0("TSSval",k),TSSvalK)
  assign(paste0("rc",k),round(cor(matrix(TSScalK, ncol = 1),matrix(DQOpcK, ncol = 1)),2))
  assign(paste0("rv",k),round(cor(matrix(TSSvalK, ncol = 1),matrix(DQOpvK, ncol = 1)),2))
  assign(paste0("RMSEc",k),round(((sum((matrix(TSScalK, ncol = 1)-matrix(DQOpcK, ncol = 1))^2))/length(matrix(TSScalK, ncol = 1)))^0.5))
  assign(paste0("RMSEv",k),round(((sum((matrix(TSSvalK, ncol = 1)-matrix(DQOpvK, ncol = 1))^2))/length(matrix(TSSvalK, ncol = 1)))^0.5))
}

# goodness-of-fit results for all models, A to G
TRMA=rbind(cbind(RMSEcA,RMSEvA,rcA,rvA),cbind(RMSEcB,RMSEvB,rcB,rvB),
           cbind(RMSEcC,RMSEvC,rcC,rvC),cbind(RMSEcD,RMSEvD,rcD,rvD),
           cbind(RMSEcE,RMSEvE,rcE,rvE),cbind(RMSEcF,RMSEvF,rcF,rvF),
           cbind(RMSEcG,RMSEvG,rcG,rvG))
write.csv(TRMA,"TOTALr_majuste_A-G_TSS.csv",row.names=FALSE)

#####
# first, third and fifth value of boxplot.stats
rTssA=as.matrix(read.csv("r_majusteA_TSS.csv", header = TRUE))
rTssB=as.matrix(read.csv("r_majusteB_TSS.csv", header = TRUE))
rTssC=as.matrix(read.csv("r_majusteC_TSS.csv", header = TRUE))
rTssD=as.matrix(read.csv("r_majusteD_TSS.csv", header = TRUE))
rTssE=as.matrix(read.csv("r_majusteE_TSS.csv", header = TRUE))
rTssF=as.matrix(read.csv("r_majusteF_TSS.csv", header = TRUE))
rTssG=as.matrix(read.csv("r_majusteG_TSS.csv", header = TRUE))

#summary(rTssA)
##########################################################
##  FIGURES: observed vs. predicted TSS for each kernel ##
##  A rbfdot (Gaussian), B polydot, C vanilladot,        ##
##  D tanhdot, E laplacedot, F besseldot, G anovadot     ##
##########################################################
#MMTSSgsm[which(is.na(MMTSSgsm)==TRUE)]=0

for(k in c("A","B","C","D","E","F","G")){
  icalK=get(paste0("ical",k))
  DQOpcK=get(paste0("DQOpc",k))
  DQOpvK=get(paste0("DQOpv",k))
  rTssK=get(paste0("rTss",k))

  wth=3*580
  hth=wth/2^(.5)
  tiff(filename=paste0("fig_",k,"_TSS-",numsim,".tif"), width = wth, height = wth,
       compression = "lzw", pointsize = 10, bg = "white", res = 300)
  par(mar=c(5,5,2,2))

  plot(MMTSSgsm[icalK[,1],1],DQOpcK[,1],xlab="TSS (mg/L)",ylab="TSSp (mg/L)",
       main="calibration (blue) - validation (red)",col="blue",
       pch=20,cex=.1,xlim=c(0,ceiling(max(MMTSSgsm,DQOpcK,DQOpvK,na.rm=T))+10),
       ylim=c(0,ceiling(max(MMTSSgsm,DQOpcK,DQOpvK,na.rm=T))+10))
  # calibration points
  i=2
  while(i<=numsim){
    points(MMTSSgsm[icalK[,i],i],DQOpcK[,i],pch=20,cex=.1, col="blue")
    i=i+1
  }
  lines(c(min(MMTSSgsm,DQOpcK,DQOpvK,na.rm=T),
          max(MMTSSgsm,DQOpcK,DQOpvK,na.rm=T)),
        c(min(MMTSSgsm,DQOpcK,DQOpvK,na.rm=T),
          max(MMTSSgsm,DQOpcK,DQOpvK,na.rm=T)),lty=2)
  # validation points
  i=1
  while(i<=numsim){
    points(MMTSSgsm[-icalK[,i],i],DQOpvK[,i],pch=3,cex=.1, col="red")
    i=i+1
  }

  legend("topleft",c(paste("cal r:    min= ",boxplot.stats(rTssK[,3])$stats[1],"; median= ",boxplot.stats(rTssK[,3])$stats[3],"; max= ",boxplot.stats(rTssK[,3])$stats[5]),
                     paste("cal RMSE: min= ",boxplot.stats(rTssK[,1])$stats[1],"; median= ",boxplot.stats(rTssK[,1])$stats[3],"; max= ",boxplot.stats(rTssK[,1])$stats[5]," mg/L"),
                     paste("val r:    min= ",boxplot.stats(rTssK[,4])$stats[1],"; median= ",boxplot.stats(rTssK[,4])$stats[3],"; max= ",boxplot.stats(rTssK[,4])$stats[5]),
                     paste("val RMSE: min= ",boxplot.stats(rTssK[,2])$stats[1],"; median= ",boxplot.stats(rTssK[,2])$stats[3],"; max= ",boxplot.stats(rTssK[,2])$stats[5]," mg/L")),
         pch=c(20,20,8,8),col=c("blue","white","red","white"),bty="n",cex=0.95)

  dev.off()
}
##########  MOD A  #########
COD=as.matrix(as.data.frame(TSSo))
X=as.matrix(as.data.frame(WL))
# build the calibration and validation data sets
DQOcal=(COD[ical])
DQOval=(COD[-ical])
Xcal=X[ical,]
Xval=X[-ical,]

set.seed(123)
regm <- ksvm(Xcal,DQOcal,kernel="rbfdot",kpar="automatic",cross=100)
rm(.Random.seed,envir=globalenv())
DQOpcdef=predict(regm,Xcal)
DQOpvdef=predict(regm,Xval)
#DQOs=predict(regm,serie)

## goodness-of-fit measures
rc=round(cor(DQOcal,DQOpcdef),2)
rv=round(cor(DQOval,DQOpvdef),2)
RMSEc=round(((sum((DQOcal-DQOpcdef)^2))/length(DQOcal))^0.5)
RMSEv=round(((sum((DQOval-DQOpvdef)^2))/length(DQOval))^0.5)

# group, for each simulation, ical, the predicted TSS for calibration,
# the predicted TSS for validation and the goodness-of-fit measures
if(n==1){
  icalA=ical
  DQOpcA=DQOpcdef   # predicted TSS, calibration
  DQOpvA=DQOpvdef   # predicted TSS, validation
  #DQOsA=DQOs       # predicted TSS, full series
  rcA=rc
  rvA=rv
  RMSEcA=RMSEc
  RMSEvA=RMSEv
}else{icalA=cbind(icalA,ical)
      DQOpcA=cbind(DQOpcA,DQOpcdef)
      DQOpvA=cbind(DQOpvA,DQOpvdef)
      #DQOsA=cbind(DQOsA,DQOs)
      rcA=c(rcA,rc)
      rvA=c(rvA,rv)
      RMSEcA=c(RMSEcA,RMSEc)
      RMSEvA=c(RMSEvA,RMSEv)
}
##########  MOD B to MOD G  #########
# MOD B to MOD G are identical to MOD A except for the kernel passed to ksvm()
# and the suffix (B..G) of the objects in which the results are accumulated:
#   MOD B: kernel="polydot"    (polynomial)         -> icalB, DQOpcB, DQOpvB, rcB, rvB, RMSEcB, RMSEvB
#   MOD C: kernel="vanilladot" (linear)             -> icalC, DQOpcC, DQOpvC, rcC, rvC, RMSEcC, RMSEvC
#   MOD D: kernel="tanhdot"    (hyperbolic tangent) -> icalD, DQOpcD, DQOpvD, rcD, rvD, RMSEcD, RMSEvD
#   MOD E: kernel="laplacedot" (Laplacian)          -> icalE, DQOpcE, DQOpvE, rcE, rvE, RMSEcE, RMSEvE
#   MOD F: kernel="besseldot"  (Bessel)             -> icalF, DQOpcF, DQOpvF, rcF, rvF, RMSEcF, RMSEvF
#   MOD G: kernel="anovadot"   (ANOVA RBF)          -> icalG, DQOpcG, DQOpvG, rcG, rvG, RMSEcG, RMSEvG
APPENDIX B-5-4
1. TURBIDITY UNCERTAINTY ANALYSIS
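The script below propagates the turbidity measurement uncertainty by Monte Carlo simulation: each laboratory replicate is perturbed with a normal error, the replicates of a sample are averaged, and the relative uncertainty per sample is taken as 100·((upper whisker − lower whisker)/4)/median over the simulated values. A minimal sketch of that estimator (illustrative function and object names, not part of the thesis scripts):

# Minimal sketch of the per-sample relative uncertainty estimate used below.
# 'sims' is an (n_samples x n_simulations) matrix of simulated values (illustrative name).
rel_uncertainty <- function(sims) {
  apply(sims, 1, function(x) {
    s <- boxplot.stats(x)$stats        # lower whisker, Q1, median, Q3, upper whisker
    100 * ((s[5] - s[1]) / 4) / s[3]   # approximate relative standard uncertainty in %
  })
}
# Example: flag samples whose relative uncertainty exceeds 25 %
# which(rel_uncertainty(MMTmg) >= 25)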
# RELATIVE UNCERTAINTY for turbidity
#####
numsim=5000
pb=0.01
j=1
while(j<=numsim){
  i=36   # i == sample number
  while (i<=max(Muestra)){
    filas=which(Muestra==i)
    # turbidity
    Tg=rnorm(length(filas),data[filas,3],sd=pb/2)
    Tmg=mean(Tg,na.rm=TRUE)   # mean of the three replicates
    if(i==36){MTmg=Tmg}else{MTmg=c(MTmg,Tmg)}
    i=i+1
  }
  if(j==1){MMTmg=MTmg}else{MMTmg=cbind(MMTmg,MTmg)}
  j=j+1
}

which(is.na(MMTmg)==TRUE)
MMTmg=MMTmg[-11:-12,]   # matrix of 28 x 5000 possible turbidity values

# build vectors with the upper limit (Msup), median (Mmed) and
# lower limit (Minf) per sample
i=1
while(i<=dim(MMTmg)[1]){
  if(i==1){Msup=boxplot.stats(MMTmg[i,])$stats[5]
           Mmed=boxplot.stats(MMTmg[i,])$stats[3]
           Minf=boxplot.stats(MMTmg[i,])$stats[1]
  }else{
    Msup=c(Msup,boxplot.stats(MMTmg[i,])$stats[5])
    Mmed=c(Mmed,boxplot.stats(MMTmg[i,])$stats[3])
    Minf=c(Minf,boxplot.stats(MMTmg[i,])$stats[1])
  }
  i=i+1
}

# computation of the relative uncertainty per sample
Mu=100*((Msup-Minf)/4)/Mmed
which(Mu>=25)
write.csv(MMTmg,"MMTmg.csv",row.names=FALSE)   # simulated turbidity, dim(28,numsim)
write.csv(Mu,"MuTmg.csv",row.names=FALSE)      # relative uncertainty
2. TURBIDITY OUTLIER ANALYSIS
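The screening below applies the PCA-based multivariate outlier weights of mvoutlier::pcout over moving windows of increasing size on the sorted laboratory matrix (TSS, COD, turbidity) and converts the accumulated weights into an outlier probability per sample. A minimal sketch of one window evaluation, mirroring the call used in the script (illustrative names, not part of the thesis scripts):

# Minimal sketch: pcout() weights for one window of 'wind' consecutive rows of a
# sorted multivariate lab matrix 'Mo' (columns: TSS, COD, turbidity); weights
# close to 0 flag potential outliers.
library(mvoutlier)
wind <- 3
w <- pcout(Mo[1:wind, ], makeplot = FALSE)$wfinal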
################################################################
#########  PROGRAM FOR OUTLIER IDENTIFICATION  #################
#########  FOR TSS / COD / TURBIDITY  ##########################
rm(list=ls(all=TRUE))
##### load the data
# after discarding the high-uncertainty samples, load the data
TSS=as.matrix(read.csv("MMTSSg_sinmuestras.csv",header=TRUE))   # 33 x numsim (10000)
DQO=as.matrix(read.csv("MMDQOg_sinmuestras.csv",header=TRUE))   # 33 x numsim (10000)
Tu=as.matrix(read.table("turbiedad_lab.txt",header=TRUE))       # 33 x numsim (10000)
# compute the median for TSS and COD
TSSm=apply(TSS,1,median)
DQOm=apply(DQO,1,median)
# join TSS, COD and turbidity in the same matrix
labo=cbind(TSSm, DQOm, Tu)
##### save the TSS, COD and turbidity matrix
write.table(labo, file="labo_median.txt",row.names=FALSE,col.names=TRUE,quote=FALSE)
##### MULTIVARIATE OUTLIER IDENTIFICATION
library(sgeostat)
library(mvoutlier)
##### outliers of the laboratory results matrix
#labo=read.table("labo_median.txt",header=TRUE)
#attach(labo)
M=as.data.frame(labo)
View(M)
Mo=M[order(M[,1]),]   # sort the lab data in increasing order of the first column
ind=seq(1,dim(Mo)[1],1)
indesord=ind[order(M[,1])]   # indices of the sorted calibration data

wind=3
# while loop that sweeps all columns of Mo with a moving window of 'wind' samples (rows)
while (wind<=length(M[,1])) {
  m=1
  VMout=matrix(NA,length(M[,1]),length(M[,1])-wind+1)   # matrix of NA with the indicated dimensions
  while (m<=(length(M[,1])-wind+1)) {VMout[m:(m+wind-1),m]=pcout(Mo[m:(m+wind-1),],makeplot=FALSE)$wfinal
                                     m=m+1}
  cont=matrix(0,dim(VMout)[1],dim(VMout)[2])
  cont[which(is.na(VMout)==FALSE)]=1
  vcont=apply(cont,1,sum)
  if (wind==3) {Mvcont=vcont
                VMout[which(is.na(VMout)==TRUE)]=0
                MVMout=apply(VMout,1,sum)
  }else{Mvcont=cbind(Mvcont,vcont)
        VMout[which(is.na(VMout)==TRUE)]=0
        MVMout=cbind(MVMout,apply(VMout,1,sum))
  }
  wind=wind+1}
## save the outlier probability of each observation
probout=round(apply(MVMout,1,sum)/apply(Mvcont,1,sum),2)
probout=probout[order(indesord)]
print(probout)
write.table(probout,"prob_outlier_labo_all.txt",row.names=FALSE,col.names=TRUE,quote=FALSE)
lim=0.95
probout>lim
APPENDIX B-5-5
1. FIRST FLUSH OCCURRENCE PROBABILITY METHOD

Figure 1. OFFIS methodology procedure (Torres et al. 2016).
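As a reading aid (not part of the original scripts): in the M(V)-curve approach of Bertrand-Krajewski et al. (1998) used below, the dimensionless cumulative mass M is fitted against the dimensionless cumulative volume V as M = a·V^b, and exponents b well below 1 indicate that pollutant mass is delivered faster than runoff volume, i.e. a first flush. A minimal sketch of that fit for one event, assuming illustrative inputs Q (L/s) and C (mg/L) recorded at a 1-minute time step:

# Minimal sketch of an M(V) curve and its b exponent for a single event.
# Q: flow (L/s), C: TSS concentration (mg/L), both at 1-min time step (illustrative inputs).
mv_exponent <- function(Q, C, dt = 60) {
  vol  <- Q * dt                    # incremental volume (L)
  mass <- C * vol / 1000            # incremental mass (g)
  V <- cumsum(vol) / sum(vol)       # dimensionless cumulative volume
  M <- cumsum(mass) / sum(mass)     # dimensionless cumulative mass
  fit <- nls(M ~ a * V^b, start = list(a = 1, b = 0.5))
  coef(fit)["b"]                    # b < 1 suggests a first flush
}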

2. SCRIPT FOR FIRST FLUSH ANALYSIS – USING TORRES et al. 2016 AND BERTRAND-KRAJEWSKI et al. 1998
########  PROGRAM TO ANALYSE FIRST FLUSH  #################
rm(list=ls(all=TRUE))

library(data.table)
#TSSall=read.table("TSSsimulALL-In-AGO-14.txt",header=T)
TSSall=as.matrix(fread("TSSsimulALL-In-MAR-15.txt",header=T))
#attach(TSSall)
dim(TSSall)
#Qall=read.csv("Qx1-ene-15.csv",header=T)
Qall=as.matrix(fread("Qx1-jun-15.csv",header=T))
#attach(Qall)
dim(Qall)
#Qall=Qall[-32612,]   # repeated rows
#Qall=Qall[-32609,]
#dim(Qall)
####
Qtss=Qall[923:dim(Qall)[1],]   # flows for which TSS are available
dim(Qtss)
#Qtss=Qall
#####
## set negative TSS values to 0
Tss=TSSall[,1]   # concentration in mg/L
par(mar=c(5,5,2,2))
plot(Tss, type="l")   #,xlim=c(32000,35000)
i=1
while(i<=length(Tss)){
  if(Tss[i]<0){ Tss[i]=0}
  else{Tss[i]=Tss[i]}
  i=i+1
}
lines(Tss, col=3, lty=3)

## PLOT OF FLOW VS TSS
wth=3*580
hth=wth
hth1=wth/2^(.5)
tiff(filename="fig-QvsTSS-JUN-15.tif", width = wth, height = hth1, compression = "none", pointsize = 6, bg = "transparent", res = 450)
layout(matrix(c(1,2), nrow = 1), widths = c(1.0, 0.16))
# plot the TSS pollutograph
t=seq(1,length(Tss),by=1)
par(mar=c(4.5,4.5,1,2.5))
plot(Tss,type="n",xaxt='n',yaxt='n',col="red",ylim=c(0,3*max(Tss)),
     ylab=" ",xlab="t (min)",lty=1,lwd=0.2)
axis(1, pretty(c(0, max(t))), col='black')
axis(2, pretty(c(0, 1.1*max(Tss))), col='red')
mtext("TSS (mg/L)", side=2, line=2.5, cex.lab=1,las=3, col="red")
# for the INLET
lines(t,Tss,col="red",lty=1,lwd=0.5)

# now plot the flow
par(new=T)
plot(t,Qtss[,1],type="l",ylim=rev(range(2*Qtss[,1])),xaxt='n',
     axes=F,ylab='',xlab='',col="blue",lwd=0.2,xlim=c(0,max(t)))
axis(4, pretty(c(0, max(Qall))), col='blue',pos=NA)
mtext("Q (L/s)", side=4, line=1.75, cex.lab=1,las=3, col="blue")
dev.off()
# first M(V) curve
# take at random one of the 5000 Q series available
# for TSS, take at random one of the 5000 TSS series available
# 'me' holds the event matrix (start index in column 3, end index in column 4)
j=1
while(j<= dim(me)[1]){
  i=1
  while(i<=dim(TSSall)[2]){
    Q=Qtss[,i]       # flow in L/s
    Tss=TSSall[,i]   # concentration in mg/L

    # set the negative TSS values to zero
    Tss[which(Tss<0)]=0
    # event limits
    ini=me[j,3]
    fin=me[j,4]
    # volume entering the system
    vol=Q[ini:fin]*60   # volume in litres
    # cumulative volume
    volc=cumsum(vol)/1000   # in m3
    volT=volc/(sum(vol)/1000)
    # mass in g
    Mtss=Tss[ini:fin]*vol/(1000)
    # cumulative total mass
    MT=cumsum(Mtss)
    #plot(MTT, type="l")
    MTT=MT/sum(Mtss)   # cumulative mass over total mass
    # M(V) curve
    if(i==1){Mtsse=Mtss
      MTt=MTT
      vole=vol
      volTt=volT}
    else{Mtsse=cbind(Mtsse,Mtss)
      MTt=cbind(MTt,MTT)
      vole=cbind(vole,vol)
      volTt=cbind(volTt,volT)}

    i=i+1
  }

  write.csv(MTt,file=paste0("mPor-jun-15-",j,".csv"))      # mass percentages
  write.csv(Mtsse,file=paste0("masas-jun-15-",j,".csv"))   # mass values
  write.csv(vole,file=paste0("vol-jun-15-",j,".csv"))
  write.csv(volTt,file=paste0("volpor-jun-15-",j,".csv"))
  j=j+1
}

#Figure
numsim=dim(MTt)[2]
rm(volTt)
rm(MTt)
volTt=as.matrix(read.csv("volpor-mar-15-10.csv",header=T))
MTt=as.matrix(read.csv("mPor-mar-15-10.csv",header=T))
dim(MTt)
dim(volTt)
MTt=MTt[,-1]
volTt=volTt[,-1]
dim(MTt)
dim(volTt)
length(which(is.na(volTt)==TRUE))/dim(volTt)[1]
length(which(is.na(MTt)==TRUE))/dim(MTt)[1]
numsim=dim(MTt)[2]

wth=3*580
hth=wth/2^(.5)
tiff(filename=paste0("fig_MV-curves-",numsim,"event-10v2.tif"), width = wth, height = wth, compression = "lzw",
     pointsize = 12, bg = "white", res = 300)
par(mar=c(5,5,2,2))
plot(volTt[,2]*100,MTt[,2]*100,type="l",col="gray", xlim=c(0,100),
     ylim=c(0,100), ylab="M(%)", xlab="V(%)")
#lines(c(min(volTt), min(MTt)),c(max(volTt), max(MTt)),lty=2)
#lines(c(0,0),c(100,100),lty=2,col="red")
abline(a=0,b=1, lty=2)
i=1
while(i<=dim(MTt)[2]){
  lines(volTt[,i]*100,MTt[,i]*100, col="gray")
  i=i+1
}
abline(a=0,b=1, lty=2)
dev.off()

# COEFFICIENTS a AND b
rm(MMalea)
rm(MValea)
MMalea=MTt
MValea=volTt
dim(MTt)
dim(volTt)

i=1
for (i in 1:numsim) {
  M=MMalea[,i]
  V=MValea[,i]
  #print(i)
  #print(summary(M))
  if(length(which(is.na(M)==TRUE))>0){   # to detect NA in M
    if (i==1) {Mab=NA
    Pab=NA}
    else{Mab=rbind(Mab,NA)
    Pab=cbind(Pab,NA)}
    i=i+1
  }
  else{modelo=nls(M ~ A*V^B, start = list(A=18,B=0.36),nls.control(maxiter=100,warnOnly=TRUE))
       # Warning messages:
       #   1: In min(x) : no non-missing arguments to min; returning Inf
       #   2: In max(x) : no non-missing arguments to max; returning -Inf
       predo=coef(summary(modelo))[1]*V^coef(summary(modelo))[2]
       aa=coef(summary(modelo))[1]
       bb=coef(summary(modelo))[2]
       aabb=c(aa,bb)
       if (i==1) {Mab=aabb}else{Mab=rbind(Mab,aabb)}
       if (i==1) {Pab=predo}else{Pab=cbind(Pab,predo)}   # M predicted
    }
}

write.csv(Mab,"Mab-MAY-13.csv",row.names=FALSE)
write.csv(Pab,"Pab-MAY-13.csv",row.names=FALSE)
dim(Pab)
dim(Mab)
# figure: predicted M(V) curves
MMalea=as.matrix(MMalea)
dim(MMalea)
#M=matrix(MMalea, ncol = 1)
#P=matrix(Pab, ncol = 1)
#r=round(cor(M,P,use="na.or.complete"),2)
#RMSE=round((sum((matrix(MMalea, ncol = 1)-matrix(Pab, ncol = 1))^2,na.rm=T)/length(Pab))^0.5,2)
rm(RMSE)
rm(r)
for (i in 1:numsim) {
  RMSE=round((sum((MMalea[,i]-Pab[,i])^2,na.rm=T)/length(Pab[,i]))^0.5,2)
  r=round(cor(MMalea[,i],Pab[,i],use="na.or.complete"),2)

  if(i==1){RMSEa=RMSE
  ra=r}
  else{RMSEa=rbind(RMSEa,RMSE)
  ra=rbind(ra,r)}
}
##
wth=3*580
hth=wth/2^(.5)
tiff(filename=paste0("fig_-predo_MV-curves-",numsim,"event-13.tif"), width = wth, height = wth, compression = "lzw",
     pointsize = 10, bg = "white", res = 300)
par(mar=c(5,5,2,2))
plot(MValea[,1]*100,Pab[,1]*100, type="l",col="gray",ylab="Mp(%)", xlab="V(%)",
     main="Mpredi-FF- TSS -event 13- May - 15",ylim=c(0,100),xlim=c(0,100))
for (i in 1:dim(Pab)[2]) {
  lines(MValea[,i]*100,Pab[,i]*100,col="gray")
}
abline(a=0,b=1, lty=2)
legend("topleft",c(paste("r   : min=",boxplot.stats(ra)$stats[1], "; mean=",boxplot.stats(ra)$stats[3],
                         "; max=",boxplot.stats(ra)$stats[5]),
                   paste("RMSE: min=",boxplot.stats(RMSEa)$stats[1]*100, "; mean=",boxplot.stats(RMSEa)$stats[3]*100,
                         "; max=",boxplot.stats(RMSEa)$stats[5]*100,"%")), bty="n")

dev.off()
APPENDIX B-6-1
WATER USE AND REUSE QUALITY GUIDELINES TABLES
Table B6-1. Comments for U.S. EPA suggested guidelines (FC: food crops; NFC: nonfood crops; CP: commercially processed; NCP: not
commercially processed; impound: impoundment) (adapted from Asano et al. 2007)
EPA Suggested Guidelines Comments   1-Urban reuse   2-Restricted access area irrigation   3-Agric. Reuse FC NCP   4-Agric. Reuse FC CP   5-Agric. Reuse NFC   6-Recreational impound.   7-Landscape impound.
Consult recommended agricultural X X X X X
(crop) limits for metals
The reclaimed water should not contain X X X
measurable levels of pathogens.
The reclaimed water should be clear X X
and odorless
Higher chlorine residual and/or a longer X X
contact time may be necessary to
assure that viruses and parasites are
inactivated or destroyed
A chlorine residual of 0.5 mg/L or X
greater in the distribution system is
recommended to reduce odors, slime,
and bacterial regrowth
If spray irrigation, TSS less than 30 X X X
mg/L may be necessary
Milking animals should be prohibited X
from grazing for 15 d after irrigation
ceases. A higher level of disinfection,
for example, to achieve ≤ 14 fecal
coli/100 mL, should be provided if this
waiting period is not adhered to
Dechlorination may be necessary to X X
protect aquatic species of flora and
fauna
Reclaimed water should be X
nonirritating to skin and eyes
Nutrient removal may be necessary to X X
avoid algae growth in impoundments
Provide treatment reliability X X X X X X

Table B6-2. Microbiological indicators limits for bathing water groups for inland waters (adapted from EU 2006).
Parameter                              Excellent quality   Good quality   Sufficient   Reference methods of analysis
Intestinal enterococci (cfu/100 ml)    200 (*)             400 (*)        330 (**)     ISO 7899-1 or ISO 7899-2
Escherichia Coli (cfu/100 ml)          500 (*)             1000 (*)       900 (**)     ISO 9308-3 or ISO 9308-1
(*) Based upon a 95-percentile evaluation. See Annex II. Bathing water assessment and classification
(**) Based upon a 90-percentile evaluation. See Annex II. Bathing water assessment and classification
#
Table B6-3. Suggested microbiological quality guidelines for water reuse in agriculture (adapted from Pescod, M.B. 1992).
Category   Reuse condition                                                   Exposed group                Intestinal nematodes(b)      Fecal coliforms
                                                                                                          (no. of eggs per litre(c))   (no. per 100 ml(c))
A          Irrigation of crops likely to be eaten uncooked,                  Workers, consumers, public   ≤ 1                          ≤ 1000(d)
           sports fields, public parks(d)
B          Irrigation of cereal crops, industrial crops, fodder crops,       Workers                      ≤ 1                          No standard recommended
           pasture and trees(e)
C          Localized irrigation of crops in category B if exposure of        None                         Not applicable              Not applicable
           workers and the public does not occur
a In specific cases, local epidemiological, socio-cultural and environmental factors should be taken into account, and the guidelines modified accordingly.
b Ascaris and Trichuris species and hookworms
c During the irrigation period.
d A more stringent guideline (<200 fecal coliforms per 100 ml) is appropriate for public lawns, such as hotel lawns, with which the public may come into direct contact.
e In the case of fruit trees, irrigation should cease two weeks before fruit is picked, and no fruit should be picked off the ground. Sprinkler irrigation should not be used.
Source: WHO (1989)

Table B6-4. Water reuse guidelines for urban uses in Japan (adapted from: Asano 2007; MLIT 2005).
Quality parameters            Toilet/urinal flushing      Landscape irrigation      Recreational use
E. coli no detected no detected
Total Coliform (CFU/100 mL) no detected no detected no detected
Turbidity (NTU) 2 2 2
pH 5.8 – 8.6 5.8 – 8.6 5.8 – 8.6
Color (CU) <10
Appearance not unpleasant not unpleasant not unpleasant
Odor not unpleasant not unpleasant not unpleasant
Chlorine residual (mg/L) 0.1 (free), 0.4 (combined) 0.1 (free), 0.4 (combined) 0.1 (free), 0.4 (combined)
#
Table B6-5. Suggested quality parameters limits for agricultural use (adapted from Colombia 1985)
Concentration Concentration Quality
Quality parameter Quality parameter Limit Value
limit mg/L limit mg/L parameter
Al (aluminum) 5 Fe (Iron) 5 pH 4.5–9.0
As (Arsenic) 0.1 Li (Lithium) 2.5 Total Coliform <5000 MPN**
Be (Beryllium) 0.1 Mn (Manganese) 0.2 Fecal Coliform <1000 MPN**
Cd (Cadmium) 0.01 Mo (Molybdenum) 0.01
Zn (Zinc) 2 Ni (Nickel) 0.2
Co (Cobalt) 0.05 Pb (Lead) 5
Cu (Copper) 0.2 Se (Selenium) 0.02
Cr (Chromium) + V(Vanadium) 0.1
F (Fluorine) 1 Br (Bromine) 0.3–0.4*
*as function of the type of soil and crop ** for irrigation of fruits that are consumed without removing the peel and for vegetables with short stem.

Table B6-6. Suggested quality parameters for Recreational use (adapted from Colombia 1985)
Recreational use Recreational use
Quality parameters
Direct contact* Indirect contact**
Fecal Coliform (MPN) 200 microorganisms/100mL 5.000 microorganisms/100mL
Total Coliform (MPN) 1.000 microorganisms/100mL
Phenolic compounds 0.002 Phenols
Dissolved oxygen 70% saturation concentration 70% saturation concentration
pH 5.0 – 9.0 5.0 – 9.0
Surfactants (mg/L) 0.5 0.5
*Direct contact: swimming and diving activities
**Indirect contact: fishing and nautical sports
#

APPENDIX C-8-1
1. SCRIPT FOR WATER QUANTITY CALIBRATION METHOD - INFLOW
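The script below converts water levels recorded at the wetland inlet into flows through a calibrated V-notch (triangular) weir. The rating it relies on is Q = Cd·(8/15)·tan(θ/2)·√(2g)·H^(5/2), with H the head above the notch; the discharge coefficient Cd is back-calculated from the flows gauged during the field test. A minimal sketch of that rating (an illustrative helper, not part of the thesis scripts; the inlet weir uses θ = 90°, the outlet weir θ = 68°):

# Minimal sketch of the triangular-weir rating used below; H in metres, Q in m3/s.
weir_flow <- function(H, Cd, theta_deg = 90, g = 9.81) {
  Cd * (8 / 15) * tan(theta_deg * pi / 360) * sqrt(2 * g) * H^(5 / 2)
}
# Back-calculating Cd from a gauged pair (Q_meas in m3/s, H_meas in m):
#   Cd = Q_meas / ((8/15) * tan(theta_deg*pi/360) * sqrt(2*9.81) * H_meas^2.5)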
#
rm(list=ls(all=TRUE))#
######
#SCRIPT#PARA#PASAR#DE#NIVELES#DE#ENTRADA#DEL#HUMEDAL#A#CAUDALES#
#
##REALIZAMOS#UNA#PRUEBA#EL#16#Y#EL#17#DE#ABRIL#DE#2015#VARIANDO#CAUDAL##
###EN#LA#ENTRADA#DEL#HUMEDAL#Y#GUARDANDO#LOS#NIVELES#REGISTRADOS.#
####caudales#medidos#para#la#calibraciOn#del#vertedero#
hQ=read.table("HQ%all.txt",h=T)##caudales#medidos#en#L/s#
attach(hQ)#
#
#DeterminaciOn#Cd#para#los#Q#y#H#medidos#
#para#la#definiciOn#de#Q#superiores#a#los#medidos#para#la#
#calibraciOn#
H=(37.5%level)/100#
p=8/15*tan(pi/4)*(2*9.81)^(1/2)#
cd=(Q/1000)/(p*H^(5/2))#
plot(H,cd,cex=0.1)#
points(H[which(H>0.115)],cd[which(H>0.115)],cex=0.1,#col="green")#
cdm=cd[which(H>0.115)]##cd#para#niveles#menores#a#25.7266,#los#cuales#no#medimos#
#
#####creaciOn#de#vector=>#niveles#medidos#sin#repeticiOn#
i=1#
while(i<=length(level)){#
##if(i==1){vector=level[i]#
###########i=i+1}#
##else#if(level[i]!=level[i+1]){vector=c(vector,#level[i])#
################################i=i+1}#
##else{i=i+1}###
}#
vector=c(vector,#level[3030])#
par(mar=c(4.5,4.5,1,1))#
plot(level,Q,cex=0.5)#
#####
#
####Niveles#a#convertir#en#caudales#
N1=as.matrix(read.csv("All_FEB_15.csv",h=T))#archivo#completo#
#interpolaciOn#info#faltante#
xx=seq(1:dim(N1)[1])#
L1=approx(xx,N1[,3],xout=xx)$y#
#
##detecciOn#de#eventos:#nivel#donde#tenemos#Q#menor#a#37.5#
t=N1[1,2]#
dayy=N1[1,1]#
li=1#
ini1=matrix(0,100,4)#matriz#con#los#eventos#DIa,#Hora,#Inicio#y#Fin#
uu=1#
#
while(li<dim(N1)[1])#
{#

APP#C%8%1#–#1#of#9#
##if(L1[li]>(37.5))#
##{#NO#detectO#evento#
####li=li+1#
##}else{#
#####SI#detectO#evento#
####t=N1[(li),2]#
####dayy=N1[(li),1]#
####ini1[uu,1]=dayy##dIa#del#evento#
####ini1[uu,2]=t###hora#del#evento#
####ini1[uu,3]=li###Indice#del#inicio#del#evento#
####li=li+1#
#####while#detectar#el#fin#del#evento#
####while(L1[li]<=(37.5))#
####{li=li+1#
####}#
####ini1[uu,4]=li#
####uu=uu+1#
##}###
}#
###
#verificaciOn#de#eventos##
#Matriz#eventos#ini1#
ini1=ini1[%24:%100,]#
ini1=ini1[%5:%22,]#
#
#
write.csv(ini1,"Eventos%feb%15.csv",row.names=FALSE)#
#grafica#niveles#detectando#eventos#
L2=138.5%L1#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename#=#"level%feb%15.tif",#width#=#wth,#height#=#hth,#compression#=#"lzw",#pointsize#=#10,##
#####bg#=#"transparent",#res#=#300)#
par(mar=c(4.5,4.5,1,1))#
plot(L2,#type="l",#ylab="Water#level",#xlab="Records#1/min",ylim=c(95,130))#
i=1#
while(i<=dim(ini1)[1])#
{lines(xx[ini1[i,3]:ini1[i,4]],L2[ini1[i,3]:ini1[i,4]],#col="red")#
i=i+1#
}#
dev.off()#
#
#caudales#correspondientes#a#L1#
n=1#
numsim=5000#
#caudal#equivalente#del#nivel#L1#
Qx1=rep(0,#length.out#=#length(L1))#
#
while(n<=numsim){#
###creaciOn#yalea=>#Q#aleatorio#de#los#Q#medidos#
##i=1#
##yalea=0#
##while(i<=length(vector)){#
####if(length(Q[which(level==vector[i])])==1){#
######yalea=c(yalea,Q[which(level==vector[i])])##

APP#C%8%1#–#2#of#9#
####}#
####else#{yalea=c(yalea,sample(Q[which(level==vector[i])],size=1))}#
####i=i+1#
##}#
##yalea=yalea[%1]#
###
##m=1###
##j=1#
##QlqT=0#
##while(j<=dim(ini1)[1]){#
####if(m==ini1[j,3]){#entrO#en#el#nivel#donde#tenemos#evento#
######Lq=L1[ini1[j,3]:ini1[j,4]]#nivel#del#evento#
######k=1#
######Qlq=0#
######while(k<=length(Lq)){#
########if(Lq[k]>level[which.max(Q)]){#
#########utilizamos#Q#y#level#medidios#
########Qlq=c(Qlq,approx(vector,yalea,Lq[k],rule=2)$y)#
########}else{#
#########utilizamos#cd#determinado#por#Qvertedero#teOrico###
########Hl1=(37.5%Lq[k])/100##nivel#desde#la#base#del#vertedero#
########qt=sample(cdm,size=1)*Hl1^(5/2)*p*1000#caudal#calculado#con#cd#
########Qlq=c(Qlq,qt)}#
########k=k+1#
########}#
######Qlq=Qlq[%1]#
######QlqT=c(QlqT,Qlq)#Todos#los#Qs#
######m=as.numeric(ini1[j,4])+1#
######j=j+1#
#######
######}else{QlqT=c(QlqT,0)#
######m=m+1}#####
##}#
##QlqT=QlqT[%1]#
##Qf=rep(0,length.out#=length(L1)%length(QlqT))#Qfaltantes#
##QlqT=c(QlqT,Qf)#serie#completa#de#Qs#
##Qx1=cbind(Qx1,QlqT)#para#agrupar#las#simulaciones##
n=n+1#
}#
#
Qx1=Qx1[,%1]#
#
plot(Qx1[,1],#type="l",col=1,ylim=c(min(Qx1),max(Qx1)))#
i=2#
while(i<=numsim){#
lines(Qx1[,i],col=i)#
i=i+1}#
#
#para#el#cAlculo#de#la#incertidumbre#relativa#por#min#
##Generamos#vectores#con#los#lim#superior(Msup),#mediana(Mmed),##
#lim#inferior(Minf)por#muestra#
i=1#
while(i<=dim(Qx1)[1]){#
##if(i==1){Msup=boxplot.stats(Qx1[i,])$stats[5]#
###########Mmed=boxplot.stats(Qx1[i,])$stats[3]#

APP#C%8%1#–#3#of#9#
###########Minf=boxplot.stats(Qx1[i,])$stats[1]#
##}else{#
####Msup=c(Msup,boxplot.stats(Qx1[i,])$stats[5])#
####Mmed=c(Mmed,boxplot.stats(Qx1[i,])$stats[3])#
####Minf=c(Minf,boxplot.stats(Qx1[i,])$stats[1])#
##}#
##i=i+1##
}#
#
#CAlculo#de#la#incertidumbre#relativa#por#muestra#
Mu=100*((Msup%Minf)/4)/Mmed#
write.csv(Msup,"Msup%Qx1%feb%15.csv",row.names=FALSE)#Caudales#==#niveles#
write.csv(Mmed,"Mmed%Qx1%feb%15.csv",row.names=FALSE)#Caudales#==#niveles#
write.csv(Minf,"Minf%Qx1%feb%15.csv",row.names=FALSE)#Caudales#==#niveles#
write.csv(Qx1,"Qx1%feb%15.csv",row.names=FALSE)#Caudales#==#niveles#
write.csv(Mu,"Mu%Qx1%feb%15.csv",row.names=FALSE)#incertidumbre#relativa#
#
######
#figura#caudales#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename#=#"flow%feb%15%zoom.tif",#width#=#wth,#height#=#hth,#compression#=#"lzw",#pointsize#=#10,##
#####bg#=#"transparent",#res#=#300)#
par(mar=c(4.5,4.5,1,1))#
plot(Mmed,#type="l",col=1,ylim=c(min(Minf),max(Msup)),xlab="Records#1/min",#
#####ylab="Flows#L/s",lwd=.2,#xlim=c(9000,#12000))#
lines(Minf,#col="red",lwd=.2)#
lines(Msup,#col="red",lwd=.2)#
dev.off()#
#
#figura#caudales#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename#=#"flow%feb%15.tif",#width#=#wth,#height#=#hth,#compression#=#"lzw",#pointsize#=#10,##
#####bg#=#"transparent",#res#=#300)#
par(mar=c(4.5,4.5,1,1))#
plot(Mmed,#type="l",col=1,ylim=c(min(Minf),max(Msup)),xlab="Records#1/min",#
#####ylab="Flows#L/s",lwd=.2)#
lines(Minf,#col="red",lwd=.2)#
lines(Msup,#col="red",lwd=.2)#
dev.off()#
#
######
#PAra#indetificar#Indice#de#los#Qmaxs#para#determinar#el#nivel#mAx#por#evento#
#cargar#Mmed#
Mmed=as.matrix(read.csv("Mmed%Qx1%feb%15.csv",#h=T))#
class(Mmed)#
ini1=read.csv("Eventos%feb%15.csv",#header=T)#
attach(ini1)#
#ini1=as.matrix(ini1)#
class(ini1$V3)#
#
####Niveles#a#convertir#en#caudales#
N1=as.matrix(read.csv("All_FEB_15.csv",h=T))#archivo#completo#
#interpolaciOn#info#faltante#

xx=seq(1:dim(N1)[1])#
L1=approx(xx,N1[,3],xout=xx)$y#
L2=138.5%L1#
l=1#
while(l<=dim(ini1)[1])#
{#
##if(l==1){Nemax=L1[ini1$V3[l]+(which.max(Mmed[ini1$V3[l]:ini1$V4[l]])%1)]#
###########Nemean=mean(L1[ini1$V3[l]:ini1$V4[l]])}#
##else{Nemax=c(Nemax,L1[ini1$V3[l]+(which.max(Mmed[ini1$V3[l]:ini1$V4[l]])%1)])#
#######Nemean=c(Nemean,mean(L1[ini1$V3[l]:ini1$V4[l]]))}#
##l=l+1}#
write.csv(cbind(Nemax,Nemean),"Nmax&mean%feb%15.csv",row.names=FALSE)#Nivel==Qmax#
#
######
#script#para#encontrar#DeltaHmax#
N1=as.matrix(read.csv("All_FEB_15.csv",h=T))#archivo#completo#
#interpolaciOn#info#faltante#
xx=seq(1:dim(N1)[1])#
L1=approx(xx,N1[,3],xout=xx)$y#
ini1=read.csv("Eventos%feb%15.csv",#header=T)#
ini1=ini1[,%5]#
#
k=1#
while(k<=dim(ini1)[1]){#
##if(k==1){#
####dhmax=min(diff(L1[ini1[k,3]:ini1[k,4]],lag=1))#
##}#
##else{dhmax=c(dhmax,min(diff(L1[ini1[k,3]:ini1[k,4]],lag=1)))}#
##k=k+1#
}#
#
DHmax=cbind(ini1,dhmax)#
#
write.csv(DHmax,"DHmaxFEB%15.csv")#
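# Illustrative sketch (an assumption): a compact equivalent of the loop above,
# taking the largest 1-min level drop of each event as the minimum of the lagged
# differences of the interpolated level series within the event window.
max_level_drop <- function(L, start, end) min(diff(L[start:end], lag = 1))
# e.g. dhmax <- mapply(max_level_drop, start = ini1[, 3], end = ini1[, 4],
#                      MoreArgs = list(L = L1))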
#

2. SCRIPT FOR WATER QUANTITY CALIBRATION METHOD - OUTFLOW
#
rm(list=ls(all=TRUE))
### SCRIPT TO CONVERT WETLAND OUTLET LEVELS INTO FLOWS
#
## A test was run on 25 and 29 Sept 2015, varying the flow at the wetland
## outlet and recording the corresponding water levels.
# Flows measured for the calibration of the weir
hQ=read.table("HQ-out.txt",h=T)  # measured flows in L/s
attach(hQ)
#
# Determination of Cd for the measured Q and H pairs, used to define
# flows beyond the measured calibration range
H=(46.5-level)/100  # head in m; this weir starts operating at a level of 46.5 cm
p=8/15*tan(68*pi/(180*2))*(2*9.81)^(1/2)  # geometric constant of the 68-degree V-notch weir
cd=(Q/1000)/(p*H^(5/2))
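# Illustrative sketch (an assumption): the rating relation used here is the standard
# triangular (V-notch) weir equation, Q = Cd * (8/15) * tan(theta/2) * sqrt(2*g) * H^(5/2),
# with a 68-degree notch. The helpers below restate the forward and inverse forms.
weir_constant <- function(theta_deg = 68, g = 9.81) {
  8 / 15 * tan(theta_deg * pi / 360) * sqrt(2 * g)
}
weir_flow_Ls <- function(H, Cd, theta_deg = 68) {
  1000 * Cd * weir_constant(theta_deg) * H^(5 / 2)   # H in m, result in L/s
}
weir_cd <- function(Q_Ls, H, theta_deg = 68) {
  (Q_Ls / 1000) / (weir_constant(theta_deg) * H^(5 / 2))
}
# e.g. all.equal(weir_cd(Q, H), cd)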
#
####FIGURA#

#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename#=#"HvsCd%out.tif",#width#=#wth,#height#=#hth,#compression#=#"lzw",#pointsize#=#10,##
#####bg#=#"transparent",#res#=#300)#
par(mar=c(4.5,4.5,1,1))#
plot(H,cd,cex=0.5,xlab="H#(m)",#ylab="Cd%out",#pch=16)#
points(H[which(H>0.115)],cd[which(H>0.115)],pch=16,cex=0.5,#col="blue")#
points(H[which(H>0.0945&H<0.10)],cd[which(H>0.0945&H<0.10)],cex=0.7,##
#######pch=16,col="red")#
#
dev.off()#
#
#
cdm=cd[which(H>0.115)]  # Cd for levels below 35.17 cm, which were not measured, i.e. LARGE flows
cdp=cd[which(H>0.0945&H<0.10)]  # Cd for the range that was not measured, i.e. SMALL flows
#
#
#####creaciOn#de#vector=>#niveles#medidos#sin#repeticiOn#
i=1#
while(i<=length(level)){#
##if(i==1){vector=level[i]#
###########i=i+1}#
##else#if(level[i]!=level[i+1]){vector=c(vector,#level[i])#
################################i=i+1}#
##else{i=i+1}###
}#
vector=c(vector,#level[63])#
#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename#=#"qvslevel%measured%out.tif",#width#=#wth,#height#=#hth,#compression#=#"lzw",#pointsize#=#
10,##
#####bg#=#"transparent",#res#=#300)#
par(mar=c(4.5,4.5,1,1))#
plot(level,Q,cex=0.5,xlab="water#level#(cm)",ylab="Q#L/s",#pch=16)#
dev.off()#
#####
#
####Niveles#a#convertir#en#caudales#
N1=as.matrix(read.csv("All_FEB_15.csv",h=T))#archivo#completo#
#la#serie#tiene#NA?#
length(which(is.na(N1[,7])))#
#interpolaciOn#info#faltante#
xx=seq(1:dim(N1)[1])#salida#
L4=approx(xx,N1[,7],xout=xx)$y#salida#
#
#L4=N1[,7]#
L4l=as.numeric(unlist(lowess(L4,f=.008)['y']))#
plot(L4,#type="l")#
lines(L4l,col="red")#
#
###detecciOn#de#eventos:#nivel#donde#tenemos#Q#menor#a#37.5#en#la#entrada#
#

ini1=read.csv("Eventos%feb%15.csv",header=T)#ACA#VOY#
#para#agregar#fin#de#eventos#en#nivel#de#salida#
ini1=cbind(ini1,rep(0,#length.out=dim(ini1)[1]),rep(0,#length.out=dim(ini1)[1]))#
i=1#
while(i<=dim(ini1)[1])#
{ini1[i,3]=as.numeric(ini1[i,3])#
ini1[i,4]=as.numeric(ini1[i,4])#
##ini1[i,5]=as.numeric(ini1[i,3])+10#
#ini1[i,6]=as.numeric(ini1[i,4])+101#
#i=i+1#
}#
class(ini1[1,4])#
z<%list("V1","V2","V3","V4","V5","V6")#
names(ini1)<%z#
class(ini1[1,4])#
#
#
#######
#grafica#niveles#detectando#eventos#
L4b=88.7%L4l#
#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename#=#"level%FEB%15%OUT%low.tif",#width#=#wth,#height#=#hth,#compression#=#"lzw",#pointsize#=#
10,##
#####bg#=#"transparent",#res#=#300)#
par(mar=c(4.5,4.5,1,1))#
plot(L4l,#type="l",#ylab="Water#level",#xlab="Records#1/min")#
i=1#
while(i<=dim(ini1)[1])#
{lines(xx[ini1[i,5]:ini1[i,6]],L4l[ini1[i,5]:ini1[i,6]],#col="red")#
i=i+1#
}#
lines(xx,rep(46.5,length(L4l)))##por#debajo#de#este#nivel#tenemos#caudal#
dev.off()#
#
#caudales#correspondientes#a#L4#
n=1#
numsim=5000#
#caudal#equivalente#del#nivel#L1#
Qx1=rep(0,#length.out#=#length(L4))#
#
while(n<=numsim){#
###creaciOn#yalea=>#Q#aleatorios#de#los#Q#medidos#para#cada#simulaciOn#n#
##i=1#
##yalea=0#
##while(i<=length(vector)){#
####if(length(Q[which(level==vector[i])])==1){#
######yalea=c(yalea,Q[which(level==vector[i])])##
####}#
####else#{yalea=c(yalea,sample(Q[which(level==vector[i])],size=1))}#
####i=i+1#
##}#
##yalea=yalea[%1]#
###

###determinaci?n#Q#para#nivel#por#evento#
##j=1#recorre#matriz#eventos#
##m=ini1[j,5]#recorre#L4#desde#el#primer#evento#
##QlqT=rep(0,(ini1[j,5]%1))#caudal#Q#correspondiente#a#L4#
###
##while(j<=dim(ini1)[1]){#
#####if(m==ini1[j,5]){#entrO#en#el#nivel#donde#tenemos#evento#
#######
######Lq=L4l[ini1[j,5]:ini1[j,6]]#nivel#del#evento#
######k=1#recorre#Lq#
#######plot(Lq,#type="l")#
#######Qlq=0#
######while(k<=length(Lq)){#
#########if(Lq[k]>=level[which.max(Q)]&Lq[k]<=level[which.min(Q)]){#
#########utilizamos#Q#y#level#medidios#
###########QlqT=c(QlqT,approx(vector,yalea,Lq[k],rule=2)$y)#
############Qlq=c(Qlq,approx(vector,yalea,Lq[k],rule=2)$y)#
########}else#if(Lq[k]<level[which.max(Q)]){#
#########Q#grandes:#utilizamos#cd#determinado#por#Qvertedero#teOrico##
##########Hl1=(51.3%Lq[k])/100##nivel#desde#la#base#del#vertedero#
##########qt=sample(cdm,size=1)*Hl1^(5/2)*p*1000#caudal#calculado#con#cd#
##########QlqT=c(QlqT,qt)#
###########Qlq=c(Qlq,qt)#
########}else#if(Lq[k]>level[which.min(Q)]&Lq[k]<46.5){#
#########Q#peque?os:#utilizamos#cd#determinado#por#Qvertedero#teOrico#
##########Hl1=(51.3%Lq[k])/100##nivel#desde#la#base#del#vertedero#
##########qt=sample(cdp,size=1)*Hl1^(5/2)*p*1000#caudal#calculado#con#cd#
##########QlqT=c(QlqT,qt)#
###########Qlq=c(Qlq,qt)#
########}else{QlqT=c(QlqT,0)}#
##########k=k+1#
########}#
#####
#######QlqT=caudales#hasta#el#evento#j#
######m=ini1[j,6]+1#
######j=j+1#
######r=(ini1[j,5])%m#
######if(j<=dim(ini1)[1]){#
########if(r<0){#caso#2#cuando#el#fin#del#evento#i#es#mayor#al#
#########inicio#evento#i+1#
########QlqT=QlqT[1:ini1[j,5]]#
########}else{QlqT=c(QlqT,#rep(0,r))}#caso#1#cuando#el#fin#evento#i#no#se##
###########traslapa#con#el#inicio#del#evento#i+1#
######}#
##}#
###
##QlqT=QlqT[%1]#los#Q#del#j#evento#
##Qf=rep(0,length.out#=length(L4l)%length(QlqT))#Qfaltantes#
##QlqT=c(QlqT,Qf)#serie#completa#de#Qs#
##Qx1=cbind(Qx1,QlqT)#para#agrupar#las#simulaciones##
n=n+1#
}#
###AQUI#VOY#correr#los#5000#caudales#
Qx1=Qx1[,%1]#
write.csv(Qx1,"Qx1%FEB%15%out%low.csv",row.names=FALSE)#Caudales#==#niveles#

plot(Qx1[,1],#type="l",col=1,ylim=c(min(Qx1),max(Qx1)))#
i=2#
while(i<=numsim){#
lines(Qx1[,i],col=i)#
i=i+1}#
#
# Computation of the relative uncertainty per 1-min record:
# build vectors with the upper limit (Msup), median (Mmed)
# and lower limit (Minf) of the simulated flows for each record.
i=1#
while(i<=dim(Qx1)[1]){#
##if(i==1){Msup=boxplot.stats(Qx1[i,])$stats[5]#
###########Mmed=boxplot.stats(Qx1[i,])$stats[3]#
###########Minf=boxplot.stats(Qx1[i,])$stats[1]#
##}else{#
####Msup=c(Msup,boxplot.stats(Qx1[i,])$stats[5])#
####Mmed=c(Mmed,boxplot.stats(Qx1[i,])$stats[3])#
####Minf=c(Minf,boxplot.stats(Qx1[i,])$stats[1])#
##}#
##i=i+1##
}#
#
# Relative uncertainty per record
Mu=100*((Msup-Minf)/4)/Mmed
write.csv(Msup,"Msup-Qx1-FEB-15-out-low.csv",row.names=FALSE)  # flows == levels
write.csv(Mmed,"Mmed-Qx1-FEB-15-out-low.csv",row.names=FALSE)  # flows == levels
write.csv(Minf,"Minf-Qx1-FEB-15-out-low.csv",row.names=FALSE)  # flows == levels
write.csv(Mu,"Mu-Qx1-FEB-15-out-low.csv",row.names=FALSE)      # relative uncertainty
#
#figura#caudales#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename#=#"flow%FEB%15zoom%low.tif",#width#=#wth,#height#=#hth,#compression#=#"lzw",#pointsize#=#
10,##
#####bg#=#"transparent",#res#=#300)#
par(mar=c(4.5,4.5,1,1))#
plot(Mmed,#type="l",col=1,ylim=c(min(Minf),max(Msup)),#
#####xlim=c(4000,6000),xlab="Records#1/min",#
#####ylab="Flows#L/s",lwd=.2)#
lines(Minf,#col="red",lwd=.2)#
lines(Msup,#col="red",lwd=.2)#
dev.off()#
#
#figura#caudales#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename#=#"flow%FEB%15%low.tif",#width#=#wth,#height#=#hth,#compression#=#"lzw",#pointsize#=#10,##
#####bg#=#"transparent",#res#=#300)#
par(mar=c(4.5,4.5,1,1))#
plot(Mmed,#type="l",col=1,ylim=c(min(Minf),max(Msup)),xlab="Records#1/min",#
#####ylab="Flows#L/s",lwd=.2)#
lines(Minf,#col="red",lwd=.2)#
lines(Msup,#col="red",lwd=.2)#
dev.off()#
#

APPENDIX C-9-1
TURBIDITY PROBES CALIBRATION
#
######## TURBIDITY PROBES CALIBRATION PROGRAM ########
rm(list=ls(all=TRUE))
#
data=read.table("datos_calT.txt", header=T)  # field data relating turbidity to TSS
attach(data)
Tu=data[,3]
SSTu=data[,4]
par(mar=c(5,5,2,2))
plot(Tu,SSTu, ylab="TSS (mg/L)", xlab="T (NTU)",pch=20)
#
datan=cbind(data,c(rep("IN",(15)),rep("OUT",15),rep("IN",15),rep("OUT",9),rep("IN",15),rep("OUT",15
)))#
View(datan)#
datan=data.frame(datan)#
names(datan)<%NULL#
names(datan)<%c("Fecha"#,"Muestra","Tu","SST","Place")#
iint=data$Muestra[which(datan[5]=="IN")]#
iin=unique(iint[duplicated(iint)])#
ioutt=data$Muestra[which(datan[5]=="OUT")]#
iout=unique(ioutt[duplicated(ioutt)])#
#
#Figura#turbiedad#medida#en#campo#vs#laboratorio#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename="TurbvsSTTlab.tif",##width#=#wth,#height#=#hth,##
#####compression#=#"lzw",#pointsize#=#11,#bg#=#"transparent",#res#=#300)#
par(mar=c(4.5,4.5,1,2.5))#
plot(datan$T[which(datan[5]=="IN")],datan$SST[which(datan[5]=="IN")],#ylab="TSS#(mg/L)",##
#####xlab="T#(NTU)",pch=19,#col="red")#
points(datan$T[which(datan[5]=="OUT")],datan$SST[which(datan[5]=="OUT")],##
#######ylab="TSS#(mg/L)",#xlab="T#(NTU)",pch=18,#col="blue")#
dev.off()#
#separar#IN#and#OUT#para#calibraciOn#
#####
######
#SST#5000#generaciones#de#SST#para#cada#una#de#las#muestras#
MSST=read.csv("MMTSSg%tubi.csv",h=TRUE)#
attach(MSST)#
dim(MSST)#
#matriz#muestras#IN##
i=1#
while(i<=length(iin)){#
##if(i==1){iinm=which(MSST[,1]==iin[i])}#
##else{iinm=c(iinm,#which(MSST[,1]==iin[i]))}#
##i=i+1#
}#
MSSTin=MSST[iinm,]#MAtriz#de#SST#SOLO#ENTRADA#
MSSTinm=MSSTin[,%1]##matriz#de#15#X5000#posibles#valores#de#SST#relacionados#a#Turbiedad#campo#
dim(MSSTinm)#

#
#MAtriz#SOLO#SALIDA#
MSSTout=MSST[%iinm,]#MAtriz#de#SST#SOLO#ENTRADA#
MSSToutm=MSSTout[,%1]##matriz#de#13#X5000#posibles#valores#de#SST#relacionados#a#Turbiedad#
campo#
dim(MSSToutm)#
#
#Turbiedad#5000#generaciones#de#Turbiedad#para#cada#una#de#las#muestras#
MMTmg=read.csv("MMTmg.csv",header=TRUE)#
attach(MMTmg)#
MMtmgin=MMTmg[iinm,]##matriz#de#15#X5000#
MMtmgout=MMTmg[%iinm,]##matriz#de#13#X5000#
#
######
#Primero#IN#
n=1#
numsim=5000#
while(n<=numsim){#
#AquI#empieza#el#AnAlisis#con#la#primera#COL#de#datos#de#T#y#SST#lo#MEDIDO#en#CAMPO#organizamos#
de#menor#a#mayor#por#la#columna#Turbiedad#
##ical=sort(sample(c(1:dim(MMtmgin)[1]),round(dim(MMtmgin)[1]*2/3)))#
##Tucal=MMtmgin[ical,n]#
##Tuval=MMtmgin[%ical,n]#estos#son#como#mis#datos#reales#
##Tsscal=MSSTinm[ical,n]#
##Tssval=MSSTinm[%ical,n]#
###guardo#los#datos#utilizados#
##if(n==1){TucalT=Tucal#
####TuvalT=Tuval#
####TsscalT=Tsscal#
####TssvalT=Tssval#
##}else{TucalT=cbind(TucalT,Tucal)#
##TuvalT=cbind(TuvalT,Tuval)#
##TsscalT=cbind(TsscalT,Tsscal)#
##TssvalT=cbind(TssvalT,Tssval)}#
###
##SSTTu=cbind(Tucal,Tsscal)##Turbiedad,#SST#
##SSTTuorg=SSTTu[order(SSTTu[,1]),]##organizan#de#acuerdo#a#Turbiedad#de#mayor#a#menor#
##Tu=SSTTuorg[,1]#
##STTu=SSTTuorg[,2]#
###con#datos#de#validaciOn#
##realT=Tuval#
####k=1#
####Tssr=0#
####while(k<=length(realT)){#entra#a#cada#columna#de#Tu#y#SSTu#de#las#5000##
########if(realT[k]<max(Tu)&realT[k]>min(Tu)){#caso#que#Turb#real#estE#entre#los#valores#medidos#
############Tssr=c(Tssr,approx(Tu,STTu,realT[k],ties="ordered",method#=#"linear")$y)#
########}#
########else#if(realT[k]>max(Tu)){#caso#que#Turb#real#sea#mayor#que#lo#medido#
#############p=(STTu[length(Tu)]%STTu[length(Tu)%1])/(Tu[length(Tu)]%Tu[length(Tu)%1])#Triangulos#
semejantes==pendiente#
############Tssr=c(Tssr,STTu[length(Tu)%1]+p*(realT[k]%Tu[length(Tu)%1]))#
########}#
########else{##if(realT[k]<min(Tu))#caso#que#Turb#real#sea#menor#que#lo#medido#
############Tsall=STTu[1:14]#
############nad=which(is.na(STTu[1:14])==TRUE)#
############if(length(nad)>0){#

#
############Tsall=Tsall[%which(is.na(STTu[1:14])==TRUE)]}#
############else{Tsall=Tsall}#
############Ts=sample(Tsall,2)#escojemos#aletoria/#un#STT#de#este#rango#medido#
############Ts2=Ts[1]#
############Ts1=Ts[2]#
############T2=Tu[which(Tsall==Ts2)]#correspondiente#Turbiedad#
############T1=Tu[which(Tsall==Ts1)]#
############p=(Ts2%Ts1)/(T2%T1)#
############Tsr=Ts1%p*(T1%realT[k])#
##############if#(Tsr>max(Tsall)){#
##############Tssr=c(Tssr,realT[k])}#
##############else#if(Tsr<0){Tssr=c(Tssr,0)}#
############else{Tssr=c(Tssr,Tsr)}#
########}###
####k=k+1###
####}#
##Tssr=Tssr[%1]#
###Guardando#resultados#de#validaciOn#
##if(n==1){TsspvT=Tssr}else{TsspvT=cbind(TsspvT,Tssr)}#
###
###con#datos#de#calibraciOn#
##realT=Tucal#
##k=1#
##Tssr=0#
##while(k<=length(realT)){#entra#a#cada#columna#de#Tu#y#SSTu#de#las#5000##
####if(realT[k]<max(Tu)&realT[k]>min(Tu)){#caso#que#Turb#real#estE#entre#los#valores#medidos#
######Tssr=c(Tssr,approx(Tu,STTu,realT[k],ties="ordered",method#=#"linear")$y)#
####}#
####else#if(realT[k]>max(Tu)){#caso#que#Turb#real#sea#mayor#que#lo#medido#
######p=(STTu[length(Tu)]%STTu[length(Tu)%1])/(Tu[length(Tu)]%Tu[length(Tu)%1])#Triangulos#
semejantes==pendiente#
######Tssr=c(Tssr,STTu[length(Tu)%1]+p*(realT[k]%Tu[length(Tu)%1]))#
####}#
####else{##if(realT[k]<min(Tu))#caso#que#Turb#real#sea#menor#que#lo#medido#
######Tsall=STTu[1:14]#
######nad=which(is.na(STTu[1:14])==TRUE)#
######if(length(nad)>0){#
########Tsall=Tsall[%which(is.na(STTu[1:14])==TRUE)]}#
######else{Tsall=Tsall}#
######Ts=sample(Tsall,2)#escojemos#aletoria/#un#STT#de#este#rango#medido#
######Ts2=Ts[1]#
######Ts1=Ts[2]#
######T2=Tu[which(Tsall==Ts2)]#correspondiente#Turbiedad#
######T1=Tu[which(Tsall==Ts1)]#
######p=(Ts2%Ts1)/(T2%T1)#
######Tsr=Ts1%p*(T1%realT[k])#
######if#(Tsr>max(Tsall)){#
########Tssr=c(Tssr,realT[k])}#
######else#if(Tsr<0){Tssr=c(Tssr,0)}#
######else{Tssr=c(Tssr,Tsr)}#
####}###
####k=k+1###
##}#
##Tssr=Tssr[%1]#
###Guardando#resultados#de#calibraciOn#

#
##if(n==1){TsspcT=Tssr}else{TsspcT=cbind(TsspcT,Tssr)}#
###
##n=n+1#########
######}#
#
write.csv(TsscalT,"TSS%cal%all%IN.csv")#
write.csv(TssvalT,"TSS%val%all%IN.csv")#
write.csv(TucalT,"Turb%cal%all%IN.csv")#
write.csv(TuvalT,"Turb%val%all%IN.csv")#
write.csv(TsspcT,"TSSp%cal%all%IN.csv")#
write.csv(TsspvT,"TSSp%val%all%IN.csv")#
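# Illustrative sketch (an assumption, simplified): one Monte Carlo calibration/
# validation split for the turbidity-TSS relation, where 2/3 of the samples
# calibrate an interpolation curve and the remaining 1/3 validate it. Unlike the
# full loop above, out-of-range turbidities are simply clamped to the nearest
# calibrated value (rule = 2) instead of being extrapolated.
tss_from_turbidity <- function(Tu_cal, TSS_cal, Tu_new) {
  o <- order(Tu_cal)
  approx(Tu_cal[o], TSS_cal[o], xout = Tu_new, rule = 2, ties = "ordered")$y
}
# Example split, assuming the paired field vectors Tu and SSTu defined above:
# ical <- sort(sample(seq_along(Tu), round(length(Tu) * 2 / 3)))
# TSS_pred_val <- tss_from_turbidity(Tu[ical], SSTu[ical], Tu[-ical])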
#
#Ahora#OUT#
n=1#
numsim=5000#
while(n<=numsim){#
###AquI#empieza#el#AnAlisis#con#la#primera#COL#de#datos#de#T#y#SST#lo#MEDIDO#en#CAMPO#
###organizamos#de#menor#a#mayor#por#la#columna#Turbiedad#
##ical=sort(sample(c(1:dim(MMtmgout)[1]),round(dim(MMtmgout)[1]*2/3)))#
##Tucal=MMtmgout[ical,n]#
##Tuval=MMtmgout[%ical,n]#estos#son#como#mis#datos#reales#
##Tsscal=MSSToutm[ical,n]#
##Tssval=MSSToutm[%ical,n]#
###guardo#los#datos#utilizados#
##if(n==1){TucalT=Tucal#
###########TuvalT=Tuval#
###########TsscalT=Tsscal#
###########TssvalT=Tssval#
##}else{TucalT=cbind(TucalT,Tucal)#
########TuvalT=cbind(TuvalT,Tuval)#
########TsscalT=cbind(TsscalT,Tsscal)#
########TssvalT=cbind(TssvalT,Tssval)}#
###
##SSTTu=cbind(Tucal,Tsscal)##Turbiedad,#SST#
##SSTTuorg=SSTTu[order(SSTTu[,1]),]##organizan#de#acuerdo#a#Turbiedad#de#mayor#a#menor#
##Tu=SSTTuorg[,1]#
##STTu=SSTTuorg[,2]#
###con#datos#de#validaciOn#
##realT=Tuval#
##k=1#
##Tssr=0#
##while(k<=length(realT)){#entra#a#cada#columna#de#Tu#y#SSTu#de#las#5000##
####if(realT[k]<max(Tu)&realT[k]>min(Tu)){#caso#que#Turb#real#estE#entre#los#valores#medidos#
######Tssr=c(Tssr,approx(Tu,STTu,realT[k],ties="ordered",method#=#"linear")$y)#
####}#
####else#if(realT[k]>max(Tu)){#caso#que#Turb#real#sea#mayor#que#lo#medido#
######p=(STTu[length(Tu)]%STTu[length(Tu)%1])/(Tu[length(Tu)]%Tu[length(Tu)%1])#Triangulos#
semejantes==pendiente#
######Tssr=c(Tssr,STTu[length(Tu)%1]+p*(realT[k]%Tu[length(Tu)%1]))#
####}#
####else{##if(realT[k]<min(Tu))#caso#que#Turb#real#sea#menor#que#lo#medido#
######Tsall=STTu[1:14]#
######nad=which(is.na(STTu[1:14])==TRUE)#
######if(length(nad)>0){#
########Tsall=Tsall[%which(is.na(STTu[1:14])==TRUE)]}#

#
######else{Tsall=Tsall}#
######Ts=sample(Tsall,2)#escojemos#aletoria/#un#STT#de#este#rango#medido#
######Ts2=Ts[1]#
######Ts1=Ts[2]#
######T2=Tu[which(Tsall==Ts2)]#correspondiente#Turbiedad#
######T1=Tu[which(Tsall==Ts1)]#
######p=(Ts2%Ts1)/(T2%T1)#
######Tsr=Ts1%p*(T1%realT[k])#
######if#(Tsr>max(Tsall)){#
########Tssr=c(Tssr,realT[k])}#
######else#if(Tsr<0){Tssr=c(Tssr,0)}#
######else{Tssr=c(Tssr,Tsr)}#
####}###
####k=k+1###
##}#
##Tssr=Tssr[%1]#
###Guardando#resultados#de#validaciOn#
##if(n==1){TsspvT=Tssr}else{TsspvT=cbind(TsspvT,Tssr)}#
###
###con#datos#de#calibraciOn#
##realT=Tucal#
##k=1#
##Tssr=0#
##while(k<=length(realT)){#entra#a#cada#columna#de#Tu#y#SSTu#de#las#5000##
####if(realT[k]<max(Tu)&realT[k]>min(Tu)){#caso#que#Turb#real#estE#entre#los#valores#medidos#
######Tssr=c(Tssr,approx(Tu,STTu,realT[k],ties="ordered",method#=#"linear")$y)#
####}#
####else#if(realT[k]>max(Tu)){#caso#que#Turb#real#sea#mayor#que#lo#medido#
######p=(STTu[length(Tu)]%STTu[length(Tu)%1])/(Tu[length(Tu)]%Tu[length(Tu)%1])#Triangulos#
semejantes==pendiente#
######Tssr=c(Tssr,STTu[length(Tu)%1]+p*(realT[k]%Tu[length(Tu)%1]))#
####}#
####else{##if(realT[k]<min(Tu))#caso#que#Turb#real#sea#menor#que#lo#medido#
######Tsall=STTu[1:14]#
######nad=which(is.na(STTu[1:14])==TRUE)#
######if(length(nad)>0){#
########Tsall=Tsall[%which(is.na(STTu[1:14])==TRUE)]}#
######else{Tsall=Tsall}#
######Ts=sample(Tsall,2)#escojemos#aletoria/#un#STT#de#este#rango#medido#
######Ts2=Ts[1]#
######Ts1=Ts[2]#
######T2=Tu[which(Tsall==Ts2)]#correspondiente#Turbiedad#
######T1=Tu[which(Tsall==Ts1)]#
######p=(Ts2%Ts1)/(T2%T1)#
######Tsr=Ts1%p*(T1%realT[k])#
######if#(Tsr>max(Tsall)){#
########Tssr=c(Tssr,realT[k])}#
######else#if(Tsr<0){Tssr=c(Tssr,0)}#
######else{Tssr=c(Tssr,Tsr)}#
####}###
####k=k+1###
##}#
##Tssr=Tssr[%1]#
###Guardando#resultados#de#calibraciOn#
##if(n==1){TsspcT=Tssr}else{TsspcT=cbind(TsspcT,Tssr)}#

#
###
##n=n+1#########
}#
#
write.csv(TsscalT,"TSS%cal%all%OUT.csv")#TSS#cal#Total#
write.csv(TssvalT,"TSS%val%all%OUT.csv")#
write.csv(TucalT,"Turb%cal%all%OUT.csv")#
write.csv(TuvalT,"Turb%val%all%OUT.csv")#
write.csv(TsspcT,"TSSp%cal%all%OUT.csv")#TSS#predicted#cal#Total#
write.csv(TsspvT,"TSSp%val%all%OUT.csv")#
#
#
#MEdidas#de#ajuste#con#los#resultados#de#IN+OUT#
#Matriz#TSS#calibraciOn#TOTAL#
TsscalTin=read.csv("TSS%cal%all%IN.csv",header=TRUE)#
dim(TsscalTin)#
TsscalTin=TsscalTin[,%1]#
TsscalTout=read.csv("TSS%cal%all%OUT.csv",header=TRUE)#
dim(TsscalTout)#
TsscalTout=TsscalTout[,%1]#
TsscalT=rbind(TsscalTin,TsscalTout)#
dim(TsscalT)#
class(TsscalT)#
TsscalT=as.matrix(TsscalT)#
#
#Matriz#TSS#predicted#TOTAL#
TsspcTin=read.csv("TSSp%cal%all%IN.csv",header=TRUE)#
dim(TsspcTin)#
TsspcTin=TsspcTin[,%1]#
TsspcTout=read.csv("TSSp%cal%all%OUT.csv",header=TRUE)#
dim(TsspcTout)#
TsspcTout=TsspcTout[,%1]#
TsspcT=rbind(TsspcTin,TsspcTout)#
dim(TsspcT)#
class(TsspcT)#
TsspcT=as.matrix(TsspcT)#
#
#Matriz#TSS#validaciOn#TOTAL#
TssvalTin=read.csv("TSS%val%all%IN.csv",header=TRUE)#
dim(TssvalTin)#
TssvalTin=TssvalTin[,%1]#
TssvalTout=read.csv("TSS%val%all%OUT.csv",header=TRUE)#
dim(TssvalTout)#
TssvalTout=TssvalTout[,%1]#
TssvalT=rbind(TssvalTin,TssvalTout)#
dim(TssvalT)#
class(TssvalT)#
TssvalT=as.matrix(TssvalT)#
#Matriz#TSS#predicted#TOTAL#
TsspvTin=read.csv("TSSp%val%all%IN.csv",header=TRUE)#
dim(TsspvTin)#
TsspvTin=TsspvTin[,%1]#
TsspvTout=read.csv("TSSp%val%all%OUT.csv",header=TRUE)#
dim(TsspvTout)#
TsspvTout=TsspvTout[,%1]#

#
TsspvT=rbind(TsspvTin,TsspvTout)#
dim(TsspvT)#
class(TsspvT)#
TsspvT=as.matrix(TsspvT)#
#
tssc=matrix(TsscalT,#ncol#=#1)#
class(tssc)#
length(which(is.na(tssc)==TRUE))#
length(tssc)#
tsspc=matrix(TsspcT,#ncol#=#1)#
length(tsspc)#
length(which(is.na(tsspc)==TRUE))#
#
rc=round(cor(matrix(TsscalT, ncol = 1),matrix(TsspcT, ncol = 1),
             use="na.or.complete"),2)
rv=round(cor(matrix(TssvalT, ncol = 1),matrix(TsspvT, ncol = 1),
             use="na.or.complete"),2)
RMSEc=round((sum((matrix(TsscalT, ncol = 1)-matrix(TsspcT, ncol = 1))^2,
             na.rm=T)/length(matrix(TsscalT, ncol = 1)))^0.5)
RMSEv=round((sum((matrix(TssvalT, ncol = 1)-matrix(TsspvT, ncol = 1))^2,
             na.rm=T)/length(matrix(TssvalT, ncol = 1)))^0.5)
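# Illustrative sketch (an assumption): a reusable equivalent of the fit statistics
# above, i.e. Pearson r and the standard root-mean-square error between observed
# and predicted TSS, pooled over all simulations of a matrix.
fit_stats <- function(obs, pred) {
  obs <- as.vector(obs); pred <- as.vector(pred)
  r    <- cor(obs, pred, use = "na.or.complete")
  rmse <- sqrt(mean((obs - pred)^2, na.rm = TRUE))
  c(r = round(r, 2), RMSE = round(rmse, 1))
}
# e.g. fit_stats(TsscalT, TsspcT); fit_stats(TssvalT, TsspvT)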
#
#Turbiedad#
Tucalin=read.csv(as.matrix("Turb%cal%all%IN.csv",#header=TRUE))#
Tuvalin=read.csv(as.matrix("Turb%val%all%IN.csv",#header=TRUE))#
dim(Tucalin)#
dim(Tuvalin)#
Tucalin=Tucalin[,%1]#
Tuvalin=Tuvalin[,%1]#
#
#
Tucalout=read.csv(as.matrix("Turb%cal%all%OUT.csv",#header=TRUE))#
Tuvalout=read.csv(as.matrix("Turb%val%all%OUT.csv",#header=TRUE))#
dim(Tucalout)#
dim(Tuvalout)#
Tucalout=Tucalout[,%1]#
Tuvalout=Tuvalout[,%1]#
#
#turbiedad#validaciOn#
TuvalT=rbind(Tuvalin,Tuvalout)#
dim(TuvalT)#
#turbiedad#calibraciOn#
TucalT=rbind(Tucalin,Tucalout)#
dim(TucalT)#
#
#
#Figura#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename=paste0("In&out%fig_cal%Turbidity%vs%TSS%#",numsim,".tif"),#width#=#wth,##
#####height#=#wth,#compression#=#"lzw",#pointsize#=#10,#bg#=#"white",#res#=#300)#
par(mar=c(5,5,2,2))#
plot(TucalT[,1],TsspcT[,1],xlab="TSS#(mg/L)",ylab="TSSp#(mg/L)",#
#####main="calibration#(blue)#%#validation#(red)",col="blue",#
#####pch=20,cex=.1,xlim=c(0,120),#ceiling(max(TucalT,TuvalT,TsspcT,TsspvT,na.rm=TRUE))+10#

#
#####ylim=c(0,120))#ceiling(max(TucalT,TuvalT,TsspcT,TsspvT,na.rm=TRUE))+10#
#puntos#de#calibraciOn#
i=2#
while(i<=numsim){#
##points(TucalT[,i],TsspcT[,i],pch=20,cex=.1,#col="blue")#
##i=i+1#
}#
lines(c(min(TucalT,TuvalT,TsspcT,TsspvT,na.rm=TRUE),#
########max(TucalT,TuvalT,TsspvT,na.rm=TRUE)),#
######c(min(TucalT,TuvalT,TsspvT,na.rm=TRUE),#
########max(TucalT,TuvalT,TsspvT,na.rm=TRUE)),lty=2)#
#puntos#de#validaciOn#
i=1#
while(i<=numsim){#
##points(TuvalT[,i],TsspvT[,i],pch=3,cex=.1,#col="red")#
##i=i+1#
}#
#
#puntos#de#calibraciOn#
i=2#
while(i<=numsim){#
##points(TucalT[,i],TsspcT[,i],pch=20,cex=.1,#col="blue")#
##i=i+1#
}#
#
legend("topleft",c(paste("cal:#r=",round(mean(rc,na.rm=TRUE),2),";#
RMSE=",round(mean(RMSEc,na.rm=TRUE),2),"#mg/L"),#
######paste("val:#r=",round(mean(rv,na.rm=TRUE),2),";#RMSE=#",round(mean(RMSEv,na.rm=TRUE),2),"#
mg/L")),#
######pch=c(20,8),col=c("blue","red"),bty="n")#
#
#
dev.off()#
#
wth=3*580#
hth=wth#
hth1=wth/2^(.5)#
#
tiff(filename="STT%Turb%MAY%15.tif",##width#=#wth,#height#=#hth,#compression#=#"lzw",#pointsize#=#8,#bg#
=#"transparent",#res#=#400)#
par(mar=c(4.5,4.5,1,2.5))#
plot(Tsssup,#type="l",col="blue",ylim=c(min(Tssinf,Tssmed,Tsssup),#
####max(Tssinf,Tssmed,Tsssup)),#main="Tss#from#Turbidity",xlab="Time",#ylab="TSS#mg/l")#
lines(Tssinf,#col="blue")#
lines(Tssmed,#col="gray")#
legend("topleft",c("Tss%sup","Tss%med","Tss%min"),#
#######col=c("blue","gray","blue"),lty=c(1,1,1),bty="n")#
dev.off()#
#
########
####Con#todos#los#datos#sin#separar#por#IN#and#out#
#turbiedad#validaciOn#
TuvalT=read.csv("Turb%val%all.csv")#
dim(TuvalT)#
TuvalT=TuvalT[,%1]#

#
#turbiedad#calibraciOn#
TucalT=read.csv("Turb%cal%all.csv")#
dim(TucalT)#
TucalT=TucalT[,%1]#
#
TsscalT=as.matrix(read.csv("TSS%cal%all.csv"))#
dim(TsscalT)#
TsscalT=TsscalT[,%1]#
TssvalT=as.matrix(read.csv("TSS%val%all.csv"))#
dim(TssvalT)#
TssvalT=TssvalT[,%1]#
TsspcT=as.matrix(read.csv("TSSp%cal%all.csv"))#
dim(TsspcT)#
TsspcT=TsspcT[,%1]#
TsspvT=as.matrix(read.csv("TSSp%val%all.csv"))#
dim(TsspvT)#
TsspvT=TsspvT[,%1]#
#
rc=round(cor(matrix(TsscalT, ncol = 1),matrix(TsspcT, ncol = 1),
             use="na.or.complete"),2)
rv=round(cor(matrix(TssvalT, ncol = 1),matrix(TsspvT, ncol = 1),
             use="na.or.complete"),2)
RMSEc=round((sum((matrix(TsscalT, ncol = 1)-matrix(TsspcT, ncol = 1))^2,
             na.rm=T)/length(matrix(TsscalT, ncol = 1)))^0.5)
RMSEv=round((sum((matrix(TssvalT, ncol = 1)-matrix(TsspvT, ncol = 1))^2,
             na.rm=T)/length(matrix(TssvalT, ncol = 1)))^0.5)
#
#
numsim=5000#
#Figura#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename=paste0("All%fig_cal%Turbidity%vs%TSS%#",numsim,".tif"),#width#=#wth,##
#####height#=#wth,#compression#=#"lzw",#pointsize#=#10,#bg#=#"white",#res#=#300)#
par(mar=c(5,5,2,2))#
plot(TsscalT[,1],TsspcT[,1],xlab="TSS#(mg/L)",ylab="TSSp#(mg/L)",#
#####main="calibration#(blue)#%#validation#(red)",col="blue",#
#####pch=20,cex=.1,xlim=c(0,120),#ceiling(max(TucalT,TuvalT,TsspcT,TsspvT,na.rm=TRUE))+10#
#####ylim=c(0,120))#ceiling(max(TucalT,TuvalT,TsspcT,TsspvT,na.rm=TRUE))+10#
#puntos#de#calibraciOn#
i=2#
while(i<=numsim){#
##points(TsscalT[,i],TsspcT[,i],pch=20,cex=.1,#col="blue")#
##i=i+1#
}#
lines(c(min(TsscalT,TssvalT,TsspcT,TsspvT,na.rm=TRUE),#
########max(TsscalT,TssvalT,TsspvT,na.rm=TRUE)),#
######c(min(TsscalT,TssvalT,TsspvT,na.rm=TRUE),#
########max(TsscalT,TssvalT,TsspvT,na.rm=TRUE)),lty=2)#
#puntos#de#validaciOn#
i=1#
while(i<=numsim){#
##points(TssvalT[,i],TsspvT[,i],pch=3,cex=.1,#col="red")#
##i=i+1#
}#

#
#
#puntos#de#calibraciOn#
i=2#
while(i<=numsim){#
##points(TsscalT[,i],TsspcT[,i],pch=20,cex=.1,#col="blue")#
##i=i+1#
}#
#
legend("topleft",c(paste("cal:#r=",round(mean(rc,na.rm=TRUE),2),";#
RMSE=",round(mean(RMSEc,na.rm=TRUE),2),"#mg/L"),#
###################paste("val:#r=",round(mean(rv,na.rm=TRUE),2),";#RMSE=#
",round(mean(RMSEv,na.rm=TRUE),2),"#mg/L")),#
#######pch=c(20,8),col=c("blue","red"),bty="n")#
#
#
dev.off()#
#
#
#

#
APPENDIX C-10-1
WATER USES LIMITS EMPLOYING SPECTROMETER WAVELENGTHS
#
##################################################
############## PLOT: ABS vs WAVELENGTH ##############
##################################################
#
rm(list=ls(all=TRUE))
# absorbances per month, inlet and/or outlet
dM=read.table("Out-AGO-14-scan+level.txt",header=T)
attach(dM)
#
head(which(is.na(dM)))#
##interpolaciOn#info#faltante##
par(mar=c(5,5,2,2))#
plot(dM[,200],type="l")#
#plot(dist_M1,#type="l")#
i=4#col#L200#
xx=seq(1:dim(dM)[1])#
while(i<=dim(dM)[2]){#
##dM[,i]=approx(xx,dM[,i],xout=xx)$y##c?mo#hacerlo#para#todas#las#col#
##i=i+1#
}#
lines(dM[,200],#col="red")#
#
#solo#las#long#de#onda#
dM2=as.matrix(dM[,4:217])#
####
##Absorbancias#de#calibraciOn#que#corresponden#a#posibles#uso#
#Longitudes#de#ondas#definitos#para#los#usos#A#y#B#usos#tipo#A##
Aall=as.matrix(read.csv("A%usos%new.csv",header=TRUE))#
##Absorbancias#de#calibraciOn#que#corresponden#a#posibles#uso#B#
Ball=as.matrix(read.csv("B%usos%new.csv",header=TRUE))#
lmu=1#
lmub=2#
#
wth=3*580#
hth=wth/2^(.5)#
mes="AGO"#
anho=14#
#
#
##################################################
###AGUA#SALIDA#
##Figura#TODOS#usos#con#warnning#cuando#no#cumple#algUn#USO#
tiff(filename=paste0("OBS%Out_",mes,anho,"usosA&B.tif"),#width#=#wth,#height#=#hth,##
#####compression#=#"lzw",#pointsize#=#10,#bg#=#"transparent",#res#=#300)#
par(mar=c(5,5,2,2))#
par(mar#=#rep(2,#4))#
plot(Aall[lmu,],type="l",#ylim=c(0,#max(Aall[lmu,],Ball[lmub,])+10),main=paste0(mes,anho,"%
Event#",l),xlab=substitute(lambda),ylab="Out#%#Abs#1/m",#lwd=2)#
lines(Ball[lmub,],col="black",lwd=2,#lty=2)#

j=1#
while(j<=dim(dM2)[1]){##
##upr=length(which(dM2[j,]%Aall[lmu,]<0))/length(Aall[lmu,])#LIMITE#ABS#USO#A#
##up2r=length(which((dM2[j,]%Ball[lmub,])<0))/length(Ball[lmub,])#LIMITE#ABS#USO#b###
##if(upr<0.95){lines(dM2[j,],col="blue",lwd=1.5)}#
##if(up2r<0.95){lines(dM2[j,],col="cyan",lwd=1.5)}##absorbancia#min#j#que#NO#cumple#
##if(upr>=0.95#|up2r>=0.95){lines(dM2[j,],col="gray",lwd=2)}#absorbancia#min#j#que#cumple###
##j=j+1#
}#
lines(Ball[lmub,],col="black",lwd=2,#lty=2)#
lines(Aall[lmu,],col="black",lwd=2,lty=1)#
legend("topright",#legend=c("#Limit#Water#uses:#A","#Limit#Water#uses:#B","#Month#absorbances","#
Warnning#use#A#obs",#"#Warnning#use#B#obs"),##
#######col=c("black","black","grey",#"blue","cyan"),lwd=c(1.5,1.5,4,1,1),#
#######lty=c(1,2,1,1,1),cex=1)#
dev.off()###
#
#
#CuAndo#cumple#la#calidad??#
##################
#Para#usos#A%TSS+COD#AGUA#DE#SALIDA#
a=1#
up1=50#porcentaje#que#cumple#del#uso#
while(a<=length(dM[,1])){#
up1=c(up1,(length(which(dM2[a,]%Aall[lmu,]<0))/length(Aall[lmu,])))#
a=1+a#
}#
up1=up1[%1]#
#write.table(up1,"P%usosA%TSS+COD%Ago%14.txt",row.names=FALSE)#
#
nm=length(which(up1>=0.95&up1==1))#numero#de#minutos#que#cumplen#para#usos#A#Tss+COD#
mt=length(dM2[,1])##minutos#totales#
dt=length(dM2[,1])/1440##dIas#totales#
ru=nm/mt##realaciOn#%#que#cumple#
write.table(cbind(nm,mt,dt,ru),paste0(mes,anho,"%datos%usosA.txt"),col.names=T)#
#
#while#para#identificar#min#que#NO#cumplen#
aa=1#
while(aa<=length(which(up1<0.95))){#
##if(aa==1){dtnm=dM2[which(up1<0.95)[aa],1:2]#
############aa=aa+1}#
##else{dtnm=rbind(dtnm,dM2[which(up1<0.95)[aa],1:2])#
##aa=aa+1}#
}#
View(dtnm)#
write.table(dtnm,paste0(mes,anho,"%datos_NO_Uso%A.txt"),row.names=FALSE)#
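# Illustrative sketch (an assumption): a vectorized version of the compliance check
# above. For each 1-min absorbance spectrum, compute the fraction of wavelengths
# lying below the limit spectrum of a water-use group and flag the record as
# compliant when that fraction reaches at least 95 %.
use_compliance <- function(abs_matrix, limit_spectrum, threshold = 0.95) {
  frac_below <- apply(abs_matrix, 1, function(a)
    sum(a - limit_spectrum < 0, na.rm = TRUE) / length(limit_spectrum))
  list(fraction = frac_below, complies = frac_below >= threshold)
}
# e.g. compA <- use_compliance(dM2, Aall[lmu, ]); sum(compA$complies)  # minutes meeting use A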
#
#############Para#usos#B#TSS+COD%#AGUA#DE#SALIDA#
a=1#
up2=50#porcentaje#que#cumple#del#uso#
while(a<=length(dM2[,1])){#
##up2=c(up2,(length(which(dM2[a,]%Ball[lmub,]<0))/length(Ball[lmub,])))#
##a=1+a#
}#
up2=up2[%1]#

write.table(up2,"P%usos%B%TSS+COD%Ago%14.txt",row.names=FALSE)#
nm2=length(which(up2>=0.95&up2<=1))#numero#de#minutos#que#cumplen#para#usos#7%8##
mt2=length(dM2[,1])##minutos#totales#
dt2=length(dM2[,1])/1440##dIas#totales#
ru2=nm2/mt2#
write.table(cbind(nm2,mt2,dt2,ru2),paste0(mes,anho,"%datos%usos%B.txt"),col.names=T)#
#which(up1<1)#indices#minutos#que#no#cumplen#
#while#para#identificar#min#que#no#cumple#
aa=1#
while(aa<=length(which(up2<0.95))){#
##if(aa==1){dtnm2=dM2[which(up2<0.95)[aa],1:2]#
############aa=aa+1}#
##else{dtnm2=rbind(dtnm2,dM2[which(up2<0.95)[aa],1:2])#
#######aa=aa+1}#
}#
View(dtnm2)#
write.table(dtnm2,paste0(mes,anho,"datos_NO_B.txt"),row.names=FALSE)#
#
#

APPENDIX C-11-1
RULES FOR FIRST FLUSH IDENTIFICATION
#
######
# PROGRAM TO DETERMINE THE RELATION BETWEEN b (FF) AND THE CHARACTERISTICS
# OF THE CONSTRUCTED-WETLAND SYSTEM
rm(list=ls(all=TRUE))#
#
# Input columns: Hmax (cm), Dhmax (cm), ADWP (hour), P (mm), Imax (mm/h), D (min),
# ADWP-1 (hour), P-1 (mm), Imax-1 (mm/h), D-1 (min)
datos=read.table("datos%input.txt",header=T)#
attach(datos)#
names(datos)#
dim(datos)#
#datos#a#y#b#First#Flush#
#Agosto#2014#
ago_1=read.csv("Mab%AGO%1.csv",header=T)#
ago_2=read.csv("Mab%AGO%2.csv",header=T)#
#SEpt#2014#
sep_1=read.csv("Mab%SEP%1.csv",header=T)#
sep_2=read.csv("Mab%SEP%2.csv",header=T)#
sep_4=read.csv("Mab%SEP%4.csv",header=T)#
sep_5=read.csv("Mab%SEP%5.csv",header=T)#
sep_6=read.csv("Mab%SEP%6.csv",header=T)#
sep_7=read.csv("Mab%SEP%7.csv",header=T)#
sep_8=read.csv("Mab%SEP%8.csv",header=T)#
sep_9=read.csv("Mab%SEP%9.csv",header=T)#
sep_10=read.csv("Mab%SEP%10.csv",header=T)#
sep_12=read.csv("Mab%SEP%12.csv",header=T)#
sep_13=read.csv("Mab%SEP%13.csv",header=T)#
#Oct#2014#
oct_2=read.csv("Mab%OCT%2.csv",header=T)#
oct_3=read.csv("Mab%OCT%3.csv",header=T)#
oct_4=read.csv("Mab%OCT%4.csv",header=T)#
oct_5=read.csv("Mab%OCT%5.csv",header=T)#
oct_6=read.csv("Mab%OCT%6.csv",header=T)#
oct_7=read.csv("Mab%OCT%7.csv",header=T)#
oct_8=read.csv("Mab%OCT%8.csv",header=T)#
oct_9=read.csv("Mab%OCT%9.csv",header=T)#
oct_10=read.csv("Mab%OCT%10.csv",header=T)#
oct_11=read.csv("Mab%OCT%11.csv",header=T)#
#ENE#2015#
ene_5=read.csv("Mab%ENE%5.csv",header=T)#
ene_6=read.csv("Mab%ENE%6.csv",header=T)#
ene_7=read.csv("Mab%ENE%7.csv",header=T)#
ene_8=read.csv("Mab%ENE%8.csv",header=T)#
#FEB#2015#
#feb_1=read.csv("Mab%FEB%1.csv",header=T)#
feb_2=read.csv("Mab%FEB%2.csv",header=T)#
feb_3=read.csv("Mab%FEB%3.csv",header=T)#
feb_4=read.csv("Mab%FEB%4.csv",header=T)#
feb_5=read.csv("Mab%FEB%5.csv",header=T)#

#MAR#2015#
mar_1=read.csv("Mab%MAR%1.csv",header=T)#
mar_2=read.csv("Mab%MAR%2.csv",header=T)#
mar_3=read.csv("Mab%MAR%3.csv",header=T)#
mar_4=read.csv("Mab%MAR%4.csv",header=T)#
mar_5=read.csv("Mab%MAR%5.csv",header=T)#
mar_6=read.csv("Mab%MAR%6.csv",header=T)#
mar_7=read.csv("Mab%MAR%7.csv",header=T)#
mar_8=read.csv("Mab%MAR%8.csv",header=T)#
mar_9=read.csv("Mab%MAR%9.csv",header=T)#
mar_11=read.csv("Mab%MAR%11.csv",header=T)#
mar_12=read.csv("Mab%MAR%12.csv",header=T)#
#ABR#2015#
abr_1=read.csv("Mab%ABR%1.csv",header=T)#
abr_2=read.csv("Mab%ABR%2.csv",header=T)#
abr_3=read.csv("Mab%ABR%3.csv",header=T)#
abr_4=read.csv("Mab%ABR%4.csv",header=T)#
abr_5=read.csv("Mab%ABR%5.csv",header=T)#
abr_6=read.csv("Mab%ABR%6.csv",header=T)#
abr_7=read.csv("Mab%ABR%7.csv",header=T)#
abr_8=read.csv("Mab%ABR%8.csv",header=T)#
abr_9=read.csv("Mab%ABR%9.csv",header=T)#
abr_10=read.csv("Mab%ABR%10.csv",header=T)#
abr_11=read.csv("Mab%ABR%11.csv",header=T)#
abr_12=read.csv("Mab%ABR%12.csv",header=T)#
#MAy#2015#
#may_1=read.csv("Mab%MAY%1.csv",header=T)#
may_2=read.csv("Mab%MAY%2.csv",header=T)#
may_3=read.csv("Mab%MAY%3.csv",header=T)#
may_4=read.csv("Mab%MAY%4.csv",header=T)#
#may_5=read.csv("Mab%MAY%5.csv",header=T)#
may_6=read.csv("Mab%MAY%6.csv",header=T)#
#may_7=read.csv("Mab%MAY%7.csv",header=T)#
#may_8=read.csv("Mab%MAY%8.csv",header=T)#
#may_9=read.csv("Mab%MAY%9.csv",header=T)#
#Junio#2015#
jun_5=read.csv("Mab%JUN%5.csv",header=T)#
jun_9=read.csv("Mab%JUN%9.csv",header=T)#
jun_10=read.csv("Mab%JUN%10.csv",header=T)#
jun_11=read.csv("Mab%JUN%11.csv",header=T)#
jun_12=read.csv("Mab%JUN%12.csv",header=T)#
jun_13=read.csv("Mab%JUN%13.csv",header=T)#
jun_14=read.csv("Mab%JUN%14.csv",header=T)#
jun_15=read.csv("Mab%JUN%15.csv",header=T)#
jun_16=read.csv("Mab%JUN%16.csv",header=T)#
#
datos=read.table("datos%input.txt",header=T)#
dataf=datos[%61:%59,]#
dataf=dataf[%57,]#
dataf=dataf[%53,]#
dataf=dataf[%48,]#
dataf=dataf[%28,]#
attach(dataf)#
rm(datos)#
#

#AnAlisis#de#VARianzas#
i=1#
while(i<=dim(may_6)[1]){##
#para#el#primer#valor#de#a#y#b#
a=c(ago_1[i,1],ago_2[i,1],#
####sep_1[i,1],sep_2[i,1],sep_4[i,1],sep_5[i,1],sep_6[i,1],sep_7[i,1],#
####sep_8[i,1],sep_9[i,1],sep_10[i,1],sep_12[i,1],sep_13[i,1],#
####oct_2[i,1],oct_3[i,1],oct_4[i,1],oct_5[i,1],oct_6[i,1],oct_7[i,1],#
####oct_8[i,1],oct_9[i,1],oct_10[i,1],oct_11[i,1],#
####ene_5[i,1],ene_6[i,1],ene_7[i,1],ene_8[i,1],#
####feb_2[i,1],feb_3[i,1],feb_4[i,1],feb_5[i,1],#
####mar_1[i,1],mar_2[i,1],mar_3[i,1],mar_4[i,1],mar_5[i,1],mar_6[i,1],#
####mar_7[i,1],mar_8[i,1],mar_9[i,1],mar_11[i,1],mar_12[i,1],#
####abr_1[i,1],abr_2[i,1],abr_3[i,1],#
####abr_7[i,1],abr_9[i,1],abr_10[i,1],abr_11[i,1],abr_12[i,1],#
####may_2[i,1],may_3[i,1],may_4[i,1],may_6[i,1],#
####jun_5[i,1],jun_9[i,1],jun_10[i,1],jun_11[i,1],jun_12[i,1],#
####jun_13[i,1],jun_14[i,1],jun_15[i,1],jun_16[i,1])#
b=c(ago_1[i,2],ago_2[i,2],#
####sep_1[i,2],sep_2[i,2],sep_4[i,2],sep_5[i,2],sep_6[i,2],sep_7[i,2],#
####sep_8[i,2],sep_9[i,2],sep_10[i,2],sep_12[i,2],sep_13[i,2],#
####oct_2[i,2],oct_3[i,2],oct_4[i,2],oct_5[i,2],oct_6[i,2],oct_7[i,2],#
####oct_8[i,2],oct_9[i,2],oct_10[i,2],oct_11[i,2],#
####ene_5[i,2],ene_6[i,2],ene_7[i,2],ene_8[i,2],#
####feb_2[i,2],feb_3[i,2],feb_4[i,2],feb_5[i,2],#
####mar_1[i,2],mar_2[i,2],mar_3[i,2],mar_4[i,2],mar_5[i,2],mar_6[i,2],#
####mar_7[i,2],mar_8[i,2],mar_9[i,2],mar_11[i,2],mar_12[i,2],#
####abr_1[i,2],abr_2[i,2],abr_3[i,2],#
####abr_7[i,2],abr_9[i,2],abr_10[i,2],abr_11[i,2],abr_12[i,2],#
####may_2[i,2],may_3[i,2],may_4[i,2],may_6[i,2],#
####jun_5[i,2],jun_9[i,2],jun_10[i,2],jun_11[i,2],jun_12[i,2],#
####jun_13[i,2],jun_14[i,2],jun_15[i,2],jun_16[i,2])#
#
# if any variable fails NORMALITY or homogeneity of variances => Kruskal-Wallis, otherwise ANOVA
#anova=anova(lm(ST~LUGAR+EVENTO+SUPERFICIE))#
#kruskal_a=anova(lm(rank(a)~Hmax+Hmean+Dhmax+ADWP+Imax+Imean+Dvert+Dplu+Hmaxb+Hm
eanb+Dhmaxb+ADWPb+Imaxb+Imeanb+Dvertb+Dplub))#
kruskal_a=anova(lm(rank(a)~Hmax+Hmean+Dhmax+ADWP+Imax+Imean+Dvert+Dplu))#
#se#puede#utilizar#kruskal.test#pero#uno#a#uno#
#
#print(kruskal_b)#
if(i==1){pva=kruskal_a$Pr}#
else{pva=rbind(pva,kruskal_a$Pr)}#
#
#kruskal_b=anova(lm(rank(b)~Hmax+Hmean+Dhmax+ADWP+Imax+Imean+Dvert+Dplu+Hmaxb+Hm
eanb+Dhmaxb+ADWPb+Imaxb+Imeanb+Dvertb+Dplub))#
kruskal_b=anova(lm(rank(b)~Hmax+Hmean+Dhmax+ADWP+Imax+Imean+Dvert+Dplu))#
if(i==1){pvb=kruskal_b$Pr}#
else{pvb=rbind(pvb,kruskal_b$Pr)}#
#
i=i+1#
}#
#dim#pvb=>#pvalue#de#las#variables#respecto#a#b#matriz#de#5000x15#
write.csv(pvb,"pvalue_b_v2.csv")#
write.csv(pva,"pvalue_a_v2.csv")#

#
#PRobabilidad#de#significancia#de#los#5000#cuantos#veces#pvalue<0.05#
j=1#
while(j<=dim(pva)[2]){#
##if(j==1){pro_a=length(which(pva[,j]<=0.05))/5000#
##pro_b=length(which(pvb[,j]<=0.05))/5000}#
##else{pro_a=rbind(pro_a,length(which(pva[,j]<=0.05))/5000)#
####pro_b=rbind(pro_b,length(which(pvb[,j]<=0.05))/5000)############
##}#
j=j+1#
}#
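# Illustrative sketch (an assumption): a vectorized equivalent of the loop above,
# giving the share of the 5000 rank-based ANOVA runs in which each explanatory
# variable gets a p-value <= 0.05.
signif_prob <- function(pmat, alpha = 0.05, nsim = 5000) {
  apply(pmat, 2, function(p) length(which(p <= alpha)) / nsim)
}
# e.g. cbind(prob_a = signif_prob(pva), prob_b = signif_prob(pvb))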
var=names(dataf)[4:21]#
var=var[%14]#
var=var[%5]#
var=c(var,"residuals")#
#
write.csv(cbind(var,pro_b,pro_a),"prob_b&a_pvalue_v2.csv")#
#
#
#PARa#b:#REsultados#primera#ejecuciOn#sin#Junio2015#
#Hmax=#0.2996#%#0.2696#
#Hmean=#0.2310#%#0.3382#
#ADWP=#0.0000#%#0.0046#
#Imax=0.0004#%#0#
#Imean=0.0000#%#0.1046#
#Dvert=0.1720#%#0.010#
#Dplu=0.0058#%#0.0000#
#Hmaxb=0.0008#%#0.0146#
#Hmeanb=0.0000#%#0#
#ADWPb=0.0000#%#0.0018#
#Imaxb=0.0000#%#0.2216#
#Imeanb=0.0680#%#0.0380#
#Dvertb=0.0002#%#0#
#Dplub=0.0236#%#0.3548#
#residuales=0.0000#%#0#
#
#
#si#influye#significativamente#se#calcula#Indice#de#variabilidad#explicada#
##eta_cuadrado=round(kruskal$'Sum#Sq'[%7]/sum(kruskal$'Sum#Sq'[%7])*100,0)#
##barplot(eta_cuadrado)#
#
######
# Test with the numeric variables ~ categorical FF variable
# ALL the b values of all events gathered in a single matrix (63 x 5000)
bT=rbind(ago_1[,2],ago_2[,2],#
#########sep_1[,2],sep_2[,2],sep_4[,2],sep_5[,2],sep_6[,2],sep_7[,2],#
#########sep_8[,2],sep_9[,2],sep_10[,2],sep_12[,2],sep_13[,2],#
#########oct_2[,2],oct_3[,2],oct_4[,2],oct_5[,2],oct_6[,2],oct_7[,2],#
#########oct_8[,2],oct_9[,2],oct_10[,2],oct_11[,2],#
#########ene_5[,2],ene_6[,2],ene_7[,2],ene_8[,2],#
#########feb_2[,2],feb_3[,2],feb_4[,2],feb_5[,2],#
#########mar_1[,2],mar_2[,2],mar_3[,2],mar_4[,2],mar_5[,2],mar_6[,2],#
#########mar_7[,2],mar_8[,2],mar_9[,2],mar_11[,2],mar_12[,2],#
#########abr_1[,2],abr_2[,2],abr_3[,2],#
#########abr_7[,2],abr_9[,2],abr_10[,2],abr_11[,2],abr_12[,2],#

#########may_2[,2],may_3[,2],may_4[,2],may_6[,2],#
#########jun_5[,2],jun_9[,2],jun_10[,2],jun_11[,2],jun_12[,2],#
#########jun_13[,2],jun_14[,2],jun_15[,2],jun_16[,2])#
dim(bT)#
#probabilidad#de#FF#de#cada#evento#
i=1#
while(i<=dim(bT)[1]){#
pz1=length(which(bT[i,]<=0.185))/dim(bT)[2]#Probibilidad#en#la#ZONA#1#
pz2=length(which(bT[i,]>0.185&bT[i,]<=0.862))/dim(bT)[2]#Probibilidad#en#la#ZONA#2#
pz3=length(which(bT[i,]>0.862&bT[i,]<=1))/dim(bT)[2]#Probibilidad#en#la#ZONA#3#
pz4=length(which(bT[i,]>1))/dim(bT)[2]#Probibilidad#en#las#ZONAS#4%5%6#
pna=length(which(is.na(bT[i,])==TRUE))/dim(bT)[2]#
PTzs=pz1+pz2+pz3+pz4+pna#
#print(PTzs)#
Zf=which.max(c(pz1,pz2,pz3,pz4,pna))#
if(i==1){#
##ZT=paste0("Z",Zf)#
}#
else{#
##ZT=rbind(ZT,paste0("Z",Zf))#
}#
i=i+1#
}#
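# Illustrative sketch (an assumption): a compact version of the zone-probability
# step above. The 5000 simulated first-flush exponents b of one event are classified
# into the zones used here (Z1: b <= 0.185, Z2: 0.185 < b <= 0.862, Z3: 0.862 < b <= 1,
# Z4: b > 1), and the most probable zone is kept as the event label.
ff_zone <- function(b) {
  n  <- length(b)
  pz <- c(Z1 = sum(b <= 0.185, na.rm = TRUE) / n,
          Z2 = sum(b > 0.185 & b <= 0.862, na.rm = TRUE) / n,
          Z3 = sum(b > 0.862 & b <= 1, na.rm = TRUE) / n,
          Z4 = sum(b > 1, na.rm = TRUE) / n,
          NAp = sum(is.na(b)) / n)
  names(which.max(pz))
}
# e.g. ZT <- apply(bT, 1, ff_zone)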
#
dO=read.table("datos%input_v0.txt",header=T)#
dOf=dO[%61:%59,]#
dOf=dOf[%57,]#
dOf=dOf[%53,]#
dOf=dOf[%48,]#
dOf=dOf[%28,]#
attach(dOf)#
head(dOf)#
dim(dOf)#
#
dataFF=cbind(ZT,dOf[,4:21])#
write.table(dataFF,"FFxZonas+data.txt",col.names=TRUE)#
kwt_b=anova(lm(rank(Pb)~ZT),data=dataFF)#
print(kwt_b)#
#
#
#AHORA#Z1#LO#VAMOS#A#INCLUIR#EN#Z2#
ZT[which(ZT=="Z1")]="Z2"#
dataFF=cbind(ZT,dOf[,4:21])#
dataFF=read.table("FFxZonas+data.txt",h=T)#
head(dataFF)#
dataFF$ZT[which(dataFF$ZT=="Z1")]="Z2"#
#
#TEST#kruskal#Wallis#con#cada#una#de#las#variables#
kwt_b=anova(lm(rank((dataFF$Dhmaxb))~ZT),data=dataFF)#
print(kwt_b)#
#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename="fig_BOXP_Hmaxb_FF%Z2.tif",#width#=#wth,#height#=#wth,#compression#=#"lzw",#pointsize#=#
10,#bg#=#"white",#res#=#300)#

par(mar=c(5,5,2,2))##
boxplot(Hmaxb~ZT,ylab="Hmaxb",#xlab="FF_zones")#
dev.off()#
#
tiff(filename="fig_BOXP_Hmean_FF%Z2_sinOUT.tif",#width#=#wth,#height#=#wth,#compression#=#"lzw",#
pointsize#=#10,#bg#=#"white",#res#=#300)#
par(mar=c(5,5,2,2))##
boxplot(Hmean~ZT,ylab="Hmean",#xlab="FF_zones",outline=F)#
dev.off()#
#
tiff(filename="fig_BOXP_Dhmax_FF%Z2_sinOUT.tif",#width#=#wth,#height#=#wth,#compression#=#"lzw",#
pointsize#=#10,#bg#=#"white",#res#=#300)#
par(mar=c(5,5,2,2))##
boxplot((dataFF$Dhmax)~ZT,ylab=expression(paste(Delta,#"hmax"))#
######,xlab="FF_zones",outline=F)#
##dev.off()#
#
tiff(filename="fig_BOXP_ADWPb_FF%Z2.tif",#width#=#wth,#height#=#wth,#compression#=#"lzw",#pointsize#=#
10,#bg#=#"white",#res#=#300)#
par(mar=c(5,5,2,2))##
boxplot(ADWPb~ZT,ylab="ADWPb",xlab="FF_zones")#
dev.off()#
#
tiff(filename="fig_BOXP_Pb_FF%Z2.tif",#width#=#wth,#height#=#wth,#compression#=#"lzw",#pointsize#=#10,#
bg#=#"white",#res#=#300)#
par(mar=c(5,5,2,2))##
boxplot(Pb~ZT,ylab="Pb",xlab="FF_zones")#
dev.off()#
#
tiff(filename="fig_BOXP_Imaxb_FF%Z2.tif",#width#=#wth,#height#=#wth,#compression#=#"lzw",#pointsize#=#
10,#bg#=#"white",#res#=#300)#
par(mar=c(5,5,2,2))##
boxplot(Imaxb~ZT,ylab="Imaxb",xlab="FF_zones")#
dev.off()#
#
tiff(filename="fig_BOXP_Imeanb_FF%Z2.tif",#width#=#wth,#height#=#wth,#compression#=#"lzw",#pointsize#=#
10,#bg#=#"white",#res#=#300)#
par(mar=c(5,5,2,2))##
boxplot(Imeanb~ZT,ylab="Imeanb",xlab="FF_zones")#
dev.off()#
#
tiff(filename="fig_BOXP_Dvertb_FF%Z2.tif",#width#=#wth,#height#=#wth,#compression#=#"lzw",#pointsize#=#
10,#bg#=#"white",#res#=#300)#
par(mar=c(5,5,2,2))##
boxplot(Dvertb~ZT,ylab="Dvert",xlab="FF_zones")#
dev.off()#
#
tiff(filename="fig_BOXP_Dplub_FF%Z2.tif",#width#=#wth,#height#=#wth,#compression#=#"lzw",#pointsize#=#
10,#bg#=#"white",#res#=#300)#
par(mar=c(5,5,2,2))##
boxplot(Dplub~ZT,ylab="Dplu",xlab="FF_zones")#
dev.off()#
#
#
######

#PROBABILIDAD#DE#FF#ALTO#de#cada#evento#y#cada#mes#
p1=length(which(jun_16[,2]<=0.185))/length(jun_16[,2])#Probibilidad#en#la#ZONA#1#
p2=length(which(jun_16[,2]>0.185&jun_16[,2]<=0.862))/length(jun_16[,2])#Probibilidad#en#la#ZONA#
2#
p3=length(which(jun_16[,2]>0.862&jun_16[,2]<=1))/length(jun_16[,2])#Probibilidad#en#la#ZONA#3#
p4=length(which(jun_16[,2]>1))/length(jun_16[,2])#Probibilidad#en#las#ZONAS#4%5%6#
pna=length(which(is.na(jun_16[,2])==TRUE))/length(jun_16[,2])#
PT=p1+p2+p3+p4+pna#
write.table(c(p1,p2,p3,p4,pna,PT),"PFF%jun_16.txt")#
#
#
######
head(dataf)#
dim(dataf)#
#agrupar#todos#los#b#de#todos#los#eventos#63X5000#
bT=rbind(ago_1[,2],ago_2[,2],#
#########sep_1[,2],sep_2[,2],sep_4[,2],sep_5[,2],sep_6[,2],sep_7[,2],#
#########sep_8[,2],sep_9[,2],sep_10[,2],sep_12[,2],sep_13[,2],#
#########oct_2[,2],oct_3[,2],oct_4[,2],oct_5[,2],oct_6[,2],oct_7[,2],#
#########oct_8[,2],oct_9[,2],oct_10[,2],oct_11[,2],#
#########ene_5[,2],ene_6[,2],ene_7[,2],ene_8[,2],#
#########feb_2[,2],feb_3[,2],feb_4[,2],feb_5[,2],#
#########mar_1[,2],mar_2[,2],mar_3[,2],mar_4[,2],mar_5[,2],mar_6[,2],#
#########mar_7[,2],mar_8[,2],mar_9[,2],mar_11[,2],mar_12[,2],#
#########abr_1[,2],abr_2[,2],abr_3[,2],#
#########abr_7[,2],abr_9[,2],abr_10[,2],abr_11[,2],abr_12[,2],#
#########may_2[,2],may_3[,2],may_4[,2],may_6[,2],#
#########jun_5[,2],jun_9[,2],jun_10[,2],jun_11[,2],jun_12[,2],#
#########jun_13[,2],jun_14[,2],jun_15[,2],jun_16[,2])#
dim(bT)#
#
#probabilidad#de#b<=0.185#para#cada#Hmean#
j=1#
while(j<=length(Hmean)){#
##if(j==1){#
##RbHmean=c(Hmean[j],length(which(bT[j,]<=0.185)))#
##}#
##else{#
####RbHmean=rbind(RbHmean,c(Hmean[j],length(which(bT[j,]<=0.185))))#
##}#
##j=j+1#
}#
#
RbHmean=RbHmean[order(RbHmean[,1]),]##
write.csv(RbHmean,"RbHmean.csv")#
RbHmeanO=as.matrix(read.csv("RbHmeanorder.csv"))#
#
pbff=cbind(RbHmeanO[,1],(RbHmeanO[,2]/5000)*100)#
#
wth=3*580#
hth=wth#
hth1=wth/2^(.5)#
tiff(filename="fig%prob_b_hmean.tif",#width#=#wth,#height#=#hth1,##
#####compression#=#"none",#pointsize#=#5,#bg#=#"white",#res#=#450)#
par(mar=c(5,5,2,2))#

barplot(pbff[,2],#names.arg=c(pbff[,1]),cex.axis=0.6,#las=2,#xlab="Hmean",##
########ylab="Prob#b%Z1")#
dev.off()#
#
wth=3*580#
hth=wth#
hth1=wth/2^(.5)#
tiff(filename="fig%FF%bvsHmean.tif",#width#=#wth,#height#=#hth1,##
#####compression#=#"none",#pointsize#=#6,#bg#=#"white",#res#=#450)#
#
plot(Hmean,bT[,1],xlab="Hmean",ylab="FF%b",pch=20,xlim=c(min(Hmean),max(Hmean)),#
#####ylim=c(min(bT,na.rm=T),max(bT,na.rm=T)))#
yb=rep(0.185,length(Hmean))#
lines(Hmean,yb,lty=2,#col="red")#
i=2#
while(i<=dim(bT)[2]){#
points(Hmean,bT[,i],col=i,pch=20)#
i=i+1#
}#
dev.off()#
#
wth=3*580#
hth=wth#
hth1=wth/2^(.5)#
tiff(filename="fig%FF%bvsHmax.tif",#width#=#wth,#height#=#hth1,##
#####compression#=#"none",#pointsize#=#6,#bg#=#"white",#res#=#450)#
#
plot(Hmax,bT[,1],xlab="Hmax",ylab="FF%b",pch=20,xlim=c(min(Hmax),max(Hmax)),#
#####ylim=c(min(bT,na.rm=T),max(bT,na.rm=T)))#
yb=rep(0.185,length(Hmax))#
lines(Hmax,yb,lty=2,#col="red")#
i=2#
while(i<=dim(bT)[2]){#
##points(Hmax,bT[,i],col=i,pch=20)#
##i=i+1#
}#
dev.off()#
#
#
wth=3*580#
hth=wth#
hth1=wth/2^(.5)#
tiff(filename="fig%FF%bvsImean.tif",#width#=#wth,#height#=#hth1,##
#####compression#=#"none",#pointsize#=#6,#bg#=#"white",#res#=#450)#
#
plot(Imean,bT[,1],xlab="Imean",ylab="FF%b",pch=20,xlim=c(min(Imean,na.rm=T),#
#########################################################max(Imean,na.rm=T)),#
#####ylim=c(min(bT,na.rm=T),max(bT,na.rm=T)))#
yb=rep(0.185,length(Imean))#
lines(Imean,yb,lty=2,#col="red")#
i=2#
while(i<=dim(bT)[2]){#
##points(Imean,bT[,i],col=i,pch=20)#
##i=i+1#
}#

dev.off()#
#
wth=3*580#
hth=wth#
hth1=wth/2^(.5)#
tiff(filename="fig%FF%bvsHmeanb%log.tif",#width#=#wth,#height#=#hth1,##
#####compression#=#"none",#pointsize#=#6,#bg#=#"white",#res#=#450)#
#
plot(Hmeanb,bT[,1],xlab="Hmeanb",ylab="FF%b",pch=20,xlim=c(min(Hmeanb,na.rm=T),#
#########################################################max(Hmeanb,na.rm=T)),#
#####ylim=c(min(bT,na.rm=T),max(bT,na.rm=T)),log="y")#
yb=rep(0.185,length(Hmeanb))#
lines(Hmeanb,yb,lty=2,#col="red")#
i=2#
while(i<=dim(bT)[2]){#
##points(Hmeanb,bT[,i],col=i,pch=20)#
##i=i+1#
}#
dev.off()#
#
#
#

APPENDIX C-11-2
USING RELIABILITY AS AN EVALUATION METHOD
#
# Kappa coefficient: simulated vs observed uses with all the data
####Ycal=usor[(inie+tw+tf):(fine+tw+tf)]  # observed uses
####Ypcdef=UP2  # simulated uses with prob > 0.95
####YYc=cbind(Ycal,Ypcdef)
####table(Ypcdef,Ycal)
####
if(length(which(Ycal=="NU"))==length(UP2)&length(which(Ycal=="B"))==0&length(which(Ypcdef=="
NU"))==0|#
#########
length(which(Ycal=="B"))==length(UP2)&length(which(Ycal=="NU"))==0&length(which(Ypcdef=="B")
)==0){#simulado==NU#
######Cb=0#Confiabilidad#uso#B#
######Cnu=0#Confiabilidad#uso#NU###
######ck=0##coeficiente#Kappa#
####}else#if(length(which(Ypcdef=="B"))==length(UP2)&length(which(Ycal=="B"))==length(UP2)|#
###############length(which(Ypcdef=="B"))!=0&length(which(Ypcdef=="NU"))==0){#simulado==B#
######Cb=1#Confiabilidad#uso#B#
######Cnu=0#Confiabilidad#uso#NU###
######ck=0#coeficiente#Kappa}#
#######length(which(Ypcdef=="NU"))==length(UP2)&length(which(Ypcdef=="B"))==0#
####}else#if(length(which(Ypcdef=="NU"))==length(UP2)&length(which(Ycal=="NU"))==length(UP2)#
#############|length(which(Ypcdef=="NU"))!=0&length(which(Ypcdef=="B"))==0){#observado==B#
######Cb=0#Confiabilidad#uso#B#
######Cnu=1#Confiabilidad#uso#NU###
######ck=0#coeficiente#Kappa}#
####}else#if(length(which(Ycal=="B"))==0&length(which(Ypcdef=="B"))>0){ck=0#
#######################################################################Cnu=table(Ypcdef,Ycal)[2]/sum(table(Ypcdef,Ycal))#
#######################################################################Cb=NA#
####}else#if(length(which(Ycal=="NU"))==0&length(which(Ypcdef=="NU"))>0){ck=0#
#########################################################################Cb=table(Ypcdef,Ycal)[1]/sum(table(Ypcdef,Ycal))#
#########################################################################Cnu=NA###
####}#else{#
######a=table(Ypcdef,Ycal)[1]#Sireal=Sisimu#check#
######b=table(Ypcdef,Ycal)[3]#Noreal=Sisimu#
######c=table(Ypcdef,Ycal)[2]#Sireal=Nosimu#
######d=table(Ypcdef,Ycal)[4]#Noreal=Nosimu#check#
######po=(a+d)/sum(a,b,c,d)#
######pesa=((a+b)*(a+c))/sum(a,b,c,d)#
######pesb=((c+d)*(b+d))/sum(a,b,c,d)#
######pe=(pesa+pesb)/sum(a,b,c,d)#
######Cb=a/(a+c)#Confiabilidad#uso#B#
######Cnu=d/(b+d)#Confiabilidad#uso#NU#
######ck=(po%pe)/(1%pe)##coeficiente#Kappa#
####}#
#####
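# Illustrative sketch (an assumption): a self-contained restatement of the
# reliability and Cohen's kappa computation above for the regular two-class case
# (classes B and NU = no use), given vectors of observed and simulated classes.
kappa_reliability <- function(obs, sim) {
  tab <- table(factor(sim, levels = c("B", "NU")),
               factor(obs, levels = c("B", "NU")))
  a <- tab["B", "B"];  b <- tab["B", "NU"]    # a: both B;  b: obs NU, sim B
  c <- tab["NU", "B"]; d <- tab["NU", "NU"]   # c: obs B, sim NU;  d: both NU
  n  <- a + b + c + d
  po <- (a + d) / n                                    # observed agreement
  pe <- ((a + b) * (a + c) + (c + d) * (b + d)) / n^2  # chance agreement
  list(Cb    = a / (a + c),            # reliability for use B
       Cnu   = d / (b + d),            # reliability for no use (NU)
       kappa = (po - pe) / (1 - pe))   # Cohen's kappa coefficient
}
# e.g. kappa_reliability(Ycal, Ypcdef)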

APPENDIX C-11-3
EVALUATION METHOD TO IDENTIFY THE REDUNDANT VARIABLES
#
### Program to choose the kernel function for SVM classification
### Simulation of the water uses defined by the spectrometer
#
rm(list=ls(all=TRUE))#
M=read.csv("bd_events+usos.csv",header=T)#
attach(M)#
library(kernlab)#
library(DEoptim)#
#conformacion#datos#de#calibracion#y#validacion#para#CLASIFICACION#
X=as.data.frame(M[,%1:%4])#QUITAMOS#PRIMERAS#4#COL#
X=as.data.frame(X[%82:%88,])#QUITAMOS#NA#
X=as.data.frame(X[%64:%76,])#
X=as.data.frame(X[%57:%58,])#
X=as.data.frame(X[%52:%54,])#
X=as.data.frame(X[%44:%49,])#
X=as.data.frame(X[%13:%18,])#
X=as.data.frame(X[%9:%10,])#
X=as.data.frame(X[%3:%7,])#
X=as.data.frame(X[%1,])#
USOoutRm=X[,20]#
length(USOoutRm)#
X=as.data.frame(X[,%19:%20])#QUITAMOS#ULTIMAS#DOS#COL##
which(is.na(X)==TRUE)#
######
#evaluar#cada#una#de#las#funciones#de#KERNEL,##
#cuAl#es#la#mejor#para#USO#A#y#USO#B#
kernelf=c("rbfdot",#"polydot",#"vanilladot","tanhdot","laplacedot","besseldot","anovadot")#
j=1#
while(j<=length(kernelf)){#
##n=1#
##numsim=1000#
##ker=kernelf[j]#
##while(n<=numsim){#
####ical=sort(sample(c(1:dim(X)[1]),round(dim(X)[1]*2/3)))#
####SUPcal=as.factor(USOoutRm[ical])#
####SUPval=as.factor(USOoutRm[%ical])#
####Xcal=as.matrix(X[ical,])#
####Xval=as.matrix(X[%ical,])#
####if(length(which(SUPcal=="B"))!=0&#length(which(SUPcal=="NU"))!=0){#
#####CLASIFICACIoN##
####filter#<%#ksvm(SUPcal~.,data=Xcal,kernel=ker)#
####SUPpc=predict(filter,Xcal)#
####SUPpv=predict(filter,Xval)#
####tablacal=table(SUPpc,SUPcal)#
#####print(tablacal)#
####tablaval=table(SUPpv,SUPval)#
#####print(tablaval)#
######if(length(which(SUPpv=="NU"))==length(SUPpv)){#
########Cb=0#Confiabilidad#uso#B#

########Cnu=1#Confiabilidad#uso#NU###
########ck=0##coeficiente#Kappa#
######}else#if(length(which(SUPpv=="B"))==length(SUPpv)){#
########Cb=1#Confiabilidad#uso#B#
########Cnu=0#Confiabilidad#uso#NU###
########ck=0#coeficiente#Kappa}#
######}else#if(length(which(SUPval=="B"))==length(SUPval)){#
########Cb=tablaval[1]/sum(tablaval)#Confiabilidad#uso#B#
########Cnu=NA#Confiabilidad#uso#NU###
########ck=0#coeficiente#Kappa}#
######}else#if(length(which(SUPval=="NU"))==length(SUPval)){#
########Cnu=tablaval[2]/sum(tablaval)#Confiabilidad#uso#B#
########Cb=NA#Confiabilidad#uso#NU###
########ck=0#coeficiente#Kappa}#
######}#else{#
########a=tablaval[1]#Sireal=Sisimu#check#
########b=tablaval[3]#Noreal=Sisimu#
########c=tablaval[2]#Sireal=Nosimu#
########d=tablaval[4]#Noreal=Nosimu#check#
########po=(a+d)/sum(a,b,c,d)#
########pesa=((a+b)*(a+c))/sum(a,b,c,d)#
########pesb=((c+d)*(b+d))/sum(a,b,c,d)#
########pe=(pesa+pesb)/sum(a,b,c,d)#
########Cb=a/(a+c)#Confiabilidad#uso#B#
########Cnu=d/(b+d)#Confiabilidad#uso#NU#
########ck=(po%pe)/(1%pe)##coeficiente#Kappa#
######}#
#######
######if(n==1){#
########tT=tablaval#
########CbT=Cb#
########CnuT=Cnu#
########ckT=ck#
######}else{tT=rbind(tT,tablaval)#
###########CbT=c(CbT,Cb)#
###########CnuT=c(CnuT,Cnu)#
###########ckT=c(ckT,ck)}#
#####
####}else#{tT=rbind(tT,c(NA,NA))#
#########CbT=c(CbT,NA)#
#########CnuT=c(CnuT,NA)#
#########ckT=c(ckT,NA)}#
####n=n+1#
##}#
###
##write.csv(ckT,paste0("CoefKappa",ker,"%kernel_",numsim,".csv"))#
###write.csv(tT,paste0("table%",ker,"%kernel_",numsim,".csv"))#
##write.csv(CbT,paste0("confi_UB%",ker,numsim,".csv"))#
##write.csv(CnuT,paste0("confi_UNU%",ker,numsim,".csv"))#
##j=j+1#
}#
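# Illustrative sketch (an assumption): a minimal version of one iteration of the
# kernel comparison above. A kernlab SVM classifier is fitted on a random 2/3
# calibration subset and the validation confusion table is returned, which can then
# be scored with the reliability/kappa logic of Appendix C-11-2.
library(kernlab)
evaluate_kernel <- function(X, y, kernel = "vanilladot") {
  ical <- sort(sample(seq_len(nrow(X)), round(nrow(X) * 2 / 3)))
  fit  <- ksvm(x = as.matrix(X[ical, ]), y = factor(y[ical]), kernel = kernel)
  pred <- predict(fit, as.matrix(X[-ical, ]))
  table(predicted = pred, observed = factor(y[-ical]))
}
# e.g. evaluate_kernel(X, USOoutRm, kernel = "anovadot")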
#
######
#GRafica#boxplot#de#confiabilidad#B#
i=1#

while(i<=length(kernelf)){#
##ker=kernelf[i]#
##CaT=read.csv(paste0("confi_UB%",ker,"1000.csv"),header=T)#
##head(CaT)#
###CaT[,1]=rep(ker,dim(CaT)[1])#
###class(CaT[,2])#
###class(CaT[,1])#
###names(CaT)<%c("Y",ker)#
###assign(paste(ker,#sep=""),CaT)#
##CaT=CaT[,2]#
##head(CaT)#
##if(i==1){CAT=CaT}else{CAT=cbind(CAT,CaT)}#
##i=i+1#
}#
CAT=data.frame(CAT)#
names(CAT)<%c("rbfdot",#"polydot",#"vanilladot","tanhdot","laplacedot","besseldot","anovadot")#
names(CAT)#
write.csv(CAT,"confi_UsoB%Total.csv")#
CATs=stack(CAT)#
#GRafica#boxplot#de#confiabilidad#NU#
i=1#
while(i<=length(kernelf)){#
##ker=kernelf[i]#
##CbT=read.csv(paste0("confi_UNU%",ker,"1000.csv"),header=T)#
##head(CbT)#
###CaT[,1]=rep(ker,dim(CaT)[1])#
###class(CaT[,2])#
###class(CaT[,1])#
###names(CaT)<%c("Y",ker)#
###assign(paste(ker,#sep=""),CaT)#
##CbT=CbT[,2]#
##head(CbT)#
##if(i==1){CBT=CbT}else{CBT=cbind(CBT,CbT)}#
##i=i+1#
}#
CBT=data.frame(CBT)#
names(CBT)<%c("rbfdot",#"polydot",#"vanilladot","tanhdot","laplacedot","besseldot","anovadot")#
names(CBT)#
write.csv(CBT,"confi_UsoB%Total.csv")#
#
CBTs=stack(CBT)#
#GRafica#boxplot#de#coef#kappa#
i=1#
while(i<=length(kernelf)){#
##ker=kernelf[i]#
##ckT=read.csv(paste0("CoefKappa",ker,"%kernel_",numsim,".csv"),header=T)#
##head(ckT)#
##ckT=ckT[,2]#
##head(ckT)#
##if(i==1){CKT=ckT}else{CKT=cbind(CKT,ckT)}#
##i=i+1#
}#
CKT=data.frame(CKT)#
names(CKT)<%c("rbfdot",#"polydot",#"vanilladot","tanhdot","laplacedot","besseldot","anovadot")#
names(CKT)#

write.csv(CKT,"CoefKappa_Uso%Total.csv")#
#
CKTs=stack(CKT)#
#
#kwt_a=aov(lm(values)#~#ind),#data#=#CATs)#
#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename="fig_Box_confi_SVM_USO%B.tif",#width#=#wth,#height#=#hth,#
compression#=#"lzw",#pointsize#=#8,#bg#=#"white",#res#=#300)#
par(mar=c(5,5,2,2))#
boxplot(values#~#ind,ylab="Reliability#USE#B",#xlab="SVM#Kernel#function",#data=CATs)#
dev.off()#
#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename="fig_Box_confi_SVM_USO%NU.tif",#width#=#wth,#height#=#hth,#
compression#=#"lzw",#pointsize#=#8,#bg#=#"white",#res#=#300)#
par(mar=c(5,5,2,2))#
boxplot(values#~#ind,ylab="Reliability#NO#USE",#xlab="SVM#Kernel#function",data=CBTs)#
dev.off()#
#
wth=3*580#
hth=wth/2^(.5)#
tiff(filename="fig_Box_CKappa_SVM_USO.tif",#width#=#wth,#height#=#hth,#
#####compression#=#"lzw",#pointsize#=#8,#bg#=#"white",#res#=#300)#
par(mar=c(5,5,2,2))#
boxplot(values#~#ind,ylab="Cohen's#kappa#coefficient",##
########xlab="SVM#Kernel#function",#data=CKTs)#
dev.off()#
#
######
###Ahora#evaluaciOn#quitando#cada#una#de#las#variables##
#QUITANDO#SOLO#una#VARIABLE#
kernelf=c("rbfdot",#"polydot",#"vanilladot","tanhdot","laplacedot","besseldot","anovadot")#
ker=kernelf[3]#Para#USo#A#y#B#FunciOn#Kernel#escogida#"anovadot"#
numsim=1000#
#Con#resultados#KAPPA#no#confiabilidad#
ckTf=read.csv(paste0("CoefKappa",ker,"%kernel_",numsim,".csv"),header=TRUE)#
ckTf=ckTf[,2]#
##CaTf=read.csv(paste0("confi_UA%",ker,"1000.csv"),header=TRUE)#con#todas#
##paste0("confi_UA%",ker,"1000.csv")#
##CbTf=read.csv(paste0("confi_UB%",ker,"1000.csv"),header=TRUE)#con#todas#
##CaTf=CaTf[,2]#
##CbTf=CbTf[,2]#
#
dim(X)#TODA#la#matriz#18#variables#
SV=1#SIN#VARIABLE#TAL#
numsim=1000#
while(SV<=dim(X)[2]){#
##n=1#
##x=X[,%SV]#sin#1#variable#Hmax#
##dim(x)#
###evaluaciOn#caonfiabilidad#sin#la#variable#SV#
##while(n<=numsim){#

####ical=sort(sample(c(1:dim(x)[1]),round(dim(x)[1]*2/3)))#
####SUPcal=as.factor(USOoutRm[ical])#
####SUPval=as.factor(USOoutRm[%ical])#
####if(length(which(SUPcal=="B"))!=0&#length(which(SUPcal=="NU"))!=0){#
######Xcal=as.matrix(x[ical,])#
######Xval=as.matrix(x[%ical,])#
#######CLASIFICACIoN##
######filter#<%#ksvm(SUPcal~.,data=Xcal,kernel=ker)#
######SUPpc=predict(filter,Xcal)#
######SUPpv=predict(filter,Xval)#
######tablacal=table(SUPpc,SUPcal)#
######tablaval=table(SUPpv,SUPval)#
#######print(tablaval)#
#######indices#COef#KAppa#
######if(length(which(SUPpv=="NU"))==length(SUPpv)){#
########Cb=0#Confiabilidad#uso#B#
########Cnu=1#Confiabilidad#uso#NU###
########ck=0##coeficiente#Kappa#
######}else#if(length(which(SUPpv=="B"))==length(SUPpv)){#
########Cb=1#Confiabilidad#uso#B#
########Cnu=0#Confiabilidad#uso#NU###
########ck=0#coeficiente#Kappa}#
######}else#if(length(which(SUPval=="B"))==length(SUPval)){#
########Cb=tablaval[1]/sum(tablaval)#Confiabilidad#uso#B#
########Cnu=NA#Confiabilidad#uso#NU###
########ck=0#coeficiente#Kappa}#
######}else#if(length(which(SUPval=="NU"))==length(SUPval)){#
########Cnu=tablaval[2]/sum(tablaval)#Confiabilidad#uso#B#
########Cb=NA#Confiabilidad#uso#NU###
########ck=0#coeficiente#Kappa}#
######}#else{a=tablaval[1]#Sireal=Sisimu#check#
########b=tablaval[3]#Noreal=Sisimu#
########c=tablaval[2]#Sireal=Nosimu#
########d=tablaval[4]#Noreal=Nosimu#check#
########po=(a+d)/sum(a,b,c,d)#
########pesa=((a+b)*(a+c))/sum(a,b,c,d)#
########pesb=((c+d)*(b+d))/sum(a,b,c,d)#
########pe=(pesa+pesb)/sum(a,b,c,d)#
########Cb=a/(a+c)#Confiabilidad#uso#B#
########Cnu=d/(b+d)#Confiabilidad#uso#NU#
########ck=(po%pe)/(1%pe)##coeficiente#Kappa#
######}#
######if(n==1){tT=tablaval#
########CbT=Cb#
########CnuT=Cnu#
########ckT=ck#
######}else{tT=rbind(tT,tablaval)#
############CbT=c(CbT,Cb)#
############CnuT=c(CnuT,Cnu)#
############ckT=c(ckT,ck)}#
#######
####}else#if(n==1){#
######tT=c(NA,NA)#
######CbT=NA#
######CnuT=NA#

APP#C%11%3#–#5#of#11#
######ckT=NA#
####}else{tT=rbind(tT,c(NA,NA))#
##########CbT=c(CbT,NA)#
##########CnuT=c(CnuT,NA)#
##########ckT=c(ckT,NA)}#
####n=n+1#
##}#
##nv=names(X[SV])#
###write.csv(tT,paste0(SV,"%",nv,"%table%",ker,"%kernel_",numsim,".csv"))#
##write.csv(CbT,paste0(SV,"%",nv,"%confi_UB%",ker,numsim,".csv"))#
##write.csv(CnuT,paste0(SV,"%",nv,"%confi_UNU%",ker,numsim,".csv"))#
##write.csv(ckT,paste0(SV,"%",nv,"%CoefKappa",ker,"%kernel_",numsim,".csv"))#
###
###USO#A##TEST#de#Wilcox#
##CKT2=data.frame(cbind(ckT,ckTf))#
##names(CKT2)<%c(paste0("V18%",nv),"V18")#
###head(CKT2)#
##CKT2=stack(CKT2)#
###wilcox#test#
##pvwck=pairwise.wilcox.test(CKT2[,1],CKT2[,2],p.adjust="bonf")$p.value#
###boxplot#
##if(pvwck>0.05){wth=3*580#
##hth=wth/2^(.5)#
##tiff(filename=paste0(SV,"%",nv,"%fig_Box_CKT_SVM.tif"),#width#=#wth,#height#=#hth,#
#######compression#=#"lzw",#pointsize#=#8,#bg#=#"white",#res#=#300)#
##par(mar=c(5,5,2,2))#
##boxplot(values#~#ind,ylab="Cohen's#kappa#coefficient",##
##########xlab="Variables#used",#data=CKT2)#
##dev.off()}#
##if(SV==1){pvwTck=pvwck#
##}else{pvwTck=rbind(pvwTck,pvwck)}#
SV=SV+1#
}#
#
which(is.na(ckT)==TRUE)#
#
write.csv(pvwTck,paste0("1sin%var%Pv%",ker,"%",numsim,".csv"))#
ps=which(pvwTck>0.05)#variables#sin#diferencias#significativas#
rn=rownames(pvwTck)#
ps=cbind(rn[which(pvwTck>0.05)],ps)#
write.csv(ps,paste0("1%var%sinsig%",ker,"%",numsim,".csv"))#
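The block above decides whether a variable can be dropped by comparing the kappa distribution obtained without it against the reference distribution obtained with all variables, using a Wilcoxon test. A minimal sketch of that decision step, assuming ck_without and ck_full are the two vectors of kappa values over the Monte Carlo splits (the helper name compare_kappa and the simulated example are illustrative only):

# Sketch: can a variable be removed without significantly degrading Cohen's kappa?
compare_kappa <- function(ck_without, ck_full, alpha = 0.05){
  dat <- stack(data.frame(without = ck_without, full = ck_full))
  pv  <- pairwise.wilcox.test(dat$values, dat$ind, p.adjust = "bonf")$p.value
  # p-value > alpha: no significant difference, the variable is a candidate for removal
  list(p.value = as.numeric(pv), removable = as.numeric(pv) > alpha)
}

# illustrative example with two simulated kappa distributions
set.seed(1)
compare_kappa(rnorm(1000, 0.62, 0.05), rnorm(1000, 0.63, 0.05))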
#
#
#####
# REMOVING TWO VARIABLES: the ps variables without significant differences
ker=kernelf[3] # For use A, chosen kernel function "vanilladot"
# non-significant variables
paste0("1-var-sinsig-",ker,"-1000.csv")
psf=read.csv(paste0("1-var-sinsig-",ker,"-1000.csv"))
# psfb=read.csv(paste0("1-var-sinsig-",ker,"USO-B-1000.csv"))
# psf=psf[,3]
k=1 # counter that runs over SV
SV=as.numeric(psf[,3])
nv=as.character(psf$X.1)
class(nv)
nv=substr(nv,5,nchar(nv))
###
k=k+1
# e.g. 1-Hmax-CoefKappaanovadot-kernel_1000
paste0(SV[k],"-",nv[k],"-CoefKappa",ker,"-kernel_",numsim,".csv")
####
ckTf=read.csv(paste0(SV[k],"-",nv[k],"-CoefKappa",ker,"-kernel_",numsim,".csv"),header=TRUE)
dim(ckTf)
head(ckTf)
ckTf=ckTf[,2]
length(ckTf)

# reference matrix: "full" with all variables except one non-significant variable (= nv)
x=X[,-SV[k]]
dim(x)
l=1 # counter over the variables
while(l<=dim(x)[2]){
  n=1
  numsim=1000
  # SV=l (index of the additional variable left out)
  x1=x[,-l] # without one more variable
  dim(x1)
  ## USOoutRm: variable holding the observed (real) USE
  while(n<=numsim){
    ical=sort(sample(c(1:dim(x1)[1]),round(dim(x1)[1]*2/3)))
    SUPcal=as.factor(USOoutRm[ical])
    SUPval=as.factor(USOoutRm[-ical])
    Xcal=as.matrix(x[ical,])
    Xval=as.matrix(x[-ical,])
    # CLASSIFICATION
    filter <- ksvm(SUPcal~.,data=Xcal,kernel=ker)
    SUPpc=predict(filter,Xcal)
    SUPpv=predict(filter,Xval)
    tablacal=table(SUPpc,SUPcal)
    # print(tablacal)
    tablaval=table(SUPpv,SUPval)
    # print(tablaval)
    # reliability indices and Kappa coefficient (Cb, Cnu: reliabilities; ck: Kappa)
    if(length(which(SUPpv=="NU"))==length(SUPpv)){
      Cb=0; Cnu=1; ck=0
    }else if(length(which(SUPpv=="B"))==length(SUPpv)){
      Cb=1; Cnu=0; ck=0
    }else if(length(which(SUPval=="B"))==length(SUPval)){
      Cb=tablaval[1]/sum(tablaval); Cnu=NA; ck=0
    }else if(length(which(SUPval=="NU"))==length(SUPval)){
      Cnu=tablaval[2]/sum(tablaval); Cb=NA; ck=0
    }else{
      a=tablaval[1] # observed B,  predicted B
      b=tablaval[3] # observed NU, predicted B
      c=tablaval[2] # observed B,  predicted NU
      d=tablaval[4] # observed NU, predicted NU
      po=(a+d)/sum(a,b,c,d)
      pesa=((a+b)*(a+c))/sum(a,b,c,d)
      pesb=((c+d)*(b+d))/sum(a,b,c,d)
      pe=(pesa+pesb)/sum(a,b,c,d)
      Cb=a/(a+c)
      Cnu=d/(b+d)
      ck=(po-pe)/(1-pe)
    }
    if(n==1){tT=tablaval
             CbT=Cb
             CnuT=Cnu
             ckT=ck
    }else{tT=rbind(tT,tablaval)
          CbT=c(CbT,Cb)
          CnuT=c(CnuT,Cnu)
          ckT=c(ckT,ck)}
    n=n+1
  }
  nvv=names(x[l])
  write.csv(ckT,paste0(SV[k],"-",nvv,l,"-CK-",ker,"-",numsim,".csv"))

  # Wilcoxon test against the reference (all variables except nv[k])
  CKT2=data.frame(cbind(ckT,ckTf))
  names(CKT2)<-c(paste0("V17-",nvv),paste0("V17:",nv[k]))
  head(CKT2)
  CKT2=stack(CKT2)
  # wilcox test
  pvwck=pairwise.wilcox.test(CKT2[,1],CKT2[,2],p.adjust="bonf")$p.value
  # boxplot (disabled)
  # if(pvwck>0.05){
  #   wth=3*580
  #   hth=wth/2^(.5)
  #   tiff(filename=paste0(nv[k],"-",nvv,"-fig_Box_CKT_SVM.tif"), width = wth, height = hth,
  #        compression = "lzw", pointsize = 8, bg = "white", res = 300)
  #   par(mar=c(5,5,2,2))
  #   boxplot(values ~ ind,ylab="Cohen's kappa coefficient",
  #           xlab="Variables used", data=CKT2)
  #   dev.off()}

  if(l==1){pvwTck=t(pvwck)
  }else{pvwTck=rbind(pvwTck,t(pvwck))}

  l=l+1
}
#
write.csv(pvwTck,paste0(nv[k],"%1var%Pv%",ker,"%",numsim,".csv"))#
ps=which(pvwTck>0.05)#variables#sin#diferencias#significativas#
rn=rownames(pvwTck)#
#rownames(pvwTa)[1]<%paste0("A_X%",nv[k])#
#rn=rownames(pvwTa)#
ps=cbind(rn[which(pvwTck>0.05)],ps)#
write.csv(ps,paste0(nv[k],"%1var%nosig%",ker,"%",numsim,".csv"))#

APP#C%11%3#–#8#of#11#
dim(ps)#
#
##write.csv(pvwTb,paste0(SVb[k],nvb[k],"%var%Pv%",ker,"USO%B%",numsim,".csv"))#
##psb=which(pvwTb>0.05)#variables#sin#diferencias#significativas#
##rn=rownames(pvwTb)#
###rownames(pvwTa)[1]<%paste0("A_X%",nv[k])#
###rn=rownames(pvwTa)#
##psb=cbind(rn[which(pvwTb>0.05)],psb)#
##write.csv(psb,paste0(SVb[k],"%",nvb[k],"%var%nosig%",ker,"USO%B%",numsim,".csv"))#
##dim(psb)#
#
#####
# REMOVING THREE VARIABLES: the ps variables without significant differences
ker=kernelf[3] # For use A, chosen kernel function "anovadot"
# pvf=read.csv("1-var-Pvalues-polydotUSO-A-1000.csv",header=T)
# ps=which(pvf[,2]<0.05)
rm(psf)
rm(SV)
rm(nv)
# non-significant variables
psf=read.csv("Dhmaxb-1var-nosig-vanilladot-1000.csv",header=TRUE)
# These variables can be removed in pairs together with Hmax;
# now evaluate whether three can be removed at the same time
numsim=1000
SV=as.numeric(psf[,3])
nv=as.character(psf$X.1)
class(nv)
nv=substr(nv,5,nchar(nv)) # the variables to be removed in this run

k=1 # counter that runs over SV and nv
# Reference (use A): "full" with all variables except ps
paste0("12-",nv[k],SV[k],"-CK-",ker,"-",numsim,".csv")
while(k<=length(SV)){
  CaTf=read.csv(paste0("12-",nv[k],SV[k],"-CK-",ker,"-",numsim,".csv"),header=TRUE)
  dim(CaTf)
  CaTf=CaTf[,2]
  # reference matrix: "full" with all variables except psf
  x=X[,-12]    # removing Dhmax
  x=x[,-SV[k]] # removing the first variable of SV
  dim(x)
  l=1 # counter over the variables
  while(l<=dim(x)[2]){
    n=1
    numsim=1000
    # SV=l (index of the additional variable left out)
    x1=x[,-l] # without one more variable (=> l)
    dim(x1)
    ## USOoutRm: variable holding the observed (real) USE
    while(n<=numsim){
      ical=sort(sample(c(1:dim(x1)[1]),round(dim(x1)[1]*2/3)))
      SUPcal=as.factor(USOoutRm[ical])
      SUPval=as.factor(USOoutRm[-ical])
      Xcal=as.matrix(x[ical,])
      Xval=as.matrix(x[-ical,])
      # CLASSIFICATION
      filter <- ksvm(SUPcal~.,data=Xcal,kernel=ker)
      SUPpc=predict(filter,Xcal)
      SUPpv=predict(filter,Xval)
      tablacal=table(SUPpc,SUPcal)
      tablaval=table(SUPpv,SUPval)
      # print(tablaval)
      # reliability indices and Kappa coefficient (Cb, Cnu: reliabilities; ck: Kappa)
      if(length(which(SUPpv=="NU"))==length(SUPpv)){
        Cb=0; Cnu=1; ck=0
      }else if(length(which(SUPpv=="B"))==length(SUPpv)){
        Cb=1; Cnu=0; ck=0
      }else if(length(which(SUPval=="B"))==length(SUPval)){
        Cb=tablaval[1]/sum(tablaval); Cnu=NA; ck=0
      }else if(length(which(SUPval=="NU"))==length(SUPval)){
        Cnu=tablaval[2]/sum(tablaval); Cb=NA; ck=0
      }else{
        a=tablaval[1] # observed B,  predicted B
        b=tablaval[3] # observed NU, predicted B
        c=tablaval[2] # observed B,  predicted NU
        d=tablaval[4] # observed NU, predicted NU
        po=(a+d)/sum(a,b,c,d)
        pesa=((a+b)*(a+c))/sum(a,b,c,d)
        pesb=((c+d)*(b+d))/sum(a,b,c,d)
        pe=(pesa+pesb)/sum(a,b,c,d)
        Cb=a/(a+c)
        Cnu=d/(b+d)
        ck=(po-pe)/(1-pe)
      }
      if(n==1){tT=tablaval
               CbT=Cb
               CnuT=Cnu
               ckT=ck
      }else{tT=rbind(tT,tablaval)
            CbT=c(CbT,Cb)
            CnuT=c(CnuT,Cnu)
            ckT=c(ckT,ck)}
      n=n+1
    }

    nvv=names(x[l])
    # write.csv(tT,paste0(SV,"-",nvv,l,"-table-",ker,"-kernel_",numsim,".csv"))
    # write.csv(ckT,paste0("1-",SV[k],nv[k],"-",nvv,l,"-CKappa-",ker,numsim,".csv"))
    # write.csv(CbT,paste0(SV,"-",nvv,l,"-confi_UB-",ker,numsim,".csv"))
    # write.csv(ckT,paste0(SV,"-",nvv,l,"-CoefKappa",ker,"-kernel_",numsim,".csv"))

    # Only use A is of interest here
    # Wilcoxon test
    CAT2=data.frame(cbind(ckT,CaTf))
    names(CAT2)<-c(paste0("A_1-",nv[k],"-",nvv),paste0("A-1-",nv[k]))
    head(CAT2)
    CAT2=stack(CAT2)
    pvw=pairwise.wilcox.test(CAT2[,1],CAT2[,2],p.adjust="bonf")$p.value
    # boxplot (disabled)
    # if(pvw>0.05){
    #   wth=3*580
    #   hth=wth/2^(.5)
    #   tiff(filename=paste0("1-",nv[k],"-",nvv,"-fig_Box_Ck_SVM.tif"), width = wth, height = hth,
    #        compression = "lzw", pointsize = 8, bg = "white", res = 300)
    #   par(mar=c(5,5,2,2))
    #   boxplot(values ~ ind,ylab="Cohen's kappa coefficient",
    #           xlab="Variables used", data=CAT2)
    #   dev.off()}

    if(l==1){pvwT=pvw}else{pvwT=rbind(pvwT,pvw)}
    l=l+1
  }

  write.csv(pvwT,paste0("1-",nv[k],"-var-Pv-",ker,"CK-",numsim,".csv"))
  ps=which(pvwT>0.05) # variables without significant differences
  rn=rownames(pvwT)
  # rownames(pvwT)[1]<-paste0("A_X-",nvv)
  ps=cbind(rn[which(pvwT>0.05)],ps)
  write.csv(ps,paste0("1-",nv[k],"-var-nosignifi-",ker,"CK",numsim,".csv"))
  length(ps)

  k=k+1
}
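Once the screening above has identified which variables can be dropped without a significant loss of kappa, the retained variables can be used to train the final classifier and to classify a new event. A minimal sketch, assuming the matrix X, the observed use groups USOoutRm and the chosen kernel ker are those defined above; xnew (a one-row set of event variables) is an illustrative name and not part of the original script:

library(kernlab)
# Sketch: final SVM classifier on the retained variables, and prediction for a new event
dat <- data.frame(use = as.factor(USOoutRm), X)
svm_fit <- ksvm(use ~ ., data = dat, kernel = ker)
predict(svm_fit, newdata = data.frame(xnew)) # returns the predicted use group, e.g. "B" or "NU"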

APPENDIX C-12-1
EFFICIENCY PER EVENT AND GENERAL EFFICIENCY MODEL
#

1. EFFICIENCY PER EVENT
#### SCRIPT for the simulation: DETERMINE the efficiency per event
rm(list=ls(all=TRUE))
Sin_all=read.table("In-AGO-14-scan+level.txt",header=TRUE)
# for now, only the wavelengths
attach(Sin_all)
Sin=as.matrix(Sin_all[,4:217])
dim(Sin)

# load the flow rates of the month
Qinm=read.csv(as.matrix("Mmed-Qx1-ago-14.csv",header=T))
Qinm=as.vector(Qinm[,1])
length(Qinm)
Qoutm=read.csv(as.matrix("Mmed-Qx1-AGO-14-out-low.csv",header=T))
class(Qoutm)
Qoutm=as.vector(Qoutm[,1])
length(Qoutm)
plot(Qinm, type="l")
lines(Qoutm, col="green")
# outlet wavelengths, for comparison
Soutr_all=read.table("out-AGO-14-scan+level.txt",header=TRUE)
attach(Soutr_all)
Soutr=as.matrix(Soutr_all[,4:217])
length(which(is.na(Soutr[,3])))
i=1 # column L200
xx=seq(1:dim(Soutr)[1])
while(i<=dim(Soutr)[2]){
  Soutr[,i]=approx(xx,Soutr[,i],xout=xx)$y
  i=i+1
}
dim(Soutr)

ini1=read.csv("Eventos-ago-14.csv", header=T) # scan info (in and out) is available from event l=7 onwards

# IMPORTANT!!!
Sin_all[1:2,1:2]                 # start of the inlet spectrometer data
Sin_all[dim(Sin_all)[1],1:2]     # end of the inlet spectrometer data
Soutr_all[1:2,1:2]               # start of the outlet spectrometer data
Soutr_all[dim(Soutr_all)[1],1:2] # end of the outlet spectrometer data
# Soutr_all[ini1[l,3]-tin+1,1:2]
tin=46
tfin=29*1440+1*60+21-(19*1440+8*60)

Sin=as.matrix(Sin_all[,4:217])
datesin=as.matrix(Sin_all[tin:tfin,1:3])
dim(datesin)
datesin[(ini1[l,3]-lq),1:3]
lq=19*1440+8*60+45 # offset needed to complete the inlet/outlet water-quality database
Sin[1:2,1:3]
length(which(is.na(Sin[,3])))
# fill in the missing values
i=1 # column L200
xx=seq(1:dim(Sin)[1])
while(i<=dim(Sin)[2]){
  Sin[,i]=approx(xx,Sin[,i],xout=xx)$y
  i=i+1
}
dim(Sin)
Sin=Sin[tin:tfin,]
datesin[1,]

# align the same time window for Sin vs ve1 and ve2
ve1=Qinm*60              # volume per minute, in litres
ve2=as.numeric(Qoutm*60) # volume per minute, in litres
length(ve2)
dim(Sin)
ve1=ve1[tin:tfin]
ve2=ve2[tin:tfin]
length(ve2)
dim(Sin)
dim(Soutr)
####

mes="AGO"
anho=14

tw=19
l=8 # event counter
while(l<=dim(ini1)[1]){
  le=ini1[l,4]-ini1[l,3] # event length
  # WHILE TO DETECT A POSSIBLE FF EVENT
  nm=1 # new minute

  Sine=Sin[(ini1[l,3]-lq):(ini1[l,4]-lq),] # length of event l
  dim(Sine)
  ve1s=ve1[(ini1[l,3]-lq):(ini1[l,4]-lq)]
  length(ve1s)
  ve2s=ve2[(ini1[l,3]-lq+tw):(ini1[l,4]-lq+tw)]
  length(ve2s)
  ile=which(ve1s>ve2s&ve2s>0)
  # REAL outlet wavelengths for event l
  Souter=Soutr[(ini1[l,3]-lq+tw):(ini1[l,4]-lq+tw),]
  dim(Souter)
  # determination of the efficiency per event; the result is an efficiency MATRIX
  if(length(ile)>0){
    y=1
    while(y<=dim(Sine)[2]){
      if(y==1){efir=(Sine[,y]*ve1s-Souter[,y]*ve2s)/(Sine[,y]*ve1s)
      }else{efir=cbind(efir,(Sine[,y]*ve1s-Souter[,y]*ve2s)/(Sine[,y]*ve1s))}
      y=y+1}
  }else{y=1
    while(y<=dim(Sine)[2]){
      if(y==1){efir=Souter[,y]/Sine[,y]}else{efir=cbind(efir,Souter[,y]/Sine[,y])}
      y=y+1}
  }
  write.csv(efir, paste0(l,"-EVE-Efi-",mes, anho,".csv"))
  # Efficiency looking only at the wavelength 437.5 nm == TSS
  if(l==8){efTtss=boxplot.stats(efir[,96])$stats
  names(efTtss)<-c("min","q1","q2","q3","max")}else{efTtss=rbind(efTtss,boxplot.stats(efir[,96])$stats)}
  if(length(ile)>0){qout="YES"}else{qout="NO"}
  if(l==8){qoutT=qout}else{qoutT=c(qoutT,qout)}
  # Boxplot figure: efficiency variation per wavelength
  efir=as.data.frame(efir)
  names(efir)<-names(Sin_all[4:217])
  efirs=stack(efir)
  wth=1.5*580
  hth=wth/2^(.5)
  tiff(filename=paste0(l,"EVE-fig_Box_Efi-",mes,anho,"-Qout=",qout,".tif"), width = wth, height = hth,
       compression = "lzw", pointsize = 5, bg = "white", res = 200)
  par(mar=c(5,5,2,2))
  boxplot(values ~ ind,outline=FALSE,ylab="Efficiency", xlab=expression(lambda), data=efirs,
          main=paste0(mes,"-",anho,"- Event-",l))
  dev.off()
  l=l+1
}

INIT=cbind(ini1[8:dim(ini1)[1],],qoutT,efTtss)
write.csv(INIT,paste0(mes,anho,"-Eficiencia-TSS-l-96.csv"))
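The efficiency matrix built above applies, for every minute and every wavelength, a mass-balance expression when the event shows recorded outflow (inflow exceeding outflow), and a simple absorbance ratio otherwise. A minimal sketch of the two expressions for a single wavelength, with illustrative values (s_in, s_out: inlet/outlet absorbance series; v_in, v_out: per-minute volumes):

# Sketch: per-minute efficiency for one wavelength (illustrative values only)
s_in  <- c(12.0, 15.5, 14.2) # inlet absorbance (1/m), one value per minute
s_out <- c(4.1,  5.0,  4.6)  # outlet absorbance (1/m)
v_in  <- c(300, 420, 380)    # inlet volume per minute (L)
v_out <- c(120, 150, 140)    # outlet volume per minute (L)
eff_mass_balance <- (s_in*v_in - s_out*v_out)/(s_in*v_in) # used when inflow > recorded outflow
eff_ratio        <- s_out/s_in                            # fallback used when no outflow is recorded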
#
#
#
# Figures only
l=8
mes="AGO"
anho=14
efir=as.matrix(read.csv(paste0(l,"-EVE-Efi-",mes, anho,".csv")))
efir=efir[,2:215]
efir=as.data.frame(efir)
names(efir)<-names(Sin_all[4:217])
efirs=stack(efir)

wth=3*580
hth=wth/2^(.5)

tiff(filename=paste0(l,"EVE-fig_Box_Efi-",mes,anho,".tif"), width = wth, height = hth,
     compression = "lzw", pointsize = 11, bg = "white", res = 300)
par(mar=c(5,5,2,2))
boxplot(values ~ ind,outline=FALSE,ylab="Efficiency", xlab=expression(lambda), data=efirs)
dev.off()
2. GENERAL EFFICIENCY MODEL
### SCRIPT to build the general EFFICIENCY model
rm(list=ls(all=TRUE))
# Read the per-event efficiency matrices written by the "efficiency per event" script
# (same files, same order as the original ef1..ef80 objects; the available
#  event numbers differ from month to month)
evlist=list(AGO14=8:11, SEP14=7:14, OCT14=7:11, ENE15=5:8, FEB15=1:5,
            MAR15=1:15, ABR15=1:12, MAY15=1:10, JUN15=2:18)
files=unlist(lapply(names(evlist), function(m) paste0(evlist[[m]],"-EVE-Efi-",m,".csv")))
EfT=do.call(rbind, lapply(files, read.csv, row.names=1))
# EfT=EfT[-which(is.na(EfT[,3])==TRUE),]
# NON-PARAMETRIC DISTRIBUTION
library(ks)
k=1
while(k<=dim(EfT)[2]){
  DQO=EfT[,k]
  DQOvect=as.vector(as.matrix(DQO))
  fhat <- kde(x=DQOvect[is.finite(DQOvect)],h=hpi(DQOvect[is.finite(DQOvect)]),
              xmin=min(DQOvect[is.finite(DQOvect)]),
              xmax=max(DQOvect[is.finite(DQOvect)]),positive=F)
  save(fhat,file=paste0("modkef-v2-",k,".RData")) # modkef-v1-214
  k=k+1
}
#
k=96
load(file=paste0("modkef-v2-",k,".RData"))

plot(fhat)
DQOalea <- rkde(fhat=fhat, n=10000,positive=FALSE)
par(mar = rep(2, 4))

wth=3*580
hth=wth
hth1=wth/2^(.5)
tiff(filename="Modelo-tss-EFI.tiff", width = wth, height = hth1,
     compression = "lzw", pointsize = 11, bg = "transparent", res =300)
par(mar=c(5,5,2,2))
hist(DQOalea,freq=FALSE, main="", xlab="Efficiency",
     las=1, ylim=c(0,1), nclass=100) # , xlim=c(-1,1)
lines(density(DQOalea), col="blue", lty=3, lwd=1.5)

dev.off()

utDQO=boxplot.stats(DQOalea, do.out=T)$out
DQO=DQOalea[-which(DQOalea==utDQO[1])]
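The per-wavelength density models saved above ("modkef-v2-<k>.RData") are what Appendix C-12-2 reloads to simulate outlet spectra. A minimal sketch of drawing one random efficiency per wavelength from those saved models (file names as written by the loop above; it assumes the 214 model files exist in the working directory, and eff_draw is an illustrative name):

library(ks)
# Sketch: one random efficiency per wavelength, drawn from the saved kde models
n_wl <- 214                  # number of wavelength columns (columns 4:217)
eff_draw <- numeric(n_wl)
for(k in 1:n_wl){
  load(file = paste0("modkef-v2-", k, ".RData")) # restores 'fhat' for wavelength k
  eff_draw[k] <- rkde(fhat = fhat, n = 1)
}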

APPENDIX C-12-2
OUTLET WAVELENGTHS SIMULATIONS
#
# while-loop to create Soutsimu (fragment of the larger event loop)
    # load the per-wavelength efficiency models to determine
    # the outlet wavelengths
    k=1 # wavelength counter
    while(k<=dim(Sine1)[2]){
      load(file=paste0("modkef-v2-",k,".RData"))
      efi <- rkde(fhat=fhat, dim(Sine1)[1],positive=FALSE) # random efficiency values
      if(length(ile)>0){ # this means ve1>ve2 and ve2 != 0
        lout=rep(0, length(ve1s))
        lout[ile]=Sine1[ile,k]*(ve1s[ile]/ve2s[ile])*(1-sort(efi[ile])) # [ts:length(ve1)]
        lout[-ile]=Sine1[-ile,k]*sort(efi[-ile])
      }else{lout=Sine1[,k]*sort(efi)} # if this branch is used, ve2>ve1
      if(k==1){
        Soute=lout
      }else{Soute=cbind(Soute,lout)}
      k=k+1
    }
    dim(Soute)

    dM1=Soute # simulated outlet spectra
    dM2=loutr[(inie+tf+tw):(fine+tf+tw),] # observed (real) outlet spectra
    dim(dM2)
    # OUTLET WATER: figure for ALL uses, with a warning when a USE is not met (observed)
    tiff(filename=paste0(l,"-Ev-OBS-Out_",mes,anho,"usosA&B-v2.tif"), width = wth, height = hth,
         compression = "lzw", pointsize = 10, bg = "transparent", res = 300)
    par(mar=c(5,5,2,2))
    plot(Aall[lmu,],type="l", ylim=c(0, max(Aall[lmu,],Ball[lmub,])+10),
         main=paste0(mes,anho,"-Event ",l),xlab=substitute(lambda),ylab="Out - Abs 1/m", lwd=2)
    lines(Ball[lmub,],col="black",lwd=2, lty=2)
    j=1
    while(j<=dim(dM2)[1]){
      upr=length(which(dM2[j,]-Aall[lmu,]<0))/length(Aall[lmu,])      # ABS LIMIT, USE A
      up2r=length(which((dM2[j,]-Ball[lmub,])<0))/length(Ball[lmub,]) # ABS LIMIT, USE B
      if(upr<0.95){lines(dM2[j,],col="blue",lwd=1.5)}
      if(up2r<0.95){lines(dM2[j,],col="cyan",lwd=1.5)}           # minute-j absorbance that does NOT comply
      if(upr>=0.95 |up2r>=0.95){lines(dM2[j,],col="gray",lwd=2)} # minute-j absorbance that complies
      j=j+1
    }
    lines(Ball[lmub,],col="black",lwd=2, lty=2)
    lines(Aall[lmu,],col="black",lwd=2,lty=1)
    legend("topright", legend=c(" Limit Water uses: A"," Limit Water uses: B"," Month absorbances",
                                " Warnning use A obs", " Warnning use B obs"),
           col=c("black","black","grey", "blue","cyan"),lwd=c(1.5,1.5,4,1,1),
           lty=c(1,2,1,1,1),cex=1)
    dev.off()

    tiff(filename=paste0(l,"-Ev-SIMU-Out_",mes,anho,"usosA&B-v2.tif"), width = wth, height = hth,
         compression = "lzw", pointsize = 10, bg = "transparent", res = 300)
    par(mar=c(5,5,2,2))
    plot(Aall[lmu,],type="l", ylim=c(0, max(Aall[lmu,],Ball[lmub,])+10),
         main=paste0(mes,anho,"-Event ",l),xlab=substitute(lambda),ylab="Out - Abs 1/m", lwd=2)
    lines(Ball[lmub,],col="black",lwd=2, lty=2)
    j=1
    while(j<=dim(dM2)[1]){
      upr=length(which(dM2[j,]-Aall[lmu,]<0))/length(Aall[lmu,])      # ABS LIMIT, USE A
      up2r=length(which((dM2[j,]-Ball[lmub,])<0))/length(Ball[lmub,]) # ABS LIMIT, USE B
      if(upr<0.95){lines(dM2[j,],col="blue",lwd=1.5)}
      if(up2r<0.95){lines(dM2[j,],col="cyan",lwd=1.5)}           # minute-j absorbance that does NOT comply
      if(upr>=0.95 |up2r>=0.95){lines(dM2[j,],col="gray",lwd=2)} # minute-j absorbance that complies
      j=j+1
    }
    j=1 # simulated MONTH absorbances
    while(j<=dim(dM1)[1]){
      up=length(which((dM1[j,]-Aall[lmu,])<0))/length(Aall[lmu,])    # ABS LIMIT, USE A
      up2=length(which((dM1[j,]-Ball[lmub,])<0))/length(Ball[lmub,]) # ABS LIMIT, USE B
      if(up<0.95){lines(dM1[j,],col="red",lwd=1.5)}                # minute-j absorbance that does NOT comply
      if(up2<0.95){lines(dM1[j,],col="magenta",lwd=1.5)}           # minute-j absorbance that does NOT comply
      if(up>=0.95 |up2>=0.95){lines(dM1[j,],col="grey75", lty=3)}  # minute-j absorbance that complies
      j=j+1
    }
    lines(Ball[lmub,],col="black",lwd=2, lty=2)
    lines(Aall[lmu,],col="black",lwd=2,lty=1)
    legend("topright", legend=c(" Limit Water uses: A"," Limit Water uses: B", " Month absorbances",
                                " Warnning use A simu", " Warnning use A obs",
                                " Warnning use B simu"," Warnning use B obs"),
           col=c("black","black","grey","red", "blue", "magenta","cyan"),
           lwd=c(1.5,1.5,4,1,1,1,1),lty=c(1,2,1,1,1,1,1),cex=1)
    dev.off()
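The fragment above sits inside a larger event loop; its core is the per-wavelength transformation of the inlet absorbance into a simulated outlet absorbance using random efficiencies. A minimal stand-alone sketch of that step, with illustrative argument names (s_in, v_in, v_out, eff), following the same expressions as the fragment:

# Sketch: simulated outlet absorbance for one wavelength over one event
simulate_outlet <- function(s_in, v_in, v_out, eff){
  ile <- which(v_in > v_out & v_out > 0)  # minutes with recorded outflow
  if(length(ile) > 0){
    lout <- rep(0, length(v_in))
    lout[ile]  <- s_in[ile]*(v_in[ile]/v_out[ile])*(1 - sort(eff[ile]))
    lout[-ile] <- s_in[-ile]*sort(eff[-ile])
  }else{
    lout <- s_in*sort(eff)
  }
  lout
}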

APPENDIX D-13-1
WATER QUALITY INDICATORS
#
Event Sample TSS (mg/L) COD (mg/L) BOD (mg/L) T (NTU) pH
22-Apr-14 1 13.00 112.60 12.79 13.2 -
22-Apr-14 1 14.00 88.25 10.03 13.2 -
22-Apr-14 1 15.50 84.20 9.57 13.6 -
22-Apr-14 2 1.60 -5.07 NA 2.86 -
22-Apr-14 2 2.40 -8.12 NA 2.83 -
22-Apr-14 2 4.60 -5.07 NA 2.88 -
22-Apr-14 3 14.00 9.13 1.04 13 -
22-Apr-14 3 14.67 13.19 1.50 13.3 -
22-Apr-14 3 15.33 14.20 1.61 12.9 -
22-Apr-14 4 21.67 43.62 4.96 16.3 -
22-Apr-14 4 22.00 49.71 5.65 15.9 -
22-Apr-14 4 23.00 39.56 4.50 15.8 -
22-Apr-14 5 17.67 35.50 4.03 19.2 -
22-Apr-14 5 17.67 35.50 4.03 19.6 -
22-Apr-14 5 15.00 49.71 5.65 19.3 -
6-May-14 8 48.00 61.30 6.97 14.2 -
6-May-14 8 50.00 52.33 5.95 14.3 -
6-May-14 8 50.00 38.38 4.36 14.5 -
6-May-14 9 20.60 59.31 6.74 12.6 -
6-May-14 9 20.80 58.31 6.63 12.7 -
6-May-14 9 20.80 56.32 6.40 12.9 -
6-May-14 10 11.40 35.39 4.02 13.2 -
6-May-14 10 12.80 31.40 3.57 13.1 -
6-May-14 10 12.80 33.39 3.79 13 -
6-May-14 11 12.00 39.37 4.47 13.8 -
6-May-14 11 12.80 27.41 3.11 13.9 -
6-May-14 11 14.20 30.40 3.45 14.2 -
6-May-14 12 12.00 31.40 3.57 12.9 -
6-May-14 12 12.40 24.42 2.77 13.2 -
6-May-14 12 12.40 28.41 3.23 13.3 -
9-Oct-14 18 40.00 37.68 4.28 47.1 7.74
9-Oct-14 18 39.37 38.68 4.40 46.9 7.71
9-Oct-14 18 40.00 37.68 4.28 46.6 7.64
9-Oct-14 19 28.75 35.67 4.05 39.9 7.53
9-Oct-14 19 28.13 34.67 3.94 40.5 7.55
9-Oct-14 19 28.75 34.67 3.94 40.6 7.62
9-Oct-14 20 30.62 48.73 5.54 36.5 7.64
9-Oct-14 20 30.63 47.73 5.42 36.2 7.66
9-Oct-14 20 30.62 47.73 5.42 36.9 7.67

9-Oct-14 21 30.63 50.74 5.77 36 7.53
9-Oct-14 21 30.00 50.74 5.77 36.1 7.56
9-Oct-14 21 30.63 51.75 5.88 36.4 7.56
9-Oct-14 22 38.13 43.71 4.97 40.7 7.55
9-Oct-14 22 38.13 42.70 4.85 39.8 7.53
9-Oct-14 22 38.13 43.71 4.97 40.4 7.5
3-Mar-15 28 170.00 49.30 5.60 65 7.18
3-Mar-15 28 164.33 41.33 4.70 65 7.22
3-Mar-15 28 170.67 39.34 4.47 70 7.14
3-Mar-15 29 133.33 34.36 3.90 65 6.41
3-Mar-15 29 132.00 32.37 3.68 65 6.43
3-Mar-15 29 129.67 31.37 3.56 60 6.43
3-Mar-15 30 129.00 29.38 3.34 55 7.02
3-Mar-15 30 129.50 30.38 3.45 55 6.92
3-Mar-15 30 131.75 27.39 3.11 55 6.94
3-Mar-15 31 130.00 28.39 3.23 45 6.46
3-Mar-15 31 129.83 30.38 3.45 50 6.37
3-Mar-15 31 131.00 27.39 3.11 45 6.38
3-Mar-15 32 97.25 32.37 3.68 40 7.16
3-Mar-15 32 102.00 30.38 3.45 40 7.13
3-Mar-15 32 95.75 31.37 3.56 45 7.12
16-Mar-15 36 27.25 85.38 9.70 26 7.73
16-Mar-15 36 21.12 85.38 9.70 27 7.78
16-Mar-15 36 23.75 84.45 9.60 27 7.86
16-Mar-15 37 19.87 88.16 10.02 26 7.85
16-Mar-15 37 21.88 86.30 9.81 27 7.88
16-Mar-15 37 22.00 89.09 10.12 28 7.90
16-Mar-15 38 25.37 78.88 8.96 29 7.94
16-Mar-15 38 26.50 77.95 8.86 29 7.96
16-Mar-15 38 28.25 82.59 9.38 29 7.97
16-Mar-15 39 24.38 92.80 10.54 31 7.94
16-Mar-15 39 23.50 91.87 10.44 32 7.96
16-Mar-15 39 26.38 90.02 10.23 31 7.97
16-Mar-15 40 20.50 94.66 10.76 30 7.96
16-Mar-15 40 21.00 90.94 10.33 31 7.97
16-Mar-15 40 22.62 89.09 10.12 31 7.97
5-Nov-15 46 80.80 55.17 6.27 30 -
5-Nov-15 46 90.80 55.09 6.26 39 -
5-Nov-15 46 87.40 55.39 6.29 35 -
5-Nov-15 47 41.80 44.17 5.02 20 -
5-Nov-15 47 43.13 44.09 5.01 21 -
5-Nov-15 47 45.89 44.02 5.00 22 -
5-Nov-15 48 23.20 40.99 4.66 16 -

5-Nov-15 48 25.22 40.76 4.63 16 -
5-Nov-15 48 26.50 40.61 4.61 16 -
5-Nov-15 49 17.73 37.80 4.30 12 -
5-Nov-15 49 18.60 37.65 4.28 11 -
5-Nov-15 49 17.53 37.50 4.26 12 -
5-Nov-15 50 7.40 37.95 4.31 8.6 -
5-Nov-15 50 9.80 38.18 4.34 8.4 -
5-Nov-15 50 9.48 37.88 4.30 9.1 -
19-Nov-15 56 18.80 17.95 2.04 32.0 7.7
19-Nov-15 56 23.20 17.72 2.01 31.0 7.79
19-Nov-15 56 27.60 17.57 2.00 32.0
19-Nov-15 57 18.00 17.18 1.95 26.0 7.83
19-Nov-15 57 25.20 16.56 1.88 27.0 7.84
19-Nov-15 57 33.60 16.72 1.90 27.0
19-Nov-15 58 27.20 16.41 1.86 25.0 7.89
19-Nov-15 58 28.80 16.33 1.86 26.0 7.9
19-Nov-15 58 31.40 16.02 1.82 26.0
19-Nov-15 59 20.00 16.80 1.91 23.0 7.86
19-Nov-15 59 19.76 16.72 1.90 23.0 7.83
19-Nov-15 59 23.33 16.56 1.88 23.0
19-Nov-15 60 27.22 17.57 2.00 31.0 8.01
19-Nov-15 60 28.61 17.34 1.97 29.0 7.97
19-Nov-15 60 28.89 17.41 1.98 29.0
19-Nov-15 61 23.06 17.18 1.95 24.0 8
19-Nov-15 61 24.72 17.03 1.93 25.0 8.3
19-Nov-15 61 25.56 16.80 1.91 25.0
3-May-16 66 31.85 2.66 0.30 21.0 7.16
3-May-16 66 32.80 3.67 0.42 23.0
3-May-16 66 3.13 0.36 22.0
3-May-16 67 18.55 7.37 0.84 11.0 6.37
3-May-16 67 18.45 7.14 0.81 12.0
3-May-16 67 7.37 0.84 12.0
3-May-16 68 11.40 12.78 1.45 17.0 6.33
3-May-16 68 10.25 13.09 1.49 17.0
3-May-16 68 12.55 1.43 18.0
3-May-16 69 23.10 14.94 1.70 24.0 7.15
3-May-16 69 23.30 14.71 1.67 26.0
3-May-16 69 14.63 1.66 24.0
3-May-16 70 19.75 3.44 0.39 23.0 7.16
3-May-16 70 19.45 3.28 0.37 21.0
3-May-16 70 3.36 0.38 23.0
3-May-16 71 16.85 11.78 1.34 21.0 7.2
3-May-16 71 17.80 11.16 1.27 21.0

3-May-16 71 10.85 1.23 21.0
22-Apr-14 6 4.0 8.12 0.92 8.9 -
22-Apr-14 6 10.0 -2.03 NA 8.8 -
22-Apr-14 6 9.5 24.35 2.77 8.9 -
22-Apr-14 7 10.5 5.07 0.58 9.8 -
22-Apr-14 7 10.5 22.32 2.54 10.8 -
22-Apr-14 7 10.5 10.14 1.15 10.9 -
6-May-14 13 6.2 33.39 3.79 9.4 -
6-May-14 13 6.0 25.42 2.89 9.6 -
6-May-14 13 5.8 15.45 1.76 9.7 -
6-May-14 14 6.8 5.48 0.62 9.0 -
6-May-14 14 6.5 6.48 0.74 8.9 -
6-May-14 14 6.5 2.49 0.28 9.1 -
6-May-14 15 6.7 -0.50 NA 9.0 -
6-May-14 15 6.0 4.49 0.51 9.3 -
6-May-14 15 6.5 -3.49 NA 9.0 -
6-May-14 16 6.5 2.49 0.28 9.0 -
6-May-14 16 6.0 -4.49 NA 9.1 -
6-May-14 16 6.0 -6.48 NA 9.1 -
6-May-14 17 6.5 53.33 6.06 9.2 -
6-May-14 17 6.3 40.37 4.59 9.1 -
6-May-14 17 6.5 24.42 2.77 9.3 -
9-Oct-14 23 4.3 -0.50 NA 6.8 6.8
9-Oct-14 23 4.2 -1.51 NA 6.8 6.7
9-Oct-14 23 4.7 -2.51 NA 6.8 6.7
9-Oct-14 24 3.8 -4.52 NA 6.8 6.7
9-Oct-14 24 4.0 -3.52 NA 6.8 6.7
9-Oct-14 24 4.4 -4.52 NA 6.9 6.7
9-Oct-14 25 5.1 -5.53 NA 7.0 6.6
9-Oct-14 25 4.9 -4.52 NA 7.1 6.6
9-Oct-14 25 4.9 -5.53 NA 7.1 6.6
9-Oct-14 26 5.8 -2.51 NA 7.0 6.5
9-Oct-14 26 4.9 -3.52 NA 7.1 6.5
9-Oct-14 26 4.7 -3.52 NA 7.0 6.5
9-Oct-14 27 4.9 -4.52 NA 7.2 6.6
9-Oct-14 27 4.9 -4.52 NA 7.3 6.6
9-Oct-14 27 4.9 -5.53 NA 7.2 6.6
3-Mar-15 33 5.4 10.46 1.19 5.7 6.7
3-Mar-15 33 4.8 9.46 1.08 5.8 6.7
3-Mar-15 33 5.0 6.47 0.74 6.7 6.7
3-Mar-15 34 5.0 8.47 0.96 6.1 6.6
3-Mar-15 34 4.8 6.47 0.74 6.1 6.6
3-Mar-15 34 4.8 6.47 0.74 6.4 6.6

3-Mar-15 35 5.0 4.48 0.51 6.7 6.7
3-Mar-15 35 4.8 8.47 0.96 6.6 6.7
3-Mar-15 35 5.0 7.47 0.85 6.7 6.7
16-Mar-15 41 4.3 60.32 6.85 9.7 6.97
16-Mar-15 41 4.4 61.25 6.96 9.8 6.97
16-Mar-15 41 4.8 63.10 7.17 10.0 6.96
16-Mar-15 42 4.9 73.31 8.33 11.0 6.9
16-Mar-15 42 5.1 72.38 8.22 11.0 6.9
16-Mar-15 42 4.6 75.17 8.54 12.0 6.9
16-Mar-15 43 5.2 54.75 6.22 12.0 6.9
16-Mar-15 43 5.2 53.82 6.12 12.0 6.8
16-Mar-15 43 5.2 51.97 5.90 12.0 6.8
16-Mar-15 44 5.4 71.46 8.12 11.0 6.8
16-Mar-15 44 5.1 75.17 8.54 11.0 6.8
16-Mar-15 44 5.5 68.67 7.80 11.0 6.8
16-Mar-15 45 6.2 64.03 7.28 11.0 6.8
16-Mar-15 45 6.4 65.89 7.49 12.0 6.8
16-Mar-15 45 5.6 63.10 7.17 12.0 6.8
5-Nov-15 51 8.0 33.78 3.84 5 6.5
5-Nov-15 51 7.8 33.55 3.81 5
5-Nov-15 51 8.7 33.40 3.80 5
5-Nov-15 52 8.2 31.89 3.62 5 6.5
5-Nov-15 52 8.3 31.58 3.59 5
5-Nov-15 52 7.8 31.81 3.61 5
5-Nov-15 53 7.8 32.80 3.73 6 6.5
5-Nov-15 53 8.7 32.95 3.74 5
5-Nov-15 53 8.2 32.64 3.71 5
5-Nov-15 54 8.0 32.19 3.66 5 6.54
5-Nov-15 54 8.0 31.89 3.62 5
5-Nov-15 54 7.0 32.04 3.64 5
5-Nov-15 55 7.7 32.72 3.72 5
5-Nov-15 55 7.7 32.57 3.70 5 6.6
5-Nov-15 55 6.5 32.42 3.68 5
19-Nov-15 62 3.57 7.92 0.90 8.1 6.44
19-Nov-15 62 3.57 7.53 0.86 8.3 6.33
19-Nov-15 62 3.93 7.45 0.85 8.5
19-Nov-15 63 4.64 7.68 0.87 8.4 6.25
19-Nov-15 63 4.29 7.61 0.86 8.4 6.26
19-Nov-15 63 4.64 7.45 0.85 8.5
19-Nov-15 64 4.64 7.99 0.91 8.1 6.62
19-Nov-15 64 3.93 7.68 0.87 8.3 6.28
19-Nov-15 64 4.29 7.92 0.90 8.6
19-Nov-15 65 3.57 7.76 0.88 8.8 6.32

19-Nov-15 65 4.29 7.68 0.87 8.5 6.44
19-Nov-15 65 4.64 7.53 0.86 8.4
#
Event Sample BOD (mg/L) Total Coliforms MPN E.coli MPN
19-May-16 75 10 123.6 <1
19-May-16 75 10 123.6 <1
19-May-16 75 8 123.6 <1
19-May-16 76 10 104.6 <1
19-May-16 76 10 104.6 <1
19-May-16 76 8 104.6 <1
19-May-16 77 10 107.1 <1
19-May-16 77 14 107.1 <1
19-May-16 77 107.1 <1
19-May-16 78 10 325.5 <1
19-May-16 78 8 325.5 <1
19-May-16 78 6 325.5 <1
APPENDIX D-14-1
RAINFALL CHARACTERISTICS AND CONSTRUCTED-WETLAND/RESERVOIR-TANK SYSTEM PERFORMANCE
Event start   Rain duration (min)   Rain duration (hh:mm)   I Max (mm/h)   I Mean (mm/h)   Dvert (min)   k (min)   Inflow (L/s)   Outflow (L/s)   Cp   Cvol
9-Aug-14 03:49 NA NA NA NA 797 191 1.35 2.67 2.30 3.11
10-Aug-14 17:00 NA NA NA NA 119 5 54.46 59.69 0.05 2.57
12-Aug-14 11:36 120 02:00 30.72 1.92 264 152 0.44 1.22 3.09 1.36
3-Sep-14 17:55 218 03:38 15.36 2.11 381 222 0.13 0.21 3.00 0.39
6-Sep-14 12:57 6 00:60 15.36 5.12 243 11 9.08 9.79 0.05 0.43
14-Sep-14 00:43 300 05:00 15.36 0.61 300 63 0.34 1.33 1.30 0.44
14-Sep-14 05:43 47 00:47 15.36 0.65 209 4 9.46 10.13 0.05 0.44
16-Sep-14 10:53 13 00:13 15.36 2.36 241 11 9.94 12.08 0.05 0.52
17-Sep-14 22:22 NA NA NA NA 190 35 8.44 6.86 0.05 0.40
18-Sep-14 12:16 241 04:01 15.36 0.51 435 19 0.35 3.26 1.29 0.45
20-Sep-14 05:42 245 04:05 15.36 1.32 547 265 0.50 0.76 1.42 0.71
22-Sep-14 14:35 7 00:07 15.36 6.58 148 1 11.55 13.10 0.05 0.54
26-Sep-14 15:06 NA NA NA NA 182 56 0.36 1.96 1.43 0.51
3-Oct-14 02:59 68 01:08 15.36 0.45 251 48 7.05 1.50 0.05 0.33
8-Oct-14 11:09 NA NA NA NA 134 54 8.79 1.53 0.04 0.31
8-Oct-14 13:36 19 0:19 15.36 15.36 211 85 0.14 1.06 3.16 0.45
8-Oct-14 17:10 481 08:02 46.08 4.66 2824 696 0.02 0.03 32.82 0.75
10-Oct-14 16:08 159 02:39 46.08 6.09 1869 450 0.08 0.07 32.01 2.52
12-Oct-14 19:08 132 02:12 15.36 1.16 309 91 0.41 2.02 1.14 0.46
16-Oct-14 23:24 114 01:54 15.36 1.35 247 65 0.40 1.66 1.37 0.55
18-Oct-14 17:50 93 01:33 30.72 7.10 2599 527 0.04 0.06 24.87 1.10
22-Oct-14 15:08 197 03:17 15.36 0.62 367 112 0.33 2.72 1.44 0.48
24-Oct-14 13:51 8 0:08 15.36 7.68 191 5 8.58 10.80 0.05 0.45
25-Oct-14 05:20 84 01:24 15.36 3.66 370 151 0.11 0.84 5.58 0.64
25-Oct-14 23:49 334 05:34 15.36 1.33 6303 393 0.02 0.07 30.07 0.75
30-Oct-14 22:44 87 01:27 15.36 0.71 287 64 0.82 7.19 0.70 0.57
31-Oct-14 17:10 319 05:19 92.16 2.21 2365 576 0.04 0.09 31.90 1.25
18-Jan-15 14:24 31 00:31 15.36 9.41 368 156 0.07 0.85 7.66 0.57
23-Jan-15 18:41 137 02:17 15.36 1.35 384 139 0.15 1.35 3.37 0.49
24-Jan-15 14:53 12 00:12 30.72 15.36 289 122 0.05 0.92 11.30 0.53
27-Jan-15 14:07 459 07:39 76.8 1.81 1324 157 0.02 0.04 27.34 0.62
6-Feb-15 13:06 4183 69:44 61.44 0.23 5086 551 0.05 0.05 27.17 1.37
10-Feb-15 16:13 20 00:20 30.72 7.68 336 142 0.04 0.67 10.94 0.46
24-Feb-15 15:39 111 01:51 46.08 4.43 298 156 0.03 0.07 9.71 0.33
3-Mar-15 15:00 66 01:07 15.36 4.19 225 100 0.03 0.08 10.04 0.34
5-Mar-15 15:48 45 00:45 46.08 6.83 209 111 0.07 0.42 7.96 0.56
13-Mar-15 16:55 88 01:28 15.36 2.79 251 124 0.10 0.47 5.91 0.57
16-Mar-15 15:05 106 01:46 30.72 2.90 266 106 0.10 0.35 5.36 0.52
16-Mar-15 22:02 1 00:01 15.36 15.36 179 1 8.87 9.81 0.05 0.42
17-Mar-15 23:20 530 08:50 30.72 1.54 1817 295 0.04 0.08 17.70 0.63
19-Mar-15 11:15 341 05:41 46.08 5.09 2415 521 0.02 0.03 30.04 0.69
21-Mar-15 16:35 39 00:39 15.36 5.12 134 101 0.06 0.70 6.97 0.43

22-Mar-15 16:33 15 00:15 30.72 9.22 217 91 0.11 1.17 3.92 0.45
23-Mar-15 13:11 235 03:55 30.72 2.88 2221 563 0.02 0.07 26.66 0.63
29-Mar-15 5:42 NA NA NA NA 286 112 0.28 2.16 1.43 0.40
29-Mar-15 11:30 NA NA NA NA 334 38 0.24 2.01 1.99 0.48
30-Mar-15 13:42 NA NA NA NA 1386 306 0.02 0.04 25.36 0.63
3-Apr-15 8:46 196 03:16 15.36 1.18 426 241 0.17 0.09 3.77 0.65
3-Apr-15 18:16 1 00:01 15.36 15.36 169 3 28.90 25.80 0.04 1.01
4-Apr-15 5:46 212 03:32 15.36 0.22 537 7 43.78 26.07 0.04 1.53
5-Apr-15 16:04 18 00:18 15.36 3.41 185 49 1.61 9.61 0.65 1.04
8-Apr-15 15:35 NA NA NA NA 109 29 0.22 0.39 5.61 1.24
16-Apr-15 3:36 61 01:01 15.36 0.50 208 4 67.17 60.41 0.04 2.35
16-Apr-15 21:49 249 04:10 30.72 3.45 1110 38 0.16 0.17 26.82 4.31
17-Apr-15 22:06 263 04:23 15.36 0.88 531 193 0.34 0.80 2.09 0.71
18-Apr-15 15:56 350 05:50 30.72 0.92 566 215 0.59 8.46 5.36 3.14
2-May-15 14:18 7 00:07 15.36 4.39 146 - - - 0.05 -
2-May-15 17:14 8 00:08 15.36 3.84 183 - - - 1.45 -
13-May-15 20:16 58 00:58 15.36 1.59 229 - - - 0.78 -
15-May-15 11:23 67 01:07 15.36 3.21 352 - - - 4.47 -
16-May-15 21:55 1 00:01 15.36 15.36 135 - - - 0.03 -
17-May-15 0:31 131 02:11 15.36 0.59 337 - - - 0.67 -
17-May-15 17:53 44 00:44 15.36 1.40 222 - - - 0.05 -
19-May-15 12:11 NA NA NA NA 269 - - - 0.03 -
19-May-15 19:54 NA NA NA NA 99 - - - 2.89 -
22-May-15 11:38 22 00:22 15.36 4.19 381 - - - 2.06 -
23-May-15 15:48 1 00:01 15.36 15.36 147 - - - 0.03 -
25-May-15 8:54 33 00:33 15.36 3.72 416 - - - 3.25 -
28-May-15 7:05 NA NA NA NA 193 - - - 0.05 -
28-May-15 20:58 38 00:38 15.36 2.43 241 - - - 1.02 -
1-Jun-15 1:08 229 03:49 15.36 2.68 534 259 0.10 0.48 5.65 0.54
4-Jun-15 6:48 NA NA NA NA 470 - - - 0.03 -
5-Jun-15 11:00 NA NA NA NA 315 - - - 1.29 -
7-Jun-15 9:49 NA NA NA NA 123 - - - 0.04 -
11-Jun-15 2:31 414 06:54 15.36 0.41 486 158 0.02 0.15 22.09 0.38
13-Jun-15 1:15 414 06:54 15.36 1.08 1107 474 0.18 1.42 2.97 0.54
14-Jun-15 11:31 12 00:12 15.36 2.56 213 51 0.73 5.30 0.65 0.47
15-Jun-15 17:16 67 01:07 15.36 4.59 277 142 0.10 0.33 3.96 0.40
18-Jun-15 10:08 72 01:12 15.36 0.85 131 - - - 1.22 -
21-Jun-15 22:02 611 10:11 15.36 0.85 956 198 0.12 0.75 4.94 0.60
22-Jun-15 22:34 NA NA NA NA 422 83 8.28 3.77 0.05 0.40
23-Jun-15 6:11 NA NA NA NA 695 33 10.23 9.37 0.05 0.48
24-Jun-15 0:48 NA NA NA NA 997 164 0.55 5.21 0.80 0.44
25-Jun-15 2:20 NA NA NA NA 916 - 0.25 2.41 1.91 0.47
25-Jun-15 19:53 NA NA NA NA 3001 364 0.38 1.21 1.47 0.56
28-Jun-15 5:54 557 09:17 15.36 0.14 608 42 10.51 9.86 0.05 0.53
28-Jun-15 21:09 73 01:13 15.36 0.63 291 9 9.15 9.49 0.05 0.43
APPENDIX D-16-1
DISTRIBUTION OF THE OCCURRENCE PROBABILITY OF FIRST FLUSH FOR ALL EVENTS USING THE SPECTROMETER PROBE
No   Date   Beginning time   FF Zone I (high)   FF Zone II (medium)   FF Zone III (insignificant)   FF Zone IV (insignificant)
First flush zones classify the space (positive or negative) between the M(V) curve and the bisector.
EV-1 Aug/25/14 3:49 0.0% 1.3% 48.4% 49.8%
EV-2 Aug/26/14 13:17 0.0% 96.7% 2.7% 0.6%
EV-3 Sep/3/14 17:55 0.0% 100.0% 0.0% 0.0%
EV-4 Sep/6/14 12:57 0.8% 89.1% 2.7% 0.8%
EV-5 Sep/11/14 2:33 0.8% 7.6% 1.3% 62.3%
EV-6 Sep/14/14 0:43 14.6% 77.7% 2.1% 5.6%
EV-7 Sep/14/14 5:43 35.3% 12.7% 0.6% 0.1%
EV-8 Sep/16/14 10:53 0.0% 97.8% 2.1% 0.0%
EV-9 Sep/17/14 22:22 0.1% 59.1% 12.9% 16.5%
EV-10 Sep/18/14 12:16 0.0% 0.0% 0.0% 100.0%
EV-11 Sep/20/14 5:42 0.0% 100.0% 0.0% 0.0%
EV-12 Sep/22/14 14:35 0.4% 73.6% 21.7% 4.3%
EV-13 Sep/26/14 15:06 0.0% 100.0% 0.0% 0.0%
EV-14 Oct/3/14 2:59 0.0% 28.5% 43.4% 7.7%
EV-15 Oct/7/14 22:13 0.0% 1.6% 3.1% 54.3%
EV-16 Oct/8/14 11:09 0.0% 0.1% 0.2% 52.0%
EV-17 Oct/8/14 13:36 0.0% 100.0% 0.0% 0.0%
EV-18 Oct/8/14 17:10 0.0% 0.0% 0.0% 100.0%
EV-19 Oct/10/14 16:08 68.9% 31.1% 0.0% 0.0%
EV-20 Oct/12/14 19:08 20.6% 73.2% 0.2% 0.0%
EV-21 Oct/16/14 23:24 0.0% 100.0% 0.0% 0.0%
EV-22 Oct/18/14 17:50 10.0% 89.7% 0.3% 0.0%
EV-23 Oct/22/14 15:08 0.0% 100.0% 0.0% 0.0%
EV-24 Jan/23/15 18:41 0.0% 45.0% 49.0% 6.0%
EV-25 Jan/24/15 14:53 0.0% 100.0% 0.0% 0.0%
EV-26 Jan/27/15 14:07 11.0% 64.8% 16.6% 7.5%
EV-27 Jan/31/15 11:34 0.0% 1.6% 0.4% 66.6%
EV-28 Feb/6/15 13:06 0.0% 18.1% 48.4% 33.5%
EV-29 Feb/10/15 16:13 0.0% 100.0% 0.0% 0.0%
EV-30 Feb/19/15 16:55 0.0% 100.0% 0.0% 0.0%
EV-31 Feb/24/15 15:39 0.0% 99.6% 0.4% 0.0%
EV-32 Mar/2/15 17:42 0.0% 4.8% 75.5% 0.2%
EV-33 Mar/3/15 15:00 0.0% 100.0% 0.0% 0.0%
EV-34 Mar/5/15 15:48 0.0% 100.0% 0.0% 0.0%
EV-35 Mar/13/15 16:55 0.0% 100.0% 0.0% 0.0%
EV-36 Mar/16/15 15:05 0.0% 100.0% 0.0% 0.0%
EV-37 Mar/16/15 22:02 1.7% 11.1% 4.4% 37.5%
EV-38 Mar/17/15 23:20 10.8% 84.7% 1.8% 2.7%
EV-39 Mar/19/15 11:15 0.0% 97.0% 2.9% 0.1%
EV-40 Mar/21/15 16:35 0.0% 100.0% 0.0% 0.0%
EV-41 Mar/23/15 13:11 0.0% 99.3% 0.7% 0.0%
EV-42 Mar/29/15 5:42 0.0% 100.0% 0.0% 0.0%

EV-43 Apr/3/15 18:16 0.0% 40.0% 20.2% 39.8%
EV-44 Apr/4/15 5:46 0.0% 0.1% 99.3% 0.6%
EV-45 Apr/5/15 16:04 0.0% 39.2% 46.2% 14.6%
EV-46 Apr/8/15 15:35 4.1% 3.9% 13.2% 78.6%
EV-47 Apr/16/15 21:49 0.0% 0.0% 0.0% 39.1%
EV-48 Apr/17/15 22:06 0.0% 0.0% 2.4% 97.6%
EV-49 Apr/18/15 15:56 0.0% 20.3% 67.2% 12.5%
EV-50 May/2/15 17:14 0.0% 100.0% 0.0% 0.0%
EV-51 May/17/15 0:31 0.0% 71.4% 28.6% 0.0%
EV-52 May/17/15 17:53 0.0% 99.9% 0.0% 0.0%
EV-53 May/19/15 12:11 0.0% 100.0% 0.0% 0.0%
EV-54 May/19/15 19:54 0.1% 80.4% 0.1% 0.0%
EV-55 Jun/11/15 2:31 0.0% 0.0% 100.0% 0.0%
EV-56 Jun/18/15 10:08 0.0% 0.0% 0.0% 100.0%
EV-57 Jun/21/15 22:02 0.0% 57.2% 42.8% 0.0%
EV-58 Jun/22/15 7:32 0.0% 0.0% 0.0% 93.8%
EV-59 Jun/22/15 22:34 0.0% 12.1% 2.0% 80.6%
EV-60 Jun/23/15 6:11 87.1% 12.9% 0.0% 0.0%
EV-61 Jun/24/15 0:48 0.0% 23.5% 61.7% 14.7%
EV-62 Jun/25/15 2:20 0.0% 95.6% 4.4% 0.0%
EV-63 Jun/25/15 19:53 0.0% 0.2% 1.1% 97.6%
APPENDIX D-16-2
DISTRIBUTION OF THE OCCURRENCE PROBABILITY OF FIRST FLUSH FOR ALL EVENTS USING THE TURBIDITY PROBE
No   Date   Beginning time   FF Zone I (high)   FF Zone II (medium)   FF Zone III (insignificant)   FF Zone IV (insignificant)
First flush zones classify the space (positive or negative) between the M(V) curve and the bisector.
EVT-1 Aug/6/14 07:53:00 0% 0% 0% 100%
EVT-2 Aug/8/14 16:44:00 0% 0% 0% 100%
EVT-3 Aug/8/14 20:44:00 0% 0% 93% 7%
EVT-4 Aug/9/14 03:49:00 0% 0% 0% 100%
EVT-5 Aug/10/14 17:00:00 0% 16% 52% 31%
EVT-6 Aug/11/14 11:36:00 0% 0% 99% 1%
EVT-7 Aug/13/14 23:42:00 0% 0% 9% 91%
EVT-8 Aug/25/14 03:49:00 0% 0% 0% 100%
EVT-9 Aug/26/14 13:17:00 0% 0% 1% 100%
EVT-10 Aug/27/14 11:33:00 0% 11% 51% 38%
EVT-11 Sep/3/14 17:55:00 0% 98% 0% 2%
EVT-12 Sep/6/14 12:57:00 0% 58% 41% 1%
EVT-13 Sep/11/14 02:33:00 0% 1% 14% 86%
EVT-14 Sep/14/14 00:43:00 0% 0% 0% 100%
EVT-15 Sep/14/14 05:43:00 0% 18% 43% 39%
EVT-16 Sep/16/14 10:53:00 0% 0% 0% 100%
EVT-17 Sep/17/14 22:22:00 0% 0% 0% 100%
EVT-18 Sep/18/14 12:16:00 0% 0% 1% 99%
EVT-19 Sep/20/14 05:42:00 0% 0% 92% 9%
EVT-20 Sep/22/14 14:35:00 0% 12% 61% 27%
EVT-21 Sep/26/14 15:06:00 0% 1% 0% 99%
EVT-22 Feb/2/15 21:41:00 0% 20% 35% 45%
EVT-23 Feb/6/15 13:06:00 0% 100% 0% 0%
EVT-24 Feb/10/15 16:13:00 0% 0% 100% 0%
EVT-25 Feb/19/15 16:55:00 NA NA NA NA
EVT-26 Feb/24/15 15:39:00 0% 94% 4% 2%
EVT-27 Mar/2/15 17:42:00 0% 0% 15% 85%
EVT-28 Mar/3/15 15:00:00 0% 2% 0% 98%
EVT-29 Mar/5/15 15:48:00 NA NA NA NA
EVT-30 Mar/13/15 16:55:00 2% 0% 98% 0%
EVT-31 Mar/16/15 15:05:00 0% 0% 0% 100%
EVT-32 Mar/16/15 22:02:00 0% 11% 89% 0%
EVT-33 Mar/17/15 23:20:00 0% 0% 16% 84%
EVT-34 Mar/19/15 11:15:00 0% 100% 0% 0%
EVT-35 May/2/15 14:18:00 0% 0% 0% 100%
EVT-36 May/2/15 17:14:00 0% 98% 0% 2%
EVT-37 May/13/15 20:16:00 0% 0% 0% 100%
EVT-38 May/15/15 11:23:00 0% 97% 1% 3%
EVT-39 May/16/15 21:55:00 0% 0% 92% 8%
EVT-40 May/17/15 00:31:00 0% 0% 9% 91%
EVT-41 May/17/15 17:53:00 0% 0% 42% 58%
EVT-42 May/19/15 12:11:00 0% 5% 76% 20%
APPENDIX D-17-1
BOXPLOT DATA OF THE EVENTS' EFFICIENCIES FOR TSS CONCENTRATIONS
#
Event   Date   Beginning time   Recorded outflow?   lw   q1   q2   q3   uw
(lw/uw: lower/upper whisker; q1, q2, q3: quartiles of the event efficiency for TSS)
EV-1 25-Aug-14 04:33:00 NO 0.18 0.21 0.22 0.30 0.43
EV-AA 25-Aug-14 15:42:00 NO 0.24 0.24 0.25 0.25 0.26
EV-2 26-Aug-14 13:29:00 NO 0.19 0.20 0.21 0.21 0.23
EV-H 27-Aug-14 11:33:00 NO 0.26 0.27 0.27 0.27 0.28
EV-8 16-Sep-14 11:11:00 NO 1.07 1.14 1.16 1.18 1.24
EV-9 17-Sep-14 22:22:00 NO 0.74 0.77 0.79 0.80 0.85
EV-10 18-Sep-14 12:33:00 YES -8.98 -4.22 -1.49 -1.00 1.00
EV-11 20-Sep-14 05:53:00 YES -16.86 -11.76 -2.81 1.00 1.00
EV-BB 21-Sep-14 17:46:00 NO 0.95 0.97 0.98 0.99 1.01
EV-12 20-Sep-14 14:35:00 NO 0.79 0.81 0.83 0.84 0.85
EV-13 28-Sep-14 15:06:00 YES -7.41 -5.55 -5.31 -3.97 -1.85
EV-CC 23-Sep-14 19:20:00 NO 1.96 1.97 1.99 1.99 2.01
EV-19 10-Oct-14 16:16:00 YES -0.35 0.42 0.87 0.95 1.00
EV-20 12-Oct-14 19:23:00 YES -20.02 -15.48 -10.39 -2.30 1.00
EV-21 16-Oct-14 23:24:00 YES -28.13 -20.58 -17.23 -1.64 0.75
EV-22 18-Oct-14 17:50:00 YES -0.61 0.32 0.91 0.94 1.00
EV-23 22-Oct-14 15:08:00 YES -16.12 -13.06 -11.70 -9.96 -5.34
EV-DD 16-Jan-15 16:41:00 YES -4.22 -2.68 -2.01 -1.59 -0.26
EV-EE 17-Jan-15 14:11:00 YES -2.35 -1.65 -1.41 -0.89 0.08
EV-FF 17-Jan-15 21:50:00 YES 0.98 0.99 0.99 0.99 1.00
EV-GG 18-Jan-15 14:35:00 NO 0.51 0.52 0.53 0.53 0.54
EV-24 23-Jan-15 19:04:00 YES -4.22 -2.68 -2.01 -1.59 -0.26
EV-25 24-Jan-15 14:54:00 YES -2.35 -1.65 -1.41 -0.89 0.08
EV-26 27-Jan-15 14:12:00 YES 0.98 0.99 0.99 0.99 1.00
EV-27 31-Jan-15 11:47:00 NO 0.51 0.52 0.53 0.53 0.54
EV-28 2-Feb-15 22:31:00 NO 0.43 0.44 0.44 0.44 0.45
EV-29 6-Feb-15 13:23:00 YES 0.80 0.91 0.98 0.99 1.00
EV-30 10-Feb-15 16:22:00 YES -3.16 -2.05 -1.44 0.92 1.00
EV-31 19-Feb-15 16:55:00 NO 0.16 0.18 0.19 0.19 0.20
EV-32 24-Feb-15 15:39:00 NO 0.07 0.15 0.19 0.20 0.22
EV-33 2-Mar-15 19:29:00 NO 0.28 0.29 0.29 0.29 0.30
EV-34 3-Mar-15 15:10:00 NO 0.16 0.20 0.22 0.23 0.24
EV-35 5-Mar-15 15:51:00 YES -5.90 -4.65 -3.27 0.59 1.00
EV-36 13-Mar-15 17:12:00 YES -3.78 -1.82 -1.72 1.00 1.00
EV-37 16-Mar-15 15:12:00 YES -8.77 -3.06 -1.98 0.94 1.00
EV-38 16-Mar-15 22:11:00 NO 0.58 0.60 0.61 0.61 0.64
EV-39 17-Mar-15 23:29:00 YES -0.13 0.52 0.93 0.97 1.00
EV-40 19-Mar-15 11:23:00 YES 0.52 0.79 0.96 0.99 1.00
EV-41 21-Mar-15 16:43:00 YES -10.34 -8.98 -7.73 -4.94 1.00
EV-42 22-Mar-15 16:39:00 YES -12.16 -8.98 -8.67 -6.31 -2.79
EV-43 23-Mar-15 13:11:00 YES 0.02 0.57 0.88 0.96 1.00
EV-44 29-Mar-15 05:42:00 YES -9.08 -7.50 -6.78 -5.47 -3.48
EV-45 29-Mar-15 11:30:00 YES -13.36 -9.83 -7.91 -6.23 -3.97
EV-46 30-Mar-15 13:42:00 YES 0.60 0.82 0.94 0.97 1.00

EV-47 31-Mar-15 13:47:00 NO 0.82 0.87 0.91 0.93 0.95
EV-48 3-Apr-15 08:59:00 NO 0.72 0.84 1.04 1.07 1.10
EV-49 3-Apr-15 18:43:00 NO 1.12 1.13 1.14 1.14 1.16
EV-50 4-Apr-15 09:28:00 NO 1.04 1.05 1.06 1.06 1.08
EV-51 5-Apr-15 16:23:00 NO 0.80 0.82 0.83 0.84 0.87
EV-52 8-Apr-15 15:35:00 YES -15.66 -12.36 1.00 1.00 1.00
EV-53 16-Apr-15 04:09:00 NO 1.80 1.84 1.86 1.88 1.94
EV-54 16-Apr-15 21:54:00 YES 0.88 0.95 1.00 1.00 1.00
EV-55 17-Apr-15 22:57:00 NO 0.77 1.19 1.61 1.65 2.02
EV-56 18-Apr-15 16:02:00 YES -18.06 -12.72 -9.39 1.00 1.00
EV-57 2-May-15 14:40:00 NO 0.52 0.53 0.54 0.55 0.57
EV-58 2-May-15 17:23:00 NO 0.19 0.22 0.24 0.24 0.25
EV-59 13-May-15 20:40:00 NO 0.16 0.19 0.20 0.21 0.24
EV-60 15-May-15 11:23:00 NO 0.32 0.36 0.37 0.38 0.42
EV-61 16-May-15 23:02:00 NO 1.00 1.02 1.02 1.03 1.04
EV-62 17-May-15 00:44:00 NO 0.78 0.84 0.93 0.99 1.09
EV-63 17-May-15 17:53:00 NO 0.88 0.91 0.92 0.93 0.97
EV-64 19-May-15 12:11:00 NO 1.00 1.03 1.05 1.06 1.11
EV-65 19-May-15 19:54:00 NO 1.04 1.04 1.04 1.05 1.05
EV-HH 19-May-15 20:59:00 NO 1.03 1.04 1.05 1.05 1.06
EV-72 4-Jun-15 06:48:00 NO 0.69 0.70 0.72 0.72 0.75
EV-73 5-Jun-15 11:00:00 NO 0.29 0.39 0.43 0.46 0.49
EV-74 7-Jun-15 09:49:00 NO 0.12 0.13 0.13 0.23 0.38
EV-75 11-Jun-15 02:31:00 NO 0.27 0.30 0.31 0.32 0.35
EV-76 13-Jun-15 01:45:00 YES -4.51 -1.46 0.35 0.59 1.00
EV-77 14-Jun-15 11:45:00 YES -2.14 -1.66 -1.13 -0.31 1.00
EV-78 15-Jun-15 17:16:00 NO 0.67 0.71 0.72 0.74 0.78
EV-79 18-Jun-15 10:08:00 NO 0.24 0.36 0.40 0.47 0.54
EV-80 21-Jun-15 23:58:00 YES -11.74 -8.25 -5.88 0.69 1.00
EV-81 22-Jun-15 07:32:00 NO 0.56 0.58 0.59 0.73 0.82
EV-82 22-Jun-15 22:34:00 NO 0.57 0.68 0.73 0.75 0.78
EV-83 23-Jun-15 06:11:00 NO 0.09 0.12 0.51 0.65 0.79
EV-84 24-Jun-15 00:48:00 YES -7.04 -3.53 0.06 0.22 1.00
EV-85 25-Jun-15 02:20:00 YES -6.69 -2.22 0.05 1.00 1.00
EV-86 25-Jun-15 19:53:00 YES -20.05 -9.45 -6.68 0.49 1.00
EV-87 28-Jun-15 06:11:00 NO 0.77 0.79 0.79 0.80 0.82
EV-88 28-Jun-15 21:37:00 NO 0.74 0.75 0.75 0.75 0.76
APPENDIX E-19-1
KRUSKAL-WALLIS TESTS RESULTS
Kruskal-Wallis (rank-based one-way ANOVA) tests of each first-flush variable against the zone factor ZT.
Each test was run as: kwt_b = anova(lm(rank(dataFF$<variable>) ~ ZT), data = dataFF); print(kwt_b).
All tests have Df = 2 for ZT and Df = 60 for the residuals.

Variable   Sum Sq (ZT)   Mean Sq (ZT)   F value   Pr(>F)        Sum Sq (Resid.)   Mean Sq (Resid.)
Hmean      2658          1329.01        4.3887    0.01664 *     18170             302.82
Dhmax      3430.5        1715.27        5.9142    0.004525 **   17401.5           290.02
ADWP       435.2         217.60         0.6401    0.5308        20396.3           339.94
Hmax       2083.3        1041.65        3.3388    0.06218 .     18719.2           311.99
P          1051.1        525.57         1.595     0.2114        19770.9           329.51
Imax       255.5         127.77         0.4607    0.633         16639.5           277.32
Imean      1722.2        861.11         2.7053    0.07501 .     19098.3           318.30
Dvert      1285.3        642.65         1.9728    0.148         19545.2           325.75
Dplu       539           269.50         0.7969    0.4554        20291             338.18
Hmeanb     298.6         149.30         0.4364    0.6484        20528.9           342.15
Dhmaxb     432           216            0.6353    0.5333        20400             340
ADWPb      472.2         236.12         0.6959    0.5026        20359.3           339.32
Hmaxb      164.3         82.15          0.2388    0.7883        20638.7           343.98
Pb         13.9          6.96           0.0201    0.9801        20801.6           346.69
Imaxb      322.5         161.24         0.5729    0.567         16887.5           281.46
Imeanb     1361.4        680.68         2.1006    0.1313        19442.1           324.04
Dvertb     285.4         142.71         0.4167    0.6611        20546.1           342.43
Dplub      114.8         57.42          0.1663    0.8471        20712.2           345.20

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
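The rank-based ANOVA used above is the classical construction of the Kruskal-Wallis comparison of each first-flush variable across the zone factor ZT. A minimal equivalent sketch with the built-in test, assuming dataFF and ZT are loaded as in the corresponding script (p-values may differ slightly from the F-approximation reported above):

# Sketch: direct Kruskal-Wallis tests for two of the variables
kruskal.test(dataFF$Hmean ~ ZT) # significant at the 5% level in the table above
kruskal.test(dataFF$Dhmax ~ ZT) # most significant variable in the table above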

APPENDIX E-19-2
DATABASE USED IN SVM METHOD FOR FF PREDICTION
Part 1
date   hour   Hmax (cm)   Hmean (cm)   Dhmax (cm)   ADWP (hour)   P (mm)   Imax (mm/h)   Imean (mm/h)   Dvert (min)   Dplu (min)
Aug-26-14 13:17:00 104.49 102.22 -0.50 21.12 0.76 15.36 1.77 184.00 26.00
Sep-03-14 17:55:00 108.69 104.09 -1.87 175.43 7.62 15.36 2.11 381.00 218.00
Sep-06-14 12:57:00 102.32 101.56 -0.35 63.92 0.51 15.36 5.12 243.00 6.00
Sep-14-14 05:43:00 102.30 101.65 -0.25 0.00 6.10 15.36 0.65 209.00 47.00
Sep-16-14 10:53:00 103.74 101.74 -0.60 52.35 0.51 15.36 2.36 241.00 13.00
Sep-20-14 05:42:00 106.46 103.28 -1.52 37.38 5.33 15.36 1.32 547.00 245.00
Sep-22-14 14:35:00 102.30 101.59 -0.36 52.57 0.76 15.36 6.58 148.00 7.00
Oct-08-14 17:10:00 122.21 113.53 -3.83 2.45 1.46 46.08 4.66 2824.00 481.00
Oct-10-14 16:08:00 122.00 110.86 -3.26 40.12 0.63 46.08 6.09 1869.00 159.00
Oct-12-14 19:08:00 105.84 102.77 -0.66 48.33 0.10 15.36 1.16 309.00 132.00
Oct-16-14 23:24:00 106.20 102.91 -1.08 97.85 0.10 15.36 1.35 247.00 114.00
Oct-18-14 17:50:00 119.98 111.54 -3.31 40.60 0.43 30.72 7.10 2599.00 93.00
Oct-22-14 15:08:00 106.24 102.21 -1.35 91.42 0.08 15.36 0.62 367.00 197.00
Jan-23-15 18:41:00 109.18 102.63 -0.92 123.73 0.12 15.36 1.35 384.00 137.00
Jan-24-15 14:53:00 114.85 102.48 -4.56 17.90 0.12 30.72 15.36 289.00 12.00
Jan-27-15 14:07:00 120.72 114.83 -7.32 71.00 0.54 76.80 1.81 1324.00 459.00
Jan-31-15 11:34:00 101.12 101.12 -0.06 85.77 0.01 15.36 15.36 143.00 1.00
Feb-02-15 21:41:00 101.07 101.07 -0.03 58.08 0.01 15.36 15.36 114.00 1.00
Feb-06-15 13:06:00 120.67 113.03 -6.12 87.38 1.49 61.44 0.23 5086.00 4183.00
Feb-10-15 16:13:00 114.67 102.81 -4.94 53.58 0.10 30.72 7.68 336.00 20.00
Feb-19-15 16:55:00 111.22 102.74 -3.54 216.22 0.09 15.36 6.28 230.00 22.00
Feb-24-15 15:39:00 114.03 104.00 -2.66 166.40 0.32 46.08 4.43 298.00 111.00
Mar-02-15 17:42:00 101.12 101.12 -0.04 144.37 0.03 15.36 0.19 272.00 244.00
Mar-03-15 15:00:00 114.20 103.61 -2.00 17.25 0.18 15.36 4.19 225.00 66.00
Mar-05-15 15:48:00 113.04 103.37 -5.19 47.72 0.20 46.08 6.83 209.00 45.00
Mar-13-15 16:55:00 111.46 103.23 -5.31 191.62 0.16 15.36 2.79 251.00 88.00
Mar-16-15 15:05:00 110.51 104.17 -4.67 68.72 0.20 30.72 2.90 266.00 106.00
Mar-16-15 22:02:00 102.32 101.63 -0.22 5.20 0.01 15.36 15.36 179.00 1.00
Mar-17-15 23:20:00 117.57 109.56 -3.23 25.47 0.53 30.72 1.54 1817.00 530.00
Mar-19-15 11:15:00 121.21 113.09 -2.88 27.10 1.13 46.08 5.09 2415.00 341.00
Mar-21-15 16:35:00 111.67 102.88 -3.09 47.67 0.13 15.36 5.12 134.00 39.00
Mar-23-15 13:11:00 120.50 109.48 -2.02 20.27 0.44 30.72 2.88 2221.00 235.00
Apr-04-15 05:46:00 101.43 101.43 -0.18 11.50 0.03 15.36 0.22 537.00 212.00
Apr-05-15 16:04:00 104.50 102.03 -0.74 21.98 0.04 15.36 3.41 185.00 18.00
Apr-17-15 22:06:00 107.83 102.57 -0.70 20.15 0.15 15.36 0.88 531.00 263.00
Apr-18-15 15:56:00 110.53 102.43 -1.94 37.47 0.21 30.72 0.92 566.00 350.00
May-13-15 20:16:00 105.29 102.26 -1.10 266.92 0.06 15.36 1.59 229.00 58.00
May-15-15 11:23:00 110.09 103.03 -2.66 38.17 0.14 15.36 3.21 352.00 67.00
May-16-15 21:55:00 101.01 101.05 -0.02 33.43 0.01 15.36 15.36 135.00 1.00
May-17-15 00:31:00 104.97 102.03 -0.60 1.47 0.05 15.36 0.59 337.00 131.00
May-17-15 17:53:00 102.35 101.87 -0.41 17.18 0.04 15.36 1.40 222.00 44.00
Jun-18-15 10:08:00 105.96 103.07 -10.94 63.88 0.04 15.36 0.85 131.00 72.00
Jun-21-15 22:02:00 110.34 103.08 -1.06 106.80 0.34 15.36 0.85 956.00 611.00
Jun-22-15 07:32:00 138.50 138.50 -0.14 0.00 0.00 0.00 0.00 0.00 0.00
APPENDIX E-19-2
DATABASE USED IN SVM METHOD FOR FF PREDICTION
Part 2

date          hour      Hmaxb(cm)  Hmeanb(cm)  Dhmaxb(cm)  ADWPb(hour)  Pb(mm)  Imaxb(mm/h)  Imeanb(mm/h)  Dvertb(min)  Dplub(min)  FF-obs
Aug-26-14 13:17:00 105.13 101.82 -0.73 336.00 2.03 15.36 0.17 732 819 FF
Sep-03-14 17:55:00 104.49 102.22 -0.01 21.12 0.76 15.36 1.77 26 184 FF
Sep-06-14 12:57:00 108.69 104.09 -1.87 175.43 7.62 15.36 2.11 218 381 FF
Sep-14-14 05:43:00 106.08 103.06 -0.87 70.38 3.05 15.36 0.61 300 300 NO
Sep-16-14 10:53:00 102.30 101.65 -0.25 0.00 6.10 15.36 0.65 47 209 FF
Sep-20-14 05:42:00 106.06 102.48 -0.81 0.50 2.03 15.36 0.51 241 435 FF
Sep-22-14 14:35:00 106.46 103.28 -1.52 37.38 5.33 15.36 1.32 245 547 FF
Oct-08-14 17:10:00 108.81 102.67 -2.10 0.17 0.19 15.36 15.36 19 211 NO
Oct-10-14 16:08:00 122.21 113.53 -3.83 2.45 1.46 46.08 4.66 481 2824 FF
Oct-12-14 19:08:00 122.00 110.86 -3.26 40.12 0.63 46.08 6.09 159 1869 FF
Oct-16-14 23:24:00 105.84 102.77 -0.66 48.33 0.10 15.36 1.16 132 309 FF
Oct-18-14 17:50:00 106.20 102.91 -1.08 97.85 0.10 15.36 1.35 114 247 FF
Oct-22-14 15:08:00 119.98 111.54 -3.31 40.60 0.43 30.72 7.10 93 2599 FF
Jan-23-15 18:41:00 112.85 102.56 -1.58 16.82 0.19 15.36 9.41 368 31 NO
Jan-24-15 14:53:00 109.18 102.63 -0.92 123.73 0.12 15.36 1.35 384 137 FF
Jan-27-15 14:07:00 114.85 102.48 -4.56 17.90 0.12 30.72 15.36 289 12 FF
Jan-31-15 11:34:00 120.72 114.83 -7.32 71.00 0.54 76.80 1.81 1324 459 NO
Feb-02-15 21:41:00 101.12 101.12 -0.06 85.77 0.01 15.36 15.36 143 1 NO
Feb-06-15 13:06:00 101.07 101.07 -0.03 58.08 0.01 15.36 15.36 114 1 NO
Feb-10-15 16:13:00 120.67 113.03 -6.12 2.67 1.49 61.44 0.23 5086 4183 FF
Feb-19-15 16:55:00 114.67 102.81 -4.94 53.58 0.10 30.72 7.68 336 20 FF
Feb-24-15 15:39:00 111.22 102.74 -3.54 216.22 0.09 15.36 6.28 230 22 FF
Mar-02-15 17:42:00 114.03 104.00 -2.66 166.40 0.32 46.08 4.43 298 111 NO
Mar-03-15 15:00:00 101.12 101.12 -0.04 144.37 0.03 15.36 0.19 272 244 FF
Mar-05-15 15:48:00 114.20 103.61 -2.00 17.25 0.18 15.36 4.19 225 66 FF
Mar-13-15 16:55:00 113.04 103.37 -5.19 47.72 0.20 46.08 6.83 209 45 FF
Mar-16-15 15:05:00 111.46 103.23 -5.31 191.62 0.16 15.36 2.79 251 88 FF
Mar-16-15 22:02:00 110.51 104.17 -4.67 68.72 0.20 30.72 2.90 266 106 NO
Mar-17-15 23:20:00 102.32 101.63 -0.22 5.20 0.01 15.36 15.36 179 1 FF
Mar-19-15 11:15:00 117.57 109.56 -3.23 25.47 0.53 30.72 1.54 1817 530 FF
Mar-21-15 16:35:00 121.21 113.09 -2.88 27.10 1.13 46.08 5.09 2415 341 FF
Mar-23-15 13:11:00 109.39 102.45 -2.20 23.33 0.09 30.72 9.22 217 15 FF
Apr-04-15 05:46:00 101.38 101.38 -0.25 6.25 0.01 15.36 15.36 169 1 NO
Apr-05-15 16:04:00 101.43 101.43 -0.18 11.50 0.03 15.36 0.22 537 212 NO
Apr-17-15 22:06:00 120.56 113.27 -2.89 17.22 0.56 30.72 3.45 1110 249 NO
Apr-18-15 15:56:00 107.83 102.57 -0.70 20.15 0.15 15.36 0.88 531 263 FF
May-13-15 20:16:00 106.23 102.24 -1.32 2.83 0.02 15.36 3.84 183 8 FF
May-15-15 11:23:00 105.29 102.26 -1.10 266.92 0.06 15.36 1.59 229 58 FF
May-16-15 21:55:00 110.09 103.03 -2.66 38.17 0.14 15.36 3.21 352 67 NO
May-17-15 00:31:00 101.01 101.05 -0.02 33.43 0.01 15.36 15.36 135 1 NO
May-17-15 17:53:00 104.97 102.03 -0.60 1.47 0.05 15.36 0.59 337 131 NO
Jun-18-15 10:08:00 109.56 103.38 -1.25 18.63 0.20 15.36 4.59 277 67 NO
Jun-21-15 22:02:00 105.96 103.07 -10.94 63.88 0.04 15.36 0.85 131 72 FF
Jun-22-15 07:32:00 110.34 103.08 -1.06 106.80 0.34 15.36 0.85 956 611 NO
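For context, the following is a minimal R sketch (using the e1071 package) of how a database like the one above can feed an SVM classifier for first-flush prediction. The data frame name (dataFF), the descriptor column names and the label column FF.obs are assumptions taken from the table layout, not the exact code of the thesis.

    library(e1071)  # provides svm()

    # Assumed: dataFF holds the descriptors of the previous event ("b" columns)
    # and the observed first-flush label FF.obs coded "FF" / "NO".
    dataFF$FF.obs <- factor(dataFF$FF.obs)

    svm_ff <- svm(FF.obs ~ Hmaxb + Hmeanb + Dhmaxb + ADWPb + Pb + Imaxb +
                    Imeanb + Dvertb + Dplub,
                  data = dataFF, kernel = "radial", cross = 10)

    summary(svm_ff)          # reports the 10-fold cross-validation accuracy
    predict(svm_ff, dataFF)  # FF / NO class predicted for each event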
APPENDIX E-19-3
GENERAL SIMULATION FOR USE A AND B

Day           Time      Time(min)  FFs  bypass  Obs  Predi.  Observed minutes: A  B  NU  Predicted minutes: A  B  NU
Ago-25-2014 04:33:00 475 NO 0 A NU 476 0 0 180 31 265
Ago-25-2014 15:42:00 106 NO 0 A B 107 0 0 41 22 44
Ago-26-2014 13:29:00 172 NO 0 A B 173 0 0 67 28 78
Ago-27-2014 11:33:00 74 NO 0 A B 75 0 0 31 32 12
Ago-28-2014 12:36:00 43 NO 0 A A 44 0 0 22 16 6
Sep-16-2014 11:11:00 223 NO 0 NU B 0 0 224 96 95 33
Sep-17-2014 22:22:00 190 NO 0 NU B 0 0 191 77 33 81
Sep-18-2014 12:33:00 418 NO 0 B NU 0 419 0 147 35 237
Sep-20-2014 05:53:00 510 FF 20 NU A 0 102 409 438 51 22
Sep-20-2014 14:25:00 24 NO 0 B A 0 24 1 21 3 1
Sep-21-2014 17:46:00 82 NO 0 B A 0 83 0 74 6 3
Sep-22-2014 14:35:00 136 NO 0 B A 0 137 0 103 23 11
Sep-26-2014 15:06:00 174 FF 20 NU NU 0 0 175 57 12 106
Sep-28-2014 19:20:00 18 NO 0 NU B 0 0 19 7 8 4
Oct-09-2014 08:29:00 1905 NO 0 B NU 0 1845 61 758 194 954
Oct-10-2014 16:16:00 1861 FF 20 B A 0 1862 0 1027 50 785
Oct-12-2014 19:23:00 294 NO 0 B A 0 295 0 237 25 33
Oct-16-2014 23:24:00 236 FF 20 NU A 0 0 237 184 20 33
Oct-18-2014 17:50:00 2592 FF 20 NU A 0 0 2593 1265 143 1185
Oct-22-2014 15:08:00 341 FF 20 NU B 0 0 342 161 128 53
Jan-23-2015 19:04:00 361 FF 20 A B 362 0 0 164 150 48
Jan-24-2015 14:54:00 288 FF 20 A B 289 0 0 133 109 47
Jan-27-2015 14:12:00 1319 FF 20 A B 1308 12 0 531 238 551
Jan-31-2015 11:47:00 130 NO 0 B B 0 131 0 58 53 20
Jan-31-2015 14:00:00 27 NO 0 B B 0 28 0 13 11 4
Jan-31-2015 15:08:00 16 NO 0 B B 0 17 0 8 6 3
Feb-02-2015 22:31:00 64 NO 0 A B 65 0 0 30 26 9
Feb-06-2015 13:23:00 5069 FF 20 A NU 4891 179 0 1749 514 2807
Feb-10-2015 16:22:00 327 FF 20 A B 328 0 0 172 109 47
Feb-19-2015 16:55:00 220 FF 20 A B 221 0 0 83 30 108
Feb-22-2015 07:01:00 493 NO 0 A B 492 2 0 208 197 89
Feb-24-2015 15:39:00 289 FF 20 A B 290 0 0 125 112 53
Mar-02-2015 19:29:00 165 NO 0 A B 166 0 0 75 61 30
Mar-03-2015 15:10:00 215 FF 20 A B 216 0 0 88 65 63
Mar-05-2015 15:51:00 206 FF 20 B B 0 206 1 84 83 40
Mar-13-2015 17:12:00 234 FF 20 B NU 0 235 0 90 27 118
Mar-16-2015 15:12:00 261 FF 20 NU B 0 11 251 108 80 74
Mar-16-2015 19:38:00 26 NO 0 NU A 0 0 27 15 8 4
Mar-16-2015 22:11:00 165 NO 0 NU B 0 0 166 77 65 24
Mar-17-2015 23:29:00 1807 FF 20 NU A 0 612 1196 945 73 790
Mar-19-2015 11:23:00 2407 FF 20 B A 0 2408 0 1309 72 1027
Mar-21-2015 16:43:00 226 FF 20 B A 0 227 0 186 23 18
Mar-22-2015 16:39:00 211 FF 20 B A 0 212 0 171 23 18
Mar-23-2015 13:11:00 2213 FF 20 B NU 0 2209 5 984 208 1022
Mar-29-2015 05:42:00 286 FF 20 NU B 0 0 287 127 119 41
Mar-29-2015 11:30:00 334 NO 0 NU A 0 0 335 222 65 48
Mar-30-2015 13:42:00 1386 FF 20 NU NU 0 0 1387 603 70 714
Mar-31-2015 13:47:00 161 NO 0 NU B 0 0 162 70 63 29
Mar-31-2015 16:29:00 23 NO 0 NU B 0 0 24 10 8 6
Apr-03-2015 08:59:00 413 FF 20 NU A 0 0 414 355 42 17
Apr-03-2015 18:43:00 142 NO 0 NU A 0 0 143 122 14 7
Apr-04-2015 09:28:00 245 NO 0 NU A 0 0 248 197 34 17
Apr-05-2015 16:23:00 166 NO 0 NU B 0 0 167 77 66 24
Apr-08-2015 15:35:00 109 FF 20 NU B 0 0 110 59 38 13
Apr-13-2015 12:40:00 135 NO 0 NU B 0 0 137 60 57 20
Apr-16-2015 04:09:00 175 NO 0 NU B 0 0 176 87 69 20
Apr-16-2015 10:32:00 414 FF 20 NU B 0 0 418 167 165 86
Apr-16-2015 21:54:00 1105 FF 20 NU NU 0 0 1106 402 37 667
Apr-17-2015 22:57:00 480 NO 0 NU A 0 0 481 427 39 15
Apr-18-2015 09:52:00 165 NO 0 NU A 0 0 170 147 16 7
Apr-18-2015 16:02:00 557 FF 20 NU A 0 0 558 472 49 37
Apr-19-2015 01:23:00 74 NO 0 NU A 0 0 76 64 8 4
May-02-2015 14:40:00 329 NO 0 A B 332 0 0 158 65 109
May-05-2015 10:27:00 124 FF 20 A B 129 0 0 60 49 20
May-06-2015 09:51:00 46 FF 20 A B 47 0 0 24 17 6
May-13-2015 20:40:00 205 FF 20 A NU 206 0 0 72 14 120
May-15-2015 11:23:00 352 FF 20 A B 353 0 0 202 109 42
May-16-2015 23:02:00 68 NO 0 NU B 0 0 69 51 12 6
May-17-2015 00:44:00 324 NO 0 NU B 0 0 325 172 124 29
May-17-2015 17:53:00 222 NO 0 NU B 0 0 223 116 84 23
May-19-2015 12:11:00 269 NO 0 B A 0 270 0 224 33 13
May-19-2015 20:59:00 34 FF 20 B B 0 35 0 28 5 2
Jun-04-2015 06:48:00 198 NO 0 NU A 0 0 199 133 43 23
Jun-04-2015 10:32:00 218 NO 0 NU B 0 0 219 130 59 30
Jun-05-2015 06:11:00 139 NO 0 NU B 0 0 144 49 38 57
Jun-05-2015 10:00:00 363 NO 0 NU B 0 164 201 122 30 213
Jun-07-2015 09:49:00 110 NO 0 B NU 0 111 0 31 4 76
Jun-11-2015 07:43:00 196 FF 20 B NU 0 201 0 72 78 51
Jun-12-2015 02:37:00 26 NO 0 B B 0 27 0 10 11 6
Jun-13-2015 01:56:00 1063 NO 0 NU NU 0 2 1063 429 20 616
Jun-14-2015 11:45:00 188 FF 20 NU NU 0 0 190 51 6 133
Jun-14-2015 17:34:00 41 NO 0 NU NU 0 0 42 11 2 29
Jun-15-2015 17:16:00 265 FF 20 NU B 0 0 266 168 63 35
Jun-18-2015 08:21:00 431 NO 0 NU NU 0 0 442 107 18 317
Jun-20-2015 00:04:00 42 NO 0 NU NU 0 0 43 13 2 28
Jun-21-2015 23:58:00 449 FF 20 NU A 0 0 450 364 40 46
Jun-22-2015 07:32:00 504 NO 0 NU NU 0 0 506 198 105 203
Jun-22-2015 22:34:00 422 NO 0 NU B 0 0 423 186 174 63
Jun-23-2015 06:11:00 695 NO 0 NU NU 0 0 696 260 63 373
Jun-24-2015 00:48:00 997 NO 0 NU NU 0 0 998 396 114 488
Jun-25-2015 02:20:00 916 NO 0 NU NU 0 0 917 293 44 580
Jun-26-2015 19:53:00 3001 NO 0 NU B 0 0 3002 1465 977 560
Jun-27-2015 06:11:00 591 NO 0 NU B 0 0 592 270 225 97
Jun-28-2015 21:37:00 263 NO 0 NU B 0 0 264 111 92 61
APPENDIX E-19-4
GENERAL SIMULATION FOR USE B AND NU

No.    Day           Time      Predi.  Obs  Observed minutes: B  NU  Predicted minutes: B  NU
Ev-1 Ago-25-2014 04:33:00 NU# B# 476# 0# 211# 265#
Ev-2 Ago-25-2014 15:42:00 B# B# 107# 0# 63# 44#
Ev-3 Ago-26-2014 13:29:00 B# B# 173# 0# 95# 78#
Ev-4 Ago-27-2014 11:33:00 B# B# 75# 0# 63# 12#
Ev-5 Ago-28-2014 12:36:00 B# B# 44# 0# 38# 6#
Ev-6 Sep-16-2014 11:11:00 B# NU# 0# 224# 191# 33#
Ev-7 Sep-17-2014 22:22:00 B# NU# 0# 191# 110# 81#
Ev-8 Sep-18-2014 12:33:00 NU# B# 419# 0# 182# 237#
Ev-9 Sep-20-2014 05:53:00 B# NU# 102# 409# 489# 22#
Ev-10 Sep-20-2014 14:25:00 B# B# 24# 1# 24# 1#
Ev-11 Sep-21-2014 17:46:00 B# B# 83# 0# 80# 3#
Ev-12 Sep-22-2014 14:35:00 B# B# 137# 0# 126# 11#
Ev-13 Sep-26-2014 15:06:00 NU# NU# 0# 175# 69# 106#
Ev-14 Sep-28-2014 19:20:00 B# NU# 0# 19# 15# 4#
Ev-15 Oct-09-2014 08:29:00 NU# B# 1845# 61# 952# 954#
Ev-16 Oct-10-2014 16:16:00 B# B# 1862# 0# 1077# 785#
Ev-17 Oct-12-2014 19:23:00 B# B# 295# 0# 262# 33#
Ev-18 Oct-16-2014 23:24:00 B# NU# 0# 237# 204# 33#
Ev-19 Oct-18-2014 17:50:00 B# NU# 0# 2593# 1408# 1185#
Ev-20 Oct-22-2014 15:08:00 B# NU# 0# 342# 289# 53#
Ev-21 Jan-23-2015 19:04:00 B# B# 362# 0# 314# 48#
Ev-22 Jan-24-2015 14:54:00 B# B# 289# 0# 242# 47#
Ev-23 Jan-27-2015 14:12:00 B# B# 1320# 0# 769# 551#
Ev-24 Jan-31-2015 11:47:00 B# B# 131# 0# 111# 20#
Ev-25 Jan-31-2015 14:00:00 B# B# 28# 0# 24# 4#
Ev-26 Jan-31-2015 15:08:00 B# B# 17# 0# 14# 3#
Ev-27 Feb-02-2015 22:31:00 B# B# 65# 0# 56# 9#
Ev-28 Feb-06-2015 13:23:00 NU# B# 5070# 0# 2263# 2807#
Ev-29 Feb-10-2015 16:22:00 B# B# 328# 0# 281# 47#
Ev-30 Feb-19-2015 16:55:00 B# B# 221# 0# 113# 108#
Ev-31 Feb-22-2015 07:01:00 B# B# 494# 0# 405# 89#
Ev-32 Feb-24-2015 15:39:00 B# B# 290# 0# 237# 53#
Ev-33 Mar-02-2015 19:29:00 B# B# 166# 0# 136# 30#
Ev-34 Mar-03-2015 15:10:00 B# B# 216# 0# 153# 63#
Ev-35 Mar-05-2015 15:51:00 B# B# 206# 1# 167# 40#
Ev-36 Mar-13-2015 17:12:00 NU# B# 235# 0# 117# 118#
Ev-37 Mar-16-2015 15:12:00 B# NU# 11# 251# 188# 74#
Ev-38 Mar-16-2015 19:38:00 B# NU# 0# 27# 23# 4#
Ev-39 Mar-16-2015 22:11:00 B# NU# 0# 166# 142# 24#
Ev-40 Mar-17-2015 23:29:00 B# NU# 612# 1196# 1018# 790#
Ev-41 Mar-19-2015 11:23:00 B# B# 2408# 0# 1381# 1027#
Ev-42 Mar-21-2015 16:43:00 B# B# 227# 0# 209# 18#
Ev-43 Mar-22-2015 16:39:00 B# B# 212# 0# 194# 18#
Ev-44 Mar-23-2015 13:11:00 B# B# 2209# 5# 1192# 1022#
Ev-45 Mar-29-2015 05:42:00 B# NU# 0# 287# 246# 41#
Ev-46 Mar-29-2015 11:30:00 B# NU# 0# 335# 287# 48#
Ev-47 Mar-30-2015 13:42:00 NU# NU# 0# 1387# 673# 714#
Ev-48 Mar-31-2015 13:47:00 B# NU# 0# 162# 133# 29#
Ev-49 Mar-31-2015 16:29:00 B# NU# 0# 24# 18# 6#
Ev-50 Apr-03-2015 08:59:00 B# NU# 0# 414# 397# 17#
Ev-51 Apr-03-2015 18:43:00 B# NU# 0# 143# 136# 7#
Ev-52 Apr-04-2015 09:28:00 B# NU# 0# 248# 231# 17#
Ev-53 Apr-05-2015 16:23:00 B# NU# 0# 167# 143# 24#
Ev-54 Apr-08-2015 15:35:00 B# NU# 0# 110# 97# 13#
Ev-55 Apr-13-2015 12:40:00 B# NU# 0# 137# 117# 20#
Ev-56 Apr-16-2015 04:09:00 B# NU# 0# 176# 156# 20#
Ev-57 Apr-16-2015 10:32:00 B# NU# 0# 418# 332# 86#
Ev-58 Apr-16-2015 21:54:00 NU# NU# 0# 1106# 439# 667#
Ev-59 Apr-17-2015 22:57:00 B# NU# 0# 481# 466# 15#
Ev-60 Apr-18-2015 09:52:00 B# NU# 0# 170# 163# 7#
Ev-61 Apr-18-2015 16:02:00 B# NU# 0# 558# 521# 37#
Ev-62 Apr-19-2015 01:23:00 B# NU# 0# 76# 72# 4#
Ev-63 May-02-2015 14:40:00 B# B# 157# 0# 146# 11#
Ev-63a May-02-2015 14:40:00 NU# B# 175# 0# 77# 98#
Ev-64 May-05-2015 10:27:00 B# B# 129# 0# 109# 20#
Ev-65 May-06-2015 09:51:00 B# B# 47# 0# 41# 6#
Ev-66 May-13-2015 20:40:00 NU# B# 206# 0# 86# 120#
Ev-67 May-15-2015 11:23:00 B# B# 353# 0# 311# 42#
Ev-68 May-16-2015 23:02:00 B# NU# 0# 69# 63# 6#
Ev-69 May-17-2015 00:44:00 B# NU# 0# 325# 296# 29#
Ev-70 May-17-2015 17:53:00 B# NU# 0# 223# 200# 23#
Ev-71 May-19-2015 12:11:00 B# B# 270# 0# 257# 13#
Ev-72 May-19-2015 20:59:00 B# B# 35# 0# 33# 2#
Ev-73 Jun-04-2015 06:48:00 B# NU# 0# 199# 176# 23#
Ev-74 Jun-04-2015 10:32:00 B# NU# 0# 219# 189# 30#
Ev-75 Jun-05-2015 06:11:00 NU# NU# 0# 144# 87# 57#
Ev-76 Jun-05-2015 10:00:00 NU# NU# 164# 201# 152# 213#
Ev-77 Jun-07-2015 09:49:00 NU# B# 111# 0# 35# 76#
Ev-78 Jun-11-2015 07:43:00 B# B# 201# 0# 150# 51#
Ev-79 Jun-12-2015 02:37:00 B# B# 27# 0# 21# 6#
Ev-80 Jun-13-2015 01:56:00 NU# NU# 2# 1063# 449# 616#
Ev-81 Jun-14-2015 11:45:00 NU# NU# 0# 190# 57# 133#
Ev-82 Jun-14-2015 17:34:00 NU# NU# 0# 42# 13# 29#
Ev-83 Jun-15-2015 17:16:00 B# NU# 0# 266# 231# 35#
Ev-84 Jun-18-2015 08:21:00 NU# NU# 0# 442# 125# 317#
Ev-85 Jun-20-2015 00:04:00 NU# NU# 0# 43# 15# 28#
Ev-86 Jun-21-2015 23:58:00 NU# NU# 0# 450# 404# 46#
Ev-87 Jun-22-2015 07:32:00 NU# NU# 0# 506# 303# 203#
Ev-88 Jun-22-2015 22:34:00 B# NU# 0# 423# 360# 63#
Ev-89 Jun-23-2015 06:11:00 NU# NU# 0# 696# 323# 373#
Ev-90 Jun-24-2015 00:48:00 NU# NU# 0# 998# 510# 488#
Ev-91 Jun-25-2015 02:20:00 NU# NU# 0# 3919# 2779# 1140#
Ev-92 Jun-26-2015 19:53:00 B# NU# 0# 3002# 2442# 560#
Ev-93 Jun-27-2015 06:11:00 B# NU# 0# 592# 495# 97#
Ev-94 Jun-28-2015 21:37:00 B# NU# 0# 264# 203# 61#
APPENDIX E-19-5
DETAILED SIMULATION FOR USE B AND NU

Day           Beginning time  Predi.  Obs  Observed minutes: B  NU  Predicted minutes: B  NU  Rb  Rnu  CK
Ago-25-2014 04:33:00 NU# B# 476# 0# 211# 265# 0.44# 0.00# 0.000#
Ago-25-2014 15:42:00 B# B# 107# 0# 63# 44# 0.59# 0.00# 0.000#
Ago-26-2014 13:29:00 B# B# 173# 0# 95# 78# 0.55# 0.00# 0.000#
Ago-27-2014 11:33:00 B# B# 75# 0# 63# 12# 0.84# 0.00# 0.000#
Ago-28-2014 12:36:00 B# B# 44# 0# 38# 6# 0.86# 0.00# 0.000#
Sep-16-2014 11:11:00 B# NU# 0# 224# 191# 33# 0.00# 0.15# 0.00#
Sep-17-2014 22:22:00 B# NU# 0# 191# 110# 81# 0.00# 0.42# 0.00#
Sep-18-2014 12:33:00 NU# B# 419# 0# 182# 237# 0.43# 0.00# 0.00#
Sep-20-2014 05:53:00 B# NU# 102# 409# 489# 22# 0.26# 0.06# 0.022#
Sep-20-2014 14:25:00 B# B# 24# 1# 24# 1# 1.00# 1.00# 1.000#
Sep-21-2014 17:46:00 B# B# 83# 0# 80# 3# 0.96# 0.00# 0.000#
Sep-22-2014 14:35:00 B# B# 137# 0# 126# 11# 0.92# 0.00# 0.000#
Sep-26-2014 15:06:00 NU# NU# 0# 175# 69# 106# 0.00# 0.61# 0.000#
Sep-28-2014 19:20:00 B# NU# 0# 19# 15# 4# 0.00# 0.21# 0.000#
Oct-09-2014 08:29:00 NU# B# 1845# 61# 952# 954# 0.52# 1.00# 0.064#
Oct-10-2014 16:16:00 B# B# 1862# 0# 1077# 785# 0.58# 0.00# 0.000#
Oct-12-2014 19:23:00 B# B# 295# 0# 262# 33# 0.89# 0.00# 0.000#
Oct-16-2014 23:24:00 B# NU# 0# 237# 204# 33# 0.00# 0.14# 0.000#
Oct-18-2014 17:50:00 B# NU# 0# 2593# 1408# 1185# 0.00# 0.46# 0.000#
Oct-22-2014 15:08:00 B# NU# 0# 342# 289# 53# 0.00# 0.15# 0.000#
Jan-23-2015 19:04:00 B# B# 362# 0# 314# 48# 0.87# 0.00# 0.000#
Jan-24-2015 14:54:00 B# B# 289# 0# 242# 47# 0.84# 0.00# 0.000#
Jan-27-2015 14:12:00 B# B# 1320# 0# 769# 551# 0.58# 0.00# 0.000#
Jan-31-2015 11:47:00 B# B# 131# 0# 111# 20# 0.85# 0.00# 0.000#
Jan-31-2015 14:00:00 B# B# 28# 0# 24# 4# 0.86# 0.00# 0.000#
Jan-31-2015 15:08:00 B# B# 17# 0# 14# 3# 0.82# 0.00# 0.000#
Feb-02-2015 22:31:00 B# B# 65# 0# 56# 9# 0.86# 0.00# 0.000#
Feb-06-2015 13:23:00 NU# B# 5070# 0# 2263# 2807# 0.45# 0.00# 0.000#
Feb-10-2015 16:22:00 B# B# 328# 0# 281# 47# 0.86# 0.00# 0.000#
Feb-19-2015 16:55:00 B# B# 221# 0# 113# 108# 0.51# 0.00# 0.000#
Feb-22-2015 07:01:00 B# B# 494# 0# 405# 89# 0.82# 0.00# 0.000#
Feb-24-2015 15:39:00 B# B# 290# 0# 237# 53# 0.82# 0.00# 0.000#
Mar-02-2015 19:29:00 B# B# 166# 0# 136# 30# 0.82# 0.00# 0.000#
Mar-03-2015 15:10:00 B# B# 216# 0# 153# 63# 0.71# 1.00# 0.000#
Mar-05-2015 15:51:00 B# B# 206# 1# 167# 40# 0.81# 1.00# 0.040#
Mar-13-2015 17:12:00 NU# B# 235# 0# 117# 118# 0.50# 0.00# 0.000#
Mar-16-2015 15:12:00 B# NU# 11# 251# 188# 74# 1.00# 0.42# 0.034#
Mar-16-2015 19:38:00 B# NU# 0# 27# 23# 4# 0.00# 0.15# 0.000#
Mar-16-2015 22:11:00 B# NU# 0# 166# 142# 24# 0.00# 0.14# 0.000#
Mar-17-2015 23:29:00 B# NU# 612# 1196# 1018# 790# 1.00# 0.66# 0.568#
Mar-19-2015 11:23:00 B# B# 2408# 0# 1381# 1027# 0.57# 0.00# 0.000#
Mar-21-2015 16:43:00 B# B# 227# 0# 209# 18# 0.92# 0.00# 0.000#
Mar-22-2015 16:39:00 B# B# 212# 0# 194# 18# 0.92# 0.00# 0.000#
Mar-23-2015 13:11:00 B# B# 2209# 5# 1192# 1022# 1.00# 0.54# 0.005#
Mar-29-2015 05:42:00 B# NU# 0# 287# 246# 41# 0.00# 0.14# 0.000#
Mar-29-2015 11:30:00 B# NU# 0# 335# 287# 48# 0.00# 0.14# 0.000#
Mar-30-2015 13:42:00 NU# NU# 0# 1387# 673# 714# 0.00# 0.51# 0.000#
Mar-31-2015 13:47:00 B# NU# 0# 162# 133# 29# 0.00# 0.18# 0.000#
Mar-31-2015 16:29:00 B# NU# 0# 24# 18# 6# 0.00# 0.25# 0.000#
Apr-03-2015 08:59:00 B# NU# 0# 414# 397# 17# 0.00# 0.04# 0.000#
Apr-03-2015 18:43:00 B# NU# 0# 143# 136# 7# 0.00# 0.05# 0.000#
Apr-04-2015 09:28:00 B# NU# 0# 248# 231# 17# 0.00# 0.07# 0.000#
Apr-05-2015 16:23:00 B# NU# 0# 167# 143# 24# 0.00# 0.14# 0.000#
Apr-08-2015 15:35:00 B# NU# 0# 110# 97# 13# 0.00# 0.12# 0.000#
Apr-13-2015 12:40:00 B# NU# 0# 137# 117# 20# 0.00# 0.15# 0.000#
Apr-16-2015 04:09:00 B# NU# 0# 176# 156# 20# 0.00# 0.11# 0.000#
Apr-16-2015 10:32:00 B# NU# 0# 418# 332# 86# 0.00# 0.21# 0.000#
Apr-16-2015 21:54:00 NU# NU# 0# 1106# 439# 667# 0.00# 0.60# 0.000#
Apr-17-2015 22:57:00 B# NU# 0# 481# 466# 15# 0.00# 0.03# 0.000#
Apr-18-2015 09:52:00 B# NU# 0# 170# 163# 7# 0.00# 0.04# 0.000#
Apr-18-2015 16:02:00 B# NU# 0# 558# 521# 37# 0.00# 0.07# 0.000#
Apr-19-2015 01:23:00 B# NU# 0# 76# 72# 4# 0.00# 0.05# 0.000#
May-02-2015 14:40:00 B# B# 332# 0# 223# 109# 0.67# 0.00# 0.000#
May-02-2015 17:23:00 NU# B# 175# 0# 77# 98# 0.43 0.00 0.000
May-05-2015 10:27:00 B# B# 129# 0# 109# 20# 0.84# 0.00# 0.000#
May-06-2015 09:51:00 B# B# 47# 0# 41# 6# 0.87# 0.00# 0.000#
May-13-2015 20:40:00 NU# B# 206# 0# 86# 120# 0.42# 0.00# 0.000#
May-15-2015 11:23:00 B# B# 353# 0# 311# 42# 0.88# 0.00# 0.000#
May-16-2015 23:02:00 B# NU# 0# 69# 63# 6# 0.00# 0.09# 0.000#
May-17-2015 00:44:00 B# NU# 0# 325# 296# 29# 0.00# 0.09# 0.000#
May-17-2015 17:53:00 B# NU# 0# 223# 200# 23# 0.00# 0.10# 0.000#
May-19-2015 12:11:00 B# B# 270# 0# 257# 13# 0.95# 0.00# 0.000#
May-19-2015 20:59:00 B# B# 35# 0# 33# 2# 0.94# 0.00# 0.000#
Jun-04-2015 06:48:00 B# NU# 0# 199# 176# 23# 0.00# 0.12# 0.000#
Jun-04-2015 10:32:00 B# NU# 0# 219# 189# 30# 0.00# 0.14# 0.000#
Jun-05-2015 06:11:00 NU# NU# 0# 144# 87# 57# 0.00# 0.40# 0.000#
Jun-05-2015 10:00:00 NU# NU# 164# 201# 152# 213# 0.93# 1.00# 0.933#
Jun-07-2015 09:49:00 NU# B# 111# 0# 35# 76# 0.32# 0.00# 0.000#
Jun-11-2015 07:43:00 B# B# 201# 0# 150# 51# 0.75# 0.00# 0.000#
Jun-12-2015 02:37:00 B# B# 27# 0# 21# 6# 0.78# 0.00# 0.000#
Jun-13-2015 01:56:00 NU# NU# 2# 1063# 449# 616# 1.00# 0.58# 0.005#
Jun-14-2015 11:45:00 NU# NU# 0# 190# 57# 133# 0.00# 0.70# 0.000#
Jun-14-2015 17:34:00 NU# NU# 0# 42# 13# 29# 0.00# 0.69# 0.000#
Jun-15-2015 17:16:00 B# NU# 0# 266# 231# 35# 0.00# 0.13# 0.000#
Jun-18-2015 08:21:00 NU# NU# 0# 442# 125# 317# 0.00# 0.72# 0.000#
Jun-20-2015 00:04:00 NU# NU# 0# 43# 15# 28# 0.00# 0.65# 0.000#
Jun-21-2015 23:58:00 NU# NU# 0# 450# 404# 46# 0.00# 0.10# 0.000#
Jun-22-2015 07:32:00 NU# NU# 0# 506# 303# 203# 0.00# 0.40# 0.000#
Jun-22-2015 22:34:00 B# NU# 0# 423# 360# 63# 0.00# 0.15# 0.000#
Jun-23-2015 06:11:00 NU# NU# 0# 696# 323# 373# 0.00# 0.54# 0.000#
Jun-24-2015 00:48:00 NU# NU# 0# 998# 510# 488# 0.00# 0.49# 0.000#
Jun-25-2015 02:20:00 NU# NU# 0# 3919# 2779# 1140# 0.00# 0.29# 0.000#
Jun-26-2015 19:53:00 B# NU# 0# 3002# 2442# 560# 0.00# 0.19# 0.000#
Jun-27-2015 06:11:00 B# NU# 0# 592# 495# 97# 0.00# 0.16# 0.000#
Jun-28-2015 21:37:00 B# NU# 0# 264# 203# 61# 0.00# 0.23# 0.000#
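As a reference for how per-event reliabilities and a kappa coefficient of this kind can be computed, the following is a minimal R sketch under stated assumptions: conf is a 2x2 matrix of minutes with the observed use in rows and the predicted use in columns (uses B and NU), the reliability of a use is taken as the fraction of its observed minutes that are also predicted for that use, and the kappa coefficient is Cohen's kappa on the minute counts. The exact definitions used in the thesis may differ; this is only an illustration.

    # Cohen's kappa from a square contingency matrix of minutes
    kappa_coefficient <- function(conf) {
      n  <- sum(conf)
      po <- sum(diag(conf)) / n                         # observed agreement
      pe <- sum(rowSums(conf) * colSums(conf)) / n^2    # chance agreement
      (po - pe) / (1 - pe)
    }

    # Reliability of one use: observed minutes of that use also predicted as that use
    reliability <- function(conf, use) conf[use, use] / sum(conf[use, ])

    # Hypothetical minute counts for a single event
    conf <- matrix(c(150, 50,    # observed B  -> predicted B, NU
                      20, 180),  # observed NU -> predicted B, NU
                   nrow = 2, byrow = TRUE,
                   dimnames = list(c("B", "NU"), c("B", "NU")))
    kappa_coefficient(conf)   # kappa for the event
    reliability(conf, "B")    # Rb
    reliability(conf, "NU")   # Rnu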
APPENDIX E-19-6
DATABASE USED IN SVM METHOD FOR FF PREDICTION
Part 1

Date        Hour      Hmax(cm)  Hmean(cm)  Dhmax(cm)  ADWP(hour)  P(mm)  Imax(mm/h)  Imean(mm/h)  Dvert(min)  Dplu(min)
13-Aug-14 23:42:00 102.30 101.70 -0.36 34.68 0.02 15.36 6.14 174 5
27-Aug-14 11:33:00 101.01 101.02 -0.01 20.70 0.01 15.36 15.36 142 1
Sep-03-14 17:55:00 108.69 104.09 -1.87 175.43 7.62 15.36 2.11 381 218
Sep-06-14 12:57:00 102.30 101.60 -0.35 63.92 0.51 15.36 5.12 243 6
Sep-14-14 05:43:00 102.30 101.70 -0.25 0.00 6.10 15.36 0.65 209 47
Sep-16-14 10:53:00 103.70 101.70 -0.60 52.35 0.51 15.36 2.36 241 13
Sep-20-14 05:42:00 106.50 103.30 -1.52 37.38 5.33 15.36 1.32 547 245
Sep-22-14 14:35:00 102.30 101.60 -0.36 52.57 0.76 15.36 6.58 148 7
Feb-02-15 21:41:00 101.07 101.07 -0.03 58.08 0.01 15.36 15.36 114 1
Feb-06-15 13:06:00 120.67 113.03 -6.12 87.38 1.49 61.44 0.23 5086 4183
Feb-10-15 16:13:00 114.67 102.81 -4.94 53.58 0.10 30.72 7.68 336 20
Feb-19-15 16:55:00 111.22 102.74 -3.54 216.22 0.09 15.36 6.28 230 22
Feb-24-15 15:39:00 114.03 104.00 -2.66 166.40 0.32 46.08 4.43 298 111
Mar-02-15 17:42:00 101.12 101.12 -0.04 144.37 0.03 15.36 0.19 272 244
Mar-03-15 15:00:00 114.20 103.61 -2.00 17.25 0.18 15.36 4.19 225 66
Mar-05-15 15:48:00 113.04 103.37 -5.19 47.72 0.20 46.08 6.83 209 45
Mar-13-15 16:55:00 111.46 103.23 -5.31 191.62 0.16 15.36 2.79 251 88
Mar-16-15 15:05:00 110.51 104.17 -4.67 68.72 0.20 30.72 2.90 266 106
Mar-16-15 22:02:00 102.32 101.63 -0.22 5.20 0.01 15.36 15.36 179 1
Mar-17-15 23:20:00 117.57 109.56 -3.23 25.47 0.53 30.72 1.54 1817 530
Mar-19-15 11:15:00 121.21 113.09 -2.88 27.10 1.13 46.08 5.09 2415 341
May-13-15 20:16:00 105.29 102.26 -1.10 266.92 0.06 15.36 1.59 229 58
May-15-15 11:23:00 110.09 103.03 -2.66 38.17 0.14 15.36 3.21 352 67
May-16-15 21:55:00 101.01 101.05 -0.02 33.43 0.01 15.36 15.36 135 1
May-17-15 00:31:00 104.97 102.03 -0.60 1.47 0.05 15.36 0.59 337 131
May-17-15 17:53:00 102.35 101.87 -0.41 17.18 0.04 15.36 1.40 222 44
Part'2'
date        hour      Hmaxb(cm)  Hmeanb(cm)  Dhmaxb(cm)  ADWPb(hour)  Pb(mm)  Imaxb(mm/h)  Imeanb(mm/h)  Dvertb(min)  Dplub(min)  FF-obs
13-Aug-14 23:42:00 0.01$ 0.25$ 15.36$ 15.36$ 142.00$ 1.00$ 1.90$ 264$ 120$ NO$
27-Aug-14 11:33:00 104.49$ 102.22$ -0.50$ 21.12$ 0.76$ 15.36$ 1.77$ 184$ 26$ NO$
Sep-03-14 17:55:00 104.50$ 102.20$ -0.50$ 21.10$ 0.76$ 15.36$ 1.77$ 26$ 184$ FF$
Sep-06-14 12:57:00 108.70$ 104.10$ -1.87$ 175.40$ 7.62$ 15.36$ 2.11$ 218$ 381$ FF$
Sep-14-14 05:43:00 106.10$ 103.10$ -0.87$ 70.40$ 3.05$ 15.36$ 0.61$ 300$ 300$ NO$
Sep-16-14 10:53:00 102.30$ 101.70$ -0.25$ 0.00$ 6.10$ 15.36$ 0.65$ 47$ 209$ NO$
Sep-20-14 05:42:00 106.10$ 102.50$ -0.81$ 0.50$ 2.03$ 15.36$ 0.51$ 241$ 435$ NO$
Sep-22-14 14:35:00 106.50$ 103.30$ -1.52$ 37.40$ 5.33$ 15.36$ 1.32$ 245$ 547$ NO$
Feb-02-15 21:41:00 101.12$ 101.12$ -0.06$ 85.77$ 0.01$ 15.36$ 15.36$ 143$ 1$ NO$
Feb-06-15 13:06:00 101.07$ 101.07$ -0.03$ 58.08$ 0.01$ 15.36$ 15.36$ 114$ 1$ FF$
Feb-10-15 16:13:00 120.67$ 113.03$ 0.00$ 2.67$ 1.49$ 61.44$ 0.23$ 5086$ 4183$ NO$
Feb-19-15 16:55:00 114.67$ 102.81$ -4.94$ 53.58$ 0.10$ 30.72$ 7.68$ 336$ 20$ NO$
Feb-24-15 15:39:00 111.22$ 102.74$ -3.54$ 216.22$ 0.09$ 15.36$ 6.28$ 230$ 22$ FF$
Mar-02-15 17:42:00 114.03$ 104.00$ -2.66$ 166.40$ 0.32$ 46.08$ 4.43$ 298$ 111$ NO$
Mar-03-15 15:00:00 101.12$ 101.12$ -0.04$ 144.37$ 0.03$ 15.36$ 0.19$ 272$ 244$ NO$
Mar-05-15 15:48:00 114.20$ 103.61$ -2.00$ 17.25$ 0.18$ 15.36$ 4.19$ 225$ 66$ NO$
Mar-13-15 16:55:00 113.04$ 103.37$ -5.19$ 47.72$ 0.20$ 46.08$ 6.83$ 209$ 45$ NO$
Mar-16-15 15:05:00 111.46$ 103.23$ -5.31$ 191.62$ 0.16$ 15.36$ 2.79$ 251$ 88$ NO$
Mar-16-15 22:02:00 110.51$ 104.17$ -4.67$ 68.72$ 0.20$ 30.72$ 2.90$ 266$ 106$ NO$
Mar-17-15 23:20:00 102.32$ 101.63$ -0.22$ 5.20$ 0.01$ 15.36$ 15.36$ 179$ 1$ NO$
Mar-19-15 11:15:00 117.57$ 109.56$ -3.24$ 25.47$ 0.53$ 30.72$ 1.54$ 1817$ 530$ FF$
May-13-15 20:16:00 106.23$ 102.24$ -1.32$ 2.83$ 0.02$ 15.36$ 3.84$ 183$ 8$ NO$
May-15-15 11:23:00 105.29$ 102.26$ -1.10$ 266.92$ 0.06$ 15.36$ 1.59$ 229$ 58$ FF$
May-16-15 21:55:00 110.09$ 103.03$ -2.66$ 38.17$ 0.14$ 15.36$ 3.21$ 352$ 67$ NO$
May-17-15 00:31:00 101.01$ 101.05$ -0.02$ 33.43$ 0.01$ 15.36$ 15.36$ 135$ 1$ NO$
May-17-15 17:53:00 104.97$ 102.03$ -0.60$ 1.47$ 0.05$ 15.36$ 0.59$ 337$ 131$ NO$
APPENDIX E-19-7
DATABASE USED IN SVM METHOD FOR USES PREDICTION
Part 1

Date          Hour      Hmax(cm)  Hmean(cm)  Dhmax(cm)  ADWP(hour)  P(mm)  Imax(mm/h)  Imean(mm/h)  Dvert(min)  Dplu(min)
Ago-26-2014 13:29:00 104.49 102.22 -0.50 21.12 0.76 15.36 1.77 184.00 26.00
Sep-16-2014 11:11:00 103.74 101.74 -0.60 52.35 0.51 15.36 2.36 241 13
Sep-20-2014 05:53:00 106.46 103.28 -1.52 37.38 5.33 15.36 1.32 547 245
Sep-22-2014 14:35:00 102.30 101.59 -0.36 52.57 0.76 15.36 6.58 148 7
Oct-10-2014 16:16:00 122.00 110.86 -3.26 40.12 0.63 46.08 6.09 1869 159
Oct-12-2014 19:23:00 105.84 102.77 -0.66 48.33 0.10 15.36 1.16 309 132
Oct-16-2014 23:24:00 106.20 102.91 -1.08 97.85 0.10 15.36 1.35 247 114
Oct-18-2014 17:50:00 119.98 111.54 -3.31 40.60 0.43 30.72 7.10 2599 93
Oct-22-2014 15:08:00 106.24 102.21 -1.35 91.42 0.08 15.36 0.62 367 197
Jan-23-2015 19:04:00 109.18 102.63 -0.92 123.73 0.12 15.36 1.35 384 137
Jan-24-2015 14:54:00 114.85 102.48 -4.56 17.90 0.12 30.72 15.36 289 12
Jan-27-2015 14:12:00 120.72 114.83 -7.32 71.00 0.54 76.80 1.81 1324 459
Jan-31-2015 11:47:00 101.12 101.12 -0.06 85.77 0.01 15.36 15.36 143 1
Feb-02-2015 22:31:00 101.07 101.07 -0.03 58.08 0.01 15.36 15.36 114 1
Feb-06-2015 13:23:00 120.67 113.03 -6.12 87.38 1.49 61.44 0.23 5086 4183
Feb-10-2015 16:22:00 114.67 102.81 -4.94 53.58 0.10 30.72 7.68 336 20
Feb-19-2015 16:55:00 111.22 102.74 -3.54 216.22 0.09 15.36 6.28 230 22
Feb-24-2015 15:39:00 114.03 104.00 -2.66 166.40 0.32 46.08 4.43 298 111
Mar-02-2015 19:29:00 101.12 101.12 -0.04 144.37 0.03 15.36 0.19 272 244
Mar-03-2015 15:10:00 114.20 103.61 -2.00 17.25 0.18 15.36 4.19 225 66
Mar-05-2015 15:51:00 113.04 103.37 -5.19 47.72 0.20 46.08 6.83 209 45
Mar-13-2015 17:12:00 111.46 103.23 -5.31 191.62 0.16 15.36 2.79 251 88
Mar-16-2015 15:12:00 110.51 104.17 -4.67 68.72 0.20 30.72 2.90 266 106
Mar-16-2015 22:11:00 102.32 101.63 -0.22 5.20 0.01 15.36 15.36 179 1
Mar-17-2015 23:29:00 117.57 109.56 -3.23 25.47 0.53 30.72 1.54 1817 530
Mar-19-2015 11:23:00 121.21 113.09 -2.88 27.10 1.13 46.08 5.09 2415 341
Mar-21-2015 16:43:00 111.67 102.88 -3.09 47.67 0.13 15.36 5.12 134 39
Mar-22-2015 16:39:00 109.39 102.45 -2.20 23.33 0.09 30.72 9.22 217 15
Mar-23-2015 13:11:00 120.50 109.48 -2.02 20.27 0.44 30.72 2.88 2221 235
Apr-04-2015 09:28:00 101.43 101.43 -0.18 11.50 0.03 15.36 0.22 537 212
Apr-05-2015 16:23:00 104.50 102.03 -0.74 21.98 0.04 15.36 3.41 185 18
Apr-17-2015 22:57:00 107.83 102.57 -0.70 20.15 0.15 15.36 0.88 531 263
Apr-18-2015 16:02:00 110.53 102.43 -1.94 37.47 0.21 30.72 0.92 566 350
May-13-2015 20:40:00 105.29 102.26 -1.10 266.92 0.06 15.36 1.59 229 58
May-15-2015 11:23:00 110.09 103.03 -2.66 38.17 0.14 15.36 3.21 352 67
May-16-2015 23:02:00 101.01 101.05 -0.02 33.43 0.01 15.36 15.36 135 1
May-17-2015 00:44:00 104.97 102.03 -0.60 1.47 0.05 15.36 0.59 337 131
May-17-2015 17:53:00 102.35 101.87 -0.41 17.18 0.04 15.36 1.40 222 44
Jun-14-2015 11:45:00 104.62 101.57 -11.96 20.62 0.02 15.36 2.56 213 12
Jun-15-2015 17:16:00 109.56 103.38 -1.25 18.63 0.20 15.36 4.59 277 67
Jun-18-2015 08:21:00 105.96 103.07 -10.94 63.88 0.04 15.36 0.85 131 72
Jun-21-2015 23:58:00 110.34 103.08 -1.06 106.80 0.34 15.36 0.85 956 611
Jun-22-2015 07:32:00 138.50 138.50 -0.14 0.00 0.00 0.00 0.00 0 0
Part 2

Date          Hour      Hmaxb(cm)  Hmeanb(cm)  Dhmaxb(cm)  ADWPb(hour)  Pb(mm)  Imaxb(mm/h)  Imeanb(mm/h)  Dvertb(min)  Dplub(min)  Obs  Uses
Ago-26-2014 13:29:00 105.13 101.82 -0.73 336.00 2.03 15.36 0.17 732 819 B A
Sep-16-2014 11:11:00 102.30 101.65 -0.25 0.00 6.10 15.36 0.65 47 209 NU NU
Sep-20-2014 05:53:00 106.06 102.48 -0.81 0.50 2.03 15.36 0.51 241 435 NU NU
Sep-22-2014 14:35:00 106.46 103.28 -1.52 37.38 5.33 15.36 1.32 245 547 B B
Oct-10-2014 16:16:00 122.21 113.53 -3.83 2.45 1.46 46.08 4.66 481 2824 B A
Oct-12-2014 19:23:00 122.00 110.86 -3.26 40.12 0.63 46.08 6.09 159 1869 B A
Oct-16-2014 23:24:00 105.84 102.77 -0.66 48.33 0.10 15.36 1.16 132 309 NU NU
Oct-18-2014 17:50:00 106.20 102.91 -1.08 97.85 0.10 15.36 1.35 114 247 NU NU
Oct-22-2014 15:08:00 119.98 111.54 -3.31 40.60 0.43 30.72 7.10 93 2599 NU NU
Jan-23-2015 19:04:00 112.85 102.56 -1.58 16.82 0.19 15.36 9.41 368 31 B A
Jan-24-2015 14:54:00 109.18 102.63 -0.92 123.73 0.12 15.36 1.35 384 137 B A
Jan-27-2015 14:12:00 114.85 102.48 -4.56 17.90 0.12 30.72 15.36 289 12 B A
Jan-31-2015 11:47:00 120.72 114.83 -7.32 71.00 0.54 76.80 1.81 1324 459 B A
Feb-02-2015 22:31:00 101.12 101.12 -0.06 85.77 0.01 15.36 15.36 143 1 B A
Feb-06-2015 13:23:00 101.07 101.07 -0.03 58.08 0.01 15.36 15.36 114 1 B A
Feb-10-2015 16:22:00 120.67 113.03 -6.12 2.67 1.49 61.44 0.23 5086 4183 B A
Feb-19-2015 16:55:00 114.67 102.81 -4.94 53.58 0.10 30.72 7.68 336 20 B A
Feb-24-2015 15:39:00 111.22 102.74 -3.54 216.22 0.09 15.36 6.28 230 22 B A
Mar-02-2015 19:29:00 114.03 104.00 -2.66 166.40 0.32 46.08 4.43 298 111 B A
Mar-03-2015 15:10:00 101.12 101.12 -0.04 144.37 0.03 15.36 0.19 272 244 B A
Mar-05-2015 15:51:00 114.20 103.61 -2.00 17.25 0.18 15.36 4.19 225 66 B A
Mar-13-2015 17:12:00 113.04 103.37 -5.19 47.72 0.20 46.08 6.83 209 45 B A
Mar-16-2015 15:12:00 111.46 103.23 -5.31 191.62 0.16 15.36 2.79 251 88 NU A
Mar-16-2015 22:11:00 110.51 104.17 -4.67 68.72 0.20 30.72 2.90 266 106 NU B
Mar-17-2015 23:29:00 102.32 101.63 -0.22 5.20 0.01 15.36 15.36 179 1 NU B
Mar-19-2015 11:23:00 117.57 109.56 -3.23 25.47 0.53 30.72 1.54 1817 530 B A
Mar-21-2015 16:43:00 121.21 113.09 -2.88 27.10 1.13 46.08 5.09 2415 341 B A
Mar-22-2015 16:39:00 111.67 102.88 -3.09 47.67 0.13 15.36 5.12 134 39 B A
Mar-23-2015 13:11:00 109.39 102.45 -2.20 23.33 0.09 30.72 9.22 217 15 B A
Apr-04-2015 09:28:00 101.38 101.38 -0.25 6.25 0.01 15.36 15.36 169 1 NU NU
Apr-05-2015 16:23:00 101.43 101.43 -0.18 11.50 0.03 15.36 0.22 537 212 NU NU
Apr-17-2015 22:57:00 120.56 113.27 -2.89 17.22 0.56 30.72 3.45 1110 249 NU NU
Apr-18-2015 16:02:00 107.83 102.57 -0.70 20.15 0.15 15.36 0.88 531 263 NU NU
May-13-2015 20:40:00 106.23 102.24 -1.32 2.83 0.02 15.36 3.84 183 8 B A
May-15-2015 11:23:00 105.29 102.26 -1.10 266.92 0.06 15.36 1.59 229 58 B A
May-16-2015 23:02:00 110.09 103.03 -2.66 38.17 0.14 15.36 3.21 352 67 NU NU
May-17-2015 00:44:00 101.01 101.05 -0.02 33.43 0.01 15.36 15.36 135 1 NU NU
May-17-2015 17:53:00 104.97 102.03 -0.60 1.47 0.05 15.36 0.59 337 131 NU NU
Jun-14-2015 11:45:00 108.50 102.73 -4.81 39.85 0.29 15.36 1.08 1107 414 NU NU
Jun-15-2015 17:16:00 104.62 101.57 -11.96 20.62 0.02 15.36 2.56 213 12 NU NU
Jun-18-2015 08:21:00 109.56 103.38 -1.25 18.63 0.20 15.36 4.59 277 67 NU NU
Jun-21-2015 23:58:00 105.96 103.07 -10.94 63.88 0.04 15.36 0.85 131 72 NU NU
Jun-22-2015 07:32:00 110.34 103.08 -1.06 106.80 0.34 15.36 0.85 956 611 NU NU
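Similarly, a minimal R sketch (e1071 package) of how this database can be used to train a multi-class SVM for water-use prediction; the data frame name (dataUses) and the label column Use are assumptions based on the table layout, not the exact code of the thesis.

    library(e1071)

    # Assumed: dataUses holds the Part 2 descriptors plus the observed
    # water-use group in a factor column Use ("A", "B", "NU").
    dataUses$Use <- factor(dataUses$Use, levels = c("A", "B", "NU"))

    svm_uses <- svm(Use ~ Hmaxb + Hmeanb + Dhmaxb + ADWPb + Pb + Imaxb +
                      Imeanb + Dvertb + Dplub,
                    data = dataUses, kernel = "radial", cross = 10)

    # Confusion matrix of observed vs. predicted use groups
    table(observed = dataUses$Use, predicted = predict(svm_uses, dataUses))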
APPENDIX E-19-8
OBSERVED WATER USES VERSUS GENERAL RESULTS OF THE FIRST (PRED-S1), SECOND (PRED-S2) AND THIRD (PRED-S3) SCENARIOS PER EVENT
[Figures (one panel per month): observed water uses versus the water uses predicted by the first (PRED-S1), second (PRED-S2) and third (PRED-S3) scenarios, for August 2014, September 2014, October 2014, January 2015, February 2015, March 2015, April 2015, May 2015 and June 2015.]
APPENDIX F - CURRRICULUM VITAE
SANDRA LORENA GALARZA MOLINA
Mobile number: (57) 312 5240402
E-mail: salogamo@gmail.com
sgalarza@javeriana.edu.co
Colombian

PROFESSIONAL BACKGROUND AND KEY QUALIFICATIONS


Five years of professional experience as a junior researcher, mainly in the development, implementation
and monitoring of urban drainage research projects. Special expertise in the development of decision-making
methodologies, in the assessment of stormwater systems and SUDS performance, and in the implementation
and calibration of water quality and quantity monitoring systems.

Three years of professional experience as a planning leader in the design and implementation of schemes
for the monitoring and control of company projects.

AWARDS
Dates January 2013 to December 2016
Scholarship from Colciencias - Estudiantes Doctorales Nacionales 567
(PhD National Students announcement 567)
Dates January 2011 to December 2012
Scholarship from Fundación CEIBA

Dates June 2009 – June 2011, Pontificia Universidad Javeriana


Research assistant
Dates June 2002 – December 2003, Pontificia Universidad Javeriana
Monitor of hydraulics class

EDUCATION AND TRAINING


Dates: September 2011 to June 2017
Organization: Pontificia Universidad Javeriana. Bogotá. Faculty of Engineering
Principal subjects / Occupational skills covered: PhD student; research on on-line decision-making for
stormwater harvesting systems. Advisor: Andrés Torres
Dates: January 2009 – September 2011
Organization: Pontificia Universidad Javeriana. Bogotá. Faculty of Engineering
Qualification awarded: Master's in Hydrosystems
Principal subjects / Occupational skills covered: Master's thesis "Developing of a multi-criteria analysis tool
for decision making support for rainwater harvesting in Pontificia Universidad Javeriana's campus".
Advisor: Andrés Torres
Dates August 1999 – December 2005
Organization Pontificia Universidad Javeriana. Cali. Faculty of Engineering
Principal subjects / Occupational skills covered: Bachelor's thesis "Technical feasibility study of sustainable
urban drainage systems for the tropical conditions in Colombia". Advisor: Fabio Garzón Contreras
Qualification awarded: Civil Engineering

PARTICIPATION IN RESEARCH PROJECTS


Project 4656, “Proyecto piloto de techos verdes como alternativa de preservación del medio ambiente,
seguridad alimentaria y fortalecimiento comunitario en Usme” (“Green roof pilot project as an alternative for
environmental preservation, food security and community empowerment in Usme, Bogota, Colombia”) –
Pontificia Universidad Javeriana
Project 5666, "Valoración de la oferta y la demanda hídrica del sistema humedal-construido/tanque-regulador
para el aprovechamiento de aguas lluvias en el campus de la Pontificia Universidad Javeriana, sede Bogotá"
(Assessment of the water supply and demand of the constructed-wetland/reservoir-tank system for rainwater
harvesting on the PUJB campus) – Pontificia Universidad Javeriana

PUBLICATIONS
Journals
Galarza-Molina, S., Torres, A., Rengifo, P., Puentes, A., Cárcamo-Hernández, E., Méndez-Fajardo, S., &
Devia, C. (2016). The benefits of an eco-productive green roof in Bogota, Colombia. Indoor and Built
Environment, 1420326X16665896.
Torres A., Salamanca-López C., Prieto-Parra S., Galarza-Molina S. (2015). OFFIS: a method for the
assessment of first flush occurrence probability in storm drain inlets. Desalination and Water Treatment, 57-2,
24913-24924, DOI: 10.1080/19443994.2016.1143880
Galarza-Molina S., Torres A., Lara-Borrero J., Fajardo-Méndez S., Solarte L., González L. (2015). Towards a
Constructed-Wetland/Reservoir-Tank System for Rainwater Harvesting in an Experimental Catchment in
Colombia. Ingeniería y Universidad, 19 (2), 415 -421
Galarza-Molina S., Torres A., Moura P., Lara-Borrero J. (2015). CRIDE: A Case Study in Multi-Criteria
Analysis for Decision-Making Support in Rainwater Harvesting. Int J Inf Tech Decis, 14 (1), 43-67
Torres A., Fajardo-Méndez S., López-Kleine L., Galarza Molina S.L., Oviedo N. (2013). Calidad de vida y
ciudad: análisis del nivel de desarrollo en Bogotá a través del método de necesidades básicas insatisfechas /
Quality of life and the city: analysis of the level of development in Bogota using the unsatisfied basic needs
method. Estudios Gerenciales, 29 (127), 231-238
Galarza Molina S.L., Torres A., Méndez Fajardo S., Pérez B. (2011). Herramienta de Análisis Multi-Criterio
como Soporte para el Diseño del Programa Social de la Facultad de Ingeniería. Revista Estudios Gerenciales,
27 (121), 25 - 36
Galarza, S. y Garzón, F. (2005). Estudio de viabilidad técnica de los sistemas urbanos de drenaje sostenible
para las condiciones tropicales de Colombia. Epiciclos, 4(1), 59-70

Books and chapters books


Galarza-Molina S., Torres A., Fajardo-Méndez S. (2014) Herramientas multi-criterio para la toma de
decisiones en proyectos de carácter social. Editorial Académica Española
Campuzano Ochoa C.P., Roldan Perez G.A., Torres A., Lara Borrero J.A., Galarza-Molina S., Giraldo Osorio
J.D., Duarte Castrillon M.A., Montoya Jaramillo L.J., Ruiz Carrascal C.D. (2015). "Urban Water in Colombia".
In: Urban Water Challenges in the Americas - A perspective from the Academies of Sciences. México , ISBN:
978-607-8379-12-5 ed: UNESCO , p.168 - 201
Ruiz Lopez A, Rivero Lopez M.I., Zamora D., Galarza-Molina S., Torres A. (2015). “Experiencias con captores
in situ para el monitoreo en continuo de la calidad de agua en hidrosistemas de saneamiento urbano" IN:
Avances en Hidrología Urbana. Colombia ISBN: 978-958-716-841-9 ed: Editorial Pontificia Universidad
Javeriana, p.107 - 122 ,
Sandoval S., Galarza-Molina S., Hernandez Rodriguez N., Guanay Forero Y.K., Obregon N. (2015).
"Herramientas para la toma de decisiones en gestión patrimonial de alcantarillados urbanos". In: Avances en
Hidrología Urbana . Colombia ISBN: 978-958-716-841-9 ed: Editorial Pontificia Universidad Javeriana , p.181
- 233
Galarza, S. y Garzón, F. (2007). La sostenibilidad en la gestión del drenaje urbano. In: Restrepo I, Sánchez
L., Galvis A., Rojas J., Sanabria I., Avances En Investigación Y Desarrollo En Agua Y Saneamiento: Para El
Cumplimiento De Las Metas Del Milenio (p. 69-77). Santiago de Cali: Programa Editorial de La Universidad
del Valle

Conferences
Galarza-Molina S., Gómez A., Hernández N., Burns M., Fletcher T., Torres A. (2016). On-line equipment
installed in a stormwater harvesting system: calibration procedures, first performance results and applications.
Proceedings of the 9th International Conference Novatech, Lyon, France, June 28- July 1 2016
Galarza-Molina S., Torres A., Plazas-Nossa L. (2016). Towards the development of an on-line decision-
making system for stormwater harvesting. Proceedings of the 8th International Conference on Sewer
Processes and Networks (SPN), Rotterdam, The Netherlands, August 31 –September 2, 2016
Galarza-Molina S., Gómez A., Hernández N., Fletcher T., Torres A. (2015). Towards a Stormwater Harvesting
Constructed Wetland Performance. Proceedings of RESURBE II, Conferencia Internacional en Resiliencia
Urbana, Bogotá, Colombia, 7-20 September 2015
Galarza-Molina S., Lara-Borrero J., Torres A. (2014) First results and future work on a constructed-
wetland/reservoir-tank system used for rainwater harvesting in a university campus in Bogota. Proceedings of
2da Conferencia Panamericana de Sistemas de Humedales para el Manejo, Tratamiento y Mejoramiento de la
Calidad del Agua, Morelia, Michoacán, México
Galarza-Molina S., Torres A., Lara-Borrero J., Fajardo-Méndez S. (2014). Constructed-wetland/reservoir-tank
system used for rainwater harvesting in Bogota, Colombia. Proceedings of the 13th International Conference on
Urban Drainage, Sarawak, Malaysia, 7-12 September 2014
Galarza-Molina S., Torres A., Rengifo P., Puentes A., Cárcamo-Hernández E. (2014). The Hydrological
Behaviour Of An Eco-Productive Green Roof In Bogota, Colombia. Proceedings of the 13th International
Conference on Urban Drainage, Sarawak, Malaysia, 7-12 September 2014
Torres A., Rivero-Lopez I., Ruiz A, Zamora D., Galarza-Molina S.L. (2013) Monitoreo en continuo de la
calidad de aguas en hidrosistemas urbanos: requerimientos de operación y mantenimiento. Proceedings of
Seminario internacional Manejo del riesgo en el ciclo del Agua, AGUA 2013
Galarza Molina S.L., Torres A. (2013). Simplified method for rainwater harvesting tank sizing using long day-
resolution rainfall time series. Proceedings of the 8th International Conference Novatech, Lyon, France, June
24-26 2013, pp. 218
Galarza Molina S.L., Torres A., Lara-Borrero J, Moura P. (2013). CRIDE - Multi-Criteria analysis tool for
decision making support for rainwater harvesting in Javeriana university’s campus. Proceedings of 5th IWA
International Conference on Benchmarking and Performance Assessment
Galarza Molina S.L., Torres A., Lara-Borrero J, Moura P. (2012). CRIDE: herramienta de análisis multi-criterio
para el soporte de toma de decisiones en el aprovechamiento de aguas lluvias en el campus de la pontificia
universidad javeriana, sede Bogotá. Proceedings of AIDIS 2012 - Congreso Interamericano de ingeniería
Sanitaria y Ambiental.
Sánchez-Vega A.M., Polanco-Andrade A.M., Torres A., Galarza-Molina S.L. (2013). Diseño hidráulico de losas
de concreto poroso como estructuras complementarias al sistema de drenaje de Bogotá. Proceedings of
Seminario internacional Manejo del riesgo en el ciclo del Agua, AGUA 2013
Galarza-Molina S.L., Fajardo-Méndez S., Torres A., Pérez-Muzuzu B., López-Kleine L. (2010). Análisis
Preliminar del Acceso A Servicios Públicos Domiciliarios como Macro-Indicador de Desarrollo de
Asentamientos Humanos De Bogotá D.C. Proceedings of 53 CONGRESO DE ACODAL
Galarza, S. y Garzón, F. (2006). Estudio de viabilidad técnica de los sistemas urbanos de drenaje sostenible
para Colombia. Proceedings of Conferencia XVII Seminario Nacional de Hidráulica e Hidrológica (Popayán)
Galarza, S. y Garzón, F. (2005). Efectos de la urbanización sobre la cuenca urbana y los sistemas urbanos
de drenaje sostenible. Proceedings of V Congreso nacional de cuencas hidrográficas, Santiago de Cali:
Corpocuencas

Other Publications
Mejía P., Galarza Molina S.L., Lara-Borrero J., Fajardo-Méndez S., Torres A. (2014) Proyecto para el
aprovechamiento de aguas lluvias en la Pontificia Universidad Javeriana. Revista Acodal, 234, 41 – 43

RELEVANT WORK EXPERIENCE / PRESENT OCCUPATION


From January 2011 until present
At: Javeriana University in Bogotá
Experience: PhD student. Central topic: research on the performance of Sustainable Urban Drainage
Systems (SUDS) for stormwater/rainwater harvesting purposes and decision making

From March 2010 until January 2011


At: Javeriana University in Bogotá
Experience: Participation in a proposal for the re-use of Bogota's wastewater treatment effluent

From November 2006 until February 2010


At: DataBANK MKS Ltda.
Experience: Planning and tracking of company projects

From February 2006 until November 2006


At: Consultoría y Gerencia de proyectos - C&G Ltda. Consorcio Urbano 046
Experience: Engineering assistant in the construction of a public school (Bogotá)

KNOWLEDGE OF LANGUAGES
English: fluent, oral and written. LEVEL C1 - Common European Framework of Reference for Languages
French: basic knowledge; Spanish: native language
TECHNICAL SKILLS AND COMPETENCES
R (R Development Core Team)
MATLAB; AutoCAD 2014; SWMM version 5 (EPA)