
Antonio Lopez

Certified – Data Architect / Engineer


lopez.antonio1706@gmail.com
786 633 1410

I. Executive Summary

Mr. Lopez is a consummate IT professional with an impressive track record of engineering and
refining sophisticated processes that continuously enhance critical systems at premier
organizations across the tech landscape. With exceptional skill in orchestrating the deployment
of cutting-edge systems, he aligns his work with high-performance architecture and IT best
practices.

With over 15 years of rich experience in IT consultancy, Mr. Lopez's expertise spans a wide
array of pivotal areas, including Data Architecture, Cloud migrations, and comprehensive support
and operations. His proficiency is not confined to these domains alone; he has made significant
contributions to formulating and implementing Data Governance policies, impacting a broad
spectrum of industries such as energy, government, transportation, healthcare, airlines, and
insurance with his visionary approach.

An ardent advocate for self-improvement and knowledge exchange, Mr. Lopez embodies the
spirit of continuous learning and collaboration. His commitment to sharing insights and fostering
a culture of mutual growth has not only catalyzed his professional evolution but also profoundly
influenced the teams and organizations he has worked with. His approach underscores a belief in
the transformative power of collective wisdom and the pivotal role of shared experiences in
driving professional and personal advancement.

II. Summary

With over eight years of experience as a Data Architect / Engineer, I have a strong background in
application and database modernization, migrations, and cloud implementation.
Experience automating processes with custom-built C#, Python, and shell scripting.
Experience in core Amazon Web Services, including EC2, S3, RDS, EBS, ELB, Elastic Beanstalk,
Lightsail, IAM, Auto Scaling, and Security Groups.
Experience in data governance. Knowledge of ISO 8000 and DAMA DMBOK.
Experience with IT service and change management (ITSM).
Management and design of integrated build pipelines using continuous integration and continuous
delivery with tools such as Jira, Git, and Azure DevOps.
Experience with Azure architecture and services: Audit Logs, Monitor, Traffic Manager, Front
Door, Policy, Service Bus, Queue Storage, App Service, ADF, Synapse Analytics, and Power BI.
Experience in managing software releases, deployments, and builds.
Experienced in installation, upgrades, patches, configuration, and performance tuning of Windows
and Linux servers, including system software and hardware.
Experience building .Net microservices with C# and deploying them on Azure infrastructure with
Azure DevOps.
Implemented integration of business applications with generative AI APIs such as ChatGPT and
Claude (a minimal sketch appears at the end of this list).
Experienced in branching, tagging, and maintaining versions across environments using SCM tools
like Git and Bitbucket installed on Linux and Windows platforms.
Experience with continuous integration technologies such as Jenkins. Designed and created multiple
deployment strategies using continuous integration and continuous delivery (CI/CD) pipelines,
including Azure DevOps.
Worked in supporting .Net Windows forms, console, and web applications.
Excellent knowledge of Agile Methodology (Scrum), ITIL process, and microservices architecture.
Capable of quickly learning and delivering solutions as part of a team.
Experience with ServiceNow for handling change requests and incident management.
Experience in Migrating On-premises apps to the Cloud.
Experience in CI/CD using Azure DevOps.
Experience in version control using GitHub and GitLab.
Built real-time streaming and batch data pipelines using Databricks and the Medallion
architecture.
Experience building data pipelines with Azure Data Lake, Azure Data Factory (ADF), and Azure
Databricks.
Experience building data pipelines with AWS Glue.
Designed and Developed data pipelines to process data using Azure Synapse Analytics and Synapse
Pipelines.
Background in Business Intelligence systems and project management, including over 10 years with
cloud technologies.
Exceedingly proficient in Big Data technologies, advanced analytics, data engineering, data
warehouse concepts, cloud platforms, and Business Intelligence, using major databases and
distributed systems such as the Microsoft BI stack to process structured and unstructured data.
Design of conceptual, logical, and physical architecture models.
Extensive experience in data modeling and dimensional modeling, including star and snowflake schemas.
Built an integration data hub on the Azure cloud using the medallion architecture with Azure Data
Lake, Databricks, and Delta Lakehouse technology.
Work experience with tools such as FileZilla, WinSCP, Visio, Visual Studio, and PuTTY.
Good programming knowledge of C#, R, Python, C, C++, and SQL.
Strong analytical, presentation, and problem-solving skills; excellent interpersonal communication;
and the flexibility to learn new technologies in the IT industry.
Implemented mechanisms to capture user feedback on AI interactions and used this information
to improve the system.
Mentored junior software developers in the projects on design patterns, development best
practices, and DevOps trade-offs in big data and Cloud platforms.
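As a concrete illustration of the generative AI integration noted in this list, here is a minimal
sketch of calling a chat-completion API from a business application. It assumes the openai Python
package; the model name, prompt, and helper function are illustrative, not taken from a specific
project.

```python
# Minimal sketch of wiring a business application to a generative AI API.
# Assumes the openai Python package; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_record(text: str) -> str:
    """Ask the model for a one-paragraph summary of a business record."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize the record in one paragraph."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```

The same shape applies to Claude via Anthropic's client; only the client object and response
fields change.
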
Skills:

Operating Systems Linux (RHEL, Ubuntu), Windows Server


Cloud Services Azure, AWS, GCP
Scripting Languages Python, PowerShell
Programming Languages C#, Python, R, SQL, JavaScript
Databases SQL Server, Cosmos DB, Snowflake, MySQL, PostgreSQL, DB2,
MongoDB, Cassandra, Couchbase, Azure Synapse
Source/Version Control Tools GIT, GitHub, GitLab
Bug tracking tools JIRA
Web Technology .NET Core, .NET Framework, C#, Python, JavaScript, ASP.NET,
Python Flask
Data Governance Frameworks ISO 8000, DAMA DMBOK, IBM Knowledge Catalog, DGI
Data Governance Cloud Tools Azure Purview, GCP Data Catalog, AWS Glue
III. Professional Experience
Company: IBM, Coral Gables, FL
Role: Cloud Data Architect / Data Governance
Industry: IT Consulting / Government
Start Date: May 2022    End Date: To date
Description:
IBM is a distinguished information technology company, and its branch, IBM Consulting,
specializes in providing comprehensive IT services. With a strong focus on innovation and
efficiency, the firm caters to many clients and industries, including the Federal Government,
Airlines, Banks, Healthcare, and Oil & Gas. IBM is renowned for its expertise in developing and
implementing software solutions that solve complex business challenges. Leveraging cutting-
edge technologies and methodologies, such as cloud and hybrid cloud technologies, the company
excels in delivering projects that enhance operational capabilities, streamline processes, improve
business performance, and deliver competitive advantage.

During my tenure with IBM, I was assigned several projects. Here is an overview of the major
ones.

Project 1 - CENTRALIZED DATAHUB FOR THE OIL AND GAS INDUSTRY - Syncrude and Suncor
The oil & gas industry has historically utilized data silos, presenting opportunities for the
customer to streamline data integration, unify analytics, and enhance data consistency. In this
role, I was instrumental in the architecture design and development of a new centralized data
hub that integrated multiple data sources into a single, coherent structure.
As the senior data engineer, I guided the team through the conceptualization and
implementation phases, ensuring the project aligned with industry-specific data needs and
challenges.

For this project, I leveraged Databricks on Azure and Delta Lake for real-time data processing and
aggregation, Azure Data Lake for storage, and the Databricks Unity Catalog for data governance.
The centralized datahub significantly streamlined data governance and analytics, improved data
accuracy, and facilitated comprehensive insights, driving strategic decisions in the oil & gas
sector.
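
The core ingestion pattern of this data hub can be summarized in a short PySpark sketch. This is a
condensed illustration rather than project code: the paths, column names, and table names are
hypothetical, and `spark` is the session Databricks provides.

```python
# Condensed sketch of streaming ingestion and aggregation with Auto Loader and
# Delta Lake on Databricks; paths, columns, and table names are hypothetical.
from pyspark.sql import functions as F

raw = (
    spark.readStream.format("cloudFiles")              # Databricks Auto Loader
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/chk/sensor_schema")
    .load("abfss://landing@datalake.dfs.core.windows.net/sensors/")
    .withColumn("event_time", F.col("event_time").cast("timestamp"))
)

hourly = (
    raw.withWatermark("event_time", "10 minutes")      # bound state for late data
    .groupBy(F.window("event_time", "1 hour"), "well_id")
    .agg(F.avg("pressure").alias("avg_pressure"))
)

(
    hourly.writeStream.format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/chk/sensor_hourly")
    .toTable("datahub.silver.sensor_hourly")           # table governed in Unity Catalog
)
```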

Project 2 - AIRLINE DATA WAREHOUSE MIGRATION - Air Canada


Air Canada operated on a legacy on-premises Teradata data warehouse. A modern cloud solution
presented enhanced scalability and cost-efficiency opportunities. As a data architect, I worked on
the detailed solution architecture and migration strategy, liaising with key stakeholders. I also
ensured minimal downtime and data loss during the transition. In this project, I used Snowflake
on Azure for cloud data warehousing, integrating with various data integration tools such as ADF
for smooth data transfer and consolidation.
The migration to the new data warehouse system enhanced data retrieval speeds by over 60%,
reduced maintenance costs by 40%, and unlocked cloud-based analytics potential. This enabled
the airline to harness deeper insights from its data and save money on operational costs.
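
A migration of this kind depends on careful reconciliation at cutover. The sketch below shows the
shape of a post-migration row-count check against Snowflake; it assumes the
snowflake-connector-python package, and the connection parameters, table, and expected count are
all hypothetical.

```python
# Sketch of a post-migration row-count reconciliation against Snowflake.
# Assumes snowflake-connector-python; all names and values are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myaccount",
    user="migration_svc",
    password="...",                 # placeholder; use a secret store in practice
    warehouse="MIGRATION_WH",
    database="EDW",
    schema="FLIGHTS",
)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM BOOKINGS")
snowflake_count = cur.fetchone()[0]

# The expected count would come from the same query on the legacy Teradata side.
expected_count = 123_456_789
assert snowflake_count == expected_count, "row counts diverged after migration"
conn.close()
```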

Project 3 - GOVERNMENT ENTITY ON-PREMISE TO CLOUD MODERNIZATION - PSPC


A government institution faced challenges with its legacy HR and payroll system, which
needed to be updated to harden its security against new cyberattack vectors, meet new
compliance requirements, and improve operational efficiency. This is a common scenario for organizations that
want to future-proof their operations in a landscape of rapid technological change.
As the Data Architect, I was responsible for designing and implementing the new system's data
architecture and governance. My expertise in these areas was critical in ensuring that the new
system met the institution's current needs and was scalable and adaptable to future
requirements.
Tech Stack: The project was designed on Azure Gov, leveraging its robust and secure cloud
capabilities to meet the specific needs of government operations. This choice of technology stack
was pivotal in ensuring a seamless and secure transition from the legacy system to the new,
cloud-based solution.
Outcomes: The project delivered the blueprint for the institution's transition to a new data
architecture aligned with industry best practices, including the DAMA DMBOK principles.
Additionally, I played a crucial role in implementing change management and IT Service Management
(ITSM) processes, which were instrumental in the smooth adoption of the new system. The institution
now benefits from enhanced data management, improved operational efficiency, and a robust
framework for ongoing data governance.

Responsibilities:
Develop and implement scalable, secure, and efficient Azure cloud-based data architectures.
Optimize Azure cloud resources for cost, performance, and scalability.
Create and maintain optimal data models to support business processes and decision-making.
Led Microsoft Azure Gov. cloud data architecture projects, ensuring alignment with business
objectives.
Performed data mining techniques, including data gathering, cleaning, and transformation to
develop new insights.
Applied database query languages and technologies (SSIS, Synapse Analytics, ADF, SQL, Python,
R) to satisfy all aspects of data mining.
Migrated applications from Hadoop to Databricks to improve performance and Data Governance.
Implemented Azure Synapse Analytics for real-time analytics and data warehousing, optimizing
query performance and reducing time-to-insight.
Created a benchmark between CouchDB and MongoDB for fast ingestion.
Processed terabytes of information in real time using Databricks Auto Loader and Delta Lake
technology.
Established data governance committees and working groups to oversee data governance
initiatives and enforce data policies.
Implemented automated schema evolution using Delta Lake and Databricks, enabling the schema of
the data storage system to change without disrupting existing data or applications (a minimal
illustration follows this list).
Implemented scalable data solutions leveraging Azure cloud services such as Azure SQL Database,
Azure Cosmos DB, and Azure Data Lake Storage.
Architected an MDM solution to streamline customer business processes and improve data quality
across different business units.
Upgraded older Lambda data architecture to the modern medallion architecture using Databricks
on Azure.
Utilized Azure Purview and other Azure APIs for monitoring, querying, and billing analysis,
contributing to enhanced Azure Synapse Analytics usage efficiency.
Implemented DevSecOps practices to integrate security into the data engineering and
architecture processes.
Integrated Apigee with data storage and processing systems such as Amazon S3, RDS, Redshift,
or Azure Blob Storage, leveraging APIs for seamless data access and manipulation.
Ensured data security and compliance by implementing Azure Active Directory for authentication
and authorization, and configuring Azure Key Vault for data encryption.
Developed and implemented data governance frameworks, policies, and procedures to ensure
the effective management and utilization of data assets.
Developed Google Cloud Functions in GCP to add new capabilities to serverless event-driven
workflow.
Maintained data pipelines and ETL workflows using Databricks and Apache Spark.
Provided technical leadership and mentorship to junior team members, facilitating knowledge
sharing and skill development in Azure cloud technologies.
Enhancement of existing data pipelines by integrating generative AI capabilities into data
processing for advanced functionalities such as data enrichment, summarization, and
categorization.
Evaluating, recommending, and implementing emerging data architecture technologies such as
Data Lakehouse, Data Fabric, and Data Mesh to support more sophisticated data management
and analysis.
Designed the architecture for Edge Data Management that facilitated the processing and analysis
of data at the source.
Implemented metadata governance processes to ensure metadata accuracy, consistency, and
relevance.
Implemented data encryption and masking solutions to ensure sensitive information is not
exposed to non-privileged users.
Created architecture decision documents (ADD) that capture the essential details of significant
architecture decisions made during the development and maintenance of a software system.
Implemented unified data governance solutions using the Purview Data Catalog, creating glossary
management and capturing metadata with the Purview Data Map.
Extensive experience with popular Python libraries, including PySpark, Hugging Face
Transformers, NLTK, Gensim, OpenCV, PyTorch, Flask, SciPy, Pandas, Matplotlib, NumPy,
Scikit-learn, SQLAlchemy, and Apache Beam.
Data Warehouse design and implementation with Synapse Analytics.
Data Lake implementation with Azure Data Lake.
Responsible for the data modernization and migration of an on-premise data warehouse built on
Teradata to Snowflake on Azure cloud.
Conducted security assessments and audits of Apigee API proxies to identify and mitigate
potential vulnerabilities and compliance risks.
Led the development of an Azure Purview data catalog and metadata management solution for
improved data transparency and accessibility.
Implemented Natural Language Processing (NLP) in Azure Synapse Data Warehouse to facilitate
insights extraction using natural language.
Responsible for designing the foundation of MLOps for clients that are ready to move their
machine learning models from experimentation into production at scale.
Designed disaster recovery and business continuity planning for client companies.
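
As promised above, here is a minimal illustration of the Delta Lake schema-evolution item: a batch
arriving with a new column is appended without rewriting existing data. It assumes PySpark with
Delta Lake on Databricks; the paths and table name are hypothetical.

```python
# Minimal illustration of automated schema evolution with Delta Lake: a batch
# carrying a new column is appended without disrupting existing data or readers.
# Assumes PySpark with Delta Lake on Databricks; names are hypothetical.
new_batch = spark.read.json("/landing/customers/2022-06-01/")

(
    new_batch.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")   # accept columns absent from the target schema
    .saveAsTable("datahub.bronze.customers")
)
# Existing rows read back with NULL in the newly added columns, so downstream
# pipelines continue to run unchanged.
```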

Company: Payflo, Montreal, QC
Industry: Financial Startup
Role: Data Engineer / Data Architect
Start Date: Nov 2021    End Date: May 2022
Description:
PayFlo is an innovative financial startup based in Montreal, dedicated to transforming how
businesses manage and process payments with a focus on cutting-edge technology and user-centric
design. At Payflo, I was pivotal in architecting, developing, and optimizing the company's data
infrastructure to support scalable and robust financial solutions. I collaborated closely with
cross-functional teams to drive data-centric initiatives, ensuring that the company's data
architecture aligned with its strategic objectives and supported rapid growth.

Responsibilities:
Participated in the Azure and AWS data security strategy, resulting in SOC2 certification for the
company.
Designed and built the data architecture roadmap based on best practices for governance and
management.
Implemented DevSecOps principles to integrate security practices into the data architecture and
engineering processes.
Achieved 30% cost reduction in AWS cloud storage and enhanced data reliability.
Created Pipelines in ADF to Extract, Transform, and load data from different sources, such as
third-party applications and data lake storage.
Conducted data analysis and visualization using Databricks notebooks and Apache Spark SQL.
Worked hands-on with a broad range of Azure cloud services; architected cloud projects and led
the team to meet customer requirements for scalability, reliability, performance optimization,
and security.
Debugged applications tracked in JIRA as part of an agile methodology.
Leveraged Azure Machine Learning to build and deploy predictive models, enabling data-driven
decision-making across the organization.
Implemented caching and optimization techniques in Apigee API proxies to improve performance
and reduce latency for data-intensive applications.
Developed scheduled jobs and cloud triggers to automate business processes.
Used Python for logging, monitoring, and debugging for code optimization.
Programming of AWS Lambda functions with Python.
Management of database migration with AWS Database Migration Service (Oracle and
PostgreSQL).
Led the implementation of DevSecOps principles to integrate security into the data architecture
and engineering processes.
Set up production environment alerts and acted as the point of contact for infrastructure
incidents using CloudWatch in AWS and Azure Monitor in Azure.
Implemented data audit trails to monitor data access and changes, ensuring that any anomalies or
attempts at unauthorized activity can be traced and analyzed.
Designed the company's hybrid cloud strategy with secure and seamless data movement across
environments.
Designed and developed data pipelines on the Google Cloud Platform using Apache Beam and Cloud
Dataflow, expressing workflows as DAGs (directed acyclic graphs); a skeletal example follows.
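
The Beam work mentioned in the last item followed the usual read-transform-write DAG. Below is a
skeletal pipeline of that shape; it runs locally on the DirectRunner and would target Cloud
Dataflow by switching the runner. The bucket paths and record layout are hypothetical.

```python
# Skeletal Apache Beam pipeline: read, filter, and write a CSV-like dataset.
# Runs on the DirectRunner locally; switch to DataflowRunner for GCP.
# Bucket paths and the record layout are hypothetical.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(runner="DirectRunner")

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://payflo-landing/transactions.csv")
        | "Parse" >> beam.Map(lambda line: line.split(","))
        | "KeepSettled" >> beam.Filter(lambda row: row[2] == "SETTLED")
        | "Format" >> beam.Map(",".join)
        | "Write" >> beam.io.WriteToText("gs://payflo-curated/settled",
                                         file_name_suffix=".csv")
    )
```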

Company: Income Access (Paysafe), Montreal, QC
Industry: Affiliate Marketing
Role: Data Engineer / Administrator
Start Date: Sept 2014    End Date: Nov 2021
Description:
Income Access, a Paysafe company, is at the forefront of affiliate marketing, digital advertising,
and proprietary technology for the iGaming industry. With a robust and dynamic platform,
Income Access empowers brands and marketers to optimize their online presence and
engagement, driving growth and performance.

As a Data Engineer / Administrator at Income Access, I was integral to the company's data
management and responsible for the efficiency and reliability of its data services. My role
involved both data engineering and administrative responsibilities, focusing on the optimization,
scalability, and security of the company's data systems to support its advanced marketing
platforms and analytics services.

Responsibilities:
Oversaw database administration and contributed significantly to the company's efforts to
achieve PCI DSS certification, as well as to its data management strategies on the Azure cloud.
Executed data architecture blueprints, materializing the solutions to support business growth on
MS Azure.
Data Lake deployment and management on Azure.
Developed process mining software with C# .NET, Python, and MongoDB.
SQL Server, PostgreSQL, and MySQL database administration activities.
PostgreSQL security administration, including LDAP authentication.
PostgreSQL installation and deployment.
Implemented data pipelines and ETL processes using Databricks Delta Lake and Apache Spark
Streaming for real-time data processing.
Workload Management for SQL Server planning and deployment.
Built graphs and plots using Python and R libraries such as Matplotlib's pyplot and ggplot2 (a
short example follows this list).
Performed analytics and recommendations for the business using SSIS, Python, or R for ETL
processing.
Applied transformations such as filters and aggregations to the data using R and Python.
Ingested data from multiple sources into the HDFS data lake.
Created and maintained technical documentation for executing SQL and Python code.
Implemented the ITIL standard as part of the ITSM change and incident management
methodology.
Integrated Databricks solutions with data storage and processing services such as Amazon S3,
Azure Data Lake Storage, and Google BigQuery.
Augmented a data management project to automate routine tasks such as data quality checks, data
lineage tracking, and metadata management.
Designed an affiliate marketing process for enhanced user tracking and attribution to the
respective affiliate source.
Designed and documented unified data quality and governance policies across several Paysafe
subsidiary companies.
Implemented data quality solution using SQL Server Data Quality Services (DQS) that supports
the ISO/IEC 25012 standard for data quality.
Led and mentored data engineering and analyst team members in best practices for data
governance.
Installation of SQL Server on physical clusters with varying configurations on Windows Server
2003 and 2008 versions.
Deploying DRP (Disaster Recovery Plan) environments with data mirroring and transaction log
technology.
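
The filter-aggregate-plot workflow referenced in the list above looks roughly like this in
Python; the file and column names are hypothetical, and the R side would use dplyr and ggplot2 in
the same shape.

```python
# Small sketch of the filter -> aggregate -> plot workflow; the CSV file and
# its columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

clicks = pd.read_csv("affiliate_clicks.csv", parse_dates=["clicked_at"])

valid = clicks[clicks["status"] == "valid"]                 # filter
daily = valid.groupby(valid["clicked_at"].dt.date).size()   # aggregate per day

daily.plot(kind="bar", title="Valid affiliate clicks per day")
plt.xlabel("Date")
plt.ylabel("Clicks")
plt.tight_layout()
plt.show()
```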

Company: Canada Direct, Montreal, QC
Industry: IT Consulting, Marketing
Role: Senior Programmer Analyst
Start Date: Mar 2011    End Date: Aug 2014
Description:
Canada Direct is an established Information Technology, marketing, and communication
solutions provider based in Montreal. It is dedicated to delivering innovative and impactful
strategies for businesses across Canada.

As a Senior Programmer Analyst at Canada Direct, I played a crucial role in developing and
enhancing the company's clients' software systems and applications. My expertise contributed to
designing, developing, and implementing software solutions for the pharmaceutical, telecom,
and marketing industries.

Responsibilities:
Developed secure data exchange solutions based on the HL7 data format for pharmaceutical
companies (see the short parsing example after this list).
Cleansed and preprocessed data by implementing ETL jobs on a SQL Server cluster.
Provided platform and infrastructure support, including database maintenance, administration,
and tuning.
Acted as a liaison between the healthcare policy lines of business and the technology team, as
well as other data management analysts.
Designing and implementing web applications and web services using the .NET platform.
Contributed to the development of various data-centric pharmaceutical software applications.
Planning and implementation of fixes, patches, and upgrades for SQL Server
Deployment of Business Intelligence, data warehouse, and data mining projects with SQL Server
Integration Services (SSIS).
ETL workflows and B2B jobs administration and monitoring.
Developed data cleansing approach and execution, adherence, and enhancements to data
governance policies.
Designed and developed clinical trial applications for Roche and Novartis following regulatory
compliance, such as adhering to Good Clinical Practice.
Design and development of data pipelines with a focus on data governance and data quality,
following the ISO 8000 standard.
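
For reference, HL7 v2 messages of the kind exchanged in that work are pipe-delimited segments.
The dependency-free sketch below pulls a patient identifier out of a sample message; the message
content and field positions shown are illustrative.

```python
# Dependency-free sketch of reading one field from an HL7 v2 message.
# The sample message and the fields extracted are illustrative.
message = (
    "MSH|^~\\&|LAB|HOSPITAL|EHR|CLINIC|202301011200||ORU^R01|0001|P|2.3\r"
    "PID|1||12345^^^HOSPITAL||DOE^JOHN\r"
)

# HL7 v2 separates segments with carriage returns and fields with pipes.
segments = {line.split("|")[0]: line.split("|")
            for line in message.strip().split("\r")}

patient_id = segments["PID"][3].split("^")[0]   # PID-3, first component
print(patient_id)                               # -> 12345
```
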
Company: OEC Group, Montreal, QC
Industry: Logistics / Transportation
Role: Programmer Analyst
Start Date: Apr 2008    End Date: Sep 2010
Description:
OEC Group is a prominent player in the global logistics and freight forwarding industry. It
provides top-tier supply chain solutions to a diverse client base.

As a Programmer Analyst at OEC Group, I contributed to the development and optimization of
the company's IT systems, playing a key role in enhancing their operational efficiency and service
delivery. My work involved designing, testing, and implementing software solutions that
improved business processes and supported the company's commitment to providing
exceptional service.

Responsibilities:
Database development based on data governance principles, ensuring the creation and
maintenance of tables, stored procedures, and reports to support the data-driven decision-
making processes.
Management of archival applications with SQL Server
Planned, deployed, and operated database instances such as SQL Server and PostgreSQL.
Planned and implemented fixes, patches, and upgrades for SQL Server and PostgreSQL, along with
performance monitoring and improvement.
Updated the engine version for all production SQL Server databases.
Added storage for production SQL Server databases on clustered and stand-alone servers.
Implemented new access policies as part of the Information Security Risk department.
Deployment of Business Intelligence, data warehouse, and data mining projects with SQL Server
Data Transformation Services (DTS).

IV. Education
Education Institution Period
MSc, Data Engineering Edinburgh Napier University, U.K. 2020-2023
AS, Computer Science Penn Foster College, U.S. 2004-2006
C++ Diploma Penn Foster College, U.S. 2002

V. Certifications and Courses


Course or Certification Institution Period
AI Security and Privacy by Design IBM 2023
Modern Data Accelerators IBM 2023
Databricks Fundamentals Databricks 2023
Trustworthy AI & Ethics IBM 2022
DevSecOps Essentials IBM 2022
Enterprise Design Thinking IBM 2022
Big Data Foundations IBM 2022
Microsoft Professional Program - Data Science Microsoft 2017
Microsoft Certified IT Professional - DBA Microsoft 2011
Microsoft .NET Certified Solution Developer Microsoft 2009

VI. Languages (VG = Very Good; G = Good; R = Regular)
Language Spoken Reading Written
English G G G
Spanish VG VG VG
French R R R

VII. Expert IT Tools (Programming Languages, Databases, OS, Infrastructure, ERPs, etc.)
Azure Cloud
Relational, MPP Databases, Databricks
C#, Python, SQL, R
.NET Framework
