
Harinath Naidu

Senior Data Engineer/ Business Intelligence


Email: harinath0323@gmail.com,
Phone: 4254350901
Linkedin: linkedin.com/in/harinath-naidu-b-53b343129

SUMMARY

10+ years of IT experience implementing ETL and Business Intelligence solutions using the
Azure stack (Data Factory, Databricks, Data Lake, Azure SQL, Azure Synapse Analytics), IBM
DataStage, Microsoft Power BI and SAP Business Objects.
Experienced professional with extensive domain knowledge in banking, finance, retail, clinical,
healthcare, and pharmaceutical industries.
Experience in building data pipelines using Azure Data Factory and Azure Databricks, loading data
into Azure Data Lake, Azure SQL Database and Azure SQL Data Warehouse (with U-SQL where
applicable), and controlling and granting database access.
Involved in developing roadmaps and deliverables to advance the migration of existing on-premises
systems/applications to the Azure cloud using Azure DevOps.
Good understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark
Streaming, driver and worker nodes, stages, executors and tasks.
Design and develop Spark applications using PySpark and Spark SQL for data extraction,
transformation and aggregation from multiple file formats, analyzing and transforming the data to
uncover insights into customer usage patterns (see the sketch after this list).
Design and implement database solutions in Azure SQL Data Warehouse and Azure SQL Database,
using Azure Data Factory (ADF) with Integration Runtime (IR) for file-system and relational data
ingestion.
Experience in developing Business Intelligence reports and interactive dashboards using BI tools
such as MS Power BI and SAP Business Objects.
Design, develop and maintain MS Power BI reports and dashboards and publish them to targeted user
groups.
Expertise in various phases of the project life cycle (design, analysis, implementation and testing).
Experienced working with JIRA for project management and Git for version control.
Experience across the full Software Development Life Cycle (SDLC) under Waterfall and Agile models.
Participated in sprint planning, daily scrums, sprint demos and retrospectives.
Effectively communicate with business units and stakeholders and provide strategic solutions
according to the client’s requirements.
Good interpersonal and communication skills, strong problem-solving ability, ease in exploring and
adopting new technologies, and a strong team player.
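
As an illustration of the PySpark extraction/transformation/aggregation pattern noted above, here is a minimal sketch; the paths, column names and the aggregation itself are assumptions for the example, not details from any listed project.

```python
# Minimal PySpark ETL sketch (illustrative; paths and column names are assumed)
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("usage-aggregation").getOrCreate()

# Extract: read raw usage events from multiple formats (hypothetical paths)
events_json = spark.read.json("/mnt/raw/usage/events_json/")
events_csv = spark.read.option("header", True).csv("/mnt/raw/usage/events_csv/")

# Transform: align the schemas and union the two sources
events = events_json.select("customer_id", "event_ts", "feature").unionByName(
    events_csv.select("customer_id", "event_ts", "feature")
)

# Aggregate: daily usage counts per customer and feature
daily_usage = (
    events.withColumn("event_date", F.to_date("event_ts"))
    .groupBy("customer_id", "feature", "event_date")
    .agg(F.count("*").alias("event_count"))
)

# Load: write the curated output as Parquet for downstream reporting
daily_usage.write.mode("overwrite").parquet("/mnt/curated/daily_usage/")
```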

TECHNICAL SKILLS

ETL/DW: Azure Data Factory, Databricks, Azure Synapse Analytics, Azure Data Lake and IBM DataStage
Dashboarding/Visualization: Microsoft Power BI, Tableau and SAP Business Objects
Databases/Scripting: SQL Server, Oracle, Teradata, SAP HANA, PowerShell, Netezza, DB2, Azure SQL, PySpark, Cosmos DB and Spark SQL
Programming: Python
Tools: Control-M, Git and JIRA

CERTIFICATIONS

DP-203 Microsoft Certified: Azure Data Engineer Associate


DP-900 Microsoft Certified: Azure Data Fundamentals
Databricks Certified: Databricks Lakehouse Fundamentals
PROFESSIONAL EXPERIENCE

Client: Hewlett-Packard - Houston, TX Jan ’22 – Present


Role: Senior Data Engineer
Project: HPI-TERA
Description:
Hewlett-Packard Inc. develops and provides a wide variety of hardware components, as well as
software and related services, to consumers, small and medium-sized businesses, and large enterprises.
This project extracts the Quotes and Opportunity information from on-premises sources and loads it
into the Azure cloud to build business reports and derive business insights.
Responsibilities:
• Understand and analyze the Quotes and Opportunity information and determine the impact of the new implementation on existing business processes.
• Design and build modern data solutions using Azure PaaS services to support visualization of data.
• Develop and maintain various data ingestion pipelines as per the design architecture and processes: source to landing, landing to curated, and curated to processed.
• Extract, transform and load data from source systems to Azure data storage services using a combination of Azure Data Factory, Azure Synapse and Databricks.
• Used Liquibase for database migrations and JDBC for database connections.
• Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed the data in Azure Databricks.
• Created pipelines in ADF using linked services, datasets and pipelines to extract, transform and load data between sources and targets such as Azure SQL, Blob storage and Azure SQL Data Warehouse, including write-back flows.
• Worked extensively with Azure Blob and Data Lake Storage, loading data into Azure Synapse Analytics (SQL DW); a minimal sketch of this load pattern follows this list.
• Develop Spark applications using PySpark and Spark SQL for data extraction, transformation and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.
• Responsible for estimating cluster size, and for monitoring and troubleshooting the Databricks Spark cluster.
• Migrated on-premises data to Azure Data Lake Store (ADLS) using Azure Data Factory.
• Developed JSON definitions for deploying pipelines in Azure Data Factory (ADF) that process data using the SQL activity.
• Enabled monitoring and Azure Log Analytics to alert the support team on usage and statistics of the daily runs.
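
A minimal Databricks (PySpark) sketch of the ADLS-to-Synapse load pattern referenced above; the storage account, containers, JDBC URL and table names are placeholders assumed for the example, not details of the HPI-TERA project.

```python
# Illustrative Databricks sketch: land data from ADLS Gen2 and load it into
# Azure Synapse (SQL DW). All names below are assumed placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read landed files from Azure Data Lake Storage Gen2 (hypothetical path)
raw = spark.read.parquet("abfss://landing@examplelake.dfs.core.windows.net/quotes/")

# Light curation step before the warehouse load
curated = raw.dropDuplicates(["quote_id"]).filter("quote_status IS NOT NULL")

# Write to Synapse via the Databricks SQL DW connector; the JDBC URL, staging
# container and target table are placeholders for the example
(curated.write
    .format("com.databricks.spark.sqldw")
    .option("url", "jdbc:sqlserver://example.sql.azuresynapse.net:1433;database=dw")
    .option("tempDir", "abfss://staging@examplelake.dfs.core.windows.net/tmp/")
    .option("forwardSparkAzureStorageCredentials", "true")
    .option("dbTable", "dbo.quotes_curated")
    .mode("append")
    .save())
```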
Environments: Azure Data Factory, Azure Blob storage, Azure Synapse Analytics, Databricks,
Azure SQL Data warehouse, PySpark, SAP HANA, Oracle, Squirrel, Putty, Power BI,
SQL/PLSQL, Jira

Client: Cardinal Health - Dublin, OH Oct ’19 – Jan ’22


Role: Data Engineer
Project: P-MOD (Pharma Modernization)
Description:
Cardinal Health is a global, integrated healthcare services and products company, providing
customized solutions for hospitals, health systems, pharmacies, ambulatory surgery centers, clinical
laboratories and physician offices worldwide. The data includes Pharma Warehouse & Order
Management details, customer details and transaction information. The system has a HANA
operational database fed from different sources, while Teradata holds the historical information.
Responsibilities:

• Involved in various phases of the Software Development Life Cycle (SDLC) of the application, including requirement gathering, design, analysis and code development, using Agile/Scrum methodology.
• Understand the impact of the centralized data layer that holds all the data required for Pharma Warehouse & Order Management.
• Create new pipelines or modify existing ones to orchestrate data movement from on-prem sources to Azure Data Lake.
• Create/modify linked services and triggers, and schedule pipeline runs.
• Create notebooks in Azure Databricks (ADB) and implement complex business logic, transforming the data as required using Spark SQL and PySpark (see the sketch after this list).
• Monitor and debug pipeline failures, find the root cause of the failures, provide fixes and re-enable the pipelines.
• Responsible for modifying, debugging and testing code before deploying it to the production cluster.
• Worked on Scala with Spark SQL and Spark Streaming integration.
• Design and develop data visualizations in Power BI.
• Debugged and troubleshot issues identified during the testing phases (IST, UAT & Production); responsible for unit, integration, system and regression testing.
• Used Agile methodology, delivering planned work items periodically within the agreed timelines.
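
A minimal sketch of the ADB notebook pattern referenced above, mixing Spark SQL and PySpark; the mount paths, view, table and column names are assumptions for the example, not actual P-MOD objects.

```python
# Illustrative ADB notebook cell: business logic in Spark SQL continued in
# PySpark. Table and column names are assumed for the example.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Register the landed orders as a temporary view for Spark SQL
orders = spark.read.parquet("/mnt/landing/pharma/orders/")
orders.createOrReplaceTempView("orders")

# Business rule expressed in Spark SQL: keep only open, non-empty order lines
shippable = spark.sql("""
    SELECT order_id, customer_id, product_id, quantity, order_ts
    FROM orders
    WHERE order_status = 'OPEN' AND quantity > 0
""")

# Same pipeline continued in PySpark: derive a curated, partitioned output
curated = shippable.withColumn("order_date", F.to_date("order_ts"))
(curated.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("/mnt/curated/pharma/orders/"))
```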
Environments: Azure Data Factory, Azure Blob storage, DataBricks, Spark, Azure SQL DB, Power
BI, SQL Server, Jira, GIT, Teradata and HANA

Client: USAA, San Antonio, TX Jun ’18 – Oct ’19


Role: ETL & Reporting Developer
Project: AML DATA – TMS
Description:
The United Services Automobile Association (USAA) provides banking, investing, and insurance to
people and families who serve, or have served, in the United States military. The aim of the project is
to create data quality checks, such as completeness, integrity, and consistency checks, on all the data
sources coming into the centralized AML database (cash, ATM deposits/withdrawals, wire transfers,
foreign withdrawals, mutual funds, trust data, etc.) to comply with OCC regulatory rules; a minimal
sketch of such checks follows the responsibilities list.
Responsibilities:
• Worked with clients and business partners to create a centralized data layer that has all the data required for all AML regulatory needs.
• Coordinated with multiple teams to extract the data and to understand the business transformations required for AML data.
• Communicated daily with the offshore team on job development and requirement clarifications.
• Reviewed jobs weekly with business partners and the project team.
• Actively participated in business validations of data and production migrations.
• Designed reusable jobs for data extraction and loading.
• Modified Universes to populate AML data.
• Analyzed AML data flows for Universe source modifications.
• Coordinated with the BO environment team during the report migration.
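
The checks themselves were built in IBM DataStage against Oracle, DB2 and Netezza; purely as an illustration of the completeness, integrity and consistency rule types described above, here is a minimal PySpark sketch (the scripting stack listed in this resume) with assumed source paths, column names and rules.

```python
# Illustrative data-quality sketch: completeness, integrity and consistency
# checks over assumed AML sources. All names and rules are examples only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
wires = spark.read.parquet("/mnt/aml/wire_transfers/")

# Completeness: key fields must not be null
completeness = wires.select(
    F.count("*").alias("row_count"),
    F.sum(F.col("account_id").isNull().cast("int")).alias("missing_account_id"),
    F.sum(F.col("amount").isNull().cast("int")).alias("missing_amount"),
)

# Integrity: every transaction must reference a known account
accounts = spark.read.parquet("/mnt/aml/accounts/")
orphans = wires.join(accounts, "account_id", "left_anti")

# Consistency: amounts must be positive and currency codes valid
inconsistent = wires.filter(
    (F.col("amount") <= 0) | ~F.col("currency").isin("USD", "EUR")
)

completeness.show()
print("orphan transactions:", orphans.count())
print("inconsistent rows:", inconsistent.count())
```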
Environments: IBM DataStage, Control-M, SAP Business Objects, Tableau, Oracle 11g, DB2 and
Netezza

Client: USAA, San Antonio, TX Apr ’16 – Jun ’18


Role: Reporting Developer
Project: CFO report standardization
Description:
The United Services Automobile Association (USAA) provides banking, investing, and insurance to
people and families who serve, or have served, in the United States military. The purpose of this
project is to bring the existing reports and dashboards up to the CFO standard and to create new
reports for various business units that align with the CFO.
Responsibilities:
• Analyzed functional and reporting requirements.
• Modified and created new CFO reports as per business requirements, extensively using advanced WebI reporting functionalities such as drill, filters, prompts, sections, formatting rules and breaks.
• Standardized and automated the scheduled reports by making use of newly added tables in the case of existing reports.
• Extensively worked on transaction reports, which are used to insert, update and delete records in the database.
• Gathered requirements, performed feasibility analysis and suggested changes to make the requirements more user-friendly.
• Assisted the team in their development activities, both technically and functionally.
• Worked closely with the Hyperion team on data validation.
• Created report design documents, unit test documents, test case documents and production readiness documents for all production releases.
Environments: IBM DataStage, Control-M, SAP Business Objects, Oracle 11g, DB2 and Netezza

Client: Covance - Princeton, NJ Aug ’12 – Mar ’16


Role: Reporting Developer
Project: Healthcare Reporting Application (HRA)
Description:
Covance is one of the world’s largest and most comprehensive drug development services companies.
The main aim of this project is to track clinical and nonclinical trial data across all branches and
load it into a data warehouse to create weekly, monthly and yearly reports.
Responsibilities:
• Created and modified Universes for various functional areas of high-level report groups using the Information Design Tool.
• Developed Universes based on the requirements and specification documents, using features such as joins, aliases, derived tables, access-level restrictions, custom hierarchies, functions and contexts in Universe Designer.
• Customized LOVs in Universes for quicker report performance.
• Changed Universe parameter settings to optimize Universe performance.
• Exported Universes to the repository, making resources available to users.
• Created daily, weekly and monthly reports for the management team and field operations managers using BO reports.
• Created reports using variables and context operators as per user requirements.
• Extensively used advanced WebI reporting functionalities such as drill, filters, ranking, prompts, sections, formatting rules, graphs and breaks.
• Involved in CMC report publication and scheduling activities for all report types in the application.
Environments: SAP Business Objects, Oracle 11g and SQL Server

EDUCATION

Course: Bachelor of Technology in Electronics & Communications Engineering
College: Sri Kalahastheeswara Institute of Technology, India
Year: 2012
