Manoj Kumar - Cloud Data Engineer


Manoj Kumar
Phone: +91 8125502089
E-Mail: manojkumaronman110@gmail.com

Seeking a role in a challenging and healthy work environment where I can utilize my skills and
knowledge efficiently for organizational growth.

CAREER SUMMARY

• Overall 8.7 years of experience in the IT industry; presently with Optum (UHG) Technologies as an
Azure Data Engineer
• Interacted with business users, gathered and analysed requirements, and recommended and
designed solutions
• Prepared the BRS (Business Requirement Specification) document that gives detailed information
about the requirements
• Created Databricks notebooks with PySpark and Spark SQL
• Used narrow and wide transformations to convert unstructured data into structured data (a brief
sketch follows this list)
• Created Key Vaults and have experience with keys, secrets, and certificates
• Worked on storage accounts, access keys, SAS tokens, and ADLS; migrated files from one storage
account to another
• Migrated SSIS packages to ADF using the lift-and-shift method and installed a Self-Hosted IR for
high availability
• Performed Databricks code deployments from one branch to another
• Monitored and maintained production, test, and development environments
• Created and monitored ADF pipelines and fixed issues
• Created notebooks in ADB and implemented transformation logic in them
• Created alerts and metrics for ADF
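A minimal PySpark sketch of the notebook pattern described above; the secret scope (kv-scope), storage account (mystorageacct), paths, and column names are illustrative assumptions, not project specifics. Narrow transformations such as filter and withColumn work within a partition, while the wide groupBy aggregation triggers a shuffle. spark and dbutils are the objects a Databricks notebook provides.

```python
# Narrow vs. wide transformations in a Databricks notebook (all names assumed).
from pyspark.sql import functions as F

# Fetch a storage key from an Azure Key Vault-backed secret scope
storage_key = dbutils.secrets.get(scope="kv-scope", key="storage-account-key")
spark.conf.set("fs.azure.account.key.mystorageacct.dfs.core.windows.net", storage_key)

# Load semi-structured JSON from ADLS into a DataFrame
raw = spark.read.json("abfss://raw@mystorageacct.dfs.core.windows.net/events/")

# Narrow transformations: filter and derived column, no shuffle required
clean = (raw.filter(F.col("event_type").isNotNull())
            .withColumn("event_date", F.to_date("event_ts")))

# Wide transformation: groupBy aggregation, which shuffles data across partitions
daily_counts = clean.groupBy("event_date", "event_type").count()

# Persist the structured result as a table
daily_counts.write.mode("overwrite").saveAsTable("curated.daily_event_counts")
```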

WORK EXPERIENCE

• Working as an Azure Data Engineer for Optum (UHG) from Aug 2015 till date

KEY SKILLS

Azure : Azure Data Factory, Databricks, PySpark, Azure Synapse, Logic Apps, ADF pipelines,
Integration runtime, Linked services, Triggers, On-premises SQL, IaaS SQL, PaaS, Recovery Services
vault, Storage account, ADLS, Azure SQL DB Managed Instance, Geo-replication, Failover groups,
Sync to other DBs, DMA, DMS, Migration, Automation accounts, Azure Active Directory, Azure
Analysis Services, Storage Explorer, Power BI, Azure monitoring, Key Vaults
Functional Domains : Healthcare
Databases : MS SQL, Azure SQL DB, Managed Instance
Operating Systems : Windows Server, UNIX
Languages : PySpark, Spark SQL, T-SQL

Project #1

Client : Optum
Role : Data Engineer
Domain : Healthcare
Duration : April 2021 – Till Date

Responsibilities:

• Worked on on-premises SQL data movement to Azure SQL with ADF pipelines
• Built pipelines in Azure Data Factory to copy data from source to destination
• Transformed data using Mapping Data Flows with joins, unions, derived columns, and filters
• Moved data from JSON to SQL tables using Mapping Data Flows
• Merged files from source to destination Azure Blob storage
• Loaded data from REST APIs into Azure SQL Database
• Created a Databricks notebook to rename CSV file columns and load the data into a SQL table
(see the sketch after this list)
• Created linked services for both the source and the destination servers
• Scheduled jobs in Flows and ADF pipelines
• Scheduled pipelines and monitored data movement from sources to destinations
• Transformed data in Azure Data Factory with ADF transformations
• Troubleshot data issues and validation issues
• Moved CSV files from Azure Blob to Azure SQL Server
• Implemented a Logic App to send an email notification when a pipeline fails
• Installed a Self-Hosted IR with high availability
• Created linked services and datasets as per the requirements
• Performed Databricks code deployments from one branch to another
• Migrated SQL DBs from on-premises to PaaS Managed Instance and Azure SQL DB with the DMA
and DMS tools
• Performed delta loads during migration with ADF
• Moved 57 SSIS packages to ADF using the lift-and-shift method
• Created a Key Vault for ADF and used Key Vault authentication; updated keys whenever they expired
• Configured backups for SQL DBs on the IaaS server and monitored those jobs
• Provided access to Azure SQL DB whenever a new member requested it
• Created alerts to monitor ADF pipelines
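A hedged sketch of the CSV-rename-and-load notebook mentioned in this list; the server, database, secret names, container, column mappings, and target table are illustrative assumptions rather than project specifics.

```python
# Rename CSV columns and load into an Azure SQL table over JDBC (all names assumed).
jdbc_url = ("jdbc:sqlserver://myserver.database.windows.net:1433;"
            "database=mydb;encrypt=true")
sql_user = dbutils.secrets.get(scope="kv-scope", key="sql-user")
sql_pwd = dbutils.secrets.get(scope="kv-scope", key="sql-password")

# Read the CSV from ADLS with a header row
df = (spark.read.option("header", "true")
      .csv("abfss://landing@mystorageacct.dfs.core.windows.net/input.csv"))

# Rename source columns to match the target SQL table schema
renames = {"Cust ID": "customer_id", "Full Name": "customer_name"}
for old, new in renames.items():
    df = df.withColumnRenamed(old, new)

# Append the rows into the Azure SQL table
(df.write.format("jdbc")
   .option("url", jdbc_url)
   .option("dbtable", "dbo.customers")
   .option("user", sql_user)
   .option("password", sql_pwd)
   .mode("append")
   .save())
```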

Project #2

Client : Optum
Role : Data Engineer
Domain : Healthcare
Duration : Dec 2019 – March 2021
Responsibilities:

• Designing and implementing data ingestion pipelines from multiple sources using Azure
Databricks
• Developing scalable and reusable frameworks for ingesting datasets
• Integrating end-to-end data pipelines to take data from source systems to target data
repositories, ensuring the quality and consistency of data is maintained at all times
• Working with event-based/streaming technologies to ingest and process data (a brief sketch
follows this list)
• Performing Databricks code deployments from one branch to another
• Performing delta loads during migration with ADF
• Monitoring ADF pipelines; fixing issues where possible or escalating them to the L2 team
• Monitoring Databricks jobs
• Creating tickets as per requests
• Providing access to ADF, Databricks workspaces, and Azure SQL DBs
• Working on Key Vault key renewals and SSL certificate renewals
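A minimal sketch of an event-based ingestion stream, assuming Databricks Auto Loader (cloudFiles) over ADLS as the source; the actual project may have used a different streaming technology, and all paths and table names here are hypothetical.

```python
# Incremental file-event ingestion with Auto Loader into a Delta table
# (paths, schema location, and table name are hypothetical).
stream = (spark.readStream
          .format("cloudFiles")
          .option("cloudFiles.format", "json")
          .option("cloudFiles.schemaLocation",
                  "abfss://meta@mystorageacct.dfs.core.windows.net/schemas/events")
          .load("abfss://raw@mystorageacct.dfs.core.windows.net/events/"))

# The checkpoint makes the stream restartable; availableNow drains the backlog
# and stops, which suits a scheduled ingestion job
(stream.writeStream
       .option("checkpointLocation",
               "abfss://meta@mystorageacct.dfs.core.windows.net/checkpoints/events")
       .trigger(availableNow=True)
       .toTable("bronze.events"))
```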
Project #3

Role : Analyst
Organization : UHG
Location : Hyderabad
Duration : Aug 2015 – Nov 2019

Responsibilities:

• As per client requests, and depending on excess volume, stretched on non-working days to clear
the volume and upload the files
• Generated the Quality, Production, Utilization, and Login & Logout reports and sent them to the
team on a daily basis
• Worked with VLOOKUPs and pivot tables to generate reports whenever the supervisor requested
(a brief sketch follows this list)
• Guided juniors and trainee members and helped them understand the process
• Conducted training sessions and audited the records processed by mentees
• Consolidated the records processed by trainees and juniors, modified the sheet, and sent it for
internal auditing
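The reporting in this role was done in Excel; the following pandas sketch is only an analogue of the same VLOOKUP-plus-pivot-table pattern, with hypothetical file and column names.

```python
# A pandas analogue of Excel VLOOKUP + pivot-table reporting (names assumed).
import pandas as pd

records = pd.read_csv("processed_records.csv")   # e.g. agent, date, count
agents = pd.read_csv("agent_roster.csv")         # e.g. agent, team, supervisor

# merge() plays the role of VLOOKUP: enrich each record with roster details
enriched = records.merge(agents, on="agent", how="left")

# pivot_table() plays the role of an Excel pivot table: daily production per team
report = enriched.pivot_table(index="team", columns="date",
                              values="count", aggfunc="sum", fill_value=0)

report.to_csv("daily_production_report.csv")
```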

Declaration

I hereby declare that all the information mentioned above is true to the best of my knowledge.

Place:

Manoj Kumar Onman
