Valluri Sunitha: Professional Summary
Professional Summary
Total 4.6 years of professional experience in the IT industry, of which 3 years are in
Microsoft Azure services.
Summary:
Strong working knowledge of Azure Databricks, Azure SQL, and Azure Data
Factory.
Hands-on experience in Azure Data Factory and its core concepts such as Datasets,
Pipelines and Activities, Scheduling and Execution.
Excellent knowledge of ADF building components: Integration Runtime, Linked
Services, Datasets, Pipelines, and Activities.
Experience in managing and storing confidential credentials in Azure Key Vault.
Good experience writing Python code for data loads and extracts in Azure Databricks.
Knowledge of data extraction from on-premises sources and delta extraction
methods from source systems to ADLS.
Hands-on experience integrating Hive with Spark to run HQL queries in Spark.
Excellent communication, interpersonal, and analytical skills, and a strong ability to
perform as part of a team.
Hands-on experience in Python programming and Spark components such as Spark
Core and Spark SQL.
Worked on creating RDDs and DataFrames for the required input data and performed
data transformations.
Experience with Spark Core, creating RDDs to perform aggregations, grouping,
etc. in Spark.
Manage data recovery for Azure Data Factory Pipelines.
Knowledge of Azure Databricks, including creation of notebooks.
Good team player with excellent communication and interpersonal skills.
Play a key role in the development team.
Professional Experience
Working as a Software Engineer at Mouriya IT Solutions Private Limited from June 2018 to date.
Education Details
PROJECT-1:
Description:
CAQH was formed by a number of the nation’s largest health insurance companies with the goal of
creating a forum for healthcare industry stakeholders to discuss administrative burdens for
physicians, patients, and payers. Its mission is to accelerate the transformation of business
processes in healthcare through collaboration, innovation, and a commitment to ensuring value
across stakeholders, including healthcare providers, trade associations, and health plans.
Coordinate with external teams to reduce data flow issues and unblock team members.
Actively participate in the four Scrum ceremonies: sprint planning, daily Scrum,
sprint review, and sprint retrospective.
Working with the source team to extract the data, which is then loaded into ADLS.
Creating linked services for source and target connectivity based on the requirement.
Creating pipelines and datasets, which are deployed to the non-restricted ADF environment.
Running metadata insert scripts for the pipelines, which are recorded in the logging
framework.
Once created, pipelines and datasets are triggered based on the LOAD (HISTORY/DELTA)
operation.
Depending on the source data size (big or small), loaded files are processed in Azure
Databricks by applying Spark SQL operations, deployed through Azure Data Factory
pipelines.
Involved in deploying the solutions to the DEV, QA, and PROD environments.
Involved in setting up the DEV, QA, and PROD environments using VSTS.
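For illustration, a linked service of the kind created in the steps above is defined in ADF as a JSON document; the sketch below shows an ADLS Gen2 connection whose account key is pulled from Azure Key Vault. All names (the linked services, storage account placeholder, and secret name) are hypothetical, not taken from the project:

```json
{
    "name": "LS_AzureDataLake",
    "properties": {
        "type": "AzureBlobFS",
        "typeProperties": {
            "url": "https://<storage-account>.dfs.core.windows.net",
            "accountKey": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "LS_KeyVault",
                    "type": "LinkedServiceReference"
                },
                "secretName": "adls-account-key"
            }
        }
    }
}
```

Referencing the credential as an AzureKeyVaultSecret, rather than embedding it in the linked service, matches the Key Vault practice described in the summary.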
PROJECT-2:
Description:
PROJECT-3:
Description:
Lowell is a debt management company; it purchases portfolios from banks and reaches out
to customers for debt collection. As part of this, the Lowell team creates different plans for
collecting debts, and customers make payments against the plans created by the
Lowell team. Lowell also performs predictive analysis to understand collections over the next 3
years, which new accounts joined in a given month, and which customers completed their
debts that month.
Roles and Responsibilities: