Bigdata Profile
Mob: 6303286833
Email: mkreddy925@gmail.com
PROFILE
3.4 years of overall experience with a strong emphasis on the design, development, implementation, testing, and deployment of software applications, with extensive development experience in Bigdata, Oracle, UNIX, BO (Business Objects), RDBMS, and UNIX shell scripting.
PROFESSIONAL SUMMARY
Experienced in the development and maintenance of Hadoop ecosystems.
Explored Hadoop applications and recommended the right solutions and technologies for the applications.
Good expertise in Hadoop tools such as Hive and Sqoop, as well as Scala, Spark, and Autosys.
Knowledge of the Spark framework for batch and real-time data processing.
Good knowledge of the Scala programming language.
Strong knowledge of the Agile and Waterfall models, with involvement in sprint planning.
Capable of processing large sets of structured, semi-structured, and unstructured data, and of supporting systems application architecture.
Handled TEXT, JSON, and XML data using Hive (SerDe) and Spark/Scala.
Familiar with data architecture, including data ingestion pipeline design, Hadoop information architecture, data modeling, data mining, and advanced data processing.
Good communication and interpersonal skills. Technically sound and result-oriented with strong problem-solving skills. Innovative and efficient. Capable of working as a team member or individually with minimal supervision.
Flexible and versatile, able to adapt to any new environment, with a strong desire to keep pace with the latest technologies.
Worked closely with the UI team on preparing dashboard reports and segments.
TECHNICAL SKILLS:
Working experience in BO (Business Objects), SQL, UNIX, shell scripting, Oracle, and Teradata.
Qualifications:
B.
Professional Experience
Working as a Bigdata Developer at TATA CONSULTANCY SERVICES (TCS) from March 2016 to till date.
Name: AMGEN
Client: AMGEN
Environment: Scala, Spark, HDFS, Hive, Unix Shell Script, AWS (EMR, S3), GitHub
Company: TCS
Responsibilities:
Analyzed and identified the resources (NDB, EPIM, NICE) from which to extract information for the selected data elements.
Read the JSON files generated by EPIM for the input data elements.
Converted the JSON file data from EPIM into DataFrames so that records can be easily compared at the data-element level.
Compared the EPIM and NDB records and loaded the resulting data into ES.
Created Step Functions for spinning up the EMR cluster, executing multiple Spark jobs, and terminating the cluster.
Updated the repository for building the Jenkins pipeline.
Provided design recommendations and thought leadership that improved the user recommendation engine and resolved technical problems.
Used different kinds of wide and narrow transformations and actions on DataFrames using Spark with Scala.
Wrote shell scripts to perform secure copies of JSON data logs from the application server to the Bigdata servers.
Used Spark and Spark SQL to read the JSON data and create tables in Hive using the Scala API.
Project 1:
Name: TCS
Responsibilities:
Created hierarchies for providing drill-down options to the end user.
Created complex reports and user objects, such as measure objects, using the aggregate-aware function to create summarized reports.
Created report-level variables, objects, and formulas to create new columns based on the result set.
Created complicated reports, including sub-reports, graphical reports, and formula-based and well-formatted reports, according to user requirements.
Analyzed and enhanced the existing BO reports as per new requirements.
Developed Web Intelligence reports using combined queries.