Khaja Obedullah Haruni

Senior DevOps Engineer

SUMMARY:
 9+ years of experience in the IT industry in various roles as a DevOps, Cloud, and Build/Release Engineer, with excellent experience in software integration, configuration, packaging, building, automating, managing, and releasing code from one environment to another and deploying it to servers.
 Experience in administering, deploying, managing, monitoring, and troubleshooting all flavors of Linux distributions, including Red Hat, SUSE, Ubuntu, and CentOS servers.
 Well experienced in Cloud/DevOps, Continuous Integration and Continuous Delivery (CI/CD) pipelines, Site Reliability, Build and Release Management, the SDLC, and Windows/Linux administration (RHEL, CentOS, Ubuntu).
 Experience working on different cloud infrastructures such as AWS, Azure, and GCP, using a broad range of the services each provides.
 Worked with operations and development teams to plan and initiate the migration of on-
premises legacy solutions to AWS and Azure.
 Worked on building and deploying Java, Python, Node.js, and .NET based applications. Good knowledge of IoT (Internet of Things) end-to-end application development.
 Responsible for managing the AWS Cloud platform and its features, including EC2, VPC, EBS, CloudWatch, CloudTrail, CloudFormation, CloudFront, IAM, S3, Route 53, SNS, SQS, etc.
 Extensively worked on Azure Services such as Azure Virtual Network, Azure Virtual
Machines, ARM Templates, Azure AD, Azure Key Vault, Azure Logic Apps, Azure Blob
Storage, Functions, Application Gateways, etc.
 Implemented machine learning techniques to analyze historical data and forecast future
resource requirements, contributing to effective capacity planning and cost optimization in
AWS and Azure cloud environments.
 Experience working with source code repository management tools like Git, GitHub, GitLab, Bitbucket, and Subversion (SVN), and artifact repository tools like JFrog Artifactory and Nexus.
 Hands-on experience using Maven, Gradle, and Ant as build tools to produce deployable artifacts (JAR & WAR) from source code, and performed static code analysis using SonarQube.
 Administered and managed Sonatype Nexus repository.
 Experience working with CI/CD tools like Jenkins, Azure DevOps, Bamboo, and TeamCity for end-to-end automation of all builds and deployments.
 Extensive knowledge of configuration/deployment tools like Ansible (playbooks, inventory), Chef (Knife, recipes, cookbooks), and Puppet (manifests, Facter, catalogs), as well as automation.
 Expertise in creating infrastructure using IaC tools like Terraform, CloudFormation, and ARM templates.
 Designed, built, implemented, deployed, maintained, and monitored scalable cloud-based containerized applications and microservices through Docker and Kubernetes on AWS, Azure, and GCP. Good knowledge of cloud concepts - IaaS, SaaS, PaaS, etc.
 Experience using Kubernetes (AKS / EKS / OpenShift) and Docker for the runtime environment of the CI/CD system to build, test, and deploy.
 Experienced in creating shell scripts for Canary and full deployment through Harness.
 Experience with Docker containerized environments, hosting web servers on containers, building Docker images, and automating operations across multiple platforms (Linux, Windows, and macOS).
 Hands-on experience writing manifest files, creating pods, and managing cluster environments on Kubernetes (AKS / EKS / OpenShift).
 Extensively worked with scheduling, deploying, and managing container replicas onto a node cluster using Kubernetes, building the Kubernetes runtime environment of the CI/CD system to build, test, and deploy on an open-source platform.
 Worked on continuous deployment through advanced deployment patterns such as Blue/Green deployments, rolling updates, dark launches, feature toggles, and canary releases to minimize or eliminate downtime, predict and contain release/deployment risks, manage and resolve incidents with minimal end-user impact, and address failed deployments in a reliable, effective way.
 Experienced in APM and ALM tools such as Splunk, Nagios, Datadog, Dynatrace,
AppDynamics, Cyber Apica, Netcool, Grafana, Prometheus, ELK, Azure Monitoring, AWS
CloudWatch, etc.
 Well experienced in scripting with Linux/Unix shell and Bash, Python, Java, PowerShell, Ruby, Groovy, and YAML.
 Experienced in API development and lifecycle management tools such as Postman, SoapUI, ReadyAPI, Apigee, Apigee Hybrid, etc.
 Experienced in installation, configuration, and operation of Atlassian tools like Jira, Confluence, Fisheye, etc.
 Experience with event driven asynchronous architectures including tools like Kafka, Kinesis,
RabbitMQ and ActiveMQ.
 Hands-on experience with major IDEs like Visual Studio, Eclipse, and the JDK suite for software development and coding.
 Hands-on experience with testing frameworks like JUnit, application servers like Apache Tomcat, WebLogic, WebSphere, and JBoss, and database platforms such as MSSQL and NoSQL stores.
 Worked with SQL, NoSQL, and Big Data technologies such as MySQL, PostgreSQL, MongoDB, DynamoDB, Cassandra, Redis, and Hadoop, supporting and enhancing AWS Lambda functions that query an Elasticsearch cluster for production servers.
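
To make the Lambda-to-Elasticsearch integration mentioned above concrete, a minimal Python sketch might look like the following; the endpoint, index name, and query are hypothetical placeholders rather than values from any actual project, and the requests library is assumed to be packaged with the function.

    import json
    import os

    import requests  # assumed to be bundled with the Lambda deployment package

    # Hypothetical endpoint and index; real values would come from environment variables.
    ES_ENDPOINT = os.environ.get("ES_ENDPOINT", "https://search-example.us-east-1.es.amazonaws.com")
    ES_INDEX = os.environ.get("ES_INDEX", "app-logs")

    def lambda_handler(event, context):
        """Query an Elasticsearch cluster for recent error documents and return a summary."""
        query = {"query": {"match": {"level": "ERROR"}}, "size": 10}
        resp = requests.post(f"{ES_ENDPOINT}/{ES_INDEX}/_search", json=query, timeout=10)
        resp.raise_for_status()
        hits = resp.json()["hits"]
        return {
            "statusCode": 200,
            "body": json.dumps({"total": hits["total"], "sample": hits["hits"][:3]}),
        }
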

Technical Skills:

Cloud Platforms: AWS, Azure, GCP
Version Control Tools: Git, GitHub, GitLab, Bitbucket, SVN
Languages: C, C++, Java, Python, .NET
Scripting: Bash/Shell, Python, JSON, YAML
Continuous Integration: Jenkins, Azure DevOps, TeamCity, Atlassian Bamboo
Configuration Management Tools: Ansible, Puppet, Chef
Containerization/Orchestration: Docker, Docker Swarm, Apache Mesos, Marathon, Kubernetes, Harness, OpenShift, ECS
AWS Services: EC2, S3, RDS, ELB, EKS, EBS, VPC, Auto Scaling, DynamoDB, OpenStack
Azure Services: AKS, Storage, Logic Apps, VM
Build Tools: Apache Ant, Maven, Gradle, MSBuild
Bug Tracking Tools: JIRA, Remedy
Repository Management: JFrog, Nexus
Operating Systems: Linux, Unix, Windows
Monitoring Tools: Splunk, Nagios, Prometheus, Grafana, Dynatrace, Datadog, New Relic, Specto, ELK Stack, Elasticsearch
Databases: MySQL, SQL Server, PostgreSQL, Redis, Elasticsearch, Cosmos DB, Cassandra, MongoDB, DynamoDB

Professional Experience:

Concentra - Lincoln, RI June 2021 - Present


Senior DevOps Engineer

Responsibilities:
 Provide Day-to-day Operational Support (Release Management, Deployments and Triage
Support) for Applications running in DEV, QA, UAT, Performance Testing and Production
environments.
 Extended support to existing product teams on how to integrate CI/CD into the development life cycle.
 Worked with continuous integration/continuous delivery using tools such as Jenkins, Azure
DevOps and GitHub.
 Responsible for overseeing all planned outages and assisting with major upgrades to ensure minimum downtime. Well experienced in collaborating with dev teams working on .NET and Java applications.
 Built and tested .NET code using MSBuild and MSTest.
 Worked with Azure Services such as Azure Virtual Network, Azure Virtual Machines,
ARM Templates, Functions, Application Gateways, Azure AD, Azure Key Vault, Azure Logic
Apps, Azure Blob Storage, etc.
 Worked with AWS services such as the AWS Console, Elastic Compute Cloud (EC2), Simple Storage Service (S3), CloudFormation, cloud-native microservices, Glacier, block storage, Elastic Beanstalk, AWS Lambda, Virtual Private Cloud (VPC), virtual machines (VMs), load balancing, Relational Database Service (RDS), Elastic Container Service (ECS), Elastic Container Registry (ECR), Elastic Kubernetes Service (EKS), Route 53, and AWS CloudWatch. Implemented AWS solutions using EC2, S3, RDS, EBS, Elastic Load Balancer, and Auto Scaling groups.
 Built end-to-end CI/CD pipelines in Jenkins / Azure DevOps to retrieve code, compile applications, perform tests, push build artifacts to the Nexus artifact repository, and deploy them on the web servers.
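
The pipelines themselves were defined in Jenkins and Azure DevOps; as a small Python-side illustration of driving such a pipeline, a sketch using the python-jenkins library could look like this. The server URL, credentials, job name, and parameter are hypothetical.

    import jenkins  # pip install python-jenkins

    # Hypothetical server URL and credentials; real values would come from a vault or environment.
    server = jenkins.Jenkins("https://jenkins.example.com", username="ci-bot", password="api-token")

    def trigger_build(job_name="app-ci-pipeline", branch="main"):
        """Queue a parameterized Jenkins build and report the last completed build's result."""
        queue_id = server.build_job(job_name, parameters={"BRANCH": branch})
        info = server.get_job_info(job_name)
        last = info.get("lastCompletedBuild") or {}
        print(f"Queued {job_name} (queue item {queue_id}); last completed build: #{last.get('number')}")
        return queue_id

    if __name__ == "__main__":
        trigger_build()
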
 Managed Jenkins / Azure DevOps pipelines and updated them as per the application
requirements and deployed SonarQube through the pipeline to check the code coverage of the
service.
 Implemented machine learning techniques to analyze historical data and forecast future resource
requirements, contributing to effective capacity planning and cost optimization in cloud
environments.
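
A minimal sketch of the kind of forecasting described in the bullet above, assuming hourly CloudWatch CPU metrics and a simple linear trend fit with NumPy; the instance ID, look-back window, and horizon are illustrative only, and a production model would be more sophisticated.

    from datetime import datetime, timedelta

    import boto3
    import numpy as np

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    def forecast_cpu(instance_id="i-0123456789abcdef0", lookback_days=14, horizon_days=7):
        """Fit a linear trend to historical average CPU and project it forward."""
        end = datetime.utcnow()
        start = end - timedelta(days=lookback_days)
        resp = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=3600,            # hourly datapoints
            Statistics=["Average"],
        )
        points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
        y = np.array([p["Average"] for p in points])
        x = np.arange(len(y))
        slope, intercept = np.polyfit(x, y, 1)  # simple linear trend
        return float(slope * (len(y) + horizon_days * 24) + intercept)

    if __name__ == "__main__":
        print(f"Projected average CPU in 7 days: {forecast_cpu():.1f}%")
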
 Automated infrastructure and deployment with Terraform, Harness, Ansible, Jenkins / Azure
DevOps.
 Proficient in writing scripts to automate tasks at different levels of build and release using PowerShell, Python 3, Go, Shell, Groovy, Bash, Ruby, etc.
 Wrote and used Ansible playbooks for scripting, code integration, and process automation. Utilized Ansible Tower to automate repetitive tasks, deploy critical applications quickly, and proactively manage changes.
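
The playbooks themselves were YAML executed through Ansible and Ansible Tower; purely as a Python-side illustration of invoking a playbook programmatically, a sketch with the ansible-runner package might look like this. The directory layout, playbook name, inventory, and variables are placeholders.

    import ansible_runner  # pip install ansible-runner

    # Hypothetical private data directory containing project/, inventory/, and env/.
    result = ansible_runner.run(
        private_data_dir="/opt/automation/deploy-app",
        playbook="deploy_web.yml",
        inventory="inventory/prod",
        extravars={"app_version": "1.4.2"},
    )

    print("status:", result.status)  # e.g. "successful" or "failed"
    print("rc:", result.rc)
    for event in result.events:      # task-level events, useful for audit logging
        if event.get("event") == "runner_on_failed":
            print("failed task:", event["event_data"].get("task"))
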
 Created and maintained highly scalable and fault-tolerant multi-tier Azure and AWS cloud
infrastructure using Terraform.
 Wrote Terraform templates for Azure and AWS infrastructure to build staging and production environments. Set up monitoring and alerting mechanisms for Azure, AWS, and private datacenter infrastructure. Also used the Terraform graph command to visualize execution plans.
 Deployed and maintained multi-container applications through Docker, orchestrated
containerized application using Docker-Compose and Kubernetes.
 Wrote Docker files and set up the automated build on Docker hub, installed and
configured Kubernetes.
 Extensively worked with scheduling, deploying, managing container replicas onto a node using
Kubernetes and experienced in creating Kubernetes clusters and worked with Helm charts
running on the same cluster resources.
 Proficient with Helm charts for managing and releasing Helm packages.
 Performed application troubleshooting by checking the logs from the running PODs and engaged
with respective teams for the resolution.
 Rendered deployments and microservices using advanced deployment patterns (Blue/Green deployments, canary releases, and rolling updates) to assess infrastructure performance, reduce downtime and risk, and ease debugging. Also used feature toggles to design continuous delivery pipelines (CDP) that minimize operational costs through push-button deployment at scale and that build, configure, test, and release applications with near-zero downtime.
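
One common way to implement the traffic shifting behind a blue/green or canary release on AWS is an ALB listener forwarding to weighted target groups; a minimal boto3 sketch follows. The ARNs and weights are placeholders, and a real rollout would run verification gates between each step.

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    # Placeholder ARNs; real ones would be looked up or injected by the pipeline.
    LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/example/..."
    BLUE_TG = "arn:aws:elasticloadbalancing:...:targetgroup/blue/..."
    GREEN_TG = "arn:aws:elasticloadbalancing:...:targetgroup/green/..."

    def shift_traffic(green_weight):
        """Send green_weight percent of traffic to the green target group, the rest to blue."""
        elbv2.modify_listener(
            ListenerArn=LISTENER_ARN,
            DefaultActions=[{
                "Type": "forward",
                "ForwardConfig": {
                    "TargetGroups": [
                        {"TargetGroupArn": BLUE_TG, "Weight": 100 - green_weight},
                        {"TargetGroupArn": GREEN_TG, "Weight": green_weight},
                    ]
                },
            }],
        )

    # Gradual canary-style rollout; health checks would normally gate each step.
    for weight in (10, 50, 100):
        shift_traffic(weight)
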
 Deployed an Azure Databricks workspace to an existing virtual network with public and private
subnets, configured with network security groups.
 Expertise in database modeling and development using databases such as Cosmos DB, MongoDB, and Cassandra; implemented automation tasks by writing scripts in Shell, Python, and Ruby.
 Automated key SRE metrics and IT Service Operations processes, including customer impact, percentage availability of critical business flows, SLO/SLI adherence, and error budget, and automated the incident process for IT Service Operations by integrating data with unified communications and alerting/notification systems.
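
To make the error-budget bookkeeping above concrete, a small sketch of the arithmetic, assuming a 99.9% availability SLO over a rolling window; the SLO value and request counts are illustrative numbers, not figures from the actual services.

    def error_budget_report(slo=0.999, total_requests=12_500_000, failed_requests=9_800):
        """Compute availability, the allowed error budget, and how much of it is consumed."""
        availability = 1 - failed_requests / total_requests
        allowed_failures = (1 - slo) * total_requests          # error budget, in requests
        budget_consumed = failed_requests / allowed_failures   # 1.0 means the budget is exhausted
        return {
            "availability_pct": round(availability * 100, 4),
            "error_budget_requests": int(allowed_failures),
            "budget_consumed_pct": round(budget_consumed * 100, 1),
        }

    print(error_budget_report())
    # {'availability_pct': 99.9216, 'error_budget_requests': 12500, 'budget_consumed_pct': 78.4}
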
 Shared support responsibilities for critical applications and customer journeys onboarded to SRE, including remediation of issues through Agile; conducted blameless post-mortems and root cause analysis; and introduced continuous improvements that solve problems once and for all, with the goal of no repeats.
 Installed, managed, and configured monitoring tools such as Splunk, Nagios, ELK (Elasticsearch, Logstash, Kibana), Datadog, Dynatrace, New Relic, Prometheus, and Grafana for monitoring and logging.
 Actively involved in monitoring application/server performance and network traffic to reduce performance bottlenecks and ensure upkeep.
 Deployed and configured applications into Pre-Prod and Prod environments with various
Application server technologies like WebLogic, JBoss, & Apache Tomcat.
 Experience implementing Spring Boot microservices that process messages into the Kafka cluster setup.
 Used Spring Kafka API calls to process messages smoothly on the Kafka cluster.
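
The consumers here were Spring Boot / Spring Kafka services written in Java; purely as a Python illustration of the same consume-and-process loop, a sketch with the kafka-python client is shown below. The topic name, brokers, and message fields are placeholders.

    import json

    from kafka import KafkaConsumer  # pip install kafka-python

    # Placeholder topic and brokers; the real services used Spring Kafka listeners.
    consumer = KafkaConsumer(
        "orders-events",
        bootstrap_servers=["kafka-1.example.com:9092"],
        group_id="orders-processor",
        auto_offset_reset="earliest",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    for message in consumer:
        event = message.value
        # The real service applied business logic here and wrote results downstream.
        print(f"partition={message.partition} offset={message.offset} order_id={event.get('orderId')}")
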
 Designed, Developed, Deployed, and configured Microservices onto Apache servers, managed
through Tomcat.
 Strong experience with SQL databases such as MySQL, PostgreSQL, and Oracle 10.x, as well as with NoSQL databases, specifically MongoDB and Cassandra, and with WordPress.
 Configured web servers (IIS, Nginx) to enable caching, CDN application servers, and load
balancers.
 Used Atlassian JIRA for story planning, ticketing, and defect tracking, and integrated Jenkins and Azure DevOps with JIRA and GitHub. Used Atlassian Confluence for documenting project progress and sprints.

Environment: AWS, Azure, Jenkins, Azure DevOps, Git, GitHub, Ansible, Terraform, Maven, Splunk, Nagios, Dynatrace, Datadog, ELK, JIRA, Confluence, SRE, Kafka, CloudWatch, New Relic, Agile, Docker, Kubernetes, EKS, WebLogic, Tomcat, Shell & Perl Scripting, MySQL, Veracode, EC2, S3

Home Depot - Austin, TX March 2019 - May 2021


DevOps Engineer

Responsibilities:
 Designed, developed, and maintained next generation DevOps processes which comprise
multiple stages including code build, test, deploy, monitor processes, and build automation jobs
for the same.
 Worked in an Agile-Scrum based SDLC Environment, through the Agile Service Manager,
Scrum, etc.
 Managed Azure DevOps build and release pipelines, overseeing the setup of new repositories and
permissions for various Git branches.
 Involved in CI/CD process using Azure DevOps, GitLab, Ansible, Jenkins builds and Docker
image creation to deploy applications.
 Deployed SonarQube server using containerized techniques, providing quality gates, and
managing users and user groups.
 Leveraged machine learning algorithms to enhance automation processes within the CI/CD
pipeline, improving efficiency.
 Designed and developed applications using the .NET Framework: C#.NET, ASP.NET, VB.NET.
 Created feature/master branches in GitLab to support new sprints.
 Responsible for Continuous Integration (CI) and Continuous Delivery (CD) process implementation using Azure DevOps Pipelines, along with Python and Shell scripts to automate routine jobs.
 Integrated Terraform and Azure DevOps to manage Azure Databricks.
 Wrote Python scripts to automate Azure and AWS services.
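
A minimal sketch of the kind of cross-cloud housekeeping script referred to above, assuming boto3 plus the azure-identity and azure-mgmt-compute packages; the tag name, region, and subscription ID are hypothetical.

    import os

    import boto3
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    def stop_untagged_ec2(region="us-east-1", required_tag="Owner"):
        """Stop running EC2 instances that are missing a required tag."""
        ec2 = boto3.client("ec2", region_name=region)
        to_stop = []
        pages = ec2.get_paginator("describe_instances").paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )
        for page in pages:
            for reservation in page["Reservations"]:
                for inst in reservation["Instances"]:
                    tags = {t["Key"] for t in inst.get("Tags", [])}
                    if required_tag not in tags:
                        to_stop.append(inst["InstanceId"])
        if to_stop:
            ec2.stop_instances(InstanceIds=to_stop)
        return to_stop

    def list_azure_vms(subscription_id=os.environ.get("AZURE_SUBSCRIPTION_ID", "<sub-id>")):
        """List Azure VM names across the subscription using DefaultAzureCredential."""
        compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)
        return [vm.name for vm in compute.virtual_machines.list_all()]

    if __name__ == "__main__":
        print("Stopped:", stop_untagged_ec2())
        print("Azure VMs:", list_azure_vms())
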
 Extensive exposure to configuration management policies, along with automation using Bash/Shell scripting.
 Developed Ansible playbooks to deploy services in the cloud, mainly on Linux and Windows servers, and ensured code reusability and readability by using Ansible roles, which saved time when writing playbooks.
 Experience in writing Infrastructure as Code (IaC) in Terraform and creating reusable Terraform modules in Azure and AWS cloud environments.
 Responsible for implementing container-based applications on Azure Kubernetes Service (AKS), with Kubernetes clusters responsible for cluster management and a virtual network to deploy agent nodes.
 Created Docker images and handled multiple images primarily for middleware installations and
domain configurations.
 Worked on creating the Docker containers and Docker consoles for managing the application
lifecycle. Worked on various Docker components like Docker Engine, Hub, Machine, Compose
and Docker Registry.
 Wrote and managed Kubernetes manifest files and Helm packages and implemented Kubernetes
to deploy, load balance, scale and manage pods with multiple namespaces.
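
The manifests and Helm charts themselves were YAML; as a Python-side illustration of interacting with the same AKS cluster, a short sketch with the official kubernetes client is shown below. The namespace, deployment name, and replica count are placeholders.

    from kubernetes import client, config  # pip install kubernetes

    config.load_kube_config()  # uses the current kubectl context, e.g. an AKS cluster

    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    NAMESPACE = "orders"        # placeholder namespace
    DEPLOYMENT = "orders-api"   # placeholder deployment name

    # List pod status in the namespace (roughly what `kubectl get pods -n orders` shows).
    for pod in core.list_namespaced_pod(NAMESPACE).items:
        print(pod.metadata.name, pod.status.phase)

    # Scale the deployment to 5 replicas.
    apps.patch_namespaced_deployment_scale(
        name=DEPLOYMENT,
        namespace=NAMESPACE,
        body={"spec": {"replicas": 5}},
    )
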
 Worked with Kubernetes to automate deployment, scaling, and management of containerized web applications; experienced with Docker and Vagrant for different infrastructure setups and testing of code.
 Implemented the Canary POC. Created the Harness CD workflows and pipelines to support
Canary and full deployment.
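
The canary workflow itself was built in Harness; the verification step such a workflow gates on can be sketched in Python as a simple comparison of canary versus baseline error rates. The threshold values and sample numbers below are hypothetical, and the metrics would really come from Prometheus, Datadog, or a similar backend.

    def canary_passes(canary_error_rate, baseline_error_rate, tolerance=1.5, hard_limit=0.05):
        """Fail the canary if its error rate exceeds the baseline by more than `tolerance` times,
        or crosses an absolute limit, before traffic is promoted to 100%."""
        if canary_error_rate > hard_limit:
            return False
        return canary_error_rate <= baseline_error_rate * tolerance

    # Hypothetical numbers a metrics query might return during the canary phase.
    baseline = 0.004   # 0.4% errors on the stable version
    canary = 0.006     # 0.6% errors on the canary
    print("promote" if canary_passes(canary, baseline) else "roll back")
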
 Managed Kubernetes using Helm charts and managed Kubernetes deployment and service files
and managed releases of Helm packages.
 Implemented Advanced deployment strategies such as Dark Launches and Feature Toggles
followed by Rolling Updates, to maintain high efficiency, reduce downtime and risks.
 Worked collaboratively with the management and development teams to understand
requirements and create development roadmaps on API’s.
 Used the Apigee and Apigee Hybrid platforms in the analysis, design, development, implementation, and documentation of RESTful APIs. Also involved in designing and writing unit test cases to test the APIs.
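
A minimal example of the kind of unit test written against those REST APIs, using pytest conventions and the requests library; the base URL, resource path, and expected fields are hypothetical.

    import requests

    BASE_URL = "https://api.example.com/v1"  # hypothetical Apigee-fronted base URL

    def test_get_member_returns_expected_shape():
        """GET /members/{id} should return 200 and the core fields."""
        resp = requests.get(f"{BASE_URL}/members/12345",
                            headers={"Accept": "application/json"}, timeout=5)
        assert resp.status_code == 200
        body = resp.json()
        assert "memberId" in body
        assert "status" in body

    def test_unknown_member_returns_404():
        resp = requests.get(f"{BASE_URL}/members/does-not-exist", timeout=5)
        assert resp.status_code == 404
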
 Set up and maintained logging and monitoring subsystems using tools like Elasticsearch, Fluentd, Kibana, Prometheus, Grafana, Nagios, Splunk, New Relic, and Alertmanager.
 Upgraded Atlassian Confluence at the enterprise level, with the help of the Atlassian support team, when users experienced intermittent issues.
 Used Atlassian JIRA for project story planning, updates, administration, and defect tracking, configuring various workflows, customizations, and plugins for the Jira bug/issue tracker, and used Atlassian Confluence for documenting project progress and sprints.

Environment: Azure, AWS, Azure DevOps, Jenkins, Gradle, Docker, Ansible, CI/CD, Terraform,
Kubernetes, OpenShift, PowerShell, WebSphere, Chef, JIRA, Kafka, Splunk, Elasticsearch, Fluentd,
Kibana, New Relic, Prometheus, Grafana, Nagios.

Centene – New York City, NY Aug 2016 - Feb 2019


DevOps Engineer

Responsibilities:
 Involved in software development life cycle (SDLC), which includes requirement-gathering,
design, coding, testing.
 Utilized Agile Methodology for day-to-day support of applications.
 Created the company's DevOps strategy in a mixed environment of Linux (RHEL, Ubuntu) servers, along with creating and implementing a cloud strategy based on Amazon Web Services.
 Worked on AWS as a cloud service and hosting platform/provider, using AWS services such as Systems Manager, Secrets Manager, ASGs (Auto Scaling Groups with Launch Configurations), EC2 instances (Elastic Compute Cloud), Launch Templates, Scheduled Instances, AMIs (Amazon Machine Images), EBS (Elastic Block Store: EBS Volumes, EBS Snapshots, EBS Lifecycle Manager), Network and Security (Security Groups, Elastic IPs, Placement Groups, Key Pairs, Network Interfaces), ELB (Elastic Load Balancing: Load Balancers, Target Groups), VPCs, IAM, IAM Roles, etc.
 Installed and administered various tools like Jenkins, GitLab, Ansible, Nexus Artifactory and
executed maintenance tasks such as creating users and groups.
 Created and Implemented branching & merging strategy with multiple branches.
 Installed and configured Nexus (Repository) to manage the artifacts in different Repositories and
handling dependency management using Nexus private repository.
 Implemented and managed Jenkins pipelines, updating them as per the application requirements and deploying SonarQube through the pipeline to check the code coverage of the service.
 Used Jenkins as the build tool on Java projects to produce build artifacts from the source code.
 Wrote scripts and code for automating tasks at different levels of build and release using
PowerShell, Python, C/C#, Java, Go Language, Shell, Groovy, Bash, etc.
 Utilized Ansible and Ansible Tower as configuration management tools to automate repetitive tasks, quickly deploy critical applications, and proactively manage changes in the AWS environment.
 Worked with Terraform and CloudFormation templates for the creation of AWS resources.
 Managed containers using Docker by writing Dockerfiles, set up automated builds on Docker Hub, and installed and configured Kubernetes (EKS) and OpenShift.
 Automated build pipeline to retrieve the code from Source code repository, build and deploy the
artifacts onto Docker containers. Published Docker images to Docker hub and pulled images
from Docker registry.
 Managed Kubernetes using Helm charts and managed Kubernetes deployment and service files
and managed releases of Helm packages.
 Utilized monitoring tools like Splunk, AppDynamics, Datadog, Netcool, Cyber Apica, AWS
CloudWatch for monitoring the overall health of the applications, network services, hosts, and
allocated resources through Dashboard Applications Services Hub.
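
Alongside the dashboards, the CloudWatch side of this monitoring typically involves alarms like the following boto3 sketch; the alarm name, instance ID, threshold, and SNS topic are placeholders, not values from the actual environment.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu-orders-api",                    # placeholder alarm name
        AlarmDescription="Average CPU above 80% for 10 minutes",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder SNS topic
    )
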
 Worked collaboratively with the management and development teams to understand
requirements and create development roadmaps on API’s.
 Documented the entire installation process for various tools and provided on-call support.
 Worked closely with developers to pinpoint and provide early warnings of common build
failures.
 Provided periodic feedback of status and scheduling issues to the management.
 Hands on experience on JIRA for creating bug tickets, workflows, pulling reports from
dashboard, creating and planning sprints.

Environment: Jenkins, AWS, Docker, Ansible, GitLab, Scripting Languages, SVN, Linux,
Kubernetes (EKS), OpenShift, Kafka, Terraform, Maven, etc.

Vivodoc - Dallas, TX May 2014 - July 2016


Build and Release Engineer

Responsibilities:
 Built and configured internally developed software and performed release management activities.
 Built and released software baselines, performed code merges and branch and label creation, and interfaced between development and infrastructure.
 Designed and developed scripts using batch, Shell, and Perl for automating the build activities.
 Deployed applications to the Tomcat / WebSphere application servers.
 Worked closely with the Development Team in the design phase and developed Use case
diagrams using Rational Rose.
 Implemented and maintained the branching and build/release strategies utilizing Subversion.
 Performed all necessary day-to-day infrastructure tasks using Ansible.
 Worked on Vagrant to configure lightweight, reproducible, and portable development environments.
 Implemented Chef recipes for deployment of builds on internal data center servers. Also reused and modified the same Chef recipes to create deployments directly on Amazon EC2 instances.
 Worked on AWS AIM, which included managing application in the cloud and creating EC2
instances.
 Worked closely with developers to pinpoint and provide early warnings of common build
failures.
 Used Ansible to manage web applications, environment configuration files, users, mount points, and packages.
 Configured and administered Bitbucket source code repositories.
 Developed and implemented an automated Linux infrastructure using Ansible.
 Maintained high availability clustered and standalone server environments and refined
automation components with scripting and configuration management (Ansible).
 Communicated with team members on both the Ansible Core and Ansible Tower teams to clarify requirements and overcome obstacles.
 Carried out deployments and builds in various environments and helped resolve developers' queries related to build and release issues.
 Worked closely with developers and managers to resolve issues that arose during deployments to different environments.

Environment: AWS, Bitbucket, EC2, Ansible, Chef, Shell, Apache Tomcat, WebSphere, etc.

Educational Qualifications:

 Master’s degree in Computer Science from Wright State University, Ohio
