Mehraj Khan - Cloud/DevOps Engineer Resume


Sr. AWS DevOps Engineer


Mehraj Khan
Email: mehrajkhan4946@gmail.com
Visa: H1B
Phone: 6304744410

Location: Chicago, Illinois
LinkedIn: https://www.linkedin.com/in/mehraj-khan-83aa12242/

PROFESSIONAL SUMMARY
 8+ years of professional IT experience as a Cloud Engineer, DevOps Engineer, Automation Engineer, and Build and Release Manager across cloud platforms such as Amazon AWS, Microsoft Azure, and Google Cloud Platform (GCP).
 Expert knowledge of the AWS IAM service, including IAM policies, roles, users, groups, access keys, and Multi-Factor Authentication, and migrated applications to the AWS Cloud (a brief boto3 sketch follows this summary).
 Experienced with the AWS Command Line Interface (AWS CLI) and PowerShell for automating administrative tasks.
 Experience in creating AWS AMIs, using HashiCorp Packer to build and manage them.
 Expert in various Azure services such as Compute (Web Roles, Worker Roles), Azure SQL, NoSQL, Storage and Network services, Azure Active Directory (AD), API Management, Scheduling, Azure autoscaling, and PowerShell automation.
 Designed Azure environments for multiple applications utilizing the Azure stack (including Compute, Web & Mobile, Blobs, Resource Groups, Azure SQL, Cloud Services, and ARM), focused on high availability and auto-scaling.
 Wrote Jenkins Groovy scripts to automate pipelines.
 Worked with HashiCorp Vault roles, where a role describes an identity with the set of permissions, groups, or policies to attach to a user of a secrets engine.
 Wrote Ansible roles and used CRD/CR templates to install Operators in clusters, including the Sysdig, Service Mesh, Cluster Logging, Elasticsearch, Ansible, and Monitoring Operators.
 Managed and administered weekly Jenkins builds and the test and deploy chain, using Git with a Dev/Test/Prod branching model for weekly releases.
 Used Git version control to manage source code, integrating Git with Jenkins to support build automation and with JIRA to monitor commits.
 Worked with Docker and Kubernetes on multiple cloud providers and helped developers build and containerize their application CI/CD pipelines for deployment to the cloud.
 Experience using J2EE application servers such as Apache Tomcat and IBM WebSphere.
 Experience working with Go-based tooling that ships as a single binary per project, such as Terraform, Vault, and Prometheus.
 Designed highly available, cost-effective, and fault-tolerant systems using EC2 instances, S3 buckets, Auto Scaling, and Elastic Load Balancing, and managed security groups for EC2 servers with the AWS CLI and SDK tools.
 Experienced with several Docker components including Docker Engine, Hub, Compose, and Docker Registry; deployed microservice apps into containers using ECS (Elastic Container Service) and managed the Kubernetes cluster.
 Hands-on experience with HashiCorp technologies such as Vault and Consul.
 Experienced with fully managed Kubernetes using the EKS service in AWS, and managed the deployment, maintenance, and monitoring of Kubernetes clusters on cloud and on-premises infrastructure.
 Experience with Chef Enterprise: installed the workstation, bootstrapped nodes, wrote recipes and cookbooks, and uploaded them to the Chef server.
 Experience across the complete Software Development Life Cycle (SDLC), working with software development models such as Agile/Scrum and Waterfall, tracked in JIRA.
 Experience with procedural and declarative configuration management tools such as Ansible, Chef, and Puppet.
 Experience managing Ansible playbooks with Ansible roles; used file modules in playbooks to copy and remove files on target systems, and wrote YAML scripts to automate software updates and verify functionality.
 Analyze, design, develop, and implement RESTful services and APIs.
 Involved in the development life cycle, performing definition and feasibility analysis for REST APIs.
 Proficient with Shell, Python, PowerShell, JSON, YAML, and Groovy scripting languages.
 Experience in ticketing and project management tools like JIRA, Azure DevOps and ServiceNow.
 Expertise in configuring monitoring and log aggregation using tools like ELK, Nagios, and Splunk.
 Functional knowledge of networking concepts including DHCP, TCP/IP, SSH, DNS, NIS, and Squid proxy server.
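Illustrative IAM automation sketch (Python/boto3). This is a minimal example of the kind of IAM scripting summarized above; the policy document, policy name, and group name are assumptions for illustration only, not details of any engagement listed here.

```python
# Illustrative only: policy content, policy name, and group name are hypothetical.
import json
import boto3

iam = boto3.client("iam")

# Sample policy that denies actions (other than a small allow-list) unless MFA is present.
mfa_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyWithoutMFA",
        "Effect": "Deny",
        "NotAction": ["iam:ListUsers", "iam:ChangePassword"],
        "Resource": "*",
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}

response = iam.create_policy(
    PolicyName="RequireMFAForConsoleActions",   # hypothetical policy name
    PolicyDocument=json.dumps(mfa_policy),
)

# Attach the managed policy to an existing IAM group.
iam.attach_group_policy(
    GroupName="cloud-engineers",                # hypothetical group name
    PolicyArn=response["Policy"]["Arn"],
)
```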

TECHNICAL SKILLS

Cloud Services: AWS, Azure, GCP (Google Cloud Platform, beginner)
Languages: Python, Perl, PHP, SQL, YAML, Groovy, Bash, Shell scripting
Build & Release Engineering / DevOps: Jenkins, Docker, uDeploy, AWS, Azure, Chef, Puppet, Ant, Atlassian Jira, GitHub, Ansible, OpenStack, GitLab
Web Servers: Apache Tomcat, Nginx, WebSphere, JBoss, WebLogic
SCM Tools: Git, GitHub, CVS, Subversion, Bitbucket
Build Tools: Ant, Maven
Automation/Configuration: Ansible, Chef, Puppet, Packer
Continuous Integration/Deployment Tools: Jenkins, Bamboo, Hudson, uDeploy
Monitoring Tools: Splunk, Prometheus, Logstash, Apache, Grafana, Datadog
Network Protocols: HTTP, HTTPS, SMTP, FTP, SFTP, DHCP, DNS, SNMP, TCP/IP, UDP, ICMP, VPN, POP3, Cisco Routers/Switches
Operating Systems: Windows, Unix, Linux (Ubuntu, CentOS, Red Hat), macOS
Repository Managers: Nexus, Artifactory

PROFESSIONAL EXPERIENCE

Client: Cargill, Minneapolis, MN Feb 2022 – Present


Sr. AWS DevOps Engineer / Cloud Engineer
Responsibilities:
 Experience in designing and deploying AWS solutions using EC2, S3, EBS, Elastic Load Balancer (ELB), and Auto Scaling groups.
 Responsible for Design and architecture of different Release Environments for new projects.
 Worked on optimizing volumes and EC2 instances and created multiple VPCs.
 Wrote Maven and Ant scripts for application-layer modules.
 Implemented new projects' build frameworks using Jenkins and Maven as build tools.
 Set up and built AWS infrastructure resources (VPC, EC2, S3, IAM, EBS, Security Groups, Auto Scaling, and RDS) in CloudFormation JSON templates.
 Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT to ensure successful deployment of web applications and database templates.
 Implemented a Continuous Delivery framework using Jenkins, Chef, Maven, and Nexus as tools.
 Configured S3 versioning and lifecycle policies to back up files and archive them in Glacier (see the lifecycle sketch after this list).
 Utilize Amazon Glacier for archiving data.
 Created alarms in the CloudWatch service to monitor server performance, CPU utilization, disk usage, etc.
 Developed, deployed, and managed event-driven and scheduled AWS Lambda functions, triggered in response to events on various AWS sources (including logging, monitoring, and security-related events) and invoked on a schedule to take backups (see the backup sketch after this list).
 Deployed code using blue/green deployments with AWS CodeDeploy to reduce downtime during application deployment; if something unexpected happens with the new version on Green, traffic can immediately be rolled back to the last version by switching back to Blue.
 Worked with the authentication API of a sample coffee-shop application using the HashiCorp Vault Plugin SDK.
 Defined HashiCorp Vault roles, each describing an identity with the set of permissions, groups, or policies to attach to a user of the secrets engine, treating Vault as a generic secrets manager comparable to others on the market.
 Used Terraform for infrastructure as code, modifying Terraform scripts as and when configuration changes happened.
 Responsible for the operation, maintenance and integrity of a distributed networked Linux environment.
 Wrote Chef cookbooks and recipes in Ruby to provision several pre-prod environments consisting of Cassandra DB installations, WebLogic domain creation, and several proprietary middleware installations.
 Developed Scripts for AWS Orchestration.
 Implemented and improved monitoring and alerting as a Kubernetes admin.
 Built and maintained highly available systems on Kubernetes.
 Used HashiCorp's logging package to surface errors from the plugin.
 Created Amazon WorkSpaces for employees.
 Implemented and managed CI/CD pipelines with Kubernetes.
 Implemented an auto-scaling system for Kubernetes nodes.
 Worked on a cloud-based service and software for managing connected products and machines and implementing
Machine-to-Machine (M2M) and Internet of Things (IoT) applications like Axeda iSupport.
 Responsible for creating and deploying an Agent Gateway and Agent Connector in Axeda iSupport.
 Involved with ThingWorx, a platform for the rapid development of applications designed for smart, connected sensors, devices, and products, i.e., the Internet of Things (IoT).
 System monitoring with Nagios & Graphite.
 Installed, configured and maintained web servers like HTTP Web Server, Apache Web Server and WebSphere
Application Server on Red Hat Linux.
 Business data analysis using Big Data tools like Splunk, ELK.
 Experience in CI and CD with Jenkins.
 Used Puppet server and workstation to manage and configure nodes.
 Experience in writing Puppet manifests to automate configuration of a broad range of services.
 Designed tool API and Map Reduce job workflow using AWS EMR and S3.
 Stored secured data in MySQL; Vault (by HashiCorp) secures, stores, and tightly controls the access tokens and passwords used by the overall platform, which started in the AWS cloud and currently integrates with several services such as AWS IAM, Amazon DynamoDB, Amazon SNS, and Amazon RDS.
 Prepared projects, dashboards, reports and questions for all JIRA related services.
 Generated scripts for effective integration of JIRA applications with other tools.
 Defining Release Process & Policy for projects early in SDLC.
 Analyze, design, develop, and implement RESTful services and APIs.
 Involved in the development life cycle, performing definition and feasibility analysis for REST APIs.
 Branched and merged code lines in Git and resolved all conflicts raised during merges.
 Designed highly available, cost-effective, and fault-tolerant systems using multiple EC2 instances, Auto Scaling, Elastic Load Balancing, and AMIs.
 Highly skilled in the use of data center automation and configuration management tools such as Docker.
 Perform Deployment of Release in various QA & UAT environments.
 Responsible for installation and upgrade of patches and packages on RHEL 5/6 using RPM & YUM.
 Supported different projects' build and release SCM efforts, e.g., branching, tagging, and merging.
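Illustrative S3 versioning and Glacier lifecycle sketch (Python/boto3). A minimal example of the lifecycle configuration referenced above; the bucket name, prefix, and retention periods are assumptions.

```python
# Illustrative only: bucket name, prefix, and retention periods are hypothetical.
import boto3

s3 = boto3.client("s3")
bucket = "example-app-backups"  # hypothetical bucket

# Enable versioning so overwritten or deleted objects can be recovered.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Transition backups to Glacier after 30 days and expire them after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-to-glacier",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }]
    },
)
```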
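Illustrative scheduled-backup Lambda sketch (Python/boto3). A minimal handler of the kind referenced above, assuming volumes are selected by a hypothetical Backup=true tag and the function is invoked by a CloudWatch Events/EventBridge schedule.

```python
# Illustrative only: the tag key/value used to select volumes is hypothetical.
import datetime
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    """Invoked on a schedule to snapshot every EBS volume tagged Backup=true."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
    )["Volumes"]

    snapshot_ids = []
    for vol in volumes:
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"Scheduled backup {datetime.date.today().isoformat()}",
        )
        snapshot_ids.append(snap["SnapshotId"])

    return {"snapshots_created": snapshot_ids}
```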

Environment: AWS, S3, EBS, Elastic Load Balancer (ELB), Auto Scaling groups, VPC, IAM, CloudWatch, Glacier, DynamoDB, ElastiCache, Directory Services, EMR (Elastic MapReduce), Route 53, Puppet, Jenkins, Maven, Subversion, Ant, Bash scripts, Git, Docker, Jira, Chef, and Nexus in a Linux environment; OpenStack, Axeda, ThingWorx.

Client: Omnitracs, Dallas, Texas Jan 2021 – Jan 2022


DevOps / Cloud Engineer
Responsibilities:
 Met with business/user groups to understand the business process, gather requirements, and analyze, design, develop, and implement according to client requirements.
 Designed and developed Azure Data Factory (ADF) pipelines extensively for ingesting data from different relational and non-relational source systems to meet business functional requirements.
 Designed and developed event-driven architectures using blob triggers and Data Factory.
 Created pipelines, data flows, and complex data transformations and manipulations using ADF and PySpark with Databricks.
 Automated jobs using different ADF triggers such as event, schedule, and tumbling-window triggers.
 Created and provisioned different Databricks clusters, notebooks, jobs, and autoscaling.
 Ingested huge volume and variety of data from disparate source systems into Azure DataLake Gen2 using Azure
Data Factory V2.
 Created several Databricks Spark jobs with PySpark to perform table-to-table operations (see the PySpark sketch after this list).
 Performed data flow transformation using the data flow activity.
 Implemented Azure and self-hosted integration runtimes in ADF.
 Developed streaming pipelines using Apache Spark with Python.
 Implemented and managed CI/CD pipelines with Kubernetes.
 Implemented an auto-scaling system for Kubernetes nodes.
 Participated in on-call rotations for Kubernetes.
 Created, provisioned multiple Databricks clusters needed for batch and continuous streaming data processing and
installed the required libraries for the clusters.
 Improved performance by optimizing compute time for processing streaming data, and saved the company cost by optimizing cluster run time.
 Perform ongoing monitoring, automation, and refinement of data engineering solutions.
 Designed and developed a new solution to process the NRT data by using Azure stream analytics, Azure Event
Hub and Service Bus Queue.
 Created Linked service to land the data from SFTP location to Azure Data Lake.
 Extensively used SQL Server Import and Export Data tool.
 Used HashiCorp's logging package to surface errors from the plugin.
 Used HashiCorp's suite of tools, built with DevOps in mind, to reduce manual coordination across the elements of the application delivery lifecycle.
 Worked with complex SQL views, stored procedures, triggers, and packages in large databases from various servers.
 Experience working with both Agile and Waterfall methods in a fast-paced manner.
 Generated alerts on daily event metrics for the product team.
 Worked on PowerShell scripts to automate the creation of Azure resource groups, web applications, Azure Storage blobs and tables, and firewall rules.
 Deployed and hosted Web Applications in Azure, created Application Insights for monitoring the applications.
 Suggest fixes to complex issues by doing a thorough analysis of root cause and impact of the defect.
 Provided 24/7 on-call production support for various applications, provided resolution for night-time production jobs, and attended conference calls with business operations and system managers to resolve issues.
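Illustrative Databricks PySpark table-to-table sketch. A minimal example of the kind of job referenced above; the database, table, and column names are assumptions, and the Spark session is created explicitly here so the sketch is self-contained (Databricks notebooks provide one).

```python
# Illustrative only: database, table, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("table-to-table-example").getOrCreate()

# Read a source table, apply a simple transformation, and write a curated table.
source_df = spark.table("raw_db.customer_orders")             # hypothetical source table

curated_df = (
    source_df
    .filter(F.col("order_status") == "COMPLETED")             # hypothetical column
    .withColumn("load_date", F.current_date())
    .dropDuplicates(["order_id"])
)

curated_df.write.mode("overwrite").saveAsTable("curated_db.completed_orders")  # hypothetical target
```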

Environment: Azure Data Factory (ADF v2), Azure SQL Database, Azure Functions Apps, Azure Data Lake, Blob Storage, SQL Server, Windows Remote Desktop, UNIX shell scripting, Azure PowerShell, Databricks, Python, ADLS Gen 2, Azure Cosmos DB, Azure Event Hub, Azure Machine Learning.

Client: OneMain Financial, Irving, TX Nov 2018 – Dec 2020


AWS DevOps Engineer
Responsibilities:
 Created Terraform scripts to move existing on-premises applications to cloud.
 Had an extensive role in migrating on-premises mid-tier applications to the cloud via lift-and-shift to AWS infrastructure.
 Onboarded and migrated test and staging use cases for applications to the AWS cloud with public and private IP ranges to increase development productivity by reducing test-run times. Created monitors, alarms, and notifications for EC2 hosts using CloudWatch.
 Implemented DNS service through Route 53 on ELBs to achieve secure connections via HTTPS. Utilized Amazon Route 53 to manage DNS zones and assign public DNS names to Elastic Load Balancer IPs.
 Involved in reviewing and assessing current infrastructure to be migrated to the AWS cloud platform. Created
new servers in AWS using EC2 instances, configured security groups and Elastic IPs for the instances.
 Implemented and improved monitoring and alerting as a Kubernetes admin.
 Built and maintained highly available systems on Kubernetes.
 Analyze, design, develop, and implement RESTful services and APIs.
 Involved in the development life cycle, performing definition and feasibility analysis for REST APIs.
 Led many critical on-premises data migrations to the AWS cloud, assisting with performance tuning and providing a successful path toward Redshift clusters and RDS DB engines.
 Set up an Elastic Load Balancer to balance and distribute incoming traffic to multiple servers running on EC2
instances. Performed maintenance to ensure reliable and consistently available EC2 instances. Built DNS system
in EC2 and managed all DNS related tasks.
 Created ElastiCache for the database systems to ensure quick access to frequently requested databases. Created backups of database systems using the S3, EBS, and RDS services of AWS.
 Used HashiCorp's logging package to surface errors from the plugin.
 Used HashiCorp's suite of tools, built with DevOps in mind, to reduce manual coordination across the elements of the application delivery lifecycle.
 Set up Route 53 to ensure traffic distribution among different regions of AWS. Set up a content delivery system
using AWS Cloud Front to distribute content like html and graphics files faster.
 Built DNS system in EC2 and managed all DNS related tasks. Created Amazon VPC to create public-facing
subnet for web servers with internet access, and backend databases & application servers in a private-facing
subnet with no Internet access.
 Experience with VPC peering for data transfer from one VPC to another.
 Responsible for building out and improving the reliability and performance of cloud applications and cloud
infrastructure deployed on Amazon Web Services.
 Create and attach volumes on to EC2 instances.
 Provide highly durable and available data by using S3 data store, versioning, lifecycle policies, and create AMIs
for mission critical production servers for backup.
 Set up and built AWS infrastructure resources (VPC, EC2, S3, IAM, EBS, Security Groups, Auto Scaling, and RDS) in CloudFormation JSON templates.
 Built servers using AWS by importing volumes, launching EC2 and RDS, and creating security groups, auto-scaling, and load balancers (ELBs) in the defined virtual private cloud.
 Create the new instance with the latest AMI with the same IP address and hostname.
 Manipulated CloudFormation templates, uploaded them to the S3 service, and automatically deployed them into an entire environment.
 Implemented, supported and maintained all network, firewall, storage, load balancers, operating systems, and
software in Amazon's Elastic Compute Cloud.
 Used Python scripts to store data in S3 and load those files into Redshift using programmatic access via the AWS CLI and SDK (see the sketch after this list).
 Tested and configured AWS Workspaces (Windows virtual desktop solution) for custom application requirement.
 Managed Ansible Playbooks with Ansible modules, implemented CD automation using Ansible, managing
existing servers and automation of build/configuration of new servers.
 Worked on building serverless web pages using API Gateway and Lambda.
 Manage Amazon Redshift clusters such as launching the cluster and specifying the node type.
 Security reference architecture spanned security groups, NACL, IAM group and Custom Roles, Key
management services and Key Vault, CloudHSM and Web Application Firewall.
 Wrote different automation scripts to help developers interact with SQS and SNS, and performed tuning of various processes that run based on the SQS queue.
 Configure and ensure connection to RDS database running on MySQL engines.
 Solid experience with onsite and offshore model. Directed build and deployment teams remotely, technically and
effectively.
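Illustrative S3-to-Redshift load sketch (Python/boto3 with the Redshift Data API). A minimal example of the pattern referenced above; the bucket, cluster, database, IAM role, and table names are assumptions.

```python
# Illustrative only: bucket, cluster, database, IAM role, and table names are hypothetical.
import boto3

s3 = boto3.client("s3")
redshift_data = boto3.client("redshift-data")

bucket = "example-ingest-bucket"
key = "staging/events.csv"

# Stage the local extract in S3.
s3.upload_file("events.csv", bucket, key)

# Load the staged file into Redshift with a COPY statement via the Redshift Data API.
copy_sql = f"""
    COPY analytics.events
    FROM 's3://{bucket}/{key}'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    CSV IGNOREHEADER 1;
"""

redshift_data.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=copy_sql,
)
```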

Environment: EC2, Elastic IPs, CloudFormation, SQS, SNS, Elastic Load Balancer, S3, EBS, RDS, CloudWatch, Route 53, CloudFront, CloudTrail, Active Directory, Jenkins, NACL, Security Groups, AWS Config, AWS CLI, WAF, Terraform, Redshift, Python scripts, ELB, Auto Scaling, KMS, CloudHSM, Ansible.

Client: MedPlus, India Aug 2015– Oct 2018


DevOps Engineer
Responsibilities:
 Experience in creating, configuring and maintaining Amazon EC2 virtual servers.
 Knowledge of CloudWatch, Elastic IPs, and managing AWS infrastructure and Security Groups on AWS.
 Knowledge on deploying code to a virtual machine in AWS.
 Experience working with version control systems like Subversion, GIT and used Source code management
tools GitHub, GitLab, Bitbucket including command line applications.
 Hands-on experience with Continuous Integration and Continuous Deployment using the tools Jenkins, Chef, Git, and Docker.
 Implemented a CI/CD pipeline involving GitLab, Jenkins, Chef, Docker, and Selenium for complete
automation from commit to deployment.
 Installed and Configured Chef Enterprise and Chef Workstation hosted as well as On-Premise; Bootstrapped
Nodes; Wrote Recipes, Cookbooks and uploaded them to Chef-server.
 Hands-on experience using Maven as build tool for building of deployable artifacts from source code.
 Experience in using Amazon S3 to upload multiple project files online and used IAM to create multiple access to
various users and groups.
 Knowledge in using load balancers to route incoming traffic to a number of downstream servers.
 Experience implementing a CI/CD pipeline in AWS using AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy (see the sketch after this list).
 Experience in Bug/Issue tracking tools like HP Quality Center, Jira.
 Used HashiCorp's logging package to surface errors from the plugin.
 Used HashiCorp's suite of tools, built with DevOps in mind, to reduce manual coordination across the elements of the application delivery lifecycle.
 Knowledge of Software Development Life Cycle methodologies: Waterfall and Agile.
 Knowledge of Docker build using Dockerfile and 24/7 support in monitoring and maintaining Docker instances.
 Experience on multiple platforms such as UNIX, Ubuntu, CentOS, RHEL, and Windows 98/XP/Vista/7/8/10 production, test, and development servers.
 Knowledge of IBM UrbanCode Deploy for deploying applications and databases to various QA, UAT, and production environments.
 Deployed Source code from Jenkins(CI) tool to Apache Tomcat and JBoss servers by using Deploy to container
plugin.
 Experience using Google Cloud Platform, creating various VMs through Google Cloud Shell and managing clusters through the Kubernetes dashboard.
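Illustrative CI/CD trigger sketch (Python/boto3) for CodeBuild and CodeDeploy. A minimal example of driving the pipeline referenced above from a script; the project, application, deployment group, and artifact location are assumptions.

```python
# Illustrative only: project, application, deployment group, bucket, and key are hypothetical.
import boto3

codebuild = boto3.client("codebuild")
codedeploy = boto3.client("codedeploy")

# Kick off a build of the project that produces a deployable revision.
build = codebuild.start_build(projectName="example-app-build")
print("Started build:", build["build"]["id"])

# Deploy a previously published revision stored in S3 with CodeDeploy.
deployment = codedeploy.create_deployment(
    applicationName="example-app",
    deploymentGroupName="example-app-prod",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "example-artifacts",
            "key": "example-app/app-1.0.0.zip",
            "bundleType": "zip",
        },
    },
    description="Automated deployment triggered from the CI pipeline",
)
print("Started deployment:", deployment["deploymentId"])
```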

Environment: CloudFormation, CloudWatch, SQS, SNS, EC2, AWS Config, AWS CLI, Jenkins, Elastic IPs, Elastic Load Balancer, Terraform, S3, EBS, RDS, Route 53, CloudFront, CloudTrail, Active Directory, NACL, Security Groups, WAF, Redshift, Python scripts, ELB, Auto Scaling, KMS, CloudHSM.

Client: Azilen Technologies, India June 2013– June 2015


Role: DevOps Engineer (Internship Project)

Azilen Technologies is a product engineering company specializing in engineering excellence to build next-generation digital products. Its product engineering services are driven by agile methodologies embedded within the product lifecycle to catalyze change and adapt to market innovations. A team of 300+ engineers works to shape customer success and drive business growth by leveraging cutting-edge technologies across industry innovations.

Responsibilities:
 Build and deploy our applications from code to production and everything in between
 Follow and improve upon best practices for source control, continuous integration, and automated testing
and release management
 Build tools for internal use to support software engineering best practices
 Tune our systems to get maximum performance and efficiency
 Deploy, support, and monitor new and existing services, platforms, and application stacks
 Experience in Bug/Issue tracking tools like HP Quality Center, Jira.
 Used HashiCorp's logging package to surface errors from the plugin.
 Used HashiCorp's suite of tools, built with DevOps in mind, to reduce manual coordination across the elements of the application delivery lifecycle.
 Knowledge of Software Development Life Cycle methodologies: Waterfall and Agile.
 Knowledge of Docker build using Dockerfile and 24/7 support in monitoring and maintaining Docker instances.
 Created several Databricks Spark jobs with PySpark to perform table-to-table operations.
 Performed data flow transformation using the data flow activity.
 Implemented Azure, self-hosted integration runtime in ADF.
 Developed streaming pipelines using Apache Spark with Python.
 Installed, configured and maintained web servers like HTTP Web Server, Apache Web Server and WebSphere
Application Server on Red Hat Linux.
 Business data analysis using Big Data tools like Splunk, ELK.
 Experience in CI and CD with Jenkins.

EDUCATION:

Sardar Patel College, Osmania University, 2015
