
Ananth Kumar K

LinkedIn
Cloud Technologies, Machine Learning and Artificial Intelligence Specialist
Tech-savvy IT professional with 15+ years of experience. From the start of my career, I have worn many hats and taken on different roles, which helped me transition into and shape the career I have today. My most notable work is in Cloud Technologies, Machine Learning, and Artificial Intelligence.

Professional Experience:
MLOps and AI Specialist:
Company: AB Technologies Inc., Client: Kaiser Permanente, Nov 2017 - Jan 2022 and Feb 2023 - Present, Pleasanton, CA
Combined strong software engineering principles with machine learning to build scalable, reproducible, and easy-to-use end-to-end machine learning workflows for advanced deep learning problems. Performed advanced platform research and contributed to architecture design for efficient ML model training and deployment at scale. Used quantitative analysis and data visualization tools to see beyond the numbers and uncover hidden insights.
- Developed and deployed the models listed below using Python, SQL, Spark, and the TensorFlow framework with Keras on AWS.
- Leveraged Amazon SageMaker cloud machine-learning platform to create, train, and deploy
machine-learning models in the cloud.
- Implemented MLOps methodologies using existing DevOps ecosystem and ML engineering
knowledge. Used Git and DVC for version control.
- Created Airflow DAGs for ML workflow scheduling (see the sketch after this role's technology stack).
- Designed ML production systems end-to-end: project scoping, data needs, modeling strategies, and deployment requirements.
- Worked on Python integration with Snowflake and S3 data stores hosted on AWS.
- Established model baselines and addressed concept drift.
- Applied best practices and progressive delivery techniques to maintain continuously operating production systems.
- Developed GitLab pipelines using Python scripts.
- Worked on operations monitoring using Splunk and Dynatrace.
Technology Stack: Python, SQL, Spark, TensorFlow, Keras, Flask, AWS SageMaker, Airflow, Git, Kubernetes, S3, EMR, Domino.
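
A minimal illustrative sketch of the kind of Airflow DAG referenced above, assuming a simple train-then-deploy workflow; the DAG id, schedule, and task callables are placeholder assumptions, not the production DAGs.

    # Minimal illustrative Airflow DAG for ML workflow scheduling.
    # DAG id, schedule, and task callables are placeholder assumptions.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def train_model():
        print("train model here")   # placeholder training step

    def deploy_model():
        print("deploy model here")  # placeholder deployment step

    with DAG(
        dag_id="ml_train_and_deploy",
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        train = PythonOperator(task_id="train", python_callable=train_model)
        deploy = PythonOperator(task_id="deploy", python_callable=deploy_model)
        train >> deploy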

ML/AI Projects:
1. Medical Adherence: Developed an NLP-based XGBoost model for the MSR team to identify members who are likely to miss their medication (an illustrative sketch of this NLP + XGBoost pattern follows the project list).
2. Missed Appointments: Developed an XGBoost model that predicts how likely a patient's next (planned) appointment is to result in a no-show. Flagged patients receive special attention, such as reminders by phone call and text message, since a missed appointment incurs cost.
3. NLP Digital Assistant: Developed an NLP-based digital assistant for radiologists, capable of automatic text classification of NER data using a CNN to detect any mention of a cerebral aneurysm smaller than 7 mm at a specific location.
4. Call Topic Generation: Built an unsupervised NLP model for understanding why members/patients call; the model identifies topics using NMF.
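
The sketch below illustrates the general NLP + XGBoost classification pattern used in projects 1 and 2; it is a minimal, assumption-laden example (the file name, the note_text/label column names, and the TF-IDF features are placeholders), not the production pipeline.

    # Illustrative TF-IDF + XGBoost text classifier sketch.
    # File name, column names, and parameters are placeholder assumptions.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from xgboost import XGBClassifier

    df = pd.read_csv("calls.csv")  # hypothetical input: one row per call note
    X_train, X_test, y_train, y_test = train_test_split(
        df["note_text"], df["label"], test_size=0.2, random_state=42
    )

    model = Pipeline([
        ("tfidf", TfidfVectorizer(max_features=20000, ngram_range=(1, 2))),
        ("clf", XGBClassifier(n_estimators=300, max_depth=6, eval_metric="logloss")),
    ])
    model.fit(X_train, y_train)
    print("Held-out accuracy:", model.score(X_test, y_test))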

Machine Learning Engineer:


Company: AB Technologies Inc., Client: USAA, Domain: Finance, Jan 2022-Feb 2023,
San Antonio, TX
Worked on the model hardening team to build scalable, reproducible, and easy-to-use end-to-end machine learning workflows for advanced banking data analytics problems, starting from Python notebooks developed by data scientists.
- Developed and deployed the ML projects listed below using Python, Snowflake, Hive, and the scikit-learn framework.
- Executed MLOps practices using the DevOps toolset and ML engineering for easy model maintenance and execution.
- Implemented GitLab for version control and CI/CD pipelines.
- Set up Domino for ML model lifecycle management.
- Created Airflow DAGs for ML workflow scheduling.
- Upgraded ML projects to newer Python versions.
- Performed model monitoring and end-to-end testing.

ML Projects:
1. SCRA: Developed an NLP-based XGBoost binary classification model that predicts the likelihood that a member services call pertains to the Servicemembers Civil Relief Act (SCRA), which applies in bankruptcy cases.
2. RegE: Developed an NLP-based XGBoost binary classification model that predicts the likelihood that a member services call pertains to Regulation E or a complaint regarding funds transfers. Regulation E provides the basic framework that establishes the rights, liabilities, and responsibilities of participants in electronic fund transfer systems.
3. Keyhole: Developed an NLP-based rule engine that facilitates the governance process around poor sales practices and behaviors by MSRs.
Technology Stack: Python, SQL, Airflow, GitLab, Domino.

DevOps Solution Engineer:


Company: Clarinet Global/Thomson Reuters, Domain: Finance, June 2016 - Nov 2017, Tampa, FL
Built an automated deployment toolset and infrastructure for code management and the CI/CD process for Clarinet Global (a DTCC subsidiary startup). Defined the set of best practices for code management, build, deploy, and release across multiple teams/products in a CI/CD model using ECS, Docker, and AWS. Worked closely with the software development team and other functional groups to develop and provide a robust, flexible, and scalable platform.
 Created and maintained enterprise SSL certificate management, maintaining certificates across multiple SSL providers and integrating certificates into products such as Nginx, Apache, Tomcat, and AWS ELB.
 Build automation and deployments with Ant and Maven.
 Source code management using SVN and Git.
 JBoss, IBM BPM, and Tomcat application server installation and configuration.
 Oracle Identity Management configuration and support.
 Experienced in implementing the docker-maven-plugin and POM files to build images for all microservices; later used Dockerfiles to build Docker images from the Java JAR files.
 Created Docker containers to build, ship, and run images to deploy applications, and worked on several Docker components such as Docker Engine, Docker Hub, Docker Compose, Docker Registry, and Docker Swarm.
 Created the software configuration management plan and defined policies such as branching and tagging.
 Responsible for QA, UAT, and production deployment of builds on JBoss and IBM BPM servers.
 Locked branches and labels before release and packaged the project for release.
 Applied security patches to build machines.
 Created Python scripts to fully automate AWS services, including web servers, ELB, CloudFront distributions, EC2, databases, security groups, S3 buckets, and application configuration; these scripts create stacks or single servers, or join web servers to existing stacks (a minimal boto3 sketch follows this list).
 Experienced in implementing data warehouse solutions in AWS Redshift. Worked on projects to migrate data from one database to AWS Redshift, RDS, ELB, EMR, DynamoDB, and S3.
 Worked on IAM to optimize roles, tasks, and identity policies for performance.
 Using Chef, deployed and configured Elasticsearch, Logstash, and Kibana (ELK) for log analytics, full-text search, and application monitoring, integrated with AWS Lambda and CloudWatch.
 Worked with the Project Management team to coordinate the build and release schedule.
 Installation and configuration of AppDynamics for Application monitoring.
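
A minimal boto3 sketch of the kind of AWS automation described in the Python-scripts bullet above; the region, bucket name, AMI id, and instance type are placeholder assumptions rather than values from the original scripts.

    # Illustrative boto3 sketch of AWS provisioning (S3 bucket + EC2 web server).
    # Region, bucket name, AMI id, and instance type are placeholder assumptions.
    import boto3

    REGION = "us-east-1"

    def create_bucket(name: str) -> None:
        """Create an S3 bucket for application artifacts."""
        s3 = boto3.client("s3", region_name=REGION)
        s3.create_bucket(Bucket=name)

    def launch_web_server(ami_id: str) -> str:
        """Launch a single EC2 web server and return its instance id."""
        ec2 = boto3.resource("ec2", region_name=REGION)
        instances = ec2.create_instances(
            ImageId=ami_id,
            InstanceType="t3.micro",
            MinCount=1,
            MaxCount=1,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "Role", "Value": "web"}],
            }],
        )
        return instances[0].id

    if __name__ == "__main__":
        create_bucket("example-app-artifacts-bucket")       # placeholder name
        print(launch_web_server("ami-0123456789abcdef0"))   # placeholder AMI id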

Lead Middleware Engineer:


Company: Tech Mahindra, Client: Cisco, Domain: Hi-Tech, June 2011 - June 2016, Bangalore, India & San Jose, CA
Led the WebLogic middleware team. Provided technical expertise in middleware technologies across production and non-production environments, with specialization in Oracle Agile PLM and Oracle Transportation Management products. Successfully migrated instances from physical hosts to VMs using automation.
 WebLogic 11g/12c installation, configuration, and domain creation for both production and non-production environments for different applications (Oracle SOA, BPM, Sabrix, and WCC).
 Agile PLM 9.* installation, configuration and administration.
 Agile PLM Hot Fixes Installation.
 Experience in working with SDK APIs, SQL Procedures and database operations.
 Distributed File Manager configuration.
 Node manager setup for Agile PLM instances.
 Taking regular backups for agile schema and import/export activities for non-production.
 Maintained applications across WebLogic runtime processes in an application cluster.
 Cross DC Active-Active setup for WebLogic and SOA environments.
 Involved in deploying applications as .war, .ear, and .sar files.
 Involved in configuring JDBC connection pools, multi pools, and data sources.
 Used thread dumps to check the status of WebLogic Server instances.
 Involved in troubleshooting and JVM performance tuning using JRMC and Samurai.
 SSL configuration, WLS integration with OHS.
 Installation and configuration of SOA and BPM.
 Shell and WLST setup for WebLogic and SOA monitoring.
 Splunk setup for log forwarding and WebLogic monitoring.

Software Engineer:
Company: Tech Mahindra (Satyam), Client: GE, Domain: Healthcare, April 2006-April
2009, Hyderabad, India
Started as a fresh graduate in the company and worked on various projects for GE Healthcare on Vignette Content Management. Developed applications in an MVC framework using J2EE technologies (JSP, Servlets) on WebLogic with an Oracle database.
 Development using Java and the Struts framework. Designed, developed, and implemented new site pages.
 Developed the application in an MVC framework using J2EE technologies (JSP, Servlets) on WebLogic with Oracle.
 Designed and configured Custom Type Definitions.
 Application design and development based on prototype and iterative incremental development models. Migrated HTML pages.
 Testing in different work environments and building test cases.
 Involved in deploying the application in development and production modes.
 Regression testing and fixing bugs logged after UAT.
 Tomcat and Apache Configuration.
 Responsible for unit testing the classes and support for integration testing.
 Responsible for application server configuration, environment setup for testing phases.

Technology Stack
Cloud Platform: AWS, Azure, IBM Bluemix, Hybrid
Machine Learning: NumPy, Pandas, scikit-learn, TensorFlow, Keras, PyTorch
Middleware: WebLogic, JBoss, WebSphere
Database: Oracle, SQL, NoSQL
Languages: Java, Python
Operating System: Windows and Linux
Other Tools: Kubernetes, Docker, Airflow, GitLab, Domino, Jenkins

Education:
Bachelor of Technology in Computer Science and Engineering: 2001-2005
Master of Technology in Computer Science and Engineering: 2009-2011
Master of Science in Artificial Intelligence and Machine Learning (Online) 2020-2022
Certification:
AWS Certified Machine Learning – Specialty - Verify
