
Deployment Architecture, Jenkins, & Schedulers
COMET, Genprio, & XL Center

1
Key Points

• Deployment Architecture

• CICD Overview

• Environment & Pipelines

• Kubernetes clusters and commands

• Schedulers

2
Deployment
Architecture

3
Deployment Architecture
(Development to Production)

Development flow:
1. Microservice development in feature branches
2. Check code coverage and dependency vulnerabilities
3. Integration and developer testing in relevant SIT environments
4. Creating merge requests for the relevant staging branches

Deployment to QA Environment:
1. Merge the feature branch into the staging branch for the QA release
2. Code review, peer review, and code coverage check
3. Successful Jenkins pipeline process
4. Deploying into the QA environment

Deployment to Production Environment:
1. Merge the QA branch to the master branch and tag the master branch
2. Successful Jenkins pipeline process
3. Deploying into the Production environment

4
Deployment Architecture
(COMET application deployed clusters)

5
Deployment Architecture
(Genprio application deployed clusters)

6
Deployment Architecture
(XL Center application deployed clusters)

7
02

CICD Pipelines

8
CICD Overview
(CICD Pipeline related technologies)

CI/CD technologies: GitLab and Jenkins
9
CICD (Continuous Integration)

Continuous Integration is the practice of automating the integration of code changes from multiple contributors into a single project.

In the COMET and XL Center GitLab repositories:

• Branches are aligned with features and environments
• The master branch is always mapped to production (except Genprio SOA)
• The master branch must be tagged to trigger the pipeline for a production deployment
• Staging environments only need a merge request to trigger the pipeline
• SonarQube is set up on all environments as a step of the pipeline
• Passing the SonarQube check is essential to continue the deployment process
10
CICD
(COMET Environment triggers)

Application: COMET

• Production: Creating a tag on the master branch will trigger the deployment
• Dev/SIT: Force pushing into the ‘SIT’ branch will trigger the deployment
• buat, staging, postpaid: Merging into the ‘buat’, ‘staging’, or ‘postpaid’ branch will trigger the respective environment deployment

11
CICD
(Genprio Environment triggers)

Application: Genprio

• Production (BSS): Creating a tag on the master branch will trigger the deployment
• Production (SOA): Creating a tag on the master-soa branch will trigger the deployment
• Dev/SIT: Force pushing into the ‘genprio-bss-sit’ or ‘genprio-soa-sit’ branch will trigger the respective deployment
• Staging: Merging into the ‘genprio-bss-staging’ or ‘genprio-soa-staging’ branch will trigger the respective environment deployment

12
CICD
(XL Center Environment triggers)

Application: XL Center

• Production: Creating a tag on the master branch will trigger the deployment
• SIT: Using the ‘Build with parameters’ option in Jenkins will trigger the deployment (changes should be force pushed into the ‘sit’ branch or merged into the ‘sit’ branch)
• Staging: Using the ‘Build with parameters’ option in Jenkins will trigger the deployment (changes should be merged into the ‘sit’ branch)
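
For the production rows above, the tag that fires the pipeline is created with plain git commands. A minimal, self-contained sketch (the repository, tag name, and remote are illustrative; in the real repository only the tag and push steps matter):

```shell
# Throwaway repo so the sketch runs end to end; in the real repository
# you would only run the tag and push steps.
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "QA-approved release candidate"

# Create an annotated tag on the current commit (tag name is illustrative)
git tag -a v1.0.0 -m "Production release v1.0.0"
git tag --list    # prints: v1.0.0

# Pushing the tag to GitLab fires the webhook that triggers the Jenkins
# production pipeline (not run here, since this demo repo has no remote):
# git push origin v1.0.0
```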

13
CICD
(XL Center Pipeline manual build)

14
CICD (XL Center Pipeline URLs)

• XL Center frontend
 non-prod: https://platform.xlaxiata.id/job/XLCenterOnline/job/xlcenteronline-frontend/job/nonprod/job/xlcenteronline-frontend/
 prod: https://platform.xlaxiata.id/job/XLCenterOnline/job/xlcenteronline-frontend/job/prod/job/xlcenteronline-frontend/

• XL Center request handler
 non-prod: https://platform.xlaxiata.id/job/XLCenterOnline/job/xlcenteronline-backend/job/nonprod/job/xlcenter-cms/
15
Creating new MS with new pipelines
1. Create a new project via GitLab.
2. Add SSL certificate files and private keys if required.
3. Define the main Dockerfile in the root folder to trigger the build of the jar.
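
As an illustration of step 3, a minimal two-stage root Dockerfile for a jar-based service; the base images, jar path, and port below are assumptions, not the project's actual values:

```dockerfile
# Build stage: compile the service jar (Maven base image is an assumption)
FROM maven:3.8-openjdk-11 AS build
WORKDIR /app
COPY . .
RUN mvn -q package -DskipTests

# Runtime stage: run the jar produced by the build stage
FROM openjdk:11-jre-slim
COPY --from=build /app/target/service.jar /service.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/service.jar"]
```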

16
Creating new MS with new pipelines Contd
4. Define a Dockerfile inside the resources folder specified for each environment.

17
Creating new MS with new pipelines Contd

5. Define the Jenkinsfile in the root folder to trigger the CICD pipeline.

• Includes a script for environment selection

• Includes stages for each of the steps in the Jenkins pipeline

• Includes the steps to be followed in each of the stages

6. Define the service yaml file in the environment resources folder.

• Includes IPs and DNS of 3rd-party systems consumed by the service

• Includes resource allocation configurations specific to the microservice

• Includes health probe configurations


18
Creating new MS with new pipelines Contd

7. Define the cluster config file in the environment resources folder to define configurations
mapped to the required cluster.
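
The Jenkinsfile described in step 5 might look roughly like this sketch; every stage name, path, and helper command here is an illustrative assumption, not the project's actual pipeline:

```groovy
pipeline {
    agent any
    parameters {
        // Script for environment selection (step 5)
        choice(name: 'ENV', choices: ['sit', 'staging'], description: 'Target environment')
    }
    stages {
        // One stage per step of the pipeline
        stage('Build') {
            steps { sh 'mvn -q package -DskipTests' }
        }
        stage('SonarQube Check') {
            // Deployment continues only if this step passes
            steps { sh 'mvn sonar:sonar' }
        }
        stage('Deploy') {
            // Environment-specific service yaml (step 6) is assumed to live
            // in a per-environment resources folder
            steps { sh "kubectl apply -f resources/${params.ENV}/service.yaml" }
        }
    }
}
```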

19
Creating Jenkins pipelines for the new MS
1. Create new pipeline by clicking on 'New Item' -> Pipeline.
2. Go to configuration and update the git URL and required details.

3. Copy the Jenkins URL and secret key generated to use in the Git repository.

20
Defining Jenkins pipelines for the new MS
1. Go to Settings -> Webhooks

21
2. Go to 'Add new webhook'

22
3. Add the URL and Secret token generated during Jenkins pipeline creation.
4. Add the trigger action defined for the CICD pipeline build (push/merge/tag, etc.).

23
5. Click on 'Add webhook'
03

Kubernetes
Configurations

24
Kubernetes Clusters
(Production Clusters)

25
Kubernetes Clusters
(Non-Production Clusters)

26
Connecting to a cluster
• Click on 'Actions' -> Connect

• Cloud shell command:


gcloud container clusters get-credentials nonprod-comet-cluster-dev --region asia-southeast2 --project comet-nonprod-200617

27
Executing Kubernetes Commands
• Go to 'Compute Engine' menu from GCP menu
• SSH using the monitoring-vm

28
Basic Kubernetes Commands
• Check all the namespaces in the relevant cluster
o kubectl get ns

• List all pods in the namespace


o kubectl get po -n namespace

• List all pods in the namespace with ip and additional information


o kubectl get po -n namespace –o wide

• List all the services in the namespace


o kubectl get svc -n namespace

• List all deployments in the namespace


o kubectl get deployment -n namespace
29
• Edit deployment (this can be used to change the memory and CPU assigned to a
particular deployment without running the pipeline)
o kubectl edit deployment <DeploymentName> -n <namespace>
o Enter insert mode: i
o Save and exit after editing: Esc, then :wq
o Exit without saving: Esc, then :q!

• Describe pod (to check the pod status while the pod is starting up)
o kubectl describe po <PodName> -n <namespace>

• Check inside a pod (to run requests directly from the pod)
o kubectl exec -it <PodName> -n <namespace> -- sh

• Check logs in a specific pod
o kubectl logs -f <podName> -n <namespace>
o kubectl logs <podName> -n <namespace> --tail=<# lines>

30
04

Schedulers

31
Monitor Applied Schedulers
(production environment)

1. Log in to the GCP Console and connect to the ‘monitoring-vm’
2. Go to the root by executing the ‘sudo su –’ command
3. Execute ‘kubectl get cronjobs -n <namespace>’ to list all the cronjobs
   (Namespaces can be listed by executing ‘kubectl get ns’)

32
View Single Scheduler Details
(production environment)

1. Log in to the GCP Console, connect to the ‘monitoring-vm’, and go to the root by executing the ‘sudo su –’ command
2. Go to the cronjobs directory using ‘cd cronjobs’ and execute ‘ls’ to list all the cronjob yaml files deployed
3. Execute ‘cat <cronjob_yaml_name>’ to view the configurations of the cronjob
   (Yaml names can be listed by executing the ‘ls’ command)

33
Create New Scheduler
Create the yaml file with the necessary settings.

NOTE! Scheduler time: the cron expression can be written manually or with a third-party tool like crontab.guru. The time should be calculated relative to the server time.

• metadata.name: Scheduler name for identification
• spec.schedule: Scheduler execution time
• spec…….containers.name: Same as metadata.name
• spec…….containers.args: curl should be updated to the endpoint we need to call
• The rest of the settings can be used as they are, since they are default values

NOTE! As a practice, upload the yaml file to the cronjobs repository.
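
The fields described above can be sketched as a CronJob manifest. Everything concrete below (names, schedule, image, and endpoint) is an illustrative assumption, not a real deployment value:

```shell
# Write a sketch of a CronJob manifest to a local file; all values are illustrative.
cat > sample-cronjob.yaml <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: sample-scheduler              # metadata.name: scheduler name for identification
spec:
  schedule: "0 1 * * *"               # spec.schedule: execution time, relative to server time
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: sample-scheduler    # same as metadata.name
            image: curlimages/curl    # image choice is an assumption
            args:                     # curl target updated to the endpoint we need to call
            - curl
            - http://example-service:8080/endpoint
          restartPolicy: OnFailure
EOF

# Sanity-check the file before uploading it to the cronjobs repository
grep -c 'sample-scheduler' sample-cronjob.yaml    # prints: 2
```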

34
Apply New Scheduler
(production environment)

1. Log in to the GCP Console, connect to the ‘monitoring-vm’, go to the root by executing the ‘sudo su –’ command, then go to the cronjobs directory using ‘cd cronjobs’
2. Execute ‘vim <cronjob_yaml_name>.yaml’ to create a new yaml file in the cronjobs directory (use a unique name for the cronjob)
3. Press ‘i’ to enter insert mode and copy in the cronjob configurations
4. Press ‘Esc’ to exit insert mode and execute ‘:wq!’ to save and exit the yaml file
5. Execute ‘kubectl apply -f <cronjob_yaml_name>.yaml -n <namespace>’ to apply the cronjob and start it

35
Edit an Existing Scheduler
(production environment)

1. Log in to the GCP Console, connect to the ‘monitoring-vm’, and go to the root by executing the ‘sudo su –’ command
2. Execute ‘kubectl edit cronjob <cronjob_name> -n <namespace>’ to enter edit mode
3. Press ‘i’ to enter insert mode and make the modifications
4. Press ‘Esc’ to exit insert mode and execute ‘:wq!’ to save and exit; the changes are applied automatically on save

36
Delete an Existing Scheduler
(production environment)

1. Log in to the GCP Console, connect to the ‘monitoring-vm’, and go to the root by executing the ‘sudo su –’ command
2. Execute ‘kubectl delete cronjob <cronjob_name> -n <namespace>’ to remove the cronjob

37
Activity
(development environment)

Write a scheduler to run every 2 minutes and call the below API: "API".
Scheduler name: training-scheduler-{your_name}
yaml file name: training-scheduler-{your_name}.yaml

Apply the scheduler in the development environment. Check the logs, catch the scheduler execution log, and confirm the execution.
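
As a starting point for the activity, a sketch of the required manifest (replace ‘yourname’ with your own name; the real API endpoint is intentionally left as a placeholder, since it is not given here):

```shell
# Sketch only: 'yourname' and the API URL placeholder must be replaced.
cat > training-scheduler-yourname.yaml <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: training-scheduler-yourname
spec:
  schedule: "*/2 * * * *"    # every 2 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: training-scheduler-yourname
            image: curlimages/curl    # image choice is an assumption
            args: ["curl", "http://API-PLACEHOLDER"]
          restartPolicy: OnFailure
EOF

# Apply in the development environment, then confirm execution in the logs:
# kubectl apply -f training-scheduler-yourname.yaml -n <namespace>
# kubectl get pods -n <namespace>              # find the job pod
# kubectl logs <job_pod_name> -n <namespace>
```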

38
Thank you!
Innovation starts with a conversation. Let’s talk!

info@axiatadigitallabs.com

www.axiatadigitallabs.com

39
