
Building a Production Environment:

Before software is promoted to production, it is validated in a staging
environment. Some of the tests commonly run in a staging environment include:

 Unit Testing - determines whether individual functions or units are working as
intended and checks the smallest unit of code that can be logically isolated.
 Regression Testing - confirms that all previous features and functionality
included in the new release still work as expected.
 Integration Testing - tests all software dependencies and all software modules
together as a single group to evaluate system functionality and compliance.
 Chaos Testing - helps provide a better sense of how the new software or app will
hold up under different types of chaotic conditions, which helps the final product
to be much more resilient.
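To make the first category concrete, a minimal unit test exercises one function in complete isolation. The quota_remaining helper below is a hypothetical example written for illustration only, not part of OpenStack:

```python
import unittest

def quota_remaining(limit, used):
    """Return how many more instances a tenant may launch (hypothetical helper)."""
    if used > limit:
        raise ValueError("usage exceeds quota")
    return limit - used

class QuotaRemainingTest(unittest.TestCase):
    """Unit test: checks the smallest isolatable unit, one pure function."""

    def test_remaining(self):
        self.assertEqual(quota_remaining(10, 3), 7)

    def test_over_quota_raises(self):
        with self.assertRaises(ValueError):
            quota_remaining(10, 11)
```

Such a test runs with `python -m unittest` and needs no running services, which is what distinguishes it from the integration tests listed above.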

Deploying and Utilizing OpenStack in a Production Environment:

OpenStack is deployed and used in an environment consisting of the following
components:

 Three infrastructure (control plane) hosts
 Two compute hosts
 One NFS storage device
 One log aggregation host
 Multiple Network Interface Cards (NIC) configured as bonded pairs for each host
 Full compute kit with the Telemetry service (ceilometer) included, and with NFS
configured as a storage back end for the Image (glance) and Block Storage
(cinder) services
 Internet access via the router address 172.29.236.1 on the Management Network

Network configuration
Assign each host an appropriate IP address and apply the network configuration
on every host.
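As a sketch of the bonded-pair NICs mentioned in the component list above, the management bond on one host might look like this (Ubuntu ifupdown syntax; interface names and addresses are illustrative, with the gateway matching the 172.29.236.1 router address):

```
# Illustrative bonded management interface; eth0/eth1 and the
# 172.29.236.11 address are placeholders, not from the source.
auto bond0
iface bond0 inet static
    bond-mode active-backup
    bond-miimon 100
    bond-slaves eth0 eth1
    address 172.29.236.11
    netmask 255.255.252.0
    gateway 172.29.236.1
```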

Deployment configuration
Environment layout
The /etc/openstack_deploy/openstack_user_config.yml file defines the environment
layout.
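A hypothetical excerpt of that file, matching the layout above (three infrastructure hosts, two compute hosts), could look like the following; all host names and addresses are illustrative:

```yaml
# Illustrative excerpt of /etc/openstack_deploy/openstack_user_config.yml
cidr_networks:
  container: 172.29.236.0/22

used_ips:
  - "172.29.236.1,172.29.236.50"

global_overrides:
  internal_lb_vip_address: 172.29.236.9
  external_lb_vip_address: openstack.example.com
  management_bridge: "br-mgmt"

# Three infrastructure (control plane) hosts
shared-infra_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13

# Two compute hosts
compute_hosts:
  compute1:
    ip: 172.29.236.21
  compute2:
    ip: 172.29.236.22
```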

Environment customizations
The optionally deployed files in /etc/openstack_deploy/env.d allow the customization
of Ansible groups. This allows the deployer to set whether the services will run in a
container (the default), or on the host (on metal).
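For example, a small override file could place the cinder-volume service on the host rather than in a container; the file below is an illustrative sketch of this pattern:

```yaml
# Illustrative /etc/openstack_deploy/env.d/cinder.yml override:
# run the cinder-volume service on the host ("on metal")
# instead of in a container.
container_skel:
  cinder_volumes_container:
    properties:
      is_metal: true
```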

User variables
The /etc/openstack_deploy/user_variables.yml file defines the global overrides for
the default variables.
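As an illustration, a deployer might override a couple of defaults in that file; the variables shown are examples of this mechanism, not required settings:

```yaml
# Illustrative overrides in /etc/openstack_deploy/user_variables.yml
## Enable debug logging across the deployed services
debug: true

## Store Glance images on a file back end (for example, an NFS mount)
glance_default_store: file
```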
Application Orchestration Using OpenStack Heat:

OpenStack Orchestration
The mission of the OpenStack Orchestration program is to create a human- and
machine-accessible service for managing the entire lifecycle of infrastructure and
applications within OpenStack clouds.

Heat
Heat is the main project in the OpenStack Orchestration program. It implements an
orchestration engine to launch multiple composite cloud applications based on
templates in the form of text files that can be treated like code. A native Heat template
format is evolving, but Heat also endeavours to provide compatibility with the AWS
CloudFormation template format, so that many existing CloudFormation templates can
be launched on OpenStack. Heat provides both an OpenStack-native REST API and a
CloudFormation-compatible Query API.
How it works

 A Heat template describes the infrastructure for a cloud application in a text file that
is readable and writable by humans, and can be checked into version control, diffed,
etc.
 Infrastructure resources that can be described include servers, floating IPs, volumes,
security groups, users, etc.
 Heat also provides an autoscaling service that integrates with Telemetry, so you can
include a scaling group as a resource in a template.
 Templates can also specify the relationships between resources (e.g. this volume is
connected to this server). This enables Heat to call out to the OpenStack APIs to
create all of your infrastructure in the correct order to completely launch your
application.
 Heat manages the whole lifecycle of the application - when you need to change your
infrastructure, simply modify the template and use it to update your existing stack.
Heat knows how to make the necessary changes. It will delete all of the resources
when you are finished with the application, too.
 Heat primarily manages infrastructure, but the templates integrate well with
software configuration management tools such as Puppet and Chef. The Heat team
is working on providing even better integration between infrastructure and software.
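The points above can be sketched in a minimal Heat Orchestration Template (HOT) that declares a server, a volume, and the relationship between them; the image, flavor, and network names are placeholders:

```yaml
# Minimal illustrative HOT template: one server with an attached volume.
heat_template_version: 2018-08-31

description: One server with a 10 GB volume attached.

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: cirros        # placeholder image name
      flavor: m1.small     # placeholder flavor name
      networks:
        - network: private # placeholder network name

  my_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10

  # The relationship: Heat creates the server and volume first,
  # then attaches the volume to the server in the correct order.
  attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: my_server }
      volume_id: { get_resource: my_volume }
```

A stack could then be launched with `openstack stack create -t server_with_volume.yaml mystack` (the file and stack names are placeholders).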
