You

Write detailed Terraform code using modules. Create a resource group in Azure. Within it, create VNets, each with a unique /16 address space (use a serial number starting at 1 for the uniqueness logic). The VNets need to be created in 5 Azure regions (one per region). Within each VNet, create two subnets of /24 each. The resource group can be one module; the VNet and subnets can be another module. Next, create another module for VM and network interface creation. Each region will have a pair of Windows and Linux VMs: Windows with one interface and Linux with two interfaces. Subnet 1 will be associated with the Windows NIC and the Linux eth0 interface, and subnet 2 will be associated with the Linux eth1 interface. A map of subnet IDs per region should be exported from the VNet module and used in the corresponding region's configuration of the VM module.

Copilot

Certainly! Below is an example of how you can structure your Terraform code using modules to achieve the
described setup. Please note that you will need to replace placeholders like <subscription_id>, <tenant_id>,
and <client_id> with your actual Azure account details, and <client_secret> with your secure secret.

# Main configuration file

provider "azurerm" {
features {}
subscription_id = "<subscription_id>"
tenant_id = "<tenant_id>"
client_id = "<client_id>"
client_secret = "<client_secret>"
}

module "resource_group" {
source = "./modules/resource_group"
prefix = "mycompany"
location = "East US"
}

module "network" {
source = "./modules/network"
resource_group_name = module.resource_group.name
location = module.resource_group.location
regions = ["East US", "West US", "Central US", "North Europe", "West Europe"]
}

module "virtual_machines" {
source = "./modules/virtual_machines"
resource_group_name = module.resource_group.name
location = module.resource_group.location
subnet_ids = module.network.subnet_ids
}
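If you want to pin the provider version (recommended), a minimal versions.tf can sit alongside this file. The version constraints below are assumptions; adjust them to whatever you have tested in your environment:

# versions.tf

terraform {
  required_version = ">= 1.3.0" # assumed minimum; adjust as needed

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0" # assumed constraint; pin to your tested version
    }
  }
}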

The resource_group module might look like this:

# modules/resource_group/main.tf

variable "prefix" {
description = "Prefix for the resource group name"
type = string
}

variable "location" {
description = "Azure region for the resource group"
type = string
}

resource "azurerm_resource_group" "rg" {


name = "${var.prefix}-rg"
location = var.location
}

output "name" {
value = azurerm_resource_group.rg.name
}

The network module could be structured as follows:

# modules/network/main.tf

variable "resource_group_name" {
description = "Name of the resource group"
type = string
}

variable "regions" {
description = "List of Azure regions"
type = list(string)
}

variable "location" {
description = "Location for the resources"
type = string
}

output "subnet_ids" {
value = { for region in var.regions : region => {
subnet1_id = module.vnet[region].subnet1_id
subnet2_id = module.vnet[region].subnet2_id
}}
}

module "vnet" {
for_each = toset(var.regions)
source = "../vnet"
resource_group_name = var.resource_group_name
location = each.key
vnet_cidr = "10.${count.index}.0.0/16"
}
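The subnet_ids output above is a map keyed by region name, with each region pointing to the two subnet IDs created by the vnet module. Conceptually it has this shape (the resource ID paths below are abbreviated and purely illustrative):

# Shape of module.network.subnet_ids (illustrative)
# {
#   "East US" = {
#     subnet1_id = ".../virtualNetworks/eastus-vnet/subnets/subnet1"
#     subnet2_id = ".../virtualNetworks/eastus-vnet/subnets/subnet2"
#   }
#   "West US" = { ... }
#   ...
# }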

# modules/vnet/main.tf

variable "resource_group_name" {
description = "Name of the resource group"
type = string
}

variable "location" {
description = "Azure region for the VNet"
type = string
}

variable "vnet_cidr" {
description = "CIDR for the VNet"
type = string
}

resource "azurerm_virtual_network" "vnet" {


name = "${var.location}-vnet"
address_space = [var.vnet_cidr]
location = var.location
resource_group_name = var.resource_group_name
}

resource "azurerm_subnet" "subnet1" {


name = "subnet1"
resource_group_name = var.resource_group_name
virtual_network_name = azurerm_virtual_network.vnet.name
address_prefixes = ["${cidrsubnet(var.vnet_cidr, 8, 1)}"]
}

resource "azurerm_subnet" "subnet2" {


name = "subnet2"
resource_group_name = var.resource_group_name
virtual_network_name = azurerm_virtual_network.vnet.name
address_prefixes = ["${cidrsubnet(var.vnet_cidr, 8, 2)}"]
}

output "subnet1_id" {
value = azurerm_subnet.subnet1.id
}

output "subnet2_id" {
value = azurerm_subnet.subnet2.id
}
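For reference, cidrsubnet() carves the /24 subnets out of the /16 passed in from the network module. For example, with vnet_cidr = "10.1.0.0/16":

# cidrsubnet(prefix, newbits, netnum)
# cidrsubnet("10.1.0.0/16", 8, 1)  =>  "10.1.1.0/24"  (subnet1)
# cidrsubnet("10.1.0.0/16", 8, 2)  =>  "10.1.2.0/24"  (subnet2)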

The virtual_machines module would be something like this:

# modules/virtual_machines/main.tf

variable "resource_group_name" {
description = "Name of the resource group"
type = string
}

variable "subnet_ids" {
description = "Map of subnet IDs per region"
type = map(map(string))
}

# ... VM and NIC creation resources go here ...

# Use the subnet_ids variable to associate NICs with subnets

This is a high-level example to get you started. You’ll need to fill in the details for the virtual_machines
module, including the VM and NIC resources, and ensure that the subnet IDs are correctly passed between
modules. Also, make sure to review and test the code thoroughly before applying it, especially in a production
environment. Always consider best practices for security and infrastructure management.

You

Share detailed code for the virtual machine module as well, including the network_interface resources, azurerm_linux_virtual_machine, and azurerm_windows_virtual_machine. Also, please share the project hierarchy structure.

Copilot

Certainly! Here’s a detailed example of the virtual_machines module, which includes resources for creating
network interfaces, Linux virtual machines, and Windows virtual machines. Additionally, I’ll provide a
suggested project hierarchy structure.

Virtual Machines Module (virtual_machines)


# modules/virtual_machines/main.tf

variable "resource_group_name" {
description = "Name of the resource group"
type = string
}

variable "subnet_ids" {
description = "Map of subnet IDs per region"
type = map(map(string))
}
variable "location" {
description = "Location for the resources"
type = string
}

resource "azurerm_network_interface" "windows_nic" {


name = "win-nic-${var.location}"
location = var.location
resource_group_name = var.resource_group_name

ip_configuration {
name = "ipconfig1"
subnet_id = var.subnet_ids[var.location]["subnet1_id"]
private_ip_address_allocation = "Dynamic"
}
}

resource "azurerm_network_interface" "linux_nic_eth0" {


name = "linux-nic-eth0-${var.location}"
location = var.location
resource_group_name = var.resource_group_name

ip_configuration {
name = "ipconfig1"
subnet_id = var.subnet_ids[var.location]["subnet1_id"]
private_ip_address_allocation = "Dynamic"
}
}

resource "azurerm_network_interface" "linux_nic_eth1" {


name = "linux-nic-eth1-${var.location}"
location = var.location
resource_group_name = var.resource_group_name

ip_configuration {
name = "ipconfig2"
subnet_id = var.subnet_ids[var.location]["subnet2_id"]
private_ip_address_allocation = "Dynamic"
}
}

resource "azurerm_windows_virtual_machine" "windows_vm" {


name = "win-vm-${var.location}"
resource_group_name = var.resource_group_name
location = var.location
size = "Standard_DS1_v2"
admin_username = "adminuser"
admin_password = "P@ssw0rd123!"
network_interface_ids = [
azurerm_network_interface.windows_nic.id,
]

os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}

source_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2019-Datacenter"
version = "latest"
}
}

resource "azurerm_linux_virtual_machine" "linux_vm" {


name = "linux-vm-${var.location}"
resource_group_name = var.resource_group_name
location = var.location
size = "Standard_DS1_v2"
admin_username = "adminuser"
admin_password = "P@ssw0rd123!"
network_interface_ids = [
azurerm_network_interface.linux_nic_eth0.id,
azurerm_network_interface.linux_nic_eth1.id,
]

os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}

source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "18.04-LTS"
version = "latest"
}
}
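If you want the module to expose information about the machines it creates, a minimal modules/virtual_machines/outputs.tf could look like the sketch below (the output names are illustrative):

# modules/virtual_machines/outputs.tf

output "windows_vm_id" {
  value = azurerm_windows_virtual_machine.windows_vm.id
}

output "linux_vm_id" {
  value = azurerm_linux_virtual_machine.linux_vm.id
}

output "linux_vm_private_ips" {
  value = azurerm_linux_virtual_machine.linux_vm.private_ip_addresses
}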

Project Hierarchy Structure

Here’s a suggested project hierarchy structure for your Terraform project:

.
├── main.tf
├── variables.tf
├── outputs.tf
├── modules
│   ├── resource_group
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── network
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── vnet
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── virtual_machines
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
└── provider.tf

In this structure:

• main.tf is the main entry point where you call your modules.
• variables.tf defines the variables used in your configurations.
• outputs.tf defines the outputs from your modules (a minimal root-level example is sketched below).
• provider.tf contains the provider configuration.
• The modules directory contains subdirectories for each module, which in turn contain their own
main.tf, variables.tf, and outputs.tf.
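
As an example of the root-level outputs.tf, you might simply re-export the per-region subnet map so it is visible after terraform apply (the output name is illustrative):

# outputs.tf

output "subnet_ids_per_region" {
  description = "Map of subnet IDs per region, as exported by the network module"
  value       = module.network.subnet_ids
}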

Remember to replace sensitive information such as passwords with variables or secure secrets management
solutions. Also, ensure that you have the necessary permissions and that your Azure provider is correctly
configured before running the Terraform code. Always test your code in a safe environment before deploying
to production.
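
For instance, instead of hardcoding the admin password shown above, you could declare a sensitive variable in the virtual_machines module and supply its value at apply time (the variable name is illustrative):

# modules/virtual_machines/variables.tf

variable "admin_password" {
  description = "Admin password for the VMs; supply via TF_VAR_admin_password or a secrets store"
  type        = string
  sensitive   = true
}

The VM resources would then reference var.admin_password instead of a literal string.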
