
terraform-examples's Introduction

Terraform examples

This repository contains example Terraform manifests for different Selectel services.

Repository structure

.
├── examples
│   ├── dbaas
│   │   ├── get_params
│   │   ├── mysql_cluster
│   │   ├── postgres_cluster
│   │   └── redis_cluster
│   ├── mks
│   │   ├── cluster_one_nodegroup
│   │   └── cluster_one_nodegroup_with_net_infra
│   └── vpc
│       ├── preemptible_server
│       ├── preemptible_server_with_gpu
│       ├── server_local_and_remote_disks
│       ├── server_local_root_disk
│       ├── server_remote_root_disk
│       ├── server_remote_root_disk_and_attached_share
│       ├── server_remote_root_disk_two_ports
│       ├── server_remote_root_disk_with_server_group
│       ├── server_windows
│       ├── several_servers_and_loadbalancers_with_project
│       ├── several_servers_and_loadbalancers_without_project
│       ├── several_servers_one_network
│       ├── several_servers_routing
│       └── several_servers_with_networking_and_fwaas
└── modules
    ├── mks
    │   ├── cluster
    │   ├── nodegroup
    │   └── nodegroup_local_disk
    └── vpc
        ├── flavor
        ├── floatingip
        ├── image_datasource
        ├── keypair
        ├── lb_active_standby
        ├── lb_components
        ├── lb_components_http
        ├── lb_sngl
        ├── license
        ├── multiple_servers
        ├── multiple_servers_with_fwaas
        ├── nat
        ├── os_lb_env
        ├── project
        ├── project_with_user
        ├── routing_network
        ├── routing_os
        ├── routing_selvpc
        ├── routing_servers
        ├── server_group
        ├── server_local_and_remote_disks
        ├── server_local_root_disk
        ├── server_remote_root_disk
        ├── server_remote_root_disk_and_attached_share
        ├── server_remote_root_disk_two_ports
        ├── server_remote_root_disk_with_gpu
        ├── share
        ├── single_instance
        ├── subnet
        └── vrrp_subnet
  • examples - Contains example Terraform environments that usually combine several resources and use modules from the modules directory. These examples can be used as-is or adapted to your specific needs.

  • modules - Contains small, reusable Terraform modules that can be used in many Terraform environments. These modules wrap Terraform resources and data sources and expose configurable variables. All of them are fully compatible with the Selectel VPC service. A minimal usage sketch follows.
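For illustration only, this is roughly how an environment under examples consumes a module from the modules directory via a relative source path. The input names shown are hypothetical, not necessarily the module's real variables; check the module's vars.tf for the actual inputs.

module "server_local_root_disk" {
  source = "../../../modules/vpc/server_local_root_disk"

  # Hypothetical inputs for illustration only.
  server_name        = "tf_server"
  availability_zone  = "ru-3a"
  keypair_public_key = file("~/.ssh/id_rsa.pub")
}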


terraform-examples's Issues

"The requested availability zone is not suitable for the requested instance type" for ru-2a

I tried your example server_local_root_disk and it works, but I need a server in ru-2a.

After the obvious modifications I got the following output:

module.server_local_root_disk.module.nat.data.openstack_networking_network_v2.external_net: Refreshing state...
module.server_local_root_disk.module.image_datasource.data.openstack_images_image_v2.image_1: Refreshing state...

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.server_local_root_disk.openstack_compute_instance_v2.instance_1 will be created
  + resource "openstack_compute_instance_v2" "instance_1" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "ru-2a"
      + flavor_id           = (known after apply)
      + flavor_name         = (known after apply)
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = "72dc3264-2920-424d-bdf2-e62603578ae9"
      + image_name          = (known after apply)
      + key_pair            = (known after apply)
      + name                = "tf_server"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + floating_ip    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }

      + vendor_options {
          + ignore_resize_confirmation = true
        }
    }

  # module.server_local_root_disk.openstack_networking_floatingip_associate_v2.association_1 will be created
  + resource "openstack_networking_floatingip_associate_v2" "association_1" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # module.server_local_root_disk.openstack_networking_port_v2.port_1 will be created
  + resource "openstack_networking_port_v2" "port_1" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + name                   = "tf_server-eth0"
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + tenant_id              = (known after apply)

      + binding {
          + host_id     = (known after apply)
          + profile     = (known after apply)
          + vif_details = (known after apply)
          + vif_type    = (known after apply)
          + vnic_type   = (known after apply)
        }

      + fixed_ip {
          + subnet_id = (known after apply)
        }
    }

  # module.server_local_root_disk.random_string.random_name will be created
  + resource "random_string" "random_name" {
      + id          = (known after apply)
      + length      = 5
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = true
      + result      = (known after apply)
      + special     = false
      + upper       = true
    }

  # module.server_local_root_disk.module.flavor.openstack_compute_flavor_v2.flavor_1 will be created
  + resource "openstack_compute_flavor_v2" "flavor_1" {
      + disk         = 8
      + extra_specs  = (known after apply)
      + id           = (known after apply)
      + is_public    = false
      + name         = (known after apply)
      + ram          = 8192
      + region       = (known after apply)
      + rx_tx_factor = 1
      + vcpus        = 4
    }

  # module.server_local_root_disk.module.floatingip.openstack_networking_floatingip_v2.floatingip_1 will be created
  + resource "openstack_networking_floatingip_v2" "floatingip_1" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "external-network"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + tenant_id  = (known after apply)
    }

  # module.server_local_root_disk.module.keypair.selectel_vpc_keypair_v2.keypair_1 will be created
  + resource "selectel_vpc_keypair_v2" "keypair_1" {
      + id         = (known after apply)
      + name       = (known after apply)
      + public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDkriqP2jjnHAfonP7M8YE7QLI9ykpWLYvklskLxoad0EWlMeHfz25Nx+sG66QpyRB8WdU8d6/dG6wD+Vz8QKs56cvbWFM8JicQ9wcW0GuE6UR2Kxn46PgIteqs3aLUcWNOgy+uK3n/FCymQWgkuIaJBmmBlhLvycVgkQR5WHinuH1Jg8Vhm2tHr+yVjPHI8vatZKHucMYP6axw/tCbkJ7CxJtOzzUqhx9Wk1yF+v4+SahfcKd36mkn8H4HUd4GITVPYO3hA8j2QTCVjTfw6JPRQ/JX72Q2uo92mkZH9OUwkdN3YBEyuvoQRVHmgxcvXo4lnBmfMLpkNK876tD13W54jA7Xi9Dl8KTGmSO14X9sGhTSoHGPykUZmEciqSTD2eEZjSrNyi1FvQPMY6BxQ4y4EUQG9aYFwlIuCsmbmlOqeXgFHhQylD97tFkdFRjXWvbKka2IJPBu2FnxHWAAQWd6FSotANGmYELsJnCKKZ8fTB+5P5LK0GX2bhPgQdQCL+O6sNC0kyme3dBMQBcwnpfnnhbQdSuGsJc0n91pb10bA2wFIlwpVH9an2gf432sPjwLFBbvviEzTPGSd94PhFg3zELH090KacdSP0EOy6YE+dSlILZGnToTPIe4wjxm5/c/e6jqzrSQWyABjh3yis8vKi4n0bvrCiNsga6KU1uq+w==\n"
      + user_id    = "aaed7c45bdb74ebfa1a9592fcb3c88ce"
    }

  # module.server_local_root_disk.module.nat.openstack_networking_network_v2.network_1 will be created
  + resource "openstack_networking_network_v2" "network_1" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = (known after apply)
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "network_1"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
    }

  # module.server_local_root_disk.module.nat.openstack_networking_router_interface_v2.router_interface_1 will be created
  + resource "openstack_networking_router_interface_v2" "router_interface_1" {
      + id        = (known after apply)
      + port_id   = (known after apply)
      + region    = (known after apply)
      + router_id = (known after apply)
      + subnet_id = (known after apply)
    }

  # module.server_local_root_disk.module.nat.openstack_networking_router_v2.router_1 will be created
  + resource "openstack_networking_router_v2" "router_1" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = (known after apply)
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_gateway        = (known after apply)
      + external_network_id     = "8cc4ee6a-8dc9-4865-8877-ef5b5cdef020"
      + id                      = (known after apply)
      + name                    = "router_1"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip {
          + ip_address = (known after apply)
          + subnet_id  = (known after apply)
        }
    }

  # module.server_local_root_disk.module.nat.openstack_networking_subnet_v2.subnet_1 will be created
  + resource "openstack_networking_subnet_v2" "subnet_1" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.0.0/24"
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "192.168.0.0/24"
      + network_id        = (known after apply)
      + no_gateway        = false
      + region            = (known after apply)
      + tenant_id         = (known after apply)

      + allocation_pool {
          + end   = (known after apply)
          + start = (known after apply)
        }

      + allocation_pools {
          + end   = (known after apply)
          + start = (known after apply)
        }
    }

Plan: 14 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: 

===

module.server_local_root_disk.module.nat.openstack_networking_router_v2.router_1: Creating...
module.server_local_root_disk.module.keypair.selectel_vpc_keypair_v2.keypair_1: Creation complete after 2s [id=aaed7c45bdb74ebfa1a9592fcb3c88ce/keypair-rSN0D]
module.server_local_root_disk.module.flavor.openstack_compute_flavor_v2.flavor_1: Creation complete after 1s [id=efbb01f1-6064-4090-a0bf-58d542804044]
module.server_local_root_disk.module.nat.openstack_networking_network_v2.network_1: Creation complete after 6s [id=a3dd2b3b-a41c-4608-8029-994cb0576158]
module.server_local_root_disk.module.nat.openstack_networking_subnet_v2.subnet_1: Creating...
module.server_local_root_disk.module.floatingip.openstack_networking_floatingip_v2.floatingip_1: Creation complete after 9s [id=5a6313bd-87be-49fa-a10b-723d51cf0600]
module.server_local_root_disk.module.nat.openstack_networking_router_v2.router_1: Creation complete after 9s [id=ccbe6a88-568b-454a-94ee-2b28ebf6c229]
module.server_local_root_disk.module.nat.openstack_networking_subnet_v2.subnet_1: Creation complete after 6s [id=5c0a837e-5b67-4d45-bf49-1254e91361a0]
module.server_local_root_disk.module.nat.openstack_networking_router_interface_v2.router_interface_1: Creating...
module.server_local_root_disk.openstack_networking_port_v2.port_1: Creating...
module.server_local_root_disk.openstack_networking_port_v2.port_1: Creation complete after 6s [id=ab3576f1-3528-4d30-b826-34c47c677491]
module.server_local_root_disk.openstack_networking_floatingip_associate_v2.association_1: Creating...
module.server_local_root_disk.openstack_compute_instance_v2.instance_1: Creating...
module.server_local_root_disk.openstack_networking_floatingip_associate_v2.association_1: Creation complete after 2s [id=5a6313bd-87be-49fa-a10b-723d51cf0600]
module.server_local_root_disk.module.nat.openstack_networking_router_interface_v2.router_interface_1: Creation complete after 8s [id=ccf82f0a-b9ce-472c-9e35-419fa1dba477]

module.server_local_root_disk.openstack_compute_instance_v2.instance_1: Creating...

Error: Error creating OpenStack server: Bad request with: [POST https://api.ru-2.selvpc.ru/compute/v2.1/servers], error message: {"badRequest": {"message": "The requested availability zone is not suitable for the requested instance type", "code": 400}}

  on ..\..\..\modules\vpc\server_local_root_disk\main.tf line 49, in resource "openstack_compute_instance_v2" "instance_1":
  49: resource "openstack_compute_instance_v2" "instance_1" {

Any ideas how to fix it?

Argument "monitoring_enabled" is not expected for "openstack_containerinfra_clustertemplate_v1"

Environment
terraform v0.12.20
provider.openstack v1.25.0

Steps to reproduce

  1. In examples/vpc/kubernetes_cluster execute terraform init
  2. In the same directory execute terraform plan

Expected result
Execution plan is printed

Actual result
Unsupported argument error received

Error: Unsupported argument

  on ../../../modules/vpc/kubernetes_cluster/main.tf line 61, in resource "openstack_containerinfra_clustertemplate_v1" "clustertemplate_1":
  61:   monitoring_enabled    = "${var.cluster_monitoring_enabled}"

An argument named "monitoring_enabled" is not expected here.

Add the input variable region to the module nat

The module uses the region specified in the provider, which makes it impossible to create resources in different regions. The suggested fix is to add a region variable to the module's main.tf and pass it to all of its data sources and resources. That way the region does not have to be set in the provider and can instead be supplied when calling the nat module.

Unfortunately, I cannot offer a PR, because I failed to keep it backward compatible (take the region from the provider unless it is explicitly specified when the module is called).
Alternatively, a separate nat_with_region module could be added.

I understand that I could create a module for each individual region and declare a provider inside it. I do not know whether this is correct; I have seen recommendations not to include provider blocks in modules.
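A possible sketch of the requested change, assuming the nat module's existing resource names (router_1 and data.openstack_networking_network_v2.external_net, as seen in the plan output above). The new variable defaults to null, which Terraform treats as "not set", so the provider's region keeps being used unless a region is passed explicitly when the module is called.

# Sketch only; not the module's actual code.
variable "region" {
  type    = string
  default = null   # null falls back to the provider's region, preserving backward compatibility
}

data "openstack_networking_network_v2" "external_net" {
  external = true
  region   = var.region
}

resource "openstack_networking_router_v2" "router_1" {
  name                = "router_1"
  region              = var.region
  external_network_id = data.openstack_networking_network_v2.external_net.id
}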

No example for an HTTPS listener

It says here that a load balancer with HTTPS will be created:

# Creating a base-type load balancer with redundancy, an HTTPS rule and an SSL certificate

but in vars.tf you specify the HTTP protocol.

I never figured out how to add SSL to an HTTPS listener.
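For reference, a minimal sketch (not taken from this repository) of a TLS-terminating listener with the OpenStack provider: the protocol is TERMINATED_HTTPS and the certificate is referenced by its Barbican container URL. Both variables below are hypothetical names, not this repository's inputs.

resource "openstack_lb_listener_v2" "https" {
  name                      = "https"
  protocol                  = "TERMINATED_HTTPS"
  protocol_port             = 443
  loadbalancer_id           = var.loadbalancer_id    # hypothetical variable
  default_tls_container_ref = var.tls_container_ref  # Barbican container URL; hypothetical variable
}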

Provider source not supported in Terraform v0.12

% terraform init
Initializing modules...
- project_with_user in ../../../modules/vpc/project_with_user
- project_with_user.keypair in ../../../modules/vpc/keypair
- project_with_user.project in ../../../modules/vpc/project
- project_with_user.role in ../../../modules/vpc/role
- project_with_user.user in ../../../modules/vpc/user
- server_local_root_disk in ../../../modules/vpc/server_local_root_disk
- server_local_root_disk.flavor in ../../../modules/vpc/flavor
- server_local_root_disk.floatingip in ../../../modules/vpc/floatingip
- server_local_root_disk.image_datasource in ../../../modules/vpc/image_datasource
- server_local_root_disk.keypair in ../../../modules/vpc/keypair
- server_local_root_disk.nat in ../../../modules/vpc/nat
There are some problems with the configuration, described below.

The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.

Warning: Provider source not supported in Terraform v0.12

  on versions.tf line 3, in terraform:
   3:     selectel = {
   4:       source = "selectel/selectel"
   5:     }

A source was declared for provider selectel. Terraform v0.12 does not support
the provider source attribute. It will be ignored.

(and 12 more similar warnings elsewhere)


Error: Reserved argument name in module block

  on ../../../modules/vpc/project_with_user/main.tf line 19, in module "keypair":
  19:   count  = var.keypair_name != "" ? 1 : 0

The name "count" is reserved for use in a future version of Terraform.

Quota exceeded for resources: ["router"].", "type": "OverQuota"

Your example server_remote_root_disk works nicely for one server.

But I need at least three servers. The obvious modification (https://pastebin.com/tM0gdDY2) does not work, since it creates a router for each server and the internal router quota (2 allowed) is soon exceeded. I managed to modify the scripts to get three servers by creating the nat first and then the servers, but I think there should be an out-of-the-box way. Maybe I misunderstood the situation and it is already possible?
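A sketch of the shared-router approach the reporter describes, not this repository's module API: the network, subnet and router are created once (for example by the nat module), and several servers attach to the same subnet via count, so only a single router is consumed from the quota. All var.* names here are hypothetical inputs.

resource "openstack_networking_port_v2" "port" {
  count      = 3
  name       = "tf_server-${count.index}-eth0"
  network_id = var.network_id   # hypothetical: the one shared network

  fixed_ip {
    subnet_id = var.subnet_id   # hypothetical: the one shared subnet
  }
}

resource "openstack_compute_instance_v2" "server" {
  count     = 3
  name      = "tf_server-${count.index}"
  flavor_id = var.flavor_id     # hypothetical variable
  image_id  = var.image_id      # hypothetical variable
  key_pair  = var.key_pair      # hypothetical variable

  network {
    port = openstack_networking_port_v2.port[count.index].id
  }
}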

Your query returned more than one result.

terraform-examples/examples/vpc/server_local_root_disk

 % terraform apply

╷
│ Error: Your query returned more than one result. Please try a more specific search criteria, or set `most_recent` attribute to true.
│ 
│   with module.server_local_root_disk.module.image_datasource.data.openstack_images_image_v2.image_1,
│   on ../../../modules/vpc/image_datasource/main.tf line 1, in data "openstack_images_image_v2" "image_1":
│    1: data "openstack_images_image_v2" "image_1" {
│ 
╵
2021-05-26T19:00:00.881+0300 [DEBUG] provider.terraform-provider-openstack_v1.42.0: 2021/05/26 19:00:00 [DEBUG] Multiple results found and `most_recent` is set to: false
2021-05-26T19:00:00.881+0300 [DEBUG] provider.terraform-provider-openstack_v1.42.0: 2021/05/26 19:00:00 [DEBUG] Multiple results found: []images.Image{images.Image{ID:"e5c07a92-e54a-4991-ad5e-60c1a5d6ba11", Name:"Ubuntu 18.04 LTS 64-bit", Status:"active", Tags:[]string{}, ContainerFormat:"bare", DiskFormat:"raw", MinDiskGigabytes:5, MinRAMMegabytes:512, Owner:"3acf7ceddc024b86b86ef151e4972805", Protected:false, Visibility:"public", Hidden:false, Checksum:"7696d5d6e5db80cb83d027f3ee81f39f", SizeBytes:2194145280, Metadata:map[string]string(nil), Properties:map[string]interface {}{"direct_url":"rbd://55923305-a505-448b-984c-c208ccff05f2/images/e5c07a92-e54a-4991-ad5e-60c1a5d6ba11/snap", "hw_disk_bus":"scsi", "hw_qemu_guest_agent":"yes", "hw_scsi_model":"virtio-scsi", "locations":[]interface {}{map[string]interface {}{"metadata":map[string]interface {}{"store":"ru-3a"}, "url":"rbd://55923305-a505-448b-984c-c208ccff05f2/images/e5c07a92-e54a-4991-ad5e-60c1a5d6ba11/snap"}}, "os_distro":"ubuntu", "os_hash_algo":"sha512", "os_hash_value":"578c87fa4bc58bbf3f15262816895bd5bc148bc865e29c443c1858e8bfe2b02b5d01a2ee8bb4ccb273e66f262ffccdb0460869a85f643c40c3d8801ae2fecaf8", "os_type":"linux", "stores":"ru-3a", "watchdog":"pause", "x_sel_image_agent_type":"cloud-init", "x_sel_image_os_arch":"amd64", "x_sel_image_os_dist":"ubuntu", "x_sel_image_owner":"Selectel", "x_sel_image_source_file":"ubuntu-bionic-amd64-selectel-master-product-0.1.img", "x_sel_image_type":"master", "x_sel_os_type":"linux"}, CreatedAt:time.Time{wall:0x0, ext:63756216653, loc:(*time.Location)(nil)}, UpdatedAt:time.Time{wall:0x0, ext:63757639506, loc:(*time.Location)(nil)}, File:"/v2/images/e5c07a92-e54a-4991-ad5e-60c1a5d6ba11/file", Schema:"/v2/schemas/image", VirtualSize:0, OpenStackImageImportMethods:[]string(nil), OpenStackImageStoreIDs:[]string(nil)}, images.Image{ID:"86abb421-7e8b-49d6-8718-e73098654ebd", Name:"Ubuntu 18.04 LTS 64-bit", Status:"active", Tags:[]string{}, ContainerFormat:"bare", DiskFormat:"raw", MinDiskGigabytes:5, MinRAMMegabytes:512, Owner:"3acf7ceddc024b86b86ef151e4972805", Protected:false, Visibility:"public", Hidden:false, Checksum:"fc79fbd901ad9172db5d6e4cc386b20e", SizeBytes:2196439040, Metadata:map[string]string(nil), Properties:map[string]interface {}{"direct_url":"rbd://55923305-a505-448b-984c-c208ccff05f2/images/86abb421-7e8b-49d6-8718-e73098654ebd/snap", "hw_disk_bus":"scsi", "hw_qemu_guest_agent":"yes", "hw_scsi_model":"virtio-scsi", "locations":[]interface {}{map[string]interface {}{"metadata":map[string]interface {}{"store":"ru-3a"}, "url":"rbd://55923305-a505-448b-984c-c208ccff05f2/images/86abb421-7e8b-49d6-8718-e73098654ebd/snap"}}, "os_distro":"ubuntu", "os_hash_algo":"sha512", "os_hash_value":"6d9f2225074990c31b5e1a21dd46307709c3b85e7f6211522db997b13dbd44c41ba4d50d57168a8b829656066e3a45f6d52644bba74d944963bbefc454daf7db", "os_type":"linux", "stores":"ru-3a", "watchdog":"pause", "x_sel_image_agent_type":"cloud-init", "x_sel_image_os_arch":"amd64", "x_sel_image_os_dist":"ubuntu", "x_sel_image_owner":"Selectel", "x_sel_image_source_file":"ubuntu-bionic-amd64-selectel-master-product-0.1.img", "x_sel_image_type":"master", "x_sel_os_type":"linux"}, CreatedAt:time.Time{wall:0x0, ext:63756822192, loc:(*time.Location)(nil)}, UpdatedAt:time.Time{wall:0x0, ext:63757632836, loc:(*time.Location)(nil)}, File:"/v2/images/86abb421-7e8b-49d6-8718-e73098654ebd/file", Schema:"/v2/schemas/image", VirtualSize:0, OpenStackImageImportMethods:[]string(nil), OpenStackImageStoreIDs:[]string(nil)}}

terraform -v:

Terraform v0.15.4
on darwin_amd64
+ provider registry.terraform.io/hashicorp/random v3.1.0
+ provider registry.terraform.io/selectel/selectel v3.5.0
+ provider registry.terraform.io/terraform-provider-openstack/openstack v1.42.0
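As the error and the debug log indicate, two public images share the name "Ubuntu 18.04 LTS 64-bit". A minimal sketch (not the repository's module code) of resolving the ambiguity the way the error message suggests, by letting the data source pick the newest match:

data "openstack_images_image_v2" "image_1" {
  name        = "Ubuntu 18.04 LTS 64-bit"
  visibility  = "public"
  most_recent = true   # pick the most recently updated of the matching images
}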

Example cluster_one_nodegroup_with_net_infra does not work properly

Hello. Your examples are great, but I found a small problem with cluster_one_nodegroup_with_net_infra.

It does not create the node group if you specify a keypair.

╷
│ Error: error creating nodegroup: unable to find new nodegroup by ID after creating
│ 
│   with module.kubernetes_front_nodegroup.selectel_mks_nodegroup_v1.nodegroup_1,
│   on terraform-examples/modules/mks/nodegroup/main.tf line 1, in resource "selectel_mks_nodegroup_v1" "nodegroup_1":
│    1: resource "selectel_mks_nodegroup_v1" "nodegroup_1" {
│ 
╵
╷
│ Error: error creating nodegroup: unable to find new nodegroup by ID after creating
│ 
│   with module.kubernetes_system_nodegroup.selectel_mks_nodegroup_v1.nodegroup_1,
│   on terraform-examples/modules/mks/nodegroup/main.tf line 1, in resource "selectel_mks_nodegroup_v1" "nodegroup_1":
│    1: resource "selectel_mks_nodegroup_v1" "nodegroup_1" {
│ 
╵
╷
│ Error: error creating nodegroup: unable to find new nodegroup by ID after creating
│ 
│   with module.kubernetes_nodegroup.selectel_mks_nodegroup_v1.nodegroup_1,
│   on terraform-examples/modules/mks/nodegroup/main.tf line 1, in resource "selectel_mks_nodegroup_v1" "nodegroup_1":
│    1: resource "selectel_mks_nodegroup_v1" "nodegroup_1" {
│ 

How can I get a floatingip?

I need to get the floating IP from this module, but I can't because the module doesn't have an output for it.

Add output floatingip

For automation, I need to know the public address assigned to the load balancer.
Could an output be added for it?
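A minimal sketch of such an output, assuming the module creates an openstack_networking_floatingip_v2 resource named floatingip_1, as the VPC modules in this repository do:

output "floatingip_address" {
  description = "Public IP address assigned by the module"
  value       = openstack_networking_floatingip_v2.floatingip_1.address
}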

Projects from examples do not pass validation

When trying to run validate on the projects in these folders:
── vpc
│ ├── preemptible_server
│ ├── preemptible_server_with_gpu
(and possibly others) I get the error:
Error: Unsupported argument

│ on ../../../modules/vpc/project_with_user/main.tf line 8, in module "user":
│ 8: user_name = var.project_user_name

│ An argument named "user_name" is not expected here.

No changes had been made at that point.
