
aws Cookbook


Overview

This cookbook provides resources for configuring and managing nodes running in Amazon Web Services as well as several AWS service offerings.

Included resources:

  • CloudFormation Stack Management (cloudformation_stack)
  • CloudWatch (cloudwatch)
  • CloudWatch Instance Monitoring (instance_monitoring)
  • DynamoDB (dynamodb_table)
  • EBS Volumes (ebs_volume)
  • EC2 Instance Role (instance_role)
  • EC2 Instance Termination Protection (instance_term_protection)
  • Elastic IPs (elastic_ip)
  • Elastic Load Balancer (elastic_lb)
  • IAM User, Group, Policy, and Role Management: (iam_user, iam_group, iam_policy, iam_role)
  • Kinesis Stream Management (kinesis_stream)
  • Resource Tags (resource_tag)
  • Route53 DNS Records (route53_record)
  • Route53 DNS Zones (route53_zone)
  • S3 Files (s3_file)
  • S3 Buckets (s3_bucket)
  • Secondary IPs (secondary_ip)
  • Security Groups (security_group)
  • AWS SSM Parameter Store (ssm_parameter_store)
  • Autoscaling (autoscaling)

AWS resources not supported by this cookbook are covered by other cookbooks.

Important - Security Implications

Please review the security implications of using any of these resources. This cookbook provides resources that could easily be misconfigured, abused, or exploited:

  • They have the ability to perform destructive actions (ex. delete *)
  • They manage sensitive resources (ex. IAM/SSM)
  • They require IAM keys which could be compromised

Understand the security implications and architect your implementation accordingly before proceeding.

Some recommendations are below:

  • Do not use the IAM credentials of the node - pass a separate set of credentials to these resources
  • Use IAM to restrict credentials to only the actions required, implementing conditions whenever necessary (follow least-privilege principles); see iam_restrictions_and_conditions
  • Follow AWS best practices for managing credentials and security
  • Review your cookbook implementation, as CloudFormation or alternative tooling may be a better fit for managing AWS infrastructure as code

Maintainers

This cookbook is maintained by the Sous Chefs. The Sous Chefs are a community of Chef cookbook maintainers working together to maintain important cookbooks. If you’d like to know more please visit sous-chefs.org or come chat with us on the Chef Community Slack in #sous-chefs.

Requirements

Platforms

  • Any platform supported by Chef and the AWS-SDK

Chef

  • Chef 15.3+

Cookbooks

  • None

Credentials

In order to manage AWS components, authentication credentials need to be available to the node. There are 3 ways to handle this:

  1. Explicitly set the credentials when using the resources
  2. Use the credentials in the ~/.aws/credentials file
  3. Let the resource pick up credentials from the IAM role assigned to the instance

Resources can also assume an STS role, with support for MFA. Instructions are in the relevant section below.

Using resource parameters

In order to pass the credentials to the resource, credentials must be available to the node. There are a number of ways to handle this, such as node attributes applied to the node or via Chef roles/environments.

We recommend storing these in an encrypted data bag and loading them in the recipe where the resources are used.

Example Data Bag:

% knife data bag show aws main
{
  "id": "main",
  "aws_access_key_id": "YOUR_ACCESS_KEY",
  "aws_secret_access_key": "YOUR_SECRET_ACCESS_KEY",
  "aws_session_token": "YOUR_SESSION_TOKEN"
}

This can be loaded in a recipe with:

aws = data_bag_item('aws', 'main')

And to access the values:

aws['aws_access_key_id']
aws['aws_secret_access_key']
aws['aws_session_token']
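Putting these together, the data bag values can be passed directly to a resource. A minimal sketch, assuming the data bag shown above exists (the bucket and path names here are hypothetical):

```ruby
# Load the credentials from the data bag shown above
aws = data_bag_item('aws', 'main')

# Pass them explicitly to a resource (bucket and path are hypothetical)
aws_s3_file '/tmp/example-file' do
  bucket 'example-bucket'
  remote_path 'path/to/example-file'
  aws_access_key aws['aws_access_key_id']
  aws_secret_access_key aws['aws_secret_access_key']
  aws_session_token aws['aws_session_token']
end
```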

We'll look at specific usage below.

Using local credentials

If credentials are not supplied via parameters, resources will look for the credentials in the ~/.aws/credentials file:

[default]
aws_access_key_id = ACCESS_KEY_ID
aws_secret_access_key = ACCESS_KEY

Note that other profiles are also supported if selected via the ENV['AWS_PROFILE'] environment variable.
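For example, a credentials file with an additional named profile might look like this (the staging profile name is illustrative); setting AWS_PROFILE to staging before the Chef run selects it:

```
[default]
aws_access_key_id = ACCESS_KEY_ID
aws_secret_access_key = ACCESS_KEY

[staging]
aws_access_key_id = STAGING_ACCESS_KEY_ID
aws_secret_access_key = STAGING_ACCESS_KEY
```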

Using IAM instance role

If your instance has an IAM role, then the credentials can be automatically resolved by the cookbook using the Amazon instance metadata API.

You can then omit the authentication properties aws_secret_access_key and aws_access_key when using the resource.

Of course, the instance role must have the required policies. Here is a sample policy for EBS volume management:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:AttachVolume",
        "ec2:CreateVolume",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifyVolumeAttribute",
        "ec2:DescribeVolumeAttribute",
        "ec2:DescribeVolumeStatus",
        "ec2:DescribeVolumes",
        "ec2:DetachVolume",
        "ec2:EnableVolumeIO"
      ],
      "Sid": "Stmt1381536011000",
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    }
  ]
}

For resource tags:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:CreateTags",
        "ec2:DescribeTags"
      ],
      "Sid": "Stmt1381536708000",
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    }
  ]
}

Assuming roles via STS and using MFA

The following example shows how roles can be assumed using MFA. The same approach works for roles that do not require MFA; just omit the MFA arguments (serial_number and token_code).

This assumes you have also stored the cfn_role_arn and mfa_serial attributes, but there are plenty of ways these attributes can be supplied (they could be stored locally in the consuming cookbook, for example).

Note that MFA codes cannot be recycled, hence the importance of creating a single STS session and passing that to resources. If multiple roles need to be assumed using MFA, it is probably prudent that these be broken up into different recipes and chef-client runs.

require 'aws-sdk-core'
require 'securerandom'

session_id = SecureRandom.hex(8)
sts = ::Aws::AssumeRoleCredentials.new(
  client: ::Aws::STS::Client.new(
    credentials: ::Aws::Credentials.new(
      node['aws']['aws_access_key_id'],
      node['aws']['aws_secret_access_key']
    ),
    region: 'us-east-1'
  ),
  role_arn: node['aws']['cfn_role_arn'],
  role_session_name: session_id,
  serial_number: node['aws']['mfa_serial'],
  token_code: node['aws']['mfa_code']
)

aws_cloudformation_stack 'kitchen-test-stack' do
  action :create
  template_source 'kitchen-test-stack.tpl'
  aws_access_key sts.access_key_id
  aws_secret_access_key sts.secret_access_key
  aws_session_token sts.session_token
end

When running the cookbook, ensure that an attribute JSON is passed that supplies the MFA code. Example using chef-zero:

echo '{ "aws": { "mfa_code": "123456" } }' > mfa.json && chef-client -z -o 'recipe[aws_test]' -j mfa.json

Running outside of an AWS instance

region can be specified on each resource if the cookbook is being run outside of an AWS instance. This can prevent failures that occur when resources try to detect the region.

aws_cloudformation_stack 'kitchen-test-stack' do
  action :create
  template_source 'kitchen-test-stack.tpl'
  region 'us-east-1'
end

Resources

aws_cloudformation_stack

Manage CloudFormation stacks.

Actions

  • create: Creates the stack, or updates it if it already exists.
  • delete: Begins the deletion process for the stack.

Properties

  • template_source: Required - the location of the CloudFormation template file. The file should be stored in the files directory in the cookbook.
  • parameters: An array of parameter_key and parameter_value pairs for parameters in the template. Follow the syntax in the example below.
  • disable_rollback: Set this to true if you want stack rollback to be disabled if creation of the stack fails. Default: false
  • stack_policy_body: Optionally define a stack policy to apply to the stack, mainly used in protecting stack resources after they are created. For more information, see Prevent Updates to Stack Resources in the CloudFormation user guide.
  • iam_capability: Set to true to allow the CloudFormation template to create IAM resources. This is the equivalent of setting CAPABILITY_IAM when using the SDK or CLI. Default: false
  • named_iam_capability: Set to true to allow the CloudFormation template to create IAM resources with custom names. This is the equivalent of setting CAPABILITY_NAMED_IAM when using the SDK or CLI. Default: false

Examples

aws_cloudformation_stack 'example-stack' do
  region 'us-east-1'
  template_source 'example-stack.tpl'
  parameters ([
    {
      :parameter_key => 'KeyPair',
      :parameter_value => 'user@host'
    },
    {
      :parameter_key => 'SSHAllowIPAddress',
      :parameter_value => '127.0.0.1/32'
    }
  ])
end

aws_cloudwatch

Use this resource to manage CloudWatch alarms.

Actions

  • create - Create or update CloudWatch alarms.
  • delete - Delete CloudWatch alarms.
  • disable_action - Disable action of the CloudWatch alarms.
  • enable_action - Enable action of the CloudWatch alarms.

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • alarm_name - the alarm name. Defaults to the resource name if not given.
  • alarm_description - the description of the alarm. May be blank.
  • actions_enabled - set to true to enable actions for the OK, ALARM or INSUFFICIENT_DATA states. If true, at least one of ok_actions, alarm_actions or insufficient_data_actions must be specified.
  • ok_actions - array of actions to take when the alarm state is OK. If specified, actions_enabled must be true.
  • alarm_actions - array of actions to take when the alarm state is ALARM. If specified, actions_enabled must be true.
  • insufficient_data_actions - array of actions to take when the alarm state is INSUFFICIENT_DATA. If specified, actions_enabled must be true.
  • metric_name - CloudWatch metric name of the alarm, e.g. CPUUtilization. Required parameter.
  • namespace - namespace of the alarm, e.g. AWS/EC2. Required parameter.
  • statistic - statistic of the alarm. Value must be one of SampleCount, Average, Sum, Minimum or Maximum. Required parameter.
  • extended_statistic - extended statistic of the alarm. Specify a value between p0.0 and p100. Optional parameter.
  • dimensions - dimensions for the metric associated with the alarm. Array of name and value.
  • period - in seconds, over which the specified statistic is applied. Integer type and required parameter.
  • unit - unit of measure for the statistic. Required parameter.
  • evaluation_periods - number of periods over which data is compared to the specified threshold. Required parameter.
  • threshold - value against which the specified statistic is compared. Can be float or integer type. Required parameter.
  • comparison_operator - arithmetic operation to use when comparing the specified statistic and threshold. The specified statistic value is used as the first operand.

For more information about parameters, see CloudWatch Identifiers in the Using CloudWatch guide.

Examples

aws_cloudwatch "kitchen_test_alarm" do
  period 21600
  evaluation_periods 2
  threshold 50.0
  comparison_operator "LessThanThreshold"
  metric_name "CPUUtilization"
  namespace "AWS/EC2"
  statistic "Maximum"
  dimensions [{ name: "InstanceId", value: "i-xxxxxxx" }]
  action :create
end

aws_dynamodb_table

Use this resource to create and delete DynamoDB tables. This includes the ability to add global secondary indexes to existing tables.

Actions

  • create: Creates the table if it does not exist. If the table exists, updates the following:
    • global_secondary_indexes: Removes non-existent indexes, adds new ones, and updates throughput for existing ones. All attributes need to be present in attribute_definitions. No effect if the property is omitted.
    • stream_specification: Updates as specified. No effect if the property is omitted.
    • provisioned_throughput: Updates as specified.
  • delete: Deletes the table.

Properties

  • attribute_definitions: Required. Attributes to create for the table. Mainly this is used to specify attributes that are used in keys, as otherwise one can add any attribute they want to a DynamoDB table.
  • key_schema: Required. Used to create the primary key for the table. Attributes need to be present in attribute_definitions.
  • local_secondary_indexes: Used to create any local secondary indexes for the table. Attributes need to be present in attribute_definitions.
  • global_secondary_indexes: Used to create any global secondary indexes. Can be done to an existing table. Attributes need to be present in attribute_definitions.
  • provisioned_throughput: Define the throughput for this table.
  • stream_specification: Specify if there should be a stream for this table.

Several of the attributes shown here take parameters as shown in the AWS Ruby SDK Documentation. Also, the AWS DynamoDB Documentation may be of further help as well.

Examples

aws_dynamodb_table 'example-table' do
  action :create
  attribute_definitions [
    { attribute_name: 'Id', attribute_type: 'N' },
    { attribute_name: 'Foo', attribute_type: 'N' },
    { attribute_name: 'Bar', attribute_type: 'N' },
    { attribute_name: 'Baz', attribute_type: 'S' }
  ]
  key_schema [
    { attribute_name: 'Id', key_type: 'HASH' },
    { attribute_name: 'Foo', key_type: 'RANGE' }
  ]
  local_secondary_indexes [
    {
      index_name: 'BarIndex',
      key_schema: [
        {
          attribute_name: 'Id',
          key_type: 'HASH'
        },
        {
          attribute_name: 'Bar',
          key_type: 'RANGE'
        }
      ],
      projection: {
        projection_type: 'ALL'
      }
    }
  ]
  global_secondary_indexes [
    {
      index_name: 'BazIndex',
      key_schema: [{
        attribute_name: 'Baz',
        key_type: 'HASH'
      }],
      projection: {
        projection_type: 'ALL'
      },
      provisioned_throughput: {
        read_capacity_units: 1,
        write_capacity_units: 1
      }
    }
  ]
  provisioned_throughput ({
    read_capacity_units: 1,
    write_capacity_units: 1
  })
  stream_specification ({
    stream_enabled: true,
    stream_view_type: 'KEYS_ONLY'
  })
end

aws_ebs_volume

This resource only handles manipulating the EBS volume; additional resources need to be created in the recipe to manage the attached volume as a filesystem or logical volume.

Actions

  • create - create a new volume.
  • attach - attach the specified volume.
  • detach - detach the specified volume.
  • delete - delete the specified volume.
  • snapshot - create a snapshot of the volume.
  • prune - prune snapshots.

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • size - size of the volume in gigabytes.
  • snapshot_id - snapshot to build EBS volume from.
  • most_recent_snapshot - use the most recent snapshot when creating a volume from an existing volume (defaults to false)
  • availability_zone - the availability zone to create the volume in; normally detected automatically.
  • device - local block device to attach the volume to, e.g. /dev/sdi. Required; there is no default value.
  • volume_id - specify an ID to attach; cannot be used with action :create because AWS assigns new volume IDs
  • timeout - connection timeout for EC2 API.
  • snapshots_to_keep - used with action :prune for number of snapshots to maintain.
  • description - used to set the description of an EBS snapshot
  • volume_type - "standard", "io1", "io2", "gp2" or "gp3" ("standard" is magnetic, "io1" and "io2" are provisioned SSD, "gp2" and "gp3" are general purpose SSD)
  • piops - number of Provisioned IOPS to provision, must be >= 100, or between 3000 and 16000 for the "gp3" volume type
  • throughput - amount of throughput in MB/s for "gp3" volume types, must be between 125 and 1000 if specified
  • existing_raid - whether or not to assume the raid was previously assembled on existing volumes (default no)
  • encrypted - specify if the EBS should be encrypted
  • kms_key_id - the full ARN of the AWS Key Management Service (AWS KMS) master key to use when creating the encrypted volume (defaults to master key if not specified)
  • delete_on_termination - Boolean value to control whether or not the volume should be deleted when the instance it's attached to is terminated (defaults to nil). Only applies to :attach action.
  • tags - Hash value to tag the new volumes or snapshots. Only applies to :create and :snapshot actions.

Examples

Create a 50G volume, attach it to the instance as /dev/sdi:

aws_ebs_volume 'db_ebs_volume' do
  size 50
  device '/dev/sdi'
  action [:create, :attach]
end

Create a new 50G volume from the snapshot ID provided and attach it as /dev/sdi.

aws_ebs_volume 'db_ebs_volume_from_snapshot' do
  size 50
  device '/dev/sdi'
  snapshot_id 'snap-ABCDEFGH'
  action [:create, :attach]
end

aws_elastic_ip

The elastic_ip resource does not support allocating new IPs; this must be done before running a recipe that uses the resource. After allocating a new Elastic IP, we recommend storing it in a data bag and loading the item in the recipe.

Actions

  • associate - Associate an allocated IP to the node
  • disassociate - Disassociate an allocated IP from the node

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • ip: String. The IP address to associate or disassociate.
  • timeout: Integer. Default: 180. Time in seconds to wait. 0 for unlimited.

Examples

aws_elastic_ip '34.15.30.10' do
  action :associate
end

aws_elastic_ip 'Server public IP' do
  ip '34.15.30.11'
  action :associate
end

aws_elastic_lb

elastic_lb handles registering and removing nodes from ELBs. The resource also adds basic support for creating and deleting ELBs. Note that this resource is not currently fully idempotent, so it will not update the existing configuration of an ELB.

Actions

  • register - Add a node to the ELB
  • deregister - Remove a node from the ELB
  • create - Create a new ELB
  • delete - Delete an existing ELB

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • name - the name of the ELB, required.
  • region - the region of the ELB. Defaults to the region of the node.
  • listeners - Array of hashes. The ports/protocols the ELB will listen on. See the example below for a sample.
  • security_groups - Array. Security groups to apply to the ELB. Only needed when creating ELBs.
  • subnets - Array. The subnets the ELB will listen in. Only needed when creating ELBs in a VPC.
  • availability_zones - Array. The availability zones the ELB will listen in. Only needed when creating ELBs with classic networking.
  • tags: Array.
  • scheme: Array.

Examples

ELB running in classic networking listening on port 80.

aws_elastic_lb 'Setup the ELB' do
  name 'example-elb'
  action :create
  availability_zones ['us-west-2a']
  listeners [
    {
      instance_port: 80,
      instance_protocol: 'HTTP',
      load_balancer_port: 80,
      protocol: 'HTTP',
    },
  ]
end

To register the node in the 'QA' ELB:

aws_elastic_lb 'elb_qa' do
  name 'QA'
  action :register
end

aws_iam_user

Use this resource to manage IAM users.

Actions

  • create: Creates the user. No effect if the user already exists.
  • delete: Gracefully deletes the user (detaches from all attached entities, and deletes the user).

Properties

The IAM user takes the name of the resource. A path can be specified as well. For more information about paths, see IAM Identifiers in the Using IAM guide.

Examples

aws_iam_user 'example-user' do
  action :create
  path '/'
end

aws_iam_group

Use this resource to manage IAM groups. The group takes the name of the resource.

Actions

  • create: Creates the group, and updates members and attached policies if the group already exists.
  • delete: Gracefully deletes the group (detaches from all attached entities, and deletes the group).

Properties

  • path: A path can be supplied for the group. For information on paths, see IAM Identifiers in the Using IAM guide.
  • members: An array of IAM users that are a member of this group.
  • remove_members: Set to false to ensure that members are not removed from the group when they are not present in the defined resource. Default: true
  • policy_members: An array of ARNs of IAM managed policies to attach to this resource. Accepts both user-defined and AWS-defined policy ARNs.
  • remove_policy_members: Set to false to ensure that policies are not detached from the group when they are not present in the defined resource. Default: true

Examples

aws_iam_group 'example-group' do
  action :create
  path '/'
  members [
    'example-user'
  ]
  remove_members true
  policy_members [
    'arn:aws:iam::123456789012:policy/example-policy'
  ]
  remove_policy_members true
end

aws_iam_policy

Use this resource to create an IAM policy. The policy takes the name of the resource.

Actions

  • create: Creates or updates the policy.
  • delete: Gracefully deletes the policy (detaches from all attached entities, deletes all non-default policy versions, then deletes the policy).

Properties

  • path: A path can be supplied for the policy. For information on paths, see IAM Identifiers in the Using IAM guide.
  • policy_document: The JSON document for the policy.
  • account_id: The AWS account ID that the policy is going in. Required if using non-user credentials (ie: IAM role through STS or instance role).

Examples

aws_iam_policy 'example-policy' do
  action :create
  path '/'
  account_id '123456789012'
  policy_document <<-EOH.gsub(/^ {4}/, '')
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Stmt1234567890",
                "Effect": "Allow",
                "Action": [
                    "sts:AssumeRole"
                ],
                "Resource": [
                    "arn:aws:iam::123456789012:role/example-role"
                ]
            }
        ]
    }
  EOH
end
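If you prefer to build the policy document in Ruby rather than a heredoc, it can be generated with the standard JSON library. A sketch, mirroring the statement in the example above:

```ruby
require 'json'

# Build the same policy as a Ruby hash and serialize it to JSON
policy = {
  'Version' => '2012-10-17',
  'Statement' => [
    {
      'Sid' => 'Stmt1234567890',
      'Effect' => 'Allow',
      'Action' => ['sts:AssumeRole'],
      'Resource' => ['arn:aws:iam::123456789012:role/example-role']
    }
  ]
}

# Pass this string to the policy_document property
policy_json = JSON.pretty_generate(policy)
```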

aws_iam_role

Use this resource to create an IAM role. The role takes the name of the resource.

Actions

  • create: Creates the role if it does not exist. If the role exists, updates attached policies and the assume_role_policy_document.
  • delete: Gracefully deletes the role (detaches from all attached entities, and deletes the role).

Properties

  • path: A path can be supplied for the role. For information on paths, see IAM Identifiers in the Using IAM guide.
  • policy_members: An array of ARNs of IAM managed policies to attach to this resource. Accepts both user-defined and AWS-defined policy ARNs.
  • remove_policy_members: Set to false to ensure that policies are not detached from the role when they are not present in the defined resource. Default: true
  • assume_role_policy_document: The JSON policy document to apply to this role for trust relationships. Dictates what entities can assume this role.

Examples

aws_iam_role 'example-role' do
  action :create
  path '/'
  policy_members [
    'arn:aws:iam::123456789012:policy/example-policy'
  ]
  remove_policy_members true
  assume_role_policy_document <<-EOH.gsub(/^ {4}/, '')
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Deny",
          "Principal": {
            "AWS": "*"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  EOH
end

aws_instance_monitoring

Allows detailed CloudWatch monitoring to be enabled for the current instance.

Actions

  • enable - Enable detailed CloudWatch monitoring for this instance (Default).
  • disable - Disable detailed CloudWatch monitoring for this instance.

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • region - The AWS region containing the instance. Default: The current region of the node when running in AWS or us-east-1 if the node is not in AWS.

Examples

aws_instance_monitoring "enable detailed monitoring"

aws_instance_role

Used to associate an IAM role (by way of an IAM instance profile) with an instance. Replaces the instance's current role association if one already exists.

Actions

  • associate - Associate role with the instance (Default).

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • region - The AWS region containing the instance. Default: The current region of the node when running in AWS or us-east-1 if the node is not in AWS.
  • instance_id - The id of the instance to modify. Default: The current instance.
  • profile_arn - The IAM instance profile to associate with the instance

Requirements

IAM permissions:

  • ec2:DescribeIamInstanceProfileAssociations
  • ec2:AssociateIamInstanceProfile
    • Only needed if the instance is not already associated with an IAM role
  • ec2:ReplaceIamInstanceProfileAssociation
    • Only needed if the instance is already associated with an IAM role
  • iam:PassRole
    • This can be restricted to the resource of the IAM role being associated

Examples

aws_instance_role "change to example role" do
  profile_arn 'arn:aws:iam::123456789012:instance-profile/ExampleInstanceProfile'
end

aws_instance_term_protection

Allows termination protection (AKA DisableApiTermination) to be enabled for an instance.

Actions

  • enable - Enable termination protection for this instance (Default).
  • disable - Disable termination protection for this instance.

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • region - The AWS region containing the instance. Default: The current region of the node when running in AWS or us-east-1 if the node is not in AWS.
  • instance_id - The id of the instance to modify. Default: The current instance.

Examples

aws_instance_term_protection "enable termination protection"

aws_kinesis_stream

Use this resource to create and delete Kinesis streams. Note that this resource cannot be used to modify the shard count as shard splitting is a somewhat complex operation (for example, even CloudFormation replaces streams upon update).

Actions

  • create: Creates the stream. No effect if the stream already exists.
  • delete: Deletes the stream.

Properties

  • starting_shard_count: The number of shards the stream starts with

Examples

aws_kinesis_stream 'example-stream' do
 action :create
 starting_shard_count 1
end

aws_resource_tag

resource_tag can be used to manipulate the tags assigned to one or more AWS resources, e.g. EC2 instances, EBS volumes, or EBS volume snapshots.

Actions

  • add - Add tags to a resource.
  • update - Add or modify existing tags on a resource -- this is the default action.
  • remove - Remove tags from a resource, but only if the specified values match the existing ones.
  • force_remove - Remove tags from a resource, regardless of their values.

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • tags - a hash of key value pairs to be used as resource tags, (e.g. { "Name" => "foo", "Environment" => node.chef_environment },) required.
  • resource_id - resources whose tags will be modified. The value may be a single ID as a string or multiple IDs in an array. If no resource_id is specified the name attribute will be used.

Examples

Assigning tags to a node to reflect its role and environment:

aws_resource_tag node['ec2']['instance_id'] do
  tags('Name' => 'www.example.com app server',
       'Environment' => node.chef_environment)
  action :update
end

Assigning a set of tags to multiple resources, e.g. ebs volumes in a disk set:

aws_resource_tag 'my awesome raid set' do
  resource_id ['vol-d0518cb2', 'vol-fad31a9a', 'vol-fb106a9f', 'vol-74ed3b14']
  tags('Name' => 'My awesome RAID disk set',
       'Environment' => node.chef_environment)
end
aws_resource_tag 'db_ebs_volume' do
  resource_id lazy { node['aws']['ebs_volume']['db_ebs_volume']['volume_id'] }
  tags ({ 'Service' => 'Frontend' })
end

aws_route53_record

Actions

  • create - Create a Route53 record
  • delete - Remove a Route53 record

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • name Required. String. - name of the domain or subdomain.
  • record_name Optional. String. - name of the domain or subdomain; overrides name. Useful when the resource is called multiple times with the same name but different values, as in a split-view DNS structure.
  • value String Array - value appropriate to the record type; for type 'A' the value would be an IPv4 address, for example.
  • type Required. String - DNS record type.
  • ttl Integer default: 3600 - time to live; the amount of time in seconds to cache information about the record.
  • weight Optional. String. - a value that determines the proportion of DNS queries that will use this record for the response. Valid values are 0-255. NOT CURRENTLY IMPLEMENTED
  • set_identifier Optional. String. - a value that uniquely identifies this record in the group of weighted record sets.
  • geo_location String.
  • geo_location_country String
  • geo_location_continent String
  • geo_location_subdivision String
  • zone_id String
  • region String
  • overwrite [true, false] default: true
  • alias_target Optional. Hash. - Associated with Amazon 'alias' type records. The hash contents varies depending on the type of target the alias points to.
  • mock [true, false] default: false
  • fail_on_error [true, false] default: false

Examples

Create a simple record

aws_route53_record "create a record" do
  name  "test"
  value "16.8.4.2"
  type  "A"
  weight "1"
  set_identifier "my-instance-id"
  zone_id "ID VALUE"
  overwrite true
  fail_on_error false
  action :create
end

Delete an existing record. Note that value is still necessary even though we're deleting. This is a limitation in the AWS SDK.

aws_route53_record "delete a record" do
  name  "test"
  value "16.8.4.2"
  type  "A"
  action :delete
end

aws_route53_zone

Actions

  • create - Create a Route53 zone
  • delete - Remove a Route53 zone

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • name Required. String. - name of the zone.
  • description String. - Description shown in the Route53 UI
  • private [true, false]. default: false - Should this be a private zone for use in your VPCs or a Public zone
  • vpc_id String. If creating a Private zone this is the VPC to associate the zone with.

Examples

aws_route53_zone 'testkitchen.dmz' do
  description 'My super important zone'
  action :create
end
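A private zone can be created by associating it with a VPC; a sketch using the private and vpc_id properties (the VPC ID here is hypothetical):

```ruby
aws_route53_zone 'internal.testkitchen.dmz' do
  description 'Private zone for internal records'
  private true
  vpc_id 'vpc-0abc1234' # hypothetical VPC ID
  action :create
end
```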

aws_secondary_ip

This feature is available only to instances within VPCs. It allows you to assign multiple private IP addresses to a network interface.

Actions

  • assign - Assign a private IP to the instance.
  • unassign - Unassign a private IP from the instance.

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • ip - the private IP address. - required.
  • interface - the network interface to assign the IP to. If none is given, uses the default interface.
  • timeout - connection timeout for EC2 API.
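Examples

A minimal sketch (the IP address and interface name are illustrative):

```ruby
aws_secondary_ip 'assign secondary ip' do
  ip '10.0.0.25'   # illustrative private IP within the subnet
  interface 'eth0' # omit to use the default interface
  action :assign
end
```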

aws_s3_file

s3_file can be used to download a file from S3 that requires AWS authorization. This is a wrapper around the core Chef remote_file resource and supports the same resource attributes as remote_file. See the remote_file documentation (https://docs.chef.io/resource_remote_file.html) for a complete list of available attributes.

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • region - The AWS region containing the file. Default: The current region of the node when running in AWS or us-east-1 if the node is not in AWS.
  • virtual_host - When set to true, the bucket name is used as a virtual host rather than as a path component of the URL. Default: false

Actions

  • create: Downloads a file from s3
  • create_if_missing: Downloads a file from S3 only if it doesn't exist locally
  • delete: Deletes a local file
  • touch: Touches a local file

Examples

aws_s3_file '/tmp/foo' do
  bucket 'i_haz_an_s3_buckit'
  remote_path 'path/in/s3/bukket/to/foo'
  region 'us-west-1'
end
aws_s3_file '/tmp/bar' do
  bucket 'i_haz_another_s3_buckit'
  remote_path 'path/in/s3/buckit/to/foo'
  region 'us-east-1'
  requester_pays true
end
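If the bucket is addressed as a virtual host (see the virtual_host property above), the resource can be written as below; the bucket and path names are placeholders:

```ruby
aws_s3_file '/tmp/baz' do
  bucket 'my-dotless-bucket-name' # virtual-host style works best with DNS-compatible names
  remote_path 'path/to/baz'
  region 'us-east-1'
  virtual_host true
end
```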

aws_s3_bucket

s3_bucket can be used to create or delete S3 buckets. Note that buckets can only be deleted if they are empty unless you specify delete_all_objects true, which will delete EVERYTHING in your bucket first.

Actions

  • create: Creates the bucket
  • delete: Deletes the bucket

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • region - The AWS region containing the bucket. Default: The current region of the node when running in AWS or us-east-1 if the node is not in AWS.
  • versioning - Enable or disable S3 bucket versioning. Default: false
  • delete_all_objects - Used with the :delete action to delete all objects before deleting a bucket. Use with EXTREME CAUTION. default: false (for a reason)

Examples

aws_s3_bucket 'some-unique-name' do
  aws_access_key aws['aws_access_key_id']
  aws_secret_access_key aws['aws_secret_access_key']
  versioning true
  region 'us-west-1'
  action :create
end
aws_s3_bucket 'another-unique-name' do
  aws_access_key aws['aws_access_key_id']
  aws_secret_access_key aws['aws_secret_access_key']
  region 'us-west-1'
  action :delete
end

aws_secondary_ip

The secondary_ip resource allows one to assign or unassign multiple private secondary IPs on an instance within a VPC. The number of secondary IP addresses that you can assign to an instance varies by instance type. If no IP address is provided on assign, a random one from within the subnet will be assigned. If no interface is provided, the default interface as determined by Ohai will be used.

Examples

aws_secondary_ip 'assign_additional_ip' do
  aws_access_key aws['aws_access_key_id']
  aws_secret_access_key aws['aws_secret_access_key']
  ip ip_info['private_ip']
  interface 'eth0'
  action :assign
end
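To release the address again, the same resource can be run with the :unassign action. This sketch mirrors the assign example above; as there, the ip_info lookup is assumed to come from your own node data:

```ruby
aws_secondary_ip 'unassign_additional_ip' do
  aws_access_key aws['aws_access_key_id']
  aws_secret_access_key aws['aws_secret_access_key']
  ip ip_info['private_ip']
  interface 'eth0'
  action :unassign
end
```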

aws_security_group

security_group can be used to create or update security groups and associated rules.

Actions

  • create: Creates the security group

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • region - The AWS region containing the group. Default: The current region of the node when running in AWS or us-east-1 if the node is not in AWS.
  • name - The name of the security group to manage
  • description - The security group description
  • vpc_id - The vpc_id where the security group should be created

Tags

  • tags - Security Group tags. Default: []

Ingress/Egress rules

Note - this manages ALL rules on the security group. Any existing rules not included in these definitions will be removed.

  • ip_permissions - Ingress rules. Default: []
  • ip_permissions_egress - Egress rules. Default: []

Examples

aws_security_group 'some-unique-name' do
  aws_access_key aws['aws_access_key_id']
  aws_secret_access_key aws['aws_secret_access_key']
  description 'some-unique-description'
  vpc_id 'vpc-000000000'
  ip_permissions []
  ip_permissions_egress []
  tags []
  action :create
end

Manages tags

aws_security_group 'some-unique-name' do
  aws_access_key aws['aws_access_key_id']
  aws_secret_access_key aws['aws_secret_access_key']
  description 'some-unique-description'
  vpc_id 'vpc-000000000'
  ip_permissions []
  ip_permissions_egress []
  tags [{ key: 'tag_key', value: 'tag_value' }]
  action :create
end

Manages ingress/egress rules

aws_security_group 'some-unique-name' do
  aws_access_key aws['aws_access_key_id']
  aws_secret_access_key aws['aws_secret_access_key']
  description 'some-unique-description'
  vpc_id 'vpc-000000000'
  ip_permissions [{
                   from_port: 22,
                   ip_protocol: 'tcp',
                   ip_ranges: [
                     {
                       cidr_ip: '10.10.10.10/24',
                       description: 'SSH access from the office',
                     },
                   ],
                   to_port: 22,
                  }]
  ip_permissions_egress [{
                   from_port: 123,
                   ip_protocol: 'udp',
                   ip_ranges: [
                     {
                       cidr_ip: '10.10.10.10/24',
                       description: 'ntp from the office',
                     },
                   ],
                   to_port: 123,
                        }]
  action :create
end

Alternatively, you can use the AWS SDK class definitions for a more strongly typed object:

aws_security_group 'some-unique-name' do
  aws_access_key aws['aws_access_key_id']
  aws_secret_access_key aws['aws_secret_access_key']
  description 'some-unique-description'
  vpc_id 'vpc-000000000'
  ip_permissions [Aws::EC2::Types::IpPermission.new.to_h]
  ip_permissions_egress [Aws::EC2::Types::IpPermission.new.to_h]
  action :create
end
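Because this resource manages the complete rule set, it can help to build the permission hashes in plain Ruby before handing them to ip_permissions. The sketch below makes no AWS calls and the helper name is our own invention; the hash layout follows the shape used in the examples above (the same shape as Aws::EC2::Types::IpPermission#to_h):

```ruby
# Build ingress rules (one rule per port) allowing a list of CIDR ranges.
def ingress_rules(ports, cidrs, protocol: 'tcp')
  ports.map do |port|
    {
      from_port: port,
      to_port: port,
      ip_protocol: protocol,
      ip_ranges: cidrs.map { |c| { cidr_ip: c, description: "allow #{protocol}/#{port}" } },
    }
  end
end

rules = ingress_rules([22, 443], ['10.0.0.0/24', '10.0.1.0/24'])
# `rules` can then be passed to the ip_permissions property of aws_security_group.
```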

aws_ssm_parameter_store

The ssm_parameter_store resource allows one to get, create, and delete keys and values in the AWS Systems Manager Parameter Store. Values can be stored as plain text or as an encrypted string. In order to use the parameter store resource, your EC2 instance role must have the proper policy. This sample policy allows getting, creating, and deleting parameters; you can adjust it to your needs. It is recommended that you have one role with the ability to create secrets and another that can only read the secrets. It is important to set sensitive true in the resources where the secrets are used so that secrets are not exposed in log files.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ssm:PutParameter",
                "ssm:DeleteParameter",
                "ssm:RemoveTagsFromResource",
                "ssm:GetParameterHistory",
                "ssm:AddTagsToResource",
                "ssm:GetParametersByPath",
                "ssm:GetParameters",
                "ssm:GetParameter",
                "ssm:DeleteParameters"
            ],
            "Resource": [
                "arn:aws:ssm:*:*:document/*",
                "arn:aws:ssm:*:*:parameter/*"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "ssm:DescribeParameters",
            "Resource": "*"
        }
    ]
}
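As a sanity check, a policy document like the one above can be inspected programmatically before it is attached to a role. This plain-Ruby sketch (standard library only; the helper name is our own) extracts the allowed actions from a policy JSON string:

```ruby
require 'json'

# Returns the flat list of actions granted by Allow statements
# in an IAM policy document passed as a JSON string.
def allowed_actions(policy_json)
  policy = JSON.parse(policy_json)
  policy['Statement']
    .select { |s| s['Effect'] == 'Allow' }
    .flat_map { |s| Array(s['Action']) }
end

# Usage sketch: verify a read-only consumer's required actions are granted.
# required = %w(ssm:GetParameter ssm:GetParametersByPath)
# missing  = required - allowed_actions(File.read('ssm_policy.json')) # example path
# raise "policy is missing: #{missing.join(', ')}" unless missing.empty?
```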

Actions

  • get - Retrieve a key/value from the AWS Systems Manager Parameter Store.
  • get_parameters - Retrieve multiple key/values by name from the AWS Systems Manager Parameter Store. Values are stored in a hash indexed by the corresponding path value.
  • get_parameters_by_path - Retrieve multiple key/values by path from the AWS Systems Manager Parameter Store. Values are stored in a hash indexed by the key's name. If recursive is set to true, it will retrieve all parameters in the path hierarchy, constructing a representative hash structure with nested keys/values.
  • create - Create a key/value in the AWS Systems Manager Parameter Store.
  • delete - Remove the key/value from the AWS Systems Manager Parameter Store.

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • path - Specify the target parameter (String) or parameters (Array - :get_parameters). (required)
  • recursive - If set to true, retrieve all parameters in the hierarchy (get_parameters_by_path, optional, defaults to false)
  • description - A description to help identify parameters and their intended use. (create, optional)
  • value - Item stored in AWS Systems Manager Parameter Store (create, required)
  • type - Describes the value that is stored. Can be a String, StringList or SecureString (create, required)
  • key_id - The value after key/ in the ARN of the KMS key used with a SecureString. If SecureString is chosen and no key_id is specified, AWS Systems Manager Parameter Store uses the default AWS KMS key assigned to your AWS account (create, optional)
  • overwrite - Indicates if create should overwrite an existing parameter with a new value. AWS Systems Manager Parameter Store versions new values (create, optional, defaults to true)
  • with_decryption - Indicates if AWS Systems Manager Parameter Store should decrypt the value. Note that it must have access to the encryption key for this to succeed (get, optional, defaults to false)
  • allowed_pattern - A regular expression used to validate the parameter value (create, optional)
  • return_key - The key name to set the returned value into. This can then be used by calling node.run_state['returnkeyname'] in other resources (get, optional)

Examples

Create String Parameter
aws_ssm_parameter_store 'create testkitchen record' do
  path 'testkitchen'
  description 'testkitchen'
  value 'testkitchen'
  type 'String'
  action :create
  aws_access_key node['aws_test']['key_id']
  aws_secret_access_key node['aws_test']['access_key']
end
Create Encrypted String Parameter with Custom KMS Key
aws_ssm_parameter_store "create encrypted test kitchen record" do
  path '/testkitchen/EncryptedStringCustomKey'
  description 'Test Kitchen Encrypted Parameter - Custom'
  value 'Encrypted Test Kitchen Custom'
  type 'SecureString'
  key_id '5d888999-5fca-3c71-9929-014a529236e1'
  action :create
  aws_access_key node['aws_test']['key_id']
  aws_secret_access_key node['aws_test']['access_key']
end
Delete Parameter
aws_ssm_parameter_store 'delete testkitchen record' do
  path 'testkitchen'
  aws_access_key node['aws_test']['key_id']
  aws_secret_access_key node['aws_test']['access_key']
  action :delete
end
Get Parameters and Populate Template
aws_ssm_parameter_store 'get clear_value' do
  path '/testkitchen/ClearTextString'
  return_key 'clear_value'
  action :get
  aws_access_key node['aws_test']['key_id']
  aws_secret_access_key node['aws_test']['access_key']
end

aws_ssm_parameter_store 'get decrypted_value' do
  path '/testkitchen/EncryptedStringDefaultKey'
  return_key 'decrypted_value'
  with_decryption true
  action :get
  aws_access_key node['aws_test']['key_id']
  aws_secret_access_key node['aws_test']['access_key']
end

aws_ssm_parameter_store 'get decrypted_custom_value' do
  path '/testkitchen/EncryptedStringCustomKey'
  return_key 'decrypted_custom_value'
  with_decryption true
  action :get
  aws_access_key node['aws_test']['key_id']
  aws_secret_access_key node['aws_test']['access_key']
end

aws_ssm_parameter_store 'getParameters' do
  path ['/testkitchen/ClearTextString', '/testkitchen']
  return_key 'parameter_values'
  action :get_parameters
  aws_access_key node['aws_test']['key_id']
  aws_secret_access_key node['aws_test']['access_key']
end

aws_ssm_parameter_store 'getParametersbypath' do
  path '/pathtest/'
  recursive true
  with_decryption true
  return_key 'path_values'
  action :get_parameters_by_path
  aws_access_key node['aws_test']['key_id']
  aws_secret_access_key node['aws_test']['access_key']
end
Get bucket name and retrieve file
aws_ssm_parameter_store 'get bucketname' do
  path 'bucketname'
  return_key 'bucketname'
  action :get
  aws_access_key node['aws_test']['key_id']
  aws_secret_access_key node['aws_test']['access_key']
end

aws_s3_file "/tmp/test.txt" do
  bucket lazy {node.run_state['bucketname']}
  remote_path "test.txt"
  sensitive true
  aws_access_key_id node[:custom_access_key]
  aws_secret_access_key node[:custom_secret_key]
end

aws_autoscaling

autoscaling can be used to attach and detach EC2 instances to/from an AutoScaling Group (ASG). Once the instance is attached, autoscaling allows one to move the instance into and out of standby mode. Standby mode temporarily takes the instance out of rotation so that maintenance can be performed.

Properties

  • aws_secret_access_key, aws_access_key and optionally aws_session_token - required, unless using IAM roles for authentication.
  • asg_name - The instance will be attached to this AutoScaling Group. The name is case sensitive. (attach_instance, required)
  • should_decrement_desired_capacity - Indicates whether the Auto Scaling group decrements the desired capacity value by the number of instances moved to standby or detached. (enter_standby and detach_instance, optional, defaults to true)

Actions

  • attach_instance: Attach an instance to an ASG. If the instance is already attached, it will generate an error.
  • detach_instance: Detach an instance from an ASG. If the instance is not attached and in service, it will generate an error.
  • enter_standby: Put the instance into standby mode. Will generate an error if the instance is already in standby mode.
  • exit_standby: Remove the instance from standby mode. Will generate an error if the instance is not in standby mode.

Examples

aws_autoscaling 'attach_instance' do
  action :attach_instance
  asg_name 'Test'
end
aws_autoscaling 'enter_standby' do
  should_decrement_desired_capacity true
  action :enter_standby
end
aws_autoscaling 'exit_standby' do
  action :exit_standby
end
aws_autoscaling 'detach_instance' do
  should_decrement_desired_capacity true
  action :detach_instance
end

Contributors

This project exists thanks to all the people who contribute.

Backers

Thank you to all our backers!

https://opencollective.com/sous-chefs#backers

Sponsors

Support this project by becoming a sponsor. Your logo will show up here with a link to your website.

https://opencollective.com/sous-chefs/sponsor/0/website https://opencollective.com/sous-chefs/sponsor/1/website https://opencollective.com/sous-chefs/sponsor/2/website https://opencollective.com/sous-chefs/sponsor/3/website https://opencollective.com/sous-chefs/sponsor/4/website https://opencollective.com/sous-chefs/sponsor/5/website https://opencollective.com/sous-chefs/sponsor/6/website https://opencollective.com/sous-chefs/sponsor/7/website https://opencollective.com/sous-chefs/sponsor/8/website https://opencollective.com/sous-chefs/sponsor/9/website


aws's Issues

Needs integration tests

Hi. I've added an integration testing skeleton to this cookbook with this commit:

ad2e0e3?diff=unified

Can somebody please add some recipes in that fixture cookbook that exercise the resource in this cookbook?

aws_s3_file Provider Cannot Handle Redirects from S3

The following is output from a system trying to converge an aws_s3_file provider resource from a recipe like this. The chef.sean.horn.test.bucket bucket was originally created in the us-west-2 region, while the instance trying to converge the recipe lies in the eu-west-1 region. Currently the aws_s3_file will not follow redirects, although it should, to meet the spec at http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro

Also, bucket names can have dots, so we should prefer path-based, rather than virtual-host-based, URLs to avoid SSL issues due to unmatched certs.

testjunk/recipes/default.rb

include_recipe "aws"

aws = data_bag_item("aws", "main")

aws_s3_file "/tmp/foo" do
  bucket "chef.sean.horn.test.bucket"
  remote_path "yepit.json"
  aws_access_key_id aws['aws_access_key_id']
  aws_secret_access_key aws['aws_secret_access_key']
end

[2015-03-24T20:05:25+00:00] DEBUG: Instance's availability zone is eu-west-1c

* remote_file[/tmp/foo] action create[2015-03-24T20:05:25+00:00] INFO: Processing remote_file[/tmp/foo] action create (/var/chef/cache/cookbooks/aws/providers/s3_file.rb line 36)
[2015-03-24T20:05:25+00:00] DEBUG: remote_file[/tmp/foo] checking for changes
[2015-03-24T20:05:25+00:00] DEBUG: Cache control headers: {}
[2015-03-24T20:05:25+00:00] DEBUG: Chef::HTTP calling Chef::HTTP::Decompressor#handle_request
[2015-03-24T20:05:25+00:00] DEBUG: Chef::HTTP calling Chef::HTTP::CookieManager#handle_request
[2015-03-24T20:05:25+00:00] DEBUG: Chef::HTTP calling Chef::HTTP::ValidateContentLength#handle_request
[2015-03-24T20:05:25+00:00] DEBUG: Initiating GET to https://s3-eu-west-1.amazonaws.com/chef.sean.horn.test.bucket/yepit.json?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJUYAZ6NNDKZCUEJA%2F20150324%2Feu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20150324T200525Z&X-Amz-Expires=300&X-Amz-SignedHeaders=host&X-Amz-Signature=532a8e121b9d05d3229f90d593053d944c2253699be877b9547366024c49efef
[2015-03-24T20:05:25+00:00] DEBUG: ---- HTTP Request Header Data: ----
[2015-03-24T20:05:25+00:00] DEBUG: Accept-Encoding: gzip;q=1.0,deflate;q=0.6,identity;q=0.3
[2015-03-24T20:05:25+00:00] DEBUG: ---- End HTTP Request Header Data ----
[2015-03-24T20:05:25+00:00] DEBUG: ---- HTTP Status and Header Data: ----
[2015-03-24T20:05:25+00:00] DEBUG: HTTP 1.1 301 Moved Permanently
[2015-03-24T20:05:25+00:00] DEBUG: x-amz-request-id: D66DC67D58AB6B7F
[2015-03-24T20:05:25+00:00] DEBUG: x-amz-id-2: zmy89OOz1JD+2sWP9egkhuXaoRscw51zkUtkXmlB0YMuBYjJUagNBlPHPo5i5lWeWU8EwBohNCQ=
[2015-03-24T20:05:25+00:00] DEBUG: content-type: application/xml
[2015-03-24T20:05:25+00:00] DEBUG: transfer-encoding: chunked
[2015-03-24T20:05:25+00:00] DEBUG: date: Tue, 24 Mar 2015 20:05:25 GMT
[2015-03-24T20:05:25+00:00] DEBUG: server: AmazonS3
[2015-03-24T20:05:25+00:00] DEBUG: connection: close
[2015-03-24T20:05:25+00:00] DEBUG: ---- End HTTP Status/Header Data ----
[2015-03-24T20:05:25+00:00] DEBUG: ---- HTTP Response Body ----
[2015-03-24T20:05:25+00:00] DEBUG: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>PermanentRedirect</Code><Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message><Bucket>chef.sean.horn.test.bucket</Bucket><Endpoint>chef.sean.horn.test.bucket.s3-us-west-2.amazonaws.com</Endpoint><RequestId>D66DC67D58AB6B7F</RequestId><HostId>zmy89OOz1JD+2sWP9egkhuXaoRscw51zkUtkXmlB0YMuBYjJUagNBlPHPo5i5lWeWU8EwBohNCQ=</HostId></Error>

compile_time attribute in chef_gem resource breaks on chef < 12.1

the changes in 2.6.1 from https://github.com/opscode-cookbooks/aws/commit/2fc04a4f5e45494da6e27a5246789d8b17797cf2, intended to suppress the warnings from the newer compile_time resource on chef_gem, break the cookbook if you are not using chef 12.1 (which isn't out yet anyway).

This fixes #106

pinging @someara who has made a few of these changes recently ;)

Question about volume_id attribute

Hi,

I have a recipe that uses the ebs_volume LWRP, but the volume_id attribute exists on some of my nodes and not on others, and on those I get the following error. All the machines in question have been provisioned the same way, using knife ec2, and are of the same role.

Please be advised that I have changed the attribute name because of the way I'm loading the LWRP. Again, it works on some nodes, but not on others. My question is, how do I get the attribute to populate on every node?

================================================================================
Error executing action update on resource 'aws_resource_tag[my ebs volume]'
================================================================================

Chef::Exceptions::ValidationFailed
----------------------------------
Option resource_id's value my ebs volume does not match regular expression /(i|snap|vol)-[a-zA-Z0-9]+/

Cookbook Trace:
---------------
/var/chef/cache/cookbooks/aws/providers/resource_tag.rb:83:in `load_current_resource'

Resource Declaration:
---------------------
# In /var/chef/cache/cookbooks/vungle/recipes/ebs_volume.rb

 62: aws_resource_tag 'my ebs volume' do
 63:   aws_access_key node[:vungle][:aws][:aws_access_key_id]
 64:   aws_secret_access_key node[:vungle][:aws][:aws_secret_access_key]
 65:   resource_id lazy { node[:aws][:ebs_volume][:data_volume][:volume_id] }
 66:   tags({"Name" => node[:machinename]})
 67: end

Compiled Resource:
------------------
# Declared in /var/chef/cache/cookbooks/vungle/recipes/ebs_volume.rb:62:in `from_file'

aws_resource_tag("my ebs volume") do
  action :update
  retries 0
  retry_delay 2
  guard_interpreter :default
  cookbook_name "vungle"
  recipe_name "ebs_volume"
  aws_access_key "fake_key"
  aws_secret_access_key "secret access"
  resource_id #<Chef::DelayedEvaluator:0x00000003f1be10@/var/chef/cache/cookbooks/vungle/recipes/ebs_volume.rb:65>
  tags {"Name"=>"machine01"}
end

No Member Zone found

Getting this error with the latest:

NameError: no member 'zone' in struct
C:\chef\b45ead9638cca100d4f0d0f39ff0ca8d\cookbooks\aws\providers\ebs_volume.rb:152:in `[]'

Looks like the struct returned by AWS is now :availability_zone, not :zone

Anybody else seeing this? I can create a pull request.

Version 2.6.4 cannot download s3 bucket files on non-ec2 hosted nodes

There might have been a breaking change in the aws-sdk with regards to the aws_s3_file resource. If I run a recipe with 2.6.4 on a node that is not hosted in AWS, it does not converge with the following:

Recipe: go-remote-control::install
       * aws_s3_file[/var/cache/rcserver-1.1.0] action create

           ================================================================================
           Error executing action `create` on resource 'aws_s3_file[/var/cache/rcserver-1.1.0]'
           ================================================================================

           Errno::ETIMEDOUT
           ----------------
           Connection timed out - connect(2) for "169.254.169.254" port 80

           Cookbook Trace:
           ---------------
           /tmp/kitchen/cache/cookbooks/aws/libraries/ec2.rb:83:in `query_instance_availability_zone'
           /tmp/kitchen/cache/cookbooks/aws/libraries/ec2.rb:51:in `instance_availability_zone'
           /tmp/kitchen/cache/cookbooks/aws/libraries/ec2.rb:63:in `create_aws_interface'
           /tmp/kitchen/cache/cookbooks/aws/libraries/s3.rb:9:in `s3'
           /tmp/kitchen/cache/cookbooks/aws/providers/s3_file.rb:33:in `do_s3_file'
           /tmp/kitchen/cache/cookbooks/aws/providers/s3_file.rb:9:in `block in class_from_file'

           Resource Declaration:
           ---------------------
           # In /tmp/kitchen/cache/cookbooks/go-remote-control/recipes/install.rb

            19: aws_s3_file artifact_path do
            20:   bucket "dnsimple-packages"
            21:   remote_path "travis-builds/go-remote-control/#{artifact}"
            22:   aws_access_key_id access_key
            23:   aws_secret_access_key secrey_key
            24:   action (Chef::Config[:solo] ? :create_if_missing : :create)
            25:   not_if { File.exist?(artifact_path) }
            26: end
            27:

           Compiled Resource:
           ------------------
           # Declared in /tmp/kitchen/cache/cookbooks/go-remote-control/recipes/install.rb:19:in `from_file'

As you can see, a small detail in the stack trace above, it appears a lookup is happening somewhere that is trying to use the internal 169.254.169.254 metadata endpoint. If I roll back to the older 2.5.0 version of this cookbook using the same exact provider, it works normally without issue.

EBS volume creation is broken using AWS SDK v2 gem

I believe this has been broken since 2a183fd.

The aws-sdk gem introduced response paging in v2 and this requires the user to extract the underlying collections (via method invocation) from the response pages. The current implementation does not account for that API shift.

AWS response paging explained

I have tested this using aws-sdk v2.0.22 (attribute version) and v2.0.38 (latest at the time of this writing).

Add support for uploading files to S3

Hello,

We stand up dynamic clusters that have config files we want to place on S3, so that they can be read and registered by another service.

I see the aws_s3_file resource for pulling files from S3. But what about pushing files from a node to a bucket?

Regards,
Joe Reid

"uninitialized constant Aws" error in version 2.6.0

Today our deployment start failing and the reason seems to be the recently released 2.6.0 version.
The error in aws opsworks log is this:

================================================================================
Error executing action `update` on resource 'aws_resource_tag[i-540be0b9]'
================================================================================


NameError
---------
uninitialized constant Aws


Cookbook Trace:
---------------
/var/lib/aws/opsworks/cache.stage2/cookbooks/aws/libraries/ec2.rb:43:in `ec2'
/var/lib/aws/opsworks/cache.stage2/cookbooks/aws/providers/resource_tag.rb:89:in `load_current_resource'

I see the recent change in the line mentioned in the error text.

Reverting to version 2.5.0 fixed the issue with our deployment.

Update:

Just for the case here is a custom recipe code:

include_recipe "aws"
tags_data = {
  "MakeSnapshot" => "True"
}
aws_resource_tag node['ec2']['instance_id'] do
    tags(tags_data)
    action :update
end

Rubocop offenses

This is what happens when I run rubocop on a clean clone of master (064f17e) using the included .rubocop.yml configuration file:

$ rubocop
Inspecting 36 files
.C.WC.CWC.CCWC......................

Offenses:

metadata.rb:9:11: C: Put one space between the method name and the first argument.
source_url       "https://github.com/opscode-cookbooks/aws"
          ^^^^^^^
metadata.rb:9:18: C: Prefer single-quoted strings when you don't need string interpolation or special symbols.
source_url       "https://github.com/opscode-cookbooks/aws"
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
metadata.rb:10:11: C: Put one space between the method name and the first argument.
issues_url       "https://github.com/opscode-cookbooks/aws/issues"
          ^^^^^^^
metadata.rb:10:18: C: Prefer single-quoted strings when you don't need string interpolation or special symbols.
issues_url       "https://github.com/opscode-cookbooks/aws/issues"
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
libraries/ec2.rb:31:9: W: end at 31, 8 is not aligned with if at 27, 20
        end
        ^^^
libraries/ec2.rb:43:9: C: Replace class var @@ec2 with a class instance var.
        @@ec2 ||= create_aws_interface(::Aws::EC2::Client)
        ^^^^^
libraries/ec2.rb:47:9: C: Replace class var @@instance_id with a class instance var.
        @@instance_id ||= query_instance_id
        ^^^^^^^^^^^^^
libraries/ec2.rb:51:9: C: Replace class var @@instance_availability_zone with a class instance var.
        @@instance_availability_zone ||= query_instance_availability_zone
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
libraries/ec2.rb:76:83: W: Useless assignment to variable - options.
        instance_id = open('http://169.254.169.254/latest/meta-data/instance-id', options = { proxy: false }) { |f| f.gets }
                                                                                  ^^^^^^^
libraries/ec2.rb:83:106: W: Useless assignment to variable - options.
        availability_zone = open('http://169.254.169.254/latest/meta-data/placement/availability-zone/', options = { proxy: false }) { |f| f.gets }
                                                                                                         ^^^^^^^
libraries/ec2.rb:97:115: W: Useless assignment to variable - options.
        ip_addresses = open("http://169.254.169.254/latest/meta-data/network/interfaces/macs/#{mac}/local-ipv4s", options = { proxy: false }) { |f| f.read.split("\n") }
                                                                                                                  ^^^^^^^
libraries/ec2.rb:104:110: W: Useless assignment to variable - options.
        eni_id = open("http://169.254.169.254/latest/meta-data/network/interfaces/macs/#{mac}/interface-id", options = { proxy: false }) { |f| f.gets }
                                                                                                             ^^^^^^^
libraries/elb.rb:9:9: C: Replace class var @@elb with a class instance var.
        @@elb ||= create_aws_interface(::Aws::ElasticLoadBalancing::Client)
        ^^^^^
libraries/s3.rb:9:9: C: Replace class var @@s3 with a class instance var.
        @@s3 ||= create_aws_interface(::Aws::S3::Client)
        ^^^^
providers/ebs_raid.rb:55:7: W: Use Kernel#loop with break rather than begin/end/until(or while).
  end while ::File.exist?(base_device)
      ^^^^^
providers/ebs_raid.rb:67:7: W: Use Kernel#loop with break rather than begin/end/until(or while).
  end while ::File.exist?(dir)
      ^^^^^
providers/ebs_raid.rb:91:65: C: Use %r around regular expression.
  node.set[:aws][:raid][mount_point][:raid_dev] = md_device.sub(/\/dev\//, '')
                                                                ^^^^^^^^^
providers/ebs_raid.rb:108:3: C: Favor modifier unless usage when having a single-line body. Another good alternative is the usage of control flow &&/||.
  unless ::File.exist?(mount_point)
  ^^^^^^
providers/ebs_raid.rb:113:3: C: Favor modifier if usage when having a single-line body. Another good alternative is the usage of control flow &&/||.
  if !md_device || md_device == ''
  ^^
providers/ebs_raid.rb:150:21: C: Avoid parameter lists longer than 5 parameters.
def locate_and_mount(mount_point, mount_point_owner, mount_point_group, mount_point_mode, filesystem, filesystem_options)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/ebs_raid.rb:187:3: C: Annotation keywords like TODO should be all upper case, followed by a colon, and a space, then a note describing the problem.
# TODO fix this kludge: ideally we'd pull in the device information from the ebs_volume
  ^^^^^
providers/ebs_raid.rb:259:17: C: Avoid parameter lists longer than 5 parameters.
def mount_device(_raid_dev, mount_point, mount_point_owner, mount_point_group, mount_point_mode, filesystem, filesystem_options)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/ebs_raid.rb:319:22: C: Avoid parameter lists longer than 5 parameters.
def create_raid_disks(mount_point, mount_point_owner, mount_point_group, mount_point_mode, num_disks, disk_size,
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/ebs_raid.rb:391:11: C: Indent when as deep as case.
          when 'ext4'
          ^^^^
providers/ebs_raid.rb:394:15: C: Annotation keywords like TODO should be all upper case, followed by a colon, and a space, then a note describing the problem.
            # TODO fill in details on how to format other filesystems here
              ^^^^^
providers/ebs_volume.rb:95:19: C: Use array literal [] instead of Array.new.
  old_snapshots = Array.new
                  ^^^^^^^^^
providers/ebs_volume.rb:157:18: C: Avoid parameter lists longer than 5 parameters.
def create_volume(snapshot_id, size, availability_zone, timeout, volume_type, piops, encrypted, kms_key_id)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/elastic_lb.rb:6:5: C: Do not use unless with else. Rewrite these with the positive case first.
    unless target_lb[:instances].include?(instance_id)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/instance_monitoring.rb:31:5: C: Rename is_monitoring_enabled to monitoring_enabled?.
def is_monitoring_enabled
    ^^^^^^^^^^^^^^^^^^^^^
providers/resource_tag.rb:4:3: C: Do not use unless with else. Rewrite these with the positive case first.
  unless @new_resource.resource_id
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/resource_tag.rb:11:5: C: Do not use unless with else. Rewrite these with the positive case first.
    unless @current_resource.tags.keys.include?(k)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/resource_tag.rb:23:3: C: Do not use unless with else. Rewrite these with the positive case first.
  unless @new_resource.resource_id
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/resource_tag.rb:30:3: C: Do not use unless with else. Rewrite these with the positive case first.
  unless updated_tags.eql?(@current_resource.tags)
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/resource_tag.rb:34:61: W: Ambiguous regexp literal. Parenthesize the method arguments if it's surely a regexp literal, or add a whitespace to the right of the / if it should be a division.
      updated_tags.delete_if { |key, _value| key.to_s.match /^aws/ }
                                                            ^
providers/resource_tag.rb:43:3: C: Do not use unless with else. Rewrite these with the positive case first.
  unless @new_resource.resource_id
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/resource_tag.rb:51:18: C: Use next to skip iteration.
  tags_to_delete.each do |key|
                 ^^^^
providers/resource_tag.rb:62:3: C: Do not use unless with else. Rewrite these with the positive case first.
  unless @new_resource.resource_id
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/resource_tag.rb:81:3: C: Do not use unless with else. Rewrite these with the positive case first.
  unless @new_resource.resource_id
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
providers/resource_tag.rb:90:5: C: Block argument expression is not on the same line as the block start.
    |tag| @current_resource.tags[tag[:key]] = tag[:value]
    ^^^^^
providers/s3_file.rb:31:20: C: Use %r around regular expression.
  remote_path.sub!(/^\/*/, '')
                   ^^^^^^
providers/s3_file.rb:38:23: C: Use %r around regular expression.
    source s3url.gsub(/https:\/\/([\w\.\-]*)\.{1}s3.amazonaws.com:443/, 'https://s3.amazonaws.com:443/\1') # Fix for ssl cert issue
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

36 files inspected, 41 offenses detected
$ rubocop --verbose-version
0.32.0 (using Parser 2.3.0.pre.2, running on ruby 2.0.0 x86_64-darwin12.5.0)
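For a couple of the offenses above, the idiomatic rewrites look like this (a hedged sketch with made-up data, not the cookbook's real device-probing logic):

```ruby
# `end while` on a begin block -> Kernel#loop with break:
names = %w(sdi1 sdi2 sdi3)
existing = { 'sdi1' => true, 'sdi2' => true }
candidate = nil
loop do
  candidate = names.shift
  break unless existing[candidate]
end
# candidate is now 'sdi3', the first name not already taken

# /\/dev\// -> %r{/dev/} avoids the escaped-slash "leaning toothpicks":
raid_dev = '/dev/md0'.sub(%r{/dev/}, '')
# raid_dev == 'md0'
```

The useless-assignment warnings are the same idea: `open(url, proxy: false)` passes the hash directly, where `open(url, options = { proxy: false })` creates a local `options` variable that is never read again.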

Invalid device name on CentOS

================================================================================
    Error executing action `attach` on resource 'aws_ebs_volume[sdi1]'
    ================================================================================

    Aws::EC2::Errors::InvalidParameterValue
    ---------------------------------------
    Value (/dev/sdi1) for parameter device is invalid. /dev/sdi1 is not a valid EBS device name.

    Cookbook Trace:
    ---------------
    /var/chef/cache/cookbooks/aws/providers/ebs_volume.rb:212:in `attach_volume'
    /var/chef/cache/cookbooks/aws/providers/ebs_volume.rb:70:in `block (2 levels) in class_from_file'
    /var/chef/cache/cookbooks/aws/providers/ebs_volume.rb:68:in `block in class_from_file'

Support for client side decryption for aws_s3_file

It would be great to be able to pass in an encryption_key parameter to an aws_s3_file resource to retrieve encrypted files. The AWS Ruby SDK has great support for client side encryption/decryption of S3 objects (http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Encryption/Client.html). Unfortunately, because of the way that aws_s3_file creates files (inline remote_file with a presigned url as the source), it can't support using Aws::S3::Encryption::Client. So, since this will require a pretty serious rewrite, I wanted to see if there was community support for the idea before proceeding. Thoughts?
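If the parameter were added, the usage might look something like this (purely hypothetical interface sketch; neither the attribute nor the behavior exists in the cookbook today):

```ruby
# Hypothetical only -- sketching the requested interface, not current behavior.
aws_s3_file '/tmp/secret.dat' do
  bucket 'my-bucket'
  remote_path 'secret.dat'
  encryption_key my_rsa_key # would route the download through
                            # Aws::S3::Encryption::Client instead of a
                            # presigned-URL remote_file
end
```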

add support for XFS filesystem

Chef::Log.info("Format device found: #{md_device}")
case filesystem
when 'ext4'
  system("mke2fs -t #{filesystem} -F #{md_device}")
when 'xfs'
  system("mkfs.xfs #{md_device}")
else
  # TODO: fill in details on how to format other filesystems here
  Chef::Log.info("Can't format filesystem #{filesystem}")
end

EBS_Raid failing in 2.7.0 - see issue #118 AND Raid doesn't support Encryption

#118

EBS Raid creation is failing with:
Error "Aws::EC2::Errors::MissingParameter The request must contain the parameter size/snapshot"

This is due to:
provider/ebs_raid.rb :349
snapshot_id creating_from_snapshot ? snapshots[i - 1] : ''

I guess AWS SDK doesn't like '' as an empty snapshot_id. I changed the line to:
snapshot_id creating_from_snapshot ? snapshots[i - 1] : nil
nil is working fine in my fork.
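The difference matters because nil entries can be stripped from the request before it is sent, while an empty string goes to EC2 as a (rejected) snapshot id. A minimal sketch of that cleanup step, with illustrative params:

```ruby
# nil marks "parameter absent" and can be filtered out; '' cannot be told
# apart from a real (invalid) snapshot id.
params = { size: 250, availability_zone: 'us-east-1a', snapshot_id: nil }
request = params.reject { |_k, v| v.nil? }
# request == { size: 250, availability_zone: 'us-east-1a' }
```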

Also,
EBS Volume was updated last week to support Encrypted volumes, but EBS Raid lacks the passthrough to allow creation of Encrypted volumes when creating them as part of a Raid.

Default Recipe Stuck on Chef::Provider::Package::Rubygems::CurrentGemEnvironment::Bundler

I'm writing a recipe that includes the aws default recipe, and then uses the aws_ebs_volume resource. At runtime, I get the following error:

uninitialized constant Chef::Provider::Package::Rubygems::CurrentGemEnvironment::Bundler
https://gist.github.com/jcderose/0f589b36aad50c05d737

Here's my recipe:

include_recipe "opsworks_initial_setup"
include_recipe 'aws'

instance_az = node[:opsworks][:instance][:availability_zone]
device_id = "/dev/xvdx"

directory '/srv-enc' do
  mode '0755'
end

aws_ebs_volume 'EncryptedVolume' do
  size 8
  device device_id
  availability_zone instance_az
  volume_type "gp2"
  encrypted true
  action [:create, :attach]
end

Here's my metadata.rb:

name             'setup-ebs-encryption'
maintainer       'XXXX'
maintainer_email 'XXXX'
license          'All rights reserved'
description      'Installs/Configures setup-ebs-encryption'
long_description IO.read(File.join(File.dirname(__FILE__), 'README.md'))
version          '0.1.0'

depends 'opsworks_initial_setup'
depends 'aws'

Here's my Berksfile:
https://gist.github.com/jcderose/1f054f7714571c25d7b1

Use of class variables causing resource issues

Hello!

One thing that I noticed, in the updates that I'm making:

Use of class variables like this, say in ec2.rb line 45, is causing some variable data to be duplicated across interface instances across resources.

      def ec2
        @@ec2 ||= create_aws_interface(::Aws::EC2::Client)
      end

There are far-reaching implications for this, of course - basically if you want to have different resource attributes across the same service that have to do with connectivity data, you are by and large out of luck. It could also be leading to code where items are specified in one resource and were missed in another.
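The duplication can be reproduced in a few lines (toy classes with made-up names, not the cookbook's real ones):

```ruby
# A class variable memoizes once for ALL instances of the class.
class SharedClient
  def client(region)
    @@client ||= "client-for-#{region}" # shared across every instance
  end
end

# An instance variable memoizes per resource instance.
class PerInstanceClient
  def client(region)
    @client ||= "client-for-#{region}" # private to this instance
  end
end

SharedClient.new.client('us-east-1')              # "client-for-us-east-1"
stale = SharedClient.new.client('eu-west-1')      # still "client-for-us-east-1"

PerInstanceClient.new.client('us-east-1')         # "client-for-us-east-1"
fresh = PerInstanceClient.new.client('eu-west-1') # "client-for-eu-west-1"
```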

I stumbled upon this as I am working on a PR that, among other things, allows one to run the cookbook using local credentials and to specify a region. This lets the cookbook run in local mode outside of an instance profile, working off your ~/.aws/credentials file. Since there doesn't seem to be a way to get the region out of the API (aside from the environment), I added a resource attribute for it (otherwise the cookbook falls back to the instance profile, which doesn't work since I'm not running this within an EC2 instance). I noticed that I hadn't specified it within all resources, and when I changed the IAM resources to use instance-based variables, the resources that didn't have a region specified predictably failed trying to grab instance data from http://169.254.169.254/.

What would the potential impact be of switching all of these items to instance variables? Switching the IAM resources to an instance-based interface (by just altering it in my libraries/iam.rb) didn't seem to cause any problems; I was able to add the resources without issues.

Another part of my PR is the ability to deploy resources by assuming IAM roles through STS (my company has multiple accounts and I want to manage resources centrally), so I'm predicting this will be a bit more of an issue there.

If some of this is not making sense, which I'm thinking it might not, feel free to look at the commits being made in the https://github.com/paybyphone/aws fork.

Thanks,

--Chris

ebs_volume Delete on Termination

Would it be possible to add an attribute to ebs_volume that sets the deleteOnTermination flag for volumes created/attached to an instance? Currently we have to go in and manually clean up volumes after instances are pulled out of service.
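For reference, the EC2 call that would back such an attribute is ModifyInstanceAttribute with a block-device mapping like the following (hypothetical ids; the resource attribute itself does not exist yet):

```ruby
# Hypothetical request shape for flipping deleteOnTermination on an
# already-attached volume -- all ids here are placeholders.
mapping = {
  instance_id: 'i-0123456789abcdef0',
  block_device_mappings: [{
    device_name: '/dev/sdi',
    ebs: { volume_id: 'vol-0123456789abcdef0', delete_on_termination: true },
  }],
}
```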

Amazon EC2 endpoint url different for eu-central-1

Got the following error when trying to use the aws cookbook in the eu-central-1 region.

ec2-54-93-80-34.eu-central-1.compute.amazonaws.com [2015-01-21T12:12:19+00:00] WARN: Rightscale::HttpConnection : re-raising same error: https://eu-central-1.ec2.amazonaws.com:443 temporarily unavailable: (SocketError: getaddrinfo: Name or service not known) -- error count: 4, error age: 0

Looks like the endpoint is different for eu-central-1 vs other regions.

$ host ec2.eu-central-1.amazonaws.com
ec2.eu-central-1.amazonaws.com has address 54.239.54.44

$ host eu-central-1.ec2.amazonaws.com
Host eu-central-1.ec2.amazonaws.com not found: 3(NXDOMAIN)

$ host eu-west-1.ec2.amazonaws.com
eu-west-1.ec2.amazonaws.com has address 178.236.6.52

$ host ec2.eu-west-1.amazonaws.com
ec2.eu-west-1.amazonaws.com is an alias for eu-west-1.ec2.amazonaws.com.
eu-west-1.ec2.amazonaws.com has address 178.236.4.54
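In other words, the legacy host form that the cookbook (via Rightscale::HttpConnection) builds never got DNS entries in newer regions, while the canonical form resolves everywhere. A sketch of the two endpoint shapes:

```ruby
region = 'eu-central-1'
# Legacy form -- NXDOMAIN in eu-central-1, per the host lookups above:
legacy    = "https://#{region}.ec2.amazonaws.com:443"
# Canonical form -- valid in all regions, including eu-central-1:
canonical = "https://ec2.#{region}.amazonaws.com:443"
```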

EBS Raid doesn't support Encryption (see #129)

I wrote this up as part of #129. The other part of #129 has been closed, so I'm opening a new issue just on this part.

PR #114 brought in the ability to create Encrypted EBS volumes, however it didn't provide a passthrough for volumes created as part of a raid to be created with encryption.

I wrote PR #130 to fix this right after PR #114 was merged, but it included a general fix for EBS Raids that was sliced out into its own PR #156, which was recently merged. So I am submitting a new PR with just the remaining Encryption fix for volumes created in a Raid.

secondary_ip doesn't work

I'm trying to use the secondary_ip resource, and it is telling me that interface_id was not found.

issues with gp2 volume type

Hello, I just downloaded the latest version (2.5.0).

I'm trying to create and attach a 1TB gp2 volume with 3000 provisioned IOPS and can't seem to get past the following error:

[2015-01-16T17:44:08-05:00] ERROR: aws_ebs_volume[data_ebs_volume](cookbook::createEBSdatavolume line 25) had an error: RuntimeError: IOPS param without piops volume type.
[2015-01-16T17:44:08-05:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)

Here is the output from Chef.

Resource Declaration:
In /var/chef/cache/cookbooks/cookbook/recipes/createEBSdatavolume.rb

25: aws_ebs_volume "data_ebs_volume" do
26: provider "aws_ebs_volume"
27: #availability_zone "us-east-1a"
28: aws_access_key aws['access_key']
29: aws_secret_access_key aws['secret_key']
30: size ebs_size
31: piops piops
32: volume_type 'gp2'
33: device "#{device_id}"
34: action [ :create, :attach ]
35: end
36:

Compiled Resource:
Declared in /var/chef/cache/cookbooks/cookbook/recipes/createEBSdatavolume.rb:25:in `from_file'

aws_ebs_volume("data_ebs_volume") do
provider Chef::Provider::AwsEbsVolume
action [:create, :attach]
retries 0
retry_delay 2
cookbook_name "wordstream-postgresql"
recipe_name "createEBSdatavolume"
aws_access_key "XXXXXXXXXXXXXXXXXXX"
aws_secret_access_key "XXXXXXXXXXXXXXXXXXXXXX"
size 1024
volume_type "gp2"
device "/dev/sdc"
piops 3000
timeout 180
end

Is gp2 not a valid volume_type? If it is valid, am I missing a key component?

thanks
Austin
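A hedged reconstruction of the check the error message points at: provisioned IOPS only apply to the io1 volume type, and gp2 derives its IOPS from size, so the fix is to drop the piops attribute. The method below is illustrative, not the provider's actual code:

```ruby
# Illustrative validation -- piops is only meaningful with io1 volumes.
def validate_iops!(volume_type, piops)
  raise 'IOPS param without piops volume type' if piops && piops > 0 && volume_type != 'io1'
end

validate_iops!('io1', 3000)   # ok
validate_iops!('gp2', 0)      # ok -- just drop the piops attribute for gp2
# validate_iops!('gp2', 3000) # raises, as in the report above
```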

Missing "ec2:DescribeTags" permission

The documentation shows a sample policy for creating tags. The sample policy is missing the "ec2:DescribeTags" action permission, which is needed in order to use the "aws_resource_tag" resource without an UnauthorizedOperation error.
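The corrected policy, sketched as a Ruby hash (JSON-equivalent), would add ec2:DescribeTags alongside the mutation actions. The action list here is illustrative, not the README's exact sample:

```ruby
# Illustrative policy shape; the key point is ec2:DescribeTags being present.
policy = {
  'Version' => '2012-10-17',
  'Statement' => [{
    'Effect'   => 'Allow',
    'Action'   => %w(ec2:CreateTags ec2:DeleteTags ec2:DescribeTags),
    'Resource' => '*',
  }],
}
```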

By the way, you should update the repository guidelines for contributing, which says "If you would like to contribute, please open a ticket in JIRA", while JIRA says "We've moved issue tracking from JIRA to Github Issues". Very confusing... ;)

Wrong argument error in Chef 11.16.4 on 2.6.2 release

Running version 2.6.2 with Chef 11.16.4 on Ubuntu 14.04.1

================================================================================
  Recipe Compile Error in /var/chef/cache/cookbooks/go-remote-control/recipes/default.rb
  ================================================================================

  ArgumentError
  -------------
  wrong number of arguments (1 for 0)

  Cookbook Trace:
  ---------------
    /var/chef/cache/cookbooks/aws/recipes/default.rb:22:in `block in from_file'
    /var/chef/cache/cookbooks/aws/recipes/default.rb:20:in `from_file'
    /var/chef/cache/cookbooks/go-remote-control/recipes/install.rb:18:in `from_file'
    /var/chef/cache/cookbooks/go-remote-control/recipes/default.rb:10:in `from_file'

  Relevant File Content:
  ----------------------
  /var/chef/cache/cookbooks/aws/recipes/default.rb:

   15:  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   16:  # See the License for the specific language governing permissions and
   17:  # limitations under the License.
   18:  #
   19:
   20:  chef_gem 'aws-sdk' do
   21:    version node['aws']['aws_sdk_version']
   22>>   compile_time true if respond_to?(:compile_time)
   23:    action :install
   24:  end
   25:
   26:  require 'aws-sdk'
   27:

Snapshot IDs should be stored as node objects

Basically what the title says. Newly created volumes become part of the node object; why not store associated snapshots there as well? This would make them much easier to tag for later identification.

Unable to set Name of ebs_volume

EBS Volumes in the AWS Console have a "name" attribute, but the ebs_volume resource doesn't allow it to be set. This makes it hard to keep track of the volumes in our account.

The "name" attribute should be added to the ebs_volume resource.

Error "Aws::EC2::Errors::MissingParameter The request must contain the parameter size/snapshot" related to #80

Hey,
the error is produced by this cookbook code

aws_ebs_raid "mysql_raid10_volume" do
  disk_count 4
  disk_size  250
  level      10
  filesystem 'ext4'
  disk_type  'standard'
end

it gets compiled to:

aws_ebs_volume("sdi1") do
  provider Chef::Provider::AwsEbsVolume
  action [:create, :attach]
  retries 0
  retry_delay 2
  guard_interpreter :default
  cookbook_name "mysql"
  size 250
  volume_type "standard"
  piops 0
  device "/dev/sdi1"
  timeout 180
end

If I run the compiled code in the same cookbook, the volume gets built and attached. However, the ebs_raid.rb provider seems to be the problem.

Has anybody seen this before, or can reproduce this?

Any advice on how to further troubleshoot the issue?

unexpected value at params[:tags][0]["Tier"] when trying to remove existing tags

An attempt to remove an existing tag with the tag's explicit value fails:

  * aws_resource_tag[i-0e4f5d02] action remove
================================================================================
Error executing action `remove` on resource 'aws_resource_tag[i-0e4f5d02]'
================================================================================

ArgumentError
-------------
unexpected value at params[:tags][0]["Tier"]

Cookbook Trace:
---------------
/var/chef/cache/cookbooks/aws/providers/resource_tag.rb:54:in `block (3 levels) in class_from_file'
/var/chef/cache/cookbooks/aws/providers/resource_tag.rb:53:in `block (2 levels) in class_from_file'
/var/chef/cache/cookbooks/aws/providers/resource_tag.rb:51:in `each'
/var/chef/cache/cookbooks/aws/providers/resource_tag.rb:51:in `block in class_from_file'

Compiled Resource:
------------------
# Declared in /var/chef/cache/cookbooks/jorhett/recipes/tags.rb:47:in `block in from_file'

aws_resource_tag("i-0e445d02") do
  action [:remove]
  retries 0
  retry_delay 2
  cookbook_name "jorhett"
  recipe_name "tags"
  tags {"Tier"=>"front"}
  not_if { #code block }
end
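One plausible shape mismatch behind that ArgumentError (the report doesn't show the provider's request payload): the EC2 tagging API expects an array of `{ key:, value: }` hashes, so handing it the resource's raw tag hash produces an unexpected value at `params[:tags][0]`. A sketch of the conversion:

```ruby
# Convert the resource's tag hash into the [{ key:, value: }] array shape
# the EC2 API expects.
tags = { 'Tier' => 'front' }
api_tags = tags.map { |k, v| { key: k, value: v } }
# api_tags == [{ key: 'Tier', value: 'front' }]
```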

elastic_lb LWRP does not remove nodes and always adds nodes to ELB

The elastic_lb LWRP runs a check to see if the node is currently in the ELB, this check does not currently work.

Lines:
providers/elastic_lb:6
providers/elastic_lb:18
providers/elastic_lb:20

Current Behavior:
When actions de/register run, the checks listed above in lines 6 and 18 always fail. So it will always try to add the node to the ELB with register and will never remove a node with deregister. converge_by will always claim that the action ran successfully.

The deregister_instances_with_load_balancer method does not exist either. The correct method is deregister_instances_from_load_balancer

http://docs.aws.amazon.com/sdkforruby/api/Aws/ElasticLoadBalancing/Client.html#deregister_instances_from_load_balancer-instance_method

So if the deregister action were to run, it would fail when trying to run this method on line 20.
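A plausible cause for the always-failing membership check (assumed; the report doesn't state the exact reason): the API describes instances as structs rather than bare id strings, so a direct `include?` on the id never matches. Sketched with illustrative data:

```ruby
# Comparing structs against a bare id string always fails; compare the
# instance_id field instead.
instances = [{ instance_id: 'i-0abc1234' }]
naive  = instances.include?('i-0abc1234')                       # false
proper = instances.any? { |i| i[:instance_id] == 'i-0abc1234' } # true
```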

Seahorse::Client::NetworkingError

This cookbook works with Linux but doesn't seem to work properly with Windows. I've tried creating EBS volumes and EC2 tags, and no matter what I do the Chef client run always produces the following SSL error:

Seahorse::Client::NetworkingError: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed

I've tried updating the CA bundle certificate, Ruby, Ruby gems, and I've also tried updating aws-sdk to 2.0.48 and nothing seems to fix the issue.

Has anyone else been able to make this cookbook work on Windows servers?

opsworks (chef-solo) aws_ebs_volume: "RuntimeError: Volume no longer exists"

I'm attempting to use the aws_ebs_volume resource to attach an ebs volume to an opsworks instance at boot. However, when the custom recipe runs I get the following message:


================================================================================
Error executing action `create` on resource 'aws_ebs_volume[db_ebs_volume]'
================================================================================


RuntimeError
------------
Volume  no longer exists


Cookbook Trace:
---------------
/var/lib/aws/opsworks/cache/cookbooks/aws/providers/ebs_volume.rb:189:in `block in create_volume'
/var/lib/aws/opsworks/cache/cookbooks/aws/providers/ebs_volume.rb:177:in `create_volume'
/var/lib/aws/opsworks/cache/cookbooks/aws/providers/ebs_volume.rb:38:in `block (2 levels) in class_from_file'
/var/lib/aws/opsworks/cache/cookbooks/aws/providers/ebs_volume.rb:37:in `block in class_from_file'


Resource Declaration:
---------------------
# In /var/lib/aws/opsworks/cache/cookbooks/mongodb/recipes/ebs_backups.rb

60: aws_ebs_volume "db_ebs_volume" do
61:   size 7
62:   device "/dev/sdi"
63:   action [ :create, :attach ]
64: end


Compiled Resource:
------------------
# Declared in /var/lib/aws/opsworks/cache/cookbooks/mongodb/recipes/ebs_backups.rb:60:in `from_file'

aws_ebs_volume("db_ebs_volume") do
action [:create, :attach]
retries 0
retry_delay 2
cookbook_name "mongodb"
recipe_name "ebs_backups"
size 7
device "/dev/sdi"
timeout 180
volume_type "standard"
piops 0
end



[2014-06-09T16:51:52+00:00] INFO: Running queued delayed notifications before re-raising exception
[2014-06-09T16:51:52+00:00] INFO: template[/etc/default/mongodb] sending restart action to service[mongodb] (delayed)
[2014-06-09T16:51:52+00:00] INFO: Processing service[mongodb] action restart (mongodb::default line 127)
[2014-06-09T16:52:04+00:00] INFO: service[mongodb] restarted
[2014-06-09T16:52:04+00:00] ERROR: Running exception handlers
[2014-06-09T16:52:04+00:00] ERROR: Exception handlers complete
[2014-06-09T16:52:04+00:00] FATAL: Stacktrace dumped to /var/lib/aws/opsworks/cache/chef-stacktrace.out
[2014-06-09T16:52:04+00:00] ERROR: aws_ebs_volume[db_ebs_volume] (mongodb::ebs_backups line 60) had an error: RuntimeError: Volume  no longer exists
[2014-06-09T16:52:05+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)

Steps to reproduce:

  1. create custom recipe
  2. add the following code to custom recipe:
include_recipe "aws"

aws_ebs_volume "db_ebs_volume" do
  size 7
  device "/dev/sdi"
  action [ :create, :attach ]
end

I believe these recipes are supposed to work with chef-solo, but I haven't been able to use this cookbook to create a volume and attach it to my opsworks instance. Is this a bug? Am I doing something wrong? Any help appreciated! Thank you!

Proxy Support

What is the correct method for specifying a proxy when using this cookbook to attach an EBS volume? The http_proxy environment variable doesn't seem to be working, nor does setting it in the Chef config file.

I don't think that the version of ruby that is embedded with chef supports those methods anyway.

# grep -i http_proxy /opt/chef/embedded/lib/ruby/1.9.1/net/http.rb 
        # Note that net/http does not use the HTTP_PROXY environment variable.
        # If you want to use a proxy, you must set it explicitly.

aws_s3_file IAM role support

Would it be possible to add IAM role support to the aws_s3_file provider? It currently requires aws_access_key_id and aws_secret_access_key be set.

Thanks for consideration.

DescribeTag action is missing

Hi,
I'd like to use this cookbook to describe (query) AWS tags, but there's no "describe" action.

Actions:

add - Add tags to a resource.
update - Add or modify existing tags on a resource -- this is the default action.
remove - Remove tags from a resource, but only if the specified values match the existing ones.
force_remove - Remove tags from a resource, regardless of their values.

NameError: uninitialized constant Chef::Provider::AwsS3File::RightAws

Getting the following error when using aws cookbook
NameError: uninitialized constant Chef::Provider::AwsS3File::RightAws

aws_s3_file "demo.py" do
  bucket "#{demo_bucket_name}"
  remote_path "bootstrap/worker/demo.py"
  aws_access_key_id node['aws']['aws_access_key_id']
  aws_secret_access_key node['aws']['aws_secret_access_key']
end

I can see that the resource is using the correct data when executing the recipe.

Add source_url and issues_url, update CONTRIBUTING

The source_url and issues_url metadata fields allow Supermarket to automatically show the "view source" and "view issues" links when looking around at cookbooks. Without these in the metadata, the links have to be handcrafted.

Expected format
source_url ""
issues_url ""

CONTRIBUTING also has some references to the retired wiki.

AWS reports volume doesn't exist, but it does, and it's mounted

I only see this error in the chef-client logs. Running manually via "chef-client" does not reproduce this error. However, firing off a run via /etc/init.d/chef-client run does produce the error, visible in the logs.

[2014-10-03T20:45:22+00:00] ERROR: Running exception handlers
[2014-10-03T20:45:22+00:00] ERROR: Creating JSON exception report
[2014-10-03T20:45:23+00:00] ERROR: Exception handlers complete
[2014-10-03T20:45:23+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2014-10-03T20:45:23+00:00] ERROR: aws_ebs_volume[data_volume](drbd-ebs::ha line 36) had an error: RuntimeError: Volume with id vol-a8fda8ed is registered with the node but does not exist in EC2. To clear this error, remove the ['aws']['ebs_volume']['data_volume']['volume_id'] entry from this node's data.
[2014-10-03T20:45:23+00:00] ERROR: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)

chef-client -v
Chef: 11.14.0.alpha.4

root@use1d-admin101a:/var/log/chef# curl http://169.254.169.254/latest/meta-data/block-device-mapping/ebs19/ && echo
xvdb
root@use1d-admin101a:/var/log/chef# mount
/dev/xvda on / type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered)
/dev/drbd0 on /data type ext4 (rw,relatime,user_xattr,barrier=1,data=ordered)

root@use1d-admin101a:/var/log/chef# cat /etc/drbd.d/pair.res
resource pair {
device /dev/drbd0;
disk /dev/xvdb;
}

root@use1d-admin101a:/var/log/chef# cat /proc/drbd
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:688215440 nr:0 dw:744654652 dr:2126176838 al:15406293 bm:141869 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

(screenshot attached, 2014-10-03 3:18 PM)

ELB in VPC is not supported in this version of API

I'm trying to add instances to an ELB, but it doesn't seem to work in a VPC.

  ================================================================================
Error executing action `register` on resource 'aws_elastic_lb[test-mysql]'

RightAws::AwsError

ValidationError: ELB in VPC is not supported in this version of API. Please try 2011-11-15 or newer.

Resources for CloudFront, S3 buckets policy/websites, IAM users

I have a static website that gets deployed on S3/CloudFront. What I would like to see are resources for managing this configuration (just because you're not using nodes doesn't mean it's not configuration management), where we could write things like:

aws_cloudfront_distribution do
  aliases ['example.com']
  origins [{ domain_name: '...', http_port: '...', ... }]
  error_pages [...]
  action :create
end

aws_iam_user 'my-iam-deploy-user' do
  policies [...]
  action :create
end

aws_s3_bucket 'where-my-stuff-goes' do
  policy({...})
  website_configuration({...})
end

and run that recipe on deploy to ensure all the things that are needed for the site to be deployed are present and configured correctly.

It looks like this cookbook currently uses right-aws, which supports the resources that are already here, but not much else. Doing something like this might require using Fog or the AWS SDK gem.

RuntimeError: Volume no longer exists

Hi,
Using the aws cookbook, I am trying to create a volume and attach it via the ebs_volume.rb provider. While doing this, I am getting the error "Volume no longer exists".

[2014-06-26T07:43:17-04:00] ERROR: aws_ebs_volume[new_ebs_volume](aws::default line 30) had an error: RuntimeError: Volume no longer exists
[2014-06-26T07:43:17-04:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)

The fix provided in the resolved issue is already present in the provider, but it's not helping.
Let me know if you need any further information to resolve this.

Early help is much appreciated as I have very little time to complete this task. Thank you.

ebs_raid default

Error : aws_ebs_volume[sdi1](/var/chef/cache/cookbooks/aws/providers/ebs_raid.rb line 337) had an error: Aws::EC2::Errors::InvalidParameterValue: Value (/dev/sdi1) for parameter device is invalid. /dev/sdi1 is not a valid EBS device name.

Issue: /dev/sdi1 is invalid, /dev/sdi is not.

Details:
With mount_point set, ebs_raid takes the default device name from aws_ebs_volume:

aws_ebs_volume disk_dev_path do
  device "/dev/#{disk_dev_path}"

# set up our data bag info
devices[disk_dev_path] = 'pending'
Chef::Log.info("creating ebs volume for device #{disk_dev_path} with size #{disk_size}")

Can't create a RAID on an HVM instance

Per the AWS docs:

"Hardware virtual machine (HVM) AMIs (such as the base Windows and Cluster Compute images) do not support the use of trailing numbers on device names (xvd[a-p][1-15])."

The RAID recipe compiles the disk by making device names like sdi1, sdi2, etc., so I get an error like this when trying to create the RAID:

"InvalidParameterValue: Value (/dev/sdi1) for parameter device is invalid. /dev/sdi1 is not a valid EBS device name."
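A workaround consistent with the constraint quoted above would be to strip the trailing number before handing the name to EC2 (illustrative sketch, not the cookbook's current behavior):

```ruby
# HVM instances reject trailing numbers on device names (xvd[a-p][1-15]),
# so derive the attach name without the numeric suffix.
disk_dev = 'sdi1'
device = "/dev/#{disk_dev.sub(/\d+\z/, '')}"
# device == '/dev/sdi'
```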

using s3_file on NON ec2 instances...

I keep getting an error, and I have looked around and can't see what I am doing wrong...
Here is my resource...

include_recipe "aws"

aws_s3_file "/tmp/file_I_want.tar.gz" do
  bucket "config-files"
  remote_path "file_I_want.tar.gz"
  aws_access_key_id node["aws-key-id"]
  aws_secret_access_key node["aws-secret"]
end
         * aws_s3_file[/tmp/file_I_want.tar.gz] action create

           ================================================================================
           Error executing action `create` on resource 'aws_s3_file[/tmp/file_I_want.tar.gz]'
           ================================================================================

           Errno::ETIMEDOUT
           ----------------
           Connection timed out - connect(2) for "169.254.169.254" port 80

           Cookbook Trace:
           ---------------
           /tmp/kitchen/cache/cookbooks/aws/libraries/ec2.rb:83:in `query_instance_availability_zone'
           /tmp/kitchen/cache/cookbooks/aws/libraries/ec2.rb:51:in `instance_availability_zone'
           /tmp/kitchen/cache/cookbooks/aws/libraries/ec2.rb:63:in `create_aws_interface'
           /tmp/kitchen/cache/cookbooks/aws/libraries/s3.rb:9:in `s3'
           /tmp/kitchen/cache/cookbooks/aws/providers/s3_file.rb:33:in `do_s3_file'
           /tmp/kitchen/cache/cookbooks/aws/providers/s3_file.rb:9:in `block in class_from_file'

           Resource Declaration:
