
packer-plugin-amazon's Introduction

Packer Plugin Amazon

The Amazon multi-component plugin can be used with HashiCorp Packer to create custom images. For the full list of available features for this plugin, see the docs.

Installation

Using pre-built releases

Using the packer init command

Starting from version 1.7, Packer supports a new packer init command allowing automatic installation of Packer plugins. Read the Packer documentation for more information.

To install this plugin, copy and paste this code into your Packer configuration. Then, run packer init.

packer {
  required_plugins {
    amazon = {
      version = ">= 1.3.2"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

Manual installation

You can find pre-built binary releases of the plugin here. Once you have downloaded the latest archive corresponding to your target OS, uncompress it to retrieve the plugin binary file corresponding to your platform. To install the plugin, please follow the Packer documentation on installing a plugin.

From Sources

If you prefer to build the plugin from sources, clone the GitHub repository locally and run the command go build from the root directory. Upon successful compilation, a packer-plugin-amazon plugin binary file can be found in the root directory. To install the compiled plugin, please follow the official Packer documentation on installing a plugin.
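For reference, a minimal sketch of that workflow, assuming a local Go toolchain (the commands are illustrative):

git clone https://github.com/hashicorp/packer-plugin-amazon.git
cd packer-plugin-amazon
go build
# install the resulting packer-plugin-amazon binary by following the Packer plugin installation docs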

Configuration

For more information on how to configure the plugin, please read the documentation located in the docs/ directory.

Contributing

  • If you think you've found a bug in the code or you have a question regarding the usage of this software, please reach out to us by opening an issue in this GitHub repository.
  • Contributions to this project are welcome: if you want to add a feature or fix a bug, please do so by opening a Pull Request in this GitHub repository. For feature contributions, we kindly ask you to open an issue to discuss the feature beforehand.

packer-plugin-amazon's People

Contributors

aleksandrserbin, azr, catsby, cbednarski, chrislundquist, danham, dave2, dependabot[bot], glyphack, gmmephisto, henrysher, jen20, jeremy-asher, jescalan, jmassara, johndaviesco, lbajolet-hashicorp, markpeek, mitchellh, mwhooker, nywilken, ofosos, rasa, rickard-von-essen, sargun, sethvargo, swampdragons, sylviamoss, timdawson264, web-flow


packer-plugin-amazon's Issues

Support writing description on create with Amazon EC2 Builders

This issue was originally opened by @jasonberanek as hashicorp/packer#8869. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Feature Description

Set the Amazon AMI description at the creation of the image, which allows the value to be set without requiring the ec2:ModifyImageAttribute IAM policy.

The existing Amazon AMI builders support setting the description; however, they set it after the AMI is created using ModifyImageAttribute requests. Some business IT policies restrict ec2:ModifyImageAttribute permissions to avoid their staff accidentally or intentionally exposing internal AMIs to external parties. In those cases, Packer will still run as long as none of the features requiring ec2:ModifyImageAttribute are part of the template, but this also means it is impossible to apply description text to help with image review/discovery.

The description attribute on the AMI is available to be set at create time in all of the existing Amazon EC2 builders, so the implementation should be straightforward.
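For illustration, a hedged sketch of what this could look like with the AWS SDK for Go (the function and variable names are assumptions, not the plugin's actual code):

// Sketch only: create the AMI with its description in one call, avoiding a
// later ModifyImageAttribute request.
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func createImageWithDescription(ec2conn *ec2.EC2, instanceID, name, description string) (string, error) {
	resp, err := ec2conn.CreateImage(&ec2.CreateImageInput{
		InstanceId:  aws.String(instanceID),
		Name:        aws.String(name),
		Description: aws.String(description), // set at create time, no extra IAM permission needed
	})
	if err != nil {
		return "", err
	}
	return aws.StringValue(resp.ImageId), nil
}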

Use Case(s)

  • Support defining AMI descriptions for relevant EC2 builders even when ec2:ModifyImageAttribute has not been given to the requesting account.

Accept `user-data` as a json object

This issue was originally opened by @curiositycasualty as hashicorp/packer#9235. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Feature Description

The amazon-* builders only accept user-data as a string. I'd much prefer it if Packer accepted a JSON object and rendered it as a string for AWS's sake, OR explicitly accepted a "cloud_init_user_data" param that would render as YAML (including the mandatory first "#cloud-config" line). Doing so would force JSON compliance on the contents of the param and aid in cloud-init configuration.
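To make the proposal concrete, a hypothetical template fragment (the cloud_init_user_data key does not exist today; it is the suggested addition):

{
  "type": "amazon-ebs",
  "cloud_init_user_data": {
    "package_update": true,
    "packages": ["nginx"]
  }
}

The builder would then render the object as #cloud-config YAML before passing it to EC2 as user data.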

SCP session completes before file transfer completes?

This issue was originally opened by @DayneD89 as hashicorp/packer#8326. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


I couldn't find anything anywhere, but sorry if this has been raised before.

I'm trying to use SSH instead of WinRM as my communicator for a Windows amazon-ebs build to improve the speed of the file provisioner. I've got ssh connecting, but it fails when I transfer the file. This is the relevant provisioner (it runs fine if I take this section out):

{ "type":"file", "source":"files/installs/7z1900-x64.msi", "destination":"C:\\7zip.msi" }

And this is from my logs:

2019/11/05 12:45:45 ui: ^[[1;32m==> amazon-ebs: Connected to SSH!^[[0m
2019/11/05 12:45:45 packer: 2019/11/05 12:45:45 Running the provision hook
2019/11/05 12:45:45 [INFO] (telemetry) Starting provisioner file
2019/11/05 12:45:45 ui: ^[[1;32m==> amazon-ebs: Uploading files/installs/7z1900-x64.msi => C:\7zip.msi^[[0m
2019/11/05 12:45:45 packer: 2019/11/05 12:45:45 [DEBUG] Opening new ssh session
2019/11/05 12:45:46 packer: 2019/11/05 12:45:46 [DEBUG] Starting remote scp process: scp -vt .
2019/11/05 12:45:46 packer: 2019/11/05 12:45:46 [DEBUG] Started SCP session, beginning transfers...
2019/11/05 12:45:46 packer: 2019/11/05 12:45:46 [DEBUG] scp: Uploading C:\7zip.msi: perms=C0777 size=1748480
2019/11/05 12:45:47 packer: 2019/11/05 12:45:47 [DEBUG] SCP session complete, closing stdin pipe.
2019/11/05 12:45:47 packer: 2019/11/05 12:45:47 [DEBUG] Waiting for SSH session to complete.
2019/11/05 12:45:47 packer: 2019/11/05 12:45:47 [DEBUG] non-zero exit status: 1
2019/11/05 12:45:47 packer: 2019/11/05 12:45:47 [DEBUG] scp output: ^Ascp: ./C:\7zip.msi: Protocol not available
2019/11/05 12:45:47 packer: ^Ascp: protocol error: expected control record
2019/11/05 12:45:47 ui error: ^[[1;31m==> amazon-ebs: Upload failed: Process exited with status 1^[[0m
2019/11/05 12:45:47 packer: 2019/11/05 12:45:47 closing
2019/11/05 12:45:47 closing
2019/11/05 12:45:47 packer: 2019/11/05 12:45:47 closing
2019/11/05 12:45:47 packer: 2019/11/05 12:45:47 Error in ProgressTrackingClient.Read RPC call: reading body EOF
2019/11/05 12:45:47 packer: 2019/11/05 12:45:47 Error in ProgressTrackingClient.Read RPC call: connection is shut down
2019/11/05 12:45:47 packer: 2019/11/05 12:45:47 [INFO] 884736 bytes written for 'uploadData'
2019/11/05 12:45:47 packer: 2019/11/05 12:45:47 [ERR] 'uploadData' copy error: read files/installs/7z1900-x64.msi: file already closed
2019/11/05 12:45:47 [INFO] (telemetry) ending file

amazon-import post-processor handles multiple disks improperly

This issue was originally opened by @benjamb as hashicorp/packer#8590. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

In my case, I'm using the hyperv-iso builder and have entries within disk_additional_size; when this is passed on to the amazon-import post-processor, it ends up finding the wrong disk.

The code in question isn't particularly smart. It iterates over artefacts from a builder and returns the first artefact it finds that has the specified suffix (in my case vhdx). The issue with this, on Windows at least, is that vm-0.vhdx is found before the core image without the index suffix, i.e. vm.vhdx when iterating over the artefacts.

for _, path := range artifact.Files() {
	if strings.HasSuffix(path, "."+p.config.Format) {
		source = path
		break
	}
}

It would make sense to perhaps specify which image you would like to be imported, rather than attempting to guess which one. As an additional improvement, Amazon's VM Import supports multiple disk images, so importing all of them could be implemented as well (it would still make sense to add an optional config to specify disk images, perhaps using an explicit list or regex/glob).
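A hedged sketch of the suggested selection logic (the explicit source option and function name are illustrative, not existing config):

// Sketch only: prefer an explicitly configured source file, otherwise fall
// back to the first artifact file with the expected extension.
import (
	"fmt"
	"strings"
)

func pickSource(files []string, format, explicitSource string) (string, error) {
	for _, path := range files {
		if explicitSource != "" && path == explicitSource {
			return path, nil
		}
		if explicitSource == "" && strings.HasSuffix(path, "."+format) {
			return path, nil
		}
	}
	return "", fmt.Errorf("no artifact file matched format %q", format)
}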

Packer version

1.5.1 (master)

Operating system and Environment details

Windows 10 Pro
amd64

amazon-ebs: Temporary snapshot not tagged causing permission denied

This issue was originally opened by @rednuht as hashicorp/packer#9894. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

In our company we use tags to dictate whether a role/user has access to perform actions on a resource.
When Packer tries to clean up at the end, it is unable to delete the temporary snapshot since it is untagged.

Reproduction Steps

Include a policy statement that has a condition on tags:

    {
        "Effect": "Allow",
        "Action": "ec2:*",
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "ec2:ResourceTag/Team": "MY_SPECIAL_TAG"
            }
        }
    }

Packer version

From 1.3.2 onwards (haven't tested with earlier versions).

Simplified Packer Buildfile

https://gist.github.com/rednuht/f2c62fbc47186e41d0f8e9707954bcc7

Operating system and Environment details

Not relevant.

Log Fragments and crash.log files

2020-09-04T09:44:21+02:00: Build 'amazon-ebs' errored after 52 minutes 56 seconds: Error deleting existing snapshot: retry count exhausted. Last err: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message: ENCODED MSG status code: 403, request id: 6b6df0be-3b0d-4eb7-8185-41823465f67d

Important parts from decoded message:

{
  "allowed": false,
  "explicitDeny": false,
  "matchedStatements": {
    "items": []
  },
  "failures": {
    "items": []
  },
  "context": {
    "action": "ec2:DeleteSnapshot",
    "resource": "arn:aws:ec2:eu-west-1::snapshot/snap-FOOBAR123",
  }
}

amazon chroot builder doesn't work with new EC2 instance types

This issue was originally opened by @tjbaker as hashicorp/packer#6710. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html

With the following instances, EBS volumes are exposed as NVMe block devices: C5, C5d, i3.metal, M5, M5d, R5, R5d, T3, and z1d. The device names are /dev/nvme0n1, /dev/nvme1n1, and so on. The device names that you specify in a block device mapping are renamed using NVMe device names (/dev/nvme[0-26]n1).

The Packer chroot builder does not know about this change and fails.

amazon-chroot: Checking the root device on source AMI...
amazon-chroot: Error finding available device: device prefix could not be detected

https://github.com/hashicorp/packer/blob/78c0b7bd9c1c2464c407fadbed3da4e94e627796/builder/amazon/chroot/device.go#L42-L45
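A hedged sketch of how the prefix detection could account for NVMe naming (the probing logic is illustrative, not the plugin's current code):

// Sketch only: probe the usual Xen/SCSI prefixes first, then fall back to
// NVMe-style device names such as /dev/nvme0n1.
import (
	"errors"
	"path/filepath"
)

func detectDevicePrefix() (string, error) {
	for _, prefix := range []string{"xvd", "sd"} {
		matches, _ := filepath.Glob("/dev/" + prefix + "?")
		if len(matches) > 0 {
			return prefix, nil
		}
	}
	if matches, _ := filepath.Glob("/dev/nvme*n1"); len(matches) > 0 {
		return "nvme", nil
	}
	return "", errors.New("device prefix could not be detected")
}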

Build fails with set_partition_type: Non-standard volume device "/dev/nvme0n1p1" using amazon-instance

This issue was originally opened by @dimisjim as hashicorp/packer#9495. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

I am trying to build an AMI with the amazon-instance builder.
The bundler fails with set_partition_type: Non-standard volume device "/dev/nvme0n1p1".

Reproduction Steps

run packer build

Packer version

1.5.6

Simplified Packer Buildfile

{
    "min_packer_version": "1.5.6",
    "builders": [{
      "type": "amazon-instance",
      "name": "packer-builder",
      "region": "eu-west-1",
      "instance_type": "t3.large",
      "account_id": "<my_account>",
      "s3_bucket": "<my_bucket>",
      "x509_cert_path": "<path/to/certificate.pem>",
      "x509_key_path": "<path/to/private-key.pem>",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "architecture": "x86_64",
          "name": "*amzn2-ami-hvm-2.0.*",
          "root-device-type": "ebs"
        },
        "owners": ["amazon"],
        "most_recent": true
      },
      "ssh_username": "ec2-user",
      "ami_name": "<ami_name>",
      "ami_description": "<ami_name>"
    }],
    "provisioners": [
      {
        "type": "shell",
        "script": "files/provision.sh"
      }
    ]
  }

Using this provisioning script to install the ami tools:

# Install EC2 AMI tools required by the amazon-instance packer builder
wget https://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.noarch.rpm
rpm -K ec2-ami-tools.noarch.rpm
rpm -Kv ec2-ami-tools.noarch.rpm
sudo yum install -y ec2-ami-tools.noarch.rpm
rpm -qil ec2-ami-tools | grep ec2/amitools/version
export RUBYLIB=$RUBYLIB:/usr/lib/ruby/site_ruby:/usr/lib64/ruby/site_ruby
ec2-ami-tools-version
ec2-bundle-vol --version
which ec2-bundle-vol

sudo sed -i 's#/sbin:/bin:/usr/sbin:/usr/bin#/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin#g' /etc/sudoers
sudo touch /etc/profile.d/myenvvars.sh
sudo chmod +x /etc/profile.d/myenvvars.sh
sudo bash -c 'echo "export RUBYLIB=$RUBYLIB:/usr/lib/ruby/site_ruby:/usr/lib64/ruby/site_ruby" >> /etc/profile.d/myenvvars.sh'
sudo bash -c 'echo "export PATH=/usr/local/bin:$PATH:" >> /etc/profile.d/myenvvars.sh'

Operating system and Environment details

Running packer in Ubuntu 20.04

Log Fragments and crash.log files

==> packer-builder: Uploading X509 Certificate...
==> packer-builder: Bundling the volume...
==> packer-builder: /usr/lib/ruby/site_ruby/ec2/platform/linux/image.rb:253:in `set_partition_type': Non-standard volume device "/dev/nvme0n1p1" (FatalError)
==> packer-builder: 	from /usr/lib/ruby/site_ruby/ec2/platform/linux/image.rb:71:in `initialize'
    packer-builder: Setting partition type to bundle "/" with...
==> packer-builder: 	from /usr/lib/ruby/site_ruby/ec2/amitools/bundlevol.rb:172:in `new'
==> packer-builder: 	from /usr/lib/ruby/site_ruby/ec2/amitools/bundlevol.rb:172:in `bundle_vol'
==> packer-builder: 	from /usr/lib/ruby/site_ruby/ec2/amitools/bundlevol.rb:231:in `main'
==> packer-builder: 	from /usr/lib/ruby/site_ruby/ec2/amitools/tool_base.rb:201:in `run'
==> packer-builder: 	from /usr/lib/ruby/site_ruby/ec2/amitools/bundlevol.rb:239:in `<main>'
==> packer-builder: Volume bundling failed. Please see the output above for more
==> packer-builder: details on what went wrong.
==> packer-builder: 
==> packer-builder: One common cause for this error is ec2-bundle-vol not being
==> packer-builder: available on the target instance.
==> packer-builder: Provisioning step had errors: Running the cleanup provisioner, if present...
==> packer-builder: Terminating the source AWS instance...

EBS Builder - Allow additional region_kms_key_ids specified but not used

This issue was originally opened by @atheiman as hashicorp/packer#10371. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


I'm trying to use the same Packer template to build an AMI in multiple regions in the AWS commercial (us-east-1, us-west-1) and govcloud (us-gov-east-1, us-gov-west-1) partitions.

I have a variable for the regions to build in (commercial: "us-east-1,us-west-1"; govcloud: "us-gov-east-1,us-gov-west-1") that works fine for the ami_regions config key. However, when I want to copy the AMI with encryption, I need to set the region_kms_key_ids config key to an object:

// commercial
"region_kms_key_ids": {
  "us-east-1": "alias/my-key",
  "us-west-1": "alias/my-key"
}
// govcloud
"region_kms_key_ids": {
  "us-gov-east-1": "alias/my-key",
  "us-gov-west-1": "alias/my-key"
}

Looks like object variables ({"a": "A", "b": "B"}) are not possible in Packer - maybe this will change with the HCL support that's in progress?

My next thought was to declare all the regions (commercial and govcloud) in region_kms_key_ids, so that only the regions listed in ami_regions would be looked up from the object:

"region_kms_key_ids": {
  "us-east-1": "alias/my-key",
  "us-west-1": "alias/my-key",
  "us-gov-east-1": "alias/my-key",
  "us-gov-west-1": "alias/my-key"
}

This doesn't seem to work: if additional regional KMS key IDs are declared in region_kms_key_ids but not consumed in ami_regions, Packer fails a validation check:

https://github.com/hashicorp/packer/blob/e89db37717b8a7696d636d88e0adc1f5c8311aed/builder/amazon/common/ami_config.go#L174-L182

I'm guessing there is a reason for this validation check, but it's not apparent to me right now. I'd like to be able to specify additional regional KMS key IDs in region_kms_key_ids that are not consumed in ami_regions to simplify my template.

My workaround right now is processing the template through jq before running packer build ... in order to set the region_kms_key_ids with either the govcloud or commercial regional KMS key IDs. Another possible workaround would be to use variables in the keys of the "region_kms_key_ids" config key, but it looks like Packer does not allow variables in config keys, only in config values.

If HCL packer will allow object variables, then I suppose this could be closed and the feature will effectively be implemented that way.
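If HCL2 does allow map variables, a hedged sketch of that approach (names are illustrative, and it assumes HCL2 map variables and the keys function are available):

variable "region_kms_key_ids" {
  type = map(string)
  default = {
    "us-east-1" = "alias/my-key"
    "us-west-1" = "alias/my-key"
  }
}

source "amazon-ebs" "example" {
  # ...
  ami_regions        = keys(var.region_kms_key_ids)
  region_kms_key_ids = var.region_kms_key_ids
}

Deriving ami_regions from the keys of the map keeps the two settings consistent, so the validation check is never tripped.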

amazon-ebs authentication failures with ansible provisioner

This issue was originally opened by @justinhohner as hashicorp/packer#6255. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


After the packer ssh proxy establishes a connection, ansible is not able to authenticate to the instance. Authentication fails either with a message similar to "Failed to connect to the host via ssh ... too many authentication failures..." or by asking for a password; the difference is likely due to ssh policy on different hosts. I am able to use the packer-generated ssh to connect to the instance via the packer ssh proxy.

Host platform:
macOS 10.13.4

Packer version:
1.2.3

PACKER LOG:
https://gist.github.com/justinhohner/e85ad4957790833e1004d4bb882435e2

JSON file:
https://gist.github.com/justinhohner/7437bfc9c5db47c2b97a3419242ba59f

boolean to make run_tags the same as tags

This issue was originally opened by @nitrocode as hashicorp/packer#8695. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.



Feature Description

I'd like to make run_tags the same as tags using a boolean like "tags_same" = true, or similar, which would make all tagging the same as what is defined in tags.

Use Case(s)

Tired of adding new tags to one but not the other. I'd like to keep things DRY as much as possible.
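As a point of comparison, an HCL2 template can already keep the two in sync with a shared local; a workaround sketch (not the requested boolean, and the names are illustrative):

locals {
  common_tags = {
    Name        = "my-ami"
    environment = "prod"
  }
}

source "amazon-ebs" "example" {
  # ...
  tags     = local.common_tags
  run_tags = local.common_tags
}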

AWS WorkSpaces Image/Bundle Builder

This issue was originally opened by @Tensho as hashicorp/packer#9916. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Community Note

Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request.
If you are interested in working on this issue or have submitted a pull request, please leave a comment.

Description

AWS WorkSpaces can be run from a bundle. A bundle is a combination of an operating system, storage, compute, and software resources. A bundle is based on an image, and an image is essentially the same as an EC2 AMI. It would be nice to build them with Packer.

Right now there is no AWS WorkSpace API action to take (create) an image from the workspace. I've submitted a feature request for it in AWS Support.

Use Case

I have a lot of AWS WorkSpaces in my current organization. They have the same Linux baseline (essential system packages, common applications) and deviations depending on the department (some extra configuration tuning or applications installed). It requires a lot of manual actions to introduce any updates to the images. I have to launch a pristine workspace from the baseline image, install the requested software, bake an image, destroy the workspace, and then create a bundle from the new image for a new department or update the existing bundle with the new image for the existing department. AWS proposes managing Linux workspaces with OpsWorks (Puppet Enterprise). I'd prefer to manage WorkSpaces images with Packer, leveraging all the benefits of immutable infrastructure.

[Diagram: ManageWorkspacesImageBundleUpdate]

Potential configuration

{
    "builders": [
        {
            "type": "amazon-workspace",
            "image_regions": "us-east-1,eu-central-1",
            "image_users": "111111111111,222222222222",
            "image_name": "acme-linux-{{ isotime \"2006-01-02-15-04-05\" }}",
            "source_bundle_filter": {
                "filters": {
                    "id": "wsb-1wpvxgh6p",
                },
                "owners": [
                    "111111111111"
                ]
            },
            "vpc_filter": {
                "filters": {
                    "tag:Name": "main"
                }
            },
            "subnet_filter": {
                "filters": {
                    "tag:Tier": "private"
                },
                "random": true
            },
            "security_group_filter": {
                "filters": {
                    "tag:Name": "packer"
                }
            },
            "region": "us-east-1",
            "ssh_username": "workspaces\packer",
            "tags": {
                "timestamp": "{{ timestamp }}"
            }
        }
    ],
    "provisioners": [
        {
            "type": "ansible",
            "playbook_file": "ansible/ansible.yml",
            "user": "ec2-user",
            "extra_arguments": [
                "--become",
                "--extra-vars",
                "ami=true"
            ],
            "ansible_env_vars": [
                "ANSIBLE_HOST_KEY_CHECKING=False",
                "ANSIBLE_NOCOLOR=True",
                "ANSIBLE_SSH_ARGS='-o ControlPath=/dev/shm/control:%h:%p:%r'"
            ]
        }
    ]
}

amazon-ebs timeout - Error waiting for instance to stop reoccur when building on newly available mac instances

This issue was originally opened by @OliverKoo as hashicorp/packer#10688. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


When building on the newly released mac1.metal instances, which run on a dedicated host, a timeout occurs when waiting for the instance to stop:

==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Error waiting for instance to stop: ResourceNotReady: exceeded wait attempts
==> amazon-ebs: Pausing before cleanup of step 'StepCleanupTempKeys'. Press enter to continue. 
==> amazon-ebs: Pausing before cleanup of step 'StepProvision'. Press enter to continue. 
==> amazon-ebs: Provisioning step had errors: Running the cleanup provisioner, if present...
==> amazon-ebs: Pausing before cleanup of step 'StepSetGeneratedData'. Press enter to continue. 
==> amazon-ebs: Pausing before cleanup of step 'StepConnect'. Press enter to continue. 
==> amazon-ebs: Pausing before cleanup of step 'StepCreateSSMTunnel'. Press enter to continue. 
==> amazon-ebs: Pausing before cleanup of step 'StepGetPassword'. Press enter to continue. 
==> amazon-ebs: Pausing before cleanup of step 'StepRunSourceInstance'. Press enter to continue. 
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Pausing before cleanup of step 'StepCleanupVolumes'. Press enter to continue. 
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: Pausing before cleanup of step 'StepIamInstanceProfile'. Press enter to continue. 
==> amazon-ebs: Pausing before cleanup of step 'StepSecurityGroup'. Press enter to continue. 
==> amazon-ebs: Pausing before cleanup of step 'StepKeyPair'. Press enter to continue. 
==> amazon-ebs: Deleting temporary keypair...
==> amazon-ebs: Pausing before cleanup of step 'StepNetworkInfo'. Press enter to continue. 
==> amazon-ebs: Pausing before cleanup of step 'StepSourceAMIInfo'. Press enter to continue. 
==> amazon-ebs: Pausing before cleanup of step 'StepPreValidate'. Press enter to continue. 
2021/02/22 21:33:00 [INFO] (telemetry) ending amazon-ebs
Build 'amazon-ebs' errored after 1 hour 28 minutes: Error waiting for instance to stop: ResourceNotReady: exceeded wait attempts

Above are the logs I get when building on a mac1.metal instance. I didn't provision anything on top of the source AMI; the AMI I use is the latest Catalina AMI provided by Amazon (ami-07f480f3fa002bc15).
I was able to get around this by setting AWS_MAX_ATTEMPTS=300 and AWS_POLL_DELAY_SECONDS=30.

Similar issues #6526 and #6536 happened before but were fixed with the raised timeout in 1.3.0.

These Mac EC2 instances are fairly new to the AWS world; they were made available just short of 3 months ago, on 11/30/2020. Because they run on a dedicated host (a Mac mini), terminating an instance triggers a very lengthy scrubbing process to ensure the next user does not get any info from disk, memory, or NVRAM. AWS doesn't release info about how long this process takes, but it is about 1h30m. I got this number from my own experiments and another user's report (https://dev.to/svasylenko/mac1-metal-ec2-instance-user-experience-j08); see the Destroying the Instance section.

The default limit should probably be raised when building on mac1.metal instances.
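For reference, the same override can also be expressed as builder configuration rather than environment variables; a hedged HCL2 sketch assuming the aws_polling block documented for the amazon builders (values are illustrative):

source "amazon-ebs" "mac" {
  # ...
  aws_polling {
    delay_seconds = 30
    max_attempts  = 300
  }
}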

Session Manager SSH hanging on shell provisioner

This issue was originally opened by @artis3n as hashicorp/packer#10584. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

I am attempting to create an AMI using the amazon-ebs builder. My provisioners are an Ansible playbook, a shell provisioner that restarts the server, then another shell provisioner to verify everything is OK after the reboot. When using ssh_interface: session_manager, Packer freezes trying to open a new SSH session for the final shell provisioner (it works fine on the first two). I can start a new Session Manager session through the AWS console to the packer builder machine during this period where it hangs locally.

I can change the ssh_interface to public_ip and the AMI build completes in ~31 minutes. The hanging is consistent at the same place when I use session_manager.

This seems materially different from these existing issues with similar-sounding titles - hashicorp/packer#10424 , hashicorp/packer#10508

Reproduction Steps

  1. Create Packer file. If it matters, I am using the new HCL2 format.
  2. Run PACKER_LOG=1 packer build wiki.pkr.hcl
  3. Observe the build hangs at the final shell provisioner
==> Personal Wiki.amazon-ebs.wiki: Pausing 1m0s before the next provisioner...
==> Personal Wiki.amazon-ebs.wiki: Provisioning with shell script: /tmp/packer-shell694354012
2021/02/06 20:00:40 packer-provisioner-shell plugin: Opening /tmp/packer-shell694354012 for reading
2021/02/06 20:00:40 packer-provisioner-shell plugin: [INFO] 66 bytes written for 'uploadData'
2021/02/06 20:00:40 [INFO] 66 bytes written for 'uploadData'
2021/02/06 20:00:40 packer-builder-amazon-ebs plugin: [DEBUG] Opening new ssh session

Packer version

➜ packer version
Packer v1.6.6

Simplified Packer Buildfile

Latest version of the file can be found here.
I receive the error with the file included below, in case I've materially changed the file since creating this issue.

Packer HCL2 setup
source "amazon-ebs" "wiki" {
  access_key              = var.aws_access_key
  secret_key              = var.aws_secret_key
  ami_description         = "Gollum wiki hosted on AWS"
  ami_name                = "${var.ami_name}-${local.timestamp}"
  ami_virtualization_type = "hvm"
  iam_instance_profile    = var.iam_instance_profile
  instance_type           = var.instance_type[var.architecture]
  region                  = var.aws_region
  ssh_interface           = "session_manager"
  ssh_username            = var.ec2_username

  launch_block_device_mappings {
    delete_on_termination = true
    device_name           = "/dev/xvda"
    encrypted             = true
    kms_key_id            = var.kms_key_id_or_alias
    volume_size           = var.disk_size
    volume_type           = var.disk_type
    throughput            = var.disk_throughput
    iops                  = var.disk_iops
  }

  source_ami_filter {
    filters = {
      architecture        = var.architecture
      name                = "amzn2-ami-hvm*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["amazon"]
  }

  tags = {
    Base_AMI      = "{{ .SourceAMI }}"
    Base_AMI_Name = "{{ .SourceAMIName }}"
  }
}

build {
  sources = ["source.amazon-ebs.wiki"]
  name    = "Personal Wiki"

  provisioner "shell" {
    inline = [
      "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done",
      "echo Beginning to build ${build.ID}",
      "echo Connected via SSM at '${build.User}@${build.Host}:${build.Port}'"
    ]
  }

  provisioner "shell" {
    inline = [
      "sudo yum update -y",
      "sudo yum install -y python3 python3-pip python3-wheel python3-setuptools coreutils shadow-utils yum-utils"
    ]
  }

  provisioner "ansible" {
    galaxy_file      = "packer/ansible/requirements.yml"
    host_alias       = "wiki"
    playbook_file    = "packer/ansible/main.yml"
    user             = var.ec2_username
    ansible_env_vars = ["ANSIBLE_VAULT_PASSWORD_FILE=${var.ansible_vault_pwd_file}"]
  }

  provisioner "shell" {
    inline = ["sudo reboot"]
    expect_disconnect = true
  }

  provisioner "shell" {
    inline       = ["echo ${build.ID} rebooted, done provisioning"]
    pause_before = "1m"
  }

}

# "timestamp" template function replacement
locals { timestamp = regex_replace(timestamp(), "[- TZ:]", "") }

variable "ec2_username" {
  type        = string
  description = "The username of the default user on the EC2 instance."
  default     = "ec2-user"
}

variable "ami_name" {
  type        = string
  description = "The name of the AMI that gets generated."
  default     = "packer-gollum-wiki"
}

variable "architecture" {
  type        = string
  description = "The type of source AMI architecture: either x86_64 or arm64."
  default     = "arm64"
}

variable "aws_access_key" {
  type        = string
  description = "AWS_ACCESS_KEY_ID env var."
  default     = env("AWS_ACCESS_KEY_ID")
}

variable "aws_region" {
  type        = string
  description = "The AWS region to create the image in. Defaults to us-east-2."
  default     = "us-east-2"
}

variable "aws_secret_key" {
  type        = string
  description = "AWS_SECRET_ACCESS_KEY env var."
  default     = env("AWS_SECRET_ACCESS_KEY")
  sensitive   = true
}

variable "disk_size" {
  type        = number
  description = "The size of the EBS volume to create."
  default     = 15
}

variable "disk_type" {
  type        = string
  description = "The type of EBS volume to create. Defaults to gp3."
  default     = "gp3"
}

variable "disk_throughput" {
  type        = number
  description = "The MB/s of throughput for the EBS volume. For GP3 volumes, this defaults to 125."
  default     = 125
}

variable "disk_iops" {
  type        = number
  description = "The IOPS for the EBS volume. For GP3 volumes, this defaults to 3000."
  default     = 3000
}

variable "iam_instance_profile" {
  type        = string
  default     = "AmazonSSMRoleForInstancesQuickSetup"
  description = "IAM instance profile configured for AWS Session Manager. Defaults to the default AWS role for Session Manager."
}

variable "instance_type" {
  type        = map(string)
  description = "The type of EC2 instance to create. Defaults are set for x86_64 and arm64 architectures. Overwrite the one that you want by architecture."
  default = {
    "x86_64" : "t3.micro",
    "arm64" : "t4g.micro"
  }
}

variable "kms_key_id_or_alias" {
  type        = string
  description = "The KMS key ID or alias to encrypt the AMI with. Defaults to the default EBS key alias."
  default     = "alias/aws/ebs"
}

variable "ansible_vault_pwd_file" {
  type        = string
  description = "The relative or absolute path to the Ansible Vault password file."
  default     = env("ANSIBLE_VAULT_PASSWORD_FILE")
}

Operating system and Environment details

➜ cat /etc/lsb-release
DISTRIB_ID=Pop
DISTRIB_RELEASE=20.10
DISTRIB_CODENAME=groovy
DISTRIB_DESCRIPTION="Pop!_OS 20.10"

PopOS should be equivalent to Ubuntu.

Log Fragments and crash.log files

2021/02/06 19:59:40 [INFO] (telemetry) Starting provisioner shell
==> Personal Wiki.amazon-ebs.wiki: Pausing 1m0s before the next provisioner...
==> Personal Wiki.amazon-ebs.wiki: Provisioning with shell script: /tmp/packer-shell694354012
2021/02/06 20:00:40 packer-provisioner-shell plugin: Opening /tmp/packer-shell694354012 for reading
2021/02/06 20:00:40 packer-provisioner-shell plugin: [INFO] 66 bytes written for 'uploadData'
2021/02/06 20:00:40 [INFO] 66 bytes written for 'uploadData'
2021/02/06 20:00:40 packer-builder-amazon-ebs plugin: [DEBUG] Opening new ssh session

Once I pressed ctrl+c to end the command execution, I got the following logs. Not sure if that's expected with user interruption or if there are useful nuggets in here.

2021/02/06 20:54:30 packer-provisioner-shell plugin: Received interrupt signal (count: 1). Ignoring.
Cancelling build after receiving interrupt
2021/02/06 20:54:30 packer-provisioner-shell-local plugin: Received interrupt signal (count: 1). Ignoring.
2021/02/06 20:54:30 Cancelling builder after context cancellation context canceled
    Personal Wiki.amazon-ebs.wiki: Terminate signal received, exiting.
2021/02/06 20:54:30 packer-provisioner-shell plugin: Received interrupt signal (count: 1). Ignoring.
2021/02/06 20:54:30 packer-builder-amazon-ebs plugin: Received interrupt signal (count: 1). Ignoring.
2021/02/06 20:54:30 packer-provisioner-ansible plugin: Received interrupt signal (count: 1). Ignoring.
2021/02/06 20:54:30 packer-provisioner-shell plugin: Received interrupt signal (count: 1). Ignoring.
==> Personal Wiki.amazon-ebs.wiki: Terminating the source AWS instance...
2021/02/06 20:54:30 packer-builder-amazon-ebs plugin: [ERROR] ssh session open error: 'ssh: unexpected packet in response to channel open: <nil>', attempting reconnect
2021/02/06 20:54:30 packer-builder-amazon-ebs plugin: [DEBUG] reconnecting to TCP connection for SSH
2021/02/06 20:54:30 packer-builder-amazon-ebs plugin: Cancelling provisioning due to context cancellation: context canceled
2021/02/06 20:54:30 packer-builder-amazon-ebs plugin: Cancelling hook after context cancellation context canceled
    Personal Wiki.amazon-ebs.wiki: Exiting session with sessionId: terraform-0a5c79bb8713db77e.
2021/02/06 20:54:30 Cancelling provisioner after context cancellation context canceled
2021/02/06 20:54:30 packer-builder-amazon-ebs plugin: [DEBUG] handshaking with SSH
    Personal Wiki.amazon-ebs.wiki: Cannot perform start session: write tcp 192.168.1.162:59110->52.95.19.43:443: write: broken pipe
2021/02/06 20:54:30 packer-provisioner-shell plugin: Retryable error: Error uploading script: ssh: handshake failed: read tcp 127.0.0.1:36674->127.0.0.1:8772: read: connection reset by peer
2021/02/06 20:54:30 [INFO] (telemetry) ending shell
==> Personal Wiki.amazon-ebs.wiki: Cleaning up any extra volumes...
==> Personal Wiki.amazon-ebs.wiki: No volumes to clean up, skipping
==> Personal Wiki.amazon-ebs.wiki: Deleting temporary security group...
==> Personal Wiki.amazon-ebs.wiki: Deleting temporary keypair...
2021/02/06 20:55:17 [INFO] (telemetry) ending 
==> Wait completed after 1 hour 22 minutes
2021/02/06 20:55:17 [INFO] (telemetry) Finalizing.

amazon-ebssurrogate not able to lookup source volume id when using spot instances

This issue was originally opened by @sergey-safarov as hashicorp/packer#9347. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

Issue related to hashicorp/packer#8777 and reproduced on current master.

When using spot instances with the amazon-ebssurrogate builder, the builder is not able to look up the ID of the source volume(s). This appears to be because it issues the describe-instance call to AWS after it has stopped the EC2 instance, and not before.
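A hedged sketch of capturing the device-to-volume mapping while the instance is still running (function and variable names are illustrative, not the builder's actual step code):

// Sketch only: record device name -> volume ID before StopInstances or spot
// termination, so the mapping is still resolvable afterwards.
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func recordLaunchVolumes(ec2conn *ec2.EC2, instanceID string) (map[string]string, error) {
	resp, err := ec2conn.DescribeInstances(&ec2.DescribeInstancesInput{
		InstanceIds: []*string{aws.String(instanceID)},
	})
	if err != nil {
		return nil, err
	}
	volumes := map[string]string{}
	for _, bdm := range resp.Reservations[0].Instances[0].BlockDeviceMappings {
		if bdm.Ebs != nil {
			volumes[aws.StringValue(bdm.DeviceName)] = aws.StringValue(bdm.Ebs.VolumeId)
		}
	}
	return volumes, nil
}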

Reproduction Steps

Try to use spot instances with amazon-ebssurrogate; you'll get the following error regardless of the value of delete_on_termination in the launch_block_device_mappings config:

==> amazon-ebssurrogate: 1 error occurred:
==> amazon-ebssurrogate: 	* Volume ID for device /dev/xvdf not found
==> amazon-ebssurrogate:
==> amazon-ebssurrogate:

Simply replacing the spot_* configuration options with instance_type works.

Packer version

[safarov@safarov-dell intetics]$ packer version
Packer v1.6.0-dev (19e4afa728c02104d2c0571e6c6fb86278009a75)

Simplified Packer Buildfile

Fails:

{
  "builders": [
    {
      "ami_name": "packer debug {{ user `BuildTime` }}",
      "ami_virtualization_type": "hvm",

      "type": "amazon-ebssurrogate",
      "spot_instance_types": [
        "c5.large",
        "c5.xlarge"
      ],
      "spot_price": "auto",

      "launch_block_device_mappings": [
        {
          "volume_type": "gp2",
          "device_name": "{{ user `DeviceName` }}",
          "delete_on_termination": false,
          "volume_size": 20
        }
      ],

      "ami_root_device": {
        "delete_on_termination": true,
        "device_name": "/dev/xvda",
        "source_device_name": "{{ user `DeviceName` }}",
        "volume_size": 20,
        "volume_type": "gp2"
      }
    }
  ]
}

Works:

{
  "builders": [
    {
      "ami_name": "packer debug {{ user `BuildTime` }}",
      "ami_virtualization_type": "hvm",

      "type": "amazon-ebssurrogate",
      "instance_type": "c5.large",

      "launch_block_device_mappings": [
        {
          "volume_type": "gp2",
          "device_name": "{{ user `DeviceName` }}",
          "delete_on_termination": false,
          "volume_size": 20
        }
      ],

      "ami_root_device": {
        "delete_on_termination": true,
        "device_name": "/dev/xvda",
        "source_device_name": "{{ user `DeviceName` }}",
        "volume_size": 20,
        "volume_type": "gp2"
      }
    }
  ]
}

Operating system and Environment details

Tested on Linux.

Log Fragments and crash.log files

2020/06/02 22:13:45 packer-builder-amazon-ebssurrogate plugin: [DEBUG] Opening new ssh session
2020/06/02 22:13:45 packer-builder-amazon-ebssurrogate plugin: [DEBUG] starting remote command: rm -f /tmp/script_6506.sh
2020/06/02 22:13:45 packer-builder-amazon-ebssurrogate plugin: [INFO] RPC endpoint: Communicator ended with: 0
2020/06/02 22:13:45 [INFO] RPC client: Communicator ended with: 0
2020/06/02 22:13:45 [INFO] RPC endpoint: Communicator ended with: 0
2020/06/02 22:13:45 packer-provisioner-shell plugin: [INFO] RPC client: Communicator ended with: 0
2020/06/02 22:13:45 packer-builder-amazon-ebssurrogate plugin: [DEBUG] Opening new ssh session
2020/06/02 22:13:45 packer-builder-amazon-ebssurrogate plugin: [DEBUG] starting remote command: rm -f
2020/06/02 22:13:45 packer-builder-amazon-ebssurrogate plugin: [INFO] RPC endpoint: Communicator ended with: 0
2020/06/02 22:13:45 [INFO] RPC client: Communicator ended with: 0
2020/06/02 22:13:45 [INFO] RPC endpoint: Communicator ended with: 0
2020/06/02 22:13:45 packer-provisioner-shell plugin: [INFO] RPC client: Communicator ended with: 0
2020/06/02 22:13:45 [INFO] (telemetry) ending shell
==> amazon-ebssurrogate: 	* Volume ID for device /dev/sdf not found
==> amazon-ebssurrogate: 
==> amazon-ebssurrogate:
==> amazon-ebssurrogate: 1 error occurred:
==> amazon-ebssurrogate: 	* Volume ID for device /dev/sdf not found
==> amazon-ebssurrogate: 
==> amazon-ebssurrogate:
==> amazon-ebssurrogate: Provisioning step had errors: Running the cleanup provisioner, if present...
==> amazon-ebssurrogate: Terminating the source AWS instance...
==> amazon-ebssurrogate: Cleaning up any extra volumes...
==> amazon-ebssurrogate: No volumes to clean up, skipping
==> amazon-ebssurrogate: Deleting temporary security group...
==> amazon-ebssurrogate: Deleting temporary keypair...
2020/06/02 22:14:06 [INFO] (telemetry) ending amazon-ebssurrogate
	* Volume ID for device /dev/sdf not found

Build 'amazon-ebssurrogate' errored: 1 error occurred:

	* Volume ID for device /dev/sdf not found
2020/06/02 22:14:06 machine readable: error-count []string{"1"}
==> Some builds didn't complete successfully and had errors:

2020/06/02 22:14:06 machine readable: amazon-ebssurrogate,error []string{"1 error occurred:\n\t* Volume ID for device /dev/sdf not found\n\n"}
	* Volume ID for device /dev/sdf not found

==> Builds finished but no artifacts were created.

2020/06/02 22:14:06 [INFO] (telemetry) Finalizing.

==> Some builds didn't complete successfully and had errors:
--> amazon-ebssurrogate: 1 error occurred:
	* Volume ID for device /dev/sdf not found



==> Builds finished but no artifacts were created.

amazon-ebs builder timeout error for AWS API endpoints

This issue was originally opened by @ranjb as hashicorp/packer#6162. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


The amazon-ebs builder randomly times out when interacting with the AWS API. This can happen at any stage during the build: at the beginning, at the end after the AMI is created and Packer is trying to shut down the instance, or sometimes when attempting to tag the AMI. The build script runs more than one packer build in sequence; sometimes the first one builds fine while the second times out, and sometimes it fails right at the beginning of the first.

Example:
--> amazon-ebs: Error stopping instance: RequestError: send request failed
caused by: Post https://ec2.us-east-1.amazonaws.com/: dial tcp 54.239.28.176:443: i/o timeout

AuthFailure: AWS was not able to validate the provided access credentials / status code: 401

This issue was originally opened by @micchickenburger as hashicorp/packer#10302. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

Packer throws a 401 unauthorized error when using aws-vault. Terraform works fine, as well as any other aws command.

Reproduction Steps

In all examples, replace <profile> with the name of the pre-created aws-vault profile.

$ aws-vault exec <profile> -- packer build packer.json

However, these all work fine:

$ aws-vault exec <profile> -- terraform apply
...
$ aws-vault exec <profile> -- aws s3 ls
...

And the environment:

$ aws-vault exec <profile> -- env | grep AWS
AWS_VAULT=[redacted]
AWS_DEFAULT_REGION=us-east-2
AWS_REGION=us-east-2
AWS_ACCESS_KEY_ID=[redacted]
AWS_SECRET_ACCESS_KEY=[redacted]
AWS_SESSION_TOKEN=[redacted]
AWS_SECURITY_TOKEN=[redacted]
AWS_SESSION_EXPIRATION=2020-11-24T10:00:24Z

Packer version

Packer version: 1.6.5 [go1.15.3 darwin amd64]

Simplified Packer Buildfile

{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
    "aws_region": "{{env `AWS_DEFAULT_REGION`}}",
    "environment": "{{env `TF_VAR_environment`}}"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "region": "{{user `aws_region`}}",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "ubuntu/images/*ubuntu-bionic-18.04-amd64-server-*",
          "root-device-type": "ebs"
        },
        "owners": ["099720109477"],
        "most_recent": true
      },
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "app-{{user `environment`}}-{{timestamp}}"
    }
  ],
  "provisioners": []
}
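One detail worth checking when reading the template above (an observation, not a confirmed diagnosis): aws-vault exports temporary credentials that include AWS_SESSION_TOKEN, while the template only forwards the access key and secret key. The builder's token option can pass it through; a hedged sketch of the extra lines:

"aws_session_token": "{{env `AWS_SESSION_TOKEN`}}"   (added to "variables")
"token": "{{user `aws_session_token`}}"              (added to the amazon-ebs builder block)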

Operating system and Environment details

macOS Big Sur 11.0.1, aws-vault version 6.2.0

Log Fragments and crash.log files

$ aws-vault exec <profile> -- packer build packer.json
2020/11/24 03:09:01 [INFO] Packer version: 1.6.5 [go1.15.3 darwin amd64]
2020/11/24 03:09:01 Checking 'PACKER_CONFIG' for a config file path
2020/11/24 03:09:01 'PACKER_CONFIG' not set; checking the default config file path
2020/11/24 03:09:01 Attempting to open config file: /[redacted]/.packerconfig
2020/11/24 03:09:01 [WARN] Config file doesn't exist: /[redacted]/.packerconfig
2020/11/24 03:09:01 Setting cache directory: /[redacted]/packer_cache
2020/11/24 03:09:01 Creating plugin client for path: /usr/local/bin/packer
2020/11/24 03:09:01 Starting plugin: /usr/local/bin/packer []string{"/usr/local/bin/packer", "plugin", "packer-builder-amazon-ebs"}
2020/11/24 03:09:01 Waiting for RPC address for: /usr/local/bin/packer
2020/11/24 03:09:01 packer-builder-amazon-ebs plugin: [INFO] Packer version: 1.6.5 [go1.15.3 darwin amd64]
2020/11/24 03:09:01 packer-builder-amazon-ebs plugin: Checking 'PACKER_CONFIG' for a config file path
2020/11/24 03:09:01 packer-builder-amazon-ebs plugin: 'PACKER_CONFIG' not set; checking the default config file path
2020/11/24 03:09:01 packer-builder-amazon-ebs plugin: Attempting to open config file: /[redacted]/.packerconfig
2020/11/24 03:09:01 packer-builder-amazon-ebs plugin: [WARN] Config file doesn't exist: /[redacted]/.packerconfig
2020/11/24 03:09:01 packer-builder-amazon-ebs plugin: Setting cache directory: /[redacted]/packer_cache
2020/11/24 03:09:01 packer-builder-amazon-ebs plugin: args: []string{"packer-builder-amazon-ebs"}
2020/11/24 03:09:01 Received unix RPC address for /usr/local/bin/packer: addr is /var/folders/kd/0qsg7379351cw6j40x_pg_740000gn/T/packer-plugin703595027
2020/11/24 03:09:01 packer-builder-amazon-ebs plugin: Plugin address: unix /var/folders/kd/0qsg7379351cw6j40x_pg_740000gn/T/packer-plugin703595027
2020/11/24 03:09:01 packer-builder-amazon-ebs plugin: Waiting for connection...
2020/11/24 03:09:01 packer-builder-amazon-ebs plugin: Serving a plugin connection...
2020/11/24 03:09:01 Creating plugin client for path: /usr/local/bin/packer
2020/11/24 03:09:01 Starting plugin: /usr/local/bin/packer []string{"/usr/local/bin/packer", "plugin", "packer-provisioner-shell"}
2020/11/24 03:09:01 Waiting for RPC address for: /usr/local/bin/packer
2020/11/24 03:09:01 packer-provisioner-shell plugin: [INFO] Packer version: 1.6.5 [go1.15.3 darwin amd64]
2020/11/24 03:09:01 packer-provisioner-shell plugin: Checking 'PACKER_CONFIG' for a config file path
2020/11/24 03:09:01 packer-provisioner-shell plugin: 'PACKER_CONFIG' not set; checking the default config file path
2020/11/24 03:09:01 packer-provisioner-shell plugin: Attempting to open config file: /[redacted]/.packerconfig
2020/11/24 03:09:01 packer-provisioner-shell plugin: [WARN] Config file doesn't exist: /[redacted]/.packerconfig
2020/11/24 03:09:01 packer-provisioner-shell plugin: Setting cache directory: /[redacted]/packer_cache
2020/11/24 03:09:01 packer-provisioner-shell plugin: args: []string{"packer-provisioner-shell"}
2020/11/24 03:09:01 Received unix RPC address for /usr/local/bin/packer: addr is /var/folders/kd/0qsg7379351cw6j40x_pg_740000gn/T/packer-plugin096613727
2020/11/24 03:09:01 packer-provisioner-shell plugin: Plugin address: unix /var/folders/kd/0qsg7379351cw6j40x_pg_740000gn/T/packer-plugin096613727
2020/11/24 03:09:01 packer-provisioner-shell plugin: Waiting for connection...
2020/11/24 03:09:01 packer-provisioner-shell plugin: Serving a plugin connection...
2020/11/24 03:09:01 Creating plugin client for path: /usr/local/bin/packer
2020/11/24 03:09:01 Starting plugin: /usr/local/bin/packer []string{"/usr/local/bin/packer", "plugin", "packer-provisioner-file"}
2020/11/24 03:09:01 Waiting for RPC address for: /usr/local/bin/packer
2020/11/24 03:09:01 packer-provisioner-file plugin: [INFO] Packer version: 1.6.5 [go1.15.3 darwin amd64]
2020/11/24 03:09:01 packer-provisioner-file plugin: Checking 'PACKER_CONFIG' for a config file path
2020/11/24 03:09:01 packer-provisioner-file plugin: 'PACKER_CONFIG' not set; checking the default config file path
2020/11/24 03:09:01 packer-provisioner-file plugin: Attempting to open config file: /[redacted]/.packerconfig
2020/11/24 03:09:01 packer-provisioner-file plugin: [WARN] Config file doesn't exist: /[redacted]/.packerconfig
2020/11/24 03:09:01 packer-provisioner-file plugin: Setting cache directory: /[redacted]/packer_cache
2020/11/24 03:09:01 packer-provisioner-file plugin: args: []string{"packer-provisioner-file"}
2020/11/24 03:09:01 packer-provisioner-file plugin: Plugin address: unix /var/folders/kd/0qsg7379351cw6j40x_pg_740000gn/T/packer-plugin074951819
2020/11/24 03:09:01 packer-provisioner-file plugin: Waiting for connection...
2020/11/24 03:09:01 Received unix RPC address for /usr/local/bin/packer: addr is /var/folders/kd/0qsg7379351cw6j40x_pg_740000gn/T/packer-plugin074951819
2020/11/24 03:09:01 packer-provisioner-file plugin: Serving a plugin connection...
2020/11/24 03:09:01 Creating plugin client for path: /usr/local/bin/packer
2020/11/24 03:09:01 Starting plugin: /usr/local/bin/packer []string{"/usr/local/bin/packer", "plugin", "packer-provisioner-shell"}
2020/11/24 03:09:01 Waiting for RPC address for: /usr/local/bin/packer
2020/11/24 03:09:01 packer-provisioner-shell plugin: [INFO] Packer version: 1.6.5 [go1.15.3 darwin amd64]
2020/11/24 03:09:01 packer-provisioner-shell plugin: Checking 'PACKER_CONFIG' for a config file path
2020/11/24 03:09:01 packer-provisioner-shell plugin: 'PACKER_CONFIG' not set; checking the default config file path
2020/11/24 03:09:01 packer-provisioner-shell plugin: Attempting to open config file: /[redacted]/.packerconfig
2020/11/24 03:09:01 packer-provisioner-shell plugin: [WARN] Config file doesn't exist: /[redacted]/.packerconfig
2020/11/24 03:09:01 packer-provisioner-shell plugin: Setting cache directory: /[redacted]/packer_cache
2020/11/24 03:09:01 packer-provisioner-shell plugin: args: []string{"packer-provisioner-shell"}
2020/11/24 03:09:01 Received unix RPC address for /usr/local/bin/packer: addr is /var/folders/kd/0qsg7379351cw6j40x_pg_740000gn/T/packer-plugin207874823
2020/11/24 03:09:01 packer-provisioner-shell plugin: Plugin address: unix /var/folders/kd/0qsg7379351cw6j40x_pg_740000gn/T/packer-plugin207874823
2020/11/24 03:09:01 packer-provisioner-shell plugin: Waiting for connection...
2020/11/24 03:09:01 packer-provisioner-shell plugin: Serving a plugin connection...
2020/11/24 03:09:01 Preparing build: amazon-ebs
2020/11/24 03:09:01 packer-builder-amazon-ebs plugin: [INFO] (aws): No AWS timeout and polling overrides have been set. Packer will default to waiter-specific delays and timeouts. If you would like to customize the length of time between retries and max number of retries you may do so by setting the environment variables AWS_POLL_DELAY_SECONDS and AWS_MAX_ATTEMPTS or the configuration options aws_polling_delay_seconds and aws_polling_max_attempts to your desired values.
2020/11/24 03:09:01 Build debug mode: false
2020/11/24 03:09:01 Force build: false
2020/11/24 03:09:01 On error: 
2020/11/24 03:09:01 Waiting on builds to complete...
2020/11/24 03:09:01 Starting build run: amazon-ebs
2020/11/24 03:09:01 Running builder: amazon-ebs
2020/11/24 03:09:01 [INFO] (telemetry) Starting builder amazon-ebs
amazon-ebs: output will be in this color.

2020/11/24 03:09:01 packer-builder-amazon-ebs plugin: [INFO] AWS Auth provider used: "StaticProvider"
2020/11/24 03:09:01 packer-builder-amazon-ebs plugin: Found region us-east-2
2020/11/24 03:09:01 packer-builder-amazon-ebs plugin: [INFO] AWS Auth provider used: "StaticProvider"
2020/11/24 03:09:01 [INFO] (telemetry) ending amazon-ebs
        status code: 401, request id: e11dfb0e-e505-4268-81ef-c482960ff3a8
==> Wait completed after 407 milliseconds 791 microseconds
2020/11/24 03:09:01 machine readable: error-count []string{"1"}
==> Some builds didn't complete successfully and had errors:
2020/11/24 03:09:01 machine readable: amazon-ebs,error []string{"error validating regions: AuthFailure: AWS was not able to validate the provided access credentials\n\tstatus code: 401, request id: e11dfb0e-e505-4268-81ef-c482960ff3a8"}
        status code: 401, request id: e11dfb0e-e505-4268-81ef-c482960ff3a8
==> Builds finished but no artifacts were created.
2020/11/24 03:09:01 [INFO] (telemetry) Finalizing.
Build 'amazon-ebs' errored after 407 milliseconds 636 microseconds: error validating regions: AuthFailure: AWS was not able to validate the provided access credentials
        status code: 401, request id: e11dfb0e-e505-4268-81ef-c482960ff3a8

==> Wait completed after 407 milliseconds 791 microseconds

==> Some builds didn't complete successfully and had errors:
--> amazon-ebs: error validating regions: AuthFailure: AWS was not able to validate the provided access credentials
        status code: 401, request id: e11dfb0e-e505-4268-81ef-c482960ff3a8

==> Builds finished but no artifacts were created.
2020/11/24 03:09:02 waiting for all plugin processes to complete...
2020/11/24 03:09:02 /usr/local/bin/packer: plugin process exited
2020/11/24 03:09:02 /usr/local/bin/packer: plugin process exited
2020/11/24 03:09:02 /usr/local/bin/packer: plugin process exited
2020/11/24 03:09:02 /usr/local/bin/packer: plugin process exited

amazon: Impossible to associate public IP in default subnet w/o auto-assign public IP

This issue was originally opened by @emcpow2 as hashicorp/packer#6589. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Packer v1.2.5

Builder type amazon-ebs

Assuming default networking setup.

Steps to reproduce:

  1. Find default VPC and disable Auto-assign public IPv4 address in its default subnets
  2. Leave vpc_id and subnet_id in default values(unset)
  3. Set associate_public_ip to true
  4. Start packer build
  5. EC2 instance will be created without public IP address

More information
associate_public_ip_address : true does not work here because, based on the source code, it only takes effect if subnet_id (or vpc_id) is specified.
https://github.com/hashicorp/packer/blob/v1.2.5/builder/amazon/common/step_run_source_instance.go#L157-L167

	if s.SubnetId != "" && s.AssociatePublicIpAddress {
		runOpts.NetworkInterfaces = []*ec2.InstanceNetworkInterfaceSpecification{
			{
				DeviceIndex:              aws.Int64(0),
				AssociatePublicIpAddress: &s.AssociatePublicIpAddress,
				SubnetId:                 &s.SubnetId,
				Groups:                   securityGroupIds,
				DeleteOnTermination:      aws.Bool(true),
			},
		}
	} else {

associate_public_ip_address must work for the default VPC even when Auto-assign public IPv4 address is disabled.
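A hedged sketch of resolving a default subnet when none is configured, so the NetworkInterfaces code path (and with it AssociatePublicIpAddress) could still be used; the function is illustrative, not the builder's actual step:

// Sketch only: look up the default subnet for an availability zone so
// associate_public_ip_address can be honored even when subnet_id is unset.
import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func defaultSubnetID(ec2conn *ec2.EC2, az string) (string, error) {
	resp, err := ec2conn.DescribeSubnets(&ec2.DescribeSubnetsInput{
		Filters: []*ec2.Filter{
			{Name: aws.String("default-for-az"), Values: []*string{aws.String("true")}},
			{Name: aws.String("availability-zone"), Values: []*string{aws.String(az)}},
		},
	})
	if err != nil {
		return "", err
	}
	if len(resp.Subnets) == 0 {
		return "", fmt.Errorf("no default subnet found in %s", az)
	}
	return aws.StringValue(resp.Subnets[0].SubnetId), nil
}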

packer fails with ansible provisioner

This issue was originally opened by @jf as hashicorp/packer#10690. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

Packing without the ansible provisioner passes (so there is nothing wrong with my packer setup), but with the ansible provisioner I get an error message about * Error running "ansible-playbook --version": exit status 1.

ansible-playbook is in the path, and running ansible-playbook --version on the command line returns with status 0 and not 1. If this were a PATH issue, I would expect status 127, so it doesn't look like it's a PATH issue. I also just confirmed this by uninstalling ansible, which results in packer giving the following instead:

* Error running "ansible-playbook --version": exec: "ansible-playbook":
==> Wait completed after 302 microseconds
==> Builds finished but no artifacts were created.
executable file not found in $PATH

Reproduction Steps

Packer version

Packer v1.7.0

Simplified Packer Buildfile

It's really your standard packer file as far as I'm concerned. The issue here is that removing the ansible provisioner makes the build process work. If I include ansible, the build process fails.

Simplified version:

{
	"variables": {
		"var__aws_access_key":   "{{ env `AWS_ACCESS_KEY_ID` }}",
		"var__aws_secret_key":   "{{ env `AWS_SECRET_ACCESS_KEY` }}",
		"var__aws_secret_token": "{{ env `AWS_SECURITY_TOKEN` }}"
	},
	"builders": [{
		"type": "amazon-ebs",

		"access_key": "{{ user `var__aws_access_key` }}",
		"secret_key": "{{ user `var__aws_secret_key` }}",
		"token":      "{{ user `var__aws_secret_token` }}",

		"ssh_username": "aaa"
	}],
	"provisioners": [{
		"script": "update_upgrade.sh",
		"execute_command": "chmod +x {{ .Path }}; sudo {{ .Path }}",
		"type": "shell"
	},{
		"type": "ansible",
		"user": "aaa",
		"playbook_file": "provision.yml",

		"use_proxy": "false",

		"extra_arguments": ["-vvv"]
	}]
}

Operating system and Environment details


I am on macOS Big Sur, on the M1 chip

Ansible (2.10.6) is installed in a python venv, and available in the path

Log Fragments and crash.log files

...
2021/02/24 15:19:29 Waiting for RPC address for: /usr/local/bin/packer
2021/02/24 15:19:29 packer-provisioner-shell plugin: [INFO] Packer version: 1.7.0 [go1.15.8 darwin amd64]
2021/02/24 15:19:29 packer-provisioner-shell plugin: [INFO] PACKER_CONFIG env var not set; checking the default config file path
2021/02/24 15:19:29 packer-provisioner-shell plugin: [INFO] PACKER_CONFIG env var set; attempting to open config file: /Users/jf/.packerconfig
2021/02/24 15:19:29 packer-provisioner-shell plugin: [WARN] Config file doesn't exist: /Users/jf/.packerconfig
2021/02/24 15:19:29 packer-provisioner-shell plugin: [INFO] Setting cache directory: /Users/jf/packer/packer_cache
2021/02/24 15:19:29 packer-provisioner-shell plugin: args: []string{"packer-provisioner-shell"}
2021/02/24 15:19:29 packer-provisioner-shell plugin: Plugin address: unix /var/folders/d3/xqf60mbn76b9y0tsxwf_st9r0000gn/T/packer-plugin974076808
2021/02/24 15:19:29 packer-provisioner-shell plugin: Waiting for connection...
2021/02/24 15:19:29 Received unix RPC address for /usr/local/bin/packer: addr is /var/folders/d3/xqf60mbn76b9y0tsxwf_st9r0000gn/T/packer-plugin974076808
2021/02/24 15:19:29 packer-provisioner-shell plugin: Serving a plugin connection...
2021/02/24 15:19:29 Preparing build: amazon-ebs
2021/02/24 15:19:29 packer-builder-amazon-ebs plugin: [INFO] (aws): No AWS timeout and polling overrides have been set. Packer will default to waiter-specific delays and timeouts. If you would like to customize the length of time between retries and max number of retries you may do so by setting the environment variables AWS_POLL_DELAY_SECONDS and AWS_MAX_ATTEMPTS or the configuration options aws_polling_delay_seconds and aws_polling_max_attempts to your desired values.

1 error(s) occurred:

* Error running "ansible-playbook --version": exit status 1

2021/02/24 15:19:29 Build debug mode: true
2021/02/24 15:19:29 Force build: false
2021/02/24 15:19:29 On error:
Error: Failed to prepare build: "amazon-ebs"

1 error(s) occurred:

2021/02/24 15:19:29 Waiting on builds to complete...
* Error running "ansible-playbook --version": exit status 1


Debug mode enabled. Builds will not be parallelized.

==> Wait completed after 265 microseconds
==> Wait completed after 265 microseconds

==> Builds finished but no artifacts were created.
==> Builds finished but no artifacts were created.
2021/02/24 15:19:29 [INFO] (telemetry) Finalizing.
2021/02/24 15:19:30 waiting for all plugin processes to complete...
2021/02/24 15:19:30 /usr/local/bin/packer: plugin process exited
2021/02/24 15:19:30 /usr/local/bin/packer: plugin process exited
2021/02/24 15:19:30 /usr/local/bin/packer: plugin process exited
2021/02/24 15:19:30 /usr/local/bin/packer: plugin process exited
2021/02/24 15:19:30 /usr/local/bin/packer: plugin process exited
2021/02/24 15:19:30 /usr/local/bin/packer: plugin process exited
2021/02/24 15:19:30 /usr/local/bin/packer: plugin process exited
(venv) j:packer jf@s027$
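One thing worth trying, only a sketch, assuming the ansible provisioner's command option is available in this version and that the venv path below is replaced with the real one: point the provisioner directly at the venv's ansible-playbook, which sidesteps any difference between the interactive shell's PATH and the environment Packer spawns.

{
  "type": "ansible",
  "user": "aaa",
  "playbook_file": "provision.yml",
  "use_proxy": "false",
  "command": "/path/to/venv/bin/ansible-playbook",
  "extra_arguments": ["-vvv"]
}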

Packer doesn't keep UsageOperation with amazon-ebssurrogate

This issue was originally opened by @Seb0042 as hashicorp/packer#10513. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

Same as the issue hashicorp/packer#4922.
Briefly, the AMI made with Packer loses the hourly licensing (the UsageOperation value).

Reproduction Steps

Use a source image that has RunInstances:0010 with amazon-ebssurrogate

Packer version

Packer v1.6.6

Simplified Packer Buildfile

{
 "variables":{
    "aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
    "aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
    "region": "eu-west-3",
    "source_ami": "ami-0ba7c4110ca9bfe0b",
    "ssh_username": "ec2-user"
 },
 "builders": [{
  "type": "amazon-ebssurrogate",
  "ami_virtualization_type": "hvm",
  "region": "{{user `region`}}",
  "instance_type": "t2.micro",
  "ena_support": true,
  "ssh_username": "{{user `ssh_username`}}",
  "source_ami": "{{user `source_ami`}}",
  "ami_name": "RHEL_8.3lvm_x86_64-{{ timestamp }}",
  "ami_description": "Amazon Linux RHEL 8.3 lvm",
  "launch_block_device_mappings": [{
     "delete_on_termination": true,
     "device_name": "/dev/xvdf",
     "volume_type": "gp2",
     "volume_size": 30
    }],
  "ami_root_device": {
    "source_device_name": "/dev/xvdf",
    "device_name": "/dev/xvda",
    "delete_on_termination": true,
    "volume_size": "30",
    "volume_type": "gp2"
   }}],
 "provisioners": [{
        "type": "shell",
        "inline": ["my wonderful script"]}]
}

Operating system and Environment details

OS=RHEL 8 or RHEL 7,
ARCH = x86_64

source_ami_filter permits only one value for name

This issue was originally opened by @grodzik as hashicorp/packer#7295. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Hi,

I'm building an AMI based on the Debian base image (debian-stretch-hvm-x86_64-gp2-*). This requires running an apt upgrade during the build, which takes progressively longer as more updates are released after the given base image. Let's assume my image will be called my-debian-based-image-{{timestamp}}. I would like to pass both names to the filter and pick the most recent match, a kind of incremental build that starts from my own image only when its last build is newer than debian-stretch-hvm-x86_64-gp2-*. Using the AWS CLI I can filter images using both names, like so:

aws ec2 describe-images --filters "Name=name,Values=debian-stretch-hvm-x86_64-gp2-*,my-debian-based-image-*" --query "Images[].Name"

Unfortunately this is not possible to achieve in Packer's source_ami_filter. It's impossible to pass an array for a filter's attributes. Passing the whole Values string from above also doesn't work; it errors in the console like so:

2019/02/08 15:13:37 ui error: Build 'amazon-ebs' errored: No AMI was found matching filters: {
    Filters: [{
        Name: "name",
        Values: ["debian-stretch-hvm-x86_64-gp2-*,my-debian-based-image-*"]
    }]
}

Looking at the source code (I'm not a Go programmer), I would say this has something to do with it: https://github.com/hashicorp/packer/blob/master/builder/amazon/common/build_filter.go#L13
where I would expect that string to be split on , into an array.
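For reference, this is roughly what the desired configuration would look like if the name filter accepted a list (hypothetical, since per this issue the field currently only takes a single string; the owner ID below is a placeholder):

"source_ami_filter": {
  "filters": {
    "virtualization-type": "hvm",
    "name": ["debian-stretch-hvm-x86_64-gp2-*", "my-debian-based-image-*"],
    "root-device-type": "ebs"
  },
  "owners": ["123456789012"],
  "most_recent": true
}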

  • packer version: 1.3.4
  • platform: Debian Linux

Restrict AWS Security group to the current host's Public IP address according to AWS

This issue was originally opened by @tedivm as hashicorp/packer#9915. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Community Note

Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request.
If you are interested in working on this issue or have submitted a pull request, please leave a comment.

Description

AWS provides an extremely simple service that you can use to find out what public IP address AWS sees for your machine.

https://checkip.amazonaws.com/

Use Case(s)

This would make it so the build instance did not have to be open to the world.

Potential References

#6735
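In the meantime, a minimal sketch of a workaround, assuming the temporary_security_group_source_cidrs option is available in the builder version in use: fetch the address outside of Packer (for example with curl -s https://checkip.amazonaws.com) and pass it in as a variable.

{
  "variables": {
    "my_ip": "{{env `MY_PUBLIC_IP`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "temporary_security_group_source_cidrs": ["{{user `my_ip`}}/32"]
  }]
}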

amazon-ebs: fine-grained IAM policy

This issue was originally opened by @JIoJIaJIu as hashicorp/packer#10092. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Description

I would like to be able to create a custom IAM policy that grants access to Packer, and only to the resources created by Packer. AWS uses conditions based on aws:RequestTag and aws:ResourceTag for that purpose, but unfortunately Packer doesn't set up tags for most of the resources it controls. As a result, the CI/CD user ends up with wide permissions and the ability to remove or change resources it shouldn't. At the moment, the only way I have found to limit access is to use a separate AWS region for Packer.

Similar issues: hashicorp/packer#9894

Use Case(s) / Potential configuration

So it would be great to:

  1. ✏️ Apply the tag `Managed by: Hashicorp Packer` to all existing resources that support it

    It would allow writing a policy like:

    ...
    statement {
      actions = [
        "ec2:CreateSecurityGroup",
      ]
      resources = ["*"]
      condition {
        test  = "StringEquals"
        variable = "aws:RequestTag/Managed by"
        values = [
          "Hashicorp Packer"
        ]
      }
    }
    
    statement {
      actions = [
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:DeleteSecurityGroup",
      ]
      resources = ["*"]
      condition {
        test  = "StringEquals"
        variable = "aws:ResourceTag/Managed by"
        values = [
          "Hashicorp Packer"
        ]
      }
    }
    ...

    For actions that support resources, it's possible to scope them like this:

    statement {
        actions = [
          "ec2:CreateKeyPair",
          "ec2:DeleteKeyPair",
        ]
    
        resources = ["arn:aws:ec2:*:${var.aws_account}:key-pair/packer_*"]
     }

    but

    1. it's not supported by most of the actions
    2. using tags would make the policy simpler and more uniform
  2. 🔔 Add a base tag during creation with TagSpecification for the resources
    runOpts := &ec2.RunInstancesInput{
      ImageId:             &s.SourceAMI,
      ...
      TagSpecifications: []*ec2.TagSpecification{
        {
          ResourceType: aws.String("instance"),
          Tags: []*ec2.Tag{
            {
              Key: aws.String("Managed by"),
              Value: aws.String("Hashicorp Packer"),
            },
          },
        },
        ...
      },
    }

    to be able to create a policy like:

    {
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances"
      ],
      "Resource": [
        "arn:aws:ec2:*:${var.aws_account}:instance/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/Managed by": "Hashicorp Packer"
        }
      }

    rather than applying it afterwards on the created instance, because that requires a policy that allows adding any tag

  3. 📕 Update the minimal IAM policy in the docs

    It would be great if the IAM policy provided in the docs contained these conditions.

Potential References

SSH time out due to AWS Session Manager tunnel failure

This issue was originally opened by @Venkat1505 as hashicorp/packer#10508. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


I am using AWS CodeBuild with Packer to create a golden AMI. I am using session_manager in Packer to connect to AWS.

I am getting the error below.

Can you please help?

Buildspec.yml

version: 0.2

phases:
  install:
    commands:
      - echo Executing install phases
      - curl -o packer.zip https://releases.hashicorp.com/packer/1.6.6/packer_1.6.6_linux_amd64.zip && unzip packer.zip
      - ./packer version
      - curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm" -o 
      -   "session-manager-plugin.rpm"
      - yum install -y session-manager-plugin.rpm
      - session-manager-plugin
      
    pre_build:
     commands:
     
      - ./packer validate packer_cis.json
     
  build:
    commands:
     
    
       - PACKER_LOG=1 ./packer build -debug -color=false packer_cis.json | tee build.log
       - cat packerlog.txt
       
  post_build:
    commands:
      - egrep "${AWS_REGION}\:\sami\-" build.log | cut -d' ' -f2 > ami_id.txt
      # Packer doesn't return non-zero status; we must do that if Packer build failed
      - test -s ami_id.txt || exit 1
      - sed -i.bak "s/<<AMI-ID>>/$(cat ami_id.txt)/g" ami_builder_event.json
      - aws events put-events --entries file://ami_builder_event.json
    
artifacts:
  files:
    - ami_builder_event.json
    - build.log
  discard-paths: yes
{
  "variables": {
    "aws_access_key": "******K",
    "aws_secret_key": "*******a",
    "vpc": "{{env `BUILD_VPC_ID`}}",
    "subnet": "******",
    "aws_region": "{{env `AWS_REGION`}}"
 },
  "builders": [{
    "name": "AWS AMI Builder - CIS",
    "type": "amazon-ebs",
    "region": "{{user `aws_region`}}",
    "source_ami_filter": {
      "filters": {
        "virtualization-type": "hvm",
        "name": "amzn-ami-hvm-2018.03.0.20200318.1-x86_64-ebs",
        "root-device-type": "ebs"
      },
      "owners": ["1111111111","
      ],
      "most_recent": true
    },
    "instance_type": "t2.micro",
    "ami_name": "Prod-CIS-Latest-AMZN-{{isotime \"02-Jan-06 03_04_05\"}}",
    "ami_description": "Amazon Linux CIS with Cloudwatch Logs agent",
    "associate_public_ip_address": "true",
    "vpc_id": "*********",
    "subnet_id": "***************",
    "security_group_id": "****************",
    "iam_instance_profile": "AmazonEC2RoleforSSM",
    "communicator": "ssh",
    "ssh_username": "ec2-user",
    "ssh_interface": "session_manager"
   
  }],
  "provisioners": [{
      "type": "shell",
      "inline": [
        "sudo pip install ansible==2.7.9"
      ]
    },
    {
      "type": "ansible-local",
      "playbook_file": "ansible/playbook.yaml",
      "role_paths": [
        "ansible/roles/common"
      ],
      "playbook_dir": "ansible",
      "galaxy_file": "ansible/requirements.yaml"
    },
    
    {
      "type": "shell",
      "inline": [
        "rm .ssh/authorized_keys ; sudo rm /root/.ssh/authorized_keys"
      ]
    }
  ]
}
==> AWS AMI Builder - CIS: Launching a source AWS instance...
--
219 | ==> AWS AMI Builder - CIS: Adding tags to source instance
220 | AWS AMI Builder - CIS: Adding tag: "Name": "Packer Builder"
221 | AWS AMI Builder - CIS: Instance ID: i-00dd9fdfd466d5bfa462f
222 | ==> AWS AMI Builder - CIS: Waiting for instance (i-00dd9fdfd466d5bfa462f) to become ready...
223 | AWS AMI Builder - CIS: Public DNS: ec2-54-259-25-54.compute-1.amazonaws.com
224 | AWS AMI Builder - CIS: Public IP: 54.259.25.54
225 | AWS AMI Builder - CIS: Private IP: 10.113.17.203
226 | 2021/01/21 23:18:10 packer-builder-amazon-ebs plugin: Error asking for input: no available tty
227 | 2021/01/21 23:18:10 packer-builder-amazon-ebs plugin: [INFO] Not using winrm communicator, skipping get password...
228 | 2021/01/21 23:18:10 packer-builder-amazon-ebs plugin: Error asking for input: no available tty
229 | 2021/01/21 23:18:10 packer-builder-amazon-ebs plugin: Found available port: 8372 on IP: 0.0.0.0
230 | 2021/01/21 23:18:10 packer-builder-amazon-ebs plugin: ssm: Starting PortForwarding session to instance i-00dd9466d5bfa462f
231 | ==> AWS AMI Builder - CIS: Using ssh communicator to connect: localhost
232 | 2021/01/21 23:18:10 packer-builder-amazon-ebs plugin: Error asking for input: no available tty
233 | 2021/01/21 23:18:10 packer-builder-amazon-ebs plugin: [INFO] Waiting for SSH, up to timeout: 1h0m0s
234 | ==> AWS AMI Builder - CIS: Waiting for SSH to become available...
235 | 2021/01/21 23:18:10 packer-builder-amazon-ebs plugin: [DEBUG] TCP connection to SSH ip/port failed: dial tcp 127.0.0.1:8372: connect: connection refused
236 | 2021/01/21 23:18:11 packer-builder-amazon-ebs plugin: Retryable error: TargetNotConnected: i-00dd9fdfd466d5bfa462f is not connected.
237 | 2021/01/21 23:18:11 packer-builder-amazon-ebs plugin: Retryable error: TargetNotConnected: i-00dd9fdfd466d5bfa462f is not connected.
238 | 2021/01/21 23:18:11 packer-builder-amazon-ebs plugin: Retryable error: TargetNotConnected: i-00dd9fdfd466d5bfa462f is not connected.
239 | 2021/01/21 23:18:12 packer-builder-amazon-ebs plugin: Retryable error: TargetNotConnected: i-00dd9fdfd466d5bfa462f is not connected.
240 | 2021/01/21 23:18:14 packer-builder-amazon-ebs plugin: Retryable error: TargetNotConnected: i-00dd9fdfd466d5bfa462f is not connected.
241 | 2021/01/21 23:18:15 packer-builder-amazon-ebs plugin: [DEBUG] TCP connection to SSH ip/port failed: dial tcp 127.0.0.1:8372: connect: connection refused
242 | 2021/01/21 23:18:17 packer-builder-amazon-ebs plugin: Retryable error: TargetNotConnected: i-00dd9fdfd466d5bfa462f is not connected.
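For context, TargetNotConnected usually means the SSM agent on the instance has not yet registered with Session Manager, or cannot reach the SSM endpoints from that subnet. A minimal sketch of one mitigation, assuming the pause_before_connecting communicator option is supported by this Packer version (and that the attached instance profile carries the AmazonSSMManagedInstanceCore permissions), is to give the agent time to register before the tunnel is opened:

{
  "type": "amazon-ebs",
  "communicator": "ssh",
  "ssh_username": "ec2-user",
  "ssh_interface": "session_manager",
  "iam_instance_profile": "AmazonEC2RoleforSSM",
  "pause_before_connecting": "2m"
}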

amazon-ebs builder does not accept "/" in instanceProfileName

This issue was originally opened by @yogeek as hashicorp/packer#10024. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

When trying to create an AWS AMI with an instance profile ARN with a path (containing "/" characters) like arn:aws-cn:iam::$ACCOUNT_ID:instance-profile/foo/bar/test-profile, the following error occurs:

Couldn't find specified instance profile: ValidationError: 1 validation error detected: Value '/foo/bar/test-profile' at 'instanceProfileName' failed to satisfy constraint: Member must satisfy regular expression pattern: [\w+=,.@-]+

Reproduction Steps

Try to build an AWS AMI with the code provided in the Simplified Packer Buildfile section, where the iam_instance_profile value contains "/" characters:
"iam_instance_profile": "/foo/bar/test-profile",

Packer version

Packer v1.6.4

Simplified Packer Buildfile

{
  "min_packer_version": "1.3.0",
  "variables": {
    "aws_region": "{{ env `AWS_DEFAULT_REGION` }}",
    "aws_profile": "{{ env `AWS_PROFILE` }}",
    [...]
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "{{user `aws_region`}}",
      "instance_type": "t3.micro",
      "associate_public_ip_address": true,
      "ssh_interface" : "public_ip",
      "iam_instance_profile": "/foo/bar/test-profile",
	  [...]

Log Fragments and crash.log files

2020/10/01 16:30:13 machine readable: error-count []string{"1"}
==> Some builds didn't complete successfully and had errors:
2020/10/01 16:30:13 machine readable: amazon-ebs,error []string{"Couldn't find specified instance profile: ValidationError: 1 validation error detected: Value '/foo/bar/test-profile' at 'instanceProfileName' failed to satisfy constraint: Member must satisfy regular expression pattern: [\\w+=,.@-]+\n\tstatus code: 400, request id: bbc5a9f9-2be5-4ad1-80be-e319b953f7a5"}
	status code: 400, request id: bbc5a9f9-2be5-4ad1-80be-e319b953f7a5
==> Builds finished but no artifacts were created.
2020/10/01 16:30:13 [INFO] (telemetry) Finalizing.
Build 'amazon-ebs' errored after 20 seconds 62 milliseconds: Couldn't find specified instance profile: ValidationError: 1 validation error detected: Value '/foo/bar/test-profile' at 'instanceProfileName' failed to satisfy constraint: Member must satisfy regular expression pattern: [\w+=,.@-]+
	status code: 400, request id: bbc5a9f9-2be5-4ad1-80be-e319b953f7a5

==> Wait completed after 20 seconds 62 milliseconds

==> Some builds didn't complete successfully and had errors:
--> amazon-ebs: Couldn't find specified instance profile: ValidationError: 1 validation error detected: Value '/foo/bar/test-profile' at 'instanceProfileName' failed to satisfy constraint: Member must satisfy regular expression pattern: [\w+=,.@-]+
	status code: 400, request id: bbc5a9f9-2be5-4ad1-80be-e319b953f7a5
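A sketch of a possible workaround, under the assumption that only one instance profile in the account carries that final name: pass just the profile name without its path, since instance profile names are unique and the path is not part of the name the API validates here.

{
  "type": "amazon-ebs",
  "iam_instance_profile": "test-profile"
}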

AWS gp3 volume type throughput parameter is not set on the AMI

This issue was originally opened by @AljoschaDembowsky2909 as hashicorp/packer#10506. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

We build an AWS AMI with Packer 1.6.6 and we want to utilize the new "gp3" volume type from AWS but the newly introduced throughput parameter is not set after the AMI is created.

Reproduction Steps

Without CloudFormation
  1. Introduce "gp3" volume type on ami_ or launch_block_device_mappings.
  2. Set "throughput" parameter to the desired value.
  3. Build AMI with packer.
  4. Launch AMI over AWS Console .
  5. In step "4. Add storage" you can see the right value for IOPS which was set inside the packer.json but the throughput parameter is empty and is required to continue.
With CloudFormation

In a CloudFormation-based deployment, the throughput parameter is set to 125 MB/s, which is the minimum value for the throughput parameter on gp3 volumes but doesn't represent the value which is set inside the packer.json file.

Packer version

1.6.6

Simplified Packer Buildfile

 "builders": [
     {
       "type": "amazon-ebs",
       "launch_block_device_mappings": [{
         "device_name": "/dev/sdc",
         "encrypted": true,
         "volume_size": 64,
         "volume_type": "gp3",
         "throughput": 750,
         "iops": 3000,
         "delete_on_termination": true
       }],

Operating system and Environment details

Building Centos AMI Image with Packer in AWS CodeBuild.

Log Fragments and crash.log files

No crash.log was created and no warning was reported while building the AMI.
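A sketch of a possible workaround, under the assumption that the throughput field is also accepted inside ami_block_device_mappings in this plugin version: declare the device again in the AMI mapping so the registered image carries the value, instead of relying on the launch mapping alone.

"ami_block_device_mappings": [{
  "device_name": "/dev/sdc",
  "volume_type": "gp3",
  "throughput": 750,
  "iops": 3000,
  "delete_on_termination": true
}]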

ebs-surrogate: Make AMI registration optional

This issue was originally opened by @cetex as hashicorp/packer#4899. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


I want to build images for netboot (PXE) for our on-premises hardware using Packer, and I want to do this on Amazon EC2. Since the result of the build won't run in AWS, it would be great if taking a snapshot and registering it as an AMI were optional for the Amazon builders.

If this were optional I could use something like the Amazon EBS surrogate builder to:

  • Run debootstrap to create a chroot on the mounted ebs volume.
  • Run my ansible playbook inside the chrooted directory.
  • Run a script to pack the entire directory up into a cpio/gz archive.
  • Send the created output files to s3 or something similar.
  • Skip snapshot + AMI registration <- this is what's missing.
  • Cleanup / delete instance and volume directly as the resulting images would be useless for any purpose on AWS.

'temporary_iam_instance_profile_policy_document' option does not seem to work in AWS China

This issue was originally opened by @yogeek as hashicorp/packer#10026. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

Because of hashicorp/packer#10024, I tried to use temporary_iam_instance_profile_policy_document as a workaround (as documented here), but this option does not seem to work in AWS China.
Could it be because the AWS China IAM API is not located at the same URLs (.amazonaws.com.cn instead of .amazonaws.com)?

Error :

Build 'amazon-ebs' errored after 22 seconds 398 milliseconds: MalformedPolicyDocument: Invalid principal in policy: "SERVICE":"ec2.amazonaws.com"
	status code: 400, request id: 920a5812-471a-40f0-b701-3e90770a2747

Reproduction Steps

Try to build an AWS AMI with the code provided in the Simplified Packer Buildfile section.

Packer version

Packer v1.6.4

Simplified Packer Buildfile

{
  "min_packer_version": "1.3.0",
  "variables": {
    "aws_region": "{{ env `AWS_DEFAULT_REGION` }}",
    "aws_profile": "{{ env `AWS_PROFILE` }}",
    [...]
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "{{user `aws_region`}}",
      "instance_type": "t3.micro",
      "associate_public_ip_address": true,
      "ssh_interface" : "public_ip",
      "temporary_iam_instance_profile_policy_document": {
        "Version": "2012-10-17",
        "Statement": [
        {
          "Effect": "Allow",
          "Action": "ecr:*",
          "Resource": "*"
        }]
      },
	  [...]

Log Fragments and crash.log files

==> amazon-ebs: Creating temporary instance profile for this instance: packer-5f7609a4-e7bd-6985-f9f0-b790f8cad5a5
2020/10/01 16:53:58 packer-builder-amazon-ebs plugin: [DEBUG] Waiting for temporary instance profile: packer-5f7609a4-e7bd-6985-f9f0-b790f8cad5a5
2020/10/01 16:53:59 packer-builder-amazon-ebs plugin: [DEBUG] Found instance profile packer-5f7609a4-e7bd-6985-f9f0-b790f8cad5a5
==> amazon-ebs: Creating temporary role for this instance: packer-5f7609a4-e7bd-6985-f9f0-b790f8cad5a5
==> amazon-ebs: 	status code: 400, request id: 920a5812-471a-40f0-b701-3e90770a2747
==> amazon-ebs: MalformedPolicyDocument: Invalid principal in policy: "SERVICE":"ec2.amazonaws.com"
==> amazon-ebs: 	status code: 400, request id: 920a5812-471a-40f0-b701-3e90770a2747
==> amazon-ebs: Deleting temporary instance profile...
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
2020/10/01 16:54:04 [INFO] (telemetry) ending amazon-ebs
	status code: 400, request id: 920a5812-471a-40f0-b701-3e90770a2747
==> Wait completed after 22 seconds 398 milliseconds
2020/10/01 16:54:04 machine readable: error-count []string{"1"}
==> Some builds didn't complete successfully and had errors:
2020/10/01 16:54:04 machine readable: amazon-ebs,error []string{"MalformedPolicyDocument: Invalid principal in policy: \"SERVICE\":\"ec2.amazonaws.com\"\n\tstatus code: 400, request id: 920a5812-471a-40f0-b701-3e90770a2747"}
	status code: 400, request id: 920a5812-471a-40f0-b701-3e90770a2747
==> Builds finished but no artifacts were created.
2020/10/01 16:54:04 [INFO] (telemetry) Finalizing.
Build 'amazon-ebs' errored after 22 seconds 398 milliseconds: MalformedPolicyDocument: Invalid principal in policy: "SERVICE":"ec2.amazonaws.com"
	status code: 400, request id: 920a5812-471a-40f0-b701-3e90770a2747

Resolution attempt

I tried to solve this by specifying the "Principal" pointing to China EC2 endpoint in the policy :

"temporary_iam_instance_profile_policy_document": {
        "Version": "2012-10-17",
        "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": [
              "ec2.amazonaws.com.cn"
            ]
          },
          "Action": "ecr:*",
          "Resource": "*"
        }]
      },

but I had this error:

packer validate k8s.json
Error: Failed to prepare build: "amazon-ebs"

1 error occurred:
	* unknown configuration key:
'"temporary_iam_instance_profile_policy_document.Statement[0].Principal"'

Amazon chroot builder: Allow nvme-capable instances w/ proper udev rules to use symlinks under /dev

This issue was originally opened by @bmoyles as hashicorp/packer#10807. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Description

When NVMe-capable instances were released, the Amazon chroot builder was changed to take an explicit NVMe path to the real NVMe device and that is checked against the path under /sys/block. Amazon Linux (and some other distributions) ship with tooling and udev rules that look in the attached NVMe block device's metadata for the block device name that was passed to the EC2 API on attach and set up the appropriate symlinks under /dev/ to the real NVMe device (which is fantastic as the NVMe device order is not guaranteed).

It would be fantastic (and I would be happy to submit a PR if no one is able to take this on) if we could provide a configuration option to the chroot builder that indicates it should look under /dev for a symlink to the block device based on the device in the attach request. eg if one asks for vol-deadbeef to be attached to /dev/xvdg in the attach request, there will be a /dev/xvdg symlink pointing to the real block device.

(A step further might be to have Packer itself use nvme id-ctrl or some Golang-native equivalent do the inspection of the NVMe devices to find the requested volume which would allow this to work on distributions w/o the necessary udev configuration, but that's a much bigger change and I think simply indicating that this NVME-capable instance will have symlinks for Packer is sufficient for a first-pass).

Use Case(s)

At Netflix, we have multiple instances baking images in parallel (with multiple processes per-instance) which makes the NVMe device much less predictable, but we DO have the udev configuration rolled into our base images such that we always have the correct symlink for EBS volumes on NVMe instances (whether the request was made for xvd* or sd* -- we create both symlinks to eliminate confusion).

Potential configuration

I might approach this by simply allowing one to explicitly specify their device-prefix-of-choice via configuration which would short-circuit the code in devicePrefix https://github.com/hashicorp/packer/blob/master/builder/amazon/chroot/device.go#L15 to avoid the scan of /sys/block for the block device. This, to me, would indicate that I am asserting my device, when attached, will have a file (symlink) under /dev/ with the appropriate prefix + letter (and/or partition as necessary) so just look for the presence/absence of those links under /dev:

block-device-prefix: xvd
# or block-device-prefix: sd

Alternatively, we could go down a different path altogether with explicit configuration such as

nvme-device-symlinks-enabled: true

signaling that a) we're on a NVMe-enabled instance, b) if we request the device be attached to /dev/xvd* or /dev/sd*, there will be a symlink created when the device is attached, and c) presence or absence of those symlinks is sufficient to determine whether or not our block device is present.

I could approach this via a custom plugin that duplicates the chroot builder and extends the functionality internally, but this seems generally-applicable so figured I'd pose it here before going down that route. If I went that route, I personally would probably lean towards allowing the user to explicitly declare the block-device prefix as it seems like the route with the least moving pieces.

aws-ebs terminating instance exits with bad status at very end of AMI creation

This issue was originally opened by @simonw7034 as hashicorp/packer#10278. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

I am building an AMI using the aws-ebs builder. Everything works fine up until what appears to be the cleanup stage, where the "Terminating the source AWS instance" step exits with bad status (even though it appears to finish the AMI build properly). The AMI is created and stored in our VPC, but the code exits prematurely and with a non-zero rc. There may be artefacts left in AWS that aren't cleaned up properly (I haven't seen any, but there could be temporary security groups etc. left over), and it makes it difficult to incorporate this into a CI build process.

The general build process is: 1) launch source AMI, 2) install some dependencies with apt 3) run a shell script to change the default boot kernel and reboot 4) wait for reboot 5) carry on with further provisioning (not included in this build template)

Reproduction Steps

Run packer build -except=vagrant bf_packer.json with an appropriate AWS VPC, user permissions, an IAM instance profile with the AmazonSSMManagedInstanceCore policy, and AWS authentication environment variables set.

Packer version

Packer v1.6.5

Simplified Packer Buildfile

Packer build template

Operating system and Environment details

MacBook Pro, 64-bit

Log Fragments and crash.log files

Debug log

Feature request - amazon-chroot builder - post_provision steps and custom chroot_path

This issue was originally opened by @anderssvanqvist as hashicorp/packer#4824. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


I am using the amazon-chroot builder to do an image installation of VyOS which mounts the actual disk at /boot and has the root device as an overlay on a squashfs and a version subdirectory under boot for writes.

The amazon-chroot builder tightly couples the EBS device and the chroot path, which does not work in this case. It also does not have a post_provision step, which means that I can't do any cleanup of the various mounts. Right now I'm doing everything in the post_mount_commands step, which means manually scripting the mounts and copying files.

My request is this:

  • Implement a post_provision_commands step, that runs after provision but before unmount.

  • Add a chroot_path variable to support use cases like writable squashfs overlays.

Tia,
Anders

AMI BUILDER (CHROOT) Attachment point /dev/sdf is already in use (parallel builds)

This issue was originally opened by @gardleopard as hashicorp/packer#3060. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Packer 0.8.6

This issue is a little bit tricky to reproduce. You need to run multiple builds on the same host. I think it happens when a build launches at the same time as another build frees its EBS volume.

To reproduce this I run 6 terminals with the same build. I start one build and, when it's around the logging of "==> amazon-chroot: Unmounting the root device...", I start the 5 other builds. The builds are just using the shell provisioner doing:

apt-get update
apt-get install --force-yes -y vim
apt-get install --force-yes -y telnet

The base AMI is Ubuntu Trusty and the packages are already installed. This is just a simple build to reproduce the bug.

I can see in the build log of one of the builds that I started before that /dev/sdf is used there and unmounted at about the same time as the failing build is starting up. This is the error message I get:

amazon-chroot output will be in this color.

==> amazon-chroot: Prevalidating AMI Name...
==> amazon-chroot: Gathering information about this EC2 instance...
==> amazon-chroot: Inspecting the source AMI...
==> amazon-chroot: Checking the root device on source AMI...
==> amazon-chroot: Creating the root volume...
==> amazon-chroot: Attaching the root volume to /dev/sdf
==> amazon-chroot: Error attaching volume: InvalidParameterValue: Invalid value '/dev/sdf' for unixDevice. Attachment point /dev/sdf is already in use
==> amazon-chroot:  status code: 400, request id: []
==> amazon-chroot: Deleting the created EBS volume...
Build 'amazon-chroot' errored: Error attaching volume: InvalidParameterValue: Invalid value '/dev/sdf' for unixDevice. Attachment point /dev/sdf is already in use
    status code: 400, request id: []

==> Some builds didn't complete successfully and had errors:
--> amazon-chroot: Error attaching volume: InvalidParameterValue: Invalid value '/dev/sdf' for unixDevice. Attachment point /dev/sdf is already in use
    status code: 400, request id: []

==> Builds finished but no artifacts were created.

https://gist.github.com/gardleopard/597378585a698c09f3f5
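A sketch of a possible mitigation, assuming the chroot builder's device_path option behaves as documented: give each concurrent build its own attachment point so the builds don't race for /dev/sdf.

{
  "type": "amazon-chroot",
  "ami_name": "parallel-build-a {{timestamp}}",
  "source_ami": "ami-xxxxxxxx",
  "device_path": "/dev/sdg"
}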

SSM Session Manager error when the user has no access to create a keypair in AWS

This issue was originally opened by @mixeract as hashicorp/packer#10453. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Our AWS accounts have a policy preventing roles from creating SSH keypairs. This prevents Packer from generating a temporary key. If we don't specify a key in the configs, Packer automatically assumes that key generation is required. This fails right away, as the user has no permission to create key pairs.

amazon-ebs: Creating temporary keypair: packer_5eb99fdc-a11e-d9a0-e429-356b343ccc69 Build 'amazon-ebs' errored: Error creating temporary keypair: retry count exhausted. Last err: UnauthorizedOperation: You are not authorized to perform this operation.

In order to prevent Packer from auto-generating keys, I've declared a key in the configs ("ssh_private_key_file": "~/.ssh/packer_key"); however, by doing that Packer ignores the SSM session manager and tries to create an SSH session using the provided key.

The error will be:

==> amazon-ebs: Error waiting for SSH: Packer experienced an authentication error when trying to connect via SSH. This can happen if your username/password are wrong. You may want to double-check your credentials as part of your debugging process. original error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
==> amazon-ebs: Terminating the source AWS instance...

Configs

  "builders": [
    {
      "type": "amazon-ebs",
      "ami_name": "{{ user `name` }}",
      "profile": "{{ user `profile` }}",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "internal_ssm_ami*",
          "root-device-type": "ebs"
        },
        "owners": "ouraccountnumber",
        "most_recent": true
      },
      "ssh_username": "ec2-user",
      "ssh_pty": true,
      "pause_before_ssm": "1m",
      "ssh_interface": "session_manager",
      "communicator": "ssh",
      "ssh_timeout": "5m",
      "ssh_private_key_file": "~/.ssh/packer_key",
      "iam_instance_profile": "packer_instance_profile",
      "security_group_ids" : [list_of_sg],
      "disable_stop_instance": true,
      "shutdown_behavior": "stop",
      "session_manager_port" : 8422,
      "instance_type": "{{ user `instance_type` }}",
      "subnet_id": "{{ user `subnet_id` }}",
      "vpc_id": "{{ user `vpc_id` }}",
      "launch_block_device_mappings": [
        {
          "device_name": "/dev/xvda",
          "volume_size": 12,
          "volume_type": "gp2",
          "delete_on_termination": true
        }
      ],
    }
  ],

Any idea if there's a solution to this? Maybe there is an option that I missed in the docs that prevents Packer from using the private key, or a config to prevent Packer from auto-generating keys?
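A sketch of one possible approach, assuming an EC2 key pair already exists in the account (the key pair name below is a placeholder): supplying ssh_keypair_name together with ssh_private_key_file tells Packer to use that existing pair instead of creating a temporary one. Whether this also avoids the session_manager routing problem described above would still need to be verified.

{
  "type": "amazon-ebs",
  "communicator": "ssh",
  "ssh_interface": "session_manager",
  "ssh_username": "ec2-user",
  "ssh_keypair_name": "packer-prebuilt-key",
  "ssh_private_key_file": "~/.ssh/packer_key"
}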

Feature Request: More verbose description on the type of tag being applied

This issue was originally opened by @shapp1 as hashicorp/packer#7772. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


I'm utilizing run_tags, spot_tags, run_volume_tags, and tags for the aws builder. The console shows the tagging process like so:

==> amazon-ebs: Adding tags to source instance
amazon-ebs: Adding tag: "Owner": "John Doe"
amazon-ebs: Adding tag: "OS": "linux"
=> amazon-ebs: Adding tags to source EBS Volumes
amazon-ebs: Adding tag: "Owner": "John Doe"
amazon-ebs: Adding tag: "Environment": "hr"
amazon-ebs: Requesting spot instance 't2.small' for: 0.0175
amazon-ebs: Waiting for spot request (sir-xxxxxxxx) to become active...
amazon-ebs: Adding tag: "App": "test-app-01"
amazon-ebs: Adding tag: "Beta": "true"
[..etc..]

My request is that packer output more verbose tagging text, for example, "Adding spot_tag "blah": "blah"", "Adding run_tag "blah": "blah", etc... to let the user know which block of tags to look at if a change is needed.

Thanks

Amazon Import Post Processor Fails

This issue was originally opened by @jeremymcgee73 as hashicorp/packer#10873. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

I am having a problem when I am using the Amazon Import post processor after building an ova with vmware-iso. This error does not happen every time, just seems to be random. I am setting AWS_POLL_DELAY_SECONDS=30 before running packer.

Packer version

1.7.0

Operating system and Environment details

RHEL 7.8
VMware Workstation 15

Log Fragments and crash.log files

==> vmware-iso.windows-base: Exporting virtual machine...
    vmware-iso.windows-base: Executing: ovftool
==> vmware-iso.windows-base: Running post-processor:  (type shell-local)
==> vmware-iso.windows-base (shell-local): Running local shell script: /packer/packer-shell731403097
==> vmware-iso.windows-base: Running post-processor:  (type amazon-import)
    vmware-iso.windows-base (amazon-import): Uploading 2019.ovaa to s3://NOT-REAL-BUCKET/packer-import-1617712990.ova
    vmware-iso.windows-base (amazon-import): Completed upload of 2019.ovaa to s3://NOT-REAL-BUCKET/packer-import-1617712990.ova
    vmware-iso.windows-base (amazon-import): Setting license type to 'AWS'
    vmware-iso.windows-base (amazon-import): Started import of s3://NOT-REAL-BUCKET/packer-import-1617712990.ova, task id import-ami-007c4145c618ddcd6
    vmware-iso.windows-base (amazon-import): Waiting for task import-ami-007c4145c618ddcd6 to complete (may take a while)
Build 'vmware-iso.windows-base' errored after 1 hour 26 minutes: 1 error(s) occurred:
* Post-processor failed: Import task import-ami-007c4145c618ddcd6 failed with status message: ClientError: The specified S3 resource does not exist. Reason 404 Not Found, error: ResourceNotReady: failed waiting for successful resource state

Unstable builds with temporary IAM Profile

This issue was originally opened by @siddhantvirmani as hashicorp/packer#9344. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

I'm having this very weird bug where, every few builds, there is an issue with the IAM profile not being attached to the temporary EC2 instance (which results in yum complaining).

Reproduction Steps

I run PACKER_LOG=1 packer build packer/build.json. Usually, immediately rerunning the build works. However, if I wait too long before rebuilding I get the same problem. I haven't been able to find a guaranteed pattern whatsoever.

Packer version

1.5.5

Simplified Packer Buildfile

https://gist.github.com/siddhantvirmani/61946abc1e8f9321562186feb60d4b29#file-build-json

Operating system and Environment details

I run packer on WSL2. Env has aws secrets.

Log Fragments and crash.log files

Successful log: https://gist.github.com/siddhantvirmani/61946abc1e8f9321562186feb60d4b29#file-successful-packer-log
I cancelled after the successful aws configure list. Can confirm that if it finds the credentials, the build succeeds without any issues. I can run a full build and upload that log too if needed.

Unsuccessful log: https://gist.github.com/siddhantvirmani/61946abc1e8f9321562186feb60d4b29#file-unsuccessful-packer-log
Cancelled this one after yum shows the update list - it just hangs there after that.

Add Static IP option for AWS and Azure

This issue was originally opened by @TechnicalMercenary as hashicorp/packer#10435. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Description

When I'm building a GCP VMI with the googlecompute builder, I can specify an 'address', which allows a "pre-allocated static external IP address". I do this because a software repo that I'm pulling from is protected by an IP whitelist.

I would like something similar in the 'amazon-ebs' builder and the 'azure-arm' builder. I'm pretty sure that both cloud platforms allow a pre-allocated Static IP to be assigned.

AWS has their Elastic IPs
Azure has 'Public IP address'

Use Case(s)

Allow the AMI/VMI being built to have a known IP. This would allow access to external resources that are secured by an IP whitelist (allow list).

Potential configuration

See the googlecompute documentation for reference

Potential References

Amazon Builder couldn't connect via SSH to a Windows box

This issue was originally opened by @phoewass as hashicorp/packer#4958. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


I tried to bake SSH service into my images and use it instead of WinRM, since Microsoft is supporting SSH for Windows[1].

So I created a Windows AMI image with SSH enabled using Packer and everything went fine.
Now when I'm trying to use it as a base image in Packer using ssh communicator it fails.

==> amazon-ebs: Prevalidating AMI Name...
    amazon-ebs: Found Image ID: ami-xxxxxxxx
==> amazon-ebs: Creating temporary keypair: packer_5931600e-8bca-8eab-3416-f5f9b63283e1
==> amazon-ebs: Creating temporary security group for this instance...
==> amazon-ebs: Authorizing access to port 22 the temporary security group...
==> amazon-ebs: Launching a source AWS instance...
    amazon-ebs: Instance ID: i-xxxxxxxxxxxxxxxxx
==> amazon-ebs: Waiting for instance (i-xxxxxxxxxxxxxxxxx) to become ready...
==> amazon-ebs: Adding tags to source instance
    amazon-ebs: Adding tag: "Name": "Packer Builder"
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Error waiting for SSH: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' errored: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

==> Some builds didn't complete successfully and had errors:
--> amazon-ebs: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

==> Builds finished but no artifacts were created.

I did some digging and found out that the Amazon SSH communicator uses the generated key pair to connect to the box, while Windows boxes need to wait for the generated password and then use it to connect, just like the WinRM communicator does.

[1] https://blogs.msdn.microsoft.com/powershell/2015/06/03/looking-forward-microsoft-support-for-secure-shell-ssh/

Transition to use aws-sdk-go v2

This issue was originally opened by @SwampDragons as hashicorp/packer#10691. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


There are rumblings that the v1 sdk may be getting deprecated in a year or so; we should investigate what it would take to upgrade so we don't get caught out.

I don't think it's necessary to investigate until after the core-plugin split is complete, so I'm targeting for v1.8.1

cannot access the alpine ec2 instance created from resultant AMI

This issue was originally opened by @ffoysal as hashicorp/packer#10293. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

Created an AMI from the source Alpine AMI (alpine-ami-3.12.1-x86_64-r0). The packer build was successful and produced the AMI. I can create an EC2 instance using the resultant AMI, but I cannot SSH to the newly created EC2 instance using the same SSH user, even though during the packer build the same user named alpine worked. An EC2 instance created from the source AMI (alpine-ami-3.12.1-x86_64-r0) can be accessed using the user named alpine.

Reproduction Steps

  • packer build example.json
  • create ec2 instance using the newly produced AMI
  • try to do ssh ... alpine@<ec2-ip>
  • it will ask for password
ssh -i mykey.pem alpine@<ec2-ip>

alpine@<ec2-ip>'s password:

Packer version

Packer v1.6.5

Simplified Packer Buildfile

{
  "builders": [
    {
      "type": "amazon-ebs",
      "profile": "default",
      "region": "us-east-2",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "alpine-ami-3.12.1-x86_64-r0",
          "root-device-type": "ebs"
        },
        "owners": ["538276064493"],
        "most_recent": true
      },
      "instance_type": "t2.micro",
      "ssh_username": "alpine",
      "ami_name": "packer-example {{timestamp}}"
    }
  ]
}

Operating system and Environment details

Mac OS catalina 10.15.7

AMI tag not persisting after build

This issue was originally opened by @RixhersAjazi as hashicorp/packer#10132. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

When I set tags on the amazon-ebs builder, I see the tags being applied in the Packer log output while Packer runs. However, I do not see the final tag in the AWS console when I go and check on my AMIs.

Reproduction Steps

  • Use this amazon-ebs builder definition:
    {
      "name": "t3a.micro",
      "profile": "{{user `profile`}}",
      "region": "{{user `aws_region`}}",
      "type": "amazon-ebs",
      "source_ami_filter": {
        "filters": {
          "virtualization-type": "hvm",
          "name": "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200609",
          "root-device-type": "ebs"
        },
        "owners": ["099720109477"],
        "most_recent": false
      },
      "ami_users": ["320176959135", "351145466100", "485438916218"],
      "instance_type": "t3a.micro",
      "ssh_username": "ubuntu",
      "ami_name": "foooo-{{timestamp}}",
      "subnet_filter": {
        "filters": {
          "tag:Group": "meta-new",
          "tag:Name": "meta-new-private-*"
        },
        "most_free": true,
        "random": false
      },
      "ssh_interface": "private_ip",
      "ami_regions": ["us-east-2", "ap-southeast-2"],
      "tags": {
        "Release": "{{user `release_tag`}}"
      },
      "run_tags":{
        "Release": "{{user `release_tag`}}"
      }
    }
  • Run packer
  • Go to AWS CLI and see that the AMI that was generated and subsequently copied to the other regions doesn't have the tag defined:

[Screenshot: Screen Shot 2020-10-20 at 1 09 24 PM]

Packer version

rixhersajazi@hostyhost packer-dev (ticketT-7772)*$ packer --version
1.6.0

Packer build output

==> t3a.micro: Waiting for the instance to stop...
==> t3a.micro: Creating AMI fooooo-1603212597 from instance i-06aa58418cfc94449
    t3a.micro: AMI: ami-00275c4502a4e0dba
==> t3a.micro: Waiting for AMI to become ready...
==> t3a.micro: Copying/Encrypting AMI (ami-00275c4502a4e0dba) to other regions...
    t3a.micro: Copying to: us-east-2
    t3a.micro: Copying to: ap-southeast-2
    t3a.micro: Waiting for all copies to complete...
==> t3a.micro: Modifying attributes on AMI (ami-0b2881c5cba17e5b0)...
    t3a.micro: Modifying: users
==> t3a.micro: Modifying attributes on AMI (ami-00275c4502a4e0dba)...
    t3a.micro: Modifying: users
==> t3a.micro: Modifying attributes on AMI (ami-0752628f480d59636)...
    t3a.micro: Modifying: users
==> t3a.micro: Modifying attributes on snapshot (snap-04a99f316e19fced0)...
==> t3a.micro: Modifying attributes on snapshot (snap-032d3d88e4844f1dd)...
==> t3a.micro: Modifying attributes on snapshot (snap-0bd7cae46477fc6ca)...
==> t3a.micro: Adding tags to AMI (ami-00275c4502a4e0dba)...
==> t3a.micro: Tagging snapshot: snap-04a99f316e19fced0
==> t3a.micro: Creating AMI tags
    t3a.micro: Adding tag: "Release": "FEATURE_BRANCH_QA_ONLY"
==> t3a.micro: Creating snapshot tags
==> t3a.micro: Adding tags to AMI (ami-0752628f480d59636)...
==> t3a.micro: Tagging snapshot: snap-032d3d88e4844f1dd
==> t3a.micro: Creating AMI tags
    t3a.micro: Adding tag: "Release": "FEATURE_BRANCH_QA_ONLY"
==> t3a.micro: Creating snapshot tags
==> t3a.micro: Adding tags to AMI (ami-0b2881c5cba17e5b0)...
==> t3a.micro: Tagging snapshot: snap-0bd7cae46477fc6ca
==> t3a.micro: Creating AMI tags
    t3a.micro: Adding tag: "Release": "FEATURE_BRANCH_QA_ONLY"
==> t3a.micro: Creating snapshot tags
==> t3a.micro: Terminating the source AWS instance...
==> t3a.micro: Cleaning up any extra volumes...
==> t3a.micro: No volumes to clean up, skipping
==> t3a.micro: Deleting temporary security group...
==> t3a.micro: Deleting temporary keypair...
Build 't3a.micro' finished.

==> Builds finished. The artifacts of successful builds are:
--> t3a.micro: AMIs were created:
ap-southeast-2: ami-0752628f480d59636
us-east-2: ami-0b2881c5cba17e5b0
us-west-2: ami-00275c4502a4e0dba

Notice there are lines saying:

==> t3a.micro: Creating AMI tags
    t3a.micro: Adding tag: "Release": "FEATURE_BRANCH_QA_ONLY"

Operating system and Environment details

MacOS Catalina - 10.15.7

Log Fragments and crash.log files

Packer 1.3.1 waits indefinitely for AMI to be ready despite console indicating that it is ready

This issue was originally opened by @res0nance as hashicorp/packer#6765. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Running packer on a linux machine to create a windows ami gives the following log

https://gist.github.com/res0nance/91c52ac1908cb121b9a98e50a7252359

Packer waits forever instead of erroring out, but when the AWS console is checked, the AMI Packer is waiting for is available.
This issue is also sometimes seen when the copy is being made for encrypted boot.

amazon-ebs: Terminate spot instance before copying / tagging AMI

This issue was originally opened by @nilsmeyer as hashicorp/packer#7932. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Currently, when using spot instances, the amazon-ebs builder will keep the instance running almost through the end of the build process, e.g:

[lots of provisioner output]
==> amazon-ebs: Creating unencrypted AMI example from instance i-00000000000000000
    amazon-ebs: AMI: ami-00000000000000000
==> amazon-ebs: Waiting for AMI to become ready...
==> amazon-ebs: Copying AMI (ami-00000000000000000) to other regions...
    amazon-ebs: Copying to: eu-central-1
    amazon-ebs: Waiting for all copies to complete...
==> amazon-ebs: Modifying attributes on AMI (ami-00000000000000000)...
    amazon-ebs: Modifying: groups
    amazon-ebs: Modifying: users
==> amazon-ebs: Modifying attributes on AMI (ami-00000000000000001)...
    amazon-ebs: Modifying: groups
    amazon-ebs: Modifying: users
==> amazon-ebs: Modifying attributes on snapshot (snap-00000000000000000)...
==> amazon-ebs: Modifying attributes on snapshot (snap-00000000000000001)...
==> amazon-ebs: Adding tags to AMI (ami-00000000000000000)...
==> amazon-ebs: Tagging snapshot: snap-00000000000000000
==> amazon-ebs: Creating AMI tags
    amazon-ebs: Adding tag: [..]
==> amazon-ebs: Creating snapshot tags
==> amazon-ebs: Adding tags to AMI (ami-00000000000000001)...
==> amazon-ebs: Tagging snapshot: snap-00000000000000001
==> amazon-ebs: Creating AMI tags
    amazon-ebs: Adding tag: [..]
==> amazon-ebs: Creating snapshot tags
==> amazon-ebs: Cancelling the spot request...
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...

Especially the step of copying the AMI to other regions can take quite a bit of time. I think it makes sense to terminate the instance before this step for cost reduction.

amazon-import - improvement

This issue was originally opened by @Roxyrob as hashicorp/packer#9582. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Feature Description

amazon-import supporting "import-snapshot" (ami import passing through snapshot.).

Use Case(s)

I know support from the vendor is important, but this case can be an exception. AWS does not declare support for the kernel versions present in many new OSes (CentOS 8, Fedora 31, ...) and stops the vmimport task at the AMI import stage with the error "Unable to determine kernel version" (see hashicorp/packer#8302, from where we started to search for a solution/workaround).

AWS does not, however, stop the import task when importing as a snapshot (instead of an AMI directly). We successfully reached a working CentOS 8 custom AMI manually, starting from a VirtualBox OVA created e.g. with packer virtualbox-iso, by:

  • extracting VMDK inside ova (tar -xvf packer-centos8min-x86_64.ova)
  • uploading vmdk (aws s3 cp ...)
  • importing VMDK to aws as "snapshot" (aws ec2 import-snapshot...)
  • Creating AMI from snapshot (aws ec2 register-image...)

The OVA was made using Packer, provisioned to set up the right initramfs with the modules needed to allow root device discovery during boot on Xen-based EC2 instances (xen-blkfront) and Nitro-based EC2 instances (nvme, ena).

It would be nice if amazon-import could support import-snapshot (using the VMDK inside the OVA), automating all the image creation steps needed to make a custom image of a new OS release (when the kernel version is not yet supported by AWS, which is often the case for a long time).

AWS EC2 instance using packer amazon-ebssurrogate AMI not booting up

This issue was originally opened by @arslan70 as hashicorp/packer#10480. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


As the title says, the EC2 instance fails to boot. The system status shows 1/2 health checks failed. There is very little information in the system log, and AWS support has not been able to help either.

The main purpose of using amazon-ebssurrogate is to target the arm64 architecture, which is not supported by amazon-ebs. I doubt that it's actually a Packer issue; most likely it's bad config on my part. I would appreciate some support.

Packer version

1.6.6

Simplified Packer Buildfile

packer.json

Operating system and Environment details

Source AMI: ami-0c582118883b46f4f (latest official amazonlinux2 AMI)
Target Instance family: c6gn
Architecture: arm64
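
Since the original template is only linked as a gist, here is a minimal amazon-ebssurrogate sketch for an arm64 AMI, assuming the builder's ami_architecture, ena_support and ami_root_device options; device names, volume sizes and the instance type are placeholders, not values from the report:

{
  "builders": [{
    "type": "amazon-ebssurrogate",
    "region": "us-east-1",
    "source_ami": "ami-0c582118883b46f4f",
    "instance_type": "c6gn.large",
    "ssh_username": "ec2-user",
    "ami_name": "arm64-example-{{timestamp}}",
    "ami_architecture": "arm64",
    "ami_virtualization_type": "hvm",
    "ena_support": true,
    "launch_block_device_mappings": [{
      "device_name": "/dev/xvdf",
      "volume_size": 8,
      "volume_type": "gp2",
      "delete_on_termination": true
    }],
    "ami_root_device": {
      "source_device_name": "/dev/xvdf",
      "device_name": "/dev/xvda",
      "volume_size": 8,
      "volume_type": "gp2",
      "delete_on_termination": true
    }
  }]
}

If the OS is copied onto the surrogate volume by the provisioners, the registered image also needs a bootloader plus an initramfs containing the nvme and ena modules, since c6gn is Nitro-based; a missing piece there is a common cause of the 1/2 status checks failure.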

Missing APT lists when provisioning amazon-ebs

This issue was originally opened by @realies as hashicorp/packer#9966. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Overview of the Issue

Occasionally seeing this in the very first steps of provisioning an image. Running Packer for a second time usually resolves it.

amazon-ebs: E: can not open /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic-backports_InRelease - fopen (2: No such file or directory)

Reproduction Steps

Run apt-get install on the latest ubuntu bionic AMI

Packer version

Packer v1.5.5


Operating system and Environment details

NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
EC2 t2.medium
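
This error typically appears when provisioning starts before cloud-init and the instance's automatic apt refresh have finished, so a common workaround (not behaviour of the builder itself) is to wait and refresh the lists explicitly before installing; a minimal shell-provisioner sketch, with the package name as a placeholder:

{
  "provisioners": [{
    "type": "shell",
    "pause_before": "10s",
    "inline": [
      "cloud-init status --wait || true",
      "sudo apt-get update",
      "sudo DEBIAN_FRONTEND=noninteractive apt-get install -y build-essential"
    ]
  }]
}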

Support tag on creation

This issue was originally opened by @nitrocode as hashicorp/packer#10372. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


Community Note

Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request.
If you are interested in working on this issue or have submitted a pull request, please leave a comment.

Description

https://aws.amazon.com/about-aws/whats-new/2020/12/amazon-machine-images-support-tag-on-create-tag-based-access-control/

Use Case(s)

Tag on creation instead of after creation

Thank you!
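
For context, tags are configured on the builder roughly like this (an abbreviated sketch with placeholder values). In the versions shown in these reports they are applied with separate CreateTags calls after the AMI and snapshots exist (the "Adding tags to AMI" / "Creating AMI tags" steps in the build output above); the request is to pass them as tag specifications on the create calls instead, so tag-based access control can apply from the moment the resources exist:

{
  "builders": [{
    "type": "amazon-ebs",
    "ami_name": "example-{{timestamp}}",
    "run_tags": {
      "Name": "packer-builder"
    },
    "tags": {
      "Name": "example-ami",
      "BuiltBy": "packer"
    },
    "snapshot_tags": {
      "Name": "example-ami-root"
    }
  }]
}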

Allow AMI block device mapping to be created without snapshot

This issue was originally opened by @otterley as hashicorp/packer#5290. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.


I have a use case with two launch_block_device_mappings, /dev/sda1 (the root device) and /dev/sdb (a data device). The data device exists during the build phase to satisfy some build-time prerequisites, but the actual volume content can be discarded at the end of the build. The device can also be quite small.

Meanwhile, there are two ami_block_device_mappings of the same device names, /dev/sda1 and /dev/sdb. I'd like /dev/sdb NOT to be associated with a snapshot, since at launch time the instance should have a fresh volume. But Packer seems to always associate it with a snapshot because the device was mapped at launch time. Setting snapshot_id to "" seems to have no effect.
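
The relevant fragment of the builder block looks roughly like this (a sketch; sizes are placeholders, device names are from the report). The intent is for the AMI's /dev/sdb entry to describe a fresh, empty volume at launch rather than referencing a snapshot of the small build-time device:

"launch_block_device_mappings": [
  { "device_name": "/dev/sda1", "volume_size": 20, "volume_type": "gp2", "delete_on_termination": true },
  { "device_name": "/dev/sdb", "volume_size": 1, "volume_type": "gp2", "delete_on_termination": true }
],
"ami_block_device_mappings": [
  { "device_name": "/dev/sda1", "volume_size": 20, "volume_type": "gp2", "delete_on_termination": true },
  { "device_name": "/dev/sdb", "volume_size": 100, "volume_type": "gp2", "delete_on_termination": true }
]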

spot instances on p3.* fails to connect because packer tries to connect on private IP

This issue was originally opened by @amrragab8080 as hashicorp/packer#10347. It was migrated here as a result of the Packer plugin split. The original body of the issue is below.



Overview of the Issue

Launching a p3.* build node as a spot instance: the node launches in a public subnet with a public IP, but Packer tries to connect to the private IPv4 address.

==> amazon-ebs: Using ssh communicator to connect: 172.31.2.186
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Timeout waiting for SSH.
==> amazon-ebs: Terminating the source AWS instance...

The connection times out. My environment happens to be flexible enough that my Packer launch nodes are also on AWS and can reach the build nodes on their private IPs, but I wanted to report the bug nonetheless.

Reproduction Steps

"builders": [{
    "type": "amazon-ebs",
    "region": "{{user `region`}}",
    "source_ami": "{{user `build_ami`}}",
    "run_tags": {
        "Name": "packer-gpu-processor-{{user `flag`}}"
    },
    "subnet_id": "{{user `subnet_id`}}",
    "security_group_ids": "{{user `security_groupids`}}",
    "spot_instance_types": ["p3.8xlarge","p3.16xlarge"],
    "spot_price": "auto",
    "ssh_username": "ubuntu",
    "ami_name": "ml-gpu-processor_{{user `flag`}}-{{timestamp}}",
    "launch_block_device_mappings":[{
      "delete_on_termination": true,
      "device_name": "/dev/sda1",
      "volume_size": 150,
      "volume_type": "gp2"
    }]
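
For completeness, the usual way to make Packer use the public address (a workaround rather than a fix for the interface selection reported here, and dependent on the subnet actually assigning a public IP) is to add these settings to the builder above:

    "associate_public_ip_address": true,
    "ssh_interface": "public_ip",

Both options exist in the amazon-ebs builder; the report above is about the address Packer selects when they are left unset.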

Packer version

1.6.5


Operating system and Environment details


Ubuntu 20.04 local build environment.

