packer-plugin-tart's Introduction

Packer Plugin Tart

The Tart multi-component plugin can be used with HashiCorp Packer to create custom macOS images. For the full list of available features for this plugin see docs.

Installation

Using pre-built releases

Using the packer init command

Starting from version 1.7, Packer supports a new packer init command allowing automatic installation of Packer plugins. Read the Packer documentation for more information.

To install this plugin, copy and paste this code into your Packer configuration, then run packer init.

packer {
  required_plugins {
    tart = {
      version = ">= 1.11.1"
      source  = "github.com/cirruslabs/tart"
    }
  }
}

Manual installation

You can find pre-built binary releases of the plugin on the GitHub releases page. Once you have downloaded the latest archive for your target OS, uncompress it to retrieve the plugin binary for your platform. To install the plugin, please follow the Packer documentation on installing a plugin.

From Sources

If you prefer to build the plugin from sources, clone the GitHub repository locally and run the command go build from the root directory. Upon successful compilation, a packer-plugin-tart plugin binary file can be found in the root directory. To install the compiled plugin, please follow the official Packer documentation on installing a plugin.

Configuration

For more information on how to configure the plugin, please read the documentation on HashiCorp's website.

Contributing

  • If you think you've found a bug in the code or you have a question regarding the usage of this software, please reach out to us by opening an issue in this GitHub repository.
  • Contributions to this project are welcome: if you want to add a feature or fix a bug, please do so by opening a Pull Request in this GitHub repository. For feature contributions, we kindly ask you to open an issue to discuss it beforehand.

packer-plugin-tart's People

Contributors

apachont, bytesguy, dependabot[bot], edigaryev, fkorotkov, jonnybbb, mayeut, n8felton, nywilken, raven, roblabla, sc0rp10, sjchmiela, sparshev, torarnv, trodemaster

packer-plugin-tart's Issues

No device for installation media was detected

I'm finding that the Debian 11 installer does not detect the ISO DVD installation medium, and I have to tell it via the installer menu that the DVD device is /dev/vdb1, after which it boots.

You can reproduce this via:

git clone https://github.com/HariSekhon/Packer-templates pack

cd pack

make debian-tart-http

(http preseed.cfg is to work around issue #71)

Why does it detect the installation medium on other platforms such as x86_64 in VirtualBox but not on Tart?

Is it because the /dev/vd[a-z] devices are less standard than /dev/sd[a-z] and not checked?

If so would it be possible to just have Tart present its devices as /dev/sda, /dev/sdb for simplicity and compatibility?

Allow running provisioners when recovery = true

Currently when recovery = true, the provisioners will not run, as evidenced by this piece of code. I don't quite understand why. My use-case is having a packer file that boots the VM to recovery to disable SIP, reboot, and then run the repackaging steps to install some software. Would you be open to adapting the behavior to allow this use-case?
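For illustration, here is a rough sketch of the kind of template this use case implies (the base image name, boot command and provisioner contents are made up, and, as noted above, the provisioner is currently skipped when recovery = true):

source "tart-cli" "recovery" {
  vm_base_name = "sonoma-vanilla" # illustrative base image
  vm_name      = "sonoma-sip-off"
  recovery     = true             # boot into recoveryOS first
  ssh_username = "admin"
  ssh_password = "admin"
  boot_command = [
    # Keystrokes that open Terminal in recoveryOS, run `csrutil disable`
    # and reboot would go here (omitted for brevity).
  ]
}

build {
  sources = ["source.tart-cli.recovery"]

  # The request in this issue: allow these to run once the VM is back up.
  provisioner "shell" {
    inline = ["echo 'repackaging steps would go here'"]
  }
}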

Create Linux VM

Now that Tart supports Linux VMs, it would be nice to be able to create Linux VMs from scratch with Packer.
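A hypothetical sketch of what such a configuration could look like, assuming an ISO-based creation option; the from_iso attribute and the values below are assumptions for illustration, not confirmed plugin settings:

source "tart-cli" "linux" {
  # Assumed option: create a fresh VM and boot the installer from this ISO.
  from_iso     = ["ubuntu-22.04-live-server-arm64.iso"]
  vm_name      = "ubuntu-22.04"
  cpu_count    = 4
  memory_gb    = 8
  disk_size_gb = 20
  ssh_username = "ubuntu"
  ssh_password = "ubuntu"
}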

No cleanup after failure

When Packer fails to build the VM (macos1306 in this case), it leaves the image in the local registry:

...
Cancelling build after receiving interrupt
2024/02/29 12:41:53 Cancelling builder after context cancellation context canceled
2024/02/29 12:41:53 packer-plugin-ansible_v1.1.1_x5.0_darwin_arm64 plugin: 2024/02/29 12:41:53 Received interrupt signal (count: 1). Ignoring.
2024/02/29 12:41:53 packer-provisioner-shell plugin: Received interrupt signal (count: 1). Ignoring.
2024/02/29 12:41:53 packer-plugin-tart_v1.8.1_x5.0_darwin_arm64 plugin: 2024/02/29 12:41:53 Received interrupt signal (count: 1). Ignoring.
==> tart-cli: Failed to run the boot command: context canceled
==> tart-cli: Failed to run the boot command: context canceled
==> tart-cli: Step "stepRun" failed
==> tart-cli: [c] Clean up and exit, [a] abort without cleanup, or [r] retry step (build may fail even if retry succeeds)? c
==> tart-cli: Waiting for the tart process to exit...
==> tart-cli: Connection reset by peer (os error 54)
==> Wait completed after 4 minutes 34 seconds
Build 'tart-cli' errored after 4 minutes 34 seconds: Failed to run the boot command: context canceled

==> Wait completed after 4 minutes 34 seconds
2024/02/29 12:42:10 waiting for all plugin processes to complete...
Cleanly cancelled builds after being interrupted.
2024/02/29 12:42:10 /Users/admin/.config/packer/plugins/github.com/cirruslabs/tart/packer-plugin-tart_v1.8.1_x5.0_darwin_arm64: plugin process exited
2024/02/29 12:42:10 /Users/admin/git/aquarium-bait/.bin/packer: plugin process exited
2024/02/29 12:42:10 /Users/admin/.config/packer/plugins/github.com/hashicorp/ansible/packer-plugin-ansible_v1.1.1_x5.0_darwin_arm64: plugin process exited
...
$ tart list
Source Name      Size State  
local  macos1306 15   stopped
local  test1     23   stopped
...

Usually build plugins clean up after this, so maybe there is some sort of issue in the cleanup logic?

VNC password is not printed in debug log

The VNC password is very useful if you want to capture a headless image build or manipulate the UI via vncdo or any other method. The VMware plugin prints it; maybe there is a way to have the same here?

  • Tart current logs:
    ==> tart-cli: Starting the virtual machine...
    ==> tart-cli: Detecting host IP...
    2024/02/29 12:40:28 packer-plugin-tart_v1.8.1_x5.0_darwin_arm64 plugin: 2024/02/29 12:40:28 Executing tart: []string{"ip", "--wait", "120", "macos1306"}
    ==> tart-cli: Host IP is assumed to be 172.16.226.1
    ==> tart-cli: Waiting for the VNC server credentials from Tart...
    ==> tart-cli: Retrieved VNC credentials, connecting...
    ==> tart-cli: Connected to the VNC!
    ==> tart-cli: Waiting 20s after the VM has booted...
    
  • Tart desired logs:
    ==> tart-cli: Starting the virtual machine...
    ==> tart-cli: Detecting host IP...
    2024/02/29 12:40:28 packer-plugin-tart_v1.8.1_x5.0_darwin_arm64 plugin: 2024/02/29 12:40:28 Executing tart: []string{"ip", "--wait", "120", "macos1306"}
    ==> tart-cli: Host IP is assumed to be 172.16.226.1
    ==> tart-cli: Waiting for the VNC server credentials from Tart...
    ==> tart-cli: Retrieved VNC credentials, connecting...
        tart-cli: The VM will be run headless, without a GUI. If you want to
        tart-cli: view the screen of the VM, connect via VNC with the password "ABcD1Fgh" to
        tart-cli: vnc://127.0.0.1:5935
    ==> tart-cli: Connected to the VNC!
    ==> tart-cli: Waiting 20s after the VM has booted...
    
  • Example VMX logs:
    ==> vmware-vmx: Starting virtual machine...
        vmware-vmx: The VM will be run headless, without a GUI. If you want to
        vmware-vmx: view the screen of the VM, connect via VNC with the password "ABcD1Fgh" to
        vmware-vmx: vnc://127.0.0.1:5935
    ==> vmware-vmx: Connecting to VNC...
    ==> vmware-vmx: Waiting 1m0s for boot...
    

Example Linux Packer build VNC timeout and Failure to halt on interrupt

While attempting to build the Linux example to see what the output was, I encountered an issue where it was waiting for VNC credentials that never arrived.

I tried to cancel the action via Cmd+C and it printed "Cancelling build after receiving interrupt", but it was frozen. I had to kill the process in Activity Monitor before Packer could continue.

packer-plugin-tart/example on  main 
❯ packer init ubuntu-22.04-vanilla.pkr.hcl 

packer-plugin-tart/example on  main 
❯ packer validate ubuntu-22.04-vanilla.pkr.hcl
The configuration is valid.

packer-plugin-tart/example on  main 
❯ packer build ubuntu-22.04-vanilla.pkr.hcl
tart-cli.tart: output will be in this color.

==> tart-cli.tart: Creating virtual machine...
==> tart-cli.tart: Starting the virtual machine for installation...
==> tart-cli.tart: Waiting for the VNC server credentials from Tart...
Cancelling build after receiving interrupt
Build 'tart-cli.tart' errored after 2 hours 13 minutes: unexpected EOF

==> Wait completed after 2 hours 13 minutes
Cleanly cancelled builds after being interrupted.

packer-plugin-tart/example on  main took 2h52m8s 
❯ 

Intermittent failures for disk resizing

Summary

Sometimes when running packer builds, the disk resize fails:

==> tart-cli.tart: Error repairing map: Couldn't read partition map (-69876)
    tart-cli.tart: Nonexistent, unknown, or damaged partition map scheme
    tart-cli.tart: If you are sure this disk contains a (damaged) APM, MBR, or GPT partition map,
    tart-cli.tart: you can hereby try to repair it enough to be recognized as a map; another
    tart-cli.tart: "diskutil repairDisk disk0" might then be necessary for further repairs
    tart-cli.tart: Proceed? (y/N)
==> tart-cli.tart: Resizing the partition...
==> tart-cli.tart: Could not find disk for disk0s2

Disk list from the VM:

admin@admins-Virtual-Machine ~ % diskutil list
/dev/disk1 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *70.0 GB    disk1
   1:             Apple_APFS_ISC Container disk2         524.3 MB   disk1s1
   2:                 Apple_APFS Container disk3         34.1 GB    disk1s2
                    (free space)                         5.4 GB     -

/dev/disk3 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +34.1 GB    disk3
                                 Physical Store disk1s2
   1:                APFS Volume Macintosh HD            8.6 GB     disk3s1
   2:              APFS Snapshot com.apple.os.update-... 8.6 GB     disk3s1s1
   3:                APFS Volume Preboot                 4.3 GB     disk3s2
   4:                APFS Volume Recovery                707.7 MB   disk3s3
   5:                APFS Volume Data                    8.0 GB     disk3s5
   6:                APFS Volume VM                      20.5 KB    disk3s6

So it appears that in some cases we get disk1 instead of disk0.

If I ssh into this instance and execute manually:

  • yes | diskutil repairDisk disk1
  • diskutil apfs resizeContainer disk1s2 0

Then everything is hunky dory. I'll see if I can PR something into step_disk_resize.go that tries to find the appropriate disk first.

PACKER_HTTP_ADDR is unset for shell provisioner

It seems the plugin forgets to set PACKER_HTTP_IP (a standard Packer interface), so PACKER_HTTP_ADDR is left unpopulated. Without this environment variable it's quite hard to pull any data from the host via Packer's standard HTTP server in a shell provisioner.
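For context, this is the kind of provisioner the variable is needed for (a minimal sketch; the source reference and file name are made up):

build {
  sources = ["source.tart-cli.tart"]

  provisioner "shell" {
    inline = [
      # Pull a file from Packer's built-in HTTP server on the host.
      # With PACKER_HTTP_ADDR/PACKER_HTTP_IP unset, there is no address to use here.
      "curl -fsSL http://$PACKER_HTTP_ADDR/bootstrap.sh -o /tmp/bootstrap.sh",
      "sh /tmp/bootstrap.sh",
    ]
  }
}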

  • Current Tart plugin:

    packer-provisioner-shell plugin: [INFO] RPC client: Communicator ended with: 0
    packer-plugin-tart_v1.8.1_x5.0_darwin_arm64 plugin: 2024/02/29 16:25:30 [DEBUG] Opening new ssh session
    packer-plugin-tart_v1.8.1_x5.0_darwin_arm64 plugin: 2024/02/29 16:25:30 [DEBUG] starting remote command: chmod +x /tmp/script_2697.sh; PACKER_BUILDER_TYPE='tart-cli' PACKER_BUILD_NAME='tart-cli' PACKER_HTTP_PORT='8625'  /tmp/script_2697.sh
    
  • VMX:

    ...
    packer-provisioner-shell plugin: [INFO] RPC client: Communicator ended with: 0
    packer-builder-vmware-vmx plugin: [DEBUG] Opening new ssh session
    packer-builder-vmware-vmx plugin: [DEBUG] starting remote command: chmod +x /tmp/script_2346.sh; PACKER_BUILDER_TYPE='vmware-vmx' PACKER_BUILD_NAME='vmware-vmx' PACKER_HTTP_ADDR='172.16.1.1:8350' PACKER_HTTP_IP='172.16.1.1' PACKER_HTTP_PORT='8350'  /tmp/script_2346.sh
    ...
    

Document http_directory

It seems there is a new http_directory setting but it's not documented anywhere that I can find.

When I add it I find that I get this output:

==> ubuntu-22.04.tart-cli.ubuntu: Starting HTTP server on port 8232

So it seems to work in the latest plugin; I basically guessed its existence from issues #29, #58 and #63.
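For the record, here is a minimal sketch of how the setting appears to be used, with an illustrative directory name and boot command:

source "tart-cli" "ubuntu" {
  vm_base_name   = "ubuntu-22.04-vanilla" # illustrative base image
  vm_name        = "ubuntu-22.04-custom"
  ssh_username   = "admin"
  ssh_password   = "admin"
  # Serves the contents of ./http over HTTP on a random port; the address is
  # exposed to boot_command as {{ .HTTPIP }} and {{ .HTTPPort }}.
  http_directory = "http"
  boot_command   = [
    "<wait10s>url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/user-data<enter>",
  ]
}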

Packer crashing when using version 1.0.0 of tart plugin

Description

When attempting to build a macOS VM today using Packer and the Tart plugin, the build failed with:

2023/03/01 10:32:10 ConfigSpec failed: gob: type cty.Type has no exported fields
2023/03/01 10:32:10 waiting for all plugin processes to complete...
2023/03/01 10:32:10 /Users/ci/.config/packer/plugins/github.com/cirruslabs/tart/packer-plugin-tart_v1.0.0_x5.0_darwin_arm64: plugin process exited
panic: ConfigSpec failed: gob: type cty.Type has no exported fields [recovered]
	panic: ConfigSpec failed: gob: type cty.Type has no exported fields

goroutine 1 [running]:
log.Panic({0x14000f1e798?, 0x1000000000090?, 0x1?})
	/Users/runner/hostedtoolcache/go/1.18.9/x64/src/log/log.go:385 +0x68
github.com/hashicorp/packer/packer.(*cmdBuilder).checkExit(0x140004a8790?, {0x106f924c0, 0x140004a87c0}, 0x0)
	/Users/runner/work/packer/packer/packer/cmd_builder.go:47 +0x84
github.com/hashicorp/packer/packer.(*cmdBuilder).ConfigSpec.func1()
	/Users/runner/work/packer/packer/packer/cmd_builder.go:19 +0x44
panic({0x106f924c0, 0x140004a87c0})
	/Users/runner/hostedtoolcache/go/1.18.9/x64/src/runtime/panic.go:838 +0x204
github.com/hashicorp/packer-plugin-sdk/rpc.(*commonClient).ConfigSpec(0x14000682220)
	/Users/runner/go/pkg/mod/github.com/hashicorp/[email protected]/rpc/common.go:44 +0x24c
github.com/hashicorp/packer/packer.(*cmdBuilder).ConfigSpec(0x0?)
	/Users/runner/work/packer/packer/packer/cmd_builder.go:22 +0x5c
github.com/hashicorp/packer/hcl2template.decodeHCL2Spec({0x10809f9d8, 0x14000e00d20}, 0x10625bac8?, {0x1145c34d8?, 0x1400000c9c0?})
	/Users/runner/work/packer/packer/hcl2template/decode.go:17 +0x3c
github.com/hashicorp/packer/hcl2template.(*PackerConfig).startBuilder(0x140005ac140, {{{0x140001d479f, 0x8}, {0x140001d47a8, 0x8}}, {0x0, 0x0}, {0x10809f9d8, 0x14000e00d20}}, 0x1400014eb10)
	/Users/runner/work/packer/packer/hcl2template/types.source.go:116 +0x148
github.com/hashicorp/packer/hcl2template.(*PackerConfig).GetBuilds(0x140005ac140, {{0x0, 0x0, 0x0}, {0x140007ea030, 0x1, 0x1}, 0x0, 0x0, {0x0, ...}, ...})
	/Users/runner/work/packer/packer/hcl2template/types.packer_config.go:654 +0xd68
github.com/hashicorp/packer/command.(*BuildCommand).RunContext(0x14000e00b10, {0x10809f038?, 0x14000706600}, 0x140003d7b00)
	/Users/runner/work/packer/packer/command/build.go:110 +0x1a8
github.com/hashicorp/packer/command.(*BuildCommand).Run(0x14000e00b10, {0x140001a4020, 0x6, 0x6})
	/Users/runner/work/packer/packer/command/build.go:38 +0xb0
github.com/mitchellh/cli.(*CLI).Run(0x140004163c0)
	/Users/runner/go/pkg/mod/github.com/mitchellh/[email protected]/cli.go:262 +0x4cc
main.wrappedMain()
	/Users/runner/work/packer/packer/main.go:262 +0xa40
main.realMain()
	/Users/runner/work/packer/packer/main.go:49 +0xb8
main.main()
	/Users/runner/work/packer/packer/main.go:35 +0x20
!!!!!!!!!!!!!!!!!!!!!!!!!!! PACKER CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
Packer crashed! This is always indicative of a bug within Packer.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Packer[1] so that we can fix this.
[1]: https://github.com/hashicorp/packer/issues
!!!!!!!!!!!!!!!!!!!!!!!!!!! PACKER CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
  • This seems to be an issue with Packer, and yet it doesn't happen with version 0.6.3 and earlier of the Tart plugin.

Packer version

1.8.4
1.8.5
1.9.0-dev

Versions affected

1.0.0

Workaround

Changed the plugin version constraint to 0.6.3 or lower for now:

packer {
  required_plugins {
    tart = {
      version = "<= 0.6.3"
      source  = "github.com/cirruslabs/tart"
    }
  }
}

Rosetta support

This plugin currently creates VMs with default settings; however, there is no option to attach a Rosetta share to the VM. Perhaps add a rosetta option:
rosetta = "<rosetta_tag>"
This would perform tart run --rosetta <tag> <vm_name>.
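A sketch of how the proposed attribute could look in a source block; rosetta is the hypothetical option from this request, not an existing plugin setting:

source "tart-cli" "linux-rosetta" {
  vm_base_name = "ubuntu-22.04" # illustrative base image
  vm_name      = "ubuntu-rosetta"
  # Hypothetical attribute; it would translate into `tart run --rosetta <tag>`.
  rosetta      = "rosetta"
  ssh_username = "admin"
  ssh_password = "admin"
}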

Thank you.

Ubuntu subiquity crashes on Tart using same exact autoinstall user-data cloud-init as works on x86_64

I expect this is a subiquity bug more than a Tart problem, so I have raised it there with full details:

https://bugs.launchpad.net/ubuntu/+source/subiquity/+bug/2022856

but I'm raising this here in case anybody has seen similar issues.

Also, I wanted to try a workaround of running a simple local webserver to serve the user-data and pointing the Ubuntu installer at the user-data over the network instead of a local cdrom ISO, to see whether I could isolate the difference from my working x86_64 build: is this an installer code path issue, an ARM code path issue, or even a Tart-related issue?

However, the installer hits the exact same error.

I had used tty2 to determine the IP and gateway, ran a python3 -m http.server on the Mac and did a curl http://192.168.64.1:8000/user-data to check that it worked.

You can reproduce using the network autoinstall user data via this command:

make ubuntu-tart-http

which will start the python web server and then run the alternate http autoinstall configuration to show this.

HTTP server IP discovery issues

It looks like there's an issue in the way the HTTP server's IP discovery is implemented (#58 )

As I understand it, typeBootCommandOverVNC will invoke tart ip to discover the guest's IP address, and derive the HTTP server address from the gateway of the subnet that IP address is on. I don't see how this could work, as the guest will not get to the point of obtaining and setting an IP address until it has fully booted.

I'm guessing some logic/ordering more akin to what the QEMU builder in https://github.com/hashicorp/packer-plugin-qemu/blob/main/builder/qemu/step_http_ip_discover.go will be necessary.

But maybe I'm totally misunderstanding how this is intended to work. Could you help, @edigaryev?

Cannot extend the image with "Error: -69519: The target disk is too small for this operation"

Hi!
I'm testing the Packer integration with tart to achieve some desired hierarchy of macOS VMs (like the cirruslabs/macos-image-templates but in my own way).

I created a base image with vanilla Sonoma and disk_size_gb = 30, then I extended the base image with a few additions like Xcode and disk_size_gb = 90, but currently I'm unable to build the Xcode-containing image due to this error:

Error: -69519: The target disk is too small for this operation, or a gap is required in your partition map which is missing or too small, which is often caused by an attempt to grow a partition beyond the beginning of another partition or beyond the end of partition map usable space

Which part is this error related to? My workstation? Incorrect initialization of base image? How can I solve it?

Thanks!

Fails after VM start with "Failed to lock auxiliary storage"

Often (but not always) fails with Failed to lock auxiliary storage and Resource temporarily unavailable errors:

...
==> tart-cli: Inspecting machine disk image...
==> tart-cli: Getting partition table...
==> tart-cli: Found recovery partition. Let's remove it to save space...
==> tart-cli: Successfully updated partitions...
==> tart-cli: Starting the virtual machine...
==> tart-cli: Detecting host IP...
2024/02/29 12:07:48 packer-plugin-tart_v1.8.1_x5.0_darwin_arm64 plugin: 2024/02/29 12:07:48 Executing tart: []string{"ip", "--wait", "120", "macos1306"}
==> tart-cli: Error Domain=VZErrorDomain Code=2 "Failed to lock auxiliary storage." UserInfo={NSLocalizedFailure=Invalid virtual machine configuration., NSLocalizedFailureReason=Failed to lock auxiliary storage., NSUnderlyingError=0x600000d5bde0 {Error Domain=NSPOSIXErrorDomain Code=35 "Resource temporarily unavailable"}}
==> tart-cli: Connection reset by peer (os error 54)

I can see the VM window briefly appear and then immediately close, and the error appears in the log. Maybe the VM startup is not complete and tart ip runs too early? When running in headless mode I don't see it fail.

% sw_vers
ProductName:		macOS
ProductVersion:		13.6
BuildVersion:		22G120

I also captured the macOS log, but there is quite a lot of it, so if you want to see something in particular I can provide that data.

Improve "VM limit exhausted" message

Right now, this is the log output when trying to build a new VM with Packer when two are already running:

% packer build [...]
tart-cli.tart: output will be in this color.

==> tart-cli.tart: Cloning virtual machine...
==> tart-cli.tart: Updating virtual machine resources...
==> tart-cli.tart: Inspecting machine disk image...
==> tart-cli.tart: Getting partition table...
==> tart-cli.tart: Starting the virtual machine...
==> tart-cli.tart: Waiting for the VNC server credentials from Tart...
==> tart-cli.tart: Retrieved VNC credentials, connecting...
==> tart-cli.tart: Failed to connect to the Tart's VNC server: dial tcp 127.0.0.1:60993: connect: connection refused
==> tart-cli.tart: Waiting for the tart process to exit...
Build 'tart-cli.tart' errored after 1 second 61 milliseconds: Failed to connect to the Tart's VNC server: dial tcp 127.0.0.1:60993: connect: connection refused

It took me a while to figure out that the two-VM limit is responsible for this log output. Is there a way to catch the limit earlier and log it appropriately?

VM does not save after build

The past 10 or so builds have not saved after finishing. I can't find anything to indicate why it might fail to save.

==> base.tart-cli.base: Provisioning with shell script: /var/folders/zz/5pds_n8j1q98bg_dlbtm6r2c0000gn/T/packer-shell195630698
==> base.tart-cli.base: Gracefully shutting down the VM...
==> base.tart-cli.base: Waiting for the tart process to exit...
Build 'base.tart-cli.base' finished after 1 hour 34 minutes.

==> Wait completed after 1 hour 34 minutes

==> Builds finished. The artifacts of successful builds are:
--> base.tart-cli.base: macos-12.6

% tart list <table formatted for readability>
| Source | Name | Size | Running |
| local | macos-12.6-4a74327b-cf71-4dcf-bb22-c5fafe0f2777 | 45 | true |
| oci | xxx.dkr.ecr.us-west-2.amazonaws.com/macos-12.6:latest | 300 | false |
| oci | xxx.dkr.ecr.us-west-2.amazonaws.com/macos-12.6@sha256:shaxxx | 300 | false |
| oci | ghcr.io/cirruslabs/macos-monterey-base:latest | 40 | false |
| oci | ghcr.io/cirruslabs/macos-monterey-base@sha256:77a2fbbf0e533200cb6b5585bbc0898e6f7f8aadd0e6e385fd0ea86b0bb2a9b4 | 40 | false |

Occasional Timeout waiting for SSH

Hey there! Thanks for all your great work on this!

I sometimes get the following error when building an image from an IPSW. My ssh_timeout is set to 180s, and I'm killing the tart process before beginning the build to ensure it's not a result of too many VMs running (>2).

==> tart-cli.base: Waiting for SSH to become available...
==> tart-cli.base: Timeout waiting for SSH.

Here is what my source block looks like; it's essentially identical to the templates provided in https://github.com/cirruslabs/macos-image-templates/blob/master/templates/vanilla-ventura.pkr.hcl

source "tart-cli" "base" {
  # You can find macOS IPSW URLs on various websites like https://ipsw.me/
  # and https://www.theiphonewiki.com/wiki/Beta_Firmware/Mac/13.x
  from_ipsw    = "${var.home}/macOS/UniversalMac_13.3_22E252_Restore.ipsw"
  vm_name      = var.vm_name
  cpu_count    = 4
  memory_gb    = 8
  disk_size_gb = 60
  ssh_username = "admin"
  ssh_password = "admin"
  ssh_timeout  = "180s"
  boot_command = [
    # hello, hola, bonjour, etc.
    "<wait60s><spacebar>",
    # Language
    "<wait30s>english<enter>",
    # Select Your Country and Region
    "<wait30s>united states<leftShiftOn><tab><leftShiftOff><spacebar>",
    # Written and Spoken Languages
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Accessibility
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Data & Privacy
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Migration Assistant
    "<wait10s><tab><tab><tab><spacebar>",
    # Sign In with Your Apple ID
    "<wait10s><leftShiftOn><tab><leftShiftOff><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Are you sure you want to skip signing in with an Apple ID?
    "<wait10s><tab><spacebar>",
    # Terms and Conditions
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # I have read and agree to the macOS Software License Agreement
    "<wait10s><tab><spacebar>",
    # Create a Computer Account
    "<wait10s>admin<tab><tab>admin<tab>admin<tab><tab><tab><spacebar>",
    # Enable Location Services
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Are you sure you don't want to use Location Services?
    "<wait10s><tab><spacebar>",
    # Select Your Time Zone
    "<wait10s><tab>UTC<enter><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Analytics
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Screen Time
    "<wait10s><tab><spacebar>",
    # Siri
    "<wait10s><tab><spacebar><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Choose Your Look
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Enable Voice Over
    "<wait10s><leftAltOn><f5><leftAltOff><wait5s>v",
    # Now that the installation is done, open "System Settings"
    "<wait10s><leftAltOn><spacebar><leftAltOff>System Settings<enter>",
    # Navigate to "Sharing"
    "<wait10s><leftAltOn>f<leftAltOff>sharing<enter>",
    # Navigate to "Screen Sharing" and enable it
    "<wait10s><tab><down><spacebar>",
    # Navigate to "Remote Login" and enable it
    "<wait10s><tab><tab><tab><tab><tab><tab><spacebar>",
    # Open "Remote Login" details
    "<wait10s><tab><spacebar>",
    # Enable "Full Disk Access"
    "<wait10s><tab><spacebar>",
    # Click "Done"
    "<wait10s><leftShiftOn><tab><leftShiftOff><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Disable Voice Over
    "<leftAltOn><f5><leftAltOff>",
  ]

  // A (hopefully) temporary workaround for Virtualization.Framework's
  // installation process not fully finishing in a timely manner
  create_grace_time = "30s"
}

Packer SSH shell provisioner not triggering?

It appears that the SSH shell provisioner isn't triggering on my Fedora build on Tart, which is almost identical to my build on x86_64 VirtualBox.

You can reproduce this like so

git clone https://github.com/HariSekhon/Packer-templates pack

cd pack

make fedora-tart-http

The output is:

...
Fedora ISOs prepared
pkill -9 -if -- '.*python.* -m http.server'
cd installers && python3 -m http.server &
packer build --force fedora-38-arm64.tart.http.pkr.hcl
Serving HTTP on :: port 8000 (http://[::]:8000/) ...
fedora-38.tart-cli.fedora-38: output will be in this color.

==> fedora-38.tart-cli.fedora-38: Creating virtual machine...
==> fedora-38.tart-cli.fedora-38: Starting the virtual machine for installation...
==> fedora-38.tart-cli.fedora-38: Waiting for the VNC server credentials from Tart...
==> fedora-38.tart-cli.fedora-38: Retrieved VNC credentials, connecting...
==> fedora-38.tart-cli.fedora-38: Connected to the VNC!
==> fedora-38.tart-cli.fedora-38: Typing the commands over VNC...
==> fedora-38.tart-cli.fedora-38: Waiting for the install process to shutdown the VM...

The Fedora prompt is up but the SSH provisioner never triggers, as verified by logging in and checking that the /etc/packer-version file created by my script never materializes, and the VM never shuts down to complete the build.

Those last two lines of the output are what make me think the SSH commands are not being run: when I go to tty2 to investigate, there is no /etc/packer-version created by my script, and I can't see a hanging bash command in the ps output (although it doesn't help that I have to pipe through less to page the ps output).

Plugin version 1.5.3 breaking on disk resize

This started breaking our Tart builds today, a couple of hours ago. The VM will launch from the IPSW and go through most of the build process, then bomb at the end complaining about the disk partitions. From the error, it seems the recovery partition is not actually removed as usual, which causes issues when trying to finish the VM build process.

We changed our packer plugin line to explicitly use plugin version 1.5.2 now instead of >=1.5.0. Multiple builds using versions 1.5.0 and 1.5.2 as tests work fine just like before. Any build with 1.5.3 results in the snippet below.

Our Tart version, installed via Homebrew, is 1.12.1, which is current as of today.

$ packer init --upgrade templates/vanilla.pkr.hcl
Installed plugin github.com/cirruslabs/tart v1.5.3 in "/opt/homebrew/bin/github.com/cirruslabs/tart/packer-plugin-tart_v1.5.3_x5.0_darwin_arm64"
$ packer build -force -debug -timestamp-ui templates/vanilla.pkr.hcl
Debug mode enabled. Builds will not be parallelized.
tart-cli.tart: output will be in this color.
2023-09-15T15:31:33-07:00: ==> tart-cli.tart: Creating virtual machine...
2023-09-15T15:37:04-07:00: ==> tart-cli.tart: Waiting 30s to let the Virtualization.Framework's installation process to finish correctly...
2023-09-15T15:37:34-07:00: ==> tart-cli.tart: Updating virtual machine resources...
2023-09-15T15:37:34-07:00: ==> tart-cli.tart: Inspecting machine disk image...
2023-09-15T15:37:34-07:00: ==> tart-cli.tart: Getting partition table...
2023-09-15T15:37:34-07:00: ==> tart-cli.tart: Found recovery partition. Let's remove it to save space...
2023-09-15T15:37:34-07:00: ==> tart-cli.tart: Successfully updated partitions...
2023-09-15T15:37:34-07:00: ==> tart-cli.tart: Starting the virtual machine...
2023-09-15T15:37:34-07:00: ==> tart-cli.tart: Waiting for the VNC server credentials from Tart...
2023-09-15T15:37:35-07:00: ==> tart-cli.tart: Retrieved VNC credentials, connecting...
2023-09-15T15:37:35-07:00: ==> tart-cli.tart: Connected to the VNC!
2023-09-15T15:37:35-07:00: ==> tart-cli.tart: Typing the commands over VNC...
2023-09-15T15:44:12-07:00: ==> tart-cli.tart: Successfully started the virtual machine...
2023-09-15T15:44:12-07:00: ==> tart-cli.tart: Using SSH communicator to connect: 192.168.64.127
2023-09-15T15:44:12-07:00: ==> tart-cli.tart: Waiting for SSH to become available...
2023-09-15T15:44:12-07:00: ==> tart-cli.tart: Connected to SSH!
2023-09-15T15:44:12-07:00: ==> tart-cli.tart: Let's SSH in and claim the new space for the disk...
2023-09-15T15:44:13-07:00: ==> tart-cli.tart: failed to parse "diskutil list -plist physical" output: last partition's "Content" should be "Apple_APFS", got "Apple_APFS_Recovery"
2023-09-15T15:44:13-07:00: ==> tart-cli.tart: Gracefully shutting down the VM...
2023-09-15T15:44:13-07:00: tart-cli.tart: Shutdown NOW!
2023-09-15T15:44:13-07:00: tart-cli.tart:
2023-09-15T15:44:13-07:00: tart-cli.tart: System shutdown time has arrived
2023-09-15T15:44:13-07:00: ==> tart-cli.tart: Password:
2023-09-15T15:44:13-07:00: ==> tart-cli.tart: Waiting for the tart process to exit...
2023-09-15T15:44:14-07:00: Build 'tart-cli.tart' errored after 12 minutes 40 seconds: Build was halted.
==> Wait completed after 12 minutes 40 seconds
==> Some builds didn't complete successfully and had errors:
--> tart-cli.tart: Build was halted.
==> Builds finished but no artifacts were created.

Attaching additional disks during build

Hi folks,

When attempting to attach an additional disk during an image build (e.g. run_extra_args = ["--disk=/dev/disk7"]) the user is presented with the following error message:

failed to parse "diskutil list -plist physical" output: there are more than one physical disk present on the system

Would it be possible to alter the logic to allow multiple disk attachments? My use case for wanting to do this is to attach a disk containing a large pre-existing source code directory (100GB+) and have that code compiled by a Packer provisioning step, thereby creating a tart image containing the compiled binaries. Due to its large size I am not keen on cloning the repo in a provisioning step; rather, I would like to re-use the copy that already exists on the host.

As a workaround, I could compile my binaries on the host and copy them into the image, but I would prefer to have the compilation happen inside a tart VM, where the filesystem/dependencies/toolchain are immutable.

(Note: I tried mounting a directory via --dir, but ran into good old virtiofs bugs during compilation.)

For reference, the output of diskutil list -plist physical when an extra disk is attached looks something like this:

admin@admins-Virtual-Machine ~ % diskutil list -plist physical
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>AllDisks</key>
	<array>
		<string>disk0</string>
		<string>disk0s1</string>
		<string>disk0s2</string>
		<string>disk2</string>
	</array>
	<key>AllDisksAndPartitions</key>
	<array>
		<dict>
			<key>Content</key>
			<string>GUID_partition_scheme</string>
			<key>DeviceIdentifier</key>
			<string>disk0</string>
			<key>OSInternal</key>
			<false/>
			<key>Partitions</key>
			<array>
				<dict>
					<key>Content</key>
					<string>Apple_APFS_ISC</string>
					<key>DeviceIdentifier</key>
					<string>disk0s1</string>
					<key>DiskUUID</key>
					<string>D2B79297-879E-4461-8DA2-EEA50EA7319A</string>
					<key>Size</key>
					<integer>524288000</integer>
				</dict>
				<dict>
					<key>Content</key>
					<string>Apple_APFS</string>
					<key>DeviceIdentifier</key>
					<string>disk0s2</string>
					<key>DiskUUID</key>
					<string>D5BA624D-182F-40D0-8248-D08508A8D1B3</string>
					<key>Size</key>
					<integer>89475674112</integer>
				</dict>
			</array>
			<key>Size</key>
			<integer>90000000000</integer>
		</dict>
		<dict>
			<key>Content</key>
			<string></string>
			<key>DeviceIdentifier</key>
			<string>disk2</string>
			<key>OSInternal</key>
			<false/>
			<key>Size</key>
			<integer>107164426240</integer>
		</dict>
	</array>
	<key>VolumesFromDisks</key>
	<array/>
	<key>WholeDisks</key>
	<array>
		<string>disk0</string>
		<string>disk2</string>
	</array>
</dict>
</plist>

macOS 12 VM does not respond to VNC input

Trying to provision macOS 12 (12.6.1+21G217), the VNC connection works for viewing the machine, but mouse and keyboard input doesn't seem to work, neither when manually connecting to the machine via Screen Sharing nor via the boot_command sent by Packer.

Implement HTTP directory configuration

A feature I've used a lot with VMware is the HTTP directory configuration. It was useful with the boot_command to access files on the host from recoveryOS.

As an example, here is my boot_command that I used with VMware

  http_directory = "http"
  boot_command   = [
    # Select English language
    "<enter><wait10s>",
    # Open Terminal
    "<leftSuperOn><leftShiftOn>t<leftSuperOff><leftShiftOff><wait10s>",
    # Mount the HTTP server
    "hdiutil mount http://{{ .HTTPIP }}:{{ .HTTPPort }}/bootstrap.dmg<enter>",
    "/Volumes/bootstrap/start<enter>"
  ]

Next to my Packer config file would be an http directory containing the bootstrap.dmg disk image, which would be mounted and would contain scripts to run. This allowed me to iterate on the scripts much faster than having to constantly update the boot_command.

This may also be useful for #15.

`{{ .HTTPIP }}` and `{{ .HTTPPort }}` are substituted as `<no value>` in `boot_command`

This was first observed in v1.5.3. v1.5.2 substitutes the values as expected.

The issue can be reproduced using the following template.

packer {
  required_plugins {
    tart = {
      version = "1.5.3"
      source  = "github.com/cirruslabs/tart"
    }
  }
}

source "tart-cli" "tart" {
  vm_base_name   = "ventura"
  vm_name        = "example"
  cpu_count      = 4
  memory_gb      = 8
  disk_size_gb   = 50
  ssh_username   = "admin"
  ssh_password   = "admin"
  ssh_timeout    = "120s"
  http_directory = "/tmp/example"
  boot_command   = [
    "<wait15s>{{ .HTTPIP }} {{ .HTTPPort }}<wait60s>",
  ]
}

build {
  sources = ["source.tart-cli.tart"]

  provisioner "shell" {
    inline = [
      "exit 0",
    ]
  }
}

add tart create --from-ipsw as an option in packer source

This plugin currently creates new VMs by doing a tart clone of an existing artifact from a registry. It would also be desirable to start a build from a local IPSW file or URL.

Maybe add a from_ipsw option:
from_ipsw = "/path/to/image.ipsw"

When configured, the plugin would exec tart create instead of the current clone function:
tart create --from-ipsw "/path/to/image.ipsw"
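For reference, a sketch of a source block built around such an option (the values are illustrative; later issues on this page show from_ipsw used in real templates):

source "tart-cli" "vanilla" {
  from_ipsw    = "/path/to/image.ipsw"
  vm_name      = "ventura-vanilla"
  cpu_count    = 4
  memory_gb    = 8
  disk_size_gb = 40
  ssh_username = "admin"
  ssh_password = "admin"
  ssh_timeout  = "120s"
}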

Tart and this plugin are really cool! Thanks for releasing them!

boot_command Alt-F2 keystroke not being received correctly - '<leftAltOn><f2><leftAltOff>' does not work

Hi,

I'm trying to do a Debian pressed configuration using a second iso, and this requires switching to tty2 to mount the supplemental cdrom containing the preseed.cfg file.

I am finding that the Alt-F2 keystroke is not being received correctly, so it is not switching to tty2 to run the mount commands.

Alt-F2 comes out as:

^[[[B

You can easily reproduce using my public GitHub repo:

git clone https://github.com/HariSekhon/Packer-templates pack

cd pack

make debian-tart

which will download the installer ISO, create the preseed ISO and run the packer command to launch the VM with both ISOs and send all the keystrokes to reproduce this.

The debian-11-arm64.tart.pkr.hcl is heavily commented with what each screen expects; specifically, this is the part I'm having trouble with:

boot_command = [
    "<wait2s>",
    "e<down><down><down><down><left>",
    " auto=true file=/mnt/cdrom2/preseed.cfg<f10>",
    "<wait15s>",
    # go to terminal tty2 for CLI
    # XXX: this Alt-F2 keystroke is coming out unrecognized
    "<leftAltOn><f2><leftAltOff><wait2s>",

2.7.0 crashes when trying to use via packer

Process:               tart [16122]
Path:                  /opt/homebrew/*/tart.app/Contents/MacOS/tart
Identifier:            tart
Version:               ???
Code Type:             ARM-64 (Native)
Parent Process:        packer-plugin-tart_v1.8.1_x5.0_darwin_arm64 [16116]
Responsible:           zed [620]
User ID:               501

Date/Time:             2024-03-11 11:27:15.6467 -0600
OS Version:            macOS 14.4 (23E214)

System Integrity Protection: enabled

Crashed Thread:        0

Exception Type:        EXC_BREAKPOINT (SIGTRAP)
Exception Codes:       0x0000000000000001, 0x0000000104c21904

Termination Reason:    Namespace SIGNAL, Code 5 Trace/BPT trap: 5
Terminating Process:   exc handler [16122]

Thread 0 Crashed:
0   tart                          	       0x105315904 Run.run() + 6660
1   tart                          	       0x10531c30d protocol witness for AsyncParsableCommand.run() in conformance Run + 1
2   tart                          	       0x10536d175 static Root.main() + 1
3   tart                          	       0x10536dff5 specialized thunk for @escaping @convention(thin) @async () -> () + 1
4   libswift_Concurrency.dylib    	       0x256897149 completeTaskAndRelease(swift::AsyncContext*, swift::SwiftError*) + 1

Thread 1:
0   libsystem_pthread.dylib       	       0x19019dd20 start_wqthread + 0

Thread 2::  Dispatch queue: com.apple.virtualization.vnc.server
0   dyld                          	       0x18fe617d8 invocation function for block in DyldSharedCache::forEachRange(void (char const*, unsigned long long, unsigned long long, unsigned int, unsigned long long, unsigned int, unsigned int, bool&) block_pointer, void (DyldSharedCache const*, unsigned int) block_pointer) const + 136
1   dyld                          	       0x18fe61440 DyldSharedCache::forEachRegion(void (void const*, unsigned long long, unsigned long long, unsigned int, unsigned int, unsigned long long, bool&) block_pointer) const + 248
2   dyld                          	       0x18fe616fc invocation function for block in DyldSharedCache::forEachRange(void (char const*, unsigned long long, unsigned long long, unsigned int, unsigned long long, unsigned int, unsigned int, bool&) block_pointer, void (DyldSharedCache const*, unsigned int) block_pointer) const + 132
3   dyld                          	       0x18fe615e0 DyldSharedCache::forEachCache(void (DyldSharedCache const*, bool&) block_pointer) const + 64
4   dyld                          	       0x18fe61588 DyldSharedCache::forEachRange(void (char const*, unsigned long long, unsigned long long, unsigned int, unsigned long long, unsigned int, unsigned int, bool&) block_pointer, void (DyldSharedCache const*, unsigned int) block_pointer) const + 124
5   dyld                          	       0x18fe4c028 dyld4::APIs::findImageMappedAt(void const*, dyld3::MachOLoaded const**, bool*, char const**, void const**, unsigned long long*, unsigned char*, dyld4::Loader const**) + 268
6   dyld                          	       0x18fe4c598 dyld4::APIs::dyld_image_path_containing_address(void const*) + 76
7   libsystem_trace.dylib         	       0x18fefbc88 _os_activity_stream_reflect + 328
8   libsystem_trace.dylib         	       0x18ff02740 _os_log_impl_stream + 504
9   libsystem_trace.dylib         	       0x18fef3a18 _os_log_impl_flatten_and_send + 7636
10  libsystem_trace.dylib         	       0x18fef1c2c _os_log + 168
11  libsystem_trace.dylib         	       0x18fef1b7c _os_log_impl + 28
12  Network                       	       0x197ce0ccc networkd_settings_read_from_file() + 780
13  Network                       	       0x197ce008c networkd_settings_init + 124
14  Network                       	       0x197ce262c nw_allow_use_of_dispatch_internal + 280
15  Network                       	       0x1978af0c0 nw_protocol_register_extended + 60
16  libdispatch.dylib             	       0x18fff23e8 _dispatch_client_callout + 20
17  libdispatch.dylib             	       0x18fff3c68 _dispatch_once_callout + 32
18  Network                       	       0x19795bf60 __nw_protocol_setup_ip_definition_block_invoke + 280
19  libdispatch.dylib             	       0x18fff23e8 _dispatch_client_callout + 20
20  libdispatch.dylib             	       0x18fff3c68 _dispatch_once_callout + 32
21  Network                       	       0x19760afa4 nw_parameters_create_secure_tcp + 2864
22  Virtualization                	       0x22504a508 void Base::DispatchQueue::async<-[_VZVNCServer start]::$_0>(-[_VZVNCServer start]::$_0&&)::'lambda'(void*)::__invoke(void*) + 476
23  libdispatch.dylib             	       0x18fff23e8 _dispatch_client_callout + 20
24  libdispatch.dylib             	       0x18fff9a14 _dispatch_lane_serial_drain + 748
25  libdispatch.dylib             	       0x18fffa544 _dispatch_lane_invoke + 380
26  libdispatch.dylib             	       0x1900052d0 _dispatch_root_queue_drain_deferred_wlh + 288
27  libdispatch.dylib             	       0x190004b44 _dispatch_workloop_worker_thread + 404
28  libsystem_pthread.dylib       	       0x19019f00c _pthread_wqthread + 288
29  libsystem_pthread.dylib       	       0x19019dd28 start_wqthread + 8

Thread 3:
0   libsystem_pthread.dylib       	       0x19019dd20 start_wqthread + 0

Keyboard layout problem with boot_command

I used this build template: https://github.com/cirruslabs/macos-image-templates/blob/master/templates/vanilla-ventura.pkr.hcl
I modified it to use the French language with the AZERTY keyboard.
But something strange happens: it types "admin" as "qd,in", which doesn't seem to come from QWERTY either...

[Screenshot: macprob]

In my template I just replaced:

# Language
"<wait30s>english<enter>",
# Select Your Country and Region
"<wait30s>united states<leftShiftOn><tab><leftShiftOff><spacebar>",

With this:

# Language
"<wait30s><enter>",
# Select Your Country and Region
"<wait30s><leftShiftOn><tab><leftShiftOff><spacebar>",
# Gender
"<wait10s><leftShiftOn><tab><tab><leftShiftOff><spacebar>",

Here's the full template:

packer {
  required_plugins {
    tart = {
      version = ">= 1.2.0"
      source  = "github.com/cirruslabs/tart"
    }
  }
}

source "tart-cli" "tart" {
  # You can find macOS IPSW URLs on various websites like https://ipsw.me/
  # and https://www.theiphonewiki.com/wiki/Beta_Firmware/Mac/13.x
  from_ipsw    = "13.4.ipsw"
  vm_name      = "ventura"
  cpu_count    = 4
  memory_gb    = 8
  disk_size_gb = 25
  ssh_password = "admin"
  ssh_username = "admin"
  ssh_timeout  = "120s"
  boot_command = [
    # hello, hola, bonjour, etc.
    "<wait60s><spacebar>",
    # ! Language
    "<wait30s><enter>",
    # ! Select Your Country and Region
    "<wait30s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # ! Gender
    "<wait10s><leftShiftOn><tab><tab><leftShiftOff><spacebar>",
    # Written and Spoken Languages
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Accessibility
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Data & Privacy
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Migration Assistant
    "<wait10s><tab><tab><tab><spacebar>",
    # Sign In with Your Apple ID
    "<wait10s><leftShiftOn><tab><leftShiftOff><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Are you sure you want to skip signing in with an Apple ID?
    "<wait10s><tab><spacebar>",
    # Terms and Conditions
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # I have read and agree to the macOS Software License Agreement
    "<wait10s><tab><spacebar>",
    # Create a Computer Account
    "<wait10s>admin<tab><tab>admin<tab>admin<tab><tab><tab><spacebar>",
    # Enable Location Services
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Are you sure you don't want to use Location Services?
    "<wait10s><tab><spacebar>",
    # Select Your Time Zone
    "<wait10s><tab>paris<enter><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Analytics
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Screen Time
    "<wait10s><tab><spacebar>",
    # Siri
    "<wait10s><tab><spacebar><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Choose Your Look
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Enable Voice Over
    "<wait10s><leftAltOn><f5><leftAltOff><wait5s>v",
    # Now that the installation is done, open "System Settings"
    "<wait10s><leftAltOn><spacebar><leftAltOff>System Settings<enter>",
    # Navigate to "Sharing"
    "<wait10s><leftAltOn>f<leftAltOff>partage<enter>",
    # Navigate to "Screen Sharing" and enable it
    "<wait10s><tab><down><spacebar>",
    # Navigate to "Remote Login" and enable it
    "<wait10s><tab><tab><tab><tab><tab><tab><spacebar>",
    # Open "Remote Login" details
    "<wait10s><tab><spacebar>",
    # Enable "Full Disk Access"
    "<wait10s><tab><spacebar>",
    # Click "Done"
    "<wait10s><leftShiftOn><tab><leftShiftOff><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Disable Voice Over
    "<leftAltOn><f5><leftAltOff>",
  ]

  // A (hopefully) temporary workaround for Virtualization.Framework's
  // installation process not fully finishing in a timely manner
  create_grace_time = "30s"
}

build {
  sources = ["source.tart-cli.tart"]

  provisioner "shell" {
    inline = [
      // Enable passwordless sudo
      "echo admin | sudo -S sh -c \"mkdir -p /etc/sudoers.d/; echo 'admin ALL=(ALL) NOPASSWD: ALL' | EDITOR=tee visudo /etc/sudoers.d/admin-nopasswd\"",
      // Enable auto-login
      //
      // See https://github.com/xfreebird/kcpassword for details.
      "echo '00000000: 1ced 3f4a bcbc ba2c caca 4e82' | sudo xxd -r - /etc/kcpassword",
      "sudo defaults write /Library/Preferences/com.apple.loginwindow autoLoginUser admin",
      // Disable screensaver at login screen
      "sudo defaults write /Library/Preferences/com.apple.screensaver loginWindowIdleTime 0",
      // Disable screensaver for admin user
      "defaults -currentHost write com.apple.screensaver idleTime 0",
      // Prevent the VM from sleeping
      "sudo systemsetup -setdisplaysleep Off",
      "sudo systemsetup -setsleep Off",
      "sudo systemsetup -setcomputersleep Off",
      // Launch Safari to populate the defaults
      "/Applications/Safari.app/Contents/MacOS/Safari &",
      "sleep 30",
      "kill -9 %1",
      // Enable Safari's remote automation and "Develop" menu
      "sudo safaridriver --enable",
      "defaults write com.apple.Safari.SandboxBroker ShowDevelopMenu -bool true",
      "defaults write com.apple.Safari IncludeDevelopMenu -bool true",
      // Disable screen lock
      //
      // Note that this only works if the user is logged-in,
      // i.e. not on login screen.
      "sysadminctl -screenLock off -password admin",
    ]
  }
}
[Video: mac.mp4]

Software Update doesn't work in VMs from tart packer plugin

I've only tested in macOS 13 as that's our current CI OS that we're likely stuck with for a little while.

  1. Set up a simple bare-bones Packer Tart config: https://gist.github.com/jfro/4f9559d462bcc67238e4617d34730680
  2. Run packer build on it
  3. Once complete, start it up with tart
  4. Try to run software update to get it to latest patch
  5. End up with personalization error (console reports an error with recovery partition)

I don't see anything in my simple config that should affect SU, but as a comparison, creating a fresh VM via tart itself doesn't have this problem.

Allow --resolver option when getting VM's IP

I'm using --net-bridged=en0 to launch my VM, and according to cirruslabs/tart#472, I should use --resolver=arp to get the VM's IP address, because the default dhcp resolver doesn't work for bridged networking.

However, this Packer plugin does not seem to provide this option currently, and if I pass run_extra_args = ["--net-bridged=en0"] in the source block, packer build cannot find the VM's IP address correctly (because it's using the dhcp resolver).

Is it possible to add an option for passing --resolver to this plugin? Thanks!
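A sketch of what such an option could look like; ip_extra_args is a purely hypothetical name for the requested setting, and only run_extra_args is an existing plugin option here:

source "tart-cli" "bridged" {
  vm_base_name   = "sonoma-vanilla" # illustrative base image
  vm_name        = "sonoma-bridged"
  run_extra_args = ["--net-bridged=en0"]
  # Hypothetical attribute: forwarded to `tart ip`, e.g. `tart ip --resolver=arp <vm_name>`.
  ip_extra_args  = ["--resolver=arp"]
  ssh_username   = "admin"
  ssh_password   = "admin"
}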
