cirruslabs / packer-plugin-tart
Packer builder for Tart VMs
Home Page: https://developer.hashicorp.com/packer/plugins/builders/tart
License: Mozilla Public License 2.0
It seems the plugin forgets to set PACKER_HTTP_IP (a standard Packer interface), so PACKER_HTTP_ADDR is left unpopulated. Without this environment variable it's quite hard to pull any data from the host via Packer's standard HTTP server through the shell provisioner.
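For context, a shell provisioner that depends on this variable might look like the following sketch (the script name and URL path are made up for illustration):

```hcl
provisioner "shell" {
  inline = [
    # PACKER_HTTP_ADDR is normally "<host-ip>:<port>" of Packer's built-in
    # HTTP server; with this plugin it currently expands to an empty string,
    # so the curl below has nothing to connect to.
    "curl -fsSL \"http://$PACKER_HTTP_ADDR/bootstrap.sh\" -o /tmp/bootstrap.sh",
    "sh /tmp/bootstrap.sh",
  ]
}
```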
Current Tart plugin:
packer-provisioner-shell plugin: [INFO] RPC client: Communicator ended with: 0
packer-plugin-tart_v1.8.1_x5.0_darwin_arm64 plugin: 2024/02/29 16:25:30 [DEBUG] Opening new ssh session
packer-plugin-tart_v1.8.1_x5.0_darwin_arm64 plugin: 2024/02/29 16:25:30 [DEBUG] starting remote command: chmod +x /tmp/script_2697.sh; PACKER_BUILDER_TYPE='tart-cli' PACKER_BUILD_NAME='tart-cli' PACKER_HTTP_PORT='8625' /tmp/script_2697.sh
VMX:
...
packer-provisioner-shell plugin: [INFO] RPC client: Communicator ended with: 0
packer-builder-vmware-vmx plugin: [DEBUG] Opening new ssh session
packer-builder-vmware-vmx plugin: [DEBUG] starting remote command: chmod +x /tmp/script_2346.sh; PACKER_BUILDER_TYPE='vmware-vmx' PACKER_BUILD_NAME='vmware-vmx' PACKER_HTTP_ADDR='172.16.1.1:8350' PACKER_HTTP_IP='172.16.1.1' PACKER_HTTP_PORT='8350' /tmp/script_2346.sh
...
It seems there is a new http_directory setting, but it's not documented anywhere that I can find.
When I add it, I get this output:
==> ubuntu-22.04.tart-cli.ubuntu: Starting HTTP server on port 8232
so it seems to work on the latest plugin; I basically guessed this from issues #29, #58, and #63.
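For anyone else landing here, this is the minimal shape that worked for me; the directory path, wait time, and boot command contents are illustrative, not canonical:

```hcl
source "tart-cli" "example" {
  # Serve files from this local directory over Packer's built-in HTTP server
  http_directory = "http"

  boot_command = [
    # {{ .HTTPIP }} and {{ .HTTPPort }} are substituted by Packer once the
    # HTTP server is up, e.g. to point an installer at an autoinstall config
    "<wait10s>autoinstall ds=nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/<enter>",
  ]
}
```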
Hi!
I'm testing the Packer integration with Tart to achieve a desired hierarchy of macOS VMs (like cirruslabs/macos-image-templates, but in my own way).
I created a base image with vanilla Sonoma and disk_size_gb = 30, then extended the base image with a few additions like Xcode and disk_size_gb = 90, but currently I'm unable to build the Xcode-containing image due to this error:
Error: -69519: The target disk is too small for this operation, or a gap is required in your partition map which is missing or too small, which is often caused by an attempt to grow a partition beyond the beginning of another partition or beyond the end of partition map usable space
Which part is this error related to? My workstation? Incorrect initialization of the base image? How can I solve it?
Thanks!
This was first observed in v1.5.3. v1.5.2 substitutes the values as expected.
The issue can be reproduced using the following template.
packer {
  required_plugins {
    tart = {
      version = "1.5.3"
      source  = "github.com/cirruslabs/tart"
    }
  }
}

source "tart-cli" "tart" {
  vm_base_name = "ventura"
  vm_name      = "example"
  cpu_count    = 4
  memory_gb    = 8
  disk_size_gb = 50
  ssh_username = "admin"
  ssh_password = "admin"
  ssh_timeout  = "120s"

  http_directory = "/tmp/example"

  boot_command = [
    "<wait15s>{{ .HTTPIP }} {{ .HTTPPort }}<wait60s>",
  ]
}

build {
  sources = ["source.tart-cli.tart"]

  provisioner "shell" {
    inline = [
      "exit 0",
    ]
  }
}
I expect this is a subiquity bug more than a Tart problem, so I have raised it there with full details:
https://bugs.launchpad.net/ubuntu/+source/subiquity/+bug/2022856
but I'm raising it here too in case anybody has seen similar issues.
I also tried a workaround of running a simple local webserver to serve the user-data and pointing the Ubuntu installer at the user-data over the network instead of from a local cdrom ISO, to see whether the difference from my working x86_64 build is an installer code-path issue, an ARM code-path issue, or something Tart-related.
However, the installer hits the exact same error.
I had used tty2 to determine the IP and gateway, ran python3 -m http.server on the Mac, and did curl http://192.168.64.1:8000/user-data to check that it worked.
You can reproduce using the network autoinstall user data via this command:
make ubuntu-tart-http
which will start the Python web server and then run the alternate HTTP autoinstall configuration to show this.
When enabling http_directory in #75, it seems to break boot_command: the keystrokes aren't sent through to control the boot for 30 seconds, which is too late, despite boot_wait = "5s" being set.
I wonder if it's because of this line:
I guess it would be useful to add a post-processor to this plugin as well, similar to https://developer.hashicorp.com/packer/integrations/hashicorp/docker/latest/components/post-processor/docker-push
Is that something that's already being worked on, or already possible? If not I might have a look.
Looks like the runInstaller part of step_create_linux_vm duplicates code from step_run. I can clean this up, but I want to check whether there are other plans in this area before I dig in. Thanks :)
Hi,
I'm trying to do a Debian preseed configuration using a second ISO, and this requires switching to tty2 to mount the supplemental cdrom containing the preseed.cfg file.
I am finding that the Alt-F2 keystroke is not being received correctly, so it is not switching to tty2 to run the mount commands.
Alt-F2 comes out as:
^[[[B
You can easily reproduce using my public GitHub repo:
git clone https://github.com/HariSekhon/Packer-templates pack
cd pack
make debian-tart
which will download the installer ISO, create the preseed ISO, and run the packer command to launch the VM with both ISOs and send all the keystrokes to reproduce.
The debian-11-arm64.tart.pkr.hcl is heavily commented with what each screen expects; specifically, this is the part I'm having trouble with:
boot_command = [
  "<wait2s>",
  "e<down><down><down><down><left>",
  " auto=true file=/mnt/cdrom2/preseed.cfg<f10>",
  "<wait15s>",
  # go to terminal tty2 for CLI
  # XXX: this Alt-F2 keystroke is coming out unrecognized
  "<leftAltOn><f2><leftAltOff><wait2s>",
This way we can mount a local folder with Xcode XIPs so we don't need to download them each time:
This plugin currently creates VMs with default settings, but there is no option to attach a Rosetta share to the VM. Perhaps add a rosetta option:
rosetta = "<rosetta_tag>"
This would perform tart run --rosetta <tag> <vm_name>
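Sketch of how the proposed option could look in a template; the option name is hypothetical, mirroring the tart run --rosetta flag, and the base image name is illustrative:

```hcl
source "tart-cli" "linux" {
  vm_base_name = "ubuntu"          # illustrative base image
  vm_name      = "ubuntu-rosetta"

  # Hypothetical option: would translate to `tart run --rosetta rosetta`
  # so the guest can mount the Rosetta share for x86_64 binary translation
  rosetta = "rosetta"
}
```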
Thank you.
Hey there! Thanks for all your great work on this!
I sometimes get the following error when building an image from an IPSW. My ssh_timeout is set to 180s, and I'm killing the tart process before beginning the build to ensure it's not a result of too many VMs running (>2).
==> tart-cli.base: Waiting for SSH to become available...
==> tart-cli.base: Timeout waiting for SSH.
Here is what my source block looks like; it's essentially identical to the templates provided in https://github.com/cirruslabs/macos-image-templates/blob/master/templates/vanilla-ventura.pkr.hcl
source "tart-cli" "base" {
  # You can find macOS IPSW URLs on various websites like https://ipsw.me/
  # and https://www.theiphonewiki.com/wiki/Beta_Firmware/Mac/13.x
  from_ipsw = "${var.home}/macOS/UniversalMac_13.3_22E252_Restore.ipsw"

  vm_name      = var.vm_name
  cpu_count    = 4
  memory_gb    = 8
  disk_size_gb = 60
  ssh_username = "admin"
  ssh_password = "admin"
  ssh_timeout  = "180s"

  boot_command = [
    # hello, hola, bonjour, etc.
    "<wait60s><spacebar>",
    # Language
    "<wait30s>english<enter>",
    # Select Your Country and Region
    "<wait30s>united states<leftShiftOn><tab><leftShiftOff><spacebar>",
    # Written and Spoken Languages
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Accessibility
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Data & Privacy
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Migration Assistant
    "<wait10s><tab><tab><tab><spacebar>",
    # Sign In with Your Apple ID
    "<wait10s><leftShiftOn><tab><leftShiftOff><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Are you sure you want to skip signing in with an Apple ID?
    "<wait10s><tab><spacebar>",
    # Terms and Conditions
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # I have read and agree to the macOS Software License Agreement
    "<wait10s><tab><spacebar>",
    # Create a Computer Account
    "<wait10s>admin<tab><tab>admin<tab>admin<tab><tab><tab><spacebar>",
    # Enable Location Services
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Are you sure you don't want to use Location Services?
    "<wait10s><tab><spacebar>",
    # Select Your Time Zone
    "<wait10s><tab>UTC<enter><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Analytics
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Screen Time
    "<wait10s><tab><spacebar>",
    # Siri
    "<wait10s><tab><spacebar><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Choose Your Look
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Enable Voice Over
    "<wait10s><leftAltOn><f5><leftAltOff><wait5s>v",
    # Now that the installation is done, open "System Settings"
    "<wait10s><leftAltOn><spacebar><leftAltOff>System Settings<enter>",
    # Navigate to "Sharing"
    "<wait10s><leftAltOn>f<leftAltOff>sharing<enter>",
    # Navigate to "Screen Sharing" and enable it
    "<wait10s><tab><down><spacebar>",
    # Navigate to "Remote Login" and enable it
    "<wait10s><tab><tab><tab><tab><tab><tab><spacebar>",
    # Open "Remote Login" details
    "<wait10s><tab><spacebar>",
    # Enable "Full Disk Access"
    "<wait10s><tab><spacebar>",
    # Click "Done"
    "<wait10s><leftShiftOn><tab><leftShiftOff><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Disable Voice Over
    "<leftAltOn><f5><leftAltOff>",
  ]

  // A (hopefully) temporary workaround for Virtualization.Framework's
  // installation process not fully finishing in a timely manner
  create_grace_time = "30s"
}
I'm using --net-bridged=en0 to launch my VM, and according to cirruslabs/tart#472, I should use --resolver=arp to get the VM's IP address, because the default dhcp resolver doesn't work for bridged networking.
However, this Packer plugin doesn't currently expose this option, and if I pass run_extra_args = ["--net-bridged=en0"] in the source block, packer build cannot find the VM's IP address correctly (because it's using the dhcp resolver).
Would it be possible to add an option that passes --resolver through to this plugin? Thanks!
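For illustration, the requested knob might look something like this; run_extra_args is taken from the message above, while the resolver option name is hypothetical:

```hcl
source "tart-cli" "bridged" {
  vm_name        = "example"
  run_extra_args = ["--net-bridged=en0"]

  # Hypothetical option: have the plugin resolve the VM's IP via ARP
  # (i.e. run `tart ip --resolver=arp`) instead of the default dhcp resolver
  ip_resolver = "arp"
}
```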
When Packer fails to build the VM (macos1306 in this case), it leaves the image in the local registry:
...
Cancelling build after receiving interrupt
2024/02/29 12:41:53 Cancelling builder after context cancellation context canceled
2024/02/29 12:41:53 packer-plugin-ansible_v1.1.1_x5.0_darwin_arm64 plugin: 2024/02/29 12:41:53 Received interrupt signal (count: 1). Ignoring.
2024/02/29 12:41:53 packer-provisioner-shell plugin: Received interrupt signal (count: 1). Ignoring.
2024/02/29 12:41:53 packer-plugin-tart_v1.8.1_x5.0_darwin_arm64 plugin: 2024/02/29 12:41:53 Received interrupt signal (count: 1). Ignoring.
==> tart-cli: Failed to run the boot command: context canceled
==> tart-cli: Failed to run the boot command: context canceled
==> tart-cli: Step "stepRun" failed
==> tart-cli: [c] Clean up and exit, [a] abort without cleanup, or [r] retry step (build may fail even if retry succeeds)? c
==> tart-cli: Waiting for the tart process to exit...
==> tart-cli: Connection reset by peer (os error 54)
==> Wait completed after 4 minutes 34 seconds
Build 'tart-cli' errored after 4 minutes 34 seconds: Failed to run the boot command: context canceled
==> Wait completed after 4 minutes 34 seconds
2024/02/29 12:42:10 waiting for all plugin processes to complete...
Cleanly cancelled builds after being interrupted.
2024/02/29 12:42:10 /Users/admin/.config/packer/plugins/github.com/cirruslabs/tart/packer-plugin-tart_v1.8.1_x5.0_darwin_arm64: plugin process exited
2024/02/29 12:42:10 /Users/admin/git/aquarium-bait/.bin/packer: plugin process exited
2024/02/29 12:42:10 /Users/admin/.config/packer/plugins/github.com/hashicorp/ansible/packer-plugin-ansible_v1.1.1_x5.0_darwin_arm64: plugin process exited
...
$ tart list
Source Name Size State
local macos1306 15 stopped
local test1 23 stopped
...
Usually build plugins clean up after themselves, so maybe there is some sort of issue in the cleanup logic?
Often (but not always) the build fails with Failed to lock auxiliary storage and Resource temporarily unavailable errors:
...
==> tart-cli: Inspecting machine disk image...
==> tart-cli: Getting partition table...
==> tart-cli: Found recovery partition. Let's remove it to save space...
==> tart-cli: Successfully updated partitions...
==> tart-cli: Starting the virtual machine...
==> tart-cli: Detecting host IP...
2024/02/29 12:07:48 packer-plugin-tart_v1.8.1_x5.0_darwin_arm64 plugin: 2024/02/29 12:07:48 Executing tart: []string{"ip", "--wait", "120", "macos1306"}
==> tart-cli: Error Domain=VZErrorDomain Code=2 "Failed to lock auxiliary storage." UserInfo={NSLocalizedFailure=Invalid virtual machine configuration., NSLocalizedFailureReason=Failed to lock auxiliary storage., NSUnderlyingError=0x600000d5bde0 {Error Domain=NSPOSIXErrorDomain Code=35 "Resource temporarily unavailable"}}
==> tart-cli: Connection reset by peer (os error 54)
I can see the VM window briefly appear and then immediately close, and the error appears in the log. Maybe the VM startup hasn't completed yet and tart ip runs too early? When running in headless mode, I don't see it fail.
% sw_vers
ProductName: macOS
ProductVersion: 13.6
BuildVersion: 22G120
I also captured the macOS log, but there is quite a lot of it, so if you want to see something in particular, I can provide that data.
This might be useful as a data source for packer-plugin-tart:
https://github.com/torarnv/packer-plugin-ipsw
Let me know if there are any issues or further use-cases :)
Would it be possible to run Packer from a remote (Linux) system?
This plugin currently creates new VMs by doing a tart clone of an existing artifact from a registry. Building from a local IPSW file or URL would also be desirable.
Maybe add an ipsw option:
from_ipsw = "/path/to/image.ipsw"
When configured, the plugin would exec tart create instead of the current clone:
tart create --from-ipsw "/path/to/image.ipsw"
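A full source block using the proposed option might look like the following sketch (all values illustrative):

```hcl
source "tart-cli" "from-ipsw" {
  # Hypothetical at the time of this request: create the VM from an IPSW
  # via `tart create --from-ipsw` instead of cloning a registry artifact
  from_ipsw    = "/path/to/image.ipsw"
  vm_name      = "vanilla"
  cpu_count    = 4
  memory_gb    = 8
  disk_size_gb = 40
}
```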
Tart and this plugin are really cool! thanks for releasing them!
It looks like there's an issue in the way the HTTP server's IP discovery is implemented (#58).
As I understand it, typeBootCommandOverVNC will invoke tart ip to discover the guest's IP address and derive the HTTP server address from the gateway of the subnet that IP is on. I don't see how this could work, as the guest will not get to the point of obtaining and setting an IP address until it has fully booted.
I'm guessing some logic/ordering more akin to what the QEMU builder does in https://github.com/hashicorp/packer-plugin-qemu/blob/main/builder/qemu/step_http_ip_discover.go will be necessary.
But maybe I'm totally misunderstanding how this is intended to work; could you help, @edigaryev?
I've only tested in macOS 13 as that's our current CI OS that we're likely stuck with for a little while.
I don't see anything in my simple config that should affect SU, but as a comparison, creating a fresh VM via tart itself doesn't have this problem.
This started breaking our tart builds a couple of hours ago. The VM will launch from the IPSW and go through most of the build process, then bomb at the end complaining about the disk partitions. From the error it seems it does not actually remove the recovery partition like normal, which causes issues when trying to finish the VM build process.
We changed our packer plugin line to explicitly use plugin version 1.5.2 instead of >=1.5.0. Multiple test builds using versions 1.5.0 and 1.5.2 work fine, just like before. Any build with 1.5.3 results in the snippet below.
Our tart version installed by Homebrew is 1.12.1, which is current as of today.
$ packer init --upgrade templates/vanilla.pkr.hcl
Installed plugin github.com/cirruslabs/tart v1.5.3 in "/opt/homebrew/bin/github.com/cirruslabs/tart/packer-plugin-tart_v1.5.3_x5.0_darwin_arm64"
$ packer build -force -debug -timestamp-ui templates/vanilla.pkr.hcl
Debug mode enabled. Builds will not be parallelized.
tart-cli.tart: output will be in this color.
2023-09-15T15:31:33-07:00: ==> tart-cli.tart: Creating virtual machine...
2023-09-15T15:37:04-07:00: ==> tart-cli.tart: Waiting 30s to let the Virtualization.Framework's installation process to finish correctly...
2023-09-15T15:37:34-07:00: ==> tart-cli.tart: Updating virtual machine resources...
2023-09-15T15:37:34-07:00: ==> tart-cli.tart: Inspecting machine disk image...
2023-09-15T15:37:34-07:00: ==> tart-cli.tart: Getting partition table...
2023-09-15T15:37:34-07:00: ==> tart-cli.tart: Found recovery partition. Let's remove it to save space...
2023-09-15T15:37:34-07:00: ==> tart-cli.tart: Successfully updated partitions...
2023-09-15T15:37:34-07:00: ==> tart-cli.tart: Starting the virtual machine...
2023-09-15T15:37:34-07:00: ==> tart-cli.tart: Waiting for the VNC server credentials from Tart...
2023-09-15T15:37:35-07:00: ==> tart-cli.tart: Retrieved VNC credentials, connecting...
2023-09-15T15:37:35-07:00: ==> tart-cli.tart: Connected to the VNC!
2023-09-15T15:37:35-07:00: ==> tart-cli.tart: Typing the commands over VNC...
2023-09-15T15:44:12-07:00: ==> tart-cli.tart: Successfully started the virtual machine...
2023-09-15T15:44:12-07:00: ==> tart-cli.tart: Using SSH communicator to connect: 192.168.64.127
2023-09-15T15:44:12-07:00: ==> tart-cli.tart: Waiting for SSH to become available...
2023-09-15T15:44:12-07:00: ==> tart-cli.tart: Connected to SSH!
2023-09-15T15:44:12-07:00: ==> tart-cli.tart: Let's SSH in and claim the new space for the disk...
2023-09-15T15:44:13-07:00: ==> tart-cli.tart: failed to parse "diskutil list -plist physical" output: last partition's "Content" should be "Apple_APFS", got "Apple_APFS_Recovery"
2023-09-15T15:44:13-07:00: ==> tart-cli.tart: Gracefully shutting down the VM...
2023-09-15T15:44:13-07:00: tart-cli.tart: Shutdown NOW!
2023-09-15T15:44:13-07:00: tart-cli.tart:
2023-09-15T15:44:13-07:00: tart-cli.tart: System shutdown time has arrived
2023-09-15T15:44:13-07:00: ==> tart-cli.tart: Password:
2023-09-15T15:44:13-07:00: ==> tart-cli.tart: Waiting for the tart process to exit...
2023-09-15T15:44:14-07:00: Build 'tart-cli.tart' errored after 12 minutes 40 seconds: Build was halted.
==> Wait completed after 12 minutes 40 seconds
==> Some builds didn't complete successfully and had errors:
--> tart-cli.tart: Build was halted.
==> Builds finished but no artifacts were created.
Currently when recovery = true, the provisioners will not run, as evidenced by this piece of code. I don't quite understand why. My use-case is a Packer file that boots the VM into recovery to disable SIP, reboots, and then runs the repackaging steps to install some software. Would you be open to adapting the behavior to allow this use-case?
I used this build template: https://github.com/cirruslabs/macos-image-templates/blob/master/templates/vanilla-ventura.pkr.hcl
which I modified to use the French language with the AZERTY keyboard.
But something strange happens when it types "admin": it comes out as "qd,in", which doesn't seem to come from QWERTY either...
In my template I just replaced:
# Language
"<wait30s>english<enter>",
# Select Your Country and Region
"<wait30s>united states<leftShiftOn><tab><leftShiftOff><spacebar>",
With this:
# Language
"<wait30s><enter>",
# Select Your Country and Region
"<wait30s><leftShiftOn><tab><leftShiftOff><spacebar>",
# Gender
"<wait10s><leftShiftOn><tab><tab><leftShiftOff><spacebar>",
Here's the full template:
packer {
  required_plugins {
    tart = {
      version = ">= 1.2.0"
      source  = "github.com/cirruslabs/tart"
    }
  }
}

source "tart-cli" "tart" {
  # You can find macOS IPSW URLs on various websites like https://ipsw.me/
  # and https://www.theiphonewiki.com/wiki/Beta_Firmware/Mac/13.x
  from_ipsw = "13.4.ipsw"

  vm_name      = "ventura"
  cpu_count    = 4
  memory_gb    = 8
  disk_size_gb = 25
  ssh_password = "admin"
  ssh_username = "admin"
  ssh_timeout  = "120s"

  boot_command = [
    # hello, hola, bonjour, etc.
    "<wait60s><spacebar>",
    # ! Language
    "<wait30s><enter>",
    # ! Select Your Country and Region
    "<wait30s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # ! Gender
    "<wait10s><leftShiftOn><tab><tab><leftShiftOff><spacebar>",
    # Written and Spoken Languages
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Accessibility
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Data & Privacy
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Migration Assistant
    "<wait10s><tab><tab><tab><spacebar>",
    # Sign In with Your Apple ID
    "<wait10s><leftShiftOn><tab><leftShiftOff><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Are you sure you want to skip signing in with an Apple ID?
    "<wait10s><tab><spacebar>",
    # Terms and Conditions
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # I have read and agree to the macOS Software License Agreement
    "<wait10s><tab><spacebar>",
    # Create a Computer Account
    "<wait10s>admin<tab><tab>admin<tab>admin<tab><tab><tab><spacebar>",
    # Enable Location Services
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Are you sure you don't want to use Location Services?
    "<wait10s><tab><spacebar>",
    # Select Your Time Zone
    "<wait10s><tab>paris<enter><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Analytics
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Screen Time
    "<wait10s><tab><spacebar>",
    # Siri
    "<wait10s><tab><spacebar><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Choose Your Look
    "<wait10s><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Enable Voice Over
    "<wait10s><leftAltOn><f5><leftAltOff><wait5s>v",
    # Now that the installation is done, open "System Settings"
    "<wait10s><leftAltOn><spacebar><leftAltOff>System Settings<enter>",
    # Navigate to "Sharing"
    "<wait10s><leftAltOn>f<leftAltOff>partage<enter>",
    # Navigate to "Screen Sharing" and enable it
    "<wait10s><tab><down><spacebar>",
    # Navigate to "Remote Login" and enable it
    "<wait10s><tab><tab><tab><tab><tab><tab><spacebar>",
    # Open "Remote Login" details
    "<wait10s><tab><spacebar>",
    # Enable "Full Disk Access"
    "<wait10s><tab><spacebar>",
    # Click "Done"
    "<wait10s><leftShiftOn><tab><leftShiftOff><leftShiftOn><tab><leftShiftOff><spacebar>",
    # Disable Voice Over
    "<leftAltOn><f5><leftAltOff>",
  ]

  // A (hopefully) temporary workaround for Virtualization.Framework's
  // installation process not fully finishing in a timely manner
  create_grace_time = "30s"
}

build {
  sources = ["source.tart-cli.tart"]

  provisioner "shell" {
    inline = [
      // Enable passwordless sudo
      "echo admin | sudo -S sh -c \"mkdir -p /etc/sudoers.d/; echo 'admin ALL=(ALL) NOPASSWD: ALL' | EDITOR=tee visudo /etc/sudoers.d/admin-nopasswd\"",
      // Enable auto-login
      //
      // See https://github.com/xfreebird/kcpassword for details.
      "echo '00000000: 1ced 3f4a bcbc ba2c caca 4e82' | sudo xxd -r - /etc/kcpassword",
      "sudo defaults write /Library/Preferences/com.apple.loginwindow autoLoginUser admin",
      // Disable screensaver at login screen
      "sudo defaults write /Library/Preferences/com.apple.screensaver loginWindowIdleTime 0",
      // Disable screensaver for admin user
      "defaults -currentHost write com.apple.screensaver idleTime 0",
      // Prevent the VM from sleeping
      "sudo systemsetup -setdisplaysleep Off",
      "sudo systemsetup -setsleep Off",
      "sudo systemsetup -setcomputersleep Off",
      // Launch Safari to populate the defaults
      "/Applications/Safari.app/Contents/MacOS/Safari &",
      "sleep 30",
      "kill -9 %1",
      // Enable Safari's remote automation and "Develop" menu
      "sudo safaridriver --enable",
      "defaults write com.apple.Safari.SandboxBroker ShowDevelopMenu -bool true",
      "defaults write com.apple.Safari IncludeDevelopMenu -bool true",
      // Disable screen lock
      //
      // Note that this only works if the user is logged-in,
      // i.e. not on login screen.
      "sysadminctl -screenLock off -password admin",
    ]
  }
}
I'm finding with the Debian 11 installer that it is not finding the ISO DVD installation medium, and I'm having to tell it via the installer context menu that the DVD device is /dev/vdb1, after which it boots.
You can reproduce this via:
git clone https://github.com/HariSekhon/Packer-templates pack
cd pack
make debian-tart-http
(the HTTP preseed.cfg is to work around issue #71)
Why does it detect the installation medium on other platforms such as x86_64 in VirtualBox but not on Tart?
Is it because the /dev/vd[a-z] devices are less standard than /dev/sd[a-z] and not checked?
If so, would it be possible to have Tart present its devices as /dev/sda, /dev/sdb for simplicity and compatibility?
Trying to provision macOS 12 (12.6.1+21G217), the VNC connection works for viewing the machine, but mouse and keyboard input doesn't seem to work, neither when manually connecting to the machine via Screen Sharing nor from the boot_command keystrokes sent by Packer.
Sometimes when running packer builds, the disk resize fails:
==> tart-cli.tart: Error repairing map: Couldn't read partition map (-69876)
tart-cli.tart: Nonexistent, unknown, or damaged partition map scheme
tart-cli.tart: If you are sure this disk contains a (damaged) APM, MBR, or GPT partition map,
tart-cli.tart: you can hereby try to repair it enough to be recognized as a map; another
tart-cli.tart: "diskutil repairDisk disk0" might then be necessary for further repairs
tart-cli.tart: Proceed? (y/N)
==> tart-cli.tart: Resizing the partition...
==> tart-cli.tart: Could not find disk for disk0s2
Disk list from the VM:
admin@admins-Virtual-Machine ~ % diskutil list
/dev/disk1 (internal, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *70.0 GB disk1
1: Apple_APFS_ISC Container disk2 524.3 MB disk1s1
2: Apple_APFS Container disk3 34.1 GB disk1s2
(free space) 5.4 GB -
/dev/disk3 (synthesized):
#: TYPE NAME SIZE IDENTIFIER
0: APFS Container Scheme - +34.1 GB disk3
Physical Store disk1s2
1: APFS Volume Macintosh HD 8.6 GB disk3s1
2: APFS Snapshot com.apple.os.update-... 8.6 GB disk3s1s1
3: APFS Volume Preboot 4.3 GB disk3s2
4: APFS Volume Recovery 707.7 MB disk3s3
5: APFS Volume Data 8.0 GB disk3s5
6: APFS Volume VM 20.5 KB disk3s6
So it appears that in some cases we get disk1 as opposed to disk0.
If I ssh into this instance and execute manually:
yes | diskutil repairDisk disk1
diskutil apfs resizeContainer disk1s2 0
then everything is hunky dory. I'll see if I can PR something into step_disk_resize.go that tries to find the appropriate disk first.
Now that tart provides linux VMs, it would be nice to be able to create Linux VMs from scratch with packer.
Right now, this is the log output when trying to build a new VM with Packer when two are already running:
% packer build [...]
tart-cli.tart: output will be in this color.
==> tart-cli.tart: Cloning virtual machine...
==> tart-cli.tart: Updating virtual machine resources...
==> tart-cli.tart: Inspecting machine disk image...
==> tart-cli.tart: Getting partition table...
==> tart-cli.tart: Starting the virtual machine...
==> tart-cli.tart: Waiting for the VNC server credentials from Tart...
==> tart-cli.tart: Retrieved VNC credentials, connecting...
==> tart-cli.tart: Failed to connect to the Tart's VNC server: dial tcp 127.0.0.1:60993: connect: connection refused
==> tart-cli.tart: Waiting for the tart process to exit...
Build 'tart-cli.tart' errored after 1 second 61 milliseconds: Failed to connect to the Tart's VNC server: dial tcp 127.0.0.1:60993: connect: connection refused
It took me a while to figure out that the 2-VM limit is responsible for this log output. Is there a way to catch the limit earlier and log it appropriately?
As per #75, please document something equivalent to this example for Tart:
https://github.com/HariSekhon/Packer-templates/blob/main/ubuntu-x86_64.vbox.pkr.hcl#L63
showing the HTTP settings and boot settings for a real-world scenario, say building Ubuntu with an autoinstall config, the keystrokes sent to GRUB, etc.
Process: tart [16122]
Path: /opt/homebrew/*/tart.app/Contents/MacOS/tart
Identifier: tart
Version: ???
Code Type: ARM-64 (Native)
Parent Process: packer-plugin-tart_v1.8.1_x5.0_darwin_arm64 [16116]
Responsible: zed [620]
User ID: 501
Date/Time: 2024-03-11 11:27:15.6467 -0600
OS Version: macOS 14.4 (23E214)
System Integrity Protection: enabled
Crashed Thread: 0
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x0000000104c21904
Termination Reason: Namespace SIGNAL, Code 5 Trace/BPT trap: 5
Terminating Process: exc handler [16122]
Thread 0 Crashed:
0 tart 0x105315904 Run.run() + 6660
1 tart 0x10531c30d protocol witness for AsyncParsableCommand.run() in conformance Run + 1
2 tart 0x10536d175 static Root.main() + 1
3 tart 0x10536dff5 specialized thunk for @escaping @convention(thin) @async () -> () + 1
4 libswift_Concurrency.dylib 0x256897149 completeTaskAndRelease(swift::AsyncContext*, swift::SwiftError*) + 1
Thread 1:
0 libsystem_pthread.dylib 0x19019dd20 start_wqthread + 0
Thread 2:: Dispatch queue: com.apple.virtualization.vnc.server
0 dyld 0x18fe617d8 invocation function for block in DyldSharedCache::forEachRange(void (char const*, unsigned long long, unsigned long long, unsigned int, unsigned long long, unsigned int, unsigned int, bool&) block_pointer, void (DyldSharedCache const*, unsigned int) block_pointer) const + 136
1 dyld 0x18fe61440 DyldSharedCache::forEachRegion(void (void const*, unsigned long long, unsigned long long, unsigned int, unsigned int, unsigned long long, bool&) block_pointer) const + 248
2 dyld 0x18fe616fc invocation function for block in DyldSharedCache::forEachRange(void (char const*, unsigned long long, unsigned long long, unsigned int, unsigned long long, unsigned int, unsigned int, bool&) block_pointer, void (DyldSharedCache const*, unsigned int) block_pointer) const + 132
3 dyld 0x18fe615e0 DyldSharedCache::forEachCache(void (DyldSharedCache const*, bool&) block_pointer) const + 64
4 dyld 0x18fe61588 DyldSharedCache::forEachRange(void (char const*, unsigned long long, unsigned long long, unsigned int, unsigned long long, unsigned int, unsigned int, bool&) block_pointer, void (DyldSharedCache const*, unsigned int) block_pointer) const + 124
5 dyld 0x18fe4c028 dyld4::APIs::findImageMappedAt(void const*, dyld3::MachOLoaded const**, bool*, char const**, void const**, unsigned long long*, unsigned char*, dyld4::Loader const**) + 268
6 dyld 0x18fe4c598 dyld4::APIs::dyld_image_path_containing_address(void const*) + 76
7 libsystem_trace.dylib 0x18fefbc88 _os_activity_stream_reflect + 328
8 libsystem_trace.dylib 0x18ff02740 _os_log_impl_stream + 504
9 libsystem_trace.dylib 0x18fef3a18 _os_log_impl_flatten_and_send + 7636
10 libsystem_trace.dylib 0x18fef1c2c _os_log + 168
11 libsystem_trace.dylib 0x18fef1b7c _os_log_impl + 28
12 Network 0x197ce0ccc networkd_settings_read_from_file() + 780
13 Network 0x197ce008c networkd_settings_init + 124
14 Network 0x197ce262c nw_allow_use_of_dispatch_internal + 280
15 Network 0x1978af0c0 nw_protocol_register_extended + 60
16 libdispatch.dylib 0x18fff23e8 _dispatch_client_callout + 20
17 libdispatch.dylib 0x18fff3c68 _dispatch_once_callout + 32
18 Network 0x19795bf60 __nw_protocol_setup_ip_definition_block_invoke + 280
19 libdispatch.dylib 0x18fff23e8 _dispatch_client_callout + 20
20 libdispatch.dylib 0x18fff3c68 _dispatch_once_callout + 32
21 Network 0x19760afa4 nw_parameters_create_secure_tcp + 2864
22 Virtualization 0x22504a508 void Base::DispatchQueue::async<-[_VZVNCServer start]::$_0>(-[_VZVNCServer start]::$_0&&)::'lambda'(void*)::__invoke(void*) + 476
23 libdispatch.dylib 0x18fff23e8 _dispatch_client_callout + 20
24 libdispatch.dylib 0x18fff9a14 _dispatch_lane_serial_drain + 748
25 libdispatch.dylib 0x18fffa544 _dispatch_lane_invoke + 380
26 libdispatch.dylib 0x1900052d0 _dispatch_root_queue_drain_deferred_wlh + 288
27 libdispatch.dylib 0x190004b44 _dispatch_workloop_worker_thread + 404
28 libsystem_pthread.dylib 0x19019f00c _pthread_wqthread + 288
29 libsystem_pthread.dylib 0x19019dd28 start_wqthread + 8
Thread 3:
0 libsystem_pthread.dylib 0x19019dd20 start_wqthread + 0
I'd like to have Packer spin up my VM with the --net-bridged=en0 option, as I do when launching the VM interactively. Is this possible? I didn't see it mentioned in https://developer.hashicorp.com/packer/plugins/builders/tart
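Roughly what I'd like to be able to write; this assumes some pass-through mechanism exists (the run_extra_args name is borrowed from elsewhere in this tracker and may not have been available at the time):

```hcl
source "tart-cli" "bridged" {
  vm_name = "example"

  # Pass extra flags through to `tart run`, e.g. bridged networking on en0
  run_extra_args = ["--net-bridged=en0"]
}
```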
When attempting to build a macOS VM today using Packer and the Tart plugin, the build failed with:
2023/03/01 10:32:10 ConfigSpec failed: gob: type cty.Type has no exported fields
2023/03/01 10:32:10 waiting for all plugin processes to complete...
2023/03/01 10:32:10 /Users/ci/.config/packer/plugins/github.com/cirruslabs/tart/packer-plugin-tart_v1.0.0_x5.0_darwin_arm64: plugin process exited
panic: ConfigSpec failed: gob: type cty.Type has no exported fields [recovered]
panic: ConfigSpec failed: gob: type cty.Type has no exported fields
goroutine 1 [running]:
log.Panic({0x14000f1e798?, 0x1000000000090?, 0x1?})
/Users/runner/hostedtoolcache/go/1.18.9/x64/src/log/log.go:385 +0x68
github.com/hashicorp/packer/packer.(*cmdBuilder).checkExit(0x140004a8790?, {0x106f924c0, 0x140004a87c0}, 0x0)
/Users/runner/work/packer/packer/packer/cmd_builder.go:47 +0x84
github.com/hashicorp/packer/packer.(*cmdBuilder).ConfigSpec.func1()
/Users/runner/work/packer/packer/packer/cmd_builder.go:19 +0x44
panic({0x106f924c0, 0x140004a87c0})
/Users/runner/hostedtoolcache/go/1.18.9/x64/src/runtime/panic.go:838 +0x204
github.com/hashicorp/packer-plugin-sdk/rpc.(*commonClient).ConfigSpec(0x14000682220)
/Users/runner/go/pkg/mod/github.com/hashicorp/[email protected]/rpc/common.go:44 +0x24c
github.com/hashicorp/packer/packer.(*cmdBuilder).ConfigSpec(0x0?)
/Users/runner/work/packer/packer/packer/cmd_builder.go:22 +0x5c
github.com/hashicorp/packer/hcl2template.decodeHCL2Spec({0x10809f9d8, 0x14000e00d20}, 0x10625bac8?, {0x1145c34d8?, 0x1400000c9c0?})
/Users/runner/work/packer/packer/hcl2template/decode.go:17 +0x3c
github.com/hashicorp/packer/hcl2template.(*PackerConfig).startBuilder(0x140005ac140, {{{0x140001d479f, 0x8}, {0x140001d47a8, 0x8}}, {0x0, 0x0}, {0x10809f9d8, 0x14000e00d20}}, 0x1400014eb10)
/Users/runner/work/packer/packer/hcl2template/types.source.go:116 +0x148
github.com/hashicorp/packer/hcl2template.(*PackerConfig).GetBuilds(0x140005ac140, {{0x0, 0x0, 0x0}, {0x140007ea030, 0x1, 0x1}, 0x0, 0x0, {0x0, ...}, ...})
/Users/runner/work/packer/packer/hcl2template/types.packer_config.go:654 +0xd68
github.com/hashicorp/packer/command.(*BuildCommand).RunContext(0x14000e00b10, {0x10809f038?, 0x14000706600}, 0x140003d7b00)
/Users/runner/work/packer/packer/command/build.go:110 +0x1a8
github.com/hashicorp/packer/command.(*BuildCommand).Run(0x14000e00b10, {0x140001a4020, 0x6, 0x6})
/Users/runner/work/packer/packer/command/build.go:38 +0xb0
github.com/mitchellh/cli.(*CLI).Run(0x140004163c0)
/Users/runner/go/pkg/mod/github.com/mitchellh/[email protected]/cli.go:262 +0x4cc
main.wrappedMain()
/Users/runner/work/packer/packer/main.go:262 +0xa40
main.realMain()
/Users/runner/work/packer/packer/main.go:49 +0xb8
main.main()
/Users/runner/work/packer/packer/main.go:35 +0x20
!!!!!!!!!!!!!!!!!!!!!!!!!!! PACKER CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
Packer crashed! This is always indicative of a bug within Packer.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Packer[1] so that we can fix this.
[1]: https://github.com/hashicorp/packer/issues
!!!!!!!!!!!!!!!!!!!!!!!!!!! PACKER CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
1.8.4
1.8.5
1.9.0-dev
1.0.0
As a workaround, I pinned the plugin version constraint to 0.6.3 or lower for now:
packer {
required_plugins {
tart = {
version = "<= 0.6.3"
source = "github.com/cirruslabs/tart"
}
}
}
Currently, the tart-cli Packer source always runs with --no-graphics. Making this configurable would support fully automating base-image builds, mostly for debugging what's happening.
The VMware builder implements the headless option
https://www.packer.io/plugins/builders/vmware/iso#headless
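Mirroring the VMware builder, the configuration could look something like this. Note the headless field is a hypothetical sketch of the requested feature, not an existing plugin option:

```hcl
source "tart-cli" "example" {
  vm_name  = "macos-base" # assumed name
  headless = false        # hypothetical: show the VM's GUI during the build
}
```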
Using TART_HOME to redirect the build to another drive fails in steps such as https://github.com/cirruslabs/packer-plugin-tart/blob/main/builder/tart/step_disk_file_prepare.go#L31 where the code assumes that the homeDir is simply os.UserHomeDir().
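The fix amounts to resolving Tart's home directory the same way the tart CLI itself does, rather than hard-coding the user home. A minimal sketch of that lookup in Python (the plugin itself is Go; ~/.tart as the default location is an assumption based on Tart's conventions):

```python
import os
from pathlib import Path

def tart_home() -> Path:
    """Resolve Tart's home directory: honor the TART_HOME
    environment override if set, otherwise fall back to ~/.tart."""
    override = os.environ.get("TART_HOME")
    if override:
        return Path(override)
    return Path.home() / ".tart"
```

With this, steps like step_disk_file_prepare.go would follow the redirect to another drive instead of looking under os.UserHomeDir().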
Hi folks,
When attempting to attach an additional disk during an image build (e.g. run_extra_args = ["--disk=/dev/disk7"]), the user is presented with the following error message:
failed to parse "diskutil list -plist physical" output: there are more than one physical disk present on the system
Would it be possible to alter the logic to allow multiple disk attachments? My use case for wanting to do this is to attach a disk containing a large pre-existing source code directory (100GB+) and have that code compiled by a Packer provisioning step, thereby creating a tart image containing the compiled binaries. Due to its large size I am not keen on cloning the repo in a provisioning step; rather, I would like to re-use the copy that already exists on the host.
As a workaround, I could compile my binaries on the host and copy them into the image, but I would prefer to have the compilation happen inside a tart VM, where the filesystem/dependencies/toolchain are immutable.
(Note: I tried mounting a directory via --dir, but ran into good old virtiofs bugs during compilation.)
For reference, the output of diskutil list -plist physical when an extra disk is attached looks something like this:
admin@admins-Virtual-Machine ~ % diskutil list -plist physical
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>AllDisks</key>
<array>
<string>disk0</string>
<string>disk0s1</string>
<string>disk0s2</string>
<string>disk2</string>
</array>
<key>AllDisksAndPartitions</key>
<array>
<dict>
<key>Content</key>
<string>GUID_partition_scheme</string>
<key>DeviceIdentifier</key>
<string>disk0</string>
<key>OSInternal</key>
<false/>
<key>Partitions</key>
<array>
<dict>
<key>Content</key>
<string>Apple_APFS_ISC</string>
<key>DeviceIdentifier</key>
<string>disk0s1</string>
<key>DiskUUID</key>
<string>D2B79297-879E-4461-8DA2-EEA50EA7319A</string>
<key>Size</key>
<integer>524288000</integer>
</dict>
<dict>
<key>Content</key>
<string>Apple_APFS</string>
<key>DeviceIdentifier</key>
<string>disk0s2</string>
<key>DiskUUID</key>
<string>D5BA624D-182F-40D0-8248-D08508A8D1B3</string>
<key>Size</key>
<integer>89475674112</integer>
</dict>
</array>
<key>Size</key>
<integer>90000000000</integer>
</dict>
<dict>
<key>Content</key>
<string></string>
<key>DeviceIdentifier</key>
<string>disk2</string>
<key>OSInternal</key>
<false/>
<key>Size</key>
<integer>107164426240</integer>
</dict>
</array>
<key>VolumesFromDisks</key>
<array/>
<key>WholeDisks</key>
<array>
<string>disk0</string>
<string>disk2</string>
</array>
</dict>
</plist>
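Given that output, the parsing logic could tolerate multiple physical disks by filtering the WholeDisks array instead of failing. A sketch in Python using the standard-library plist parser (the function name and the "boot disk is disk0" heuristic are assumptions; the actual plugin code is Go):

```python
import plistlib

def extra_whole_disks(plist_output: bytes, boot_disk: str = "disk0") -> list:
    """Return the whole disks other than the boot disk from
    `diskutil list -plist physical` output, instead of erroring
    when more than one physical disk is present."""
    info = plistlib.loads(plist_output)
    return [d for d in info["WholeDisks"] if d != boot_disk]
```

Applied to the sample above, this would single out disk2 as the attached extra disk.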
The feature I've used a lot with VMware is the HTTP directory configuration. This was useful with the boot_command to access files on the host from recoveryOS. As an example, here is the boot_command I used with VMware:
http_directory = "http"
boot_command = [
# Select English language
"<enter><wait10s>",
# Open Terminal
"<leftSuperOn><leftShiftOn>t<leftSuperOff><leftShiftOff><wait10s>",
# Mount the HTTP server
"hdiutil mount http://{{ .HTTPIP }}:{{ .HTTPPort }}/bootstrap.dmg<enter>",
"/Volumes/bootstrap/start<enter>"
]
Next to my Packer config file would be an http directory containing the bootstrap.dmg disk image, which would be mounted and would contain the scripts to run. This allowed me to iterate on the scripts faster than having to constantly update the boot_command.
This may also be useful for #15.
The past 10 or so builds have not saved after finishing. I can't find anything to indicate why it might fail to save.
==> base.tart-cli.base: Provisioning with shell script: /var/folders/zz/5pds_n8j1q98bg_dlbtm6r2c0000gn/T/packer-shell195630698
==> base.tart-cli.base: Gracefully shutting down the VM...
==> base.tart-cli.base: Waiting for the tart process to exit...
Build 'base.tart-cli.base' finished after 1 hour 34 minutes.
==> Wait completed after 1 hour 34 minutes
==> Builds finished. The artifacts of successful builds are:
--> base.tart-cli.base: macos-12.6
% tart list <table formatted for readability>
| Source | Name | Size | Running |
| local | macos-12.6-4a74327b-cf71-4dcf-bb22-c5fafe0f2777 | 45 | true |
| oci | xxx.dkr.ecr.us-west-2.amazonaws.com/macos-12.6:latest | 300 | false |
| oci | xxx.dkr.ecr.us-west-2.amazonaws.com/macos-12.6@sha256:shaxxx | 300 | false |
| oci | ghcr.io/cirruslabs/macos-monterey-base:latest | 40 | false |
| oci | ghcr.io/cirruslabs/macos-monterey-base@sha256:77a2fbbf0e533200cb6b5585bbc0898e6f7f8aadd0e6e385fd0ea86b0bb2a9b4 | 40 | false |
A VNC password is very useful if you want to capture headless image building or manipulate the UI via vncdo or any other method. It's supported in the VMware plugin; maybe there is a way to have the same here?
==> tart-cli: Starting the virtual machine...
==> tart-cli: Detecting host IP...
2024/02/29 12:40:28 packer-plugin-tart_v1.8.1_x5.0_darwin_arm64 plugin: 2024/02/29 12:40:28 Executing tart: []string{"ip", "--wait", "120", "macos1306"}
==> tart-cli: Host IP is assumed to be 172.16.226.1
==> tart-cli: Waiting for the VNC server credentials from Tart...
==> tart-cli: Retrieved VNC credentials, connecting...
==> tart-cli: Connected to the VNC!
==> tart-cli: Waiting 20s after the VM has booted...
==> tart-cli: Starting the virtual machine...
==> tart-cli: Detecting host IP...
2024/02/29 12:40:28 packer-plugin-tart_v1.8.1_x5.0_darwin_arm64 plugin: 2024/02/29 12:40:28 Executing tart: []string{"ip", "--wait", "120", "macos1306"}
==> tart-cli: Host IP is assumed to be 172.16.226.1
==> tart-cli: Waiting for the VNC server credentials from Tart...
==> tart-cli: Retrieved VNC credentials, connecting...
tart-cli: The VM will be run headless, without a GUI. If you want to
tart-cli: view the screen of the VM, connect via VNC with the password "ABcD1Fgh" to
tart-cli: vnc://127.0.0.1:5935
==> tart-cli: Connected to the VNC!
==> tart-cli: Waiting 20s after the VM has booted...
==> vmware-vmx: Starting virtual machine...
vmware-vmx: The VM will be run headless, without a GUI. If you want to
vmware-vmx: view the screen of the VM, connect via VNC with the password "ABcD1Fgh" to
vmware-vmx: vnc://127.0.0.1:5935
==> vmware-vmx: Connecting to VNC...
==> vmware-vmx: Waiting 1m0s for boot...
While attempting to build the linux example to see what the output was I encountered an issue where it was waiting for VNC credentials that never arrived.
I tried to cancel the action via cmd+c and it printed "Cancelling build after receiving interrupt", but it was frozen. I had to kill the process in Activity Monitor before Packer could continue.
packer-plugin-tart/example on main
❯ packer init ubuntu-22.04-vanilla.pkr.hcl
packer-plugin-tart/example on main
❯ packer validate ubuntu-22.04-vanilla.pkr.hcl
The configuration is valid.
packer-plugin-tart/example on main
❯ packer build ubuntu-22.04-vanilla.pkr.hcl
tart-cli.tart: output will be in this color.
==> tart-cli.tart: Creating virtual machine...
==> tart-cli.tart: Starting the virtual machine for installation...
==> tart-cli.tart: Waiting for the VNC server credentials from Tart...
Cancelling build after receiving interrupt
Build 'tart-cli.tart' errored after 2 hours 13 minutes: unexpected EOF
==> Wait completed after 2 hours 13 minutes
Cleanly cancelled builds after being interrupted.
packer-plugin-tart/example on main took 2h52m8s
❯
It appears that the SSH shell provisioner isn't triggering on my Fedora build on Tart, which is almost identical to my build on x86_64 VirtualBox.
You can reproduce this like so:
git clone https://github.com/HariSekhon/Packer-templates pack
cd pack
make fedora-tart-http
The output is:
...
Fedora ISOs prepared
pkill -9 -if -- '.*python.* -m http.server'
cd installers && python3 -m http.server &
packer build --force fedora-38-arm64.tart.http.pkr.hcl
Serving HTTP on :: port 8000 (http://[::]:8000/) ...
fedora-38.tart-cli.fedora-38: output will be in this color.
==> fedora-38.tart-cli.fedora-38: Creating virtual machine...
==> fedora-38.tart-cli.fedora-38: Starting the virtual machine for installation...
==> fedora-38.tart-cli.fedora-38: Waiting for the VNC server credentials from Tart...
==> fedora-38.tart-cli.fedora-38: Retrieved VNC credentials, connecting...
==> fedora-38.tart-cli.fedora-38: Connected to the VNC!
==> fedora-38.tart-cli.fedora-38: Typing the commands over VNC...
==> fedora-38.tart-cli.fedora-38: Waiting for the install process to shutdown the VM...
The Fedora prompt is up but the SSH provisioner never triggers, as verified by logging in and checking that the /etc/packer-version file created by my script never materializes, and the VM never shuts down to complete the build.
Those last two lines of the output are what make me think the SSH commands are not being run: when I go to tty2 to investigate, the /etc/packer-version file created by my script is missing, and I can't see a hanging bash command in the ps output (although it doesn't help that I have to pipe the ps output through less to page it).