vic's Introduction

vSphere Integrated Containers Engine

vSphere Integrated Containers Engine (VIC Engine) is a container runtime for vSphere. It allows developers familiar with Docker to develop in containers and deploy them alongside traditional VM-based workloads on vSphere clusters, and it allows vSphere admins to manage these workloads through the vSphere UI in a familiar way.

See VIC Engine Architecture for a high level overview.

Support and Troubleshooting

For general questions, visit the vSphere Integrated Containers Engine channel. If you do not have an @vmware.com or @emc.com email address, sign up at https://code.vmware.com/join to get an invitation.

Project Status

VIC Engine now provides:

  • support for most of the Docker commands for core container, image, volume and network lifecycle operations. Several docker compose commands are also supported. See the complete list of supported commands here.
  • vCenter support, leveraging DRS for initial placement. vMotion is also supported.
  • volume support for standard datastores such as vSAN and iSCSI; NFS shares are also supported. See --volume-store. SIOC is not integrated but can be configured as normal.
  • direct mapping of vSphere networks via --container-network. NIOC is not integrated but can be configured as normal.
  • dual-mode management - IP addresses are reported as normal via the vSphere UI, guest shutdown via the UI triggers delivery of the container STOPSIGNAL, and restart relaunches the container process.
  • client authentication - basic authentication via client certificates, known as tlsverify.
  • integration with the VIC Management Portal (Admiral) for Docker image content trust.
  • integration with the vSphere Platform Services Controller (PSC) for Single Sign-on (SSO) for docker commands such as docker login.
  • an install wizard in the vSphere HTML5 client, as a more interactive alternative to installing via the command line. See details here.
  • support for a standard Docker Container Host (DCH) deployed and managed as a container on VIC Engine. This can be used to run docker commands that are not currently supported by VIC Engine (docker build, docker push). See details here.

We are working hard to add functionality while building out our foundation, so continue to watch the repo for new features. The initial focus is on the production end of the CI pipeline, building backwards towards developer laptop scenarios.

Installing

After building the binaries (see the Building section), pick the correct binary for your OS and install the Virtual Container Host (VCH) with the following command. For Linux:

bin/vic-machine-linux create --target <target-host>[/datacenter] --image-store <datastore name> --name <vch-name> --user <username> --password <password> --thumbprint <certificate thumbprint> --compute-resource <cluster or resource pool name> --tls-cname <FQDN, *.wildcard.domain, or static IP>

See vic-machine-$OS create --help for usage information. A more in-depth example can be found here.

Deleting

The installed VCH can be deleted using vic-machine-$OS delete.

See vic-machine-$OS delete --help for usage information. A more in-depth example can be found here.
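
For example, on Linux, deleting the VCH created above might look like the following. The flags mirror the create example; the exact set you need depends on how the VCH was deployed:

bin/vic-machine-linux delete --target <target-host>[/datacenter] --user <username> --password <password> --thumbprint <certificate thumbprint> --compute-resource <cluster or resource pool name> --name <vch-name>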

Contributing

See CONTRIBUTING for details on submitting changes and the contribution workflow.

Building

Building the project is done with a combination of make and containers, using gcr.io/eminent-nation-87317/vic-build-image:tdnf as the common container base. This requires Docker to be installed. If gcr.io is not accessible, you can follow the steps in Dockerfile to build this image.

To match the formal build as closely as possible, you need the Go 1.8 toolchain and Drone.io installed:

drone exec

To build inside a Docker container:

docker run -v $GOPATH/bin:/go/bin -v $(pwd):/go/src/github.com/vmware/vic gcr.io/eminent-nation-87317/vic-build-image:tdnf make all

To build directly, you also need the Go 1.8 toolchain installed:

make all

There are three primary components generated by a full build, found in $BIN (the ./bin directory by default). The make targets used are the following:

  1. vic-machine - make vic-machine
  2. appliance.iso - make appliance
  3. bootstrap.iso - make bootstrap

Building binaries for development

Some of the project binaries can only be built on Linux. If you are developing on macOS or Windows, the easiest way to build is to use the project's Vagrantfile. The Vagrantfile shares the directory in which it is run into the VM and sets GOPATH based on that share.
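
As a sketch, assuming Vagrant and a supported provider are already installed, the flow from the directory containing the Vagrantfile is roughly:

vagrant up                      # provision the Linux build VM, sharing this directory into it
vagrant ssh                     # log in to the VM; GOPATH is derived from the shared directory
cd <shared source directory>    # in-guest location depends on the Vagrantfile mapping
make components                 # run the Linux-only builds from inside the VM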

To build the component binaries, ensure GOPATH is set, then issue the following command in the root directory:

make components

This will install the required tools and build the component binaries (tether-linux and rpctool) and the server binaries (docker-engine-server and port-layer-server). The binaries will be created in the $BIN directory, ./bin by default.

To run unit tests after a successful build, issue the following:

make test

Running "make" every time causes Go dependency regeneration for each component, so that "make" can rebuild only those components that are changed. However, such regeneration may take significant amount of time when it is not really needed. To fight that developers can use cached dependencies that can be enabled by defining the environment variable VIC_CACHE_DEPS. As soon as it is set, infra/scripts/go-deps.sh will read cached version of dependencies if those exist.

export VIC_CACHE_DEPS=1

Note that as soon as you add a new package or an internal project dependency that did not exist before, the dependency lists should be regenerated to reflect the latest changes. This can be done by running:

make cleandeps

After that, the next make run will regenerate the dependencies from scratch.

To enable generation of non-stripped binaries, the following environment variable can be set:

export VIC_DEBUG_BUILD=true

Updating the appliance with newly built binaries

After building any of the binaries for the appliance VM (vicadmin, vic-init, port-layer-server, or the docker personality), run make push to replace the binaries on your VCH with the newly built ones.

make push will prompt you for the information it needs. Alternatively, set your GOVC environment variables, along with VIC_NAME (the name of your VCH) and VIC_KEY (the path to your SSH key), to run make push non-interactively.
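
For illustration, a non-interactive push could be configured along these lines; the values are placeholders and the exact GOVC_* variables you need depend on your environment:

export GOVC_URL=<vcenter-or-esx-address>
export GOVC_USERNAME=<username>
export GOVC_PASSWORD=<password>
export GOVC_INSECURE=1          # only if the target certificate is untrusted
export VIC_NAME=<vch-name>
export VIC_KEY=~/.ssh/id_rsa
make push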

Replace individual components with one of: make push-portlayer, make push-vicadmin, make push-docker, or make push-vic-init.

Managing vendor/ directory

To populate the vendor/ directory with the VIC Engine build dependencies, ensure GOPATH is set, then issue the following:

make gvt vendor

This will install the gvt utility and retrieve the build dependencies via gvt restore.
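
If you later add a new third-party dependency, gvt can also fetch it into vendor/. The package path below is purely illustrative and assumes the gvt binary installed by make gvt is on your PATH:

gvt fetch github.com/example/somelib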

Building the ISOs

The component binaries above are packaged into ISO files, appliance.iso and bootstrap.iso, that are used by the installer. The generation of the ISOs is split into the following targets:

iso-base, appliance-staging, bootstrap-staging, appliance, and bootstrap

Generation of the ISOs involves authoring a new root filesystem, meaning running a package manager (currently tdnf) and packing/unpacking archives. This is done using gcr.io/eminent-nation-87317/vic-build-image:tdnf as the build container, which requires Docker to be installed. If gcr.io is not accessible, you can follow the steps in Dockerfile to build this image. To generate the ISOs:

make isos

The appliance and bootstrap ISOs are bootable CD images used to start the VMs that make up VIC Engine. To build them inside the build container, issue the following:

docker run -v $(pwd):/go/src/github.com/vmware/vic gcr.io/eminent-nation-87317/vic-build-image:tdnf make isos

The ISO images will be created in $BIN.

Building Custom ISOs

Refer to this document to build your custom ISOs.

Building with CI

PRs to this repository will trigger builds on our Drone CI.

To build locally with Drone:

  1. Ensure that you have Docker 1.6 or higher installed.
  2. Install the Drone command line tools.
  3. From the root directory of the vic repository, run drone exec.

Common Build Problems

  1. Builds may fail when building either the appliance.iso or bootstrap.iso with the error: cap_set_file failed - Operation not supported

    Cause: Some Ubuntu and Debian based systems ship with a defective aufs driver, which Docker uses as its default backing store. This driver does not support extended file capabilities such as cap_set_file.

    Solution: Edit the /etc/default/docker file, add the option --storage-driver=overlay to the DOCKER_OPTS settings, and restart Docker. An example edit is shown after this list.

  2. go vet fails when doing a make all

    Cause: Apparently some caching takes place in $GOPATH/pkg/linux_amd64/github.com/vmware/vic and can cause go vet to fail when evaluating outdated files in this cache.

    Solution: Delete everything under $GOPATH/pkg/linux_amd64/github.com/vmware/vic and re-run make all. A one-line command for this is shown after this list.

  3. vic-machine upgrade integration tests fail due to BUILD_NUMBER being set incorrectly when building locally

    Cause: vic-machine checks the build number of its binary to determine upgrade status and a locally-built vic-machine binary may not have the BUILD_NUMBER set correctly. Upon running vic-machine upgrade, it may fail with the message foo-VCH has same or newer version x than installer version y. No upgrade is available.

    Solution: Set BUILD_NUMBER to a high number at the top of the Makefile - BUILD_NUMBER ?= 9999999999. Then, re-build binaries - sudo make distclean && sudo make clean && sudo make all and run vic-machine upgrade with the new binary.
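
For problem 1, the relevant line in /etc/default/docker would end up looking something like the following (preserve any other options you already have there), after which Docker needs a restart, e.g. sudo service docker restart or sudo systemctl restart docker depending on your init system:

DOCKER_OPTS="--storage-driver=overlay"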
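
For problem 2, clearing the stale cache and rebuilding can be done in one step, assuming a standard GOPATH layout:

rm -rf "$GOPATH/pkg/linux_amd64/github.com/vmware/vic" && make all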

Integration Tests

The VIC Engine Integration Test Suite includes instructions for running the tests locally.

Debugging with DLV

VIC Engine Debugging with DLV includes instructions on how to use dlv.

License

VIC Engine is available under the Apache 2 license.


vic's Issues

Disable ASLR at kernel boot and re-enable at tether/init

Imported from BON-283.

If we need to disable ASLR and re-enable it in the future (e.g. post kernel boot), the Linux kernel provides a /proc sysctl interface that can be leveraged. This needs to be verified, but it is a first step which may just work; a sketch follows the quoted documentation below.
From the kernel documentation for the randomize_va_space sysctl:

    This option can be used to select the type of process address space randomization that is used in the system, for architectures that support this feature.

    0 - Turn the process address space randomization off. This is the default for architectures that do not support this feature anyways, and kernels that are booted with the "norandmaps" parameter.

    1 - Make the addresses of mmap base, stack and VDSO page randomized. This, among other things, implies that shared libraries will be loaded to random addresses. Also for PIE-linked binaries, the location of code start is randomized. This is the default if the CONFIG_COMPAT_BRK option is enabled.

    2 - Additionally enable heap randomization. This is the default if CONFIG_COMPAT_BRK is disabled.

    There are a few legacy applications out there (such as some ancient versions of libc.so.5 from 1996) that assume that the brk area starts just after the end of the code+bss. These applications break when the start of the brk area is randomized. There are however no known non-legacy applications that would be broken this way, so for most systems it is safe to choose full randomization.

    Systems with ancient and/or broken binaries should be configured with CONFIG_COMPAT_BRK enabled, which excludes the heap from process address space randomization.
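
A minimal sketch of how this could be driven at runtime, assuming the standard procfs interface is available in the bootstrap guest (when exactly these writes happen is the part that needs verification):

echo 0 > /proc/sys/kernel/randomize_va_space   # disable ASLR early (also achievable at boot with the "norandmaps" kernel parameter)
echo 2 > /proc/sys/kernel/randomize_va_space   # re-enable full randomization from the tether/init once boot is complete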

Unify bootstrap Dockerfiles and merge with single vendored makefile

If the build is now carried out by a makefile, this simplifies the Dockerfile significantly. The Dockerfile will do little more than install some build tools, call make, and build the ISO. The base Dockerfile will be subsumed by this single file and all caching will happen in the docker context (SRCTOP).

Unify vendoring of repo

We should use a vendoring tool (e.g. gvt, gb, gpm) to populate the vendor/ directory at the root of the repo. This will allow us to keep only our code in the repo while being able to grab dependencies at build time.

Quantify suitability of ESX agent to perform low-level tasks

There are a number of architectural decisions that pivot on whether or not we use an ESX agent to provide very low-level services to multiple tenants.

Possible functions/benefits of an ESX agent:

  • vSocket support for tether interaction. This would eliminate our need to use serial-over-LAN for communications between tether and VCH. Serial-over-LAN currently inhibits vMotion and won't run on free ESX due to license restrictions.
  • Authentication proxy. Not only would we do authentication out-of-band, but it would mean that a VCH could be untrusted (not run in the VC management network). We would need to consider how the authenticating proxy might present in the container guestOS, and a VMOMI gateway may well be the appropriate mechanism (the guest sends SOAP requests to an endpoint which provides validation and authentication before forwarding).
  • Out-of-band VMDK preparation. Currently VMDK prep is a bottleneck in the docker pull path, given the need to attach and detach disks to/from VMs in order to be able to write to them. When we have a viable solution for out-of-band VMDK prep, we will need an endpoint to delegate to.

The investigation work that needs to be done is:

  • Write and deploy a HelloWorld agent to an ESX host. How involved is the toolchain / build process?
  • Investigate mechanisms for installing/uninstalling/upgrading as part of the VIC product. What if a host is added to a cluster? Can the agent be pushed to the host automatically?
  • In addition to this, we need to have some decisions around all of the above functions/benefits (timeframe, basic designs) so that we can decide how critical an agent might be in the short term.
  • Are there other optimizations that the agent may be good for?
  • What implications would the ESX agent have on container vMotion? How would the vMotioned guest re-attach to a new agent on a new host?
  • A significant consideration is how we build a VIB without access to internal tool chains. Are there precedents for this from VMware partners? Are there libraries we can link to? How would we make an SDK available to OSS contributors to build against?

Using a very simple passthrough API for the agents lets us be very API-version agnostic w.r.t. the contents of the stream.

Create VCH networks page

Create a networks tab on the vch page to show similar content to the normal networks page of vApps.

Specify Docker Machine integration for VCH creation

Docker Machine will be one of the ways in which a VCH can be created. We need to understand how flexible the plug-in model is so that we can come up with a specification for how Docker Machine might be able to drive both vSphere admin tasks and user tasks.

Implementation of imagec

The Port Layer storage APIs will not cover image resolution. They are specifically concerned with how to take a tar file as an input (ideally streaming) and then store, index, attach, export etc. Given that the Port Layer is supposed to be container-engine and OS-agnostic, the question of how an image name gets resolved to one or more tar files must be handled by a layer above.

It makes sense to break this capability out into a simple binary that could be driven by the container engine, which, for argument's sake, we can call "imagec". Docker may well split out their own image resolution code into an imagec themselves, and if this happens, it would be desirable for us to simply adopt that code.

So this issue will cover the building of an imagec binary. In order to spec this out, we need to understand exactly what metadata is stored, derived and managed by a v2 Docker Registry; how recursive resolution is handled; how and where imagec should buffer images it is downloading; and what the interface between imagec and the port layer should look like.
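
As a concrete sketch of that metadata, the public Docker Hub v2 flow for fetching a manifest looks roughly like this (repository name and endpoints are the Docker Hub ones; a private registry would differ, and jq is assumed to be available):

TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/busybox:pull" | jq -r .token)
curl -s -H "Authorization: Bearer $TOKEN" -H "Accept: application/vnd.docker.distribution.manifest.v2+json" "https://registry-1.docker.io/v2/library/busybox/manifests/latest"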

ToDo items:

  • More testing
  • Testing with a hub account
  • Testing with a private registry
  • Integration with port layer
  • Signature checking
  • Content sum checking
  • Need to send HEAD requests before GETs
  • Need to extract JSON metadata

Create disk package for vsphere

We need a package to handle disk preparation from within the VCH, such as attach/detach, mount/unmount, format, etc.

There is an existing collection of such methods in bonneville-daemon/daemon/modules/vmware/disk.go

These can be refactored into a new package (pkg/vsphere/disk) that is not tied to the docker daemon.

Swagger generated Docker API server

We're going to want docker API server bindings:

  • FVT tests that don't encompass docker code
  • client side interaction and integration with other products
  • possible implementation of thin semantic wrappers between API and port layer abstractions

Refactor bonneville-container/tether to build locally

Imported from BON-274

This is the easiest repo to refactor since it's all our code. The objective is to allow the tether component to be built locally for all platforms it supports. It currently builds inside build containers where the Dockerfile copies only the relevant files for the specified platform. This breaks local tools when developing and makes development hard.

Drone CI

Add Drone config for building and testing, as well as updating README to instruct on how to use Drone to build locally.

Decide on bootstrap/tether's logging when/if serial is not available

The bootstrap kernel (which we're currently building from source) adds the serial driver to the kernel and directs a console to it. This serial port is file-backed in the ESX datastore, so anything the kernel writes to the console goes to the serial port, and ESX writes it to a log file.

The issue is that the "mainline" photon kernel does not include the serial port driver in the kernel. They cite a performance penalty, incurred by all VMs using their kernel, in adding a serial port to the kernel.

We use this console to write kernel panics, which is what happens if the tether panics. The log ensures you get the tether stack trace in a log file before the kernel OOPSes, whereas the VGA console will only capture the OOPS (due to log wrap).

There is, however, an initrd which includes the serial module.

So we have two options:

  1. Modify the logger in the tether to log to the serial port rather than the console. This means we no longer need to build our own kernel, and the VGA console is reserved for kernel OOPSes.

  2. Continue to build our own kernel and log to the serial console.
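
For reference, directing the kernel console to the first serial port is normally just a kernel command-line setting along these lines (the baud rate and the secondary VGA console entry are illustrative):

console=ttyS0,115200n8 console=tty0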

Automate ESXi image build for vCA

We need a way to create an OVF of ESX from a specified build/release and upload that OVF to vCloud Air in an automated fashion, for deploying ESX/vCenter test harness(es) of various versions.

Need a specification and control-flow for VCH self-provision

When self-provisioning a VCH, access to certain vSphere system resources needs to be granted ahead of time by an admin. The user and the admin need a secure and simple mechanism by which access is granted, presented and validated. The simplest approach is for the vSphere admin to be able to create a binary token, representing access to specific resources, which can be passed as input to VCH creation.

In order to specify this workflow, we need to be clearer about the mechanisms of authentication, authorization and validation that we've chosen. We also need to decide what the scope of the token should be.

Fix iso building in container

I nuked isolinux.bin and boot.cat from the repo. This breaks the iso build in the linux/Dockerfile. isolinux.bin can be grabbed from the build container. boot.cat needs to be looked at.

In any case, fix the iso building.

NSX deployment and distributed port group creation

We need to have well documented workflow for NSX deployment and DPG creation:

  1. manual steps initially
  2. automated for nested test topologies
  3. automation for docker network integration

This investigation should document at least (1) and generate additional issues for (2) and (3)

Investigate options and benefits of diskless appliance

The appliance must be capable of being restarted while containers are running. This raises a number of questions about the impact this has:

  • Interactivity with the containers - what control points should continue to work?
  • Whether the appliance can be stateless (see diskless discussion)
  • What state the appliance must re-discover and how/where it gets it from
  • What if the appliance has migrated to a new ESX host?
  • How does it re-authenticate?
  • Do custom attributes on the VM have a role to play?

The Bonneville appliance has a local disk as well as a ramdisk. It may be beneficial to see if we can make the VIC appliance "diskless" - have it boot off the ISO and then have everything else written into memory. The main question marks around this are:

  • What are the benefits? What do we write to the local disk today and does it need to be persisted?
  • Can we go stateless with the appliance? When it gets restarted, can it discover everything it needs?
  • Where does Docker container / image metadata live?

Direct VMDK manipulation investigation

Investigate how we can directly manipulate VMDKs on ESX. The following high level tasks are of specific interest:

  • Extract tar archive onto VMDK - presuming ext4 as the filesystem format, which should be suitable for a Linux rootfs
  • Expand VMDK capacity, followed by resize filesystem - this should be a "rare" operation so could be handled via a block device attach to a service VM, but we should know if it's viable via direct operation.

Datastore helpers for vsphere

The existing bonneville daemon has datastore related functionality that can be refactored into its own set of utilities. This will require a bit of investigation, starting in daemon/modules/vmware/driver.go:

    imagePath          string
    imageDatastore     *Datastore
    containerPath      string
    containerDatastore *Datastore
    volumePath         string
    volumeDatastore    *Datastore

It may merit its own pkg/vsphere/datastore package or perhaps just a Datastore type in pkg/vmware/object/datastore.go

type Datastore struct {
   *object.Datastore // govmomi/object

   // cache, etc
   dsType string
   dsPath string
}

VirtualMachine helpers for vsphere

There are several existing helpers in daemon/modules/vmware/utils.go - however most depend on internal APIs (#8). Once we have the internal part sorted out, we can refactor these into pkg/vsphere/object/virtual_machine.go

Write test to measure memory overhead and run it as part of CI

We should use the STREAM test here.

This should assess the total consumed memory (preferably with breakdown of ESX/VM/guest) in:

  • the template
  • a minimal live container (snapshot X seconds after start)
  • a minimal live container (idle over time)
  • a minimal live container (defined workload over time)

This will allow us to track not only direct memory usage, but also page breaking behaviour.

Come up with detailed profile of startup time

Test container start time, across the various configurations we enable:

  • thin disk
  • sesparse disk
  • tags enabled
  • custom attributes enabled

There is already a tracing facility in the code to time function calls; perhaps we can use that to get a detailed startup-time breakdown.

Create VCH datastores page

Create a datastores tab on the vch page to show similar content to the normal datastore page of vApps.

Spec for container command & control operations

This may be swagger, swagger used solely for specification rather than generation, or something else.

Currently the ssh server code is the spec for our communication. This is fragile and makes testing awkward when the client/server code are separated, particularly when it comes to the data structures that pass over the wire.

When specifying this, consider it to be a definition of the data that needs to pass from the VCH to the containerVM, not how it is passed. For example, we could pass an IP directly via ssh, or we could place it into the guestinfo for the containerVM so that it is persisted in an infrastructure-visible fashion (the latter is preferred).
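
As a sketch of the guestinfo option, a value can be set on the containerVM from the infrastructure side and read back inside the guest via the VMware Tools RPC channel; the key name here is purely hypothetical:

govc vm.change -vm <containerVM> -e guestinfo.vice.ip=192.168.100.10   # set from outside (hypothetical key)
vmware-rpctool "info-get guestinfo.vice.ip"                            # read from inside the guest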

Pull nightly photon builds

In order to see and address breakages early, we want to move to using photon nightly builds as our base. This has a dependency on #16
