
azure-vhd-utils's Introduction

Azure VHD utilities.

This project provides a Go package to read Virtual Hard Disk (VHD) files, and a CLI to upload a local VHD to Azure storage and to inspect a local VHD.

An implementation of the VHD specification can be found in the vhdcore package.


Installation

Note: You must have Go installed on your machine, at version 1.11 or greater. https://golang.org/dl/

go get github.com/Microsoft/azure-vhd-utils

Features

  1. Fast uploads - This tool speeds up uploads by using multiple goroutines and balancing the load across them.
  2. Efficient uploads - This tool uploads only the used (non-zero) portions of the disk.
  3. Parallelism - This tool can upload segments of the VHD concurrently (user configurable).

Usage

Upload local VHD to Azure storage as page blob

USAGE:
   azure-vhd-utils upload [command options] [arguments...]

OPTIONS:
   --localvhdpath       Path to source VHD in the local machine.
   --stgaccountname     Azure storage account name.
   --stgaccountkey      Azure storage account key.
   --containername      Name of the container holding destination page blob. (Default: vhds)
   --blobname           Name of the destination page blob.
   --parallelism        Number of concurrent goroutines to be used for upload. (Default: 8 * number of CPUs)

The upload command uploads a local VHD to Azure storage as a page blob. Once uploaded, you can use the Microsoft Azure portal to register an image based on this page blob and use it to create Azure virtual machines.

Note

When creating a VHD for Microsoft Azure, the size of the VHD must be a whole number of megabytes; otherwise you will see an error similar to the following when you attempt to create an image from the uploaded VHD in Azure:

"The VHD http://.blob.core.windows.net/vhds/.vhd has an unsupported virtual size of bytes. The size must be a whole number (in MBs)."

You should ensure the VHD size is a whole number of megabytes before uploading.

For VirtualBox:

VBoxManage modifyhd <path-to-vhd> --resize <size-in-MB>

For Hyper-V:

Resize-VHD -Path <path-to-vhd> -SizeBytes <size-in-bytes>

 http://azure.microsoft.com/blog/2014/05/22/running-freebsd-in-azure/
For QEMU:

qemu-img resize <path-to-raw-file> <size>

 http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-create-upload-vhd-generic/

How the upload works

Azure requires the VHD to be in the Fixed Disk format. The command converts Dynamic and Differencing disks to Fixed Disk during the upload process; the conversion does not consume any additional space on the local machine.

In the case of a Fixed Disk, the command detects blocks containing only zeros, and those will not be uploaded. In the case of expandable disks (dynamic and differencing), only the blocks marked as non-empty in the Block Allocation Table (BAT) will be uploaded.

Blocks containing data will be uploaded as chunks of 2 MB pages. If the disk's block size is less than 2 MB, consecutive blocks will be merged to create 2 MB pages; if the block size is greater than 2 MB, the tool will split blocks into 2 MB pages.

With page blobs, we can upload multiple pages in parallel to decrease upload time. The command accepts the number of concurrent goroutines to use for the upload through the parallelism parameter. If the parallelism parameter is not provided, it defaults to 8 * number_of_cpus.

Inspect local VHD

A set of subcommands is exposed under the inspect command for inspecting various segments of a VHD on the local machine.

Show VHD footer

USAGE:
   azure-vhd-utils inspect footer [command options] [arguments...]

OPTIONS:
   --path   Path to VHD.

Show VHD header of an expandable disk

USAGE:
   azure-vhd-utils inspect header [command options] [arguments...]

OPTIONS:
   --path   Path to VHD.

Only expandable (dynamic and differencing) VHDs have a header.

Show Block Allocation Table (BAT) of an expandable disk

USAGE:
   azure-vhd-utils inspect bat [command options] [arguments...]

OPTIONS:
   --path           Path to VHD.
   --start-range    Start range.
   --end-range      End range.
   --skip-empty     Do not show BAT entries pointing to empty blocks.

Only expandable (dynamic and differencing) VHDs have a BAT.

Show block general information

USAGE:
   azure-vhd-utils inspect block info [command options] [arguments...]

OPTIONS:
   --path   Path to VHD.

This command shows the total number of blocks, the block size, and the size of a block's sector.

Show sector bitmap of an expandable disk's block

USAGE:
   azure-vhd-utils inspect block bitmap [command options] [arguments...]

OPTIONS:
   --path           Path to VHD.
   --block-index    Index of the block.
   

License

This project is published under the MIT License.

azure-vhd-utils's People

Contributors

anuchandy, colemickens, marstr, mcronce, microsoft-github-policy-service[bot], msftgits, nathanleclaire


azure-vhd-utils's Issues

Progress doesn't get shown

The console only shows

 Completed:   0% [      0.00 MB] RemainingTime: 00h:12m:43s Throughput: 0 MB/sec   | 

, but apparently it is still uploading.
EDIT: Now it is 17% (200MB uploaded) with a throughput of still 0 MB/sec.

Segmentation fault (core dumped)

While trying to run the command "go get github.com/Microsoft/azure-vhd-utils" on Ubuntu 14.04, I get the error below:
Segmentation fault (core dumped)
Can somebody help?

How can I install this tool on Windows?

I ran into the dynamic VHD uploading issue and need this tool to upload my dynamic Linux VHD,
but I must use Windows to do this work.
Does this tool only support Linux? How can I install it on Windows?
Honestly, I am a little disappointed in Microsoft that I cannot do all the work with the Azure PowerShell CLI and have to install other tools to do the job.

Upload Incomplete: Some blocks of the VHD failed to upload, rerun the command to upload those blocks

I see the above error message in the title after trying to upload a VHD to Azure:

Effective upload size: 27092.00 MB (from 28612.00 MB originally)

I see a large number of messages of the form:

{29808918528, 29811015679}: storage: service returned error: StatusCode=503, ErrorCode=ServerBusy, ErrorMessage=The server is busy.

No matter how many times I rerun the command I see the same kind of errors.
Any advice would be greatly appreciated.

Many Thanks

-stgaccountname for Azure Stack

May I know whether --stgaccountname supports Azure Stack? When I put in my storage account name, it goes to .blob.core.windows.net, while my storage account URL is the storage account name plus the Azure Stack URL.

'DEPRECATED Action signature' warning

@anuchandy Thanks so much for this program.

I am running it after installing with go get and I get a warning on most command invocations:

DEPRECATED Action signature.  Must be `cli.ActionFunc`.  This is an error in the application.  Please contact the distributor of this application if this is not you.  See https://github.com/urfave/cli/blob/master/CHANGELOG.md#deprecated-cli-app-action-signature

FYI. Probably want to do a go get -u and check that out.

Automatically resize disk to next whole MB

It would be nice if this tool could automatically resize the disk to the next whole MB before uploading. Resizing the disk manually is a pain and is error prone.

Set MD5 for upload

Before uploading the VHD, the tool must compute the MD5 checksum and send it to Azure storage. Go supports computing MD5 via "crypto/md5", and we have a package to show the progress.

Add support for this feature.

Remove -for-go suffix

I think we can use mitchellh/gox and provide signed cross-platform binary releases on GitHub.

There's no point in mentioning that this is a Go package in the README or in the repo name, is there?

Document benefits/why

It would be good to have a bullet list in the README enumerating why this should be preferred over the azure storage blob upload command.

go get github.com/Microsoft/azure-vhd-utils-for-go fails

I just ran:

go get github.com/Microsoft/azure-vhd-utils-for-go

This is the error message I get:

gopath\src\github.com\Microsoft\azure-vhd-utils-for-go\vhdUploadCmdHandler.go:158: cannot use blobServiceClient (type "github.com/Microsoft/azure-vhd-utils-for-go/vendor/github.com/Azure/azure-sdk-for-go/storage".BlobStorageClient) as type "github.com/Microsoft/azure-vhd-utils/vendor/github.com/Azure/azure-sdk-for-go/storage".BlobStorageClient in field value
gopath\src\github.com\Microsoft\azure-vhd-utils-for-go\vhdUploadCmdHandler.go:212: cannot use client (type "github.com/Microsoft/azure-vhd-utils-for-go/vendor/github.com/Azure/azure-sdk-for-go/storage".BlobStorageClient) as type "github.com/Microsoft/azure-vhd-utils/vendor/github.com/Azure/azure-sdk-for-go/storage".BlobStorageClient in argument to metadata.NewMetadataFromBlob

Tried in both Windows and Ubuntu 16.06.

Installation problem

Hello.
During installation I have following problem:

go get github.com/Microsoft/azure-vhd-utils
    # github.com/Microsoft/azure-vhd-utils/upload
    go/src/github.com/Microsoft/azure-vhd-utils/upload/upload.go:89:34: cxt.BlobServiceClient.PutPage undefined (type storage.BlobStorageClient has no field or method PutPage)
    go/src/github.com/Microsoft/azure-vhd-utils/upload/upload.go:93:7: undefined: storage.PageWriteTypeUpdate
    # github.com/Microsoft/azure-vhd-utils/upload/metadata
    go/src/github.com/Microsoft/azure-vhd-utils/upload/metadata/metaData.go:95:32: blobClient.GetBlobMetadata undefined (type storage.BlobStorageClient has no field or method GetBlobMetadata)

My versions:

go version go1.11 linux/amd64
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.1 LTS
Release:	18.04
Codename:	bionic

Could you please help?
Regards, Dawid.

osType support?

For quite some time I've been using azure-vhd-utils to upload Linux images into Azure and create VM deployments against them. But recently I started running into issues. After banging my head against the problem for a while, I finally broke down and submitted a ticket, and was told my VM deployments were failing because the image was flagged in Azure as a "Windows" image.

"properties": {
"storageProfile": {
"osDisk": {
"osType": "Windows",
"osState": "Generalized",
"blobUri": "https://",
"storageAccountType": "Standard_LRS"
}
},
"provisioningState": 0
},
"location": "westus"
})

I'm scratching my head as this used to "just work".... has something changed such that an "osType" flag is necessary now? Sorry if the problem is a misunderstanding on my part...

Any plan for VHDX support

I know Azure does not support VHDX but do you plan to add support in this library in near feature?

Thanks.

diskstream produces extra 2MB chunk in fixed vhd

hello,

I've recently written some tooling using this repo's code for uploading VHD images to Azure while converting from Dynamic to Fixed disks on the fly. Unfortunately, there seems to be an extra 2 MB block inserted in the file, which appears to have caused Azure to reject it.

This is trivially reproducible by creating some Dynamic VHD file, e.g. qemu-img create -f vpc small.vhd 1G, and then using DiskStream.Read to read it and write an on-the-fly converted Fixed disk (e.g., using io.Copy). If the result is compared with a different conversion mechanism that works (qemu-img convert -f vpc -O raw small.vhd small.raw; qemu-img convert -f raw -O vpc -o subformat=fixed small.raw small-2.vhd), you will notice there is 2 MB of extra data in the file produced by azure-vhd-utils.

I haven't been able to find the offending code yet, but I'm reporting this to get more eyes.

I will add some more detail later.

Join forces with go-vhd?

This project exists: https://github.com/rubiojr/go-vhd

It provides some functionality not available in azure-vhd-utils, but one can imagine many cases where a user needs one tool (go-vhd) to create the VHD and then azure-vhd-utils to upload the image.

It might make sense to join forces, especially since MS probably has the internal knowledge to remove the "highly experimental" caveat of go-vhd.

`vhd inspect footer --path /path/to/disk.vhd` panics!

$ go get github.com/Microsoft/azure-vhd-utils-for-go

$ azure-vhd-utils-for-go inspect footer /nix/store/igjb0w7jpjk43sxwk53l21mz118ivghn-azure-image/disk.vhd

panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x18 pc=0x47ebbb]

goroutine 1 [running]:
text/template.(*Template).ExecuteTemplate(0x0, 0x7f4c059961e8, 0xc82003c010, 0x8859d0, 0xa, 0x7fae20, 0xc8200205a0, 0x0, 0x0)
    /nix/store/zfjn6abbd49iyiai3fnvdnxjkqjqwxxi-go-1.5.3/share/go/src/text/template/exec.go:117 +0x3b
main.showVhdFooter(0xc8200c8640)
    /home/cole/code/gopkgs/src/github.com/Microsoft/azure-vhd-utils-for-go/vhdInspectCmdHandler.go:164 +0x6d0
github.com/codegangsta/cli.Command.Run(0x879370, 0x6, 0x0, 0x0, 0x0, 0x0, 0x0, 0x8815b0, 0xf, 0x0, ...)
    /home/cole/code/gopkgs/src/github.com/codegangsta/cli/command.go:174 +0x1397
github.com/codegangsta/cli.(*App).RunAsSubcommand(0xc8200c83c0, 0xc8200c8280, 0x0, 0x0)
    /home/cole/code/gopkgs/src/github.com/codegangsta/cli/app.go:298 +0x11d4
github.com/codegangsta/cli.Command.startApp(0x879898, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0, 0x8ccf60, 0x1d, 0x0, ...)
    /home/cole/code/gopkgs/src/github.com/codegangsta/cli/command.go:249 +0x74f
github.com/codegangsta/cli.Command.Run(0x879898, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0, 0x8ccf60, 0x1d, 0x0, ...)
    /home/cole/code/gopkgs/src/github.com/codegangsta/cli/command.go:65 +0x79
github.com/codegangsta/cli.(*App).Run(0xc8200c8140, 0xc82000a0a0, 0x5, 0x5, 0x0, 0x0)
    /home/cole/code/gopkgs/src/github.com/codegangsta/cli/app.go:187 +0x1135
main.main()
    /home/cole/code/gopkgs/src/github.com/Microsoft/azure-vhd-utils-for-go/vhd.go:27 +0x2e0

goroutine 17 [syscall, locked to thread]:
runtime.goexit()
    /nix/store/zfjn6abbd49iyiai3fnvdnxjkqjqwxxi-go-1.5.3/share/go/src/runtime/asm_amd64.s:1721 +0x1

Error installing vhd utils

I tried to install it in the Windows 10 Linux subsystem.

root@IGUANA4:~# go get github.com/Microsoft/azure-vhd-utils-for-go

github.com/Azure/azure-sdk-for-go/storage

work/src/github.com/Azure/azure-sdk-for-go/storage/client.go:308:10: error: reference to undefined field or method ‘EscapedPath’
cr += u.EscapedPath()
^
work/src/github.com/Azure/azure-sdk-for-go/storage/client.go:327:10: error: reference to undefined field or method ‘EscapedPath’
cr += u.EscapedPath()
^

MD5 checksum computing taking an unexpectedly long time

I have created a VHD that is ~500GB but it's less than 1GB on disk currently.

$ du -hs disk.vhd
943M disk.vhd

$ azure-vhd-utils-for-go inspect footer --path disk.vhd
[...]
PhysicalSize      : 536879692800 bytes
VirtualSize       : 536879692800 bytes
[...]

This is what I'm seeing right now as it's computing the MD5 Checksum...

Computing MD5 Checksum..
Completed:  10% RemainingTime: 00h:14m:34s Throughput: 4197 MB/sec
536879MB / 4197MB/s = ~127 s

I'm not really sure what's going on with this. Is the throughput value wrong? Is my math wrong?

Server failed to authenticate request

When running the latest commit of azure-vhd-utils, I get this error when I try to use it:

2016/08/08 11:52:28 Using default container 'vhds'
2016/08/08 11:52:28 Using default parallelism [8*NumCPU] : 64
storage: service returned error: StatusCode=403, ErrorCode=AuthenticationFailed, ErrorMessage=Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:requestid
Time:2016-08-08T22:52:25.7571625Z, RequestId=requestid, QueryParameterName=, QueryParameterValue=

I run it with this

./azure-vhd-utils-for-go upload --localvhdpath Debian.vhd --stgaccountname mystorageaccountname --stgaccountkey thestorageaccountkey1formystorageaccountname --blobname debian.vhd

What am I doing wrong? I got all the information for the storage account name and key from my newly created storage account's access keys.

Provide download support

I seem to be uploading a fair amount faster than I'm able to download. I assume this is due to the parallelism I get from azure-vhd-utils on upload. Similar functionality for downloading would be great.

allow special characters in blob names

I am trying to upload a vhd to Azure but am getting an error based on the name I have chosen:

[carnott@shamir ~]$ vhd upload --localvhdpath rhel-72.vhd --stgaccountname <account_name> --stgaccountkey <account_key> --containername <container_name> --blobname rhel-72-{`uuidgen`}.vhd
2016/08/04 21:11:48 Using default parallelism [8*NumCPU] : 32
2016/08/04 21:11:48 storage: service returned without a response body (403 Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.)

And upon further testing the following blob name works: rhel-72-uuidgen.vhd
Furthermore, azure storage blob upload completes with no issues:

azure storage blob upload --container <container_name> --blob rhel-72-{`uuidgen`}.vhd --account-name <account_name> --account-key <account_key> --file rhel-72.vhd

upload stays on 99%

Hi,

My upload remains at 99%, while in the Azure portal the file is visible and marked as 'available'.
The state stays like this even after more than 30 minutes.

Version of Azure Go SDK in vendor

Could you please tell me the hash of the Azure Go SDK commit in the vendor folder?

I would like to use parts of this project in another; since the SDK changed, it no longer works with the latest version, and I would like to vendor the exact version this project uses.

Thanks!

too many arguments in call to client.SetBlobMetadata

go get github.com/Microsoft/azure-vhd-utils-for-go
# github.com/Microsoft/azure-vhd-utils-for-go
../gowork/src/github.com/Microsoft/azure-vhd-utils-for-go/vhdUploadCmdHandler.go:242: too many arguments in call to client.SetBlobMetadata

Possibly related to #20

Error building azure-vhd-utils: cannot use blobServiceClient

When attempting to go get -u github.com/Microsoft/azure-vhd-utils-for-go, I get the following error:

$HOME/go/src/github.com/Microsoft/azure-vhd-utils-for-go/vhdUploadCmdHandler.go:158: cannot use blobServiceClient (type "github.com/Microsoft/azure-vhd-utils-for-go/vendor/github.com/Azure/azure-sdk-for-go/storage".BlobStorageClient) as type "github.com/Microsoft/azure-vhd-utils/vendor/github.com/Azure/azure-sdk-for-go/storage".BlobStorageClient in field value
$HOME/go/src/github.com/Microsoft/azure-vhd-utils-for-go/vhdUploadCmdHandler.go:212: cannot use client (type "github.com/Microsoft/azure-vhd-utils-for-go/vendor/github.com/Azure/azure-sdk-for-go/storage".BlobStorageClient) as type "github.com/Microsoft/azure-vhd-utils/vendor/github.com/Azure/azure-sdk-for-go/storage".BlobStorageClient in argument to metadata.NewMetadataFromBlob

When I attempted this, the latest commit was 43293b8

Quick question: conversion from VMDK to VHD?

It looks like https://github.com/Microsoft/azure-vhd-utils/blob/master/vhdcore/writer/vhdWriter.go has the ability to write raw disk blocks into the VHD file.

Assuming that I read a VMDK block by block using the VMware VDDK, would I be able to write block by block into an equally sized fixed VHD file via the API in this library?

If I upload that manually written VHD to Azure, would it be bootable as the boot device?
Any caveats to manually creating a bootable VHD image?

Thanks in advance

Need support for Azure Government

Please add support for different endpoints for Azure blob storage; allowing a connection string or similar to be passed in would accomplish this.

Wrong check in (*Cookie).Equal

The following code compares c.isHeader with itself:

return c.isHeader == c.isHeader && bytes.Equal(c.Data, other.Data)
/src/github.com/Microsoft/azure-vhd-utils-for-go/vhdcore/vhdCookie.go:75:9: identical expressions on the left and right side of the '==' operator

Display uploaded size of VHD

(This a very minor possible suggestion.)

I've created a 30GB VHD. Based on the estimated upload duration, I feel safe that it's not uploading all 30GB, and from guessing at Empty ranges: 14838/15360.

Suggestion is to change the output from this:

2016/03/09 16:29:14 Using default container 'vhds'
2016/03/09 16:29:14 Using default parallelism [8*NumCPU] : 32
Computing MD5 Checksum..
 Completed:  99% RemainingTime: 00h:00m:00s Throughput: 3827 MB/sec
Detecting empty ranges..
 Empty ranges : 14838/15360
Uploading the VHD..
 Completed:  22% RemainingTime: 00h:12m:23s Throughput: 8 MB/sec   | 

to include information about "effective upload size" after it "Detects empty ranges..", or having a line "Uploaded X MB, Remaining X MB" along with the throughput. That way I can be immediately sure that I'm only uploading non-zero bytes. Something like:

2016/03/09 16:29:14 Using default container 'vhds'
2016/03/09 16:29:14 Using default parallelism [8*NumCPU] : 32
Computing MD5 Checksum..
 Completed:  99% RemainingTime: 00h:00m:00s Throughput: 3827 MB/sec
Detecting empty ranges..
 Empty ranges : 14838/15360
 Effective upload size: 1044 MB (from 30720 MB originally)
Uploading the VHD..
 Completed:  22% (229.68 MB) - RemainingTime: 00h:12m:23s (814.32MB) - Throughput: 8 MB/sec   | 

This tool is very useful, by the way. Much better throughput than I've had with the xplat-cli also, not to mention being able to upload large VHDs without using up all of my bandwidth.

[Upload Throughput] MBytes used when referring to Mbits

Currently the vhd utility displays network throughput in MBytes; however, the number displayed is in Mbits:

[user@localhost ~]$ vhd upload --localvhdpath <file.vhd> --stgaccountkey <key> --containername <container> --blobname <name>
2016/06/24 00:00:00 Using default parallelism [8*NumCPU] : 8
Computing MD5 Checksum..
 Completed:  99% RemainingTime: 00h:00m:00s Throughput: 1000 MB/sec
Uploading the VHD..
 Completed: 100% [   1000.00 MB] RemainingTime: 00h:00m:00s Throughput: 0 MB/sec     - 
Upload completed

Change to this:

[user@localhost ~]$ vhd upload --localvhdpath <file.vhd> --stgaccountkey <key> --containername <container> --blobname <name>
2016/06/24 00:00:00 Using default parallelism [8*NumCPU] : 8
Computing MD5 Checksum..
 Completed:  99% RemainingTime: 00h:00m:00s Throughput: 1000 MB/sec
Uploading the VHD..
 Completed: 100% [   1000.00 MB] RemainingTime: 00h:00m:00s Throughput: 0 Mb/sec     - 
Upload completed

The same may need to be done for MD5 throughput:

[user@localhost ~]$ vhd upload --localvhdpath <file.vhd> --stgaccountkey <key> --containername <container> --blobname <name>
2016/06/24 00:00:00 Using default parallelism [8*NumCPU] : 8
Computing MD5 Checksum..
 Completed:  99% RemainingTime: 00h:00m:00s Throughput: 1000 Mb/sec
Uploading the VHD..
 Completed: 100% [   1000.00 MB] RemainingTime: 00h:00m:00s Throughput: 0 Mb/sec     - 
Upload completed

create release

Hi,

I had some problems building and found the solution in the issues, but I think it would be simpler for Microsoft to upload releases directly.

Thanks,
Hk

VHDX support

Hey,
Does this tool support VHDX too? I'm looking for a tool that converts to a fixed disk on the fly.

Thanks

Detect lack of TTY and output summary lines instead of rewriting line

I'm using this inside Jenkins jobs. When it calculates the checksum and does the upload, it replaces the progress line in place. This doesn't really work without a TTY, though, and it winds up spewing a bunch of status lines, all on one line, when the step is finished.

For example, the timestamps in bold are when Jenkins gets the line of output. A delay, and then all of the progress lines at once.

It might be preferable if you could detect the lack of TTY and just output new lines instead of replacing the existing line.

This would just be a nicety though, it doesn't impact the usability or functionality at all. Thanks again for this tool, too!

Use the cancellation feature of load-balancer & workers

The load-balancer (LB) and workers support canceling the upload operation; for this they listen on the tearDownChan channel.

We have a goroutine in vhdUploadCmdHandler that listens for upload failure. Update this routine to send a cancellation signal to the LB and workers on upload failure.

add option to specify upload endpoint

Currently it uses the default core.windows.net [1]. But I need to upload to the German Azure Cloud, which has a different upload endpoint (core.cloudapi.de).

Can you please add an option to manually set the endpoint in such a case?

blobname should not append ".vhd"

It looks like the tool appends .vhd to the user-specified --blobname, which makes scripting things a bit more complicated if you need very reliable user-specified filenames.
