compute-image-packages's Issues

Bundling fails on rsync on (non-GCE) CentOS 6.5

I'm trying to bundle up a CentOS 6.5 instance created on Windows Azure to move it to GCE.

The bundle operation fails with

19:14:34 platform exclude list: (/etc/ssh/.host_key_regenerated, 0:0:0) (/dev, 0:1:0) (/proc, 0:1:0) (/run, 0:1:1) (/selinux, 0:0:0) (/tmp, 0:1:0) (/sys, 0:1:0) (/var/lib/google/per-instance, 0:1:0) (/var/lock, 0:1:1) (/var/log, 0:1:1) (/var/run, 0:1:1)
19:14:34 ignoring mounts /proc /sys /dev/pts /dev/shm /proc/sys/fs/binfmt_misc /mnt/resource
19:14:34 overwrite list = 
19:14:34 Initializing disk file
19:14:43 .
19:14:45 Making filesystem
19:14:58 .
19:15:13 .
19:15:21 Copying contents
19:15:21 Error while running ['rsync', '--times', '--perms', '--owner', '--group', '--links', '--devices', '--acls', '--sparse', '--hard-links', '--recursive', '--xattrs', u'--exclude-from=/tmp/tmpzSPO9Z', '/', u'/tmp/tmpaBbUgv'] return_code = 11
19:15:21 stdout=
19:15:21 stderr=rsync: failed to open exclude file /tmp/tmpzSPO9Z: Permission denied (13)
19:15:21 rsync error: error in file IO (code 11) at exclude.c(1062) [client=3.0.6]

After searching forums and doing some research, I found the cause: SELinux forbids access to the temp files.

Disabling SELinux enforcement with

echo 0 > /selinux/enforce

fixed the issue.

The problem appeared on CentOS on Windows Azure, but it can appear on GCE too if SELinux is enabled and configured to prevent reading temp files.
Adding a note to the README, or turning off SELinux enforcement before running rsync, would solve the issue.
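A hedged sketch of how the bundler could detect this up front. The helper names are illustrative; the enforce flag lives at /selinux/enforce on older systems and /sys/fs/selinux/enforce on newer ones:

```python
def selinux_enforcing(enforce_contents):
    """Interpret the contents of the SELinux enforce file ("1" = enforcing).

    The caller reads the file itself (commonly /selinux/enforce on older
    systems, /sys/fs/selinux/enforce on newer ones)."""
    return enforce_contents.strip() == "1"


def bundle_precheck(read_enforce):
    """Return a warning string if SELinux enforcement would break rsync's
    access to the temporary exclude file, else None. read_enforce is a
    callable returning the enforce file contents, or None when SELinux is
    absent; injecting it keeps the check testable."""
    contents = read_enforce()
    if contents is not None and selinux_enforcing(contents):
        return ("SELinux is enforcing; rsync may be denied access to "
                "temp files. Consider 'echo 0 > /selinux/enforce'.")
    return None
```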

Regards,
Vladimir

Ubuntu 16.04 Image starts with wrong hostname in rsyslog managed logs

This has been reported to the GCE Bug Tracker: https://code.google.com/p/google-compute-engine/issues/detail?id=354
I was asked to report this here:

What steps will reproduce the problem?

  1. Start a VM with the Image ubuntu-1604-xenial-v20160610 and give it a random name
  2. Log in
  3. Create a testlog entry with logger testlog-entry
  4. Check the syslog file with tail -n20 /var/log/syslog

What is the expected output? What do you see instead?
I'd expect to see the instance hostname in the log.
I see ubuntu instead, which might have been the hostname during the image build.

What version of the product are you using? On what operating system?
ubuntu-1604-xenial-v20160610

Please provide any additional information below.
If I repack the image with packer and start an instance with the new packer image, I see the temporary packer instance ID, not the expected hostname set at instance creation.

After restarting the rsyslog daemon and creating another testlog-entry, the correct hostname is shown in the logs for new messages.

Did the same test with ubuntu-1404-trusty-v20160610.
The 14.04 Image does not have this problem.

address_manager.py support for systems without iproute2

Currently, these scripts only work on Linux. In particular, porting address_manager.py would allow FreeBSD and potentially other UNIX systems to use the load-balancing side of GCE. I have been attempting to translate the "ip route add to local $IP/32 dev eth0 proto 66" command, but so far I've had no luck. It looks like "-proto2" can replace "proto 66".

I've tried:

  • route add -host $LB_IP -iface vtnet0 -nostatic -proto2
  • route add -host $LB_IP $HOST_IP -nostatic -proto2
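The two forms can be sketched side by side; the FreeBSD argv below is the untested guess from this report, not a verified equivalent:

```python
def local_route_cmd(ip, platform="linux", interface="eth0"):
    """Build the argv for installing a load-balanced IP as a local route.

    The Linux form mirrors the "ip route add" command quoted above; the
    FreeBSD form is the untested translation attempted in this issue."""
    if platform == "linux":
        return ["ip", "route", "add", "to", "local", ip + "/32",
                "dev", interface, "proto", "66"]
    if platform == "freebsd":
        return ["route", "add", "-host", ip, "-iface", interface,
                "-nostatic", "-proto2"]
    raise ValueError("unsupported platform: " + platform)
```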

cut a new release?

Is it possible to cut a 1.1.2 release? I need 6b9c8b8 and it would be great not to have to carry a patch.

Thanks,

Brandon

shutil.move across devices cause wrong authorized_keys permissions

Hello,
We have a use case where our customer's /home directory is on a different persistent disk than the root disk and is mounted at startup. google-compute-daemon/manage_accounts.py updates the ssh keys on startup, but the file permissions indicate they are owned by root even for non-root users, which causes ssh to reject logins.
The problem seems to be in accounts.py/utils.py, which uses shutil.move; shutil.move does a copy if the move is between devices, causing the ownership to become root.root.
Perhaps a chown after the move could be added.
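A minimal sketch of the suggested fix, assuming a hypothetical helper wrapping shutil.move:

```python
import os
import shutil


def move_preserving_owner(src, dst):
    """Move src to dst, then restore the original mode and (when running
    as root) the original owner/group. shutil.move falls back to
    copy-and-delete across devices, and the copy is owned by the caller,
    which is what breaks authorized_keys here. Hypothetical helper."""
    st = os.stat(src)
    shutil.move(src, dst)
    os.chmod(dst, st.st_mode & 0o7777)
    if hasattr(os, "geteuid") and os.geteuid() == 0:
        os.chown(dst, st.st_uid, st.st_gid)  # chown requires privilege
```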

Thanks

Can't boot from image generated by gcimagebundle

I've been discussing this in private with @jkaplowitz and he suggested I open an issue here. I've tried the following steps using both the projects/debian-cloud/global/images/debian-7-wheezy-v20140415 and projects/debian-cloud/global/images/backports-debian-7-wheezy-v20140415 images:

  • Start a new instance;
  • Install some packages using apt-get and tasksel install standard;
  • Generate a new image from the current instance using gcimagebundle -d /dev/sda -o /tmp;
  • Upload the generated image to Cloud Storage;
  • Import the image using gcutil addimage ...;
  • Start another instance using the image imported on the previous step.

The problem is that this new instance never finishes its boot process. This is what I've found looking at gcutil getserialportoutput:

May 28 02:26:00 localhost virtionet-irq-affinity: Could not set channels for eth0 to
curl for instance-id returned 7
curl for instance-id returned 7
May 28 02:26:01 localhost google: onboot initializing
curl for instance-id returned 7
curl for instance-id returned 7
curl for instance-id returned 7
curl for instance-id returned 7
curl for instance-id returned 7
curl for instance-id returned 7
curl for instance-id returned 7
Traceback (most recent call last):
  File "/usr/share/google/google_daemon/manage_accounts.py", line 87, in <module>
    options.daemon, options.debug)
  File "/usr/share/google/google_daemon/manage_accounts.py", line 62, in Main
    accounts_manager.Main()
  File "/usr/share/google/google_daemon/accounts_manager.py", line 46, in Main
    self.RegenerateKeysAndCreateAccounts()
  File "/usr/share/google/google_daemon/accounts_manager.py", line 100, in RegenerateKeysAndCreateAccounts
    self.lock_file.RunExclusively(self.lock_fname, self.CreateAccounts)
  File "/usr/share/google/google_daemon/utils.py", line 156, in RunExclusively
    method()
  File "/usr/share/google/google_daemon/accounts_manager.py", line 104, in CreateAccounts
    desired_accounts = self.desired_accounts.GetDesiredAccounts()
  File "/usr/share/google/google_daemon/desired_accounts.py", line 124, in GetDesiredAccounts
    etag=self.attributes_etag)
  File "/usr/share/google/google_daemon/desired_accounts.py", line 97, in _GetAttribute
    attribute_url, etag=etag, timeout_secs=timeout_secs)
  File "/usr/share/google/google_daemon/desired_accounts.py", line 76, in _MakeHangingGetRequest
    return self.urllib2.urlopen(request)
  File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 401, in open
    response = self._open(req, data)
  File "/usr/lib/python2.7/urllib2.py", line 419, in _open
    '_open', req)
  File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 1211, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.7/urllib2.py", line 1181, in do_open
    raise URLError(err)
urllib2.URLError: <urlopen error [Errno 101] Network is unreachable>

The whole log files can be found here: https://gist.github.com/anonymous/9ed5232d23266dd7a096

Git submodules and installation

Hello,

I'd like to ask you to use submodules for each project. This way, installation becomes as easy as:

git clone https://github.com/GoogleCloudPlatform/google-daemon.git
cd google-daemon
GIT_WORK_TREE=/ git checkout -f

This last command creates the files at /, matching the repo's directory structure. The only thing left to fix is ownership: Git preserves the executable permission but not the user and group.

Besides, you won't change your directory structure if you use submodules, while keeping your projects separate and being able to choose which releases to bundle in the compute-image-packages repo.

Unit Test Failure

ERROR: testDiskBundle (gcimagebundlelib.tests.block_disk_test.FsRawDiskTest)

Tests bundle command when a disk is specified.

Traceback (most recent call last):
  File "/home/jeremyedwards/tt/compute-image-packages/image-bundle/gcimagebundlelib/tests/block_disk_test.py", line 107, in testDiskBundle
    utils.RunCommand(['uuidgen']).strip())
  File "/home/jeremyedwards/tt/compute-image-packages/image-bundle/gcimagebundlelib/tests/block_disk_test.py", line 142, in TestDiskBundleHelper
    (_, _) = self._bundle.Bundleup()
  File "/home/jeremyedwards/tt/compute-image-packages/image-bundle/gcimagebundlelib/block_disk.py", line 134, in Bundleup
    partition_start, uuid = self._InitializeDiskFileFromDevice(disk_file_path)
  File "/home/jeremyedwards/tt/compute-image-packages/image-bundle/gcimagebundlelib/block_disk.py", line 96, in _InitializeDiskFileFromDevice
    'with a single partition are supported.' % self._disk)
  File "/home/jeremyedwards/tt/compute-image-packages/image-bundle/gcimagebundlelib/utils.py", line 67, in exit
    RunCommand(kpartx_cmd)
  File "/home/jeremyedwards/tt/compute-image-packages/image-bundle/gcimagebundlelib/utils.py", line 352, in RunCommand
    (cmd_output[0], cmd_output[1]))
CalledProcessError: Command '['kpartx', '-d', '/tmp/disk.raw']' returned non-zero exit status 1


Ran 17 tests in 40.129s

ssh keys regenerated when none are present

The regenerate-host-keys script regenerates ssh keys in all cases, even when none are already present or baked into the image, logging something like this:

regenerate-host-keys: Regenerating SSH Host Keys for: (previously ).

Perhaps it could check that there was no previous IP/ssh key and skip regeneration in that case? I could work on this and send a pull request, but wanted to get feedback on the idea first.
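The proposed check could look something like this (illustrative names only; the real regenerate-host-keys script is shell):

```python
import glob
import os


def should_regenerate_keys(ssh_dir, previous_ip, current_ip):
    """Only regenerate host keys when keys were actually baked into the
    image and the recorded instance IP has changed. Sketch of the check
    proposed in this issue."""
    existing = glob.glob(os.path.join(ssh_dir, "ssh_host_*_key"))
    if not existing or not previous_ip:
        return False  # nothing baked in, or no previous record: skip
    return previous_ip != current_ip
```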

Spamming warning message

I'm currently being spammed with the error message:

Apr 11 20:00:51 [secret-node-name] accounts-from-metadata: WARNING Not creating account for user [user].  Usernames must comprise characters [A-Za-z0-9._-] and not start with '-'.

What's up with that?!?

Upgrading Debian Backports Wheezy to Jessie stops hostname update

I've upgraded a Debian Backports Compute Engine instance to Jessie and now the hostname doesn't seem to update. I've tried updating to the most recent release using git and the commands in the README. I'm not seeing any errors, but the hostname remains 'localhost'. If this isn't a bug, any advice would be great. Thanks

PEP8 compliance

  • yapf will get us most of the way.
  • CamelCase methods will need to be renamed to snake_case, optionally with aliases for backwards compatibility.
  • automate flake8 in travis.

/cc @illfelder

Sudoers should be managed in /etc/sudoers.d

According to sudo's recommendation, you should add a file (or individual per-user files) in the /etc/sudoers.d directory.

Also, currently, when I remove a user, its entry lingers in the /etc/sudoers file. It should be removed, IMHO.

If the first recommendation is implemented by using individual files, the second should be easy to do.

Sorry for the two issues in one ;)
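For illustration, managing per-user drop-ins might look like this (hypothetical function names; sudo expects sudoers.d files to be mode 0440 and skips filenames containing '.' or '~'):

```python
import os


def add_sudoer(user, sudoers_dir="/etc/sudoers.d"):
    """Create a per-user drop-in file instead of appending to /etc/sudoers.
    sudoers_dir is parameterised so the logic can be exercised outside /etc."""
    path = os.path.join(sudoers_dir, "google-sudoer-" + user)
    with open(path, "w") as f:
        f.write("%s ALL=(ALL) NOPASSWD: ALL\n" % user)
    os.chmod(path, 0o440)  # sudo refuses sudoers.d files with looser modes
    return path


def remove_sudoer(user, sudoers_dir="/etc/sudoers.d"):
    """Removing a user's grant then reduces to deleting one file."""
    path = os.path.join(sudoers_dir, "google-sudoer-" + user)
    if os.path.exists(path):
        os.remove(path)
```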

NVMe image is flawed

Hi there, sorry if this is the wrong place for this.

I notice that the special Debian NVMe image you have prepared has a custom kernel deb. It is versioned such that future backports from Debian will erase the NVMe patched kernel unless you pay attention to it. The best way to avoid this is to not use the same package name as the upstream one. Additionally, it would be nice to have the original deb available in a Debian repository somewhere, as well as the nvme patch itself so we can inspect its contents. I had to repack it myself from a different machine which had the same kernel because I mistakenly upgraded it.

If you did all of this, you probably wouldn't have to pin the package in /etc/apt/preferences.d.

I'm happy to help with this but am not sure if the source for this is tracked in github at all.

Thanks!

On new debian 8 installs using google_compute_engine virtualenv installed boto is broken

Not exactly sure what's going on here but we are now seeing:

  File "/home/lex/local/lib/python2.7/site-packages/boto/__init__.py", line 1216, in <module>
    boto.plugin.load_plugins(config)
  File "/home/lex/local/lib/python2.7/site-packages/boto/plugin.py", line 93, in load_plugins
    _import_module(file)
  File "/home/lex/local/lib/python2.7/site-packages/boto/plugin.py", line 75, in _import_module
    return imp.load_module(name, file, filename, data)
  File "/usr/lib/python2.7/dist-packages/google_compute_engine/boto/boto_config.py", line 29, in <module>
    from google_compute_engine import config_manager
ImportError: No module named google_compute_engine

Note that this is a virtualenv-installed copy of boto which ends up failing with an error in google_compute_engine's vendoring(?) of boto.

Strange keys keep appearing

Hey guys,

I'm using: 2a5ae74

These strange keys (as in, not mine nor in my console) keep on appearing.

Added by Google

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDox8BQXfFu8e+hZFFXHVcMMTWpxcNeQYAFpwNUuhAqit5/6pDoNbnkuZBZkHgGiUJlHwFGpkEbGUyhbKXi84iw1CUTT0OmLZmmBQrmAxMTnMld5PCq+BzF6x9F1NijJk/ZDJkXneyVZmsSKab2ppivZI3YrSF0rr1GMFAqPNqjfhA2JUKbdEIHDmnWe/QscgmQjTW6oOluhxShO/VF2Rf+W3lC5HqQW2q2MUET84e40I/qhMrsqysJIcY7AiGUt6Z/WeHfFZSpHN3MqCnNAUrb+PXq2WA1JiRu/ZH4mCbxyVTTl5q4r4mAaViCSBDhsWBdR6jUajfAS6cWdIryqi2x [email protected]

Added by Google

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/fMYxvUbwG25HikrXSZ4rp+66ClvLnMkuWm3JRV0LNsCOR72rWnNJe+1/5dVFqfTlpZ6Tlk4RXr0Img4GF0/dkqEtRp8dB0dGbSp504dlF48oykFLieFGxLj9bpg+vCD966hNfAkmFEZRmOXvFoSMHNPZPqM/SW01Tblo7znjy5hKAiZ/GSelyLucvsiab84vROE36qgJ2v4L62jIxiayzzX/qlrRxjQxF6QHLDxpydMZewc4KeUVW2DUphnADobMpt9nPqoaacU4307EAewK/cMdSyTrTfcC9vlSWIGoN6GBZzD4jbZ5MsxOU/7dAeLD8nJiE+h12kEN5NwhyE+1 [email protected]

desired_accounts.py logging.info is too chatty in syslog

The call logging.info('Google SSH key schema identifier found.') results in two entries per minute in syslog, which is too spammy and makes remote monitoring more complex.

» 12:15:43.273 rubystackbox-xxx accounts-from-metadata: INFO Google SSH key schema identifier found.
» 12:15:43.400 rubystackbox-xxx accounts-from-metadata: INFO Google SSH key schema identifier found.
» 12:16:43.323 rubystackbox-xxx accounts-from-metadata: INFO Google SSH key schema identifier found.
» 12:16:43.450 rubystackbox-xxx accounts-from-metadata: INFO Google SSH key schema identifier found.
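One possible shape for a fix, sketched with the stdlib logging module (demoting the call to logging.debug would be the simpler alternative):

```python
import logging


class LogOnce(logging.Filter):
    """Let each distinct message through once, so the per-minute metadata
    poll can keep its INFO record without flooding syslog. Illustrative
    sketch, not the project's actual fix."""

    def __init__(self):
        logging.Filter.__init__(self)
        self.seen = set()

    def filter(self, record):
        message = record.getMessage()
        if message in self.seen:
            return False  # drop the repeat
        self.seen.add(message)
        return True
```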

CoreOS support: build a container image

In order to make this work under CoreOS we should build a container image. The container image needs to execute under rkt so that the packages can live under systemd monitoring.

Essentially the first pass will be a Dockerfile or some other container image builder. This will probably take a similar distribution design to coreos/bugs#1303

@crawford will take a first look at this.

google-address-manager is broken on Ubuntu Xenial 16.04

address-manager fails on the current image of 16.04 server because the network interfaces have been renamed:

 google-address-manager[7151]: WARNING Non-zero exit status from: "/sbin/ip route ls table local type local dev eth0 scope host proto 66"
                    STDOUT:

             STDERR:
          Cannot find device "eth0"

I am getting such an error every 5 seconds. See also https://bugs.launchpad.net/juju-vagrant-images/+bug/1547110 or geerlingguy/packer-boxes#1 for (some) of the other manifestations of this change.
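A naive sketch of detecting the interface instead of hardcoding eth0 (assumption: the caller passes os.listdir('/sys/class/net'); with systemd's predictable naming the boot NIC shows up as e.g. ens4 or enp0s3):

```python
def primary_interface(interfaces):
    """Pick the NIC the address manager should operate on: first name that
    is not loopback or an obviously virtual device. Heuristic only; a
    robust fix would read the default route's device."""
    skip_prefixes = ("lo", "docker", "veth", "virbr")
    for name in sorted(interfaces):
        if not name.startswith(skip_prefixes):
            return name
    return None
```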

BLKGETSIZE ioctl failed on

I was following this tutorial on how to create your own images:

https://cloud.google.com/compute/docs/tutorials/building-images

Then I used your tool to create an image:

gcimagebundle -d /media/iso -o /home/nedstark

I got an error, so I reran the command with slightly different args:

gcimagebundle -d /media/iso/ -r / -o /tmp --loglevel=DEBUG  --log_file=/tmp/image_bundle.log

The output is here: http://pastebin.com/yJ9Ps3ZE. I'm using CentOS 6.7.

subtle bug in gcimagebuilder/utils.py xfs detection

Currently, line 97: if self.type is 'xfs':
should read if self.type == 'xfs':
and line 192: if fs_type is 'xfs':
should read: if fs_type == 'xfs':

The is operator explicitly compares object identity rather than value equality. Hat tip to @sjagoe for bug solution.
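A minimal demonstration of why the identity check misfires: CPython interns identifier-like string literals, so `is` can appear to work in quick experiments and then fail when the value arrives from a file or subprocess output:

```python
# A runtime-built string equal to "xfs" but a distinct object (CPython
# interns identifier-like literals, not strings assembled at runtime).
literal = "xfs"
fs_type = "".join(["x", "f", "s"])

assert fs_type == literal        # value equality: the check we want
assert fs_type is not literal    # identity: a different object entirely
```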

Could not determine host platform try -s option

And of course if you try to throw in the mysterious '-s' flag it says:

gcimagebundle: error: no such option: -s

And indeed it's not mentioned in the help/usage. It also says it is starting logging, and indeed it creates a temp directory and a log file, but the log file is empty.

BtrFS support

Guys, ext4 is boring. Could you support BtrFS? And, while at it, subvolumes.

Default sudoers doesn't allow run as any user

The template for adding new entries in accounts.py is:

sudoer_lines.append('%s ALL=NOPASSWD: ALL' % user)

but should probably be:

sudoer_lines.append('%s ALL=(ALL) NOPASSWD: ALL' % user)

to allow users to "sudo -u [username]" without being prompted for a password. This functionality is desirable because "sudo -u" is the default command that ansible uses when switching to a new user during provisioning. Also, the lack of this line in sudoers doesn't seem to add anything security-wise since you can still successfully execute "sudo su - [username]" without being prompted for a password.
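The difference between the two templates, sketched as a hypothetical helper:

```python
def sudoer_line(user, runas_any=True):
    """Build a user's sudoers entry. The (ALL) runas spec is what lets
    'sudo -u otheruser cmd' run without a password prompt, as tools like
    ansible expect. Sketch of the change proposed above."""
    if runas_any:
        return "%s ALL=(ALL) NOPASSWD: ALL" % user
    return "%s ALL=NOPASSWD: ALL" % user
```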

gcimagebundle crash

Traceback (most recent call last):
  File "./gcimagebundle", line 28, in <module>
    main()
  File "./gcimagebundle", line 24, in main
    imagebundle.main()
  File "/home/jeremyedwards/test/gcimagebundle/gcimagebundlelib/imagebundle.py", line 159, in main
    options.root_directory).GetPlatform()
  File "/home/jeremyedwards/test/gcimagebundle/gcimagebundlelib/platform_factory.py", line 55, in GetPlatform
    if self.__registry[name].IsThisPlatform(self.__root):
  File "/home/jeremyedwards/test/gcimagebundle/gcimagebundlelib/sle.py", line 26, in IsThisPlatform
    if re.match(r'SUSE Linux Enterprise', suse.SUSE().distribution):
  File "/usr/lib64/python2.6/re.py", line 137, in match
    return _compile(pattern, flags).match(string)
TypeError: expected string or buffer

manage_addresses.py is not running when upstart is activated

When activating upstart instead of sysvinit, it seems the script /usr/share/google/google_daemon/manage_addresses.py does not run correctly, so the IP address for the LB wasn't updated.

image : #1 SMP Debian 3.16.3-2~bpo70+1 (2014-09-21) x86_64 GNU/Linux

Packages broken in Ubuntu 14.04 from version 1.1.4 onwards

With an Ubuntu 14.04 server image configured according to Google recommendations (/etc/ssh/sshd_not_to_be_run created, ssh keys deleted, etc.), using debs from versions 1.1.4-1.1.6 results in sshd not being started. Without sshd being started, I have no idea how to debug. If sshd is configured to start, no other problems are detected.

The script used to configure the server is this https://github.com/j-manu/gce-ubuntu-server-14.04/blob/7c137ac6d29de5b481e9236b21294a8a6e6181e1/provision.sh

CoreOS google-clock-sync-manager.service failure

Starting with CoreOS 1068.8.0 we are seeing a frequent failure of the systemd google-clock-sync-manager service at instance creation and reboot.

-- Logs begin at Tue 2016-07-19 14:39:17 UTC, end at Wed 2016-07-20 15:52:56 UTC. --
Jul 19 14:39:27 localhost systemd[1]: Started Google Compute Engine Clock Sync Daemon.
Jul 19 14:39:27 k8s-node-backend01.c.geofeedia-qa1.internal google-clock-sync[1102]: INFO Starting GCE clock sync
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: Traceback (most recent call last):
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:   File "/usr/share/oem/google-compute-daemon/manage_clock_sync.py", line 85, in <module>
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:     Main()
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:   File "/usr/share/oem/google-compute-daemon/manage_clock_sync.py", line 81, in Main
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:     lock_file.RunExclusively(lock_fname, HandleClockDriftToken(watcher, OnChange))
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:   File "/usr/share/oem/google-compute-daemon/manage_clock_sync.py", line 51, in HandleClockDriftToken
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:     Handler, initial_value='')
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:   File "/usr/share/oem/google-compute-daemon/metadata_watcher.py", line 74, in WatchMetadataForever
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:     response = self.urllib2.urlopen(req)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:   File "/usr/share/oem/python/lib64/python2.7/urllib2.py", line 127, in urlopen
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:     return _opener.open(url, data, timeout)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:   File "/usr/share/oem/python/lib64/python2.7/urllib2.py", line 404, in open
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:     response = self._open(req, data)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:   File "/usr/share/oem/python/lib64/python2.7/urllib2.py", line 422, in _open
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:     '_open', req)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:   File "/usr/share/oem/python/lib64/python2.7/urllib2.py", line 382, in _call_chain
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:     result = func(*args)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:   File "/usr/share/oem/python/lib64/python2.7/urllib2.py", line 1214, in http_open
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:     return self.do_open(httplib.HTTPConnection, req)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:   File "/usr/share/oem/python/lib64/python2.7/urllib2.py", line 1187, in do_open
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:     r = h.getresponse(buffering=True)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:   File "/usr/share/oem/python/lib64/python2.7/httplib.py", line 1045, in getresponse
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:     response.begin()
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:   File "/usr/share/oem/python/lib64/python2.7/httplib.py", line 409, in begin
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:     version, status, reason = self._read_status()
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:   File "/usr/share/oem/python/lib64/python2.7/httplib.py", line 365, in _read_status
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:     line = self.fp.readline(_MAXLINE + 1)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:   File "/usr/share/oem/python/lib64/python2.7/socket.py", line 476, in readline
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]:     data = self._sock.recv(self._rbufsize)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: socket.error: [Errno 104] Connection reset by peer
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal systemd[1]: google-clock-sync-manager.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal systemd[1]: google-clock-sync-manager.service: Unit entered failed state.
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal systemd[1]: google-clock-sync-manager.service: Failed with result 'exit-code'.

Remove need to specify full disk in

Reading the docs,
https://developers.google.com/compute/docs/disks#formatting
I assumed that <disk-name> meant the short name "media1" in my case.

But this failed:

/usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" media1 /mnt/media1

Only when I put the full path to a device node in /dev did it work:

/usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/disk/by-id/google-media1 /mnt/media1

Please be more clear with the docs.
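The mapping the docs leave implicit can be sketched as follows (hypothetical helper, assuming the standard google-<name> by-id symlink shown above):

```python
def disk_device_path(name):
    """Expand a short GCE disk name to the device path that
    safe_format_and_mount actually accepts: short names map to
    /dev/disk/by-id/google-<name>, while full paths pass through."""
    if name.startswith("/dev/"):
        return name
    return "/dev/disk/by-id/google-" + name
```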

Startup scripts broken for SUSE images

With the changes introduced in 502f06e, the startup scripts no longer work on SLES or OpenSUSE images since there is no such file, /lib/init/vars.sh (e.g. https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google-startup-scripts/etc/init.d/google-startup-scripts#L27).

Currently, SLES / OpenSUSE are still using the 1.1.9 release so startup scripts are mostly working. AFAICT, the only missing functionality is shutdown-script support. However, the next time that the images are rolled with an updated set of packages, none of the startup scripts will work (well, perhaps SLES12 will).

/cc @jeremyje @illfelder

console logging in google-startup-scripts

Hi,

The rsyslog config in google-startup-scripts/etc/rsyslog.d/90-google.conf configures kernel and other messages to be dumped to the console, which makes the console virtually unusable. Is there a reason for this?

Regards
Joe

SyncAddresses exception: Cant check local addresses

Is this the host project for google-address-manager?

I am completely clueless as to what this script is supposed to be doing, and why it's failing on CoreOS. Perhaps this is related to coreos/bugs#245?

Feb 23 20:23:37 core1.c.deis-dialoghq.internal google-address-manager:  WARNING Non-zero exit status from: "/usr/bin/ip route ls table local type local dev ens4v1 scope host proto 66"
Feb 23 20:23:37 core1.c.deis-dialoghq.internal google-address-manager:                                                                              STDOUT:
Feb 23 20:23:37 core1.c.deis-dialoghq.internal google-address-manager:                                                                              STDERR:
Feb 23 20:23:37 core1.c.deis-dialoghq.internal google-address-manager:                                                                              Cannot find device "ens4v1"
Feb 23 20:23:37 core1.c.deis-dialoghq.internal google-address-manager:  ERROR SyncAddresses exception: Cant check local addresses

accounts_manager is overwriting existing linux account

We are running some legacy software on Google Compute that creates a well-known linux user account with a home directory of /export/home/<user> and sets some specific sudo rules.
If we use gcloud to SSH into a box as that same user, the Google daemon overwrites the user's passwd entry and breaks the sudo rules.
It would be helpful if either:

  • the google daemon detected the existing user settings and didn't trash them
  • we are able to blacklist certain usernames that we don't want the google daemon to manage
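Both escape hatches could reduce to one guard, sketched with hypothetical names:

```python
def should_manage_account(username, local_users, blacklist):
    """Decide whether the accounts daemon should (re)write a user's passwd
    entry and sudo rules. Covers both options requested above: skip
    accounts that already exist locally, and skip operator-blacklisted
    names. Illustrative sketch only."""
    return username not in blacklist and username not in local_users
```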

Centos/Python 2.6 Unit Test Failure

Traceback (most recent call last):
  File "/home/jeremyedwards/master2/gcimagebundle/gcimagebundlelib/tests/utils_test.py", line 37, in testRunCommandThatFails
    with self.assertRaises(subprocess.CalledProcessError):
TypeError: failUnlessRaises() takes at least 3 arguments (2 given)
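The failure is that assertRaises only became usable as a context manager in Python 2.7; a plain try/except keeps the test portable across 2.6 and 2.7. A sketch:

```python
import subprocess


def raises_called_process_error(func):
    """Return True if calling func raises subprocess.CalledProcessError.
    Compatibility shim for Python 2.6's unittest, which predates
    assertRaises-as-context-manager."""
    try:
        func()
    except subprocess.CalledProcessError:
        return True
    return False
```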
