GoogleCloudPlatform / compute-image-packages
Packages for Google Compute Engine Linux images.
Home Page: https://cloud.google.com/compute/docs/images
License: Apache License 2.0
I'm not sure if it's possible to report bugs here, but:
The /usr/share/google/run-startup-scripts script checks for the presence of /var/run/google.cloudinit.user_data but then runs /var/run/google.startup.script again.
I'm trying to bundle up a CentOS 6.5 image created inside Windows Azure to move it to GCE.
The bundle operation fails with
19:14:34 platform exclude list: (/etc/ssh/.host_key_regenerated, 0:0:0) (/dev, 0:1:0) (/proc, 0:1:0) (/run, 0:1:1) (/selinux, 0:0:0) (/tmp, 0:1:0) (/sys, 0:1:0) (/var/lib/google/per-instance, 0:1:0) (/var/lock, 0:1:1) (/var/log, 0:1:1) (/var/run, 0:1:1)
19:14:34 ignoring mounts /proc /sys /dev/pts /dev/shm /proc/sys/fs/binfmt_misc /mnt/resource
19:14:34 overwrite list =
19:14:34 Initializing disk file
19:14:43 .
19:14:45 Making filesystem
19:14:58 .
19:15:13 .
19:15:21 Copying contents
19:15:21 Error while running ['rsync', '--times', '--perms', '--owner', '--group', '--links', '--devices', '--acls', '--sparse', '--hard-links', '--recursive', '--xattrs', u'--exclude-from=/tmp/tmpzSPO9Z', '/', u'/tmp/tmpaBbUgv'] return_code = 11
19:15:21 stdout=
19:15:21 stderr=rsync: failed to open exclude file /tmp/tmpzSPO9Z: Permission denied (13)
19:15:21 rsync error: error in file IO (code 11) at exclude.c(1062) [client=3.0.6]
After searching forums and doing some research, I found the reason: SELinux was forbidding access to the temp files.
Stopping SELinux enforcement with
echo 0 > /selinux/enforce
fixed the issue.
The problem appeared on CentOS at Windows Azure, but it can appear on GCE too if SELinux is on and configured to prevent reading from temp files.
Adding a note to the README, or even turning off SELinux before running rsync, would solve the issue.
Regards,
Vladimir
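A pre-flight check in the bundling tool would catch this earlier than a README note. Below is a minimal sketch of such a check; the helper name and warning text are my own, and /selinux/enforce is the legacy CentOS 6 path (newer systems use /sys/fs/selinux/enforce):

```python
def selinux_enforcing(enforce_path="/selinux/enforce"):
    """Return True if SELinux is in enforcing mode.

    `enforce_path` defaults to the legacy CentOS 6 location.
    """
    try:
        with open(enforce_path) as f:
            return f.read().strip() == "1"
    except IOError:
        # No SELinux filesystem mounted: nothing to check.
        return False

# Before bundling, warn (or bail out) instead of failing mid-rsync:
if selinux_enforcing():
    print("Warning: SELinux is enforcing; rsync may be denied access "
          "to temp files. Consider 'echo 0 > /selinux/enforce' first.")
```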
This has been reported to the GCE Bug Tracker: https://code.google.com/p/google-compute-engine/issues/detail?id=354
I was asked to report this here:
What steps will reproduce the problem?
1. Create an instance from ubuntu-1604-xenial-v20160610 and give it a random name
2. logger testlog-entry
3. tail -n20 /var/log/syslog
What is the expected output? What do you see instead?
I'd expect to see the instance hostname in the log.
I see "ubuntu" instead, which might have been the hostname during the image build.
What version of the product are you using? On what operating system?
ubuntu-1604-xenial-v20160610
Please provide any additional information below.
If I repack the image with packer and start an instance with the new packer image, I see the temporary packer instance ID, not the expected hostname that is set at instance creation.
After a restart of the rsyslog daemon and an additional testlog-entry the correct hostname is shown in the logs for new messages.
Did the same test with ubuntu-1404-trusty-v20160610.
The 14.04 Image does not have this problem.
Most Compute users don't even know what these daemons are or how they work, so in the interest of all mankind, I'm going to make a Debian repo using the official debs.
Currently, these scripts only work for Linux systems. In particular, porting address_manager.py would allow for FreeBSD and potentially other UNIX systems to use the load balancing side of GCE. I have been attempting to translate the "ip route add to local $IP/32 dev eth0 proto 66" command, but so far I've had no luck. Looks like "-proto2" can replace "proto 66."
I've tried:
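A port could start by isolating the platform-specific command construction. The sketch below is heavily hedged: the Linux form is the one quoted above, while the FreeBSD form is my untested guess — a /32 alias on lo0 is the conventional way for a BSD host to accept traffic for a load-balanced VIP, but I have not verified it against GCE:

```python
import platform

def add_local_address_cmd(ip, system=None):
    """Build a platform-specific command to claim a load-balanced IP.

    The FreeBSD branch is an assumption, not a verified translation.
    """
    system = system or platform.system()
    if system == "Linux":
        return ["ip", "route", "add", "to", "local", "%s/32" % ip,
                "dev", "eth0", "proto", "66"]
    elif system == "FreeBSD":
        return ["ifconfig", "lo0", "alias", ip,
                "netmask", "255.255.255.255"]
    raise NotImplementedError("unsupported platform: %s" % system)
```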
Is it possible to cut a 1.1.2 release? I need 6b9c8b8 and it would be great not to have to carry a patch.
Thanks,
Brandon
Hello,
We have a use case where our customer's /home directory is on a different persistent disk than the root disk and is mounted at startup. The google-compute-daemon/manage_accounts.py script updates the ssh keys on startup, but the key files end up owned by root even for non-root users, which causes ssh to reject logins.
The problem seems to be with accounts.py/utils.py, which uses shutil.move; a move between devices falls back to a copy, causing the ownership to become root.root.
Perhaps a chown after the move could be added.
Thanks
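A minimal sketch of the suggested fix (the function name is mine, not the daemon's API): capture the source file's ownership and mode before the move and restore them afterwards, since shutil.move's cross-device copy does not preserve them.

```python
import os
import shutil
import stat

def move_preserving_owner(src, dst):
    # shutil.move falls back to copy+delete when src and dst are on
    # different devices, and the copy is owned by whoever runs the
    # daemon (root). Capture ownership first and restore it after.
    st = os.stat(src)
    shutil.move(src, dst)
    os.chown(dst, st.st_uid, st.st_gid)  # restoring another user needs root
    os.chmod(dst, stat.S_IMODE(st.st_mode))
```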
Provide instructions for the 3rd-party contributor agreement.
https://developers.google.com/compute/docs/building-image
https://github.com/GoogleCloudPlatform/Template/blob/master/CONTRIB.md
I've been discussing this in private with @jkaplowitz and he suggested that I open an issue here. I've tried to do the following steps using both the projects/debian-cloud/global/images/debian-7-wheezy-v20140415
and projects/debian-cloud/global/images/backports-debian-7-wheezy-v20140415
images:
apt-get and tasksel install standard
gcimagebundle -d /dev/sda -o /tmp
gcutil addimage ...
The problem is that this new instance never finishes its boot process. This is what I've found looking at gcutil getserialportoutput:
May 28 02:26:00 localhost virtionet-irq-affinity: Could not set channels for eth0 to
curl for instance-id returned 7
curl for instance-id returned 7
May 28 02:26:01 localhost google: onboot initializing
curl for instance-id returned 7
curl for instance-id returned 7
curl for instance-id returned 7
curl for instance-id returned 7
curl for instance-id returned 7
curl for instance-id returned 7
curl for instance-id returned 7
Traceback (most recent call last):
File "/usr/share/google/google_daemon/manage_accounts.py", line 87, in <module>
options.daemon, options.debug)
File "/usr/share/google/google_daemon/manage_accounts.py", line 62, in Main
accounts_manager.Main()
File "/usr/share/google/google_daemon/accounts_manager.py", line 46, in Main
self.RegenerateKeysAndCreateAccounts()
File "/usr/share/google/google_daemon/accounts_manager.py", line 100, in RegenerateKeysAndCreateAccounts
self.lock_file.RunExclusively(self.lock_fname, self.CreateAccounts)
File "/usr/share/google/google_daemon/utils.py", line 156, in RunExclusively
method()
File "/usr/share/google/google_daemon/accounts_manager.py", line 104, in CreateAccounts
desired_accounts = self.desired_accounts.GetDesiredAccounts()
File "/usr/share/google/google_daemon/desired_accounts.py", line 124, in GetDesiredAccounts
etag=self.attributes_etag)
File "/usr/share/google/google_daemon/desired_accounts.py", line 97, in _GetAttribute
attribute_url, etag=etag, timeout_secs=timeout_secs)
File "/usr/share/google/google_daemon/desired_accounts.py", line 76, in _MakeHangingGetRequest
return self.urllib2.urlopen(request)
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 401, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 419, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 379, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1211, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1181, in do_open
raise URLError(err)
urllib2.URLError: <urlopen error [Errno 101] Network is unreachable>
The whole log files can be found here: https://gist.github.com/anonymous/9ed5232d23266dd7a096
Hello,
I'd like to ask you to use submodules for each project. This way, installation becomes as easy as:
git clone https://github.com/GoogleCloudPlatform/google-daemon.git
cd google-daemon
GIT_WORK_TREE=/ git checkout -f
This last command will create the files at /, following the repository's directory structure. The only thing left to fix is permissions: Git keeps the executable permission, the user, and the group.
Besides, you won't have to change your directory structure if you use submodules, while keeping your projects separate and being able to choose which releases to bundle in the compute-image-packages repo.
ERROR: testDiskBundle (gcimagebundlelib.tests.block_disk_test.FsRawDiskTest)
Traceback (most recent call last):
File "/home/jeremyedwards/tt/compute-image-packages/image-bundle/gcimagebundlelib/tests/block_disk_test.py", line 107, in testDiskBundle
utils.RunCommand(['uuidgen']).strip())
File "/home/jeremyedwards/tt/compute-image-packages/image-bundle/gcimagebundlelib/tests/block_disk_test.py", line 142, in TestDiskBundleHelper
(, _) = self._bundle.Bundleup()
File "/home/jeremyedwards/tt/compute-image-packages/image-bundle/gcimagebundlelib/block_disk.py", line 134, in Bundleup
partition_start, uuid = self._InitializeDiskFileFromDevice(disk_file_path)
File "/home/jeremyedwards/tt/compute-image-packages/image-bundle/gcimagebundlelib/block_disk.py", line 96, in _InitializeDiskFileFromDevice
'with a single partition are supported.' % self._disk)
File "/home/jeremyedwards/tt/compute-image-packages/image-bundle/gcimagebundlelib/utils.py", line 67, in exit
RunCommand(kpartx_cmd)
File "/home/jeremyedwards/tt/compute-image-packages/image-bundle/gcimagebundlelib/utils.py", line 352, in RunCommand
(cmd_output[0], cmd_output[1]))
CalledProcessError: Command '['kpartx', '-d', '/tmp/disk.raw']' returned non-zero exit status 1
Ran 17 tests in 40.129s
Should catch HTTPError instead of URLError. URLError does not have a code attribute.
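The distinction matters because HTTPError is a subclass of URLError, but only HTTPError carries a .code. A small sketch of the ordering (function name is mine; the daemon itself is Python 2 and uses urllib2, shown here with Python 3's equivalent urllib.error):

```python
import urllib.error

def classify_fetch_error(exc):
    # HTTPError must be handled before URLError: it is the subclass
    # and the only one with a .code attribute.
    if isinstance(exc, urllib.error.HTTPError):
        return ("http", exc.code)
    if isinstance(exc, urllib.error.URLError):
        return ("network", None)
    raise exc
```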
The regenerate-host-keys script regenerates ssh keys in all cases, even when none are already present or baked into the image, logging something like this:
regenerate-host-keys: Regenerating SSH Host Keys for: (previously ).
Perhaps it could check that there was no previous IP/ssh key and not regenerate in that case? I could work on this and send a pull request, but wanted to get feedback on the idea first.
I'm currently being spammed with the error message:
Apr 11 20:00:51 [secret-node-name] accounts-from-metadata: WARNING Not creating account for user [user]. Usernames must comprise characters [A-Za-z0-9._-] and not start with '-'.
What's up with that?!?
I've upgraded a Debian Backports compute engine instance to Jessie, and now the hostname doesn't seem to be updating. I've tried updating to the most recent release using git and the commands in the README. I'm not seeing any errors, but the hostname remains 'localhost'. If this isn't a bug, any advice would be great. Thanks!
CamelCase methods will need to be renamed to snake_case, optionally with aliases for backwards compatibility.
/cc @illfelder
Per sudo's own recommendation, you should add individual files in the /etc/sudoers.d directory rather than editing /etc/sudoers.
Also, currently, when I remove a user, its entry lingers in the /etc/sudoers file. It should be removed, IMHO.
If the first recommendation is implemented using individual files, the second becomes easy to do.
Sorry for the two issues in one ;)
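Both points can be sketched together: one fragment file per user makes removal a simple unlink. This is my suggestion, not the daemon's current behavior; note that sudo ignores fragment files whose names contain a dot, so dotted usernames need mapping.

```python
import os

def _fragment_path(user, sudoers_dir):
    # sudo skips /etc/sudoers.d files whose names contain '.', so
    # dots in the username are mapped to underscores in the filename.
    return os.path.join(sudoers_dir, "google_" + user.replace(".", "_"))

def write_sudoer_file(user, sudoers_dir="/etc/sudoers.d"):
    path = _fragment_path(user, sudoers_dir)
    with open(path, "w") as f:
        f.write("%s ALL=(ALL) NOPASSWD: ALL\n" % user)
    os.chmod(path, 0o440)  # sudo refuses writable fragments
    return path

def remove_sudoer_file(user, sudoers_dir="/etc/sudoers.d"):
    path = _fragment_path(user, sudoers_dir)
    if os.path.exists(path):
        os.remove(path)
```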
For Debian use...
python setup.py --command-packages=stdeb.command sdist_dsc --debian-version 3 bdist_deb
Hi there, sorry if this is the wrong place for this.
I notice that the special Debian NVMe image you have prepared has a custom kernel deb. It is versioned such that future backports from Debian will erase the NVMe patched kernel unless you pay attention to it. The best way to avoid this is to not use the same package name as the upstream one. Additionally, it would be nice to have the original deb available in a Debian repository somewhere, as well as the nvme patch itself so we can inspect its contents. I had to repack it myself from a different machine which had the same kernel because I mistakenly upgraded it.
If you did all of this, you probably wouldn't have to pin the package in /etc/apt/preferences.d.
I'm happy to help with this but am not sure if the source for this is tracked in github at all.
Thanks!
Not exactly sure what's going on here but we are now seeing:
File "/home/lex/local/lib/python2.7/site-packages/boto/__init__.py", line 1216, in <module>
boto.plugin.load_plugins(config)
File "/home/lex/local/lib/python2.7/site-packages/boto/plugin.py", line 93, in load_plugins
_import_module(file)
File "/home/lex/local/lib/python2.7/site-packages/boto/plugin.py", line 75, in _import_module
return imp.load_module(name, file, filename, data)
File "/usr/lib/python2.7/dist-packages/google_compute_engine/boto/boto_config.py", line 29, in <module>
from google_compute_engine import config_manager
ImportError: No module named google_compute_engine
Note that this is a virtualenv-installed copy of boto which ends up failing with an error in google_compute_engine's vendoring(?) of boto.
Hey guys,
I'm using: 2a5ae74
These strange keys (as in, not mine nor in my console) keep on appearing.
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDox8BQXfFu8e+hZFFXHVcMMTWpxcNeQYAFpwNUuhAqit5/6pDoNbnkuZBZkHgGiUJlHwFGpkEbGUyhbKXi84iw1CUTT0OmLZmmBQrmAxMTnMld5PCq+BzF6x9F1NijJk/ZDJkXneyVZmsSKab2ppivZI3YrSF0rr1GMFAqPNqjfhA2JUKbdEIHDmnWe/QscgmQjTW6oOluhxShO/VF2Rf+W3lC5HqQW2q2MUET84e40I/qhMrsqysJIcY7AiGUt6Z/WeHfFZSpHN3MqCnNAUrb+PXq2WA1JiRu/ZH4mCbxyVTTl5q4r4mAaViCSBDhsWBdR6jUajfAS6cWdIryqi2x [email protected]
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/fMYxvUbwG25HikrXSZ4rp+66ClvLnMkuWm3JRV0LNsCOR72rWnNJe+1/5dVFqfTlpZ6Tlk4RXr0Img4GF0/dkqEtRp8dB0dGbSp504dlF48oykFLieFGxLj9bpg+vCD966hNfAkmFEZRmOXvFoSMHNPZPqM/SW01Tblo7znjy5hKAiZ/GSelyLucvsiab84vROE36qgJ2v4L62jIxiayzzX/qlrRxjQxF6QHLDxpydMZewc4KeUVW2DUphnADobMpt9nPqoaacU4307EAewK/cMdSyTrTfcC9vlSWIGoN6GBZzD4jbZ5MsxOU/7dAeLD8nJiE+h12kEN5NwhyE+1 [email protected]
The code logging.info('Google SSH key schema identifier found.') results in two entries per minute in the syslog, which is too spammy and makes remote monitoring more complex.
» 12:15:43.273 rubystackbox-xxx accounts-from-metadata: INFO Google SSH key schema identifier found.
» 12:15:43.400 rubystackbox-xxx accounts-from-metadata: INFO Google SSH key schema identifier found.
» 12:16:43.323 rubystackbox-xxx accounts-from-metadata: INFO Google SSH key schema identifier found.
» 12:16:43.450 rubystackbox-xxx accounts-from-metadata: INFO Google SSH key schema identifier found.
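One way to tame this, sketched below under my own names (not the daemon's API): demote the message to DEBUG, or emit it only when the observed value changes rather than on every poll.

```python
import logging

class ChangeLogger(object):
    """Log a message only when it differs from the last one emitted,
    so a fixed 'schema identifier found' line appears once instead of
    twice per minute."""

    def __init__(self, logger=None):
        self.logger = logger or logging.getLogger(__name__)
        self._last = None

    def info_once(self, msg):
        if msg != self._last:
            self.logger.info(msg)
            self._last = msg
            return True
        return False  # suppressed duplicate
```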
In order to make this work under CoreOS we should build a container image. The container image needs to execute under rkt so that the packages can live under systemd monitoring.
Essentially the first pass will be a Dockerfile or some other container image builder. This will probably take a similar distribution design to coreos/bugs#1303
@crawford will take a first look at this.
address-manager fails on the current image of 16.04 server because the network interfaces have been renamed:
google-address-manager[7151]: WARNING Non-zero exit status from: "/sbin/ip route ls table local type local dev eth0 scope host proto 66"
STDOUT:
STDERR:
Cannot find device "eth0"
I am getting such an error every 5 seconds. See also https://bugs.launchpad.net/juju-vagrant-images/+bug/1547110 or geerlingguy/packer-boxes#1 for (some) of the other manifestations of this change.
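The underlying problem is the hardcoded eth0 colliding with systemd's predictable interface names (ens4, ens4v1, ...). A possible direction, sketched with an assumed helper name: discover the first non-loopback device from sysfs instead of assuming eth0 (the real fix might prefer the default-route device).

```python
import os

def first_physical_interface(sysfs_net="/sys/class/net"):
    """Return the first non-loopback network device name from sysfs,
    or None if only loopback exists."""
    for name in sorted(os.listdir(sysfs_net)):
        if name != "lo":
            return name
    return None
```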
Problem seems to stem from the upstart scripts for google-account-manager-*.conf relying on service 'sshd' vs 'ssh'.
I was following this tutorial on how to create your own images:
https://cloud.google.com/compute/docs/tutorials/building-images
Then I used your tool to create an image:
gcimagebundle -d /media/iso -o /home/nedstark
I got an error, so I reran the command with slightly different args:
gcimagebundle -d /media/iso/ -r / -o /tmp --loglevel=DEBUG --log_file=/tmp/image_bundle.log
The output is here: http://pastebin.com/yJ9Ps3ZE . I'm using CentOS 6.7.
Some messages in desired_accounts.py were promoted to INFO from DEBUG logs which is causing a lot of spam in the image logs.
Currently, line 97: if self.type is 'xfs':
should read if self.type == 'xfs':
and line 192: if fs_type is 'xfs':
should read: if fs_type == 'xfs':
The "is" operator compares object identity rather than value equality. Hat tip to @sjagoe for the bug solution.
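A quick demonstration of why those branches silently fail: two strings can be equal in value yet be distinct objects, so "is" returns False where "==" returns True.

```python
# "is" checks identity, "==" checks value: the `if self.type is 'xfs'`
# comparisons only ever matched by accident of string interning.
a = "xfs"
b = "".join(["x", "f", "s"])  # equal value, but a distinct object
print(a == b)   # True: same value
print(a is b)   # False in CPython: different objects
```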
And of course if you try to throw in the mysterious '-s' flag it says:
gcimagebundle: error: no such option: -s
And indeed it's not mentioned in the help/usage. It also says it is starting logging, and indeed it creates a temp directory and a log file, but the log file is empty.
Guys, ext4 is boring. Could you support BtrFS? And, while at it, subvolumes.
The IsUserSudoerEntry function does not escape regex characters in the user name, so if there are two users on the system, test.user and test_user, the test.user will never be added since test_user matches the regex.
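A sketch of the fix (simplified, not the daemon's exact code): escape the username with re.escape and anchor the pattern, so a dot in "test.user" no longer acts as a wildcard matching "test_user".

```python
import re

def is_user_sudoer_entry(user, sudoer_lines):
    # re.escape neutralizes regex metacharacters ('.', '+', ...) in
    # the username; the anchor plus trailing whitespace prevents
    # prefix matches against longer usernames.
    pattern = re.compile(r'^%s\s+' % re.escape(user))
    return any(pattern.match(line) for line in sudoer_lines)
```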
The template for adding new entries in accounts.py is:
sudoer_lines.append('%s ALL=NOPASSWD: ALL' % user)
but should probably be:
sudoer_lines.append('%s ALL=(ALL) NOPASSWD: ALL' % user)
to allow users to "sudo -u [username]" without being prompted for a password. This functionality is desirable because "sudo -u" is the default command that ansible uses when switching to a new user during provisioning. Also, the lack of this line in sudoers doesn't seem to add anything security-wise since you can still successfully execute "sudo su - [username]" without being prompted for a password.
Users are currently not required to specify the -d argument and with v1 this can lead to images that cannot be used to boot new instances.
The link in google-daemon README named "Live Migration" does not resolve properly. The scheme needs to be added.
Traceback (most recent call last):
File "./gcimagebundle", line 28, in
main()
File "./gcimagebundle", line 24, in main
imagebundle.main()
File "/home/jeremyedwards/test/gcimagebundle/gcimagebundlelib/imagebundle.py", line 159, in main
options.root_directory).GetPlatform()
File "/home/jeremyedwards/test/gcimagebundle/gcimagebundlelib/platform_factory.py", line 55, in GetPlatform
if self.__registry[name].IsThisPlatform(self.__root):
File "/home/jeremyedwards/test/gcimagebundle/gcimagebundlelib/sle.py", line 26, in IsThisPlatform
if re.match(r'SUSE Linux Enterprise', suse.SUSE().distribution):
File "/usr/lib64/python2.6/re.py", line 137, in match
return _compile(pattern, flags).match(string)
TypeError: expected string or buffer
When activating upstart instead of sysvinit, it seems the script /usr/share/google/google_daemon/manage_addresses.py is not running correctly, so the IP address for the LB wasn't updated.
image : #1 SMP Debian 3.16.3-2~bpo70+1 (2014-09-21) x86_64 GNU/Linux
With an Ubuntu 14.04 server image configured according to Google recommendations (/etc/ssh/sshd_not_to_be_run created, ssh keys deleted, etc.), using debs from versions 1.1.4-1.1.6 results in sshd not being started. Without sshd being started, I have no idea how to debug. If sshd is configured to start, no other problems are detected.
The script used to configure the server is this https://github.com/j-manu/gce-ubuntu-server-14.04/blob/7c137ac6d29de5b481e9236b21294a8a6e6181e1/provision.sh
Starting with CoreOS 1068.8.0 we are seeing a frequent failure of the systemd google-clock-sync-manager service at instance creation and reboot.
-- Logs begin at Tue 2016-07-19 14:39:17 UTC, end at Wed 2016-07-20 15:52:56 UTC. --
Jul 19 14:39:27 localhost systemd[1]: Started Google Compute Engine Clock Sync Daemon.
Jul 19 14:39:27 k8s-node-backend01.c.geofeedia-qa1.internal google-clock-sync[1102]: INFO Starting GCE clock sync
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: Traceback (most recent call last):
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: File "/usr/share/oem/google-compute-daemon/manage_clock_sync.py", line 85, in <module>
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: Main()
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: File "/usr/share/oem/google-compute-daemon/manage_clock_sync.py", line 81, in Main
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: lock_file.RunExclusively(lock_fname, HandleClockDriftToken(watcher, OnChange))
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: File "/usr/share/oem/google-compute-daemon/manage_clock_sync.py", line 51, in HandleClockDriftToken
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: Handler, initial_value='')
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: File "/usr/share/oem/google-compute-daemon/metadata_watcher.py", line 74, in WatchMetadataForever
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: response = self.urllib2.urlopen(req)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: File "/usr/share/oem/python/lib64/python2.7/urllib2.py", line 127, in urlopen
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: return _opener.open(url, data, timeout)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: File "/usr/share/oem/python/lib64/python2.7/urllib2.py", line 404, in open
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: response = self._open(req, data)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: File "/usr/share/oem/python/lib64/python2.7/urllib2.py", line 422, in _open
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: '_open', req)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: File "/usr/share/oem/python/lib64/python2.7/urllib2.py", line 382, in _call_chain
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: result = func(*args)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: File "/usr/share/oem/python/lib64/python2.7/urllib2.py", line 1214, in http_open
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: return self.do_open(httplib.HTTPConnection, req)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: File "/usr/share/oem/python/lib64/python2.7/urllib2.py", line 1187, in do_open
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: r = h.getresponse(buffering=True)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: File "/usr/share/oem/python/lib64/python2.7/httplib.py", line 1045, in getresponse
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: response.begin()
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: File "/usr/share/oem/python/lib64/python2.7/httplib.py", line 409, in begin
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: version, status, reason = self._read_status()
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: File "/usr/share/oem/python/lib64/python2.7/httplib.py", line 365, in _read_status
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: line = self.fp.readline(_MAXLINE + 1)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: File "/usr/share/oem/python/lib64/python2.7/socket.py", line 476, in readline
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: data = self._sock.recv(self._rbufsize)
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal python2.7[1102]: socket.error: [Errno 104] Connection reset by peer
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal systemd[1]: google-clock-sync-manager.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal systemd[1]: google-clock-sync-manager.service: Unit entered failed state.
Jul 19 15:39:27 k8s-node-backend01.c.geofeedia-qa1.internal systemd[1]: google-clock-sync-manager.service: Failed with result 'exit-code'.
Reading the docs,
https://developers.google.com/compute/docs/disks#formatting
I assumed that <disk-name> meant the short name, "media1" in my case.
But this failed:
/usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" media1 /mnt/media1
Only when I put the full path to a device node in /dev did it work:
/usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/disk/by-id/google-media1 /mnt/media1
Please be more clear with the docs.
With the changes introduced in 502f06e, the startup scripts no longer work on SLES or OpenSUSE images since there is no such file, /lib/init/vars.sh
(e.g. https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google-startup-scripts/etc/init.d/google-startup-scripts#L27).
Currently, SLES / OpenSUSE are still using the 1.1.9 release, so startup scripts are mostly working. AFAICT, the only missing functionality is shutdown-script support. However, the next time the images are rolled with an updated set of packages, none of the startup scripts will work (well, perhaps SLES12 will).
/cc @jeremyje @illfelder
Hi,
The rsyslog config in google-startup-scripts/etc/rsyslog.d/90-google.conf configures kernel and other messages to be dumped to the console, which makes the console virtually impossible to use. Is there a reason for this?
Regards
Joe
Is this the host project for google-address-manager?
I am completely clueless as to what this script is supposed to be doing, and why it's failing on coreos. Perhaps this is related to coreos/bugs#245?
Feb 23 20:23:37 core1.c.deis-dialoghq.internal google-address-manager: WARNING Non-zero exit status from: "/usr/bin/ip route ls table local type local dev ens4v1 scope host proto 66"
Feb 23 20:23:37 core1.c.deis-dialoghq.internal google-address-manager: STDOUT:
Feb 23 20:23:37 core1.c.deis-dialoghq.internal google-address-manager: STDERR:
Feb 23 20:23:37 core1.c.deis-dialoghq.internal google-address-manager: Cannot find device "ens4v1"
Feb 23 20:23:37 core1.c.deis-dialoghq.internal google-address-manager: ERROR SyncAddresses exception: Cant check local addresses
https://pypi.python.org/pypi/stdeb/0.5.1#stdeb-cfg-configuration-file
kpartx
rsync
parted
We should use tox to test in multiple Python environments. Travis should just call into tox.
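A hypothetical starting point for that layout (environment names assume the packages still target Python 2.6/2.7; adjust to the versions actually supported, and have .travis.yml simply run `tox`):

```ini
# tox.ini (sketch) -- Travis calls tox; tox fans out per interpreter
[tox]
envlist = py26,py27

[testenv]
deps = boto
commands = python setup.py test
```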
We are running some legacy software on Google Compute that creates a well-known Linux user account with a home directory of /export/home/<user>
and sets some specific sudo rules.
If we use gcloud to SSH into a box as that same user, the Google daemon overwrites the user's passwd entry and breaks the sudo rules.
It would be helpful if either:
As per discussion with @illfelder, having setup.py handle installing the init scripts is not cool from a packaging perspective. If possible, it'd be better to have a separate OS package for the init scripts.
Traceback (most recent call last):
File "/home/jeremyedwards/master2/gcimagebundle/gcimagebundlelib/tests/utils_test.py", line 37, in testRunCommandThatFails
with self.assertRaises(subprocess.CalledProcessError):
TypeError: failUnlessRaises() takes at least 3 arguments (2 given)