
googlecloudplatform / cloud-code-samples

374 stars · 47 watchers · 198 forks · 27.59 MB

Code templates to make working with Kubernetes feel like editing and debugging local code.

License: BSD Zero Clause License

Dockerfile 5.98% Go 5.66% Smarty 0.97% Java 23.33% JavaScript 4.45% Python 10.19% CSS 9.40% HTML 19.83% C# 8.27% Shell 4.40% Handlebars 2.95% Pug 0.70% Procfile 0.17% HCL 3.70%
kubernetes gcloud debugger code-template dotnet nodejs java golang python yaml

cloud-code-samples's Introduction

Code Templates for Google Cloud Code

What is Google Cloud Code?

Cloud Code brings the power and convenience of IDEs to cloud-native application development. Cloud Code integrates with Google Cloud services like Google Kubernetes Engine, Cloud Run, Cloud APIs and Secret Manager, and makes you feel like you are working with local code.

Cloud Code works with command-line container tools like skaffold, minikube, and kubectl under the hood, providing local, continuous feedback on your project as you build, edit, run, and deploy your applications locally or in the cloud. Cloud Code also integrates deeply with the Cloud SDK to provide a unified authentication experience when you develop with Google Cloud services.

What's in this repo

Code templates for an easy getting-started experience with Google Cloud Code in Python, Java, Node.js, Go, and .NET Core. We support two IDEs: Visual Studio Code and IntelliJ (as well as other JetBrains IDEs).

VS Code: Create New Application

IntelliJ: Create New Application

cloud-code-samples's People

Contributors

ahmetb, averikitsch, bourgeoisor, briandealwis, damondouglas, daniel-sanche, dependabot[bot], dgageot, dinagraves, etanshaul, glouischandra, iantalarico, iennae, ivanporty, j-windsor, kelsk, matthewmichihara, meteatamel, muncus, murog, patflynn, pattishin, quoctruong, renovate-bot, renovate[bot], seanmcbreen, shabirmean, sujit-kamireddy, sushicw, wangxf123456


cloud-code-samples's Issues

e2e testing - node

  • concurrent deployments for hello world and guestbook
  • investigate whether templates_location can be easily utilized
  • poll for service / or kubectl 1.14 deployment rollout
  • connectivity test for frontend
  • selenium integration test: create guestbook entry and validate it exists in response
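
One way the rollout-polling item could look (a sketch; the deployment name is an assumption, not necessarily what the templates use):

```shell
# Block until the deployment finishes rolling out, or fail after 2 minutes
# instead of polling the service endpoint in a loop.
kubectl rollout status deployment/node-guestbook-frontend --timeout=120s
```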

no readme file

The sample does not work out of the box, and there are no setup steps.
You have to figure out for yourself where the port, the URL, and everything else are configured. The whole point of this app is to show that it works out of the box, not to spend hours figuring out why it doesn't.

The python, node, and go Dockerfiles should use multi-stage builds

The hello-world for go produces a 346 MB image right now. Switching over to use gcr.io/distroless/base cuts that down to 19.8MB. We could probably cut that down further with gcr.io/distroless/static and static compilation, but that makes using delve a bit of a pain.

Similarly, the node and python examples both use a 300MB+ base image without using a multi-stage build to cut down the size of the resulting image.
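
A multi-stage sketch along those lines for the Go hello-world (file layout, binary name, and Go version here are assumptions):

```dockerfile
# Build stage: compile a self-contained binary.
FROM golang:1.12 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /hello-world .

# Runtime stage: distroless/base keeps libc and is ~20 MB;
# distroless/static is smaller still if the binary is fully static,
# at the cost of making delve-based debugging more awkward.
FROM gcr.io/distroless/base
COPY --from=build /hello-world /hello-world
ENTRYPOINT ["/hello-world"]
```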

[BUG] CI build is broken

Describe the bug
When building the code, we get:

Step #0 - "python tests": Step #2 - "Get Endpoint": Querying for external IP python-guestbook-frontend
Step #0 - "python tests": Step #2 - "Get Endpoint": Querying for external IP python-guestbook-frontend
Step #0 - "python tests": Step #2 - "Get Endpoint": Querying for external IP python-guestbook-frontend
Step #0 - "python tests": Finished Step #2 - "Get Endpoint"
Step #0 - "python tests": ERROR
Step #0 - "python tests": ERROR: build step 2 "gcr.io/cloud-builders/kubectl" failed: context deadline exceeded
Step #0 - "python tests": --------------------------------------------------------------------------------
Step #0 - "python tests": 
Finished Step #0 - "python tests"
Finished Step #4 - "nodejs tests"
Finished Step #3 - "go tests"
Finished Step #1 - "java tests"
Finished Step #2 - "dotnet tests"
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/gcloud" failed: exit status 1
Step #1 - "java tests": Step #2 - "Get 
Step #4 - "nodejs tests": Step #2 - "Get Endpoint": Q
Step #2 - "dotnet tests": Step #2 - "Get Endpoint": Querying for external IP dotnet-g
Step #3 - "go tests": Step #2 - "Get End

To Reproduce
Run CI build

Expected behavior
It passes
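
The failure above is an unbounded wait for the Service's external IP hitting Cloud Build's context deadline. A bounded polling loop (a sketch; the service name is taken from the log above) would fail fast with a clearer error instead:

```shell
# Poll for the LoadBalancer IP, but give up after ~5 minutes so the
# build step fails with a clear message rather than a context deadline.
for i in $(seq 1 60); do
  IP=$(kubectl get svc python-guestbook-frontend \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  [ -n "$IP" ] && break
  sleep 5
done
echo "external IP: ${IP:-none}"
```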

Issues Connecting to MongoDB from Node Guestbook

I have some issues connecting to the Mongo DB. I see the following in the frontend log...

connection err: { MongoNetworkError: failed to connect to server [localhost:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017]
    at Pool.<anonymous> (/backend/node_modules/mongodb-core/lib/topologies/server.js:564:11)
    at emitOne (events.js:116:13)
    at Pool.emit (events.js:211:7)
    at Connection.<anonymous> (/backend/node_modules/mongodb-core/lib/connection/pool.js:317:12)
    at Object.onceWrapper (events.js:317:30)
    at emitTwo (events.js:126:13)
    at Connection.emit (events.js:214:7)
    at Socket.<anonymous> (/backend/node_modules/mongodb-core/lib/connection/connection.js:246:50)
    at Object.onceWrapper (events.js:315:30)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at emitErrorNT (internal/streams/destroy.js:66:8)
    at _combinedTickCallback (internal/process/next_tick.js:139:11)
    at process._tickCallback (internal/process/next_tick.js:181:9)
  name: 'MongoNetworkError',
  errorLabels: [ 'TransientTransactionError' ],
  [Symbol(mongoErrorContextSymbol)]: {} }
connection err: { MongoNetworkError: failed to connect to server [mongo-service:27017] on first connect [MongoNetworkError: connection 1 to mongo-service:27017 timed out]
    at Pool.<anonymous> (/backend/node_modules/mongodb-core/lib/topologies/server.js:564:11)
    at emitOne (events.js:116:13)
    at Pool.emit (events.js:211:7)
    at Connection.<anonymous> (/backend/node_modules/mongodb-core/lib/connection/pool.js:317:12)
    at Object.onceWrapper (events.js:317:30)
    at emitTwo (events.js:126:13)
    at Connection.emit (events.js:214:7)
    at Socket.<anonymous> (/backend/node_modules/mongodb-core/lib/connection/connection.js:257:10)
    at Object.onceWrapper (events.js:313:30)
    at emitNone (events.js:106:13)
    at Socket.emit (events.js:208:7)
    at Socket._onTimeout (net.js:422:8)
    at ontimeout (timers.js:498:11)
    at tryOnTimeout (timers.js:323:5)
    at Timer.listOnTimeout (timers.js:290:5)
  name: 'MongoNetworkError',
  errorLabels: [ 'TransientTransactionError' ],
  [Symbol(mongoErrorContextSymbol)]: {} }

The full Mongo log is

2019-03-29T23:42:42.005+0000 W CONTROL  [main] Option: sslMode is deprecated. Please use tlsMode instead.
about to fork child process, waiting until server is ready for connections.
forked process: 22
2019-03-29T23:42:42.008+0000 I CONTROL  [main] ***** SERVER RESTARTED *****
2019-03-29T23:42:42.014+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-03-29T23:42:42.018+0000 I CONTROL  [initandlisten] MongoDB starting : pid=22 port=27017 dbpath=/data/db 64-bit host=mongo-65f7945978-jvknl
2019-03-29T23:42:42.018+0000 I CONTROL  [initandlisten] db version v4.1.9
2019-03-29T23:42:42.018+0000 I CONTROL  [initandlisten] git version: a5fa363117062a20d6056c76e01edb3a08f71b7c
2019-03-29T23:42:42.018+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
2019-03-29T23:42:42.018+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2019-03-29T23:42:42.019+0000 I CONTROL  [initandlisten] modules: none
2019-03-29T23:42:42.019+0000 I CONTROL  [initandlisten] build environment:
2019-03-29T23:42:42.019+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
2019-03-29T23:42:42.019+0000 I CONTROL  [initandlisten]     distarch: x86_64
2019-03-29T23:42:42.019+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2019-03-29T23:42:42.019+0000 I CONTROL  [initandlisten] options: { net: { bindIp: "127.0.0.1", port: 27017, tls: { mode: "disabled" } }, processManagement: { fork: true, pidFilePath: "/tmp/docker-entrypoint-temp-mongod.pid" }, systemLog: { destination: "file", logAppend: true, path: "/proc/1/fd/1" } }
2019-03-29T23:42:42.019+0000 I STORAGE  [initandlisten] 
2019-03-29T23:42:42.019+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-03-29T23:42:42.019+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-03-29T23:42:42.020+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=1340M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2019-03-29T23:42:42.740+0000 I STORAGE  [initandlisten] WiredTiger message [1553902962:740854][22:0x7f2af7345a80], txn-recover: Set global recovery timestamp: (0,0)
2019-03-29T23:42:42.748+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2019-03-29T23:42:42.758+0000 I STORAGE  [initandlisten] Timestamp monitor starting
2019-03-29T23:42:42.762+0000 I CONTROL  [initandlisten] 
2019-03-29T23:42:42.762+0000 I CONTROL  [initandlisten] ** NOTE: This is a development version (4.1.9) of MongoDB.
2019-03-29T23:42:42.762+0000 I CONTROL  [initandlisten] **       Not recommended for production.
2019-03-29T23:42:42.762+0000 I CONTROL  [initandlisten] 
2019-03-29T23:42:42.762+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-03-29T23:42:42.763+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-03-29T23:42:42.763+0000 I CONTROL  [initandlisten] 
2019-03-29T23:42:42.764+0000 I STORAGE  [initandlisten] createCollection: admin.system.version with provided UUID: 81be457c-b812-47ce-bdb0-f4b44a362523
2019-03-29T23:42:42.775+0000 I INDEX    [initandlisten] index build: done building index _id_ on ns admin.system.version
2019-03-29T23:42:42.775+0000 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
2019-03-29T23:42:42.775+0000 I COMMAND  [initandlisten] setting featureCompatibilityVersion to 4.2
2019-03-29T23:42:42.778+0000 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
2019-03-29T23:42:42.779+0000 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
2019-03-29T23:42:42.779+0000 I STORAGE  [initandlisten] createCollection: local.startup_log with generated UUID: 189d06a2-9540-4813-946a-bd010d09fb83
2019-03-29T23:42:42.791+0000 I INDEX    [initandlisten] index build: done building index _id_ on ns local.startup_log
2019-03-29T23:42:42.792+0000 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
2019-03-29T23:42:42.792+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2019-03-29T23:42:42.793+0000 I NETWORK  [initandlisten] Listening on /tmp/mongodb-27017.sock
2019-03-29T23:42:42.793+0000 I NETWORK  [initandlisten] Listening on 127.0.0.1
2019-03-29T23:42:42.793+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
child process started successfully, parent exiting
2019-03-29T23:42:42.796+0000 I SHARDING [LogicalSessionCacheRefresh] Marking collection config.system.sessions as collection version: <unsharded>
2019-03-29T23:42:42.796+0000 I STORAGE  [LogicalSessionCacheRefresh] createCollection: config.system.sessions with generated UUID: a3483914-ec35-4856-bb56-f9f76bb38f26
2019-03-29T23:42:42.813+0000 I INDEX    [LogicalSessionCacheRefresh] index build: done building index _id_ on ns config.system.sessions
2019-03-29T23:42:42.830+0000 I INDEX    [LogicalSessionCacheRefresh] index build: starting on config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 } using method: Hybrid
2019-03-29T23:42:42.830+0000 I INDEX    [LogicalSessionCacheRefresh] build may temporarily use up to 500 megabytes of RAM
2019-03-29T23:42:42.830+0000 I INDEX    [LogicalSessionCacheRefresh] index build: collection scan done. scanned 0 total records in 0 seconds
2019-03-29T23:42:42.830+0000 I INDEX    [LogicalSessionCacheRefresh] index build: inserted 0 keys from external sorter into index in 0 seconds
2019-03-29T23:42:42.831+0000 I INDEX    [LogicalSessionCacheRefresh] index build: done building index lsidTTLIndex on ns config.system.sessions
2019-03-29T23:42:42.930+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:48134 #1 (1 connection now open)
2019-03-29T23:42:42.930+0000 I NETWORK  [conn1] received client metadata from 127.0.0.1:48134 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.1.9" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
2019-03-29T23:42:42.941+0000 I NETWORK  [conn1] end connection 127.0.0.1:48134 (0 connections now open)
2019-03-29T23:42:43.090+0000 I NETWORK  [listener] connection accepted from 127.0.0.1:48136 #2 (1 connection now open)
2019-03-29T23:42:43.090+0000 I NETWORK  [conn2] received client metadata from 127.0.0.1:48136 conn2: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.1.9" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
2019-03-29T23:42:43.180+0000 I SHARDING [conn2] Marking collection admin.system.users as collection version: <unsharded>
2019-03-29T23:42:43.180+0000 I STORAGE  [conn2] createCollection: admin.system.users with generated UUID: 8a5ebc66-d352-4176-a112-f12a852391fd
2019-03-29T23:42:43.194+0000 I INDEX    [conn2] index build: done building index _id_ on ns admin.system.users
2019-03-29T23:42:43.200+0000 I INDEX    [conn2] index build: done building index user_1_db_1 on ns admin.system.users
Successfully added user: {
	"user" : "root",
	"roles" : [
		{
			"role" : "root",
			"db" : "admin"
		}
	]
}
2019-03-29T23:42:43.202+0000 E -        [main] Error saving history file: FileOpenFailed: Unable to open() file /home/mongodb/.dbshell: Unknown error 2

/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*

2019-03-29T23:42:43.218+0000 I NETWORK  [conn2] end connection 127.0.0.1:48136 (0 connections now open)
2019-03-29T23:42:43.259+0000 W CONTROL  [main] Option: sslMode is deprecated. Please use tlsMode instead.
2019-03-29T23:42:43.261+0000 I CONTROL  [main] ***** SERVER RESTARTED *****
2019-03-29T23:42:43.264+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
killing process with pid: 22
2019-03-29T23:42:43.276+0000 I CONTROL  [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2019-03-29T23:42:43.276+0000 I NETWORK  [signalProcessingThread] shutdown: going to close listening sockets...
2019-03-29T23:42:43.276+0000 I NETWORK  [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
2019-03-29T23:42:43.276+0000 I CONTROL  [signalProcessingThread] Shutting down free monitoring
2019-03-29T23:42:43.276+0000 I FTDC     [signalProcessingThread] Shutting down full-time diagnostic data capture
2019-03-29T23:42:43.280+0000 I STORAGE  [signalProcessingThread] Timestamp monitor shutting down
2019-03-29T23:42:43.280+0000 I STORAGE  [signalProcessingThread] WiredTigerKVEngine shutting down
2019-03-29T23:42:43.281+0000 I STORAGE  [signalProcessingThread] Shutting down session sweeper thread
2019-03-29T23:42:43.281+0000 I STORAGE  [signalProcessingThread] Finished shutting down session sweeper thread
2019-03-29T23:42:43.281+0000 I STORAGE  [signalProcessingThread] Shutting down journal flusher thread
2019-03-29T23:42:43.356+0000 I STORAGE  [signalProcessingThread] Finished shutting down journal flusher thread
2019-03-29T23:42:43.356+0000 I STORAGE  [signalProcessingThread] Shutting down checkpoint thread
2019-03-29T23:42:43.388+0000 I STORAGE  [signalProcessingThread] Finished shutting down checkpoint thread
2019-03-29T23:42:43.413+0000 I STORAGE  [signalProcessingThread] shutdown: removing fs lock...
2019-03-29T23:42:43.413+0000 I CONTROL  [signalProcessingThread] now exiting
2019-03-29T23:42:43.413+0000 I CONTROL  [signalProcessingThread] shutting down with code:0

MongoDB init process complete; ready for start up.

2019-03-29T23:42:44.381+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-03-29T23:42:44.386+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=mongo-65f7945978-jvknl
2019-03-29T23:42:44.386+0000 I CONTROL  [initandlisten] db version v4.1.9
2019-03-29T23:42:44.386+0000 I CONTROL  [initandlisten] git version: a5fa363117062a20d6056c76e01edb3a08f71b7c
2019-03-29T23:42:44.386+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
2019-03-29T23:42:44.386+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2019-03-29T23:42:44.386+0000 I CONTROL  [initandlisten] modules: none
2019-03-29T23:42:44.386+0000 I CONTROL  [initandlisten] build environment:
2019-03-29T23:42:44.386+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
2019-03-29T23:42:44.386+0000 I CONTROL  [initandlisten]     distarch: x86_64
2019-03-29T23:42:44.386+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2019-03-29T23:42:44.386+0000 I CONTROL  [initandlisten] options: { net: { bindIp: "*" }, security: { authorization: "enabled" } }
2019-03-29T23:42:44.387+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-03-29T23:42:44.387+0000 I STORAGE  [initandlisten] 
2019-03-29T23:42:44.387+0000 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-03-29T23:42:44.387+0000 I STORAGE  [initandlisten] **          See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-03-29T23:42:44.387+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=1340M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2019-03-29T23:42:46.444+0000 I STORAGE  [initandlisten] WiredTiger message [1553902966:444038][1:0x7fe71c889a80], txn-recover: Main recovery loop: starting at 1/30080 to 2/256
2019-03-29T23:42:46.700+0000 I STORAGE  [initandlisten] WiredTiger message [1553902966:700115][1:0x7fe71c889a80], txn-recover: Recovering log 1 through 2
2019-03-29T23:42:46.782+0000 I STORAGE  [initandlisten] WiredTiger message [1553902966:782875][1:0x7fe71c889a80], txn-recover: Recovering log 2 through 2
2019-03-29T23:42:46.911+0000 I STORAGE  [initandlisten] WiredTiger message [1553902966:911275][1:0x7fe71c889a80], txn-recover: Set global recovery timestamp: (0,0)
2019-03-29T23:42:46.953+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2019-03-29T23:42:46.963+0000 I STORAGE  [initandlisten] Timestamp monitor starting
2019-03-29T23:42:46.965+0000 I CONTROL  [initandlisten] 
2019-03-29T23:42:46.965+0000 I CONTROL  [initandlisten] ** NOTE: This is a development version (4.1.9) of MongoDB.
2019-03-29T23:42:46.965+0000 I CONTROL  [initandlisten] **       Not recommended for production.
2019-03-29T23:42:46.965+0000 I CONTROL  [initandlisten] 
2019-03-29T23:42:46.980+0000 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
2019-03-29T23:42:46.982+0000 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
2019-03-29T23:42:46.982+0000 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
2019-03-29T23:42:46.983+0000 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
2019-03-29T23:42:46.983+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2019-03-29T23:42:46.984+0000 I NETWORK  [initandlisten] Listening on /tmp/mongodb-27017.sock
2019-03-29T23:42:46.984+0000 I NETWORK  [initandlisten] Listening on 0.0.0.0
2019-03-29T23:42:46.984+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
2019-03-29T23:42:46.985+0000 I SHARDING [LogicalSessionCacheRefresh] Marking collection config.system.sessions as collection version: <unsharded>

Any ideas?

Python Guestbook app fails to deploy

kubectl client version: 1.13
time="2019-04-03T20:08:35-07:00" level=fatal msg="deploy failed: reading manifests: kubectl create: Running [kubectl --context gke_cloud-sharp-test_us-west1-a_test-cluster create --dry-run -oyaml -f /Users/talarico/cloudcode-projects/python-guestbook-1/kubernetes-manifests/guestbook-backend.deployment.yaml -f /Users/talarico/cloudcode-projects/python-guestbook-1/kubernetes-manifests/guestbook-backend.service.yaml -f /Users/talarico/cloudcode-projects/python-guestbook-1/kubernetes-manifests/guestbook-frontend.deployment.yaml -f /Users/talarico/cloudcode-projects/python-guestbook-1/kubernetes-manifests/guestbook-frontend.service.yaml -f /Users/talarico/cloudcode-projects/python-guestbook-1/kubernetes-manifests/guestbook-mongodb.deployment.yaml -f /Users/talarico/cloudcode-projects/python-guestbook-1/kubernetes-manifests/guestbook-mongodb.service.yaml]: stdout apiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: python-guestbook\n    tier: backend\n  name: python-guestbook-backend\n  namespace: default\nspec:\n  ports:\n  - port: 8080\n    targetPort: http-server\n  selector:\n    app: python-guestbook\n    tier: backend\n  type: ClusterIP\napiVersion: apps/v1beta1\nkind: Deployment\nmetadata:\n  labels:\n    app: python-guestbook\n    tier: frontend\n  name: python-guestbook-frontend\n  namespace: default\nspec:\n  replicas: 1\n  template:\n    metadata:\n      labels:\n        app: python-guestbook\n        tier: frontend\n    spec:\n      containers:\n      - env:\n        - name: PORT\n          value: \"8080\"\n        - name: GUESTBOOK_API_ADDR\n          value: python-guestbook-backend:8080\n        image: python-guestbook-frontend\n        name: frontend\n        ports:\n        - containerPort: 8080\n          name: http-server\napiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: python-guestbook\n    tier: frontend\n  name: python-guestbook-frontend\n  namespace: default\nspec:\n  ports:\n  - port: 80\n    targetPort: http-server\n  
selector:\n    app: python-guestbook\n    tier: frontend\n  type: LoadBalancer\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  labels:\n    app: python-guestbook\n    tier: db\n  name: python-guestbook-mongodb\n  namespace: default\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: python-guestbook\n      tier: db\n  template:\n    metadata:\n      labels:\n        app: python-guestbook\n        tier: db\n    spec:\n      containers:\n      - image: mongo:4\n        name: mongo\n        ports:\n        - containerPort: 27017\napiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: python-guestbook\n    tier: db\n  name: python-guestbook-mongodb\n  namespace: default\nspec:\n  ports:\n  - port: 27017\n    targetPort: 27017\n  selector:\n    app: python-guestbook\n    tier: db\n, stderr: error: error validating \"/Users/talarico/cloudcode-projects/python-guestbook-1/kubernetes-manifests/guestbook-backend.deployment.yaml\": error validating data: ValidationError(Deployment.spec.template): unknown field \"initContainers\" in io.k8s.api.core.v1.PodTemplateSpec; if you choose to ignore these errors, turn validation off with --validate=false\n, err: exit status 1: exit status 1"
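
The validation error says `initContainers` is an unknown field in PodTemplateSpec, which suggests it sits one level too high: it belongs inside the pod spec (`spec.template.spec`), alongside `containers`, and the Deployment should use `apps/v1` rather than `apps/v1beta1`. A corrected sketch (names and the init image are assumptions based on the manifests in the log):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-guestbook-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-guestbook
      tier: backend
  template:
    metadata:
      labels:
        app: python-guestbook
        tier: backend
    spec:
      # initContainers lives in the pod spec, next to containers.
      initContainers:
      - name: wait-for-mongodb
        image: busybox
        command: ['sh', '-c', 'until nc -z python-guestbook-mongodb 27017; do sleep 1; done']
      containers:
      - name: backend
        image: python-guestbook-backend
        ports:
        - containerPort: 8080
```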

remove debug-related parts of Dockerfiles now that we're using Skaffold Debug

Describe the bug
I noticed that https://github.com/GoogleCloudPlatform/cloud-code-samples/blob/master/golang/go-hello-world/Dockerfile includes an entrypoint that uses Delve. I think both VS Code and IntelliJ use skaffold debug to configure the Delve entrypoint automatically for debugging, so perhaps we can make this Dockerfile more production-ready and reflective of GCP containerization best practices.

[BUG] Add Compound Launch Configurations for VS Code in launch.json for Guestbook samples

Describe the bug
Users can currently only choose to debug the frontend or the backend from a launch config. With a compound configuration, they could debug the whole application from a single entry point.

To Reproduce
Steps to reproduce the behavior:

  1. Open VS Code with Cloud Code installed
  2. Select New Application > Node JS Guestbook (or any language)
  3. Switch to the Run and Debug view in the Activity Bar
  4. Select the dropdown

Expected behavior
An option to debug all the services in the application.

Actual behavior
Options to debug individual services in the application


Template in use:

Cloud Code in use:

  • Cloud Code for VS Code
  • OS: OSX
  • Version: 1.20

Additional context
This can be solved by using compounds in the launch configuration: https://code.visualstudio.com/docs/editor/debugging#_compound-launch-configurations
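
A minimal sketch of what that could look like in launch.json (the configuration names and types here are assumptions; the actual entries are generated by Cloud Code):

```json
{
  "version": "0.2.0",
  "configurations": [
    { "name": "Debug Frontend (Kubernetes)", "type": "cloudcode.kubernetes", "request": "launch" },
    { "name": "Debug Backend (Kubernetes)", "type": "cloudcode.kubernetes", "request": "launch" }
  ],
  "compounds": [
    {
      "name": "Debug Frontend + Backend",
      "configurations": ["Debug Frontend (Kubernetes)", "Debug Backend (Kubernetes)"]
    }
  ]
}
```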

Build failing with timeouts

Use DNS instead of an IP address when connecting to Mongo

Related to #8 ... rather than reading an IP address from an environment variable, the backend could connect to the Mongo database via its hostname: "mongo-service". This should allow Mongo to automatically reconnect if the Mongo pod gets a new IP address.

I've tested this in my own project and it appears to fix the startup issues that we had to use an initContainer for, and should generally make the app more robust.
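
The change described above can be sketched as follows (a hypothetical helper; `GUESTBOOK_DB_ADDR` and the exact URI shape are assumptions, not the repo's actual code):

```javascript
// Hypothetical sketch: derive the Mongo connection string from the
// Kubernetes Service DNS name instead of a pod IP injected via env var.
// In-cluster, "mongo-service" resolves through kube-dns, so the driver
// can reconnect even after the Mongo pod is rescheduled with a new IP.
function buildMongoUri(host, port = 27017, db = 'admin') {
  return `mongodb://${host}:${port}/${db}`;
}

// Fall back to the Service name when no address is injected.
const uri = buildMongoUri(process.env.GUESTBOOK_DB_ADDR || 'mongo-service');
console.log(uri);
```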

nodejs/guestbook: db connection failure handling is inconsistent

Right now, the way we handle connection failures in the Go guestbook backend is to crash right away when the process starts, so that Kubernetes restarts the pod.

However, the Node.js guestbook keeps retrying indefinitely, which causes #8 and #21 instead of just failing.

This "tricks" the VS Code extension into thinking the pod is ready and serving traffic.

> [email protected] start /backend
> node --inspect=9229 app.js

Debugger listening on ws://127.0.0.1:9229/b1151423-9d4e-461f-8be7-69e88cb28910
For help see https://nodejs.org/en/docs/inspector
App listening on port 8080
Press Ctrl+C to quit.
unable to connect to mongodb://root:password@mongo-service:27017/admin: MongoNetworkError: failed to connect to server [mongo-service:27017] on first connect [MongoNetworkError: getaddrinfo ENOTFOUND mongo-service mongo-service:27017]
(the same connection error repeats indefinitely)

Go Guestbook sample fails with MongoDB error

querying backend failed: Get http://guestbook-backend:8080/messages: dial tcp 10.122.10.27:8080: connect: connection refused

Sample log output

Tags used in deployment:
 - gcr.io/eshaul-work/guestbook-backend -> gcr.io/eshaul-work/guestbook-backend:latest@sha256:632661581a869e1766de8906686a49207555ff8375ffbbbf7e82f3a788f3ad15
 - gcr.io/eshaul-work/guestbook-frontend -> gcr.io/eshaul-work/guestbook-frontend:latest@sha256:ca69632f985c7725747d646ca0c2005104fff5ecb8671a26de101492f4147a70
Starting deploy...
 - deployment.apps/guestbook-backend created
 - service/guestbook-backend created
 - deployment.apps/guestbook-frontend created
 - service/guestbook-frontend created
 - deployment.apps/guestbook-mongodb created
 - service/guestbook-mongodb created
Deploy complete in 1.205275669s
Port forwarding service/guestbook-backend in namespace default, remote port 8080 -> local port 8080
Port forwarding service/guestbook-mongodb in namespace default, remote port 27017 -> local port 27017
Port forwarding service/guestbook-frontend in namespace default, remote port 80 -> local port 4503
Watching for changes...
[guestbook-backend-56c9bf77cb-jtsjv backend] API server listening at: [::]:3000
[guestbook-backend-56c9bf77cb-jtsjv backend] 2019-09-27T20:00:57Z info layer=debugger launching process with args: [/app/backend]
[guestbook-backend-56c9bf77cb-jtsjv backend] 2019-09-27T20:00:57Z debug layer=debugger continuing
[guestbook-frontend-7cc556565f-n8mkj frontend] API server listening at: [::]:3000
[guestbook-frontend-7cc556565f-n8mkj frontend] 2019-09-27T20:00:57Z info layer=debugger launching process with args: [/app/frontend]
[guestbook-frontend-7cc556565f-n8mkj frontend] 2019-09-27T20:00:58Z debug layer=debugger continuing
[guestbook-frontend-7cc556565f-n8mkj frontend] 2019/09/27 20:00:58 frontend server listening on port 8080
[guestbook-backend-56c9bf77cb-jtsjv backend] 2019/09/27 20:01:07 ping to mongodb failed: context deadline exceeded
[guestbook-frontend-7cc556565f-n8mkj frontend] 2019/09/27 20:01:19 received request: GET /
[guestbook-frontend-7cc556565f-n8mkj frontend] 2019/09/27 20:01:19 querying backend for entries

Skaffold / jib issue with maven wrapper

Using the latest Java hello world sample and attempting to deploy with Cloud Code for IntelliJ, I'm seeing:

time="2019-11-04T17:21:45-05:00" level=warning msg="error checking cache, caching may not work as expected: getting hash for artifact java-hello-world: getting dependencies for java-hello-world: getting jib-maven dependencies: initial Jib dependency refresh failed: failed to get Jib dependencies: starting command &{/usr/local/google/home/eshaul/IdeaProjects/untitled209/mvnw [/usr/local/google/home/eshaul/IdeaProjects/untitled209/mvnw jib:_skaffold-fail-if-jib-out-of-date -Djib.requiredVersion=1.4.0 --non-recursive jib:_skaffold-files-v2 --quiet] [] . <nil> 0xc00075e0a0 0xc00075e0b0 [] <nil> <nil> <nil> 0xc000646440 <nil> false [0xc00075e0b8 0xc00075e0a0 0xc00075e0b0] [0xc00075e0a0 0xc00075e0b0 0xc00075e0b8] [0xc00075e098 0xc00075e0a8] [] <nil> <nil>}: fork/exec /usr/local/google/home/eshaul/IdeaProjects/untitled209/mvnw: permission denied"
time="2019-11-04T17:21:45-05:00" level=fatal msg="exiting dev mode because first build failed: build failed: build failed: building [java-hello-world]: build artifact: maven build failed: fork/exec /usr/local/google/home/eshaul/IdeaProjects/untitled209/mvnw: permission denied"

@ivanporty this seems related to #154. When I delete the Maven wrapper, it starts to work.
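The "permission denied" on fork/exec points at the wrapper script missing its executable bit. A minimal sketch of the usual fix (the file name comes from the log above; the first line only creates a stand-in so the snippet is self-contained, and the git step is just to persist the bit in the repo):

```shell
# Stand-in wrapper so the snippet runs anywhere; in a real checkout,
# mvnw already exists and only the chmod/git steps apply.
printf '#!/bin/sh\necho "maven wrapper"\n' > mvnw

# Restore the executable bit that fork/exec requires.
chmod +x mvnw

# Persist the bit in git so fresh clones keep it (no-op outside a repo).
git update-index --chmod=+x mvnw 2>/dev/null || true
```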

[BUG] Go guestbook sample (sometimes) fails when running skaffold

  1. clone the Kubernetes Go guestbook sample application
  2. (from the terminal) run skaffold dev (full command shown below). Sometimes the app starts. Sometimes the frontend fails with:
    querying backend failed: Get "http://go-guestbook-backend:8080/messages": dial tcp 10.122.5.65:8080: connect: connection refused
    and from the app logs:
[go-guestbook-frontend-7d64466946-wkcwv frontend] 2020/03/31 21:23:56 querying backend for entries
[go-guestbook-backend-6fb6656c5b-6jphr backend] 2020/03/31 21:23:59 ping to mongodb failed: context deadline exceeded
[go-guestbook-backend-6fb6656c5b-6jphr backend] 2020-03-31T21:23:59Z error layer=rpc writing response:write tcp [::1]:3000->[::1]:43740: use of closed network connection

Full skaffold command run:

skaffold dev --filename skaffold.yaml  --default-repo gcr.io/[replace_with_gcp_proj] --rpc-port 50051 --port-forward=true --status-check=true --enable-rpc=true

Provide Maven wrapper for Java Jib samples

Java samples now use Jib by default, but many users do not have Maven on their PATH, especially when working from IntelliJ: the IDE bundles complete Maven support, so there is normally no need to install it separately. The Maven wrapper is a good add-on that solves this by managing Maven for Jib, without users having to install and set up Maven themselves.

[BUG] Node JS HelloWorld sample application's service is not working on Windows

Describe the bug
If the Windows machine has IIS installed, chances are that port 80 is already in use by IIS. Since our template also uses port 80, the LoadBalancer that gets created won't work.

To Reproduce
Steps to reproduce the behavior:
Deploy the NodeJS HelloWorld application in Windows with IIS running.

Expected behavior
The publicly exposed service endpoint should be shown in the output.

Screenshots

Waiting for Deployment 'guestbook-backend' to rollout...
Waiting for Deployment 'guestbook-frontend' to rollout...
Waiting for Deployment 'guestbook-mongodb' to rollout...
Waiting for IP address of Service 'guestbook-frontend'.
Failed to get IP address from Service guestbook-frontend

Publicly exposed service endpoints in the application:


Application deployed successfully, but one of the publicly exposed service endpoints could not be retrieved.
No ingress endpoints found in the application.

Template in use:

  • Language: NodeJS
  • Path to template: Hello World template

Cloud Code in use:

  • Cloud Code for VS Code
  • OS: Windows
  • Browser: Chrome
  • Version: 0.12.0

java-guestbook example with cloudbuild profile not working

After adding projectId and kubeContext to skaffold.yaml in the java-guestbook example and updating the apiVersion (the generated one is outdated):

apiVersion: skaffold/v2alpha1
kind: Config
....
- name: cloudbuild
  build:
    googleCloudBuild:
      projectId: my-project-id
  deploy: 	
    kubeContext: 'gke_myproject-id_zone_cluster-id'  

The build is failing:

starting build "abcdedf12345abffeeeeebeeeee"

FETCHSOURCE
Fetching storage object: gs://my-project-id_cloudbuild/source/my-project-id-abcdef1234567890abcdef.tar.gz#123456789
Copying gs://my-project-id_cloudbuild/source/my-project-id-abcdef1234567890abcdef.tar.gz#123456789...
/ [0 files][ 0.0 B/ 7.7 KiB] 
/ [1 files][ 7.7 KiB/ 7.7 KiB] 
Operation completed over 1 objects/7.7 KiB. 
BUILD
Already have image (with digest): gcr.io/cloud-builders/mvn
[INFO] Scanning for projects...
[ERROR] [ERROR] Could not find the selected project in the reactor: frontend @ 
[ERROR] Could not find the selected project in the reactor: frontend -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MavenExecutionException
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/mvn" failed: exit status 1

because cloud build command is:

-c "mvn -Duser.home=$HOME -Djib.console=plain jib:_skaffold-fail-if-jib-out-of-date -Djib.requiredVersion=1.4.0 --projects frontend --also-make package jib:build -Djib.containerize=frontend -Dimage=gcr.io/my-project-id/java-guestbook-frontend:latest"

The problem is:
--projects frontend
because it tells Maven to build the module in a "frontend" subdirectory, while the source uploaded to GCS has "frontend" as its root. The fix is either:
--projects :frontend
(which selects the module by artifactId rather than by path) or to remove the --projects flag entirely.
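A sketch of the corrected Cloud Build invocation, assuming the uploaded tarball has frontend at its root (only the --projects selector changes; the other arguments are copied from the command above):

```shell
# Corrected form: select the module by artifactId, not by directory path.
mvn -Duser.home=$HOME -Djib.console=plain \
    jib:_skaffold-fail-if-jib-out-of-date -Djib.requiredVersion=1.4.0 \
    --projects :frontend --also-make package jib:build \
    -Djib.containerize=frontend \
    -Dimage=gcr.io/my-project-id/java-guestbook-frontend:latest
```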

Secondly, the Jib version used by Cloud Build differs from the one configured in the pom.xml plugin (1.7.0 is generated, while 1.8.0 is the latest Jib version); gcloud should use the same version. It also seems this cannot be changed through the https://skaffold.dev/docs/references/yaml/ configuration.

`go-hello-world` sample not working

On Docker Desktop, whether I run it with the extension or with skaffold dev, I'm unable to connect to localhost:80.

Here's the log:

[go-hello-world-5db5b95f55-h5hf8 server] API server listening at: [::]:3000
[go-hello-world-5db5b95f55-h5hf8 server] 2019-04-03T06:32:23Z info layer=debugger launching process with args: [/src/hello-world/debug]

If I comment out this line in the Dockerfile:

ENTRYPOINT ["dlv", "debug", "./cmd/hello-world",  "--api-version=2", "--headless", "--listen=:3000", "--log"]

and uncomment this one:

ENTRYPOINT ["/app"]

everything is fine.

I don't know whether the debugger is just waiting for a connection or something is broken. Either way, this is not what I expect when I simply want to deploy the sample.
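If the intent is to keep the debug entrypoint, delve can start the program immediately instead of blocking for a client. A sketch, assuming a delve version that supports --accept-multiclient and --continue (target path taken from the Dockerfile line above):

```dockerfile
# Start the target right away; a debugger can still attach later.
ENTRYPOINT ["dlv", "debug", "./cmd/hello-world", "--api-version=2", \
            "--headless", "--listen=:3000", "--log", \
            "--accept-multiclient", "--continue"]
```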

[BUG] Consider shorter names in k8s yaml to reduce verbosity

Describe the bug
We could meaningfully reduce the amount of non-core information people have to process in the logs from our samples by shortening the names in our YAML files, for all languages, for both Hello World and Guestbook.

e.g. https://github.com/GoogleCloudPlatform/cloud-code-samples/blob/master/nodejs/nodejs-guestbook/kubernetes-manifests/mongo.deployment.yaml could have name as 'mongodb' and app as 'guestbook'. No need to include the language in either or 'guestbook' when calling out 'mongodb'.

I'd also simplify the frontend and backend names to just "backend" and "frontend" rather than "nodejs-guestbook-frontend".

e.g. https://github.com/GoogleCloudPlatform/cloud-code-samples/blob/master/nodejs/nodejs-hello-world/kubernetes-manifests/hello.deployment.yaml could have name="hello-world" instead of "nodejs-hello-world" and same for app, container, etc.

To Reproduce
Steps to reproduce the behavior:

  1. Open a sample and deploy it
  2. View the logs displayed by Skaffold

Expected behavior
Shorter names in YAML so the generated logs are less verbose

Screenshots
If applicable, add screenshots to help explain your problem.

Template in use:
All Kubernetes templates

Cloud Code in use:

  • Applies to both VS Code and IntelliJ
  • OS: n/a
  • Browser n/a
  • Version: 1.20 (but applies to all versions)

Additional context
Feel free to follow up with russellwolf@ for further discussion

Java Guestbook sample should have a parent pom.xml

Currently the Guestbook sample has individual pom.xml files under the backend and frontend services.

This doesn't work well with the IntelliJ multi-module model where the project is expected to have a root pom.xml. With the current setup, the user is unable to sync the project with Maven unless the "sub" poms are individually loaded (by browsing to them in the Maven window).

Suggestion - enable squash merging

Currently, when merging a PR, all commits from the feature branch are merged onto master, which IMO creates a messy history on master. If you enable squash merging, each feature (PR) gets squashed into a single commit, with no merge commits.


This is a matter of taste, so just take this as a suggestion.

Java examples should provide a Jib builder (as a profile)

Many Java developers opt to use Jib as the default image builder for their projects, and its integration with Skaffold is first-class. While we can keep the Dockerfile-based profile as the default, having a custom Jib-based profile, plus examples of using Jib with modules in the Guestbook project, would be very valuable.

[BUG] Cloud Run Java Sample has failing tests

Describe the bug
Cloud Run Java Sample has failing tests

To Reproduce
Steps to reproduce the behavior:

  1. Open Cloud Run Java Sample
  2. Run ./mvnw test
  3. Tests fail

Expected behavior
Tests pass!

Template in use:

  • Cloud Run Java
  • Path to template

Additional context
This came up as we were testing the local development flow with IntelliJ. When using Skaffold and Jib as the builder, the tests will run as part of the build and the failing tests will cause the build to fail.

I noticed the tests appear to rely on a server running on localhost, and they do indeed pass when I start the application locally before running the test suite. But that won't be the case when using the local development run configuration, since that is how the user starts the server: they can't start the server without running the tests, and they can't run the tests without starting the server.

The other Cloud Run samples have similar test suites that rely on local servers running as well. I found this a bit confusing myself, but I'd be interested in understanding the rationale and discussing it further. Let me know if I've misunderstood anything or can help in any way. 🙂

quota exceeded error when building the code

When building guestbook for go, we got this error:

Step #3 - "go tests": Step #1 - "deploy to staging": time="2019-09-23T22:59:28Z" level=fatal msg="failed to build: build failed: build failed: building [gcr.io/cloud-code-samples/guestbook-backend]: getting build status: googleapi: Error 429: Quota exceeded for quota metric 'cloudbuild.googleapis.com/get_requests' and limit 'GetRequestsPerMinutePerProject' of service 'cloudbuild.googleapis.com' for consumer 'project_number:296929634055'., rateLimitExceeded"

Full details here: https://pantheon.corp.google.com/cloud-build/builds/553d21d7-6db6-44fa-a03e-0ea60ed2ae30;step=3?project=cloud-code-samples

Guestbook templates might show errors on startup

When deploying the Guestbook templates, Kubernetes might schedule the backend (BE) pod earlier than the DB pod, so the backend's first call to the DB can fail and produce error logs. Kubernetes may then restart the BE pod, and by that time the DB pod will be up, so the pod's second initialization will succeed.

We might want to introduce a proper stateful service configuration for the DB pod to make the templates less error-prone.
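Short of a full StatefulSet, even an init container on the backend that blocks until the DB answers would hide the startup race. A hypothetical sketch (service name and port assumed from the sample manifests):

```shell
# Would run as an initContainer command on the backend pod:
# block until the mongodb service accepts TCP connections.
until nc -z mongo-service 27017; do
  echo "waiting for mongodb..."
  sleep 2
done
```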

nodejs/guestbook: mongodb url parsing inconsistent

The Go guestbook sample requires a GUESTBOOK_DB_ADDR env var:

// GUESTBOOK_DB_ADDR environment variable is set in guestbook-backend.deployment.yaml.
dbAddr := os.Getenv("GUESTBOOK_DB_ADDR")
if dbAddr == "" {
	log.Fatal("GUESTBOOK_DB_ADDR environment variable not specified")
}

  1. The nodejs guestbook doesn't require this env var.

  2. The nodejs guestbook uses a completely unrelated set of env vars:

const MONGO_USERNAME = process.env.MONGO_USERNAME || 'root'
const MONGO_PASSWORD = process.env.MONGO_PASSWORD || 'password'
const MONGO_HOST = process.env.MONGO_HOST || 'mongo-service'
const MONGO_PORT = process.env.MONGO_PORT || '27017'
const MONGO_URI = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOST}:${MONGO_PORT}/admin`
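One way to reconcile the two would be for the Node sample to accept the same GUESTBOOK_DB_ADDR host:port variable the Go sample requires, and derive its URI from it. A hypothetical sketch in shell (variable names come from the snippets above; the credentials are the sample defaults):

```shell
# Assumed value for illustration; in the cluster this would come from
# guestbook-backend.deployment.yaml, as in the Go sample.
GUESTBOOK_DB_ADDR="${GUESTBOOK_DB_ADDR:-mongo-service:27017}"

# Derive the Mongo connection URI from the shared variable.
MONGO_URI="mongodb://root:password@${GUESTBOOK_DB_ADDR}/admin"
echo "$MONGO_URI"
```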
