googlecloudplatform / python-docs-samples
Code samples used on cloud.google.com
License: Apache License 2.0
Code cleanup: HTTP status codes should be named constants.
Others might feel that HTTP codes are well known enough not to count as "magic" numbers. I think this would be a slight code cleanup, but if there are loud objections we can leave it as is.
httplib contains the constants, so we can use those.
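For illustration, a minimal sketch of the cleanup (the function and the two statuses are our own example, not from any specific sample; httplib was renamed http.client in Python 3):

```python
# Sketch: replace "magic" numeric status codes with named constants.
# httplib (Python 2) became http.client in Python 3; the constants match.
try:
    from http.client import OK, NOT_FOUND  # Python 3
except ImportError:
    from httplib import OK, NOT_FOUND      # Python 2

def is_success(status):
    """Return True for 200, False for 404, raise otherwise (example only)."""
    if status == OK:          # instead of: status == 200
        return True
    if status == NOT_FOUND:   # instead of: status == 404
        return False
    raise ValueError('Unexpected status: %d' % status)
```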
Hello
I hit the exception "Empty or missing scope not allowed" when running the load_data_by_post sample.
I fixed it by creating a scoped credential after getting the application-default credentials:

from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
scope = ['https://www.googleapis.com/auth/bigquery']
credentials = credentials.create_scoped(scope)
Regards
Specifically, if we automatically create the dataset and the table, I think we're good to go.
Work out test configuration/standards for the repo.
The CI tests should run under both python 2 and 3
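One way to express this, assuming the repo adopts tox and pytest (both are assumptions here, not current repo facts):

```
# tox.ini (sketch): run the test suite under both Python 2 and Python 3
[tox]
envlist = py27, py34

[testenv]
deps = pytest
commands = pytest
```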
The current tests for the Storage Transfer Service mock everything, which isn't very useful for ensuring the samples still work against the live API.
We need to come up with a plan to test these. The only one that might be difficult is the AWS one, as it will require us to have an AWS account; otherwise people outside Google cannot run the tests.
Needed for docs.
Image app sample throws error 500 on signing in to guest book (local deploy)
After merging #6
According to BigQuery (BQ) API documentation, streaming insert (insertAll) input should be formed as
{
    "rows": [
        {
            "insertId": "A String",
            "json": {"a_key": ""}
        }
    ]
}
whereas in the example bigquery/api/streaming.py the input is formed differently as
insert_all_data = {
    'insertId': str(uuid.uuid4()),
    'rows': [{'json': row}]
}
The difference is where the 'insertId' key is placed: a separate insertId should accompany each row, rather than a single 'insertId' sitting at the same level as 'rows'.
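A corrected request body, generating an insertId per row as the documentation describes (the helper name is ours; the uuid usage follows the snippet above):

```python
import uuid

def make_insert_all_body(rows):
    """Build an insertAll body with a separate insertId for each row,
    matching the documented shape (a sketch, not the sample's actual code)."""
    return {
        'rows': [
            {'insertId': str(uuid.uuid4()), 'json': row}
            for row in rows
        ]
    }

body = make_insert_all_body([{'a_key': 'one'}, {'a_key': 'two'}])
```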
Instead of:
def main(argv):
    args = parser.parse_args(argv[1:])

if __name__ == '__main__':
    main(sys.argv)
It should be
def descriptive_function_name(arg1, arg2, arg3):
    ...

if __name__ == '__main__':
    args = parser.parse_args()
    descriptive_function_name(args.arg1, args.arg2, args.arg3)
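Put together, the recommended pattern might look like this (the flag names, defaults, and function body are illustrative placeholders, not from an actual sample):

```python
import argparse

def describe_table(project_id, dataset_id, table_id):
    """Illustrative command body; a real sample would call the API here."""
    return 'Table %s.%s in project %s' % (dataset_id, table_id, project_id)

parser = argparse.ArgumentParser(description='Describe a BigQuery table (sketch).')
parser.add_argument('--project-id', default='my-project')
parser.add_argument('--dataset-id', default='my_dataset')
parser.add_argument('--table-id', default='my_table')

if __name__ == '__main__':
    args = parser.parse_args()
    print(describe_table(args.project_id, args.dataset_id, args.table_id))
```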
Needs to call delete_task
Where should we put: https://github.com/GoogleCloudPlatform/appengine-memcache-guestbook-python ?
appengine/memcache/guestbook ?
@elibixby @jonparrott
BigQuery tests should at least verify that the right RPC is made.
Notably, the samples should be executable from the command line without modification, and should accept parameters as command line arguments.
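One lightweight way to verify the right RPC without hitting the live API is asserting against a mock client (the sample function below is a hypothetical stand-in; `mock` ships as unittest.mock on Python 3 and as the `mock` package on Python 2):

```python
try:
    from unittest import mock   # Python 3
except ImportError:
    import mock                 # Python 2: pip install mock

def list_datasets(client, project_id):
    # Hypothetical sample under test, shaped like a google-api-python-client call.
    return client.datasets().list(projectId=project_id).execute()

# Verify the expected RPC is requested, with the expected arguments.
client = mock.MagicMock()
list_datasets(client, 'my-project')
client.datasets().list.assert_called_with(projectId='my-project')
```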
Most users don't need the Testing section, but contributors do. How about moving the test-related sections to CONTRIBUTING.md?
Hello
With the new load_data_by_post, I'm not able to upload a JSON file; I get the error "Cannot load CSV data with a nested schema".
It sounds like the job expects a CSV rather than a JSON file, so do I need to do something special?
I also tested on the web client with my data/schema and it works fine; here are the schema and the data in case you want to test.
Regards
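For reference, that error usually means the load job fell back to the default CSV sourceFormat; a load configuration requesting newline-delimited JSON looks like this (project, dataset, and table names are placeholders):

```python
# Sketch of a BigQuery load-job body for newline-delimited JSON input.
# Without sourceFormat, the API assumes CSV and rejects nested schemas.
job_body = {
    'configuration': {
        'load': {
            'sourceFormat': 'NEWLINE_DELIMITED_JSON',
            'destinationTable': {
                'projectId': 'my-project',   # placeholder
                'datasetId': 'my_dataset',   # placeholder
                'tableId': 'my_table',       # placeholder
            },
        }
    }
}
```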
Principle of keeping only required data.
The Django apps don't like being imported as part of a larger package, apparently. We need to either find a way for the apps to play nicely with nose, or figure out a way to get nose and coverage to cooperate if we remove all of the packages.