ari3s / synct
License: GNU General Public License v3.0
If a column contains a formula that is not updated by the script, the formula is not inherited when rows are added. Inheriting formulas in added rows would allow more powerful automation of spreadsheet updates that include formulas.
To prevent conflicts, this feature has to be explicitly enabled in the configuration file, either globally or for specific sheets:
INHERIT_FORMULAS: True
If the feature is enabled globally, it can be disabled for specific sheets:
INHERIT_FORMULAS: True
SHEET:
  - NAME: Example
    INHERIT_FORMULAS: False
...
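The proposed behaviour could be sketched as follows; the function name and the row representation are assumptions, not part of the script:

```python
def inherit_formulas(last_row, new_row):
    """Fill empty cells in an added row with formulas from the last original row.

    Hypothetical sketch of the proposed INHERIT_FORMULAS behaviour: any cell
    whose value in the last original row starts with '=' is treated as a
    formula and copied into the added row when that cell has no value.
    """
    return [
        old if (new is None and isinstance(old, str) and old.startswith("="))
        else new
        for old, new in zip(last_row, new_row)
    ]
```

A column with a formula is inherited only where the new row leaves the cell empty; cells filled from the source data stay untouched.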
Google spreadsheet rows are deleted one by one. The repeated operation can then exceed the 'Write requests per minute per user' limit, causing the following error:
gd2gs: error: <HttpError 429 when requesting https://sheets.googleapis.com/v4/spreadsheets/????????????????????????????????????????????:batchUpdate?alt=json returned "Quota exceeded for quota metric 'Write requests' and limit 'Write requests per minute per user' of service 'sheets.googleapis.com' for consumer 'project_number:????????????'.". Details: ...
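One way to cope with this error is exponential backoff, which is what Google recommends for rate-limit responses; grouping the row deletions into a single batchUpdate request would reduce the number of write requests in the first place. The helper below is a hypothetical sketch, not part of gd2gs:

```python
import random
import time

def request_with_backoff(operation, max_retries=5, base_delay=1.0):
    """Retry `operation` when the API answers HTTP 429 (quota exceeded).

    Hypothetical helper: retries with exponential backoff plus jitter.
    gd2gs would catch googleapiclient's HttpError; a generic Exception is
    used here to keep the sketch self-contained.
    """
    for attempt in range(max_retries):
        try:
            return operation()
        except Exception as err:
            if "429" not in str(err) or attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```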
When the script updates data in a Google spreadsheet, hyperlinks that were created via the Insert/Link command disappear. The values in the cells are not affected, and hyperlinks in cells that use the HYPERLINK formula are not affected either.
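As a workaround until the bug is fixed, links that must survive updates can be entered with the HYPERLINK formula instead of Insert/Link, since formula-based links are reported as unaffected:

```
=HYPERLINK("https://example.com/ABC-123", "ABC-123")
```

The URL and label above are placeholders for illustration.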
It is not easy to work out the mapping of Jira field names to the internal names returned by a query. The mapping can be found in XML data exported from Jira, which is not user friendly.
There is a script, easyjira, that can show the field mapping for a project and an issue type in JSON format. The documentation should mention this option.
If default columns are enabled in the config file, data are deleted in several cells that are not part of the source data. It is a bug impacting the default columns functionality.
The script should optionally enable removing rows whose key values were not found by the query.
The debug mode can be extended to show source data in JSON or a JSON-like format. Because the source data can be huge, the number of listed items should be limited. One option is to use the number of v letters in the -v parameter for this purpose: -v can remain the info level, and -vv can work as the debug mode without showing source data, as today. -vvv can then work as the debug mode showing the first source data record, -vvvv can display the first two records, and so on. Another option could be to add a number after -vv to set how many initial source records to show. A third option could be to display just one data record per sheet/query.
Add a parameter -s or --sheet taking sheet names that define which sheets are processed. The selected sheets can be any sheets that are set up in the configuration YAML file.
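The verbosity counting and the sheet selection proposed above could be sketched with argparse; the option names come from the text, while the mapping from the -v count to the number of shown records is an assumption:

```python
import argparse

parser = argparse.ArgumentParser(prog="synct")
# -v info, -vv debug, -vvv+ debug plus initial source records
parser.add_argument("-v", "--verbose", action="count", default=0)
# restrict processing to the named sheets from the config file
parser.add_argument("-s", "--sheet", nargs="+", metavar="NAME")

args = parser.parse_args(["-vvvv", "-s", "Example", "Backlog"])
shown_records = max(0, args.verbose - 2)  # -vvv -> 1 record, -vvvv -> 2, ...
```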
The script (version 0.6.0) deletes data in columns that are not present in the config file, i.e. columns that should stay untouched by the script.
An additional source of data could be GitLab. It works in a similar way to GitHub.
All documentation is now in the README.md file. It is long, but guidance on how to prepare the config file, especially the queries, is still missing.
The script could also enable updates of local spreadsheets (.ods, .xls, .xlsx), not only Google spreadsheets. Either the command line parameter -m (modify) or -u (update) with a file name could define the modified/updated file, and the SPREADSHEET_ID reserved word would be omitted or replaced with SPREADSHEET_NAME in the config file.
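A config fragment for this proposal might look as follows; SPREADSHEET_NAME replacing SPREADSHEET_ID is taken from the text, while the file name and sheet layout are placeholders mirroring the existing examples:

```yaml
SPREADSHEET_NAME: report.xlsx   # instead of SPREADSHEET_ID for a local file
SHEET:
  - NAME: Example
```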
The config file should include an option that supports this use case:
As a user, I need to keep Jira items updated as listed by the query, but ignore missing keys with Closed status. That is, items that are in the list and whose status changed to Closed are kept in the list until they are manually deleted, and missing items in Closed status are not reported as missing.
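One possible shape for such an option, with an invented option name (the feature is not implemented, so the keyword below is purely hypothetical):

```yaml
SHEET:
  - NAME: Example
    # Hypothetical option: keep rows whose keys the query no longer returns
    # when their status is Closed, and do not report them as missing
    KEEP_MISSING_WITH_STATUS: Closed
```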
If a cell contains a value of int64 type, the following error occurs when the spreadsheet is being updated:
Traceback (most recent call last):
  File "/usr/bin/gd2gs", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/usr/lib/python3.11/site-packages/gd2gs/gd2gs.py", line 203, in main
    update_google_data(google_spreadsheet, used_sheets_list, config.sheet, args.remove)
  File "/usr/lib/python3.11/site-packages/gd2gs/gd2gs.py", line 157, in update_google_data
    google.update_spreadsheet()
  File "/usr/lib/python3.11/site-packages/gd2gs/gsheet.py", line 197, in update_spreadsheet
    if not self.request_operation(self.spreadsheet_access.values().batchUpdate, body):
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/gd2gs/gsheet.py", line 240, in request_operation
    response = operation(
               ^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/googleapiclient/discovery.py", line 1123, in method
    headers, params, query, body = model.request(
                                   ^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/googleapiclient/model.py", line 160, in request
    body_value = self.serialize(body_value)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/googleapiclient/model.py", line 273, in serialize
    return json.dumps(body_value)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/json/encoder.py", line 200, in encode
    chunks = self.iterencode(o, _one_shot=True)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/json/encoder.py", line 258, in iterencode
    return _iterencode(o, 0)
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/json/encoder.py", line 180, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type int64 is not JSON serializable
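The error occurs because NumPy scalar types such as int64 are not handled by Python's json module. One possible fix, sketched below (the helper names are assumptions): NumPy scalars expose .item(), which returns the equivalent built-in Python value that json.dumps can encode.

```python
import json

def to_serializable(value):
    """Coerce NumPy scalars (int64, float64, ...) to native Python types."""
    return value.item() if hasattr(value, "item") else value

def dump_body(body):
    """Serialize a request body, falling back to .item() for NumPy scalars."""
    return json.dumps(
        body, default=lambda o: o.item() if hasattr(o, "item") else str(o)
    )
```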
If both the source and the modified file are local spreadsheets, as proposed in issue #16, it could be useful to omit the config file and enable defining several parameters on the command line: at least the key, the header offset, the default columns, the formulas inheritance, and the relation of column names to columns in the input spreadsheet.
Formulas inheritance works only if default columns are not enabled in the configuration file.
If a sheet contains just a header without data, it is not updated even if adding is enabled - the sheet is excluded from any operations. At least one data row is required for adding missing rows.
If the delimiter is changed globally or only in a sheet, arrays are still separated by the default delimiter. If the delimiter is set on the particular data item, it works as expected.
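The precedence the issue expects could be sketched as follows; the parameter names and the built-in default are assumptions:

```python
def effective_delimiter(item_delim=None, sheet_delim=None, global_delim=None,
                        default=", "):
    """Return the delimiter used to join array values in a cell.

    Sketch of the expected precedence: the per-item setting wins, then the
    sheet-level one, then the global one, then the built-in default. The
    reported bug is that the sheet and global levels are ignored.
    """
    for delim in (item_delim, sheet_delim, global_delim):
        if delim is not None:
            return delim
    return default
```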
An additional source of data could be GitHub. Because of its REST API, the query items can be designed in a similar way to Bugzilla queries.
The configuration file requires setting up a mapping of column names to source field names. It would be useful to enable default names: if a column name is equal to the source field name, the column is updated even if it is not present in the configuration file.
To prevent conflicts, this feature has to be explicitly enabled in the configuration file, either globally or for specific sheets:
DEFAULT_COLUMNS: True
If the feature is enabled globally, it can be disabled for specific sheets:
DEFAULT_COLUMNS: True
SHEET:
  - NAME: Example
    DEFAULT_COLUMNS: False
...
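The proposed lookup could be sketched as follows; the function name is an assumption:

```python
def resolve_source_field(column, mapping, default_columns=False):
    """Return the source field name for a spreadsheet column.

    Sketch of the proposed DEFAULT_COLUMNS behaviour: an explicit mapping
    from the config file wins; otherwise, if defaults are enabled, the
    column name itself is used as the source field name. Unmapped columns
    stay untouched when defaults are disabled.
    """
    if column in mapping:
        return mapping[column]
    return column if default_columns else None
```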
The script should optionally enable adding rows for missing key values that were found by the query, including the related values in the columns.
Publishing the script on PyPI will require renaming it again, because there is already a package named syncit.
Formula inheritance in added rows from the last original row can be set either globally or specifically for each sheet. It could be useful to extend the settings to specific columns.
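A per-column form of the existing boolean switch might look like this; the list form is purely hypothetical, mirroring the current config style:

```yaml
SHEET:
  - NAME: Example
    # Hypothetical per-column form: inherit formulas only in these columns
    INHERIT_FORMULAS:
      - Total
      - Average
```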
The script could use input from a local Excel/LibreOffice spreadsheet file or a CSV file. That would be useful for retrieving data from applications that can export data in such formats.
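A CSV source could be sketched with the standard library alone; the function name and the key column are assumptions, and Excel/ODS input would need a third-party reader instead of the csv module:

```python
import csv
import io

def read_csv_source(text, key="Key"):
    """Read source records from CSV text, indexed by the key column.

    Stdlib-only sketch of a possible local-file source: each row becomes a
    dict of column name to value, stored under its key-column value.
    """
    return {row[key]: row for row in csv.DictReader(io.StringIO(text))}
```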