origintrail / dkg.py
:snake: Python :snake: SDK for OriginTrail Decentralized Knowledge Graph interactions
License: Apache License 2.0
Asset.create lacks an option to manage the nonce, which limits the number of assets that can be published from one wallet. Essentially, the current implementation allows only sequential publishing.
I can't start publishing the next asset while the previous asset's publishing process is still in flight, because the next asset's publish transaction is sent with the same nonce as the previous one.
Functional workaround: use different wallets if you need to publish more than ~50 assets per hour.
But it would be nice to be able to publish several assets in parallel from the same wallet.
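As a hypothetical sketch of what this could look like if Asset.create accepted an explicit nonce: a local allocator, seeded once from the chain (e.g. via web3.py's `w3.eth.get_transaction_count(address)`) and incremented per transaction, so several publish transactions can be in flight at once. The `NonceManager` name and API below are made up for illustration; they are not part of dkg.py.

```python
import threading

class NonceManager:
    """Hypothetical local nonce allocator (not a dkg.py class): seed it
    once with the wallet's on-chain transaction count, then hand out
    strictly increasing nonces for parallel publish transactions."""

    def __init__(self, starting_nonce: int):
        self._lock = threading.Lock()
        self._next = starting_nonce

    def reserve(self) -> int:
        # Atomically claim the next nonce for one transaction.
        with self._lock:
            nonce = self._next
            self._next += 1
            return nonce

# In a real setup starting_nonce would come from
# w3.eth.get_transaction_count(address); 7 is just a stand-in here.
manager = NonceManager(starting_nonce=7)
print([manager.reserve() for _ in range(3)])  # [7, 8, 9]
```

Each publisher thread would call `reserve()` before submitting its transaction, instead of letting every transaction default to the same on-chain nonce.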
"a multi-chain data exchange network - connects to several blockchains (currently Ethereum and xDai with more integrations upcoming such as with Polkadot)"
I'm trying to burn an asset I previously published using asset.burn, but it throws an exception.
I can still get this asset, though.
I'm calling it just by the UAL, like this: dkg.asset.burn(ual='did:dkg:otp/0x5cAC41237127F94c2D21dAe0b14bFeFa99880630/591752')
Am I using it the wrong way, or is it actually a bug?
NODE INFO RECEIVED
{'version': '6.0.14'}
asset.get, called as dkg.asset.get(ual='did:dkg:otp/0x5cAC41237127F94c2D21dAe0b14bFeFa99880630/591752'), returns:
{'operation': {'publicGet': {'operationId': '5f885f33-01f6-480d-8290-0432bbd789da', 'status': 'COMPLETED'}}, 'public': {'assertion': 'uuid:1 http://schema.org/message "pub_with_47k_stake" .', 'assertionId': '0x8a320fb4b787628551bf730114983b8ae774955f9baeb853d6e1dd4e445401e8'}}
asset.burn, called as dkg.asset.burn(ual='did:dkg:otp/0x5cAC41237127F94c2D21dAe0b14bFeFa99880630/591752'), raises:
Traceback (most recent call last):
File "/root/dkg.py/examples/test_1.py", line 98, in
res = dkg.asset.burn(ual='did:dkg:otp/0x5cAC41237127F94c2D21dAe0b14bFeFa99880630/591752')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/dkg.py/dkg/asset.py", line 316, in burn
self._burn_asset(token_id)
File "/root/dkg.py/dkg/module.py", line 37, in caller
return self.manager.blocking_request(type(method.action), request_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/dkg.py/dkg/manager.py", line 58, in blocking_request
return self.blockchain_provider.call_function(**request_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/dkg.py/dkg/providers/blockchain.py", line 106, in call_function
gas_estimate = contract_function(**args).estimate_gas(options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/spam_test_2/lib/python3.11/site-packages/web3/contract/contract.py", line 318, in estimate_gas
return estimate_gas_for_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/spam_test_2/lib/python3.11/site-packages/web3/contract/utils.py", line 203, in estimate_gas_for_function
return w3.eth.estimate_gas(estimate_transaction, block_identifier)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/spam_test_2/lib/python3.11/site-packages/web3/eth/eth.py", line 293, in estimate_gas
return self._estimate_gas(transaction, block_identifier)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/spam_test_2/lib/python3.11/site-packages/web3/module.py", line 68, in caller
result = w3.manager.request_blocking(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/spam_test_2/lib/python3.11/site-packages/web3/manager.py", line 232, in request_blocking
return self.formatted_response(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/spam_test_2/lib/python3.11/site-packages/web3/manager.py", line 197, in formatted_response
apply_error_formatters(error_formatters, response)
File "/root/spam_test_2/lib/python3.11/site-packages/web3/manager.py", line 72, in apply_error_formatters
formatted_resp = pipe(response, error_formatters)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "cytoolz/functoolz.pyx", line 680, in cytoolz.functoolz.pipe
File "cytoolz/functoolz.pyx", line 655, in cytoolz.functoolz.c_pipe
File "/root/spam_test_2/lib/python3.11/site-packages/web3/_utils/contract_error_handling.py", line 126, in raise_contract_logic_error_on_revert
raise ContractCustomError(data, data=data)
web3.exceptions.ContractCustomError: 0x4db94714dd2eed3f95b5460da544a35b7907a6db0d8681c7a1d973911d8feeef5c8009e8
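The ContractCustomError carries the raw revert data of a Solidity custom error: the first four bytes are the error selector (keccak-256 of the error's signature, truncated to 4 bytes), and the rest is the ABI-encoded arguments. Slicing the hex string at least identifies which selector to look up against the error definitions in the contract's ABI:

```python
# Raw revert data copied from the ContractCustomError above.
revert_data = "0x4db94714dd2eed3f95b5460da544a35b7907a6db0d8681c7a1d973911d8feeef5c8009e8"

# The first 4 bytes ("0x" + 8 hex chars) are the custom error selector.
selector = revert_data[:10]
# The remainder is the ABI-encoded error arguments (here one 32-byte word).
payload = revert_data[10:]

print(selector)           # 0x4db94714
print(len(payload) // 2)  # 32 bytes of arguments
```

Matching `0x4db94714` against the keccak hashes of the error signatures in the ContentAsset ABI would reveal which contract check rejected the burn.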
I've run into an issue trying to normalize N-Quads using URDNA2015 normalization from the jsonld library.
pyld.jsonld.JsonLdError: ('Could not convert input to RDF dataset before normalization.',)
Type: jsonld.NormalizeError
Cause: ('Error while parsing N-Quads invalid quad.',)
Type: jsonld.ParseError
In dkg.js, we're normalizing N-Quads using the following function:
async toNQuads(content, inputFormat) {
const options = {
algorithm: 'URDNA2015',
format: 'application/n-quads',
};
if (inputFormat) {
options.inputFormat = inputFormat;
}
const canonized = await jsonld.canonize(content, options);
return canonized.split('\n').filter((x) => x !== '');
}
I've tried to reproduce the same logic in dkg.py, but I ran into issues trying to normalize N-Quads (JSON-LD works fine). It may be either wrong usage of the library on my side or a bug in jsonld, as it seems the library is no longer maintained.
Python normalization function:
def normalize_dataset(
dataset: JSONLD | NQuads,
input_format: Literal["JSON-LD", "N-Quads"] = "JSON-LD",
) -> NQuads:
normalization_options = {
"algorithm": "URDNA2015",
"format": "application/n-quads",
}
match input_format.lower():
case "json-ld" | "jsonld":
pass
case "n-quads" | "nquads":
normalization_options["inputFormat"] = "application/n-quads"
case _:
raise DatasetInputFormatNotSupported(
f"Dataset input format isn't supported: {input_format}. "
"Supported formats: JSON-LD / N-Quads."
)
n_quads = jsonld.normalize(dataset, normalization_options)
assertion = [quad for quad in n_quads.split("\n") if quad]
if not assertion:
raise InvalidDataset("Invalid dataset, no quads were extracted.")
return assertion
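I haven't pinned down the exact cause, but one guess (an assumption, not verified against pyld's parser internals) is that pyld's N-Quads parser rejects blank lines or stray whitespace that jsonld.js tolerates. A minimal pre-processing sketch that cleans the string before handing it to jsonld.normalize; `strip_blank_quads` is a hypothetical helper, not a dkg.py function:

```python
def strip_blank_quads(nquads: str) -> str:
    # Hypothetical pre-processing step: drop blank lines and
    # surrounding whitespace, keeping one quad per line with a
    # single trailing newline.
    lines = (line.strip() for line in nquads.split("\n"))
    return "\n".join(line for line in lines if line) + "\n"

raw = 'uuid:1 <http://schema.org/message> "pub_with_47k_stake" .\n\n'
print(strip_blank_quads(raw))
```

If the ParseError disappears after this kind of cleanup, the input was malformed for pyld's stricter parser rather than the normalization itself being broken.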
We've come across an inconsistency in how the byte size of JSON assertions is calculated in dkg.py versus ot-node/dkg.js. This discrepancy leads to an error when the node verifies the Knowledge Asset (KA) size while creating a KA with special characters through dkg.py.
In the Python version, when encoding a specific character (e.g., \u2022) within a JSON string, I noticed that the resulting byte size of the JSON is calculated as 22. Here's an example code snippet for illustration:
import json
data = {"character": "\u2022"}
number_of_bytes = len(json.dumps(data, separators=(",", ":")).encode("utf-8"))
print(number_of_bytes) # Outputs: 22
In contrast, using NodeJS, the calculation for the same JSON object yields a different byte size:
const data = { character: '\u2022' };
const numberOfBytes = Buffer.byteLength(JSON.stringify(data));
console.log(numberOfBytes) // Outputs: 19
The expected behavior would be a consistent byte-size calculation for JSON assertions across the Python and NodeJS implementations (and any others).
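The discrepancy comes from json.dumps defaulting to ensure_ascii=True, which escapes the bullet as the six ASCII characters \u2022, while Node's JSON.stringify emits the raw character (3 bytes in UTF-8). Passing ensure_ascii=False makes Python match Node:

```python
import json

data = {"character": "\u2022"}

# Default json.dumps uses ensure_ascii=True, so the bullet is escaped
# as a six-character \u sequence -> 22 bytes total.
escaped = json.dumps(data, separators=(",", ":"))
print(len(escaped.encode("utf-8")))  # 22

# ensure_ascii=False keeps the raw character (3 bytes in UTF-8),
# matching Node's JSON.stringify -> 19 bytes.
unescaped = json.dumps(data, separators=(",", ":"), ensure_ascii=False)
print(len(unescaped.encode("utf-8")))  # 19
```

Whether ensure_ascii=False is the right fix for dkg.py depends on which form ot-node expects; the point is that both sides must serialize identically before measuring bytes.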