origintrail / ot-node
OriginTrail Decentralized Knowledge Graph network node
Home Page: https://origintrail.io
License: Apache License 2.0
In 2018, software development should be container based in order to enable modern software paradigms such as DevOps and to utilize cloud environments that scale.
E.g. please provide the OT-node as a Docker image; I would like to deploy it without manual installation steps.
https://en.wikipedia.org/wiki/Docker_%28software%29
This approach would also enable non-IT professionals to deploy a master node to a cloud service provider. Enterprise IT expects container-based deployment in 2018 too.
Manual installation is required, which is error prone and a challenge for token holders who want to operate a node but aren't Linux experts.
https://github.com/OriginTrail/ot-node/wiki/Integration-Instructions
The DC sends a replication request acknowledgement to all nodes that sent a replication request.
The DC does not send a replication request acknowledgement for offer_id, which prevents the job bidding process from continuing, so the DH is not able to bid successfully on the job. This happens on average on 5-10% of the jobs during the day. With adoption and the presence of 50,000-100,000 nodes within the next years, this behaviour could be further exacerbated, to the point where the DC's traffic during job bids is assessed as DDoS attack behaviour and the DC is cancelled by its hosting provider.
To discuss whether the job replication process can be improved. I.e. the DC sends a basic check to obtain the identity/lambda settings of nodes every 30-60 minutes, chooses the winners of the job, and replicates only to those. Should the replication after winning the job be unsuccessful, another winner is chosen from the list (which could resolve issue #1505).
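The proposed flow could be sketched roughly like this (a minimal JavaScript sketch; the function name and the tryReplicate callback are hypothetical, not the actual ot-node API):

```javascript
// Hypothetical sketch of "replicate only to chosen winners, with fallback".
// candidates are assumed to be ordered by the DC's selection criteria
// (e.g. the periodic identity/lambda checks described above).
function replicateToWinners(candidates, replicationsNeeded, tryReplicate) {
  const holders = [];
  for (const node of candidates) {
    if (holders.length === replicationsNeeded) break;
    // If replication to a chosen winner fails, fall through to the next candidate.
    if (tryReplicate(node)) holders.push(node);
  }
  return holders;
}

// Example: the second candidate fails, so the fourth one takes its slot.
const reachable = new Set(['dh1', 'dh3', 'dh4']);
console.log(replicateToWinners(['dh1', 'dh2', 'dh3', 'dh4'], 3, n => reachable.has(n)));
// → [ 'dh1', 'dh3', 'dh4' ]
```

This would avoid sending replication traffic to every bidder, which is the behaviour flagged above as potential DDoS-like load.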
To be able to make a backup of a running node without any data corruption in SQLite.
If your node is busy during the backup you can get a copy of a corrupted system.db SQLite database, because it was copied mid-transaction.
I encountered this when I restored one of my nodes 4 weeks ago. I can think of about 6 other users in the last few weeks who have hit this while migrating servers.
We have only recently figured out where the problem comes from.
You should never back up a database by copying its files while it is running. Unfortunately, the backup.js script does a file copy of system.db, and if any write/transaction happens on the file during the copy, the copy gets corrupted.
https://www.sqlite.org/howtocorrupt.html
https://www.quackit.com/sqlite/tutorial/backup_a_database_to_file.cfm
Currently node runners have to go back to the original server the backup came from and run SQLite backup commands that are safe to use while transactions are running on the same file.
I followed the steps mentioned in
https://docs.origintrail.io/en/latest/Running-a-Node/basic-setup.html
I tried both ways
=> via docker
=> manual installation
in both cases I am not able to find the cause of the node not running on my end.
Docker runs for a few seconds and then stops. I tried via docker start -i otnode,
but found no clue as to why it stops.
Terminal logs for: docker start -i otnode
Terminal logs for: npm start
I tried configuring it as in https://github.com/OriginTrail/ot-node/blob/release/mainnet/.origintrail_noderc.image
and
in this way (but no luck):
{
"node_wallet": "0x4ff99b5d96035611cd5866d776538877",
"node_private_key": "aafff60551908d34d62c7746ab6c394e1dc6b43dcf58350f15abf6d59c4",
"management_wallet": "04ec7cc16bcbd873d3f5015e6f6263c3dd98a6ad48e48b5ec76f1ebfe699254303adf02ddb3f48a0bfb5cdce87796e851265da2cad77f9de03ed4fb",
"network": {
"hostname":"127.0.0.1",
"remoteWhitelist":[
"0.0.0.0",
"127.0.0.1"
]
},
"blockchain": {
"rpc_server_url": "https://1pxlHB8a44CZZzzHASMIzQob:[email protected]",
"implementations": [
{
"blockchain_title": "Ethereum",
"blockchain_id": "ethr:mainnet",
"rpc_server_url": "https://1pxlHB8a44CZZzzHAX1MIzQob:[email protected]",
"node_wallet": "0x4ff99b5d96035611caf81b16d586d7765038877",
"node_private_key": "aafff60551908d3f8df4d62c7746b6394e1dc6b43dcf583650f15abf6d59c4",
"management_wallet": "04ec7cc16bcbd873d3f5015e6f6263c3dd98a6abd7fad48e4b5ec76d8ae98f1eb699254303adf02ddb3f48a0bfb5cdce87796e851265da2cad77f9de03ed4fb"
},
{
"blockchain_title": "xDai",
"blockchain_id": "xdai:mainnet",
"rpc_server_url": "https://1pxlHB8a44CZZzzASX21MIzQob:[email protected]",
"node_wallet": "0x4ff99b5d96035611caf81b16d5866d776508877",
"node_private_key": "aafff60551908d3f8df4d62c7746ab6c394e1dc6b43dcf583650f1abf6d59c4",
"management_wallet": "04ec7cc16bcbd873d3f5015e6f6263c3dd98a6abd7fad4e48b5ec76dae98f1ebfe9254303adf02ddb3f48a0bfb5cdce87796e851265da2cad77f9de03ed4fb"
}
]
}
}
Note: the above addresses/values are not real.
Any direction regarding this would be appreciated.
I propose the following improvement to the backup process. This will reduce costs on AWS, reduce space usage on the VPS server, and reduce the risk of maxing out the available space.
OT-Node uses different wallet addresses from the configuration file for normal functioning. The node wallet is the operational wallet, with a corresponding private key, which the OT-Node uses for dataset signing, creating and finalizing offers, payouts, and initiating, answering, and completing litigation. The management wallet is used when a new identity is generated, and from that point on it is not used.
OT-Node wallet configuration parameters could be entered incorrectly and cause problems with communication, since they are not checked when creating a node profile. Adding additional format checks for wallets, such as a hex string with a 0x prefix for addresses and a hex string without a 0x prefix for the private key, would make the OT-Node more error resistant.
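A minimal sketch of the proposed check, assuming standard 20-byte Ethereum addresses and 32-byte private keys (the field names follow the configuration file, but the validation function itself is illustrative):

```javascript
// Sketch of the proposed startup validation for wallet configuration values.
function validateWalletConfig({ node_wallet, node_private_key, management_wallet }) {
  const errors = [];
  const address = /^0x[0-9a-fA-F]{40}$/;   // 0x followed by 40 hex characters
  const privateKey = /^[0-9a-fA-F]{64}$/;  // 64 hex characters, no 0x prefix
  if (!address.test(node_wallet)) errors.push('node_wallet must be a 0x-prefixed hex address');
  if (!address.test(management_wallet)) errors.push('management_wallet must be a 0x-prefixed hex address');
  if (!privateKey.test(node_private_key)) errors.push('node_private_key must be 64 hex characters without 0x');
  return errors; // empty array means the configuration passes the format check
}

console.log(validateWalletConfig({
  node_wallet: '0x' + 'ab'.repeat(20),
  node_private_key: 'cd'.repeat(32),
  management_wallet: '0x' + 'ef'.repeat(20),
})); // → []
```

On startup, a non-empty result would be reported and the node would refuse to create a profile.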
Level: Easy
An OT-Node startup should fail and report an error message if one of the following scenarios occurs:
OT-Node entry point in standard environment:
Line 86 in bbc5c99
OT-Node entry point in Docker environment:
ot-node/testnet/register-node.js
Line 278 in bbc5c99
If you have questions ask in this issue or on your pull request (if you've created one).
Issue: When running
sudo docker run -i --name=otnode -p 8900:8900 -p 5278:5278 -p 3000:3000 -v ~/.origintrail_noderc:/ot-node/.origintrail_noderc quay.io/origintrail/otnode-mariner:release_mariner
it says
Wallet not provided! Please provide valid wallet. INFO exited: otnode (exit status 0; expected)
On top of that (at least on Bash for Windows), the process doesn't kill itself after exiting, i.e. after
INFO exited: otnode (exit status 0; expected)
Note: the operational wallet is currently empty, since I am basically making sure everything works smoothly before I start.
While I'm at it, is there any way to be notified of any node failure (e.g. some specific error)?
With your help, I was able to import XML data to the Supply Chains test network. All my data from "example_gs1.xml" was successfully imported to ArangoDB and the fingerprint of this data was written to the blockchain. So, how can I validate the data I imported against the blockchain using the fingerprint or block number?
This is my first scenario:
1/ I import data to Supply Chains; everything succeeds. I have the fingerprint and block number for this data.
2/ After two years, I have the XML file containing the data from step 1, and I need to validate whether the data in the XML file is still original or has been modified.
This is my second scenario:
1/ I set up the Supply Chains network with 5 nodes (each node is a PC running on a different network).
2/ From each node I import the data for specific products (egg or carrot), but at different times.
3/ How can I get the "chains" for specific products (egg or carrot) from Supply Chains and validate my XML file against this data?
So how can I do that?
Thank you.
Whenever the TRAC price is shown in logs, it should be displayed with proper decimal placement.
It is currently shown in microTRAC, with no decimal placement.
Logs example:
2021-05-19T14:14:14.722Z - trace - Calculated offer price for data size: 0.117957MB, and holding time: 90 days, PRICE: 461464187049431230[mTRAC]
2021-05-19T14:14:14.723Z - info - Accepting offer with price: 2304700654612025900 TRAC.
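A minimal sketch of the proposed formatting, assuming the logged values are raw token amounts with 18 decimals (the function name is illustrative):

```javascript
// Sketch: format a raw 18-decimal token amount for human-readable logging.
function formatTrac(raw) {
  const value = BigInt(raw);
  const base = 10n ** 18n; // assumed 18 decimals for TRAC
  const whole = value / base;
  const frac = (value % base).toString().padStart(18, '0').replace(/0+$/, '');
  return frac ? `${whole}.${frac}` : `${whole}`;
}

console.log(formatTrac('2304700654612025900'));  // → "2.3047006546120259"
console.log(formatTrac('461464187049431230'));   // → "0.46146418704943123"
```

With this, the two log lines above would read 0.46146418704943123 TRAC and 2.3047006546120259 TRAC respectively.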
Please be aware that the issue reported on a public repository allows everyone to see your node logs, node details, and contact details. If you have any sensitive information, feel free to share it by sending an email to [email protected].
An OT-Node acting as a data creator spends ETH on blockchain transactions and TRAC for the chosen data holders. If the data creator doesn't have sufficient funds in the wallet, an error occurs.
The error severity is the same as if there were an actual error during transaction execution. Since low funds are not an error but rather a circumstance, errors thrown because of low funds should be caught and logged with a lower severity level, and without sending an error report to BugSnag.
Level: Easy
When a transaction fails because of low funds, an OT-Node should meet the following:
Handling insufficient funds for offer preparation:
Handling insufficient funds for offer finalization:
Handling insufficient funds for offer mining:
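The scenarios above share one pattern, sketched here; this is not the actual ot-node code, and the error-matching string and the logger/BugSnag call shapes are illustrative:

```javascript
// Sketch: downgrade insufficient-funds failures instead of treating them as errors.
function handleTransactionError(err, logger, bugsnag) {
  if (/insufficient funds/i.test(err.message)) {
    // Low funds are a circumstance, not a bug: warn locally, skip the BugSnag report.
    logger.warn(`Transaction postponed, wallet balance too low: ${err.message}`);
    return 'warn';
  }
  logger.error(`Transaction failed: ${err.message}`);
  bugsnag.notify(err);
  return 'error';
}

// Demo with stubbed logger and reporter:
const events = [];
const logger = { warn: m => events.push(['warn', m]), error: m => events.push(['error', m]) };
const bugsnag = { notify: () => events.push(['bugsnag']) };
console.log(handleTransactionError(new Error('insufficient funds for gas * price + value'), logger, bugsnag));
// → "warn", and nothing is sent to BugSnag
```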
If you have questions ask in this issue or on your pull request (if you've created one).
When I import a WOT file, for example https://github.com/OriginTrail/ot-node/blob/develop/importers/json_examples/WOT_Example_1.json,
and then investigate the vertices and edges in the data that I get from GET /api/query/local/{import_id},
I should see vertices or edges that contain the readPoint or the observedObjects; otherwise the examples should not list that data. This would make it easy to query the WOT data.
The WOT contains:
"readPoint": {
"id": "urn:epc: id: sgln: Building_1"
}
"observedObjects": [
"urn:epc: id: sgtin: Batch_1"
]
The graph has no vertices or edges that contain the readPoint or the observedObjects.
Without readPoint or observedObjects it is hard to query the data from WOT files.
Also, for some reason the id contains "Invalid_Date", like "urn:ot:mda: actor: id: Company_1: Invalid_Date".
I attached the response with the vertices/edges.
1. Locally import a JSON file from this repo, for example https://github.com/OriginTrail/ot-node/blob/develop/importers/json_examples/WOT_Example_1.json
2. Get the graph data with GET /api/query/local/{import_id}.
Datasets are deleted as soon as the DC has confirmed that the node has not won the job.
The nodes store the data from the jobs they bid on, which increases the space used. During the initial jobs on the xDAI network, the space used increases by 300MB per day. The expected growth of the network to 450k jobs in 2021 would require over 200GB of space per server to hold the ODN data. Extrapolating further over the years, this approach is not sustainable.
In OT-Node the gas limit is the same for every transaction. It is defined by the gas_limit parameter in the configuration, with a default value of 2,000,000. The default value is set extraordinarily high because the createProfile function deploys a new smart contract, which is an expensive operation. Profile creation is invoked on OT-Node startup if the profile doesn't exist on the blockchain. After the profile is created, all other transactions executed by the OT-Node use considerably less gas, around 450,000 or possibly even less.
OT-Node always uses the high gas limit, which can cause transactions to be rejected, since the required amount of ETH is calculated with the extraordinarily high gas limit rather than with how much is actually necessary. Reducing the gas limit for transactions would allow an OT-Node to create and finalize offers with less balance required. For example, if a transaction uses 200,000 gas at a gas price of 10 GWei, the transaction will cost 0.002 ETH to execute. However, if the user sets the gas limit to 2,000,000 gas for that transaction, the wallet will be required to hold 0.02 ETH, even though the end cost will be 0.002 ETH.
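The arithmetic in the example above can be checked directly:

```javascript
// Transaction cost in wei = gas used × gas price (gas price given in gwei here).
const GWEI = 10n ** 9n;

function weiCost(gasUsed, gasPriceGwei) {
  return gasUsed * gasPriceGwei * GWEI;
}

// Actual cost: 200,000 gas at 10 gwei
console.log(weiCost(200000n, 10n));   // → 2000000000000000n wei = 0.002 ETH
// Balance required up front with a 2,000,000 gas limit at the same price:
console.log(weiCost(2000000n, 10n));  // → 20000000000000000n wei = 0.02 ETH
```

The required balance is thus 10× the real cost whenever the limit is 10× the gas actually used.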
Level: Medium
The OT-Node configuration should contain a gas_limit parameter used for all blockchain transactions except createProfile, for which the gas limit should be hardcoded to 2,000,000.
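In sketch form (names are illustrative, not the actual ot-node code):

```javascript
// Sketch of the acceptance criterion: hardcode createProfile, use config elsewhere.
const CREATE_PROFILE_GAS_LIMIT = 2000000; // contract deployment is the expensive case

function gasLimitFor(method, config) {
  return method === 'createProfile' ? CREATE_PROFILE_GAS_LIMIT : Number(config.gas_limit);
}

console.log(gasLimitFor('createProfile', { gas_limit: '500000' })); // → 2000000
console.log(gasLimitFor('createOffer', { gas_limit: '500000' }));   // → 500000
```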
Ethereum blockchain service:
https://github.com/OriginTrail/ot-node/blob/develop/modules/Blockchain/Ethereum/index.js
Sending transactions to the blockchain using web3 provider:
If you have questions ask in this issue or on your pull request (if you've created one).
Successful import on the regular node.
I followed this tutorial, but I cannot import XML files. This is the error on the regular node, and this is the information on the Bootstrap node. Why does the regular node show the message "Transaction ran out of gas. Please provide more gas."?
Wallet address for the Bootstrap node: 0x5735BA41852f309dF11aF2C27F9c49696eBE66B1
Wallet address for the Regular node: 0x5359E7544d48db78172F10FfA322a1d1b25E9259
With version 2.0.52 the log shows 4 blockchain calls after the node recognizes a finalized offer (whether it was chosen or not). These 4 calls happen for every offer the node was chosen for in the past. Is this sustainable as the number of won offers grows?
2019-04-09T16:16:11.687Z - trace - getHolderLitigationEncryptionType(offer=0x12345..., holderIdentity=0x6789a...)
2019-04-09T16:16:11.928Z - trace - getOffer(offerId=0x12345...)
2019-04-09T16:16:12.513Z - trace - getHolderPaidAmount(offer=0x12345..., holderIdentity=0x6789a...)
2019-04-09T16:16:12.788Z - trace - getHolderStakedAmount(offer=0x12345..., holderIdentity=0x6789a...)
It should be possible to create an xDai node regardless of how busy the xDai blockchain is.
When gas prices on xDai creep up, it is impossible to create a node on xDai because the node uses a fixed gas price.
trace - Sending transaction to blockchain, nonce 0, balance is 5991296853149283761
xDai has this for the default values in the config.json:
"gas_limit": "2000000", "gas_price": "1000000000",
This equates to 1 gwei for the gas price. Looking at the screenshot below, we can see xDai is already at over 80% gas utilisation in blocks, mostly from one project.
Gas prices have been seen where 1 gwei is no longer sufficient, so transactions don't get picked up.
This leads to the problem that unless you manually change the node config to a different gas_price for xDai, it is stuck at 1 gwei. This blocked a user setting up their node, and there are no warnings or errors; the otnode just seemingly gets stuck.
It should also be said that blockscout.com doesn't track pending transactions very well either, so the experience of figuring out what's happening with your wallet there is poor.
Ethereum has support for reading ETH Gas Station to get gas prices, as seen here.
xDai has no support for dynamic gas prices and instead hardcodes the value, as seen here:
ot-node/modules/Blockchain/XDai/index.js
Line 31 in 277c255
I think we need to find an equivalent service for getting xDai gas prices, or find other creative ways of calculating this.
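One possible shape for such a change, as a sketch only; the oracle response format here is hypothetical:

```javascript
// Sketch: prefer a dynamic gas price from some oracle, falling back to the
// current hardcoded value when the oracle is unavailable or returns garbage.
const DEFAULT_XDAI_GAS_PRICE = 1000000000; // 1 gwei, today's fixed value

function chooseGasPrice(oracleResult, fallback = DEFAULT_XDAI_GAS_PRICE) {
  const fast = oracleResult && Number(oracleResult.fast); // assume price in wei
  return Number.isFinite(fast) && fast > 0 ? fast : fallback;
}

console.log(chooseGasPrice({ fast: 2000000000 })); // → 2000000000 (2 gwei from the oracle)
console.log(chooseGasPrice(null));                 // → 1000000000 (fallback)
```

Keeping the 1 gwei value as a fallback means node creation never gets worse than today, while an oracle (when one exists for xDai) lifts the price when blocks fill up.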
The installation process should stop if the management wallet is a blank wallet.
Currently, the installation continues regardless.
I propose adding an additional check. The easiest I can think of is to check whether the management wallet has any ETH/xDAI on it. If it doesn't, the user is prompted to add some to confirm that the management wallet is correct, or to update the configuration with the correct one.
Without such a check, the user can lose their TRAC during the installation process.
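A minimal sketch of the proposed installer check, assuming the installer can query the management wallet balance in wei (the function name and message are illustrative):

```javascript
// Sketch: refuse to continue installation when the management wallet looks blank.
function checkManagementWallet(balanceWei) {
  if (BigInt(balanceWei) === 0n) {
    // Likely a blank or mistyped wallet: stop and ask the user to confirm it.
    return 'Management wallet has no ETH/xDAI. Fund it or correct the address before continuing.';
  }
  return null; // wallet looks usable, continue installation
}

console.log(checkManagementWallet('0'));                   // warning message
console.log(checkManagementWallet('1000000000000000000')); // → null
```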
I followed the instructions in the git repo for setting up the warp node (v5) on both AWS - Ubuntu version 20.04 and Azure - Ubuntu version 18.04 (Bionic). The node runs fine on Azure / 18.04 but fails on AWS / 20.04 with the error below. As noted, the node seems to run fine on the older Ubuntu version so I'm not sure there is any need to fix this, but thought the info could be useful (at least for setup guides).
I expect, when I run (**** replaces my real public IP address):
docker run --log-driver json-file --log-opt max-size=1g --name=otnode --hostname=**** -p 8900:8900 -p 5278:5278 -p 3000:3000 -e LOGS_LEVEL_DEBUG=1 -e SEND_LOGS=1 -v /root/certs/:/ot-node/certs/ -v /root/.origintrail_noderc:/ot-node/.origintrail_noderc -v /root/wallets/kovan.json:/ot-node/data/wallets/kovan.json -v /root/wallets/rink.json:/ot-node/data/wallets/rink.json quay.io/origintrail/otnode-test:feature_blockchain-service
to see the node fire up and start eating my test coins, but instead I get:
(more logs up here, but no errors)...
arango: stopped
2021-02-19 03:01:23,481 INFO spawned: 'arango' with pid 312
2021-02-19 03:01:24,484 INFO success: arango entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
arango: started
2021-02-19T03:01:34.502Z - notify - One-time password migration completed. Lasted 24730 millisecond(s)
2021-02-19T03:01:34.571Z - error - Whoops, terminating with code: 1
2021-02-19 03:01:34,579 INFO exited: otnode (exit status 1; expected)
2021-02-19 03:01:35,581 WARN received SIGTERM indicating exit request
2021-02-19 03:01:35,581 INFO waiting for remote_syslog, arango, otnodelistener to die
2021-02-19 03:01:36,325 INFO stopped: arango (exit status 0)
See above.
Received "Import success" messages after importing XML files.
Did not receive "Import success" messages. You can check this photo; I have 18 ETH on my test wallet. Here is my address: 0x5735BA41852f309dF11aF2C27F9c49696eBE66B1
Can create a new smart contract. This is a new feature of version 0.6.0a.
Did not know how to create a new smart contract. Tutorials or a demo of this behaviour are needed.
I imported the XML data successfully, but the data fingerprint cannot be written to the blockchain. How can I fix this?
You can check my screenshot
Cannot deposit TRAC from the management wallet to my profile at node-profile.origintrail.io.
"profile": {
"minimalStake": "1000",
"reserved": "0",
"staked": "1000"
},
The token balance of my operational wallet is 850, but I cannot send TRAC to my profile.
How do I send TRAC to my profile?
After getting to the last part of the install, I get the following warnings.
I'm using Amazon Linux 2.
$ npm install
npm WARN deprecated [email protected]: 🙌 Thanks for using Babel: we recommend using babel-preset-env now: please read babeljs.io/env to update!
npm WARN deprecated [email protected]: Use uuid module instead
> [email protected] prepare /home/ec2-user/workspace/ot-node
> npm run snyk-protect
> [email protected] snyk-protect /home/ec2-user/workspace/ot-node
> snyk protect
Successfully applied Snyk patches
npm WARN [email protected] requires a peer of ajv@^6.0.0 but none is installed. You must install peer dependencies yourself.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
up to date in 170.072s
Hi, after your new commit I cannot find .env in your source code. Did you switch to config.json instead of .env?
If so, how can I add WALLET_ADDRESS and PRIVATE_KEY? I am following this documentation.
Thank you.
OT-Node can act both as a data holder and as a data creator. Data creators import datasets into the local knowledge graph and replicate data by sending offers on the blockchain. Data holders listen for blockchain events and accept offers if the conditions are fulfilled.
In some cases it is useful to disable listening for new-offer blockchain events. The proposal is to create a configuration parameter which makes the OT-Node automatically reject offers by not listening to the blockchain events for offers newly created by other nodes on the network.
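A minimal sketch of the idea; the flag name ignore_new_offers and the event-bus API here are hypothetical, not the actual ot-node implementation:

```javascript
// Sketch: guard the new-offer event subscription behind a config flag.
function subscribeToOffers(config, eventBus, onOfferCreated) {
  if (config.ignore_new_offers === true) {
    // Never subscribing means every new offer is effectively rejected.
    return false;
  }
  eventBus.on('OfferCreated', onOfferCreated);
  return true;
}

// Demo with a stubbed event bus:
const subscribed = [];
const bus = { on: event => subscribed.push(event) };
console.log(subscribeToOffers({ ignore_new_offers: true }, bus, () => {})); // → false
console.log(subscribeToOffers({}, bus, () => {}), subscribed);              // → true [ 'OfferCreated' ]
```

The advantage over filtering offers after the fact is that a disabled node does no event processing at all.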
Level: Medium
An OT-Node should have a configuration parameter that when enabled should meet the following:
Blockchain service where new offers are handled:
Line 694 in 9a21b6c
If you have questions ask in this issue or on your pull request (if you've created one).
I followed this documentation to import XML data to ot-node. But how can I retrieve that data after a successful import? I used the example_v1.5.xml demo data for the import.
Thank you.
arangodb3 should be able to keep running when there is at least 7.5GB of free space on the server.
In the otnode logs, I am getting this error:
ArangoError: IO error: No space left on deviceWhile appending to file: /var/lib/arangodb3/engine-rocksdb/002001.sst: No space left on device
In the journal log:
2021-07-07T23:01:17Z [4323] WARNING [82af5] {statistics} could not commit stats to _statisticsRaw: IO error: No space left on deviceWhile appending to file: /var/lib/arangodb3/engine-rocksdb/002001.sst: No space left on device
Jul 08 01:01:17 OTNODE12 arangod[4323]: 2021-07-07T23:01:17Z [4323] WARNING [82af5] {statistics} could not commit stats to _statisticsRaw: IO error: No space left on deviceWhile appending to file: /var/lib/arangodb3/engine-rocksdb/002001.sst: No space left on device
error - Unhandled Rejection:
Jul 08 02:02:27 OTNODE12 node[4964]: ArangoError: IO error: No space left on deviceWhile appending to file: /var/lib/arangodb3/engine-rocksdb/002001.sst: No space left on device
Jul 08 02:02:27 OTNODE12 node[4964]: at new ArangoError (/ot-node/5.0.4/node_modules/arangojs/lib/error.js:30:15)
Jul 08 02:02:27 OTNODE12 node[4964]: at /ot-node/5.0.4/node_modules/arangojs/lib/connection.js:204:21
Jul 08 02:02:27 OTNODE12 node[4964]: at callback (/ot-node/5.0.4/node_modules/arangojs/lib/util/request.node.js:52:9)
Jul 08 02:02:27 OTNODE12 node[4964]: at IncomingMessage. (/ot-node/5.0.4/node_modules/arangojs/lib/util/request.node.js:63:11)
Jul 08 02:02:27 OTNODE12 node[4964]: at IncomingMessage.emit (events.js:185:15)
Jul 08 02:02:27 OTNODE12 node[4964]: at IncomingMessage.emit (domain.js:422:20)
Jul 08 02:02:27 OTNODE12 node[4964]: at endReadableNT (_stream_readable.js:1106:12)
Jul 08 02:02:27 OTNODE12 node[4964]: at process._tickCallback (internal/process/next_tick.js:178:19)
Include a data privacy statement. Document how the legal requirements of the GDPR and the China Cyber Security Law are implemented.
In particular, document how the right to be forgotten will be implemented if person-related data is saved to an immutable blockchain. Here it does not matter that the person-related data is encrypted.
From the graph structure I can see that the OriginTrail "databases" will deal with person-related data defined by the object class actor.
This object will very likely hold person-related data to which the GDPR or the China Cyber Security Law applies. These legislations require compliance in the software implementation.
https://github.com/OriginTrail/ot-node/wiki/Graph-structure-in-OriginTrail-Data-Layer---version-1.0
No data privacy statement is included in the Git repository or the wiki.
https://gdpr-info.eu/art-17-gdpr/
https://assets.kpmg.com/content/dam/kpmg/cn/pdf/en/2017/02/overview-of-cybersecurity-law.pdf
Provide an operations concept for how a critical security patch will be applied within the OT network without impacting overall network performance and availability.
In general, will upgrading the OT node, ArangoDB, or Node.js require server downtime?
Imagine the following case:
The ArangoDB or Node.js software component has a severe security issue and requires an immediate security patch.
E.g. all OT-nodes will have to be security-patched ASAP.
How will OT ensure:
That all operators of OT-nodes apply the security patch in the shortest time frame possible?
How will OT ensure overall availability of the network during the security patching process? Assuming a data replication factor of 3-5, how will OT ensure that the 5 nodes holding a replicated record are not security-patched in parallel and therefore down at the same time? E.g. data would not be available for queries due to the impact of the security-patch upgrade.
What will happen with nodes that don't apply the security patch, for example, within 48 hours?
How will OT ensure and validate the software version running on OT nodes operated by external parties?
No concept for security upgrades or patches is included.
Running OTnode on the testnet network.
My logs when running OTnode:
2019-03-05T02:42:36.646Z - trace - Sending transaction to blockchain, nonce 9, balance is 45327694480943374925824
2019-03-05T02:42:49.326Z - error - Failed to create profile
Error: Error: Transaction has been reverted by the EVM:
OT-Node has two options for executing payOut transactions in order to retrieve the TRAC earned for all held jobs. If automatic payouts are enabled (controlled by the disableAutoPayouts configuration parameter), the payOut transaction is executed immediately after the holding period. Otherwise, it can be executed manually, one job at a time, through the API.
In some cases automatic payouts can be highly inefficient due to gas price variations. The proposal is to create an AutoPayout service which keeps track of the amount of TRAC the OT-Node would earn after holding periods, and publishes a single payOutMultiple transaction for all of its completed jobs once the amount is greater than a threshold defined as a configuration parameter.
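A minimal sketch of the proposed service, not the actual implementation; class and method names are illustrative, and the threshold is in raw TRAC units:

```javascript
// Sketch of the proposed AutoPayout accumulator.
class AutoPayoutService {
  constructor(thresholdTrac) {
    this.threshold = BigInt(thresholdTrac);
    this.pending = []; // completed jobs not yet paid out
  }

  // Called when a job's holding period ends. Returns the offer IDs to pass to a
  // single payOutMultiple transaction once enough TRAC has accumulated, else null.
  addCompletedJob(offerId, earnedTrac) {
    this.pending.push({ offerId, earned: BigInt(earnedTrac) });
    const total = this.pending.reduce((sum, job) => sum + job.earned, 0n);
    if (total < this.threshold) return null;
    const offerIds = this.pending.map(job => job.offerId);
    this.pending = [];
    return offerIds;
  }
}

const service = new AutoPayoutService('100');
console.log(service.addCompletedJob('0xaaa', '40')); // → null, still below threshold
console.log(service.addCompletedJob('0xbbb', '70')); // → [ '0xaaa', '0xbbb' ]
```

Batching the payouts this way amortizes gas over many jobs, which is the efficiency gain described above.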
Level: Hard
An OT-Node AutoPayout service should create payOutMultiple transactions once the amount of earned TRAC is greater than an amount defined as a configuration parameter.
One time payout migration on OT-Node startup:
Line 464 in bbc5c99
Automatic payouts on offer finalization:
Payout command:
https://github.com/OriginTrail/ot-node/blob/develop/modules/command/dh/dh-pay-out-command.js
An example of service for fetching gas price as a reference:
https://github.com/OriginTrail/ot-node/blob/develop/modules/service/gas-station-service.js
If you have questions ask in this issue or on your pull request (if you've created one).
Storage on my VPS was fully consumed overnight by a process that wrote 25GB+ of data, possibly a backup or arangodb, causing the node running on this VPS to terminate. My nodes run on a VPS tier with a 55GB SSD, and while storage utilization has increased over time, it has so far hovered around 50%, so the storage filling up overnight was unexpected.
I was able to free up storage by deleting a very large backup file, leaving approx 20GB of storage free. When attempting to start the node now, I receive the following error: warn - Failed to load contracts on all blockchain implementations. Error: Invalid JSON RPC response: ""
Up to this point I've done a bit of troubleshooting: restarting my VPS, verifying the configuration in the .origintrail_noderc file looks fine (especially the RPC endpoint URL I have entered for Ethereum), and attempting to roll back my entire VPS to a snapshot from 4-5 days earlier.
While I can't reproduce this issue on demand, at least one other user is experiencing the same error after their VPS storage filled up. Two of my other nodes also experienced this sudden increase in storage utilization, but both of those nodes are able to start after clearing up storage space.
(node:14) Warning: N-API is an experimental feature and could change at any time.
2021-06-24T19:45:34.534Z - info - npm modules dependencies check done
2021-06-24T19:45:34.535Z - info - ot-node folder structure check done
2021-06-24T19:45:34.537Z - important - Running in mainnet environment.
2021-06-24T19:45:34.561Z - info - Using existing graph database password.
2021-06-24T19:45:34.596Z - info - Arango server version 3.5.3 is up and running
2021-06-24T19:45:34.612Z - info - Storage database check done
2021-06-24T19:45:35.309Z - info - [ethr:mainnet] Selected blockchain: Ethereum
2021-06-24T19:45:35.320Z - info - [xdai:mainnet] Selected blockchain: xDai
2021-06-24T19:45:35.327Z - trace - [ethr:mainnet] Asking Hub for Holding contract address...
2021-06-24T19:45:35.359Z - trace - [xdai:mainnet] Asking Hub for Holding contract address...
2021-06-24T19:45:55.384Z - warn - Failed to load contracts on all blockchain implementations. Error: Invalid JSON RPC response: ""
2021-06-24 19:45:55,410 INFO exited: otnode (exit status 0; expected)
2021-06-24 19:45:56,414 WARN received SIGTERM indicating exit request
2021-06-24 19:45:56,418 INFO waiting for remote_syslog, arango, otnodelistener to die
2021-06-24 19:45:57,413 INFO stopped: arango (exit status 0)
2021-06-24 19:45:58,419 INFO stopped: remote_syslog (terminated by SIGTERM)
2021-06-24 19:45:58,420 INFO stopped: otnodelistener (terminated by SIGTERM)
Provide a detailed concept and implementation examples for the large ERP software vendors showing how the XML export should be implemented, ideally without customization of the ERP. Customization moves the customer away from the standard and is therefore seen as bad practice.
A customer using Navision or SAP ERP will not be willing to implement the XML export from scratch or reinvent the XML-export wheel per customer.
There is no detailed description of how the XML export should be implemented for large ERP vendors like Navision, Infor, or SAP.
https://github.com/OriginTrail/ot-node/wiki/ERP-Customization
https://community.dynamics.com/ax/b/axvanyakashperuk/archive/2014/09/16/tutorial-generating-shipping-labels-using-the-gs1-sscc-18-barcode-format
https://help.sap.com/saphelp_me60/helpdata/EN/f7/86c1536ca9b54ce10000000a174cb4/frameset.htm
Hello, I get this issue when running the RPC node:
$ node ipc.js
IPC-RPC Communication server listening on port 3000
OriginTrail IPC server listening at http://127.0.0.1:8765
RPC Server connected
$ node rpc.js
Kademlia service listening...
OriginTrail RPC server listening at http://[::]:8888
Socket connected to IPC-RPC Communication server on port 3000
{"name":"kadtools","hostname":"ip-xxx-xx-xx-xx.ec2.internal","pid":3526,"level":40,"msg":"connect econnrefused 18.196.209.195:1778","time":"2018-02-25T16:23:19.278Z","v":0}
{"name":"kadtools","hostname":"ip-xxx-xx-xx-xx.ec2.internal","pid":3526,"level":40,"msg":"connect econnrefused 18.196.209.195:1778","time":"2018-02-25T16:23:19.279Z","v":0}
Kademlia connection to seed failed
{"name":"kadtools","hostname":"ip-xxx-xx-xx-xx.ec2.internal","pid":3526,"level":40,"msg":"failed to pull filter from 0000000000000000000000000000000000000001, reason: Timed out waiting for response","time":"2018-02-25T16:23:39.102Z","v":0}
Running OTnode on the testnet network:
2019-03-07T07:44:59.895Z - error - Please make sure Arango server is up and running
{ Error: connect ECONNREFUSED 127.0.0.1:8529
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 8529,
response: undefined }
2019-03-07T07:44:59.898Z - info - Notifying Bugsnag of exception...
I've being running a node for almost 2 months. I don't have a complaint about job distribution like most others. Although I think it is a issue that needs to be answered. And I'm hopeful it'll be fixed with Freedom update.
Before I get to the current issue I'm having, I would like to report an issue I had when I was first setting up my node. I tried to use the option in the below image to deposit ETH to my operational wallet.
https://gyazo.com/ec09f5bbbdbfa84a42696f17bff48878
Because it says Deposit ETH to your Operational Wallet. But it sent ETH to my ERC-725 identity instead. Which I later learned are lost forever. Obviously, I didn't know what I was doing. I went ahead with the transaction because it says Deposit ETH to your Operational Wallet and I thought that's what it's going to do. But somehow my erc-725 identity was in the text field. It should at least state that I need to put in my operational wallet address if it wasn't there. Here's the transaction on Etherscan,
https://etherscan.io/tx/0x37b373d8651cfadaebb5ce177d63c4c7cac4e8087e6793e7499b3956459cef51
I'm out 0.22 ETH and would like this reimbursed.
Now to the issue I'm currently having: I installed my node with Git, and the command cloned the master branch. The team is posting updates everywhere about updating to v2.0.50, so I tried, but it turns out v2.0.50 is on another branch; git pull still gives me v2.0.45.
Why do I need to switch branches? And if switching is the only option, I would like instructions on how to handle it properly.
Thanks!
Call POST /api/latest/replicate with bad data and get a validation error.
Call POST /api/latest/replicate with bad data and it turns the bad data into even crazier data.
Call POST /api/latest/replicate with the following data:
{ blockchain_id: balance.blockchain_id, dataset_id: status.data.dataset_id, holding_time_in_minutes: 60, litigation_interval_in_minutes: 120, token_amount_per_holder: "0.01" }
When I checked the replicate result API call, it showed -199 for token_amount_per_holder, which seems pretty random.
No one bid on the job because it was too expensive. I asked in Telegram and got the following from other users:
"2021-04-02T23:32:22.556Z - info - Accepting offer with price: 115792089237316195423570985008687907853269984665640564039457584007913129639737 TRAC."
Users were also getting warnings that they did not have enough TRAC (users have Telegram alerts for these, oops!).
Transaction for job creation: https://blockscout.com/xdai/mainnet/tx/0x9af67b3d3324c8320846984002b01c9c8ca2dac91bd0c2f05513db201b020237/internal-transactions
This mostly happened because I was winging it: I followed the parameters documented at https://app.swaggerhub.com/apis-docs/otteam/ot-node-api/v2.1#/replicate/post_replicate, but that page is missing 3 parameters (including token_amount_per_holder), and the API was giving me an error message saying the parameters were invalid.
So I took a quick look in the source code for these other 3 parameters and saw that token_amount_per_holder was a number. In hindsight I guess it's the gwei-equivalent number and not the human-readable number. I just went with it, thinking the API would stop me if I did it wrong.
This probably needs some more protection in the API code to stop me (or others) doing this again! I have agreed to making no more jobs for a while 😅.
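The amount mixup above can be illustrated with a small helper. This is a hypothetical sketch (`toTokenUnits` is not part of the ot-node API), assuming `token_amount_per_holder` expects the token amount as an integer in the smallest 18-decimal base unit, as with most ERC-20 style tokens:

```javascript
// Hypothetical helper (not ot-node code): convert a human-readable token
// amount string into its 18-decimal base-unit integer representation.
function toTokenUnits(humanReadable, decimals = 18) {
  const [whole, fraction = ''] = humanReadable.split('.');
  // Pad the fractional part out to `decimals` digits, then drop any excess.
  const paddedFraction = fraction.padEnd(decimals, '0').slice(0, decimals);
  return BigInt(whole + paddedFraction).toString();
}

console.log(toTokenUnits('0.01')); // 10000000000000000 (0.01 * 10^18)
```

Under this assumption, sending the raw string "0.01" instead of "10000000000000000" would explain why the node ended up with a nonsensical amount.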
https://blog.infura.io/infura-dashboard-transition-update-c670945a922a
I can see usage of the old API URI by doing a code search for https://mainnet.infura.io/
Nodes should see "I have been chosen" in their logs when a job is finalised multiple hours after it is initially created.
Nodes do not see anything in their logs about being chosen for a job if the job finalization comes too late.
In this log screenshot it mentions this job:
https://v5.othub.info/offers/0x5636360b994d7b1b94bed4d1a06671cabaa16ba5060e8d1774f073fbb7fcd577
The job was created on transaction: https://blockscout.com/xdai/mainnet/tx/0xd696fed85347ec28f11c9d93f612ebe97aa1393f12f7b9b23ac0366f3d256259
The job was finalised on transaction:
https://blockscout.com/xdai/mainnet/tx/0xf34e2526d226de9cee967954b7ab38c3b5e70e99d51cb7cd99a879fdd4a4ff2b
The problem is that there is a 4-hour gap between the 2 transactions. The data holders poll the RPC to see when the job becomes finalised, but they have a timeout and so miss the late finalisation. Users see nothing in their logs about winning; the first time they see it is on OT Hub, which adds lots of confusion.
I can't imagine that the nodes winning after these timeouts ("Offer has not been finalised" errors) are in a good state in terms of the data stored on them, given that they think they didn't win.
I'm aware of at least 5 different users in Telegram who have been confused by OT Hub showing jobs as won that they don't see in their logs.
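The polling problem described above could be sketched generically like this (this is not ot-node's actual code; `waitForFinalisation`, its parameters, and the defaults are hypothetical), using a long bounded retry window instead of a single short timeout so that a finalisation arriving hours later is still observed:

```javascript
// Hypothetical sketch: poll a finalisation check until it succeeds or a
// generous attempt budget runs out, instead of giving up after one timeout.
async function waitForFinalisation(isFinalised, { intervalMs = 60000, maxAttempts = 360 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt += 1) {
    // e.g. isFinalised() could query the RPC for the finalisation event
    if (await isFinalised()) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false; // with these defaults, gives up only after ~6 hours
}
```

With a check every minute for 360 attempts, the 4-hour gap in the example above would still fall inside the polling window.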
OT node starts up and /var/lib/arangodb3/engine-rocksdb/journals/ does not fill up with log files (or they are archived and garbage-collected).
The problem is split into 2 areas:
Report from 14/08:
**Last week I noticed I hardly won any jobs, so I logged into my nodes (14/08). They were still running, but 2 were producing errors and the space occupied was around 68GB, while it should be around 30GB. The logging showed it started on 8/8.**
2021-08-08 11:46:16,602 INFO exited: arango (exit status 1; not expected)
2021-08-08 11:46:17,642 INFO spawned: 'arango' with pid 53967
2021-08-08 11:46:18,646 INFO success: arango entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
The other error:
2021-08-12T07:34:44.320Z - error - Unhandled Rejection:
Error: Timed out waiting for response
at KademliaNode._timeout (/ot-node/5.1.0/node_modules/@deadcanaries/kadence/lib/node-abstract.js:260:15)
at Timeout.setInterval [as _onTimeout] (/ot-node/5.1.0/node_modules/@deadcanaries/kadence/lib/node-abstract.js:172:28)
at ontimeout (timers.js:466:11)
at tryOnTimeout (timers.js:304:5)
at Timer.listOnTimeout (timers.js:267:5)
The Arango logging started on 08/08 with
2021-08-08T11:46:16Z [12] ERROR [8a210] JavaScript exception in file '/usr/share/arangodb3/js/common/bootstrap/modules.js' at 68,37: ArangoError 2: cannot get current working directory: No such file or directory
2021-08-08T11:46:16Z [12] ERROR [409ee] ! const ROOT_PATH = fs.normalize(fs.makeAbsolute(internal.startupPath));
2021-08-08T11:46:16Z [12] ERROR [cb0bd] ! ^
2021-08-08T11:46:16Z [12] ERROR [8a210] JavaScript exception in file '/usr/share/arangodb3/js/common/bootstrap/modules.js' at 68,37: ArangoError 2: cannot get current working directory: No such file or directory
2021-08-08T11:46:16Z [12] ERROR [409ee] ! const ROOT_PATH = fs.normalize(fs.makeAbsolute(internal.startupPath));
2021-08-08T11:46:16Z [12] ERROR [cb0bd] ! ^
2021-08-08T11:46:16Z [12] FATAL [69ac3] {v8} error during execution of JavaScript file 'server/initialize.js'
2021-08-08T11:46:17Z [53967] INFO [e52b0] ArangoDB 3.5.3 [linux] 64bit, using jemalloc, build tags/v3.5.3-0-gf9ff700153, VPack 0.1.33, RocksDB 6.2.0, ICU 58.1, V8 7.1.302.28, OpenSSL 1.1.1d 10 Sep 2019
2021-08-08T11:46:17Z [53967] INFO [75ddc] detected operating system: Linux version 5.4.0-80-generic (buildd@lcy01-amd64-030) (gcc version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)) #90-Ubuntu SMP Fri Jul 9 22:49:44 UTC 2021
2021-08-08T11:46:17Z [53967] WARNING [118b0] {memory} maximum number of memory mappings per process is 65530, which seems too low. it is recommended to set it to at least 128000
2021-08-08T11:46:17Z [53967] WARNING [49528] {memory} execute 'sudo sysctl -w "vm.max_map_count=128000"'
2021-08-08T11:46:17Z [53967] INFO [43396] {authentication} Jwt secret not specified, generating...
2021-08-08T11:46:17Z [53967] INFO [144fe] using storage engine rocksdb
2021-08-08T11:46:17Z [53967] INFO [3bb7d] {cluster} Starting up with role SINGLE
2021-08-08T11:46:17Z [53967] INFO [a1c60] {syscall} file-descriptors (nofiles) hard limit is 1048576, soft limit is 1048576
2021-08-08T11:46:17Z [53967] INFO [3844e] {authentication} Authentication is turned on (system only), authentication for unix sockets is turned on
2021-08-08T11:46:17Z [53967] WARNING [d5c49] {engines} ignoring value for option --rocksdb.max-write-buffer-number
because it is lower than recommended
2021-08-08T11:46:27Z [53967] ERROR [8a210] JavaScript exception in file '/usr/share/arangodb3/js/common/bootstrap/modules.js' at 68,37: ArangoError 2: cannot get current working directory: No such file or directory
2021-08-08T11:46:27Z [53967] ERROR [409ee] ! const ROOT_PATH = fs.normalize(fs.makeAbsolute(internal.startupPath));
And today, when I try to restart the OT node:
2021-08-16T07:41:08Z [12] INFO [43396] {authentication} Jwt secret not specified, generating...
2021-08-16T07:41:08Z [12] INFO [144fe] using storage engine rocksdb
2021-08-16T07:41:08Z [12] INFO [3bb7d] {cluster} Starting up with role SINGLE
2021-08-16T07:41:08Z [12] INFO [a1c60] {syscall} file-descriptors (nofiles) hard limit is 1048576, soft limit is 1048576
2021-08-16T07:41:08Z [12] INFO [3844e] {authentication} Authentication is turned on (system only), authentication for unix sockets is turned on
2021-08-16T07:41:08Z [12] WARNING [b387d] found existing lockfile '/var/lib/arangodb3/LOCK' of previous process with pid 13, but that process seems to be dead already
2021-08-16T07:41:08Z [12] WARNING [d5c49] {engines} ignoring value for option --rocksdb.max-write-buffer-number
because it is lower than recommended
2021-08-16T07:41:24Z [12] INFO [6ea38] using endpoint 'http+tcp://0.0.0.0:8529' for non-encrypted requests
2021-08-16T07:41:27Z [12] INFO [e52b0] ArangoDB 3.5.3 [linux] 64bit, using jemalloc, build tags/v3.5.3-0-gf9ff700153, VPack 0.1.33, RocksDB 6.2.0, ICU 58.1, V8 7.1.302.28, OpenSSL 1.1.1d 10 Sep 2019
2021-08-16T07:41:27Z [12] INFO [75ddc] detected operating system: Linux version 5.4.0-81-generic (buildd@lgw01-amd64-052) (gcc version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)) #91-Ubuntu SMP Thu Jul 15 19:09:17 UTC 2021
2021-08-16T07:41:27Z [12] WARNING [118b0] {memory} maximum number of memory mappings per process is 65530, which seems too low. it is recommended to set it to at least 128000
2021-08-16T07:41:27Z [12] WARNING [49528] {memory} execute 'sudo sysctl -w "vm.max_map_count=128000"'
2021-08-16T07:41:27Z [12] INFO [43396] {authentication} Jwt secret not specified, generating...
2021-08-16T07:41:27Z [12] INFO [144fe] using storage engine rocksdb
2021-08-16T07:41:27Z [12] INFO [3bb7d] {cluster} Starting up with role SINGLE
2021-08-16T07:41:27Z [12] INFO [a1c60] {syscall} file-descriptors (nofiles) hard limit is 1048576, soft limit is 1048576
2021-08-16T07:41:27Z [12] INFO [3844e] {authentication} Authentication is turned on (system only), authentication for unix sockets is turned on
2021-08-16T07:41:27Z [12] WARNING [ad4b2] found existing lockfile '/var/lib/arangodb3/LOCK' of previous process with pid 12, and that process seems to be still running
2021-08-16T07:41:27Z [12] WARNING [d5c49] {engines} ignoring value for option --rocksdb.max-write-buffer-number
because it is lower than recommended
My node is unable to start up anymore, and /var/lib/arangodb3/engine-rocksdb/journals/ is filled with small log files taking a huge amount of space (41000 files of 1KB taking 40GB). Writing starts the moment the Docker container is active again after a reboot/restart. Normal space usage should be around 30GB, but my node is now at 70GB.
Every time my node is started I get the following error:
2021-08-16T07:40:22.288Z - error - Please make sure Arango server is up and running
{ Error: connect ECONNREFUSED 127.0.0.1:8529
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 8529,
response: undefined }
2021-08-16T07:40:22.290Z - error - Whoops, terminating with code: 1
2021-08-16 07:40:22,299 INFO exited: otnode (exit status 1; expected)
2021-08-16 07:40:23,301 WARN received SIGTERM indicating exit request
2021-08-16 07:40:23,301 INFO waiting for remote_syslog, arango, otnodelistener to die
2021-08-16 07:40:26,137 INFO stopped: arango (terminated by SIGTERM)
I tried rebooting, restarting, killing the process, and changing the swap file size from 1GB to 6GB; nothing helped.
However, strangely, ONE of the nodes purged the 40GB of log files and started working again after a reboot (yesterday).
Version: OT node 5.1.0
Platform: Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-81-generic x86_64) Linux version 5.4.0-81-generic (buildd@lgw01-amd64-052) (gcc version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)) #91-Ubuntu SMP Thu Jul 15 19:09:17 UTC 2021
ArangoDB 3.5.3 [linux] 64bit, using jemalloc, build tags/v3.5.3-0-gf9ff700153, VPack 0.1.33, RocksDB 6.2.0, ICU 58.1, V8 7.1.302.28, OpenSSL 1.1.1d 10 Sep 2019
DigitalOcean: 4GB, 80GB storage
Would you accept a PR that allows me to turn off the automatic ETH transaction payouts in the node?
The PR (at a glance) would probably change the /modules/command/dh/dh-pay-out-command.js file to check the config for a new setting in the execute function.
The reason for this is that I'm currently paying out offers for my nodes via the OT Hub website. Doing it this way, I can do the payout via the management wallet and at a low gas price, which has no effect on the node and its operational wallet.
Example of the cost savings from lowering gas price by doing payouts externally:
The default 20 Gwei gas price on the operational wallet costs 0.00316291 ETH ($0.49) per payout on average.
Doing the payout at 2 Gwei gas price on the management wallet costs 0.00032378 ETH ($0.05) per payout on average.
As a side note, OT Hub also checks all the conditions within the holding smart contract, so it will block the payout transaction from being sent if it would fail in the smart contract anyway.
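The proposed change could be sketched roughly as below. This is only an illustration: the flag name `disableAutoPayouts` is hypothetical, and the real command class in modules/command/dh/dh-pay-out-command.js has more dependencies than shown here.

```javascript
// Sketch of the proposed early-exit check, assuming a hypothetical config
// flag named disableAutoPayouts (not a confirmed ot-node setting).
class DhPayOutCommand {
  constructor({ config, logger }) {
    this.config = config;
    this.logger = logger;
  }

  async execute(command) {
    if (this.config.disableAutoPayouts) {
      this.logger.info('Automatic payouts are disabled in config, skipping pay-out.');
      return { commands: [] }; // schedule nothing; the operator pays out externally
    }
    // ...the existing automatic pay-out logic would run here...
    return { commands: [] };
  }
}
```

With the flag set, the node would never submit the payout transaction from the operational wallet, leaving it to be done externally (e.g. via OT Hub) at a lower gas price.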
ot-node should authenticate to the ArangoDB server.
When starting ot-node to run with ArangoDB, ot-node is unable to authenticate with the Arango server, throwing an error:
error: Please make sure Arango server is running before starting ot-node
Disabling authentication in arangod config solves the issue.
I found this issue that might be helpful: arangodb/arangodb#2313
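Instead of disabling authentication, the node could be given credentials for the database. A minimal configuration sketch is below; the key names and values are assumptions/placeholders, not confirmed ot-node config fields:

```json
{
  "database": {
    "provider": "arangodb",
    "username": "root",
    "password": "your-arangodb-password",
    "host": "localhost",
    "port": 8529
  }
}
```

If the node passed these credentials when opening its ArangoDB connection, authentication could stay enabled in the arangod config.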
A software project operated by a commercial entity should state under which software license the project is published.
https://opensource.org/licenses
The ot-node repository does not include a reference to the software or open-source license under which the project is published.
Browse the repository and search for a software license.
Commercial entities utilizing the OT network or protocol require legal documents that define the data ownership and general liabilities or warranties of the protocol and network.
E.g. include end-user agreement.
No document that clarifies data ownership or general legal liabilities and warranties when utilizing the OT network as a commercial organization.
No end user agreement.
Is OT usable for disaster relief, with a spotty internet connection?
What does it do with queued transactions (when an internet connection is unavailable)?
If not, what changes or enhancements would need to occur to make that happen?
The current Dockerfile creates a volume for /var/lib/arangodb, but the actual ArangoDB data is in /var/lib/arangodb3.
I'm not sure if this is intentional or a mistake.
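If the mismatch is unintentional, a minimal fix sketch (assuming the rest of the Dockerfile stays as-is) would be to declare the volume on the directory ArangoDB 3.x actually writes to:

```dockerfile
# Hypothetical fix: declare the volume where ArangoDB 3.x stores its data,
# so the database contents survive container recreation.
VOLUME /var/lib/arangodb3
```

The existing /var/lib/arangodb volume would then either be removed or kept only for backwards compatibility.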
Import an XML file named '01_Green_to_pink_shipment.xml'.
But I get the error: 'error: Caught exception: Error: listen EADDRNOTAVAIL 103.199.6.2:8900.'
1. Run the Docker container on Windows 10
2. Log in using the Houston app
3. Import '01_Green_to_pink_shipment.xml'
Link to import data page.
Redirects to node login page.
All commands are run as root.
Run docker run -it --name=otnode -p 8900:8900 -p 5278:5278 -p 3000:3000 -v /root/.origintrail_noderc:/ot-node/.origintrail_noderc quay.io/origintrail/otnode-mariner:release_mariner to set up the node.
Here are some logs
// this is ok during the initial setup
2019-04-17T04:23:09.801Z - info - Getting the identity
2019-04-17T04:23:09.802Z - info - Identity not provided, generating new one...
// calls the contract and generates an identity
.....
2019-04-17T04:14:04.119Z - info - OT Node started
2019-04-17T04:14:05.122Z - trace - Command cleanerCommand and ID fd591658-f151-4d1c-be2e-a30b4d238c4e started.
2019-04-17T04:14:05.137Z - trace - Command autoupdaterCommand and ID e17aeea7-5a01-47d6-9532-120524ad252c started.
2019-04-17T04:14:05.148Z - info - Checking for new node version
2019-04-17T04:14:05.336Z - trace - Version check: local version 2.0.54, remote version: 2.0.54
2019-04-17T04:14:05.339Z - info - No new version found
// since we're in interactive mode, I press CTRL-C
^C2019-04-17 04:14:24,057 WARN received SIGINT indicating exit request
2019-04-17 04:14:24,058 INFO waiting for remote_syslog, otnode, arango, otnodelistener to die
2019-04-17 04:14:27,063 INFO waiting for remote_syslog, otnode, arango, otnodelistener to die
2019-04-17 04:14:28,266 INFO stopped: arango (exit status 0)
2019-04-17 04:14:29,269 INFO stopped: otnode (terminated by SIGTERM)
2019-04-17 04:14:29,272 INFO stopped: remote_syslog (terminated by SIGTERM)
2019-04-17 04:14:29,274 INFO stopped: otnodelistener (terminated by SIGTERM)
Then I ran docker start otnode and got the following errors (issue #867):
2019-04-17T04:14:52.538Z - error - Please make sure Arango server is up and running
{ Error: connect ECONNREFUSED 127.0.0.1:8529
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 8529,
response: undefined }
2019-04-17T04:14:52.545Z - info - Notifying Bugsnag of exception...
Error: connect ECONNREFUSED 127.0.0.1:8529
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
2019-04-17T04:14:52.550Z - error - Whoops, terminating with code: 1
docker restart otnode
did not work, so I ran docker rm otnode and the setup command again.
Here is the problem: I had already made a contract and got my identity, so the second time I made a new contract and got the following errors:
2019-04-17T04:25:05.666Z - trace - Get profile by identity
**my old identity**
2019-04-17T04:25:06.073Z - error - Unhandled Rejection:
Error: ERC725 profile not created for this node ID. My identity **new identity**,
profile's node id: <node id>.
at OTNode.bootstrap (/ot-node/init/ot-node.js:439:19)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:182:7)
When running docker run -it --name=otnode -p 8900:8900 -p 5278:5278 -p 3000:3000 -v /root/.origintrail_noderc:/ot-node/.origintrail_noderc quay.io/origintrail/otnode-mariner:release_mariner, there should be a way for the user to provide the identity, either via Docker environment variables or via new fields in the .origintrail_noderc file.
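A configuration sketch of what that could look like is below. The field names here are assumptions/placeholders (in particular, erc725_identity_filepath is hypothetical), intended only to illustrate the idea of pointing the node at a persisted identity so a recreated container reuses it:

```json
{
  "node_wallet": "0xyour-operational-wallet-address",
  "node_private_key": "your-operational-wallet-private-key",
  "erc725_identity_filepath": "/ot-node/data/erc725_identity.json"
}
```

With something like this, the identity file could be mounted into the container alongside .origintrail_noderc, so docker rm plus a fresh docker run would not create a second ERC-725 identity.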