
ot-node's Introduction




Table of Contents
  1. πŸ“š About The Project
  2. πŸš€ Getting Started
  3. πŸ“„ License
  4. 🀝 Contributing
  5. πŸ“° Social Media


πŸ“š About The Project

What is the Decentralized Knowledge Graph?


Knowledge Asset

OriginTrail Decentralized Knowledge Graph (DKG), hosted on the OriginTrail Decentralized Network (ODN) as trusted knowledge infrastructure, is a shared global Knowledge Graph of Knowledge Assets. Running on the permissionless, multi-chain OriginTrail protocol, it combines blockchain and knowledge graph technology to enable trusted AI applications based on key W3C standards.

The OriginTrail DKG Architecture


The OriginTrail tech stack is a three-layer structure, consisting of the multi-chain consensus layer (OriginTrail Layer 1, running on multiple blockchains), the Decentralized Knowledge Graph layer (OriginTrail Layer 2, hosted on the ODN), and trusted knowledge applications in the application layer.

DKG Architecture

Further, the architecture differentiates between the public, replicated knowledge graph shared by all network nodes according to the protocol, and private Knowledge graphs hosted separately by each of the OriginTrail nodes.

Anyone can run an OriginTrail node and become part of the ODN, contributing to the network capacity and hosting the OriginTrail DKG. The OriginTrail node is the ultimate data service for data- and knowledge-intensive Web3 applications and is used as the key backbone for trusted AI applications (see https://chatdkg.ai).

What is a Knowledge Asset?


Knowledge Asset

Knowledge Asset is the new, AI‑ready resource for the Internet

Knowledge Assets are verifiable containers of structured knowledge that live on the OriginTrail DKG and provide:

  • Discoverability - UAL is the new URL. Uniform Asset Locators (UALs, based on the W3C Decentralized Identifiers) are a new Web3 knowledge identifier (an extension of the Uniform Resource Locator, URL) that identifies a specific piece of knowledge and makes it easy to find and connect with other Knowledge Assets (see the sketch after this list).
  • Ownership - NFTs enable ownership. Each Knowledge Asset contains an NFT token that enables ownership, Knowledge Asset administration, and market mechanisms.
  • Verifiability - On-chain information origin and a verifiable trail. Blockchain technology increases the trust, security, transparency, and traceability of information.
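For illustration, here is a minimal sketch of working with a UAL in JavaScript. The UAL value and the did:dkg:<blockchain>/<contract>/<tokenId> layout below are illustrative assumptions, not the normative format:

// Hypothetical example UAL; real UALs are DID-based identifiers minted by the DKG.
const ual = 'did:dkg:otp/0x5cac41237127f94c2d21dae0b14bfefa99880630/318322';

// Naive split into the assumed components: DID scheme, method, blockchain,
// asset storage contract address, and NFT token id.
function parseUal(ual) {
    const [scheme, method, rest] = ual.split(':');
    const [blockchain, contract, tokenId] = rest.split('/');
    return { scheme, method, blockchain, contract, tokenId };
}

console.log(parseUal(ual));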

By their nature, Knowledge Assets are semantic resources (following the W3C Semantic Web set of standards) and, through their symbolic representations, inherently AI-ready. See more at https://chatdkg.ai.

Discover Knowledge Assets with the DKG Explorer:

Knowledge Assets Graph 1
Supply Chains
Knowledge Assets Graph 2
Construction
Knowledge Assets Graph 3
Life sciences and healthcare
Knowledge Assets Graph 4
Metaverse

(back to top)


πŸš€ Getting Started


Prerequisites


  • Node.js 16.x (ideally 16.16)
  • npm >= 8.0.0


Local Network Setup


First, clone the repo:

git clone https://github.com/OriginTrail/ot-node.git
cd ot-node

Install dependencies using npm:

npm install

Create the .env file inside the "ot-node" directory:

nano .env

and paste the following content inside (save and close):

NODE_ENV=development
RPC_ENDPOINT_BC1=http://localhost:8545
RPC_ENDPOINT_BC2=http://localhost:9545

Run the Triple Store.

To use the default triple store (Blazegraph), download the executable file and run it with the following command in a separate process:

java -server -Xmx4g -jar blazegraph.jar

Then, depending on your OS, use one of the scripts below to run the local network with the desired number of nodes (the minimum is 12):

MacOS

bash ./tools/local-network-setup/setup-macos-environment.sh --nodes=12

Linux

./tools/local-network-setup/setup-linux-environment.sh --nodes=12


DKG Node Setup


In order to run a DKG node on the Testnet or Mainnet, please read the official documentation: https://docs.origintrail.io/decentralized-knowledge-graph-layer-2/node-setup-instructions/setup-instructions-dockerless



Build on DKG


The OriginTrail SDKs are client libraries for your applications, used to interact and connect with the OriginTrail Decentralized Knowledge Graph. From an architectural standpoint, the SDK libraries are application interfaces into the DKG, enabling you to create and manage Knowledge Assets through your apps, as well as perform network queries (such as search, or SPARQL queries), as illustrated below.

SDK

The OriginTrail SDK libraries are built in various languages by the team and the community, including dkg.js (JavaScript) and dkg.py (Python).
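For illustration, a minimal sketch of publishing a Knowledge Asset with the dkg.js client; the endpoint, port, blockchain name, and epochsNum values are assumptions for a locally running node, not a definitive setup:

// Sketch only: assumes a local DKG node and the dkg.js client library.
const DKG = require('dkg.js');

const dkg = new DKG({
    endpoint: 'http://localhost', // assumed local node endpoint
    port: 8900,
    blockchain: { name: 'hardhat' }, // assumed local blockchain configuration
});

async function main() {
    // Publish a small JSON-LD asset; the result is expected to contain its UAL.
    const result = await dkg.asset.create(
        { public: { '@context': 'http://schema.org', '@type': 'Product', name: 'Demo product' } },
        { epochsNum: 2 },
    );
    console.log(result.UAL);
}

main().catch(console.error);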



(back to top)

πŸ“„ License

Distributed under the Apache-2.0 License. See the LICENSE file for more information.


(back to top)

🀝 Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

πŸ“° Social Media




ot-node's Issues

Missing data rights and commercial usage legal statement, e.g. missing end-user agreement

Expected Behavior

Commercial entities utilizing the OT network or protocol require legal documents that define the data ownership and general liabilities or warranties of the protocol and network.
E.g. include end-user agreement.

Actual Behavior

There is no document that clarifies data ownership or general legal liabilities and warranties when utilizing the OT network as a commercial organization, and no end-user agreement.

Specifications

http://www.clock.org/~fair/opinion/software-liability.html

DC timeouts during the job replication process and the majority of the nodes are not able to bid on the job

Expected Behavior

The DC sends a replication request acknowledgement to all nodes that sent a replication request.

Actual Behavior

The DC does not send the replication request acknowledgement for offer_id, which does not allow the job bidding process to continue, and the DH is not able to bid successfully on the job. This happens on average on 5-10% of the jobs during the day. With adoption and the presence of 50,000-100,000 nodes within the next few years, this behaviour could be further exacerbated and result in the DC's behaviour during job bids being assessed as a DDoS attack, and in DC cancellation by the hosting provider.

To discuss whether the job replication process can be improved, i.e. the DC sends a basic check to obtain the identity/lambda settings of nodes every 30-60 minutes, chooses the winners of the job, and replicates only to these. Should the replication after winning the job be unsuccessful, another winner is chosen from the list (which could resolve issue #1505).

Steps to Reproduce the Problem

  1. Error occurs during the bidding process.
    timeout

Specifications

  • Version: 5.0.0
  • Platform: Ubuntu v20

Cannot import XML on V0.6.0a

Expected Behavior

Successful import on a regular node.

Actual Behavior

I followed this tutorial, but I cannot import XML files. This is the error on the regular node; this is the information on the Bootstrap node. Why does the regular node show the message "Transaction ran out of gas. Please provide more gas."?

Address wallet for Bootstrap node: 0x5735BA41852f309dF11aF2C27F9c49696eBE66B1
Address wallet for Regular node: 0x5359E7544d48db78172F10FfA322a1d1b25E9259

Steps to Reproduce the Problem

  1. For the Bootstrap node and the Regular node I use two wallets, each holding Ether and TRAC.
  2. I set up the Bootstrap node and the Regular node on one PC running Ubuntu 16.04, but on different ports.
  3. On the regular node, the data import fails.

Specifications

  • Version: V 0.6.0a
  • Platform: Ubuntu 16.04 run on VMWare

How can I validate data after pushing it to the blockchain, using the fingerprint or block number?

With your help, I was able to import XML data to the Supply Chains test network. All my data from "example_gs1.xml" was successfully imported into ArangoDB, and the fingerprint of this data was written to the blockchain. So, how can I validate the data I imported against the blockchain using the fingerprint or block number?

This is my first scenario:
1/ I import data to Supply Chains; everything succeeds. I have the fingerprint and block number for this data.
2/ After two years, I have the XML file containing the data from step 1; I need to validate whether the data in the XML file is still original or has been modified.

This is my second scenario:
1/ I set up the Supply Chains network with 5 nodes (a node being a PC running on one of many different networks).
2/ From each node I import data for specific products (egg or carrot), but at different times.
3/ How can I get the "chain" for specific products (egg or carrot) from Supply Chains and validate my XML file against this data?

So how can I do that?

Thank you.

Option to disable automatic payouts via config file

Would you accept a PR that allows me to turn off the automatic ETH transaction payouts in the node?

The PR (at a glance) will probably change the /modules/command/dh/dh-pay-out-command.js file to check the config for a new setting in the execute function.

The reason for this is that I'm currently paying out offers for my nodes via the OT Hub website. Doing it this way, I can do the payout via the management wallet and at a low gas price; this then has no effect on the node and its operational wallet.

Example of the cost savings from lowering gas price by doing payouts externally:
The default 20 Gwei gas price on the operational wallet costs 0.00316291 ETH ($0.49) per payout on average.
Doing the payout at 2 Gwei gas price on the management wallet costs 0.00032378 ETH ($0.05) per payout on average.

As a side note, OT Hub also checks all the conditions within the holding smart contract, so it will block the payout transaction from being sent if it would fail in the smart contract anyway.
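A minimal sketch of what the proposed check could look like inside the command's execute function; the flag name disableAutoPayouts matches the configuration parameter mentioned elsewhere in this tracker, while the surrounding code is assumed:

// Sketch of the proposed guard in modules/command/dh/dh-pay-out-command.js
async execute(command) {
    // Skip the on-chain payout entirely when disabled in the config;
    // payouts can then be performed externally (e.g. via OT Hub).
    if (this.config.disableAutoPayouts === true) {
        this.logger.info('Automatic payout skipped (disableAutoPayouts is set).');
        return Command.empty();
    }
    // ... existing payout logic continues here ...
}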

Did not receive "Import success" after importing XML files

Expected Behavior

Received "Import success" messages after import xml files.

Actual Behavior

Did not receive "Import success" messages. You can check this photo; I have 18 ETH on my test wallet. Here is my address: 0x5735BA41852f309dF11aF2C27F9c49696eBE66B1

Steps to Reproduce the Problem

  1. Update WALLET_ID (Line 3) and PROVIDER_ID (Lines 16 and 82) with a new ID obtained from here, for example_v1.5.xml
  2. npm start
  3. curl -v -F importfile=@example_v1.5.xml http://127.0.0.1:8900/import

Specifications

  • Version: on Master branch
  • Platform: Ubuntu 16.04 run on VMWare 14

More validation on replicate API call around token_amount_per_holder

Expected behavior

Call POST /api/latest/replicate with bad data and get a validation error.

Actual behavior

Call POST /api/latest/replicate with bad data and it turns the bad data into even crazier data.

Steps to reproduce the problem

  1. Call POST /api/latest/replicate with the following data:
    { blockchain_id = balance.blockchain_id, dataset_id = status.data.dataset_id, holding_time_in_minutes = 60, litigation_interval_in_minutes = 120, token_amount_per_holder = "0.01" }

  2. When I checked the replicate result API call, it showed -199 for the token_amount_per_holder, which seems pretty random.

  3. No one bid on the job because it was too expensive. I asked in telegram and got the following from other users:
    "2021-04-02T23:32:22.556Z - info - Accepting offer with price: 115792089237316195423570985008687907853269984665640564039457584007913129639737 TRAC."


Users were also getting warnings that they did not have enough TRAC (users have alerts for these on Telegram, oops!).

Transaction for job creation: https://blockscout.com/xdai/mainnet/tx/0x9af67b3d3324c8320846984002b01c9c8ca2dac91bd0c2f05513db201b020237/internal-transactions

This mostly happened because I was winging it as I followed the parameters on https://app.swaggerhub.com/apis-docs/otteam/ot-node-api/v2.1#/replicate/post_replicate, but it's missing 3 parameters (including token_amount_per_holder), and it was giving me an error message saying invalid parameters.

So I took a quick look in the source code for these other 3 parameters and saw that token_amount_per_holder was a number. In hindsight I guess it's the base-unit equivalent and not the human-readable number; I just went with it, thinking it would stop me if I did it wrong.

This probably needs some more protection in the API code to stop me (or others) doing this again! I have agreed to making no more jobs for a while πŸ˜….
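For anyone hitting the same confusion: token_amount_per_holder appears to be expressed in the token's smallest base unit (10^18 per TRAC, analogous to wei for ETH), so a human-readable amount needs converting first. A sketch using web3.js utilities, assuming TRAC uses 18 decimals:

const Web3 = require('web3');

// '0.01' TRAC in human-readable form becomes '10000000000000000' base units
// (TRAC is assumed here to use 18 decimals, like ETH).
const tokenAmountPerHolder = Web3.utils.toWei('0.01', 'ether');
console.log(tokenAmountPerHolder); // 10000000000000000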

Specifications

Contact details

Missing operations concept for emergency patches without impacting the OT protocol or node network

Expected Behavior

Provide an operations concept for how a critical security patch will be applied within the OT network without impacting overall network performance and availability.
In general, will upgrading the OT node, ArangoDB, or Node.js require server downtime?

Actual Behavior

Imagine the following case:
The ArangoDB or Node.js software component has a severe security issue and requires an immediate security patch, i.e. all OT nodes have to be security-patched ASAP.
How will OT ensure:

  • All operators of the OT nodes apply the security patch in the shortest time frame possible?
  • Overall availability of the network during the security patching process? Assuming a data replication factor of 3-5, how will OT ensure that 5 nodes holding a replicated record are not security-patched in parallel and therefore down (i.e. data would not be available for queries due to the impact of the security-patch upgrade)?
  • What will happen with nodes that don't apply the security patch within, for example, 48 hours?
  • How will OT ensure and validate the software version running on OT nodes operated by external parties?

Steps to Reproduce the Problem

No concept for security upgrades or patches is included.

Specifications

There is no control to prevent otnode from creating a bricked node if the management wallet is mistyped

Expected behavior

The installation process should stop if the management wallet is a blank wallet.

Actual behavior

The installation will continue regardless.

I propose that an additional control is added. The easiest I can think of is to check whether the management wallet holds any ETH/xDAI. If it doesn't, the user is prompted to add some to confirm the management wallet is correct, or to update it with the right one.

Without such a control, the user can lose their TRAC during the installation process.
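A sketch of the proposed control using web3.js; the function name and the abort flow around it are assumptions:

// Sketch: stop the installation if the management wallet looks untouched.
async function checkManagementWallet(web3, managementWallet) {
    const balance = await web3.eth.getBalance(managementWallet); // returned as a string
    if (balance === '0') {
        throw new Error(
            'Management wallet has no ETH/xDAI. Verify the address before continuing.',
        );
    }
}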

Add a configuration parameter to disable listening on the blockchain events for new offers

Background

OT-Node can act both as a data holder and a data creator. Data creators import datasets into the local knowledge graph and replicate data by sending offers on the blockchain. Data holders listen for blockchain events and accept offers if the conditions are fulfilled.

The bug

In some cases it is useful to disable listening for new-offer events on the blockchain. The proposal is to create a configuration parameter which would make the OT-Node automatically reject offers by not listening to blockchain events for offers newly created by other nodes on the network.

Level: Medium

Story definition

An OT-Node should have a configuration parameter that when enabled should meet the following:

  • doesn't respond on blockchain events for newly created offers
  • responds to blockchain events for already created and accepted offers

Where to start?

Blockchain service where new offers are handled:

async handleReceivedEvents(events, contractName, blockchain_id) {
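A minimal sketch of what the proposed parameter could look like at that point; the flag name disableNewOfferListening and the OfferCreated event name are assumptions for illustration:

// Sketch: skip newly created offers when a (hypothetical) config flag is set.
async handleReceivedEvents(events, contractName, blockchain_id) {
    const filtered = this.config.disableNewOfferListening
        ? events.filter((e) => e.event !== 'OfferCreated')
        : events;
    // ... continue handling `filtered` exactly as the existing code handles `events` ...
}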

Requirements

Questions?

If you have questions ask in this issue or on your pull request (if you've created one).

Docker image for OT-node instead of manual installation

Expected Behavior

In 2018, software development should be container-based, in order to enable modern software paradigms like DevOps and to utilize cloud environments that scale.
E.g. please provide the OT-node as a Docker image; I would like to deploy it without manual installation steps.
https://en.wikipedia.org/wiki/Docker_%28software%29

This approach will also enable non-IT professionals to deploy a master node to a cloud service provider. Enterprise IT expects container-based deployment in 2018, too.

Actual Behavior

Manual installation is required, which is error-prone and a challenge for token holders who want to operate a node but aren't Linux experts.

Steps to Reproduce the Problem

https://github.com/OriginTrail/ot-node/wiki/Integration-Instructions

Docker volume for /var/lib/arangodb

Actual Behavior

The current Dockerfile creates a volume for /var/lib/arangodb, but the actual Arango data is in /var/lib/arangodb3.

Not sure if this is intentional or a mistake.
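If the mismatch is unintentional, the fix would presumably be a one-line change in the Dockerfile:

VOLUME /var/lib/arangodb3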

  • Version: 2.0.44
  • Platform: ubuntu

Unable to authorize on Arango server

Expected Behavior

Ot-node should authenticate to ArangoDB server

Actual Behavior

When starting ot-node to run with ArangoDB, ot-node is unable to authenticate with the Arango server, throwing an error:

error: Please make sure Arango server is runing before starting ot-node

Disabling authentication in arangod config solves the issue.

I found this issue that might be helpful: arangodb/arangodb#2313

Steps to Reproduce the Problem

  1. Setup ot-node to use ArangoDB
  2. Start ot-node

Specifications

  • Version: 7.0 - develop branch
  • Platform: Ubuntu 16.04.4 LTS

Allow user to specify identity manually

All the commands are run as root.

Run docker run -it --name=otnode -p 8900:8900 -p 5278:5278 -p 3000:3000 -v /root/.origintrail_noderc:/ot-node/.origintrail_noderc quay.io/origintrail/otnode-mariner:release_mariner to set up the node.
Here are some logs

// this is ok during the initial setup
2019-04-17T04:23:09.801Z - info - Getting the identity
2019-04-17T04:23:09.802Z - info - Identity not provided, generating new one...
// call Contract and generate identity
.....
2019-04-17T04:14:04.119Z - info - OT Node started
2019-04-17T04:14:05.122Z - trace - Command cleanerCommand and ID fd591658-f151-4d1c-be2e-a30b4d238c4e started.
2019-04-17T04:14:05.137Z - trace - Command autoupdaterCommand and ID e17aeea7-5a01-47d6-9532-120524ad252c started.
2019-04-17T04:14:05.148Z - info - Checking for new node version
2019-04-17T04:14:05.336Z - trace - Version check: local version 2.0.54, remote version: 2.0.54
2019-04-17T04:14:05.339Z - info - No new version found
// because we're in interactive mode, I press CTRL-C
^C2019-04-17 04:14:24,057 WARN received SIGINT indicating exit request
2019-04-17 04:14:24,058 INFO waiting for remote_syslog, otnode, arango, otnodelistener to die
2019-04-17 04:14:27,063 INFO waiting for remote_syslog, otnode, arango, otnodelistener to die
2019-04-17 04:14:28,266 INFO stopped: arango (exit status 0)
2019-04-17 04:14:29,269 INFO stopped: otnode (terminated by SIGTERM)
2019-04-17 04:14:29,272 INFO stopped: remote_syslog (terminated by SIGTERM)
2019-04-17 04:14:29,274 INFO stopped: otnodelistener (terminated by SIGTERM)

Then I ran docker start otnode and got the following errors (issue 867):

2019-04-17T04:14:52.538Z - error - Please make sure Arango server is up and running
{ Error: connect ECONNREFUSED 127.0.0.1:8529
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 8529,
  response: undefined }
2019-04-17T04:14:52.545Z - info - Notifying Bugsnag of exception...
Error: connect ECONNREFUSED 127.0.0.1:8529
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
2019-04-17T04:14:52.550Z - error - Whoops, terminating with code: 1

docker restart otnode did not work, so I ran docker rm otnode and the setup command again.
Here is the problem: I had already created a contract and got my identity, so the second time I created a new contract and got the following errors:

2019-04-17T04:25:05.666Z - trace - Get profile by identity 
**my old identity**
2019-04-17T04:25:06.073Z - error - Unhandled Rejection:
Error: ERC725 profile not created for this node ID. My identity **new identity**,
profile's node id: <node id>.
    at OTNode.bootstrap (/ot-node/init/ot-node.js:439:19)
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:182:7)

Expected Behavior

When running docker run -it --name=otnode -p 8900:8900 -p 5278:5278 -p 3000:3000 -v /root/.origintrail_noderc:/ot-node/.origintrail_noderc quay.io/origintrail/otnode-mariner:release_mariner, there should be a way for the user to provide the identity, either via Docker variables or new fields in the .origintrail_noderc file.

Specifications

  • Version: v2.0.54
  • Platform: ubuntu 18.04

ENH: Offline support for disaster relief

Is OT usable for disaster relief, with a spotty internet connection?

What does it do with queued transactions (when an internet connection is unavailable)?

If not, what changes or enhancements would need to occur to make that happen?

Can not import xml files to Houston app

Expected Behavior

Import XML file named '01_Green_to_pink_shipment.xml'

Actual Behavior

But I get the error 'error: Caught exception: Error: listen EADDRNOTAVAIL 103.199.6.2:8900.'

Steps to Reproduce the Problem

1. Run the Docker container on Windows 10
2. Log in using the Houston app
3. Import '01_Green_to_pink_shipment.xml'

Specifications

  • Version: otnode 1.3.5
  • Platform: Windows 10

Backup script improvement proposal

Expected Behavior

I propose the following improvement to the backup process (see the sketch after this list). It will reduce AWS costs, reduce the space used on the VPS server, and reduce the risk of maxing out the available space.

  1. The node backup data is zipped upon generation to reduce the space used.
  2. The data is uploaded to AWS.
  3. The local backup is deleted to reduce space used.
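A rough sketch of the proposed flow; the paths, bucket name, and CLI usage are assumptions for illustration:

const { execFileSync } = require('child_process');
const fs = require('fs');

const backupDir = '/root/backup/2021-01-01'; // hypothetical backup output directory
const archive = `${backupDir}.tar.gz`;

// 1. Compress the backup before upload to reduce the space used.
execFileSync('tar', ['-czf', archive, backupDir]);

// 2. Upload the archive to AWS (assumes a configured aws CLI and bucket).
execFileSync('aws', ['s3', 'cp', archive, 's3://my-node-backups/']);

// 3. Delete the local copies to free disk space.
fs.rmSync(backupDir, { recursive: true, force: true });
fs.rmSync(archive, { force: true });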

Actual Behavior

  1. Data is backed up without any archiving.
  2. The backup is uploaded to AWS.
  3. The backup remains on the server, which raises the risk of maxing out the available space with scheduled backups.

Steps to Reproduce the Problem

  1. Backup steps as detailed on https://otnode.com/node-backup/

Specifications

  • Version: 5.0.0
  • Platform: Ubuntu 20

Nodes encounter 'Offer has not been finalised' errors when job bidding -> job finalization is spread over multiple hours

Expected behavior

Nodes should see 'I have been chosen' in their logs when a job is finalised multiple hours after it was initially created.

Actual behavior

Nodes do not see anything in their logs about being chosen for a job if the job finalization comes too late.

Investigation / Example

In the log screenshot (omitted), this job is mentioned:
https://v5.othub.info/offers/0x5636360b994d7b1b94bed4d1a06671cabaa16ba5060e8d1774f073fbb7fcd577

The job was created on transaction: https://blockscout.com/xdai/mainnet/tx/0xd696fed85347ec28f11c9d93f612ebe97aa1393f12f7b9b23ac0366f3d256259

The job was finalised on transaction:
https://blockscout.com/xdai/mainnet/tx/0xf34e2526d226de9cee967954b7ab38c3b5e70e99d51cb7cd99a879fdd4a4ff2b

The problem is that there is a 4-hour gap between the 2 transactions. The data holders poll the RPC for when the job becomes finalised, but they have a timeout, so they miss the late finalisation. Users see nothing in their logs about winning, and the first time they see it is on OThub, which adds lots of confusion.

I can't imagine that nodes winning after these timeouts ('Offer has not been finalised' errors) are in the best state in terms of the data stored on them, given that the nodes think they didn't win.

I'm aware of at least 5 different users in Telegram who have been confused by OThub showing jobs as won that they don't see in their logs.

Specifications

  • Node version: v5
  • Platform: Any
  • Node wallet: N/A
  • ERC725 identity: N/A

Nodes are replicating and storing every dataset they bid on

Expected Behavior

Datasets are deleted as soon as the DC has confirmed that the node has not won the job.

Actual Behavior

The nodes store the data from every job they bid on, which increases the space used. During the initial jobs on the xDAI network, the space used increases by 300MB per day. The expected network growth to 450k jobs in 2021 would require over 200GB of space per server to contain the ODN data. Extrapolating further over the years, this approach is not sustainable.

Steps to Reproduce the Problem

  1. Backups performed daily show an increase of 3GB over the span of 6 days.

Specifications

  • Version: 5.0.0
  • Platform: Ubuntu v20

Create automatic payout service for completed jobs

Background

OT-Node has two options for executing the payOut transactions that retrieve the TRAC earned for all held jobs. If automatic payouts are enabled (the disableAutoPayouts configuration parameter is not set), the payOut transaction is executed immediately after the holding period. Otherwise, it can be executed manually, one job at a time, through the API.

The bug

In some cases automatic payouts can be highly inefficient due to variations in the gas price. The proposal is to create an AutoPayout service which would keep track of the amount of TRAC the OT-Node has earned after holding periods and publish a single payOutMultiple transaction for all of its completed jobs once the amount is greater than a threshold defined as a configuration parameter.

Level: Hard

Story definition

An OT-Node AutoPayout service should create payOutMultiple transactions once the amount of earned TRAC is greater than an amount defined as a configuration parameter.

Where to start?

One time payout migration on OT-Node startup:

async _runPayoutMigration(blockchain, config) {

Automatic payouts on offer finalization:

if (this.config.disableAutoPayouts !== true) {

Payout command:
https://github.com/OriginTrail/ot-node/blob/develop/modules/command/dh/dh-pay-out-command.js

An example of service for fetching gas price as a reference:
https://github.com/OriginTrail/ot-node/blob/develop/modules/service/gas-station-service.js
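A rough sketch of the accumulate-then-payout logic the story describes; helper names such as getCompletedUnpaidJobs and the payoutThreshold parameter are hypothetical:

// Sketch only: accumulate completed jobs until earned TRAC crosses a threshold.
async function autoPayoutTick(node) {
    const jobs = await node.getCompletedUnpaidJobs(); // hypothetical helper
    const totalTrac = jobs.reduce((sum, job) => sum + BigInt(job.amount), 0n);

    // payoutThreshold is the proposed configuration parameter, in base units.
    if (totalTrac >= BigInt(node.config.payoutThreshold)) {
        // One payOutMultiple transaction covers all completed jobs at once.
        await node.blockchain.payOutMultiple(jobs.map((job) => job.offerId));
    }
}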

Requirements

  • Pull request should include unit or/and BDD tests that cover the bug (if applicable).
  • All integration tests on the pull request have to pass.

Questions?

If you have questions ask in this issue or on your pull request (if you've created one).

Add decimal to TRAC pricing in logs

Expected behavior

Whenever the TRAC price is shown in logs, it should be displayed with proper decimal placement.

Actual behavior

Shown in microTRAC, with no decimal placement

Steps to reproduce the problem

Logs example:
2021-05-19T14:14:14.722Z - trace - Calculated offer price for data size: 0.117957MB, and holding time: 90 days, PRICE: 461464187049431230[mTRAC]
2021-05-19T14:14:14.723Z - info - Accepting offer with price: 2304700654612025900 TRAC.
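Assuming the logged values are in the token's 10^18 base unit, the fix could be as simple as converting before logging; a sketch using web3.js:

const Web3 = require('web3');

// 2304700654612025900 base units -> '2.3047006546120259' TRAC
const raw = '2304700654612025900';
console.log(`Accepting offer with price: ${Web3.utils.fromWei(raw)} TRAC.`);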

Disclaimer

Please be aware that the issue reported on a public repository allows everyone to see your node logs, node details, and contact details. If you have any sensitive information, feel free to share it by sending an email to [email protected].

Corrupted sqlite data using scripts/backup.js

Expected behavior

To be able to make a backup of a node that is running without any data corruption in sqlite.

Actual behavior

If your node is busy during the backup, you can get a corrupted copy of the system.db sqlite database, because it was copied mid-transaction.

How many users are affected?

I encountered this when I restored one of my nodes 4 weeks ago, and I can think of about 6 other users in the last few weeks who have hit it while migrating servers.

We have only recently figured out where the problem comes from.

Specifications

  • Node version: Latest/Any
  • Platform: Ubuntu
  • Node wallet: N/A
  • ERC725 identity: N/A

Error logs (from TG chats I could find)

(Log screenshots from five affected users omitted.)

Investigation into the issue

When you back up any database, you should never copy its files while it is running. Unfortunately, the backup.js script does a file copy on system.db, and if any write/transaction happens on the file during the copy, the copy gets corrupted.

https://www.sqlite.org/howtocorrupt.html

Code

(Screenshots of the relevant backup.js code omitted.)

Solution

https://www.quackit.com/sqlite/tutorial/backup_a_database_to_file.cfm
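The linked approach is SQLite's online backup, which stays consistent even while the database is being written to. A sketch of replacing the raw file copy, assuming the sqlite3 CLI is available on the host:

const { execFileSync } = require('child_process');

// Instead of copying system.db with fs.copyFileSync, let SQLite perform an
// online backup, which is safe while transactions are running.
const target = '/tmp/system.db.backup'; // hypothetical destination path
execFileSync('sqlite3', ['system.db', `.backup '${target}'`]);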

Workarounds

Currently, node runners have to go back to the original server the backup came from and run sqlite backup commands, which are safe while transactions are running on the same file.

Warnings during install

After getting to the last part of the install, I get the following warnings.
I'm using Amazon Linux 2.

$ npm install
npm WARN deprecated [email protected]: πŸ™Œ Thanks for using Babel: we recommend using babel-preset-env now: please read babeljs.io/env to update!
npm WARN deprecated [email protected]: Use uuid module instead

> [email protected] prepare /home/ec2-user/workspace/ot-node
> npm run snyk-protect

> [email protected] snyk-protect /home/ec2-user/workspace/ot-node
> snyk protect

Successfully applied Snyk patches

npm WARN [email protected] requires a peer of ajv@^6.0.0 but none is installed. You must install peer dependencies yourself.
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

up to date in 170.072s

Implement gas limit estimation for each transaction

Background

In OT-Node the gas limit is the same for every transaction. It is defined by the gas_limit parameter in the configuration, with a default value of 2,000,000. The default is set extraordinarily high because the createProfile function deploys a new smart contract, which is an expensive operation. Profile creation is invoked on OT-Node startup if the profile doesn't exist on the blockchain yet. After creating the profile, all other transactions executed by the OT-Node use considerably less gas, around 450,000 or less.

The bug

OT-Node always uses the high gas limit, which can cause transactions to be rejected, since the required amount of ETH is calculated from the extraordinarily high gas limit rather than from how much is actually necessary. Reducing the gas limit for transactions would allow an OT-Node to create and finalize offers with a lower balance. For example, if a transaction uses 200,000 gas and the gas price is 10 Gwei, the transaction costs 0.002 ETH to execute. However, if the user sets the gas limit to 2,000,000 gas for that transaction, the wallet must hold 0.02 ETH, even though the end cost will be 0.002 ETH.

Level: Medium

Story definition

The OT-Node configuration should contain a gas_limit parameter used for blockchain transactions, except for createProfile, where the gas limit should remain hardcoded to 2,000,000.

Where to start?

Ethereum blockchain service:
https://github.com/OriginTrail/ot-node/blob/develop/modules/Blockchain/Ethereum/index.js

Sending transactions to the blockchain using web3 provider:

async _sendTransaction(newTransaction) {
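A sketch of per-transaction estimation with web3.js; the 1.2 safety margin and the surrounding function are assumptions:

// Sketch: estimate gas per transaction instead of using a fixed 2,000,000 limit.
async function sendWithEstimatedGas(web3, tx) {
    const estimated = await web3.eth.estimateGas(tx); // ask the node for an estimate
    tx.gas = Math.ceil(estimated * 1.2); // assumed 20% safety margin
    return web3.eth.sendTransaction(tx);
}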

Requirements

Questions?

If you have questions ask in this issue or on your pull request (if you've created one).

Ot node set up "Wallet not provided! Please provide valid wallet."

  • I have set up an OT node following https://origintrail.io/node-setup-mainnet step by step.
  • The wallet JSON file has been configured correctly, with the same name and directory used in the tutorial linked above (2 different ETH addresses, of which I specified the PK for the operational wallet).
  • I am definitely using the latest version (as of the date this issue was opened), as I did the setup 10 minutes ago.

Issue: When running

sudo docker run -i --name=otnode -p 8900:8900 -p 5278:5278 -p 3000:3000 -v ~/.origintrail_noderc:/ot-node/.origintrail_noderc quay.io/origintrail/otnode-mariner:release_mariner

it says

Wallet not provided! Please provide valid wallet. INFO exited: otnode (exit status 0; expected)

On top of that (at least on Bash for Windows), the process doesn't kill itself after exiting, i.e. after

INFO exited: otnode (exit status 0; expected)

Note: the operational wallet is currently empty, since I am basically making sure everything works smoothly before starting.

Given the chance, I was wondering if there is any way to be notified of any node failure (e.g. some specific error)?

Active node gave errors (exited: arango (exit status 1; not expected)); after reboot it will not start (Error: connect ECONNREFUSED 127.0.0.1:8529) and keeps writing log files in engine-rocksdb/journals/

Expected Behavior

OT node starts up, and /var/lib/arangodb3/engine-rocksdb/journals/ does not fill up with log files (or they are archived and garbage-collected).

Actual Behavior

The problem is split into 2 areas:

  • When it was running before 14/08 (the problem probably started on 08/08)
  • After I tried to restart it (>14/08), the node does not start

Before 14/08:
Last week I noticed I was hardly winning any jobs, so I logged into my nodes (14/08). They were still running, but 2 were producing errors and the space occupied was around 68GB, while it should be around 30GB.

The logging showed it started on 8/8:
2021-08-08 11:46:16,602 INFO exited: arango (exit status 1; not expected)
2021-08-08 11:46:17,642 INFO spawned: 'arango' with pid 53967
2021-08-08 11:46:18,646 INFO success: arango entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

Other error
2021-08-12T07:34:44.320Z - error - Unhandled Rejection:
Error: Timed out waiting for response
at KademliaNode._timeout (/ot-node/5.1.0/node_modules/@deadcanaries/kadence/lib/node-abstract.js:260:15)
at Timeout.setInterval [as _onTimeout] (/ot-node/5.1.0/node_modules/@deadcanaries/kadence/lib/node-abstract.js:172:28)
at ontimeout (timers.js:466:11)
at tryOnTimeout (timers.js:304:5)
at Timer.listOnTimeout (timers.js:267:5)

The Arango logging started on 08/08 with

2021-08-08T11:46:16Z [12] ERROR [8a210] JavaScript exception in file '/usr/share/arangodb3/js/common/bootstrap/modules.js' at 68,37: ArangoError 2: cannot get current working directory: No such file or directory
2021-08-08T11:46:16Z [12] ERROR [409ee] ! const ROOT_PATH = fs.normalize(fs.makeAbsolute(internal.startupPath));
2021-08-08T11:46:16Z [12] ERROR [cb0bd] ! ^
2021-08-08T11:46:16Z [12] ERROR [8a210] JavaScript exception in file '/usr/share/arangodb3/js/common/bootstrap/modules.js' at 68,37: ArangoError 2: cannot get current working directory: No such file or directory
2021-08-08T11:46:16Z [12] ERROR [409ee] ! const ROOT_PATH = fs.normalize(fs.makeAbsolute(internal.startupPath));
2021-08-08T11:46:16Z [12] ERROR [cb0bd] ! ^
2021-08-08T11:46:16Z [12] FATAL [69ac3] {v8} error during execution of JavaScript file 'server/initialize.js'
2021-08-08T11:46:17Z [53967] INFO [e52b0] ArangoDB 3.5.3 [linux] 64bit, using jemalloc, build tags/v3.5.3-0-gf9ff700153, VPack 0.1.33, RocksDB 6.2.0, ICU 58.1, V8 7.1.302.28, OpenSSL 1.1.1d 10 Sep 2019
2021-08-08T11:46:17Z [53967] INFO [75ddc] detected operating system: Linux version 5.4.0-80-generic (buildd@lcy01-amd64-030) (gcc version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)) #90-Ubuntu SMP Fri Jul 9 22:49:44 UTC 2021
2021-08-08T11:46:17Z [53967] WARNING [118b0] {memory} maximum number of memory mappings per process is 65530, which seems too low. it is recommended to set it to at least 128000
2021-08-08T11:46:17Z [53967] WARNING [49528] {memory} execute 'sudo sysctl -w "vm.max_map_count=128000"'
2021-08-08T11:46:17Z [53967] INFO [43396] {authentication} Jwt secret not specified, generating...
2021-08-08T11:46:17Z [53967] INFO [144fe] using storage engine rocksdb
2021-08-08T11:46:17Z [53967] INFO [3bb7d] {cluster} Starting up with role SINGLE
2021-08-08T11:46:17Z [53967] INFO [a1c60] {syscall} file-descriptors (nofiles) hard limit is 1048576, soft limit is 1048576
2021-08-08T11:46:17Z [53967] INFO [3844e] {authentication} Authentication is turned on (system only), authentication for unix sockets is turned on
2021-08-08T11:46:17Z [53967] WARNING [d5c49] {engines} ignoring value for option --rocksdb.max-write-buffer-number because it is lower than recommended
2021-08-08T11:46:27Z [53967] ERROR [8a210] JavaScript exception in file '/usr/share/arangodb3/js/common/bootstrap/modules.js' at 68,37: ArangoError 2: cannot get current working directory: No such file or directory
2021-08-08T11:46:27Z [53967] ERROR [409ee] ! const ROOT_PATH = fs.normalize(fs.makeAbsolute(internal.startupPath));

and today, when I try to restart the OT node:
2021-08-16T07:41:08Z [12] INFO [43396] {authentication} Jwt secret not specified, generating...
2021-08-16T07:41:08Z [12] INFO [144fe] using storage engine rocksdb
2021-08-16T07:41:08Z [12] INFO [3bb7d] {cluster} Starting up with role SINGLE
2021-08-16T07:41:08Z [12] INFO [a1c60] {syscall} file-descriptors (nofiles) hard limit is 1048576, soft limit is 1048576
2021-08-16T07:41:08Z [12] INFO [3844e] {authentication} Authentication is turned on (system only), authentication for unix sockets is turned on
2021-08-16T07:41:08Z [12] WARNING [b387d] found existing lockfile '/var/lib/arangodb3/LOCK' of previous process with pid 13, but that process seems to be dead already
2021-08-16T07:41:08Z [12] WARNING [d5c49] {engines} ignoring value for option --rocksdb.max-write-buffer-number because it is lower than recommended
2021-08-16T07:41:24Z [12] INFO [6ea38] using endpoint 'http+tcp://0.0.0.0:8529' for non-encrypted requests
2021-08-16T07:41:27Z [12] INFO [e52b0] ArangoDB 3.5.3 [linux] 64bit, using jemalloc, build tags/v3.5.3-0-gf9ff700153, VPack 0.1.33, RocksDB 6.2.0, ICU 58.1, V8 7.1.302.28, OpenSSL 1.1.1d 10 Sep 2019
2021-08-16T07:41:27Z [12] INFO [75ddc] detected operating system: Linux version 5.4.0-81-generic (buildd@lgw01-amd64-052) (gcc version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)) #91-Ubuntu SMP Thu Jul 15 19:09:17 UTC 2021
2021-08-16T07:41:27Z [12] WARNING [118b0] {memory} maximum number of memory mappings per process is 65530, which seems too low. it is recommended to set it to at least 128000
2021-08-16T07:41:27Z [12] WARNING [49528] {memory} execute 'sudo sysctl -w "vm.max_map_count=128000"'
2021-08-16T07:41:27Z [12] INFO [43396] {authentication} Jwt secret not specified, generating...
2021-08-16T07:41:27Z [12] INFO [144fe] using storage engine rocksdb
2021-08-16T07:41:27Z [12] INFO [3bb7d] {cluster} Starting up with role SINGLE
2021-08-16T07:41:27Z [12] INFO [a1c60] {syscall} file-descriptors (nofiles) hard limit is 1048576, soft limit is 1048576
2021-08-16T07:41:27Z [12] INFO [3844e] {authentication} Authentication is turned on (system only), authentication for unix sockets is turned on
2021-08-16T07:41:27Z [12] WARNING [ad4b2] found existing lockfile '/var/lib/arangodb3/LOCK' of previous process with pid 12, and that process seems to be still running
2021-08-16T07:41:27Z [12] WARNING [d5c49] {engines} ignoring value for option --rocksdb.max-write-buffer-number because it is lower than recommended

After 14/08:
My node is unable to start anymore, and /var/lib/arangodb3/engine-rocksdb/journals/ is filled with small log files taking a huge amount of space (41000 files of 1KB, taking 40GB). Writing starts at the moment of a reboot/restart, when the Docker container is active again. Normal space usage should be around 30GB, but my node is now at 70GB.

Every time my Node is started I get the following error

2021-08-16T07:40:22.288Z - error - Please make sure Arango server is up and running
{ Error: connect ECONNREFUSED 127.0.0.1:8529
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 8529,
response: undefined }
2021-08-16T07:40:22.290Z - error - Whoops, terminating with code: 1
2021-08-16 07:40:22,299 INFO exited: otnode (exit status 1; expected)
2021-08-16 07:40:23,301 WARN received SIGTERM indicating exit request
2021-08-16 07:40:23,301 INFO waiting for remote_syslog, arango, otnodelistener to die
2021-08-16 07:40:26,137 INFO stopped: arango (terminated by SIGTERM)

I rebooted, restarted, killed, and changed the swapfile size from 1GB to 6GB; nothing helped.
However, strangely, ONE of the nodes purged the 40GB of log files and started working again after a reboot (yesterday).

Steps to Reproduce the Problem

  1. docker start the OT node. Since I cannot start the node anymore, I can no longer reproduce the first part.

Specifications

  • Version: OT node 5.1.0

  • Platform: Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-81-generic x86_64) Linux version 5.4.0-81-generic (buildd@lgw01-amd64-052) (gcc version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)) #91-Ubuntu SMP Thu Jul 15 19:09:17 UTC 2021

  • ArangoDB 3.5.3 [linux] 64bit, using jemalloc, build tags/v3.5.3-0-gf9ff700153, VPack 0.1.33, RocksDB 6.2.0, ICU 58.1, V8 7.1.302.28, OpenSSL 1.1.1d 10 Sep 2019

  • Digital Ocean, 4GB RAM, 80GB storage

ArangoError: IO error: No space left on deviceWhile appending to file: /var/lib/arangodb3/engine-rocksdb/002001.sst: No space left on device

Expected behavior

arangodb3 should be able to keep running when there is at least 7.5GB of free space on the server.

Actual behavior

In the otnode logs, I get the error:
ArangoError: IO error: No space left on deviceWhile appending to file: /var/lib/arangodb3/engine-rocksdb/002001.sst: No space left on device
In the journal log:
2021-07-07T23:01:17Z [4323] WARNING [82af5] {statistics} could not commit stats to _statisticsRaw: IO error: No space left on deviceWhile appending to file: /var/lib/arangodb3/engine-rocksdb/002001.sst: No space left on device

Steps to reproduce the problem

  1. Have an otnode with a 12GB backup size running on a 20GB server with about 4-7GB of space left; the error comes at around 60-70% disk capacity.
  2. No extra steps needed; let arangodb3 and the node run their course. The error will eventually come when disk usage reaches a certain point.

Specifications

  • Node version: 5.0.4
  • Platform: ubuntu 18.04 or ubuntu 20.04 both with docker or dockerless
  • Node wallet: 0x702f808dC1710e281B80666042669E16a6EF4f64
  • ERC725 identity: 0x19293f7aC985DaAb0300dd23c3e270B74244e0E1

Contact details

Error logs

Jul 08 01:01:17 OTNODE12 arangod[4323]: 2021-07-07T23:01:17Z [4323] WARNING [82af5] {statistics} could not commit stats to _statisticsRaw: IO error: No space left on deviceWhile appending to file: /var/lib/arangodb3/engine-rocksdb/002001.sst: No space left on device

error - Unhandled Rejection:
Jul 08 02:02:27 OTNODE12 node[4964]: ArangoError: IO error: No space left on deviceWhile appending to file: /var/lib/arangodb3/engine-rocksdb/002001.sst: No space left on device
Jul 08 02:02:27 OTNODE12 node[4964]: at new ArangoError (/ot-node/5.0.4/node_modules/arangojs/lib/error.js:30:15)
Jul 08 02:02:27 OTNODE12 node[4964]: at /ot-node/5.0.4/node_modules/arangojs/lib/connection.js:204:21
Jul 08 02:02:27 OTNODE12 node[4964]: at callback (/ot-node/5.0.4/node_modules/arangojs/lib/util/request.node.js:52:9)
Jul 08 02:02:27 OTNODE12 node[4964]: at IncomingMessage. (/ot-node/5.0.4/node_modules/arangojs/lib/util/request.node.js:63:11)
Jul 08 02:02:27 OTNODE12 node[4964]: at IncomingMessage.emit (events.js:185:15)
Jul 08 02:02:27 OTNODE12 node[4964]: at IncomingMessage.emit (domain.js:422:20)
Jul 08 02:02:27 OTNODE12 node[4964]: at endReadableNT (_stream_readable.js:1106:12)
Jul 08 02:02:27 OTNODE12 node[4964]: at process._tickCallback (internal/process/next_tick.js:178:19)

Disclaimer

Please be aware that the issue reported on a public repository allows everyone to see your node logs, node details, and contact details. If you have any sensitive information, feel free to share it by sending an email to [email protected].

Connection issue when running the rpc node

Hello, I get this issue when running the RPC node:

$ node ipc.js
IPC-RPC Communication server listening on port 3000
OriginTrail IPC server listening at http://127.0.0.1:8765
RPC Server connected

$ node rpc.js
Kademlia service listening...
OriginTrail RPC server listening at http://[::]:8888
Socket connected to IPC-RPC Communication server on port 3000
{"name":"kadtools","hostname":"ip-xxx-xx-xx-xx.ec2.internal","pid":3526,"level":40,"msg":"connect econnrefused 18.196.209.195:1778","time":"2018-02-25T16:23:19.278Z","v":0}
{"name":"kadtools","hostname":"ip-xxx-xx-xx-xx.ec2.internal","pid":3526,"level":40,"msg":"connect econnrefused 18.196.209.195:1778","time":"2018-02-25T16:23:19.279Z","v":0}
Kademlia connection to seed failed
{"name":"kadtools","hostname":"ip-xxx-xx-xx-xx.ec2.internal","pid":3526,"level":40,"msg":"failed to pull filter from 0000000000000000000000000000000000000001, reason: Timed out waiting for response","time":"2018-02-25T16:23:39.102Z","v":0}

otnode v4.1.16 startup error code:1

I followed the instructions in the git repo for setting up the warp node (v5) on both AWS (Ubuntu 20.04) and Azure (Ubuntu 18.04, Bionic). The node runs fine on Azure/18.04 but fails on AWS/20.04 with the error below. As noted, the node seems to run fine on the older Ubuntu version, so I'm not sure there is any need to fix this, but I thought the info could be useful (at least for setup guides).

When I run the following (**** replaces my real public IP address):

docker run --log-driver json-file --log-opt max-size=1g --name=otnode --hostname=**** -p 8900:8900 -p 5278:5278 -p 3000:3000 -e LOGS_LEVEL_DEBUG=1 -e SEND_LOGS=1 -v /root/certs/:/ot-node/certs/ -v /root/.origintrail_noderc:/ot-node/.origintrail_noderc -v /root/wallets/kovan.json:/ot-node/data/wallets/kovan.json -v /root/wallets/rink.json:/ot-node/data/wallets/rink.json quay.io/origintrail/otnode-test:feature_blockchain-service

I expect to see the node fire up and start eating my test coins, but instead I get:

(more logs up here, but no errors)...
arango: stopped
2021-02-19 03:01:23,481 INFO spawned: 'arango' with pid 312
2021-02-19 03:01:24,484 INFO success: arango entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
arango: started

===================================================
==== ====
==== Arango password successfully updated! ====
==== ====

2021-02-19T03:01:34.502Z - notify - One-time password migration completed. Lasted 24730 millisecond(s)
2021-02-19T03:01:34.571Z - error - Whoops, terminating with code: 1
2021-02-19 03:01:34,579 INFO exited: otnode (exit status 1; expected)
2021-02-19 03:01:35,581 WARN received SIGTERM indicating exit request
2021-02-19 03:01:35,581 INFO waiting for remote_syslog, arango, otnodelistener to die
2021-02-19 03:01:36,325 INFO stopped: arango (exit status 0)

Specifications

  • Node version: 4.1.16 (perhaps this should be incremented to 5.0.0 since people are referring to this node as v5)
  • Platform: Ubuntu
  • Node wallet: I'll send it along via email if you need it
  • ERC725 identity: Not created yet, I believe this is created after the first run

Contact details

  • Just comment to this ticket

Error logs

See above.

Disclaimer

Please be aware that the issue reported on a public repository allows everyone to see your node logs, node details, and contact details. If you have any sensitive information, feel free to share it by sending an email to [email protected].

Node unable to start after available storage fills up

Issue

Storage on my VPS was fully consumed by a process writing 25GB+ of data, possibly either a backup or arangodb, causing the node running on this VPS to terminate. My nodes run on a VPS tier with a 55GB SSD, and while storage utilization has increased over time, it has so far hovered around 50%, so the storage filling up overnight was a bit unexpected.

I was able to free up storage by deleting a very large backup file, with approximately 20GB now free. When now attempting to start the node, I receive the following error: warn - Failed to load contracts on all blockchain implementations. Error: Invalid JSON RPC response: ""

Up to this point I've done a bit of troubleshooting: restarting my VPS, verifying that the configurations in the .origintrail_noderc file look fine (especially the RPC endpoint URL I have entered for Ethereum), and attempting to roll back my entire VPS to a snapshot from 4-5 days earlier.

Steps to reproduce the problem

While I can't reproduce this issue on demand, at least one other user is experiencing the same error after their VPS storage filled up. Two of my other nodes also experienced this sudden increase in storage utilization, but both were able to start after storage space was cleared.

Specifications

Contact details

  • Email:

Error logs

(node:14) Warning: N-API is an experimental feature and could change at any time.
2021-06-24T19:45:34.534Z - info - npm modules dependencies check done
2021-06-24T19:45:34.535Z - info - ot-node folder structure check done
2021-06-24T19:45:34.537Z - important - Running in mainnet environment.
2021-06-24T19:45:34.561Z - info - Using existing graph database password.
2021-06-24T19:45:34.596Z - info - Arango server version 3.5.3 is up and running
2021-06-24T19:45:34.612Z - info - Storage database check done
2021-06-24T19:45:35.309Z - info - [ethr:mainnet] Selected blockchain: Ethereum
2021-06-24T19:45:35.320Z - info - [xdai:mainnet] Selected blockchain: xDai
2021-06-24T19:45:35.327Z - trace - [ethr:mainnet] Asking Hub for Holding contract address...
2021-06-24T19:45:35.359Z - trace - [xdai:mainnet] Asking Hub for Holding contract address...
**2021-06-24T19:45:55.384Z - warn - Failed to load contracts on all blockchain implementations. Error: Invalid JSON RPC response: ""**
2021-06-24 19:45:55,410 INFO exited: otnode (exit status 0; expected)
2021-06-24 19:45:56,414 WARN received SIGTERM indicating exit request
2021-06-24 19:45:56,418 INFO waiting for remote_syslog, arango, otnodelistener to die
2021-06-24 19:45:57,413 INFO stopped: arango (exit status 0)
2021-06-24 19:45:58,419 INFO stopped: remote_syslog (terminated by SIGTERM)
2021-06-24 19:45:58,420 INFO stopped: otnodelistener (terminated by SIGTERM)


Disclaimer

Please be aware that the issue reported on a public repository allows everyone to see your node logs, node details, and contact details. If you have any sensitive information, feel free to share it by sending an email to [email protected].

A couple of issues running an ot-node over the last 2 months

I've been running a node for almost 2 months. I don't have a complaint about job distribution like most others, although I think it is an issue that needs to be addressed, and I'm hopeful it will be fixed with the Freedom update.

Before I get to the current issue, I would like to report an issue I had when first setting up my node. I tried to use the option in the image below to deposit ETH to my operational wallet.
https://gyazo.com/ec09f5bbbdbfa84a42696f17bff48878

It says "Deposit ETH to your Operational Wallet", but it sent ETH to my ERC-725 identity instead, which I later learned is lost forever. Obviously, I didn't know what I was doing. I went ahead with the transaction because it says "Deposit ETH to your Operational Wallet" and I thought that's what it would do. But somehow my ERC-725 identity was in the text field. It should at least state that I need to put in my operational wallet address if it isn't there. Here's the transaction on Etherscan:
https://etherscan.io/tx/0x37b373d8651cfadaebb5ce177d63c4c7cac4e8087e6793e7499b3956459cef51

I'm out 0.22 ETH; I would like this reimbursed.

And on to the issue I'm currently having: I installed my node with Git, and the command cloned the master branch. Now the team is posting updates everywhere about updating to v2.0.50, so I did. But it turns out v2.0.50 is on another branch; git pull still gives me v2.0.45.

Why do I need to switch branches? And if there is no option other than switching, I would like instructions on how to handle this properly.

Thanks!

How to create a smart contract for V0.6.0a

Expected Behavior

Can create a new smart contract. This is a new feature of version 0.6.0a.

Actual Behavior

Did not know how to create a new smart contract. Tutorials or a demo for this behavior are needed.

Specifications

  • Version: V0.6.0a
  • Platform: Ubuntu 16.04 run on VMWare

"Import Data" link broken

Expected Behavior

Link to import data page.

Actual Behavior

Redirects to node login page.

Steps to Reproduce the Problem

  1. Login to node.
  2. From "My Account" page, click on "Import Data" link.

Specifications

  • Version: Houston Beta v1.0.2
  • Platform: Mac v10.13.4

Not able to Run a Node

I followed the steps mentioned in
https://docs.origintrail.io/en/latest/Running-a-Node/basic-setup.html

I tried both ways:
=> via Docker
=> manual installation
In both cases I am not able to find the cause of the node not running on my end.

Docker runs for a few seconds and then stops; I tried docker start -i otnode, but found no clue as to why it stops.

Terminal logs for: docker start -i otnode

Screen Shot 2021-03-19 at 7 13 39 PM

Terminal logs for: npm start

Screen Shot 2021-03-19 at 6 55 30 PM

I tried configuring it as in https://github.com/OriginTrail/ot-node/blob/release/mainnet/.origintrail_noderc.image

and in this way (but no luck):

{
    "node_wallet": "0x4ff99b5d96035611cd5866d776538877",
    "node_private_key": "aafff60551908d34d62c7746ab6c394e1dc6b43dcf58350f15abf6d59c4",
    "management_wallet": "04ec7cc16bcbd873d3f5015e6f6263c3dd98a6ad48e48b5ec76f1ebfe699254303adf02ddb3f48a0bfb5cdce87796e851265da2cad77f9de03ed4fb",
    "network": {
        "hostname":"127.0.0.1",
      "remoteWhitelist":[
         "0.0.0.0",
         "127.0.0.1"
      ]
    },
    "blockchain": {
        "rpc_server_url": "https://1pxlHB8a44CZZzzHASMIzQob:[email protected]",
        "implementations": [
            {
                "blockchain_title": "Ethereum",
                "blockchain_id": "ethr:mainnet",
                "rpc_server_url": "https://1pxlHB8a44CZZzzHAX1MIzQob:[email protected]",
                "node_wallet": "0x4ff99b5d96035611caf81b16d586d7765038877",
                "node_private_key": "aafff60551908d3f8df4d62c7746b6394e1dc6b43dcf583650f15abf6d59c4",
                "management_wallet": "04ec7cc16bcbd873d3f5015e6f6263c3dd98a6abd7fad48e4b5ec76d8ae98f1eb699254303adf02ddb3f48a0bfb5cdce87796e851265da2cad77f9de03ed4fb"
            },
            {
                "blockchain_title": "xDai",
                "blockchain_id": "xdai:mainnet",
                "rpc_server_url": "https://1pxlHB8a44CZZzzASX21MIzQob:[email protected]",
                "node_wallet": "0x4ff99b5d96035611caf81b16d5866d776508877",
                "node_private_key": "aafff60551908d3f8df4d62c7746ab6c394e1dc6b43dcf583650f1abf6d59c4",
                "management_wallet": "04ec7cc16bcbd873d3f5015e6f6263c3dd98a6abd7fad4e48b5ec76dae98f1ebfe9254303adf02ddb3f48a0bfb5cdce87796e851265da2cad77f9de03ed4fb"
            }
        ]
    }
}

Note: the above addresses/values are not real.

Any direction, regarding this would be appreciated.

Missing template / detailed description for ERP customers how to export the GS1 XML file in an automated way or process

Expected Behavior

Provide a detailed concept and implementation examples for the large ERP software vendors showing how the XML export should be implemented, ideally without customization of the ERP. Customization moves the customer away from the standard and is therefore considered bad practice.

Actual Behavior

A customer using Navision or SAP ERP will not be willing to implement the XML export from scratch or reinvent the XML-export wheel per customer.
There is no detailed description of how the XML export should be implemented for large ERP vendors like Navision, Infor, or SAP.

https://github.com/OriginTrail/ot-node/wiki/ERP-Customization

Specifications

https://community.dynamics.com/ax/b/axvanyakashperuk/archive/2014/09/16/tutorial-generating-shipping-labels-using-the-gs1-sscc-18-barcode-format
https://help.sap.com/saphelp_me60/helpdata/EN/f7/86c1536ca9b54ce10000000a174cb4/frameset.htm
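For orientation, here is a minimal sketch of a GS1 EPCIS 1.2 ObjectEvent of the kind an ERP export would need to emit; all identifiers below are illustrative, and the exact profile ot-node expects should be verified against the importer examples in this repository:

<epcis:EPCISDocument xmlns:epcis="urn:epcglobal:epcis:xsd:1" schemaVersion="1.2" creationDate="2018-01-01T00:00:00Z">
  <EPCISBody>
    <EventList>
      <ObjectEvent>
        <eventTime>2018-01-01T10:00:00Z</eventTime>
        <eventTimeZoneOffset>+00:00</eventTimeZoneOffset>
        <epcList>
          <epc>urn:epc:id:sgtin:0614141.107346.2017</epc>
        </epcList>
        <action>OBSERVE</action>
        <bizStep>urn:epcglobal:cbv:bizstep:shipping</bizStep>
        <readPoint>
          <id>urn:epc:id:sgln:0614141.07346.1234</id>
        </readPoint>
      </ObjectEvent>
    </EventList>
  </EPCISBody>
</epcis:EPCISDocument>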

Validate wallet addresses upon OT-Node startup

Background

OT-Node uses different wallet addresses from the configuration file for normal functioning. The node wallet is the operational wallet, with a corresponding private key, that the OT-Node uses for dataset signing; creating and finalizing offers; payouts; and initiating, answering, and completing litigation. The management wallet is used when a new identity is generated, and from that point on it is not used.

The bug

The OT-Node wallet configuration parameters can be entered incorrectly and cause communication problems, since they are not checked when creating a node profile. Adding format checks for the wallets, such as requiring a hex string with the 0x prefix for addresses and a hex string without the 0x prefix for the private key, would make the OT-Node more error resistant.

Level: Easy

Story definition

An OT-Node startup should fail and report an error message if one of the following scenarios occurs (a sketch of such a check follows the list):

  • node_wallet and management_wallet are not hex values
  • node_wallet and management_wallet don't start with 0x
  • node_wallet and management_wallet are not 42 characters long
  • node_private_key is not a hex value
  • node_private_key starts with 0x
  • node_private_key is not 160 characters long
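A minimal sketch of such a startup check (the helper below is illustrative, not the actual ot-node code; the lengths follow the story definition above):

// Illustrative validation sketch; names and structure are not the
// actual ot-node implementation.
function validateWalletConfig(config) {
    // Addresses: 42 characters total, '0x' prefix plus 40 hex digits.
    const ADDRESS_REGEX = /^0x[0-9a-fA-F]{40}$/;
    // Private key: hex string without the '0x' prefix, with the length
    // stated in the story definition.
    const PRIVATE_KEY_REGEX = /^[0-9a-fA-F]{160}$/;

    for (const key of ['node_wallet', 'management_wallet']) {
        if (!ADDRESS_REGEX.test(config[key])) {
            throw new Error(`${key} must be a 42-character hex address starting with 0x`);
        }
    }
    if (!PRIVATE_KEY_REGEX.test(config.node_private_key)) {
        throw new Error('node_private_key must be a hex string without the 0x prefix');
    }
}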

Where to start?

OT-Node entry point in standard environment:

if (!config.node_wallet || !config.node_private_key) {

OT-Node entry point in Docker environment:

if (!externalConfig.node_wallet ||

Requirements

Questions?

If you have questions, ask in this issue or on your pull request (if you've created one).

xDai gas prices are increasing and this means you can't create an xDai node at certain times of the day

Expected behavior

Should be able to create an xDai node regardless of how busy the xDai blockchain is.

Actual behavior

When gas prices on xDai creep up, it is impossible to create a node on xDai because the node uses a fixed gas price.

Steps to reproduce the problem

  1. xDai gas prices must be high enough that anything with 1 gwei does not get processed
  2. Create a new node on xDai
  3. When it tries to create your profile on the blockchain you will see this log entry
    trace - Sending transaction to blockchain, nonce 0, balance is 5991296853149283761
  4. It now hangs because gas prices are too high for this to be picked up

Specifications

  • Node version: Latest version
  • Platform: Any
  • Node wallet: N/A
  • ERC725 identity: N/A

Investigation (Overview)

xDai has the following default values in the config.json:

"gas_limit": "2000000",
"gas_price": "1000000000",

This equates to a gas price of 1 gwei. Looking at the screenshot below, we can see that xDai is already at over 80% gas utilisation in blocks, mostly from one project.

Gas prices have been seen at levels where 1 gwei is no longer safe, so transactions don't get picked up.

[screenshot: xDai block gas utilisation]

This leads to the problem that, unless you manually change the node config to use a different gas_price for xDai, it is stuck at 1 gwei. This blocked a user from setting up their node, and there are no warnings or errors; the otnode just seemingly gets stuck.

It should also be said that blockscout.com, used for checking your wallet, doesn't track pending transactions very well either, so the experience there is bad when trying to figure out what's happening.

Investigation (Code)

Ethereum has support for reading ETH Gas Station to get gas prices, as seen here:

const gasStationLink = 'https://ethgasstation.info/json/ethgasAPI.json';

xDai has no support for dynamic gas prices and instead hard-codes the value, as seen here:

return this.config.gas_price;

Possible Solution

I think we need to find an equivalent service for getting gas prices for xDai or find other creative ways of calculating this.
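A rough sketch of what such a dynamic lookup with a static fallback could look like (the oracle URL and its response shape are placeholders for whatever service gets chosen, not a confirmed API):

const axios = require('axios');

// Illustrative sketch only: the oracle endpoint and response format
// below are assumptions, not a confirmed xDai service.
async function getXdaiGasPrice(config) {
    const ORACLE_URL = 'https://example.com/xdai-gas-price-oracle'; // placeholder
    try {
        const { data } = await axios.get(ORACLE_URL, { timeout: 5000 });
        // Assume the oracle returns a "fast" price in gwei; convert to wei.
        return String(Math.ceil(data.fast * 1e9));
    } catch (err) {
        // Oracle unreachable: fall back to the static value from config.json.
        return config.gas_price;
    }
}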

connect ECONNREFUSED 127.0.0.1:8529

Expected Behavior

Run the OT-Node on the testnet network.

Actual Behavior

Steps to Reproduce the Problem

2019-03-07T07:44:59.895Z - error - Please make sure Arango server is up and running
{ Error: connect ECONNREFUSED 127.0.0.1:8529
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '127.0.0.1',
port: 8529,
response: undefined }
2019-03-07T07:44:59.898Z - info - Notifying Bugsnag of exception...
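The error means the node cannot reach ArangoDB on its default port 8529, so the first thing to verify is that the database is actually running. A quick check, assuming a local ArangoDB installation (the version endpoint may require authentication, and arangodb3 is the typical service name on Debian/Ubuntu packages):

curl http://127.0.0.1:8529/_api/version    # should return a JSON version object if Arango is up
sudo systemctl start arangodb3             # start the service if it is not running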

Specifications

  • Version:
  • Platform:

Handle insufficient funds gracefully

Background

OT-Node, as a data creator, spends ETH on blockchain transactions and TRAC for chosen data holders. If the data creator doesn't have sufficient funds in the wallet, an error occurs.

The bug

The error severity is the same as if there was an actual error during transaction execution. Since low funds are not an error but rather a circumstance, errors thrown because of low funds should be caught and logged with a lower severity level, and without sending an error report to BugSnag.

Level: Easy

Story definition

When a transaction fails because of low funds, an OT-Node should meet the following:

  • log a warning message
  • not send an error report to BugSnag
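A minimal sketch of this handling (helper and logger names are illustrative, not the actual ot-node code):

// Downgrade low-funds failures to warnings and skip the BugSnag report.
function handleTransactionError(err, logger, bugsnag) {
    const isLowFunds = /not enough tokens|insufficient funds/i.test(err.message);
    if (isLowFunds) {
        logger.warn(`Transaction skipped due to low funds: ${err.message}`);
        return; // a circumstance, not an error: no BugSnag report
    }
    logger.error(err.message);
    bugsnag.notify(err); // real errors are still reported
}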

Where to start?

Handling insufficient funds for offer preparation:

const message = 'Not enough tokens. To replicate data please deposit more tokens to your profile';

Handling insufficient funds for offer finalization:

errorMessage = 'Not enough tokens. To replicate data please deposit more tokens to your profile';

Handling insufficient funds for offer mining:

const message = 'Not enough tokens. To replicate data please deposit more tokens to your profile';

Requirements

Questions?

If you have questions, ask in this issue or on your pull request (if you've created one).

4 blockchain calls after each finalized offer

With version 2.0.52, the log shows 4 blockchain calls after the node recognizes a finalized offer (whether or not it was chosen). Those 4 calls happen for every offer the node was chosen for in the past. Is this sustainable as the number of won offers grows?

2019-04-09T16:16:11.687Z - trace - getHolderLitigationEncryptionType(offer=0x12345..., holderIdentity=0x6789a...)
2019-04-09T16:16:11.928Z - trace - getOffer(offerId=0x12345...)
2019-04-09T16:16:12.513Z - trace - getHolderPaidAmount(offer=0x12345..., holderIdentity=0x6789a...)
2019-04-09T16:16:12.788Z - trace - getHolderStakedAmount(offer=0x12345..., holderIdentity=0x6789a...)

Missing data-privacy statement for GDPR and China Cyber Security Law compliance

Expected Behavior

Include a data privacy statement. Document how the GDPR and China Cyber Security Law legal requirements are implemented,
especially how the right to be forgotten will be implemented if person-related data is saved to an immutable blockchain. Here it does not matter that the person-related data is encrypted.

Actual Behavior

From the graph structure I can see that the OriginTrail "databases" will deal with person-related data defined by the object class actor.
This object will very likely be person-related data that falls under the GDPR or the China Cyber Security Law. These legislations require compliance in the software implementation.

Steps to Reproduce the Problem

https://github.com/OriginTrail/ot-node/wiki/Graph-structure-in-OriginTrail-Data-Layer---version-1.0
No data privacy statement is included in the Git repository or the wiki.

Specifications

https://gdpr-info.eu/art-17-gdpr/
https://assets.kpmg.com/content/dam/kpmg/cn/pdf/en/2017/02/overview-of-cybersecurity-law.pdf

  • Version:
  • Platform:

no vertices on readPoint + observedObjects when importing WOT-Files

Expected Behavior

When I import a WOT file, for example https://github.com/OriginTrail/ot-node/blob/develop/importers/json_examples/WOT_Example_1.json,
and then investigate the vertices and edges in the data I get from GET /api/query/local/{import_id},
I should see vertices or edges that contain the readPoint or the observedObjects; otherwise the examples should not list that data. That would make it easy to query the WOT data.

The WOT file contains:

"readPoint": {
"id": "urn:epc: id: sgln: Building_1"
}
"observedObjects": [
"urn:epc: id: sgtin: Batch_1"
]

Actual Behavior

The graph has no vertices or edges that contain the readPoint or the observedObjects.
Without readPoint or observedObjects it is hard to query the data from WOT files.

Also, for some reason the id contains "Invalid_Date", like "urn:ot:mda:actor:id:Company_1:Invalid_Date".
I attached the response with the vertices/edges.

WOT_Exampl_1_responds.txt

Steps to Reproduce the Problem

1. Locally import a JSON file from this repo, for example https://github.com/OriginTrail/ot-node/blob/develop/importers/json_examples/WOT_Example_1.json
2. Get the graph data with GET /api/query/local/{import_id}.
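For reference, such a query can be issued with curl (the port 8900 is an assumption here; use the RPC port your node config actually exposes):

curl "http://127.0.0.1:8900/api/query/local/<import_id>"    # replace <import_id> with the ID returned by the import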

Specifications

  • Version: 1.3.32
  • Platform: Ubuntu 16.04 on Digital Ocean

How to send TRAC to my profile

Expected Behavior

Be able to deposit TRAC from the management wallet to my profile at node-profile.origintrail.io.

Actual Behavior

Steps to Reproduce the Problem

"profile": {
"minimalStake": "1000",
"reserved": "0",
"staked": "1000"
},

The token balance of my operational wallet is 850, but I cannot send TRAC to my profile.
How do I send TRAC to my profile?

Specifications

  • Version:
  • Platform:

error - Failed to create profile

Expected Behavior

Run the OT-Node on the testnet network.

Actual Behavior

[screenshot of the error]

Steps to Reproduce the Problem

My logs when running the OT-Node:
2019-03-05T02:42:36.646Z - trace - Sending transaction to blockchain, nonce 9, balance is 45327694480943374925824
2019-03-05T02:42:49.326Z - error - Failed to create profile
Error: Error: Transaction has been reverted by the EVM:

Specifications

  • Version:
  • Platform:
