
pegnet's Introduction



A Network of Pegged Tokens

This is the main repository for the PegNet application.

Pegged tokens reflect real market assets such as currencies, precious metals, commodities, cryptocurrencies etc. The conversion rates on PegNet are determined by a decentralized set of miners who submit values based on current market data. These values are recorded in the Factom blockchain and then graded based upon accuracy and mining hashpower.

The draft proposal paper is available here.

For any questions, troubleshooting, or further information, head to Discord.

Mining

Requirements

Setup

Create a .pegnet folder inside your home directory. Copy the config/defaultconfig.ini file there.

On Windows this is your %USERPROFILE% folder

Linux example:

mkdir ~/.pegnet
wget https://raw.githubusercontent.com/pegnet/pegnet/master/config/defaultconfig.ini -P ~/.pegnet/
  • Sign up for an API Key from https://currencylayer.com, replace APILayerKey in the config with your own

  • Replace either ECAddress or FCTAddress with your own

  • Modify the IdentityChain name to one of your choosing.

  • Have a factomd node running on mainnet.

  • Have factom-walletd open

  • Start Pegnet

On first startup there will be a delay while the hash bytemap is generated. Mining will only begin at the start of each ten-minute block.
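The bullet points above map to a handful of config keys. A hedged sketch of what the edited ~/.pegnet/defaultconfig.ini might look like — key names such as APILayerKey, ECAddress, and IdentityChain come from this README, but the section name and layout here are assumptions, so check the shipped defaultconfig.ini:

```ini
[Miner]
; Section name assumed -- verify against config/defaultconfig.ini
APILayerKey=YOUR_CURRENCYLAYER_API_KEY
ECAddress=EC...yourEntryCreditAddress
IdentityChain=YourIdentityName
FactomdLocation=localhost:8088
WalletdLocation=localhost:8089
```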

Contributing

  • Join Discord and chat about it with lovely people!

  • Run a testnet node

  • Create a github issue because they always exist.

  • Fork the repo and submit your pull requests, fix things.

Development

A Docker guide can be found here for an automated solution.

Manual Setup

Install the factom binaries

The Factom developer sandbox setup overview is here, which covers the first parts; otherwise use the steps below.

# In first terminal
# Change blocktime to whatever suits you 
factomd -blktime=120 -network=LOCAL

# Second Terminal
factom-walletd

# Third Terminal
fa=$(factom-cli importaddress Fs3E9gV6DXsYzf7Fqx1fVBQPQXV695eP3k5XbmHEZVRLkMdD9qCK)
ec=$(factom-cli newecaddress)
factom-cli listaddresses # Verify addresses
factom-cli buyec $fa $ec 100000
factom-cli balance $ec # Verify Balance

# Fork Repo on github, clone your fork
git clone https://github.com/<USER>/pegnet

# Add main pegnet repo as a remote
cd pegnet
git remote add upstream https://github.com/pegnet/pegnet

# Sync with main development branch
git pull upstream develop 

# Initialize the pegnet chain
cd initialization
go build
./initialization

# You should be ready to roll from here

pegnet's People

Contributors

aktary, dwjorgeb, emyrk, gunsmoke, mberry, mwanon, ormembaar, paulbernier, paulsnow, sambarnes, sigrlami, starneit, whosoup


pegnet's Issues

PegNet API - onchain info

We should move the APIs here : https://github.com/pegnet/pegnet-node

This is the node API vs. the miner's. The miner will gradually lose features as we push them to the node. The pegnet node is currently a superset, meaning it supports all the /v endpoints as well, but I'd like to migrate them to /node/v1.

We need the following (some exist, but let's get some cmd options to get this info and confirm they work):

  • OPR by entryhash
  • All OPRs per block in sorted order
  • OPRs by Identity (maybe pagination or per block in the call?)
  • OPRs by payout
  • Winners per block (by both identity or payout)
  • Rewards per address

Not included because this is in fatd:

  • market caps
  • txs per block

Provide Miner balances

Need to write code to scan the Oracle Mining Records chain and print the payouts for all the miners so they can see what they have from mining.

Refactor and cleanup

The current code assumes a particular data source for currency, precious metals (gold and silver), and cryptocurrencies.

We would like to modularize the code, enable plugging in other sources, and create other adaptors for those sources.

This issue is about designing that structure for the code.

Documentation

We need some serious work done on the documentation. I've written a bunch of theory, and have discussed various aspects of the protocol, but more user documentation and other documentation needs to be identified and written.

Edit (via @Cahl-Dee ):
Documentation needed before pM1 launch to include...

  • [draft] clear miner documentation
    -- how to setup
    -- brief overview of what the mining is doing
    -- identity performance of miner
    -- mining benchmarks/suggested specs
  • [pending] CLI tool documentation mainly around key management. Include links to guides for setting up factom walletd
  • [complete] Greg Forst has requested a narrative/story around Pegnet - "I am newer to the specifics of Pegnet and want to take this to all our channels (PR, social etc...) but I need more clarity on the narrative/story we are telling with pegnet. The technical description is great but what does it solve, enable, use cases, better than X etc... I need to feed a story with the product and as you know other stable coins exist"

Edit 2 (via @Cahl-Dee ):
Draft docs are available here

Edit 3 (via @Cahl-Dee ):
All docs are live on the github wiki here: https://github.com/pegnet/pegnet/wiki

Calculating Grade doesn't reset

pegnet/opr/grading.go

Lines 40 to 47 in 98e3806

func CalculateGrade(avg [20]float64, opr *OraclePriceRecord) float64 {
	tokens := opr.GetTokens()
	for i, v := range tokens {
		d := v.value - avg[i]           // compute the difference from the average
		opr.Grade = opr.Grade + d*d*d*d // the grade is the sum of the fourth powers of the differences
	}
	return opr.Grade
}

Every cycle, the algorithm is supposed to take the average and move outliers to the back; however, the Grade doesn't reset between cycles, so it compounds. This can lead to fairly easy manipulation.

Assume you have ten records of $1, one record of $20, and one record of $1,000,000. In theory, the ten records of $1 should be selected, but instead the algorithm picks $20 as the winner alongside nine $1 entries.

First Cycle:
Average = ($10 + $20 + $1,000,000) / 12 = $83335.833...

Results after first cycle:
$20 grade = 0 + (20-83335.833)^4 ~= 48184e15
nine times $1 grade = 0 + (1 - 83335.833)^4 ~= 48228e15
$1m grade = 0 + (1m - 83335.833)^4 = 706059041e15

Second Cycle:
Average = $30 / 11 = 2.727272...

Results:
$20 grade = 48184e15 + (20 - 2.72)^4 ~= 48184e15 + 89010
nine times $1 grade = 48228e15 + (1 - 2.72)^4 ~= 48228e15 + 8

Grade absolutely has to be reset between each cycle:

func CalculateGrade(avg [20]float64, opr *OraclePriceRecord) float64 {
	tokens := opr.GetTokens()
	opr.Grade = 0
	for i, v := range tokens {
		d := v.value - avg[i]           // compute the difference from the average
		opr.Grade = opr.Grade + d*d*d*d // the grade is the sum of the fourth powers of the differences
	}
	return opr.Grade
}

Design Transaction Structure and Execution Rules

Create the conversion transaction that allows a user to convert assets from one pegged token to another.

CoS:

  • A design is produced that is able to be implemented
  • Tickets are created for all work items needed to execute the design

Grade gets reset if an asset isn't available

If the average is zero then the grade is completely reset back to zero? Doesn't seem like it should happen.

pegnet/opr/grading.go

Lines 54 to 55 in e950f5c

} else {
opr.Grade = v.value // If the average is zero, then it's all zero so

If the final asset isn't available for whatever reason, everyone will get the best grade possible irrespective of other values, and the OPR will be decided on difficulty alone. If any asset isn't available, it will void all the previous scores.

Shouldn't an avg of zero simply be ignored?

PegNet Addresses

So I had a look at the addresses used for balances and found that they're very inconsistent. The motivation was that you can't accidentally send pFCT to a pUSD address and vice versa for all addresses.

At the moment, the following scheme is used:

  1. take the RCD of an FA address
  2. take the prefix (pFCT, pUSD, etc) to get prefix_rcdaddress, calculate the checksum from that
  3. return prefix_ + base58(rcd | checksum)

The problem (imo) is that this creates wildly inconsistent addresses that range from 42 to 56 character length, as well as the fact that the prefix and _ are not base58 encoded but the rest of the address is. Regular FCT addresses fix this by using a two-byte prefix that locks the addresses inside a specific range.

So I figured why not use the same approach with PegNet addresses? Using a 4 byte prefix, I have come up with the following table:

        prefix  minimum address                                           maximum address
pPNT = 10715270 pPNT1k37XPNTkQLEVux7DugBBsGs7MVFrkTScSo25sdL2sWJWYzcV5 to pPNT3gnHkqQKB5ZNKVf1oXg8WU3nDyCuZbLEFWuGEZXKgsto8cy4dU
pUSD = 10791442 pUSD2bHWVHaZgM9j4jptSvmxKZuerYNRyhUcN37wGFXve6rzxS47P5 to pUSD4Y2gijcR72NrtKXo2YmueAgZyA65gYMQ17EBQwRvJ7FVaW2ZXU
pEUR = 1063b207 pEUR2HcgKZPnVpi9AZvdcpwxY5uBPPJP8ghuhPkyuRruVafuPLZtJB to pEUR4EMrZ1RdvVwGz9dYCSwurgg6W122qXahLTsE47ku9b4Q1QYLSa
pJPY = 1069b15f pJPY2EkvqtvkFPQKgptiWFu3yU4iotuwKheH9h9iuHSu1i1ST5xsxj to pJPY4BW75Lxbg4dTWQbd5su1J4qdvWdb2YX4nmFy3yLtfiPw59wL78
pGBP = 10664f09 pGBP11bzrGaLGA4bfdK8qWYJU7wh22DdwhN2ijKMz991zkpLhuvf4s to pGBP2xMB5icBgqHjVD23R8YFniic8dwHeYEpMoRc8q31emCqKyu7DG
pCAD = 106026c9 pCAD2rtN7pdXABrSHpbhgMvxrAAK6YcXY6mS4nZrNuwGJcF5QnF8H5 to pCAD4odYMGfNas5a7QJcFyvvAkwEDALBEweDhrg6XbqFxcda2rDaRU
pCHF = 10605656 pCHF1aipkPMuD1amas35xrEdWb3rd36ra5t9Tu3bnMQv81ADSh6SAB to pCHF3XTzyqPkdgouQSjzYUEaqBpmjepWGvkw6y9qw3Jun1Yi4m4tJa
* pINR is not possible since Base58 doesn't contain an uppercase I
pSGD = 1075c07e pSGD1zpbPnLidnWw18xiwJStxaLQXiCuCuKRyEqZ4rysr6EannPHaf to pSGD3wZmdENa4Tk4pifdWvSrHB7KeKvYukCDcJwoDYssW6d5QrMjj4
pCNY = 10607a1e pCNY1RckjdsiN5yr7yT3iEwUxYeDU2wWu4hz6dAHsEtUZFaVnCRYxX to pCNY3NMvy5uZnmCywZ9xHrwSH9R8aefAbuamjhGY1vnUDFxzQGQ16v
pHKD = 10680c09 pHKD2c2pCaUjxLgsZTQrUuokZrh3RJhrcBocvjKWAGdxNV7eE13AvB to pHKD4YmzS2WbP1v1P37m4XohtTTxXvRWK2gQZoRkJxXx2VW8r51d4a
pXAU = 107d4217 pXAU1h3c5XXQgP4cxgEyrneAkvuBv3jEqJ1URTDVXyNBsPTy4NbZTV to pXAU3dnnJyZG74HknFwtSQe85Xg72fStY8tG4XKjgfGBXPrTgSa1bt
pXAG = 107d40b1 pXAG1nYT4r6LNFzMGdkSLavhxHfM5aeWjcXmc6g9uiSKGzdJRrABXM to pXAG3jHdJJ8BnwDV6DTLvCvfGtSGCCNASTQZFAnQ4QLJw11o3v8dfk
pXPD = 107d9839 pXPD21XUx9fbwCZhzVnKWh5ZoCdh55wL6TnDj4zrRW58xpsDJudC9d to pXPD3xGfBbhTMsnqp5VE6K5X7oQcBheyoJf1N976aBy8cqFhvybeJ2
pXPT = 107d99db pXPT2YhDsm1omuE4uQbFp8kFqpLUkW74Sho8tfzq32nescyVFYTcnb to pXPT4VSQ7D3fCaTCizJAPkkDAR7Ps7pi9YfvXk75BigeXdMyscS4vz
pXBT = 107d48bc pXBT2DJgVyqwNoGyQAw1sZHbEnAtwaE1ehTxcvLUL3E5Wm9NruALDM to pXBT4A3rjRsnoUW7DkdvTBHYZNwp4BwfMYLkFzSiUj85AmXsUy8nMk
pETH = 1063aa56 pETH2orKkpWsn9oesLDeFiWv7Z63npMPbSaYCwncy1kcER4EvTAJfR to pETH4kbVzGYjCq2nguvYqLWsS9rxuS53JHTKr1ts7hebtRSjYX8kop
pLTC = 106cda3e pLTC1CjfhqVzPJFWL626hEyNvhr1aJa2SzSi1oHAFB6D6HzwrUwioH to pLTC39UqwHXqoyUe9fj1GryLFJcvgvHg9qKVesPQPrzCkJPSUYvAwg
pXBC = 107d46fc pXBC1MoecuYAmopZspxtBVeHaBEdqgWnBBBSHC7BDscjwmQJGD8PDy to pXBC3JYprMa2CV3hhQfnm7eEtn1YxJERt24DvGDRNZWjbmnntH6qNN
pFCT = 1064ce2e pFCT1B2PhYMgHFHMvPVWKf1aunnHtfPac164YAsW99Ga5ymA8bYknj to pFCT37mZvzPXhvWVjyCQuH1YEPZD1H7EJqxrBEykHqAZjz9ekfXCw8

Unfortunately the Indian rupee doesn't work with this approach, and using all lowercase characters means that LTC no longer works. If there's a possibility of dropping or renaming INR (e.g. RUP), then this would be a very clean approach to giving a unique, human-readable address with a consistent length.

Another possibility is taking the current system and just adding a prefix to stabilize the length. Those prefixes could be something like:

PEG (32 byte payload + 4 byte checksum) = 22 0b 70
PEG (37 byte payload + 4 byte checksum) = 44 5d e6

Which would make the addresses:
pUSD_PEG1k2fvhGErEKCpVDHMjBY5bdMBbdmkA4VDDsknZBwUcPPkJmCS7yAQ6tR5
5 characters non base58 name + 60 characters base58(3 byte prefix + 5 byte name + 32 byte rcd + 4 byte checksum)

The FA address FA1y5ZGuHSLmf2TqNf6hVMkPiNGyQpQDTFJvDLRkKQaoPo4bmbgu (all zeros for RCD) would be:

pPNT_PEG1k2RTjAL59eNe9ekjpcysXGgFtcFjeSw6PHCi22br1oVmuNG9cJ6BmEZE
pUSD_PEG1k2fvhGErEKCpVDHMjBY5bdMBbdmkA4VDDsknZBwUcPPkJmCS7yAQ6tR5
pEUR_PEG1k1sqf63fidaQPfzco36Vx28WZsjXG6zsrJaTK8ybvQ8AX5PbUpV2UJ7K
pJPY_PEG1k28C9tukGsENbEDRYn3tz9RoZMEtPPs3oFDGrWvZvZ3xEVKAehvCdoMK
pGBP_PEG1k1yPYWxXkQKYYwLirpaMikb3hPN6EQ46mVsEHmLwFPSQn4esD1CCtV1u
pCAD_PEG1k1mrGNmLD1TZb91cLZB5sabcPv4TapvQvRkESmEu2vDCAycp4UdjaiQF
pCHF_PEG1k1mvqkzXvXuPevcGUtwA1Whic9nE4KLgZACNVJnisqm33RfhCKVKnvFf
pINR_PEG1k25HgVmwTa9Gio2VyKDNZ72Htp9qEcGn1L7wp5q9T7eMeRuSbTPJVX3V
pSGD_PEG1k2a2ZaahfU6tffRrZivbsRhizZAyeifqet3XMkhXXZszRMTHJBLTueFD
pCNY_PEG1k1mzone2zVEDsyF4jZXdyLUPYntb3xk3fsqmdGTPzCWFZWbU33nGGVBn
pHKD_PEG1k22NYC9JGqHaovtR7ktpg8ourWBgdYqwfMiRxTuSpD7rvXHXV9VFC829
pXAU_PEG1k2pNRzofa17ZoQGqcAd2ruuL9DQXZ87YEsyWrTQRsyaVM3ymCdtMEe2X
pXAG_PEG1k2pNPvgqZk4yMFSuxvSH7rNpzn5i53JQZ2BLaGgc3cjAUMRoG17FBfK9
pXPD_PEG1k2pYBXXFMygd56mTtTFh6asQeb5WzDANggbdCMti4a6RUQBne7Man9Up
pXPT_PEG1k2pYDtoLWZT1sZRfUjTywoBhPNbJyANp3NwFoRro1qdwL4FB9QVGseVY
pXBT_PEG1k2pP5ifTN9zxSbtu67zutEZ49bXj1vLt7b7tN7FfZovdZ5AG13arQ96p
pETH_PEG1k1spyjowdhj7pnJ3bPaAqs8xoaZy4XZMnzT61dGw5ZxyunvGamefMrtU
pLTC_PEG1k2E11NJfKeSC2ZcKd5FHwW2gEA1TjKp7L1dm5Ugij5PjjNzzdHrnXeza
pXBC_PEG1k2pP3CojeQsfTzKZXKmrUwqscdvSnXvnvUWXbbfT4632CwMADREW3LsE
pFCT_PEG1k1vX4MWvKpwD4FquNU4E6jfDCRySx5xzHa1C4UX3fovZgmLbVceciRUz

They would still be uniquely identifiable even without the 5 character name at the front, since the name is encoded inside the base58 itself, allowing for the potential of just having "PEG....." addresses.

Thoughts?

FAT wallet integration

We would like a wallet that integrates PegNet tokens with other FAT tokens

This is a really huge design effort. Let's discuss here to figure out how to do this.

Remove spaces from the Oracle Price Record chain

To limit confusion, I believe that chain names we use in the PegNet should not include spaces. Nor should they include characters outside of A-Z a-z 0-9.

This limits confusion about ChainIDs when we see PegNet-TestNet-Oracle_Price_Records: this is easily a ChainID created from three fields, PegNet, TestNet, and Oracle_Price_Records.

`Pegnet` Command Miner/API/CLI Organization

Currently, to launch a miner you run pegnet, which reads the config file to find the number of miners to run. But this is also our CLI tool, and soon to be the API.

Would you guys be opposed to doing:
pegnet --service miner --service api to launch a miner with an api?

then when we add a control panel (if we will?)
pegnet --service miner --service api --service controlpanel (or something)
I would also allow the user to define the main-service that takes the main thread, in case for w/e reason they want that.
pegnet --main-service miner --service api --service controlpanel
My only concern is that it's not the most intuitive, as I can't think of another app that does this. It's kind of a result of us overloading the pegnet binary as being both the cli and the daemon.

I'd rather pegnet just display the help for all the available commands, similar to what docker does when run without arguments.

Does anyone want to defend just launching it as pegnet like it is now?

Control Panel

One of the things I really like about factomd is having a control panel that lets us see what's going on with a node. I am not sure how many crypto daemons for other projects have this feature, but I think it would be great for the PegNet.

If we do keep a form of mining integrated in the PegNet daemon, it would be super cool to be able to turn mining on and off right in the daemon. And it would be great to be able to browse the blocks in the PegNet, and (eventually) the conversions, the transactions, the balances for winning miners, and stuff like that.

Of course, I think the real need will be for a full blown analysis screen that shows charts for the assets, token supplies for assets in the PegNet, and stuff like that. Is that part of the PegNet Daemon, or something separate? Likely something separate.

Just going into detail about ideas here. This Issue is to create the start of a Control Panel with the minimum features:

  • Turn on and off mining
  • Show the balance of the EC address for writing OPR records
  • Show the balances for all Mining rewards by Identity
  • Current block height for factomd and for the pegnet, current minute
  • Connectivity to the factomd node supporting the daemon
  • Connectivity to the data source used to provide pricing information

Remove dead code from assets.go

We are calling, but not using, the old binary marshaling code for the OPR. Simply remove it and the routine it calls.

Failed to see this when we moved to JSON for entries.

Design the PegNet Daemon

We need to design all the API calls for driving the PegNet. The design of the Daemon and the wallet should follow FATd, the FAT wallet, Factomd, and the Factom-walletd.

The evaluation of the winners of the prior block is not working

Improving the log files revealed that the check of the prior block's winners is not working properly. We are detecting differences in the winning lists but are putting those blocks in our valid block list anyway.

More research is needed, and more testing to ensure this issue is fixed with the sort rework.

Evaluating PoW to avoid a 51% attack

We have an advantage because the selection of the record that matters is a distributed problem (over all entries submitted). But any miner with 51% of the hash power still has the chance of selecting the values actually used in a block. How do we protect ourselves from this?

An analysis of the final 50 OPRs in the current approach has no method to do much but calculate the agreement between the 50 OPRs. A miner with 26 Entries in the 50 can dictate that result. So how can we make it harder to get 26 Entries?

Change how we reduce 100 to 200 entries to 50.

This method works like this:

  1. Collect the valid OPRs (all references to OPRs past this step assume the list of valid OPRs).
  2. Calculate the PoW for all the OPRs.
  3. Take the difficulty of the last OPR submitted, and use it to create a salted hash for all OPRs.
  4. Sort by salted hash.
  5. Loop through all the OPRs by pairs, keeping the OPR of each pair that has the higher PoW:
     • if all that is left plus what you are to keep equals 50, you are done;
     • if at the end of the list (without a pair) we still have more than 50 OPRs, repeat the loop with the OPRs we kept.

What this does for any set of valid OPRs over 100 is ensure that a party submitting 26 OPRs has a much reduced chance of being in the set of 50. A mining pool submitting multiple entries is likely to compete with itself prior to the selection of 50.

To have a good chance to have 26 entries out of 50, 51% is no longer enough. Many of your entries will end up competing with your own entries, ensuring one or the other no longer counts, no matter how high the hash power is for each.

The only entry with 100% certainty to win and go into the 50 is the highest hash power. But the second highest hash power might have been paired with the highest and eliminated. The impact of the algorithm is rather hard for me to calculate. Someone with some statistics might be able to figure it out. I need my stats book to do stats.

Add a database

We need to cache our analysis of the PegNet chains in a database for fast booting of state. To this end, we should add (I am thinking) a leveldb database. There are other options, so we can use this issue to discuss the pros and cons.

Of course, with a key value database (with a bucket layer for sorting keys) we can map this to about any key value database. I really don't want to get into further abstractions beyond the definition of a set of buckets and maybe a few helpers that provide the buckets for some really common key values, if that has value. I'm not sure it does.

Create a utility to convert FCT addresses into Pegged Asset Addresses

All the pegged token assets use FCT-compatible addresses, really. They just use a different human-readable encoding to avoid mistakes in sending or receiving values from others.

We need an easy way to convert someone's FCT address into a Test Network PNT address or a main network PNT address. The actual code to do the PegNet side is in the utility.go file. We need to add the Factom conversions, and a utility that makes generating the PegNet asset addresses easy, at least from the command line.

Monitor Implementation

// Running multiple Monitors is problematic and should be avoided if possible

Two observations:

  1. The monitor is never stopped
  2. There's only one monitor instance that runs single threaded.

It is possible to just make it a singleton.

There's a lot of superfluous functionality (like stopping) and mutex juggling. I don't think the mutexes are even necessary in this case, since the info struct is just swapped out with a new one and there are no goroutines that write to its fields.

Is the stop functionality ever needed in theory? The only instance I can think of is if you hotswap the factomd connection during runtime to a different node that potentially has different issues with directory block heights, but that seems a far-fetched scenario.

Any objections to a slight rework of this file to reduce mutexes?

Getting a robust statistical average for grading

Currently we use the arithmetic mean as the basis for grading. The mean is inherently susceptible to skew by outliers. While there are many different methods for dealing with outliers, simply using the median prevents many issues that could arise: it's a robust measure that requires over 50% malicious entries before reaching its breakdown point. The mean, by comparison, is fragile, with a breakdown point of 1/N.

Hypothetical example:

50 OPR values in the first cut for Bitcoin: 49 all recorded the same value of $10,000, with one outlier of $80,000:

XBT Values = [10000, 10000, 10000, ... , 80000]

Mean = $11,400
Median = $10,000

This might not seem problematic at extremes like this with such a simple example, but getting a few invalid values into the first cut can corrupt grading further down the line in favor of the eventual winning OPR.

A miner who knows they have a solid hashrate can submit an outlier into the top 50 and modify all their other entries to be graded above the others. I highly recommend replacing the current grading mechanism with the median.

Alternatives:

  • Trimming or winsorizing outliers, though when set at an arbitrary percentage it's hard to see whether it would be effective. Using standard deviation as a threshold is also quite susceptible to outliers and probably shouldn't be considered when faced with malicious inputs. Winsorizing extreme values to the last previously recorded amount would have the effect of creating inertia against any attempt to dramatically move prices.

  • Limit the percentage rate of price movement per block, by either trimming or modifying any value that tries to go beyond it. This is even more fraught with pitfalls and hardly ideal for a basket of vastly different pegs. Currencies rarely move a single percentage point in a day; some commodities swing wildly; cryptocurrencies can halve in price within hours. Having to make a judgement call on each individual one is impractical at best. The only good aspect of restricting movement to a certain percent is that it would avoid flash crashes and balance out highly turbulent situations.

Further info

https://en.wikipedia.org/wiki/Central_tendency

https://en.wikipedia.org/wiki/Breakdown_point#Examples

Validation of the Oracle Price Chain

Add code to walk the chain from the start to the end, and weed out any OPRs that do not:
1) Have values for all assets. If someone messes up the app key, they might write a record with some assets set to zero, something that isn't going to happen (or is so unlikely to happen that we can do an emergency fix if, say, the Euro goes to zero).
2) Check the grading done by the miners. The 10 payments should all be correct by the standard algorithm for grading OPRs. Don't include OPRs that you don't think graded the past right.
3) Any OPR that has the wrong directory block height for the block it is recorded in should be discarded.

Any other tests we might think of.

Analysis of the Mining Process

First of all, the LXRHash cascading effect seems to be doing its job fairly well. I ran an analysis on the results of the mining algorithm and determined that the resulting hashes are uniformly distributed:

https://i.imgur.com/0X8Zn4L.png

This data contains 1,907,966 hashes, calculated with LXRHash(LXRHash("foo") | nonce) using the miner nonce format. For each hash, the "difficulty" was calculated and sorted into a bucket based on its percentile, spanning the entire range of Uint64 divided into 1000 buckets. Bucket 0 is 0.0% to 0.1%, bucket 999 is 99.9%, etc.

At 1.9m hashes and 1000 buckets, we would expect the average bucket to hold 1,908 items. This turns out to be true as seen in the histogram. The normal distribution of the histogram is a strong indicator that the underlying data is random.

Conclusion: The results of hashing data with LXRHash are fairly random.

Now, this has some implications for the efficacy of mining in general. If you know the time period (9 minutes) and the hashrate (depends on setup), it is possible to calculate the likelihood that you will reach a higher difficulty than the one you attained.

I started out my analysis by just letting miners run and seeing what kind of difficulties they reach and how close to the maximum (2^64)-1 (18446744073709551615) they get. Turns out this happens rather fast. In most mining runs, the miners hit an early maximum within the first one or two minutes, occasionally improving on that maximum down the road. These results are deterministic for a given base OPR hash, but will vary depending on how fast your CPU can mine.

Using the base of LXRHash("foo") for example, I reached the difficulty 18446742286272833685 in the first minute (minute 0), improving to 18446743757683897496 in the tenth minute (minute 9). Since these numbers are hard to parse, I'm going to use a different format, which is MaxUint64 - difficulty in e-notation. The difficulty 18446742286272833685 becomes 1.7e12. This is the distance to the maximum, so lower is better.

To give these numbers some reference, 1e13 is at ~99.99995% (4 nines) of the maximum. 1e12 is ~99.999995% (5 nines) of the maximum. 1e11 is ~99.9999995% (6 nines).

base 0      1      2      3 4 5      6 7 8 9      10
foo  1.7e12 -      -      - - -      - - - 3.2e11 -
bar  5.3e12 2.9e12 -      - - -      - - - -      -
boo  7.7e12 -      4.5e12 - - 1.1e12 - - - -      -
car  1.1e13 -      -      - - 3.8e12 - - - 3.0e12 -
zoo  8.6e12 -      5.1e12 - - -      - - - -      -
dar  1.2e13 -      1.0e12 - - -      - - - -      -

Under the assumption that LXRHash is random, we can now calculate the likelihood of a single hash reaching a higher difficulty, P = distance / MaxUint64, or the chance that a hash has the same or lower difficulty, P2 = (MaxUint64 - distance) / MaxUint64, where distance is the MaxUint64 - difficulty value from the e-notation above. My hashrate on a 9-year-old rig is 13,374 hashes/second, or, to make it easier, ~800k per minute.

Let's assume that we reached 1e13 during the first minute. We have eight more minutes left, or 6.4 million hashes. The likelihood of 6.4 million hashes all being less or equal to 1e13 is P2 ^ 6.4e6 or ((2^64 - 1 - 1e13) / (2 ^ 64 - 1))^(6.4e6), which turns out to be 3.1%. That means there's a 97% likelihood one of the hashes will have a higher difficulty.

When we look at 1e12 however, the outcome of P2 ^ 6.4e6 is 70.6%. That means there is only a 30% likelihood one of the hashes will have a higher difficulty. At 1e11, the outcome is 96.6% giving us only a 3.4% likelihood that one of the hashes will improve. And this is, of course, for the result after the first minute.

Let's take the table above and include the data of how likely we are to get a better difficulty in the remaining time:

base 0            1            2            3 4 5            6 7 8 9
foo  1.7e12 (45%) -            -            - - -            - - - 3.2e11
bar  5.3e12 (84%) 2.9e12 (58%) -            - - -            - - - -
boo  7.7e12 (93%) -            4.5e12 (69%) - - 1.1e12 (13%) - - - -
car  1.1e13 (98%) -            -            - - 3.8e12 (39%) - - - 3.0e12
zoo  8.6e12 (95%) -            5.1e12 (73%) - - -            - - - -
dar  1.2e13 (98%) -            1.0e12 (23%) - - -            - - - -

(The percentage under each result is the likelihood of finding a better difficulty in the remaining time.)

The sample size here is fairly low but you can see that the ones with high percentages end up getting replacements fairly quickly. An outlier here is "zoo" which had a 1-in-4 likelihood of not getting one.

A higher hashrate means these probabilities increase. Let's assume we got a result of 1e12 and we have 8 minutes left to mine.

Hash/m  Res
100k    4%
200k    8%
400k    16%
800k    29%
1.6m    50%
3.2m    75%
6.4m    94%

Using this process, it is possible to estimate whether or not a miner should abandon their search. This means a single miner may only take a fraction of the total mining time, say three or four minutes. Further, running multiple miners means you can share the highest difficulties reached across the mining pool, letting you keep a "Top X" list of difficulties. You can keep mining as long as the lowest difficulty in the list has a likelihood within a certain threshold.

Remove init prints

Currently, when running the pegnet CLI, we always get prints from init functions in the various packages:

2019-07-16 19:23:52.647666143 -0500 CDT m=+0.002079765
Reading ByteMap Table  /home/steven/.lxrhash/lrxhash-seed-fafaececfafaecec-passes-5.size-25.dat

Add logging to PegNet

Added dirt-simple logging to stdout. Don't think we really need more.

This task should add logging through out mining and the PegNet daemon so we can validate everything is working correctly.

FAT-2 Pegged Asset Token Standard

A new FAT standard has been created for us to collaborate on the protocol design of Pegnet:
FATIP-2 Pegged Asset Token Standard

The standard has a proposal for a simplified OPR record structure, addresses #72 (Address format), #19 (Add transactions), #16 (Balances), #18 (Conversions), and marries Pegnet & FAT by allowing FAT-0 tokens to act as pegged assets controlled by the Pegnet protocol (OPR Grading Algo powering Coinbase Rewards & Conversions).

FAT already supports all the token functionality required by Pegnet and is well proven; it's very exciting to see the best of both projects coming together. Pegnet will be compatible with all existing FAT tooling, wallets, and explorers built as part of Factom protocol grants, even without additions.

I would like to propose that we use this PR on the FAT Github for nitty-gritty protocol & datastructure design work going forward. It would be good for us to separate out the protocol and implementation issues, as they are often, from experience, totally disconnected. You would never for example see something like "Sending a transaction with input 2 results in output o

Please ping @PaulBernier or myself if you'd like access as a collaborator on the FAT repo, looking forward to it!

Remove custom sorts

We have a few instances where I wrote some quick and dirty sorts. Replace with calls to sorting libraries.

Create a means of setting up Miner Identities

Create Identities for each miner
sign the registration for the miner with the PegNet
Sign the PNT payment address in the Miner Identity
Remove the PNT address from the OPR
Add signing the OPR with the identity
Add validation of all of that.

Rollup Branch

While waiting on pull requests to develop, this roll up branch will collect the work being done so we can continue.

Grading should take the difference as a percent

Right now the difference in grading is absolute. But a difference of 1 is very different for an asset worth $1 than for an asset worth $10,000.

The solution is to take the percentage difference from the average rather than the absolute difference.

Housekeeping

An issue branch to deal with the small things like linters and gofmt, along with general irregularities that occur along the way. May as well keep it all in one place.

For example, this:

```go
factom.SetFactomdServer(FactomdLocation)
factom.SetWalletServer(WalletdLocation)
```

gets overridden here (pegnet/common/monitor.go, lines 142 to 143 in ac613a7):

```go
factom.SetFactomdServer("localhost:8088")
factom.SetWalletServer("localhost:8089")
```

I believe there are also a few lines of unreachable code somewhere.

Design the FCT "Burn" address

For the testnet, we should implement a burn of 1 FCT to 1000 tFCT for the testnet. This way we both burn a little FCT and we don't have to implement a faucet. People can just make their own testnet tokens, and they are cheap.

Once there are some tFCT in the network, we can price the tPNT tokens, and miners can burn their rewards into other assets. This is where we begin to see some of the dynamics of the PegNet.

Add gofmt to travis

Checking whether code is properly formatted (go fmt) by hand is at times very annoying. We can automate the check.
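A hypothetical `.travis.yml` fragment for such a check: `gofmt -l` lists files that need formatting, so the build fails unless that list is empty.

```yaml
# Illustrative Travis config fragment, not the repo's actual file.
language: go
script:
  - test -z "$(gofmt -l .)"  # fail if any file is unformatted
  - go test ./...
```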

Non-winning blocks

This came up in Factom-Asset-Tokens/FAT#28: When there are no winners in a block, the list of winning entries is empty. The behavior of what happens in this case is not clearly defined.

I assume that the prices of the most recent winner will then stand in for the non-winning block, though I couldn't find anything that explicitly says so. In this case, I propose that instead of having the next block show there were 0 previous winners, we re-use the most-recent winners.

Example:

  • At height 1, the winners are A & B, with previous winners C & D.
  • At height 2, there are no winners, with previous winners A & B.
  • At height 3, the winners are E & F, with previous winners also A & B.

For OPRs, the previous winners is what links the individual blocks to each other. If it's possible for a list of winners to be empty, that's an attack vector that lets someone premine the block far in advance if they have the ability to cause a 0-winner block.

Attack scenario:
You have the ability to cause or predict when a non-winning block happens. The risk of this is lower the more popular pegnet is, but could include things like:

  • DDoSing a factom node (or nodes) that is used by miners (such as the open node)
  • DDoSing miners themselves (or a mining pool)
  • An undiscovered bug, like miners crashing if the network pauses, or a division by zero error when verifying opr records.

You know ahead of time at which height the non-winning block occurs and start mining until you have 26+ entries with very high difficulties using height+1 and a blank winner list. After height-1 is written, you cause the hiccup, producing a non-winning block at the target height. You now have the pre-mined OPR records for height+1 with a valid (blank) previous winner list.

If we include the "most recent" winners in the OPR instead of the "winners of the last block", then this attack can pre-mine at most two blocks ahead, which, unlike a head start of several days or weeks, is not a significant advantage over mining a single block.

Caveat: this attack can't happen if there are at least 10 OPRs per block, i.e. at least one or two miners running at all times, so the likelihood of it happening is extremely small, but the possibility of a weird bug is always there.
