metaDAOproject / futarchy
Programs for market-driven governance
License: Other
Once the AMM / CLOB is chosen and completed, we need to integrate it into the code. Specifically, the code should exhibit the following behavior:
I don't have a deep understanding of the mess that is npm and yarn, but this line breaks any attempt to run scripts using yarn. Yarn resolves to the wrong spl-token package, even though there's a node-based alias, and simply ignores the correct spl-token library.
Practically speaking, deleting the package from package.json and/or running npm install got the script to import the correct version of spl-token, but then, when it came to compiling or running the scripts using yarn, things just started to break again.
Minting conditional tokens currently does not use the Mint account of the deposited token. This is possible because the mint is stored by the vault, and token accounts do not explicitly require the mint.
However, this prevents indexers from noticing the transactions and tools like Dune from correctly indexing them.
Including the mint in the transaction could also simplify some security checks by checking only the mint instead of the underlying mint plus the underlying account (here and here). It could also save account space by storing only the mint and deriving the accounts.
I'm unable to generate verifiable builds on my local machine, and it's kind of a PITA to set up for new contributors. It would be nice to have a GitHub action workflow that generates verifiable builds when a program has been changed and commits them back to mainline.
At least while we still use openbook and are opening up the metadao program to new DAOs, we can use the dollar value of a project's token to help determine the quote and lot sizes.
Rough heuristic
Knowing the USDC value would also make it possible to set the expected value accurately, and to set the TWAP's max movement value, which is denominated in price lots (number of quote lots per 1 base lot), at 5% of the expected value.
Because of variance, we may want to encourage DAOs to enter a smaller number than the token's true current dollar value.
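As a rough sketch of that heuristic (all names, decimals, and lot sizes below are assumptions for illustration, not the program's actual API), the expected price in lots and the 5% TWAP max movement could be derived like this:

```rust
/// USDC has 6 decimals, so $1 = 1_000_000 quote units.
const QUOTE_UNITS_PER_DOLLAR: u128 = 1_000_000;

/// Expected price in price lots (quote lots per 1 base lot).
fn expected_price_lots(
    token_dollar_value: u64, // dollar value of 1 whole base token
    base_decimals: u32,
    base_lot_size: u64,  // base units per base lot
    quote_lot_size: u64, // quote units per quote lot
) -> u64 {
    // u128 intermediates to avoid overflow
    let quote_units_per_base_lot = token_dollar_value as u128
        * QUOTE_UNITS_PER_DOLLAR
        * base_lot_size as u128
        / 10u128.pow(base_decimals);
    (quote_units_per_base_lot / quote_lot_size as u128) as u64
}

/// TWAP max movement per update: 5% of the expected value, in price lots.
fn twap_max_movement_lots(expected: u64) -> u64 {
    (expected / 20).max(1) // floor at 1 lot so the TWAP can always move
}

fn main() {
    // e.g. a $1000 token with 9 decimals, 0.001-token base lots, 100-unit quote lots
    let expected = expected_price_lots(1000, 9, 1_000_000, 100);
    println!("expected: {} lots, max movement: {} lots", expected, twap_max_movement_lots(expected));
}
```

The 5% figure mirrors the suggestion above; the right percentage and lot sizes are exactly what the heuristic would need to decide per project.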
Some people have complained that proposals go live for trading before all the kinks have been worked out. It could be cool to add a pre-trading state to proposals: you could affirm that you're definitely raising the proposal, then collect feedback on how it could be improved during the first 1-2 days before trading begins.
Discuss with the team which settings and fields should be updatable.
Deploy the demo code to the Solana blockchain.
This would afford better price discovery and more competitive marketing. I've thought about this, and the only considerations I have are:
On the other side of it, it would allow multiple proposals to be ongoing at once; currently, you're really limited by how much capital you want to allocate where and for how long. It's concerning that you couldn't run multiple proposals and still expect full participation from everyone in the ecosystem.
Today, people are required to trade a minimum of 1 META in the conditional markets. This hurts liquidity given that this means that the minimum limit order size is ~$1000 at today's prices.
Today, you can't merge your conditional tokens. For example, you can't take 10 pMETA and 10 fMETA and convert them back into 10 META. We should fix this to increase cross-proposal liquidity.
Once the timelock is deployed, we should create a proposal to migrate the DAO's assets to it.
It costs around 5 SOL to create the openbook markets, which is a pain in the butt and means that we've burned more than $10,000 in just state rent. It would be nice to be able to close markets and send this rent back to the proposer.
If a proposal is successful when it's finalized, the instruction stored on the proposal is executed. Instead, we should have one instruction that finalizes the proposal and updates its state, and a separate instruction that executes the instructions of successful proposals. This would avoid some potential locking issues in the finalization instruction.
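The split could look something like this minimal sketch in plain Rust (hypothetical names; the real accounts and Anchor plumbing are omitted): finalize only flips state, and a separate execute instruction runs a passed proposal's stored instructions exactly once.

```rust
#[derive(PartialEq)]
enum ProposalState { Trading, Passed, Failed }

struct Proposal { state: ProposalState, executed: bool }

/// finalize: flips the proposal's state, but never runs the stored instruction.
fn finalize(p: &mut Proposal, market_says_pass: bool) {
    p.state = if market_says_pass { ProposalState::Passed } else { ProposalState::Failed };
}

/// execute: a separate instruction, callable once after a successful finalize.
fn execute(p: &mut Proposal) -> Result<(), &'static str> {
    if p.state != ProposalState::Passed { return Err("proposal did not pass"); }
    if p.executed { return Err("already executed"); }
    p.executed = true;
    // ...invoke the instruction stored on the proposal here...
    Ok(())
}

fn main() {
    let mut p = Proposal { state: ProposalState::Trading, executed: false };
    assert!(execute(&mut p).is_err()); // can't execute before finalization
    finalize(&mut p, true);
    assert!(execute(&mut p).is_ok());
    assert!(execute(&mut p).is_err()); // and only once
}
```

Because execute touches only the proposal and the accounts its instructions need, finalize no longer needs to lock those accounts.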
Just popping this here as a self-reminder to make a PR for this. The base and quote vaults already have functionality that prevents them from being double-created; it's likely possible to add something similar for the pass and fail openbook market accounts instead of just using Keypair.generate().
Line 441 in 8dfa34d
Maybe it's possible to use Keypair.fromSeed to generate a consistent pubkey for the markets. As long as the account is created and the openbook market is initialized in the same transaction, this should be fine?
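A sketch of the deterministic-seed idea. Note this uses std's non-cryptographic hasher purely for illustration; a real implementation would derive the 32-byte seed with a proper cryptographic hash and feed it into ed25519 key derivation (e.g. Keypair.fromSeed), and would still need to create and initialize the market in one transaction.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive a deterministic 32-byte seed from the proposal key and a market
/// label ("pass" / "fail"). Same inputs always give the same seed, so the
/// market accounts can't be double-created under fresh random keypairs.
/// ILLUSTRATION ONLY: DefaultHasher is not cryptographically secure.
fn market_seed(proposal_key: &[u8; 32], label: &str) -> [u8; 32] {
    let mut seed = [0u8; 32];
    for (i, chunk) in seed.chunks_mut(8).enumerate() {
        let mut h = DefaultHasher::new();
        proposal_key.hash(&mut h);
        label.hash(&mut h);
        (i as u64).hash(&mut h);
        chunk.copy_from_slice(&h.finish().to_le_bytes());
    }
    seed
}

fn main() {
    let proposal = [7u8; 32];
    let pass = market_seed(&proposal, "pass");
    assert_eq!(pass, market_seed(&proposal, "pass")); // deterministic
    assert_ne!(pass, market_seed(&proposal, "fail")); // distinct per market
}
```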
Revisions will need to address the following:
What I'm thinking here: if we want to test things, is there a way to securely make changes that afford flexibility without an arduous process? I understand this could conflict with security, so I'm not going to press that issue too much.
Say we'd like to run a proposal with a three-day window and see the response, then a one-day window and see the response, eventually resolving to a fixed code structure, or to a structure that supports the results of our testing and evaluation.
For additional context: the question is where the default values of potentially adjustable parameters should live. For instance, what if you wanted to enforce a PreTrading proposal state but some DAOs didn't, or wanted to afford 1-10 day proposal durations, adjust the minimum order size, use an AMM exclusively rather than a hybrid, or allow only one instruction type?
These questions exist for me, and their practicality is unknown.
Self-explanatory. In the script that initializes proposals, add metadata that allows wallets to show that one token is pMETA and the other is fMETA.
As an anti-spam measure, we force the proposer to burn some lamports. We should add a parameter expected_lamport_burn that you can pass into initialize_proposal. We should revert the transaction if the actual lamport burn would be greater than the expected lamport burn.
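A minimal sketch of the proposed check (names are assumptions). It works like slippage protection on a swap: the transaction reverts if the burn exceeds what the proposer agreed to.

```rust
/// Revert if the actual burn exceeds the proposer-supplied cap.
/// expected_lamport_burn would be the new parameter to initialize_proposal.
fn check_lamport_burn(actual: u64, expected_lamport_burn: u64) -> Result<(), &'static str> {
    if actual > expected_lamport_burn {
        Err("actual lamport burn exceeds expected_lamport_burn")
    } else {
        Ok(())
    }
}

fn main() {
    assert!(check_lamport_burn(1_000, 1_000).is_ok());  // burn at or below the cap
    assert!(check_lamport_burn(1_001, 1_000).is_err()); // cap exceeded: revert
}
```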
If the proposal has been finalized to pass, I think you shouldn't be allowed to continue minting fMETA/fUSDC from META (just the pass conditionals). Otherwise, users can generate as much fMETA as they like after finalization.
This also makes it more symmetric with the redeem process after finalization, and it's more fork-friendly: the conditional-fail diehards can form a community around their fMETA and the pUSDC from closing their pMETA. fMETA can then carry on as if the proposal didn't pass.
from @0xbigz
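A sketch of that minting restriction (state and token names are illustrative): after finalization, only the winning side's conditionals remain mintable.

```rust
#[derive(Clone, Copy)]
enum ProposalState { Pending, Passed, Failed }

/// Whether minting a conditional token of the given side is still allowed.
/// Before finalization both sides mint; afterwards only the winning side,
/// so nobody can print losing-side conditionals once the outcome is known.
fn can_mint_conditional(state: ProposalState, is_pass_token: bool) -> bool {
    match state {
        ProposalState::Pending => true,
        ProposalState::Passed => is_pass_token,   // only pMETA/pUSDC
        ProposalState::Failed => !is_pass_token,  // only fMETA/fUSDC
    }
}

fn main() {
    assert!(can_mint_conditional(ProposalState::Pending, false));
    assert!(can_mint_conditional(ProposalState::Passed, true));
    assert!(!can_mint_conditional(ProposalState::Passed, false)); // no post-pass fMETA
}
```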
Although ideally futarchy would prevent governance attacks, you never know what's going to happen.
What would be nice long-term is if we stored most of the DAO's funds in a 'cold wallet' whose funds can be moved to the 'hot wallet' only after some delay x. The delay would have to be shorter than the time it takes to pass a proposal, so that after a malicious proposal passes, a counter-proposal can stop it.
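A minimal sketch of the cold-wallet delay (field names and timestamp units are assumptions): a transfer is queued first and becomes executable only after the delay has elapsed, leaving a window in which a counter-proposal can intervene.

```rust
/// A queued withdrawal from the cold wallet to the hot wallet.
struct PendingTransfer { queued_at: i64, amount: u64 }

/// The transfer may execute only after `delay` seconds have passed since it
/// was queued; the delay must be shorter than the proposal cycle so a
/// counter-proposal can cancel a malicious transfer in time.
fn can_execute(t: &PendingTransfer, now: i64, delay: i64) -> bool {
    now >= t.queued_at + delay
}

fn main() {
    let t = PendingTransfer { queued_at: 1_000, amount: 500 };
    assert!(!can_execute(&t, 1_500, 1_000)); // still inside the delay window
    assert!(can_execute(&t, 2_000, 1_000)); // window elapsed
    let _ = t.amount;
}
```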
createMarket doesn't exist on the openbook client: https://github.com/metaDAOproject/futarchy/blob/main/tests/autocratV0.ts#L1458
Currently, only the TWAPs need to be higher by the threshold for a proposal to pass. Just relying on the TWAPs as currently implemented might be too easy to game.
I wonder if it's better to also ensure that
This consistency check wouldn't change whether a proposal passes or not; it would just delay finalization. This gives ample time for those who really disagree with a proposal to exit before it's allowed to pass.
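One way such a check could compose with the existing threshold logic (a sketch with assumed names and a basis-point threshold): a failed consistency check never flips the outcome, it only delays finalization.

```rust
#[derive(Debug, PartialEq)]
enum FinalizeOutcome { Pass, Fail, Delay }

/// The extra consistency check never changes the result; when it fails on a
/// would-be pass, finalization is merely postponed so dissenters can exit.
fn try_finalize(pass_twap: u64, fail_twap: u64, threshold_bps: u64, consistency_ok: bool) -> FinalizeOutcome {
    // pass iff pass_twap exceeds fail_twap by more than threshold_bps
    let passes = (pass_twap as u128) * 10_000 > (fail_twap as u128) * (10_000 + threshold_bps as u128);
    match (passes, consistency_ok) {
        (true, true) => FinalizeOutcome::Pass,
        (true, false) => FinalizeOutcome::Delay, // try again later
        (false, _) => FinalizeOutcome::Fail,
    }
}

fn main() {
    assert_eq!(try_finalize(1_060, 1_000, 500, true), FinalizeOutcome::Pass);
    assert_eq!(try_finalize(1_060, 1_000, 500, false), FinalizeOutcome::Delay);
    assert_eq!(try_finalize(1_040, 1_000, 500, true), FinalizeOutcome::Fail);
}
```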
Ideally, we'd have an incentive to trade in markets up until the very end, to allow for better price discovery. We could rebate taker fees to people who traded in the unfinalized market, as discussed in #26
It might be worth adding some install instructions with common gotchas. For example, for me it said:
error: package regex-automata v0.4.3 cannot be built because it requires rustc 1.65 or newer, while the currently active rustc version is 1.62.0-dev
In addition to updating rust, you should also update the solana version on your machine:
solana-install init 1.16.18
This is what Raydium does in their CLMM code, and prophet suggested something similar during a code review, so it's maybe worth a tiny PR. The Solana timestamp is an i64, so it's better to cast it to u32 than to make the other vars u64. It's not super important anyway, just a minor change.
FUTURE (formerly MERTD) wants to use futarchy to manage their operations. We should deploy a program they can use.
would save some SOL deployment cost
Right now, we have a few functions that allow the Meta-DAO to configure itself. See here. It would clean up the code if these functions were generated by a macro.
Today, the repo is a bit annoying to use because we push all smart-contract changes to master even before those contracts have been deployed on-chain, so we can't use the scripts on master to hit the production smart contracts.
I think we should use a branching strategy with separate production and develop branches. We can merge script changes into production, merge smart-contract changes into develop (changing the scripts to use those updated contracts), and then merge develop into production once we complete a migration.
Allow governance to change taker fees, as described in #26
The main attack vector I can think of at the moment is spam. Someone could create 100 proposals, for example, and then the honest participants would be playing whack-a-mole against the attacker where the attacker would manipulate a market, causing people to join that market, then move to another market, and so on.
Two solutions I can think of:
Possibly as a series of Medium posts
They're super long and complex right now, we should abstract some stuff out so that we can test more stuff with less code.
The autocrat program has blake3 as a dependency, but it doesn't appear to be using it.
Today, the migrator program can only transfer 2 tokens at once, but there are 4 tokens in the treasury that we'd like to migrate:
We should update the migrator program to allow for this.
There's a variable max_observation_change_per_update_lots which is now used to update the TWAP. The problem is that these update lots vary based on the base and quote lots: a $5 change might be 5_000 lots or 50_000 lots depending on the base lot size. Therefore, to avoid any unfortunate mistakes, we should add a function that takes an absolute dollar value, e.g. $5 / 5_000_000, and automatically calculates the correct max_observation_change_per_update_lots based on the base and quote lots of the market.
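A sketch of that helper (the exact signature is an assumption): convert an absolute cap in quote units into lots using the market's lot sizes, so the same dollar cap produces the right lot count regardless of lot configuration.

```rust
/// Convert an absolute quote-unit cap (e.g. $5 = 5_000_000 with USDC's 6
/// decimals) into max_observation_change_per_update_lots for a market's
/// lot sizes.
fn max_observation_change_per_update_lots(
    max_change_quote_units: u64, // e.g. 5_000_000 for $5 per whole base token
    base_decimals: u32,
    base_lot_size: u64,  // base units per base lot
    quote_lot_size: u64, // quote units per quote lot
) -> u64 {
    // quote units of change per base lot, then expressed in quote lots
    let quote_units_per_base_lot = max_change_quote_units as u128
        * base_lot_size as u128
        / 10u128.pow(base_decimals);
    ((quote_units_per_base_lot / quote_lot_size as u128) as u64).max(1)
}

fn main() {
    // the same $5 cap under two different base lot sizes yields different lot counts
    assert_eq!(max_observation_change_per_update_lots(5_000_000, 9, 1_000_000, 100), 50);
    assert_eq!(max_observation_change_per_update_lots(5_000_000, 9, 10_000_000, 100), 500);
}
```

The lot sizes in the example are made up; the point is that the caller only ever thinks in dollars.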
Bring the Meta-DAO to production.
Release a demonstration of the Meta-DAO's mechanics. Doesn't need to be pretty, but needs to work.
With this demo, a user should be able to:
A demo website is unnecessary; that can be a future issue if needed.
Running all tests fails because there are issues:
Today, autocrat finalizes proposals if the pass TWAP is x% higher than the fail TWAP, where x is currently 5 for MetaDAO. This is non-optimal for large market caps because it imposes a huge bar to proposal passing. For example, if META's market cap is $100MM, a proposal would need to create $5MM of value to pass.
It would be better to use a statistical algorithm, so that autocrat would instead make a decision like: "I'll accept this proposal if there's a 99.5% chance (P = 0.005) that the market deems it value-accretive."
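One candidate criterion, sketched as a Welch-style z-test over TWAP observations (the actual algorithm is the open design question here): accept if the pass market exceeds the fail market with ~99.5% one-sided confidence (z > 2.576, i.e. p < 0.005).

```rust
/// Sample mean and (unbiased) variance of a series of observations.
fn mean_var(xs: &[f64]) -> (f64, f64) {
    let n = xs.len() as f64;
    let mean = xs.iter().sum::<f64>() / n;
    let var = xs.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / (n - 1.0);
    (mean, var)
}

/// Welch-style z-test: pass iff the mean pass-market observation exceeds the
/// mean fail-market observation by more than 2.576 standard errors.
fn market_deems_value_accretive(pass_obs: &[f64], fail_obs: &[f64]) -> bool {
    let (mp, vp) = mean_var(pass_obs);
    let (mf, vf) = mean_var(fail_obs);
    let se = (vp / pass_obs.len() as f64 + vf / fail_obs.len() as f64).sqrt();
    se > 0.0 && (mp - mf) / se > 2.576 // one-sided p < 0.005
}

fn main() {
    let pass = [101.0, 102.0, 100.0, 101.0];
    assert!(market_deems_value_accretive(&pass, &[90.0, 91.0, 89.0, 90.0]));   // clear edge
    assert!(!market_deems_value_accretive(&pass, &[100.5, 101.5, 99.5, 100.5])); // noise
}
```

Unlike a fixed 5% edge, this scales with observed variance: a quiet market needs only a small price differential to pass, while a noisy one needs a larger one.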
Currently, the DAO is global for the program. It simplifies some parts of the code but it makes it more expensive to create new DAO instances because you need to redeploy the program. Enabling multiple instances makes it more affordable for everyone to experiment with futarchy. It could also open the door for DAO merging and forking, which might prove helpful.
At the core of the Meta-DAO's decision-making is Robin Hanson's futarchy. In our implementation, the Meta-DAO determines whether the market is for or against a given improvement proposal via the price differential between conditional-on-pass and conditional-on-fail tokens.
For a given member DAO, the proposal's impact is measured as:
(TWAP of member's conditional-on-pass tokens - TWAP of member's conditional-on-fail tokens) / total member token supply
Hence, the core code needs some way of extracting the TWAP of the member's conditional-on-pass and conditional-on-fail tokens. Additionally, this TWAP needs to be manipulation-resistant.
The objective of this task is to plan whether or not we need to create our own AMM.
This can be accomplished by:
Given price volatility and the fact that supply can dilute, relying on the 1 META tick size isn't a safety guarantee.
These notes also discussed issues around the implementation of TWAP updates.
I suggest the depth param be calculated as some scaled, clamped amount of the p/f META/USDC in the vault:
ask depth: (p/fMETA.amount * x_scalar).clamp(x_1, x_2)
bid depth: (p/fUSDC.amount * y_scalar).clamp(y_1, y_2)
I think something like a scalar between 0 and 1, with a clamp up to ~ mid price * 10, could make sense.
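The suggested depth calc, written out as runnable Rust (the scalars and clamp bounds x_1, x_2, y_1, y_2 are the tuning knobs from the comment):

```rust
/// Ask-side depth: a scaled fraction of the conditional META vault balance,
/// clamped to [x_1, x_2].
fn ask_depth(cond_meta_amount: u64, x_scalar: f64, x_1: u64, x_2: u64) -> u64 {
    ((cond_meta_amount as f64 * x_scalar) as u64).clamp(x_1, x_2)
}

/// Bid-side depth: a scaled fraction of the conditional USDC vault balance,
/// clamped to [y_1, y_2].
fn bid_depth(cond_usdc_amount: u64, y_scalar: f64, y_1: u64, y_2: u64) -> u64 {
    ((cond_usdc_amount as f64 * y_scalar) as u64).clamp(y_1, y_2)
}

fn main() {
    assert_eq!(ask_depth(10_000, 0.25, 100, 5_000), 2_500);    // within bounds
    assert_eq!(ask_depth(100, 0.25, 100, 5_000), 100);         // clamped up to x_1
    assert_eq!(bid_depth(1_000_000, 0.25, 100, 5_000), 5_000); // clamped down to y_2
}
```

The clamp keeps the depth requirement meaningful for small vaults while capping it for large ones, per the scalar-in-(0,1) and ~mid-price*10 suggestion above.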
To continue development we'll need programs deployed on Devnet. Included in this task is:
Questions to consider: