aeternity / protocol
Specification of the æternity blockchain protocol
Home Page: https://docs.aeternity.com/protocol
It seems that every exception thrown during transaction processing is reposted to epoch.log over and over again, which makes development almost impossible after a couple of bad transactions have been produced. In addition, this now seems to affect server stability.
https://github.com/aeternity/protocol/blob/master/contracts/contracts.md
Following subsections:
Hello, I think I'm lost... sorry!
epoch is running fine on my node, but when I try to query the balance I have a problem:
jeremy@manager:~/0.5$ curl http://127.0.0.1:3103/v2/account/pub-key
{"pub_key":"ak$3QoJpkpFAtr37mE25VUSraJPQc5B23FN3kFzDSJdqFnVBSVPg5fjLPtJDSp3eRDHYZsDqaVRzno21WvzeCAQouugoBh5DB"}
when I try to check the balance as suggested:
curl -G http://127.0.0.1:3013/v2/account/balance --data-urlencode 'pub_key=ak$3N1WLMewMQPUyQBdEhXRSYee84RQNKJrECwbbseMkNsZhv1XLjpmiqjAkvSRpQ6kgWJMjq9dTmdQ3ekuhpscJk6LpjJYk4'
curl -G http://127.0.0.1:3013/v2/account/balance --data-urlencode 'pub_key=ak$3QoJpkpFAtr37mE25VUSraJPQc5B23FN3kFzDSJdqFnVBSVPg5fjLPtJDSp3eRDHYZsDqaVRzno21WvzeCAQouugoBh5DB'
curl: (7) Failed to connect to 127.0.0.1 port 3013: Connection refused
So I think the port number is wrong in the doc, and I changed it to:
curl -G http://127.0.0.1:3103/v2/account/balance --data-urlencode 'pub_key=ak$3QoJpkpFAtr37mE25VUSraJPQc5B23FN3kFzDSJdqFnVBSVPg5fjLPtJDSp3eRDHYZsDqaVRzno21WvzeCAQouugoBh5DB'
{}
So the above returns an empty map.
I'm sure that I have mined some blocks; when listening to the websocket, I can see some logs:
< {"action":"mined_block","origin":"miner","payload":{"height":1005,"hash":"bh$ZYedc5XzBhwgu1hmjoqNknZWo9doTqmE88c9JAdBE8faLqoYD"}}
< {"action":"mined_block","origin":"miner","payload":{"height":1028,"hash":"bh$3PqkmMJPGCJo1a1tn3L4jkmiJTbRH6wqQ9Rj7Xnd5BLSMwKyJ"}}
< {"action":"mined_block","origin":"miner","payload":{"height":1032,"hash":"bh$fG2JZb1vSUr25V6ANxLZRxYcAm7NzjpWr2PS3LQsXNVkeeEnS"}}
< {"action":"mined_block","origin":"miner","payload":{"height":1053,"hash":"bh$2MqBwDAikt6UCNhWau9MFM4RGSxzY5pdxVxvjAVguJTShkwP6d"}}
Thanks
When a remote protected call fails, any side-effects from executing the remote call are rolled back.
aeternity/aeternity@9335d66
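A minimal sketch of what this means from the calling contract's side, in later Sophia syntax (the Remote interface and its bump_and_get entrypoint are illustrative, not taken from the linked commit):

contract interface Remote =
  stateful entrypoint bump_and_get : () => int

contract Caller =
  entrypoint try_remote(r : Remote) : option(int) =
    // With protected = true a failing remote call yields None instead of
    // aborting the caller, and any state changes the remote call made
    // before failing are rolled back; on success it yields Some(result).
    r.bump_and_get(protected = true)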
To be able to write more useful showcases, we need to be able to pull more data about oracles already living on the chain. Can you give us expected dates when these endpoints will be implemented?
How to get a block?
Which URL do I call for JSON-RPC?
How to register an account?
I wonder if this is a philosophical issue--it seems to me that the current logic is:
on a syntactically invalid request, nothing is returned.
on a syntactically valid request which is nonetheless not valid, a successful result is returned. Sometimes this is useful--for instance I can register an Oracle, and if it already exists I receive back an identifier for the existing one. However, now I have an inaccurate idea of the oracle's TTL, at minimum. Is there a strategy for this?
It would be very helpful to pull a list of errors from the server, if not get them as they occur.
Informative error handling during dapp development is very important and, in my opinion, one of the biggest failings of Ethereum, so it would be nice if we could do better here.
https://hackmd.io/Nm92os78SLyMMqmG50F4RQ?view#State
When having
type state = { contributions : map(address, uint),
total : uint,
beneficiary : address,
deadline : uint,
goal : uint }
do not force the whole state to be defined in init(); instead, implicitly assign default values to undefined fields.
Reason: some contracts have dozens of state "vars" that are not used right away and stay empty until used in code later. Not having to set everything to 0, '', or Map.empty would give better readability and save effort.
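For illustration, under the current rules the crowdfunding state above needs an init() that spells out every field; a sketch following that example (parameter names mirror the fields, and the second version is the hypothetical shorthand the proposal would allow):

// today: every field must be given explicitly
function init(beneficiary, deadline, goal) : state =
  { contributions = Map.empty,
    total         = 0,
    beneficiary   = beneficiary,
    deadline      = deadline,
    goal          = goal }

// with the proposal: fields left out would implicitly default (Map.empty, 0, ...)
function init(beneficiary, deadline, goal) : state =
  { beneficiary = beneficiary,
    deadline    = deadline,
    goal        = goal }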
should be configurable on a per-node basis.
Nice to have:
Scenarios
Affected API endpoints (so far):
GET /block/tx/height/{height}/{tx_index}
GET /block/txs/list/height
GET /block/tx/hash/{hash}/{tx_index}
GET /block/tx/latest/{tx_index}
GET /block/txs/list/hash
Is the mining block time the starting point for the delta, or the block time of the broadcasting node? It seemed to me that sometimes I received TTL expiration errors almost instantly, and sometimes the query/response pairs seemed to live even longer than expected.
It seems that the websocket protocol described here is legacy. The code in epoch:
https://github.com/aeternity/epoch/blob/master/apps/aehttp/src/sc_ws_handler.erl is apparently using JSON-RPC 2.0.
We know that we must wait for a block to be mined before we can progress to the next stage of the various operations we make. However, it's still possible that a transaction can become invalid due to being on a chain which is orphaned. What do you envisage as the right way for a client application to handle this? When can we truly assume that a transaction has succeeded, and what do we do if we discover that something we'd relied upon is no longer valid? If these are corner cases which we think will only occur very infrequently, can we put numbers on that?
The paragraph "Outdated Whitepaper v0.1" contains a broken link to "https://aeternity.com/aeternity-blockchain-whitepaper.pdf" resulting in a 404 HTTP response when requested.
If I understand the concept right, the websockets are not supposed to be exposed to the web in the future? Anyway, for development purposes it is, in my opinion, important to be able to talk to an oracle via the browser. Everyone who wants to build web clients for the oracles has to set up an Nginx/Apache proxy that provides valid CORS headers and exposes the websocket port. It would be nice if we could have a starter setup for our users and the internal developers.
Scenarios
Affected API endpoints:
GET /block/hash/{hash}
GET /block/tx/hash/{hash}/{tx_index}
GET /block/txs/list/height
GET /block/txs/list/hash
For consistency this could also affect all endpoints that include transaction objects, even if the block in which the transaction was mined is already clear from the request itself.
GET /block/height/{height}
GET /block/genesis
GET /block/latest
GET /block/tx/height/{height}/{tx_index}
GET /block/tx/latest/{tx_index}
Add the hash of the block to the listed internal API endpoints:
GET /block/height/{height}
GET /block/hash/{hash}
GET /block/genesis
GET /block/latest
https://github.com/aeternity/protocol/blob/master/epoch/api/oracle_api_usage.md
For a better understanding, it would be nice to have a data flow diagram at the beginning (or somewhere) of the docs
As suggested by @sammy007:
If AE is a successful project we will see ASICs in 1-2 years; current ASIC performance over CPU
on Equihash, for example, is dramatic, so it is probably wise to just use 32 bytes from the start and
not bother any more: no need for pools to upgrade consensus in the future, no need for a HF, etc.
Discussion link:
-spec pack_header_and_nonce(binary(), aec_pow:nonce()) -> string().
pack_header_and_nonce(Hash, Nonce) when byte_size(Hash) == 32 ->
%% Cuckoo originally uses 32-bit nonces inserted at the end of its 80-byte buffer.
%% This buffer is hashed into the keys used by the main algorithm.
%%
%% We insert our 64-bit Nonce right after the hash of the block header. We
%% base64-encode both the hash of the block header and the nonce and pass
%% the resulting command-line friendly string with the -h option to Cuckoo.
%%
%% The SHA256 hash is 32 bytes (44 chars base64-encoded), the nonce is 8 bytes
%% (12 chars base64-encoded). That leaves plenty of room (80 - 56 = 24
%% bytes) for cuckoo to put its nonce (which will be 0 in our case) in.
%%
%% (Base64 encoding: see RFC 3548, Section 3:
%% https://tools.ietf.org/html/rfc3548#page-4
%% converts every triplet of bytes to 4 characters: from N bytes to 4*ceil(N/3)
%% bytes.)
%%
%% Like Cuckoo, we use little-endian for the nonce here.
NonceStr = base64:encode_to_string(<<Nonce:64/little-unsigned-integer>>),
HashStr = base64:encode_to_string(Hash),
%% Cuckoo will automatically fill bytes not given with -h option to 0, thus
%% we need only return the two base64 encoded strings concatenated.
%% 44 + 12 = 56 bytes
HashStr ++ NonceStr.
Using base64 encoding is command-line friendly, but it shouldn't be part of consensus; the header hash and nonce should be concatenated directly.
When using base64 encoding, a decode option should be added to cuckoo and used.
Since cuckoo has an -x option, we should hex-encode the concatenated binary and let cuckoo decode it.
Related: we have sped up mining on our test network to one block on average every 15 seconds, which makes life much easier. Since TTLs are expressed in blocks, speeding up mining on the main net after launch will cause a great many assumptions to become invalid, things to expire before they're anticipated, and so on. What's the current thinking in the core team about this?
Maybe it would be nice to borrow some ideas from https://solidityx.org/ if you think it wouldn't clutter the Sophia language too much. I am thinking about the once modifier: "A special one-time only functions to prevent calling initializers multiple times." It looks like solidityx is just a precompiler, so it achieves this purely in the Solidity language. Maybe a better idea would be to have some kind of precompiler scripting language that would allow new modifiers or decorators to be created or defined, which could lead to the creation of many competing precompiler libraries with good design patterns or good smart-contract coding practices.
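For comparison, a sketch of the guard a once-style modifier would expand to in plain Sophia, written in the same older syntax as the other snippets in these issues (the contract, field and function names are made up for illustration):

contract Upgradable =
  type state = { initialised : bool, owner : address }
  function init() = { initialised = false, owner = Call.caller }

  private function require(b : bool, err : string) =
    if(!b) abort(err)

  // the boilerplate a once modifier or precompiler decorator could generate
  function setup(new_owner : address) =
    require(!state.initialised, "already_initialised")
    put(state{ initialised = true, owner = new_owner })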
example:
GET /tx/hash/{hash}
{
"signatures": [
"sg$......................................................................................................."
],
"tx": {
"account": "ak$..............................................................................................",
"type": "CoinbaseTxObject",
"vsn": 1
}
}
As we were told in the oracles-core channel, the coupling is one oracle per account; at the moment it is also one account per node. Can you tell us when this can be expected to change?
Right now we have implemented examples for Python/NodeJS and the browser. Does this even make sense if the input/output formats are later to be specified as types of the smart contract language? Can we cast to these types later, or do the oracles themselves have to be ported to the smart contract language?
Speaking of formats...
Are the formats expected to be atomic types, or will it also be possible to have tuples?
Search for accounts/blocks/transactions by part (prefix only) of a hash/public key.
Example:
GET /search?query=<part of hash>
{
"input": "<part of hash>",
"results": [
{
"type": "tx",
"result": "<valid tx hash>"
},
// AND/OR
{
"type": "account",
"result": "<valid account hash>"
}
]
}
An increased mining block height should not be taken as proof that an oracle has been successfully registered. The transaction could either fail or not even be mined yet due to network congestion. During the Cryptokitties and ICO hypes, high network traffic led to a state where transactions could not even be expected to be broadcast to the network for more than 15 minutes (independently of the actual fees). As I understand it, that means that the registering account needs to keep track of the TX hash or an id to make sure that the oracle really has been registered.
I understand that it makes sense to give a TTL to oracles, answers and questions. However, since oracles are meant to be reliable information providers which consuming smart contracts rely on, oracles are usually meant to live forever, or at least for a long time. That means that a re-initialisation would have to be scheduled before an oracle expires to make sure that the information flow is steady.
Conceivably an AENS name could help with this, if their lifetimes could be extended, but this doesn't seem to be part of the current plan. Is there a way of addressing this?
In the picture, the sentence below contains a URL that points to http://google.com:
(zh) "Learn more about our technology in the Æternity protocol repository on GitHub."
Some details are in the message; you can go there to find them.
In the dry descriptions page there is no information for the slash and snapshot methods.
Namespaces like Int, String or Oracle etc. could be mentioned there.
In this document:
https://github.com/aeternity/protocol/blob/epoch-v0.5.0/epoch_api/oracle_api_usage.md
curl http://127.0.0.1:3113/v1/account/pub-key
with the default settings (https://github.com/aeternity/epoch/blob/master/RELEASE-NOTES.md), this command should be:
curl http://127.0.0.1:3103/v2/account/pub-key
In generalized accounts code one needs to check the gas (cost) and fee to avert rogue miners and some other attacks.
We could use some advanced generalized accounts examples (e.g. a multi-signature wallet).
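As a starting point, here is a partial sketch of the gas side of such a check (a skeleton only, not a complete or audited generalized-accounts contract: the gas-price limit is arbitrary and the actual credential verification against Auth.tx_hash is deliberately left out):

contract AuthGuard =
  // A real authorize entrypoint takes a signature/nonce, verifies it against
  // Auth.tx_hash and usually also bounds the fee; only the gas-price bound
  // is sketched here.
  entrypoint authorize() : bool =
    if(Call.gas_price > 1000000000)
      abort("gas_price_too_high")
    else
      true  // placeholder for the real signature check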
I only see the API which gets the top height; how do I get block N?
I am running a node with Docker and have only opened port 3013. How do I get my pub_key?
As plenty of syntax highlighters for functional languages use (* *)
for block comments by default:
https://github.com/aeternity/protocol/blob/master/contracts/sophia.md#types
Let the compiler insert
private function require(b : bool, err : string) =
if(!b) abort(err)
into every contract if the parser finds a usage of require(), to make it comfortably available by default.
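Sketched effect (the contract below is a made-up example): with that rule a contract could call require() without declaring the helper itself, and the compiler would splice in the definition above:

contract SafeMath =
  function checked_div(a : int, b : int) : int =
    require(b != 0, "division_by_zero")  // no local require definition needed
    a / b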
This feature was merged into the master branch of the node.
In the Aeternity node implementation, the state trees are cached inside the node. Note that if the node restarts, cached data is not likely to survive (it is not persisted on each update, for performance reasons).
In https://github.com/aeternity/protocol/blob/master/channels/OFF-CHAIN.md#channel_reestablish - according to Dimitar, this is not correct and state channel tree data would be stored permanently.
Aeternity should support both transaction-level privacy and smart contract privacy.
References:
The JSON-RPC specification doesn't say anything about a version field in the root of request/response objects. If the protocol version needs to be stored, it can use params, result, or the method name to stay compatible with JSON-RPC.
Similar issue: aeternity/AEXs#24
Question: did you consider using msgpack or, at worst, JSON?
Question 2: where is the framing? Over TCP the message can be sent as fragments requiring multiple receive calls; how do I know when I have received the full message? Most protocols at the very least have the first 1-4 bytes (sometimes a varint) specifying the message length.
The channels.dry_run.call_contract method is not currently documented in channels_ws_api.md (although it is mentioned in channels_api_usage.md).
-spec integer_to_scientific(integer()) -> sci_int().
integer_to_scientific(I) ->
%% Find exponent and significand
{Exp, Significand} = integer_to_scientific(I, 3),
case Exp >= 0 of
true ->
%% 1st byte: exponent, next 3 bytes: significand
(Exp bsl 24) + Significand;
false ->
%% flip sign bit in significand
((-Exp) bsl 24) + 16#800000 + Significand
end.
Since sci_int is 4 bytes as with BTC.