# Admin API

URL: /docs/apis/admin-api

This page is an overview of the Admin API associated with AvalancheGo. The Admin API can be used for measuring node health and debugging.

The Admin API is disabled by default for security reasons. To run a node with the Admin API enabled, use config flag [`--api-admin-enabled=true`](https://build.avax.network/docs/nodes/configure/configs-flags#--api-admin-enabled-boolean).

This API set is for a specific node; it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers).

## Format

This API uses the `json 2.0` RPC format. For details, see [here](https://build.avax.network/docs/api-reference/guides/issuing-api-calls).

## Endpoint

```
/ext/admin
```

## Methods

### `admin.alias`

Assign an API endpoint an alias, a different endpoint for the API. The original endpoint will still work. This change only affects this node; other nodes will not know about this alias.

**Signature**:

```
admin.alias({endpoint:string, alias:string}) -> {}
```

* `endpoint` is the original endpoint of the API. `endpoint` should only include the part of the endpoint after `/ext/`.
* The API being aliased can now be called at `ext/alias`.
* `alias` can be at most 512 characters.

**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"admin.alias",
    "params": {
        "alias":"myAlias",
        "endpoint":"bc/X"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {}
}
```

Now, calls to the X-Chain can be made to either `/ext/bc/X` or, equivalently, to `/ext/myAlias`.

### `admin.aliasChain`

Give a blockchain an alias, a different name that can be used any place the blockchain's ID is used. Aliasing a chain can also be done via the [Node API](https://build.avax.network/docs/nodes/configure/configs-flags#--chain-aliases-file-string). Note that the alias is set for each chain on each node individually.
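Calls like `admin.alias` above can also be issued from a script rather than curl. Below is a minimal Python sketch using only the standard library; the helper names (`make_rpc_payload`, `call_admin`) are illustrative, and it assumes a local node with the Admin API enabled at the default address:

```python
import json
import urllib.request

ADMIN_URL = "http://127.0.0.1:9650/ext/admin"  # default local node; adjust as needed

def make_rpc_payload(method, params=None, request_id=1):
    """Build a JSON-RPC 2.0 request body like the curl examples above."""
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params or {}}

def call_admin(method, params=None):
    """POST a JSON-RPC call to the Admin API and return the decoded response."""
    req = urllib.request.Request(
        ADMIN_URL,
        data=json.dumps(make_rpc_payload(method, params)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Alias the X-Chain API endpoint, mirroring the admin.alias example above:
# call_admin("admin.alias", {"alias": "myAlias", "endpoint": "bc/X"})
```

The same helper works for the other methods on this page (for example, `call_admin("admin.lockProfile")`), since they all share the `/ext/admin` endpoint and JSON-RPC envelope.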
In a multi-node Avalanche L1, the same alias should be configured on each node to use an alias across an Avalanche L1 successfully. Setting an alias for a chain on one node does not register that alias with other nodes automatically. **Signature**: ``` admin.aliasChain( { chain:string, alias:string } ) -> {} ``` * `chain` is the blockchain's ID. * `alias` can now be used in place of the blockchain's ID (in API endpoints, for example.) **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin.aliasChain", "params": { "chain":"sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM", "alias":"myBlockchainAlias" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": {} } ``` Now, instead of interacting with the blockchain whose ID is `sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM` by making API calls to `/ext/bc/sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM`, one can also make calls to `ext/bc/myBlockchainAlias`. ### `admin.getChainAliases` Returns the aliases of the chain **Signature**: ``` admin.getChainAliases( { chain:string } ) -> {aliases:string[]} ``` * `chain` is the blockchain's ID. **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin.getChainAliases", "params": { "chain":"sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "aliases": [ "X", "avm", "2eNy1mUFdmaxXNj1eQHUe7Np4gju9sJsEtWQ4MX3ToiNKuADed" ] }, "id": 1 } ``` ### `admin.getLoggerLevel` Returns log and display levels of loggers. **Signature**: ``` admin.getLoggerLevel( { loggerName:string // optional } ) -> { loggerLevels: { loggerName: { logLevel: string, displayLevel: string } } } ``` * `loggerName` is the name of the logger to be returned. This is an optional argument. 
If not specified, it returns all possible loggers.

**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"admin.getLoggerLevel",
    "params": {
        "loggerName": "C"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "loggerLevels": {
      "C": {
        "logLevel": "DEBUG",
        "displayLevel": "INFO"
      }
    }
  },
  "id": 1
}
```

### `admin.loadVMs`

Dynamically loads any virtual machines installed on the node as plugins. See [here](https://build.avax.network/docs/virtual-machines#installing-a-vm) for more information on how to install a virtual machine on a node.

**Signature**:

```
admin.loadVMs() -> {
  newVMs: map[string][]string,
  failedVMs: map[string]string
}
```

* `failedVMs` is only included in the response if at least one virtual machine fails to be loaded.

**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"admin.loadVMs",
    "params" :{}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "newVMs": {
      "tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH": ["foovm"]
    },
    "failedVMs": {
      "rXJsCSEYXg2TehWxCEEGj6JU2PWKTkd6cBdNLjoe2SpsKD9cy": "error message"
    }
  },
  "id": 1
}
```

### `admin.lockProfile`

Writes a profile of mutex statistics to `lock.profile`.

**Signature**:

```
admin.lockProfile() -> {}
```

**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"admin.lockProfile",
    "params" :{}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {}
}
```

### `admin.memoryProfile`

Writes a memory profile of the node to `mem.profile`.
**Signature**: ``` admin.memoryProfile() -> {} ``` **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin.memoryProfile", "params" :{} }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": {} } ``` ### `admin.setLoggerLevel` Sets log and display levels of loggers. **Signature**: ``` admin.setLoggerLevel( { loggerName: string, // optional logLevel: string, // optional displayLevel: string, // optional } ) -> {} ``` * `loggerName` is the logger's name to be changed. This is an optional parameter. If not specified, it changes all possible loggers. * `logLevel` is the log level of written logs, can be omitted. * `displayLevel` is the log level of displayed logs, can be omitted. `logLevel` and `displayLevel` cannot be omitted at the same time. **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin.setLoggerLevel", "params": { "loggerName": "C", "logLevel": "DEBUG", "displayLevel": "INFO" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": {} } ``` ### `admin.startCPUProfiler` Start profiling the CPU utilization of the node. To stop, call `admin.stopCPUProfiler`. On stop, writes the profile to `cpu.profile`. **Signature**: ``` admin.startCPUProfiler() -> {} ``` **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin.startCPUProfiler", "params" :{} }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": {} } ``` ### `admin.stopCPUProfiler` Stop the CPU profile that was previously started. 
**Signature**: ``` admin.stopCPUProfiler() -> {} ``` **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin.stopCPUProfiler" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": {} } ``` # Health API URL: /docs/apis/health-api This page is an overview of the Health API associated with AvalancheGo. The Health API can be used for measuring node health. This API set is for a specific node; it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers). ## Health Checks The node periodically runs all health checks, including health checks for each chain. The frequency at which health checks are run can be specified with the [--health-check-frequency](https://build.avax.network/docs/nodes/configure/configs-flags) flag. ## Filterable Health Checks The health checks that are run by the node are filterable. You can specify which health checks you want to see by using `tags` filters. Returned results will only include health checks that match the specified tags and global health checks like `network`, `database` etc. When filtered, the returned results will not show the full node health, but only a subset of filtered health checks. This means the node can still be unhealthy in unfiltered checks, even if the returned results show that the node is healthy. AvalancheGo supports using subnetIDs as tags. ## GET Request To get an HTTP status code response that indicates the node's health, make a `GET` request. If the node is healthy, it will return a `200` status code. If the node is unhealthy, it will return a `503` status code. In-depth information about the node's health is included in the response body. ### Filtering To filter GET health checks, add a `tag` query parameter to the request. The `tag` parameter is a string. 
For example, to filter health results by subnetID `29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL`, use the following query:

```sh
curl 'http://localhost:9650/ext/health?tag=29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL'
```

In this example, the returned results will contain global health checks and health checks that are related to subnetID `29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL`.

**Note**: This filtering can show healthy results even if the node is unhealthy in other Chains/Avalanche L1s.

To filter results by multiple tags, use multiple `tag` query parameters. For example, to filter health results by subnetIDs `29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL` and `28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY`, use the following query:

```sh
curl 'http://localhost:9650/ext/health?tag=29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL&tag=28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY'
```

The returned results will include health checks for both subnetIDs as well as global health checks.

### Endpoints

The available endpoints for GET requests are:

* `/ext/health` returns a holistic report of the status of the node. **Most operators should monitor this status.**
* `/ext/health/health` is the same as `/ext/health`.
* `/ext/health/readiness` returns healthy once the node has finished initializing.
* `/ext/health/liveness` returns healthy once the endpoint is available.

## JSON RPC Request

### Format

This API uses the `json 2.0` RPC format. For more information on making JSON RPC calls, see [here](https://build.avax.network/docs/api-reference/guides/issuing-api-calls).

### Endpoint

```
/ext/health
```

### Methods

#### `health.health`

This method returns the last set of health check results.
**Example Call**: ```sh curl -H 'Content-Type: application/json' --data '{ "jsonrpc":"2.0", "id" :1, "method" :"health.health", "params": { "tags": ["11111111111111111111111111111111LpoYY", "29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL"] } }' 'http://localhost:9650/ext/health' ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "checks": { "C": { "message": { "engine": { "consensus": { "lastAcceptedHeight": 31273749, "lastAcceptedID": "2Y4gZGzQnu8UjnHod8j1BLewHFVEbzhULPNzqrSWEHkHNqDrYL", "longestProcessingBlock": "0s", "processingBlocks": 0 }, "vm": null }, "networking": { "percentConnected": 0.9999592612587486 } }, "timestamp": "2024-03-26T19:44:45.2931-04:00", "duration": 20375 }, "P": { "message": { "engine": { "consensus": { "lastAcceptedHeight": 142517, "lastAcceptedID": "2e1FEPCBEkG2Q7WgyZh1v4nt3DXj1HDbDthyhxdq2Ltg3shSYq", "longestProcessingBlock": "0s", "processingBlocks": 0 }, "vm": null }, "networking": { "percentConnected": 0.9999592612587486 } }, "timestamp": "2024-03-26T19:44:45.293115-04:00", "duration": 8750 }, "X": { "message": { "engine": { "consensus": { "lastAcceptedHeight": 24464, "lastAcceptedID": "XuFCsGaSw9cn7Vuz5e2fip4KvP46Xu53S8uDRxaC2QJmyYc3w", "longestProcessingBlock": "0s", "processingBlocks": 0 }, "vm": null }, "networking": { "percentConnected": 0.9999592612587486 } }, "timestamp": "2024-03-26T19:44:45.29312-04:00", "duration": 23291 }, "bootstrapped": { "message": [], "timestamp": "2024-03-26T19:44:45.293078-04:00", "duration": 3375 }, "database": { "timestamp": "2024-03-26T19:44:45.293102-04:00", "duration": 1959 }, "diskspace": { "message": { "availableDiskBytes": 227332591616 }, "timestamp": "2024-03-26T19:44:45.293106-04:00", "duration": 3042 }, "network": { "message": { "connectedPeers": 284, "sendFailRate": 0, "timeSinceLastMsgReceived": "293.098ms", "timeSinceLastMsgSent": "293.098ms" }, "timestamp": "2024-03-26T19:44:45.2931-04:00", "duration": 2333 }, "router": { "message": { "longestRunningRequest": 
"66.90725ms", "outstandingRequests": 3 }, "timestamp": "2024-03-26T19:44:45.293097-04:00", "duration": 3542 } }, "healthy": true }, "id": 1 } ``` In this example response, every check has passed. So, the node is healthy. **Response Explanation**: * `checks` is a list of health check responses. * A check response may include a `message` with additional context. * A check response may include an `error` describing why the check failed. * `timestamp` is the timestamp of the last health check. * `duration` is the execution duration of the last health check, in nanoseconds. * `contiguousFailures` is the number of times in a row this check failed. * `timeOfFirstFailure` is the time this check first failed. * `healthy` is true all the health checks are passing. #### `health.readiness` This method returns the last evaluation of the startup health check results. **Example Call**: ```sh curl -H 'Content-Type: application/json' --data '{ "jsonrpc":"2.0", "id" :1, "method" :"health.readiness", "params": { "tags": ["11111111111111111111111111111111LpoYY", "29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL"] } }' 'http://localhost:9650/ext/health' ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "checks": { "bootstrapped": { "message": [], "timestamp": "2024-03-26T20:02:45.299114-04:00", "duration": 2834 } }, "healthy": true }, "id": 1 } ``` In this example response, every check has passed. So, the node has finished the startup process. **Response Explanation**: * `checks` is a list of health check responses. * A check response may include a `message` with additional context. * A check response may include an `error` describing why the check failed. * `timestamp` is the timestamp of the last health check. * `duration` is the execution duration of the last health check, in nanoseconds. * `contiguousFailures` is the number of times in a row this check failed. * `timeOfFirstFailure` is the time this check first failed. 
* `healthy` is true if all the health checks are passing.

#### `health.liveness`

This method returns healthy.

**Example Call**:

```sh
curl -H 'Content-Type: application/json' --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"health.liveness"
}' 'http://localhost:9650/ext/health'
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "checks": {},
    "healthy": true
  },
  "id": 1
}
```

In this example response, the node was able to handle the request and mark the service as healthy.

**Response Explanation**:

* `checks` is an empty list.
* `healthy` is true.

# Index API

URL: /docs/apis/index-api

This page is an overview of the Index API associated with AvalancheGo.

AvalancheGo can be configured to run with an indexer. That is, it saves (indexes) every container (a block, vertex or transaction) it accepts on the X-Chain, P-Chain and C-Chain. To run AvalancheGo with indexing enabled, set command line flag [--index-enabled](https://build.avax.network/docs/nodes/configure/configs-flags#--index-enabled-boolean) to true.

**AvalancheGo will only index containers that are accepted when running with `--index-enabled` set to true.** To ensure your node has a complete index, run a node with a fresh database and `--index-enabled` set to true. The node will accept every block, vertex and transaction in the network history during bootstrapping, ensuring your index is complete.

It is OK to turn off your node if it is running with indexing enabled. If it restarts with indexing still enabled, it will accept all containers that were accepted while it was offline. The indexer should never fail to index an accepted block, vertex or transaction.

Indexed containers (that is, accepted blocks, vertices and transactions) are timestamped with the time at which the node accepted that container. Note that if the container was indexed during bootstrapping, other nodes may have accepted the container much earlier.
Every container indexed during bootstrapping will be timestamped with the time at which the node bootstrapped, not when it was first accepted by the network.

If `--index-enabled` is changed from `true` to `false`, AvalancheGo won't start, as doing so would cause a previously complete index to become incomplete, unless the user explicitly says to do so with `--index-allow-incomplete`. This protects you from accidentally running with indexing disabled, after previously running with it enabled, which would result in an incomplete index.

This document shows how to query data from AvalancheGo's Index API. The Index API is only available when running with `--index-enabled`.

## Go Client

There is a Go implementation of an Index API client. See documentation [here](https://pkg.go.dev/github.com/ava-labs/avalanchego/indexer#Client). This client can be used inside a Go program to connect to an AvalancheGo node that is running with the Index API enabled and make calls to the Index API.

## Format

This API uses the `json 2.0` RPC format. For more information on making JSON RPC calls, see [here](https://build.avax.network/docs/api-reference/guides/issuing-api-calls).

## Endpoints

Each chain has one or more indexes. To see if a C-Chain block is accepted, for example, send an API call to the C-Chain block index. To see if an X-Chain vertex is accepted, send an API call to the X-Chain vertex index.

### C-Chain Blocks

```
/ext/index/C/block
```

### P-Chain Blocks

```
/ext/index/P/block
```

### X-Chain Transactions

```
/ext/index/X/tx
```

### X-Chain Blocks

```
/ext/index/X/block
```

To ensure historical data can be accessed, the `/ext/index/X/vtx` endpoint is still accessible, even though it is no longer populated with chain data since the Cortina activation. If you are using `v1.10.0` or higher, you need to migrate to using the `/ext/index/X/block` endpoint.

## Methods

### `index.getContainerByID`

Get container by ID.
**Signature**: ``` index.getContainerByID({ id: string, encoding: string }) -> { id: string, bytes: string, timestamp: string, encoding: string, index: string } ``` **Request**: * `id` is the container's ID * `encoding` is `"hex"` only. **Response**: * `id` is the container's ID * `bytes` is the byte representation of the container * `timestamp` is the time at which this node accepted the container * `encoding` is `"hex"` only. * `index` is how many containers were accepted in this index before this one **Example Call**: ```sh curl --location --request POST 'localhost:9650/ext/index/X/tx' \ --header 'Content-Type: application/json' \ --data-raw '{ "jsonrpc": "2.0", "method": "index.getContainerByID", "params": { "id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY", "encoding":"hex" }, "id": 1 }' ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": { "id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY", "bytes": "0x00000000000400003039d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000070429ccc5c5eb3b80000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000050429d069189e0000000000010000000000000000c85fc1980a77c5da78fe5486233fc09a769bb812bcb2cc548cf9495d046b3f1b00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000007000003a352a38240000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c0000000100000009000000011cdb75d4e0b0aeaba2ebc1ef208373fedc1ebbb498f8385ad6fb537211d1523a70d903b884da77d963d56f163191295589329b5710113234934d0fd59c01676b00b63d2108", "timestamp": "2021-04-02T15:34:00.262979-07:00", "encoding": "hex", "index": "0" } } ``` ### `index.getContainerByIndex` Get container by index. 
The first container accepted is at index 0, the second is at index 1, etc. **Signature**: ``` index.getContainerByIndex({ index: uint64, encoding: string }) -> { id: string, bytes: string, timestamp: string, encoding: string, index: string } ``` **Request**: * `index` is how many containers were accepted in this index before this one * `encoding` is `"hex"` only. **Response**: * `id` is the container's ID * `bytes` is the byte representation of the container * `timestamp` is the time at which this node accepted the container * `index` is how many containers were accepted in this index before this one * `encoding` is `"hex"` only. **Example Call**: ```sh curl --location --request POST 'localhost:9650/ext/index/X/tx' \ --header 'Content-Type: application/json' \ --data-raw '{ "jsonrpc": "2.0", "method": "index.getContainerByIndex", "params": { "index":0, "encoding": "hex" }, "id": 1 }' ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": { "id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY", "bytes": "0x00000000000400003039d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000070429ccc5c5eb3b80000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000050429d069189e0000000000010000000000000000c85fc1980a77c5da78fe5486233fc09a769bb812bcb2cc548cf9495d046b3f1b00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000007000003a352a38240000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c0000000100000009000000011cdb75d4e0b0aeaba2ebc1ef208373fedc1ebbb498f8385ad6fb537211d1523a70d903b884da77d963d56f163191295589329b5710113234934d0fd59c01676b00b63d2108", "timestamp": "2021-04-02T15:34:00.262979-07:00", "encoding": "hex", "index": "0" } } ``` ### 
### `index.getContainerRange`

Returns the transactions at index \[`startIndex`], \[`startIndex+1`], ... , \[`startIndex+n-1`]

* If \[`n`] == 0, returns an empty response (for example: null).
* If \[`startIndex`] > the last accepted index, returns an error (unless the above apply.)
* If \[`n`] > \[`MaxFetchedByRange`], returns an error.
* If we run out of transactions, returns the ones fetched before running out.
* `numToFetch` must be in `[0,1024]`.

**Signature**:

```
index.getContainerRange({
  startIndex: uint64,
  numToFetch: uint64,
  encoding: string
}) -> []{
  id: string,
  bytes: string,
  timestamp: string,
  encoding: string,
  index: string
}
```

**Request**:

* `startIndex` is the beginning index
* `numToFetch` is the number of containers to fetch
* `encoding` is `"hex"` only.

**Response**:

* `id` is the container's ID
* `bytes` is the byte representation of the container
* `timestamp` is the time at which this node accepted the container
* `encoding` is `"hex"` only.
* `index` is how many containers were accepted in this index before this one

**Example Call**:

```sh
curl --location --request POST 'localhost:9650/ext/index/X/tx' \
--header 'Content-Type: application/json' \
--data-raw '{
    "jsonrpc": "2.0",
    "method": "index.getContainerRange",
    "params": {
        "startIndex":0,
        "numToFetch":100,
        "encoding": "hex"
    },
    "id": 1
}'
```

**Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": [ { "id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY", "bytes":
"0x00000000000400003039d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000070429ccc5c5eb3b80000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000050429d069189e0000000000010000000000000000c85fc1980a77c5da78fe5486233fc09a769bb812bcb2cc548cf9495d046b3f1b00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000007000003a352a38240000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c0000000100000009000000011cdb75d4e0b0aeaba2ebc1ef208373fedc1ebbb498f8385ad6fb537211d1523a70d903b884da77d963d56f163191295589329b5710113234934d0fd59c01676b00b63d2108", "timestamp": "2021-04-02T15:34:00.262979-07:00", "encoding": "hex", "index": "0" } ] } ``` ### `index.getIndex` Get a container's index. **Signature**: ``` index.getIndex({ id: string, encoding: string }) -> { index: string } ``` **Request**: * `id` is the ID of the container to fetch * `encoding` is `"hex"` only. **Response**: * `index` is how many containers were accepted in this index before this one **Example Call**: ```sh curl --location --request POST 'localhost:9650/ext/index/X/tx' \ --header 'Content-Type: application/json' \ --data-raw '{ "jsonrpc": "2.0", "method": "index.getIndex", "params": { "id":"6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY", "encoding": "hex" }, "id": 1 }' ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "index": "0" }, "id": 1 } ``` ### `index.getLastAccepted` Get the most recently accepted container. **Signature**: ``` index.getLastAccepted({ encoding:string }) -> { id: string, bytes: string, timestamp: string, encoding: string, index: string } ``` **Request**: * `encoding` is `"hex"` only. 
**Response**: * `id` is the container's ID * `bytes` is the byte representation of the container * `timestamp` is the time at which this node accepted the container * `encoding` is `"hex"` only. **Example Call**: ```sh curl --location --request POST 'localhost:9650/ext/index/X/tx' \ --header 'Content-Type: application/json' \ --data-raw '{ "jsonrpc": "2.0", "method": "index.getLastAccepted", "params": { "encoding": "hex" }, "id": 1 }' ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": { "id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY", "bytes": "0x00000000000400003039d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000070429ccc5c5eb3b80000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000050429d069189e0000000000010000000000000000c85fc1980a77c5da78fe5486233fc09a769bb812bcb2cc548cf9495d046b3f1b00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000007000003a352a38240000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c0000000100000009000000011cdb75d4e0b0aeaba2ebc1ef208373fedc1ebbb498f8385ad6fb537211d1523a70d903b884da77d963d56f163191295589329b5710113234934d0fd59c01676b00b63d2108", "timestamp": "2021-04-02T15:34:00.262979-07:00", "encoding": "hex", "index": "0" } } ``` ### `index.isAccepted` Returns true if the container is in this index. **Signature**: ``` index.isAccepted({ id: string, encoding: string }) -> { isAccepted: bool } ``` **Request**: * `id` is the ID of the container to fetch * `encoding` is `"hex"` only. 
**Response**:

* `isAccepted` displays if the container has been accepted

**Example Call**:

```sh
curl --location --request POST 'localhost:9650/ext/index/X/tx' \
--header 'Content-Type: application/json' \
--data-raw '{
    "jsonrpc": "2.0",
    "method": "index.isAccepted",
    "params": {
        "id":"6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY",
        "encoding": "hex"
    },
    "id": 1
}'
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "isAccepted": true
  },
  "id": 1
}
```

## Example: Iterating Through X-Chain Transactions

Here is an example of how to iterate through all transactions on the X-Chain. You can use the Index API to get the ID of every transaction that has been accepted on the X-Chain, and use the X-Chain API method `avm.getTx` to get a human-readable representation of the transaction.

To get an X-Chain transaction by its index (the order it was accepted in), use Index API method [`index.getContainerByIndex`](#indexgetcontainerbyindex). For example, to get the second transaction (note that `"index":1`) accepted on the X-Chain, do:

```sh
curl --location --request POST 'https://indexer-demo.avax.network/ext/index/X/tx' \
--header 'Content-Type: application/json' \
--data-raw '{
    "jsonrpc": "2.0",
    "method": "index.getContainerByIndex",
    "params": {
        "encoding":"hex",
        "index":1
    },
    "id": 1
}'
```

This returns the second transaction accepted in the X-Chain's history, along with its ID. To get the third transaction on the X-Chain, use `"index":2`, and so on.
The above API call gives the response below: ```json { "jsonrpc": "2.0", "result": { "id": "ZGYTSU8w3zUP6VFseGC798vA2Vnxnfj6fz1QPfA9N93bhjJvo", "bytes": "0x00000000000000000001ed5f38341e436e5d46e2bb00b45d62ae97d1b050c64bc634ae10626739e35c4b0000000221e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000000129f6afc0000000000000000000000001000000017416792e228a765c65e2d76d28ab5a16d18c342f21e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff0000000700000222afa575c00000000000000000000000010000000187d6a6dd3cd7740c8b13a410bea39b01fa83bb3e000000016f375c785edb28d52edb59b54035c96c198e9d80f5f5f5eee070592fe9465b8d0000000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff0000000500000223d9ab67c0000000010000000000000000000000010000000900000001beb83d3d29f1247efb4a3a1141ab5c966f46f946f9c943b9bc19f858bd416d10060c23d5d9c7db3a0da23446b97cd9cf9f8e61df98e1b1692d764c84a686f5f801a8da6e40", "timestamp": "2021-11-04T00:42:55.01643414Z", "encoding": "hex", "index": "1" }, "id": 1 } ``` The ID of this transaction is `ZGYTSU8w3zUP6VFseGC798vA2Vnxnfj6fz1QPfA9N93bhjJvo`. 
To get the transaction by its ID, use API method `avm.getTx`: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avm.getTx", "params" :{ "txID":"ZGYTSU8w3zUP6VFseGC798vA2Vnxnfj6fz1QPfA9N93bhjJvo", "encoding": "json" } }' -H 'content-type:application/json;' https://api.avax.network/ext/bc/X ``` **Response**: ```json { "jsonrpc": "2.0", "result": { "tx": { "unsignedTx": { "networkID": 1, "blockchainID": "2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM", "outputs": [ { "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "output": { "addresses": ["X-avax1wst8jt3z3fm9ce0z6akj3266zmgccdp03hjlaj"], "amount": 4999000000, "locktime": 0, "threshold": 1 } }, { "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "output": { "addresses": ["X-avax1slt2dhfu6a6qezcn5sgtagumq8ag8we75f84sw"], "amount": 2347999000000, "locktime": 0, "threshold": 1 } } ], "inputs": [ { "txID": "qysTYUMCWdsR3MctzyfXiSvoSf6evbeFGRLLzA4j2BjNXTknh", "outputIndex": 0, "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "input": { "amount": 2352999000000, "signatureIndices": [0] } } ], "memo": "0x" }, "credentials": [ { "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "credential": { "signatures": [ "0xbeb83d3d29f1247efb4a3a1141ab5c966f46f946f9c943b9bc19f858bd416d10060c23d5d9c7db3a0da23446b97cd9cf9f8e61df98e1b1692d764c84a686f5f801" ] } } ] }, "encoding": "json" }, "id": 1 } ``` # Introduction URL: /docs/apis Comprehensive reference documentation for Avalanche APIs. # Info API URL: /docs/apis/info-api This page is an overview of the Info API associated with AvalancheGo. The Info API can be used to access basic information about an Avalanche node. ## Format This API uses the `json 2.0` RPC format. 
For more information on making JSON RPC calls, see [here](https://build.avax.network/docs/api-reference/guides/issuing-api-calls). ## Endpoint ``` /ext/info ``` ## Methods ### `info.acps` Returns peer preferences for Avalanche Community Proposals (ACPs) **Signature**: ``` info.acps() -> { acps: map[uint32]{ supportWeight: uint64 supporters: set[string] objectWeight: uint64 objectors: set[string] abstainWeight: uint64 } } ``` **Example Call**: ```sh curl -sX POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.acps", "params" :{} }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "acps": { "23": { "supportWeight": "0", "supporters": [], "objectWeight": "0", "objectors": [], "abstainWeight": "161147778098286584" }, "24": { "supportWeight": "0", "supporters": [], "objectWeight": "0", "objectors": [], "abstainWeight": "161147778098286584" }, "25": { "supportWeight": "0", "supporters": [], "objectWeight": "0", "objectors": [], "abstainWeight": "161147778098286584" }, "30": { "supportWeight": "0", "supporters": [], "objectWeight": "0", "objectors": [], "abstainWeight": "161147778098286584" }, "31": { "supportWeight": "0", "supporters": [], "objectWeight": "0", "objectors": [], "abstainWeight": "161147778098286584" }, "41": { "supportWeight": "0", "supporters": [], "objectWeight": "0", "objectors": [], "abstainWeight": "161147778098286584" }, "62": { "supportWeight": "0", "supporters": [], "objectWeight": "0", "objectors": [], "abstainWeight": "161147778098286584" } } }, "id": 1 } ``` ### `info.isBootstrapped` Check whether a given chain is done bootstrapping **Signature**: ``` info.isBootstrapped({chain: string}) -> {isBootstrapped: bool} ``` `chain` is the ID or alias of a chain. 
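In scripts it is common to wait until a chain reports that it has finished bootstrapping before issuing further requests. A hedged Python sketch of such a polling loop (the node address, interval, and helper names are assumptions for illustration):

```python
import json
import time
import urllib.request

def parse_is_bootstrapped(response_body):
    """Interpret an info.isBootstrapped JSON-RPC response body."""
    return bool(json.loads(response_body)["result"]["isBootstrapped"])

def wait_for_chain(chain, node="http://127.0.0.1:9650", interval=5):
    """Poll a node (assumed local) until the given chain is done bootstrapping."""
    payload = json.dumps({
        "jsonrpc": "2.0", "id": 1,
        "method": "info.isBootstrapped",
        "params": {"chain": chain},
    }).encode()
    while True:
        req = urllib.request.Request(
            node + "/ext/info", data=payload,
            headers={"content-type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            if parse_is_bootstrapped(resp.read().decode()):
                return
        time.sleep(interval)
```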
**Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.isBootstrapped", "params": { "chain":"X" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "isBootstrapped": true }, "id": 1 } ``` ### `info.getBlockchainID` Given a blockchain's alias, get its ID. (See [`admin.aliasChain`](https://build.avax.network/docs/api-reference/admin-api#adminaliaschain).) **Signature**: ``` info.getBlockchainID({alias:string}) -> {blockchainID:string} ``` **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getBlockchainID", "params": { "alias":"X" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": { "blockchainID": "sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM" } } ``` ### `info.getNetworkID` Get the ID of the network this node is participating in. **Signature**: ``` info.getNetworkID() -> { networkID: int } ``` **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getNetworkID" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": { "networkID": "2" } } ``` A network ID of 1 is Mainnet; a network ID of 5 is Fuji (testnet). ### `info.getNetworkName` Get the name of the network this node is participating in. **Signature**: ``` info.getNetworkName() -> { networkName:string } ``` **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getNetworkName" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": { "networkName": "local" } } ``` ### `info.getNodeID` Get the ID, the BLS key, and the proof of possession (BLS signature) of this node.
This endpoint set is for a specific node, it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers). **Signature**: ``` info.getNodeID() -> { nodeID: string, nodePOP: { publicKey: string, proofOfPossession: string } } ``` * `nodeID` Node ID is the unique identifier of the node that you set to act as a validator on the Primary Network. * `nodePOP` is this node's BLS key and proof of possession. Nodes must register a BLS key to act as a validator on the Primary Network. Your node's POP is logged on startup and is accessible over this endpoint. * `publicKey` is the 48 byte hex representation of the BLS key. * `proofOfPossession` is the 96 byte hex representation of the BLS signature. **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getNodeID" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "nodeID": "NodeID-5mb46qkSBj81k9g9e4VFjGGSbaaSLFRzD", "nodePOP": { "publicKey": "0x8f95423f7142d00a48e1014a3de8d28907d420dc33b3052a6dee03a3f2941a393c2351e354704ca66a3fc29870282e15", "proofOfPossession": "0x86a3ab4c45cfe31cae34c1d06f212434ac71b1be6cfe046c80c162e057614a94a5bc9f1ded1a7029deb0ba4ca7c9b71411e293438691be79c2dbf19d1ca7c3eadb9c756246fc5de5b7b89511c7d7302ae051d9e03d7991138299b5ed6a570a98" } }, "id": 1 } ``` ### `info.getNodeIP` Get the IP of this node. This endpoint set is for a specific node, it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers). **Signature**: ``` info.getNodeIP() -> {ip: string} ``` **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getNodeIP" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "ip": "192.168.1.1:9651" }, "id": 1 } ``` ### `info.getNodeVersion` Get the version of this node. 
**Signature**: ``` info.getNodeVersion() -> { version: string, databaseVersion: string, gitCommit: string, vmVersions: map[string]string, rpcProtocolVersion: string, } ``` where: * `version` is this node's version * `databaseVersion` is the version of the database this node is using * `gitCommit` is the Git commit that this node was built from * `vmVersions` is map where each key/value pair is the name of a VM, and the version of that VM this node runs * `rpcProtocolVersion` is the RPCChainVM protocol version **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getNodeVersion" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "version": "avalanche/1.9.1", "databaseVersion": "v1.4.5", "rpcProtocolVersion": "18", "gitCommit": "79cd09ba728e1cecef40acd60702f0a2d41ea404", "vmVersions": { "avm": "v1.9.1", "evm": "v0.11.1", "platform": "v1.9.1" } }, "id": 1 } ``` ### `info.getTxFee` Deprecated as of [v1.12.2](https://github.com/ava-labs/avalanchego/releases/tag/v1.12.2). Get the fees of the network. **Signature**: ``` info.getTxFee() -> { txFee: uint64, createAssetTxFee: uint64, createSubnetTxFee: uint64, transformSubnetTxFee: uint64, createBlockchainTxFee: uint64, addPrimaryNetworkValidatorFee: uint64, addPrimaryNetworkDelegatorFee: uint64, addSubnetValidatorFee: uint64, addSubnetDelegatorFee: uint64 } ``` * `txFee` is the default fee for issuing X-Chain transactions. * `createAssetTxFee` is the fee for issuing a `CreateAssetTx` on the X-Chain. * `createSubnetTxFee` is no longer used. * `transformSubnetTxFee` is no longer used. * `createBlockchainTxFee` is no longer used. * `addPrimaryNetworkValidatorFee` is no longer used. * `addPrimaryNetworkDelegatorFee` is no longer used. * `addSubnetValidatorFee` is no longer used. * `addSubnetDelegatorFee` is no longer used. All fees are denominated in nAVAX. 
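Since 1 AVAX equals 10^9 nAVAX and the fee values in the response are returned as strings, converting a fee to AVAX takes one division; a quick illustrative sketch:

```python
NAVAX_PER_AVAX = 1_000_000_000  # 1 AVAX = 10^9 nAVAX

def navax_to_avax(fee):
    """Convert a fee string as returned by info.getTxFee (in nAVAX) to AVAX."""
    return int(fee) / NAVAX_PER_AVAX

print(navax_to_avax("1000000"))   # 0.001 AVAX (the default X-Chain txFee)
print(navax_to_avax("10000000"))  # 0.01 AVAX (createAssetTxFee)
```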
**Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getTxFee" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": { "txFee": "1000000", "createAssetTxFee": "10000000", "createSubnetTxFee": "1000000000", "transformSubnetTxFee": "10000000000", "createBlockchainTxFee": "1000000000", "addPrimaryNetworkValidatorFee": "0", "addPrimaryNetworkDelegatorFee": "0", "addSubnetValidatorFee": "1000000", "addSubnetDelegatorFee": "1000000" } } ``` ### `info.getVMs` Get the virtual machines installed on this node. This endpoint set is for a specific node, it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers). **Signature**: ``` info.getVMs() -> { vms: map[string][]string } ``` **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getVMs", "params" :{} }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "vms": { "jvYyfQTxGMJLuGWa55kdP2p2zSUYsQ5Raupu4TW34ZAUBAbtq": ["avm"], "mgj786NP7uDwBCcq6YwThhaN8FLyybkCa4zBWTQbNgmK6k9A6": ["evm"], "qd2U4HDWUvMrVUeTcCHp6xH3Qpnn1XbU5MDdnBoiifFqvgXwT": ["nftfx"], "rWhpuQPF1kb72esV2momhMuTYGkEb1oL29pt2EBXWmSy4kxnT": ["platform"], "rXJsCSEYXg2TehWxCEEGj6JU2PWKTkd6cBdNLjoe2SpsKD9cy": ["propertyfx"], "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ": ["secp256k1fx"] } }, "id": 1 } ``` ### `info.peers` Get a description of peer connections. **Signature**: ``` info.peers({ nodeIDs: string[] // optional }) -> { numPeers: int, peers:[]{ ip: string, publicIP: string, nodeID: string, version: string, lastSent: string, lastReceived: string, benched: string[], observedUptime: int, } } ``` * `nodeIDs` is an optional parameter to specify what NodeID's descriptions should be returned. If this parameter is left empty, descriptions for all active connections will be returned. 
If the node is not connected to a specified NodeID, it will be omitted from the response. * `ip` is the remote IP of the peer. * `publicIP` is the public IP of the peer. * `nodeID` is the prefixed Node ID of the peer. * `version` shows which version the peer runs on. * `lastSent` is the timestamp of the last message sent to the peer. * `lastReceived` is the timestamp of the last message received from the peer. * `benched` shows chain IDs that the peer is currently benched on. * `observedUptime` is this node's primary network uptime as observed by the peer. **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.peers", "params": { "nodeIDs": [] } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": { "numPeers": 3, "peers": [ { "ip": "206.189.137.87:9651", "publicIP": "206.189.137.87:9651", "nodeID": "NodeID-8PYXX47kqLDe2wD4oPbvRRchcnSzMA4J4", "version": "avalanche/1.9.4", "lastSent": "2020-06-01T15:23:02Z", "lastReceived": "2020-06-01T15:22:57Z", "benched": [], "observedUptime": "99", "trackedSubnets": [] }, { "ip": "158.255.67.151:9651", "publicIP": "158.255.67.151:9651", "nodeID": "NodeID-C14fr1n8EYNKyDfYixJ3rxSAVqTY3a8BP", "version": "avalanche/1.9.4", "lastSent": "2020-06-01T15:23:02Z", "lastReceived": "2020-06-01T15:22:34Z", "benched": [], "observedUptime": "75", "trackedSubnets": [ "29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL" ] }, { "ip": "83.42.13.44:9651", "publicIP": "83.42.13.44:9651", "nodeID": "NodeID-LPbcSMGJ4yocxYxvS2kBJ6umWeeFbctYZ", "version": "avalanche/1.9.3", "lastSent": "2020-06-01T15:23:02Z", "lastReceived": "2020-06-01T15:22:55Z", "benched": [], "observedUptime": "95", "trackedSubnets": [] } ] } } ``` ### `info.uptime` Returns the network's observed uptime of this node. This is the only reliable source of data for your node's uptime.
Other sources may be using data gathered with incomplete (limited) information. **Signature**: ``` info.uptime() -> { rewardingStakePercentage: float64, weightedAveragePercentage: float64 } ``` * `rewardingStakePercentage` is the percent of stake which thinks this node is above the uptime requirement. * `weightedAveragePercentage` is the stake-weighted average of all observed uptimes for this node. **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.uptime" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": { "rewardingStakePercentage": "100.0000", "weightedAveragePercentage": "99.0000" } } ``` #### Example Avalanche L1 Call ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.uptime", "params" :{ "subnetID":"29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` #### Example Avalanche L1 Response ```json { "jsonrpc": "2.0", "id": 1, "result": { "rewardingStakePercentage": "74.0741", "weightedAveragePercentage": "72.4074" } } ``` ### `info.upgrades` Returns the upgrade history and configuration of the network. 
**Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.upgrades" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "apricotPhase1Time": "2020-12-05T05:00:00Z", "apricotPhase2Time": "2020-12-05T05:00:00Z", "apricotPhase3Time": "2020-12-05T05:00:00Z", "apricotPhase4Time": "2020-12-05T05:00:00Z", "apricotPhase4MinPChainHeight": 0, "apricotPhase5Time": "2020-12-05T05:00:00Z", "apricotPhasePre6Time": "2020-12-05T05:00:00Z", "apricotPhase6Time": "2020-12-05T05:00:00Z", "apricotPhasePost6Time": "2020-12-05T05:00:00Z", "banffTime": "2020-12-05T05:00:00Z", "cortinaTime": "2020-12-05T05:00:00Z", "cortinaXChainStopVertexID": "11111111111111111111111111111111LpoYY", "durangoTime": "2020-12-05T05:00:00Z", "etnaTime": "2024-10-09T20:00:00Z", "fortunaTime": "9999-12-01T05:00:00Z", "graniteTime": "9999-12-01T05:00:00Z" }, "id": 1 } ``` # Metrics API URL: /docs/apis/metrics-api This page is an overview of the Metrics API associated with AvalancheGo. The Metrics API allows clients to get statistics about a node's health and performance. This API set is for a specific node, it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers). ## Endpoint ``` /ext/metrics ``` ## Usage To get the node metrics: ```sh curl -X POST 127.0.0.1:9650/ext/metrics ``` ## Format This API produces Prometheus compatible metrics. See [here](https://prometheus.io/docs/instrumenting/exposition_formats) for information on Prometheus' formatting. [Here](https://build.avax.network/docs/nodes/maintain/monitoring) is a tutorial that shows how to set up Prometheus and Grafana to monitor AvalancheGo node using the Metrics API. # Subnet-EVM API URL: /docs/apis/subnet-evm-api This page describes the API endpoints available for Subnet-EVM based blockchains. 
[Subnet-EVM](https://github.com/ava-labs/subnet-evm) APIs are identical to [Coreth](https://build.avax.network/docs/api-reference/c-chain/api) C-Chain APIs, except for the Avalanche-specific APIs starting with `avax`. Subnet-EVM also supports the standard Ethereum APIs. For more information about Coreth APIs see [GitHub](https://github.com/ava-labs/coreth). Subnet-EVM has some additional APIs that are not available in Coreth. ## `eth_feeConfig` Subnet-EVM provides an API for retrieving the fee config at a specific block. You can use this API to check the currently activated fee config. **Signature:** ```bash eth_feeConfig([blk BlkNrOrHash]) -> {feeConfig: json} ``` * `blk` is the block number or hash at which to retrieve the fee config. Defaults to the latest block if omitted. **Example Call:** ```bash curl -X POST --data '{ "jsonrpc": "2.0", "method": "eth_feeConfig", "params": [ "latest" ], "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "feeConfig": { "gasLimit": 15000000, "targetBlockRate": 2, "minBaseFee": 33000000000, "targetGas": 15000000, "baseFeeChangeDenominator": 36, "minBlockGasCost": 0, "maxBlockGasCost": 1000000, "blockGasCostStep": 200000 }, "lastChangedAt": 0 } } ``` ## `eth_getChainConfig` `eth_getChainConfig` returns the Chain Config of the blockchain. This API is enabled by default with the `internal-blockchain` namespace. This API also exists on the C-Chain, but in addition to the normal Chain Config returned by the C-Chain, `eth_getChainConfig` on Subnet-EVM additionally returns the upgrade config, which specifies network upgrades activated after genesis.
**Signature:** ```bash eth_getChainConfig({}) -> {chainConfig: json} ``` **Example Call:** ```bash curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"eth_getChainConfig", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/Nvqcm33CX2XABS62iZsAcVUkavfnzp1Sc5k413wn5Nrf7Qjt7/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "chainId": 43214, "feeConfig": { "gasLimit": 8000000, "targetBlockRate": 2, "minBaseFee": 33000000000, "targetGas": 15000000, "baseFeeChangeDenominator": 36, "minBlockGasCost": 0, "maxBlockGasCost": 1000000, "blockGasCostStep": 200000 }, "allowFeeRecipients": true, "homesteadBlock": 0, "eip150Block": 0, "eip150Hash": "0x2086799aeebeae135c246c65021c82b4e15a2c451340993aacfd2751886514f0", "eip155Block": 0, "eip158Block": 0, "byzantiumBlock": 0, "constantinopleBlock": 0, "petersburgBlock": 0, "istanbulBlock": 0, "muirGlacierBlock": 0, "subnetEVMTimestamp": 0, "contractDeployerAllowListConfig": { "adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"], "blockTimestamp": 0 }, "contractNativeMinterConfig": { "adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"], "blockTimestamp": 0 }, "feeManagerConfig": { "adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"], "blockTimestamp": 0 }, "upgrades": { "precompileUpgrades": [ { "feeManagerConfig": { "adminAddresses": null, "blockTimestamp": 1661541259, "disable": true } }, { "feeManagerConfig": { "adminAddresses": null, "blockTimestamp": 1661541269 } } ] } } } ``` ## `eth_getActivePrecompilesAt` **Deprecated:** use [`eth_getActiveRulesAt`](#eth_getactiverulesat) instead. `eth_getActivePrecompilesAt` returns the precompiles activated at a specific timestamp. If no timestamp is provided, it uses the latest block timestamp. This API is enabled by default with the `internal-blockchain` namespace.
**Signature:** ```bash eth_getActivePrecompilesAt([timestamp uint]) -> {precompiles: []Precompile} ``` * `timestamp` specifies the time at which to show the active precompiles. If omitted, it shows the precompiles active at the latest block timestamp. **Example Call:** ```bash curl -X POST --data '{ "jsonrpc": "2.0", "method": "eth_getActivePrecompilesAt", "params": [], "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/Nvqcm33CX2XABS62iZsAcVUkavfnzp1Sc5k413wn5Nrf7Qjt7/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "contractDeployerAllowListConfig": { "adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"], "blockTimestamp": 0 }, "contractNativeMinterConfig": { "adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"], "blockTimestamp": 0 }, "feeManagerConfig": { "adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"], "blockTimestamp": 0 } } } ``` ## `eth_getActiveRulesAt` `eth_getActiveRulesAt` returns the activated rules (precompiles, upgrades) at a specific timestamp. If no timestamp is provided, it uses the latest block timestamp. This API is enabled by default with the `internal-blockchain` namespace. **Signature:** ```bash eth_getActiveRulesAt([timestamp uint]) -> {rules: json} ``` * `timestamp` specifies the time at which to show the active rules. If omitted, it shows the rules active at the latest block timestamp.
**Example Call:** ```bash curl -X POST --data '{ "jsonrpc": "2.0", "method": "eth_getActiveRulesAt", "params": [], "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/Nvqcm33CX2XABS62iZsAcVUkavfnzp1Sc5k413wn5Nrf7Qjt7/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "ethRules": { "IsHomestead": true, "IsEIP150": true, "IsEIP155": true, "IsEIP158": true, "IsByzantium": true, "IsConstantinople": true, "IsPetersburg": true, "IsIstanbul": true, "IsCancun": true }, "avalancheRules": { "IsSubnetEVM": true, "IsDurango": true, "IsEtna": true }, "precompiles": { "contractNativeMinterConfig": { "timestamp": 0 }, "rewardManagerConfig": { "timestamp": 1712918700 }, "warpConfig": { "timestamp": 1714158045 } } } } ``` ## `validators.getCurrentValidators` This API retrieves the list of current validators for the Subnet/L1. It provides detailed information about each validator, including their ID, status, weight, connection, and uptime. URL: `http://<node>/ext/bc/<blockchainID>/validators` **Signature:** ```bash validators.getCurrentValidators({nodeIDs: []string}) -> {validators: []Validator} ``` * `nodeIDs` is an optional parameter that specifies the node IDs of the validators to retrieve. If omitted, all validators are returned. **Example Call:** ```bash curl -X POST --data '{ "jsonrpc": "2.0", "method": "validators.getCurrentValidators", "params": { "nodeIDs": [] }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C49rHzk3vLr1w9Z8sY7scrZ69TU4WcD2pRS6ZyzaSn9xA2U9F/validators ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "validators": [ { "validationID": "nESqWkcNXihfdZESS2idWbFETMzatmkoTCktjxG1qryaQXfS6", "nodeID": "NodeID-P7oB2McjBGgW2NXXWVYjV8JEDFoW9xDE5", "weight": 20, "startTimestamp": 1732025492, "isActive": true, "isL1Validator": false, "isConnected": true, "uptimeSeconds": 36, "uptimePercentage": 100 } ] }, "id": 1 } ``` **Response Fields:** * `validationID`: (string) Unique identifier for the validation.
For L1s this is the validation ID; for Subnets it is the `AddSubnetValidator` transaction ID. * `nodeID`: (string) Node identifier for the validator. * `weight`: (integer) The weight of the validator, often representing stake. * `startTimestamp`: (integer) UNIX timestamp for when validation started. * `isActive`: (boolean) Indicates if the validator is active. For L1 validators, this is true if the validator has a sufficient P-Chain balance to pay the continuous validation fee; it is always true for Subnet validators. * `isL1Validator`: (boolean) Indicates if the validator is an L1 validator or a Subnet validator. * `isConnected`: (boolean) Indicates if the validator node is currently connected to the callee node. * `uptimeSeconds`: (integer) The number of seconds the validator has been online. * `uptimePercentage`: (float) The percentage of time the validator has been online. # Introduction URL: /docs/avalanche-l1s/evm-l1-customization Learn how to customize the Ethereum Virtual Machine with EVM and Precompiles. Welcome to the EVM customization guide. This documentation provides an overview of **EVM**, the purpose of **Validator Manager Contracts**, the capabilities of **precompiles**, and how you can create custom precompiles to extend the functionality of the Ethereum Virtual Machine (EVM). ## Overview of EVM EVM is Avalanche's customized version of the Ethereum Virtual Machine, tailored to run on Avalanche L1s. It allows developers to deploy Solidity smart contracts with enhanced capabilities, benefiting from Avalanche's high throughput and low latency. EVM enables more flexibility and performance optimizations compared to the standard EVM. ## Validator Manager Contracts Validator Manager Contracts (VMCs) are smart contracts that manage the validators of an L1. They allow you to define rules and criteria for validator participation directly within smart contracts. VMCs enable dynamic validator sets, making it easier to add or remove validators without requiring a network restart.
This provides greater control over the L1's validator management and enhances network governance. ## Precompiles Precompiles are specialized smart contracts that execute native Go code within the EVM context. They act as a bridge between Solidity and lower-level functionalities, allowing for performance optimizations and access to features not available in Solidity alone. ### Default Precompiles in EVM EVM comes with a set of default precompiles that extend the EVM's functionality. For detailed documentation on each precompile, visit the [Avalanche L1s Precompiles](/docs/avalanche-l1s/evm-configuration/evm-l1-customization#precompiles) section: * [AllowList](/docs/avalanche-l1s/evm-configuration/allowlist): A reusable interface for permission management * [Permissions](/docs/avalanche-l1s/evm-configuration/permissions): Control contract deployment and transaction submission * [Tokenomics](/docs/avalanche-l1s/evm-configuration/tokenomics): Manage native token supply and minting * [Transaction Fees](/docs/avalanche-l1s/evm-configuration/transaction-fees): Configure fee parameters and reward mechanisms * [Warp Messenger](/docs/avalanche-l1s/evm-configuration/warpmessenger): Perform cross-chain operations ## Custom Precompiles One of the powerful features of EVM is the ability to create custom precompiles. By writing Go code and integrating it as a precompile, you can extend the EVM's functionality to suit specific use cases. Custom precompiles allow you to: * Achieve higher performance for computationally intensive tasks. * Access lower-level system functions not available in Solidity. * Implement custom cryptographic functions or algorithms. * Interact with external systems or data sources. Creating custom precompiles opens up a wide range of possibilities for developers to optimize and expand their decentralized applications on Avalanche L1s. 
By leveraging EVM, Validator Manager Contracts, and precompiles, you can build customized and efficient decentralized applications with greater control and enhanced functionality. Explore the following sections to learn how to implement and utilize these powerful features. # Avalanche Layer 1s URL: /docs/avalanche-l1s Explore the multi-chain architecture of Avalanche ecosystem. An Avalanche L1 is a sovereign network which defines its own rules regarding its membership and token economics. It is composed of a dynamic subset of Avalanche validators working together to achieve consensus on the state of one or more blockchains. Each blockchain is validated by exactly one Avalanche L1, while an Avalanche L1 can validate many blockchains. Avalanche's [Primary Network](/docs/quick-start/primary-network) is a special Avalanche L1 running three blockchains: * The Platform Chain [(P-Chain)](/docs/quick-start/primary-network#p-chain) * The Contract Chain [(C-Chain)](/docs/quick-start/primary-network#c-chain) * The Exchange Chain [(X-Chain)](/docs/quick-start/primary-network#x-chain) ![image](/images/subnet1.png) Every validator of an Avalanche L1 **must** sync the P-Chain of the Primary Network for interoperability. Node operators that validate an Avalanche L1 with multiple chains do not need to run multiple machines for validation. For example, the Primary Network is an Avalanche L1 with three coexisting chains, all of which can be validated by a single node, or a single machine. ## Advantages ### Independent Networks * Avalanche L1s use virtual machines to specify their own execution logic, determine their own fee regime, maintain their own state, facilitate their own networking, and provide their own security. * Each Avalanche L1's performance is isolated from other Avalanche L1s in the ecosystem, so increased usage on one Avalanche L1 won't affect another. 
* Avalanche L1s can have their own token economics with their own native tokens, fee markets, and incentives determined by the Avalanche L1 deployer. * One Avalanche L1 can host multiple blockchains with customized [virtual machines](/docs/quick-start/virtual-machines). ### Native Interoperability Avalanche Warp Messaging enables native cross-Avalanche L1 communication and allows Virtual Machine (VM) developers to implement arbitrary communication protocols between any two Avalanche L1s. ### Accommodate App-Specific Requirements Different blockchain-based applications may require validators to have certain properties such as large amounts of RAM or CPU power. An Avalanche L1 could require that validators meet certain [hardware requirements](/docs/nodes/system-requirements#hardware-and-operating-systems) so that the application doesn't suffer from low performance due to slow validators. ### Launch Networks Designed With Compliance Avalanche's L1 architecture makes regulatory compliance manageable. As mentioned above, an Avalanche L1 may require validators to meet a set of requirements. Some examples of requirements the creators of an Avalanche L1 may choose include: * Validators must be located in a given country. * Validators must pass KYC/AML checks. * Validators must hold a certain license. ### Control Privacy of On-Chain Data Avalanche L1s are ideal for organizations interested in keeping their information private. Institutions conscious of their stakeholders' privacy can create a private Avalanche L1 where the contents of the blockchains would be visible only to a set of pre-approved validators. Define this at creation with a [single parameter](/docs/nodes/configure/avalanche-l1-configs#private-avalanche-l1). ### Validator Sovereignty In a heterogeneous network of blockchains, some validators will not want to validate certain blockchains because they simply have no interest in those blockchains.
The Avalanche L1 model enables validators to concern themselves only with blockchain networks they choose to participate in. This greatly reduces the computational burden on validators. ## Develop Your Own Avalanche L1 Avalanche L1s on Avalanche are deployed by default with [Subnet-EVM](https://github.com/ava-labs/subnet-evm#subnet-evm), a fork of go-ethereum. It implements the Ethereum Virtual Machine and supports Solidity smart contracts as well as most other Ethereum client functionality. To get started, check out our [L1 Toolbox](/tools/l1-toolbox) or the tutorials in the [Avalanche CLI](/docs/tooling/create-avalanche-l1) section. # Manage VM Binaries URL: /docs/avalanche-l1s/manage-vm-binaries Learn about Avalanche Plugin Manager (APM) and how to use it to manage virtual machine binaries on existing AvalancheGo instances. Avalanche Plugin Manager (APM) is a command-line tool to manage virtual machine binaries on existing AvalancheGo instances. It enables adding/removing nodes to Avalanche L1s and upgrading VM plugin binaries as new versions are released to the plugin repository. GitHub: [https://github.com/ava-labs/apm](https://github.com/ava-labs/apm) ## `avalanche-plugins-core` `avalanche-plugins-core` is a plugin repository that ships with the `apm`. A plugin repository consists of a set of virtual machine and Avalanche L1 definitions that the `apm` consumes to allow users to quickly and easily download and manage VM binaries. GitHub: [https://github.com/ava-labs/avalanche-plugins-core](https://github.com/ava-labs/avalanche-plugins-core) # Simple VM in Any Language URL: /docs/avalanche-l1s/simple-vm-any-language Learn how to implement a simple virtual machine in any language. This is language-agnostic, high-level documentation explaining the basics of implementing your own virtual machine from scratch. Avalanche virtual machines are grpc servers implementing Avalanche's [Proto interfaces](https://buf.build/ava-labs/avalanche).
This means a VM can be written in [any language that has a grpc implementation](https://grpc.io/docs/languages/). ## Minimal Implementation To get the process started, at the minimum, you will need to implement the following interfaces: * [`vm.Runtime`](https://buf.build/ava-labs/avalanche/docs/main:vm.runtime) (Client) * [`vm.VM`](https://buf.build/ava-labs/avalanche/docs/main:vm) (Server) To build a blockchain taking advantage of AvalancheGo's consensus to build blocks, you will need to implement: * [AppSender](https://buf.build/ava-labs/avalanche/docs/main:appsender) (Client) * [Messenger](https://buf.build/ava-labs/avalanche/docs/main:messenger) (Client) To have a json-RPC endpoint, `/ext/bc/subnetId/rpc` exposed by AvalancheGo, you will need to implement: * [`Http`](https://buf.build/ava-labs/avalanche/docs/main:http) (Server) You can and should use a tool like `buf` to generate the (Client/Server) code from the interfaces as stated in the [Avalanche module](https://buf.build/ava-labs/avalanche)'s page. There are *server* and *client* interfaces to implement. AvalancheGo calls the *server* interfaces exposed by your VM and your VM calls the *client* interfaces exposed by AvalancheGo. ## Starting Process Your VM is started by AvalancheGo launching your binary. Your binary is started as a sub-process of AvalancheGo. While launching your binary, AvalancheGo passes an environment variable `AVALANCHE_VM_RUNTIME_ENGINE_ADDR` containing a URL. Your VM must use this URL to initialize a `vm.Runtime` client. Your VM, after having started a grpc server implementing the VM interface, must call the [`vm.Runtime.InitializeRequest`](https://buf.build/ava-labs/avalanche/docs/main:vm.runtime#vm.runtime.InitializeRequest) with the following parameters. * `protocolVersion`: It must match the `supported plugin version` of the [AvalancheGo release](https://github.com/ava-labs/AvalancheGo/releases) you are using. It is always part of the release notes.
* `addr`: your gRPC server's address, in `host:port` format (for example, `localhost:12345`). ## VM Initialization The service methods are described in the order they are called. You will need to implement these methods in your server. ### Pre-Initialization Sequence AvalancheGo starts and stops your process multiple times before launching the real initialization sequence. 1. [VM.Version](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Version) * Return: your VM's version. 2. [VM.CreateStaticHandlers](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.CreateStaticHandlers) * Return: an empty array (not strictly required). 3. [VM.Shutdown](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Shutdown) * You should gracefully stop your process. * Return: Empty ### Initialization Sequence 1. [VM.CreateStaticHandlers](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.CreateStaticHandlers) * Return: an empty array (not strictly required). 2. [VM.Initialize](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Initialize) * Param: an [InitializeRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.InitializeRequest). * You must use this data to initialize your VM. * You should add the genesis block to your blockchain and set it as the last accepted block. * Return: an [InitializeResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.InitializeResponse) containing data about the genesis block, extracted from the `genesis_bytes` sent in the request. 3. [VM.VerifyHeightIndex](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.VerifyHeightIndex) * Return: a [VerifyHeightIndexResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VerifyHeightIndexResponse) with the code `ERROR_UNSPECIFIED` to indicate that no error has occurred. 4.
[VM.CreateHandlers](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.CreateHandlers) * To serve the json-RPC endpoint `/ext/bc/subnetId/rpc` exposed by AvalancheGo * See [json-RPC](#json-rpc) for more details * Create a [`Http`](https://buf.build/ava-labs/avalanche/docs/main:http) server and get its URL. * Return: a `CreateHandlersResponse` containing a single item with the server's URL (or an empty array if not implementing the json-RPC endpoint). 5. [VM.StateSyncEnabled](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.StateSyncEnabled) * Return: `true` if you want to enable StateSync, `false` otherwise. 6. [VM.SetState](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetState) *If you specified `true` in the `StateSyncEnabled` result* * Param: a [SetStateRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateRequest) with the `StateSyncing` value * Set your blockchain's state to `StateSyncing` * Return: a [SetStateResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateResponse) built from the genesis block. 7. [VM.GetOngoingSyncStateSummary](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.GetOngoingSyncStateSummary) *If you specified `true` in the `StateSyncEnabled` result* * Return: a [GetOngoingSyncStateSummaryResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.GetOngoingSyncStateSummaryResponse) built from the genesis block. 8. [VM.SetState](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetState) * Param: a [SetStateRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateRequest) with the `Bootstrapping` value * Set your blockchain's state to `Bootstrapping` * Return: a [SetStateResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateResponse) built from the genesis block. 9.
[VM.SetPreference](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetPreference) * Param: `SetPreferenceRequest` containing the preferred block ID * Return: Empty 10. [VM.SetState](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetState) * Param: a [SetStateRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateRequest) with the `NormalOp` value * Set your blockchain's state to `NormalOp` * Return: a [SetStateResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateResponse) built from the genesis block. 11. [VM.Connected](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Connected) (for every other node validating this Avalanche L1 in the network) * Param: a [ConnectedRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.ConnectedRequest) with the NodeID and the version of AvalancheGo. * Return: Empty 12. [VM.Health](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Health) * Param: Empty * Return: a [HealthResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.HealthResponse) with an empty `details` property. 13. [VM.ParseBlock](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.ParseBlock) * Param: A byte array containing a block (the genesis block in this case) * Return: a [ParseBlockResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.ParseBlockResponse) built from the last accepted block. At this point, your VM is fully started and initialized. ### Building Blocks #### Transaction Gossiping Sequence When your VM receives transactions (for example, through the [json-RPC](#json-rpc) endpoints), it can gossip them to the other nodes by using the [AppSender](https://buf.build/ava-labs/avalanche/docs/main:appsender) service. Suppose we have a three-node network with nodeX, nodeY, and nodeZ, and that nodeX has received a new transaction on its json-RPC endpoint.
[`AppSender.SendAppGossip`](https://buf.build/ava-labs/avalanche/docs/main:appsender#appsender.AppSender.SendAppGossip) (*client*): You must serialize your transaction data into a byte array and call `SendAppGossip` to propagate the transaction. AvalancheGo then propagates this to the other nodes. [VM.AppGossip](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.AppGossip): You must deserialize the transaction and store it for the next block. * Param: A byte array containing your transaction data, and the NodeID of the node which sent the gossip message. * Return: Empty #### Block Building Sequence Whenever your VM is ready to build a new block, it initiates the block building process by using the [Messenger](https://buf.build/ava-labs/avalanche/docs/main:messenger) service. Suppose nodeY wants to build the block. You will probably implement some kind of background worker that periodically checks whether there are any pending transactions: *client* [`Messenger.Notify`](https://buf.build/ava-labs/avalanche/docs/main:messenger#messenger.Messenger.Notify): You must issue a notify request to AvalancheGo by calling the method with the `MESSAGE_BUILD_BLOCK` value. 1. [VM.BuildBlock](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BuildBlock) * Param: Empty * You must build a block with your pending transactions. Serialize it to a byte array. * Store this block in memory as a pending block. * Return: a [BuildBlockResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.BuildBlockResponse) built from the newly built block and its associated data (`id`, `parent_id`, `height`, `timestamp`). 2. [VM.BlockVerify](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockVerify) * Param: The byte array containing the block data * Return: the block's timestamp 3. [VM.SetPreference](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetPreference) * Param: The block's ID * You must mark this block as the next preferred block. * Return: Empty On the other nodes (nodeX and nodeZ), the following methods are then called: 1.
[VM.ParseBlock](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.ParseBlock) * Param: A byte array containing the newly built block's data * Store this block in memory as a pending block. * Return: a [ParseBlockResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.ParseBlockResponse) built from the newly parsed block. 2. [VM.BlockVerify](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockVerify) * Param: The byte array containing the block data * Return: the block's timestamp 3. [VM.SetPreference](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetPreference) * Param: The block's ID * You must mark this block as the next preferred block. * Return: Empty [VM.BlockAccept](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockAccept): You must accept this block as your last final block. * Param: The block's ID * Return: Empty #### Managing Conflicts Conflicts happen when two or more nodes propose the next block at the same time. AvalancheGo takes care of this and decides, using Snowman consensus, which block should be considered final and which blocks should be rejected. On the VM side, all you need to do is implement the `VM.BlockAccept` and `VM.BlockReject` methods. *nodeX proposes block `0x123...`, nodeY proposes block `0x321...`, and nodeZ proposes block `0x456...`* There are three conflicting blocks (different hashes), and if we look at our VM's log files, we can see that AvalancheGo uses Snowman to decide which block must be accepted. ```bash ... snowman/voter.go:58 filtering poll results ... ... snowman/voter.go:65 finishing poll ... ... snowman/voter.go:87 Snowman engine can't quiesce ... ... snowman/voter.go:58 filtering poll results ... ... snowman/voter.go:65 finishing poll ... ... snowman/topological.go:600 accepting block ``` Suppose AvalancheGo accepts block `0x123...`. The following RPC methods are then called on all nodes: 1.
[VM.BlockAccept](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockAccept): You must accept this block as your last final block. * Param: The block's ID (`0x123...`) * Return: Empty 2. [VM.BlockReject](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockReject): You must mark this block as rejected. * Param: The block's ID (`0x321...`) * Return: Empty 3. [VM.BlockReject](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockReject): You must mark this block as rejected. * Param: The block's ID (`0x456...`) * Return: Empty ### JSON-RPC To enable your json-RPC endpoint, you must implement the [HandleSimple](https://buf.build/ava-labs/avalanche/docs/main:http#http.HTTP.HandleSimple) method of the [`Http`](https://buf.build/ava-labs/avalanche/docs/main:http) interface. * Param: a [HandleSimpleHTTPRequest](https://buf.build/ava-labs/avalanche/docs/main:http#http.HandleSimpleHTTPRequest) containing the original request's method, URL, headers, and body. * Analyze, deserialize, and handle the request. For example, if the request represents a transaction, you must deserialize it, check its signature, store it, and gossip it to the other nodes using the [messenger client](#block-building-sequence). * Return: the [HandleSimpleHTTPResponse](https://buf.build/ava-labs/avalanche/docs/main:http#http.HandleSimpleHTTPResponse) response that will be sent back to the original sender. This server is registered with AvalancheGo during the [initialization process](#initialization-sequence) when the `VM.CreateHandlers` method is called. You simply respond with the server's URL in the `CreateHandlersResponse` result. # Introduction URL: /docs/avalanche-l1s/vm-overview Learn about the execution layer of a blockchain network. A Virtual Machine (VM) is a blueprint for a blockchain. Blockchains are instantiated from a VM, similar to how objects are instantiated from a class definition.
VMs can define anything you want, but will generally define transactions that are executed and how blocks are created. ## Blocks and State Virtual Machines deal with blocks and state. The functionality provided by VMs is to: * Define the representation of a blockchain's state * Represent the operations in that state * Apply the operations in that state Each block in the blockchain contains a set of state transitions. Each block is applied in order from the blockchain's initial genesis block to its last accepted block to reach the latest state of the blockchain. ## Blockchain A blockchain relies on two major components: The **Consensus Engine** and the **VM**. The VM defines application-specific behavior and how blocks are built and parsed to create the blockchain. All VMs run on top of the Avalanche Consensus Engine, which allows nodes in the network to agree on the state of the blockchain. Here's a quick example of how VMs interact with consensus: 1. A node wants to update the blockchain's state 2. The node's VM will notify the consensus engine that it wants to update the state 3. The consensus engine will request the block from the VM 4. The consensus engine will verify the returned block using the VM's implementation of `Verify()` 5. The consensus engine will get the network to reach consensus on whether to accept or reject the newly verified block. Every virtuous (well-behaved) node on the network will have the same preference for a particular block. 6. Depending upon the consensus results, the engine will either accept or reject the block. What happens when a block is accepted or rejected is specific to the implementation of the VM. AvalancheGo provides the consensus engine for every blockchain on the Avalanche Network. The consensus engine relies on the VM interface to handle building, parsing, and storing blocks as well as verifying and executing on behalf of the consensus engine.
This decoupling between the application and consensus layer allows developers to build their applications quickly by implementing virtual machines, without having to worry about the consensus layer, which is managed by Avalanche and deals with how nodes agree on whether or not to accept a block. ## Installing a VM VMs are supplied as binaries to a node running `AvalancheGo`. These binaries must be named the VM's assigned **VMID**. A VMID is a 32-byte hash encoded in CB58 that is generated when you build your VM. In order to install a VM, its binary must be installed in the `AvalancheGo` plugin path. See [here](/docs/nodes/configure/configs-flags#--plugin-dir-string) for more details. Multiple VMs can be installed in this location. Each VM runs as a separate process from AvalancheGo and communicates with `AvalancheGo` using gRPC calls. This functionality is enabled by **RPCChainVM**, a special VM which wraps around other VM implementations and bridges the VM and AvalancheGo, establishing a standardized communication protocol between them. During VM creation, handshake messages are exchanged via **RPCChainVM** between AvalancheGo and the VM installation. To avoid errors, ensure the **RPCChainVM** protocol versions match by updating your VM or using a [different version of AvalancheGo](https://github.com/ava-labs/AvalancheGo/releases). Note that some VMs may not support the latest protocol version. ### API Handlers Users can interact with a blockchain and its VM through handlers exposed by the VM's API. VMs expose two types of handlers to serve responses for incoming requests: * **Blockchain Handlers**: Referred to as handlers, these expose APIs to interact with a blockchain instantiated by a VM. The API endpoint will be different for each chain. The endpoint for a handler is `/ext/bc/[chainID]`. * **VM Handlers**: Referred to as static handlers, these expose APIs to interact with the VM directly. One example API would be to parse genesis data to instantiate a new blockchain.
The endpoint for a static handler is `/ext/vm/[vmID]`. For any readers familiar with object-oriented programming, static and non-static handlers on a VM are analogous to static and non-static methods on a class. Blockchain handlers can be thought of as methods on an object, whereas VM handlers can be thought of as static methods on a class. ### Instantiate a VM The `vm.Factory` interface is implemented to create new VM instances from which a blockchain can be initialized. The factory's `New` method shown below provides `AvalancheGo` with an instance of the VM. It's defined in the [`factory.go`](https://github.com/ava-labs/timestampvm/blob/main/timestampvm/factory.go) file of the `timestampvm` repository. ```go // Returning a new VM instance from VM's factory func (f *Factory) New(*snow.Context) (interface{}, error) { return &vm.VM{}, nil } ``` ### Initializing a VM to Create a Blockchain Before a VM can run, AvalancheGo will initialize it by invoking its `Initialize` method. Here, the VM will bootstrap itself and set up anything it requires before it starts running. This might involve setting up its database, mempool, genesis state, or anything else the VM requires to run. ```go if err := vm.Initialize( ctx.Context, vmDBManager, genesisData, chainConfig.Upgrade, chainConfig.Config, msgChan, fxs, sender, ); err != nil { return err } ``` You can refer to the [implementation](https://github.com/ava-labs/timestampvm/blob/main/timestampvm/vm.go#L75) of `vm.Initialize` in the TimestampVM repository. ## Interfaces Every VM should implement the following interfaces: ### `block.ChainVM` To reach consensus on linear blockchains, Avalanche uses the Snowman consensus engine. To be compatible with Snowman, a VM must implement the `block.ChainVM` interface. For more information, see [here](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/vm.go). ```go title="snow/engine/snowman/block/vm.go" // ChainVM defines the required functionality of a Snowman VM.
// // A Snowman VM is responsible for defining the representation of the state, // the representation of operations in that state, the application of operations // on that state, and the creation of the operations. Consensus will decide on // if the operation is executed and the order operations are executed. // // For example, suppose we have a VM that tracks an increasing number that // is agreed upon by the network. // The state is a single number. // The operation is setting the number to a new, larger value. // Applying the operation will save to the database the new value. // The VM can attempt to issue a new number, of larger value, at any time. // Consensus will ensure the network agrees on the number at every block height. type ChainVM interface { common.VM Getter Parser // Attempt to create a new block from data contained in the VM. // // If the VM doesn't want to issue a new block, an error should be // returned. BuildBlock() (snowman.Block, error) // Notify the VM of the currently preferred block. // // This should always be a block that has no children known to consensus. SetPreference(ids.ID) error // LastAccepted returns the ID of the last accepted block. // // If no blocks have been accepted by consensus yet, it is assumed there is // a definitionally accepted block, the Genesis block, that will be // returned. LastAccepted() (ids.ID, error) } // Getter defines the functionality for fetching a block by its ID. type Getter interface { // Attempt to load a block. // // If the block does not exist, an error should be returned. // GetBlock(ids.ID) (snowman.Block, error) } // Parser defines the functionality for fetching a block by its bytes. type Parser interface { // Attempt to create a block from a stream of bytes. // // The block should be represented by the full byte array, without extra // bytes. ParseBlock([]byte) (snowman.Block, error) } ``` ### `common.VM` `common.VM` is a type that every `VM` must implement. 
For more information, you can see the full file [here](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/common/vm.go). ```go title="snow/engine/common/vm.go" // VM describes the interface that all consensus VMs must implement type VM interface { // Contains handlers for VM-to-VM specific messages AppHandler // Returns nil if the VM is healthy. // Periodically called and reported via the node's Health API. health.Checkable // Connector represents a handler that is called on connection connect/disconnect validators.Connector // Initialize this VM. // [ctx]: Metadata about this VM. // [ctx.networkID]: The ID of the network this VM's chain is running on. // [ctx.chainID]: The unique ID of the chain this VM is running on. // [ctx.Log]: Used to log messages // [ctx.NodeID]: The unique staker ID of this node. // [ctx.Lock]: A Read/Write lock shared by this VM and the consensus // engine that manages this VM. The write lock is held // whenever code in the consensus engine calls the VM. // [dbManager]: The manager of the database this VM will persist data to. // [genesisBytes]: The byte-encoding of the genesis information of this // VM. The VM uses it to initialize its state. For // example, if this VM were an account-based payments // system, `genesisBytes` would probably contain a genesis // transaction that gives coins to some accounts, and this // transaction would be in the genesis block. // [toEngine]: The channel used to send messages to the consensus engine. // [fxs]: Feature extensions that attach to this VM. Initialize( ctx *snow.Context, dbManager manager.Manager, genesisBytes []byte, upgradeBytes []byte, configBytes []byte, toEngine chan<- Message, fxs []*Fx, appSender AppSender, ) error // Bootstrapping is called when the node is starting to bootstrap this chain. Bootstrapping() error // Bootstrapped is called when the node is done bootstrapping this chain. Bootstrapped() error // Shutdown is called when the node is shutting down. 
Shutdown() error // Version returns the version of the VM this node is running. Version() (string, error) // Creates the HTTP handlers for custom VM network calls. // // This exposes handlers that the outside world can use to communicate with // a static reference to the VM. Each handler has the path: // [Address of node]/ext/VM/[VM ID]/[extension] // // Returns a mapping from [extension]s to HTTP handlers. // // Each extension can specify how locking is managed for convenience. // // For example, it might make sense to have an extension for creating // genesis bytes this VM can interpret. CreateStaticHandlers() (map[string]*HTTPHandler, error) // Creates the HTTP handlers for custom chain network calls. // // This exposes handlers that the outside world can use to communicate with // the chain. Each handler has the path: // [Address of node]/ext/bc/[chain ID]/[extension] // // Returns a mapping from [extension]s to HTTP handlers. // // Each extension can specify how locking is managed for convenience. // // For example, if this VM implements an account-based payments system, // it might have an extension called `accounts`, where clients could get // information about their accounts. CreateHandlers() (map[string]*HTTPHandler, error) } ``` ### `snowman.Block` The `snowman.Block` interface defines the functionality a block must implement to be a block in a linear Snowman chain. For more information, you can see the full file [here](https://github.com/ava-labs/avalanchego/blob/master/snow/consensus/snowman/block.go). ```go title="snow/consensus/snowman/block.go" // Block is a possible decision that dictates the next canonical block. // // Blocks are guaranteed to be Verified, Accepted, and Rejected in topological // order. Specifically, if Verify is called, then the parent has already been // verified. If Accept is called, then the parent has already been accepted. If // Reject is called, the parent has already been accepted or rejected.
// // If the status of the block is Unknown, ID is assumed to be able to be called. // If the status of the block is Accepted or Rejected; Parent, Verify, Accept, // and Reject will never be called. type Block interface { choices.Decidable // Parent returns the ID of this block's parent. Parent() ids.ID // Verify that the state transition this block would make if accepted is // valid. If the state transition is invalid, a non-nil error should be // returned. // // It is guaranteed that the Parent has been successfully verified. Verify() error // Bytes returns the binary representation of this block. // // This is used for sending blocks to peers. The bytes should be able to be // parsed into the same block on another node. Bytes() []byte // Height returns the height of this block in the chain. Height() uint64 } ``` ### `choices.Decidable` This interface is a superset of every decidable object, such as transactions, blocks, and vertices. For more information, you can see the full file [here](https://github.com/ava-labs/avalanchego/blob/master/snow/choices/decidable.go). ```go title="snow/choices/decidable.go" // Decidable represents element that can be decided. // // Decidable objects are typically thought of as either transactions, blocks, or // vertices. type Decidable interface { // ID returns a unique ID for this element. // // Typically, this is implemented by using a cryptographic hash of a // binary representation of this element. An element should return the same // IDs upon repeated calls. ID() ids.ID // Accept this element. // // This element will be accepted by every correct node in the network. Accept() error // Reject this element. // // This element will not be accepted by any correct node in the network. Reject() error // Status returns this element's current status. // // If Accept has been called on an element with this ID, Accepted should be // returned. 
Similarly, if Reject has been called on an element with this // ID, Rejected should be returned. If the contents of this element are // unknown, then Unknown should be returned. Otherwise, Processing should be // returned. Status() Status } ``` # Why Build Avalanche L1s URL: /docs/avalanche-l1s/when-to-build-avalanche-l1 Learn key concepts to decide when to build your own Avalanche L1. ## Why Build Your Own Avalanche L1 There are many advantages to running your own Avalanche L1. If you find one or more of these a good match for your project, then an Avalanche L1 might be a good solution for you. ### We Want Our Own Gas Token C-Chain is an Ethereum Virtual Machine (EVM) chain; it requires gas fees to be paid in its native token. That is, the application may create its own utility tokens (ERC-20) on the C-Chain, but the gas must be paid in AVAX. In contrast, [Subnet-EVM](https://github.com/ava-labs/subnet-evm) effectively creates an application-specific EVM chain with full control over the native (gas) coin. The operator can pre-allocate the native tokens in the chain genesis and mint more using the [Subnet-EVM](https://github.com/ava-labs/subnet-evm) precompile contract. These fees can either be burned (as AVAX is burned on C-Chain) or configured to be sent to an address, which can be a smart contract. Note that the Avalanche L1 gas token is specific to the application in the chain, and thus unknown to external parties. Moving assets to other chains requires trusted bridge contracts (or the upcoming cross-Avalanche L1 communication feature). ### We Want Higher Throughput The primary goal of the gas limit on C-Chain is to restrict the block size and therefore prevent network saturation. If a block can be arbitrarily large, it takes longer to propagate, potentially degrading network performance. The C-Chain gas limit acts as a deterrent against system abuse but can be quite limiting for high-throughput applications.
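For illustration, a Subnet-EVM chain can raise this ceiling through the `feeConfig` section of its genesis file. The values below are placeholders to show the shape of the configuration, not recommendations:

```json
{
  "config": {
    "feeConfig": {
      "gasLimit": 20000000,
      "targetBlockRate": 2,
      "minBaseFee": 25000000000,
      "targetGas": 15000000,
      "baseFeeChangeDenominator": 36,
      "minBlockGasCost": 0,
      "maxBlockGasCost": 1000000,
      "blockGasCostStep": 200000
    }
  }
}
```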
Unlike the C-Chain, an Avalanche L1 can be single-tenant, dedicated to a specific application, and can thus host its own set of validators with higher bandwidth requirements, which allows for a higher gas limit and thus higher transaction throughput. Plus, [Subnet-EVM](https://github.com/ava-labs/subnet-evm) supports fee configuration upgrades that can adapt to surges in application traffic. Avalanche L1 workloads are isolated from the Primary Network, which means the noisy-neighbor effect of one workload (for example, an NFT mint on C-Chain) cannot destabilize the Avalanche L1 or drive up its gas price. This failure isolation model can provide higher application reliability. ### We Want Strict Access Control The C-Chain is open and permissionless: anyone can deploy and interact with contracts. However, for regulatory reasons, some applications may need a consistent access control mechanism for all on-chain transactions. With [Subnet-EVM](https://github.com/ava-labs/subnet-evm), an application can require that “only authorized users may deploy contracts or make transactions.” Allow-lists are only updated by the administrators, and the allow list itself is implemented within a precompile contract, making it more transparent and auditable for compliance matters. ### We Need EVM Customization If your project is deployed on the C-Chain, then your execution environment is dictated by the setup of the C-Chain. Changing any of the execution parameters means the configuration of the C-Chain would need to change, which is expensive, complex, and slow. If your project needs capabilities, execution parameters, or precompiles that the C-Chain does not provide, then an Avalanche L1 is the solution you need. You can configure the EVM in an Avalanche L1 to run however you want, adding precompiles and setting runtime parameters to whatever your project needs.
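As a sketch of the access control described above, the allow-list precompiles are activated in the Subnet-EVM genesis; the admin address below is a placeholder, and `blockTimestamp: 0` activates the precompile from genesis:

```json
{
  "config": {
    "contractDeployerAllowListConfig": {
      "blockTimestamp": 0,
      "adminAddresses": ["0x0000000000000000000000000000000000000000"]
    },
    "txAllowListConfig": {
      "blockTimestamp": 0,
      "adminAddresses": ["0x0000000000000000000000000000000000000000"]
    }
  }
}
```

The listed admin accounts can then add or remove allowed addresses on-chain through the precompile.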
### We Need Custom Validator Management With the Etna upgrade, L1s can implement their own validator management logic through a *ValidatorManager* smart contract. This gives you complete control over your validator set, allowing you to define custom staking rules, implement permissionless proof-of-stake with your own token, or create permissioned proof-of-authority networks. The validator management can be handled directly through smart contracts, giving you programmatic control over validator selection and rewards distribution. ### We Want to Build a Sovereign Network L1s on Avalanche are truly sovereign networks that operate independently without relying on other systems. You have complete control over your network's consensus mechanisms, transaction processing, and security protocols. This independence allows you to scale horizontally without dependencies on other networks while maintaining full control over your network parameters and upgrades. This sovereignty is particularly important for projects that need complete autonomy over their blockchain's operation and evolution. ## Conclusion Here we presented some considerations in favor of running your own Avalanche L1 vs. deploying on the C-Chain. If an application has a relatively low transaction rate and no special circumstances that would make the C-Chain a non-starter, you can begin with a C-Chain deployment to leverage existing technical infrastructure, and later expand to an Avalanche L1. That way you can focus on the core of your project, and once you have a solid product/market fit and enough traction that the C-Chain is constraining you, plan a move to your own Avalanche L1. Of course, we're happy to talk to you about your architecture and help you choose the best path forward. Feel free to reach out to us on [Discord](https://chat.avalabs.org/) or other [community channels](https://www.avax.network/community) we run.
# Asset Requirements URL: /docs/builderkit/asset-requirements Required assets and file structure for chain and token logos. # Asset Requirements BuilderKit requires specific asset files for displaying chain and token logos. These assets should follow a standardized file structure and naming convention. ## Chain Logos Chain logos are used by components like `ChainIcon`, `ChainDropdown`, and `TokenIconWithChain`. ### File Structure Chain logos should be placed at: ``` /chains/logo/{chain_id}.png ``` ### Examples ``` /chains/logo/43114.png // Avalanche C-Chain /chains/logo/43113.png // Fuji Testnet /chains/logo/173750.png // Echo L1 ``` ### Requirements * Format: PNG with transparency * Dimensions: 32x32px (minimum) * Background: Transparent * Shape: Circular or square with rounded corners * File size: \< 100KB ## Token Logos Token logos are used by components like `TokenIcon`, `TokenChip`, and `TokenRow`. ### File Structure Token logos should be placed at: ``` /tokens/logo/{chain_id}/{address}.png ``` ### Examples ``` /tokens/logo/43114/0x1234567890123456789012345678901234567890.png // Token on C-Chain /tokens/logo/43113/0x5678901234567890123456789012345678901234.png // Token on Fuji ``` ### Requirements * Format: PNG with transparency * Dimensions: 32x32px (minimum) * Background: Transparent * Shape: Circular or square with rounded corners * File size: \< 100KB ## Directory Structure Your public assets directory should look like this: ``` public/ ├── chains/ │ └── logo/ │ ├── 43114.png │ ├── 43113.png │ └── 173750.png └── tokens/ └── logo/ ├── 43114/ │ ├── 0x1234....png │ └── 0x5678....png └── 43113/ ├── 0x9012....png └── 0xabcd....png ``` # Custom Chain Setup URL: /docs/builderkit/chains Configure custom Avalanche L1 chains in your application. # Custom Chain Setup Learn how to configure custom Avalanche L1 chains in your BuilderKit application. 
## Chain Definition

Define your custom L1 chain using `viem`'s `defineChain`:

```tsx
import { defineChain } from "viem";

export const myL1 = defineChain({
  id: 173750,       // Your L1 chain ID
  name: 'My L1',    // Display name
  network: 'my-l1', // Network identifier
  nativeCurrency: {
    decimals: 18,
    name: 'Token',
    symbol: 'TKN',
  },
  rpcUrls: {
    default: { http: ['https://api.avax.network/ext/L1/rpc'] },
  },
  blockExplorers: {
    default: { name: 'Explorer', url: 'https://explorer.avax.network/my-l1' },
  },
  // Optional: Custom metadata
  iconUrl: "/chains/logo/my-l1.png",
  icm_registry: "0x..." // ICM registry contract
});
```

## Provider Configuration

Add your custom L1 chain to the Web3Provider:

```tsx
import { Web3Provider } from '@avalabs/builderkit';
import { avalanche } from '@wagmi/core/chains';
import { myL1 } from './chains/definitions/my-l1';

function App() {
  return (
    // Wrap your app with the provider; the exact props are omitted in this excerpt
    <Web3Provider>
      {/* ...your app... */}
    </Web3Provider>
  );
}
```

## Required Properties

| Property         | Type     | Description                  |
| ---------------- | -------- | ---------------------------- |
| `id`             | `number` | Unique L1 chain identifier   |
| `name`           | `string` | Human-readable chain name    |
| `network`        | `string` | Network identifier           |
| `nativeCurrency` | `object` | Chain's native token details |
| `rpcUrls`        | `object` | RPC endpoint configuration   |
| `blockExplorers` | `object` | Block explorer URLs          |

## Optional Properties

| Property       | Type      | Description                    |
| -------------- | --------- | ------------------------------ |
| `iconUrl`      | `string`  | Chain logo URL                 |
| `icm_registry` | `string`  | ICM registry contract address  |
| `testnet`      | `boolean` | Whether the chain is a testnet |

## Example: Echo L1

Here's a complete example using the Echo L1:

```tsx
import { defineChain } from "viem";

export const echo = defineChain({
  id: 173750,
  name: 'Echo L1',
  network: 'echo',
  nativeCurrency: {
    decimals: 18,
    name: 'Ech',
    symbol: 'ECH',
  },
  rpcUrls: {
    default: { http: ['https://subnets.avax.network/echo/testnet/rpc'] },
  },
  blockExplorers: {
    default: { name: 'Explorer', url:
'https://subnets-test.avax.network/echo' }, }, iconUrl: "/chains/logo/173750.png", icm_registry: "0xF86Cb19Ad8405AEFa7d09C778215D2Cb6eBfB228" }); ``` # Contribute URL: /docs/builderkit/contribute Guide for contributing to BuilderKit by building hooks, components, and flows. # Contributing to BuilderKit We welcome contributions to BuilderKit! Whether you're fixing bugs, adding new features, or improving documentation, your help makes BuilderKit better for everyone. ## What You Can Contribute ### Hooks Build reusable hooks that handle common Web3 functionality: * Chain data management * Token interactions * Contract integrations * State management * API integrations ### Components Create new UI components or enhance existing ones: * Form elements * Display components * Interactive elements * Layout components * Utility components ### Flows Design complete user journeys by combining components: * Token swaps * NFT minting * Governance voting * Staking interfaces * Custom protocols # Getting Started URL: /docs/builderkit/getting-started Quick setup guide for BuilderKit in your React application. Get started with BuilderKit in your React application. 
## Installation

```bash
npm install @avalabs/builderkit
# or
yarn add @avalabs/builderkit
```

## Provider Setup

Wrap your application with the Web3Provider to enable wallet connections and chain management:

```tsx
import { Web3Provider } from '@avalabs/builderkit';
import { avalanche, avalancheFuji } from '@wagmi/core/chains';
import { echo } from './chains/definitions/echo';
import { dispatch } from './chains/definitions/dispatch';

// Configure chains
const chains = [avalanche, avalancheFuji, echo, dispatch];

function App() {
  return (
    // Wrap your app with the provider; the exact props are omitted in this excerpt
    <Web3Provider>
      {/* ...your app... */}
    </Web3Provider>
  );
}
```

## Next Steps

* Learn about [Token Configuration](/docs/builderkit/tokens)
* Explore [Core Components](/docs/builderkit/components/control)
* Check out [Pre-built Flows](/docs/builderkit/flows/ictt)

# Introduction

URL: /docs/builderkit

A comprehensive React component library for building Web3 applications on Avalanche.

BuilderKit is a powerful collection of React components and hooks designed specifically for building Web3 applications on Avalanche. It provides everything you need to create modern, user-friendly blockchain applications with minimal effort.
## Ready to Use Components BuilderKit offers a comprehensive set of components that handle common Web3 functionalities: * **Control Components**: Buttons, forms, and wallet connection interfaces * **Identity Components**: Address displays and domain name resolution * **Token Components**: Balance displays, inputs, and price conversions * **Input Components**: Specialized form inputs for Web3 data types * **Chain Components**: Network selection and chain information displays * **Transaction Components**: Transaction submission and status tracking * **Collectibles Components**: NFT displays and collection management ## Powerful Hooks BuilderKit provides hooks for seamless integration with Avalanche's ecosystem: ### Blockchain Interaction Access and manage blockchain data, tokens, and cross-chain operations with hooks for chains, tokens, DEX interactions, and inter-chain transfers. ### Precompile Integration Easily integrate with Avalanche's precompiled contracts for access control, fee management, native minting, rewards, and cross-chain messaging. ## Getting Started Get started quickly by installing BuilderKit in your React application: ```bash npm install @avalabs/builderkit # or yarn add @avalabs/builderkit ``` Check out our [Getting Started](/docs/builderkit/getting-started) guide to begin building your Web3 application. # Token Configuration URL: /docs/builderkit/tokens Guide for configuring tokens in BuilderKit flows. # Token Configuration BuilderKit flows require proper token configuration to function correctly. This guide explains the required fields for different token configurations. 
## Basic Token Structure All tokens in BuilderKit share a common base structure with these required fields: ```typescript interface BaseToken { // Contract address of the token, use "native" for native chain token address: string; // Human-readable name of the token name: string; // Token symbol/ticker symbol: string; // Number of decimal places the token uses decimals: number; // ID of the chain where this token exists chain_id: number; } ``` ## ICTT Token Fields ICTT tokens extend the base structure with additional fields for cross-chain functionality: ```typescript interface ICTTToken extends BaseToken { // Whether this token can be used with ICTT supports_ictt: boolean; // Address of the contract that handles transfers transferer?: string; // Whether this token instance is a transferer is_transferer?: boolean; // Information about corresponding tokens on other chains mirrors: { // Contract address of the mirrored token address: string; // Transferer contract on the mirror chain transferer: string; // Chain ID where the mirror exists chain_id: number; // Decimal places of the mirrored token decimals: number; // Whether this is the home/original chain home?: boolean; }[]; } ``` ## Field Requirements ### Base Token Fields * `address`: Must be a valid contract address or "native" * `name`: Should be human-readable * `symbol`: Should match the token's trading symbol * `decimals`: Must match the token's contract configuration * `chain_id`: Must be a valid chain ID ### ICTT-Specific Fields * `supports_ictt`: Required for ICTT functionality * `transferer`: Required if token supports ICTT * `is_transferer`: Optional, indicates if token is a transferer * `mirrors`: Required for ICTT, must contain at least one mirror configuration ### Mirror Configuration Fields * `address`: Required, contract address on mirror chain * `transferer`: Required, transferer contract on mirror chain * `chain_id`: Required, must be different from token's chain\_id * `decimals`: Required, must 
match the token contract
* `home`: Optional, indicates the original/home chain

# ACP-103: Dynamic Fees

URL: /docs/acps/103-dynamic-fees

Details for Avalanche Community Proposal 103: Dynamic Fees

| ACP           | 103 |
| :------------ | :-- |
| **Title**     | Add Dynamic Fees to the P-Chain |
| **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)), Alberto Benegiamo ([@abi87](https://github.com/abi87)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)) |
| **Status**    | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/104)) |
| **Track**     | Standards |

## Abstract

Introduce a dynamic fee mechanism to the P-Chain, and preview a future transition to a multidimensional fee mechanism.

## Motivation

Blockchains are resource-constrained environments. Users are charged for the execution and inclusion of their transactions based on the blockchain's transaction fee mechanism. The mechanism should fluctuate based on the supply of and demand for said resources to serve as a deterrent against spam and denial-of-service attacks.

A fixed fee mechanism provides users with simplicity and predictability, but it does not take network congestion and resource constraints into account. There is no incentive for users to withhold transactions, since the cost is fixed regardless of demand, and the fee does not adjust the execution and inclusion cost of transactions toward the market clearing price.

The C-Chain, in [Apricot Phase 3](https://medium.com/avalancheavax/apricot-phase-three-c-chain-dynamic-fees-432d32d67b60), employs a dynamic fee mechanism that raises the price during periods of high demand and lowers it during periods of low demand. As the price gets too expensive, network utilization decreases, which drops the price.
This ensures the execution and inclusion fee of transactions closely matches the market clearing price. The P-Chain currently operates under a fixed fee mechanism. To more robustly handle spikes in load expected from introducing the improvements in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md), it should be migrated to a dynamic fee mechanism. The X-Chain also currently operates under a fixed fee mechanism. However, due to the current lower usage and lack of new feature introduction, the migration of the X-Chain to a dynamic fee mechanism is deferred to a later ACP to reduce unnecessary additional technical complexity. ## Specification ### Dimensions There are four dimensions that will be used to approximate the computational cost of, or "gas" consumed in, a transaction: 1. Bandwidth $B$ is the amount of network bandwidth used for transaction broadcast. This is set to the size of the transaction in bytes. 2. Reads $R$ is the number of state/database reads used in transaction execution. 3. Writes $W$ is the number of state/database writes used in transaction execution. 4. Compute $C$ is the total amount of compute used to verify and execute a transaction, measured in microseconds. The gas consumed $G$ in a transaction is: $G = B + 1000R + 1000W + 4C$ A future ACP could remove the merging of these dimensions to granularly meter usage of each resource in a multidimensional scheme. ### Mechanism This mechanism aims to maintain a target gas consumption $T$ per second and adjusts the fee based on the excess gas consumption $x$, defined as the difference between the current gas consumption and $T$. Prior to the activation of this mechanism, $x$ is initialized: $x = 0$ At the start of building/executing block $b$, $x$ is updated: $x = \max(x - T \cdot \Delta{t}, 0)$ Where $\Delta{t}$ is the number of seconds between $b$'s block timestamp and $b$'s parent's block timestamp. 
The gas price for block $b$ is: $M \cdot \exp\left(\frac{x}{K}\right)$ Where: * $M$ is the minimum gas price * $\exp\left(x\right)$ is an approximation of $e^x$ following the EIP-4844 specification ```python # Approximates factor * e ** (numerator / denominator) using Taylor expansion def fake_exponential(factor: int, numerator: int, denominator: int) -> int: i = 1 output = 0 numerator_accum = factor * denominator while numerator_accum > 0: output += numerator_accum numerator_accum = (numerator_accum * numerator) // (denominator * i) i += 1 return output // denominator ``` * $K$ is a constant to control the rate of change of the gas price After processing block $b$, $x$ is updated with the total gas consumed in the block $G$: $x = x + G$ Whenever $x$ increases by $K$, the gas price increases by a factor of `~2.7`. If the gas price gets too expensive, average gas consumption drops, and $x$ starts decreasing, dropping the price. The gas price constantly adjusts to make sure that, on average, the blockchain consumes $T$ gas per second. A [token bucket](https://en.wikipedia.org/wiki/Token_bucket) is employed to meter the maximum rate of gas consumption. Define $C$ as the capacity of the bucket, $R$ as the amount of gas to add to the bucket per second, and $r$ as the amount of gas currently in the bucket. Prior to the activation of this mechanism, $r$ is initialized: $r = 0$ At the beginning of processing block $b$, $r$ is set: $r = \min\left(r + R \cdot \Delta{t}, C\right)$ Where $\Delta{t}$ is the number of seconds between $b$'s block timestamp and $b$'s parent's block timestamp. The maximum gas consumed in a given $\Delta{t}$ is $r + R \cdot \Delta{t}$. The upper bound across all $\Delta{t}$ is $C + R \cdot \Delta{t}$. After processing block $b$, the total gas consumed in $b$, or $G$, will be known. If $G \gt r$, $b$ is considered an invalid block. 
If $b$ is a valid block, $r$ is updated:

$r = r - G$

A block gas limit does not need to be set as it is implicitly derived from $r$.

The parameters at activation are:

| Parameter                            | P-Chain Configuration |
| ------------------------------------ | --------------------- |
| $T$ - target gas consumed per second | 50,000                |
| $M$ - minimum gas price              | 1 nAVAX               |
| $K$ - gas price update constant      | 2,164,043             |
| $C$ - maximum gas capacity           | 1,000,000             |
| $R$ - gas capacity added per second  | 100,000               |

$K$ was chosen such that at sustained maximum capacity ($R = 100,000$ gas/second), the fee rate will double every \~30 seconds. As the network gains capacity to handle additional load, this algorithm can be tuned to increase the gas consumption rate.

#### A note on $e^x$

There is a subtle reason why an exponential adjustment function was chosen: the adjustment function should be *equally* reactive irrespective of the actual fee.

Define $b_n$ as the current block's gas fee, $b_{n+1}$ as the next block's gas fee, and $x$ as the excess gas consumption.

Let's use a linear adjustment function: $b_{n+1} = b_n + 10x$

Assume $b_n = 100$ and the current block is 1 unit above target utilization, or $x = 1$. Then, $b_{n+1} = 100 + 10 \cdot 1 = 110$, an increase of `10%`. If instead $b_n = 10,000$, $b_{n+1} = 10,000 + 10 \cdot 1 = 10,010$, an increase of `0.1%`. The fee is *less* reactive as the fee increases. This is because the absolute change *does not scale* with $b_n$.

Now, let's use an exponential adjustment function: $b_{n+1} = b_n \cdot e^x$

Assume $b_n = 100$ and the current block is 1 unit above target utilization, or $x = 1$. Then, $b_{n+1} = 100 \cdot e^1 \approx 271.828$, an increase of `171%`. If instead $b_n = 10,000$, $b_{n+1} = 10,000 \cdot e^1 \approx 27,182.8$, an increase of `171%` again. The fee is *equally* reactive as the fee increases. This is because the change *scales* with $b_n$.
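To make the mechanism concrete, here is a small Python sketch combining the excess-gas update rule with the `fake_exponential` pricing function from the specification, using the activation parameters from the table. The `SCALE` fixed-point factor and the `process_block` helper are illustration aids, not part of the specification:

```python
# Sketch of the ACP-103 gas price update with the P-Chain activation
# parameters. SCALE is an illustration aid (fixed-point precision for
# readable sub-nAVAX prices), not part of the spec.

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    # EIP-4844 Taylor-series approximation of factor * e**(numerator / denominator)
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

T = 50_000         # target gas consumed per second
M = 1              # minimum gas price (nAVAX)
K = 2_164_043      # gas price update constant
SCALE = 1_000_000  # illustration only

def gas_price(x):
    # Price for a block built with excess gas consumption x, in nAVAX * SCALE
    return fake_exponential(M * SCALE, x, K)

def process_block(x, dt, gas_used):
    # Leak excess at the target rate, price the block, then accumulate its usage
    x = max(x - T * dt, 0)
    price = gas_price(x)
    return x + gas_used, price

x = 0
x, p0 = process_block(x, dt=1, gas_used=100_000)  # consume 2x the target rate
x, p1 = process_block(x, dt=1, gas_used=100_000)
print(p0, p1)  # p1 > p0: sustained over-target usage raises the price
```

While blocks consume exactly $T$ gas per second, $x$ (and hence the price) stays flat; sustained consumption above target compounds the price exponentially, as in the $e^x$ discussion above.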
### Block Building Procedure When a transaction is constructed on the P-Chain, the amount of $AVAX burned is given by `sum($AVAX outputs) - sum($AVAX inputs)`. The amount of gas consumed by the transaction can be deterministically calculated after construction. Dividing the amount of $AVAX burned by the amount of gas consumed yields the maximum gas price that the transaction can pay. Instead of using a FIFO queue for the mempool (like the P-Chain does now), the mempool should use a priority queue ordered by the maximum gas price of each transaction. This ensures that higher paying transactions are included first. ## Backwards Compatibility Modification of a fee mechanism is an execution change and requires a mandatory upgrade for activation. Implementers must take care to not alter the execution behavior prior to activation. After this ACP is activated, any transaction issued on the P-Chain must account for the fee mechanism defined above. Users are responsible for reconstructing their transactions to include a larger fee for quicker inclusion when the fee increases. ## Reference Implementation ACP-103 was implemented into AvalancheGo behind the `Etna` upgrade flag. The full body of work can be found tagged with the `acp103` label [here](https://github.com/ava-labs/avalanchego/pulls?q=is%3Apr+label%3Aacp103). ## Security Considerations The current fixed fee mechanism on the X-Chain and P-Chain does not robustly handle spikes in load. Migrating the P-Chain to a dynamic fee mechanism will ensure that any additional load caused by demand for new P-Chain features (such as those introduced in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md)) is properly priced given allotted processing capacity. The X-Chain, in comparison, currently has significantly lower usage, making it less likely for the demand for blockspace on it to exceed the current static fee rates. 
If necessary or desired, a future ACP can reuse the mechanism introduced here to add dynamic fee rates to the X-Chain.

## Acknowledgements

Thank you to [@aaronbuchwald](https://github.com/aaronbuchwald) and [@patrick-ogrady](https://github.com/patrick-ogrady) for providing feedback prior to publication. Thank you to the authors of [EIP-4844](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-4844.md) for creating the fee design that inspired the above mechanism.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-108: EVM Event Importing

URL: /docs/acps/108-evm-event-importing

Details for Avalanche Community Proposal 108: EVM Event Importing

| ACP           | 108 |
| :------------ | :-- |
| **Title**     | EVM Event Importing Standard |
| **Author(s)** | Michael Kaplan ([@mkaplan13](https://github.com/mkaplan13)) |
| **Status**    | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/114)) |
| **Track**     | Best Practices Track |

## Abstract

Defines a standard smart contract interface and abstract implementation for importing EVM events from any blockchain within Avalanche using [Avalanche Warp Messaging](https://docs.avax.network/build/cross-chain/awm/overview).

## Motivation

The implementation of Avalanche Warp Messaging within `coreth` and `subnet-evm` exposes a [mechanism for getting authenticated hashes of blocks](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IWarpMessenger.sol#L43) that have been accepted on blockchains within Avalanche. Proofs of acceptance of blocks, such as those introduced in [ACP-75](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/75-acceptance-proofs), can be used to prove arbitrary events and state changes that occurred in those blocks.
However, there is currently no clear standard for using authenticated block hashes in smart contracts within Avalanche, making it difficult to build applications that leverage this mechanism. In order to make effective use of authenticated block hashes, contracts must be provided with encoded block headers that match the authenticated block hashes, as well as Merkle proofs that are verified against the state or receipts root contained in the block header. With a standard interface and abstract contract implementation that handles the authentication of block hashes and verification of Merkle proofs, smart contract developers on Avalanche will be able to much more easily create applications that leverage data from other Avalanche blockchains. These types of cross-chain applications do not require any direct interaction on the source chain.

## Specification

### Event Importing Interface

We propose that smart contracts importing EVM events emitted by other blockchains within Avalanche implement the following interface.

#### Methods

Imports the EVM event uniquely identified by the source blockchain ID, block header, transaction index, and log index. The `blockHeader` must be validated to match the authenticated block hash from the `sourceBlockchainID`. The specification for EVM block headers can be found [here](https://github.com/ava-labs/subnet-evm/blob/master/core/types/block.go#L73). The `txIndex` identifies the key in the receipts trie of the given block header for which the `receiptProof` must prove inclusion. The value obtained by verifying the `receiptProof` for that key is the encoded transaction receipt. The specification for EVM transaction receipts can be found [here](https://github.com/ava-labs/subnet-evm/blob/master/core/types/receipt.go#L62). The `logIndex` identifies which event log from the given transaction receipt is to be imported. Must emit an `EventImported` event upon success.
```solidity
function importEvent(
    bytes32 sourceBlockchainID,
    bytes calldata blockHeader,
    uint256 txIndex,
    bytes[] calldata receiptProof,
    uint256 logIndex
) external;
```

This interface does not require that the Warp precompile is used to authenticate block hashes. Implementations could:

* Use the Warp precompile to authenticate block hashes provided directly in the transaction calling `importEvent`.
* Check previously authenticated block hashes using an external contract. This:
  * Allows for a block hash to be authenticated once and used in arbitrarily many transactions afterwards.
  * Allows for alternative authentication mechanisms to be used, such as trusted oracles.

#### Events

Must trigger when an EVM event is imported.

```solidity
event EventImported(
    bytes32 indexed sourceBlockchainID,
    bytes32 indexed sourceBlockHash,
    address indexed loggerAddress,
    uint256 txIndex,
    uint256 logIndex
);
```

### Event Importing Abstract Contract

Applications importing EVM events emitted by other blockchains within Avalanche should be able to use a standard abstract implementation of the `importEvent` interface. This abstract implementation must handle:

* Authenticating block hashes from other chains.
* Verifying that the encoded `blockHeader` matches the imported block hash.
* Verifying the Merkle `receiptProof` for the given `txIndex` against the receipt root of the provided `blockHeader`.
* Decoding the event log identified by `logIndex` from the receipt obtained from verifying the `receiptProof`.

As noted above, implementations could directly use the Warp precompile's `getVerifiedWarpBlockHash` interface method for authenticating block hashes, as is done in the reference implementation [here](https://github.com/ava-labs/event-importer-poc/blob/main/contracts/src/EventImporter.sol#L51). Alternatively, implementations could use the `sourceBlockchainID` and `blockHeader` provided in the parameters to check with an external contract that the block has been accepted on the given chain.
The specifics of such an external contract are outside the scope of this ACP, but for illustrative purposes, this could look along the lines of:

```solidity
bool valid = blockHashRegistry.checkAuthenticatedBlockHash(
    sourceBlockchainID,
    keccak256(blockHeader)
);
require(valid, "Invalid block header");
```

Inheriting contracts should only need to define the logic to be executed when an event is imported. This is done by providing an implementation of the following internal function, called by `importEvent`.

```solidity
function _onEventImport(EVMEventInfo memory eventInfo) internal virtual;
```

Where the `EVMEventInfo` struct is defined as:

```solidity
struct EVMLog {
    address loggerAddress;
    bytes32[] topics;
    bytes data;
}

struct EVMEventInfo {
    bytes32 blockchainID;
    uint256 blockNumber;
    uint256 txIndex;
    uint256 logIndex;
    EVMLog log;
}
```

The `EVMLog` struct is meant to match the `Log` type definition in the EVM [here](https://github.com/ava-labs/subnet-evm/blob/master/core/types/log.go#L39).

## Reference Implementation

See the reference implementation on [GitHub here](https://github.com/ava-labs/event-importer-poc). In addition to implementing the interface and abstract contract described above, the reference implementation shows how transactions can be constructed to import events using Warp block hash signatures.

## Open Questions

See [here](https://github.com/ava-labs/event-importer-poc?tab=readme-ov-file#open-questions-and-considerations).

## Security Considerations

The correctness of a contract using block hashes to prove that a specific event was emitted within that block depends on the correctness of:

1. The mechanism for authenticating that a block hash was finalized on another blockchain.
2. The Merkle proof validation library used to prove that a specific transaction receipt was included in the given block.
For considerations on using Avalanche Warp Messaging to authenticate block hashes, see [here](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/30-avalanche-warp-x-evm#security-considerations). To improve confidence in the correctness of the Merkle proof validation used in implementations, well-audited and widely used libraries should be used.

## Acknowledgements

Using Merkle proofs to verify events/state against root hashes is not a new idea. Protocols such as [IBC](https://ibc.cosmos.network/v8/), [Rainbow Bridge](https://github.com/Near-One/rainbow-bridge), and [LayerZero](https://layerzero.network/publications/LayerZero_Whitepaper_V1.1.0.pdf), among others, have previously suggested using Merkle proofs in a similar manner.

Thanks to [@aaronbuchwald](https://github.com/aaronbuchwald) for proposing the `getVerifiedWarpBlockHash` interface be included in the AWM implementation within Avalanche EVMs, which enables this type of use case.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-113: Provable Randomness

URL: /docs/acps/113-provable-randomness

Details for Avalanche Community Proposal 113: Provable Randomness

| ACP           | 113 |
| :------------ | :-- |
| **Title**     | Provable Virtual Machine Randomness |
| **Author(s)** | Tsachi Herman ([@tsachiherman](http://github.com/tsachiherman)) |
| **Status**    | Stale ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/142)) |
| **Track**     | Standards |

## Future Work

This ACP was marked as stale due to its documented security concerns. In order to safely utilize randomness produced by this mechanism, the consumer of the randomness must:

1. Define a security threshold `x`, which is the maximum number of consecutive blocks that can be proposed by a malicious entity.
2.
After committing to a request for randomness, the consumer must wait for `x` blocks. 3. After waiting for `x` blocks, the consumer must verify that the randomness was not biased during the `x` blocks. 4. If the randomness was biased, it would be insufficient to request randomness again, as this would allow the malicious block producer to discard any randomness that it did not like. If using the randomness mechanism proposed in this ACP, the consumer of the randomness must be able to terminate the request for randomness in such a way that no participant would desire the outcome. Griefing attacks would likely result from such a construction. ### Alternative Mechanisms There are alternative mechanisms that would not result in such security concerns, such as: * Utilizing a deterministic threshold signature scheme to finalize a block in consensus would allow the threshold signature to be used during the execution of the block. * Utilizing threshold commit-reveal schemes that guarantee that committed values will always be revealed in a timely manner. However, these mechanisms are likely too costly to be introduced into the Avalanche Primary Network due to its validator set size. It is left to a future ACP to specify the implementation of one of these alternative schemes for L1 networks with smaller sized validator sets. ## Abstract Avalanche offers developers flexibility through subnets and EVM-compatible smart contracts. However, the platform's deterministic block execution limits the use of traditional random number generators within these contracts. To address this, a mechanism is proposed to generate verifiable, non-cryptographic random number seeds on the Avalanche platform. This method ensures uniformity while allowing developers to build more versatile applications. ## Motivation Reliable randomness is essential for building exciting applications on Avalanche. 
Games, participant selection, dynamic content, supply chain management, and decentralized services all rely on unpredictable outcomes to function fairly. Randomness also fuels functionalities like unique identifiers and simulations. Without a secure way to generate random numbers within smart contracts, Avalanche applications become limited.

Avalanche's traditional reliance on external oracles for randomness creates complexity and bottlenecks. These oracles inflate costs, hinder transaction speed, and are cumbersome to integrate. As Avalanche scales to more Subnets, this dependence on external systems becomes increasingly unsustainable.

A solution for verifiable random number generation within Avalanche solves these problems. It provides fair randomness functionality across the chains, at no additional cost. This paves the way for a more efficient Avalanche ecosystem.

## Specification

### Changes Summary

The existing Avalanche protocol breaks block building into two parts: external and internal. The external block is the Snowman++ block, whereas the internal block is the actual virtual machine block.

To support randomness, a BLS-based VRF implementation is used that recursively signs its own signatures as its message. Since BLS signatures are deterministic, they provide a reliable way to construct a VRF. For proposers that do not have a BLS key associated with their node, the hash of the signature from the previous round is used in place of their signature. In order to bootstrap the signature chain, a missing signature would be replaced with a byte slice that is the hash product of a verifiable and trustable seed.

The changes proposed here would affect the way blocks are validated. Therefore, when this change gets implemented, it needs to be deployed as a mandatory upgrade.
```
+-----------------------+          +-----------------------+
|        Block n        | <------- |       Block n+1       |
+-----------------------+          +-----------------------+
|      VRF-Sig(n)       |          |     VRF-Sig(n+1)      |
|          ...          |          |          ...          |
+-----------------------+          +-----------------------+

+-----------------------+          +-----------------------+
|         VM n          |          |        VM n+1         |
+-----------------------+          +-----------------------+
|      VRF-Out(n)       |          |     VRF-Out(n+1)      |
+-----------------------+          +-----------------------+

VRF-Sig(n+1) = Sign(VRF-Sig(n), Block n+1 proposer's BLS key)
VRF-Out(n) = Hash(VRF-Sig(n))
```

### Changes Details

#### Step 1. Adding BLS signature to proposed blocks

```go
type statelessUnsignedBlock struct {
	…
	vrfSig []byte `serialize:"true"`
}
```

#### Step 2. Populate signature

When a block proposer attempts to build a new block, it needs to use the parent block as a reference. The `vrfSig` field within each block is daisy-chained to the `vrfSig` field of its parent block.

Populating the `vrfSig` field would follow this logic:

1. The current proposer has a BLS key
   a. If the parent block has an empty `vrfSig` signature, the proposer would sign the bootStrappingBlockSignature with its BLS key. See the bootStrappingBlockSignature details below. This is the base case.
   b. If the parent block does not have an empty `vrfSig` signature, that signature would be signed using the proposer's BLS key.
2. The current proposer does not have a BLS key
   a. If the parent block has a non-empty `vrfSig` signature, the proposer would set the proposed block's `vrfSig` to the 32-byte hash result of the following preimage:

```
+-------------------------+----------+------------+
| prefix :                | [8]byte  | "rng-derv" |
+-------------------------+----------+------------+
| vrfSig :                | [96]byte | 96 bytes   |
+-------------------------+----------+------------+
```

   b. If the parent block has an empty `vrfSig` signature, the proposer would leave the `vrfSig` on the new block empty.
The bootStrappingBlockSignature that would be used above is the hash of the following preimage:

```
+-----------------------+----------+------------+
| prefix :              | [8]byte  | "rng-root" |
+-----------------------+----------+------------+
| networkID:            | uint32   | 4 bytes    |
+-----------------------+----------+------------+
| chainID :             | [32]byte | 32 bytes   |
+-----------------------+----------+------------+
```

#### Step 3. Signature Verification

This signature verification would perform the exact opposite of what was done in step 2, and would verify the cryptographic correctness of the operation. Validating the `vrfSig` would follow this logic:

1. The proposer has a BLS key

   a. If the parent block's `vrfSig` was non-empty, then the `vrfSig` in the proposed block is verified to be a valid BLS signature of the parent block's `vrfSig` value for the proposer's BLS public key.

   b. If the parent block's `vrfSig` was empty, then a BLS signature verification of the proposed block's `vrfSig` against the proposer's BLS public key and bootStrappingBlockSignature would take place.

2. The proposer does not have a BLS key

   a. If the parent block had a non-empty `vrfSig`, then the hash of the preimage (as described above) would be compared against the proposed `vrfSig`.

   b. If the parent block has an empty `vrfSig`, then the proposer's `vrfSig` would be validated to be empty.

#### Step 4. Extract the VRF Out and pass to block builders

Calculating the VRF Out would be done by hashing the preimage of the following struct:

```
+-----------------------+----------+------------+
| prefix :              | [8]byte  | "vrfout "  |
+-----------------------+----------+------------+
| vrfout:               | [96]byte | 96 bytes   |
+-----------------------+----------+------------+
```

Before calculating the VRF Out, the method needs to explicitly check the case where the `vrfSig` is empty. In that case, the output of the VRF Out needs to be empty as well.
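The hashing rules above can be sketched end to end. This is an illustrative Go sketch, not the AvalancheGo implementation: the function names are hypothetical, SHA-256 is assumed as the 32-byte hash, and the BLS signing path (which requires a BLS library) is omitted.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// bootstrapSignature hashes the "rng-root" preimage used as the base of the
// signature chain (Step 2, case 1a). SHA-256 stands in for the 32-byte hash.
func bootstrapSignature(networkID uint32, chainID [32]byte) []byte {
	pre := make([]byte, 0, 8+4+32)
	pre = append(pre, []byte("rng-root")...) // 8-byte prefix
	var nid [4]byte
	binary.BigEndian.PutUint32(nid[:], networkID)
	pre = append(pre, nid[:]...)     // 4-byte network ID
	pre = append(pre, chainID[:]...) // 32-byte chain ID
	h := sha256.Sum256(pre)
	return h[:]
}

// deriveNoKeySig derives the vrfSig for a proposer without a BLS key
// (Step 2, case 2a): the hash of the "rng-derv" prefix plus the parent vrfSig.
func deriveNoKeySig(parentSig []byte) []byte {
	h := sha256.Sum256(append([]byte("rng-derv"), parentSig...))
	return h[:]
}

// vrfOut extracts the randomness handed to the VM (Step 4): empty when the
// vrfSig is empty, otherwise the hash of the "vrfout " prefix plus the vrfSig.
func vrfOut(vrfSig []byte) []byte {
	if len(vrfSig) == 0 {
		return nil
	}
	h := sha256.Sum256(append([]byte("vrfout "), vrfSig...))
	return h[:]
}

func main() {
	var chainID [32]byte
	boot := bootstrapSignature(1, chainID)
	fmt.Println(len(boot), len(deriveNoKeySig(make([]byte, 96))), len(vrfOut(boot)))
}
```

Note how the empty-`vrfSig` check in `vrfOut` mirrors the explicit requirement at the end of Step 4: an unpopulated chain must yield an empty VRF-Out rather than a hash of garbage.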
## Backwards Compatibility

The above design takes backward compatibility into consideration. The chain would keep working as before and, at some point, would have the newly added `vrfSig` populated. From a usage perspective, each VM would need to make its own decision on whether it should use the newly provided random seed. Initially, this random seed would be all zeros, and it would get populated once the feature has rolled out to a sufficient number of nodes. Also, as mentioned in the summary, these changes would necessitate a network upgrade.

## Reference Implementation

A full reference implementation has not been provided yet. It will be provided once this ACP is considered `Implementable`.

## Security Considerations

Virtual machine random seeds, while appearing to offer a source of randomness within smart contracts, fall short when it comes to cryptographic security. Here's a breakdown of the critical issues:

* Limited Permutation Space: The number of possible random values is derived from the number of validators. While no validator, nor a validator set, would be able to manipulate the randomness into any single value, nefarious actors might be able to exclude specific numbers.
* Predictability Window: The seed value might be accessible to other parties before the smart contract can benefit from its uniqueness. This predictability window creates a vulnerability. An attacker could potentially observe the seed generation process and predict the sequence of "random" numbers it will produce, compromising the entire cryptographic foundation of the smart contract.

Despite these limitations appearing severe, attackers face significant hurdles to exploit them. First, the attacker can't control the random number, limiting the attack's effectiveness to how that number is used. Second, a substantial amount of AVAX is needed. And last, such an attack would likely decrease AVAX's value, hurting the attacker financially.
One potential attack vector involves collusion among multiple proposers to manipulate the random number selection. These attackers could strategically choose to propose or abstain from proposing blocks, effectively introducing a bias into the system. By working together, they could potentially increase their chances of generating a random number favorable to their goals. However, the effectiveness of this attack is significantly limited for the following reasons:

* Limited options: While colluding attackers expand their potential random number choices, the overall pool remains immense (2^256 possibilities). This drastically reduces their ability to target a specific value.
* Protocol's countermeasure: The protocol automatically eliminates any bias introduced by previous proposals once an honest proposer submits their block.
* Detectability: Exploitation of this attack vector is readily identifiable. A successful attack necessitates coordinated collusion among multiple nodes to synchronize their proposer slots for a specific block height (the proposer slot order is known in advance). Subsequent to this alignment, a designated node constructs the block proposal. The network maintains a record of the proposer slot utilized for each block. A value of zero for the proposer slot unequivocally indicates the absence of an exploit. Increasing values correlate with a heightened risk of exploitation. It is important to note that non-zero slot numbers may also arise from transient network disturbances.

While this attack is theoretically possible, its practical impact is negligible due to the vast number of potential outcomes and the protocol's inherent safeguards.

## Open Questions

### How would the proposed changes impact the proposer selection and their inherent bias?

The proposed modifications will not influence the selection process for block proposers. Proposers retain the ability to determine which transactions are included in a block.
This inherent proposer bias remains unchanged and is unaffected by the proposed changes.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-118: Warp Signature Request

URL: /docs/acps/118-warp-signature-request

Details for Avalanche Community Proposal 118: Warp Signature Request

| ACP           | 118                                                                                    |
| :------------ | :------------------------------------------------------------------------------------- |
| **Title**     | Warp Signature Interface Standard                                                      |
| **Author(s)** | Cam Schultz ([@cam-schultz](https://github.com/cam-schultz))                           |
| **Status**    | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/123)) |
| **Track**     | Best Practices Track                                                                   |

## Abstract

Proposes a standard [AppRequest](https://github.com/ava-labs/avalanchego/blob/master/proto/p2p/p2p.proto#L385) payload format type for requesting Warp signatures for the provided bytes, such that signatures may be requested in a VM-agnostic manner. To make this concrete, this standard type should be defined in AvalancheGo such that VMs can import it at the source code level. This will simplify signature aggregator implementations by allowing them to depend only on AvalancheGo for message construction, rather than individual VM codecs.

## Motivation

Warp message signatures consist of an aggregate BLS signature composed of the individual signatures of a subnet's validators. Individual signatures need to be retrievable by the party that wishes to construct an aggregate signature. At present, this is left to VMs to implement, as is the case with [Subnet EVM](https://github.com/ava-labs/subnet-evm/blob/v0.6.7/plugin/evm/message/signature_request.go#20) and [Coreth](https://github.com/ava-labs/coreth/blob/v0.13.6-rc.0/plugin/evm/message/signature_request.go#L20). This creates friction in applications that are intended to operate across many VMs (or distinct implementations of the same VM).
As an example, the reference Warp message relayer implementation, [awm-relayer](https://github.com/ava-labs/awm-relayer), fetches individual signatures from validators and aggregates them before sending the Warp message to its destination chain for verification. However, Subnet EVM and Coreth have distinct codecs, requiring the relayer to [switch](https://github.com/ava-labs/awm-relayer/blob/v1.4.0-rc.0/relayer/application_relayer.go#L372) according to the target codebase. Another example is ACP-75, which aims to implement acceptance proofs using Warp. The signature aggregation mechanism is not [specified](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/75-acceptance-proofs/README.md#signature-aggregation), which is a blocker for that ACP to be marked implementable. Standardizing the Warp Signature Request interface by defining it as a format for `AppRequest` message payloads in AvalancheGo would simplify the implementation of ACP-75, and streamline signature aggregation for out-of-protocol services such as Warp message relayers. ## Specification We propose the following types, implemented as Protobuf types that may be decoded from the `AppRequest`/`AppResponse` `app_bytes` field. By way of example, this approach is currently used to [implement](https://github.com/ava-labs/avalanchego/blob/v1.11.10-status-removal/proto/sdk/sdk.proto#7) and [parse](https://github.com/ava-labs/avalanchego/blob/v1.11.10-status-removal/network/p2p/gossip/message.go#22) gossip `AppRequest` types. * `SignatureRequest` includes two fields. `message` specifies the payload that the returned signature should correspond to, namely a serialized unsigned Warp message. `justification` specifies arbitrary data that the requested node may use to decide whether or not it is willing to sign `message`. `justification` may not be required by every VM implementation, but `message` should always contain the bytes to be signed. 
It is up to the VM to define the validity requirements for the `message` and `justification` payloads.

```protobuf
message SignatureRequest {
    bytes message = 1;
    bytes justification = 2;
}
```

* `SignatureResponse` is the corresponding `AppResponse` type that returns the requested signature.

```protobuf
message SignatureResponse {
    bytes signature = 1;
}
```

### Handlers

For each of the above types, VMs must implement corresponding `AppRequest` and `AppResponse` handlers. The `AppRequest` handler should be [registered](https://github.com/ava-labs/avalanchego/blob/v1.11.10-status-removal/network/p2p/network.go#L173) using the canonical handler ID, defined as `2`.

## Use Cases

Generally speaking, `SignatureRequest` can be used to request a signature over a Warp message by serializing the unsigned Warp message into `message`, and populating `justification` as needed.

### Sign a known Warp Message

Subnet EVM and Coreth store messages that have been seen (i.e. on-chain messages sent through the [Warp Precompile](https://github.com/ava-labs/subnet-evm/tree/v0.6.7/precompile/contracts/warp) and [off-chain](https://github.com/ava-labs/subnet-evm/blob/v0.6.7/plugin/evm/config.go#L226) Warp messages) such that a signature over that message can be provided on request. `SignatureRequest` can be used for this case by specifying the Warp message in `message`. The queried node may then look up the Warp message in its database and return the signature. In this case, `justification` is not needed.

### Attest to an on-chain event

Subnet EVM and Coreth also support attesting to block hashes via Warp, by serving signature requests made using the following `AppRequest` type:

```
type BlockSignatureRequest struct {
    BlockID ids.ID
}
```

`SignatureRequest` can achieve this by specifying an unsigned Warp message with the `BlockID` as the payload, and serializing that message into `message`.
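Because both fields are plain `bytes`, a client needs no VM-specific codec to assemble such a request. The following Go sketch hand-encodes the two-field Protobuf wire format purely for illustration; a real client should use AvalancheGo's generated Protobuf bindings. `serializedWarpMsg` stands in for an actual serialized unsigned Warp message, and the encoder assumes payloads shorter than 128 bytes so each length fits in a single varint byte.

```go
package main

import "fmt"

// encodeSignatureRequest builds the protobuf wire encoding of
// SignatureRequest{message, justification} by hand for payloads < 128 bytes.
// Field 1 and field 2 are both length-delimited (wire type 2), so their tag
// bytes are 0x0A and 0x12 respectively. Illustrative only; use generated
// protobuf bindings in production code.
func encodeSignatureRequest(message, justification []byte) []byte {
	var out []byte
	if len(message) > 0 {
		out = append(out, 0x0A, byte(len(message))) // field 1 tag + length
		out = append(out, message...)
	}
	if len(justification) > 0 {
		out = append(out, 0x12, byte(len(justification))) // field 2 tag + length
		out = append(out, justification...)
	}
	return out
}

func main() {
	// Stand-in bytes; a real request would carry a serialized unsigned Warp
	// message whose payload is the BlockID. justification is left empty here.
	serializedWarpMsg := []byte("unsigned-warp-msg")
	fmt.Printf("% x\n", encodeSignatureRequest(serializedWarpMsg, nil))
}
```

Omitting `justification` entirely, as above, is valid Protobuf: absent `bytes` fields simply decode as empty on the queried node.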
`justification` may optionally be used to provide additional context, such as the block height of the given block ID.

### Confirm that an event did not occur

With [ACP-77](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets), Subnets will have the ability to manage their own validator sets. The Warp message payload contained in a `RegisterSubnetValidatorTx` includes an `expiry`, after which the specified validation ID (i.e. a unique hash over the Subnet ID, node ID, stake weight, and expiry) becomes invalid. The Subnet needs to know that this validation ID is expired so that it can keep its locally tracked validator set in sync with the P-Chain. We also assume that the P-Chain will not persist expired or invalid validation IDs. We can use `SignatureRequest` to construct a Warp message attesting that the validation ID expired. We do so by serializing an unsigned Warp message containing the validation ID into `message`, and providing the validation ID hash preimage in `justification` for the P-Chain to reconstruct the expired validation ID.

## Security Considerations

VMs have full latitude when implementing `SignatureRequest` handlers, and should take careful consideration of what `message` payloads their implementation should be willing to sign, given a `justification`. Some considerations include, but are not limited to:

* Input validation. Handlers should validate `message` and `justification` payloads to ensure that they decode to coherent types, and that they contain only expected data.
* Signature DoS. AvalancheGo's peer-to-peer networking stack implements message rate limiting to mitigate the risk of DoS, but VMs should also consider the cost of parsing and signing a `message` payload.
* Payload collision. `message` payloads should be implemented as distinct types that do not overlap with one another within the context of signed Warp messages from the VM.
For instance, a `message` payload specifying a 32-byte hash may be interpreted as a transaction hash, a block hash, or a blockchain ID.

## Backwards Compatibility

This change is backwards compatible for VMs, as nodes running older versions that do not support the new message types will simply drop incoming messages.

## Reference Implementation

A reference implementation containing the Protobuf types and the canonical handler ID can be found [here](https://github.com/ava-labs/avalanchego/pull/3218).

## Acknowledgements

Thanks to @joshua-kim, @iansuvak, @aaronbuchwald, @michaelkaplan13, and @StephenButtolph for discussion and feedback on this ACP.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-125: Basefee Reduction

URL: /docs/acps/125-basefee-reduction

Details for Avalanche Community Proposal 125: Basefee Reduction

| ACP           | 125                                                                                                                                   |
| :------------ | :------------------------------------------------------------------------------------------------------------------------------------ |
| **Title**     | Reduce C-Chain minimum base fee from 25 nAVAX to 1 nAVAX                                                                              |
| **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Darioush Jalali ([@darioush](https://github.com/darioush)) |
| **Status**    | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/127))                                                |
| **Track**     | Standards                                                                                                                             |

## Abstract

Reduce the minimum base fee on the Avalanche C-Chain from 25 nAVAX to 1 nAVAX.

## Motivation

With dynamic fees, the gas price is supposed to be the result of a continuous auction such that the consumed gas per second converges to the target gas usage per second. When dynamic fees were first introduced, safeguards were added to ensure the mechanism worked as intended, such as a relatively high minimum gas price and a maximum gas price. The maximum gas price has since been entirely removed. The minimum gas price has been reduced significantly.
However, the base fee is often observed pinned to this minimum. This shows that the minimum is higher than what the market demands, and is therefore artificially reducing network usage.

## Specification

The dynamic fee calculation currently enforces a minimum base fee of 25 nAVAX. This change proposes reducing the minimum base fee to 1 nAVAX upon the next network upgrade activation.

## Backwards Compatibility

This change modifies the consensus rules for the C-Chain and therefore requires a network upgrade.

## Reference Implementation

A draft implementation of this ACP for the coreth VM can be found [here](https://github.com/ava-labs/coreth/pull/604/files).

## Security Considerations

Lower gas costs may increase state bloat. However, we note that the dynamic fee algorithm responded appropriately during periods of high use (such as Dec. 2023), which gives reasonable confidence that enforcing a 25 nAVAX minimum fee is no longer necessary.

## Open Questions

N/A

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-13: Subnet Only Validators

URL: /docs/acps/13-subnet-only-validators

Details for Avalanche Community Proposal 13: Subnet Only Validators

| ACP               | 13                                                                                                     |
| :---------------- | :----------------------------------------------------------------------------------------------------- |
| **Title**         | Subnet-Only Validators (SOVs)                                                                          |
| **Author(s)**     | Patrick O'Grady ([contact@patrickogrady.xyz](mailto:contact@patrickogrady.xyz))                        |
| **Status**        | Stale                                                                                                  |
| **Track**         | Standards                                                                                              |
| **Superseded-By** | [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md) |

## Abstract

Introduce a new type of staker, Subnet-Only Validators (SOVs), that can validate an Avalanche Subnet and participate in Avalanche Warp Messaging (AWM) without syncing or becoming a Validator on the Primary Network.
Require SOVs to pay a refundable fee of 500 $AVAX on the P-Chain to register as a Subnet Validator instead of staking at least 2000 $AVAX, the minimum requirement to become a Primary Network Validator. Preview a future transition to Pay-As-You-Go Subnet Validation and \$AVAX-Augmented Subnet Security. *This ACP does not modify/deprecate the existing Subnet Validation semantics for Primary Network Validators.* ## Motivation Each node operator must stake at least 2000 $AVAX ($20k at the time of writing) to first become a Primary Network Validator before they qualify to become a Subnet Validator. Most Subnets aim to launch with at least 8 Subnet Validators, which requires staking 16000 $AVAX ($160k at time of writing). All Subnet Validators, to satisfy their role as Primary Network Validators, must also [allocate 8 AWS vCPU, 16 GB RAM, and 1 TB storage](https://github.com/ava-labs/avalanchego/blob/master/README.md#installation) to sync the entire Primary Network (X-Chain, P-Chain, and C-Chain) and participate in its consensus, in addition to whatever resources are required for each Subnet they are validating. Avalanche Warp Messaging (AWM), the native interoperability mechanism for the Avalanche Network, provides a way for Subnets to communicate with each other/C-Chain without a trusted intermediary. Any Subnet Validator must be able to register a BLS key and participate in AWM, otherwise a Subnet may not be able to generate a BLS Multi-Signature with sufficient participating stake. Regulated entities that are prohibited from validating permissionless, smart contract-enabled blockchains (like the C-Chain) can’t launch a Subnet because they can’t opt-out of Primary Network Validation. This deployment blocker prevents a large cohort of Real World Asset (RWA) issuers from bringing unique, valuable tokens to the Avalanche Ecosystem (that could move between C-Chain \<-> Subnets using AWM/Teleporter). 
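The upfront-cost gap described above is straightforward to quantify. A quick worked calculation using the figures from this ACP (8 validators at launch, 2000 $AVAX Primary Network minimum stake, 500 $AVAX refundable SOV lock):

```go
package main

import "fmt"

func main() {
	const (
		validators      = 8    // typical minimum Subnet launch size
		primaryMinStake = 2000 // $AVAX minimum stake per Primary Network Validator
		sovLock         = 500  // refundable $AVAX lock proposed for SOVs
	)
	fmt.Println("status quo:", validators*primaryMinStake, "AVAX staked upfront")
	fmt.Println("with SOVs: ", validators*sovLock, "AVAX locked (refundable)")
}
```

This reproduces the 16000 $AVAX figure from the paragraph above and shows the proposed alternative at a quarter of that commitment, with the lock refundable rather than staked.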
A widely validated Subnet that is not properly metered could destabilize the Primary Network if usage spikes unexpectedly. Underprovisioned Primary Network Validators running such a Subnet may exit with an OOM exception, see degraded disk performance, or find it difficult to allocate CPU time to P/X/C-Chain validation. The inverse also holds for Subnets with the Primary Network (where some undefined behavior could bring a Subnet offline). Although the fee paid to the Primary Network to operate a Subnet does not go up with the amount of activity on the Subnet, the fixed, upfront cost of setting up a Subnet Validator on the Primary Network deters new projects that prefer smaller, even variable, costs until demand is observed. *Unlike L2s that pay some increasing fee (usually denominated in units per transaction byte) to an external chain for data availability and security as activity scales, Subnets provide their own security/data availability and the only cost operators must pay from processing more activity is the hardware cost of supporting additional load.* Elastic Subnets allow any community to weight Subnet Validation based on some staking token and reward Subnet Validators with high uptime with said staking token. However, there is no way for \$AVAX holders on the Primary Network to augment the security of such Subnets. ## Specification ### Required Changes 1. Introduce a new type of staker, Subnet-Only Validators (SOVs), that can validate an Avalanche Subnet and participate in Avalanche Warp Messaging (AWM) without syncing or becoming a Validator on the Primary Network 2. Introduce a refundable fee (called a "lock") of 500 \$AVAX that nodes must pay to become an SOV 3. Introduce a non-refundable fee of 0.1 \$AVAX that SOVs must pay to become an SOV 4. Introduce a new transaction type on the P-Chain to register as an SOV (i.e. `AddSubnetOnlyValidatorTx`) 5. 
Add a mode to Avalanche Network Clients (ANCs) that allows SOVs to optionally disable full Primary Network verification (only need to verify P-Chain)
6. ANCs track IPs for SOVs to ensure Subnet Validators can find peers whether or not they are Primary Network Validators
7. Provide a guaranteed rate limiting allowance for SOVs like Primary Network Validators

Because SOVs do not validate the Primary Network, they will not be rewarded with $AVAX for "locking" the 500 $AVAX required to become an SOV. This enables people interested in validating Subnets to opt for a lower upfront $AVAX commitment and lower infrastructure costs instead of $AVAX rewards. Additionally, SOVs will only be required to sync the P-Chain (not the X/C-Chain) to track any validator set changes in their Subnet and to support Cross-Subnet communication via AWM (see "Primary Network Partial Sync" mode introduced in [Cortina 8](https://github.com/ava-labs/avalanchego/releases/tag/v1.10.8)). The lower resource requirement in this "minimal mode" will provide Subnets with greater flexibility of validation hardware requirements, as operators are not required to reserve any resources for C-Chain/X-Chain operation. If an SOV wishes to sync the entire Primary Network, they still can.

### Future Work

The previously described specification is a minimal, additive change to Subnet Validation semantics that prepares the Avalanche Network for a more flexible Subnet model. It alone, however, fails to communicate this flexibility, nor does it provide an alternative use of \$AVAX that would have otherwise been used to create Subnet Validators. Below are two high-level ideas (Pay-As-You-Go Subnet Validation Registration Fees and \$AVAX-Augmented Security) that highlight how this initial change could be extended in the future. If the Avalanche Community is interested in their adoption, they should each be proposed as a unique ACP where they can be properly specified.
**These ideas are only suggestions for how the Avalanche Network could be modified in the future if this ACP is adopted. Supporting this ACP does not require supporting these ideas or committing to their rollout.** #### Pay-As-You-Go Subnet Validation Registration Fees *Transition Subnet Validator registration to a dynamically priced, continuously charged fee (that doesn't require locking large amounts of \$AVAX upfront).* While it would be possible to just transition to a lower required "lock" amount, many think that it would be more competitive to transition to a dynamically priced, continuous payment mechanism to register as a Subnet Validator. This new mechanism would target some $Y nAVAX fee that would be paid by each Subnet Validator per Subnet per second (pulling from a "Subnet Validator's Account") instead of requiring a large upfront lockup of $AVAX. The rate of nAVAX/second should be set by the demand for validating Subnets on Avalanche compared to some usage target per Subnet and across all Subnets. This rate should be locked for each Subnet Validation period to ensure operators are not subject to surprise costs if demand rises significantly over time. The optimization work outlined in [BLS Multi-Signature Voting](https://hackmd.io/@patrickogrady/100k-subnets#How-will-BLS-Multi-Signature-uptime-voting-work) should allow the min rate to be set as low as \~512-4096 nAVAX/second (or 1.3-10.6 \$AVAX/month). Fees paid to the Avalanche Network for PAYG could be burned, like all other P-Chain, X-Chain, and C-Chain transactions, or they could be partially rewarded to Primary Network Validators as a "boost" over the existing staking rewards. The nice byproduct of the latter approach is that it better aligns Primary Network Validators with the growth of Subnets. #### \$AVAX-Augmented Subnet Security *Allow pledging unstaked $AVAX to Subnet Validators on Elastic Subnets that can be slashed if said Subnet Validator commits an attributable fault (i.e. 
proposes/signs conflicting blocks/AWM payloads). Reward locked $AVAX associated with Subnet Validators that were not slashed with Elastic Subnet staking rewards.*

Currently, the only way to secure an Elastic Subnet is to stake its custom staking token (defined in the `TransformSubnetTx`). Many have requested the option to use $AVAX for this token; however, this could easily allow an adversary to take over small Elastic Subnets (where the amount of $AVAX staked may be much less than the circulating supply). $AVAX-Augmented Subnet Security would allow anyone holding $AVAX to lock it to specific Subnet Validators and earn Elastic Subnet reward tokens for supporting honest participants.

Recall, all stake management on the Avalanche Network (even for Subnets) occurs on the P-Chain. Thus, staked tokens ($AVAX and/or custom staking tokens used in Elastic Subnets) and stake weights (used for AWM verification) are secured by the full $AVAX stake of the Primary Network. $AVAX-Augmented Subnet Security, like staking, would be implemented on the P-Chain and enjoy the full security of the Primary Network. This approach means locking $AVAX occurs on the Primary Network (no need to transfer \$AVAX to a Subnet, which may not be secured by meaningful value yet) and proofs of malicious behavior are processed on the Primary Network (a colluding Subnet could otherwise choose not to process a proof that would lead to their "lockers" being slashed).

*This native approach is comparable to the idea of using $ETH to secure DA on [EigenLayer](https://www.eigenlayer.xyz/) (without reusing stake) or $BTC to secure Cosmos Zones on [Babylon](https://babylonchain.io/) (but not using an external ecosystem).*

## Backwards Compatibility

* Existing Subnet Validation semantics for Primary Network Validators are not modified by this ACP. This means that all existing Subnet Validators can continue validating both the Primary Network and whatever Subnets they are validating.
This change would just provide a new option for Subnet Validators that allows them to sacrifice their staking rewards for a smaller upfront \$AVAX commitment and lower infrastructure costs. * Support for this ACP would require adding a new transaction type to the P-Chain (i.e. `AddSubnetOnlyValidatorTx`). This new transaction is an execution-breaking change that would require a mandatory Avalanche Network upgrade to activate. ## Reference Implementation A full implementation will be provided once this ACP is considered `Implementable`. However, some initial ideas are presented below. ### `AddSubnetOnlyValidatorTx` ```text type AddSubnetOnlyValidatorTx struct { // Metadata, inputs and outputs BaseTx `serialize:"true"` // Describes the validator // The NodeID included in [Validator] must be the Ed25519 public key. Validator `serialize:"true" json:"validator"` // ID of the subnet this validator is validating Subnet ids.ID `serialize:"true" json:"subnetID"` // [Signer] is the BLS key for this validator. // Note: We do not enforce that the BLS key is unique across all validators. // This means that validators can share a key if they so choose. 
// However, a NodeID does uniquely map to a BLS key Signer signer.Signer `serialize:"true" json:"signer"` // Where to send locked tokens when done validating LockOuts []*avax.TransferableOutput `serialize:"true" json:"lock"` // Where to send validation rewards when done validating ValidatorRewardsOwner fx.Owner `serialize:"true" json:"validationRewardsOwner"` // Where to send delegation rewards when done validating DelegatorRewardsOwner fx.Owner `serialize:"true" json:"delegationRewardsOwner"` // Fee this validator charges delegators as a percentage, times 10,000 // For example, if this validator has DelegationShares=300,000 then they // take 30% of rewards from delegators DelegationShares uint32 `serialize:"true" json:"shares"` } ``` *`AddSubnetOnlyValidatorTx` is almost the same as [`AddPermissionlessValidatorTx`](https://github.com/ava-labs/avalanchego/blob/638000c42e5361e656ffbc27024026f6d8f67810/vms/platformvm/txs/add_permissionless_validator_tx.go#L33-L58), the only exception being that `StakeOuts` are now `LockOuts`.* ### `GetSubnetPeers` To support tracking SOV IPs, a new message should be added to the P2P specification that allows Subnet Validators to request the IP of all peers a node knows about on a Subnet (these Signed IPs won't be gossiped like they are for Primary Network Validators because they don't need to be known by the entire Avalanche Network): ```text message GetSubnetPeers { bytes subnet_id = 1; } ``` *It would be a nice addition if a bloom filter could also be provided here so that an ANC only sends IPs of peers that the original sender does not know.* ANCs should respond to this incoming message with a [`PeerList` message](https://github.com/ava-labs/avalanchego/blob/638000c42e5361e656ffbc27024026f6d8f67810/proto/p2p/p2p.proto#L135-L148). 
## Security Considerations

* Any Subnet Validator running in "Partial Sync Mode" will not be able to verify Atomic Imports on the P-Chain and will rely entirely on Primary Network consensus to only accept valid P-Chain blocks.
* High-throughput Subnets will be better isolated from the Primary Network and should improve its resilience (i.e. surges of traffic on some Subnet cannot destabilize a Primary Network Validator).
* Avalanche Network Clients (ANCs) must track IPs and provide allocated bandwidth for SOVs even though they are not Primary Network Validators.

## Open Questions

* To help orient the Avalanche Community around this wide-ranging, and likely long-running, conversation about the relationship between the Primary Network and Subnets, should we come up with a project name to describe the effort? I've been casually referring to all of these things as the *Astra Upgrade Track*, but that is definitely up for discussion (it may be more confusing than it is worth).

## Appendix

A draft of this ACP was posted in the ["Ideas" Discussion Board](https://github.com/avalanche-foundation/ACPs/discussions/10#discussioncomment-7373486), as suggested by the [ACP README](https://github.com/avalanche-foundation/ACPs#step-1-post-your-idea-to-github-discussions). Feedback on this draft was collected and addressed on both the "Ideas" Discussion Board and on [HackMD](https://hackmd.io/@patrickogrady/100k-subnets#Feedback-to-Draft-Proposal).

## Acknowledgements

Thanks to @luigidemeo1, @stephenbuttolph, @aaronbuchwald, @dhrubabasu, and @abi87 for their feedback on these ideas.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-131: Cancun Eips

URL: /docs/acps/131-cancun-eips

Details for Avalanche Community Proposal 131: Cancun Eips

| ACP           | 131                                                                                                              |
| :------------ | :--------------------------------------------------------------------------------------------------------------- |
| **Title**     | Activate Cancun EIPs on C-Chain and Subnet-EVM chains                                                            |
| **Author(s)** | Darioush Jalali ([@darioush](https://github.com/darioush)), Ceyhun Onur ([@ceyonur](https://github.com/ceyonur)) |
| **Status**    | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/139))                           |
| **Track**     | Standards, Subnet                                                                                                |

## Abstract

Enable new EVM opcodes and opcode changes in accordance with the following EIPs on the Avalanche C-Chain and Subnet-EVM chains:

* [EIP-4844: BLOBHASH opcode](https://eips.ethereum.org/EIPS/eip-4844)
* [EIP-7516: BLOBBASEFEE opcode](https://eips.ethereum.org/EIPS/eip-7516)
* [EIP-1153: Transient storage](https://eips.ethereum.org/EIPS/eip-1153)
* [EIP-5656: MCOPY opcode](https://eips.ethereum.org/EIPS/eip-5656)
* [EIP-6780: SELFDESTRUCT only in same transaction](https://eips.ethereum.org/EIPS/eip-6780)

Note that blob transactions from EIP-4844 are excluded, and blocks containing them will still be considered invalid.

## Motivation

The listed EIPs were activated on Ethereum mainnet as part of the [Cancun upgrade](https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/cancun.md#included-eips). This proposal is to activate them on the Avalanche C-Chain in the next network upgrade, to maintain compatibility with upstream EVM tooling, infrastructure, and developer experience (e.g., Solidity compiler defaults >= [0.8.25](https://github.com/ethereum/solidity/releases/tag/v0.8.25)). Additionally, it recommends the activation of the same EIPs on Subnet-EVM chains.

## Specification & Reference Implementation

The opcodes (EVM execution modifications) and block header modifications should be adopted as specified in the EIPs themselves.
Other changes such as enabling new transaction types or mempool modifications are not in scope (specifically, blob transactions from EIP-4844 are excluded and blocks containing them are considered invalid). ANCs (Avalanche Network Clients) can adopt the implementation as specified in the [coreth](https://github.com/ava-labs/coreth) repository, which was adopted from the [go-ethereum v1.13.8](https://github.com/ethereum/go-ethereum/releases/tag/v1.13.8) release in this [PR](https://github.com/ava-labs/coreth/pull/550). In particular, note the following code: * [Activation of new opcodes](https://github.com/ava-labs/coreth/blob/7b875dc21772c1bb9e9de5bc2b31e88c53055e26/core/vm/jump_table.go#L93) * Activation of Cancun in the next Avalanche upgrade: * [C-Chain](https://github.com/ava-labs/coreth/pull/610) * [Subnet-EVM chains](https://github.com/ava-labs/subnet-evm/blob/fa909031ed148484c5072d949c5ed73d915ce1ed/params/config_extra.go#L186) * `ParentBeaconRoot` is enforced to be included and to be the zero value [here](https://github.com/ava-labs/coreth/blob/7b875dc21772c1bb9e9de5bc2b31e88c53055e26/plugin/evm/block_verification.go#L287-L288). This field is retained for future use and compatibility with upstream tooling. * Forbids blob transactions by enforcing `BlobGasUsed` to be 0 [here](https://github.com/ava-labs/coreth/pull/611/files#diff-532a2c6a5365d863807de5b435d8d6475552904679fd611b1b4b10d3bf4f5010R267). *Note:* Subnets are sovereign with regard to their validator sets and state transition rules, and can choose to opt out of this proposal by making a code change in their respective Subnet-EVM client. ## Backwards Compatibility The original EIP authors highlighted the following considerations. For full details, refer to the original EIPs: * [EIP-4844](https://eips.ethereum.org/EIPS/eip-4844#backwards-compatibility): Blob transactions are not proposed to be enabled on Avalanche, so concerns related to mempool or transaction data availability are not applicable. 
* [EIP-6780](https://eips.ethereum.org/EIPS/eip-6780#backwards-compatibility) "Contracts that depended on re-deploying contracts at the same address using CREATE2 (after a SELFDESTRUCT) will no longer function properly if the created contract does not call SELFDESTRUCT within the same transaction." Adoption of this ACP modifies consensus rules for the C-Chain, therefore it requires a network upgrade. It is recommended that Subnet-EVM chains also adopt this ACP and follow the same upgrade time as Avalanche's next network upgrade. ## Security Considerations Refer to the original EIPs for security considerations: * [EIP 1153](https://eips.ethereum.org/EIPS/eip-1153#security-considerations) * [EIP 4788](https://eips.ethereum.org/EIPS/eip-4788#security-considerations) * [EIP 4844](https://eips.ethereum.org/EIPS/eip-4844#security-considerations) * [EIP 5656](https://eips.ethereum.org/EIPS/eip-5656#security-considerations) * [EIP 6780](https://eips.ethereum.org/EIPS/eip-6780#security-considerations) * [EIP 7516](https://eips.ethereum.org/EIPS/eip-7516#security-considerations) ## Open Questions No open questions. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). 
# ACP-151: Use Current Block Pchain Height As Context URL: /docs/acps/151-use-current-block-pchain-height-as-context Details for Avalanche Community Proposal 151: Use Current Block Pchain Height As Context | ACP | 151 | | :------------ | :------------------------------------------------------------------------------------- | | **Title** | Use current block P-Chain height as context for state verification | | **Author(s)** | Ian Suvak ([@iansuvak](https://github.com/iansuvak)) | | **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/152)) | | **Track** | Standards | ## Abstract Proposes that the ProposerVM passes inner VMs the P-Chain block height of the current block being built rather than the P-Chain block height of the parent block. Inner VMs use this P-Chain height for verifying aggregated signatures of Avalanche Interchain Messages (ICM). This will allow for a more reliable way to determine which validators should participate in signing the message, and remove unnecessary waiting periods. ## Motivation Currently, the ProposerVM passes the P-Chain height of the parent block to inner VMs, which use the value to verify ICM messages in the current block. Using the parent block's P-Chain height is necessary for verifying the proposer and reaching consensus on the current block, but it is not necessary for verifying ICM messages within the block. Using the P-Chain height of the current block being built would make operations that use ICM messages to modify the validator set, such as those specified in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md), verifiable sooner and more reliably. Currently, at least two new P-Chain blocks need to be produced after the relevant state change for it to be reflected for purposes of ICM aggregate signature verification. 
## Specification The [block context](https://github.com/ava-labs/avalanchego/blob/d2e9d12ed2a1b6581b8fd414cbfb89a6cfa64551/snow/engine/snowman/block/block_context_vm.go#L14) contains a `PChainHeight` field that is passed from the ProposerVM to the inner VMs building the block. It is later used by the inner VMs to fetch the canonical validator set for verification of ICM aggregated signatures. The `PChainHeight` currently passed in by the ProposerVM is the P-Chain height of the parent block. The proposed change is to instead have the ProposerVM pass in the P-Chain height of the current block. ## Backwards Compatibility This change requires an upgrade to make sure that all validators verifying the validity of the ICM messages use the same P-Chain height and therefore the same validator set. Prior to activation, nodes should continue to use the P-Chain height of the parent block. ## Reference Implementation An implementation of this ACP for avalanchego can be found [here](https://github.com/ava-labs/avalanchego/pull/3459). ## Security Considerations The ProposerVM must use the parent block's P-Chain height to verify proposers for security reasons, but no such restriction applies to verifying the validity of ICM messages in the current block being built. Therefore, this should be a safe change. ## Acknowledgments Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@michaelkaplan13](https://github.com/michaelkaplan13) for discussion and feedback on this ACP. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). 
# ACP-176: Dynamic Evm Gas Limit And Price Discovery Updates URL: /docs/acps/176-dynamic-evm-gas-limit-and-price-discovery-updates Details for Avalanche Community Proposal 176: Dynamic Evm Gas Limit And Price Discovery Updates | ACP | 176 | | :------------ | :------------------------------------------------------------------------------------------------------------------------------------------------- | | **Title** | Dynamic EVM Gas Limits and Price Discovery Updates | | **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | | **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/178)) | | **Track** | Standards | ## Abstract Proposes that the C-Chain and Subnet-EVM chains adopt a dynamic fee mechanism similar to the one [introduced on the P-Chain as part of ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md), with modifications to allow for block proposers (i.e. validators) to dynamically adjust the target gas consumption per unit time. ## Motivation Currently, the C-Chain has a static gas target of [15,000,000 gas](https://github.com/ava-labs/coreth/blob/39ec874505b42a44e452b8809a2cc6d09098e84e/params/avalanche_params.go#L32) per [10 second rolling window](https://github.com/ava-labs/coreth/blob/39ec874505b42a44e452b8809a2cc6d09098e84e/params/avalanche_params.go#L36), and uses a modified version of the [EIP-1559](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1559.md) dynamic fee mechanism to adjust the base fee of blocks based on the gas consumed in the previous 10 second window. This has two notable drawbacks: 1. The windower mechanism used to determine the base fee of blocks can lead to outsized spikes in the gas price when there is a large block. 
This is because after a large block that uses all of its gas limit, blocks that follow in the same window continue to result in increased gas prices even if they are relatively small blocks that are under the target gas consumption. 2. The static gas target necessitates a required network upgrade in order to be modified. This is cumbersome and makes it difficult for the network to adjust its capacity in response to performance optimizations or hardware requirement increases. To better position Avalanche EVM chains, including the C-Chain, to be able to handle future increases in load, we propose replacing the above mechanism with one that better handles blocks that consume a large amount of gas, and that allows validators to dynamically adjust the target rate of consumption. ## Specification ### Gas Price Determination The mechanism to determine the base fee of a block is the same as the one used in [ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md) to determine the gas price of a block on the P-Chain. This mechanism calculates the gas price for a given block $b$ based on the following parameters:
| | | | --- | ---------------------------------- | | $T$ | the target gas consumed per second | | $M$ | minimum gas price | | $K$ | gas price update constant | | $C$ | maximum gas capacity | | $R$ | gas capacity added per second |
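The table above parameterizes the ACP-103 price mechanism. As a point of reference, the price it produces can be sketched as an exponential of the excess gas consumption $x$ (a continuous idealization; actual clients use an integer approximation of the exponential, so treat this as illustrative rather than client-exact):

```python
import math

def gas_price(M: float, K: float, x: float) -> float:
    """Gas price under the ACP-103 mechanism (continuous form).

    M is the minimum gas price, K the gas price update constant, and x
    the accumulated excess gas consumption above the target.
    """
    return M * math.exp(x / K)

# With no excess (x = 0) the price sits at the minimum M; each
# additional K units of excess multiply the price by e.
```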
### Making $T$ Dynamic As noted above, the gas price determination mechanism relies on a target gas consumption per second, $T$, in order to calculate the gas price for a given block. $T$ will be adjusted dynamically according to the following specification. Let $q$ be a non-negative integer that is initialized to 0 upon activation of this mechanism. Let the target gas consumption per second be expressed as: $T = P \cdot e^{\frac{q}{D}}$ where $P$ is the global minimum allowed target gas consumption rate for the network, and $D$ is a constant that helps control the rate of change of the target gas consumption. After the execution of transactions in block $b$, the value of $q$ can be increased or decreased by up to $Q$. It must be the case that $\left|\Delta q\right| \leq Q$, or block $b$ is considered invalid. The amount by which $q$ changes after executing block $b$ is specified by the block builder. Block builders (i.e. validators) may set their desired value for $T$ (i.e. their desired gas consumption rate) in their configuration, and their desired value for $q$ can then be calculated as: $q_{desired} = D \cdot \ln\left(\frac{T_{desired}}{P}\right)$ Note that since $q_{desired}$ is only used locally and can be different for each node, it is safe for implementations to approximate the value of $\ln\left(\frac{T_{desired}}{P}\right)$, and round the resulting value to the nearest integer. 
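A sketch of these two formulas in Python, using the C-Chain values of $P$ and $D$ given later in this section (illustrative only; clients may use integer approximations):

```python
import math

P = 1_000_000  # minimum target gas consumption per second (C-Chain value)
D = 2**25      # rate-of-change constant (C-Chain value)

def target_gas_rate(q: int) -> float:
    """T = P * e^(q/D): the network's target gas consumption per second."""
    return P * math.exp(q / D)

def desired_q(T_desired: float) -> int:
    """q_desired = D * ln(T_desired / P), rounded to the nearest integer.

    Since q_desired is only used locally by each node, rounding is safe."""
    return round(D * math.log(T_desired / P))

# At activation q = 0, so the target starts at the minimum P.
# A node that wants a 2,000,000 gas/second target aims for q = D * ln(2).
```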
When building a block, builders can calculate their next preferred value for $q$ based on the network's current value (`q_current`) according to:

```python
# Calculates a node's next value for q for a given block, moving from
# the network's current value toward the node's desired value by at
# most max_change (i.e. Q)
def calc_next_q(q_current: int, q_desired: int, max_change: int) -> int:
    if q_desired > q_current:
        return q_current + min(q_desired - q_current, max_change)
    else:
        return q_current - min(q_current - q_desired, max_change)
```

As $q$ is updated after the execution of transactions within the block, $T$ is also updated such that $T = P \cdot e^{\frac{q}{D}}$ at all times. As the value of $T$ adjusts, the value of $R$ (capacity added per second) is also updated such that: $R = 2 \cdot T$ This ensures that the gas price can increase and decrease at the same rate. The value of $C$ must also adjust proportionately, so we set: $C = 10 \cdot T$ This means that the maximum stored gas capacity would be reached after 5 seconds during which no blocks have been accepted. In order to keep the time it takes for the gas price to double at sustained maximum network capacity usage roughly constant, the value of $K$ used in the gas price determination mechanism must be updated proportionally to $T$ such that: $K = 87 \cdot T$ In order to have the gas price not be directly impacted by the change in $K$, we also update $x$ (excess gas consumption) proportionally. When updating $x$ after executing a block, instead of setting $x = x + G$ as specified in ACP-103, we set: $x_{n+1} = (x + G) \cdot \frac{K_{n+1}}{K_{n}}$ Note that the values of $q$ (and thus also $T$, $R$, $C$, $K$, and $x$) are updated **after** the execution of block $b$, which means they only take effect in determining the gas price of block $b+1$. The change to each of these values in block $b$ does not affect the gas price for transactions included in block $b$ itself. 
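Putting the update rules above together, the post-execution bookkeeping can be sketched as follows (a hedged illustration using the parameter names from this section, not the coreth implementation):

```python
import math

def update_after_block(P: int, D: int, Q: int, q: int, x: float,
                       q_desired: int, gas_used: int):
    """Apply the post-execution update for a block.

    The builder's change to q is clamped to +/- Q, then T, R, C, and K
    are re-derived (R = 2T, C = 10T, K = 87T), and the excess x is
    rescaled by K_new / K_old so the gas price is not directly impacted
    by the change in K. The returned values take effect for the next block.
    """
    K_old = 87 * P * math.exp(q / D)
    delta = max(-Q, min(Q, q_desired - q))
    q_new = q + delta
    T = P * math.exp(q_new / D)
    R, C, K = 2 * T, 10 * T, 87 * T
    x_new = (x + gas_used) * K / K_old
    return q_new, T, R, C, K, x_new
```

With `q = 0` and no requested change, this reproduces the activation values: $T = P$, $R = 2P$, $C = 10P$, and $K = 87P$.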
Allowing block builders to adjust the target gas consumption rate in blocks that they produce makes it such that the effective target gas consumption rate should converge over time to the point where 50% of the voting stake weight wants it increased and 50% of the voting stake weight wants it decreased. This is because the number of blocks each validator produces is proportional to their stake weight. As noted in ACP-103, the maximum gas consumed in a given period of time $\Delta{t}$, is $r + R \cdot \Delta{t}$, where $r$ is the remaining gas capacity at the end of previous block execution. The upper bound across all $\Delta{t}$ is $C + R \cdot \Delta{t}$. Phrased differently, the maximum amount of gas that can be consumed by any given block $b$ is: $gasLimit_{b} = min(r + R \cdot \Delta{t}, C)$ ### Configuration Parameters As noted above, the gas price determination mechanism depends on the values of $T$, $M$, $K$, $C$, and $R$ to be set as parameters. $T$ is adjusted dynamically from its initial value based on $D$ and $P$, and the values of $R$ and $C$ are derived from $T$. Parameters at activation on the C-Chain are:
| Parameter | Description | C-Chain Configuration | | --------- | ------------------------------------------------------ | --------------------- | | $P$ | minimum target gas consumption per second | $1,000,000$ | | $D$ | target gas consumption rate update constant | $2^{25}$ | | $Q$ | target gas consumption rate update factor change limit | $2^{15}$ | | $M$ | minimum gas price | $1*10^{-18}$ AVAX | | $K$ | initial gas price update factor | $87,000,000$ |
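With these parameters at activation ($q = 0$, so $T = P = 1{,}000{,}000$, $R = 2{,}000{,}000$, and $C = 10{,}000{,}000$), the block gas limit formula from above can be sketched as:

```python
def block_gas_limit(r: float, R: float, dt: float, C: float) -> float:
    """gasLimit_b = min(r + R * dt, C), where r is the capacity remaining
    after the previous block and dt is the seconds elapsed since it."""
    return min(r + R * dt, C)

R, C = 2_000_000, 10_000_000  # derived from T = P at activation

# A block arriving 2 seconds after a parent that drained all capacity
# may use at most r + R*dt = 4,000,000 gas; after 5 or more idle
# seconds, the limit is pinned at C = 10,000,000.
```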
$P$ was chosen as a safe bound on the minimum target gas usage on the C-Chain. The current gas target of the C-Chain is $1,500,000$ per second. The target gas consumption rate will only stay at $P$ if the majority of stake weight of the network specifies $P$ as their desired gas consumption rate target. $D$ and $Q$ were chosen to give each block builder the ability to adjust the value of $T$ by roughly $\frac{1}{1024}$ of its current value, which matches the [gas limit bound divisor that Ethereum currently uses](https://github.com/ethereum/go-ethereum/blob/52766bedb9316cd6cddacbb282809e3bdfba143e/params/protocol_params.go#L26) to limit the amount that validators can change the execution layer gas limit in a single block. $D$ and $Q$ were scaled up by a factor of $2^{15}$ to provide block builders more granularity in the adjustments to $T$ that they can make. $M$ was chosen as the minimum possible denomination of the native EVM asset, such that the gas price is more likely to consistently remain in a range where price discovery occurs. The price discovery mechanism has already been battle-tested on the P-Chain (and prior to that on Ethereum for blob gas prices as defined by EIP-4844), giving confidence that it will correctly react to any increase in network usage in order to prevent a DOS attack. $K$ was chosen such that at sustained maximum capacity ($2 \cdot T$ gas/second), the fee rate will double every \~60.3 seconds. For comparison, fees under EIP-1559 can double about every \~70 seconds, and under the C-Chain's current implementation about every \~50 seconds, depending on the time between blocks. 
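The \~60.3 second figure for $K = 87 \cdot T$ can be checked directly: at sustained maximum capacity, gas is consumed at $R = 2 \cdot T$ per second while the target is $T$, so the excess $x$ grows at $R - T = T$ per second, and the price $M \cdot e^{x/K}$ doubles once $x$ grows by $K \cdot \ln 2 = 87 \cdot T \cdot \ln 2$:

```python
import math

# Excess grows at (R - T) = T gas/second at sustained maximum capacity,
# so the doubling time is (K * ln 2) / T = 87 * ln 2 seconds,
# independent of the current value of T.
doubling_time = 87 * math.log(2)  # ≈ 60.3 seconds
```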
The maximum instantaneous price multiplier is: $e^\frac{C}{K} = e^\frac{10 \cdot T}{87 \cdot T} = e^\frac{10}{87} \simeq 1.12$ ### Choosing $T_{desired}$ As mentioned above, this new mechanism allows for validators to specify their desired target gas consumption rate ($T_{desired}$) in their configuration, and the value that they set impacts the effective target gas consumption rate of the network over time. The higher the value of $T$, the more resources (storage, compute, etc.) that are able to be used by the network. When choosing what value makes sense for them, validators should consider the resources that are required to properly support that level of gas consumption, the utility the network provides by having higher transaction per second throughput, and the stability of the network should it reach that level of utilization. While Avalanche Network Clients can set default configuration values for the desired target gas consumption rate, each validator can choose to set this value independently based on their own considerations. ## Backwards Compatibility The changes proposed in this ACP require a network upgrade in order to take effect. Prior to its activation, the current gas limit and price discovery mechanisms will continue to be used. Its activation should have relatively minor compatibility effects on any developer tooling. Notably, transaction formats, and thus wallets, are not impacted. After its activation, given that the value of $C$ is dynamically adjusted, the maximum possible gas consumed by an individual block, and thus the maximum possible gas consumed by an individual transaction, will also dynamically adjust. Because the upper bound on the amount of gas consumed by a single transaction fluctuates, transactions that are considered invalid at one time may be considered valid at a different point in time, and vice versa. 
While potentially unintuitive, as long as the minimum gas consumption rate is set sufficiently high, this should not have significant practical impact, and is also currently the case on the Ethereum mainnet. > \[!NOTE] > After the activation of this ACP, concerns were raised around the latency of inclusion for large transactions when the fee is increasing. To address these concerns, block producers SHOULD only produce blocks when there is sufficient capacity to include large transactions. Prior to this ACP, the maximum size of a transaction was $15$ million gas. Therefore, the recommended heuristic is to only produce blocks when there is at least $\min(8 \cdot T, 15 \text{ million})$ capacity. *At the time of writing, this ensures transactions with up to 12.8 million gas will be able to bid for block space.* ## Reference Implementation This ACP was implemented and merged into Coreth behind the `Fortuna` upgrade flag. The full implementation can be found in [coreth@v0.14.1-acp-176.1](https://github.com/ava-labs/coreth/releases/tag/v0.14.1-acp-176.1). ## Security Considerations This ACP changes the mechanism for determining the gas price on Avalanche EVM chains. The gas price is meant to adapt dynamically to respond to changes in demand for using the chain. If it does not react as expected, the chain could be at risk of a DOS attack (if the usage price is too low), or overcharge users during periods of low activity. This price discovery mechanism has already been employed on the P-Chain, but should again be thoroughly tested for use on the C-Chain prior to activation on the Avalanche Mainnet. Further, this ACP also introduces a mechanism for validators to change the gas limit of the C-Chain. If this limit is set too high, it is possible that validator nodes will not be able to keep up with the processing of blocks. 
An upper bound on the maximum possible gas limit could be considered to try to mitigate this risk, though it would then take further required network upgrades to scale the network past that limit. ## Acknowledgments Thanks to the following non-exhaustive list of individuals for input, discussion, and feedback on this ACP. * [Emin Gün Sirer](https://x.com/el33th4xor) * [Luigi D'Onorio DeMeo](https://x.com/luigidemeo) * [Darioush Jalali](https://github.com/darioush) * [Aaron Buchwald](https://github.com/aaronbuchwald) * [Geoff Stuart](https://github.com/geoff-vball) * [Meag FitzGerald](https://github.com/meaghanfitzgerald) * [Austin Larson](https://github.com/alarso16) ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-181: P Chain Epoched Views URL: /docs/acps/181-p-chain-epoched-views Details for Avalanche Community Proposal 181: P Chain Epoched Views | ACP | 181 | | :------------ | :------------------------------------------------------------------------------------ | | **Title** | P-Chain Epoched Views | | **Author(s)** | Cam Schultz [@cam-schultz](https://github.com/cam-schultz) | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/211)) | | **Track** | Standards | ## Abstract Proposes a standard P-Chain epoching scheme such that any VM that implements it uses a P-Chain block height known prior to the generation of its next block. This would enable VMs to optimize validator set retrievals, which currently must be done during block execution. This standard does *not* introduce epochs to the P-Chain's VM directly. Instead, it provides a standard that may be implemented by layers that inject P-Chain state into VMs, such as the ProposerVM. ## Motivation The P-Chain maintains a registry of L1 and Subnet validators (including Primary Network validators). 
Validators are added, removed, or their weights changed by issuing P-Chain transactions that are included in P-Chain blocks. When describing an L1 or Subnet's validator set, what is really being described are the weights, BLS keys, and Node IDs of the active validators at a particular P-Chain height. Use cases that require on-demand views of L1 or Subnet validator sets need to fetch validator sets at arbitrary P-Chain heights, while use cases that require up-to-date views need to fetch them as often as every P-Chain block. Epochs during which the P-Chain height is fixed would widen this window to a predictable epoch duration, allowing these use cases to implement optimizations such as pre-fetching validator sets once per epoch, or allowing more efficient backwards traversal of the P-Chain to fetch historical validator sets. ## Specification ### Assumptions In the following specification, we assume that a block $b_m$ has timestamp $t_m$ and P-Chain height $p_m$. ### Epoch Definition An epoch is defined as a contiguous range of blocks that share the same three values: * An Epoch Number * An Epoch P-Chain Height * An Epoch Start Time Let $E_N$ denote an epoch with epoch number $N$. $E_N$'s start time is denoted as $T_{start}^N$, and its P-Chain height as $P_N$. $E_0$ is defined as the epoch whose start time $T_{start}^0$ is the block timestamp of the block that activates this ACP (i.e. the first block at or following this ACP's activation timestamp). ### Epoch Sealing An epoch $E_N$ is *sealed* by the first block with a timestamp greater than or equal to $T_{start}^N + D$, where $D$ is a constant defined in the network upgrade that activates this ACP. Let $B_{S_N}$ denote the block that sealed $E_N$. The sealing block is defined to be a member of the epoch it seals. This guarantees that every epoch will contain at least one block. ### Advancing an Epoch We advance from the current epoch $E_N$ to the next epoch $E_{N+1}$ when the next block after $B_{S_N}$ is produced. 
This block will be a member of $E_{N+1}$, and will have the values: * $P_{N+1}$ equal to the P-Chain height of $B_{S_N}$ * $T_{start}^{N+1}$ equal to $B_{S_N}$'s timestamp * An epoch number of $N+1$, exactly $1$ greater than the previous epoch's number ## Properties ### Epoch Duration Bounds Since an epoch's start time is set to the [timestamp of the sealing block of the previous epoch](#advancing-an-epoch), all epochs are guaranteed to have a duration of at least $D$, as measured from the epoch's starting time to the timestamp of the epoch's sealing block. However, since a sealing block is [defined](#epoch-sealing) to be a member of the epoch it seals, there is no upper bound on an epoch's duration, since that sealing block may be produced at any point in the future beyond $T_{start}^N + D$. ### Fixing the P-Chain Height When building a block, Avalanche blockchains use the P-Chain height [embedded in the block](#assumptions) to determine the validator set. If instead the epoch P-Chain height is used, then we can ensure that when a block is built, the validator set to be used for the next block is known. To see this, suppose block $b_m$ seals epoch $E_N$. Then the next block, $b_{m+1}$, will begin a new epoch, $E_{N+1}$, with $P_{N+1}$ equal to $b_m$'s P-Chain height, $p_m$. If instead $b_m$ does not seal $E_N$, then $b_{m+1}$ will continue to use $P_{N}$. Both candidates for $b_{m+1}$'s P-Chain height ($p_m$ and $P_N$) are known at $b_m$ build time. ## Use Cases ### ICM Verification Optimization For a validator to verify an ICM message, the signing L1/Subnet's validator set must be retrieved during block execution by traversing backward from the current P-Chain height to the P-Chain height provided by the ProposerVM. The traversal depth is highly variable, so to account for the worst case, VM implementations charge a large fixed amount of gas to perform this verification. 
With epochs, validator set retrieval occurs at fixed P-Chain heights that increment at regular intervals, which provides opportunities to optimize this retrieval. For instance, validator retrieval may be done asynchronously from block execution as soon as $D$ time has passed since the current epoch's start time, allowing the verification gas cost to be safely reduced by a significant amount. ### Improved Relayer Reliability Current ICM VM implementations verify ICM messages against the local P-Chain state, as determined by the P-Chain height set by the ProposerVM. Off-chain relayers perform the following steps to deliver ICM messages: 1. Fetch the sending chain's validator set at the verifying chain's current proposed height 2. Collect BLS signatures from that validator set to construct the signed ICM message 3. Submit the transaction containing the signed message to the verifying chain If the validator set changes between steps 1 and 3, the ICM message will fail verification. Epochs improve upon this by fixing the P-Chain height used to verify ICM messages for a duration of time that is predictable to off-chain relayers. A relayer should be able to derive the epoch boundaries based on the specification above, or they could retrieve that information via a node API. Relayers could use that information to decide the validator set to query, knowing that it will be stable for the duration of the epoch. Further, VMs could relax the verification rules to allow ICM messages to be verified against the previous epoch as a fallback, eliminating edge cases around the epoch boundary. ## Backwards Compatibility This change requires a network upgrade and is therefore not backwards compatible. Any downstream entities that depend on a VM's view of the P-Chain will also need to account for epoched P-Chain views. For instance, ICM messages are signed by an L1's validator set at a specific P-Chain height. 
Currently, the constructor of the signed message can in practice use the validator set at the P-Chain tip, since all deployed Avalanche VMs lag the P-Chain by at most a fixed number of blocks. With epoching, however, the ICM message constructor must take into account the epoch P-Chain height of the verifying chain, which may be arbitrarily far behind the P-Chain tip. ## Reference Implementation The following pseudocode illustrates how an epoch may be calculated for a block:

```go
// D is the epoch duration
var D time.Duration

type Epoch struct {
	PChainHeight uint64
	Number       uint64
	StartTime    time.Time
}

type Block interface {
	Timestamp() time.Time
	PChainHeight() uint64
	Epoch() Epoch
}

func GetPChainEpoch(parent Block) Epoch {
	if !parent.Timestamp().Before(parent.Epoch().StartTime.Add(D)) {
		// The parent's timestamp is at or past its epoch's start time plus D,
		// so the parent sealed its epoch. The child is the first block of the
		// new epoch, so it should use the parent's P-Chain height as the new
		// epoch's height, and the parent's timestamp as the new epoch's
		// starting time.
		return Epoch{
			PChainHeight: parent.PChainHeight(),
			Number:       parent.Epoch().Number + 1,
			StartTime:    parent.Timestamp(),
		}
	}
	// Otherwise, the parent did not seal its epoch, so the child should use
	// the same epoch height. This is true even if the child seals its epoch,
	// since sealing blocks are considered to be part of the epoch they seal.
	return Epoch{
		PChainHeight: parent.Epoch().PChainHeight,
		Number:       parent.Epoch().Number,
		StartTime:    parent.Epoch().StartTime,
	}
}
```

* If the parent sealed its epoch, the current block [advances the epoch](#advancing-an-epoch), refreshing the epoch height, incrementing the epoch number, and setting the epoch starting time. * Otherwise, the current block uses the current epoch height, number, and starting time, regardless of whether it seals the epoch. 
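To make the sealing and advancement rules concrete, here is a minimal Python simulation of the same logic ($D$, timestamps, and heights are illustrative; an epoch is a tuple of `(p_chain_height, number, start_time)`):

```python
D = 60  # illustrative epoch duration, in seconds

def child_epoch(parent_epoch, parent_ts, parent_height):
    """Epoch of the child of a block with the given timestamp and
    P-Chain height. The parent seals its epoch when its timestamp is at
    or past the epoch start time plus D (sealing blocks belong to the
    epoch they seal); the child then advances the epoch."""
    height, number, start = parent_epoch
    if parent_ts >= start + D:
        return (parent_height, number + 1, parent_ts)
    return parent_epoch

# Blocks as (timestamp, P-Chain height); the first block begins E_0.
blocks = [(0, 100), (30, 105), (65, 110), (70, 112)]
epoch = (100, 0, 0)  # E_0: epoch height 100, starting at t = 0
for parent in blocks[:-1]:
    epoch = child_epoch(epoch, *parent)

# The block at t = 65 stays in E_0 (its parent did not seal), but it
# seals E_0 itself; the block at t = 70 then begins E_1 with epoch
# height 110 and start time 65.
```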
A full reference implementation of this ACP in the ProposerVM is available in [AvalancheGo](https://github.com/ava-labs/avalanchego/pull/3746); it must be merged before this ACP may be considered `Implementable`. ### Setting the Epoch Duration The epoch duration $D$ is hardcoded in the upgrade configuration, and may only be changed by a required network upgrade. #### Changing the Epoch Duration Future network upgrades may change the value of $D$ to some new duration $D'$. $D'$ should not take effect until the end of the current epoch, rather than at the activation time of the network upgrade that defines $D'$. This ensures that an in-progress epoch at the upgrade activation time cannot have a realized duration less than both $D$ and $D'$. ## Security Considerations ### Epoch P-Chain Height Skew Because epochs may have [unbounded duration](#epoch-duration-bounds), it is possible for a block's `PChainEpochHeight` to be arbitrarily far behind the tip of the P-Chain. This does not affect the *validity* of ICM verification within a VM that implements P-Chain epoched views, since the validator set at `PChainEpochHeight` is always known. However, the following considerations should be made under this scenario: 1. As validators exit the validator set, their physical nodes may be unavailable to serve BLS signature requests, making it more difficult to construct a valid ICM message 2. A valid ICM message may represent an attestation by a stale validator set. Signatures from validators that have exited the validator set between `PChainEpochHeight` and the current P-Chain tip will not represent active stake. Both of these scenarios may be mitigated by having shorter epoch lengths, which limit the delay between when the P-Chain is updated and when those updates are taken into account for ICM verification on a given L1, and by ensuring consistent block production, so that epochs always advance soon after $D$ time has passed. 
### Excessive Validator Churn If an epoched view of the P-Chain is used by the consensus engine, then validator set changes over an epoch's duration will be concentrated into a single block at the epoch's boundary. Excessive validator churn can cause consensus failures and other dangerous behavior, so it is imperative that the amount of validator weight change at the epoch boundary is limited. One strategy to accomplish this is to queue validator set changes and spread them out over multiple epochs. Another strategy is to batch updates to the same validator together such that increases and decreases to that validator's weight cancel each other out. Given the primary use case of ICM verification improvements, which occur at the VM level, mechanisms to mitigate against this are omitted from this ACP. ## Open Questions * What should the epoch duration $D$ be set to? * Is it safe for `PChainEpochHeight` and `PChainHeight` to differ significantly within a block, due to [unbounded epoch duration](#epoch-duration-bounds)? ## Acknowledgements Thanks to [@iansuvak](https://github.com/iansuvak), [@geoff-vball](https://github.com/geoff-vball), [@yacovm](https://github.com/yacovm), [@michaelkaplan13](https://github.com/michaelkaplan13), [@StephenButtolph](https://github.com/StephenButtolph), and [@aaronbuchwald](https://github.com/aaronbuchwald) for discussion and feedback on this ACP. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). 
# ACP-191: Seamless L1 Creation URL: /docs/acps/191-seamless-l1-creation Details for Avalanche Community Proposal 191: Seamless L1 Creation | ACP | 191 | | :------------ | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Title** | Seamless L1 Creations (CreateL1Tx) | | **Author(s)** | Martin Eckardt ([@martineckardt](https://github.com/martineckardt)), Aaron Buchwald ([@aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)), Meaghan FitzGerald ([@meaghanfitzgerald](https://github.com/meaghanfitzgerald)) | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/197)) | | **Track** | Standards | ## Abstract This ACP introduces a new P-Chain transaction type called `CreateL1Tx` that simplifies the creation of Avalanche L1s. It consolidates three existing transaction types (`CreateSubnetTx`, `CreateChainTx`, and `ConvertSubnetToL1Tx`) into a single atomic operation. This streamlines the L1 creation process, removes the need for the intermediary Subnet creation step, and eliminates the management of temporary `SubnetAuth` credentials. ## Motivation [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md) introduced Avalanche L1s, providing greater sovereignty and flexibility compared to Subnets. However, creating an L1 currently requires a three-step process: 1. `CreateSubnetTx`: Create the Subnet record on the P-Chain and specify the `SubnetAuth` 2. `CreateChainTx`: Add a blockchain to the Subnet (can be called multiple times) 3. 
`ConvertSubnetToL1Tx`: Convert the Subnet to an L1, specifying the initial validator set and the validator manager location This process has several drawbacks: * It requires orchestrating three separate transactions that could be handled in one. * The `SubnetAuth` must be managed during creation but becomes irrelevant after conversion. * The multi-step process increases complexity and potential for errors. * It introduces unnecessary state transitions and storage overhead on the P-Chain. By introducing a single `CreateL1Tx` transaction, we can simplify the process, reduce overhead, and improve the developer experience for creating L1s. ## Specification ### New Transaction Type The following new transaction type is introduced: ```go // ChainConfig represents the configuration for a chain to be created type ChainConfig struct { // A human readable name for the chain; need not be unique ChainName string `serialize:"true" json:"chainName"` // ID of the VM running on the chain VMID ids.ID `serialize:"true" json:"vmID"` // IDs of the feature extensions running on the chain FxIDs []ids.ID `serialize:"true" json:"fxIDs"` // Byte representation of genesis state of the chain GenesisData []byte `serialize:"true" json:"genesisData"` } // CreateL1Tx is an unsigned transaction to create a new L1 with one or more chains type CreateL1Tx struct { // Metadata, inputs and outputs BaseTx `serialize:"true"` // Chain configurations for the L1 (can be multiple) Chains []ChainConfig `serialize:"true" json:"chains"` // Chain where the L1 validator manager lives ManagerChainID ids.ID `serialize:"true" json:"managerChainID"` // Address of the L1 validator manager ManagerAddress types.JSONByteSlice `serialize:"true" json:"managerAddress"` // Initial pay-as-you-go validators for the L1 Validators []*L1Validator `serialize:"true" json:"validators"` } ``` The `L1Validator` structure follows the same definition as in 
[ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md#convertsubnettol1tx). ### Transaction Processing When a `CreateL1Tx` transaction is processed, the P-Chain performs the following operations atomically: 1. Create a new L1. 2. Create chain records for each chain configuration in the `Chains` array. 3. Set up the L1 validator manager with the specified `ManagerChainID` and `ManagerAddress`. 4. Register the initial validators specified in the `Validators` array. ### IDs * `subnetID`: The `subnetID` of the L1 is the transaction hash. * `blockchainID`: The `blockchainID` for each blockchain is defined as the SHA256 hash of the 37 bytes resulting from concatenating the 32 byte `subnetID` with the `0x00` byte and the 4 byte `chainIndex` (index in the `Chains` array within the transaction). * `validationID`: The `validationID` for the initial validators added through `CreateL1Tx` is defined as the SHA256 hash of the 36 bytes resulting from concatenating the 32 byte `subnetID` with the 4 byte `validatorIndex` (index in the `Validators` array within the transaction). Note: Even with this updated definition of the `blockchainID`s for chains created using this new flow, the `validationID`s of the L1's initial set of validators are still compatible with the existing reference validator manager contracts as defined [here](https://github.com/ava-labs/icm-contracts/blob/4a897ba913958def3f09504338a1b9cd48fe5b2d/contracts/validator-manager/ValidatorManager.sol#L247). ### Restrictions and Validation The `CreateL1Tx` transaction has the following restrictions and validation criteria: 1. The `Chains` array must contain at least one chain configuration 2. The `ManagerChainID` must be a valid blockchain ID, but cannot be the P-Chain blockchain ID 3. Validator nodes must have unique NodeIDs within the transaction 4. Each validator must have a non-zero weight and a non-zero balance 5. 
The transaction inputs must provide sufficient AVAX to cover the transaction fee and all validator balances. ### Warp Message After the transaction is accepted, the P-Chain must be willing to sign a `SubnetToL1ConversionMessage` with a `conversionID` corresponding to the new L1, similar to what would happen after a `ConvertSubnetToL1Tx`. This ensures compatibility with existing systems that expect this message, such as the validator manager contracts. ## Backwards Compatibility This ACP introduces a new transaction type and does not modify the behavior of existing transaction types. Existing Subnets and L1s created through the three-step process will continue to function as before. This change is purely additive and does not require any changes to existing L1s or Subnets. The existing transactions `CreateSubnetTx`, `CreateChainTx` and `ConvertSubnetToL1Tx` remain unchanged for now, but may be removed in a future ACP to ensure systems have sufficient time to update to the new process. ## Reference Implementation A reference implementation must be provided in order for this ACP to be considered implementable. ## Security Considerations The `CreateL1Tx` transaction follows the same security model as the existing three-step process. By making the L1 creation atomic, it reduces the risk of partial state transitions that could occur if one of the transactions in the three-step process fails. The same continuous fee mechanism introduced in ACP-77 applies to L1s created through this new transaction type, ensuring proper metering of validator resources. The transaction verification process must ensure that all validator properties are properly validated, including unique NodeIDs, valid BLS signatures, and sufficient balances. ## Rationale and Alternatives The primary alternative is to maintain the status quo: requiring three separate transactions to create an L1. 
However, this approach has clear disadvantages in terms of complexity, transaction overhead, and user experience. Another alternative would be to modify the existing `ConvertSubnetToL1Tx` to allow specifying chain configurations directly. However, this would complicate the conversion process for existing Subnets and would not fully address the desire to eliminate the Subnet intermediary step for new L1 creation. The chosen approach of introducing a new transaction type provides a clean solution that addresses all identified issues while maintaining backward compatibility. ## Acknowledgements The idea for this PR was originally formulated by Aaron Buchwald in our discussion about the creation of L1s. Special thanks to the authors of ACP-77 for their groundbreaking work on Avalanche L1s, and to the projects that have shared their experiences and challenges with the current validator manager framework. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-194: Streaming Asynchronous Execution URL: /docs/acps/194-streaming-asynchronous-execution Details for Avalanche Community Proposal 194: Streaming Asynchronous Execution | ACP | 194 | | :------------ | :------------------------------------------------------------------------------------------------------------------------------- | | **Title** | Streaming Asynchronous Execution | | **Author(s)** | Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)) | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/196)) | | **Track** | Standards | ## Abstract Streaming Asynchronous Execution (SAE) decouples consensus and execution by introducing a queue upon which consensus is performed. A concurrent execution stream is responsible for clearing the queue and reporting a delayed state root for recording by later rounds of consensus. 
Validation of transactions to be pushed to the queue is lightweight but guarantees eventual execution. ## Motivation ### Performance improvements 1. Concurrent consensus and execution streams eliminate node context switching, reducing latency caused by each waiting on the other. In particular, "VM time" (akin to CPU time) more closely aligns with wall time since it is no longer eroded by consensus. This increases gas per wall-second even without an increase in gas per VM-second. 2. Lean, execution-only clients can rapidly execute the queue agreed upon by consensus, providing accelerated receipt issuance and state computation. Without the need to compute state *roots*, such clients can eschew expensive Merkle data structures. End users see expedited but identical transaction results. 3. Irregular stop-the-world events like database compaction are amortised over multiple blocks. 4. Introduces additional bursty throughput by eagerly accepting transactions, without a reduction in security guarantees. 5. Third-party accounting of non-data-dependent transactions, such as EOA-to-EOA transfers of value, can be performed prior to execution. ### Future features Performing transaction execution after consensus sequencing allows the usage of consensus artifacts in execution. This unblocks some additional future improvements: 1. Exposing a real-time VRF during transaction execution. 2. Using an encrypted mempool to reduce front-running. This ACP does not introduce these, but some form of asynchronous execution is required to correctly implement them. ### User stories 1. A sophisticated DeFi trader runs a highly optimised execution client, locally clearing the transaction queue well in advance of the network—setting the stage for HFT DeFi. 2. A custodial platform filters the queue for only those transactions sent to one of their EOAs, immediately crediting user balances. 
## Description In all execution models, a block is *proposed* and then verified by validators before being *accepted*. To assess a block's validity in *synchronous* execution, its transactions are first *executed* and only then *accepted* by consensus. This immediately and implicitly *settles* all of the block's transactions by including their execution results at the time of *acceptance*.

```mermaid
flowchart LR
    E[Executed] --> A[Accepted/Settled]
```

Under SAE, a block is considered valid if all of its transactions can be paid for when eventually *executed*, after which the block is *accepted* by consensus. The act of *acceptance* enqueues the block to be *executed* asynchronously. In the future, some as-yet-unknown later block will reference the execution results and *settle* all transactions from the *executed* block.

```mermaid
flowchart LR
    A[Accepted] -->|variable delay| E[Executed]
    E -->|τ seconds| S[Settled]
    A -. guarantees .-> S
```

### Block lifecycle #### Proposing blocks The validator selection mechanism for block production is unchanged. However, block builders are no longer expected to execute transactions during block building. The block builder is expected to include transactions by building upon the most recently settled state and to apply worst-case bounds on the execution of the ancestor blocks prior to the most recently settled block. The worst-case bounds enforce minimum balances of sender accounts and the maximum required base fee. The worst-case bounds are described [below](#block-validity-and-building). Prior to adding a proposed block to consensus, all validators MUST verify that the block builder correctly enforced the worst-case bounds while building the block. This guarantees that the block can be executed successfully if it is accepted. > \[!NOTE] > The worst-case bounds guarantee does not provide assurance about whether or not a transaction will revert nor whether its computation will run out of gas by reaching the specified limit. 
The verification only ensures the transaction is capable of paying for the accrued fees. #### Accepting blocks Once a block is marked as accepted by consensus, the block is put in a FIFO execution queue. #### Executing blocks Each client runs a block executor in parallel, which constantly executes the blocks from the FIFO queue. In addition to executing the blocks, the executor provides deterministic timestamps for the beginning and end of each block's execution. Time is measured in two ways by the block executor: 1. The timestamp included in the block header. 2. The amount of gas charged during the execution of blocks. > \[!NOTE] > Execution timestamps are more granular than block header timestamps to allow sub-second block execution times. As soon as there is a block available in the execution queue, the block executor starts processing the block. If the executor's current timestamp is prior to the current block's timestamp, the executor's timestamp is advanced to match the block's. Advancing the timestamp in this scenario results in unused gas capacity, reducing the gas *excess* from which the price is determined. The block is then executed on top of the last executed (not settled) state. After executing the block, the executor advances its timestamp based on the gas usage of the block, also increasing the gas *excess* for the pricing algorithm. The block's execution time is now timestamped and the block is available to be settled. #### Settling blocks Already-executed blocks are settled once a following block that includes the results of the executed block is accepted. The results are included by setting the state root to that of the last executed block and the receipt root to that of an MPT of all receipts since the last settlement, possibly from more than one block. The following block's timestamp is used to determine which blocks to settle: blocks are settled if said timestamp is greater than or equal to the execution time of the executed block plus a constant delay. 
The additional delay amortises any sporadic slowdowns the block executor may have encountered. ## Specification ### Background ACP-103 introduced the following variables for calculating the gas price:
| | | | --- | ---------------------------------- | | $T$ | the target gas consumed per second | | $M$ | minimum gas price | | $K$ | gas price update constant | | $R$ | gas capacity added per second |
ACP-176 provided a mechanism to make $T$ dynamic and set: $$ \begin{align} R &= 2 \cdot T \\ K &= 87 \cdot T \end{align} $$ The *excess* actual consumption $x \ge 0$ beyond the target $T$ is tracked via numerical integration and used to calculate the gas price as: $M \cdot \exp\left(\frac{x}{K}\right)$ ### Gas charged We introduce $g_L$, $g_U$, and $g_C$ as the gas *limit*, *used*, and *charged* per transaction, respectively. We define $$ g_C := \max\left(g_U, \frac{g_L}{\lambda}\right) $$ where $\lambda$ enforces a lower bound on the gas charged based on the gas limit. > \[!NOTE] > $\dfrac{g_L}{\lambda}$ is rounded up by actually calculating $\dfrac{g_L + \lambda - 1}{\lambda}$ Wherever execution previously referenced gas used, it now references gas charged. For example, the gas excess $x$ will be modified by $g_C$ rather than $g_U$. ### Block size The constant time delay between block execution and settlement is defined as $\tau$ seconds. The maximum allowed size of a block is defined as: $$ \omega_B ~:= R \cdot \tau \cdot \lambda $$ Any block whose total sum of gas limits for transactions exceeds $\omega_B$ MUST be considered invalid. ### Queue size The maximum allowed size of the execution queue *prior* to adding a new block is defined as: $$ \omega_Q ~:= 2 \cdot \omega_B $$ Any block that attempts to be enqueued while the current size of the queue is larger than $\omega_Q$ MUST be considered invalid. > \[!NOTE] > By restricting the size of the queue *prior* to enqueueing the new block, $\omega_B$ is guaranteed to be the only limitation on block size. ### Block executor During the activation of SAE, the block executor's timestamp $t_e$ is initialised to the timestamp of the last accepted block. 
Prior to executing a block with timestamp $t_b$, the executor's timestamp and excess are updated: $$ \begin{align} \Delta{t} &~:= \max\left(0, t_b - t_e\right) \\ t_e &~:= t_e + \Delta{t} \\ x &~:= \max\left(x - T \cdot \Delta{t}, 0\right) \\ \end{align} $$ The block is then executed with the gas price calculated from the current value of $x$. After executing a block that charged $g_C$ gas in total, the executor's timestamp and excess are updated: $$ \begin{align} \Delta{t} &~:= \frac{g_C}{R} \\ t_e &~:= t_e + \Delta{t} \\ x &~:= x + \Delta{t} \cdot (R - T) \\ \end{align} $$ > \[!NOTE] > The update rule here assumes that $t_e$ is a timestamp that tracks the passage of time both by gas and by wall-clock time. $\frac{g_C}{R}$ MUST NOT be simply rounded. Rather, the gas accumulation MUST be left as a fraction. $t_e$ is now this block's execution timestamp. ### Handling gas target changes When a block is produced that modifies $T$, both the consensus thread and the execution thread will update to the modified $T$ after their own handling of the block. For example, restrictions of the queue size MUST be calculated based on the parent block's $T$. Similarly, the time spent executing a block MUST be calculated based on the parent block's $T$. ### Block settlement For a *proposed* block that includes timestamp $t_b$, all ancestors whose execution timestamp satisfies $t_e \leq t_b - \tau$ are considered settled. Note that $t_e$ is not an integer as it tracks fractional seconds with gas consumption, which is not the case for $t_b$. The *proposed* block MUST include the `stateRoot` produced by the execution of the most recently settled block. For any *newly* settled blocks, the *proposed* block MUST include all execution artifacts: * `receiptsRoot` * `logsBloom` * `gasUsed` The receipts root MUST be computed as defined in [EIP-2718](https://eips.ethereum.org/EIPS/eip-2718) except that the tree MUST be built from the concatenation of receipts from all blocks being settled. 
> \[!NOTE] > If the block executor has fallen behind, the node may not be able to determine precisely which ancestors should be considered settled. If this occurs, validators MUST allow the block executor to catch up prior to deciding the block's validity. ### Block validity and building After determining which blocks to settle, all remaining ancestors of the new block must be inspected to determine the worst-case bounds on $x$ and account balances. Account nonces are known immediately. The worst-case bound on $x$ can be calculated by following the block executor update rules using $g_L$ rather than $g_C$. The worst-case bound on account balances can be calculated by charging the worst-case gas cost to the sender of a transaction and deducting the value of the transaction from the sender's account balance. The `baseFeePerGas` field MUST be populated with the gas price based on the worst-case bound on $x$ at the start of block execution. ### Configuration Parameters As noted above, SAE requires $\tau$ and $\lambda$ to be set as parameters, while the values of $\omega_B$ and $\omega_Q$ are derived from $T$. Parameters to specify for the C-Chain are:
| Parameter | Description | C-Chain Configuration | | --------- | ------------------------------------------------ | --------------------- | | $\tau$ | duration between execution and settlement | $5s$ | | $\lambda$ | minimum conversion from gas limit to gas charged | $2$ |
## Backwards Compatibility This ACP modifies the meaning of multiple fields in the block. A comprehensive list of changes will be produced once a reference implementation is available. Likely fields to change include: * `stateRoot` * `receiptsRoot` * `logsBloom` * `gasUsed` * `extraData` ## Reference Implementation A reference implementation is still a work-in-progress. This ACP will be updated to include a reference implementation once one is available. ## Security Considerations ### Worst-case transaction validity To avoid a DoS vulnerability on execution, we require an upper bound on transaction gas cost (i.e. amount $\times$ price) beyond the regular requirements for transaction validity (e.g. nonce, signature, etc.). We therefore introduced "worst-case cost" validity. We can prove that if every transaction were to use its full gas limit this would result in the greatest possible: 1. Consumption of gas units (by definition of the gas limit); and 2. Gas excess $x$ (and therefore gas price) at the time of execution. For a queue of blocks $Q = \\{i\\}_{i \ge 0}$ the gas excess $x_j$ immediately prior to execution of block $j \in Q$ is a monotonic, non-decreasing function of the gas usage of all preceding blocks in the queue; i.e. $x_j~:=~f\left(\\{g_i\\}_{i<j}\right)$, with the excess floored at $x \ge 0$. Consider a block $k < j$ that uses less gas than its limit: the excess added by its execution shrinks, while the idle capacity drained before later blocks grows, so any realized decrease of $x$ is at least as large as predicted. The excess, and hence gas price, for every later block $x_{i>k}$ is therefore reduced: $$ \downarrow g_k \implies \begin{cases} \downarrow \Delta^+x \propto g_k \\ \uparrow \Delta^-x \propto R-g_k \end{cases} \implies \downarrow \Delta x_k \implies \downarrow M \cdot \exp\left(\frac{x_{i>k}}{K}\right) $$ Given maximal gas consumption under (1), the monotonicity of $f$ implies (2). Since we are working with non-negative integers, it follows that multiplying a transaction's gas limit by the hypothetical gas price of (2) results in its worst-case gas cost. 
Any sender able to pay for this upper bound (in addition to value transfers) is guaranteed to be able to pay for the actual execution cost. Transaction *acceptance* under worst-case cost validity is therefore a guarantee of *settlement*. ### Queue DoS protection Worst-case cost validity only protects against DoS at the point of execution but leaves the queue vulnerable to high-limit, low-usage transactions. For example, a malicious user could send a transfer-only transaction (21k gas) with a limit set to consume the block's full gas limit. Although they would need sufficient funds to theoretically pay for all the reserved gas, they would never actually be charged this amount. Pushing a sufficient number of such transactions to the queue would artificially inflate the worst-case cost of other users. Therefore, the gas charged was modified from being equal to the gas usage to the above $g_C := \max\left(g_U, \frac{g_L}{\lambda}\right)$. The gas limit is typically set higher than the predicted gas consumption to allow for a buffer should the prediction be imprecise. This precludes setting $\lambda := 1$. Conversely, setting $\lambda := \infty$ would allow users to attack the queue with high-limit, low-consumption transactions. Setting $\lambda ~:= 2$ allows for a 100% buffer on gas-usage estimates without penalising the sender, while still disincentivising falsely high limits. #### Upper bound on queue DoS Recall $R$ (gas capacity per second) for rate and $g_C$ (gas charged) as already defined. The actual gas excess $x_A$ is bounded above by the worst-case excess $x_W$, both of which can be used to calculate respective base fees $f_A$ and $f_W$ (the variable element of gas prices) from the existing exponential function: $$ f := M \cdot \exp\left( \frac{x}{K} \right). $$ Mallory is attempting to maximize the DoS ratio $$ D := \frac{f_W}{f_A} $$ by maximizing $\Sigma_{\forall i} (g_L - g_U)_i$ to maximize $x_W - x_A$. 
> \[!TIP] > Although $D$ shadows a variable in ACP-176, that one is very different to anything here so there won't be confusion. Recall that the increasing excess occurs such that $$ x := x + g \cdot \frac{(R - T)}{R} $$ Since the largest allowed size of the queue when enqueuing a new block is $\omega_Q$, we can derive an upper bound on the difference in the changes to worst-case and actual gas excess caused by the transactions in the queue before the new block is added: $$ \begin{align} \Delta x_A &\ge \frac{\omega_Q}{\lambda} \cdot \frac{(R - T)}{R} \\ \Delta x_W &= \omega_Q \cdot \frac{(R - T)}{R} \\ \Delta x_W - \Delta x_A &\le \omega_Q \cdot \frac{(R - T)}{R} - \frac{\omega_Q}{\lambda} \cdot \frac{(R - T)}{R} \\ &= \omega_Q \cdot \frac{(R - T)}{R} \cdot \left(1-\frac{1}{\lambda}\right) \\ &= \omega_Q \cdot \frac{(2 \cdot T - T)}{2 \cdot T} \cdot \left(1-\frac{1}{\lambda}\right) \\ &= \omega_Q \cdot \frac{T}{2 \cdot T} \cdot \left(1-\frac{1}{\lambda}\right) \\ &= \frac{\omega_Q}{2} \cdot \left(1-\frac{1}{\lambda}\right) \\ &= \frac{2 \cdot \omega_B}{2} \cdot \left(1-\frac{1}{\lambda}\right) \\ &= \omega_B \cdot \left(1-\frac{1}{\lambda}\right) \\ &= R \cdot \tau \cdot \lambda \cdot \left(1-\frac{1}{\lambda}\right) \\ &= R \cdot \tau \cdot (\lambda-1) \\ &= 2 \cdot T \cdot \tau \cdot (\lambda-1) \end{align} $$ Note that we can express Mallory's DoS quotient as: $$ \begin{align} D &= \frac{f_W}{f_A} \\ &= \frac{ M \cdot \exp \left( \frac{x_W}{K} \right)}{ M \cdot \exp \left( \frac{x_A}{K} \right)} \\ & = \exp \left( \frac{x_W - x_A}{K} \right). \end{align} $$ When the queue is empty (i.e. the execution stream has caught up with accepted transactions), the worst-case fee estimate $f_W$ is known to be the actual base fee $f_A$; i.e. $Q = \emptyset \implies D=1$. 
The previous bound on $\Delta x_W - \Delta x_A$ also bounds Mallory's ability such that: $$ \begin{align} D &\le \exp \left( \frac{2 \cdot T \cdot \tau \cdot (\lambda-1)}{K} \right)\\ &= \exp \left( \frac{2 \cdot T \cdot \tau \cdot (\lambda-1)}{87 \cdot T} \right)\\ &= \exp \left( \frac{2 \cdot \tau \cdot (\lambda-1)}{87} \right)\\ \end{align} $$ Therefore, for the values suggested by this ACP: $$ \begin{align} D &\le \exp \left( \frac{2 \cdot 5 \cdot (2 - 1)}{87} \right)\\ &= \exp \left( \frac{10}{87} \right)\\ &\simeq 1.12\\ \end{align} $$ In summary, Mallory can require users to increase their gas price by at most \~12%. In practice, the gas price often fluctuates more than 12% on a regular basis. Therefore, this does not appear to be a significant attack vector. However, any deviation that dislodges the gas price bidding mechanism from a true bidding mechanism is of note. ## Appendix ### JSON RPC methods Although asynchronous execution decouples the transactions and receipts recorded by a specific block, APIs MUST NOT alter their behavior to mirror this. In particular, the API method `eth_getBlockReceipts` MUST return the receipts corresponding to the block's transactions, not the receipts settled in the block. #### Named blocks The Ethereum Mainnet APIs allow for retrieving blocks by named parameters that the API server resolves based on their consensus mechanism. Other than the *earliest* (genesis) named block, which MUST be interpreted in the same manner, all other named blocks are mapped to SAE in terms of the *execution* status of blocks and MUST be interpreted as follows: * *pending*: the most recently *accepted* block; * *latest*: the block that was most recently *executed*; * *safe* and *finalized*: the block that was most recently *settled*. > \[!NOTE] > The finality guarantees of Snowman consensus remove any distinction between *safe* and *finalized*. 
> Furthermore, the *latest* block is not at risk of re-org, only of a negligible risk of data corruption local to the API node. ### Observations around transaction prioritisation As EOA-to-EOA transfers of value are entirely guaranteed upon *acceptance*, block builders MAY choose to prioritise other transactions for earlier execution. A reliable marker of such transactions is a gas limit of 21,000 as this is an indication from the sender that they do not intend to execute bytecode. However, this could delay the ability to issue transactions that depend on these EOA-to-EOA transfers. Block builders are free to make their own decisions around which transactions to include. ## Acknowledgments Thank you to the following non-exhaustive list of individuals for input, discussion, and feedback on this ACP. * [Aaron Buchwald](https://github.com/aaronbuchwald) * [Angharad Thomas](https://x.com/divergenceharri) * [Martin Eckardt](https://github.com/martineckardt) * [Meaghan FitzGerald](https://github.com/meaghanfitzgerald) * [Michael Kaplan](https://github.com/michaelkaplan13) * [Yacov Manevich](https://github.com/yacovm) ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-20: Ed25519 P2p URL: /docs/acps/20-ed25519-p2p Details for Avalanche Community Proposal 20: Ed25519 P2p | ACP | 20 | | :------------ | :----------------------------------------------------------------------------------- | | **Title** | Ed25519 p2p | | **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/21)) | | **Track** | Standards | ## Abstract Support Ed25519 TLS certificates for p2p communications on the Avalanche network. Permit usage of Ed25519 public keys for Avalanche Network Client (ANC) NodeIDs. Support Ed25519 signatures in the ProposerVM. 
## Motivation Avalanche Network Clients (ANCs) rely on TLS handshakes to facilitate p2p communications. AvalancheGo (and by extension, the Avalanche Network) only supports TLS certificates that use RSA or ECDSA as the signing algorithm and explicitly prohibits any other signing algorithms. If a TLS certificate is not present, AvalancheGo will generate and persist to disk a 4096 bit RSA private key on start-up. This key is subsequently used to generate the TLS certificate which is also persisted to disk. Finally, the TLS certificate is hashed to generate a 20 byte NodeID. Authenticated p2p messaging was required when the network started and it was sufficient to simply use a hash of the TLS certificate. With the introduction of Snowman++, validators were then required to produce shareable message signatures. The Snowman++ block headers (specified [here](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/proposervm/README.md#snowman-block-extension)) were thus required to include the full TLS `Certificate` along with the `Signature`. However, TLS certificates support Ed25519 as their signing algorithm. Ed25519 is an IETF recommendation ([RFC8032](https://datatracker.ietf.org/doc/html/rfc8032)) with some very nice properties, a notable one being its small sizes: * 32 byte public key * 64 byte private key * 64 byte signature Because of the small size of the public key, it can be used for the NodeID directly with a marginal hit to size (an additional 12 bytes). Additionally, the brittle reliance on static TLS certificates can be removed. Using the Ed25519 private key, a TLS certificate can be generated in-memory on node startup and used for p2p communications. This reduces the maintenance burden on node operators as they will only need to back up the Ed25519 private key instead of the TLS certificate and the RSA private key. Ed25519 has wide adoption, including in the crypto industry. 
A non-exhaustive list of things that use Ed25519 can be found [here](https://ianix.com/pub/ed25519-deployment.html). More information about the Ed25519 protocol itself can be found [here](https://ed25519.cr.yp.to). ## Specification ### Required Changes 1. Support registration of 32-byte NodeIDs on the P-chain 2. Generate an Ed25519 key by default (`staker.key`) on node startup 3. Use the Ed25519 key to generate a TLS certificate on node startup 4. Add support for Ed25519 keys + signatures to the proposervm 5. Remove the TLS certificate embedding in proposervm blocks when an Ed25519 NodeID is the proposer 6. Add support for Ed25519 in `PeerList` messages Changes to the p2p layer will be minimal as TLS handshakes are used to do p2p communication. Ed25519 will need to be added as a supported algorithm. The P-chain will also need to be modified to support registration of 32-byte NodeIDs. During serialization, the length of the NodeID is not serialized and was assumed to always be 20 bytes. Implementers of this ACP must take care to continue parsing old transactions correctly. This ACP could be implemented by adding a new tx type that requires Ed25519 NodeIDs only. If the implementer chooses to do this, a separate follow-up ACP must be submitted detailing the format of that transaction. ### Future Work In the future, usage of non-Ed25519 TLS certificates should be prohibited to remove any dependency on them. This will further secure the Avalanche network by reducing complexity. The path to doing so is not outlined in this ACP. ## Backwards Compatibility An implementation of this proposal should not introduce any backwards compatibility issues. NodeIDs that are 20 bytes should continue to be treated as hashes of TLS certificates. NodeIDs of 32 bytes (size of Ed25519 public key) should be supported following implementation of this proposal. ## Reference Implementation TLS certificate generation using an Ed25519 private key is standard. 
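As an illustrative sketch (not AvalancheGo's actual implementation), the flow described above — generate an Ed25519 key, derive an in-memory self-signed TLS certificate from it, and use the 32-byte public key as the NodeID — might look like the following with Go's standard library; the certificate subject and validity period are arbitrary placeholders:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// newNodeTLSCert generates an Ed25519 staking key and derives an
// in-memory, self-signed TLS certificate from it. Under this proposal,
// only the Ed25519 key needs to be persisted; the certificate can be
// regenerated on every startup, and the 32-byte public key can serve
// directly as the NodeID.
func newNodeTLSCert() (tls.Certificate, ed25519.PublicKey, error) {
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		return tls.Certificate{}, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "avalanche-node"}, // illustrative subject
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
	}
	// x509.CreateCertificate accepts Ed25519 keys since Go 1.13.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, pub, priv)
	if err != nil {
		return tls.Certificate{}, nil, err
	}
	return tls.Certificate{Certificate: [][]byte{der}, PrivateKey: priv}, pub, nil
}

func main() {
	// The returned certificate would be passed to a tls.Config for
	// p2p handshakes; nothing is persisted to disk here.
	_, pub, err := newNodeTLSCert()
	if err != nil {
		panic(err)
	}
	fmt.Printf("NodeID length: %d bytes\n", len(pub)) // 32 bytes
}
```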
The golang standard library has a reference [implementation](https://github.com/golang/go/blob/go1.20.10/src/crypto/tls/generate_cert.go). Parsing TLS certificates and extracting the public key is also standard. AvalancheGo already contains [code](https://github.com/ava-labs/avalanchego/blob/638000c42e5361e656ffbc27024026f6d8f67810/staking/verify.go#L55-L65) to verify the public key from a TLS certificate. ## Security Considerations ### Validation Criteria Although Ed25519 is standardized in [RFC8032](https://datatracker.ietf.org/doc/html/rfc8032), the RFC does not define strict validation criteria. This has led to inconsistencies in the validation criteria across implementations of the signature scheme. This is unacceptable for any protocol that requires participants to reach consensus on signature validity. Henry de Valence highlights the complexity of this issue [here](https://hdevalence.ca/blog/2020-10-04-its-25519am). From [Chalkias et al. 2020](https://eprint.iacr.org/2020/1244.pdf): * RFC 8032 and the NIST FIPS 186-5 draft both require rejecting non-canonically encoded points, but not all implementations follow those guidelines. * RFC 8032 allows optionality between using a permissive verification equation and a stricter verification equation. Different implementations use different equations, meaning validation results can vary even across implementations that follow RFC 8032. Zcash adopted [ZIP-215](https://zips.z.cash/zip-0215) (proposed by Henry de Valence) to explicitly define the Ed25519 validation criteria. Implementers of this ACP **must** use the ZIP-215 validation criteria. The [`ed25519consensus`](https://github.com/hdevalence/ed25519consensus) golang library is a minimal fork of golang's `crypto/ed25519` package with support for ZIP-215 verification. It is maintained by [Filippo Valsorda](https://github.com/FiloSottile), who also maintains many golang stdlib cryptography packages. 
It is strongly recommended to use this library for golang implementations. ## Open Questions *Can this Ed25519 key be used in alternative communication protocols?* Yes. Ed25519 can be used for alternative communication protocols like [QUIC](https://datatracker.ietf.org/group/quic/about) or [NOISE](http://www.noiseprotocol.org/noise.html). This ACP removes the reliance on TLS certificates and associates an Ed25519 public key with NodeIDs. This allows for experimentation with different communication protocols that may be better suited for a high-throughput blockchain like Avalanche. *Can this Ed25519 key be used for Verifiable Random Functions?* Yes. VRFs, as specified in [RFC9381](https://datatracker.ietf.org/doc/html/rfc9381), can be constructed using elliptic curves that are secure in the cryptographic random oracle model. Ed25519 test vectors are provided in the RFC for implementers of an Elliptic Curve VRF (ECVRF). This allows Avalanche validators to generate a VRF per block using their associated Ed25519 keys, including for Subnets. ## Acknowledgements Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@patrick-ogrady](https://github.com/patrick-ogrady) for their feedback on these ideas. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). 
# ACP-204: Precompile Secp256r1 URL: /docs/acps/204-precompile-secp256r1 Details for Avalanche Community Proposal 204: Precompile Secp256r1 # ACP-204: Precompile for secp256r1 Curve Support | ACP | 204 | | :------------ | :---------------------------------------------------------------------------------------- | | **Title** | Precompile for secp256r1 Curve Support | | **Author(s)** | [Santiago Cammi](https://github.com/scammi), [Arran Schlosberg](https://github.com/ARR4N) | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/212)) | | **Track** | Standards | ## Abstract This proposal introduces a precompiled contract that performs signature verifications for the secp256r1 elliptic curve on Avalanche's C-Chain. The precompile will be implemented at address `0x0000000000000000000000000000000000000100` and will enable native verification of P-256 signatures, significantly improving gas efficiency for biometric authentication systems, WebAuthn, and modern device-based signing mechanisms. ## Motivation The secp256r1 (P-256) elliptic curve is the standard cryptographic curve used by modern device security systems, including Apple's Secure Enclave, Android Keystore, WebAuthn, and Passkeys. However, Avalanche currently only supports secp256k1 natively, forcing developers to use expensive Solidity-based verification that costs [200k-330k gas per signature verification](https://hackmd.io/@1ofB8klpQky-YoR5pmPXFQ/SJ0nuzD1T#Smart-Contract-Based-Verifiers). 
This ACP proposes implementing EIP-7951's secp256r1 precompiled contract to unlock significant ecosystem benefits: ### Enterprise & Institutional Adoption * Reduced onboarding friction: Enterprises can leverage existing biometric authentication infrastructure instead of managing seed phrases or hardware wallets * Regulatory compliance: Institutions can utilize their approved device security standards and identity management systems * Cost optimization: \~50x gas reduction (from 200k-330k to 6,900 gas) makes enterprise-scale applications economically viable This \~50x gas cost reduction makes these use cases economically viable while maintaining the security properties institutions and users expect from their existing devices. Adding the precompiled contract at the same address as used in [RIP-7212](https://github.com/ethereum/RIPs/blob/master/RIPS/rip-7212.md) provides consistency across ecosystems and allows any libraries developed to interact with the precompile to be used unmodified. ## Specification This ACP implements [EIP-7951](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-7951.md) for secp256r1 signature verification on Avalanche. The specification follows EIP-7951 exactly, with the precompiled contract deployed at address `0x0000000000000000000000000000000000000100`. ### Core Functionality * Input: 160 bytes (message hash + signature components r,s + public key coordinates x,y) * Output: success: 32 bytes `0x...01`; failure: no data returned * Gas Cost: 6,900 gas (based on EIP-7951 benchmarking) * Validation: Full compliance with NIST FIPS 186-3 specification ### Activation This precompile may be activated as part of Avalanche's next network upgrade. Individual Avalanche L1s and subnets could adopt this enhancement independently through their respective client software updates. 
For complete technical specifications, validation requirements, and implementation details, refer to [EIP-7951](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-7951.md). ## Backwards Compatibility This ACP introduces a new precompiled contract and does not modify existing functionality. No backwards compatibility issues are expected since: 1. The precompile uses a previously unused address 2. No existing opcodes or consensus rules are modified 3. The change is additive and opt-in for applications Adoption requires a coordinated network upgrade for the C-Chain. Other EVM L1s can adopt this enhancement independently by upgrading their client software. ## Security Considerations ### Cryptographic Security * The secp256r1 curve is standardized by NIST and widely vetted * Security properties are comparable to secp256k1 (used by ECRECOVER) * Implementation follows NIST FIPS 186-3 specification exactly ### Implementation Security * Signature verification (vs public-key recovery) approach maximizes compatibility with existing P-256 ecosystem * No malleability check included to match NIST specification, but wrapper libraries may choose to add this * Input validation prevents invalid curve points and out-of-range signature components ### Network Security * Gas cost prevents potential DoS attacks through expensive computation * No consensus-level security implications beyond standard precompile considerations ## Reference Implementation The implementation will build upon existing work: 1. EIP-7951 Reference: The [go-ethereum implementation](https://github.com/ethereum/go-ethereum/pull/31991) of EIP-7951 provides the foundation 2. Coreth Implementation: Integration with Avalanche's C-Chain (Avalanche's fork of go-ethereum) 3. 
Cryptographic Library: Implementation utilizes Go's standard library `crypto/ecdsa` and `crypto/elliptic` packages, which implement NIST P-256 per FIPS 186-3 ([Go documentation](https://pkg.go.dev/crypto/elliptic#P256)) The implementation follows established patterns for precompile integration, adding the contract to the precompile registry and implementing the verification logic using established cryptographic libraries. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-209: Eip7702 Style Account Abstraction URL: /docs/acps/209-eip7702-style-account-abstraction Details for Avalanche Community Proposal 209: Eip7702 Style Account Abstraction | ACP | 209 | | :------------ | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Title** | EIP-7702-style Set Code for EOAs | | **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Aaron Buchwald ([@aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/216)) | | **Track** | Standards | ## Abstract [EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md) was activated on the Ethereum mainnet in May 2025 as part of the Pectra upgrade, and introduced a new "set code transaction" type that allows Externally Owned Accounts (EOAs) to set the code in their account. 
This enabled several UX improvements, including batching multiple operations into a single atomic transaction, sponsoring transactions on behalf of another account, and privilege de-escalation for EOAs. This ACP proposes adding a similar transaction type and functionality to Avalanche EVM implementations in order to have them support the same style of UX available on Ethereum. Modifications to the handling of account nonce and balances are required in order for it to be safe when used in conjunction with the streaming asynchronous execution (SAE) mechanism proposed in [ACP-194](https://github.com/avalanche-foundation/ACPs/tree/4a9408346ee408d0ab81050f42b9ac5ccae328bb/ACPs/194-streaming-asynchronous-execution). ## Motivation The motivation for this ACP is the same as the motivation described in [EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#motivation). However, EIP-7702 as implemented for Ethereum breaks invariants required for EVM chains that use the ACP-194 SAE mechanism. There has been strong community feedback in support of ACP-194 for its potential to: * Allow for increasing the target gas rate of Avalanche EVM chains, including the C-Chain * Enable the use of an encrypted mempool to prevent front-running * Enable the use of real time VRF during transaction execution Given the strong support for ACP-194, bringing EIP-7702-style functionality to Avalanche EVMs requires modifications to preserve its necessary invariants, described below. ### Invariants needed for ACP-194 There are [two invariants explicitly broken by EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#backwards-compatibility) that are required for SAE. They are: 1. An account balance can only decrease as a result of a transaction originating from that account. 2. An EOA nonce may not increase after transaction execution has begun. 
These invariants are required for SAE in order to be able to statically analyze (i.e. determine without executing the transaction) that a transaction: * Has the proper nonce * Will have sufficient balance to pay for its worst case transaction fee plus the balance it sends As described in ACP-194, this lightweight analysis of transactions in blocks allows blocks to be accepted by consensus with the guarantee that they can be executed successfully. Only after block acceptance are the transactions within the block then put into a queue to be executed asynchronously. If the execution of transactions in the queue can decrease an EOA's account balance or change an EOA's current nonce, then block verification is unable to ensure that transactions in the block will be valid when executed. If transactions accepted into blocks can be invalidated prior to their execution, this poses DoS vulnerabilities because the invalidated transactions use up space in the pending execution queue according to their gas limits, but they do not pay any fees. Notably, EIP-7702's violation of these invariants already presents challenges for mempool verification on Ethereum. As [noted in the security considerations section](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#transaction-propagation), EIP-7702 makes it "possible to cause transactions from other accounts to become stale" and this "poses some challenges for transaction propagation" because nodes now cannot "statically determine the validity of transactions for that account". In synchronous execution environments such as Ethereum, these issues only pose potential DoS risks to the public transaction mempool. Under an asynchronous execution scheme, the issues pose DoS risks to the chain itself since the invalidated transactions can be included in blocks prior to their execution. 
## Specification The same [set code transaction as specified in EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md?ref=blockhead.co#set-code-transaction) will be added to Avalanche EVM implementations. The behavior of the transaction is the same as specified in EIP-7702. However, in order to keep the guarantee of transaction validity upon inclusion in an accepted block, two modifications are made to the transaction verification and execution rules. 1. Delegated accounts must maintain a "reserved balance" to ensure they can always pay for the transaction fees and transferred balance of transactions sent from the account. The reserved balances are managed via a new `ReservedBalanceManager` precompile, as specified below. 2. The handling of account nonces during execution is separated from the verification of nonces during block verification, as specified below. ### Reserved balances To ensure that all transactions can cover their worst case transaction fees and transferred balances upon inclusion in an accepted block, a "reserved balance" mechanism is introduced for accounts. Reserved balances are required for delegated accounts to guarantee that subsequent transactions they send after setting code for their account can still cover their fees and transfer amounts, even if transactions from other accounts reduce the account's balance prior to their execution. To allow for managing reserved balances, a new `ReservedBalanceManager` stateful precompile will be added at address `0x0200000000000000000000000000000000000006`. The `ReservedBalanceManager` precompile will have the following interface: ```solidity interface IReservedBalanceManager { /// @dev Emitted whenever an account's reserved balance is modified. event ReservedBalanceUpdated(address indexed account, uint256 newBalance); /// @dev Called to deposit the native token balance provided into the account's /// reserved balance. 
function depositReservedBalance(address account) external payable; /// @dev Returns the current reserved balance for the given account. function getReservedBalance(address account) external view returns (uint256 balance); } ``` The precompile will maintain a mapping of accounts to their current reserved balances. The precompile itself intentionally only allows for *increasing* an account's reserved balance. Reducing an account's reserved balance is only ever done by the EVM when a transaction is sent from the account, as specified below. During transaction verification, the following rules are applied: * If the sender EOA account has not set code via an EIP-7702 transaction, no reserved balance is required. * The transaction is confirmed to be able to pay for its worst case transaction fee and transferred balance by looking at the sender account's regular balance and accounting for prior transactions it has sent that are still in the pending execution queue, as specified in ACP-194. * Otherwise, if the sender EOA account has previously been delegated via an EIP-7702 transaction (even if that transaction is still in the pending execution queue), then the account's current "[settled](https://github.com/avalanche-foundation/ACPs/tree/4a9408346ee408d0ab81050f42b9ac5ccae328bb/ACPs/194-streaming-asynchronous-execution#settling-blocks)" reserved balance must be sufficient to cover the sum of the worst case transaction fees and balances sent for all of the transactions in the pending execution queue after the set code transaction. During transaction execution, the following rules are applied: * When initially deducting balance from the sender EOA account for the maximum transaction fee and balance sent with the transaction, the account's regular balance is used first. The account's reserved balance is only reduced if the regular balance is insufficient. * In the execution of code as part of a transaction, only regular account balances are available. 
The only possible modification to reserved balances during code execution is increases via calls to the `ReservedBalanceManager` precompile `depositReservedBalance` function. * If there is a gas refund at the end of the transaction execution, the balance is first credited to the sender account's reserved balance, up to a maximum of the account's reserved balance prior to the transaction. Any remaining refund is credited to the account's regular balance. ### Handling of nonces To account for EOA account nonces being incremented during contract execution and potentially invalidating transactions from that EOA that have already been accepted, we separate the rules for how nonces are verified during block verification and how they are handled during execution. During block verification, all transactions must be verified to have a correct nonce value based on the latest "settled" state root, as defined in ACP-194, and the number of transactions from the sender account in the pending execution queue. Specifically, the required nonce is derived from the settled state root and incremented by one for each of the sender’s transactions already accepted into the pending execution queue or current block. During execution, the nonce used must be one greater than the latest nonce used by the account, accounting for both all transactions from the account and all contracts created by the account. This means that the actual nonce used by a transaction may differ from the nonce assigned in the raw transaction itself and used in verification. Separating the nonce values used for block verification and execution ensures that transactions accepted in blocks cannot be invalidated by the execution of transactions before them in the pending execution queue. It still provides the same level of replay protection to transactions, as a transaction with a given nonce from an EOA can be accepted at most once. However, this separation has a subtle potential impact on contract creation. 
Previously, the resulting address of a contract could be deterministically derived from a contract creation transaction based on its sender address and the nonce set in the transaction. Now, since the nonce used in execution is separate from that set in the transaction, this is no longer guaranteed. ## Backwards Compatibility The introduction of EIP-7702 transactions will require a network upgrade to be scheduled. Upon activation, a few invariants will be broken: * (From EIP-7702) `tx.origin == msg.sender` can only be true in the topmost frame of execution. * Once an account has been delegated, it can invoke multiple calls per transaction. * (From EIP-7702) An EOA nonce may not increase after transaction execution has begun. * Once an account has been delegated, the account may call a create operation during execution, causing the nonce to increase. * The contract address of a contract deployed by an EOA (via transaction with an empty "to" address) can be derived from the sender address and the transaction's nonce. * If earlier transactions cause the nonce to increase before execution, the actual nonce used in a contract creation transaction may differ from the one in the transaction payload, altering the resulting contract address. * Note that this can only occur for accounts that have been delegated, and whose delegated code involves contract creation. Additionally, at all points after the acceptance of a set code transaction, an EOA must have sufficient reserved balance to cover the sum of the worst case transaction fees and balances sent for all transactions in the pending execution queue after the set code transaction. Notably, this means that: * If a delegated account has zero reserved balance at any point, it will be unable to send any further transactions until a different account provides it with reserved balance via the `ReservedBalanceManager` precompile. 
* In order to initially "self-fund" its own reserved balance, an account must deposit reserved balance via the `ReservedBalanceManager` precompile prior to sending a set code transaction. * In order to transfer its full (regular + reserved) account balance, a delegated account must first deposit all of its regular balance into reserved balance. In order to support wallets as seamlessly as possible, the `eth_getBalance` RPC implementations should be updated to return the sum of an account's regular and reserved balances. Additionally, clients should provide a new `eth_getReservedBalance` RPC method to allow for querying the reserved balance of a given account. ## Reference Implementation A reference implementation is not yet available and must be provided for this ACP to be considered implementable. ## Security Considerations All of the [security considerations from the EIP-7702 specification](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md?ref=blockhead.co#security-considerations) apply here as well, except for the considerations regarding "sponsored transaction relayers" and "transaction propagation". Those two considerations do not apply here, as they are accounted for by the modifications made to introduce reserved balances and separate the handling of nonces in execution from verification. Additionally, given that an account's reserved balance may need to be updated in state when a transfer is sent from the account, it must be confirmed that 21,000 gas is still a sufficiently high cost for the potentially more expensive operation. Charging more gas for basic transfer transactions in this case could otherwise be an option, but would likely cause further backwards compatibility issues for smart contracts and off-chain services. ## Open Questions 1. Are the implementation and UX complexities regarding the `ReservedBalanceManager` precompile worth the UX improvements introduced by the new set code transaction type? 
* Except for having a contract spend an account's native token balance, most, if not all, of the UX improvements associated with the new transaction type could theoretically be implemented at the contract layer rather than the protocol layer. However, not all contracts provide support for account abstraction functionality via standards such as [ERC-2771](https://eips.ethereum.org/EIPS/eip-2771). 2. Are the implementation and UX complexities regarding the `ReservedBalanceManager` precompile worth giving delegate contracts the ability to spend native token balances? * An alternative may be to disallow delegate contracts from spending native token balances at all, and revert if they attempt to. They could use "wrapped native token" ERC20 implementations (e.g. WAVAX) to achieve the same effect. However, this may be equally or more complex at the implementation level, and would cause incompatibilities in delegate contract implementations for Ethereum. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). 
# ACP-224: Dynamic Gas Limit In Subnet Evm URL: /docs/acps/224-dynamic-gas-limit-in-subnet-evm Details for Avalanche Community Proposal 224: Dynamic Gas Limit In Subnet Evm | ACP | 224 | | :------------ | :---------------------------------------------------------------------------------------------------------------------------- | | **Title** | Introduce ACP-176-Based Dynamic Gas Limits and Fee Manager Precompile in Subnet-EVM | | **Author(s)** | Ceyhun Onur ([@ceyonur](https://github.com/ceyonur)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/230)) | | **Track** | Standards | ## Abstract Proposes implementing [ACP-176](https://github.com/avalanche-foundation/ACPs/blob/aa3bea24431b2fdf1c79f35a3fd7cc57eeb33108/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md) in Subnet-EVM, along with the addition of a new optional `ACP224FeeManagerPrecompile` that can be used to configure fee parameters on-chain dynamically after activation, in the same way that the existing `FeeManagerPrecompile` can be used today prior to ACP-176. ## Motivation ACP-176 updated the EVM dynamic fee mechanism to more accurately achieve the target gas consumption on-chain. It also added a mechanism for the target gas consumption rate to be dynamically updated. Until now, ACP-176 was only added to Coreth (C-Chain), primarily because most L1s prefer to control their fees and gas targets through the `FeeManagerPrecompile` and `FeeConfig` in genesis chain configuration, and the existing `FeeManagerPrecompile` is not compatible with the ACP-176 fee mechanism. [ACP-194](https://github.com/avalanche-foundation/ACPs/blob/aa3bea24431b2fdf1c79f35a3fd7cc57eeb33108/ACPs/194-streaming-asynchronous-execution/README.md) (SAE) depends on having a gas target and capacity mechanism aligned with ACP-176. 
Specifically, there must be a known gas capacity added per second, and maximum gas capacity. The existing windower fee mechanism employed by Subnet-EVM does not provide these properties because it does not have a fixed capacity rate, making it difficult to calculate worst-case bounds for gas prices. As such, adding ACP-176 into Subnet-EVM is a functional requirement for L1s to be able to use SAE in the future. Adding ACP-176 fee dynamics to Subnet-EVM also has the added benefit of aligning with Coreth such that only a single mechanism needs to be maintained on a go-forwards basis. While both ACP-176 and ACP-194 will be required upgrades for L1s, this ACP aims to provide similar controls for chains with a new precompile. A new dynamic fee configuration and fee manager precompile that maps well into the ACP-176 mechanism will be added, optionally allowing admins to adjust fee parameters dynamically. ## Specification ### ACP-176 Parameters This ACP uses the same parameters as in the [ACP-176 specification](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md#configuration-parameters), and allows their values to be configured on a chain-by-chain basis. 
The parameters and their current values used by the C-Chain are as follows: | Parameter | Description | C-Chain Configuration | | :-------- | :----------------------------------------------------- | :-------------------- | | $T$ | target gas consumed per second | dynamic | | $R$ | gas capacity added per second | 2\*T | | $C$ | maximum gas capacity | 10\*T | | $P$ | minimum target gas consumption per second | 1,000,000 | | $D$ | target gas consumption rate update constant | 2^25 | | $Q$ | target gas consumption rate update factor change limit | 2^15 | | $M$ | minimum gas price | 1x10^-18 AVAX | | $K$ | initial gas price update factor | 87\*T | ### Prior Subnet-EVM Fee Configuration Parameters Prior to this ACP, the Subnet-EVM fee configuration and fee manager precompile used the following parameters to control the fee mechanism: **GasLimit**: Sets the max amount of gas consumed per block. **TargetBlockRate**: Sets the target rate of block production in seconds used for fee adjustments. If the actual block rate is faster than this target, block gas cost will be increased, and vice versa. **MinBaseFee**: The minimum base fee sets a lower bound on the EIP-1559 base fee of a block. Since the block's base fee sets the minimum gas price for any transaction included in that block, this effectively sets a minimum gas price for any transaction. **TargetGas**: Specifies the targeted amount of gas (including block gas cost) to consume within a rolling 10s window. When the dynamic fee algorithm observes that network activity is above/below the `TargetGas`, it increases/decreases the base fee proportionally to how far above/below the target actual network activity is. **BaseFeeChangeDenominator**: Divides the difference between actual and target utilization to determine how much to increase/decrease the base fee. A larger denominator indicates a slower changing, stickier base fee, while a lower denominator allows the base fee to adjust more quickly. 
**MinBlockGasCost**: Sets the minimum amount of gas to charge for the production of a block.

**MaxBlockGasCost**: Sets the maximum amount of gas to charge for the production of a block.

**BlockGasCostStep**: Determines how much to increase/decrease the block gas cost depending on the amount of time elapsed since the previous block. If the block is produced at the target rate, the block gas cost will stay the same as the block gas cost for the parent block. If it is produced faster/slower, the block gas cost will be increased/decreased by the step value for each second faster/slower than the target block rate accordingly. Note: if the `BlockGasCostStep` is set to a very large number, it effectively requires block production to go no faster than the `TargetBlockRate`. For example, if a block is produced two seconds faster than the target block rate, the block gas cost will increase by `2 * BlockGasCostStep`.

### ACP-176 Parameters in Subnet-EVM

ACP-176 will make the `GasLimit` and `BaseFeeChangeDenominator` configurations obsolete in Subnet-EVM. `TargetBlockRate`, `MinBlockGasCost`, `MaxBlockGasCost`, and `BlockGasCostStep` will also be removed by [ACP-226](https://github.com/avalanche-foundation/ACPs/tree/ce51dfab/ACPs/226-dynamic-minimum-block-times).

`MinGasPrice` is equivalent to `M` in ACP-176 and will be used to set the minimum gas price for ACP-176. This is similar to `MinBaseFee` in the old Subnet-EVM fee configuration and gives roughly the same effect. Currently, the default value is `25 * 10^-18` (25 nAVAX/Gwei). This default will be changed to the minimum possible denomination of the native EVM asset (1 Wei), which is aligned with the C-Chain.

`TargetGas` is equivalent to `T` (target gas consumed per second) in ACP-176 and will be used to set the target gas consumed per second for ACP-176.

`MaxCapacityFactor` is equivalent to the factor in `C` in ACP-176 and controls the maximum gas capacity (i.e. the block gas limit). This determines `C` as `C = MaxCapacityFactor * T`.
The default value will be 10, which is aligned with the C-Chain.

`TimeToDouble` will be used to control the speed of the fee adjustment (`K`). This determines `K` as `K = ((RMult - 1) * T * TimeToDouble) / ln(2)`, where `RMult` is the factor in `R` (defined as 2): at maximum capacity the excess grows at `(RMult - 1) * T` gas per second, so the gas price doubles every `TimeToDouble` seconds. The default value for `TimeToDouble` will be 60 (seconds), making `K=~87*T`, which is aligned with the C-Chain.

As a result, the parameters will be set as follows:

| Parameter | Description | Default Value | Is Configurable |
| :-------- | :----------------------------------------------------- | :------------ | :------------------------------------------------------------ |
| $T$ | target gas consumed per second | 1,000,000 | :white\_check\_mark: |
| $R$ | gas capacity added per second | 2\*T | :x: |
| $C$ | maximum gas capacity | 10\*T | :white\_check\_mark: Through `MaxCapacityFactor` (default 10) |
| $P$ | minimum target gas consumption per second | 1,000,000 | :x: |
| $D$ | target gas consumption rate update constant | 2^25 | :x: |
| $Q$ | target gas consumption rate update factor change limit | 2^15 | :x: |
| $M$ | minimum gas price | 1 Wei | :white\_check\_mark: |
| $K$ | gas price update constant | \~87\*T | :white\_check\_mark: Through `TimeToDouble` (default 60s) |

The gas capacity added per second (`R`) always being equal to `2*T` keeps it such that the gas price is capable of increasing and decreasing at the same rate. The values of `Q` and `D` affect the magnitude of change to `T` that each block can have, and the granularity at which the target gas consumption rate can be updated. The proposed values match the C-Chain, allowing each block to modify the current gas target by roughly $\frac{1}{1024}$ of its current value. This has provided sufficient responsiveness and granularity as is, removing the need to make `D` and `Q` dynamic or configurable. Similarly, 1,000,000 gas/second should be a low enough minimum target gas consumption for any EVM L1.
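As a sanity check, the derived constants in the table above can be computed directly. This is a minimal illustrative sketch (the variable names are ours, not from any implementation); it uses the relation that the excess accumulated over `TimeToDouble` seconds at maximum capacity must equal `K * ln(2)`:

```python
import math

T = 1_000_000             # target gas consumed per second (TargetGas default)
MAX_CAPACITY_FACTOR = 10  # default MaxCapacityFactor
TIME_TO_DOUBLE = 60       # default TimeToDouble, in seconds

R = 2 * T                    # gas capacity added per second
C = MAX_CAPACITY_FACTOR * T  # maximum gas capacity (block gas limit)

# At maximum capacity, the excess grows at R - T = T gas per second,
# so the price doubles every TIME_TO_DOUBLE seconds when:
K = (R - T) * TIME_TO_DOUBLE / math.log(2)

print(round(K / T, 1))  # ~86.6, i.e. K is roughly 87*T
```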
The target gas for a given L1 will be able to be increased from this value dynamically and has no maximum.

### Genesis Configuration

There will be a new genesis chain configuration to set the parameters for the chain without requiring the `ACP224FeeManager` precompile to be activated. This will be similar to the existing fee configuration parameters in the chain configuration. If there is no genesis configuration for the new fee parameters, the C-Chain default values will be used. This will look like the following:

```json
{
  ...
  "acp224Timestamp": uint64,
  "acp224FeeConfig": {
    "minGasPrice": uint64,
    "maxCapacityFactor": uint64,
    "timeToDouble": uint64
  }
}
```

### Dynamic Gas Target Via Validator Preference

For L1s that want their gas target to be dynamically adjusted based on the preferences of their validator sets, the same mechanism introduced on the C-Chain in ACP-176 will be employed. Validators will be able to set their `gas-target` preference in their node's configuration, and block builders can then adjust the target excess in blocks that they propose based on their preference.

### Dynamic Gas Target & Fee Configuration Via `ACP224FeeManagerPrecompile`

For L1s that want an "admin" account to be able to dynamically configure their gas target and other fee parameters, a new optional `ACP224FeeManagerPrecompile` will be introduced that can be activated. The precompile will offer similar controls to the existing `FeeManagerPrecompile` implemented in Subnet-EVM [here](https://github.com/ava-labs/subnet-evm/tree/53f5305/precompile/contracts/feemanager).
The solidity interface will be as follows:

```solidity
//SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import "./IAllowList.sol";

/// @title ACP-224 Fee Manager Interface
/// @notice Interface for managing dynamic gas limit and fee parameters
/// @dev Inherits from IAllowList for access control
interface IACP224FeeManager is IAllowList {
    /// @notice Configuration parameters for the dynamic fee mechanism
    struct FeeConfig {
        uint256 targetGas;         // Target gas consumption per second
        uint256 minGasPrice;       // Minimum gas price in wei
        uint256 maxCapacityFactor; // Maximum capacity factor (C = factor * T)
        uint256 timeToDouble;      // Time in seconds for gas price to double at max capacity
    }

    /// @notice Emitted when fee configuration is updated
    /// @param sender Address that triggered the update
    /// @param oldFeeConfig Previous configuration
    /// @param newFeeConfig New configuration
    event FeeConfigUpdated(address indexed sender, FeeConfig oldFeeConfig, FeeConfig newFeeConfig);

    /// @notice Set the fee configuration
    /// @param config New fee configuration parameters
    function setFeeConfig(FeeConfig calldata config) external;

    /// @notice Get the current fee configuration
    /// @return config Current fee configuration
    function getFeeConfig() external view returns (FeeConfig memory config);

    /// @notice Get the block number when fee config was last changed
    /// @return blockNumber Block number of last configuration change
    function getFeeConfigLastChangedAt() external view returns (uint256 blockNumber);
}
```

For chains with the precompile activated, `setFeeConfig` can be used to dynamically change each of the values in the fee configurations.
Importantly, any updates made via calls to `setFeeConfig` in a transaction will take effect only as of *settlement* of the transaction, not as of *acceptance* or *execution* (for transaction life cycles/status, refer to ACP-194 [here](https://github.com/avalanche-foundation/ACPs/tree/61d2a2a/ACPs/194-streaming-asynchronous-execution#description)). This ensures that all nodes apply the same worst-case bounds validation on transactions being accepted into the queue, since the worst-case bounds are affected by changes to the fee configuration. In addition to storing the latest fee configuration to be returned by `getFeeConfig`, the precompile will also maintain state storing the latest values of $q$ and $K$. These values can be derived from the `targetGas` and `timeToDouble` values given to the precompile, respectively. The value of $q$ can be deterministically calculated using the same method as Coreth currently employs to calculate a node's desired target excess [here](https://github.com/ava-labs/coreth/blob/b4c8300490afb7f234df704fdcc446f227e4ec2f/plugin/evm/upgrade/acp176/acp176.go#L170). Similarly, the value of $K$ could be computed directly according to: $K = \frac{targetGas \cdot timeToDouble}{ln(2)}$ However, floating point math may introduce inaccuracies. Instead, a similar approach will be employed using binary search to determine the closest integer solution for $K$. Similar to the [desired target excess calculation in Coreth](https://github.com/ava-labs/coreth/blob/0255516f25964cf4a15668946f28b12935a50e0c/plugin/evm/upgrade/acp176/acp176.go#L170), which takes a node's desired gas target and calculates its desired target excess value, the `ACP224FeeManagerPrecompile` will use binary search to determine the resulting dynamic target excess value given the `targetGas` value passed to `setFeeConfig`. All blocks accepted after the settlement of such a call must have the correct target excess value as derived from the binary search result. 
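As an illustration of the binary-search approach described above, the following sketch finds the smallest non-negative integer excess whose implied target reaches a desired gas target. It is a floating-point approximation under our own naming; the linked Coreth implementation uses fixed-point arithmetic instead:

```python
import math

P = 1_000_000  # ACP-176 minimum target gas consumption per second
D = 2 ** 25    # ACP-176 target gas consumption rate update constant

def desired_target_excess(desired_target: int) -> int:
    """Smallest non-negative integer e with P * e^(e/D) >= desired_target."""
    # Grow an upper bound first, then binary search within [lo, hi].
    lo, hi = 0, 1
    while P * math.exp(hi / D) < desired_target:
        hi *= 2
    while lo < hi:
        mid = (lo + hi) // 2
        if P * math.exp(mid / D) >= desired_target:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

The same search shape applies to deriving $K$ from `timeToDouble`: pick the integer whose implied value first meets the target, avoiding any floating-point logarithm in consensus-critical code.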
Block building logic can follow the below diagram for determining the target excess of blocks.

```mermaid
flowchart TD
    B{Is ACP224FeeManager precompile active?}
    B -- Yes --> C[Use targetExcess from precompile storage at latest settled root]
    B -- No --> D{Is gas-target set in node chain config file?}
    D -- Yes --> E[Calculate targetExcess from configured preference and allowed update bounds]
    D -- No --> F{Does parent block have ACP176 fields?}
    F -- Yes --> G[Use parent block ACP176 gas target]
    F -- No --> H[Use MinTargetPerSecond]
```

#### Adjustment to ACP-176 calculations for price discovery

ACP-176 defines the gas price for a block as:

$gas\_price = M \cdot e^{\frac{x}{K}}$

Now, whenever $M$ (`minGasPrice`) or $K$ (derived from `timeToDouble`) is changed via the `ACP224FeeManagerPrecompile`, $x$ must also be updated. Specifically, when $M$ is updated from $M_0$ to $M_1$, $x$ must also be updated from $x_0$ (the current excess) to $x_1$. $x_1$ theoretically could be calculated directly as:

$x_1 = ln(\frac{M_0}{M_1}) \cdot K + x_0$

However, this would introduce floating point inaccuracies. Instead, $x_1$ can be approximated using binary search to find the minimum non-negative integer such that the resulting gas price calculated using $M_1$ is greater than or equal to the current gas price prior to the change in $M$. In effect, this means that both reducing the minimum gas price and increasing the minimum gas price to a value less than the current gas price have no immediate effect on the current gas price. However, increasing the minimum gas price to a value greater than the current gas price will cause the gas price to immediately step up to the new minimum value.

Similarly, when $K$ is updated from $K_0$ to $K_1$, $x$ must also be updated from $x_0$ (the current excess) to $x_1$, where $x_1$ is calculated as:

$x_1 = x_0 \cdot \frac{K_1}{K_0}$

This makes it such that the current gas price stays the same when $K$ is changed.
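The two excess updates above can be sketched as follows. This is an illustrative floating-point model with our own function names; as noted, real implementations avoid floats by binary searching over integers:

```python
import math

def excess_after_min_price_change(m0: float, m1: float, x0: int, k: float) -> int:
    """Minimum non-negative integer x1 with m1 * e^(x1/k) >= m0 * e^(x0/k)."""
    price = m0 * math.exp(x0 / k)  # gas price before the change
    if m1 >= price:
        return 0                   # price steps up to the new minimum
    # Grow an upper bound, then binary search for the smallest valid x1.
    lo, hi = 0, max(x0, 1)
    while m1 * math.exp(hi / k) < price:
        hi *= 2
    while lo < hi:
        mid = (lo + hi) // 2
        if m1 * math.exp(mid / k) >= price:
            hi = mid
        else:
            lo = mid + 1
    return lo

def excess_after_k_change(x0: int, k0: int, k1: int) -> int:
    """Scale the excess by K1/K0 so the current gas price is unchanged."""
    return x0 * k1 // k0
```

For example, halving the minimum gas price leaves the current price untouched (the excess grows to compensate), while doubling `timeToDouble` doubles both $K$ and $x$, again leaving the price fixed.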
Changes to $K$ only impact how quickly or slowly the gas price can change going forward based on usage.

## Backwards Compatibility

ACP-224 will require a network upgrade in order to activate the new fee mechanism. Another activation will also be required to activate the new fee manager precompile. The activation of the precompile should never occur before the activation of ACP-224 (the fee mechanism), since the precompile depends on ACP-224's fee update logic to function correctly. Activation of the ACP-224 mechanism will deactivate the prior fee mechanism and the prior fee manager precompile. This ensures that there is no ambiguity or overlap between legacy and new pricing logic.

In order to provide a configuration for existing networks, a network upgrade override for both the activation time and the ACP-176 configuration parameters will be introduced. These upgrades are optional for now. However, with the introduction of ACP-194 (SAE), activating this ACP will be required; otherwise the network will not be able to use ACP-194.

## Reference Implementation

A reference implementation is not yet available and must be provided for this ACP to be considered implementable.

## Security Considerations

Generally, this has the same security considerations as ACP-176. However, due to the dynamic nature of the parameters exposed in the `ACP224FeeManagerPrecompile`, there is an additional risk of misconfiguration. Misconfiguration of parameters could leave the network vulnerable to a DoS attack or result in higher transaction fees than necessary.

## Open Questions

* Should activation of the `ACP224FeeManager` precompile disable the old precompile itself, or should we require it to be manually disabled as a separate upgrade?
* Should we use `targetGas` in the genesis/chain config as an optional field signaling whether the chain config should take precedence over the validator preferences?
* Similarly to the above, should we have a toggle in the `ACP224FeeManager` precompile to give control over `targetGas` to validators?

## Acknowledgements

* [Stephen Buttolph](https://github.com/StephenButtolph)
* [Arran Schlosberg](https://github.com/ARR4N)
* [Austin Larson](https://github.com/alarso16)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-226: Dynamic Minimum Block Times

URL: /docs/acps/226-dynamic-minimum-block-times

Details for Avalanche Community Proposal 226: Dynamic Minimum Block Times

| ACP | 226 |
| :------------ | :------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Title** | Dynamic Minimum Block Times |
| **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/228)) |
| **Track** | Standards |

## Abstract

Proposes replacing the current block production rate limiting mechanism on Avalanche EVM chains with a new mechanism where validators collectively and dynamically determine the minimum time between blocks.

## Motivation

Currently, Avalanche EVM chains employ a mechanism to limit the rate of block production by increasing the "block gas cost" that must be burned if blocks are produced more frequently than the target block rate specified for the chain. The block gas cost is paid by summing the "priority fee" amounts that all transactions included in the block collectively burn. This mechanism has a few notable suboptimal aspects:

1. There is no explicit minimum block delay time. Validators are capable of producing blocks as frequently as they would like by paying the additional fee, and too rapid block production could cause network stability issues.
2.
The target block rate can only be changed in a required network upgrade, which makes updates difficult to coordinate and operationalize.
3. The target block rate can only be specified with 1-second granularity, which does not allow for configuring sub-second block times as performance improvements are made to make them feasible.

With the prospect of ACP-194 removing block execution from consensus and allowing for increases to the gas target through the dynamic ACP-176 mechanism, Avalanche EVM chains would be better suited by having a dynamic minimum block delay time denominated in milliseconds. This allows networks to ensure that blocks are never produced more frequently than the minimum block delay, and allows validators to dynamically influence the minimum block delay value by setting their preference.

## Specification

### Block Header Changes

Upon activation of this ACP, the `blockGasCost` field in block headers will be required to be set to 0. This means that no validation is required that the cumulative priority fees of the transactions within a block exceed the block gas cost. Additionally, two new fields will be added to EVM block headers: `timestampMilliseconds` and `minimumBlockDelayExcess`.

#### `timestampMilliseconds`

The canonical serialization and interpretation of EVM blocks already contains a block timestamp specified in seconds. Altering this would require deep changes to the EVM codebase, as well as cause breaking changes to tooling such as indexers and block explorers. Instead, a new field is added representing the Unix timestamp in milliseconds. Header verification should verify that `block.timestamp` (in seconds) is aligned with `block.timestampMilliseconds`; more precisely, `block.timestampMilliseconds / 1000 == block.timestamp` (using integer division). Existing tools that do not need millisecond granularity do not need to parse the new field, which limits the number of breaking changes.
The `timestampMilliseconds` field will be represented in block headers as a `uint64`. #### `minimumBlockDelayExcess` The new `minimumBlockDelayExcess` field in the block header will be used to derive the minimum number of milliseconds that must pass before the next block is allowed to be accepted. Specifically, if block $B$ has a `minimumBlockDelayExcess` of $q$, then the effective timestamp of block $B+1$ in milliseconds must be at least $M * e^{\frac{q}{D}}$ greater than the effective timestamp of block $B$ in milliseconds. $M$, $q$, and $D$ are defined below in the mechanism specification. The `minimumBlockDelayExcess` field will be represented in block headers as a `uint64`. The value of `minimumBlockDelayExcess` can be updated in each block, similar to the gas target excess field introduced in ACP-176. The mechanism is specified below. ### Dynamic `minimumBlockDelay` mechanism The `minimumBlockDelay` can be defined as: $m = M * e^{\frac{q}{D}}$ Where: * $M$ is the global minimum `minimumBlockDelay` value in milliseconds * $q$ is a non-negative integer that is initialized upon the activation of this mechanism, referred to as the `minimumBlockDelayExcess` * $D$ is a constant that helps control the rate of change of `minimumBlockDelay` After the execution of transactions in block $b$, the value of $q$ can be increased or decreased by up to $Q$. It must be the case that $\left|\Delta q\right| \leq Q$, or block $b$ is considered invalid. The amount by which $q$ changes after executing block $b$ is specified by the block builder. 
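A minimal sketch of these header checks, using illustrative helper functions rather than client code:

```python
import math

def valid_timestamps(ts_seconds: int, ts_millis: int) -> bool:
    # The millisecond timestamp must agree with the legacy seconds field.
    return ts_millis // 1000 == ts_seconds

def valid_child(parent_ms: int, parent_q: int, child_ms: int, child_q: int,
                M: int, D: int, Q: int) -> bool:
    # Block b+1 must come at least m = M * e^(q/D) milliseconds after
    # block b, and |Δq| between consecutive blocks is capped at Q.
    min_delay_ms = M * math.exp(parent_q / D)
    return child_ms >= parent_ms + min_delay_ms and abs(child_q - parent_q) <= Q
```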
Block builders (i.e., validators) may set their desired value for $M$ (i.e., their desired `minimumBlockDelay`) in their configuration, and their desired value for $q$ can then be calculated as:

$q_{desired} = D \cdot ln\left(\frac{M_{desired}}{M}\right)$

Note that since $q_{desired}$ is only used locally and can be different for each node, it is safe for implementations to approximate the value of $ln\left(\frac{M_{desired}}{M}\right)$ and round the resulting value to the nearest integer. Alternatively, client implementations can choose to use binary search to find the closest integer solution, as `coreth` [does to calculate a node's desired target excess](https://github.com/ava-labs/coreth/blob/ebaa8e028a3a8747d11e6822088b4af7863451d8/plugin/evm/upgrade/acp176/acp176.go#L170).

When building a block, builders can calculate their next preferred value for $q$ based on the network's current value (`q_current`) according to:

```python
# Calculates a node's new desired value for q for a given block
def calc_next_q(q_current: int, q_desired: int, max_change: int) -> int:
    if q_desired > q_current:
        return q_current + min(q_desired - q_current, max_change)
    else:
        return q_current - min(q_current - q_desired, max_change)
```

As $q$ is updated after the execution of transactions within the block, $m$ is also updated such that $m = M \cdot e^{\frac{q}{D}}$ at all times. As noted above, the change to $m$ only takes effect for subsequent block production, and cannot change the time at which block $b$ can be produced itself.

### Gas Accounting Updates

Currently, the amount of gas capacity available is only incremented on a per second basis, as defined by ACP-176. With this ACP, it is expected for chains to be able to have sub-second block times. However, in the case when a chain's gas capacity is fully consumed (i.e.
during periods of heavy transaction load), blocks would not be able to be produced at sub-second intervals, because at least one second would need to elapse for new gas capacity to be added. To correct this, upon activation of this ACP, gas capacity will be added on a per-millisecond basis. The ACP-176 mechanism for determining the target gas consumption per second will remain unchanged, but its result will now be used to derive the target gas consumption per millisecond by dividing by 1000, and gas capacity will be added at that rate as each block advances time by some number of milliseconds.

### Activation Parameters for the C-Chain

Parameters at activation on the C-Chain are:
| Parameter | Description | C-Chain Configuration |
| --------- | ---------------------------------------------- | --------------------- |
| $M$ | minimum `minimumBlockDelay` value | 1 millisecond |
| $q$ | initial `minimumBlockDelayExcess` | 7,970,124 |
| $D$ | `minimumBlockDelay` update constant | $2^{20}$ |
| $Q$ | `minimumBlockDelay` update factor change limit | 200 |
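These values can be cross-checked against the stated design goals with a small illustrative computation (not part of the specification):

```python
import math

M = 1          # minimum minimumBlockDelay, in milliseconds
q = 7_970_124  # initial minimumBlockDelayExcess
D = 2 ** 20    # update constant
Q = 200        # per-block change limit on q

effective_delay_ms = M * math.exp(q / D)
blocks_to_halve_or_double = D * math.log(2) / Q

print(round(effective_delay_ms))         # ≈ 2000 ms, the current 2s block target
print(round(blocks_to_halve_or_double))  # ≈ 3634 consecutive max-change blocks
```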
$M$ was chosen as a lower bound for `minimumBlockDelay` values to allow high-performance Avalanche L1s to realize maximum performance and minimal transaction latency. Based on the 1 millisecond value for $M$, $q$ was chosen such that the effective `minimumBlockDelay` value at the time of activation is as close as possible to the current target block rate of the C-Chain, which is 2 seconds. $D$ and $Q$ were chosen such that it takes approximately 3,600 consecutive blocks of the maximum allowed change in $q$ for the effective `minimumBlockDelay` value to either halve or double.

### ProposerVM `MinBlkDelay`

The ProposerVM currently offers a static, configurable `MinBlkDelay` (in seconds) between consecutive blocks. With this ACP enforcing a dynamic minimum block delay time, any EVM instance adopting this ACP that also leverages the ProposerVM should ensure that the ProposerVM `MinBlkDelay` is set to 0.

### Note on Block Building

While there is no longer a requirement for blocks to burn a minimum block gas cost after the activation of this ACP, block builders should still take priority fees into account when building blocks to allow for transaction prioritization and to maximize the amount of native token (AVAX) burned in the block. From a user (transaction issuer) perspective, this means that a non-zero priority fee would only ever need to be set to ensure inclusion during periods of maximum gas utilization.

## Backwards Compatibility

While this proposal requires a network upgrade and updates the EVM block header format, it does so in a way that tries to maintain as much backwards compatibility as possible. Specifically, applications that currently parse and use the existing timestamp field denominated in seconds can continue to do so. The `timestampMilliseconds` header value only needs to be used in cases where more granular timestamps are required.
## Reference Implementation A reference implementation is not yet provided, and must be made available for this ACP to be considered `implementable`. ## Security Considerations Too rapid block production may cause availability issues if validators of the given blockchain are not able to keep up with blocks being proposed to consensus. This new mechanism allows validators to help influence the maximum frequency at which blocks are allowed to be produced, but potential misconfiguration or overly aggressive settings may cause problems for some validators. The mechanism for the minimum block delay time to adapt based on validator preference has already been used previously to allow for dynamic gas targets based on validator preference on the C-Chain, providing more confidence that it is suitable for controlling this network parameter as well. However, because each block is capable of changing the value of the minimum block delay by a certain amount, the lower the minimum block delay is, the more blocks that can be produced in a given time, and the faster the minimum block delay value will be able to change. This creates a dynamic where the mechanism for controlling `minimumBlockDelay` is more reactive at lower values, and less reactive at higher values. The global minimum `minimumBlockDelay` ($M$) provides a lower bound of how quickly blocks can ever be produced, but it is left to validators to ensure that the effective value does not exceed their collective preference. ## Acknowledgments Thanks to [Luigi D'Onorio DeMeo](https://x.com/luigidemeo) for continually bringing up the idea of reducing block times to provide better UX for users of Avalanche blockchains. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). 
# ACP-23: P Chain Native Transfers

URL: /docs/acps/23-p-chain-native-transfers

Details for Avalanche Community Proposal 23: P Chain Native Transfers

| ACP | 23 |
| :------------ | :--------------------------------------------------------- |
| **Title** | P-Chain Native Transfers |
| **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) |
| **Status** | Activated |
| **Track** | Standards |

## Abstract

Support native transfers on the P-chain. This enables users to transfer P-chain assets without leaving the P-chain or using a transaction type that's not meant for native transfers.

## Motivation

Currently, the P-chain has no simple transfer transaction type. The X-chain supports this functionality through a `BaseTx`. Although the P-chain contains transaction types that extend `BaseTx`, the `BaseTx` transaction type itself is not a valid transaction. This leads to abnormal implementations of P-chain native transfers, such as the AvalancheGo wallet abusing [`CreateSubnetTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.15/wallet/chain/p/builder.go#L54-L63) to replicate the functionality contained in `BaseTx`. With the growing number of subnets slated for launch on the Avalanche network, simple transfers will be demanded more by users. While there are work-arounds as mentioned before, the network should support this natively to provide a cheaper option for both validators and end-users.

## Specification

To support `BaseTx`, Avalanche Network Clients (like AvalancheGo) must register `BaseTx` with the type ID `0x22` in codec version `0x00`. For the specification of the transaction itself, see [here](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/platformvm/txs/base_tx.go#L29). Note that most other P-chain transactions extend this type; the only change in this ACP is to register it as a valid transaction itself.
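Conceptually, the entire change is a codec registration. The sketch below is hypothetical (AvalancheGo's actual codec API differs); only the type ID `0x22` and codec version `0x00` come from the specification above:

```python
# Register BaseTx as a valid transaction type in its own right:
# type ID 0x22 under codec version 0x00.
registry: dict[int, dict[int, str]] = {0x00: {}}

def register_type(codec_version: int, type_id: int, name: str) -> None:
    types = registry[codec_version]
    assert type_id not in types, "type ID already registered"
    types[type_id] = name

register_type(0x00, 0x22, "BaseTx")
```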
## Backwards Compatibility

Adding a new transaction type is an execution change and requires a mandatory upgrade for activation. Implementors must take care to reject this transaction prior to activation. This ACP only details the specification of the added `BaseTx` transaction type.

## Reference Implementation

An implementation of `BaseTx` support was created [here](https://github.com/ava-labs/avalanchego/pull/2232) and subsequently merged into AvalancheGo. Since the "D" Upgrade is not activated, this transaction will be rejected by AvalancheGo. If modifications are made to the specification of the transaction as part of the ACP process, the code must be updated prior to activation.

## Security Considerations

The P-chain has fixed fees, which do not place any limits on chain throughput. A potentially popular transaction type like `BaseTx` may cause periods of high usage. The reference implementation in AvalancheGo sets the transaction fee to 0.001 AVAX as a deterrent (equivalent to `ImportTx` and `ExportTx`). This should be sufficient for the time being, but a dynamic fee mechanism will need to be added to the P-chain in the future to mitigate this security concern. This is not addressed in this ACP as it requires a larger change to the fee dynamics of the P-chain as a whole.

## Open Questions

No open questions.

## Acknowledgements

Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@abi87](https://github.com/abi87) for their feedback on the reference implementation.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-24: Shanghai Eips URL: /docs/acps/24-shanghai-eips Details for Avalanche Community Proposal 24: Shanghai Eips | ACP | 24 | | :------------ | :--------------------------------------------------------- | | **Title** | Activate Shanghai EIPs on C-Chain | | **Author(s)** | Darioush Jalali ([@darioush](https://github.com/darioush)) | | **Status** | Activated | | **Track** | Standards | ## Abstract This ACP proposes the adoption of the following EIPs on the Avalanche C-Chain network: * [EIP-3651: Warm COINBASE](https://eips.ethereum.org/EIPS/eip-3651) * [EIP-3855: PUSH0 instruction](https://eips.ethereum.org/EIPS/eip-3855) * [EIP-3860: Limit and meter initcode](https://eips.ethereum.org/EIPS/eip-3860) * [EIP-6049: Deprecate SELFDESTRUCT](https://eips.ethereum.org/EIPS/eip-6049) ## Motivation The listed EIPs were activated on Ethereum mainnet as part of the [Shanghai upgrade](https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/shanghai.md#included-eips). This ACP proposes their activation on the Avalanche C-Chain in the next network upgrade. This maintains compatibility with upstream EVM tooling, infrastructure, and developer experience (e.g., Solidity compiler >= [0.8.20](https://github.com/ethereum/solidity/releases/tag/v0.8.20)). ## Specification & Reference Implementation This ACP proposes the EIPs be adopted as specified in the EIPs themselves. ANCs (Avalanche Network Clients) can adopt the implementation as specified in the [coreth](https://github.com/ava-labs/coreth) repository, which was adopted from the [go-ethereum v1.12.0](https://github.com/ethereum/go-ethereum/releases/tag/v1.12.0) release in this [PR](https://github.com/ava-labs/coreth/pull/277). 
In particular, note the following code: * [Activation of new opcode and dynamic gas calculations](https://github.com/ava-labs/coreth/blob/bf2051729c7aa0c4ed8848ad3a78e241a791b968/core/vm/jump_table.go#L92) * [EIP-3860 intrinsic gas calculations](https://github.com/ava-labs/coreth/blob/bf2051729c7aa0c4ed8848ad3a78e241a791b968/core/state_transition.go#L112-L113) * [EIP-3651 warm coinbase](https://github.com/ava-labs/coreth/blob/bf2051729c7aa0c4ed8848ad3a78e241a791b968/core/state/statedb.go#L1197-L1199) * Note EIP-6049 marks SELFDESTRUCT as deprecated, but does not remove it. The implementation in coreth is unchanged. ## Backwards Compatibility The following backward compatibility considerations were highlighted by the original EIP authors: * [EIP-3855](https://eips.ethereum.org/EIPS/eip-3855#backwards-compatibility): "... introduces a new opcode which did not exist previously. Already deployed contracts using this opcode could change their behaviour after this EIP". * [EIP-3860](https://eips.ethereum.org/EIPS/eip-3860#backwards-compatibility) "Already deployed contracts should not be effected, but certain transactions (with initcode beyond the proposed limit) would still be includable in a block, but result in an exceptional abort." Adoption of this ACP modifies consensus rules for the C-Chain, therefore it requires a network upgrade. ## Security Considerations Refer to the original EIPs for security considerations: * [EIP 3855](https://eips.ethereum.org/EIPS/eip-3855#security-considerations) * [EIP 3860](https://eips.ethereum.org/EIPS/eip-3860#security-considerations) ## Open Questions No open questions. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). 
# ACP-25: Vm Application Errors

URL: /docs/acps/25-vm-application-errors

Details for Avalanche Community Proposal 25: Vm Application Errors

| ACP | 25 |
| :------------ | :-------------------------------------------------------- |
| **Title** | Virtual Machine Application Errors |
| **Author(s)** | Joshua Kim ([@joshua-kim](https://github.com/joshua-kim)) |
| **Status** | Activated |
| **Track** | Standards |

## Abstract

Support a way for a Virtual Machine (VM) to signal application-defined error conditions to another VM.

## Motivation

VMs are able to build their own peer-to-peer application protocols using the `AppRequest`, `AppResponse`, and `AppGossip` primitives. `AppRequest` is a message type that requires a corresponding `AppResponse` to indicate a successful response. In the unhappy path where an `AppRequest` cannot be served, there is currently no native way for a peer to signal an error condition. VMs currently resort to timeouts in failure cases, where a client making a request falls back to marking its request as failed after some timeout period has expired.

Having a native application error type would offer a more powerful abstraction, where Avalanche nodes would be able to score peers based on perceived errors. This is not currently possible because Avalanche networking isn't aware of the specific implementation details of the messages being delivered to VMs. A native application error type would also guarantee that all clients can expect an `AppError` message to unblock an unsuccessful `AppRequest` and only rely on a timeout when absolutely necessary, significantly decreasing the latency for a client to unblock its request in the unhappy path.

## Specification

### Message

This modifies the p2p specification by introducing a new [protobuf](https://protobuf.dev/) message type:

```
message AppError {
  bytes chain_id = 1;
  uint32 request_id = 2;
  sint32 error_code = 3;
  string error_message = 4;
}
```

1. `chain_id`: Reserves field 1.
Senders **must** use the same chain id from the original `AppRequest` this `AppError` message is being sent in response to. 2. `request_id`: Reserves field 2. Senders **must** use the same request id from the original `AppRequest` this `AppError` message is being sent in response to. 3. `error_code`: Reserves field 3. Application-defined error code. Implementations *should* use the same error codes for the same conditions to allow clients to match on errors. Negative error codes are reserved for protocol-defined errors. VMs may reserve any error code greater than zero. 4. `error_message`: Reserves field 4. Application-defined, human-readable error message that *should not* be used for error matching. For error matching, use `error_code`. ### Reserved Errors The following error codes are currently reserved by the Avalanche protocol: | Error Code | Description | | ---------- | --------------- | | 0 | undefined | | -1 | network timeout | ### Handling Clients **must** respond to an inbound `AppRequest` message with either a corresponding `AppResponse` to indicate a successful response, or an `AppError` to indicate an error condition, by the `deadline` requested in the original `AppRequest`. ## Backwards Compatibility This new message type requires a network activation so that either an `AppResponse` or an `AppError` is accepted as the required response to an `AppRequest`. ## Reference Implementation * Message definition: [https://github.com/ava-labs/avalanchego/pull/2111](https://github.com/ava-labs/avalanchego/pull/2111) * Handling: [https://github.com/ava-labs/avalanchego/pull/2248](https://github.com/ava-labs/avalanchego/pull/2248) ## Security Considerations Clients should be aware that peers can arbitrarily send `AppError` messages to invoke error handling logic in a VM.
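The error-code conventions above can be illustrated with a short sketch. The code below is a hypothetical VM-side handler (the constants and function names are illustrative, not part of the specification): clients match on `error_code`, treat non-positive codes as protocol-reserved, and never key behavior off `error_message`.

```go
package main

import "fmt"

// Error codes following the AppError conventions above: negative codes are
// protocol-reserved, 0 is undefined, and a VM may define any code > 0.
// The VM-specific codes here are hypothetical examples.
const (
	errUndefined int32 = 0
	errTimeout   int32 = -1 // protocol-reserved: network timeout
	errNotFound  int32 = 1  // hypothetical VM-defined code
)

// isProtocolReserved reports whether a code is reserved by the protocol
// rather than defined by the VM.
func isProtocolReserved(code int32) bool {
	return code <= 0
}

// handleAppError matches on error_code only; error_message is used for
// logging, never for matching.
func handleAppError(code int32, message string) string {
	switch code {
	case errTimeout:
		return "retry against another peer"
	case errNotFound:
		return "mark requested data as missing"
	default:
		return fmt.Sprintf("unhandled error %d: %s", code, message)
	}
}

func main() {
	fmt.Println(handleAppError(errTimeout, ""))
	fmt.Println(handleAppError(errNotFound, "block not found"))
}
```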
## Open Questions ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-30: Avalanche Warp X Evm URL: /docs/acps/30-avalanche-warp-x-evm Details for Avalanche Community Proposal 30: Avalanche Warp X Evm | ACP | 30 | | :------------ | :------------------------------------------------------------------------------- | | **Title** | Integrate Avalanche Warp Messaging into the EVM | | **Author(s)** | Aaron Buchwald ([aaron.buchwald56@gmail.com](mailto:aaron.buchwald56@gmail.com)) | | **Status** | Activated | | **Track** | Standards | ## Abstract Integrate Avalanche Warp Messaging into the C-Chain and Subnet-EVM in order to bring Cross-Subnet Communication to the EVM on Avalanche. ## Motivation Avalanche Subnets enable the creation of independent blockchains within the Avalanche Network. Each Avalanche Subnet registers its validator set on the Avalanche P-Chain, which serves as an effective "membership chain" for the entire Avalanche Ecosystem. By providing read access to the validator set of every Subnet on the Avalanche Network, any Subnet can look up the validator set of any other Subnet within the Avalanche Ecosystem to verify an Avalanche Warp Message, which replaces the need for point-to-point exchange of validator set info between Subnets. This enables a lightweight protocol that allows seamless, on-demand communication between Subnets.
For more information on the Avalanche Warp Messaging message and payload formats see here: * [AWM Message Format](https://github.com/ava-labs/avalanchego/tree/v1.10.15/vms/platformvm/warp/README.md) * [Payload Format](https://github.com/ava-labs/avalanchego/tree/v1.10.15/vms/platformvm/warp/payload/README.md) This ACP proposes to activate Avalanche Warp Messaging on the C-Chain and offer compatible support in Subnet-EVM to provide the first standard implementation of AWM in production on the Avalanche Network. ## Specification The specification will be broken down into the Solidity interface of the Warp Precompile, a Golang example implementation, the predicate verification, and the proposed gas costs for the Warp Precompile. The Warp Precompile address is `0x0200000000000000000000000000000000000005`. ### Precompile Solidity Interface ```solidity // (c) 2022-2023, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; struct WarpMessage { bytes32 sourceChainID; address originSenderAddress; bytes payload; } struct WarpBlockHash { bytes32 sourceChainID; bytes32 blockHash; } interface IWarpMessenger { event SendWarpMessage(address indexed sender, bytes32 indexed messageID, bytes message); // sendWarpMessage emits a request for the subnet to send a warp message from [msg.sender] // with the specified parameters. // This emits a SendWarpMessage log from the precompile. When the corresponding block is accepted // the Accept hook of the Warp precompile is invoked with all accepted logs emitted by the Warp // precompile. // Each validator then adds the UnsignedWarpMessage encoded in the log to the set of messages // it is willing to sign for an off-chain relayer to aggregate Warp signatures. 
function sendWarpMessage(bytes calldata payload) external returns (bytes32 messageID); // getVerifiedWarpMessage parses the pre-verified warp message in the // predicate storage slots as a WarpMessage and returns it to the caller. // If the message exists and passes verification, returns the verified message // and true. // Otherwise, returns false and the empty value for the message. function getVerifiedWarpMessage(uint32 index) external view returns (WarpMessage calldata message, bool valid); // getVerifiedWarpBlockHash parses the pre-verified WarpBlockHash message in the // predicate storage slots as a WarpBlockHash message and returns it to the caller. // If the message exists and passes verification, returns the verified message // and true. // Otherwise, returns false and the empty value for the message. function getVerifiedWarpBlockHash( uint32 index ) external view returns (WarpBlockHash calldata warpBlockHash, bool valid); // getBlockchainID returns the snow.Context BlockchainID of this chain. // This blockchainID is the hash of the transaction that created this blockchain on the P-Chain // and is not related to the Ethereum ChainID. function getBlockchainID() external view returns (bytes32 blockchainID); } ``` ### Warp Predicates and Pre-Verification Signed Avalanche Warp Messages are encoded in the [EIP-2930 Access List](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2930.md) of a transaction, so that they can be pre-verified before executing the transactions in the block. The access list can specify any number of access tuples: a pair of an address and an array of storage slots in EIP-2930. Warp Predicate verification borrows this functionality to encode signed warp messages according to the serialization format defined [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/predicate/Predicate.md). Each Warp specific access tuple included in the access list specifies the Warp Precompile address as the address. 
The first tuple that specifies the Warp Precompile address is considered to be at index 0. Each subsequent access tuple that specifies the Warp Precompile address increases the Warp Message index by 1. Access tuples that specify any other address are not included in calculating the index for a specific warp message. Avalanche Warp Messages are pre-verified (prior to block execution), and pre-verification outputs a bitset for each transaction, where a 1 indicates an Avalanche Warp Message that failed verification at that index. Throughout the EVM execution, the Warp Precompile checks the status of the resulting bitset to determine whether pre-verified messages are considered valid. This has the additional benefit of encoding the Warp pre-verification results in the block, so that verifying a historical block can use the encoded results instead of needing to access potentially old P-Chain state. The result bitset is encoded in the block according to the predicate result specification [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/predicate/Results.md). Each Warp Message in the access list is charged gas to pay for verifying the Warp Message (gas costs are covered below) and is verified with the following steps (see [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/config.go#L218) for the reference implementation): 1. Unpack the predicate bytes 2. Parse the signed Avalanche Warp Message 3. Verify the signature according to the AWM spec in AvalancheGo [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/config.go#L218) (the quorum numerator/denominator for the C-Chain is 67/100 and is configurable in Subnet-EVM) ### Precompile Implementation All types, events, and function arguments/outputs are encoded using the ABI package according to the official [Solidity ABI Specification](https://docs.soliditylang.org/en/latest/abi-spec.html).
When the precompile is invoked with a given `calldata` argument, the first four bytes (`calldata[0:4]`) are read as the [function selector](https://docs.soliditylang.org/en/latest/abi-spec.html#function-selector). If the function selector matches the function selector of one of the functions defined by the Solidity interface, the contract invokes the corresponding execution function with the remaining calldata, i.e. `calldata[4:]`. For the full specification of the execution functions defined in the Solidity interface, see the reference implementation here: * [sendWarpMessage](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L226) * [getVerifiedWarpMessage](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L187) * [getVerifiedWarpBlockHash](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L145) * [getBlockchainID](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L96) ### Gas Costs The Warp Precompile charges gas during the verification of included Avalanche Warp Messages, which is included in the intrinsic gas cost of the transaction, and during the execution of the precompile. #### Verification Gas Costs Pre-verification charges the following costs for each Avalanche Warp Message: * GasCostPerSignatureVerification: 20000 * GasCostPerWarpMessageBytes: 100 * GasCostPerWarpSigner: 500 These numbers were determined experimentally using the benchmarks available [here](https://github.com/ava-labs/subnet-evm/blob/master/x/warp/predicate_test.go#L687) to target approximately the same mgas/s as existing precompile benchmarks in the EVM, which ranges between 50-200 mgas/s.
In addition to the benchmarks, the following assumptions and goals were taken into account: * BLS Public Key Aggregation is extremely fast, resulting in charging more for the base cost of a single BLS Multi-Signature Verification than for adding an additional public key * The cost per byte included in the transaction should be strictly higher for including Avalanche Warp Messages than via transaction calldata, so that the Warp Precompile does not change the worst case maximum block size #### Execution Gas Costs The execution gas costs were determined by summing the cost of the EVM operations that are performed throughout the execution of the precompile, with special consideration for added functionality that does not have an existing corollary within the EVM. ##### sendWarpMessage `sendWarpMessage` charges a base cost of 41,500 gas + 8 gas / payload byte. This is comprised of charging for the following components: * 375 gas / log operation * 3 topics \* 375 gas / topic * 20k gas to produce and serve a BLS Signature * 20k gas to store the Unsigned Warp Message * 8 gas / payload byte This charges 20k gas for storing an Unsigned Warp Message although the message is stored in an independent key-value database instead of the active state. This makes it less expensive to store, so 20k gas is a conservative estimate. Additionally, the cost of serving valid signatures is significantly cheaper than serving state sync and bootstrapping requests, so the cost to validators of serving signatures over time is not considered a significant concern. `sendWarpMessage` also charges for the log operation it includes, commensurate with the gas cost of a standard log operation in the EVM. A single `SendWarpMessage` log is charged: * 375 gas base cost * 375 gas per topic (`eventID`, `sender`, `messageID`) * 8 gas per payload byte encoded in the `message` field Topics are indexed fields encoded as 32-byte values to support querying based on given specified topic values.
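As a sanity check on the numbers above, the base cost can be reproduced by summing its components. The sketch below (constant names chosen for illustration, not taken from the reference implementation) computes the `sendWarpMessage` execution gas for a given payload size:

```go
package main

import "fmt"

// Gas constants quoted in this section (names are illustrative).
const (
	logBaseCost      uint64 = 375   // per log operation
	logTopicCost     uint64 = 375   // per topic; SendWarpMessage emits 3 topics
	blsSignatureCost uint64 = 20000 // produce and serve a BLS signature
	storeMessageCost uint64 = 20000 // store the unsigned warp message
	perPayloadByte   uint64 = 8     // per payload byte
)

// sendWarpMessageGas computes the execution gas charged by sendWarpMessage.
func sendWarpMessageGas(payloadLen uint64) uint64 {
	// 375 + 3*375 + 20,000 + 20,000 = 41,500 base gas
	base := logBaseCost + 3*logTopicCost + blsSignatureCost + storeMessageCost
	return base + perPayloadByte*payloadLen
}

func main() {
	fmt.Println(sendWarpMessageGas(0))   // 41500
	fmt.Println(sendWarpMessageGas(100)) // 42300
}
```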
##### getBlockchainID `getBlockchainID` charges 2 gas to serve an already in-memory 32-byte value, commensurate with existing in-memory operations. ##### getVerifiedWarpBlockHash / getVerifiedWarpMessage `GetVerifiedWarpMessageBaseCost` charges 2 gas for serving a Warp Message (either payload type). Warp messages are already in memory, so 2 gas is charged for access. `GasCostPerWarpMessageBytes` charges 100 gas per byte of the Avalanche Warp Message that is unpacked into a Solidity struct. ## Backwards Compatibility Existing EVM opcodes and precompiles are not modified by activating Avalanche Warp Messaging in the EVM. This is an additive change to activate a Warp Precompile on the Avalanche C-Chain and can be scheduled for activation in any VM running on Avalanche Subnets that are capable of sending / verifying the specified payload types. ## Reference Implementation A full reference implementation can be found in Subnet-EVM v0.5.9 [here](https://github.com/ava-labs/subnet-evm/tree/v0.5.9/x/warp). ## Security Considerations Verifying an Avalanche Warp Message requires reading the source subnet's validator set at the P-Chain height specified in the [Snowman++ Block Extension](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/proposervm/README.md#snowman-block-extension). The Avalanche PlatformVM provides the current state of the Avalanche P-Chain and maintains reverse diff-layers in order to compute Subnets' validator sets at historical points in time. As a result, verifying a historical Avalanche Warp Message that references an old P-Chain height requires applying diff-layers from the current state back to the referenced P-Chain height. As Subnets and the P-Chain continue to produce and accept new blocks, verifying the Warp Messages in historical blocks becomes increasingly expensive.
To efficiently handle historical blocks containing Avalanche Warp Messages, the EVM uses the result bitset encoded in the block to determine the validity of Avalanche Warp Messages without requiring a historical P-Chain state lookup. This is considered secure because the network already verified the Avalanche Warp Messages when the block was originally verified and accepted. ## Open Questions *How should validator set lookups in Warp Message verification be effectively charged for gas?* The verification cost of performing a validator set lookup on the P-Chain is currently excluded from the implementation. The cost of this lookup is variable, depending on how old the referenced P-Chain height is from the perspective of each validator. [Ongoing work](https://github.com/ava-labs/avalanchego/pull/1611) can parallelize P-Chain validator set lookups and message verification to reduce the impact on block verification latency to be negligible, and reduce costs to reflect the additional bandwidth of encoding Avalanche Warp Messages in the transaction. ## Acknowledgements Avalanche Warp Messaging and this effort to integrate it into the EVM has been a monumental effort. Thanks to all of the contributors who contributed their ideas, feedback, and development to this effort. @stephenbuttolph @patrick-ogrady @michaelkaplan13 @minghinmatthewlam @cam-schultz @xanderdunn @darioush @ceyonur ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-31: Enable Subnet Ownership Transfer URL: /docs/acps/31-enable-subnet-ownership-transfer Details for Avalanche Community Proposal 31: Enable Subnet Ownership Transfer | ACP | 31 | | :------------ | :--------------------------------------------------------- | | **Title** | Enable Subnet Ownership Transfer | | **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Activated | | **Track** | Standards | ## Abstract Allow the current owner of a Subnet to transfer ownership to a new owner. ## Motivation Once a Subnet is created on the P-chain through a [CreateSubnetTx](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/platformvm/txs/create_subnet_tx.go#L14-L19), the `Owner` of the subnet is currently immutable. Subnet operators may want to transition ownership of the Subnet to a new owner for a number of reasons, not least of which is rotating their control key(s) periodically. ## Specification Implement a new transaction type (`TransferSubnetOwnershipTx`) that: 1. Takes in a `Subnet` 2. Verifies that the `SubnetAuth` has the right to modify the `Subnet` by verifying it against the `Owner` field in the `CreateSubnetTx` that created the `Subnet` 3. Takes in a new `Owner` and assigns it as the new owner of `Subnet` This transaction type should have the following format (code below is presented in Golang): ```go type TransferSubnetOwnershipTx struct { // Metadata, inputs and outputs BaseTx `serialize:"true"` // ID of the subnet this tx is modifying Subnet ids.ID `serialize:"true" json:"subnetID"` // Proves that the issuer has the right to modify the subnet. SubnetAuth verify.Verifiable `serialize:"true" json:"subnetAuthorization"` // Who is now authorized to manage this subnet Owner fx.Owner `serialize:"true" json:"newOwner"` } ``` This transaction type should have type ID `0x21` in codec version `0x00`.
This transaction type should have a fee of `0.001 AVAX`, equivalent to adding a subnet validator/delegator. ## Backwards Compatibility Adding a new transaction type is an execution change and requires a mandatory upgrade for activation. Implementors must take care to reject this transaction prior to activation. This ACP only details the specification of the `TransferSubnetOwnershipTx` type. ## Reference Implementation An implementation of `TransferSubnetOwnershipTx` was created [here](https://github.com/ava-labs/avalanchego/pull/2178) and subsequently merged into AvalancheGo. Since the "D" Upgrade is not activated, this transaction will be rejected by AvalancheGo. If modifications are made to the specification of the transaction as part of the ACP process, the code must be updated prior to activation. ## Security Considerations No security considerations. ## Open Questions No open questions. ## Acknowledgements Thank you [@friskyfoxdk](https://github.com/friskyfoxdk) for filing an [issue](https://github.com/ava-labs/avalanchego/issues/1946) requesting this feature. Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@abi87](https://github.com/abi87) for their feedback on the reference implementation. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-41: Remove Pending Stakers URL: /docs/acps/41-remove-pending-stakers Details for Avalanche Community Proposal 41: Remove Pending Stakers | ACP | 41 | | :------------ | :--------------------------------------------------------- | | **Title** | Remove Pending Stakers | | **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Activated | | **Track** | Standards | ## Abstract Remove user-specified `StartTime` for stakers. Start the staking period for a staker as soon as their staking transaction is accepted. 
This greatly reduces the computational load on the P-chain, increasing the efficiency of all Avalanche Network validators. ## Motivation Stakers currently set a `StartTime` for their staking period. This means that Avalanche Network Clients, like AvalancheGo, need to maintain a pending set of all stakers that have not yet started. This places a nontrivial amount of work on the P-chain: * When a new delegator transaction is verified, the pending set needs to be checked to ensure that the validator they are delegating to will not exceed `MaxValidatorStake` while they are active * When a new staker transaction is accepted, it gets added to the pending set * When time is advanced on the P-chain, any stakers in the pending set whose `StartTime <= CurrentTime` need to be moved to the current set By immediately starting every staker on acceptance, validators do not have to do the above work when validating the P-chain. The `MaxValidatorStake` check will become an `O(1)` operation, as only the current stake of the validator needs to be checked. The pending set can be fully removed. ## Specification 1. When adding a new staker, the current on-chain time should be used for the staker's start time. 2. When determining when to remove the staker from the staker set, the `EndTime` specified in the transaction should continue to be used. Staking transactions should now be rejected if they do not satisfy `MinStakeDuration <= EndTime - CurrentTime <= MaxStakeDuration`. `StartTime` will no longer be validated. ## Backwards Compatibility Modifying the state transition of a transaction type is an execution change and requires a mandatory upgrade for activation. Implementors must take care to not alter the execution behavior prior to activation. This ACP only details the new state transition. Current wallet implementations will continue to work as-is post-activation of this ACP since no transaction formats are modified or added.
Wallet implementations may run into issues with their txs being rejected as a result of this ACP if `EndTime >= CurrentChainTime + MaxStakeDuration`. `CurrentChainTime` is guaranteed to be >= the latest block timestamp on the P-chain. ## Reference Implementation A reference implementation has not been created for this ACP since it deals with state management. Each ANC will need to adjust their execution step to follow the Specification detailed above. For AvalancheGo, this work is tracked in this PR: [https://github.com/ava-labs/avalanchego/pull/2175](https://github.com/ava-labs/avalanchego/pull/2175) If modifications are made to the specification of the new execution behavior as part of the ACP process, the code must be updated prior to activation. ## Security Considerations No security considerations. ## Open Questions *How will stakers stake for `MaxStakeDuration` if they cannot determine their `StartTime`?* As mentioned above, the beginning of your staking period is the block acceptance timestamp. Unless you can accurately predict the block timestamp, you will *not* be able to fully stake for `MaxStakeDuration`. This is an explicit trade-off to guarantee that stakers will receive their original stake + any staking rewards at `EndTime`. Delegators can maximize their staking period by setting the same `EndTime` as the Validator they are delegating to. ## Acknowledgements Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@abi87](https://github.com/abi87) for their feedback on these ideas. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). 
# ACP-62: Disable Addvalidatortx And Adddelegatortx URL: /docs/acps/62-disable-addvalidatortx-and-adddelegatortx Details for Avalanche Community Proposal 62: Disable Addvalidatortx And Adddelegatortx | ACP | 62 | | :------------ | :------------------------------------------------------------------------------------------------------------------------- | | **Title** | Disable `AddValidatorTx` and `AddDelegatorTx` | | **Author(s)** | Jacob Everly ([@JacobEv3rly](https://twitter.com/JacobEv3rly)), Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Activated | | **Track** | Standards | ## Abstract Disable `AddValidatorTx` and `AddDelegatorTx` to push all new stakers to use `AddPermissionlessValidatorTx` and `AddPermissionlessDelegatorTx`. `AddPermissionlessValidatorTx` requires validators to register a BLS key. Wide adoption of registered BLS keys accelerates the timeline for future P-Chain upgrades. Additionally, this reduces the number of ways to participate in Primary Network validation from two to one. ## Motivation `AddPermissionlessValidatorTx` and `AddPermissionlessDelegatorTx` were activated on the Avalanche Network in October 2022 with Banff (v1.9.0). This unlocked the ability for Subnet creators to activate Proof-of-Stake validation using their own token on their own Subnet. See more details about Banff [here](https://medium.com/avalancheavax/banff-elastic-subnets-44042f41e34c). These new transaction types can also be used to register a Primary Network validator, leaving two redundant transactions: `AddValidatorTx` and `AddDelegatorTx`. [`AddPermissionlessDelegatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_permissionless_delegator_tx.go#L25-L37) contains the same fields as [`AddDelegatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_delegator_tx.go#L29-L39) with an additional `Subnet` field. 
[`AddPermissionlessValidatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_permissionless_validator_tx.go#L35-L59) contains the same fields as [`AddValidatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_validator_tx.go#L29-L42) with additional `Subnet` and `Signer` fields. `RewardsOwner` was also split into `ValidationRewardsOwner` and `DelegationRewardsOwner`, letting validators divert rewards they receive from delegators into a separate rewards owner. By disabling support of `AddValidatorTx`, all new validators on the Primary Network must use `AddPermissionlessValidatorTx` and register a BLS key with their NodeID. As more validators attach BLS keys to their nodes, future upgrades using these BLS keys can be activated through the ACP process. BLS keys can be used to efficiently sign a common message via [Public Key Aggregation](https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html). Applications of this include, but are not limited to: * **Arbitrary Subnet Rewards**: The P-Chain currently restricts Elastic Subnets to follow the reward curve defined in a `TransformSubnetTx`. With sufficient BLS key adoption, Elastic Subnets can define their own reward curve and reward conditions. The P-Chain can be modified to take in a message, signed with a BLS Multi-Signature, indicating whether a Subnet validator should be rewarded and with how many tokens. * **Subnet Attestations**: Elastic Subnets can attest to the state of their Subnet with a BLS Multi-Signature. This can enable clients to fetch the current state of the Subnet without syncing the entire Subnet. `StateSync` enables clients to download chain state from peers up to a recent block near tip. However, it is up to the client to query these peers and resolve any potential conflicts in the responses. With Subnet Attestations, clients can query an API node to prove information about a Subnet without querying the Subnet's validators.
This can especially be useful for [Subnet-Only Validators](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/62-disable-addvalidatortx-and-adddelegatortx/13-subnet-only-validators.md) to prove information about the C-Chain. To accelerate future BLS-powered advancements in the Avalanche Network, this ACP aims to disable `AddValidatorTx` and `AddDelegatorTx` in Durango. ## Specification `AddValidatorTx` and `AddDelegatorTx` should be marked as dropped when added to the mempool after activation. Any blocks including these transactions should be considered invalid. ## Backwards Compatibility Disabling a transaction type is an execution change and requires a mandatory upgrade for activation. Implementers must take care to not alter the execution behavior prior to activation. After this ACP is activated, any new issuance of `AddValidatorTx` or `AddDelegatorTx` will be considered invalid and dropped by the network. Any consumers of these transactions must transition to using `AddPermissionlessValidatorTx` and `AddPermissionlessDelegatorTx` to participate in Primary Network validation. The [Avalanche Ledger App](https://github.com/LedgerHQ/app-avalanche) supports both of these transaction types. Note that `AddSubnetValidatorTx` and `RemoveSubnetValidatorTx` are unchanged by this ACP. ## Reference Implementation An implementation disabling `AddValidatorTx` and `AddDelegatorTx` was created [here](https://github.com/ava-labs/avalanchego/pull/2662). Until activation, these transactions will continue to be accepted by AvalancheGo. If modifications are made to the specification as part of the ACP process, the code must be updated prior to activation. ## Security Considerations No security considerations. ## Open Questions ## Acknowledgements Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@patrick-ogrady](https://github.com/patrick-ogrady) for their feedback on these ideas. 
## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-75: Acceptance Proofs URL: /docs/acps/75-acceptance-proofs Details for Avalanche Community Proposal 75: Acceptance Proofs | ACP | 75 | | :------------ | :----------------------------------------------------------------------------------- | | **Title** | Acceptance Proofs | | **Author(s)** | Joshua Kim | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/82)) | | **Track** | Standards | ## Abstract Introduces support for a proof of a block’s acceptance in consensus. ## Motivation Subnets are able to prove arbitrary events using warp messaging, but native support for proving block acceptance at the protocol layer enables more utility. Acceptance proofs are introduced to prove that a block has been accepted by a subnet. One example use case for acceptance proofs is to provide stronger fault isolation guarantees from the primary network to subnets. Subnets use the [ProposerVM](https://github.com/ava-labs/avalanchego/blob/416fbdf1f783c40f21e7009a9f06d192e69ba9b5/vms/proposervm/README.md) to implement soft leader election for block proposal. The ProposerVM determines the block producer schedule from a randomly shuffled validator set at a specified P-Chain block height. Validators are therefore required to have the P-Chain block referenced in a block's header to verify the block producer against the expected block producer schedule. If a block's header specifies a P-Chain height that has not been accepted yet, the block is treated as invalid. If a block referencing an unknown P-Chain height was produced virtuously, it is expected that the validator will eventually discover the block as its P-Chain height advances and accept the block. If many validators disagree about the current tip of the P-Chain, it can lead to a liveness concern on the subnet where block production entirely stalls. 
In practice, this almost never occurs because nodes produce blocks with a lagging P-Chain height, on the expectation that most nodes will have accepted a sufficiently stale block. This, however, relies on an assumption that validators are constantly making progress in consensus on the P-Chain to prevent the subnet from stalling. This leaves an open concern where the P-Chain stalling on a node would prevent it from verifying any blocks, leading to a subnet unable to produce blocks if many validators stalled at different P-Chain heights. *** Figure 1: A Validator that has synced P-Chain blocks `A` and `B` fails verification of a block proposed at block `C`. *** We introduce "acceptance proofs", so that a peer can verify any block accepted by consensus. In the aforementioned use case, if a P-Chain block is unknown by a peer, it can request the block and proof at the provided height from a peer. If a block's proof is valid, the block can be executed to advance the local P-Chain and verify the proposed subnet block. Peers can request blocks from any peer without requiring consensus locally or communication with a validator. This has the added benefit of reducing the number of required connections and p2p message load served by P-Chain validators. *** Figure 2: A Validator is verifying a subnet’s block `Z` which references an unknown P-Chain block `C` in its block header Figure 3: A Validator requests the blocks and proofs for `B` and `C` from a peer Figure 4: The Validator accepts the P-Chain blocks and is now able to verify `Z` *** ## Specification Note: The following is pseudocode. ### P2P #### Aggregation ```diff + message GetAcceptanceSignatureRequest { + bytes chain_id = 1; + uint32 request_id = 2; + bytes block_id = 3; + } ``` The `GetAcceptanceSignatureRequest` message is sent to a peer to request their signature for a given block id.
```diff
+ message GetAcceptanceSignatureResponse {
+     bytes chain_id = 1;
+     uint32 request_id = 2;
+     bytes bls_signature = 3;
+ }
```

`GetAcceptanceSignatureResponse` is sent to a peer as a response for `GetAcceptanceSignatureRequest`. `bls_signature` is the peer’s signature over the requested `block_id`, using their registered primary network BLS staking key. An empty `bls_signature` field indicates that the block has not yet been accepted.

## Security Considerations

Nodes that bootstrap using state sync may not have the entire history of the P-Chain and therefore will not be able to provide the entire history for a block that is referenced in a block that they propose. This history would be needed to unblock a node that is attempting to fast-forward its P-Chain, as it requires the entire ancestry between its current accepted tip and the block it is attempting to forward to. It is assumed that nodes will have some minimum amount of recent state, so that the requester can eventually be unblocked by retrying; only one node with the requested ancestry is required to unblock the requester. An alternative is to make a churn assumption and validate the proposed block's proof with a stale validator set to avoid this complexity, but that introduces more security concerns.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-77: Reinventing Subnets URL: /docs/acps/77-reinventing-subnets Details for Avalanche Community Proposal 77: Reinventing Subnets | ACP | 77 | | :------------ | :-------------------------------------------------------------------------------------------------------- | | **Title** | Reinventing Subnets | | **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/78)) | | **Track** | Standards | | **Replaces** | [ACP-13](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/13-subnet-only-validators/README.md) | ## Abstract Overhaul Subnet creation and management to unlock increased flexibility for Subnet creators by: * Separating Subnet validators from Primary Network validators (Primary Network Partial Sync, Removal of 2000 \$AVAX requirement) * Moving ownership of Subnet validator set management from P-Chain to Subnets (ERC-20/ERC-721/Arbitrary Staking, Staking Reward Management) * Introducing a continuous P-Chain fee mechanism for Subnet validators (Continuous Subnet Staking) This ACP supersedes [ACP-13](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/13-subnet-only-validators/README.md) and borrows some of its language. ## Motivation Each node operator must stake at least 2000 $AVAX ($70k at time of writing) to first become a Primary Network validator before they qualify to become a Subnet validator. Most Subnets aim to launch with at least 8 Subnet validators, which requires staking 16000 $AVAX ($560k at time of writing). All Subnet validators, to satisfy their role as Primary Network validators, must also [allocate 8 AWS vCPU, 16 GB RAM, and 1 TB storage](https://github.com/ava-labs/avalanchego/blob/master/README.md#installation) to sync the entire Primary Network (X-Chain, P-Chain, and C-Chain) and participate in its consensus, in addition to whatever resources are required for each Subnet they are validating. 
Regulated entities that are prohibited from validating permissionless, smart contract-enabled blockchains (like the C-Chain) cannot launch a Subnet because they cannot opt out of Primary Network Validation. This deployment blocker prevents a large cohort of Real World Asset (RWA) issuers from bringing unique, valuable tokens to the Avalanche Ecosystem (that could move between C-Chain \<-> Subnets using Avalanche Warp Messaging/Teleporter).

A widely validated Subnet that is not properly metered could destabilize the Primary Network if usage spikes unexpectedly. Underprovisioned Primary Network validators running such a Subnet may exit with an OOM exception, see degraded disk performance, or find it difficult to allocate CPU time to P/X/C-Chain validation. The inverse also holds: undefined behavior on the Primary Network could bring a Subnet offline.

Although the fee paid to the Primary Network to operate a Subnet does not go up with the amount of activity on the Subnet, the fixed, upfront cost of setting up a Subnet validator on the Primary Network deters new projects that prefer smaller, even variable, costs until demand is observed. *Unlike L2s that pay some increasing fee (usually denominated in units per transaction byte) to an external chain for data availability and security as activity scales, Subnets provide their own security/data availability and the only cost operators must pay from processing more activity is the hardware cost of supporting additional load.*

Elastic Subnets, introduced in [Banff](https://medium.com/avalancheavax/banff-elastic-subnets-44042f41e34c), enabled Subnet creators to activate Proof-of-Stake validation and uptime-based rewards using their own token. However, this token was required to be an ANT (created on the X-Chain) and locked on the P-Chain. All staking rewards were distributed on the P-Chain, with the reward curve defined in the `TransformSubnetTx`; once set, it could not be modified.
With no Elastic Subnets live on Mainnet, it is clear that Permissionless Subnets as they stand today could be more desirable. There are many successful Permissioned Subnets in production but many Subnet creators have raised the above as points of concern. In summary, the Avalanche community could benefit from a more flexible and affordable mechanism to launch Permissionless Subnets. ### A Note on Nomenclature Avalanche Subnets are subnetworks validated by a subset of the Primary Network validator set. The new network creation flow outlined in this ACP does not require any intersection between the new network's validator set and the Primary Network's validator set. Moreover, the new networks have greater functionality and sovereignty than Subnets. To distinguish between these two kinds of networks, the community has been referring to these new networks as *Avalanche Layer 1s*, or L1s for short. All networks created through the old network creation flow will continue to be referred to as Avalanche Subnets. ## Specification At a high-level, L1s can manage their validator sets externally to the P-Chain by setting the blockchain ID and address of their *validator manager*. The P-Chain will consume Warp messages that modify the L1's validator set. To confirm modification of the L1's validator set, the P-Chain will also produce Warp messages. L1 validators are not required to validate the Primary Network, and do not have the same 2000 $AVAX stake requirement that Subnet validators have. To maintain an active L1 validator, a continuous fee denominated in $AVAX is assessed. L1 validators are only required to sync the P-Chain (not X/C-Chain) in order to track validator set changes and support cross-L1 communication. ### P-Chain Warp Message Payloads To enable management of an L1's validator set externally to the P-Chain, Warp message verification will be added to the [`PlatformVM`](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm). 
For a Warp message to be considered valid by the P-Chain, at least 67% of the `sourceChainID`'s weight must have participated in the aggregate BLS signature. This is equivalent to the threshold set for the C-Chain. A future ACP may be proposed to support modification of this threshold on a per-L1 basis. The following Warp message payloads are introduced on the P-Chain: * `SubnetToL1ConversionMessage` * `RegisterL1ValidatorMessage` * `L1ValidatorRegistrationMessage` * `L1ValidatorWeightMessage` The method of requesting signatures for these messages is left unspecified. A viable option for supporting this functionality is laid out in [ACP-118](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/118-warp-signature-request/README.md) with the `SignatureRequest` message. All node IDs contained within the message specifications are represented as variable length arrays such that they can support new node IDs types should the P-Chain add support for them in the future. The serialization of each of these messages is as follows. #### `SubnetToL1ConversionMessage` The P-Chain can produce a `SubnetToL1ConversionMessage` for consumers (i.e. validator managers) to be aware of the initial validator set. 
The following serialization is defined as the `ValidatorData`: | Field | Type | Size | | -------------: | ---------: | -----------------------: | | `nodeID` | `[]byte` | 4 + len(`nodeID`) bytes | | `blsPublicKey` | `[48]byte` | 48 bytes | | `weight` | `uint64` | 8 bytes | | | | 60 + len(`nodeID`) bytes | The following serialization is defined as the `ConversionData`: | Field | Type | Size | | ---------------: | ----------------: | ---------------------------------------------------------: | | `codecID` | `uint16` | 2 bytes | | `subnetID` | `[32]byte` | 32 bytes | | `managerChainID` | `[32]byte` | 32 bytes | | `managerAddress` | `[]byte` | 4 + len(`managerAddress`) bytes | | `validators` | `[]ValidatorData` | 4 + sum(`validatorLengths`) bytes | | | | 74 + len(`managerAddress`) + sum(`validatorLengths`) bytes | * `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000` * `sum(validatorLengths)` is the sum of the lengths of `ValidatorData` serializations included in `validators`. * `subnetID` identifies the Subnet that is being converted to an L1 (described further below). * `managerChainID` and `managerAddress` identify the validator manager for the newly created L1. This is the (blockchain ID, address) tuple allowed to send Warp messages to modify the L1's validator set. * `validators` are the initial continuous-fee-paying validators for the given L1. 
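Given these definitions, the byte layout and the `conversionID` (the SHA256 hash of the serialized `ConversionData`) can be sketched in Python. This is an illustrative sketch only; it assumes big-endian integer packing, which the table sizes imply but which should be checked against the AvalancheGo codec:

```python
import hashlib
import struct


def serialize_validator_data(node_id: bytes, bls_public_key: bytes,
                             weight: int) -> bytes:
    # nodeID is a variable-length array: a 4-byte length prefix plus bytes.
    assert len(bls_public_key) == 48
    return (struct.pack(">I", len(node_id)) + node_id
            + bls_public_key
            + struct.pack(">Q", weight))


def serialize_conversion_data(subnet_id: bytes, manager_chain_id: bytes,
                              manager_address: bytes,
                              validators: list) -> bytes:
    assert len(subnet_id) == 32 and len(manager_chain_id) == 32
    out = struct.pack(">H", 0)  # codecID is hardcoded to 0x0000
    out += subnet_id + manager_chain_id
    out += struct.pack(">I", len(manager_address)) + manager_address
    out += struct.pack(">I", len(validators))  # 4-byte element count
    for node_id, bls_public_key, weight in validators:
        out += serialize_validator_data(node_id, bls_public_key, weight)
    return out


def conversion_id(conversion_data: bytes) -> bytes:
    # conversionID is the SHA256 hash of the serialized ConversionData.
    return hashlib.sha256(conversion_data).digest()
```

With an empty `managerAddress` and no validators, the serialized `ConversionData` is exactly the 74 bytes the table totals to; each validator entry adds 60 bytes plus the length of its `nodeID`.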
The `SubnetToL1ConversionMessage` is specified as an `AddressedCall` with `sourceChainID` set to the P-Chain ID, the `sourceAddress` set to an empty byte array, and a payload of:

| Field | Type | Size |
| -------------: | ---------: | -------: |
| `codecID` | `uint16` | 2 bytes |
| `typeID` | `uint32` | 4 bytes |
| `conversionID` | `[32]byte` | 32 bytes |
| | | 38 bytes |

* `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000`
* `typeID` is the payload type identifier and is `0x00000000` for this message
* `conversionID` is the SHA256 hash of the `ConversionData` from a given `ConvertSubnetToL1Tx`

#### `RegisterL1ValidatorMessage`

The P-Chain can consume a `RegisterL1ValidatorMessage` from validator managers through a `RegisterL1ValidatorTx` to register an addition to the L1's validator set.

The following is the serialization of a `PChainOwner`:

| Field | Type | Size |
| ----------: | -----------: | -------------------------------: |
| `threshold` | `uint32` | 4 bytes |
| `addresses` | `[][20]byte` | 4 + len(`addresses`) \* 20 bytes |
| | | 8 + len(`addresses`) \* 20 bytes |

* `threshold` is the number of `addresses` that must provide a signature for the `PChainOwner` to authorize an action.
* Validation criteria:
  * If `threshold` is `0`, `addresses` must be empty
  * `threshold` \<= len(`addresses`)
  * Entries of `addresses` must be unique and sorted in ascending order

The `RegisterL1ValidatorMessage` is specified as an `AddressedCall` with a payload of:

| Field | Type | Size |
| ----------------------: | ------------: | ------------------------------------------------------------------------: |
| `codecID` | `uint16` | 2 bytes |
| `typeID` | `uint32` | 4 bytes |
| `subnetID` | `[32]byte` | 32 bytes |
| `nodeID` | `[]byte` | 4 + len(`nodeID`) bytes |
| `blsPublicKey` | `[48]byte` | 48 bytes |
| `expiry` | `uint64` | 8 bytes |
| `remainingBalanceOwner` | `PChainOwner` | 8 + len(`addresses`) \* 20 bytes |
| `disableOwner` | `PChainOwner` | 8 + len(`addresses`) \* 20 bytes |
| `weight` | `uint64` | 8 bytes |
| | | 122 + len(`nodeID`) + (len(`addresses1`) + len(`addresses2`)) \* 20 bytes |

* `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000`
* `typeID` is the payload type identifier and is `0x00000001` for this payload
* `subnetID`, `nodeID`, `weight`, and `blsPublicKey` are for the validator being added
* `expiry` is the time at which this message becomes invalid. As of a P-Chain timestamp `>= expiry`, this Avalanche Warp Message can no longer be used to add the `nodeID` to the validator set of `subnetID`
* `remainingBalanceOwner` is the P-Chain owner that leftover \$AVAX from the validator's `Balance` will be issued to when this validator is removed from the validator set.
* `disableOwner` is the only P-Chain owner allowed to disable the validator using `DisableL1ValidatorTx`, specified below.

#### `L1ValidatorRegistrationMessage`

The P-Chain can produce an `L1ValidatorRegistrationMessage` for consumers to verify that a validation period has either begun or has been invalidated.
The `L1ValidatorRegistrationMessage` is specified as an `AddressedCall` with `sourceChainID` set to the P-Chain ID, the `sourceAddress` set to an empty byte array, and a payload of: | Field | Type | Size | | -------------: | ---------: | -------: | | `codecID` | `uint16` | 2 bytes | | `typeID` | `uint32` | 4 bytes | | `validationID` | `[32]byte` | 32 bytes | | `registered` | `bool` | 1 byte | | | | 39 bytes | * `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000` * `typeID` is the payload type identifier and is `0x00000002` for this message * `validationID` identifies the validator for the message * `registered` is a boolean representing the status of the `validationID`. If true, the `validationID` corresponds to a validator in the current validator set. If false, the `validationID` does not correspond to a validator in the current validator set, and never will in the future. #### `L1ValidatorWeightMessage` The P-Chain can consume an `L1ValidatorWeightMessage` through a `SetL1ValidatorWeightTx` to update the weight of an existing validator. The P-Chain can also produce an `L1ValidatorWeightMessage` for consumers to verify that the validator weight update has been effectuated. The `L1ValidatorWeightMessage` is specified as an `AddressedCall` with the following payload. When sent from the P-Chain, the `sourceChainID` is set to the P-Chain ID, and the `sourceAddress` is set to an empty byte array. 
| Field | Type | Size | | -------------: | ---------: | -------: | | `codecID` | `uint16` | 2 bytes | | `typeID` | `uint32` | 4 bytes | | `validationID` | `[32]byte` | 32 bytes | | `nonce` | `uint64` | 8 bytes | | `weight` | `uint64` | 8 bytes | | | | 54 bytes | * `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000` * `typeID` is the payload type identifier and is `0x00000003` for this message * `validationID` identifies the validator for the message * `nonce` is a strictly increasing number that denotes the latest validator weight update and provides replay protection for this transaction * `weight` is the new `weight` of the validator ### New P-Chain Transaction Types Both before and after this ACP, to create a Subnet, a `CreateSubnetTx` must be issued on the P-Chain. This transaction includes an `Owner` field which defines the key that today can be used to authorize any validator set additions (`AddSubnetValidatorTx`) or removals (`RemoveSubnetValidatorTx`). To be considered a permissionless network, or Avalanche Layer 1: * This `Owner` key must no longer have the ability to modify the validator set. * New transaction types must support modification of the validator set via Warp messages. The following new transaction types are introduced on the P-Chain to support this functionality: * `ConvertSubnetToL1Tx` * `RegisterL1ValidatorTx` * `SetL1ValidatorWeightTx` * `DisableL1ValidatorTx` * `IncreaseL1ValidatorBalanceTx` #### `ConvertSubnetToL1Tx` To convert a Subnet into an L1, a `ConvertSubnetToL1Tx` must be issued to set the `(chainID, address)` pair that will manage the L1's validator set. The `Owner` key defined in `CreateSubnetTx` must provide a signature to authorize this conversion. The `ConvertSubnetToL1Tx` specification is: ```go type PChainOwner struct { // The threshold number of `Addresses` that must provide a signature in order for // the `PChainOwner` to be considered valid. 
Threshold uint32 `json:"threshold"` // The 20-byte addresses that are allowed to sign to authenticate a `PChainOwner`. // Note: It is required for: // - len(Addresses) == 0 if `Threshold` is 0. // - len(Addresses) >= `Threshold` // - The values in Addresses to be sorted in ascending order. Addresses []ids.ShortID `json:"addresses"` } type L1Validator struct { // NodeID of this validator NodeID []byte `json:"nodeID"` // Weight of this validator used when sampling Weight uint64 `json:"weight"` // Initial balance for this validator Balance uint64 `json:"balance"` // [Signer] is the BLS public key and proof-of-possession for this validator. // Note: We do not enforce that the BLS key is unique across all validators. // This means that validators can share a key if they so choose. // However, a NodeID + L1 does uniquely map to a BLS key Signer signer.ProofOfPossession `json:"signer"` // Leftover $AVAX from the [Balance] will be issued to this // owner once it is removed from the validator set. RemainingBalanceOwner PChainOwner `json:"remainingBalanceOwner"` // The only owner allowed to disable this validator on the P-Chain. DisableOwner PChainOwner `json:"disableOwner"` } type ConvertSubnetToL1Tx struct { // Metadata, inputs and outputs BaseTx // ID of the Subnet to transform // Restrictions: // - Must not be the Primary Network ID Subnet ids.ID `json:"subnetID"` // BlockchainID where the validator manager lives ChainID ids.ID `json:"chainID"` // Address of the validator manager Address []byte `json:"address"` // Initial continuous-fee-paying validators for the L1 Validators []L1Validator `json:"validators"` // Authorizes this conversion SubnetAuth verify.Verifiable `json:"subnetAuthorization"` } ``` After this transaction is accepted, `CreateChainTx` and `AddSubnetValidatorTx` are disabled on the Subnet. The only action that the `Owner` key is able to take is removing Subnet validators with `RemoveSubnetValidatorTx` that had been added using `AddSubnetValidatorTx`. 
Unless removed by the `Owner` key, any Subnet validators added previously with an `AddSubnetValidatorTx` will continue to validate the Subnet until their [`End`](https://github.com/ava-labs/avalanchego/blob/a1721541754f8ee23502b456af86fea8c766352a/vms/platformvm/txs/validator.go#L27) time is reached. Once all Subnet validators added with `AddSubnetValidatorTx` are no longer in the validator set, the `Owner` key is powerless. `RegisterL1ValidatorTx` and `SetL1ValidatorWeightTx` must be used to manage the L1's validator set.

The `validationID` for validators added through `ConvertSubnetToL1Tx` is defined as the SHA256 hash of the 36 bytes resulting from concatenating the 32 byte `subnetID` with the 4 byte `validatorIndex` (index in the `Validators` array within the transaction).

Once this transaction is accepted, the P-Chain must be willing to sign a `SubnetToL1ConversionMessage` with a `conversionID` corresponding to `ConversionData` populated with the values from this transaction.

#### `RegisterL1ValidatorTx`

After a `ConvertSubnetToL1Tx` has been accepted, new validators can only be added by using a `RegisterL1ValidatorTx`. The specification of this transaction is:

```go
type RegisterL1ValidatorTx struct {
	// Metadata, inputs and outputs
	BaseTx

	// Balance <= sum($AVAX inputs) - sum($AVAX outputs) - TxFee.
	Balance uint64 `json:"balance"`
	// [Signer] is a BLS signature proving ownership of the BLS public key specified
	// below in `Message` for this validator.
	// Note: We do not enforce that the BLS key is unique across all validators.
	// This means that validators can share a key if they so choose.
	// However, a NodeID + L1 does uniquely map to a BLS key
	Signer [96]byte `json:"signer"`
	// A RegisterL1ValidatorMessage payload
	Message warp.Message `json:"message"`
}
```

The `validationID` of validators added via `RegisterL1ValidatorTx` is defined as the SHA256 hash of the `Payload` of the `AddressedCall` in `Message`.
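Both `validationID` derivations can be sketched directly from these definitions (illustrative sketch; function names are assumptions, not AvalancheGo identifiers):

```python
import hashlib


def conversion_validation_id(subnet_id: bytes, validator_index: int) -> bytes:
    # validationID for validators added via ConvertSubnetToL1Tx:
    # SHA256 of the 36 bytes formed by the 32-byte subnetID followed by
    # the 4-byte index of the validator in the transaction's Validators array.
    assert len(subnet_id) == 32
    return hashlib.sha256(subnet_id + validator_index.to_bytes(4, "big")).digest()


def register_validation_id(addressed_call_payload: bytes) -> bytes:
    # validationID for validators added via RegisterL1ValidatorTx:
    # SHA256 of the Payload of the AddressedCall in Message.
    return hashlib.sha256(addressed_call_payload).digest()
```

Because the index is part of the preimage, each entry in a `ConvertSubnetToL1Tx` yields a distinct `validationID` even when two initial validators share a `nodeID`-independent configuration.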
When a `RegisterL1ValidatorTx` is accepted on the P-Chain, the validator is added to the L1's validator set. A `minNonce` field corresponding to the `validationID` will be stored on addition to the validator set (initially set to `0`). This field will be used when validating the `SetL1ValidatorWeightTx` defined below.

This `validationID` will be used for replay protection. Used `validationID`s will be stored on the P-Chain. If a `RegisterL1ValidatorTx`'s `validationID` has already been used, the transaction will be considered invalid. To prevent storing an unbounded number of `validationID`s, the `expiry` of the `RegisterL1ValidatorMessage` is required to be no more than 24 hours after the time at which the transaction is issued on the P-Chain. Any `validationID`s corresponding to an expired timestamp can be flushed from the P-Chain's state.

L1s are responsible for defining the procedure for retrieving the above information from prospective validators. An EVM-compatible L1 may choose to implement this step like so:

* Use the number of tokens the user has staked into a smart contract on the L1 to determine the weight of their validator
* Require the user to submit an on-chain transaction with their validator information
* Generate the Warp message

For a `RegisterL1ValidatorTx` to be valid, `Signer` must be a valid proof-of-possession of the `blsPublicKey` defined in the `RegisterL1ValidatorMessage` contained in the transaction.

After a `RegisterL1ValidatorTx` is accepted, the P-Chain must be willing to sign an `L1ValidatorRegistrationMessage` for the given `validationID` with `registered` set to `true`. This remains the case until the time at which the validator is removed from the validator set using a `SetL1ValidatorWeightTx`, as described below. When it is known that a given `validationID` *is not and never will be* registered, the P-Chain must be willing to sign an `L1ValidatorRegistrationMessage` for the `validationID` with `registered` set to `false`.
This could be the case if the `expiry` time of the message has passed prior to the message being delivered in a `RegisterL1ValidatorTx`, or if the validator was successfully registered and then later removed. This enables the P-Chain to prove to validator managers that a validator has been removed or never added. The P-Chain must refuse to sign any `L1ValidatorRegistrationMessage` where the `validationID` does not correspond to an active validator and the `expiry` is in the future. #### `SetL1ValidatorWeightTx` `SetL1ValidatorWeightTx` is used to modify the voting weight of a validator. The specification of this transaction is: ```go type SetL1ValidatorWeightTx struct { // Metadata, inputs and outputs BaseTx // An L1ValidatorWeightMessage payload Message warp.Message `json:"message"` } ``` Applications of this transaction could include: * Increase the voting weight of a validator if a delegation is made on the L1 * Increase the voting weight of a validator if the stake amount is increased (by staking rewards for example) * Decrease the voting weight of a misbehaving validator * Remove an inactive validator The validation criteria for `L1ValidatorWeightMessage` is: * `nonce >= minNonce`. Note that `nonce` is not required to be incremented by `1` with each successive validator weight update. * When `minNonce == MaxUint64`, `nonce` must be `MaxUint64` and `weight` must be `0`. This prevents L1s from being unable to remove `nodeID` in a subsequent transaction. * If `weight == 0`, the validator being removed must not be the last one in the set. If all validators are removed, there are no valid Warp messages that can be produced to register new validators through `RegisterL1ValidatorMessage`. With no validators, block production will halt and the L1 is unrecoverable. This validation criteria serves as a guardrail against this situation. A future ACP can remove this guardrail as users get more familiar with the new L1 mechanics and tooling matures to fork an L1. 
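The validation criteria above can be collapsed into a single check. The following is an illustrative sketch (function and parameter names are assumptions, not AvalancheGo identifiers):

```python
MAX_UINT64 = 2**64 - 1


def validate_weight_update(min_nonce: int, nonce: int, weight: int,
                           num_validators: int) -> bool:
    # nonce must be >= the stored minNonce; it need not increment by
    # exactly 1 with each successive weight update.
    if nonce < min_nonce:
        return False
    # When minNonce has reached MaxUint64, only a removal (weight == 0)
    # with nonce == MaxUint64 remains acceptable, so the validator can
    # always still be removed.
    if min_nonce == MAX_UINT64 and (nonce != MAX_UINT64 or weight != 0):
        return False
    # Guardrail: the last validator in the set cannot be removed, since
    # an empty set could never sign a message to register new validators.
    if weight == 0 and num_validators <= 1:
        return False
    return True
```

Note that the `nonce + 1` update to `minNonce` on a successful non-zero weight change (described below) is what makes each signed weight message usable at most once.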
When `weight != 0`, the weight of the validator is updated to `weight` and `minNonce` is updated to `nonce + 1`.

When `weight == 0`, the validator is removed from the validator set. All state related to the validator, including the `minNonce` and `validationID`, is reaped from the P-Chain state. Tracking these post-removal is not required since `validationID` can never be re-initialized due to the replay protection provided by `expiry` in `RegisterL1ValidatorTx`. Any unspent \$AVAX in the validator's `Balance` will be issued in a single UTXO to the `RemainingBalanceOwner` for this validator. Recall that `RemainingBalanceOwner` is specified when the validator is first added to the L1's validator set (in either `ConvertSubnetToL1Tx` or `RegisterL1ValidatorTx`).

Note: There is no explicit `EndTime` for L1 validators added in a `ConvertSubnetToL1Tx` or `RegisterL1ValidatorTx`. The only time when L1 validators are removed from the L1's validator set is through this transaction when `weight == 0`.

#### `DisableL1ValidatorTx`

L1 validators can use `DisableL1ValidatorTx` to mark their validator as inactive. The specification of this transaction is:

```go
type DisableL1ValidatorTx struct {
	// Metadata, inputs and outputs
	BaseTx

	// ID corresponding to the validator
	ValidationID ids.ID `json:"validationID"`
	// Authorizes this validator to be disabled
	DisableAuth verify.Verifiable `json:"disableAuthorization"`
}
```

The `DisableOwner` specified for this validator must sign the transaction. Any unspent \$AVAX in the validator's `Balance` will be issued in a single UTXO to the `RemainingBalanceOwner` for this validator. Recall that both `DisableOwner` and `RemainingBalanceOwner` are specified when the validator is first added to the L1's validator set (in either `ConvertSubnetToL1Tx` or `RegisterL1ValidatorTx`).

For full removal from an L1's validator set, a `SetL1ValidatorWeightTx` must be issued with weight `0`.
To do so, a Warp message is required from the L1's validator manager. However, the ability to claim the unspent `Balance` for a validator without such authorization is critical for failed L1s. Note that this does not modify an L1's total staking weight. This transaction marks the validator as inactive, but does not remove it from the L1's validator set. Inactive validators can re-activate at any time by increasing their balance with an `IncreaseL1ValidatorBalanceTx`.

L1 creators should be aware that there is no notion of `MinStakeDuration` that is enforced by the P-Chain. It is expected that L1s that choose to enforce a `MinStakeDuration` will lock the validator's Stake for the L1's desired `MinStakeDuration`.

#### `IncreaseL1ValidatorBalanceTx`

L1 validators are required to maintain a non-zero balance used to pay the continuous fee on the P-Chain in order to be considered active. The `IncreaseL1ValidatorBalanceTx` can be used by anybody to add additional \$AVAX to the `Balance` of a validator. The specification of this transaction is:

```go
type IncreaseL1ValidatorBalanceTx struct {
	// Metadata, inputs and outputs
	BaseTx

	// ID corresponding to the validator
	ValidationID ids.ID `json:"validationID"`
	// Balance <= sum($AVAX inputs) - sum($AVAX outputs) - TxFee
	Balance uint64 `json:"balance"`
}
```

If the validator corresponding to `ValidationID` is currently inactive (`Balance` was exhausted or `DisableL1ValidatorTx` was issued), this transaction will move it back to the active validator set.

Note: The \$AVAX added to `Balance` can be claimed at any time by the validator using `DisableL1ValidatorTx`.

### Bootstrapping L1 Nodes

Bootstrapping a node/validator is the process of securely recreating the latest state of the blockchain locally. At the end of this process, the local state of a node/validator must be in sync with the local state of other virtuous nodes/validators.
The node/validator can then verify new incoming transactions and reach consensus with other nodes/validators. To bootstrap a node/validator, a few critical questions must be answered: How does one discover peers in the network? How does one determine that a discovered peer is honestly participating in the network? For standalone networks like the Avalanche Primary Network, this is done by connecting to a hardcoded [set](https://github.com/ava-labs/avalanchego/blob/master/genesis/bootstrappers.json) of trusted bootstrappers to then discover new peers. Ethereum calls their set [bootnodes](https://ethereum.org/developers/docs/nodes-and-clients/bootnodes). Since L1 validators are not required to be Primary Network validators, a list of validator IPs to connect to (the functional bootstrappers of the L1) cannot be provided by simply connecting to the Primary Network validators. However, the Primary Network can enable nodes tracking an L1 to seamlessly connect to the validators by tracking and gossiping L1 validator IPs. L1s will not need to operate and maintain a set of bootstrappers and can rely on the Primary Network for peer discovery. ### Sidebar: L1 Sovereignty After this ACP is activated, the P-Chain will no longer support staking of any assets other than $AVAX for the Primary Network. The P-Chain will not support the distribution of staking rewards for L1s. All staking-related operations for L1 validation must be managed by the L1's validator manager. The P-Chain simply requires a continuous fee per validator. If an L1 would like to manage their validator's balances on the P-Chain, it can cover the cost for all L1 validators by posting the $AVAX balance on the P-Chain. L1s can implement any mechanism they want to pay the continuous fee charged by the P-Chain for its participants. The L1 has full ownership over its validator set, not the P-Chain. There are no restrictions on what requirements an L1 can have for validators to join. 
Any stake that is required to join the L1's validator set is not locked on the P-Chain. If a validator is removed from the L1's validator set via a `SetL1ValidatorWeightTx` with weight `0`, the stake will continue to be locked outside of the P-Chain. How each L1 handles stake associated with the validator is entirely left up to the L1 and can be treated independently to what happens on the P-Chain. The relationship between the P-Chain and L1s provides a dynamic where L1s can use the P-Chain as an impartial judge to modify parameters (in addition to its existing role of helping to validate incoming Avalanche Warp Messages). If a validator is misbehaving, the L1 validators can collectively generate a BLS multisig to reduce the voting weight of a misbehaving validator. This operation is fully secured by the Avalanche Primary Network (225M $AVAX or $8.325B at the time of writing). Follow-up ACPs could extend the P-Chain \<-> L1 relationship to include parametrization of the 67% threshold to enable L1s to choose a different threshold based on their security model (e.g. a simple majority of 51%). ### Continuous Fee Mechanism Every additional validator on the P-Chain adds persistent load to the Avalanche Network. When a validator transaction is issued on the P-Chain, it is charged for the computational cost of the transaction itself but is not charged for the cost of an active validator over the time they are validating on the network (which may be indefinitely). This is a common problem in blockchains, spawning many state rent proposals in the broader blockchain space to address it. The following fee mechanism takes advantage of the fact that each L1 validator uses the same amount of computation and charges each L1 validator the dynamic base fee for every discrete unit of time it is active. To charge each L1 validator, the notion of a `Balance` is introduced. 
The `Balance` of a validator will be continuously charged during the time they are active to cover the cost of storing the associated validator properties (BLS key, weight, nonce) in memory and to track IPs (in addition to other services provided by the Primary Network). This `Balance` is initialized with the `RegisterL1ValidatorTx` that added them to the active validator set. `Balance` can be increased at any time using the `IncreaseL1ValidatorBalanceTx`. When this `Balance` reaches `0`, the validator will be considered "inactive" and will no longer participate in validating the L1. Inactive validators can be moved back to the active validator set at any time using the same `IncreaseL1ValidatorBalanceTx`. Once a validator is considered inactive, the P-Chain will remove these properties from memory and only retain them on disk. All messages from that validator will be considered invalid until it is revived using the `IncreaseL1ValidatorBalanceTx`. L1s can reduce the amount of inactive weight by removing inactive validators with the `SetL1ValidatorWeightTx` (`Weight` = 0). Since each L1 validator is charged the same amount at each point in time, tracking the fees for the entire validator set is straightforward. The accumulated dynamic base fee for the entire network is tracked in a single uint. This accumulated value should be equal to the fee charged if a validator was active from the time the accumulator was instantiated. The validator set is maintained in a priority queue. A pseudocode implementation of the continuous fee mechanism is provided below.

```python
# Pseudocode
class ValidatorQueue:
    def __init__(self, fee_getter):
        self.acc = 0
        self.queue = PriorityQueue()
        self.fee_getter = fee_getter

    # At each time period, increment the accumulator and
    # pop all validators from the top of the queue that
    # ran out of funds.
    # Note: The amount of work done in a single block
    # should be bounded to prevent a large number of
    # validator operations from happening at the same
    # time.
    def time_elapse(self, t):
        self.acc = self.acc + self.fee_getter(t)
        while True:
            vdr = self.queue.peek()
            if vdr.balance < self.acc:
                self.queue.pop()
                continue
            return

    # Validator was added
    def validator_enter(self, vdr):
        vdr.balance = vdr.balance + self.acc
        self.queue.add(vdr)

    # Validator was removed
    def validator_remove(self, vdrNodeID):
        vdr = find_and_remove(self.queue, vdrNodeID)
        vdr.balance = vdr.balance - self.acc
        vdr.refund()  # Refund [vdr.balance] to [RemainingBalanceOwner]
        self.queue.remove()

    # Validator's balance was topped up
    def validator_increase(self, vdrNodeID, balance):
        vdr = find_and_remove(self.queue, vdrNodeID)
        vdr.balance = vdr.balance + balance
        self.queue.add(vdr)
```

#### Fee Algorithm

[ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md) proposes a dynamic fee mechanism for transactions on the P-Chain. This mechanism is repurposed with minor modifications for the active L1 validator continuous fee. At activation, the number of excess active L1 validators $x$ is set to `0`.
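The queue pseudocode above leaves `PriorityQueue` and `find_and_remove` abstract. As an illustration of the accumulator trick it relies on (balances are stored offset by `acc` at entry time, so one accumulator update "charges" every active validator at once), here is a minimal runnable Python sketch using the standard `heapq` module. This is a teaching aid under simplified assumptions, not the AvalancheGo implementation; all names and values are invented.

```python
import heapq

# Runnable sketch of the accumulator trick from the pseudocode above.
# A validator's stored key is (balance + acc at entry time), so a single
# global accumulator bump suffices to charge every active validator.
class ValidatorQueue:
    def __init__(self, fee_getter):
        self.acc = 0        # accumulated fee since instantiation
        self.heap = []      # min-heap of (offset_balance, node_id)
        self.balances = {}  # node_id -> current offset balance
        self.fee_getter = fee_getter

    def time_elapse(self, t):
        # Charge all active validators by bumping the accumulator,
        # then evict validators whose balance ran out.
        self.acc += self.fee_getter(t)
        evicted = []
        while self.heap and self.heap[0][0] < self.acc:
            balance, node_id = heapq.heappop(self.heap)
            if self.balances.get(node_id) == balance:  # skip stale entries
                del self.balances[node_id]
                evicted.append(node_id)
        return evicted

    def validator_enter(self, node_id, balance):
        self.balances[node_id] = balance + self.acc
        heapq.heappush(self.heap, (balance + self.acc, node_id))

    def validator_increase(self, node_id, amount):
        # Top-up: push a fresh entry; the old heap entry becomes stale.
        self.balances[node_id] += amount
        heapq.heappush(self.heap, (self.balances[node_id], node_id))

    def remaining_balance(self, node_id):
        return self.balances[node_id] - self.acc
```

With a flat fee of 1 per tick, a validator entering with balance 2 survives two ticks and is evicted on the third, while the accumulator itself is only updated once per tick regardless of the number of validators.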
The fee rate per second for an active L1 validator is:

$M \cdot \exp\left(\frac{x}{K}\right)$

Where:

* $M$ is the minimum price for an active L1 validator
* $\exp\left(x\right)$ is an approximation of $e^x$ following the EIP-4844 specification

```python
# Approximates factor * e ** (numerator / denominator) using Taylor expansion
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator
```

* $K$ is a constant to control the rate of change for the L1 validator price

After every second, $x$ will be updated:

$x = \max(x + (V - T), 0)$

Where:

* $V$ is the number of active L1 validators
* $T$ is the target number of active L1 validators

Whenever $x$ increases by $K$, the price per active L1 validator increases by a factor of `~2.7`. If the price per active L1 validator gets too expensive, some active L1 validators will exit the active validator set, decreasing $x$ and dropping the price. The price per active L1 validator constantly adjusts to make sure that, on average, the P-Chain has no more than $T$ active L1 validators.

#### Block Processing

Before processing the transactions inside a block, all validators that no longer have a sufficient (non-zero) balance are deactivated. After processing the transactions inside a block, all validators that do not have a sufficient balance for the next second are deactivated.

##### Block Timestamp Validity Change

To ensure that validators are charged accurately, blocks are only considered valid if advancing the chain time would not cause a validator's balance to go negative. This upholds the expectation that the number of L1 validators remains constant between blocks.
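A minimal sketch of this validity rule: chain time may only advance to the last second at which every validator balance stays non-negative. The `fee_at` function, balances, and return convention below are invented for illustration and are not protocol definitions.

```python
# Illustrative check: find the largest Δt (up to max_dt) such that
# advancing the chain time by Δt keeps every validator balance >= 0.
# fee_at(s) returns the fee charged to each active validator at second s;
# it and the balances below are hypothetical example values.
def max_valid_advance(balances: list, fee_at, max_dt: int) -> int:
    charged = 0
    min_balance = min(balances)
    for dt in range(1, max_dt + 1):
        charged += fee_at(dt)
        if charged > min_balance:
            return dt - 1  # advancing dt seconds would overdraw a validator
    return max_dt

# With a flat fee of 2 per second and the poorest validator holding 7,
# the chain time may advance at most 3 seconds (cost 6 <= 7; 4 would cost 8).
dt = max_valid_advance([10, 7, 25], lambda s: 2, max_dt=10)
```

A block proposing a timestamp further in the future than this bound would, under the rule above, be invalid; the builder must instead cap the timestamp at the point where the first validator runs out of funds.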
The block building protocol is modified to account for this change by first checking whether the wall clock time would remove any validator due to a lack of funds. If the wall clock time does not remove any L1 validators, the wall clock time is used to build the block. If it does, the time at which the first validator gets removed is used.

##### Fee Calculation

The total validator fee assessed over $\Delta t$ is:

```python
# Calculate the fee to charge over Δt
def cost_over_time(V: int, T: int, x: int, Δt: int) -> int:
    cost = 0
    for _ in range(Δt):
        x = max(x + V - T, 0)
        cost += fake_exponential(M, x, K)
    return cost
```

#### Parameters

The parameters at activation are:

| Parameter | Definition                                  | Value            |
| --------- | ------------------------------------------- | ---------------- |
| $T$       | target number of validators                 | 10\_000          |
| $C$       | capacity number of validators               | 20\_000          |
| $M$       | minimum fee rate                            | 512 nAVAX/s      |
| $K$       | constant to control the rate of fee changes | 1\_246\_488\_515 |

An $M$ of 512 nAVAX/s equates to \~1.33 AVAX/month to run an L1 validator, so long as the total number of continuous-fee-paying L1 validators stays at or below $T$. $K$ was chosen to set the maximum fee doubling rate to \~24 hours. This is in the extreme case that the network has $C$ validators for prolonged periods of time; if the network has $T$+1 validators, for example, the fee rate would double every \~27 years. A future ACP can adjust the parameters to increase $T$, reduce $M$, and/or modify $K$.

#### User Experience

L1 validators are continuously charged a fee, albeit a small one. This poses a challenge for L1 validators: how do they maintain the balance over time? Node clients should expose an API to track how much balance is remaining in the validator's account. This will provide a way for L1 validators to track how quickly the balance is decreasing and top up when needed. A nice byproduct of the above design is that the balance in the validator's account is claimable.
This means users can top up as much \$AVAX as they want and rest assured knowing they can always retrieve it if there is an excessive amount. The expectation is that most users will not interact with node clients or track when or how much they need to top up their validator account. Wallet providers will abstract away most of this process. For users who desire more convenience, L1-as-a-Service providers will abstract away all of this process.

## Backwards Compatibility

This new design for Subnets proposes a large rework of all L1-related mechanics. Rollout should be done on a going-forward basis to not cause any service disruption for live Subnets. All current Subnet validators will be able to continue validating both the Primary Network and whatever Subnets they are validating. Any state execution changes must be coordinated through a mandatory upgrade. Implementors must take care to continue to verify the existing ruleset until the upgrade is activated. After activation, nodes should verify the new ruleset. Implementors must take care to only verify the presence of 2000 \$AVAX prior to activation.

### Deactivated Transactions

* P-Chain
  * `TransformSubnetTx`

After this ACP is activated, Elastic Subnets will be disabled. `TransformSubnetTx` will not be accepted post-activation. As there are no Mainnet Elastic Subnets, there should be no production impact with this deactivation.

### New Transactions

* P-Chain
  * `ConvertSubnetToL1Tx`
  * `RegisterL1ValidatorTx`
  * `SetL1ValidatorWeightTx`
  * `DisableL1ValidatorTx`
  * `IncreaseL1ValidatorBalanceTx`

## Reference Implementation

ACP-77 was implemented and will be merged into AvalancheGo behind the `Etna` upgrade flag. The full body of work can be found tagged with the `acp77` label [here](https://github.com/ava-labs/avalanchego/issues?q=sort%3Aupdated-desc+label%3Aacp77). Since Etna is not yet activated, all new transactions introduced in ACP-77 will be rejected by AvalancheGo.
If any modifications are made to ACP-77 as part of the ACP process, the implementation must be updated prior to activation.

## Security Considerations

This ACP introduces Avalanche Layer 1s, a new network type that costs significantly less than Avalanche Subnets. This can lead to a large increase in the number of networks and, by extension, the number of validators. Each additional validator adds consistent RAM usage to the P-Chain. However, this should be appropriately metered by the continuous fee mechanism outlined above. With the sovereignty L1s have from the P-Chain, L1 staking tokens are not locked on the P-Chain. This poses a security consideration for L1 validators: malicious chains can choose to remove validators at will and take any funds that the validator has locked on the L1. The P-Chain only provides the guarantee that L1 validators can retrieve the remaining \$AVAX Balance for their validator via a `DisableL1ValidatorTx`. Any assets on the L1 are entirely under the purview of the L1. The onus is on L1 validators to vet the L1's security for any assets transferred onto the L1. With a long window of expiry (24 hours) for the Warp message in `RegisterL1ValidatorTx`, spam of validator registrations could lead to high memory pressure on the P-Chain. A future ACP can reduce the window of expiry if 24 hours proves to be a problem. NodeIDs can be added to an L1's validator set involuntarily. However, it is important to note that any stake/rewards are *not* at risk. A node operator who was added to a validator set involuntarily only needs to generate a new NodeID via key rotation, as there is no lock-up of any stake to create a NodeID. This is an explicit tradeoff for easier on-boarding of NodeIDs. This mirrors the Primary Network validators' guarantee of no stake/rewards at risk. The continuous fee mechanism outlined above does not apply to inactive L1 validators since they are not stored in memory.
However, inactive L1 validators are persisted on disk, which can lead to persistent P-Chain state growth. A future ACP can introduce a mechanism to decrease the rate of P-Chain state growth or provide a state expiry path to reduce the amount of P-Chain state.

## Acknowledgements

Special thanks to [@StephenButtolph](https://github.com/StephenButtolph), [@aaronbuchwald](https://github.com/aaronbuchwald), and [@patrick-ogrady](https://github.com/patrick-ogrady) for their feedback on these ideas. Thank you to the broader Ava Labs Platform Engineering Group for their feedback on this ACP prior to publication.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-83: Dynamic Multidimensional Fees

URL: /docs/acps/83-dynamic-multidimensional-fees

Details for Avalanche Community Proposal 83: Dynamic Multidimensional Fees

| ACP | 83 |
| :---------------- | :------------------------------------------------------------------------------------------------ |
| **Title** | Dynamic multidimensional fees for P-chain and X-chain |
| **Author(s)** | Alberto Benegiamo ([@abi87](https://github.com/abi87)) |
| **Status** | Stale |
| **Track** | Standards |
| **Superseded-By** | [ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md) |

## Abstract

Introduce a dynamic and multidimensional fee scheme for the P-Chain and X-Chain. Dynamic fees help to preserve the stability of the chain, as they provide a feedback mechanism that increases the cost of resources when the network operates above its target utilization. Multidimensional fees ensure that high demand for orthogonal resources does not drive up the price of underutilized resources. For example, networks provide and consume orthogonal resources including, but not limited to, bandwidth, chain state, read/write throughput, and CPU.
By independently metering each resource, each can be granularly priced, keeping the network closer to optimal resource utilization.

## Motivation

The P-Chain and X-Chain currently have fixed fees, and in some cases those fees are fixed at zero. This makes transaction issuance predictable, but does not provide a feedback mechanism to preserve chain stability under high load. In contrast, the C-Chain, which has the highest and most regular load among the chains on the Primary Network, already supports dynamic fees. This ACP proposes to introduce a similar dynamic fee mechanism for the P-Chain and X-Chain to further improve the Primary Network's stability and resilience under load. However, unlike the C-Chain, we propose a multidimensional fee scheme with an exponential update rule for each fee dimension. The [HyperSDK](https://github.com/ava-labs/hypersdk) already utilizes a multidimensional fee scheme with optional priority fees, and its efficiency is backed by [academic research](https://arxiv.org/abs/2208.07919). Finally, we split the fee into two parts: a `base fee` and a `priority fee`. The `base fee` is calculated by the network each block to accurately price each resource at a given point in time. Any amount burnt above the base fee is treated as the `priority fee`, which buys faster transaction inclusion.

## Specification

We introduce the multidimensional scheme first, and then show how to apply the dynamic fee update rule to each fee dimension. Finally, we list the new block verification rules, valid once the new fee scheme activates.

### Multidimensional scheme components

We define four fee dimensions, `Bandwidth`, `Reads`, `Writes`, and `Compute`, to describe transaction complexity. In more detail:

* `Bandwidth` measures the transaction size in bytes, as encoded by the AvalancheGo codec. Byte length is a proxy for the network resources needed to disseminate the transaction.
* `Reads` measures the number of DB reads needed to verify the transaction.
  DB reads include UTXO reads and any other state quantities relevant for the specific transaction.
* `Writes` measures the number of DB writes following transaction verification. DB writes include UTXOs generated as outputs of the transaction and any other state quantities relevant for the specific transaction.
* `Compute` measures the number of signatures to be verified, including UTXO signatures and those related to the authorization of specific operations.

For each fee dimension $i$, we define:

* *fee rate* $r_i$ as the price, denominated in AVAX, to be paid for a transaction with complexity $u_i$ along the fee dimension $i$.
* *base fee* as the minimal fee needed to accept a transaction. The base fee is given by the formula $base \ fee = \sum_{i=0}^3 r_i \times u_i$
* *priority fee* as an optional fee paid on top of the base fee to speed up the transaction's inclusion in a block.

### Dynamic scheme components

Fee rates are updated over time, to allow fees to increase when the network is getting congested. Each new block is a potential source of congestion, as its transactions carry complexity that each validator must process to verify and eventually accept the block. The more complexity a block carries, and the more rapidly blocks are produced, the higher the congestion. We seek a scheme that rapidly increases the fees when block complexity goes above a defined threshold and that equally rapidly decreases the fees once complexity goes down (because blocks carry less/simpler transactions, or because they are produced more slowly). We define the desired threshold as a *target complexity rate* $T$: we would like to process, every second, a block whose complexity is $T$. Any complexity beyond that causes some congestion that we want to penalize via fees. In order to update fee rates we track, for each block and each fee dimension, a parameter called the cumulative excess complexity.
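Before turning to the dynamic update, the multidimensional base fee above can be made concrete with a short sketch. The fee rates and transaction complexities below are invented example values, not the rates this ACP would set:

```python
# Illustrative base-fee computation across the four fee dimensions.
# Fee rates (nAVAX per complexity unit) and transaction complexities
# are hypothetical example values, not protocol parameters.
FEE_RATES = {"bandwidth": 10, "reads": 100, "writes": 200, "compute": 50}

def base_fee(complexity: dict) -> int:
    """base fee = sum over dimensions of rate_i * complexity_i"""
    return sum(FEE_RATES[dim] * units for dim, units in complexity.items())

# A transaction that is 300 bytes, does 4 reads, 2 writes, and
# verifies 3 signatures:
tx_complexity = {"bandwidth": 300, "reads": 4, "writes": 2, "compute": 3}
fee = base_fee(tx_complexity)  # 10*300 + 100*4 + 200*2 + 50*3 = 3950 nAVAX
```

Note how a spike in demand for one dimension (say, `reads`) raises only that dimension's rate $r_i$ under the dynamic scheme, leaving transactions that are light on reads largely unaffected.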
Fee rates applied to a block will be defined in terms of the cumulative excess complexity, as we show in the following. Suppose that a block $B_t$ is the current chain tip. $B_t$ has the following features:

* $t$ is its timestamp.
* $\Delta C_t$ is its cumulative excess complexity along fee dimension $i$.

Say a new block $B_{t + \Delta T}$ is built on top of $B_t$, with the following features:

* $t + \Delta T$ is its timestamp
* $C_{t + \Delta T}$ is its complexity along fee dimension $i$.

Then the fee rate $r_{t + \Delta T}$ applied for the block $B_{t + \Delta T}$ along dimension $i$ will be:

$r_{t + \Delta T} = r^{min} \times e^{\frac{\max(0, \Delta C_t - T \times \Delta T)}{Denom}}$

where

* $r^{min}$ is the minimal fee rate along fee dimension $i$
* $T$ is the target complexity rate along fee dimension $i$
* $Denom$ is a normalization constant for the fee dimension $i$

Moreover, once the block $B_{t + \Delta T}$ is accepted, the cumulative excess complexity is updated as follows:

$\Delta C_{t + \Delta T} = \max\left(0, \Delta C_{t} - T \times \Delta T\right) + C_{t + \Delta T}$

The fee rate update formula guarantees that fee rates increase if incoming blocks are complex (large $C_{t + \Delta T}$) and if blocks are emitted rapidly (small $\Delta T$). Symmetrically, fee rates decrease toward the minimum if incoming blocks are less complex and if blocks are produced less frequently.\
The update formula has a few parameters to be tuned, independently, for each fee dimension. We defer the discussion of tuning to the [implementation section](#tuning-the-update-formula).

## Block verification rules

Upon activation of the dynamic multidimensional fee scheme we modify block processing as follows:

* **Bound block complexity**. For each fee dimension $i$, we define a *maximal block complexity* $Max$. A block is only valid if its complexity $C$ is less than the maximum block complexity: $C \leq Max$.
* **Verify transaction fee**.
  When verifying each transaction in a block, we confirm that it can cover its own base fee. Note that both the base fee and optional priority fees are burned.

## User Experience

### How will the wallets estimate the fees?

AvalancheGo nodes will provide new APIs exposing the current and expected fee rates, as they are likely to change block by block. Wallets can then use the fee rates to select UTXOs to pay the transaction fees. Moreover, the AvalancheGo implementation proposed above offers a `fees.Calculator` struct that can be reused by wallets and downstream projects to calculate fees.

### How will wallets be able to re-issue Txs at a higher fee?

Wallets should be able to simply re-issue the transaction, since the current AvalancheGo implementation drops mempool transactions whose fee rate is lower than the current one. More specifically, a transaction may be valid the moment it enters the mempool, and it won't be re-verified as long as it stays there. However, as soon as the transaction is selected to be included in the next block, it is re-verified against the latest preferred tip. If fees are not sufficient by this time, the transaction is dropped and the wallet can simply re-issue it at a higher fee, or wait for the fee rate to go down. Note that priority fees offer some buffer against an increase in the fee rate. A transaction paying just the base fee will be evicted from the mempool in the face of a fee rate increase, while a transaction paying some extra priority fee may have enough buffer room to stay valid after some amount of fee increase.

### How do priority fees guarantee faster block inclusion?

The AvalancheGo mempool will be restructured to order transactions by priority fees. Transactions paying priority fees will be selected for block inclusion first, without violating any spend dependency.

## Backwards Compatibility

Modifying the fee scheme for the P-Chain and X-Chain requires a mandatory upgrade for activation.
Moreover, wallets must be modified to properly handle the new fee scheme once activated.

## Reference Implementation

The implementation is split across multiple PRs:

* P-Chain work is tracked in this issue: [https://github.com/ava-labs/avalanchego/issues/2707](https://github.com/ava-labs/avalanchego/issues/2707)
* X-Chain work is tracked in this issue: [https://github.com/ava-labs/avalanchego/issues/2708](https://github.com/ava-labs/avalanchego/issues/2708)

A very important implementation step is tuning the update formula parameters for each chain and each fee dimension. We show here the principles we followed for tuning, along with a simulation based on historical data.

### Tuning the update formula

The basic idea is to measure the complexity of blocks already accepted and derive the parameters from it. You can find the historical data in [this repo](https://github.com/abi87/complexities).\
To simplify the exposition I am purposefully ignoring chain specifics (like P-chain proposal blocks). We can account for chain specifics while processing the historical data. Here are the principles:

* **Target block complexity rate $T$**: calculate the distribution of block complexity and pick a high enough quantile.
* **Max block complexity $Max$**: this is probably the trickiest parameter to set. Historically we had [pretty big transactions](https://subnets.avax.network/p-chain/tx/27pjHPRCvd3zaoQUYMesqtkVfZ188uP93zetNSqk3kSH1WjED1) (more than 1,000 referenced UTXOs). Setting a max block complexity so high that these big transactions are allowed is akin to setting no complexity cap. On the other hand, we still want to allow, even encourage, UTXO consolidation, so we may want to allow transactions [like this](https://subnets.avax.network/p-chain/tx/2LxyHzbi2AGJ4GAcHXth6pj5DwVLWeVmog2SAfh4WrqSBdENhV).
  A principled way to set the max block complexity may be the following:
  * calculate the target block complexity rate (see previous point)
  * calculate the median time elapsed between consecutive blocks
  * the product of these two quantities should give us something like a target block complexity
  * set the max block complexity to, say, $\times 50$ the target value.
* **Normalization coefficient $Denom$**: I suggest we size it as follows:
  * Find the largest historical peak, i.e. the sequence of consecutive blocks which contained the most complexity in the shortest period of time
  * Tune $Denom$ so that it would cause a $\times 10000$ increase in the fee rate for such a peak. This increase would push fees from the milliAVAX we normally pay under stable network conditions up to tens of AVAX.
* **Minimal fee rates $r^{min}$**: we could size them so that transaction fees do not change very much with respect to the currently fixed values.

We simulate below how the update formula would behave on a peak period from Avalanche mainnet.


Figure 1 shows a peak period, starting with block [wqKJcvEv86TBpmJY2pAY7X65hzqJr3VnHriGh4oiAktWx5qT1](https://subnets.avax.network/p-chain/block/wqKJcvEv86TBpmJY2pAY7X65hzqJr3VnHriGh4oiAktWx5qT1) and going on for roughly 30 blocks. We only show `Bandwidth` for clarity, but the other fee dimensions have similar behaviour. The network load is much larger than target and sustained.\
Figure 2 shows the fee dynamics in response to the peak: fees scale up from a few milliAVAX to around 25 AVAX. Moreover, as soon as the peak is over and complexity goes back to the target value, fees are reduced very rapidly.

## Security Considerations

The new fee scheme is expected to help network stability, as it offers economic incentives for users to hold transaction issuance in times of high load. While fees are expected to remain generally low when the system is not loaded, a sudden load increase, with fuller blocks, would push the dynamic fee algorithm to increase fee rates. The increase is expected to continue until the load is reduced. Load reduction happens both by dropping unconfirmed transactions whose fee rate is no longer sufficient and by pushing users who optimize their transaction costs to delay transaction issuance until the fee rate goes down to an acceptable level.\
Note finally that the exponential fee update mechanism detailed above is [proven](https://ethresear.ch/t/multidimensional-eip-1559/11651) to be robust against the strategic behavior of users delaying transaction issuance and then suddenly pushing a bulk of transactions once the fee rate is low enough.

## Acknowledgements

Thanks to @StephenButtolph, @patrick-ogrady, and @dhrubabasu for their feedback on these ideas.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-84: Table Preamble

URL: /docs/acps/84-table-preamble

Details for Avalanche Community Proposal 84: Table Preamble

| ACP | 84 |
| :------------ | :------------------------------------------------------------ |
| **Title** | Table Preamble for ACPs |
| **Author(s)** | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)) |
| **Status** | Activated |
| **Track** | Meta |

## Abstract

The current ACP template features a plain-text code block containing "RFC 822 style headers" as its `Preamble` (see [What belongs in a successful ACP?](https://github.com/avalanche-foundation/ACPs?tab=readme-ov-file#what-belongs-in-a-successful-acp)). This header includes multiple links to discussions, authors, and other ACPs. This ACP proposes to replace the `Preamble` code block with a Markdown table format (similar to what is used in [Ethereum EIPs](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1.md)).

## Motivation

The current ACP `Preamble` is (i) not very readable and (ii) not user-friendly, as links are not clickable. The proposed table format aims to fix these issues.
## Specification

The following Markdown table format is proposed:

| ACP | PR Number |
| :--- | :--- |
| **Title** | ACP title |
| **Author(s)** | A list of the author's name(s) and optionally contact info: FirstName LastName ([@GitHubUsername](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md) or [email@address.com](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md)) |
| **Status** | Proposed, Implementable, Activated, Stale ([Discussion](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md)) |
| **Track** | Standards, Best Practices, Meta, Subnet |
| **Replaces** *(optional)* | [ACP-XX](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md) |
| **Superseded-By** *(optional)* | [ACP-XX](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md) |

It features all the existing fields of the current ACP template, and would replace the current `Preamble` code block in [ACPs/TEMPLATE.md](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/TEMPLATE.md).

## Backwards Compatibility

Existing ACPs could be updated to use the new table format, but it is not mandatory.
## Reference Implementation

For this ACP, the table would look like this:

| ACP | 84 |
| :--- | :--- |
| **Title** | Table Preamble for ACPs |
| **Author(s)** | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/86)) |
| **Track** | Meta |

## Security Considerations

NA

## Open Questions

NA

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-99: Validatorsetmanager Contract

URL: /docs/acps/99-validatorsetmanager-contract

Details for Avalanche Community Proposal 99: Validatorsetmanager Contract

| ACP | 99 |
| :--- | :--- |
| Title | Validator Manager Solidity Standard |
| Author(s) | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)), Cam Schultz ([@cam-schultz](https://github.com/cam-schultz)) |
| Status | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/165)) |
| Track | Best Practices |
| Dependencies | [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md) |

## Abstract

Define a standard Validator Manager Solidity smart contract to be deployed on any Avalanche EVM chain. This ACP relies on concepts introduced in [ACP-77 (Reinventing Subnets)](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets) and depends on ACP-77 being marked as `Implementable`.

## Motivation

[ACP-77 (Reinventing Subnets)](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets) opens the door to managing an L1 validator set (stored on the P-Chain) from any chain on the Avalanche Network.
The P-Chain allows a Subnet to specify a "validator manager" when it is converted to an L1 using `ConvertSubnetToL1Tx`. This `(blockchainID, address)` pair is responsible for sending the ICM messages contained within `RegisterL1ValidatorTx` and `SetL1ValidatorWeightTx` on the P-Chain. This enables an on-chain program to add, modify the weight of, and remove validators. On each validator set change, the P-Chain is willing to sign an `AddressedCall` to notify any on-chain program tracking the validator set. On-chain programs must be able to interpret this message, so they can trigger the appropriate action. The two kinds of `AddressedCall`s [defined in ACP-77](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#p-chain-warp-message-payloads) are `L1ValidatorRegistrationMessage` and `L1ValidatorWeightMessage`. Given these assumptions and the fact that most of the active blockchains on Avalanche Mainnet are EVM-based, we propose `ACP99Manager` as a standard Solidity contract specification that can:

1. Hold relevant information about the current L1 validator set
2. Send validator set updates to the P-Chain by generating the `AddressedCall`s defined in ACP-77
3. Correctly update the validator set by interpreting notification messages received from the P-Chain
4. Be easily integrated into validator manager implementations that utilize various security models (e.g. Proof-of-Stake)

Having an audited and open-source reference implementation freely available will contribute to lowering the cost of launching L1s on Avalanche. Once deployed, the `ACP99Manager` implementation contract can be used as the `Address` in the [`ConvertSubnetToL1Tx`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#convertsubnettol1tx).
## Specification > **Note:**: The naming convention followed for the interfaces and contracts are inspired from the way [OpenZeppelin Contracts](https://docs.openzeppelin.com/contracts/5.x/) are named after ERC standards, using `ACP` instead of `ERC`. ### Type Definitions The following type definitions are used in the function signatures described in [Contract Specification](#contract-specification) ```solidity /** * @notice Description of the conversion data used to convert * a subnet to an L1 on the P-Chain. * This data is the pre-image of a hash that is authenticated by the P-Chain * and verified by the Validator Manager. */ struct ConversionData { bytes32 subnetID; bytes32 validatorManagerBlockchainID; address validatorManagerAddress; InitialValidator[] initialValidators; } /// @notice Specifies an initial validator, used in the conversion data. struct InitialValidator { bytes nodeID; bytes blsPublicKey; uint64 weight; } /// @notice L1 validator status. enum ValidatorStatus { Unknown, PendingAdded, Active, PendingRemoved, Completed, Invalidated } /** * @notice Specifies the owner of a validator's remaining balance or disable owner on the P-Chain. * P-Chain addresses are also 20-bytes, so we use the address type to represent them. */ struct PChainOwner { uint32 threshold; address[] addresses; } /** * @notice Contains the active state of a Validator. * @param status The validator status. * @param nodeID The NodeID of the validator. * @param startingWeight The weight of the validator at the time of registration. * @param sentNonce The current weight update nonce sent by the manager. * @param receivedNonce The highest nonce received from the P-Chain. * @param weight The current weight of the validator. * @param startTime The start time of the validator. * @param endTime The end time of the validator. 
 */
struct Validator {
    ValidatorStatus status;
    bytes nodeID;
    uint64 startingWeight;
    uint64 sentNonce;
    uint64 receivedNonce;
    uint64 weight;
    uint64 startTime;
    uint64 endTime;
}
```

#### About `Validator`s

A `Validator` represents the continuous time frame during which a node is part of the validator set. Each `Validator` is identified by its `validationID`. If a validator was added as part of the initial set of continuous, dynamic-fee-paying validators, its `validationID` is the SHA256 hash of the 36 bytes resulting from concatenating the 32-byte `ConvertSubnetToL1Tx` transaction ID and the 4-byte index of the initial validator within the transaction. If a validator was added to the L1's validator set post-conversion, its `validationID` is the SHA256 of the payload of the `AddressedCall` in the `RegisterL1ValidatorTx` used to add it, as defined in ACP-77.

### Contract Specification

The standard `ACP99Manager` functionality is defined by a set of events, public methods, and private methods that must be included by a compliant implementation. For a full implementation, please see the [Reference Implementation](#reference-implementation).

#### Events

```solidity
/**
 * @notice Emitted when an initial validator is registered.
 * @notice The field index is the index of the initial validator in the conversion data.
 * This is used along with the subnetID as the ACP-118 justification in
 * signature requests to P-Chain validators over an L1ValidatorRegistrationMessage
 * when removing the validator.
 */
event RegisteredInitialValidator(
    bytes32 indexed validationID,
    bytes20 indexed nodeID,
    bytes32 indexed subnetID,
    uint64 weight,
    uint32 index
);

/// @notice Emitted when a validator registration to the L1 is initiated.
event InitiatedValidatorRegistration(
    bytes32 indexed validationID,
    bytes20 indexed nodeID,
    bytes32 registrationMessageID,
    uint64 registrationExpiry,
    uint64 weight
);

/// @notice Emitted when a validator registration to the L1 is completed.
event CompletedValidatorRegistration(bytes32 indexed validationID, uint64 weight);

/// @notice Emitted when removal of an L1 validator is initiated.
event InitiatedValidatorRemoval(
    bytes32 indexed validationID,
    bytes32 validatorWeightMessageID,
    uint64 weight,
    uint64 endTime
);

/// @notice Emitted when removal of an L1 validator is completed.
event CompletedValidatorRemoval(bytes32 indexed validationID);

/// @notice Emitted when a validator weight update is initiated.
event InitiatedValidatorWeightUpdate(
    bytes32 indexed validationID,
    uint64 nonce,
    bytes32 weightUpdateMessageID,
    uint64 weight
);

/// @notice Emitted when a validator weight update is completed.
event CompletedValidatorWeightUpdate(bytes32 indexed validationID, uint64 nonce, uint64 weight);
```

#### Public Methods

```solidity
/// @notice Returns the SubnetID of the L1 tied to this manager.
function subnetID() public view returns (bytes32 id);

/// @notice Returns the validator details for a given validation ID.
function getValidator(bytes32 validationID) public view returns (Validator memory validator);

/// @notice Returns the total weight of the current L1 validator set.
function l1TotalWeight() public view returns (uint64 weight);

/**
 * @notice Verifies and sets the initial validator set for the chain by consuming a
 * SubnetToL1ConversionMessage from the P-Chain.
 *
 * Emits a {RegisteredInitialValidator} event for each initial validator in {conversionData}.
 *
 * @param conversionData The Subnet conversion message data used to recompute and verify against the ConversionID.
 * @param messageIndex The index that contains the SubnetToL1ConversionMessage ICM message containing the
 * ConversionID to be verified against the provided {conversionData}.
 */
function initializeValidatorSet(
    ConversionData calldata conversionData,
    uint32 messageIndex
) public;

/**
 * @notice Completes the validator registration process by returning an acknowledgement of the registration of a
 * validationID from the P-Chain.
 * The validator should not be considered active until this method is successfully called.
 *
 * Emits a {CompletedValidatorRegistration} event on success.
 *
 * @param messageIndex The index of the L1ValidatorRegistrationMessage to be received providing the acknowledgement.
 * @return validationID The ID of the registered validator.
 */
function completeValidatorRegistration(uint32 messageIndex) public returns (bytes32 validationID);

/**
 * @notice Completes validator removal by consuming an L1ValidatorRegistrationMessage from the P-Chain acknowledging
 * that the validator has been removed.
 *
 * Emits a {CompletedValidatorRemoval} event on success.
 *
 * @param messageIndex The index of the L1ValidatorRegistrationMessage.
 */
function completeValidatorRemoval(uint32 messageIndex) public returns (bytes32 validationID);

/**
 * @notice Completes the validator weight update process by consuming an L1ValidatorWeightMessage from the P-Chain
 * acknowledging the weight update. The validator weight change should not have any effect until this method is
 * successfully called.
 *
 * Emits a {CompletedValidatorWeightUpdate} event on success.
 *
 * @param messageIndex The index of the L1ValidatorWeightMessage message to be received providing the acknowledgement.
 * @return validationID The ID of the validator, retrieved from the L1ValidatorWeightMessage.
 * @return nonce The nonce of the validator, retrieved from the L1ValidatorWeightMessage.
 */
function completeValidatorWeightUpdate(uint32 messageIndex) public returns (bytes32 validationID, uint64 nonce);
```

> Note: While `getValidator` provides a way to fetch a `Validator` based on its `validationID`, no method that returns all active validators is specified. This is because a `mapping` is a reasonable way to store active validators internally, and Solidity `mapping`s are not iterable. This can be worked around by storing additional indexing metadata in the contract, but not all applications may wish to incur that added complexity.
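For illustration, the `validationID` of an initial validator (described in [About `Validator`s](#about-validators)) can be computed off-chain as follows. This is a Python sketch with a hypothetical helper name; the big-endian encoding of the 4-byte index is an assumption here, matching AvalancheGo's serialization conventions.

```python
import hashlib

def initial_validator_validation_id(convert_tx_id: bytes, index: int) -> bytes:
    """Sketch: validationID of an initial validator, i.e. the SHA256 of the
    36-byte concatenation of the 32-byte ConvertSubnetToL1Tx transaction ID
    and the 4-byte index of the validator within that transaction.

    The big-endian encoding of the index is an assumption of this sketch.
    """
    if len(convert_tx_id) != 32:
        raise ValueError("transaction IDs are 32 bytes")
    preimage = convert_tx_id + index.to_bytes(4, "big")  # 36 bytes total
    return hashlib.sha256(preimage).digest()
```

Validators added post-conversion instead derive their `validationID` from the SHA256 of the `AddressedCall` payload in the `RegisterL1ValidatorTx`, as noted above.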
#### Private Methods

The following methods are specified as `internal` to account for different semantics of initiating validator set changes, such as checking uptime attested to via ICM message, or transferring funds to be locked as stake. Rather than broaden the definitions of these functions to cover all use cases, we leave it to the implementer to define a suitable external interface and call the appropriate `ACP99Manager` function internally.

```solidity
/**
 * @notice Initiates validator registration by issuing a RegisterL1ValidatorMessage. The validator should
 * not be considered active until completeValidatorRegistration is called.
 *
 * Emits an {InitiatedValidatorRegistration} event on success.
 *
 * @param nodeID The ID of the node to add to the L1.
 * @param blsPublicKey The BLS public key of the validator.
 * @param remainingBalanceOwner The remaining balance owner of the validator.
 * @param disableOwner The disable owner of the validator.
 * @param weight The weight of the node on the L1.
 * @return validationID The ID of the registered validator.
 */
function _initiateValidatorRegistration(
    bytes memory nodeID,
    bytes memory blsPublicKey,
    PChainOwner memory remainingBalanceOwner,
    PChainOwner memory disableOwner,
    uint64 weight
) internal returns (bytes32 validationID);

/**
 * @notice Initiates validator removal by issuing an L1ValidatorWeightMessage with the weight set to zero.
 * The validator should be considered inactive as soon as this function is called.
 *
 * Emits an {InitiatedValidatorRemoval} event on success.
 *
 * @param validationID The ID of the validator to remove.
 */
function _initiateValidatorRemoval(bytes32 validationID) internal;

/**
 * @notice Initiates a validator weight update by issuing an L1ValidatorWeightMessage with a nonzero weight.
 * The validator weight change should not have any effect until completeValidatorWeightUpdate is successfully called.
 *
 * Emits an {InitiatedValidatorWeightUpdate} event on success.
 *
 * @param validationID The ID of the validator to modify.
 * @param weight The new weight of the validator.
 * @return nonce The validator nonce associated with the weight change.
 * @return messageID The ID of the L1ValidatorWeightMessage used to update the validator's weight.
 */
function _initiateValidatorWeightUpdate(
    bytes32 validationID,
    uint64 weight
) internal returns (uint64 nonce, bytes32 messageID);
```

##### About `DisableL1ValidatorTx`

In addition to calling `_initiateValidatorRemoval`, a validator may be disabled by issuing a `DisableL1ValidatorTx` on the P-Chain. This transaction allows the `DisableOwner` of a validator to disable it directly from the P-Chain to claim the unspent `Balance` linked to the validator of a failed L1. Therefore, it is not meant to be called from the `Manager` contract.

## Backwards Compatibility

`ACP99Manager` is a reference specification. As such, it doesn't have any impact on the current behavior of the Avalanche protocol.

## Reference Implementation

A reference implementation will be provided in Ava Labs' [ICM Contracts](https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager) repository. This reference implementation will need to be updated to conform to `ACP99Manager` before this ACP may be marked `Implementable`.

### Example Integrations

`ACP99Manager` is designed to be easily incorporated into any architecture. Two example integrations are included in this ACP, each of which uses a different architecture.

#### Multi-contract Design

The multi-contract design consists of a contract that implements `ACP99Manager`, and separate "security module" contracts that implement security models, such as PoS or PoA. Each `ACP99Manager` implementation contract is associated with one or more "security modules" that are the only contracts allowed to call the `ACP99Manager` functions that initiate validator set changes (`initiateValidatorRegistration` and `initiateValidatorWeightUpdate`).
Every time a validator is added/removed or a weight change is initiated, the `ACP99Manager` implementation will, in turn, call the corresponding function of the "security module" (`handleValidatorRegistration` or `handleValidatorWeightChange`). We recommend that the "security modules" reference an immutable `ACP99Manager` contract address for security reasons. It is up to the "security module" to decide what action to take when a validator is added/removed or a weight change is confirmed by the P-Chain. Such actions could be starting the withdrawal period and allocating rewards in a PoS L1.

```mermaid
graph LR
  Safe -.->|Own| SecurityModule
  Safe -.->|Own| Manager
  SecurityModule <-.->|Reference| Manager
  Safe -->|addValidator| SecurityModule
  SecurityModule -->|initiateValidatorRegistration| Manager
  Manager -->|sendWarpMessage| P
  P -->|completeValidatorRegistration| Manager
  Manager -->|handleValidatorRegistration| SecurityModule
```

"Security modules" could implement PoS, Liquid PoS, etc. The specification of such smart contracts is out of the scope of this ACP. A work-in-progress implementation is available in the [Suzaku Contracts Library](https://github.com/suzaku-network/suzaku-contracts-library/blob/main/README.md#acp99-contracts-library) repository. It will be updated until this ACP is considered `Implementable` based on the outcome of the discussion. Ava Labs' V2 Validator Manager also implements this architecture for a Proof-of-Stake security module, and is available in their [ICM Contracts Repository](https://github.com/ava-labs/icm-contracts/tree/validator-manager-v2.0.0/contracts/validator-manager/StakingManager.sol).

#### Single-contract Design

The single-contract design consists of a class hierarchy with the base class implementing `ACP99Manager`. The `PoAValidatorManager` child class in the below diagram may be swapped out for another class implementing a different security model, such as PoS.
```mermaid
classDiagram
  class ACP99Manager
  class ValidatorManager {
    completeValidatorRegistration
  }
  class PoAValidatorManager {
    initiateValidatorRegistration
    initiateEndValidation
    completeEndValidation
  }
  ACP99Manager <|-- ValidatorManager
  ValidatorManager <|-- PoAValidatorManager
```

No reference implementation is provided for this architecture in particular, but Ava Labs' V1 [Validator Manager](https://github.com/ava-labs/icm-contracts/tree/validator-manager-v1.0.0/contracts/validator-manager) implements much of the functional behavior described by the specification. It predates the specification, however, so there are some deviations. It should at most be treated as a model of an approximate implementation of this standard.

## Security Considerations

The audit process of `ACP99Manager` and reference implementations is of the utmost importance for the future of the Avalanche ecosystem, as most L1s would rely upon it to secure their L1.

## Open Questions

### Is there an interest to keep historical information about the validator set on the manager chain?

It is left to the implementer to decide if `getValidator` should return information about historical validators. Information about past validator performance may not be relevant for all applications (e.g. PoA has no need to know about past validators' uptimes). This information will still be available in archive nodes and offchain tools (e.g. explorers), but it is not enforced at the contract level.

### Should `ACP99Manager` include a churn control mechanism?

The Ava Labs [implementation](https://github.com/ava-labs/icm-contracts/blob/main/contracts/validator-manager/ValidatorManager.sol) of the `ValidatorManager` contract includes a churn control mechanism that prevents too much weight from being added or removed from the validator set in a short amount of time. Excessive churn can cause consensus failures, so it may be appropriate to require that churn tracking is implemented in some capacity.
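To make the churn question concrete, the following Python sketch (hypothetical; not part of this specification or of Ava Labs' implementation) shows one way such tracking could work: cap the total weight added or removed within a rolling period at a fixed percentage of the validator set's total weight at the start of that period.

```python
class ChurnTracker:
    """Sketch of a churn limiter: rejects validator weight changes once the
    accumulated churn in the current period exceeds a fixed percentage of
    the total validator weight observed at the period's start."""

    def __init__(self, period_seconds: int, max_churn_percent: int):
        self.period_seconds = period_seconds
        self.max_churn_percent = max_churn_percent
        self.period_start = None  # timestamp of the current period's start
        self.start_total_weight = 0
        self.churn_in_period = 0

    def check_and_record(self, now: int, weight_delta: int, total_weight: int) -> bool:
        """Return True (and record the churn) if a change of |weight_delta| is allowed."""
        if self.period_start is None or now - self.period_start >= self.period_seconds:
            # Start a new period, measured against the current total weight.
            self.period_start = now
            self.start_total_weight = total_weight
            self.churn_in_period = 0
        allowed = self.start_total_weight * self.max_churn_percent // 100
        if self.churn_in_period + abs(weight_delta) > allowed:
            return False
        self.churn_in_period += abs(weight_delta)
        return True
```

Both additions and removals count toward the cap, since either direction of rapid change can destabilize consensus.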
## Acknowledgments Special thanks to [@leopaul36](https://github.com/leopaul36), [@aaronbuchwald](https://github.com/aaronbuchwald), [@dhrubabasu](https://github.com/dhrubabasu), [@minghinmatthewlam](https://github.com/minghinmatthewlam) and [@michaelkaplan13](https://github.com/michaelkaplan13) for their reviews of previous versions of this ACP! ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # Avalanche Community Proposals (ACPs) URL: /docs/acps Official framework for proposing improvements and gathering consensus around changes to the Avalanche Network
## What is an Avalanche Community Proposal (ACP)?

An Avalanche Community Proposal is a concise document that introduces a change or best practice for adoption on the [Avalanche Network](https://www.avax.com). ACPs should provide clear technical specifications of any proposals and a compelling rationale for their adoption. ACPs are an open framework for proposing improvements and gathering consensus around changes to the Avalanche Network. ACPs can be proposed by anyone and will be merged into this repository as long as they are well-formatted and coherent. Once an overwhelming majority of the Avalanche Network/Community have [signaled their support for an ACP](https://docs.avax.network/nodes/configure/avalanchego-config-flags#avalanche-community-proposals), it may be scheduled for activation on the Avalanche Network by Avalanche Network Clients (ANCs). It is ultimately up to members of the Avalanche Network/Community to adopt ACPs they support by running a compatible ANC, such as [AvalancheGo](https://github.com/ava-labs/avalanchego).

## ACP Tracks

There are four kinds of ACP:

* A `Standards Track` ACP describes a change to the design or function of the Avalanche Network, such as a change to the P2P networking protocol, P-Chain design, Subnet architecture, or any change/addition that affects the interoperability of Avalanche Network Clients (ANCs).
* A `Best Practices Track` ACP describes a design pattern or common interface that should be used across the Avalanche Network to make it easier to integrate with Avalanche or for Subnets to interoperate with each other. This would include things like proposing a smart contract interface, not proposing a change to how smart contracts are executed.
* A `Meta Track` ACP describes a change to the ACP process or suggests a new way for the Avalanche Community to collaborate.
* A `Subnet Track` ACP describes a change to a particular Subnet. This would include things like configuration changes or coordinated Subnet upgrades.
## ACP Statuses There are four statuses of an ACP: * A `Proposed` ACP has been merged into the main branch of the ACP repository. It is actively being discussed by the Avalanche Community and may be modified based on feedback. * An `Implementable` ACP is considered "ready for implementation" by the author(s) and will no longer change meaningfully from its current form (which would require a new ACP). * An `Activated` ACP has been activated on the Avalanche Network via a coordinated upgrade by the Avalanche Community. Once an ACP is `Activated`, it is locked. * A `Stale` ACP has been abandoned by its author(s) because it is not supported by the Avalanche Community or has been replaced with another ACP. ## ACP Workflow ### Step 0: Think of a Novel Improvement to Avalanche The ACP process begins with a new idea for Avalanche. Each potential ACP must have an author(s): someone who writes the ACP using the style and format described below, shepherds the associated GitHub Discussion, and attempts to build consensus around the idea. Note that ideas and any resulting ACP is public. Authors should not post any ideas or anything in an ACP that the Author wants to keep confidential or to keep ownership rights in (such as intellectual property rights). ### Step 1: Post Your Idea to [GitHub Discussions](https://github.com/avalanche-foundation/ACPs/discussions/categories/ideas) The author(s) should first attempt to ascertain whether there is support for their idea by posting in the "Ideas" category of GitHub Discussions. Vetting an idea publicly before going as far as writing an ACP is meant to save both the potential author(s) and the wider Avalanche Community time. Asking the Avalanche Community first if an idea is original helps prevent too much time being spent on something that is guaranteed to be rejected based on prior discussions (searching the Internet does not always do the trick). 
It also helps to make sure the idea is applicable to the entire community and not just the author(s). Small enhancements or patches often don't need standardization between multiple projects; these don't need an ACP and should be injected into the relevant development workflow with a patch submission to the applicable ANC issue tracker. ### Step 2: Propose an ACP via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls) Once the author(s) feels confident that an idea has a decent chance of acceptance, an ACP should be drafted and submitted as a pull request (PR). This draft must be written in ACP style as described below. It is highly recommended that a single ACP contain a single key proposal or new idea. The more focused the ACP, the more successful it tends to be. If in doubt, split your ACP into several well-focused ones. The PR number of the ACP will become its assigned number. ### Step 3: Build Consensus on [GitHub Discussions](https://github.com/avalanche-foundation/ACPs/discussions/categories/discussion) and Provide an Implementation (if Applicable) ACPs will be merged by ACP maintainers if the proposal is generally well-formatted and coherent. ACP editors will attempt to merge anything worthy of discussion, regardless of feasibility or complexity, that is not a duplicate or incomplete. After an ACP is merged, an official GitHub Discussion will be opened for the ACP and linked to the proposal for community discussion. It is recommended for author(s) or supportive Avalanche Community members to post an accompanying non-technical overview of their ACP for general consumption in this GitHub Discussion. The ACP should be reviewed and broadly supported before a reference implementation is started, again to avoid wasting the author(s) and the Avalanche Community's time, unless a reference implementation will aid people in studying the ACP. 
### Step 4: Mark ACP as `Implementable` via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls) Once an ACP is considered complete by the author(s), it should be marked as `Implementable`. At this point, all open questions should be addressed and an associated reference implementation should be provided (if applicable). As mentioned earlier, the Avalanche Foundation meets periodically to recommend the ratification of specific ACPs but it is ultimately up to members of the Avalanche Network/Community to adopt ACPs they support by running a compatible Avalanche Network Client (ANC), such as [AvalancheGo](https://github.com/ava-labs/avalanchego). ### \[Optional] Step 5: Mark ACP as `Stale` via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls) An ACP can be superseded by a different ACP, rendering the original obsolete. If this occurs, the original ACP will be marked as `Stale`. ACPs may also be marked as `Stale` if the author(s) abandon work on it for a prolonged period of time (12+ months). ACPs may be reopened and moved back to `Proposed` if the author(s) restart work. ### Maintenance ACP maintainers will only merge PRs updating an ACP if it is created or approved by at least one of the author(s). ACP maintainers are not responsible for ensuring ACP author(s) approve the PR. ACP author(s) are expected to review PRs that target their unlocked ACP (`Proposed` or `Implementable`). Any PRs opened against a locked ACP (`Activated` or `Stale`) will not be merged by ACP maintainers. ## What belongs in a successful ACP? Each ACP must have the following parts: * `Preamble`: Markdown table containing metadata about the ACP, including the ACP number, a short descriptive title, the author(s), and optionally the contact info for each author, etc. 
* `Abstract`: Concise (\~200 word) description of the ACP * `Motivation`: Rationale for adopting the ACP and the specific issue/challenge/opportunity it addresses * `Specification`: Complete description of the semantics of any change should allow any ANC/Avalanche Community member to implement the ACP * `Security Considerations`: Security implications of the proposed ACP Each ACP can have the following parts: * `Open Questions`: Questions that should be resolved before implementation Each `Standards Track` ACP must have the following parts: * `Backwards Compatibility`: List of backwards incompatible changes required to implement the ACP and their impact on the Avalanche Community * `Reference Implementation`: Code, documentation, and telemetry (from a local network) of the ACP change Each `Best Practices Track` ACP can have the following parts: * `Backwards Compatibility`: List of backwards incompatible changes required to implement the ACP and their impact on the Avalanche Community * `Reference Implementation`: Code, documentation, and telemetry (from a local network) of the ACP change ### ACP Formats and Templates Each ACP is allocated a unique subdirectory in the `ACPs` directory. The name of this subdirectory must be of the form `N-T` where `N` is the ACP number and `T` is the ACP title with any spaces replaced by hyphens. ACPs must be written in [markdown](https://daringfireball.net/projects/markdown/syntax) format and stored at `ACPs/N-T/README.md`. Please see the [ACP template](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/TEMPLATE.md) for an example of the correct layout. ### Auxiliary Files ACPs may include auxiliary files such as diagrams or code snippets. Such files should be stored in the ACP's subdirectory (`ACPs/N-T/*`). There is no required naming convention for auxiliary files. ### Waived Copyright ACP authors must waive any copyright claims before an ACP will be merged into the repository. 
This can be done by including the following text in an ACP: ```text ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). ``` ## Proposals *You can view the status of each ACP on the [ACP Tracker](https://github.com/orgs/avalanche-foundation/projects/1/views/1).* | Number | Title | Author(s) | Type | | :------------------------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------- | | [13](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/13-subnet-only-validators/README.md) | Subnet-Only Validators (SOVs) | Patrick O'Grady ([contact@patrickogrady.xyz](mailto:contact@patrickogrady.xyz)) | Standards | | [20](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/20-ed25519-p2p/README.md) | Ed25519 p2p | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | Standards | | [23](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/23-p-chain-native-transfers/README.md) | P-Chain Native Transfers | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | Standards | | [24](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/24-shanghai-eips/README.md) | Activate Shanghai EIPs on C-Chain | Darioush Jalali ([@darioush](https://github.com/darioush)) | Standards | | [25](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/25-vm-application-errors/README.md) | Virtual Machine Application Errors | Joshua Kim ([@joshua-kim](https://github.com/joshua-kim)) | Standards | | 
[30](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/30-avalanche-warp-x-evm/README.md) | Integrate Avalanche Warp Messaging into the EVM | Aaron Buchwald ([aaron.buchwald56@gmail.com](mailto:aaron.buchwald56@gmail.com)) | Standards | | [31](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/31-enable-subnet-ownership-transfer/README.md) | Enable Subnet Ownership Transfer | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | Standards | | [41](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/41-remove-pending-stakers/README.md) | Remove Pending Stakers | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | Standards | | [62](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/62-disable-addvalidatortx-and-adddelegatortx/README.md) | Disable `AddValidatorTx` and `AddDelegatorTx` | Jacob Everly ([https://twitter.com/JacobEv3rly](https://twitter.com/JacobEv3rly)), Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | Standards | | [75](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/75-acceptance-proofs/README.md) | Acceptance Proofs | Joshua Kim ([@joshua-kim](https://github.com/joshua-kim)) | Standards | | [77](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/77-reinventing-subnets/README.md) | Reinventing Subnets | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | Standards | | [83](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/83-dynamic-multidimensional-fees/README.md) | Dynamic Multidimensional Fees for P-Chain and X-Chain | Alberto Benegiamo ([@abi87](https://github.com/abi87)) | Standards | | [84](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/84-table-preamble/README.md) | Table Preamble for ACPs | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)) | Meta | | 
[99](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/99-validatorsetmanager-contract/README.md) | Validator Manager Solidity Standard | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)), Cam Schultz ([@cam-schultz](https://github.com/cam-schultz)) | Best Practices | | [103](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/103-dynamic-fees/README.md) | Add Dynamic Fees to the X-Chain and P-Chain | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)), Alberto Benegiamo ([@abi87](https://github.com/abi87)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)) | Standards | | [108](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/108-evm-event-importing/README.md) | EVM Event Importing | Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | Best Practices | | [113](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/113-provable-randomness/README.md) | Provable Virtual Machine Randomness | Tsachi Herman ([@tsachiherman](https://github.com/tsachiherman)) | Standards | | [118](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/118-warp-signature-request/README.md) | Standardized P2P Warp Signature Request Interface | Cam Schultz ([@cam-schultz](https://github.com/cam-schultz)) | Best Practices | | [125](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/125-basefee-reduction/README.md) | Reduce C-Chain minimum base fee from 25 nAVAX to 1 nAVAX | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Darioush Jalali ([@darioush](https://github.com/darioush)) | Standards | | [131](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/131-cancun-eips/README.md) | Activate Cancun EIPs on C-Chain and Subnet-EVM chains | Darioush Jalali ([@darioush](https://github.com/darioush)), Ceyhun Onur ([@ceyonur](https://github.com/ceyonur)) | Standards | | 
[151](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/151-use-current-block-pchain-height-as-context/README.md) | Use current block P-Chain height as context for state verification | Ian Suvak ([@iansuvak](https://github.com/iansuvak)) | Standards | | [176](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md) | Dynamic EVM Gas Limits and Price Discovery Updates | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | Standards | | [181](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/181-p-chain-epoched-views/README.md) | P-Chain Epoched Views | Cam Schultz ([@cam-schultz](https://github.com/cam-schultz)) | Standards | | [191](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/191-seamless-l1-creation/README.md) | Seamless L1 Creations (CreateL1Tx) | Martin Eckardt ([@martineckardt](https://github.com/martineckardt)), Aaron Buchwald ([aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)), Meag FitzGerald ([@meaghanfitzgerald](https://github.com/meaghanfitzgerald)) | Standards | | [194](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/194-streaming-asynchronous-execution/README.md) | Streaming Asynchronous Execution | Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)) | Standards | | [204](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/204-precompile-secp256r1/README.md) | Precompile for secp256r1 Curve Support | Santiago Cammi ([@scammi](https://github.com/scammi)), Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)) | Standards | | 
[209](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/209-eip7702-style-account-abstraction/README.md) | EIP-7702-style Set Code for EOAs | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Aaron Buchwald ([aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | Standards | | [224](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/224-dynamic-gas-limit-in-subnet-evm/README.md) | Introduce ACP-176-Based Dynamic Gas Limits and Fee Manager Precompile in Subnet-EVM | Ceyhun Onur ([@ceyonur](https://github.com/ceyonur)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | Standards | | [226](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/ACPs/226-dynamic-minimum-block-times/README.md) | Dynamic Minimum Block Times | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | Standards |

## Contributing

Before contributing to ACPs, please read the [ACP Terms of Contribution](https://raw.githubusercontent.com/avalanche-foundation/ACPs/main/CONTRIBUTING.md).

# Introduction

URL: /docs/nodes

A brief introduction to the concepts of nodes and validators within the Avalanche ecosystem.

The Avalanche network is a decentralized platform designed for high throughput and low latency, enabling a wide range of applications. At the core of the network are nodes and validators, which play vital roles in maintaining the network's security, reliability, and performance.

## What is a Node?

A node in the Avalanche network is any computer that participates in the network by maintaining a copy of the blockchain, relaying information, and validating transactions. Nodes can be of different types depending on their role and level of participation in the network's operations.
### Types of Nodes * **Full Node**: Stores the entire blockchain data and helps propagate transactions and blocks across the network. It does not participate directly in consensus but is crucial for the network's health and decentralization. **Archival full nodes** store the entire blockchain ledger, including all transactions from the beginning to the most recent. **Pruned full nodes** download the blockchain ledger, then delete blocks starting with the oldest to save disk space. * **Validator Node**: A specialized type of full node that actively participates in the consensus process by validating transactions, producing blocks, and securing the network. Validator nodes are required to stake AVAX tokens as collateral to participate in the consensus mechanism. * **RPC (Remote Procedure Call) Node**: These nodes act as an interface, enabling third-party applications to query and interact with the blockchain. ## More About Validator Nodes A validator node participates in the network's consensus protocol by validating transactions and creating new blocks. Validators play a critical role in ensuring the integrity, security, and decentralization of the network. #### Key Functions of Validators: * **Transaction Validation**: Validators verify the legitimacy of transactions before they are added to the blockchain. * **Block Production**: Validators produce and propose new blocks to the network. This involves reaching consensus with other validators to agree on which transactions should be included in the next block. * **Security and Consensus**: Validators work together to secure the network and ensure that only valid transactions are confirmed. This is done through the Avalanche Consensus protocol, which allows validators to achieve agreement quickly and with high security. ### Primary Network Validators To become a validator on the Primary Network, you must stake **2,000 AVAX**.
This will grant you the ability to validate transactions across all three chains in the Primary Network: the P-Chain, C-Chain, and X-Chain. ### Avalanche L1 Validator To become a validator on an Avalanche L1, you must meet the specific validator management criteria for that network. If the L1 operates on a Proof-of-Stake (PoS) model, you will need to stake the required amount of tokens to be eligible. In addition to meeting these criteria, there is a monthly fee of **1.33 AVAX** per validator. # System Requirements URL: /docs/nodes/system-requirements This document provides information about the system and networking requirements for running an AvalancheGo node. ## Hardware and Operating Systems Avalanche is an incredibly lightweight protocol, so nodes can run on commodity hardware. Note that as network usage increases, hardware requirements may change. * **CPU**: Equivalent of 8 AWS vCPU * **RAM**: 8 GiB (16 GiB recommended) * **Storage**: 1 TiB SSD * **OS**: Ubuntu 22.04 or macOS >= 12 Nodes which choose to use an HDD may get poor and random read/write latencies, therefore reducing performance and reliability. An SSD is strongly suggested. ## Networking To run successfully, AvalancheGo needs to accept connections from the Internet on the network port `9651`. Before you proceed with the installation, you need to determine the networking environment your node will run in. ### On a Cloud Provider If your node is running on a cloud provider computer instance, it will have a static IP. Find out what that static IP is, or set it up if you didn't already. ### On a Home Connection If you're running a node on a computer that is on a residential internet connection, you have a dynamic IP; that is, your IP will change periodically. You will need to set up inbound port forwarding of port `9651` from the internet to the computer the node is installed on.
As there are too many models and router configurations, we cannot provide instructions on what exactly to do, but there are online guides to be found (like [this](https://www.noip.com/support/knowledgebase/general-port-forwarding-guide/), or [this](https://www.howtogeek.com/66214/how-to-forward-ports-on-your-router/)), and your service provider support might help too. Please note that a fully connected Avalanche node maintains and communicates over a couple of thousand live TCP connections. For some under-powered and older home routers, that might be too much to handle. If that is the case, you may experience lag on other computers connected to the same router, your node getting benched, failure to sync, and similar issues. # Disclaimer URL: /docs/quick-start/disclaimer The Knowledge Base, including all the Help articles on this site, is provided for technical support purposes only, without representation, warranty or guarantee of any kind. Not an offer to sell or solicitation of an offer to buy any security or other regulated financial instrument. Not technical, investment, financial, accounting, tax, legal or other advice; please consult your own professionals. Please conduct your own research before connecting to or interacting with any dapp or third party or making any investment or financial decisions. MoonPay, ParaSwap and any other third party services or dapps you access are offered by third parties unaffiliated with us. Please review this [Notice](https://assets.website-files.com/602e8e4411398ca20cfcafd3/60ec9607c853cd466383f1ad_Important%20Notice%20-%20avalabs.org.pdf) and the [Terms of Use](https://core.app/terms/core). # Quick Start URL: /docs/quick-start Get started with Avalanche networks. Avalanche is a platform for building decentralized applications with near-instant transaction finality.
## Network Configuration | Network | Chain ID | RPC URL | Explorer | | ---------------- | -------- | ---------------------------------------------------------------------------------------- | ----------------------------------------------------- | | **Mainnet** | `43114` | [https://api.avax.network/ext/bc/C/rpc](https://api.avax.network/ext/bc/C/rpc) | [Explorer](https://subnets.avax.network/c-chain) | | **Fuji Testnet** | `43113` | [https://api.avax-test.network/ext/bc/C/rpc](https://api.avax-test.network/ext/bc/C/rpc) | [Explorer](https://subnets-test.avax.network/c-chain) | **Symbol**: `AVAX` (for both networks) ## Getting Test Tokens (Fuji) | Source | Description | | ------------------------------------------------------------------------------ | ---------------------------------- | | [Builder Console Faucet](/console/primary-network/faucet) | Primary faucet for test tokens | | [Core Wallet Faucet](https://core.app/tools/testnet-faucet/?subnet=c\&token=c) | Core wallet faucet for test tokens | | [Guild](https://guild.xyz/avalanche) | Request faucet coupons | # Avalanche Consensus URL: /docs/primary-network/avalanche-consensus Learn about the groundbreaking Avalanche Consensus algorithms. Consensus is the task of getting a group of computers (a.k.a. nodes) to come to an agreement on a decision. In blockchain, this means that all the participants in a network have to agree on the changes made to the shared ledger. This agreement is reached through a specific process, a consensus protocol, that ensures that everyone sees the same information and that the information is accurate and trustworthy. ## Avalanche Consensus Avalanche Consensus is a consensus protocol that is scalable, robust, and decentralized. It combines features of both classical and Nakamoto consensus mechanisms to achieve high throughput, fast finality, and energy efficiency. For the whitepaper, see [here](https://www.avalabs.org/whitepapers). 
Key Features Include: * Speed: Avalanche consensus provides sub-second, immutable finality, ensuring that transactions are quickly confirmed and irreversible. * Scalability: Avalanche consensus enables high network throughput while ensuring low latency. * Energy Efficiency: Unlike other popular consensus protocols, participation in Avalanche consensus is neither computationally intensive nor expensive. * Adaptive Security: Avalanche consensus is designed to resist various attacks, including sybil attacks, distributed denial-of-service (DDoS) attacks, and collusion attacks. Its probabilistic nature ensures that the consensus outcome converges to the desired state, even when the network is under attack. ## Conceptual Overview Consensus protocols in the Avalanche family operate through repeated sub-sampled voting. When a node is determining whether a [transaction](http://support.avalabs.org/en/articles/4587384-what-is-a-transaction) should be accepted, it asks a small, random subset of [validator nodes](http://support.avalabs.org/en/articles/4064704-what-is-a-blockchain-validator) for their preference. Each queried validator replies with the transaction that it prefers, or thinks should be accepted. Consensus will never include a transaction that is determined to be **invalid**. For example, if you were to submit a transaction to send 100 AVAX to a friend, but your wallet only has 2 AVAX, this transaction is considered **invalid** and will not participate in consensus. If a sufficient majority of the validators sampled reply with the same preferred transaction, this becomes the preferred choice of the validator that inquired. In the future, this node will reply with the transaction preferred by the majority. The node repeats this sampling process until the validators queried reply with the same answer for a sufficient number of consecutive rounds. * The number of validators required to be considered a "sufficient majority" is referred to as "α" (*alpha*). 
* The number of consecutive rounds required to reach consensus, a.k.a. the "Confidence Threshold," is referred to as "β" (*beta*). * Both α and β are configurable. When a transaction has no conflicts, finalization happens very quickly. When conflicts exist, honest validators quickly cluster around conflicting transactions, entering a positive feedback loop until all correct validators prefer that transaction. This leads to the acceptance of non-conflicting transactions and the rejection of conflicting transactions. ![How Avalanche Consensus Works](/images/avalanche-consensus1.png) Avalanche Consensus guarantees that if any honest validator accepts a transaction, all honest validators will come to the same conclusion. For a great visualization, check out [this demo](https://tedyin.com/archive/snow-bft-demo/#/snow). ## Deep Dive Into Avalanche Consensus ### Intuition First, let's develop some intuition about the protocol. Imagine a room full of people trying to agree on what to get for lunch. Suppose it's a binary choice between pizza and barbecue. Some people might initially prefer pizza while others initially prefer barbecue. Ultimately, though, everyone's goal is to achieve **consensus**. Everyone asks a random subset of the people in the room what their lunch preference is. If more than half say pizza, the person thinks, "OK, looks like things are leaning toward pizza. I prefer pizza now." That is, they adopt the *preference* of the majority. Similarly, if a majority say barbecue, the person adopts barbecue as their preference. Everyone repeats this process. Each round, more and more people have the same preference. This is because the more people that prefer an option, the more likely someone is to receive a majority reply and adopt that option as their preference. After enough rounds, they reach consensus and decide on one option, which everyone prefers. 
### Snowball The intuition above outlines the Snowball Algorithm, which is a building block of Avalanche consensus. Let's review the Snowball algorithm. #### Parameters * *n*: number of participants * *k* (sample size): between 1 and *n* * α (quorum size): between 1 and *k* * β (decision threshold): >= 1 #### Algorithm

```
preference := pizza
consecutiveSuccesses := 0
while not decided:
    ask k random people their preference
    if >= α give the same response:
        preference := response with >= α
        if preference == old preference:
            consecutiveSuccesses++
        else:
            consecutiveSuccesses = 1
    else:
        consecutiveSuccesses = 0
    if consecutiveSuccesses >= β:
        decide(preference)
```

#### Algorithm Explained Everyone has an initial preference for pizza or barbecue. Until someone has *decided*, they query *k* people (the sample size) and ask them what they prefer. If α or more people give the same response, that response is adopted as the new preference. α is called the *quorum size*. If the new preference is the same as the old preference, the `consecutiveSuccesses` counter is incremented. If the new preference is different from the old preference, the `consecutiveSuccesses` counter is set to `1`. If no response gets a quorum (an α majority of the same response), then the `consecutiveSuccesses` counter is set to `0`. Everyone repeats this until they get a quorum for the same response β times in a row. If one person decides pizza, then every other person following the protocol will eventually also decide on pizza. Random changes in preference, caused by random sampling, cause a network preference for one choice, which begets more network preference for that choice until it becomes irreversible and then the nodes can decide. In our example, there is a binary choice between pizza or barbecue, but Snowball can be adapted to achieve consensus on decisions with many possible choices. The liveness and safety thresholds are parameterizable.
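The Snowball loop above can be exercised as a small, self-contained simulation. This is an illustrative sketch of a single participant querying a fixed population of peer preferences; the peer split, seed, and helper names are made up for illustration and this is not AvalancheGo's implementation:

```python
import random

def snowball(peers, k, alpha, beta, rng, max_rounds=10_000):
    """One participant's Snowball loop against a static list of peer
    preferences. Returns the decided preference, or None if undecided."""
    preference = rng.choice(peers)          # start from some initial preference
    consecutive_successes = 0
    for _ in range(max_rounds):
        sample = rng.sample(peers, k)       # ask k random peers their preference
        counts = {}
        for p in sample:
            counts[p] = counts.get(p, 0) + 1
        winner, votes = max(counts.items(), key=lambda kv: kv[1])
        if votes >= alpha:                  # an alpha-quorum formed
            if winner == preference:
                consecutive_successes += 1
            else:
                preference = winner
                consecutive_successes = 1
        else:                               # no quorum: reset the counter
            consecutive_successes = 0
        if consecutive_successes >= beta:   # beta quorums in a row: decide
            return preference
    return None

# 80 of 100 peers prefer pizza; mainnet-like parameters k=20, alpha=14, beta=20.
rng = random.Random(7)
peers = ["pizza"] * 80 + ["barbecue"] * 20
print(snowball(peers, k=20, alpha=14, beta=20, rng=rng))
```

With a clear initial majority, the loop converges on that majority's choice; in a real deployment every participant runs this loop concurrently, which is what produces the positive feedback described above.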
As the quorum size, α, increases, the safety threshold increases, and the liveness threshold decreases. This means the network can tolerate more byzantine (deliberately incorrect, malicious) nodes and remain safe, meaning all nodes will eventually agree whether something is accepted or rejected. The liveness threshold is the number of malicious participants that can be tolerated before the protocol is unable to make progress. These values, which are constants, are quite small on the Avalanche Network. The sample size, *k*, is `20`. So when a node asks a group of nodes their opinion, it only queries `20` nodes out of the whole network. The quorum size, α, is `14`. So if `14` or more nodes give the same response, that response is adopted as the querying node's preference. The decision threshold, β, is `20`. A node decides on a choice after receiving `20` consecutive quorum (α majority) responses. Snowball is very scalable as the number of nodes on the network, *n*, increases. Regardless of the number of participants in the network, the number of consensus messages sent remains the same because in a given query, a node only queries `20` nodes, even if there are thousands of nodes in the network. Everything discussed to this point is how Avalanche is described in [the Avalanche white-paper](https://assets-global.website-files.com/5d80307810123f5ffbb34d6e/6009805681b416f34dcae012_Avalanche%20Consensus%20Whitepaper.pdf). The implementation of the Avalanche consensus protocol by Ava Labs (namely in AvalancheGo) has some optimizations for latency and throughput. ### Blocks A block is a fundamental component that forms the structure of a blockchain. It serves as a container or data structure that holds a collection of transactions or other relevant information. Each block is cryptographically linked to the previous block, creating a chain of blocks, hence the term "blockchain." In addition to storing a reference to its parent, a block contains a set of transactions.
These transactions can represent various types of information, such as financial transactions, smart contract operations, or data storage requests. If a node receives a vote for a block, it also counts as a vote for all of the block's ancestors (its parent, the parent's parent, etc.). ### Finality Avalanche consensus is probabilistically safe up to a safety threshold. That is, the probability that a correct node accepts a transaction that another correct node rejects can be made arbitrarily low by adjusting system parameters. In Nakamoto consensus protocols (as used in Bitcoin and Ethereum, for example), a block may be included in the chain but then be removed and not end up in the canonical chain. This means one may have to wait an hour or more before a transaction can safely be considered settled. In Avalanche, acceptance/rejection are **final and irreversible** and only take a few seconds. ### Optimizations It's not safe for nodes to just ask, "Do you prefer this block?" when they query validators. In Ava Labs' implementation, during a query a node asks, "Given that this block exists, which block do you prefer?" Instead of getting back a binary yes/no, the node receives the other node's preferred block. Nodes don't only query upon hearing of a new block; they repeatedly query other nodes until there are no blocks processing. Nodes may not need to wait until they get all *k* query responses before registering the outcome of a poll. If a block has already received α votes, then there's no need to wait for the rest of the responses. ### Validators If it were free to become a validator on the Avalanche network, that would be problematic because a malicious actor could start many, many nodes which would get queried very frequently. The malicious actor could make the node act badly and cause a safety or liveness failure. The validators, the nodes which are queried as part of consensus, have influence over the network.
They have to pay for that influence with real-world value in order to prevent this kind of ballot stuffing. This idea of using real-world value to buy influence over the network is called Proof of Stake. To become a validator, a node must **bond** (stake) something valuable (**AVAX**). The more AVAX a node bonds, the more often that node is queried by other nodes. When a node samples the network it's not uniformly random. Rather, it's weighted by stake amount. Nodes are incentivized to be validators because they get a reward if, while they validate, they're sufficiently correct and responsive. Avalanche doesn't have slashing. If a node doesn't behave well while validating, such as giving incorrect responses or perhaps not responding at all, its stake is still returned in whole, but with no reward. As long as a sufficient portion of the bonded AVAX is held by correct nodes, then the network is safe, and is live for virtuous transactions. ### Big Ideas Two big ideas in Avalanche are **subsampling** and **transitive voting**. Subsampling has low message overhead. It doesn't matter if there are twenty validators or two thousand validators; the number of consensus messages a node sends during a query remains constant. Transitive voting, where a vote for a block is a vote for all its ancestors, helps with transaction throughput. Each vote is actually many votes in one. ### Loose Ends Transactions are created by users which call an API on an [AvalancheGo](https://github.com/ava-labs/avalanchego) full node or create them using a library such as [AvalancheJS](https://github.com/ava-labs/avalanchejs). ### Other Observations Conflicting transactions are not guaranteed to be live. That's not really a problem because if you want your transaction to be live then you should not issue a conflicting transaction. Snowman is the name of Ava Labs' implementation of the Avalanche consensus protocol for linear chains. 
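The stake-weighted sampling described above (the more AVAX a node bonds, the more often it is queried) can be sketched as follows. The validator names and stake amounts are invented for illustration, and this simple remaining-stake draw is only an approximation, not AvalancheGo's actual sampler:

```python
import random

def stake_weighted_sample(stakes, k, rng):
    """Pick k distinct validators, each draw weighted by remaining stake.
    stakes: dict mapping validator name -> bonded stake amount."""
    pool = dict(stakes)
    chosen = []
    for _ in range(k):
        total = sum(pool.values())
        r = rng.uniform(0, total)            # landing point on the stake line
        acc = 0.0
        for validator, stake in pool.items():
            acc += stake
            if r <= acc:                     # r fell inside this validator's span
                chosen.append(validator)
                del pool[validator]          # sample without replacement
                break
    return chosen

rng = random.Random(1)
# Hypothetical set: "whale" has bonded 100x more than each of ten small nodes.
stakes = {"whale": 2_000_000, **{f"small{i}": 20_000 for i in range(10)}}
picks = [stake_weighted_sample(stakes, k=1, rng=rng)[0] for _ in range(1_000)]
print(picks.count("whale") / len(picks))
```

Across the 1,000 single-node draws, "whale" is selected in roughly the proportion of total stake it holds (about 2,000,000 / 2,200,000), illustrating why influence over queries tracks bonded AVAX.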
If there are no undecided transactions, the Avalanche consensus protocol *quiesces*. That is, it does nothing if there is no work to be done. This makes Avalanche more sustainable than Proof-of-Work, where nodes need to do work constantly. Avalanche has no leader. Any node can propose a transaction and any node that has staked AVAX can vote on every transaction, which makes the network more robust and decentralized. ## Why Do We Care? Avalanche is a general consensus engine. It doesn't matter what type of application is put on top of it. The protocol allows the decoupling of the application layer from the consensus layer. If you're building a dapp on Avalanche then you just need to define a few things, like how conflicts are defined and what is in a transaction. You don't need to worry about how nodes come to an agreement. The consensus protocol is a black box: you put something into it, and it comes back as accepted or rejected. Avalanche can be used for all kinds of applications, not just P2P payment networks. Avalanche's Primary Network has an instance of the Ethereum Virtual Machine, which is backward compatible with existing Ethereum Dapps and dev tooling. The Ethereum consensus protocol has been replaced with Avalanche consensus to enable lower block latency and higher throughput. Avalanche is very performant. It can process thousands of transactions per second with one to two second acceptance latency. ## Summary Avalanche consensus is a radical breakthrough in distributed systems. It represents as large a leap forward as the classical and Nakamoto consensus protocols that came before it. Now that you have a better understanding of how it works, check out the rest of the documentation for building game-changing Dapps and financial instruments on Avalanche. # AVAX Token URL: /docs/primary-network/avax-token Learn about the native token of the Avalanche Primary Network. AVAX is the native utility token of Avalanche.
It's a hard-capped, scarce asset that is used to pay for fees, secure the platform through staking, and provide a basic unit of account between the multiple Avalanche L1s created on Avalanche. `1 nAVAX` is equal to `0.000000001 AVAX`. ## Utility AVAX is a capped-supply (up to 720M) resource in the Avalanche ecosystem that's used to power the network. AVAX is used to secure the ecosystem through staking and for day-to-day operations like issuing transactions. AVAX represents the weight that each node has in network decisions. No single actor owns the Avalanche Network, so each validator in the network is given a proportional weight in the network's decisions corresponding to the proportion of total stake that they own through proof of stake (PoS). Any entity trying to execute a transaction on Avalanche pays a corresponding fee (commonly known as "gas") to run it on the network. The fee used to execute a transaction on Avalanche is burned, or permanently removed from circulating supply. ## Tokenomics A fixed amount of 360M AVAX was minted at genesis, but a small amount of AVAX is constantly minted as a reward to validators. The protocol rewards validators for good behavior by minting them AVAX rewards at the end of their staking period. The minting process offsets the AVAX burned by transaction fees. While AVAX is still far from its supply cap, it will remain an inflationary asset. Avalanche does not take away any portion of a validator's already staked tokens (commonly known as "slashing") for negligent/malicious staking periods; however, this behavior is disincentivized as validators who attempt to do harm to the network would expend their node's computing resources for no reward.
AVAX is minted according to the following formula, where $R_j$ is the total number of tokens at year $j$, with $R_1 = 360M$, and $R_l$ representing the last year that the values of $\gamma,\lambda \in \mathbb{R}$ were changed; $c_j$ is the yet un-minted supply of coins to reach $720M$ at year $j$ such that $c_j \leq 360M$; $u$ represents a staker, with $u.s_{amount}$ representing the total amount of stake that $u$ possesses, and $u.s_{time}$ the length of staking for $u$: $$ R_j = R_l + \sum_{\forall u} \rho(u.s_{amount}, u.s_{time}) \times \frac{c_j}{L} \times \left( \sum_{i=0}^{j}\frac{1}{\left(\gamma + \frac{1}{1 + i^\lambda}\right)^i} \right) $$ where, $$ L = \left(\sum_{i=0}^{\infty} \frac{1}{\left(\gamma + \frac{1}{1 + i^\lambda} \right)^i} \right) $$ At genesis, $c_1 = 360M$. The values of $\gamma$ and $\lambda$ are governable, and if changed, the function is recomputed with the new value of $c_*$. We have that $\sum_{*}\rho(*) \le 1$. $\rho(*)$ is a linear function that can be computed as follows ($u.s_{time}$ is measured in weeks, and $u.s_{amount}$ is measured in AVAX tokens): $$ \rho(u.s_{amount}, u.s_{time}) = (0.002 \times u.s_{time} + 0.896) \times \frac{u.s_{amount}}{R_j} $$ If the entire supply of tokens at year $j$ is staked for the maximum amount of staking time (one year, or 52 weeks), then $\sum_{\forall u}\rho(u.s_{amount}, u.s_{time}) = 1$. If, instead, every token is staked continuously for the minimal stake duration of two weeks, then $\sum_{\forall u}\rho(u.s_{amount}, u.s_{time}) = 0.9$. Therefore, staking for the maximum amount of time incurs an additional 11.11% of tokens minted, incentivizing stakers to stake for longer periods. Due to the capped supply, the above function guarantees that AVAX will never exceed a total of $720M$ tokens, or $\lim_{j \to \infty} R(j) = 720M$.
# Primary Network URL: /docs/primary-network Learn about the Avalanche Primary Network and its three blockchains. Avalanche is a heterogeneous network of blockchains. As opposed to homogeneous networks, where all applications reside in the same chain, heterogeneous networks allow separate chains to be created for different applications. The Primary Network is a special [Avalanche L1](/docs/quick-start/avalanche-l1s) that runs three blockchains: * The Platform Chain [(P-Chain)](/docs/quick-start/primary-network#p-chain) * The Contract Chain [(C-Chain)](/docs/quick-start/primary-network#c-chain) * The Exchange Chain [(X-Chain)](/docs/quick-start/primary-network#x-chain) [Avalanche Mainnet](/docs/quick-start/networks/mainnet) is comprised of the Primary Network and all deployed Avalanche L1s. A node can become a validator for the Primary Network by staking at least **2,000 AVAX**. ![Primary network](/images/primary-network1.png) ## The Chains All validators of the Primary Network are required to validate and secure the following: ### C-Chain The **C-Chain** is an implementation of the Ethereum Virtual Machine (EVM). The [C-Chain's API](/docs/api-reference/c-chain/api) supports Geth's API and supports the deployment and execution of smart contracts written in Solidity. The C-Chain is an instance of the [Coreth](https://github.com/ava-labs/coreth) Virtual Machine. ### P-Chain The **P-Chain** is responsible for all validator and Avalanche L1-level operations. The [P-Chain API](/docs/api-reference/p-chain/api) supports the creation of new blockchains and Avalanche L1s, the addition of validators to Avalanche L1s, staking operations, and other platform-level operations. The P-Chain is an instance of the Platform Virtual Machine. ### X-Chain The **X-Chain** is responsible for operations on digital smart assets known as **Avalanche Native Tokens**. 
A smart asset is a representation of a real-world resource (for example, equity, or a bond) with sets of rules that govern its behavior, like "can't be traded until tomorrow." The [X-Chain API](/docs/api-reference/x-chain/api) supports the creation and trade of Avalanche Native Tokens. One asset traded on the X-Chain is AVAX. When you issue a transaction to a blockchain on Avalanche, you pay a fee denominated in AVAX. The X-Chain is an instance of the Avalanche Virtual Machine (AVM). # Validator Rewards Formula URL: /docs/primary-network/rewards-formula Learn about the rewards formula for the Avalanche Primary Network validator ## Primary Network Validator Rewards Consider a Primary Network validator which stakes a $Stake$ amount of `AVAX` for $StakingPeriod$ seconds. The potential reward is calculated **at the beginning of the staking period**. At the beginning of the staking period there is a $Supply$ amount of `AVAX` in the network. The maximum amount of `AVAX` is $MaximumSupply$. At the end of its staking period, a responsive Primary Network validator receives a reward. 
$$ Potential Reward = \left(MaximumSupply - Supply \right) \times \frac{Stake}{Supply} \times \frac{Staking Period}{Minting Period} \times EffectiveConsumptionRate $$ where, $$ MaximumSupply - Supply = \text{the number of AVAX tokens left to emit in the network} $$ $$ \frac{Stake}{Supply} = \text{the individual's stake as a percentage of all available AVAX tokens in the network} $$ $$ \frac{StakingPeriod}{MintingPeriod} = \text{time tokens are locked up divided by the $MintingPeriod$} $$ $$ \text{($MintingPeriod$ is one year, as configured by the network).} $$ $$ EffectiveConsumptionRate = $$ $$ \frac{MinConsumptionRate}{PercentDenominator} \times \left(1- \frac{Staking Period}{Minting Period}\right) + \frac{MaxConsumptionRate}{PercentDenominator} \times \frac{Staking Period}{Minting Period} $$ Note that $StakingPeriod$ is the staker's entire staking period, not just the staker's uptime, that is, the aggregated time during which the staker has been responsive. The uptime comes into play only to decide whether a staker should be rewarded; to calculate the actual reward, only the staking period duration is taken into account. $EffectiveConsumptionRate$ is the rate at which the Primary Network validator is rewarded based on $StakingPeriod$ selection. $MinConsumptionRate$ and $MaxConsumptionRate$ bound $EffectiveConsumptionRate$: $$ MinConsumptionRate \leq EffectiveConsumptionRate \leq MaxConsumptionRate $$ The larger $StakingPeriod$ is, the closer $EffectiveConsumptionRate$ is to $MaxConsumptionRate$. The smaller $StakingPeriod$ is, the closer $EffectiveConsumptionRate$ is to $MinConsumptionRate$. A staker achieves the maximum reward for its stake if $StakingPeriod$ = $Minting Period$.
The reward is: $$ Max Reward = \left(MaximumSupply - Supply \right) \times \frac{Stake}{Supply} \times \frac{MaxConsumptionRate}{PercentDenominator} $$ Note that this formula is the same as the reward formula at the top of this section because $EffectiveConsumptionRate$ = $MaxConsumptionRate$. For reference, you can find all the Primary Network parameters in [the section below](#primary-network-parameters-on-mainnet). ## Delegators Weight Checks There are bounds set on the maximum amount of delegated stake that a validator can receive. The maximum weight $MaxWeight$ a validator $Validator$ can have is: $$ MaxWeight = \min(Validator.Weight \times MaxValidatorWeightFactor, MaxValidatorStake) $$ where $MaxValidatorWeightFactor$ and $MaxValidatorStake$ are the Primary Network parameters listed below. A delegator won't be added to a validator if the combined weight of the validator, its existing delegators, and the new delegator would be larger than $MaxWeight$. Note that this must be true at any point in time. Note that setting $MaxValidatorWeightFactor$ to 1 disables delegation since then $MaxWeight = Validator.Weight$. ## Notes on Percentages `PercentDenominator = 1_000_000` is the denominator used to calculate percentages. It allows you to specify percentages with up to 4 decimal places. To denominate your percentage in `PercentDenominator` just multiply it by `10_000`. For example: * `100%` corresponds to `100 * 10_000 = 1_000_000` * `1%` corresponds to `1 * 10_000 = 10_000` * `0.02%` corresponds to `0.02 * 10_000 = 200` * `0.0007%` corresponds to `0.0007 * 10_000 = 7` ## Primary Network Parameters on Mainnet For reference we list below the Primary Network parameters on Mainnet: * `AssetID = Avax` * `InitialSupply = 240_000_000 Avax` * `MaximumSupply = 720_000_000 Avax`. * `MinConsumptionRate = 0.10 * reward.PercentDenominator`. * `MaxConsumptionRate = 0.12 * reward.PercentDenominator`. * `Minting Period = 365 * 24 * time.Hour`. * `MinValidatorStake = 2_000 Avax`.
* `MaxValidatorStake = 3_000_000 Avax`. * `MinStakeDuration = 2 * 7 * 24 * time.Hour`. * `MaxStakeDuration = 365 * 24 * time.Hour`. * `MinDelegationFee = 20000`, that is `2%`. * `MinDelegatorStake = 25 Avax`. * `MaxValidatorWeightFactor = 5`. This is a platformVM parameter rather than a genesis one, so it's shared across networks. * `UptimeRequirement = 0.8`, that is `80%`. ### Interactive Graph The graph below demonstrates the reward as a function of the length of time staked. The x-axis depicts $\frac{StakingPeriod}{MintingPeriod}$ as a percentage while the y-axis depicts $Reward$ as a percentage of $MaximumSupply - Supply$, the amount of tokens left to be emitted. Graph variables correspond to those defined above: * `h` (high) = $MaxConsumptionRate$ * `l` (low) = $MinConsumptionRate$ * `s` = $\frac{Stake}{Supply}$
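Putting the Mainnet parameters above into the potential reward formula gives a small calculator. This is an illustrative sketch: the consumption rates are written already divided by `PercentDenominator`, and the current `supply` value of 450M AVAX is an assumed figure for the example, not live network data:

```python
# Mainnet parameters from the list above (AVAX units, time in seconds).
MAXIMUM_SUPPLY = 720_000_000
MIN_CONSUMPTION_RATE = 0.10          # MinConsumptionRate / PercentDenominator
MAX_CONSUMPTION_RATE = 0.12          # MaxConsumptionRate / PercentDenominator
MINTING_PERIOD = 365 * 24 * 60 * 60  # one year

def potential_reward(stake, staking_period, supply):
    """Potential reward for a responsive Primary Network validator,
    computed at the beginning of the staking period (formula above)."""
    ratio = staking_period / MINTING_PERIOD
    effective_consumption_rate = (
        MIN_CONSUMPTION_RATE * (1 - ratio) + MAX_CONSUMPTION_RATE * ratio
    )
    return (MAXIMUM_SUPPLY - supply) * (stake / supply) * ratio * effective_consumption_rate

# 2,000 AVAX (the validator minimum) staked for a full year, assuming an
# illustrative current supply of 450M AVAX:
print(potential_reward(stake=2_000, staking_period=MINTING_PERIOD, supply=450_000_000))
```

Because the full-year staking period makes the effective consumption rate equal to `MAX_CONSUMPTION_RATE`, this example also matches the `Max Reward` formula above.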