remove rpcdaemon

This commit is contained in:
a 2022-11-14 21:22:15 -06:00
parent 1fda1f8e90
commit 71d9469f23
56 changed files with 35 additions and 18894 deletions

- [Introduction](#introduction)
- [Getting Started](#getting-started)
* [Running locally](#running-locally)
* [Running remotely](#running-remotely)
* [Healthcheck](#healthcheck)
* [Testing](#testing)
- [FAQ](#faq)
  * [Relations between prune options and rpc methods](#relations-between-prune-options-and-rpc-methods)
* [RPC Implementation Status](#rpc-implementation-status)
* [Securing the communication between RPC daemon and Erigon instance via TLS and authentication](#securing-the-communication-between-rpc-daemon-and-erigon-instance-via-tls-and-authentication)
* [Ethstats](#ethstats)
* [Allowing only specific methods (Allowlist)](#allowing-only-specific-methods--allowlist-)
* [Trace transactions progress](#trace-transactions-progress)
* [Clients getting timeout, but server load is low](#clients-getting-timeout--but-server-load-is-low)
* [Server load too high](#server-load-too-high)
* [Faster Batch requests](#faster-batch-requests)
- [For Developers](#for-developers)
* [Code generation](#code-generation)
## Introduction
Erigon's `rpcdaemon` runs in its own separate process.
This brings many benefits including easier development, the ability to run multiple daemons at once, and the ability to
run the daemon remotely. It is possible to run the daemon locally as well (read-only) if both processes have access to
the data folder.
## Getting Started
The `rpcdaemon` gets built as part of the main `erigon` build process, but you can build it directly with this command:
```bash
make rpcdaemon
```
### Running locally
Run `rpcdaemon` on the same computer as Erigon. This is the default option, because it uses shared-memory access to
Erigon's database, which is much faster than TCP access. Provide both `--datadir` and `--private.api.addr` flags:
```bash
make erigon
./build/bin/erigon --datadir=<your_data_dir> --private.api.addr=localhost:9090
make rpcdaemon
./build/bin/rpcdaemon --datadir=<your_data_dir> --txpool.api.addr=localhost:9090 --private.api.addr=localhost:9090 --http.api=eth,erigon,web3,net,debug,trace,txpool
```
Note that we've also specified which RPC namespaces to enable in the above command by `--http.api` flag.
### Running remotely
To start the daemon remotely, just don't set the `--datadir` flag:
```bash
make erigon
./build/bin/erigon --datadir=<your_data_dir> --private.api.addr=0.0.0.0:9090
make rpcdaemon
./build/bin/rpcdaemon --private.api.addr=<erigon_ip>:9090 --txpool.api.addr=localhost:9090 --http.api=eth,erigon,web3,net,debug,trace,txpool
```
The daemon should respond with something like:
```bash
INFO [date-time] HTTP endpoint opened url=localhost:8545...
```
When the RPC daemon runs remotely, by default it maintains a state cache, which is updated every time Erigon imports a
new block. When the state cache is reasonably warm, it allows such a remote RPC daemon to execute queries against the
`latest` block (i.e. the current state) with performance comparable to a local RPC daemon
(around 2x slower, vs 10x slower without the state cache). Since there can be multiple such RPC daemons per Erigon node,
this can scale well for workloads that are heavy on current-state queries.
### Healthcheck
There are two options for running healthchecks: a POST request, or a GET request with custom headers. Both options are
available at the `/health` endpoint.
#### POST request
If the health check is successful, it returns 200 OK.
If the health check fails, it returns 500 Internal Server Error.
The health check is configured via the POST body of the request.
```
{
  "min_peer_count": <minimal number of the node peers>,
  "known_block": <number_of_block_that_node_should_know>
}
```
Omitting a check disables it.
**`min_peer_count`** -- checks that the node has at least the given number of healthy peers. Requires the
`net` namespace to be listed in `http.api`.
**`known_block`** -- checks that the node knows about the given block. Requires the
`eth` namespace to be listed in `http.api`.
Example request:
```
http POST http://localhost:8545/health --raw '{"min_peer_count": 3, "known_block": "0x1F"}'
```
Example response:
```
{
  "check_block": "HEALTHY",
  "healthcheck_query": "HEALTHY",
  "min_peer_count": "HEALTHY"
}
```
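For a programmatic check, the POST body can be built and sent from Go. This is a minimal sketch: the struct and helper names are ours (not part of Erigon), and the daemon is assumed to listen on `localhost:8545`.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// healthBody mirrors the POST body described above; omitted
// fields disable the corresponding check.
type healthBody struct {
	MinPeerCount *int    `json:"min_peer_count,omitempty"`
	KnownBlock   *string `json:"known_block,omitempty"`
}

// buildHealthBody marshals a healthcheck configuration.
func buildHealthBody(minPeers int, knownBlock string) ([]byte, error) {
	return json.Marshal(healthBody{MinPeerCount: &minPeers, KnownBlock: &knownBlock})
}

func main() {
	body, err := buildHealthBody(3, "0x1F")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
	// Against a running daemon (assumed address):
	resp, err := http.Post("http://localhost:8545/health", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("daemon not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode) // 200 when healthy, 500 otherwise
}
```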
#### GET with headers
If the healthcheck is successful, it will return a 200 status code.
If the healthcheck fails for any reason, a status 500 will be returned; this happens if any one of the requested
criteria fails its check.
You can set any number of values on the `X-ERIGON-HEALTHCHECK` header. Ones that are not included are skipped in the
checks.
Available options:
- `synced` - checks if the node has completed syncing
- `min_peer_count<count>` - checks that the node has at least `<count>` peers
- `check_block<block>` - checks that the node has progressed at least to the `<block>` specified
- `max_seconds_behind<seconds>` - checks that the node's latest block is no more than `<seconds>` old
Example Request
```
curl --location --request GET 'http://localhost:8545/health' \
--header 'X-ERIGON-HEALTHCHECK: min_peer_count1' \
--header 'X-ERIGON-HEALTHCHECK: synced' \
--header 'X-ERIGON-HEALTHCHECK: max_seconds_behind600'
```
Example Response
```
{
  "check_block": "DISABLED",
  "max_seconds_behind": "HEALTHY",
  "min_peer_count": "HEALTHY",
  "synced": "HEALTHY"
}
```
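The same headers can also be set programmatically. A minimal Go sketch (the helper name is ours, and `localhost:8545` is the assumed daemon address):

```go
package main

import (
	"fmt"
	"net/http"
)

// newHealthRequest builds the GET /health request, adding one
// X-ERIGON-HEALTHCHECK header per enabled check.
func newHealthRequest(baseURL string, checks []string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, baseURL+"/health", nil)
	if err != nil {
		return nil, err
	}
	for _, c := range checks {
		req.Header.Add("X-ERIGON-HEALTHCHECK", c)
	}
	return req, nil
}

func main() {
	req, err := newHealthRequest("http://localhost:8545",
		[]string{"synced", "min_peer_count1", "max_seconds_behind600"})
	if err != nil {
		panic(err)
	}
	// Send with http.DefaultClient.Do(req) against a running daemon.
	fmt.Println(req.Header.Values("X-ERIGON-HEALTHCHECK"))
}
```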
### Testing
By default, the `rpcdaemon` serves data from `localhost:8545`. You may send `curl` commands to see if things are
working.
Try `eth_blockNumber` for example. In a third terminal window enter this command:
```bash
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id":1}' localhost:8545
```
This should return something along the lines of this (depending on how far your Erigon node has synced):
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": "0xa5b9ba"
}
```
Also, there
are [extensive instructions for using Postman](https://github.com/ledgerwatch/erigon/wiki/Using-Postman-to-Test-TurboGeth-RPC)
to test the RPC.
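The hex-encoded `result` can be decoded programmatically. A minimal Go sketch of parsing a response like the one above (the type and helper names are ours):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

// blockNumberResult is the shape of the eth_blockNumber response.
type blockNumberResult struct {
	JSONRPC string `json:"jsonrpc"`
	ID      int    `json:"id"`
	Result  string `json:"result"`
}

// parseBlockNumber decodes the response body and converts the
// hex-encoded result to a decimal block number.
func parseBlockNumber(body []byte) (uint64, error) {
	var r blockNumberResult
	if err := json.Unmarshal(body, &r); err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimPrefix(r.Result, "0x"), 16, 64)
}

func main() {
	body := []byte(`{"jsonrpc": "2.0", "id": 1, "result": "0xa5b9ba"}`)
	n, err := parseBlockNumber(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(n) // 10860986
}
```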
## FAQ
### Relations between prune options and RPC methods
The following options are available (via the `--prune` flag):
```
* h - prune history (ChangeSets, HistoryIndices - used to access historical state, like eth_getStorageAt, eth_getBalanceAt, debug_traceTransaction, trace_block, trace_transaction, etc.)
* r - prune receipts (Receipts, Logs, LogTopicIndex, LogAddressIndex - used by eth_getLogs and similar RPC methods)
* t - prune tx lookup (used to get transaction by hash)
* c - prune call traces (used by trace_filter method)
```
By default, data older than 90K blocks is pruned; this can be changed with flags like `--prune.history.after=100_000`.
Some methods, if historical data is not found in the DB, can fall back to re-executing old blocks - but this requires `h`.
### RPC Implementation Status
Label "remote" means: `--private.api.addr` flag is required.
The following table shows the current implementation status of Erigon's RPC daemon.
| Command | Avail | Notes |
| ------------------------------------------ |---------|--------------------------------------|
| admin_nodeInfo | Yes | |
| admin_peers | Yes | |
| | | |
| web3_clientVersion | Yes | |
| web3_sha3 | Yes | |
| | | |
| net_listening | HC | (`remote` hard coded returns true) |
| net_peerCount | Limited | internal sentries only |
| net_version | Yes | `remote`. |
| | | |
| eth_blockNumber | Yes | |
| eth_chainID/eth_chainId | Yes | |
| eth_protocolVersion | Yes | |
| eth_syncing | Yes | |
| eth_gasPrice | Yes | |
| eth_maxPriorityFeePerGas | Yes | |
| eth_feeHistory | Yes | |
| | | |
| eth_getBlockByHash | Yes | |
| eth_getBlockByNumber | Yes | |
| eth_getBlockTransactionCountByHash | Yes | |
| eth_getBlockTransactionCountByNumber | Yes | |
| eth_getUncleByBlockHashAndIndex | Yes | |
| eth_getUncleByBlockNumberAndIndex | Yes | |
| eth_getUncleCountByBlockHash | Yes | |
| eth_getUncleCountByBlockNumber | Yes | |
| | | |
| eth_getTransactionByHash | Yes | |
| eth_getRawTransactionByHash | Yes | |
| eth_getTransactionByBlockHashAndIndex | Yes | |
| eth_getRawTransactionByBlockHashAndIndex | Yes | |
| eth_getTransactionByBlockNumberAndIndex | Yes | |
| eth_getRawTransactionByBlockNumberAndIndex | Yes | |
| eth_getTransactionReceipt | Yes | |
| eth_getBlockReceipts | Yes | |
| | | |
| eth_estimateGas | Yes | |
| eth_getBalance | Yes | |
| eth_getCode | Yes | |
| eth_getTransactionCount | Yes | |
| eth_getStorageAt | Yes | |
| eth_call | Yes | |
| eth_callMany | Yes | Erigon Method PR#4567 |
| eth_callBundle | Yes | |
| eth_createAccessList | Yes | |
| | | |
| eth_newFilter | Yes | Added by PR#4253 |
| eth_newBlockFilter | Yes | |
| eth_newPendingTransactionFilter | Yes | |
| eth_getFilterChanges | Yes | |
| eth_uninstallFilter | Yes | |
| eth_getLogs | Yes | |
| | | |
| eth_accounts | No | deprecated |
| eth_sendRawTransaction | Yes | `remote`. |
| eth_sendTransaction | - | not yet implemented |
| eth_sign | No | deprecated |
| eth_signTransaction | - | not yet implemented |
| eth_signTypedData | - | ???? |
| | | |
| eth_getProof | - | not yet implemented |
| | | |
| eth_mining | Yes | returns true if --mine flag provided |
| eth_coinbase | Yes | |
| eth_hashrate | Yes | |
| eth_submitHashrate | Yes | |
| eth_getWork | Yes | |
| eth_submitWork | Yes | |
| | | |
| eth_subscribe | Limited | Websock Only - newHeads, |
| | | newPendingTransactions, |
| | | newPendingBlock |
| eth_unsubscribe | Yes | Websock Only |
| | | |
| engine_newPayloadV1 | Yes | |
| engine_forkchoiceUpdatedV1 | Yes | |
| engine_getPayloadV1 | Yes | |
| engine_exchangeTransitionConfigurationV1 | Yes | |
| | | |
| debug_accountRange | Yes | Private Erigon debug module |
| debug_accountAt | Yes | Private Erigon debug module |
| debug_getModifiedAccountsByNumber | Yes | |
| debug_getModifiedAccountsByHash | Yes | |
| debug_storageRangeAt | Yes | |
| debug_traceBlockByHash | Yes | Streaming (can handle huge results) |
| debug_traceBlockByNumber | Yes | Streaming (can handle huge results) |
| debug_traceTransaction | Yes | Streaming (can handle huge results) |
| debug_traceCall | Yes | Streaming (can handle huge results) |
| debug_traceCallMany | Yes | Erigon Method PR#4567. |
| | | |
| trace_call | Yes | |
| trace_callMany | Yes | |
| trace_rawTransaction | - | not yet implemented (come help!) |
| trace_replayBlockTransactions | Yes | stateDiff only (come help!) |
| trace_replayTransaction | Yes | stateDiff only (come help!) |
| trace_block | Yes | |
| trace_filter | Yes | no pagination, but streaming |
| trace_get | Yes | |
| trace_transaction | Yes | |
| | | |
| txpool_content | Yes | `remote` |
| txpool_status | Yes | `remote` |
| | | |
| eth_getCompilers | No | deprecated |
| eth_compileLLL | No | deprecated |
| eth_compileSolidity | No | deprecated |
| eth_compileSerpent | No | deprecated |
| | | |
| db_putString | No | deprecated |
| db_getString | No | deprecated |
| db_putHex | No | deprecated |
| db_getHex | No | deprecated |
| | | |
| erigon_getHeaderByHash | Yes | Erigon only |
| erigon_getHeaderByNumber | Yes | Erigon only |
| erigon_getLogsByHash | Yes | Erigon only |
| erigon_forks | Yes | Erigon only |
| erigon_issuance | Yes | Erigon only |
| erigon_GetBlockByTimestamp | Yes | Erigon only |
| erigon_BlockNumber | Yes | Erigon only |
| | | |
| bor_getSnapshot | Yes | Bor only |
| bor_getAuthor | Yes | Bor only |
| bor_getSnapshotAtHash | Yes | Bor only |
| bor_getSigners | Yes | Bor only |
| bor_getSignersAtHash | Yes | Bor only |
| bor_getCurrentProposer | Yes | Bor only |
| bor_getCurrentValidators | Yes | Bor only |
| bor_getRootHash | Yes | Bor only |
This table is constantly updated. Please visit again.
### Securing the communication between RPC daemon and Erigon instance via TLS and authentication
In some cases, it is useful to run Erigon nodes in a different network (for example, in a Public cloud), but RPC daemon
locally. To ensure the integrity of communication and access control to the Erigon node, TLS authentication can be
enabled. On the high level, the process consists of these steps (this process needs to be done for any "cluster" of
Erigon and RPC daemon nodes that are supposed to work together):
1. Generate key pair for the Certificate Authority (CA). The private key of CA will be used to authorise new Erigon
instances as well as new RPC daemon instances, so that they can mutually authenticate.
2. Create a CA certificate file that needs to be deployed on every Erigon instance and every RPC daemon. This CA
   certificate file is used as a "root of trust": whatever is in it will be trusted by the participants when they
   authenticate their counterparts.
3. For each Erigon instance and each RPC daemon instance, generate a key pair. If you are lazy, you can generate one
pair for all Erigon nodes, and one pair for all RPC daemons, and copy these keys around.
4. Using the CA private key, create a certificate file for each public key generated in the previous step. This
   effectively "inducts" these keys into the "cluster of trust".
5. On each instance, deploy 3 files - CA certificate, instance key, and certificate signed by CA for this instance key.
Following is the detailed description of how it can be done using `openssl` suite of tools.
Generate CA key pair using Elliptic Curve (as opposed to RSA). The generated CA key will be in the file `CA-key.pem`.
Access to this file will allow anyone to later include any new instance key pair into the "cluster of trust", so keep it
secure.
```
openssl ecparam -name prime256v1 -genkey -noout -out CA-key.pem
```
Create CA self-signed certificate (this command will ask questions, answers aren't important for now). The file created
by this command is `CA-cert.pem`
```
openssl req -x509 -new -nodes -key CA-key.pem -sha256 -days 3650 -out CA-cert.pem
```
For Erigon node, generate a key pair:
```
openssl ecparam -name prime256v1 -genkey -noout -out erigon-key.pem
```
Also, generate one for the RPC daemon:
```
openssl ecparam -name prime256v1 -genkey -noout -out RPC-key.pem
```
Now create certificate signing request for Erigon key pair:
```
openssl req -new -key erigon-key.pem -out erigon.csr
```
And from this request, produce the certificate (signed by CA), proving that this key is now part of the "cluster of
trust"
```
openssl x509 -req -in erigon.csr -CA CA-cert.pem -CAkey CA-key.pem -CAcreateserial -out erigon.crt -days 3650 -sha256
```
Then, produce the certificate signing request for RPC daemon key pair:
```
openssl req -new -key RPC-key.pem -out RPC.csr
```
And from this request, produce the certificate (signed by CA), proving that this key is now part of the "cluster of
trust"
```
openssl x509 -req -in RPC.csr -CA CA-cert.pem -CAkey CA-key.pem -CAcreateserial -out RPC.crt -days 3650 -sha256
```
When this is all done, these three files need to be placed on the machine where Erigon is running: `CA-cert.pem`
, `erigon-key.pem`, `erigon.crt`. And Erigon needs to be run with these extra options:
```
--tls --tls.cacert CA-cert.pem --tls.key erigon-key.pem --tls.cert erigon.crt
```
On the RPC daemon machine, these three files need to be placed: `CA-cert.pem`, `RPC-key.pem`, and `RPC.crt`. And RPC
daemon needs to be started with these extra options:
```
--tls.key RPC-key.pem --tls.cacert CA-cert.pem --tls.cert RPC.crt
```
**WARNING** Normally, the "client side" (which in our case is the RPC daemon) verifies that the host name of the server
matches the "Common Name" attribute of the "server" certificate. At this stage, this verification is turned off, and it
will be turned on again once we have updated the instructions above on how to properly generate certificates with a
"Common Name".
When running an Erigon instance in Google Cloud, for example, you need to specify the **Internal IP** in
the `--private.api.addr` option. You will also need to open the firewall on the port you are using, so that connections
to the Erigon instance can be made.
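The same CA → instance-certificate flow that the `openssl` commands above perform can be sketched with Go's `crypto/x509`. This is in-memory and purely illustrative (names like `erigon-cluster-CA` are ours); real deployments should keep using the PEM files and flags described above.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// newKey mirrors `openssl ecparam -name prime256v1 -genkey`.
func newKey() (*ecdsa.PrivateKey, error) {
	return ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
}

// buildChain creates a self-signed CA certificate and an instance
// certificate signed by it, then verifies the chain (steps 1-4 above).
func buildChain() error {
	caKey, err := newKey()
	if err != nil {
		return err
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "erigon-cluster-CA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(3650 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		return err
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		return err
	}

	// Instance key pair plus a certificate signed by the CA.
	instKey, err := newKey()
	if err != nil {
		return err
	}
	instTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "erigon-instance"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3650 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	instDER, err := x509.CreateCertificate(rand.Reader, instTmpl, caCert, &instKey.PublicKey, caKey)
	if err != nil {
		return err
	}
	instCert, err := x509.ParseCertificate(instDER)
	if err != nil {
		return err
	}

	// Verify against the CA pool - what both sides do with CA-cert.pem
	// at handshake time.
	roots := x509.NewCertPool()
	roots.AddCert(caCert)
	_, err = instCert.Verify(x509.VerifyOptions{
		Roots:     roots,
		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	})
	return err
}

func main() {
	if err := buildChain(); err != nil {
		panic(err)
	}
	fmt.Println("instance certificate verified against CA")
}
```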
### Ethstats
This version of the RPC daemon is compatible with [ethstats-client](https://github.com/goerli/ethstats-client).
To run ethstats, run the RPC daemon remotely and open some of the APIs.
`./build/bin/rpcdaemon --private.api.addr=localhost:9090 --http.api=net,eth,web3`
Then update your `app.json` for ethstats-client like this:
```json
[
{
"name": "ethstats",
"script": "app.js",
"log_date_format": "YYYY-MM-DD HH:mm Z",
"merge_logs": false,
"watch": false,
"max_restarts": 10,
"exec_interpreter": "node",
"exec_mode": "fork_mode",
"env": {
"NODE_ENV": "production",
"RPC_HOST": "localhost",
"RPC_PORT": "8545",
"LISTENING_PORT": "30303",
"INSTANCE_NAME": "Erigon node",
"CONTACT_DETAILS": <your twitter handle>,
"WS_SERVER": "wss://ethstats.net/api",
"WS_SECRET": <put your secret key here>,
"VERBOSITY": 2
}
}
]
```
Run ethstats-client through pm2 as usual.
You will see these warnings in the RPC daemon output, but they are expected
```
WARN [11-05|09:03:47.911] Served conn=127.0.0.1:59753 method=eth_newBlockFilter reqid=5 t="21.194µs" err="the method eth_newBlockFilter does not exist/is not available"
WARN [11-05|09:03:47.911] Served conn=127.0.0.1:59754 method=eth_newPendingTransactionFilter reqid=6 t="9.053µs" err="the method eth_newPendingTransactionFilter does not exist/is not available"
```
### Allowing only specific methods (Allowlist)
In some cases you might want to only allow certain methods in the namespaces and hide others. That is possible
with `rpc.accessList` flag.
1. Create a file, say, `rules.json`
2. Add the following content
```json
{
"allow": [
"net_version",
"web3_eth_getBlockByHash"
]
}
```
3. Provide this file to the rpcdaemon using `--rpc.accessList` flag
```
> rpcdaemon --private.api.addr=localhost:9090 --http.api=eth,debug,net,web3 --rpc.accessList=rules.json
```
Now only these two methods are available.
### Clients getting timeout, but server load is low
In this case, increase the default rate limit - the number of requests the server handles simultaneously (requests over
this limit wait in a queue). Increase it if your 'hot data' set is small, you have plenty of RAM, or you see "request
timeout" errors while server load is low.
```
./build/bin/erigon --private.api.addr=localhost:9090 --private.api.ratelimit=1024
```
### Server load too high
Reduce `--private.api.ratelimit`
### Read DB directly without Json-RPC/Graphql
[./../../docs/programmers_guide/db_faq.md](./../../docs/programmers_guide/db_faq.md)
### Faster Batch requests
Currently, batch requests spawn multiple goroutines and process all sub-requests in parallel. To limit the impact of one
huge batch on other users, the flag `--rpc.batch.concurrency` (default: 2) was added. Increase it to process large
batches faster.
Known issue: if at least one request in the batch is "streamable" (has a parameter of type *jsoniter.Stream), the whole
batch will be processed sequentially (on one goroutine).
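A batch body is just a JSON array of sub-requests. A minimal Go sketch of building one (the type and helper names are ours):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rpcRequest is a single JSON-RPC 2.0 sub-request.
type rpcRequest struct {
	JSONRPC string        `json:"jsonrpc"`
	Method  string        `json:"method"`
	Params  []interface{} `json:"params"`
	ID      int           `json:"id"`
}

// buildBatch marshals several methods into one batch body.
func buildBatch(methods ...string) ([]byte, error) {
	batch := make([]rpcRequest, 0, len(methods))
	for i, m := range methods {
		batch = append(batch, rpcRequest{JSONRPC: "2.0", Method: m, Params: []interface{}{}, ID: i + 1})
	}
	return json.Marshal(batch)
}

func main() {
	body, err := buildBatch("eth_blockNumber", "net_version")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
	// POST this body to the daemon as usual; with --rpc.batch.concurrency=2
	// at most two sub-requests execute in parallel.
}
```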
## For Developers
### Code generation
`go.mod` pins the right versions of the code generators; use `make grpc` to install them and generate code (it also
installs protoc into the ./build/bin folder).

package cli
import (
"context"
"encoding/binary"
"fmt"
"net"
"net/http"
"path/filepath"
"strings"
"time"
"github.com/go-chi/chi/v5"
"github.com/ledgerwatch/erigon-lib/common/dir"
libstate "github.com/ledgerwatch/erigon-lib/state"
"github.com/ledgerwatch/erigon/eth/ethconfig"
"github.com/ledgerwatch/erigon/rpc/rpccfg"
"github.com/ledgerwatch/erigon/turbo/debug"
"github.com/ledgerwatch/erigon/turbo/logging"
"github.com/wmitsuda/otterscan/cmd/otter/cli/httpcfg"
"github.com/wmitsuda/otterscan/cmd/otter/health"
"github.com/wmitsuda/otterscan/cmd/otter/rpcservices"
"github.com/ledgerwatch/erigon-lib/direct"
"github.com/ledgerwatch/erigon-lib/gointerfaces"
"github.com/ledgerwatch/erigon-lib/gointerfaces/grpcutil"
"github.com/ledgerwatch/erigon-lib/gointerfaces/remote"
"github.com/ledgerwatch/erigon-lib/gointerfaces/txpool"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon-lib/kv/kvcache"
kv2 "github.com/ledgerwatch/erigon-lib/kv/mdbx"
"github.com/ledgerwatch/erigon-lib/kv/remotedb"
"github.com/ledgerwatch/erigon-lib/kv/remotedbserver"
"github.com/ledgerwatch/erigon/cmd/utils"
"github.com/ledgerwatch/erigon/common/paths"
"github.com/ledgerwatch/erigon/core/rawdb"
"github.com/ledgerwatch/erigon/node"
"github.com/ledgerwatch/erigon/node/nodecfg"
"github.com/ledgerwatch/erigon/node/nodecfg/datadir"
"github.com/ledgerwatch/erigon/params"
"github.com/ledgerwatch/erigon/rpc"
"github.com/ledgerwatch/erigon/turbo/rpchelper"
"github.com/ledgerwatch/erigon/turbo/services"
"github.com/ledgerwatch/erigon/turbo/snapshotsync"
"github.com/ledgerwatch/erigon/turbo/snapshotsync/snap"
"github.com/ledgerwatch/log/v3"
"github.com/spf13/cobra"
"golang.org/x/sync/semaphore"
"google.golang.org/grpc"
grpcHealth "google.golang.org/grpc/health"
"google.golang.org/grpc/health/grpc_health_v1"
)
var rootCmd = &cobra.Command{
Use: "otter",
Short: "otterscan is a chain explorer that can also host a custom JSON RPC server that connects to an Erigon node",
}
func RootCommand() (*cobra.Command, *httpcfg.HttpCfg) {
utils.CobraFlags(rootCmd, debug.Flags, utils.MetricFlags, logging.Flags)
cfg := &httpcfg.HttpCfg{}
cfg.Enabled = true
cfg.StateCache = kvcache.DefaultCoherentConfig
rootCmd.PersistentFlags().StringVar(&cfg.PrivateApiAddr, "private.api.addr", "127.0.0.1:9090", "private api network address, for example: 127.0.0.1:9090")
rootCmd.PersistentFlags().StringVar(&cfg.DataDir, "datadir", "", "path to Erigon working directory")
rootCmd.PersistentFlags().StringVar(&cfg.HttpListenAddress, "http.addr", nodecfg.DefaultHTTPHost, "HTTP-RPC server listening interface")
rootCmd.PersistentFlags().StringVar(&cfg.TLSCertfile, "tls.cert", "", "certificate for client side TLS handshake")
rootCmd.PersistentFlags().StringVar(&cfg.TLSKeyFile, "tls.key", "", "key file for client side TLS handshake")
rootCmd.PersistentFlags().StringVar(&cfg.TLSCACert, "tls.cacert", "", "CA certificate for client side TLS handshake")
rootCmd.PersistentFlags().IntVar(&cfg.HttpPort, "http.port", nodecfg.DefaultHTTPPort, "HTTP-RPC server listening port")
rootCmd.PersistentFlags().StringSliceVar(&cfg.HttpCORSDomain, "http.corsdomain", []string{}, "Comma separated list of domains from which to accept cross origin requests (browser enforced)")
rootCmd.PersistentFlags().StringSliceVar(&cfg.HttpVirtualHost, "http.vhosts", nodecfg.DefaultConfig.HTTPVirtualHosts, "Comma separated list of virtual hostnames from which to accept requests (server enforced). Accepts '*' wildcard.")
rootCmd.PersistentFlags().BoolVar(&cfg.HttpCompression, "http.compression", true, "Disable http compression")
rootCmd.PersistentFlags().StringSliceVar(&cfg.API, "http.api", []string{"eth", "erigon"}, "API's offered over the HTTP-RPC interface: eth,erigon,web3,net,debug,trace,txpool,db. Supported methods: https://github.com/ledgerwatch/erigon/tree/devel/cmd/rpcdaemon")
rootCmd.PersistentFlags().Uint64Var(&cfg.Gascap, "rpc.gascap", 50000000, "Sets a cap on gas that can be used in eth_call/estimateGas")
rootCmd.PersistentFlags().Uint64Var(&cfg.MaxTraces, "trace.maxtraces", 200, "Sets a limit on traces that can be returned in trace_filter")
rootCmd.PersistentFlags().BoolVar(&cfg.WebsocketEnabled, "ws", false, "Enable Websockets")
rootCmd.PersistentFlags().BoolVar(&cfg.WebsocketCompression, "ws.compression", false, "Enable Websocket compression (RFC 7692)")
rootCmd.PersistentFlags().StringVar(&cfg.RpcAllowListFilePath, "rpc.accessList", "", "Specify granular (method-by-method) API allowlist")
rootCmd.PersistentFlags().UintVar(&cfg.RpcBatchConcurrency, utils.RpcBatchConcurrencyFlag.Name, 2, utils.RpcBatchConcurrencyFlag.Usage)
rootCmd.PersistentFlags().BoolVar(&cfg.RpcStreamingDisable, utils.RpcStreamingDisableFlag.Name, false, utils.RpcStreamingDisableFlag.Usage)
rootCmd.PersistentFlags().IntVar(&cfg.DBReadConcurrency, utils.DBReadConcurrencyFlag.Name, utils.DBReadConcurrencyFlag.Value, utils.DBReadConcurrencyFlag.Usage)
rootCmd.PersistentFlags().BoolVar(&cfg.TraceCompatibility, "trace.compat", false, "Bug for bug compatibility with OE for trace_ routines")
rootCmd.PersistentFlags().StringVar(&cfg.TxPoolApiAddr, "txpool.api.addr", "", "txpool api network address, for example: 127.0.0.1:9090 (default: use value of --private.api.addr)")
rootCmd.PersistentFlags().BoolVar(&cfg.Sync.UseSnapshots, "snapshot", true, utils.SnapshotFlag.Usage)
rootCmd.PersistentFlags().IntVar(&cfg.StateCache.KeysLimit, "state.cache", kvcache.DefaultCoherentConfig.KeysLimit, "Amount of keys to store in StateCache (enabled if no --datadir set). Set 0 to disable StateCache. 1_000_000 keys ~ equal to 2Gb RAM (maybe we will add RAM accounting in future versions).")
rootCmd.PersistentFlags().BoolVar(&cfg.GRPCServerEnabled, "grpc", false, "Enable GRPC server")
rootCmd.PersistentFlags().StringVar(&cfg.GRPCListenAddress, "grpc.addr", nodecfg.DefaultGRPCHost, "GRPC server listening interface")
rootCmd.PersistentFlags().IntVar(&cfg.GRPCPort, "grpc.port", nodecfg.DefaultGRPCPort, "GRPC server listening port")
rootCmd.PersistentFlags().BoolVar(&cfg.GRPCHealthCheckEnabled, "grpc.healthcheck", false, "Enable GRPC health check")
rootCmd.PersistentFlags().BoolVar(&cfg.TraceRequests, utils.HTTPTraceFlag.Name, false, "Trace HTTP requests with INFO level")
rootCmd.PersistentFlags().DurationVar(&cfg.HTTPTimeouts.ReadTimeout, "http.timeouts.read", rpccfg.DefaultHTTPTimeouts.ReadTimeout, "Maximum duration for reading the entire request, including the body.")
rootCmd.PersistentFlags().DurationVar(&cfg.HTTPTimeouts.WriteTimeout, "http.timeouts.write", rpccfg.DefaultHTTPTimeouts.WriteTimeout, "Maximum duration before timing out writes of the response. It is reset whenever a new request's header is read")
rootCmd.PersistentFlags().DurationVar(&cfg.HTTPTimeouts.IdleTimeout, "http.timeouts.idle", rpccfg.DefaultHTTPTimeouts.IdleTimeout, "Maximum amount of time to wait for the next request when keep-alives are enabled. If http.timeouts.idle is zero, the value of http.timeouts.read is used")
rootCmd.PersistentFlags().DurationVar(&cfg.EvmCallTimeout, "rpc.evmtimeout", rpccfg.DefaultEvmCallTimeout, "Maximum amount of time to wait for the answer from EVM call.")
if err := rootCmd.MarkPersistentFlagFilename("rpc.accessList", "json"); err != nil {
panic(err)
}
if err := rootCmd.MarkPersistentFlagDirname("datadir"); err != nil {
panic(err)
}
// otterscan server setting
rootCmd.PersistentFlags().BoolVar(&cfg.OtsServerDisable, "ots.server.addr", false, "disable ots server to run rpc daemon only")
rootCmd.PersistentFlags().StringVar(&cfg.OtsStaticDir, "ots.static.dir", "./dist", "dir to serve static files from")
rootCmd.PersistentFlags().BoolVar(&cfg.DisableRpcDaemon, "disable.rpc.daemon", false, "don't run rpc daemon, for use when specifying an external rpc daemon")
rootCmd.PersistentFlags().StringVar(&cfg.OtsBeaconApiUrl, "ots.beaconapi.url", "http://localhost:3500", "where the website will make request for beacon api")
rootCmd.PersistentFlags().StringVar(&cfg.OtsRpcDaemonUrl, "ots.rpcdaemon.url", "/rpc", "where the website will make requests for the rpc daemon")
rootCmd.PersistentFlags().StringVar(&cfg.OtsAssetUrl, "ots.asset.url", "", "where website will make request for assets served by the OTS server")
rootCmd.PersistentPreRunE = func(cmd *cobra.Command, args []string) error {
if err := debug.SetupCobra(cmd); err != nil {
return err
}
cfg.WithDatadir = cfg.DataDir != ""
if cfg.WithDatadir {
if cfg.DataDir == "" {
cfg.DataDir = paths.DefaultDataDir()
}
cfg.Dirs = datadir.New(cfg.DataDir)
}
if cfg.TxPoolApiAddr == "" {
cfg.TxPoolApiAddr = cfg.PrivateApiAddr
}
return nil
}
rootCmd.PersistentPostRunE = func(cmd *cobra.Command, args []string) error {
debug.Exit()
return nil
}
cfg.StateCache.MetricsLabel = "rpc"
return rootCmd, cfg
}
type StateChangesClient interface {
StateChanges(ctx context.Context, in *remote.StateChangeRequest, opts ...grpc.CallOption) (remote.KV_StateChangesClient, error)
}
func subscribeToStateChangesLoop(ctx context.Context, client StateChangesClient, cache kvcache.Cache) {
go func() {
for {
select {
case <-ctx.Done():
return
default:
}
if err := subscribeToStateChanges(ctx, client, cache); err != nil {
if grpcutil.IsRetryLater(err) || grpcutil.IsEndOfStream(err) {
time.Sleep(3 * time.Second)
continue
}
log.Warn("[txpool.handleStateChanges]", "err", err)
}
}
}()
}
func subscribeToStateChanges(ctx context.Context, client StateChangesClient, cache kvcache.Cache) error {
streamCtx, cancel := context.WithCancel(ctx)
defer cancel()
stream, err := client.StateChanges(streamCtx, &remote.StateChangeRequest{WithStorage: true, WithTransactions: false}, grpc.WaitForReady(true))
if err != nil {
return err
}
for req, err := stream.Recv(); ; req, err = stream.Recv() {
if err != nil {
return err
}
if req == nil {
return nil
}
cache.OnNewBlock(req)
}
}
func checkDbCompatibility(ctx context.Context, db kv.RoDB) error {
// DB schema version compatibility check
var version []byte
var compatErr error
var compatTx kv.Tx
if compatTx, compatErr = db.BeginRo(ctx); compatErr != nil {
return fmt.Errorf("open Ro Tx for DB schema compatibility check: %w", compatErr)
}
defer compatTx.Rollback()
if version, compatErr = compatTx.GetOne(kv.DatabaseInfo, kv.DBSchemaVersionKey); compatErr != nil {
return fmt.Errorf("read version for DB schema compatibility check: %w", compatErr)
}
if len(version) != 12 {
return fmt.Errorf("database does not have major schema version. upgrade and restart Erigon core")
}
major := binary.BigEndian.Uint32(version)
minor := binary.BigEndian.Uint32(version[4:])
patch := binary.BigEndian.Uint32(version[8:])
var compatible bool
dbSchemaVersion := &kv.DBSchemaVersion
if major != dbSchemaVersion.Major {
compatible = false
} else if minor != dbSchemaVersion.Minor {
compatible = false
} else {
compatible = true
}
if !compatible {
return fmt.Errorf("incompatible DB Schema versions: reader %d.%d.%d, database %d.%d.%d",
dbSchemaVersion.Major, dbSchemaVersion.Minor, dbSchemaVersion.Patch,
major, minor, patch)
}
log.Info("DB schemas compatible", "reader", fmt.Sprintf("%d.%d.%d", dbSchemaVersion.Major, dbSchemaVersion.Minor, dbSchemaVersion.Patch),
"database", fmt.Sprintf("%d.%d.%d", major, minor, patch))
return nil
}
func EmbeddedServices(ctx context.Context,
erigonDB kv.RoDB, stateCacheCfg kvcache.CoherentConfig,
blockReader services.FullBlockReader, snapshots *snapshotsync.RoSnapshots, agg *libstate.Aggregator22,
ethBackendServer remote.ETHBACKENDServer, txPoolServer txpool.TxpoolServer, miningServer txpool.MiningServer,
) (eth rpchelper.ApiBackend, txPool txpool.TxpoolClient, mining txpool.MiningClient, stateCache kvcache.Cache, ff *rpchelper.Filters, err error) {
if stateCacheCfg.KeysLimit > 0 {
stateCache = kvcache.NewDummy()
// notification about new blocks (state stream) doesn't work inside erigon yet - because
// erigon only sends this stream to the privateAPI (erigon with rpc enabled still has privateAPI enabled).
// without this state stream the kvcache can't work and would only slow things down
//
//stateCache = kvcache.New(stateCacheCfg)
} else {
stateCache = kvcache.NewDummy()
}
kvRPC := remotedbserver.NewKvServer(ctx, erigonDB, snapshots, agg)
stateDiffClient := direct.NewStateDiffClientDirect(kvRPC)
subscribeToStateChangesLoop(ctx, stateDiffClient, stateCache)
directClient := direct.NewEthBackendClientDirect(ethBackendServer)
eth = rpcservices.NewRemoteBackend(directClient, erigonDB, blockReader)
txPool = direct.NewTxPoolClient(txPoolServer)
mining = direct.NewMiningClient(miningServer)
ff = rpchelper.New(ctx, eth, txPool, mining, func() {})
return
}
// RemoteServices - use when the RPC daemon runs as an independent process. It can still use the --datadir flag to enable
// `cfg.WithDatadir` (the mode when it runs on the same machine as Erigon)
func RemoteServices(ctx context.Context, cfg httpcfg.HttpCfg, logger log.Logger, rootCancel context.CancelFunc) (
db kv.RoDB, borDb kv.RoDB,
eth rpchelper.ApiBackend, txPool txpool.TxpoolClient, mining txpool.MiningClient,
stateCache kvcache.Cache, blockReader services.FullBlockReader,
ff *rpchelper.Filters, agg *libstate.Aggregator22, err error) {
if !cfg.WithDatadir && cfg.PrivateApiAddr == "" {
return nil, nil, nil, nil, nil, nil, nil, ff, nil, fmt.Errorf("either remote db or local db must be specified")
}
// Do not change the order of these checks. Chaindata needs to be checked first, because PrivateApiAddr has default value which is not ""
// If PrivateApiAddr is checked first, the Chaindata option will never work
if cfg.WithDatadir {
dir.MustExist(cfg.Dirs.SnapHistory)
var rwKv kv.RwDB
log.Trace("Creating chain db", "path", cfg.Dirs.Chaindata)
limiter := semaphore.NewWeighted(int64(cfg.DBReadConcurrency))
rwKv, err = kv2.NewMDBX(logger).RoTxsLimiter(limiter).Path(cfg.Dirs.Chaindata).Readonly().Open()
if err != nil {
return nil, nil, nil, nil, nil, nil, nil, ff, nil, err
}
if compatErr := checkDbCompatibility(ctx, rwKv); compatErr != nil {
return nil, nil, nil, nil, nil, nil, nil, ff, nil, compatErr
}
db = rwKv
stateCache = kvcache.NewDummy()
blockReader = snapshotsync.NewBlockReader()
// bor (consensus) specific db
var borKv kv.RoDB
borDbPath := filepath.Join(cfg.DataDir, "bor")
{
// ensure the db exists
tmpDb, err := kv2.NewMDBX(logger).Path(borDbPath).Label(kv.ConsensusDB).Open()
if err != nil {
return nil, nil, nil, nil, nil, nil, nil, ff, nil, err
}
tmpDb.Close()
}
log.Trace("Creating consensus db", "path", borDbPath)
borKv, err = kv2.NewMDBX(logger).Path(borDbPath).Label(kv.ConsensusDB).Readonly().Open()
if err != nil {
return nil, nil, nil, nil, nil, nil, nil, ff, nil, err
}
// Skip the compatibility check, until we have a schema in erigon-lib
borDb = borKv
} else {
if cfg.StateCache.KeysLimit > 0 {
stateCache = kvcache.NewDummy()
//stateCache = kvcache.New(cfg.StateCache)
} else {
stateCache = kvcache.NewDummy()
}
log.Info("if you run RPCDaemon on the same machine as Erigon, add the --datadir option")
}
if db != nil {
var cc *params.ChainConfig
if err := db.View(context.Background(), func(tx kv.Tx) error {
genesisBlock, err := rawdb.ReadBlockByNumber(tx, 0)
if err != nil {
return err
}
if genesisBlock == nil {
return fmt.Errorf("genesis not found in DB. Likely Erigon was never started on this datadir")
}
cc, err = rawdb.ReadChainConfig(tx, genesisBlock.Hash())
if err != nil {
return err
}
cfg.Snap.Enabled, err = snap.Enabled(tx)
if err != nil {
return err
}
return nil
}); err != nil {
return nil, nil, nil, nil, nil, nil, nil, ff, nil, err
}
if cc == nil {
return nil, nil, nil, nil, nil, nil, nil, ff, nil, fmt.Errorf("chain config not found in db; start Erigon on this db at least once")
}
cfg.Snap.Enabled = cfg.Snap.Enabled || cfg.Sync.UseSnapshots
}
creds, err := grpcutil.TLS(cfg.TLSCACert, cfg.TLSCertfile, cfg.TLSKeyFile)
if err != nil {
return nil, nil, nil, nil, nil, nil, nil, ff, nil, fmt.Errorf("open tls cert: %w", err)
}
conn, err := grpcutil.Connect(creds, cfg.PrivateApiAddr)
if err != nil {
return nil, nil, nil, nil, nil, nil, nil, ff, nil, fmt.Errorf("could not connect to execution service privateApi: %w", err)
}
kvClient := remote.NewKVClient(conn)
remoteKv, err := remotedb.NewRemote(gointerfaces.VersionFromProto(remotedbserver.KvServiceAPIVersion), logger, kvClient).Open()
if err != nil {
return nil, nil, nil, nil, nil, nil, nil, ff, nil, fmt.Errorf("could not connect to remoteKv: %w", err)
}
subscribeToStateChangesLoop(ctx, kvClient, stateCache)
onNewSnapshot := func() {}
if cfg.WithDatadir {
if cfg.Snap.Enabled {
allSnapshots := snapshotsync.NewRoSnapshots(cfg.Snap, cfg.Dirs.Snap)
// To provide good UX, snapshots are readable immediately after RPCDaemon starts, even if Erigon is down.
// Erigon stores the list of snapshots in the db: RPCDaemon can read that list now, and re-reads it via `kvClient.Snapshots` once the grpc connection is established
allSnapshots.OptimisticReopenWithDB(db)
allSnapshots.LogStat()
if agg, err = libstate.NewAggregator22(cfg.Dirs.SnapHistory, cfg.Dirs.Tmp, ethconfig.HistoryV3AggregationStep, db); err != nil {
return nil, nil, nil, nil, nil, nil, nil, ff, nil, fmt.Errorf("create aggregator: %w", err)
}
if err = agg.ReopenFiles(); err != nil {
return nil, nil, nil, nil, nil, nil, nil, ff, nil, fmt.Errorf("create aggregator: %w", err)
}
db.View(context.Background(), func(tx kv.Tx) error {
agg.LogStats(tx, func(endTxNumMinimax uint64) uint64 {
_, histBlockNumProgress, _ := rawdb.TxNums.FindBlockNum(tx, endTxNumMinimax)
return histBlockNumProgress
})
return nil
})
onNewSnapshot = func() {
go func() { // don't block events processing by network communication
reply, err := kvClient.Snapshots(ctx, &remote.SnapshotsRequest{}, grpc.WaitForReady(true))
if err != nil {
log.Warn("[Snapshots] reopen", "err", err)
return
}
if err := allSnapshots.ReopenList(reply.BlockFiles, true); err != nil {
log.Error("[Snapshots] reopen", "err", err)
} else {
allSnapshots.LogStat()
}
if err = agg.ReopenFiles(); err != nil {
log.Error("[Snapshots] reopen", "err", err)
} else {
db.View(context.Background(), func(tx kv.Tx) error {
agg.LogStats(tx, func(endTxNumMinimax uint64) uint64 {
_, histBlockNumProgress, _ := rawdb.TxNums.FindBlockNum(tx, endTxNumMinimax)
return histBlockNumProgress
})
return nil
})
}
}()
}
onNewSnapshot()
// TODO: how to avoid blocking startup on a remote RPCDaemon?
// txNums = exec22.TxNumsFromDB(allSnapshots, db)
blockReader = snapshotsync.NewBlockReaderWithSnapshots(allSnapshots)
} else {
log.Info("Running without snapshots (--snapshots=false)")
}
}
if !cfg.WithDatadir {
blockReader = snapshotsync.NewRemoteBlockReader(remote.NewETHBACKENDClient(conn))
}
remoteEth := rpcservices.NewRemoteBackend(remote.NewETHBACKENDClient(conn), db, blockReader)
blockReader = remoteEth
txpoolConn := conn
if cfg.TxPoolApiAddr != cfg.PrivateApiAddr {
txpoolConn, err = grpcutil.Connect(creds, cfg.TxPoolApiAddr)
if err != nil {
return nil, nil, nil, nil, nil, nil, nil, ff, nil, fmt.Errorf("could not connect to txpool api: %w", err)
}
}
mining = txpool.NewMiningClient(txpoolConn)
miningService := rpcservices.NewMiningService(mining)
txPool = txpool.NewTxpoolClient(txpoolConn)
txPoolService := rpcservices.NewTxPoolService(txPool)
if db == nil {
db = remoteKv
}
eth = remoteEth
go func() {
if !remoteKv.EnsureVersionCompatibility() {
rootCancel()
}
if !remoteEth.EnsureVersionCompatibility() {
rootCancel()
}
if mining != nil && !miningService.EnsureVersionCompatibility() {
rootCancel()
}
if !txPoolService.EnsureVersionCompatibility() {
rootCancel()
}
}()
ff = rpchelper.New(ctx, eth, txPool, mining, onNewSnapshot)
return db, borDb, eth, txPool, mining, stateCache, blockReader, ff, agg, err
}
func StartRpcServer(ctx context.Context, r chi.Router, cfg httpcfg.HttpCfg, rpcAPI []rpc.API) error {
if cfg.Enabled {
return startServer(ctx, r, cfg, rpcAPI)
}
return nil
}
func startServer(ctx context.Context, r chi.Router, cfg httpcfg.HttpCfg, rpcAPI []rpc.API) error {
// register apis and create handler stack
httpEndpoint := fmt.Sprintf("%s:%d", cfg.HttpListenAddress, cfg.HttpPort)
log.Trace("Starting RPC server", "TraceRequests", cfg.TraceRequests)
srv := rpc.NewServer(cfg.RpcBatchConcurrency, cfg.TraceRequests, cfg.RpcStreamingDisable)
allowListForRPC, err := parseAllowListForRPC(cfg.RpcAllowListFilePath)
if err != nil {
return err
}
srv.SetAllowList(allowListForRPC)
var defaultAPIList []rpc.API
for _, api := range rpcAPI {
if api.Namespace != "engine" {
defaultAPIList = append(defaultAPIList, api)
}
}
var apiFlags []string
for _, flag := range cfg.API {
if flag != "engine" {
apiFlags = append(apiFlags, flag)
}
}
if err := node.RegisterApisFromWhitelist(defaultAPIList, apiFlags, srv, false); err != nil {
return fmt.Errorf("could not start register RPC apis: %w", err)
}
httpHandler := node.NewHTTPHandlerStack(srv, cfg.HttpCORSDomain, cfg.HttpVirtualHost, cfg.HttpCompression)
var wsHandler http.Handler
if cfg.WebsocketEnabled {
wsHandler = srv.WebsocketHandler([]string{"*"}, nil, cfg.WebsocketCompression)
}
apiHandler, err := createHandler(cfg, defaultAPIList, httpHandler, wsHandler, nil)
if err != nil {
return err
}
r.Mount("/rpc", apiHandler)
listener, _, err := node.StartHTTPEndpoint(httpEndpoint, cfg.HTTPTimeouts, r)
if err != nil {
return fmt.Errorf("could not start RPC api: %w", err)
}
info := &[]interface{}{"url", httpEndpoint, "ws", cfg.WebsocketEnabled,
"ws.compression", cfg.WebsocketCompression, "grpc", cfg.GRPCServerEnabled}
if cfg.GRPCServerEnabled {
if err := startGrpcServer(ctx, info, cfg); err != nil {
return fmt.Errorf("could not start GRPC api: %w", err)
}
}
log.Info("HTTP endpoint opened", *info...)
defer func() {
srv.Stop()
shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
_ = listener.Shutdown(shutdownCtx)
log.Info("HTTP endpoint closed", "url", httpEndpoint)
}()
<-ctx.Done()
log.Info("Exiting...")
return nil
}
func startGrpcServer(ctx context.Context, info *[]any, cfg httpcfg.HttpCfg) (err error) {
var (
healthServer *grpcHealth.Server
grpcServer *grpc.Server
grpcListener net.Listener
grpcEndpoint string
)
grpcEndpoint = fmt.Sprintf("%s:%d", cfg.GRPCListenAddress, cfg.GRPCPort)
if grpcListener, err = net.Listen("tcp", grpcEndpoint); err != nil {
return fmt.Errorf("could not start GRPC listener: %w", err)
}
grpcServer = grpc.NewServer()
if cfg.GRPCHealthCheckEnabled {
healthServer = grpcHealth.NewServer()
grpc_health_v1.RegisterHealthServer(grpcServer, healthServer)
}
go grpcServer.Serve(grpcListener)
*info = append(*info, "grpc.port", cfg.GRPCPort)
// shut the server down once the root context is cancelled; a plain defer here
// would stop the server as soon as this function returns
go func() {
<-ctx.Done()
if cfg.GRPCHealthCheckEnabled {
healthServer.Shutdown()
}
grpcServer.GracefulStop()
_ = grpcListener.Close()
log.Info("GRPC endpoint closed", "url", grpcEndpoint)
}()
return nil
}
func createHandler(cfg httpcfg.HttpCfg, apiList []rpc.API, httpHandler http.Handler, wsHandler http.Handler, jwtSecret []byte) (http.Handler, error) {
var handler http.Handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// adding a healthcheck here
if health.ProcessHealthcheckIfNeeded(w, r, apiList) {
return
}
if cfg.WebsocketEnabled && wsHandler != nil && isWebsocket(r) {
wsHandler.ServeHTTP(w, r)
return
}
if jwtSecret != nil && !rpc.CheckJwtSecret(w, r, jwtSecret) {
return
}
httpHandler.ServeHTTP(w, r)
})
return handler, nil
}
// isWebsocket checks the header of a http request for a websocket upgrade request.
func isWebsocket(r *http.Request) bool {
return strings.EqualFold(r.Header.Get("Upgrade"), "websocket") &&
strings.Contains(strings.ToLower(r.Header.Get("Connection")), "upgrade")
}


@@ -1,19 +0,0 @@
package httpcfg
import (
"github.com/ledgerwatch/erigon/cmd/rpcdaemon/cli/httpcfg"
)
type HttpCfg struct {
httpcfg.HttpCfg
DisableRpcDaemon bool
OtsServerDisable bool
OtsApiPath string
OtsStaticDir string
OtsAssetUrl string
OtsRpcDaemonUrl string
OtsBeaconApiUrl string
}


@@ -1,43 +0,0 @@
package cli
import (
"encoding/json"
"io"
"os"
"strings"
"github.com/ledgerwatch/erigon/rpc"
)
type allowListFile struct {
Allow rpc.AllowList `json:"allow"`
}
func parseAllowListForRPC(path string) (rpc.AllowList, error) {
path = strings.TrimSpace(path)
if path == "" { // no file is provided
return nil, nil
}
file, err := os.Open(path)
if err != nil {
return nil, err
}
defer func() {
file.Close() //nolint: errcheck
}()
fileContents, err := io.ReadAll(file)
if err != nil {
return nil, err
}
var allowListFileObj allowListFile
err = json.Unmarshal(fileContents, &allowListFileObj)
if err != nil {
return nil, err
}
return allowListFileObj.Allow, nil
}


@@ -1,132 +0,0 @@
package commands
import (
"github.com/ledgerwatch/erigon-lib/gointerfaces/txpool"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon-lib/kv/kvcache"
libstate "github.com/ledgerwatch/erigon-lib/state"
"github.com/ledgerwatch/erigon/cmd/rpcdaemon/commands"
"github.com/ledgerwatch/erigon/rpc"
"github.com/ledgerwatch/erigon/turbo/rpchelper"
"github.com/ledgerwatch/erigon/turbo/services"
"github.com/wmitsuda/otterscan/cmd/otter/cli/httpcfg"
erigonHttpcfg "github.com/ledgerwatch/erigon/cmd/rpcdaemon/cli/httpcfg"
)
// APIList describes the list of available RPC apis
func APIList(db kv.RoDB, borDb kv.RoDB, eth rpchelper.ApiBackend, txPool txpool.TxpoolClient, mining txpool.MiningClient,
filters *rpchelper.Filters, stateCache kvcache.Cache,
blockReader services.FullBlockReader, agg *libstate.Aggregator22, cfg httpcfg.HttpCfg) (list []rpc.API) {
tcfg := erigonHttpcfg.HttpCfg{
MaxTraces: cfg.MaxTraces,
Gascap: cfg.Gascap,
TraceCompatibility: cfg.TraceCompatibility,
}
baseUtils := NewBaseUtilsApi(filters, stateCache, blockReader, agg, cfg.WithDatadir, cfg.EvmCallTimeout)
base := commands.NewBaseApi(filters, stateCache, blockReader, agg, cfg.WithDatadir, cfg.EvmCallTimeout)
ethImpl := commands.NewEthAPI(base, db, eth, txPool, mining, cfg.Gascap)
erigonImpl := commands.NewErigonAPI(base, db, eth)
txpoolImpl := commands.NewTxPoolAPI(base, db, txPool)
netImpl := commands.NewNetAPIImpl(eth)
debugImpl := commands.NewPrivateDebugAPI(base, db, cfg.Gascap)
traceImpl := commands.NewTraceAPI(base, db, &tcfg)
web3Impl := commands.NewWeb3APIImpl(eth)
dbImpl := commands.NewDBAPIImpl() /* deprecated */
adminImpl := commands.NewAdminAPI(eth)
parityImpl := commands.NewParityAPIImpl(db)
borImpl := commands.NewBorAPI(base, db, borDb) // bor (consensus) specific
otsImpl := NewOtterscanAPI(baseUtils, db)
for _, enabledAPI := range cfg.API {
switch enabledAPI {
case "eth":
list = append(list, rpc.API{
Namespace: "eth",
Public: true,
Service: commands.EthAPI(ethImpl),
Version: "1.0",
})
case "debug":
list = append(list, rpc.API{
Namespace: "debug",
Public: true,
Service: commands.PrivateDebugAPI(debugImpl),
Version: "1.0",
})
case "net":
list = append(list, rpc.API{
Namespace: "net",
Public: true,
Service: commands.NetAPI(netImpl),
Version: "1.0",
})
case "txpool":
list = append(list, rpc.API{
Namespace: "txpool",
Public: true,
Service: commands.TxPoolAPI(txpoolImpl),
Version: "1.0",
})
case "web3":
list = append(list, rpc.API{
Namespace: "web3",
Public: true,
Service: commands.Web3API(web3Impl),
Version: "1.0",
})
case "trace":
list = append(list, rpc.API{
Namespace: "trace",
Public: true,
Service: commands.TraceAPI(traceImpl),
Version: "1.0",
})
case "db": /* Deprecated */
list = append(list, rpc.API{
Namespace: "db",
Public: true,
Service: commands.DBAPI(dbImpl),
Version: "1.0",
})
case "erigon":
list = append(list, rpc.API{
Namespace: "erigon",
Public: true,
Service: commands.ErigonAPI(erigonImpl),
Version: "1.0",
})
case "bor":
list = append(list, rpc.API{
Namespace: "bor",
Public: true,
Service: commands.BorAPI(borImpl),
Version: "1.0",
})
case "admin":
list = append(list, rpc.API{
Namespace: "admin",
Public: false,
Service: commands.AdminAPI(adminImpl),
Version: "1.0",
})
case "parity":
list = append(list, rpc.API{
Namespace: "parity",
Public: false,
Service: commands.ParityAPI(parityImpl),
Version: "1.0",
})
case "ots":
list = append(list, rpc.API{
Namespace: "ots",
Public: true,
Service: OtterscanAPI(otsImpl),
Version: "1.0",
})
}
}
return list
}


@@ -1,10 +0,0 @@
package commands
// NotImplemented is the format string for methods that are not implemented
const NotImplemented = "the method is currently not implemented: %s"
// NotAvailableChainData is the format string for data that requires a remote Erigon connection
const NotAvailableChainData = "the function %s is not available, please use --private.api.addr option instead of --datadir option"
// NotAvailableDeprecated is the format string for deprecated methods
const NotAvailableDeprecated = "the method has been deprecated: %s"


@@ -1,39 +0,0 @@
package commands
import (
"context"
"testing"
"github.com/ledgerwatch/erigon-lib/kv/memdb"
"github.com/ledgerwatch/erigon/core"
)
func TestGetChainConfig(t *testing.T) {
db := memdb.NewTestDB(t)
config, _, err := core.CommitGenesisBlock(db, core.DefaultGenesisBlock())
if err != nil {
t.Fatalf("setting up genesis block: %v", err)
}
tx, txErr := db.BeginRo(context.Background())
if txErr != nil {
t.Fatalf("error starting tx: %v", txErr)
}
defer tx.Rollback()
api := &BaseAPI{}
config1, err1 := api.chainConfig(tx)
if err1 != nil {
t.Fatalf("reading chain config: %v", err1)
}
if config.String() != config1.String() {
t.Fatalf("read different config: %s, expected %s", config1.String(), config.String())
}
config2, err2 := api.chainConfig(tx)
if err2 != nil {
t.Fatalf("reading chain config: %v", err2)
}
if config.String() != config2.String() {
t.Fatalf("read different config: %s, expected %s", config2.String(), config.String())
}
}


@@ -1,526 +0,0 @@
package commands
import (
"context"
"errors"
"fmt"
"math/big"
"sync"
"github.com/holiman/uint256"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon/cmd/rpcdaemon/commands"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/common/hexutil"
"github.com/ledgerwatch/erigon/consensus/ethash"
"github.com/ledgerwatch/erigon/core"
"github.com/ledgerwatch/erigon/core/rawdb"
"github.com/ledgerwatch/erigon/core/types"
"github.com/ledgerwatch/erigon/core/vm"
"github.com/ledgerwatch/erigon/params"
"github.com/ledgerwatch/erigon/rpc"
"github.com/ledgerwatch/erigon/turbo/adapter/ethapi"
"github.com/ledgerwatch/erigon/turbo/rpchelper"
"github.com/ledgerwatch/erigon/turbo/transactions"
"github.com/ledgerwatch/log/v3"
)
// API_LEVEL Must be incremented every time new additions are made
const API_LEVEL = 8
type TransactionsWithReceipts struct {
Txs []*commands.RPCTransaction `json:"txs"`
Receipts []map[string]interface{} `json:"receipts"`
FirstPage bool `json:"firstPage"`
LastPage bool `json:"lastPage"`
}
type OtterscanAPI interface {
GetApiLevel() uint8
GetInternalOperations(ctx context.Context, hash common.Hash) ([]*InternalOperation, error)
SearchTransactionsBefore(ctx context.Context, addr common.Address, blockNum uint64, pageSize uint16) (*TransactionsWithReceipts, error)
SearchTransactionsAfter(ctx context.Context, addr common.Address, blockNum uint64, pageSize uint16) (*TransactionsWithReceipts, error)
GetBlockDetails(ctx context.Context, number rpc.BlockNumber) (map[string]interface{}, error)
GetBlockDetailsByHash(ctx context.Context, hash common.Hash) (map[string]interface{}, error)
GetBlockTransactions(ctx context.Context, number rpc.BlockNumber, pageNumber uint8, pageSize uint8) (map[string]interface{}, error)
HasCode(ctx context.Context, address common.Address, blockNrOrHash rpc.BlockNumberOrHash) (bool, error)
TraceTransaction(ctx context.Context, hash common.Hash) ([]*TraceEntry, error)
GetTransactionError(ctx context.Context, hash common.Hash) (hexutil.Bytes, error)
GetTransactionBySenderAndNonce(ctx context.Context, addr common.Address, nonce uint64) (*common.Hash, error)
GetContractCreator(ctx context.Context, addr common.Address) (*ContractCreatorData, error)
}
type OtterscanAPIImpl struct {
*BaseAPIUtils
db kv.RoDB
}
func NewOtterscanAPI(base *BaseAPIUtils, db kv.RoDB) *OtterscanAPIImpl {
return &OtterscanAPIImpl{
BaseAPIUtils: base,
db: db,
}
}
func (api *OtterscanAPIImpl) GetApiLevel() uint8 {
return API_LEVEL
}
// TODO: dedup from eth_txs.go#GetTransactionByHash
func (api *OtterscanAPIImpl) getTransactionByHash(ctx context.Context, tx kv.Tx, hash common.Hash) (types.Transaction, *types.Block, common.Hash, uint64, uint64, error) {
// https://infura.io/docs/ethereum/json-rpc/eth-getTransactionByHash
blockNum, ok, err := api.txnLookup(ctx, tx, hash)
if err != nil {
return nil, nil, common.Hash{}, 0, 0, err
}
if !ok {
return nil, nil, common.Hash{}, 0, 0, nil
}
block, err := api.blockByNumberWithSenders(tx, blockNum)
if err != nil {
return nil, nil, common.Hash{}, 0, 0, err
}
if block == nil {
return nil, nil, common.Hash{}, 0, 0, nil
}
blockHash := block.Hash()
var txnIndex uint64
var txn types.Transaction
for i, transaction := range block.Transactions() {
if transaction.Hash() == hash {
txn = transaction
txnIndex = uint64(i)
break
}
}
// Add GasPrice for the DynamicFeeTransaction
// var baseFee *big.Int
// if chainConfig.IsLondon(blockNum) && blockHash != (common.Hash{}) {
// baseFee = block.BaseFee()
// }
// if no transaction was found then we return nil
if txn == nil {
return nil, nil, common.Hash{}, 0, 0, nil
}
return txn, block, blockHash, blockNum, txnIndex, nil
}
func (api *OtterscanAPIImpl) runTracer(ctx context.Context, tx kv.Tx, hash common.Hash, tracer vm.Tracer) (*core.ExecutionResult, error) {
txn, block, blockHash, _, txIndex, err := api.getTransactionByHash(ctx, tx, hash)
if err != nil {
return nil, err
}
if txn == nil {
return nil, fmt.Errorf("transaction %#x not found", hash)
}
chainConfig, err := api.chainConfig(tx)
if err != nil {
return nil, err
}
getHeader := func(hash common.Hash, number uint64) *types.Header {
return rawdb.ReadHeader(tx, hash, number)
}
msg, blockCtx, txCtx, ibs, _, err := transactions.ComputeTxEnv(ctx, block, chainConfig, getHeader, ethash.NewFaker(), tx, blockHash, txIndex)
if err != nil {
return nil, err
}
var vmConfig vm.Config
if tracer == nil {
vmConfig = vm.Config{}
} else {
vmConfig = vm.Config{Debug: true, Tracer: tracer}
}
vmenv := vm.NewEVM(blockCtx, txCtx, ibs, chainConfig, vmConfig)
result, err := core.ApplyMessage(vmenv, msg, new(core.GasPool).AddGas(msg.Gas()), true, false /* gasBailout */)
if err != nil {
return nil, fmt.Errorf("tracing failed: %v", err)
}
return result, nil
}
func (api *OtterscanAPIImpl) GetInternalOperations(ctx context.Context, hash common.Hash) ([]*InternalOperation, error) {
tx, err := api.db.BeginRo(ctx)
if err != nil {
return nil, err
}
defer tx.Rollback()
tracer := NewOperationsTracer(ctx)
if _, err := api.runTracer(ctx, tx, hash, tracer); err != nil {
return nil, err
}
return tracer.Results, nil
}
// Search transactions that touch a certain address.
//
// It searches backward from a given block (exclusive); the results are sorted descending.
//
// pageSize indicates how many txs may be returned. If there are fewer txs than pageSize,
// they are all returned. But it may return slightly more than pageSize when the last
// matching block holds more txs than are needed to fill the page: e.g., with pageSize == 25
// and 24 txs already found, if the next block contains 4 matches, 28 txs are returned.
func (api *OtterscanAPIImpl) SearchTransactionsBefore(ctx context.Context, addr common.Address, blockNum uint64, pageSize uint16) (*TransactionsWithReceipts, error) {
dbtx, err := api.db.BeginRo(ctx)
if err != nil {
return nil, err
}
defer dbtx.Rollback()
callFromCursor, err := dbtx.Cursor(kv.CallFromIndex)
if err != nil {
return nil, err
}
defer callFromCursor.Close()
callToCursor, err := dbtx.Cursor(kv.CallToIndex)
if err != nil {
return nil, err
}
defer callToCursor.Close()
chainConfig, err := api.chainConfig(dbtx)
if err != nil {
return nil, err
}
isFirstPage := false
if blockNum == 0 {
isFirstPage = true
} else {
// Internal search code treats blockNum as inclusive, so adjust the value
blockNum--
}
// Initialize search cursors at the first shard >= desired block number
callFromProvider := NewCallCursorBackwardBlockProvider(callFromCursor, addr, blockNum)
callToProvider := NewCallCursorBackwardBlockProvider(callToCursor, addr, blockNum)
callFromToProvider := newCallFromToBlockProvider(false, callFromProvider, callToProvider)
txs := make([]*commands.RPCTransaction, 0, pageSize)
receipts := make([]map[string]interface{}, 0, pageSize)
resultCount := uint16(0)
hasMore := true
for {
if resultCount >= pageSize || !hasMore {
break
}
var results []*TransactionsWithReceipts
results, hasMore, err = api.traceBlocks(ctx, addr, chainConfig, pageSize, resultCount, callFromToProvider)
if err != nil {
return nil, err
}
for _, r := range results {
if r == nil {
return nil, errors.New("internal error during search tracing")
}
for i := len(r.Txs) - 1; i >= 0; i-- {
txs = append(txs, r.Txs[i])
}
for i := len(r.Receipts) - 1; i >= 0; i-- {
receipts = append(receipts, r.Receipts[i])
}
resultCount += uint16(len(r.Txs))
if resultCount >= pageSize {
break
}
}
}
return &TransactionsWithReceipts{txs, receipts, isFirstPage, !hasMore}, nil
}
// Search transactions that touch a certain address.
//
// It searches forward from a given block (exclusive); the results are sorted descending.
//
// pageSize indicates how many txs may be returned. If there are fewer txs than pageSize,
// they are all returned. But it may return slightly more than pageSize when the last
// matching block holds more txs than are needed to fill the page: e.g., with pageSize == 25
// and 24 txs already found, if the next block contains 4 matches, 28 txs are returned.
func (api *OtterscanAPIImpl) SearchTransactionsAfter(ctx context.Context, addr common.Address, blockNum uint64, pageSize uint16) (*TransactionsWithReceipts, error) {
dbtx, err := api.db.BeginRo(ctx)
if err != nil {
return nil, err
}
defer dbtx.Rollback()
callFromCursor, err := dbtx.Cursor(kv.CallFromIndex)
if err != nil {
return nil, err
}
defer callFromCursor.Close()
callToCursor, err := dbtx.Cursor(kv.CallToIndex)
if err != nil {
return nil, err
}
defer callToCursor.Close()
chainConfig, err := api.chainConfig(dbtx)
if err != nil {
return nil, err
}
isLastPage := false
if blockNum == 0 {
isLastPage = true
} else {
// Internal search code treats blockNum as inclusive, so adjust the value
blockNum++
}
// Initialize search cursors at the first shard >= desired block number
callFromProvider := NewCallCursorForwardBlockProvider(callFromCursor, addr, blockNum)
callToProvider := NewCallCursorForwardBlockProvider(callToCursor, addr, blockNum)
callFromToProvider := newCallFromToBlockProvider(true, callFromProvider, callToProvider)
txs := make([]*commands.RPCTransaction, 0, pageSize)
receipts := make([]map[string]interface{}, 0, pageSize)
resultCount := uint16(0)
hasMore := true
for {
if resultCount >= pageSize || !hasMore {
break
}
var results []*TransactionsWithReceipts
results, hasMore, err = api.traceBlocks(ctx, addr, chainConfig, pageSize, resultCount, callFromToProvider)
if err != nil {
return nil, err
}
for _, r := range results {
if r == nil {
return nil, errors.New("internal error during search tracing")
}
txs = append(txs, r.Txs...)
receipts = append(receipts, r.Receipts...)
resultCount += uint16(len(r.Txs))
if resultCount >= pageSize {
break
}
}
}
// Reverse results
lentxs := len(txs)
for i := 0; i < lentxs/2; i++ {
txs[i], txs[lentxs-1-i] = txs[lentxs-1-i], txs[i]
receipts[i], receipts[lentxs-1-i] = receipts[lentxs-1-i], receipts[i]
}
return &TransactionsWithReceipts{txs, receipts, !hasMore, isLastPage}, nil
}
func (api *OtterscanAPIImpl) traceBlocks(ctx context.Context, addr common.Address, chainConfig *params.ChainConfig, pageSize, resultCount uint16, callFromToProvider BlockProvider) ([]*TransactionsWithReceipts, bool, error) {
var wg sync.WaitGroup
// Estimate the common case of the user address having at most 1 interaction/block and
// trace N := remaining page matches as the number of blocks to trace concurrently.
// TODO: this is not optimal for busy contract addresses; implement better heuristics.
estBlocksToTrace := pageSize - resultCount
results := make([]*TransactionsWithReceipts, estBlocksToTrace)
totalBlocksTraced := 0
hasMore := true
for i := 0; i < int(estBlocksToTrace); i++ {
var nextBlock uint64
var err error
nextBlock, hasMore, err = callFromToProvider()
if err != nil {
return nil, false, err
}
// TODO: nextBlock == 0 seems redundant with hasMore == false
if !hasMore && nextBlock == 0 {
break
}
wg.Add(1)
totalBlocksTraced++
go api.searchTraceBlock(ctx, &wg, addr, chainConfig, i, nextBlock, results)
}
wg.Wait()
return results[:totalBlocksTraced], hasMore, nil
}
func (api *OtterscanAPIImpl) delegateGetBlockByNumber(tx kv.Tx, b *types.Block, number rpc.BlockNumber, inclTx bool) (map[string]interface{}, error) {
td, err := rawdb.ReadTd(tx, b.Hash(), b.NumberU64())
if err != nil {
return nil, err
}
response, err := ethapi.RPCMarshalBlockDeprecated(b, inclTx, inclTx)
if err != nil {
return nil, err
}
if !inclTx {
delete(response, "transactions") // workaround for https://github.com/ledgerwatch/erigon/issues/4989#issuecomment-1218415666
}
response["totalDifficulty"] = (*hexutil.Big)(td)
response["transactionCount"] = b.Transactions().Len()
if number == rpc.PendingBlockNumber {
// Pending blocks need to nil out a few fields
for _, field := range []string{"hash", "nonce", "miner"} {
response[field] = nil
}
}
// Explicitly drop unwanted fields
response["logsBloom"] = nil
return response, err
}
// TODO: temporary workaround due to API breakage from watch_the_burn
type internalIssuance struct {
BlockReward string `json:"blockReward,omitempty"`
UncleReward string `json:"uncleReward,omitempty"`
Issuance string `json:"issuance,omitempty"`
}
func (api *OtterscanAPIImpl) delegateIssuance(tx kv.Tx, block *types.Block, chainConfig *params.ChainConfig) (internalIssuance, error) {
if chainConfig.Ethash == nil {
// Clique for example has no issuance
return internalIssuance{}, nil
}
minerReward, uncleRewards := ethash.AccumulateRewards(chainConfig, block.Header(), block.Uncles())
issuance := minerReward
for _, r := range uncleRewards {
p := r // copy the loop value before taking its address
issuance.Add(&issuance, &p)
}
var ret internalIssuance
ret.BlockReward = hexutil.EncodeBig(minerReward.ToBig())
ret.Issuance = hexutil.EncodeBig(issuance.ToBig())
issuance.Sub(&issuance, &minerReward)
ret.UncleReward = hexutil.EncodeBig(issuance.ToBig())
return ret, nil
}
func (api *OtterscanAPIImpl) delegateBlockFees(ctx context.Context, tx kv.Tx, block *types.Block, senders []common.Address, chainConfig *params.ChainConfig) (uint64, error) {
receipts, err := api.getReceipts(ctx, tx, chainConfig, block, senders)
if err != nil {
return 0, fmt.Errorf("getReceipts error: %v", err)
}
fees := uint64(0)
for _, receipt := range receipts {
txn := block.Transactions()[receipt.TransactionIndex]
effectiveGasPrice := uint64(0)
if !chainConfig.IsLondon(block.NumberU64()) {
effectiveGasPrice = txn.GetPrice().Uint64()
} else {
baseFee, _ := uint256.FromBig(block.BaseFee())
gasPrice := new(big.Int).Add(block.BaseFee(), txn.GetEffectiveGasTip(baseFee).ToBig())
effectiveGasPrice = gasPrice.Uint64()
}
fees += effectiveGasPrice * receipt.GasUsed
}
return fees, nil
}
func (api *OtterscanAPIImpl) getBlockWithSenders(ctx context.Context, number rpc.BlockNumber, tx kv.Tx) (*types.Block, []common.Address, error) {
if number == rpc.PendingBlockNumber {
return api.pendingBlock(), nil, nil
}
n, hash, _, err := rpchelper.GetBlockNumber(rpc.BlockNumberOrHashWithNumber(number), tx, api.filters)
if err != nil {
return nil, nil, err
}
block, senders, err := api._blockReader.BlockWithSenders(ctx, tx, hash, n)
return block, senders, err
}
func (api *OtterscanAPIImpl) GetBlockTransactions(ctx context.Context, number rpc.BlockNumber, pageNumber uint8, pageSize uint8) (map[string]interface{}, error) {
tx, err := api.db.BeginRo(ctx)
if err != nil {
return nil, err
}
defer tx.Rollback()
b, senders, err := api.getBlockWithSenders(ctx, number, tx)
if err != nil {
return nil, err
}
if b == nil {
return nil, nil
}
chainConfig, err := api.chainConfig(tx)
if err != nil {
return nil, err
}
getBlockRes, err := api.delegateGetBlockByNumber(tx, b, number, true)
if err != nil {
return nil, err
}
// Receipts
receipts, err := api.getReceipts(ctx, tx, chainConfig, b, senders)
if err != nil {
return nil, fmt.Errorf("getReceipts error: %v", err)
}
result := make([]map[string]interface{}, 0, len(receipts))
for _, receipt := range receipts {
txn := b.Transactions()[receipt.TransactionIndex]
marshalledRcpt := marshalReceipt(receipt, txn, chainConfig, b, txn.Hash(), true)
marshalledRcpt["logs"] = nil
marshalledRcpt["logsBloom"] = nil
result = append(result, marshalledRcpt)
}
// Pruned block attrs
prunedBlock := map[string]interface{}{}
for _, k := range []string{"timestamp", "miner", "baseFeePerGas"} {
prunedBlock[k] = getBlockRes[k]
}
// Crop tx input to 4bytes
var txs = getBlockRes["transactions"].([]interface{})
for _, rawTx := range txs {
rpcTx := rawTx.(*ethapi.RPCTransaction)
if len(rpcTx.Input) >= 4 {
rpcTx.Input = rpcTx.Input[:4]
}
}
// Crop page
pageEnd := b.Transactions().Len() - int(pageNumber)*int(pageSize)
pageStart := pageEnd - int(pageSize)
if pageEnd < 0 {
pageEnd = 0
}
if pageStart < 0 {
pageStart = 0
}
response := map[string]interface{}{}
getBlockRes["transactions"] = getBlockRes["transactions"].([]interface{})[pageStart:pageEnd]
response["fullblock"] = getBlockRes
response["receipts"] = result[pageStart:pageEnd]
return response, nil
}
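The page-cropping arithmetic above pages backwards from the end of the block: page 0 covers the most recent `pageSize` transactions, page 1 the ones before those, with both bounds clamped at zero. A standalone sketch of the same clamping logic (plain ints standing in for the transaction list):

```go
package main

import "fmt"

// pageBounds reproduces the reverse-pagination arithmetic: page 0 covers the
// last pageSize items, page 1 the pageSize items before those, and so on,
// clamping both bounds at zero so out-of-range pages yield an empty slice.
func pageBounds(total, pageNumber, pageSize int) (start, end int) {
	end = total - pageNumber*pageSize
	start = end - pageSize
	if end < 0 {
		end = 0
	}
	if start < 0 {
		start = 0
	}
	return start, end
}

func main() {
	// 25 transactions, 10 per page.
	for page := 0; page < 4; page++ {
		s, e := pageBounds(25, page, 10)
		fmt.Printf("page %d -> [%d:%d]\n", page, s, e) // [15:25], [5:15], [0:5], [0:0]
	}
}
```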


@ -1,97 +0,0 @@
package commands
import (
"context"
"fmt"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/common/hexutil"
"github.com/ledgerwatch/erigon/core/rawdb"
"github.com/ledgerwatch/erigon/rpc"
)
func (api *OtterscanAPIImpl) GetBlockDetails(ctx context.Context, number rpc.BlockNumber) (map[string]interface{}, error) {
tx, err := api.db.BeginRo(ctx)
if err != nil {
return nil, err
}
defer tx.Rollback()
b, senders, err := api.getBlockWithSenders(ctx, number, tx)
if err != nil {
return nil, err
}
if b == nil {
return nil, nil
}
chainConfig, err := api.chainConfig(tx)
if err != nil {
return nil, err
}
getBlockRes, err := api.delegateGetBlockByNumber(tx, b, number, false)
if err != nil {
return nil, err
}
getIssuanceRes, err := api.delegateIssuance(tx, b, chainConfig)
if err != nil {
return nil, err
}
feesRes, err := api.delegateBlockFees(ctx, tx, b, senders, chainConfig)
if err != nil {
return nil, err
}
response := map[string]interface{}{}
response["block"] = getBlockRes
response["issuance"] = getIssuanceRes
response["totalFees"] = hexutil.Uint64(feesRes)
return response, nil
}
// TODO: remove duplication with GetBlockDetails
func (api *OtterscanAPIImpl) GetBlockDetailsByHash(ctx context.Context, hash common.Hash) (map[string]interface{}, error) {
tx, err := api.db.BeginRo(ctx)
if err != nil {
return nil, err
}
defer tx.Rollback()
blockNumber := rawdb.ReadHeaderNumber(tx, hash)
if blockNumber == nil {
return nil, fmt.Errorf("couldn't find block number for hash %v", hash)
}
b, senders, err := api._blockReader.BlockWithSenders(ctx, tx, hash, *blockNumber)
if err != nil {
return nil, err
}
if b == nil {
return nil, nil
}
chainConfig, err := api.chainConfig(tx)
if err != nil {
return nil, err
}
getBlockRes, err := api.delegateGetBlockByNumber(tx, b, rpc.BlockNumber(b.Number().Int64()), false)
if err != nil {
return nil, err
}
getIssuanceRes, err := api.delegateIssuance(tx, b, chainConfig)
if err != nil {
return nil, err
}
feesRes, err := api.delegateBlockFees(ctx, tx, b, senders, chainConfig)
if err != nil {
return nil, err
}
response := map[string]interface{}{}
response["block"] = getBlockRes
response["issuance"] = getIssuanceRes
response["totalFees"] = hexutil.Uint64(feesRes)
return response, nil
}


@ -1,245 +0,0 @@
package commands
import (
"bytes"
"context"
"fmt"
"sort"
"github.com/RoaringBitmap/roaring/roaring64"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/common/changeset"
"github.com/ledgerwatch/erigon/consensus/ethash"
"github.com/ledgerwatch/erigon/core"
"github.com/ledgerwatch/erigon/core/state"
"github.com/ledgerwatch/erigon/core/types"
"github.com/ledgerwatch/erigon/core/types/accounts"
"github.com/ledgerwatch/erigon/core/vm"
"github.com/ledgerwatch/erigon/params"
"github.com/ledgerwatch/erigon/turbo/shards"
"github.com/ledgerwatch/log/v3"
)
type ContractCreatorData struct {
Tx common.Hash `json:"hash"`
Creator common.Address `json:"creator"`
}
func (api *OtterscanAPIImpl) GetContractCreator(ctx context.Context, addr common.Address) (*ContractCreatorData, error) {
tx, err := api.db.BeginRo(ctx)
if err != nil {
return nil, err
}
defer tx.Rollback()
reader := state.NewPlainStateReader(tx)
plainStateAcc, err := reader.ReadAccountData(addr)
if err != nil {
return nil, err
}
// No state == non existent
if plainStateAcc == nil {
return nil, nil
}
// EOA?
if plainStateAcc.IsEmptyCodeHash() {
return nil, nil
}
// Contract; search for creation tx; navigate forward on AccountsHistory/ChangeSets
//
// We search shards in forward order on purpose because popular contracts may have
// dozens of states changes due to ETH deposits/withdraw after contract creation,
// so it is optimal to search from the beginning even if the contract has multiple
// incarnations.
accHistory, err := tx.Cursor(kv.AccountsHistory)
if err != nil {
return nil, err
}
defer accHistory.Close()
accCS, err := tx.CursorDupSort(kv.AccountChangeSet)
if err != nil {
return nil, err
}
defer accCS.Close()
// Locate shard that contains the block where incarnation changed
acs := changeset.Mapper[kv.AccountChangeSet]
k, v, err := accHistory.Seek(acs.IndexChunkKey(addr.Bytes(), 0))
if err != nil {
return nil, err
}
if !bytes.HasPrefix(k, addr.Bytes()) {
log.Error("Couldn't find any shard for account history", "addr", addr)
return nil, fmt.Errorf("couldn't find any shard for account history addr=%v", addr)
}
var acc accounts.Account
bm := roaring64.NewBitmap()
prevShardMaxBl := uint64(0)
for {
_, err := bm.ReadFrom(bytes.NewReader(v))
if err != nil {
return nil, err
}
// Shortcut precheck
st, err := acs.Find(accCS, bm.Maximum(), addr.Bytes())
if err != nil {
return nil, err
}
if st == nil {
log.Error("Unexpected error, couldn't find changeset", "block", bm.Maximum(), "addr", addr)
return nil, fmt.Errorf("unexpected error, couldn't find changeset block=%v addr=%v", bm.Maximum(), addr)
}
// Found the shard where the incarnation change happens; ignore all
// next shards
if err := acc.DecodeForStorage(st); err != nil {
return nil, err
}
if acc.Incarnation >= plainStateAcc.Incarnation {
break
}
prevShardMaxBl = bm.Maximum()
k, v, err = accHistory.Next()
if err != nil {
return nil, err
}
// No more shards; it means the max bl from previous shard
// contains the incarnation change
if !bytes.HasPrefix(k, addr.Bytes()) {
break
}
}
// Binary search block number inside shard; get first block where desired
// incarnation appears
blocks := bm.ToArray()
var searchErr error
r := sort.Search(len(blocks), func(i int) bool {
bl := blocks[i]
st, err := acs.Find(accCS, bl, addr.Bytes())
if err != nil {
searchErr = err
return false
}
if st == nil {
log.Error("Unexpected error, couldn't find changeset", "block", bl, "addr", addr)
return false
}
if err := acc.DecodeForStorage(st); err != nil {
searchErr = err
return false
}
if acc.Incarnation < plainStateAcc.Incarnation {
return false
}
return true
})
if searchErr != nil {
return nil, searchErr
}
// The sort.Search function finds the first block where the incarnation has
// changed to the desired one, so we get the previous block from the bitmap;
// however if the found block is already the first one from the bitmap, it means
// the block we want is the max block from the previous shard.
blockFound := prevShardMaxBl
if r > 0 {
blockFound = blocks[r-1]
}
// Trace block, find tx and contract creator
chainConfig, err := api.chainConfig(tx)
if err != nil {
return nil, err
}
tracer := NewCreateTracer(ctx, addr)
if err := api.deployerFinder(tx, ctx, blockFound, chainConfig, tracer); err != nil {
return nil, err
}
return &ContractCreatorData{
Tx: tracer.Tx.Hash(),
Creator: tracer.Creator,
}, nil
}
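The `sort.Search` step above locates the first block in the shard whose changeset already records the desired incarnation, then steps back one indexed block (or falls back to the previous shard's max block). A minimal sketch of that pattern, with a hypothetical map standing in for changeset lookups; the changeset encoding and decoding are assumed away:

```go
package main

import (
	"fmt"
	"sort"
)

// creationBlock finds the first indexed block whose changeset entry (the
// account state *before* that block executed) already shows the target
// incarnation; the creation then happened at the previous indexed block.
// If even the first entry shows it, the creation lives in the previous
// shard, so prevShardMax is returned.
func creationBlock(blocks []uint64, incarnationAt map[uint64]uint64, target, prevShardMax uint64) uint64 {
	r := sort.Search(len(blocks), func(i int) bool {
		return incarnationAt[blocks[i]] >= target
	})
	if r > 0 {
		return blocks[r-1]
	}
	return prevShardMax
}

func main() {
	// Hypothetical shard: changesets first record incarnation 2 at block
	// 1010, so the contract was created in block 1005.
	blocks := []uint64{1000, 1005, 1010, 1020}
	inc := map[uint64]uint64{1000: 1, 1005: 1, 1010: 2, 1020: 2}
	fmt.Println(creationBlock(blocks, inc, 2, 900)) // 1005
}
```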
func (api *OtterscanAPIImpl) deployerFinder(dbtx kv.Tx, ctx context.Context, blockNum uint64, chainConfig *params.ChainConfig, tracer GenericTracer) error {
block, err := api.blockByNumberWithSenders(dbtx, blockNum)
if err != nil {
return err
}
if block == nil {
return nil
}
reader := state.NewPlainState(dbtx, blockNum)
stateCache := shards.NewStateCache(32, 0 /* no limit */)
cachedReader := state.NewCachedReader(reader, stateCache)
noop := state.NewNoopWriter()
cachedWriter := state.NewCachedWriter(noop, stateCache)
ibs := state.New(cachedReader)
signer := types.MakeSigner(chainConfig, blockNum)
getHeader := func(hash common.Hash, number uint64) *types.Header {
h, e := api._blockReader.Header(ctx, dbtx, hash, number)
if e != nil {
log.Error("getHeader error", "number", number, "hash", hash, "err", e)
}
return h
}
engine := ethash.NewFaker()
header := block.Header()
rules := chainConfig.Rules(block.NumberU64())
// We can filter away any transaction whose calldata does not contain the
// CREATE (0xf0) or CREATE2 (0xf5) opcode bytes. While this yields false
// positives (matching bytes may be plain data), it greatly reduces the number
// of transactions that must be traced; smarter filtering could improve it further.
deployers := map[common.Address]struct{}{}
for _, tx := range block.Transactions() {
dat := tx.GetData()
for _, v := range dat {
if sender, ok := tx.GetSender(); ok {
if v == 0xf0 || v == 0xf5 {
deployers[sender] = struct{}{}
}
}
}
}
for idx, tx := range block.Transactions() {
if sender, ok := tx.GetSender(); ok {
if _, ok := deployers[sender]; !ok {
continue
}
}
ibs.Prepare(tx.Hash(), block.Hash(), idx)
msg, _ := tx.AsMessage(*signer, header.BaseFee, rules)
BlockContext := core.NewEVMBlockContext(header, core.GetHashFn(header, getHeader), engine, nil)
TxContext := core.NewEVMTxContext(msg)
vmenv := vm.NewEVM(BlockContext, TxContext, ibs, chainConfig, vm.Config{Debug: true, Tracer: tracer})
if _, err := core.ApplyMessage(vmenv, msg, new(core.GasPool).AddGas(tx.GetGas()), true /* refunds */, false /* gasBailout */); err != nil {
return err
}
_ = ibs.FinalizeTx(vmenv.ChainConfig().Rules(block.NumberU64()), cachedWriter)
if tracer.Found() {
tracer.SetTransaction(tx)
return nil
}
}
return nil
}
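The prefilter above is a raw byte scan over calldata: any byte equal to the CREATE or CREATE2 opcode value matches, even if it is pushed data rather than an opcode, so it over-approximates by design. A standalone sketch of the same scan:

```go
package main

import (
	"bytes"
	"fmt"
)

// mayDeploy reports whether calldata contains a byte equal to the CREATE
// (0xf0) or CREATE2 (0xf5) opcode. This is a deliberately crude prefilter:
// matching bytes may be plain data, so false positives are expected, but
// transactions it rejects can safely be skipped during tracing.
func mayDeploy(calldata []byte) bool {
	return bytes.IndexByte(calldata, 0xf0) >= 0 || bytes.IndexByte(calldata, 0xf5) >= 0
}

func main() {
	fmt.Println(mayDeploy([]byte{0x60, 0x00, 0xf0}))       // true: contains CREATE
	fmt.Println(mayDeploy([]byte{0x60, 0x01, 0x60, 0x02})) // false: no match
}
```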


@ -1,38 +0,0 @@
package commands
import (
"math/big"
"time"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/core/vm"
)
// Helper implementation of vm.Tracer; since the interface is big and most
// custom tracers implement just a few of the methods, this is a base struct
// to avoid lots of empty boilerplate code
type DefaultTracer struct {
}
func (t *DefaultTracer) CaptureStart(env *vm.EVM, depth int, from common.Address, to common.Address, precompile bool, create bool, calltype vm.CallType, input []byte, gas uint64, value *big.Int, code []byte) {
}
func (t *DefaultTracer) CaptureState(env *vm.EVM, pc uint64, op vm.OpCode, gas, cost uint64, scope *vm.ScopeContext, rData []byte, depth int, err error) {
}
func (t *DefaultTracer) CaptureFault(env *vm.EVM, pc uint64, op vm.OpCode, gas, cost uint64, scope *vm.ScopeContext, depth int, err error) {
}
func (t *DefaultTracer) CaptureEnd(depth int, output []byte, startGas, endGas uint64, d time.Duration, err error) {
}
func (t *DefaultTracer) CaptureSelfDestruct(from common.Address, to common.Address, value *big.Int) {
}
func (t *DefaultTracer) CaptureAccountRead(account common.Address) error {
return nil
}
func (t *DefaultTracer) CaptureAccountWrite(account common.Address) error {
return nil
}


@ -1,78 +0,0 @@
package commands
import (
"context"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/consensus/ethash"
"github.com/ledgerwatch/erigon/core"
"github.com/ledgerwatch/erigon/core/state"
"github.com/ledgerwatch/erigon/core/types"
"github.com/ledgerwatch/erigon/core/vm"
"github.com/ledgerwatch/erigon/params"
"github.com/ledgerwatch/erigon/turbo/shards"
"github.com/ledgerwatch/log/v3"
)
type GenericTracer interface {
vm.Tracer
SetTransaction(tx types.Transaction)
Found() bool
}
func (api *OtterscanAPIImpl) genericTracer(dbtx kv.Tx, ctx context.Context, blockNum uint64, chainConfig *params.ChainConfig, tracer GenericTracer) error {
block, err := api.blockByNumberWithSenders(dbtx, blockNum)
if err != nil {
return err
}
log.Trace("got block with senders")
if block == nil {
return nil
}
reader := state.NewPlainState(dbtx, blockNum)
stateCache := shards.NewStateCache(32, 0 /* no limit */)
cachedReader := state.NewCachedReader(reader, stateCache)
noop := state.NewNoopWriter()
cachedWriter := state.NewCachedWriter(noop, stateCache)
ibs := state.New(cachedReader)
signer := types.MakeSigner(chainConfig, blockNum)
log.Trace("created states")
getHeader := func(hash common.Hash, number uint64) *types.Header {
h, e := api._blockReader.Header(ctx, dbtx, hash, number)
if e != nil {
log.Error("getHeader error", "number", number, "hash", hash, "err", e)
}
return h
}
engine := ethash.NewFaker()
header := block.Header()
rules := chainConfig.Rules(block.NumberU64())
log.Trace("got transactions", "amt", len(block.Transactions()))
for idx, tx := range block.Transactions() {
log.Trace("processing txn", "idx", idx)
ibs.Prepare(tx.Hash(), block.Hash(), idx)
msg, _ := tx.AsMessage(*signer, header.BaseFee, rules)
BlockContext := core.NewEVMBlockContext(header, core.GetHashFn(header, getHeader), engine, nil)
TxContext := core.NewEVMTxContext(msg)
vmenv := vm.NewEVM(BlockContext, TxContext, ibs, chainConfig, vm.Config{Debug: true, Tracer: tracer})
if _, err := core.ApplyMessage(vmenv, msg, new(core.GasPool).AddGas(tx.GetGas()), true /* refunds */, false /* gasBailout */); err != nil {
return err
}
_ = ibs.FinalizeTx(vmenv.ChainConfig().Rules(block.NumberU64()), cachedWriter)
if tracer.Found() {
tracer.SetTransaction(tx)
return nil
}
}
return nil
}


@ -1,31 +0,0 @@
package commands
import (
"context"
"fmt"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/rpc"
"github.com/ledgerwatch/erigon/turbo/adapter"
"github.com/ledgerwatch/erigon/turbo/rpchelper"
)
func (api *OtterscanAPIImpl) HasCode(ctx context.Context, address common.Address, blockNrOrHash rpc.BlockNumberOrHash) (bool, error) {
tx, err := api.db.BeginRo(ctx)
if err != nil {
return false, fmt.Errorf("hasCode cannot open tx: %w", err)
}
defer tx.Rollback()
blockNumber, _, _, err := rpchelper.GetBlockNumber(blockNrOrHash, tx, api.filters)
if err != nil {
return false, err
}
reader := adapter.NewStateReader(tx, blockNumber)
acc, err := reader.ReadAccountData(address)
if err != nil {
return false, err
}
if acc == nil {
return false, nil
}
return !acc.IsEmptyCodeHash(), nil
}


@ -1,128 +0,0 @@
package commands
import (
"bytes"
"github.com/RoaringBitmap/roaring/roaring64"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon/common"
)
// Given a ChunkLocator, moves back over the chunks and inside each chunk, moves
// backwards over the block numbers.
func NewBackwardBlockProvider(chunkLocator ChunkLocator, maxBlock uint64) BlockProvider {
// maxBlock == 0 means no upper bound
if maxBlock == 0 {
maxBlock = MaxBlockNum
}
var iter roaring64.IntIterable64
var chunkProvider ChunkProvider
isFirst := true
finished := false
return func() (uint64, bool, error) {
if finished {
return 0, false, nil
}
if isFirst {
isFirst = false
// Try to get first chunk
var ok bool
var err error
chunkProvider, ok, err = chunkLocator(maxBlock)
if err != nil {
finished = true
return 0, false, err
}
if !ok {
finished = true
return 0, false, nil
}
if chunkProvider == nil {
finished = true
return 0, false, nil
}
// Has at least the first chunk; initialize the iterator
chunk, ok, err := chunkProvider()
if err != nil {
finished = true
return 0, false, err
}
if !ok {
finished = true
return 0, false, nil
}
bm := roaring64.NewBitmap()
if _, err := bm.ReadFrom(bytes.NewReader(chunk)); err != nil {
finished = true
return 0, false, err
}
// It can happen that on the first chunk we'll get a chunk that contains
// the last block <= maxBlock in the middle of the chunk/bitmap, so we
// remove all blocks after it (since there is no AdvanceIfNeeded() in
// IntIterable64)
if maxBlock != MaxBlockNum {
bm.RemoveRange(maxBlock+1, MaxBlockNum)
}
iter = bm.ReverseIterator()
// This means it is the last chunk and the min block is > the last one
if !iter.HasNext() {
chunk, ok, err := chunkProvider()
if err != nil {
finished = true
return 0, false, err
}
if !ok {
finished = true
return 0, false, nil
}
bm := roaring64.NewBitmap()
if _, err := bm.ReadFrom(bytes.NewReader(chunk)); err != nil {
finished = true
return 0, false, err
}
iter = bm.ReverseIterator()
}
}
nextBlock := iter.Next()
hasNext := iter.HasNext()
if !hasNext {
iter = nil
// Check if there is another chunk to get blocks from
chunk, ok, err := chunkProvider()
if err != nil {
return 0, false, err
}
if !ok {
finished = true
return nextBlock, false, nil
}
hasNext = true
bm := roaring64.NewBitmap()
if _, err := bm.ReadFrom(bytes.NewReader(chunk)); err != nil {
finished = true
return 0, false, err
}
iter = bm.ReverseIterator()
}
return nextBlock, hasNext, nil
}
}
func NewCallCursorBackwardBlockProvider(cursor kv.Cursor, addr common.Address, maxBlock uint64) BlockProvider {
chunkLocator := newCallChunkLocator(cursor, addr, false)
return NewBackwardBlockProvider(chunkLocator, maxBlock)
}
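A `BlockProvider` is a closure implementing the `func() (uint64, bool, error)` contract: each call yields the next block number (descending here), a has-more flag, and an error. The same contract can be sketched over a plain sorted slice, with the roaring bitmaps and chunking assumed away:

```go
package main

import "fmt"

type BlockProvider func() (uint64, bool, error)

// newSliceBackwardProvider walks an ascending slice of block numbers in
// reverse, honoring maxBlock (0 means no upper bound). If no block satisfies
// the bound, the first call returns (0, false, nil), mirroring the
// "not found" behavior exercised by the tests above.
func newSliceBackwardProvider(blocks []uint64, maxBlock uint64) BlockProvider {
	i := len(blocks) - 1
	if maxBlock != 0 {
		for i >= 0 && blocks[i] > maxBlock {
			i--
		}
	}
	return func() (uint64, bool, error) {
		if i < 0 {
			return 0, false, nil
		}
		b := blocks[i]
		i--
		return b, i >= 0, nil
	}
}

func main() {
	p := newSliceBackwardProvider([]uint64{1000, 1005, 1010}, 1005)
	for {
		b, hasMore, _ := p()
		fmt.Println(b, hasMore) // 1005 true, then 1000 false
		if !hasMore {
			break
		}
	}
}
```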


@ -1,109 +0,0 @@
package commands
import (
"testing"
)
func TestFromToBackwardBlockProviderWith1Chunk(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockBackwardChunkLocator([][]byte{chunk1})
fromBlockProvider := NewBackwardBlockProvider(chunkLocator, 0)
toBlockProvider := NewBackwardBlockProvider(newMockBackwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(true, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 1010, true)
checkNext(t, blockProvider, 1005, true)
checkNext(t, blockProvider, 1000, false)
}
func TestFromToBackwardBlockProviderWith1ChunkMiddleBlock(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockBackwardChunkLocator([][]byte{chunk1})
fromBlockProvider := NewBackwardBlockProvider(chunkLocator, 1005)
toBlockProvider := NewBackwardBlockProvider(newMockBackwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(true, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 1005, true)
checkNext(t, blockProvider, 1000, false)
}
func TestFromToBackwardBlockProviderWith1ChunkNotExactBlock(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockBackwardChunkLocator([][]byte{chunk1})
fromBlockProvider := NewBackwardBlockProvider(chunkLocator, 1003)
toBlockProvider := NewBackwardBlockProvider(newMockBackwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(true, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 1000, false)
}
func TestFromToBackwardBlockProviderWith1ChunkLastBlock(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockBackwardChunkLocator([][]byte{chunk1})
fromBlockProvider := NewBackwardBlockProvider(chunkLocator, 1000)
toBlockProvider := NewBackwardBlockProvider(newMockBackwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(true, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 1000, false)
}
func TestFromToBackwardBlockProviderWith1ChunkBlockNotFound(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockBackwardChunkLocator([][]byte{chunk1})
fromBlockProvider := NewBackwardBlockProvider(chunkLocator, 900)
toBlockProvider := NewBackwardBlockProvider(newMockBackwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(true, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 0, false)
}
func TestFromToBackwardBlockProviderWithNoChunks(t *testing.T) {
chunkLocator := newMockBackwardChunkLocator([][]byte{})
fromBlockProvider := NewBackwardBlockProvider(chunkLocator, 0)
toBlockProvider := NewBackwardBlockProvider(newMockBackwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(true, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 0, false)
}
func TestFromToBackwardBlockProviderWithMultipleChunks(t *testing.T) {
// Mocks 2 chunks
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunk2 := createBitmap(t, []uint64{1501, 1600})
chunkLocator := newMockBackwardChunkLocator([][]byte{chunk1, chunk2})
fromBlockProvider := NewBackwardBlockProvider(chunkLocator, 0)
toBlockProvider := NewBackwardBlockProvider(newMockBackwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(true, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 1600, true)
checkNext(t, blockProvider, 1501, true)
checkNext(t, blockProvider, 1010, true)
checkNext(t, blockProvider, 1005, true)
checkNext(t, blockProvider, 1000, false)
}
func TestFromToBackwardBlockProviderWithMultipleChunksBlockBetweenChunks(t *testing.T) {
// Mocks 2 chunks
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunk2 := createBitmap(t, []uint64{1501, 1600})
chunkLocator := newMockBackwardChunkLocator([][]byte{chunk1, chunk2})
fromBlockProvider := NewBackwardBlockProvider(chunkLocator, 1500)
toBlockProvider := NewBackwardBlockProvider(newMockBackwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(true, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 1010, true)
checkNext(t, blockProvider, 1005, true)
checkNext(t, blockProvider, 1000, false)
}


@ -1,143 +0,0 @@
package commands
import (
"bytes"
"testing"
"github.com/RoaringBitmap/roaring/roaring64"
)
func newMockBackwardChunkLocator(chunks [][]byte) ChunkLocator {
return func(block uint64) (ChunkProvider, bool, error) {
for i, v := range chunks {
bm := roaring64.NewBitmap()
if _, err := bm.ReadFrom(bytes.NewReader(v)); err != nil {
return nil, false, err
}
if block > bm.Maximum() {
continue
}
return newMockBackwardChunkProvider(chunks[:i+1]), true, nil
}
// Not found; return the last to simulate the behavior of returning
// everything up to the 0xffff... chunk
if len(chunks) > 0 {
return newMockBackwardChunkProvider(chunks), true, nil
}
return nil, true, nil
}
}
func newMockBackwardChunkProvider(chunks [][]byte) ChunkProvider {
i := len(chunks) - 1
return func() ([]byte, bool, error) {
if i < 0 {
return nil, false, nil
}
chunk := chunks[i]
i--
return chunk, true, nil
}
}
func TestBackwardBlockProviderWith1Chunk(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockBackwardChunkLocator([][]byte{chunk1})
blockProvider := NewBackwardBlockProvider(chunkLocator, 0)
checkNext(t, blockProvider, 1010, true)
checkNext(t, blockProvider, 1005, true)
checkNext(t, blockProvider, 1000, false)
}
func TestBackwardBlockProviderWith1ChunkMiddleBlock(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockBackwardChunkLocator([][]byte{chunk1})
blockProvider := NewBackwardBlockProvider(chunkLocator, 1005)
checkNext(t, blockProvider, 1005, true)
checkNext(t, blockProvider, 1000, false)
}
func TestBackwardBlockProviderWith1ChunkNotExactBlock(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockBackwardChunkLocator([][]byte{chunk1})
blockProvider := NewBackwardBlockProvider(chunkLocator, 1003)
checkNext(t, blockProvider, 1000, false)
}
func TestBackwardBlockProviderWith1ChunkLastBlock(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockBackwardChunkLocator([][]byte{chunk1})
blockProvider := NewBackwardBlockProvider(chunkLocator, 1000)
checkNext(t, blockProvider, 1000, false)
}
func TestBackwardBlockProviderWith1ChunkBlockNotFound(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockBackwardChunkLocator([][]byte{chunk1})
blockProvider := NewBackwardBlockProvider(chunkLocator, 900)
checkNext(t, blockProvider, 0, false)
}
func TestBackwardBlockProviderWithNoChunks(t *testing.T) {
chunkLocator := newMockBackwardChunkLocator([][]byte{})
blockProvider := NewBackwardBlockProvider(chunkLocator, 0)
checkNext(t, blockProvider, 0, false)
}
func TestBackwardBlockProviderWithMultipleChunks(t *testing.T) {
// Mocks 2 chunks
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunk2 := createBitmap(t, []uint64{1501, 1600})
chunkLocator := newMockBackwardChunkLocator([][]byte{chunk1, chunk2})
blockProvider := NewBackwardBlockProvider(chunkLocator, 0)
checkNext(t, blockProvider, 1600, true)
checkNext(t, blockProvider, 1501, true)
checkNext(t, blockProvider, 1010, true)
checkNext(t, blockProvider, 1005, true)
checkNext(t, blockProvider, 1000, false)
}
func TestBackwardBlockProviderWithMultipleChunksBlockBetweenChunks(t *testing.T) {
// Mocks 2 chunks
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunk2 := createBitmap(t, []uint64{1501, 1600})
chunkLocator := newMockBackwardChunkLocator([][]byte{chunk1, chunk2})
blockProvider := NewBackwardBlockProvider(chunkLocator, 1500)
checkNext(t, blockProvider, 1010, true)
checkNext(t, blockProvider, 1005, true)
checkNext(t, blockProvider, 1000, false)
}
func TestBackwardBlockProviderWithMultipleChunksBlockNotFound(t *testing.T) {
// Mocks 2 chunks
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunk2 := createBitmap(t, []uint64{1501, 1600})
chunkLocator := newMockBackwardChunkLocator([][]byte{chunk1, chunk2})
blockProvider := NewBackwardBlockProvider(chunkLocator, 900)
checkNext(t, blockProvider, 0, false)
}


@ -1,107 +0,0 @@
package commands
import (
"bytes"
"github.com/RoaringBitmap/roaring/roaring64"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon/common"
)
// Given a ChunkLocator, moves forward over the chunks and inside each chunk, moves
// forward over the block numbers.
func NewForwardBlockProvider(chunkLocator ChunkLocator, minBlock uint64) BlockProvider {
var iter roaring64.IntPeekable64
var chunkProvider ChunkProvider
isFirst := true
finished := false
return func() (uint64, bool, error) {
if finished {
return 0, false, nil
}
if isFirst {
isFirst = false
// Try to get first chunk
var ok bool
var err error
chunkProvider, ok, err = chunkLocator(minBlock)
if err != nil {
finished = true
return 0, false, err
}
if !ok {
finished = true
return 0, false, nil
}
if chunkProvider == nil {
finished = true
return 0, false, nil
}
// Has at least the first chunk; initialize the iterator
chunk, ok, err := chunkProvider()
if err != nil {
finished = true
return 0, false, err
}
if !ok {
finished = true
return 0, false, nil
}
bm := roaring64.NewBitmap()
if _, err := bm.ReadFrom(bytes.NewReader(chunk)); err != nil {
finished = true
return 0, false, err
}
iter = bm.Iterator()
// It can happen that on the first chunk we'll get a chunk that contains
// the first block >= minBlock in the middle of the chunk/bitmap, so we
// skip all previous blocks before it.
iter.AdvanceIfNeeded(minBlock)
// This means it is the last chunk and the min block is > the last one
if !iter.HasNext() {
finished = true
return 0, false, nil
}
}
nextBlock := iter.Next()
hasNext := iter.HasNext()
if !hasNext {
iter = nil
// Check if there is another chunk to get blocks from
chunk, ok, err := chunkProvider()
if err != nil {
finished = true
return 0, false, err
}
if !ok {
finished = true
return nextBlock, false, nil
}
hasNext = true
bm := roaring64.NewBitmap()
if _, err := bm.ReadFrom(bytes.NewReader(chunk)); err != nil {
finished = true
return 0, false, err
}
iter = bm.Iterator()
}
return nextBlock, hasNext, nil
}
}
func NewCallCursorForwardBlockProvider(cursor kv.Cursor, addr common.Address, minBlock uint64) BlockProvider {
chunkLocator := newCallChunkLocator(cursor, addr, true)
return NewForwardBlockProvider(chunkLocator, minBlock)
}
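The forward provider mirrors the backward one, but uses `iter.AdvanceIfNeeded(minBlock)` to skip ahead inside the first bitmap. That skip can be sketched over a plain sorted slice with `sort.Search` (chunking again assumed away):

```go
package main

import (
	"fmt"
	"sort"
)

type BlockProvider func() (uint64, bool, error)

// newSliceForwardProvider walks an ascending slice of block numbers starting
// at the first block >= minBlock, analogous to iter.AdvanceIfNeeded(minBlock).
// If no such block exists, the first call returns (0, false, nil).
func newSliceForwardProvider(blocks []uint64, minBlock uint64) BlockProvider {
	i := sort.Search(len(blocks), func(j int) bool { return blocks[j] >= minBlock })
	return func() (uint64, bool, error) {
		if i >= len(blocks) {
			return 0, false, nil
		}
		b := blocks[i]
		i++
		return b, i < len(blocks), nil
	}
}

func main() {
	// minBlock 1007 falls between indexed blocks, so iteration starts at 1010.
	p := newSliceForwardProvider([]uint64{1000, 1005, 1010}, 1007)
	b, hasMore, _ := p()
	fmt.Println(b, hasMore) // 1010 false
}
```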


@ -1,108 +0,0 @@
package commands
import (
"testing"
)
func TestFromToForwardBlockProviderWith1Chunk(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockForwardChunkLocator([][]byte{chunk1})
fromBlockProvider := NewForwardBlockProvider(chunkLocator, 0)
toBlockProvider := NewForwardBlockProvider(newMockForwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(false, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 1000, true)
checkNext(t, blockProvider, 1005, true)
checkNext(t, blockProvider, 1010, false)
}
func TestFromToForwardBlockProviderWith1ChunkMiddleBlock(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockForwardChunkLocator([][]byte{chunk1})
fromBlockProvider := NewForwardBlockProvider(chunkLocator, 1005)
toBlockProvider := NewForwardBlockProvider(newMockForwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(false, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 1005, true)
checkNext(t, blockProvider, 1010, false)
}
func TestFromToForwardBlockProviderWith1ChunkNotExactBlock(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockForwardChunkLocator([][]byte{chunk1})
fromBlockProvider := NewForwardBlockProvider(chunkLocator, 1007)
toBlockProvider := NewForwardBlockProvider(newMockForwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(false, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 1010, false)
}
func TestFromToForwardBlockProviderWith1ChunkLastBlock(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockForwardChunkLocator([][]byte{chunk1})
fromBlockProvider := NewForwardBlockProvider(chunkLocator, 1010)
toBlockProvider := NewForwardBlockProvider(newMockForwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(false, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 1010, false)
}
func TestFromToForwardBlockProviderWith1ChunkBlockNotFound(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockForwardChunkLocator([][]byte{chunk1})
fromBlockProvider := NewForwardBlockProvider(chunkLocator, 1100)
toBlockProvider := NewForwardBlockProvider(newMockForwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(false, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 0, false)
}
func TestFromToForwardBlockProviderWithNoChunks(t *testing.T) {
chunkLocator := newMockForwardChunkLocator([][]byte{})
fromBlockProvider := NewForwardBlockProvider(chunkLocator, 0)
toBlockProvider := NewForwardBlockProvider(newMockForwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(false, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 0, false)
}
func TestFromToForwardBlockProviderWithMultipleChunks(t *testing.T) {
// Mocks 2 chunks
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunk2 := createBitmap(t, []uint64{1501, 1600})
chunkLocator := newMockForwardChunkLocator([][]byte{chunk1, chunk2})
fromBlockProvider := NewForwardBlockProvider(chunkLocator, 0)
toBlockProvider := NewForwardBlockProvider(newMockForwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(false, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 1000, true)
checkNext(t, blockProvider, 1005, true)
checkNext(t, blockProvider, 1010, true)
checkNext(t, blockProvider, 1501, true)
checkNext(t, blockProvider, 1600, false)
}
func TestFromToForwardBlockProviderWithMultipleChunksBlockBetweenChunks(t *testing.T) {
// Mocks 2 chunks
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunk2 := createBitmap(t, []uint64{1501, 1600})
chunkLocator := newMockForwardChunkLocator([][]byte{chunk1, chunk2})
fromBlockProvider := NewForwardBlockProvider(chunkLocator, 1300)
toBlockProvider := NewForwardBlockProvider(newMockForwardChunkLocator([][]byte{}), 0)
blockProvider := newCallFromToBlockProvider(false, fromBlockProvider, toBlockProvider)
checkNext(t, blockProvider, 1501, true)
checkNext(t, blockProvider, 1600, false)
}


@ -1,143 +0,0 @@
package commands
import (
"bytes"
"testing"
"github.com/RoaringBitmap/roaring/roaring64"
)
func newMockForwardChunkLocator(chunks [][]byte) ChunkLocator {
return func(block uint64) (ChunkProvider, bool, error) {
for i, v := range chunks {
bm := roaring64.NewBitmap()
if _, err := bm.ReadFrom(bytes.NewReader(v)); err != nil {
return nil, false, err
}
if block > bm.Maximum() {
continue
}
return newMockForwardChunkProvider(chunks[i:]), true, nil
}
// Not found; return the last to simulate the behavior of returning
// the 0xffff... chunk
if len(chunks) > 0 {
return newMockForwardChunkProvider(chunks[len(chunks)-1:]), true, nil
}
return nil, true, nil
}
}
func newMockForwardChunkProvider(chunks [][]byte) ChunkProvider {
i := 0
return func() ([]byte, bool, error) {
if i >= len(chunks) {
return nil, false, nil
}
chunk := chunks[i]
i++
return chunk, true, nil
}
}
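The mock providers above all rely on the same closure-based iterator pattern: the position lives in a variable captured by the returned function. A minimal stand-alone reduction of that pattern (the `sliceIterator` helper is illustrative, not part of the original code):

```go
package main

import "fmt"

// sliceIterator reduces the closure-based iterator pattern used by the
// mock chunk providers: each call returns the next item and an "ok" flag,
// with the cursor position captured in the closure.
func sliceIterator[T any](items []T) func() (T, bool) {
	i := 0
	return func() (T, bool) {
		var zero T
		if i >= len(items) {
			return zero, false
		}
		v := items[i]
		i++
		return v, true
	}
}

func main() {
	next := sliceIterator([]uint64{1000, 1005})
	for v, ok := next(); ok; v, ok = next() {
		fmt.Println(v)
	}
}
```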
func TestForwardBlockProviderWith1Chunk(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockForwardChunkLocator([][]byte{chunk1})
blockProvider := NewForwardBlockProvider(chunkLocator, 0)
checkNext(t, blockProvider, 1000, true)
checkNext(t, blockProvider, 1005, true)
checkNext(t, blockProvider, 1010, false)
}
func TestForwardBlockProviderWith1ChunkMiddleBlock(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockForwardChunkLocator([][]byte{chunk1})
blockProvider := NewForwardBlockProvider(chunkLocator, 1005)
checkNext(t, blockProvider, 1005, true)
checkNext(t, blockProvider, 1010, false)
}
func TestForwardBlockProviderWith1ChunkNotExactBlock(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockForwardChunkLocator([][]byte{chunk1})
blockProvider := NewForwardBlockProvider(chunkLocator, 1007)
checkNext(t, blockProvider, 1010, false)
}
func TestForwardBlockProviderWith1ChunkLastBlock(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockForwardChunkLocator([][]byte{chunk1})
blockProvider := NewForwardBlockProvider(chunkLocator, 1010)
checkNext(t, blockProvider, 1010, false)
}
func TestForwardBlockProviderWith1ChunkBlockNotFound(t *testing.T) {
// Mocks 1 chunk
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunkLocator := newMockForwardChunkLocator([][]byte{chunk1})
blockProvider := NewForwardBlockProvider(chunkLocator, 1100)
checkNext(t, blockProvider, 0, false)
}
func TestForwardBlockProviderWithNoChunks(t *testing.T) {
chunkLocator := newMockForwardChunkLocator([][]byte{})
blockProvider := NewForwardBlockProvider(chunkLocator, 0)
checkNext(t, blockProvider, 0, false)
}
func TestForwardBlockProviderWithMultipleChunks(t *testing.T) {
// Mocks 2 chunks
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunk2 := createBitmap(t, []uint64{1501, 1600})
chunkLocator := newMockForwardChunkLocator([][]byte{chunk1, chunk2})
blockProvider := NewForwardBlockProvider(chunkLocator, 0)
checkNext(t, blockProvider, 1000, true)
checkNext(t, blockProvider, 1005, true)
checkNext(t, blockProvider, 1010, true)
checkNext(t, blockProvider, 1501, true)
checkNext(t, blockProvider, 1600, false)
}
func TestForwardBlockProviderWithMultipleChunksBlockBetweenChunks(t *testing.T) {
// Mocks 2 chunks
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunk2 := createBitmap(t, []uint64{1501, 1600})
chunkLocator := newMockForwardChunkLocator([][]byte{chunk1, chunk2})
blockProvider := NewForwardBlockProvider(chunkLocator, 1300)
checkNext(t, blockProvider, 1501, true)
checkNext(t, blockProvider, 1600, false)
}
func TestForwardBlockProviderWithMultipleChunksBlockNotFound(t *testing.T) {
// Mocks 2 chunks
chunk1 := createBitmap(t, []uint64{1000, 1005, 1010})
chunk2 := createBitmap(t, []uint64{1501, 1600})
chunkLocator := newMockForwardChunkLocator([][]byte{chunk1, chunk2})
blockProvider := NewForwardBlockProvider(chunkLocator, 1700)
checkNext(t, blockProvider, 0, false)
}


@ -1,63 +0,0 @@
package commands
func newCallFromToBlockProvider(isBackwards bool, callFromProvider, callToProvider BlockProvider) BlockProvider {
var nextFrom, nextTo uint64
var hasMoreFrom, hasMoreTo bool
initialized := false
return func() (uint64, bool, error) {
if !initialized {
initialized = true
var err error
if nextFrom, hasMoreFrom, err = callFromProvider(); err != nil {
return 0, false, err
}
hasMoreFrom = hasMoreFrom || nextFrom != 0
if nextTo, hasMoreTo, err = callToProvider(); err != nil {
return 0, false, err
}
hasMoreTo = hasMoreTo || nextTo != 0
}
if !hasMoreFrom && !hasMoreTo {
return 0, false, nil
}
var blockNum uint64
if !hasMoreFrom {
blockNum = nextTo
} else if !hasMoreTo {
blockNum = nextFrom
} else {
blockNum = nextFrom
if isBackwards {
if nextTo < nextFrom {
blockNum = nextTo
}
} else {
if nextTo > nextFrom {
blockNum = nextTo
}
}
}
// Pull next; it may be that from AND to contain the same blockNum
if hasMoreFrom && blockNum == nextFrom {
var err error
if nextFrom, hasMoreFrom, err = callFromProvider(); err != nil {
return 0, false, err
}
hasMoreFrom = hasMoreFrom || nextFrom != 0
}
if hasMoreTo && blockNum == nextTo {
var err error
if nextTo, hasMoreTo, err = callToProvider(); err != nil {
return 0, false, err
}
hasMoreTo = hasMoreTo || nextTo != 0
}
return blockNum, hasMoreFrom || hasMoreTo, nil
}
}


@ -1,31 +0,0 @@
package commands
import (
"testing"
"github.com/RoaringBitmap/roaring/roaring64"
)
func createBitmap(t *testing.T, blocks []uint64) []byte {
bm := roaring64.NewBitmap()
bm.AddMany(blocks)
chunk, err := bm.ToBytes()
if err != nil {
t.Fatal(err)
}
return chunk
}
func checkNext(t *testing.T, blockProvider BlockProvider, expectedBlock uint64, expectedHasNext bool) {
bl, hasNext, err := blockProvider()
if err != nil {
t.Fatal(err)
}
if bl != expectedBlock {
t.Fatalf("Expected block %d, received %d", expectedBlock, bl)
}
if expectedHasNext != hasNext {
t.Fatalf("Expected hasNext=%t, received=%t; at block=%d", expectedHasNext, hasNext, expectedBlock)
}
}


@ -1,103 +0,0 @@
package commands
import (
"context"
"sync"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon/cmd/rpcdaemon/commands"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/consensus/ethash"
"github.com/ledgerwatch/erigon/core"
"github.com/ledgerwatch/erigon/core/rawdb"
"github.com/ledgerwatch/erigon/core/state"
"github.com/ledgerwatch/erigon/core/types"
"github.com/ledgerwatch/erigon/core/vm"
"github.com/ledgerwatch/erigon/params"
"github.com/ledgerwatch/erigon/turbo/shards"
"github.com/ledgerwatch/log/v3"
)
func (api *OtterscanAPIImpl) searchTraceBlock(ctx context.Context, wg *sync.WaitGroup, addr common.Address, chainConfig *params.ChainConfig, idx int, bNum uint64, results []*TransactionsWithReceipts) {
defer wg.Done()
// Trace block for Txs
newdbtx, err := api.db.BeginRo(ctx)
if err != nil {
log.Error("Search trace error", "err", err)
results[idx] = nil
return
}
defer newdbtx.Rollback()
_, result, err := api.traceBlock(newdbtx, ctx, bNum, addr, chainConfig)
if err != nil {
log.Error("Search trace error", "err", err)
results[idx] = nil
return
}
results[idx] = result
}
func (api *OtterscanAPIImpl) traceBlock(dbtx kv.Tx, ctx context.Context, blockNum uint64, searchAddr common.Address, chainConfig *params.ChainConfig) (bool, *TransactionsWithReceipts, error) {
rpcTxs := make([]*commands.RPCTransaction, 0)
receipts := make([]map[string]interface{}, 0)
// Retrieve the block and assemble its EVM context
blockHash, err := rawdb.ReadCanonicalHash(dbtx, blockNum)
if err != nil {
return false, nil, err
}
block, senders, err := api._blockReader.BlockWithSenders(ctx, dbtx, blockHash, blockNum)
if err != nil {
return false, nil, err
}
reader := state.NewPlainState(dbtx, blockNum)
stateCache := shards.NewStateCache(32, 0 /* no limit */)
cachedReader := state.NewCachedReader(reader, stateCache)
noop := state.NewNoopWriter()
cachedWriter := state.NewCachedWriter(noop, stateCache)
ibs := state.New(cachedReader)
signer := types.MakeSigner(chainConfig, blockNum)
getHeader := func(hash common.Hash, number uint64) *types.Header {
h, e := api._blockReader.Header(ctx, dbtx, hash, number)
if e != nil {
log.Error("getHeader error", "number", number, "hash", hash, "err", e)
}
return h
}
engine := ethash.NewFaker()
blockReceipts := rawdb.ReadReceipts(dbtx, block, senders)
header := block.Header()
rules := chainConfig.Rules(block.NumberU64())
found := false
for idx, tx := range block.Transactions() {
ibs.Prepare(tx.Hash(), block.Hash(), idx)
msg, _ := tx.AsMessage(*signer, header.BaseFee, rules)
tracer := NewTouchTracer(searchAddr)
BlockContext := core.NewEVMBlockContext(header, core.GetHashFn(header, getHeader), engine, nil)
TxContext := core.NewEVMTxContext(msg)
vmenv := vm.NewEVM(BlockContext, TxContext, ibs, chainConfig, vm.Config{Debug: true, Tracer: tracer})
if _, err := core.ApplyMessage(vmenv, msg, new(core.GasPool).AddGas(tx.GetGas()), true /* refunds */, false /* gasBailout */); err != nil {
return false, nil, err
}
_ = ibs.FinalizeTx(vmenv.ChainConfig().Rules(block.NumberU64()), cachedWriter)
if tracer.Found {
rpcTx := newRPCTransaction(tx, block.Hash(), blockNum, uint64(idx), block.BaseFee())
mReceipt := marshalReceipt(blockReceipts[idx], tx, chainConfig, block, tx.Hash(), true)
mReceipt["timestamp"] = block.Time()
rpcTxs = append(rpcTxs, rpcTx)
receipts = append(receipts, mReceipt)
found = true
}
}
return found, &TransactionsWithReceipts{rpcTxs, receipts, false, false}, nil
}


@ -1,50 +0,0 @@
package commands
import (
"context"
"math/big"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/core/types"
"github.com/ledgerwatch/erigon/core/vm"
)
type CreateTracer struct {
DefaultTracer
ctx context.Context
target common.Address
found bool
Creator common.Address
Tx types.Transaction
}
func NewCreateTracer(ctx context.Context, target common.Address) *CreateTracer {
return &CreateTracer{
ctx: ctx,
target: target,
found: false,
}
}
func (t *CreateTracer) SetTransaction(tx types.Transaction) {
t.Tx = tx
}
func (t *CreateTracer) Found() bool {
return t.found
}
func (t *CreateTracer) CaptureStart(env *vm.EVM, depth int, from common.Address, to common.Address, precompile bool, create bool, calltype vm.CallType, input []byte, gas uint64, value *big.Int, code []byte) {
if t.found {
return
}
if !create {
return
}
if to != t.target {
return
}
t.found = true
t.Creator = from
}


@ -1,60 +0,0 @@
package commands
import (
"context"
"math/big"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/common/hexutil"
"github.com/ledgerwatch/erigon/core/vm"
)
type OperationType int
const (
OP_TRANSFER OperationType = 0
OP_SELF_DESTRUCT OperationType = 1
OP_CREATE OperationType = 2
OP_CREATE2 OperationType = 3
)
type InternalOperation struct {
Type OperationType `json:"type"`
From common.Address `json:"from"`
To common.Address `json:"to"`
Value *hexutil.Big `json:"value"`
}
type OperationsTracer struct {
DefaultTracer
ctx context.Context
Results []*InternalOperation
}
func NewOperationsTracer(ctx context.Context) *OperationsTracer {
return &OperationsTracer{
ctx: ctx,
Results: make([]*InternalOperation, 0),
}
}
func (t *OperationsTracer) CaptureStart(env *vm.EVM, depth int, from common.Address, to common.Address, precompile bool, create bool, calltype vm.CallType, input []byte, gas uint64, value *big.Int, code []byte) {
if depth == 0 {
return
}
if calltype == vm.CALLT && value.Uint64() != 0 {
t.Results = append(t.Results, &InternalOperation{OP_TRANSFER, from, to, (*hexutil.Big)(value)})
return
}
if calltype == vm.CREATET {
t.Results = append(t.Results, &InternalOperation{OP_CREATE, from, to, (*hexutil.Big)(value)})
}
if calltype == vm.CREATE2T {
t.Results = append(t.Results, &InternalOperation{OP_CREATE2, from, to, (*hexutil.Big)(value)})
}
}
func (l *OperationsTracer) CaptureSelfDestruct(from common.Address, to common.Address, value *big.Int) {
l.Results = append(l.Results, &InternalOperation{OP_SELF_DESTRUCT, from, to, (*hexutil.Big)(value)})
}


@ -1,27 +0,0 @@
package commands
import (
"bytes"
"math/big"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/core/vm"
)
type TouchTracer struct {
DefaultTracer
searchAddr common.Address
Found bool
}
func NewTouchTracer(searchAddr common.Address) *TouchTracer {
return &TouchTracer{
searchAddr: searchAddr,
}
}
func (t *TouchTracer) CaptureStart(env *vm.EVM, depth int, from common.Address, to common.Address, precompile bool, create bool, calltype vm.CallType, input []byte, gas uint64, value *big.Int, code []byte) {
if !t.Found && (bytes.Equal(t.searchAddr.Bytes(), from.Bytes()) || bytes.Equal(t.searchAddr.Bytes(), to.Bytes())) {
t.Found = true
}
}


@ -1,87 +0,0 @@
package commands
import (
"context"
"math/big"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/common/hexutil"
"github.com/ledgerwatch/erigon/core/vm"
)
func (api *OtterscanAPIImpl) TraceTransaction(ctx context.Context, hash common.Hash) ([]*TraceEntry, error) {
tx, err := api.db.BeginRo(ctx)
if err != nil {
return nil, err
}
defer tx.Rollback()
tracer := NewTransactionTracer(ctx)
if _, err := api.runTracer(ctx, tx, hash, tracer); err != nil {
return nil, err
}
return tracer.Results, nil
}
type TraceEntry struct {
Type string `json:"type"`
Depth int `json:"depth"`
From common.Address `json:"from"`
To common.Address `json:"to"`
Value *hexutil.Big `json:"value"`
Input hexutil.Bytes `json:"input"`
}
type TransactionTracer struct {
DefaultTracer
ctx context.Context
Results []*TraceEntry
}
func NewTransactionTracer(ctx context.Context) *TransactionTracer {
return &TransactionTracer{
ctx: ctx,
Results: make([]*TraceEntry, 0),
}
}
func (t *TransactionTracer) CaptureStart(env *vm.EVM, depth int, from common.Address, to common.Address, precompile bool, create bool, callType vm.CallType, input []byte, gas uint64, value *big.Int, code []byte) {
if precompile {
return
}
inputCopy := make([]byte, len(input))
copy(inputCopy, input)
_value := new(big.Int)
_value.Set(value)
if callType == vm.CALLT {
t.Results = append(t.Results, &TraceEntry{"CALL", depth, from, to, (*hexutil.Big)(_value), inputCopy})
return
}
if callType == vm.STATICCALLT {
t.Results = append(t.Results, &TraceEntry{"STATICCALL", depth, from, to, nil, inputCopy})
return
}
if callType == vm.DELEGATECALLT {
t.Results = append(t.Results, &TraceEntry{"DELEGATECALL", depth, from, to, nil, inputCopy})
return
}
if callType == vm.CALLCODET {
t.Results = append(t.Results, &TraceEntry{"CALLCODE", depth, from, to, (*hexutil.Big)(_value), inputCopy})
return
}
if callType == vm.CREATET {
t.Results = append(t.Results, &TraceEntry{"CREATE", depth, from, to, (*hexutil.Big)(value), inputCopy})
return
}
if callType == vm.CREATE2T {
t.Results = append(t.Results, &TraceEntry{"CREATE2", depth, from, to, (*hexutil.Big)(value), inputCopy})
return
}
}
func (l *TransactionTracer) CaptureSelfDestruct(from common.Address, to common.Address, value *big.Int) {
last := l.Results[len(l.Results)-1]
l.Results = append(l.Results, &TraceEntry{"SELFDESTRUCT", last.Depth + 1, from, to, (*hexutil.Big)(value), nil})
}


@ -1,163 +0,0 @@
package commands
import (
"bytes"
"context"
"sort"
"github.com/RoaringBitmap/roaring/roaring64"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/common/changeset"
"github.com/ledgerwatch/erigon/core/rawdb"
"github.com/ledgerwatch/erigon/core/types/accounts"
)
func (api *OtterscanAPIImpl) GetTransactionBySenderAndNonce(ctx context.Context, addr common.Address, nonce uint64) (*common.Hash, error) {
tx, err := api.db.BeginRo(ctx)
if err != nil {
return nil, err
}
defer tx.Rollback()
accHistoryC, err := tx.Cursor(kv.AccountsHistory)
if err != nil {
return nil, err
}
defer accHistoryC.Close()
accChangesC, err := tx.CursorDupSort(kv.AccountChangeSet)
if err != nil {
return nil, err
}
defer accChangesC.Close()
// Locate the chunk where the nonce change happens
acs := changeset.Mapper[kv.AccountChangeSet]
k, v, err := accHistoryC.Seek(acs.IndexChunkKey(addr.Bytes(), 0))
if err != nil {
return nil, err
}
bitmap := roaring64.New()
maxBlPrevChunk := uint64(0)
var acc accounts.Account
for {
if k == nil || !bytes.HasPrefix(k, addr.Bytes()) {
// Check plain state
data, err := tx.GetOne(kv.PlainState, addr.Bytes())
if err != nil {
return nil, err
}
if err := acc.DecodeForStorage(data); err != nil {
return nil, err
}
// Nonce changed in plain state, so it means the last block of last chunk
// contains the actual nonce change
if acc.Nonce > nonce {
break
}
// Not found; asked for nonce still not used
return nil, nil
}
// Inspect block changeset
if _, err := bitmap.ReadFrom(bytes.NewReader(v)); err != nil {
return nil, err
}
maxBl := bitmap.Maximum()
data, err := acs.Find(accChangesC, maxBl, addr.Bytes())
if err != nil {
return nil, err
}
if err := acc.DecodeForStorage(data); err != nil {
return nil, err
}
// Desired nonce was found in this chunk
if acc.Nonce > nonce {
break
}
maxBlPrevChunk = maxBl
k, v, err = accHistoryC.Next()
if err != nil {
return nil, err
}
}
// Locate the exact block inside chunk when the nonce changed
blocks := bitmap.ToArray()
var errSearch error
idx := sort.Search(len(blocks), func(i int) bool {
if errSearch != nil {
return false
}
// Locate the block changeset
data, err := acs.Find(accChangesC, blocks[i], addr.Bytes())
if err != nil {
errSearch = err
return false
}
if err := acc.DecodeForStorage(data); err != nil {
errSearch = err
return false
}
// Since the state contains the nonce BEFORE the block changes, we look for
// the block at which the nonce changed to be > the desired one, which means the
// previous history block contains the actual change; it may contain multiple
// nonce changes.
return acc.Nonce > nonce
})
if errSearch != nil {
return nil, errSearch
}
// Since the changeset contains the state BEFORE the change, we inspect
// the block before the one we found; if it is the first block inside the chunk,
// we use the last block from prev chunk
nonceBlock := maxBlPrevChunk
if idx > 0 {
nonceBlock = blocks[idx-1]
}
found, txHash, err := api.findNonce(ctx, tx, addr, nonce, nonceBlock)
if err != nil {
return nil, err
}
if !found {
return nil, nil
}
return &txHash, nil
}
func (api *OtterscanAPIImpl) findNonce(ctx context.Context, tx kv.Tx, addr common.Address, nonce uint64, blockNum uint64) (bool, common.Hash, error) {
hash, err := rawdb.ReadCanonicalHash(tx, blockNum)
if err != nil {
return false, common.Hash{}, err
}
block, senders, err := api._blockReader.BlockWithSenders(ctx, tx, hash, blockNum)
if err != nil {
return false, common.Hash{}, err
}
txs := block.Transactions()
for i, s := range senders {
if s != addr {
continue
}
t := txs[i]
if t.GetNonce() == nonce {
return true, t.Hash(), nil
}
}
return false, common.Hash{}, nil
}


@ -1,23 +0,0 @@
package commands
import (
"context"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/common/hexutil"
)
func (api *OtterscanAPIImpl) GetTransactionError(ctx context.Context, hash common.Hash) (hexutil.Bytes, error) {
tx, err := api.db.BeginRo(ctx)
if err != nil {
return nil, err
}
defer tx.Rollback()
result, err := api.runTracer(ctx, tx, hash, nil)
if err != nil {
return nil, err
}
return result.Revert(), nil
}


@ -1,94 +0,0 @@
package commands
import (
"bytes"
"encoding/binary"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon/common"
)
// ChunkLocator bootstraps a function that locates a series of byte chunks containing
// related block numbers, starting from a specific block number (greater than or equal to it).
type ChunkLocator func(block uint64) (chunkProvider ChunkProvider, ok bool, err error)
// ChunkProvider allows iterating over a set of byte chunks.
//
// If err is not nil, it indicates an error and the other returned values should be
// ignored.
//
// If err is nil and ok is true, the returned chunk should contain the raw chunk data.
//
// If err is nil and ok is false, it indicates that there is no more data. Subsequent calls
// to the same function should return (nil, false, nil).
type ChunkProvider func() (chunk []byte, ok bool, err error)
type BlockProvider func() (nextBlock uint64, hasMore bool, err error)
// Standard key format for call from/to indexes [address + block]
func callIndexKey(addr common.Address, block uint64) []byte {
key := make([]byte, common.AddressLength+8)
copy(key[:common.AddressLength], addr.Bytes())
binary.BigEndian.PutUint64(key[common.AddressLength:], block)
return key
}
const MaxBlockNum = ^uint64(0)
// This ChunkLocator searches over a cursor with a key format of [common.Address, block uint64],
// where block is the first block number contained in the chunk value.
//
// It positions the cursor on the chunk that contains the first block >= minBlock.
func newCallChunkLocator(cursor kv.Cursor, addr common.Address, navigateForward bool) ChunkLocator {
return func(minBlock uint64) (ChunkProvider, bool, error) {
searchKey := callIndexKey(addr, minBlock)
k, _, err := cursor.Seek(searchKey)
if err != nil {
return nil, false, err
}
if k == nil {
return nil, false, nil
}
return newCallChunkProvider(cursor, addr, navigateForward), true, nil
}
}
// This ChunkProvider is built by newCallChunkLocator and advances the cursor (forward
// or backward, depending on navigateForward) until there are no more chunks for the desired addr.
func newCallChunkProvider(cursor kv.Cursor, addr common.Address, navigateForward bool) ChunkProvider {
first := true
var err error
// TODO: is this flag really used?
eof := false
return func() ([]byte, bool, error) {
if err != nil {
return nil, false, err
}
if eof {
return nil, false, nil
}
var k, v []byte
if first {
first = false
k, v, err = cursor.Current()
} else {
if navigateForward {
k, v, err = cursor.Next()
} else {
k, v, err = cursor.Prev()
}
}
if err != nil {
eof = true
return nil, false, err
}
if !bytes.HasPrefix(k, addr.Bytes()) {
eof = true
return nil, false, nil
}
return v, true, nil
}
}


@ -1,295 +0,0 @@
package commands
import (
"bytes"
"context"
"math/big"
"sync"
"time"
lru "github.com/hashicorp/golang-lru"
"github.com/holiman/uint256"
"github.com/ledgerwatch/erigon-lib/kv"
libstate "github.com/ledgerwatch/erigon-lib/state"
"github.com/ledgerwatch/log/v3"
"github.com/ledgerwatch/erigon-lib/kv/kvcache"
"github.com/ledgerwatch/erigon/cmd/rpcdaemon/commands"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/common/hexutil"
"github.com/ledgerwatch/erigon/common/math"
"github.com/ledgerwatch/erigon/consensus/misc"
"github.com/ledgerwatch/erigon/core/rawdb"
"github.com/ledgerwatch/erigon/core/types"
"github.com/ledgerwatch/erigon/params"
"github.com/ledgerwatch/erigon/rpc"
"github.com/ledgerwatch/erigon/turbo/rpchelper"
"github.com/ledgerwatch/erigon/turbo/services"
)
type BaseAPIUtils struct {
*commands.BaseAPI
stateCache kvcache.Cache // thread-safe
blocksLRU *lru.Cache // thread-safe
filters *rpchelper.Filters
_chainConfig *params.ChainConfig
_genesis *types.Block
_genesisLock sync.RWMutex
_historyV3 *bool
_historyV3Lock sync.RWMutex
_blockReader services.FullBlockReader
_txnReader services.TxnReader
_agg *libstate.Aggregator22
evmCallTimeout time.Duration
}
func NewBaseUtilsApi(f *rpchelper.Filters, stateCache kvcache.Cache, blockReader services.FullBlockReader, agg *libstate.Aggregator22, singleNodeMode bool, evmCallTimeout time.Duration) *BaseAPIUtils {
blocksLRUSize := 128 // ~32Mb
if !singleNodeMode {
blocksLRUSize = 512
}
blocksLRU, err := lru.New(blocksLRUSize)
if err != nil {
panic(err)
}
return &BaseAPIUtils{filters: f, stateCache: stateCache, blocksLRU: blocksLRU, _blockReader: blockReader, _txnReader: blockReader, _agg: agg, evmCallTimeout: evmCallTimeout}
}
func (api *BaseAPIUtils) chainConfig(tx kv.Tx) (*params.ChainConfig, error) {
cfg, _, err := api.chainConfigWithGenesis(tx)
return cfg, err
}
// nolint:unused
func (api *BaseAPIUtils) genesis(tx kv.Tx) (*types.Block, error) {
_, genesis, err := api.chainConfigWithGenesis(tx)
return genesis, err
}
func (api *BaseAPIUtils) txnLookup(ctx context.Context, tx kv.Tx, txnHash common.Hash) (uint64, bool, error) {
return api._txnReader.TxnLookup(ctx, tx, txnHash)
}
func (api *BaseAPIUtils) blockByNumberWithSenders(tx kv.Tx, number uint64) (*types.Block, error) {
hash, hashErr := rawdb.ReadCanonicalHash(tx, number)
if hashErr != nil {
return nil, hashErr
}
return api.blockWithSenders(tx, hash, number)
}
func (api *BaseAPIUtils) blockByHashWithSenders(tx kv.Tx, hash common.Hash) (*types.Block, error) {
if api.blocksLRU != nil {
if it, ok := api.blocksLRU.Get(hash); ok && it != nil {
return it.(*types.Block), nil
}
}
number := rawdb.ReadHeaderNumber(tx, hash)
if number == nil {
return nil, nil
}
return api.blockWithSenders(tx, hash, *number)
}
func (api *BaseAPIUtils) blockWithSenders(tx kv.Tx, hash common.Hash, number uint64) (*types.Block, error) {
if api.blocksLRU != nil {
if it, ok := api.blocksLRU.Get(hash); ok && it != nil {
return it.(*types.Block), nil
}
}
block, _, err := api._blockReader.BlockWithSenders(context.Background(), tx, hash, number)
if err != nil {
return nil, err
}
if block == nil { // don't save nil's to cache
return nil, nil
}
// don't save empty blocks to cache, because in Erigon,
// if a block becomes non-canonical, we remove its transactions, but it can become canonical again in the future
if block.Transactions().Len() == 0 {
return block, nil
}
if api.blocksLRU != nil {
// calc fields before put to cache
for _, txn := range block.Transactions() {
txn.Hash()
}
block.Hash()
api.blocksLRU.Add(hash, block)
}
return block, nil
}
func (api *BaseAPIUtils) historyV3(tx kv.Tx) bool {
api._historyV3Lock.RLock()
historyV3 := api._historyV3
api._historyV3Lock.RUnlock()
if historyV3 != nil {
return *historyV3
}
enabled, err := rawdb.HistoryV3.Enabled(tx)
if err != nil {
log.Warn("HistoryV3Enabled: read", "err", err)
return false
}
api._historyV3Lock.Lock()
api._historyV3 = &enabled
api._historyV3Lock.Unlock()
return enabled
}
func (api *BaseAPIUtils) chainConfigWithGenesis(tx kv.Tx) (*params.ChainConfig, *types.Block, error) {
api._genesisLock.RLock()
cc, genesisBlock := api._chainConfig, api._genesis
api._genesisLock.RUnlock()
if cc != nil {
return cc, genesisBlock, nil
}
genesisBlock, err := rawdb.ReadBlockByNumber(tx, 0)
if err != nil {
return nil, nil, err
}
cc, err = rawdb.ReadChainConfig(tx, genesisBlock.Hash())
if err != nil {
return nil, nil, err
}
if cc != nil && genesisBlock != nil {
api._genesisLock.Lock()
api._genesis = genesisBlock
api._chainConfig = cc
api._genesisLock.Unlock()
}
return cc, genesisBlock, nil
}
func (api *BaseAPIUtils) pendingBlock() *types.Block {
return api.filters.LastPendingBlock()
}
func (api *BaseAPIUtils) blockByRPCNumber(number rpc.BlockNumber, tx kv.Tx) (*types.Block, error) {
n, _, _, err := rpchelper.GetBlockNumber(rpc.BlockNumberOrHashWithNumber(number), tx, api.filters)
if err != nil {
return nil, err
}
block, err := api.blockByNumberWithSenders(tx, n)
return block, err
}
func (api *BaseAPIUtils) headerByRPCNumber(number rpc.BlockNumber, tx kv.Tx) (*types.Header, error) {
n, h, _, err := rpchelper.GetBlockNumber(rpc.BlockNumberOrHashWithNumber(number), tx, api.filters)
if err != nil {
return nil, err
}
return api._blockReader.Header(context.Background(), tx, h, n)
}
// newRPCTransaction returns a transaction that will serialize to the RPC
// representation, with the given location metadata set (if available).
func newRPCTransaction(tx types.Transaction, blockHash common.Hash, blockNumber uint64, index uint64, baseFee *big.Int) *commands.RPCTransaction {
// Determine the signer. For replay-protected transactions, use the most permissive
// signer, because we assume that signers are backwards-compatible with old
// transactions. For non-protected transactions, the homestead signer is used
// because the return value of ChainId is zero for those transactions.
var chainId *big.Int
result := &commands.RPCTransaction{
Type: hexutil.Uint64(tx.Type()),
Gas: hexutil.Uint64(tx.GetGas()),
Hash: tx.Hash(),
Input: hexutil.Bytes(tx.GetData()),
Nonce: hexutil.Uint64(tx.GetNonce()),
To: tx.GetTo(),
Value: (*hexutil.Big)(tx.GetValue().ToBig()),
}
switch t := tx.(type) {
case *types.LegacyTx:
chainId = types.DeriveChainId(&t.V).ToBig()
result.GasPrice = (*hexutil.Big)(t.GasPrice.ToBig())
result.V = (*hexutil.Big)(t.V.ToBig())
result.R = (*hexutil.Big)(t.R.ToBig())
result.S = (*hexutil.Big)(t.S.ToBig())
case *types.AccessListTx:
chainId = t.ChainID.ToBig()
result.ChainID = (*hexutil.Big)(chainId)
result.GasPrice = (*hexutil.Big)(t.GasPrice.ToBig())
result.V = (*hexutil.Big)(t.V.ToBig())
result.R = (*hexutil.Big)(t.R.ToBig())
result.S = (*hexutil.Big)(t.S.ToBig())
result.Accesses = &t.AccessList
case *types.DynamicFeeTransaction:
chainId = t.ChainID.ToBig()
result.ChainID = (*hexutil.Big)(chainId)
result.Tip = (*hexutil.Big)(t.Tip.ToBig())
result.FeeCap = (*hexutil.Big)(t.FeeCap.ToBig())
result.V = (*hexutil.Big)(t.V.ToBig())
result.R = (*hexutil.Big)(t.R.ToBig())
result.S = (*hexutil.Big)(t.S.ToBig())
result.Accesses = &t.AccessList
baseFee, overflow := uint256.FromBig(baseFee)
if baseFee != nil && !overflow && blockHash != (common.Hash{}) {
// price = min(tip + baseFee, gasFeeCap)
price := math.Min256(new(uint256.Int).Add(tx.GetTip(), baseFee), tx.GetFeeCap())
result.GasPrice = (*hexutil.Big)(price.ToBig())
} else {
result.GasPrice = nil
}
}
signer := types.LatestSignerForChainID(chainId)
result.From, _ = tx.Sender(*signer)
if blockHash != (common.Hash{}) {
result.BlockHash = &blockHash
result.BlockNumber = (*hexutil.Big)(new(big.Int).SetUint64(blockNumber))
result.TransactionIndex = (*hexutil.Uint64)(&index)
}
return result
}
// newRPCBorTransaction returns a Bor transaction that will serialize to the RPC
// representation, with the given location metadata set (if available).
func newRPCBorTransaction(opaqueTx types.Transaction, txHash common.Hash, blockHash common.Hash, blockNumber uint64, index uint64, baseFee *big.Int) *commands.RPCTransaction {
tx := opaqueTx.(*types.LegacyTx)
result := &commands.RPCTransaction{
Type: hexutil.Uint64(tx.Type()),
ChainID: (*hexutil.Big)(new(big.Int)),
GasPrice: (*hexutil.Big)(tx.GasPrice.ToBig()),
Gas: hexutil.Uint64(tx.GetGas()),
Hash: txHash,
Input: hexutil.Bytes(tx.GetData()),
Nonce: hexutil.Uint64(tx.GetNonce()),
From: common.Address{},
To: tx.GetTo(),
Value: (*hexutil.Big)(tx.GetValue().ToBig()),
}
if blockHash != (common.Hash{}) {
result.BlockHash = &blockHash
result.BlockNumber = (*hexutil.Big)(new(big.Int).SetUint64(blockNumber))
result.TransactionIndex = (*hexutil.Uint64)(&index)
}
return result
}
// newRPCPendingTransaction returns a pending transaction that will serialize to the RPC representation
func newRPCPendingTransaction(tx types.Transaction, current *types.Header, config *params.ChainConfig) *commands.RPCTransaction {
var baseFee *big.Int
if current != nil {
baseFee = misc.CalcBaseFee(config, current)
}
return newRPCTransaction(tx, common.Hash{}, 0, 0, baseFee)
}
// newRPCRawTransactionFromBlockIndex returns the bytes of a transaction given a block and a transaction index.
func newRPCRawTransactionFromBlockIndex(b *types.Block, index uint64) (hexutil.Bytes, error) {
txs := b.Transactions()
if index >= uint64(len(txs)) {
return nil, nil
}
var buf bytes.Buffer
err := txs[index].MarshalBinary(&buf)
return buf.Bytes(), err
}
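The dynamic-fee branch above computes the effective gas price as `min(tip + baseFee, gasFeeCap)`. A minimal sketch of that arithmetic with plain uint64 values instead of uint256, purely for illustration:

```go
package main

import "fmt"

// effectiveGasPrice mirrors the price = min(tip + baseFee, gasFeeCap)
// computation for an EIP-1559 transaction.
func effectiveGasPrice(tip, baseFee, feeCap uint64) uint64 {
	price := tip + baseFee
	if feeCap < price {
		price = feeCap
	}
	return price
}

func main() {
	// Cap not reached: the sender pays baseFee plus the full tip.
	fmt.Println(effectiveGasPrice(2, 10, 20))
	// Cap reached: the sender pays at most the fee cap.
	fmt.Println(effectiveGasPrice(2, 19, 20))
}
```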


@ -1,66 +0,0 @@
package commands
import (
"context"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/common/hexutil"
"github.com/ledgerwatch/erigon/core/types"
"github.com/ledgerwatch/erigon/params"
"github.com/ledgerwatch/erigon/rpc"
"github.com/ledgerwatch/erigon/turbo/rpchelper"
)
// headerByNumberOrHash - intended for reading recent headers only
func headerByNumberOrHash(ctx context.Context, tx kv.Tx, blockNrOrHash rpc.BlockNumberOrHash, api *BaseAPIUtils) (*types.Header, error) {
blockNum, _, _, err := rpchelper.GetBlockNumber(blockNrOrHash, tx, api.filters)
if err != nil {
return nil, err
}
header, err := api._blockReader.HeaderByNumber(ctx, tx, blockNum)
if err != nil {
return nil, err
}
// header can be nil
return header, nil
}
// accessListResult returns an optional accesslist
// It's the result of the `eth_createAccessList` RPC call.
// It contains an error if the transaction itself failed.
type accessListResult struct {
Accesslist *types.AccessList `json:"accessList"`
Error string `json:"error,omitempty"`
GasUsed hexutil.Uint64 `json:"gasUsed"`
}
// The to address is already warm, so adding it to the access list only saves gas
// if enough of its storage slots are added as well
func optimizeToInAccessList(accessList *accessListResult, to common.Address) {
indexToRemove := -1
for i := 0; i < len(*accessList.Accesslist); i++ {
entry := (*accessList.Accesslist)[i]
if entry.Address != to {
continue
}
// https://eips.ethereum.org/EIPS/eip-2930#charging-less-for-accesses-in-the-access-list
accessListSavingPerSlot := params.ColdSloadCostEIP2929 - params.WarmStorageReadCostEIP2929 - params.TxAccessListStorageKeyGas
numSlots := uint64(len(entry.StorageKeys))
if numSlots*accessListSavingPerSlot <= params.TxAccessListAddressGas {
indexToRemove = i
}
}
if indexToRemove >= 0 {
*accessList.Accesslist = removeIndex(*accessList.Accesslist, indexToRemove)
}
}
func removeIndex(s types.AccessList, index int) types.AccessList {
return append(s[:index], s[index+1:]...)
}

View File

@ -1,112 +0,0 @@
package commands
import (
"context"
"math/big"
"github.com/holiman/uint256"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/common/hexutil"
"github.com/ledgerwatch/erigon/consensus/ethash"
"github.com/ledgerwatch/erigon/core"
"github.com/ledgerwatch/erigon/core/rawdb"
"github.com/ledgerwatch/erigon/core/state"
"github.com/ledgerwatch/erigon/core/types"
"github.com/ledgerwatch/erigon/core/vm"
"github.com/ledgerwatch/erigon/params"
"github.com/ledgerwatch/erigon/turbo/transactions"
"github.com/ledgerwatch/log/v3"
)
func (api *BaseAPIUtils) getReceipts(ctx context.Context, tx kv.Tx, chainConfig *params.ChainConfig, block *types.Block, senders []common.Address) (types.Receipts, error) {
if cached := rawdb.ReadReceipts(tx, block, senders); cached != nil {
return cached, nil
}
getHeader := func(hash common.Hash, number uint64) *types.Header {
h, e := api._blockReader.Header(ctx, tx, hash, number)
if e != nil {
log.Error("getHeader error", "number", number, "hash", hash, "err", e)
}
return h
}
_, _, _, ibs, _, err := transactions.ComputeTxEnv(ctx, block, chainConfig, getHeader, ethash.NewFaker(), tx, block.Hash(), 0)
if err != nil {
return nil, err
}
usedGas := new(uint64)
gp := new(core.GasPool).AddGas(block.GasLimit())
ethashFaker := ethash.NewFaker()
noopWriter := state.NewNoopWriter()
receipts := make(types.Receipts, len(block.Transactions()))
for i, txn := range block.Transactions() {
ibs.Prepare(txn.Hash(), block.Hash(), i)
header := block.Header()
receipt, _, err := core.ApplyTransaction(chainConfig, core.GetHashFn(header, getHeader), ethashFaker, nil, gp, ibs, noopWriter, header, txn, usedGas, vm.Config{})
if err != nil {
return nil, err
}
receipt.BlockHash = block.Hash()
receipts[i] = receipt
}
return receipts, nil
}
func marshalReceipt(receipt *types.Receipt, txn types.Transaction, chainConfig *params.ChainConfig, block *types.Block, txnHash common.Hash, signed bool) map[string]interface{} {
var chainId *big.Int
switch t := txn.(type) {
case *types.LegacyTx:
if t.Protected() {
chainId = types.DeriveChainId(&t.V).ToBig()
}
case *types.AccessListTx:
chainId = t.ChainID.ToBig()
case *types.DynamicFeeTransaction:
chainId = t.ChainID.ToBig()
}
var from common.Address
if signed {
signer := types.LatestSignerForChainID(chainId)
from, _ = txn.Sender(*signer)
}
fields := map[string]interface{}{
"blockHash": receipt.BlockHash,
"blockNumber": hexutil.Uint64(receipt.BlockNumber.Uint64()),
"transactionHash": txnHash,
"transactionIndex": hexutil.Uint64(receipt.TransactionIndex),
"from": from,
"to": txn.GetTo(),
"type": hexutil.Uint(txn.Type()),
"gasUsed": hexutil.Uint64(receipt.GasUsed),
"cumulativeGasUsed": hexutil.Uint64(receipt.CumulativeGasUsed),
"contractAddress": nil,
"logs": receipt.Logs,
"logsBloom": types.CreateBloom(types.Receipts{receipt}),
}
if !chainConfig.IsLondon(block.NumberU64()) {
fields["effectiveGasPrice"] = hexutil.Uint64(txn.GetPrice().Uint64())
} else {
baseFee, _ := uint256.FromBig(block.BaseFee())
gasPrice := new(big.Int).Add(block.BaseFee(), txn.GetEffectiveGasTip(baseFee).ToBig())
fields["effectiveGasPrice"] = hexutil.Uint64(gasPrice.Uint64())
}
// Assign receipt status.
fields["status"] = hexutil.Uint64(receipt.Status)
if receipt.Logs == nil {
fields["logs"] = [][]*types.Log{}
}
// If the ContractAddress is 20 0x0 bytes, assume it is not a contract creation
if receipt.ContractAddress != (common.Address{}) {
fields["contractAddress"] = receipt.ContractAddress
}
return fields
}

View File

@ -1,42 +0,0 @@
package commands
import (
"fmt"
"github.com/holiman/uint256"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/core/state"
)
// StorageRangeResult is the result of a debug_storageRangeAt API call.
type StorageRangeResult struct {
Storage StorageMap `json:"storage"`
NextKey *common.Hash `json:"nextKey"` // nil if Storage includes the last key in the trie.
}
// StorageMap a map from storage locations to StorageEntry items
type StorageMap map[common.Hash]StorageEntry
// StorageEntry an entry in storage of the account
type StorageEntry struct {
Key *common.Hash `json:"key"`
Value common.Hash `json:"value"`
}
func StorageRangeAt(stateReader *state.PlainState, contractAddress common.Address, start []byte, maxResult int) (StorageRangeResult, error) {
result := StorageRangeResult{Storage: StorageMap{}}
resultCount := 0
if err := stateReader.ForEachStorage(contractAddress, common.BytesToHash(start), func(key, seckey common.Hash, value uint256.Int) bool {
if resultCount < maxResult {
result.Storage[seckey] = StorageEntry{Key: &key, Value: value.Bytes32()}
} else {
result.NextKey = &key
}
resultCount++
return resultCount <= maxResult
}, maxResult+1); err != nil {
return StorageRangeResult{}, fmt.Errorf("error walking over storage: %w", err)
}
return result, nil
}

View File

@ -1,23 +0,0 @@
package health
import (
"context"
"fmt"
"github.com/ledgerwatch/erigon/rpc"
)
func checkBlockNumber(blockNumber rpc.BlockNumber, api EthAPI) error {
if api == nil {
return fmt.Errorf("no connection to the Erigon server or `eth` namespace isn't enabled")
}
data, err := api.GetBlockByNumber(context.TODO(), blockNumber, false)
if err != nil {
return err
}
if len(data) == 0 { // block not found
return fmt.Errorf("no known block with number %v (%x hex)", blockNumber, blockNumber)
}
return nil
}

View File

@ -1,28 +0,0 @@
package health
import (
"context"
"errors"
"fmt"
)
var (
errNotEnoughPeers = errors.New("not enough peers")
)
func checkMinPeers(minPeerCount uint, api NetAPI) error {
if api == nil {
return fmt.Errorf("no connection to the Erigon server or `net` namespace isn't enabled")
}
peerCount, err := api.PeerCount(context.TODO())
if err != nil {
return err
}
if uint64(peerCount) < uint64(minPeerCount) {
return fmt.Errorf("%w: %d (minimum %d)", errNotEnoughPeers, peerCount, minPeerCount)
}
return nil
}

View File

@ -1,25 +0,0 @@
package health
import (
"errors"
"net/http"
"github.com/ledgerwatch/log/v3"
)
var (
errNotSynced = errors.New("not synced")
)
func checkSynced(ethAPI EthAPI, r *http.Request) error {
i, err := ethAPI.Syncing(r.Context())
if err != nil {
log.Root().Warn("unable to process synced request", "err", err.Error())
return err
}
if i == nil || i == false {
return nil
}
return errNotSynced
}

View File

@ -1,35 +0,0 @@
package health
import (
"errors"
"fmt"
"net/http"
"github.com/ledgerwatch/erigon/rpc"
)
var (
errTimestampTooOld = errors.New("timestamp too old")
)
func checkTime(
r *http.Request,
seconds int,
ethAPI EthAPI,
) error {
i, err := ethAPI.GetBlockByNumber(r.Context(), rpc.LatestBlockNumber, false)
if err != nil {
return err
}
timestamp := 0
if ts, ok := i["timestamp"]; ok {
switch cs := ts.(type) {
case uint64:
timestamp = int(cs)
case int64:
timestamp = int(cs)
}
}
if timestamp < seconds {
return fmt.Errorf("%w: got ts: %d, need: %d", errTimestampTooOld, timestamp, seconds)
}
return nil
}

View File

@ -1,227 +0,0 @@
package health
import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/ledgerwatch/log/v3"
"github.com/ledgerwatch/erigon/rpc"
)
type requestBody struct {
MinPeerCount *uint `json:"min_peer_count"`
BlockNumber *rpc.BlockNumber `json:"known_block"`
}
const (
urlPath = "/health"
healthHeader = "X-ERIGON-HEALTHCHECK"
synced = "synced"
minPeerCount = "min_peer_count"
checkBlock = "check_block"
maxSecondsBehind = "max_seconds_behind"
)
var (
errCheckDisabled = errors.New("error check disabled")
errBadHeaderValue = errors.New("bad header value")
)
func ProcessHealthcheckIfNeeded(
w http.ResponseWriter,
r *http.Request,
rpcAPI []rpc.API,
) bool {
if !strings.EqualFold(r.URL.Path, urlPath) {
return false
}
netAPI, ethAPI := parseAPI(rpcAPI)
headers := r.Header.Values(healthHeader)
if len(headers) != 0 {
processFromHeaders(headers, ethAPI, netAPI, w, r)
} else {
processFromBody(w, r, netAPI, ethAPI)
}
return true
}
func processFromHeaders(headers []string, ethAPI EthAPI, netAPI NetAPI, w http.ResponseWriter, r *http.Request) {
var (
errCheckSynced = errCheckDisabled
errCheckPeer = errCheckDisabled
errCheckBlock = errCheckDisabled
errCheckSeconds = errCheckDisabled
)
for _, header := range headers {
lHeader := strings.ToLower(header)
if lHeader == synced {
errCheckSynced = checkSynced(ethAPI, r)
}
if strings.HasPrefix(lHeader, minPeerCount) {
peers, err := strconv.Atoi(strings.TrimPrefix(lHeader, minPeerCount))
if err != nil {
errCheckPeer = err
break
}
errCheckPeer = checkMinPeers(uint(peers), netAPI)
}
if strings.HasPrefix(lHeader, checkBlock) {
block, err := strconv.Atoi(strings.TrimPrefix(lHeader, checkBlock))
if err != nil {
errCheckBlock = err
break
}
errCheckBlock = checkBlockNumber(rpc.BlockNumber(block), ethAPI)
}
if strings.HasPrefix(lHeader, maxSecondsBehind) {
seconds, err := strconv.Atoi(strings.TrimPrefix(lHeader, maxSecondsBehind))
if err != nil {
errCheckSeconds = err
break
}
if seconds < 0 {
errCheckSeconds = errBadHeaderValue
break
}
now := time.Now().Unix()
errCheckSeconds = checkTime(r, int(now)-seconds, ethAPI)
}
}
reportHealthFromHeaders(errCheckSynced, errCheckPeer, errCheckBlock, errCheckSeconds, w)
}
func processFromBody(w http.ResponseWriter, r *http.Request, netAPI NetAPI, ethAPI EthAPI) {
body, errParse := parseHealthCheckBody(r.Body)
defer r.Body.Close()
var errMinPeerCount = errCheckDisabled
var errCheckBlock = errCheckDisabled
if errParse != nil {
log.Root().Warn("unable to process healthcheck request", "err", errParse)
} else {
// 1. net_peerCount
if body.MinPeerCount != nil {
errMinPeerCount = checkMinPeers(*body.MinPeerCount, netAPI)
}
// 2. custom query (shouldn't fail)
if body.BlockNumber != nil {
errCheckBlock = checkBlockNumber(*body.BlockNumber, ethAPI)
}
// TODO add time from the last sync cycle
}
err := reportHealthFromBody(errParse, errMinPeerCount, errCheckBlock, w)
if err != nil {
log.Root().Warn("unable to process healthcheck request", "err", err)
}
}
func parseHealthCheckBody(reader io.Reader) (requestBody, error) {
var body requestBody
bodyBytes, err := io.ReadAll(reader)
if err != nil {
return body, err
}
err = json.Unmarshal(bodyBytes, &body)
if err != nil {
return body, err
}
return body, nil
}
func reportHealthFromBody(errParse, errMinPeerCount, errCheckBlock error, w http.ResponseWriter) error {
statusCode := http.StatusOK
errs := make(map[string]string)
if shouldChangeStatusCode(errParse) {
statusCode = http.StatusInternalServerError
}
errs["healthcheck_query"] = errorStringOrOK(errParse)
if shouldChangeStatusCode(errMinPeerCount) {
statusCode = http.StatusInternalServerError
}
errs["min_peer_count"] = errorStringOrOK(errMinPeerCount)
if shouldChangeStatusCode(errCheckBlock) {
statusCode = http.StatusInternalServerError
}
errs["check_block"] = errorStringOrOK(errCheckBlock)
return writeResponse(w, errs, statusCode)
}
}
func reportHealthFromHeaders(errCheckSynced, errCheckPeer, errCheckBlock, errCheckSeconds error, w http.ResponseWriter) error {
statusCode := http.StatusOK
errs := make(map[string]string)
if shouldChangeStatusCode(errCheckSynced) {
statusCode = http.StatusInternalServerError
}
errs[synced] = errorStringOrOK(errCheckSynced)
if shouldChangeStatusCode(errCheckPeer) {
statusCode = http.StatusInternalServerError
}
errs[minPeerCount] = errorStringOrOK(errCheckPeer)
if shouldChangeStatusCode(errCheckBlock) {
statusCode = http.StatusInternalServerError
}
errs[checkBlock] = errorStringOrOK(errCheckBlock)
if shouldChangeStatusCode(errCheckSeconds) {
statusCode = http.StatusInternalServerError
}
errs[maxSecondsBehind] = errorStringOrOK(errCheckSeconds)
return writeResponse(w, errs, statusCode)
}
func writeResponse(w http.ResponseWriter, errs map[string]string, statusCode int) error {
w.WriteHeader(statusCode)
bodyJson, err := json.Marshal(errs)
if err != nil {
return err
}
_, err = w.Write(bodyJson)
if err != nil {
return err
}
return nil
}
func shouldChangeStatusCode(err error) bool {
return err != nil && !errors.Is(err, errCheckDisabled)
}
func errorStringOrOK(err error) string {
if err == nil {
return "HEALTHY"
}
if errors.Is(err, errCheckDisabled) {
return "DISABLED"
}
return fmt.Sprintf("ERROR: %v", err)
}

View File

@ -1,562 +0,0 @@
package health
import (
"context"
"encoding/json"
"errors"
"io"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
"github.com/ledgerwatch/erigon/common/hexutil"
"github.com/ledgerwatch/erigon/rpc"
)
type netApiStub struct {
response hexutil.Uint
error error
}
func (n *netApiStub) PeerCount(_ context.Context) (hexutil.Uint, error) {
return n.response, n.error
}
type ethApiStub struct {
blockResult map[string]interface{}
blockError error
syncingResult interface{}
syncingError error
}
func (e *ethApiStub) GetBlockByNumber(_ context.Context, _ rpc.BlockNumber, _ bool) (map[string]interface{}, error) {
return e.blockResult, e.blockError
}
func (e *ethApiStub) Syncing(_ context.Context) (interface{}, error) {
return e.syncingResult, e.syncingError
}
func TestProcessHealthcheckIfNeeded_HeadersTests(t *testing.T) {
cases := []struct {
headers []string
netApiResponse hexutil.Uint
netApiError error
ethApiBlockResult map[string]interface{}
ethApiBlockError error
ethApiSyncingResult interface{}
ethApiSyncingError error
expectedStatusCode int
expectedBody map[string]string
}{
// 0 - sync check enabled - syncing
{
headers: []string{"synced"},
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: make(map[string]interface{}),
ethApiBlockError: nil,
ethApiSyncingResult: false,
ethApiSyncingError: nil,
expectedStatusCode: http.StatusOK,
expectedBody: map[string]string{
synced: "HEALTHY",
minPeerCount: "DISABLED",
checkBlock: "DISABLED",
maxSecondsBehind: "DISABLED",
},
},
// 1 - sync check enabled - not syncing
{
headers: []string{"synced"},
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: make(map[string]interface{}),
ethApiBlockError: nil,
ethApiSyncingResult: struct{}{},
ethApiSyncingError: nil,
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
synced: "ERROR: not synced",
minPeerCount: "DISABLED",
checkBlock: "DISABLED",
maxSecondsBehind: "DISABLED",
},
},
// 2 - sync check enabled - error checking sync
{
headers: []string{"synced"},
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: make(map[string]interface{}),
ethApiBlockError: nil,
ethApiSyncingResult: struct{}{},
ethApiSyncingError: errors.New("problem checking sync"),
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
synced: "ERROR: problem checking sync",
minPeerCount: "DISABLED",
checkBlock: "DISABLED",
maxSecondsBehind: "DISABLED",
},
},
// 3 - peer count enabled - good request
{
headers: []string{"min_peer_count1"},
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: make(map[string]interface{}),
ethApiBlockError: nil,
ethApiSyncingResult: false,
ethApiSyncingError: nil,
expectedStatusCode: http.StatusOK,
expectedBody: map[string]string{
synced: "DISABLED",
minPeerCount: "HEALTHY",
checkBlock: "DISABLED",
maxSecondsBehind: "DISABLED",
},
},
// 4 - peer count enabled - not enough peers
{
headers: []string{"min_peer_count10"},
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: make(map[string]interface{}),
ethApiBlockError: nil,
ethApiSyncingResult: false,
ethApiSyncingError: nil,
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
synced: "DISABLED",
minPeerCount: "ERROR: not enough peers: 1 (minimum 10)",
checkBlock: "DISABLED",
maxSecondsBehind: "DISABLED",
},
},
// 5 - peer count enabled - error checking peers
{
headers: []string{"min_peer_count10"},
netApiResponse: hexutil.Uint(1),
netApiError: errors.New("problem checking peers"),
ethApiBlockResult: make(map[string]interface{}),
ethApiBlockError: nil,
ethApiSyncingResult: false,
ethApiSyncingError: nil,
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
synced: "DISABLED",
minPeerCount: "ERROR: problem checking peers",
checkBlock: "DISABLED",
maxSecondsBehind: "DISABLED",
},
},
// 6 - peer count enabled - badly formed request
{
headers: []string{"min_peer_countABC"},
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: make(map[string]interface{}),
ethApiBlockError: nil,
ethApiSyncingResult: false,
ethApiSyncingError: nil,
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
synced: "DISABLED",
minPeerCount: "ERROR: strconv.Atoi: parsing \"abc\": invalid syntax",
checkBlock: "DISABLED",
maxSecondsBehind: "DISABLED",
},
},
// 7 - block check - all ok
{
headers: []string{"check_block10"},
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: map[string]interface{}{"test": struct{}{}},
ethApiBlockError: nil,
ethApiSyncingResult: false,
ethApiSyncingError: nil,
expectedStatusCode: http.StatusOK,
expectedBody: map[string]string{
synced: "DISABLED",
minPeerCount: "DISABLED",
checkBlock: "HEALTHY",
maxSecondsBehind: "DISABLED",
},
},
// 8 - block check - no block found
{
headers: []string{"check_block10"},
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: map[string]interface{}{},
ethApiBlockError: nil,
ethApiSyncingResult: false,
ethApiSyncingError: nil,
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
synced: "DISABLED",
minPeerCount: "DISABLED",
checkBlock: "ERROR: no known block with number 10 (a hex)",
maxSecondsBehind: "DISABLED",
},
},
// 9 - block check - error checking block
{
headers: []string{"check_block10"},
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: map[string]interface{}{},
ethApiBlockError: errors.New("problem checking block"),
ethApiSyncingResult: false,
ethApiSyncingError: nil,
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
synced: "DISABLED",
minPeerCount: "DISABLED",
checkBlock: "ERROR: problem checking block",
maxSecondsBehind: "DISABLED",
},
},
// 10 - block check - badly formed request
{
headers: []string{"check_blockABC"},
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: map[string]interface{}{},
ethApiBlockError: nil,
ethApiSyncingResult: false,
ethApiSyncingError: nil,
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
synced: "DISABLED",
minPeerCount: "DISABLED",
checkBlock: "ERROR: strconv.Atoi: parsing \"abc\": invalid syntax",
maxSecondsBehind: "DISABLED",
},
},
// 11 - seconds check - all ok
{
headers: []string{"max_seconds_behind60"},
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: map[string]interface{}{
"timestamp": time.Now().Add(1 * time.Second).Unix(),
},
ethApiBlockError: nil,
ethApiSyncingResult: false,
ethApiSyncingError: nil,
expectedStatusCode: http.StatusOK,
expectedBody: map[string]string{
synced: "DISABLED",
minPeerCount: "DISABLED",
checkBlock: "DISABLED",
maxSecondsBehind: "HEALTHY",
},
},
// 12 - seconds check - too old
{
headers: []string{"max_seconds_behind60"},
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: map[string]interface{}{
"timestamp": uint64(time.Now().Add(1 * time.Hour).Unix()),
},
ethApiBlockError: nil,
ethApiSyncingResult: false,
ethApiSyncingError: nil,
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
synced: "DISABLED",
minPeerCount: "DISABLED",
checkBlock: "DISABLED",
maxSecondsBehind: "ERROR: timestamp too old: got ts:",
},
},
// 13 - seconds check - less than 0 seconds
{
headers: []string{"max_seconds_behind-1"},
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: map[string]interface{}{
"timestamp": uint64(time.Now().Add(1 * time.Hour).Unix()),
},
ethApiBlockError: nil,
ethApiSyncingResult: false,
ethApiSyncingError: nil,
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
synced: "DISABLED",
minPeerCount: "DISABLED",
checkBlock: "DISABLED",
maxSecondsBehind: "ERROR: bad header value",
},
},
// 14 - seconds check - badly formed request
{
headers: []string{"max_seconds_behindABC"},
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: map[string]interface{}{},
ethApiBlockError: nil,
ethApiSyncingResult: false,
ethApiSyncingError: nil,
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
synced: "DISABLED",
minPeerCount: "DISABLED",
checkBlock: "DISABLED",
maxSecondsBehind: "ERROR: strconv.Atoi: parsing \"abc\": invalid syntax",
},
},
// 15 - all checks - report ok
{
headers: []string{"synced", "check_block10", "min_peer_count1", "max_seconds_behind60"},
netApiResponse: hexutil.Uint(10),
netApiError: nil,
ethApiBlockResult: map[string]interface{}{
"timestamp": time.Now().Add(1 * time.Second).Unix(),
},
ethApiBlockError: nil,
ethApiSyncingResult: false,
ethApiSyncingError: nil,
expectedStatusCode: http.StatusOK,
expectedBody: map[string]string{
synced: "HEALTHY",
minPeerCount: "HEALTHY",
checkBlock: "HEALTHY",
maxSecondsBehind: "HEALTHY",
},
},
}
for idx, c := range cases {
w := httptest.NewRecorder()
r, err := http.NewRequest(http.MethodGet, "http://localhost:9090/health", nil)
if err != nil {
t.Errorf("%v: creating request: %v", idx, err)
}
for _, header := range c.headers {
r.Header.Add("X-ERIGON-HEALTHCHECK", header)
}
netAPI := rpc.API{
Namespace: "",
Version: "",
Service: &netApiStub{
response: c.netApiResponse,
error: c.netApiError,
},
Public: false,
}
ethAPI := rpc.API{
Namespace: "",
Version: "",
Service: &ethApiStub{
blockResult: c.ethApiBlockResult,
blockError: c.ethApiBlockError,
syncingResult: c.ethApiSyncingResult,
syncingError: c.ethApiSyncingError,
},
Public: false,
}
apis := make([]rpc.API, 2)
apis[0] = netAPI
apis[1] = ethAPI
ProcessHealthcheckIfNeeded(w, r, apis)
result := w.Result()
if result.StatusCode != c.expectedStatusCode {
t.Errorf("%v: expected status code: %v, but got: %v", idx, c.expectedStatusCode, result.StatusCode)
}
bodyBytes, err := io.ReadAll(result.Body)
if err != nil {
t.Errorf("%v: reading response body: %s", idx, err)
}
var body map[string]string
err = json.Unmarshal(bodyBytes, &body)
if err != nil {
t.Errorf("%v: unmarshalling the response body: %s", idx, err)
}
result.Body.Close()
for k, v := range c.expectedBody {
val, found := body[k]
if !found {
t.Errorf("%v: expected the key: %s to be in the response body but it wasn't there", idx, k)
}
if !strings.Contains(val, v) {
t.Errorf("%v: expected the response body key: %s to contain: %s, but it contained: %s", idx, k, v, val)
}
}
}
}
func TestProcessHealthcheckIfNeeded_RequestBody(t *testing.T) {
cases := []struct {
body string
netApiResponse hexutil.Uint
netApiError error
ethApiBlockResult map[string]interface{}
ethApiBlockError error
expectedStatusCode int
expectedBody map[string]string
}{
// 0 - happy path
{
body: "{\"min_peer_count\": 1, \"known_block\": 123}",
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: map[string]interface{}{"test": struct{}{}},
ethApiBlockError: nil,
expectedStatusCode: http.StatusOK,
expectedBody: map[string]string{
"healthcheck_query": "HEALTHY",
"min_peer_count": "HEALTHY",
"check_block": "HEALTHY",
},
},
// 1 - bad request body
{
body: "{\"min_peer_count\" 1, \"known_block\": 123}",
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: map[string]interface{}{"test": struct{}{}},
ethApiBlockError: nil,
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
"healthcheck_query": "ERROR:",
"min_peer_count": "DISABLED",
"check_block": "DISABLED",
},
},
// 2 - min peers - error from api
{
body: "{\"min_peer_count\": 1, \"known_block\": 123}",
netApiResponse: hexutil.Uint(1),
netApiError: errors.New("problem getting peers"),
ethApiBlockResult: map[string]interface{}{"test": struct{}{}},
ethApiBlockError: nil,
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
"healthcheck_query": "HEALTHY",
"min_peer_count": "ERROR: problem getting peers",
"check_block": "HEALTHY",
},
},
// 3 - min peers - not enough peers
{
body: "{\"min_peer_count\": 10, \"known_block\": 123}",
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: map[string]interface{}{"test": struct{}{}},
ethApiBlockError: nil,
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
"healthcheck_query": "HEALTHY",
"min_peer_count": "ERROR: not enough peers",
"check_block": "HEALTHY",
},
},
// 4 - check block - no block
{
body: "{\"min_peer_count\": 1, \"known_block\": 123}",
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: map[string]interface{}{},
ethApiBlockError: nil,
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
"healthcheck_query": "HEALTHY",
"min_peer_count": "HEALTHY",
"check_block": "ERROR: no known block with number ",
},
},
// 5 - check block - error getting block info
{
body: "{\"min_peer_count\": 1, \"known_block\": 123}",
netApiResponse: hexutil.Uint(1),
netApiError: nil,
ethApiBlockResult: map[string]interface{}{},
ethApiBlockError: errors.New("problem getting block"),
expectedStatusCode: http.StatusInternalServerError,
expectedBody: map[string]string{
"healthcheck_query": "HEALTHY",
"min_peer_count": "HEALTHY",
"check_block": "ERROR: problem getting block",
},
},
}
for idx, c := range cases {
w := httptest.NewRecorder()
r, err := http.NewRequest(http.MethodGet, "http://localhost:9090/health", nil)
if err != nil {
t.Errorf("%v: creating request: %v", idx, err)
}
r.Body = io.NopCloser(strings.NewReader(c.body))
netAPI := rpc.API{
Namespace: "",
Version: "",
Service: &netApiStub{
response: c.netApiResponse,
error: c.netApiError,
},
Public: false,
}
ethAPI := rpc.API{
Namespace: "",
Version: "",
Service: &ethApiStub{
blockResult: c.ethApiBlockResult,
blockError: c.ethApiBlockError,
},
Public: false,
}
apis := make([]rpc.API, 2)
apis[0] = netAPI
apis[1] = ethAPI
ProcessHealthcheckIfNeeded(w, r, apis)
result := w.Result()
if result.StatusCode != c.expectedStatusCode {
t.Errorf("%v: expected status code: %v, but got: %v", idx, c.expectedStatusCode, result.StatusCode)
}
bodyBytes, err := io.ReadAll(result.Body)
if err != nil {
t.Errorf("%v: reading response body: %s", idx, err)
}
var body map[string]string
err = json.Unmarshal(bodyBytes, &body)
if err != nil {
t.Errorf("%v: unmarshalling the response body: %s", idx, err)
}
result.Body.Close()
for k, v := range c.expectedBody {
val, found := body[k]
if !found {
t.Errorf("%v: expected the key: %s to be in the response body but it wasn't there", idx, k)
}
if !strings.Contains(val, v) {
t.Errorf("%v: expected the response body key: %s to contain: %s, but it contained: %s", idx, k, v, val)
}
}
}
}

View File

@ -1,17 +0,0 @@
package health
import (
"context"
"github.com/ledgerwatch/erigon/common/hexutil"
"github.com/ledgerwatch/erigon/rpc"
)
type NetAPI interface {
PeerCount(_ context.Context) (hexutil.Uint, error)
}
type EthAPI interface {
GetBlockByNumber(_ context.Context, number rpc.BlockNumber, fullTx bool) (map[string]interface{}, error)
Syncing(ctx context.Context) (interface{}, error)
}

View File

@ -1,22 +0,0 @@
package health
import (
"github.com/ledgerwatch/erigon/rpc"
)
func parseAPI(api []rpc.API) (netAPI NetAPI, ethAPI EthAPI) {
for _, rpc := range api {
if rpc.Service == nil {
continue
}
if netCandidate, ok := rpc.Service.(NetAPI); ok {
netAPI = netCandidate
}
if ethCandidate, ok := rpc.Service.(EthAPI); ok {
ethAPI = ethCandidate
}
}
return netAPI, ethAPI
}

View File

@ -1,48 +1,36 @@
 package main
 import (
-"os"
+"fmt"
+"log"
+"net/http"
 "github.com/go-chi/chi/v5"
-"github.com/ledgerwatch/erigon-lib/common"
-"github.com/ledgerwatch/log/v3"
-"github.com/spf13/cobra"
-"github.com/wmitsuda/otterscan/cmd/otter/cli"
-mycmds "github.com/wmitsuda/otterscan/cmd/otter/commands"
+"github.com/jessevdk/go-flags"
 )
+type config struct {
+OtsPort int `short:"p" long:"port" default:"3333"`
+OtsApiPath string `long:"api_path" default:"/"`
+OtsStaticDir string `long:"static_dir" default:"dist"`
+OtsAssetUrl string `long:"assert_url" default:""`
+OtsRpcDaemonUrl string `long:"rpc_daemon_url" default:"https://brilliant.staging.gfx.town"`
+OtsBeaconApiUrl string `long:"beacon_api_url" default:"" `
+}
+var Conf config
+var parser = flags.NewParser(&Conf, flags.Default)
 func main() {
-cmd, cfg := cli.RootCommand()
-rootCtx, rootCancel := common.RootContext()
-cmd.RunE = func(cmd *cobra.Command, args []string) error {
-ctx := cmd.Context()
-logger := log.New()
-db, borDb, backend, txPool, mining, stateCache, blockReader, ff, agg, err := cli.RemoteServices(ctx, *cfg, logger, rootCancel)
-if err != nil {
-log.Error("Could not connect to DB", "err", err)
-return nil
-}
-defer db.Close()
-if borDb != nil {
-defer borDb.Close()
-}
+parser.Parse()
 r := chi.NewRouter()
-// route the server
-if !cfg.OtsServerDisable {
-RouteServer(r, *cfg)
-}
-apiList := mycmds.APIList(db, borDb, backend, txPool, mining, ff, stateCache, blockReader, agg, *cfg)
-if err := cli.StartRpcServer(ctx, r, *cfg, apiList); err != nil {
-log.Error(err.Error())
-return nil
-}
-return nil
-}
-if err := cmd.ExecuteContext(rootCtx); err != nil {
-log.Error(err.Error())
-os.Exit(1)
-}
+RouteServer(r, Conf)
+log.Printf("Running with config: %+v", Conf)
+err := http.ListenAndServe(fmt.Sprintf(":%d", Conf.OtsPort), r)
+if err != nil {
+log.Println(err)
+}
 }

View File

@ -1,18 +0,0 @@
# Postman testing
There are two files here:
- RPC_Testing.json
- Trace_Testing.json
You can import them into Postman using these
instructions: https://github.com/ledgerwatch/erigon/wiki/Using-Postman-to-Test-TurboGeth-RPC
The first is used to generate help text and other documentation, and to run a sanity check against a new
release. There is roughly one test for each of the 81 RPC endpoints.
The second file contains 31 test cases specifically for the nine trace routines (five tests for five of the routines,
three for another, one each for the other three).
Another collection of related tests can be found
here: https://github.com/Great-Hill-Corporation/trueblocks-core/tree/develop/src/other/trace_tests

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

View File

@ -1,321 +0,0 @@
package rpcdaemontest
import (
"context"
"crypto/ecdsa"
"encoding/binary"
"fmt"
"math/big"
"net"
"testing"
"github.com/holiman/uint256"
"github.com/ledgerwatch/erigon-lib/gointerfaces/remote"
"github.com/ledgerwatch/erigon-lib/gointerfaces/txpool"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon/accounts/abi/bind"
"github.com/ledgerwatch/erigon/accounts/abi/bind/backends"
"github.com/ledgerwatch/erigon/cmd/rpcdaemon/commands/contracts"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/consensus"
"github.com/ledgerwatch/erigon/consensus/ethash"
"github.com/ledgerwatch/erigon/core"
"github.com/ledgerwatch/erigon/core/types"
"github.com/ledgerwatch/erigon/crypto"
"github.com/ledgerwatch/erigon/ethdb/privateapi"
"github.com/ledgerwatch/erigon/params"
"github.com/ledgerwatch/erigon/turbo/snapshotsync"
"github.com/ledgerwatch/erigon/turbo/stages"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
"google.golang.org/grpc/test/bufconn"
)
func CreateTestKV(t *testing.T) kv.RwDB {
s, _, _ := CreateTestSentry(t)
return s.DB
}
type testAddresses struct {
key *ecdsa.PrivateKey
key1 *ecdsa.PrivateKey
key2 *ecdsa.PrivateKey
address common.Address
address1 common.Address
address2 common.Address
}
func makeTestAddresses() testAddresses {
var (
key, _ = crypto.HexToECDSA("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291")
key1, _ = crypto.HexToECDSA("49a7b37aa6f6645917e7b807e9d1c00d4fa71f18343b0d4122a4d2df64dd6fee")
key2, _ = crypto.HexToECDSA("8a1f9a8f95be41cd7ccb6168179afb4504aefe388d1e14474d32c45c72ce7b7a")
address = crypto.PubkeyToAddress(key.PublicKey)
address1 = crypto.PubkeyToAddress(key1.PublicKey)
address2 = crypto.PubkeyToAddress(key2.PublicKey)
)
return testAddresses{
key: key,
key1: key1,
key2: key2,
address: address,
address1: address1,
address2: address2,
}
}
func CreateTestSentry(t *testing.T) (*stages.MockSentry, *core.ChainPack, []*core.ChainPack) {
addresses := makeTestAddresses()
var (
key = addresses.key
address = addresses.address
address1 = addresses.address1
address2 = addresses.address2
)
var (
gspec = &core.Genesis{
Config: params.AllEthashProtocolChanges,
Alloc: core.GenesisAlloc{
address: {Balance: big.NewInt(9000000000000000000)},
address1: {Balance: big.NewInt(200000000000000000)},
address2: {Balance: big.NewInt(300000000000000000)},
},
GasLimit: 10000000,
}
)
m := stages.MockWithGenesis(t, gspec, key, false)
contractBackend := backends.NewSimulatedBackendWithConfig(gspec.Alloc, gspec.Config, gspec.GasLimit)
defer contractBackend.Close()
// Generate empty chain to have some orphaned blocks for tests
orphanedChain, err := core.GenerateChain(m.ChainConfig, m.Genesis, m.Engine, m.DB, 5, func(i int, block *core.BlockGen) {
}, true)
if err != nil {
t.Fatal(err)
}
chain, err := getChainInstance(&addresses, m.ChainConfig, m.Genesis, m.Engine, m.DB, contractBackend)
if err != nil {
t.Fatal(err)
}
if err = m.InsertChain(orphanedChain); err != nil {
t.Fatal(err)
}
if err = m.InsertChain(chain); err != nil {
t.Fatal(err)
}
return m, chain, []*core.ChainPack{orphanedChain}
}
var chainInstance *core.ChainPack
func getChainInstance(
addresses *testAddresses,
config *params.ChainConfig,
parent *types.Block,
engine consensus.Engine,
db kv.RwDB,
contractBackend *backends.SimulatedBackend,
) (*core.ChainPack, error) {
var err error
if chainInstance == nil {
chainInstance, err = generateChain(addresses, config, parent, engine, db, contractBackend)
}
return chainInstance.Copy(), err
}
func generateChain(
addresses *testAddresses,
config *params.ChainConfig,
parent *types.Block,
engine consensus.Engine,
db kv.RwDB,
contractBackend *backends.SimulatedBackend,
) (*core.ChainPack, error) {
var (
key = addresses.key
key1 = addresses.key1
key2 = addresses.key2
address = addresses.address
address1 = addresses.address1
address2 = addresses.address2
theAddr = common.Address{1}
chainId = big.NewInt(1337)
// this code generates a log
signer = types.LatestSignerForChainID(nil)
)
transactOpts, _ := bind.NewKeyedTransactorWithChainID(key, chainId)
transactOpts1, _ := bind.NewKeyedTransactorWithChainID(key1, chainId)
transactOpts2, _ := bind.NewKeyedTransactorWithChainID(key2, chainId)
var poly *contracts.Poly
var tokenContract *contracts.Token
// We generate the blocks without plain state because it's not supported in core.GenerateChain
return core.GenerateChain(config, parent, engine, db, 10, func(i int, block *core.BlockGen) {
var (
txn types.Transaction
txs []types.Transaction
err error
)
ctx := context.Background()
switch i {
case 0:
txn, err = types.SignTx(types.NewTransaction(0, theAddr, uint256.NewInt(1000000000000000), 21000, new(uint256.Int), nil), *signer, key)
if err != nil {
panic(err)
}
err = contractBackend.SendTransaction(ctx, txn)
if err != nil {
panic(err)
}
case 1:
txn, err = types.SignTx(types.NewTransaction(1, theAddr, uint256.NewInt(1000000000000000), 21000, new(uint256.Int), nil), *signer, key)
if err != nil {
panic(err)
}
err = contractBackend.SendTransaction(ctx, txn)
if err != nil {
panic(err)
}
case 2:
_, txn, tokenContract, err = contracts.DeployToken(transactOpts, contractBackend, address1)
case 3:
txn, err = tokenContract.Mint(transactOpts1, address2, big.NewInt(10))
case 4:
txn, err = tokenContract.Transfer(transactOpts2, address, big.NewInt(3))
case 5:
// Multiple transactions sending small amounts of ether to various accounts
var j uint64
var toAddr common.Address
nonce := block.TxNonce(address)
for j = 1; j <= 32; j++ {
binary.BigEndian.PutUint64(toAddr[:], j)
txn, err = types.SignTx(types.NewTransaction(nonce, toAddr, uint256.NewInt(1_000_000_000_000_000), 21000, new(uint256.Int), nil), *signer, key)
if err != nil {
panic(err)
}
err = contractBackend.SendTransaction(ctx, txn)
if err != nil {
panic(err)
}
txs = append(txs, txn)
nonce++
}
case 6:
_, txn, tokenContract, err = contracts.DeployToken(transactOpts, contractBackend, address1)
if err != nil {
panic(err)
}
txs = append(txs, txn)
txn, err = tokenContract.Mint(transactOpts1, address2, big.NewInt(100))
if err != nil {
panic(err)
}
txs = append(txs, txn)
// Multiple transactions sending small amounts of ether to various accounts
var j uint64
var toAddr common.Address
for j = 1; j <= 32; j++ {
binary.BigEndian.PutUint64(toAddr[:], j)
txn, err = tokenContract.Transfer(transactOpts2, toAddr, big.NewInt(1))
if err != nil {
panic(err)
}
txs = append(txs, txn)
}
case 7:
var toAddr common.Address
nonce := block.TxNonce(address)
binary.BigEndian.PutUint64(toAddr[:], 4)
txn, err = types.SignTx(types.NewTransaction(nonce, toAddr, uint256.NewInt(1000000000000000), 21000, new(uint256.Int), nil), *signer, key)
if err != nil {
panic(err)
}
err = contractBackend.SendTransaction(ctx, txn)
if err != nil {
panic(err)
}
txs = append(txs, txn)
binary.BigEndian.PutUint64(toAddr[:], 12)
txn, err = tokenContract.Transfer(transactOpts2, toAddr, big.NewInt(1))
if err != nil {
panic(err)
}
txs = append(txs, txn)
case 8:
_, txn, poly, err = contracts.DeployPoly(transactOpts, contractBackend)
if err != nil {
panic(err)
}
txs = append(txs, txn)
case 9:
txn, err = poly.DeployAndDestruct(transactOpts, big.NewInt(0))
if err != nil {
panic(err)
}
txs = append(txs, txn)
}
if err != nil {
panic(err)
}
if txs == nil && txn != nil {
txs = append(txs, txn)
}
for _, txn := range txs {
block.AddTx(txn)
}
contractBackend.Commit()
}, true)
}
type IsMiningMock struct{}
func (*IsMiningMock) IsMining() bool { return false }
func CreateTestGrpcConn(t *testing.T, m *stages.MockSentry) (context.Context, *grpc.ClientConn) { //nolint
ctx, cancel := context.WithCancel(context.Background())
apis := m.Engine.APIs(nil)
if len(apis) < 2 {
t.Fatal("couldn't instantiate Engine api")
}
ethashApi := apis[1].Service.(*ethash.API)
server := grpc.NewServer()
remote.RegisterETHBACKENDServer(server, privateapi.NewEthBackendServer(ctx, nil, m.DB, m.Notifications.Events, snapshotsync.NewBlockReader(), nil, nil, nil, false))
txpool.RegisterTxpoolServer(server, m.TxPoolGrpcServer)
txpool.RegisterMiningServer(server, privateapi.NewMiningServer(ctx, &IsMiningMock{}, ethashApi))
listener := bufconn.Listen(1024 * 1024)
dialer := func() func(context.Context, string) (net.Conn, error) {
go func() {
if err := server.Serve(listener); err != nil {
fmt.Printf("%v\n", err)
}
}()
return func(context.Context, string) (net.Conn, error) {
return listener.Dial()
}
}
conn, err := grpc.DialContext(ctx, "", grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithContextDialer(dialer()))
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
cancel()
conn.Close()
server.Stop()
})
return ctx, conn
}


@ -1,310 +0,0 @@
package rpcservices
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"sync/atomic"
"github.com/ledgerwatch/erigon-lib/gointerfaces"
"github.com/ledgerwatch/erigon-lib/gointerfaces/remote"
types2 "github.com/ledgerwatch/erigon-lib/gointerfaces/types"
"github.com/ledgerwatch/erigon-lib/kv"
"github.com/ledgerwatch/erigon/common"
"github.com/ledgerwatch/erigon/core/types"
"github.com/ledgerwatch/erigon/ethdb/privateapi"
"github.com/ledgerwatch/erigon/p2p"
"github.com/ledgerwatch/erigon/rlp"
"github.com/ledgerwatch/erigon/turbo/services"
"github.com/ledgerwatch/log/v3"
"google.golang.org/grpc"
"google.golang.org/grpc/status"
"google.golang.org/protobuf/types/known/emptypb"
)
type RemoteBackend struct {
remoteEthBackend remote.ETHBACKENDClient
log log.Logger
version gointerfaces.Version
db kv.RoDB
blockReader services.FullBlockReader
}
func NewRemoteBackend(client remote.ETHBACKENDClient, db kv.RoDB, blockReader services.FullBlockReader) *RemoteBackend {
return &RemoteBackend{
remoteEthBackend: client,
version: gointerfaces.VersionFromProto(privateapi.EthBackendAPIVersion),
log: log.New("remote_service", "eth_backend"),
db: db,
blockReader: blockReader,
}
}
func (back *RemoteBackend) EnsureVersionCompatibility() bool {
versionReply, err := back.remoteEthBackend.Version(context.Background(), &emptypb.Empty{}, grpc.WaitForReady(true))
if err != nil {
back.log.Error("getting Version", "err", err)
return false
}
if !gointerfaces.EnsureVersion(back.version, versionReply) {
back.log.Error("incompatible interface versions", "client", back.version.String(),
"server", fmt.Sprintf("%d.%d.%d", versionReply.Major, versionReply.Minor, versionReply.Patch))
return false
}
back.log.Info("interfaces compatible", "client", back.version.String(),
"server", fmt.Sprintf("%d.%d.%d", versionReply.Major, versionReply.Minor, versionReply.Patch))
return true
}
func (back *RemoteBackend) Etherbase(ctx context.Context) (common.Address, error) {
res, err := back.remoteEthBackend.Etherbase(ctx, &remote.EtherbaseRequest{})
if err != nil {
if s, ok := status.FromError(err); ok {
return common.Address{}, errors.New(s.Message())
}
return common.Address{}, err
}
return gointerfaces.ConvertH160toAddress(res.Address), nil
}
func (back *RemoteBackend) NetVersion(ctx context.Context) (uint64, error) {
res, err := back.remoteEthBackend.NetVersion(ctx, &remote.NetVersionRequest{})
if err != nil {
if s, ok := status.FromError(err); ok {
return 0, errors.New(s.Message())
}
return 0, err
}
return res.Id, nil
}
func (back *RemoteBackend) NetPeerCount(ctx context.Context) (uint64, error) {
res, err := back.remoteEthBackend.NetPeerCount(ctx, &remote.NetPeerCountRequest{})
if err != nil {
if s, ok := status.FromError(err); ok {
return 0, errors.New(s.Message())
}
return 0, err
}
return res.Count, nil
}
func (back *RemoteBackend) ProtocolVersion(ctx context.Context) (uint64, error) {
res, err := back.remoteEthBackend.ProtocolVersion(ctx, &remote.ProtocolVersionRequest{})
if err != nil {
if s, ok := status.FromError(err); ok {
return 0, errors.New(s.Message())
}
return 0, err
}
return res.Id, nil
}
func (back *RemoteBackend) ClientVersion(ctx context.Context) (string, error) {
res, err := back.remoteEthBackend.ClientVersion(ctx, &remote.ClientVersionRequest{})
if err != nil {
if s, ok := status.FromError(err); ok {
return "", errors.New(s.Message())
}
return "", err
}
return res.NodeName, nil
}
func (back *RemoteBackend) Subscribe(ctx context.Context, onNewEvent func(*remote.SubscribeReply)) error {
subscription, err := back.remoteEthBackend.Subscribe(ctx, &remote.SubscribeRequest{}, grpc.WaitForReady(true))
if err != nil {
if s, ok := status.FromError(err); ok {
return errors.New(s.Message())
}
return err
}
for {
event, err := subscription.Recv()
if errors.Is(err, io.EOF) {
log.Debug("rpcdaemon: the subscription channel was closed")
break
}
if err != nil {
return err
}
onNewEvent(event)
}
return nil
}
func (back *RemoteBackend) SubscribeLogs(ctx context.Context, onNewLogs func(reply *remote.SubscribeLogsReply), requestor *atomic.Value) error {
subscription, err := back.remoteEthBackend.SubscribeLogs(ctx, grpc.WaitForReady(true))
if err != nil {
if s, ok := status.FromError(err); ok {
return errors.New(s.Message())
}
return err
}
requestor.Store(subscription.Send)
for {
logs, err := subscription.Recv()
if errors.Is(err, io.EOF) {
log.Info("rpcdaemon: the logs subscription channel was closed")
break
}
if err != nil {
return err
}
onNewLogs(logs)
}
return nil
}
func (back *RemoteBackend) TxnLookup(ctx context.Context, tx kv.Getter, txnHash common.Hash) (uint64, bool, error) {
return back.blockReader.TxnLookup(ctx, tx, txnHash)
}
func (back *RemoteBackend) BlockWithSenders(ctx context.Context, tx kv.Getter, hash common.Hash, blockHeight uint64) (block *types.Block, senders []common.Address, err error) {
return back.blockReader.BlockWithSenders(ctx, tx, hash, blockHeight)
}
func (back *RemoteBackend) BodyWithTransactions(ctx context.Context, tx kv.Getter, hash common.Hash, blockHeight uint64) (body *types.Body, err error) {
return back.blockReader.BodyWithTransactions(ctx, tx, hash, blockHeight)
}
func (back *RemoteBackend) BodyRlp(ctx context.Context, tx kv.Getter, hash common.Hash, blockHeight uint64) (bodyRlp rlp.RawValue, err error) {
return back.blockReader.BodyRlp(ctx, tx, hash, blockHeight)
}
func (back *RemoteBackend) Body(ctx context.Context, tx kv.Getter, hash common.Hash, blockHeight uint64) (body *types.Body, txAmount uint32, err error) {
return back.blockReader.Body(ctx, tx, hash, blockHeight)
}
func (back *RemoteBackend) Header(ctx context.Context, tx kv.Getter, hash common.Hash, blockHeight uint64) (*types.Header, error) {
return back.blockReader.Header(ctx, tx, hash, blockHeight)
}
func (back *RemoteBackend) HeaderByNumber(ctx context.Context, tx kv.Getter, blockHeight uint64) (*types.Header, error) {
return back.blockReader.HeaderByNumber(ctx, tx, blockHeight)
}
func (back *RemoteBackend) HeaderByHash(ctx context.Context, tx kv.Getter, hash common.Hash) (*types.Header, error) {
return back.blockReader.HeaderByHash(ctx, tx, hash)
}
func (back *RemoteBackend) CanonicalHash(ctx context.Context, tx kv.Getter, blockHeight uint64) (common.Hash, error) {
return back.blockReader.CanonicalHash(ctx, tx, blockHeight)
}
func (back *RemoteBackend) TxnByIdxInBlock(ctx context.Context, tx kv.Getter, blockNum uint64, i int) (types.Transaction, error) {
return back.blockReader.TxnByIdxInBlock(ctx, tx, blockNum, i)
}
func (back *RemoteBackend) EngineNewPayloadV1(ctx context.Context, payload *types2.ExecutionPayload) (res *remote.EnginePayloadStatus, err error) {
return back.remoteEthBackend.EngineNewPayloadV1(ctx, payload)
}
func (back *RemoteBackend) EngineForkchoiceUpdatedV1(ctx context.Context, request *remote.EngineForkChoiceUpdatedRequest) (*remote.EngineForkChoiceUpdatedReply, error) {
return back.remoteEthBackend.EngineForkChoiceUpdatedV1(ctx, request)
}
func (back *RemoteBackend) EngineGetPayloadV1(ctx context.Context, payloadId uint64) (res *types2.ExecutionPayload, err error) {
return back.remoteEthBackend.EngineGetPayloadV1(ctx, &remote.EngineGetPayloadRequest{
PayloadId: payloadId,
})
}
func (back *RemoteBackend) NodeInfo(ctx context.Context, limit uint32) ([]p2p.NodeInfo, error) {
nodes, err := back.remoteEthBackend.NodeInfo(ctx, &remote.NodesInfoRequest{Limit: limit})
if err != nil {
return nil, fmt.Errorf("nodes info request error: %w", err)
}
if nodes == nil || len(nodes.NodesInfo) == 0 {
return nil, errors.New("empty nodesInfo response")
}
ret := make([]p2p.NodeInfo, 0, len(nodes.NodesInfo))
for _, node := range nodes.NodesInfo {
var rawProtocols map[string]json.RawMessage
if err = json.Unmarshal(node.Protocols, &rawProtocols); err != nil {
return nil, fmt.Errorf("cannot decode protocols metadata: %w", err)
}
protocols := make(map[string]interface{}, len(rawProtocols))
for k, v := range rawProtocols {
protocols[k] = v
}
ret = append(ret, p2p.NodeInfo{
Enode: node.Enode,
ID: node.Id,
IP: node.Enode,
ENR: node.Enr,
ListenAddr: node.ListenerAddr,
Name: node.Name,
Ports: struct {
Discovery int `json:"discovery"`
Listener int `json:"listener"`
}{
Discovery: int(node.Ports.Discovery),
Listener: int(node.Ports.Listener),
},
Protocols: protocols,
})
}
return ret, nil
}
func (back *RemoteBackend) Peers(ctx context.Context) ([]*p2p.PeerInfo, error) {
rpcPeers, err := back.remoteEthBackend.Peers(ctx, &emptypb.Empty{})
if err != nil {
return nil, fmt.Errorf("ETHBACKENDClient.Peers() error: %w", err)
}
peers := make([]*p2p.PeerInfo, 0, len(rpcPeers.Peers))
for _, rpcPeer := range rpcPeers.Peers {
peer := p2p.PeerInfo{
ENR: rpcPeer.Enr,
Enode: rpcPeer.Enode,
ID: rpcPeer.Id,
Name: rpcPeer.Name,
Caps: rpcPeer.Caps,
Network: struct {
LocalAddress string `json:"localAddress"`
RemoteAddress string `json:"remoteAddress"`
Inbound bool `json:"inbound"`
Trusted bool `json:"trusted"`
Static bool `json:"static"`
}{
LocalAddress: rpcPeer.ConnLocalAddr,
RemoteAddress: rpcPeer.ConnRemoteAddr,
Inbound: rpcPeer.ConnIsInbound,
Trusted: rpcPeer.ConnIsTrusted,
Static: rpcPeer.ConnIsStatic,
},
Protocols: nil,
}
peers = append(peers, &peer)
}
return peers, nil
}
func (back *RemoteBackend) PendingBlock(ctx context.Context) (*types.Block, error) {
blockRlp, err := back.remoteEthBackend.PendingBlock(ctx, &emptypb.Empty{})
if err != nil {
return nil, fmt.Errorf("ETHBACKENDClient.PendingBlock() error: %w", err)
}
if blockRlp == nil {
return nil, nil
}
var block types.Block
err = rlp.Decode(bytes.NewReader(blockRlp.BlockRlp), &block)
if err != nil {
return nil, fmt.Errorf("decoding block from %x: %w", blockRlp, err)
}
return &block, nil
}


@ -1,43 +0,0 @@
package rpcservices
import (
"context"
"fmt"
"github.com/ledgerwatch/erigon-lib/gointerfaces"
"github.com/ledgerwatch/erigon-lib/gointerfaces/txpool"
"github.com/ledgerwatch/erigon/ethdb/privateapi"
"github.com/ledgerwatch/log/v3"
"google.golang.org/grpc"
"google.golang.org/protobuf/types/known/emptypb"
)
type MiningService struct {
txpool.MiningClient
log log.Logger
version gointerfaces.Version
}
func NewMiningService(client txpool.MiningClient) *MiningService {
return &MiningService{
MiningClient: client,
version: gointerfaces.VersionFromProto(privateapi.MiningAPIVersion),
log: log.New("remote_service", "mining"),
}
}
func (s *MiningService) EnsureVersionCompatibility() bool {
versionReply, err := s.Version(context.Background(), &emptypb.Empty{}, grpc.WaitForReady(true))
if err != nil {
s.log.Error("getting Version", "err", err)
return false
}
if !gointerfaces.EnsureVersion(s.version, versionReply) {
s.log.Error("incompatible interface versions", "client", s.version.String(),
"server", fmt.Sprintf("%d.%d.%d", versionReply.Major, versionReply.Minor, versionReply.Patch))
return false
}
s.log.Info("interfaces compatible", "client", s.version.String(),
"server", fmt.Sprintf("%d.%d.%d", versionReply.Major, versionReply.Minor, versionReply.Patch))
return true
}


@ -1,50 +0,0 @@
package rpcservices
import (
"context"
"fmt"
"time"
"github.com/ledgerwatch/erigon-lib/gointerfaces"
"github.com/ledgerwatch/erigon-lib/gointerfaces/grpcutil"
txpooproto "github.com/ledgerwatch/erigon-lib/gointerfaces/txpool"
txpool2 "github.com/ledgerwatch/erigon-lib/txpool"
"github.com/ledgerwatch/log/v3"
"google.golang.org/grpc"
"google.golang.org/protobuf/types/known/emptypb"
)
type TxPoolService struct {
txpooproto.TxpoolClient
log log.Logger
version gointerfaces.Version
}
func NewTxPoolService(client txpooproto.TxpoolClient) *TxPoolService {
return &TxPoolService{
TxpoolClient: client,
version: gointerfaces.VersionFromProto(txpool2.TxPoolAPIVersion),
log: log.New("remote_service", "tx_pool"),
}
}
func (s *TxPoolService) EnsureVersionCompatibility() bool {
Start:
versionReply, err := s.Version(context.Background(), &emptypb.Empty{}, grpc.WaitForReady(true))
if err != nil {
if grpcutil.ErrIs(err, txpool2.ErrPoolDisabled) {
time.Sleep(3 * time.Second)
goto Start
}
s.log.Error("ensure version", "err", err)
return false
}
if !gointerfaces.EnsureVersion(s.version, versionReply) {
s.log.Error("incompatible interface versions", "client", s.version.String(),
"server", fmt.Sprintf("%d.%d.%d", versionReply.Major, versionReply.Minor, versionReply.Patch))
return false
}
s.log.Info("interfaces compatible", "client", s.version.String(),
"server", fmt.Sprintf("%d.%d.%d", versionReply.Major, versionReply.Minor, versionReply.Patch))
return true
}


@ -11,11 +11,10 @@ import (
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/ledgerwatch/erigon/common/debug"
"github.com/wmitsuda/otterscan/cmd/otter/cli/httpcfg"
"github.com/wmitsuda/otterscan/lib/resources"
)
func RouteServer(r chi.Router, cfg httpcfg.HttpCfg) {
func RouteServer(r chi.Router, cfg config) {
r.Group(func(r chi.Router) {
r.Use(middleware.Logger)
r.Use(middleware.Recoverer)


@ -1,222 +0,0 @@
###
POST localhost:8545
Content-Type: application/json
{
"jsonrpc": "2.0",
"method": "eth_syncing",
"params": [],
"id": 1
}
###
POST localhost:8545
Content-Type: application/json
{
"jsonrpc": "2.0",
"method": "eth_getBalance",
"params": [
"0xfffa4763f94f7ad191b366a343092a5d1a47ed08",
"0xde84"
],
"id": 1
}
###
POST localhost:8545
Content-Type: application/json
{
"jsonrpc": "2.0",
"method": "debug_accountRange",
"params": [
"0x1e8480",
"",
256,
false,
false,
false
],
"id": 1
}
###
# curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_getTransactionByHash", "params": ["0x1302cc71b89c1482b18a97a6fa2c9c375f4bf7548122363b6e91528440272fde"], "id":1}' localhost:8545
POST localhost:8545
Content-Type: application/json
{
"jsonrpc": "2.0",
"method": "eth_getTransactionByHash",
"params": [
"0x1302cc71b89c1482b18a97a6fa2c9c375f4bf7548122363b6e91528440272fde"
],
"id": 1
}
###
# curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_getTransactionByHash", "params": ["0x1302cc71b89c1482b18a97a6fa2c9c375f4bf7548122363b6e91528440272fde"], "id":1}' localhost:8545
POST localhost:8545
Content-Type: application/json
{
"jsonrpc": "2.0",
"method": "eth_getBlockByNumber",
"params": [
"0x4C4B40",
true
],
"id": 1
}
###
# curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber", "params": ["0x1b4", true], "id":1}' localhost:8545
POST localhost:8545
Content-Type: application/json
{
"jsonrpc": "2.0",
"method": "eth_newHeader",
"params": [],
"id": 1
}
###
POST localhost:8545
Content-Type: application/json
{
"jsonrpc": "2.0",
"method": "eth_getBlockByNumber",
"params": [
"0xf4240",
true
],
"id": 2
}
###
POST localhost:8545
Content-Type: application/json
{
"jsonrpc": "2.0",
"method": "debug_storageRangeAt",
"params": [
"0x4ced0bc30041f7f4e11ba9f341b54404770c7695dfdba6bb64b6ffeee2074177",
99,
"0x33990122638b9132ca29c723bdf037f1a891a70c",
"0x0000000000000000000000000000000000000000000000000000000000000000",
1024
],
"id": 537758
}
### > 60
### >20
###{"jsonrpc":"2.0","method":"debug_storageRangeAt","params":["0x6e6ec30ba20b263d1bdf6d87a0b1b037ea595929ac10ad74f6b7e1890fdad744", 19,"0x793ae8c1b1a160bfc07bfb0d04f85eab1a71f4f2","0x0000000000000000000000000000000000000000000000000000000000000000",1024],"id":113911}
### {"jsonrpc":"2.0","method":"debug_storageRangeAt","params":["0xbcb55dcb321899291d10818dd06eaaf939ff87a717ac40850b54c6b56e8936ff", 2,"0xca7c390f8f843a8c3036841fde755e5d0acb97da","0x0000000000000000000000000000000000000000000000000000000000000000",1024],"id":3836}
###{"jsonrpc":"2.0","method":"debug_storageRangeAt","params":["0xf212a7655339852bf58f7e1d66f82256d22d13ccba3068a9c47a635738698c84", 0,"0xb278e4cb20dfbf97e78f27001f6b15288302f4d7","0x0000000000000000000000000000000000000000000000000000000000000000",1024],"id":8970}
###
POST 192.168.255.138:8545
Content-Type: application/json
{
"jsonrpc": "2.0",
"method": "eth_getTransactionReceipt",
"params": [
"0xc05ce241bec59900356ede868d170bc01d743c3cd5ecb129ca99596593022771"
],
"id": 537758
}
###
#POST 192.168.255.138:8545
POST localhost:8545
Content-Type: application/json
{
"jsonrpc": "2.0",
"method": "erigon_getLogsByHash",
"params": [
"0x343f85f13356e138152d77287fda5ae0818c514119119ad439f81d69c59fc2f6"
],
"id": 537758
}
###
#POST 192.168.255.138:8545
POST localhost:8545
Content-Type: application/json
{
"jsonrpc": "2.0",
"method": "eth_getLogs",
"params": [
{
"address": "0x6090a6e47849629b7245dfa1ca21d94cd15878ef",
"fromBlock": "0x3d0000",
"toBlock": "0x3d2600",
"topics": [
null,
"0x374f3a049e006f36f6cf91b02a3b0ee16c858af2f75858733eb0e927b5b7126c"
]
}
],
"id": 537758
}
###
#POST 192.168.255.138:8545
POST localhost:8545
Content-Type: application/json
{
"jsonrpc": "2.0",
"method": "eth_getWork",
"params": [],
"id": 537758
}
###
POST localhost:8545
Content-Type: application/json
{
"id": 1,
"method": "eth_estimateGas",
"params": [
{
"to": "0x5fda30bb72b8dfe20e48a00dfc108d0915be9bb0",
"value": "0x1234"
},
"latest"
]
}


@ -1,5 +0,0 @@
geth
parity
nethermind
turbogeth
erigon


@ -1,22 +0,0 @@
s/,\"id\":\"1\"//g
s/\"result\":null,/\"result\":\{\},/g
s/suicide/selfdestruct/g
s/\"gasUsed\":\"0x0\",//g
s/,\"value\":\"0x0\"//g
s/invalid argument 1: json: cannot unmarshal hex string \\\"0x\\\" into Go value of type hexutil.Uint64/Invalid params: Invalid index: cannot parse integer from empty string./
s/invalid argument 1: json: cannot unmarshal number into Go value of type \[\]hexutil.Uint64/Invalid params: invalid type: integer `0`, expected a sequence./
s/missing value for required argument 1/Invalid params: invalid length 1, expected a tuple of size 2./
s/Invalid params: invalid type: string \\\"0x0\\\", expected a sequence./invalid argument 1: json: cannot unmarshal string into Go value of type \[\]hexutil.Uint64/
s/Invalid params\: Invalid block number\: number too large to fit in target type./invalid argument 0: hex number > 64 bits/
s/the method trace_junk12 does not exist\/is not available/Method not found/
s/,\"traceAddress\":null/,\"traceAddress\":[]/g
s/\"0x0000000000000000000000000000000000000000000000000000000000000000\"/\"0x\"/g
s/\"transactionHash\":\"0x\",\"transactionPosition\":0/\"transactionHash\":null,\"transactionPosition\":null/g
s/\"result\":null/\"result\":[]/g
s/\"error\":{\"code\":-32000,\"message\":\"function trace_replayBlockTransactions not implemented\"}/\"result\":\[\]/
s/\"error\":{\"code\":-32000,\"message\":\"function trace_replayTransaction not implemented\"}/\"result\":{\"output\":\"0x\",\"stateDiff\":null,\"trace\":\[\],\"vmTrace\":null}/
s/\"error\":{\"code\":-32602,\"message\":\"invalid argument 0: json: cannot unmarshal array into Go value of type commands.CallParam\"}/\"result\":\[{\"output\":\"0x\",\"stateDiff\":null,\"trace\":\[\],\"vmTrace\":null},{\"output\":\"0x\",\"stateDiff\":null,\"trace\":\[\],\"vmTrace\":null}]/
s/\"error\":{\"code\":-32602,\"message\":\"invalid argument 0: hex string has length 82, want 64 for common.Hash\"}/\"error\":{\"code\":-32602,\"data\":\"RlpIncorrectListLen\",\"message\":\"Couldn't parse parameters: Transaction is not valid RLP\"}/


@ -1,76 +0,0 @@
005 trace_get fail ["0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3",0]
010 trace_get fail ["0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3","0x0"]
015 trace_get zero ["0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3",["0x0"]]
020 trace_get one ["0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3",["0x1"]]
025 trace_get both ["0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3",["0x0","0x1"]]
030 trace_get fail ["0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3"]
035 trace_get two ["0x5c504ed432cb51138bcf09aa5e8a410dd4a1e204ef84bfed1be16dfba1b22060",["0x2"]]
040 trace_get fail ["0x975994512b958b31608f5692a6dbacba359349533dfb4ba0facfb7291fbec48d",["0x"]]
050 trace_transaction one ["0x17104ac9d3312d8c136b7f44d4b8b47852618065ebfa534bd2d3b5ef218ca1f3"]
055 trace_transaction two ["0x5c504ed432cb51138bcf09aa5e8a410dd4a1e204ef84bfed1be16dfba1b22060"]
060 trace_transaction three ["0x6afbe0f0ea3613edd6b84b71260836c03bddce81604f05c81a070cd671d3d765"]
065 trace_transaction four ["0x80926bb17ecdd526a2d901835482615eec87c4ca7fc30b96d8c6d6ab17bc721e"]
070 trace_transaction five ["0xb8ae0ab093fe1882249187b8f40dbe6e9285b419d096bd8028172d55b47ff3ce"]
075 trace_transaction six ["0xc2b831c051582f13dfaff6df648972e7e94aeeed1e85d23bd968a55b59f3cb5b"]
080 trace_transaction seven ["0xf9d426284bd20415a53991a004122b3a3a619b295ea98d1d88a5fd3a4125408b"]
085 trace_transaction cr_de ["0x343ba476313771d4431018d7d2e935eba2bfe26d5be3e6cb84af6817fd0e4309"]
105 trace_block 0x23 ["0x2328"]
110 trace_block 0x10 ["0x100"]
115 trace_block 0x12 ["0x12"]
120 trace_block 0x12 ["0x121212"]
125 trace_block 0x2e ["0x2ed119"]
130 trace_block 0xa1 ["0xa18dcfbc639be11c353420ede9224d772c56eb9ff327eb73771f798cf42d0027"]
#135 trace_block 0xa6 ["0xa60f34"]
#140 trace_block 0xf4 ["0xf4629"]
#145 trace_block slow ["0x895441"]
150 trace_filter good_1 [{"fromBlock":"0x2328","toBlock":"0x2328"}]
155 trace_filter range_1 [{"fromBlock":"0x2dcaa9","toBlock":"0x2dcaaa"}]
160 trace_filter block_3 [{"fromBlock":"0x3","toBlock":"0x3"}]
165 trace_filter first_tx [{"fromBlock":"0xb443","toBlock":"0xb443"}]
170 trace_filter from_doc [{"fromBlock":"0x2ed0c4","toBlock":"0x2ed128","toAddress":["0x8bbb73bcb5d553b5a556358d27625323fd781d37"],"after":1000,"count":100}]
175 trace_filter rem_a_o [{"fromBlock":"0x2ed0c4","toBlock":"0x2ed128","toAddress":["0x8bbb73bcb5d553b5a556358d27625323fd781d37"]}]
180 trace_filter count_1 [{"fromBlock":"0x2ed0c4","toBlock":"0x2ed128","toAddress":["0x8bbb73bcb5d553b5a556358d27625323fd781d37"],"count":1}]
185 trace_filter after_1 [{"fromBlock":"0x2ed0c4","toBlock":"0x2ed128","toAddress":["0x8bbb73bcb5d553b5a556358d27625323fd781d37"],"after":1,"count":4}]
190 trace_filter to_0xc02 [{"fromBlock":"0xa344e0","toBlock":"0xa344e0","toAddress":["0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2"]}]
195 trace_filter fr_0xc3c [{"fromBlock":"0xa344e0","toBlock":"0xa344e0","fromAddress":["0xc3ca90684fd7b8c7e4be88c329269fc32111c4bd"]}]
200 trace_filter both [{"fromBlock":"0xa344e0","toBlock":"0xa344e0","fromAddress":["0xc3ca90684fd7b8c7e4be88c329269fc32111c4bd"],"toAddress":["0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2"]}]
205 trace_filter fail_2 [{"fromBlock":"0xa606ba","toBlock":"0x2dcaa9"}]
210 trace_filter bad_1 [{"fromBlock":"0x2328","toBlock":"0x2327"}]
#215 trace_filter slow_2 [{"fromBlock":"0xa606ba","toBlock":"0xa606ba"}]
#220 trace_filter 10700000 [{"fromBlock":"0xa344e0","toBlock":"0xa344e0"}]
250 trace_replayBlockTransactions fail ["0x3", ["stateDiff"]]
300 trace_replayTransaction fail ["0x02d4a872e096445e80d05276ee756cefef7f3b376bcec14246469c0cd97dad8f", ["fail"]]
320_erigon trace_call fail [{"input":"0x0","nonce":"0x0","from":"0x02fcf30912b6fe2b6452ee19721c6068fe4c7b61","gas":"0xf4240","to":"0x37a9679c41e99db270bda88de8ff50c0cd23f326","gasPrice":"0x4a817c800","value":"0x0"},["fail"],"latest"]
340 trace_callMany fail [[[{"from":"0x407d73d8a49eeb85d32cf465507dd71d507100c1","to":"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b","value":"0x186a0"},["fail"]],[{"from":"0x407d73d8a49eeb85d32cf465507dd71d507100c1","to":"0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b","value":"0x186a0"},["fail"]]],"latest"]
360 trace_rawTransaction fail ["0xd46e8dd67c5d32be8d46e8dd67c5d32be8058bb8eb970870f072445675058bb8eb970870f072445675",["fail"]]
#255 trace_replayBlockTransactions ["0x1",["trace"]]
#250 trace_replayBlockTransactions ["0x1"]
#265 trace_replayBlockTransactions ["0x100"]
#260 trace_replayBlockTransactions ["0x895441",["trace"]]
#275 trace_replayBlockTransactions ["0x895441",["vmTrace"]]
#270 trace_replayBlockTransactions ["0xCF9BF",["trace"]]
#285 trace_replayBlockTransactions ["0xDBBA1",["trace"]]
#280 trace_replayBlockTransactions ["0xDBBA1",["vmTrace"]]
#285 trace_replayBlockTransactions ["CF9BF",["trace"]]
#290 trace_replayTransactions ["CF9BF",["trace"]]
#295 trace_replayTransactions ["CF9BF",["trace"]]
305 trace_junk12 no_rpc []
# custom, experimental stuff
405_erigon trace_blockReward rew_0 ["0x0"]
410_erigon trace_blockReward rew_1 ["0x1"]
415_erigon trace_blockReward rew_2 ["0x2"]
420_erigon trace_blockReward rew_3 ["0x3"]
425_erigon trace_uncleReward unc_0 ["0x0"]
430_erigon trace_uncleReward unc_1 ["0x1"]
435_erigon trace_uncleReward unc_2 ["0x2"]
440_erigon trace_uncleReward unc_3 ["0x3"]
445_erigon trace_issuance iss_0 ["0x0"]
450_erigon trace_issuance iss_1 ["0x1"]
455_erigon trace_issuance iss_2 ["0x2"]
460_erigon trace_issuance iss_3 ["0x3"]
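Each fixture line above encodes one JSON-RPC call: a case id (with an optional `_erigon` suffix for Erigon-only methods), the method name, an optional tag (`fail`, `no_rpc`, `slow_2`, …), and the params as a JSON array; lines beginning with `#` are disabled. A minimal sketch of turning such a line into a request payload — the parsing rules here are inferred from the lines above, not taken from the actual test harness:

```python
import json

def parse_case(line: str):
    """Parse one fixture line into (case_id, method, tag, params).

    Inferred format: <id>[_erigon] <method> [<tag>] <json-params>
    Lines beginning with '#' are treated as disabled and return None.
    """
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    # Params always start at the first '[' of the JSON array.
    head, _, params_rest = line.partition("[")
    fields = head.split()
    case_id, method = fields[0], fields[1]
    tag = fields[2] if len(fields) > 2 else None
    params = json.loads("[" + params_rest)
    return case_id, method, tag, params

def to_request(case, rpc_id=1):
    """Build the JSON-RPC 2.0 payload for a parsed fixture case."""
    _, method, _, params = case
    return {"jsonrpc": "2.0", "method": method, "params": params, "id": rpc_id}

case = parse_case('405_erigon trace_blockReward rew_0 ["0x0"]')
payload = to_request(case)
# payload == {"jsonrpc": "2.0", "method": "trace_blockReward",
#             "params": ["0x0"], "id": 1}
```

The resulting dict can be POSTed as JSON to the rpcdaemon's HTTP endpoint with any HTTP client.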

go.mod

@@ -6,152 +6,33 @@ replace github.com/tendermint/tendermint => github.com/bnb-chain/tendermint v0.3
require (
gfx.cafe/open/4bytes v0.0.0-20221026030913-1f42cb43f802
github.com/RoaringBitmap/roaring v1.2.1
github.com/go-chi/chi/v5 v5.0.7
github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d
github.com/holiman/uint256 v1.2.1
github.com/jessevdk/go-flags v1.5.0
github.com/ledgerwatch/erigon v1.9.7-0.20221025025825-26fdf9169d27
github.com/ledgerwatch/erigon-lib v0.0.0-20221024025924-48ff56eead80
github.com/ledgerwatch/log/v3 v3.6.0
github.com/spf13/afero v1.9.2
github.com/spf13/cobra v1.5.0
golang.org/x/sync v0.1.0
google.golang.org/grpc v1.50.1
google.golang.org/protobuf v1.28.1
)
require (
crawshaw.io/sqlite v0.3.3-0.20210127221821-98b1f83c5508 // indirect
github.com/VictoriaMetrics/fastcache v1.12.0 // indirect
github.com/VictoriaMetrics/metrics v1.22.2 // indirect
github.com/ajwerner/btree v0.0.0-20211221152037-f427b3e689c0 // indirect
github.com/alecthomas/atomic v0.1.0-alpha2 // indirect
github.com/anacrolix/chansync v0.3.0 // indirect
github.com/anacrolix/dht/v2 v2.19.0 // indirect
github.com/anacrolix/envpprof v1.2.1 // indirect
github.com/anacrolix/generics v0.0.0-20220618083756-f99e35403a60 // indirect
github.com/anacrolix/go-libutp v1.2.0 // indirect
github.com/anacrolix/log v0.13.2-0.20220711050817-613cb738ef30 // indirect
github.com/anacrolix/missinggo v1.3.0 // indirect
github.com/anacrolix/missinggo/perf v1.0.0 // indirect
github.com/anacrolix/missinggo/v2 v2.7.0 // indirect
github.com/anacrolix/mmsg v1.0.0 // indirect
github.com/anacrolix/multiless v0.3.0 // indirect
github.com/anacrolix/stm v0.4.0 // indirect
github.com/anacrolix/sync v0.4.0 // indirect
github.com/anacrolix/torrent v1.47.0 // indirect
github.com/anacrolix/upnp v0.1.3-0.20220123035249-922794e51c96 // indirect
github.com/anacrolix/utp v0.1.0 // indirect
github.com/bahlo/generic-list-go v0.2.0 // indirect
github.com/benbjohnson/immutable v0.3.0 // indirect
github.com/benesch/cgosymbolizer v0.0.0-20190515212042-bec6fe6e597b // indirect
github.com/bits-and-blooms/bitset v1.2.2 // indirect
github.com/blang/semver v3.5.1+incompatible // indirect
github.com/bradfitz/iter v0.0.0-20191230175014-e8f45d346db8 // indirect
github.com/btcsuite/btcd v0.22.0-beta // indirect
github.com/c2h5oh/datasize v0.0.0-20220606134207-859f65c6625b // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/crate-crypto/go-ipa v0.0.0-20220916134416-c5abbdbdf644 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/deckarep/golang-set v1.8.0 // indirect
github.com/dlclark/regexp2 v1.4.1-0.20201116162257-a2a8dda75c91 // indirect
github.com/dop251/goja v0.0.0-20220405120441-9037c2b61cbf // indirect
github.com/dustin/go-humanize v1.0.0 // indirect
github.com/edsrzf/mmap-go v1.1.0 // indirect
github.com/emicklei/dot v1.0.0 // indirect
github.com/emirpasic/gods v1.18.1 // indirect
github.com/fjl/gencodec v0.0.0-20220412091415-8bb9e558978c // indirect
github.com/garslo/gogen v0.0.0-20170307003452-d6ebae628c7c // indirect
github.com/gballet/go-verkle v0.0.0-20220829125900-a702d458d33c // indirect
github.com/go-kit/kit v0.10.0 // indirect
github.com/go-logfmt/logfmt v0.5.1 // indirect
github.com/go-logr/logr v1.2.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-sourcemap/sourcemap v2.1.3+incompatible // indirect
github.com/go-stack/stack v1.8.1 // indirect
github.com/goccy/go-json v0.9.7 // indirect
github.com/gofrs/flock v0.8.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.4.2 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/btree v1.1.2 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 // indirect
github.com/huandu/xstrings v1.3.2 // indirect
github.com/huin/goupnp v1.0.3 // indirect
github.com/google/go-cmp v0.5.8 // indirect
github.com/ianlancetaylor/cgosymbolizer v0.0.0-20220405231054-a1ae3e4bba26 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/jackpal/go-nat-pmp v1.0.2 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/kevinburke/go-bindata v3.21.0+incompatible // indirect
github.com/klauspost/compress v1.15.11 // indirect
github.com/ledgerwatch/erigon-snapshot v1.1.1-0.20221025023844-6e716b9e651c // indirect
github.com/ledgerwatch/secp256k1 v1.0.0 // indirect
github.com/lispad/go-generics-tools v1.1.0 // indirect
github.com/ledgerwatch/erigon-lib v0.0.0-20221024025924-48ff56eead80 // indirect
github.com/ledgerwatch/log/v3 v3.6.0 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.16 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/mschoch/smat v0.2.0 // indirect
github.com/openacid/errors v0.8.1 // indirect
github.com/openacid/low v0.1.14 // indirect
github.com/openacid/must v0.1.3 // indirect
github.com/openacid/slim v0.5.11 // indirect
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 // indirect
github.com/pelletier/go-toml/v2 v2.0.5 // indirect
github.com/pion/datachannel v1.5.2 // indirect
github.com/pion/dtls/v2 v2.1.5 // indirect
github.com/pion/ice/v2 v2.2.6 // indirect
github.com/pion/interceptor v0.1.11 // indirect
github.com/pion/logging v0.2.2 // indirect
github.com/pion/mdns v0.0.5 // indirect
github.com/pion/randutil v0.1.0 // indirect
github.com/pion/rtcp v1.2.9 // indirect
github.com/pion/rtp v1.7.13 // indirect
github.com/pion/sctp v1.8.2 // indirect
github.com/pion/sdp/v3 v3.0.5 // indirect
github.com/pion/srtp/v2 v2.0.9 // indirect
github.com/pion/stun v0.3.5 // indirect
github.com/pion/transport v0.13.1 // indirect
github.com/pion/turn/v2 v2.0.8 // indirect
github.com/pion/udp v0.1.1 // indirect
github.com/pion/webrtc/v3 v3.1.42 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/quasilyte/go-ruleguard/dsl v0.3.21 // indirect
github.com/rs/cors v1.8.2 // indirect
github.com/rs/dnscache v0.0.0-20211102005908-e0241e321417 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/spaolacci/murmur3 v1.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/stretchr/testify v1.8.0 // indirect
github.com/tendermint/go-amino v0.14.1 // indirect
github.com/tendermint/tendermint v0.31.12 // indirect
github.com/tidwall/btree v1.3.1 // indirect
github.com/torquem-ch/mdbx-go v0.26.1 // indirect
github.com/ugorji/go/codec v1.1.13 // indirect
github.com/ugorji/go/codec/codecgen v1.1.13 // indirect
github.com/urfave/cli v1.22.9 // indirect
github.com/valyala/fastjson v1.6.3 // indirect
github.com/valyala/fastrand v1.1.0 // indirect
github.com/valyala/histogram v1.2.0 // indirect
github.com/xsleonard/go-merkle v1.1.0 // indirect
go.etcd.io/bbolt v1.3.6 // indirect
go.opentelemetry.io/otel v1.8.0 // indirect
go.opentelemetry.io/otel/trace v1.8.0 // indirect
go.uber.org/atomic v1.10.0 // indirect
golang.org/x/crypto v0.1.0 // indirect
golang.org/x/exp v0.0.0-20221018221608-02f3b879a704 // indirect
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 // indirect
golang.org/x/net v0.1.0 // indirect
golang.org/x/sys v0.1.0 // indirect
golang.org/x/text v0.4.0 // indirect
golang.org/x/time v0.1.0 // indirect
golang.org/x/tools v0.1.12 // indirect
google.golang.org/genproto v0.0.0-20211118181313-81c1377c94b1 // indirect
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.2.0 // indirect
google.golang.org/protobuf v1.28.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

go.sum

File diff suppressed because it is too large