Before you launch fully into production, it is worth simulating a set of independent nodes on your computer. This can be prepared and orchestrated with Docker Compose.
Before looking at the specific Compose elements, you need to define what the regular Docker elements are.
You will run containers. You can start by giving them meaningful names:
Alice's containers: sentry-alice, val-alice, and kms-alice.
Bob's containers: sentry-bob and val-bob.
Carol's container: node-carol.
Docker lets you simulate private networks. To achieve the network separation of the target setup above, you use Docker's user-defined networks. This means:
Alice's validator and key management system (KMS) are on their private network: name it net-alice-kms.
Alice's validator and sentry are on their private network: name it net-alice.
Bob's validator and sentry are on their private network: name it net-bob.
There is a public network, i.e. the world, on which both sentries and Carol's node run: name it net-public.
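Compose will create these user-defined networks for you when it starts the containers. If you wanted to sketch them manually with plain Docker, the equivalent would be roughly:

$ docker network create net-alice-kms
$ docker network create net-alice
$ docker network create net-bob
$ docker network create net-public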
Although every machine on the network is a bit different, in terms of Docker images there are only two image types:
The Tendermint nodes (validators, sentries, and regular nodes) will run checkersd within containers created from a single Docker image.
The Tendermint KMS node will run TMKMS from a different Docker image.
The node image contains, and runs by default, the checkers executable. You first have to compile it, and then build the image.
First, build the executable(s) that will be launched by Docker Compose within the Docker images. Depending on your platform, you will use checkersd-linux-amd64 or checkersd-linux-arm64.
If you have a CPU architecture that is neither amd64 nor arm64, update your Makefile accordingly.
If you copy-pasted directly into Makefile, do not forget to convert the spaces into tabs.
Now run one of the following commands:
$ make build-with-checksum
$ docker run --rm -it \
-v $(pwd):/checkers \
-w /checkers \
checkers_i \
make build-with-checksum
Use this command if you already built this checkers_i image, as this will take less time.
$ docker run --rm -it \
-v $(pwd):/checkers \
-w /checkers \
golang:1.18.7 \
make build-with-checksum
Use this command if you did not already have the checkers_i image. This command may take longer but should always work.
Now include the relevant executable inside your production image. You need to use a Debian/Ubuntu base image because you compiled on one in the previous step. Create a new Dockerfile-checkersd-debian with:
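The exact content is up to you. A minimal sketch of what this Dockerfile could contain, assuming the compiled executable sits under build/ and the target architecture comes in through the BUILDARCH build argument mentioned just below:

FROM --platform=linux debian:11-slim
ARG BUILDARCH
# Copy the executable compiled in the previous step into the image
COPY build/checkersd-linux-${BUILDARCH} /usr/local/bin/checkersd
ENTRYPOINT [ "checkersd" ]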
Depending on your installed version of Docker, you may have to add the flags:
--build-arg BUILDARCH=amd64
Or just manually replace ${BUILDARCH} with amd64 or whichever is your architecture.
Because you want to simulate production, you could make the case for using the smaller Alpine Docker image instead. Alpine and Debian use different C compilers, with different dynamically-linked C library dependencies. This makes their compiled executables incompatible, at least with the go build commands as they are declared in the Makefile.
You can instruct the compiler to link the C libraries statically by adding the CGO_ENABLED=0 option to go build, or even to your Makefile:
build-linux:
- GOOS=linux GOARCH=amd64 go build -o ./build/checkersd-linux-amd64 ./cmd/checkersd/main.go
- GOOS=linux GOARCH=arm64 go build -o ./build/checkersd-linux-arm64 ./cmd/checkersd/main.go
+ CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o ./build/checkersd-linux-amd64 ./cmd/checkersd/main.go
+ CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o ./build/checkersd-linux-arm64 ./cmd/checkersd/main.go
Then run make build-with-checksum again and use alpine in a new Dockerfile:
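Such an Alpine Dockerfile could be as small as the following sketch, assuming you name it, say, Dockerfile-checkersd-alpine:

FROM --platform=linux alpine
ARG BUILDARCH
# Copy the now statically-linked executable into the image
COPY build/checkersd-linux-${BUILDARCH} /usr/local/bin/checkersd
ENTRYPOINT [ "checkersd" ]

You would then build and tag it, for instance with docker build -f Dockerfile-checkersd-alpine -t checkersd_i --build-arg BUILDARCH=amd64 ., so that the checkersd_i image used below exists.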
For maximum portability of your executables, you may in fact want to add CGO_ENABLED=0 to all your go build commands.
Now you can run it:
$ docker run --rm -it checkersd_i help
You should see a recognizable list of commands.
Each Docker container will run checkersd as root, which does not matter because it all happens in a container. Therefore, there is no need to create a specific additional user like you would in a serious production setting. For the same reason, there is also no need to create a service to launch it.
Alice runs the Tendermint Key Management System on a separate machine. You need to prepare its image. The image will contain the executable, which you have to compile from its Rust code.
You define a disposable image (the first stage) that clones the code and compiles it. This involves the download of Rust crates (i.e. packages). This image ends up being large but is then disposed of.
You define a slim image (the second stage) in which you only copy the compiled file. This is the image you keep for production. It ends up being small.
The disposable image needs to use at least Rust version 1.56. Fortunately, there are ready-made Docker images. Pick rust:1.64.0.
Next, the executable needs to be compiled for the specific device on which your key will be stored. You do not use hardware keys in this setup, so when building it you use the softsign extension. This is achieved by adding the flag --features=softsign.
Finally, what version of the TMKMS should you compile? A given TMKMS version can work with a limited set of specific Tendermint versions. Find the Tendermint version of your checkers code with:
$ grep tendermint/tendermint go.mod
It should return something like this:
github.com/tendermint/tendermint v0.34.19
Because this is version 0.34, it is a good idea to use the KMS from version 0.10.0 upwards. At the time of writing, version 0.12.2 still seems to support Tendermint v0.34. It is under the v0.12.2 tag on GitHub. Pick this one.
Having collected the requisites, you can create the multi-stage Docker image in a new Dockerfile-tmkms-debian:
FROM --platform=linux rust:1.64.0 AS builder
RUN apt-get update
RUN apt-get install libusb-1.0-0-dev --yes
ENV LOCAL=/usr/local
ENV RUSTFLAGS=-Ctarget-feature=+aes,+ssse3
ENV TMKMS_VERSION=v0.12.2
WORKDIR /root
RUN git clone --branch ${TMKMS_VERSION} https://github.com/iqlusioninc/tmkms.git
WORKDIR /root/tmkms
RUN cargo build --release --features=softsign
# The production image starts here
FROM --platform=linux debian:11-slim
# ENV values do not carry over from the builder stage, so redefine the destination here
ENV LOCAL=/usr/local
COPY --from=builder /root/tmkms/target/release/tmkms ${LOCAL}/bin
ENTRYPOINT [ "tmkms" ]
Each container needs access to its private information, such as keys, genesis, and database. To facilitate data access and separation between containers, create folders that will map as a volume to the default /root/.checkers or /root/tmkms inside containers. One for each container:
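What exactly to run depends on your folder layout. A minimal sketch, assuming everything lives under a prod-sim folder and that each node is initialized with the standard checkersd init command, using the chain-id decided just below:

$ mkdir -p prod-sim/desk-alice prod-sim/desk-bob \
    prod-sim/kms-alice prod-sim/node-carol \
    prod-sim/sentry-alice prod-sim/sentry-bob \
    prod-sim/val-alice prod-sim/val-bob
$ docker run --rm -i \
    -v $(pwd)/prod-sim/val-alice:/root/.checkers \
    checkersd_i \
    init val-alice --chain-id checkers-1

Repeat the init command for each of the other checkersd folders, i.e. all of them except kms-alice, which maps to /root/tmkms instead.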
As a secondary effect, this also creates a first version of config/genesis.json on every node, although you will start work with the one on desk-alice.
Early decisions that you can make at this stage are:
Deciding that the chain will be named checkers-1. It is a convention to append a number in case it has to go through a hard fork.
Deciding that the staking denomination will be called upawn, with the understanding that 1 PAWN equals 1 million upawn.
Do you need that many decimals? Yes and no. Depending on your version of the Cosmos SDK, there is a hard-coded number of base tokens that a validator has to stake, and that number is 10,000,000. If you did not have enough decimals, the human-readable token would need a lot of zeroes.
The default initialization sets the base token to stake, so to get it to be upawn you need to make some changes:
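For instance, because you will start work from desk-alice's genesis, one way to switch the denomination there is to reuse the sed-in-a-container pattern that appears later in this section. This assumes that stake only appears in genesis.json where the staking denomination is meant:

$ docker run --rm -i \
    -v $(pwd)/prod-sim/desk-alice:/root/.checkers \
    --entrypoint sed \
    checkersd_i \
    -i 's/"stake"/"upawn"/g' /root/.checkers/config/genesis.json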
First, you need to create the two validators' operator keys. Such a key is not meant to stay on the validating node when it runs; it is meant to be used only at certain junctures (for instance, to stake on behalf of Alice or Bob from their respective desktop computers). So you are going to create them by running "desktop" containers:
Use the --keyring-backend file.
Keep them in the mapped volume with --keyring-dir /root/.checkers/keys.
Create on desk-alice the operator key for val-alice:
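A sketch of this step, using the same keyring flags as the commands further down:

$ docker run --rm -it \
    -v $(pwd)/prod-sim/desk-alice:/root/.checkers \
    checkersd_i \
    keys \
    --keyring-backend file --keyring-dir /root/.checkers/keys \
    add alice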
Use a passphrase you can remember. It does not need to be exceptionally complex as this is all a local simulation. This exercise uses password and stores this detail on file, which will come in handy.
Now you need to import val-alice's consensus key into secrets/val-alice-consensus.key.
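Assuming you built the KMS image from the Dockerfile above and tagged it, say, tmkms_i, a sketch of the import with TMKMS's softsign import command could be:

$ mkdir -p prod-sim/kms-alice/secrets
$ docker run --rm -i \
    -v $(pwd)/prod-sim/kms-alice:/root/tmkms \
    -v $(pwd)/prod-sim/val-alice:/root/.checkers \
    tmkms_i \
    softsign import /root/.checkers/config/priv_validator_key.json \
    /root/tmkms/secrets/val-alice-consensus.key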
The private key will no longer be needed on val-alice. However, during the genesis creation Alice will need access to her consensus public key. Save it in a new pub_validator_key-val-alice.json on Alice's desk without any new line:
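One way to do this, assuming the consensus key pair is still on val-alice at this point and that you keep the file under desk-alice/config, is with Tendermint's show-validator command:

$ docker run --rm -i \
    -v $(pwd)/prod-sim/val-alice:/root/.checkers \
    checkersd_i \
    tendermint show-validator \
    | tr -d '\n' \
    > prod-sim/desk-alice/config/pub_validator_key-val-alice.json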
On start, val-alice may still recreate a missing private key file, due to how defaults are handled in the code. To prevent that, you can instead copy over the file from sentry-alice, where the key has no value.
In the above, val-alice is the future network name of Alice's validator, and it will indeed be resolved to an IP address via Docker's internal DNS. In a real production setup, you would use a fully resolved IP address to avoid the vagaries of DNS.
Do not forget that you must tell Alice's validator to listen on port 26659. In val-alice/config/config.toml:
Make it listen on an IP address that is within the KMS private network.
0.0.0.0 represents all addresses of the node. In a real production setup, you would choose the IP address of the network card that is on the network common with kms-alice.
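To apply this, you can again use sed inside a throwaway container. A sketch, assuming the priv_validator_laddr line in config.toml still holds its default value:

$ docker run --rm -i \
    -v $(pwd)/prod-sim/val-alice:/root/.checkers \
    --entrypoint sed \
    checkersd_i \
    -Ei 's|^priv_validator_laddr = .*$|priv_validator_laddr = "tcp://0.0.0.0:26659"|g' \
    /root/.checkers/config/config.toml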
Make sure it will not look for the consensus key or state on file, as the KMS now takes care of these. For instance, for the state file:
$ docker run --rm -i \
-v $(pwd)/prod-sim/val-alice:/root/.checkers \
--entrypoint sed \
checkersd_i \
-Ei 's/^priv_validator_state_file/# priv_validator_state_file/g' \
/root/.checkers/config/config.toml
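The consensus key entry can be commented out in the same way; a sketch of the matching command:

$ docker run --rm -i \
    -v $(pwd)/prod-sim/val-alice:/root/.checkers \
    --entrypoint sed \
    checkersd_i \
    -Ei 's/^priv_validator_key_file/# priv_validator_key_file/g' \
    /root/.checkers/config/config.toml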
Before moving on, make sure that the validator still has a priv_validator_key.json because the code may complain if the file cannot be found. You can copy the key from sentry-alice, which does not present any risk:
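A simple way to do this from the host, assuming the standard config locations:

$ cp prod-sim/sentry-alice/config/priv_validator_key.json \
    prod-sim/val-alice/config/priv_validator_key.json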
In this setup, Alice starts with 1,000 PAWN and Bob with 500 PAWN, of which Alice stakes 60 and Bob 40. With these amounts, the network cannot start if either of them is offline. Get their respective addresses:
$ ALICE=$(echo password | docker run --rm -i \
-v $(pwd)/prod-sim/desk-alice:/root/.checkers \
checkersd_i \
keys \
--keyring-backend file --keyring-dir /root/.checkers/keys \
show alice --address)
Replace password with the passphrase you picked when creating the keys.
Have Alice add her initial balance in the genesis:
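A sketch of this step, using the standard add-genesis-account command and remembering that 1,000 PAWN is 1,000,000,000upawn:

$ docker run --rm -it \
    -v $(pwd)/prod-sim/desk-alice:/root/.checkers \
    checkersd_i \
    add-genesis-account $ALICE 1000000000upawn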
Create Alice's genesis transaction using the specific validator public key that you saved on file, and not the key that would be taken by default from a local priv_validator_key.json (which no longer holds the right key):
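A sketch of this genesis transaction, assuming Alice stakes 60 PAWN (60,000,000upawn) and that the public key was saved under desk-alice/config as suggested above:

$ echo password | docker run --rm -i \
    -v $(pwd)/prod-sim/desk-alice:/root/.checkers \
    checkersd_i \
    gentx alice 60000000upawn \
    --keyring-backend file --keyring-dir /root/.checkers/keys \
    --chain-id checkers-1 \
    --pubkey "$(cat prod-sim/desk-alice/config/pub_validator_key-val-alice.json)"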
It is useful to know this --pubkey method. If you were using a hardware key located on the KMS, this would be the canonical way of generating your genesis transaction.
Because the validators are on a private network and fronted by sentries, you need to set up the configuration of each node so that they can find each other, and also make sure that the sentries keep the validators' addresses private. What are the nodes' public keys? For instance, for val-alice, it is:
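In practice this is the node ID, which you can obtain with Tendermint's show-node-id command:

$ docker run --rm -i \
    -v $(pwd)/prod-sim/val-alice:/root/.checkers \
    checkersd_i \
    tendermint show-node-id

This ID would then typically appear as <node-id>@val-alice:26656 in sentry-alice's persistent_peers.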
sentry-alice also has access to sentry-bob and node-carol, although these nodes should probably not be considered persistent. You will add them under "seeds". First, collect the same information from these nodes:
Each container needs to access its own private folder, prepared earlier, and only that folder. Declare the volume mappings with paths relative to the docker-compose.yml file:
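The full file declares all six services; as a partial sketch of the idea, assuming the Compose file sits in prod-sim itself and the images are tagged as used in this section:

version: "3.7"
services:
  val-alice:
    image: checkersd_i
    container_name: val-alice
    command: start
    volumes:
      - ./val-alice:/root/.checkers
    networks:
      - net-alice-kms
      - net-alice
  sentry-alice:
    image: checkersd_i
    container_name: sentry-alice
    command: start
    volumes:
      - ./sentry-alice:/root/.checkers
    networks:
      - net-alice
      - net-public
networks:
  net-alice-kms:
  net-alice:
  net-bob:
  net-public: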
With all these computers on their Docker networks, you may still want to access one of them to query the blockchain, or to play games. In order to make your host computer look like an open node, expose Carol's node on all addresses of your host:
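For instance, node-carol's service entry could publish the RPC port that is queried from the host further down:

  node-carol:
    image: checkersd_i
    container_name: node-carol
    command: start
    ports:
      - "0.0.0.0:26657:26657"
    volumes:
      - ./node-carol:/root/.checkers
    networks:
      - net-public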
Your six containers are running. To monitor their status, and confirm that they are running, use the provided Docker container interface.
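Assuming you launched everything with something like docker compose --project-name checkers-prod up --detach from the folder that contains docker-compose.yml, a quick command-line check of their status could be:

$ docker ps --format "table {{.Names}}\t{{.Status}}"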
Now you can connect to node-carol to start interacting with the blockchain as you would a normal node. For instance, to ask a simple status:
$ docker run --rm -it \
--network checkers-prod_net-public \
checkersd_i status \
--node "tcp://node-carol:26657"
Note how the net-public network name is prefixed with the Compose project name. If in doubt, you can run:
$ docker network ls
Because node-carol's RPC port is published on your host, you can also query it directly from the host. For instance, on macOS:
$ ./build/checkersd-darwin-amd64 status \
--node "tcp://localhost:26657"
Or on Windows:
> ./build/checkersd-windows-amd64 status \
--node "tcp://localhost:26657"
Or from another Docker container that is not on the Compose network, going through your host's address:
$ docker run --rm -it \
checkersd_i status \
--node "tcp://192.168.0.2:26657"
Here you would replace 192.168.0.2 with the actual IP address of your host computer.
From this point on everything you already know how to do, such as connecting to your local node, applies.
Whenever you submit a transaction to node-carol, it will be propagated to the sentries and onward to the validators.
At this juncture, you may ask: Is it still possible to run a full game in almost a single block, as you did earlier in the CosmJS integration tests? After all, when node-carol passes on the transactions as they come, it is not certain that the recipients will honor the order in which they were received. Of course, they make sure to order Alice's transactions, thanks to the sequence, as well as Bob's. But do they keep the A-B-A-B... order in which they were sent?
To find out, you need to credit the tests' Alice and Bob accounts:
Get your prod setup's respective addresses for Alice and Bob:
$ alice=$(echo password | docker run --rm -i \
-v $(pwd)/prod-sim/desk-alice:/root/.checkers \
checkersd_i:v1-alpine \
keys \
--keyring-backend file --keyring-dir /root/.checkers/keys \
show alice --address)
$ bob=$(echo password | docker run --rm -i \
-v $(pwd)/prod-sim/desk-bob:/root/.checkers \
checkersd_i:v1-alpine \
keys \
--keyring-backend file --keyring-dir /root/.checkers/keys \
show bob --address)
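With these in hand, and with the tests' Alice and Bob addresses taken from the CosmJS exercise (shown here as a placeholder), a sketch of the crediting transaction sent via node-carol could be:

$ TEST_ALICE=cosmos1...   # placeholder: paste the tests' Alice address from the CosmJS exercise
$ echo password | docker run --rm -i \
    --network checkers-prod_net-public \
    -v $(pwd)/prod-sim/desk-alice:/root/.checkers \
    checkersd_i:v1-alpine \
    tx bank send $alice $TEST_ALICE 300000000upawn \
    --keyring-backend file --keyring-dir /root/.checkers/keys \
    --chain-id checkers-1 \
    --node "tcp://node-carol:26657" \
    --yes

Repeat the same with desk-bob and $bob for the tests' Bob address, adjusting the amounts to whatever the tests expect.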
If one of your services (for example, sentry-bob) fails to start because it could not resolve one of the other containers, you can restart that service independently with:
$ docker compose restart sentry-bob
If you want to get more detailed errors from your KMS, you can add a flag in its service definition:
If you want to erase all states after a good run, and if you have a Git commit from which to restore the state files, you can create a new script for that.
Now may be a good time to prepare a standalone setup that can be used by anyone who wants to test a checkers blockchain with minimum effort. The target setup ought to have the following characteristics:
It uses a single Dockerfile.
Such an image could be generated and uploaded into a Docker image registry to increase ease of use.
It can be run by someone who just wants to try checkers without going through node and genesis setups.
The Dockerfile does not need to be in the repository to be usable. It could be copied elsewhere and still work, i.e. no ADDing local files.
The image(s) should be as small as is reasonable.
It uses stake instead of upawn, and also has a token denomination, so as to be compatible with the current state of the checkers CosmJS exercise.
It also provides a faucet to further facilitate tests.
It sacrifices key safety to increase ease of use.
The CosmJS exercise already references this standalone Dockerfile, so this is a circular reference. You can still work on it on your own now.
When running ignite chain serve you also get a faucet, which was called when running the CosmJS integration tests. In fact, CosmJS also offers a faucet package, whose API differs from that of Ignite's faucet. If you went through the CosmJS exercise, you saw this API being called too.
You assemble this multi-stage Dockerfile step by step, starting with the checkers part:
1. Build the checkers executable as you have learned in this section, but this time from the public repository, so as not to depend on local files:
FROM --platform=linux golang:1.18.7-alpine AS builder
ENV CHECKERS_VERSION=main
RUN apk add --update --no-cache make git
WORKDIR /root
RUN git clone --depth 1 --branch ${CHECKERS_VERSION} https://github.com/cosmos/b9-checkers-academy-draft.git checkers
WORKDIR /root/checkers
RUN go build -o ./build/checkersd ./cmd/checkersd/main.go
FROM --platform=linux alpine
COPY --from=builder /root/checkers/build/checkersd /usr/local/bin/checkersd
2. To offer maximum determinism, you are going to reuse unprotected keys. First you need to create them with checkersd separately, somewhere unimportant such as a temporary container:
$ checkersd keys add alice --keyring-backend test
This returns something like:
- name: alice
type: local
address: cosmos1am3fnp5dd6nndk5jyjq9mpqh3yvt2jmmdv83xn
pubkey: '{"@type":"/cosmos.crypto.secp256k1.PubKey","key":"A/E6dHn3W2XvCrLkhp/dNxAQyVpmduxEXPBg/nP/PyMa"}'
mnemonic: ""
**Important** write this mnemonic phrase in a safe place.
It is the only way to recover your account if you ever forget your password.
zebra burden afford work power afraid field creek laugh govern upgrade project glue ceiling lounge mobile romance pear relief either panel expect eagle jacket
Make a note of the mnemonic, so as to reuse it in the faucet's definition.
Moving on to the faucet, you continue adding to the same Dockerfile.
1. You start its definition as a separate, independent stage:
FROM --platform=linux node:18.7-alpine AS cosmos-faucet
Install the CosmJS faucet package:
FROM --platform=linux node:18.7-alpine AS cosmos-faucet
+ ENV COSMJS_VERSION=0.28.11
+ RUN npm install @cosmjs/faucet@${COSMJS_VERSION} --global --production
2. Configure the faucet:
RUN npm install @cosmjs/faucet@${COSMJS_VERSION} --global --production
+ ENV FAUCET_CONCURRENCY=2
+ ENV FAUCET_PORT=4500
+ ENV FAUCET_GAS_PRICE=0.001stake
+ ENV FAUCET_MNEMONIC="zebra burden afford work power afraid field creek laugh govern upgrade project glue ceiling lounge mobile romance pear relief either panel expect eagle jacket"
+ ENV FAUCET_ADDRESS_PREFIX=cosmos
+ ENV FAUCET_TOKENS="stake, token"
+ ENV FAUCET_CREDIT_AMOUNT_STAKE=100
+ ENV FAUCET_CREDIT_AMOUNT_TOKEN=100
+ ENV FAUCET_COOLDOWN_TIME=0
Be aware:
A concurrency of at least 2 is necessary for the CosmJS exercise, because when crediting accounts in the tests' before hook, it launches two simultaneous requests. The faucet does not internally keep track of the accounts' sequences and instead uses its distributor accounts in a round-robin fashion.
You used port 4500 to mimic that of Ignite's faucet, so as to be conveniently compatible with the CosmJS exercise.
You pasted the mnemonic that you obtained in the previous key-creation steps.
You reused the token denominations as found in the CosmJS exercise.
3. Finish the faucet declaration with the port to share and the default command to launch:
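A possible way to finish this stage, assuming you rely on the package's cosmos-faucet binary as the entry point and pass the node's RPC address at launch time:

+ EXPOSE 4500
+ ENTRYPOINT [ "cosmos-faucet" ]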
The faucet needs about 20 seconds to become operational, as it sends four transactions to its distributor accounts. Wait that long before launching any CosmJS tests.
Be aware:
Both processes are started with --detach, which is how they would typically be started by users who do not care about the details. If you get errors, stop, remove this flag, and restart to see the logs.
Checkers is started with --name checkers, whose name is then reused in the node address http://checkers:26657 when launching the faucet.
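The exact commands depend on how you tag the images when building the two stages of this Dockerfile. Purely as an illustration, assuming hypothetical tags checkers-standalone_i and cosmos-faucet_i, a user-defined network so that the containers can resolve each other by name, and a checkers image whose default command initializes and starts the chain:

$ docker network create checkers-net
$ docker run --detach --name checkers \
    --network checkers-net \
    -p 26657:26657 \
    checkers-standalone_i
$ docker run --detach --name cosmos-faucet \
    --network checkers-net \
    -p 4500:4500 \
    cosmos-faucet_i start http://checkers:26657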
On a side note, if you want to access Alice's address in order to check her balance, you can run:
$ docker exec -it checkers \
sh -c "checkersd query bank balances \$ALICE"
And to check the faucet status, you can use:
$ curl http://localhost:4500/status
You now have a container running both the checkers and a faucet. You are ready to run your CosmJS tests in client.