Performance testing for Holochain, modelled as load tests. The name is a reference to aerodynamic wind-tunnel testing; it is a convenient way to refer to the project, but the aerodynamics language does not extend into the code.
The wind-tunnel framework, which is found in ./framework, is a collection of crates that implement the core logic for
running load tests and collecting results. This code is not specific to Holochain and should stay that way. It provides extension
points for hooking in Holochain-specific behaviour.
It is divided into the following crates:
- wind_tunnel_instruments: Tools for collecting and reporting metrics.
- wind_tunnel_instruments_derive: Procedural macros to generate code that helps integrate the wind_tunnel_instruments crate.
- wind_tunnel_runner: The main logic for running load tests. This is a library that is designed to be embedded inside a test binary.
The bindings, found in ./bindings, customise the wind-tunnel framework to Holochain. They are what you would be
consuming when using wind-tunnel to test Holochain.
The bindings contain the following crates:
- holochain_client_instrumented: A wrapper around the holochain_client that uses wind_tunnel_instruments and wind_tunnel_instruments_derive to instrument the client. It exports an API that is nearly identical to the holochain_client crate, except that when constructing client connections you need to provide a reporter which it can write results to.
- holochain_wind_tunnel_runner: A wrapper around the wind_tunnel_runner crate that provides Holochain-specific code to be used with the wind_tunnel_runner. The wind_tunnel_runner is re-exported, so you should just use this crate as your runner when creating tests for Holochain.
Kitsune is a framework for writing distributed hash table based network applications. Bindings for Wind Tunnel enable developers to write scenarios to test Kitsune.
The bindings contain the following crates:
- kitsune_client_instrumented: A test application written specifically for Wind Tunnel. With Kitsune at its core, instances are created that publish text messages, which are sent to all participating peers. The API is minimal, as the focus is on observing the speed and resilience of message delivery to peers.
- kitsune_wind_tunnel_runner: A wrapper around the wind_tunnel_runner crate that provides Kitsune-specific code to be used with the wind_tunnel_runner. It provides CLI options to configure scenarios.
The scenarios, found in ./scenarios, are what describe the performance testing scenarios to be run against Holochain. Each scenario
is a binary that uses the holochain_wind_tunnel_runner as a library. When it is run it will have all the capabilities that wind-tunnel provides.
There is more information about how to create scenarios in a separate section.
Note
This section is optional; you can skip it if you are working outside this repository or if you choose your own hApp packaging strategy.
When a scenario is run, it may install a hApp from any place or using any method that Holochain supports. While working in this repository, there are some helpers to make this easier.
The zomes directory contains Rust projects that are intended to be built into zomes. Please check the directory structure and
naming conventions of existing zomes when adding new ones. In particular:
- Each zome should be in its own directory with a name that describes its purpose.
- Each zome should keep its coordinator and integrity zomes separate, as Rust projects in coordinator and integrity directories.
- Each zome should reference the shared zome build script as build = "../../wasm_build.rs".
- The library that gets produced by the zome should be consistently named in the [lib] section as <zome_name>_(coordinator|integrity).
When you want to use one or more zomes in a scenario, you should package them into a hApp for that scenario. To achieve this your scenario needs to do three things:
- Reference the custom build script, which will package the zomes into a hApp for you, as build = "../scenario_build.rs".
- Add custom sections to the Cargo.toml to describe the hApps you need in your scenario. There is an example at the end of this section.
- Reference the installed app from your scenario using the provided macro scenario_happ_path!("<hApp name>"). This produces a std::path::Path that can be passed to Holochain when asking it to install a hApp from the file system (see the sketch after the Cargo.toml example below).
Adding a hApp to your scenario using the Cargo.toml:
[package.metadata.required-dna] # This can either be a single DNA or you can specify this multiple times as a list using [[package.metadata.required-dna]]
name = "return_single_value" # The name to give the DNA that gets built
zomes = ["return_single_value"] # The name(s) of the zomes to include in the DNA, which must match the directory name in `./zomes`
[package.metadata.required-happ] # This can either be a single hApp or you can specify this multiple times as a list using [[package.metadata.required-happ]]
name = "return_single_value" # The name to give the hApp that gets built
dnas = ["return_single_value"] # The name(s) of the DNA to include in the hApp, which must match the name(s) given above.If you need to debug this step, you can run cargo build -p <your-scenario-crate> and check the dnas and happs directories.
The Wind Tunnel framework is designed as a load testing tool. This means that the framework is designed to apply user-defined load to a system and measure the system's response to that load. At a high-level there are two modes of operation. Either you run the scenario and the system on the same machine and write the scenario to apply as much load as possible. Or you run the system in a production-like environment and write the scenario to be distributed across many machines. The Wind Tunnel framework does not distinguish between these two modes of operation and will always behave the same way. It is up to you to write scenarios that are appropriate for each mode of operation.
Load is applied to the system by agents. An agent is a single thread of execution that repeatedly applies the same behaviour to the system. This is in the form of a function which is run repeatedly by Wind Tunnel. There are either many agents running in a single scenario to maximise load from a single machine, or many scenarios running in parallel that each have a single agent. There is nothing stopping you from distributing the scenario and also running multiple agents but these are the suggested layouts to design scenarios around.
In general a scenario consists of setup and teardown hooks, and an agent behaviour to apply load to the system. There are global setup and teardown hooks that run once per scenario run. There are also agent setup and teardown hooks that run once per agent during a scenario run. There are then one or more agent behaviours. For simple tests you just define a single behaviour and all agents will behave the same way. For more complex tests you can define multiple behaviours and assign each agent to one of them. This allows more complex test scenarios to be described where different agents take different actions and may interact with each other. For example, you might have some agents creating data and other agents just reading the data.
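As a rough sketch of how those hooks fit together, the builder below registers a per-agent setup hook and two named behaviours. The registration method names other than use_agent_behaviour (use_agent_setup, use_named_agent_behaviour) are assumptions about the builder API, and the Holochain bindings' context types from later in this guide are used for concreteness; check the wind_tunnel_runner documentation for the exact names:

use holochain_wind_tunnel_runner::prelude::*;

fn main() -> WindTunnelResult<()> {
    let builder = ScenarioDefinitionBuilder::<HolochainRunnerContext, HolochainAgentContext>::new_with_init(
        env!("CARGO_PKG_NAME"),
    )
    .with_default_duration_s(60)
    // Assumed method name: runs once per agent before the behaviour loop starts.
    .use_agent_setup(agent_setup)
    // Assumed method name: registers named behaviours so agents can be assigned different roles.
    .use_named_agent_behaviour("write", write_behaviour)
    .use_named_agent_behaviour("read", read_behaviour);
    run(builder)?;
    Ok(())
}

fn agent_setup(_ctx: &mut AgentContext<HolochainRunnerContext, HolochainAgentContext>) -> HookResult {
    Ok(()) // e.g. install a hApp and open connections here
}

fn write_behaviour(_ctx: &mut AgentContext<HolochainRunnerContext, HolochainAgentContext>) -> HookResult {
    Ok(()) // agents assigned to "write" create data
}

fn read_behaviour(_ctx: &mut AgentContext<HolochainRunnerContext, HolochainAgentContext>) -> HookResult {
    Ok(()) // agents assigned to "read" read that data back
}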
Wind Tunnel is not responsible for capturing information about your system. It can store the information that you collect and do some basic analysis on it. Alternatively, it can push metrics to InfluxDB. But it is up to you to collect the information that you need and to analyse it in detail. For example, the Wind Tunnel bindings for Holochain capture API response times on the app and admin interfaces and automatically report them to Wind Tunnel, but if you need to measure other things then you will need to write your own code to do that.
In this first mode of operation you want to run the scenario and the system on the same machine. You should write the scenario to apply as much load as possible to the system. That means keeping your agent behaviour hook as fast as possible, preferably by doing as much setup as possible in the agent setup hook and then performing only simple actions in the agent behaviour hook.
This kind of test is good for finding the limits of the system in a controlled environment. It can reveal things like high memory usage, response times degrading over time and other bottlenecks in performance.
It may be useful to distribute this type of test. However, if it is written to maximise load then it only makes sense to distribute it if the target system is also distributed in some way. With Holochain, for example, this wouldn't make sense because although Holochain is distributed, it is not distributed in the sense of scaling performance for a single app.
In this second mode of operation you can still design and run the scenario on a single machine but that is just for development and the value comes from running it in a distributed way. The intended configuration is to have many copies of the scenario binary distributed across many machines. Each scenario binary will be configured to run a single agent. All the scenarios are configured to point at the same test system. When testing Holochain, for example, Holochain is distributed first then a scenario binary is placed on each node with Holochain and points at the local interface for Holochain.
Rather than looking to stress test the system in this mode, you are looking to measure the system's response to a realistic load. Wind Tunnel does not understand this distinction, but you are permitted to block the agent behaviour hook to slow down the load that the Wind Tunnel runner will apply. This allows you to be quite creative when designing your load pattern. For example, you could define a common agent behaviour function and then create multiple agent behaviour hooks within your scenario that use the common function at different rates. This would simulate varied behaviour by different agents.
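A minimal sketch of that pattern is shown below; the function names are illustrative and the context types come from the Holochain bindings used later in this guide. Both behaviours call the same helper but pause for different intervals, so agents assigned to each behaviour apply load at different rates:

use std::time::Duration;
use holochain_wind_tunnel_runner::prelude::*;

// The shared action that every agent performs; replace this with a real call against the
// system under test.
fn common_action(
    ctx: &mut AgentContext<HolochainRunnerContext, HolochainAgentContext>,
) -> HookResult {
    println!("{} performing the shared action", ctx.agent_name());
    Ok(())
}

// Agents assigned to this behaviour act roughly once per second.
fn frequent_behaviour(
    ctx: &mut AgentContext<HolochainRunnerContext, HolochainAgentContext>,
) -> HookResult {
    common_action(ctx)?;
    std::thread::sleep(Duration::from_secs(1));
    Ok(())
}

// Agents assigned to this behaviour act roughly twice per minute.
fn occasional_behaviour(
    ctx: &mut AgentContext<HolochainRunnerContext, HolochainAgentContext>,
) -> HookResult {
    common_action(ctx)?;
    std::thread::sleep(Duration::from_secs(30));
    Ok(())
}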
The following is a quick-start guide to run a scenario in the wind-tunnel repo locally with holochain metrics enabled and then upload the recorded metrics to a locally running instance of InfluxDB.
- Enter nix shell by running in a terminal:
nix develop
- Start a local instance of InfluxDB by running
influxd
which will keep this terminal occupied.
- Open a second terminal to configure InfluxDB and populate it with template variables and dashboards defined in
./influx/templates. To do so, enter the nix shell again
nix develop
and then run:
configure_influx
- Before running the scenario we want to set the HOLOCHAIN_INFLUXIVE_FILE variable so that the Holochain process records metrics and writes them to a .influx file:
export HOLOCHAIN_INFLUXIVE_FILE=$WT_METRICS_DIR/holochain.influx
- Now you can run a scenario. Available scenario commands are documented in the scenario's README, for example in
scenarios/app_install/README.md:
RUST_LOG=info cargo run --package app_install -- --behaviour minimal --duration 300
However, since we want to record metrics and upload them to InfluxDB, we additionally add the option --reporter=influx-file:
RUST_LOG=info cargo run --package app_install -- --behaviour minimal --duration 300 --reporter=influx-file
which will write the metrics to a file in the folder ./telegraf/metrics.
- Once the scenario run has completed, we can upload the metrics to InfluxDB by running the following command:
nix run .#local-upload-metrics
- With the metrics uploaded to InfluxDB, we can finally run the summariser to generate a summary report:
cargo run --package summariser
Note
Writing scenarios requires some knowledge of wind-tunnel's methodology. That is assumed knowledge for this section!
Writing a Wind Tunnel scenario is relatively straightforward. The complexity is mostly in the measurement and analysis of the system once the scenario is running. To begin, you need a Rust project with a single binary target.
cargo new --bin --edition 2021 my_scenario
You will probably need more dependencies at some point, but the minimum to get started are the holochain_wind_tunnel_runner and
holochain_types crates.
cargo add holochain_wind_tunnel_runner
cargo add holochain_types

If this scenario is being written inside this repository then there are some extra setup steps. Please see the project layout docs.
Add the following imports to the top of your main.rs:
use holochain_types::prelude::ExternIO;
use holochain_wind_tunnel_runner::prelude::*;
use holochain_wind_tunnel_runner::scenario_happ_path;

Then replace your main function with the following:
fn main() -> WindTunnelResult<()> {
let builder = ScenarioDefinitionBuilder::<HolochainRunnerContext, HolochainAgentContext>::new_with_init(
env!("CARGO_PKG_NAME"),
)
.with_default_duration_s(60)
.use_agent_behaviour(agent_behaviour);
run(builder)?;
Ok(())
}

This is the basic structure of a Wind Tunnel scenario. The ScenarioDefinitionBuilder is used to define the scenario. It includes
a CLI which will allow you to override some of the defaults that are set in your code. Using the builder you can configure your hooks
which are just Rust functions that take a context and return a WindTunnelResult.
The run function is then called with the builder. At that point the Wind Tunnel runner takes over and configures, then runs your scenario.
Before you can run this, you'll need to provide the agent behaviour hook. Add the following to your main.rs:
fn agent_behaviour(
ctx: &mut AgentContext<HolochainRunnerContext, HolochainAgentContext>,
) -> HookResult {
println!("Hello from, {}", ctx.agent_name());
std::thread::sleep(std::time::Duration::from_secs(1));
Ok(())
}

This is just an example hook and you will want to replace it once you have got your scenario running. Note the AgentContext that is provided
to the hook. This is created per-agent and gives you access to the agent's ID and the runner's context. Both the agent and the runner context are
used for sharing configuration between the runner and your hooks, and state between your hooks.
Your scenario should now be runnable. Try running it with
cargo run -- --duration 10

You should see the print messages from the agent behaviour hook. If so, you are ready to start writing your scenario. To get started,
it is recommended that you take a look at the documentation for the holochain_wind_tunnel_runner crate. This has common code to use in
your scenarios and examples of how to use it. This will help you get started much more quickly than starting from scratch. There is
also a tips section below which you might find helpful as you run into questions.
Note
Writing scenarios requires some knowledge of wind-tunnel's methodology as well as an overview of how Kitsune works. That is assumed knowledge for this section!
Writing a Kitsune Wind Tunnel scenario is relatively straightforward. The Kitsune client defines three common functions for the developer. A chatter can be created, it can join_chatter_network and it can say a list of messages. As long as a chatter has not joined the network, it won't receive messages from other peers and will not send messages to them. Once joined, it starts receiving messages and sending the messages it has said. It will also receive messages that were sent before it joined the network.
For communication among peers to work, a bootstrap server must be running that enables peers to discover each other, and a signal server is required for establishing direct WebRTC connections. See Kitsune Tests.
The only Wind Tunnel specific dependency you will need is kitsune_wind_tunnel_runner.
cargo add kitsune_wind_tunnel_runner

If this scenario is being written inside this repository then there are some extra setup steps. Please see the project layout docs.
Add the following imports to the top of your main.rs:
use kitsune_wind_tunnel_runner::prelude::*;

Then replace your main function with the following:
fn main() -> WindTunnelResult<()> {
let builder =
KitsuneScenarioDefinitionBuilder::<KitsuneRunnerContext, KitsuneAgentContext>::new_with_init(
"scenario_name",
)?.into_std()
.use_agent_behaviour(agent_behaviour);
run(builder)?;
Ok(())
}

This is the basic structure of a Kitsune Wind Tunnel scenario. The KitsuneScenarioDefinitionBuilder is used to define the scenario. It includes
a CLI which will allow you to override some of the defaults that are set in your code. Using the builder you can configure your hooks
which are just Rust functions that take a context and return a WindTunnelResult.
The run function is then called with the builder. At that point the Wind Tunnel runner takes over and configures, then runs your scenario.
Before you can run this, you'll need to provide the agent behaviour hook. Add the following to your main.rs:
fn agent_behaviour(
ctx: &mut AgentContext<KitsuneRunnerContext, KitsuneAgentContext>,
) -> HookResult {
println!("Hello from, {}", ctx.agent_name());
std::thread::sleep(std::time::Duration::from_secs(1));
Ok(())
}

This is just an example hook and you will want to replace it once you have got your scenario running. Note the KitsuneAgentContext that is provided
to the hook. This is created per-agent and gives you access to the agent's ID and the runner's context as well as the chatter ID which is specific to
Kitsune. Both the agent and the runner context are used for sharing configuration between the runner and your hooks, and state between your hooks.
Your scenario should now be runnable. Try running it with
cargo run -- --bootstrap-server-url http://127.0.0.1:30000 --signal-server-url ws://127.0.0.1:30000 --duration 10

You should see the print messages from the agent behaviour hook. If so, you are ready to start writing your scenario.
The behaviour hooks are synchronous but the Holochain client is asynchronous. The ability to run async code in your hooks is exposed
through the AgentContext and RunnerContext.
fn agent_behaviour(ctx: &mut AgentContext<HolochainRunnerContext, HolochainAgentContext>) -> HookResult {
ctx.runner_context().executor().execute_in_place(async {
// Do something async here
})?;
Ok(())
}

This is useful for scenarios that need to measure things that don't happen through the instrumented client that is talking to the system under test.
fn agent_behaviour(ctx: &mut AgentContext<HolochainRunnerContext, HolochainAgentContext>) -> HookResult {
let metric = ReportMetric::new("my_custom_metric")
.with_field("value", 1);
ctx.runner_context().reporter().clone().add_custom(metric);
Ok(())
}

The metric will appear in InfluxDB as wt.custom.my_custom_metric with a field value set to 1.
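The two snippets above can be combined, for example to time an operation that does not go through the instrumented client and report the elapsed time as a custom metric. The metric and field names below are illustrative, and it is assumed that with_field accepts a floating-point value:

use std::time::Instant;

fn agent_behaviour(ctx: &mut AgentContext<HolochainRunnerContext, HolochainAgentContext>) -> HookResult {
    let started = Instant::now();
    ctx.runner_context().executor().execute_in_place(async {
        // Do something async here that you want to time, e.g. wait for data to become visible.
    })?;

    // Report how long the operation took; this would appear in InfluxDB as wt.custom.my_timed_operation.
    let metric = ReportMetric::new("my_timed_operation")
        .with_field("elapsed_s", started.elapsed().as_secs_f64());
    ctx.runner_context().reporter().clone().add_custom(metric);
    Ok(())
}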
When developing your scenarios, you can disable anything that requires running infrastructure, other than the target system. However, once you are ready to run your scenario to get results, you will need a few extra steps.
InfluxDB is used to store the metrics that Wind Tunnel collects. You can run it locally from inside a Nix shell launched with nix develop:
influxd

This terminal will then be occupied running InfluxDB. Start another terminal where you can configure the database and create a user, again from inside the Nix shell:
configure_influx

This will do a one-time setup for InfluxDB and also configure your shell environment to use it. Next time you start a new terminal you will need to run use_influx instead.
You can now navigate to the InfluxDB dashboard and log in with windtunnel/windtunnel. The variables and dashboards you need will already be set up,
so you can now run your scenario and the metrics will be pushed to InfluxDB.
Telegraf is used for collecting host metrics and writing them to disk. This is not required locally, but if you would like to run it, then you can do so from inside the Nix shell.
Currently, telegraf is configured to collect the following metrics:
- CPU stats
- Disk stats and usage
- Kernel Info
- Memory and Swap stats
- Network stats
- Processes
- System stats
To run the telegraf agent to collect host metrics while running scenarios enter the Nix shell and run
start_host_metrics_telegraf

Wind Tunnel scenarios configure and run a Holochain conductor per agent by
default. You can override the holochain binary used to start the conductors
with the WT_HOLOCHAIN_PATH environment variable, by setting it to the path of
the custom holochain binary.
The stdout for the in-process Holochain conductor that is managed by Wind
Tunnel is piped to the scenario's logs with the log target of
holochain_conductor::<agent-name> at the log level of INFO. Therefore, to
view the stdout of the conductors, you need to set the RUST_LOG environment
variable to RUST_LOG=holochain_conductor=info. To view the logs as well as
the stdout from the conductors, you can set RUST_LOG to
RUST_LOG=holochain=info, or simply set it to RUST_LOG=info, which will
print all logs from all sources at the INFO level or above. If you want to
view only warnings from Holochain but still see the stdout from the conductors, set it to
RUST_LOG=holochain=warn,holochain_conductor=info.
Alternatively, if you want to run a Holochain conductor separately and have all agents connect to the same conductor then you first need to start a conductor. For a zero-config and quick way to do this, you can use the following command:
hc s clean && echo "1234" | hc s --piped create && echo "1234" | RUST_LOG=warn hc s --piped -f 8888 runThis will run a sandboxed Holochain conductor and force the admin interface to
be accessible at localhost port 8888.
For more advanced scenarios or for distributed tests, this is not appropriate!
Once the external conductor is running, you will need to set the
--connection-string when running a scenario. This should be set to the
WebSocket address of the admin interface of the running conductor, which, in
the case above, would be ws://localhost:8888.
To run Holochain with metrics enabled, the HOLOCHAIN_INFLUXIVE_FILE environment variable must be set beforehand to a valid path within WT_METRICS_DIR (set by the Nix shell).
For example:
export HOLOCHAIN_INFLUXIVE_FILE=$WT_METRICS_DIR/holochain.influx

Each scenario is expected to provide a README.md with at least:
- A description of the scenario and what it is testing for.
- A suggested command or commands to run the scenario, with justification for the configuration used.
For example, see the zome_call_single_value scenario.
As well as the command you use to run the scenario, you will need to select an appropriate reporter. Run the scenario with the --help flag to see the available options.
For local development, the default in-memory reporter will do.
If you have influx running and only want scenario metrics, then you can use the influx-client option.
If you have set up Holochain or host metrics then you can use the influx-file option and then import all metrics in the next step.
Once you've finished running a scenario, you can collect host, Holochain and scenario metrics with:
nix run .#local-upload-metrics

At this point the metrics will be uploaded to InfluxDB, and you will be able to view the metrics in the InfluxDB dashboards by run_id.
Running this Nix command will also clean up the current metrics from disk, so you are immediately ready to run the next scenario.
Warning
The metrics must be imported after each scenario run since they are associated only with the latest scenario run.
Warning
If Holochain ran with metrics enabled, it must be restarted after each scenario run since its output file is deleted after importing.
Warning
If host metrics were enabled with Telegraf, it must be restarted after each scenario run since its output file is deleted after importing.
There is a Nix environment provided, and it is recommended that you use its shell for development:
nix develop

Decide what type of test you are writing and pick one of the next two sections. Then you can move to writing and running the scenario.
For standard Wind Tunnel tests it is recommended to allow Wind Tunnel to manage the Holochain conductor which is configured by the scenario itself.
You can also run a Holochain conductor separately and manage it yourself - see the Running Holochain section above. When doing this it is recommended to stop and start the sandboxed conductor between test runs, because getting a Holochain conductor back to a clean state through its API is not yet implemented.
You can run one of the scenarios in the scenarios directory:
RUST_LOG=info cargo run -p zome_call_single_value -- --duration 60

Or, if using a separate Holochain conductor:
RUST_LOG=info cargo run -p zome_call_single_value -- --duration 60 --connection-string=ws://localhost:8888

You can easily test the Wind Tunnel scenarios with Nomad by running them locally. This requires running a Nomad agent locally as both a client and a server.
First, enter the Nix devShell with nix develop to make sure you have all the packages installed.
Alternatively, install Nomad and Holochain locally
so that both nomad and holochain are in your PATH.
Once Nomad is installed, run the agent with the configuration provided in this repo to spin up both a server and client, do this with:
sudo nomad agent -config=nomad/dev-agent-config.hcl

Now navigate to http://localhost:4646/ui to view the Nomad dashboard.
Next, in a new terminal window, generate the nomad job for each scenario by
passing the scenario name into the generate_jobs script, such as:
./nomad/generate_jobs.sh app_install_minimal

The generated job will be in the nomad/jobs directory.
Next, in a new terminal window, build the scenario you want to run with:
nix build .#app_install

Replace app_install with the name of the scenario that you want to run.
Once the scenario is built you can run the Nomad job with:
nomad job run -address=http://localhost:4646 -var scenario_url=result/bin/app_install -var reporter=in-memory nomad/jobs/app_install_minimal.nomad.hcl

All the jobs are in the nomad/jobs directory, so you can replace app_install_minimal with the name of the job you want to run.
- -address sets Nomad to talk to the locally running instance and not the dedicated Wind Tunnel cluster one.
- -var scenario_url=... provides the path to the scenario binary that you built in the previous step.
- -var reporter=in-memory sets the reporter type to print to stdout instead of writing an InfluxDB metrics file.
You can also override existing and omitted variables with the -var flag. For example, to set the duration (in seconds) use:
nomad job run -address=http://localhost:4646 -var scenario_url=result/bin/app_install -var reporter=in-memory -var duration=300 nomad/jobs/app_install_minimal.nomad.hcl

Or to download and use a different Holochain binary to start the conductors:
nomad job run -address=http://localhost:4646 -var scenario_url=result/bin/app_install -var reporter=in-memory -var holochain_bin_url=https://github.com/holochain/holochain/releases/download/holochain-0.5.6/holochain-x86_64-unknown-linux-gnu nomad/jobs/app_install_minimal.nomad.hcl

Then, navigate to http://localhost:4646/ui/jobs where you should see your job listed. After clicking on the job you should see one allocation, which is the Nomad name for an instance of the job. You can view the logs of the tasks to see the results. The allocation should be marked as "complete" after the duration specified.
Once you've finished testing you can kill the Nomad agent with ^C in the first terminal running the agent.
Wind Tunnel has a dedicated Nomad cluster for running scenarios.
This cluster can be accessed at https://nomad-server-01.holochain.org:4646/ui.
A token is required to view the details of the cluster; the shared admin "bootstrap" token can be
found in the Holochain shared vault of the password manager under Nomad Server Bootstrap Token.
Enter the token (or use auto-fill) to sign in at https://nomad-server-01.holochain.org:4646/ui/settings/tokens.
You can now view any recent or running jobs at https://nomad-server-01.holochain.org:4646/ui/jobs.
Note
Running scenarios on the remote cluster from the command-line requires quite a few steps, including storing the binary on a public file share. For that reason it is recommended to use the Nomad workflow instead, which takes care of some of these steps for you.
To run a Wind Tunnel scenario on the Nomad cluster from the command-line, first enter the Nix devShell
with nix develop or install Nomad locally.
You also need to set the NOMAD_ADDR environment variable to https://nomad-server-01.holochain.org:4646
and NOMAD_CACERT to ./nomad/server-ca-cert.pem, which are both set by the Nix devShell.
The final environment variable, which is not set by the devShell, is NOMAD_TOKEN. This needs to be set to
a token with the correct permissions; for now it is fine to just use the admin token found in the
Holochain shared vault of the password manager under Nomad Server Bootstrap Token.
Once Nomad is installed, build the scenario you want to run with Nix so that it puts everything in the correct location for you.
Run:
nix build .#packages.x86_64-linux.app_install

Replace app_install with the name of the scenario that you want to run.
This will build the scenario; the output will be in your /nix/store/ with a symlink to it in your local
directory with the name ./result. Zip the files found in the result directory, keeping the directory structure.
An example to do this is with:
mkdir app_install && cp -r result/* app_install/ && cd app_install && zip -r app_install.zip . && cd -You now need to upload the scenario zip file to somewhere public so that the Nomad client can download it. This could be a GitHub release, a public file sharing services, or some other means, as long as it's publicly accessible.
Note
Unlike when running locally in the section above, we cannot just pass a path because the path needs to be accessible to the client and Nomad doesn't have native support for uploading artefacts.
At this point you have to generate the Nomad job for the scenario you want to run. This is done with:
./nomad/scripts/generate_jobs.sh app_install_minimal

Now that the scenario zip file is publicly available you can run the scenario with the following:
nomad job run -var scenario_url=http://{some-url} -var holochain_bin_url=https://github.com/holochain/holochain/releases/download/holochain-0.5.6/holochain-x86_64-unknown-linux-gnu nomad/jobs/app_install_minimal.nomad.hcl

- -var scenario_url=... provides the URL to the scenario zip file that you uploaded in the previous step.
- -var holochain_bin_url=... provides the URL to download the version of the Holochain binary to test.
You can also override existing and omitted variables with the -var flag. For example, to set the duration
(in seconds) or to set the reporter to print to stdout.
nomad job run -var scenario_url=http://{some-url} -var reporter=in-memory -var duration=300 nomad/jobs/app_install_minimal.nomad.hcl

Then, navigate to https://nomad-server-01.holochain.org:4646/ui/jobs where you should see a job with the same name as the scenario under test. Clicking on this job should show an allocation, which is the Nomad name for an instance of the job. You can view the logs of the tasks to see the results. The allocation should be marked as "complete" after the duration specified.
You can now get the run ID from the stdout of the run_scenario task in the Nomad web UI. If the reporter
was set to influx-file (the default value), then you can use that ID to view the results on the corresponding
InfluxDB dashboard. The dashboards can be found at https://ifdb.holochain.org/orgs/37472a94dbe3e7c1/dashboards-list,
and the credentials can be found in the Holochain shared vault of the password manager.
There is a dedicated GitHub workflow for building all the scenarios designed to run with Nomad, uploading them as GitHub artifacts, and then running them on available Nomad clients specifically available for testing. The metrics from the runs are also uploaded to the InfluxDB instance. This is the recommended way to run the Wind Tunnel scenarios with Nomad.
To run it, simply navigate to https://github.com/holochain/wind-tunnel/actions/workflows/nomad.yaml, select
Run workflow on the right, and select the branch that you want to test. If you only want to test a
sub-selection of the scenarios then simply comment-out or remove the scenarios that you want to exclude
from the matrix in the workflow file, push your changes and make sure to
select the correct branch. You can also override the URL to download the Holochain binary from, if you would
like to test a different version of Holochain; the default is the latest release at
https://github.com/holochain/holochain/releases/latest.
Warning
Currently, the Wait for free nodes step will wait indefinitely if there are never enough free nodes,
which will also block other jobs from running.
For Kitsune Wind Tunnel tests, start a bootstrap and signal server:
kitsune2-bootstrap-srv --listen 127.0.0.1:30000

This starts the two servers on the provided address. If port 30000 is already in use on your system, you can specify a different port or omit the --listen option altogether to let the command choose a free port.
You can then start a second terminal and run one of the scenarios in the scenarios directory that start with kitsune_:
RUST_LOG=info cargo run -p kitsune_continuous_flow -- --bootstrap-server-url http://127.0.0.1:30000 --signal-server-url ws://127.0.0.1:30000 --duration 20 --agents 2

If your bootstrap and signal servers run on a different port, adapt the command accordingly. The scenario creates 2 peers and runs for 20 seconds.
On each run of the Run performance tests on Nomad cluster workflow, the run summary is published to the GitHub Pages of this repository.
Framework crates:
Core functionality for use by other Wind Tunnel crates - wind_tunnel_core
Instruments for measuring performance with Wind Tunnel - wind_tunnel_instruments
Derive macros for the wind_tunnel_instruments crate - wind_tunnel_instruments_derive
The Wind Tunnel runner - wind_tunnel_runner
Bindings crates for Holochain:
An instrumented wrapper around the holochain_client - holochain_client_instrumented
Customises the wind_tunnel_runner for Holochain testing - holochain_wind_tunnel_runner