Merged
38 changes: 38 additions & 0 deletions doc/running-tests.md
@@ -45,6 +45,25 @@ reason, we recommend to use it before doing a commit when changes only the funct
just test-functional
```

Furthermore, you can run only specific tests, rather than all of them at once.

```bash
# runs all tests in 'floresta-cli' suite
just test-functional-run "--test-suite floresta-cli"

# same as above
just test-functional-run "-t floresta-cli"

# run the stop and ping tests in the floresta-cli suite
just test-functional-run "--test-suite floresta-cli --test-name stop --test-name ping"

# same as above
just test-functional-run "-t floresta-cli -k stop -k ping"

# run all tests that start with the word `getblock` (getblockhash, getblockheader, etc...)
just test-functional-run "-t floresta-cli -k getblock"
```

#### From helper scripts

We provide two helper scripts to support our functional tests in this process and guarantee isolation and reproducibility.
@@ -79,6 +98,25 @@ The `--build` argument will force the script to build `utreexod` even if it is a
The `--preserve-data-dir` argument will keep the data and logs directories after running the tests
(this is useful if you want to keep the data for debugging purposes).
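For example, these flags can be combined with the test filters in a single invocation (a usage sketch, assuming you run from the repository root):

```bash
# force a utreexod rebuild and keep data/logs around for later inspection
./tests/run.sh --build --preserve-data-dir -t floresta-cli
```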

Furthermore, you can run only specific tests, rather than all of them at once.

```bash
# runs all tests in 'floresta-cli' suite
./tests/run.sh --test-suite floresta-cli

# same as above
./tests/run.sh -t floresta-cli

# run the stop and ping tests in the floresta-cli suite
./tests/run.sh --test-suite floresta-cli --test-name stop --test-name ping

# same as above
./tests/run.sh -t floresta-cli -k stop -k ping

# run all tests that start with the word `getblock` (getblockhash, getblockheader, etc...)
./tests/run.sh -t floresta-cli -k getblock
```

#### From python utility directly
Additional functional tests are available (minimum python version: 3.12).
It's not recommended to run them directly, since you will need to manually
17 changes: 15 additions & 2 deletions tests/run.sh
@@ -29,11 +29,24 @@ fi
# Clean existing data/logs directories before running the tests
rm -rf "$FLORESTA_TEMP_DIR/data"

# Detect if --preserve-data-dir is among args
# and forward args to uv
PRESERVE_DATA=false
UV_ARGS=()

for arg in "$@"; do
    if [[ "$arg" == "--preserve-data-dir" ]]; then
        PRESERVE_DATA=true
    else
        UV_ARGS+=("$arg")
    fi
done

# Run the requested tests, forwarding the remaining args
uv run ./tests/run_tests.py
uv run ./tests/run_tests.py "${UV_ARGS[@]}"

# Clean up the data dir if we succeeded and --preserve-data-dir was not passed
if [ $? -eq 0 ] && [ "$1" != "--preserve-data-dir" ];
if [ $? -eq 0 ] && [ "$PRESERVE_DATA" = false ];
then
echo "Tests passed, cleaning up the data dir at $FLORESTA_TEMP_DIR"
rm -rf $FLORESTA_TEMP_DIR/data $FLORESTA_TEMP_DIR/logs
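The flag-filtering loop added to `run.sh` above is a generic bash pattern: scan `"$@"`, remember whether one flag was seen, and forward everything else untouched. A minimal standalone sketch (the `filter_preserve` function below is hypothetical, not part of the repo):

```shell
# Hypothetical demo of the pattern above: strip one flag from an
# argument list, remember that it was seen, and forward the rest.
filter_preserve() {
    local preserve=false
    local forwarded=()
    local arg
    for arg in "$@"; do
        if [[ "$arg" == "--preserve-data-dir" ]]; then
            preserve=true
        else
            forwarded+=("$arg")
        fi
    done
    # print the flag state followed by the surviving arguments
    echo "$preserve ${forwarded[*]}"
}

filter_preserve -t floresta-cli --preserve-data-dir -k stop  # → true -t floresta-cli -k stop
```

Quoting `"$arg"` and `"$@"` keeps arguments with spaces intact, which is why the real script forwards `"${UV_ARGS[@]}"` rather than a flat string.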
124 changes: 69 additions & 55 deletions tests/run_tests.py
@@ -1,30 +1,24 @@
"""
run_tests.py

Command Line Interface to run a test by its name. The name should be placed at ./tests folder.
It's suposed that you run it through `poetry` package management and `poe` task manager, but you
can run it with `python` if you installed the packages properly, in a isolated or not isolated
environment (althought we recommend the isolated environment).
Command Line Interface to run an individual test or multiple tests in a suite.

All tests will run as a spwaned subprocess and what happens will be logged to a temporary directory
Each test-suite is a subfolder of the `./tests` folder, and each test-name
is a file with the `-test.py` suffix inside that test-suite folder.

```bash
# The default way to run all tests
poetry run poe tests
It's recommended to run it through the `uv` package manager, but you can run
it with `python` directly if you installed the packages properly, in an
isolated or non-isolated environment (although we recommend the isolated one).

# The default way to run a separated test (see the ones -- or define one -- in pyproject.toml)
poetry run poe example-test
All tests run as spawned subprocesses, and what happens is logged to
a temporary directory.

# This will do the same thing in the isolated environment
poetry run python tests/run_tests.py --test-name example_test
For more information about how to run the tests, see
[doc/running-tests.md](doc/running-tests.md).

# You can even define the `data_dir` to logs
poetry run python tests/run_tests.py --test-name example_test --data-dir $HOME/my/path

# If you have a proper environment wit all necessary packages installed
# it can be possible to run without poetry
python tests/run_tests.py --test-name example_test --data-dir $HOME/my/path
```
For more information about how to define a test, see
[tests/example](./tests/example) files.
"""

import argparse
@@ -52,6 +46,36 @@ def list_test_suites(test_dir: str):
print(f"* {name}")


def run_test(args: argparse.Namespace, test_suite_dir: str, file: str):
    """Run a test file from the test suite directory"""
    data_dir = os.path.normpath(os.path.join(args.data_dir, file))
    if not os.path.isdir(data_dir):
        os.makedirs(data_dir)

    # get the test file and create a log for it
    test_filename = os.path.normpath(os.path.join(test_suite_dir, file))
    test_logname = os.path.normpath(os.path.join(data_dir, f"{int(time.time())}.log"))

    with open(test_logname, "wt", encoding="utf-8") as log_file:
        cli = ["python", test_filename]
        cli_msg = " ".join(cli)
        print(f"{INFO_EMOJI} Running '{cli_msg}'")
        print(f"Writing output to {test_logname}")

        with subprocess.Popen(cli, stdout=log_file, stderr=log_file) as test:
            test.wait()

    # Check the test: if it failed, log the results;
    # if it passed, just report success
    if test.returncode != 0:
        print(f"Test {file} not passed {FAILURE_EMOJI}")
        with open(test_logname, "rt", encoding="utf-8") as log_file:
            raise RuntimeError(f"Tests failed: {log_file.read()}")

    print(f"Test {file} passed {SUCCESS_EMOJI}")
    print()


def main():
"""
Create a CLI called `run_tests` with calling arguments
@@ -61,8 +85,13 @@ def main():
tool to help with function testing of Floresta

options:
-h, --help show this help message and exit
-d, --data-dir DATA_DIR data directory of the run_tests's functional test logs
-h, --help show this help message and exit.
-d, --data-dir DATA_DIR data directory of the run_tests's functional
test logs.
-t, --test-suite TEST_SUITE test-suite directory to be tested by run_tests.
You can add many.
-k, --test-name TEST_NAME test name to be tested by run_tests.
You can add many.
"""
# Structure the CLI
parser = argparse.ArgumentParser(
@@ -80,7 +109,15 @@ parser.add_argument(
parser.add_argument(
"-t",
"--test-suite",
help="test-suit directory to be tested by %(prog)s's. You can add many ",
help="test suite directory to be tested by %(prog)s. You can add many",
action="append",
default=[],
)

parser.add_argument(
"-k",
"--test-name",
help="test name in a suite to be tested by %(prog)s. You can add many",
action="append",
default=[],
)
@@ -135,40 +172,17 @@
# inside the folder. The tests should have
# a suffix "-test.py"
for file in os.listdir(test_suite_dir):
if file.endswith("-test.py"):

# Define the data-dir and create it
data_dir = os.path.normpath(os.path.join(args.data_dir, file))
if not os.path.isdir(data_dir):
os.makedirs(data_dir)

# get test file and create a log for it
test_filename = os.path.normpath(os.path.join(test_suite_dir, file))
test_logname = os.path.normpath(
os.path.join(data_dir, f"{int(time.time())}.log")
)

# Now start the test
with open(test_logname, "wt", encoding="utf-8") as log_file:
cli = ["python", test_filename]
cli_msg = " ".join(cli)
print(f"Running '{cli_msg}'")

with subprocess.Popen(
cli, stdout=log_file, stderr=log_file
) as test:
test.wait()
print(f"Writing stuff to {test_logname}")

# Check the test, if failed, log the results
# if passed, just show that worked
if test.returncode != 0:
print(f"Test {file} not passed {FAILURE_EMOJI}")
with open(test_logname, "rt", encoding="utf-8") as log_file:
raise RuntimeError(f"Tests failed: {log_file.read()}")

print(f"Test {file} passed {SUCCESS_EMOJI}")
print()
    # If one or more test-name filters were passed,
    # run only the files whose names start with one of
    # them. If no filters were provided, run all of the
    # test files.
    if file.endswith("-test.py"):
        if args.test_name:
            if any(file.startswith(name) for name in args.test_name):
                run_test(args, test_suite_dir, file)
        else:
            run_test(args, test_suite_dir, file)

print("🎉 ALL TESTS PASSED! GOOD JOB!")
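The name filter above is a plain prefix match, which is why `-k getblock` catches `getblockhash-test.py`, `getblockheader-test.py`, and friends. A minimal sketch of that rule (the `should_run` helper is hypothetical, not part of the repo):

```python
# Hypothetical sketch of the filter logic above: a file runs when no
# --test-name filters were given, or when its name starts with one of them.
def should_run(filename: str, test_names: list[str]) -> bool:
    if not filename.endswith("-test.py"):
        return False  # only files with the "-test.py" suffix are tests
    if not test_names:
        return True  # no filters: run the whole suite
    return any(filename.startswith(name) for name in test_names)


print(should_run("getblockhash-test.py", ["getblock"]))  # True
print(should_run("ping-test.py", ["getblock"]))  # False
print(should_run("ping-test.py", []))  # True
```

Because it is a prefix match rather than a substring match, `-k block` would not select `getblockhash-test.py`.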
