Evaluating small neural networks for general-purpose lossy data compression
This repository was archived on 2025-12-23. It is read-only: you can view and clone the files, but you cannot change its state, such as by pushing or by creating issues, pull requests, or comments.

Neural compression

Example usage (training via main.py, benchmarking via benchmark.py):

python main.py --debug train --dataset enwik9 --data-root ~/data/datasets/ml --method optuna --model transformer --model-save-path ~/data/ml-models/test-transformer.pt

python benchmark.py --debug train --dataset enwik9 --data-root ~/data/datasets/ml --method optuna --model cnn --model-save-path ~/data/ml-models/test-cnn.pt
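A benchmark like this is usually scored in bits per byte against a classical baseline codec. As a hedged illustration only (the helper below is hypothetical and not part of this repository's benchmark.py), a zlib baseline on sample data can be sketched as:

```python
import zlib

def bits_per_byte(original: bytes, compressed: bytes) -> float:
    # 8 * compressed_size / original_size: lower is better; 8.0 means no savings.
    return 8 * len(compressed) / len(original)

# Highly repetitive sample bytes, standing in for a slice of enwik9.
data = b"the quick brown fox " * 500
compressed = zlib.compress(data, level=9)

bpb = bits_per_byte(data, compressed)
assert bpb < 8.0  # repetitive text compresses well below 8 bits/byte
print(f"zlib baseline: {bpb:.3f} bits/byte")
```

A learned model's output sizes can be dropped into the same metric, which makes results directly comparable across codecs.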

Running locally

Install the project and all optional dependency groups with uv:

uv sync --all-extras

Running on the Ghent University HPC

See the UGent HPC Infrastructure documentation for more information about the available clusters.

module swap cluster/joltik # Specify the (GPU) cluster, {joltik,accelgor,litleo}

qsub job.pbs               # Submit job
qstat                      # Check status
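For reference, a minimal PBS job script for these clusters follows the shape below. This is a hedged sketch only: the job name, resource requests, module name, and walltime are illustrative assumptions, and this repository's actual job.pbs may differ.

```shell
#!/bin/bash
# Hypothetical sketch of a PBS job script for a UGent GPU cluster;
# resource requests below are assumptions, not the repo's real job.pbs.
#PBS -N neural-compression      # job name shown by qstat
#PBS -l nodes=1:ppn=8,gpus=1    # one node, 8 cores, 1 GPU
#PBS -l walltime=12:00:00       # maximum run time

cd "$PBS_O_WORKDIR"             # start in the directory qsub was run from
module load Python              # load whatever toolchain the cluster provides

python main.py --debug train --dataset enwik9 --data-root ~/data/datasets/ml \
    --method optuna --model transformer \
    --model-save-path ~/data/ml-models/test-transformer.pt
```

Submit it with `qsub job.pbs` as above; `qstat` then shows the job under the `-N` name until it completes.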