Evaluating small neural networks for general-purpose lossy data compression
This repository was archived on 2025-12-23. You can view files and clone it, but you cannot change its state, such as by pushing or creating new issues, pull requests, or comments.

Neural compression

Example usage:

python main.py --debug train --dataset enwik9 --method optuna

Running locally

uv sync --all-extras
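After syncing, commands can be run inside the project environment with `uv run`. As a sketch (assuming `main.py` is the entry point and the flags match the training example above):

```shell
# Run the training example inside the uv-managed virtual environment
uv run python main.py --debug train --dataset enwik9 --method optuna
```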

Running on the Ghent University HPC

See the Infrastructure docs for more information about the clusters.

module swap cluster/joltik # Specify the (GPU) cluster, {joltik,accelgor,litleo}

qsub job.pbs               # Submit job
qstat                      # Check status
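For orientation, a minimal PBS job script might look like the sketch below. This is not the contents of the repository's `job.pbs`; the job name, module name, resource requests, and walltime are all assumptions.

```shell
#!/bin/bash
#PBS -N neural-compression       # Job name (assumed)
#PBS -l nodes=1:ppn=8:gpus=1     # One node, 8 cores, 1 GPU (assumed)
#PBS -l walltime=12:00:00        # Maximum runtime (assumed)

cd "$PBS_O_WORKDIR"              # Start in the directory the job was submitted from
module load Python               # Module name is an assumption; check the cluster's module list
uv run python main.py --debug train --dataset enwik9 --method optuna
```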