Hannes Martin Signer 6ddfa4ad03 Merge branch 'preprocessing-experiment' into 'main'
Preprocessing experiment

See merge request naaice/model-training!7
2025-03-24 18:02:55 +01:00

Training of AI Surrogate Models

This repository contains the current experiments for training AI models that attempt to predict the chemistry component of POET. It is structured as follows:

├── dataset
│   ├── Barite_50_Data_training.h5
│   └── barite_50_4_corner.h5
├── doc
├── results
└── src
    ├── POET_Training.ipynb
    ├── convert_data.jl
    ├── optuna_runs.py
    └── preprocessing.py

The datasets in dataset must first be fetched from large file storage via git lfs pull. A conda environment with the required packages can then be created from environment.yml with conda env create -f environment.yml.
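After pulling, the .h5 files are ordinary HDF5 containers and can be inspected with h5py. The sketch below only illustrates the inspection pattern; the dataset names used here are placeholders, not the actual layout of Barite_50_Data_training.h5:

```python
import h5py
import numpy as np

# Write a small HDF5 file to demonstrate the pattern; the real
# files in dataset/ are pulled via git lfs instead.
with h5py.File("example.h5", "w") as f:
    f.create_dataset("inputs", data=np.arange(6.0).reshape(3, 2))

# List the top-level keys and load a dataset into memory.
with h5py.File("example.h5", "r") as f:
    keys = list(f.keys())
    inputs = f["inputs"][:]
```

Listing the keys first is a safe way to discover the structure of an unfamiliar HDF5 file before loading anything into memory.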

The preprocessing.py file defines all the necessary preprocessing steps as well as the Keras models used. The actual training, with additional explanations, takes place in POET_Training.ipynb.
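The split between preprocessing and training can be sketched with a minimal example. The per-feature standardization shown here is a common choice for surrogate-model inputs, but it is an assumption for illustration; the actual pipeline in preprocessing.py may differ:

```python
import numpy as np

def standardize(X, eps=1e-8):
    """Zero-mean, unit-variance scaling per feature column.

    Returns the scaled array plus the mean and std so the same
    transform can be reapplied to validation or test data.
    """
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / (std + eps), mean, std

# Hypothetical usage: scale chemistry inputs before training.
X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])
X_scaled, mu, sigma = standardize(X)
```

Keeping the fitted mean and std is important: new inputs at prediction time must be scaled with the training statistics, not their own.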
