Training of AI Surrogate Models
This repository contains the current experiments for training AI models that attempt to predict the chemistry component of POET. It is structured as follows:
├── dataset
│   ├── Barite_50_Data_training.h5
│   └── barite_50_4_corner.h5
├── doc
├── results
└── src
    ├── POET_Training.ipynb
    ├── convert_data.jl
    ├── optuna_runs.py
    └── preprocessing.py
The datasets in `dataset/` must first be fetched from the large file storage via `git lfs pull`.
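Once pulled, the contents of the HDF5 files can be inspected with h5py. The internal layout of the files is not documented here, so the following minimal sketch simply lists whatever groups and datasets they actually contain:

```python
# Minimal sketch: inspect one of the pulled HDF5 files with h5py.
# The group/dataset names inside the file are not documented in this
# README, so we only print what is actually stored.
import h5py

with h5py.File("dataset/Barite_50_Data_training.h5", "r") as f:
    def show(name, obj):
        # Datasets have a .shape attribute; groups do not.
        print(name, getattr(obj, "shape", ""))
    f.visititems(show)
```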
A conda environment with the required packages can then be set up from `environment.yml` with `conda env create -f environment.yml`.
The `preprocessing.py` file defines all the necessary preprocessing steps as well as the Keras models used. The actual training, together with additional explanations, takes place in `POET_Training.ipynb`.
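The actual preprocessing and model definitions live in `src/preprocessing.py` and are not reproduced here. The following is a hypothetical end-to-end sketch of such a training flow, with placeholder data, a stand-in min-max scaler, and a generic dense surrogate network; none of these names, shapes, or hyperparameters come from the repository itself:

```python
# Hypothetical sketch of a surrogate-training flow like the one in
# POET_Training.ipynb. Placeholder data and a generic dense model are
# used here; the real steps are defined in src/preprocessing.py.
import numpy as np
import keras  # or: from tensorflow import keras

def scale(a):
    # Stand-in for the repository's preprocessing: simple per-feature
    # min-max scaling to [0, 1].
    lo, hi = a.min(axis=0), a.max(axis=0)
    return (a - lo) / (hi - lo + 1e-12)

# Placeholder arrays in lieu of the chemistry inputs/outputs from the
# HDF5 datasets (shapes chosen arbitrarily for illustration).
X = scale(np.random.rand(1000, 8))
y = scale(np.random.rand(1000, 4))

# A small dense network as a generic stand-in for the Keras models
# defined in preprocessing.py.
model = keras.Sequential([
    keras.Input(shape=(X.shape[1],)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(y.shape[1]),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.1)
```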