Mirror of https://git.gfz-potsdam.de/naaice/model-training.git (synced 2025-12-15 17:18:22 +01:00)
# Training of AI Surrogate Models
This repository contains the current experiments for training AI models that attempt to predict the chemistry component of POET. It is structured as follows:
```
└── dataset
    └── Barite_50_Data_training.h5
    └── barite_50_4_corner.h5
└── doc
└── results
└── src
    └── POET_Training.ipynb
    └── convert_data.jl
    └── optuna_runs.py
    └── preprocessing.py
```
The datasets in `dataset` must first be pulled via `git lfs pull` to fetch the data from Git LFS (large file storage).
A conda environment with the required packages can then be set up from `environment.yml` with `conda env create -f environment.yml`.
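The setup steps above can be run as follows, assuming the repository has already been cloned and the Git LFS extension is installed; the placeholder `<env-name>` stands for whatever environment name `environment.yml` defines, which is not stated here:

```shell
# Fetch the HDF5 datasets tracked in Git LFS
git lfs pull

# Create the conda environment from the packaged specification
conda env create -f environment.yml

# Activate it (replace <env-name> with the name given in environment.yml)
conda activate <env-name>
```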
The `preprocessing.py` file defines all the necessary preprocessing steps as well as the Keras models used. The actual training, together with additional explanations, then takes place in `POET_Training.ipynb`.
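The concrete preprocessing steps in `preprocessing.py` are not described here, so as a hedged illustration only, the sketch below shows one step such a pipeline commonly includes before training a surrogate model: per-feature min-max scaling with an invertible transform. The function names and the toy data are hypothetical and not taken from the repository:

```python
import numpy as np

def min_max_scale(x: np.ndarray, eps: float = 1e-12):
    """Scale each feature column to [0, 1]; also return the (min, range)
    pair needed to invert the transform at prediction time."""
    x_min = x.min(axis=0)
    x_range = x.max(axis=0) - x_min
    x_range = np.where(x_range < eps, 1.0, x_range)  # guard constant columns
    return (x - x_min) / x_range, (x_min, x_range)

def min_max_invert(x_scaled: np.ndarray, params) -> np.ndarray:
    """Undo min_max_scale using the stored (min, range) parameters."""
    x_min, x_range = params
    return x_scaled * x_range + x_min

# Toy batch of two features; real inputs would come from the HDF5 datasets
data = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
scaled, params = min_max_scale(data)
restored = min_max_invert(scaled, params)
```

Keeping the scaling parameters alongside the model matters because predictions made in the scaled space must be mapped back to physical units.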