Commit b0b449f1 authored by lukas leufen

release v0.12.1

Resolve "new release v0.12.1"

Closes #177 #175 #174 #173 #172

See merge request !155
parents c1cb2c8e 76b9700b
Pipeline #47121 passed with stages in 9 minutes and 13 seconds
@@ -2,6 +2,19 @@
All notable changes to this project will be documented in this file.
## v0.12.1 - 2020-09-28 - examples in notebook
### general:
- introduced notebook documentation for an easy start, #174
- updated special installation instructions for the Juelich HPC systems, #172
### new features:
- input and output shape attributes are consistently renamed to input_shape and output_shape, #175
### technical:
- it is possible to assign a custom name to a run module (e.g. used in logging), #173
## v0.12.0 - 2020-09-21 - Documentation and Bugfixes
### general:
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# MLAir (v1.0) - Examples\n",
"\n",
"This notebook contains all examples as provided in Leufen et al. (2020). \n",
"Please follow the installation instructions provided in the [README](https://gitlab.version.fz-juelich.de/toar/mlair/-/blob/master/README.md) on gitlab. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example 1\n",
"\n",
"The following cell imports MLAir and executes a minimalistic toy experiment. This cell is equivalent to Figure 2 in the manuscript."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import mlair\n",
"\n",
"# just give it a dry run without any modifications\n",
"mlair.run()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example 2 \n",
"\n",
"In the following cell we use other station IDs provided as a list of strings (see also [JOIN-Web interface](https://join.fz-juelich.de/services/rest/surfacedata/) of the TOAR database for more details).\n",
"Moreover, we expand the `window_history_size` to 14 days and run the experiment. This cell is equivalent to Figure 3 in the manuscript."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# our new stations to use\n",
"stations = ['DEBW030', 'DEBW037', 'DEBW031', 'DEBW015', 'DEBW107']\n",
"\n",
"# expand the temporal context to 14 days (because of the default sampling=\"daily\")\n",
"window_history_size = 14\n",
"\n",
"# restart the experiment with little customisation\n",
"mlair.run(stations=stations, \n",
" window_history_size=window_history_size)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example 3 \n",
"\n",
"The following cell loads the trained model from Example 2 and generates predictions for the two specified stations. \n",
"To ensure that the model is not retrained, the keywords `create_new_model` and `train_model` are set to `False`. This cell is equivalent to Figure 4 in the manuscript."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# our new stations to use\n",
"stations = ['DEBY002', 'DEBY079']\n",
"\n",
"# same setting for window_history_size\n",
"window_history_size = 14\n",
"\n",
"# run experiment without training\n",
"mlair.run(stations=stations, \n",
" window_history_size=window_history_size, \n",
" create_new_model=False, \n",
" train_model=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example 4\n",
"\n",
"The following cell demonstrates how a user-defined model can be implemented by inheriting from `AbstractModelClass`. Within the `__init__` method, `super().__init__`, `set_model`, and `set_compile_options` should be called. Moreover, custom objects can be registered by calling `set_custom_objects`; these custom objects are used to re-load the model (see also the Keras documentation). For demonstration, the loss is added as a custom object, although this is not required here because a Keras built-in function is used as the loss.\n",
"\n",
"The Keras-model itself is defined in `set_model` by using the sequential or functional Keras API. All compile options can be defined in `set_compile_options`.\n",
"This cell is equivalent to Figure 5 in the manuscript."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import keras\n",
"from keras.losses import mean_squared_error as mse\n",
"from keras.layers import PReLU, Input, Conv2D, Flatten, Dropout, Dense\n",
"\n",
"from mlair.model_modules import AbstractModelClass\n",
"from mlair.workflows import DefaultWorkflow\n",
"\n",
"class MyCustomisedModel(AbstractModelClass):\n",
"\n",
" \"\"\"\n",
" A customised model with a 1x1 convolution and two Dense layers (16 units\n",
" and output shape). Dropout is applied after the convolution layer.\n",
" \"\"\"\n",
" def __init__(self, input_shape: list, output_shape: list):\n",
" \n",
" # set attributes shape_inputs and shape_outputs\n",
" super().__init__(input_shape[0], output_shape[0])\n",
"\n",
" # apply to model\n",
" self.set_model()\n",
" self.set_compile_options()\n",
" self.set_custom_objects(loss=self.compile_options['loss'])\n",
"\n",
" def set_model(self):\n",
" x_input = Input(shape=self._input_shape)\n",
" x_in = Conv2D(4, (1, 1))(x_input)\n",
" x_in = PReLU()(x_in)\n",
" x_in = Flatten()(x_in)\n",
" x_in = Dropout(0.1)(x_in)\n",
" x_in = Dense(16)(x_in)\n",
" x_in = PReLU()(x_in)\n",
" x_in = Dense(self._output_shape)(x_in)\n",
" out = PReLU()(x_in)\n",
" self.model = keras.Model(inputs=x_input, outputs=[out])\n",
"\n",
" def set_compile_options(self):\n",
" self.initial_lr = 1e-2\n",
" self.optimizer = keras.optimizers.SGD(lr=self.initial_lr, momentum=0.9)\n",
" self.loss = mse\n",
" self.compile_options = {\"metrics\": [\"mse\", \"mae\"]}\n",
"\n",
"# Make use of MyCustomisedModel within the DefaultWorkflow\n",
"workflow = DefaultWorkflow(model=MyCustomisedModel, epochs=2)\n",
"workflow.run()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example 5 \n",
"\n",
"This example embeds a custom run module in a modified MLAir workflow. Compared to Examples 1 to 4, this code works one level of abstraction deeper: instead of calling MLAir's run method, the user adds all stages individually and is responsible for the dependencies between them. Because the `Workflow` class manages the shared context, all stages are connected automatically, so new stages can easily be plugged in. This cell is equivalent to Figure 6 in the manuscript."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"class CustomStage(mlair.RunEnvironment):\n",
" \"\"\"A custom MLAir stage for demonstration.\"\"\"\n",
" def __init__(self, test_string):\n",
" super().__init__() # always call super init method\n",
" self._run(test_string) # call a class method\n",
" \n",
" def _run(self, test_string):\n",
" logging.info(\"Just running a custom stage.\")\n",
" logging.info(\"test_string = \" + test_string)\n",
" epochs = self.data_store.get(\"epochs\")\n",
" logging.info(\"epochs = \" + str(epochs))\n",
" \n",
" \n",
"# create your custom MLAir workflow\n",
"CustomWorkflow = mlair.Workflow()\n",
"# provide stages without initialisation\n",
"CustomWorkflow.add(mlair.ExperimentSetup, epochs=128)\n",
"# add also keyword arguments for a specific stage\n",
"CustomWorkflow.add(CustomStage, test_string=\"Hello World\")\n",
"# finally execute custom workflow in order of adding\n",
"CustomWorkflow.run()\n",
" "
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python (mlt_new)",
"language": "python",
"name": "venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -17,17 +17,19 @@ install the geo packages. For special instructions to install MLAir on the Jueli
* (geo) Install **proj** on your machine using the console, e.g. for openSUSE Leap: `zypper install proj`
* (geo) A C++ compiler is required for the installation of **cartopy**
* Install all requirements from [`requirements.txt`](https://gitlab.version.fz-juelich.de/toar/machinelearningtools/-/blob/master/requirements.txt)
* Install all requirements from [`requirements.txt`](https://gitlab.version.fz-juelich.de/toar/mlair/-/blob/master/requirements.txt)
preferably in a virtual environment
* (tf) Currently, TensorFlow-1.13 is listed in the requirements. We have already tested TensorFlow-1.15 and could not
find any compatibility errors. Please note that tf-1.13 and tf-1.15 each have two distinct branches: the default branch
for CPU support, and the "-gpu" branch for GPU support. If the GPU version is installed, MLAir will make use of the GPU
device.
* Installation of **MLAir**:
* Either clone MLAir from the [gitlab repository](https://gitlab.version.fz-juelich.de/toar/machinelearningtools.git)
* Either clone MLAir from the [gitlab repository](https://gitlab.version.fz-juelich.de/toar/mlair.git)
and use it without installation (beside the requirements)
* or download the distribution file (?? .whl) and install it via `pip install <??>`. In this case, you can simply
import MLAir in any python script inside your virtual environment using `import mlair`.
* or download the distribution file ([current version](https://gitlab.version.fz-juelich.de/toar/mlair/-/blob/master/dist/mlair-0.12.1-py3-none-any.whl))
and install it via `pip install <dist_file>.whl`. In this case, you can simply import MLAir in any python script
inside your virtual environment using `import mlair`.
# How to start with MLAir
@@ -47,15 +49,19 @@ mlair.run()
The logging output provides plenty of information. Additional information (including debug messages) is collected
in the logging folder inside the experiment path.
```log
INFO: mlair started
INFO: DefaultWorkflow started
INFO: ExperimentSetup started
INFO: Experiment path is: /home/<usr>/mlair/testrun_network
...
INFO: load data for DEBW001 from JOIN
INFO: load data for DEBW107 from JOIN
INFO: load data for DEBY081 from JOIN
INFO: load data for DEBW013 from JOIN
INFO: load data for DEBW076 from JOIN
INFO: load data for DEBW087 from JOIN
...
INFO: Training started
...
INFO: mlair finished after 00:00:12 (hh:mm:ss)
INFO: DefaultWorkflow finished after 0:03:04 (hh:mm:ss)
```
## Example 2
@@ -77,15 +83,17 @@ mlair.run(stations=stations,
```
The output looks similar, but we can see that the new stations are loaded.
```log
INFO: mlair started
INFO: DefaultWorkflow started
INFO: ExperimentSetup started
...
INFO: load data for DEBW030 from JOIN
INFO: load data for DEBW037 from JOIN
INFO: load data for DEBW031 from JOIN
INFO: load data for DEBW015 from JOIN
...
INFO: Training started
...
INFO: mlair finished after 00:00:24 (hh:mm:ss)
INFO: DefaultWorkflow finished after 00:02:03 (hh:mm:ss)
```
## Example 3
@@ -107,15 +115,15 @@ window_history_size = 14
mlair.run(stations=stations,
window_history_size=window_history_size,
create_new_model=False,
trainable=False)
train_model=False)
```
We can see from the terminal output that no training was performed. The analysis is now carried out on the new stations.
```log
INFO: mlair started
INFO: DefaultWorkflow started
...
INFO: No training has started, because trainable parameter was false.
INFO: No training has started, because train_model parameter was false.
...
INFO: mlair finished after 00:00:06 (hh:mm:ss)
INFO: DefaultWorkflow finished after 0:01:27 (hh:mm:ss)
```
@@ -137,7 +145,7 @@ DefaultWorkflow.run()
```
The output of running this default workflow will be structured like the following.
```log
INFO: mlair started
INFO: DefaultWorkflow started
INFO: ExperimentSetup started
...
INFO: ExperimentSetup finished after 00:00:01 (hh:mm:ss)
@@ -153,7 +161,7 @@ INFO: Training finished after 00:02:15 (hh:mm:ss)
INFO: PostProcessing started
...
INFO: PostProcessing finished after 00:01:37 (hh:mm:ss)
INFO: mlair finished after 00:04:05 (hh:mm:ss)
INFO: DefaultWorkflow finished after 00:04:05 (hh:mm:ss)
```
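The stage timing visible in this log (every run module reports when it starts and how long it took) follows a simple context-manager pattern. Below is a minimal illustrative sketch of that pattern; `TimedStage` is a hypothetical helper for demonstration, not MLAir's actual `RunEnvironment` implementation:

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")

class TimedStage:
    """Log start/finish and elapsed time, mimicking the log lines above.

    Hypothetical sketch -- MLAir's run modules additionally share state
    through a central data store, which is omitted here.
    """
    def __enter__(self):
        self._start = time.monotonic()
        logging.info("%s started", type(self).__name__)
        return self

    def __exit__(self, exc_type, exc, tb):
        elapsed = time.monotonic() - self._start
        logging.info("%s finished after %.2f s", type(self).__name__, elapsed)
        return False  # do not swallow exceptions

class ExperimentSetup(TimedStage):
    pass

with ExperimentSetup():
    pass  # the stage body would run here
```

This sketch only reproduces the "started"/"finished after" log lines; the real modules do their work between these two messages.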
# Customised Run Module and Workflow
@@ -199,7 +207,7 @@ CustomWorkflow.run()
The output will look like:
```log
INFO: mlair started
INFO: Workflow started
...
INFO: ExperimentSetup finished after 00:00:12 (hh:mm:ss)
INFO: CustomStage started
@@ -207,7 +215,7 @@ INFO: Just running a custom stage.
INFO: test_string = Hello World
INFO: epochs = 128
INFO: CustomStage finished after 00:00:01 (hh:mm:ss)
INFO: mlair finished after 00:00:13 (hh:mm:ss)
INFO: Workflow finished after 00:00:13 (hh:mm:ss)
```
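The workflow mechanics shown above (stages are added as classes together with keyword arguments and only instantiated when `run()` is called) boil down to a deferred-initialisation pattern. The following is an illustrative sketch of that pattern under this assumption; `MiniWorkflow` and `HelloStage` are hypothetical names, not MLAir code:

```python
class MiniWorkflow:
    """Collect (stage_class, kwargs) pairs and instantiate them in order on run().

    Hypothetical sketch of the pattern used by mlair.Workflow.
    """
    def __init__(self):
        self._stages = []

    def add(self, stage_cls, **kwargs):
        # store the class itself, not an instance -- initialisation is deferred
        self._stages.append((stage_cls, kwargs))

    def run(self):
        executed = []
        for stage_cls, kwargs in self._stages:
            # a stage runs on construction, like MLAir run modules do
            executed.append(stage_cls(**kwargs))
        return executed

class HelloStage:
    def __init__(self, test_string):
        self.result = "Hello " + test_string

wf = MiniWorkflow()
wf.add(HelloStage, test_string="World")
stages = wf.run()
```

Deferring instantiation is what lets `Workflow.add` accept stage-specific keyword arguments (like `epochs=128` above) long before any stage actually executes.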
# Custom Model
@@ -222,17 +230,13 @@ behaviour.
```python
from mlair import AbstractModelClass
import keras
class MyCustomisedModel(AbstractModelClass):
def __init__(self, shape_inputs: list, shape_outputs: list):
super().__init__(shape_inputs[0], shape_outputs[0])
def __init__(self, input_shape: list, output_shape: list):
# settings
self.dropout_rate = 0.1
self.activation = keras.layers.PReLU
# set attributes shape_inputs and shape_outputs
super().__init__(input_shape[0], output_shape[0])
# apply to model
self.set_model()
@@ -250,38 +254,40 @@ class MyCustomisedModel(AbstractModelClass):
loss has been added for demonstration only, because we use a built-in loss function. Nonetheless, we always encourage
you to add the loss as a custom object to prevent potential errors when loading an already created model instead of
training a new one.
* Now build your model inside `set_model()` by using the instance attributes `self.shape_inputs` and
`self.shape_outputs` and storing the model as `self.model`.
* Now build your model inside `set_model()` by using the instance attributes `self._input_shape` and
`self._output_shape` and storing the model as `self.model`.
```python
import keras
from keras.layers import PReLU, Input, Conv2D, Flatten, Dropout, Dense
class MyCustomisedModel(AbstractModelClass):
def set_model(self):
x_input = keras.layers.Input(shape=self.shape_inputs)
x_in = keras.layers.Conv2D(32, (1, 1), padding='same', name='{}_Conv_1x1'.format("major"))(x_input)
x_in = self.activation(name='{}_conv_act'.format("major"))(x_in)
x_in = keras.layers.Flatten(name='{}'.format("major"))(x_in)
x_in = keras.layers.Dropout(self.dropout_rate, name='{}_Dropout_1'.format("major"))(x_in)
x_in = keras.layers.Dense(16, name='{}_Dense_16'.format("major"))(x_in)
x_in = self.activation()(x_in)
x_in = keras.layers.Dense(self.shape_outputs, name='{}_Dense'.format("major"))(x_in)
out_main = self.activation()(x_in)
self.model = keras.Model(inputs=x_input, outputs=[out_main])
x_input = Input(shape=self._input_shape)
x_in = Conv2D(4, (1, 1))(x_input)
x_in = PReLU()(x_in)
x_in = Flatten()(x_in)
x_in = Dropout(0.1)(x_in)
x_in = Dense(16)(x_in)
x_in = PReLU()(x_in)
x_in = Dense(self._output_shape)(x_in)
out = PReLU()(x_in)
self.model = keras.Model(inputs=x_input, outputs=[out])
```
* You are free to design your model as you like. Just make sure to store it in the class attribute `model`.
* Additionally, set your custom compile options including the loss definition.
```python
from keras.losses import mean_squared_error as mse
class MyCustomisedModel(AbstractModelClass):
def set_compile_options(self):
self.initial_lr = 1e-2
self.optimizer = keras.optimizers.SGD(lr=self.initial_lr, momentum=0.9)
self.lr_decay = mlair.model_modules.keras_extensions.LearningRateDecay(base_lr=self.initial_lr,
drop=.94,
epochs_drop=10)
self.loss = keras.losses.mean_squared_error
self.loss = mse
self.compile_options = {"metrics": ["mse", "mae"]}
```
@@ -302,6 +308,15 @@ class MyCustomisedModel(AbstractModelClass):
self.loss = keras.losses.mean_squared_error
self.compile_options = {"optimizer": keras.optimizers.Adam()}
```
## How to plug in the customised model into the workflow?
* Make use of the `model` argument and pass `MyCustomisedModel` when instantiating a workflow.
```python
from mlair.workflows import DefaultWorkflow
workflow = DefaultWorkflow(model=MyCustomisedModel)
workflow.run()
```
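The contract behind `AbstractModelClass` (the base class stores the shapes, while the subclass supplies `set_model` and `set_compile_options` and calls them from its own `__init__`) can be sketched with Python's `abc` module. `SketchModelClass` and `TinyModel` below are hypothetical stand-ins to illustrate the pattern, not MLAir's actual classes:

```python
from abc import ABC, abstractmethod

class SketchModelClass(ABC):
    """Hypothetical stand-in for AbstractModelClass (illustration only)."""
    def __init__(self, input_shape, output_shape):
        # the base class only stores the shapes; subclasses build the model
        self._input_shape = input_shape
        self._output_shape = output_shape
        self.model = None
        self.compile_options = {}

    @abstractmethod
    def set_model(self):
        """Build the model and store it as self.model."""

    @abstractmethod
    def set_compile_options(self):
        """Define loss, optimizer, and metrics in self.compile_options."""

class TinyModel(SketchModelClass):
    def __init__(self, input_shape, output_shape):
        super().__init__(input_shape[0], output_shape[0])
        self.set_model()
        self.set_compile_options()

    def set_model(self):
        # a real subclass would build and store a keras.Model here
        self.model = ("dense_net", self._input_shape, self._output_shape)

    def set_compile_options(self):
        self.compile_options = {"loss": "mse", "metrics": ["mse", "mae"]}

m = TinyModel([(7, 1, 9)], [(4,)])
```

Because the abstract methods are declared on the base class, forgetting to implement `set_model` or `set_compile_options` raises a `TypeError` at instantiation time rather than failing later in the workflow.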
## Specials for Branched Models
@@ -27,7 +27,7 @@ The output of running this default workflow will be structured like the followin
.. code-block::
INFO: mlair started
INFO: DefaultWorkflow started
INFO: ExperimentSetup started
...
INFO: ExperimentSetup finished after 00:00:01 (hh:mm:ss)
@@ -43,7 +43,7 @@ The output of running this default workflow will be structured like the followin
INFO: PostProcessing started
...
INFO: PostProcessing finished after 00:01:37 (hh:mm:ss)
INFO: mlair finished after 00:04:05 (hh:mm:ss)
INFO: DefaultWorkflow finished after 00:04:05 (hh:mm:ss)
Custom Model
------------
@@ -65,9 +65,9 @@ How to create a customised model?
class MyCustomisedModel(AbstractModelClass):
def __init__(self, shape_inputs: list, shape_outputs: list):
def __init__(self, input_shape: list, output_shape: list):
super().__init__(shape_inputs[0], shape_outputs[0])
super().__init__(input_shape[0], output_shape[0])
# settings
self.dropout_rate = 0.1
@@ -88,22 +88,22 @@ How to create a customised model?
loss has been added for demonstration only, because we use a built-in loss function. Nonetheless, we always encourage
you to add the loss as a custom object to prevent potential errors when loading an already created model instead of
training a new one.
* Now build your model inside :py:`set_model()` by using the instance attributes :py:`self.shape_inputs` and
:py:`self.shape_outputs` and storing the model as :py:`self.model`.
* Now build your model inside :py:`set_model()` by using the instance attributes :py:`self._input_shape` and
:py:`self._output_shape` and storing the model as :py:`self.model`.
.. code-block:: python
class MyCustomisedModel(AbstractModelClass):
def set_model(self):
x_input = keras.layers.Input(shape=self.shape_inputs)
x_input = keras.layers.Input(shape=self._input_shape)
x_in = keras.layers.Conv2D(32, (1, 1), padding='same', name='{}_Conv_1x1'.format("major"))(x_input)
x_in = self.activation(name='{}_conv_act'.format("major"))(x_in)
x_in = keras.layers.Flatten(name='{}'.format("major"))(x_in)
x_in = keras.layers.Dropout(self.dropout_rate, name='{}_Dropout_1'.format("major"))(x_in)
x_in = keras.layers.Dense(16, name='{}_Dense_16'.format("major"))(x_in)
x_in = self.activation()(x_in)
x_in = keras.layers.Dense(self.shape_outputs, name='{}_Dense'.format("major"))(x_in)
x_in = keras.layers.Dense(self._output_shape, name='{}_Dense'.format("major"))(x_in)
out_main = self.activation()(x_in)
self.model = keras.Model(inputs=x_input, outputs=[out_main])
@@ -143,6 +143,20 @@ How to create a customised model?
self.compile_options = {"optimizer": keras.optimizers.Adam()}
How to plug in the customised model into the workflow?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Make use of the :py:`model` argument and pass :py:`MyCustomisedModel` when instantiating a workflow.
.. code-block:: python
from mlair.workflows import DefaultWorkflow
workflow = DefaultWorkflow(model=MyCustomisedModel)
workflow.run()
Specials for Branched Models
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -342,7 +356,7 @@ The output will look like:
.. code-block::
INFO: mlair started
INFO: Workflow started
...
INFO: ExperimentSetup finished after 00:00:12 (hh:mm:ss)
INFO: CustomStage started
@@ -350,4 +364,4 @@ The output will look like:
INFO: test_string = Hello World
INFO: epochs = 128
INFO: CustomStage finished after 00:00:01 (hh:mm:ss)
INFO: mlair finished after 00:00:13 (hh:mm:ss)
\ No newline at end of file
INFO: Workflow finished after 00:00:13 (hh:mm:ss)
\ No newline at end of file
@@ -60,7 +60,7 @@ inside the experiment path in the logging folder.
.. code-block::
INFO: mlair started
INFO: DefaultWorkflow started
INFO: ExperimentSetup started
INFO: Experiment path is: /home/<usr>/mlair/testrun_network
...
@@ -68,7 +68,7 @@ inside the experiment path in the logging folder.
...
INFO: Training started
...
INFO: mlair finished after 00:00:12 (hh:mm:ss)
INFO: DefaultWorkflow finished after 00:00:12 (hh:mm:ss)
Example 2
@@ -94,7 +94,7 @@ The output looks similar, but we can see, that the new stations are loaded.
.. code-block::
INFO: mlair started
INFO: DefaultWorkflow started
INFO: ExperimentSetup started
...
INFO: load data for DEBW030 from JOIN
@@ -102,7 +102,7 @@ The output looks similar, but we can see, that the new stations are loaded.
...
INFO: Training started
...
INFO: mlair finished after 00:00:24 (hh:mm:ss)
INFO: DefaultWorkflow finished after 00:00:24 (hh:mm:ss)
Example 3
~~~~~~~~~
@@ -132,9 +132,9 @@ We can see from the terminal that no training was performed. Analysis is now mad
.. code-block::
INFO: mlair started
INFO: DefaultWorkflow started
...
INFO: No training has started, because trainable parameter was false.
...
INFO: mlair finished after 00:00:06 (hh:mm:ss)
INFO: DefaultWorkflow finished after 00:00:06 (hh:mm:ss)
__version_info__ = {
'major': 0,
'minor': 12,
'micro': 0,
'micro': 1,
}
from mlair.run_modules import RunEnvironment, ExperimentSetup, PreProcessing, ModelSetup, Training, PostProcessing
@@ -23,9 +23,9 @@ How to create a customised model?
class MyCustomisedModel(AbstractModelClass):
def __init__(self, shape_inputs: list, shape_outputs: list):
def __init__(self, input_shape: list, output_shape: list):
super().__init__(shape_inputs[0], shape_outputs[0])
super().__init__(input_shape[0], output_shape[0])
# settings
self.dropout_rate = 0.1
@@ -49,14 +49,14 @@ How to create a customised model?
class MyCustomisedModel(AbstractModelClass):
def set_model(self):
x_input = keras.layers.Input(shape=self.shape_inputs)
x_input = keras.layers.Input(shape=self._input_shape)
x_in = keras.layers.Conv2D(32, (1, 1), padding='same', name='{}_Conv_1x1'.format("major"))(x_input)
x_in = self.activation(name='{}_conv_act'.format("major"))(x_in)
x_in = keras.layers.Flatten(name='{}'.format("major"))(x_in)
x_in = keras.layers.Dropout(self.dropout_rate, name='{}_Dropout_1'.format("major"))(x_in)
x_in = keras.layers.Dense(16, name='{}_Dense_16'.format("major"))(x_in)
x_in = self.activation()(x_in)
x_in = keras.layers.Dense(self.shape_outputs, name='{}_Dense'.format("major"))(x_in)
x_in = keras.layers.Dense(self._output_shape, name='{}_Dense'.format("major"))(x_in)
out_main = self.activation()(x_in)
self.model = keras.Model(inputs=x_input, outputs=[out_main])
@@ -139,7 +139,7 @@ class AbstractModelClass(ABC):
the corresponding loss function.
"""
def __init__(self, shape_inputs, shape_outputs) -> None:
def __init__(self, input_shape, output_shape) -> None:
"""Predefine internal attributes for model and loss."""
self.__model = None
self.model_name = self.__class__.__name__
@@ -154,8 +154,8 @@ class AbstractModelClass(ABC):
}
self.__compile_options = self.__allowed_compile_options
self.__compile_options_is_set = False
self.shape_inputs = shape_inputs
self.shape_outputs = self.__extract_from_tuple(shape_outputs)
self._input_shape = input_shape
self._output_shape = self.__extract_from_tuple(output_shape)
def __getattr__(self, name: str) -> Any:
"""
@@ -355,17 +355,17 @@ class MyLittleModel(AbstractModelClass):
on the window_lead_time parameter.
"""
def __init__(self, shape_inputs: list, shape_outputs: list):
def __init__(self, input_shape: list, output_shape: list):
"""
Sets model and loss depending on the given arguments.
:param shape_inputs: list of input shapes (expect len=1 with shape=(window_hist, station, variables))
:param shape_outputs: list of output shapes (expect len=1 with shape=(window_forecast))
:param input_shape: list of input shapes (expect len=1 with shape=(window_hist, station, variables))
:param output_shape: list of output shapes (expect len=1 with shape=(window_forecast))
"""
assert len(shape_inputs) == 1