TensorFlow API Reference 📖
Complete API reference for MLPotion's TensorFlow components.
Auto-Generated Documentation
This page is automatically populated with API documentation from the source code.
Extensibility
These components are built using protocol-based design, making MLPotion easy to extend. Want to add new data sources, training methods, or integrations? See the Contributing Guide.
Data Loading
mlpotion.frameworks.tensorflow.data.loaders
TensorFlow data loaders.
Classes
CSVDataLoader
CSVDataLoader(
file_pattern: str,
batch_size: int = 32,
column_names: list[str] | None = None,
label_name: str | None = None,
map_fn: Callable[[dict[str, Any]], dict[str, Any]]
| None = None,
config: dict[str, Any] | None = None,
) -> None
Bases: DataLoader[tf.data.Dataset]
Load CSV files into TensorFlow datasets.
This class provides a convenient wrapper around tf.data.experimental.make_csv_dataset,
adding validation, logging, and configuration management. It handles file pattern matching,
column selection, and label separation.
Attributes:

| Name | Type | Description |
|---|---|---|
| `file_pattern` | `str` | Glob pattern matching the CSV files to load. |
| `batch_size` | `int` | Number of samples per batch. |
| `column_names` | `list[str] \| None` | Specific columns to load. If None, all columns are loaded. |
| `label_name` | `str \| None` | Name of the column to use as the label. If None, no labels are returned. |
| `map_fn` | `Callable \| None` | Optional function to map over the dataset (e.g., for preprocessing). |
| `config` | `dict \| None` | Additional configuration passed to `tf.data.experimental.make_csv_dataset`. |
Example

```python
from mlpotion.frameworks.tensorflow import CSVDataLoader

# Simple usage
loader = CSVDataLoader(
    file_pattern="data/train_*.csv",
    label_name="target_class",
    batch_size=64,
    config={"num_epochs": 5, "shuffle": True},
)
dataset = loader.load()

# Iterate over (features, labels) batches
for features, labels in dataset:
    print(features["some_column"].shape)
    break
```
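The `map_fn` hook is the natural place for per-element preprocessing. A minimal sketch, assuming a numeric column named `feature_a` (the column name is illustrative, not part of the API):

```python
import tensorflow as tf

from mlpotion.frameworks.tensorflow import CSVDataLoader

def scale_features(features: dict) -> dict:
    # Rescale one column to [0, 1]; "feature_a" is a hypothetical column name.
    features["feature_a"] = tf.cast(features["feature_a"], tf.float32) / 255.0
    return features

loader = CSVDataLoader(
    file_pattern="data/train_*.csv",
    batch_size=64,
    map_fn=scale_features,  # matches the documented dict-in, dict-out signature
)
dataset = loader.load()
```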
Source code in mlpotion/frameworks/tensorflow/data/loaders.py
Functions
load
load() -> tf.data.Dataset
Load CSV files into a TensorFlow dataset.
Returns:

| Type | Description |
|---|---|
| `tf.data.Dataset` | A dataset yielding `(features, labels)` tuples if `label_name` is provided, or just `features` otherwise. |
Raises:

| Type | Description |
|---|---|
| `DataLoadingError` | If no files match the pattern, or if the dataset cannot be created. |
Source code in mlpotion/frameworks/tensorflow/data/loaders.py
RecordDataLoader
RecordDataLoader(
file_pattern: str,
batch_size: int = 32,
column_names: list[str] | None = None,
label_name: str | None = None,
map_fn: Callable[[tf.Tensor], Any] | None = None,
element_spec_json: str | dict[str, Any] | None = None,
config: dict[str, Any] | None = None,
) -> None
Bases: DataLoader[tf.data.Dataset]
Loader for TFRecord files into tf.data.Dataset.
This class facilitates loading data from TFRecord files, which is the recommended format
for high-performance TensorFlow pipelines. It supports parsing examples, handling
nested structures via element_spec, and applying common dataset optimizations.
Attributes:

| Name | Type | Description |
|---|---|---|
| `file_pattern` | `str` | Glob pattern matching the TFRecord files. |
| `batch_size` | `int` | Number of samples per batch. |
| `column_names` | `list[str] \| None` | Specific feature keys to extract. |
| `label_name` | `str \| None` | Key of the label feature. |
| `map_fn` | `Callable \| None` | Optional function to map over the dataset. |
| `element_spec_json` | `str \| dict \| None` | JSON or dict describing the data structure (optional). |
| `config` | `dict \| None` | Configuration for reading (e.g., `compression_type`, `num_parallel_reads`). |
Example

```python
import tensorflow as tf

from mlpotion.frameworks.tensorflow import RecordDataLoader

loader = RecordDataLoader(
    file_pattern="data/records/*.tfrecord",
    batch_size=128,
    label_name="label",
    config={
        "compression_type": "GZIP",
        "num_parallel_reads": tf.data.AUTOTUNE,
    },
)
dataset = loader.load()
```
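The exact schema accepted by `element_spec_json` is defined by the library; as a purely hypothetical illustration, it might describe the dtype and shape of each feature key:

```python
# Hypothetical element spec -- the real schema is defined by the library,
# so treat the keys and value format here as illustrative only.
element_spec = {
    "image": {"dtype": "float32", "shape": [28, 28, 1]},
    "label": {"dtype": "int64", "shape": []},
}

loader = RecordDataLoader(
    file_pattern="data/records/*.tfrecord",
    label_name="label",
    element_spec_json=element_spec,  # a dict or a JSON string, per the docs
)
```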
Source code in mlpotion/frameworks/tensorflow/data/loaders.py
Functions
load
load() -> tf.data.Dataset
Load TFRecord files into a tf.data.Dataset.
Returns:

| Type | Description |
|---|---|
| `tf.data.Dataset` | Parsed and optionally mapped dataset of `(features, label)` tuples, or features only. |
Raises:

| Type | Description |
|---|---|
| `DataLoadingError` | If loading or parsing the TFRecord files fails. |
Source code in mlpotion/frameworks/tensorflow/data/loaders.py
mlpotion.frameworks.tensorflow.data.optimizers
TensorFlow dataset optimization.
Classes
DatasetOptimizer
DatasetOptimizer(
batch_size: int = 32,
shuffle_buffer_size: int | None = None,
prefetch: bool = True,
cache: bool = False,
) -> None
Bases: DatasetOptimizerProtocol[tf.data.Dataset]
Optimize TensorFlow datasets for training performance.
This class applies a standard set of performance optimizations to a tf.data.Dataset:
caching, shuffling, batching, and prefetching. These are critical for preventing
data loading bottlenecks during training.
Attributes:

| Name | Type | Description |
|---|---|---|
| `batch_size` | `int` | The number of samples per batch. |
| `shuffle_buffer_size` | `int \| None` | Size of the shuffle buffer. If None, shuffling is disabled. |
| `prefetch` | `bool` | Whether to prefetch data (uses `tf.data.AUTOTUNE`). |
| `cache` | `bool` | Whether to cache the dataset in memory. |
Example

```python
from mlpotion.frameworks.tensorflow import DatasetOptimizer

# Create optimizer
optimizer = DatasetOptimizer(
    batch_size=32,
    shuffle_buffer_size=1000,
    cache=True,
    prefetch=True,
)

# Apply to a raw dataset
optimized_dataset = optimizer.optimize(raw_dataset)
```
Initialize dataset optimizer.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `batch_size` | `int` | Batch size. | `32` |
| `shuffle_buffer_size` | `int \| None` | Buffer size for shuffling (None = no shuffle). | `None` |
| `prefetch` | `bool` | Whether to prefetch batches. | `True` |
| `cache` | `bool` | Whether to cache dataset in memory. | `False` |
Source code in mlpotion/frameworks/tensorflow/data/optimizers.py
Functions
from_config
classmethod
from_config(
config: DataOptimizationConfig,
) -> DatasetOptimizer
Create optimizer from configuration.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `config` | `DataOptimizationConfig` | Optimization configuration. | required |
Returns:

| Type | Description |
|---|---|
| `DatasetOptimizer` | Configured optimizer instance. |
Source code in mlpotion/frameworks/tensorflow/data/optimizers.py
optimize
optimize(dataset: tf.data.Dataset) -> tf.data.Dataset
Optimize dataset for training.
Applies optimizations in the following order:
1. Cache: Caches data in memory (if enabled).
2. Shuffle: Randomizes data order (if shuffle_buffer_size is set).
3. Batch: Groups data into batches.
4. Prefetch: Prepares the next batch while the current one is being processed.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataset` | `tf.data.Dataset` | The input dataset to optimize. | required |
Returns:

| Type | Description |
|---|---|
| `tf.data.Dataset` | The optimized dataset pipeline. |
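On raw `tf.data`, the documented cache, shuffle, batch, prefetch order corresponds roughly to the following chain. This is a sketch of the behavior, not the library's actual implementation:

```python
import tensorflow as tf

def optimize_equivalent(dataset: tf.data.Dataset) -> tf.data.Dataset:
    # Rough equivalent of DatasetOptimizer(batch_size=32, shuffle_buffer_size=1000,
    # cache=True, prefetch=True).optimize(dataset).
    dataset = dataset.cache()                     # 1. cache elements in memory
    dataset = dataset.shuffle(buffer_size=1000)   # 2. randomize element order
    dataset = dataset.batch(32)                   # 3. group into batches
    dataset = dataset.prefetch(tf.data.AUTOTUNE)  # 4. overlap data prep and training
    return dataset
```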
Source code in mlpotion/frameworks/tensorflow/data/optimizers.py
Training
mlpotion.frameworks.tensorflow.training.trainers
TensorFlow model trainers.
This module re-exports the Keras ModelTrainer implementation, as TensorFlow 2.x
uses Keras as its high-level API.
Classes
ModelTrainer
dataclass
Bases: ModelTrainerProtocol[Model, Sequence]
Generic trainer for Keras 3 models.
This class implements the ModelTrainerProtocol for Keras models, providing a standardized
interface for training. It wraps the standard model.fit() method but adds flexibility
and consistency checks.
It supports:
- Automatic model compilation if compile_params are provided.
- Handling of various data formats (tuples, dicts, generators).
- Standardized return format (dictionary of history metrics).
Example

```python
import keras
import numpy as np

from mlpotion.frameworks.keras import ModelTrainer

# Prepare data
X_train = np.random.rand(100, 10)
y_train = np.random.randint(0, 2, 100)

# Define model
model = keras.Sequential([
    keras.layers.Dense(1, activation="sigmoid")
])

# Initialize trainer
trainer = ModelTrainer()

# Train
history = trainer.train(
    model=model,
    data=(X_train, y_train),
    compile_params={
        "optimizer": "adam",
        "loss": "binary_crossentropy",
        "metrics": ["accuracy"],
    },
    fit_params={
        "epochs": 5,
        "batch_size": 32,
        "verbose": 1,
    },
)
print(history["loss"])
```
Functions
train
train(
model: Model,
dataset: Any,
config: ModelTrainingConfig,
validation_dataset: Any | None = None,
) -> TrainingResult[Model]
Train a Keras model using the provided dataset and configuration.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Model` | The Keras model to train. | required |
| `dataset` | `Any` | The training data. Can be a tuple, dict, or generator. | required |
| `config` | `ModelTrainingConfig` | Configuration object containing training parameters. | required |
| `validation_dataset` | `Any \| None` | Optional validation data. | `None` |
Returns:

| Type | Description |
|---|---|
| `TrainingResult[Model]` | An object containing the trained model, training history, and metrics. |
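A sketch of a call matching the documented signature (which differs from the class-level example above); the `ModelTrainingConfig` field and `TrainingResult` attribute names below are assumptions, not confirmed API:

```python
# Field and attribute names here are assumptions for illustration.
trainer = ModelTrainer()
result = trainer.train(
    model=model,
    dataset=train_ds,  # e.g. a tf.data.Dataset of (features, labels)
    config=ModelTrainingConfig(epochs=5),  # hypothetical field name
    validation_dataset=val_ds,
)
trained_model = result.model  # attribute name assumed
history = result.history      # attribute name assumed
```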
Source code in mlpotion/frameworks/keras/training/trainers.py
Evaluation
mlpotion.frameworks.tensorflow.evaluation.evaluators
TensorFlow model evaluators.
This module re-exports the Keras ModelEvaluator implementation, as TensorFlow 2.x
uses Keras as its high-level API.
Classes
ModelEvaluator
dataclass
Bases: ModelEvaluatorProtocol[Model, Sequence]
Generic evaluator for Keras 3 models.
This class implements the ModelEvaluatorProtocol for Keras models. It wraps the
model.evaluate() method to provide a consistent evaluation interface.
It ensures that the evaluation result is always returned as a dictionary of metric names to values, regardless of how the model was compiled or what arguments were passed.
Example

```python
import keras
import numpy as np

from mlpotion.frameworks.keras import ModelEvaluator

# Prepare data
X_test = np.random.rand(20, 10)
y_test = np.random.randint(0, 2, 20)

# Define model
model = keras.Sequential([
    keras.layers.Dense(1, activation="sigmoid")
])

# Initialize evaluator
evaluator = ModelEvaluator()

# Evaluate
metrics = evaluator.evaluate(
    model=model,
    data=(X_test, y_test),
    compile_params={
        "optimizer": "adam",
        "loss": "binary_crossentropy",
        "metrics": ["accuracy"],
    },
    eval_params={"batch_size": 32},
)
print(metrics)  # {'loss': 0.693..., 'accuracy': 0.5...}
```
Functions
evaluate
evaluate(
model: Model,
dataset: Any,
config: ModelEvaluationConfig,
) -> EvaluationResult
Evaluate a Keras model on the given data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Model` | The Keras model to evaluate. | required |
| `dataset` | `Any` | The evaluation data. Can be a tuple, dict, or generator. | required |
| `config` | `ModelEvaluationConfig` | Configuration object containing evaluation parameters. | required |
Returns:

| Type | Description |
|---|---|
| `EvaluationResult` | An object containing the evaluation metrics. |
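A sketch of a call matching the documented signature, with the same caveat as the trainer: the `ModelEvaluationConfig` field and `EvaluationResult` attribute names are assumptions:

```python
# Field and attribute names here are assumptions for illustration.
evaluator = ModelEvaluator()
result = evaluator.evaluate(
    model=model,
    dataset=(X_test, y_test),
    config=ModelEvaluationConfig(batch_size=32),  # hypothetical field name
)
print(result.metrics)  # attribute name assumed
```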
Source code in mlpotion/frameworks/keras/evaluation/evaluators.py
Persistence
mlpotion.frameworks.tensorflow.deployment.persistence
TensorFlow model persistence.
This module re-exports the Keras ModelPersistence implementation, as TensorFlow 2.x
uses Keras as its high-level API.
Classes
ModelPersistence
ModelPersistence(
path: str | Path, model: Model | None = None
) -> None
Bases: ModelPersistenceProtocol[Model]
Persistence helper for Keras models.
This class manages saving and loading of Keras models. It supports standard Keras
formats (.keras, .h5) and SavedModel directories. It also integrates with
ModelInspector to provide model metadata upon loading.
Attributes:

| Name | Type | Description |
|---|---|---|
| `path` | `Path` | The file path for the model artifact. |
| `model` | `Model \| None` | The Keras model instance (optional). |
Example

```python
import keras

from mlpotion.frameworks.keras import ModelPersistence

# Define model
model = keras.Sequential([keras.layers.Dense(1)])

# Save
saver = ModelPersistence(path="models/my_model.keras", model=model)
saver.save()

# Load
loader = ModelPersistence(path="models/my_model.keras")
loaded_model, metadata = loader.load(inspect=True)
print(metadata["parameters"])
```
Source code in mlpotion/frameworks/keras/deployment/persistence.py
Attributes

| Name | Type | Description |
|---|---|---|
| `model` (property, writable) | `Model \| None` | Currently attached Keras model (may be None before loading). |
| `path` (property, writable) | `Path` | Filesystem path where the model is saved/loaded. |
Functions
load
load(
*, inspect: bool = True, **kwargs: Any
) -> tuple[Model, dict[str, Any] | None]
Load a Keras model from disk.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `inspect` | `bool` | Whether to inspect the loaded model and return metadata. | `True` |
| `**kwargs` | `Any` | Additional arguments passed to the underlying Keras model-loading call. | `{}` |
Returns:

| Type | Description |
|---|---|
| `tuple[Model, dict[str, Any] \| None]` | A tuple containing the loaded model and optional inspection metadata. |
Raises:

| Type | Description |
|---|---|
| `ModelPersistenceError` | If the model file cannot be found or loaded. |
Source code in mlpotion/frameworks/keras/deployment/persistence.py
save
save(overwrite: bool = True, **kwargs: Any) -> None
Save the attached model to disk.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `overwrite` | `bool` | Whether to overwrite the file if it already exists. | `True` |
| `**kwargs` | `Any` | Additional arguments passed to the underlying Keras model-saving call. | `{}` |
Raises:

| Type | Description |
|---|---|
| `ModelPersistenceError` | If no model is attached, or if the file exists and `overwrite` is False. |
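Since `save` raises `ModelPersistenceError` rather than silently clobbering when `overwrite=False`, callers can guard existing artifacts. A minimal sketch (the exception's import path is assumed):

```python
from mlpotion.exceptions import ModelPersistenceError  # import path assumed

saver = ModelPersistence(path="models/my_model.keras", model=model)
try:
    saver.save(overwrite=False)  # refuse to replace an existing file
except ModelPersistenceError:
    print("File already exists; pass overwrite=True to replace it.")
```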
Source code in mlpotion/frameworks/keras/deployment/persistence.py
Export
mlpotion.frameworks.tensorflow.deployment.exporters
TensorFlow model exporters.
This module re-exports the Keras ModelExporter implementation, as TensorFlow 2.x
uses Keras as its high-level API.
Classes
ModelExporter
Bases: ModelExporterProtocol[Model]
Generic exporter for Keras 3 models.
This class implements ModelExporterProtocol and supports exporting Keras models
to various formats, including native Keras formats (.keras, .h5) and inference
formats like TensorFlow SavedModel or ONNX (via model.export).
It also supports creating export archives with custom endpoints using keras.export.ExportArchive.
Example

```python
import keras

from mlpotion.frameworks.keras import ModelExporter

model = keras.Sequential([keras.layers.Dense(1)])
exporter = ModelExporter()

# Export as standard Keras file
exporter.export(model, "models/model.keras")

# Export for serving (TF SavedModel)
exporter.export(model, "models/serving", export_format="tf_saved_model")
```
Functions
export
export(model: Model, path: str, **kwargs: Any) -> None
Export a Keras model to disk.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Model` | The Keras model to export. | required |
| `path` | `str` | The destination path or directory. | required |
| `**kwargs` | `Any` | Additional export options (e.g., `export_format`). | `{}` |
Raises:

| Type | Description |
|---|---|
| `ModelExporterError` | If export fails. |
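The class docs mention ONNX among the supported inference formats; assuming `export_format` accepts an `"onnx"` value analogous to the documented `"tf_saved_model"` usage, the call might look like:

```python
exporter = ModelExporter()

# "onnx" as an export_format value is an assumption mirroring the
# documented "tf_saved_model" usage; verify against the library source.
exporter.export(model, "models/model.onnx", export_format="onnx")
```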
Source code in mlpotion/frameworks/keras/deployment/exporters.py
Model Inspection
mlpotion.frameworks.tensorflow.models.inspection
TensorFlow model inspection.
This module re-exports the Keras ModelInspector implementation, as TensorFlow 2.x
uses Keras as its high-level API.
Classes
ModelInspector
dataclass
Bases: ModelInspectorProtocol[ModelLike]
Inspector for Keras models.
This class analyzes Keras models to extract metadata such as input/output shapes, parameter counts, layer details, and signatures. It is useful for validating models before training or deployment, and for generating model reports.
Attributes:

| Name | Type | Description |
|---|---|---|
| `include_layers` | `bool` | Whether to include detailed information about each layer. |
| `include_signatures` | `bool` | Whether to include model signatures (if available). |
Example

```python
import keras

from mlpotion.frameworks.keras import ModelInspector

# Keras 3 style: declare the input shape with keras.Input rather than
# passing input_shape to a layer.
model = keras.Sequential([keras.Input(shape=(10,)), keras.layers.Dense(1)])

inspector = ModelInspector()
info = inspector.inspect(model)
print(f"Total params: {info['parameters']['total']}")
print(f"Inputs: {info['inputs']}")
```
Functions
inspect
inspect(model: ModelLike) -> dict[str, Any]
Inspect a Keras model and return structured metadata.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `ModelLike` | The Keras model to inspect. | required |
Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | A dictionary of model metadata, including input/output shapes, parameter counts, and (optionally) layer details and signatures. |
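Because `ModelInspector` is a dataclass, its documented attributes should be settable at construction time. A sketch (the `layers` result key is assumed from `include_layers`, not confirmed):

```python
# Constructor arguments follow the documented dataclass attributes;
# the "layers" key in the result is assumed from include_layers.
inspector = ModelInspector(include_layers=True, include_signatures=False)
info = inspector.inspect(model)
for layer in info.get("layers", []):
    print(layer)
```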
Source code in mlpotion/frameworks/keras/models/inspection.py
See the TensorFlow Guide for usage examples.