Flexible Feed-Forward Neural Network for Tabular Data
Overview
BaseFeedForwardModel is a configurable feed-forward neural network for tabular data. The number and width of hidden layers, the activation functions, and the regularization options are all configurable, and an optional preprocessing model can be attached. It is well suited to regression and classification tasks on structured data.
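For orientation, here is a minimal construction sketch. The constructor arguments mirror the usage examples later in this document; the import path for `BaseFeedForwardModel` is an assumption and may differ in your installation.

```python
# Assumed import path; adjust to wherever BaseFeedForwardModel lives in kerasfactory.
from kerasfactory.models import BaseFeedForwardModel

# Three named features, two hidden layers, one regression output.
model = BaseFeedForwardModel(
    feature_names=['feature1', 'feature2', 'feature3'],
    hidden_units=[64, 32],
    output_units=1,
    dropout_rate=0.2,
)
model.compile(optimizer='adam', loss='mse')
```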
Key Features
Flexible Architecture: Configurable hidden layers and units
Feature-Based Inputs: Named feature inputs for better interpretability
Regularization Options: Dropout, kernel/bias regularizers and constraints
Preprocessing Integration: Optional preprocessing model support
Input:
- Dictionary of named features: {feature_name: (batch_size, 1)}
- Or a single tensor: (batch_size, n_features) when using a preprocessing model
- Type: float32
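To illustrate, a short sketch of preparing both input formats from a pandas DataFrame (the column names are placeholders):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'feature1': np.random.randn(100),
    'feature2': np.random.randn(100),
    'feature3': np.random.randn(100),
})

# Format 1: dictionary of named features, each shaped (batch_size, 1)
inputs_dict = {
    name: df[name].to_numpy(dtype='float32').reshape(-1, 1)
    for name in ['feature1', 'feature2', 'feature3']
}

# Format 2: single (batch_size, n_features) tensor, for use with a preprocessing model
inputs_tensor = df.to_numpy(dtype='float32')
```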
Usage with a Preprocessing Model

```python
import numpy as np
import pandas as pd

from kerasfactory.utils.data_analyzer import DataAnalyzer

# Create a preprocessing model from a profile of the training data
df = pd.DataFrame({
    'feature1': np.random.randn(100),
    'feature2': np.random.randn(100),
    'feature3': np.random.randn(100),
})
analyzer = DataAnalyzer(df)
preprocessing_model = analyzer.create_preprocessing_model()

# Create the model with preprocessing attached
model = BaseFeedForwardModel(
    feature_names=['feature1', 'feature2', 'feature3'],
    hidden_units=[64, 32],
    output_units=1,
    preprocessing_model=preprocessing_model,
)
```
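Training then follows the standard Keras workflow. A minimal sketch continuing from the snippet above (the target `y` is synthetic); because a preprocessing model is attached, the single-tensor input format applies:

```python
# Synthetic regression target for the 100-row DataFrame above
y = np.random.randn(100).astype('float32')

# Single-tensor input format, since a preprocessing model is attached
X = df.to_numpy(dtype='float32')

model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)
```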
Saving and Loading

```python
import keras

# Save the full model (Keras v3 format)
model.save('feedforward_model.keras')

# Load the model (BaseFeedForwardModel must be registered as a custom object,
# e.g. via @keras.saving.register_keras_serializable)
loaded_model = keras.models.load_model('feedforward_model.keras')

# Save weights only (Keras 3 requires the .weights.h5 suffix)
model.save_weights('feedforward.weights.h5')

# Load weights into a freshly constructed model
# (the model may need to be built, e.g. by calling it once, before loading weights)
model_new = BaseFeedForwardModel(
    feature_names=['feature1', 'feature2', 'feature3'],
    hidden_units=[64, 32],
    output_units=1,
)
model_new.load_weights('feedforward.weights.h5')
```
Best Use Cases
Tabular Data: Structured data with named features
Regression Tasks: Continuous value prediction
Classification Tasks: Binary and multi-class classification
Feature Engineering: When you need explicit feature control
Production Systems: With preprocessing model integration
Performance Considerations
hidden_units: Each entry adds one hidden layer, so longer lists mean deeper networks; deeper networks can learn more complex patterns but may overfit
dropout_rate: Higher dropout helps prevent overfitting; use 0.2-0.5 for small datasets
activation: ReLU is the default and works well in most cases; Swish (SiLU) is worth trying and can sometimes improve results
regularization: Use L2 regularization for weight decay, or L1 to encourage sparse weights (a rough form of feature selection); see the sketch after this list
output_units: 1 for regression/binary classification, n_classes for multi-class
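To make these options concrete, here is a hedged sketch of a regularized multi-class configuration. The `kernel_regularizer` argument name mirrors the standard Keras layer argument and is assumed to be forwarded by the constructor:

```python
import keras

# Hypothetical 5-class classifier with L2 weight decay and moderate dropout
clf = BaseFeedForwardModel(
    feature_names=['feature1', 'feature2', 'feature3'],
    hidden_units=[128, 64],
    output_units=5,                                  # n_classes for multi-class
    dropout_rate=0.3,                                # 0.2-0.5 suits small datasets
    kernel_regularizer=keras.regularizers.L2(1e-4),  # assumed constructor argument
)
clf.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'],
)
```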
Architecture Tips
Start with 2-3 hidden layers for most problems
Use dropout (0.2-0.5) when you have limited training data
Increase hidden units gradually; 64-256 is a good range
Use normalization (via the preprocessing model) for better training stability; a sketch follows this list
Regularization helps prevent overfitting on small datasets
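For the normalization tip, a minimal sketch assuming `preprocessing_model` accepts any Keras model; here a plain `keras.layers.Normalization` layer stands in for the `DataAnalyzer`-generated preprocessing model:

```python
import keras
import numpy as np

# Adapt a Normalization layer to the training features (z-score scaling)
X_train = np.random.randn(100, 3).astype('float32')
norm_layer = keras.layers.Normalization()
norm_layer.adapt(X_train)

# Wrap it as a tiny standalone preprocessing model
inp = keras.Input(shape=(3,))
preprocessing = keras.Model(inp, norm_layer(inp))

model = BaseFeedForwardModel(
    feature_names=['feature1', 'feature2', 'feature3'],
    hidden_units=[64, 32],
    output_units=1,
    preprocessing_model=preprocessing,  # assumed to accept any Keras model
)
```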
Notes
Feature names define the input structure and must match your data
All features are concatenated before passing through the hidden layers (see the sketch after this list)
Dropout is applied between hidden layers, not after the output layer
The model supports any Keras-compatible optimizer and loss function
Preprocessing model integration enables unified training/inference pipelines
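Putting the notes together, the forward pass presumably looks roughly like the functional-API sketch below. This is an illustrative reconstruction, not the library's actual implementation:

```python
import keras

def sketch_feedforward(feature_names, hidden_units, output_units, dropout_rate=0.2):
    """Illustrative sketch of the presumed BaseFeedForwardModel topology."""
    # One named (batch_size, 1) input per feature
    inputs = {name: keras.Input(shape=(1,), name=name) for name in feature_names}
    # All features are concatenated before the hidden stack
    x = keras.layers.Concatenate()(list(inputs.values()))
    for units in hidden_units:
        x = keras.layers.Dense(units, activation='relu')(x)
        x = keras.layers.Dropout(dropout_rate)(x)  # dropout between hidden layers
    # No dropout after the output layer
    outputs = keras.layers.Dense(output_units)(x)
    return keras.Model(inputs=inputs, outputs=outputs)

demo = sketch_feedforward(['feature1', 'feature2', 'feature3'], [64, 32], 1)
demo.summary()
```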