SlowNetwork
Overview
The SlowNetwork is a multi-layer network with configurable depth and width that processes input features through multiple dense layers with ReLU activations, then projects the output back to the original feature dimension. This layer is designed to be used as a component in more complex architectures.
This layer is particularly useful when you need deep, non-linear transformations of a feature vector without changing its dimensionality, which makes it a convenient building block for feature engineering and pattern-recognition models.
How It Works
The SlowNetwork processes data through a multi-layer transformation:
- Input Processing: Takes input features of specified dimension
- Hidden Layers: Applies multiple dense layers with ReLU activations
- Feature Transformation: Transforms features through the hidden layers
- Output Projection: Projects back to original input dimension
- Output Generation: Produces transformed features with the same shape as the input
```mermaid
graph TD
    A[Input Features] --> B[Hidden Layer 1]
    B --> C[Hidden Layer 2]
    C --> D[Hidden Layer N]
    D --> E[Output Projection]
    E --> F[Transformed Features]
    G[ReLU Activation] --> B
    G --> C
    G --> D

    style A fill:#e6f3ff,stroke:#4a86e8
    style F fill:#e8f5e9,stroke:#66bb6a
    style B fill:#fff9e6,stroke:#ffb74d
    style C fill:#fff9e6,stroke:#ffb74d
    style D fill:#fff9e6,stroke:#ffb74d
    style E fill:#f3e5f5,stroke:#9c27b0
```
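Conceptually, the forward pass behaves like the following plain-Keras sketch. This only illustrates the description above and is not the library's source; in particular, the activation on the final projection is assumed here to be linear.

```python
import keras

def slow_network_equivalent(x, input_dim: int, num_layers: int = 3, units: int = 128):
    """Rough functional equivalent of the SlowNetwork forward pass."""
    # Hidden layers: Dense + ReLU, repeated num_layers times.
    for _ in range(num_layers):
        x = keras.layers.Dense(units, activation="relu")(x)
    # Project back to the original feature dimension (linear activation assumed).
    return keras.layers.Dense(input_dim)(x)
```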
Why Use This Layer?
| Challenge | Traditional Approach | SlowNetwork's Solution |
|---|---|---|
| Complex Processing | Single dense layer | Multi-layer processing for complex transformations |
| Feature Dimension | Fixed output dimension | Maintains input dimension while processing |
| Deep Transformations | Limited transformation depth | Configurable depth for complex patterns |
| Architecture Components | Manual layer stacking | Pre-built component for complex architectures |
Use Cases
- Complex Feature Processing: Deep transformations of input features
- Architecture Components: Building blocks for complex architectures
- Feature Engineering: Sophisticated feature transformation
- Pattern Recognition: Complex pattern recognition in features
- Dimensionality Preservation: Maintaining input dimension while processing
Quick Start
Basic Usage
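A minimal sketch, assuming the layer is importable from `kerasfactory.layers` as documented in the API reference below; the dimensions are illustrative.

```python
import numpy as np
from kerasfactory.layers import SlowNetwork

# Create a SlowNetwork that processes 64-dimensional feature vectors.
layer = SlowNetwork(input_dim=64, num_layers=3, units=128)

# Dummy batch: 32 samples with 64 features each.
x = np.random.rand(32, 64).astype("float32")

# The output keeps the input shape: (32, 64).
y = layer(x)
print(y.shape)
```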
In a Sequential Model
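A sketch of the layer inside `keras.Sequential` (same import assumption as above; the sigmoid head is just an example task).

```python
import keras
from kerasfactory.layers import SlowNetwork

model = keras.Sequential([
    keras.Input(shape=(64,)),
    SlowNetwork(input_dim=64, num_layers=3, units=128),
    keras.layers.Dense(1, activation="sigmoid"),  # example binary-classification head
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```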
In a Functional Model
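A sketch using the Keras functional API, again with illustrative dimensions.

```python
import keras
from kerasfactory.layers import SlowNetwork

inputs = keras.Input(shape=(32,))

# Deep feature transformation; the tensor stays (batch, 32).
x = SlowNetwork(input_dim=32, num_layers=4, units=64)(inputs)
x = keras.layers.Dropout(0.2)(x)  # optional regularization after the block
outputs = keras.layers.Dense(10, activation="softmax")(x)

model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```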
Advanced Configuration
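One possible advanced configuration: stacking two differently sized blocks. Because each block returns its own `input_dim`, they compose without any reshaping (layer names here are illustrative).

```python
import keras
from kerasfactory.layers import SlowNetwork

inputs = keras.Input(shape=(128,))

# Shallow but wide block for the first transformation.
x = SlowNetwork(input_dim=128, num_layers=2, units=256, name="slow_wide")(inputs)

# Deeper, narrower block stacked on top; the feature dimension is still 128.
x = SlowNetwork(input_dim=128, num_layers=5, units=64, name="slow_deep")(x)

outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
model.summary()
```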
API Reference
kerasfactory.layers.SlowNetwork
This module implements a SlowNetwork layer that processes features through multiple dense layers. It's designed to be used as a component in more complex architectures.
Classes
SlowNetwork
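Based on the parameter table below, the constructor signature is presumably:

```python
SlowNetwork(
    input_dim: int,
    num_layers: int = 3,
    units: int = 128,
    name: str | None = None,
    **kwargs,
)
```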
A multi-layer network with configurable depth and width.
This layer processes input features through multiple dense layers with ReLU activations, and projects the output back to the original feature dimension.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `input_dim` | `int` | Dimension of the input features. | *required* |
| `num_layers` | `int` | Number of hidden layers. Default is 3. | `3` |
| `units` | `int` | Number of units per hidden layer. Default is 128. | `128` |
| `name` | `str \| None` | Optional name for the layer. | `None` |
Input shape
2D tensor with shape: (batch_size, input_dim)
Output shape
2D tensor with shape: (batch_size, input_dim) (same as input)
Example
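A sketch of the documented usage (values are illustrative):

```python
import numpy as np
from kerasfactory.layers import SlowNetwork

layer = SlowNetwork(input_dim=16, num_layers=2, units=32)
x = np.random.rand(8, 16).astype("float32")
y = layer(x)
assert y.shape == (8, 16)  # output dimension equals the input dimension
```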
Initialize the SlowNetwork layer.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `input_dim` | `int` | Input dimension. | *required* |
| `num_layers` | `int` | Number of hidden layers. | `3` |
| `units` | `int` | Number of units in each layer. | `128` |
| `name` | `str \| None` | Name of the layer. | `None` |
| `**kwargs` | `Any` | Additional keyword arguments. | `{}` |
Source code in kerasfactory/layers/SlowNetwork.py
Parameters Deep Dive
input_dim (int)
- Purpose: Dimension of the input features
- Range: 1 to 1000+ (typically 16-256)
- Impact: Determines the input and output feature dimension
- Recommendation: Match the actual input feature dimension
num_layers (int)
- Purpose: Number of hidden layers
- Range: 1 to 20+ (typically 2-5)
- Impact: More layers = more complex transformations
- Recommendation: Start with 3, scale based on complexity needs
units (int)
- Purpose: Number of units per hidden layer
- Range: 16 to 512+ (typically 64-256)
- Impact: Larger values = more complex transformations
- Recommendation: Start with 64-128, scale based on data complexity
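Following the recommendations above, a light and a heavier configuration might look like this; only capacity and cost change, the output dimension does not.

```python
from kerasfactory.layers import SlowNetwork

# Light: a sensible starting point for most feature vectors.
light = SlowNetwork(input_dim=32, num_layers=3, units=64)

# Heavy: more capacity for complex patterns, at higher compute and memory cost.
heavy = SlowNetwork(input_dim=32, num_layers=5, units=256)
```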
Performance Characteristics
- Speed: ⚡⚡⚡ Fast for small to medium networks; scales with the number of layers and units
- Memory: 💾💾💾 Moderate memory usage due to multiple dense layers
- Accuracy: 🎯🎯🎯🎯 Excellent for complex feature transformation
- Best For: Complex feature processing while maintaining input dimension
Examples
Example 1: Complex Feature Processing
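A sketch of end-to-end feature processing on synthetic data (the dataset and prediction head are illustrative).

```python
import keras
import numpy as np
from kerasfactory.layers import SlowNetwork

# Synthetic tabular data: 1000 samples, 32 features, binary target.
x_train = np.random.rand(1000, 32).astype("float32")
y_train = np.random.randint(0, 2, size=(1000, 1))

inputs = keras.Input(shape=(32,))

# Deep transformation of the features; the shape stays (batch, 32).
features = SlowNetwork(input_dim=32, num_layers=4, units=128)(inputs)

outputs = keras.layers.Dense(1, activation="sigmoid")(features)
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32, validation_split=0.2)
```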
Example 2: Architecture Component
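A sketch of SlowNetwork as a per-branch component in a multi-input model (input names and sizes are illustrative).

```python
import keras
from kerasfactory.layers import SlowNetwork

# Two feature groups processed by independent SlowNetwork blocks.
num_inputs = keras.Input(shape=(24,), name="numerical")
emb_inputs = keras.Input(shape=(16,), name="embeddings")

num_feat = SlowNetwork(input_dim=24, num_layers=3, units=64, name="slow_num")(num_inputs)
emb_feat = SlowNetwork(input_dim=16, num_layers=3, units=64, name="slow_emb")(emb_inputs)

# Each block preserves its input dimension, so the merged width is 24 + 16 = 40.
merged = keras.layers.Concatenate()([num_feat, emb_feat])
outputs = keras.layers.Dense(1, activation="sigmoid")(merged)

model = keras.Model([num_inputs, emb_inputs], outputs)
model.summary()
```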
Example 3: Layer Analysis
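A sketch of inspecting the layer's weights and configuration; exact weight names and config keys depend on the implementation.

```python
import numpy as np
from kerasfactory.layers import SlowNetwork

layer = SlowNetwork(input_dim=64, num_layers=3, units=128)

# Build the layer by calling it once on dummy data.
_ = layer(np.random.rand(4, 64).astype("float32"))

# Inspect the variables created by the hidden layers and the output projection.
for w in layer.trainable_weights:
    print(w.name, tuple(w.shape))

total_params = sum(int(np.prod(w.shape)) for w in layer.trainable_weights)
print("Total trainable parameters:", total_params)

# Standard Keras serialization hook; exact keys depend on the implementation.
print(layer.get_config())
```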
Tips & Best Practices
- Input Dimension: Must match the actual input feature dimension
- Number of Layers: Start with 3, scale based on complexity needs
- Units: Use 64-128 units for most applications
- Activation Functions: ReLU is used by default, consider alternatives if needed
- Regularization: Consider adding dropout between SlowNetwork layers
- Architecture: Use as components in larger architectures
Common Pitfalls
- Input Dimension: Must be a positive integer
- Number of Layers: Must be a positive integer
- Units: Must be a positive integer
- Memory Usage: Scales with number of layers and units
- Overfitting: Can overfit with too many layers/units on small datasets
Related Layers
- GatedResidualNetwork - Gated residual networks
- TransformerBlock - Transformer processing
- BoostingBlock - Boosting block processing
- VariableSelection - Variable selection
Further Reading
- Multi-Layer Networks - Multi-layer network concepts
- Feature Engineering - Feature engineering techniques
- Deep Learning - Deep learning concepts
- KerasFactory Layer Explorer - Browse all available layers
- Feature Engineering Tutorial - Complete guide to feature engineering