🐌 SlowNetwork

🟑 Intermediate βœ… Stable πŸ”₯ Popular

🎯 Overview

SlowNetwork is a multi-layer network with configurable depth and width. It processes input features through multiple dense layers with ReLU activations, then projects the output back to the original feature dimension, making it easy to use as a component in more complex architectures.

Use it when you need deep transformations of a feature vector without changing its dimensionality, for example in sophisticated feature engineering or complex pattern-recognition pipelines.

πŸ” How It Works

The SlowNetwork applies a multi-layer transformation (a minimal code sketch follows the diagram):

  1. Input Processing: Takes a 2D input tensor of shape (batch_size, input_dim)
  2. Hidden Layers: Passes the features through num_layers dense layers, each with units units and a ReLU activation
  3. Output Projection: A final dense layer projects the result back to the original input_dim
  4. Output Generation: Produces transformed features with the same shape as the input

graph TD
    A[Input Features] --> B[Hidden Layer 1]
    B --> C[Hidden Layer 2]
    C --> D[Hidden Layer N]
    D --> E[Output Projection]
    E --> F[Transformed Features]

    G[ReLU Activation] --> B
    G --> C
    G --> D

    style A fill:#e6f3ff,stroke:#4a86e8
    style F fill:#e8f5e9,stroke:#66bb6a
    style B fill:#fff9e6,stroke:#ffb74d
    style C fill:#fff9e6,stroke:#ffb74d
    style D fill:#fff9e6,stroke:#ffb74d
    style E fill:#f3e5f5,stroke:#9c27b0
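
For intuition, here is a minimal sketch of the forward pass built from plain keras.layers.Dense layers. It mirrors the structure described above but is not the library's exact implementation; layer names, initialization, and other details may differ.

import keras

def slow_network_forward(x, input_dim: int, num_layers: int = 3, units: int = 128):
    """Sketch of SlowNetwork's forward pass (assumed structure, not the real source)."""
    # Hidden stack: num_layers dense layers with ReLU activations
    for _ in range(num_layers):
        x = keras.layers.Dense(units, activation="relu")(x)
    # Output projection back to the original feature dimension
    return keras.layers.Dense(input_dim)(x)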

πŸ’‘ Why Use This Layer?

| Challenge | Traditional Approach | SlowNetwork's Solution |
| --- | --- | --- |
| Complex Processing | Single dense layer | 🎯 Multi-layer processing for complex transformations |
| Feature Dimension | Fixed output dimension | ⚑ Maintains input dimension while processing |
| Deep Transformations | Limited transformation depth | 🧠 Configurable depth for complex patterns |
| Architecture Components | Manual layer stacking | πŸ”— Pre-built component for complex architectures |

πŸ“Š Use Cases

  • Complex Feature Processing: Deep transformations of input features
  • Architecture Components: Building blocks for complex architectures
  • Feature Engineering: Sophisticated feature transformation
  • Pattern Recognition: Complex pattern recognition in features
  • Dimensionality Preservation: Maintaining input dimension while processing

πŸš€ Quick Start

Basic Usage

import keras
from kerasfactory.layers import SlowNetwork

# Create sample input data
batch_size, input_dim = 32, 16
x = keras.random.normal((batch_size, input_dim))

# Apply slow network
slow_net = SlowNetwork(input_dim=16, num_layers=3, units=64)
output = slow_net(x)

print(f"Input shape: {x.shape}")           # (32, 16)
print(f"Output shape: {output.shape}")     # (32, 16)

In a Sequential Model

import keras
from kerasfactory.layers import SlowNetwork

model = keras.Sequential([
    keras.layers.Dense(32, activation='relu'),
    SlowNetwork(input_dim=32, num_layers=3, units=64),
    keras.layers.Dense(16, activation='relu'),
    SlowNetwork(input_dim=16, num_layers=2, units=32),
    keras.layers.Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

In a Functional Model

import keras
from kerasfactory.layers import SlowNetwork

# Define inputs
inputs = keras.Input(shape=(20,))  # 20 features

# Apply slow network
x = SlowNetwork(input_dim=20, num_layers=3, units=64)(inputs)

# Continue processing
x = keras.layers.Dense(32, activation='relu')(x)
x = SlowNetwork(input_dim=32, num_layers=2, units=32)(x)
x = keras.layers.Dense(16, activation='relu')(x)
outputs = keras.layers.Dense(1, activation='sigmoid')(x)

model = keras.Model(inputs, outputs)

Advanced Configuration

import keras
from kerasfactory.layers import SlowNetwork

# Advanced configuration with multiple slow networks
def create_complex_network():
    inputs = keras.Input(shape=(30,))

    # Multiple slow networks with different configurations
    x = SlowNetwork(input_dim=30, num_layers=4, units=128)(inputs)
    x = keras.layers.Dense(64, activation='relu')(x)
    x = keras.layers.BatchNormalization()(x)

    x = SlowNetwork(input_dim=64, num_layers=3, units=96)(x)
    x = keras.layers.Dense(48, activation='relu')(x)
    x = keras.layers.Dropout(0.2)(x)

    x = SlowNetwork(input_dim=48, num_layers=2, units=64)(x)
    x = keras.layers.Dense(32, activation='relu')(x)
    x = keras.layers.Dropout(0.1)(x)

    # Multi-task output
    classification = keras.layers.Dense(3, activation='softmax', name='classification')(x)
    regression = keras.layers.Dense(1, name='regression')(x)

    return keras.Model(inputs, [classification, regression])

model = create_complex_network()
model.compile(
    optimizer='adam',
    loss={'classification': 'categorical_crossentropy', 'regression': 'mse'},
    loss_weights={'classification': 1.0, 'regression': 0.5}
)

πŸ“– API Reference

kerasfactory.layers.SlowNetwork

This module implements a SlowNetwork layer that processes features through multiple dense layers. It's designed to be used as a component in more complex architectures.

Classes

SlowNetwork
SlowNetwork(
    input_dim: int,
    num_layers: int = 3,
    units: int = 128,
    name: str | None = None,
    **kwargs: Any
)

A multi-layer network with configurable depth and width.

This layer processes input features through multiple dense layers with ReLU activations, and projects the output back to the original feature dimension.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| input_dim | int | Dimension of the input features. | required |
| num_layers | int | Number of hidden layers. | 3 |
| units | int | Number of units per hidden layer. | 128 |
| name | str \| None | Optional name for the layer. | None |
Input shape

2D tensor with shape: (batch_size, input_dim)

Output shape

2D tensor with shape: (batch_size, input_dim) (same as input)

Example
import keras
from kerasfactory.layers import SlowNetwork

# Create sample input data
x = keras.random.normal((32, 16))  # 32 samples, 16 features

# Create the layer
slow_net = SlowNetwork(input_dim=16, num_layers=3, units=64)
y = slow_net(x)
print("Output shape:", y.shape)  # (32, 16)

Source code in kerasfactory/layers/SlowNetwork.py
def __init__(
    self,
    input_dim: int,
    num_layers: int = 3,
    units: int = 128,
    name: str | None = None,
    **kwargs: Any,
) -> None:
    """Initialize the SlowNetwork layer.

    Args:
        input_dim: Input dimension.
        num_layers: Number of hidden layers.
        units: Number of units in each layer.
        name: Name of the layer.
        **kwargs: Additional keyword arguments.
    """
    # Set public attributes
    self.input_dim = input_dim
    self.num_layers = num_layers
    self.units = units

    # Initialize instance variables
    self.hidden_layers: list[Any] | None = None
    self.output_layer: Any | None = None

    # Validate parameters
    self._validate_params()

    # Call parent's __init__
    super().__init__(name=name, **kwargs)

πŸ”§ Parameters Deep Dive

input_dim (int)

  • Purpose: Dimension of the input features
  • Range: 1 to 1000+ (typically 16-256)
  • Impact: Determines the input and output feature dimension
  • Recommendation: Match the actual input feature dimension

num_layers (int)

  • Purpose: Number of hidden layers
  • Range: 1 to 20+ (typically 2-5)
  • Impact: More layers = more complex transformations
  • Recommendation: Start with 3, scale based on complexity needs

units (int)

  • Purpose: Number of units per hidden layer
  • Range: 16 to 512+ (typically 64-256)
  • Impact: Larger values = more complex transformations
  • Recommendation: Start with 64-128, scale based on data complexity (see the parameter-count sketch below)
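
Since the hidden stack dominates the parameter count, it is worth estimating model size before scaling num_layers or units. The sketch below assumes the plain dense structure described in "How It Works"; the library's exact count may differ slightly.

def slow_network_param_count(input_dim: int, num_layers: int = 3, units: int = 128) -> int:
    # First hidden layer: input_dim -> units (weights + biases)
    params = input_dim * units + units
    # Remaining hidden layers: units -> units
    params += (num_layers - 1) * (units * units + units)
    # Output projection: units -> input_dim
    params += units * input_dim + input_dim
    return params

print(slow_network_param_count(16, num_layers=3, units=64))  # 10448 for the Quick Start config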

πŸ“ˆ Performance Characteristics

  • Speed: ⚑⚑⚑ Fast for small to medium networks; cost scales with layers and units (see the timing sketch below)
  • Memory: πŸ’ΎπŸ’ΎπŸ’Ύ Moderate memory usage due to multiple dense layers
  • Accuracy: 🎯🎯🎯🎯 Excellent for complex feature transformation
  • Best For: Complex feature processing while maintaining input dimension
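
The speed claim is easy to check on your own hardware. A rough wall-clock timing sketch, so treat the numbers as indicative only:

import time

import keras
from kerasfactory.layers import SlowNetwork

x = keras.random.normal((1024, 64))
for num_layers in (2, 4, 8):
    layer = SlowNetwork(input_dim=64, num_layers=num_layers, units=128)
    layer(x)  # first call builds the layer; exclude it from timing
    start = time.perf_counter()
    for _ in range(10):
        layer(x)
    elapsed_ms = (time.perf_counter() - start) / 10 * 1e3
    print(f"{num_layers} layers: {elapsed_ms:.2f} ms per forward pass")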

🎨 Examples

Example 1: Complex Feature Processing

import keras
from kerasfactory.layers import SlowNetwork

# Create a complex feature processing model
def create_complex_feature_processor():
    inputs = keras.Input(shape=(25,))  # 25 features

    # Multiple slow networks for different processing stages
    x = SlowNetwork(input_dim=25, num_layers=4, units=128)(inputs)
    x = keras.layers.Dense(64, activation='relu')(x)
    x = keras.layers.BatchNormalization()(x)

    x = SlowNetwork(input_dim=64, num_layers=3, units=96)(x)
    x = keras.layers.Dense(48, activation='relu')(x)
    x = keras.layers.Dropout(0.2)(x)

    x = SlowNetwork(input_dim=48, num_layers=2, units=64)(x)
    x = keras.layers.Dense(32, activation='relu')(x)
    x = keras.layers.Dropout(0.1)(x)

    # Output
    outputs = keras.layers.Dense(1, activation='sigmoid')(x)

    return keras.Model(inputs, outputs)

model = create_complex_feature_processor()
model.compile(optimizer='adam', loss='binary_crossentropy')

# Test with sample data
sample_data = keras.random.normal((100, 25))
predictions = model(sample_data)
print(f"Complex feature processor predictions shape: {predictions.shape}")

Example 2: Architecture Component

import keras
from kerasfactory.layers import SlowNetwork

# Use SlowNetwork as a component in a complex architecture
def create_component_based_architecture():
    inputs = keras.Input(shape=(20,))

    # Initial processing
    x = keras.layers.Dense(32, activation='relu')(inputs)
    x = keras.layers.BatchNormalization()(x)

    # SlowNetwork component 1
    x = SlowNetwork(input_dim=32, num_layers=3, units=64)(x)
    x = keras.layers.Dropout(0.2)(x)

    # SlowNetwork component 2
    x = SlowNetwork(input_dim=32, num_layers=2, units=48)(x)
    x = keras.layers.Dropout(0.1)(x)

    # Final processing
    x = keras.layers.Dense(16, activation='relu')(x)
    x = keras.layers.Dropout(0.1)(x)

    # Output
    outputs = keras.layers.Dense(1, activation='sigmoid')(x)

    return keras.Model(inputs, outputs)

model = create_component_based_architecture()
model.compile(optimizer='adam', loss='binary_crossentropy')

Example 3: Layer Analysis

import keras
from kerasfactory.layers import SlowNetwork

# Analyze SlowNetwork behavior
def analyze_slow_network():
    # Create model with SlowNetwork
    inputs = keras.Input(shape=(15,))
    x = SlowNetwork(input_dim=15, num_layers=3, units=32)(inputs)
    outputs = keras.layers.Dense(1, activation='sigmoid')(x)

    model = keras.Model(inputs, outputs)

    # Test with different input patterns
    test_inputs = [
        keras.random.normal((10, 15)),  # Random data
        keras.random.normal((10, 15)) * 2,  # Scaled data
        keras.random.normal((10, 15)) + 1,  # Shifted data
    ]

    print("SlowNetwork Analysis:")
    print("=" * 40)

    for i, test_input in enumerate(test_inputs):
        prediction = model(test_input)
        print(f"Test {i+1}: Prediction mean = {keras.ops.mean(prediction):.4f}")

    return model

# Run the analysis
model = analyze_slow_network()

πŸ’‘ Tips & Best Practices

  • Input Dimension: Must match the actual input feature dimension
  • Number of Layers: Start with 3, scale based on complexity needs
  • Units: Use 64-128 units for most applications
  • Activation Functions: ReLU is used by default; consider alternatives if needed
  • Regularization: Consider adding dropout between SlowNetwork layers
  • Architecture: Use as components in larger architectures

⚠️ Common Pitfalls

  • Input Dimension: Must be a positive integer that matches the incoming feature dimension
  • Number of Layers: Must be a positive integer
  • Units: Must be a positive integer (invalid values are rejected at construction time; see the sketch below)
  • Memory Usage: Scales with the number of layers and units
  • Overfitting: Too many layers/units can overfit on small datasets
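
Invalid values are rejected when the layer is constructed (via _validate_params in __init__). A quick check, with the caveat that the exact exception type is an assumption not confirmed by the source shown here:

from kerasfactory.layers import SlowNetwork

try:
    SlowNetwork(input_dim=0)  # invalid: input_dim must be a positive integer
except Exception as err:  # exact exception type is an assumption
    print(f"Rejected invalid config: {err}")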

πŸ“š Further Reading