Realistic rain and fog rendering for images and videos using atmospheric scattering physics and depth estimation.
- Physics-based fog using the atmospheric scattering equation I = J·T + A(1-T), composited with a rain mask
- Depth-Anything-V2 for automatic depth estimation
- Procedural rain generation with temporal consistency for videos
- Full parameter control: intensity, angles, opacity, fog density, etc.
- Video support with smooth frame-to-frame rain motion
pip install torch torchvision opencv-python numpy pillow
cd ~/Acad/College/Sem\ 1/Projects/CS/
bash setup_rain_v2.sh
This will:
- Create folder structure
- Clone Depth-Anything-V2
- Download model weights (~1.3GB)
cd rain_v2
python main_pipeline.py
Parameters:
- intensity [25]: Rain density (10-100)
- beta [1.0]: Fog density (0.3-2.0, higher = thicker fog)
- variance [0.0]: Rain patchiness (0.0-1.0, higher = more uneven)
- angle_fb [0]: Forward/backward angle (-45 to +45, positive = toward camera)
- angle_lr [0]: Left/right wind angle (-30 to +30)
- saturation [0.8]: Color saturation (0.5-1.0, lower = more gray)
python video_rain.py
Additional Parameters:
- frame_limit [all]: Process only N frames (for testing)
- speed [12]: Rain falling speed in pixels/frame
- streak [50-100]: Rain streak length range (min-max)
- tightness [2]: Drop width in pixels (1-3)
- opacity [70]: Drop visibility percentage (0-100)
- blending [60]: Rain brightness percentage (0-100)
- brightness [0.7]: Overall video brightness (0.5-1.0)
Example presets:

Light rain:
intensity: 30
beta: 0.5
angle_fb: 0
angle_lr: 0
speed: 10
streak: 40-70
tightness: 1.5
opacity: 60

Moderate rain:
intensity: 70
beta: 1.2
angle_fb: 25
angle_lr: 5
speed: 15
streak: 60-110
tightness: 2.5
opacity: 80

Heavy rain:
intensity: 100
beta: 1.8
angle_fb: 35
angle_lr: 10
speed: 20
streak: 70-120
tightness: 3
opacity: 85
rain_v2/
├── inputs/               # Source images/videos
├── outputs/
│   ├── depths/           # Generated depth maps
│   ├── rain/             # Rain masks only
│   ├── final/            # Complete results
│   └── intermediate/     # Fog-only (testing)
├── checkpoints/          # Model weights
├── depth_anything_v2/    # Depth estimation model
├── depth_minimal.py      # Generate depth maps
├── main_pipeline.py      # Image processing
└── video_rain.py         # Video processing
Uses Depth-Anything-V2 to generate depth maps from images, normalized to 0-1 range.
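The normalization step is a one-liner; a minimal sketch (the function name is illustrative, not the project's API):

```python
import numpy as np

def normalize_depth(raw):
    """Scale a raw depth prediction into the 0-1 range used by the fog model."""
    raw = raw.astype(np.float32)
    return (raw - raw.min()) / (raw.max() - raw.min() + 1e-8)
```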
Applies Koschmieder's atmospheric scattering model:
- T = e^(-β·d): Transmission map (depth-based fog)
- I = J·T + A(1-T): Fog blending
- β: Scattering coefficient (fog density)
- A: Atmospheric light color (fog color)
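The model above can be sketched in a few lines of NumPy (an illustration under the stated equations; array names and the gray fog color are assumptions, not the project's actual code):

```python
import numpy as np

def apply_fog(image, depth, beta=1.0, atmos_light=0.9):
    """Koschmieder fog: I = J*T + A*(1 - T), with T = exp(-beta * d).

    image: float32 RGB array in [0, 1], shape (H, W, 3)
    depth: float32 depth map in [0, 1], shape (H, W); larger = farther
    beta:  scattering coefficient (fog density)
    atmos_light: atmospheric light A (a neutral gray fog color here)
    """
    transmission = np.exp(-beta * depth)            # T = e^(-beta*d)
    t = transmission[..., np.newaxis]               # broadcast over channels
    return image * t + atmos_light * (1.0 - t)      # I = J*T + A*(1-T)
```

Nearby pixels (depth near 0) keep their original color, while distant pixels fade toward the atmospheric light.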
Procedural rain with:
- Depth-aware sizing (distant drops smaller/fainter)
- Angle control (forward/backward and lateral motion)
- Temporal consistency (particles tracked across frames)
- Variable opacity and length
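A minimal particle-system sketch of these ideas (pure NumPy, heavily simplified relative to the real generator; all names and constants are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_drops(n, h, w):
    """Random drop positions (x, y) in pixel coordinates."""
    return np.stack([rng.uniform(0, w, n), rng.uniform(0, h, n)], axis=1)

def step_drops(drops, h, w, speed=12, angle_lr=0):
    """Advance the same particles each frame (temporal consistency).
    Wind tilts the horizontal motion; drops wrap at the frame edges."""
    dx = speed * np.tan(np.radians(angle_lr))
    drops[:, 0] = (drops[:, 0] + dx) % w
    drops[:, 1] = (drops[:, 1] + speed) % h
    return drops

def render_mask(drops, depth, streak=8):
    """Draw vertical streaks; distant drops (depth near 1) are shorter and fainter."""
    h, w = depth.shape
    mask = np.zeros((h, w), np.float32)
    for x, y in drops:
        xi, yi = int(x), int(y)
        d = depth[yi, xi]                                  # 0 = near, 1 = far
        length = max(2, int(streak * (1.0 - 0.7 * d)))     # depth-aware sizing
        mask[yi:yi + length, xi] = np.maximum(mask[yi:yi + length, xi], 1.0 - 0.6 * d)
    return mask
```

Wrapping particles at the frame edge, rather than respawning them at random positions, is what keeps rain motion coherent from frame to frame.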
Combines fog and rain with:
- Sky darkening (gradient on upper 40% of frame)
- Saturation reduction (rainy day look)
- Alpha blending for transparent rain
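A hedged sketch of this compositing stage (assumed names and constants; the real pipeline's ordering and ramp shape may differ):

```python
import numpy as np

def composite(foggy, rain_mask, opacity=0.7, saturation=0.8, sky_darken=0.3):
    """Blend rain over the fogged image, desaturate, and darken the sky."""
    h = foggy.shape[0]
    out = foggy.copy()

    # Saturation reduction: lerp each pixel toward its gray value
    gray = out.mean(axis=2, keepdims=True)
    out = saturation * out + (1.0 - saturation) * gray

    # Sky darkening: gradient over the upper 40% of the frame
    top = int(0.4 * h)
    ramp = np.linspace(1.0 - sky_darken, 1.0, top)[:, None, None]
    out[:top] *= ramp

    # Alpha-blend the (near-white) rain streaks over the scene
    alpha = opacity * rain_mask[..., None]
    return (1.0 - alpha) * out + alpha * 1.0
```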
- angle_fb: Simulates vehicle motion
  - Positive = moving forward (rain comes toward you)
  - Negative = reversing (rain moves away)
  - Zero = stationary
- angle_lr: Simulates lateral wind
  - Positive = wind from left
  - Negative = wind from right
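The two angles might map to a per-frame streak displacement roughly like this (a speculative sketch, not the project's actual formula):

```python
import math

def streak_displacement(angle_fb, angle_lr, speed=12):
    """Per-frame pixel offset of a drop from the two angle controls.
    angle_lr tilts the streak laterally; a positive angle_fb (moving
    forward) stretches the apparent vertical motion as drops approach
    the camera. Simplified illustration only."""
    dx = speed * math.tan(math.radians(angle_lr))
    dy = speed * (1.0 + math.sin(math.radians(max(angle_fb, 0))))
    return dx, dy
```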
- tightness: Drop width (1 = thin, 3 = thick)
- opacity: How visible each drop is
- blending: How bright rain appears against background
- streak: Length of rain streaks
- beta: Fog density (0.5 = light haze, 2.0 = thick fog)
- saturation: Color intensity (0.6 = very gray, 0.9 = colorful)
- brightness: Overall darkness (0.6 = dark/moody, 0.9 = bright)
- Images: ~5-10 seconds per image
- Videos: ~30 seconds per 100 frames (M4 Mac with MPS)
- GPU recommended for faster processing
"xFormers not available": Harmless warning, can be ignored
Video too slow: Set frame_limit to process fewer frames for testing
Rain too faint: Increase opacity and blending parameters
Rain too bright: Decrease opacity or blending
Fog too strong: Lower beta value
- Depth-Anything-V2: https://github.com/DepthAnything/Depth-Anything-V2
- Atmospheric Scattering: Koschmieder's model