While monocular depth estimation (MDE) has made significant progress, its reliability in adverse weather remains limited by the inherent photometric sensitivity of the sRGB color space. In this paper, we present SpecPhys-MoE, a framework that decouples depth estimation from environmental illumination changes. We first introduce the Spectral-Physical Feature Extractor (SpecPhys-FE), which maps images into the Horizontal/Vertical-Intensity (HVI) color space. Within this space, we use polarized HS maps together with a learnable intensity collapse function $C_k$ to suppress common artifacts such as red discontinuity and black plane noise in low-light and rainy conditions. Additionally, we integrate the Fast Fourier Transform (FFT) into the extraction pipeline; by capturing stable phase information, the network retains the scene's geometric topology even under severe photometric fluctuations. To handle conflicting feature representations across weather domains, we employ a ConvMoE decoder with specialized Swin Transformer experts, whose routing mechanism is supervised by environment-grounded scene priors and a dual auxiliary loss to prevent expert collapse. Extensive experiments on the nuScenes and RobotCar datasets demonstrate that SpecPhys-MoE significantly outperforms state-of-the-art methods, providing a reliable and scalable solution for diverse weather conditions in autonomous driving.
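The claim that FFT phase preserves scene structure under photometric change can be illustrated with a toy check (this is an illustrative sketch, not the paper's SpecPhys-FE pipeline): the phase spectrum of an image is invariant under a global positive brightness gain, since the FFT is linear and a positive real scale factor leaves the complex argument unchanged.

```python
import numpy as np

# Toy photometric perturbation: a global brightness gain applied to a
# synthetic "image". Any gain k > 0 scales every FFT coefficient by k,
# so the phase spectrum -- which encodes edge/structure layout -- is unchanged.
rng = np.random.default_rng(0)
img = rng.random((32, 32))          # stand-in for an image patch
bright = 1.7 * img                  # simulated illumination change (gain)

phase_orig = np.angle(np.fft.fft2(img))
phase_bright = np.angle(np.fft.fft2(bright))

print(np.allclose(phase_orig, phase_bright))  # -> True
```

An additive offset, by contrast, perturbs only the zero-frequency (DC) coefficient, so the non-DC phase remains a stable carrier of geometric topology under both kinds of global photometric shift.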