Multimodal Sentiment Analysis (MSA) aims to infer sentiment by jointly leveraging textual, visual, and acoustic modalities. A core challenge remains, however: how to dynamically identify and exploit sample-level dependency preferences among different modality combinations. We decompose this challenge into two subproblems, combination matching and fusion validation, and correspondingly propose the Customized-Allocation Mixture-of-Experts (CA-MoE), a novel framework with two complementary components that enable dynamic, sample-level modality routing within the MoE architecture. First, Affinity-guided Customized Modality Allocation (ACMA) acts as a distributor, leveraging Geometry-Gradient Affinity (G2-Affinity) to guide an attraction-repulsion routing mechanism that allocates customized modality combinations to experts. Second, after expert fusion, Reliability-Aware Expert Selection (RAES) jointly considers each representation's angular proximity to sentiment prototypes and its competitive magnitude intensity, yielding a reliability selection matrix that weights the experts for the final sentiment prediction. Extensive experiments on three benchmark MSA datasets demonstrate that CA-MoE achieves significant improvements over, or competitive performance with, state-of-the-art methods.
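The route-then-reweight pipeline described above can be sketched, under heavy assumptions, roughly as follows. All function names (`affinity_route`, `reliability_weights`), the top-k softmax gating, and the proximity-times-magnitude reliability score are illustrative stand-ins, not the actual G2-Affinity or RAES formulations:

```python
import numpy as np

# Hypothetical sketch of sample-level routing over modality-combination
# experts, loosely inspired by the pipeline described in the abstract.
# Every formula below is an assumption for illustration only.

MODALITY_COMBOS = ["t", "v", "a", "tv", "ta", "va", "tva"]  # 7 experts

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def affinity_route(affinity, top_k=2):
    """Attraction-repulsion style routing: high affinity attracts a sample
    to an expert, low affinity repels it; keep only the top-k experts."""
    gates = softmax(affinity, axis=-1)                # (batch, n_experts)
    idx = np.argsort(-gates, axis=-1)[:, :top_k]      # top-k expert ids
    mask = np.zeros_like(gates)
    np.put_along_axis(mask, idx, 1.0, axis=-1)
    gates = gates * mask
    return gates / gates.sum(axis=-1, keepdims=True)  # renormalize

def reliability_weights(expert_outputs, prototypes):
    """Combine angular proximity to sentiment prototypes (cosine) with a
    magnitude term (L2 norm) into per-expert reliability weights."""
    # expert_outputs: (batch, n_experts, dim); prototypes: (n_classes, dim)
    normed = expert_outputs / np.linalg.norm(expert_outputs, axis=-1, keepdims=True)
    proto_n = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
    cos = normed @ proto_n.T                          # (batch, n_experts, n_classes)
    proximity = cos.max(axis=-1)                      # nearest-prototype cosine
    magnitude = np.linalg.norm(expert_outputs, axis=-1)
    return softmax(proximity * magnitude, axis=-1)    # (batch, n_experts)

rng = np.random.default_rng(0)
batch, dim, n_experts = 4, 16, len(MODALITY_COMBOS)
affinity = rng.normal(size=(batch, n_experts))
gates = affinity_route(affinity, top_k=2)             # ACMA-like dispatch

outputs = rng.normal(size=(batch, n_experts, dim))    # post-fusion expert outputs
prototypes = rng.normal(size=(3, dim))                # 3 sentiment classes
weights = reliability_weights(outputs, prototypes)    # RAES-like selection
fused = (weights[..., None] * outputs).sum(axis=1)    # (batch, dim) for prediction
```

The sketch separates the two stages exactly as the abstract does: a dispatch step that picks modality-combination experts per sample, then a reliability step that reweights the fused expert outputs before classification.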