Radial basis function (RBF) kernel support vector machines (SVMs) are powerful nonlinear classifiers, but their privacy-preserving realization remains challenging due to the high cost of secure training and the limited scalability of secure inference. In this work, we propose ESP-SVM, an efficient and scalable privacy-preserving framework for RBF-kernel SVMs. For the training phase, we develop a protocol based on secure multi-party computation (MPC) that revisits PEGASOS-style optimization and removes its dominant bottleneck, secure multiplication, through an indicator-based reformulation. By expressing the core batch aggregation with binary indicators, the protocol reduces the aggregation to MUX-based dot products and eliminates generic secure multiplication from the training loop. On UCI datasets with sample sizes ranging from 256 to 4096, ESP-SVM achieves an average 27.95× training speedup and a 1.64× reduction in communication compared with a prior hybrid training framework combining homomorphic encryption (HE) and MPC. For the inference phase, we design an adaptive secure protocol built on an HE-based prediction paradigm: it first performs efficient approximate inference and selectively invokes exact refinement only when the returned score indicates insufficient confidence. To further limit model exposure, the protocol discloses only partial random Fourier feature information and returns masked decision-related outputs instead of raw scores.
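To make the indicator-based reformulation concrete, the following is a minimal plaintext sketch of one PEGASOS-style mini-batch step. It is not the secure protocol itself: function and variable names (`pegasos_step`, `lam`, `b`) are illustrative, and in ESP-SVM the indicator bits and the select-then-sum aggregation would be evaluated obliviously (the point being that multiplying by a bit is a MUX, not a generic secure multiplication).

```python
import numpy as np

def pegasos_step(w, X, y, lam, t):
    """One PEGASOS mini-batch step (plaintext analogue).

    The violation set is encoded with binary indicators
    b_i = 1[y_i <w, x_i> < 1], so the batch aggregation becomes
    sum_i b_i * y_i * x_i: a select-then-sum (MUX-style) operation
    rather than a product of two general secret values.
    """
    eta = 1.0 / (lam * t)                  # standard PEGASOS step size
    margins = y * (X @ w)                  # y_i <w, x_i> for the batch
    b = (margins < 1.0).astype(X.dtype)    # binary violation indicators
    # MUX view: row i contributes y_i * x_i if b_i == 1, else the zero vector.
    agg = (b * y) @ X                      # sum_i b_i * y_i * x_i
    return (1.0 - eta * lam) * w + (eta / len(y)) * agg
```

Because `b` is a bit vector, each term of the aggregation is a selection between `y_i * x_i` and zero, which is exactly the pattern a MUX-based dot product can evaluate without invoking a generic secure multiplication.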
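The adaptive inference idea can likewise be sketched in plaintext. The code below shows the standard random Fourier feature (RFF) map z(x) = sqrt(2/D)·cos(Wx + b), whose inner products approximate the RBF kernel exp(-γ‖x - x′‖²), and a confidence gate that falls back to the exact kernel expansion only when the approximate score is small. All names (`adaptive_predict`, the threshold `tau`) are assumptions for illustration; in the actual protocol the scores would be computed under HE and returned in masked form.

```python
import numpy as np

def rbf_kernel(X, Z, gamma):
    # Exact RBF kernel matrix: K[i, j] = exp(-gamma * ||X_i - Z_j||^2).
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rff_features(X, W, b):
    # z(x) = sqrt(2/D) * cos(W x + b); <z(x), z(x')> ~= exp(-gamma ||x-x'||^2)
    # when the rows of W are drawn from N(0, 2*gamma*I) and b from U[0, 2*pi).
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

def adaptive_predict(x, sv, alpha_y, bias, gamma, W, b, tau):
    # Cheap approximate score from the linearized RFF model.
    w_rff = (rff_features(sv, W, b) * alpha_y[:, None]).sum(0)
    approx = rff_features(x[None, :], W, b) @ w_rff + bias
    if abs(approx[0]) >= tau:          # confident: accept approximate label
        return np.sign(approx[0])
    # Low confidence: refine with the exact kernel expansion.
    exact = (alpha_y * rbf_kernel(x[None, :], sv, gamma)[0]).sum() + bias
    return np.sign(exact)
```

Gating on |score| ≥ tau means most queries pay only the cheap linear RFF evaluation, while borderline inputs trigger the exact (and more expensive) refinement, matching the two-stage structure described above.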