Nonconvex optimization has become increasingly important in machine learning and artificial intelligence. In this talk we study a class of structured problems in which the objectives are nonsmooth and nonconvex. Applications include supervised learning tasks with nonconvex losses or nonconvex regularizers. For solving such large-scale problems, we propose new stochastic methods, including a stochastic gradient descent algorithm and a coordinate descent algorithm. The first updates all coordinates using subsampled data, while the second updates only a few coordinates in each iteration. Our analysis establishes theoretical convergence to critical points under mild conditions. Experiments further demonstrate the empirical advantage of the proposed methods over existing deterministic methods.
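To make the contrast between the two update schemes concrete, here is a minimal sketch in NumPy of a proximal stochastic gradient method (all coordinates updated from one subsampled data point) versus a stochastic coordinate method (a single coordinate updated per iteration). It is not the exact algorithms from the talk; the toy objective (a sigmoid-of-square nonconvex loss plus an L1 term), the step sizes, and the function names are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_sgd(grad_i, x0, n, step=0.01, lam=0.1, iters=2000, seed=0):
    """Proximal stochastic gradient: sample one data index, take a full
    gradient step on the smooth part, then apply the prox of the L1 term."""
    rng, x = np.random.default_rng(seed), x0.copy()
    for _ in range(iters):
        i = rng.integers(n)
        x = soft_threshold(x - step * grad_i(x, i), step * lam)
    return x

def prox_scd(grad_i, x0, n, step=0.01, lam=0.1, iters=2000, seed=0):
    """Stochastic coordinate descent: sample one data index and one
    coordinate, and update only that coordinate."""
    rng, x = np.random.default_rng(seed), x0.copy()
    d = x.size
    for _ in range(iters):
        i, j = rng.integers(n), rng.integers(d)
        # Sample gradient computed in full here for simplicity;
        # only coordinate j is actually updated.
        g = grad_i(x, i)[j]
        x[j] = soft_threshold(x[j] - step * g, step * lam)
    return x

# Toy problem: nonconvex loss ell(r) = sigmoid(r^2) on residuals r = a_i.x - b_i,
# regularized by an L1 penalty (nonsmooth).
rng = np.random.default_rng(1)
n, d = 200, 10
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)

def grad_i(x, i):
    r = A[i] @ x - b[i]
    s = 1.0 / (1.0 + np.exp(-r * r))        # sigmoid(r^2)
    return 2.0 * r * s * (1.0 - s) * A[i]   # chain rule: d ell / dx

x_sgd = prox_sgd(grad_i, np.zeros(d), n)
x_scd = prox_scd(grad_i, np.zeros(d), n)
print("prox-SGD solution:", np.round(x_sgd, 3))
print("prox-SCD solution:", np.round(x_scd, 3))
```

The sketch only illustrates the structural difference between the two updates: both use a single subsampled data point per iteration, but the coordinate method touches one entry of the iterate at a time.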