In this talk, we consider a general inertial proximal gradient method with constant and variable stepsizes for a class of nonconvex nonsmooth optimization problems. The proposed method incorporates two different extrapolations with respect to the previous iterates into the backward proximal step and the forward gradient step of the classical proximal gradient method. Under more general parameter constraints than in earlier work, we prove that the proposed method generates a convergent subsequence and that every limit point is a stationary point of the problem. Furthermore, the generated sequence is globally convergent to a stationary point if the objective function satisfies the Kurdyka-Łojasiewicz property. Local linear convergence can also be established for the proposed method with constant stepsizes under a common error bound condition. In addition, we report numerical experiments on nonconvex quadratic programming and SCAD penalty problems that demonstrate the advantages of the proposed method.
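To make the two-extrapolation structure concrete, below is a minimal sketch of one standard instance of such an inertial proximal gradient update for min_x f(x) + g(x), with f smooth (here a possibly indefinite quadratic) and g nonsmooth (here the l1 norm, whose proximal operator is soft-thresholding). The specific update rule, the function names, and the parameter values alpha, beta, and lam are illustrative assumptions, not necessarily the exact scheme or test problems of the talk.

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_prox_grad(grad_f, prox_g, x0, lam, alpha, beta, iters=500):
    """Inertial proximal gradient with two extrapolations (illustrative scheme):
    y_k feeds the forward gradient step, z_k feeds the backward proximal step."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        d = x - x_prev               # inertial direction from the previous iterate
        y = x + alpha * d            # extrapolation for the forward gradient step
        z = x + beta * d             # extrapolation for the backward proximal step
        x_prev, x = x, prox_g(z - lam * grad_f(y), lam)
    return x

# Toy example (hypothetical data): nonconvex quadratic f(x) = 0.5*x'Ax - b'x
# with A symmetric and possibly indefinite, regularized by g = ||.||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
A = 0.5 * (A + A.T)
b = rng.standard_normal(20)
x_hat = inertial_prox_grad(lambda x: A @ x - b, prox_l1,
                           np.zeros(20), lam=0.01, alpha=0.3, beta=0.3)
```

With alpha = beta the scheme collapses to a single extrapolation point; keeping the two parameters distinct is what allows the gradient and proximal steps to be accelerated independently, which is the feature the abstract highlights.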