Gradient descent is a powerful first-order algorithm for convex optimization problems, with extensive applications in machine learning, image processing, data analysis and $D$-optimal design. However, it usually requires convexity and a Lipschitz continuous gradient of the objective, which greatly limits its applicability. Moreover, the \L{}ojasiewicz-Polyak inequality, which does not require strong convexity, is an important condition for establishing linear convergence and error bounds for gradient descent. In this paper, we introduce an $L$-convexity condition, a Bregman-\L{}ojasiewicz-Polyak condition, a lower control property and a coordination function. The Bregman-\L{}ojasiewicz-Polyak condition is a natural generalization of the \L{}ojasiewicz-Polyak condition to the Bregman distance framework. Furthermore, a generalized gradient descent method (GGDM) is proposed for minimizing smooth optimization problems without requiring a Lipschitz continuous gradient of the objective. The linear convergence of the proposed method is established under the $L$-convexity and Bregman-\L{}ojasiewicz-Polyak conditions. Finally, several special cases are presented to illustrate the linear convergence of GGDM.
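For orientation, a minimal sketch of the two standard objects this generalization builds on, stated for a differentiable $f$ with minimum value $f^*$ and a strictly convex kernel $h$ (the symbols $\mu$, $f^*$, $h$ and $D_h$ are illustrative placeholders and are made precise in the paper): the classical \L{}ojasiewicz-Polyak inequality and the Bregman distance read
\begin{align*}
  \tfrac{1}{2}\,\bigl\|\nabla f(x)\bigr\|^2 &\ge \mu\,\bigl(f(x) - f^*\bigr), \\
  D_h(x, y) &= h(x) - h(y) - \bigl\langle \nabla h(y),\, x - y \bigr\rangle .
\end{align*}
The Bregman-\L{}ojasiewicz-Polyak condition of this paper replaces the Euclidean quantities above with their Bregman counterparts; its exact formulation is given in the body of the paper.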