First-order methods for centralized and distributed nonsmooth convex optimization have become increasingly popular due to their applications in multi-agent system coordination, large-scale networks, and machine learning. We provide a novel Lyapunov-function-based convergence analysis of Nesterov's quasi-monotone subgradient algorithm, which simplifies the original analysis based on estimate sequences. Moreover, we present a distributed extension of the quasi-monotone subgradient algorithm and establish its convergence. The effectiveness of the proposed algorithm is illustrated by a numerical example.
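For context, the display below is a minimal sketch of a dual-averaging-type subgradient step of the kind underlying quasi-monotone schemes; the objective $f$, feasible set $X$, weights $a_i$, scaling parameters $\beta_k$, and prox-function $d$ are illustrative placeholders, not the specific quantities analyzed in this paper.
\[
  % g_i is a subgradient of f at x_i; the next iterate minimizes the
  % accumulated linear model of f plus a proximal regularization term.
  g_i \in \partial f(x_i), \qquad
  x_{k+1} = \arg\min_{x \in X} \Big\{ \sum_{i=0}^{k} a_i \langle g_i, x \rangle + \beta_{k+1}\, d(x) \Big\}.
\]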