The lack of direct interaction among agents, security demands, and computational complexity often make traditional centralized multi-agent reinforcement learning (MARL) algorithms impractical in complicated applications. Several decentralized MARL algorithms have therefore been proposed. In existing methods, however, only part of the learning procedure or of the interaction scenario is decentralized. In this paper, we propose a general fully decentralized MARL framework that can flexibly accommodate any actor-critic and parameter-sharing method. A primal-dual framework is designed to learn each agent separately: from the perspective of an individual agent, policy estimation and value evaluation are jointly optimized. Moreover, the framework can handle cooperative, competitive, and mixed settings. Our experiments compare the proposed decentralized MARL algorithms with conventional centralized and decentralized algorithms, and the results demonstrate that our framework achieves significantly better performance.
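To make the per-agent view concrete, the sketch below shows a generic decentralized actor-critic agent that jointly maintains a policy (actor) and a value estimate (critic) from purely local information. This is a minimal illustration under assumed conditions (a tabular setting, softmax policy, one-step TD targets); the class and parameter names are hypothetical, and the paper's actual primal-dual optimization is not reproduced here.

```python
# Minimal sketch of one fully decentralized actor-critic agent.
# Assumptions for illustration: tabular states/actions, softmax policy,
# one-step TD evaluation. Not the paper's primal-dual algorithm.
import numpy as np

class AgentAC:
    """One agent holding its own actor (policy) and critic (value)."""

    def __init__(self, n_states, n_actions,
                 lr_actor=0.05, lr_critic=0.1, gamma=0.95):
        self.theta = np.zeros((n_states, n_actions))  # policy logits
        self.v = np.zeros(n_states)                   # local value estimates
        self.lr_a, self.lr_c, self.gamma = lr_actor, lr_critic, gamma

    def policy(self, s):
        # Numerically stable softmax over this agent's own logits.
        z = self.theta[s] - self.theta[s].max()
        p = np.exp(z)
        return p / p.sum()

    def act(self, s, rng):
        return rng.choice(self.theta.shape[1], p=self.policy(s))

    def update(self, s, a, r, s_next, done):
        # Critic: one-step TD evaluation using only local observations.
        target = r + (0.0 if done else self.gamma * self.v[s_next])
        td_err = target - self.v[s]
        self.v[s] += self.lr_c * td_err
        # Actor: policy-gradient step with the local TD error as advantage,
        # so policy estimation and value evaluation advance together.
        grad_log = -self.policy(s)   # gradient of log-softmax w.r.t. logits
        grad_log[a] += 1.0
        self.theta[s] += self.lr_a * td_err * grad_log
```

In this sketch each agent owns its parameters outright and no central critic observes joint state or actions, which is the sense in which training is "fully decentralized"; a faithful implementation of the paper's method would replace the alternating actor/critic steps with the coupled primal-dual updates it derives.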