Recently, researchers from seven universities, including The Chinese University of Hong Kong and Tsinghua University, jointly published a paper. By imposing different constraints on how frequently GPT-4 may use code, they found that more frequent code usage breaks the reasoning process into smaller, verifiable steps and thereby improves accuracy. Building on this finding, the team proposed the CSV (code-based self-verification) method, which leverages GPT-4's strong code generation and evaluation capabilities to self-verify and correct its own solutions. Experimental results show that CSV significantly raises GPT-4's accuracy on the MATH dataset, from 54.9% to 84.3%. This research provides valuable insights for further enhancing the mathematical reasoning capabilities of large models.
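
To make the idea of code-based self-verification more concrete, here is a minimal sketch of what such a solve-then-verify loop could look like in practice. The `query_model` function and the prompt wording are illustrative assumptions for this sketch, not the authors' actual implementation or prompts.

```python
# Minimal sketch of a code-based self-verification (CSV) style loop.
# `query_model` is a placeholder for any call to a code-enabled LLM;
# the prompts below are illustrative, not the paper's exact wording.

def query_model(prompt: str) -> str:
    """Placeholder: send `prompt` to a code-enabled model and return its reply."""
    raise NotImplementedError

def solve_with_csv(problem: str, max_rounds: int = 3) -> str:
    # Step 1: ask the model to solve the problem, writing and executing code
    # for each intermediate step instead of reasoning in prose alone.
    solution = query_model(
        "Solve the following math problem. Write and execute code for each "
        f"step of your reasoning, then state the final answer.\n\n{problem}"
    )

    for _ in range(max_rounds):
        # Step 2: ask the model to verify its own answer with code, e.g. by
        # substituting the answer back into the original conditions.
        verdict = query_model(
            "Use code to check whether the final answer below satisfies the "
            "original problem. Reply 'True' if it does; otherwise reply "
            "'False' followed by a corrected solution.\n\n"
            f"Problem: {problem}\n\nSolution: {solution}"
        )
        if verdict.strip().startswith("True"):
            break  # Verification passed; keep the current solution.
        solution = verdict  # Otherwise adopt the corrected solution and re-verify.

    return solution
```

In this sketch, the verification step is itself carried out with code rather than natural-language reflection, which is the core idea the paper attributes the accuracy gains to.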