A research team from Harbin Institute of Technology and Huawei has released a comprehensive 50-page survey examining why general-purpose LLMs hallucinate in specialized domains. The survey attributes these hallucinations primarily to training on broad public corpora that lack domain-specific knowledge. The researchers call for higher-quality training data and stronger mechanisms for learning and recalling factual knowledge to mitigate hallucination in professional fields.