LRV-Instruction
[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Created: 2023-06-15T14:31:41
Updated: 2025-03-07T20:09:13
https://fuxiaoliu.github.io/LRV/
Stars: 284