VulkanIlm
GPU-accelerated LLaMA inference wrapper for legacy Vulkan-capable systems: a Pythonic way to run AI with knowledge (Ilm) on fire (Vulkan).
Created: 2025-07-07T00:32:58
Updated: 2025-07-07T00:39:16
Stars: 1
Stars increase: 1
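The description suggests a simple Pythonic entry point for running LLaMA-family models with GPU offload on Vulkan-capable hardware. Since VulkanIlm's actual classes and parameters are not shown in this listing, the sketch below only illustrates that style of usage with llama-cpp-python, a llama.cpp binding whose backend can be built with Vulkan support; every identifier and path here is a stand-in, not VulkanIlm's real API.

```python
# Hypothetical usage sketch; VulkanIlm's real API is not documented in this listing.
# llama-cpp-python's backend can be compiled against Vulkan
# (e.g. CMAKE_ARGS="-DGGML_VULKAN=on" pip install llama-cpp-python).
from llama_cpp import Llama

# Load a GGUF model and offload layers to the GPU (-1 = offload as many as possible).
llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # illustrative path
    n_gpu_layers=-1,
    n_ctx=4096,
)

# Run a single completion and print the generated text.
result = llm("Explain the Vulkan API in one sentence.", max_tokens=64)
print(result["choices"][0]["text"])
```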