
LLM-Load-Unload-Ollama


This is a simple demonstration of how to keep an LLM loaded in memory for a prolonged time, or how to unload the model immediately after inference, when using it via Ollama.
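The behavior described above corresponds to Ollama's `keep_alive` request parameter: a value of `-1` keeps the model resident in memory indefinitely, while `0` unloads it right after the response is returned (the default is five minutes). The sketch below is not the repository's actual code; the model name, prompt, and helper function are illustrative, assuming a default local Ollama server.

```python
# Minimal sketch: controlling model residency via Ollama's keep_alive parameter.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint


def generate(prompt: str, keep_alive) -> str:
    """Send a prompt to Ollama; keep_alive controls how long the model stays loaded."""
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": "llama3",       # placeholder model name
            "prompt": prompt,
            "stream": False,
            "keep_alive": keep_alive,
        },
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["response"]


# Keep the model loaded in memory indefinitely after this call:
print(generate("Why is the sky blue?", keep_alive=-1))

# Unload the model immediately after this call completes:
print(generate("Why is the sky blue?", keep_alive=0))
```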

Created: 2024-05-04T11:24:49
Updated: 2024-12-12T06:35:46
Stars: 13
Stars increase: 0

Related Projects