AIbase/LLM-Load-Unload-Ollama (Public)

This is a simple demonstration showing how, when using an LLM via Ollama, to either keep the model loaded in memory for a prolonged time or unload it immediately after inference.
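Ollama exposes this behavior through the `keep_alive` field of its `/api/generate` endpoint: `-1` keeps the model in memory indefinitely, `0` unloads it as soon as the response is returned, and a duration string such as `"5m"` (the default) unloads it after that much idle time. Below is a minimal sketch using only the Python standard library; it assumes an Ollama server running at the default `http://localhost:11434` and a locally pulled model name (here `llama3`, an assumption), and is not this repository's exact script.

```python
import json
import urllib.request

# Default local Ollama endpoint (assumption: server runs on the standard port).
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str, keep_alive) -> dict:
    """Build a request body for Ollama's /api/generate endpoint.

    keep_alive controls how long the model stays loaded after the call:
      -1   -> keep the model in memory indefinitely
      0    -> unload the model immediately after responding
      "5m" -> (default) unload after five minutes of inactivity
    """
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,       # return one JSON object instead of a stream
        "keep_alive": keep_alive,
    }


def generate(model: str, prompt: str, keep_alive="5m") -> str:
    """Send the prompt to a running Ollama server and return the response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt, keep_alive)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Keep the model resident between calls:
    print(generate("llama3", "Why is the sky blue?", keep_alive=-1))
    # Unload it right after this call finishes:
    print(generate("llama3", "Goodbye.", keep_alive=0))
```

Sending a request with an empty prompt and the desired `keep_alive` value is also a common way to pre-load or force-unload a model without generating any text.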

Created: 2024-05-04T11:24:49
Updated: 2024-12-12T06:35:46
Stars: 13 (increase: 0)