llm-web-relay
A FastAPI gateway for local LLMs that adds intelligent web research, multilingual recency/how-to detection, time-anchored guidance, context injection, and OpenAI-compatible SSE streaming. Turn any local model into a recency-aware, context-enhanced assistant instantly.
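To illustrate what "OpenAI-compatible SSE streaming" means in practice, here is a minimal, stdlib-only sketch of the `chat.completion.chunk` event format such a gateway would emit. Function names are illustrative, not taken from this project; the real relay would serve this generator through FastAPI's `StreamingResponse` with `media_type="text/event-stream"`.

```python
import json

def sse_chunk(delta: str, model: str = "local-model", finish=None) -> str:
    # One server-sent event carrying an OpenAI-style chat.completion.chunk.
    # Field names follow the OpenAI streaming schema; the gateway's actual
    # payloads may include more fields (id, created, usage, ...).
    payload = {
        "object": "chat.completion.chunk",
        "model": model,
        "choices": [{
            "index": 0,
            "delta": {"content": delta} if finish is None else {},
            "finish_reason": finish,
        }],
    }
    return f"data: {json.dumps(payload)}\n\n"

def stream_tokens(tokens, model: str = "local-model"):
    # Yield one SSE event per generated token, then the final chunk with
    # finish_reason="stop", then the [DONE] sentinel OpenAI clients expect.
    for tok in tokens:
        yield sse_chunk(tok, model)
    yield sse_chunk("", model, finish="stop")
    yield "data: [DONE]\n\n"
```

Because each event is plain text in the same shape the OpenAI API streams, existing OpenAI client libraries can consume the gateway's output unchanged.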