Run a Local LLM with GPU Acceleration: Deploy Ollama + Open WebUI on Ubuntu via Docker

November 06, 2025

Tags: amd, docker, gpu acceleration, local llm, nvidia, ollama, open webui, self-hosted ai, ubuntu
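As a quick preview of the deployment the title describes, here is a minimal sketch using the official `ollama/ollama` and `ghcr.io/open-webui/open-webui` images. It assumes an Ubuntu host with Docker and the NVIDIA Container Toolkit already installed (the `--gpus=all` flag requires that toolkit); port numbers and volume names follow the projects' documented defaults but can be changed.

```shell
# Start Ollama with GPU access; model data persists in the "ollama" volume.
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama

# Start Open WebUI on port 3000, reaching Ollama on the host via host-gateway.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

After both containers are up, Open WebUI is reachable at http://localhost:3000 and can pull models (e.g. `docker exec -it ollama ollama pull llama3`) through the Ollama container.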