The goal of {ollama} is to wrap the ollama
API and provide the infrastructure needed to use it within {gptstudio}
Installation
You can install the development version of ollama like so:
pak::pak("calderonsamuel/ollama")
Prerequisites
The user is responsible for installing ollama and configuring networking. We recommend using the official Docker image, which greatly simplifies this process.
The following command downloads the official ollama image and starts an “ollama” container, exposing port 11434.
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
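Once the container is running, you can check that the server is reachable before using this package. The following is a minimal sketch using {httr2} (not part of this package's API); a running ollama server replies to its root endpoint with a short status message:

```r
library(httr2)

# Query the root endpoint of the local ollama server.
# A running server replies with the plain-text message "Ollama is running".
resp <- request("http://localhost:11434") |>
  req_perform()

resp_body_string(resp)
```

Note that a fresh container has no models downloaded; you can pull one with, for example, `docker exec -it ollama ollama pull llama2` (the model name here is only an example).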
By default, this package will use http://localhost:11434
as the API host URL. Although we provide methods to change this, only do so if you are absolutely sure of what it implies.
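To illustrate what the host URL is used for, here is a sketch of a direct request to the ollama generation endpoint using {httr2}. This bypasses the package entirely; the endpoint and fields follow the upstream ollama REST API, and the model name is an assumption (it must already be pulled):

```r
library(httr2)

# The default host URL used by this package.
host <- "http://localhost:11434"

# POST a prompt to ollama's /api/generate endpoint.
# "llama2" is only an example model and must already be pulled.
resp <- request(host) |>
  req_url_path("/api/generate") |>
  req_body_json(list(
    model  = "llama2",
    prompt = "Why is the sky blue?",
    stream = FALSE
  )) |>
  req_perform()

resp_body_json(resp)$response
```

If you change the host, every request the package makes is sent to that base URL instead, so an unreachable or incorrect value will break all functionality.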