Run Large Language Models locally with LM Studio

LM Studio is a fantastic tool for running models locally. LM Studio supports multiple open-source models; see the library here. We recommend experimenting to find the model best suited to your use case. Here are some general recommendations:
  • llama3.3 models are good for most basic use-cases.
  • qwen models perform specifically well with tool use.
  • deepseek-r1 models have strong reasoning capabilities.
  • phi4 models are powerful, while being really small in size.

Set up a model

Install LM Studio, download the model you want to use, and run it.
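
LM Studio also ships an `lms` command-line tool that can drive the same workflow from a terminal. A hedged sketch of the setup flow (exact command names and the model identifier are illustrative and may vary by LM Studio version):

```shell
# Download a model from the LM Studio library (model name is illustrative)
lms get qwen2.5-7b-instruct-1m

# Load the model and start the local server (defaults to port 1234)
lms load qwen2.5-7b-instruct-1m
lms server start

# List downloaded models to confirm the setup
lms ls
```

These commands depend on a local LM Studio installation, so they are shown here only as a sketch of the flow.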

Example

After you have the model locally, use the LMStudio model class to access it:
from agno.agent import Agent
from agno.models.lmstudio import LMStudio

agent = Agent(
    model=LMStudio(id="qwen2.5-7b-instruct-1m"),
    markdown=True
)

# Print the response in the terminal
agent.print_response("Share a 2 sentence horror story.")
View more examples here.

Params

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `id` | `str` | `"lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF"` | The id of the LM Studio model to use. |
| `name` | `str` | `"LMStudio"` | The name of the model. |
| `provider` | `str` | `"LMStudio"` | The provider of the model. |
| `api_key` | `Optional[str]` | `None` | The API key for LM Studio (usually not needed for a local server). |
| `base_url` | `str` | `"http://localhost:1234/v1"` | The base URL for the local LM Studio server. |
LM Studio also supports the params of OpenAI.
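
Because the local server at `base_url` speaks the OpenAI chat-completions protocol, you can also construct requests against it directly. A minimal sketch of such a payload, using only the standard library (the model id and prompt are illustrative; the server is assumed to run at the default address):

```python
import json

# Default server address, matching the base_url parameter above.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(model_id: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload for the local server."""
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

payload = build_chat_request("qwen2.5-7b-instruct-1m",
                             "Share a 2 sentence horror story.")
body = json.dumps(payload)
```

POST `body` to `f"{BASE_URL}/chat/completions"` with any HTTP client while the LM Studio server is running.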