Using Microsoft Agent Framework with Podman

October 8, 2025 · Microsoft Agent Framework, Podman, Ollama, Azure OpenAI, LLM, DotNet, AI Development


I tried the Microsoft Agent Framework with Podman on Windows 11 and ran into a few hiccups. Below are the practical steps and tips that got me to a working local setup.

Azure AI Foundry example

The Azure AI Foundry example worked out of the box. I used an existing Azure OpenAI resource, gathered the endpoint and deployment name, and authenticated with the Azure CLI credential.
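
For reference, the endpoint and deployment name can be looked up with the Azure CLI before wiring up AzureCliCredential; the resource and resource group names below are placeholders for your own:

az login
az cognitiveservices account show --name <your-openai-resource> --resource-group <your-rg> --query properties.endpoint --output tsv
az cognitiveservices account deployment list --name <your-openai-resource> --resource-group <your-rg> --query "[].name" --output tsv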

Local LLM with Podman

Running a local model with Podman was more challenging. I started with Podman AI Lab, downloaded a model, and created the model service, but the call hung. After some searching, I switched to running Ollama directly in a container and pulling a model with these Podman commands:

podman run -d --name ollama --replace --restart=always -p 11434:11434 -v ollama:/root/.ollama docker.io/ollama/ollama
podman exec -it ollama ollama pull llama3.2:3b
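
Before going back to .NET, it's worth checking that the mapped port actually answers; Ollama's /api/tags endpoint lists the pulled models, so these quick checks catch most mapping problems:

podman ps
podman exec -it ollama ollama list
curl http://localhost:11434/api/tags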

Once the container and model were running, the local endpoint responded and I was able to use it from .NET.

Example code

A minimal C# example that probes the local Ollama endpoint, requests a response, and shows how you might integrate an agent:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;
using OllamaSharp;

static async Task AzureAIChat()
{
    var endpoint = "https://{your}.openai.azure.com/";
    var deploymentName = "gpt-4.1-mini";

    const string JokerName = "Joker Dad";
    const string JokerInstructions = "You are good at telling jokes.";

    var agent = new AzureOpenAIClient(new Uri(endpoint), new AzureCliCredential())
        .GetChatClient(deploymentName)
        .CreateAIAgent(JokerInstructions, JokerName);

    await CallTheAgent(agent);
}

static async Task LocalLlmChat()
{
    var baseUri = new Uri("http://localhost:11434/");
    Console.WriteLine($"Probing {baseUri} ...");

    using var probeClient = new HttpClient { Timeout = TimeSpan.FromSeconds(5) };
    try
    {
        var probeResp = await probeClient.GetAsync(baseUri);
        Console.WriteLine($"Probe response: {(int)probeResp.StatusCode} {probeResp.ReasonPhrase}");
    }
    catch (Exception ex)
    {
        Console.WriteLine($"HTTP probe failed: {ex.GetType().Name}: {ex.Message}");
        Console.WriteLine("Check podman port mapping and container logs (podman ps, podman port, podman logs).");
        return;
    }

    using var chatClient = new OllamaApiClient(baseUri, "llama3.2:3b");
    var response = await chatClient.GetResponseAsync("Tell me a joke.")
        .WaitAsync(TimeSpan.FromSeconds(300));

    Console.WriteLine(response);
}

static async Task CallTheAgent(AIAgent agent)
{
    await foreach (var update in agent.RunStreamingAsync("Tell me a joke."))
    {
        Console.WriteLine(update);
    }
}

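// Call AzureAIChat() here instead to exercise the Azure OpenAI path.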
await LocalLlmChat();
Console.ReadLine();
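
The local path above only prints a raw response. To drive it through the same agent abstraction as the Azure path, one option is to wrap the OllamaSharp client (it implements IChatClient) in an agent. This is a sketch assuming the framework's ChatClientAgent wrapper accepts an arbitrary IChatClient; I haven't run this variant against Podman:

static async Task LocalAgentChat()
{
    // OllamaApiClient implements IChatClient, so it can back an agent directly.
    using var chatClient = new OllamaApiClient(new Uri("http://localhost:11434/"), "llama3.2:3b");

    // Assumption: ChatClientAgent takes an IChatClient plus instructions/name.
    AIAgent agent = new ChatClientAgent(chatClient, instructions: "You are good at telling jokes.", name: "Joker Dad");

    await CallTheAgent(agent);
}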

Notes:

  • Keep the probe step to surface port/mapping issues early.
  • Increase timeouts when calling model endpoints locally; one way to do that is sketched after this list.
  • Replace placeholders for Azure endpoint/deployment when using Azure hosts.
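
On the timeout note: instead of relying on WaitAsync per call, OllamaSharp can also be handed an HttpClient whose timeout you control. A sketch, assuming the HttpClient-based constructor in the OllamaSharp version you're using:

// Builds an Ollama client with a longer HTTP timeout (assumes OllamaSharp's HttpClient constructor).
static OllamaApiClient CreateLocalClient()
{
    var http = new HttpClient
    {
        BaseAddress = new Uri("http://localhost:11434/"),
        Timeout = TimeSpan.FromMinutes(5)
    };
    return new OllamaApiClient(http, "llama3.2:3b");
}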

Conclusion

The Microsoft Agent Framework integrates with both Azure-hosted OpenAI and local LLMs running in Podman. The key steps are to confirm the container is reachable, pull a compatible model, and then connect from your client. There’s more to explore, but this got a simple Podman-based local flow working.