Modern systems generate huge volumes of logs.

Application logs, server logs, and infrastructure logs often contain the first clues when something breaks. The problem is not a lack of data, but the effort required to read and understand it.

Engineers usually scroll through thousands of lines, search for error codes, and try to connect events across time. This is slow and error-prone, especially during incidents.

A LogAnalyzer Agent solves this problem by acting like a calm, experienced engineer who reads logs for you and explains what is going on.

In this article, you’ll learn how to build such an agent using FastAPI, LangChain, and an OpenAI model.

We’ll walk through the backend, the log analysis logic, and a simple web UI that lets you upload a log file and get insights in seconds. We’ll also deploy this app to Sevalla so that you can share your project with the world.

You just need some basic knowledge of Python and HTML/CSS/JavaScript to finish this tutorial.

Here is the full code for reference.

What We’ll Cover

- What a LogAnalyzer Agent Actually Does
- High-Level Architecture
- Designing a Prompt That Works
- Handling Large Log Files Safely
- Analyzing Logs with LangChain and OpenAI
- Building the FastAPI Backend
- Creating a Simple and Clean Web UI
- Running the Application Locally
- Deployment to Sevalla
- Conclusion

What a LogAnalyzer Agent Actually Does

A LogAnalyzer Agent takes raw log text as input and produces human-friendly analysis as output.

Instead of returning a list of errors, it explains the main failures, the likely root cause, and what to do next. This is important because logs are written for machines, not for people under pressure.

In this project, the agent behaves like a senior site reliability engineer. It reads logs in chunks, identifies patterns, and summarizes them in simple language. The intelligence comes from the language model, while the reliability comes from careful input handling and chunking.

High-Level Architecture

The system has three main parts.

The first part is a web UI built with plain HTML, CSS, and JavaScript. This UI allows a user to upload a text file and start analysis.

The second part is a FastAPI backend that receives the file, validates it, and coordinates the analysis.

The third part is the analysis engine itself, which uses LangChain and an OpenAI model to interpret the logs.

The flow is simple: the browser sends a log file to the backend. The backend reads the file, splits it into manageable pieces, and sends each piece to the language model with a clear prompt. The responses are combined and sent back to the browser as a single analysis.
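
The snippets in the rest of this article assume a small amount of shared setup at the top of the backend file. Here is a minimal sketch of that setup, assuming everything lives in a single main.py (the exact LangChain import paths can vary slightly between versions):

import os

from dotenv import load_dotenv
from fastapi import FastAPI, File, UploadFile
from fastapi.responses import FileResponse, JSONResponse
from langchain_openai import ChatOpenAI
from langchain_text_splitters import RecursiveCharacterTextSplitter

load_dotenv()  # read OPENAI_API_KEY from a local .env file

app = FastAPI()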

Designing a Prompt That Works

The heart of any AI agent is the prompt. A weak prompt gives vague answers, while a strong prompt produces useful insights.

In this project, the prompt tells the model to act like a senior site reliability engineer. It asks for four things: main errors, likely root cause, practical next steps, and suspicious patterns.

Here is the prompt template used in the backend:

log_analysis_prompt_text = """
You are a senior site reliability engineer.
Analyze the following application logs.
1. Identify the main errors or failures.
2. Explain the likely root cause in simple terms.
3. Suggest practical next steps to fix or investigate.
4. Mention any suspicious patterns or repeated issues.
Logs:
{log_data}
Respond in clear paragraphs. Avoid jargon where possible.
"""

This prompt is simple but effective. It gives the model a role, a clear task, and constraints on the output style. Asking for clear paragraphs helps ensure the response is readable and useful for non-experts as well.

Handling Large Log Files Safely

Language models have input limits. You can’t send a large log file in one request and expect good results. To handle this, the backend splits the log text into smaller chunks. Each chunk overlaps slightly with the next to preserve context.

We’ll use the RecursiveCharacterTextSplitter from LangChain for this purpose. It prefers to split on natural boundaries such as line breaks, so chunks aren’t cut in awkward places, and the overlap helps preserve context across chunk edges.

def split_logs(log_text: str):
    """Split log text into manageable chunks"""
    splitter = RecursiveCharacterTextSplitter(
        chunk_size=2000,
        chunk_overlap=200
    )
    return splitter.split_text(log_text)

This approach allows the agent to scale to large files while staying within model limits. Each chunk is analyzed independently, and the results are later combined.
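
As a quick sanity check, you can run the splitter on its own with some fake log text before wiring it into the API; a hypothetical example:

# Hypothetical quick check: split a fake log and inspect the result.
fake_log = "\n".join(f"2024-01-01 12:00:00 INFO request {i} handled OK" for i in range(300))

chunks = split_logs(fake_log)
print(len(chunks), "chunks")
print(chunks[0][:120])  # preview the start of the first chunk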

Analyzing Logs with LangChain and OpenAI

Once the logs are split, each chunk is passed through the language model using the prompt template. The model used here is a lightweight but capable option, configured with a low temperature to keep responses focused and consistent.

llm = ChatOpenAI(
    temperature=0.2,
    model="gpt-4o-mini"
)

The analysis function loops over all chunks, formats the prompt, invokes the model, and stores the result.

def analyze_logs(log_text: str):
    """Analyze logs by splitting and processing each chunk"""
    chunks = split_logs(log_text)
    combined_analysis = []

    for chunk in chunks:
        formatted_prompt = log_analysis_prompt_text.format(log_data=chunk)
        result = llm.invoke(formatted_prompt)
        combined_analysis.append(result.content)
    return "\n\n".join(combined_analysis)

This design keeps the logic easy to understand. Each chunk produces a small analysis, and the final output is a stitched-together explanation of the whole log file.
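
Before wiring this into FastAPI, you can exercise the function from a short script; a hypothetical example that reads a local file:

# Hypothetical local test: analyze a log file without starting the API.
with open("sample_log.txt", "r", encoding="utf-8", errors="ignore") as f:
    report = analyze_logs(f.read())

print(report)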

Building the FastAPI Backend

FastAPI is a good choice for this project because it’s fast, simple, and easy to read. The backend exposes three endpoints. The root endpoint serves the HTML UI. The /analyze endpoint accepts a log file and returns the analysis. And the /health endpoint is used to check if the service is running and properly configured.
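
The root and health endpoints are short. Here is one plausible version of them, assuming the UI lives in an index.html file next to the backend (the file layout here is an assumption, not a requirement):

@app.get("/")
async def serve_ui():
    """Serve the single-page UI (assumes index.html sits next to this file)."""
    return FileResponse("index.html")

@app.get("/health")
async def health_check():
    """Report whether the service is running and the OpenAI key is configured."""
    return {
        "status": "ok",
        "openai_key_configured": bool(os.getenv("OPENAI_API_KEY")),
    }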

The analyze endpoint performs several important checks. It ensures that the file is a text file, verifies that it isn’t empty, and handles errors gracefully. This prevents unnecessary calls to the model and improves user experience.

@app.post("/analyze")
async def analyze_log_file(file: UploadFile = File(...)):
    """Analyze uploaded log file"""
    if not file.filename.endswith(".txt"):
        return JSONResponse(
            status_code=400,
            content={"error": "Only .txt log files are supported"}
        )

    try:
        content = await file.read()
        log_text = content.decode("utf-8", errors="ignore")
        if not log_text.strip():
            return JSONResponse(
                status_code=400,
                content={"error": "Log file is empty"}
            )
        insights = analyze_logs(log_text)
        return {"analysis": insights}
    except Exception as e:
        return JSONResponse(
            status_code=500,
            content={"error": f"Error analyzing logs: {str(e)}"}
        )

This careful handling makes the agent more robust and production-friendly.

Creating a Simple and Clean Web UI

A good agent isn’t useful if people can’t interact with it easily. The frontend in this project is a single HTML file with embedded CSS and JavaScript. It focuses on clarity and speed rather than complexity.

The UI allows users to choose a log file, see the file name, click an analyze button, and view results in a formatted area. A loading spinner provides feedback while the analysis is running. Errors are shown clearly, without technical noise.

The upload and analysis logic is handled by a small JavaScript function that sends the file to the backend using a fetch request.

async function uploadLog() {
    const fileInput = document.getElementById("logFile");
    const file = fileInput.files[0];

    if (!file) {
        alert("Please select a log file first");
        return;
    }
    const formData = new FormData();
    formData.append("file", file);
    const response = await fetch("/analyze", {
        method: "POST",
        body: formData
    });
    const data = await response.json();
    // Show the backend's error message if the request failed, otherwise the analysis.
    document.getElementById("result").textContent =
        response.ok ? data.analysis : data.error;
}

This minimal approach keeps the frontend easy to maintain and adapt.

Log Analyzer UI

Running the Application Locally

To run this project, you need Python, a virtual environment, and an OpenAI API key. The API key is loaded from a .env file to keep secrets out of code. Once dependencies are installed, you can start the server using Uvicorn.

if __name__ == "__main__":
    import uvicorn
    port = int(os.getenv("PORT", 8000))
    uvicorn.run(app, host="0.0.0.0", port=port)

After starting the server, you can open the browser, upload a log file, and see the agent in action.

Deployment to Sevalla

You can choose any cloud provider, like AWS, DigitalOcean, or others, to host your service. I’ll be using Sevalla for this example.

Sevalla is a developer-friendly PaaS provider. It offers application hosting, database hosting, object storage, and static site hosting for your projects.

Most platforms charge you for the cloud resources you create. Sevalla comes with $20 of free credit, so we won’t incur any costs for this example.

Let’s push this project to GitHub so that we can connect our repository to Sevalla. We can also enable auto-deployments so that any new change to the repository is automatically deployed.

Log in to Sevalla and click on Applications → Create new application.

Create Application

You can see the option to link your GitHub repository to create a new application. Use the default settings. Then click Create application.

Application Settings

Now we have to add our OpenAI API key to the environment variables. Click on the Environment variables section once the application is created, and save the OPENAI_API_KEY value as an environment variable.

Environment Variables

Now we’re ready to deploy our application. Click on Deployments and click Deploy now. It will take 2–3 minutes for the deployment to complete.

Once done, click on Visit app. You’ll see the application served via a URL ending with sevalla.app. This is your new root URL. You can replace localhost:8000 with this URL and start using it.

Final UI

Congrats! Your log analyzer service is now live. You can find a sample log in the GitHub repository which you can use to test the service.
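
If you prefer to test from a script rather than the browser, you can post the sample log directly to the /analyze endpoint. Here is a hypothetical smoke test using the requests library (the URL and file name are placeholders for your own):

import requests

# Hypothetical smoke test against the deployed service.
# Replace the URL with your own *.sevalla.app domain and point it at any .txt log file.
url = "https://your-app.sevalla.app/analyze"
with open("sample_log.txt", "rb") as f:
    response = requests.post(url, files={"file": ("sample_log.txt", f, "text/plain")})

print(response.status_code)
data = response.json()
print(data.get("analysis") or data.get("error"))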

You can extend this by adding other capabilities and pushing your code to GitHub. Sevalla will automatically deploy your application to production.

Conclusion

Building a LogAnalyzer Agent is a practical way to apply language models to real engineering problems. Logs are everywhere, and understanding them quickly can save hours during incidents. By combining FastAPI, LangChain, and a clear prompt, you can turn raw text into actionable insight.

The key ideas are simple: split large inputs, guide the model with a strong role and task, and present results in a clean interface. With these principles, you can adapt this agent to many other analysis tasks beyond logs.

Hope you enjoyed this article. Learn more about me by visiting my website.