Conversation

@raisiqueira (Collaborator) commented Dec 12, 2025

New features

  • Adds a new tool that runs Django's check command.
  • Updates the list_models tool to accept parameters: the AI can now pass one or more app labels to filter which models are returned. Partially addresses "Truncated output when listing models" (#9).
  • Upgrades the FastMCP package.

Django Check example output:

[screenshot: CleanShot 2025-12-12 at 10 32 10]

list_models tool filtering by just one app:

[screenshot: CleanShot 2025-12-12 at 12 08 05]
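
For reference, a minimal sketch of exercising both tools' underlying functions directly, in the style of this PR's manual test scripts (the `.fn` access mirrors those scripts; the import path, the `initialize_django` call signature, and the `blog` app label are assumptions):

```python
# Minimal sketch: call the tool functions directly, as the manual test
# scripts in this PR do via the FastMCP `.fn` attribute.
import asyncio

from django_ai_boost import server_fastmcp  # import path assumed


async def demo() -> None:
    # run_check returns categorized messages plus a summary dict.
    checks = await server_fastmcp.run_check.fn(fail_level="WARNING")
    print(checks["success"], checks["summary"])

    # list_models now accepts optional app labels and echoes the applied filter.
    models = await server_fastmcp.list_models.fn(app_labels=["blog"])  # "blog" is an example label
    print(models["total_count"], models["app_filter"])


if __name__ == "__main__":
    server_fastmcp.initialize_django()  # call signature assumed; Django must be set up first
    asyncio.run(demo())
```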

Summary by Sourcery

Add a configurable Django system check MCP tool and enhance model listing to support app-based filtering and better handle large outputs.

New Features:

  • Introduce a run_check MCP tool that exposes Django's system checks with support for app labels, tags, deployment checks, fail levels, and database selection.
  • Extend the list_models MCP tool to accept optional app label filters and return structured metadata including total model count and applied filters.

Enhancements:

  • Adjust internal prompt listing to comply with FastMCP 2.13+ middleware context requirements.
  • Document list_models arguments and provide guidance for avoiding client-side output truncation in common MCP clients.

Build:

  • Bump project version to 0.5.0 and update FastMCP dependency to >=2.13.3.

Tests:

  • Add a dedicated test script for the run_check MCP tool covering various parameter combinations and error handling.
  • Expand list_models testing script to validate unfiltered, single-app, and multi-app filtered model listings and their summaries.

@raisiqueira raisiqueira requested a review from fjsj December 12, 2025 15:13
@raisiqueira raisiqueira self-assigned this Dec 12, 2025
@raisiqueira raisiqueira added the enhancement New feature or request label Dec 12, 2025
@sourcery-ai sourcery-ai bot commented Dec 12, 2025

Reviewer's Guide

Implements a new Django run_check MCP tool, enhances the existing list_models tool with app-label filtering and structured metadata, updates tests and documentation accordingly, and upgrades the FastMCP dependency and related integrations.

Sequence diagram for the new run_check Django system check tool

sequenceDiagram
    actor MCPClient
    participant FastMCPServer
    participant DjangoAIBoostServer
    participant run_check_tool
    participant DjangoChecks

    MCPClient->>FastMCPServer: invoke run_check(app_labels, tags, deploy, fail_level, databases)
    FastMCPServer->>DjangoAIBoostServer: dispatch tool call
    DjangoAIBoostServer->>run_check_tool: run_check(app_labels, tags, deploy, fail_level, databases)
    activate run_check_tool

    run_check_tool->>run_check_tool: validate and normalize parameters
    run_check_tool->>DjangoChecks: sync_to_async(execute_checks)
    activate DjangoChecks

    DjangoChecks->>DjangoChecks: map fail_level string to level constant
    alt app_labels provided
        DjangoChecks->>DjangoChecks: resolve app_configs from app_labels
    else no app_labels
        DjangoChecks->>DjangoChecks: use all app_configs
    end

    DjangoChecks->>DjangoChecks: run_checks(app_configs, tags, deploy, databases)
    DjangoChecks-->>run_check_tool: list of check messages
    deactivate DjangoChecks

    run_check_tool->>run_check_tool: categorize messages by level
    run_check_tool->>run_check_tool: compute summary counts and success flag
    run_check_tool-->>DjangoAIBoostServer: CheckResult dict
    deactivate run_check_tool

    DjangoAIBoostServer-->>FastMCPServer: serialized CheckResult
    FastMCPServer-->>MCPClient: tool response (JSON)

Sequence diagram for the updated list_models tool with app filtering

sequenceDiagram
    actor MCPClient
    participant FastMCPServer
    participant DjangoAIBoostServer
    participant list_models_tool
    participant DjangoApps

    MCPClient->>FastMCPServer: invoke list_models(app_labels)
    FastMCPServer->>DjangoAIBoostServer: dispatch tool call
    DjangoAIBoostServer->>list_models_tool: list_models(app_labels)
    activate list_models_tool

    list_models_tool->>DjangoApps: apps.get_models()
    DjangoApps-->>list_models_tool: all_models

    alt app_labels provided
        list_models_tool->>list_models_tool: filter all_models by model._meta.app_label in app_labels
    else no app_labels
        list_models_tool->>list_models_tool: use all_models without filtering
    end

    loop for each model in filtered models
        list_models_tool->>list_models_tool: extract app_label, model_name, fields, relationships
        list_models_tool->>list_models_tool: append model info to models_info
    end

    list_models_tool->>list_models_tool: build response dict {total_count, app_filter, models}
    list_models_tool-->>DjangoAIBoostServer: ListModelsResponse dict
    deactivate list_models_tool

    DjangoAIBoostServer-->>FastMCPServer: serialized ListModelsResponse
    FastMCPServer-->>MCPClient: tool response (JSON)
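
The filtering and response-building steps in the diagram amount to an app-label comparison over Django's model registry; a minimal sketch, with field extraction omitted and response keys as in the diagram:

```python
from typing import Any

from django.apps import apps


def build_model_listing(app_labels: list[str] | None = None) -> dict[str, Any]:
    """Filter registered models by app label and wrap them in the response dict."""
    all_models = apps.get_models()
    if app_labels:
        selected = [m for m in all_models if m._meta.app_label in app_labels]
    else:
        selected = list(all_models)

    models_info = [
        # Field and relationship extraction omitted in this sketch.
        {"app_label": m._meta.app_label, "model_name": m.__name__}
        for m in selected
    ]
    return {
        "total_count": len(models_info),
        "app_filter": app_labels,
        "models": models_info,
    }
```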

Updated class diagram for list_models and run_check tool response structures

classDiagram

    class ListModelsResponse {
        int total_count
        list~string~ app_filter
        list~ModelInfo~ models
    }

    class ModelInfo {
        string app_label
        string model_name
        string model_module
        list~FieldInfo~ fields
        list~RelationshipInfo~ relationships
    }

    class FieldInfo {
        string name
        string type
        bool null
        bool blank
        int max_length
        bool primary_key
        bool unique
        string related_model
    }

    class RelationshipInfo {
        string name
        string type
        string related_model
        bool many_to_many
        bool one_to_one
        bool many_to_one
        bool one_to_many
    }

    class CheckResult {
        bool success
        string checked_apps
        string tags
        bool deploy_checks
        string fail_level
        int total_issues
        list~CheckMessage~ critical
        list~CheckMessage~ errors
        list~CheckMessage~ warnings
        list~CheckMessage~ info
        list~CheckMessage~ debug
        CheckSummary summary
        string error
    }

    class CheckMessage {
        string id
        int level
        string message
        string hint
        string obj
    }

    class CheckSummary {
        int critical_count
        int error_count
        int warning_count
        int info_count
        int debug_count
    }

    ListModelsResponse "1" o-- "many" ModelInfo
    ModelInfo "1" o-- "many" FieldInfo
    ModelInfo "1" o-- "many" RelationshipInfo

    CheckResult "1" o-- "many" CheckMessage
    CheckResult "1" o-- "1" CheckSummary
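
For readability, the same response shapes written out as Python TypedDicts (field names copied from the diagram; the tools return plain dicts, so these types are illustrative only, and app_filter is shown as optional per the review notes further down):

```python
from typing import TypedDict


class CheckMessage(TypedDict):
    id: str
    level: int
    message: str
    hint: str
    obj: str


class CheckSummary(TypedDict):
    critical_count: int
    error_count: int
    warning_count: int
    info_count: int
    debug_count: int


class CheckResult(TypedDict, total=False):
    success: bool
    checked_apps: str
    tags: str
    deploy_checks: bool
    fail_level: str
    total_issues: int
    critical: list[CheckMessage]
    errors: list[CheckMessage]
    warnings: list[CheckMessage]
    info: list[CheckMessage]
    debug: list[CheckMessage]
    summary: CheckSummary
    error: str  # only present on failure


class ListModelsResponse(TypedDict):
    total_count: int
    app_filter: list[str] | None  # None when no filter is applied
    models: list[dict]  # ModelInfo entries with fields and relationships, as diagrammed
```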

File-Level Changes

Change: Extend list_models tool to support optional app filtering and return structured metadata, and align the manual test helper with the new return shape.
  • Change list_models signature to accept an optional list of app_labels and return a dict instead of a bare list
  • Filter Django models by app label when app_labels is provided, otherwise use all models
  • Wrap the collected model info in a response dict including total_count, app_filter, and models
  • Update the test helper in test_server.py to construct and print equivalent result structures for unfiltered, single-app, and multi-app model listings, and return the unfiltered-style result
Files: src/django_ai_boost/server_fastmcp.py, test_server.py

Change: Add a new run_check MCP tool that wraps Django’s system check framework with async support and structured results.
  • Define an async run_check tool that accepts app_labels, tags, deploy, fail_level, and databases parameters and returns a rich result dict
  • Use a sync_to_async-wrapped execute_checks function that calls django.core.checks.run_checks with optional app_configs and database selection
  • Translate fail_level strings into Django check level constants and compute an overall success flag based on the highest message level and configured threshold
  • Group check messages into critical, errors, warnings, info, and debug buckets, add a summary section, and include robust error handling for bad app labels and general failures
  • Add a dedicated test script that initializes the test Django project, calls the underlying run_check.fn with various parameter combinations, and prints key fields to verify behavior and error handling
Files: src/django_ai_boost/server_fastmcp.py, test_run_check.py

Change: Update FastMCP integration, tests, and documentation to align with the newer FastMCP version and new tool behavior.
  • Adjust test_prompt to call mcp._list_prompts with a MiddlewareContext and ListPromptsRequest object, as required by FastMCP 2.13+
  • Bump project version to 0.5.0 and increase the fastmcp dependency to >=2.13.3 in pyproject.toml, updating uv.lock accordingly
  • Document the list_models app_labels argument, explain MCP client output truncation behavior, and show usage examples and troubleshooting steps in README.md
Files: test_prompt.py, pyproject.toml, uv.lock, README.md

@sourcery-ai sourcery-ai bot left a comment

Hey there - I've reviewed your changes - here's some feedback:

  • In run_check, the fail_level semantics don’t fully match the docstring: CRITICAL messages always flip success to False even when fail_level is set higher, whereas the description implies the threshold should uniformly be msg.level >= check_level; consider basing success solely on that comparison for all levels.
  • The run_check call to run_checks passes databases=databases or [], which changes Django’s default behavior of using all configured databases when None; prefer passing databases through as-is so None keeps the framework’s default semantics.

## Individual Comments

### Comment 1
<location> `src/django_ai_boost/server_fastmcp.py:608-612` </location>
<code_context>
+                app_configs = [apps.get_app_config(label) for label in app_labels]
+
+            # Run the checks
+            messages = run_checks(
+                app_configs=app_configs,
+                tags=tags,
+                include_deployment_checks=deploy,
+                databases=databases or [],
+            )
+
</code_context>

<issue_to_address>
**issue (bug_risk):** Passing `[]` instead of `None` for `databases` changes Django’s default behavior.

`run_checks` interprets `databases=None` as “all databases” and `[]` as “no databases”. Using `databases or []` means that when `databases` is `None`, no DB checks run. To keep Django’s behavior, pass `databases=databases` directly, or only normalize when it’s already a list (e.g. leave `None` as `None`).
</issue_to_address>
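
A minimal sketch of the fix this comment suggests, wrapped in a small standalone helper so the call is runnable (the helper name and parameter defaults are illustrative):

```python
from typing import Any

from django.apps import apps
from django.core.checks import run_checks


def collect_check_messages(
    app_labels: list[str] | None = None,
    tags: list[str] | None = None,
    deploy: bool = False,
    databases: list[str] | None = None,
) -> list[Any]:
    """Run system checks, forwarding `databases` unchanged (no `or []` coercion)."""
    app_configs = (
        [apps.get_app_config(label) for label in app_labels] if app_labels else None
    )
    return run_checks(
        app_configs=app_configs,
        tags=tags,
        include_deployment_checks=deploy,
        databases=databases,  # None is passed through so Django applies its default handling
    )
```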

### Comment 2
<location> `src/django_ai_boost/server_fastmcp.py:671-673` </location>
<code_context>
+
+            return result
+
+        except LookupError as e:
+            return {"error": f"App not found: {str(e)}"}
+        except Exception as e:
+            return {"error": f"Error running checks: {str(e)}"}
+
</code_context>

<issue_to_address>
**suggestion (bug_risk):** Align error responses with the success-path schema for easier client handling.

The success path returns a rich `result` dict, while the exception handlers return only `{"error": ...}`. This forces clients to special-case error responses if they rely on fields like `success`, `summary`, or the categorized lists. Prefer returning the same schema on errors (e.g., `{"success": False, "error": ..., "critical": [], ...}`) so clients can handle all outcomes uniformly.

Suggested implementation:

```python
        except LookupError as e:
            error_message = f"App not found: {str(e)}"
            return {
                "success": False,
                "error": error_message,
                "critical": [],
                "errors": [error_message],
                "warnings": [],
                "info": [],
                "debug": [],
                "summary": {
                    "critical_count": 0,
                    "error_count": 1,
                    "warning_count": 0,
                    "info_count": 0,
                    "debug_count": 0,
                },
            }
        except Exception as e:
            error_message = f"Error running checks: {str(e)}"
            return {
                "success": False,
                "error": error_message,
                "critical": [],
                "errors": [error_message],
                "warnings": [],
                "info": [],
                "debug": [],
                "summary": {
                    "critical_count": 0,
                    "error_count": 1,
                    "warning_count": 0,
                    "info_count": 0,
                    "debug_count": 0,
                },
            }

```

If the success-path `result` dict includes additional keys (e.g., an `app_label`, `checks_run`, or similar metadata), you may want to add those keys to the error responses as well (with `None`, `[]`, or other appropriate defaults) so the schema is perfectly uniform for all outcomes.
</issue_to_address>

### Comment 3
<location> `test_server.py:80` </location>
<code_context>
     print("\n3. Testing list_models:")
     print("-" * 60)

+    # Test 1: List all models (no filter)
+    print("\nTest 3a: List all models (no filter)")
+    all_models = apps.get_models()
</code_context>

<issue_to_address>
**issue (testing):** test_list_models now mirrors the implementation instead of exercising the MCP tool and asserting behaviour

This version recreates the expected `result_*` structures but never calls `server_fastmcp.list_models.fn` or asserts on its output, so it won’t catch regressions and can easily diverge from the real behaviour.

Instead, have this test:
- Call `server_fastmcp.list_models.fn(app_labels=...)` for the three cases (no filter, single app, multiple apps).
- Assert that the return value is a dict with `total_count`, `app_filter`, and `models`.
- Assert `total_count == len(models)`.
- Assert `app_filter` is `None` (no filter), `['blog']` (single app), and `['blog', 'auth']` (multiple apps).
- Assert the `(app, model)` pairs match those from `apps.get_models()` for each filter.

You can keep the `print` diagnostics, but the test should primarily fail when the tool’s behaviour changes unexpectedly.
</issue_to_address>
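
A hedged sketch of the assertion-driven test described above (the import path, the blog/auth app labels, and prior Django initialization are assumptions carried over from the existing scripts):

```python
import asyncio

from django.apps import apps
from django_ai_boost import server_fastmcp  # import path assumed

# Assumes the test Django project has already been initialized,
# e.g. via server_fastmcp.initialize_django() as in the existing script.


async def assert_list_models(app_labels: list[str] | None) -> None:
    result = await server_fastmcp.list_models.fn(app_labels=app_labels)

    # Structural checks on the response shape.
    assert isinstance(result, dict)
    assert {"total_count", "app_filter", "models"} <= set(result)
    assert result["total_count"] == len(result["models"])
    assert result["app_filter"] == app_labels

    # The returned models should match Django's registry for the same filter.
    expected = [
        m for m in apps.get_models()
        if not app_labels or m._meta.app_label in app_labels
    ]
    assert result["total_count"] == len(expected)
    if app_labels:
        assert {m["app_label"] for m in result["models"]} <= set(app_labels)


def test_list_models_filtering() -> None:
    for labels in (None, ["blog"], ["blog", "auth"]):  # app labels assumed from the review
        asyncio.run(assert_list_models(labels))
```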

### Comment 4
<location> `test_run_check.py:21` </location>
<code_context>
+initialize_django = server_fastmcp.initialize_django
+
+
+async def test_run_check():
+    """Test the run_check tool with various parameters."""
+    print("Testing run_check tool...")
</code_context>

<issue_to_address>
**issue (testing):** test_run_check script lacks assertions and won’t fail CI if run_check regresses

Right now this only prints output and never asserts, so regressions in `run_check` wouldn’t cause the test (or CI) to fail.

If this is meant as an automated test, please add assertions on the returned dict structure and types (e.g., required keys like `"success"`, `"summary"`, `"warnings"` as a list) and verify that `checked_apps`, `tags`, `deploy_checks`, and `fail_level` reflect the parameters passed in each scenario. You may also want to split scenarios into separate test functions for clearer failures.

If it’s intended as a manual helper script instead, consider renaming it (e.g., `manual_run_check_demo.py`) so it isn’t confused with an automated test.
</issue_to_address>
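
A hedged sketch of such structural assertions (the import path and the blog app label are assumptions; the echoed keys follow the result structure from the reviewer's guide):

```python
import asyncio

from django_ai_boost import server_fastmcp  # import path assumed

# Assumes the test Django project has already been initialized,
# e.g. via server_fastmcp.initialize_django().


async def assert_run_check() -> None:
    result = await server_fastmcp.run_check.fn(
        app_labels=["blog"], deploy=False, fail_level="WARNING"
    )

    # Required keys and types.
    assert isinstance(result, dict)
    assert isinstance(result.get("success"), bool)
    assert isinstance(result.get("warnings"), list)
    assert isinstance(result.get("summary"), dict)

    # The result should echo the parameters that were passed in.
    assert result["fail_level"] == "WARNING"
    assert result["deploy_checks"] is False
    assert result["checked_apps"] == ["blog"]


def test_run_check_structure() -> None:
    asyncio.run(assert_run_check())
```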

### Comment 5
<location> `src/django_ai_boost/server_fastmcp.py:109` </location>
<code_context>


 @mcp.tool()
-async def list_models() -> list[dict[str, Any]]:
+async def list_models(app_labels: list[str] | None = None) -> dict[str, Any]:
</code_context>

<issue_to_address>
**issue (complexity):** Consider refactoring `run_check` to use a top-level sync helper and centralized success/summary logic for clearer, more consistent behavior.

You can simplify `run_check` quite a bit without changing behavior by:

1. Flattening the nested sync function.
2. Centralizing success/summary computation instead of mutating `success` in each branch.
3. Keeping the error response shape consistent.

### 1. Flatten `execute_checks` into a top-level sync helper

The inner `execute_checks` doesn’t really need to be nested. You can move it out and wrap it with `sync_to_async` at the call site:

```python
from asgiref.sync import sync_to_async

def _execute_checks(
    app_labels: list[str] | None,
    tags: list[str] | None,
    deploy: bool,
    fail_level: str,
    databases: list[str] | None,
) -> dict[str, Any]:
    from django.core.checks import run_checks
    from django.core.checks.messages import DEBUG, INFO, WARNING, ERROR, CRITICAL

    try:
        level_map = {
            "DEBUG": DEBUG,
            "INFO": INFO,
            "WARNING": WARNING,
            "ERROR": ERROR,
            "CRITICAL": CRITICAL,
        }
        check_level = level_map.get(fail_level.upper(), ERROR)

        app_configs = None
        if app_labels:
            app_configs = [apps.get_app_config(label) for label in app_labels]

        messages = run_checks(
            app_configs=app_configs,
            tags=tags,
            include_deployment_checks=deploy,
            databases=databases or [],
        )

        result = {
            "success": True,
            "checked_apps": app_labels or "all",
            "tags": tags or "all",
            "deploy_checks": deploy,
            "fail_level": fail_level,
            "total_issues": len(messages),
            "critical": [],
            "errors": [],
            "warnings": [],
            "info": [],
            "debug": [],
        }

        # Collect messages only – no success logic here
        for msg in messages:
            message_dict = {
                "id": msg.id,
                "level": msg.level,
                "message": msg.msg,
                "hint": msg.hint or "",
                "obj": str(msg.obj) if msg.obj else "",
            }

            if msg.level >= CRITICAL:
                result["critical"].append(message_dict)
            elif msg.level >= ERROR:
                result["errors"].append(message_dict)
            elif msg.level >= WARNING:
                result["warnings"].append(message_dict)
            elif msg.level >= INFO:
                result["info"].append(message_dict)
            else:
                result["debug"].append(message_dict)

        # Centralized success + summary computation
        has_failing = any(m.level >= check_level for m in messages)
        result["success"] = not has_failing

        result["summary"] = {
            "critical_count": len(result["critical"]),
            "error_count": len(result["errors"]),
            "warning_count": len(result["warnings"]),
            "info_count": len(result["info"]),
            "debug_count": len(result["debug"]),
        }

        return result

    except LookupError as e:
        return {
            "success": False,
            "error": f"App not found: {str(e)}",
            "checked_apps": app_labels or "all",
            "tags": tags or "all",
            "deploy_checks": deploy,
            "fail_level": fail_level,
        }
    except Exception as e:
        return {
            "success": False,
            "error": f"Error running checks: {str(e)}",
            "checked_apps": app_labels or "all",
            "tags": tags or "all",
            "deploy_checks": deploy,
            "fail_level": fail_level,
        }

@mcp.tool()
async def run_check(
    app_labels: list[str] | None = None,
    tags: list[str] | None = None,
    deploy: bool = False,
    fail_level: str = "ERROR",
    databases: list[str] | None = None,
) -> dict[str, Any]:
    return await sync_to_async(_execute_checks)(
        app_labels=app_labels,
        tags=tags,
        deploy=deploy,
        fail_level=fail_level,
        databases=databases,
    )
```

Benefits:

- No nested function; the core logic is directly visible in `_execute_checks`.
- Message categorization is only about bucketing; success is computed once based on `check_level`, not mutated in many branches.
- Error paths still include `error` but now also return `success=False` and the same top-level keys (`checked_apps`, `tags`, etc.), which keeps the response shape consistent while remaining backward compatible (existing clients reading only `error` still work).
</issue_to_address>


@fjsj fjsj requested a review from hugobessa December 15, 2025 14:31
@hugobessa (Contributor) left a comment

LGTM, only left two nit comments about imports

Comment on lines +582 to +589
from django.core.checks import run_checks
from django.core.checks.messages import (
DEBUG,
INFO,
WARNING,
ERROR,
CRITICAL,
)
@hugobessa (Contributor):

Nit: Any reason for this not to be on top level?

@raisiqueira (Collaborator, Author) replied:

The imports are intentionally inside the function because the Django check framework requires django.setup() to be called first. Our MCP server initializes Django in run_server(), and if we import these at module level, they'd execute before Django is ready.
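
A stripped-down sketch of that deferred-import pattern for context (tool body abbreviated; the server name and the simplified signature are placeholders, not the module's actual code):

```python
from typing import Any

from fastmcp import FastMCP

mcp = FastMCP("django-ai-boost")  # server name is a placeholder


@mcp.tool()
async def run_check(app_labels: list[str] | None = None) -> dict[str, Any]:
    # Deferred import: run_server() calls django.setup() before any tool is
    # invoked, so the check framework is only imported once Django is ready.
    from django.core.checks import run_checks

    messages = run_checks(app_configs=None, tags=None)  # simplified body for illustration
    return {"total_issues": len(messages)}
```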

@raisiqueira raisiqueira merged commit e1aa0f6 into main Dec 18, 2025
1 check passed
@raisiqueira raisiqueira deleted the feat/new-mcp-features branch December 18, 2025 13:23