Conversation

@are-ces
Contributor

@are-ces are-ces commented Dec 22, 2025

Description

  • Added e2e tests for WatsonX provider
  • Updated documentation
  • Updated e2e tests for the issue described below
  • Updated query.py for the issue described below

When the query endpoint is called without explicitly specifying a model or provider, query.py in LCS checks whether the selected model is registered, and for WatsonX this lookup fails. The reason is that WatsonX model identifiers are not built in the expected <provider_id>/<model_id> format: WatsonX is one of the few providers that registers a model_id containing a /, for example meta-llama/llama-4-maverick-17b-128e-instruct-fp8, which llama-stack stores as the model.identifier, while LCS expects watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8.
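
A minimal sketch of the relaxed lookup this PR introduces in query.py (the model objects and identifiers below are stand-ins; the real endpoint works on the registered-model list returned by the Llama Stack client):

from types import SimpleNamespace

# Stand-in for a WatsonX model as registered in Llama Stack: the identifier is the
# plain model_id, which itself contains "/".
models = [
    SimpleNamespace(
        identifier="meta-llama/llama-4-maverick-17b-128e-instruct-fp8",
        provider_id="watsonx",
    )
]

provider_id = "watsonx"
model_id = "meta-llama/llama-4-maverick-17b-128e-instruct-fp8"
llama_stack_model_id = f"{provider_id}/{model_id}"

# Accept either the "<provider_id>/<model_id>" identifier LCS builds or the plain
# model_id; the provider_id check keeps the match scoped to the requested provider.
model_found = any(
    m.identifier in (llama_stack_model_id, model_id) and m.provider_id == provider_id
    for m in models
)
assert model_found  # an equality-only check against llama_stack_model_id would miss this model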

As a workaround, I propose using environment variables to override the model and provider used in end-to-end tests (E2E_DEFAULT_MODEL_OVERRIDE and E2E_DEFAULT_PROVIDER_OVERRIDE). Additionally, the e2e tests should explicitly specify the model and provider instead of sending only the query field.
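
For illustration (the default values below are placeholders, not the identifiers used in CI), the overridden defaults feed explicit model/provider fields into the request payload instead of only the query field:

import json
import os

# The overrides are set only for the WatsonX CI job; the fallbacks here are
# illustrative placeholders, not the project's real defaults.
default_model = os.getenv("E2E_DEFAULT_MODEL_OVERRIDE", "some-provider/some-model")
default_provider = os.getenv("E2E_DEFAULT_PROVIDER_OVERRIDE", "some-provider")

# E2E scenarios now send model and provider explicitly alongside the query.
payload = json.dumps(
    {"query": "Say hello", "model": default_model, "provider": default_provider}
)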

We can keep the existing tests in query.feature and streaming.feature that verify calling the endpoint without specifying a provider or model. These tests will continue to fail for WatsonX until the issue is fixed upstream.

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Tools used to create PR

Identify any AI code assistants used in this PR (for transparency and review context)
NA

Related Tickets & Documents

  • Related Issue: LCORE-332
  • Closes: LCORE-332

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit

  • New Features

    • Added WatsonX as a supported inference provider and example runtime configuration.
  • Documentation

    • Updated provider docs and README to list WatsonX support.
  • Tests

    • Extended E2E matrix to include WatsonX.
    • Updated test scenarios to include explicit model/provider fields.
    • Added environment-variable overrides for test defaults.
  • Improvements

    • Broadened model matching to accept multiple identifier formats.


@coderabbitai
Contributor

coderabbitai bot commented Dec 22, 2025

Walkthrough

Adds WatsonX provider support across CI, docker-compose, examples, docs, and E2E tests; updates test harness to accept explicit model/provider via environment overrides and extends test payloads to include model/provider fields.

Changes

Cohort / File(s) — Summary

  • CI / Workflows (.github/workflows/e2e_tests.yaml): Adds a watsonx entry to the E2E matrix, injects WATSONX credentials into run steps, and adds a watsonx-specific test override step to set the default model/provider.
  • Containers / Compose (docker-compose.yaml, docker-compose-library.yaml): Adds WATSONX_BASE_URL, WATSONX_PROJECT_ID, and WATSONX_API_KEY env vars (default-empty) to the relevant services.
  • Documentation (README.md, docs/providers.md): Adds IBM WatsonX provider rows and updates provider mapping/status entries.
  • Examples & Configs (examples/watsonx-run.yaml, tests/e2e/configs/run-watsonx.yaml): New watsonx example and E2E runtime config files defining providers, storage, registered resources, and vector/tool registrations for watsonx.
  • API Endpoint Logic (src/app/endpoints/query.py): Consolidates the ToolExecutionStep import and broadens model matching to accept either the full llama_stack_model_id or the plain model_id when selecting the model/provider.
  • E2E Test Harness (tests/e2e/features/environment.py): Adds FALLBACK_MODEL/FALLBACK_PROVIDER constants and logic to honor E2E_DEFAULT_MODEL_OVERRIDE/E2E_DEFAULT_PROVIDER_OVERRIDE before falling back to the service-detected model or defaults.
  • E2E Feature Tests (tests/e2e/features/conversations.feature, tests/e2e/features/query.feature, tests/e2e/features/streaming_query.feature): Extends request JSON payloads across scenarios to include "model": "{MODEL}" and "provider": "{PROVIDER}".

Sequence Diagram(s)

(omitted)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • radofuchs
  • tisnik
  • asimurka
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed: The PR title directly and clearly describes the main objective: adding full support for the WatsonX LLM provider to Lightspeed core, which aligns with the changeset's primary focus across configuration files, documentation, and e2e tests.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.



@are-ces
Contributor Author

are-ces commented Dec 22, 2025

PS: Please ignore issues with the VertexAI provider; they are independent of these changes. The project has been migrated by the cloud team, and I will need to talk to them to re-enable it.

@are-ces are-ces requested review from radofuchs and tisnik and removed request for radofuchs December 22, 2025 09:22
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (6)
.github/workflows/e2e_tests.yaml (1)

263-268: Document the watsonx override as temporary technical debt.

The override E2E_DEFAULT_MODEL_OVERRIDE=watsonx/watsonx/meta-llama/llama-3-3-70b-instruct duplicates the provider name, which is a workaround for WatsonX's model identifier format containing /. Per the PR description, this is intentional until an upstream fix is applied.

Consider tracking this as technical debt with a TODO comment or issue reference to ensure it's revisited when the upstream fix is available.

🔎 Suggested improvement
 # watsonx has a different convention than "<provider>/<model>"
+# TODO(LCORE-XXX): Remove this workaround once Llama Stack properly handles WatsonX model identifiers with embedded slashes
 - name: Set watsonx test overrides
   if: matrix.environment == 'watsonx'
   run: |
     echo "E2E_DEFAULT_MODEL_OVERRIDE=watsonx/watsonx/meta-llama/llama-3-3-70b-instruct" >> $GITHUB_ENV
     echo "E2E_DEFAULT_PROVIDER_OVERRIDE=watsonx" >> $GITHUB_ENV
tests/e2e/features/environment.py (1)

61-86: Consider validating partial override configuration.

The current logic requires both E2E_DEFAULT_MODEL_OVERRIDE and E2E_DEFAULT_PROVIDER_OVERRIDE to be set (line 65). If only one is provided, both are silently ignored and the code falls back to service detection. This could be confusing for users who expect a partial override to work.

Consider adding a warning when only one override is set, or raising an error if they should be used together.

🔎 Proposed validation for partial overrides
 # Check for environment variable overrides first
 model_override = os.getenv("E2E_DEFAULT_MODEL_OVERRIDE")
 provider_override = os.getenv("E2E_DEFAULT_PROVIDER_OVERRIDE")
 
+# Validate that overrides are provided together
+if bool(model_override) != bool(provider_override):
+    print(
+        "⚠ Warning: Both E2E_DEFAULT_MODEL_OVERRIDE and E2E_DEFAULT_PROVIDER_OVERRIDE "
+        "must be set together. Falling back to service detection."
+    )
+
 if model_override and provider_override:
     context.default_model = model_override
     context.default_provider = provider_override
     print(
         f"Using override LLM: {context.default_model} (provider: {context.default_provider})"
     )
examples/watsonx-run.yaml (2)

65-65: Clarify the hardcoded asterisks for openai_api_key.

The openai_api_key is set to a literal string of asterisks '********'. If the braintrust scoring provider is intended to be functional, this will fail authentication. If braintrust is disabled or not used in this example, consider removing this provider entry or adding a comment explaining it's a placeholder.


50-50: Track the workaround for disabled safety shields.

Safety shields are disabled with a warning comment about infinite loop issues with LLM calls (lines 50 and 149). This is a significant security/safety feature being disabled.

Consider creating a tracking issue for this known limitation so it can be re-enabled once the upstream issue is resolved.

Would you like me to help create a GitHub issue to track this limitation?

Also applies to: 149-149

tests/e2e/configs/run-watsonx.yaml (2)

65-65: Clarify the hardcoded asterisks for openai_api_key.

The openai_api_key is set to a literal string of asterisks '********'. If the braintrust scoring provider is needed for E2E tests, this will fail authentication. If braintrust is not used in these tests, consider removing this provider entry or documenting that it's intentionally disabled.


50-50: Document the workaround for disabled safety shields in E2E tests.

Safety shields are disabled with a warning comment about infinite loop issues with LLM calls (lines 50 and 149). This affects the test coverage for safety features.

Consider adding a comment or documentation about which safety-related test scenarios are skipped due to this limitation.

Also applies to: 149-149

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cd795b3 and 9dbf66b.

📒 Files selected for processing (12)
  • .github/workflows/e2e_tests.yaml
  • README.md
  • docker-compose-library.yaml
  • docker-compose.yaml
  • docs/providers.md
  • examples/watsonx-run.yaml
  • src/app/endpoints/query.py
  • tests/e2e/configs/run-watsonx.yaml
  • tests/e2e/features/conversations.feature
  • tests/e2e/features/environment.py
  • tests/e2e/features/query.feature
  • tests/e2e/features/streaming_query.feature
🧰 Additional context used
📓 Path-based instructions (5)
tests/e2e/features/**/*.feature

📄 CodeRabbit inference engine (CLAUDE.md)

Use behave (BDD) framework with Gherkin feature files for end-to-end tests

Files:

  • tests/e2e/features/conversations.feature
  • tests/e2e/features/streaming_query.feature
  • tests/e2e/features/query.feature
src/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

src/**/*.py: Use absolute imports for internal modules in LCS project (e.g., from auth import get_auth_dependency)
All modules must start with descriptive docstrings explaining their purpose
Use logger = logging.getLogger(__name__) pattern for module logging
All functions must include complete type annotations for parameters and return types, using modern syntax (str | int) and Optional[Type] or Type | None
All functions must have docstrings with brief descriptions following Google Python docstring conventions
Function names must use snake_case with descriptive, action-oriented names (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns; return new data structures instead of modifying input parameters
Use async def for I/O operations and external API calls
All classes must include descriptive docstrings explaining their purpose following Google Python docstring conventions
Class names must use PascalCase with descriptive names and standard suffixes: Configuration for config classes, Error/Exception for exceptions, Resolver for strategy patterns, Interface for abstract base classes
Abstract classes must use ABC with @abstractmethod decorators
Include complete type annotations for all class attributes in Python classes
Use import logging and module logger pattern with standard log levels: debug, info, warning, error

Files:

  • src/app/endpoints/query.py
src/app/endpoints/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use FastAPI HTTPException with appropriate status codes for API endpoint error handling

Files:

  • src/app/endpoints/query.py
src/**/{client,app/endpoints/**}.py

📄 CodeRabbit inference engine (CLAUDE.md)

Handle APIConnectionError from Llama Stack in integration code

Files:

  • src/app/endpoints/query.py
tests/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use pytest-mock with AsyncMock objects for mocking in tests

Files:

  • tests/e2e/features/environment.py
🧠 Learnings (1)
📚 Learning: 2025-12-18T10:21:09.038Z
Learnt from: are-ces
Repo: lightspeed-core/lightspeed-stack PR: 935
File: run.yaml:114-115
Timestamp: 2025-12-18T10:21:09.038Z
Learning: In Llama Stack version 0.3.x, telemetry provider configuration is not supported under the `providers` section in run.yaml configuration files. Telemetry can be enabled with just `telemetry.enabled: true` without requiring an explicit provider block.

Applied to files:

  • examples/watsonx-run.yaml
  • tests/e2e/configs/run-watsonx.yaml
🧬 Code graph analysis (2)
src/app/endpoints/query.py (2)
tests/unit/app/endpoints/test_streaming_query.py (1)
  • ToolExecutionStep (207-221)
src/models/responses.py (1)
  • QueryResponse (341-452)
tests/e2e/features/environment.py (1)
src/models/responses.py (1)
  • model_override (1497-1516)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
  • GitHub Check: unit_tests (3.13)
  • GitHub Check: Pyright
  • GitHub Check: Pylinter
  • GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
  • GitHub Check: E2E: library mode / vertexai
  • GitHub Check: E2E: server mode / ci
  • GitHub Check: E2E: server mode / vertexai
  • GitHub Check: E2E: library mode / azure
  • GitHub Check: E2E: server mode / azure
  • GitHub Check: E2E: library mode / ci
🔇 Additional comments (11)
docs/providers.md (1)

58-58: LGTM!

The provider change from ibm_watsonx_ai to litellm correctly reflects the implementation approach for WatsonX support, with the supported status appropriately maintained.

README.md (1)

125-125: LGTM!

The WatsonX provider documentation additions are properly formatted and include appropriate references to setup documentation and example configurations.

Also applies to: 181-181

.github/workflows/e2e_tests.yaml (1)

13-13: LGTM!

The watsonx environment additions to the test matrix and the credential configuration for both server and library modes are properly structured and consistent with other provider configurations.

Also applies to: 203-204, 231-232

docker-compose.yaml (1)

35-38: LGTM!

The WatsonX environment variables follow the same pattern as other provider configurations, with appropriate default empty values.

tests/e2e/features/conversations.feature (1)

14-14: LGTM!

The payload extensions to include explicit model and provider fields align with the PR's goal of supporting provider-specific testing. The placeholder substitution pattern is consistent across all scenarios.

Also applies to: 31-31, 53-53, 100-100, 138-138, 152-152, 190-190

docker-compose-library.yaml (1)

37-40: LGTM!

The WatsonX environment variables are consistently configured for library mode, mirroring the server mode configuration in docker-compose.yaml.

tests/e2e/features/query.feature (1)

13-13: LGTM!

The test payload extensions appropriately add explicit model and provider fields to positive test scenarios while preserving error-condition tests that validate missing parameters. This aligns with the PR's goal of supporting explicit provider/model specification.

Also applies to: 25-25, 37-37, 54-54, 72-72

src/app/endpoints/query.py (1)

542-551: Provider filtering provides sufficient protection against identifier collisions.

The matching logic at line 543-544 uses an AND condition between the identifier check and provider_id check: m.identifier in (llama_stack_model_id, model_id) and m.provider_id == provider_id. This ensures that only models from the specified provider are matched, preventing unintended collisions across providers even when multiple providers have models with the same plain identifier (e.g., "custom-model"). While the logic accepts both provider_id/model_id format and plain model_id, the strict provider_id filter isolates the search to a single provider, eliminating the collision risk.
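
A small worked example of that collision scenario (identifiers here are hypothetical):

from types import SimpleNamespace

# Two providers register a model with the same plain identifier.
models = [
    SimpleNamespace(identifier="custom-model", provider_id="openai"),
    SimpleNamespace(identifier="custom-model", provider_id="watsonx"),
]

model_id, provider_id = "custom-model", "watsonx"
llama_stack_model_id = f"{provider_id}/{model_id}"

matches = [
    m
    for m in models
    if m.identifier in (llama_stack_model_id, model_id) and m.provider_id == provider_id
]
# Only the watsonx-registered entry matches, despite the shared identifier.
assert len(matches) == 1 and matches[0].provider_id == "watsonx"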

examples/watsonx-run.yaml (1)

144-148: Verify the provider_model_id format aligns with the upstream fix.

The provider_model_id on line 148 uses the format watsonx/meta-llama/llama-3-3-70b-instruct, which includes the provider prefix. According to the PR description, this is a workaround because WatsonX model identifiers contain slashes that cause parsing issues.

Ensure that when the upstream fix is applied, this configuration format remains compatible or that migration guidance is provided.

tests/e2e/configs/run-watsonx.yaml (1)

144-148: Verify the provider_model_id format aligns with the upstream fix.

The provider_model_id on line 148 uses the format watsonx/meta-llama/llama-3-3-70b-instruct, which includes the provider prefix. According to the PR description, this is a workaround because WatsonX model identifiers contain slashes that cause parsing issues.

Ensure that when the upstream fix is applied, this E2E test configuration format remains compatible or is updated accordingly.

tests/e2e/features/streaming_query.feature (1)

14-14: Placeholder substitution is properly configured.

The {MODEL} and {PROVIDER} placeholders are correctly substituted at runtime. The before_all hook in environment.py initializes context.default_model and context.default_provider, and the replace_placeholders() function is consistently called by all step implementations (in llm_query_response.py and common_http.py) before processing JSON payloads. The mechanism is working as intended.
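
For illustration only (this is not the signature of the project's replace_placeholders() helper), the substitution amounts to a plain string replacement before the JSON payload is sent:

def substitute_placeholders(payload: str, default_model: str, default_provider: str) -> str:
    """Illustrative stand-in for the {MODEL}/{PROVIDER} replacement done by the step code."""
    return payload.replace("{MODEL}", default_model).replace("{PROVIDER}", default_provider)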

Contributor

@tisnik tisnik left a comment


LGTM

Contributor

@tisnik tisnik left a comment


This PR looks ok. Added just two comments, probably minor ones.

 # TODO: Create sepparate validation of provider
 if not any(
-    m.identifier == llama_stack_model_id and m.provider_id == provider_id
+    m.identifier in (llama_stack_model_id, model_id)
Contributor


the whole model/provider handling is a bit wacky, but it is OT: not a problem for this PR

Contributor Author


Yep I agree

Contributor

@radofuchs radofuchs left a comment


I have two issues here:

  1. We need to keep one query test that covers the default behavior without a specified model.
  2. The feedback also has a query call that requires the model to be set.

added override model and provider to e2e tests
patched conversation tests to include model and provider in the call
Contributor Author

@are-ces are-ces left a comment


Reviewed @tisnik comments

@are-ces
Contributor Author

are-ces commented Jan 7, 2026

I have two issues here:

1. We need to keep one query test that covers the default behavior without a specified model.

2. The feedback also has a query call that requires the model to be set.

@radofuchs The streaming query and query endpoints have tests in place for missing provider and model. Do you mean that the conversations tests should also have such a scenario?

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @.github/workflows/e2e_tests.yaml:
- Around line 263-268: The E2E_DEFAULT_MODEL_OVERRIDE value currently includes a
duplicated "watsonx/" prefix that doesn't match the provider_model_id in
tests/e2e/configs/run-watsonx.yaml; update the override for the watsonx matrix
case so E2E_DEFAULT_MODEL_OVERRIDE equals the same identifier used in
run-watsonx.yaml (provider_model_id: watsonx/meta-llama/llama-3-3-70b-instruct)
— i.e., remove the extra leading "watsonx/" from the echo that sets
E2E_DEFAULT_MODEL_OVERRIDE in the Set watsonx test overrides step so the
environment var and config match exactly.
🧹 Nitpick comments (1)
tests/e2e/configs/run-watsonx.yaml (1)

62-65: Consider removing unused Braintrust scoring provider.

The Braintrust scoring provider with a masked OpenAI API key may not be relevant to WatsonX-specific e2e tests. If this provider isn't exercised in the WatsonX test scenarios, consider removing it to keep the configuration focused.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9dbf66b and 1d03c1a.

📒 Files selected for processing (12)
  • .github/workflows/e2e_tests.yaml
  • README.md
  • docker-compose-library.yaml
  • docker-compose.yaml
  • docs/providers.md
  • examples/watsonx-run.yaml
  • src/app/endpoints/query.py
  • tests/e2e/configs/run-watsonx.yaml
  • tests/e2e/features/conversations.feature
  • tests/e2e/features/environment.py
  • tests/e2e/features/query.feature
  • tests/e2e/features/streaming_query.feature
🚧 Files skipped from review as they are similar to previous changes (3)
  • tests/e2e/features/query.feature
  • docs/providers.md
  • docker-compose-library.yaml
🧰 Additional context used
📓 Path-based instructions (5)
tests/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use pytest-mock with AsyncMock objects for mocking in tests

Files:

  • tests/e2e/features/environment.py
src/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

src/**/*.py: Use absolute imports for internal modules in LCS project (e.g., from auth import get_auth_dependency)
All modules must start with descriptive docstrings explaining their purpose
Use logger = logging.getLogger(__name__) pattern for module logging
All functions must include complete type annotations for parameters and return types, using modern syntax (str | int) and Optional[Type] or Type | None
All functions must have docstrings with brief descriptions following Google Python docstring conventions
Function names must use snake_case with descriptive, action-oriented names (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns; return new data structures instead of modifying input parameters
Use async def for I/O operations and external API calls
All classes must include descriptive docstrings explaining their purpose following Google Python docstring conventions
Class names must use PascalCase with descriptive names and standard suffixes: Configuration for config classes, Error/Exception for exceptions, Resolver for strategy patterns, Interface for abstract base classes
Abstract classes must use ABC with @abstractmethod decorators
Include complete type annotations for all class attributes in Python classes
Use import logging and module logger pattern with standard log levels: debug, info, warning, error

Files:

  • src/app/endpoints/query.py
src/app/endpoints/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

Use FastAPI HTTPException with appropriate status codes for API endpoint error handling

Files:

  • src/app/endpoints/query.py
src/**/{client,app/endpoints/**}.py

📄 CodeRabbit inference engine (CLAUDE.md)

Handle APIConnectionError from Llama Stack in integration code

Files:

  • src/app/endpoints/query.py
tests/e2e/features/**/*.feature

📄 CodeRabbit inference engine (CLAUDE.md)

Use behave (BDD) framework with Gherkin feature files for end-to-end tests

Files:

  • tests/e2e/features/streaming_query.feature
  • tests/e2e/features/conversations.feature
🧠 Learnings (1)
📚 Learning: 2025-12-18T10:21:09.038Z
Learnt from: are-ces
Repo: lightspeed-core/lightspeed-stack PR: 935
File: run.yaml:114-115
Timestamp: 2025-12-18T10:21:09.038Z
Learning: In Llama Stack version 0.3.x, telemetry provider configuration is not supported under the `providers` section in run.yaml configuration files. Telemetry can be enabled with just `telemetry.enabled: true` without requiring an explicit provider block.

Applied to files:

  • tests/e2e/configs/run-watsonx.yaml
  • examples/watsonx-run.yaml
🧬 Code graph analysis (2)
tests/e2e/features/environment.py (1)
src/models/responses.py (1)
  • model_override (1497-1516)
src/app/endpoints/query.py (2)
tests/unit/app/endpoints/test_streaming_query.py (1)
  • ToolExecutionStep (207-221)
src/models/responses.py (1)
  • QueryResponse (341-452)
🪛 markdownlint-cli2 (0.18.1)
README.md

125-125: Bare URL used

(MD034, no-bare-urls)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
  • GitHub Check: E2E: library mode / ci
  • GitHub Check: E2E: server mode / ci
🔇 Additional comments (17)
README.md (2)

125-125: LGTM - Documentation follows existing table format.

The bare URL flagged by markdownlint is consistent with all other provider URLs in this table (lines 122-127). No change needed for consistency.


181-181: WatsonX provider entry looks correct.

The new table row follows the established format and correctly references the example configuration file.

docker-compose.yaml (1)

35-38: LGTM - WatsonX environment variables follow established patterns.

The new environment variables are correctly structured with empty defaults, consistent with other provider configurations (e.g., VertexAI on lines 32-34).

src/app/endpoints/query.py (2)

23-23: Import consolidation looks good.

The ToolExecutionStep import is correctly moved to follow the other llama_stack_client.types.alpha imports.


542-545: Logic change correctly addresses WatsonX model identifier format.

The updated validation now accepts models registered with either:

  • Full format: provider_id/model_id (e.g., watsonx/meta-llama/llama-3-3-70b-instruct)
  • Plain model_id that may itself contain / (e.g., meta-llama/llama-3-3-70b-instruct)

The provider_id check on line 544 ensures we don't accidentally match a model from a different provider, so this relaxed matching is safe.

examples/watsonx-run.yaml (1)

1-161: Well-structured WatsonX example configuration.

The configuration follows established patterns from other example files. A few notes:

  1. The safety provider is disabled with appropriate warning comments (lines 50, 149)
  2. Placeholder credentials (key-not-set, project-not-set, ********) are suitable for example files
  3. Per learnings, telemetry configuration (line 117) correctly uses enabled: true without a provider block for Llama Stack 0.3.x
tests/e2e/features/conversations.feature (1)

14-14: Test payload updates correctly include model/provider fields.

The addition of "model": "{MODEL}", "provider": "{PROVIDER}" to query payloads enables explicit model/provider routing for WatsonX tests where the default lookup would fail due to the / character in model identifiers. The placeholder substitution pattern is appropriate for Gherkin feature files.

tests/e2e/features/streaming_query.feature (1)

14-14: Streaming query tests appropriately extended with model/provider fields.

All scenarios that should specify model/provider now include them, while the validation test scenarios (lines 69-95) correctly test edge cases for missing fields.

.github/workflows/e2e_tests.yaml (2)

13-13: WatsonX environment correctly added to test matrix.

The matrix expansion enables WatsonX E2E tests to run in both server and library modes.


203-204: WatsonX credentials properly sourced from secrets.

Credentials are correctly passed via secrets.WATSONX_PROJECT_ID and secrets.WATSONX_API_KEY, following the established pattern for other provider credentials in this workflow.

tests/e2e/features/environment.py (3)

25-26: LGTM!

The fallback constants are well-named and provide sensible defaults for development environments when model detection fails.


64-74: LGTM!

The override logic correctly requires both environment variables to be set (non-empty) before applying overrides. The truthiness check on line 69 properly handles empty strings, and the log message clearly indicates when overrides are active.


76-90: LGTM!

The three-tier fallback strategy (override → service detection → hardcoded fallback) is well-implemented. The service detection function has proper error handling, and the fallback scenario is clearly logged with a warning indicator.
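
A condensed sketch of that ordering (constant values and the helper shape are assumptions for illustration, not the exact code in environment.py):

import os

FALLBACK_MODEL = "fallback-model"        # assumed placeholder, not the real default
FALLBACK_PROVIDER = "fallback-provider"  # assumed placeholder, not the real default

def resolve_default_model(detected: tuple[str, str] | None) -> tuple[str, str]:
    """Pick the model/provider: env override, then service detection, then fallback."""
    model = os.getenv("E2E_DEFAULT_MODEL_OVERRIDE")
    provider = os.getenv("E2E_DEFAULT_PROVIDER_OVERRIDE")
    if model and provider:
        return model, provider
    if detected:
        return detected
    return FALLBACK_MODEL, FALLBACK_PROVIDER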

tests/e2e/configs/run-watsonx.yaml (4)

35-38: Verify the timeout value is intentional.

The WatsonX provider is configured with a 1200-second (20-minute) timeout, which is unusually high for API calls. Confirm this is intentional for WatsonX-specific requirements (e.g., model loading, cold starts) rather than a placeholder value.


50-50: Note: Safety shields intentionally disabled.

The configuration disables safety shields with a documented warning about infinite loop issues. This aligns with the PR's focus on WatsonX support, with the understanding that shield issues will be addressed separately.


144-148: LGTM!

The registered model demonstrates the WatsonX provider_model_id format (watsonx/meta-llama/llama-3-3-70b-instruct) that contains multiple slashes, which aligns with the PR's documented workaround for the model lookup issue. The custom model_id provides a stable reference for testing.


116-117: LGTM!

The telemetry configuration correctly uses enabled: true without an explicit provider block, which aligns with the supported format for Llama Stack 0.3.x. Based on learnings, this is the proper configuration approach.

Comment on lines +263 to +268
# watsonx has a different convention than "<provider>/<model>"
- name: Set watsonx test overrides
  if: matrix.environment == 'watsonx'
  run: |
    echo "E2E_DEFAULT_MODEL_OVERRIDE=watsonx/watsonx/meta-llama/llama-3-3-70b-instruct" >> $GITHUB_ENV
    echo "E2E_DEFAULT_PROVIDER_OVERRIDE=watsonx" >> $GITHUB_ENV
Contributor


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify WatsonX test configuration exists and check model identifier format

echo "=== Checking for WatsonX test configuration ==="
if [ -f "tests/e2e/configs/run-watsonx.yaml" ]; then
  echo "✅ run-watsonx.yaml exists"
  echo ""
  echo "=== Model configuration in run-watsonx.yaml ==="
  grep -A 5 "model_id:" tests/e2e/configs/run-watsonx.yaml || echo "No model_id found"
  grep -A 5 "provider_model_id:" tests/e2e/configs/run-watsonx.yaml || echo "No provider_model_id found"
else
  echo "❌ tests/e2e/configs/run-watsonx.yaml not found"
fi

echo ""
echo "=== Example watsonx config model registration ==="
grep -A 5 "models:" examples/watsonx-run.yaml | head -10

Repository: lightspeed-core/lightspeed-stack

Length of output: 1049


Fix WatsonX model identifier mismatch in test overrides.

The E2E_DEFAULT_MODEL_OVERRIDE value does not match the actual model identifier in the configuration. The override sets:

watsonx/watsonx/meta-llama/llama-3-3-70b-instruct

But tests/e2e/configs/run-watsonx.yaml defines:

provider_model_id: watsonx/meta-llama/llama-3-3-70b-instruct

The override contains an extra watsonx/ prefix. Either remove the duplicate prefix or update the override to match the actual configuration value.


@openshift-ci

openshift-ci bot commented Jan 7, 2026

@are-ces: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

  • Test: ci/prow/images — Commit: 1d03c1a — Required: true — Rerun command: /test images

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
