Conversation

@Pfannkuchensack
Collaborator

Summary

Add support for alternative diffusers Flow Matching schedulers for Flux models:

  • Euler (default) - 1st order, fast, current behavior
  • Heun (2nd order) - Better quality, 2x slower (double model evaluations per step)
  • LCM - Optimized for few-step generation (1-4 steps)

The scheduler can be selected in both the Linear UI (Generation Settings → Advanced) and the Workflow Editor (Flux Denoise node).

Backend changes:

  • New invokeai/backend/flux/schedulers.py with scheduler type definitions and class mapping
  • Modified denoise.py to accept optional diffusers scheduler, with automatic detection of sigmas parameter support
  • Added scheduler InputField to flux_denoise invocation (version 4.1.0 → 4.2.0)
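The scheduler mapping and the "automatic detection of sigmas parameter support" can be sketched roughly as follows. This is a self-contained illustration, not the actual `schedulers.py` code: the class names and `SCHEDULER_MAP` here are stand-ins for the real diffusers Flow Matching scheduler imports, but the `inspect.signature` detection technique is the standard way to probe for an optional keyword.

```python
import inspect
from typing import Literal

FluxSchedulerType = Literal["euler", "heun", "lcm"]

# Stand-in classes so this sketch runs on its own; the real mapping in
# schedulers.py would point at the diffusers Flow Matching scheduler classes.
class EulerLikeScheduler:
    def set_timesteps(self, num_inference_steps, sigmas=None, device=None):
        pass

class HeunLikeScheduler:
    def set_timesteps(self, num_inference_steps, device=None):
        pass

SCHEDULER_MAP: dict[str, type] = {
    "euler": EulerLikeScheduler,
    "heun": HeunLikeScheduler,
    "lcm": EulerLikeScheduler,
}

def supports_sigmas(scheduler) -> bool:
    # Automatic detection: does this scheduler's set_timesteps accept a
    # `sigmas` keyword? If so, custom sigmas can be passed through.
    return "sigmas" in inspect.signature(scheduler.set_timesteps).parameters

print(supports_sigmas(EulerLikeScheduler()))  # True
print(supports_sigmas(HeunLikeScheduler()))   # False
```

Probing the signature rather than hard-coding per-class behavior keeps `denoise.py` agnostic to which scheduler classes are registered.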

Frontend changes:

  • Added fluxScheduler to Redux state in paramsSlice
  • Created ParamFluxScheduler component for Linear UI dropdown
  • Integrated scheduler selection into buildFLUXGraph

Related Issues / Discussions

QA Instructions

  1. Select a Flux model (Dev or Schnell)
  2. Open Generation Settings → Advanced Options
  3. Verify the Scheduler dropdown appears with options: Euler, Heun (2nd order), LCM
  4. Generate images with each scheduler:
    • Euler: Should produce identical results to previous behavior
    • Heun: Takes ~2x longer, may produce slightly different/improved results
    • LCM: Works best with 1-4 steps
  5. Test in Workflow Editor: The Flux Denoise node should have a scheduler dropdown

Merge Plan

Standard merge, no special considerations.

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • ❗Changes to a redux slice have a corresponding migration
  • Documentation added / updated (if applicable)
  • Updated What's New copy (if doing a release after this PR)

Add support for alternative diffusers Flow Matching schedulers:
- Euler (default, 1st order)
- Heun (2nd order, better quality, 2x slower)
- LCM (optimized for few steps)

Backend:
- Add schedulers.py with scheduler type definitions and class mapping
- Modify denoise.py to accept optional scheduler parameter
- Add scheduler InputField to flux_denoise invocation (v4.2.0)

Frontend:
- Add fluxScheduler to Redux state and paramsSlice
- Create ParamFluxScheduler component for Linear UI
- Add scheduler to buildFLUXGraph for generation
@github-actions github-actions bot added the python, invocations, backend, and frontend labels on Dec 26, 2025
@Pfannkuchensack
Collaborator Author

Pfannkuchensack commented Dec 26, 2025

I have not tested the scheduler changes yet. The frontend looks good.

Collaborator

@lstein lstein left a comment


I'm getting a validation error on the step progress callback for Euler and LCM. Heun is working ok. Something about the way steps are being counted, I'd guess?

[2025-12-27 21:47:18,123]::[InvokeAI]::ERROR --> Error while invoking session 8e1ec091-0679-473b-9a59-8b883e99537b, invocation 2c23b3e7-2fdc-4b7b-aac2-a691d36f0916 (flux_denoise): 1 validation error for InvocationProgressEvent
percentage
  Input should be less than or equal to 1 [type=less_than_equal, input_value=1.1666666666666667, input_type=float]
    For further information visit https://errors.pydantic.dev/2.12/v/less_than_equal
[2025-12-27 21:47:18,124]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/home/lstein/Projects/InvokeAI/invokeai/app/services/session_processor/session_processor_default.py", line 130, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "/home/lstein/Projects/InvokeAI/invokeai/app/invocations/baseinvocation.py", line 244, in invoke_internal
    output = self.invoke(context)
  File "/home/lstein/invokeai-main/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/lstein/Projects/InvokeAI/invokeai/app/invocations/flux_denoise.py", line 171, in invoke
    latents = self._run_diffusion(context)
  File "/home/lstein/Projects/InvokeAI/invokeai/app/invocations/flux_denoise.py", line 425, in _run_diffusion
    x = denoise(
  File "/home/lstein/Projects/InvokeAI/invokeai/backend/flux/denoise.py", line 222, in denoise
    step_callback(
  File "/home/lstein/Projects/InvokeAI/invokeai/app/invocations/flux_denoise.py", line 921, in step_callback
    context.util.flux_step_callback(state)
  File "/home/lstein/Projects/InvokeAI/invokeai/app/services/shared/invocation_context.py", line 626, in flux_step_callback
    diffusion_step_callback(
  File "/home/lstein/Projects/InvokeAI/invokeai/app/util/step_callback.py", line 185, in diffusion_step_callback
    signal_progress("Denoising", percentage, image, (width, height))
  File "/home/lstein/Projects/InvokeAI/invokeai/app/services/shared/invocation_context.py", line 687, in signal_progress
    self._services.events.emit_invocation_progress(
  File "/home/lstein/Projects/InvokeAI/invokeai/app/services/events/events_base.py", line 72, in emit_invocation_progress
    self.dispatch(InvocationProgressEvent.build(queue_item, invocation, message, percentage, image))
  File "/home/lstein/Projects/InvokeAI/invokeai/app/services/events/events_common.py", line 149, in build
    return cls(
  File "/home/lstein/invokeai-main/.venv/lib/python3.12/site-packages/pydantic/main.py", line 250, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for InvocationProgressEvent
percentage
  Input should be less than or equal to 1 [type=less_than_equal, input_value=1.1666666666666667, input_type=float]
    For further information visit https://errors.pydantic.dev/2.12/v/less_than_equal

@lstein lstein mentioned this pull request Dec 28, 2025
The LCM scheduler may have more internal timesteps than user-facing steps,
causing user_step to exceed total_steps. This resulted in a progress
percentage > 1.0, which caused a pydantic validation error.

Fix: only call step_callback when user_step <= total_steps.
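The guard described in this fix can be sketched as follows (illustrative names only, not the actual InvokeAI signatures): any extra internal timesteps beyond the user-requested count simply stop producing progress events.

```python
def report_progress(user_step: int, total_steps: int, emit) -> None:
    # Guard: skip the scheduler's extra internal timesteps so the
    # emitted percentage never exceeds 1.0.
    if user_step <= total_steps:
        emit(user_step / total_steps)

percentages = []
# e.g. LCM producing 7 internal steps for 6 requested: 7/6 ~= 1.167,
# which previously tripped pydantic's less_than_equal=1 validator.
for internal_step in range(1, 8):
    report_progress(internal_step, 6, percentages.append)
print(len(percentages), max(percentages))  # 6 1.0
```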
@lstein
Collaborator

lstein commented Dec 28, 2025

All the schedulers now work without crashing. Tested on both the linear view and workflow editor mode.

However, the step count does not seem right. For Euler and LCM, when I request six denoising steps I get seven, which is inconsistent with previous behavior. For Heun, I get 11 steps, which is consistent with the behavior in SDXL.
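The Heun count is what a 2nd-order method would predict: every step except the last needs two model evaluations (predictor plus corrector), so N requested steps cost 2N - 1 evaluations. This is a back-of-envelope check, not the exact diffusers accounting:

```python
def heun_model_evaluations(n_steps: int) -> int:
    # Two model evaluations per step, except the final step, which
    # only needs one: 2N - 1 total.
    return 2 * n_steps - 1

print(heun_model_evaluations(6))  # 11
```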

@Pfannkuchensack
Copy link
Collaborator Author

#8705 (comment)

@lstein
Copy link
Collaborator

lstein commented Dec 29, 2025

#8705 (comment)

Reply in #8705 (comment).

Remove the initial step_callback at step=0 to match SD/SDXL behavior.
Previously Flux showed N+1 steps (step 0 + N denoising steps), while
SD/SDXL showed only N steps. Now all models display N steps consistently.
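The off-by-one this commit removes can be counted directly (an illustrative sketch, not the actual denoise loop):

```python
def progress_events(n_steps: int, emit_initial: bool) -> list[int]:
    # emit_initial models the old Flux behavior: one callback at step 0
    # before the denoising loop, then one callback per step.
    events = [0] if emit_initial else []
    events.extend(range(1, n_steps + 1))
    return events

print(len(progress_events(6, emit_initial=True)))   # 7: old Flux, N+1
print(len(progress_events(6, emit_initial=False)))  # 6: now matches SD/SDXL
```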
