Replies: 1 comment
Hello, I seem to be having a similar issue as well. In our case we're also set up on AWS to run distributed tests. cc: @cyberw
Hello all 👋,
I've got a Locust setup that runs on ECS (one service for the master host and one service for the workers, each running in its own ECS task). Something I've noticed is that if one of the worker tasks gets killed or is replaced (by a deployment, for example), the master service does not see this change. Instead, the worker count is incremented, and the master seems to think the previous workers are still reachable.
I didn't want to post this as a bug, since it could be due to how we have the services configured, but I wanted to see if other folks have seen this behavior and whether there's anything I need to do in the workers themselves to ensure they shut down properly (listening for the correct signals and reporting when they fire, for example).

Something else I've noticed while running this configuration is an occasional issue with stopping the tests. I haven't quite pinned down when this happens (it seems to occur with more workers and tasks running), but it prevents the test from stopping, and the test gets stuck in a "Loading" state. I've seen similar issues in the repo that were closed / fixed, but this does seem to still occur with our setup. Curious if anyone has insight into why this might be occurring at scale.

For some added context, we're using ECS's service discovery for communication between the two services.
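One thing worth checking here (a guess on my part, not a confirmed cause): ECS sends SIGTERM to a task when it stops or replaces it, and if the worker process never actually receives that signal (a common pitfall is a container entrypoint shell that doesn't `exec` the process, so the signal stops at the shell), the worker dies without getting a chance to tell the master it's going away. A minimal sketch of verifying that SIGTERM reaches your process and triggers cleanup, assuming a plain Python process:

```python
import os
import signal

shutting_down = False

def on_sigterm(signum, frame):
    """Invoked when ECS stops the task (SIGTERM).

    In a real worker, this is where the process should do its
    deregistration/cleanup before exiting; here we just record
    that the signal was actually delivered.
    """
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, on_sigterm)

# Simulate ECS stopping the task by signalling ourselves.
os.kill(os.getpid(), signal.SIGTERM)

print("clean shutdown:", shutting_down)  # → clean shutdown: True
```

If a snippet like this doesn't print `True` inside your container, the signal is being swallowed before it reaches the worker, which would explain the master never learning that a worker went away.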
Thanks!
Nick.