
Conversation

@shafi-elastisys (Contributor) commented Nov 21, 2025

Warning

This is a public repository, ensure not to disclose:

  • personal data beyond what is necessary for interacting with this pull request, nor
  • business confidential information, such as customer names.

What kind of PR is this?

Required: Mark one of the following that is applicable:

  • kind/feature
  • kind/improvement
  • kind/deprecation
  • kind/documentation
  • kind/clean-up
  • kind/bug
  • kind/other

Optional: Mark one or more of the following that are applicable:

Important

Breaking changes should be marked kind/admin-change or kind/dev-change depending on type
Critical security fixes should be marked with kind/security

  • kind/admin-change
  • kind/dev-change
  • kind/security
  • kind/adr

What does this PR do / why do we need this PR?

This PR enables Minio ingress and updates node-local-dns and common-config to support a shared object storage setup between the Service Cluster (SC) and the Workload Cluster (WC) in the local cluster environment.
...

Information to reviewers

How to run / how to test.

Init config

./scripts/local-cluster.sh config <name> dev <domain-name>

Create the Service Cluster (SC) with ingress enabled:

./scripts/local-cluster.sh create <clustername>-sc <profile> 

Create the Workload Cluster (WC) without Minio:

./scripts/local-cluster.sh create <clustername>-wc <profile> --skip-minio

Set up NodeLocalDNS:

./scripts/local-cluster.sh setup node-local-dns

Install Velero in the WC to verify that storage from SC is accessible:

helmfile -e workload_cluster -lapp=velero apply

Once all Velero pods are running, check the backup location:

velero backup-location get 

Expected output:

NAME      PROVIDER   BUCKET/PREFIX                        PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default   aws        local-ck8s-velero/workload-cluster   Available   2025-11-27 11:53:48 +0100 CET   ReadWrite     true
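As an extra sanity check (not part of the PR's instructions), the expected output above can be verified programmatically. The sketch below embeds the sample table in a here-doc and asserts that the PHASE column reads `Available`; on a live cluster you would capture `velero backup-location get` instead:

```shell
#!/bin/sh
# Sanity-check the "velero backup-location get" output shown above.
# The sample output is embedded here; on a live cluster use:
#   output="$(velero backup-location get)"
output="$(cat <<'EOF'
NAME      PROVIDER   BUCKET/PREFIX                        PHASE       LAST VALIDATED                  ACCESS MODE   DEFAULT
default   aws        local-ck8s-velero/workload-cluster   Available   2025-11-27 11:53:48 +0100 CET   ReadWrite     true
EOF
)"

# PHASE is the fourth whitespace-separated column on the data row.
phase="$(printf '%s\n' "$output" | awk 'NR==2 {print $4}')"
echo "phase=${phase}"

# Fail loudly if the backup location is not Available.
[ "$phase" = "Available" ] || { echo "backup location not Available" >&2; exit 1; }
```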


Checklist

  • Proper commit message prefix on all commits
  • Change checks:
    • The change is transparent
    • The change is disruptive
    • The change requires no migration steps
    • The change requires migration steps
    • The change updates CRDs
    • The change updates the config and the schema
  • Documentation checks:
  • Metrics checks:
    • The metrics are still exposed and present in Grafana after the change
    • The metrics names didn't change (Grafana dashboards and Prometheus alerts required no updates)
    • The metrics names did change (Grafana dashboards and Prometheus alerts required an update)
  • Logs checks:
    • The logs do not show any errors after the change
  • PodSecurityPolicy checks:
    • Any changed Pod is covered by Kubernetes Pod Security Standards
    • Any changed Pod is covered by Gatekeeper Pod Security Policies
    • The change does not cause any Pods to be blocked by Pod Security Standards or Policies
  • NetworkPolicy checks:
    • Any changed Pod is covered by Network Policies
    • The change does not cause any dropped packets in the NetworkPolicy Dashboard
  • Audit checks:
    • The change does not cause any unnecessary Kubernetes audit events
    • The change requires changes to Kubernetes audit policy
  • Falco checks:
    • The change does not cause any alerts to be generated by Falco
  • Bug checks:
    • The bug fix is covered by regression tests

@shafi-elastisys shafi-elastisys requested a review from a team as a code owner November 21, 2025 09:38
@shafi-elastisys (Contributor, Author) commented Nov 21, 2025

I’ve added the setup so it can be triggered via a script whenever we need this functionality.
If we are fine with this approach, I’ll update the Development.md file with the corresponding instructions.

@shafi-elastisys shafi-elastisys added the kind/improvement Improvement of existing features, e.g. code cleanup or optimizations. label Nov 21, 2025
@aarnq (Contributor) commented Nov 25, 2025

Could we instead make this a feature available through the config, so you set it as an option during the configuration stage?

Also, this changes how ingress traffic is handled compared to the default case, which I'm not sure we want here, given that we have solved that issue before without using host ports. It would severely limit the ability to test NetworkPolicies for ingress traffic on local clusters.

@simonklb (Contributor) left a comment

I agree with @aarnq. This looks like a workaround patch to the setup. Is there something preventing this from being configured properly instead?

@shafi-elastisys shafi-elastisys requested a review from a team as a code owner November 27, 2025 10:44
@shafi-elastisys (Contributor, Author) commented:

I have now changed the solution so that it does not enable hostPort in ingress. The shared object storage setup is also added during the configuration stage.

Comment on lines 364 to 368
log.info "Configuring shared object storage endpoint for Minio"
yq -Pi ".objectStorage.s3.regionEndpoint = \"http://minio.${domain}:30080\"" "${CK8S_CONFIG_PATH}/common-config.yaml"
yq -Pi '.networkPolicies.global.objectStorage.ports[0] = 30080' "${CK8S_CONFIG_PATH}/common-config.yaml"
yq -Pi '.networkPolicies.global.objectStorage.ports[1] = 80' "${CK8S_CONFIG_PATH}/common-config.yaml"
yq -Pi '.networkPolicies.ingressNginx.ingressOverride.enabled = false' "${CK8S_CONFIG_PATH}/common-config.yaml"
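For reference, after these yq edits the relevant part of common-config.yaml would look roughly like this (a sketch reconstructed from the commands above; the real file contains more keys, and `<domain>` stands for the configured domain):

```yaml
objectStorage:
  s3:
    regionEndpoint: http://minio.<domain>:30080
networkPolicies:
  global:
    objectStorage:
      ports:
        - 30080
        - 80
  ingressNginx:
    ingressOverride:
      enabled: false
```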
Contributor:

Why can't we just create a new config in https://github.com/elastisys/compliantkubernetes-apps/blob/main/scripts/local-clusters/configs or incorporate this in the existing config(s) instead of patching this afterwards?

Comment on lines 484 to 486
--set ingress.enabled=true \
--set ingress.ingressClassName=nginx \
--set ingress.hosts[0]=minio."${domain}"
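For reference, the same `--set` flags expressed as a values-file fragment would be (a sketch, assuming the chart's ingress value keys as used above and `<domain>` as the configured domain):

```yaml
ingress:
  enabled: true
  ingressClassName: nginx
  hosts:
    - minio.<domain>
```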
Contributor:

Can this not be set in config?

Contributor (author):

We also have a task to make the dev setup simpler. I think I will rework the PR to make shared object storage between SC and WC the default configuration, so that the instructions become simpler as well. Does that sound good?

Contributor:

Sounds good to me. I don't see why you would not want a shared object storage between the clusters in the local cluster setup since that is closer to what we have in real environments.

Contributor (author):

Yeah, I would want it the same way. That's why I mentioned it could be the default setup instead of an optional setup via a script.

Contributor:

To be clear, when I wrote "I don't see why you would not want..." I meant you as in everyone generally, not you specifically. 😄

Contributor (author):

Yeah, that's how I understood it 😄

@shafi-elastisys shafi-elastisys requested a review from a team as a code owner December 1, 2025 11:40


Development

Successfully merging this pull request may close these issues.

[2] Ensure object storage on local clusters are reachable across sc and wc

3 participants