Hpa #3492
base: main
Conversation
Hi @nowjean! Welcome to the project! 🎉 Thanks for opening this pull request!
✅ All required contributors have signed the F5 CLA for this PR. Thank you!
I have hereby read the F5 CLA and agree to its terms
Thank you for your contribution to the project. Please run 'make generate-all'.
I've completed 'make generate-all'. Could you please review my PR?
Force-pushed from 172c009 to d081d68
So this only affects the control plane, correct? We probably want to support this for the nginx data plane as well (seems like that would be the more beneficial use case). In order to configure deployment options for the data plane, it requires a bit more work, specifically in our APIs and the code itself. The NginxProxy CRD holds the deployment configuration for the nginx data plane, which the control plane uses to configure the data plane when deploying it. Here is a simple example of how we add a new field to the API to allow for configuring these types of deployment fields: #3319.
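For illustration only, a hypothetical data plane autoscaling field on the NginxProxy resource, following the pattern in #3319, might look roughly like this (the field names and apiVersion below are assumptions, not the merged API):

# Hypothetical sketch only: field names and apiVersion are assumptions.
apiVersion: gateway.nginx.org/v1alpha2
kind: NginxProxy
metadata:
  name: nginx-proxy-config
spec:
  kubernetes:
    deployment:
      autoscaling:            # assumed new field for the data plane HPA
        enable: true
        minReplicas: 1
        maxReplicas: 5
        targetCPUUtilizationPercentage: 75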
I'd also love a more descriptive PR title, as well as a release note in the description so we can include this feature in our release notes :)
@sjberman Yes, this PR only affects the control plane. Can we also implement HPA for the data plane? AFAIK, the data plane Deployment is created via the NginxProxy CRD, and its name depends on the Gateway's name, while HPA only applies to Deployments with a fixed name (an example follows below).
So I think we can't implement HPA via the Helm chart, especially since the data plane and control plane pods are now separated in 2.0.
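A minimal sketch of such an HPA (the original example was not preserved here, so the resource names below are assumptions): scaleTargetRef must name one specific Deployment, which a static Helm template can only render when that name is known in advance.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ngf-control-plane-hpa               # assumed name, for illustration
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-release-nginx-gateway-fabric   # must match the Deployment name exactly
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75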
@nowjean I updated my comment with a description of how it can be implemented on the data plane side. Glad we're on the same page :)
Will manually test this PR for both control plane and data plane when we have all the changes :)
@sjberman @salonichf5 I've pushed my changes to this PR. From my testing, the code correctly applies HPA to both the control plane and data plane.
Tested applying the HPA for control plane and data plane pods.
values.yaml
HPA details
Needed to install the metrics server (enabling insecure TLS) to get metrics for the memory resource. Should this be communicated to end users, i.e. the additional fields they need to set if we want scaling to be active?
values.yaml
I saw HPA get configured for the control plane pod, but I couldn't see one configured for the data plane pod. Events from the nginx deployment and the logs look normal.
The NginxProxy resource reflects the resources value but not the autoscaling fields.
So, a couple of observations. What am I doing wrong in terms of testing? @sjberman @nowjean
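As a reference for the metrics point above, a hedged values.yaml fragment showing the kind of additional fields a user would need for resource-based scaling to become active; the key names are illustrative assumptions, not necessarily the chart's actual keys:

nginxGateway:
  resources:
    requests:                 # requests are required for Resource-type HPA metrics
      cpu: 100m
      memory: 128Mi
  autoscaling:
    enable: true
    minReplicas: 1
    maxReplicas: 3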
@salonichf5 @sjberman Thanks for testing! Please refer to the guide below and review my PR again. I've patched the Makefile generate-crds target.
This option turns off the descriptions in the generated CRDs, because otherwise a problem occurs with the new nginxProxy manifest file.
(In my case, I had to upgrade my runc version to build the NGF Docker images.)
End-users can create multiple Gateways, and each one needs its own HPA, so the logic now lives in the Gateway resource. Plus, I'm not sure about this part:
Normally, we assume that end users already have the Metrics Server running if they're using HPA or similar features. But maybe it's worth adding a note in the docs to avoid confusion.
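If such a docs note is added, one option it could mention is installing metrics-server with its insecure kubelet TLS flag for test clusters. A minimal sketch of metrics-server Helm values (the flag is upstream metrics-server's; how it is installed here is an assumption):

# metrics-server chart values for clusters whose kubelets use self-signed certificates
args:
  - --kubelet-insecure-tls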
Force-pushed from c57e992 to e8399d9
@salonichf5 @sjberman I added the autoscalingTemplate feature for the data plane HPA. For me, it is working correctly.
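For readers unfamiliar with the pattern, an autoscalingTemplate value in charts that offer it (ingress-nginx, for example) is a list of extra HPA metrics entries merged into the generated HPA. A hypothetical values.yaml sketch, with key names assumed rather than taken from this PR:

nginx:
  autoscaling:
    enable: true
    minReplicas: 1
    maxReplicas: 5
  autoscalingTemplate:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80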
Looks good now, @nowjean! Thank you so much for your contribution. I'll run the pipeline now -- need to ensure the CRD changes don't lead to any issues. I did verify and ran into the issue. The values.yaml
@salonichf5 Thanks, I got some errors in the pipeline and am fixing them now. How can I run the pipeline?
Re-running it for you now; only we can approve the pipeline run, but I'll keep a close eye on your PR. Appreciate your work :) Can you rebase your work with main?
pre-commit.ci autofix
@salonichf5 Thanks for your guidance :)
After that, I checked the commit history of my hpa branch.
I forgot to run
@@ -160,7 +163,25 @@ func (p *NginxProvisioner) buildNginxResourceObjects(
	if p.isOpenshift {
		objects = append(objects, openshiftObjs...)
	}
	objects = append(objects, service, deployment)

	if nProxyCfg.Kubernetes.Deployment.Autoscaling.Enabled {
there needs to be a nil check for Autoscaling here
-	if nProxyCfg.Kubernetes.Deployment.Autoscaling.Enabled {
+	if nProxyCfg.Kubernetes.Deployment.Autoscaling != nil {
+		if nProxyCfg.Kubernetes.Deployment.Autoscaling.Enabled {
Thanks, I updated the above code and the test cases.
Proposed changes
Write a clear and concise description that helps reviewers understand the purpose and impact of your changes. Use the following format:
Problem: I want NGF to work with a HorizontalPodAutoscaler
Solution: Add HPA for the deployment
Testing: I've deployed my AKS cluster and checked that HPA is working correctly.
Closes #3447
Checklist
Before creating a PR, run through this checklist and mark each as complete.
Release notes
If this PR introduces a change that affects users and needs to be mentioned in the release notes, please add a brief note that summarizes the change.