March 10, 2025
Here are the updates for Milestone M31 (Feb 24 - Mar 09).
Last week, applications hosted on NeetoDeploy faced Distributed Denial of Service (DDoS) attacks. The help.neeto*.com sites hosted on NeetoKB and a few other apps with custom domains bore the brunt of the attacks.
We began the defense by adding the IP ranges from which the attack originated to a blacklist in our cluster's AWS VPC. However, the IP ranges were dynamic and kept changing, so we had to keep updating the list for more than a day. This kept the applications running safely and bought us time to devise a proper solution.
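As an illustration, a single deny rule of this kind can be expressed as an entry in the VPC's network ACL. The snippet below is a minimal sketch in CloudFormation form, not our actual configuration; the ACL ID, rule number, and CIDR block are placeholders.

```yaml
# Hypothetical sketch: deny all inbound traffic from one attacking CIDR range
# at the VPC network ACL level. The ACL ID and CIDR are placeholders.
Resources:
  BlockAttackRange:
    Type: AWS::EC2::NetworkAclEntry
    Properties:
      NetworkAclId: acl-0123456789abcdef0   # placeholder network ACL ID
      RuleNumber: 100                       # evaluated before higher-numbered allow rules
      Protocol: -1                          # all protocols
      RuleAction: deny
      Egress: false                         # applies to inbound traffic
      CidrBlock: 203.0.113.0/24             # placeholder attacking range
```

Because the attack traffic kept shifting to new ranges, rules like this had to be added again and again, which is why this approach could only ever be a stopgap.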
We now have two levels of security features for NeetoDeploy.
Cloudflare level: We routed all the traffic NeetoDeploy handles through Cloudflare and enabled Cloudflare Proxy. This hides our cluster from public view and lets us apply Cloudflare's security features to all incoming traffic. This is the first line of defense and is available to all applications hosted on NeetoDeploy.
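Concretely, enabling the proxy means each app's DNS record in Cloudflare points at our cluster with proxying turned on, so the origin address is never exposed. The record below is only a hypothetical illustration of those fields (shown as YAML for readability); the name and target are placeholders.

```yaml
# Hypothetical Cloudflare DNS record for one app domain; name and content are
# placeholders. With proxied: true, visitors only ever see Cloudflare's IPs,
# and every request passes through Cloudflare's security checks first.
type: CNAME
name: help.example.com
content: cluster.example.com   # placeholder for the cluster's public endpoint
proxied: true
ttl: 1                         # 1 = automatic TTL in Cloudflare
```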
Traefik level: We installed multiple Traefik middlewares in our cluster, which filter traffic at the router level. Traefik, the application proxy, is the entry point to our cluster. Since applications' requirements vary, these middlewares are not enabled globally for all applications. Instead, we added a “Security” tab for every application, where admins can configure security features such as rate limiting, IP range-based blocking, user agent-based blocking, and path-based blocking.
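As an example of the Traefik level, here is a minimal sketch of what a rate-limiting rule looks like as a Traefik Middleware custom resource. The name, namespace, and limits are placeholders, and the other filters (IP range, user agent, and path-based blocking) are configured through their own middlewares.

```yaml
# Hypothetical rate-limiting middleware; the name, namespace, and numbers are
# placeholders, not our production values.
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: app-rate-limit
  namespace: apps
spec:
  rateLimit:
    average: 100   # average allowed requests per second per client
    burst: 50      # short bursts above the average before requests are rejected
```

A middleware like this is then attached only to the routers of the applications whose admins enable it from the Security tab, which is how the settings stay per-application instead of global.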
Using these newly added security features, we were able to block all the attack requests.
Work on our new build system based on Paketo buildpacks is complete. The migration from the old Heroku-based buildpacks to the new system will be finished soon.
We added a new stack, Stack 25 (based on Ubuntu 25, which will be LTS soon). Since we no longer depend on Heroku, we don't have to wait for them to release and open-source their new stack.
NeetoDeploy components like the Slug Compiler, Dyno Manager, and Addon Manager used the kubectl command to communicate with Kubernetes. This was a crude method we used in the initial days. Later, we deployed KubeProxy, a proxy API server, to communicate with Kubernetes. The dashboard app was migrated to use this new service instead of kubectl, but the other apps were still shelling out to kubectl. This was occasionally causing Rack::Timeout errors, since that method is slow and not recommended. We have now migrated the remaining services to use the KubeProxy service as well.