This project is a comprehensive, multi-server application that demonstrates a full-stack, containerized, and orchestrated web service. It includes everything from the infrastructure provisioning to the application code, all managed through modern DevOps practices. The system is designed to be resilient, scalable, and maintainable, utilizing a combination of virtual machines, containers, and automation tools.
The core of the project is a student management application with a React-based frontend and a Node.js backend, supported by a PostgreSQL database. The entire infrastructure is provisioned using Vagrant and configured with Ansible. The application itself is deployed on a Kubernetes cluster, which is also set up and managed by Ansible.
The architecture is composed of several key components, each running on its own virtual machine. This separation of concerns allows for better security, scalability, and maintainability.
The entire system runs on a set of virtual machines managed by Vagrant. The `Vagrantfile` defines the following servers:

- `dhcp_server`: An OpenBSD-based virtual machine that acts as the central networking hub for the entire system. It provides DHCP services to the other virtual machines, acts as a bastion host for SSH access, and serves as the gateway to the external network.
- `control_server`: A CentOS-based virtual machine that serves as the Kubernetes master node. It is responsible for managing the Kubernetes cluster and orchestrating the deployment of the application.
- `api_gateway_server`: An Ubuntu-based virtual machine that acts as a worker node in the Kubernetes cluster. It is responsible for exposing the application to the outside world through an API gateway.
- `database_server`: A CentOS-based virtual machine that hosts the PostgreSQL database. This server is on a separate network to enhance security.
- `backend_server`: A CentOS-based virtual machine that acts as a worker node in the Kubernetes cluster. It is responsible for running the backend application containers.
- `frontend_server`: A CentOS-based virtual machine that acts as a worker node in the Kubernetes cluster. It is responsible for running the frontend application containers.
Ansible is used for configuration management. The `ansible/` directory contains all the necessary playbooks, roles, and templates to configure each of the virtual machines, and the `inventory.ini` file defines the server groups and their connection details.
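For illustration, here is a minimal sketch of what such an inventory might define, rendered in Ansible's equivalent YAML inventory format. The group names, addresses, and credentials below are assumptions, not the project's actual values:

```yaml
# Hypothetical equivalent of inventory.ini in Ansible's YAML inventory format.
# Group names, IP addresses, and credentials are illustrative assumptions.
all:
  children:
    control:
      hosts:
        control_server:
          ansible_host: 192.168.56.10
    workers:
      hosts:
        api_gateway_server:
          ansible_host: 192.168.56.11
        backend_server:
          ansible_host: 192.168.56.12
        frontend_server:
          ansible_host: 192.168.56.13
    database:
      hosts:
        database_server:
          ansible_host: 192.168.57.10  # separate network for the database tier
  vars:
    ansible_user: vagrant
```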
The Ansible playbooks automate the following tasks (a short sketch of such tasks follows the list):
- Installation of system packages and dependencies.
- Configuration of networking, including DHCP, DNS, and firewall rules.
- Installation and configuration of Kubernetes on the master and worker nodes.
- Deployment of the application to the Kubernetes cluster.
- Configuration of monitoring and logging services.
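To give a sense of what these steps look like in practice, here is a hedged sketch of two tasks in the style the playbooks might use. The modules are standard Ansible modules, but the specific packages, ports, and task names are assumptions:

```yaml
# Illustrative tasks only; package names and firewall ports are assumptions.
- name: Install base system packages
  ansible.builtin.package:
    name:
      - curl
      - chrony
    state: present

- name: Open the Kubernetes API server port in firewalld
  ansible.posix.firewalld:
    port: 6443/tcp
    permanent: true
    immediate: true
    state: enabled
```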
The application itself is a classic three-tier architecture:
- Frontend: A React-based single-page application (SPA) located in the `front_student/` directory. It provides the user interface for the student management system.
- Backend: A Node.js application using the Express framework, located in the `back_student/` directory. It provides a RESTful API for the frontend to interact with the database.
- Database: A PostgreSQL database that stores the application data.
The entire application is containerized using Docker and orchestrated with Kubernetes. The `Dockerfile` in each of the `front_student/` and `back_student/` directories defines how to build the container images for the frontend and backend, respectively.
The Kubernetes manifests in the `ansible/ressources/k8s/` directory define the deployment of the application to the Kubernetes cluster. This includes (an illustrative sketch follows the list):
- Deployments: For the frontend and backend applications.
- Services: To expose the frontend and backend deployments within the cluster.
- Ingress: To expose the frontend to the outside world through the API gateway.
- PersistentVolumes: To provide persistent storage for the database.
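As a hedged sketch, the backend's Deployment and Service manifests might look like the following; the names, image reference, replica count, and ports are assumptions rather than the project's actual values:

```yaml
# Illustrative manifests only; names, image, and ports are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: back-student
spec:
  replicas: 2
  selector:
    matchLabels:
      app: back-student
  template:
    metadata:
      labels:
        app: back-student
    spec:
      containers:
        - name: api
          image: back-student:latest  # assumed image name
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: back-student
spec:
  selector:
    app: back-student
  ports:
    - port: 80
      targetPort: 3000
```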
The `vagrant/Vagrantfile` is the entry point for provisioning the entire infrastructure. It defines each of the six virtual machines, their operating systems, and their hardware specifications. It also configures the networking between the machines, including the private networks for the application and database tiers.
Ansible is the workhorse of the configuration management process. The `ansible/` directory is structured to be modular and reusable:

- `inventory.ini`: This file defines the hosts and groups of hosts that Ansible will manage. It's the single source of truth for the network addresses and SSH credentials of each of the virtual machines.
- `playbooks/`: Each server has its own playbook (e.g., `control_server_setup.yml`, `database_server_setup.yml`, etc.). These playbooks define the high-level tasks that need to be performed on each server.
- `tasks/`: The playbooks are composed of tasks, which are defined in this directory. Each file in this directory represents a specific role or a set of related tasks (e.g., `install_k8s_centos.yml`, `configure_psql_app.yml`, etc.). This modular approach makes the automation easier to read, maintain, and reuse (see the sketch after this list).
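To illustrate how the pieces compose, a playbook in this layout would typically pull in its task files with `include_tasks`. This is a sketch; the hosts pattern, the `install_postgresql.yml` file name, and the task ordering are assumptions:

```yaml
# Hypothetical shape of a playbook such as database_server_setup.yml.
# The hosts group and install_postgresql.yml are illustrative assumptions.
- name: Configure the database server
  hosts: database
  become: true
  tasks:
    - name: Install and initialize PostgreSQL
      ansible.builtin.include_tasks: tasks/install_postgresql.yml

    - name: Create the application database and user
      ansible.builtin.include_tasks: tasks/configure_psql_app.yml
```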
The `dhcp_server` is the heart of the network. It's an OpenBSD machine, an operating system known for its security and reliability. Its configuration is fully automated by the `dhcp_server_setup.yml` playbook.

- DHCP and DNS: It runs a DHCP server to assign IP addresses to the other virtual machines and an Unbound DNS server to provide name resolution.
- Firewall: It uses the Packet Filter (PF) firewall to control traffic between the different network segments and the outside world. The rules are defined in `ansible/ressources/packetfilter/pf.conf`.
- BGP: It runs the Border Gateway Protocol (BGP) to exchange routing information with the other servers.
- Monitoring: It has a PF exporter and Prometheus installed to monitor the network traffic (see the sketch after this list).
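As an illustration of the monitoring piece, a Prometheus scrape configuration for a PF exporter might look like the following. The job name and the exporter's listen port are assumptions; check the exporter's own documentation for its actual address:

```yaml
# Hypothetical prometheus.yml fragment; the exporter's port is an assumption.
scrape_configs:
  - job_name: pf
    static_configs:
      - targets:
          - localhost:9107  # assumed PF exporter listen address
```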
The `control_server` is the Kubernetes master node. It's responsible for managing the entire Kubernetes cluster.

- Kubernetes Control Plane: It runs the Kubernetes API server, scheduler, and controller manager.
- Cluster Setup: The `setup_cluster.yml` task initializes the Kubernetes cluster (a sketch follows this list).
- Local Path Provisioner: It sets up a local path provisioner to provide persistent storage for the Kubernetes pods.
- Helm: It has Helm installed, which is a package manager for Kubernetes. Helm is used to install and manage the various services that run on the cluster.
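Cluster initialization typically wraps `kubeadm`. Here is a minimal sketch of what the `setup_cluster.yml` task might contain, assuming a kubeadm-based cluster; the pod network CIDR and the idempotency guard file are assumptions:

```yaml
# Illustrative sketch; the CIDR and the guard file are assumptions.
- name: Initialize the Kubernetes control plane
  ansible.builtin.command:
    cmd: kubeadm init --pod-network-cidr=10.244.0.0/16
    creates: /etc/kubernetes/admin.conf  # skip if already initialized
```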
These three servers (`api_gateway_server`, `backend_server`, and `frontend_server`) are the worker nodes in the Kubernetes cluster. They are responsible for running the application containers.

- Containerd: They use containerd as the container runtime.
- Joining the Cluster: The `join_cluster.yml` task joins these nodes to the Kubernetes cluster (a sketch follows this list).
- Application Deployment: The `deploy_application_backend.yml` and `deploy_application_frontend.yml` tasks deploy the backend and frontend applications to the cluster, respectively.
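A minimal sketch of what `join_cluster.yml` might do, assuming a kubeadm-based cluster where the join command is generated on the control plane and handed to the workers as a variable (the variable name is an assumption):

```yaml
# Illustrative sketch; the kubeadm_join_command variable is an assumption.
- name: Join this node to the Kubernetes cluster
  ansible.builtin.command:
    cmd: "{{ kubeadm_join_command }}"
    creates: /etc/kubernetes/kubelet.conf  # skip if the node has already joined
```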
The `database_server` is a dedicated machine for the PostgreSQL database. This separation of the database from the application servers is a security best practice.

- PostgreSQL: It runs a PostgreSQL server.
- Database Creation: The `configure_psql_gitea.yml` and `configure_psql_app.yml` tasks create the databases and users for Gitea and the main application, respectively (a sketch follows this list).
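Database and user creation in Ansible is commonly done with the `community.postgresql` modules. Here is a hedged sketch of what `configure_psql_app.yml` might contain; the database name, user name, and password variable are assumptions:

```yaml
# Illustrative sketch; names and the password variable are assumptions.
- name: Create the application database user
  community.postgresql.postgresql_user:
    name: student_user
    password: "{{ app_db_password }}"
  become: true
  become_user: postgres

- name: Create the application database owned by that user
  community.postgresql.postgresql_db:
    name: student_app
    owner: student_user
    state: present
  become: true
  become_user: postgres
```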
The application is a simple student management system.

- `front_student/`: The frontend is a React application that provides the user interface.
- `back_student/`: The backend is a Node.js application that provides the API.
- Containerization: Both the frontend and backend are containerized using Docker. The `Dockerfile` in each directory defines how to build the images.
In addition to the main application, the project also deploys a number of supporting services to the Kubernetes cluster (an installation sketch follows the list):
- MetalLB: A load balancer for bare metal Kubernetes clusters.
- Cert-Manager: A tool to automate the management and issuance of TLS certificates.
- Kubernetes Gateway API: An API for configuring and managing API gateways in Kubernetes.
- Gitea: A self-hosted Git service.
- Kubernetes Dashboard: A web-based UI for managing Kubernetes clusters.
- Prometheus Operator: A tool to manage and monitor Prometheus instances.
- Grafana: A tool for visualizing and analyzing metrics.
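Services like these are commonly installed with Helm, which the control server already has. Here is a hedged sketch of an Ansible task pair that would deploy one of them through the `kubernetes.core` collection; the release name and namespace are assumptions, though the chart repository URL is Grafana's public one:

```yaml
# Illustrative sketch; the release name and namespace are assumptions.
- name: Add the Grafana chart repository
  kubernetes.core.helm_repository:
    name: grafana
    repo_url: https://grafana.github.io/helm-charts

- name: Install Grafana into the monitoring namespace
  kubernetes.core.helm:
    name: grafana
    chart_ref: grafana/grafana
    release_namespace: monitoring
    create_namespace: true
```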
To get the project up and running, you will need to have Vagrant and VirtualBox installed. Once you have these prerequisites, you can follow these steps:

- Clone the repository:

  ```sh
  git clone <repository-url>
  ```

- Navigate to the `vagrant/` directory:

  ```sh
  cd vagrant/
  ```

- Start the virtual machines:

  ```sh
  vagrant up
  ```

  This will provision all the virtual machines and run the Ansible playbooks to configure them. This process may take some time.

- Access the application: once the provisioning is complete, you can access the application by navigating to the IP address of the `frontend_server` in your web browser. The IP address will be displayed in the output of the `vagrant up` command.
This project is a great starting point for learning about modern DevOps practices. Here are some ideas for further exploration:
- CI/CD: Implement a CI/CD pipeline to automate the building, testing, and deployment of the application.
- Monitoring: Enhance the monitoring and logging capabilities of the system using tools like Prometheus, Grafana, and the ELK stack.
- Security: Implement more advanced security measures, such as network policies, vulnerability scanning, and secrets management.
- Scalability: Experiment with scaling the application by adding more worker nodes to the Kubernetes cluster.
- Cloud Deployment: Adapt the project to be deployed to a cloud provider like AWS, GCP, or Azure.