A quick overview of my Debian server configuration.
If help is needed for one of the following commands, use https://explainshell.com/ to get more info.
- 0 Preliminary stuff
- 1 SSH Setup
- 2 General config
- 3 Webserver
- 4 Databases
- 5 SSL and HTTPS
- 6 Webhook
- 7 Mail server
- 8 Security
- 9 Monitoring and Logs
- 10 FTP
- 11 Services
apt update
apt upgrade
apt dist-upgrade
apt autoremove
apt autoclean
If needed, upgrade the Debian version.
Update the source-list file:
✏️ /etc/apt/sources.list
Change the sources by replacing the version codename.
deb http://mirrors.online.net/debian bookworm main non-free-firmware
deb-src http://mirrors.online.net/debian bookworm main non-free-firmware
deb http://security.debian.org/debian-security bookworm-security main non-free-firmware
deb-src http://security.debian.org/debian-security bookworm-security main non-free-firmware
Then update packages.
apt update
apt upgrade --without-new-pkgs
apt full-upgrade
dpkg -l 'linux-image*' | grep ^ii | grep -i meta
apt install
Then, reboot the system.
reboot
When back online, purge obsolete packages.
apt autoremove --purge
apt autoclean
apt clean
apt purge '~c'
apt purge '~o'
Check the version.
lsb_release -a
✏️ /root/.ssh/authorized_keys
If you need to generate a key, you can use PuTTyGen or the following command:
ssh-keygen -t ed25519 -C "[email protected]"
✏️ /etc/ssh/sshd_config
Configuration:
Port <SSH_PORT>
PermitRootLogin prohibit-password
PubkeyAuthentication yes
PasswordAuthentication no
PermitEmptyPasswords no
ChallengeResponseAuthentication no
UsePAM no
X11Forwarding no
PrintMotd no
UseDNS no
AcceptEnv LANG LC_*
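A typo in sshd_config can lock you out on restart, so it is safer to validate before restarting. Here is a minimal validate-then-restart sketch (sshd -t is OpenSSH's built-in syntax check; the helper itself is not part of the original setup):

```shell
# Run a validator command; restart the service only if validation passes.
safe_restart() {
  validator="$1"    # e.g. "sshd -t" (unquoted on purpose, to split command and flag)
  service_name="$2" # e.g. "ssh"
  if $validator; then
    service "$service_name" restart
  else
    echo "validation failed; $service_name not restarted" >&2
    return 1
  fi
}
```

For example, `safe_restart "sshd -t" ssh` refuses to restart when the config has errors.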
⚙️ Restart ssh and reconnect:
service ssh restart
Common tools
apt install -y software-properties-common gnupg2 curl wget zip unzip dos2unix jq dnsutils
Git will be used to manage websites from GitHub repositories.
Install:
apt install -y git
git --version
Settings:
git config --global user.name "Your name"
git config --global user.email "[email protected]"
git config --global core.editor "vim"
Add github
Vim is a free and open-source, screen-based text editor program.
apt install vim
Files created on Windows can cause errors when transferred to the server. Install dos2unix to rewrite faulty files.
apt install dos2unix
How to use:
dos2unix /path/to/file
Remove old logs
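The dos2unix fix above works file by file. To sweep a whole directory tree, here is a sketch that uses sed as a fallback (sed can strip the trailing carriage returns itself); the `*.sh` filter and paths are illustrative:

```shell
# Convert CRLF line endings to LF for every matching file under a directory.
# Uses sed as a dos2unix fallback; adjust the -name filter to your needs.
fix_crlf() {
  find "$1" -type f -name '*.sh' -exec sed -i 's/\r$//' {} +
}
```

Usage: `fix_crlf /var/www/mywebsite`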
crontab -e
0 12 * * * /snap/bin/certbot renew --quiet
0 12 * * * apt update
0 12 * * * find /var/log -name "*.1" -type f -delete
0 12 * * * /usr/bin/find /var/log -type f -name '*.log' -mtime +2 -exec rm {} \;
Change timezone
timedatectl set-timezone Europe/Paris
Apache 2.4 will serve PHP.
💡 Documentation (httpd.apache.org)
apt install -y apache2
Check its status:
systemctl status apache2
Ensure that the service will be started at boot:
systemctl enable apache2
Let’s start by creating a custom set of defined constants.
✏️ /etc/apache2/conf-custom/constants.conf
Define APACHE_PORT 8085
Then include it in the main configuration file.
✏️ /etc/apache2/apache2.conf
# Global configuration
#
Include conf-custom/constants.conf
Now, the defined constants can be called within any Apache configuration file.
✏️ /etc/apache2/ports.conf
# If you just change the port or add more ports here, you will likely also have to change the VirtualHost statement in /etc/apache2/sites-enabled/000-default.conf
Listen ${APACHE_PORT}
# <IfModule ssl_module>
# Listen 443
# </IfModule>
# <IfModule mod_gnutls.c>
# Listen 443
# </IfModule>
✏️ /etc/apache2/conf-available/charset.conf
# Read the documentation before enabling AddDefaultCharset.
# In general, it is only a good idea if you know that all your files have this encoding. It will override any encoding given in the files in meta http-equiv or xml encoding tags.
AddDefaultCharset UTF-8
✏️ /etc/apache2/conf-available/security.conf
ServerTokens Prod
ServerSignature Off
TraceEnable Off
✏️ /etc/apache2/conf-custom/wordpress.conf
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
Enable configurations
a2enconf charset security
Enable mods
a2enmod rewrite http2 mime ssl deflate env headers mpm_event actions
⚙️ Then, restart the service.
systemctl restart apache2
Nginx will be used as a reverse proxy for Apache and NodeJS. It will serve static files.
By default, the Nginx version is tied to the Debian release. To force upgrade to the latest version, add the repository to the source list.
To avoid any odd issue, you may install the "native" version first:
apt install -y nginx
curl -fsSL https://nginx.org/keys/nginx_signing.key | tee /etc/apt/trusted.gpg.d/nginx_signing.asc
echo "deb https://nginx.org/packages/mainline/debian/ $(lsb_release -cs) nginx" | tee /etc/apt/sources.list.d/nginx.list
apt update
apt install -y nginx
✏️ /etc/nginx/nginx.conf
✏️ /etc/nginx/conf.d/cache.conf
add_header Cache-Control "public, max-age=31536000, immutable";
✏️ /etc/nginx/conf.d/charset.conf
map $sent_http_content_type $charset {
default '';
~^text/ utf-8;
text/css utf-8;
application/javascript utf-8;
application/rss+xml utf-8;
application/json utf-8;
application/manifest+json utf-8;
application/geo+json utf-8;
}
charset $charset;
charset_types *;
✏️ /etc/nginx/conf.d/default.conf
upstream apachephp {
server <SERVER_IP>:<APACHE_PORT>;
}
server {
charset utf-8;
source_charset utf-8;
override_charset on;
server_name localhost;
}
✏️ /etc/nginx/conf.d/headers.conf
# add_header X-Frame-Options "SAMEORIGIN";
# add_header X-XSS-Protection "1;mode=block";
add_header X-Content-Type-Options nosniff;
add_header Cache-Control "public, immutable";
add_header Strict-Transport-Security "max-age=500; includeSubDomains; preload";
add_header Referrer-Policy origin-when-cross-origin;
add_header Content-Security-Policy "default-src 'self'; connect-src 'self' http: https: blob: ws: *.github.com api.github.com *.youtube.com; img-src 'self' data: http: https: blob: *.gravatar.com youtube.com www.youtube.com *.youtube.com; script-src 'self' 'unsafe-inline' 'unsafe-eval' http: https: blob: www.google-analytics.com *.googleapis.com *.googlesyndication.com *.doubleclick.net youtube.com www.youtube.com *.youtube.com; style-src 'self' 'unsafe-inline' http: https: blob: *.googleapis.com youtube.com www.youtube.com *.youtube.com; font-src 'self' data: http: https: blob: *.googleapis.com *.googleusercontent.com youtube.com www.youtube.com; child-src http: https: blob: youtube.com www.youtube.com; base-uri 'self'; frame-ancestors 'self'";
✏️ /etc/nginx/conf.d/proxy.conf
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 16k;
proxy_buffers 32 16k;
proxy_busy_buffers_size 64k;
proxy_hide_header Upgrade;
✏️ /etc/nginx/conf.d/webmanifest.conf
add_header X-Content-Type-Options nosniff;
add_header Cache-Control "max-age=31536000,immutable";
✏️ /etc/nginx/conf.d/gzip.conf
types {
application/x-font-ttf ttf;
font/opentype otf;
}
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_min_length 256;
gzip_buffers 16 8k;
gzip_http_version 1.1;
#gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
# Compress all output labeled with one of the following MIME-types.
gzip_types
application/atom+xml
application/javascript
application/json
application/ld+json
application/manifest+json
application/rss+xml
application/vnd.geo+json
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/bmp
image/svg+xml
image/x-icon
text/cache-manifest
text/css
text/plain
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy;
# text/html is always compressed by gzip module
# don't compress woff/woff2 as they're compressed already
✏️ /etc/nginx/snippets/cache.conf
add_header Cache-Control "public, no-transform";
✏️ /etc/nginx/snippets/expires.conf
map $sent_http_content_type $expires {
default off;
text/html epoch;
text/css max;
application/javascript max;
~image/ max;
}
✏️ /etc/nginx/snippets/favicon-error.conf
location = /favicon.ico {
access_log off;
log_not_found off;
}
location = /robots.txt {
return 204;
access_log off;
log_not_found off;
}
✏️ /etc/nginx/snippets/ssl-config.conf
ssl_protocols TLSv1.2 TLSv1.3;
# Dropping SSLv3, TLSv1 and TLSv1.1
ssl_prefer_server_ciphers on;
ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK";
ssl_ecdh_curve secp384r1;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
# Cache credentials
ssl_session_timeout 1h;
# Stapling
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 208.67.222.222 valid=300s;
resolver_timeout 5s;
✏️ /etc/nginx/mime.types
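Note that ssl-config.conf above references /etc/ssl/certs/dhparam.pem, which does not exist by default. Assuming that path, it can be generated once with openssl; a small sketch wrapping the command:

```shell
# Generate the Diffie-Hellman parameters referenced by ssl_dhparam.
# Use 2048 bits or more in production; generation can take a while.
gen_dhparam() {
  out="$1"; bits="${2:-2048}"
  openssl dhparam -out "$out" "$bits" 2>/dev/null
}
```

Usage: `gen_dhparam /etc/ssl/certs/dhparam.pem 2048`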
⚙️ Then, check if your config is okay and restart the service.
nginx -t
systemctl restart nginx
To use PHP 8, a third-party repository is needed. If you want to stick with PHP 7.4, skip the first steps and replace "8.4" with "7.4".
apt -y install apt-transport-https lsb-release ca-certificates
wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg
sh -c 'echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list'
Then update apt and check if PHP 8 is available for installation.
apt update
apt-cache policy php
If everything is ready, install the version of PHP you need, then check that it is installed correctly.
apt install php8.4 php8.4-opcache libapache2-mod-php8.4 php8.4-mysql php8.4-curl php8.4-gd php8.4-intl php8.4-mbstring php8.4-xml php8.4-zip php8.4-fpm php8.4-readline
php -v
Add a mod for FastCGI in Apache.
✏️ /etc/apache2/mods-enabled/fastcgi.conf
<IfModule mod_fastcgi.c>
AddHandler fastcgi-script .fcgi
FastCgiIpcDir /var/lib/apache2/fastcgi
AddType application/x-httpd-fastphp .php
Action application/x-httpd-fastphp /php-fcgi
Alias /php-fcgi /usr/lib/cgi-bin/php-fcgi
FastCgiExternalServer /usr/lib/cgi-bin/php-fcgi -socket /run/php/php8.4-fpm.sock -pass-header Authorization
<Directory /usr/lib/cgi-bin>
Require all granted
</Directory>
</IfModule>
And enable it.
a2enmod fastcgi
Enable the php8.4-fpm service.
a2enmod proxy_fcgi setenvif
a2enconf php8.4-fpm
a2dismod php8.4
⚙️ Then restart Apache2.
Once everything is working, configure your php instance.
✏️ /etc/php/8.4/fpm/php.ini
max_execution_time = 300
post_max_size = 512M
upload_max_filesize = 512M
date.timezone = Europe/Paris
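Since php8.4-opcache was installed earlier, a few opcache settings can be tuned in the same file. These values are illustrative starting points, not part of the original config:

```ini
; Opcache tuning - adjust to your workload.
opcache.enable = 1
opcache.memory_consumption = 128
opcache.max_accelerated_files = 10000
opcache.revalidate_freq = 60
```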
Now that PHP is available on the command line, install Composer.
curl -sS https://getcomposer.org/installer | php
Add it to the global path:
mv composer.phar /usr/local/bin/composer
chmod +x /usr/local/bin/composer
NodeJS can be installed with the package manager, but to get more flexibility over the version, I prefer to use NVM (Node Version Manager).
💡 Documentation (github.com/nvm-sh/nvm)
Download the latest installer script from the repository and run it.
curl -sL https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh -o install_nvm.sh
bash install_nvm.sh
source ~/.profile
nvm -v
Then, install the latest version of NodeJS with the nvm command:
nvm install node
nvm use node
nvm alias default node
Or a specific version:
nvm ls-remote
nvm install v17.2.0
nvm use v17.2.0
nvm alias default 17.2.0
NPM should have been installed with NodeJS. It can be updated right away with the command:
npm i -g npm@latest
Check for outdated, incorrect and unused dependencies, globally or locally.
💡 Documentation (github.com/npm/npm-check-updates)
npm install -g npm-check-updates
PM2 is a production process manager for Node.js applications with a built-in load balancer. It allows you to keep applications alive forever, to reload them without downtime and to facilitate common system admin tasks.
npm install pm2 -g
Once it has been started, we need to make sure it restarts automatically with each reboot.
pm2 startup
When a process is started with pm2, save the list of currently active processes so it’s restored on reboot.
pm2 save
If needed, a save can be loaded manually.
pm2 resurrect
NVM has an issue: updating the Node version will not keep your globally installed packages. Here’s a script to handle this automatically:
✏️ /usr/local/bin/node-update
#!/bin/bash
# Step 1: Save list of global npm packages
echo "Saving the list of global npm packages…"
GLOBAL_PACKAGES=$(npm list -g --depth=0 --json | jq -r '.dependencies | keys[]')
echo "Global npm packages saved: $GLOBAL_PACKAGES"
# Step 2: Save PM2 processes
echo "Saving PM2 process list…"
pm2 save
echo "PM2 processes saved."
# Step 3: Load nvm environment
echo "Loading nvm…"
export NVM_DIR="$HOME/.nvm"
if [ -s "$NVM_DIR/nvm.sh" ]; then
. "$NVM_DIR/nvm.sh"
echo "nvm loaded successfully."
else
echo "Error: nvm not found. Please install nvm and try again."
exit 1
fi
# Step 4: Fetch and install the latest Node.js version
echo "Fetching the latest Node.js version…"
LATEST_VERSION=$(nvm ls-remote | grep -Eo 'v[0-9]+\.[0-9]+\.[0-9]+' | tail -n 1)
if [ -z "$LATEST_VERSION" ]; then
echo "Error: Unable to fetch the latest Node.js version. Exiting."
exit 1
fi
echo "Latest Node.js version fetched: $LATEST_VERSION"
echo "Installing Node.js version $LATEST_VERSION…"
nvm install "$LATEST_VERSION"
# Step 5: Set the latest Node.js version as default
echo "Setting Node.js version $LATEST_VERSION as the default version…"
nvm use "$LATEST_VERSION"
nvm alias default "$LATEST_VERSION"
echo "Default Node.js version set to $LATEST_VERSION."
# Step 6: Reinstall global npm packages
echo "Reinstalling global npm packages…"
for package in $GLOBAL_PACKAGES; do
echo "Installing $package…"
npm install -g "$package"
done
echo "Global npm packages reinstalled."
# Step 7: Reinstall PM2 globally
echo "Reinstalling PM2…"
npm install -g pm2
echo "PM2 reinstalled."
# Step 8: Resurrect PM2 processes
echo "Resurrecting PM2 processes…"
pm2 resurrect
echo "PM2 processes resurrected."
# Step 9: Final Confirmation
echo "Node.js update process completed successfully!"
echo "Installed Node.js version: $(node -v)"
Make it executable:
chmod +x /usr/local/bin/node-update
To use it, just call:
node-update
MariaDB Server is one of the most popular open source relational databases. It’s made by the original developers of MySQL and guaranteed to stay open source. It is part of most cloud offerings and the default in most Linux distributions.
It is built upon the values of performance, stability, and openness, and MariaDB Foundation ensures contributions will be accepted on technical merit. Recent new functionality includes advanced clustering with Galera Cluster 4, compatibility features with Oracle Database and Temporal Data Tables, allowing one to query the data as it stood at any point in the past.
apt install mariadb-server mariadb-client
Run the secure script to set a password, remove the test database and disable remote root login.
mysql_secure_installation
Create an admin user for external connections.
mysql -u root -p
CREATE USER 'user'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'user'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;
As an alternative to phpMyAdmin, Adminer is a web-based MySQL management tool. It is free, open source and written in PHP.
wget "http://www.adminer.org/latest.php" -O /var/www/mywebsite/adminer.php
wget "https://raw.githubusercontent.com/vrana/adminer/master/designs/dracula/adminer.css" -O /var/www/mywebsite/adminer.css
chown -R www-data:www-data /var/www/mywebsite
chmod -R 755 /var/www/mywebsite/adminer.php
To add plugins, create an index file in the same directory:
<?php
function adminer_object() {
// required to run any plugin
include_once './plugins/plugin.php';
// autoloader
foreach (glob("plugins/*.php") as $filename) {
include_once "./$filename";
}
$plugins = [
// specify enabled plugins here
];
/* It is possible to combine customization and plugins:
class AdminerCustomization extends AdminerPlugin {
}
return new AdminerCustomization($plugins);
*/
return new AdminerPlugin($plugins);
}
// include original Adminer or Adminer Editor
include './adminer.php';
Create SSL certificates for virtualhosts.
First, install the snap package manager (snapcraft.io), as it’s now the preferred way of installing certbot.
apt install snapd
snap install snapd
💡 Documentation (eff-certbot.readthedocs.io)
snap install --classic certbot
ln -s /snap/bin/certbot /usr/bin/certbot
Simply add a new domain:
certbot certonly --nginx -d mywebsite.com -d www.mywebsite.com -d cdn.mywebsite.com
This will automatically update the vhost file. To configure it manually, use the command without the --nginx flag.
If, at any point, this certificate needs to be expanded to include a new domain, you can use the --cert-name option (the --expand option would create a -0001 version):
certbot --cert-name mywebsite.com -d mywebsite.com,www.mywebsite.com,xyz.mywebsite.com
And to remove a certificate:
certbot delete --cert-name mywebsite.com
Renewal should be enabled by default.
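While renewal is handled by certbot, a certificate's expiry can also be checked straight from its file with openssl; a small sketch (the Let's Encrypt path in the usage note is the default one):

```shell
# Print the expiry (notAfter) date of a PEM certificate file.
cert_expiry() {
  openssl x509 -noout -enddate -in "$1"
}
```

Usage: `cert_expiry /etc/letsencrypt/live/mywebsite.com/cert.pem`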
Webhook requires Go to be installed.
Visit the Go website to get the latest version.
wget https://go.dev/dl/go1.25.0.linux-amd64.tar.gz
tar -xvf go1.25.0.linux-amd64.tar.gz -C /usr/local
Add Go to the PATH variable and check that it is working.
export PATH=$PATH:/usr/local/go/bin
go version
💡 Documentation (github.com/adnanh)
snap install webhook
ln -s /snap/webhook/current/bin/webhook /usr/bin/webhook
Prepare the general config file.
✏️ /usr/share/hooks/hooks.json
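The original notes leave this file's content out. As a sketch, a minimal hook definition matching the deploy script below might look like this (the id, paths and secret are placeholders; the HMAC trigger rule follows the webhook project's documented format):

```json
[
  {
    "id": "mywebsite-deploy",
    "execute-command": "/usr/share/hooks/mywebsite/deploy.sh",
    "command-working-directory": "/var/www/mywebsite",
    "trigger-rule": {
      "match": {
        "type": "payload-hmac-sha1",
        "secret": "change-me",
        "parameter": {
          "source": "header",
          "name": "X-Hub-Signature"
        }
      }
    }
  }
]
```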
Add the script to be executed by the hooks
✏️ /usr/share/hooks/mywebsite/deploy.sh
#!/bin/bash
exec > /usr/share/hooks/mywebsite/output.log 2>&1
git fetch --all
git checkout --force "origin/main"
Then make it executable.
chmod +x /usr/share/hooks/mywebsite/deploy.sh
⚙️ Run webhook with:
/usr/bin/webhook -hooks /usr/share/hooks/hooks.json -secure -verbose
In case the default webhook service isn’t providing enough flexibility, you can create a custom service.
Start by disabling the default service:
systemctl disable webhook
Let’s create a service file:
✏️ /opt/webhook/webhook.service:
[Unit]
Description=Webhook Custom Service
After=network.target
[Service]
ExecStart=/usr/bin/webhook -hooks=/usr/share/hooks/hooks.json -hotreload=true -ip "127.0.0.1" -port=9000 -verbose=true
WorkingDirectory=/opt/webhook
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
Now, it needs to be linked in /etc/systemd/system/. Be sure not to call it just "webhook.service", because that would conflict with the default service:
ln -s /opt/webhook/webhook.service /etc/systemd/system/go-webhook.service
systemctl daemon-reload
systemctl enable go-webhook
systemctl start go-webhook
Every change made to the hooks file is automatically taken into account, so you don’t have to reload the configuration manually as with Apache or Nginx.
💡 Documentation (github.com/adnanh/webhook/discussions/562)
This configuration will create a full mailing system, with users, aliases and antispam, using mysql.
First, you need to create a DNS record for your domain.
@ 86400 IN MX 10 yourdomain.com
You can also create a DNS record for SPF. For example, with google services:
@ 10800 IN TXT "v=spf1 +mx +a +ip4:<YOUR_IP> include:_spf.google.com ?all"
Install Postfix and its MySQL extension. Postfix will handle SMTP.
apt install -y postfix postfix-mysql
During the install, an assistant will ask which type of mail configuration you wish to use. Choose "no configuration".
Dovecot will store received mails and provide IMAP access for users.
apt install -y dovecot-core dovecot-mysql dovecot-pop3d dovecot-imapd dovecot-managesieved dovecot-lmtpd
Connect to MySQL to create a database.
mysql -u root -p
CREATE DATABASE mailserver;
Then, we’ll need a dedicated user with read-only rights to check the email addresses. Here, it will be named mailserver too. To prevent access issues, use 127.0.0.1 instead of localhost.
CREATE USER 'mailserver'@'127.0.0.1' IDENTIFIED BY 'password';
GRANT SELECT ON mailserver.* TO 'mailserver'@'127.0.0.1';
FLUSH PRIVILEGES;
Next, create the tables:
USE mailserver;
CREATE TABLE IF NOT EXISTS `virtual_domains` (
`id` int(11) NOT NULL auto_increment,
`name` varchar(50) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE IF NOT EXISTS `virtual_users` (
`id` int(11) NOT NULL auto_increment,
`domain_id` int(11) NOT NULL,
`email` varchar(100) NOT NULL,
`password` varchar(150) NOT NULL,
`quota` bigint(11) NOT NULL DEFAULT 0,
PRIMARY KEY (`id`),
UNIQUE KEY `email` (`email`),
FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE IF NOT EXISTS `virtual_aliases` (
`id` int(11) NOT NULL auto_increment,
`domain_id` int(11) NOT NULL,
`source` varchar(100) NOT NULL,
`destination` varchar(100) NOT NULL,
PRIMARY KEY (`id`),
FOREIGN KEY (domain_id) REFERENCES virtual_domains(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
✏️ /etc/postfix/conf/mysql-virtual-mailbox-domains.cf
user = mailserver
password = password
hosts = 127.0.0.1
dbname = mailserver
query = SELECT 1 FROM virtual_domains WHERE name='%s'
Now, add the configuration line to /etc/postfix/main.cf with postconf, then test it with postmap.
postconf virtual_mailbox_domains=mysql:/etc/postfix/conf/mysql-virtual-mailbox-domains.cf
postmap -q mywebsite.com mysql:/etc/postfix/conf/mysql-virtual-mailbox-domains.cf
It should return 1.
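These lookups only return something once rows exist, and the notes never insert any. As a sketch, this writes sample rows to a file you can then load with `mysql -u root -p mailserver < /tmp/mail-seed.sql` (the addresses are placeholders, and the password must be a hash matching Dovecot's configured scheme, e.g. generated with `doveadm pw -s BLF-CRYPT`):

```shell
# Write sample rows for the mail tables; load the file into MariaDB afterwards.
# The addresses and the <BLF-CRYPT_HASH> placeholder must be replaced.
cat > /tmp/mail-seed.sql <<'SQL'
INSERT INTO virtual_domains (name) VALUES ('mywebsite.com');
INSERT INTO virtual_users (domain_id, email, password)
  VALUES (1, '[email protected]', '<BLF-CRYPT_HASH>');
INSERT INTO virtual_aliases (domain_id, source, destination)
  VALUES (1, '[email protected]', '[email protected]');
SQL
```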
Then, create the mapping for mailboxes.
✏️ /etc/postfix/conf/mysql-virtual-mailbox-maps.cf
user = mailserver
password = password
hosts = 127.0.0.1
dbname = mailserver
query = SELECT 1 FROM virtual_users WHERE email='%s'
And the mapping for aliases.
✏️ /etc/postfix/conf/mysql-virtual-alias-maps.cf
user = mailserver
password = password
hosts = 127.0.0.1
dbname = mailserver
query = SELECT destination FROM virtual_aliases WHERE source='%s'
Then add the config lines.
postconf virtual_mailbox_maps=mysql:/etc/postfix/conf/mysql-virtual-mailbox-maps.cf
postconf virtual_alias_maps=mysql:/etc/postfix/conf/mysql-virtual-alias-maps.cf
postmap -q [email protected] mysql:/etc/postfix/conf/mysql-virtual-alias-maps.cfThe alias should return the mail it refers to.
Finally, create a file that will handle the catch all of aliases.
✏️ /etc/postfix/conf/mysql-virtual-email2email.cf
user = mailserver
password = password
hosts = 127.0.0.1
dbname = mailserver
query = SELECT email FROM virtual_users WHERE email='%s'
postconf virtual_alias_maps=mysql:/etc/postfix/conf/mysql-virtual-email2email.cf
postmap -q [email protected] mysql:/etc/postfix/conf/mysql-virtual-alias-maps.cfIt should return the same address.
Lastly, configure postfix to check all aliases.
postconf virtual_alias_maps=mysql:/etc/postfix/conf/mysql-virtual-alias-maps.cf,mysql:/etc/postfix/conf/mysql-virtual-email2email.cf
Now, secure these files so only Postfix can read them, since they contain passwords in clear text.
chgrp postfix /etc/postfix/conf/mysql-*.cf
chmod u=rw,g=r,o= /etc/postfix/conf/mysql-*.cf
Lastly, make Postfix listen on IPv6 too.
postconf -e 'inet_protocols = all'
Start by creating a new user with group id 5000 that will own all virtual mailboxes.
groupadd -g 5000 vmail
useradd -g vmail -u 5000 vmail -d /var/mail/vhosts -m
chown -R vmail:vmail /var/mail/vhosts
Now, a few changes need to be made to files in the /etc/dovecot/conf.d folder.
✏️ 10-auth.conf
disable_plaintext_auth = no
auth_mechanisms = plain login
# !include auth-system.conf.ext
!include auth-sql.conf.ext
#!include auth-ldap.conf.ext
#!include auth-passwdfile.conf.ext
#!include auth-checkpassword.conf.ext
#!include auth-static.conf.ext
✏️ 10-mail.conf
mail_location = maildir:/var/mail/vhosts/%d/%n/Maildir
#...
separator = .
#...
✏️ 10-master.conf
service lmtp {
unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = postfix
mode = 0600
user = postfix
}
}
#...
service auth {
# Postfix smtp-auth
unix_listener /var/spool/postfix/private/auth {
mode = 0660
user = postfix
group = postfix
}
}
✏️ 10-ssl.conf
ssl = required
ssl_cert = </etc/letsencrypt/live/mywebsite.com/fullchain.pem
ssl_key = </etc/letsencrypt/live/mywebsite.com/privkey.pem
Now, in the root folder of Dovecot.
✏️ /etc/dovecot/dovecot-sql.conf.ext
driver = mysql
default_pass_scheme = BLF-CRYPT
connect = \
host=127.0.0.1 \
dbname=mailserver \
user=mailserver \
password=password
user_query = SELECT email as user, \
concat('*:bytes=', quota) AS quota_rule, \
'/var/mail/vhosts/%d/%n' AS home, \
5000 AS uid, 5000 AS gid \
FROM virtual_users WHERE email='%u'
password_query = SELECT password FROM virtual_users WHERE email='%u'
iterate_query = SELECT email AS user FROM virtual_users
Now, set permissions:
chown root:root /etc/dovecot/dovecot-sql.conf.ext
chmod go= /etc/dovecot/dovecot-sql.conf.ext
Finally, restart Dovecot.
systemctl restart dovecot
Tell Postfix to deliver mail through Dovecot’s LMTP socket.
postconf virtual_transport=lmtp:unix:private/dovecot-lmtp
✏️ /etc/dovecot/conf.d/20-lmtp.conf
protocol lmtp {
# Space separated list of plugins to load (default is global mail_plugins).
mail_plugins = $mail_plugins sieve
}
Restart Dovecot to enable the configuration, and check that the Postfix configuration is valid.
systemctl restart dovecot
postfix check
Install swaks.
apt install swaks -y
In a second console, use the command:
swaks --to [email protected] --server localhost
Enable SMTP authentication so that Postfix can communicate with Dovecot through a socket.
smtpd_sasl_type=dovecot
smtpd_sasl_path=private/auth
smtpd_sasl_auth_enable=yes
smtp_tls_security_level = may
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
smtpd_tls_security_level = may
smtpd_tls_cert_file = /etc/letsencrypt/live/mywebsite.com/fullchain.pem
smtpd_tls_key_file = /etc/letsencrypt/live/mywebsite.com/privkey.pem
smtpd_tls_auth_only = yes
✏️ /etc/postfix/master.cf
submission inet n - y - - smtpd
-o syslog_name=postfix/submission
-o smtpd_tls_security_level=encrypt
-o smtpd_sasl_auth_enable=yes
-o smtpd_tls_auth_only=yes
-o smtpd_reject_unlisted_recipient=no
-o smtpd_client_restrictions=
-o smtpd_helo_restrictions=
-o smtpd_sender_restrictions=
-o smtpd_relay_restrictions=
-o smtpd_recipient_restrictions=permit_sasl_authenticated,reject
-o milter_macro_daemon_name=ORIGINATING
-o smtpd_sender_restrictions=reject_sender_login_mismatch,permit_sasl_authenticated,reject
Start by installing RSpamD and Redis.
apt install rspamd redis
Configure Postfix to pass mail through the RSpamD filters.
postconf smtpd_milters=inet:127.0.0.1:11332
postconf non_smtpd_milters=inet:127.0.0.1:11332
postconf milter_mail_macros="i {mail_addr} {client_addr} {client_name} {auth_authen}"
To make sure that spam mails are treated as such, they must be flagged with a header.
✏️ /etc/rspamd/override.d/milter_headers.conf
extended_spam_headers = true;
You can test the RSpamD configuration with this command:
rspamadm configtest
And restart it to load the new configuration.
systemctl restart rspamd
Then, Dovecot must be configured to read these headers and move flagged mail to the spam folder.
✏️ /etc/dovecot/conf.d/90-sieve.conf
sieve_after = /etc/dovecot/sieve-after
Create said folder:
mkdir /etc/dovecot/sieve-after
And add a new file in it:
✏️ /etc/dovecot/sieve-after/spam-to-folder.sieve
require ["fileinto"];
if header :contains "X-Spam" "Yes" {
fileinto "Junk";
stop;
}
Then compile it so that Dovecot can read it:
sievec /etc/dovecot/sieve-after/spam-to-folder.sieve
Now, point RSpamD at Redis so that it persists data.
✏️ /etc/rspamd/override.d/redis.conf
servers = "127.0.0.1";
And enable autolearn.
✏️ /etc/rspamd/override.d/classifier-bayes.conf
autolearn = [-5, 10];
To enable learning from user actions, make a few changes in Dovecot.
✏️ /etc/dovecot/conf.d/20-imap.conf
mail_plugins = $mail_plugins quota imap_sieve
✏️ /etc/dovecot/conf.d/90-sieve.conf
# From elsewhere to Junk folder
imapsieve_mailbox1_name = Junk
imapsieve_mailbox1_causes = COPY
imapsieve_mailbox1_before = file:/etc/dovecot/sieve/learn-spam.sieve
# From Junk folder to elsewhere
imapsieve_mailbox2_name = *
imapsieve_mailbox2_from = Junk
imapsieve_mailbox2_causes = COPY
imapsieve_mailbox2_before = file:/etc/dovecot/sieve/learn-ham.sieve
sieve_pipe_bin_dir = /etc/dovecot/sieve
sieve_global_extensions = +vnd.dovecot.pipe
sieve_plugins = sieve_imapsieve sieve_extprograms
Then create a new folder:
mkdir /etc/dovecot/sieve
And new files:
✏️ /etc/dovecot/sieve/learn-spam.sieve
require ["vnd.dovecot.pipe", "copy", "imapsieve"];
pipe :copy "rspamd-learn-spam.sh";
✏️ /etc/dovecot/sieve/learn-ham.sieve
require ["vnd.dovecot.pipe", "copy", "imapsieve", "variables"];
if string "${mailbox}" "Trash" {
stop;
}
pipe :copy "rspamd-learn-ham.sh";
Compile the files:
sievec /etc/dovecot/sieve/learn-spam.sieve
sievec /etc/dovecot/sieve/learn-ham.sieve
Finally, create two shell scripts:
✏️ /etc/dovecot/sieve/rspamd-learn-spam.sh
#!/bin/sh
exec /usr/bin/rspamc learn_spam
✏️ /etc/dovecot/sieve/rspamd-learn-ham.sh
#!/bin/sh
exec /usr/bin/rspamc learn_ham
Make them executable:
chmod u=rwx,go= /etc/dovecot/sieve/rspamd-learn-{spam,ham}.sh
chown vmail:vmail /etc/dovecot/sieve/rspamd-learn-{spam,ham}.sh
And restart Dovecot.
Dovecot can remove emails in the Junk folder once they reach a certain age.
✏️ /etc/dovecot/conf.d/15-mailboxes.conf
mailbox Junk {
special_use = \Junk
auto = subscribe
autoexpunge = 30d
}
mailbox Trash {
special_use = \Trash
auto = subscribe
autoexpunge = 30d
}
Finally, restart Dovecot and RSpamD.
systemctl restart dovecot rspamd
DKIM is a signature-based authentication mechanism for mail. It helps prevent messages from ending up in spam folders.
mkdir /var/lib/rspamd/dkim
chown _rspamd:_rspamd /var/lib/rspamd/dkim
Create a new private key.
rspamadm dkim_keygen -d mywebsite.com -s customkey
Then, you need to create a new DNS record.
customkey._domainkey 10800 IN TXT "v=DKIM1; k=rsa; p=<YOUR_PUBLICKEY>"
✏️ /etc/rspamd/local.d/dkim_signing.conf
path = "/var/lib/rspamd/dkim/$domain.$selector.key";
selector_map = "/etc/rspamd/dkim_selectors.map";
✏️ /etc/rspamd/dkim_selectors.map
mywebsite.com customkey
Create a file that will store the private key created earlier.
✏️ /var/lib/rspamd/dkim/mywebsite.com.customkey.key
And make sure that RSpamD has access to it.
chown _rspamd /var/lib/rspamd/dkim/*
chmod u=r,go= /var/lib/rspamd/dkim/*
Then, restart RSpamD.
systemctl restart rspamd
Add SPF and DMARC DNS records:
@ 14400 IN TXT "v=spf1 mx a ptr ip4:<server ip> include:_spf.google.com ~all"
_dmarc.mywebsite.com 3600 IN TXT "v=DMARC1;p=quarantine;pct=100;rua=mailto:[email protected];ruf=mailto:[email protected];adkim=s;aspf=r"
Postfix and Dovecot do not always pick up the latest SSL certificate after a renewal. To keep them up to date, restart both services using a certbot post-renewal hook.
✏️ /etc/letsencrypt/renewal-hooks/post/restart-mail.sh:
#!/bin/bash
systemctl restart postfix dovecotMake it executable:
chmod +x /etc/letsencrypt/renewal-hooks/post/restart-mail.sh
apt install dnsutils mailutils
A few tools to test your mail configuration:
- The commands dig TXT yourdomain to check your SPF entry, and dig contact._domainkey.yourdomain.com TXT to check your DKIM.
- DKIMcore
- Google Admin Toolbox CheckMX
- MXToolbox
- MailTester
UFW is a firewall that provides a simple, easy-to-use interface for managing network rules.
apt install ufw
🔺 UFW is NOT enabled by default, to avoid being locked out of the server. To check the status, use:
ufw status
Default rules are located in /etc/default/ufw. Applications rules are defined in /etc/ufw/applications.d/.
🛑 Let’s start by allowing your SSH port to avoid being locked out. There should already be a rule for SSH; use ufw app list to list all applications.
If not, let’s create it:
✏️ /etc/ufw/applications.d/openssh-server:
[OpenSSH]
title=Secure shell server, an rshd replacement
description=OpenSSH is a free implementation of the Secure Shell protocol.
ports=<SSH_PORT>/tcp
If it exists, be sure to change the SSH port. Then add it to the active rules:
ufw allow in "OpenSSH"
Now, proceed to add other needed rules, either with ufw allow or ufw deny, on a chosen port. Alternatively, you can use ufw allow <app> to allow all traffic for a given application.
ufw allow in "WWW full TCP"
ufw allow in "WWW full UDP"
ufw allow in "Mail submission"
ufw allow in "SMTP"
ufw allow in "SMTPS"
ufw allow in "IMAP"
ufw allow in "IMAPS"
ufw allow in "POP3"
ufw allow in "POP3S"
⚙️ Finally, enable UFW and check its status:
ufw enable
ufw status
If you have installed Webhook, let’s make a custom application rule (not necessary if nginx receives the request and passes it on directly):
✏️ /etc/ufw/applications.d/webhook
[Webhook]
title=Webhook Service
description=Lightweight configurable tool that allows you to easily create HTTP endpoints
ports=<WEBHOOK_PORT>/tcp
UFW usually reloads after adding a new rule. Check the status, and reload if needed.
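With the Webhook profile in place, enable it like the other application rules, then reload and check:

```shell
ufw allow in "Webhook"
ufw reload
ufw status
```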
💡 USEFUL TIP
You can list all ufw rules with a specific number, for example to easily delete them.
ufw status numbered
ufw delete <number>
Fail2Ban is an intrusion prevention software framework that locks offending IPs out of the server.
apt install fail2ban
To prevent custom rules from being erased by an update, create a copy of the configuration file.
cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
✏️ /etc/fail2ban/jail.local
- Under [DEFAULT] section, change / add the following parameters:
  bantime = 5h
  findtime = 20m
  maxretry = 5
  ignoreip = 127.0.0.1/8 ::1
  banaction = ufw
  banaction_allports = ufw
- Under [sshd]:
  port = <SSH_PORT>
  enabled = true
- Under [postfix] (if installed):
  port = <SMTP_PORT>
  enabled = true
  mode = aggressive
⚙️ Then, restart the service to load the new configuration and check its status.
systemctl restart fail2ban
fail2ban-client status
fail2ban-client status sshd
⚙️ If everything works fine, enable the service at startup:
systemctl enable fail2ban.service
If you want custom filters with Fail2Ban, you can create new files in /etc/fail2ban/filter.d/.
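As a sketch, a custom filter catching repeated hits on a URL could look like this (the filter name, log path and regex are hypothetical examples, not part of this setup):

```ini
; /etc/fail2ban/filter.d/nginx-wplogin.conf (hypothetical example)
[Definition]
failregex = ^<HOST> .* "(GET|POST) /wp-login\.php
ignoreregex =
```

Pair it with a matching jail section in jail.local ([nginx-wplogin] with logpath = /var/log/nginx/access.log and enabled = true), and dry-run the regex against a real log with fail2ban-regex /var/log/nginx/access.log nginx-wplogin before enabling it.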
CrowdSec is an alternative to Fail2Ban that relies on participative security, with crowdsourced protection against malicious IPs.
curl -sL https://packagecloud.io/install/repositories/crowdsec/crowdsec/script.deb.sh | bash
apt install crowdsec -y
Check that it works:
systemctl status crowdsec
Integration with UFW
Install dependency for integration with UFW:
apt install crowdsec-firewall-bouncer-iptables -y
Then enable it:
cscli bouncers add ufw-bouncerAnd check if it works:
cscli bouncers list
Watching services
cscli collections install crowdsecurity/sshd
cscli collections install crowdsecurity/nginx
cscli collections install crowdsecurity/postfix
cscli collections install crowdsecurity/dovecot
systemctl restart crowdsec
Check the logs:
cscli metrics
Check banned IPs:
cscli decisions list
Lock a specific IP:
cscli decisions add --ip XX.XX.XX.XX --duration 24h --scope ip --type ban --reason "Malicious IP"
Lock a specific IP range:
cscli decisions add --range XX.XX.XX.0/24 --duration 24h --scope range --type ban --reason "Malicious network"
Unlock a specific IP:
cscli decisions delete --ip <IP>
To avoid being locked out, whitelist a safe IP.
✏️ /etc/crowdsec/parsers/s02-enrich/custom-whitelist.yaml
name: crowdsecurity/custom-whitelist
description: "Whitelist of trusted IPs"
whitelist:
  reason: "Trusted IP"
  ip:
    - XX.XX.XX.XX
Then restart CrowdSec and check if the IP is correctly whitelisted:
systemctl restart crowdsec
cscli decisions list
Netdata is a real-time performance monitoring tool that provides insights into live metrics from systems, applications, and services.
💡 Documentation (netdata.cloud)
Install Netdata using the official installation script:
bash <(curl -Ss https://my-netdata.io/kickstart.sh) --stable-channel --disable-telemetry
The installation script will automatically:
- Install all dependencies
- Compile and install Netdata
- Create a systemd service
- Configure basic monitoring
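Once installed, a quick check that the agent is up and answering locally:

```shell
systemctl status netdata
# The local API should answer with a JSON blob describing the agent
curl -s http://127.0.0.1:19999/api/v1/info
```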
Netdata is accessible by default on port 19999. To secure access, configure authentication:
✏️ /etc/netdata/netdata.conf
[global]
hostname = your-server-name
memory mode = dbengine
page cache size = 256
dbengine multihost disk space = 256
[web]
bind to = 127.0.0.1:19999
allow connections from = 127.0.0.1
allow connections from = ::1
allow connections from = <YOUR_IP>/32
To access Netdata through your domain, add a reverse-proxy server block to your Nginx configuration:
✏️ /etc/nginx/sites-available/netdata
server {
listen 80;
server_name monitoring.yourdomain.com;
location / {
proxy_pass http://127.0.0.1:19999;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
Enable the site and restart Nginx:
ln -s /etc/nginx/sites-available/netdata /etc/nginx/sites-enabled/
nginx -t
systemctl restart nginx
Create custom alert configurations:
✏️ /etc/netdata/health.d/cpu.conf
template: 10min_cpu_usage
on: system.cpu
calc: $user + $system
every: 10s
warn: $this > (($status >= $WARNING) ? (80) : (90))
crit: $this > (($status == $CRITICAL) ? (90) : (95))
delay: up 1m down 5m
info: average cpu utilization for the last 10 minutes
to: sysadmin
✏️ /etc/netdata/health.d/disk.conf
template: disk_usage
on: disk.space
every: 1m
warn: $this < 20
crit: $this < 10
delay: up 1m down 5m
info: disk space usage
to: sysadmin
Logrotate is a system utility that manages the automatic rotation and compression of log files.
Logrotate is usually pre-installed on Debian systems. If not:
apt install logrotate
Create custom logrotate configurations for your services:
✏️ /etc/logrotate.d/nginx
/var/log/nginx/*.log {
daily
missingok
rotate 52
compress
delaycompress
notifempty
create 0640 www-data adm
sharedscripts
postrotate
if [ -f /var/run/nginx.pid ]; then
kill -USR1 `cat /var/run/nginx.pid`
fi
endscript
}
✏️ /etc/logrotate.d/apache2
/var/log/apache2/*.log {
daily
missingok
rotate 52
compress
delaycompress
notifempty
create 640 root adm
sharedscripts
postrotate
if /etc/init.d/apache2 status > /dev/null ; then \
/etc/init.d/apache2 reload > /dev/null; \
fi;
endscript
}
✏️ /etc/logrotate.d/mysql
/var/log/mysql/*.log {
daily
rotate 7
missingok
compress
create 640 mysql adm
postrotate
if test -x /usr/bin/mysqladmin && \
/usr/bin/mysqladmin ping -h localhost --silent; then
/usr/bin/mysqladmin flush-logs
fi
endscript
}
✏️ /etc/logrotate.d/fail2ban
/var/log/fail2ban.log {
daily
rotate 7
compress
delaycompress
missingok
notifempty
postrotate
systemctl reload fail2ban
endscript
}
Test your logrotate configuration:
logrotate -d /etc/logrotate.conf
Force a rotation:
logrotate -f /etc/logrotate.d/nginx
Monit is a utility for monitoring and managing daemon processes or similar programs running on Unix systems.
apt install monit
✏️ /etc/monit/monitrc
set daemon 60
set logfile /var/log/monit.log
set idfile /var/lib/monit/id
set statefile /var/lib/monit/state
# Email alerts
set mailserver localhost
set mail-format {
from: [email protected]
subject: $SERVICE $EVENT at $DATE
message: Monit $ACTION $SERVICE at $DATE on $HOST: $DESCRIPTION.
}
set alert [email protected]
# Web interface
set httpd port 2812 and
use address 127.0.0.1
allow 127.0.0.1
allow <YOUR_IP>/32
# Check system resources
check system $HOSTNAME
if loadavg (1min) > 4 then alert
if loadavg (5min) > 2 then alert
if memory usage > 80% then alert
if cpu usage (user) > 80% then alert
if cpu usage (system) > 80% then alert
# Check services
check process nginx with pidfile /var/run/nginx.pid
start program = "/etc/init.d/nginx start"
stop program = "/etc/init.d/nginx stop"
if failed host 127.0.0.1 port 80 then restart
if 5 restarts within 5 cycles then timeout
check process apache2 with pidfile /var/run/apache2/apache2.pid
start program = "/etc/init.d/apache2 start"
stop program = "/etc/init.d/apache2 stop"
if failed host 127.0.0.1 port 8085 then restart
if 5 restarts within 5 cycles then timeout
check process mysql with pidfile /var/run/mysqld/mysqld.pid
start program = "/etc/init.d/mysql start"
stop program = "/etc/init.d/mysql stop"
if failed host 127.0.0.1 port 3306 then restart
if 5 restarts within 5 cycles then timeout
check process fail2ban with pidfile /var/run/fail2ban/fail2ban.pid
start program = "/etc/init.d/fail2ban start"
stop program = "/etc/init.d/fail2ban stop"
if 5 restarts within 5 cycles then timeout
# Check disk space
check device rootfs with path /
if space usage > 80% then alert
if inode usage > 80% then alert
# Check SSL certificate expiration
check file ssl_cert with path /etc/letsencrypt/live/yourdomain.com/fullchain.pem
if changed timestamp then alert
Create custom alert scripts:
✏️ /usr/local/bin/disk-alert.sh
#!/bin/bash
# Disk space alert script
THRESHOLD=80
DISK_USAGE=$(df / | awk 'NR==2 {print $5}' | sed 's/%//')
if [ "$DISK_USAGE" -gt "$THRESHOLD" ]; then
echo "WARNING: Disk usage is ${DISK_USAGE}%" | \
mail -s "Disk Space Alert on $(hostname)" [email protected]
fi
✏️ /usr/local/bin/ssl-expiry-check.sh
#!/bin/bash
# SSL certificate expiry check
DOMAIN="yourdomain.com"
DAYS_WARNING=30
EXPIRY_DATE=$(openssl x509 -enddate -noout -in /etc/letsencrypt/live/$DOMAIN/fullchain.pem | cut -d= -f2)
EXPIRY_EPOCH=$(date -d "$EXPIRY_DATE" +%s)
CURRENT_EPOCH=$(date +%s)
DAYS_LEFT=$(( ($EXPIRY_EPOCH - $CURRENT_EPOCH) / 86400 ))
if [ "$DAYS_LEFT" -lt "$DAYS_WARNING" ]; then
echo "WARNING: SSL certificate for $DOMAIN expires in $DAYS_LEFT days" | \
mail -s "SSL Certificate Expiry Alert" [email protected]
fi
Make scripts executable:
chmod +x /usr/local/bin/disk-alert.sh
chmod +x /usr/local/bin/ssl-expiry-check.sh
Add monitoring tasks to crontab:
crontab -e
# Monitoring tasks
0 */6 * * * /usr/local/bin/disk-alert.sh
0 8 * * 1 /usr/local/bin/ssl-expiry-check.sh
0 2 * * * /usr/bin/find /var/log -name "*.log" -mtime +30 -delete
Enable and start monitoring services:
systemctl enable monit
systemctl start monit
systemctl enable netdata
systemctl start netdata
Check status:
systemctl status monit
systemctl status netdata
monit status
For centralized logging:
apt install rsyslog
✏️ /etc/rsyslog.conf
# Add at the end of the file
# Send all logs to a central server (replace with your log server IP)
*.* @logserver.yourdomain.com:514
✏️ /etc/logrotate.d/rsyslog
/var/log/syslog
/var/log/mail.info
/var/log/mail.warn
/var/log/mail.err
/var/log/mail.log
/var/log/daemon.log
/var/log/kern.log
/var/log/auth.log
/var/log/user.log
/var/log/lpr.log
/var/log/cron.log
/var/log/debug
/var/log/messages
{
rotate 4
weekly
missingok
notifempty
compress
delaycompress
sharedscripts
postrotate
/usr/lib/rsyslog/rsyslog-rotate
endscript
}
The point here is to give a screenshot app access to upload files into a specific directory via SFTP.
Start by creating a new user:
adduser screenshot
Do NOT create it without a home directory, or it won’t be able to connect over SFTP.
Let’s allow the user to connect to ssh with a password. Edit the ssh config file and add the following at the end:
✏️ /etc/ssh/sshd_config
# Example of overriding settings on a per-user basis
Match User screenshot
PasswordAuthentication yes
⚙️ Restart ssh
service ssh restart
Now you just need to give the user access to the directory where the files will be uploaded:
chown -R screenshot:screenshot /path/to/folder/
Install OpenVPN
curl -O https://raw.githubusercontent.com/Angristan/openvpn-install/master/openvpn-install.sh
chmod +x openvpn-install.sh
./openvpn-install.sh
The script will run the setup and ask a few questions.
Add the ports defined to UFW. For example, with a custom script:
✏️ /etc/ufw/applications.d/openvpn
[OpenVPN]
title=OpenVPN Service
description=Open Source VPN
ports=1194/udp
ufw allow in "OpenVPN"
The script adds a first user. To add another one, re-run the script and select "Add a new user".
./openvpn-install.sh
Configuration files (*.ovpn) are written in /root/.
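To use a profile, copy it to the client machine and start the tunnel with it. A sketch (client.ovpn stands for whatever file name the script generated, and the SSH port is the one configured earlier):

```shell
# From your local machine, fetch the profile over scp
scp -P <SSH_PORT> root@yourserver:/root/client.ovpn .

# Then connect with it (on a Linux client with openvpn installed)
sudo openvpn --config client.ovpn
```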
apt install lftp
In order not to write plain-text MariaDB credentials in scripts, create a file in /root:
✏️ /root/.my.cnf
[client]
user = your_mysql_user
password = your_mysql_password
host = localhost
Then, secure it:
chmod 600 /root/.my.cnf
In the same way, create a file to store the FTP credentials in /root. Be sure there is no space and no empty line, as either seems to make the parsing fail.
✏️ /root/.ftp_credentials
host=host
user=user
password=password
Make sure it’s correctly encoded and secure it:
dos2unix /root/.ftp_credentials
chmod 600 /root/.ftp_credentials
✏️ /opt/backups/backup-db.sh
✏️ /opt/backups/backup-config.sh
✏️ /opt/backups/backup-sites.sh
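The contents of the three scripts depend on your setup. As a minimal sketch, backup-db.sh could rely on the /root/.my.cnf credentials created earlier (the backup directory and the 14-day retention are assumptions):

```shell
#!/bin/bash
# Minimal sketch of /opt/backups/backup-db.sh
# Credentials are picked up automatically from /root/.my.cnf
BACKUP_DIR=/opt/backups/db
mkdir -p "$BACKUP_DIR"

# Dump all databases, compressed, one file per day
mysqldump --all-databases | gzip > "$BACKUP_DIR/all-databases-$(date +%F).sql.gz"

# Drop dumps older than 14 days
find "$BACKUP_DIR" -name '*.sql.gz' -mtime +14 -delete
```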
Make them executable:
chmod +x /opt/backups/*
Each script can be executed manually. Let’s automate it:
crontab -e
0 0 */2 * * /opt/backups/backup-db.sh >> /var/log/backups.log 2>&1
0 0 1 */3 * /opt/backups/backup-sites.sh >> /var/log/backups.log 2>&1
0 0 1 */6 * /opt/backups/backup-config.sh >> /var/log/backups.log 2>&1
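As a quick way to catch the formatting issue mentioned for /root/.ftp_credentials, a small check can refuse files containing spaces or empty lines (demonstrated here on a temporary sample file; point it at /root/.ftp_credentials in real use):

```shell
#!/bin/sh
# Demo of the format check on a sample file
f=$(mktemp)
printf 'host=ftp.example.com\nuser=backup\npassword=secret\n' > "$f"
# Reject the file if any line is empty or contains a space
if grep -qE '(^$| )' "$f"; then
  echo "invalid credentials file"
else
  echo "credentials file ok"
fi
rm -f "$f"
```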





