[{"content":"","date":"February 7, 2023","permalink":"/tags/ansible/","section":"Tags","summary":"","title":"Ansible"},{"content":"","date":"February 7, 2023","permalink":"/categories/","section":"Categories","summary":"","title":"Categories"},{"content":"","date":"February 7, 2023","permalink":"/tags/cloud/","section":"Tags","summary":"","title":"Cloud"},{"content":"","date":"February 7, 2023","permalink":"/tags/ec2/","section":"Tags","summary":"","title":"Ec2"},{"content":"","date":"February 7, 2023","permalink":"/tags/github-actions/","section":"Tags","summary":"","title":"Github-Actions"},{"content":"","date":"February 7, 2023","permalink":"/tags/hugo/","section":"Tags","summary":"","title":"Hugo"},{"content":"","date":"February 7, 2023","permalink":"/tags/hybrid/","section":"Tags","summary":"","title":"Hybrid"},{"content":"","date":"February 7, 2023","permalink":"/tags/k8s/","section":"Tags","summary":"","title":"K8s"},{"content":"","date":"February 7, 2023","permalink":"/tags/microk8s/","section":"Tags","summary":"","title":"Microk8s"},{"content":"","date":"February 7, 2023","permalink":"/tags/minio/","section":"Tags","summary":"","title":"Minio"},{"content":"","date":"February 7, 2023","permalink":"/categories/posts/","section":"Categories","summary":"","title":"Posts"},{"content":"","date":"February 7, 2023","permalink":"/posts/","section":"Posts","summary":"","title":"Posts"},{"content":"","date":"February 7, 2023","permalink":"/tags/rancher/","section":"Tags","summary":"","title":"Rancher"},{"content":"Introduction # I have been running my own private cloud for almost 10 years now, going from a single-node OpenStack deployment to a multi-node Kubernetes cluster that runs nodes in different geographic areas. 
I have been using it for a variety of things, from hosting my own websites to running my own email server, because I like to test technologies in conditions as close to the real world as possible and I like to have full control over my data.\nFor this hybrid deployment I created a separate single-node Ubuntu 22.04.1 LTS microk8s cluster running on an AMD Ryzen 7 5800X 8-core processor with 128GB DDR4, 2 x 4TB WDC WD40EFAX-68J HDDs ( spinning disks for S3 storage ) and 2 x WDC WDS200T2B0A SSDs. One SSD is used for the OS and the default hostpath storage class, and the other for Rancher\u0026rsquo;s hostpath provisioner ( local-path-provisioner ).\nI tried a minimal configuration using the onboard LAN, but it performed so poorly that I bought a 4 x 1Gb PCI-E network card and used it for the cluster\u0026rsquo;s only physical network connection. The rest of the networks are software-defined with WireGuard. There\u0026rsquo;s also an AMD Radeon RX 580X GPU with ~8GB of VRAM that I intended to use for GPU-accelerated workloads.\nUnfortunately, giving AMD a chance as a full CPU + GPU platform was a bad idea: drivers and support for the GPU are still not there, and therefore it\u0026rsquo;s impossible to run any GPU-accelerated k8s jobs. This is still under research, and I might sell the AMD GPU and get an Nvidia GPU for AI and machine learning workloads.\nInitially I was planning to write a very short article with a minimal block design and concept, but when examining the thousands of lines in my notes I realised I had to write a lot more to make it understandable and useful for others. Besides that, some of the information in this article was really challenging to find and correlate, so it might be helpful for other people playing around with similar technologies.\nI\u0026rsquo;m going to organise this into chapters as much as possible, but bear in mind that this is a work in progress and I will be updating it as I go along. 
It\u0026rsquo;s quite a lot to write, and time is a scarce resource.\n\u0026ldquo;OK, but where\u0026rsquo;s the hybrid part?\u0026rdquo; you might ask.\nWell, like any home customer I\u0026rsquo;m behind a crazy ISP NAT / CGNAT ( or whatever ISPs are doing these days ), so a port mapping is simply out of the question. Here enters an Amazon EC2 instance ( or any other cloud, as all we need is a droplet with a public IP ). In my case I\u0026rsquo;ve chosen an Amazon t4g.nano instance ( 2 vCPU, 0.5GB RAM, 1GB EBS ) and installed a WireGuard VPN server on it, plus Haproxy to \u0026ldquo;translate\u0026rdquo; ports from the public IP to the internal WireGuard cluster network IPs.\n🚦 My advice is to look into reserved instances and plan for 1 year ahead, at least. This will keep your EC2 cost under ~100 USD / year.\n⚠️ As a little disclaimer, be mindful when exposing private servers and services to the internet, and make sure that you don\u0026rsquo;t keep sensitive information in non-encrypted storage or files. Use GitHub secrets and other secure methods as much as possible, use long username / password combinations and don\u0026rsquo;t use the same password for multiple services.\nThis website is hosted using this hybrid cloud solution and is one of the services that is exposed to the public internet. 
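Once the EC2 endpoint is up, it helps to sanity-check from outside that the ports Haproxy will front are actually reachable. A minimal sketch, assuming the documentation address 203.0.113.10 stands in for your elastic IP and the port list matches the services you expose:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 203.0.113.10 is a placeholder documentation address; substitute your
    # own EC2 elastic IP and the ports from your security group.
    for port in (80, 443, 9000, 9001):
        print(port, port_open("203.0.113.10", port, timeout=1.0))
```

Running this from a machine outside your network quickly tells you whether the security group and Haproxy frontends line up.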
The cluster # I wanted to set some objectives right from the start, so this setup will be \u0026ldquo;production-like\u0026rdquo; from all points of view except redundancy ( as I have a single physical node ), but the one node can be replicated as-is into a multi-node deployment.\nAll services are secured: publicly exposed services get Let\u0026rsquo;s Encrypt SSL certificates and are firewalled, and all management access is done over an encrypted virtual private network defined with WireGuard.\nObjectives:\nSecure and private, with the ability to expose services to the public internet.\nSoftware-defined everything as much as possible. All the nodes are assumed to have only 1 network interface and 1 public IP.\nHighly available and fault tolerant. The cluster should be able to recover from a node failure without any manual intervention. ( This is not the case for the single-node deployment, but with minimal changes it can be replicated to a multi-node deployment. )\nSandbox for multiple technologies, to have a versatile testing ground for new ideas.\nEasy to maintain and upgrade. The cluster should be easy to maintain and upgrade with minimal downtime.\nInitial setup # 🚦 Networks:\n192.168.0.0/16 - VPC\n192.168.168.0/24 - LAN ( the local home network )\n10.0.0.0/24 - WireGuard VPN internal network\nEC2 instance # I\u0026rsquo;m going to start with the EC2 instance, as it\u0026rsquo;s the only node that is not part of the cluster, but it\u0026rsquo;s the one that will be used to access the cluster\u0026rsquo;s public services. You\u0026rsquo;re free to use any other cloud provider, as I will not be using any Amazon-specific services.\nCreate a VPC, a security group and an EC2 instance. 
A basic terraform configuration would look like this :\nproviders.tf\nterraform { required_providers { aws = { source = \u0026#34;hashicorp/aws\u0026#34; version = \u0026#34;~\u0026gt; 4.5.0\u0026#34; } } } provider \u0026#34;aws\u0026#34; { region = \u0026#34;us-east-1\u0026#34; } Change the region to your preferred one.\nvpc.tf\ndata \u0026#34;aws_availability_zones\u0026#34; \u0026#34;available\u0026#34; {} module \u0026#34;vpc\u0026#34; { source = \u0026#34;terraform-aws-modules/vpc/aws\u0026#34; version = \u0026#34;3.2.0\u0026#34; name = \u0026#34;personal\u0026#34; cidr = \u0026#34;192.168.0.0/16\u0026#34; azs = data.aws_availability_zones.available.names private_subnets = [\u0026#34;192.168.10.0/24\u0026#34;, \u0026#34;192.168.20.0/24\u0026#34;, \u0026#34;192.168.30.0/24\u0026#34;] public_subnets = [\u0026#34;192.168.40.0/24\u0026#34;, \u0026#34;192.168.50.0/24\u0026#34;, \u0026#34;192.168.60.0/24\u0026#34;] enable_nat_gateway = false enable_dns_hostnames = true tags = { Env = \u0026#34;personal\u0026#34; } } output \u0026#34;vpc_id\u0026#34; { description = \u0026#34;The ID of the VPC\u0026#34; value = module.vpc.vpc_id } output \u0026#34;private_subnets\u0026#34; { description = \u0026#34;Private Subnets\u0026#34; value = module.vpc.private_subnets } output \u0026#34;public_subnets\u0026#34; { description = \u0026#34;Public Subnets\u0026#34; value = module.vpc.public_subnets } keypair.tf\nresource \u0026#34;aws_key_pair\u0026#34; \u0026#34;personal\u0026#34; { key_name = \u0026#34;personal\u0026#34; public_key = \u0026#34;yourkeyhere\u0026#34; } ec2.tf\nresource \u0026#34;aws_instance\u0026#34; \u0026#34;private\u0026#34; { ami = \u0026#34;ami-0b49a4a6e8e22fa16\u0026#34; # \u0026lt;= ubuntu 20.04 AMI instance_type = \u0026#34;t4g.nano\u0026#34; associate_public_ip_address = true key_name = \u0026#34;personal\u0026#34; subnet_id = module.vpc.public_subnets[1] vpc_security_group_ids = [aws_security_group.personal.id] root_block_device { volume_size = 30 } 
tags = { Name = \u0026#34;personal\u0026#34; Env = \u0026#34;personal\u0026#34; } } resource \u0026#34;aws_eip\u0026#34; \u0026#34;personal\u0026#34; { instance = aws_instance.private.id vpc = true } # Outputs output \u0026#34;public_ip\u0026#34; { description = \u0026#34;the instance public ip\u0026#34; value = aws_eip.personal.public_ip } security.tf\nresource \u0026#34;aws_security_group\u0026#34; \u0026#34;personal\u0026#34; { name_prefix = \u0026#34;personal\u0026#34; vpc_id = module.vpc.vpc_id egress { from_port = 0 to_port = 0 protocol = \u0026#34;-1\u0026#34; cidr_blocks = [\u0026#34;0.0.0.0/0\u0026#34;] ipv6_cidr_blocks = [\u0026#34;::/0\u0026#34;] } ingress { from_port = 8 to_port = 0 protocol = \u0026#34;icmp\u0026#34; description = \u0026#34;Allow ping\u0026#34; cidr_blocks = [ \u0026#34;0.0.0.0/0\u0026#34; ] } ingress { from_port = 80 to_port = 80 protocol = \u0026#34;tcp\u0026#34; cidr_blocks = [ \u0026#34;0.0.0.0/0\u0026#34; ] description = \u0026#34;HTTP\u0026#34; } ingress { from_port = 443 to_port = 443 protocol = \u0026#34;tcp\u0026#34; cidr_blocks = [ \u0026#34;0.0.0.0/0\u0026#34;, ] description = \u0026#34;HTTPS\u0026#34; } ingress { from_port = 9000 to_port = 9000 protocol = \u0026#34;tcp\u0026#34; cidr_blocks = [ \u0026#34;0.0.0.0/0\u0026#34;, ] description = \u0026#34;S3\u0026#34; } ingress { from_port = 9001 to_port = 9001 protocol = \u0026#34;tcp\u0026#34; cidr_blocks = [ \u0026#34;0.0.0.0/0\u0026#34;, ] description = \u0026#34;S3-Admin\u0026#34; } ingress { from_port = 20000 to_port = 20000 protocol = \u0026#34;udp\u0026#34; cidr_blocks = [ \u0026#34;0.0.0.0/0\u0026#34;, ] description = \u0026#34;Allow UDP WireGuard\u0026#34; } ingress { from_port = 22 to_port = 22 protocol = \u0026#34;tcp\u0026#34; cidr_blocks = [ \u0026#34;0.0.0.0/0\u0026#34;, ] description = \u0026#34;Allow SSH\u0026#34; } ingress { from_port = 0 to_port = 65535 protocol = \u0026#34;tcp\u0026#34; self = true description = \u0026#34;Allow self\u0026#34; } tags = { Name 
= \u0026#34;personal\u0026#34; Env = \u0026#34;personal\u0026#34; } } This should produce an EC2 instance and a public IP that you can use to access the cluster\u0026rsquo;s public services.\nWireGuard and Haproxy # For the next step, SSH into the instance and install Haproxy and WireGuard.\nsudo apt update sudo apt install wireguard sudo apt install haproxy wg genkey | sudo tee /etc/wireguard/private.key sudo chmod go= /etc/wireguard/private.key sudo wg pubkey \u0026lt; /etc/wireguard/private.key | sudo tee /etc/wireguard/public.key The sudo chmod go=\u0026hellip; command removes any permissions on the file for users and groups other than the root user, to ensure that only it can access the private key. Choose an IPv4 range for the WireGuard network. I\u0026rsquo;m going to use 10.0.0.0/24. Create a WireGuard configuration file for the endpoint.\nsudo nano /etc/wireguard/wg0.conf [Interface] Address = 10.0.0.1/32 SaveConfig = true ListenPort = 20000 PrivateKey = \u0026lt;private_key\u0026gt; [Peer] PublicKey = \u0026lt;peer_public_key\u0026gt; AllowedIPs = 10.0.0.2/32 [Peer] PublicKey = \u0026lt;peer_public_key\u0026gt; AllowedIPs = 10.0.0.3/32, 10.0.0.99/32, 10.0.0.100/32 # k8s host wg0 IP, Metallb IP\u0026#39;s In case you want a more detailed explanation of the WireGuard configuration in general, check out this article on DigitalOcean. Start the WireGuard service and enable it to start on boot.\nsystemctl start wg-quick@wg0.service systemctl enable wg-quick@wg0.service Now we need to configure Haproxy to expose our public services to the internet. I chose to use Haproxy as a \u0026ldquo;naked\u0026rdquo; TCP proxy because I\u0026rsquo;m using Traefik later on and it was far easier to set up than any other solution. 
( If you know a better way, please let me know )\nglobal nbproc 2 cpu-map 1 0 cpu-map 2 1 maxconn 150000 log /dev/log local0 log /dev/log local1 notice chroot /var/lib/haproxy stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners stats timeout 30s user haproxy group haproxy daemon # Default SSL material locations ca-base /etc/ssl/certs crt-base /etc/ssl/private # See: https://ssl-config.mozilla.org/#server=haproxy\u0026amp;server-version=2.0.3\u0026amp;config=intermediate ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384 ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256 ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets defaults log global mode http option httplog option dontlognull timeout connect 5000 timeout client 50000 timeout server 50000 errorfile 400 /etc/haproxy/errors/400.http errorfile 403 /etc/haproxy/errors/403.http errorfile 408 /etc/haproxy/errors/408.http errorfile 500 /etc/haproxy/errors/500.http errorfile 502 /etc/haproxy/errors/502.http errorfile 503 /etc/haproxy/errors/503.http errorfile 504 /etc/haproxy/errors/504.http frontend stats bind *:8080 stats enable stats uri /stats stats refresh 10s # stats admin stats auth admin:adminpassword # change this, obviously # or disable it :) userlist basic-auth-list group is-regular-user listen l1 bind 0.0.0.0:443 mode tcp timeout connect 4000 timeout client 180000 timeout server 180000 server k8s 10.0.0.99:443 # This is a k8s IP that exists only # in the virtual WireGuard network listen l2 bind 0.0.0.0:80 mode tcp timeout connect 4000 timeout client 180000 timeout server 180000 server k8s 10.0.0.99:80 # This is a k8s IP that exists only # in the virtual WireGuard network listen l4 bind 0.0.0.0:9000 mode tcp timeout connect 
4000 timeout client 180000 timeout server 180000 server minio 10.0.0.3:9000 # This is the private wg0 IP of the k8s host # as minio runs directly on the bare-metal # machine for performance reasons listen l5 bind 0.0.0.0:9001 mode tcp timeout connect 4000 timeout client 180000 timeout server 180000 server minio 10.0.0.3:9001 # This is the private wg0 IP of the k8s host # as minio runs directly on the bare-metal # machine for performance reasons I\u0026rsquo;m running Haproxy with \u0026lsquo;nbproc\u0026rsquo; as I want to take advantage of the multi-core CPU of the instance. I\u0026rsquo;m also using the \u0026lsquo;cpu-map\u0026rsquo; option to bind the processes to the cores.\nBlock structure # WireGuard creates a virtual network and allows traffic between all nodes in the network, so for Haproxy the IP 10.0.0.99 is just a machine in the local network. As a result, I can deploy services using a Traefik instance ( that has the IP 10.0.0.99/32 ) and expose them to the internet using Haproxy. I can also deploy other services behind other load balancers / Traefik instances and expose them only to the private network configured with WireGuard, without exposing them publicly, while staying compatible with a public DNS service ( I\u0026rsquo;m using Cloudflare ).\nA service like \u0026lsquo;https://s3.serverworks.es:9000\u0026rsquo; can have a public IP and be accessible from the internet, and a service like \u0026lsquo;https://devenv.serverworks.es\u0026rsquo; can be accessible only from the private network, with a public DNS record pointing to the private IP inside the WireGuard network ( 10.0.0.100/32 or others ). This also means that valid SSL certificates, generated by the same cert-manager mechanism, can be used for both internal and public services. 
The setup does not update the IP addresses automatically, but I might add this feature later on.\nIn general I would recommend against using a wildcard matching rule in the DNS ( *.domain.tld ), but if you have a single Traefik load balancer for a project you can eliminate the need for subdomain DNS updates if you set \u0026lsquo;*.domain.tld IN A \u0026lt;your_public_ip\u0026gt;\u0026rsquo; and use the \u0026lsquo;Host\u0026rsquo; header to route the traffic to the correct service.\nI can administer and access k8s and its internal services only from within the WireGuard VPN network, which makes this setup very secure.\nMicrok8s # I won\u0026rsquo;t cover installing a fresh Ubuntu ( 22.04.1 LTS ) or partitioning and mounting the disks, as I have a particular setup and I don\u0026rsquo;t want to cover all the possible scenarios.\nMy setup is like this ( all disks are formatted as ext4 and have 1 partition, except sda that holds the OS ):\nSSD Setup\nsda - OS sdb - /mnt/ssd_data1 ( rancher hostpath provisioner ) HDD Setup\nsdc - /mnt/hdd_data1 ( MinIO disk1 ) sdd - /mnt/hdd_data2 ( MinIO disk2 ) Swap\nsde - /mnt/swap ( 128GB swap ) ( I had an old unused SSD and decided to use it as swap ) You can\u0026rsquo;t use fewer than 2 disks / mounts / folders for MinIO ( more on this in the MinIO chapter ).\n💡 I\u0026rsquo;m using the following mount parameters for extra performance:\nnoatime,nodiratime,data=writeback,barrier=0 0 0 Install microk8s and enable the following addons:\nsudo snap install microk8s --classic --channel=1.25/stable ( I\u0026rsquo;m using 1.25 but you can use the latest stable version ) Set up your Docker Registry login for microk8s ( this is optional but I recommend it because Docker limits the number of pulls for unauthenticated users ):\nThis is option one, which basically uses your Docker login for all image pulls that are otherwise unspecified. Containerd login to docker.io:\nMicrok8s Docs\nnano /var/snap/microk8s/current/args/containerd-template.toml # Login 
to my docker ID ( you@domain.tld ) [plugins.\u0026#34;io.containerd.grpc.v1.cri\u0026#34;.registry.configs] [plugins.\u0026#34;io.containerd.grpc.v1.cri\u0026#34;.registry.configs.\u0026#34;docker.io\u0026#34;.auth] auth = \u0026#34;your_docker_auth_key\u0026#34; # obtained from your config.json file This is option two, which allows you to specify a registry secret for each image pull: 1 : Login to the Docker CLI 2 : Generate the Kubernetes secret from the Docker authentication token 3 : Update the Kubernetes deployment yaml to reference the secret\nFirst, using the Docker CLI, run the “docker login” command and provide the desired credentials. This creates a file at ~/.docker/config.json which contains the associated authentication token. ( I ran this on my MacBook and used the contents of the JSON )\nNext, copy the config.json file to the host where kubectl is installed. Run the following command to import the config.json to be stored as a secret in Kubernetes ( note that the secret name must be a valid DNS-1123 name, so use dashes, not underscores ). Customise the “config.json” path to point to the local copy.\nkubectl create secret generic registry-login \\ --from-file=.dockerconfigjson=~/config.json \\ --type=kubernetes.io/dockerconfigjson Or inline like this:\nkubectl create secret docker-registry registry-login \\ --docker-server=\u0026lt;your-registry-server\u0026gt; \\ --docker-username=\u0026lt;your-name\u0026gt; \\ --docker-password=\u0026lt;your-pword\u0026gt; \\ --docker-email=\u0026lt;your-email\u0026gt; The cluster is now configured to authenticate remotely; the last step is to update the deployment configuration to reference the secret. 
In the yaml definition of the deployment, add the reference to ‘imagePullSecrets’ as follows; it should be at the same level as the ‘containers’ definition.\nspec: imagePullSecrets: - name: registry-login containers: Enable the following addons:\nsudo microk8s enable community sudo microk8s enable dns rbac metrics-server \\ metallb:10.0.0.99-10.0.0.100 MinIO # Reference\nwget https://dl.min.io/server/minio/release/linux-amd64/archive/minio_20230210184839.0.0_amd64.deb -O minio.deb sudo dpkg -i minio.deb After installing MinIO, create or edit /etc/default/minio:\n# MINIO_ROOT_USER and MINIO_ROOT_PASSWORD set the root account for the MinIO server. # This user has unrestricted permissions to perform S3 and administrative API operations on any resource in the deployment. # Omit to use the default values \u0026#39;minioadmin:minioadmin\u0026#39;. # MinIO recommends setting non-default values as a best practice, regardless of environment. MINIO_ROOT_USER=\u0026lt;your_minio_root_user\u0026gt; MINIO_ROOT_PASSWORD=\u0026lt;your_minio_root_password\u0026gt; # MINIO_VOLUMES sets the storage volumes or paths to use for the MinIO server. # The specified path uses MinIO expansion notation to denote a sequential series of drives between 1 and 2, inclusive. # All drives or paths included in the expanded drive list must exist *and* be empty or freshly formatted for MinIO to start successfully. MINIO_VOLUMES=\u0026#34;/mnt/hdd_data{1...2}\u0026#34; # MINIO_SERVER_URL sets the hostname of the local machine for use with the MinIO Server. # MinIO assumes your network control plane can correctly resolve this hostname to the local machine. # Uncomment the following line and replace the value with the correct hostname for the local machine. MINIO_SERVER_URL=\u0026#34;\u0026lt;your_minio_server_url\u0026gt;:port\u0026#34; # i.e. https://s3.serverworks.es:9000 MINIO_BROWSER_REDIRECT_URL=\u0026#34;\u0026lt;your_minio_browser_redirect_url\u0026gt;:port\u0026#34; # i.e. 
https://s3.serverworks.es:9001 # --console-address = admin web page / --address = S3 endpoint MINIO_OPTS=\u0026#34;--console-address :9001 --address :9000\u0026#34; # change the ports for your setup 💡 You can use the following command to generate a random password:\nopenssl rand -base64 32 This will generate a 44-character base64 string ( encoding 32 random bytes ) with a mix of upper and lower case letters, numbers and special characters.\nMinIO is installed directly on the k8s cluster host. It is not installed via helm. It runs as an unprivileged user \u0026ldquo;minio-user\u0026rdquo; and has SSL certs in /home/minio-user/.minio/certs, and the certificates will need to be created and managed with Let\u0026rsquo;s Encrypt. ( cert-manager is only for k8s services )\nCreate a file named \u0026ldquo;cloudflare.ini\u0026rdquo; with the following content:\nIf you are using Cloudflare, you can find the instructions here; if not, use your DNS provider\u0026rsquo;s instructions. # Cloudflare API token used by Certbot dns_cloudflare_api_token = \u0026lt;your_cloudflare_api_token\u0026gt; Generate your SSL cert with certbot:\ncertbot certonly --dns-cloudflare --dns-cloudflare-credentials cloudflare.ini --dns-cloudflare-propagation-seconds 60 -d \u0026lt;your_s3_domain\u0026gt; Copy the generated certs to the minio user\u0026rsquo;s home directory, and adjust the file names and permissions:\ncp /etc/letsencrypt/live/\u0026lt;your_s3_domain\u0026gt;/privkey.pem /home/minio-user/.minio/certs/private.key cp /etc/letsencrypt/live/\u0026lt;your_s3_domain\u0026gt;/fullchain.pem /home/minio-user/.minio/certs/public.crt I wrote a bash script to automate the process of copying the certs to the minio user\u0026rsquo;s home directory. 
MinIO wants the certs in that specific path to work with SSL properly, and I haven\u0026rsquo;t been able to change MinIO\u0026rsquo;s config to just load the certs from \u0026lsquo;/etc/letsencrypt/live/\u0026lt;your_s3_domain\u0026gt;/\u0026rsquo;; if you know how, leave a comment.\n#!/bin/bash # Define the source and destination directories src_dir=\u0026#34;/etc/letsencrypt/live/\u0026lt;your_s3_domain\u0026gt;\u0026#34; dest_dir=\u0026#34;/home/minio-user/.minio/certs/\u0026#34; # Copy fullchain.pem to public.crt cp -f \u0026#34;$src_dir/fullchain.pem\u0026#34; \u0026#34;$dest_dir/public.crt\u0026#34; # Copy privkey.pem to private.key cp -f \u0026#34;$src_dir/privkey.pem\u0026#34; \u0026#34;$dest_dir/private.key\u0026#34; # Change ownership of the destination directory to minio-user:minio-user chown -R minio-user:minio-user \u0026#34;$dest_dir\u0026#34; # Restart the minio service to apply the changes systemctl restart minio.service After you run \u0026ldquo;systemctl start minio\u0026rdquo; you can check the details like this:\njournalctl -f -u minio.service The output should show the server starting and the endpoints it is listening on.\nNow you can access your MinIO server via the browser at \u0026lt;https://\u0026lt;your_minio_server_url\u0026gt;:9001\u0026gt;\nYou can play around and create buckets, upload files, etc. 
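Let\u0026rsquo;s Encrypt certificates expire after 90 days, so the copy script above is worth wiring into scheduled renewal. A sketch as a cron entry, assuming the script was saved as /usr/local/bin/minio-cert-copy.sh ( the path and cron file name are illustrative ):

```
# /etc/cron.d/minio-certs (illustrative): certbot only renews certificates
# that are close to expiry, and --deploy-hook runs the given script only
# after a renewal actually happened.
17 3 * * * root certbot renew --quiet --deploy-hook /usr/local/bin/minio-cert-copy.sh
```

Alternatively, if certbot was installed from the OS package, a renewal timer usually already exists, and dropping the script into /etc/letsencrypt/renewal-hooks/deploy/ achieves the same thing without any cron entry.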
The monitoring part will be covered later on, when deploying the Prometheus and Grafana stack to microk8s ( see the Monitoring section ).\n💡 Read the MinIO documentation for the MinIO console.\nMinIO Client # Reference\nI suggest you install the MinIO client on your local machine; it will make your life easier when testing, updating and managing your S3 setup.\nThis guide assumes you have downloaded and installed the MinIO client on your local machine.\nEdit your MinIO client config file ( located at ~/.mc/config.json ) and add your setup; change the values to match your setup and the name of the alias to whatever you want ( mine is local ).\n{ \u0026#34;version\u0026#34;: \u0026#34;10\u0026#34;, \u0026#34;aliases\u0026#34;: { \u0026#34;local\u0026#34;: { \u0026#34;url\u0026#34;: \u0026#34;https://\u0026lt;your_minio_server_url\u0026gt;:9000\u0026#34;, \u0026#34;accessKey\u0026#34;: \u0026#34;\u0026lt;s3_username\u0026gt;\u0026#34;, \u0026#34;secretKey\u0026#34;: \u0026#34;\u0026lt;s3_password\u0026gt;\u0026#34;, \u0026#34;api\u0026#34;: \u0026#34;S3v4\u0026#34;, \u0026#34;path\u0026#34;: \u0026#34;auto\u0026#34; } } } Test your setup by listing the buckets:\nmc ls local/ # should list the buckets if you created any K8S Services # I like to start by setting up monitoring, and I\u0026rsquo;m using Prometheus and Grafana for that, with the kube-prometheus-stack chart being what I\u0026rsquo;ll be using for all my setups.\nI\u0026rsquo;m using Traefik to expose and secure Grafana with SSL, and Traefik uses a separate certificate management mechanism from the rest of the services, so I will be covering the setup of Traefik and Let\u0026rsquo;s Encrypt in the first section, even before the Prometheus stack is configured. For now we have the internal monitoring of the k8s cluster, and we can see that with a client like Infra ( I\u0026rsquo;m using the free version ).\nI have no affiliation with Infra, I just like the app, and I might pay the ~$100 for the pro version in the future. Traefik # Needs MetalLB enabled. 
Reference\nhelm repo add traefik https://helm.traefik.io/traefik helm repo update kubectl create namespace traefik I could probably write an entire article on Traefik configuration ( I might ) but for now I\u0026rsquo;m sticking to the basics plus Cloudflare and Let\u0026rsquo;s Encrypt, which are mandatory for public-facing services.\nA basic values.yaml would look like this:\n--- experimental: # kubernetesGateway: # Enable the Kubernetes Gateway API support if you want; I did not have time to test it. # enabled: true # https://traefik.io/blog/getting-started-with-traefik-and-the-new-kubernetes-gateway-api/ image: name: traefik tag: 2.9.6 # Whether Role Based Access Control objects like roles and rolebindings should be created rbac: enabled: true logs: general: level: INFO # Fix for acme.json file being changed to 660 from 600 # https://github.com/traefik/traefik-helm-chart/issues/164 podSecurityContext: fsGroup: null additionalArguments: - --global.sendAnonymousUsage=false # Disable anonymous usage - --entrypoints.metrics.address=:9100 - --entrypoints.websecure.http.tls.certresolver=cloudflare - --certificatesresolvers.cloudflare.acme.dnschallenge.provider=cloudflare - --certificatesresolvers.cloudflare.acme.dnschallenge.disablePropagationCheck=true - --certificatesresolvers.cloudflare.acme.dnschallenge.delayBeforeCheck=60 - --certificatesresolvers.cloudflare.acme.email=\u0026lt;your_email\u0026gt; - --certificatesresolvers.cloudflare.acme.dnschallenge.resolvers=1.1.1.1 # Cloudflare DNS - --certificatesresolvers.cloudflare.acme.storage=/data/acme.json ports: web: redirectTo: websecure # Redirect HTTP to HTTPS metrics: expose: true # We want to have Traefik metrics and maybe tracing later on # The exposed port for this service, reference here : https://github.com/traefik/traefik-helm-chart/issues/626 exposedPort: 9100 # The port protocol (TCP/UDP) protocol: TCP env: - name: CF_API_EMAIL valueFrom: secretKeyRef: key: email name: cloudflare-api-credentials - name: 
CF_API_KEY valueFrom: secretKeyRef: key: apiKey name: cloudflare-api-credentials ingressRoute: dashboard: enabled: false persistence: # storageClass: \u0026lt;your-storage-class\u0026gt; enabled: true path: /data size: 1Gi # 1Gi is the minimum as we only store some json files. deployment: enabled: true # Number of pods of the deployment replicas: 1 # Additional deployment annotations (e.g. for jaeger-operator sidecar injection) annotations: {} # Additional pod annotations (e.g. for mesh injection or prometheus scraping) podAnnotations: {} # Additional containers (e.g. for metric offloading sidecars) additionalContainers: [] # Additional initContainers (e.g. for setting file permission as shown below) initContainers: # The \u0026#34;volume-permissions\u0026#34; init container is required if you run into permission issues. # Related issue: https://github.com/containous/traefik/issues/6972 - name: volume-permissions image: busybox:1.35 command: [\u0026#34;sh\u0026#34;, \u0026#34;-c\u0026#34;, \u0026#34;touch /data/acme.json \u0026amp;\u0026amp; chmod -Rv 600 /data/* \u0026amp;\u0026amp; chown 65532:65532 /data/acme.json\u0026#34;] volumeMounts: - name: data mountPath: /data # Custom pod DNS policy. Apply if `hostNetwork: true` # dnsPolicy: ClusterFirstWithHostNet # Always use resource definitions and limits, even for testing. 
resources: requests: cpu: \u0026#34;100m\u0026#34; memory: \u0026#34;128Mi\u0026#34; limits: cpu: \u0026#34;500m\u0026#34; memory: \u0026#34;512Mi\u0026#34; I thought of a way to make the comments more compact, so I included some key points in the actual file.\nBefore deploying the helm chart, make sure you define \u0026rsquo;traefik-secrets.yaml\u0026rsquo;:\n--- apiVersion: v1 kind: Secret metadata: name: traefik-dashboard-auth namespace: traefik data: users: YWRtaW46JGFwcjEkTk1hUGxWbmIkd1laUkhMbnVCNThrWXpheVdhZmtzLwoK --- apiVersion: v1 kind: Secret metadata: name: cloudflare-api-credentials namespace: traefik type: Opaque stringData: email: \u0026lt;your_cloudflare_email\u0026gt; apiKey: \u0026lt;your_cloudflare_api_key\u0026gt; Here\u0026rsquo;s an example of how to generate Traefik credentials ( obviously use a smarter user / pass combination ):\nhtpasswd -nb admin qwer1234 | base64 This will generate a key:\nYWRtaW46JGFwcjEkTk1hUGxWbmIkd1laUkhMbnVCNThrWXpheVdhZmtzLwoK Check that Traefik is up before enabling the dashboard:\nkubectl get pods -n traefik Apply the services:\nkubectl apply -f traefik-secrets.yaml # This is the current version, but it might change in the future kubectl apply -k \u0026#34;github.com/kubernetes-sigs/gateway-api/config/crd?ref=v0.4.0\u0026#34; helm install traefik traefik/traefik --namespace=traefik --values=traefik-values.yaml Add the dashboard IngressRoute and Middleware:\nI can\u0026rsquo;t stress this enough: use a complex user / password combination. This is configured as a publicly exposed service. 
--- apiVersion: traefik.containo.us/v1alpha1 kind: Middleware metadata: name: traefik-dashboard-basicauth namespace: traefik spec: basicAuth: secret: traefik-dashboard-auth --- apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: traefik-dashboard namespace: traefik spec: entryPoints: - websecure routes: - match: Host(`\u0026lt;your_domain\u0026gt;`) \u0026amp;\u0026amp; PathPrefix(`/dashboard`) or Host(`\u0026lt;your_sub_domain\u0026gt;`) # A matter of choice. kind: Rule middlewares: - name: traefik-dashboard-basicauth namespace: traefik services: - name: api@internal kind: TraefikService Apply the dashboard:\nkubectl apply -f traefik-dashboard.yaml You should be able to log in using your credentials.\nMonitoring # For monitoring, we will use Prometheus and Grafana as a helm distribution from the official helm repository.\nhelm repo add prometheus-community https://prometheus-community.github.io/helm-charts # Or helm repo update # if you already have the repo I will again be using a values.yaml file to configure the helm chart, which you might need to customise to your needs.\ngrafana: persistence: enabled: true storageSpec: volumeClaimTemplate: spec: # storageClassName: \u0026lt;your-storage-class\u0026gt; accessModes: [\u0026#34;ReadWriteOnce\u0026#34;] resources: requests: storage: 10Gi additionalDataSources: [] admin: existingSecret: \u0026#34;\u0026#34; passwordKey: admin-password userKey: admin-user adminPassword: \u0026lt;your_password\u0026gt; adminUser: \u0026lt;your_user\u0026gt; prometheus: service: nodePort: 30000 # To expose prometheus on the local node if you want; we will use this later to get the MinIO graphs in the MinIO dashboard. type: NodePort prometheusSpec: retention: 7d # set this to a conservative value, as Prometheus 
uses a lot of disk space serviceMonitorSelectorNilUsesHelmValues: false # this allows ServiceMonitor definitions in all namespaces storageSpec: volumeClaimTemplate: spec: # storageClassName: \u0026lt;your-storage-class\u0026gt; accessModes: [\u0026#34;ReadWriteOnce\u0026#34;] resources: requests: storage: 50Gi Install the helm chart and create the namespace:\nhelm install monitoring-stack prometheus-community/kube-prometheus-stack -f monitoring/values.yaml --namespace monitoring --create-namespace Wait for the stack to be fully ready; you can check by running:\nkubectl get pods -n monitoring Expose the Grafana service:\n--- apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: namespace: monitoring name: grafana spec: entryPoints: - websecure routes: - kind: Rule match: Host(`\u0026lt;your_domain\u0026gt;`) \u0026amp;\u0026amp; PathPrefix(`/grafana`) or Host(`\u0026lt;your_sub_domain\u0026gt;`) # A matter of choice. services: - name: monitoring-stack-grafana port: 80 tls: certResolver: cloudflare kubectl apply -f grafana-ingress.yaml -n monitoring You should be able to log in using your credentials.\n","date":"February 7, 2023","permalink":"/posts/running-my-cloud/","section":"Posts","summary":"","title":"Running my hybrid cloud"},{"content":"","date":"February 7, 2023","permalink":"/tags/s3/","section":"Tags","summary":"","title":"S3"},{"content":"","date":"February 7, 2023","permalink":"/tags/ssl/","section":"Tags","summary":"","title":"SSL"},{"content":"","date":"February 7, 2023","permalink":"/tags/","section":"Tags","summary":"","title":"Tags"},{"content":"","date":"February 7, 2023","permalink":"/tags/tailscale/","section":"Tags","summary":"","title":"Tailscale"},{"content":"","date":"February 7, 2023","permalink":"/tags/terraform/","section":"Tags","summary":"","title":"Terraform"},{"content":"","date":"February 7, 2023","permalink":"/","section":"Theodor Ganescu","summary":"","title":"Theodor Ganescu"},{"content":"","date":"February 7, 
2023","permalink":"/tags/wireguard/","section":"Tags","summary":"","title":"Wireguard"}]