Create an S3 WORM object storage server



Using a Raspberry Pi as a MinIO cloud storage server with immutable WORM locking enabled

In this article, we'll create a home/test-lab anti-ransomware solution for backup data storage using:
  • A Raspberry Pi 4 Model B with two 512GB USB 3.0 flash disk drives
  • An immutable, write-once-read-many (WORM) enabled MinIO S3 bucket with erasure coding
  • A virtual air gap service with a schedule that enables/disables the network to allow inbound backups

The WORM S3 bucket storage created by this project will be the third backup target for my home data backups. If everything else gets hit with ransomware, in theory the immutable S3 bucket storage will remain unaffected.

    MinIO 'Object Locking' (also referred to as 'Object Retention') enforces Write-Once Read-Many (WORM) immutability to protect versioned objects from deletion. MinIO supports both duration-based object retention and indefinite Legal Hold retention. This feature depends upon 'Erasure Coding', which is (to quote MinIO's docs) "a data redundancy and availability feature that allows MinIO deployments to automatically reconstruct objects on-the-fly despite the loss of multiple drives or nodes in the cluster. Erasure Coding provides object-level healing with significantly less overhead than adjacent technologies such as RAID or replication." The intended design is a multi-node cluster in which the drives attached to the nodes form an 'erasure set'. MinIO stripes new object data, along with parity blocks, across the erasure set's drives. In this configuration, the loss of one or more drives, or even an entire node, can be recovered from.

    Object Locking key facts:
  • MinIO Object Locking meets SEC17a-4(f), FINRA 4511(C), and CFTC 1.31(c)-(d) requirements as per Cohasset Associates.
  • MinIO's default bucket WORM lock minimum retention period for all object versions is 30 days.
  • MinIO blocks any attempt to delete object versions held under WORM lock. The earliest possible time after which a client may delete the version is when the lock expires.
  • MinIO object locking is feature and API compatible with AWS S3. (For more info on AWS S3 object lock, see https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html)
  • Object locking can only be enabled during bucket creation, per S3 behavior. One cannot enable object locking on an existing bucket that was created w/o locking enabled. And once created, one cannot disable Object Lock or suspend versioning for the bucket.
  • Retention rules can be configured at any time after object lock bucket creation.
  • Object locking requires versioning which is enabled by default upon object lock bucket creation.

    S3 Object Lock can exist in two retention modes:
  • Governance mode: users can't overwrite or delete an object version or alter its lock settings unless they have special permissions.
  • Compliance mode: a protected object version can't be overwritten or deleted by any user, including the root user. When an object is locked in compliance mode, its retention mode can't be changed, and its retention period can't be shortened. Compliance mode helps ensure that an object version can't be overwritten or deleted for the duration of the retention period.
One can also enable indefinite Legal Hold retention for an object using the MinIO Console or the 'mc' CLI. A legal hold supersedes any WORM retention rules in place until a user with sufficient permissions lifts the hold. I won't be covering this, but you can read more about it here: https://min.io/docs/minio/kubernetes/upstream/administration/object-management/object-retention.html#enable-legal-hold-retention
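As an aside: once a locked bucket exists, a default Governance-mode retention rule can be applied with a single 'mc' command. A minimal sketch, assuming the 'cos' alias and 'wormbucket' bucket that we create later in this article:

mc retention set --default GOVERNANCE "30d" cos/wormbucket

...and the bucket's default retention rule can then be checked with 'mc retention info --default cos/wormbucket'.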



For the purposes of this lab exercise, we will be installing and configuring MinIO with Object Locking in Governance mode on a single Raspberry Pi 4 Model B using two USB thumb drives (SanDisk 512GB Ultra Fit USB 3.1 Flash Drive) partitioned into four volumes/filesystems.
    Warning: a single-node MinIO deployment with two USB thumb drives configured for S3 bucket object locking is NOT the intended design for a production environment. There are many points of failure in this non-enterprise-level configuration, such as:
  • A single node instead of a cluster (the more cluster nodes, the better).
  • Storage based upon two USB 3.0-connected disks (thumb drives, no less!).
  • A single NIC on a single node.
  • A self-signed CA certificate for HTTPS/SSL/TLS access and encryption, which is inherently less secure.

With that warning out of the way, let's begin!


We'll assume the Raspberry Pi 4 Model B already has the Raspbian OS installed and its hostname set.
In this lab example, I've already set the hostname via the UI (yes, I am that lazy) and then configured a static IP address as follows:
- set a static IP:

vi /etc/dhcpcd.conf

...append the following lines:

interface wlan0
static ip_address=192.168.1.125/24
static routers=192.168.1.99
static domain_name_servers=209.18.47.62


- save and reboot




Configure filesystem storage



---- LVM: We're going to create 4 separate LVM (Logical Volume Manager) volume groups, logical volumes, and filesystems. Overview:

  • Our server has two physical 512GB USB 3.0 thumb drives detected by the Raspbian OS as /dev/sda and /dev/sdb per 'fdisk', example:

    root@cloudy:~# fdisk -l | egrep 'Disk \/|Disk model:'

    Disk /dev/ram0: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/ram1: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/ram2: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/ram3: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/ram4: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/ram5: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/ram6: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/ram7: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/ram8: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/ram9: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/ram10: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/ram11: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/ram12: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/ram13: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/ram14: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/ram15: 4 MiB, 4194304 bytes, 8192 sectors
    Disk /dev/mmcblk0: 119.08 GiB, 127865454592 bytes, 249737216 sectors
    Disk /dev/sda: 460.27 GiB, 494206451712 bytes, 965246976 sectors
    Disk model: SanDisk 3.2Gen1
    Disk /dev/sdb: 460.27 GiB, 494206451712 bytes, 965246976 sectors
    Disk model: SanDisk 3.2Gen1


  • We'll write a GPT partition table to both using 'parted'.
  • We'll need to install LVM and XFS.
  • Then we'll create the 4 separate LVM volume groups and logical volumes, and format 4 XFS filesystems (XFS is MinIO's preferred filesystem).
-- First create GPT partition label on each USB thumbdrive:

root@cloudy:~# parted /dev/sda

GNU Parted 3.4
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) mklabel gpt
Warning: The existing disk label on /dev/sda will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) print
Model: USB SanDisk 3.2Gen1 (scsi)
Disk /dev/sda: 494GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags


...and then partition for a total of 4 XFS partitions (two per thumb drive). Note that on GPT disks, 'primary' and 'extended' below are simply partition names, not partition types:

(parted) mkpart primary xfs 1MB 235520MB

(parted) mkpart extended xfs 235520MB 471040MB

...'p' (print) the current partitions for sda:

(parted) p

Model: USB SanDisk 3.2Gen1 (scsi)
Disk /dev/sda: 494GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name      Flags
 1      1049kB  236GB  236GB  xfs          primary
 2      236GB   471GB  236GB  xfs          extended

(parted) quit
Information: You may need to update /etc/fstab.

...repeat the same steps above for /dev/sdb


-- Install LVM: (LVM2 is not installed by default on Raspbian)

# apt-get install lvm2


- Create 4 physical volumes based upon the GPT partitions we created with parted above:

root@cloudy:~# pvcreate /dev/sda1
Physical volume "/dev/sda1" successfully created.

root@cloudy:~# pvcreate /dev/sda2
Physical volume "/dev/sda2" successfully created.

root@cloudy:~# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.

root@cloudy:~# pvcreate /dev/sdb2
Physical volume "/dev/sdb2" successfully created.


- Create 4 volume groups:

root@cloudy:~# vgcreate xfsminio_vg1 /dev/sda1
Volume group "xfsminio_vg1" successfully created

root@cloudy:~# vgcreate xfsminio_vg2 /dev/sda2
Volume group "xfsminio_vg2" successfully created

root@cloudy:~# vgcreate xfsminio_vg3 /dev/sdb1
Volume group "xfsminio_vg3" successfully created

root@cloudy:~# vgcreate xfsminio_vg4 /dev/sdb2
Volume group "xfsminio_vg4" successfully created


- Create 4 logical volumes:

root@cloudy:~# lvcreate -L +219G -n xfsminio_1 xfsminio_vg1
Logical volume "xfsminio_1" created.

root@cloudy:~# lvcreate -L +219G -n xfsminio_2 xfsminio_vg2
Logical volume "xfsminio_2" created.

root@cloudy:~# lvcreate -L +219G -n xfsminio_3 xfsminio_vg3
Logical volume "xfsminio_3" created.

root@cloudy:~# lvcreate -L +219G -n xfsminio_4 xfsminio_vg4
Logical volume "xfsminio_4" created.


-- Install XFS: (XFS is not installed by default on Raspbian, but it is MinIO's preferred filesystem)

# apt-get install xfsprogs

...and confirm it is installed and the kernel module loads:

root@cloudy:~# modprobe -v xfs
insmod /lib/modules/5.15.61-v8+/kernel/fs/xfs/xfs.ko.xz

root@cloudy:~# grep xfs /proc/filesystems
xfs

root@cloudy:~# lsmod | grep xfs
xfs 1613824 0

root@cloudy:~# modinfo xfs

filename: /lib/modules/5.15.61-v8+/kernel/fs/xfs/xfs.ko.xz
license: GPL
description: SGI XFS with ACLs, security attributes, realtime, quota, no debug enabled
author: Silicon Graphics, Inc.
alias: fs-xfs
srcversion: 7DF29575A772E2F414B4380
depends:
intree: Y
name: xfs
vermagic: 5.15.61-v8+ SMP preempt mod_unload modversions aarch64
root@cloudy:~#


- Format four XFS filesystems:
mkfs.xfs /dev/xfsminio_vg1/xfsminio_1

mkfs.xfs /dev/xfsminio_vg2/xfsminio_2

mkfs.xfs /dev/xfsminio_vg3/xfsminio_3

mkfs.xfs /dev/xfsminio_vg4/xfsminio_4



- Create the mount directories and mount the 4 filesystems:

root@cloudy:~# cd /
root@cloudy:/# mkdir xfsminio_1 xfsminio_2 xfsminio_3 xfsminio_4

root@cloudy:/dev# mount /dev/xfsminio_vg1/xfsminio_1 /xfsminio_1

root@cloudy:/dev# mount /dev/xfsminio_vg2/xfsminio_2 /xfsminio_2

root@cloudy:/dev# mount /dev/xfsminio_vg3/xfsminio_3 /xfsminio_3

root@cloudy:/dev# mount /dev/xfsminio_vg4/xfsminio_4 /xfsminio_4


- Now we can see we have four 219GB XFS-formatted filesystems:
root@cloudy:/dev# df -h

Filesystem Size Used Avail Use% Mounted on
/dev/root 117G 3.4G 109G 3% /
devtmpfs 3.7G 0 3.7G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 1.6G 1.3M 1.6G 1% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
/dev/mmcblk0p1 255M 31M 225M 12% /boot
tmpfs 782M 20K 782M 1% /run/user/1000
/dev/mapper/xfsminio_vg1-xfsminio_1 219G 1.6G 218G 1% /xfsminio_1
/dev/mapper/xfsminio_vg2-xfsminio_2 219G 1.6G 218G 1% /xfsminio_2
/dev/mapper/xfsminio_vg3-xfsminio_3 219G 1.6G 218G 1% /xfsminio_3
/dev/mapper/xfsminio_vg4-xfsminio_4 219G 1.6G 218G 1% /xfsminio_4



We want these filesystems to mount automatically at boot time. To do that, we need to populate /etc/fstab with mount information.
1) Run 'blkid' to see the UUIDs for the four new filesystems (we'll reference them in /etc/fstab):

root@cloudy:/dev# blkid

/dev/mmcblk0p1: LABEL_FATBOOT="boot" LABEL="boot" UUID="29F5-65C4" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="f5b253a5-01"
/dev/mmcblk0p2: LABEL="rootfs" UUID="cbe4d267-24de-4402-9a4b-1413a1da5eb8" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="f5b253a5-02"
/dev/sda1: UUID="kmLqPd-02U8-tGhA-FJz8-gqCo-WuEk-qG9zZm" TYPE="LVM2_member" PARTLABEL="primary" PARTUUID="5b2886f1-f0ef-47f8-9cb0-edd03402348e"
/dev/sda2: UUID="fI5Urx-Zrce-FWq0-Mf38-3Weg-OSPd-oOXeJP" TYPE="LVM2_member" PARTLABEL="extended" PARTUUID="ca789999-6ebd-4ce4-a45b-ed88185a2e5f"
/dev/sdb1: UUID="roesK1-xCyi-7aIV-PDmJ-K9ho-Un3M-FW8vs0" TYPE="LVM2_member" PARTLABEL="primary" PARTUUID="a7bff4b9-8e21-4796-80fe-8beaa2392247"
/dev/sdb2: UUID="Ksp7wC-RWyM-Dmn1-ApGf-C1a2-SF7P-29ngKP" TYPE="LVM2_member" PARTLABEL="extended" PARTUUID="a7f6eaef-2064-4775-8abf-fb7502559ff6"
/dev/mapper/xfsminio_vg1-xfsminio_1: UUID="5e5b405f-d8d7-4ca7-852e-41728f583a9b" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/xfsminio_vg2-xfsminio_2: UUID="b90e8b54-a0f8-4d9b-b3bb-3cd7f545f338" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/xfsminio_vg3-xfsminio_3: UUID="e1b47735-8601-4434-82c0-1fbff8c506fa" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/xfsminio_vg4-xfsminio_4: UUID="669bc36b-6669-48d5-9f1c-9c07b4678962" BLOCK_SIZE="512" TYPE="xfs"


2) Append entries with the UUIDs, mount points, and mount options to /etc/fstab so the new filesystems mount at boot time:

root@cloudy:/dev# echo 'UUID=5e5b405f-d8d7-4ca7-852e-41728f583a9b /xfsminio_1 xfs defaults 1 1' >> /etc/fstab

root@cloudy:/dev# echo 'UUID=b90e8b54-a0f8-4d9b-b3bb-3cd7f545f338 /xfsminio_2 xfs defaults 1 1' >> /etc/fstab

root@cloudy:/dev# echo 'UUID=e1b47735-8601-4434-82c0-1fbff8c506fa /xfsminio_3 xfs defaults 1 1' >> /etc/fstab

root@cloudy:/dev# echo 'UUID=669bc36b-6669-48d5-9f1c-9c07b4678962 /xfsminio_4 xfs defaults 1 1' >> /etc/fstab
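To sanity-check the new fstab entries without rebooting, you can unmount the four filesystems and remount everything from /etc/fstab (a suggested verification, not part of the original steps):

umount /xfsminio_1 /xfsminio_2 /xfsminio_3 /xfsminio_4
mount -a
df -h | grep xfsminio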




Install MinIO and the MinIO client 'mc'



cd /usr/bin

wget https://dl.minio.io/server/minio/release/linux-arm/minio

wget https://dl.minio.io/client/mc/release/linux-arm/mc

chmod 755 minio

chmod 755 mc
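
To confirm the ARM binaries run on this hardware, do a quick version check (output will vary with the release you downloaded):

minio --version
mc --version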




Configure Encryption



--- SSL/HTTPS:
To encrypt the data written from source to the MinIO S3 WORM storage, we need to enable HTTPS, and for that we need to either send a Certificate Signing Request (CSR) to a well-known Certificate Authority (CA)...or we can take a shortcut and install a self-signed CA certificate, since this is just a lab/home test environment.
Configure a self-signed SSL certificate for HTTPS encryption using MinIO's 'dead simple' 'certgen' tool to generate self-signed certificates:

- Execute:

wget https://github.com/minio/certgen/releases/latest/download/certgen-linux-arm64

chmod 755 certgen-linux-arm64

mkdir -p /root/.minio/certs

cd /root/.minio/certs


Command syntax for 'certgen': /root/certgen-linux-arm64 -host hostname_here,ip_address_here

...example:

/root/certgen-linux-arm64 -host cloudy,192.168.1.125

...now inside the /root/.minio/certs directory we find the private.key and public.crt
root@cloudy:~/.minio/certs# ls -l

total 20
-rw------- 1 root root 241 Sep 20 20:22 private.key <------ certgen created this
-rw-r--r-- 1 root root 700 Sep 20 20:22 public.crt <------ certgen created this
root@cloudy:~/.minio/certs#


- We can start the MinIO server with the following command and confirm that HTTPS connects successfully via a browser:
Syntax: minio server --address ":443" --console-address ":44683" /filesystem1 /filesystem2 /filesystem3 /filesystem4

Example:
/usr/bin/minio server --address ":443" --console-address ":44683" /xfsminio_1 /xfsminio_2 /xfsminio_3 /xfsminio_4

Confirm a browser can successfully connect:
- in the browser address bar, type: https://servername:443
...you should see a MinIO console login page. Log in with your MinIO root credentials (the RootUser/RootPass shown in the server's startup output).

After confirming that works, stop the server (kill -15 `pgrep minio`)



Create a MinIO server startup script to ease starting and stopping the MinIO server. We'll also point a systemd service at this script later to start the MinIO server at boot time.

- Using vi or nano (in this example, vi), create a new script named startminio.sh with the contents sketched below the 'vi' command:

# vi startminio.sh
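
The original script body isn't reproduced here; the following is a minimal sketch consistent with the 'ps' output shown later in this article. The MINIO_ROOT_USER/MINIO_ROOT_PASSWORD exports are an assumption about how the 'cloudburst' credentials were supplied; substitute your own values:

#!/bin/bash
# startminio.sh - start the MinIO server across the four XFS filesystems.
# Placeholder root credentials (assumption) -- replace with your own:
export MINIO_ROOT_USER=cloudburst
export MINIO_ROOT_PASSWORD='f1rLH1MYEdSLvVWJDko0'

# --address: serve the S3 API over HTTPS on port 443
#   (the private.key/public.crt pair is auto-loaded from /root/.minio/certs)
# --console-address: serve the web console on port 44683
/usr/bin/minio server --address ":443" --console-address ":44683" \
    /xfsminio_1 /xfsminio_2 /xfsminio_3 /xfsminio_4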


- Make the startminio.sh script executable:
# chmod +x startminio.sh

- start the MinIO server:
# /root/startminio.sh

Example:
root@cloudy:~# ./startminio.sh

MinIO Object Storage Server
Copyright: 2015-2022 MinIO, Inc.
License: GNU AGPLv3
Version: RELEASE.2022-09-17T00-09-45Z (go1.18.6 linux/arm)

Status: 4 Online, 0 Offline.
API: https://192.168.1.125 https://127.0.0.1
RootUser: cloudburst
RootPass: f1rLH1MYEdSLvVWJDko0
Console: https://192.168.1.125:44683 https://127.0.0.1:44683
RootUser: cloudburst
RootPass: f1rLH1MYEdSLvVWJDko0

Command-line: https://docs.min.io/docs/minio-client-quickstart-guide
$ mc alias set myminio https://192.168.1.125 cloudburst f1rLH1MYEdSLvVWJDko0

Documentation: https://docs.min.io




Configure S3 object storage



- Use 'mc' tool to add the Object Storage by running the following command:
mc config host add ALIAS COS-ENDPOINT ACCESS-KEY SECRET-KEY

...example (alias: cos, COS-ENDPOINT: https://cloudy, access-key: cloudburst, secret-key: the RootPass string from the startup output):
root@cloudy:~# mc config host add cos https://cloudy cloudburst f1rLH1MYEdSLvVWJDko0 --api S3v4

mc: Configuration written to `/root/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/root/.mc/share`.
mc: Initialized share uploads `/root/.mc/share/uploads.json` file.
mc: Initialized share downloads `/root/.mc/share/downloads.json` file.
Fingerprint of cos public key: bd521fce31116bc4ecc579fd137b77ffae316c974c115b4cc97cb529ccea88b6
Confirm public key y/N:
Added `cos` successfully.



- Create the immutable S3 bucket: mc mb --debug -l ALIAS/BUCKETNAME (the -l / --with-lock flag enables object locking on the new bucket)

...example:

root@cloudy:~# mc mb --debug -l cos/wormbucket

mc: PUT /wormbucket/ HTTP/1.1
Host: cloudy
User-Agent: MinIO (linux; arm) minio-go/v7.0.36 mc/RELEASE.2022-09-16T09-16-47Z
Content-Length: 0
Authorization: AWS4-HMAC-SHA256 Credential=cloudburst/20220921/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-bucket-object-lock-enabled;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Bucket-Object-Lock-Enabled: true
X-Amz-Content-Sha256: UNSIGNED-PAYLOAD
X-Amz-Date: 20220921T004013Z
Accept-Encoding: gzip

mc: HTTP/1.1 200 OK
Content-Length: 0
Accept-Ranges: bytes
Content-Security-Policy: block-all-mixed-content
Date: Wed, 21 Sep 2022 00:40:13 GMT
Location: /wormbucket
Server: MinIO
Strict-Transport-Security: max-age=31536000; includeSubDomains
Vary: Origin
Vary: Accept-Encoding
X-Amz-Request-Id: 1716B943E035127B
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block

mc: TLS Certificate found:
mc: >> Organization: Certgen Development
mc: >> Expires: 2023-09-21 00:22:11 +0000 UTC
mc: Response Time: 52.117027ms

Bucket created successfully `cos/wormbucket`.


- And whabam! Bucket created and 'Object Locking:' is 'Enabled' (per the MinIO console page, which you can reach via https://servername ...in our case, https://cloudy)

[Screenshot: MinIO console showing the worm bucket with 'Object Locking: Enabled']




Automate starting MinIO immutable S3 storage at boot time



---- Now that we have a working MinIO server with immutable S3 storage, let's create a systemd service to start MinIO server automatically at boot time and allow for easier service manual stop and start.

- First, identify the systemd mount units for the XFS filesystem mountpoints (which we named xfsminio_1 through _4) via:
root@cloudy:/etc/systemd# systemctl list-units | grep 'xfsminio_' | awk '{ print $1 }'

xfsminio_1.mount
xfsminio_2.mount
xfsminio_3.mount
xfsminio_4.mount


- next, create a file named minio.service inside /etc/systemd/system with the contents sketched below.
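
The unit file body isn't reproduced in the original; here is a minimal sketch consistent with the notes below (the Description text and Restart policy are assumptions):

[Unit]
Description=MinIO immutable S3 object storage server
# After the network, the four data filesystems must be mounted before MinIO starts.
After=network-online.target xfsminio_1.mount xfsminio_2.mount xfsminio_3.mount xfsminio_4.mount
Requires=xfsminio_1.mount xfsminio_2.mount xfsminio_3.mount xfsminio_4.mount

[Service]
Type=simple
WorkingDirectory=/root
User=root
ExecStart=/root/startminio.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target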



     Notes:
  • we put the MinIO-relevant filesystem *.mount unit names in the 'After=' line so systemd knows that, after the network, those filesystems must be mounted before it brings the minio.service service online
  • ExecStart points to our startup script
  • in this lab setup, the WorkingDirectory is '/root' and the User account is 'root'. For a properly secured setup, one would use a dedicated minio user account with its own working home directory.

- set proper permissions:
chmod 755 minio.service


- test the minio.service service:
   - ensure minio server is not running currently:
root@cloudy:~# ps -ef | grep minio
root 2037 1927 0 13:29 pts/0 00:00:00 grep minio

- start service:
root@cloudy:~# systemctl start minio.service

root@cloudy:~# ps -ef | grep minio | grep -v grep

root 2040 1 0 13:29 ? 00:00:00 /bin/bash /root/startminio.sh
root 2041 2040 27 13:29 ? 00:00:01 /usr/bin/minio server --address :443 /xfsminio_1 /xfsminio_2 /xfsminio_3 /xfsminio_4

- now enable the service:
root@cloudy:~# systemctl enable minio.service

Created symlink /etc/systemd/system/multi-user.target.wants/minio.service → /etc/systemd/system/minio.service.




-- Successful s3cmd upload from a Windows workstation!!:

C:\s3cmd-2.2.0>python s3cmd ls
2022-09-21 00:40 s3://wormbucket

C:\s3cmd-2.2.0>python s3cmd sync "C:\Users\warp1\Documents\My Safes\NewDatabase.kdbx" s3://wormbucket
WARNING: Module python-magic is not available. Guessing MIME types based on file extensions.
upload: 'C:\Users\warp1\Documents\My Safes\NewDatabase.kdbx' -> 's3://wormbucket/NewDatabase.kdbx' (87902 bytes in 0.1 seconds, 822.69 KB/s) [1 of 1]
Done. Uploaded 87902 bytes in 1.0 seconds, 85.84 KB/s.
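
For reference, s3cmd on the workstation has to be pointed at the MinIO endpoint first. The original doesn't show that configuration, but a minimal sketch of the relevant ~/.s3cfg (or, on Windows, %USERPROFILE%\.s3cfg) settings for this lab might look like the following; values are this lab's, and check_ssl_certificate is disabled only because of the self-signed certificate:

[default]
access_key = cloudburst
secret_key = f1rLH1MYEdSLvVWJDko0
host_base = 192.168.1.125
host_bucket = 192.168.1.125
use_https = True
# Lab only: the cert is self-signed, so skip CA validation
check_ssl_certificate = False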



---- Security Hardening:



--- change default SSH port:

vi /etc/ssh/sshd_config

...change #Port 22 to something else (in this example 2222):
Port 2222

...save the file, then restart sshd:
systemctl restart ssh


--- Firewall:

Raspbian uses iptables by default, but managing iptables is a royal pain. Let's use UFW (Uncomplicated Firewall), a much friendlier front end for managing iptables!

- install UFW:
apt-get install ufw

- confirm it's present but inactive:
ufw status verbose

- allow the non-standard SSH port through:
ufw allow 2222

- allow https through:
ufw allow 443

- allow TCP 44683 (the MinIO console port we specified) through:
ufw allow 44683

- enable the firewall (answer 'y' at the prompt; note it may briefly disrupt existing SSH sessions):
ufw enable

- check rules:
ufw status verbose

That's it! The firewall will deny all connection attempts on other ports.


--- Virtual air gap:

Physical air gapped servers are cutoff from the Internet making them much more secure from external threats.
The next best thing is a 'virtual' air gap...one in which we configure a method to disable networking to cutoff the MinIO server from the outside world via software. In this lab we'll employ scripts to down the Ethernet NIC interface and up the interface via systemd timer schedules. The purpose is to enable the network access just long enough to send some backups to the immutable S3 bucket and then disable afterwards.

- Create a simple script vgap_on.sh in /root/ with contents:
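
The script bodies aren't shown in the original; a minimal sketch follows. It assumes the wlan0 interface configured earlier in this article (substitute your interface name, e.g. eth0, if backups arrive over Ethernet):

#!/bin/bash
# vgap_on.sh - turn the virtual air gap ON by taking the network interface down.
ip link set wlan0 down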


- Create a second script vgap_off.sh with contents:
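
...and, under the same interface assumption, the companion script:

#!/bin/bash
# vgap_off.sh - turn the virtual air gap OFF by bringing the interface back up.
ip link set wlan0 up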


- Make them both executable:

chmod 550 vgap_on.sh
chmod 550 vgap_off.sh


---- Here we will create systemd services and timers as an alternate, more consistent method than crontab to schedule our virtual air gap to:
  • turn off daily at 01:30:30 (bring the network interface up)
  • turn on daily at 02:15:30 (take the network interface down)
  • thereby providing a daily 45-minute window in which client servers can push backups (s3cmd sync upload commands)


    -- Create a file named vgapon.service inside /etc/systemd/system with contents like the sketch below:
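
    A minimal sketch (the Description wording is an assumption; the [Install] section is included so the 'systemctl enable' output shown later matches):

    [Unit]
    Description=Virtual air gap ON (take the network interface down)

    [Service]
    Type=oneshot
    ExecStart=/root/vgap_on.sh

    # NOTE (assumption): enabling this service also runs it once at boot;
    # strictly, only the timers below need to be enabled.
    [Install]
    WantedBy=multi-user.target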


    -- create vgapon.timer in /etc/systemd/system with contents to down the network interface daily at 02:15:30:
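
    A minimal sketch:

    [Unit]
    Description=Daily 02:15:30 trigger for vgapon.service

    [Timer]
    OnCalendar=*-*-* 02:15:30
    Unit=vgapon.service

    [Install]
    WantedBy=timers.target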


    ...repeat for vgapoff.service and vgapoff.timer, like:

    - vgapoff.service:
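
    Again a sketch, mirroring vgapon.service:

    [Unit]
    Description=Virtual air gap OFF (bring the network interface up)

    [Service]
    Type=oneshot
    ExecStart=/root/vgap_off.sh

    [Install]
    WantedBy=multi-user.target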


    - vgapoff.timer:
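
    ...and its 01:30:30 trigger:

    [Unit]
    Description=Daily 01:30:30 trigger for vgapoff.service

    [Timer]
    OnCalendar=*-*-* 01:30:30
    Unit=vgapoff.service

    [Install]
    WantedBy=timers.target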


    -- apply proper perms:
    chmod 644 vgap*

    -- Enable the services and their timers:

    root@cloudy:/etc/systemd/system# systemctl enable vgapon.service

    Created symlink /etc/systemd/system/multi-user.target.wants/vgapon.service → /etc/systemd/system/vgapon.service.

    root@cloudy:/etc/systemd/system# systemctl enable vgapon.timer

    Created symlink /etc/systemd/system/timers.target.wants/vgapon.timer → /etc/systemd/system/vgapon.timer.

    root@cloudy:/etc/systemd/system# systemctl enable vgapoff.service

    Created symlink /etc/systemd/system/multi-user.target.wants/vgapoff.service → /etc/systemd/system/vgapoff.service.

    root@cloudy:/etc/systemd/system# systemctl enable vgapoff.timer

    Created symlink /etc/systemd/system/timers.target.wants/vgapoff.timer → /etc/systemd/system/vgapoff.timer.




    -- Status: as we can see, systemd is aware of the services and their timers. They report as 'inactive (dead)' because they are not currently running, which is expected.

    root@cloudy:/etc/systemd/system# systemctl status vgapon.service

    ● vgapon.service - Virtual air gap ON (take the network interface down)
    Loaded: loaded (/etc/systemd/system/vgapon.service; enabled; vendor preset: enabled)
    Active: inactive (dead)


    root@cloudy:/etc/systemd/system# systemctl status vgapon.timer

    ● vgapon.timer - Daily 02:15:30 trigger for vgapon.service
    Loaded: loaded (/etc/systemd/system/vgapon.timer; enabled; vendor preset: enabled)
    Active: inactive (dead)
    Trigger: n/a
    Triggers: ● vgapon.service

    root@cloudy:/etc/systemd/system# systemctl status vgapoff.service

    ● vgapoff.service - Virtual air gap OFF (bring the network interface up)
    Loaded: loaded (/etc/systemd/system/vgapoff.service; enabled; vendor preset: enabled)
    Active: inactive (dead)


    root@cloudy:/etc/systemd/system# systemctl status vgapoff.timer

    ● vgapoff.timer - Daily 01:30:30 trigger for vgapoff.service
    Loaded: loaded (/etc/systemd/system/vgapoff.timer; enabled; vendor preset: enabled)
    Active: inactive (dead)
    Trigger: n/a
    Triggers: ● vgapoff.service



    -- We can test by starting the vgapon.service, but be prepared to have physical keyboard/monitor access to the system, as we will lose network connectivity.

    root@cloudy:/etc/systemd/system# systemctl start vgapon.service

    ...at this point it should not be possible to remotely connect to the server by any means (SSH, http/https, s3).

    ...to re-establish network connectivity, via the physical keyboard/monitor, execute: systemctl start vgapoff.service

    Schedule any backups to the worm bucket between 01:30:30 and 02:15:30 daily.
    Obviously, adjust the virtual air gap times according to your backup schedule.
    Also, schedule backups to start at about 01:35 to give the system five minutes to plumb the NIC with its IP address.


    That's it! We now have a virtually air gapped, secure on-prem cloud storage server using WORM immutable S3 storage with erasure coding in a lab environment!!


     

    © Copyright 2022 rebellian. All Rights Reserved.