Distributed Object Storage

Upload anywhere.
Serve everywhere.

3 edge nodes, automatic replication, global reads. Drop a file and it's live on every node in under 2 seconds.

Bucket: cdnfiles · Replication: 3x · SSL: TLS 1.3




Usage Guide

Upload via S3 API

PUT any file to the S3-compatible endpoint. It replicates automatically.

bash
curl -X PUT \
  -T myfile.jpg \
  https://cdn.zyr.xyz/cdnfiles/myfile.jpg

Upload via Filer

Use multipart form upload for directory-style paths.

bash
curl -F "file=@photo.png" \
  https://cdn.zyr.xyz/filer/images/photo.png
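The same multipart POST can be built from Python's standard library. A minimal sketch that constructs the request by hand (a client library such as requests would handle the boundary for you; the send is left commented out):

```python
import uuid
import urllib.request

def filer_upload(path: str, filename: str, data: bytes) -> urllib.request.Request:
    """Build a multipart/form-data POST for the filer, like curl -F."""
    boundary = uuid.uuid4().hex
    body = (
        f'--{boundary}\r\n'
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        f'Content-Type: application/octet-stream\r\n\r\n'
    ).encode() + data + f'\r\n--{boundary}--\r\n'.encode()
    return urllib.request.Request(
        f"https://cdn.zyr.xyz/filer/{path}",
        data=body,
        method="POST",
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )

req = filer_upload("images/photo.png", "photo.png", b"\x89PNG...")
# urllib.request.urlopen(req)  # uncomment to actually send the upload
```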

Read Files

Files are served from the nearest edge node via GeoDNS and cached at the edge.

url
https://cdn.zyr.xyz/cdnfiles/myfile.jpg

List Objects

List bucket contents via the S3 ListObjects API.

bash
curl "https://cdn.zyr.xyz/cdnfiles/?list-type=2"
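The response is ListObjectsV2 XML. A sketch of extracting the object keys with the standard library (the sample payload below is illustrative, shaped like a typical S3 listing, not actual output from this cluster):

```python
import xml.etree.ElementTree as ET

# Illustrative sample, shaped like an S3 ListObjectsV2 response.
sample = """<?xml version="1.0"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>cdnfiles</Name>
  <Contents><Key>myfile.jpg</Key><Size>1024</Size></Contents>
  <Contents><Key>photo.png</Key><Size>2048</Size></Contents>
</ListBucketResult>"""

def list_keys(xml_text: str) -> list[str]:
    """Extract object keys, ignoring the XML namespace prefix."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter() if el.tag.endswith("Key")]

print(list_keys(sample))  # ['myfile.jpg', 'photo.png']
```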

Delete Files

DELETE removes the object from all replicas across the cluster.

bash
curl -X DELETE \
  https://cdn.zyr.xyz/cdnfiles/myfile.jpg

S3 SDK Compatible

Use any S3 SDK — boto3, aws-sdk, minio-js. Just point it at the endpoint.

python
import boto3

s3 = boto3.client('s3',
  endpoint_url='https://cdn.zyr.xyz',
  aws_access_key_id='',
  aws_secret_access_key='')

s3.upload_file('local.jpg',
  'cdnfiles', 'remote.jpg')

How It Works

1. Upload

Client uploads via HTTPS to cdn.zyr.xyz. Cloudflare DNS round-robins requests across the 3 edge servers.

2. Ingest

Nginx terminates SSL and proxies to the local SeaweedFS filer/S3 gateway on the receiving node.

3. Replicate

SeaweedFS writes locally and synchronously replicates to the other 2 data centers (replication=200, i.e. two replicas placed in other data centers). Filer active-active sync keeps metadata consistent.

4. Serve

Reads hit the nearest node via GeoDNS. Nginx serves from edge cache (HIT) or fetches from the local SeaweedFS volume (MISS).
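From a client's point of view, the four steps above collapse to one PUT (upload, ingest, and replication all happen server-side) followed by one GET served by the nearest node. A sketch that only builds the two requests; the commented-out sends are what you would run against the live endpoint:

```python
import urllib.request

ENDPOINT = "https://cdn.zyr.xyz"  # GeoDNS name from this guide

def round_trip_requests(bucket: str, key: str, data: bytes):
    """Build the PUT and GET for one upload/read cycle."""
    put = urllib.request.Request(
        f"{ENDPOINT}/{bucket}/{key}", data=data, method="PUT"
    )
    get = urllib.request.Request(f"{ENDPOINT}/{bucket}/{key}")
    return put, get

put, get = round_trip_requests("cdnfiles", "demo.txt", b"hello edge")
# with urllib.request.urlopen(put):                # step 1-3: upload + replicate
#     pass
# with urllib.request.urlopen(get) as resp:        # step 4: read from nearest node
#     assert resp.read() == b"hello edge"
```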

Stack

Storage SeaweedFS 4.19
Replication Filer Active-Active Sync
CDN / Proxy Nginx + Edge Cache
Networking WireGuard Mesh
DNS Cloudflare (round-robin)
SSL Let's Encrypt (TLS 1.3)
Monitoring Prometheus + Grafana
Firewall UFW (SSH, HTTP/S, WG only)

Nodes

Node Public IP WireGuard IP Datacenter Role
us-central 147.93.131.83 10.0.0.1 dc1 Master + Volume + Filer + Grafana
us-east 86.48.29.97 10.0.0.2 dc2 Master + Volume + Filer
us-west 85.239.239.133 10.0.0.3 dc3 Master + Volume + Filer
