3. OFM does not promise worry-free automatic updates for self-hosters. Only use the autoupdate version of http-host if you keep a close eye on this repo.
## What is the tech stack?
There is no tile server running; only Btrfs partition images with 300 million hard-linked files. This was my idea; I haven't read about anyone else doing this in production, but it works really well.
There is no cloud, just dedicated servers. The HTTPS server is nginx on Ubuntu.
## Btrfs images
Production-quality hosting of 300 million tiny files is hard. The average file size is just 450 bytes. Dozens of tile servers have been written to tackle this problem, but they all have their limitations.
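A back-of-envelope calculation (an illustration, not from the project docs) shows why the file count, not the data volume, is the hard part:

```shell
# Illustrative arithmetic only: 300 million files at ~450 bytes each is a
# modest payload; it's the sheer number of files that hurts.
files=300000000
avg_bytes=450
echo "payload: $(( files * avg_bytes / 1024 / 1024 / 1024 )) GiB"   # ~125 GiB
```

Roughly 125 GiB of payload would be trivial as a handful of large files; as 300 million inodes it is a serious engineering problem.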
The original idea of this project is to avoid using tile servers altogether. Instead, the tiles are directly served from Btrfs partition images + hard links using an optimised nginx config. I wrote [extract_mbtiles](scripts/tile_gen/extract_mbtiles) and [shrink_btrfs](scripts/tile_gen/shrink_btrfs) scripts for this very purpose.
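The hard-link idea can be pictured with a tiny sketch (paths and contents here are invented; the real extraction is done by the scripts above): identical tiles, such as empty ocean tiles, are written once and hard-linked everywhere else, so they share a single inode.

```shell
# Hypothetical illustration of tile deduplication via hard links.
# Paths and contents are made up; the real work happens in extract_mbtiles.
mkdir -p tiles/0/0 tiles/1/1
printf 'empty-ocean-tile' > tiles/0/0/0.pbf

# A duplicate tile becomes another name for the same inode, not a copy.
ln tiles/0/0/0.pbf tiles/1/1/1.pbf

# Link count 2: one stored blob, two paths.
stat -c '%h' tiles/0/0/0.pbf
```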
This replaces a running service with a pure, file-system-level implementation. Since the Linux kernel's file caching is among the highest-performing and most thoroughly tested code ever written, this approach delivers serious performance.
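A minimal sketch of what such an nginx setup can look like (an illustration, not the project's actual config; the mount point and cache numbers are invented assumptions). The mounted Btrfs image is served as plain static files, and the kernel page cache does the heavy lifting:

```nginx
# Illustrative only -- not the real OpenFreeMap config.
# /mnt/ofm is an assumed read-only mount of a Btrfs partition image.
location /tiles/ {
    root /mnt/ofm;
    default_type application/x-protobuf;
    add_header Cache-Control "public, max-age=86400";

    # Cache open file descriptors; with 300M files this avoids repeated lookups.
    open_file_cache max=100000 inactive=30m;
    open_file_cache_valid 1h;
}
```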
I ran some [benchmarks](docs/quick_notes/http_benchmark.md) on a Hetzner server; the aim was to saturate a gigabit connection. In the end, it served 30 Gbit/s on the loopback interface, with a cold nginx cache.
## Code structure
The project has the following parts:
#### load balancer script - scripts/loadbalancer
A Round-Robin-DNS-based load balancer: a script for health checking and for updating DNS records. It pushes status messages to a Telegram bot.
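The health-check half can be pictured with a small sketch (hostnames and probe URLs are placeholders, and the real logic lives in `scripts/loadbalancer`):

```shell
# Hedged sketch of a Round-Robin DNS health check; hosts and URLs below are
# placeholders, not the real OpenFreeMap endpoints.

# Succeed only if the URL answers without a transport or HTTP error.
check_url() {
  curl -fsS -m 5 -o /dev/null "$1"
}

# Print the hosts that pass the probe; only these stay in the DNS record.
healthy_hosts() {
  for h in "$@"; do
    check_url "https://$h/" && printf '%s\n' "$h"
  done
  return 0
}
```

A wrapper would then compare this list against the current DNS record and push (or, in read-only mode, merely report) the difference.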
Currently it runs in read-only mode; DNS updates need manual confirmation.
## Self hosting
See [self hosting docs](docs/self_hosting.md).
## FAQ
### Full planet downloads
### Domains and Cloudflare
Tiles are currently available on:
- `tiles.openfreemap.org` - Cloudflare proxied
- `direct.openfreemap.org` - direct connection, Round-Robin DNS
The project has been designed so that we can migrate away from Cloudflare if needed. This is why there are both a .com and a .org domain: the .com will always stay on Cloudflare to host the R2 buckets, while the .org domain is independent.