This is a new revision of the discovery server. Relevant changes and
non-changes:
- Protocol towards clients is unchanged.
- Recommended large-scale design is still to deploy behind nginx (I
tested, and it's still a lot faster at terminating TLS).
- Database backend is LevelDB again, and only LevelDB. It scales well
enough, is easy to set up, and there is no separate backend service to
take care of.
- Server supports replication. This is a simple TCP channel - protect it
with a firewall when deploying over the internet. (We deploy this within
the same datacenter, behind a firewall.) Any incoming client announces
are sent over the replication channel(s) to the other peer discosrvs.
Incoming replication changes are applied to the database as if they came
from clients, but without the TLS/certificate overhead. (A possible
framing is sketched at the end of this note.)
- Metrics are exposed using the prometheus library, when enabled.
- The database values and the replication protocol are protobuf, because
JSON was quite CPU intensive when I tried and benchmarked it.
- The "Retry-After" value for failed lookups gets slowly increased from
a default of 120 seconds, by 5 seconds for each failed lookup,
independently by each discosrv. This lowers the query load over time for
clients that are never seen. The Retry-After maxes out at 3600 after a
couple of weeks of this increase. The number of failed lookups is
stored in the database, now and then (avoiding making each lookup a
database put).
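A rough sketch of this backoff in Go; the function name and shape are
mine, not the actual discosrv code:

    package discosrv

    import "time"

    // retryAfter returns the Retry-After value for a device that has
    // never been seen: 120 seconds by default, plus 5 seconds for each
    // previously failed lookup, capped at 3600 seconds.
    func retryAfter(failedLookups int) time.Duration {
        d := 120*time.Second + time.Duration(failedLookups)*5*time.Second
        if d > 3600*time.Second {
            d = 3600 * time.Second
        }
        return d
    }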
All in all this means clients can be pointed towards a cluster using
just multiple A / AAAA records to gain both load sharing and redundancy
(if one is down, clients will talk to the remaining ones).
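The replication framing mentioned above could look like this, for one
direction of the channel; the 4-byte length prefix is my assumption for
illustration, not necessarily the actual wire format:

    package discosrv

    import (
        "encoding/binary"
        "net"
    )

    // sendRecord writes one already protobuf-marshaled announce record
    // to a replication peer, framed with a 4-byte big-endian length
    // prefix so the receiver knows where each record ends.
    func sendRecord(conn net.Conn, msg []byte) error {
        var hdr [4]byte
        binary.BigEndian.PutUint32(hdr[:], uint32(len(msg)))
        if _, err := conn.Write(hdr[:]); err != nil {
            return err
        }
        _, err := conn.Write(msg)
        return err
    }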
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4648
Previous "reasonable resource limits" cause test failures with Go 1.9.
They were added to protect the previous generation of the build server
and no longer needed.
Fixes issue #4653.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4656
It turns out that ZFS doesn't do any normalization when storing files,
but does do normalization "as part of any comparison process".
In practice, this seems to mean that if you Lstat a normalized filename,
ZFS will return the FileInfo for the un-normalized version of that
filename.
This meant that our test to see whether a separate file with a
normalized version of the filename already exists was failing, as we
were detecting the same file.
The fix is to use os.SameFile, to see whether we're getting the same
FileInfo from the normalized and un-normalized versions of the same
filename.
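A minimal sketch of that check, assuming NFC as the target normal form;
the helper name is mine:

    package fs

    import (
        "os"

        "golang.org/x/text/unicode/norm"
    )

    // isSameFile reports whether the as-given and NFC-normalized
    // spellings of path refer to the same underlying file, as they do
    // on ZFS with normalization enabled.
    func isSameFile(path string) (bool, error) {
        fi, err := os.Lstat(path)
        if err != nil {
            return false, err
        }
        normFi, err := os.Lstat(norm.NFC.String(path))
        if err != nil {
            return false, err
        }
        return os.SameFile(fi, normFi), nil
    }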
One complication is that ZFS also seems to apply its magic to os.Rename,
meaning that we can't use it to rename an un-normalized file to its
normalized filename. Instead we have to move via a temporary object. If
the move to the temporary object fails, that's OK, we can skip it and
move on. If the move from the temporary object fails however, I'm not
sure of the best approach: the current one is to leave the temporary
file name as-is, and get Syncthing to synchronize it, so at least we
don't lose the file. I'm not sure if there are any implications of this
however.
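Sketched out, the two-step rename looks roughly like this; the temporary
naming scheme is illustrative:

    package fs

    import "os"

    // normalizeRename renames an un-normalized file to its normalized
    // name via a temporary name, since a direct os.Rename is defeated
    // by ZFS's comparison-time normalization.
    func normalizeRename(unnormalized, normalized string) {
        tmp := normalized + ".tmp" // illustrative temporary name
        if err := os.Rename(unnormalized, tmp); err != nil {
            // Couldn't move to the temporary name; skip and move on.
            return
        }
        if err := os.Rename(tmp, normalized); err != nil {
            // Leave the temporary name as-is; Syncthing will
            // synchronize it, so at least the file is not lost.
            return
        }
    }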
As part of reworking normalizePath, I spotted that it appeared to be
returning the wrong thing: the doc and the surrounding code expected it
to return the normalized filename, but it was returning the
un-normalized one. I fixed this, but it seems suspicious that, if the
previous behaviour was incorrect, no one ever ran afoul of it. Maybe all
filesystems will do some searching and give you a normalized filename if
you request an unnormalized one.
As part of this, I found that TestNormalization was broken: it was
passing, when in fact one of the files it should have verified was
present was missing. Maybe this was related to the above issue with
normalizePath's return value, I'm not sure. Fixed en route.
Kindly tested by @khinsen on the forum, and it appears to work.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4646
This adds one new feature: discovery servers can have ?nolookup appended
to their URL, marking them to be used only for announces. The default
set of discovery servers is changed to:
- discovery.s.n used for lookups. This is dual stack load balanced over
all discovery servers, and returns both IPv4 and IPv6 results when they
exist.
- discovery-v4.s.n used for announces. This has IPv4 addresses only and
the discovery servers will update the unspecified address with the IPv4
source address, as usual.
- discovery-v6.s.n which is exactly the same for IPv6.
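For example, an announce-only entry in a client's discovery server list
would look something like this (the host is a placeholder):

    https://discovery-v4.example.com/?nolookup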
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4647
This no longer pokes at model internals, and only touches the config.
As a result, the model handles this in CommitConfiguration, which
restarts the folders if things change, which in turn repopulates
m.folderDevice, m.deviceFolder and other internal mappings.
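A rough sketch of that pattern; the Model stub and restartFolder are
stand-ins for the real thing, and only the CommitConfiguration signature
follows the config committer interface:

    package model

    import (
        "reflect"

        "github.com/syncthing/syncthing/lib/config"
    )

    // Model is a stub standing in for the real model type.
    type Model struct{}

    func (m *Model) restartFolder(cfg config.FolderConfiguration) {}

    // CommitConfiguration restarts any folder whose configuration has
    // changed; restarting repopulates m.folderDevice, m.deviceFolder
    // and the other internal mappings.
    func (m *Model) CommitConfiguration(from, to config.Configuration) bool {
        oldFolders := make(map[string]config.FolderConfiguration, len(from.Folders))
        for _, f := range from.Folders {
            oldFolders[f.ID] = f
        }
        for _, f := range to.Folders {
            if old, ok := oldFolders[f.ID]; ok && !reflect.DeepEqual(old, f) {
                m.restartFolder(f)
            }
        }
        return true // no process restart required
    }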
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4639
These files always have the symlink bit set, because they are reparse
points. Nonetheless they are not symlinks, and Lstat reports a size for
them. We use this fact to disambiguate, and hope fervently that nothing
else matching this description comes back to bite us...
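The resulting check, sketched; the helper name is mine:

    package fs

    import "os"

    // isRealSymlink distinguishes actual symlinks from these reparse
    // points: both have the symlink mode bit set, but only the reparse
    // points report a nonzero size from Lstat.
    func isRealSymlink(fi os.FileInfo) bool {
        return fi.Mode()&os.ModeSymlink != 0 && fi.Size() == 0
    }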
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4622
Just because there are a ton of people struggling to set env vars.
Perhaps this should live in advanced settings, and perhaps we should have a button to view the log.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4604
LGTM: calmh, imsodin
Also attempt to handle this more nicely by ignoring the truncate failure
when it doesn't matter, and recovering by deleting the temp file when it
does.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4594
This keeps the data we need about sequence numbers and object counts
persistently in the database. The sizeTracker is expanded into a
metadataTracker that handles multiple folders, and the Counts struct is
made protobuf serializable. It gains a Sequence field to assist in
tracking that as well, and a collection of Counts becomes a CountsSet
(for serialization purposes).
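Roughly, the shapes described are as follows; the exact field sets are
illustrative, the real ones being generated from the protobuf
definitions:

    package db

    // Counts tracks object counts and sizes for one device in one
    // folder, plus the latest sequence number seen for that device.
    type Counts struct {
        Files       int
        Directories int
        Symlinks    int
        Deleted     int
        Bytes       int64
        Sequence    int64 // new: assists sequence tracking
    }

    // CountsSet is the serializable collection of Counts, carrying the
    // created timestamp used for the staleness check described below.
    type CountsSet struct {
        Counts  []Counts
        Created int64 // nanoseconds since epoch, illustrative
    }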
The initial database scan is also a consistency check of the global
entries. This shouldn't strictly be necessary. Nonetheless I added a
created timestamp to the metadata and set a variable to compare against
that. When the time since the metadata creation is old enough, we drop
the metadata and rebuild from scratch like we used to, while also
consistency checking.
A new environment variable STCHECKDBEVERY can override this interval,
and for example be set to zero to force the check immediately.
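One way such an override could be read; the default shown is a
placeholder, not the actual interval:

    package db

    import (
        "os"
        "time"
    )

    // checkInterval returns how old the metadata may get before it is
    // dropped and rebuilt from scratch. STCHECKDBEVERY overrides the
    // default; STCHECKDBEVERY=0 forces the check on the next start.
    func checkInterval() time.Duration {
        if v := os.Getenv("STCHECKDBEVERY"); v != "" {
            if d, err := time.ParseDuration(v); err == nil {
                return d
            }
        }
        return 30 * 24 * time.Hour // placeholder default
    }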
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4547
LGTM: imsodin
So STDEADLOCK seems to do the same thing as STDEADLOCKTIMEOUT, except in
the other package. Consolidate?
STDEADLOCKTHRESHOLD is actually called STLOCKTHRESHOLD, correct the help
text.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4598
Two issues since the filesystem migration:
- filepath.Base() doesn't do what the code expected when the path ends
in a slash, and we make sure the path ends in a slash.
- the return should be fully qualified, not just the last component.
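Both points are easy to demonstrate with filepath.Base's actual
behavior:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // Base strips trailing separators before taking the last
        // element, so a trailing slash does not yield an empty string:
        fmt.Println(filepath.Base("/data/folder/")) // "folder"
        // and the result is only the last path component, never the
        // fully qualified path:
        fmt.Println(filepath.Base("/data/folder")) // "folder"
    }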
With this change it works as before, and passes the new test for it.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4591
LGTM: imsodin
Fix the folder restart behavior (ignore Label), improve the API for that
(imho).
Also removes the tab switch animation in the settings modal, because it
was annoying.
GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4577