* Trigger connection loop on config device addition (fixes #7600)
* Also check for device address equality
* Move EqualStrings from api_test to utils, and use in connections/service.go
* Make sure CommitConfiguration cannot block on the deviceAddressesChanged channel
* Update lib/connections/service.go
Co-authored-by: Jakob Borg <jakob@kastelo.net>
This makes us use TLS 1.3+ on sync connections by default. A new option
`insecureAllowOldTLSVersions` exists to allow communication with TLS
1.2-only clients (roughly Syncthing 1.2.2 and older). Even with that
option set you get a slightly simplified setup, with the cipher suite
order fixed instead of auto-detected.
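For illustration, the default roughly corresponds to this kind of
crypto/tls setup (a sketch; the real config handling differs in detail,
and the helper name is made up):
```
package connections

import "crypto/tls"

// buildTLSConfig returns a TLS config for sync connections. The
// insecureAllowOldTLSVersions option lowers the floor to TLS 1.2 for
// compatibility with roughly Syncthing 1.2.2 and older.
func buildTLSConfig(cert tls.Certificate, insecureAllowOldTLSVersions bool) *tls.Config {
	cfg := &tls.Config{
		Certificates: []tls.Certificate{cert},
		MinVersion:   tls.VersionTLS13,
	}
	if insecureAllowOldTLSVersions {
		cfg.MinVersion = tls.VersionTLS12
		// With TLS 1.2 allowed, use a fixed cipher suite order
		// instead of auto detection.
		cfg.PreferServerCipherSuites = true
	}
	return cfg
}
```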
No longer hide the web UI controls for the new untrusted/encrypted
device feature. Testing hasn't been very widespread, but there has been
some, and quite a few bugs have been caught and fixed. I believe it's
time to stop hiding it, and to cautiously recommend usage. E.g., mention
that the feature hasn't been widely used yet and anyone using it is an
early adopter, but drop the bit about not using it with production data.
We can perhaps stress the need for backups in general, and especially
when using this feature.
There was a logic mistake, so the limit in question wasn't used. On my
macOS system this doesn't seem to matter: the hard limit returned is
2^63-1, and setting the soft limit to that works. However, I'm assuming
that's not the case for older macOS releases, since the limit was so
carefully documented, so we should still have this working. (10240 FDs
should be enough for anybody.)
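The intended logic, roughly, using golang.org/x/sys/unix (names and the
exact cap handling are assumptions):
```
package osutil

import "golang.org/x/sys/unix"

const maxFDs = 10240 // cap the soft limit; enough for anybody

// raiseFDLimit raises the soft open-file limit towards the hard limit,
// but never above maxFDs, which older macOS releases require.
func raiseFDLimit() error {
	var lim unix.Rlimit
	if err := unix.Getrlimit(unix.RLIMIT_NOFILE, &lim); err != nil {
		return err
	}
	lim.Cur = lim.Max
	if lim.Cur > maxFDs {
		lim.Cur = maxFDs // this was the limit that wasn't applied
	}
	return unix.Setrlimit(unix.RLIMIT_NOFILE, &lim)
}
```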
This is a mostly pointless change to make security scanners and static
analysis tools happy, as they all hate seeing MD5. None of our MD5 uses
were security-relevant, but still. The only visible effect of this
change is that our temp file names for very long file names become
slightly longer than they were previously...
This adds a couple of dummy asset files protected by the "noassets"
build tag. The purpose is that it should be possible for, for example,
CI tools and static analysis things to compile and analyze the source
tree without our custom asset generation step. Also makes `go test -tags
noassets ./...` work without building assets first.
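The dummy files presumably look something like this (package and
function names are assumptions):
```
//go:build noassets
// +build noassets

package auto

// Assets returns an empty asset map, so that the source tree compiles
// and tests run without the custom asset generation step.
func Assets() map[string][]byte {
	return nil
}
```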
This truncates times meant for API consumption to second precision,
where fractions typically won't matter or add any value. The exception
is timestamps on logs and events, and of course I'm not touching
things like file metadata.
I'm not 100% certain this is an exhaustive change, but it's the things I
found by grepping and following the breadcrumbs from lib/api...
I also considered general-but-ugly solutions, like having the API
serializer itself do reflection magic or even regexps on returned
objects, but decided against it because aurgh...
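The mechanics are just time.Truncate at serialization points; for
example:
```
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Now()
	// Truncate to second precision before handing the value to the
	// API; fractions add no value to API consumers.
	fmt.Println(t.Truncate(time.Second).Format(time.RFC3339))
}
```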
This loosens the ‘is this localhost?’ check to include *.localhost host
names.
This allows for clearer (hence better) names to be used in browsers,
e.g. when accessing a remote Syncthing instance ‘foo’ via an SSH port
forward, one can use foo.localhost to remind oneself which one is which.
💡 Without these changes, Syncthing shows a ‘Host check error’ when
pointing a browser at http://foo.localhost/, and with these changes, the
interface loads as usual.
The .localhost top level domain is a reserved top-level domain (RFC 2606):
> The ".localhost" TLD has traditionally been statically defined in
> host DNS implementations as having an A record pointing to the
> loop back IP address and is reserved for such use. Any other use
> would conflict with widely deployed code which assumes this use.
> – https://tools.ietf.org/html/rfc2606
As Wikipedia puts it:
> This allows the use of these names for either documentation purposes
> or in local testing scenarios. – https://en.wikipedia.org/wiki/.localhost
On Linux systems, systemd-resolved resolves *.localhost, on purpose:
https://www.freedesktop.org/software/systemd/man/systemd-resolved.service.html
See also #4815, #4816.
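The loosened check amounts to something like this (a sketch; the real
check also handles ports and IP literals):
```
package api

import "strings"

// isLocalhostName reports whether the host part of a request refers to
// the local machine, accepting both "localhost" and "*.localhost".
func isLocalhostName(host string) bool {
	host = strings.ToLower(host)
	return host == "localhost" || strings.HasSuffix(host, ".localhost")
}
```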
An untrusted device will receive padded info for small blocks, and hence
sometimes request a larger block than actually exists on disk.
Previously we let this pass because we didn't have a hash to compare to
in that case and we ignored the EOF error based on that.
Now the untrusted device does pass an encrypted hash that we decrypt and
verify. This means we can't check for len(hash)==0 any more, but on the
other hand we do have a valid hash we can apply to the data we actually
read. If it matches then we don't need to worry about the read
supposedly being a bit short.
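The resulting read path is roughly this (a sketch; names and the
surrounding plumbing are assumptions):
```
package model

import (
	"crypto/sha256"
	"errors"
	"io"
)

var errHashMismatch = errors.New("hash mismatch")

// readBlock reads up to size bytes and verifies them against the hash
// decrypted from the untrusted device's request. A short read is fine
// as long as the hash of what we actually read matches.
func readBlock(r io.Reader, size int, expected [sha256.Size]byte) ([]byte, error) {
	buf := make([]byte, size)
	n, err := io.ReadFull(r, buf)
	if err == io.ErrUnexpectedEOF || err == io.EOF {
		// The request may be padded beyond the real block size.
		buf, err = buf[:n], nil
	}
	if err != nil {
		return nil, err
	}
	if sha256.Sum256(buf) != expected {
		return nil, errHashMismatch
	}
	return buf, nil
}
```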
Benchmark results on Linux/amd64, using updated benchmark for old and
new:
name old time/op new time/op delta
HashFile-8 88.6ms ± 1% 88.3ms ± 1% -0.33% (p=0.046 n=19+19)
name old speed new speed delta
HashFile-8 201MB/s ± 1% 202MB/s ± 1% +0.33% (p=0.044 n=19+19)
name old alloc/op new alloc/op delta
HashFile-8 59.4kB ± 0% 46.1kB ± 0% -22.47% (p=0.000 n=14+20)
name old allocs/op new allocs/op delta
HashFile-8 29.0 ± 0% 27.0 ± 0% -6.90% (p=0.000 n=20+20)
Co-authored-by: greatroar <@>
* lib/db: Add ExpirePendingFolders().
The use case is to drop any no-longer-pending folders for a specific
device when parsing its ClusterConfig message, in which previously
offered folders are no longer mentioned.
The timestamp in ObservedFolder is stored with only second precision,
so round to seconds here as well. This allows calling the function
within the same second of adding or updating entries.
* lib/model: Weed out pending folders when receiving ClusterConfig.
Filter the entries by timestamp, which must be newer than or equal to
the reception time of the ClusterConfig. For entries just mentioned in
the message, this holds as AddOrUpdatePendingFolder() updates the
timestamp.
* lib/model, gui: Notify when one or more pending folders expired.
Introduce new event type FolderOfferCancelled and use it to trigger a
complete refreshCluster() cycle. Listing individual entries would be
much more code and probably just as much work to answer the API
request.
* lib/model: Add comment and rename ExpirePendingFolders().
* lib/events: Rename FolderOfferCancelled to ClusterPendingChanged.
* lib/model: Reuse ClusterPendingChanged event for cleanPending()
Changing the config does not necessarily mean that the
/rest/cluster/pending endpoints need to be refreshed, but only if
something was actually removed. Detect this and indicate it through
the ClusterPendingChanged event, which is already hooked up to requery
the respective endpoints within the GUI.
No more need for a separate refreshCluster() in reaction to
ConfigSaved event or calling refreshConfig().
* lib/model: Gofmt.
* lib/db: Warn instead of info log for failed removal.
* gui: Fix pending notifications not loading on GUI start.
* lib/db: Use short device ID in log message.
* lib/db: Return list of expired folder IDs after deleting them.
* lib/model: Refactor Pending...Changed events.
* lib/model: Adjust format of removed pending folders enumeration.
Use an array of objects with device / folder ID properties, matching
the other places where it's used (see the sketch after this list).
* lib/db: Drop invalid entries in RemovePendingFoldersBeforeTime().
* lib/model: Gofmt.
My local gofmt did not complain here, strangely...
* gui: Handle PendingDevicesChanged event.
Even though it currently only holds one device at a time, wrap the
contents in an array under the "added" property name.
* lib/model: Fix null values in PendingFoldersChanged removed member.
* gui: Handle PendingFoldersChanged event.
* lib/model: Simplify construction of expiredPendingList.
* lib/model: Reduce code duplication in cleanPending().
Use goto and a label for the common parts of calling the DB removal
function and building the event data part.
* lib/events, gui: Mark ...Rejected events deprecated.
Extend comments explaining the conditions when the replacement event
types are emitted.
* lib/model: Wrap removed devices in array of objects as well.
* lib/db: Use iter.Value() instead of needless db.Get(iter.Key())
* lib/db: Add comment explaining RemovePendingFoldersBeforeTime().
* lib/model: Rename fields folderID and deviceID in event data.
* lib/db: Only list actually expired IDs as removed.
Skip entries where Delete() failed as well as invalid entries that got
removed automatically.
* lib/model: Gofmt
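As referenced in the list above, the removed-pending-folders event data
might be emitted roughly like this (a sketch; the field names match the
description, the rest of the plumbing is assumed):
```
package model

import "github.com/syncthing/syncthing/lib/events"

// emitRemoved logs a PendingFoldersChanged event with the "removed"
// member as an array of objects carrying folder and device IDs.
func emitRemoved(evLogger events.Logger, folderID, deviceID string) {
	evLogger.Log(events.PendingFoldersChanged, map[string]interface{}{
		"removed": []map[string]string{
			{"folderID": folderID, "deviceID": deviceID},
		},
	})
}
```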
This splits the ignore getting into two methods: one that loads from
disk (the old one) and one that just returns whatever is already loaded
(the new one). The folder summary service, which is just interested in
stats, now uses the latter method. This means that it, and API calls
that call it, do not get blocked by folder I/O.
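Conceptually the split looks like this (method names here are
hypothetical):
```
package model

// ignoreGetter sketches the two ways of getting ignore patterns.
type ignoreGetter interface {
	// LoadIgnores reads patterns from disk and may block on folder I/O
	// (the old behavior).
	LoadIgnores(folder string) ([]string, error)
	// CurrentIgnores returns whatever is already loaded, without
	// touching the filesystem; suitable for stats-only callers such
	// as the folder summary service.
	CurrentIgnores(folder string) ([]string, error)
}
```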
This adds two new configuration options:
// The number of connections at which we stop trying to connect to more
// devices, zero meaning no limit. Does not affect incoming connections.
ConnectionLimitEnough int
// The maximum number of connections which we will allow in total, zero
// meaning no limit. Affects incoming connections and prevents
// attempting outgoing connections.
ConnectionLimitMax int
These can be used to limit the number of concurrent connections in
various ways.
This adds a statistic to track the last connection duration per device.
It isn't used for much in this PR, but it's available for #7223 to use
in deciding how to order device connection attempts (deprioritizing
devices that just dropped our connection the last time).
The test would fail if the umask on UNIX is greater than 0022, because
the OS transparently subtracts it from the mode passed to Mkdir(), as
the Go documentation confirms.
Our goal here is not to test os.Mkdir(), so just make sure the desired
mode is actually set by forcing it afterwards.
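The fix boils down to a Chmod after the Mkdir (sketch):
```
package fs

import (
	"os"
	"testing"
)

func TestWithMode(t *testing.T) {
	path := t.TempDir() + "/dir"
	// The OS subtracts the umask from the mode passed to Mkdir, so
	// force the desired mode afterwards with Chmod.
	if err := os.Mkdir(path, 0755); err != nil {
		t.Fatal(err)
	}
	if err := os.Chmod(path, 0755); err != nil {
		t.Fatal(err)
	}
}
```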
This breaks out some methods from the connection loop to make it simpler
to manage and understand.
Some slight simplifications to remove the `seen` variable (we can filter
`nextDial` based on whether times are in the future or not, so we don't
need to track `seen`) and adding a minimum loop interval (5s) in case
some dialer goes haywire and requests a 0s redial interval or such.
Otherwise no significant behavioral changes.
This does two things:
- Exclude QUIC from go1.16 builds, automatically, for now, since it
doesn't work and just panics.
- Provide some fake listeners and dialers when QUIC is disabled.
These fake listeners and dialers indicate that they are disabled and
unsupported, which silences "Dialing $address: unknown address scheme:
quic" type of stuff which is not super helpful to the user.
This removes the switch for using a Badger database, because it has bugs
that it seems there is no interest in fixing, and no actual bug tracker
to track them in.
It retains the actual implementation for the sole purpose of being able
to do the conversion back to LevelDB if anyone is actually running with
USE_BADGER. At some point in a couple of versions we can remove the
implementation as well.
COM0 and LPT0 are not listed in the official Microsoft documentation
at https://docs.microsoft.com/windows/win32/fileio/naming-a-file, but
in reality they are also invalid in Windows Explorer.
Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
Add specific errors for the failures, resulting in this rather than just
the generic "invalid filename":
[MRIW7] 08:50:50 INFO: Puller (folder default, item "NUL"): syncing: filename is invalid: name is reserved
[MRIW7] 08:50:50 INFO: Puller (folder default, item "fail."): syncing: filename is invalid: name ends with space or period
[MRIW7] 08:50:50 INFO: Puller (folder default, item "sup:yo"): syncing: filename is invalid: name contains reserved character
[MRIW7] 08:50:50 INFO: default: Failed to sync 3 items
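The checks behind those messages look roughly like this (a sketch;
error values and the character set follow the messages above, the rest
is assumed):
```
package fs

import (
	"errors"
	"strings"
)

var (
	errNameReserved      = errors.New("name is reserved")
	errNameEndsBadly     = errors.New("name ends with space or period")
	errReservedCharacter = errors.New("name contains reserved character")
)

// windowsValidateName returns a specific error for invalid Windows names.
func windowsValidateName(name string) error {
	if strings.ContainsAny(name, `"*:<>?|`) {
		return errReservedCharacter
	}
	if strings.HasSuffix(name, " ") || strings.HasSuffix(name, ".") {
		return errNameEndsBadly
	}
	// Reserved device names, with or without an extension: CON, PRN,
	// AUX, NUL, COM0-COM9, LPT0-LPT9 (COM0/LPT0 per the fix above).
	base := strings.ToUpper(strings.SplitN(name, ".", 2)[0])
	switch base {
	case "CON", "PRN", "AUX", "NUL":
		return errNameReserved
	}
	if len(base) == 4 && (strings.HasPrefix(base, "COM") || strings.HasPrefix(base, "LPT")) &&
		base[3] >= '0' && base[3] <= '9' {
		return errNameReserved
	}
	return nil
}
```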
Use the IoctlFileClone and IoctlFileCloneRange ioctl wrappers and the
FileCloneRange type provided by golang.org/x/sys/unix instead of
locally implementing them. This also allows re-enabling the code for
ppc/ppc64/ppc64le (see commit 758a1a6a37 ("lib/fs: Disable ioctl
on ppc (fixes#6898) (#6901)")), since golang.org/x/sys/unix internally
uses the correct FICLONE and FICLONERANGE values depending on $GOARCH.
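For reference, the x/sys/unix wrappers in question are used like this
(Linux-only sketch):
```
package fs

import "golang.org/x/sys/unix"

// cloneFile clones src into dst using FICLONE (reflink), sharing
// extents on filesystems that support it (e.g. btrfs, XFS).
func cloneFile(dstFd, srcFd int) error {
	return unix.IoctlFileClone(dstFd, srcFd)
}

// cloneRange clones length bytes at offset using FICLONERANGE.
func cloneRange(dstFd, srcFd int, off, length int64) error {
	return unix.IoctlFileCloneRange(dstFd, &unix.FileCloneRange{
		Src_fd:      int64(srcFd),
		Src_offset:  uint64(off),
		Src_length:  uint64(length),
		Dest_offset: uint64(off),
	})
}
```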
Most notably, it now detects all-lowercase files and returns these
as-is. The tests have been expanded with two cases and are now used
as a benchmark (admittedly a rather trivial one).
name old time/op new time/op delta
UnicodeLowercaseMaybeChange-8 4.59µs ± 2% 4.57µs ± 1% ~ (p=0.197 n=10+10)
UnicodeLowercaseNoChange-8 3.26µs ± 1% 3.09µs ± 1% -5.27% (p=0.000 n=9+10)
This changes the cache to cache fewer things, yet retain the required
efficiency for our walk use case. This uses less memory.
Specifically, instead of keeping result and child caches for each path
level, only keep a single cached child. In practice our operations are
depth-first, or almost depth-first, and then we retain the same hit
ratio for a smaller cache size.
I improved the benchmark so that it counts the Lstat and DirNames
operations performed, and they do not change significantly. The amount
of allocated memory is reduced by 20% and the walk itself is actually
slightly faster.
This also removes the clear based on the number of cached names (as that
is not a thing any more) and the timer-based clear (which was unused).
This means we'll retain the last cache state forever until it's cleared
by a write operation, but we did that before too, and that state is now
a lot smaller...
The overhead compared to not using a casefs, for our typical "double
walk" (walk the tree, then stat everything again), is 2x the DirNames
calls we would otherwise make, with no overhead on the stats (unchanged
from the old implementation).
```
name old time/op new time/op delta
WalkCaseFakeFS100k/rawfs-8 306ms ± 1% 305ms ± 2% ~ (p=0.182 n=9+10)
WalkCaseFakeFS100k/casefs-8 579ms ± 5% 557ms ± 1% -3.77% (p=0.000 n=10+10)
name old B/entry new B/entry delta
WalkCaseFakeFS100k/rawfs-8 590 ± 0% 590 ± 0% ~ (all equal)
WalkCaseFakeFS100k/casefs-8 1.09k ± 0% 0.87k ± 0% -19.98% (p=0.000 n=10+10)
name old DirNames/entry new DirNames/entry delta
WalkCaseFakeFS100k/rawfs-8 0.51 ± 0% 0.51 ± 0% ~ (all equal)
WalkCaseFakeFS100k/casefs-8 1.02 ± 0% 1.02 ± 0% ~ (all equal)
name old DirNames/op new DirNames/op delta
WalkCaseFakeFS100k/rawfs-8 51.2k ± 0% 51.2k ± 0% ~ (all equal)
WalkCaseFakeFS100k/casefs-8 102k ± 0% 102k ± 0% ~ (all equal)
name old Lstat/entry new Lstat/entry delta
WalkCaseFakeFS100k/rawfs-8 3.02 ± 0% 3.02 ± 0% ~ (all equal)
WalkCaseFakeFS100k/casefs-8 3.02 ± 0% 3.02 ± 0% ~ (all equal)
name old Lstat/op new Lstat/op delta
WalkCaseFakeFS100k/rawfs-8 302k ± 0% 302k ± 0% ~ (all equal)
WalkCaseFakeFS100k/casefs-8 302k ± 0% 302k ± 0% ~ (all equal)
name old allocs/entry new allocs/entry delta
WalkCaseFakeFS100k/rawfs-8 15.7 ± 0% 15.7 ± 0% ~ (all equal)
WalkCaseFakeFS100k/casefs-8 27.5 ± 0% 26.1 ± 0% -5.09% (p=0.000 n=10+10)
name old ns/entry new ns/entry delta
WalkCaseFakeFS100k/rawfs-8 2.02k ± 1% 2.02k ± 2% ~ (p=0.163 n=9+10)
WalkCaseFakeFS100k/casefs-8 3.83k ± 5% 3.68k ± 1% -3.77% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
WalkCaseFakeFS100k/rawfs-8 89.2MB ± 0% 89.2MB ± 0% ~ (p=0.364 n=9+10)
WalkCaseFakeFS100k/casefs-8 164MB ± 0% 131MB ± 0% -19.97% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
WalkCaseFakeFS100k/rawfs-8 2.38M ± 0% 2.38M ± 0% ~ (all equal)
WalkCaseFakeFS100k/casefs-8 4.16M ± 0% 3.95M ± 0% -5.05% (p=0.000 n=10+10)
```
Since iterators must be released before committing or discarding a
transaction, we have the pattern of both deferring a release and doing
it manually. But we can't release twice, because we track this with a
WaitGroup that will panic when there are more Done()s than Add()s. This
just adds a boolean to let an iterator keep track.
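The pattern is simply (a sketch):
```
package db

import "sync"

type iterator struct {
	wg       *sync.WaitGroup
	released bool
}

// Release marks the iterator done exactly once; the deferred call
// after a manual Release becomes a harmless no-op instead of a
// WaitGroup panic.
func (it *iterator) Release() {
	if !it.released {
		it.released = true
		it.wg.Done()
	}
}
```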
We created a new fileset before stopping the folder during restart. When
we create that fileset it loads the current metadata and sequence
numbers from the database. But the folder may have time to update those
before stopping, leaving the new fileset with bad data.
This would cause wrong accounting (forgotten files) and potentially
sequence reuse causing files not sent to other devices.
This change reuses the fileset on restart, skipping the issue entirely.
It also moves the creation of the fileset back under the lock so there
should be no chance of concurrency issues here.
The FileSet.Drop operation in there needs to potentially update a whole
lot of global lists, which can take a while (longer than the deadlock
interval, apparently).
* Add clean up for Simple File Versioning pt.1
created test
* Add clean up for Simple File Versioning pt.2
Passing the test
* Stuck on how the JavaScript communicates with the backend
* Add trash clean up for Simple File Versioning
Add trash clean-up functionality to allow the user to delete backups
after a specified number of days.
* Fixed html and js style
* Refactored cleanup test cases
Refactored cleanup test cases to one file and deleted duplicated code.
* Added copyright to test file
* Refactor folder cleanout to utility function
* change utility function to package private
* refactored utility function; fixed build errors
* Updated copyright year.
* refactor test and logging
* refactor html and js
* revert style change in html
* reverted changes in html and some js
* checkout origin head version edit...html
* checkout upstream master and correct file
Our authentication is based on device ID (certificate fingerprint) but
we also check the certificate name for ... historical extra security
reasons. (I don't think this adds anything but it is what it is.) Since
that check breaks in Go 1.15, this change does two things:
- Adds a manual check for the peer certificate CommonName; if it matches
the expected name we are happy and don't call the more advanced
VerifyHostname() function. This allows our old-style certificates to
still pass the check.
- Adds the cert name "syncthing" as a DNS SAN when generating the
certificate. This is the correct way nowadays and makes VerifyHostname()
happy in Go 1.15 as well, even without the above patch.
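Generating the certificate that way is plain x509 templating; a sketch:
```
package tlsutil

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"time"
)

// newCertTemplate returns a template carrying "syncthing" both as the
// legacy CommonName and as a DNS SAN, which keeps VerifyHostname()
// happy on Go 1.15+.
func newCertTemplate() *x509.Certificate {
	return &x509.Certificate{
		SerialNumber: big.NewInt(1), // sketch; real code randomizes this
		Subject:      pkix.Name{CommonName: "syncthing"},
		DNSNames:     []string{"syncthing"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(20, 0, 0),
	}
}
```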
Apparently our Tags field depended on having specific files react to
tags and add themselves there. This, instead, works for all tags.
Also, pass tags to the test command line.
The QUIC package is notorious for being incompatible with either too
old or too new Go releases. Currently it doesn't build with Go 1.15 RC
and I want to test the rest with Go 1.15. With this I can do `go run
build.go --tags noquic` to do that.
With this change we emulate a case sensitive filesystem on top of
insensitive filesystems. This means we correctly pick up case-only renames
and throw a case conflict error when there would be multiple files differing
only in case.
This safety check has a small performance hit (about 20% more filesystem
operations when scanning for changes). The new advanced folder option
`caseSensitiveFS` can be used to disable the safety checks, retaining the
previous behavior on systems known to be fully case sensitive.
Co-authored-by: Jakob Borg <jakob@kastelo.net>
Prompted by https://forum.syncthing.net/t/infinite-filesystem-recursion-detected/15285. In my opinion the filesystem shouldn't throw warnings, but should pass on errors for the caller to decide what to do with them. Right now in this PR an infinite recursion is a normal scan error, i.e. the folder is in a failed state and displays failed items, but there is no warning. I think that's appropriate, but if deemed necessary an additional warning could be thrown in the scanner.
This fixes the change in #6674 where the weak hash became a deciding
factor. Now we again just use it to accept a block, but don't take a
negative as meaning the block is bad.
If the GC finds a key k that it wants to keep, it records that in a
Bloom filter. If a key k' can be removed but its hash collides with k,
it will be kept. Since the old Bloom filter code was completely
deterministic, the next run would encounter the same collision, assuming
k must still be kept.
A randomized hash function that uses all the SHA-256 bits solves this
problem: the second run has a non-zero probability of removing k', as
long as the Bloom filter is not completely full.
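One way to get that property (a sketch, not necessarily the exact
implementation): mix a fresh random seed, drawn once per GC run, into
all 256 bits of the key.
```
package db

import (
	"crypto/rand"
	"encoding/binary"
)

// bloomHash mixes a per-run random seed into the key's SHA-256 bits,
// so hash collisions differ between GC runs and a key wrongly kept in
// one run has a chance of being collected in the next.
type bloomHash struct {
	seed uint64
}

func newBloomHash() bloomHash {
	var b [8]byte
	rand.Read(b[:]) // crypto/rand; error handling elided in this sketch
	return bloomHash{seed: binary.LittleEndian.Uint64(b[:])}
}

func (h bloomHash) hash(key [32]byte) uint64 {
	v := h.seed
	for i := 0; i < 32; i += 8 {
		// Fold in all 256 bits of the key, mixed with the seed.
		v = v*0x9e3779b97f4a7c15 ^ binary.LittleEndian.Uint64(key[i:i+8])
	}
	return v
}
```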
* Fix ui, hide report date
* Undo Goland madness
* UR now web scale
* Fix migration
* Fix marshaling, force tick on start
* Fix tests
* Darwin build
* Split "all" build target, add package name as a tag
* Remove pq and sql dep from syncthing, split build targets
* Empty line
* Revert "Empty line"
This reverts commit f74af2b067.
* Revert "Remove pq and sql dep from syncthing, split build targets"
This reverts commit 8fc295ad00.
* Revert "Split "all" build target, add package name as a tag"
This reverts commit f4dc889951.
* Normalise contract types
* Fix build, add more logging
This is shorter, skips two allocations, makes the function inlineable
and is safer, since the compiler now checks that
DeviceIDLength == sha256.Size.
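The shape of the change, roughly (assuming DeviceID is an array type,
which is what makes the conversion work):
```
package protocol

import "crypto/sha256"

const DeviceIDLength = 32

type DeviceID [DeviceIDLength]byte

// NewDeviceID hashes the certificate into a DeviceID with a direct
// array conversion: no copy, no allocations, and it fails to compile
// if DeviceIDLength != sha256.Size.
func NewDeviceID(rawCert []byte) DeviceID {
	return DeviceID(sha256.Sum256(rawCert))
}
```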
This changes the error handling in loading ignores slightly:
- There is a new ParseError type that is returned as the error
(somewhere in the chain) when the problem was not an I/O error loading
the file, but some issue with the contents.
- If the file was read successfully but not parsed successfully we still
return the lines read (in addition to nil patterns and a ParseError).
- In the API, if the error IsParseError then we return a successful
HTTP response with the lines and the actual error included in the JSON
object.
- In the GUI, as long as the HTTP call to load the ignores was
successful we can edit the ignores. If there was an error we show this
as a validation error on the dialog.
Also some cleanup on the JavaScript side, as it for some reason used
jQuery instead of Angular for this editor...
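The error type presumably looks something like this (a sketch):
```
package ignore

import "errors"

// ParseError wraps an error that stems from the contents of the ignore
// file rather than from reading it.
type ParseError struct {
	inner error
}

func (e *ParseError) Error() string { return "parse error: " + e.inner.Error() }
func (e *ParseError) Unwrap() error { return e.inner }

// IsParseError reports whether err is, or wraps, a ParseError.
func IsParseError(err error) bool {
	var pe *ParseError
	return errors.As(err, &pe)
}
```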
crypto/rand output is cryptographically secure, per the Go library
documentation's promise. That, rather than strength (= passing
randomness tests), is the property that Syncthing needs.
This makes Go 1.15 test/vet happy, avoiding the "conversion from untyped
int to string yields a string of one rune" warning where we do
string(KeyTypeWhatever) in namespaced.go.
It also clarifies and enforces the currently allowed range of these
numbers so I think it's fine.
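The fix pattern, sketched (KeyTypeDevice here is an assumed example
constant):
```
package db

// KeyTypeDevice is an assumed example constant; the point is the type.
const KeyTypeDevice byte = 1

// prefixFor builds the one-byte namespace prefix. string(keyType) on
// an untyped int would yield "a string of one rune" and trip the Go
// 1.15 vet check; going via []byte makes the intent explicit and
// keeps values limited to 0-255.
func prefixFor(keyType byte) string {
	return string([]byte{keyType})
}
```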