cmd/stdiscosrv: New discovery server (fixes #4618)

This is a new revision of the discovery server. Relevant changes and
non-changes:

- The protocol towards clients is unchanged.
- The recommended large-scale design is still to deploy behind nginx (I
  tested, and it's still a lot faster at terminating TLS).
- The database backend is LevelDB again, and only LevelDB. It scales well
  enough, is easy to set up, and means there is no separate database
  service to take care of.
- The server supports replication. This is a simple TCP channel - protect
  it with a firewall when deploying over the internet. (We deploy this
  within the same datacenter, and with a firewall.) Any incoming client
  announcements are sent over the replication channel(s) to the other peer
  discosrvs. Incoming replication changes are applied to the database as
  if they came from clients, but without the TLS/certificate overhead.
- Metrics are exposed using the Prometheus library, when enabled.
- The database values and the replication protocol are protobuf, because
  JSON turned out to be quite CPU intensive when I tried and benchmarked
  it.
- The "Retry-After" value for failed lookups is slowly increased from a
  default of 120 seconds, by 5 seconds for each failed lookup,
  independently by each discosrv. This lowers the query load over time for
  clients that are never seen. Retry-After maxes out at 3600 seconds after
  a couple of weeks of this increase. The number of failed lookups is
  written back to the database only now and then, to avoid turning every
  lookup into a database put. A sketch of this backoff is shown below.

All in all this means clients can be pointed towards a cluster using just
multiple A / AAAA records, gaining both load sharing and redundancy (if
one server is down, clients will talk to the remaining ones).

GitHub-Pull-Request: https://github.com/syncthing/syncthing/pull/4648
2018-01-14 01:52:31 -07:00
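
As an illustration of the Retry-After backoff described above, here is a
minimal sketch in Go. It is not taken from the actual implementation: the
constants match the description, the function name is hypothetical, and the
miss count is presumably what the Misses field of DatabaseRecord below holds.

    // retryAfter returns the Retry-After value in seconds for a device
    // that has failed "misses" lookups so far: 120 s by default, plus
    // 5 s per recorded miss, capped at 3600 s.
    func retryAfter(misses int) int {
        const (
            base     = 120  // default Retry-After in seconds
            perMiss  = 5    // added per failed lookup
            maxRetry = 3600 // upper bound
        )
        v := base + misses*perMiss
        if v > maxRetry {
            v = maxRetry
        }
        return v
    }
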
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: database.proto

/*
Package main is a generated protocol buffer package.

It is generated from these files:
	database.proto

It has these top-level messages:
	DatabaseRecord
	ReplicationRecord
	DatabaseAddress
*/
package main

import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
import _ "github.com/gogo/protobuf/gogoproto"

import io "io"

// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf

// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package

type DatabaseRecord struct {
	Addresses []DatabaseAddress `protobuf:"bytes,1,rep,name=addresses" json:"addresses"`
	Misses    int32             `protobuf:"varint,2,opt,name=misses,proto3" json:"misses,omitempty"`
	Seen      int64             `protobuf:"varint,3,opt,name=seen,proto3" json:"seen,omitempty"`
	Missed    int64             `protobuf:"varint,4,opt,name=missed,proto3" json:"missed,omitempty"`
}

func (m *DatabaseRecord) Reset()                    { *m = DatabaseRecord{} }
func (m *DatabaseRecord) String() string            { return proto.CompactTextString(m) }
func (*DatabaseRecord) ProtoMessage()               {}
func (*DatabaseRecord) Descriptor() ([]byte, []int) { return fileDescriptorDatabase, []int{0} }

type ReplicationRecord struct {
	Key       string            `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"`
	Addresses []DatabaseAddress `protobuf:"bytes,2,rep,name=addresses" json:"addresses"`
	Seen      int64             `protobuf:"varint,3,opt,name=seen,proto3" json:"seen,omitempty"`
}

func (m *ReplicationRecord) Reset()                    { *m = ReplicationRecord{} }
func (m *ReplicationRecord) String() string            { return proto.CompactTextString(m) }
func (*ReplicationRecord) ProtoMessage()               {}
func (*ReplicationRecord) Descriptor() ([]byte, []int) { return fileDescriptorDatabase, []int{1} }

type DatabaseAddress struct {
	Address string `protobuf:"bytes,1,opt,name=address,proto3" json:"address,omitempty"`
	Expires int64  `protobuf:"varint,2,opt,name=expires,proto3" json:"expires,omitempty"`
}

func (m *DatabaseAddress) Reset()                    { *m = DatabaseAddress{} }
func (m *DatabaseAddress) String() string            { return proto.CompactTextString(m) }
func (*DatabaseAddress) ProtoMessage()               {}
func (*DatabaseAddress) Descriptor() ([]byte, []int) { return fileDescriptorDatabase, []int{2} }

func init() {
	proto.RegisterType((*DatabaseRecord)(nil), "main.DatabaseRecord")
	proto.RegisterType((*ReplicationRecord)(nil), "main.ReplicationRecord")
	proto.RegisterType((*DatabaseAddress)(nil), "main.DatabaseAddress")
}
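
// Illustration only, not part of the generated code: a minimal sketch of
// how the generated Marshal/Unmarshal methods in this file might be used
// to round-trip a DatabaseRecord through the protobuf wire format that the
// database values and the replication protocol use. The address value, the
// timestamps and the function name are made up for the example.
func exampleDatabaseRecordRoundTrip() (DatabaseRecord, error) {
	rec := DatabaseRecord{
		Addresses: []DatabaseAddress{
			{Address: "tcp://192.0.2.1:22000", Expires: 1515900000},
		},
		Seen: 1515900000,
	}
	// Marshal to protobuf bytes, as stored in LevelDB.
	bs, err := rec.Marshal()
	if err != nil {
		return DatabaseRecord{}, err
	}
	// Unmarshal back into a fresh record.
	var out DatabaseRecord
	if err := out.Unmarshal(bs); err != nil {
		return DatabaseRecord{}, err
	}
	return out, nil
}
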

func (m *DatabaseRecord) Marshal() (dAtA []byte, err error) {
	size := m.Size()
	dAtA = make([]byte, size)
	n, err := m.MarshalTo(dAtA)
	if err != nil {
		return nil, err
	}
	return dAtA[:n], nil
}

func (m *DatabaseRecord) MarshalTo(dAtA []byte) (int, error) {
	var i int
	_ = i
	var l int
	_ = l
	if len(m.Addresses) > 0 {
		for _, msg := range m.Addresses {
			dAtA[i] = 0xa
			i++
			i = encodeVarintDatabase(dAtA, i, uint64(msg.Size()))
			n, err := msg.MarshalTo(dAtA[i:])
			if err != nil {
				return 0, err
			}
			i += n
		}
	}
	if m.Misses != 0 {
		dAtA[i] = 0x10
		i++
		i = encodeVarintDatabase(dAtA, i, uint64(m.Misses))
	}
	if m.Seen != 0 {
		dAtA[i] = 0x18
		i++
		i = encodeVarintDatabase(dAtA, i, uint64(m.Seen))
	}
	if m.Missed != 0 {
		dAtA[i] = 0x20
		i++
		i = encodeVarintDatabase(dAtA, i, uint64(m.Missed))
	}
	return i, nil
}

func (m *ReplicationRecord) Marshal() (dAtA []byte, err error) {
	size := m.Size()
	dAtA = make([]byte, size)
	n, err := m.MarshalTo(dAtA)
	if err != nil {
		return nil, err
	}
	return dAtA[:n], nil
}

func (m *ReplicationRecord) MarshalTo(dAtA []byte) (int, error) {
	var i int
	_ = i
	var l int
	_ = l
	if len(m.Key) > 0 {
		dAtA[i] = 0xa
		i++
		i = encodeVarintDatabase(dAtA, i, uint64(len(m.Key)))
		i += copy(dAtA[i:], m.Key)
	}
	if len(m.Addresses) > 0 {
		for _, msg := range m.Addresses {
			dAtA[i] = 0x12
			i++
			i = encodeVarintDatabase(dAtA, i, uint64(msg.Size()))
			n, err := msg.MarshalTo(dAtA[i:])
			if err != nil {
				return 0, err
			}
			i += n
		}
	}
	if m.Seen != 0 {
		dAtA[i] = 0x18
		i++
		i = encodeVarintDatabase(dAtA, i, uint64(m.Seen))
	}
	return i, nil
}

func (m *DatabaseAddress) Marshal() (dAtA []byte, err error) {
	size := m.Size()
	dAtA = make([]byte, size)
	n, err := m.MarshalTo(dAtA)
	if err != nil {
		return nil, err
	}
	return dAtA[:n], nil
}

func (m *DatabaseAddress) MarshalTo(dAtA []byte) (int, error) {
	var i int
	_ = i
	var l int
	_ = l
	if len(m.Address) > 0 {
		dAtA[i] = 0xa
		i++
		i = encodeVarintDatabase(dAtA, i, uint64(len(m.Address)))
		i += copy(dAtA[i:], m.Address)
	}
	if m.Expires != 0 {
		dAtA[i] = 0x10
		i++
		i = encodeVarintDatabase(dAtA, i, uint64(m.Expires))
	}
	return i, nil
}

func encodeFixed64Database(dAtA []byte, offset int, v uint64) int {
	dAtA[offset] = uint8(v)
	dAtA[offset+1] = uint8(v >> 8)
	dAtA[offset+2] = uint8(v >> 16)
	dAtA[offset+3] = uint8(v >> 24)
	dAtA[offset+4] = uint8(v >> 32)
	dAtA[offset+5] = uint8(v >> 40)
	dAtA[offset+6] = uint8(v >> 48)
	dAtA[offset+7] = uint8(v >> 56)
	return offset + 8
}

func encodeFixed32Database(dAtA []byte, offset int, v uint32) int {
	dAtA[offset] = uint8(v)
	dAtA[offset+1] = uint8(v >> 8)
	dAtA[offset+2] = uint8(v >> 16)
	dAtA[offset+3] = uint8(v >> 24)
	return offset + 4
}

func encodeVarintDatabase(dAtA []byte, offset int, v uint64) int {
	for v >= 1<<7 {
		dAtA[offset] = uint8(v&0x7f | 0x80)
		v >>= 7
		offset++
	}
	dAtA[offset] = uint8(v)
	return offset + 1
}

func (m *DatabaseRecord) Size() (n int) {
	var l int
	_ = l
	if len(m.Addresses) > 0 {
		for _, e := range m.Addresses {
			l = e.Size()
			n += 1 + l + sovDatabase(uint64(l))
		}
	}
	if m.Misses != 0 {
		n += 1 + sovDatabase(uint64(m.Misses))
	}
	if m.Seen != 0 {
		n += 1 + sovDatabase(uint64(m.Seen))
	}
	if m.Missed != 0 {
		n += 1 + sovDatabase(uint64(m.Missed))
	}
	return n
}

func (m *ReplicationRecord) Size() (n int) {
	var l int
	_ = l
	l = len(m.Key)
	if l > 0 {
		n += 1 + l + sovDatabase(uint64(l))
	}
	if len(m.Addresses) > 0 {
		for _, e := range m.Addresses {
			l = e.Size()
			n += 1 + l + sovDatabase(uint64(l))
		}
	}
	if m.Seen != 0 {
		n += 1 + sovDatabase(uint64(m.Seen))
	}
	return n
}

func (m *DatabaseAddress) Size() (n int) {
	var l int
	_ = l
	l = len(m.Address)
	if l > 0 {
		n += 1 + l + sovDatabase(uint64(l))
	}
	if m.Expires != 0 {
		n += 1 + sovDatabase(uint64(m.Expires))
	}
	return n
}

func sovDatabase(x uint64) (n int) {
	for {
		n++
		x >>= 7
		if x == 0 {
			break
		}
	}
	return n
}

func sozDatabase(x uint64) (n int) {
	return sovDatabase(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}

func (m *DatabaseRecord) Unmarshal(dAtA []byte) error {
	l := len(dAtA)
	iNdEx := 0
	for iNdEx < l {
		preIndex := iNdEx
		var wire uint64
		for shift := uint(0); ; shift += 7 {
			if shift >= 64 {
				return ErrIntOverflowDatabase
			}
			if iNdEx >= l {
				return io.ErrUnexpectedEOF
			}
			b := dAtA[iNdEx]
			iNdEx++
			wire |= (uint64(b) & 0x7F) << shift
			if b < 0x80 {
				break
			}
		}
		fieldNum := int32(wire >> 3)
		wireType := int(wire & 0x7)
		if wireType == 4 {
			return fmt.Errorf("proto: DatabaseRecord: wiretype end group for non-group")
		}
		if fieldNum <= 0 {
			return fmt.Errorf("proto: DatabaseRecord: illegal tag %d (wire type %d)", fieldNum, wire)
		}
		switch fieldNum {
		case 1:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field Addresses", wireType)
			}
			var msglen int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowDatabase
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				msglen |= (int(b) & 0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if msglen < 0 {
				return ErrInvalidLengthDatabase
			}
			postIndex := iNdEx + msglen
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.Addresses = append(m.Addresses, DatabaseAddress{})
			if err := m.Addresses[len(m.Addresses)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
				return err
			}
			iNdEx = postIndex
		case 2:
			if wireType != 0 {
				return fmt.Errorf("proto: wrong wireType = %d for field Misses", wireType)
			}
			m.Misses = 0
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowDatabase
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				m.Misses |= (int32(b) & 0x7F) << shift
				if b < 0x80 {
					break
				}
			}
		case 3:
			if wireType != 0 {
				return fmt.Errorf("proto: wrong wireType = %d for field Seen", wireType)
			}
			m.Seen = 0
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowDatabase
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				m.Seen |= (int64(b) & 0x7F) << shift
				if b < 0x80 {
					break
				}
			}
		case 4:
			if wireType != 0 {
				return fmt.Errorf("proto: wrong wireType = %d for field Missed", wireType)
			}
			m.Missed = 0
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowDatabase
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				m.Missed |= (int64(b) & 0x7F) << shift
				if b < 0x80 {
					break
				}
			}
		default:
			iNdEx = preIndex
			skippy, err := skipDatabase(dAtA[iNdEx:])
			if err != nil {
				return err
			}
			if skippy < 0 {
				return ErrInvalidLengthDatabase
			}
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			iNdEx += skippy
		}
	}
	if iNdEx > l {
		return io.ErrUnexpectedEOF
	}
	return nil
}

func (m *ReplicationRecord) Unmarshal(dAtA []byte) error {
	l := len(dAtA)
	iNdEx := 0
	for iNdEx < l {
		preIndex := iNdEx
		var wire uint64
		for shift := uint(0); ; shift += 7 {
			if shift >= 64 {
				return ErrIntOverflowDatabase
			}
			if iNdEx >= l {
				return io.ErrUnexpectedEOF
			}
			b := dAtA[iNdEx]
			iNdEx++
			wire |= (uint64(b) & 0x7F) << shift
			if b < 0x80 {
				break
			}
		}
		fieldNum := int32(wire >> 3)
		wireType := int(wire & 0x7)
		if wireType == 4 {
			return fmt.Errorf("proto: ReplicationRecord: wiretype end group for non-group")
		}
		if fieldNum <= 0 {
			return fmt.Errorf("proto: ReplicationRecord: illegal tag %d (wire type %d)", fieldNum, wire)
		}
		switch fieldNum {
		case 1:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field Key", wireType)
			}
			var stringLen uint64
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowDatabase
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				stringLen |= (uint64(b) & 0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			intStringLen := int(stringLen)
			if intStringLen < 0 {
				return ErrInvalidLengthDatabase
			}
			postIndex := iNdEx + intStringLen
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.Key = string(dAtA[iNdEx:postIndex])
			iNdEx = postIndex
		case 2:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field Addresses", wireType)
			}
			var msglen int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowDatabase
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				msglen |= (int(b) & 0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			if msglen < 0 {
				return ErrInvalidLengthDatabase
			}
			postIndex := iNdEx + msglen
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.Addresses = append(m.Addresses, DatabaseAddress{})
			if err := m.Addresses[len(m.Addresses)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
				return err
			}
			iNdEx = postIndex
		case 3:
			if wireType != 0 {
				return fmt.Errorf("proto: wrong wireType = %d for field Seen", wireType)
			}
			m.Seen = 0
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowDatabase
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				m.Seen |= (int64(b) & 0x7F) << shift
				if b < 0x80 {
					break
				}
			}
		default:
			iNdEx = preIndex
			skippy, err := skipDatabase(dAtA[iNdEx:])
			if err != nil {
				return err
			}
			if skippy < 0 {
				return ErrInvalidLengthDatabase
			}
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			iNdEx += skippy
		}
	}
	if iNdEx > l {
		return io.ErrUnexpectedEOF
	}
	return nil
}

func (m *DatabaseAddress) Unmarshal(dAtA []byte) error {
	l := len(dAtA)
	iNdEx := 0
	for iNdEx < l {
		preIndex := iNdEx
		var wire uint64
		for shift := uint(0); ; shift += 7 {
			if shift >= 64 {
				return ErrIntOverflowDatabase
			}
			if iNdEx >= l {
				return io.ErrUnexpectedEOF
			}
			b := dAtA[iNdEx]
			iNdEx++
			wire |= (uint64(b) & 0x7F) << shift
			if b < 0x80 {
				break
			}
		}
		fieldNum := int32(wire >> 3)
		wireType := int(wire & 0x7)
		if wireType == 4 {
			return fmt.Errorf("proto: DatabaseAddress: wiretype end group for non-group")
		}
		if fieldNum <= 0 {
			return fmt.Errorf("proto: DatabaseAddress: illegal tag %d (wire type %d)", fieldNum, wire)
		}
		switch fieldNum {
		case 1:
			if wireType != 2 {
				return fmt.Errorf("proto: wrong wireType = %d for field Address", wireType)
			}
			var stringLen uint64
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowDatabase
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				stringLen |= (uint64(b) & 0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			intStringLen := int(stringLen)
			if intStringLen < 0 {
				return ErrInvalidLengthDatabase
			}
			postIndex := iNdEx + intStringLen
			if postIndex > l {
				return io.ErrUnexpectedEOF
			}
			m.Address = string(dAtA[iNdEx:postIndex])
			iNdEx = postIndex
		case 2:
			if wireType != 0 {
				return fmt.Errorf("proto: wrong wireType = %d for field Expires", wireType)
			}
			m.Expires = 0
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return ErrIntOverflowDatabase
				}
				if iNdEx >= l {
					return io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				m.Expires |= (int64(b) & 0x7F) << shift
				if b < 0x80 {
					break
				}
			}
		default:
			iNdEx = preIndex
			skippy, err := skipDatabase(dAtA[iNdEx:])
			if err != nil {
				return err
			}
			if skippy < 0 {
				return ErrInvalidLengthDatabase
			}
			if (iNdEx + skippy) > l {
				return io.ErrUnexpectedEOF
			}
			iNdEx += skippy
		}
	}
	if iNdEx > l {
		return io.ErrUnexpectedEOF
	}
	return nil
}

func skipDatabase(dAtA []byte) (n int, err error) {
	l := len(dAtA)
	iNdEx := 0
	for iNdEx < l {
		var wire uint64
		for shift := uint(0); ; shift += 7 {
			if shift >= 64 {
				return 0, ErrIntOverflowDatabase
			}
			if iNdEx >= l {
				return 0, io.ErrUnexpectedEOF
			}
			b := dAtA[iNdEx]
			iNdEx++
			wire |= (uint64(b) & 0x7F) << shift
			if b < 0x80 {
				break
			}
		}
		wireType := int(wire & 0x7)
		switch wireType {
		case 0:
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return 0, ErrIntOverflowDatabase
				}
				if iNdEx >= l {
					return 0, io.ErrUnexpectedEOF
				}
				iNdEx++
				if dAtA[iNdEx-1] < 0x80 {
					break
				}
			}
			return iNdEx, nil
		case 1:
			iNdEx += 8
			return iNdEx, nil
		case 2:
			var length int
			for shift := uint(0); ; shift += 7 {
				if shift >= 64 {
					return 0, ErrIntOverflowDatabase
				}
				if iNdEx >= l {
					return 0, io.ErrUnexpectedEOF
				}
				b := dAtA[iNdEx]
				iNdEx++
				length |= (int(b) & 0x7F) << shift
				if b < 0x80 {
					break
				}
			}
			iNdEx += length
			if length < 0 {
				return 0, ErrInvalidLengthDatabase
			}
			return iNdEx, nil
		case 3:
			for {
				var innerWire uint64
				var start int = iNdEx
				for shift := uint(0); ; shift += 7 {
					if shift >= 64 {
						return 0, ErrIntOverflowDatabase
					}
					if iNdEx >= l {
						return 0, io.ErrUnexpectedEOF
					}
					b := dAtA[iNdEx]
					iNdEx++
					innerWire |= (uint64(b) & 0x7F) << shift
					if b < 0x80 {
						break
					}
				}
				innerWireType := int(innerWire & 0x7)
				if innerWireType == 4 {
					break
				}
				next, err := skipDatabase(dAtA[start:])
				if err != nil {
					return 0, err
				}
				iNdEx = start + next
			}
			return iNdEx, nil
		case 4:
			return iNdEx, nil
		case 5:
			iNdEx += 4
			return iNdEx, nil
		default:
			return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
		}
	}
	panic("unreachable")
}

var (
	ErrInvalidLengthDatabase = fmt.Errorf("proto: negative length found during unmarshaling")
	ErrIntOverflowDatabase   = fmt.Errorf("proto: integer overflow")
)

func init() { proto.RegisterFile("database.proto", fileDescriptorDatabase) }

var fileDescriptorDatabase = []byte{
	// 264 bytes of a gzipped FileDescriptorProto
	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xe2, 0x4b, 0x49, 0x2c, 0x49,
	0x4c, 0x4a, 0x2c, 0x4e, 0xd5, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0xc9, 0x4d, 0xcc, 0xcc,
	0x93, 0xd2, 0x4d, 0xcf, 0x2c, 0xc9, 0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xcf, 0x4f,
	0xcf, 0xd7, 0x07, 0x4b, 0x26, 0x95, 0xa6, 0x81, 0x79, 0x60, 0x0e, 0x98, 0x05, 0xd1, 0xa4, 0xd4,
	0xcf, 0xc8, 0xc5, 0xe7, 0x02, 0x35, 0x27, 0x28, 0x35, 0x39, 0xbf, 0x28, 0x45, 0xc8, 0x92, 0x8b,
	0x33, 0x31, 0x25, 0xa5, 0x28, 0xb5, 0xb8, 0x38, 0xb5, 0x58, 0x82, 0x51, 0x81, 0x59, 0x83, 0xdb,
	0x48, 0x54, 0x0f, 0x64, 0xb6, 0x1e, 0x4c, 0xa1, 0x23, 0x44, 0xda, 0x89, 0xe5, 0xc4, 0x3d, 0x79,
	0x86, 0x20, 0x84, 0x6a, 0x21, 0x31, 0x2e, 0xb6, 0xdc, 0x4c, 0xb0, 0x3e, 0x26, 0x05, 0x46, 0x0d,
	0xd6, 0x20, 0x28, 0x4f, 0x48, 0x88, 0x8b, 0xa5, 0x38, 0x35, 0x35, 0x4f, 0x82, 0x59, 0x81, 0x51,
	0x83, 0x39, 0x08, 0xcc, 0x86, 0xab, 0x4d, 0x91, 0x60, 0x01, 0x8b, 0x42, 0x79, 0x4a, 0x25, 0x5c,
	0x82, 0x41, 0xa9, 0x05, 0x39, 0x99, 0xc9, 0x89, 0x25, 0x99, 0xf9, 0x79, 0x50, 0x37, 0x09, 0x70,
	0x31, 0x67, 0xa7, 0x56, 0x4a, 0x30, 0x2a, 0x30, 0x6a, 0x70, 0x06, 0x81, 0x98, 0xa8, 0xae, 0x64,
	0x22, 0xc9, 0x95, 0x58, 0x5c, 0xa3, 0xe4, 0xca, 0xc5, 0x8f, 0xa6, 0x4f, 0x48, 0x82, 0x8b, 0x1d,
	0xaa, 0x07, 0x6a, 0x2f, 0x8c, 0x0b, 0x92, 0x49, 0xad, 0x28, 0xc8, 0x2c, 0x82, 0xfa, 0x93, 0x39,
	0x08, 0xc6, 0x75, 0x12, 0x38, 0xf1, 0x50, 0x8e, 0xe1, 0xc4, 0x23, 0x39, 0xc6, 0x0b, 0x8f, 0xe4,
	0x18, 0x1f, 0x3c, 0x92, 0x63, 0x4c, 0x62, 0x03, 0x87, 0xb3, 0x31, 0x20, 0x00, 0x00, 0xff, 0xff,
	0x6a, 0x22, 0xa2, 0x85, 0xae, 0x01, 0x00, 0x00,
}
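
// Illustration only, not part of the generated code: a minimal sketch of
// how an incoming client announcement might be forwarded as a
// ReplicationRecord over the plain TCP replication channel described in
// the commit message. The function name and the varint length-prefix
// framing are assumptions for the example; the real wire framing is
// defined by the replication code elsewhere in this package.
func exampleSendReplication(w io.Writer, rec ReplicationRecord) error {
	buf, err := rec.Marshal()
	if err != nil {
		return err
	}
	// Assumed framing: a varint length (reusing the encoding helper from
	// this file) followed by the protobuf-encoded record.
	hdr := make([]byte, 10)
	n := encodeVarintDatabase(hdr, 0, uint64(len(buf)))
	if _, err := w.Write(hdr[:n]); err != nil {
		return err
	}
	_, err = w.Write(buf)
	return err
}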