This ensures we don't submit the same request twice if we are kicking a
specific osd (as with an osd_reset), or when we hit a transient error and
resend.
Signed-off-by: Sage Weil <sage@newdream.net>
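One way to picture the invariant this relies on: the request (and its tid) is registered before the message is handed to the transport, and the actual queueing step is idempotent, so a racing kick or resend cannot submit a second copy. The sketch below is a simplified userspace model of that pattern with made-up names and pthread locking; it is not the kernel/libceph code.

    /* Hypothetical userspace model of resend idempotence; not the libceph code. */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct request {
            uint64_t tid;              /* transaction id, assigned at registration */
            bool queued;               /* a copy is already on the outbound queue  */
            struct request *next;      /* registration list                        */
    };

    static pthread_mutex_t request_mutex = PTHREAD_MUTEX_INITIALIZER;
    static struct request *registered;
    static uint64_t last_tid;

    /* Register first, so any concurrent kick/resend can see the request. */
    static void register_request(struct request *req)
    {
            pthread_mutex_lock(&request_mutex);
            req->tid = ++last_tid;
            req->next = registered;
            registered = req;
            pthread_mutex_unlock(&request_mutex);
    }

    /* Called with request_mutex held.  The queued flag makes submission
     * idempotent: a request already handed to the transport is skipped. */
    static void __send_request(struct request *req)
    {
            if (req->queued)
                    return;
            req->queued = true;
            printf("queueing tid %llu\n", (unsigned long long)req->tid);
    }

    /* Resend everything registered against a reset osd, or after a transient
     * error; a request whose message is still queued is not submitted again. */
    static void kick_requests(void)
    {
            pthread_mutex_lock(&request_mutex);
            for (struct request *req = registered; req; req = req->next)
                    __send_request(req);
            pthread_mutex_unlock(&request_mutex);
    }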
The peer_reset just takes longer (until we reconnect and discover the osd
dropped the session... which it will).
Signed-off-by: Sage Weil <sage@newdream.net>
If an osd has failed or returned and a request has been sent twice, it's
possible to get a reply and unregister the request while the request
message is queued for delivery. Since the message references the caller's
page vector, we need to revoke it before completing.
Signed-off-by: Sage Weil <sage@newdream.net>
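A simplified model of the revoke-before-complete pattern described above (hypothetical names, userspace pthread locking standing in for the kernel msgr): the completion path pulls the still-queued request message off the connection's outbound queue before the caller's page vector is released, so the transport can never touch pages the caller has reclaimed.

    /* Simplified model of revoking a queued message before completion;
     * hypothetical names, not the kernel msgr. */
    #include <pthread.h>
    #include <stdlib.h>

    struct message {
            struct message *next;
            void **pages;              /* borrowed from the caller of the request */
    };

    struct connection {
            pthread_mutex_t lock;      /* protects out_queue */
            struct message *out_queue; /* messages not yet written to the socket */
    };

    /* Remove msg from the connection's outbound queue if it is still there,
     * so the transport can no longer reference it (or its pages). */
    static void revoke_message(struct connection *con, struct message *msg)
    {
            pthread_mutex_lock(&con->lock);
            for (struct message **p = &con->out_queue; *p; p = &(*p)->next) {
                    if (*p == msg) {
                            *p = msg->next;
                            msg->next = NULL;
                            break;
                    }
            }
            pthread_mutex_unlock(&con->lock);
    }

    /* A reply may arrive while our (duplicate) request is still queued for
     * delivery.  Revoke it before releasing the page vector it points at. */
    static void complete_request(struct connection *con, struct message *request,
                                 void (*release_pages)(void **))
    {
            revoke_message(con, request);
            release_pages(request->pages);   /* safe: unreachable by the transport */
            free(request);
    }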
The osd request submission path registers the request, drops and retakes
the request_mutex, then sends it to the OSD. A racing kick_requests could
send it during that interval, causing the same msg to be sent twice and
triggering a BUG in the msgr.
Fix by only sending the message if it hasn't been touched by other
threads.
Signed-off-by: Sage Weil <sage@newdream.net>
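The race and the fix can be modeled roughly as follows (hypothetical names, with a plain pthread mutex standing in for request_mutex): after retaking the lock, the submit path re-checks whether another thread already queued the message, and only sends if it is still untouched.

    /* Model of the submit/kick race and the fix; hypothetical names. */
    #include <pthread.h>
    #include <stdbool.h>

    struct request {
            bool sent;                 /* set by whichever path queued the message */
            struct request *next;
    };

    static pthread_mutex_t request_mutex = PTHREAD_MUTEX_INITIALIZER;
    static struct request *registered;

    static void queue_on_wire(struct request *req)
    {
            (void)req;                 /* hand the message to the transport */
    }

    /* Resend path that may run while start_request() has dropped the mutex. */
    static void kick_requests(void)
    {
            pthread_mutex_lock(&request_mutex);
            for (struct request *req = registered; req; req = req->next) {
                    req->sent = true;
                    queue_on_wire(req);
            }
            pthread_mutex_unlock(&request_mutex);
    }

    static void start_request(struct request *req)
    {
            pthread_mutex_lock(&request_mutex);
            req->next = registered;          /* register */
            registered = req;
            pthread_mutex_unlock(&request_mutex);

            /* ... work that must happen without the mutex held ... */

            pthread_mutex_lock(&request_mutex);
            if (!req->sent) {                /* untouched by a racing kick? */
                    req->sent = true;
                    queue_on_wire(req);      /* the message is queued exactly once */
            }
            pthread_mutex_unlock(&request_mutex);
    }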
The OSD client is responsible for reading and writing data from/to the
object storage pool. This includes determining where objects are
stored in the cluster, and ensuring that requests are retried or
redirected in the event of a node failure or data migration.
If an OSD does not respond before a timeout expires, keepalive
messages are sent across the lossless, ordered communications channel
to ensure that any break in the TCP connection is discovered. If the session
does reset, a reconnection is attempted and affected requests are
resent (by the message transport layer).
Signed-off-by: Sage Weil <sage@newdream.net>
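A rough userspace sketch of the timeout/keepalive behavior described above (the names, timeout value, and data layout are all made up; the real client drives this from a delayed work item): a periodic scan sends a keepalive on any session whose oldest pending request has gone unanswered past the timeout, so a dead TCP connection is noticed and the session reset/resend path can take over.

    /* Model of the timeout/keepalive scan; hypothetical names and layout. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    #define OSD_TIMEOUT_SECS 60

    struct osd_session {
            time_t oldest_unacked;     /* send time of the oldest pending request */
            bool   has_pending;
    };

    /* Send a small keepalive over the (lossless, ordered) session so that a
     * broken TCP connection is noticed even when no data is flowing. */
    static void send_keepalive(struct osd_session *s)
    {
            (void)s;
            printf("keepalive\n");
    }

    /* Periodic timeout handler: any session whose oldest pending request has
     * not been acked within the timeout gets a keepalive.  If the connection
     * turns out to be dead, the session resets and the transport layer
     * resends the affected requests after reconnecting. */
    static void handle_timeout(struct osd_session *sessions, int nr)
    {
            time_t now = time(NULL);

            for (int i = 0; i < nr; i++) {
                    struct osd_session *s = &sessions[i];
                    if (s->has_pending &&
                        now - s->oldest_unacked >= OSD_TIMEOUT_SECS)
                            send_keepalive(s);
            }
    }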