Commit Graph

1280 Commits

Author SHA1 Message Date
Eric Dumazet
54287cc178 [TCP]: Keep copied_seq, rcv_wup and rcv_next together.
I noticed in an oprofile study a cache miss in tcp_rcv_established() when
reading copied_seq.

ffffffff80400a80 <tcp_rcv_established>: /* tcp_rcv_established total: 4034293  2.0400 */

 55493  0.0281 :ffffffff80400bc9:   mov    0x4c8(%r12),%eax copied_seq
543103  0.2746 :ffffffff80400bd1:   cmp    0x3e0(%r12),%eax   rcv_nxt    

if (tp->copied_seq == tp->rcv_nxt &&
        len - tcp_header_len <= tp->ucopy.len) {

In this function, the cache line 0x4c0 -> 0x500 is used only to read the
'copied_seq' field.

rcv_wup and copied_seq should be next to the rcv_nxt field, to lower the number
of active cache lines in hot paths (tcp_rcv_established(), tcp_poll(), ...).

As you suggested, I changed tcp_create_openreq_child() so that these fields
are assigned together, to avoid adding a new store buffer stall.

The patch is 64-bit friendly (no new hole because of alignment constraints).

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:21 -07:00
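
A minimal, stand-alone sketch of the idea in the commit above (hypothetical
struct and function names; the real fields live in struct tcp_sock): keep the
fields that tcp_rcv_established() reads together on one cache line, and write
them back to back in the minisock setup path.

#include <stdint.h>

/* Hypothetical stand-in for the hot part of struct tcp_sock: rcv_wup and
 * copied_seq sit right next to rcv_nxt, so the hot path touches a single
 * cache line for all three. */
struct tcp_hot_fields {
	uint32_t rcv_nxt;    /* what we want to receive next        */
	uint32_t copied_seq; /* head of yet unread data             */
	uint32_t rcv_wup;    /* rcv_nxt on last window update sent  */
};

/* Mirrors the tcp_create_openreq_child() change: assign the three fields
 * together so the stores land in the same line / store-buffer entry. */
static void init_child_seq(struct tcp_hot_fields *tp, uint32_t rcv_isn)
{
	tp->copied_seq = tp->rcv_wup = tp->rcv_nxt = rcv_isn + 1;
}
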
Ilpo Järvinen
cf4c6bf83d [TCP]: struct *sock argument renamed: sp -> sk
In general, TCP code uses "sk" for struct sock pointer.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:20 -07:00
John Heffner
886236c124 [TCP]: Add RFC3742 Limited Slow-Start, controlled by variable sysctl_tcp_max_ssthresh.
Signed-off-by: John Heffner <jheffner@psc.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:19 -07:00
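
For reference, RFC 3742 Limited Slow-Start caps exponential growth once cwnd
exceeds max_ssthresh, allowing roughly max_ssthresh/2 extra segments per RTT.
A hedged, stand-alone sketch of that behaviour (illustrative names and
structure, not the kernel's tcp_slow_start()):

#include <stdint.h>

/* cwnd is counted in segments; cwnd_cnt accumulates per-ACK credit.
 * Assumes cwnd >= 1. */
static uint32_t limited_slow_start_ack(uint32_t cwnd, uint32_t max_ssthresh,
				       uint32_t *cwnd_cnt)
{
	uint32_t cnt;

	if (max_ssthresh > 0 && cwnd > max_ssthresh)
		cnt = max_ssthresh >> 1;  /* at most ~max_ssthresh/2 per RTT */
	else
		cnt = cwnd;               /* classic slow start: +1 per ACK  */

	*cwnd_cnt += cnt;
	while (*cwnd_cnt >= cwnd) {       /* convert accumulated credit      */
		*cwnd_cnt -= cwnd;
		cwnd++;
	}
	return cwnd;
}

Setting the sysctl to 0 keeps the classic behaviour; a positive value bounds
how aggressively a very large window can keep doubling.
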
Angelo P. Castellani
5ef814753e [TCP] YeAH-TCP: algorithm implementation
YeAH-TCP is a sender-side high-speed enabled TCP congestion control
algorithm, which uses a mixed loss/delay approach to compute the
congestion window. Its design goals are high efficiency; internal, RTT,
and Reno fairness; and resilience to link loss, while keeping the load on
network elements as low as possible.

For further details look here:
    http://wil.cs.caltech.edu/pfldnet2007/paper/YeAH_TCP.pdf

Signed-off-by: Angelo P. Castellani <angelo.castellani@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:18 -07:00
Ilpo Järvinen
4dc2665e36 [TCP]: SACK enhanced FRTO
Implements the SACK-enhanced FRTO given in RFC4138 using the
variant given in Appendix B.

RFC4138, Appendix B:
  "This means that in order to declare timeout spurious, the TCP
   sender must receive an acknowledgment for non-retransmitted
   segment between SND.UNA and RecoveryPoint in algorithm step 3.
   RecoveryPoint is defined in conservative SACK-recovery
   algorithm [RFC3517]"

The basic version of the FRTO algorithm can still be used when SACK is
enabled. To enable the SACK-enhanced version, the tcp_frto sysctl is set
to 2.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:16 -07:00
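
A hedged, stand-alone sketch of the two pieces described above (illustrative
names, not the kernel's helpers): the variant is selected with tcp_frto = 2,
and a timeout is declared spurious only when a never-retransmitted segment
between SND.UNA and RecoveryPoint gets acknowledged.

#include <stdbool.h>
#include <stdint.h>

static bool sack_frto_enabled(int sysctl_tcp_frto)
{
	return sysctl_tcp_frto == 2;	/* 1 = basic FRTO, 2 = SACK-enhanced */
}

/* Appendix B rule, step 3: the acked segment must never have been
 * retransmitted and must lie in [SND.UNA, RecoveryPoint). */
static bool frto_spurious_by_sack(uint32_t acked_seq, bool seg_ever_retrans,
				  uint32_t snd_una, uint32_t recovery_point)
{
	return !seg_ever_retrans &&
	       (acked_seq - snd_una) < (recovery_point - snd_una);
}
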
Ilpo Järvinen
288035f915 [TCP]: Prevent reordering adjustments during FRTO
To be honest, I'm not too sure how the reord stuff works in the
first place, but this seems necessary.

When FRTO has been active, the one and only retransmission could
be unnecessary, but the state and sending order might not be what
the sacktag code expects them to be in order to work correctly.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:15 -07:00
Ilpo Järvinen
66e93e45c0 [TCP] FRTO: Fake cwnd for ssthresh callback
TCP without FRTO would be in Loss state with a small cwnd. FRTO,
however, (typically) leaves cwnd at a larger value, which causes
ssthresh to become too large if an RTO is triggered again, compared
to what conventional recovery would do. Because consecutive RTOs
result in only a single ssthresh reduction, an RTO + cumulative ACK +
RTO pattern is required to trigger this event.

A large comment is included for congestion control module writers
trying to figure out what the CA_EVENT_FRTO handler should do, because
there exists a remote possibility of incompatibility between
FRTO and module-defined ssthresh functions.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:14 -07:00
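
A hedged sketch of the "fake cwnd" idea from the commit above (illustrative
types and names; the exact substitute value is parameterized here rather than
taken from the patch): present a small, Loss-state-like cwnd to the module's
ssthresh callback, then restore the real cwnd for FRTO processing.

#include <stdint.h>

struct cc_state {
	uint32_t snd_cwnd;
	uint32_t snd_ssthresh;
};

static void frto_ssthresh_with_fake_cwnd(struct cc_state *tp, uint32_t fake_cwnd,
					 uint32_t (*ssthresh)(const struct cc_state *))
{
	uint32_t real_cwnd = tp->snd_cwnd;

	tp->snd_cwnd = fake_cwnd;	/* small, Loss-like value            */
	tp->snd_ssthresh = ssthresh(tp);/* e.g. max(cwnd/2, 2) in Reno style */
	tp->snd_cwnd = real_cwnd;	/* restore for FRTO processing       */
}
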
Ilpo Järvinen
d1a54c6a0a [TCP] FRTO: Reverse RETRANS bit clearing logic
Previously RETRANS bits were cleared on entry to FRTO. We
postpone that into tcp_enter_frto_loss, which is really the
place where the clearing should be done anyway. This allows
simplifying the logic from a clearing loop to clearing only the
head skb.

Besides, the other changes made in the previous patches to
tcp_use_frto made it impossible for non-SACKed FRTO to be
entered if anything other than the head has been retransmitted.

With SACK-enhanced FRTO (and Appendix B), however, there can be
a number of retransmissions in flight when the RTO expires (the same
thing could also happen with non-SACK FRTO before this patchset). To
avoid introducing jumpiness into the packet counting during FRTO,
the RETRANS bits are not cleared from skbs on entry but later on.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:13 -07:00
Ilpo Järvinen
46d0de4ed9 [TCP] FRTO: Entry is allowed only during (New)Reno like recovery
This interpretation comes from RFC4138:
    "If the sender implements some loss recovery algorithm other
     than Reno or NewReno [FHG04], the F-RTO algorithm SHOULD
     NOT be entered when earlier fast recovery is underway."

I think the RFC means to say (especially in the light of
Appendix B) that ...recovery is underway (not just fast recovery)
or was underway when it was interrupted by an earlier (F-)RTO
that hasn't yet been resolved (snd_una has not advanced enough).
Thus, my interpretation is that whenever TCP has ever
retransmitted anything other than the head, the basic version cannot
be used, because then the ordering assumptions that FRTO is based on
do not hold.

NewReno has only the head segment retransmitted at a time.
Therefore, walk up to the first segment that has not been SACKed; if
neither that segment nor anything before it has been retransmitted, we
know for sure that nothing after the non-SACKed segment has been
either. This assumption is valid because TCPCB_EVER_RETRANS does
not leave holes; each non-SACKed segment is retransmitted in order.

Checking for retrans_out > 1 avoids the more expensive walk through the
skb list, as we know the result beforehand: F-RTO will not be
allowed.

A SACKed skb can turn into a non-SACKed one only in the extremely rare
case of SACK reneging; in this case we might fail to detect
retransmissions if there were any for segments other than the head. To
get rid of that behavior, the whole retransmit queue would have to be
walked (always), or FRTO would have to be prevented when SACK reneging
happens. Of course the RTO should still trigger after reneging, which
makes this issue even less likely to show up. And as long as the
response is as conservative as it is now, nothing bad happens even
then.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:12 -07:00
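
A hedged, stand-alone sketch of the entry check described above (a simplified
data model with a flat array, not the kernel's skb queue): F-RTO may be used
only if nothing but the head segment has ever been retransmitted.

#include <stdbool.h>
#include <stdint.h>

struct seg {
	bool sacked;
	bool ever_retrans;
};

static bool can_use_frto(const struct seg *q, int qlen, uint32_t retrans_out)
{
	int i;

	if (retrans_out > 1)	/* cheap shortcut: something besides the head */
		return false;

	/* Skip the head (NewReno retransmits only the head); stop at the
	 * first non-SACKed segment, since retransmissions don't leave holes
	 * nothing beyond it can have been retransmitted either. */
	for (i = 1; i < qlen; i++) {
		if (q[i].ever_retrans)
			return false;
		if (!q[i].sacked)
			break;
	}
	return true;
}
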
Ilpo Järvinen
7c9a4a5b67 [TCP]: Prevent unrelated cwnd adjustment while using FRTO
FRTO controls cwnd when it still processes the ACK input or it
has just reverted back to conventional RTO recovery; the normal
rules apply when FRTO has reverted to standard congestion
control.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:11 -07:00
Ilpo Järvinen
94d0ea7786 [TCP] FRTO: frto_counter modulo-op converted to two assignments
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:10 -07:00
Ilpo Järvinen
52c63f1e86 [TCP]: Don't enter to fast recovery while using FRTO
Because TCP is not in Loss state during FRTO recovery, fast
recovery could be triggered by accident. Non-SACK FRTO is more
robust than the not-yet-included SACK-enhanced version (which can
receive a high number of duplicate ACKs with SACK blocks during
FRTO), at least with unidirectional transfers, but under
extraordinary patterns fast recovery can still be incorrectly
triggered (e.g., data loss + ACK losses => a cumulative ACK with
enough SACK blocks to meet the sacked_out >= dupthresh condition).

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:09 -07:00
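
A hedged sketch of the guard described above (simplified arguments and
trigger condition, not the kernel's tcp_time_to_recover()): while frto_counter
is non-zero the FRTO machinery owns ACK processing, so the usual
dupACK/SACK-based trigger for fast recovery is suppressed.

#include <stdbool.h>
#include <stdint.h>

static bool time_to_recover(uint32_t frto_counter, uint32_t sacked_out,
			    uint32_t dupthresh)
{
	if (frto_counter)		/* FRTO still in progress */
		return false;

	return sacked_out >= dupthresh;	/* the usual (simplified) trigger */
}
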
Ilpo Järvinen
aa8b6a7ad1 [TCP] FRTO: Response should reset also snd_cwnd_cnt
Since the purpose is to reduce cwnd, we prevent immediate growth. This
is not a major issue, nor is "the correct way" specified anywhere.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:08 -07:00
Ilpo Järvinen
95c4922bf9 [TCP] FRTO: fixes fallback to conventional recovery
The FRTO detection did not care how the ACK pattern affects the cwnd
calculation of conventional recovery. This caused an incorrect
setting of cwnd when the fallback became necessary. The
knowledge tcp_process_frto() has about the incoming ACK is now
passed on to tcp_enter_frto_loss() in the allowed_segments parameter,
which gives the number of segments that must be added to
packets-in-flight while calculating the new cwnd.

Instead of snd_una we use FLAG_DATA_ACKED in duplicate ACK
detection because RFC4138 states (in Section 2.2):
  If the first acknowledgment after the RTO retransmission
  does not acknowledge all of the data that was retransmitted
  in step 1, the TCP sender reverts to the conventional RTO
  recovery.  Otherwise, a malicious receiver acknowledging
  partial segments could cause the sender to declare the
  timeout spurious in a case where data was lost.

If the next ACK after the RTO is a duplicate, we do not retransmit
anything, which is equal to what conservative conventional
recovery does in such a case.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:07 -07:00
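
A hedged sketch of the interface change described above (simplified types and
names): the ACK classification done in tcp_process_frto() is handed to the
fallback path as allowed_segments, the number of segments added on top of
packets-in-flight when the new cwnd is computed.

#include <stdint.h>

static uint32_t frto_fallback_cwnd(uint32_t packets_in_flight,
				   int allowed_segments)
{
	/* conventional-recovery-equivalent cwnd after the fallback */
	return packets_in_flight + allowed_segments;
}
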
Ilpo Järvinen
6408d206c7 [TCP] FRTO: Ignore some uninteresting ACKs
Handles an RFC4138 shortcoming (in step 2); it should also have a case
c) which ignores ACKs that are neither duplicates nor advance the
window (opposite-direction data, window updates).

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:06 -07:00
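
A hedged sketch of the extra "case c)" (the flag names are illustrative, not
the kernel's FLAG_* bits): an ACK that neither counts as a duplicate nor moves
the window forward, e.g. one that only carries opposite-direction data or a
window update, is simply ignored by the FRTO state machine.

#include <stdbool.h>

static bool frto_should_ignore_ack(bool is_dupack, bool advances_window)
{
	return !is_dupack && !advances_window;
}
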
Ilpo Järvinen
7b0eb22b1d [TCP] FRTO: Use Disorder state during operation instead of Open
Retransmission counter assumptions are to be changed. A forcing
reason to do this exists: using the sysctl in the check would be racy
as soon as FRTO starts to ignore some ACKs (done in the
following patches). Userspace may disable it at any moment,
giving a nice oops if the timing is right. frto_counter would be
inaccessible from userspace, but with SACK-enhanced FRTO
retrans_out can include more than the head, possibly leaving
it non-zero after a spurious RTO; boom again.

Luckily, the solution seems rather simple: never go directly to Open
state but use Disorder instead. This does not really change much,
since TCP could anyway change its state to Disorder during FRTO
via the path tcp_fastretrans_alert -> tcp_try_to_open (e.g., when
a SACK block makes the ACK dubious). Besides, Disorder seems to be
the state where TCP should be if it is not recovering (in Recovery or
Loss state) while having some retransmissions in flight (see
tcp_try_to_open), which is exactly what happens with FRTO.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:05 -07:00
Ilpo Järvinen
7487c48c4f [TCP] FRTO: Consecutive RTOs keep prior_ssthresh and ssthresh
In case a latency spike causes more than one RTO, the later ones should
not cause the already reduced ssthresh to propagate into prior_ssthresh,
since FRTO declares all such RTOs spurious at once or none of them. In
the treatment of ssthresh, we mimic what tcp_enter_loss() does.

The previous state (in frto_counter) must be available until we have
checked it in tcp_enter_frto(), and likewise the ACK information flag
in process_frto().

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:04 -07:00
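
A hedged, stand-alone sketch of the rule described above (simplified state
structure; the real code keys off additional CA-state conditions): only the
first RTO of an unresolved FRTO episode may save ssthresh into prior_ssthresh.

#include <stdint.h>

struct frto_state {
	uint32_t snd_ssthresh;
	uint32_t prior_ssthresh;
	unsigned int frto_counter;	/* non-zero while FRTO is unresolved */
};

static void frto_save_prior_ssthresh(struct frto_state *tp)
{
	if (!tp->frto_counter)			/* first RTO of the episode */
		tp->prior_ssthresh = tp->snd_ssthresh;
	/* later RTOs: keep the value saved before ssthresh was reduced */
}
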
Ilpo Järvinen
30935cf4f9 [TCP] FRTO: Comment cleanup & improvement
Moved comments out of the body of process_frto() to its head
(the preferred way; see Documentation/CodingStyle). Bonus: it's much
easier to read in this compacted form.

The FRTO algorithm and implementation are described in greater detail.
For the interested reader, more information is available in RFC4138.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:03 -07:00
Ilpo Järvinen
bdaae17da8 [TCP] FRTO: Moved tcp_use_frto from tcp.h to tcp_input.c
In addition, removed inline.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:02 -07:00
Ilpo Järvinen
9ead9a1d38 [TCP] FRTO: Separated response from FRTO detection algorithm
The FRTO spurious RTO detection algorithm (RFC4138) does not include a
response to a detected spurious RTO but can use different response
algorithms.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:01 -07:00
Ilpo Järvinen
522e7548a9 [TCP] FRTO: Incorrectly clears TCPCB_EVER_RETRANS bit
FRTO was slightly too brave... It should only clear the
TCPCB_SACKED_RETRANS bit.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 22:23:00 -07:00
Alexey Kuznetsov
1194ed0a3e [NETLINK]: Infinite recursion in netlink.
Replies to NETLINK_FIB_LOOKUP messages were misrouted back to the kernel,
which resulted in infinite recursion and stack overflow.

The bug is present in all kernel versions since the feature appeared.

The patch also makes some minimal cleanup:

1. Return something consistent (-ENOENT) when the fib table is missing
2. Do not crash when the queue is empty (does not happen, but still)
3. Put (release) the result of the lookup

Signed-off-by: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-25 13:07:28 -07:00
Patrick McHardy
05d224468a [XFRM]: beet: fix pseudo header length value
draft-nikander-esp-beet-mode-07.txt is not entirely clear on how the length
value of the pseudo header should be calculated: it states "The Header Length
field contains the length of the pseudo header, IPv4 options, and padding in
8 octets units.", but also states "Length in octets (Header Len + 1) * 8".
draft-nikander-esp-beet-mode-08-pre1.txt [1] clarifies this: the header length
should not include the first 8 bytes.

This change affects backwards compatibility, but option encapsulation didn't
work until very recently anyway.

[1] http://users.piuha.net/jmelen/BEET/draft-nikander-esp-beet-mode-08-pre1.txt

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-23 22:39:02 -07:00
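
A hedged sketch of the corrected length computation (simplified names): since
"Length in octets = (Header Len + 1) * 8", the 8-byte pseudo header itself is
not counted in the field, only the options and padding.

#include <stdint.h>

/* optlen + padlen is assumed to already be padded to a multiple of 8 octets */
static uint8_t beet_phdr_hdrlen(unsigned int optlen, unsigned int padlen)
{
	return (optlen + padlen) / 8;	/* first 8 bytes excluded */
}
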
Stephen Hemminger
4d4d3d1e88 [TCP]: Congestion control initialization.
Defer congestion control initialization.

If setsockopt() was used to change TCP_CONGESTION before the
connection is established, then protocols that use sequence numbers
to keep track of one RTT interval (vegas, illinois, ...) get confused.

Change the init hook to be called after the handshake.

Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-23 22:32:11 -07:00
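
A hedged, stand-alone sketch of the ordering change (illustrative types, not
the kernel API): the module's init() hook runs when the connection reaches the
established state rather than when TCP_CONGESTION is set, so modules that
sample sequence numbers for per-RTT bookkeeping start from a valid baseline.

#include <stddef.h>

struct ca_ops {
	void (*init)(void *sk);
};

struct sock_sketch {
	const struct ca_ops *icsk_ca_ops;
	int established;
};

static void set_congestion_control(struct sock_sketch *sk, const struct ca_ops *ops)
{
	sk->icsk_ca_ops = ops;
	if (sk->established && ops->init)	/* already connected: init now */
		ops->init(sk);
	/* otherwise init is deferred ... */
}

static void connection_established(struct sock_sketch *sk)
{
	sk->established = 1;
	if (sk->icsk_ca_ops && sk->icsk_ca_ops->init)
		sk->icsk_ca_ops->init(sk);	/* ... to handshake completion */
}
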
David S. Miller
49688c8431 [NETFILTER] arp_tables: Fix unaligned accesses.
There are two device string comparison loops in arp_packet_match().
The first one goes byte-by-byte, but the second one tries to be
clever by casting the string to a long and comparing by longs.

The device name strings in the arp table entries are not guaranteed
to be aligned enough to make this valid, so just use byte-by-byte
comparison for both cases.

Based upon a report by <drraid@gmail.com>.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-13 16:37:54 -07:00
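
A hedged, stand-alone sketch of the safe comparison kept for both cases
(simplified signature): XOR each byte under the per-byte mask; any non-zero
result means the (possibly wildcarded) names differ. No long-sized loads, so
alignment of the rule data no longer matters.

#include <stdint.h>

#define IFNAMSIZ 16

static unsigned long ifname_compare(const char *dev, const char *iface,
				    const unsigned char *mask)
{
	unsigned long ret = 0;
	int i;

	for (i = 0; i < IFNAMSIZ; i++)
		ret |= (dev[i] ^ iface[i]) & mask[i];

	return ret;	/* zero means match */
}
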
Patrick McHardy
01102e7ca2 [NETFILTER]: ipt_ULOG: use put_unaligned
Use put_unaligned to fix warnings about unaligned accesses.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-12 14:27:03 -07:00
Jaroslav Kysela
50c9cc2e54 [NETFILTER]: ipt_CLUSTERIP: fix oops in checkentry function
clusterip_config_find_get() already increases the entry's reference
counter, so there is no reason to do it twice in the checkentry() callback.

This causes the config to be freed before it is removed from the list,
resulting in a crash when adding the next rule.

Signed-off-by: Jaroslav Kysela <perex@suse.cz>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-10 13:26:48 -07:00
David S. Miller
15d33c070d [TCP]: slow_start_after_idle should influence cwnd validation too
For the cases that slow_start_after_idle is meant to deal
with, it is almost a certainty that the congestion window
tests will think the connection is application limited and
we'll thus decrease the cwnd there too.  This defeats the
whole point of setting slow_start_after_idle to zero.

So test it there too.

We do not disable the entire tcp_cwnd_validate() function,
so that if the sysctl is changed we still have the validation
state maintained.

Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-09 13:31:15 -07:00
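
A hedged sketch of the extra test (simplified arguments, not the kernel's
tcp_cwnd_validate()): the application-limited cwnd reduction in the validation
path only fires when slow-start-after-idle behaviour is actually enabled, so
setting the sysctl to zero is no longer defeated by this second code path.

#include <stdbool.h>
#include <stdint.h>

static bool should_shrink_idle_cwnd(int sysctl_slow_start_after_idle,
				    uint32_t now, uint32_t cwnd_stamp,
				    uint32_t rto)
{
	return sysctl_slow_start_after_idle &&
	       (int32_t)(now - cwnd_stamp) >= (int32_t)rto;
}
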
Patrick McHardy
254d0d24e3 [XFRM]: beet: fix IP option decapsulation
Beet mode looks for the beet pseudo header after the outer IP header,
which is wrong since that is followed by the ESP header. Additionally
it needs to adjust the packet length after removing the pseudo header
and point the data pointer to the real data location.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-05 16:03:33 -07:00
Patrick McHardy
d4b1e84062 [XFRM]: beet: fix beet mode decapsulation
Beet mode decapsulation fails to properly set up the skb pointers, which
only works by accident in combination with CONFIG_NETFILTER, since in that
case the skb is fixed up in xfrm4_input before passing it to the netfilter
hooks.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-05 15:59:41 -07:00
Patrick McHardy
04fef9893a [XFRM]: beet: use IPOPT_NOP for option padding
draft-nikander-esp-beet-mode-07.txt states "The padding MUST be filled
with NOP options as defined in Internet Protocol [1] section 3.1
Internet header format.", so do that.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-05 15:54:39 -07:00
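
A hedged sketch of the change (simplified signature; IPOPT_NOP is the RFC 791
"No Operation" option value): fill the pseudo-header padding with NOP options
instead of zero bytes.

#include <string.h>

#define IPOPT_NOP 1	/* RFC 791, section 3.1 */

static void beet_fill_padding(unsigned char *pad, size_t padlen)
{
	memset(pad, IPOPT_NOP, padlen);
}
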
Patrick McHardy
c5027c9a89 [XFRM]: beet: fix IP option encapsulation
Beet mode calculates an incorrect value for the transport header location
when IP options are present, resulting in encapsulation errors.

The correct location is 4 or 8 bytes before the end of the original IP
header, depending on whether the pseudo header is padded.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-05 15:54:02 -07:00
John Heffner
84565070e4 [TCP]: Do receiver-side SWS avoidance for rcvbuf < MSS.
Signed-off-by: John Heffner <jheffner@psc.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-04-02 13:56:32 -07:00
Robert Olsson
d562f1f8a9 [IPV4] fib_trie: Document locking.
Paul E. McKenney writes:

> Those of us who dive into networking only occasionally would much
> appreciate this.  ;-)

No problem here... 

Acked-by: Robert Olsson <robert.olsson@its.uu.se>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> (but trivial)
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-03-26 14:22:22 -07:00
Thomas Graf
a0ee18b9b7 [IPv4] fib: Fix out of bound access of fib_props[]
Fixes a typo which caused fib_props[] to have the wrong size,
and makes sure that the value used to index the array, which is
provided by userspace via netlink, is checked to avoid out-of-bounds
access.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-03-25 18:48:03 -07:00
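
A hedged, stand-alone sketch of the two parts of the fix (constants and
structure are illustrative; the real bound comes from rtnetlink's route-type
enum): size the table by the route-type space it is indexed with, and reject
any userspace-supplied type outside that range before using it as an index.

#include <errno.h>

#define RTN_MAX 11	/* illustrative stand-in for the real constant */

struct fib_prop { int error; unsigned char scope; };

static const struct fib_prop fib_props[RTN_MAX + 1];	/* correct size */

static int check_fib_type(unsigned int fc_type)
{
	if (fc_type > RTN_MAX)		/* index comes from netlink: validate */
		return -EINVAL;
	return 0;
}
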
Thomas Graf
e1701c68c1 [NET]: Fix fib_rules compatibility breakage
Based upon a patch from Patrick McHardy.

The fib_rules netlink attribute policy introduced in 2.6.19 broke
userspace compatibility. When specifying a rule with "from all"
or "to all", iproute adds a zero-byte-long netlink attribute,
but the policy requires all addresses to have a size equal to
sizeof(struct in_addr)/sizeof(struct in6_addr), resulting in a
validation error.

Check the attribute length of FRA_SRC/FRA_DST in the generic framework
by letting the family-specific rules implementation provide the
length of an address. Report an error if the address length is non-zero
but no address attribute is provided. Fix the actual bug by
checking the address length for non-zero instead of relying on the
availability of the attribute.

Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-03-25 18:48:00 -07:00
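
A hedged, stand-alone sketch of the generic validation described above
(illustrative parameters; the real code works on netlink attributes and the
per-family ops structure): a zero prefix length ("from all"/"to all") needs no
address, while a non-zero prefix length requires a full-size address attribute.

#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

static int validate_rule_addr(unsigned int prefix_len, bool attr_present,
			      size_t attr_len, size_t family_addr_size)
{
	if (prefix_len == 0)
		return 0;	/* address is irrelevant, zero-length attr OK */
	if (!attr_present || attr_len != family_addr_size)
		return -EINVAL;	/* non-zero prefix needs a full-size address */
	return 0;
}
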
Patrick McHardy
848c29fd64 [NETFILTER]: nat: avoid rerouting packets if only XFRM policy key changed
Currently NAT not only reroutes packets in the OUTPUT chain when the
routing key changed, but also if only the non-routing part of the
IPsec policy key changed. This breaks ping -I since it doesn't use
SO_BINDTODEVICE but IP_PKTINFO cmsg to specify the output device, and
this information is lost.

Only do full rerouting if the routing key changed, and just do a new
policy lookup with the old route if only the ports changed.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-03-22 12:30:29 -07:00
John Heffner
53cdcc04c1 [TCP]: Fix tcp_mem[] initialization.
Change the tcp_mem initialization function.  The fraction of total memory
is now a continuous function of memory size, and independent of page
size.

Signed-off-by: John Heffner <jheffner@psc.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-03-16 15:04:03 -07:00
Robert Olsson
d5cc4a73a5 [IPV4]: Do not disable preemption in trie_leaf_remove().
Hello, just discussed this with Patrick...

We have two users of trie_leaf_remove, fn_trie_flush and fn_trie_delete;
both are holding the RTNL, so there shouldn't be a need for this preempt
stuff. This is assumed to be a leftover from an older RCU take.

> Mhh .. I think I just remembered something - me incorrectly suggesting
> to add it there while we were talking about this at OLS :) IIRC the
> idea was to make sure tnode_free (which at that time didn't use
> call_rcu) wouldn't free memory while still in use in a rcu read-side
> critical section. It should have been synchronize_rcu of course,
> but with tnode_free using call_rcu it seems to be completely
> unnecessary. So I guess we can simply remove it.

Signed-off-by: Robert Olsson <robert.olsson@its.uu.se>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-03-16 15:00:07 -07:00
Geert Uytterhoeven
08882669e0 [IPV4]: Fix warning in ip_mc_rejoin_group.
Kill warning about unused variable `in_dev' when CONFIG_IP_MULTICAST
is not set.

Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-03-12 17:02:37 -07:00
Paul Moore
38c8947c1b [NetLabel]: parse the CIPSO ranged tag on incoming packets
Commit 484b366932 added support for the CIPSO
ranged categories tag.  However, it appears that I made a mistake when
rebasing the patch to the latest upstream sources for submission and
dropped the part of the patch that actually parses the tag on incoming
packets.  This patch fixes the mistake by adding the required function
call to cipso_v4_skbuff_getattr().

I've run this patch over the weekend and have not noticed any problems.

Signed-off-by: Paul Moore <paul.moore@hp.com>
Acked-by: James Morris <jmorris@namei.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-03-12 14:38:02 -07:00
Evgeniy Polyakov
c4e38f41e3 [IPV4]: Fix rtm_to_ifaddr() error handling.
Return a negative error value (embedded in the pointer) instead of
returning NULL.

Signed-off-by: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-03-09 13:43:24 -08:00
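
For reference, a hedged, stand-alone re-implementation of the ERR_PTR
convention the fix relies on (the kernel provides ERR_PTR/PTR_ERR/IS_ERR; the
names below are illustrative): the negative errno is encoded in the returned
pointer so the caller can distinguish failure reasons instead of seeing NULL.

static inline void *err_ptr(long err)
{
	return (void *)err;		/* err is a small negative errno */
}

static inline long ptr_err(const void *ptr)
{
	return (long)ptr;
}

static inline int is_err(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-4095;
}
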
Herbert Xu
d644329bc9 [UDP]: Reread uh pointer after pskb_trim
The header may have moved when trimming.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-03-07 16:08:04 -08:00
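
A hedged, self-contained illustration of the pitfall (toy buffer type, not the
skb API): a trim operation that may reallocate the buffer invalidates any
pointer taken into it earlier, so the header pointer must be re-derived
afterwards, just as the fix rereads uh after pskb_trim.

#include <stdlib.h>

struct buf {
	unsigned char *data;
	size_t len;
};

static int buf_trim(struct buf *b, size_t newlen)
{
	unsigned char *p = realloc(b->data, newlen);	/* may move the data */

	if (!p && newlen)
		return -1;
	b->data = p;
	b->len = newlen;
	return 0;
}

static int parse_after_trim(struct buf *b, size_t hdr_off, size_t newlen,
			    unsigned char **hdr)
{
	if (buf_trim(b, newlen))
		return -1;
	*hdr = b->data + hdr_off;	/* reread: the old pointer may be stale */
	return 0;
}
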
Linus Torvalds
5b3c1184e7 Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6
* master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6:
  [DCCP]: Set RTO for newly created child socket
  [DCCP]: Correctly split CCID half connections
  [NET]: Fix compat_sock_common_getsockopt typo.
  [NET]: Revert incorrect accept queue backlog changes.
  [INET]: twcal_jiffie should be unsigned long, not int
  [GIANFAR]: Fix compile error in latest git
  [PPPOE]: Use ifindex instead of device pointer in key lookups.
  [NETFILTER]: ip6_route_me_harder should take into account mark
  [NETFILTER]: nfnetlink_log: fix reference counting
  [NETFILTER]: nfnetlink_log: fix module reference counting
  [NETFILTER]: nfnetlink_log: fix possible NULL pointer dereference
  [NETFILTER]: nfnetlink_log: fix NULL pointer dereference
  [NETFILTER]: nfnetlink_log: fix use after free
  [NETFILTER]: nfnetlink_log: fix reference leak
  [NETFILTER]: tcp conntrack: accept SYN|URG as valid
  [NETFILTER]: nf_conntrack/nf_nat: fix incorrect config ifdefs
  [NETFILTER]: conntrack: fix {nf,ip}_ct_iterate_cleanup endless loops
2007-03-06 19:53:34 -08:00
Jay Vosburgh
a816c7c712 bonding: Improve IGMP join processing
In active-backup mode, the current bonding code duplicates IGMP
traffic to all slaves, so that switches are up to date in case of a
failover from an active to a backup interface.  If bonding then fails
back to the original active interface, it is likely that the "active
slave" switch's IGMP forwarding for the port will be out of date until
some event occurs to refresh the switch (e.g., a membership query).

	This patch alters the behavior of bonding to no longer flood
IGMP to all ports, and to issue IGMP JOINs to the newly active port at
the time of a failover.  This ensures that switches are kept up to date
for all cases.

	"GOELLESCH Niels" <niels.goellesch@eurocontrol.int> originally
reported this problem, and included a patch.  His original patch was
modified by Jay Vosburgh to additionally remove the existing IGMP flood
behavior, use RCU, streamline code paths, fix trailing white space, and
adjust for style.

Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
2007-03-06 06:08:11 -05:00
Patrick McHardy
d3ab4298aa [NETFILTER]: tcp conntrack: accept SYN|URG as valid
Some stacks apparently send packets with SYN|URG set. Linux accepts
these packets, so TCP conntrack should too.

Pointed out by Martijn Posthuma <posthuma@sangine.com>.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-03-05 13:25:20 -08:00
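
A hedged illustration of such a flag-validity table (the entry set and macro
values below are illustrative, not the exact kernel table); the point of the
fix is that the SYN|URG combination is marked acceptable.

#define TH_FIN  0x01
#define TH_SYN  0x02
#define TH_RST  0x04
#define TH_PUSH 0x08
#define TH_ACK  0x10
#define TH_URG  0x20

static const unsigned char tcp_valid_flags_sketch[(TH_FIN|TH_SYN|TH_RST|TH_PUSH|TH_ACK|TH_URG) + 1] = {
	[TH_SYN]		= 1,
	[TH_SYN|TH_URG]		= 1,	/* newly accepted combination */
	[TH_SYN|TH_ACK]		= 1,
	[TH_RST]		= 1,
	[TH_RST|TH_ACK]		= 1,
	[TH_ACK]		= 1,
	[TH_FIN|TH_ACK]		= 1,
	/* ... further valid combinations elided ... */
};
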
Patrick McHardy
e281db5cdf [NETFILTER]: nf_conntrack/nf_nat: fix incorrect config ifdefs
The nf_conntrack_netlink config option is named CONFIG_NF_CT_NETLINK,
but multiple files use CONFIG_IP_NF_CONNTRACK_NETLINK or
CONFIG_NF_CONNTRACK_NETLINK for ifdefs.

Fix this, and reformat all CONFIG_NF_CT_NETLINK ifdefs to use a single line.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-03-05 13:25:19 -08:00
Patrick McHardy
ec68e97ded [NETFILTER]: conntrack: fix {nf,ip}_ct_iterate_cleanup endless loops
Fix {nf,ip}_ct_iterate_cleanup unconfirmed list handling:

- unconfirmed entries cannot be killed manually; they are removed on
  confirmation or final destruction of the conntrack entry, which means
  we might iterate forever without making forward progress.

  This can happen in combination with the conntrack event cache, which
  holds a reference to the conntrack entry, which is only released when
  the packet makes it all the way through the stack or a different
  packet is handled.

- taking references to an unconfirmed entry and using it outside the
  locked section doesn't work; the list entries are not refcounted and
  another CPU might already be waiting to destroy the entry

What the code really wants to do is make sure the references of the hash
table to the selected conntrack entries are released, so they will be
destroyed once all references from skbs and the event cache are dropped.

Since unconfirmed entries haven't even entered the hash yet, simply mark
them as dying and skip confirmation based on that.

Reported and tested by Chuck Ebbert <cebbert@redhat.com>

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-03-05 13:25:18 -08:00
Paul Moore
c6387a8694 [NetLabel]: Verify sensitivity level has a valid CIPSO mapping
The current CIPSO engine has a problem where it does not verify that
the given sensitivity level has a valid CIPSO mapping when the "std"
CIPSO DOI type is used.  The end result is that bad packets are sent
on the wire which should have never been sent in the first place.
This patch corrects this problem by verifying the sensitivity level
mapping similar to what is done with the category mapping.  This patch
also changes the returned error code in this case to -EPERM to better
match what the category mapping verification code returns.

Signed-off-by: Paul Moore <paul.moore@hp.com>
Acked-by: James Morris <jmorris@namei.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-03-02 20:37:36 -08:00
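
A hedged, stand-alone sketch of the added verification (the table layout and
the "no mapping" marker are illustrative, not the CIPSO engine's structures):
besides being inside the table, the local sensitivity level must actually map
to a valid on-the-wire level, otherwise refuse with -EPERM like the category
validation does.

#include <errno.h>
#include <stdint.h>

#define INVALID_MAPPING 0x80000000u	/* illustrative "no mapping" marker */

static int map_level(const uint32_t *local_to_net, uint32_t table_size,
		     uint32_t host_lvl, uint32_t *net_lvl)
{
	if (host_lvl < table_size && local_to_net[host_lvl] != INVALID_MAPPING) {
		*net_lvl = local_to_net[host_lvl];
		return 0;
	}
	return -EPERM;
}
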
Arnaldo Carvalho de Melo
a9948a7e15 [TCP]: Fix minisock tcp_create_openreq_child() typo.
On 2/28/07, KOVACS Krisztian <hidden@balabit.hu> wrote:
>
>   Hi,
>
>   While reading TCP minisock code I've found this suspicious-looking
> code fragment:
>
> - 8< -
> struct sock *tcp_create_openreq_child(struct sock *sk, struct request_sock *req, struct sk_buff *skb)
> {
>         struct sock *newsk = inet_csk_clone(sk, req, GFP_ATOMIC);
>
>         if (newsk != NULL) {
>                 const struct inet_request_sock *ireq = inet_rsk(req);
>                 struct tcp_request_sock *treq = tcp_rsk(req);
>                 struct inet_connection_sock *newicsk = inet_csk(sk);
>                 struct tcp_sock *newtp;
> - 8< -
>
>   The above code initializes newicsk to inet_csk(sk), isn't that supposed
> to be inet_csk(newsk)?  As far as I can tell this might leave
> icsk_ack.last_seg_size zero even if we do have received data.

Good catch!

David, please apply the attached patch.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2007-02-28 11:05:56 -08:00
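
For clarity, the corrected line from the fragment quoted above derives the
connection-sock pointer from the newly cloned socket instead of the listener:

	struct inet_connection_sock *newicsk = inet_csk(newsk);  /* was: inet_csk(sk) */
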