Documentation: add more details in tipc.rst
The kernel-doc for TIPC is too simple; we need to add more information to it.
This patch extends the abstract and adds the Features and Links items.

Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Jon Maloy <jmaloy@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

parent 4f408e1fa6
commit 09ef17863f

@@ -4,10 +4,125 @@

Linux Kernel TIPC
=================

Introduction
============

TIPC (Transparent Inter Process Communication) is a protocol that is specially
designed for intra-cluster communication. It can be configured to transmit
messages either on UDP or directly across Ethernet. Message delivery is
sequence guaranteed, loss free and flow controlled. Latency times are shorter
than with any other known protocol, while maximal throughput is comparable to
that of TCP.

TIPC Features
-------------

- Cluster wide IPC service

  Have you ever wished you had the convenience of Unix Domain Sockets even
  when transmitting data between cluster nodes? Where you yourself determine
  the addresses you want to bind to and use? Where you don't have to perform
  DNS lookups and worry about IP addresses? Where you don't have to start
  timers to monitor the continuous existence of peer sockets? And yet without
  the downsides of that socket type, such as the risk of lingering inodes?

  Welcome to the Transparent Inter Process Communication service, TIPC in
  short, which gives you all of this, and a lot more.

- Service Addressing

  A fundamental concept in TIPC is that of Service Addressing, which makes it
  possible for a programmer to choose their own address, bind it to a server
  socket and let client programs use only that address for sending messages.
  (A minimal binding sketch follows this feature list.)

- Service Tracking

  A client wanting to wait for the availability of a server uses the Service
  Tracking mechanism to subscribe for binding and unbinding/close events for
  sockets with the associated service address. (A subscription sketch follows
  this feature list.)

  The service tracking mechanism can also be used for Cluster Topology
  Tracking, i.e., subscribing for availability/non-availability of cluster
  nodes.

  Likewise, the service tracking mechanism can be used for Cluster
  Connectivity Tracking, i.e., subscribing for up/down events for individual
  links between cluster nodes.

- Transmission Modes

  Using a service address, a client can send datagram messages to a server
  socket.

  Using the same address type, it can establish a connection towards an
  accepting server socket.

  It can also use a service address to create and join a Communication Group,
  which is the TIPC manifestation of a brokerless message bus.

  Multicast with very good performance and scalability is available both in
  datagram mode and in communication group mode. (The datagram, connection
  and group modes are sketched after this feature list.)

- Inter Node Links

  Communication between any two nodes in a cluster is maintained by one or
  two Inter Node Links, which both guarantee data traffic integrity and
  monitor the peer node's availability.

- Cluster Scalability

  By applying the Overlapping Ring Monitoring algorithm on the inter node
  links, it is possible to scale TIPC clusters up to 1000 nodes with a
  maintained neighbor failure discovery time of 1-2 seconds. For smaller
  clusters this time can be made much shorter.

- Neighbor Discovery

  Neighbor Node Discovery in the cluster is done by Ethernet broadcast or UDP
  multicast, when either of those services is available. If not, configured
  peer IP addresses can be used.

- Configuration

  When running TIPC in single node mode no configuration whatsoever is
  needed. When running in cluster mode TIPC must as a minimum be given a node
  address (before Linux 4.17) and told which interface to attach to. The
  "tipc" configuration tool makes it possible to add and maintain many more
  configuration parameters.

- Performance

  TIPC message transfer latency times are better than in any other known
  protocol. Maximal byte throughput for inter-node connections is still
  somewhat lower than for TCP, while TIPC is superior for intra-node and
  inter-container throughput on the same host.

- Language Support

  The TIPC user API has support for C, Python, Perl, Ruby, D and Go.

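The Service Addressing feature above can be made concrete with a small,
purely editorial sketch (it is not part of this patch). It uses the C socket
API declared in include/uapi/linux/tipc.h; service type 18888 and instance 17
are arbitrary placeholders, and older headers spell TIPC_SERVICE_ADDR and
struct tipc_service_addr as TIPC_ADDR_NAME and struct tipc_name::

  /* Editorial sketch: bind a server socket to a TIPC service address. */
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <linux/tipc.h>

  #define MY_SERVICE_TYPE 18888   /* placeholder service type */
  #define MY_SERVICE_INST 17      /* placeholder service instance */

  int main(void)
  {
      struct sockaddr_tipc addr = {
          .family   = AF_TIPC,
          .addrtype = TIPC_SERVICE_ADDR,
          .scope    = TIPC_CLUSTER_SCOPE,   /* visible cluster wide */
          .addr.name.name.type     = MY_SERVICE_TYPE,
          .addr.name.name.instance = MY_SERVICE_INST,
      };
      char buf[64] = { 0 };
      int sd = socket(AF_TIPC, SOCK_RDM, 0);  /* reliable datagram socket */

      if (sd < 0 || bind(sd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
          perror("tipc server");
          return 1;
      }
      /* Clients reach this socket via the (type, instance) pair alone. */
      if (recv(sd, buf, sizeof(buf) - 1, 0) > 0)
          printf("received: %s\n", buf);
      close(sd);
      return 0;
  }
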
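Service Tracking can be sketched in the same, equally non-normative way: the
process connects to the built-in topology service and subscribes for the
placeholder service type used above. Constant and struct names again follow
include/uapi/linux/tipc.h::

  /* Editorial sketch: wait for a service to appear or disappear. */
  #include <stdio.h>
  #include <sys/socket.h>
  #include <linux/tipc.h>

  int main(void)
  {
      struct sockaddr_tipc topsrv = {
          .family   = AF_TIPC,
          .addrtype = TIPC_SERVICE_ADDR,
          .addr.name.name.type     = TIPC_TOP_SRV,  /* topology service */
          .addr.name.name.instance = TIPC_TOP_SRV,
      };
      struct tipc_subscr sub = {
          .seq.type  = 18888,              /* placeholder service type */
          .seq.lower = 0,                  /* track all instances */
          .seq.upper = ~0U,
          .timeout   = TIPC_WAIT_FOREVER,
          .filter    = TIPC_SUB_SERVICE,   /* one event per service */
      };
      struct tipc_event evt;
      int sd = socket(AF_TIPC, SOCK_SEQPACKET, 0);

      if (sd < 0 ||
          connect(sd, (struct sockaddr *)&topsrv, sizeof(topsrv)) < 0 ||
          send(sd, &sub, sizeof(sub), 0) != sizeof(sub)) {
          perror("tipc subscription");
          return 1;
      }
      while (recv(sd, &evt, sizeof(evt), 0) == sizeof(evt)) {
          if (evt.event == TIPC_PUBLISHED)
              printf("service 18888 is up\n");
          else if (evt.event == TIPC_WITHDRAWN)
              printf("service 18888 is down\n");
      }
      return 0;
  }
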
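Finally, the datagram, connection and group transmission modes can be hinted
at in one illustrative sketch. The group join relies on the TIPC_GROUP_JOIN
socket option; group 4711 and member instance 42 are placeholders, and the
SOL_TIPC fallback is only needed with older libc headers::

  /* Editorial sketch: datagram, connection and group transmission modes. */
  #include <unistd.h>
  #include <sys/socket.h>
  #include <linux/tipc.h>

  #ifndef SOL_TIPC
  #define SOL_TIPC 271                   /* value used by the kernel */
  #endif

  static const struct sockaddr_tipc srv = {
      .family   = AF_TIPC,
      .addrtype = TIPC_SERVICE_ADDR,
      .addr.name.name.type     = 18888,  /* placeholder service type */
      .addr.name.name.instance = 17,     /* placeholder service instance */
  };

  int main(void)
  {
      const char msg[] = "hello";

      /* 1) Datagram: send straight to the service address. */
      int dsd = socket(AF_TIPC, SOCK_RDM, 0);
      sendto(dsd, msg, sizeof(msg), 0,
             (const struct sockaddr *)&srv, sizeof(srv));

      /* 2) Connection: connect to whichever socket bound that address. */
      int csd = socket(AF_TIPC, SOCK_STREAM, 0);
      if (connect(csd, (const struct sockaddr *)&srv, sizeof(srv)) == 0)
          send(csd, msg, sizeof(msg), 0);

      /* 3) Communication group: join group 4711 as member instance 42. */
      struct tipc_group_req grp = {
          .type     = 4711,
          .instance = 42,
          .scope    = TIPC_CLUSTER_SCOPE,
      };
      int gsd = socket(AF_TIPC, SOCK_RDM, 0);
      if (setsockopt(gsd, SOL_TIPC, TIPC_GROUP_JOIN, &grp, sizeof(grp)) == 0)
          send(gsd, msg, sizeof(msg), 0);  /* broadcast to all members */

      close(dsd);
      close(csd);
      close(gsd);
      return 0;
  }
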
More Information
----------------

- How to set up TIPC:

  http://tipc.io/getting_started.html

- How to program with TIPC:

  http://tipc.io/programming.html

- How to contribute to TIPC:

  http://tipc.io/contacts.html

- More details about the TIPC specification:

  http://tipc.io/protocol.html

Implementation
==============

TIPC is implemented as a kernel module, found in the net/tipc/ directory of
the kernel source tree.

TIPC Base Types
---------------