
epoll: Remove ep_scan_ready_list() in comments

Since commit 443f1a0422 ("lift the calls of ep_send_events_proc()
into the callers"), ep_scan_ready_list() has been removed, but
several references to it remain in comments. Replace them with the
functions that now perform its work.

Signed-off-by: Huang Xiaojia <huangxiaojia2@huawei.com>
Link: https://lore.kernel.org/r/20240206014353.4191262-1-huangxiaojia2@huawei.com
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Christian Brauner <brauner@kernel.org>
commit e6f7958042
parent d3b1a9a778
Author: Huang Xiaojia, 2024-02-06 09:43:53 +08:00 (committed by Christian Brauner)


@@ -206,7 +206,7 @@ struct eventpoll {
 	 */
 	struct epitem *ovflist;
-	/* wakeup_source used when ep_scan_ready_list is running */
+	/* wakeup_source used when ep_send_events or __ep_eventpoll_poll is running */
 	struct wakeup_source *ws;
 	/* The user that created the eventpoll descriptor */
@@ -1153,7 +1153,7 @@ static inline bool chain_epi_lockless(struct epitem *epi)
  * This callback takes a read lock in order not to contend with concurrent
  * events from another file descriptor, thus all modifications to ->rdllist
  * or ->ovflist are lockless. Read lock is paired with the write lock from
- * ep_scan_ready_list(), which stops all list modifications and guarantees
+ * ep_start/done_scan(), which stops all list modifications and guarantees
  * that lists state is seen correctly.
  *
  * Another thing worth to mention is that ep_poll_callback() can be called
@@ -1751,7 +1751,7 @@ static int ep_send_events(struct eventpoll *ep,
 				 * availability. At this point, no one can insert
 				 * into ep->rdllist besides us. The epoll_ctl()
 				 * callers are locked out by
-				 * ep_scan_ready_list() holding "mtx" and the
+				 * ep_send_events() holding "mtx" and the
 				 * poll callback will queue them in ep->ovflist.
 				 */
 				list_add_tail(&epi->rdllink, &ep->rdllist);
@@ -1904,7 +1904,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		__set_current_state(TASK_INTERRUPTIBLE);
 		/*
-		 * Do the final check under the lock. ep_scan_ready_list()
+		 * Do the final check under the lock. ep_start/done_scan()
 		 * plays with two lists (->rdllist and ->ovflist) and there
 		 * is always a race when both lists are empty for short
 		 * period of time although events are pending, so lock is