License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default, all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand that can be used instead of the full boilerplate text.
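For illustration (a sketch of the convention, not text quoted from any
particular file), a single tag line such as

    // SPDX-License-Identifier: GPL-2.0

at the top of a file stands in for the multi-line GPL notice block that
would otherwise be carried as a comment.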
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where references to a
license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If the file was a */uapi/* path, it was tagged "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and revisited later.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors; they have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files, which need different
comment types (the two forms are sketched below). Finally, Greg ran the
script using the .csv files to
generate the patches.
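As a rough sketch of that comment-type distinction (illustrative only; the
tag applied to any given file is the one determined by the analysis above),
the identifier is written as a C++-style comment in .c source files and as
a C block comment in headers:

    In a .c file:  // SPDX-License-Identifier: GPL-2.0
    In a .h file:  /* SPDX-License-Identifier: GPL-2.0 */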
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
// SPDX-License-Identifier: GPL-2.0
/*
 * linux/fs/ext2/dir.c
 *
 * Copyright (C) 1992, 1993, 1994, 1995
 * Remy Card (card@masi.ibp.fr)
 * Laboratoire MASI - Institut Blaise Pascal
 * Universite Pierre et Marie Curie (Paris VI)
 *
 * from
 *
 * linux/fs/minix/dir.c
 *
 * Copyright (C) 1991, 1992 Linus Torvalds
 *
 * ext2 directory handling functions
 *
 * Big-endian to little-endian byte-swapping/bitmaps by
 *        David S. Miller (davem@caip.rutgers.edu), 1995
 *
 * All code that works with directory layout had been switched to pagecache
 * and moved here. AV
 */

#include "ext2.h"
#include <linux/buffer_head.h>
#include <linux/pagemap.h>
#include <linux/swap.h>
#include <linux/iversion.h>

typedef struct ext2_dir_entry_2 ext2_dirent;

/*
 * Tests against MAX_REC_LEN etc were put in place for 64k block
 * sizes; if that is not possible on this arch, we can skip
 * those tests and speed things up.
 */
static inline unsigned ext2_rec_len_from_disk(__le16 dlen)
{
	unsigned len = le16_to_cpu(dlen);

#if (PAGE_SIZE >= 65536)
	if (len == EXT2_MAX_REC_LEN)
		return 1 << 16;
#endif
	return len;
}

static inline __le16 ext2_rec_len_to_disk(unsigned len)
{
#if (PAGE_SIZE >= 65536)
	if (len == (1 << 16))
		return cpu_to_le16(EXT2_MAX_REC_LEN);
	else
		BUG_ON(len > (1 << 16));
#endif
	return cpu_to_le16(len);
}

/*
 * ext2 uses block-sized chunks. Arguably, sector-sized ones would be
 * more robust, but we have what we have
 */
static inline unsigned ext2_chunk_size(struct inode *inode)
{
	return inode->i_sb->s_blocksize;
}

/*
 * Return the offset into page `page_nr' of the last valid
 * byte in that page, plus one.
 */
static unsigned
ext2_last_byte(struct inode *inode, unsigned long page_nr)
{
	unsigned last_byte = inode->i_size;

	last_byte -= page_nr << PAGE_SHIFT;
	if (last_byte > PAGE_SIZE)
		last_byte = PAGE_SIZE;
	return last_byte;
}

static void ext2_commit_chunk(struct folio *folio, loff_t pos, unsigned len)
{
	struct address_space *mapping = folio->mapping;
	struct inode *dir = mapping->host;

	inode_inc_iversion(dir);
	block_write_end(NULL, mapping, pos, len, len, folio, NULL);

	if (pos+len > dir->i_size) {
		i_size_write(dir, pos+len);
		mark_inode_dirty(dir);
	}
	folio_unlock(folio);
}

static bool ext2_check_folio(struct folio *folio, int quiet, char *kaddr)
{
	struct inode *dir = folio->mapping->host;
	struct super_block *sb = dir->i_sb;
	unsigned chunk_size = ext2_chunk_size(dir);
	u32 max_inumber = le32_to_cpu(EXT2_SB(sb)->s_es->s_inodes_count);
	unsigned offs, rec_len;
	unsigned limit = folio_size(folio);
	ext2_dirent *p;
	char *error;

	if (dir->i_size < folio_pos(folio) + limit) {
		limit = offset_in_folio(folio, dir->i_size);
		if (limit & (chunk_size - 1))
			goto Ebadsize;
		if (!limit)
			goto out;
	}
	for (offs = 0; offs <= limit - EXT2_DIR_REC_LEN(1); offs += rec_len) {
		p = (ext2_dirent *)(kaddr + offs);
		rec_len = ext2_rec_len_from_disk(p->rec_len);

		if (unlikely(rec_len < EXT2_DIR_REC_LEN(1)))
			goto Eshort;
		if (unlikely(rec_len & 3))
			goto Ealign;
		if (unlikely(rec_len < EXT2_DIR_REC_LEN(p->name_len)))
			goto Enamelen;
		if (unlikely(((offs + rec_len - 1) ^ offs) & ~(chunk_size-1)))
			goto Espan;
		if (unlikely(le32_to_cpu(p->inode) > max_inumber))
			goto Einumber;
	}
	if (offs != limit)
		goto Eend;
out:
	folio_set_checked(folio);
	return true;

	/* Too bad, we had an error */

Ebadsize:
	if (!quiet)
		ext2_error(sb, __func__,
			"size of directory #%lu is not a multiple "
			"of chunk size", dir->i_ino);
	goto fail;
Eshort:
	error = "rec_len is smaller than minimal";
	goto bad_entry;
Ealign:
	error = "unaligned directory entry";
	goto bad_entry;
Enamelen:
	error = "rec_len is too small for name_len";
	goto bad_entry;
Espan:
	error = "directory entry across blocks";
	goto bad_entry;
Einumber:
	error = "inode out of bounds";
bad_entry:
	if (!quiet)
		ext2_error(sb, __func__, "bad entry in directory #%lu: : %s - "
			"offset=%llu, inode=%lu, rec_len=%d, name_len=%d",
			dir->i_ino, error, folio_pos(folio) + offs,
			(unsigned long) le32_to_cpu(p->inode),
			rec_len, p->name_len);
	goto fail;
Eend:
	if (!quiet) {
		p = (ext2_dirent *)(kaddr + offs);
		ext2_error(sb, "ext2_check_folio",
			"entry in directory #%lu spans the page boundary"
			"offset=%llu, inode=%lu",
			dir->i_ino, folio_pos(folio) + offs,
			(unsigned long) le32_to_cpu(p->inode));
	}
fail:
	return false;
}

/*
 * Calls to ext2_get_folio()/folio_release_kmap() must be nested according
 * to the rules documented in kmap_local_folio()/kunmap_local().
 *
 * NOTE: ext2_find_entry() and ext2_dotdot() act as a call
 * to folio_release_kmap() and should be treated as a call to
 * folio_release_kmap() for nesting purposes.
 */
static void *ext2_get_folio(struct inode *dir, unsigned long n,
			    int quiet, struct folio **foliop)
{
	struct address_space *mapping = dir->i_mapping;
	struct folio *folio = read_mapping_folio(mapping, n, NULL);
	void *kaddr;

	if (IS_ERR(folio))
		return ERR_CAST(folio);
	kaddr = kmap_local_folio(folio, 0);
	if (unlikely(!folio_test_checked(folio))) {
		if (!ext2_check_folio(folio, quiet, kaddr))
			goto fail;
	}
	*foliop = folio;
	return kaddr;

fail:
	folio_release_kmap(folio, kaddr);
	return ERR_PTR(-EIO);
}

/*
 * NOTE! unlike strncmp, ext2_match returns 1 for success, 0 for failure.
 *
 * len <= EXT2_NAME_LEN and de != NULL are guaranteed by caller.
 */
static inline int ext2_match (int len, const char * const name,
			      struct ext2_dir_entry_2 * de)
{
	if (len != de->name_len)
		return 0;
	if (!de->inode)
		return 0;
	return !memcmp(name, de->name, len);
}

/*
 * p is at least 6 bytes before the end of page
 */
static inline ext2_dirent *ext2_next_entry(ext2_dirent *p)
{
	return (ext2_dirent *)((char *)p +
			ext2_rec_len_from_disk(p->rec_len));
}

static inline unsigned
ext2_validate_entry(char *base, unsigned offset, unsigned mask)
{
	ext2_dirent *de = (ext2_dirent*)(base + offset);
	ext2_dirent *p = (ext2_dirent*)(base + (offset&mask));
	while ((char*)p < (char*)de) {
		if (p->rec_len == 0)
			break;
		p = ext2_next_entry(p);
	}
	return offset_in_page(p);
}

static inline void ext2_set_de_type(ext2_dirent *de, struct inode *inode)
{
	if (EXT2_HAS_INCOMPAT_FEATURE(inode->i_sb, EXT2_FEATURE_INCOMPAT_FILETYPE))
		de->file_type = fs_umode_to_ftype(inode->i_mode);
	else
		de->file_type = 0;
}

static int
ext2_readdir(struct file *file, struct dir_context *ctx)
{
	loff_t pos = ctx->pos;
	struct inode *inode = file_inode(file);
	struct super_block *sb = inode->i_sb;
	unsigned int offset = pos & ~PAGE_MASK;
	unsigned long n = pos >> PAGE_SHIFT;
	unsigned long npages = dir_pages(inode);
	unsigned chunk_mask = ~(ext2_chunk_size(inode)-1);
	bool need_revalidate = !inode_eq_iversion(inode, *(u64 *)file->private_data);
	bool has_filetype;

	if (pos > inode->i_size - EXT2_DIR_REC_LEN(1))
		return 0;

	has_filetype =
		EXT2_HAS_INCOMPAT_FEATURE(sb, EXT2_FEATURE_INCOMPAT_FILETYPE);

	for ( ; n < npages; n++, offset = 0) {
		ext2_dirent *de;
		struct folio *folio;
		char *kaddr = ext2_get_folio(inode, n, 0, &folio);
		char *limit;

		if (IS_ERR(kaddr)) {
			ext2_error(sb, __func__,
				   "bad page in #%lu",
				   inode->i_ino);
			ctx->pos += PAGE_SIZE - offset;
			return PTR_ERR(kaddr);
		}
		if (unlikely(need_revalidate)) {
			if (offset) {
				offset = ext2_validate_entry(kaddr, offset, chunk_mask);
				ctx->pos = (n<<PAGE_SHIFT) + offset;
			}
			*(u64 *)file->private_data = inode_query_iversion(inode);
			need_revalidate = false;
		}
		de = (ext2_dirent *)(kaddr+offset);
		limit = kaddr + ext2_last_byte(inode, n) - EXT2_DIR_REC_LEN(1);
		for ( ;(char*)de <= limit; de = ext2_next_entry(de)) {
			if (de->rec_len == 0) {
				ext2_error(sb, __func__,
					"zero-length directory entry");
				folio_release_kmap(folio, de);
				return -EIO;
			}
			if (de->inode) {
				unsigned char d_type = DT_UNKNOWN;

				if (has_filetype)
					d_type = fs_ftype_to_dtype(de->file_type);

				if (!dir_emit(ctx, de->name, de->name_len,
						le32_to_cpu(de->inode),
						d_type)) {
					folio_release_kmap(folio, de);
					return 0;
				}
			}
			ctx->pos += ext2_rec_len_from_disk(de->rec_len);
		}
		folio_release_kmap(folio, kaddr);
	}
	return 0;
}

/*
 * ext2_find_entry()
 *
 * finds an entry in the specified directory with the wanted name. It
 * returns the page in which the entry was found (as a parameter - res_page),
 * and the entry itself. Page is returned mapped and unlocked.
 * Entry is guaranteed to be valid.
 *
 * On Success folio_release_kmap() should be called on *foliop.
 *
 * NOTE: Calls to ext2_get_folio()/folio_release_kmap() must be nested
 * according to the rules documented in kmap_local_folio()/kunmap_local().
 *
 * ext2_find_entry() and ext2_dotdot() act as a call to ext2_get_folio()
 * and should be treated as a call to ext2_get_folio() for nesting
 * purposes.
 */
struct ext2_dir_entry_2 *ext2_find_entry (struct inode *dir,
			const struct qstr *child, struct folio **foliop)
{
	const char *name = child->name;
	int namelen = child->len;
	unsigned reclen = EXT2_DIR_REC_LEN(namelen);
	unsigned long start, n;
	unsigned long npages = dir_pages(dir);
	struct ext2_inode_info *ei = EXT2_I(dir);
	ext2_dirent * de;

	if (npages == 0)
		goto out;

	start = ei->i_dir_start_lookup;
	if (start >= npages)
		start = 0;
	n = start;
	do {
		char *kaddr = ext2_get_folio(dir, n, 0, foliop);
		if (IS_ERR(kaddr))
			return ERR_CAST(kaddr);

		de = (ext2_dirent *) kaddr;
		kaddr += ext2_last_byte(dir, n) - reclen;
		while ((char *) de <= kaddr) {
			if (de->rec_len == 0) {
				ext2_error(dir->i_sb, __func__,
					"zero-length directory entry");
				folio_release_kmap(*foliop, de);
				goto out;
			}
			if (ext2_match(namelen, name, de))
				goto found;
			de = ext2_next_entry(de);
		}
		folio_release_kmap(*foliop, kaddr);

		if (++n >= npages)
			n = 0;
		/* next folio is past the blocks we've got */
		if (unlikely(n > (dir->i_blocks >> (PAGE_SHIFT - 9)))) {
			ext2_error(dir->i_sb, __func__,
				"dir %lu size %lld exceeds block count %llu",
				dir->i_ino, dir->i_size,
				(unsigned long long)dir->i_blocks);
			goto out;
		}
	} while (n != start);
out:
	return ERR_PTR(-ENOENT);

found:
	ei->i_dir_start_lookup = n;
	return de;
}

/*
 * Return the '..' directory entry and the page in which the entry was found
 * (as a parameter - p).
 *
 * On Success folio_release_kmap() should be called on *foliop.
 *
 * NOTE: Calls to ext2_get_folio()/folio_release_kmap() must be nested
 * according to the rules documented in kmap_local_folio()/kunmap_local().
 *
 * ext2_find_entry() and ext2_dotdot() act as a call to ext2_get_folio()
 * and should be treated as a call to ext2_get_folio() for nesting
 * purposes.
 */
struct ext2_dir_entry_2 *ext2_dotdot(struct inode *dir, struct folio **foliop)
{
	ext2_dirent *de = ext2_get_folio(dir, 0, 0, foliop);

	if (!IS_ERR(de))
		return ext2_next_entry(de);
	return NULL;
}

int ext2_inode_by_name(struct inode *dir, const struct qstr *child, ino_t *ino)
{
	struct ext2_dir_entry_2 *de;
	struct folio *folio;

	de = ext2_find_entry(dir, child, &folio);
	if (IS_ERR(de))
		return PTR_ERR(de);

	*ino = le32_to_cpu(de->inode);
	folio_release_kmap(folio, de);
	return 0;
}

static int ext2_prepare_chunk(struct folio *folio, loff_t pos, unsigned len)
{
	return __block_write_begin(folio, pos, len, ext2_get_block);
}

static int ext2_handle_dirsync(struct inode *dir)
{
	int err;

	err = filemap_write_and_wait(dir->i_mapping);
	if (!err)
		err = sync_inode_metadata(dir, 1);
	return err;
}

int ext2_set_link(struct inode *dir, struct ext2_dir_entry_2 *de,
		struct folio *folio, struct inode *inode, bool update_times)
{
	loff_t pos = folio_pos(folio) + offset_in_folio(folio, de);
	unsigned len = ext2_rec_len_from_disk(de->rec_len);
	int err;

	folio_lock(folio);
	err = ext2_prepare_chunk(folio, pos, len);
	if (err) {
		folio_unlock(folio);
		return err;
	}
	de->inode = cpu_to_le32(inode->i_ino);
	ext2_set_de_type(de, inode);
	ext2_commit_chunk(folio, pos, len);
	if (update_times)
		inode_set_mtime_to_ts(dir, inode_set_ctime_current(dir));
	EXT2_I(dir)->i_flags &= ~EXT2_BTREE_FL;
	mark_inode_dirty(dir);
	return ext2_handle_dirsync(dir);
}

/*
 * Parent is locked.
 */
int ext2_add_link (struct dentry *dentry, struct inode *inode)
{
	struct inode *dir = d_inode(dentry->d_parent);
	const char *name = dentry->d_name.name;
	int namelen = dentry->d_name.len;
	unsigned chunk_size = ext2_chunk_size(dir);
	unsigned reclen = EXT2_DIR_REC_LEN(namelen);
	unsigned short rec_len, name_len;
	struct folio *folio = NULL;
	ext2_dirent * de;
	unsigned long npages = dir_pages(dir);
	unsigned long n;
	loff_t pos;
	int err;

	/*
	 * We take care of directory expansion in the same loop.
	 * This code plays outside i_size, so it locks the folio
	 * to protect that region.
	 */
	for (n = 0; n <= npages; n++) {
		char *kaddr = ext2_get_folio(dir, n, 0, &folio);
		char *dir_end;

		if (IS_ERR(kaddr))
			return PTR_ERR(kaddr);
		folio_lock(folio);
		dir_end = kaddr + ext2_last_byte(dir, n);
		de = (ext2_dirent *)kaddr;
		kaddr += folio_size(folio) - reclen;
		while ((char *)de <= kaddr) {
			if ((char *)de == dir_end) {
				/* We hit i_size */
				name_len = 0;
				rec_len = chunk_size;
				de->rec_len = ext2_rec_len_to_disk(chunk_size);
				de->inode = 0;
				goto got_it;
			}
			if (de->rec_len == 0) {
				ext2_error(dir->i_sb, __func__,
					"zero-length directory entry");
				err = -EIO;
				goto out_unlock;
			}
			err = -EEXIST;
			if (ext2_match (namelen, name, de))
				goto out_unlock;
			name_len = EXT2_DIR_REC_LEN(de->name_len);
			rec_len = ext2_rec_len_from_disk(de->rec_len);
			if (!de->inode && rec_len >= reclen)
				goto got_it;
			if (rec_len >= name_len + reclen)
				goto got_it;
			de = (ext2_dirent *) ((char *) de + rec_len);
		}
		folio_unlock(folio);
		folio_release_kmap(folio, kaddr);
	}
	BUG();
	return -EINVAL;

got_it:
	pos = folio_pos(folio) + offset_in_folio(folio, de);
	err = ext2_prepare_chunk(folio, pos, rec_len);
	if (err)
		goto out_unlock;
	if (de->inode) {
		ext2_dirent *de1 = (ext2_dirent *) ((char *) de + name_len);
		de1->rec_len = ext2_rec_len_to_disk(rec_len - name_len);
		de->rec_len = ext2_rec_len_to_disk(name_len);
		de = de1;
	}
	de->name_len = namelen;
	memcpy(de->name, name, namelen);
	de->inode = cpu_to_le32(inode->i_ino);
	ext2_set_de_type (de, inode);
	ext2_commit_chunk(folio, pos, rec_len);
	inode_set_mtime_to_ts(dir, inode_set_ctime_current(dir));
	EXT2_I(dir)->i_flags &= ~EXT2_BTREE_FL;
	mark_inode_dirty(dir);
	err = ext2_handle_dirsync(dir);
	/* OFFSET_CACHE */
out_put:
	folio_release_kmap(folio, de);
	return err;
out_unlock:
	folio_unlock(folio);
	goto out_put;
}

/*
 * ext2_delete_entry deletes a directory entry by merging it with the
 * previous entry. Page is up-to-date.
 */
int ext2_delete_entry(struct ext2_dir_entry_2 *dir, struct folio *folio)
{
	struct inode *inode = folio->mapping->host;
	size_t from, to;
	char *kaddr;
	loff_t pos;
	ext2_dirent *de, *pde = NULL;
	int err;

	from = offset_in_folio(folio, dir);
	to = from + ext2_rec_len_from_disk(dir->rec_len);
	kaddr = (char *)dir - from;
	from &= ~(ext2_chunk_size(inode)-1);
	de = (ext2_dirent *)(kaddr + from);

	while ((char*)de < (char*)dir) {
		if (de->rec_len == 0) {
			ext2_error(inode->i_sb, __func__,
				"zero-length directory entry");
			return -EIO;
		}
		pde = de;
		de = ext2_next_entry(de);
	}
	if (pde)
		from = offset_in_folio(folio, pde);
	pos = folio_pos(folio) + from;
	folio_lock(folio);
	err = ext2_prepare_chunk(folio, pos, to - from);
	if (err) {
		folio_unlock(folio);
		return err;
	}
	if (pde)
		pde->rec_len = ext2_rec_len_to_disk(to - from);
	dir->inode = 0;
	ext2_commit_chunk(folio, pos, to - from);
	inode_set_mtime_to_ts(inode, inode_set_ctime_current(inode));
	EXT2_I(inode)->i_flags &= ~EXT2_BTREE_FL;
	mark_inode_dirty(inode);
	return ext2_handle_dirsync(inode);
}

/*
 * Set the first fragment of directory.
 */
int ext2_make_empty(struct inode *inode, struct inode *parent)
{
	struct folio *folio = filemap_grab_folio(inode->i_mapping, 0);
	unsigned chunk_size = ext2_chunk_size(inode);
	struct ext2_dir_entry_2 * de;
	int err;
	void *kaddr;

	if (IS_ERR(folio))
		return PTR_ERR(folio);

	err = ext2_prepare_chunk(folio, 0, chunk_size);
	if (err) {
		folio_unlock(folio);
		goto fail;
	}
	kaddr = kmap_local_folio(folio, 0);
	memset(kaddr, 0, chunk_size);
	de = (struct ext2_dir_entry_2 *)kaddr;
	de->name_len = 1;
	de->rec_len = ext2_rec_len_to_disk(EXT2_DIR_REC_LEN(1));
	memcpy (de->name, ".\0\0", 4);
	de->inode = cpu_to_le32(inode->i_ino);
	ext2_set_de_type (de, inode);

	de = (struct ext2_dir_entry_2 *)(kaddr + EXT2_DIR_REC_LEN(1));
	de->name_len = 2;
	de->rec_len = ext2_rec_len_to_disk(chunk_size - EXT2_DIR_REC_LEN(1));
	de->inode = cpu_to_le32(parent->i_ino);
	memcpy (de->name, "..\0", 4);
	ext2_set_de_type (de, inode);
	kunmap_local(kaddr);
	ext2_commit_chunk(folio, 0, chunk_size);
	err = ext2_handle_dirsync(inode);
fail:
	folio_put(folio);
	return err;
}

/*
 * routine to check that the specified directory is empty (for rmdir)
 */
int ext2_empty_dir(struct inode *inode)
{
	struct folio *folio;
	char *kaddr;
	unsigned long i, npages = dir_pages(inode);

	for (i = 0; i < npages; i++) {
		ext2_dirent *de;

		kaddr = ext2_get_folio(inode, i, 0, &folio);
		if (IS_ERR(kaddr))
			return 0;

		de = (ext2_dirent *)kaddr;
		kaddr += ext2_last_byte(inode, i) - EXT2_DIR_REC_LEN(1);

		while ((char *)de <= kaddr) {
			if (de->rec_len == 0) {
				ext2_error(inode->i_sb, __func__,
					"zero-length directory entry");
				printk("kaddr=%p, de=%p\n", kaddr, de);
				goto not_empty;
			}
			if (de->inode != 0) {
				/* check for . and .. */
				if (de->name[0] != '.')
					goto not_empty;
				if (de->name_len > 2)
					goto not_empty;
				if (de->name_len < 2) {
					if (de->inode !=
					    cpu_to_le32(inode->i_ino))
						goto not_empty;
				} else if (de->name[1] != '.')
					goto not_empty;
			}
			de = ext2_next_entry(de);
		}
		folio_release_kmap(folio, kaddr);
	}
	return 1;

not_empty:
	folio_release_kmap(folio, kaddr);
	return 0;
}

static int ext2_dir_open(struct inode *inode, struct file *file)
{
	file->private_data = kzalloc(sizeof(u64), GFP_KERNEL);
	if (!file->private_data)
		return -ENOMEM;
	return 0;
}

static int ext2_dir_release(struct inode *inode, struct file *file)
{
	kfree(file->private_data);
	return 0;
}

static loff_t ext2_dir_llseek(struct file *file, loff_t offset, int whence)
{
	return generic_llseek_cookie(file, offset, whence,
				     (u64 *)file->private_data);
}

const struct file_operations ext2_dir_operations = {
	.open		= ext2_dir_open,
	.release	= ext2_dir_release,
	.llseek		= ext2_dir_llseek,
	.read		= generic_read_dir,
	.iterate_shared	= ext2_readdir,
	.unlocked_ioctl = ext2_ioctl,
#ifdef CONFIG_COMPAT
	.compat_ioctl	= ext2_compat_ioctl,
#endif
	.fsync		= ext2_fsync,
};