# SPDX-License-Identifier: GPL-2.0-only
lib-y += delay.o
lib-y += memcpy.o
lib-y += memset.o
lib-y += memmove.o
ifeq ($(CONFIG_KASAN_GENERIC)$(CONFIG_KASAN_SW_TAGS),)
lib-y += strcmp.o
lib-y += strlen.o
lib-y += strncmp.o
endif
lib-y += csum.o
ifeq ($(CONFIG_MMU), y)
lib-$(CONFIG_RISCV_ISA_V) += uaccess_vector.o
endif
lib-$(CONFIG_MMU) += uaccess.o
lib-$(CONFIG_64BIT) += tishift.o
lib-$(CONFIG_RISCV_ISA_ZICBOZ) += clear_page.o
riscv: Optimize crc32 with Zbc extension

As suggested by the B-ext spec, the Zbc (carry-less multiplication)
instructions can be used to accelerate CRC calculations. Currently,
crc32 is the most widely used CRC function in the kernel, so this patch
focuses on optimizing just the crc32 APIs.

Compared with the current table-lookup based optimization, the Zbc
based optimization also achieves a large stride in the CRC calculation
loop; at the same time, it avoids the memory access latency of the
table-lookup based implementation and reduces the memory footprint.

If the Zbc extension is not supported in the runtime environment, the
table-lookup based implementation serves as the fallback via the
alternatives mechanism.

Inspecting a vmlinux built by gcc v12.2.0 at the default optimization
level (-O2) shows the following instruction count change for each
8-byte stride in the CRC32 loop:

rv64: crc32_be (54->31), crc32_le (54->13), __crc32c_le (54->13)
rv32: crc32_be (50->32), crc32_le (50->16), __crc32c_le (50->16)

The compile target CPU is little endian, so extra effort is needed for
byte swapping in the crc32_be API; thus its instruction count change is
not as significant as in the *_le cases.

This patch was tested on a QEMU VM with the kernel CRC32 selftest for
both rv64 and rv32. Running the CRC32 selftest on real hardware
(SpacemiT K1) with the Zbc extension shows 65% and 125% performance
improvements on crc32_test() and crc32c_test() respectively.

Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Reviewed-by: Charlie Jenkins <charlie@rivosinc.com>
Link: https://lore.kernel.org/r/20240621054707.1847548-1-xiao.w.wang@intel.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
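To make the two approaches the commit message contrasts more concrete, below is a
minimal, self-contained C sketch: a byte-at-a-time, table-driven CRC32 in the style of
the table-lookup fallback, plus a software model of a carry-less multiply, the primitive
that the Zbc clmul/clmulh instructions provide in hardware. This is illustrative only,
not the kernel's crc32.c; the polynomial 0xEDB88320 is the standard reflected CRC-32
constant, and the function names here (crc32_init, crc32_le, clmul_soft) are hypothetical.

/*
 * Illustrative sketch, not kernel code.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t crc32_table[256];

/* Build the 256-entry table: one shift-and-conditional-xor pass per bit. */
static void crc32_init(void)
{
	for (uint32_t i = 0; i < 256; i++) {
		uint32_t c = i;
		for (int k = 0; k < 8; k++)
			c = (c & 1) ? (c >> 1) ^ 0xEDB88320u : c >> 1;
		crc32_table[i] = c;
	}
}

/* Table-lookup CRC32 (reflected/little-endian convention): one byte per step. */
static uint32_t crc32_le(uint32_t crc, const uint8_t *p, size_t len)
{
	while (len--)
		crc = crc32_table[(crc ^ *p++) & 0xff] ^ (crc >> 8);
	return crc;
}

/*
 * Software model of a 64x64 -> low-64 carry-less multiply: XOR stands in for
 * addition, so no carries propagate between bit positions.  The Zbc clmul
 * instruction computes this in a single operation, which is what lets the
 * optimized path fold 8 input bytes per loop iteration.
 */
static uint64_t clmul_soft(uint64_t a, uint64_t b)
{
	uint64_t r = 0;

	for (int i = 0; i < 64; i++)
		if (b & (1ull << i))
			r ^= a << i;
	return r;
}

int main(void)
{
	const uint8_t msg[] = "123456789";

	crc32_init();
	/* With ~0u pre/post-inversion this prints the CRC-32 check value 0xcbf43926. */
	printf("crc32 = 0x%08x\n", ~crc32_le(~0u, msg, sizeof(msg) - 1));
	/* Carry-less 0x3 * 0x5 = (a<<0) ^ (a<<2) = 0xf. */
	printf("clmul(0x3, 0x5) = 0x%llx\n",
	       (unsigned long long)clmul_soft(0x3, 0x5));
	return 0;
}

The in-kernel Zbc path instead consumes a full 8-byte word per loop iteration by
combining carry-less multiplies with a final reduction step, which is where the
per-stride instruction count savings quoted above come from.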
lib-$(CONFIG_RISCV_ISA_ZBC) += crc32.o
obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
lib-$(CONFIG_RISCV_ISA_V) += xor.o
lib-$(CONFIG_RISCV_ISA_V) += riscv_v_helpers.o