Commit f4d6477f introduced a workaround for the lack of hardware
broadcasting of the cache maintenance operations on ARM11MPCore.
However, the workaround is only valid on CPUs that do not do speculative
loads into the D-cache.
This patch adds a Kconfig option with corresponding help text to make the
above clear. When the DMA_CACHE_RWFO option is disabled, the kernel
behaviour reverts to that prior to the f4d6477f commit. This also allows
ARMv6 UP processors with speculative loads to work correctly.
For other processors, a different workaround may be needed.
Cc: Ronen Shitrit <rshitrit@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
A recent patch for DMA cache maintenance on ARM11MPCore added a
write-for-ownership trick to the v6_dma_inv_range() function. Such an
operation destroys data already present in the buffer. However, this
function is used with dma_sync_single_for_device(), which is supposed to
preserve the existing data transferred into the buffer. This patch adds a
combination of read/write for ownership to preserve the original data.
Reported-by: Ronen Shitrit <rshitrit@marvell.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This macro is not defined when !CONFIG_MMU so this patch moves the
CONSISTENT_* definitions to the CONFIG_MMU section.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARMv6 and above have a restriction whereby aliasing virtual:physical
mappings must not have differing memory type and sharability
attributes. Strictly, this covers the memory type (strongly ordered,
device, memory), cache attributes (uncached, write combine, write
through, write back read alloc, write back write alloc) and the
shared bit.
However, using ioremap() and its variants on system RAM results in
mappings which differ in these attributes from the main system RAM
mapping. Other architectures with similar restrictions approach this
problem in the same way: they do not permit ioremap() on main system
RAM.
Make ARM behave in the same way, with a WARN_ON() such that users can
be traced and an alternative approach found.
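As a rough illustration, the check amounts to something like the
following sketch (placement and helper names are illustrative, assuming
pfn_valid() is a usable "is this system RAM?" test):

    /* Hedged sketch: refuse ioremap() of system RAM, since aliasing it
     * with different attributes is unpredictable on ARMv6+. */
    void __iomem *ioremap_checked(unsigned long phys_addr, size_t size,
                                  unsigned int mtype)
    {
        if (pfn_valid(__phys_to_pfn(phys_addr))) {
            WARN_ON(1);         /* trace the offending caller */
            return NULL;
        }
        return __arm_ioremap(phys_addr, size, mtype);
    }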
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The VM subsystem assumes that there are valid memmap entries up to
the bank end, aligned to MAX_ORDER_NR_PAGES. It will try to read
these page structs, and so we cannot free any memmap entries that
it may inspect.
Signed-off-by: Michael Bohan <mbohan@codeaurora.org>
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
When a function's incoming parameters are not in the input operands list,
gcc 4.5 does not load the parameters into registers before calling the
function, but the inline assembly assumes valid addresses inside the
function. This breaks the code because r0 and r1 are invalid when
execution enters v4wb_copy_user_page().
The constant also needs to be used as the third input operand, so account
for that as well.
Tested on qemu arm.
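The idiom being fixed looks roughly like this (a simplified sketch, not
the kernel's actual copy loop; the point is that the pointers and loop
count appear in the operand lists rather than being assumed to sit in
r0/r1):

    static void copy_page_sketch(void *kto, const void *kfrom)
    {
        unsigned long tmp, count = 4096 / 4;

        asm volatile(
        "1:  ldr   %0, [%2], #4\n"      /* load a word, advance kfrom */
        "    str   %0, [%1], #4\n"      /* store it, advance kto */
        "    subs  %3, %3, #1\n"
        "    bne   1b"
        : "=&r" (tmp), "+r" (kto), "+r" (kfrom), "+r" (count)
        : /* the real patch also passes its constant as an "I" input */
        : "cc", "memory");
    }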
CC: <stable@kernel.org>
Signed-off-by: Khem Raj <raj.khem@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Instruction faults on pre-ARMv6 CPUs are interpreted as
a 'translation fault', but do_translation_fault() does not
handle the case where user mode tries to run an instruction
above TASK_SIZE, resulting in infinite retries of that
instruction.
CC: <stable@kernel.org>
Signed-off-by: Anfei Zhou <anfei.zhou@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When CONFIG_DEBUG_HIGHMEM is used, the fixmap entry used for a highmem page
by kmap_atomic() is always cleared by kunmap_atomic(). This helps find
bad usages such as dereferences after the unmap, or overflow into the
adjacent fixmap areas.
But this debugging aid is completely bypassed when a kmap for the same
page already exists, as that kmap is reused instead. On VIVT systems we
have no choice but to reuse that kmap due to cache coherency issues,
but on non-VIVT systems we should always force the fixmap usage when
debugging is active.
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This fixes a bug in mm/init.c when freeing the TCM compile memory:
it was being referred to as a char *, which is incorrect since that
dereferences the pointer and feeds in the value at the location
instead of the address of it. Change it to a plain char and use
&(char) to reference it.
Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Cc: <stable@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch fixes flush_cache_all() for ARMv7 SMP. It was
missing from commit b8349b569a.
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* 'devel' of master.kernel.org:/home/rmk/linux-2.6-arm: (224 commits)
ARM: remove 'select GENERIC_TIME'
ARM: 6136/1: ARCH_REQUIRE_GPIOLIB selects GENERIC_GPIO
ARM: 6074/1: oprofile: convert from sysdev to platform device
ARM: 6073/1: oprofile: remove old files and update KConfig
ARM: 6072/1: oprofile: use perf-events framework as backend
ARM: 6071/1: perf-events: allow modules to query the number of hardware counters
ARM: 6070/1: perf-events: add support for xscale PMUs
ARM: 6069/1: perf-events: use numeric ID to identify PMU
ARM: 6064/1: pmu: register IRQs at runtime
ARM: Optionally allow ARMv6 to use 'normal, bufferable' memory for DMA
ARM: 6134/1: Handle instruction cache maintenance fault properly
ARM: nwfpe: allow debugging output to be configured at runtime
ARM: rename mach_cpu_disable() to platform_cpu_disable()
ARM: 6132/1: PL330: Add common core driver
ARM: 6094/1: Extend cache-l2x0 to support the 16-way PL310
ARM: Move memory mapping into mmu.c
ARM: Ensure meminfo is sorted prior to sanity_check_meminfo
ARM: Remove useless linux/bootmem.h includes
ARM: convert /proc/cpu/aligment to seq_file
arm: use asm-generic/scatterlist.h
...
Provide a configuration option to allow the ARMv6 to use normal
bufferable memory for coherent DMA. This option is forced to 'y'
for ARMv7, and offered as a configuration option on ARMv6.
Enabling this option requires drivers to have the necessary barriers
to ensure that data in DMA coherent memory is visible prior to the
DMA operation commencing.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Between "clean D line..." and "invalidate I line" operations in
v7_coherent_user_range(), the memory page may get swapped out.
And the fault on "invalidate I line" could not be properly handled
causing the oops.
In ARMv6 "external abort on linefetch" replaced by "instruction cache
maintenance fault". Let's handle it as translation fault. It fixes the
issue.
I'm not sure if it's reasonable to check arch version in run-time.
Let's do it in compile time for now.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Siarhei Siamashka <siarhei.siamashka@nokia.com>
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The L310 cache controller's interface is almost identical
to the L210. One major difference is that the PL310 can
have up to 16 ways.
This change uses the cache's part ID and the Associativity
bits in the AUX_CTRL register to determine the number of ways.
Also, this version prints out the CACHE_ID and AUX_CTRL registers.
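The probe looks roughly like the following sketch (register offsets and
bit positions per the usual L2X0 layout; treat the names as
illustrative):

    cache_id = readl(l2x0_base + L2X0_CACHE_ID);
    aux = readl(l2x0_base + L2X0_AUX_CTRL);

    if ((cache_id & L2X0_CACHE_ID_PART_MASK) == L2X0_CACHE_ID_PART_L310)
        /* PL310: a single associativity bit selects 8 or 16 ways */
        ways = (aux & (1 << 16)) ? 16 : 8;
    else
        /* L210: 4-bit associativity field */
        ways = (aux >> 13) & 0xf;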
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jason S. McMullan <jason.mcmullan@netronome.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Convert code away from ->read_proc/->write_proc interfaces. Switch to
proc_create()/proc_create_data() which makes addition of proc entries
reliable wrt NULL ->proc_fops, NULL ->data and so on.
The problem with ->read_proc et al is described in commit
786d7e1612 ("Fix rmmod/read/write races in /proc entries").
This patch is part of an effort to remove the old simple procfs PAGE_SIZE
buffer interface.
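For reference, the replacement pattern is the usual seq_file one; a
minimal sketch (names illustrative):

    static int example_proc_show(struct seq_file *m, void *v)
    {
        seq_printf(m, "value: %d\n", 42);
        return 0;
    }

    static int example_proc_open(struct inode *inode, struct file *file)
    {
        return single_open(file, example_proc_show, NULL);
    }

    static const struct file_operations example_proc_fops = {
        .owner   = THIS_MODULE,
        .open    = example_proc_open,
        .read    = seq_read,
        .llseek  = seq_lseek,
        .release = single_release,
    };

    /* registration:
     * proc_create("example", 0444, NULL, &example_proc_fops); */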
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Enable Tauros2 L2 in mmp2. Tauros2 L2 is shared in Marvell ARM cores.
Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
The standard I-cache Invalidate All (ICIALLU) and Branch Prediction
Invalidate All (BPIALL) operations are not automatically broadcast to
the other CPUs in an ARMv7 MP system. This patch adds the Inner
Shareable variants, ICIALLUIS and BPIALLIS, used when built for ARMv7
SMP.
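For illustration, the Inner Shareable operations are plain CP15 writes;
a hedged sketch:

    static inline void invalidate_icache_bp_is(void)
    {
        const unsigned long zero = 0;

        asm volatile(
        "mcr  p15, 0, %0, c7, c1, 0\n"   /* ICIALLUIS */
        "mcr  p15, 0, %0, c7, c1, 6\n"   /* BPIALLIS */
        : : "r" (zero) : "memory");
    }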
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The Snoop Control Unit on the ARM11MPCore hardware does not detect the
cache operations and the dma_cache_maint*() functions may leave stale
cache entries on other CPUs. The solution implemented in this patch
performs a Read or Write For Ownership in the ARMv6 DMA cache
maintenance functions. These LDR/STR instructions change the cache line
state to shared or exclusive so that the cache maintenance operation has
the desired effect.
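A hedged sketch of the idiom for the invalidate path (shown with both
the read and the write, matching the follow-up fix above that preserves
existing buffer data):

    static inline void v6_dma_inv_line_rwfo(unsigned long addr)
    {
        unsigned long tmp;

        asm volatile(
        "ldr  %0, [%1]\n"                /* read for ownership */
        "str  %0, [%1]\n"                /* write it back unchanged */
        "mcr  p15, 0, %1, c7, c6, 1\n"   /* invalidate D line */
        : "=&r" (tmp) : "r" (addr) : "memory");
    }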
Tested-by: George G. Davis <gdavis@mvista.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 7959722 introduced calls to copy_(to|from)_user_page() from
access_process_vm() in mm/nommu.c. However, copy_to_user_page() was
not implemented on noMMU ARM.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 31aa8fd6 introduced the __arm_ioremap_caller() function but the
nommu.c version did not have the _caller suffix.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The show_mem() and mem_init() functions assume that the page map is
contiguous and calculate the start and end page of a bank using (map +
pfn). This fails with SPARSEMEM, where pfn_to_page() must be used.
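In other words, the per-bank page pointers need to be derived along
these lines (a sketch, assuming the bank_pfn_start()/bank_pfn_end()
helpers):

    /* SPARSEMEM: the memmap is not contiguous, so go via pfn_to_page()
     * instead of offsetting a single mem_map base. */
    struct page *start = pfn_to_page(bank_pfn_start(bank));
    struct page *end   = pfn_to_page(bank_pfn_end(bank) - 1) + 1;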
Tested-by: Will Deacon <Will.Deacon@arm.com>
Tested-by: Marek Vasut <marek.vasut@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Handle incorrectly reported permission faults for qsd8650. On
permission faults, retry the MVA-to-PA conversion; if the retry
detects a translation fault, report it as a translation fault.
Cc: Jamie Lokier <jamie@shareable.org>
Signed-off-by: Dave Estes <cestes@quicinc.com>
Fix a compiler error in copypage-fa.c: the
struct vm_area_struct *vma parameter was missing from the
fa_copy_user_highpage() function.
Signed-off-by: Hans Ulli Kroll <ulli.kroll@googlemail.com>
/tmp/ccJ3ssZW.s: Assembler messages:
/tmp/ccJ3ssZW.s:1952: Error: can't resolve `.text' {.text section} - `.LFB1077'
This is caused because:
.section .data
.section .text
.section .text
.previous
does not return us to the .text section, but the .data section; this
makes use of .previous dangerous if the ordering of previous sections
is not known.
Fix up the other users of .previous; .pushsection and .popsection are
a safer pairing to use than .section and .previous.
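An illustrative C-level example of the safer pairing:

    /* .previous switches to whatever section was current before the
     * last .section change, so after nested changes it can land in
     * .data. .pushsection/.popsection maintain a stack and always
     * restore the section that was in effect at the push. */
    asm(
    ".pushsection .rodata\n"
    "example_word: .word 0x12345678\n"
    ".popsection");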
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This enables the l2x0 support and ensures that the secondary
CPU can see the page table and secondary data at this point.
Signed-off-by: srinidhi kasagar <srinidhi.kasagar@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This cache flush occurs when we first insert a page into the page
tables, where a page did not exist previously. There can be no
cache lines associated with this virtual mapping, so this cache
flush is redundant.
Tested-by: Mike Rapoport <mike@compulab.co.il>
Tested-by: Mikael Pettersson <mikpe at it.uu.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When crash happens in interrupt context there is no userspace context.
We always use current->active_mm in those cases.
Signed-off-by: Mika Westerberg <ext-mika.1.westerberg@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The VIVT cache of a highmem page is always flushed before the page
is unmapped. This cache flush is explicit through flush_cache_kmaps()
in flush_all_zero_pkmaps(), or through __cpuc_flush_dcache_area() in
kunmap_atomic(). There is also an implicit flush of those highmem pages
that were part of a process that just terminated making those pages free
as the whole VIVT cache has to be flushed on every task switch. Hence
unmapped highmem pages need no cache maintenance in that case.
However unmapped pages may still be cached with a VIPT cache because the
cache is tagged with physical addresses. There is no need for a whole
cache flush during task switching for that reason, and despite the
explicit cache flushes in flush_all_zero_pkmaps() and kunmap_atomic(),
some highmem pages that were mapped in user space end up still cached
even when they become unmapped.
So, we do have to perform cache maintenance on those unmapped highmem
pages in the context of DMA when using a VIPT cache. Unfortunately,
it is not possible to perform that cache maintenance using physical
addresses as all the L1 cache maintenance coprocessor functions accept
virtual addresses only. Therefore we have no choice but to set up a
temporary virtual mapping for that purpose.
And of course the explicit cache flushing when unmapping a highmem page
on a system with a VIPT cache now can go, which should increase
performance.
While at it, because the code in __flush_dcache_page() has to be modified
anyway, let's also make sure the mapped highmem pages are pinned with
kmap_high_get() for the duration of the cache maintenance operation.
Because kunmap() does unmap highmem pages lazily, it was reported by
Gary King <GKing@nvidia.com> that those pages ended up being unmapped
during cache maintenance on SMP causing segmentation faults.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Write combining/cached device mappings are not setting the shared bit,
which could potentially cause problems on SMP systems since the cache
lines won't participate in the cache coherency protocol.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include
those headers directly instead of assuming availability. As this
conversion needs to touch a large number of source files, the
following script was used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following.
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. gfp.h if only gfp is
used, slab.h if slab is used.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered:
alphabetical, Christmas tree, reverse Christmas tree, or at the end
if there doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition while adding it to implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them, as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored, as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on the arch to make things
build (like ipr on powerpc/64, which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from the tests in
step 7, I'm fairly confident about the coverage of this conversion
patch. If there is a breakage, it's likely to be something in one of
the arch headers, which should be easily discoverable on most builds
of the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
The mandatory barriers (mb, rmb, wmb) are used even on uniprocessor
systems for things like ordering Normal Non-cacheable memory accesses
with DMA transfer (via Device memory writes). The current implementation
uses dmb() for mb() and friends but this is not sufficient. The DMB only
ensures the relative ordering of the observability of accesses by other
processors or devices acting as masters. In case of DMA transfers
started by writes to device memory, the relative ordering is not ensured
because accesses to slave ports of a device are not considered
observable by the DMB definition.
A DSB is required for the data to reach the main memory (even if mapped
as Normal Non-cacheable) before the device receives the notification to
begin the transfer. Furthermore, some L2 cache controllers (like L2x0 or
PL310) buffer stores to Normal Non-cacheable memory and this would need
to be drained with the outer_sync() function call.
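Sketched out, the resulting mandatory barriers look roughly like this
(assuming an ARMv7-style dsb instruction; earlier architectures use
the equivalent CP15 operation):

    #define dsb()   __asm__ __volatile__ ("dsb" : : : "memory")

    #define mb()    do { dsb(); outer_sync(); } while (0)
    #define rmb()   dsb()
    #define wmb()   mb()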
The patch also allows platforms to define their own mandatory barriers
implementation by selecting CONFIG_ARCH_HAS_BARRIERS and providing a
mach/barriers.h file.
Note that the SMP barriers are unchanged (being DMBs as before) since
they are only guaranteed to work with Normal Cacheable memory.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The L2x0 cache controllers need to explicitly drain their write buffer
even for Normal Noncacheable memory accesses.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch introduces the outer_cache_fns.sync function pointer together
with the OUTER_CACHE_SYNC config option that can be used to drain the
write buffer of the outer cache.
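A sketch of the hook, following the existing outer_cache_fns style:

    struct outer_cache_fns {
        void (*inv_range)(unsigned long, unsigned long);
        void (*clean_range)(unsigned long, unsigned long);
        void (*flush_range)(unsigned long, unsigned long);
    #ifdef CONFIG_OUTER_CACHE_SYNC
        void (*sync)(void);
    #endif
    };

    static inline void outer_sync(void)
    {
        if (outer_cache.sync)
            outer_cache.sync();
    }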
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add ARM_L1_CACHE_SHIFT_6 to arch/arm/Kconfig to allow CPUs with
64-byte L1 cache lines to indicate this without having to
alter the arch/arm/mm/Kconfig entry each time.
Update the mm Kconfig so that ARM_L1_CACHE_SHIFT default value
uses this and change OMAP3 and S5PC1XX to select ARM_L1_CACHE_SHIFT_6.
Acked-by: Ben Dooks <ben-linux@fluff.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
update_mmu_cache() is called with the page table for the faulted-in
page still mapped. We need to modify the PTE for this page to ensure
coherency with other shared mappings when multiple shared mappings
exist within a MM.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On VIVT ARM, when we have multiple shared mappings of the same file
in the same MM, we need to ensure that we have coherency across all
copies. We do this via make_coherent() by making the pages
uncacheable.
This used to work fine, until we allowed highmem with highpte - we
now have a page table which is mapped as required, and is not available
for modification via update_mmu_cache().
Ralf Baechle suggested getting rid of the PTE value passed to
update_mmu_cache():
On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
to construct a pointer to the pte again. Passing a pte_t * is much
more elegant. Maybe we might even replace the pte argument with the
pte_t?
Ben Herrenschmidt would also like the pte pointer for PowerPC:
Passing the ptep in there is exactly what I want. I want that
-instead- of the PTE value, because I have issue on some ppc cases,
for I$/D$ coherency, where set_pte_at() may decide to mask out the
_PAGE_EXEC.
So, pass in the mapped page table pointer into update_mmu_cache(), and
remove the PTE value, updating all implementations and call sites to
suit.
Includes a fix from Stephen Rothwell:
sparc: fix fallout from update_mmu_cache API change
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Some glibc versions intentionally create lots of alignment faults in
their gconv code, which if not fixed up, results in segfaults during
boot. This can prevent systems booting properly.
There is no clear hard-configurable default for this; the desired
default depends on the nature of the userspace which is going to be
booted.
So, provide a way for the alignment fault handler to be configured via
the kernel command line.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Makes it consistent with VMALLOC_START
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Adds DMA area to 'virtual memory map' startup message
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Code based on parisc and x86_32.
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch implements the work-around for erratum 588369. The secure
API is used to alter the L2 debug register because of TrustZone.
This version is updated with comments from Russell and Catalin and
generated against the 2.6.33-rc6 mainline kernel. Detailed
comments can be found at:
http://www.spinics.net/lists/linux-omap/msg23431.html
Signed-off-by: Woodruff Richard <r-woodruff2@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds L2 cache support for OMAP4. An external L2 cache
is used in OMAP4.
CC: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds the cache maintenance-by-line helper functions.
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Otherwise the kernel built with both CPU_V6 and CPU_V7 will not
boot on omap2.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The current ASID allocation algorithm doesn't ensure the notification
of the other CPUs when the ASID rolls over. This may lead to two
processes using the same ASID (but different generation) or multiple
threads of the same process using different ASIDs.
This patch adds the broadcasting of the ASID rollover event to the
other CPUs. To avoid a race on multiple CPUs modifying "cpu_last_asid"
during the handling of the broadcast, the ASID numbering now starts at
"smp_processor_id() + 1". At rollover, the cpu_last_asid will be set
to NR_CPUS.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ARM setup code includes its own parser for early params; there's
also one in the generic init code.
This patch removes __early_param (and related code) from
arch/arm/kernel/setup.c, and changes users to the generic early_param
macro instead.
The generic macro takes a char * argument, rather than char **, so we
need to update the parser functions a little.
Signed-off-by: Jeremy Kerr <jeremy.kerr@canonical.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Always creating this directory avoids other users having to jump
through silly hoops when they want to share this directory.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This allows the procfs vmallocinfo file to show who created the ioremap
regions. Note: __builtin_return_address(0) doesn't do what's expected
if it's used in an inline function, so we leave __arm_ioremap callers
in such places alone.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARMv6 and ARMv7 CPUs can perform speculative prefetching, which makes
DMA cache coherency handling slightly more interesting. Rather than
being able to rely upon the CPU not accessing the DMA buffer until DMA
has completed, we now must expect that the cache could be loaded with
possibly stale data from the DMA buffer.
Where DMA involves data being transferred to the device, we clean the
cache before handing it over for DMA; otherwise we invalidate the buffer
to get rid of potential writebacks. On DMA completion, if data was
transferred from the device, we invalidate the buffer to get rid of
any stale speculative prefetches.
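Roughly, the ownership transfers reduce to the following policy (a
sketch using the dmac_* range helpers; function names illustrative):

    /* CPU -> device: make the buffer safe for the device to use */
    static void dma_map_area_sketch(const void *start, size_t size,
                                    enum dma_data_direction dir)
    {
        if (dir == DMA_FROM_DEVICE)
            dmac_inv_range(start, start + size);   /* drop writebacks */
        else
            dmac_clean_range(start, start + size); /* push CPU data */
    }

    /* device -> CPU: discard anything speculatively loaded meanwhile */
    static void dma_unmap_area_sketch(const void *start, size_t size,
                                      enum dma_data_direction dir)
    {
        if (dir != DMA_TO_DEVICE)
            dmac_inv_range(start, start + size);
    }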
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
These are now unused, and so can be removed.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
dma_cache_maint_contiguous is now simple enough to live inside
dma_cache_maint_page, so move it there.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
The DMA API has the notion of buffer ownership; make it explicit in the
ARM implementation of this API. This gives us a set of hooks to allow
us to deal with CPU cache issues arising from non-cache coherent DMA.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-By: Jamie Iles <jamie@jamieiles.com>
The perf events subsystem allows counting of both hardware and
software events. This patch implements the bare minimum for software
performance events.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We already know the pfn for the page to be modified in make_coherent,
so let's stop recalculating it unnecessarily.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
update_mmu_cache() is called with a page table already mapped. We
call make_coherent(), which then calls adjust_pte(), which wants to
map other page tables. This causes kmap_atomic() to BUG() because
the slot it's trying to use is already taken.
Since do_adjust_pte() modifies the page tables, we are also missing
any form of locking, so we're risking corrupting the page tables.
Fix this by using pte_offset_map_nested(), and taking the pte page
table lock around do_adjust_pte().
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The comments in cacheflush.h should follow what's in
struct cpu_cache_fns. The comments for V6 and V7 are
unnecessary.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The comments in arm_machine_restart() suggest that cpu_proc_fin()
will clean and disable the cache and turn off interrupts. This does
not seem to be implemented for proc-v7.S; implement it the same
way as for proc-v6.S.
This also makes kexec work for v7. Note that a related TLB and
branch target flush patch is also needed to avoid a kexec
"crc error".
Note that there are still some issues that seem to be related
to the L2 cache being on, causing an occasional uncompress "crc error"
with kexec. Anyway, this gets kexec mostly working on V7 for now.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We need to do this if we tinker with the MMU entries.
This fixes the occasional bug with kexec where the new kernel
fails to uncompress with "crc error". Most likely at least
kexec on v6 and v7 needs this fix.
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* master.kernel.org:/home/rmk/linux-2.6-arm:
ARM: Ensure ARMv6/7 mm files are built using appropriate assembler options
ARM: Fix wrong dmb
ARM: 5874/1: serial21285: fix disable_irq-from-interrupt-handler deadlock
ARM: 5873/1: ARM: Fix the reset logic for ARM RealView boards
ARM: 5872/1: ARM: include needed linux/cpu.h in asm/cpu.h
ARM: 5871/1: arch/arm: Fix build failure for lpd7a404_defconfig caused by missing includes
ARM: 5870/1: arch/arm: Fix build failure for defconfigs without CONFIG_ISA_DMA_API set
ARM: 5868/1: ARM: fix "BUG: using smp_processor_id() in preemptible code"
ARM: 5867/1: Update U300 defconfig
ARM: 5866/1: arm ptrace: use unsigned types for kernel pt_regs
[ARM] pxa: fix strange characters in zaurus gpio .desc
ARM: add missing recvmmsg syscall number
[ARM] pxa: fix compiler warnings of unused variable 'id' in cpu_is_pxa9*()
[ARM] pxa: update pwm_backlight->notify() to include missed 'struct device *'
[ARM] pxa: enable L2 if present in XSC3
[ARM] pxa: do not enable L2 after MMU is enabled
A kernel with both ARMv6 and ARMv7 selected results in build errors.
Fix this by specifying the proper architectures for these assembly
files.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Makes it consistent with the extern declaration, used when CONFIG_HIGHMEM
is set. Removes redundant casts in printout messages.
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Chen Liqin <liqin.chen@sunplusct.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Check whether L2 is present or not in XSC3. If it's present, enable L2
immediately.
Disabling L2 after L2 has been enabled would result in unpredictable
behavior of the XSC3 processor.
Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
The outer cache code checked whether L2 is enabled or not, and if L2
wasn't enabled in XSC3, it would enable L2. This operation is evil and
would make the system hang.
The XSC3 core documentation says the following:
"Following reset, the L2 Unified Cache Enable bit is cleared. To enable the L2
Cache, software may set the bit to a '1' before or at the same time as enabling
the MMU. Enabling the L2 Cache after the MMU has been enabled or disabling the
L2 Cache after the L2 Cache has been enabled, may result in unpredictable
behavior of the processor."
When the outer cache is initialized, the MMU is already enabled, so we
cannot enable L2 after the MMU has been enabled.
Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
PAGE_KERNEL should not be executable; any area marked executable can
be prefetched into the instruction cache. We don't want vmalloc areas
to be read in this way.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
It is unpredictable to have the same memory mapped using different
shared bit settings for ARMv6 and ARMv7 CPUs. Fix this for the CPU
write buffer bug test.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
26-bit ARM support was removed a long time ago, and this symbol has
been defined to be 'y' ever since. As it's never disabled anymore,
we can kill it without any side effects.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 2c9b9c849 added an argument to __cpuc_flush_dcache_page
and renamed it.
Update a caller of the old function to fix this build error:
CC arch/arm/mm/copypage-v6.o
arch/arm/mm/copypage-v6.c: In function 'v6_copy_user_highpage_nonaliasing':
arch/arm/mm/copypage-v6.c:51: error: implicit declaration of function '__cpuc_flush_dcache_page'
make[1]: *** [arch/arm/mm/copypage-v6.o] Error 1
make: *** [arch/arm/mm] Error 2
Reported-by: Jinsung Yang <jsgood.yang@samsung.com>
Signed-off-by: Anand Gadiyar <gadiyar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There are not enough users to warrant its existence, and it is actually
an obstacle to progress with the new DMA API, which cannot cover this
case properly.
To keep backward compatibility, let's perform the necessary custom
cache maintenance locally in the only driver affected.
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There's no point having the hardware support background operations
if we issue a cache operation, and then wait for it to complete
before calculating the address of the next operation. We gain no
advantage in the cache controller stalling the bus until completion.
What we should be doing is using the 'wait' time productively by
calculating the address of the next operation, and only then waiting
for the previous operation to complete. This means that cache
operations can occur in parallel with the CPU calculating the next
address.
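The per-line helpers then take the wait-before-issue form; a sketch:

    static inline void cache_wait(void __iomem *reg, unsigned long mask)
    {
        while (readl(reg) & mask)   /* previous background op busy? */
            ;
    }

    static inline void l2x0_inv_line(unsigned long addr)
    {
        void __iomem *reg = l2x0_base + L2X0_INV_LINE_PA;

        cache_wait(reg, 1);   /* wait for the *previous* operation */
        writel(addr, reg);    /* then start this one and return */
    }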
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Taking the spinlock for every iteration is very expensive; instead,
batch iterations up into 4K blocks, releasing and reacquiring the
spinlock between each block.
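A sketch of the batching (constants illustrative):

    while (start < end) {
        unsigned long blk_end = min(start + SZ_4K, end);

        spin_lock_irqsave(&l2x0_lock, flags);
        while (start < blk_end) {
            l2x0_inv_line(start);
            start += CACHE_LINE_SIZE;
        }
        spin_unlock_irqrestore(&l2x0_lock, flags);
    }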
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Dirk Behme reported instability on ARM11 SMP (VIPT non-aliasing cache)
caused by the dynamic linker changing protection on text pages to write
GOT entries. The problem is due to an interaction between the write
faulting code providing new anonymous pages which are incoherent with
the I-cache due to write buffering, and the I-cache not having been
invalidated.
a4db94d plugs the hole with the data cache coherency. This patch
provides the other half of the fix by flushing the I-cache in
flush_cache_range() for VM_EXEC VMAs (which is what we have when the
region is being made executable again.) This ensures that the I-cache
will be up to date with the newly COW'd pages.
Note: if users are writing instructions, then they still need to use
the ARM sys_cacheflush API to ensure that the caches are correctly
synchronized.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
flush_cache_mm() is called in two cases:
1. when a process exits, just before the page tables are torn down.
We can allow the stale lines to evict themselves over time without
causing any harm.
2. when a process forks, and we've allocated a new ASID.
The instruction cache issues are dealt with as pages are brought
into the new process address space. Flushing the I-cache here is
therefore unnecessary.
However, we must keep the VIPT aliasing D-cache flush to ensure that
any dirty cache lines are not written back after the pages have been
reallocated for some other use - which would result in corruption.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The I and D caches for copy-on-write pages on processors with
write-allocate caches become incoherent, causing problems for
applications relying on CoW for text pages (e.g. the dynamic linker
relocating symbols in a text page). This patch flushes the D-cache
for such pages.
Cc: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Both call sites for __flush_dcache_page() end up calling
__flush_icache_all() themselves, so having __flush_dcache_page() do
this as well is wasteful. Remove the duplicated icache flushing.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If running in non-secure mode, accessing some registers of the l2x0
will fault. So check if the l2x0 is already enabled; if so, do not
access those secure registers.
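A sketch of the resulting initialization logic:

    /* The aux control and enable registers are secure-only; leave them
     * alone if a secure-side boot loader already enabled the cache. */
    if (!(readl(l2x0_base + L2X0_CTRL) & 1)) {
        writel(aux, l2x0_base + L2X0_AUX_CTRL);
        l2x0_inv_all();
        writel(1, l2x0_base + L2X0_CTRL);   /* enable */
    }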
Signed-off-by: srinidhi kasagar <srinidhi.kasagar@stericsson.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The zero page is read-only, and has its cache state cleared during
boot. No further maintenance for this page is required.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
page_address() is a function call rather than a macro, and so:
if (page_address(page))
do_something(page_address(page));
results in two calls to this function. This is unnecessary; remove
the duplication.
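The fix is simply to latch the result once:

    void *addr = page_address(page);

    if (addr)
        do_something(addr);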
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We had two copies of the wrapper code for VIVT cache flushing - one in
asm/cacheflush.h and one in arch/arm/mm/flush.c. Reduce this down to
one common copy.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Support for the Tauros2 L2 cache controller as used with the PJ1
and PJ4 CPUs.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
The Marvell Dove (88AP510) is a high-performance, highly integrated,
low power SoC with high-end ARM-compatible processor (known as PJ4),
graphics processing unit, high-definition video decoding acceleration
hardware, and a broad range of peripherals.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
On ARMv7, it is invalid to map the same physical address multiple times
with different memory types. Since system RAM is already mapped as
'memory', subsequent remapping of it must retain this attribute.
However, DMA memory maps it as "strongly ordered". Fix this by introducing
'pgprot_dmacoherent()' which provides the necessary page table bits for
DMA mappings.
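A sketch of the helper, following the existing pgprot_* style (macro
body per the "Normal, bufferable" description above):

    /* DMA coherent memory: Normal, non-cacheable ("memory, bufferable")
     * rather than Strongly Ordered. */
    #define pgprot_dmacoherent(prot) \
        __pgprot_modify(prot, L_PTE_MT_MASK, L_PTE_MT_BUFFERABLE)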
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
It's unnecessary; x86 doesn't do it, and ALSA doesn't require it
anymore.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
This entirely separates the DMA coherent buffer remapping code from
the allocation code, and gets rid of the duplicate copy in the !MMU
section.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
IXP23xx added support for dma_alloc_coherent() for DMA arches with an
exception in dma_alloc_coherent(). This is a subset of what goes on
in __dma_alloc(), and there is no reason why dma_alloc_writecombine()
should not be given the same treatment (except, maybe, that IXP23xx
doesn't use it.)
We can better deal with this by moving the arch_is_coherent() test
inside __dma_alloc() and killing the code duplication.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
No point wrapping the contents of this function with #ifdef CONFIG_MMU
when we can place it and the core_initcall() entirely within the
existing conditional block.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
We effectively have three implementations of dma_free_coherent() mixed up
in the code; the incoherent MMU, coherent MMU and noMMU versions.
The coherent MMU and noMMU versions are actually functionally identical.
The incoherent MMU version is almost the same, but with the additional
step of unmapping the secondary mapping.
Separate out this additional step into __dma_free_remap() and simplify
the resulting dma_free_coherent() code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
The nommu version of dma_alloc_coherent was using kmalloc/kfree to manage
the memory. dma_alloc_coherent() is expected to work with a granularity
of a page, so this is wrong. Fix it by using the helper functions now
provided.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
The coherent architecture dma_alloc_coherent was using kmalloc/kfree to
manage the memory. dma_alloc_coherent() is expected to work with a
granularity of a page, so this is wrong. Fix it by using the helper
functions now provided.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
Samsung S5PC1xx SoCs are based on the ARM Cortex-A8, which has a 64-byte
L1 cache line size. Enable proper handling of the L1 cache on these SoCs.
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If running in non-secure mode, writing this register will fault.
Signed-off-by: Tony Thompson <Anthony.Thompson@arm.com>
Acked-by: Srinidhi Kasagar <srinidhikasagar@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mapping the same memory using two different attributes (memory
type, shareability, cacheability) is unpredictable. During boot,
we encounter a situation when we're updating the kernel's page
tables which can lead to dirty cache lines existing in the cache
which are subsequently missed. This causes stack corruption,
and therefore a crash.
Therefore, ensure that the shareability and cacheability settings
match the configuration that will be used later; this, together
with the restriction in early_cachepolicy(), ensures that we won't
create a mismatch during boot.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Errata 411920 indicates that any "invalidate entire instruction cache"
operation can fail if the right conditions are present. This is not
limited just to those operations in flush.c, but elsewhere. Place the
workaround in the already existing __flush_icache_all() function
instead.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This adds a better sched_clock() to the IOP platform,
implemented using its new clocksource support.
Tested on n2100, compile-tested for all plat-iop machines.
[dan.j.williams@intel.com: allow early cp6 access]
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
When SPARSEMEM_EXTREME is enabled, memory_present() wants to use bootmem
to allocate data structures. However, we call memory_present() after
declaring memory to bootmem, but before we've reserved areas.
This leads to sparsemem data structures being overwritten later in the
kernel's initialization (when slab initializes.)
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
We were using GFP_DMA for masks other than 0xffffffff, which is
wrong when some masks are initialized to 0xffffffffffffffff.
This caused such masks to obtain memory from the precious DMA
pool.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Remove the URL listed for Maverick EP9312 since it is not available
and modify the help text appropriately.
Signed-off-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Ryan Mallon <ryan@bluewatersys.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On ARM, update_mmu_cache() does a dcache flush for a page only if
it has a kernel mapping (page_mapping(page) != NULL). The correct
behavior would be to force the flush based on the dcache_dirty bit only.
One of the cases where the present logic would be a problem is when
a RAM-based block device[1] is used as a swap disk. In this case,
we would have in-memory data corruption as shown in the steps below:
do_swap_page()
{
    - Allocate a new page (if not already in swap cache)
    - Issue read from swap disk
      - Block driver issues flush_dcache_page()
      - flush_dcache_page() simply sets the PG_dcache_dirty bit and does
        not actually issue a flush since this page has no user space
        mapping yet.
    - Now, if the swap disk is almost full, this newly read page is
      removed from the swap cache and the corresponding swap slot is
      freed.
    - Map this page anonymously in user space.
    - update_mmu_cache()
      - Since this page does not have a kernel mapping (it's not in the
        page/swap cache and is mapped anonymously), it does not issue a
        dcache flush even if the dcache_dirty bit is set by
        flush_dcache_page() above.
    <user now gets stale data since the dcache was never flushed>
}
Same problem exists on mips too.
[1] example:
- brd (RAM based block device)
- ramzswap (RAM based compressed swap device)
Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Seemingly this support was missed when highmem was added, so
DEBUG_HIGHMEM wouldn't have checked the kmap_atomic type.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
If sparsemem is enabled, the start_pfn passed to the free_memmap()
function corresponds to an area of memory not known to the kernel and
pfn_to_page returns a wrong value. The (start_pfn - 1), however, is
known to the kernel.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This is needed because applications using the sys_cacheflush system call
can pass a memory range which isn't mapped yet even though the
corresponding vma is valid. The patch also adds unwinding annotations
for correct backtraces from the coherent_user_range() functions.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
According to the following in arch/arm/mm/fault.c, page faults from
kernel mode are invalid if mmap_sem is already held and there is
no exception handler defined for the faulting instruction:
/*
* As per x86, we may deadlock here. However, since the kernel only
* validly references user space from well defined areas of the code,
* we can bug out early if this is from code which shouldn't.
*/
if (!down_read_trylock(&mm->mmap_sem)) {
if (!user_mode(regs) && !search_exception_tables(regs->ARM_pc))
goto no_context;
Since mmap_sem can be held at arbitrary times by another thread, this
also means that any page faults from kernel mode are invalid if no
exception handler is defined for them, regardless of whether mmap_sem
is held at the time of the fault.
To more easily detect code that can trigger the above error, add a
check also for the case where mmap_sem is acquired. As this has an
overhead, make it a VM debug check.
Signed-off-by: Imre Deak <imre.deak@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Steven Walter <stevenrwalter@gmail.com> writes:
> I've been tracking down an instance of userspace data corruption,
> and I believe I have found a window during fork where data can be
> lost. The corruption is occurring on an ARMv5 system with VIVT
> caches. Here's the scenario in question. Thread A is forking,
> Thread B is running in userspace:
>
> Thread A: flush_cache_mm() (dup_mmap)
> Thread B: writes to a page in the above mm
> Thread A: pte_wrprotect() the above page (copy_one_pte)
> Thread B: writes to the same page again
>
> During thread B's second write, he'll take a fault and enter the
> do_wp_page() case. We'll end up calling copy_page(), which notably
> uses the kernel virtual addresses for the old and new pages. This
> means that the new page does not necessarily have the data from the
> first write. Now there are two conflicting copies of the same
> cache-line in dcache. If the userspace cache-line flushes before
> the kernel cache-line, we lose the changes made during the first
> write. do_wp_page does call flush_dcache_page on the newly-copied
> page, but there's still a window where the CPU could flush the
> userspace cache-line before then.
Resolve this by flushing the user mapping before copying the page
on processors with a writeback VIVT cache.
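A hedged sketch of the shape of the fix:

    /* Before reading the page through its kernel mapping, flush the
     * user-space alias so a writeback VIVT cache cannot still hold
     * thread B's first write. */
    void copy_user_highpage_sketch(struct page *to, struct page *from,
                                   unsigned long vaddr,
                                   struct vm_area_struct *vma)
    {
        if (cache_is_vivt())
            flush_cache_page(vma, vaddr, page_to_pfn(from));
        /* ... then perform the copy via the kernel mapping ... */
    }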
Note: this does have a performance impact, and so needs further
consideration before being merged - can we optimize out some of
the cache flushes if, eg, we know that the page isn't yet mapped?
Thread: <e06498070903061426o5875ad13hc6328aa0d3f08ed7@mail.gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Our copy_user_highpage() implementations may require cache maintenance.
Ensure that implementations have all necessary details to perform this
maintenance.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Currently, on ARMv6 and ARMv7, if an application tries to execute
code (or garbage) on a non-executable page, it hangs. This is caused
by incorrect prefetch abort handling: every prefetch abort is
currently processed as a translation fault.
To fix this, we have to analyze the instruction fault status register
to figure out the reason we got the abort, and process it
accordingly.
To make the IFSR distinguishable from the DFSR, we set bit 31, which
is reserved in both the IFSR and the DFSR.
This patch also tries to protect against future hangs on unexpected
exceptions. An application will be killed if an unexpected exception
type is received.
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The instruction fault status register, IFSR, was introduced on ARMv6 to
provide status information about the last instruction fault. It is
needed for proper prefetch abort handling.
We now have three prefetch abort models:
* legacy - for CPUs before ARMv6. They provide neither IFSR nor IFAR.
We simulate IFSR with the section translation fault status for them,
to generalize the code;
* ARMv6 - provides IFSR, but not IFAR;
* ARMv7 - provides both IFSR and IFAR.
Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>