Commit Graph

85 Commits

Author SHA1 Message Date
Nicolas Pitre
4fa20439a8 ARM: clean up idle handlers
Let's factor out the need_resched() check instead of having it duplicated
in every pm_idle implementation, to avoid inconsistencies (omap2_pm_idle
is already missing it).

The forceful re-enablement of IRQs after pm_idle has returned can go.
The warning certainly doesn't trigger for existing users.

To get rid of the pm_idle calling convention oddity, let's introduce
arm_pm_idle() allowing for the local_irq_enable() to be factored out
from SOC specific implementations. The default pm_idle function becomes
a wrapper for arm_pm_idle and it takes care of enabling IRQs closer to
where they are initially disabled.

And finally move the comment explaining the reason for that turning off
of IRQs to a more proper location.
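
A minimal sketch of the resulting structure (illustrative only, not the
actual patch; idle_step() is a made-up helper showing where the single
need_resched() check lives):

void (*arm_pm_idle)(void);              /* optional SoC-specific idle hook */

static void default_idle(void)
{
        if (arm_pm_idle)
                arm_pm_idle();          /* entered with IRQs disabled */
        else
                cpu_do_idle();          /* architected WFI */
        local_irq_enable();             /* re-enabled close to where they were disabled */
}

static void idle_step(void)             /* illustrative caller */
{
        local_irq_disable();
        if (!need_resched())
                default_idle();
        else
                local_irq_enable();
}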

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-and-tested-by: Jamie Iles <jamie@jamieiles.com>
2012-01-20 18:55:05 -05:00
Linus Torvalds
770e1b035d Merge branch 'for-linus' of git://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm
* 'for-linus' of git://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm: (207 commits)
  ARM: 7267/1: Remove BUILD_BUG_ON from asm/bug.h
  ARM: 7269/1: mach-sa1100: fix sched_clock breakage
  ARM: 7198/1: arm/imx6: add restart support for imx6q
  ARM: restart: remove the now empty arch_reset()
  ARM: restart: remove comments about adding code to arch_reset()
  ARM: restart: lpc32xx & u300: remove unnecessary printk
  ARM: restart: plat-samsung: remove plat/reset.h and s5p_reset_hook
  ARM: restart: w90x900: use new restart hook
  ARM: restart: Versatile Express: use new restart hook
  ARM: restart: versatile: use new restart hook
  ARM: restart: u300: use new restart hook
  ARM: restart: tegra: use new restart hook
  ARM: restart: spear: use new restart hook
  ARM: restart: shark: use new restart hook
  ARM: restart: sa1100: use new restart hook
  ARM: 7252/1: restart: S5PV210: use new restart hook
  ARM: 7251/1: restart: S5PC100: use new restart hook
  ARM: 7250/1: restart: S5P64X0: use new restart hook
  ARM: 7266/1: restart: S3C64XX: use new restart hook
  ARM: 7265/1: restart: S3C24XX: use new restart hook
  ...

Fix up trivial conflict in arch/arm/mm/init.c due to removal of
memblock_init() clashing with the movement of the sorting of the meminfo
array.
2012-01-06 18:15:25 -08:00
Russell King
7b9dd47136 Merge branch 'restart' into for-linus
Conflicts:
	arch/arm/mach-exynos/cpu.c

The changes to arch/arm/mach-exynos/cpu.c were moved to
mach-exynos/common.c.
2012-01-05 13:25:27 +00:00
Russell King
f88b8979d2 ARM: restart: remove the now empty arch_reset()
Remove the now empty arch_reset() from all the mach/system.h includes,
and remove its callsite.  Remove arm_machine_restart() as this function
no longer does anything useful.

For samsung platforms, remove the include of mach/system-reset.h and
plat/system-reset.h from their respective mach/system.h headers as these
just define their arch_reset functions.  As a result, the s3c2410 and
plat-samsung system-reset.h files are no longer referenced, so remove
these files entirely.

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-01-05 12:57:22 +00:00
Russell King
4045407fd7 Merge branch 'restart-cleanup' into restart
Conflicts:
	arch/arm/kernel/setup.c
2012-01-05 12:56:44 +00:00
Will Deacon
290130a177 ARM: reset: implement soft_restart for jumping to a physical address
Tools such as kexec and CPU hotplug require a way to reset the processor
and branch to some code in physical space. This requires various bits of
jiggery pokery with the caches and MMU which, when it goes wrong, tends
to lock up the system.

This patch fleshes out the soft_restart implementation so that it
branches to the reset code using the identity mapping. This requires us
to change to a temporary stack, held within the kernel image as a static
array, to avoid conflicting with the new view of memory.
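
Roughly, the flow described above looks like this (heavily abridged
sketch; the stack-switch mechanism and outer-cache handling are elided):

static u64 soft_restart_stack[16];      /* temporary stack inside the kernel image */

void soft_restart(unsigned long addr)
{
        local_irq_disable();
        local_fiq_disable();

        /* ... switch sp onto soft_restart_stack here ... */

        setup_mm_for_reboot();          /* install the identity mapping */
        flush_cache_all();
        cpu_proc_fin();                 /* turn off caching */
        flush_cache_all();

        cpu_reset(addr);                /* jump to the physical address */
}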

Signed-off-by: Will Deacon <will.deacon@arm.com>
2011-12-12 16:07:35 +00:00
Frederic Weisbecker
1268fbc746 nohz: Remove tick_nohz_idle_enter_norcu() / tick_nohz_idle_exit_norcu()
Those two APIs were provided to merge the calls of
tick_nohz_idle_enter() and rcu_idle_enter() into a single
irq-disabled section, so that no interrupt happening in between would
needlessly process any RCU work.

Now we are talking about an optimization whose benefits
have yet to be measured. Let's start simple and completely decouple the
idle RCU and dyntick idle logic.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
2011-12-11 10:31:57 -08:00
Frederic Weisbecker
2bbb6817c0 nohz: Allow rcu extended quiescent state handling separately from tick stop
It is assumed that rcu won't be used once we switch to tickless
mode and until we restart the tick. However this is not always
true, as in x86-64 where we dereference the idle notifiers after
the tick is stopped.

To prepare for fixing this, add two new APIs:
tick_nohz_idle_enter_norcu() and tick_nohz_idle_exit_norcu().

If no use of RCU is made in the idle loop between
tick_nohz_enter_idle() and tick_nohz_exit_idle() calls, the arch
must instead call the new *_norcu() version such that the arch doesn't
need to call rcu_idle_enter() and rcu_idle_exit().

Otherwise the arch must call tick_nohz_enter_idle() and
tick_nohz_exit_idle(), and also explicitly call (see the sketch below):

- rcu_idle_enter() after its last use of RCU before the CPU is put
to sleep.
- rcu_idle_exit() before the first use of RCU after the CPU is woken
up.
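
A sketch of the two calling conventions described above (arch_do_idle()
is a placeholder for the platform's actual idle instruction):

/* Variant 1: no RCU use inside the idle path */
tick_nohz_idle_enter_norcu();
while (!need_resched())
        arch_do_idle();
tick_nohz_idle_exit_norcu();

/* Variant 2: the idle path still uses RCU somewhere */
tick_nohz_idle_enter();
/* ... last use of RCU before sleeping ... */
rcu_idle_enter();
while (!need_resched())
        arch_do_idle();
rcu_idle_exit();
/* ... first use of RCU after wakeup ... */
tick_nohz_idle_exit();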

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
2011-12-11 10:31:36 -08:00
Frederic Weisbecker
280f06774a nohz: Separate out irq exit and idle loop dyntick logic
The tick_nohz_stop_sched_tick() function, which tries to delay
the next timer tick as long as possible, can be called from two
places:

- From the idle loop to start the dyntick idle mode
- From interrupt exit if we have interrupted the dyntick
idle mode, so that we reprogram the next tick event in
case the irq changed some internal state that requires this
action.

There are only a few minor differences between the two, and they are
handled by that function, driven by the ts->inidle
cpu variable and the inidle parameter. The whole arrangement guarantees
that we only update the dyntick mode on irq exit if we actually
interrupted the dyntick idle mode, and that we enter in RCU extended
quiescent state from idle loop entry only.

Split this function into:

- tick_nohz_idle_enter(), which sets ts->inidle to 1, enters
dynticks idle mode unconditionally if it can, and enters into RCU
extended quiescent state.

- tick_nohz_irq_exit() which only updates the dynticks idle mode
when ts->inidle is set (ie: if tick_nohz_idle_enter() has been called).

To maintain symmetry, tick_nohz_restart_sched_tick() has been renamed
to tick_nohz_idle_exit().

This simplifies the code and micro-optimizes the irq exit path (no need
for local_irq_save there). This also prepares for the split between
dynticks and rcu extended quiescent state logics. We'll need this split to
further fix illegal uses of RCU in extended quiescent states in the idle
loop.
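
A sketch of how the two halves are used after the split (arch_do_idle()
is a placeholder; scheduling details are elided):

/* idle loop side */
tick_nohz_idle_enter();         /* sets ts->inidle, may stop the tick */
while (!need_resched())
        arch_do_idle();
tick_nohz_idle_exit();          /* restarts the tick */

/* interrupt exit side */
void irq_exit_tail(void)        /* illustrative name */
{
        tick_nohz_irq_exit();   /* re-evaluates the tick only if ts->inidle is set */
}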

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
2011-12-11 10:31:35 -08:00
Russell King
742eaa6a6e Merge branch 'for-rmk' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable
Conflicts:
	arch/arm/common/gic.c
	arch/arm/plat-omap/include/plat/common.h
2011-12-05 23:20:17 +00:00
Will Deacon
11ed0ba175 ARM: 7161/1: errata: no automatic store buffer drain
This patch implements a workaround for PL310 erratum 769419. On
revisions of the PL310 prior to r3p2, the Store Buffer does not
automatically drain. This can cause normal, non-cacheable writes to be
retained when the memory system is idle, leading to suboptimal I/O
performance for drivers using coherent DMA.

This patch adds an optional wmb() call to the cpu_idle loop. On systems
with an outer cache, this causes an explicit flush of the store buffer.
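
The workaround boils down to something like the following in the idle
loop (the Kconfig symbol name is assumed here):

while (!need_resched()) {
#ifdef CONFIG_PL310_ERRATA_769419
        wmb();          /* with an outer cache, this explicitly drains
                           the PL310 store buffer */
#endif
        cpu_do_idle();  /* placeholder for the normal idle path */
}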

Cc: stable@vger.kernel.org
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-11-21 13:12:18 +00:00
Russell King
e879c862fb ARM: restart: only perform setup for restart when soft-restarting
We only need to set the system up for a soft-restart if we're going to
be doing a soft-restart.  Provide a new function (soft_restart()) which
does the setup and final call for this, and make platforms use it.
Eliminate the call to setup_restart() from the default handler.

This means that a platform's arch_reset() function is no longer called with
the page tables prepared for a soft-restart, and caches will still be
enabled.

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Viresh Kumar <viresh.kumar@st.com>
Acked-by: Krzysztof Hałasa <khc@pm.waw.pl>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Acked-by: Wan ZongShun <mcuos.com@gmail.com>
Acked-by: Eric Miao <eric.y.miao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-11-21 09:47:48 +00:00
Russell King
5aafec15bd ARM: restart: remove argument to setup_mm_for_reboot()
setup_mm_for_reboot() doesn't make use of its argument, so remove it.

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-11-10 22:30:28 +00:00
Russell King
ac15e00b1e ARM: restart: move reboot failure handling into machine_restart()
Move the handling of a failed reboot into machine_restart() so that this
condition is always caught, even if a platform decides to hook the
restart via arm_pm_restart().
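
A minimal sketch of the resulting machine_restart() (not the exact
patch; the message text is illustrative):

void machine_restart(char *cmd)
{
        machine_shutdown();
        arm_pm_restart(reboot_mode, cmd);

        /* If we get here, the platform failed to restart the machine. */
        printk("Reboot failed -- System halted\n");
        while (1)
                ;
}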

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-11-10 22:30:20 +00:00
Paul Gortmaker
ecea4ab6d3 arm: convert core files from module.h to export.h
Many of the core ARM kernel files are not modules, but just
including module.h for exporting symbols.  Now these files can
use the lighter footprint export.h for this role.

There are probably lots more, but ARM files of mach-* and plat-*
don't get coverage via a simple yesconfig build.  They will have
to be cleaned up and tested using their respective configs.
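
The conversion itself is a one-line header swap per file, for example
(pm_power_off is just one exported symbol from the core ARM code):

#include <linux/export.h>       /* was <linux/module.h>; export.h is enough
                                   when the file only needs EXPORT_SYMBOL */

void (*pm_power_off)(void);
EXPORT_SYMBOL(pm_power_off);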

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-10-31 19:30:49 -04:00
Laura Abbott
b380ab4f85 ARM: 7068/1: process: change from __backtrace to dump_stack in show_regs
Currently, show_regs calls __backtrace which does
nothing if CONFIG_FRAME_POINTER is not set. Switch to
dump_stack which handles both CONFIG_FRAME_POINTER and
CONFIG_ARM_UNWIND correctly.

__backtrace is now superseded by dump_stack in general
and show_regs was the last caller so remove __backtrace
as well.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-10-17 09:12:41 +01:00
David Brown
cbc158d6bf cpuidle: Consistent spelling of cpuidle_idle_call()
Commit a0bfa13738 misspells
cpuidle_idle_call() in ARM and SH code.  Fix this to be consistent.

Cc: Kevin Hilman <khilman@deeprootsystems.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: x86@kernel.org
Cc: Len Brown <len.brown@intel.com>
Signed-off-by: David Brown <davidb@codeaurora.org>
[ Also done by Mark Brown - the bug has been around forever, and was
  noticed in -next, but the idle tree never picked it up. Bad bad bad ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-08-04 16:35:34 -10:00
Len Brown
a0bfa13738 cpuidle: stop depending on pm_idle
cpuidle users should call cpuidle_call_idle() directly
rather than via the (pm_idle)() function pointer.

Architectures may choose to continue using (pm_idle)(),
but cpuidle need not depend on it:

  my_arch_cpu_idle()
	...
	if(cpuidle_call_idle())
		pm_idle();

cc: Kevin Hilman <khilman@deeprootsystems.com>
cc: Paul Mundt <lethal@linux-sh.org>
cc: x86@kernel.org
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
2011-08-03 19:06:37 -04:00
Catalin Marinas
2e82669acf ARM: 6867/1: Introduce THREAD_NOTIFY_COPY for copy_thread() hooks
This patch adds THREAD_NOTIFY_COPY for calling registered handlers
during the copy_thread() function call. It also changes the VFP handler
to use a switch statement rather than if..else, and to ignore this new event.
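
The VFP handler ends up shaped roughly like this (sketch only; the event
names other than THREAD_NOTIFY_COPY are recalled from the thread-notify
API, and the actual state handling is elided):

static int vfp_notifier(struct notifier_block *self, unsigned long cmd, void *t)
{
        switch (cmd) {
        case THREAD_NOTIFY_SWITCH:
                /* save/restore VFP state across the context switch */
                break;
        case THREAD_NOTIFY_FLUSH:
        case THREAD_NOTIFY_EXIT:
                /* reinitialise or release the thread's VFP state */
                break;
        default:
                /* THREAD_NOTIFY_COPY: nothing to do for VFP, ignore it */
                break;
        }
        return NOTIFY_DONE;
}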

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-04-10 21:13:36 +01:00
Will Deacon
6cde6d4217 ARM: 6619/1: nommu: avoid mapping vectors page when !CONFIG_MMU
When running without an MMU, we do not need to install a mapping for the
vectors page. Attempting to do so causes a compile-time error because
install_special_mapping is not defined.

This patch adds compile-time guards to the vector mapping functions
so that we can build nommu configurations once more.

Acked-by: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-01-11 17:32:24 +00:00
Russell King
809b4e00ba Merge branch 'devel-stable' into devel 2010-10-19 22:06:36 +01:00
Russell King
23beab76b4 Merge branches 'at91', 'dcache', 'ftrace', 'hwbpt', 'misc', 'mmci', 's3c', 'st-ux' and 'unwind' into devel 2010-10-18 22:34:25 +01:00
Kevin Hilman
c7b0aff44a ARM: 6428/1: add cpu_idle_wait() to support CPUidle on SMP systems.
In order for CPUidle to work on SMP systems, an implementation of
cpu_idle_wait() is needed.

This patch duplicates the x86 implementation of cpu_idle_wait() for
ARM.
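
The x86 version being duplicated is essentially the following (as it
looked at the time, quoted from memory, so treat it as a sketch):

static void do_nothing(void *unused)
{
}

void cpu_idle_wait(void)
{
        smp_mb();
        /* kick all the CPUs so that they exit out of the old pm_idle */
        smp_call_function(do_nothing, NULL, 1);
}
EXPORT_SYMBOL_GPL(cpu_idle_wait);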

Tested-by: Colin Cross <ccross@android.com>
Signed-off-by: Kevin Hilman <khilman@deeprootsystems.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-10-08 10:02:24 +01:00
Nicolas Pitre
ec706dab29 ARM: add a vma entry for the user accessible vector page
The kernel makes the high vector page visible to user space. This page
contains (amongst other things) small code segments that can be executed in
user space.  Make this page visible through ptrace and /proc/<pid>/mem
in order to let gdb perform code parsing needed for proper unwinding.

For example, the ERESTART_RESTARTBLOCK handler actually has a stack
frame -- it returns to a PC value stored on the user's stack.   To
unwind after a "sleep" system call was interrupted twice, GDB would
have to recognize this situation and understand that stack frame
layout -- which it currently cannot do.

We could fix this by hard-coding addresses in the vector page range into
GDB, but that isn't really portable as not all of those addresses are
guaranteed to remain stable across kernel releases.  And having the gdb
process make an exception for this page and fetch its content from its own
address space looks strange, and it is not future proof either.

Being located above PAGE_OFFSET, this vma cannot be deleted by
user space code.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
2010-10-01 22:35:19 -04:00
Will Deacon
864232fa1a ARM: 6357/1: hw-breakpoint: add new ptrace requests for hw-breakpoint interaction
For debuggers to take advantage of the hw-breakpoint framework in the kernel,
it is necessary to expose the API calls via a ptrace interface.

This patch exposes the hardware breakpoints framework as a collection of
virtual registers, accessible using PTRACE_SETHBPREGS and PTRACE_GETHBPREGS
requests. The breakpoints are stored in the debug_info struct of the running
thread.
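
From the debugger side this looks roughly like the call below; the
virtual register index encoding is defined by the ARM ptrace ABI (the
request constant comes from the ARM kernel headers), and the index used
here is only a placeholder:

#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>

static int read_hbp_reg(pid_t pid)
{
        unsigned long val = 0;

        /* virtual register 0 is used as a placeholder index here */
        if (ptrace(PTRACE_GETHBPREGS, pid, (void *)0L, &val) < 0)
                return -1;
        printf("hw-breakpoint virtual register 0: %#lx\n", val);
        return 0;
}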

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: S. Karthikeyan <informkarthik@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-09-08 10:05:00 +01:00
Russell King
7b70c4275f Merge branch 'devel-stable' into devel
Conflicts:
	arch/arm/kernel/entry-armv.S
	arch/arm/kernel/setup.c
	arch/arm/mm/init.c
2010-07-31 14:20:16 +01:00
Russell King
3d3f78d752 ARM: call machine_shutdown() from machine_halt(), etc
x86 calls machine_shutdown() from the various machine_*() calls which
take the machine down ready for halting, restarting, etc, and uses
this to bring the system safely to a point where those actions can be
performed.  Such actions include stopping the secondary CPUs.

So, change the ARM implementation of these to reflect what x86 does.

This solves kexec problems on ARM SMP platforms, where the secondary
CPUs were left running across the kexec call.
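
So the halt/power-off/restart entry points all gain the same preamble,
e.g. (sketch):

void machine_halt(void)
{
        machine_shutdown();     /* e.g. stop the secondary CPUs first */
        while (1)
                ;               /* spin with the machine quiesced */
}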

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-07-27 10:48:43 +01:00
Russell King
9ca03a21e3 ARM: Factor out common code from cpu_proc_fin()
All implementations of cpu_proc_fin() start by disabling interrupts
and then flushing the caches.  Rather than have every processor's proc_fin()
implementation do this, move it out into generic code - and move the
cache flush past setup_mm_for_reboot() (so it can benefit from having
caches still enabled.)

This allows cpu_proc_fin() to become independent of the L1/L2 cache
types, and eventually move the L2 cache flushing into the L2 support
code.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-07-27 10:48:42 +01:00
Russell King
14764b01a5 Merge git://git.kernel.org/pub/scm/linux/kernel/git/nico/orion into devel-stable 2010-07-21 09:22:45 +01:00
Russell King
ac78884e6d ARM: lockdep: fix unannotated irqs-on
CPU: Testing write buffer coherency: ok
------------[ cut here ]------------
WARNING: at kernel/lockdep.c:3145 check_flags+0xcc/0x1dc()
Modules linked in:
[<c0035120>] (unwind_backtrace+0x0/0xf8) from [<c0355374>] (dump_stack+0x20/0x24)
[<c0355374>] (dump_stack+0x20/0x24) from [<c0060c04>] (warn_slowpath_common+0x58/0x70)
[<c0060c04>] (warn_slowpath_common+0x58/0x70) from [<c0060c3c>] (warn_slowpath_null+0x20/0x24)
[<c0060c3c>] (warn_slowpath_null+0x20/0x24) from [<c008f224>] (check_flags+0xcc/0x1dc)
[<c008f224>] (check_flags+0xcc/0x1dc) from [<c00945dc>] (lock_acquire+0x50/0x140)
[<c00945dc>] (lock_acquire+0x50/0x140) from [<c0358434>] (_raw_spin_lock+0x50/0x88)
[<c0358434>] (_raw_spin_lock+0x50/0x88) from [<c00fd114>] (set_task_comm+0x2c/0x60)
[<c00fd114>] (set_task_comm+0x2c/0x60) from [<c007e184>] (kthreadd+0x30/0x108)
[<c007e184>] (kthreadd+0x30/0x108) from [<c0030104>] (kernel_thread_exit+0x0/0x8)
---[ end trace 1b75b31a2719ed1c ]---
possible reason: unannotated irqs-on.
irq event stamp: 3
hardirqs last  enabled at (2): [<c0059bb0>] finish_task_switch+0x48/0xb0
hardirqs last disabled at (3): [<c002f0b0>] ret_slow_syscall+0xc/0x1c
softirqs last  enabled at (0): [<c005f3e0>] copy_process+0x394/0xe5c
softirqs last disabled at (0): [<(null)>] (null)

Fix this by ensuring that the lockdep interrupt state is manipulated in
the appropriate places.  We essentially treat userspace as an entirely
separate environment which isn't relevant to lockdep (lockdep doesn't
monitor userspace.)  We don't tell lockdep that IRQs will be enabled
in that environment.

Instead, when creating kernel threads (which is a rare event compared
to entering/leaving userspace) we have to update the lockdep state.  Do
this by starting threads with IRQs disabled, and in the kthread helper,
tell lockdep that IRQs are enabled, and enable them.

This provides lockdep with a consistent view of the current IRQ state
in kernel space.

This also reverts portions of 0d928b0b61
which didn't fix the problem.

Tested-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-07-10 10:53:13 +01:00
Nicolas Pitre
c743f38013 ARM: initial stack protector (-fstack-protector) support
This is the very basic stuff, without changing the canary upon
task switch yet.  Just the Kconfig option and a constant canary
value initialized at boot time.
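
The boot-time part amounts to something like the following (sketch based
on the usual stackprotector pattern; on ARM a single global
__stack_chk_guard is what the compiler-generated prologues read):

unsigned long __stack_chk_guard;        /* read by -fstack-protector code */

static __always_inline void boot_init_stack_canary(void)
{
        unsigned long canary;

        /* try to get a semi-random initial value */
        get_random_bytes(&canary, sizeof(canary));
        canary ^= LINUX_VERSION_CODE;

        current->stack_canary = canary;
        __stack_chk_guard = current->stack_canary;
}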

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
2010-06-14 21:31:00 -04:00
Nicolas Pitre
990cb8acf2 [ARM] implement arch_randomize_brk()
For this feature to take effect, CONFIG_COMPAT_BRK must be turned
off.  This can safely be turned off for any EABI user space versions.
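
A sketch of what such an implementation typically looks like (the 32MB
randomisation window is illustrative):

unsigned long arch_randomize_brk(struct mm_struct *mm)
{
        unsigned long range_end = mm->brk + 0x02000000;

        return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
}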

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
2010-06-14 21:22:11 -04:00
Russell King
4260415f6a ARM: fix build error in arch/arm/kernel/process.c
/tmp/ccJ3ssZW.s: Assembler messages:
/tmp/ccJ3ssZW.s:1952: Error: can't resolve `.text' {.text section} - `.LFB1077'

This is caused because:

	.section .data
	.section .text
	.section .text
	.previous

does not return us to the .text section, but the .data section; this
makes use of .previous dangerous if the ordering of previous sections
is not known.

Fix up the other users of .previous; .pushsection and .popsection are
a safer pairing to use than .section and .previous.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-04-21 08:45:21 +01:00
Tejun Heo
5a0e3ad6af include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files.  percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.

percpu.h -> slab.h dependency is about to be removed.  Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability.  As this conversion
needs to touch large number of source files, the following script is
used as the basis of conversion.

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:

* Scan files for gfp and slab usages and update includes such that
  only the necessary includes are there.  ie. if only gfp is used,
  gfp.h, if slab is used, slab.h.

* When the script inserts a new include, it looks at the include
  blocks and tries to put the new include such that its order conforms
  to its surrounding.  It's put in the include block which contains
  core kernel includes, in the same order that the rest are ordered -
  alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
  doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly
  because the file doesn't have a fitting include block), it prints out
  an error message indicating which .h file needs to be added to the
  file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly
   over 4000 files, deleting around 700 includes and adding ~480 gfp.h
   and ~3000 slab.h inclusions.  The script emitted errors for ~400
   files.

2. Each error was manually checked.  Some didn't need the inclusion,
   some needed manual addition while adding it to implementation .h or
   embedding .c file was more appropriate for others.  This step added
   inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
   from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed.
   e.g. lib/decompress_*.c used malloc/free() wrappers around slab
   APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically
   editing them as sprinkling gfp.h and slab.h inclusions around .h
   files could easily lead to inclusion dependency hell.  Most gfp.h
   inclusion directives were ignored as stuff from gfp.h was usually
  widely available and often used in preprocessor macros.  Each
   slab.h inclusion directive was examined and added manually as
   necessary.

6. percpu.h was updated not to include slab.h.

7. Build test were done on the following configurations and failures
   were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
   distributed build env didn't work with gcov compiles) and a few
   more options had to be turned off depending on archs to make things
   build (like ipr on powerpc/64 which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as
   a separate patch and serve as bisection point.

Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-30 22:02:32 +09:00
Rabin Vincent
22325525d8 ARM: 5868/1: ARM: fix "BUG: using smp_processor_id() in preemptible code"
Fix the following warning, which appears when the register dump for a
faulting process is printed in a kernel with SMP, DEBUG_PREEMPT, and
DEBUG_USER (with user_debug=31) enabled:

BUG: using smp_processor_id() in preemptible [00000000] code: init/1
caller is __show_regs+0x18/0x234
Backtrace:
[<c0159e5c>] (dump_backtrace+0x0/0x114) from [<c01faf30>] (dump_stack+0x18/0x1c)
 r6:c781a000 r5:c0157544 r4:00000001 r3:00000000
[<c01faf18>] (dump_stack+0x0/0x1c) from [<c01e5230>] (debug_smp_processor_id+0xc4/0xf8)
[<c01e516c>] (debug_smp_processor_id+0x0/0xf8) from [<c0157544>] (__show_regs+0x18/0x234)
 r6:c781bfb0 r5:00000000 r4:c781bfb0 r3:00000000
[<c015752c>] (__show_regs+0x0/0x234) from [<c01577a0>] (show_regs+0x40/0x50)
[<c0157760>] (show_regs+0x0/0x50) from [<c015c968>] (__do_user_fault+0x5c/0xa4)
 r4:c781c000 r3:00000000
[<c015c90c>] (__do_user_fault+0x0/0xa4) from [<c015cbe0>] (do_page_fault+0x1b4/0x1e4)
 r7:00000000 r6:00010000 r5:c781bfb0 r4:c781c000
[<c015ca2c>] (do_page_fault+0x0/0x1e4) from [<c01554c8>] (do_DataAbort+0x3c/0xa0)
[<c015548c>] (do_DataAbort+0x0/0xa0) from [<c01560c4>] (ret_from_exception+0x0/0x10)

Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-08 16:14:29 +00:00
Russell King
797245f5da ARM: Convert VFP/Crunch/XscaleCP thread_release() to exit_thread()
This avoids races in the VFP code where the dead thread may have
state on another CPU.  By moving this code to exit_thread(), we
will be running as the thread, and therefore be running on the
current CPU.

This means that we can ensure that the only local state is accessed
in the thread notifiers.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-18 14:53:41 +00:00
Artem Bityutskiy
cde3f86073 ARM: 5759/1: Add register information of threads to coredump
Defines ELF_CORE_COPY_TASK_REGS so that the CPU register information
of every thread is included in the coredump. Without this, only the faulting
thread is coredumped.
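
The hookup is small; a sketch (helper simplified):

/* asm/elf.h */
#define ELF_CORE_COPY_TASK_REGS(tsk, elf_regs) dump_task_regs(tsk, elf_regs)

/* arch/arm/kernel/process.c */
int dump_task_regs(struct task_struct *t, elf_gregset_t *elfregs)
{
        elf_core_copy_regs(elfregs, task_pt_regs(t));
        return 1;       /* non-zero: registers were provided */
}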

Cc: Roger Quadros <ext-roger.quadros@nokia.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Mika Westerberg <mika.westerberg@iki.fi>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-14 10:33:05 +01:00
Catalin Marinas
b86040a59f Thumb-2: Implementation of the unified start-up and exceptions code
This patch implements the ARM/Thumb-2 unified kernel start-up and
exception handling code.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-07-24 12:32:54 +01:00
Russell King
9ccdac3662 [ARM] idle: clean up pm_idle calling, obey hlt_counter
pm_idle is used by infrastructure (eg, cpuidle) which expects architectures
to call it in a certain way.  Arrange for ARM to follow x86's lead on this
and call pm_idle() with interrupts already disabled.  However, we expect
pm_idle() to enable interrupts before it returns.

Also, OMAP wants to be able to disable hlt-ing, so allow hlt_counter to
prevent all calls to pm_idle.
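
A sketch of the resulting arrangement in the idle loop (scheduling
details elided):

while (!need_resched()) {
        local_irq_disable();
        if (hlt_counter) {
                /* hlt-ing disabled (e.g. by OMAP): skip pm_idle entirely */
                local_irq_enable();
                cpu_relax();
        } else {
                /* pm_idle() is entered with IRQs off and is expected
                 * to re-enable them before returning */
                pm_idle();
        }
}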

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-06-22 22:34:55 +01:00
Catalin Marinas
feb97c3644 [ARM] 5559/1: Limit the stack unwinding caused by a kthread exit
When a kthread function returns, it branches to do_exit(). However, the
unwinding information isn't valid anymore and any stack trace caused by
do_exit() may be incorrect. This patch adds a kernel_thread_exit()
function, annotated with '.cantunwind', so that the unwinder stops
when reaching it.

Tested-by: Tony Lindgren <tony@atomide.com>

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-06-19 16:44:23 +01:00
Catalin Marinas
26584853a4 Add core support for ARMv6/v7 big-endian
Starting with ARMv6, the CPUs support the BE-8 variant of big-endian
(byte-invariant). This patch adds the core support:

- setting of the BE-8 mode via the CPSR.E register for both kernel and
  user threads
- big-endian page table walking
- REV used to rotate instructions read from memory during fault
  processing as they are still in little-endian format
- Kconfig and Makefile support for BE-8. The --be8 option must be passed
  to the final linking stage to convert the instructions to
  little-endian

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-05-30 14:00:18 +01:00
Alexey Dobriyan
6f2c55b843 Simplify copy_thread()
First argument unused since 2.3.11.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-04-02 19:04:51 -07:00
Russell King
be093beb60 [ARM] pass reboot command line to arch_reset()
OMAP wishes to pass state to the boot loader upon reboot in order to
instruct it whether to wait for USB-based reflashing or not.  There is
already a facility to do this via the reboot() syscall, except we ignore
the string passed to machine_restart().

This patch fixes things to pass this string to arch_reset().  This means
that we keep the reboot mode limited to telling the kernel _how_ to
perform the reboot which should be independent of what we request the
boot loader to do.

Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-03-19 16:20:24 +00:00
Catalin Marinas
2d7c11bfc9 [ARM] 5382/1: unwind: Reorganise the stacktrace support
This patch changes walk_stacktrace() and its callers for easier
integration of stack unwinding. The arch/arm/kernel/stacktrace.h file is
also moved to arch/arm/include/asm/stacktrace.h.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-02-12 13:21:17 +00:00
Russell King
33fa9b1328 [ARM] Convert asm/uaccess.h to linux/uaccess.h
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-09-06 11:35:55 +01:00
Russell King
1de765c1e9 [ARM] remove pc_pointer()
pc_pointer() was a function to mask the PC for 26-bit ARMs, which
we no longer support.  Remove it.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-09-06 10:35:51 +01:00
Russell King
09d9bae064 [ARM] sparse: fix several warnings
arch/arm/kernel/process.c:270:6: warning: symbol 'show_fpregs' was not declared. Should it be static?

This function isn't used, so can be removed.

arch/arm/kernel/setup.c:532:9: warning: symbol 'len' shadows an earlier one
arch/arm/kernel/setup.c:524:6: originally declared here

A function containing two 'len's.

arch/arm/mm/fault-armv.c:188:13: warning: symbol 'check_writebuffer_bugs' was not declared. Should it be static?
arch/arm/mm/mmap.c:122:5: warning: symbol 'valid_phys_addr_range' was not declared. Should it be static?
arch/arm/mm/mmap.c:137:5: warning: symbol 'valid_mmap_phys_addr_range' was not declared. Should it be static?

Missing includes.

arch/arm/kernel/traps.c:71:77: warning: Using plain integer as NULL pointer
arch/arm/mm/ioremap.c:355:46: error: incompatible types in comparison expression (different address spaces)

Sillies.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-09-05 14:11:24 +01:00
Russell King
a09e64fbc0 [ARM] Move include/asm-arm/arch-* to arch/arm/*/include/mach
This just leaves include/asm-arm/plat-* to deal with.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-08-07 09:55:48 +01:00
Ingo Molnar
9b610fda0d Merge branch 'linus' into timers/nohz 2008-07-18 19:53:16 +02:00
Thomas Gleixner
b8f8c3cf0a nohz: prevent tick stop outside of the idle loop
Jack Ren and Eric Miao tracked down the following long-standing
problem in the NOHZ code:

	scheduler switch to idle task
	enable interrupts

Window starts here

	----> interrupt happens (does not set NEED_RESCHED)
	      	irq_exit() stops the tick

	----> interrupt happens (does set NEED_RESCHED)

	return from schedule()
	
	cpu_idle(): preempt_disable();

Window ends here

The interrupts can happen at any point inside the race window. The
first interrupt stops the tick, the second one causes the scheduler to
rerun and switch away from idle again and we end up with the tick
disabled.

The fact that it needs two interrupts where the first one does not set
NEED_RESCHED and the second one does made the bug obscure and extremely
hard to reproduce and analyse. Kudos to Jack and Eric.

Solution: Limit the NOHZ functionality to the idle loop to make sure
that we can not run into such a situation ever again.

cpu_idle()
{
	preempt_disable();

	while(1) {
		 tick_nohz_stop_sched_tick(1); <- tell NOHZ code that we
		 			          are in the idle loop

		 while (!need_resched())
		       halt();

		 tick_nohz_restart_sched_tick(); <- disables NOHZ mode
		 preempt_enable_no_resched();
		 schedule();
		 preempt_disable();
	}
}

In hindsight we should have done this forever, but ... 

/me grabs a large brown paperbag.

Debugged-by: Jack Ren <jack.ren@marvell.com>, 
Debugged-by: eric miao <eric.y.miao@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-07-18 18:10:28 +02:00