The spin_lock_debug/rcu_cpu_stall detector uses
trigger_all_cpu_backtrace() to dump cpu backtraces.
Therefore it is possible that trigger_all_cpu_backtrace()
could be called at the same time on different CPUs, which
triggers an 'unknown reason NMI' warning. The following case
illustrates the problem:
      CPU1                      CPU2            ...        CPU N
  trigger_all_cpu_backtrace()
  set "backtrace_mask" to cpu mask
          |
  generate NMI interrupts    generate NMI interrupts       ...
          \                          |                      /
           \                         |                     /
The "backtrace_mask" will be cleaned by the first NMI interrupt
at nmi_watchdog_tick(), then the following NMI interrupts
generated by other cpus's arch_trigger_all_cpu_backtrace() will
be taken as unknown reason NMI interrupts.
This patch uses a test_and_set to avoid the problem, and stops
arch_trigger_all_cpu_backtrace() from being called again, so that
no duplicate cpu backtrace info is dumped when there is already a
trigger_all_cpu_backtrace() in progress.
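A minimal sketch of the approach (the flag name backtrace_flag is
illustrative; the real code lives in arch/x86/kernel/apic/hw_nmi.c):

static unsigned long backtrace_flag;

void arch_trigger_all_cpu_backtrace(void)
{
	/* Only one backtrace dump at a time; concurrent callers simply
	 * return instead of re-arming backtrace_mask and sending NMIs. */
	if (test_and_set_bit(0, &backtrace_flag))
		return;

	/* set "backtrace_mask", send the NMI IPIs, wait for the dumps */

	clear_bit(0, &backtrace_flag);
	smp_mb__after_clear_bit();
}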
Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com>
Reviewed-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Cc: fweisbec@gmail.com
LKML-Reference: <1294198689-15447-2-git-send-email-dzickus@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Don Zickus <dzickus@redhat.com>
There are some paths that walk the die_chain with preemption on.
Make sure we are in an NMI call before we start doing anything.
This was triggered by do_general_protection calling notify_die
with DIE_GPF.
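A minimal sketch of the guard, with an illustrative handler name:

static int nmi_die_notifier(struct notifier_block *self,
			    unsigned long val, void *data)
{
	/* The die chain is also walked from non-NMI context, e.g. when
	 * do_general_protection calls notify_die with DIE_GPF, so bail
	 * out early unless this notification really is an NMI. */
	if (val != DIE_NMI)
		return NOTIFY_DONE;

	/* NMI handling proper goes here */
	return NOTIFY_STOP;
}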
Reported-by: Jan Kiszka <jan.kiszka@web.de>
Signed-off-by: Don Zickus <dzickus@redhat.com>
LKML-Reference: <1294198689-15447-1-git-send-email-dzickus@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
wait_for_completion_*_timeout() can return:
0: if the wait timed out
-ve: if the wait was interrupted
+ve: if the completion was completed.
As they currently return an 'unsigned long', the last two cases
are not easily distinguished, which can easily result in buggy
code, as is the case for the recently added
wait_for_completion_interruptible_timeout() call in
net/sunrpc/cache.c.
So change them both to return 'long'. As MAX_SCHEDULE_TIMEOUT
is LONG_MAX, a large +ve return value should never overflow.
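A caller can then distinguish all three cases, for example:

	long ret;

	ret = wait_for_completion_interruptible_timeout(&done, HZ);
	if (ret == 0)
		return -ETIMEDOUT;	/* timed out */
	if (ret < 0)
		return ret;		/* interrupted, -ERESTARTSYS */
	/* completed; ret is the remaining number of jiffies, at least 1 */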
Signed-off-by: NeilBrown <neilb@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: J. Bruce Fields <bfields@fieldses.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <20110105125016.64ccab0e@notabene.brown>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Found one x2apic pre-enabled system where x2apic_mode suddenly gets
corrupted after registering some cpus, when compiled with
CONFIG_NR_CPUS=255 instead of 512.
It turns out that generic_processor_info() ==> physid_set(apicid,
phys_cpu_present_map) causes the problem.
phys_cpu_present_map is sized with MAX_APICS bits, and on the
pre-enabled system some cpus have an apic id > 255.
The variable after phys_cpu_present_map may get corrupted
silently:
ffffffff828e8420 B phys_cpu_present_map
ffffffff828e8440 B apic_verbosity
ffffffff828e8444 B local_apic_timer_c2_ok
ffffffff828e8448 B disable_apic
ffffffff828e844c B x2apic_mode
ffffffff828e8450 B x2apic_disabled
ffffffff828e8454 B num_processors
...
Actually phys_cpu_present_map is referenced via apic id, instead of
cpu index. We should use MAX_LOCAL_APIC instead of MAX_APICS.
For 64-bit it will be 32768 in all cases. BSS will increase by 4k bytes
on 64-bit:
    text     data      bss      dec  filename
21696943  4193748 12787712 38678403  vmlinux.before
21696943  4193748 12791808 38682499  vmlinux.after
No change on 32-bit.
Finally we can remove MAX_APICS, which was rather confusing.
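A sketch of the sizing change (the real definitions live in the x86
mpspec headers):

/* phys_cpu_present_map is indexed by apic id, so it must cover
 * MAX_LOCAL_APIC bits rather than MAX_APICS. */
#define PHYSID_ARRAY_SIZE	BITS_TO_LONGS(MAX_LOCAL_APIC)

struct physid_mask {
	unsigned long mask[PHYSID_ARRAY_SIZE];
};
typedef struct physid_mask physid_mask_t;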
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
LKML-Reference: <4D23BD9C.3070102@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
When the seqfile /proc/cpuinfo gets accessed, loops_per_jiffy gets
recalculated for each possible cpu. However its value is only needed
on first access.
In addition loops_per_jiffy should be recalculated when the machine
reports a capability change.
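A sketch of the intended behaviour, assuming the usual seq_file
iterator where v encodes the cpu number and a helper like
s390_adjust_jiffies() that recomputes loops_per_jiffy:

static int show_cpuinfo(struct seq_file *m, void *v)
{
	unsigned long n = (unsigned long) v - 1;

	/* recalculate loops_per_jiffy only once, for the first entry */
	if (!n)
		s390_adjust_jiffies();
	/* print the per-cpu information for cpu n */
	return 0;
}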
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use get_online_cpus() instead of preempt_disable() to make sure cpus
don't go offline while accessing their per cpu data.
The preempt_disable() stuff is old code which was used before
get_online_cpus() was available.
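The resulting pattern, as a minimal sketch (some_var is a placeholder
for the per cpu variable in question):

	get_online_cpus();	/* instead of preempt_disable() */
	for_each_online_cpu(cpu) {
		/* cpu cannot go offline here, so accessing
		 * per_cpu(some_var, cpu) is safe */
	}
	put_online_cpus();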
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Get rid of messages that indicate if a cpu went online or offline.
There is nothing special about this anymore and these messages might
flood the kernel log buffer which makes debugging harder since more
important messages might be overwritten.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
This enables the spinning mutex feature on s390 by removing
HAVE_DEFAULT_NO_SPIN_MUTEXES from arch/s390/Kconfig.
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The spinning mutex implementation uses cpu_relax() in busy loops as a
compiler barrier. Depending on the architecture, cpu_relax() may do more
than needed in these specific mutex spin loops. On System z we also give
up the time slice of the virtual cpu in cpu_relax(), which prevents
effective spinning on the mutex.
This patch replaces cpu_relax() in the spinning mutex code with
arch_mutex_cpu_relax(), which can be defined by each architecture that
selects HAVE_ARCH_MUTEX_CPU_RELAX. The default is still cpu_relax(), so
this patch should not affect other architectures than System z for now.
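The generic fallback, as a sketch:

#ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX
#define arch_mutex_cpu_relax()	cpu_relax()
#endif

The mutex spin loops then call arch_mutex_cpu_relax() instead of
cpu_relax(), and an architecture selecting HAVE_ARCH_MUTEX_CPU_RELAX
provides its own definition.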
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290437256.7455.4.camel@thinkpad>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
A race condition exists in the ccwgroup device unregistration code
which can cause a kernel panic due to a use-after-free bug. This
race condition might be triggered when all ccw devices associated with
a ccwgroup device are removed at the same time (e.g. because the
corresponding channel path is no longer available).
Fix this race condition by clearing the references from the associated
ccw devices to the ccw group device during unregistration of the
ccw group device.
Signed-off-by: Peter Oberparleiter <peter.oberparleiter@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Call init_idle() which (re-)initializes the idle task structure before
it gets used on a new cpu.
That way we can also get rid of the odd preempt_enable_no_resched()
call we have in the cpu offline path within cpu_idle(). That call
prevented preempt count imbalances between cpu hotplug operations.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Delay idle task creation until a cpu gets set online instead of
creating them for all possible cpus at system startup.
For a one cpu system this should save more than 1 MB.
On my debug system with lots of debug stuff enabled this saves 2 MB.
Same as on x86.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
In case the DASD driver needs to terminate a running I/O the retry counter
is decreased twice.
Remove the unnecessary retry counter decrease in dasd_term_IO.
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Normal I/O operations through the DASD device driver give only access
to the data fields of an ECKD device even for track based I/O.
This patch extends the DASD device driver to give access to whole
ECKD tracks including count, key and data fields.
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The freeze callback may set a stop bit so that a worker thread cannot
start I/O. The discipline specific freeze function waits for the
worker to be completed.
Set the stop_bit after the discipline specific freeze function has
returned and no worker is running.
Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
If a DASD device has been reserved by a Linux system, and later
this reservation is ‘stolen’ by a second system by means of an
unconditional reserve, then the first system receives a
notification about this fact. With this patch such an event can
be either ignored, as before, or it can be used to let the device
fail all I/O requests, so that the device will not block anymore.
Signed-off-by: Stefan Weinhuber <wein@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
When a new path is added at runtime, the CIO layer will call the drivers
path_event callback. The DASD device driver uses this callback to trigger
a path verification for the new path. The driver will use only those
paths for I/O, which have been successfully verified.
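A sketch of the hookup (callback names follow the description above
and may differ in detail):

static struct ccw_driver dasd_eckd_driver = {
	/* other callbacks omitted */
	.path_event	= dasd_generic_path_event,	/* triggers path verification */
};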
Signed-off-by: Stefan Weinhuber <wein@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Some storage systems support multitrack High Performance FICON
requests, which read or write data to more than one track.
This patch enables the DASD device driver to generate multitrack
High Performance FICON requests.
Signed-off-by: Stefan Weinhuber <wein@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Any list of indirect data addresses (TIDAL) used by a TCW must not
cross a page boundary. The original itcw implementation complies with
this restriction by allocating almost twice as much memory as
actually needed, so that in any case there is enough room for the full
TIDAL, either above or below the page boundary.
This patch implements an alternative method, by using a TTIC TIDAW to
connect TIDAL parts below and above a page boundary.
Signed-off-by: Stefan Weinhuber <wein@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Until now machine checks for the swapper process of the IPL cpu are just
implicitly (and more or less accidentally) enabled when the idle process
goes into idle state for the first time and loads an enabled wait psw.
Before that machine checks are disabled.
So let's enable them explicitly in trap_init() so we have a well-defined
point in time when machine checks are enabled.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The used buffers counter is not incremented in case of an error so
the counter can become negative. Increment the used buffers counter
before checking for errors.
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Update the subchannel descriptor if we receive a
"Installed parameters modified" crw.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Make the code in the 31 bit entry.S as similar as possible to the
64 bit version in entry64.S. That makes it easier to add new code to
the first level interrupt handler that affects both 31 and 64 bit kernels.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support to determine the number of 64K-byte blocks that all paths
to a device at least support for a transport command.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
arch_needs_cpu() always gets executed on the current cpu. Therefore
the cpu parameter can be ignored and it is possible to use __get_cpu_var()
instead of per_cpu() to access the per cpu variable, which will
generate better code.
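A sketch of the change, assuming the s390_idle per cpu data that
arch_needs_cpu() reads:

int arch_needs_cpu(unsigned int cpu)
{
	/* always called on the current cpu, so the parameter is unused */
	return __get_cpu_var(s390_idle).nohz_delay != 0;
}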
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Simplify the SIGA sync code and add unlikely annotations. In polling mode
SBALs may be accessed without interrupt, so call SIGA sync before every scan.
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
HiperSocket devices only use one SBAL per qdio call without the enhanced SIGA
feature. Since that feature is currently not used remove it from the qdio code
so the compiler can generate better code for the HiperSocket outbound path.
While at it mark the SIGA error conditions as unlikely.
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
If QIOASSIST is enabled for a qdio device the SIGA instruction requires
a modified function code. This function code modifier was missing for
SIGA-R and SIGA-S which can lead to a kernel panic caused by an
operand exception.
Cc: stable@kernel.org
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add a counter for outbound queue full events to the qdio statistics.
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Introduce a scan threshold for the qdio outbound queues. By setting the
threshold the driver can tell qdio after how many used SBALs qdio
should schedule the outbound tasklet that scans the queue for finished
SBALs. The threshold is specified by the drivers because a
Hipersockets device is much faster in utilizing outbound buffers than a
ZFCP or OSA device.
The default values after how many used SBALs the tasklet should run are:
OSA: > 31 SBALs
Hipersockets: > 7 SBALs
zfcp: > 55 SBALs
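A sketch of how a driver passes the threshold, assuming a new
scan_threshold field in struct qdio_initialize:

	struct qdio_initialize init_data = {
		/* other initialization fields omitted */
		.scan_threshold = 55,	/* zfcp: scan after > 55 used SBALs */
	};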
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
If the shared indicator is used the following race leads to
an inbound stall:
Device CPU0 CPU1
========================================================
non-shared DSCI =>1
ALSI => 1
Thin INT
ALSI => 0
non-shared DSCI
tasklets scheduled
shared DSCI => 1
ALSI => 1
shared DSCI => 0
ALSI ? -> set
Thin INT
ALSI => 0
ALSI was set,
shared DSCI => 1
After that no more interrupts occur because the DSCI is still set.
Fix that race by only resetting the shared DSCI if it was actually
set, so that the tasklets for all shared devices are scheduled and will
run after the interrupt.
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Also process 4096 bit RSA keys in CRT format. Handle them like the
smaller keys and take care of the zero padding.
Signed-off-by: Felix Beck <felix.beck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The CCA on the crypto adapter has a restriction on the size of the
exponent if a key with a modulus bigger than 2048 bit is used. Thus
in that case we have to avoid the crypto device driver thinking the
adapter is defective and setting it offline. Therefore a new member for
the zcrypt_device struct called max_exp_bit_length is introduced. This
will be set the first time the CCA returns the error code 'function
not implemented'. If this happens with an adapter twice it will return
-EINVAL.
Signed-off-by: Felix Beck <felix.beck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Definitions for CEX3 card types are changed to support 4096 bit RSA
keys in the coprocessor.
Signed-off-by: Felix Beck <felix.beck@de.ibm.com>
Signed-off-by: Ralph Wuerthner <ralph.wuerthner@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Definitions for CEX3 card types are changed to support 4096 bit RSA
keys. Also new structs for the accelerator mode are needed.
Additionally, when checking the length of key parts, the case for bigger
(4096 bit) keys is needed.
Signed-off-by: Felix Beck <felix.beck@de.ibm.com>
Signed-off-by: Ralph Wuerthner <ralph.wuerthner@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Implemented an asm function in the ap bus and made it accessible to the
card specific parts of the zcrypt driver. Thus when a CEX3A is recognized
a check can be performed to determine whether the card supports 4096 bit
RSA keys.
Signed-off-by: Felix Beck <felix.beck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Currently the buffer for diagnose data is allocated in the open function
of the debugfs file and is released in the close function. This has the
drawback that a user (root) can pin that memory by not closing the file.
This patch moves the buffer allocation to the read function. The buffer is
automatically released after the buffer is copied to userspace.
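A minimal sketch of the pattern, with illustrative helper names
(diag_collect, diag_data_size, diag_free are placeholders):

static ssize_t dbfs_read(struct file *file, char __user *buf,
			 size_t size, loff_t *ppos)
{
	void *data;
	ssize_t rc;

	data = diag_collect();		/* allocate and fill the buffer */
	if (IS_ERR(data))
		return PTR_ERR(data);
	rc = simple_read_from_buffer(buf, size, ppos, data,
				     diag_data_size(data));
	diag_free(data);		/* released right after the copy */
	return rc;
}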
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Get rid of register/unregister_early_external_interrupt() and clean up
the code while at it.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use register_external_interrupt() instead of register_early_external_interrupt().
The early variant is not necessary since kmalloc works already.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use an early init call to initialize pfault. That way it is possible to
use the register_external_interrupt() instead of the early variant.
No need to enable pfault any earlier since it only has an effect if user
space processes are running.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for AP Bus I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Holger Dengler <hd@linux.vnet.ibm.com>
Signed-off-by: Felix Beck <felix.beck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for CTC I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for CLAW I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add support for LCS I/O interrupt statistics in /proc/interrupts.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>