Commit Graph

71981 Commits

Christoph Hellwig
2596110a39 exportfs: add new methods
Add the guts for the new filesystem API to exportfs.

There's now an fh_to_dentry method that returns the dentry for the object
identified by a filehandle fragment, and an fh_to_parent operation that
returns the dentry for the parent directory when the file handle encodes it.

There are default implementations for these methods that take only a callback
for an nfs-enhanced iget variant and implement the rest of the semantics.
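
For illustration, the new methods and the generic helper have roughly this
shape (a sketch, not the literal merged code):

    struct export_operations {
            /* ... */
            struct dentry *(*fh_to_dentry)(struct super_block *sb,
                            struct fid *fid, int fh_len, int fh_type);
            struct dentry *(*fh_to_parent)(struct super_block *sb,
                            struct fid *fid, int fh_len, int fh_type);
    };

    /* default implementation: the filesystem only supplies an nfs-aware iget */
    struct dentry *generic_fh_to_dentry(struct super_block *sb,
                    struct fid *fid, int fh_len, int fh_type,
                    struct inode *(*get_inode)(struct super_block *sb,
                                               u64 ino, u32 gen));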

Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Neil Brown <neilb@suse.de>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: <linux-ext4@vger.kernel.org>
Cc: Dave Kleikamp <shaggy@austin.ibm.com>
Cc: Anton Altaparmakov <aia21@cantab.net>
Cc: David Chinner <dgc@sgi.com>
Cc: Timothy Shimmin <tes@sgi.com>
Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Chris Mason <mason@suse.com>
Cc: Jeff Mahoney <jeffm@suse.com>
Cc: "Vladimir V. Saveliev" <vs@namesys.com>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Mark Fasheh <mark.fasheh@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:19 -07:00
Christoph Hellwig
6e91ea2bb0 exportfs: add fid type
This patchset is a medium-scale rewrite of the export operations interface.
The goal is to make the interface less complex and easier to understand from
the filesystem side, as well as to prepare generic support for exporting
64bit inode numbers.

This touches all nfs-exporting filesystems, and I've done testing on all of
the filesystems I have here locally (xfs, ext2, ext3, reiserfs, jfs).

This patch:

Add a structured fid type so that we don't have to pass an array of u32 values
around everywhere.  It's a union of possible layouts.

As a start there's only the u32 array and the traditional 32bit inode format,
but there will be more in one of my next patchsets when I start to document
the various filehandle formats we have in low-level filesystems better.

Also add an enum that gives the various filehandle types human-readable
names.

Note: Some people might think the struct containing an anonymous union is
ugly, but I didn't want to pass around a raw union type.
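
Roughly, the structure and enum look like this (sketch; the size of raw[]
is illustrative):

    enum fid_type {
            FILEID_ROOT = 0,
            FILEID_INO32_GEN = 1,           /* 32bit inode + generation */
            FILEID_INO32_GEN_PARENT = 2,    /* as above, plus parent dir */
    };

    struct fid {
            union {
                    struct {
                            u32 ino;
                            u32 gen;
                            u32 parent_ino;
                            u32 parent_gen;
                    } i32;
                    u32 raw[6];
            };
    };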

Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Neil Brown <neilb@suse.de>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: <linux-ext4@vger.kernel.org>
Cc: Dave Kleikamp <shaggy@austin.ibm.com>
Cc: Anton Altaparmakov <aia21@cantab.net>
Cc: David Chinner <dgc@sgi.com>
Cc: Timothy Shimmin <tes@sgi.com>
Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Chris Mason <mason@suse.com>
Cc: Jeff Mahoney <jeffm@suse.com>
Cc: "Vladimir V. Saveliev" <vs@namesys.com>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Mark Fasheh <mark.fasheh@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:19 -07:00
Bernhard Walle
00bf4098be kexec: add BSS to resource tree
Add the BSS to the resource tree just as kernel text and kernel data are in
the resource tree.  The main reason behind this is to avoid crashkernel
reservation in that area.

While it's not strictly necessary to have the BSS in the resource tree (the
actual collision detection is done in the reserve_bootmem() function
beforehand), the BSS resource should be presented to the user in /proc/iomem
just as Kernel data and Kernel code are.

Note: The patch currently is only implemented for x86 and ia64 (because
efi_initialize_iomem_resources() has the same signature on i386 and ia64).
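
The registration follows the same pattern already used for kernel text and
data; a minimal sketch for x86 (symbol names as used there):

    static struct resource bss_resource = {
            .name   = "Kernel bss",
            .flags  = IORESOURCE_BUSY | IORESOURCE_MEM,
    };

    /* at setup time, next to the code/data resources */
    bss_resource.start = virt_to_phys(&__bss_start);
    bss_resource.end   = virt_to_phys(&__bss_stop) - 1;
    request_resource(&iomem_resource, &bss_resource);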

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Bernhard Walle <bwalle@suse.de>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Cc: <linux-arch@vger.kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:19 -07:00
FUJITA Tomonori
c03ab37cbe intel-iommu sg chaining support
x86_64 defines ARCH_HAS_SG_CHAIN, so if IOMMU implementations don't
support sg chaining, we will get data corruption.
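
With chained scatterlists the map/unmap paths must walk the list with the
chain-aware iterator instead of indexing a flat array; a sketch (the mapping
helper is illustrative):

    struct scatterlist *sg;
    int i;

    /* sglist[i] walks off the end of a chained list; use for_each_sg() */
    for_each_sg(sglist, sg, nelems, i) {
            sg->dma_address = iommu_map_segment(dev, sg);
            sg->dma_length  = sg->length;
    }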

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:19 -07:00
Keshavamurthy, Anil S
358dd8ac53 intel-iommu: fix for IOMMU early crash
pci_dev->sysdata is highly overloaded, and the IOMMU is currently broken
because the IOMMU code depends on this field.

This patch introduces a new field in pci_dev's dev.archdata struct to hold
per-device IOMMU private data.
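
Schematically (exact layout is arch-specific):

    struct dev_archdata {
    #ifdef CONFIG_DMAR
            void *iommu;            /* IOMMU-private, per-device data */
    #endif
    };

    /* the IOMMU driver now uses this instead of overloading pdev->sysdata */
    pdev->dev.archdata.iommu = info;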

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Greg KH <greg@kroah.com>
Cc: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:19 -07:00
Keshavamurthy, Anil S
f76aec76ec intel-iommu: optimize sg map/unmap calls
This patch adds PageSelectiveInvalidation support, replacing the existing
DomainSelectiveInvalidation for intel_{map/unmap}_sg() calls, and also
enables mapping one big contiguous DMA virtual address range to
discontiguous physical addresses for SG map/unmap calls.

"Domain selective invalidations" wipe out the IOMMU address translation
cache based on domain ID, whereas "page selective invalidations" wipe out
the IOMMU address translation cache only for the given address/mask range,
which is more cache friendly than domain selective invalidations.

Here is how it is done.
1) Changes to iova.c
alloc_iova() now takes a bool size_aligned argument which,
when set, returns an IO virtual address that is
naturally aligned to 2^x, where x is the order
of the size requested.

Returning this IO virtual address naturally
aligned helps the IOMMU do the "page selective
invalidations", which are IOMMU cache friendly
compared to "domain selective invalidations"
(see the sketch after this list).

2) Changes to drivers/pci/intel-iommu.c
Clean up the intel_{map/unmap}_{single/sg}() calls so that
the SG map/unmap calls no longer depend on
intel_{map/unmap}_single().

intel_map_sg() now computes the total DMA virtual address space
required, allocates a size-aligned contiguous DMA virtual address
range of that size, and maps the discontiguous physical addresses
into the allocated contiguous DMA virtual address range.

In the intel_unmap_sg() case, since the DMA virtual address range
is contiguous and size-aligned, PageSelectiveInvalidation
is used, replacing the earlier DomainSelectiveInvalidation.
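
The size alignment is what lets one page-selective invalidation cover the
whole mapping; schematically (surrounding variables are illustrative):

    struct iova *iova;
    unsigned long nrpages = 1UL << get_order(size); /* 2^order pages */

    /* size_aligned=1: the range starts on a 2^order page boundary */
    iova = alloc_iova(&domain->iovad, nrpages, end_pfn, 1);

    /* on unmap, a single page-selective invalidation with base iova->pfn_lo
     * and address mask equal to the order now flushes the whole range */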

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Greg KH <greg@kroah.com>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Suresh B <suresh.b.siddha@intel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Arjan van de Ven <arjan@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:19 -07:00
Keshavamurthy, Anil S
49a0429e53 Intel IOMMU: Iommu floppy workaround
This config option (DMAR_FLPY_WA) sets up a 1:1 mapping for the floppy device
so that the floppy device, which does not use the DMA APIs, will continue to
work.

Once the floppy driver starts using the DMA APIs, this config option can be
turned off, or this patch can be yanked out of the kernel at that time.

[akpm@linux-foundation.org: cleanups, rename things, build fix]
[jengelh@computergmbh.de: Kconfig fixes]
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Jan Engelhardt <jengelh@gmx.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:19 -07:00
Keshavamurthy, Anil S
e820482cd2 Intel IOMMU: Iommu Gfx workaround
Once we fix all the open-source gfx drivers to use the DMA APIs, we can yank
this config option out.

[jengelh@computergmbh.de: Kconfig fixes]
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Jan Engelhardt <jengelh@gmx.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:19 -07:00
Keshavamurthy, Anil S
3460a6d9ce Intel IOMMU: DMAR fault handling support
MSI interrupt handler registration and fault handling support for Intel IOMMU
hardware.

This patch enables the MSI interrupts for the DMA remapping units, and the
interrupt handler reads the fault cause and outputs it to the console.
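
Schematically the handler does something like this (DMAR_FSTS_REG is the
fault status register from the spec; dmar_fault_reason() is an illustrative
helper):

    static irqreturn_t iommu_fault_handler(int irq, void *dev_id)
    {
            struct intel_iommu *iommu = dev_id;
            u32 fault_status = readl(iommu->reg + DMAR_FSTS_REG);

            if (fault_status)
                    printk(KERN_ERR "DMAR: fault, status %#x: %s\n",
                           fault_status, dmar_fault_reason(fault_status));

            /* clear the bits so further faults keep being reported */
            writel(fault_status, iommu->reg + DMAR_FSTS_REG);
            return IRQ_HANDLED;
    }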

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:19 -07:00
Keshavamurthy, Anil S
7d3b03ce7b Intel IOMMU: Intel iommu cmdline option - forcedac
Introduce the intel_iommu=forcedac command line option.  This option is
helpful for verifying that a PCI device is capable of handling DMA-able
physical addresses greater than 4G.
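
Usage is "intel_iommu=forcedac" on the kernel command line; the parsing is
the usual __setup() pattern, roughly (variable name illustrative):

    static int __init intel_iommu_setup(char *str)
    {
            while (str) {
                    if (!strncmp(str, "forcedac", 8))
                            dmar_forcedac = 1;  /* don't prefer <4G IOVAs */
                    str = strchr(str, ',');
                    if (str)
                            str++;
            }
            return 0;
    }
    __setup("intel_iommu=", intel_iommu_setup);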

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:18 -07:00
Keshavamurthy, Anil S
eb3fa7cb51 Intel IOMMU: Avoid memory allocation failures in dma map api calls
The Intel IOMMU driver needs memory during DMA map calls to set up its
internal page tables and other data structures.  These DMA map calls are
mostly made in interrupt context or with a spinlock held by the upper-level
drivers (network/storage drivers), so in order to avoid memory allocation
failures due to low memory, this patch makes those allocations with the
PF_MEMALLOC flag temporarily set on the current task.
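
The wrapper amounts to this kind of pattern (a sketch of the approach, not
the exact helper):

    static void *iommu_kmem_cache_alloc(struct kmem_cache *cachep)
    {
            void *obj;
            unsigned long saved = current->flags & PF_MEMALLOC;

            current->flags |= PF_MEMALLOC;  /* ignore the VM watermarks */
            obj = kmem_cache_alloc(cachep, GFP_ATOMIC);
            if (!saved)
                    current->flags &= ~PF_MEMALLOC;
            return obj;
    }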

We evaluated mempools as a backup when kmem_cache_alloc() fails
and found that mempools are really not useful here because
 1) We don't know for sure how much to reserve in advance
 2) And mempools are not useful for GFP_ATOMIC case (as we call
    memory alloc functions with GFP_ATOMIC)

(akpm: point 2 is wrong...)

With the PF_MEMALLOC flag set in current->flags, the VM subsystem skips the
watermark checks before allocating memory, thus guaranteeing memory down to
the last free page.  Further, looking at __alloc_pages() in mm/page_alloc.c,
this flag appears to be useful only in non-interrupt context.

If we are in interrupt context and a memory allocation in the IOMMU driver
fails for some reason, the DMA map APIs will return failure and it is up to
the higher-level drivers to retry.  If an upper-level driver programs the
controller with a bogus DMA virtual address, the IOMMU will block that DMA
transaction, preventing any corruption of main memory.

So far in our test scenarios, we were unable to trigger any memory allocation
failure inside the DMA map API calls.

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:18 -07:00
Keshavamurthy, Anil S
ba39592764 Intel IOMMU: Intel IOMMU driver
Actual intel IOMMU driver.  Hardware spec can be found at:
http://www.intel.com/technology/virtualization

This driver sets the X86_64 'dma_ops', so it hooks into the standard DMA
APIs.  In this way, PCI drivers get virtual DMA addresses.  This change is
transparent to PCI drivers.
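
The hook-in is the standard x86_64 dma_ops indirection; schematically
(member list abridged):

    static struct dma_mapping_ops intel_dma_ops = {
            .alloc_coherent = intel_alloc_coherent,
            .free_coherent  = intel_free_coherent,
            .map_single     = intel_map_single,
            .unmap_single   = intel_unmap_single,
            .map_sg         = intel_map_sg,
            .unmap_sg       = intel_unmap_sg,
    };

    /* at init time, once the DMAR units are set up */
    dma_ops = &intel_dma_ops;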

[akpm@linux-foundation.org: remove unneeded cast]
[akpm@linux-foundation.org: build fix]
[bunk@stusta.de: fix duplicate CONFIG_DMAR Makefile line]
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:18 -07:00
Keshavamurthy, Anil S
f8de50eb6b Intel IOMMU: IOVA allocation and management routines
This code implements generic IOVA allocation and management.  As per Dave's
suggestion, we now allocate IO virtual addresses from the higher end of the
DMA limit rather than from the lower end, which eliminated the need to
preserve the IO virtual address for multiple devices sharing the same domain
virtual address space.

This code also uses red-black trees to store the allocated and reserved iova
nodes, which showed good performance improvements over the previous linear
linked list.
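
Each allocated or reserved range is kept as a red-black tree node rather
than a list entry; the core structure is roughly:

    struct iova {
            struct rb_node  node;           /* keyed by pfn range */
            unsigned long   pfn_hi;         /* highest pfn in the range */
            unsigned long   pfn_lo;         /* lowest pfn in the range */
    };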

[akpm@linux-foundation.org: remove inlines]
[akpm@linux-foundation.org: coding style fixes]
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:18 -07:00
Keshavamurthy, Anil S
a9c55b3ba8 Intel IOMMU: clflush_cache_range now takes size param
Introduce the size param for clflush_cache_range().
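
The helper simply walks the range in cache-line steps; roughly:

    void clflush_cache_range(void *addr, int size)
    {
            int i;

            /* flush one cache line at a time over [addr, addr + size) */
            for (i = 0; i < size; i += boot_cpu_data.x86_clflush_size)
                    asm volatile("clflush (%0)" :: "r" (addr + i));
    }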

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Greg KH <greg@kroah.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:18 -07:00
Keshavamurthy, Anil S
994a65e25d Intel IOMMU: PCI generic helper function
When devices are under a p2p bridge, upstream transactions carry the device
id of the bridge, as it owns the PCIe transaction.  Hence it is necessary to
set up translations on behalf of the bridge as well.  Due to this limitation,
all devices under a p2p bridge share the same domain in a DMAR.

We just cache whether the device is a native PCIe device or not, for later
use.
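
Detecting "native PCIe or not" is a capability lookup cached per device;
a sketch (the info structure is illustrative):

    /* cached once when the device is attached to a domain */
    info->dev = pdev;
    info->is_pcie = pci_find_capability(pdev, PCI_CAP_ID_EXP) != 0;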

[akpm@linux-foundation.org: BUG_ON -> WARN_ON+recover]
Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:18 -07:00
Keshavamurthy, Anil S
10e5247f40 Intel IOMMU: DMAR detection and parsing logic
This patch supports the upcoming Intel IOMMU hardware, a.k.a.  Intel(R)
Virtualization Technology for Directed I/O Architecture; the hardware spec
can be found at
http://www.intel.com/technology/virtualization/index.htm

FAQ! (questions from akpm, answers from ak)

> So...  what's all this code for?
>
> I assume that the intent here is to speed things up under Xen, etc?

Yes in some cases, but not this code.  That would be the Xen version of this
code, which could potentially assign whole devices to guests.  I expect this
to be useful only in some special cases though, because most hardware is not
virtualizable and you typically want a separate instance for each guest.

OK, at some point KVM might implement this too; I would likely use this code
for this.

> Do we
> have any benchmark results to help us to decide whether a merge would be
> justified?

The main advantage for doing it in the normal kernel is not performance, but
more safety.  Broken devices won't be able to corrupt memory by doing random
DMA.

Unfortunately that doesn't work for graphics yet; for that, user space
interfaces for the X server are needed.

There are some potential performance benefits too:

- When you have a device that cannot address the complete address range an
  IOMMU can remap its memory instead of bounce buffering.  Remapping is likely
  cheaper than copying.

- The IOMMU can merge sg lists into a single virtual block.  This could
  potentially speed up SG IO when the device is slow walking SG lists.  [I
  long ago benchmarked 5% on some block benchmark with an old MPT Fusion; but
  it probably depends a lot on the HBA]

And you get better driver debugging because unexpected memory accesses from
the devices will cause a trappable event.

>
> Does it slow anything down?

It adds more overhead to each IO so yes.

This patch:

Add support for early detection and parsing of DMAR (DMA Remapping) units
reported to the OS via ACPI tables.

DMA remapping (DMAR) device support enables independent address translations
for Direct Memory Access (DMA) from devices.  These DMA remapping devices are
reported via ACPI tables, which also include the PCI device scope covered by
each DMA remapping device.

For detailed info on the specification of "Intel(R) Virtualization Technology
for Directed I/O Architecture" please see
http://www.intel.com/technology/virtualization/index.htm
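
Detection boils down to looking up the ACPI "DMAR" table and walking the
remapping-unit entries that follow its header; a schematic:

    struct acpi_table_header *dmar_tbl;
    acpi_status status;

    status = acpi_get_table(ACPI_SIG_DMAR, 0, &dmar_tbl);
    if (ACPI_FAILURE(status) || !dmar_tbl)
            return -ENODEV;         /* no DMA remapping reported */

    /* walk the DRHD (remapping hardware unit) sub-tables and record each
     * unit together with the PCI device scope it covers */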

Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: Arjan van de Ven <arjan@infradead.org>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Greg KH <greg@kroah.com>
Cc: Len Brown <lenb@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:18 -07:00
Jan Kara
89910cccb8 ext2: avoid rec_len overflow with 64KB block size
With a 64KB blocksize, a directory entry can have size 64KB, which does not
fit into the 16 bits we have for the entry length.  So we store 0xffff
instead and convert the value when it is read from / written to disk.
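
The conversion helpers are simple; roughly (EXT2_MAX_REC_LEN being the
0xffff marker):

    static inline unsigned ext2_rec_len_from_disk(__le16 dlen)
    {
            unsigned len = le16_to_cpu(dlen);

            if (len == EXT2_MAX_REC_LEN)    /* 0xffff marker */
                    return 1 << 16;
            return len;
    }

    static inline __le16 ext2_rec_len_to_disk(unsigned len)
    {
            if (len == (1 << 16))
                    return cpu_to_le16(EXT2_MAX_REC_LEN);
            BUG_ON(len > (1 << 16));
            return cpu_to_le16(len);
    }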

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: <linux-ext4@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:18 -07:00
J. Bruce Fields
321bcf9216 dcache: don't expose uninitialized memory in /proc/<pid>/fd/<fd>
Well, it's not especially important that target->d_iname get the contents
of dentry->d_iname, but it's important that it get initialized with
*something*, otherwise we're just exposing some random piece of memory to
anyone who reads the link at /proc/<pid>/fd/<fd> for the deleted file, when
it's still held open by someone.

I've run a test program that copies a short (<36 character) name on top of a
long (>=36 character) name and seen that the first time I run it, without
this patch, I get unpredictable results out of /proc/<pid>/fd/<fd>.

Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:18 -07:00
Serge E. Hallyn
b68680e473 capabilities: clean up file capability reading
Simplify the vfs_cap_data structure.

Also fix get_file_caps(), which was declaring
__le32 v1caps[XATTR_CAPS_SZ] on the stack even though
XATTR_CAPS_SZ is already a byte count (a multiple of sizeof(__le32)).
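
A sketch of a correctly sized read (helper name illustrative; the buffer is
the structure itself rather than an oversized array):

    static int read_file_caps(struct dentry *dentry, struct vfs_cap_data *caps)
    {
            struct inode *inode = dentry->d_inode;
            ssize_t size;

            if (!inode->i_op || !inode->i_op->getxattr)
                    return -ENODATA;

            size = inode->i_op->getxattr(dentry, XATTR_NAME_CAPS,
                                         caps, XATTR_CAPS_SZ);
            return size == XATTR_CAPS_SZ ? 0 : -EINVAL;
    }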

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Serge E. Hallyn <serue@us.ibm.com>
Cc: Andrew Morgan <morgan@kernel.org>
Cc: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:18 -07:00
Yasunori Goto
b9049e2344 memory hotplug: make kmem_cache_node for SLUB on memory online avoid panic
Fix a panic due to accessing a NULL kmem_cache_node pointer in discard_slab()
after memory online.

When memory online is called, kmem_cache_node structures are created for all
SLUB caches for the new node whose memory is now available.

slab_mem_going_online_callback() is called to create the kmem_cache_node
structures in the callback for the memory online event.  If it (or another
callback) fails, slab_mem_offline_callback() is called for rollback.

On memory offline, slab_mem_going_offline_callback() is called to shrink all
SLUB caches, then slab_mem_offline_callback() is called later.
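
The callback dispatch in SLUB ends up looking roughly like this:

    static int slab_memory_callback(struct notifier_block *self,
                                    unsigned long action, void *arg)
    {
            int ret = 0;

            switch (action) {
            case MEM_GOING_ONLINE:
                    ret = slab_mem_going_online_callback(arg);
                    break;
            case MEM_GOING_OFFLINE:
                    ret = slab_mem_going_offline_callback(arg);
                    break;
            case MEM_OFFLINE:
            case MEM_CANCEL_ONLINE:
                    slab_mem_offline_callback(arg); /* also the rollback path */
                    break;
            default:
                    break;
            }
            return ret;
    }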

[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: locking fix]
[akpm@linux-foundation.org: build fix]
Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:17 -07:00
Yasunori Goto
7b78d335ac memory hotplug: rearrange memory hotplug notifier
The current memory notifier still has some defects (fortunately, nothing uses
it yet).  This patch fixes and rearranges it.

  - Pass start_pfn, nr_pages, and the node id to the callback functions when
    a node's status changes from/to memoryless, since the callbacks can't do
    anything without that information (see the sketch after this list).
  - Add a going-online status notification.
    It is necessary for creating per-node structures before the node's
    pages become available.
  - Move the GOING_OFFLINE status notification after page isolation.
    It is a good place for callbacks to return cached memory,
    because the returned pages are not used again.
  - Add CANCEL events for rolling back when an error occurs.
  - Delete the MEM_MAPPING_INVALID notification.  It will not be used.
  - Fix a compile error in (un)register_memory_notifier().
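
The payload handed to the callbacks is a small structure along these lines
(matching the fields named in the first item):

    struct memory_notify {
            unsigned long start_pfn;
            unsigned long nr_pages;
            int status_change_nid;  /* node gaining/losing memory, or -1 */
    };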

Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:17 -07:00
Yasunori Goto
10020ca246 memory hotplug: document the memory hotplug notifier
Add a description of the event notification callback routine to the document.

Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:17 -07:00
Rusty Russell
a24e785111 i386: paravirt boot sequence
This patch uses the updated boot protocol to do a paravirtualized boot.
If the boot protocol version is >= 2.07, then it will do two things
(sketched after the list below):

 1. Check the bootparams loadflags to see if we should reload the
    segment registers and clear interrupts.  This is appropriate
    for normal native boot and some paravirtualized environments, but
    inappropriate for others.

 2. Check the hardware architecture, and dispatch to the appropriate
    kernel entrypoint.  If the bootloader doesn't set this, then we
    simply do the normal boot sequence.
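
In C-like pseudocode (the real checks live in the early entry code; helper
names here are illustrative), the two steps amount to:

    /* step 1: honour the loader's request to skip segment reloads */
    if (!(boot_params.hdr.loadflags & KEEP_SEGMENTS)) {
            cli();                          /* clear interrupts */
            reload_flat_data_segments();    /* ds/es/ss */
    }

    /* step 2: dispatch on the subarchitecture, 0 = plain native boot */
    switch (boot_params.hdr.hardware_subarch) {
    case 0:
            native_startup();
            break;
    default:
            subarch_entry[boot_params.hdr.hardware_subarch]();
            break;
    }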

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:17 -07:00
Rusty Russell
214541d1f3 add WEAK() for creating weak asm labels
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:17 -07:00
Rusty Russell
e5371ac566 update boot spec to 2.07
Updates for version 2.07 of the boot protocol.  This includes:

load_flags.KEEP_SEGMENTS - flag to request/inhibit segment reloads
hardware_subarch         - what subarchitecture we're booting under
hardware_subarch_data    - per-architecture data

The intention of these changes is to make booting a paravirtualized
kernel work via the normal Linux boot protocol.
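
In terms of the setup header, the additions look like this (offsets omitted;
the loadflags bit is as documented for protocol 2.07):

    /* new loadflags bit */
    #define KEEP_SEGMENTS   (1 << 6)        /* don't reload segment registers */

    /* new fields appended to struct setup_header */
    __u32   hardware_subarch;       /* 0 = default/native */
    __u64   hardware_subarch_data;  /* subarch-specific data pointer */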

Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: H. Peter Anvin <hpa@zytor.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Vivek Goyal <vgoyal@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-22 08:13:17 -07:00
Trond Myklebust
55b70a0300 NFS: Fix a typo in nfs_call_unlink()
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2007-10-21 13:37:07 -04:00
Trond Myklebust
bad2a52411 NFSv2: Ensure that the directory metadata gets revalidated on file create
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
2007-10-21 13:37:02 -04:00
Linus Torvalds
efea90a454 Merge branch 'for-linus' of master.kernel.org:/pub/scm/linux/kernel/git/cooloney/blackfin-2.6
* 'for-linus' of master.kernel.org:/pub/scm/linux/kernel/git/cooloney/blackfin-2.6:
  Blackfin arch: update boards files
  Blackfin arch: dma add some API and cleanup bf54x DMA definition
  Blackfin arch: cleanup and promote the general purpose timers api to a core blackfin component
  Blackfin arch: add a cheesy install target
  Blackfin arch: add functions for converting between sclks and usecs
  Blackfin arch: add assembly function for doing 64bit unsigned division
  Blackfin arch: -mno-fdpic works
  Blackfin arch: use "char bfin_board_name[]" rather than "char *bfin_board_name" per discussion on lkml as the former uses less storage
  Blackfin arch: Fixing Bug: balance calls to get_task_mm with corresponding mmput calls
  Blackfin serial driver Kconfig: depend on DMA not being enabled rather than a specific DMA size
  Blackfin arch: Fix bug: missing CHIPID register field definition of BF54x
  Blackfin arch: Fix up /proc/cpuinfo so it is like everyone else
  Blackfin arch: Optimization - no need to make additional math here
  Blackfin arch: force irq_flags into the .data section
  Blackfin arch BF548 defconfig: enable watchdog by default
  Blackfin arch: add new processor ADSP-BF52x arch/mach support
2007-10-21 09:57:55 -07:00
Linus Torvalds
2fb59d623a Merge branch 'audit.b43' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/audit-current
* 'audit.b43' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/audit-current:
  [PATCH] audit: watching subtrees
  [PATCH] new helper - inotify_evict_watch()
  [PATCH] new helper - inotify_clone_watch()
  [PATCH] new helpers - collect_mounts() and release_collected_mounts()
  [PATCH] pass dentry to audit_inode()/audit_inode_child()
2007-10-21 08:54:32 -07:00
Nick Piggin
efdc31319d nobh: nobh_write_end fix
This path mustn't have been tested :( I did attempt to exercise it
by injecting failures here, but I suspect PageMappedToDisk may have
been getting in the way. Will need more of a look, although I think
nobh mode is OK for an -rc1 (it shouldn't eat anyone's data).

Commit 03158cd7eb ("fs: restore nobh")
introduced a NULL deref.  Spotted by the Coverity checker.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Badari Pulavarty <pbadari@us.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-21 08:54:05 -07:00
Bryan Wu
d4b1d27368 Blackfin arch: update boards files
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-21 17:03:55 +08:00
Bryan Wu
452af71f36 Blackfin arch: dma add some API and cleanup bf54x DMA definition
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-22 00:02:14 +08:00
Mike Frysinger
780431e397 Blackfin arch: cleanup and promote the general purpose timers api to a core blackfin component
Signed-off-by: Mike Frysinger <michael.frysinger@analog.com>
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-21 23:37:54 +08:00
Mike Frysinger
29cae11372 Blackfin arch: add a cheesy install target
Signed-off-by: Mike Frysinger <michael.frysinger@analog.com>
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-22 00:45:55 +08:00
Mike Frysinger
2f6cf7bfc6 Blackfin arch: add functions for converting between sclks and usecs
Signed-off-by: Mike Frysinger <michael.frysinger@analog.com>
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-21 22:59:49 +08:00
Mike Frysinger
b0a68dc07e Blackfin arch: add assembly function for doing 64bit unsigned division
Signed-off-by: Mike Frysinger <michael.frysinger@analog.com>
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-21 22:57:36 +08:00
Mike Frysinger
1c668d8246 Blackfin arch: -mno-fdpic works
Now that -mno-fdpic works, force it on so that
we can use any Blackfin toolchain to build the
kernel and kernel modules.

Wrap -mno-fdpic in $(call cc-option,-mno-fdpic) so that older
toolchains will still work.

Signed-off-by: Mike Frysinger <michael.frysinger@analog.com>
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-21 22:55:18 +08:00
Mike Frysinger
066954a389 Blackfin arch: use "char bfin_board_name[]" rather than "char *bfin_board_name" per discussion on lkml as the former uses less storage
Signed-off-by: Mike Frysinger <michael.frysinger@analog.com>
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-21 22:36:06 +08:00
Bernd Schmidt
c1e7399da7 Blackfin arch: Fixing Bug: balance calls to get_task_mm with corresponding mmput calls
We must balance calls to get_task_mm with corresponding mmput calls, otherwise
refcounting is screwed up and mms don't get freed when their task exits.
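
The required pattern is simply:

    struct mm_struct *mm = get_task_mm(task);

    if (mm) {
            /* ... use mm ... */
            mmput(mm);      /* drop the reference taken by get_task_mm() */
    }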

Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-21 22:32:27 +08:00
Mike Frysinger
eaa854902a Blackfin serial driver Kconfig: depend on DMA not being enabled rather than a specific DMA size
Signed-off-by: Mike Frysinger <michael.frysinger@analog.com>
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-21 22:30:01 +08:00
Bryan Wu
1e5b24431b Blackfin arch: Fix bug: missing CHIPID register field definition of BF54x
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-21 16:58:49 +08:00
Robin Getz
73b0c0b0c1 Blackfin arch: Fix up /proc/cpuinfo so it is like everyone else
Fix up /proc/cpuinfo so it looks like everyone else's and gets
parsed properly by various applications.  It still needs some tweaking on
parts without full L1 SRAM, like the 532 and 531, so that it doesn't print
out L1 bank info that doesn't exist.

Signed-off-by: Robin Getz <robin.getz@analog.com>
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-21 17:03:31 +08:00
Michael Hennerich
4fb4524162 Blackfin arch: Optimization - no need to make additional math here
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-21 16:53:53 +08:00
Mike Frysinger
a99bbccd87 Blackfin arch: force irq_flags into the .data section
Force irq_flags into the .data section by initializing it to
the hardware masks that cannot be disabled.  This way, if we
use the irq enable/disable functions before the .bss has been
zeroed out (as our L1 relocate/dma functions do), we don't
hit a problem where .bss contains bogus crap.
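
I.e. the variable now gets a non-zero initializer covering the events that
can never be masked, which is enough to move it out of .bss (the value shown
is illustrative):

    /* events that are always enabled; non-zero, so this lands in .data */
    unsigned long irq_flags = 0x1f;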

Signed-off-by: Mike Frysinger <michael.frysinger@analog.com>
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-22 00:19:31 +08:00
Mike Frysinger
876a6682aa Blackfin arch BF548 defconfig: enable watchdog by default
Signed-off-by: Mike Frysinger <michael.frysinger@analog.com>
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-22 00:19:08 +08:00
Michael Hennerich
590031450a Blackfin arch: add new processor ADSP-BF52x arch/mach support
Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
Signed-off-by: Bryan Wu <bryan.wu@analog.com>
2007-10-21 16:54:27 +08:00
Al Viro
74c3cbe33b [PATCH] audit: watching subtrees
New kind of audit rule predicates: "object is visible in given subtree".
The part that can be sanely implemented, that is.  Limitations:
	* if you have hardlink from outside of tree, you'd better watch
it too (or just watch the object itself, obviously)
	* if you mount something under a watched tree, tell audit
that new chunk should be added to watched subtrees
	* if you umount something in a watched tree and it's still mounted
elsewhere, you will get matches on events happening there.  New command
tells audit to recalculate the trees, trimming such sources of false
positives.

Note that it's _not_ about path - if something mounted in several places
(multiple mount, bindings, different namespaces, etc.), the match does
_not_ depend on which one we are using for access.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2007-10-21 02:37:45 -04:00
Al Viro
455434d450 [PATCH] new helper - inotify_evict_watch()
Kicks the watch out without dropping it.  Called under ->inotify_mutex

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2007-10-21 02:37:38 -04:00
Al Viro
b9efe8a234 [PATCH] new helper - inotify_clone_watch()
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2007-10-21 02:37:32 -04:00
Al Viro
8aec080945 [PATCH] new helpers - collect_mounts() and release_collected_mounts()
Get a snapshot of a subtree, creating private clones of vfsmounts
for all its components, and release such a snapshot, respectively.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2007-10-21 02:37:25 -04:00