[PATCH] mm: pte_offset_map_lock loops
author		Hugh Dickins <hugh@veritas.com>
		Sun, 30 Oct 2005 01:16:27 +0000 (18:16 -0700)
committer	Linus Torvalds <torvalds@g5.osdl.org>
		Sun, 30 Oct 2005 04:40:40 +0000 (21:40 -0700)
commit		705e87c0c3c38424f7f30556c85bc20e808d2f59
tree		7a237e6266f4801385e1226cc497b47e3a2458bd
parent		8f4e2101fd7df9031a754eedb82e2060b51f8c45
[PATCH] mm: pte_offset_map_lock loops

Convert the common loops that take page_table_lock on the outside and
call pte_offset_map within, to use just pte_offset_map_lock within
instead.
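
The conversion follows the same pattern at each site.  Roughly, with mm,
pmd, pte, addr, end and ptl standing in for whatever the real loop uses
(a sketch of the pattern, not a quote from any one of the files below):

    /* Before: the caller serializes on mm->page_table_lock. */
    spin_lock(&mm->page_table_lock);
    pte = pte_offset_map(pmd, addr);
    do {
            /* ... examine or modify *pte ... */
    } while (pte++, addr += PAGE_SIZE, addr != end);
    pte_unmap(pte - 1);
    spin_unlock(&mm->page_table_lock);

    /* After: pte_offset_map_lock takes the pte lock itself. */
    spinlock_t *ptl;

    pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
    do {
            /* ... examine or modify *pte ... */
    } while (pte++, addr += PAGE_SIZE, addr != end);
    pte_unmap_unlock(pte - 1, ptl);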

These all hold mmap_sem (some exclusively, some not), so at no level can a
page table be whipped away from beneath them.  But whereas the pte_alloc
loops tested with the "atomic" pmd_present, these loops test with pmd_none,
which on i386 PAE examines both the lower and upper halves of the entry.
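
For reference, with CONFIG_X86_PAE an i386 pmd entry is a single 64-bit
word, so the old definition looks at both 32-bit halves.  Paraphrasing
the relevant i386 definitions rather than quoting this patch's context:

    /* PAE: the middle-directory entry is 64 bits wide. */
    typedef struct { unsigned long long pmd; } pmd_t;
    #define pmd_val(x)      ((x).pmd)

    /*
     * Old test: !pmd_val(x) evaluates all 64 bits, i.e. two 32-bit
     * reads on i386, which was only safe while page_table_lock kept
     * the entry stable.
     */
    #define pmd_none(x)     (!pmd_val(x))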

That's now unsafe, so add a cast into pmd_none to test only the vital lower
half: we lose a little sensitivity to a corrupt middle directory, but not
enough to worry about.  It appears that i386 and UML were the only
architectures vulnerable in this way, and that pgd and pud present no such
problem.
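
On i386 the fix amounts to a single cast, with UML's pmd_none getting
the equivalent treatment; sketched from the description above rather
than quoted from the hunks:

    /*
     * New test: cast to unsigned long so only the "vital" lower 32
     * bits are examined; a corrupt upper half can now slip past, but
     * not by enough to worry about.
     */
    #define pmd_none(x)     (!(unsigned long)pmd_val(x))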

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
fs/proc/task_mmu.c
include/asm-i386/pgtable.h
include/asm-um/pgtable.h
mm/mempolicy.c
mm/mprotect.c
mm/msync.c
mm/swapfile.c