[PATCH] AOP_TRUNCATED_PAGE victims in read_pages() belong in the LRU
author     Zach Brown <zach.brown@oracle.com>
           Sun, 25 Jun 2006 12:46:46 +0000 (05:46 -0700)
committer  Linus Torvalds <torvalds@g5.osdl.org>
           Sun, 25 Jun 2006 17:00:54 +0000 (10:00 -0700)

Nick Piggin rightly pointed out that the AOP_TRUNCATED_PAGE handling
introduced in read_pages() wrongly left A_T_P victim pages in the page cache
without putting them on the LRU.  Leaving them off the LRU hid them from the VM.

A_T_P just means that the aop method unlocked the page rather than
performing IO.  It would be very rare for the page to be truncated between
the unlock and the A_T_P test.  So we leave the pages on the LRU, where they
are likely to be reused soon, rather than backing them out of the page
cache.  We do this by matching the behaviour from before A_T_P was
introduced, which added pages to the LRU regardless of what ->readpage()
returned.
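
For context, a minimal sketch (not taken from this patch) of how an aop's
->readpage() can come to return A_T_P: it unlocks the page to take a lock
that must not nest inside the page lock, finds the page truncated in that
window, and returns without performing IO.  The function example_readpage()
and the example_fs_lock mutex are hypothetical names used only for
illustration.

#include <linux/fs.h>
#include <linux/mutex.h>
#include <linux/pagemap.h>

/* Hypothetical fs-private lock, only here to keep the sketch self-contained. */
static DEFINE_MUTEX(example_fs_lock);

static int example_readpage(struct file *filp, struct page *page)
{
        /*
         * The page arrives locked.  Drop the page lock so it is not held
         * while taking the fs-private lock, then re-take it afterwards.
         */
        unlock_page(page);
        mutex_lock(&example_fs_lock);
        lock_page(page);

        if (page->mapping == NULL) {
                /* Truncated while it was unlocked: no IO to perform. */
                unlock_page(page);
                mutex_unlock(&example_fs_lock);
                return AOP_TRUNCATED_PAGE;
        }

        /* ... normal case: start the read against page->mapping ... */
        mutex_unlock(&example_fs_lock);
        return 0;
}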

This doesn't include the unrelated cleanup from Nick's initial fix, which
changed read_pages() to return void to match its only caller's behaviour of
ignoring errors.

Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Zach Brown <zach.brown@oracle.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
diff --git a/mm/readahead.c b/mm/readahead.c
index 0f142a4..4ee52ca 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -182,14 +182,11 @@ static int read_pages(struct address_space *mapping, struct file *filp,
                list_del(&page->lru);
                if (!add_to_page_cache(page, mapping,
                                        page->index, GFP_KERNEL)) {
-                       ret = mapping->a_ops->readpage(filp, page);
-                       if (ret != AOP_TRUNCATED_PAGE) {
-                               if (!pagevec_add(&lru_pvec, page))
-                                       __pagevec_lru_add(&lru_pvec);
-                               continue;
-                       } /* else fall through to release */
-               }
-               page_cache_release(page);
+                       mapping->a_ops->readpage(filp, page);
+                       if (!pagevec_add(&lru_pvec, page))
+                               __pagevec_lru_add(&lru_pvec);
+               } else
+                       page_cache_release(page);
        }
        pagevec_lru_add(&lru_pvec);
        ret = 0;
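
For readers unfamiliar with the pagevec calls in the hunk above, the
batching idiom it relies on looks roughly like the sketch below.  The helper
example_add_pages_to_lru() and its arguments are hypothetical; the pagevec
calls themselves are the ones used by read_pages().

#include <linux/mm.h>
#include <linux/pagevec.h>

/*
 * Hypothetical helper showing the pagevec idiom from read_pages() above:
 * pages are queued in a pagevec and moved onto the LRU in batches, with a
 * final pagevec_lru_add() to flush whatever is left over.
 */
static void example_add_pages_to_lru(struct page **pages, int nr)
{
        struct pagevec lru_pvec;
        int i;

        pagevec_init(&lru_pvec, 0);
        for (i = 0; i < nr; i++) {
                /* pagevec_add() returns 0 once the pagevec is full. */
                if (!pagevec_add(&lru_pvec, pages[i]))
                        __pagevec_lru_add(&lru_pvec);
        }
        pagevec_lru_add(&lru_pvec);     /* flushes any remaining pages */
}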