diff --git a/Documentation/cachetlb.txt b/Documentation/cachetlb.txt
index da42ab4..9164ae3 100644
--- a/Documentation/cachetlb.txt
+++ b/Documentation/cachetlb.txt
@@ -5,7 +5,7 @@
 
 This document describes the cache/tlb flushing interfaces called
 by the Linux VM subsystem.  It enumerates each interface,
-describes it's intended purpose, and what side effect is expected
+describes its intended purpose, and what side effect is expected
 after the interface is invoked.
 
 The side effects described below are stated for a uniprocessor
@@ -88,12 +88,12 @@ changes occur:
 	This is used primarily during fault processing.
 
 5) void update_mmu_cache(struct vm_area_struct *vma,
-			 unsigned long address, pte_t pte)
+			 unsigned long address, pte_t *ptep)
 
 	At the end of every page fault, this routine is invoked to
 	tell the architecture specific code that a translation
-	described by "pte" now exists at virtual address "address"
-	for address space "vma->vm_mm", in the software page tables.
+	now exists at virtual address "address" for address space
+	"vma->vm_mm", in the software page tables.
 
 	A port may use this information in any way it so chooses.
 	For example, it could use this event to pre-load TLB
@@ -231,7 +231,7 @@ require a whole different set of interfaces to handle properly.
 The biggest problem is that of virtual aliasing in the data cache
 of a processor.
 
-Is your port susceptible to virtual aliasing in it's D-cache?
+Is your port susceptible to virtual aliasing in its D-cache?
 Well, if your D-cache is virtually indexed, is larger in size than
 PAGE_SIZE, and does not prevent multiple cache lines for the same
 physical address from existing at once, you have this problem.
@@ -249,7 +249,7 @@ one way to solve this (in particular SPARC_FLAG_MMAPSHARED).
 Next, you have to solve the D-cache aliasing issue for all
 other cases.  Please keep in mind the fact that, for a given page
 mapped into some user address space, there is always at least one more
-mapping, that of the kernel in it's linear mapping starting at
+mapping, that of the kernel in its linear mapping starting at
 PAGE_OFFSET.  So immediately, once the first user maps a given
 physical page into its address space, by implication the D-cache
 aliasing problem has the potential to exist since the kernel already
@@ -377,3 +377,27 @@ maps this page at its virtual address.
 	All the functionality of flush_icache_page can be implemented
 	in flush_dcache_page and update_mmu_cache. In 2.7 the hope is
 	to remove this interface completely.
+
+The final category of APIs is for I/O to deliberately aliased address
+ranges inside the kernel.  Such aliases are set up by use of the
+vmap/vmalloc API.  Since kernel I/O goes via physical pages, the I/O
+subsystem assumes that the user mapping and kernel offset mapping are
+the only aliases.  This isn't true for vmap aliases, so anything in
+the kernel trying to do I/O to vmap areas must manually manage
+coherency.  It must do this by flushing the vmap range before doing
+I/O and invalidating it after the I/O returns.
+
+  void flush_kernel_vmap_range(void *vaddr, int size)
+       flushes the kernel cache for a given virtual address range in
+       the vmap area.  This is to make sure that any data the kernel
+       modified in the vmap range is made visible to the physical
+       page.  The design is to make this area safe to perform I/O on.
+       Note that this API does *not* also flush the offset map alias
+       of the area.
+
+  void invalidate_kernel_vmap_range(void *vaddr, int size) invalidates
+       the cache for a given virtual address range in the vmap area
+       which prevents the processor from making the cache stale by
+       speculatively reading data while the I/O was occurring to the
+       physical pages.  This is only necessary for data reads into the
+       vmap area.
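
To make the flush-before-I/O, invalidate-after-I/O discipline that the
new text above describes concrete, here is a minimal sketch (an
editor's illustration, not part of the patch) of a routine reading
data into pages through a vmap alias.  vmap(), vunmap(),
flush_kernel_vmap_range() and invalidate_kernel_vmap_range() are the
real kernel interfaces; read_into_vmap() and perform_physical_io()
are hypothetical names standing in for the caller and for whatever
mechanism actually issues the I/O against the physical pages.

	#include <linux/mm.h>
	#include <linux/vmalloc.h>
	#include <linux/highmem.h>

	/* Hypothetical: issues the actual I/O against the pages. */
	int perform_physical_io(struct page **pages, unsigned int count);

	static int read_into_vmap(struct page **pages, unsigned int count)
	{
		void *vaddr;
		int ret;

		/* Set up the deliberate alias in the vmap area. */
		vaddr = vmap(pages, count, VM_MAP, PAGE_KERNEL);
		if (!vaddr)
			return -ENOMEM;

		/* Write back any lines dirtied through the alias so
		 * the physical pages are up to date before the I/O.
		 * Note this does not flush the offset map alias. */
		flush_kernel_vmap_range(vaddr, count * PAGE_SIZE);

		ret = perform_physical_io(pages, count);

		/* Discard lines the CPU may have speculatively
		 * fetched through the alias while the I/O was in
		 * flight; only needed because this I/O read data
		 * into the physical pages. */
		invalidate_kernel_vmap_range(vaddr, count * PAGE_SIZE);

		vunmap(vaddr);
		return ret;
	}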