[v6 00/15] complete deferred page initialization

Pavel Tatashin
Changelog:
v6 - v5
- Fixed ARM64 + kasan code, as reported by Ard Biesheuvel
- Tested the ARM64 code in QEMU and found a few more issues, which I fixed
  in this iteration
- Added page roundup/rounddown to the x86 and ARM zeroing routines to zero
  the whole allocated range, instead of only the provided address range.
- Addressed SPARC related comment from Sam Ravnborg
- Fixed section mismatch warnings related to memblock_discard().

v5 - v4
- Fixed build issues reported by kbuild on various configurations

v4 - v3
- Rewrote the code to zero struct pages in __init_single_page(), as
  suggested by Michal Hocko
- Added code to handle issues related to accessing struct page
  memory before it is initialized.

v3 - v2
- Addressed David Miller comments about one change per patch:
    * Split changes to platforms into 4 patches
    * Made "do not zero vmemmap_buf" as a separate patch

v2 - v1
- Per request, added s390 to deferred "struct page" zeroing
- Collected performance data on x86, which proves the importance of
  keeping memset() as a prefetch (see below).

SMP machines can benefit from the DEFERRED_STRUCT_PAGE_INIT config option,
which defers initializing struct pages until all CPUs have been started, so
it can be done in parallel.

However, this feature is sub-optimal, because the deferred page
initialization code expects that the struct pages have already been zeroed,
and the zeroing is done early in boot with a single thread only.  Also, we
access that memory and set flags before struct pages are initialized. All
of this is fixed in this patchset.

In this work we do the following:
- Never read from a struct page until it has been initialized
- Never set any fields in struct pages before they are initialized
- Zero struct page at the beginning of struct page initialization

Performance improvements on an x86 machine with 8 nodes:
Intel(R) Xeon(R) CPU E7-8895 v3 @ 2.60GHz

Single-threaded struct page init: 7.6s/T improvement
Deferred struct page init: 10.2s/T improvement
(s/T = seconds of improvement per 1T of memory)
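
For orientation, the zeroing part of the series boils down to patch 08: a
single call added at the top of __init_single_page(), so that on deferred
ranges the zeroing runs in parallel with the rest of the struct page
initialization (the full diff is in patch 08 below):

	static void __meminit __init_single_page(struct page *page, unsigned long pfn,
						 unsigned long zone, int nid)
	{
		mm_zero_struct_page(page);	/* new: zero the page right before initializing it */
		set_page_links(page, zone, nid, pfn);
		init_page_count(page);
		page_mapcount_reset(page);
		/* ... remaining field initialization is unchanged ... */
	}

The default mm_zero_struct_page() is a plain memset() of sizeof(struct page)
bytes; architectures can override it, which patch 09 does for sparc64.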

Pavel Tatashin (15):
  x86/mm: reserve only existing low pages
  x86/mm: setting fields in deferred pages
  sparc64/mm: setting fields in deferred pages
  mm: discard memblock data later
  mm: don't access uninitialized struct pages
  sparc64: simplify vmemmap_populate
  mm: defining memblock_virt_alloc_try_nid_raw
  mm: zero struct pages during initialization
  sparc64: optimized struct page zeroing
  x86/kasan: explicitly zero kasan shadow memory
  arm64/kasan: explicitly zero kasan shadow memory
  mm: explicitly zero pagetable memory
  mm: stop zeroing memory during allocation in vmemmap
  mm: optimize early system hash allocations
  mm: debug for raw allocator

 arch/arm64/mm/kasan_init.c          |  42 ++++++++++
 arch/sparc/include/asm/pgtable_64.h |  30 +++++++
 arch/sparc/mm/init_64.c             |  31 +++-----
 arch/x86/kernel/setup.c             |   5 +-
 arch/x86/mm/init_64.c               |   9 ++-
 arch/x86/mm/kasan_init_64.c         |  67 ++++++++++++++++
 include/linux/bootmem.h             |  27 +++++++
 include/linux/memblock.h            |   9 ++-
 include/linux/mm.h                  |   9 +++
 mm/memblock.c                       | 152 ++++++++++++++++++++++++++++--------
 mm/nobootmem.c                      |  16 ----
 mm/page_alloc.c                     |  31 +++++---
 mm/sparse-vmemmap.c                 |  10 ++-
 mm/sparse.c                         |   6 +-
 14 files changed, 356 insertions(+), 88 deletions(-)

--
2.14.0

[v6 01/15] x86/mm: reserve only existing low pages

Pavel Tatashin
Struct pages are initialized by going through __init_single_page(). Since
the existing physical memory in memblock is represented in the
memblock.memory list, the struct page for every page from this list goes
through __init_single_page().

The second memblock list, memblock.reserved, manages the allocated memory:
memory that won't be available to the kernel allocator. Every page from
this list goes through reserve_bootmem_region(), where certain struct page
fields are set, the assumption being that the struct pages have been
initialized beforehand.

In trim_low_memory_range() we unconditionally reserve memory from PFN 0, but
memblock.memory might start at a later PFN. For example, in QEMU,
e820__memblock_setup() can use PFN 1 as the first PFN in memblock.memory,
so PFN 0 is not on memblock.memory (and hence isn't initialized via
__init_single_page) but is on memblock.reserved (and hence we set fields in
the uninitialized struct page).

Currently, the struct page memory is always zeroed during allocation,
which prevents this problem from being detected. But, if some asserts
provided by CONFIG_DEBUG_VM_PGFLAGS are tightened, this problem may become
visible in existing kernels.

In this patchset we will stop zeroing struct page memory during allocation.
Therefore, this bug must be fixed in order to avoid random assert failures
caused by the CONFIG_DEBUG_VM_PGFLAGS checks.

The fix is to reserve memory from the first existing PFN.

Signed-off-by: Pavel Tatashin <[hidden email]>
Reviewed-by: Steven Sistare <[hidden email]>
Reviewed-by: Daniel Jordan <[hidden email]>
Reviewed-by: Bob Picco <[hidden email]>
---
 arch/x86/kernel/setup.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 3486d0498800..489cdc141bcb 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -790,7 +790,10 @@ early_param("reservelow", parse_reservelow);
 
 static void __init trim_low_memory_range(void)
 {
- memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
+ unsigned long min_pfn = find_min_pfn_with_active_regions();
+ phys_addr_t base = min_pfn << PAGE_SHIFT;
+
+ memblock_reserve(base, ALIGN(reserve_low, PAGE_SIZE));
 }
 
 /*
--
2.14.0

[v6 02/15] x86/mm: setting fields in deferred pages

Pavel Tatashin
Without the deferred struct page feature (CONFIG_DEFERRED_STRUCT_PAGE_INIT),
flags and other fields in struct pages are never changed prior to their
first initialization in __init_single_page().

With the deferred struct page feature enabled, there is a case where we set
some fields prior to initialization:

        mem_init() {
                register_page_bootmem_info();
                free_all_bootmem();
                ...
        }

When register_page_bootmem_info() is called, only non-deferred struct pages
are initialized. But this function goes through some reserved pages which
might be part of the deferred range, and thus are not yet initialized.

  mem_init
   register_page_bootmem_info
    register_page_bootmem_info_node
     get_page_bootmem
      .. setting fields here ..
      such as: page->freelist = (void *)type;

We end up with a similar issue as in the previous patch: currently we do
not observe a problem because the memory is zeroed, but if the flag asserts
are tightened we can start hitting issues.

Also, because in this patch series we will stop zeroing struct page memory
during allocation, we must make sure that struct pages are properly
initialized prior to using them.

The deferred-reserved pages are initialized in free_all_bootmem().
Therefore, the fix is to swap the order of the above calls.

Signed-off-by: Pavel Tatashin <[hidden email]>
Reviewed-by: Steven Sistare <[hidden email]>
Reviewed-by: Daniel Jordan <[hidden email]>
Reviewed-by: Bob Picco <[hidden email]>
---
 arch/x86/mm/init_64.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 136422d7d539..1e863baec847 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1165,12 +1165,17 @@ void __init mem_init(void)
 
  /* clear_bss() already clear the empty_zero_page */
 
- register_page_bootmem_info();
-
  /* this will put all memory onto the freelists */
  free_all_bootmem();
  after_bootmem = 1;
 
+ /* Must be done after boot memory is put on freelist, because here we
+ * might set fields in deferred struct pages that have not yet been
+ * initialized, and free_all_bootmem() initializes all the reserved
+ * deferred pages for us.
+ */
+ register_page_bootmem_info();
+
  /* Register memory areas for /proc/kcore */
  kclist_add(&kcore_vsyscall, (void *)VSYSCALL_ADDR,
  PAGE_SIZE, KCORE_OTHER);
--
2.14.0

[v6 03/15] sparc64/mm: setting fields in deferred pages

Pavel Tatashin
Without the deferred struct page feature (CONFIG_DEFERRED_STRUCT_PAGE_INIT),
flags and other fields in struct pages are never changed prior to their
first initialization in __init_single_page().

With the deferred struct page feature enabled, there is a case where we set
some fields prior to initialization:

 mem_init() {
         register_page_bootmem_info();
         free_all_bootmem();
         ...
 }

When register_page_bootmem_info() is called, only non-deferred struct pages
are initialized. But this function goes through some reserved pages which
might be part of the deferred range, and thus are not yet initialized.

mem_init
register_page_bootmem_info
 register_page_bootmem_info_node
  get_page_bootmem
   .. setting fields here ..
   such as: page->freelist = (void *)type;

We end up with a similar issue as in the previous patch: currently we do
not observe a problem because the memory is zeroed, but if the flag asserts
are tightened we can start hitting issues.

Also, because in this patch series we will stop zeroing struct page memory
during allocation, we must make sure that struct pages are properly
initialized prior to using them.

The deferred-reserved pages are initialized in free_all_bootmem().
Therefore, the fix is to swap the order of the above calls.

Signed-off-by: Pavel Tatashin <[hidden email]>
Reviewed-by: Steven Sistare <[hidden email]>
Reviewed-by: Daniel Jordan <[hidden email]>
Reviewed-by: Bob Picco <[hidden email]>
---
 arch/sparc/mm/init_64.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index fed73f14aa49..25ded711ab6c 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2487,9 +2487,15 @@ void __init mem_init(void)
 {
  high_memory = __va(last_valid_pfn << PAGE_SHIFT);
 
- register_page_bootmem_info();
  free_all_bootmem();
 
+ /* Must be done after boot memory is put on freelist, because here we
+ * might set fields in deferred struct pages that have not yet been
+ * initialized, and free_all_bootmem() initializes all the reserved
+ * deferred pages for us.
+ */
+ register_page_bootmem_info();
+
  /*
  * Set up the zero page, mark it reserved, so that page count
  * is not manipulated when freeing the page from user ptes.
--
2.14.0

[v6 04/15] mm: discard memblock data later

Pavel Tatashin
There is an existing use-after-free bug when deferred struct pages are
enabled:

memblock_add() allocates memory for the memory array if more than
128 entries are needed.  See the comment in e820__memblock_setup():

  * The bootstrap memblock region count maximum is 128 entries
  * (INIT_MEMBLOCK_REGIONS), but EFI might pass us more E820 entries
  * than that - so allow memblock resizing.

This memblock memory is freed here:
        free_low_memory_core_early()

We access the freed memblock.memory later in boot when deferred pages are
initialized in this path:

        deferred_init_memmap()
                for_each_mem_pfn_range()
                  __next_mem_pfn_range()
                    type = &memblock.memory;

One possible explanation for why this use-after-free hasn't been hit
before is that the limit of INIT_MEMBLOCK_REGIONS has never been exceeded,
at least on systems where deferred struct pages were enabled.

Another reason why we want this problem fixed in this patch series is,
in the next patch, we will need to access memblock.reserved from
deferred_init_memmap().

Signed-off-by: Pavel Tatashin <[hidden email]>
Reviewed-by: Steven Sistare <[hidden email]>
Reviewed-by: Daniel Jordan <[hidden email]>
Reviewed-by: Bob Picco <[hidden email]>
---
 include/linux/memblock.h |  6 ++++--
 mm/memblock.c            | 38 +++++++++++++++++---------------------
 mm/nobootmem.c           | 16 ----------------
 mm/page_alloc.c          |  4 ++++
 4 files changed, 25 insertions(+), 39 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 77d427974f57..bae11c7e7bf3 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -61,6 +61,7 @@ extern int memblock_debug;
 #ifdef CONFIG_ARCH_DISCARD_MEMBLOCK
 #define __init_memblock __meminit
 #define __initdata_memblock __meminitdata
+void memblock_discard(void);
 #else
 #define __init_memblock
 #define __initdata_memblock
@@ -74,8 +75,6 @@ phys_addr_t memblock_find_in_range_node(phys_addr_t size, phys_addr_t align,
  int nid, ulong flags);
 phys_addr_t memblock_find_in_range(phys_addr_t start, phys_addr_t end,
    phys_addr_t size, phys_addr_t align);
-phys_addr_t get_allocated_memblock_reserved_regions_info(phys_addr_t *addr);
-phys_addr_t get_allocated_memblock_memory_regions_info(phys_addr_t *addr);
 void memblock_allow_resize(void);
 int memblock_add_node(phys_addr_t base, phys_addr_t size, int nid);
 int memblock_add(phys_addr_t base, phys_addr_t size);
@@ -110,6 +109,9 @@ void __next_mem_range_rev(u64 *idx, int nid, ulong flags,
 void __next_reserved_mem_region(u64 *idx, phys_addr_t *out_start,
  phys_addr_t *out_end);
 
+void __memblock_free_early(phys_addr_t base, phys_addr_t size);
+void __memblock_free_late(phys_addr_t base, phys_addr_t size);
+
 /**
  * for_each_mem_range - iterate through memblock areas from type_a and not
  * included in type_b. Or just type_a if type_b is NULL.
diff --git a/mm/memblock.c b/mm/memblock.c
index 2cb25fe4452c..bf14aea6ab70 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -285,31 +285,27 @@ static void __init_memblock memblock_remove_region(struct memblock_type *type, u
 }
 
 #ifdef CONFIG_ARCH_DISCARD_MEMBLOCK
-
-phys_addr_t __init_memblock get_allocated_memblock_reserved_regions_info(
- phys_addr_t *addr)
-{
- if (memblock.reserved.regions == memblock_reserved_init_regions)
- return 0;
-
- *addr = __pa(memblock.reserved.regions);
-
- return PAGE_ALIGN(sizeof(struct memblock_region) *
-  memblock.reserved.max);
-}
-
-phys_addr_t __init_memblock get_allocated_memblock_memory_regions_info(
- phys_addr_t *addr)
+/**
+ * Discard memory and reserved arrays if they were allocated
+ */
+void __init memblock_discard(void)
 {
- if (memblock.memory.regions == memblock_memory_init_regions)
- return 0;
+ phys_addr_t addr, size;
 
- *addr = __pa(memblock.memory.regions);
+ if (memblock.reserved.regions != memblock_reserved_init_regions) {
+ addr = __pa(memblock.reserved.regions);
+ size = PAGE_ALIGN(sizeof(struct memblock_region) *
+  memblock.reserved.max);
+ __memblock_free_late(addr, size);
+ }
 
- return PAGE_ALIGN(sizeof(struct memblock_region) *
-  memblock.memory.max);
+ if (memblock.memory.regions != memblock_memory_init_regions) {
+ addr = __pa(memblock.memory.regions);
+ size = PAGE_ALIGN(sizeof(struct memblock_region) *
+  memblock.memory.max);
+ __memblock_free_late(addr, size);
+ }
 }
-
 #endif
 
 /**
diff --git a/mm/nobootmem.c b/mm/nobootmem.c
index 36454d0f96ee..3637809a18d0 100644
--- a/mm/nobootmem.c
+++ b/mm/nobootmem.c
@@ -146,22 +146,6 @@ static unsigned long __init free_low_memory_core_early(void)
  NULL)
  count += __free_memory_core(start, end);
 
-#ifdef CONFIG_ARCH_DISCARD_MEMBLOCK
- {
- phys_addr_t size;
-
- /* Free memblock.reserved array if it was allocated */
- size = get_allocated_memblock_reserved_regions_info(&start);
- if (size)
- count += __free_memory_core(start, start + size);
-
- /* Free memblock.memory array if it was allocated */
- size = get_allocated_memblock_memory_regions_info(&start);
- if (size)
- count += __free_memory_core(start, start + size);
- }
-#endif
-
  return count;
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fc32aa81f359..63d16c185736 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1584,6 +1584,10 @@ void __init page_alloc_init_late(void)
  /* Reinit limits that are based on free pages after the kernel is up */
  files_maxfiles_init();
 #endif
+#ifdef CONFIG_ARCH_DISCARD_MEMBLOCK
+ /* Discard memblock private memory */
+ memblock_discard();
+#endif
 
  for_each_populated_zone(zone)
  set_zone_contiguous(zone);
--
2.14.0

[v6 05/15] mm: don't access uninitialized struct pages

Pavel Tatashin
In deferred_init_memmap(), where all deferred struct pages are initialized,
we have a check like this:

    if (page->flags) {
            VM_BUG_ON(page_zone(page) != zone);
            goto free_range;
    }

This way we are checking whether the current deferred page has already been
initialized. It works because the memory for struct pages has been zeroed,
and the only way flags can be non-zero is if the page went through
__init_single_page() before.  But once we change the current behavior and
no longer zero the memory in the memblock allocator, we cannot trust
anything inside a struct page until it is initialized. This patch fixes
that.

This patch defines a new accessor, memblock_get_reserved_pfn_range(),
which returns successive ranges of reserved PFNs.  deferred_init_memmap()
calls it to determine whether a PFN and its struct page have already been
initialized.

Signed-off-by: Pavel Tatashin <[hidden email]>
Reviewed-by: Steven Sistare <[hidden email]>
Reviewed-by: Daniel Jordan <[hidden email]>
Reviewed-by: Bob Picco <[hidden email]>
---
 include/linux/memblock.h |  3 +++
 mm/memblock.c            | 54 ++++++++++++++++++++++++++++++++++++++++++------
 mm/page_alloc.c          | 11 +++++++++-
 3 files changed, 61 insertions(+), 7 deletions(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index bae11c7e7bf3..b6a2a610f5e1 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -320,6 +320,9 @@ int memblock_is_map_memory(phys_addr_t addr);
 int memblock_is_region_memory(phys_addr_t base, phys_addr_t size);
 bool memblock_is_reserved(phys_addr_t addr);
 bool memblock_is_region_reserved(phys_addr_t base, phys_addr_t size);
+void memblock_get_reserved_pfn_range(unsigned long pfn,
+     unsigned long *pfn_start,
+     unsigned long *pfn_end);
 
 extern void __memblock_dump_all(void);
 
diff --git a/mm/memblock.c b/mm/memblock.c
index bf14aea6ab70..08f449acfdd1 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1580,7 +1580,13 @@ void __init memblock_mem_limit_remove_map(phys_addr_t limit)
  memblock_cap_memory_range(0, max_addr);
 }
 
-static int __init_memblock memblock_search(struct memblock_type *type, phys_addr_t addr)
+/**
+ * Return index in regions array if addr is within the region. Otherwise
+ * return -1. If -1 is returned and *next_idx is not %NULL, sets it to the
+ * next region index or -1 if there is none.
+ */
+static int __init_memblock memblock_search(struct memblock_type *type,
+   phys_addr_t addr, int *next_idx)
 {
  unsigned int left = 0, right = type->cnt;
 
@@ -1595,22 +1601,26 @@ static int __init_memblock memblock_search(struct memblock_type *type, phys_addr
  else
  return mid;
  } while (left < right);
+
+ if (next_idx)
+ *next_idx = (right == type->cnt) ? -1 : right;
+
  return -1;
 }
 
 bool __init memblock_is_reserved(phys_addr_t addr)
 {
- return memblock_search(&memblock.reserved, addr) != -1;
+ return memblock_search(&memblock.reserved, addr, NULL) != -1;
 }
 
 bool __init_memblock memblock_is_memory(phys_addr_t addr)
 {
- return memblock_search(&memblock.memory, addr) != -1;
+ return memblock_search(&memblock.memory, addr, NULL) != -1;
 }
 
 int __init_memblock memblock_is_map_memory(phys_addr_t addr)
 {
- int i = memblock_search(&memblock.memory, addr);
+ int i = memblock_search(&memblock.memory, addr, NULL);
 
  if (i == -1)
  return false;
@@ -1622,7 +1632,7 @@ int __init_memblock memblock_search_pfn_nid(unsigned long pfn,
  unsigned long *start_pfn, unsigned long *end_pfn)
 {
  struct memblock_type *type = &memblock.memory;
- int mid = memblock_search(type, PFN_PHYS(pfn));
+ int mid = memblock_search(type, PFN_PHYS(pfn), NULL);
 
  if (mid == -1)
  return -1;
@@ -1646,7 +1656,7 @@ int __init_memblock memblock_search_pfn_nid(unsigned long pfn,
  */
 int __init_memblock memblock_is_region_memory(phys_addr_t base, phys_addr_t size)
 {
- int idx = memblock_search(&memblock.memory, base);
+ int idx = memblock_search(&memblock.memory, base, NULL);
  phys_addr_t end = base + memblock_cap_size(base, &size);
 
  if (idx == -1)
@@ -1655,6 +1665,38 @@ int __init_memblock memblock_is_region_memory(phys_addr_t base, phys_addr_t size
  memblock.memory.regions[idx].size) >= end;
 }
 
+/**
+ * memblock_get_reserved_pfn_range - search for the next reserved region
+ *
+ * @pfn: start searching from this pfn.
+ *
+ * RETURNS:
+ * [start_pfn, end_pfn), where start_pfn >= pfn. If none is found
+ * start_pfn, and end_pfn are both set to ULONG_MAX.
+ */
+void __init_memblock memblock_get_reserved_pfn_range(unsigned long pfn,
+     unsigned long *start_pfn,
+     unsigned long *end_pfn)
+{
+ struct memblock_type *type = &memblock.reserved;
+ int next_idx, idx;
+
+ idx = memblock_search(type, PFN_PHYS(pfn), &next_idx);
+ if (idx == -1 && next_idx == -1) {
+ *start_pfn = ULONG_MAX;
+ *end_pfn = ULONG_MAX;
+ return;
+ }
+
+ if (idx == -1) {
+ idx = next_idx;
+ *start_pfn = PFN_DOWN(type->regions[idx].base);
+ } else {
+ *start_pfn = pfn;
+ }
+ *end_pfn = PFN_DOWN(type->regions[idx].base + type->regions[idx].size);
+}
+
 /**
  * memblock_is_region_reserved - check if a region intersects reserved memory
  * @base: base of region to check
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 63d16c185736..983de0a8047b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1447,6 +1447,7 @@ static int __init deferred_init_memmap(void *data)
  pg_data_t *pgdat = data;
  int nid = pgdat->node_id;
  struct mminit_pfnnid_cache nid_init_state = { };
+ unsigned long resv_start_pfn = 0, resv_end_pfn = 0;
  unsigned long start = jiffies;
  unsigned long nr_pages = 0;
  unsigned long walk_start, walk_end;
@@ -1491,6 +1492,10 @@ static int __init deferred_init_memmap(void *data)
  pfn = zone->zone_start_pfn;
 
  for (; pfn < end_pfn; pfn++) {
+ if (pfn >= resv_end_pfn)
+ memblock_get_reserved_pfn_range(pfn,
+ &resv_start_pfn,
+ &resv_end_pfn);
  if (!pfn_valid_within(pfn))
  goto free_range;
 
@@ -1524,7 +1529,11 @@ static int __init deferred_init_memmap(void *data)
  cond_resched();
  }
 
- if (page->flags) {
+ /*
+ * Check if this page has already been initialized due
+ * to being reserved during boot in memblock.
+ */
+ if (pfn >= resv_start_pfn) {
  VM_BUG_ON(page_zone(page) != zone);
  goto free_range;
  }
--
2.14.0

[v6 06/15] sparc64: simplify vmemmap_populate

Pavel Tatashin
Remove duplicated code by using the common functions
vmemmap_pud_populate() and vmemmap_pgd_populate().

Signed-off-by: Pavel Tatashin <[hidden email]>
Reviewed-by: Steven Sistare <[hidden email]>
Reviewed-by: Daniel Jordan <[hidden email]>
Reviewed-by: Bob Picco <[hidden email]>
---
 arch/sparc/mm/init_64.c | 23 ++++++-----------------
 1 file changed, 6 insertions(+), 17 deletions(-)

diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 25ded711ab6c..ffb25be19389 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2590,30 +2590,19 @@ int __meminit vmemmap_populate(unsigned long vstart, unsigned long vend,
  vstart = vstart & PMD_MASK;
  vend = ALIGN(vend, PMD_SIZE);
  for (; vstart < vend; vstart += PMD_SIZE) {
- pgd_t *pgd = pgd_offset_k(vstart);
+ pgd_t *pgd = vmemmap_pgd_populate(vstart, node);
  unsigned long pte;
  pud_t *pud;
  pmd_t *pmd;
 
- if (pgd_none(*pgd)) {
- pud_t *new = vmemmap_alloc_block(PAGE_SIZE, node);
+ if (!pgd)
+ return -ENOMEM;
 
- if (!new)
- return -ENOMEM;
- pgd_populate(&init_mm, pgd, new);
- }
-
- pud = pud_offset(pgd, vstart);
- if (pud_none(*pud)) {
- pmd_t *new = vmemmap_alloc_block(PAGE_SIZE, node);
-
- if (!new)
- return -ENOMEM;
- pud_populate(&init_mm, pud, new);
- }
+ pud = vmemmap_pud_populate(pgd, vstart, node);
+ if (!pud)
+ return -ENOMEM;
 
  pmd = pmd_offset(pud, vstart);
-
  pte = pmd_val(*pmd);
  if (!(pte & _PAGE_VALID)) {
  void *block = vmemmap_alloc_block(PMD_SIZE, node);
--
2.14.0

[v6 07/15] mm: defining memblock_virt_alloc_try_nid_raw

Pavel Tatashin
A new variant of memblock_virt_alloc_* allocations:
memblock_virt_alloc_try_nid_raw()
    - Does not zero the allocated memory
    - Does not panic if the request cannot be satisfied

Signed-off-by: Pavel Tatashin <[hidden email]>
Reviewed-by: Steven Sistare <[hidden email]>
Reviewed-by: Daniel Jordan <[hidden email]>
Reviewed-by: Bob Picco <[hidden email]>
---
 include/linux/bootmem.h | 27 +++++++++++++++++++++++++
 mm/memblock.c           | 53 ++++++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 73 insertions(+), 7 deletions(-)

diff --git a/include/linux/bootmem.h b/include/linux/bootmem.h
index e223d91b6439..ea30b3987282 100644
--- a/include/linux/bootmem.h
+++ b/include/linux/bootmem.h
@@ -160,6 +160,9 @@ extern void *__alloc_bootmem_low_node(pg_data_t *pgdat,
 #define BOOTMEM_ALLOC_ANYWHERE (~(phys_addr_t)0)
 
 /* FIXME: Move to memblock.h at a point where we remove nobootmem.c */
+void *memblock_virt_alloc_try_nid_raw(phys_addr_t size, phys_addr_t align,
+      phys_addr_t min_addr,
+      phys_addr_t max_addr, int nid);
 void *memblock_virt_alloc_try_nid_nopanic(phys_addr_t size,
  phys_addr_t align, phys_addr_t min_addr,
  phys_addr_t max_addr, int nid);
@@ -176,6 +179,14 @@ static inline void * __init memblock_virt_alloc(
     NUMA_NO_NODE);
 }
 
+static inline void * __init memblock_virt_alloc_raw(
+ phys_addr_t size,  phys_addr_t align)
+{
+ return memblock_virt_alloc_try_nid_raw(size, align, BOOTMEM_LOW_LIMIT,
+    BOOTMEM_ALLOC_ACCESSIBLE,
+    NUMA_NO_NODE);
+}
+
 static inline void * __init memblock_virt_alloc_nopanic(
  phys_addr_t size, phys_addr_t align)
 {
@@ -257,6 +268,14 @@ static inline void * __init memblock_virt_alloc(
  return __alloc_bootmem(size, align, BOOTMEM_LOW_LIMIT);
 }
 
+static inline void * __init memblock_virt_alloc_raw(
+ phys_addr_t size,  phys_addr_t align)
+{
+ if (!align)
+ align = SMP_CACHE_BYTES;
+ return __alloc_bootmem_nopanic(size, align, BOOTMEM_LOW_LIMIT);
+}
+
 static inline void * __init memblock_virt_alloc_nopanic(
  phys_addr_t size, phys_addr_t align)
 {
@@ -309,6 +328,14 @@ static inline void * __init memblock_virt_alloc_try_nid(phys_addr_t size,
   min_addr);
 }
 
+static inline void * __init memblock_virt_alloc_try_nid_raw(
+ phys_addr_t size, phys_addr_t align,
+ phys_addr_t min_addr, phys_addr_t max_addr, int nid)
+{
+ return ___alloc_bootmem_node_nopanic(NODE_DATA(nid), size, align,
+ min_addr, max_addr);
+}
+
 static inline void * __init memblock_virt_alloc_try_nid_nopanic(
  phys_addr_t size, phys_addr_t align,
  phys_addr_t min_addr, phys_addr_t max_addr, int nid)
diff --git a/mm/memblock.c b/mm/memblock.c
index 08f449acfdd1..3fbf3bcb52d9 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1327,7 +1327,6 @@ static void * __init memblock_virt_alloc_internal(
  return NULL;
 done:
  ptr = phys_to_virt(alloc);
- memset(ptr, 0, size);
 
  /*
  * The min_count is set to 0 so that bootmem allocated blocks
@@ -1340,6 +1339,38 @@ static void * __init memblock_virt_alloc_internal(
  return ptr;
 }
 
+/**
+ * memblock_virt_alloc_try_nid_raw - allocate boot memory block without zeroing
+ * memory and without panicking
+ * @size: size of memory block to be allocated in bytes
+ * @align: alignment of the region and block's size
+ * @min_addr: the lower bound of the memory region from where the allocation
+ *  is preferred (phys address)
+ * @max_addr: the upper bound of the memory region from where the allocation
+ *      is preferred (phys address), or %BOOTMEM_ALLOC_ACCESSIBLE to
+ *      allocate only from memory limited by memblock.current_limit value
+ * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
+ *
+ * Public function, provides additional debug information (including caller
+ * info), if enabled. Does not zero allocated memory, does not panic if request
+ * cannot be satisfied.
+ *
+ * RETURNS:
+ * Virtual address of allocated memory block on success, NULL on failure.
+ */
+void * __init memblock_virt_alloc_try_nid_raw(
+ phys_addr_t size, phys_addr_t align,
+ phys_addr_t min_addr, phys_addr_t max_addr,
+ int nid)
+{
+ memblock_dbg("%s: %llu bytes align=0x%llx nid=%d from=0x%llx max_addr=0x%llx %pF\n",
+     __func__, (u64)size, (u64)align, nid, (u64)min_addr,
+     (u64)max_addr, (void *)_RET_IP_);
+
+ return memblock_virt_alloc_internal(size, align,
+    min_addr, max_addr, nid);
+}
+
 /**
  * memblock_virt_alloc_try_nid_nopanic - allocate boot memory block
  * @size: size of memory block to be allocated in bytes
@@ -1351,8 +1382,8 @@ static void * __init memblock_virt_alloc_internal(
  *      allocate only from memory limited by memblock.current_limit value
  * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
  *
- * Public version of _memblock_virt_alloc_try_nid_nopanic() which provides
- * additional debug information (including caller info), if enabled.
+ * Public function, provides additional debug information (including caller
+ * info), if enabled. This function zeroes the allocated memory.
  *
  * RETURNS:
  * Virtual address of allocated memory block on success, NULL on failure.
@@ -1362,11 +1393,17 @@ void * __init memblock_virt_alloc_try_nid_nopanic(
  phys_addr_t min_addr, phys_addr_t max_addr,
  int nid)
 {
+ void *ptr;
+
  memblock_dbg("%s: %llu bytes align=0x%llx nid=%d from=0x%llx max_addr=0x%llx %pF\n",
      __func__, (u64)size, (u64)align, nid, (u64)min_addr,
      (u64)max_addr, (void *)_RET_IP_);
- return memblock_virt_alloc_internal(size, align, min_addr,
-     max_addr, nid);
+
+ ptr = memblock_virt_alloc_internal(size, align,
+   min_addr, max_addr, nid);
+ if (ptr)
+ memset(ptr, 0, size);
+ return ptr;
 }
 
 /**
@@ -1380,7 +1417,7 @@ void * __init memblock_virt_alloc_try_nid_nopanic(
  *      allocate only from memory limited by memblock.current_limit value
  * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
  *
- * Public panicking version of _memblock_virt_alloc_try_nid_nopanic()
+ * Public panicking version of memblock_virt_alloc_try_nid_nopanic()
  * which provides debug information (including caller info), if enabled,
  * and panics if the request can not be satisfied.
  *
@@ -1399,8 +1436,10 @@ void * __init memblock_virt_alloc_try_nid(
      (u64)max_addr, (void *)_RET_IP_);
  ptr = memblock_virt_alloc_internal(size, align,
    min_addr, max_addr, nid);
- if (ptr)
+ if (ptr) {
+ memset(ptr, 0, size);
  return ptr;
+ }
 
  panic("%s: Failed to allocate %llu bytes align=0x%llx nid=%d from=0x%llx max_addr=0x%llx\n",
       __func__, (u64)size, (u64)align, nid, (u64)min_addr,
--
2.14.0

[v6 08/15] mm: zero struct pages during initialization

Pavel Tatashin
Add struct page zeroing as a part of initialization of other fields in
__init_single_page().

Signed-off-by: Pavel Tatashin <[hidden email]>
Reviewed-by: Steven Sistare <[hidden email]>
Reviewed-by: Daniel Jordan <[hidden email]>
Reviewed-by: Bob Picco <[hidden email]>
---
 include/linux/mm.h | 9 +++++++++
 mm/page_alloc.c    | 1 +
 2 files changed, 10 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 46b9ac5e8569..183ac5e733db 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -93,6 +93,15 @@ extern int mmap_rnd_compat_bits __read_mostly;
 #define mm_forbids_zeropage(X) (0)
 #endif
 
+/*
+ * On some architectures it is expensive to call memset() for small sizes.
+ * Those architectures should provide their own implementation of "struct page"
+ * zeroing by defining this macro in <asm/pgtable.h>.
+ */
+#ifndef mm_zero_struct_page
+#define mm_zero_struct_page(pp)  ((void)memset((pp), 0, sizeof(struct page)))
+#endif
+
 /*
  * Default maximum number of active map areas, this limits the number of vmas
  * per mm struct. Users can overwrite this number by sysctl but there is a
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 983de0a8047b..4d32c1fa4c6c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1168,6 +1168,7 @@ static void free_one_page(struct zone *zone,
 static void __meminit __init_single_page(struct page *page, unsigned long pfn,
  unsigned long zone, int nid)
 {
+ mm_zero_struct_page(page);
  set_page_links(page, zone, nid, pfn);
  init_page_count(page);
  page_mapcount_reset(page);
--
2.14.0

[v6 09/15] sparc64: optimized struct page zeroing

Pavel Tatashin
Add an optimized mm_zero_struct_page(), so struct pages are zeroed without
calling memset(). We do eight to ten regular stores, based on the size of
struct page. The compiler optimizes out the conditions of the switch()
statement.

Signed-off-by: Pavel Tatashin <[hidden email]>
Reviewed-by: Steven Sistare <[hidden email]>
Reviewed-by: Daniel Jordan <[hidden email]>
Reviewed-by: Bob Picco <[hidden email]>
---
 arch/sparc/include/asm/pgtable_64.h | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 6fbd931f0570..cee5cc7ccc51 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -230,6 +230,36 @@ extern unsigned long _PAGE_ALL_SZ_BITS;
 extern struct page *mem_map_zero;
 #define ZERO_PAGE(vaddr) (mem_map_zero)
 
+/* This macro must be updated when the size of struct page grows above 80
+ * or reduces below 64.
+ * The idea that compiler optimizes out switch() statement, and only
+ * leaves clrx instructions
+ */
+#define mm_zero_struct_page(pp) do { \
+ unsigned long *_pp = (void *)(pp); \
+ \
+ /* Check that struct page is either 64, 72, or 80 bytes */ \
+ BUILD_BUG_ON(sizeof(struct page) & 7); \
+ BUILD_BUG_ON(sizeof(struct page) < 64); \
+ BUILD_BUG_ON(sizeof(struct page) > 80); \
+ \
+ switch (sizeof(struct page)) { \
+ case 80: \
+ _pp[9] = 0; /* fallthrough */ \
+ case 72: \
+ _pp[8] = 0; /* fallthrough */ \
+ default: \
+ _pp[7] = 0; \
+ _pp[6] = 0; \
+ _pp[5] = 0; \
+ _pp[4] = 0; \
+ _pp[3] = 0; \
+ _pp[2] = 0; \
+ _pp[1] = 0; \
+ _pp[0] = 0; \
+ } \
+} while (0)
+
 /* PFNs are real physical page numbers.  However, mem_map only begins to record
  * per-page information starting at pfn_base.  This is to handle systems where
  * the first physical page in the machine is at some huge physical address,
--
2.14.0

[v6 10/15] x86/kasan: explicitly zero kasan shadow memory

Pavel Tatashin
To optimize the performance of struct page initialization,
vmemmap_populate() will no longer zero memory.

We must explicitly zero the memory that is allocated by vmemmap_populate()
for kasan, as this memory does not go through struct page initialization
path.

Signed-off-by: Pavel Tatashin <[hidden email]>
Reviewed-by: Steven Sistare <[hidden email]>
Reviewed-by: Daniel Jordan <[hidden email]>
Reviewed-by: Bob Picco <[hidden email]>
---
 arch/x86/mm/kasan_init_64.c | 67 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 67 insertions(+)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 02c9d7553409..ec6b2272fd80 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -84,6 +84,66 @@ static struct notifier_block kasan_die_notifier = {
 };
 #endif
 
+/*
+ * x86 variant of vmemmap_populate() uses either PMD_SIZE pages or base pages
+ * to map allocated memory.  This routine determines the page size for the given
+ * address from vmemmap.
+ */
+static u64 get_vmemmap_pgsz(u64 addr)
+{
+ pgd_t *pgd;
+ p4d_t *p4d;
+ pud_t *pud;
+ pmd_t *pmd;
+
+ pgd = pgd_offset_k(addr);
+ BUG_ON(pgd_none(*pgd) || pgd_large(*pgd));
+
+ p4d = p4d_offset(pgd, addr);
+ BUG_ON(p4d_none(*p4d) || p4d_large(*p4d));
+
+ pud = pud_offset(p4d, addr);
+ BUG_ON(pud_none(*pud) || pud_large(*pud));
+
+ pmd = pmd_offset(pud, addr);
+ BUG_ON(pmd_none(*pmd));
+
+ if (pmd_large(*pmd))
+ return PMD_SIZE;
+ return PAGE_SIZE;
+}
+
+/*
+ * Memory that was allocated by vmemmap_populate is not zeroed, so we must
+ * zero it here explicitly.
+ */
+static void
+zero_vmemmap_populated_memory(void)
+{
+ u64 i, start, end;
+
+ for (i = 0; i < E820_MAX_ENTRIES && pfn_mapped[i].end; i++) {
+ void *kaddr_start = pfn_to_kaddr(pfn_mapped[i].start);
+ void *kaddr_end = pfn_to_kaddr(pfn_mapped[i].end);
+
+ start = (u64)kasan_mem_to_shadow(kaddr_start);
+ end = (u64)kasan_mem_to_shadow(kaddr_end);
+
+ /* Round to the start end of the mapped pages */
+ start = rounddown(start, get_vmemmap_pgsz(start));
+ end = roundup(end, get_vmemmap_pgsz(start));
+ memset((void *)start, 0, end - start);
+ }
+
+ start = (u64)kasan_mem_to_shadow(_stext);
+ end = (u64)kasan_mem_to_shadow(_end);
+
+ /* Round to the start end of the mapped pages */
+ start = rounddown(start, get_vmemmap_pgsz(start));
+ end = roundup(end, get_vmemmap_pgsz(start));
+ memset((void *)start, 0, end - start);
+}
+
 void __init kasan_early_init(void)
 {
  int i;
@@ -156,6 +216,13 @@ void __init kasan_init(void)
  pte_t pte = __pte(__pa(kasan_zero_page) | __PAGE_KERNEL_RO);
  set_pte(&kasan_zero_pte[i], pte);
  }
+
+ /*
+ * vmemmap_populate does not zero the memory, so we need to zero it
+ * explicitly
+ */
+ zero_vmemmap_populated_memory();
+
  /* Flush TLBs again to be sure that write protection applied. */
  __flush_tlb_all();
 
--
2.14.0

[v6 11/15] arm64/kasan: explicitly zero kasan shadow memory

Pavel Tatashin
To optimize the performance of struct page initialization,
vmemmap_populate() will no longer zero memory.

We must explicitly zero the memory that is allocated by vmemmap_populate()
for kasan, as this memory does not go through struct page initialization
path.

Signed-off-by: Pavel Tatashin <[hidden email]>
Reviewed-by: Steven Sistare <[hidden email]>
Reviewed-by: Daniel Jordan <[hidden email]>
Reviewed-by: Bob Picco <[hidden email]>
---
 arch/arm64/mm/kasan_init.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 81f03959a4ab..e78a9ecbb687 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -135,6 +135,41 @@ static void __init clear_pgds(unsigned long start,
  set_pgd(pgd_offset_k(start), __pgd(0));
 }
 
+/*
+ * Memory that was allocated by vmemmap_populate is not zeroed, so we must
+ * zero it here explicitly.
+ */
+static void
+zero_vmemmap_populated_memory(void)
+{
+ struct memblock_region *reg;
+ u64 start, end;
+
+ for_each_memblock(memory, reg) {
+ start = __phys_to_virt(reg->base);
+ end = __phys_to_virt(reg->base + reg->size);
+
+ if (start >= end)
+ break;
+
+ start = (u64)kasan_mem_to_shadow((void *)start);
+ end = (u64)kasan_mem_to_shadow((void *)end);
+
+ /* Round to the start end of the mapped pages */
+ start = round_down(start, SWAPPER_BLOCK_SIZE);
+ end = round_up(end, SWAPPER_BLOCK_SIZE);
+ memset((void *)start, 0, end - start);
+ }
+
+ start = (u64)kasan_mem_to_shadow(_text);
+ end = (u64)kasan_mem_to_shadow(_end);
+
+ /* Round to the start end of the mapped pages */
+ start = round_down(start, SWAPPER_BLOCK_SIZE);
+ end = round_up(end, SWAPPER_BLOCK_SIZE);
+ memset((void *)start, 0, end - start);
+}
+
 void __init kasan_init(void)
 {
  u64 kimg_shadow_start, kimg_shadow_end;
@@ -205,8 +240,15 @@ void __init kasan_init(void)
  pfn_pte(sym_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));
 
  memset(kasan_zero_page, 0, PAGE_SIZE);
+
  cpu_replace_ttbr1(lm_alias(swapper_pg_dir));
 
+ /*
+ * vmemmap_populate does not zero the memory, so we need to zero it
+ * explicitly
+ */
+ zero_vmemmap_populated_memory();
+
  /* At this point kasan is fully initialized. Enable error messages */
  init_task.kasan_depth = 0;
  pr_info("KernelAddressSanitizer initialized\n");
--
2.14.0

[v6 12/15] mm: explicitly zero pagetable memory

Pavel Tatashin
Soon vmemmap_alloc_block() will no longer zero the block, so zero the memory
at its call sites for everything except struct pages.  Struct page memory
is zeroed by struct page initialization.

Signed-off-by: Pavel Tatashin <[hidden email]>
Reviewed-by: Steven Sistare <[hidden email]>
Reviewed-by: Daniel Jordan <[hidden email]>
Reviewed-by: Bob Picco <[hidden email]>
---
 mm/sparse-vmemmap.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index c50b1a14d55e..d40c721ab19f 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -191,6 +191,7 @@ pmd_t * __meminit vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node)
  void *p = vmemmap_alloc_block(PAGE_SIZE, node);
  if (!p)
  return NULL;
+ memset(p, 0, PAGE_SIZE);
  pmd_populate_kernel(&init_mm, pmd, p);
  }
  return pmd;
@@ -203,6 +204,7 @@ pud_t * __meminit vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node)
  void *p = vmemmap_alloc_block(PAGE_SIZE, node);
  if (!p)
  return NULL;
+ memset(p, 0, PAGE_SIZE);
  pud_populate(&init_mm, pud, p);
  }
  return pud;
@@ -215,6 +217,7 @@ p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node)
  void *p = vmemmap_alloc_block(PAGE_SIZE, node);
  if (!p)
  return NULL;
+ memset(p, 0, PAGE_SIZE);
  p4d_populate(&init_mm, p4d, p);
  }
  return p4d;
@@ -227,6 +230,7 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
  void *p = vmemmap_alloc_block(PAGE_SIZE, node);
  if (!p)
  return NULL;
+ memset(p, 0, PAGE_SIZE);
  pgd_populate(&init_mm, pgd, p);
  }
  return pgd;
--
2.14.0

[v6 13/15] mm: stop zeroing memory during allocation in vmemmap

Pavel Tatashin
Replace the allocators in sparse-vmemmap with the non-zeroing versions. This
way we get the performance improvement of zeroing the memory in parallel
when struct pages are initialized.

Signed-off-by: Pavel Tatashin <[hidden email]>
Reviewed-by: Steven Sistare <[hidden email]>
Reviewed-by: Daniel Jordan <[hidden email]>
Reviewed-by: Bob Picco <[hidden email]>
---
 mm/sparse-vmemmap.c | 6 +++---
 mm/sparse.c         | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index d40c721ab19f..3b646b5ce1b6 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -41,7 +41,7 @@ static void * __ref __earlyonly_bootmem_alloc(int node,
  unsigned long align,
  unsigned long goal)
 {
- return memblock_virt_alloc_try_nid(size, align, goal,
+ return memblock_virt_alloc_try_nid_raw(size, align, goal,
     BOOTMEM_ALLOC_ACCESSIBLE, node);
 }
 
@@ -56,11 +56,11 @@ void * __meminit vmemmap_alloc_block(unsigned long size, int node)
 
  if (node_state(node, N_HIGH_MEMORY))
  page = alloc_pages_node(
- node, GFP_KERNEL | __GFP_ZERO | __GFP_RETRY_MAYFAIL,
+ node, GFP_KERNEL | __GFP_RETRY_MAYFAIL,
  get_order(size));
  else
  page = alloc_pages(
- GFP_KERNEL | __GFP_ZERO | __GFP_RETRY_MAYFAIL,
+ GFP_KERNEL | __GFP_RETRY_MAYFAIL,
  get_order(size));
  if (page)
  return page_address(page);
diff --git a/mm/sparse.c b/mm/sparse.c
index 7b4be3fd5cac..0e315766ad11 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -441,9 +441,9 @@ void __init sparse_mem_maps_populate_node(struct page **map_map,
  }
 
  size = PAGE_ALIGN(size);
- map = memblock_virt_alloc_try_nid(size * map_count,
-  PAGE_SIZE, __pa(MAX_DMA_ADDRESS),
-  BOOTMEM_ALLOC_ACCESSIBLE, nodeid);
+ map = memblock_virt_alloc_try_nid_raw(size * map_count,
+      PAGE_SIZE, __pa(MAX_DMA_ADDRESS),
+      BOOTMEM_ALLOC_ACCESSIBLE, nodeid);
  if (map) {
  for (pnum = pnum_begin; pnum < pnum_end; pnum++) {
  if (!present_section_nr(pnum))
--
2.14.0

[v6 14/15] mm: optimize early system hash allocations

Pavel Tatashin
Clients can call alloc_large_system_hash() with the HASH_ZERO flag to specify
that the memory allocated for the system hash needs to be zeroed; otherwise
the memory does not need to be zeroed, and the client will initialize it.

If the memory does not need to be zeroed, call the new
memblock_virt_alloc_raw() interface, and thus improve boot performance.

Signed-off-by: Pavel Tatashin <[hidden email]>
Reviewed-by: Steven Sistare <[hidden email]>
Reviewed-by: Daniel Jordan <[hidden email]>
Reviewed-by: Bob Picco <[hidden email]>
---
 mm/page_alloc.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4d32c1fa4c6c..000806298dfb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7354,18 +7354,17 @@ void *__init alloc_large_system_hash(const char *tablename,
 
  log2qty = ilog2(numentries);
 
- /*
- * memblock allocator returns zeroed memory already, so HASH_ZERO is
- * currently not used when HASH_EARLY is specified.
- */
  gfp_flags = (flags & HASH_ZERO) ? GFP_ATOMIC | __GFP_ZERO : GFP_ATOMIC;
  do {
  size = bucketsize << log2qty;
- if (flags & HASH_EARLY)
- table = memblock_virt_alloc_nopanic(size, 0);
- else if (hashdist)
+ if (flags & HASH_EARLY) {
+ if (flags & HASH_ZERO)
+ table = memblock_virt_alloc_nopanic(size, 0);
+ else
+ table = memblock_virt_alloc_raw(size, 0);
+ } else if (hashdist) {
  table = __vmalloc(size, gfp_flags, PAGE_KERNEL);
- else {
+ } else {
  /*
  * If bucketsize is not a power-of-two, we may free
  * some pages at the end of hash table which
--
2.14.0

[v6 15/15] mm: debug for raw allocator

Pavel Tatashin
When CONFIG_DEBUG_VM is enabled, this patch sets all the memory that is
returned by memblock_virt_alloc_try_nid_raw() to ones, to ensure that no
callers expect zeroed memory.

Signed-off-by: Pavel Tatashin <[hidden email]>
Reviewed-by: Steven Sistare <[hidden email]>
Reviewed-by: Daniel Jordan <[hidden email]>
Reviewed-by: Bob Picco <[hidden email]>
---
 mm/memblock.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 3fbf3bcb52d9..29fcb1dd8a81 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1363,12 +1363,19 @@ void * __init memblock_virt_alloc_try_nid_raw(
  phys_addr_t min_addr, phys_addr_t max_addr,
  int nid)
 {
+ void *ptr;
+
  memblock_dbg("%s: %llu bytes align=0x%llx nid=%d from=0x%llx max_addr=0x%llx %pF\n",
      __func__, (u64)size, (u64)align, nid, (u64)min_addr,
      (u64)max_addr, (void *)_RET_IP_);
 
- return memblock_virt_alloc_internal(size, align,
-    min_addr, max_addr, nid);
+ ptr = memblock_virt_alloc_internal(size, align,
+   min_addr, max_addr, nid);
+#ifdef CONFIG_DEBUG_VM
+ if (ptr && size > 0)
+ memset(ptr, 0xff, size);
+#endif
+ return ptr;
 }
 
 /**
--
2.14.0

Re: [v6 11/15] arm64/kasan: explicitly zero kasan shadow memory

Will Deacon
On Mon, Aug 07, 2017 at 04:38:45PM -0400, Pavel Tatashin wrote:

> To optimize the performance of struct page initialization,
> vmemmap_populate() will no longer zero memory.
>
> We must explicitly zero the memory that is allocated by vmemmap_populate()
> for kasan, as this memory does not go through struct page initialization
> path.
>
> Signed-off-by: Pavel Tatashin <[hidden email]>
> Reviewed-by: Steven Sistare <[hidden email]>
> Reviewed-by: Daniel Jordan <[hidden email]>
> Reviewed-by: Bob Picco <[hidden email]>
> ---
>  arch/arm64/mm/kasan_init.c | 42 ++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 42 insertions(+)
>
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 81f03959a4ab..e78a9ecbb687 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -135,6 +135,41 @@ static void __init clear_pgds(unsigned long start,
>   set_pgd(pgd_offset_k(start), __pgd(0));
>  }
>  
> +/*
> + * Memory that was allocated by vmemmap_populate is not zeroed, so we must
> + * zero it here explicitly.
> + */
> +static void
> +zero_vmemmap_populated_memory(void)
> +{
> + struct memblock_region *reg;
> + u64 start, end;
> +
> + for_each_memblock(memory, reg) {
> + start = __phys_to_virt(reg->base);
> + end = __phys_to_virt(reg->base + reg->size);
> +
> + if (start >= end)
> + break;
> +
> + start = (u64)kasan_mem_to_shadow((void *)start);
> + end = (u64)kasan_mem_to_shadow((void *)end);
> +
> + /* Round to the start end of the mapped pages */
> + start = round_down(start, SWAPPER_BLOCK_SIZE);
> + end = round_up(end, SWAPPER_BLOCK_SIZE);
> + memset((void *)start, 0, end - start);
> + }
> +
> + start = (u64)kasan_mem_to_shadow(_text);
> + end = (u64)kasan_mem_to_shadow(_end);
> +
> + /* Round to the start end of the mapped pages */
> + start = round_down(start, SWAPPER_BLOCK_SIZE);
> + end = round_up(end, SWAPPER_BLOCK_SIZE);
> + memset((void *)start, 0, end - start);
> +}

I can't help but think this would be an awful lot nicer if you made
vmemmap_alloc_block take extra GFP flags as a parameter. That way, we could
implement a version of vmemmap_populate that does the zeroing when we need
it, without having to duplicate a bunch of the code like this. I think it
would also be less error-prone, because you wouldn't have to do the
allocation and the zeroing in two separate steps.

Will
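
For concreteness, a minimal sketch of the interface Will describes; the
extra gfp parameter is illustrative and was not posted in this series. The
idea is to thread a gfp argument through the allocator so callers that
still want zeroed memory (such as kasan) can pass __GFP_ZERO, while the
struct page path passes 0:

	void * __meminit vmemmap_alloc_block(unsigned long size, int node, gfp_t gfp)
	{
		void *ptr;

		/* If the main allocator is up, use it; otherwise fall back to memblock. */
		if (slab_is_available()) {
			struct page *page;

			page = alloc_pages_node(node,
					GFP_KERNEL | __GFP_RETRY_MAYFAIL | gfp,
					get_order(size));
			return page ? page_address(page) : NULL;
		}

		/* Early boot: memblock memory is no longer pre-zeroed after this series. */
		ptr = __earlyonly_bootmem_alloc(node, size, size,
						__pa(MAX_DMA_ADDRESS));
		if (ptr && (gfp & __GFP_ZERO))
			memset(ptr, 0, size);
		return ptr;
	}

The struct page path would then call vmemmap_alloc_block(size, node, 0),
and a kasan-aware vmemmap_populate() would pass __GFP_ZERO.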
Re: [v6 11/15] arm64/kasan: explicitly zero kasan shadow memory

Pavel Tatashin
Hi Will,

Thank you for looking at this change. What you described was in my
previous iterations of this project.

See for example here: https://lkml.org/lkml/2017/5/5/369

I was asked to remove that flag, and only zero memory in place when
needed. Overall the current approach is better everywhere else in the
kernel, but it adds a little extra code to kasan initialization.

Pasha

On 08/08/2017 05:07 AM, Will Deacon wrote:

> On Mon, Aug 07, 2017 at 04:38:45PM -0400, Pavel Tatashin wrote:
>> To optimize the performance of struct page initialization,
>> vmemmap_populate() will no longer zero memory.
>>
>> We must explicitly zero the memory that is allocated by vmemmap_populate()
>> for kasan, as this memory does not go through struct page initialization
>> path.
>>
>> Signed-off-by: Pavel Tatashin <[hidden email]>
>> Reviewed-by: Steven Sistare <[hidden email]>
>> Reviewed-by: Daniel Jordan <[hidden email]>
>> Reviewed-by: Bob Picco <[hidden email]>
>> ---
>>   arch/arm64/mm/kasan_init.c | 42 ++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 42 insertions(+)
>>
>> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
>> index 81f03959a4ab..e78a9ecbb687 100644
>> --- a/arch/arm64/mm/kasan_init.c
>> +++ b/arch/arm64/mm/kasan_init.c
>> @@ -135,6 +135,41 @@ static void __init clear_pgds(unsigned long start,
>>   set_pgd(pgd_offset_k(start), __pgd(0));
>>   }
>>  
>> +/*
>> + * Memory that was allocated by vmemmap_populate is not zeroed, so we must
>> + * zero it here explicitly.
>> + */
>> +static void
>> +zero_vmemmap_populated_memory(void)
>> +{
>> + struct memblock_region *reg;
>> + u64 start, end;
>> +
>> + for_each_memblock(memory, reg) {
>> + start = __phys_to_virt(reg->base);
>> + end = __phys_to_virt(reg->base + reg->size);
>> +
>> + if (start >= end)
>> + break;
>> +
>> + start = (u64)kasan_mem_to_shadow((void *)start);
>> + end = (u64)kasan_mem_to_shadow((void *)end);
>> +
>> + /* Round to the start end of the mapped pages */
>> + start = round_down(start, SWAPPER_BLOCK_SIZE);
>> + end = round_up(end, SWAPPER_BLOCK_SIZE);
>> + memset((void *)start, 0, end - start);
>> + }
>> +
>> + start = (u64)kasan_mem_to_shadow(_text);
>> + end = (u64)kasan_mem_to_shadow(_end);
>> +
>> + /* Round to the start end of the mapped pages */
>> + start = round_down(start, SWAPPER_BLOCK_SIZE);
>> + end = round_up(end, SWAPPER_BLOCK_SIZE);
>> + memset((void *)start, 0, end - start);
>> +}
>
> I can't help but think this would be an awful lot nicer if you made
> vmemmap_alloc_block take extra GFP flags as a parameter. That way, we could
> implement a version of vmemmap_populate that does the zeroing when we need
> it, without having to duplicate a bunch of the code like this. I think it
> would also be less error-prone, because you wouldn't have to do the
> allocation and the zeroing in two separate steps.
>
> Will
>
Re: [v6 11/15] arm64/kasan: explicitly zero kasan shadow memory

Will Deacon
On Tue, Aug 08, 2017 at 07:49:22AM -0400, Pasha Tatashin wrote:

> Hi Will,
>
> Thank you for looking at this change. What you described was in my previous
> iterations of this project.
>
> See for example here: https://lkml.org/lkml/2017/5/5/369
>
> I was asked to remove that flag, and only zero memory in place when needed.
> Overall the current approach is better everywhere else in the kernel, but it
> adds a little extra code to kasan initialization.

Damn, I actually prefer the flag :)

But actually, if you look at our implementation of vmemmap_populate, then we
have our own version of vmemmap_populate_basepages that terminates at the
pmd level anyway if ARM64_SWAPPER_USES_SECTION_MAPS. If there's resistance
to do this in the core code, then I'd be inclined to replace our
vmemmap_populate implementation in the arm64 code with a single version that
can terminate at either the PMD or the PTE level, and do zeroing if
required. We're already special-casing it, so we don't really lose anything
imo.

Will
Reply | Threaded
Open this post in threaded view
|  
Report Content as Inappropriate

Re: [v6 11/15] arm64/kasan: explicitly zero kasan shadow memory

Pavel Tatashin
Hi Will,

 > Damn, I actually prefer the flag :)
 >
 > But actually, if you look at our implementation of vmemmap_populate, then
 > we have our own version of vmemmap_populate_basepages that terminates at
 > the pmd level anyway if ARM64_SWAPPER_USES_SECTION_MAPS. If there's
 > resistance to do this in the core code, then I'd be inclined to replace
 > our vmemmap_populate implementation in the arm64 code with a single
 > version that can terminate at either the PMD or the PTE level, and do
 > zeroing if required. We're already special-casing it, so we don't really
 > lose anything imo.

Another approach is to create a new mapping interface for kasan only, as
Ard Biesheuvel wrote:

 > KASAN uses vmemmap_populate as a convenience: kasan has nothing to do
 > with vmemmap, but the function already existed and happened to do what
 > KASAN requires.
 >
 > Given that that will no longer be the case, it would be far better to
 > stop using vmemmap_populate altogether, and clone it into a KASAN
 > specific version (with an appropriate name) with the zeroing folded
 > into it.

I agree with this statement, but I think it should not be part of this
project.

Pasha
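
For reference, a rough sketch of the kasan-specific populate routine Ard
suggests, with the zeroing folded into the mapping loop. The function name
is illustrative and this is not code from the series; it simply reuses the
existing vmemmap_*_populate() helpers for base pages and zeroes each shadow
page once it is mapped:

	static int __init kasan_populate_shadow(unsigned long start, unsigned long end,
						int node)
	{
		unsigned long addr;

		/* Map base pages for the shadow range and zero each one as it is mapped. */
		for (addr = start; addr < end; addr += PAGE_SIZE) {
			pgd_t *pgd = vmemmap_pgd_populate(addr, node);
			p4d_t *p4d;
			pud_t *pud;
			pmd_t *pmd;

			if (!pgd)
				return -ENOMEM;
			p4d = vmemmap_p4d_populate(pgd, addr, node);
			if (!p4d)
				return -ENOMEM;
			pud = vmemmap_pud_populate(p4d, addr, node);
			if (!pud)
				return -ENOMEM;
			pmd = vmemmap_pmd_populate(pud, addr, node);
			if (!pmd)
				return -ENOMEM;
			if (!vmemmap_pte_populate(pmd, addr, node))
				return -ENOMEM;
			/* The shadow page is now mapped at addr; zero (unpoison) it. */
			memset((void *)addr, 0, PAGE_SIZE);
		}
		return 0;
	}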