[RFC v7 00/25] powerpc: Memory Protection Keys


Ram Pai
Memory protection keys enable an application to protect its
own address space from inadvertent access or corruption by
itself.

The overall idea:
-----------------
 A process allocates a key and associates it with an address
 range within its address space. The process can then
 dynamically set read/write permissions on the key without
 involving the kernel. Any access that violates the
 permissions of the address range, as defined by its
 associated key, results in a segmentation fault.
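
As a rough illustration of that model (not part of this series; the
real coverage lives in the selftests), a minimal userspace sketch
could look like the one below. It assumes a C library that exposes
the pkey_alloc()/pkey_mprotect()/pkey_free() wrappers; older libcs
would need raw syscall(2) calls instead.

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 4096;
            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
            int pkey;

            if (p == MAP_FAILED)
                    return 1;

            /* allocate a key that denies writes */
            pkey = pkey_alloc(0, PKEY_DISABLE_WRITE);
            if (pkey < 0)
                    return 1;       /* pkeys not supported or exhausted */

            /* tag the mapping with the key; page protections stay R/W */
            if (pkey_mprotect(p, len, PROT_READ | PROT_WRITE, pkey))
                    return 1;

            volatile char c = p[0]; /* reads are still allowed */
            (void)c;
            p[0] = 1;               /* this write now faults with SIGSEGV */

            pkey_free(pkey);
            return 0;
    }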

This patch series enables the feature on the PPC64 HPTE
platform.

ISA 3.0, section 5.7.13, describes the detailed
specification.


High-level view of the design:
------------------------------
When an application associates a key with an address range,
program the key into the Linux PTE. When the MMU detects a
page fault, allocate a hash page table entry (HPTE) and
program the key into it. Finally, when the MMU detects a key
violation due to an invalid application access, invoke the
registered signal handler and provide the violated key number
as well as the state of the key register (AMR) at the time of
the fault.
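
For the fault-reporting side of that flow, here is a hedged sketch of
a userspace handler. It assumes the generic SEGV_PKUERR/si_pkey
siginfo plumbing (which later patches in this series wire up for
powerpc) and a C library that exposes the si_pkey field.

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void segv_handler(int sig, siginfo_t *si, void *ctx)
    {
            /* si_pkey: assumes a libc that exposes this siginfo field */
            if (si->si_code == SEGV_PKUERR)
                    fprintf(stderr, "pkey %d violated at address %p\n",
                            si->si_pkey, si->si_addr);
            /* fprintf() is not async-signal-safe; fine for a demo that exits */
            _exit(1);
    }

    static void install_segv_handler(void)
    {
            struct sigaction sa = { 0 };

            sa.sa_sigaction = segv_handler;
            sa.sa_flags = SA_SIGINFO;
            sigaction(SIGSEGV, &sa, NULL);
    }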


Testing:
-------
This patch series has passed all the protection key tests
available in the selftests directory. The tests are updated
to work on both x86 and powerpc.
NOTE: All the selftest-related patches will be part of a
separate patch series.


Outstanding issues:
-------------------
How will an application know whether pkeys are enabled and,
if so, how many pkeys are available? Is PKEY_DISABLE_EXECUTE
supported?  -- Ben.


History:
-------
version v7:
        (1) refer to the device-tree property to enable
            protection keys.
        (2) add 4K PTE support.
        (3) fix a couple of bugs noticed by Thiago.
        (4) decouple this patch series from the arch-
            independent code. The series can now stand by
            itself, with one kludge patch (2).

version v6:
        (1) selftest changes are broken down into 20
            incremental patches.
        (2) a separate key-allocation mask that includes
            PKEY_DISABLE_EXECUTE is added for powerpc.
        (3) the pkey feature is enabled for the 64K HPT case
            only; RPT and 4K HPT are disabled.
        (4) documentation is updated to better capture the
            semantics.
        (5) introduced arch_pkeys_enabled() to find out if an
            arch enables pkeys, and changed the logic that
            displays the key value in smaps accordingly.
        (6) code rearranged in many places based on comments
            from Dave Hansen, Balbir and Anshuman.
        (7) fixed one bug where a bogus key could be
            associated successfully in pkey_mprotect().

version v5:
        (1) reverted back to the old design -- store the key
            in the pte instead of bypassing it. The v4 design
            slowed down the hash page path.
        (2) detect key violations when the kernel is told to
            access user pages.
        (3) further refined the patches into smaller
            consumable units.
        (4) the page fault handler captures the faulting key
            from the pte instead of the vma. This closes a
            race between a key update in the vma and a key
            fault caused by the key programmed in the pte.
        (5) a key created as access-denied should also deny
            writes. Fixed it.
        (6) the protection-key number is displayed in smaps
            the x86 way.

version v4:
        (1) patches no longer depend on the pte bits to
            program the hpte.
                -- comment by Balbir
        (2) documentation updates.
        (3) fixed a bug in the selftest.
        (4) unlike x86, powerpc lets the signal handler change
            key permission bits; the change will persist
            across signal handler boundaries. Earlier we
            allowed the signal handler to modify a field in
            the siginfo structure which would then be used by
            the kernel to program the key protection register
            (AMR).
                -- resolves an issue raised by Ben:
                "Calls to sys_swapcontext with a made-up
                context will end up with a crap AMR if done
                by code who didn't know about that register."
        (5) these changes enable protection keys on 4k-page
            kernels as well.

version v3:
        (1) split the patches into smaller consumable patches.
        (2) added the ability to disable execute permission on
            a key at creation.
        (3) renamed calc_pte_to_hpte_pkey_bits() to
            pte_to_hpte_pkey_bits().
                -- suggested by Anshuman
        (4) some code optimization and clarity in
            do_page_fault().
        (5) a bug fix while invalidating a hpte slot in
            __hash_page_4K().
                -- noticed by Aneesh

version v2:
        (1) documentation and selftest added.
        (2) fixed a bug in the 4k-hpte-backed 64k-pte case,
            where page invalidation was not done correctly,
            and initialization of the second-part-of-the-pte
            was not done correctly if the pte was not yet
            hashed with an hpte.
                -- reported by Aneesh
        (3) fixed ABI breakage caused in the siginfo
            structure.
                -- reported by Anshuman

version v1: Initial version


Ram Pai (25):
  powerpc: define an additional vma bit for protection keys.
  powerpc: track allocation status of all pkeys
  powerpc: helper function to read, write AMR, IAMR, UAMOR registers
  powerpc: helper functions to initialize AMR, IAMR and UAMOR registers
  powerpc: cleanup AMR, IAMR when a key is allocated or freed
  powerpc: implementation for arch_set_user_pkey_access()
  powerpc: sys_pkey_alloc() and sys_pkey_free() system calls
  powerpc: ability to create execute-disabled pkeys
  powerpc: store and restore the pkey state across context switches
  powerpc: introduce execute-only pkey
  powerpc: ability to associate pkey to a vma
  powerpc: implementation for arch_override_mprotect_pkey()
  powerpc: map vma key-protection bits to pte key bits.
  powerpc: sys_pkey_mprotect() system call
  powerpc: Program HPTE key protection bits
  powerpc: helper to validate key-access permissions of a pte
  powerpc: check key protection for user page access
  powerpc: Macro the mask used for checking DSI exception
  powerpc: implementation for arch_vma_access_permitted()
  powerpc: Handle exceptions caused by pkey violation
  powerpc: capture AMR register content on pkey violation
  powerpc: introduce get_pte_pkey() helper
  powerpc: capture the violated protection key on fault
  powerpc: Deliver SEGV signal on pkey violation
  powerpc: Enable pkey subsystem

 arch/powerpc/include/asm/book3s/64/mmu-hash.h |   10 +
 arch/powerpc/include/asm/book3s/64/mmu.h      |   10 +
 arch/powerpc/include/asm/book3s/64/pgtable.h  |   69 +++++++-
 arch/powerpc/include/asm/cputable.h           |    8 +-
 arch/powerpc/include/asm/mman.h               |   16 ++-
 arch/powerpc/include/asm/mmu_context.h        |   18 ++-
 arch/powerpc/include/asm/paca.h               |    4 +
 arch/powerpc/include/asm/pkeys.h              |  255 ++++++++++++++++++++++++-
 arch/powerpc/include/asm/processor.h          |    5 +
 arch/powerpc/include/asm/reg.h                |    8 +-
 arch/powerpc/include/asm/systbl.h             |    3 +
 arch/powerpc/include/asm/unistd.h             |    6 +-
 arch/powerpc/include/uapi/asm/ptrace.h        |    1 +
 arch/powerpc/include/uapi/asm/unistd.h        |    3 +
 arch/powerpc/kernel/asm-offsets.c             |    6 +
 arch/powerpc/kernel/exceptions-64s.S          |    2 +-
 arch/powerpc/kernel/process.c                 |   25 +++
 arch/powerpc/kernel/prom.c                    |   19 ++
 arch/powerpc/kernel/signal_32.c               |    5 +
 arch/powerpc/kernel/signal_64.c               |    4 +
 arch/powerpc/kernel/traps.c                   |   15 ++
 arch/powerpc/mm/fault.c                       |   31 +++
 arch/powerpc/mm/hash_utils_64.c               |   26 +++
 arch/powerpc/mm/mmu_context_book3s64.c        |    2 +
 arch/powerpc/mm/pkeys.c                       |  256 +++++++++++++++++++++++++
 25 files changed, 787 insertions(+), 20 deletions(-)

[RFC v7 01/25] powerpc: define an additional vma bit for protection keys.

Ram Pai
powerpc needs an additional vma bit to support 32 keys.
Until the additional vma bit lands in include/linux/mm.h,
we have to define it in a powerpc-specific header file.
This is needed to get pkeys working on power.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/pkeys.h |   18 ++++++++++++++++++
 1 files changed, 18 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index acad7c2..4ccb8f5 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -2,6 +2,24 @@
 #define _ASM_PPC64_PKEYS_H
 
 extern bool pkey_inited;
+
+/*
+ * powerpc needs an additional vma bit to support 32 keys.
+ * Till the additional vma bit lands in include/linux/mm.h
+ * we have to carry the hunk below. This is  needed to get
+ * pkeys working on power. -- Ram
+ */
+#ifndef VM_HIGH_ARCH_BIT_4
+#define VM_HIGH_ARCH_BIT_4 36
+#define VM_HIGH_ARCH_4 BIT(VM_HIGH_ARCH_BIT_4)
+#define VM_PKEY_SHIFT VM_HIGH_ARCH_BIT_0
+#define VM_PKEY_BIT0 VM_HIGH_ARCH_0
+#define VM_PKEY_BIT1 VM_HIGH_ARCH_1
+#define VM_PKEY_BIT2 VM_HIGH_ARCH_2
+#define VM_PKEY_BIT3 VM_HIGH_ARCH_3
+#define VM_PKEY_BIT4 VM_HIGH_ARCH_4
+#endif
+
 #define ARCH_VM_PKEY_FLAGS 0
 
 static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey)
--
1.7.1

[RFC v7 02/25] powerpc: track allocation status of all pkeys

Ram Pai
A total of 32 keys are available on power7 and above; however,
pkeys 0 and 1 are reserved. So effectively we have 30 pkeys.

On 4K kernels, we do not have 5 bits in the PTE to represent
all 32 keys; we only have 3 bits. Two of those keys, pkey 0
and pkey 1, are reserved. So effectively we have 6 pkeys.

This patch keeps track of reserved keys, allocated keys and
keys that are currently free.

It also adds the skeletal functions and macros that the
architecture-independent code expects to be available.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/book3s/64/mmu.h |    9 +++
 arch/powerpc/include/asm/mmu_context.h   |    1 +
 arch/powerpc/include/asm/pkeys.h         |   98 ++++++++++++++++++++++++++++-
 arch/powerpc/mm/mmu_context_book3s64.c   |    2 +
 arch/powerpc/mm/pkeys.c                  |    2 +
 5 files changed, 108 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
index 77529a3..104ad72 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -108,6 +108,15 @@ struct patb_entry {
 #ifdef CONFIG_SPAPR_TCE_IOMMU
  struct list_head iommu_group_mem_list;
 #endif
+
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ /*
+ * Each bit represents one protection key.
+ * bit set   -> key allocated
+ * bit unset -> key available for allocation
+ */
+ u32 pkey_allocation_map;
+#endif
 } mm_context_t;
 
 /*
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 4b93547..4705dab 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -184,6 +184,7 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
 
 #ifndef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
 #define pkey_initialize()
+#define pkey_mm_init(mm)
 #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
 
 #endif /* __KERNEL__ */
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 4ccb8f5..def385f 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -2,6 +2,8 @@
 #define _ASM_PPC64_PKEYS_H
 
 extern bool pkey_inited;
+extern int pkeys_total; /* total pkeys as per device tree */
+extern u32 initial_allocation_mask;/* bits set for reserved keys */
 
 /*
  * powerpc needs an additional vma bit to support 32 keys.
@@ -20,21 +22,76 @@
 #define VM_PKEY_BIT4 VM_HIGH_ARCH_4
 #endif
 
-#define ARCH_VM_PKEY_FLAGS 0
+#define arch_max_pkey()  pkeys_total
+#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
+ VM_PKEY_BIT3 | VM_PKEY_BIT4)
+
+#define pkey_alloc_mask(pkey) (0x1 << pkey)
+
+#define mm_pkey_allocation_map(mm) (mm->context.pkey_allocation_map)
+
+#define mm_set_pkey_allocated(mm, pkey) { \
+ mm_pkey_allocation_map(mm) |= pkey_alloc_mask(pkey); \
+}
+
+#define mm_set_pkey_free(mm, pkey) { \
+ mm_pkey_allocation_map(mm) &= ~pkey_alloc_mask(pkey); \
+}
+
+#define mm_set_pkey_is_allocated(mm, pkey) \
+ (mm_pkey_allocation_map(mm) & pkey_alloc_mask(pkey))
+
+#define mm_set_pkey_is_reserved(mm, pkey) (initial_allocation_mask & \
+ pkey_alloc_mask(pkey))
 
 static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey)
 {
- return (pkey == 0);
+ /* a reserved key is never considered as 'explicitly allocated' */
+ return ((pkey < arch_max_pkey()) &&
+ !mm_set_pkey_is_reserved(mm, pkey) &&
+ mm_set_pkey_is_allocated(mm, pkey));
 }
 
+/*
+ * Returns a positive, 5-bit key on success, or -1 on failure.
+ */
 static inline int mm_pkey_alloc(struct mm_struct *mm)
 {
- return -1;
+ /*
+ * Note: this is the one and only place we make sure
+ * that the pkey is valid as far as the hardware is
+ * concerned.  The rest of the kernel trusts that
+ * only good, valid pkeys come out of here.
+ */
+ u32 all_pkeys_mask = (u32)(~(0x0));
+ int ret;
+
+ if (!pkey_inited)
+ return -1;
+ /*
+ * Are we out of pkeys?  We must handle this specially
+ * because ffz() behavior is undefined if there are no
+ * zeros.
+ */
+ if (mm_pkey_allocation_map(mm) == all_pkeys_mask)
+ return -1;
+
+ ret = ffz((u32)mm_pkey_allocation_map(mm));
+ mm_set_pkey_allocated(mm, ret);
+ return ret;
 }
 
 static inline int mm_pkey_free(struct mm_struct *mm, int pkey)
 {
- return -EINVAL;
+ if (!pkey_inited)
+ return -1;
+
+ if (!mm_pkey_is_allocated(mm, pkey))
+ return -EINVAL;
+
+ mm_set_pkey_free(mm, pkey);
+
+ return 0;
 }
 
 /*
@@ -58,12 +115,45 @@ static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
  return 0;
 }
 
+static inline void pkey_mm_init(struct mm_struct *mm)
+{
+ if (!pkey_inited)
+ return;
+ mm_pkey_allocation_map(mm) = initial_allocation_mask;
+}
+
 static inline void pkey_initialize(void)
 {
+ int os_reserved, i;
+
  /* disable the pkey system till everything
  * is in place. A patch further down the
  * line will enable it.
  */
  pkey_inited = false;
+
+ /* Lets assume 32 keys */
+ pkeys_total = 32;
+
+#ifdef CONFIG_PPC_4K_PAGES
+ /*
+ * the OS can manage only 8 pkeys
+ * due to its inability to represent
+ * them in the linux 4K-PTE.
+ */
+ os_reserved = pkeys_total-8;
+#else
+ os_reserved = 0;
+#endif
+ /*
+ * Bits are in LE format.
+ * NOTE: 1, 0 are reserved.
+ * key 0 is the default key, which allows read/write/execute.
+ * key 1 is recommended not to be used.
+ * PowerISA(3.0) page 1015, programming note.
+ */
+ initial_allocation_mask = ~0x0;
+ for (i = 2; i < (pkeys_total - os_reserved); i++)
+ initial_allocation_mask &= ~(0x1<<i);
 }
 #endif /*_ASM_PPC64_PKEYS_H */
diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
index a3edf81..34a16f3 100644
--- a/arch/powerpc/mm/mmu_context_book3s64.c
+++ b/arch/powerpc/mm/mmu_context_book3s64.c
@@ -16,6 +16,7 @@
 #include <linux/string.h>
 #include <linux/types.h>
 #include <linux/mm.h>
+#include <linux/pkeys.h>
 #include <linux/spinlock.h>
 #include <linux/idr.h>
 #include <linux/export.h>
@@ -120,6 +121,7 @@ static int hash__init_new_context(struct mm_struct *mm)
 
  subpage_prot_init_new_context(mm);
 
+ pkey_mm_init(mm);
  return index;
 }
 
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index c3acee1..37dacc5 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -16,3 +16,5 @@
 #include <linux/pkeys.h>                /* PKEY_*                       */
 
 bool pkey_inited;
+int  pkeys_total; /* total pkeys as per device tree */
+u32  initial_allocation_mask; /* bits set for reserved keys */
--
1.7.1
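
A quick sanity check of what pkey_initialize() above ends up with:
with pkeys_total = 32 and os_reserved = 0 (the 64K-page case), the
loop clears bits 2..31, so initial_allocation_mask = 0x00000003 --
only keys 0 and 1 stay reserved and 30 keys are allocatable. With
CONFIG_PPC_4K_PAGES, os_reserved = 24, only bits 2..7 are cleared
and the mask becomes 0xffffff03, leaving keys 2..7 free -- the
"effectively 6 pkeys" mentioned in the changelog.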

[RFC v7 03/25] powerpc: helper function to read, write AMR, IAMR, UAMOR registers

Ram Pai
Implement helper functions to read and write the key-related
registers: AMR, IAMR and UAMOR.

The AMR register tracks the read/write permission of a key.
The IAMR register tracks the execute permission of a key.
The UAMOR register enables and disables a key.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/book3s/64/pgtable.h |   26 ++++++++++++++++++++++++++
 1 files changed, 26 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 85bc987..d4da0e9 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -428,6 +428,32 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
  pte_update(mm, addr, ptep, 0, _PAGE_PRIVILEGED, 1);
 }
 
+#include <asm/reg.h>
+static inline u64 read_amr(void)
+{
+ return mfspr(SPRN_AMR);
+}
+static inline void write_amr(u64 value)
+{
+ mtspr(SPRN_AMR, value);
+}
+static inline u64 read_iamr(void)
+{
+ return mfspr(SPRN_IAMR);
+}
+static inline void write_iamr(u64 value)
+{
+ mtspr(SPRN_IAMR, value);
+}
+static inline u64 read_uamor(void)
+{
+ return mfspr(SPRN_UAMOR);
+}
+static inline void write_uamor(u64 value)
+{
+ mtspr(SPRN_UAMOR, value);
+}
+
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
        unsigned long addr, pte_t *ptep)
--
1.7.1

[RFC v7 04/25] powerpc: helper functions to initialize AMR, IAMR and UAMOR registers

Ram Pai
Introduce helper functions that initialize the bits in the AMR,
IAMR and UAMOR registers that correspond to a given pkey.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/pkeys.h |    1 +
 arch/powerpc/mm/pkeys.c          |   46 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 47 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index def385f..f1c55e0 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -25,6 +25,7 @@
 #define arch_max_pkey()  pkeys_total
 #define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
  VM_PKEY_BIT3 | VM_PKEY_BIT4)
+#define AMR_BITS_PER_PKEY 2
 
 #define pkey_alloc_mask(pkey) (0x1 << pkey)
 
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index 37dacc5..846d984 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -18,3 +18,49 @@
 bool pkey_inited;
 int  pkeys_total; /* total pkeys as per device tree */
 u32  initial_allocation_mask; /* bits set for reserved keys */
+
+#define PKEY_REG_BITS (sizeof(u64)*8)
+#define pkeyshift(pkey) (PKEY_REG_BITS - ((pkey+1) * AMR_BITS_PER_PKEY))
+
+static inline void init_amr(int pkey, u8 init_bits)
+{
+ u64 new_amr_bits = (((u64)init_bits & 0x3UL) << pkeyshift(pkey));
+ u64 old_amr = read_amr() & ~((u64)(0x3ul) << pkeyshift(pkey));
+
+ write_amr(old_amr | new_amr_bits);
+}
+
+static inline void init_iamr(int pkey, u8 init_bits)
+{
+ u64 new_iamr_bits = (((u64)init_bits & 0x3UL) << pkeyshift(pkey));
+ u64 old_iamr = read_iamr() & ~((u64)(0x3ul) << pkeyshift(pkey));
+
+ write_iamr(old_iamr | new_iamr_bits);
+}
+
+static void pkey_status_change(int pkey, bool enable)
+{
+ u64 old_uamor;
+
+ /* reset the AMR and IAMR bits for this key */
+ init_amr(pkey, 0x0);
+ init_iamr(pkey, 0x0);
+
+ /* enable/disable key */
+ old_uamor = read_uamor();
+ if (enable)
+ old_uamor |= (0x3ul << pkeyshift(pkey));
+ else
+ old_uamor &= ~(0x3ul << pkeyshift(pkey));
+ write_uamor(old_uamor);
+}
+
+void __arch_activate_pkey(int pkey)
+{
+ pkey_status_change(pkey, true);
+}
+
+void __arch_deactivate_pkey(int pkey)
+{
+ pkey_status_change(pkey, false);
+}
--
1.7.1
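
To make the register layout concrete: PKEY_REG_BITS is 64 and
AMR_BITS_PER_PKEY is 2, so pkeyshift(pkey) = 64 - (pkey + 1) * 2.
Key 0 therefore owns the two most significant bits of AMR/IAMR/UAMOR
(shift 62), key 2 sits at shift 58, and key 31 owns the two least
significant bits (shift 0). A call such as init_amr(2, 0x2) clears
the pair of bits at shift 58 and then sets the upper bit of that
pair -- the bit this series uses as write-disable -- leaving every
other key's bits untouched.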

[RFC v7 05/25] powerpc: cleanup AMR, IAMR when a key is allocated or freed

Ram Pai
Clean up the bits corresponding to a key in the AMR and IAMR
registers when the key is newly allocated/activated or is freed.
We don't want residual bits to cause the hardware to enforce
unintended behavior when the key is activated or freed.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/pkeys.h |   12 ++++++++++++
 1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index f1c55e0..40bfaac 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -53,6 +53,8 @@ static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey)
  mm_set_pkey_is_allocated(mm, pkey));
 }
 
+extern void __arch_activate_pkey(int pkey);
+extern void __arch_deactivate_pkey(int pkey);
 /*
  * Returns a positive, 5-bit key on success, or -1 on failure.
  */
@@ -79,6 +81,12 @@ static inline int mm_pkey_alloc(struct mm_struct *mm)
 
  ret = ffz((u32)mm_pkey_allocation_map(mm));
  mm_set_pkey_allocated(mm, ret);
+
+ /*
+ * enable the key in the hardware
+ */
+ if (ret > 0)
+ __arch_activate_pkey(ret);
  return ret;
 }
 
@@ -90,6 +98,10 @@ static inline int mm_pkey_free(struct mm_struct *mm, int pkey)
  if (!mm_pkey_is_allocated(mm, pkey))
  return -EINVAL;
 
+ /*
+ * Disable the key in the hardware
+ */
+ __arch_deactivate_pkey(pkey);
  mm_set_pkey_free(mm, pkey);
 
  return 0;
--
1.7.1

[RFC v7 06/25] powerpc: implementation for arch_set_user_pkey_access()

Ram Pai
This patch provides the detailed implementation for
a user to allocate a key and enable it in the hardware.

It provides the plumbing, but it cannot be used until
the system call is implemented. The next patch will
do so.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/pkeys.h |    9 ++++++++-
 arch/powerpc/mm/pkeys.c          |   28 ++++++++++++++++++++++++++++
 2 files changed, 36 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 40bfaac..aec3f49 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -23,6 +23,9 @@
 #endif
 
 #define arch_max_pkey()  pkeys_total
+#define AMR_RD_BIT 0x1UL
+#define AMR_WR_BIT 0x2UL
+#define IAMR_EX_BIT 0x1UL
 #define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
  VM_PKEY_BIT3 | VM_PKEY_BIT4)
 #define AMR_BITS_PER_PKEY 2
@@ -122,10 +125,14 @@ static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
  return 0;
 }
 
+extern int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
+ unsigned long init_val);
 static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
  unsigned long init_val)
 {
- return 0;
+ if (!pkey_inited)
+ return -EINVAL;
+ return __arch_set_user_pkey_access(tsk, pkey, init_val);
 }
 
 static inline void pkey_mm_init(struct mm_struct *mm)
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index 846d984..5b3531e 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -22,6 +22,11 @@
 #define PKEY_REG_BITS (sizeof(u64)*8)
 #define pkeyshift(pkey) (PKEY_REG_BITS - ((pkey+1) * AMR_BITS_PER_PKEY))
 
+static bool is_pkey_enabled(int pkey)
+{
+ return !!(read_uamor() & (0x3ul << pkeyshift(pkey)));
+}
+
 static inline void init_amr(int pkey, u8 init_bits)
 {
  u64 new_amr_bits = (((u64)init_bits & 0x3UL) << pkeyshift(pkey));
@@ -64,3 +69,26 @@ void __arch_deactivate_pkey(int pkey)
 {
  pkey_status_change(pkey, false);
 }
+
+/*
+ * set the access right in AMR IAMR and UAMOR register
+ * for @pkey to that specified in @init_val.
+ */
+int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
+ unsigned long init_val)
+{
+ u64 new_amr_bits = 0x0ul;
+
+ if (!is_pkey_enabled(pkey))
+ return -EINVAL;
+
+ /* Set the bits we need in AMR:  */
+ if (init_val & PKEY_DISABLE_ACCESS)
+ new_amr_bits |= AMR_RD_BIT | AMR_WR_BIT;
+ else if (init_val & PKEY_DISABLE_WRITE)
+ new_amr_bits |= AMR_WR_BIT;
+
+ init_amr(pkey, new_amr_bits);
+
+ return 0;
+}
--
1.7.1

[RFC v7 07/25] powerpc: sys_pkey_alloc() and sys_pkey_free() system calls

Ram Pai
Finally this patch provides the ability for a process to
allocate and free a protection key.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/systbl.h      |    2 ++
 arch/powerpc/include/asm/unistd.h      |    4 +---
 arch/powerpc/include/uapi/asm/unistd.h |    2 ++
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h
index 1c94708..22dd776 100644
--- a/arch/powerpc/include/asm/systbl.h
+++ b/arch/powerpc/include/asm/systbl.h
@@ -388,3 +388,5 @@
 COMPAT_SYS_SPU(pwritev2)
 SYSCALL(kexec_file_load)
 SYSCALL(statx)
+SYSCALL(pkey_alloc)
+SYSCALL(pkey_free)
diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h
index 9ba11db..e0273bc 100644
--- a/arch/powerpc/include/asm/unistd.h
+++ b/arch/powerpc/include/asm/unistd.h
@@ -12,13 +12,11 @@
 #include <uapi/asm/unistd.h>
 
 
-#define NR_syscalls 384
+#define NR_syscalls 386
 
 #define __NR__exit __NR_exit
 
 #define __IGNORE_pkey_mprotect
-#define __IGNORE_pkey_alloc
-#define __IGNORE_pkey_free
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/powerpc/include/uapi/asm/unistd.h b/arch/powerpc/include/uapi/asm/unistd.h
index b85f142..7993a07 100644
--- a/arch/powerpc/include/uapi/asm/unistd.h
+++ b/arch/powerpc/include/uapi/asm/unistd.h
@@ -394,5 +394,7 @@
 #define __NR_pwritev2 381
 #define __NR_kexec_file_load 382
 #define __NR_statx 383
+#define __NR_pkey_alloc 384
+#define __NR_pkey_free 385
 
 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
--
1.7.1

[RFC v7 08/25] powerpc: ability to create execute-disabled pkeys

Ram Pai
powerpc has hardware support to disable execute on a pkey.
This patch enables the ability to create execute-disabled
keys.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/pkeys.h |   12 ++++++++++++
 arch/powerpc/mm/pkeys.c          |    5 +++++
 2 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index aec3f49..9d59619 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -22,6 +22,18 @@
 #define VM_PKEY_BIT4 VM_HIGH_ARCH_4
 #endif
 
+/* override any generic PKEY Permission defines */
+#undef  PKEY_DISABLE_ACCESS
+#define PKEY_DISABLE_ACCESS    0x1
+#undef  PKEY_DISABLE_WRITE
+#define PKEY_DISABLE_WRITE     0x2
+#undef  PKEY_DISABLE_EXECUTE
+#define PKEY_DISABLE_EXECUTE   0x4
+#undef  PKEY_ACCESS_MASK
+#define PKEY_ACCESS_MASK       (PKEY_DISABLE_ACCESS |\
+ PKEY_DISABLE_WRITE  |\
+ PKEY_DISABLE_EXECUTE)
+
 #define arch_max_pkey()  pkeys_total
 #define AMR_RD_BIT 0x1UL
 #define AMR_WR_BIT 0x2UL
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index 5b3531e..c1c40c3 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -78,6 +78,7 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
  unsigned long init_val)
 {
  u64 new_amr_bits = 0x0ul;
+ u64 new_iamr_bits = 0x0ul;
 
  if (!is_pkey_enabled(pkey))
  return -EINVAL;
@@ -90,5 +91,9 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
 
  init_amr(pkey, new_amr_bits);
 
+ if ((init_val & PKEY_DISABLE_EXECUTE))
+ new_iamr_bits |= IAMR_EX_BIT;
+
+ init_iamr(pkey, new_iamr_bits);
  return 0;
 }
--
1.7.1
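
For illustration only, with pkey_alloc() from the previous patch and
pkey_mprotect() from a later patch in the series, requesting such a
key from userspace could look roughly like the sketch below. The
function name and the addr/len arguments are placeholders, and
PKEY_DISABLE_EXECUTE is defined locally because older uapi headers
do not carry it.

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <sys/mman.h>

    #ifndef PKEY_DISABLE_EXECUTE
    #define PKEY_DISABLE_EXECUTE 0x4        /* value used by this series */
    #endif

    /* Tag an existing data mapping with a key that forbids instruction fetch. */
    static int deny_execute(void *addr, size_t len)
    {
            int pkey = pkey_alloc(0, PKEY_DISABLE_EXECUTE);

            if (pkey < 0)
                    return -1;      /* no pkey support, or the flag was rejected */

            return pkey_mprotect(addr, len, PROT_READ | PROT_WRITE, pkey);
    }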

[RFC v7 09/25] powerpc: store and restore the pkey state across context switches

Ram Pai
Store and restore the AMR, IAMR and UAMOR register state of the task
before scheduling out and after scheduling in, respectively.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/pkeys.h     |    5 +++++
 arch/powerpc/include/asm/processor.h |    5 +++++
 arch/powerpc/kernel/process.c        |   25 +++++++++++++++++++++++++
 3 files changed, 35 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 9d59619..dac8751 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -147,6 +147,11 @@ static inline int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
  return __arch_set_user_pkey_access(tsk, pkey, init_val);
 }
 
+static inline bool arch_pkeys_enabled(void)
+{
+ return pkey_inited;
+}
+
 static inline void pkey_mm_init(struct mm_struct *mm)
 {
  if (!pkey_inited)
diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index 1189d04..dcb1cf0 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -309,6 +309,11 @@ struct thread_struct {
  struct thread_vr_state ckvr_state; /* Checkpointed VR state */
  unsigned long ckvrsave; /* Checkpointed VRSAVE */
 #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ unsigned long amr;
+ unsigned long iamr;
+ unsigned long uamor;
+#endif
 #ifdef CONFIG_KVM_BOOK3S_32_HANDLER
  void* kvm_shadow_vcpu; /* KVM internal data */
 #endif /* CONFIG_KVM_BOOK3S_32_HANDLER */
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 2ad725e..af373f4 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -42,6 +42,7 @@
 #include <linux/hw_breakpoint.h>
 #include <linux/uaccess.h>
 #include <linux/elf-randomize.h>
+#include <linux/pkeys.h>
 
 #include <asm/pgtable.h>
 #include <asm/io.h>
@@ -1096,6 +1097,13 @@ static inline void save_sprs(struct thread_struct *t)
  t->tar = mfspr(SPRN_TAR);
  }
 #endif
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ if (arch_pkeys_enabled()) {
+ t->amr = mfspr(SPRN_AMR);
+ t->iamr = mfspr(SPRN_IAMR);
+ t->uamor = mfspr(SPRN_UAMOR);
+ }
+#endif
 }
 
 static inline void restore_sprs(struct thread_struct *old_thread,
@@ -1131,6 +1139,16 @@ static inline void restore_sprs(struct thread_struct *old_thread,
  mtspr(SPRN_TAR, new_thread->tar);
  }
 #endif
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ if (arch_pkeys_enabled()) {
+ if (old_thread->amr != new_thread->amr)
+ mtspr(SPRN_AMR, new_thread->amr);
+ if (old_thread->iamr != new_thread->iamr)
+ mtspr(SPRN_IAMR, new_thread->iamr);
+ if (old_thread->uamor != new_thread->uamor)
+ mtspr(SPRN_UAMOR, new_thread->uamor);
+ }
+#endif
 }
 
 struct task_struct *__switch_to(struct task_struct *prev,
@@ -1689,6 +1707,13 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
  current->thread.tm_tfiar = 0;
  current->thread.load_tm = 0;
 #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ if (arch_pkeys_enabled()) {
+ current->thread.amr   = 0x0ul;
+ current->thread.iamr  = 0x0ul;
+ current->thread.uamor = 0x0ul;
+ }
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
 }
 EXPORT_SYMBOL(start_thread);
 
--
1.7.1

[RFC v7 10/25] powerpc: introduce execute-only pkey

Ram Pai
This patch provides the implementation of the execute-only pkey.
The architecture-independent code expects the ability to create
and manage a special key which has execute-only permission.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/book3s/64/mmu.h |    1 +
 arch/powerpc/include/asm/pkeys.h         |    8 ++++-
 arch/powerpc/mm/pkeys.c                  |   57 ++++++++++++++++++++++++++++++
 3 files changed, 65 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
index 104ad72..0c0a2a8 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -116,6 +116,7 @@ struct patb_entry {
  * bit unset -> key available for allocation
  */
  u32 pkey_allocation_map;
+ s16 execute_only_pkey; /* key holding execute-only protection */
 #endif
 } mm_context_t;
 
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index dac8751..5b17d93 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -126,11 +126,15 @@ static inline int mm_pkey_free(struct mm_struct *mm, int pkey)
  * Try to dedicate one of the protection keys to be used as an
  * execute-only protection key.
  */
+extern int __execute_only_pkey(struct mm_struct *mm);
 static inline int execute_only_pkey(struct mm_struct *mm)
 {
- return 0;
+ if (!pkey_inited)
+ return -1;
+ return __execute_only_pkey(mm);
 }
 
+
 static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
  int prot, int pkey)
 {
@@ -157,6 +161,8 @@ static inline void pkey_mm_init(struct mm_struct *mm)
  if (!pkey_inited)
  return;
  mm_pkey_allocation_map(mm) = initial_allocation_mask;
+ /* -1 means unallocated or invalid */
+ mm->context.execute_only_pkey = -1;
 }
 
 static inline void pkey_initialize(void)
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index c1c40c3..25749bf 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -97,3 +97,60 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
  init_iamr(pkey, new_iamr_bits);
  return 0;
 }
+
+static inline bool pkey_allows_readwrite(int pkey)
+{
+ int pkey_shift = pkeyshift(pkey);
+
+ if (!(read_uamor() & (0x3UL << pkey_shift)))
+ return true;
+
+ return !(read_amr() & ((AMR_RD_BIT|AMR_WR_BIT) << pkey_shift));
+}
+
+int __execute_only_pkey(struct mm_struct *mm)
+{
+ bool need_to_set_mm_pkey = false;
+ int execute_only_pkey = mm->context.execute_only_pkey;
+ int ret;
+
+ /* Do we need to assign a pkey for mm's execute-only maps? */
+ if (execute_only_pkey == -1) {
+ /* Go allocate one to use, which might fail */
+ execute_only_pkey = mm_pkey_alloc(mm);
+ if (execute_only_pkey < 0)
+ return -1;
+ need_to_set_mm_pkey = true;
+ }
+
+ /*
+ * We do not want to go through the relatively costly
+ * dance to set AMR if we do not need to.  Check it
+ * first and assume that if the execute-only pkey is
+ * readwrite-disabled than we do not have to set it
+ * ourselves.
+ */
+ if (!need_to_set_mm_pkey &&
+    !pkey_allows_readwrite(execute_only_pkey))
+ return execute_only_pkey;
+
+ /*
+ * Set up AMR so that it denies access for everything
+ * other than execution.
+ */
+ ret = __arch_set_user_pkey_access(current, execute_only_pkey,
+ (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE));
+ /*
+ * If the AMR-set operation failed somehow, just return
+ * 0 and effectively disable execute-only support.
+ */
+ if (ret) {
+ mm_set_pkey_free(mm, execute_only_pkey);
+ return -1;
+ }
+
+ /* We got one, store it and use it from here on out */
+ if (need_to_set_mm_pkey)
+ mm->context.execute_only_pkey = execute_only_pkey;
+ return execute_only_pkey;
+}
--
1.7.1

[RFC v7 11/25] powerpc: ability to associate pkey to a vma

Ram Pai
Arch-independent code expects the arch to map a pkey into
the vma's protection bit settings. This patch provides that
ability.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/mman.h  |    8 +++++++-
 arch/powerpc/include/asm/pkeys.h |   12 ++++++++++++
 2 files changed, 19 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
index 30922f6..067eec2 100644
--- a/arch/powerpc/include/asm/mman.h
+++ b/arch/powerpc/include/asm/mman.h
@@ -13,6 +13,7 @@
 
 #include <asm/cputable.h>
 #include <linux/mm.h>
+#include <linux/pkeys.h>
 #include <asm/cpu_has_feature.h>
 
 /*
@@ -22,7 +23,12 @@
 static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
  unsigned long pkey)
 {
- return (prot & PROT_SAO) ? VM_SAO : 0;
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ return (((prot & PROT_SAO) ? VM_SAO : 0) |
+ pkey_to_vmflag_bits(pkey));
+#else
+ return ((prot & PROT_SAO) ? VM_SAO : 0);
+#endif
 }
 #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
 
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 5b17d93..a715a08 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -34,6 +34,18 @@
  PKEY_DISABLE_WRITE  |\
  PKEY_DISABLE_EXECUTE)
 
+static inline u64 pkey_to_vmflag_bits(u16 pkey)
+{
+ if (!pkey_inited)
+ return 0x0UL;
+
+ return (((pkey & 0x1UL) ? VM_PKEY_BIT0 : 0x0UL) |
+ ((pkey & 0x2UL) ? VM_PKEY_BIT1 : 0x0UL) |
+ ((pkey & 0x4UL) ? VM_PKEY_BIT2 : 0x0UL) |
+ ((pkey & 0x8UL) ? VM_PKEY_BIT3 : 0x0UL) |
+ ((pkey & 0x10UL) ? VM_PKEY_BIT4 : 0x0UL));
+}
+
 #define arch_max_pkey()  pkeys_total
 #define AMR_RD_BIT 0x1UL
 #define AMR_WR_BIT 0x2UL
--
1.7.1

[RFC v7 12/25] powerpc: implementation for arch_override_mprotect_pkey()

Ram Pai
Arch-independent code calls arch_override_mprotect_pkey()
to return a pkey that best matches the requested protection.

This patch provides the implementation.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/mmu_context.h |    5 +++
 arch/powerpc/include/asm/pkeys.h       |   17 ++++++++++-
 arch/powerpc/mm/pkeys.c                |   47 ++++++++++++++++++++++++++++++++
 3 files changed, 67 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 4705dab..7232484 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -185,6 +185,11 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
 #ifndef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
 #define pkey_initialize()
 #define pkey_mm_init(mm)
+
+static inline int vma_pkey(struct vm_area_struct *vma)
+{
+ return 0;
+}
 #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
 
 #endif /* __KERNEL__ */
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index a715a08..03f7ea2 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -46,6 +46,16 @@ static inline u64 pkey_to_vmflag_bits(u16 pkey)
  ((pkey & 0x10UL) ? VM_PKEY_BIT4 : 0x0UL));
 }
 
+#define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
+ VM_PKEY_BIT3 | VM_PKEY_BIT4)
+
+static inline int vma_pkey(struct vm_area_struct *vma)
+{
+ if (!pkey_inited)
+ return 0;
+ return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT;
+}
+
 #define arch_max_pkey()  pkeys_total
 #define AMR_RD_BIT 0x1UL
 #define AMR_WR_BIT 0x2UL
@@ -146,11 +156,14 @@ static inline int execute_only_pkey(struct mm_struct *mm)
  return __execute_only_pkey(mm);
 }
 
-
+extern int __arch_override_mprotect_pkey(struct vm_area_struct *vma,
+ int prot, int pkey);
 static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
  int prot, int pkey)
 {
- return 0;
+ if (!pkey_inited)
+ return 0;
+ return __arch_override_mprotect_pkey(vma, prot, pkey);
 }
 
 extern int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index 25749bf..edcbf48 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -154,3 +154,50 @@ int __execute_only_pkey(struct mm_struct *mm)
  mm->context.execute_only_pkey = execute_only_pkey;
  return execute_only_pkey;
 }
+
+static inline bool vma_is_pkey_exec_only(struct vm_area_struct *vma)
+{
+ /* Do this check first since the vm_flags should be hot */
+ if ((vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC)) != VM_EXEC)
+ return false;
+
+ return (vma_pkey(vma) == vma->vm_mm->context.execute_only_pkey);
+}
+
+/*
+ * This should only be called for *plain* mprotect calls.
+ */
+int __arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot,
+ int pkey)
+{
+ /*
+ * Is this an mprotect_pkey() call?  If so, never
+ * override the value that came from the user.
+ */
+ if (pkey != -1)
+ return pkey;
+
+ /*
+ * If the currently associated pkey is execute-only,
+ * but the requested protection requires read or write,
+ * move it back to the default pkey.
+ */
+ if (vma_is_pkey_exec_only(vma) &&
+    (prot & (PROT_READ|PROT_WRITE)))
+ return 0;
+
+ /*
+ * the requested protection is execute-only. Hence
+ * lets use a execute-only pkey.
+ */
+ if (prot == PROT_EXEC) {
+ pkey = execute_only_pkey(vma->vm_mm);
+ if (pkey > 0)
+ return pkey;
+ }
+
+ /*
+ * nothing to override.
+ */
+ return vma_pkey(vma);
+}
--
1.7.1
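
To see the override from the userspace side, a hedged sketch: a
plain mprotect() to PROT_EXEC -- no pkey_mprotect() involved -- is
exactly the case where __arch_override_mprotect_pkey() substitutes
the execute-only key. The helper name below is made up for the
example; buf is assumed page-aligned and len a multiple of the page
size.

    #include <stddef.h>
    #include <sys/mman.h>

    static int seal_code(void *buf, size_t len)
    {
            /*
             * A plain mprotect() reaches the kernel with pkey == -1, so
             * __arch_override_mprotect_pkey() picks
             * mm->context.execute_only_pkey for this vma: instruction
             * fetch still works, but reads and writes of the region now
             * take a key fault.
             */
            return mprotect(buf, len, PROT_EXEC);
    }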

[RFC v7 13/25] powerpc: map vma key-protection bits to pte key bits.

Ram Pai
Map the key protection bits of the vma to the pkey bits in
the PTE.

The PTE bits used for pkeys are 3, 4, 5, 6 and 57. The first
four are the same four bits that were freed up earlier in
this patch series -- remember? :-) Without those four bits
this patch wouldn't be possible.

However, on 4K kernels bits 3 and 4 could not be freed up, so
we have to be satisfied with bits 5, 6 and 57.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/book3s/64/pgtable.h |   25 ++++++++++++++++++++++++-
 arch/powerpc/include/asm/mman.h              |    8 ++++++++
 arch/powerpc/include/asm/pkeys.h             |   12 ++++++++++++
 3 files changed, 44 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index d4da0e9..060a1b2 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -37,6 +37,7 @@
 #define _RPAGE_RSV2 0x0800000000000000UL
 #define _RPAGE_RSV3 0x0400000000000000UL
 #define _RPAGE_RSV4 0x0200000000000000UL
+#define _RPAGE_RSV5 0x00040UL
 
 #define _PAGE_PTE 0x4000000000000000UL /* distinguishes PTEs from pointers */
 #define _PAGE_PRESENT 0x8000000000000000UL /* pte contains a translation */
@@ -56,6 +57,25 @@
 /* Max physical address bit as per radix table */
 #define _RPAGE_PA_MAX 57
 
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+#ifdef CONFIG_PPC_64K_PAGES
+#define H_PAGE_PKEY_BIT0 _RPAGE_RSV1
+#define H_PAGE_PKEY_BIT1 _RPAGE_RSV2
+#else /* CONFIG_PPC_64K_PAGES */
+#define H_PAGE_PKEY_BIT0 0 /* _RPAGE_RSV1 is not available */
+#define H_PAGE_PKEY_BIT1 0 /* _RPAGE_RSV2 is not available */
+#endif /* CONFIG_PPC_64K_PAGES */
+#define H_PAGE_PKEY_BIT2 _RPAGE_RSV3
+#define H_PAGE_PKEY_BIT3 _RPAGE_RSV4
+#define H_PAGE_PKEY_BIT4 _RPAGE_RSV5
+#else /*  CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+#define H_PAGE_PKEY_BIT0 0
+#define H_PAGE_PKEY_BIT1 0
+#define H_PAGE_PKEY_BIT2 0
+#define H_PAGE_PKEY_BIT3 0
+#define H_PAGE_PKEY_BIT4 0
+#endif /*  CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
 /*
  * Max physical address bit we will use for now.
  *
@@ -116,13 +136,16 @@
 #define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
  _PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE | \
  _PAGE_SOFT_DIRTY)
+
+#define H_PAGE_PKEY  (H_PAGE_PKEY_BIT0 | H_PAGE_PKEY_BIT1 | H_PAGE_PKEY_BIT2 | \
+ H_PAGE_PKEY_BIT3 | H_PAGE_PKEY_BIT4)
 /*
  * Mask of bits returned by pte_pgprot()
  */
 #define PAGE_PROT_BITS  (_PAGE_SAO | _PAGE_NON_IDEMPOTENT | _PAGE_TOLERANT | \
  H_PAGE_4K_PFN | _PAGE_PRIVILEGED | _PAGE_ACCESSED | \
  _PAGE_READ | _PAGE_WRITE |  _PAGE_DIRTY | _PAGE_EXEC | \
- _PAGE_SOFT_DIRTY)
+ _PAGE_SOFT_DIRTY | H_PAGE_PKEY)
 /*
  * We define 2 sets of base prot bits, one for basic pages (ie,
  * cacheable kernel and user pages) and one for non cacheable
diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
index 067eec2..3f7220f 100644
--- a/arch/powerpc/include/asm/mman.h
+++ b/arch/powerpc/include/asm/mman.h
@@ -32,12 +32,20 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
 }
 #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
 
+
 static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
 {
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+ return (vm_flags & VM_SAO) ?
+ __pgprot(_PAGE_SAO | vmflag_to_page_pkey_bits(vm_flags)) :
+ __pgprot(0 | vmflag_to_page_pkey_bits(vm_flags));
+#else
  return (vm_flags & VM_SAO) ? __pgprot(_PAGE_SAO) : __pgprot(0);
+#endif
 }
 #define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)
 
+
 static inline bool arch_validate_prot(unsigned long prot)
 {
  if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_SAO))
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 03f7ea2..1ded6df 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -46,6 +46,18 @@ static inline u64 pkey_to_vmflag_bits(u16 pkey)
  ((pkey & 0x10UL) ? VM_PKEY_BIT4 : 0x0UL));
 }
 
+static inline u64 vmflag_to_page_pkey_bits(u64 vm_flags)
+{
+ if (!pkey_inited)
+ return 0x0UL;
+
+ return (((vm_flags & VM_PKEY_BIT0) ? H_PAGE_PKEY_BIT4 : 0x0UL) |
+ ((vm_flags & VM_PKEY_BIT1) ? H_PAGE_PKEY_BIT3 : 0x0UL) |
+ ((vm_flags & VM_PKEY_BIT2) ? H_PAGE_PKEY_BIT2 : 0x0UL) |
+ ((vm_flags & VM_PKEY_BIT3) ? H_PAGE_PKEY_BIT1 : 0x0UL) |
+ ((vm_flags & VM_PKEY_BIT4) ? H_PAGE_PKEY_BIT0 : 0x0UL));
+}
+
 #define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
  VM_PKEY_BIT3 | VM_PKEY_BIT4)
 
--
1.7.1
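
A worked example of the round trip, taking pkey 5 (binary 00101):
pkey_to_vmflag_bits() sets VM_PKEY_BIT0 and VM_PKEY_BIT2 in the vma
flags; vmflag_to_page_pkey_bits() above then sets H_PAGE_PKEY_BIT4
(_RPAGE_RSV5, the pte bit 57 mentioned in the changelog) and
H_PAGE_PKEY_BIT2 (_RPAGE_RSV3); and pte_to_pkey_bits(), added a few
patches later, recovers 0x5 from exactly those two bits. Note the
deliberate reversal: the least significant pkey bit lands in the
numerically highest H_PAGE_PKEY_BIT.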

[RFC v7 14/25] powerpc: sys_pkey_mprotect() system call

Ram Pai
This patch provides the ability for a process to
associate a pkey with an address range.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/systbl.h      |    1 +
 arch/powerpc/include/asm/unistd.h      |    4 +---
 arch/powerpc/include/uapi/asm/unistd.h |    1 +
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/systbl.h b/arch/powerpc/include/asm/systbl.h
index 22dd776..b33b551 100644
--- a/arch/powerpc/include/asm/systbl.h
+++ b/arch/powerpc/include/asm/systbl.h
@@ -390,3 +390,4 @@
 SYSCALL(statx)
 SYSCALL(pkey_alloc)
 SYSCALL(pkey_free)
+SYSCALL(pkey_mprotect)
diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h
index e0273bc..daf1ba9 100644
--- a/arch/powerpc/include/asm/unistd.h
+++ b/arch/powerpc/include/asm/unistd.h
@@ -12,12 +12,10 @@
 #include <uapi/asm/unistd.h>
 
 
-#define NR_syscalls 386
+#define NR_syscalls 387
 
 #define __NR__exit __NR_exit
 
-#define __IGNORE_pkey_mprotect
-
 #ifndef __ASSEMBLY__
 
 #include <linux/types.h>
diff --git a/arch/powerpc/include/uapi/asm/unistd.h b/arch/powerpc/include/uapi/asm/unistd.h
index 7993a07..71ae45e 100644
--- a/arch/powerpc/include/uapi/asm/unistd.h
+++ b/arch/powerpc/include/uapi/asm/unistd.h
@@ -396,5 +396,6 @@
 #define __NR_statx 383
 #define __NR_pkey_alloc 384
 #define __NR_pkey_free 385
+#define __NR_pkey_mprotect 386
 
 #endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
--
1.7.1

[RFC v7 15/25] powerpc: Program HPTE key protection bits

Ram Pai
Map the PTE protection key bits to the HPTE key protection
bits while creating HPTE entries.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/book3s/64/mmu-hash.h |    5 +++++
 arch/powerpc/include/asm/mmu_context.h        |    6 ++++++
 arch/powerpc/include/asm/pkeys.h              |   13 +++++++++++++
 arch/powerpc/mm/hash_utils_64.c               |    1 +
 4 files changed, 25 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index 6981a52..f7a6ed3 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -90,6 +90,8 @@
 #define HPTE_R_PP0 ASM_CONST(0x8000000000000000)
 #define HPTE_R_TS ASM_CONST(0x4000000000000000)
 #define HPTE_R_KEY_HI ASM_CONST(0x3000000000000000)
+#define HPTE_R_KEY_BIT0 ASM_CONST(0x2000000000000000)
+#define HPTE_R_KEY_BIT1 ASM_CONST(0x1000000000000000)
 #define HPTE_R_RPN_SHIFT 12
 #define HPTE_R_RPN ASM_CONST(0x0ffffffffffff000)
 #define HPTE_R_RPN_3_0 ASM_CONST(0x01fffffffffff000)
@@ -104,6 +106,9 @@
 #define HPTE_R_C ASM_CONST(0x0000000000000080)
 #define HPTE_R_R ASM_CONST(0x0000000000000100)
 #define HPTE_R_KEY_LO ASM_CONST(0x0000000000000e00)
+#define HPTE_R_KEY_BIT2 ASM_CONST(0x0000000000000800)
+#define HPTE_R_KEY_BIT3 ASM_CONST(0x0000000000000400)
+#define HPTE_R_KEY_BIT4 ASM_CONST(0x0000000000000200)
 
 #define HPTE_V_1TB_SEG ASM_CONST(0x4000000000000000)
 #define HPTE_V_VRMA_MASK ASM_CONST(0x4001ffffff000000)
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 7232484..2eb7f80 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -190,6 +190,12 @@ static inline int vma_pkey(struct vm_area_struct *vma)
 {
  return 0;
 }
+
+static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
+{
+ return 0x0UL;
+}
+
 #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
 
 #endif /* __KERNEL__ */
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 1ded6df..4b7e3f4 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -72,6 +72,19 @@ static inline int vma_pkey(struct vm_area_struct *vma)
 #define AMR_RD_BIT 0x1UL
 #define AMR_WR_BIT 0x2UL
 #define IAMR_EX_BIT 0x1UL
+
+static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
+{
+ if (!pkey_inited)
+ return 0x0UL;
+
+ return (((pteflags & H_PAGE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL));
+}
+
 #define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
  VM_PKEY_BIT3 | VM_PKEY_BIT4)
 #define AMR_BITS_PER_PKEY 2
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index f88423b..545f291 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -231,6 +231,7 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags)
  */
  rflags |= HPTE_R_M;
 
+ rflags |= pte_to_hpte_pkey_bits(pteflags);
  return rflags;
 }
 
--
1.7.1

[RFC v7 16/25] powerpc: helper to validate key-access permissions of a pte

Ram Pai
Add a helper function that checks whether read/write/execute
is allowed on the pte.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/book3s/64/pgtable.h |    4 +++
 arch/powerpc/include/asm/pkeys.h             |   12 +++++++++++
 arch/powerpc/mm/pkeys.c                      |   28 ++++++++++++++++++++++++++
 3 files changed, 44 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 060a1b2..2bec0f6 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -477,6 +477,10 @@ static inline void write_uamor(u64 value)
  mtspr(SPRN_UAMOR, value);
 }
 
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+extern bool arch_pte_access_permitted(u64 pte, bool write, bool execute);
+#endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
+
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
        unsigned long addr, pte_t *ptep)
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 4b7e3f4..ba7bff6 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -85,6 +85,18 @@ static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
  ((pteflags & H_PAGE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL));
 }
 
+static inline u16 pte_to_pkey_bits(u64 pteflags)
+{
+ if (!pkey_inited)
+ return 0x0UL;
+
+ return (((pteflags & H_PAGE_PKEY_BIT0) ? 0x10 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT1) ? 0x8 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT2) ? 0x4 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT3) ? 0x2 : 0x0UL) |
+ ((pteflags & H_PAGE_PKEY_BIT4) ? 0x1 : 0x0UL));
+}
+
 #define ARCH_VM_PKEY_FLAGS (VM_PKEY_BIT0 | VM_PKEY_BIT1 | VM_PKEY_BIT2 | \
  VM_PKEY_BIT3 | VM_PKEY_BIT4)
 #define AMR_BITS_PER_PKEY 2
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index edcbf48..8d756ef 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -201,3 +201,31 @@ int __arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot,
  */
  return vma_pkey(vma);
 }
+
+static bool pkey_access_permitted(int pkey, bool write, bool execute)
+{
+ int pkey_shift;
+ u64 amr;
+
+ if (!pkey)
+ return true;
+
+ pkey_shift = pkeyshift(pkey);
+ if (!(read_uamor() & (0x3UL << pkey_shift)))
+ return true;
+
+ if (execute && !(read_iamr() & (IAMR_EX_BIT << pkey_shift)))
+ return true;
+
+ amr = read_amr(); /* delay reading amr uptil absolutely needed*/
+ return ((!write && !(amr & (AMR_RD_BIT << pkey_shift))) ||
+ (write &&  !(amr & (AMR_WR_BIT << pkey_shift))));
+}
+
+bool arch_pte_access_permitted(u64 pte, bool write, bool execute)
+{
+ if (!pkey_inited)
+ return true;
+ return pkey_access_permitted(pte_to_pkey_bits(pte),
+ write, execute);
+}
--
1.7.1

[RFC v7 17/25] powerpc: check key protection for user page access

Ram Pai
Make sure that the kernel does not access user pages without
checking their key-protection.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/book3s/64/pgtable.h |   14 ++++++++++++++
 1 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 2bec0f6..d1e1460 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -479,6 +479,20 @@ static inline void write_uamor(u64 value)
 
 #ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
 extern bool arch_pte_access_permitted(u64 pte, bool write, bool execute);
+
+#define pte_access_permitted(pte, write) \
+ (pte_present(pte) && \
+ ((!(write) || pte_write(pte)) && \
+  arch_pte_access_permitted(pte_val(pte), !!write, 0)))
+
+/*
+ * We store key in pmd for huge tlb pages. So need
+ * to check for key protection.
+ */
+#define pmd_access_permitted(pmd, write) \
+ (pmd_present(pmd) && \
+ ((!(write) || pmd_write(pmd)) && \
+  arch_pte_access_permitted(pmd_val(pmd), !!write, 0)))
 #endif /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
 
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
--
1.7.1

Reply | Threaded
Open this post in threaded view
|  
Report Content as Inappropriate

[RFC v7 18/25] powerpc: Macro the mask used for checking DSI exception

Ram Pai
In reply to this post by Ram Pai
Replace the magic number used to check for a DSI exception
with a meaningful value.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/reg.h       |    7 ++++++-
 arch/powerpc/kernel/exceptions-64s.S |    2 +-
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index 7e50e47..ee04bc0 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -272,16 +272,21 @@
 #define SPRN_DAR 0x013 /* Data Address Register */
 #define SPRN_DBCR 0x136 /* e300 Data Breakpoint Control Reg */
 #define SPRN_DSISR 0x012 /* Data Storage Interrupt Status Register */
+#define   DSISR_BIT32 0x80000000 /* not defined */
 #define   DSISR_NOHPTE 0x40000000 /* no translation found */
+#define   DSISR_PAGEATTR_CONFLT 0x20000000 /* page attribute conflict */
+#define   DSISR_BIT35 0x10000000 /* not defined */
 #define   DSISR_PROTFAULT 0x08000000 /* protection fault */
 #define   DSISR_BADACCESS 0x04000000 /* bad access to CI or G */
 #define   DSISR_ISSTORE 0x02000000 /* access was a store */
 #define   DSISR_DABRMATCH 0x00400000 /* hit data breakpoint */
-#define   DSISR_NOSEGMENT 0x00200000 /* SLB miss */
 #define   DSISR_KEYFAULT 0x00200000 /* Key fault */
+#define   DSISR_BIT43 0x00100000 /* not defined */
 #define   DSISR_UNSUPP_MMU 0x00080000 /* Unsupported MMU config */
 #define   DSISR_SET_RC 0x00040000 /* Failed setting of R/C bits */
 #define   DSISR_PGDIRFAULT      0x00020000      /* Fault on page directory */
+#define   DSISR_PAGE_FAULT_MASK (DSISR_BIT32 | DSISR_PAGEATTR_CONFLT | \
+ DSISR_BADACCESS | DSISR_DABRMATCH | DSISR_BIT43)
 #define SPRN_TBRL 0x10C /* Time Base Read Lower Register (user, R/O) */
 #define SPRN_TBRU 0x10D /* Time Base Read Upper Register (user, R/O) */
 #define SPRN_CIR 0x11B /* Chip Information Register (hyper, R/0) */
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index b886795..e154bfe 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1411,7 +1411,7 @@ USE_TEXT_SECTION()
  .balign IFETCH_ALIGN_BYTES
 do_hash_page:
 #ifdef CONFIG_PPC_STD_MMU_64
- andis. r0,r4,0xa450 /* weird error? */
+ andis. r0,r4,DSISR_PAGE_FAULT_MASK@h
  bne- handle_page_fault /* if not, try to insert a HPTE */
  CURRENT_THREAD_INFO(r11, r1)
  lwz r0,TI_PREEMPT(r11) /* If we're in an "NMI" */
--
1.7.1
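
For the record, the arithmetic behind the replacement: DSISR_BIT32
(0x80000000) | DSISR_PAGEATTR_CONFLT (0x20000000) | DSISR_BADACCESS
(0x04000000) | DSISR_DABRMATCH (0x00400000) | DSISR_BIT43
(0x00100000) = 0xa4500000, whose upper halfword is exactly the old
magic constant 0xa450 tested by the andis. instruction, so the check
itself is unchanged.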

[RFC v7 19/25] powerpc: implementation for arch_vma_access_permitted()

Ram Pai
This patch provides the implementation of
arch_vma_access_permitted(), which returns true if the
requested access is allowed by the pkey associated with the
vma.

Signed-off-by: Ram Pai <[hidden email]>
---
 arch/powerpc/include/asm/mmu_context.h |    5 +++-
 arch/powerpc/mm/pkeys.c                |   43 ++++++++++++++++++++++++++++++++
 2 files changed, 47 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 2eb7f80..a1cfcca 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -175,6 +175,10 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
 {
 }
 
+#ifdef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
+bool arch_vma_access_permitted(struct vm_area_struct *vma,
+ bool write, bool execute, bool foreign);
+#else /* CONFIG_PPC64_MEMORY_PROTECTION_KEYS */
 static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
  bool write, bool execute, bool foreign)
 {
@@ -182,7 +186,6 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
  return true;
 }
 
-#ifndef CONFIG_PPC64_MEMORY_PROTECTION_KEYS
 #define pkey_initialize()
 #define pkey_mm_init(mm)
 
diff --git a/arch/powerpc/mm/pkeys.c b/arch/powerpc/mm/pkeys.c
index 8d756ef..1424c79 100644
--- a/arch/powerpc/mm/pkeys.c
+++ b/arch/powerpc/mm/pkeys.c
@@ -229,3 +229,46 @@ bool arch_pte_access_permitted(u64 pte, bool write, bool execute)
  return pkey_access_permitted(pte_to_pkey_bits(pte),
  write, execute);
 }
+
+/*
+ * We only want to enforce protection keys on the current process
+ * because we effectively have no access to AMR/IAMR for other
+ * processes or any way to tell *which * AMR/IAMR in a threaded
+ * process we could use.
+ *
+ * So do not enforce things if the VMA is not from the current
+ * mm, or if we are in a kernel thread.
+ */
+static inline bool vma_is_foreign(struct vm_area_struct *vma)
+{
+ if (!current->mm)
+ return true;
+ /*
+ * if the VMA is from another process, then AMR/IAMR has no
+ * relevance and should not be enforced.
+ */
+ if (current->mm != vma->vm_mm)
+ return true;
+
+ return false;
+}
+
+bool arch_vma_access_permitted(struct vm_area_struct *vma,
+ bool write, bool execute, bool foreign)
+{
+ int pkey;
+
+ if (!pkey_inited)
+ return true;
+
+ /* allow access if the VMA is not one from this process */
+ if (foreign || vma_is_foreign(vma))
+ return true;
+
+ pkey = vma_pkey(vma);
+
+ if (!pkey)
+ return true;
+
+ return pkey_access_permitted(pkey, write, execute);
+}
--
1.7.1
