[PATCH v2 0/4] ftrace/kprobe fixes

[PATCH v2 0/4] ftrace/kprobe fixes

Naveen N. Rao
This is v2 of some of the patches from:
https://www.mail-archive.com/linuxppc-dev@.../msg117777.html

The first two patches should probably go to -stable. The others can go
into v4.12.

Thanks,
Naveen

Naveen N. Rao (4):
  powerpc/kprobes: Pause function_graph tracing during jprobes handling
  powerpc/ftrace: Pass the correct stack pointer for
    DYNAMIC_FTRACE_WITH_REGS
  powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes
  powerpc/xmon: Disable ftrace while in xmon

 arch/powerpc/include/asm/kprobes.h             |  1 +
 arch/powerpc/kernel/kprobes.c                  | 17 +++++++++
 arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 52 ++++++++++++++++++++------
 arch/powerpc/xmon/xmon.c                       |  5 +++
 4 files changed, 63 insertions(+), 12 deletions(-)

--
2.12.2

[PATCH v2 1/4] powerpc/kprobes: Pause function_graph tracing during jprobes handling

Naveen N. Rao
This fixes a crash when function_graph and jprobes are used together.
This is essentially commit 237d28db036e ("ftrace/jprobes/x86: Fix
conflict between jprobes and function graph tracing"), but for powerpc.

Jprobes breaks function_graph tracing since the jprobe hook needs to use
jprobe_return(), which never returns back to the hook, but instead to
the original jprobe'd function. The solution is to momentarily pause
function_graph tracing before invoking the jprobe hook and re-enable it
when returning back to the original jprobe'd function.

Fixes: 6794c78243bfd ("powerpc64: port of the function graph tracer")
Signed-off-by: Naveen N. Rao <[hidden email]>
---
v2: No changes since v1.

 arch/powerpc/kernel/kprobes.c | 11 +++++++++++
 1 file changed, 11 insertions(+)
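
For background, here is a minimal jprobe module sketch of this era (not part of this patch; the probed symbol _do_fork and the handler signature are only illustrative). It shows the control flow that breaks function_graph: the handler ends in jprobe_return(), which jumps back to the probed function instead of returning to the handler's caller, so the graph tracer's return-address bookkeeping never sees a matching exit.

/* Illustrative sketch only, assuming the jprobe API in <linux/kprobes.h> of this era */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>

/* The handler mirrors the signature of the probed function (example: _do_fork) */
static long jp_do_fork(unsigned long clone_flags, unsigned long stack_start,
		       unsigned long stack_size, int __user *parent_tidptr,
		       int __user *child_tidptr, unsigned long tls)
{
	pr_info("_do_fork: clone_flags=0x%lx\n", clone_flags);

	/* Control goes back to the probed function here, never to our caller */
	jprobe_return();
	return 0;	/* not reached */
}

static struct jprobe my_jprobe = {
	.entry	= jp_do_fork,
	.kp	= {
		.symbol_name	= "_do_fork",
	},
};

static int __init jp_init(void)
{
	return register_jprobe(&my_jprobe);
}

static void __exit jp_exit(void)
{
	unregister_jprobe(&my_jprobe);
}

module_init(jp_init);
module_exit(jp_exit);
MODULE_LICENSE("GPL");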

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index e1fb95385a9e..0554d6e66194 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -610,6 +610,15 @@ int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
  regs->gpr[2] = (unsigned long)(((func_descr_t *)jp->entry)->toc);
 #endif
 
+ /*
+ * jprobes use jprobe_return() which skips the normal return
+ * path of the function, and this messes up the accounting of the
+ * function graph tracer.
+ *
+ * Pause function graph tracing while performing the jprobe function.
+ */
+ pause_graph_tracing();
+
  return 1;
 }
 NOKPROBE_SYMBOL(setjmp_pre_handler);
@@ -635,6 +644,8 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
  * saved regs...
  */
  memcpy(regs, &kcb->jprobe_saved_regs, sizeof(struct pt_regs));
+ /* It's OK to start function graph tracing again */
+ unpause_graph_tracing();
  preempt_enable_no_resched();
  return 1;
 }
--
2.12.2

[PATCH v2 2/4] powerpc/ftrace: Pass the correct stack pointer for DYNAMIC_FTRACE_WITH_REGS

Naveen N. Rao
For DYNAMIC_FTRACE_WITH_REGS, we should be passing-in the original set
of registers in pt_regs, to capture the state _before_ ftrace_caller.
However, we are instead passing the stack pointer *after* allocating a
stack frame in ftrace_caller. Fix this by saving the proper value of r1
in pt_regs. Also, use SAVE_10GPRS() to simplify the code.

Fixes: 153086644fd1f ("powerpc/ftrace: Add support for -mprofile-kernel ftrace ABI")
Signed-off-by: Naveen N. Rao <[hidden email]>
---
v2: No changes since v1.

 arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)
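
To see why the saved r1 matters, here is a hedged sketch (not from this series) of a DYNAMIC_FTRACE_WITH_REGS consumer, assuming the ftrace_ops callback interface of this era; the traced symbol _do_fork is just an example. With this fix, regs->gpr[1] seen by such a callback is the traced function's own stack pointer rather than the SWITCH_FRAME_SIZE frame that ftrace_caller allocated for itself.

/* Illustrative sketch only */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/string.h>
#include <linux/ptrace.h>
#include <linux/ftrace.h>

static void notrace my_regs_cb(unsigned long ip, unsigned long parent_ip,
			       struct ftrace_ops *op, struct pt_regs *regs)
{
	/* With this patch, gpr[1] is the traced function's r1, not the trampoline frame */
	pr_info_ratelimited("traced %pS: caller sp=0x%lx\n",
			    (void *)ip, regs->gpr[1]);
}

static struct ftrace_ops my_ops = {
	.func	= my_regs_cb,
	.flags	= FTRACE_OPS_FL_SAVE_REGS,
};

static int __init regs_cb_init(void)
{
	static char func[] = "_do_fork";	/* illustrative target */
	int ret;

	ret = ftrace_set_filter(&my_ops, func, strlen(func), 0);
	if (ret)
		return ret;

	return register_ftrace_function(&my_ops);
}

static void __exit regs_cb_exit(void)
{
	unregister_ftrace_function(&my_ops);
}

module_init(regs_cb_init);
module_exit(regs_cb_exit);
MODULE_LICENSE("GPL");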

diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index 7c933a99f5d5..fa0921410fa4 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -45,10 +45,14 @@ _GLOBAL(ftrace_caller)
  stdu r1,-SWITCH_FRAME_SIZE(r1)
 
  /* Save all gprs to pt_regs */
- SAVE_8GPRS(0,r1)
- SAVE_8GPRS(8,r1)
- SAVE_8GPRS(16,r1)
- SAVE_8GPRS(24,r1)
+ SAVE_GPR(0, r1)
+ SAVE_10GPRS(2, r1)
+ SAVE_10GPRS(12, r1)
+ SAVE_10GPRS(22, r1)
+
+ /* Save previous stack pointer (r1) */
+ addi r8, r1, SWITCH_FRAME_SIZE
+ std r8, GPR1(r1)
 
  /* Load special regs for save below */
  mfmsr   r8
@@ -103,10 +107,10 @@ ftrace_call:
 #endif
 
  /* Restore gprs */
- REST_8GPRS(0,r1)
- REST_8GPRS(8,r1)
- REST_8GPRS(16,r1)
- REST_8GPRS(24,r1)
+ REST_GPR(0,r1)
+ REST_10GPRS(2,r1)
+ REST_10GPRS(12,r1)
+ REST_10GPRS(22,r1)
 
  /* Restore possibly modified LR */
  ld r0, _LINK(r1)
--
2.12.2

[PATCH v2 3/4] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes

Naveen N. Rao
ftrace_caller() depends on a modified regs->nip to detect if a certain
function has been livepatched. However, with KPROBES_ON_FTRACE, it is
possible for regs->nip to have been modified by the kprobes pre_handler
(jprobes, for instance). In this case, we do not want to invoke the
livepatch_handler so as not to consume the livepatch stack.

To distinguish between the two (kprobes and livepatch), we check if
there is an active kprobe on the current function. If there is, then we
know for sure that it must have modified the NIP as we don't support
livepatching a kprobe'd function. In this case, we simply skip the
livepatch_handler and branch to the new NIP. Otherwise, the
livepatch_handler is invoked.

Fixes: ead514d5fb30a ("powerpc/kprobes: Add support for KPROBES_ON_FTRACE")
Signed-off-by: Naveen N. Rao <[hidden email]>
---
v2: Changed to use current_kprobe->addr to determine jprobe vs.
livepatch.

 arch/powerpc/include/asm/kprobes.h             |  1 +
 arch/powerpc/kernel/kprobes.c                  |  6 +++++
 arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 32 ++++++++++++++++++++++----
 3 files changed, 35 insertions(+), 4 deletions(-)
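
As a plain-C model of the decision this patch adds to ftrace_caller() (illustration only; the real check runs in the assembly below, and is_current_kprobe_addr() is stubbed so the sketch stands alone), the cmpd/cmpdi and bne interplay works out to:

#include <stdbool.h>

#define MCOUNT_INSN_SIZE	4	/* mirrors "subi r3, r14, 4" in the patch */

/* Stand-in for the helper this patch adds to arch/powerpc/kernel/kprobes.c */
static int is_current_kprobe_addr(unsigned long addr)
{
	(void)addr;
	return 0;
}

/* True when ftrace_caller() should fall through to livepatch_handler() */
static bool want_livepatch_handler(unsigned long saved_nip, unsigned long new_nip)
{
	bool cr0_eq;

	/* cmpd r14, r15: has the NIP been altered at all? */
	cr0_eq = (saved_nip == new_nip);

	/* beq 1f: if untouched, skip the kprobe check entirely */
	if (!cr0_eq)
		/* cmpdi r3, 1: a kprobe/jprobe altered it, so no livepatch handling */
		cr0_eq = (is_current_kprobe_addr(saved_nip - MCOUNT_INSN_SIZE) == 1);

	/* bne- livepatch_handler */
	return !cr0_eq;
}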

diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h
index a83821f33ea3..8814a7249ceb 100644
--- a/arch/powerpc/include/asm/kprobes.h
+++ b/arch/powerpc/include/asm/kprobes.h
@@ -103,6 +103,7 @@ extern int kprobe_exceptions_notify(struct notifier_block *self,
 extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
 extern int kprobe_handler(struct pt_regs *regs);
 extern int kprobe_post_handler(struct pt_regs *regs);
+extern int is_current_kprobe_addr(unsigned long addr);
 #ifdef CONFIG_KPROBES_ON_FTRACE
 extern int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
    struct kprobe_ctlblk *kcb);
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 0554d6e66194..ec9d94c82fbd 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -43,6 +43,12 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
 
 struct kretprobe_blackpoint kretprobe_blacklist[] = {{NULL, NULL}};
 
+int is_current_kprobe_addr(unsigned long addr)
+{
+ struct kprobe *p = kprobe_running();
+ return (p && (unsigned long)p->addr == addr) ? 1 : 0;
+}
+
 bool arch_within_kprobe_blacklist(unsigned long addr)
 {
  return  (addr >= (unsigned long)__kprobes_text_start &&
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index fa0921410fa4..e6837e85ec28 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -99,13 +99,37 @@ ftrace_call:
  bl ftrace_stub
  nop
 
- /* Load ctr with the possibly modified NIP */
- ld r3, _NIP(r1)
- mtctr r3
+ /* Load the possibly modified NIP */
+ ld r15, _NIP(r1)
+
 #ifdef CONFIG_LIVEPATCH
- cmpd r14,r3 /* has NIP been altered? */
+ cmpd r14, r15 /* has NIP been altered? */
+#endif
+
+#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_KPROBES_ON_FTRACE)
+ beq 1f
+
+ /* Check if there is an active kprobe on us */
+ subi r3, r14, 4
+ bl is_current_kprobe_addr
+ nop
+
+ /*
+ * If r3 == 1, then this is a kprobe/jprobe.
+ * else, this is livepatched function.
+ *
+ * The subsequent conditional branch for livepatch_handler
+ * will use the result of this compare. For kprobe/jprobe,
+ * we just need to branch to the new NIP, so nothing special
+ * is needed.
+ */
+ cmpdi r3, 1
+1:
 #endif
 
+ /* Load CTR with the possibly modified NIP */
+ mtctr r15
+
  /* Restore gprs */
  REST_GPR(0,r1)
  REST_10GPRS(2,r1)
--
2.12.2

[PATCH v2 4/4] powerpc/xmon: Disable ftrace while in xmon

Naveen N. Rao
Disable ftrace when we enter xmon so as not to clobber/pollute the
trace buffer. In addition, we cannot have function_graph enabled while
in xmon since we use setjmp/longjmp which confuses the function call
history maintained by function_graph.

Signed-off-by: Naveen N. Rao <[hidden email]>
---
v2: Disable ftrace, rather than just pausing function_graph.

 arch/powerpc/xmon/xmon.c | 5 +++++
 1 file changed, 5 insertions(+)
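
The save/disable/restore bracketing used below, shown in isolation (a sketch, not additional xmon code): assuming the __ftrace_enabled_save()/__ftrace_enabled_restore() helpers from <linux/ftrace.h> of this era save-and-clear and later restore the ftrace_enabled knob, anything called in between stays out of the trace buffer.

#include <linux/ftrace.h>

/* Sketch of the pattern xmon_core() adopts in this patch */
static void run_quietly(void (*debug_work)(void))
{
	int saved_ftrace_enabled = __ftrace_enabled_save();

	debug_work();		/* e.g. the xmon command loop */

	__ftrace_enabled_restore(saved_ftrace_enabled);
}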

diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index a728e1919613..e51ce32d21fd 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -28,6 +28,7 @@
 #include <linux/bug.h>
 #include <linux/nmi.h>
 #include <linux/ctype.h>
+#include <linux/ftrace.h>
 
 #include <asm/debugfs.h>
 #include <asm/ptrace.h>
@@ -456,10 +457,13 @@ static int xmon_core(struct pt_regs *regs, int fromipi)
  int cpu;
  int secondary;
 #endif
+ int save_ftrace_enabled;
 
  local_irq_save(flags);
  hard_irq_disable();
 
+ save_ftrace_enabled = __ftrace_enabled_save();
+
  bp = in_breakpoint_table(regs->nip, &offset);
  if (bp != NULL) {
  regs->nip = bp->address + offset;
@@ -654,6 +658,7 @@ static int xmon_core(struct pt_regs *regs, int fromipi)
  insert_cpu_bpts();
 
  touch_nmi_watchdog();
+ __ftrace_enabled_restore(save_ftrace_enabled);
  local_irq_restore(flags);
 
  return cmd != 'X' && cmd != EOF;
--
2.12.2

Re: [PATCH v2 1/4] powerpc/kprobes: Pause function_graph tracing during jprobes handling

Masami Hiramatsu
On Thu,  1 Jun 2017 16:18:15 +0530
"Naveen N. Rao" <[hidden email]> wrote:

> This fixes a crash when function_graph and jprobes are used together.
> This is essentially commit 237d28db036e ("ftrace/jprobes/x86: Fix
> conflict between jprobes and function graph tracing"), but for powerpc.
>
> Jprobes breaks function_graph tracing since the jprobe hook needs to use
> jprobe_return(), which never returns back to the hook, but instead to
> the original jprobe'd function. The solution is to momentarily pause
> function_graph tracing before invoking the jprobe hook and re-enable it
> when returning back to the original jprobe'd function.
>
> Fixes: 6794c78243bfd ("powerpc64: port of the function graph tracer")
> Signed-off-by: Naveen N. Rao <[hidden email]>

Acked-by: Masami Hiramatsu <[hidden email]>

Thanks!

> ---
> v2: No changes since v1.
>
>  arch/powerpc/kernel/kprobes.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index e1fb95385a9e..0554d6e66194 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -610,6 +610,15 @@ int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
>   regs->gpr[2] = (unsigned long)(((func_descr_t *)jp->entry)->toc);
>  #endif
>  
> + /*
> + * jprobes use jprobe_return() which skips the normal return
> + * path of the function, and this messes up the accounting of the
> + * function graph tracer.
> + *
> + * Pause function graph tracing while performing the jprobe function.
> + */
> + pause_graph_tracing();
> +
>   return 1;
>  }
>  NOKPROBE_SYMBOL(setjmp_pre_handler);
> @@ -635,6 +644,8 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
>   * saved regs...
>   */
>   memcpy(regs, &kcb->jprobe_saved_regs, sizeof(struct pt_regs));
> + /* It's OK to start function graph tracing again */
> + unpause_graph_tracing();
>   preempt_enable_no_resched();
>   return 1;
>  }
> --
> 2.12.2
>


--
Masami Hiramatsu <[hidden email]>

Re: [PATCH v2 3/4] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes

Masami Hiramatsu
On Thu,  1 Jun 2017 16:18:17 +0530
"Naveen N. Rao" <[hidden email]> wrote:

> ftrace_caller() depends on a modified regs->nip to detect if a certain
> function has been livepatched. However, with KPROBES_ON_FTRACE, it is
> possible for regs->nip to have been modified by the kprobes pre_handler
> (jprobes, for instance). In this case, we do not want to invoke the
> livepatch_handler so as not to consume the livepatch stack.
>
> To distinguish between the two (kprobes and livepatch), we check if
> there is an active kprobe on the current function. If there is, then we
> know for sure that it must have modified the NIP as we don't support
> livepatching a kprobe'd function. In this case, we simply skip the
> livepatch_handler and branch to the new NIP. Otherwise, the
> livepatch_handler is invoked.

OK, this logic seems good to me, but I would like to leave it to the
PPC64 maintainer too.

Reviewed-by: Masami Hiramatsu <[hidden email]>

Thanks!

>
> Fixes: ead514d5fb30a ("powerpc/kprobes: Add support for
> KPROBES_ON_FTRACE")
> Signed-off-by: Naveen N. Rao <[hidden email]>
> ---
> v2: Changed to use current_kprobe->addr to determine jprobe vs.
> livepatch.
>
>  arch/powerpc/include/asm/kprobes.h             |  1 +
>  arch/powerpc/kernel/kprobes.c                  |  6 +++++
>  arch/powerpc/kernel/trace/ftrace_64_mprofile.S | 32 ++++++++++++++++++++++----
>  3 files changed, 35 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h
> index a83821f33ea3..8814a7249ceb 100644
> --- a/arch/powerpc/include/asm/kprobes.h
> +++ b/arch/powerpc/include/asm/kprobes.h
> @@ -103,6 +103,7 @@ extern int kprobe_exceptions_notify(struct notifier_block *self,
>  extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
>  extern int kprobe_handler(struct pt_regs *regs);
>  extern int kprobe_post_handler(struct pt_regs *regs);
> +extern int is_current_kprobe_addr(unsigned long addr);
>  #ifdef CONFIG_KPROBES_ON_FTRACE
>  extern int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
>     struct kprobe_ctlblk *kcb);
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index 0554d6e66194..ec9d94c82fbd 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -43,6 +43,12 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>  
>  struct kretprobe_blackpoint kretprobe_blacklist[] = {{NULL, NULL}};
>  
> +int is_current_kprobe_addr(unsigned long addr)
> +{
> + struct kprobe *p = kprobe_running();
> + return (p && (unsigned long)p->addr == addr) ? 1 : 0;
> +}
> +
>  bool arch_within_kprobe_blacklist(unsigned long addr)
>  {
>   return  (addr >= (unsigned long)__kprobes_text_start &&
> diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> index fa0921410fa4..e6837e85ec28 100644
> --- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> +++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> @@ -99,13 +99,37 @@ ftrace_call:
>   bl ftrace_stub
>   nop
>  
> - /* Load ctr with the possibly modified NIP */
> - ld r3, _NIP(r1)
> - mtctr r3
> + /* Load the possibly modified NIP */
> + ld r15, _NIP(r1)
> +
>  #ifdef CONFIG_LIVEPATCH
> - cmpd r14,r3 /* has NIP been altered? */
> + cmpd r14, r15 /* has NIP been altered? */
> +#endif
> +
> +#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_KPROBES_ON_FTRACE)
> + beq 1f
> +
> + /* Check if there is an active kprobe on us */
> + subi r3, r14, 4
> + bl is_current_kprobe_addr
> + nop
> +
> + /*
> + * If r3 == 1, then this is a kprobe/jprobe.
> + * else, this is livepatched function.
> + *
> + * The subsequent conditional branch for livepatch_handler
> + * will use the result of this compare. For kprobe/jprobe,
> + * we just need to branch to the new NIP, so nothing special
> + * is needed.
> + */
> + cmpdi r3, 1
> +1:
>  #endif
>  
> + /* Load CTR with the possibly modified NIP */
> + mtctr r15
> +
>   /* Restore gprs */
>   REST_GPR(0,r1)
>   REST_10GPRS(2,r1)
> --
> 2.12.2
>


--
Masami Hiramatsu <[hidden email]>

Re: [PATCH v2 1/4] powerpc/kprobes: Pause function_graph tracing during jprobes handling

Steven Rostedt
On Thu, 8 Jun 2017 22:43:54 +0900
Masami Hiramatsu <[hidden email]> wrote:

> On Thu,  1 Jun 2017 16:18:15 +0530
> "Naveen N. Rao" <[hidden email]> wrote:
>
> > This fixes a crash when function_graph and jprobes are used together.
> > This is essentially commit 237d28db036e ("ftrace/jprobes/x86: Fix
> > conflict between jprobes and function graph tracing"), but for powerpc.
> >
> > Jprobes breaks function_graph tracing since the jprobe hook needs to use
> > jprobe_return(), which never returns back to the hook, but instead to
> > the original jprobe'd function. The solution is to momentarily pause
> > function_graph tracing before invoking the jprobe hook and re-enable it
> > when returning back to the original jprobe'd function.
> >
> > Fixes: 6794c78243bfd ("powerpc64: port of the function graph tracer")
> > Signed-off-by: Naveen N. Rao <[hidden email]>  
>
> Acked-by: Masami Hiramatsu <[hidden email]>

For what it's worth...

Acked-by: Steven Rostedt (VMware) <[hidden email]>

-- Steve

>
> Thanks!
>
> > ---
> > v2: No changes since v1.
> >
> >  arch/powerpc/kernel/kprobes.c | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> >
> > diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> > index e1fb95385a9e..0554d6e66194 100644
> > --- a/arch/powerpc/kernel/kprobes.c
> > +++ b/arch/powerpc/kernel/kprobes.c
> > @@ -610,6 +610,15 @@ int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
> >   regs->gpr[2] = (unsigned long)(((func_descr_t *)jp->entry)->toc);
> >  #endif
> >  
> > + /*
> > + * jprobes use jprobe_return() which skips the normal return
> > + * path of the function, and this messes up the accounting of the
> > + * function graph tracer.
> > + *
> > + * Pause function graph tracing while performing the jprobe function.
> > + */
> > + pause_graph_tracing();
> > +
> >   return 1;
> >  }
> >  NOKPROBE_SYMBOL(setjmp_pre_handler);
> > @@ -635,6 +644,8 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
> >   * saved regs...
> >   */
> >   memcpy(regs, &kcb->jprobe_saved_regs, sizeof(struct pt_regs));
> > + /* It's OK to start function graph tracing again */
> > + unpause_graph_tracing();
> >   preempt_enable_no_resched();
> >   return 1;
> >  }
> > --
> > 2.12.2
> >  
>
>

Re: [PATCH v2 3/4] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes

Michael Ellerman
"Naveen N. Rao" <[hidden email]> writes:

> diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> index fa0921410fa4..e6837e85ec28 100644
> --- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> +++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> @@ -99,13 +99,37 @@ ftrace_call:
>   bl ftrace_stub
>   nop
>  
> - /* Load ctr with the possibly modified NIP */
> - ld r3, _NIP(r1)
> - mtctr r3
> + /* Load the possibly modified NIP */
> + ld r15, _NIP(r1)
> +
>  #ifdef CONFIG_LIVEPATCH
> - cmpd r14,r3 /* has NIP been altered? */
> + cmpd r14, r15 /* has NIP been altered? */
> +#endif
> +
> +#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_KPROBES_ON_FTRACE)
> + beq 1f
> +
> + /* Check if there is an active kprobe on us */
> + subi r3, r14, 4
> + bl is_current_kprobe_addr
> + nop
> +
> + /*
> + * If r3 == 1, then this is a kprobe/jprobe.
> + * else, this is livepatched function.
> + *
> + * The subsequent conditional branch for livepatch_handler
> + * will use the result of this compare. For kprobe/jprobe,
> + * we just need to branch to the new NIP, so nothing special
> + * is needed.
> + */
> + cmpdi r3, 1
> +1:
>  #endif

I added some more comments. Hopefully I got them right :D

 #if defined(CONFIG_LIVEPATCH) && defined(CONFIG_KPROBES_ON_FTRACE)
+       /* NIP has not been altered, skip over further checks */
        beq     1f
 
        /* Check if there is an active kprobe on us */
@@ -118,10 +119,11 @@ ftrace_call:
         * If r3 == 1, then this is a kprobe/jprobe.
         * else, this is livepatched function.
         *
-        * The subsequent conditional branch for livepatch_handler
-        * will use the result of this compare. For kprobe/jprobe,
-        * we just need to branch to the new NIP, so nothing special
-        * is needed.
+        * The conditional branch for livepatch_handler below will use the
+        * result of this comparison. For kprobe/jprobe, we just need to branch to
+        * the new NIP, not call livepatch_handler. The branch below is bne, so we
+        * want CR0[EQ] to be true if this is a kprobe/jprobe. Which means we want
+        * CR0[EQ] = (r3 == 1).
         */
        cmpdi   r3, 1
 1:
@@ -147,7 +149,10 @@ ftrace_call:
        addi r1, r1, SWITCH_FRAME_SIZE
 
 #ifdef CONFIG_LIVEPATCH
-        /* Based on the cmpd above, if the NIP was altered handle livepatch */
+        /*
+        * Based on the cmpd or cmpdi above, if the NIP was altered and we're
+        * not on a kprobe/jprobe, then handle livepatch.
+        */
        bne-    livepatch_handler
 #endif


cheers

Re: [PATCH v2 3/4] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes

Naveen N. Rao
On 2017/06/15 09:19PM, Michael Ellerman wrote:

> "Naveen N. Rao" <[hidden email]> writes:
>
> > diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> > index fa0921410fa4..e6837e85ec28 100644
> > --- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> > +++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> > @@ -99,13 +99,37 @@ ftrace_call:
> >   bl ftrace_stub
> >   nop
> >  
> > - /* Load ctr with the possibly modified NIP */
> > - ld r3, _NIP(r1)
> > - mtctr r3
> > + /* Load the possibly modified NIP */
> > + ld r15, _NIP(r1)
> > +
> >  #ifdef CONFIG_LIVEPATCH
> > - cmpd r14,r3 /* has NIP been altered? */
> > + cmpd r14, r15 /* has NIP been altered? */
> > +#endif
> > +
> > +#if defined(CONFIG_LIVEPATCH) && defined(CONFIG_KPROBES_ON_FTRACE)
> > + beq 1f
> > +
> > + /* Check if there is an active kprobe on us */
> > + subi r3, r14, 4
> > + bl is_current_kprobe_addr
> > + nop
> > +
> > + /*
> > + * If r3 == 1, then this is a kprobe/jprobe.
> > + * else, this is livepatched function.
> > + *
> > + * The subsequent conditional branch for livepatch_handler
> > + * will use the result of this compare. For kprobe/jprobe,
> > + * we just need to branch to the new NIP, so nothing special
> > + * is needed.
> > + */
> > + cmpdi r3, 1
> > +1:
> >  #endif
>
> I added some more comments. Hopefully I got them right :D
>
>  #if defined(CONFIG_LIVEPATCH) && defined(CONFIG_KPROBES_ON_FTRACE)
> +       /* NIP has not been altered, skip over further checks */
>         beq     1f
>
>         /* Check if there is an active kprobe on us */
> @@ -118,10 +119,11 @@ ftrace_call:
>          * If r3 == 1, then this is a kprobe/jprobe.
>          * else, this is livepatched function.
>          *
> -        * The subsequent conditional branch for livepatch_handler
> -        * will use the result of this compare. For kprobe/jprobe,
> -        * we just need to branch to the new NIP, so nothing special
> -        * is needed.
> +        * The conditional branch for livepatch_handler below will use the
> +        * result of this comparison. For kprobe/jprobe, we just need to branch to
> +        * the new NIP, not call livepatch_handler. The branch below is bne, so we
> +        * want CR0[EQ] to be true if this is a kprobe/jprobe. Which means we want
> +        * CR0[EQ] = (r3 == 1).
>          */
>         cmpdi   r3, 1
>  1:
> @@ -147,7 +149,10 @@ ftrace_call:
>         addi r1, r1, SWITCH_FRAME_SIZE
>
>  #ifdef CONFIG_LIVEPATCH
> -        /* Based on the cmpd above, if the NIP was altered handle livepatch */
> +        /*
> +        * Based on the cmpd or cmpdi above, if the NIP was altered and we're
> +        * not on a kprobe/jprobe, then handle livepatch.
> +        */
>         bne-    livepatch_handler
>  #endif

Oh yes, this makes the code logic a whole lot clearer and more explicit.
Thanks for expanding on it!

- Naveen

Re: [v2, 1/4] powerpc/kprobes: Pause function_graph tracing during jprobes handling

Michael Ellerman
On Thu, 2017-06-01 at 10:48:15 UTC, "Naveen N. Rao" wrote:

> This fixes a crash when function_graph and jprobes are used together.
> This is essentially commit 237d28db036e ("ftrace/jprobes/x86: Fix
> conflict between jprobes and function graph tracing"), but for powerpc.
>
> Jprobes breaks function_graph tracing since the jprobe hook needs to use
> jprobe_return(), which never returns back to the hook, but instead to
> the original jprobe'd function. The solution is to momentarily pause
> function_graph tracing before invoking the jprobe hook and re-enable it
> when returning back to the original jprobe'd function.
>
> Fixes: 6794c78243bfd ("powerpc64: port of the function graph tracer")
> Signed-off-by: Naveen N. Rao <[hidden email]>
> Acked-by: Masami Hiramatsu <[hidden email]>
> Acked-by: Steven Rostedt (VMware) <[hidden email]>

Applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/a9f8553e935f26cb5447f67e280946

cheers

Re: [v2, 2/4] powerpc/ftrace: Pass the correct stack pointer for DYNAMIC_FTRACE_WITH_REGS

Michael Ellerman
On Thu, 2017-06-01 at 10:48:16 UTC, "Naveen N. Rao" wrote:
> For DYNAMIC_FTRACE_WITH_REGS, we should be passing-in the original set
> of registers in pt_regs, to capture the state _before_ ftrace_caller.
> However, we are instead passing the stack pointer *after* allocating a
> stack frame in ftrace_caller. Fix this by saving the proper value of r1
> in pt_regs. Also, use SAVE_10GPRS() to simplify the code.
>
> Fixes: 153086644fd1f ("powerpc/ftrace: Add support for -mprofile-kernel
> ftrace ABI")
> Signed-off-by: Naveen N. Rao <[hidden email]>

Applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/a4979a7e71eb8da976cbe4a0a1fa50

cheers

Re: [v2, 3/4] powerpc/kprobes_on_ftrace: Skip livepatch_handler() for jprobes

Michael Ellerman
On Thu, 2017-06-01 at 10:48:17 UTC, "Naveen N. Rao" wrote:

> ftrace_caller() depends on a modified regs->nip to detect if a certain
> function has been livepatched. However, with KPROBES_ON_FTRACE, it is
> possible for regs->nip to have been modified by the kprobes pre_handler
> (jprobes, for instance). In this case, we do not want to invoke the
> livepatch_handler so as not to consume the livepatch stack.
>
> To distinguish between the two (kprobes and livepatch), we check if
> there is an active kprobe on the current function. If there is, then we
> know for sure that it must have modified the NIP as we don't support
> livepatching a kprobe'd function. In this case, we simply skip the
> livepatch_handler and branch to the new NIP. Otherwise, the
> livepatch_handler is invoked.
>
> Fixes: ead514d5fb30a ("powerpc/kprobes: Add support for
> KPROBES_ON_FTRACE")
> Signed-off-by: Naveen N. Rao <[hidden email]>
> Reviewed-by: Masami Hiramatsu <[hidden email]>

Applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/c05b8c4474c03026aaa7f8872e7836

cheers