author     John David Anglin <dave.anglin@bell.net>  2019-06-02 19:12:40 -0400
committer  Helge Deller <deller@gmx.de>              2019-06-06 14:12:22 +0200
commit     116d753308cf032159c7b7aa87c9605bb5354784 (patch)
tree       f2642cc73c48384f7b7934d0c794d27121018ffd /arch/parisc
parent     ec13c82d261b5a10e6f6e3273b60329d1146edbb (diff)
download   linux-116d753308cf032159c7b7aa87c9605bb5354784.tar.gz
parisc: Use lpa instruction to load physical addresses in driver code
Most I/O in the kernel is done using the kernel offset mapping.
However, there is one API that uses aliased kernel address ranges:

> The final category of APIs is for I/O to deliberately aliased address
> ranges inside the kernel.  Such aliases are set up by use of the
> vmap/vmalloc API.  Since kernel I/O goes via physical pages, the I/O
> subsystem assumes that the user mapping and kernel offset mapping are
> the only aliases.  This isn't true for vmap aliases, so anything in
> the kernel trying to do I/O to vmap areas must manually manage
> coherency.  It must do this by flushing the vmap range before doing
> I/O and invalidating it after the I/O returns.
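
As a hedged illustration (not part of this patch), driver code doing I/O
to a vmalloc/vmap buffer would bracket the transfer with the generic
helpers flush_kernel_vmap_range() and invalidate_kernel_vmap_range();
struct my_dev and my_dev_dma_to_buffer() below are hypothetical stand-ins
for a real device and its transfer routine:

  #include <linux/vmalloc.h>
  #include <linux/highmem.h>	/* flush/invalidate_kernel_vmap_range() */

  struct my_dev;	/* hypothetical device */
  void my_dev_dma_to_buffer(struct my_dev *dev, void *buf, size_t len);

  static void example_vmap_io(struct my_dev *dev, size_t len)
  {
  	void *buf = vmalloc(len);

  	if (!buf)
  		return;

  	flush_kernel_vmap_range(buf, len);	/* write back dirty aliases before I/O */
  	my_dev_dma_to_buffer(dev, buf, len);	/* device reads/writes the memory */
  	invalidate_kernel_vmap_range(buf, len);	/* drop stale cache lines after I/O */

  	vfree(buf);
  }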

For this reason, we should use the hardware lpa instruction to load the
physical addresses of kernel virtual addresses in the driver code.
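
As a minimal sketch of how the new helpers might be used (the wrapper
functions below are hypothetical; only lpa() and lpa_user() come from
this patch):

  #include <asm/special_insns.h>

  /* Translate a kernel virtual address (e.g. a vmap alias) to the
   * physical address the hardware sees.  The macro pre-loads 0, so an
   * unmapped address is expected to yield 0. */
  static unsigned long kva_to_phys_for_io(void *kva)
  {
  	return lpa((unsigned long)kva);
  }

  /* Same translation for a user address; lpa_user() qualifies the
   * access with the user space register %sr3. */
  static unsigned long uva_to_phys(const void __user *uva)
  {
  	return lpa_user((unsigned long)uva);
  }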

I believe we only use the vmap/vmalloc API with old PA 1.x processors,
which don't have an SBA, so we don't hit this problem.

Tested on c3750, c8000 and rp3440.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Diffstat (limited to 'arch/parisc')
-rw-r--r--  arch/parisc/include/asm/special_insns.h  24
1 file changed, 24 insertions, 0 deletions
diff --git a/arch/parisc/include/asm/special_insns.h b/arch/parisc/include/asm/special_insns.h
index 3d4dd68e181b..a303ae9a77f4 100644
--- a/arch/parisc/include/asm/special_insns.h
+++ b/arch/parisc/include/asm/special_insns.h
@@ -2,6 +2,30 @@
 #ifndef __PARISC_SPECIAL_INSNS_H
 #define __PARISC_SPECIAL_INSNS_H
 
+#define lpa(va)	({			\
+	unsigned long pa;		\
+	__asm__ __volatile__(		\
+		"copy %%r0,%0\n\t"	\
+		"lpa %%r0(%1),%0"	\
+		: "=r" (pa)		\
+		: "r" (va)		\
+		: "memory"		\
+	);				\
+	pa;				\
+})
+
+#define lpa_user(va)	({		\
+	unsigned long pa;		\
+	__asm__ __volatile__(		\
+		"copy %%r0,%0\n\t"	\
+		"lpa %%r0(%%sr3,%1),%0"	\
+		: "=r" (pa)		\
+		: "r" (va)		\
+		: "memory"		\
+	);				\
+	pa;				\
+})
+
 #define mfctl(reg)	({		\
 	unsigned long cr;		\
 	__asm__ __volatile__(		\