2023-08-26  x86/CPU/AMD: Fix the DIV(0) initial fix attempt (Borislav Petkov (AMD))

commit f58d6fbcb7c848b7f2469be339bc571f2e9d245b upstream.

Initially, it was thought that doing an innocuous division in the #DE handler would suffice to prevent any leaking of old data from the divider, but by the time the fault is raised, the speculation has already advanced too far and such data could already have been used by younger operations.

Therefore, do the innocuous division on every exit to userspace so that userspace doesn't see any potentially old data from integer divisions in kernel space. Do the same before VMRUN, to protect host data from leaking into the guest too.

Fixes: 77245f1c3c64 ("x86/CPU/AMD: Do not leak quotient data after a division by 0")
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Cc: <stable@kernel.org>
Link: https://lore.kernel.org/r/20230811213824.10025-1-bp@alien8.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

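For illustration, the mechanism can be sketched in kernel C, modelled on the upstream amd_clear_divider() helper; treat the exact ALTERNATIVE()/X86_BUG_DIV0 plumbing as an assumption rather than a quote of this patch:

    /*
     * Zap any stale quotient/remainder state with an innocuous 0/1
     * division before returning to user mode (and before VMRUN), so
     * nothing older can leak out of the divider.  Patched in only on
     * parts with X86_BUG_DIV0 set.
     */
    static __always_inline void amd_clear_divider(void)
    {
            asm volatile(ALTERNATIVE("", "div %2\n\t", X86_BUG_DIV0)
                         :: "a" (0), "d" (0), "r" (1));
    }
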
2023-08-26  x86/retpoline: Don't clobber RFLAGS during srso_safe_ret() (Sean Christopherson)

commit ba5ca5e5e6a1d55923e88b4a83da452166f5560e upstream.

Use LEA instead of ADD when adjusting %rsp in srso_safe_ret{,_alias}() so as to avoid clobbering flags. Drop one of the INT3 instructions to account for the LEA consuming one more byte than the ADD.

KVM's emulator makes indirect calls into a jump table of sorts, where the destination of each call is a small blob of code that performs fast emulation by executing the target instruction with fixed operands. E.g. to emulate ADC, fastop() invokes adcb_al_dl():

  adcb_al_dl:
     <+0>: adc    %dl,%al
     <+2>: jmp    <__x86_return_thunk>

A major motivation for doing fast emulation is to leverage the CPU to handle consumption and manipulation of arithmetic flags, i.e. RFLAGS is both an input and output to the target of the call. fastop() collects the RFLAGS result by pushing RFLAGS onto the stack and popping them back into a variable (held in %rdi in this case):

  asm("push %[flags]; popf; " CALL_NOSPEC " ; pushf; pop %[flags]\n"

     <+71>: mov    0xc0(%r8),%rdx
     <+78>: mov    0x100(%r8),%rcx
     <+85>: push   %rdi
     <+86>: popf
     <+87>: call   *%rsi
     <+89>: nop
     <+90>: nop
     <+91>: nop
     <+92>: pushf
     <+93>: pop    %rdi

and then propagating the arithmetic flags into the vCPU's emulator state:

  ctxt->eflags = (ctxt->eflags & ~EFLAGS_MASK) | (flags & EFLAGS_MASK);

     <+64>:  and    $0xfffffffffffff72a,%r9
     <+94>:  and    $0x8d5,%edi
     <+109>: or     %rdi,%r9
     <+122>: mov    %r9,0x10(%r8)

The failures can be most easily reproduced by running the "emulator" test in KVM-Unit-Tests.

If you're feeling a bit of deja vu, see commit b63f20a778c8 ("x86/retpoline: Don't clobber RFLAGS during CALL_NOSPEC on i386").

In addition, this breaks booting of a clang-compiled guest on a gcc-compiled host where the host contains the %rsp-modifying SRSO mitigations.

[ bp: Massage commit message, extend, remove addresses. ]

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Closes: https://lore.kernel.org/all/de474347-122d-54cd-eabf-9dcc95ab9eae@amd.com
Reported-by: Srikanth Aithal <sraithal@amd.com>
Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/20230810013334.GA5354@dev-arch.thelio-3990X/
Link: https://lore.kernel.org/r/20230811155255.250835-1-seanjc@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

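The shape of the fix can be sketched as a naked C stub (illustrative only; the real thunk is hand-written assembly in the retpoline code):

    /* Sketch: same %rsp adjustment, but LEA does not write RFLAGS. */
    __attribute__((naked)) static void srso_safe_ret_sketch(void)
    {
            asm volatile(
                    /* was: "add $8, %rsp" -- sets OF/SF/ZF/AF/CF/PF */
                    "lea 8(%rsp), %rsp\n\t" /* flags left untouched  */
                    "ret\n\t"
                    "int3\n\t");    /* one INT3 pad dropped, since
                                       LEA is one byte longer        */
    }
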
2023-08-26  x86/static_call: Fix __static_call_fixup() (Peter Zijlstra)

commit 54097309620ef0dc2d7083783dc521c6a5fef957 upstream.

Christian reported spurious module load crashes after some of Song's module memory layout patches.

Turns out that if the very last instruction on the very last page of the module is a 'JMP __x86_return_thunk' then __static_call_fixup() will trip a fault and die. And while the module rework made this slightly more likely to happen, it's always been possible.

Fixes: ee88d363d156 ("x86,static_call: Use alternative RET encoding")
Reported-by: Christian Bricart <christian@bricart.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Link: https://lkml.kernel.org/r/20230816104419.GA982867@hirez.programming.kicks-ass.net
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-26  x86/srso: Explain the untraining sequences a bit more (Borislav Petkov (AMD))

commit 9dbd23e42ff0b10c9b02c9e649c76e5228241a8e upstream.

The goal is to eventually have proper documentation about all this.

Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230814164447.GFZNpZ/64H4lENIe94@fat_crate.local
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-26  x86/cpu: Cleanup the untrain mess (Peter Zijlstra)

commit e7c25c441e9e0fa75b4c83e0b26306b702cfe90d upstream.

Since there can only be one active return_thunk, there only needs to be one (matching) untrain_ret. It fundamentally doesn't make sense to allow multiple untrain_ret at the same time.

Fold all 3 different untrain methods into a single (temporary) helper stub.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230814121149.042774962@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-26  x86/cpu: Rename srso_(.*)_alias to srso_alias_\1 (Peter Zijlstra)

commit 42be649dd1f2eee6b1fb185f1a231b9494cf095f upstream.

For a more consistent namespace.

[ bp: Fixup names in the doc too. ]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230814121148.976236447@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-26  x86/cpu: Rename original retbleed methods (Peter Zijlstra)

commit d025b7bac07a6e90b6b98b487f88854ad9247c39 upstream.

Rename the original retbleed return thunk and untrain_ret to retbleed_return_thunk() and retbleed_untrain_ret().

No functional changes.

Suggested-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230814121148.909378169@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-26  x86/cpu: Clean up SRSO return thunk mess (Peter Zijlstra)

commit d43490d0ab824023e11d0b57d0aeec17a6e0ca13 upstream.

Use the existing configurable return thunk. There is absolutely no justification for having created this __x86_return_thunk alternative.

To clarify, the whole thing looks like:

Zen3/4 does:

  srso_alias_untrain_ret:
          nop2
          lfence
          jmp srso_alias_return_thunk
          int3

  srso_alias_safe_ret: // aliases srso_alias_untrain_ret just so
          add $8, %rsp
          ret
          int3

  srso_alias_return_thunk:
          call srso_alias_safe_ret
          ud2

While Zen1/2 does:

  srso_untrain_ret:
          movabs $foo, %rax
          lfence
          call srso_safe_ret           (jmp srso_return_thunk ?)
          int3

  srso_safe_ret: // embedded in movabs instruction
          add $8,%rsp
          ret
          int3

  srso_return_thunk:
          call srso_safe_ret
          ud2

While retbleed does:

  zen_untrain_ret:
          test $0xcc, %bl
          lfence
          jmp zen_return_thunk
          int3

  zen_return_thunk: // embedded in the test instruction
          ret
          int3

Where Zen1/2 flush the BTB entry using the instruction decoder trick (test,movabs), Zen3/4 use BTB aliasing. SRSO adds a return sequence (srso_safe_ret()) which forces the function return instruction to speculate into a trap (UD2). This RET will then mispredict and execution will continue at the return site read from the top of the stack.

Pick one of three options at boot (every function can only ever return once).

[ bp: Fixup commit message uarch details and add them in a comment in the code too. Add a comment about the srso_select_mitigation() dependency on retbleed_select_mitigation(). Add moar ifdeffery for 32-bit builds. Add a dummy srso_untrain_ret_alias() definition for 32-bit alternatives needing the symbol. ]

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230814121148.842775684@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-26  x86/alternative: Make custom return thunk unconditional (Peter Zijlstra)

commit 095b8303f3835c68ac4a8b6d754ca1c3b6230711 upstream.

There is infrastructure to rewrite return thunks to point to any random thunk one desires; unwrap that from CALL_THUNKS, which up to now was the sole user of that.

[ bp: Make the thunks visible on 32-bit and add ifdeffery for the 32-bit builds. ]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230814121148.775293785@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-26  x86/cpu: Fix up srso_safe_ret() and __x86_return_thunk() (Peter Zijlstra)

commit af023ef335f13c8b579298fc432daeef609a9e60 upstream.

  vmlinux.o: warning: objtool: srso_untrain_ret() falls through to next function __x86_return_skl()
  vmlinux.o: warning: objtool: __x86_return_thunk() falls through to next function __x86_return_skl()

This is because these functions (can) end with CALL, which objtool does not consider a terminating instruction. Therefore, replace the INT3 instruction (which is a non-fatal trap) with UD2 (which is a fatal trap). This indicates execution will not continue past this point.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230814121148.637802730@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-26  x86/cpu: Fix __x86_return_thunk symbol type (Peter Zijlstra)

commit 77f67119004296a9b2503b377d610e08b08afc2a upstream.

Commit fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation") reimplemented __x86_return_thunk with a mix of SYM_FUNC_START and SYM_CODE_END; this is not a sane combination.

Since nothing should ever actually 'CALL' this, make it consistently CODE.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230814121148.571027074@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-23  arm64/ptrace: Ensure that SME is set up for target when writing SSVE state (Mark Brown)

commit 5d0a8d2fba50e9c07cde4aad7fba28c008b07a5b upstream.

When we use NT_ARM_SSVE to either enable streaming mode or change the vector length for a process, we do not currently do anything to ensure that there is storage allocated for the SME-specific register state. If the task had not previously used SME, or if we changed the vector length, then the task will not have had TIF_SME set or backing storage for ZA/ZT allocated, resulting in inconsistent register sizes when saving state and spurious traps which flush the newly set register state.

We should set TIF_SME to disable traps and ensure that storage is allocated for ZA and ZT if it is not already allocated. This requires modifying sme_alloc() to make the flush of any existing register state optional, so we don't disturb existing state for ZA and ZT.

Fixes: e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers")
Reported-by: David Spickett <David.Spickett@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: <stable@vger.kernel.org> # 5.19.x
Link: https://lore.kernel.org/r/20230810-arm64-fix-ptrace-race-v1-1-a5361fad2bd6@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-23  arm64: dts: rockchip: Fix Wifi/Bluetooth on ROCK Pi 4 boards (Yogesh Hegde)

commit ebceec271e552a2b05e47d8ef0597052b1a39449 upstream.

This patch fixes an issue affecting the Wifi/Bluetooth connectivity on ROCK Pi 4 boards. Commit f471b1b2db08 ("arm64: dts: rockchip: Fix Bluetooth on ROCK Pi 4 boards") introduced a problem with the clock configuration. Specifically, the clock-names property of the sdio-pwrseq node was not updated to 'lpo', causing the driver to wait indefinitely for the wrong clock signal 'ext_clock' instead of the expected one 'lpo'. This prevented the proper initialization of Wifi/Bluetooth chip on ROCK Pi 4 boards.

To address this, this patch updates the clock-names property of the sdio-pwrseq node to "lpo" to align with the changes made to the bluetooth node.

This patch has been tested on ROCK Pi 4B.

Fixes: f471b1b2db08 ("arm64: dts: rockchip: Fix Bluetooth on ROCK Pi 4 boards")
Cc: stable@vger.kernel.org
Signed-off-by: Yogesh Hegde <yogi.kernel@gmail.com>
Link: https://lore.kernel.org/r/ZLbATQRjOl09aLAp@zephyrusG14
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-23  riscv: uaccess: Return the number of bytes effectively not copied (Alexandre Ghiti)

[ Upstream commit 4b05b993900dd3eba0fc83ef5c5ddc7d65d786c6 ]

It was reported that the riscv kernel hangs while executing the test in [1]. Indeed, the test hangs when trying to write a buffer to a file. The problem is that the riscv implementation of raw_copy_from_user() does not return the correct number of bytes not written when an exception happens and is fixed up; instead it always returns the initial size to copy, even if some bytes were actually copied.

generic_perform_write() pre-faults the user pages and bails out if nothing can be written; otherwise it will access the userspace buffer. Here the riscv implementation keeps returning that it was not able to copy any byte, though the pre-faulting indicates otherwise. So generic_perform_write() keeps retrying to access the user memory and ends up in an infinite loop.

Note that before the commit mentioned in [1] that introduced this regression, it worked because generic_perform_write() would bail out if only one byte could not be written.

So fix this by returning the number of bytes effectively not written in __asm_copy_[to|from]_user() and __clear_user(), as expected.

Link: https://lore.kernel.org/linux-riscv/20230309151841.bomov6hq3ybyp42a@debian/ [1]
Fixes: ebcbd75e3962 ("riscv: Fix the bug in memory access fixup code")
Reported-by: Bo YU <tsu.yubo@gmail.com>
Closes: https://lore.kernel.org/linux-riscv/20230309151841.bomov6hq3ybyp42a@debian/#t
Reported-by: Aurelien Jarno <aurelien@aurel32.net>
Closes: https://lore.kernel.org/linux-riscv/ZNOnCakhwIeue3yr@aurel32.net/
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Reviewed-by: Björn Töpel <bjorn@rivosinc.com>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Link: https://lore.kernel.org/r/20230811150604.1621784-1-alexghiti@rivosinc.com
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>

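The contract being restored, shown with the generic C uaccess API (the wrapper helper below is hypothetical):

    #include <linux/uaccess.h>

    /* copy_from_user() returns the number of bytes NOT copied;
     * 0 means full success. */
    static ssize_t read_user_buf(void *kbuf, const void __user *ubuf,
                                 size_t len)
    {
            unsigned long left = copy_from_user(kbuf, ubuf, len);

            if (left == len)
                    return -EFAULT; /* no progress at all */
            return len - left;      /* bytes actually copied, so callers
                                       such as generic_perform_write()
                                       can retry just the remainder */
    }
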
2023-08-23  arm64: dts: imx93: Fix anatop node size (Alexander Stein)

[ Upstream commit 78e869dd8b2ba19765ac9b05cdea3e432d1dc188 ]

Although the memory map of i.MX93 reference manual rev. 2 claims that analog top has start address of 0x44480000 and end address of 0x4448ffff, this overlaps with TMU memory area starting at 0x44482000, as stated in section 73.6.1.

As PLL configuration registers start at addresses up to 0x44481400, as used by clk-imx93, reduce the anatop size to 0x2000, so exclude the TMU area but keep all PLL registers inside.

Fixes: ec8b5b5058ea ("arm64: dts: freescale: Add i.MX93 dtsi support")
Signed-off-by: Alexander Stein <alexander.stein@ew.tq-group.com>
Reviewed-by: Peng Fan <peng.fan@nxp.com>
Reviewed-by: Jacky Bai <ping.bai@nxp.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2023-08-23  ARM: dts: imx: Set default tuning step for imx6sx usdhc (Xiaolei Wang)

[ Upstream commit 0a2b96e42a0284c4fc03022236f656a085ca714a ]

If the tuning step is not set, it defaults to 1, and for some SD cards the following tuning timeout will occur:

  Tuning failed, falling back to fixed sampling clock

So set the default tuning step. This refers to the NXP vendor's commit below:

https://github.com/nxp-imx/linux-imx/blob/lf-6.1.y/arch/arm/boot/dts/imx6sx.dtsi#L1108-L1109

Fixes: 1e336aa0c025 ("mmc: sdhci-esdhc-imx: correct the tuning start tap and step setting")
Signed-off-by: Xiaolei Wang <xiaolei.wang@windriver.com>
Reviewed-by: Fabio Estevam <festevam@gmail.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2023-08-23  arm64: dts: imx8mm: Drop CSI1 PHY reference clock configuration (Fabio Estevam)

[ Upstream commit f02b53375e8f14b4c27a14f6e4fb6e89914fdc29 ]

The CSI1 PHY reference clock is limited to 125 MHz according to:

  i.MX 8M Mini Applications Processor Reference Manual, Rev. 3, 11/2020
  Table 5-1. Clock Root Table (continued) / page 307
  Slice Index n = 123

Currently the IMX8MM_CLK_CSI1_PHY_REF clock is configured to be fed directly from the 1 GHz PLL2, which overclocks it. Instead, drop the configuration altogether, which defaults the clock to the 24 MHz REF clock input, which for the PHY reference clock is just fine.

Based on a patch from Marek Vasut for the imx8mn.

Fixes: e523b7c54c05 ("arm64: dts: imx8mm: Add CSI nodes")
Signed-off-by: Fabio Estevam <festevam@denx.de>
Reviewed-by: Marek Vasut <marex@denx.de>
Reviewed-by: Marco Felsch <m.felsch@pengutronix.de>
Reviewed-by: Adam Ford <aford173@gmail.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2023-08-23  ARM: dts: imx6: phytec: fix RTC interrupt level (Andrej Picej)

[ Upstream commit 762b700982a1e0f562184363f19860c3b9bdd0bf ]

RTC interrupt level should be set to "LOW". This was revealed by the introduction of commit f181987ef477 ("rtc: m41t80: use IRQ flags obtained from fwnode"), which changed the way IRQ type is obtained.

Signed-off-by: Andrej Picej <andrej.picej@norik.com>
Reviewed-by: Stefan Riedmüller <s.riedmueller@phytec.de>
Fixes: 800d595151bb ("ARM: dts: imx6: Add initial support for phyBOARD-Mira")
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2023-08-23  ARM: dts: imx: align LED node names with dtschema (Krzysztof Kozlowski)

[ Upstream commit 4b0d1f2738899dbcc7a026d826373530019aa31b ]

The node names should be generic and DT schema expects certain pattern:

  imx50-kobo-aura.dtb: gpio-leds: 'on' does not match any of the regexes: '(^led-[0-9a-f]$|led)', 'pinctrl-[0-9]+'
  imx6dl-yapp4-draco.dtb: led-controller@30: 'chan@0', 'chan@1', 'chan@2' do not match any of the regexes: '^led@[0-8]$', '^multi-led@[0-8]$', 'pinctrl-[0-9]+'

Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Stable-dep-of: 762b700982a1 ("ARM: dts: imx6: phytec: fix RTC interrupt level")
Signed-off-by: Sasha Levin <sashal@kernel.org>

2023-08-23  arm64: dts: rockchip: Disable HS400 for eMMC on ROCK 4C+ (Christopher Obbard)

[ Upstream commit 2bd1d2dd808c60532283e9cf05110bf1bf2f9079 ]

There is some instability with some eMMC modules on ROCK Pi 4 SBCs running in HS400 mode. This ends up resulting in some block errors after a while or after a "heavy" operation utilising the eMMC (e.g. resizing a filesystem). An example of these errors is as follows:

  [  289.171014] mmc1: running CQE recovery
  [  290.048972] mmc1: running CQE recovery
  [  290.054834] mmc1: running CQE recovery
  [  290.060817] mmc1: running CQE recovery
  [  290.061337] blk_update_request: I/O error, dev mmcblk1, sector 1411072 op 0x1:(WRITE) flags 0x800 phys_seg 36 prio class 0
  [  290.061370] EXT4-fs warning (device mmcblk1p1): ext4_end_bio:348: I/O error 10 writing to inode 29547 starting block 176466)
  [  290.061484] Buffer I/O error on device mmcblk1p1, logical block 172288
  [  290.061531] Buffer I/O error on device mmcblk1p1, logical block 172289
  [  290.061551] Buffer I/O error on device mmcblk1p1, logical block 172290
  [  290.061574] Buffer I/O error on device mmcblk1p1, logical block 172291
  [  290.061592] Buffer I/O error on device mmcblk1p1, logical block 172292
  [  290.061615] Buffer I/O error on device mmcblk1p1, logical block 172293
  [  290.061632] Buffer I/O error on device mmcblk1p1, logical block 172294
  [  290.061654] Buffer I/O error on device mmcblk1p1, logical block 172295
  [  290.061673] Buffer I/O error on device mmcblk1p1, logical block 172296
  [  290.061695] Buffer I/O error on device mmcblk1p1, logical block 172297

Disabling the Command Queue seems to stop the CQE recovery from running, but doesn't seem to improve the I/O errors. Until this can be investigated further, disable HS400 mode on the ROCK Pi 4 SBCs to at least stop I/O errors from occurring.

Fixes: 246450344dad ("arm64: dts: rockchip: rk3399: Radxa ROCK 4C+")
Signed-off-by: Christopher Obbard <chris.obbard@collabora.com>
Link: https://lore.kernel.org/r/20230705144255.115299-3-chris.obbard@collabora.com
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2023-08-23  arm64: dts: rockchip: Disable HS400 for eMMC on ROCK Pi 4 (Christopher Obbard)

[ Upstream commit cee572756aa2cb46e959e9797ad4b730b78a050b ]

There is some instability with some eMMC modules on ROCK Pi 4 SBCs running in HS400 mode. This ends up resulting in some block errors after a while or after a "heavy" operation utilising the eMMC (e.g. resizing a filesystem). An example of these errors is as follows:

  [  289.171014] mmc1: running CQE recovery
  [  290.048972] mmc1: running CQE recovery
  [  290.054834] mmc1: running CQE recovery
  [  290.060817] mmc1: running CQE recovery
  [  290.061337] blk_update_request: I/O error, dev mmcblk1, sector 1411072 op 0x1:(WRITE) flags 0x800 phys_seg 36 prio class 0
  [  290.061370] EXT4-fs warning (device mmcblk1p1): ext4_end_bio:348: I/O error 10 writing to inode 29547 starting block 176466)
  [  290.061484] Buffer I/O error on device mmcblk1p1, logical block 172288
  [  290.061531] Buffer I/O error on device mmcblk1p1, logical block 172289
  [  290.061551] Buffer I/O error on device mmcblk1p1, logical block 172290
  [  290.061574] Buffer I/O error on device mmcblk1p1, logical block 172291
  [  290.061592] Buffer I/O error on device mmcblk1p1, logical block 172292
  [  290.061615] Buffer I/O error on device mmcblk1p1, logical block 172293
  [  290.061632] Buffer I/O error on device mmcblk1p1, logical block 172294
  [  290.061654] Buffer I/O error on device mmcblk1p1, logical block 172295
  [  290.061673] Buffer I/O error on device mmcblk1p1, logical block 172296
  [  290.061695] Buffer I/O error on device mmcblk1p1, logical block 172297

Disabling the Command Queue seems to stop the CQE recovery from running, but doesn't seem to improve the I/O errors. Until this can be investigated further, disable HS400 mode on the ROCK Pi 4 SBCs to at least stop I/O errors from occurring.

While we are here, set the eMMC maximum clock frequency to 1.5MHz to follow the ROCK 4C+.

Fixes: 1b5715c602fd ("arm64: dts: rockchip: add ROCK Pi 4 DTS support")
Signed-off-by: Christopher Obbard <chris.obbard@collabora.com>
Tested-By: Folker Schwesinger <dev@folker-schwesinger.de>
Link: https://lore.kernel.org/r/20230705144255.115299-2-chris.obbard@collabora.com
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2023-08-23  arm64: dts: qcom: qrb5165-rb5: fix thermal zone conflict (Dmitry Baryshkov)

[ Upstream commit 798f1df86e5709b7b6aedf493cc04c7fedbf544a ]

The commit 3a786086c6f8 ("arm64: dts: qcom: Add missing "-thermal" suffix for thermal zones") renamed the thermal zone in the pm8150l.dtsi file to comply with the schema. However this resulted in a clash with the RB5 board file, which already contained the pm8150l-thermal zone for the on-board sensor. This resulted in the board file definition overriding the thermal zone defined in the PMIC include file (and thus the on-die PMIC temp alarm was not probing at all).

Rename the thermal zone in qcom/qrb5165-rb5.dts to remove this override.

Fixes: 3a786086c6f8 ("arm64: dts: qcom: Add missing "-thermal" suffix for thermal zones")
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Reviewed-by: Konrad Dybcio <konrad.dybcio@linaro.org>
Link: https://lore.kernel.org/r/20230613131224.666668-1-dmitry.baryshkov@linaro.org
Signed-off-by: Bjorn Andersson <andersson@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2023-08-23  powerpc/rtas_flash: allow user copy to flash block cache objects (Nathan Lynch)

commit 4f3175979e62de3b929bfa54a0db4b87d36257a7 upstream.

With hardened usercopy enabled (CONFIG_HARDENED_USERCOPY=y), using the /proc/powerpc/rtas/firmware_update interface to prepare a system firmware update yields a BUG():

  kernel BUG at mm/usercopy.c:102!
  Oops: Exception in kernel mode, sig: 5 [#1]
  LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
  Modules linked in:
  CPU: 0 PID: 2232 Comm: dd Not tainted 6.5.0-rc3+ #2
  Hardware name: IBM,8408-E8E POWER8E (raw) 0x4b0201 0xf000004 of:IBM,FW860.50 (SV860_146) hv:phyp pSeries
  NIP:  c0000000005991d0 LR: c0000000005991cc CTR: 0000000000000000
  REGS: c0000000148c76a0 TRAP: 0700   Not tainted  (6.5.0-rc3+)
  MSR:  8000000000029033 <SF,EE,ME,IR,DR,RI,LE>  CR: 24002242  XER: 0000000c
  CFAR: c0000000001fbd34 IRQMASK: 0
  [ ... GPRs omitted ... ]
  NIP usercopy_abort+0xa0/0xb0
  LR  usercopy_abort+0x9c/0xb0
  Call Trace:
    usercopy_abort+0x9c/0xb0 (unreliable)
    __check_heap_object+0x1b4/0x1d0
    __check_object_size+0x2d0/0x380
    rtas_flash_write+0xe4/0x250
    proc_reg_write+0xfc/0x160
    vfs_write+0xfc/0x4e0
    ksys_write+0x90/0x160
    system_call_exception+0x178/0x320
    system_call_common+0x160/0x2c4

The blocks of the firmware image are copied directly from user memory to objects allocated from flash_block_cache, so flash_block_cache must be created using kmem_cache_create_usercopy() to mark it safe for user access.

Fixes: 6d07d1cd300f ("usercopy: Restrict non-usercopy caches to size 0")
Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
[mpe: Trim and indent oops]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230810-rtas-flash-vs-hardened-usercopy-v2-1-dcf63793a938@linux.ibm.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

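A sketch of the shape of such a fix, assuming the driver's RTAS_BLK_SIZE block size; the exact arguments in the real patch may differ:

    static struct kmem_cache *flash_block_cache;

    static int __init rtas_flash_cache_init(void)
    {
            /* useroffset 0 and usersize == object size mark the whole
             * object as legal for copy_{from,to}_user() under
             * CONFIG_HARDENED_USERCOPY. */
            flash_block_cache = kmem_cache_create_usercopy("rtas_flash_cache",
                                            RTAS_BLK_SIZE, RTAS_BLK_SIZE,
                                            0, 0, RTAS_BLK_SIZE, NULL);
            return flash_block_cache ? 0 : -ENOMEM;
    }
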
2023-08-23  ARM: dts: nxp/imx6sll: fix wrong property name in usbphy node (Xu Yang)

[ Upstream commit ee70b908f77a9d8f689dea986f09e6d7dc481934 ]

Property name "phy-3p0-supply" is used instead of "phy-reg_3p0-supply".

Fixes: 9f30b6b1a957 ("ARM: dts: imx: Add basic dtsi file for imx6sll")
cc: <stable@vger.kernel.org>
Signed-off-by: Xu Yang <xu.yang_2@nxp.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2023-08-23  KVM: arm64: vgic-v4: Make the doorbell request robust w.r.t preemption (Marc Zyngier)

[ Upstream commit b321c31c9b7b309dcde5e8854b741c8e6a9a05f0 ]

Xiang reports that VMs occasionally fail to boot on GICv4.1 systems when running a preemptible kernel, as it is possible that a vCPU is blocked without requesting a doorbell interrupt.

The issue is that any preemption that occurs between vgic_v4_put() and schedule() on the block path will mark the vPE as nonresident and *not* request a doorbell irq. This occurs because when the vcpu thread is resumed on its way to block, vcpu_load() will make the vPE resident again. Once the vcpu actually blocks, we don't request a doorbell anymore, and the vcpu won't be woken up on interrupt delivery.

Fix it by tracking that we're entering WFI, and key the doorbell request on that flag. This allows us not to make the vPE resident when going through a preempt/schedule cycle, meaning we don't lose any state.

Cc: stable@vger.kernel.org
Fixes: 8e01d9a396e6 ("KVM: arm64: vgic-v4: Move the GICv4 residency flow to be driven by vcpu_load/put")
Reported-by: Xiang Chen <chenxiang66@hisilicon.com>
Suggested-by: Zenghui Yu <yuzenghui@huawei.com>
Tested-by: Xiang Chen <chenxiang66@hisilicon.com>
Co-developed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20230713070657.3873244-1-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2023-08-23  powerpc/kasan: Disable KCOV in KASAN code (Benjamin Gray)

[ Upstream commit ccb381e1af1ace292153c88eb1fffa5683d16a20 ]

As per the generic KASAN code in mm/kasan, disable KCOV with KCOV_INSTRUMENT := n in the makefile.

This fixes a ppc64 boot hang when KCOV and KASAN are enabled. kasan_early_init() gets called before a PACA is initialised, but the KCOV hook expects a valid PACA.

Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230710044143.146840-1-bgray@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>

2023-08-23  ARM: dts: imx6dl: prtrvt, prtvt7, prti6q, prtwd2: fix USB related warnings (Oleksij Rempel)

[ Upstream commit 1d14bd943fa2bbdfda1efbcc080b298fed5f1803 ]

Fix USB-related warnings in prtrvt, prtvt7, prti6q and prtwd2 device trees by disabling unused usbphynop1 and usbphynop2 USB PHYs and providing proper configuration for the over-current detection. This fixes the following warnings with the current kernel:

  usb_phy_generic usbphynop1: dummy supplies not allowed for exclusive requests
  usb_phy_generic usbphynop2: dummy supplies not allowed for exclusive requests
  imx_usb 2184200.usb: No over current polarity defined

By the way, fix over-current detection on the usbotg port for prtvt7, prti6q and prtwd2 boards. Only prtrvt does not have OC on the USB OTG port.

Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

2023-08-16  alpha: remove __init annotation from exported page_is_ram() (Masahiro Yamada)

commit 6ccbd7fd474674654019a20177c943359469103a upstream.

EXPORT_SYMBOL and __init is a bad combination because the .init.text section is freed up after the initialization.

Commit c5a130325f13 ("ACPI/APEI: Add parameter check before error injection") exported page_is_ram(), hence the __init annotation should be removed.

This fixes the modpost warning in ARCH=alpha builds:

  WARNING: modpost: vmlinux: page_is_ram: EXPORT_SYMBOL used for init symbol. Remove __init or EXPORT_SYMBOL.

Fixes: c5a130325f13 ("ACPI/APEI: Add parameter check before error injection")
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

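The rule, as a minimal C illustration (the stand-in body is hypothetical; only the annotations matter):

    /* BAD:  .init.text is freed after boot, yet the export lets
     *       modules call into the freed page at any later time:
     *
     *           int __init page_is_ram(unsigned long pfn);
     *           EXPORT_SYMBOL(page_is_ram);
     *
     * GOOD: drop __init so the exported code stays resident. */
    int page_is_ram(unsigned long pfn)
    {
            return pfn < max_pfn;   /* stand-in body for illustration */
    }
    EXPORT_SYMBOL(page_is_ram);
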
2023-08-16  x86: Move gds_ucode_mitigated() declaration to header (Arnd Bergmann)

commit eb3515dc99c7c85f4170b50838136b2a193f8012 upstream.

The declaration got placed in the .c file of the caller, but that causes a warning for the definition:

  arch/x86/kernel/cpu/bugs.c:682:6: error: no previous prototype for 'gds_ucode_mitigated' [-Werror=missing-prototypes]

Move it to a header where both sides can observe it instead.

Fixes: 81ac7e5d74174 ("KVM: Add GDS_NO support to KVM")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Tested-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Cc: stable@kernel.org
Link: https://lore.kernel.org/all/20230809130530.1913368-2-arnd%40kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

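The pattern, sketched in C (the header chosen here is an assumption):

    /* arch/x86/include/asm/processor.h -- visible to all callers: */
    extern bool gds_ucode_mitigated(void);

    /* arch/x86/kernel/cpu/bugs.c -- the definition now has a previous
     * prototype, silencing -Wmissing-prototypes: */
    bool gds_ucode_mitigated(void)
    {
            return false;   /* stand-in; real logic checks microcode */
    }
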
2023-08-16  x86/sev: Do not try to parse for the CC blob on non-AMD hardware (Borislav Petkov (AMD))

commit bee6cf1a80b54548a039e224c651bb15b644a480 upstream.

Tao Liu reported a boot hang on an Intel Atom machine due to an unmapped EFI config table. The reason being that the CC blob which contains the CPUID page for AMD SNP guests is parsed for before even checking whether the machine runs on AMD hardware.

Usually that's not a problem on !AMD hw - it simply won't find the CC blob's GUID and return. However, if any part of the config table pointers array is not mapped, the kernel will #PF very early in the decompressor stage without any opportunity to recover.

Therefore, do a superficial CPUID check before poking for the CC blob. This will fix the current issue on real hardware. It would also work as a guest on a non-lying hypervisor.

For the lying hypervisor, the check is done again, *after* parsing the CC blob as the real CPUID page will be present then.

Clear the #VC handler in case SEV-{ES,SNP} hasn't been detected, as a precaution.

Fixes: c01fce9cef84 ("x86/compressed: Add SEV-SNP feature detection/setup")
Reported-by: Tao Liu <ltao@redhat.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
Tested-by: Tao Liu <ltao@redhat.com>
Cc: <stable@kernel.org>
Link: https://lore.kernel.org/r/20230601072043.24439-1-ltao@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-16  x86/mm: Fix VDSO and VVAR placement on 5-level paging machines (Kirill A. Shutemov)

commit 1b8b1aa90c9c0e825b181b98b8d9e249dc395470 upstream.

Yingcong has noticed that on the 5-level paging machine, VDSO and VVAR VMAs are placed above the 47-bit border:

  8000001a9000-8000001ad000 r--p 00000000 00:00 0    [vvar]
  8000001ad000-8000001af000 r-xp 00000000 00:00 0    [vdso]

This might confuse users who are not aware of 5-level paging and expect all userspace addresses to be under the 47-bit border.

So far the problem has only been triggered with ASLR disabled, although it may also occur with ASLR enabled if the layout is randomized in a just right way.

The problem happens due to custom placement for the VMAs in the VDSO code: vdso_addr() tries to place them above the stack and checks the result against TASK_SIZE_MAX, which is wrong. TASK_SIZE_MAX is set to the 56-bit border on 5-level paging machines. Use DEFAULT_MAP_WINDOW instead.

Fixes: b569bab78d8d ("x86/mm: Prepare to expose larger address space to userspace")
Reported-by: Yingcong Wu <yingcong.wu@intel.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/20230803151609.22141-1-kirill.shutemov%40linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

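A rough paraphrase of the corrected bound check in vdso_addr(), as a C sketch (variable names and surrounding logic are assumptions):

    static unsigned long vdso_addr_sketch(unsigned long start,
                                          unsigned long len)
    {
            unsigned long end = (start + PMD_SIZE - 1) & PMD_MASK;

            /* was: if (end >= TASK_SIZE_MAX), which is the 56-bit
             * limit on 5-level paging machines */
            if (end >= DEFAULT_MAP_WINDOW)
                    end = DEFAULT_MAP_WINDOW;
            return end - len;       /* candidate placement below 47 bits */
    }
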
2023-08-16  x86/cpu/amd: Enable Zenbleed fix for AMD Custom APU 0405 (Cristian Ciocaltea)

commit 6dbef74aeb090d6bee7d64ef3fa82ae6fa53f271 upstream.

Commit 522b1d69219d ("x86/cpu/amd: Add a Zenbleed fix") provided a fix for the Zen2 VZEROUPPER data corruption bug affecting a range of CPU models, but the AMD Custom APU 0405 found on SteamDeck was not listed, although it is clearly affected by the vulnerability.

Add this CPU variant to the Zenbleed erratum list, in order to unconditionally enable the fallback fix until a proper microcode update is available.

Fixes: 522b1d69219d ("x86/cpu/amd: Add a Zenbleed fix")
Signed-off-by: Cristian Ciocaltea <cristian.ciocaltea@collabora.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230811203705.1699914-1-cristian.ciocaltea@collabora.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-16  x86/srso: Fix build breakage with the LLVM linker (Nick Desaulniers)

commit cbe8ded48b939b9d55d2c5589ab56caa7b530709 upstream.

The assertion added to verify the difference in bits set of the addresses of srso_untrain_ret_alias() and srso_safe_ret_alias() would fail to link in LLVM's ld.lld linker with the following error:

  ld.lld: error: ./arch/x86/kernel/vmlinux.lds:210: at least one side of the expression must be absolute
  ld.lld: error: ./arch/x86/kernel/vmlinux.lds:211: at least one side of the expression must be absolute

Use ABSOLUTE to evaluate the expression referring to at least one of the symbols so that LLD can evaluate the linker script.

Also, add linker version info to the comment about XOR being unsupported in either ld.bfd or ld.lld until somewhat recently.

Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
Closes: https://lore.kernel.org/llvm/CA+G9fYsdUeNu-gwbs0+T6XHi4hYYk=Y9725-wFhZ7gJMspLDRA@mail.gmail.com/
Reported-by: Nathan Chancellor <nathan@kernel.org>
Reported-by: Daniel Kolesa <daniel@octaforge.org>
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Suggested-by: Sven Volkinsfeld <thyrc@gmx.net>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://github.com/ClangBuiltLinux/linux/issues/1907
Link: https://lore.kernel.org/r/20230809-gds-v1-1-eaac90b0cbcc@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-16  riscv/kexec: handle R_RISCV_CALL_PLT relocation type (Torsten Duwe)

commit d0b4f95a51038becce4bdab4789aa7ce59d4ea6e upstream.

R_RISCV_CALL has been deprecated and replaced by R_RISCV_CALL_PLT. See Enum 18-19 in Table 3. Relocation types here:

https://github.com/riscv-non-isa/riscv-elf-psabi-doc/blob/master/riscv-elf.adoc

It was deprecated in ("Deprecated R_RISCV_CALL, prefer R_RISCV_CALL_PLT"):

https://github.com/riscv-non-isa/riscv-elf-psabi-doc/commit/a0dced85018d7a0ec17023c9389cbd70b1dbc1b0

Recent tools (at least GNU binutils-2.40) already use R_RISCV_CALL_PLT. Kernels built with such binutils fail kexec_load_file(2) with:

  kexec_image: Unknown rela relocation: 19
  kexec_image: Error loading purgatory ret=-8

The binary code at the call site remains the same, so tell arch_kexec_apply_relocations_add() to handle _PLT alike.

Fixes: 838b3e28488f ("RISC-V: Load purgatory in kexec_file")
Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
Cc: Li Zhengyu <lizhengyu3@huawei.com>
Cc: stable@vger.kernel.org
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Link: https://lore.kernel.org/all/b046b164af8efd33bbdb7d4003273bdf9196a5b0.1690365011.git.petr.tesarik.ext@huawei.com/
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

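A hypothetical C rendering of the handler change; the relocation numbers are from the RISC-V psABI, and the auipc+jalr patching below is schematic rather than a quote of the kernel code:

    /* Both relocation names get the identical call-site fixup. */
    static int apply_call_reloc(u32 *loc, u64 off, u32 r_type)
    {
            switch (r_type) {
            case R_RISCV_CALL_PLT:  /* Enum 19: the current name      */
            case R_RISCV_CALL:      /* Enum 18: deprecated alias; the
                                       call-site bytes are identical  */
                    loc[0] = (loc[0] & 0xfff) |
                             (u32)((off + 0x800) & 0xfffff000); /* hi20 */
                    loc[1] = (loc[1] & 0xfffff) |
                             ((u32)(off & 0xfff) << 20);        /* lo12 */
                    return 0;
            default:
                    return -ENOEXEC;
            }
    }
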
2023-08-16  riscv,mmio: Fix readX()-to-delay() ordering (Andrea Parri)

commit 4eb2eb1b4c0eb07793c240744843498564a67b83 upstream.

Section 2.1 of the Platform Specification [1] states:

  Unless otherwise specified by a given I/O device, I/O devices are on
  ordering channel 0 (i.e., they are point-to-point strongly ordered).

which is not sufficient to guarantee that a readX() by a hart completes before a subsequent delay() on the same hart (cf. memory-barriers.txt, "Kernel I/O barrier effects").

Set the I(nput) bit in __io_ar() to restore the ordering, align inline comments.

[1] https://github.com/riscv/riscv-platform-specs

Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Link: https://lore.kernel.org/r/20230803042738.5937-1-parri.andrea@gmail.com
Fixes: fab957c11efe ("RISC-V: Atomic and Locking Code")
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

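The resulting accessor, roughly (treat the exact macro body as an assumption; the point is the added I bit in the fence's predecessor set):

    /* Order the completed device read (Input) before any subsequent
     * instructions' memory accesses, so that a readX(); delay();
     * sequence on one hart behaves as expected. */
    #define __io_ar(v)  ({ __asm__ __volatile__ ("fence i,r" : : : "memory"); })
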
2023-08-16  riscv/kexec: load initrd high in available memory (Torsten Duwe)

commit 49af7a2cd5f678217b8b4f86a29411aebebf3e78 upstream.

When initrd is loaded low, the secondary kernel fails like this:

  INITRD: 0xdc581000+0x00eef000 overlaps in-use memory region

This initrd load address corresponds to the _end symbol, but the reservation is aligned on PMD_SIZE, as explained by a comment in setup_bootmem().

It is technically possible to align the initrd load address accordingly, leaving a hole between the end of kernel and the initrd, but it is much simpler to allocate the initrd top-down.

Fixes: 838b3e28488f ("RISC-V: Load purgatory in kexec_file")
Signed-off-by: Torsten Duwe <duwe@suse.de>
Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
Cc: stable@vger.kernel.org
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Link: https://lore.kernel.org/all/67c8eb9eea25717c2c8208d9bfbfaa39e6e2a1c6.1690365011.git.petr.tesarik.ext@huawei.com/
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-16  KVM: SEV: only access GHCB fields once (Paolo Bonzini)

commit 7588dbcebcbf0193ab5b76987396d0254270b04a upstream.

A KVM guest using SEV-ES or SEV-SNP with multiple vCPUs can trigger a double fetch race condition vulnerability and invoke the VMGEXIT handler recursively.

sev_handle_vmgexit() maps the GHCB page using kvm_vcpu_map() and then fetches the exit code using ghcb_get_sw_exit_code(). Soon after, sev_es_validate_vmgexit() fetches the exit code again. Since the GHCB page is shared with the guest, the guest is able to quickly swap the values with another vCPU and hence bypass the validation. One vmexit code that can be rejected by sev_es_validate_vmgexit() is SVM_EXIT_VMGEXIT; if sev_handle_vmgexit() observes it in the second fetch, the call to svm_invoke_exit_handler() will invoke sev_handle_vmgexit() again recursively.

To avoid the race, always fetch the GHCB data from the places where sev_es_sync_from_ghcb stores it.

Exploiting recursions on the Linux kernel has been proven feasible in the past, but the impact is mitigated by stack guard pages (CONFIG_VMAP_STACK). Still, if an attacker manages to call the handler multiple times, they can theoretically trigger a stack overflow and cause a denial-of-service, or potentially a guest-to-host escape in kernel configurations without stack guard pages.

Note that winning the race reliably in every iteration is very tricky due to the very tight window of the fetches; depending on the compiler settings, they are often consecutive because of optimization and inlining.

Tested by booting an SEV-ES RHEL9 guest.

Fixes: CVE-2023-4155
Fixes: 291bd20d5d88 ("KVM: SVM: Add initial support for a VMGEXIT VMEXIT")
Cc: stable@vger.kernel.org
Reported-by: Andy Nguyen <theflow@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

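The race, in schematic C: ghcb_get_sw_exit_code() and sev_es_sync_from_ghcb() are the real interfaces, but the wrapper function and the snapshot field name below are assumptions for illustration only:

    static u64 fetch_exit_code_sketch(struct vcpu_svm *svm, struct ghcb *ghcb)
    {
            /* BAD: two fetches from the guest-shared GHCB page;
             * another vCPU can swap the value in between. */
            u64 exit_code = ghcb_get_sw_exit_code(ghcb);    /* checked */
            exit_code = ghcb_get_sw_exit_code(ghcb);        /* used    */

            /* GOOD: snapshot once into host-private memory, then use
             * only the copy (field name assumed): */
            sev_es_sync_from_ghcb(svm);
            return svm->sev_es.sw_exit_code;
    }
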
2023-08-16  KVM: SEV: snapshot the GHCB before accessing it (Paolo Bonzini)

commit 4e15a0ddc3ff40e8ea84032213976ecf774d7f77 upstream.

Validation of the GHCB is susceptible to time-of-check/time-of-use vulnerabilities. To avoid them, we would like to always snapshot the fields that are read in sev_es_validate_vmgexit(), and not use the GHCB anymore after it returns.

This means:

- invoking sev_es_sync_from_ghcb() before any GHCB access, including before sev_es_validate_vmgexit()

- snapshotting all fields, including the valid bitmap and the sw_scratch field, which are currently not cached anywhere.

The valid bitmap is the first thing to be copied out of the GHCB; then, further accesses will use the copy in svm->sev_es.

Fixes: 291bd20d5d88 ("KVM: SVM: Add initial support for a VMGEXIT VMEXIT")
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-16Revert "loongarch/cpu: Switch to arch_cpu_finalize_init()"Greg Kroah-Hartman
This reverts commit 08e86d42e2c916e362d124e3bc6c824eb1862498 which is commit 9841c423164787feb8f1442f922b7d80a70c82f1 upstream. As Gunter reports: Building loongarch:defconfig ... failed -------------- Error log: <stdin>:569:2: warning: #warning syscall fstat not implemented [-Wcpp] arch/loongarch/kernel/setup.c: In function 'arch_cpu_finalize_init': arch/loongarch/kernel/setup.c:86:9: error: implicit declaration of function 'alternative_instructions' Actually introduced in v6.1.44 with commit 08e86d42e2c9 ("loongarch/cpu: Switch to arch_cpu_finalize_init()"). Alternative instruction support was only introduced for loongarch in v6.2 with commit 19e5eb15b00c ("LoongArch: Add alternative runtime patching mechanism"). So revert it from 6.1.y. Reported-by: Guenter Roeck <linux@roeck-us.net> Link: https://lore.kernel.org/r/fcd7b764-9047-22ba-a040-41b6ff99959c@roeck-us.net Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-11  x86/CPU/AMD: Do not leak quotient data after a division by 0 (Borislav Petkov (AMD))

commit 77245f1c3c6495521f6a3af082696ee2f8ce3921 upstream.

Under certain circumstances, an integer division by 0 which faults, can leave stale quotient data from a previous division operation on Zen1 microarchitectures.

Do a dummy division 0/1 before returning from the #DE exception handler in order to avoid any leaks of potentially sensitive data.

Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Cc: <stable@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-11  arm64/ptrace: Don't enable SVE when setting streaming SVE (Mark Brown)

commit 045aecdfcb2e060db142d83a0f4082380c465d2c upstream.

Systems which implement SME without also implementing SVE are architecturally valid but were not initially supported by the kernel; unfortunately we missed one issue in the ptrace code.

The SVE register setting code is shared between SVE and streaming mode SVE. When we set full SVE register state we currently enable TIF_SVE unconditionally. In the case where streaming SVE is being configured on a system that supports vanilla SVE this is not an issue, since we always initialise enough state for both vector lengths, but on a system which only supports SME it will result in us attempting to restore the SVE vector length after having set streaming SVE registers.

Fix this by making the enabling of SVE conditional on setting SVE vector state. If we set streaming SVE state and SVE was not already enabled, this will result in a SVE access trap on next use of normal SVE; this will cause us to flush our register state, but this is fine since the only way to trigger a SVE access trap would be to exit streaming mode, which will cause the in-register state to be flushed anyway.

Fixes: e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers")
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230803-arm64-fix-ptrace-ssve-no-sve-v1-1-49df214bfb3e@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[Fix up backport -- broonie]
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-11  powerpc/mm/altmap: Fix altmap boundary check (Aneesh Kumar K.V)

[ Upstream commit 6722b25712054c0f903b839b8f5088438dd04df3 ]

altmap->free includes the entire free space from which altmap blocks can be allocated. So when checking whether the kernel is doing altmap block free, compute the boundary correctly, otherwise memory hotunplug can fail.

Fixes: 9ef34630a461 ("powerpc/mm: Fallback to RAM if the altmap is unusable")
Signed-off-by: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230724181320.471386-1-aneesh.kumar@linux.ibm.com
Signed-off-by: Sasha Levin <sashal@kernel.org>

2023-08-11  arm64/fpsimd: Sync FPSIMD state with SVE for SME only systems (Mark Brown)

commit 507ea5dd92d23fcf10e4d1a68a443c86a49753ed upstream.

Currently we guard FPSIMD/SVE state conversions with a check for the system supporting SVE, but SME only systems may need to sync streaming mode SVE state, so add a check for SME support too. These functions are only used by the ptrace code.

Fixes: e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers")
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230803-arm64-fix-ptrace-ssve-no-sve-v1-2-49df214bfb3e@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-11  arm64/fpsimd: Clear SME state in the target task when setting the VL (Mark Brown)

commit c9bb40b7f786662e33d71afe236442b0b61f0446 upstream.

When setting SME vector lengths we clear TIF_SME to reenable SME traps, doing a reallocation of the backing storage on next use. We do this using clear_thread_flag() which operates on the current thread, meaning that when setting the vector length via ptrace we may both not force traps for the target task and force a spurious flush of any SME state that the tracing task may have.

Clear the flag in the target task.

Fixes: e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers")
Reported-by: David Spickett <David.Spickett@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230803-arm64-fix-ptrace-tif-sme-v1-1-88312fd6fbfd@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

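The one-line essence, in C (both helpers are the standard thread-flag APIs; the wrapper function is illustrative):

    static void set_target_vl_sketch(struct task_struct *task)
    {
            /* clear_thread_flag(TIF_SME) would act on 'current', i.e.
             * the tracer; re-enable traps in the task being traced: */
            clear_tsk_thread_flag(task, TIF_SME);
    }
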
2023-08-11  arm64/fpsimd: Sync and zero pad FPSIMD state for streaming SVE (Mark Brown)

commit 69af56ae56a48a2522aad906c4461c6c7c092737 upstream.

We have a function sve_sync_from_fpsimd_zeropad() which is used by the ptrace code to update the SVE state when the user writes to the FPSIMD register set. Currently this checks that the task has SVE enabled, but this will miss updates for tasks which have streaming SVE enabled if SVE has not been enabled for the thread; also do the conversion if the task has streaming SVE enabled.

Fixes: e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers")
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20230803-arm64-fix-ptrace-ssve-no-sve-v1-3-49df214bfb3e@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-11  powerpc/ftrace: Create a dummy stackframe to fix stack unwind (Naveen N Rao)

commit 41a506ef71eb38d94fe133f565c87c3e06ccc072 upstream.

With ppc64 -mprofile-kernel and ppc32 -pg, profiling instructions to call into ftrace are emitted right at function entry. The instruction sequence used is minimal to reduce overhead. Crucially, a stackframe is not created for the function being traced. This breaks stack unwinding since the function being traced does not have a stackframe for itself. As such, it never shows up in the backtrace:

  /sys/kernel/debug/tracing # echo 1 > /proc/sys/kernel/stack_tracer_enabled
  /sys/kernel/debug/tracing # cat stack_trace
          Depth    Size      Location    (17 entries)
          -----    ----      --------
    0)     4144      32   ftrace_call+0x4/0x44
    1)     4112     432   get_page_from_freelist+0x26c/0x1ad0
    2)     3680     496   __alloc_pages+0x290/0x1280
    3)     3184     336   __folio_alloc+0x34/0x90
    4)     2848     176   vma_alloc_folio+0xd8/0x540
    5)     2672     272   __handle_mm_fault+0x700/0x1cc0
    6)     2400     208   handle_mm_fault+0xf0/0x3f0
    7)     2192      80   ___do_page_fault+0x3e4/0xbe0
    8)     2112     160   do_page_fault+0x30/0xc0
    9)     1952     256   data_access_common_virt+0x210/0x220
   10)     1696     400   0xc00000000f16b100
   11)     1296     384   load_elf_binary+0x804/0x1b80
   12)      912     208   bprm_execve+0x2d8/0x7e0
   13)      704      64   do_execveat_common+0x1d0/0x2f0
   14)      640     160   sys_execve+0x54/0x70
   15)      480      64   system_call_exception+0x138/0x350
   16)      416     416   system_call_common+0x160/0x2c4

Fix this by having ftrace create a dummy stackframe for the function being traced. With this, backtraces now capture the function being traced:

  /sys/kernel/debug/tracing # cat stack_trace
          Depth    Size      Location    (17 entries)
          -----    ----      --------
    0)     3888      32   _raw_spin_trylock+0x8/0x70
    1)     3856     576   get_page_from_freelist+0x26c/0x1ad0
    2)     3280      64   __alloc_pages+0x290/0x1280
    3)     3216     336   __folio_alloc+0x34/0x90
    4)     2880     176   vma_alloc_folio+0xd8/0x540
    5)     2704     416   __handle_mm_fault+0x700/0x1cc0
    6)     2288      96   handle_mm_fault+0xf0/0x3f0
    7)     2192      48   ___do_page_fault+0x3e4/0xbe0
    8)     2144     192   do_page_fault+0x30/0xc0
    9)     1952     608   data_access_common_virt+0x210/0x220
   10)     1344      16   0xc0000000334bbb50
   11)     1328     416   load_elf_binary+0x804/0x1b80
   12)      912      64   bprm_execve+0x2d8/0x7e0
   13)      848     176   do_execveat_common+0x1d0/0x2f0
   14)      672     192   sys_execve+0x54/0x70
   15)      480      64   system_call_exception+0x138/0x350
   16)      416     416   system_call_common+0x160/0x2c4

This results in two additional stores in the ftrace entry code, but produces reliable backtraces.

Fixes: 153086644fd1 ("powerpc/ftrace: Add support for -mprofile-kernel ftrace ABI")
Cc: stable@vger.kernel.org
Signed-off-by: Naveen N Rao <naveen@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230621051349.759567-1-naveen@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-11  x86/hyperv: Disable IBT when hypercall page lacks ENDBR instruction (Michael Kelley)

commit d5ace2a776442d80674eff9ed42e737f7dd95056 upstream.

On hardware that supports Indirect Branch Tracking (IBT), Hyper-V VMs with ConfigVersion 9.3 or later support IBT in the guest. However, current versions of Hyper-V have a bug in that there's not an ENDBR64 instruction at the beginning of the hypercall page. Since hypercalls are made with an indirect call to the hypercall page, all hypercall attempts fail with an exception and Linux panics.

A Hyper-V fix is in progress to add ENDBR64. But guard against the Linux panic by clearing X86_FEATURE_IBT if the hypercall page doesn't start with ENDBR. The VM will boot and run without IBT.

If future Linux 32-bit kernels were to support IBT, additional hypercall page hackery would be needed to make IBT work for such kernels in a Hyper-V VM.

Cc: stable@vger.kernel.org
Signed-off-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/1690001476-98594-1-git-send-email-mikelley@microsoft.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

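A sketch of the guard in C (gen_endbr() returns the ENDBR64 opcode on IBT-enabled builds; the wrapper function and init context are assumptions):

    static void hv_check_hypercall_endbr(void *hypercall_pg)
    {
            /* If the hypervisor-populated page doesn't begin with
             * ENDBR64, an indirect call into it would raise #CP; run
             * the guest without IBT rather than panicking. */
            if (cpu_feature_enabled(X86_FEATURE_IBT) &&
                *(u32 *)hypercall_pg != gen_endbr())
                    setup_clear_cpu_cap(X86_FEATURE_IBT);
    }
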
2023-08-11  arm64: dts: stratix10: fix incorrect I2C property for SCL signal (Dinh Nguyen)

commit db66795f61354c373ecdadbdae1ed253a96c47cb upstream.

The correct dts property for the SCL falling time is "i2c-scl-falling-time-ns".

Fixes: c8da1d15b8a4 ("arm64: dts: stratix10: i2c clock running out of spec")
Cc: stable@vger.kernel.org
Signed-off-by: Dinh Nguyen <dinguyen@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2023-08-11  KVM: s390: fix sthyi error handling (Heiko Carstens)

[ Upstream commit 0c02cc576eac161601927b41634f80bfd55bfa9e ]

Commit 9fb6c9b3fea1 ("s390/sthyi: add cache to store hypervisor info") added cache handling for store hypervisor info. This also changed the possible return code for sthyi_fill(). Instead of only returning a condition code like the sthyi instruction would do, it can now also return a negative error value (-ENOMEM). handle_sthyi() was not changed accordingly. In case of an error, the negative error value would be incorrectly injected into the guest PSW.

Add proper error handling to prevent this, and update the comment which describes the possible return values of sthyi_fill().

Fixes: 9fb6c9b3fea1 ("s390/sthyi: add cache to store hypervisor info")
Reviewed-by: Christian Borntraeger <borntraeger@linux.ibm.com>
Link: https://lore.kernel.org/r/20230727182939.2050744-1-hca@linux.ibm.com
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>

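Schematically, in C (sthyi_fill() and kvm_s390_set_psw_cc() are the real interfaces; the wrapper function is illustrative):

    static int handle_sthyi_sketch(struct kvm_vcpu *vcpu, void *sctns)
    {
            u64 rc;
            int cc = sthyi_fill(sctns, &rc);

            if (cc < 0)             /* e.g. -ENOMEM: a host-side error */
                    return cc;      /* propagate; don't PSW-inject it  */

            kvm_s390_set_psw_cc(vcpu, cc);  /* architected cc 0..3     */
            return 0;
    }
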
2023-08-11  word-at-a-time: use the same return type for has_zero regardless of endianness (ndesaulniers@google.com)

[ Upstream commit 79e8328e5acbe691bbde029a52c89d70dcbc22f3 ]

Compiling big-endian targets with Clang produces the diagnostic:

  fs/namei.c:2173:13: warning: use of bitwise '|' with boolean operands [-Wbitwise-instead-of-logical]
          } while (!(has_zero(a, &adata, &constants) | has_zero(b, &bdata, &constants)));
                    ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                    ||
  fs/namei.c:2173:13: note: cast one or both operands to int to silence this warning

It appears that when has_zero was introduced, two definitions were produced with different signatures (in particular different return types).

Looking at the usage in hash_name() in fs/namei.c, I suspect that has_zero() is meant to be invoked twice per while loop iteration; using logical-or would not update `bdata` when `a` did not have zeros. So I think it's preferable to always return an unsigned long rather than a bool, instead of updating the while loop in hash_name() to use a logical-or rather than bitwise-or.

[ Also changed powerpc version to do the same - Linus ]

Link: https://github.com/ClangBuiltLinux/linux/issues/1832
Link: https://lore.kernel.org/lkml/20230801-bitwise-v1-1-799bec468dc4@google.com/
Fixes: 36126f8f2ed8 ("word-at-a-time: make the interfaces truly generic")
Debugged-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>

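For reference, the little-endian flavour with the agreed unsigned long return type, modelled on the x86 word-at-a-time header (field names match that header; treat the exact constants as assumptions):

    #define REPEAT_BYTE(x)  ((~0ul / 0xff) * (x))

    struct word_at_a_time {
            const unsigned long one_bits, high_bits;
    };

    #define WORD_AT_A_TIME_CONSTANTS { REPEAT_BYTE(0x01), REPEAT_BYTE(0x80) }

    /* Non-zero iff some byte of 'a' is zero; returning unsigned long
     * keeps bitwise '|' chains like the one in hash_name() warning-free
     * on both endiannesses. */
    static inline unsigned long has_zero(unsigned long a, unsigned long *bits,
                                         const struct word_at_a_time *c)
    {
            unsigned long mask = ((a - c->one_bits) & ~a) & c->high_bits;
            *bits = mask;
            return mask;
    }
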