RETGUARD
From:       Theo de Raadt <deraadt-AT-openbsd.org>
To:         tech-AT-openbsd.org
Subject:    RETGUARD
Date:       Sat, 19 Aug 2017 13:57:05 -0600
Message-ID: <21482.1503172625@cvs.openbsd.org>
This year I went to BSDCan in Ottawa. I spent much of it in the 'hallway track', and had an extended conversation with various people regarding our existing security mitigations and hopes for new ones in the future. I spoke a lot with Todd Mortimer. Apparently I told him that I felt return-address protection was impossible, so a few weeks later he sent a clang diff to address that issue...

The first diff is for amd64 and i386 only -- in theory RISC architectures can follow this approach soon.

The mechanism is like a userland 'stackghost' in the function prologue and epilogue. The prologue XORs the return address at the top of the stack with the stack pointer value itself. This perturbs the return address by introducing bits from ASLR. The function epilogue undoes the transform immediately before the RET instruction. ROP attack methods are impacted because existing gadgets are transformed to consist of "<gadget artifacts> <mangle ret address> RET". That pivots the return sequence off the ROP chain in a highly unpredictable and inconvenient fashion.

The compiler diff handles this for all the C code, but the assembly functions have to be done by hand. I did this work first for amd64, and more recently for i386. I've fixed most of the functions and only a handful of complex ones remain.

For those who know about polymorphism and pop/jmp or JOP, we believe that once standard RET is solved those concerns become easier to address separately in the future. In any case a substantial reduction of gadgets is powerful. For those worried about introducing worse polymorphism with these "xor; ret" epilogues themselves, the nested gadgets for the 64-bit and 32-bit variations are +1 "xor %esp,(%rsp); ret", +2 "and $0x24,%al; ret" and +3 "and $0xc3,%al; int3". Not bad.

Over the last two weeks, we have received help and advice to ensure debuggers (gdb, egdb, ddb, lldb) can still handle these transformed callframes. Also in the kernel, we discovered we must use a smaller XOR, because otherwise userland addresses are generated, and we cannot rely on SMEP as it is a really new feature of the architecture. There were also issues with pthreads and dlsym, which led to a series of uplifts around __builtin_return_address and DWARF CFI.

Application of this diff doesn't require anything special: a system can simply be built twice. Or shortcut by building & installing gnu/usr.bin/clang first, then doing a full build.

We are at the point where userland and base are fully working without regressions, and the remaining impacts are in a few larger ports which directly access the return address (for a variety of reasons). So work needs to continue with handling the RET-addr swizzle in those ports, and then we can move forward.
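For readers who want to see the transform concretely, here is a minimal sketch (mine, not part of the posted diff) of what a hypothetical amd64 userland leaf function looks like once the ret-protector is applied; the function name and body are invented for illustration. On entry the return address sits at (%rsp), so a single xorq against %rsp mangles it, and because %rsp holds the same value again just before the RET, the identical instruction in the epilogue undoes the transform:

    example:                        # hypothetical leaf function, userland amd64
            xorq    %rsp,(%rsp)     # prologue: mangle return address with %rsp (RETGUARD)
            movl    %edi,%eax       # ordinary function body: return arg0 + arg1
            addl    %esi,%eax
            xorq    %rsp,(%rsp)     # epilogue: %rsp is unchanged, so this restores the address
            ret

In the kernel the same idea uses a narrower XOR (xorl %esp,(%rsp) on amd64, xorw %sp,(%esp) on i386, per the pass below) so the mangled value stays a kernel address instead of turning into a userland pointer. The .cfi_escape sequences in the diff install a DWARF val_expression for the return-address register -- as I read the escape bytes, the unwinder computes (CFA-8) XOR the value stored at CFA-8 -- which is what keeps gdb, egdb, ddb and lldb able to walk these transformed callframes.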
Index: gnu/llvm/lib/Target/X86/CMakeLists.txt =================================================================== RCS file: /cvs/src/gnu/llvm/lib/Target/X86/CMakeLists.txt,v retrieving revision 1.1.1.3 diff -u -p -u -r1.1.1.3 CMakeLists.txt --- gnu/llvm/lib/Target/X86/CMakeLists.txt 24 Jan 2017 08:33:28 -0000 1.1.1.3 +++ gnu/llvm/lib/Target/X86/CMakeLists.txt 18 Aug 2017 21:15:04 -0000 @@ -45,6 +45,7 @@ set(sources X86MachineFunctionInfo.cpp X86OptimizeLEAs.cpp X86PadShortFunction.cpp + X86XorRetProtector.cpp X86RegisterInfo.cpp X86SelectionDAGInfo.cpp X86ShuffleDecodeConstantPool.cpp Index: gnu/llvm/lib/Target/X86/X86.h =================================================================== RCS file: /cvs/src/gnu/llvm/lib/Target/X86/X86.h,v retrieving revision 1.1.1.3 diff -u -p -u -r1.1.1.3 X86.h --- gnu/llvm/lib/Target/X86/X86.h 24 Jan 2017 08:33:27 -0000 1.1.1.3 +++ gnu/llvm/lib/Target/X86/X86.h 18 Aug 2017 21:15:04 -0000 @@ -50,6 +50,10 @@ FunctionPass *createX86IssueVZeroUpperPa /// This will prevent a stall when returning on the Atom. FunctionPass *createX86PadShortFunctions(); +/// Return a pass that adds xor instructions for return pointers +/// on the stack +FunctionPass *createX86XorRetProtectorPass(unsigned opt); + /// Return a pass that selectively replaces certain instructions (like add, /// sub, inc, dec, some shifts, and some multiplies) by equivalent LEA /// instructions, in order to eliminate execution delays in some processors. Index: gnu/llvm/lib/Target/X86/X86TargetMachine.cpp =================================================================== RCS file: /cvs/src/gnu/llvm/lib/Target/X86/X86TargetMachine.cpp,v retrieving revision 1.1.1.3 diff -u -p -u -r1.1.1.3 X86TargetMachine.cpp --- gnu/llvm/lib/Target/X86/X86TargetMachine.cpp 24 Jan 2017 08:33:27 -0000 1.1.1.3 +++ gnu/llvm/lib/Target/X86/X86TargetMachine.cpp 18 Aug 2017 21:15:04 -0000 @@ -260,6 +260,12 @@ UseVZeroUpper("x86-use-vzeroupper", cl:: cl::desc("Minimize AVX to SSE transition penalty"), cl::init(true)); +static cl::opt<unsigned> +XorRetProtector("x86-ret-protector", cl::NotHidden, + cl::desc("XOR return pointers in function preambles and before RETs." + "Argument = 1 for userland (xor full values)" + "Argument = 2 for kernel (xor lower half bits)"), + cl::init(0)); //===----------------------------------------------------------------------===// // X86 TTI query. //===----------------------------------------------------------------------===// @@ -402,4 +408,6 @@ void X86PassConfig::addPreEmitPass() { addPass(createX86FixupLEAs()); addPass(createX86EvexToVexInsts()); } + if (XorRetProtector) + addPass(createX86XorRetProtectorPass(XorRetProtector)); } Index: gnu/llvm/lib/Target/X86/X86XorRetProtector.cpp =================================================================== RCS file: gnu/llvm/lib/Target/X86/X86XorRetProtector.cpp diff -N gnu/llvm/lib/Target/X86/X86XorRetProtector.cpp --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ gnu/llvm/lib/Target/X86/X86XorRetProtector.cpp 18 Aug 2017 21:15:04 -0000 @@ -0,0 +1,148 @@ +//===-------- X86XorRetProtector.cpp - xor return pointers -----------===// +// +// The LLVM Compiler Infrastructure +// +// This file is distributed under the University of Illinois Open Source +// License. See LICENSE.TXT for details. +// +//===----------------------------------------------------------------------===// +// +// This file defines a pass that will xor the return pointer in +// each function preamble, and before any ret. 
+// +//===----------------------------------------------------------------------===// + +#include <algorithm> + +#include "X86.h" +#include "X86InstrInfo.h" +#include "X86Subtarget.h" +#include "X86InstrBuilder.h" +#include "llvm/ADT/Statistic.h" +#include "llvm/CodeGen/MachineFunctionPass.h" +#include "llvm/CodeGen/MachineInstrBuilder.h" +#include "llvm/CodeGen/MachineRegisterInfo.h" +#include "llvm/CodeGen/Passes.h" +#include "llvm/IR/Function.h" +#include "llvm/Support/Debug.h" +#include "llvm/Support/raw_ostream.h" +#include "llvm/Target/TargetInstrInfo.h" + +using namespace llvm; + +#define DEBUG_TYPE "x86-ret-protector" + +namespace { + struct X86XorRetProtector : public MachineFunctionPass { + static char ID; + X86XorRetProtector(bool kernel) : MachineFunctionPass(ID) + , STI(nullptr), TII(nullptr), isKernel(kernel) {} + + bool runOnMachineFunction(MachineFunction &MF) override; + + MachineFunctionProperties getRequiredProperties() const override { + return MachineFunctionProperties().set( + MachineFunctionProperties::Property::NoVRegs); + } + + StringRef getPassName() const override { + return "X86 XOR RET Instructions"; + } + + private: + void addXORInst(MachineBasicBlock &MBB, MachineInstr &MI); + + const X86Subtarget *STI; + const TargetInstrInfo *TII; + bool is64bit; + bool isKernel; + }; + + char X86XorRetProtector::ID = 0; +} + +FunctionPass *llvm::createX86XorRetProtectorPass(unsigned optval) { + return new X86XorRetProtector(optval == 2 ? true : false); +} + +/// runOnMachineFunction - Loop over all of the basic blocks, inserting +// XORs before each function and each ret +bool X86XorRetProtector::runOnMachineFunction(MachineFunction &MF) { + STI = &MF.getSubtarget<X86Subtarget>(); + TII = STI->getInstrInfo(); + is64bit = STI->is64Bit(); + + bool MadeChange = false; + for (auto &MBB : MF) { + for (auto &MI : MBB) { + if (MI.isReturn()) { + addXORInst(MBB, MI); + MadeChange = true; + } + } + } + if (MadeChange) { + for (auto &MBB : MF) { + if (!MBB.empty()) { + unsigned CFIIndex; + if (is64bit) { + if (isKernel) { + // cfi_escape exp RA len const -8 plus dup deref swap const4u 0xffffffff and xor + MCCFIInstruction CFIInst = MCCFIInstruction::createEscape(nullptr, + "\x16\x10\x0d\x09\xf8\x22\x12\x06\x16\x0c\xff\xff\xff\xff\x1a\x27"); + CFIIndex = MF.addFrameInst(CFIInst); + } else { /* userland */ + // cfi_escape exp RA len const -8 plus dup deref xor + MCCFIInstruction CFIInst = MCCFIInstruction::createEscape(nullptr, + "\x16\x10\x06\x09\xf8\x22\x12\x06\x27"); + CFIIndex = MF.addFrameInst(CFIInst); + } + } else { /* 32 bit */ + if (isKernel) { + // cfi_escape exp RA len const -4 plus dup deref swap const2u 0xffff and xor + MCCFIInstruction CFIInst = MCCFIInstruction::createEscape(nullptr, + "\x16\x08\x0b\x09\xfc\x22\x12\x06\x16\x0a\xff\xff\x1a\x27"); + CFIIndex = MF.addFrameInst(CFIInst); + } else { /* userland */ + // cfi_escape exp RA len const -4 plus dup deref xor + MCCFIInstruction CFIInst = MCCFIInstruction::createEscape(nullptr, + "\x16\x08\x06\x09\xfc\x22\x12\x06\x27"); + CFIIndex = MF.addFrameInst(CFIInst); + } + } + BuildMI(MBB, MBB.front(), MBB.front().getDebugLoc(), + TII->get(TargetOpcode::CFI_INSTRUCTION)) + .addCFIIndex(CFIIndex); + addXORInst(MBB, MBB.front()); + break; + } + } + } + return MadeChange; +} + +/// addXORInst - Add an xor before the given MBBI +void X86XorRetProtector::addXORInst(MachineBasicBlock &MBB, MachineInstr &MI) { + unsigned opcode, stackp, target; + if (is64bit) { + target = X86::RSP; + if (isKernel) { + opcode = X86::XOR32mr; + stackp 
= X86::ESP; + } else { + opcode = X86::XOR64mr; + stackp = X86::RSP; + } + } else { /* 32 bit */ + target = X86::ESP; + if (isKernel) { + opcode = X86::XOR16mr; + stackp = X86::SP; + } else { + opcode = X86::XOR32mr; + stackp = X86::ESP; + } + } + addDirectMem(BuildMI(MBB, MI, MI.getDebugLoc(), TII->get(opcode)), target) + .addReg(stackp); +} Index: gnu/llvm/tools/clang/include/clang/Driver/Options.td =================================================================== RCS file: /cvs/src/gnu/llvm/tools/clang/include/clang/Driver/Options.td,v retrieving revision 1.4 diff -u -p -u -r1.4 Options.td --- gnu/llvm/tools/clang/include/clang/Driver/Options.td 24 Jan 2017 08:39:08 -0000 1.4 +++ gnu/llvm/tools/clang/include/clang/Driver/Options.td 18 Aug 2017 21:15:04 -0000 @@ -1207,6 +1207,10 @@ def fstack_protector_strong : Flag<["-"] HelpText<"Use a strong heuristic to apply stack protectors to functions">; def fstack_protector : Flag<["-"], "fstack-protector">, Group<f_Group>, HelpText<"Enable stack protectors for functions potentially vulnerable to stack smashing">; +def fret_protector : Flag<["-"], "fret-protector">, Group<f_Group>, + HelpText<"Enable ret protection for all functions">; +def fno_ret_protector : Flag<["-"], "fno-ret-protector">, Group<f_Group>, + HelpText<"Disable ret protection">; def fstandalone_debug : Flag<["-"], "fstandalone-debug">, Group<f_Group>, Flags<[CoreOption]>, HelpText<"Emit full debug info for all types used by the program">; def fno_standalone_debug : Flag<["-"], "fno-standalone-debug">, Group<f_Group>, Flags<[CoreOption]>, Index: gnu/llvm/tools/clang/lib/Driver/Tools.cpp =================================================================== RCS file: /cvs/src/gnu/llvm/tools/clang/lib/Driver/Tools.cpp,v retrieving revision 1.14 diff -u -p -u -r1.14 Tools.cpp --- gnu/llvm/tools/clang/lib/Driver/Tools.cpp 28 Jul 2017 15:31:54 -0000 1.14 +++ gnu/llvm/tools/clang/lib/Driver/Tools.cpp 18 Aug 2017 21:15:04 -0000 @@ -5507,6 +5507,24 @@ void Clang::ConstructJob(Compilation &C, CmdArgs.push_back(Args.MakeArgString(Twine(StackProtectorLevel))); } + // -ret-protector + if (Args.hasFlag(options::OPT_fret_protector, options::OPT_fno_ret_protector, + true)) { + if (!Args.hasArg(options::OPT_pg)) { + CmdArgs.push_back(Args.MakeArgString("-D_RET_PROTECTOR")); + CmdArgs.push_back(Args.MakeArgString("-munwind-tables")); + CmdArgs.push_back(Args.MakeArgString("-mllvm")); + // Switch mode depending on kernel / nokernel + StringRef opt = "-x86-ret-protector=1"; + StringRef ker = "kernel"; + if (Arg *A = Args.getLastArg(options::OPT_mcmodel_EQ)) + if (A->getValue() == ker) + opt = "-x86-ret-protector=2"; + + CmdArgs.push_back(Args.MakeArgString(Twine(opt))); + } + } + // --param ssp-buffer-size= for (const Arg *A : Args.filtered(options::OPT__param)) { StringRef Str(A->getValue()); Index: gnu/usr.bin/clang/libLLVMX86CodeGen/Makefile =================================================================== RCS file: /cvs/src/gnu/usr.bin/clang/libLLVMX86CodeGen/Makefile,v retrieving revision 1.4 diff -u -p -u -r1.4 Makefile --- gnu/usr.bin/clang/libLLVMX86CodeGen/Makefile 9 Jul 2017 15:28:35 -0000 1.4 +++ gnu/usr.bin/clang/libLLVMX86CodeGen/Makefile 18 Aug 2017 21:15:04 -0000 @@ -25,6 +25,7 @@ SRCS= X86AsmPrinter.cpp \ X86InterleavedAccess.cpp \ X86MCInstLower.cpp \ X86MachineFunctionInfo.cpp \ + X86OptimizeLEAs.cpp \ X86PadShortFunction.cpp \ X86RegisterInfo.cpp \ X86SelectionDAGInfo.cpp \ @@ -36,7 +37,7 @@ SRCS= X86AsmPrinter.cpp \ X86VZeroUpper.cpp \ X86WinAllocaExpander.cpp \ 
X86WinEHState.cpp \ - X86OptimizeLEAs.cpp + X86XorRetProtector.cpp .PATH: ${.CURDIR}/../../../llvm/lib/Target/X86 Index: lib/csu/amd64/md_init.h =================================================================== RCS file: /cvs/src/lib/csu/amd64/md_init.h,v retrieving revision 1.6 diff -u -p -u -r1.6 md_init.h --- lib/csu/amd64/md_init.h 20 Mar 2016 02:32:39 -0000 1.6 +++ lib/csu/amd64/md_init.h 18 Aug 2017 14:39:40 -0000 @@ -50,6 +50,7 @@ " .type " #entry_pt ",@function \n" \ #entry_pt": \n" \ " .align 16 \n" \ + " xorq %rsp,(%rsp) # RETGUARD \n" \ " subq $8,%rsp \n" \ " .previous") @@ -58,6 +59,7 @@ __asm ( \ ".section "#sect",\"ax\",@progbits \n" \ " addq $8,%rsp \n" \ + " xorq %rsp,(%rsp) # RETGUARD \n" \ " ret \n" \ " .previous") @@ -114,11 +116,17 @@ " .type _dl_exit,@function \n" \ " .align 8 \n" \ "_dl_exit: \n" \ + " .cfi_startproc \n" \ + " xorq %rsp,(%rsp) # RETGUARD \n" \ + " .cfi_escape 0x16, 0x10, 0x06, 0x09, 0xf8, 0x22, 0x12, 0x06, 0x27\n" \ " movl $(1), %eax \n" \ " syscall \n" \ " jb 1f \n" \ + " xorq %rsp,(%rsp) # RETGUARD \n" \ " ret \n" \ "1: \n" \ " neg %rax \n" \ + " xorq %rsp,(%rsp) # RETGUARD \n" \ " ret \n" \ + " .cfi_endproc \n" \ " .previous") Index: lib/csu/i386/md_init.h =================================================================== RCS file: /cvs/src/lib/csu/i386/md_init.h,v retrieving revision 1.9 diff -u -p -u -r1.9 md_init.h --- lib/csu/i386/md_init.h 11 Aug 2017 20:13:31 -0000 1.9 +++ lib/csu/i386/md_init.h 18 Aug 2017 14:39:13 -0000 @@ -50,6 +50,7 @@ " .type " #entry_pt ",@function \n" \ #entry_pt": \n" \ " .align 16 \n" \ + " xorl %esp,(%esp) # RETGUARD \n" \ " pushl %ebp \n" \ " movl %esp,%ebp \n" \ " andl $~15,%esp \n" \ @@ -60,6 +61,7 @@ __asm ( \ ".section "#sect",\"ax\",@progbits \n" \ " leave \n" \ + " xorl %esp,(%esp) # RETGUARD \n" \ " ret \n" \ " .previous") @@ -122,7 +124,12 @@ " .globl _dl_exit \n" \ " .type _dl_exit,@function \n" \ "_dl_exit: \n" \ + " .cfi_startproc \n" \ + " .cfi_escape 0x16, 0x08, 0x06, 0x09, 0xfc, 0x22, 0x12, 0x06, 0x27 \n" \ + " xorl %esp,(%esp) # RETGUARD \n" \ " mov $1, %eax \n" \ " int $0x80 \n" \ + " xorl %esp,(%esp) # RETGUARD \n" \ " ret \n" \ + " .cfi_endproc \n" \ " .previous") Index: lib/libc/arch/amd64/DEFS.h =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/DEFS.h,v retrieving revision 1.1 diff -u -p -u -r1.1 DEFS.h --- lib/libc/arch/amd64/DEFS.h 14 Nov 2015 21:53:03 -0000 1.1 +++ lib/libc/arch/amd64/DEFS.h 18 Aug 2017 18:00:12 -0000 @@ -56,6 +56,6 @@ * END_STRONG(x) Like DEF_STRONG() in C; for standard/reserved C names * END_WEAK(x) Like DEF_WEAK() in C; for non-ISO C names */ -#define END_STRONG(x) END(x); _HIDDEN_FALIAS(x,x); END(_HIDDEN(x)) +#define END_STRONG(x) END(x); _HIDDEN_FALIAS(x,x); _ASM_SIZE(_HIDDEN(x)) #define END_WEAK(x) END_STRONG(x); .weak x Index: lib/libc/arch/amd64/SYS.h =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/SYS.h,v retrieving revision 1.20 diff -u -p -u -r1.20 SYS.h --- lib/libc/arch/amd64/SYS.h 6 Sep 2016 18:33:35 -0000 1.20 +++ lib/libc/arch/amd64/SYS.h 18 Aug 2017 18:00:34 -0000 @@ -52,8 +52,8 @@ #define SYSCALL_END_HIDDEN(x) \ END(_thread_sys_ ## x); \ _HIDDEN_FALIAS(x,_thread_sys_##x); \ - END(_HIDDEN(x)) -#define SYSCALL_END(x) SYSCALL_END_HIDDEN(x); END(x) + _ASM_SIZE(_HIDDEN(x)) +#define SYSCALL_END(x) SYSCALL_END_HIDDEN(x); _ASM_SIZE(x) #define SET_ERRNO \ @@ -66,9 +66,11 @@ #define _SYSCALL_NOERROR(x,y) \ SYSENTRY(x); \ + RETGUARD_START; \ SYSTRAP(y) 
#define _SYSCALL_HIDDEN_NOERROR(x,y) \ SYSENTRY_HIDDEN(x); \ + RETGUARD_START; \ SYSTRAP(y) #define SYSCALL_NOERROR(x) \ @@ -85,12 +87,15 @@ /* return, handling errno for failed calls */ #define _RSYSCALL_RET \ jc 99f; \ + RETGUARD_END; \ ret; \ 99: SET_ERRNO; \ + RETGUARD_END; \ ret #define PSEUDO_NOERROR(x,y) \ _SYSCALL_NOERROR(x,y); \ + RETGUARD_END; \ ret; \ SYSCALL_END(x) Index: lib/libc/arch/amd64/gen/fabs.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/gen/fabs.S,v retrieving revision 1.7 diff -u -p -u -r1.7 fabs.S --- lib/libc/arch/amd64/gen/fabs.S 29 May 2015 08:50:12 -0000 1.7 +++ lib/libc/arch/amd64/gen/fabs.S 18 Aug 2017 02:28:21 -0000 @@ -9,10 +9,12 @@ */ ENTRY(fabs) + RETGUARD_START movsd %xmm0, -8(%rsp) fldl -8(%rsp) fabs fstpl -8(%rsp) movsd -8(%rsp),%xmm0 + RETGUARD_END ret END(fabs) Index: lib/libc/arch/amd64/gen/flt_rounds.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/gen/flt_rounds.S,v retrieving revision 1.7 diff -u -p -u -r1.7 flt_rounds.S --- lib/libc/arch/amd64/gen/flt_rounds.S 19 Aug 2017 18:23:00 -0000 1.7 +++ lib/libc/arch/amd64/gen/flt_rounds.S 19 Aug 2017 18:29:07 -0000 @@ -16,6 +16,7 @@ _map: .byte 0 /* round to zero */ ENTRY(__flt_rounds) + RETGUARD_START fnstcw -4(%rsp) movl -4(%rsp),%eax shrl $10,%eax @@ -26,5 +27,6 @@ ENTRY(__flt_rounds) #else movb _map(,%rax,1),%al #endif + RETGUARD_END ret END_STRONG(__flt_rounds) Index: lib/libc/arch/amd64/gen/fpgetmask.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/gen/fpgetmask.S,v retrieving revision 1.2 diff -u -p -u -r1.2 fpgetmask.S --- lib/libc/arch/amd64/gen/fpgetmask.S 29 May 2015 08:50:12 -0000 1.2 +++ lib/libc/arch/amd64/gen/fpgetmask.S 18 Aug 2017 02:28:21 -0000 @@ -20,10 +20,12 @@ ENTRY(_fpgetmask) #else ENTRY(fpgetmask) #endif + RETGUARD_START fnstcw -4(%rsp) movl -4(%rsp),%eax notl %eax andl $63,%eax + RETGUARD_END ret #ifdef WEAK_ALIAS END(_fpgetmask) Index: lib/libc/arch/amd64/gen/fpgetround.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/gen/fpgetround.S,v retrieving revision 1.2 diff -u -p -u -r1.2 fpgetround.S --- lib/libc/arch/amd64/gen/fpgetround.S 29 May 2015 08:50:12 -0000 1.2 +++ lib/libc/arch/amd64/gen/fpgetround.S 18 Aug 2017 02:28:21 -0000 @@ -19,10 +19,12 @@ ENTRY(_fpgetround) #else ENTRY(fpgetround) #endif + RETGUARD_START fnstcw -4(%rsp) movl -4(%rsp),%eax rorl $10,%eax andl $3,%eax + RETGUARD_END ret #ifdef WEAK_ALIAS END(_fpgetround) Index: lib/libc/arch/amd64/gen/fpgetsticky.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/gen/fpgetsticky.S,v retrieving revision 1.2 diff -u -p -u -r1.2 fpgetsticky.S --- lib/libc/arch/amd64/gen/fpgetsticky.S 29 May 2015 08:50:12 -0000 1.2 +++ lib/libc/arch/amd64/gen/fpgetsticky.S 18 Aug 2017 02:28:21 -0000 @@ -20,11 +20,13 @@ ENTRY(_fpgetsticky) #else ENTRY(fpgetsticky) #endif + RETGUARD_START fnstsw -4(%rsp) stmxcsr -8(%rsp) movl -4(%rsp),%eax orl -8(%rsp),%eax andl $63,%eax + RETGUARD_END ret #ifdef WEAK_ALIAS END(_fpgetsticky) Index: lib/libc/arch/amd64/gen/fpsetmask.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/gen/fpsetmask.S,v retrieving revision 1.2 diff -u -p -u -r1.2 fpsetmask.S --- lib/libc/arch/amd64/gen/fpsetmask.S 29 May 2015 08:50:12 -0000 1.2 +++ 
lib/libc/arch/amd64/gen/fpsetmask.S 18 Aug 2017 02:28:21 -0000 @@ -21,6 +21,7 @@ ENTRY(_fpsetmask) #else ENTRY(fpsetmask) #endif + RETGUARD_START fnstcw -4(%rsp) stmxcsr -8(%rsp) andl $63,%edi @@ -39,6 +40,7 @@ ENTRY(fpsetmask) fldcw -4(%rsp) ldmxcsr -8(%rsp) andl $63,%eax + RETGUARD_END ret #ifdef WEAK_ALIAS END(_fpsetmask) Index: lib/libc/arch/amd64/gen/fpsetround.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/gen/fpsetround.S,v retrieving revision 1.2 diff -u -p -u -r1.2 fpsetround.S --- lib/libc/arch/amd64/gen/fpsetround.S 29 May 2015 08:50:12 -0000 1.2 +++ lib/libc/arch/amd64/gen/fpsetround.S 18 Aug 2017 02:28:21 -0000 @@ -22,6 +22,7 @@ ENTRY(_fpsetround) #else ENTRY(fpsetround) #endif + RETGUARD_START fnstcw -4(%rsp) stmxcsr -8(%rsp) @@ -46,6 +47,7 @@ ENTRY(fpsetround) ldmxcsr -8(%rsp) fldcw -4(%rsp) + RETGUARD_END ret #ifdef WEAK_ALIAS END(_fpsetround) Index: lib/libc/arch/amd64/gen/fpsetsticky.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/gen/fpsetsticky.S,v retrieving revision 1.3 diff -u -p -u -r1.3 fpsetsticky.S --- lib/libc/arch/amd64/gen/fpsetsticky.S 29 May 2015 08:50:12 -0000 1.3 +++ lib/libc/arch/amd64/gen/fpsetsticky.S 18 Aug 2017 02:28:21 -0000 @@ -22,6 +22,8 @@ ENTRY(_fpsetsticky) #else ENTRY(fpsetsticky) #endif + RETGUARD_START + fnstenv -28(%rsp) stmxcsr -32(%rsp) @@ -43,6 +45,7 @@ ENTRY(fpsetsticky) ldmxcsr -32(%rsp) fldenv -28(%rsp) + RETGUARD_END ret #ifdef WEAK_ALIAS END(_fpsetsticky) Index: lib/libc/arch/amd64/gen/modf.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/gen/modf.S,v retrieving revision 1.5 diff -u -p -u -r1.5 modf.S --- lib/libc/arch/amd64/gen/modf.S 29 May 2015 08:50:12 -0000 1.5 +++ lib/libc/arch/amd64/gen/modf.S 18 Aug 2017 02:28:21 -0000 @@ -51,6 +51,7 @@ /* With CHOP mode on, frndint behaves as TRUNC does. Useful. */ ENTRY(modf) + RETGUARD_START /* * Set chop mode. 
@@ -88,5 +89,6 @@ ENTRY(modf) fstpl -8(%rsp) movsd -8(%rsp),%xmm0 + RETGUARD_END ret END(modf) Index: lib/libc/arch/amd64/gen/setjmp.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/gen/setjmp.S,v retrieving revision 1.7 diff -u -p -u -r1.7 setjmp.S --- lib/libc/arch/amd64/gen/setjmp.S 29 May 2016 22:39:21 -0000 1.7 +++ lib/libc/arch/amd64/gen/setjmp.S 18 Aug 2017 02:28:21 -0000 @@ -45,7 +45,7 @@ .globl __jmpxor __jmpxor: .zero 8*3 # (rbp, rsp, pc) - END(__jmpxor) +// END(__jmpxor) .type __jmpxor,@object /* Index: lib/libc/arch/amd64/net/htonl.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/net/htonl.S,v retrieving revision 1.2 diff -u -p -u -r1.2 htonl.S --- lib/libc/arch/amd64/net/htonl.S 29 May 2015 09:25:28 -0000 1.2 +++ lib/libc/arch/amd64/net/htonl.S 18 Aug 2017 02:28:21 -0000 @@ -5,7 +5,9 @@ #include <machine/asm.h> ENTRY(htonl) + RETGUARD_START movl %edi,%eax bswapl %eax + RETGUARD_END ret END(htonl) Index: lib/libc/arch/amd64/net/htons.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/net/htons.S,v retrieving revision 1.3 diff -u -p -u -r1.3 htons.S --- lib/libc/arch/amd64/net/htons.S 29 May 2015 09:25:28 -0000 1.3 +++ lib/libc/arch/amd64/net/htons.S 18 Aug 2017 02:28:21 -0000 @@ -5,7 +5,9 @@ #include <machine/asm.h> ENTRY(htons) + RETGUARD_START movl %edi,%eax xchgb %ah,%al + RETGUARD_END ret END(htons) Index: lib/libc/arch/amd64/net/ntohl.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/net/ntohl.S,v retrieving revision 1.3 diff -u -p -u -r1.3 ntohl.S --- lib/libc/arch/amd64/net/ntohl.S 29 May 2015 09:25:28 -0000 1.3 +++ lib/libc/arch/amd64/net/ntohl.S 18 Aug 2017 02:28:21 -0000 @@ -5,7 +5,9 @@ #include <machine/asm.h> ENTRY(ntohl) + RETGUARD_START movl %edi,%eax bswapl %eax + RETGUARD_END ret END(ntohl) Index: lib/libc/arch/amd64/net/ntohs.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/net/ntohs.S,v retrieving revision 1.3 diff -u -p -u -r1.3 ntohs.S --- lib/libc/arch/amd64/net/ntohs.S 29 May 2015 09:25:28 -0000 1.3 +++ lib/libc/arch/amd64/net/ntohs.S 18 Aug 2017 02:28:21 -0000 @@ -5,7 +5,9 @@ #include <machine/asm.h> ENTRY(ntohs) + RETGUARD_START movl %edi,%eax xchgb %ah,%al + RETGUARD_END ret END(ntohs) Index: lib/libc/arch/amd64/string/bcmp.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/string/bcmp.S,v retrieving revision 1.6 diff -u -p -u -r1.6 bcmp.S --- lib/libc/arch/amd64/string/bcmp.S 14 Nov 2015 21:53:03 -0000 1.6 +++ lib/libc/arch/amd64/string/bcmp.S 18 Aug 2017 02:28:21 -0000 @@ -1,6 +1,7 @@ #include "DEFS.h" ENTRY(bcmp) + RETGUARD_START xorl %eax,%eax /* clear return value */ cld /* set compare direction forward */ @@ -17,5 +18,6 @@ ENTRY(bcmp) je L2 L1: incl %eax -L2: ret +L2: RETGUARD_END + ret END_WEAK(bcmp) Index: lib/libc/arch/amd64/string/bzero.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/string/bzero.S,v retrieving revision 1.6 diff -u -p -u -r1.6 bzero.S --- lib/libc/arch/amd64/string/bzero.S 14 Nov 2015 21:53:03 -0000 1.6 +++ lib/libc/arch/amd64/string/bzero.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,7 @@ #include "DEFS.h" ENTRY(bzero) + RETGUARD_START movq %rsi,%rdx cld /* set fill direction forward */ @@ -37,5 +38,6 @@ L1: movq %rdx,%rcx 
/* zero remainder by rep stosb + RETGUARD_END ret END_WEAK(bzero) Index: lib/libc/arch/amd64/string/ffs.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/string/ffs.S,v retrieving revision 1.4 diff -u -p -u -r1.4 ffs.S --- lib/libc/arch/amd64/string/ffs.S 14 Nov 2015 21:53:03 -0000 1.4 +++ lib/libc/arch/amd64/string/ffs.S 18 Aug 2017 02:28:21 -0000 @@ -8,12 +8,15 @@ #include "DEFS.h" ENTRY(ffs) + RETGUARD_START bsfl %edi,%eax jz L1 /* ZF is set if all bits are 0 */ incl %eax /* bits numbered from 1, not 0 */ + RETGUARD_END ret _ALIGN_TEXT L1: xorl %eax,%eax /* clear result */ + RETGUARD_END ret END_WEAK(ffs) Index: lib/libc/arch/amd64/string/memchr.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/string/memchr.S,v retrieving revision 1.6 diff -u -p -u -r1.6 memchr.S --- lib/libc/arch/amd64/string/memchr.S 14 Nov 2015 21:53:03 -0000 1.6 +++ lib/libc/arch/amd64/string/memchr.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,7 @@ #include "DEFS.h" ENTRY(memchr) + RETGUARD_START movb %sil,%al /* set character to search for */ movq %rdx,%rcx /* set length of search */ testq %rcx,%rcx /* test for len == 0 */ @@ -16,7 +17,9 @@ ENTRY(memchr) scasb jne L1 /* scan failed, return null */ leaq -1(%rdi),%rax /* adjust result of scan */ + RETGUARD_END ret L1: xorq %rax,%rax + RETGUARD_END ret END_STRONG(memchr) Index: lib/libc/arch/amd64/string/memmove.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/string/memmove.S,v retrieving revision 1.6 diff -u -p -u -r1.6 memmove.S --- lib/libc/arch/amd64/string/memmove.S 14 Nov 2015 21:53:03 -0000 1.6 +++ lib/libc/arch/amd64/string/memmove.S 18 Aug 2017 02:28:21 -0000 @@ -41,11 +41,14 @@ */ ENTRY(bcopy) + RETGUARD_START xchgq %rdi,%rsi - /* fall into memmove */ + jmp 9f +END_WEAK(bcopy) ENTRY(memmove) - movq %rdi,%r11 /* save dest */ + RETGUARD_START +9: movq %rdi,%r11 /* save dest */ movq %rdx,%rcx movq %rdi,%rax subq %rsi,%rax @@ -66,6 +69,7 @@ ENTRY(memmove) rep movsb movq %r11,%rax + RETGUARD_END ret 1: addq %rcx,%rdi /* copy backwards. */ @@ -84,7 +88,7 @@ ENTRY(memmove) movsq movq %r11,%rax cld + RETGUARD_END ret // END(memcpy) END_STRONG(memmove) -END_WEAK(bcopy) Index: lib/libc/arch/amd64/string/memset.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/string/memset.S,v retrieving revision 1.6 diff -u -p -u -r1.6 memset.S --- lib/libc/arch/amd64/string/memset.S 14 Nov 2015 21:53:03 -0000 1.6 +++ lib/libc/arch/amd64/string/memset.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,7 @@ #include "DEFS.h" ENTRY(memset) + RETGUARD_START movq %rsi,%rax andq $0xff,%rax movq %rdx,%rcx @@ -52,5 +53,6 @@ L1: rep stosb movq %r11,%rax + RETGUARD_END ret END_STRONG(memset) Index: lib/libc/arch/amd64/string/strchr.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/string/strchr.S,v retrieving revision 1.8 diff -u -p -u -r1.8 strchr.S --- lib/libc/arch/amd64/string/strchr.S 14 Nov 2015 21:53:03 -0000 1.8 +++ lib/libc/arch/amd64/string/strchr.S 18 Aug 2017 02:28:21 -0000 @@ -44,6 +44,7 @@ WEAK_ALIAS(index, strchr) */ ENTRY(strchr) + RETGUARD_START movabsq $0x0101010101010101,%r8 movzbq %sil,%rdx /* value to search for (c) */ @@ -85,6 +86,7 @@ ENTRY(strchr) bsf %r11,%r11 /* 7, 15, 23 ... 63 */ 8: shr $3,%r11 /* 0, 1, 2 .. 
7 */ lea -8(%r11,%rdi),%rax + RETGUARD_END ret /* End of string, check whether char is before NUL */ @@ -97,6 +99,7 @@ ENTRY(strchr) cmp %r11,%rax jae 8b /* return 'found' if same - searching for NUL */ 11: xor %eax,%eax /* char not found */ + RETGUARD_END ret /* Source misaligned: read aligned word and make low bytes invalid */ Index: lib/libc/arch/amd64/string/strcmp.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/string/strcmp.S,v retrieving revision 1.7 diff -u -p -u -r1.7 strcmp.S --- lib/libc/arch/amd64/string/strcmp.S 14 Nov 2015 21:53:03 -0000 1.7 +++ lib/libc/arch/amd64/string/strcmp.S 18 Aug 2017 02:28:21 -0000 @@ -9,6 +9,7 @@ #include "DEFS.h" ENTRY(strcmp) + RETGUARD_START /* * Align s1 to word boundary. * Consider unrolling loop? @@ -68,5 +69,6 @@ ENTRY(strcmp) movzbq %al,%rax movzbq %dl,%rdx subq %rdx,%rax + RETGUARD_END ret END_STRONG(strcmp) Index: lib/libc/arch/amd64/string/strlen.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/string/strlen.S,v retrieving revision 1.7 diff -u -p -u -r1.7 strlen.S --- lib/libc/arch/amd64/string/strlen.S 11 Dec 2015 00:05:46 -0000 1.7 +++ lib/libc/arch/amd64/string/strlen.S 18 Aug 2017 02:28:21 -0000 @@ -112,6 +112,7 @@ */ ENTRY(strlen) + RETGUARD_START movabsq $0x0101010101010101,%r8 test $7,%dil @@ -139,6 +140,7 @@ ENTRY(strlen) bsf %rdx,%rdx /* 7, 15, 23 ... 63 */ shr $3,%rdx /* 0, 1, 2 ... 7 */ lea -8(%rax,%rdx),%rax + RETGUARD_END ret /* Misaligned, read aligned word and make low bytes non-zero */ Index: lib/libc/arch/amd64/string/strrchr.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/string/strrchr.S,v retrieving revision 1.8 diff -u -p -u -r1.8 strrchr.S --- lib/libc/arch/amd64/string/strrchr.S 14 Nov 2015 21:53:03 -0000 1.8 +++ lib/libc/arch/amd64/string/strrchr.S 18 Aug 2017 02:28:21 -0000 @@ -11,6 +11,7 @@ WEAK_ALIAS(rindex, strrchr) ENTRY(strrchr) + RETGUARD_START movzbq %sil,%rcx /* zero return value */ @@ -120,5 +121,6 @@ ENTRY(strrchr) jne .Lloop .Ldone: + RETGUARD_END ret END_STRONG(strrchr) Index: lib/libc/arch/amd64/sys/brk.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/sys/brk.S,v retrieving revision 1.10 diff -u -p -u -r1.10 brk.S --- lib/libc/arch/amd64/sys/brk.S 19 Aug 2017 18:24:06 -0000 1.10 +++ lib/libc/arch/amd64/sys/brk.S 19 Aug 2017 18:29:07 -0000 @@ -45,11 +45,12 @@ .data __minbrk: .quad _end - END(__minbrk) + _ASM_SIZE(__minbrk) .type __minbrk,@object .weak brk ENTRY(brk) + RETGUARD_START cmpq %rdi,__minbrk(%rip) jb 1f movq __minbrk(%rip),%rdi @@ -58,8 +59,10 @@ ENTRY(brk) jc 1f movq %rdi,__curbrk(%rip) xorl %eax,%eax + RETGUARD_END ret 1: SET_ERRNO + RETGUARD_END ret END(brk) Index: lib/libc/arch/amd64/sys/sbrk.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/sys/sbrk.S,v retrieving revision 1.10 diff -u -p -u -r1.10 sbrk.S --- lib/libc/arch/amd64/sys/sbrk.S 19 Aug 2017 18:24:06 -0000 1.10 +++ lib/libc/arch/amd64/sys/sbrk.S 19 Aug 2017 19:08:23 -0000 @@ -50,11 +50,12 @@ .data __curbrk: .quad _end - END(__curbrk) + _ASM_SIZE(__curbrk) .type __curbrk,@object .weak sbrk ENTRY(sbrk) + RETGUARD_START movq __curbrk(%rip),%rax movslq %edi,%rsi movq %rsi,%rdi @@ -63,8 +64,10 @@ ENTRY(sbrk) jc 1f movq __curbrk(%rip),%rax addq %rsi,__curbrk(%rip) + RETGUARD_END ret 1: SET_ERRNO + RETGUARD_END ret END(sbrk) Index: 
lib/libc/arch/amd64/sys/sigpending.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/sys/sigpending.S,v retrieving revision 1.3 diff -u -p -u -r1.3 sigpending.S --- lib/libc/arch/amd64/sys/sigpending.S 17 Jun 2015 03:04:50 -0000 1.3 +++ lib/libc/arch/amd64/sys/sigpending.S 18 Aug 2017 02:28:21 -0000 @@ -42,5 +42,6 @@ SYSCALL(sigpending) movl %eax,(%rdi) # store old mask xorl %eax,%eax + RETGUARD_END ret SYSCALL_END(sigpending) Index: lib/libc/arch/amd64/sys/sigprocmask.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/sys/sigprocmask.S,v retrieving revision 1.9 diff -u -p -u -r1.9 sigprocmask.S --- lib/libc/arch/amd64/sys/sigprocmask.S 7 May 2016 19:05:21 -0000 1.9 +++ lib/libc/arch/amd64/sys/sigprocmask.S 18 Aug 2017 02:28:21 -0000 @@ -40,6 +40,7 @@ #include "SYS.h" SYSENTRY_HIDDEN(sigprocmask) + RETGUARD_START testq %rsi,%rsi # check new sigset pointer jnz 1f # if not null, indirect movl $1,%edi # SIG_BLOCK @@ -52,8 +53,10 @@ SYSENTRY_HIDDEN(sigprocmask) movl %eax,(%rdx) # store old mask 3: xorl %eax,%eax + RETGUARD_END ret 1: SET_ERRNO + RETGUARD_END ret SYSCALL_END_HIDDEN(sigprocmask) Index: lib/libc/arch/amd64/sys/sigsuspend.S =================================================================== RCS file: /cvs/src/lib/libc/arch/amd64/sys/sigsuspend.S,v retrieving revision 1.7 diff -u -p -u -r1.7 sigsuspend.S --- lib/libc/arch/amd64/sys/sigsuspend.S 7 May 2016 19:05:21 -0000 1.7 +++ lib/libc/arch/amd64/sys/sigsuspend.S 18 Aug 2017 19:59:18 -0000 @@ -40,8 +40,10 @@ #include "SYS.h" SYSENTRY_HIDDEN(sigsuspend) + RETGUARD_START movl (%rdi),%edi # indirect to mask arg SYSTRAP(sigsuspend) SET_ERRNO + RETGUARD_END ret SYSCALL_END_HIDDEN(sigsuspend) Index: lib/libc/arch/i386/SYS.h =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/SYS.h,v retrieving revision 1.26 diff -u -p -u -r1.26 SYS.h --- lib/libc/arch/i386/SYS.h 1 Jun 2017 12:14:48 -0000 1.26 +++ lib/libc/arch/i386/SYS.h 18 Aug 2017 18:01:11 -0000 @@ -56,7 +56,7 @@ * END_STRONG(x) Like DEF_STRONG() in C; for standard/reserved C names * END_WEAK(x) Like DEF_WEAK() in C; for non-ISO C names */ -#define END_STRONG(x) END(x); _HIDDEN_FALIAS(x,x); END(_HIDDEN(x)) +#define END_STRONG(x) END(x); _HIDDEN_FALIAS(x,x); _ASM_SIZE(_HIDDEN(x)) #define END_WEAK(x) END_STRONG(x); .weak x @@ -71,18 +71,19 @@ /* Use both _thread_sys_{syscall} and [weak] {syscall}. 
*/ #define SYSENTRY(x) \ - ENTRY(_thread_sys_##x); \ - WEAK_ALIAS(x, _thread_sys_##x) + ENTRY(_thread_sys_##x); \ + WEAK_ALIAS(x, _thread_sys_##x) #define SYSENTRY_HIDDEN(x) \ - ENTRY(_thread_sys_ ## x) -#define __END_HIDDEN(x) END(_thread_sys_ ## x); \ - _HIDDEN_FALIAS(x,_thread_sys_ ## x); \ - END(_HIDDEN(x)) -#define __END(x) __END_HIDDEN(x); END(x) + ENTRY(_thread_sys_ ## x) +#define __END_HIDDEN(x) END(_thread_sys_ ## x); \ + _HIDDEN_FALIAS(x,_thread_sys_ ## x); \ + _ASM_SIZE(_HIDDEN(x)) +#define __END(x) \ + __END_HIDDEN(x); _ASM_SIZE(x) #define __DO_SYSCALL(x) \ - movl $(SYS_ ## x),%eax; \ - int $0x80 + movl $(SYS_ ## x),%eax; \ + int $0x80 #define SET_ERRNO() \ movl %eax,%gs:(TCB_OFFSET_ERRNO); \ @@ -95,53 +96,58 @@ /* perform a syscall */ #define _SYSCALL_NOERROR(x,y) \ - SYSENTRY(x); \ - __DO_SYSCALL(y); + SYSENTRY(x); \ + RETGUARD_START; \ + __DO_SYSCALL(y); #define _SYSCALL_HIDDEN_NOERROR(x,y) \ - SYSENTRY_HIDDEN(x); \ - __DO_SYSCALL(y); + SYSENTRY_HIDDEN(x); \ + RETGUARD_START; \ + __DO_SYSCALL(y); #define SYSCALL_NOERROR(x) \ - _SYSCALL_NOERROR(x,x) + _SYSCALL_NOERROR(x,x) /* perform a syscall, set errno */ #define _SYSCALL(x,y) \ - .text; \ - .align 2; \ - _SYSCALL_NOERROR(x,y) \ - HANDLE_ERRNO() + .text; \ + .align 2; \ + _SYSCALL_NOERROR(x,y) \ + HANDLE_ERRNO() #define _SYSCALL_HIDDEN(x,y) \ - .text; \ - .align 2; \ - _SYSCALL_HIDDEN_NOERROR(x,y) \ - HANDLE_ERRNO() + .text; \ + .align 2; \ + _SYSCALL_HIDDEN_NOERROR(x,y) \ + HANDLE_ERRNO() #define SYSCALL(x) \ - _SYSCALL(x,x) + _SYSCALL(x,x) #define SYSCALL_HIDDEN(x) \ - _SYSCALL_HIDDEN(x,y) + _SYSCALL_HIDDEN(x,y) /* perform a syscall, return */ #define PSEUDO_NOERROR(x,y) \ - _SYSCALL_NOERROR(x,y); \ - ret; \ - __END(x) + _SYSCALL_NOERROR(x,y); \ + RETGUARD_END; \ + ret; \ + __END(x) /* perform a syscall, set errno, return */ #define PSEUDO(x,y) \ - _SYSCALL(x,y); \ - ret; \ - __END(x) + _SYSCALL(x,y); \ + RETGUARD_END; \ + ret; \ + __END(x) #define PSEUDO_HIDDEN(x,y) \ - _SYSCALL_HIDDEN(x,y); \ - ret; \ - __END_HIDDEN(x) + _SYSCALL_HIDDEN(x,y); \ + RETGUARD_END; \ + ret; \ + __END_HIDDEN(x) /* perform a syscall with the same name, set errno, return */ #define RSYSCALL(x) \ - PSEUDO(x,x); + PSEUDO(x,x); #define RSYSCALL_HIDDEN(x) \ - PSEUDO_HIDDEN(x,x) + PSEUDO_HIDDEN(x,x) #define SYSCALL_END(x) __END(x) #define SYSCALL_END_HIDDEN(x) \ - __END_HIDDEN(x) + __END_HIDDEN(x) Index: lib/libc/arch/i386/gen/divsi3.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/gen/divsi3.S,v retrieving revision 1.5 diff -u -p -u -r1.5 divsi3.S --- lib/libc/arch/i386/gen/divsi3.S 7 Aug 2005 11:30:38 -0000 1.5 +++ lib/libc/arch/i386/gen/divsi3.S 18 Aug 2017 02:28:21 -0000 @@ -34,7 +34,10 @@ #include <machine/asm.h> ENTRY(__divsi3) + RETGUARD_START movl 4(%esp),%eax cltd idivl 8(%esp) + RETGUARD_END ret +END(__divsi3) Index: lib/libc/arch/i386/gen/fabs.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/gen/fabs.S,v retrieving revision 1.9 diff -u -p -u -r1.9 fabs.S --- lib/libc/arch/i386/gen/fabs.S 8 Jul 2011 22:28:33 -0000 1.9 +++ lib/libc/arch/i386/gen/fabs.S 18 Aug 2017 02:28:21 -0000 @@ -34,6 +34,9 @@ #include <machine/asm.h> ENTRY(fabs) + RETGUARD_START fldl 4(%esp) fabs + RETGUARD_END ret +END(fabs) Index: lib/libc/arch/i386/gen/fixdfsi.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/gen/fixdfsi.S,v retrieving revision 1.5 diff -u -p -u -r1.5 fixdfsi.S --- 
lib/libc/arch/i386/gen/fixdfsi.S 7 Aug 2005 11:30:38 -0000 1.5 +++ lib/libc/arch/i386/gen/fixdfsi.S 18 Aug 2017 02:28:21 -0000 @@ -34,7 +34,10 @@ #include <machine/asm.h> ENTRY(__fixdfsi) + RETGUARD_START fldl 4(%esp) fistpl 4(%esp) movl 4(%esp),%eax + RETGUARD_END ret +END(__fixdfsi) Index: lib/libc/arch/i386/gen/fixunsdfsi.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/gen/fixunsdfsi.S,v retrieving revision 1.7 diff -u -p -u -r1.7 fixunsdfsi.S --- lib/libc/arch/i386/gen/fixunsdfsi.S 14 Nov 2014 07:31:13 -0000 1.7 +++ lib/libc/arch/i386/gen/fixunsdfsi.S 18 Aug 2017 02:28:21 -0000 @@ -34,6 +34,7 @@ #include <machine/asm.h> ENTRY(__fixunsdfsi) + RETGUARD_START fldl 4(%esp) /* argument double to accum stack */ frndint /* create integer */ #ifdef __PIC__ @@ -50,6 +51,7 @@ ENTRY(__fixunsdfsi) fistpl 4(%esp) movl 4(%esp),%eax + RETGUARD_END ret 1: @@ -64,6 +66,8 @@ ENTRY(__fixunsdfsi) fistpl 4(%esp) /* convert */ movl 4(%esp),%eax orl $0x80000000,%eax /* restore bias */ + RETGUARD_END ret +END(__fixunsdfsi) fbiggestsigned: .double 2147483648.0 Index: lib/libc/arch/i386/gen/flt_rounds.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/gen/flt_rounds.S,v retrieving revision 1.6 diff -u -p -u -r1.6 flt_rounds.S --- lib/libc/arch/i386/gen/flt_rounds.S 19 Aug 2017 18:23:00 -0000 1.6 +++ lib/libc/arch/i386/gen/flt_rounds.S 19 Aug 2017 18:29:07 -0000 @@ -15,6 +15,7 @@ _map: .byte 0 /* round to zero */ ENTRY(__flt_rounds) + RETGUARD_START subl $4,%esp fnstcw (%esp) movl (%esp),%eax @@ -29,5 +30,6 @@ ENTRY(__flt_rounds) movb _map(,%eax,1),%al #endif addl $4,%esp + RETGUARD_END ret END_STRONG(__flt_rounds); Index: lib/libc/arch/i386/gen/fpgetmask.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/gen/fpgetmask.S,v retrieving revision 1.3 diff -u -p -u -r1.3 fpgetmask.S --- lib/libc/arch/i386/gen/fpgetmask.S 7 Aug 2005 11:30:38 -0000 1.3 +++ lib/libc/arch/i386/gen/fpgetmask.S 18 Aug 2017 02:28:21 -0000 @@ -7,10 +7,13 @@ #include <machine/asm.h> ENTRY(fpgetmask) + RETGUARD_START subl $4,%esp fnstcw (%esp) movl (%esp),%eax notl %eax andl $63,%eax addl $4,%esp + RETGUARD_END ret +END(fpgetmask) Index: lib/libc/arch/i386/gen/fpgetround.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/gen/fpgetround.S,v retrieving revision 1.4 diff -u -p -u -r1.4 fpgetround.S --- lib/libc/arch/i386/gen/fpgetround.S 21 Jun 2009 00:38:22 -0000 1.4 +++ lib/libc/arch/i386/gen/fpgetround.S 18 Aug 2017 02:28:21 -0000 @@ -7,10 +7,13 @@ #include <machine/asm.h> ENTRY(fpgetround) + RETGUARD_START subl $4,%esp fnstcw (%esp) movl (%esp),%eax rorl $10,%eax andl $3,%eax addl $4,%esp + RETGUARD_END ret +END(fpgetround) Index: lib/libc/arch/i386/gen/fpgetsticky.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/gen/fpgetsticky.S,v retrieving revision 1.3 diff -u -p -u -r1.3 fpgetsticky.S --- lib/libc/arch/i386/gen/fpgetsticky.S 7 Aug 2005 11:30:38 -0000 1.3 +++ lib/libc/arch/i386/gen/fpgetsticky.S 18 Aug 2017 02:28:21 -0000 @@ -7,9 +7,12 @@ #include <machine/asm.h> ENTRY(fpgetsticky) + RETGUARD_START subl $4,%esp fnstsw (%esp) movl (%esp),%eax andl $63,%eax addl $4,%esp + RETGUARD_END ret +END(fpgetsticky) Index: lib/libc/arch/i386/gen/fpsetmask.S =================================================================== RCS file: 
/cvs/src/lib/libc/arch/i386/gen/fpsetmask.S,v retrieving revision 1.3 diff -u -p -u -r1.3 fpsetmask.S --- lib/libc/arch/i386/gen/fpsetmask.S 7 Aug 2005 11:30:38 -0000 1.3 +++ lib/libc/arch/i386/gen/fpsetmask.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,7 @@ #include <machine/asm.h> ENTRY(fpsetmask) + RETGUARD_START subl $4,%esp fnstcw (%esp) @@ -24,4 +25,6 @@ ENTRY(fpsetmask) fldcw (%esp) addl $4,%esp + RETGUARD_END ret +END(fpsetmask) Index: lib/libc/arch/i386/gen/fpsetround.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/gen/fpsetround.S,v retrieving revision 1.3 diff -u -p -u -r1.3 fpsetround.S --- lib/libc/arch/i386/gen/fpsetround.S 7 Aug 2005 11:30:38 -0000 1.3 +++ lib/libc/arch/i386/gen/fpsetround.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,7 @@ #include <machine/asm.h> ENTRY(fpsetround) + RETGUARD_START subl $4,%esp fnstcw (%esp) @@ -25,4 +26,6 @@ ENTRY(fpsetround) fldcw (%esp) addl $4,%esp + RETGUARD_END ret +END(fpsetround) Index: lib/libc/arch/i386/gen/fpsetsticky.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/gen/fpsetsticky.S,v retrieving revision 1.3 diff -u -p -u -r1.3 fpsetsticky.S --- lib/libc/arch/i386/gen/fpsetsticky.S 7 Aug 2005 11:30:38 -0000 1.3 +++ lib/libc/arch/i386/gen/fpsetsticky.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,7 @@ #include <machine/asm.h> ENTRY(fpsetsticky) + RETGUARD_START subl $28,%esp fnstenv (%esp) @@ -23,4 +24,6 @@ ENTRY(fpsetsticky) fldenv (%esp) addl $28,%esp + RETGUARD_END ret +END(fpsetsticky) Index: lib/libc/arch/i386/gen/modf.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/gen/modf.S,v retrieving revision 1.7 diff -u -p -u -r1.7 modf.S --- lib/libc/arch/i386/gen/modf.S 8 Jul 2011 22:28:33 -0000 1.7 +++ lib/libc/arch/i386/gen/modf.S 18 Aug 2017 02:28:21 -0000 @@ -43,6 +43,7 @@ /* With CHOP mode on, frndint behaves as TRUNC does. Useful. 
*/ ENTRY(modf) + RETGUARD_START pushl %ebp movl %esp,%ebp subl $16,%esp @@ -65,4 +66,6 @@ ENTRY(modf) jmp L1 L1: leave + RETGUARD_END ret +END(modf) Index: lib/libc/arch/i386/gen/setjmp.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/gen/setjmp.S,v retrieving revision 1.11 diff -u -p -u -r1.11 setjmp.S --- lib/libc/arch/i386/gen/setjmp.S 30 May 2016 02:11:21 -0000 1.11 +++ lib/libc/arch/i386/gen/setjmp.S 18 Aug 2017 18:03:48 -0000 @@ -39,7 +39,7 @@ .hidden __jmpxor __jmpxor: .zero 4*3 # (eip, esp, ebp) - END(__jmpxor) + _ASM_SIZE(__jmpxor) .type __jmpxor,@object Index: lib/libc/arch/i386/gen/udivsi3.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/gen/udivsi3.S,v retrieving revision 1.5 diff -u -p -u -r1.5 udivsi3.S --- lib/libc/arch/i386/gen/udivsi3.S 7 Aug 2005 11:30:38 -0000 1.5 +++ lib/libc/arch/i386/gen/udivsi3.S 18 Aug 2017 02:28:21 -0000 @@ -34,7 +34,10 @@ #include <machine/asm.h> ENTRY(__udivsi3) + RETGUARD_START movl 4(%esp),%eax xorl %edx,%edx divl 8(%esp) + RETGUARD_END ret +END(__udivsi3) Index: lib/libc/arch/i386/net/htonl.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/net/htonl.S,v retrieving revision 1.4 diff -u -p -u -r1.4 htonl.S --- lib/libc/arch/i386/net/htonl.S 28 Oct 2009 06:49:54 -0000 1.4 +++ lib/libc/arch/i386/net/htonl.S 18 Aug 2017 02:28:21 -0000 @@ -34,8 +34,11 @@ /* netorder = htonl(hostorder) */ ENTRY(htonl) + RETGUARD_START movl 4(%esp),%eax rorw $8,%ax roll $16,%eax rorw $8,%ax + RETGUARD_END ret +END(htonl) Index: lib/libc/arch/i386/net/htons.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/net/htons.S,v retrieving revision 1.4 diff -u -p -u -r1.4 htons.S --- lib/libc/arch/i386/net/htons.S 28 Oct 2009 06:49:54 -0000 1.4 +++ lib/libc/arch/i386/net/htons.S 18 Aug 2017 02:28:21 -0000 @@ -34,6 +34,9 @@ /* netorder = htons(hostorder) */ ENTRY(htons) + RETGUARD_START movzwl 4(%esp),%eax rorw $8,%ax + RETGUARD_END ret +END(htons) Index: lib/libc/arch/i386/net/ntohl.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/net/ntohl.S,v retrieving revision 1.4 diff -u -p -u -r1.4 ntohl.S --- lib/libc/arch/i386/net/ntohl.S 28 Oct 2009 06:49:54 -0000 1.4 +++ lib/libc/arch/i386/net/ntohl.S 18 Aug 2017 02:28:21 -0000 @@ -34,8 +34,11 @@ /* hostorder = ntohl(netorder) */ ENTRY(ntohl) + RETGUARD_START movl 4(%esp),%eax rorw $8,%ax roll $16,%eax rorw $8,%ax + RETGUARD_END ret +END(ntohl) Index: lib/libc/arch/i386/net/ntohs.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/net/ntohs.S,v retrieving revision 1.4 diff -u -p -u -r1.4 ntohs.S --- lib/libc/arch/i386/net/ntohs.S 28 Oct 2009 06:49:54 -0000 1.4 +++ lib/libc/arch/i386/net/ntohs.S 18 Aug 2017 02:28:21 -0000 @@ -34,6 +34,9 @@ /* hostorder = ntohs(netorder) */ ENTRY(ntohs) + RETGUARD_START movzwl 4(%esp),%eax rorw $8,%ax + RETGUARD_END ret +END(ntohs) Index: lib/libc/arch/i386/stdlib/abs.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/stdlib/abs.S,v retrieving revision 1.6 diff -u -p -u -r1.6 abs.S --- lib/libc/arch/i386/stdlib/abs.S 13 Sep 2015 16:27:59 -0000 1.6 +++ lib/libc/arch/i386/stdlib/abs.S 18 Aug 2017 02:28:21 -0000 @@ -33,9 +33,11 @@ #include "SYS.h" ENTRY(abs) + RETGUARD_START movl 4(%esp),%eax testl %eax,%eax 
jns 1f negl %eax -1: ret +1: RETGUARD_END + ret END_STRONG(abs) Index: lib/libc/arch/i386/stdlib/div.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/stdlib/div.S,v retrieving revision 1.6 diff -u -p -u -r1.6 div.S --- lib/libc/arch/i386/stdlib/div.S 13 Sep 2015 16:27:59 -0000 1.6 +++ lib/libc/arch/i386/stdlib/div.S 18 Aug 2017 02:28:21 -0000 @@ -6,11 +6,13 @@ #include "SYS.h" ENTRY(div) + RETGUARD_START movl 4(%esp),%eax movl 8(%esp),%ecx cdq idiv %ecx movl %eax,4(%esp) movl %edx,8(%esp) + RETGUARD_END ret END_STRONG(div) Index: lib/libc/arch/i386/stdlib/labs.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/stdlib/labs.S,v retrieving revision 1.6 diff -u -p -u -r1.6 labs.S --- lib/libc/arch/i386/stdlib/labs.S 13 Sep 2015 16:27:59 -0000 1.6 +++ lib/libc/arch/i386/stdlib/labs.S 18 Aug 2017 02:28:21 -0000 @@ -33,9 +33,11 @@ #include "SYS.h" ENTRY(labs) + RETGUARD_START movl 4(%esp),%eax testl %eax,%eax jns 1f negl %eax -1: ret +1: RETGUARD_END + ret END_STRONG(labs) Index: lib/libc/arch/i386/stdlib/ldiv.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/stdlib/ldiv.S,v retrieving revision 1.6 diff -u -p -u -r1.6 ldiv.S --- lib/libc/arch/i386/stdlib/ldiv.S 13 Sep 2015 16:27:59 -0000 1.6 +++ lib/libc/arch/i386/stdlib/ldiv.S 18 Aug 2017 02:28:21 -0000 @@ -6,11 +6,13 @@ #include "SYS.h" ENTRY(ldiv) + RETGUARD_START movl 4(%esp),%eax movl 8(%esp),%ecx cdq idiv %ecx movl %eax,4(%esp) movl %edx,8(%esp) + RETGUARD_END ret END_STRONG(ldiv) Index: lib/libc/arch/i386/string/bcmp.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/string/bcmp.S,v retrieving revision 1.4 diff -u -p -u -r1.4 bcmp.S --- lib/libc/arch/i386/string/bcmp.S 31 Aug 2015 02:53:56 -0000 1.4 +++ lib/libc/arch/i386/string/bcmp.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,7 @@ #include "SYS.h" ENTRY(bcmp) + RETGUARD_START pushl %edi pushl %esi movl 12(%esp),%edi @@ -29,5 +30,6 @@ ENTRY(bcmp) L1: incl %eax L2: popl %esi popl %edi + RETGUARD_END ret END_WEAK(bcmp) Index: lib/libc/arch/i386/string/bzero.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/string/bzero.S,v retrieving revision 1.5 diff -u -p -u -r1.5 bzero.S --- lib/libc/arch/i386/string/bzero.S 31 Aug 2015 02:53:56 -0000 1.5 +++ lib/libc/arch/i386/string/bzero.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,7 @@ #include "SYS.h" ENTRY(bzero) + RETGUARD_START pushl %edi movl 8(%esp),%edi movl 12(%esp),%edx @@ -40,5 +41,6 @@ L1: movl %edx,%ecx /* zero remainder by stosb popl %edi + RETGUARD_END ret END_WEAK(bzero) Index: lib/libc/arch/i386/string/ffs.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/string/ffs.S,v retrieving revision 1.5 diff -u -p -u -r1.5 ffs.S --- lib/libc/arch/i386/string/ffs.S 19 Aug 2017 18:25:50 -0000 1.5 +++ lib/libc/arch/i386/string/ffs.S 19 Aug 2017 18:29:07 -0000 @@ -7,12 +7,15 @@ #include "SYS.h" ENTRY(ffs) + RETGUARD_START bsfl 4(%esp),%eax jz L1 /* ZF is set if all bits are 0 */ incl %eax /* bits numbered from 1, not 0 */ + RETGUARD_END ret .align 2,0xcc L1: xorl %eax,%eax /* clear result */ + RETGUARD_END ret END_WEAK(ffs) Index: lib/libc/arch/i386/string/memchr.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/string/memchr.S,v retrieving revision 1.5 diff -u 
-p -u -r1.5 memchr.S --- lib/libc/arch/i386/string/memchr.S 19 Aug 2017 18:25:50 -0000 1.5 +++ lib/libc/arch/i386/string/memchr.S 19 Aug 2017 18:29:07 -0000 @@ -7,6 +7,7 @@ #include "SYS.h" ENTRY(memchr) + RETGUARD_START pushl %edi movl 8(%esp),%edi /* string address */ movl 12(%esp),%eax /* set character to search for */ @@ -19,9 +20,11 @@ ENTRY(memchr) jne L1 /* scan failed, return null */ leal -1(%edi),%eax /* adjust result of scan */ popl %edi + RETGUARD_END ret .align 2,0xcc L1: xorl %eax,%eax popl %edi + RETGUARD_END ret END_STRONG(memchr) Index: lib/libc/arch/i386/string/memcmp.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/string/memcmp.S,v retrieving revision 1.5 diff -u -p -u -r1.5 memcmp.S --- lib/libc/arch/i386/string/memcmp.S 31 Aug 2015 02:53:56 -0000 1.5 +++ lib/libc/arch/i386/string/memcmp.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,7 @@ #include "SYS.h" ENTRY(memcmp) + RETGUARD_START pushl %edi pushl %esi movl 12(%esp),%edi @@ -28,6 +29,7 @@ ENTRY(memcmp) xorl %eax,%eax /* we match, return zero */ popl %esi popl %edi + RETGUARD_END ret L5: movl $4,%ecx /* We know that one of the next */ @@ -40,5 +42,6 @@ L6: movzbl -1(%edi),%eax /* Perform un subl %edx,%eax popl %esi popl %edi + RETGUARD_END ret END_STRONG(memcmp) Index: lib/libc/arch/i386/string/memmove.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/string/memmove.S,v retrieving revision 1.6 diff -u -p -u -r1.6 memmove.S --- lib/libc/arch/i386/string/memmove.S 31 Aug 2015 02:53:56 -0000 1.6 +++ lib/libc/arch/i386/string/memmove.S 18 Aug 2017 02:28:21 -0000 @@ -40,17 +40,20 @@ * into memmove(), which handles overlapping regions. */ ENTRY(bcopy) + RETGUARD_START pushl %esi pushl %edi movl 12(%esp),%esi movl 16(%esp),%edi jmp docopy +END_STRONG(bcopy) /* * memmove(caddr_t dst, caddr_t src, size_t len); * Copy len bytes, coping with overlapping space. 
*/ ENTRY(memmove) + RETGUARD_START pushl %esi pushl %edi movl 12(%esp),%edi @@ -66,6 +69,7 @@ docopy: * memcpy() doesn't worry about overlap and always copies forward */ // ENTRY(memcpy) + RETGUARD_START pushl %esi pushl %edi movl 12(%esp),%edi @@ -82,6 +86,7 @@ docopyf: movsb popl %edi popl %esi + RETGUARD_END ret _ALIGN_TEXT @@ -103,6 +108,6 @@ docopyf: popl %edi popl %esi cld + RETGUARD_END ret END_STRONG(memmove) -END_WEAK(bcopy) Index: lib/libc/arch/i386/string/memset.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/string/memset.S,v retrieving revision 1.5 diff -u -p -u -r1.5 memset.S --- lib/libc/arch/i386/string/memset.S 31 Aug 2015 02:53:56 -0000 1.5 +++ lib/libc/arch/i386/string/memset.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,7 @@ #include "SYS.h" ENTRY(memset) + RETGUARD_START pushl %edi pushl %ebx movl 12(%esp),%edi @@ -52,5 +53,6 @@ L1: rep popl %eax /* pop address of buffer */ popl %ebx popl %edi + RETGUARD_END ret END_STRONG(memset) Index: lib/libc/arch/i386/string/strcat.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/string/strcat.S,v retrieving revision 1.9 diff -u -p -u -r1.9 strcat.S --- lib/libc/arch/i386/string/strcat.S 31 Aug 2015 02:53:56 -0000 1.9 +++ lib/libc/arch/i386/string/strcat.S 18 Aug 2017 02:28:21 -0000 @@ -20,6 +20,7 @@ */ ENTRY(strcat) + RETGUARD_START pushl %edi /* save edi */ movl 8(%esp),%edi /* dst address */ movl 12(%esp),%edx /* src address */ @@ -70,5 +71,6 @@ L1: movb (%edx),%al /* unroll loop, but jnz L1 L2: popl %eax /* pop destination address */ popl %edi /* restore edi */ + RETGUARD_END ret END(strcat) Index: lib/libc/arch/i386/string/strchr.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/string/strchr.S,v retrieving revision 1.7 diff -u -p -u -r1.7 strchr.S --- lib/libc/arch/i386/string/strchr.S 31 Aug 2015 02:53:56 -0000 1.7 +++ lib/libc/arch/i386/string/strchr.S 18 Aug 2017 02:28:21 -0000 @@ -9,6 +9,7 @@ WEAK_ALIAS(index, strchr) ENTRY(strchr) + RETGUARD_START movl 4(%esp),%eax movb 8(%esp),%cl .align 2,0x90 @@ -21,5 +22,6 @@ L1: jnz L1 xorl %eax,%eax L2: + RETGUARD_END ret END_STRONG(strchr) Index: lib/libc/arch/i386/string/strcmp.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/string/strcmp.S,v retrieving revision 1.4 diff -u -p -u -r1.4 strcmp.S --- lib/libc/arch/i386/string/strcmp.S 31 Aug 2015 02:53:56 -0000 1.4 +++ lib/libc/arch/i386/string/strcmp.S 18 Aug 2017 02:28:21 -0000 @@ -13,6 +13,7 @@ */ ENTRY(strcmp) + RETGUARD_START movl 0x04(%esp),%eax movl 0x08(%esp),%edx jmp L2 /* Jump into the loop! 
*/ @@ -78,5 +79,6 @@ L2: movb (%eax),%cl L3: movzbl (%eax),%eax /* unsigned comparison */ movzbl (%edx),%edx subl %edx,%eax + RETGUARD_END ret END_STRONG(strcmp) Index: lib/libc/arch/i386/string/strcpy.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/string/strcpy.S,v retrieving revision 1.9 diff -u -p -u -r1.9 strcpy.S --- lib/libc/arch/i386/string/strcpy.S 31 Aug 2015 02:53:56 -0000 1.9 +++ lib/libc/arch/i386/string/strcpy.S 18 Aug 2017 02:28:21 -0000 @@ -20,6 +20,7 @@ */ ENTRY(strcpy) + RETGUARD_START movl 4(%esp),%ecx /* dst address */ movl 8(%esp),%edx /* src address */ pushl %ecx /* push dst address */ @@ -60,5 +61,6 @@ L1: movb (%edx),%al /* unroll loop, but testb %al,%al jnz L1 L2: popl %eax /* pop dst address */ + RETGUARD_END ret END(strcpy) Index: lib/libc/arch/i386/string/strncmp.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/string/strncmp.S,v retrieving revision 1.5 diff -u -p -u -r1.5 strncmp.S --- lib/libc/arch/i386/string/strncmp.S 19 Aug 2017 18:25:50 -0000 1.5 +++ lib/libc/arch/i386/string/strncmp.S 19 Aug 2017 18:29:07 -0000 @@ -13,6 +13,7 @@ */ ENTRY(strncmp) + RETGUARD_END pushl %ebx movl 8(%esp),%eax movl 12(%esp),%ecx @@ -106,9 +107,11 @@ L3: movzbl (%eax),%eax /* unsigned comp movzbl (%ecx),%ecx subl %ecx,%eax popl %ebx + RETGUARD_END ret .align 2,0xcc L4: xorl %eax,%eax popl %ebx + RETGUARD_END ret END_STRONG(strncmp) Index: lib/libc/arch/i386/string/strrchr.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/string/strrchr.S,v retrieving revision 1.7 diff -u -p -u -r1.7 strrchr.S --- lib/libc/arch/i386/string/strrchr.S 31 Aug 2015 02:53:56 -0000 1.7 +++ lib/libc/arch/i386/string/strrchr.S 18 Aug 2017 02:28:21 -0000 @@ -9,6 +9,7 @@ WEAK_ALIAS(rindex, strrchr) ENTRY(strrchr) + RETGUARD_START pushl %ebx movl 8(%esp),%edx movb 12(%esp),%cl @@ -24,5 +25,6 @@ L2: testb %bl,%bl /* null terminator??? 
*/ jnz L1 popl %ebx + RETGUARD_END ret END_STRONG(strrchr) Index: lib/libc/arch/i386/sys/brk.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/sys/brk.S,v retrieving revision 1.13 diff -u -p -u -r1.13 brk.S --- lib/libc/arch/i386/sys/brk.S 19 Aug 2017 18:24:06 -0000 1.13 +++ lib/libc/arch/i386/sys/brk.S 19 Aug 2017 18:29:07 -0000 @@ -39,11 +39,12 @@ .data __minbrk: .long _end - END(__minbrk) + _ASM_SIZE(__minbrk) .type __minbrk,@object .weak brk ENTRY(brk) + RETGUARD_START #ifdef __PIC__ movl 4(%esp),%ecx PIC_PROLOGUE @@ -77,8 +78,10 @@ ENTRY(brk) xorl %eax,%eax movl %ecx,__curbrk #endif + RETGUARD_END ret 2: SET_ERRNO() + RETGUARD_END ret END(brk) Index: lib/libc/arch/i386/sys/sbrk.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/sys/sbrk.S,v retrieving revision 1.13 diff -u -p -u -r1.13 sbrk.S --- lib/libc/arch/i386/sys/sbrk.S 19 Aug 2017 18:24:06 -0000 1.13 +++ lib/libc/arch/i386/sys/sbrk.S 19 Aug 2017 18:29:07 -0000 @@ -39,11 +39,12 @@ .data __curbrk: .long _end - END(__curbrk) + _ASM_SIZE(__curbrk) .type __curbrk,@object .weak sbrk ENTRY(sbrk) + RETGUARD_START #ifdef __PIC__ movl 4(%esp),%ecx PIC_PROLOGUE @@ -71,8 +72,10 @@ ENTRY(sbrk) movl __curbrk,%eax addl %ecx,__curbrk #endif + RETGUARD_END ret 2: SET_ERRNO() + RETGUARD_END ret END(sbrk) Index: lib/libc/arch/i386/sys/sigpending.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/sys/sigpending.S,v retrieving revision 1.5 diff -u -p -u -r1.5 sigpending.S --- lib/libc/arch/i386/sys/sigpending.S 5 Sep 2015 06:22:47 -0000 1.5 +++ lib/libc/arch/i386/sys/sigpending.S 18 Aug 2017 02:28:21 -0000 @@ -37,5 +37,6 @@ SYSCALL(sigpending) movl 4(%esp),%ecx # fetch pointer to... 
movl %eax,(%ecx) # store old mask xorl %eax,%eax + RETGUARD_END ret SYSCALL_END(sigpending) Index: lib/libc/arch/i386/sys/sigprocmask.S =================================================================== RCS file: /cvs/src/lib/libc/arch/i386/sys/sigprocmask.S,v retrieving revision 1.12 diff -u -p -u -r1.12 sigprocmask.S --- lib/libc/arch/i386/sys/sigprocmask.S 7 May 2016 19:05:21 -0000 1.12 +++ lib/libc/arch/i386/sys/sigprocmask.S 18 Aug 2017 02:28:21 -0000 @@ -34,6 +34,7 @@ #include "SYS.h" SYSENTRY_HIDDEN(sigprocmask) +// RETGUARD_START movl 8(%esp),%ecx # fetch new sigset pointer testl %ecx,%ecx # check new sigset pointer jnz 1f # if not null, indirect @@ -51,8 +52,10 @@ SYSENTRY_HIDDEN(sigprocmask) movl %eax,(%ecx) # store old mask out: xorl %eax,%eax +// RETGUARD_END ret 1: SET_ERRNO() +// RETGUARD_END ret SYSCALL_END_HIDDEN(sigprocmask) Index: lib/libm/arch/amd64/abi.h =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/abi.h,v retrieving revision 1.5 diff -u -p -u -r1.5 abi.h --- lib/libm/arch/amd64/abi.h 12 Sep 2016 19:47:01 -0000 1.5 +++ lib/libm/arch/amd64/abi.h 18 Aug 2017 18:02:43 -0000 @@ -64,5 +64,5 @@ * END_STD(x) Like DEF_STD() in C; for standard/reserved C names * END_NONSTD(x) Like DEF_NONSTD() in C; for non-ISO C names */ -#define END_STD(x) END(x); _HIDDEN_FALIAS(x,x); END(_HIDDEN(x)) +#define END_STD(x) END(x); _HIDDEN_FALIAS(x,x); _ASM_SIZE(_HIDDEN(x)) #define END_NONSTD(x) END_STD(x); .weak x Index: lib/libm/arch/amd64/e_acos.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/e_acos.S,v retrieving revision 1.5 diff -u -p -u -r1.5 e_acos.S --- lib/libm/arch/amd64/e_acos.S 12 Sep 2016 19:47:01 -0000 1.5 +++ lib/libm/arch/amd64/e_acos.S 18 Aug 2017 02:28:21 -0000 @@ -10,6 +10,7 @@ /* acos = atan (sqrt(1 - x^2) / x) */ ENTRY(acos) + RETGUARD_START XMM_ONE_ARG_DOUBLE_PROLOGUE fldl ARG_DOUBLE_ONE /* x */ fld %st(0) @@ -20,5 +21,6 @@ ENTRY(acos) fxch %st(1) fpatan XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END(acos) Index: lib/libm/arch/amd64/e_asin.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/e_asin.S,v retrieving revision 1.4 diff -u -p -u -r1.4 e_asin.S --- lib/libm/arch/amd64/e_asin.S 12 Sep 2016 19:47:01 -0000 1.4 +++ lib/libm/arch/amd64/e_asin.S 18 Aug 2017 02:28:21 -0000 @@ -10,6 +10,7 @@ /* asin = atan (x / sqrt(1 - x^2)) */ ENTRY(asin) + RETGUARD_START XMM_ONE_ARG_DOUBLE_PROLOGUE fldl ARG_DOUBLE_ONE /* x */ fld %st(0) @@ -19,5 +20,6 @@ ENTRY(asin) fsqrt /* sqrt (1 - x^2) */ fpatan XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END_STD(asin) Index: lib/libm/arch/amd64/e_atan2.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/e_atan2.S,v retrieving revision 1.4 diff -u -p -u -r1.4 e_atan2.S --- lib/libm/arch/amd64/e_atan2.S 12 Sep 2016 19:47:01 -0000 1.4 +++ lib/libm/arch/amd64/e_atan2.S 18 Aug 2017 02:28:21 -0000 @@ -9,10 +9,12 @@ #include "abi.h" ENTRY(atan2) + RETGUARD_START XMM_TWO_ARG_DOUBLE_PROLOGUE fldl ARG_DOUBLE_ONE fldl ARG_DOUBLE_TWO fpatan XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END_STD(atan2) Index: lib/libm/arch/amd64/e_atan2f.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/e_atan2f.S,v retrieving revision 1.4 diff -u -p -u -r1.4 e_atan2f.S --- lib/libm/arch/amd64/e_atan2f.S 12 Sep 2016 19:47:01 -0000 1.4 +++ lib/libm/arch/amd64/e_atan2f.S 18 Aug 2017 02:28:21 -0000 @@ -1,4 
+1,4 @@ -/* $OpenBSD: e_atan2f.S,v 1.4 2016/09/12 19:47:01 guenther Exp $ */ +/* $OpenBSD: e_atan2f.S,v 1.4 2016/09/12 19:47:01 guenther Exp $ */ /* * Written by J.T. Conklin <jtc@NetBSD.org>. * Public domain. @@ -9,10 +9,12 @@ #include "abi.h" ENTRY(atan2f) + RETGUARD_START XMM_TWO_ARG_FLOAT_PROLOGUE flds ARG_FLOAT_ONE flds ARG_FLOAT_TWO fpatan XMM_FLOAT_EPILOGUE + RETGUARD_END ret END_STD(atan2f) Index: lib/libm/arch/amd64/e_exp.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/e_exp.S,v retrieving revision 1.6 diff -u -p -u -r1.6 e_exp.S --- lib/libm/arch/amd64/e_exp.S 12 Sep 2016 19:47:01 -0000 1.6 +++ lib/libm/arch/amd64/e_exp.S 18 Aug 2017 02:28:21 -0000 @@ -42,6 +42,7 @@ /* e^x = 2^(x * log2(e)) */ ENTRY(exp) + RETGUARD_START XMM_ONE_ARG_DOUBLE_PROLOGUE /* * If x is +-Inf, then the subtraction would give Inf-Inf = NaN. @@ -82,6 +83,7 @@ ENTRY(exp) fldcw -8(%rsp) 1: XMM_DOUBLE_EPILOGUE + RETGUARD_END ret x_Inf_or_NaN: @@ -94,9 +96,11 @@ x_Inf_or_NaN: cmpl $0,-8(%rsp) jne x_not_minus_Inf xorpd %xmm0,%xmm0 + RETGUARD_END ret x_not_minus_Inf: movsd ARG_DOUBLE_ONE,%xmm0 + RETGUARD_END ret END_STD(exp) Index: lib/libm/arch/amd64/e_fmod.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/e_fmod.S,v retrieving revision 1.4 diff -u -p -u -r1.4 e_fmod.S --- lib/libm/arch/amd64/e_fmod.S 12 Sep 2016 19:47:01 -0000 1.4 +++ lib/libm/arch/amd64/e_fmod.S 18 Aug 2017 02:28:21 -0000 @@ -10,6 +10,7 @@ ENTRY(fmod) + RETGUARD_START XMM_TWO_ARG_DOUBLE_PROLOGUE fldl ARG_DOUBLE_TWO fldl ARG_DOUBLE_ONE @@ -19,5 +20,6 @@ ENTRY(fmod) jc 1b fstp %st(1) XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END(fmod) Index: lib/libm/arch/amd64/e_log.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/e_log.S,v retrieving revision 1.4 diff -u -p -u -r1.4 e_log.S --- lib/libm/arch/amd64/e_log.S 12 Sep 2016 19:47:01 -0000 1.4 +++ lib/libm/arch/amd64/e_log.S 18 Aug 2017 02:28:21 -0000 @@ -9,10 +9,12 @@ #include "abi.h" ENTRY(log) + RETGUARD_START XMM_ONE_ARG_DOUBLE_PROLOGUE fldln2 fldl ARG_DOUBLE_ONE fyl2x XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END_STD(log) Index: lib/libm/arch/amd64/e_log10.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/e_log10.S,v retrieving revision 1.4 diff -u -p -u -r1.4 e_log10.S --- lib/libm/arch/amd64/e_log10.S 12 Sep 2016 19:47:01 -0000 1.4 +++ lib/libm/arch/amd64/e_log10.S 18 Aug 2017 02:28:21 -0000 @@ -9,10 +9,12 @@ #include "abi.h" ENTRY(log10) + RETGUARD_START XMM_ONE_ARG_DOUBLE_PROLOGUE fldlg2 fldl ARG_DOUBLE_ONE fyl2x XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END(log10) Index: lib/libm/arch/amd64/e_remainder.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/e_remainder.S,v retrieving revision 1.4 diff -u -p -u -r1.4 e_remainder.S --- lib/libm/arch/amd64/e_remainder.S 12 Sep 2016 19:47:01 -0000 1.4 +++ lib/libm/arch/amd64/e_remainder.S 18 Aug 2017 02:28:21 -0000 @@ -9,6 +9,7 @@ #include "abi.h" ENTRY(remainder) + RETGUARD_START XMM_TWO_ARG_DOUBLE_PROLOGUE fldl ARG_DOUBLE_TWO fldl ARG_DOUBLE_ONE @@ -18,5 +19,6 @@ ENTRY(remainder) jc 1b fstp %st(1) XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END_STD(remainder) Index: lib/libm/arch/amd64/e_remainderf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/e_remainderf.S,v retrieving revision 1.4 diff -u -p -u -r1.4 
e_remainderf.S --- lib/libm/arch/amd64/e_remainderf.S 12 Sep 2016 19:47:01 -0000 1.4 +++ lib/libm/arch/amd64/e_remainderf.S 18 Aug 2017 02:28:21 -0000 @@ -9,6 +9,7 @@ #include "abi.h" ENTRY(remainderf) + RETGUARD_START XMM_TWO_ARG_FLOAT_PROLOGUE flds ARG_FLOAT_TWO flds ARG_FLOAT_ONE @@ -18,5 +19,6 @@ ENTRY(remainderf) jc 1b fstp %st(1) XMM_FLOAT_EPILOGUE + RETGUARD_END ret END_STD(remainderf) Index: lib/libm/arch/amd64/e_scalb.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/e_scalb.S,v retrieving revision 1.4 diff -u -p -u -r1.4 e_scalb.S --- lib/libm/arch/amd64/e_scalb.S 12 Sep 2016 19:47:01 -0000 1.4 +++ lib/libm/arch/amd64/e_scalb.S 18 Aug 2017 02:28:21 -0000 @@ -9,11 +9,13 @@ #include "abi.h" ENTRY(scalb) + RETGUARD_START XMM_TWO_ARG_DOUBLE_PROLOGUE fldl ARG_DOUBLE_TWO fldl ARG_DOUBLE_ONE fscale fstp %st(1) /* bug fix for fp stack overflow */ XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END_NONSTD(scalb) Index: lib/libm/arch/amd64/e_sqrt.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/e_sqrt.S,v retrieving revision 1.5 diff -u -p -u -r1.5 e_sqrt.S --- lib/libm/arch/amd64/e_sqrt.S 12 Sep 2016 19:47:01 -0000 1.5 +++ lib/libm/arch/amd64/e_sqrt.S 18 Aug 2017 02:28:21 -0000 @@ -8,6 +8,8 @@ #include "abi.h" ENTRY(sqrt) + RETGUARD_START sqrtsd %xmm0,%xmm0 + RETGUARD_END ret END_STD(sqrt) Index: lib/libm/arch/amd64/e_sqrtf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/e_sqrtf.S,v retrieving revision 1.5 diff -u -p -u -r1.5 e_sqrtf.S --- lib/libm/arch/amd64/e_sqrtf.S 12 Sep 2016 19:47:01 -0000 1.5 +++ lib/libm/arch/amd64/e_sqrtf.S 18 Aug 2017 02:28:21 -0000 @@ -8,6 +8,8 @@ #include "abi.h" ENTRY(sqrtf) + RETGUARD_START sqrtss %xmm0,%xmm0 + RETGUARD_END ret END_STD(sqrtf) Index: lib/libm/arch/amd64/e_sqrtl.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/e_sqrtl.S,v retrieving revision 1.2 diff -u -p -u -r1.2 e_sqrtl.S --- lib/libm/arch/amd64/e_sqrtl.S 12 Sep 2016 19:47:01 -0000 1.2 +++ lib/libm/arch/amd64/e_sqrtl.S 18 Aug 2017 02:28:21 -0000 @@ -8,7 +8,9 @@ #include "abi.h" ENTRY(sqrtl) + RETGUARD_START fldt 8(%rsp) fsqrt + RETGUARD_END ret END_STD(sqrtl) Index: lib/libm/arch/amd64/s_atan.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_atan.S,v retrieving revision 1.3 diff -u -p -u -r1.3 s_atan.S --- lib/libm/arch/amd64/s_atan.S 12 Sep 2016 19:47:01 -0000 1.3 +++ lib/libm/arch/amd64/s_atan.S 18 Aug 2017 02:28:21 -0000 @@ -9,10 +9,12 @@ #include "abi.h" ENTRY(atan) + RETGUARD_START XMM_ONE_ARG_DOUBLE_PROLOGUE fldl ARG_DOUBLE_ONE fld1 fpatan XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END(atan) Index: lib/libm/arch/amd64/s_atanf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_atanf.S,v retrieving revision 1.3 diff -u -p -u -r1.3 s_atanf.S --- lib/libm/arch/amd64/s_atanf.S 12 Sep 2016 19:47:01 -0000 1.3 +++ lib/libm/arch/amd64/s_atanf.S 18 Aug 2017 02:28:21 -0000 @@ -9,10 +9,12 @@ #include "abi.h" ENTRY(atanf) + RETGUARD_START XMM_ONE_ARG_FLOAT_PROLOGUE flds ARG_FLOAT_ONE fld1 fpatan XMM_FLOAT_EPILOGUE + RETGUARD_END ret END_STD(atanf) Index: lib/libm/arch/amd64/s_ceil.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_ceil.S,v retrieving revision 1.4 diff -u -p -u -r1.4 s_ceil.S --- 
lib/libm/arch/amd64/s_ceil.S 12 Sep 2016 19:47:01 -0000 1.4 +++ lib/libm/arch/amd64/s_ceil.S 18 Aug 2017 02:28:21 -0000 @@ -9,6 +9,7 @@ #include "abi.h" ENTRY(ceil) + RETGUARD_START fstcw -12(%rsp) movw -12(%rsp),%dx orw $0x0800,%dx @@ -21,5 +22,6 @@ ENTRY(ceil) fldcw -12(%rsp) fstpl -8(%rsp) movsd -8(%rsp),%xmm0 + RETGUARD_END ret END_STD(ceil) Index: lib/libm/arch/amd64/s_ceilf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_ceilf.S,v retrieving revision 1.5 diff -u -p -u -r1.5 s_ceilf.S --- lib/libm/arch/amd64/s_ceilf.S 12 Sep 2016 19:47:01 -0000 1.5 +++ lib/libm/arch/amd64/s_ceilf.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,7 @@ #include <machine/asm.h> ENTRY(ceilf) + RETGUARD_START fstcw -8(%rsp) movw -8(%rsp),%dx orw $0x0800,%dx @@ -19,5 +20,6 @@ ENTRY(ceilf) fldcw -8(%rsp) fstps -4(%rsp) movss -4(%rsp),%xmm0 + RETGUARD_END ret END(ceilf) Index: lib/libm/arch/amd64/s_copysign.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_copysign.S,v retrieving revision 1.6 diff -u -p -u -r1.6 s_copysign.S --- lib/libm/arch/amd64/s_copysign.S 22 Dec 2016 16:11:26 -0000 1.6 +++ lib/libm/arch/amd64/s_copysign.S 18 Aug 2017 02:28:21 -0000 @@ -14,10 +14,12 @@ .quad 0x7fffffffffffffff ENTRY(copysign) + RETGUARD_START movq .Lpos(%rip),%xmm2 movq .Lneg(%rip),%xmm3 pand %xmm2,%xmm1 pand %xmm3,%xmm0 por %xmm1,%xmm0 + RETGUARD_END ret END_STD(copysign) Index: lib/libm/arch/amd64/s_copysignf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_copysignf.S,v retrieving revision 1.6 diff -u -p -u -r1.6 s_copysignf.S --- lib/libm/arch/amd64/s_copysignf.S 22 Dec 2016 16:11:26 -0000 1.6 +++ lib/libm/arch/amd64/s_copysignf.S 18 Aug 2017 02:28:21 -0000 @@ -14,10 +14,12 @@ .long 0x80000000 ENTRY(copysignf) + RETGUARD_START movss .Lpos(%rip),%xmm2 movss .Lneg(%rip),%xmm3 pand %xmm2,%xmm1 pand %xmm3,%xmm0 por %xmm1,%xmm0 + RETGUARD_END ret END_STD(copysignf) Index: lib/libm/arch/amd64/s_cos.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_cos.S,v retrieving revision 1.3 diff -u -p -u -r1.3 s_cos.S --- lib/libm/arch/amd64/s_cos.S 12 Sep 2016 19:47:01 -0000 1.3 +++ lib/libm/arch/amd64/s_cos.S 18 Aug 2017 02:28:21 -0000 @@ -9,6 +9,7 @@ #include "abi.h" ENTRY(cos) + RETGUARD_START XMM_ONE_ARG_DOUBLE_PROLOGUE fldl ARG_DOUBLE_ONE fcos @@ -16,6 +17,7 @@ ENTRY(cos) andw $0x400,%ax jnz 1f XMM_DOUBLE_EPILOGUE + RETGUARD_END ret 1: fldpi fadd %st(0) @@ -27,5 +29,6 @@ ENTRY(cos) fstp %st(1) fcos XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END_STD(cos) Index: lib/libm/arch/amd64/s_cosf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_cosf.S,v retrieving revision 1.3 diff -u -p -u -r1.3 s_cosf.S --- lib/libm/arch/amd64/s_cosf.S 12 Sep 2016 19:47:01 -0000 1.3 +++ lib/libm/arch/amd64/s_cosf.S 18 Aug 2017 02:28:21 -0000 @@ -10,9 +10,11 @@ /* A float's domain isn't large enough to require argument reduction. 
*/ ENTRY(cosf) + RETGUARD_START XMM_ONE_ARG_FLOAT_PROLOGUE flds ARG_FLOAT_ONE fcos XMM_FLOAT_EPILOGUE + RETGUARD_END ret END_STD(cosf) Index: lib/libm/arch/amd64/s_floor.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_floor.S,v retrieving revision 1.4 diff -u -p -u -r1.4 s_floor.S --- lib/libm/arch/amd64/s_floor.S 12 Sep 2016 19:47:01 -0000 1.4 +++ lib/libm/arch/amd64/s_floor.S 18 Aug 2017 02:28:21 -0000 @@ -8,6 +8,7 @@ #include "abi.h" ENTRY(floor) + RETGUARD_START movsd %xmm0, -8(%rsp) fstcw -12(%rsp) movw -12(%rsp),%dx @@ -20,5 +21,6 @@ ENTRY(floor) fldcw -12(%rsp) fstpl -8(%rsp) movsd -8(%rsp),%xmm0 + RETGUARD_END ret END_STD(floor) Index: lib/libm/arch/amd64/s_floorf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_floorf.S,v retrieving revision 1.5 diff -u -p -u -r1.5 s_floorf.S --- lib/libm/arch/amd64/s_floorf.S 12 Sep 2016 19:47:01 -0000 1.5 +++ lib/libm/arch/amd64/s_floorf.S 18 Aug 2017 02:28:21 -0000 @@ -8,6 +8,7 @@ #include "abi.h" ENTRY(floorf) + RETGUARD_START movss %xmm0, -4(%rsp) fstcw -8(%rsp) movw -8(%rsp),%dx @@ -20,5 +21,6 @@ ENTRY(floorf) fldcw -8(%rsp) fstps -4(%rsp) movss -4(%rsp),%xmm0 + RETGUARD_END ret END_STD(floorf) Index: lib/libm/arch/amd64/s_ilogb.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_ilogb.S,v retrieving revision 1.4 diff -u -p -u -r1.4 s_ilogb.S --- lib/libm/arch/amd64/s_ilogb.S 12 Sep 2016 19:47:01 -0000 1.4 +++ lib/libm/arch/amd64/s_ilogb.S 18 Aug 2017 02:28:21 -0000 @@ -8,11 +8,13 @@ #include "abi.h" ENTRY(ilogb) + RETGUARD_START movsd %xmm0,-8(%rsp) fldl -8(%rsp) fxtract fstp %st fistpl -8(%rsp) movl -8(%rsp),%eax + RETGUARD_END ret END_STD(ilogb) Index: lib/libm/arch/amd64/s_ilogbf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_ilogbf.S,v retrieving revision 1.4 diff -u -p -u -r1.4 s_ilogbf.S --- lib/libm/arch/amd64/s_ilogbf.S 12 Sep 2016 19:47:01 -0000 1.4 +++ lib/libm/arch/amd64/s_ilogbf.S 18 Aug 2017 02:28:21 -0000 @@ -8,11 +8,13 @@ #include "abi.h" ENTRY(ilogbf) + RETGUARD_START movss %xmm0,-4(%rsp) flds -4(%rsp) fxtract fstp %st fistpl -4(%rsp) movl -4(%rsp),%eax + RETGUARD_END ret END_STD(ilogbf) Index: lib/libm/arch/amd64/s_llrint.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_llrint.S,v retrieving revision 1.2 diff -u -p -u -r1.2 s_llrint.S --- lib/libm/arch/amd64/s_llrint.S 12 Sep 2016 19:47:01 -0000 1.2 +++ lib/libm/arch/amd64/s_llrint.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,8 @@ #include <machine/asm.h> ENTRY(llrint) + RETGUARD_START cvtsd2si %xmm0, %rax + RETGUARD_END ret END(llrint) Index: lib/libm/arch/amd64/s_llrintf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_llrintf.S,v retrieving revision 1.2 diff -u -p -u -r1.2 s_llrintf.S --- lib/libm/arch/amd64/s_llrintf.S 12 Sep 2016 19:47:01 -0000 1.2 +++ lib/libm/arch/amd64/s_llrintf.S 18 Aug 2017 02:28:21 -0000 @@ -8,6 +8,8 @@ #include "abi.h" ENTRY(llrintf) + RETGUARD_START cvtss2si %xmm0, %rax + RETGUARD_END ret END_STD(llrintf) Index: lib/libm/arch/amd64/s_log1p.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_log1p.S,v retrieving revision 1.5 diff -u -p -u -r1.5 s_log1p.S --- lib/libm/arch/amd64/s_log1p.S 19 Aug 2017 18:27:19 -0000 1.5 +++ 
lib/libm/arch/amd64/s_log1p.S 19 Aug 2017 18:29:09 -0000 @@ -40,6 +40,7 @@ */ ENTRY(log1p) + RETGUARD_START XMM_ONE_ARG_DOUBLE_PROLOGUE fldl ARG_DOUBLE_ONE fabs @@ -62,6 +63,7 @@ use_fyl2x: faddp fyl2x XMM_DOUBLE_EPILOGUE + RETGUARD_END ret .align 4,0xcc @@ -70,5 +72,6 @@ use_fyl2xp1: fldl ARG_DOUBLE_ONE fyl2xp1 XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END_STD(log1p) Index: lib/libm/arch/amd64/s_log1pf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_log1pf.S,v retrieving revision 1.5 diff -u -p -u -r1.5 s_log1pf.S --- lib/libm/arch/amd64/s_log1pf.S 19 Aug 2017 18:27:19 -0000 1.5 +++ lib/libm/arch/amd64/s_log1pf.S 19 Aug 2017 18:29:09 -0000 @@ -40,6 +40,7 @@ */ ENTRY(log1pf) + RETGUARD_START XMM_ONE_ARG_FLOAT_PROLOGUE flds ARG_FLOAT_ONE fabs @@ -62,6 +63,7 @@ use_fyl2x: faddp fyl2x XMM_FLOAT_EPILOGUE + RETGUARD_END ret .align 4,0xcc @@ -70,5 +72,6 @@ use_fyl2xp1: flds ARG_FLOAT_ONE fyl2xp1 XMM_FLOAT_EPILOGUE + RETGUARD_END ret END_STD(log1pf) Index: lib/libm/arch/amd64/s_logb.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_logb.S,v retrieving revision 1.3 diff -u -p -u -r1.3 s_logb.S --- lib/libm/arch/amd64/s_logb.S 12 Sep 2016 19:47:01 -0000 1.3 +++ lib/libm/arch/amd64/s_logb.S 18 Aug 2017 02:28:21 -0000 @@ -9,10 +9,12 @@ #include "abi.h" ENTRY(logb) + RETGUARD_START XMM_ONE_ARG_DOUBLE_PROLOGUE fldl ARG_DOUBLE_ONE fxtract fstp %st XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END_STD(logb) Index: lib/libm/arch/amd64/s_logbf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_logbf.S,v retrieving revision 1.3 diff -u -p -u -r1.3 s_logbf.S --- lib/libm/arch/amd64/s_logbf.S 12 Sep 2016 19:47:01 -0000 1.3 +++ lib/libm/arch/amd64/s_logbf.S 18 Aug 2017 02:28:21 -0000 @@ -9,10 +9,12 @@ #include "abi.h" ENTRY(logbf) + RETGUARD_START XMM_ONE_ARG_FLOAT_PROLOGUE flds ARG_FLOAT_ONE fxtract fstp %st XMM_FLOAT_EPILOGUE + RETGUARD_END ret END_STD(logbf) Index: lib/libm/arch/amd64/s_lrint.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_lrint.S,v retrieving revision 1.2 diff -u -p -u -r1.2 s_lrint.S --- lib/libm/arch/amd64/s_lrint.S 12 Sep 2016 19:47:01 -0000 1.2 +++ lib/libm/arch/amd64/s_lrint.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,8 @@ #include <machine/asm.h> ENTRY(lrint) + RETGUARD_START cvtsd2si %xmm0, %rax + RETGUARD_END ret END(lrint) Index: lib/libm/arch/amd64/s_lrintf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_lrintf.S,v retrieving revision 1.2 diff -u -p -u -r1.2 s_lrintf.S --- lib/libm/arch/amd64/s_lrintf.S 12 Sep 2016 19:47:01 -0000 1.2 +++ lib/libm/arch/amd64/s_lrintf.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,8 @@ #include <machine/asm.h> ENTRY(lrintf) + RETGUARD_START cvtss2si %xmm0, %rax + RETGUARD_END ret END(lrintf) Index: lib/libm/arch/amd64/s_rint.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_rint.S,v retrieving revision 1.3 diff -u -p -u -r1.3 s_rint.S --- lib/libm/arch/amd64/s_rint.S 12 Sep 2016 19:47:01 -0000 1.3 +++ lib/libm/arch/amd64/s_rint.S 18 Aug 2017 02:28:21 -0000 @@ -9,9 +9,11 @@ #include "abi.h" ENTRY(rint) + RETGUARD_START XMM_ONE_ARG_DOUBLE_PROLOGUE fldl ARG_DOUBLE_ONE frndint XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END_STD(rint) Index: lib/libm/arch/amd64/s_rintf.S 
=================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_rintf.S,v retrieving revision 1.3 diff -u -p -u -r1.3 s_rintf.S --- lib/libm/arch/amd64/s_rintf.S 12 Sep 2016 19:47:01 -0000 1.3 +++ lib/libm/arch/amd64/s_rintf.S 18 Aug 2017 02:28:21 -0000 @@ -9,9 +9,11 @@ #include "abi.h" ENTRY(rintf) + RETGUARD_START XMM_ONE_ARG_FLOAT_PROLOGUE flds ARG_FLOAT_ONE frndint XMM_FLOAT_EPILOGUE + RETGUARD_END ret END_STD(rintf) Index: lib/libm/arch/amd64/s_scalbnf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_scalbnf.S,v retrieving revision 1.5 diff -u -p -u -r1.5 s_scalbnf.S --- lib/libm/arch/amd64/s_scalbnf.S 12 Sep 2016 19:47:01 -0000 1.5 +++ lib/libm/arch/amd64/s_scalbnf.S 18 Aug 2017 02:28:21 -0000 @@ -12,6 +12,7 @@ ldexpf = scalbnf ENTRY(scalbnf) + RETGUARD_START movss %xmm0,-8(%rsp) movl %edi,-4(%rsp) fildl -4(%rsp) @@ -20,5 +21,6 @@ ENTRY(scalbnf) fstp %st(1) /* bug fix for fp stack overflow */ fstps -8(%rsp) movss -8(%rsp),%xmm0 + RETGUARD_END ret END_STD(scalbnf) Index: lib/libm/arch/amd64/s_significand.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_significand.S,v retrieving revision 1.3 diff -u -p -u -r1.3 s_significand.S --- lib/libm/arch/amd64/s_significand.S 12 Sep 2016 19:47:01 -0000 1.3 +++ lib/libm/arch/amd64/s_significand.S 18 Aug 2017 02:28:21 -0000 @@ -9,10 +9,12 @@ #include "abi.h" ENTRY(significand) + RETGUARD_START XMM_ONE_ARG_DOUBLE_PROLOGUE fldl ARG_DOUBLE_ONE fxtract fstp %st(1) XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END(significand) Index: lib/libm/arch/amd64/s_significandf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_significandf.S,v retrieving revision 1.3 diff -u -p -u -r1.3 s_significandf.S --- lib/libm/arch/amd64/s_significandf.S 12 Sep 2016 19:47:01 -0000 1.3 +++ lib/libm/arch/amd64/s_significandf.S 18 Aug 2017 02:28:21 -0000 @@ -9,10 +9,12 @@ #include "abi.h" ENTRY(significandf) + RETGUARD_START XMM_ONE_ARG_FLOAT_PROLOGUE flds ARG_FLOAT_ONE fxtract fstp %st(1) XMM_FLOAT_EPILOGUE + RETGUARD_END ret END(significandf) Index: lib/libm/arch/amd64/s_sin.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_sin.S,v retrieving revision 1.3 diff -u -p -u -r1.3 s_sin.S --- lib/libm/arch/amd64/s_sin.S 12 Sep 2016 19:47:01 -0000 1.3 +++ lib/libm/arch/amd64/s_sin.S 18 Aug 2017 02:28:21 -0000 @@ -9,6 +9,7 @@ #include "abi.h" ENTRY(sin) + RETGUARD_START XMM_ONE_ARG_DOUBLE_PROLOGUE fldl ARG_DOUBLE_ONE fsin @@ -16,6 +17,7 @@ ENTRY(sin) andw $0x400,%ax jnz 1f XMM_DOUBLE_EPILOGUE + RETGUARD_END ret 1: fldpi fadd %st(0) @@ -27,5 +29,6 @@ ENTRY(sin) fstp %st(1) fsin XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END_STD(sin) Index: lib/libm/arch/amd64/s_sinf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_sinf.S,v retrieving revision 1.3 diff -u -p -u -r1.3 s_sinf.S --- lib/libm/arch/amd64/s_sinf.S 12 Sep 2016 19:47:01 -0000 1.3 +++ lib/libm/arch/amd64/s_sinf.S 18 Aug 2017 02:28:21 -0000 @@ -10,9 +10,11 @@ /* A float's domain isn't large enough to require argument reduction. 
*/ ENTRY(sinf) + RETGUARD_START XMM_ONE_ARG_FLOAT_PROLOGUE flds ARG_FLOAT_ONE fsin XMM_FLOAT_EPILOGUE + RETGUARD_END ret END_STD(sinf) Index: lib/libm/arch/amd64/s_tan.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_tan.S,v retrieving revision 1.3 diff -u -p -u -r1.3 s_tan.S --- lib/libm/arch/amd64/s_tan.S 12 Sep 2016 19:47:01 -0000 1.3 +++ lib/libm/arch/amd64/s_tan.S 18 Aug 2017 02:28:21 -0000 @@ -9,6 +9,7 @@ #include "abi.h" ENTRY(tan) + RETGUARD_START XMM_ONE_ARG_DOUBLE_PROLOGUE fldl ARG_DOUBLE_ONE fptan @@ -17,6 +18,7 @@ ENTRY(tan) jnz 1f fstp %st(0) XMM_DOUBLE_EPILOGUE + RETGUARD_END ret 1: fldpi fadd %st(0) @@ -29,5 +31,6 @@ ENTRY(tan) fptan fstp %st(0) XMM_DOUBLE_EPILOGUE + RETGUARD_END ret END(tan) Index: lib/libm/arch/amd64/s_tanf.S =================================================================== RCS file: /cvs/src/lib/libm/arch/amd64/s_tanf.S,v retrieving revision 1.3 diff -u -p -u -r1.3 s_tanf.S --- lib/libm/arch/amd64/s_tanf.S 12 Sep 2016 19:47:01 -0000 1.3 +++ lib/libm/arch/amd64/s_tanf.S 18 Aug 2017 02:28:21 -0000 @@ -10,10 +10,12 @@ /* A float's domain isn't large enough to require argument reduction. */ ENTRY(tanf) + RETGUARD_START XMM_ONE_ARG_FLOAT_PROLOGUE flds ARG_FLOAT_ONE fptan fstp %st(0) XMM_FLOAT_EPILOGUE + RETGUARD_END ret END(tanf) Index: lib/libm/arch/i387/DEFS.h =================================================================== RCS file: /cvs/src/lib/libm/arch/i387/DEFS.h,v retrieving revision 1.1 diff -u -p -u -r1.1 DEFS.h --- lib/libm/arch/i387/DEFS.h 12 Sep 2016 19:47:02 -0000 1.1 +++ lib/libm/arch/i387/DEFS.h 18 Aug 2017 18:02:51 -0000 @@ -25,5 +25,5 @@ * END_STD(x) Like DEF_STD() in C; for standard/reserved C names * END_NONSTD(x) Like DEF_NONSTD() in C; for non-ISO C names */ -#define END_STD(x) END(x); _HIDDEN_FALIAS(x,x); END(_HIDDEN(x)) +#define END_STD(x) END(x); _HIDDEN_FALIAS(x,x); _ASM_SIZE(_HIDDEN(x)) #define END_NONSTD(x) END_STD(x); .weak x Index: libexec/ld.so/amd64/ldasm.S =================================================================== RCS file: /cvs/src/libexec/ld.so/amd64/ldasm.S,v retrieving revision 1.27 diff -u -p -u -r1.27 ldasm.S --- libexec/ld.so/amd64/ldasm.S 15 Aug 2017 00:26:02 -0000 1.27 +++ libexec/ld.so/amd64/ldasm.S 18 Aug 2017 02:28:21 -0000 @@ -75,11 +75,15 @@ _dl_start: .type __CONCAT(_dl_,n), @function ;\ .align 16,0xcc ;\ __CONCAT(_dl_,n): ;\ + .cfi_startproc ;\ + RETGUARD_START ;\ movl $(__CONCAT(SYS_,c)), %eax ;\ movq %rcx, %r10 ;\ syscall ;\ jb 1f ;\ - ret + RETGUARD_END ;\ + ret ;\ + .cfi_endproc DL_SYSCALL(open) DL_SYSCALL(fstat) @@ -106,6 +110,7 @@ DL_SYSCALL(thrkill) 1: /* error: result = -errno; - handled here. 
*/ neg %rax + RETGUARD_END ret @@ -114,6 +119,8 @@ DL_SYSCALL(thrkill) .type _dl_bind_start,@function _dl_bind_start: .cfi_startproc + .cfi_escape 0x16, 0x10, 0x06, 0x09, 0xf8, 0x22, 0x12, 0x06, 0x27 + xorq %rsp,16(%rsp) # RETGUARD_START, sort of .cfi_adjust_cfa_offset 16 pushfq # save registers .cfi_adjust_cfa_offset 8 @@ -182,6 +189,7 @@ _dl_bind_start: .cfi_adjust_cfa_offset -8 /*.cfi_restore %rflags */ + xorq %rsp,16(%rsp) # RETGUARD, sort of leaq 8(%rsp),%rsp # Discard reloff, do not change eflags .cfi_adjust_cfa_offset -8 ret Index: libexec/ld.so/i386/ldasm.S =================================================================== RCS file: /cvs/src/libexec/ld.so/i386/ldasm.S,v retrieving revision 1.31 diff -u -p -u -r1.31 ldasm.S --- libexec/ld.so/i386/ldasm.S 16 Aug 2017 19:48:49 -0000 1.31 +++ libexec/ld.so/i386/ldasm.S 18 Aug 2017 02:28:21 -0000 @@ -92,9 +92,13 @@ _dl_start: .global __CONCAT(_dl_,n) ;\ .type __CONCAT(_dl_,n),@function ;\ __CONCAT(_dl_,n): ;\ + .cfi_startproc ;\ + RETGUARD_START ;\ __DO_SYSCALL(c) ;\ jb .L_cerr ;\ - ret + RETGUARD_END ;\ + ret ;\ + .cfi_endproc DL_SYSCALL(open) DL_SYSCALL(fstat) @@ -121,6 +125,7 @@ DL_SYSCALL(thrkill) .L_cerr: /* error: result = -errno; - handled here. */ neg %eax + RETGUARD_END ret .align 16,0xcc Index: sys/arch/amd64/amd64/acpi_wakecode.S =================================================================== RCS file: /cvs/src/sys/arch/amd64/amd64/acpi_wakecode.S,v retrieving revision 1.40 diff -u -p -u -r1.40 acpi_wakecode.S --- sys/arch/amd64/amd64/acpi_wakecode.S 28 Jun 2017 07:16:58 -0000 1.40 +++ sys/arch/amd64/amd64/acpi_wakecode.S 18 Aug 2017 02:28:21 -0000 @@ -393,6 +393,7 @@ _ACPI_TRMP_OFFSET(.Lhibernate_resume_vec /* Jump to the S3 resume vector */ ljmp $(_ACPI_RM_CODE_SEG), $.Lacpi_s3_vector_real +NEND(hibernate_resume_machdep) NENTRY(hibernate_drop_to_real_mode) .code64 @@ -431,10 +432,12 @@ _ACPI_TRMP_OFFSET(.Lhibernate_resume_vec _ACPI_TRMP_OFFSET(.Lhib_hlt_real) hlt ljmp $(_ACPI_RM_CODE_SEG), $.Lhib_hlt_real +NEND(hibernate_drop_to_real_mode) .code64 /* Switch to hibernate resume pagetable */ NENTRY(hibernate_activate_resume_pt_machdep) + RETGUARD_START /* Enable large pages */ movq %cr4, %rax orq $(CR4_PSE), %rax @@ -449,23 +452,31 @@ NENTRY(hibernate_activate_resume_pt_mach jmp 1f 1: nop + RETGUARD_END ret +NEND(hibernate_activate_resume_pt_machdep) /* * Switch to the private resume-time hibernate stack */ NENTRY(hibernate_switch_stack_machdep) + xorl $(HIBERNATE_STACK_PAGE + HIBERNATE_STACK_OFFSET),(%rsp) # RETGUARD movq (%rsp), %rax movq %rax, HIBERNATE_STACK_PAGE + HIBERNATE_STACK_OFFSET movq $(HIBERNATE_STACK_PAGE + HIBERNATE_STACK_OFFSET), %rax movq %rax, %rsp /* On our own stack from here onward */ + RETGUARD_END ret +NEND(hibernate_switch_stack_machdep) NENTRY(hibernate_flush) + RETGUARD_START invlpg HIBERNATE_INFLATE_PAGE + RETGUARD_END ret +NEND(hibernate_flush) #endif /* HIBERNATE */ /* @@ -662,6 +673,7 @@ _C_LABEL(acpi_tramp_data_end): .code64 NENTRY(acpi_savecpu) movq (%rsp), %rax + RETGUARD_START # 2nd instruction movq %rax, .Lacpi_saved_ret movq %rbx, .Lacpi_saved_rbx @@ -752,4 +764,6 @@ NENTRY(acpi_savecpu) str .Lacpi_saved_tr movl $1, %eax + RETGUARD_END ret +NEND(acpi_savecpu) Index: sys/arch/amd64/amd64/aes_intel.S =================================================================== RCS file: /cvs/src/sys/arch/amd64/amd64/aes_intel.S,v retrieving revision 1.9 diff -u -p -u -r1.9 aes_intel.S --- sys/arch/amd64/amd64/aes_intel.S 26 Mar 2013 15:47:01 -0000 1.9 +++ sys/arch/amd64/amd64/aes_intel.S 18 Aug 
2017 02:28:21 -0000 @@ -106,6 +106,8 @@ _key_expansion_128: _key_expansion_256a: + .cfi_startproc + RETGUARD_START pshufd $0b11111111,%xmm1,%xmm1 shufps $0b00010000,%xmm0,%xmm4 pxor %xmm4,%xmm0 @@ -114,9 +116,13 @@ _key_expansion_256a: pxor %xmm1,%xmm0 movaps %xmm0,(%rcx) add $0x10,%rcx + RETGUARD_END ret + .cfi_endproc _key_expansion_192a: + .cfi_startproc + RETGUARD_START pshufd $0b01010101,%xmm1,%xmm1 shufps $0b00010000,%xmm0,%xmm4 pxor %xmm4,%xmm0 @@ -137,9 +143,13 @@ _key_expansion_192a: shufps $0b01001110,%xmm2,%xmm1 movaps %xmm1,16(%rcx) add $0x20,%rcx + RETGUARD_END ret + .cfi_endproc _key_expansion_192b: + .cfi_startproc + RETGUARD_START pshufd $0b01010101,%xmm1,%xmm1 shufps $0b00010000,%xmm0,%xmm4 pxor %xmm4,%xmm0 @@ -155,9 +165,13 @@ _key_expansion_192b: movaps %xmm0,(%rcx) add $0x10,%rcx + RETGUARD_END ret + .cfi_endproc _key_expansion_256b: + .cfi_startproc + RETGUARD_START pshufd $0b10101010,%xmm1,%xmm1 shufps $0b00010000,%xmm2,%xmm4 pxor %xmm4,%xmm2 @@ -166,12 +180,14 @@ _key_expansion_256b: pxor %xmm1,%xmm2 movaps %xmm2,(%rcx) add $0x10,%rcx + RETGUARD_END ret - + .cfi_endproc /* * void aesni_set_key(struct aesni_session *ses, uint8_t *key, size_t len) */ ENTRY(aesni_set_key) + RETGUARD_START movups (%rsi),%xmm0 # user key (first 16 bytes) movaps %xmm0,(%rdi) lea 0x10(%rdi),%rcx # key addr @@ -267,17 +283,22 @@ ENTRY(aesni_set_key) sub $0x10,%rsi cmp %rcx,%rdi jb 4b + RETGUARD_END ret +END(aesni_set_key) /* * void aesni_enc(struct aesni_session *ses, uint8_t *dst, uint8_t *src) */ ENTRY(aesni_enc) + RETGUARD_START movl 480(KEYP),KLEN # key length movups (INP),STATE # input call _aesni_enc1 movups STATE,(OUTP) # output + RETGUARD_END ret +END(aesni_enc) /* * _aesni_enc1: internal ABI @@ -292,6 +313,8 @@ ENTRY(aesni_enc) * TKEYP (T1) */ _aesni_enc1: + .cfi_startproc + RETGUARD_START movaps (KEYP),KEY # key mov KEYP,TKEYP pxor KEY,STATE # round 0 @@ -333,7 +356,9 @@ _aesni_enc1: aesenc KEY,STATE movaps 0x70(TKEYP),KEY aesenclast KEY,STATE + RETGUARD_END ret + .cfi_endproc /* * _aesni_enc4: internal ABI @@ -354,6 +379,8 @@ _aesni_enc1: * TKEYP (T1) */ _aesni_enc4: + .cfi_startproc + RETGUARD_START movaps (KEYP),KEY # key mov KEYP,TKEYP pxor KEY,STATE1 # round 0 @@ -440,18 +467,23 @@ _aesni_enc4: aesenclast KEY,STATE2 aesenclast KEY,STATE3 aesenclast KEY,STATE4 + RETGUARD_END ret + .cfi_endproc /* * void aesni_dec(struct aesni_session *ses, uint8_t *dst, uint8_t *src) */ ENTRY(aesni_dec) + RETGUARD_START mov 480(KEYP),KLEN # key length add $240,KEYP movups (INP),STATE # input call _aesni_dec1 movups STATE,(OUTP) # output + RETGUARD_END ret +END(aesni_dec) /* * _aesni_dec1: internal ABI @@ -466,6 +498,8 @@ ENTRY(aesni_dec) * TKEYP (T1) */ _aesni_dec1: + .cfi_startproc + RETGUARD_START movaps (KEYP),KEY # key mov KEYP,TKEYP pxor KEY,STATE # round 0 @@ -507,7 +541,9 @@ _aesni_dec1: aesdec KEY,STATE movaps 0x70(TKEYP),KEY aesdeclast KEY,STATE + RETGUARD_END ret + .cfi_endproc /* * _aesni_dec4: internal ABI @@ -528,6 +564,8 @@ _aesni_dec1: * TKEYP (T1) */ _aesni_dec4: + .cfi_startproc + RETGUARD_START movaps (KEYP),KEY # key mov KEYP,TKEYP pxor KEY,STATE1 # round 0 @@ -614,7 +652,9 @@ _aesni_dec4: aesdeclast KEY,STATE2 aesdeclast KEY,STATE3 aesdeclast KEY,STATE4 + RETGUARD_END ret + .cfi_endproc #if 0 /* @@ -622,6 +662,7 @@ _aesni_dec4: * size_t len) */ ENTRY(aesni_ecb_enc) + RETGUARD_START test LEN,LEN # check length jz 3f mov 480(KEYP),KLEN @@ -658,13 +699,16 @@ ENTRY(aesni_ecb_enc) cmp $16,LEN jge 2b 3: + RETGUARD_END ret +END(aesni_ecb_enc) /* * void aesni_ecb_dec(struct 
aesni_session *ses, uint8_t *dst, uint8_t *src, * size_t len); */ ENTRY(aesni_ecb_dec) + RETGUARD_START test LEN,LEN jz 3f mov 480(KEYP),KLEN @@ -702,7 +746,9 @@ ENTRY(aesni_ecb_dec) cmp $16,LEN jge 2b 3: + RETGUARD_END ret +END(aesni_ecb_dec) #endif /* @@ -710,6 +756,7 @@ ENTRY(aesni_ecb_dec) * size_t len, uint8_t *iv) */ ENTRY(aesni_cbc_enc) + RETGUARD_START cmp $16,LEN jb 2f mov 480(KEYP),KLEN @@ -727,13 +774,16 @@ ENTRY(aesni_cbc_enc) jge 1b movups STATE,(IVP) 2: + RETGUARD_END ret +END(aesni_cbc_enc) /* * void aesni_cbc_dec(struct aesni_session *ses, uint8_t *dst, uint8_t *src, * size_t len, uint8_t *iv) */ ENTRY(aesni_cbc_dec) + RETGUARD_START cmp $16,LEN jb 4f mov 480(KEYP),KLEN @@ -784,7 +834,9 @@ ENTRY(aesni_cbc_dec) 3: movups IV,(IVP) 4: + RETGUARD_END ret +END(aesni_cbc_dec) /* * _aesni_inc_init: internal ABI @@ -799,6 +851,8 @@ ENTRY(aesni_cbc_dec) * BSWAP_MASK == endian swapping mask */ _aesni_inc_init: + .cfi_startproc + RETGUARD_START movdqa CTR,IV pslldq $8,IV movdqu .Lbswap_mask,BSWAP_MASK @@ -806,7 +860,9 @@ _aesni_inc_init: mov $1,TCTR_LOW movd TCTR_LOW,INC movd CTR,TCTR_LOW + RETGUARD_END ret + .cfi_endproc /* * _aesni_inc: internal ABI @@ -824,6 +880,8 @@ _aesni_inc_init: * TCTR_LOW: == lower dword of CTR */ _aesni_inc: + .cfi_startproc + RETGUARD_START paddq INC,CTR add $1,TCTR_LOW jnc 1f @@ -833,13 +891,16 @@ _aesni_inc: 1: movaps CTR,IV pshufb BSWAP_MASK,IV + RETGUARD_END ret + .cfi_endproc /* * void aesni_ctr_enc(struct aesni_session *ses, uint8_t *dst, uint8_t *src, * size_t len, uint8_t *icb) */ ENTRY(aesni_ctr_enc) + RETGUARD_START cmp $16,LEN jb 4f mov 480(KEYP),KLEN @@ -893,9 +954,13 @@ ENTRY(aesni_ctr_enc) 3: movq IV,(IVP) 4: + RETGUARD_END ret +END(aesni_ctr_enc) _aesni_gmac_gfmul: + .cfi_startproc + RETGUARD_START movdqa %xmm0,%xmm3 pclmulqdq $0x00,%xmm1,%xmm3 # xmm3 holds a0*b0 movdqa %xmm0,%xmm4 @@ -959,12 +1024,15 @@ _aesni_gmac_gfmul: pxor %xmm8,%xmm2 pxor %xmm2,%xmm3 pxor %xmm3,%xmm6 # the result is in xmm6 + RETGUARD_END ret + .cfi_endproc /* * void aesni_gmac_update(GHASH_CTX *ghash, uint8_t *src, size_t len) */ ENTRY(aesni_gmac_update) + RETGUARD_START cmp $16,%rdx jb 2f @@ -990,26 +1058,32 @@ ENTRY(aesni_gmac_update) movdqu %xmm6,16(%rdi) movdqu %xmm6,32(%rdi) 2: + RETGUARD_END ret +END(aesni_gmac_update) /* * void aesni_gmac_final(struct aesni_sess *ses, uint8_t *tag, * uint8_t *icb, uint8_t *hashstate) */ ENTRY(aesni_gmac_final) + RETGUARD_START movl 480(KEYP),KLEN # key length movdqu (INP),STATE # icb call _aesni_enc1 movdqu (HSTATE),IN pxor IN,STATE movdqu STATE,(OUTP) # output + RETGUARD_END ret +END(aesni_gmac_final) /* * void aesni_xts_enc(struct aesni_xts_ctx *xts, uint8_t *dst, uint8_t *src, * size_t len, uint8_t *iv) */ ENTRY(aesni_xts_enc) + RETGUARD_START cmp $16,%rcx jb 2f @@ -1031,13 +1105,16 @@ ENTRY(aesni_xts_enc) cmp $16,%rcx jge 1b 2: + RETGUARD_END ret +END(aesni_xts_enc) /* * void aesni_xts_dec(struct aesni_xts_ctx *xts, uint8_t *dst, uint8_t *src, * size_t len, uint8_t *iv) */ ENTRY(aesni_xts_dec) + RETGUARD_START cmp $16,%rcx jb 2f @@ -1060,7 +1137,9 @@ ENTRY(aesni_xts_dec) cmp $16,%rcx jge 1b 2: + RETGUARD_END ret +END(aesni_xts_dec) /* * Prepare tweak as E_k2(IV). IV is specified as LE representation of a @@ -1070,6 +1149,8 @@ ENTRY(aesni_xts_dec) * xts is in %rdi, iv is in %r8 and we return the tweak in %xmm3. */ _aesni_xts_tweak: + .cfi_startproc + RETGUARD_START mov (%r8),%r10 movd %r10,%xmm0 # Last 64-bits of IV are always zero. 
mov KEYP,%r11 @@ -1078,12 +1159,16 @@ _aesni_xts_tweak: call _aesni_enc1 movdqa %xmm0,%xmm3 mov %r11,KEYP + RETGUARD_END ret + .cfi_endproc /* * Exponentiate AES XTS tweak (in %xmm3). */ _aesni_xts_tweak_exp: + .cfi_startproc + RETGUARD_START pextrw $7,%xmm3,%r10 pextrw $3,%xmm3,%r11 psllq $1,%xmm3 # Left shift. @@ -1101,4 +1186,6 @@ _aesni_xts_tweak_exp: xor $0x87,%r11 # AES XTS alpha - GF(2^128). pinsrw $0,%r11,%xmm3 2: + RETGUARD_END ret + .cfi_endproc Index: sys/arch/amd64/amd64/copy.S =================================================================== RCS file: /cvs/src/sys/arch/amd64/amd64/copy.S,v retrieving revision 1.8 diff -u -p -u -r1.8 copy.S --- sys/arch/amd64/amd64/copy.S 12 May 2017 19:25:19 -0000 1.8 +++ sys/arch/amd64/amd64/copy.S 18 Aug 2017 02:28:21 -0000 @@ -69,6 +69,7 @@ */ ENTRY(kcopy) + RETGUARD_START movq CPUVAR(CURPCB),%rax pushq PCB_ONFAULT(%rax) leaq _C_LABEL(copy_fault)(%rip),%r11 @@ -93,6 +94,7 @@ ENTRY(kcopy) movq CPUVAR(CURPCB),%rdx popq PCB_ONFAULT(%rdx) xorq %rax,%rax + RETGUARD_END ret 1: addq %rcx,%rdi # copy backward @@ -114,9 +116,12 @@ ENTRY(kcopy) movq CPUVAR(CURPCB),%rdx popq PCB_ONFAULT(%rdx) xorq %rax,%rax + RETGUARD_END ret +END(kcopy) ENTRY(copyout) + RETGUARD_START pushq $0 xchgq %rdi,%rsi @@ -149,9 +154,12 @@ ENTRY(copyout) SMAP_CLAC popq PCB_ONFAULT(%rdx) xorl %eax,%eax + RETGUARD_END ret +END(copyout) ENTRY(copyin) + RETGUARD_START movq CPUVAR(CURPCB),%rax pushq $0 leaq _C_LABEL(copy_fault)(%rip),%r11 @@ -186,18 +194,24 @@ ENTRY(copyin) movq CPUVAR(CURPCB),%rdx popq PCB_ONFAULT(%rdx) xorl %eax,%eax + RETGUARD_END ret +END(copyin) NENTRY(copy_efault) movq $EFAULT,%rax +NEND(copy_efault) NENTRY(copy_fault) SMAP_CLAC movq CPUVAR(CURPCB),%rdx popq PCB_ONFAULT(%rdx) + RETGUARD_END ret +NEND(copy_efault) ENTRY(copyoutstr) + RETGUARD_START xchgq %rdi,%rsi movq %rdx,%r8 movq %rcx,%r9 @@ -237,8 +251,10 @@ ENTRY(copyoutstr) jae _C_LABEL(copystr_efault) movq $ENAMETOOLONG,%rax jmp copystr_return +END(copyoutstr) ENTRY(copyinstr) + RETGUARD_START xchgq %rdi,%rsi movq %rdx,%r8 movq %rcx,%r9 @@ -279,9 +295,11 @@ ENTRY(copyinstr) jae _C_LABEL(copystr_efault) movq $ENAMETOOLONG,%rax jmp copystr_return +END(copyinstr) -ENTRY(copystr_efault) +NENTRY(copystr_efault) movl $EFAULT,%eax +NEND(copystr_efault) ENTRY(copystr_fault) copystr_return: @@ -294,9 +312,12 @@ copystr_return: subq %rdx,%r8 movq %r8,(%r9) -8: ret +8: RETGUARD_END + ret +END(copystr_fault) ENTRY(copystr) + RETGUARD_START xchgq %rdi,%rsi movq %rdx,%r8 @@ -323,7 +344,9 @@ ENTRY(copystr) subq %rdx,%r8 movq %r8,(%rcx) -7: ret +7: RETGUARD_END + ret +END(copystr) .globl _C_LABEL(_stac) _C_LABEL(_stac): Index: sys/arch/amd64/amd64/db_trace.c =================================================================== RCS file: /cvs/src/sys/arch/amd64/amd64/db_trace.c,v retrieving revision 1.34 diff -u -p -u -r1.34 db_trace.c --- sys/arch/amd64/amd64/db_trace.c 14 Aug 2017 16:32:37 -0000 1.34 +++ sys/arch/amd64/amd64/db_trace.c 18 Aug 2017 05:34:10 -0000 @@ -73,6 +73,14 @@ struct db_variable * db_eregs = db_regs */ #define INKERNEL(va) (((vaddr_t)(va)) >= VM_MIN_KERNEL_ADDRESS) +/* Kernel uses xor %esp,(%rsp) for RETGUARD/-fret-protector */ +#if defined(PROF) || defined(GPROF) +# define GETPC(frame) (db_get_value((db_addr_t)&frame->f_retaddr, 8, FALSE)) +#else +# define GETPC(frame) (db_get_value((db_addr_t)&frame->f_retaddr, 8, FALSE)\ + ^ (unsigned int)&(frame->f_retaddr)) +#endif + #define NONE 0 #define TRAP 1 #define SYSCALL 2 @@ -111,8 +119,7 @@ db_nextframe(struct callframe **fp, db_a switch (is_trap) { 
case NONE: - *ip = (db_addr_t) - db_get_value((db_addr_t)&(*fp)->f_retaddr, 8, FALSE); + *ip = (db_addr_t)GETPC((*fp)); *fp = (struct callframe *) db_get_value((db_addr_t)&(*fp)->f_frame, 8, FALSE); break; @@ -211,8 +218,7 @@ db_stack_trace_print(db_expr_t addr, boo } else { frame = (struct callframe *)addr; } - callpc = (db_addr_t) - db_get_value((db_addr_t)&frame->f_retaddr, 8, FALSE); + callpc = (db_addr_t)GETPC(frame); frame = (struct callframe *)frame->f_frame; } @@ -286,9 +292,7 @@ db_stack_trace_print(db_expr_t addr, boo if (lastframe == 0 && offset == 0 && !have_addr && !is_trap) { /* Frame really belongs to next callpc */ lastframe = (struct callframe *)(ddb_regs.tf_rsp-8); - callpc = (db_addr_t) - db_get_value((db_addr_t)&lastframe->f_retaddr, - 8, FALSE); + callpc = (db_addr_t)GETPC(lastframe); continue; } @@ -350,7 +354,7 @@ db_save_stack_trace(struct db_stack_trac frame = __builtin_frame_address(0); - callpc = db_get_value((db_addr_t)&frame->f_retaddr, 8, FALSE); + callpc = GETPC(frame); frame = frame->f_frame; lastframe = NULL; @@ -372,7 +376,7 @@ db_save_stack_trace(struct db_stack_trac if (is_trap == NONE) { lastframe = frame; - callpc = frame->f_retaddr; + callpc = GETPC(frame); frame = frame->f_frame; } else { if (is_trap == INTERRUPT) { @@ -409,7 +413,7 @@ db_get_pc(struct trapframe *tf) { struct callframe *cf = (struct callframe *)(tf->tf_rsp - sizeof(long)); - return db_get_value((db_addr_t)&cf->f_retaddr, sizeof(long), 0); + return GETPC(cf); } vaddr_t Index: sys/arch/amd64/amd64/locore.S =================================================================== RCS file: /cvs/src/sys/arch/amd64/amd64/locore.S,v retrieving revision 1.87 diff -u -p -u -r1.87 locore.S --- sys/arch/amd64/amd64/locore.S 6 Jul 2017 06:17:04 -0000 1.87 +++ sys/arch/amd64/amd64/locore.S 18 Aug 2017 02:28:21 -0000 @@ -282,6 +282,7 @@ NENTRY(lgdt) pushq $GSEL(GCODE_SEL, SEL_KPL) pushq %rax lretq +NEND(lgdt) ENTRY(setjmp) /* @@ -301,6 +302,7 @@ ENTRY(setjmp) movq %rdx,56(%rax) xorl %eax,%eax ret +END(setjmp) ENTRY(longjmp) movq %rdi,%rax @@ -316,6 +318,7 @@ ENTRY(longjmp) xorl %eax,%eax incl %eax ret +END(longjmp) /*****************************************************************************/ @@ -324,6 +327,7 @@ ENTRY(longjmp) * Switch from "old" proc to "new". */ ENTRY(cpu_switchto) + RETGUARD_START pushq %rbx pushq %rbp pushq %r12 @@ -362,7 +366,12 @@ ENTRY(cpu_switchto) btrq %rdi,PM_CPUS(%rcx) /* Save stack pointers. */ - movq %rsp,PCB_RSP(%r13) + movq %rsp,%rax + addq $6*8,%rax + xorl %eax,(%rax) # RETGUARD + subq $6*8,%rax + movq %rax,PCB_RSP(%r13) + movq %rbp,PCB_RBP(%r13) switch_exited: @@ -391,7 +400,12 @@ restore_saved: movq P_ADDR(%r12),%r13 /* Restore stack pointers. 
*/ - movq PCB_RSP(%r13),%rsp + movq PCB_RSP(%r13),%rax + addq $6*8,%rax + xorl %eax,(%rax) # RETGUARD + subq $6*8,%rax + movq %rax,%rsp + movq PCB_RBP(%r13),%rbp movq CPUVAR(TSS),%rcx @@ -439,34 +453,47 @@ switch_return: popq %r12 popq %rbp popq %rbx + RETGUARD_END ret +END(cpu_switchto) ENTRY(cpu_idle_enter) + RETGUARD_START movq _C_LABEL(cpu_idle_enter_fcn),%rax cmpq $0,%rax je 1f jmpq *%rax 1: + RETGUARD_END ret +END(cpu_idle_enter) ENTRY(cpu_idle_cycle) + RETGUARD_START movq _C_LABEL(cpu_idle_cycle_fcn),%rax cmpq $0,%rax je 1f call *%rax + RETGUARD_END ret 1: sti hlt + RETGUARD_END ret +END(cpu_idle_cycle) ENTRY(cpu_idle_leave) + RETGUARD_START movq _C_LABEL(cpu_idle_leave_fcn),%rax cmpq $0,%rax je 1f + RETGUARD_END jmpq *%rax 1: + RETGUARD_END ret +END(cpu_idle_leave) .globl _C_LABEL(panic) @@ -475,6 +502,7 @@ NENTRY(switch_pmcpu_set) movabsq $switch_active,%rdi call _C_LABEL(panic) /* NOTREACHED */ +NEND(switch_pmcpu_set) .section .rodata switch_active: @@ -486,11 +514,16 @@ switch_active: * Update pcb, saving current processor state. */ ENTRY(savectx) + RETGUARD_START /* Save stack pointers. */ - movq %rsp,PCB_RSP(%rdi) + movq %rsp,%rax + xorl %eax,(%rax) # undo RETGUARD + movq %rax,PCB_RSP(%rdi) movq %rbp,PCB_RBP(%rdi) + RETGUARD_END ret +END(savectx) IDTVEC(syscall32) sysret /* go away please */ @@ -614,7 +647,7 @@ NENTRY(proc_trampoline) call *%r12 movq CPUVAR(CURPROC),%r14 jmp .Lsyscall_check_asts - +NEND(proc_trampoline) /* * Return via iretq, for real interrupts and signal returns @@ -659,7 +692,7 @@ NENTRY(intr_fast_exit) .globl _C_LABEL(doreti_iret) _C_LABEL(doreti_iret): iretq - +NEND(intr_fast_exit) #if !defined(GPROF) && defined(DDBPROF) .Lprobe_fixup: @@ -692,6 +725,7 @@ _C_LABEL(doreti_iret): #endif /* !defined(GPROF) && defined(DDBPROF) */ ENTRY(pagezero) + RETGUARD_START movq $-PAGE_SIZE,%rdx subq %rdx,%rdi xorq %rax,%rax @@ -703,7 +737,9 @@ ENTRY(pagezero) addq $32,%rdx jne 1b sfence + RETGUARD_END ret +END(pagezero) #if NXEN > 0 /* Hypercall page needs to be page aligned */ Index: sys/arch/amd64/amd64/mutex.S =================================================================== RCS file: /cvs/src/sys/arch/amd64/amd64/mutex.S,v retrieving revision 1.13 diff -u -p -u -r1.13 mutex.S --- sys/arch/amd64/amd64/mutex.S 29 Jun 2017 17:17:28 -0000 1.13 +++ sys/arch/amd64/amd64/mutex.S 18 Aug 2017 02:28:21 -0000 @@ -39,12 +39,16 @@ * all the functions in the same place. */ ENTRY(__mtx_init) + RETGUARD_START movl %esi, MTX_WANTIPL(%rdi) movl $0, MTX_OLDIPL(%rdi) movq $0, MTX_OWNER(%rdi) + RETGUARD_END ret +END(__mtx_init) ENTRY(__mtx_enter) + RETGUARD_START 1: movl MTX_WANTIPL(%rdi), %eax movq CPUVAR(SELF), %rcx movl CPU_INFO_ILEVEL(%rcx), %edx # oipl = cpl; @@ -65,6 +69,7 @@ ENTRY(__mtx_enter) #ifdef DIAGNOSTIC incl CPU_INFO_MUTEX_LEVEL(%rcx) #endif + RETGUARD_END ret /* We failed to obtain the lock. splx, spin and retry. */ @@ -92,8 +97,10 @@ mtx_lockingself: .asciz "mtx_enter: locking against myself" .text #endif +END(__mtx_enter) ENTRY(__mtx_enter_try) + RETGUARD_START 1: movl MTX_WANTIPL(%rdi), %eax movq CPUVAR(SELF), %rcx movl CPU_INFO_ILEVEL(%rcx), %edx # oipl = cpl; @@ -115,6 +122,7 @@ ENTRY(__mtx_enter_try) incl CPU_INFO_MUTEX_LEVEL(%rcx) #endif movq $1, %rax + RETGUARD_END ret /* We failed to obtain the lock. splx and return 0. 
*/ @@ -128,6 +136,7 @@ ENTRY(__mtx_enter_try) je 3f #endif xorq %rax, %rax + RETGUARD_END ret #ifdef DIAGNOSTIC @@ -139,9 +148,10 @@ mtx_lockingtry: .asciz "mtx_enter_try: locking against myself" .text #endif - +END(__mtx_enter_try) ENTRY(__mtx_leave) + RETGUARD_START movq %rdi, %rax #ifdef DIAGNOSTIC movq CPUVAR(SELF), %rcx @@ -157,6 +167,7 @@ ENTRY(__mtx_leave) je 1f call _C_LABEL(spllower) 1: + RETGUARD_END ret #ifdef DIAGNOSTIC @@ -168,3 +179,4 @@ mtx_leave_held: .asciz "mtx_leave: lock not held" .text #endif +END(__mtx_leave) Index: sys/arch/amd64/amd64/spl.S =================================================================== RCS file: /cvs/src/sys/arch/amd64/amd64/spl.S,v retrieving revision 1.11 diff -u -p -u -r1.11 spl.S --- sys/arch/amd64/amd64/spl.S 20 May 2016 14:37:53 -0000 1.11 +++ sys/arch/amd64/amd64/spl.S 18 Aug 2017 02:28:21 -0000 @@ -85,18 +85,24 @@ .globl _C_LABEL(splhigh), _C_LABEL(splx) .align 16, 0xcc -_C_LABEL(splhigh): +ENTRY(splhigh) + RETGUARD_START movl $IPL_HIGH,%eax xchgl %eax,CPUVAR(ILEVEL) + RETGUARD_END ret +END(splhigh) .align 16, 0xcc -_C_LABEL(splx): +ENTRY(splx) + RETGUARD_START movl 4(%esp),%eax movl %eax,CPUVAR(ILEVEL) testl %eax,%eax jnz _C_LABEL(Xspllower) + RETGUARD_END ret +END(splx) #endif /* PROF || GPROF */ #endif @@ -115,10 +121,18 @@ _C_LABEL(splx): * the sending CPU will never see the that CPU accept the IPI */ IDTVEC(spllower) + .cfi_startproc + RETGUARD_START + _PROF_PROLOGUE pushq %rbx pushq %r13 movl %edi,%ebx + + movq %rsp,%rax + addq $16,%rax + xorq %rax,(%rax) + leaq 1f(%rip),%r13 # address to resume loop at 1: movl %ebx,%eax # get cpl movq CPUVAR(IUNMASK)(,%rax,8),%rax @@ -130,11 +144,17 @@ IDTVEC(spllower) movq CPUVAR(ISOURCES)(,%rax,8),%rax jmp *IS_RECURSE(%rax) 2: + movq %rsp,%rax + addq $2*8,%rax + xorq %rax,(%rax) + movl %ebx,CPUVAR(ILEVEL) sti popq %r13 popq %rbx + RETGUARD_END ret + .cfi_endproc /* * Handle return from interrupt after device handler finishes. Index: sys/arch/amd64/amd64/vector.S =================================================================== RCS file: /cvs/src/sys/arch/amd64/amd64/vector.S,v retrieving revision 1.49 diff -u -p -u -r1.49 vector.S --- sys/arch/amd64/amd64/vector.S 29 Jun 2017 17:17:28 -0000 1.49 +++ sys/arch/amd64/amd64/vector.S 18 Aug 2017 02:28:21 -0000 @@ -236,6 +236,7 @@ NENTRY(resume_iret) INTR_SAVE_GPRS sti jmp calltrap +NEND(resume_iret) /* * All traps go through here. 
Call the generic trap handler, and @@ -296,7 +297,10 @@ calltrap: #endif /* DDB */ movl %ebx,CPUVAR(ILEVEL) jmp 2b +#endif /* DIAGNOSTIC */ +NEND(alltraps) +#ifdef DIAGNOSTIC .section .rodata spl_lowered: .asciz "WARNING: SPL NOT LOWERED ON TRAP EXIT %x %x\n" @@ -326,8 +330,8 @@ spl_lowered: /* XXX See comment in locore.s */ #define XINTR(name,num) Xintr_##name##num - .globl _C_LABEL(x2apic_eoi) -_C_LABEL(x2apic_eoi): +NENTRY(x2apic_eoi) + RETGUARD_START pushq %rax pushq %rcx pushq %rdx @@ -338,7 +342,9 @@ _C_LABEL(x2apic_eoi): popq %rdx popq %rcx popq %rax + RETGUARD_END ret +NEND(x2apic_eoi) #if NLAPIC > 0 #ifdef MULTIPROCESSOR Index: sys/arch/amd64/amd64/vmm_support.S =================================================================== RCS file: /cvs/src/sys/arch/amd64/amd64/vmm_support.S,v retrieving revision 1.9 diff -u -p -u -r1.9 vmm_support.S --- sys/arch/amd64/amd64/vmm_support.S 30 May 2017 17:49:47 -0000 1.9 +++ sys/arch/amd64/amd64/vmm_support.S 18 Aug 2017 02:28:21 -0000 @@ -28,23 +28,12 @@ #define VMX_FAIL_LAUNCH_INVALID_VMCS 2 #define VMX_FAIL_LAUNCH_VALID_VMCS 3 - .global _C_LABEL(vmxon) - .global _C_LABEL(vmxoff) - .global _C_LABEL(vmclear) - .global _C_LABEL(vmptrld) - .global _C_LABEL(vmptrst) - .global _C_LABEL(vmwrite) - .global _C_LABEL(vmread) - .global _C_LABEL(invvpid) - .global _C_LABEL(invept) - .global _C_LABEL(vmx_enter_guest) - .global _C_LABEL(vmm_dispatch_intr) - .global _C_LABEL(svm_enter_guest) - .text .code64 .align 16,0xcc -_C_LABEL(vmm_dispatch_intr): + +ENTRY(vmm_dispatch_intr) + RETGUARD_START movq %rsp, %r11 /* r11 = temporary register */ andq $0xFFFFFFFFFFFFFFF0, %rsp movw %ss, %ax @@ -55,87 +44,124 @@ _C_LABEL(vmm_dispatch_intr): pushq %rax cli callq *%rdi + RETGUARD_END ret +END(vmm_dispatch_intr) -_C_LABEL(vmxon): +ENTRY(vmxon) + RETGUARD_START vmxon (%rdi) jz failed_on jc failed_on xorq %rax, %rax + RETGUARD_END ret failed_on: movq $0x01, %rax + RETGUARD_END ret +END(vmxon) -_C_LABEL(vmxoff): +ENTRY(vmxoff) + RETGUARD_START vmxoff jz failed_off jc failed_off xorq %rax, %rax + RETGUARD_END ret failed_off: movq $0x01, %rax + RETGUARD_END ret +END(vmxoff) -_C_LABEL(vmclear): +ENTRY(vmclear) + RETGUARD_START vmclear (%rdi) jz failed_clear jc failed_clear xorq %rax, %rax + RETGUARD_END ret failed_clear: movq $0x01, %rax + RETGUARD_END ret +END(vmclear) -_C_LABEL(vmptrld): +ENTRY(vmptrld) + RETGUARD_START vmptrld (%rdi) jz failed_ptrld jc failed_ptrld xorq %rax, %rax + RETGUARD_END ret failed_ptrld: movq $0x01, %rax + RETGUARD_END ret +END(vmptrld) -_C_LABEL(vmptrst): +ENTRY(vmptrst) + RETGUARD_START vmptrst (%rdi) jz failed_ptrst jc failed_ptrst xorq %rax, %rax + RETGUARD_END ret failed_ptrst: movq $0x01, %rax + RETGUARD_END ret +END(vmptrst) -_C_LABEL(vmwrite): +ENTRY(vmwrite) + RETGUARD_START vmwrite %rsi, %rdi jz failed_write jc failed_write xorq %rax, %rax + RETGUARD_END ret failed_write: movq $0x01, %rax + RETGUARD_END ret +END(vmwrite) -_C_LABEL(vmread): +ENTRY(vmread) + RETGUARD_START vmread %rdi, (%rsi) jz failed_read jc failed_read xorq %rax, %rax + RETGUARD_END ret failed_read: movq $0x01, %rax + RETGUARD_END ret +END(vmread) -_C_LABEL(invvpid): +ENTRY(invvpid) + RETGUARD_START invvpid (%rsi), %rdi + RETGUARD_END ret +END(invvpid) -_C_LABEL(invept): +ENTRY(invept) + RETGUARD_START invept (%rsi), %rdi + RETGUARD_END ret +END(invept) -_C_LABEL(vmx_enter_guest): +ENTRY(vmx_enter_guest) + RETGUARD_START movq %rdx, %r8 /* resume flag */ testq %r8, %r8 jnz skip_init @@ -385,9 +411,12 @@ restore_host: popfq movq %rdi, %rax + RETGUARD_END ret 
+END(vmx_enter_guest) -_C_LABEL(svm_enter_guest): +ENTRY(svm_enter_guest) + RETGUARD_START clgi movq %rdi, %r8 pushfq @@ -587,4 +616,6 @@ restore_host_svm: movq %rdi, %rax stgi + RETGUARD_END ret +END(svm_enter_guest) Index: sys/arch/amd64/conf/ld.script =================================================================== RCS file: /cvs/src/sys/arch/amd64/conf/ld.script,v retrieving revision 1.7 diff -u -p -u -r1.7 ld.script --- sys/arch/amd64/conf/ld.script 6 Jul 2017 06:21:56 -0000 1.7 +++ sys/arch/amd64/conf/ld.script 18 Aug 2017 02:28:21 -0000 @@ -29,14 +29,6 @@ PHDRS openbsd_randomize PT_OPENBSD_RANDOMIZE; } -/* - * If we want the text/rodata/data sections aligned on 2M boundaries, - * we could use the following instead. Note, file size would increase - * due to necessary padding. - * - *__ALIGN_SIZE = 0x200000; - */ -__ALIGN_SIZE = 0x1000; __kernel_base = 0xffffffff80000000; __kernel_virt_base = __kernel_base + 0x1000000; __kernel_phys_base = 0x1000000; @@ -56,7 +48,7 @@ SECTIONS _etext = .; /* Move rodata to the next page, so we can nuke X and W bit on them */ - . = ALIGN(__ALIGN_SIZE); + . = ALIGN(0x1000); __kernel_rodata_phys = (. - __kernel_virt_base) + 0x1000000; .rodata : AT (__kernel_rodata_phys) { @@ -77,7 +69,7 @@ SECTIONS _erodata = .; /* Move data to the next page, so we can add W bit on them */ - . = ALIGN(__ALIGN_SIZE); + . = ALIGN(0x1000); __kernel_data_phys = (. - __kernel_virt_base) + 0x1000000; .data : AT (__kernel_data_phys) { Index: sys/arch/amd64/include/asm.h =================================================================== RCS file: /cvs/src/sys/arch/amd64/include/asm.h,v retrieving revision 1.8 diff -u -p -u -r1.8 asm.h --- sys/arch/amd64/include/asm.h 29 Jun 2017 17:36:16 -0000 1.8 +++ sys/arch/amd64/include/asm.h 18 Aug 2017 17:58:23 -0000 @@ -49,6 +49,18 @@ # define _C_LABEL(x) x #define _ASM_LABEL(x) x +#ifdef _KERNEL /* 32 bit */ +#define RETGUARD_CFI .cfi_escape 0x16, 0x10, 0x0d, 0x09, 0xf8, 0x22, 0x12, \ + 0x06, 0x16, 0x0c, 0xff, 0xff, 0xff, 0xff, 0x1a, 0x27 +#define RETGUARD_START RETGUARD_CFI; xorl %esp,(%rsp) +#define RETGUARD_END xorl %esp,(%rsp) +#else +#define RETGUARD_CFI .cfi_escape 0x16, 0x10, 0x06, 0x09, 0xf8, 0x22, 0x12, \ + 0x06, 0x27 +#define RETGUARD_START RETGUARD_CFI; xorq %rsp,(%rsp) +#define RETGUARD_END xorq %rsp,(%rsp) +#endif + #define CVAROFF(x,y) (_C_LABEL(x)+y)(%rip) #ifdef __STDC__ @@ -92,10 +104,12 @@ # define _PROF_PROLOGUE #endif -#define ENTRY(y) _ENTRY(_C_LABEL(y)); _PROF_PROLOGUE -#define NENTRY(y) _ENTRY(_C_LABEL(y)) +#define ENTRY(y) _ENTRY(_C_LABEL(y)); _PROF_PROLOGUE; .cfi_startproc +#define NENTRY(y) _ENTRY(_C_LABEL(y)); .cfi_startproc #define ASENTRY(y) _ENTRY(_ASM_LABEL(y)); _PROF_PROLOGUE -#define END(y) .size y, . - y +#define _ASM_SIZE(y) .size y, . - y +#define END(y) .cfi_endproc; _ASM_SIZE(y) +#define NEND(y) .cfi_endproc #define STRONG_ALIAS(alias,sym) \ .global alias; \ Index: sys/arch/amd64/include/cdefs.h =================================================================== RCS file: /cvs/src/sys/arch/amd64/include/cdefs.h,v retrieving revision 1.3 diff -u -p -u -r1.3 cdefs.h --- sys/arch/amd64/include/cdefs.h 28 Mar 2013 17:30:45 -0000 1.3 +++ sys/arch/amd64/include/cdefs.h 18 Aug 2017 02:39:39 -0000 @@ -18,4 +18,19 @@ __asm__(".section .gnu.warning." __STRING(sym) \ " ; .ascii \"" msg "\" ; .text") +/* + * Fix __builtin_return_address() when compile with -fxor-ret-protector. 
+ */ +#if defined(_RET_PROTECTOR) +# if defined(_KERNEL) +# define __builtin_return_address(d) ((void *) \ + ((size_t)__builtin_return_address(d) ^ \ + (unsigned int)__builtin_frame_address(d) + sizeof(void *))) +# else +# define __builtin_return_address(d) ((void *) \ + ((size_t)__builtin_return_address(d) ^ \ + (size_t)__builtin_frame_address(d) + sizeof(void *))) +# endif +#endif + #endif /* !_MACHINE_CDEFS_H_ */ Index: sys/arch/amd64/stand/cdboot/srt0.S =================================================================== RCS file: /cvs/src/sys/arch/amd64/stand/cdboot/srt0.S,v retrieving revision 1.3 diff -u -p -u -r1.3 srt0.S --- sys/arch/amd64/stand/cdboot/srt0.S 29 Oct 2012 13:54:56 -0000 1.3 +++ sys/arch/amd64/stand/cdboot/srt0.S 18 Aug 2017 02:28:21 -0000 @@ -204,6 +204,7 @@ ENTRY(debugchar) movb %al, (%ebx) popl %ebx ret +END(debugchar) .code16 Index: sys/arch/amd64/stand/efiboot/eficall.S =================================================================== RCS file: /cvs/src/sys/arch/amd64/stand/efiboot/eficall.S,v retrieving revision 1.1 diff -u -p -u -r1.1 eficall.S --- sys/arch/amd64/stand/efiboot/eficall.S 2 Sep 2015 01:52:25 -0000 1.1 +++ sys/arch/amd64/stand/efiboot/eficall.S 18 Aug 2017 02:28:21 -0000 @@ -62,3 +62,4 @@ ENTRY(efi_call) mov %rbp, %rsp pop %rbp retq +END(efi_call) Index: sys/arch/amd64/stand/libsa/gidt.S =================================================================== RCS file: /cvs/src/sys/arch/amd64/stand/libsa/gidt.S,v retrieving revision 1.11 diff -u -p -u -r1.11 gidt.S --- sys/arch/amd64/stand/libsa/gidt.S 27 Oct 2012 15:43:42 -0000 1.11 +++ sys/arch/amd64/stand/libsa/gidt.S 18 Aug 2017 02:28:21 -0000 @@ -160,6 +160,7 @@ ENTRY(_rtt) /* Again... */ movl $0, %esp /* segment violation */ ret +END(_rtt) #define IPROC(n) X##n #define IEMU(n) IPROC(emu##n) @@ -462,5 +463,4 @@ ENTRY(bootbuf) /* Jump to buffer */ ljmp $0x0, $0x7c00 - - .end +END(bootbuf) Index: sys/arch/amd64/stand/libsa/pxe_call.S =================================================================== RCS file: /cvs/src/sys/arch/amd64/stand/libsa/pxe_call.S,v retrieving revision 1.4 diff -u -p -u -r1.4 pxe_call.S --- sys/arch/amd64/stand/libsa/pxe_call.S 2 Jan 2006 00:26:29 -0000 1.4 +++ sys/arch/amd64/stand/libsa/pxe_call.S 18 Aug 2017 02:28:21 -0000 @@ -82,6 +82,7 @@ _C_LABEL(bangpxe_seg) = . - 2 popl %ebx popl %ebp ret +END(pxecall_bangpxe) ENTRY(pxecall_pxenv) .code32 @@ -125,6 +126,7 @@ _C_LABEL(pxenv_seg) = . 
- 2 popl %ebx popl %ebp ret +END(pxecall_pxenv) /* * prot_to_real() Index: sys/arch/amd64/stand/libsa/random_amd64.S =================================================================== RCS file: /cvs/src/sys/arch/amd64/stand/libsa/random_amd64.S,v retrieving revision 1.5 diff -u -p -u -r1.5 random_amd64.S --- sys/arch/amd64/stand/libsa/random_amd64.S 12 Feb 2016 21:36:33 -0000 1.5 +++ sys/arch/amd64/stand/libsa/random_amd64.S 18 Aug 2017 02:28:21 -0000 @@ -104,3 +104,4 @@ usetsc: done: popq %rbx retq +END(mdrandom) Index: sys/arch/amd64/stand/libsa/random_i386.S =================================================================== RCS file: /cvs/src/sys/arch/amd64/stand/libsa/random_i386.S,v retrieving revision 1.10 diff -u -p -u -r1.10 random_i386.S --- sys/arch/amd64/stand/libsa/random_i386.S 12 Feb 2016 21:36:33 -0000 1.10 +++ sys/arch/amd64/stand/libsa/random_i386.S 18 Aug 2017 02:28:21 -0000 @@ -104,3 +104,4 @@ usetsc: done: popal ret +END(mdrandom) Index: sys/arch/amd64/stand/pxeboot/srt0.S =================================================================== RCS file: /cvs/src/sys/arch/amd64/stand/pxeboot/srt0.S,v retrieving revision 1.3 diff -u -p -u -r1.3 srt0.S --- sys/arch/amd64/stand/pxeboot/srt0.S 29 Oct 2012 14:18:11 -0000 1.3 +++ sys/arch/amd64/stand/pxeboot/srt0.S 18 Aug 2017 02:28:21 -0000 @@ -199,6 +199,7 @@ ENTRY(debugchar) movb %al, (%ebx) popl %ebx ret +END(debugchar) .code16 Index: sys/arch/i386/conf/Makefile.i386 =================================================================== RCS file: /cvs/src/sys/arch/i386/conf/Makefile.i386,v retrieving revision 1.117 diff -u -p -u -r1.117 Makefile.i386 --- sys/arch/i386/conf/Makefile.i386 12 Aug 2017 20:26:11 -0000 1.117 +++ sys/arch/i386/conf/Makefile.i386 18 Aug 2017 02:28:21 -0000 @@ -29,7 +29,7 @@ CWARNFLAGS= -Werror -Wall -Wimplicit-fun -Wframe-larger-than=2047 CMACHFLAGS= -CMACHFLAGS+= -ffreestanding ${NOPIE_FLAGS} +CMACHFLAGS+= -mcmodel=kernel -ffreestanding ${NOPIE_FLAGS} SORTR= sort -R .if ${IDENT:M-DNO_PROPOLICE} CMACHFLAGS+= -fno-stack-protector Index: sys/arch/i386/i386/acpi_wakecode.S =================================================================== RCS file: /cvs/src/sys/arch/i386/i386/acpi_wakecode.S,v retrieving revision 1.29 diff -u -p -u -r1.29 acpi_wakecode.S --- sys/arch/i386/i386/acpi_wakecode.S 28 Jun 2017 08:51:36 -0000 1.29 +++ sys/arch/i386/i386/acpi_wakecode.S 18 Aug 2017 02:28:21 -0000 @@ -350,10 +350,12 @@ _ACPI_TRMP_LABEL(.Lhibernate_resume_vect /* Jump to the S3 resume vector */ ljmp $(_ACPI_RM_CODE_SEG), $.Lacpi_s3_vector_real +NEND(hibernate_resume_machdep) .code32 /* Switch to hibernate resume pagetable */ NENTRY(hibernate_activate_resume_pt_machdep) + RETGUARD_START /* Enable large pages */ movl %cr4, %eax orl $(CR4_PSE), %eax @@ -384,8 +386,9 @@ NENTRY(hibernate_activate_resume_pt_mach jmp 1f 1: nop + RETGUARD_END ret - +NEND(hibernate_activate_resume_pt_machdep) /* * Switch to the private resume-time hibernate stack */ @@ -397,10 +400,14 @@ NENTRY(hibernate_switch_stack_machdep) /* On our own stack from here onward */ ret +NEND(hibernate_switch_stack_machdep) NENTRY(hibernate_flush) + RETGUARD_START invlpg HIBERNATE_INFLATE_PAGE + RETGUARD_END ret +NEND(hibernate_flush) #endif /* HIBERNATE */ /* @@ -578,6 +585,7 @@ _C_LABEL(acpi_tramp_data_end): .code32 NENTRY(acpi_savecpu) movl (%esp), %eax + RETGUARD_START # 2nd instruction movl %eax, .Lacpi_saved_ret movw %cs, .Lacpi_saved_cs @@ -613,4 +621,6 @@ NENTRY(acpi_savecpu) str .Lacpi_saved_tr movl $1, %eax + RETGUARD_END ret +NEND(acpi_savecpu) 
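
An aside for readers following the asm.h and cdefs.h hunks above: below is a
minimal stand-alone sketch, illustration only and not part of the diff, of
what the RETGUARD_START/RETGUARD_END pair does to the word at the top of the
stack. The saved return address is XORed in place with the stack pointer
(i.e. the address of the slot holding it), and the matching XOR immediately
before the ret, with the stack pointer back at the same slot, restores the
original value. Helper names and the addresses used are made up.

    /* Illustration only -- models the asm.h RETGUARD macros above. */
    #include <stdint.h>
    #include <stdio.h>

    /* amd64 userland variant: xorq %rsp,(%rsp) -- full-width XOR */
    static uint64_t
    swizzle(uint64_t retaddr, uint64_t slot)
    {
            return retaddr ^ slot;
    }

    /* amd64 kernel variant: xorl %esp,(%rsp) -- only the low 32 bits
     * of the slot address are mixed in, so the stored word stays in
     * the kernel address range */
    static uint64_t
    swizzle_kernel(uint64_t retaddr, uint64_t slot)
    {
            return retaddr ^ (uint32_t)slot;
    }

    int
    main(void)
    {
            uint64_t slot = 0x00007f7ffffc1230ULL;  /* fake &retaddr slot */
            uint64_t ret  = 0x000000000040a7c0ULL;  /* fake return address */
            uint64_t s    = swizzle(ret, slot);     /* what the prologue stores */

            printf("stored    %#llx\n", (unsigned long long)s);
            printf("recovered %#llx\n",             /* epilogue undoes it */
                (unsigned long long)swizzle(s, slot));
            printf("kernel    %#llx\n",
                (unsigned long long)swizzle_kernel(ret, slot));
            return 0;
    }

The same arithmetic is behind the cdefs.h wrapper: with a conventional frame,
the return-address slot sits at __builtin_frame_address(d) + sizeof(void *),
and since + binds tighter than ^ in C, the macro XORs the builtin's result
with exactly that slot address, giving callers the untransformed value back.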
Index: sys/arch/i386/i386/apmcall.S =================================================================== RCS file: /cvs/src/sys/arch/i386/i386/apmcall.S,v retrieving revision 1.6 diff -u -p -u -r1.6 apmcall.S --- sys/arch/i386/i386/apmcall.S 28 Nov 2013 19:30:46 -0000 1.6 +++ sys/arch/i386/i386/apmcall.S 18 Aug 2017 02:28:21 -0000 @@ -43,6 +43,7 @@ _C_LABEL(apm_cli): */ .text ENTRY(apmcall) + RETGUARD_START pushl %ebp movl %esp, %ebp pushl %ebx @@ -104,6 +105,7 @@ ENTRY(apmcall) popl %esi popl %ebx popl %ebp + RETGUARD_END ret - +END(apmcall) .end Index: sys/arch/i386/i386/db_trace.c =================================================================== RCS file: /cvs/src/sys/arch/i386/i386/db_trace.c,v retrieving revision 1.29 diff -u -p -u -r1.29 db_trace.c --- sys/arch/i386/i386/db_trace.c 11 Aug 2017 20:50:15 -0000 1.29 +++ sys/arch/i386/i386/db_trace.c 18 Aug 2017 05:34:09 -0000 @@ -68,6 +68,14 @@ struct db_variable *db_eregs = db_regs + */ #define INKERNEL(va) (((vaddr_t)(va)) >= VM_MIN_KERNEL_ADDRESS) +/* Kernel uses xor %sp,(%esp) for RETGUARD/-fret-protector */ +#if defined(PROF) || defined(GPROF) +# define GETPC(frame) (db_get_value((db_addr_t)&frame->f_retaddr, 4, FALSE)) +#else +# define GETPC(frame) (db_get_value((db_addr_t)&frame->f_retaddr, 4, FALSE)\ + ^ (unsigned short)&(frame->f_retaddr)) +#endif + #define NONE 0 #define TRAP 1 #define SYSCALL 2 @@ -124,8 +132,7 @@ db_nextframe(struct callframe **fp, db_a switch (is_trap) { case NONE: - *ip = (db_addr_t) - db_get_value((int) &(*fp)->f_retaddr, 4, FALSE); + *ip = (db_addr_t)GETPC((*fp)); *fp = (struct callframe *) db_get_value((int) &(*fp)->f_frame, 4, FALSE); break; @@ -221,12 +228,10 @@ db_stack_trace_print(db_expr_t addr, boo return; } frame = (struct callframe *)p->p_addr->u_pcb.pcb_ebp; - callpc = (db_addr_t) - db_get_value((int)&frame->f_retaddr, 4, FALSE); + callpc = (db_addr_t)GETPC(frame); } else { frame = (struct callframe *)addr; - callpc = (db_addr_t) - db_get_value((int)&frame->f_retaddr, 4, FALSE); + callpc = (db_addr_t)GETPC(frame); } lastframe = 0; @@ -284,8 +289,7 @@ db_stack_trace_print(db_expr_t addr, boo if (lastframe == 0 && offset == 0 && !have_addr && !is_trap) { /* Frame really belongs to next callpc */ lastframe = (struct callframe *)(ddb_regs.tf_esp-4); - callpc = (db_addr_t) - db_get_value((int)&lastframe->f_retaddr, 4, FALSE); + callpc = (db_addr_t)GETPC(frame); continue; } @@ -331,7 +335,7 @@ db_save_stack_trace(struct db_stack_trac unsigned int i; frame = __builtin_frame_address(0); - callpc = db_get_value((int)&frame->f_retaddr, 4, FALSE); + callpc = (db_addr_t)GETPC(frame); lastframe = NULL; for (i = 0; i < DB_STACK_TRACE_MAX && frame != NULL; i++) { @@ -378,7 +382,7 @@ db_get_pc(struct trapframe *tf) else cf = (struct callframe *)(tf->tf_esp - sizeof(long)); - return db_get_value((db_addr_t)&cf->f_retaddr, sizeof(long), 0); + return (db_addr_t)GETPC(cf); } vaddr_t Index: sys/arch/i386/i386/in_cksum.s =================================================================== RCS file: /cvs/src/sys/arch/i386/i386/in_cksum.s,v retrieving revision 1.9 diff -u -p -u -r1.9 in_cksum.s --- sys/arch/i386/i386/in_cksum.s 29 Jun 2017 17:17:28 -0000 1.9 +++ sys/arch/i386/i386/in_cksum.s 18 Aug 2017 02:28:21 -0000 @@ -117,6 +117,7 @@ /* LINTSTUB: Func: int in4_cksum(struct mbuf *m, u_int8_t nxt, int off, int len) */ ENTRY(in4_cksum) + RETGUARD_START pushl %ebp pushl %ebx pushl %esi @@ -157,10 +158,12 @@ ENTRY(in4_cksum) * doesn't explode. 
*/ jmp .Lin4_entry +END(in4_cksum) /* LINTSTUB: Func: int in_cksum(struct mbuf *m, int len) */ ENTRY(in_cksum) + RETGUARD_START pushl %ebp pushl %ebx pushl %esi @@ -352,6 +355,7 @@ ENTRY(in_cksum) popl %esi popl %ebx popl %ebp + RETGUARD_END ret .Lout_of_mbufs: @@ -359,6 +363,7 @@ ENTRY(in_cksum) call _C_LABEL(printf) leal 4(%esp), %esp jmp .Lreturn +END(in_cksum) .section .rodata cksum_ood: Index: sys/arch/i386/i386/kvm86call.S =================================================================== RCS file: /cvs/src/sys/arch/i386/i386/kvm86call.S,v retrieving revision 1.7 diff -u -p -u -r1.7 kvm86call.S --- sys/arch/i386/i386/kvm86call.S 25 Apr 2015 21:31:24 -0000 1.7 +++ sys/arch/i386/i386/kvm86call.S 18 Aug 2017 02:28:21 -0000 @@ -152,7 +152,7 @@ ENTRY(kvm86_call) popl %eax addl $8,%esp iret - +END(kvm86_call) /* void kvm86_ret(struct trapframe *, int) */ ENTRY(kvm86_ret) @@ -226,3 +226,4 @@ ENTRY(kvm86_ret) popl %esi popl %ebp ret /* back to kvm86_call()'s caller */ +END(kvm86_ret) Index: sys/arch/i386/i386/locore.s =================================================================== RCS file: /cvs/src/sys/arch/i386/i386/locore.s,v retrieving revision 1.178 diff -u -p -u -r1.178 locore.s --- sys/arch/i386/i386/locore.s 6 Jul 2017 06:17:05 -0000 1.178 +++ sys/arch/i386/i386/locore.s 18 Aug 2017 03:18:16 -0000 @@ -246,6 +246,7 @@ NENTRY(proc_trampoline) addl $4,%esp INTRFASTEXIT /* NOTREACHED */ +NEND(proc_trampoline) /* This must come before any use of the CODEPATCH macros */ .section .codepatch,"a" @@ -311,6 +312,7 @@ _C_LABEL(sigfillsiz): * Copy len bytes, abort on fault. */ ENTRY(kcopy) + RETGUARD_START #ifdef DDB pushl %ebp movl %esp,%ebp @@ -344,6 +346,7 @@ ENTRY(kcopy) #ifdef DDB leave #endif + RETGUARD_END ret .align 4,0xcc @@ -371,7 +374,9 @@ ENTRY(kcopy) #ifdef DDB leave #endif + RETGUARD_END ret +END(kcopy) /*****************************************************************************/ @@ -385,6 +390,7 @@ ENTRY(kcopy) * Copy len bytes into the user's address space. */ ENTRY(copyout) + RETGUARD_START #ifdef DDB pushl %ebp movl %esp,%ebp @@ -432,13 +438,16 @@ ENTRY(copyout) #ifdef DDB leave #endif + RETGUARD_END ret +END(copyout) /* * copyin(caddr_t from, caddr_t to, size_t len); * Copy len bytes from the user's address space. */ ENTRY(copyin) + RETGUARD_START #ifdef DDB pushl %ebp movl %esp,%ebp @@ -484,7 +493,9 @@ ENTRY(copyin) #ifdef DDB leave #endif + RETGUARD_END ret +END(copyin) ENTRY(copy_fault) SMAP_CLAC @@ -496,7 +507,9 @@ ENTRY(copy_fault) #ifdef DDB leave #endif + RETGUARD_END ret +END(copy_fault) /* * copyoutstr(caddr_t from, caddr_t to, size_t maxlen, size_t *lencopied); @@ -506,6 +519,7 @@ ENTRY(copy_fault) * return 0 or EFAULT. */ ENTRY(copyoutstr) + RETGUARD_START #ifdef DDB pushl %ebp movl %esp,%ebp @@ -553,6 +567,7 @@ ENTRY(copyoutstr) jae _C_LABEL(copystr_fault) movl $ENAMETOOLONG,%eax jmp copystr_return +END(copyoutstr) /* * copyinstr(caddr_t from, caddr_t to, size_t maxlen, size_t *lencopied); @@ -562,6 +577,7 @@ ENTRY(copyoutstr) * return 0 or EFAULT. */ ENTRY(copyinstr) + RETGUARD_START #ifdef DDB pushl %ebp movl %esp,%ebp @@ -608,6 +624,7 @@ ENTRY(copyinstr) jae _C_LABEL(copystr_fault) movl $ENAMETOOLONG,%eax jmp copystr_return +END(copyinstr) ENTRY(copystr_fault) movl $EFAULT,%eax @@ -629,7 +646,9 @@ copystr_return: #ifdef DDB leave #endif + RETGUARD_END ret +END(copystr_fault) /* * copystr(caddr_t from, caddr_t to, size_t maxlen, size_t *lencopied); @@ -638,6 +657,7 @@ copystr_return: * string is too long, return ENAMETOOLONG; else return 0. 
*/ ENTRY(copystr) + RETGUARD_START #ifdef DDB pushl %ebp movl %esp,%ebp @@ -678,7 +698,9 @@ ENTRY(copystr) #ifdef DDB leave #endif + RETGUARD_END ret +END(copystr) /*****************************************************************************/ @@ -709,6 +731,7 @@ NENTRY(lgdt) pushl $GSEL(GCODE_SEL, SEL_KPL) pushl %eax lret +NEND(lgdt) ENTRY(setjmp) movl 4(%esp),%eax @@ -721,6 +744,7 @@ ENTRY(setjmp) movl %edx,20(%eax) # save eip xorl %eax,%eax # return (0); ret +END(setjmp) ENTRY(longjmp) movl 4(%esp),%eax @@ -734,6 +758,7 @@ ENTRY(longjmp) xorl %eax,%eax # return (1); incl %eax ret +END(longjmp) /*****************************************************************************/ @@ -817,33 +842,46 @@ switch_exited: popl %esi popl %ebx ret +END(cpu_switchto) ENTRY(cpu_idle_enter) + RETGUARD_START movl _C_LABEL(cpu_idle_enter_fcn),%eax cmpl $0,%eax je 1f + RETGUARD_END jmpl *%eax 1: + RETGUARD_END ret +END(cpu_idle_enter) ENTRY(cpu_idle_cycle) + RETGUARD_START movl _C_LABEL(cpu_idle_cycle_fcn),%eax cmpl $0,%eax je 1f call *%eax + RETGUARD_END ret 1: sti hlt + RETGUARD_END ret +END(cpu_idle_cycle) ENTRY(cpu_idle_leave) + RETGUARD_START movl _C_LABEL(cpu_idle_leave_fcn),%eax cmpl $0,%eax je 1f + RETGUARD_END jmpl *%eax 1: + RETGUARD_END ret +END(cpu_idle_cycle) /* * savectx(struct pcb *pcb); @@ -861,6 +899,7 @@ ENTRY(savectx) movl %ecx,PCB_FLAGS(%edx) ret +END(savectx) /*****************************************************************************/ @@ -991,22 +1030,27 @@ IDTVEC(align) */ NENTRY(resume_iret) ZTRAP(T_PROTFLT) +NEND(resume_iret) NENTRY(resume_pop_ds) pushl %es movl $GSEL(GDATA_SEL, SEL_KPL),%eax movw %ax,%es +NEND(resume_pop_ds) NENTRY(resume_pop_es) pushl %gs xorl %eax,%eax /* $GSEL(GNULL_SEL, SEL_KPL) == 0 */ movw %ax,%gs +NEND(resume_pop_es) NENTRY(resume_pop_gs) pushl %fs movl $GSEL(GCPU_SEL, SEL_KPL),%eax movw %ax,%fs +NEND(resume_pop_gs) NENTRY(resume_pop_fs) movl $T_PROTFLT,TF_TRAPNO(%esp) sti jmp calltrap +NEND(resume_pop_fs) /* * All traps go through here. 
Call the generic trap handler, and @@ -1083,7 +1127,10 @@ calltrap: #endif /* DDB */ movl %ebx,CPL jmp 2b +#endif /* DIAGNOSTIC */ +NEND(alltraps) +#ifdef DIAGNOSTIC .section .rodata spl_lowered: .asciz "WARNING: SPL NOT LOWERED ON TRAP EXIT\n" @@ -1148,6 +1195,7 @@ IDTVEC(syscall) */ ENTRY(bzero) + RETGUARD_START pushl %edi movl 8(%esp),%edi movl 12(%esp),%edx @@ -1207,10 +1255,13 @@ ENTRY(bzero) stosb popl %edi + RETGUARD_END ret +END(bzero) #if !defined(SMALL_KERNEL) ENTRY(sse2_pagezero) + RETGUARD_START pushl %ebx movl 8(%esp),%ecx movl %ecx,%eax @@ -1223,9 +1274,12 @@ ENTRY(sse2_pagezero) jne 1b sfence popl %ebx + RETGUARD_END ret +END(sse2_pagezero) ENTRY(i686_pagezero) + RETGUARD_START pushl %edi pushl %ebx @@ -1241,6 +1295,7 @@ ENTRY(i686_pagezero) popl %ebx popl %edi + RETGUARD_END ret .align 4,0x90 @@ -1271,13 +1326,16 @@ ENTRY(i686_pagezero) popl %ebx popl %edi + RETGUARD_END ret +END(i686_pagezero) #endif /* * int cpu_paenable(void *); */ ENTRY(cpu_paenable) + RETGUARD_END movl $-1, %eax testl $CPUID_PAE, _C_LABEL(cpu_feature) jz 1f @@ -1312,7 +1370,9 @@ ENTRY(cpu_paenable) popl %edi popl %esi 1: + RETGUARD_END ret +END(cpu_paenable) #if NLAPIC > 0 #include <i386/i386/apicvec.s> Index: sys/arch/i386/i386/mutex.S =================================================================== RCS file: /cvs/src/sys/arch/i386/i386/mutex.S,v retrieving revision 1.12 diff -u -p -u -r1.12 mutex.S --- sys/arch/i386/i386/mutex.S 29 Jun 2017 17:17:28 -0000 1.12 +++ sys/arch/i386/i386/mutex.S 18 Aug 2017 02:28:21 -0000 @@ -31,6 +31,7 @@ * all the functions in the same place. */ ENTRY(__mtx_init) + RETGUARD_START pushl %ebp movl %esp, %ebp movl 8(%esp), %eax @@ -41,11 +42,14 @@ ENTRY(__mtx_init) movl %edx, MTX_LOCK(%eax) movl %edx, MTX_OWNER(%eax) leave + RETGUARD_END ret +END(__mtx_init) #define SOFF 8 ENTRY(__mtx_enter) + RETGUARD_START pushl %ebp movl %esp, %ebp 1: movl SOFF(%ebp), %ecx @@ -69,6 +73,7 @@ ENTRY(__mtx_enter) movl %eax, MTX_OWNER(%ecx) movl %edx, MTX_OLDIPL(%ecx) leave + RETGUARD_END ret /* We failed to obtain the lock. splx, spin and retry. */ @@ -90,7 +95,10 @@ ENTRY(__mtx_enter) #ifdef DIAGNOSTIC 5: pushl $mtx_lockingself call _C_LABEL(panic) +#endif +END(__mtx_enter) +#ifdef DIAGNOSTIC .section .rodata mtx_lockingself: .asciz "mtx_enter: locking against myself" @@ -98,6 +106,7 @@ mtx_lockingself: #endif ENTRY(__mtx_enter_try) + RETGUARD_START pushl %ebp movl %esp, %ebp 1: movl SOFF(%ebp), %ecx @@ -122,6 +131,7 @@ ENTRY(__mtx_enter_try) movl %edx, MTX_OLDIPL(%ecx) movl $1, %eax leave + RETGUARD_END ret /* We failed to obtain the lock. splx and return zero. 
*/ @@ -136,12 +146,16 @@ ENTRY(__mtx_enter_try) #endif xorl %eax, %eax leave + RETGUARD_END ret #ifdef DIAGNOSTIC 4: pushl $mtx_lockingtry call _C_LABEL(panic) +#endif +END(__mtx_enter_try) +#ifdef DIAGNOSTIC .section .rodata mtx_lockingtry: .asciz "mtx_enter_try: locking against myself" @@ -150,6 +164,7 @@ mtx_lockingtry: ENTRY(__mtx_leave) + RETGUARD_START pushl %ebp movl %esp, %ebp movl SOFF(%ebp), %ecx @@ -166,12 +181,16 @@ ENTRY(__mtx_leave) movl %eax, MTX_LOCK(%ecx) call _C_LABEL(splx) leave + RETGUARD_END ret #ifdef DIAGNOSTIC 1: pushl $mtx_leave_held call _C_LABEL(panic) +#endif +END(__mtx_leave) +#ifdef DIAGNOSTIC .section .rodata mtx_leave_held: .asciz "mtx_leave: lock not held" Index: sys/arch/i386/i386/vmm_support.S =================================================================== RCS file: /cvs/src/sys/arch/i386/i386/vmm_support.S,v retrieving revision 1.3 diff -u -p -u -r1.3 vmm_support.S --- sys/arch/i386/i386/vmm_support.S 6 Jul 2017 04:32:30 -0000 1.3 +++ sys/arch/i386/i386/vmm_support.S 18 Aug 2017 02:28:21 -0000 @@ -28,19 +28,9 @@ #define VMX_FAIL_LAUNCH_VALID_VMCS 3 .text - .global _C_LABEL(vmxon) - .global _C_LABEL(vmxoff) - .global _C_LABEL(vmclear) - .global _C_LABEL(vmptrld) - .global _C_LABEL(vmptrst) - .global _C_LABEL(vmwrite) - .global _C_LABEL(vmread) - .global _C_LABEL(invvpid) - .global _C_LABEL(invept) - .global _C_LABEL(vmx_enter_guest) - .global _C_LABEL(vmm_dispatch_intr) -_C_LABEL(vmm_dispatch_intr): +ENTRY(vmm_dispatch_intr) + RETGUARD_START movl %esp, %eax andl $0xFFFFFFF0, %esp pushl %ss @@ -51,74 +41,101 @@ _C_LABEL(vmm_dispatch_intr): movl 4(%eax), %eax calll *%eax addl $0x8, %esp + RETGUARD_END ret +END(vmm_dispatch_intr) -_C_LABEL(vmxon): +ENTRY(vmxon) + RETGUARD_START movl 4(%esp), %eax vmxon (%eax) jz failed_on jc failed_on xorl %eax, %eax + RETGUARD_END ret failed_on: movl $0x01, %eax + RETGUARD_END ret +END(vmxon) -_C_LABEL(vmxoff): +ENTRY(vmxoff) + RETGUARD_START vmxoff jz failed_off jc failed_off xorl %eax, %eax + RETGUARD_END ret failed_off: movl $0x01, %eax + RETGUARD_END ret +END(vmxoff) -_C_LABEL(vmclear): +ENTRY(vmclear) + RETGUARD_START movl 0x04(%esp), %eax vmclear (%eax) jz failed_clear jc failed_clear xorl %eax, %eax + RETGUARD_END ret failed_clear: movl $0x01, %eax + RETGUARD_END ret +END(vmclear) -_C_LABEL(vmptrld): +ENTRY(vmptrld) + RETGUARD_START movl 4(%esp), %eax vmptrld (%eax) jz failed_ptrld jc failed_ptrld xorl %eax, %eax + RETGUARD_END ret failed_ptrld: movl $0x01, %eax + RETGUARD_END ret +END(vmptrld) -_C_LABEL(vmptrst): +ENTRY(vmptrst) + RETGUARD_START movl 0x04(%esp), %eax vmptrst (%eax) jz failed_ptrst jc failed_ptrst xorl %eax, %eax + RETGUARD_END ret failed_ptrst: movl $0x01, %eax + RETGUARD_END ret +END(vmptrst) -_C_LABEL(vmwrite): +ENTRY(vmwrite) + RETGUARD_START movl 0x04(%esp), %eax vmwrite 0x08(%esp), %eax jz failed_write jc failed_write xorl %eax, %eax + RETGUARD_END ret failed_write: movl $0x01, %eax + RETGUARD_END ret +END(vmwrite) -_C_LABEL(vmread): +ENTRY(vmread) + RETGUARD_START pushl %ebx movl 0x08(%esp), %ebx movl 0x0c(%esp), %eax @@ -127,26 +144,35 @@ _C_LABEL(vmread): jc failed_read popl %ebx xorl %eax, %eax + RETGUARD_END ret failed_read: popl %ebx movl $0x01, %eax + RETGUARD_END ret +END(vmread) -_C_LABEL(invvpid): +ENTRY(invvpid) + RETGUARD_START pushl %ebx movl 0x08(%esp), %eax movl 0x0c(%esp), %ebx invvpid (%ebx), %eax popl %ebx + RETGUARD_END ret +END(invvpid) -_C_LABEL(invept): +ENTRY(invept) movl 0x04(%esp), %eax invept 0x08(%esp), %eax + RETGUARD_END ret +END(invept) 
-_C_LABEL(vmx_enter_guest): +ENTRY(vmx_enter_guest) + RETGUARD_START pushl %ebx pushl %ecx pushl %edx @@ -288,3 +314,5 @@ restore_host: xorl %eax, %eax ret + RETGUARD_END +END(vmx_enter_guest) Index: sys/arch/i386/include/asm.h =================================================================== RCS file: /cvs/src/sys/arch/i386/include/asm.h,v retrieving revision 1.15 diff -u -p -u -r1.15 asm.h --- sys/arch/i386/include/asm.h 29 Jun 2017 17:36:16 -0000 1.15 +++ sys/arch/i386/include/asm.h 18 Aug 2017 17:58:22 -0000 @@ -61,6 +61,18 @@ #define _C_LABEL(name) name #define _ASM_LABEL(x) x +#ifdef _KERNEL /* 16 bit */ +#define RETGUARD_CFI .cfi_escape 0x16, 0x08, 0x0b, 0x09, 0xfc, 0ax22, 0x12,\ + 0x06, 0x16, 0x0a, 0xff, 0xff, 0x1a, 0x27 +#define RETGUARD_START RETGUARD_CFI; xor %sp,(%esp) +#define RETGUARD_END xor %sp,(%esp) +#else /* 32 bit */ +#define RETGUARD_CFI .cfi_escape 0x16, 0x08, 0x06, 0x09, 0xfc, 0x22, 0x12,\ + 0x06, 0x27 +#define RETGUARD_START RETGUARD_CFI; xorl %esp,(%esp) +#define RETGUARD_END xorl %esp,(%esp) +#endif + #define CVAROFF(x, y) _C_LABEL(x) + y #ifdef __STDC__ @@ -103,11 +115,13 @@ # define _PROF_PROLOGUE #endif -#define ENTRY(y) _ENTRY(_C_LABEL(y)); _PROF_PROLOGUE -#define NENTRY(y) _ENTRY(_C_LABEL(y)) +#define ENTRY(y) _ENTRY(_C_LABEL(y)); _PROF_PROLOGUE; .cfi_startproc +#define NENTRY(y) _ENTRY(_C_LABEL(y)); .cfi_startproc #define ASENTRY(y) _ENTRY(_ASM_LABEL(y)); _PROF_PROLOGUE #define NASENTRY(y) _ENTRY(_ASM_LABEL(y)) -#define END(y) .size y, . - y +#define _ASM_SIZE(y) .size y, . - y +#define END(y) .cfi_endproc; _ASM_SIZE(y) +#define NEND(y) .cfi_endproc #define ALTENTRY(name) .globl _C_LABEL(name); _C_LABEL(name): Index: sys/arch/i386/include/cdefs.h =================================================================== RCS file: /cvs/src/sys/arch/i386/include/cdefs.h,v retrieving revision 1.10 diff -u -p -u -r1.10 cdefs.h --- sys/arch/i386/include/cdefs.h 28 Mar 2013 17:30:45 -0000 1.10 +++ sys/arch/i386/include/cdefs.h 18 Aug 2017 02:39:34 -0000 @@ -18,4 +18,23 @@ __asm__(".section .gnu.warning." __STRING(sym) \ " ; .ascii \"" msg "\" ; .text") +/* + * Fix __builtin_return_address() when compile with -fxor-ret-protector. 
+ */ +#if defined(_RET_PROTECTOR) +# if defined(_KERNEL) +# define __builtin_return_address(d) ((void *) \ + ((size_t)__builtin_return_address(d) ^ \ + (unsigned short)__builtin_frame_address(d) + sizeof(void *))) +# else +# define __builtin_return_address(d) ((void *) \ + ((size_t)__builtin_return_address(d) ^ \ + (size_t)__builtin_frame_address(d) + sizeof(void *))) +# endif +#endif + +#if defined(_KERNEL) +#define _KERNEL_XORRET unsigned short +#endif + #endif /* !_MACHINE_CDEFS_H_ */ Index: sys/arch/i386/stand/cdboot/srt0.S =================================================================== RCS file: /cvs/src/sys/arch/i386/stand/cdboot/srt0.S,v retrieving revision 1.3 diff -u -p -u -r1.3 srt0.S --- sys/arch/i386/stand/cdboot/srt0.S 31 Oct 2012 14:31:30 -0000 1.3 +++ sys/arch/i386/stand/cdboot/srt0.S 19 Aug 2017 02:33:33 -0000 @@ -204,6 +204,7 @@ ENTRY(debugchar) movb %al, (%ebx) popl %ebx ret +END(debugchar) .code16 Index: sys/arch/i386/stand/libsa/debug_i386.S =================================================================== RCS file: /cvs/src/sys/arch/i386/stand/libsa/debug_i386.S,v retrieving revision 1.12 diff -u -p -u -r1.12 debug_i386.S --- sys/arch/i386/stand/libsa/debug_i386.S 9 Mar 2004 19:12:12 -0000 1.12 +++ sys/arch/i386/stand/libsa/debug_i386.S 19 Aug 2017 02:33:33 -0000 @@ -122,3 +122,4 @@ ENTRY(check_regs) movl $0x47374736, (%edi) #endif ret +END(check_regs) Index: sys/arch/i386/stand/libsa/gidt.S =================================================================== RCS file: /cvs/src/sys/arch/i386/stand/libsa/gidt.S,v retrieving revision 1.36 diff -u -p -u -r1.36 gidt.S --- sys/arch/i386/stand/libsa/gidt.S 31 Oct 2012 13:55:58 -0000 1.36 +++ sys/arch/i386/stand/libsa/gidt.S 19 Aug 2017 02:33:33 -0000 @@ -161,6 +161,7 @@ ENTRY(_rtt) /* Again... */ movl $0, %esp /* segment violation */ ret +END(_rtt) #define IPROC(n) X##n #define IEMU(n) IPROC(emu##n) @@ -465,5 +466,6 @@ ENTRY(bootbuf) /* Jump to buffer */ ljmp $0x0, $0x7c00 +END(bootbuf) .end Index: sys/arch/i386/stand/libsa/pxe_call.S =================================================================== RCS file: /cvs/src/sys/arch/i386/stand/libsa/pxe_call.S,v retrieving revision 1.4 diff -u -p -u -r1.4 pxe_call.S --- sys/arch/i386/stand/libsa/pxe_call.S 2 Jan 2006 00:26:29 -0000 1.4 +++ sys/arch/i386/stand/libsa/pxe_call.S 19 Aug 2017 02:33:33 -0000 @@ -82,6 +82,7 @@ _C_LABEL(bangpxe_seg) = . - 2 popl %ebx popl %ebp ret +END(pxecall_bangpxe) ENTRY(pxecall_pxenv) .code32 @@ -125,6 +126,7 @@ _C_LABEL(pxenv_seg) = . 
- 2 popl %ebx popl %ebp ret +END(pxecall_pxenv) /* * prot_to_real() Index: sys/arch/i386/stand/libsa/random_i386.S =================================================================== RCS file: /cvs/src/sys/arch/i386/stand/libsa/random_i386.S,v retrieving revision 1.10 diff -u -p -u -r1.10 random_i386.S --- sys/arch/i386/stand/libsa/random_i386.S 12 Feb 2016 21:36:33 -0000 1.10 +++ sys/arch/i386/stand/libsa/random_i386.S 19 Aug 2017 02:33:33 -0000 @@ -104,3 +104,4 @@ usetsc: done: popal ret +END(mdrandom) Index: sys/arch/i386/stand/pxeboot/srt0.S =================================================================== RCS file: /cvs/src/sys/arch/i386/stand/pxeboot/srt0.S,v retrieving revision 1.3 diff -u -p -u -r1.3 srt0.S --- sys/arch/i386/stand/pxeboot/srt0.S 31 Oct 2012 14:31:30 -0000 1.3 +++ sys/arch/i386/stand/pxeboot/srt0.S 19 Aug 2017 02:33:33 -0000 @@ -199,6 +199,7 @@ ENTRY(debugchar) movb %al, (%ebx) popl %ebx ret +END(debugchar) .code16 Index: sys/lib/libkern/arch/amd64/bcmp.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/amd64/bcmp.S,v retrieving revision 1.3 diff -u -p -u -r1.3 bcmp.S --- sys/lib/libkern/arch/amd64/bcmp.S 29 Nov 2014 18:51:23 -0000 1.3 +++ sys/lib/libkern/arch/amd64/bcmp.S 18 Aug 2017 02:28:21 -0000 @@ -1,6 +1,7 @@ #include <machine/asm.h> ENTRY(bcmp) + RETGUARD_START xorl %eax,%eax /* clear return value */ movq %rdx,%rcx /* compare by words */ @@ -16,4 +17,6 @@ ENTRY(bcmp) je L2 L1: incl %eax -L2: ret +L2: RETGUARD_END + ret +END(bcmp) Index: sys/lib/libkern/arch/amd64/bzero.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/amd64/bzero.S,v retrieving revision 1.4 diff -u -p -u -r1.4 bzero.S --- sys/lib/libkern/arch/amd64/bzero.S 29 Nov 2014 18:51:23 -0000 1.4 +++ sys/lib/libkern/arch/amd64/bzero.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,7 @@ #include <machine/asm.h> ENTRY(bzero) + RETGUARD_START movq %rsi,%rdx xorq %rax,%rax /* set fill data to 0 */ @@ -36,4 +37,6 @@ L1: movq %rdx,%rcx /* zero remainder by rep stosb + RETGUARD_END ret +END(bzero) Index: sys/lib/libkern/arch/amd64/ffs.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/amd64/ffs.S,v retrieving revision 1.2 diff -u -p -u -r1.2 ffs.S --- sys/lib/libkern/arch/amd64/ffs.S 24 Nov 2007 19:28:25 -0000 1.2 +++ sys/lib/libkern/arch/amd64/ffs.S 18 Aug 2017 02:28:21 -0000 @@ -7,11 +7,15 @@ #include <machine/asm.h> ENTRY(ffs) + RETGUARD_START bsfl %edi,%eax jz L1 /* ZF is set if all bits are 0 */ incl %eax /* bits numbered from 1, not 0 */ + RETGUARD_END ret _ALIGN_TEXT L1: xorl %eax,%eax /* clear result */ + RETGUARD_END ret +END(ffs) Index: sys/lib/libkern/arch/amd64/htonl.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/amd64/htonl.S,v retrieving revision 1.1 diff -u -p -u -r1.1 htonl.S --- sys/lib/libkern/arch/amd64/htonl.S 25 Nov 2007 18:25:34 -0000 1.1 +++ sys/lib/libkern/arch/amd64/htonl.S 18 Aug 2017 02:28:21 -0000 @@ -44,6 +44,10 @@ _ENTRY(_C_LABEL(htonl)) _ENTRY(_C_LABEL(ntohl)) _ENTRY(_C_LABEL(bswap32)) _PROF_PROLOGUE + .cfi_startproc + RETGUARD_START movl %edi,%eax bswap %eax + RETGUARD_END ret +END(_C_LABEL(htonl)) Index: sys/lib/libkern/arch/amd64/htons.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/amd64/htons.S,v retrieving revision 1.1 diff -u -p -u -r1.1 htons.S --- 
sys/lib/libkern/arch/amd64/htons.S 25 Nov 2007 18:25:34 -0000 1.1 +++ sys/lib/libkern/arch/amd64/htons.S 18 Aug 2017 02:28:21 -0000 @@ -44,6 +44,10 @@ _ENTRY(_C_LABEL(htons)) _ENTRY(_C_LABEL(ntohs)) _ENTRY(_C_LABEL(bswap16)) _PROF_PROLOGUE + .cfi_startproc + RETGUARD_START movl %edi,%eax xchgb %ah,%al + RETGUARD_END ret +END(_C_LABEL(htons)) Index: sys/lib/libkern/arch/amd64/memchr.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/amd64/memchr.S,v retrieving revision 1.3 diff -u -p -u -r1.3 memchr.S --- sys/lib/libkern/arch/amd64/memchr.S 29 Nov 2014 18:51:23 -0000 1.3 +++ sys/lib/libkern/arch/amd64/memchr.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,7 @@ #include <machine/asm.h> ENTRY(memchr) + RETGUARD_START movb %sil,%al /* set character to search for */ movq %rdx,%rcx /* set length of search */ testq %rcx,%rcx /* test for len == 0 */ @@ -15,6 +16,9 @@ ENTRY(memchr) scasb jne L1 /* scan failed, return null */ leaq -1(%rdi),%rax /* adjust result of scan */ + RETGUARD_END ret L1: xorq %rax,%rax + RETGUARD_END ret +END(memchr) Index: sys/lib/libkern/arch/amd64/memcmp.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/amd64/memcmp.S,v retrieving revision 1.3 diff -u -p -u -r1.3 memcmp.S --- sys/lib/libkern/arch/amd64/memcmp.S 29 Nov 2014 18:51:23 -0000 1.3 +++ sys/lib/libkern/arch/amd64/memcmp.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,7 @@ #include <machine/asm.h> ENTRY(memcmp) + RETGUARD_START movq %rdx,%rcx /* compare by longs */ shrq $3,%rcx repe @@ -20,6 +21,7 @@ ENTRY(memcmp) jne L6 /* do we match? */ xorl %eax,%eax /* we match, return zero */ + RETGUARD_END ret L5: movl $8,%ecx /* We know that one of the next */ @@ -32,4 +34,7 @@ L6: xorl %eax,%eax /* Perform unsigned xorl %edx,%edx movb -1(%rsi),%dl subl %edx,%eax + RETGUARD_END ret +END(memcmp) + Index: sys/lib/libkern/arch/amd64/memmove.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/amd64/memmove.S,v retrieving revision 1.5 diff -u -p -u -r1.5 memmove.S --- sys/lib/libkern/arch/amd64/memmove.S 29 Nov 2014 18:51:23 -0000 1.5 +++ sys/lib/libkern/arch/amd64/memmove.S 18 Aug 2017 02:28:21 -0000 @@ -41,10 +41,14 @@ */ ENTRY(bcopy) + RETGUARD_START xchgq %rdi,%rsi - /* fall into memmove */ + jmp 9f /* go do memmove */ +END(bcopy) ENTRY(memmove) + RETGUARD_START +9: movq %rdi,%r11 /* save dest */ movq %rdx,%rcx movq %rdi,%rax @@ -52,8 +56,10 @@ ENTRY(memmove) cmpq %rcx,%rax /* overlapping? */ jb 1f jmp 2f /* nope */ +END(memmove) ENTRY(memcpy) + RETGUARD_START movq %rdi,%r11 /* save dest */ movq %rdx,%rcx 2: @@ -65,6 +71,7 @@ ENTRY(memcpy) rep movsb movq %r11,%rax + RETGUARD_END ret 1: addq %rcx,%rdi /* copy backwards. 
*/ @@ -83,4 +90,6 @@ ENTRY(memcpy) movsq movq %r11,%rax cld + RETGUARD_END ret +END(memcpy) Index: sys/lib/libkern/arch/amd64/memset.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/amd64/memset.S,v retrieving revision 1.5 diff -u -p -u -r1.5 memset.S --- sys/lib/libkern/arch/amd64/memset.S 29 Nov 2014 18:51:23 -0000 1.5 +++ sys/lib/libkern/arch/amd64/memset.S 18 Aug 2017 02:28:21 -0000 @@ -7,6 +7,7 @@ #include <machine/asm.h> ENTRY(memset) + RETGUARD_START movq %rsi,%rax andq $0xff,%rax movq %rdx,%rcx @@ -50,4 +51,6 @@ L1: rep stosb movq %r11,%rax + RETGUARD_END ret +END(memset) Index: sys/lib/libkern/arch/amd64/scanc.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/amd64/scanc.S,v retrieving revision 1.3 diff -u -p -u -r1.3 scanc.S --- sys/lib/libkern/arch/amd64/scanc.S 29 Nov 2014 18:51:23 -0000 1.3 +++ sys/lib/libkern/arch/amd64/scanc.S 18 Aug 2017 02:28:21 -0000 @@ -36,6 +36,7 @@ #include <machine/asm.h> ENTRY(scanc) + RETGUARD_START movq %rdx,%r11 movb %cl,%dl movl %edi,%ecx @@ -51,4 +52,6 @@ ENTRY(scanc) jnz 1b 2: movl %ecx,%eax + RETGUARD_END ret +END(scanc) Index: sys/lib/libkern/arch/amd64/skpc.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/amd64/skpc.S,v retrieving revision 1.3 diff -u -p -u -r1.3 skpc.S --- sys/lib/libkern/arch/amd64/skpc.S 29 Nov 2014 18:51:23 -0000 1.3 +++ sys/lib/libkern/arch/amd64/skpc.S 18 Aug 2017 02:28:21 -0000 @@ -36,6 +36,7 @@ #include <machine/asm.h> ENTRY(skpc) + RETGUARD_START movl %edi,%eax movq %rsi,%rcx movq %rdx,%rdi @@ -45,4 +46,6 @@ ENTRY(skpc) incq %rcx 1: movl %ecx,%eax + RETGUARD_END ret +END(skpc) Index: sys/lib/libkern/arch/amd64/strchr.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/amd64/strchr.S,v retrieving revision 1.3 diff -u -p -u -r1.3 strchr.S --- sys/lib/libkern/arch/amd64/strchr.S 9 Dec 2014 15:13:57 -0000 1.3 +++ sys/lib/libkern/arch/amd64/strchr.S 18 Aug 2017 02:28:21 -0000 @@ -44,6 +44,7 @@ STRONG_ALIAS(index, strchr) */ ENTRY(strchr) + RETGUARD_START movabsq $0x0101010101010101,%r8 movzbq %sil,%rdx /* value to search for (c) */ @@ -85,6 +86,7 @@ ENTRY(strchr) bsf %r11,%r11 /* 7, 15, 23 ... 63 */ 8: shr $3,%r11 /* 0, 1, 2 .. 7 */ lea -8(%r11,%rdi),%rax + RETGUARD_END ret /* End of string, check whether char is before NUL */ @@ -97,6 +99,7 @@ ENTRY(strchr) cmp %r11,%rax jae 8b /* return 'found' if same - searching for NUL */ 11: xor %eax,%eax /* char not found */ + RETGUARD_END ret /* Source misaligned: read aligned word and make low bytes invalid */ @@ -123,3 +126,4 @@ ENTRY(strchr) sar %cl,%r10 /* top bytes 0xff */ and %r10,%rax /* clear lsb from unwanted low bytes */ jmp 21b +END(strchr) Index: sys/lib/libkern/arch/amd64/strcmp.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/amd64/strcmp.S,v retrieving revision 1.3 diff -u -p -u -r1.3 strcmp.S --- sys/lib/libkern/arch/amd64/strcmp.S 9 Dec 2014 15:13:57 -0000 1.3 +++ sys/lib/libkern/arch/amd64/strcmp.S 18 Aug 2017 02:28:21 -0000 @@ -9,6 +9,7 @@ #include <machine/asm.h> ENTRY(strcmp) + RETGUARD_START /* * Align s1 to word boundary. * Consider unrolling loop? 
@@ -68,4 +69,6 @@ ENTRY(strcmp) movzbq %al,%rax movzbq %dl,%rdx subq %rdx,%rax + RETGUARD_END ret +END(strcmp) Index: sys/lib/libkern/arch/amd64/strlen.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/amd64/strlen.S,v retrieving revision 1.5 diff -u -p -u -r1.5 strlen.S --- sys/lib/libkern/arch/amd64/strlen.S 20 Mar 2016 16:50:30 -0000 1.5 +++ sys/lib/libkern/arch/amd64/strlen.S 18 Aug 2017 02:28:21 -0000 @@ -112,6 +112,7 @@ */ ENTRY(strlen) + RETGUARD_START movabsq $0x0101010101010101,%r8 test $7,%dil @@ -139,6 +140,7 @@ ENTRY(strlen) bsf %rdx,%rdx /* 7, 15, 23 ... 63 */ shr $3,%rdx /* 0, 1, 2 ... 7 */ lea -8(%rax,%rdx),%rax + RETGUARD_END ret /* Misaligned, read aligned word and make low bytes non-zero */ @@ -154,3 +156,4 @@ ENTRY(strlen) dec %rsi or %rsi,%rdx /* low bytes now non-zero */ jmp 2b +END(strlen) Index: sys/lib/libkern/arch/amd64/strrchr.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/amd64/strrchr.S,v retrieving revision 1.3 diff -u -p -u -r1.3 strrchr.S --- sys/lib/libkern/arch/amd64/strrchr.S 9 Dec 2014 15:13:57 -0000 1.3 +++ sys/lib/libkern/arch/amd64/strrchr.S 18 Aug 2017 02:28:21 -0000 @@ -11,6 +11,7 @@ STRONG_ALIAS(rindex, strrchr) ENTRY(strrchr) + RETGUARD_START movzbq %sil,%rcx /* zero return value */ @@ -120,4 +121,6 @@ ENTRY(strrchr) jne .Lloop .Ldone: + RETGUARD_END ret +END(strrchr) Index: sys/lib/libkern/arch/i386/bcmp.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/i386/bcmp.S,v retrieving revision 1.3 diff -u -p -u -r1.3 bcmp.S --- sys/lib/libkern/arch/i386/bcmp.S 29 Nov 2014 18:51:23 -0000 1.3 +++ sys/lib/libkern/arch/i386/bcmp.S 18 Aug 2017 02:28:21 -0000 @@ -8,6 +8,7 @@ #include <machine/asm.h> ENTRY(bcmp) + RETGUARD_START pushl %edi pushl %esi movl 12(%esp),%edi @@ -29,4 +30,6 @@ ENTRY(bcmp) L1: incl %eax L2: popl %esi popl %edi + RETGUARD_END ret +END(bcmp) Index: sys/lib/libkern/arch/i386/ffs.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/i386/ffs.S,v retrieving revision 1.2 diff -u -p -u -r1.2 ffs.S --- sys/lib/libkern/arch/i386/ffs.S 27 Sep 1996 06:47:45 -0000 1.2 +++ sys/lib/libkern/arch/i386/ffs.S 18 Aug 2017 02:28:21 -0000 @@ -8,11 +8,15 @@ #include <machine/asm.h> ENTRY(ffs) + RETGUARD_START bsfl 4(%esp),%eax jz L1 /* ZF is set if all bits are 0 */ incl %eax /* bits numbered from 1, not 0 */ + RETGUARD_END ret .align 2 L1: xorl %eax,%eax /* clear result */ + RETGUARD_END ret +END(ffs) Index: sys/lib/libkern/arch/i386/htonl.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/i386/htonl.S,v retrieving revision 1.4 diff -u -p -u -r1.4 htonl.S --- sys/lib/libkern/arch/i386/htonl.S 25 Nov 2007 18:25:35 -0000 1.4 +++ sys/lib/libkern/arch/i386/htonl.S 18 Aug 2017 02:28:21 -0000 @@ -38,10 +38,31 @@ /* netorder = htonl(hostorder) AND hostorder = ntohl(netorder) */ ENTRY(ntohl) + RETGUARD_START + movl 4(%esp),%eax + rorw $8,%ax + roll $16,%eax + rorw $8,%ax + RETGUARD_END + ret +END(ntohl) + ENTRY(htonl) + RETGUARD_START + movl 4(%esp),%eax + rorw $8,%ax + roll $16,%eax + rorw $8,%ax + RETGUARD_END + ret +END(htonl) + ENTRY(swap32) + RETGUARD_START movl 4(%esp),%eax rorw $8,%ax roll $16,%eax rorw $8,%ax + RETGUARD_END ret +END(swap32) Index: sys/lib/libkern/arch/i386/htons.S =================================================================== RCS file: 
/cvs/src/sys/lib/libkern/arch/i386/htons.S,v retrieving revision 1.4 diff -u -p -u -r1.4 htons.S --- sys/lib/libkern/arch/i386/htons.S 25 Nov 2007 18:25:35 -0000 1.4 +++ sys/lib/libkern/arch/i386/htons.S 18 Aug 2017 02:28:21 -0000 @@ -38,8 +38,25 @@ /* netorder = htons(hostorder) AND hostorder = ntohs(netorder) */ ENTRY(htons) + RETGUARD_START + movzwl 4(%esp),%eax + rorw $8,%ax + RETGUARD_END + ret +END(htons) + ENTRY(ntohs) + RETGUARD_START + movzwl 4(%esp),%eax + rorw $8,%ax + RETGUARD_END + ret +END(ntohs) + ENTRY(swap16) + RETGUARD_START movzwl 4(%esp),%eax rorw $8,%ax + RETGUARD_END ret +END(swap16) Index: sys/lib/libkern/arch/i386/memchr.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/i386/memchr.S,v retrieving revision 1.2 diff -u -p -u -r1.2 memchr.S --- sys/lib/libkern/arch/i386/memchr.S 29 Nov 2014 18:51:23 -0000 1.2 +++ sys/lib/libkern/arch/i386/memchr.S 18 Aug 2017 02:28:21 -0000 @@ -8,6 +8,7 @@ #include <machine/asm.h> ENTRY(memchr) + RETGUARD_START pushl %edi movl 8(%esp),%edi /* string address */ movl 12(%esp),%eax /* set character to search for */ @@ -19,8 +20,11 @@ ENTRY(memchr) jne L1 /* scan failed, return null */ leal -1(%edi),%eax /* adjust result of scan */ popl %edi + RETGUARD_END ret .align 2,0x90 L1: xorl %eax,%eax popl %edi + RETGUARD_END ret +END(memchr) Index: sys/lib/libkern/arch/i386/memcmp.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/i386/memcmp.S,v retrieving revision 1.2 diff -u -p -u -r1.2 memcmp.S --- sys/lib/libkern/arch/i386/memcmp.S 29 Nov 2014 18:51:23 -0000 1.2 +++ sys/lib/libkern/arch/i386/memcmp.S 18 Aug 2017 02:28:21 -0000 @@ -8,6 +8,7 @@ #include <machine/asm.h> ENTRY(memcmp) + RETGUARD_START pushl %edi pushl %esi movl 12(%esp),%edi @@ -28,6 +29,7 @@ ENTRY(memcmp) xorl %eax,%eax /* we match, return zero */ popl %esi popl %edi + RETGUARD_END ret L5: movl $4,%ecx /* We know that one of the next */ @@ -40,4 +42,6 @@ L6: movzbl -1(%edi),%eax /* Perform un subl %edx,%eax popl %esi popl %edi + RETGUARD_END ret +END(memcmp) Index: sys/lib/libkern/arch/i386/memmove.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/i386/memmove.S,v retrieving revision 1.7 diff -u -p -u -r1.7 memmove.S --- sys/lib/libkern/arch/i386/memmove.S 29 Nov 2014 18:51:23 -0000 1.7 +++ sys/lib/libkern/arch/i386/memmove.S 18 Aug 2017 02:28:21 -0000 @@ -41,17 +41,20 @@ * into memmove(), which handles overlapping regions. */ ENTRY(bcopy) + RETGUARD_START pushl %esi pushl %edi movl 12(%esp),%esi movl 16(%esp),%edi jmp docopy +END(bcopy) /* * memmove(caddr_t dst, caddr_t src, size_t len); * Copy len bytes, coping with overlapping space. */ ENTRY(memmove) + RETGUARD_START pushl %esi pushl %edi movl 12(%esp),%edi @@ -63,10 +66,13 @@ docopy: cmpl %ecx,%eax # overlapping? 
jb 1f jmp docopyf # nope +END(memmove) + /* * memcpy() doesn't worry about overlap and always copies forward */ ENTRY(memcpy) + RETGUARD_START pushl %esi pushl %edi movl 12(%esp),%edi @@ -83,6 +89,7 @@ docopyf: movsb popl %edi popl %esi + RETGUARD_END ret _ALIGN_TEXT @@ -104,5 +111,6 @@ docopyf: popl %edi popl %esi cld + RETGUARD_END ret - +END(memcpy) Index: sys/lib/libkern/arch/i386/memset.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/i386/memset.S,v retrieving revision 1.4 diff -u -p -u -r1.4 memset.S --- sys/lib/libkern/arch/i386/memset.S 29 Nov 2014 18:51:23 -0000 1.4 +++ sys/lib/libkern/arch/i386/memset.S 18 Aug 2017 02:28:21 -0000 @@ -8,6 +8,7 @@ #include <machine/asm.h> ENTRY(memset) + RETGUARD_START pushl %edi pushl %ebx movl 12(%esp),%edi @@ -51,4 +52,6 @@ L1: rep popl %eax /* pop address of buffer */ popl %ebx popl %edi + RETGUARD_START ret +END(memset) Index: sys/lib/libkern/arch/i386/scanc.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/i386/scanc.S,v retrieving revision 1.3 diff -u -p -u -r1.3 scanc.S --- sys/lib/libkern/arch/i386/scanc.S 29 Nov 2014 18:51:23 -0000 1.3 +++ sys/lib/libkern/arch/i386/scanc.S 18 Aug 2017 02:28:21 -0000 @@ -33,6 +33,7 @@ #include "DEFS.h" ENTRY(scanc) + RETGUARD_START movl 4(%esp),%ecx testl %ecx,%ecx jz 3f @@ -53,4 +54,6 @@ ENTRY(scanc) popl %esi 3: movl %ecx,%eax + RETGUARD_END ret +END(scanc) Index: sys/lib/libkern/arch/i386/skpc.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/i386/skpc.S,v retrieving revision 1.3 diff -u -p -u -r1.3 skpc.S --- sys/lib/libkern/arch/i386/skpc.S 29 Nov 2014 18:51:23 -0000 1.3 +++ sys/lib/libkern/arch/i386/skpc.S 18 Aug 2017 02:28:21 -0000 @@ -33,6 +33,7 @@ #include "DEFS.h" ENTRY(skpc) + RETGUARD_START pushl %edi movl 16(%esp),%edi movl 12(%esp),%ecx @@ -44,4 +45,6 @@ ENTRY(skpc) 1: movl %ecx,%eax popl %edi + RETGUARD_END ret +END(skpc) Index: sys/lib/libkern/arch/i386/strcmp.S =================================================================== RCS file: /cvs/src/sys/lib/libkern/arch/i386/strcmp.S,v retrieving revision 1.2 diff -u -p -u -r1.2 strcmp.S --- sys/lib/libkern/arch/i386/strcmp.S 27 Sep 1996 06:47:49 -0000 1.2 +++ sys/lib/libkern/arch/i386/strcmp.S 18 Aug 2017 02:28:21 -0000 @@ -14,6 +14,7 @@ */ ENTRY(strcmp) + RETGUARD_START movl 0x04(%esp),%eax movl 0x08(%esp),%edx jmp L2 /* Jump into the loop! */ @@ -79,4 +80,6 @@ L2: movb (%eax),%cl L3: movzbl (%eax),%eax /* unsigned comparison */ movzbl (%edx),%edx subl %edx,%eax + RETGUARD_END ret +END(strcmp)
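
For completeness, here is a second stand-alone sketch, again illustration
only with made-up names, of the recovery done by the GETPC() macro added to
the i386 db_trace.c above: a frame walker reads the stored return address and
XORs it with the truncated address of the slot it came from, mirroring the
16-bit "xor %sp,(%esp)" the i386 kernel macros use on a non-profiling kernel.

    #include <stdint.h>
    #include <stdio.h>

    /* layout of one frame, as the ddb stack walker sees it */
    struct callframe {
            struct callframe *f_frame;      /* saved frame pointer */
            uint32_t          f_retaddr;    /* swizzled return address */
    };

    /* what GETPC() computes: only the low 16 bits of the slot's own
     * address were mixed into the stored value by the kernel macros */
    static uint32_t
    unswizzle_pc(const struct callframe *fp)
    {
            return fp->f_retaddr ^ (uint16_t)(uintptr_t)&fp->f_retaddr;
    }

    int
    main(void)
    {
            struct callframe f;
            uint32_t pc = 0xd0200ab0;       /* fake kernel text address */

            f.f_frame = NULL;
            /* store it the way the prologue would */
            f.f_retaddr = pc ^ (uint16_t)(uintptr_t)&f.f_retaddr;
            printf("stored %#x, recovered %#x\n",
                (unsigned)f.f_retaddr, (unsigned)unswizzle_pc(&f));
            return 0;
    }

The amd64 and userland cases differ only in the width of the cast applied to
the slot address, matching the casts in the respective cdefs.h hunks.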