# HG changeset patch
# User Morris Meyer
# Date 1370634180 14400
# Node ID 0f7ca53be9294c179d7e032d8defe6d48fdc3f41
# Parent  78a1232be4184532139abbb9bca213e9abe6c23e
CR-806: Changes to build Graal for SPARC

diff -r 78a1232be418 -r 0f7ca53be929 graal/com.oracle.graal.hotspot.sparc/src/com/oracle/graal/hotspot/sparc/SPARCHotSpotGraalRuntime.java
--- a/graal/com.oracle.graal.hotspot.sparc/src/com/oracle/graal/hotspot/sparc/SPARCHotSpotGraalRuntime.java	Fri Jun 07 16:10:07 2013 +0200
+++ b/graal/com.oracle.graal.hotspot.sparc/src/com/oracle/graal/hotspot/sparc/SPARCHotSpotGraalRuntime.java	Fri Jun 07 15:43:00 2013 -0400
@@ -25,6 +25,7 @@
 import com.oracle.graal.api.code.*;
 import com.oracle.graal.hotspot.*;
 import com.oracle.graal.hotspot.meta.*;
+import com.oracle.graal.sparc.*;
 
 /**
  * SPARC specific implementation of {@link HotSpotGraalRuntime}.
@@ -44,10 +45,15 @@
         return graalRuntime();
     }
 
+    protected static Architecture createArchitecture() {
+        return new SPARC();
+    }
+
     @Override
     protected TargetDescription createTarget() {
-        // SPARC: Create target description.
-        throw new InternalError("NYI");
+        final int stackFrameAlignment = 16;
+        final int implicitNullCheckLimit = 4096;
+        return new TargetDescription(createArchitecture(), true, stackFrameAlignment, implicitNullCheckLimit, true);
     }
 
     @Override
diff -r 78a1232be418 -r 0f7ca53be929 make/solaris/makefiles/buildtree.make
--- a/make/solaris/makefiles/buildtree.make	Fri Jun 07 16:10:07 2013 +0200
+++ b/make/solaris/makefiles/buildtree.make	Fri Jun 07 15:43:00 2013 -0400
@@ -229,7 +229,9 @@
 	echo "$(call gamma-path,altsrc,os/$(OS_FAMILY)/vm) \\"; \
 	echo "$(call gamma-path,commonsrc,os/$(OS_FAMILY)/vm) \\"; \
 	echo "$(call gamma-path,altsrc,os/posix/vm) \\"; \
-	echo "$(call gamma-path,commonsrc,os/posix/vm)"; \
+	echo "$(call gamma-path,commonsrc,os/posix/vm) \\"; \
+	echo "$(call gamma-path,altsrc,gpu/ptx) \\"; \
+	echo "$(call gamma-path,commonsrc,gpu/ptx)"; \
 	echo; \
 	echo "Src_Dirs_I = \\"; \
 	echo "$(call gamma-path,altsrc,share/vm/prims) \\"; \
@@ -244,8 +246,9 @@
 	echo "$(call gamma-path,commonsrc,os_cpu/$(OS_FAMILY)_$(ARCH)/vm) \\"; \
 	echo "$(call gamma-path,altsrc,os/$(OS_FAMILY)/vm) \\"; \
 	echo "$(call gamma-path,commonsrc,os/$(OS_FAMILY)/vm) \\"; \
-	echo "$(call gamma-path,altsrc,os/posix/vm) \\"; \
-	echo "$(call gamma-path,commonsrc,os/posix/vm)"; \
+	echo "$(call gamma-path,commonsrc,os/posix/vm) \\"; \
+	echo "$(call gamma-path,altsrc,gpu) \\"; \
+	echo "$(call gamma-path,commonsrc,gpu)"; \
 	[ -n "$(CFLAGS_BROWSE)" ] && \
 	echo && echo "CFLAGS_BROWSE = $(CFLAGS_BROWSE)"; \
 	[ -n "$(ENABLE_FULL_DEBUG_SYMBOLS)" ] && \
diff -r 78a1232be418 -r 0f7ca53be929 make/solaris/makefiles/vm.make
--- a/make/solaris/makefiles/vm.make	Fri Jun 07 16:10:07 2013 +0200
+++ b/make/solaris/makefiles/vm.make	Fri Jun 07 15:43:00 2013 -0400
@@ -194,7 +194,9 @@
 COMPILER2_PATHS += $(GENERATED)/adfiles
 
 GRAAL_PATHS += $(call altsrc,$(HS_COMMON_SRC)/share/vm/graal)
+GRAAL_PATHS += $(call altsrc,$(HS_COMMON_SRC)/gpu/ptx)
 GRAAL_PATHS += $(HS_COMMON_SRC)/share/vm/graal
+GRAAL_PATHS += $(HS_COMMON_SRC)/gpu/ptx
 
 # Include dirs per type.
 Src_Dirs/CORE := $(CORE_PATHS)
diff -r 78a1232be418 -r 0f7ca53be929 mx/commands.py
--- a/mx/commands.py	Fri Jun 07 16:10:07 2013 +0200
+++ b/mx/commands.py	Fri Jun 07 15:43:00 2013 -0400
@@ -232,6 +232,8 @@
     machine = platform.uname()[4]
     if machine in ['amd64', 'AMD64', 'x86_64', 'i86pc']:
         return 'amd64'
+    if machine in ['sun4v']:
+        return 'sparc'
     if machine == 'i386' and mx.get_os() == 'darwin':
         try:
             # Support for Snow Leopard and earlier version of MacOSX
diff -r 78a1232be418 -r 0f7ca53be929 src/cpu/sparc/vm/codeInstaller_sparc.hpp
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/cpu/sparc/vm/codeInstaller_sparc.hpp	Fri Jun 07 15:43:00 2013 -0400
@@ -0,0 +1,53 @@
+/*
+ * Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ */
+#ifndef CPU_SPARC_VM_CODEINSTALLER_SPARC_HPP
+#define CPU_SPARC_VM_CODEINSTALLER_SPARC_HPP
+
+inline jint CodeInstaller::pd_next_offset(NativeInstruction* inst, jint pc_offset, oop method) {
+  fatal("CodeInstaller::pd_next_offset - sparc unimp");
+  return 0;
+}
+
+inline void CodeInstaller::pd_site_DataPatch(oop constant, oop kind, bool inlined,
+                                             address instruction, int alignment, char typeChar) {
+  fatal("CodeInstaller::pd_site_DataPatch - sparc unimp");
+}
+
+inline void CodeInstaller::pd_relocate_CodeBlob(CodeBlob* cb, NativeInstruction* inst) {
+  fatal("CodeInstaller::pd_relocate_CodeBlob - sparc unimp");
+}
+
+inline void CodeInstaller::pd_relocate_ForeignCall(NativeInstruction* inst, jlong foreign_call_destination) {
+  fatal("CodeInstaller::pd_relocate_ForeignCall - sparc unimp");
+}
+
+inline void CodeInstaller::pd_relocate_JavaMethod(oop method, jint pc_offset) {
+  fatal("CodeInstaller::pd_relocate_JavaMethod - sparc unimp");
+}
+
+inline int32_t* CodeInstaller::pd_locate_operand(address instruction) {
+  fatal("CodeInstaller::pd_locate_operand - sparc unimp");
+  return (int32_t*)0;
+}
+
+#endif // CPU_SPARC_VM_CODEINSTALLER_SPARC_HPP
diff -r 78a1232be418 -r 0f7ca53be929 src/cpu/sparc/vm/graalGlobals_sparc.hpp
--- a/src/cpu/sparc/vm/graalGlobals_sparc.hpp	Fri Jun 07 16:10:07 2013 +0200
+++ b/src/cpu/sparc/vm/graalGlobals_sparc.hpp	Fri Jun 07 15:43:00 2013 -0400
@@ -31,6 +31,32 @@
 
 // Sets the default values for platform dependent flags used by the Graal compiler.
 // (see graalGlobals.hpp)
 
-define_pd_global(intx, GraalSafepointPollOffset, 0 );
+#if !defined(COMPILER1) && !defined(COMPILER2)
+define_pd_global(bool, BackgroundCompilation,        true );
+define_pd_global(bool, UseTLAB,                      true );
+define_pd_global(bool, ResizeTLAB,                   true );
+define_pd_global(bool, InlineIntrinsics,             true );
+define_pd_global(bool, PreferInterpreterNativeStubs, false);
+define_pd_global(bool, TieredCompilation,            false);
+define_pd_global(intx, BackEdgeThreshold,            100000);
+
+define_pd_global(intx, OnStackReplacePercentage,     933  );
+define_pd_global(intx, FreqInlineSize,               325  );
+define_pd_global(intx, NewSizeThreadIncrease,        4*K  );
+define_pd_global(uintx,MetaspaceSize,                12*M );
+define_pd_global(bool, NeverActAsServerClassMachine, false);
+define_pd_global(uint64_t,MaxRAM,                    1ULL*G);
+define_pd_global(bool, CICompileOSR,                 true );
+define_pd_global(bool, ProfileTraps,                 true );
+define_pd_global(bool, UseOnStackReplacement,        true );
+define_pd_global(intx, CompileThreshold,             10000);
+define_pd_global(intx, InitialCodeCacheSize,         16*M );
+define_pd_global(intx, ReservedCodeCacheSize,        64*M );
+define_pd_global(bool, ProfileInterpreter,           true );
+define_pd_global(intx, CodeCacheExpansionSize,       64*K );
+define_pd_global(uintx,CodeCacheMinBlockLength,      4);
+define_pd_global(intx, TypeProfileWidth,             8);
+define_pd_global(intx, MethodProfileWidth,           4);
+#endif
 
 #endif // CPU_SPARC_VM_GRAALGLOBALS_SPARC_HPP
diff -r 78a1232be418 -r 0f7ca53be929 src/cpu/sparc/vm/interpreterGenerator_sparc.hpp
--- a/src/cpu/sparc/vm/interpreterGenerator_sparc.hpp	Fri Jun 07 16:10:07 2013 +0200
+++ b/src/cpu/sparc/vm/interpreterGenerator_sparc.hpp	Fri Jun 07 15:43:00 2013 -0400
@@ -27,11 +27,16 @@
 
  friend class AbstractInterpreterGenerator;
 
+  address generate_deopt_entry_for(TosState state, int step);
+
 private:
 
   address generate_normal_entry(bool synchronized);
   address generate_native_entry(bool synchronized);
-  address generate_abstract_entry(void);
+#ifdef GRAAL
+  address generate_execute_compiled_method_entry();
+#endif
+  address generate_abstract_entry(void);
   address generate_math_entry(AbstractInterpreter::MethodKind kind);
   address generate_empty_entry(void);
   address generate_accessor_entry(void);
diff -r 78a1232be418 -r 0f7ca53be929 src/cpu/sparc/vm/nativeInst_sparc.cpp
--- a/src/cpu/sparc/vm/nativeInst_sparc.cpp	Fri Jun 07 16:10:07 2013 +0200
+++ b/src/cpu/sparc/vm/nativeInst_sparc.cpp	Fri Jun 07 15:43:00 2013 -0400
@@ -24,6 +24,8 @@
 
 #include "precompiled.hpp"
 #include "asm/macroAssembler.hpp"
+#include "asm/macroAssembler.inline.hpp"
+#include "code/codeCache.hpp"
 #include "memory/resourceArea.hpp"
 #include "nativeInst_sparc.hpp"
 #include "oops/oop.inline.hpp"
diff -r 78a1232be418 -r 0f7ca53be929 src/cpu/sparc/vm/sharedRuntime_sparc.cpp
--- a/src/cpu/sparc/vm/sharedRuntime_sparc.cpp	Fri Jun 07 16:10:07 2013 +0200
+++ b/src/cpu/sparc/vm/sharedRuntime_sparc.cpp	Fri Jun 07 15:43:00 2013 -0400
@@ -1826,6 +1826,20 @@
   verify_oop_args(masm, method, sig_bt, regs);
   vmIntrinsics::ID iid = method->intrinsic_id();
+
+#ifdef GRAAL
+  if (iid == vmIntrinsics::_CompilerToVMImpl_executeCompiledMethod) {
+    // We are called from compiled code here. The three object arguments
+    // are already in the correct registers (j_rarg0, j_rarg1, j_rarg2). The
+    // fourth argument (j_rarg3) is a raw pointer to the nmethod. Make a tail
+    // call to its verified entry point.
+    __ set(nmethod::verified_entry_point_offset(), O0);
+    __ JMP(O0, 0);
+    __ delayed()->nop();
+    return;
+  }
+#endif
+
   // Now write the args into the outgoing interpreter space
   bool has_receiver = false;
   Register receiver_reg = noreg;
diff -r 78a1232be418 -r 0f7ca53be929 src/cpu/sparc/vm/templateInterpreter_sparc.cpp
--- a/src/cpu/sparc/vm/templateInterpreter_sparc.cpp	Fri Jun 07 16:10:07 2013 +0200
+++ b/src/cpu/sparc/vm/templateInterpreter_sparc.cpp	Fri Jun 07 15:43:00 2013 -0400
@@ -213,7 +213,7 @@
 }
 
 
-address TemplateInterpreterGenerator::generate_deopt_entry_for(TosState state, int step) {
+address InterpreterGenerator::generate_deopt_entry_for(TosState state, int step) {
   address entry = __ pc();
   __ get_constant_pool_cache(LcpoolCache); // load LcpoolCache
   { Label L;
@@ -813,6 +813,19 @@
   return generate_accessor_entry();
 }
 
+#ifdef GRAAL
+
+// Interpreter stub for calling a compiled method with 3 object arguments
+address InterpreterGenerator::generate_execute_compiled_method_entry() {
+  address entry_point = __ pc();
+
+  __ stop("graal-sparc unimp");
+
+  return entry_point;
+}
+
+#endif
+
 //
 // Interpreter stub for calling a native method. (asm interpreter)
 // This sets up a somewhat different looking stack for calling the native method
diff -r 78a1232be418 -r 0f7ca53be929 src/cpu/x86/vm/codeInstaller_x86.hpp
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/src/cpu/x86/vm/codeInstaller_x86.hpp	Fri Jun 07 15:43:00 2013 -0400
@@ -0,0 +1,191 @@
+/*
+ * Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ */
+#ifndef CPU_X86_VM_CODEINSTALLER_X86_HPP
+#define CPU_X86_VM_CODEINSTALLER_X86_HPP
+
+#include "compiler/disassembler.hpp"
+#include "runtime/javaCalls.hpp"
+#include "graal/graalEnv.hpp"
+#include "graal/graalCompiler.hpp"
+#include "graal/graalCodeInstaller.hpp"
+#include "graal/graalJavaAccess.hpp"
+#include "graal/graalCompilerToVM.hpp"
+#include "graal/graalRuntime.hpp"
+#include "asm/register.hpp"
+#include "classfile/vmSymbols.hpp"
+#include "code/vmreg.hpp"
+
+inline jint CodeInstaller::pd_next_offset(NativeInstruction* inst, jint pc_offset, oop method) {
+  if (inst->is_call() || inst->is_jump()) {
+    assert(NativeCall::instruction_size == (int)NativeJump::instruction_size, "unexpected size");
+    return (pc_offset + NativeCall::instruction_size);
+  } else if (inst->is_mov_literal64()) {
+    // mov+call instruction pair
+    jint offset = pc_offset + NativeMovConstReg::instruction_size;
+    u_char* call = (u_char*) (_instructions->start() + offset);
+    assert((call[0] == 0x40 || call[0] == 0x41) && call[1] == 0xFF, "expected call with rex/rexb prefix byte");
+    offset += 3; /* prefix byte + opcode byte + modrm byte */
+    return (offset);
+  } else if (inst->is_call_reg()) {
+    // the inlined vtable stub contains a "call register" instruction
+    assert(method != NULL, "only valid for virtual calls");
+    return (pc_offset + ((NativeCallReg *) inst)->next_instruction_offset());
+  } else {
+    fatal("unsupported type of instruction for call site");
+  }
+}
+
+inline void CodeInstaller::pd_site_DataPatch(oop constant, oop kind, bool inlined,
+                                             address instruction, int alignment, char typeChar) {
+  switch (typeChar) {
+    case 'z':
+    case 'b':
+    case 's':
+    case 'c':
+    case 'i':
+      fatal("int-sized values not expected in DataPatch");
+      break;
+    case 'f':
+    case 'j':
+    case 'd': {
+      if (inlined) {
+        address operand = Assembler::locate_operand(instruction, Assembler::imm_operand);
+        *((jlong*) operand) = Constant::primitive(constant);
+      } else {
+        address operand = Assembler::locate_operand(instruction, Assembler::disp32_operand);
+        address next_instruction = Assembler::locate_next_instruction(instruction);
+        int size = _constants->size();
+        if (alignment > 0) {
+          guarantee(alignment <= _constants->alignment(), "Alignment inside constants section is restricted by alignment of section begin");
+          size = align_size_up(size, alignment);
+        }
+        // we don't care if this is a long/double/etc., the primitive field contains the right bits
+        address dest = _constants->start() + size;
+        _constants->set_end(dest + BytesPerLong);
+        *(jlong*) dest = Constant::primitive(constant);
+
+        long disp = dest - next_instruction;
+        assert(disp == (jint) disp, "disp doesn't fit in 32 bits");
+        *((jint*) operand) = (jint) disp;
+
+        _instructions->relocate(instruction, section_word_Relocation::spec((address) dest, CodeBuffer::SECT_CONSTS), Assembler::disp32_operand);
+        TRACE_graal_3("relocating (%c) at %p/%p with destination at %p (%d)", typeChar, instruction, operand, dest, size);
+      }
+      break;
+    }
+    case 'a': {
+      address operand = Assembler::locate_operand(instruction, Assembler::imm_operand);
+      Handle obj = Constant::object(constant);
+
+      jobject value = JNIHandles::make_local(obj());
+      *((jobject*) operand) = value;
+      _instructions->relocate(instruction, oop_Relocation::spec_for_immediate(), Assembler::imm_operand);
+      TRACE_graal_3("relocating (oop constant) at %p/%p", instruction, operand);
+      break;
+    }
+    default:
+      fatal(err_msg("unexpected Kind (%d) in DataPatch", typeChar));
+      break;
+  }
+}
+
+inline void CodeInstaller::pd_relocate_CodeBlob(CodeBlob* cb, NativeInstruction* inst) {
+  if (cb->is_nmethod()) {
+    nmethod* nm = (nmethod*) cb;
+    nativeJump_at((address)inst)->set_jump_destination(nm->verified_entry_point());
+  } else {
+    nativeJump_at((address)inst)->set_jump_destination(cb->code_begin());
+  }
+  _instructions->relocate((address)inst, runtime_call_Relocation::spec(), Assembler::call32_operand);
+}
+
+inline void CodeInstaller::pd_relocate_ForeignCall(NativeInstruction* inst, jlong foreign_call_destination) {
+  if (inst->is_call()) {
+    // NOTE: for call without a mov, the offset must fit a 32-bit immediate
+    // see also CompilerToVM.getMaxCallTargetOffset()
+    NativeCall* call = nativeCall_at((address) (inst));
+    call->set_destination((address) foreign_call_destination);
+    _instructions->relocate(call->instruction_address(), runtime_call_Relocation::spec(), Assembler::call32_operand);
+  } else if (inst->is_mov_literal64()) {
+    NativeMovConstReg* mov = nativeMovConstReg_at((address) (inst));
+    mov->set_data((intptr_t) foreign_call_destination);
+    _instructions->relocate(mov->instruction_address(), runtime_call_Relocation::spec(), Assembler::imm_operand);
+  } else {
+    NativeJump* jump = nativeJump_at((address) (inst));
+    jump->set_jump_destination((address) foreign_call_destination);
+    _instructions->relocate((address)inst, runtime_call_Relocation::spec(), Assembler::call32_operand);
+  }
+  TRACE_graal_3("relocating (foreign call) at %p", inst);
+}
+
+inline void CodeInstaller::pd_relocate_JavaMethod(oop hotspot_method, jint pc_offset) {
+#ifdef ASSERT
+  Method* method = NULL;
+  // we need to check, this might also be an unresolved method
+  if (hotspot_method->is_a(HotSpotResolvedJavaMethod::klass())) {
+    method = getMethodFromHotSpotMethod(hotspot_method);
+  }
+#endif
+  switch (_next_call_type) {
+    case MARK_INLINE_INVOKE:
+      break;
+    case MARK_INVOKEVIRTUAL:
+    case MARK_INVOKEINTERFACE: {
+      assert(method == NULL || !method->is_static(), "cannot call static method with invokeinterface");
+
+      NativeCall* call = nativeCall_at(_instructions->start() + pc_offset);
+      call->set_destination(SharedRuntime::get_resolve_virtual_call_stub());
+      _instructions->relocate(call->instruction_address(),
+                              virtual_call_Relocation::spec(_invoke_mark_pc),
+                              Assembler::call32_operand);
+      break;
+    }
+    case MARK_INVOKESTATIC: {
+      assert(method == NULL || method->is_static(), "cannot call non-static method with invokestatic");
+
+      NativeCall* call = nativeCall_at(_instructions->start() + pc_offset);
+      call->set_destination(SharedRuntime::get_resolve_static_call_stub());
+      _instructions->relocate(call->instruction_address(),
+                              relocInfo::static_call_type, Assembler::call32_operand);
+      break;
+    }
+    case MARK_INVOKESPECIAL: {
+      assert(method == NULL || !method->is_static(), "cannot call static method with invokespecial");
+      NativeCall* call = nativeCall_at(_instructions->start() + pc_offset);
+      call->set_destination(SharedRuntime::get_resolve_opt_virtual_call_stub());
+      _instructions->relocate(call->instruction_address(),
+                              relocInfo::opt_virtual_call_type, Assembler::call32_operand);
+      break;
+    }
+    default:
+      fatal("invalid _next_call_type value");
+      break;
+  }
+}
+
+inline int32_t* CodeInstaller::pd_locate_operand(address instruction) {
+  return (int32_t*) Assembler::locate_operand(instruction, Assembler::disp32_operand);
+}
+
+#endif // CPU_X86_VM_CODEINSTALLER_X86_HPP
+
diff -r 78a1232be418 -r 0f7ca53be929 src/share/vm/classfile/vmSymbols.hpp
--- a/src/share/vm/classfile/vmSymbols.hpp	Fri Jun 07 16:10:07 2013 +0200
+++ b/src/share/vm/classfile/vmSymbols.hpp	Fri Jun 07 15:43:00 2013 -0400
@@ -314,6 +314,7 @@
   template(com_oracle_graal_hotspot_meta_HotSpotMonitorValue, "com/oracle/graal/hotspot/meta/HotSpotMonitorValue") \
   template(com_oracle_graal_hotspot_debug_LocalImpl, "com/oracle/graal/hotspot/debug/LocalImpl") \
   AMD64_ONLY(template(com_oracle_graal_hotspot_amd64_AMD64HotSpotGraalRuntime,"com/oracle/graal/hotspot/amd64/AMD64HotSpotGraalRuntime"))\
+  SPARC_ONLY(template(com_oracle_graal_hotspot_sparc_SPARCHotSpotGraalRuntime,"com/oracle/graal/hotspot/sparc/SPARCHotSpotGraalRuntime"))\
   /* graal.api.meta */ \
   template(com_oracle_graal_api_meta_Constant, "com/oracle/graal/api/meta/Constant") \
   template(com_oracle_graal_api_meta_ConstantPool, "com/oracle/graal/api/meta/ConstantPool") \
diff -r 78a1232be418 -r 0f7ca53be929 src/share/vm/graal/graalCodeInstaller.cpp
--- a/src/share/vm/graal/graalCodeInstaller.cpp	Fri Jun 07 16:10:07 2013 +0200
+++ b/src/share/vm/graal/graalCodeInstaller.cpp	Fri Jun 07 15:43:00 2013 -0400
@@ -35,18 +35,23 @@
 #include "code/vmreg.hpp"
 #ifdef TARGET_ARCH_x86
+# include "codeInstaller_x86.hpp"
 # include "vmreg_x86.inline.hpp"
 #endif
 #ifdef TARGET_ARCH_sparc
+# include "codeInstaller_sparc.hpp"
 # include "vmreg_sparc.inline.hpp"
 #endif
 #ifdef TARGET_ARCH_zero
+# include "codeInstaller_zero.hpp"
 # include "vmreg_zero.inline.hpp"
 #endif
 #ifdef TARGET_ARCH_arm
+# include "codeInstaller_arm.hpp"
 # include "vmreg_arm.inline.hpp"
 #endif
 #ifdef TARGET_ARCH_ppc
+# include "codeInstaller_ppc.hpp"
 # include "vmreg_ppc.inline.hpp"
 #endif
 
@@ -199,8 +204,16 @@
       }
       return value;
 #else
+#ifdef TARGET_ARCH_sparc
+      ScopeValue* value = new LocationValue(Location::new_reg_loc(locationType, as_FloatRegister(number)->as_VMReg()));
+      if (type == T_DOUBLE) {
+        second = value;
+      }
+      return value;
+#else
       ShouldNotReachHere("Platform currently does not support floating point values.");
 #endif
+#endif
     }
   } else if (value->is_a(StackSlot::klass())) {
     if (type == T_DOUBLE) {
@@ -704,36 +717,15 @@
   assert((hotspot_method ? 1 : 0) + (foreign_call ? 1 : 0) == 1, "Call site needs exactly one type");
 
   NativeInstruction* inst = nativeInstruction_at(_instructions->start() + pc_offset);
-  jint next_pc_offset = 0x0;
-  if (inst->is_call() || inst->is_jump()) {
-    assert(NativeCall::instruction_size == (int)NativeJump::instruction_size, "unexpected size");
-    next_pc_offset = pc_offset + NativeCall::instruction_size;
-  } else if (inst->is_mov_literal64()) {
-    // mov+call instruction pair
-    next_pc_offset = pc_offset + NativeMovConstReg::instruction_size;
-    u_char* call = (u_char*) (_instructions->start() + next_pc_offset);
-    assert((call[0] == 0x40 || call[0] == 0x41) && call[1] == 0xFF, "expected call with rex/rexb prefix byte");
-    next_pc_offset += 3; /* prefix byte + opcode byte + modrm byte */
-  } else if (inst->is_call_reg()) {
-    // the inlined vtable stub contains a "call register" instruction
-    assert(hotspot_method != NULL, "only valid for virtual calls");
-    next_pc_offset = pc_offset + ((NativeCallReg *) inst)->next_instruction_offset();
-  } else {
-    fatal("unsupported type of instruction for call site");
-  }
-
+  jint next_pc_offset = CodeInstaller::pd_next_offset(inst, pc_offset, hotspot_method);
+
   if (target->is_a(SystemDictionary::HotSpotInstalledCode_klass())) {
     assert(inst->is_jump(), "jump expected");
     CodeBlob* cb = (CodeBlob*) (address) HotSpotInstalledCode::codeBlob(target);
     assert(cb != NULL, "npe");
-    if (cb->is_nmethod()) {
-      nmethod* nm = (nmethod*) cb;
-      nativeJump_at((address)inst)->set_jump_destination(nm->verified_entry_point());
-    } else {
-      nativeJump_at((address)inst)->set_jump_destination(cb->code_begin());
-    }
-    _instructions->relocate((address)inst, runtime_call_Relocation::spec(), Assembler::call32_operand);
+
+    CodeInstaller::pd_relocate_CodeBlob(cb, inst);
     return;
   }
@@ -750,65 +742,14 @@
   if (foreign_call != NULL) {
     jlong foreign_call_destination = HotSpotForeignCallLinkage::address(foreign_call);
-    if (inst->is_call()) {
-      // NOTE: for call without a mov, the offset must fit a 32-bit immediate
-      // see also CompilerToVM.getMaxCallTargetOffset()
-      NativeCall* call = nativeCall_at((address) (inst));
-      call->set_destination((address) foreign_call_destination);
-      _instructions->relocate(call->instruction_address(), runtime_call_Relocation::spec(), Assembler::call32_operand);
-    } else if (inst->is_mov_literal64()) {
-      NativeMovConstReg* mov = nativeMovConstReg_at((address) (inst));
-      mov->set_data((intptr_t) foreign_call_destination);
-      _instructions->relocate(mov->instruction_address(), runtime_call_Relocation::spec(), Assembler::imm_operand);
-    } else {
-      NativeJump* jump = nativeJump_at((address) (inst));
-      jump->set_jump_destination((address) foreign_call_destination);
-      _instructions->relocate((address)inst, runtime_call_Relocation::spec(), Assembler::call32_operand);
-    }
-    TRACE_graal_3("relocating (foreign call) at %p", inst);
+
+    CodeInstaller::pd_relocate_ForeignCall(inst, foreign_call_destination);
   } else { // method != NULL
     assert(hotspot_method != NULL, "unexpected JavaMethod");
-#ifdef ASSERT
-    Method* method = NULL;
-    // we need to check, this might also be an unresolved method
-    if (hotspot_method->is_a(HotSpotResolvedJavaMethod::klass())) {
-      method = getMethodFromHotSpotMethod(hotspot_method);
-    }
-#endif
     assert(debug_info != NULL, "debug info expected");
 
     TRACE_graal_3("method call");
-    switch (_next_call_type) {
-      case MARK_INLINE_INVOKE:
-        break;
-      case MARK_INVOKEVIRTUAL:
-      case MARK_INVOKEINTERFACE: {
-        assert(method == NULL || !method->is_static(), "cannot call static method with invokeinterface");
-
-        NativeCall* call = nativeCall_at(_instructions->start() + pc_offset);
-        call->set_destination(SharedRuntime::get_resolve_virtual_call_stub());
-        _instructions->relocate(call->instruction_address(), virtual_call_Relocation::spec(_invoke_mark_pc), Assembler::call32_operand);
-        break;
-      }
-      case MARK_INVOKESTATIC: {
-        assert(method == NULL || method->is_static(), "cannot call non-static method with invokestatic");
-
-        NativeCall* call = nativeCall_at(_instructions->start() + pc_offset);
-        call->set_destination(SharedRuntime::get_resolve_static_call_stub());
-        _instructions->relocate(call->instruction_address(), relocInfo::static_call_type, Assembler::call32_operand);
-        break;
-      }
-      case MARK_INVOKESPECIAL: {
-        assert(method == NULL || !method->is_static(), "cannot call static method with invokespecial");
-        NativeCall* call = nativeCall_at(_instructions->start() + pc_offset);
-        call->set_destination(SharedRuntime::get_resolve_opt_virtual_call_stub());
-        _instructions->relocate(call->instruction_address(), relocInfo::opt_virtual_call_type, Assembler::call32_operand);
-        break;
-      }
-      default:
-        fatal("invalid _next_call_type value");
-        break;
-    }
+    CodeInstaller::pd_relocate_JavaMethod(hotspot_method, pc_offset);
   }
   _next_call_type = MARK_INVOKE_INVALID;
   if (debug_info != NULL) {
@@ -823,59 +764,15 @@
   oop kind = Constant::kind(constant);
 
   address instruction = _instructions->start() + pc_offset;
-
   char typeChar = Kind::typeChar(kind);
   switch (typeChar) {
-    case 'z':
-    case 'b':
-    case 's':
-    case 'c':
-    case 'i':
-      fatal("int-sized values not expected in DataPatch");
-      break;
     case 'f':
     case 'j':
-    case 'd': {
+    case 'd':
       record_metadata_in_constant(constant, _oop_recorder);
-      if (inlined) {
-        address operand = Assembler::locate_operand(instruction, Assembler::imm_operand);
-        *((jlong*) operand) = Constant::primitive(constant);
-      } else {
-        address operand = Assembler::locate_operand(instruction, Assembler::disp32_operand);
-        address next_instruction = Assembler::locate_next_instruction(instruction);
-        int size = _constants->size();
-        if (alignment > 0) {
-          guarantee(alignment <= _constants->alignment(), "Alignment inside constants section is restricted by alignment of section begin");
-          size = align_size_up(size, alignment);
-        }
-        // we don't care if this is a long/double/etc., the primitive field contains the right bits
-        address dest = _constants->start() + size;
-        _constants->set_end(dest + BytesPerLong);
-        *(jlong*) dest = Constant::primitive(constant);
-
-        long disp = dest - next_instruction;
-        assert(disp == (jint) disp, "disp doesn't fit in 32 bits");
-        *((jint*) operand) = (jint) disp;
-
-        _instructions->relocate(instruction, section_word_Relocation::spec((address) dest, CodeBuffer::SECT_CONSTS), Assembler::disp32_operand);
-        TRACE_graal_3("relocating (%c) at %p/%p with destination at %p (%d)", typeChar, instruction, operand, dest, size);
-      }
-      break;
-    }
-    case 'a': {
-      address operand = Assembler::locate_operand(instruction, Assembler::imm_operand);
-      Handle obj = Constant::object(constant);
-
-      jobject value = JNIHandles::make_local(obj());
-      *((jobject*) operand) = value;
-      _instructions->relocate(instruction, oop_Relocation::spec_for_immediate(), Assembler::imm_operand);
-      TRACE_graal_3("relocating (oop constant) at %p/%p", instruction, operand);
-      break;
-    }
-    default:
-      fatal(err_msg("unexpected Kind (%d) in DataPatch", typeChar));
-      break;
       break;
   }
+  CodeInstaller::pd_site_DataPatch(constant, kind, inlined, instruction, alignment, typeChar);
 }
 
 void CodeInstaller::site_Mark(CodeBuffer& buffer, jint pc_offset, oop site) {
@@ -920,7 +817,8 @@
       break;
     case MARK_POLL_NEAR: {
       NativeInstruction* ni = nativeInstruction_at(instruction);
-      int32_t* disp = (int32_t*) Assembler::locate_operand(instruction, Assembler::disp32_operand);
+      int32_t* disp = (int32_t*) pd_locate_operand(instruction);
+      // int32_t* disp = (int32_t*) Assembler::locate_operand(instruction, Assembler::disp32_operand);
       int32_t offset = *disp; // The Java code installed the polling page offset into the disp32 operand
       intptr_t new_disp = (intptr_t) (os::get_polling_page() + offset) - (intptr_t) ni;
       *disp = (int32_t)new_disp;
@@ -930,7 +828,8 @@
       break;
     case MARK_POLL_RETURN_NEAR: {
       NativeInstruction* ni = nativeInstruction_at(instruction);
-      int32_t* disp = (int32_t*) Assembler::locate_operand(instruction, Assembler::disp32_operand);
+      int32_t* disp = (int32_t*) pd_locate_operand(instruction);
+      // int32_t* disp = (int32_t*) Assembler::locate_operand(instruction, Assembler::disp32_operand);
       int32_t offset = *disp; // The Java code installed the polling page offset into the disp32 operand
       intptr_t new_disp = (intptr_t) (os::get_polling_page() + offset) - (intptr_t) ni;
       *disp = (int32_t)new_disp;
diff -r 78a1232be418 -r 0f7ca53be929 src/share/vm/graal/graalCodeInstaller.hpp
--- a/src/share/vm/graal/graalCodeInstaller.hpp	Fri Jun 07 16:10:07 2013 +0200
+++ b/src/share/vm/graal/graalCodeInstaller.hpp	Fri Jun 07 15:43:00 2013 -0400
@@ -75,6 +75,13 @@
   Dependencies* _dependencies;
   ExceptionHandlerTable _exception_handler_table;
 
+  jint pd_next_offset(NativeInstruction* inst, jint pc_offset, oop method);
+  void pd_site_DataPatch(oop constant, oop kind, bool inlined, address instruction, int alignment, char typeChar);
+  void pd_relocate_CodeBlob(CodeBlob* cb, NativeInstruction* inst);
+  void pd_relocate_ForeignCall(NativeInstruction* inst, jlong foreign_call_destination);
+  void pd_relocate_JavaMethod(oop method, jint pc_offset);
+  int32_t* pd_locate_operand(address instruction);
+
 public:
 
   CodeInstaller(Handle& comp_result, GraalEnv::CodeInstallResult& result, CodeBlob*& cb, Handle installed_code, Handle triggered_deoptimizations);
@@ -106,4 +113,20 @@
 
 };
 
+#ifdef TARGET_ARCH_x86
+# include "codeInstaller_x86.hpp"
+#endif
+#ifdef TARGET_ARCH_sparc
+# include "codeInstaller_sparc.hpp"
+#endif
+#ifdef TARGET_ARCH_zero
+# error
+#endif
+#ifdef TARGET_ARCH_arm
+# error
+#endif
+#ifdef TARGET_ARCH_ppc
+# error
+#endif
+
 #endif // SHARE_VM_GRAAL_GRAAL_CODE_INSTALLER_HPP
diff -r 78a1232be418 -r 0f7ca53be929 src/share/vm/graal/graalCompiler.cpp
--- a/src/share/vm/graal/graalCompiler.cpp	Fri Jun 07 16:10:07 2013 +0200
+++ b/src/share/vm/graal/graalCompiler.cpp	Fri Jun 07 15:43:00 2013 -0400
@@ -51,7 +51,7 @@
 
   uintptr_t heap_end = (uintptr_t) Universe::heap()->reserved_region().end();
   uintptr_t allocation_end = heap_end + ((uintptr_t)16) * 1024 * 1024 * 1024;
-  guarantee(heap_end < allocation_end, "heap end too close to end of address space (might lead to erroneous TLAB allocations)");
+  AMD64_ONLY(guarantee(heap_end < allocation_end, "heap end too close to end of address space (might lead to erroneous TLAB allocations)"));
   NOT_LP64(error("check TLAB allocation code for address space conflicts"));
 
   _deopted_leaf_graph_count = 0;
diff -r 78a1232be418 -r 0f7ca53be929 src/share/vm/graal/graalCompilerToVM.cpp
--- a/src/share/vm/graal/graalCompilerToVM.cpp	Fri Jun 07 16:10:07 2013 +0200
+++ b/src/share/vm/graal/graalCompilerToVM.cpp	Fri Jun 07 15:43:00 2013 -0400
@@ -658,8 +658,10 @@
   set_boolean("useAESIntrinsics", UseAESIntrinsics);
   set_boolean("useTLAB", UseTLAB);
   set_boolean("useG1GC", UseG1GC);
+#ifdef TARGET_ARCH_x86
   set_int("useSSE", UseSSE);
   set_int("useAVX", UseAVX);
+#endif
   set_int("codeEntryAlignment", CodeEntryAlignment);
   set_int("stackShadowPages", StackShadowPages);
   set_int("hubOffset", oopDesc::klass_offset_in_bytes());
@@ -695,9 +697,11 @@
   set_int("klassHasFinalizerFlag", JVM_ACC_HAS_FINALIZER);
   set_int("threadExceptionOopOffset", in_bytes(JavaThread::exception_oop_offset()));
   set_int("threadExceptionPcOffset", in_bytes(JavaThread::exception_pc_offset()));
+#ifdef TARGET_ARCH_x86
   set_boolean("isPollingPageFar", Assembler::is_polling_page_far());
+  set_int("runtimeCallStackSize", (jint)frame::arg_reg_save_area_bytes);
+#endif
   set_int("classMirrorOffset", in_bytes(Klass::java_mirror_offset()));
-  set_int("runtimeCallStackSize", (jint)frame::arg_reg_save_area_bytes);
   set_int("klassModifierFlagsOffset", in_bytes(Klass::modifier_flags_offset()));
   set_int("klassAccessFlagsOffset", in_bytes(Klass::access_flags_offset()));
   set_int("klassOffset", java_lang_Class::klass_offset_in_bytes());
@@ -744,7 +748,9 @@
   set_int("threadTlabSizeOffset", in_bytes(JavaThread::tlab_size_offset()));
   set_int("threadAllocatedBytesOffset", in_bytes(JavaThread::allocated_bytes_offset()));
   set_int("threadLastJavaSpOffset", in_bytes(JavaThread::last_Java_sp_offset()));
+#ifdef TARGET_ARCH_x86
   set_int("threadLastJavaFpOffset", in_bytes(JavaThread::last_Java_fp_offset()));
+#endif
   set_int("threadLastJavaPcOffset", in_bytes(JavaThread::last_Java_pc_offset()));
   set_int("threadObjectResultOffset", in_bytes(JavaThread::vm_result_offset()));
   set_int("tlabSlowAllocationsOffset", in_bytes(JavaThread::tlab_slow_allocations_offset()));
diff -r 78a1232be418 -r 0f7ca53be929 src/share/vm/graal/graalVMToCompiler.cpp
--- a/src/share/vm/graal/graalVMToCompiler.cpp	Fri Jun 07 16:10:07 2013 +0200
+++ b/src/share/vm/graal/graalVMToCompiler.cpp	Fri Jun 07 15:43:00 2013 -0400
@@ -51,6 +51,9 @@
 #ifdef AMD64
       Symbol* name = vmSymbols::com_oracle_graal_hotspot_amd64_AMD64HotSpotGraalRuntime();
 #endif
+#ifdef SPARC
+      Symbol* name = vmSymbols::com_oracle_graal_hotspot_sparc_SPARCHotSpotGraalRuntime();
+#endif
       KlassHandle klass = loadClass(name);
 
       JavaValue result(T_OBJECT);
diff -r 78a1232be418 -r 0f7ca53be929 src/share/vm/runtime/fieldDescriptor.cpp
--- a/src/share/vm/runtime/fieldDescriptor.cpp	Fri Jun 07 16:10:07 2013 +0200
+++ b/src/share/vm/runtime/fieldDescriptor.cpp	Fri Jun 07 15:43:00 2013 -0400
@@ -27,9 +27,6 @@
 #include "classfile/vmSymbols.hpp"
 #include "memory/resourceArea.hpp"
 #include "memory/universe.inline.hpp"
-#include "oops/annotations.hpp"
-#include "oops/instanceKlass.hpp"
-#include "oops/fieldStreams.hpp"
 #include "runtime/fieldDescriptor.hpp"
 #include "runtime/handles.inline.hpp"
 #include "runtime/signature.hpp"
diff -r 78a1232be418 -r 0f7ca53be929 src/share/vm/runtime/fieldDescriptor.hpp
--- a/src/share/vm/runtime/fieldDescriptor.hpp	Fri Jun 07 16:10:07 2013 +0200
+++ b/src/share/vm/runtime/fieldDescriptor.hpp	Fri Jun 07 15:43:00 2013 -0400
@@ -25,7 +25,10 @@
 #ifndef SHARE_VM_RUNTIME_FIELDDESCRIPTOR_HPP
 #define SHARE_VM_RUNTIME_FIELDDESCRIPTOR_HPP
 
+#include "oops/annotations.hpp"
 #include "oops/constantPool.hpp"
+#include "oops/fieldStreams.hpp"
+#include "oops/instanceKlass.hpp"
 #include "oops/symbol.hpp"
 #include "runtime/fieldType.hpp"
 #include "utilities/accessFlags.hpp"