diff src/share/vm/gc_implementation/g1/heapRegion.inline.hpp @ 2433:abdfc822206f

7023069: G1: Introduce symmetric locking in the slow allocation path
7023151: G1: refactor the code that operates on _cur_alloc_region to be re-used for allocs by the GC threads
7018286: G1: humongous allocation attempts should take the GC locker into account
Summary: First, this change replaces the asymmetric locking scheme in the G1 slow alloc path by a symmetric one. Second, it factors out the code that operates on _cur_alloc_region so that it can be re-used for allocations by the GC threads in the future.
Reviewed-by: stefank, brutisso, johnc
author tonyp
date Wed, 30 Mar 2011 10:26:59 -0400
parents f95d63e2154a
children 2ace1c4ee8da
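For readers skimming the summary: "symmetric locking" here means that every thread entering the slow allocation path, mutator or GC worker alike, synchronizes the same way and runs the same factored-out code against the current allocation region, rather than one class of thread relying on a different mechanism from the other. A minimal, hypothetical sketch of that shape (ToyRegionAllocator and its methods are illustrative names, not HotSpot's):

    #include <cstddef>
    #include <mutex>

    // Hypothetical sketch (not HotSpot code): one slow-path routine shared
    // by mutator and GC threads, all serialized by the same lock.
    class ToyRegionAllocator {
      std::mutex _heap_lock;
      char* _cur;   // bump pointer into the current allocation region
      char* _end;   // end of the current allocation region

      // Operates on the current region; factored out so both thread kinds
      // can reuse it (mirroring the _cur_alloc_region refactoring).
      char* allocate_from_cur_region(size_t size) {
        if (_cur + size > _end) return nullptr;   // region exhausted
        char* res = _cur;
        _cur += size;
        return res;
      }

    public:
      ToyRegionAllocator(char* bottom, char* end) : _cur(bottom), _end(end) {}

      // Both entry points take the same lock and run the same code: the
      // locking discipline is identical ("symmetric") for both thread kinds.
      char* mutator_allocate(size_t size) {
        std::lock_guard<std::mutex> x(_heap_lock);
        return allocate_from_cur_region(size);
      }
      char* gc_allocate(size_t size) {
        std::lock_guard<std::mutex> x(_heap_lock);
        return allocate_from_cur_region(size);
      }
    };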
--- a/src/share/vm/gc_implementation/g1/heapRegion.inline.hpp	Tue Mar 29 22:36:16 2011 -0400
+++ b/src/share/vm/gc_implementation/g1/heapRegion.inline.hpp	Wed Mar 30 10:26:59 2011 -0400
@@ -38,15 +38,8 @@
 // this is used for larger LAB allocations only.
 inline HeapWord* G1OffsetTableContigSpace::par_allocate(size_t size) {
   MutexLocker x(&_par_alloc_lock);
-  // This ought to be just "allocate", because of the lock above, but that
-  // ContiguousSpace::allocate asserts that either the allocating thread
-  // holds the heap lock or it is the VM thread and we're at a safepoint.
-  // The best I (dld) could figure was to put a field in ContiguousSpace
-  // meaning "locking at safepoint taken care of", and set/reset that
-  // here.  But this will do for now, especially in light of the comment
-  // above.  Perhaps in the future some lock-free manner of keeping the
-  // coordination.
-  HeapWord* res = ContiguousSpace::par_allocate(size);
+  // Given that we take the lock, there is no need to use par_allocate() here.
+  HeapWord* res = ContiguousSpace::allocate(size);
   if (res != NULL) {
     _offsets.alloc_block(res, size);
   }