comparison src/share/vm/gc_implementation/g1/g1BlockOffsetTable.hpp @ 1886:72a161e62cc4
6991377: G1: race between concurrent refinement and humongous object allocation
Summary: There is a race between the concurrent refinement threads and humongous object allocation that can cause the refinement threads to read, and thereby corrupt their view of, the part of the BOT that is still being initialized by the humongous allocation operation. The solution is to perform the humongous object allocation in carefully ordered steps, so that the concurrent refinement threads always observe a consistent view of the BOT, the region contents, and top. The fix also includes some very minor tidying up in sparsePRT.
Reviewed-by: jcoomes, johnc, ysr
author:   tonyp
date:     Sat, 16 Oct 2010 17:12:19 -0400
parents:  c18cbe5936b8
children: f95d63e2154a
comparison: 1885:a5c514e74487 vs 1886:72a161e62cc4
@@ -434,10 +434,12 @@
   inline void verify_not_unallocated(HeapWord* blk, size_t size) const {
     verify_not_unallocated(blk, blk + size);
   }

   void check_all_cards(size_t left_card, size_t right_card) const;

+  virtual void set_for_starts_humongous(HeapWord* new_end);
 };

 // A subtype of BlockOffsetArray that takes advantage of the fact
 // that its underlying space is a ContiguousSpace, so that its "active"
 // region can be more efficiently tracked (than for a non-contiguous space).
@@ -482,6 +484,8 @@
     alloc_block(blk, blk+size);
   }

   HeapWord* block_start_unsafe(const void* addr);
   HeapWord* block_start_unsafe_const(const void* addr) const;
+
+  virtual void set_for_starts_humongous(HeapWord* new_end);
 };
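The "careful steps" the summary describes boil down to an ordering guarantee: the allocation path must fully initialize the relevant BOT entries before it publishes the new top, so a concurrently running refinement thread never walks BOT entries that are still being written. The following is a minimal, self-contained sketch of that publication pattern, not the HotSpot code: `Region`, `alloc_humongous`, `bot`, and `kCards` are hypothetical stand-ins, and `std::atomic` release/acquire is used here in place of HotSpot's own ordering primitives.

```cpp
#include <atomic>
#include <cstddef>

// Illustrative stand-in for a heap region with a block offset table (BOT)
// and a published "top" (here, the index of the first uninitialized card).
struct Region {
  static const size_t kCards = 8;
  size_t bot[kCards];            // BOT entries for this region
  std::atomic<size_t> top{0};    // everything below top is initialized

  // Writer (the allocation path): step 1, set up the BOT entries for the
  // newly allocated object; step 2, publish the new top with release
  // semantics. A reader that sees the new top is then guaranteed to also
  // see the BOT writes that preceded it.
  void alloc_humongous(size_t new_top) {
    for (size_t i = top.load(std::memory_order_relaxed); i < new_top; i++) {
      bot[i] = i;                                     // initialize BOT
    }
    top.store(new_top, std::memory_order_release);    // publish top last
  }

  // Reader (a refinement thread): acquire-load top, then only consult BOT
  // entries below it; entries at or above top are never trusted.
  bool bot_consistent_below_top() const {
    size_t t = top.load(std::memory_order_acquire);
    for (size_t i = 0; i < t; i++) {
      if (bot[i] != i) return false;
    }
    return true;
  }
};
```

The key design point matches the summary: the race is not avoided by locking, but by ordering the steps of the allocation so that every intermediate state a refinement thread can observe (old top with old BOT, or new top with fully initialized BOT) is self-consistent.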