Intel(R) Threading Building Blocks Doxygen Documentation  version 4.2.3
tbb::internal::arena Class Reference

#include <arena.h>

Inheritance diagram for tbb::internal::arena:
Collaboration diagram for tbb::internal::arena:

Public Types

enum  new_work_type { work_spawned, wakeup, work_enqueued }
 Types of work advertised by advertise_new_work() More...
 
typedef padded< arena_base > base_type
 
typedef uintptr_t pool_state_t
 

Public Member Functions

 arena (market &, unsigned max_num_workers, unsigned num_reserved_slots)
 Constructor. More...
 
mail_outbox & mailbox (affinity_id id)
 Get reference to mailbox corresponding to given affinity_id. More...
 
void free_arena ()
 Completes arena shutdown, destructs and deallocates it. More...
 
unsigned num_workers_active () const
 The number of workers active in the arena. More...
 
bool is_recall_requested () const
 Check if the recall is requested by the market. More...
 
template<arena::new_work_type work_type>
void advertise_new_work ()
 If necessary, raise a flag that there is new job in arena. More...
 
bool is_out_of_work ()
 Check if there is job anywhere in arena. More...
 
void enqueue_task (task &, intptr_t, FastRandom &)
 Enqueue a task into the starvation-resistant queue. More...
 
void process (generic_scheduler &)
 Registers the worker with the arena and enters TBB scheduler dispatch loop. More...
 
template<unsigned ref_param>
void on_thread_leaving ()
 Notification that worker or master leaves its arena. More...
 
bool has_enqueued_tasks ()
 Check for the presence of enqueued tasks at all priority levels. More...
 
template<bool as_worker>
size_t occupy_free_slot (generic_scheduler &s)
 Tries to occupy a slot in the arena. On success, returns the slot index; if no slot is available, returns out_of_arena. More...
 
size_t occupy_free_slot_in_range (generic_scheduler &s, size_t lower, size_t upper)
 Tries to occupy a slot in the specified range. More...
 

Static Public Member Functions

static arena & allocate_arena (market &, unsigned num_slots, unsigned num_reserved_slots)
 Allocate an instance of arena. More...
 
static unsigned num_arena_slots (unsigned num_slots)
 
static int allocation_size (unsigned num_slots)
 
static bool is_busy_or_empty (pool_state_t s)
 No tasks to steal or snapshot is being taken. More...
 

Public Attributes

arena_slot my_slots [1]
 
- Public Attributes inherited from tbb::internal::padded_base< arena_base, NFS_MaxLineSize, sizeof(arena_base) % NFS_MaxLineSize >
char pad [S - R]
 
- Public Attributes inherited from tbb::internal::arena_base
unsigned my_num_workers_allotted
 The number of workers that have been marked out by the resource manager to service the arena. More...
 
atomic< unsigned > my_references
 Reference counter for the arena. More...
 
atomic< unsigned > my_limit
 The maximal number of currently busy slots. More...
 
task_stream< num_priority_levels > my_task_stream
 Task pool for the tasks scheduled via task::enqueue() method. More...
 
unsigned my_max_num_workers
 The number of workers requested by the master thread owning the arena. More...
 
int my_num_workers_requested
 The number of workers that are currently requested from the resource manager. More...
 
tbb::atomic< uintptr_t > my_pool_state
 Current task pool state and estimate of available tasks amount. More...
 
market * my_market
 The market that owns this arena. More...
 
uintptr_t my_aba_epoch
 ABA prevention marker. More...
 
cpu_ctl_env my_cpu_ctl_env
 FPU control settings of arena's master thread captured at the moment of arena instantiation. More...
 
unsigned my_num_slots
 The number of slots in the arena. More...
 
unsigned my_num_reserved_slots
 The number of reserved slots (can be occupied only by masters). More...
 
concurrent_monitor my_exit_monitors
 Waiting object for master threads that cannot join the arena. More...
 
- Public Attributes inherited from tbb::internal::padded_base< intrusive_list_node, NFS_MaxLineSize, sizeof(intrusive_list_node) % NFS_MaxLineSize >
char pad [S - R]
 
- Public Attributes inherited from tbb::internal::intrusive_list_node
intrusive_list_node * my_prev_node
 
intrusive_list_node * my_next_node
 

Static Public Attributes

static const pool_state_t SNAPSHOT_EMPTY = 0
 No tasks to steal since last snapshot was taken. More...
 
static const pool_state_t SNAPSHOT_FULL = pool_state_t(-1)
 At least one task has been offered for stealing since the last snapshot started. More...
 
static const unsigned ref_external_bits = 12
 The number of least significant bits for external references. More...
 
static const unsigned ref_external = 1
 Reference increment values for externals and workers. More...
 
static const unsigned ref_worker = 1<<ref_external_bits
 
static const size_t out_of_arena = ~size_t(0)
 

Private Member Functions

void restore_priority_if_need ()
 If enqueued tasks found, restore arena priority and task presence status. More...
 

Detailed Description

Definition at line 276 of file arena.h.

Member Typedef Documentation

◆ base_type

Definition at line 281 of file arena.h.

◆ pool_state_t

Definition at line 315 of file arena.h.

Member Enumeration Documentation

◆ new_work_type

Types of work advertised by advertise_new_work()

Enumerator
work_spawned 
wakeup 
work_enqueued 

Definition at line 284 of file arena.h.

Constructor & Destructor Documentation

◆ arena()

tbb::internal::arena::arena ( market & m,
unsigned  max_num_workers,
unsigned  num_reserved_slots 
)

Constructor.

Definition at line 226 of file arena.cpp.

226  {
227  __TBB_ASSERT( !my_guard, "improperly allocated arena?" );
228  __TBB_ASSERT( sizeof(my_slots[0]) % NFS_GetLineSize()==0, "arena::slot size not multiple of cache line size" );
229  __TBB_ASSERT( (uintptr_t)this % NFS_GetLineSize()==0, "arena misaligned" );
230 #if __TBB_TASK_PRIORITY
231  __TBB_ASSERT( !my_reload_epoch && !my_orphaned_tasks && !my_skipped_fifo_priority, "New arena object is not zeroed" );
232 #endif /* __TBB_TASK_PRIORITY */
233  my_market = &m;
234  my_limit = 1;
235  // Two slots are mandatory: for the master, and for 1 worker (required to support starvation resistant tasks).
236  my_num_slots = num_arena_slots(num_slots);
237  my_num_reserved_slots = num_reserved_slots;
238  my_max_num_workers = num_slots-num_reserved_slots;
239  my_references = ref_external; // accounts for the master
240 #if __TBB_TASK_PRIORITY
241  my_bottom_priority = my_top_priority = normalized_normal_priority;
242 #endif /* __TBB_TASK_PRIORITY */
243  my_aba_epoch = m.my_arenas_aba_epoch;
244 #if __TBB_ARENA_OBSERVER
245  my_observers.my_arena = this;
246 #endif
247 #if __TBB_PREVIEW_RESUMABLE_TASKS
248  my_co_cache.init(4 * num_slots);
249 #endif
251  // Construct slots. Mark internal synchronization elements for the tools.
252  for( unsigned i = 0; i < my_num_slots; ++i ) {
253  __TBB_ASSERT( !my_slots[i].my_scheduler && !my_slots[i].task_pool, NULL );
254  __TBB_ASSERT( !my_slots[i].task_pool_ptr, NULL );
255  __TBB_ASSERT( !my_slots[i].my_task_pool_size, NULL );
256 #if __TBB_PREVIEW_RESUMABLE_TASKS
257  __TBB_ASSERT( !my_slots[i].my_scheduler_is_recalled, NULL );
258 #endif
259  ITT_SYNC_CREATE(my_slots + i, SyncType_Scheduler, SyncObj_WorkerTaskPool);
260  mailbox(i+1).construct();
261  ITT_SYNC_CREATE(&mailbox(i+1), SyncType_Scheduler, SyncObj_Mailbox);
262  my_slots[i].hint_for_pop = i;
263 #if __TBB_PREVIEW_CRITICAL_TASKS
264  my_slots[i].hint_for_critical = i;
265 #endif
266 #if __TBB_STATISTICS
267  my_slots[i].my_counters = new ( NFS_Allocate(1, sizeof(statistics_counters), NULL) ) statistics_counters;
268 #endif /* __TBB_STATISTICS */
269  }
271  ITT_SYNC_CREATE(&my_task_stream, SyncType_Scheduler, SyncObj_TaskStream);
272 #if __TBB_PREVIEW_CRITICAL_TASKS
273  my_critical_task_stream.initialize(my_num_slots);
274  ITT_SYNC_CREATE(&my_critical_task_stream, SyncType_Scheduler, SyncObj_CriticalTaskStream);
275 #endif
276 #if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
277  my_local_concurrency_mode = false;
278  my_global_concurrency_mode = false;
279 #endif
280 #if !__TBB_FP_CONTEXT
281  my_cpu_ctl_env.get_env();
282 #endif
283 }

References __TBB_ASSERT, tbb::internal::mail_outbox::construct(), tbb::internal::cpu_ctl_env::get_env(), tbb::internal::arena_slot_line2::hint_for_pop, tbb::internal::task_stream< Levels >::initialize(), ITT_SYNC_CREATE, mailbox(), tbb::internal::arena_base::my_aba_epoch, tbb::internal::market::my_arenas_aba_epoch, tbb::internal::arena_base::my_cpu_ctl_env, tbb::internal::arena_base::my_limit, tbb::internal::arena_base::my_market, tbb::internal::arena_base::my_max_num_workers, tbb::internal::arena_base::my_num_reserved_slots, tbb::internal::arena_base::my_num_slots, tbb::internal::arena_base::my_references, my_slots, tbb::internal::arena_base::my_task_stream, tbb::internal::NFS_Allocate(), tbb::internal::NFS_GetLineSize(), num_arena_slots(), and ref_external.

Referenced by allocate_arena().

Here is the call graph for this function:
Here is the caller graph for this function:
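The alignment assertions at the top of the constructor rely on the padded<> idiom that arena inherits through base_type. Below is a minimal, self-contained sketch of that idiom; the NFS_MaxLineSize value and the arena_base_stub type are assumptions for illustration, not TBB's real definitions (the real padded_base also specializes away the pad when the size is already a multiple of a cache line):

#include <cstddef>

const std::size_t NFS_MaxLineSize = 128;   // assumption; the real constant lives in tbb_stddef.h

// Mimics tbb::internal::padded_base<T, S, R>: append S - R bytes of padding
// so the whole object occupies a whole number of cache lines.
template<typename T, std::size_t S = NFS_MaxLineSize, std::size_t R = sizeof(T) % NFS_MaxLineSize>
struct padded : T {
    char pad[S - R];
};

struct arena_base_stub { char data[200]; };  // stand-in; 200 is not a multiple of 128

static_assert( sizeof(padded<arena_base_stub>) % NFS_MaxLineSize == 0,
               "padded<> must round the size up to whole cache lines" );

int main() { return 0; }

This is what makes assertions such as "arena::slot size not multiple of cache line size" and "arena misaligned" hold by construction.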

Member Function Documentation

◆ advertise_new_work()

template<arena::new_work_type work_type>
void tbb::internal::arena::advertise_new_work ( )

If necessary, raise a flag that there is new job in arena.

Definition at line 484 of file arena.h.

484  {
485  if( work_type == work_enqueued ) {
486 #if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
487  if ( as_atomic(my_market->my_num_workers_soft_limit) == 0 && as_atomic(my_global_concurrency_mode) == false )
488  my_market->enable_mandatory_concurrency(this);
489 
490  if ( my_max_num_workers == 0 && my_num_reserved_slots == 1 ) {
491  __TBB_ASSERT(!my_local_concurrency_mode, NULL);
492  my_local_concurrency_mode = true;
494  my_max_num_workers = 1;
496  return;
497  }
498 #endif /* __TBB_ENQUEUE_ENFORCED_CONCURRENCY */
499  // Local memory fence here and below is required to avoid missed wakeups; see the comment below.
500  // Starvation resistant tasks require concurrency, so missed wakeups are unacceptable.
501  atomic_fence();
502  }
503  else if( work_type == wakeup ) {
504  __TBB_ASSERT(my_max_num_workers!=0, "Unexpected worker wakeup request");
505  atomic_fence();
506  }
507  // Double-check idiom that, in case of spawning, is deliberately sloppy about memory fences.
508  // Technically, to avoid missed wakeups, there should be a full memory fence between the point we
509  // released the task pool (i.e. spawned task) and read the arena's state. However, adding such a
510  // fence might hurt overall performance more than it helps, because the fence would be executed
511  // on every task pool release, even when stealing does not occur. Since TBB allows parallelism,
512  // but never promises parallelism, the missed wakeup is not a correctness problem.
513  pool_state_t snapshot = my_pool_state;
514  if( is_busy_or_empty(snapshot) ) {
515  // Attempt to mark as full. The compare_and_swap below is a little unusual because the
516  // result is compared to a value that can be different than the comparand argument.
517  if( my_pool_state.compare_and_swap( SNAPSHOT_FULL, snapshot )==SNAPSHOT_EMPTY ) {
518  if( snapshot!=SNAPSHOT_EMPTY ) {
519  // This thread read "busy" into snapshot, and then another thread transitioned
520  // my_pool_state to "empty" in the meantime, which caused the compare_and_swap above
521  // to fail. Attempt to transition my_pool_state from "empty" to "full".
522  if( my_pool_state.compare_and_swap( SNAPSHOT_FULL, SNAPSHOT_EMPTY )!=SNAPSHOT_EMPTY ) {
523  // Some other thread transitioned my_pool_state from "empty", and hence became
524  // responsible for waking up workers.
525  return;
526  }
527  }
528  // This thread transitioned pool from empty to full state, and thus is responsible for
529  // telling the market that there is work to do.
530 #if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
531  if( work_type == work_spawned ) {
532  if( my_local_concurrency_mode ) {
534  __TBB_ASSERT(!governor::local_scheduler()->is_worker(), "");
535  // There was deliberate oversubscription on 1 core for sake of starvation-resistant tasks.
536  // Now a single active thread (must be the master) supposedly starts a new parallel region
537  // with relaxed sequential semantics, and oversubscription should be avoided.
538  // Demand for workers has been decreased to 0 during SNAPSHOT_EMPTY, so just keep it.
539  my_max_num_workers = 0;
540  my_local_concurrency_mode = false;
541  return;
542  }
543  if ( as_atomic(my_global_concurrency_mode) == true )
544  my_market->mandatory_concurrency_disable( this );
545  }
546 #endif /* __TBB_ENQUEUE_ENFORCED_CONCURRENCY */
547  // TODO: investigate adjusting of arena's demand by a single worker.
548  my_market->adjust_demand( *this, my_max_num_workers );
549  }
550  }
551 }

References __TBB_ASSERT, tbb::internal::market::adjust_demand(), tbb::internal::as_atomic(), tbb::atomic_fence(), is_busy_or_empty(), tbb::internal::governor::local_scheduler(), tbb::internal::arena_base::my_market, tbb::internal::arena_base::my_max_num_workers, tbb::internal::arena_base::my_num_reserved_slots, tbb::internal::market::my_num_workers_soft_limit, tbb::internal::arena_base::my_pool_state, SNAPSHOT_EMPTY, SNAPSHOT_FULL, wakeup, work_enqueued, and work_spawned.

Referenced by tbb::internal::generic_scheduler::get_task(), tbb::internal::generic_scheduler::local_spawn(), and tbb::internal::generic_scheduler::steal_task_from().

Here is the call graph for this function:
Here is the caller graph for this function:
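The deliberately sloppy double-check logic above condenses to the following state-transition sketch. It uses std::atomic and illustrative names in place of tbb::atomic and the real members; note that compare_exchange_strong reports the prior value through its first argument, which stands in for compare_and_swap returning the old value:

#include <atomic>
#include <cstdint>

typedef std::uintptr_t pool_state_t;
const pool_state_t SNAPSHOT_EMPTY = 0;
const pool_state_t SNAPSHOT_FULL  = pool_state_t(-1);

// Returns true iff the calling thread transitioned the pool to FULL and thus
// became responsible for asking the market for workers.
bool try_mark_pool_full( std::atomic<pool_state_t>& pool_state ) {
    pool_state_t snapshot = pool_state.load();
    if ( snapshot >= SNAPSHOT_FULL )
        return false;                      // already full: someone else advertised
    pool_state_t old = snapshot;
    pool_state.compare_exchange_strong( old, SNAPSHOT_FULL );  // 'old' receives the prior value
    if ( old != SNAPSHOT_EMPTY )
        return false;                      // pool was busy and never became empty
    if ( snapshot != SNAPSHOT_EMPTY ) {
        // We read "busy" but the pool turned empty meanwhile; retry EMPTY -> FULL.
        old = SNAPSHOT_EMPTY;
        if ( !pool_state.compare_exchange_strong( old, SNAPSHOT_FULL ) )
            return false;                  // another thread took responsibility
    }
    return true;                           // this thread must wake the workers
}

int main() {
    std::atomic<pool_state_t> pool_state(SNAPSHOT_EMPTY);
    bool must_wake = try_mark_pool_full(pool_state);  // EMPTY -> FULL: caller notifies the market
    (void)must_wake;
    return 0;
}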

◆ allocate_arena()

arena & tbb::internal::arena::allocate_arena ( market & m,
unsigned  num_slots,
unsigned  num_reserved_slots 
)
static

Allocate an instance of arena.

Definition at line 285 of file arena.cpp.

285  {
286  __TBB_ASSERT( sizeof(base_type) + sizeof(arena_slot) == sizeof(arena), "All arena data fields must go to arena_base" );
287  __TBB_ASSERT( sizeof(base_type) % NFS_GetLineSize() == 0, "arena slots area misaligned: wrong padding" );
288  __TBB_ASSERT( sizeof(mail_outbox) == NFS_MaxLineSize, "Mailbox padding is wrong" );
289  size_t n = allocation_size(num_arena_slots(num_slots));
290  unsigned char* storage = (unsigned char*)NFS_Allocate( 1, n, NULL );
291  // Zero all slots to indicate that they are empty
292  memset( storage, 0, n );
293  return *new( storage + num_arena_slots(num_slots) * sizeof(mail_outbox) ) arena(m, num_slots, num_reserved_slots);
294 }

References __TBB_ASSERT, allocation_size(), arena(), tbb::internal::NFS_Allocate(), tbb::internal::NFS_GetLineSize(), tbb::internal::NFS_MaxLineSize, and num_arena_slots().

Referenced by tbb::internal::market::create_arena().

Here is the call graph for this function:
Here is the caller graph for this function:
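A minimal sketch of the layout this function builds, with stand-in types whose sizes are assumptions rather than TBB's: one flat, zeroed allocation holds num_slots mailboxes, then the padded arena base, then the arena slots, and the arena object is placement-new'ed just past the mailboxes:

#include <cstdlib>
#include <new>

struct mail_outbox     { char pad[128]; };   // assumed cache-line-sized stand-ins
struct arena_slot_stub { char pad[128]; };
struct arena_base_stub { char pad[256]; };   // stands in for padded<arena_base>

int main() {
    unsigned n = 4;                          // num_arena_slots(num_slots), always >= 2
    // allocation_size(n): the base part plus one mailbox and one slot per arena slot.
    std::size_t bytes = sizeof(arena_base_stub) + n * (sizeof(mail_outbox) + sizeof(arena_slot_stub));
    unsigned char* storage = static_cast<unsigned char*>(std::calloc(1, bytes)); // zeroed, like the memset
    // The arena object begins right after the n mailboxes, so the slots follow it:
    arena_base_stub* a = new ( storage + n * sizeof(mail_outbox) ) arena_base_stub();
    (void)a;   // mailbox(id) reaches backwards from this address; see mailbox() below
    std::free(storage);
    return 0;
}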

◆ allocation_size()

static int tbb::internal::arena::allocation_size ( unsigned  num_slots)
inline static

Definition at line 300 of file arena.h.

300  {
301  return sizeof(base_type) + num_slots * (sizeof(mail_outbox) + sizeof(arena_slot));
302  }

Referenced by allocate_arena(), and free_arena().

Here is the caller graph for this function:

◆ enqueue_task()

void tbb::internal::arena::enqueue_task ( task & t,
intptr_t  prio,
FastRandom & random 
)

Enqueue a task into the starvation-resistant queue.

Definition at line 597 of file arena.cpp.

598 {
599 #if __TBB_RECYCLE_TO_ENQUEUE
600  __TBB_ASSERT( t.state()==task::allocated || t.state()==task::to_enqueue, "attempt to enqueue task with inappropriate state" );
601 #else
602  __TBB_ASSERT( t.state()==task::allocated, "attempt to enqueue task that is not in 'allocated' state" );
603 #endif
604  t.prefix().state = task::ready;
605  t.prefix().extra_state |= es_task_enqueued; // enqueued task marker
606 
607 #if TBB_USE_ASSERT
608  if( task* parent = t.parent() ) {
609  internal::reference_count ref_count = parent->prefix().ref_count;
610  __TBB_ASSERT( ref_count!=0, "attempt to enqueue task whose parent has a ref_count==0 (forgot to set_ref_count?)" );
611  __TBB_ASSERT( ref_count>0, "attempt to enqueue task whose parent has a ref_count<0" );
612  parent->prefix().extra_state |= es_ref_count_active;
613  }
614  __TBB_ASSERT(t.prefix().affinity==affinity_id(0), "affinity is ignored for enqueued tasks");
615 #endif /* TBB_USE_ASSERT */
616 #if __TBB_PREVIEW_CRITICAL_TASKS
617 
618 #if __TBB_TASK_PRIORITY
619  bool is_critical = internal::is_critical( t ) || prio == internal::priority_critical;
620 #else
621  bool is_critical = internal::is_critical( t );
622 #endif
624  if( is_critical ) {
625  // TODO: consider using of 'scheduler::handled_as_critical'
626  internal::make_critical( t );
627  generic_scheduler* s = governor::local_scheduler_if_initialized();
628  ITT_NOTIFY(sync_releasing, &my_critical_task_stream);
629  if( s && s->my_arena_slot ) {
630  // Scheduler is initialized and it is attached to the arena,
631  // propagate isolation level to critical task
632 #if __TBB_TASK_ISOLATION
633  t.prefix().isolation = s->my_innermost_running_task->prefix().isolation;
634 #endif
635  unsigned& lane = s->my_arena_slot->hint_for_critical;
636  my_critical_task_stream.push( &t, 0, tbb::internal::subsequent_lane_selector(lane) );
637  } else {
638  // Either scheduler is not initialized or it is not attached to the arena
639  // use random lane for the task
640  my_critical_task_stream.push( &t, 0, internal::random_lane_selector(random) );
641  }
642  advertise_new_work<work_spawned>();
643  return;
644  }
645 #endif /* __TBB_PREVIEW_CRITICAL_TASKS */
646 
648 #if __TBB_TASK_PRIORITY
649  intptr_t p = prio ? normalize_priority(priority_t(prio)) : normalized_normal_priority;
650  assert_priority_valid(p);
651 #if __TBB_PREVIEW_CRITICAL_TASKS && __TBB_CPF_BUILD
652  my_task_stream.push( &t, p, internal::random_lane_selector(random) );
653 #else
654  my_task_stream.push( &t, p, random );
655 #endif
656  if ( p != my_top_priority )
657  my_market->update_arena_priority( *this, p );
658 #else /* !__TBB_TASK_PRIORITY */
659  __TBB_ASSERT_EX(prio == 0, "the library is not configured to respect the task priority");
660 #if __TBB_PREVIEW_CRITICAL_TASKS && __TBB_CPF_BUILD
661  my_task_stream.push( &t, 0, internal::random_lane_selector(random) );
662 #else
663  my_task_stream.push( &t, 0, random );
664 #endif
665 #endif /* !__TBB_TASK_PRIORITY */
666  advertise_new_work<work_enqueued>();
667 #if __TBB_TASK_PRIORITY
668  if ( p != my_top_priority )
669  my_market->update_arena_priority( *this, p );
670 #endif /* __TBB_TASK_PRIORITY */
671 }

References __TBB_ASSERT, tbb::internal::task_prefix::affinity, tbb::task::allocated, tbb::internal::es_ref_count_active, tbb::internal::es_task_enqueued, tbb::internal::task_prefix::extra_state, tbb::internal::is_critical(), tbb::internal::task_prefix::isolation, ITT_NOTIFY, tbb::internal::governor::local_scheduler_if_initialized(), tbb::internal::make_critical(), parent, tbb::task::parent(), tbb::task::prefix(), tbb::internal::priority_critical, tbb::task::ready, s, tbb::internal::task_prefix::state, tbb::task::state(), and sync_releasing.

Referenced by tbb::internal::custom_scheduler< SchedulerTraits >::tally_completion_of_predecessor().

Here is the call graph for this function:
Here is the caller graph for this function:
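The task_stream that backs this function is essentially a set of FIFO lanes, each with its own lock, where producers pick a lane via a rotating or random hint so they rarely collide. A minimal sketch of that idea follows; the types and names are illustrative, not TBB's task_stream API:

#include <deque>
#include <mutex>
#include <vector>

// One FIFO lane per likely producer keeps lock contention low.
struct multi_lane_fifo {
    struct lane { std::mutex m; std::deque<void*> q; };
    std::vector<lane> lanes;
    explicit multi_lane_fifo( unsigned n ) : lanes(n) {}

    // Rotate the caller's hint past busy lanes, like subsequent_lane_selector.
    void push( void* item, unsigned& hint ) {
        for (;;) {
            lane& l = lanes[hint % lanes.size()];
            ++hint;
            std::unique_lock<std::mutex> lk( l.m, std::try_to_lock );
            if ( lk.owns_lock() ) { l.q.push_back(item); return; }
        }
    }
};

int main() {
    multi_lane_fifo stream(4);
    unsigned hint = 0;
    int t1, t2;
    stream.push(&t1, hint);
    stream.push(&t2, hint);  // lands in the next lane over
    return 0;
}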

◆ free_arena()

void tbb::internal::arena::free_arena ( )

Completes arena shutdown, destructs and deallocates it.

Definition at line 296 of file arena.cpp.

296  {
297  __TBB_ASSERT( is_alive(my_guard), NULL );
298  __TBB_ASSERT( !my_references, "There are threads in the dying arena" );
299  __TBB_ASSERT( !my_num_workers_requested && !my_num_workers_allotted, "Dying arena requests workers" );
300  __TBB_ASSERT( my_pool_state == SNAPSHOT_EMPTY || !my_max_num_workers, "Inconsistent state of a dying arena" );
301 #if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
302  __TBB_ASSERT( !my_global_concurrency_mode, NULL );
303 #endif
304 #if !__TBB_STATISTICS_EARLY_DUMP
305  GATHER_STATISTIC( dump_arena_statistics() );
306 #endif
307  poison_value( my_guard );
308  intptr_t drained = 0;
309  for ( unsigned i = 0; i < my_num_slots; ++i ) {
310  __TBB_ASSERT( !my_slots[i].my_scheduler, "arena slot is not empty" );
311  // TODO: understand the assertion and modify
312  // __TBB_ASSERT( my_slots[i].task_pool == EmptyTaskPool, NULL );
313  __TBB_ASSERT( my_slots[i].head == my_slots[i].tail, NULL ); // TODO: replace by is_quiescent_local_task_pool_empty
314  my_slots[i].free_task_pool();
315 #if __TBB_STATISTICS
316  NFS_Free( my_slots[i].my_counters );
317 #endif /* __TBB_STATISTICS */
318  drained += mailbox(i+1).drain();
319  }
320  __TBB_ASSERT( my_task_stream.drain()==0, "Not all enqueued tasks were executed");
321 #if __TBB_PREVIEW_RESUMABLE_TASKS
322  // Cleanup coroutines/schedulers cache
323  my_co_cache.cleanup();
324 #endif
325 #if __TBB_PREVIEW_CRITICAL_TASKS
326  __TBB_ASSERT( my_critical_task_stream.drain()==0, "Not all critical tasks were executed");
327 #endif
328 #if __TBB_COUNT_TASK_NODES
329  my_market->update_task_node_count( -drained );
330 #endif /* __TBB_COUNT_TASK_NODES */
331  // remove an internal reference
332  my_market->release( /*is_public=*/false, /*blocking_terminate=*/false );
333 #if __TBB_TASK_GROUP_CONTEXT
334  __TBB_ASSERT( my_default_ctx, "Master thread never entered the arena?" );
335  my_default_ctx->~task_group_context();
336  NFS_Free(my_default_ctx);
337 #endif /* __TBB_TASK_GROUP_CONTEXT */
338 #if __TBB_ARENA_OBSERVER
339  if ( !my_observers.empty() )
340  my_observers.clear();
341 #endif /* __TBB_ARENA_OBSERVER */
342  void* storage = &mailbox(my_num_slots);
343  __TBB_ASSERT( my_references == 0, NULL );
344  __TBB_ASSERT( my_pool_state == SNAPSHOT_EMPTY || !my_max_num_workers, NULL );
345  this->~arena();
346 #if TBB_USE_ASSERT > 1
347  memset( storage, 0, allocation_size(my_num_slots) );
348 #endif /* TBB_USE_ASSERT */
349  NFS_Free( storage );
350 }

References __TBB_ASSERT, allocation_size(), tbb::internal::task_stream< Levels >::drain(), tbb::internal::mail_outbox::drain(), tbb::internal::arena_slot::free_task_pool(), GATHER_STATISTIC, head, mailbox(), tbb::internal::arena_base::my_market, tbb::internal::arena_base::my_max_num_workers, tbb::internal::arena_base::my_num_slots, tbb::internal::arena_base::my_num_workers_allotted, tbb::internal::arena_base::my_num_workers_requested, tbb::internal::arena_base::my_pool_state, tbb::internal::arena_base::my_references, my_slots, tbb::internal::arena_base::my_task_stream, tbb::internal::NFS_Free(), poison_value, tbb::internal::market::release(), SNAPSHOT_EMPTY, and tail.

Referenced by tbb::internal::market::try_destroy_arena().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ has_enqueued_tasks()

bool tbb::internal::arena::has_enqueued_tasks ( )

Check for the presence of enqueued tasks at all priority levels.

Definition at line 426 of file arena.cpp.

426  {
427  // Look for enqueued tasks at all priority levels
428  for ( int p = 0; p < num_priority_levels; ++p )
429  if ( !my_task_stream.empty(p) )
430  return true;
431  return false;
432 }
task_stream< num_priority_levels > my_task_stream
Task pool for the tasks scheduled via task::enqueue() method.
Definition: arena.h:172
static const intptr_t num_priority_levels
void const char const char int ITT_FORMAT __itt_group_sync p
bool empty(int level)
Checks existence of a task.
Definition: task_stream.h:138

References tbb::internal::task_stream< Levels >::empty(), tbb::internal::arena_base::my_task_stream, tbb::internal::num_priority_levels, and p.

Referenced by restore_priority_if_need().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ is_busy_or_empty()

static bool tbb::internal::arena::is_busy_or_empty ( pool_state_t  s)
inline static

No tasks to steal or snapshot is being taken.

Definition at line 331 of file arena.h.

331 { return s < SNAPSHOT_FULL; }

References s, and SNAPSHOT_FULL.

Referenced by advertise_new_work().

Here is the caller graph for this function:
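The predicate works because of how the three pool states are encoded: SNAPSHOT_EMPTY is zero, SNAPSHOT_FULL is the all-ones value, and every "busy" marker is an ordinary pointer value strictly between them. A self-contained illustration of just that encoding:

#include <cassert>
#include <cstdint>

typedef std::uintptr_t pool_state_t;
const pool_state_t SNAPSHOT_EMPTY = 0;
const pool_state_t SNAPSHOT_FULL  = pool_state_t(-1);

bool is_busy_or_empty( pool_state_t s ) { return s < SNAPSHOT_FULL; }

int main() {
    // is_out_of_work() uses the address of a local as a unique "busy" marker;
    // any real pointer value is nonzero and, in practice, below the all-ones state.
    pool_state_t busy = pool_state_t(&busy);
    assert(  is_busy_or_empty(SNAPSHOT_EMPTY) );
    assert(  is_busy_or_empty(busy) );
    assert( !is_busy_or_empty(SNAPSHOT_FULL) );
    return 0;
}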

◆ is_out_of_work()

bool tbb::internal::arena::is_out_of_work ( )

Check if there is job anywhere in arena.

Return true if no job or if arena is being cleaned up.

Definition at line 454 of file arena.cpp.

454  {
455  // TODO: rework it to return at least a hint about where a task was found; better if the task itself.
456  for(;;) {
457  pool_state_t snapshot = my_pool_state;
458  switch( snapshot ) {
459  case SNAPSHOT_EMPTY:
460  return true;
461  case SNAPSHOT_FULL: {
462  // Use unique id for "busy" in order to avoid ABA problems.
463  const pool_state_t busy = pool_state_t(&busy);
464  // Request permission to take snapshot
465  if( my_pool_state.compare_and_swap( busy, SNAPSHOT_FULL )==SNAPSHOT_FULL ) {
466  // Got permission. Take the snapshot.
467  // NOTE: This is not a lock, as the state can be set to FULL at
468  // any moment by a thread that spawns/enqueues new task.
469  size_t n = my_limit;
470  // Make local copies of volatile parameters. Their change during
471  // snapshot taking procedure invalidates the attempt, and returns
472  // this thread into the dispatch loop.
473 #if __TBB_TASK_PRIORITY
474  uintptr_t reload_epoch = __TBB_load_with_acquire( my_reload_epoch );
475  intptr_t top_priority = my_top_priority;
476  // Inspect primary task pools first
477 #endif /* __TBB_TASK_PRIORITY */
478  size_t k;
479  for( k=0; k<n; ++k ) {
480  if( my_slots[k].task_pool != EmptyTaskPool &&
481  __TBB_load_relaxed(my_slots[k].head) < __TBB_load_relaxed(my_slots[k].tail) )
482  {
483  // k-th primary task pool is nonempty and does contain tasks.
484  break;
485  }
486  if( my_pool_state!=busy )
487  return false; // the work was published
488  }
489  __TBB_ASSERT( k <= n, NULL );
490  bool work_absent = k == n;
491 #if __TBB_PREVIEW_CRITICAL_TASKS
492  bool no_critical_tasks = my_critical_task_stream.empty(0);
493  work_absent &= no_critical_tasks;
494 #endif
495 #if __TBB_TASK_PRIORITY
496  // Variable tasks_present indicates presence of tasks at any priority
497  // level, while work_absent refers only to the current priority.
498  bool tasks_present = !work_absent || my_orphaned_tasks;
499  bool dequeuing_possible = false;
500  if ( work_absent ) {
501  // Check for the possibility that recent priority changes
502  // brought some tasks to the current priority level
503 
504  uintptr_t abandonment_epoch = my_abandonment_epoch;
505  // Master thread's scheduler needs special handling as it
506  // may be destroyed at any moment (workers' schedulers are
507  // guaranteed to be alive while at least one thread is in arena).
508  // The lock below excludes concurrency with task group state change
509  // propagation and guarantees lifetime of the master thread.
510  the_context_state_propagation_mutex.lock();
511  work_absent = !may_have_tasks( my_slots[0].my_scheduler, tasks_present, dequeuing_possible );
512  the_context_state_propagation_mutex.unlock();
513  // The following loop is subject to data races. While k-th slot's
514  // scheduler is being examined, corresponding worker can either
515  // leave to RML or migrate to another arena.
516  // But the races are not prevented because all of them are benign.
517  // First, the code relies on the fact that worker thread's scheduler
518  // object persists until the whole library is deinitialized.
519  // Second, in the worst case the races can only cause another
520  // round of stealing attempts to be undertaken. Introducing complex
521  // synchronization into this coldest part of the scheduler's control
522  // flow does not seem to make sense because it both is unlikely to
523  // ever have any observable performance effect, and will require
524  // additional synchronization code on the hotter paths.
525  for( k = 1; work_absent && k < n; ++k ) {
526  if( my_pool_state!=busy )
527  return false; // the work was published
528  work_absent = !may_have_tasks( my_slots[k].my_scheduler, tasks_present, dequeuing_possible );
529  }
530  // Preclude premature switching arena off because of a race in the previous loop.
531  work_absent = work_absent
532  && !__TBB_load_with_acquire(my_orphaned_tasks)
533  && abandonment_epoch == my_abandonment_epoch;
534  }
535 #endif /* __TBB_TASK_PRIORITY */
536  // Test and test-and-set.
537  if( my_pool_state==busy ) {
538 #if __TBB_TASK_PRIORITY
539  bool no_fifo_tasks = my_task_stream.empty(top_priority);
540  work_absent = work_absent && (!dequeuing_possible || no_fifo_tasks)
541  && top_priority == my_top_priority && reload_epoch == my_reload_epoch;
542 #else
543  bool no_fifo_tasks = my_task_stream.empty(0);
544  work_absent = work_absent && no_fifo_tasks;
545 #endif /* __TBB_TASK_PRIORITY */
546  if( work_absent ) {
547 #if __TBB_TASK_PRIORITY
548  if ( top_priority > my_bottom_priority ) {
549  if ( my_market->lower_arena_priority(*this, top_priority - 1, reload_epoch)
550  && !my_task_stream.empty(top_priority) )
551  {
552  atomic_update( my_skipped_fifo_priority, top_priority, std::less<intptr_t>());
553  }
554  }
555  else if ( !tasks_present && !my_orphaned_tasks && no_fifo_tasks ) {
556 #endif /* __TBB_TASK_PRIORITY */
557  // save current demand value before setting SNAPSHOT_EMPTY,
558  // to avoid race with advertise_new_work.
559  int current_demand = (int)my_max_num_workers;
560  if( my_pool_state.compare_and_swap( SNAPSHOT_EMPTY, busy )==busy ) {
561  // This thread transitioned pool to empty state, and thus is
562  // responsible for telling the market that there is no work to do.
563  my_market->adjust_demand( *this, -current_demand );
564  restore_priority_if_need();
565  return true;
566  }
567  return false;
568 #if __TBB_TASK_PRIORITY
569  }
570 #endif /* __TBB_TASK_PRIORITY */
571  }
572  // Undo previous transition SNAPSHOT_FULL-->busy, unless another thread undid it.
573  my_pool_state.compare_and_swap( SNAPSHOT_FULL, busy );
574  }
575  }
576  return false;
577  }
578  default:
579  // Another thread is taking a snapshot.
580  return false;
581  }
582  }
583 }

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_load_with_acquire(), tbb::internal::market::adjust_demand(), tbb::internal::atomic_update(), tbb::internal::task_stream< Levels >::empty(), EmptyTaskPool, head, int, tbb::internal::arena_base::my_limit, tbb::internal::arena_base::my_market, tbb::internal::arena_base::my_max_num_workers, tbb::internal::arena_base::my_pool_state, my_slots, tbb::internal::arena_base::my_task_stream, restore_priority_if_need(), SNAPSHOT_EMPTY, SNAPSHOT_FULL, and tail.

Referenced by on_thread_leaving().

Here is the call graph for this function:
Here is the caller graph for this function:
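Stripped of the priority machinery and the actual slot scan, the control flow above is a three-state snapshot protocol: FULL -> busy (permission to scan) -> EMPTY or back to FULL. The condensed sketch below uses std::atomic in place of tbb::atomic, and pool_is_quiet() is a hypothetical stand-in for the scan over slots and task streams:

#include <atomic>
#include <cstdint>

typedef std::uintptr_t pool_state_t;
const pool_state_t SNAPSHOT_EMPTY = 0;
const pool_state_t SNAPSHOT_FULL  = pool_state_t(-1);

bool pool_is_quiet() { return true; }  // stand-in for scanning slots and task streams

bool is_out_of_work( std::atomic<pool_state_t>& state ) {
    for (;;) {
        pool_state_t snapshot = state.load();
        if ( snapshot == SNAPSHOT_EMPTY ) return true;   // no work anywhere
        if ( snapshot != SNAPSHOT_FULL )  return false;  // another thread is scanning
        pool_state_t busy = pool_state_t(&snapshot);     // unique "busy" marker
        pool_state_t expected = SNAPSHOT_FULL;
        if ( !state.compare_exchange_strong(expected, busy) )
            continue;                                    // lost the race; re-inspect
        // Permission granted: take the snapshot. This is not a lock; any spawner
        // or enqueuer may set the state back to FULL at any moment.
        if ( pool_is_quiet() ) {
            expected = busy;
            if ( state.compare_exchange_strong(expected, SNAPSHOT_EMPTY) )
                return true;   // this thread must lower the market's demand
            return false;      // new work was advertised during the scan
        }
        expected = busy;
        state.compare_exchange_strong(expected, SNAPSHOT_FULL);  // undo FULL -> busy
        return false;
    }
}

int main() {
    std::atomic<pool_state_t> state(SNAPSHOT_FULL);
    bool empty_now = is_out_of_work(state);
    (void)empty_now;
    return 0;
}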

◆ is_recall_requested()

bool tbb::internal::arena::is_recall_requested ( ) const
inline

Check if the recall is requested by the market.

Definition at line 339 of file arena.h.

339  {
340  return num_workers_active() > my_num_workers_allotted;
341  }

References tbb::internal::arena_base::my_num_workers_allotted, and num_workers_active().

Referenced by process().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ mailbox()

mail_outbox& tbb::internal::arena::mailbox ( affinity_id  id)
inline

Get reference to mailbox corresponding to given affinity_id.

Definition at line 305 of file arena.h.

305  {
306  __TBB_ASSERT( 0<id, "affinity id must be positive integer" );
307  __TBB_ASSERT( id <= my_num_slots, "affinity id out of bounds" );
308 
309  return ((mail_outbox*)this)[-(int)id];
310  }

References __TBB_ASSERT, int, and tbb::internal::arena_base::my_num_slots.

Referenced by arena(), tbb::internal::generic_scheduler::attach_mailbox(), free_arena(), and tbb::internal::generic_scheduler::prepare_for_spawning().

Here is the caller graph for this function:
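Because allocate_arena() places the mailboxes immediately in front of the arena object (see above), the cast-and-negative-index expression maps affinity_id 1 to the outbox closest to the arena, id 2 to the one before that, and so on. A stand-alone illustration with a stand-in outbox type:

#include <cassert>

struct mail_outbox { int owner_id; };  // stand-in; the real type is cache-line sized

int main() {
    const int N = 3;
    // Memory order: [ outbox for id 3 ][ id 2 ][ id 1 ][ arena object ... ]
    mail_outbox block[N] = { {3}, {2}, {1} };
    void* arena_addr = block + N;       // the arena starts right after the outboxes
    for ( int id = 1; id <= N; ++id ) {
        mail_outbox* out = ((mail_outbox*)arena_addr) - id;  // same as [-(int)id]
        assert( out->owner_id == id );
    }
    return 0;
}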

◆ num_arena_slots()

static unsigned tbb::internal::arena::num_arena_slots ( unsigned  num_slots)
inline static

Definition at line 296 of file arena.h.

296  {
297  return max(2u, num_slots);
298  }

References tbb::internal::max().

Referenced by allocate_arena(), and arena().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ num_workers_active()

unsigned tbb::internal::arena::num_workers_active ( ) const
inline

The number of workers active in the arena.

Definition at line 334 of file arena.h.

334  {
335  return my_references >> ref_external_bits;
336  }

References tbb::internal::arena_base::my_references, and ref_external_bits.

Referenced by tbb::internal::market::arena_in_need(), and is_recall_requested().

Here is the caller graph for this function:
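num_workers_active() works because my_references packs two counters into one word: the low ref_external_bits bits count master (external) references in steps of ref_external, and the bits above count workers in steps of ref_worker. A small check of that arithmetic:

#include <cassert>

int main() {
    const unsigned ref_external_bits = 12;                 // class constants shown above
    const unsigned ref_external = 1;
    const unsigned ref_worker   = 1u << ref_external_bits;

    unsigned my_references = 2 * ref_external + 3 * ref_worker;  // 2 masters, 3 workers

    assert( (my_references >> ref_external_bits) == 3 );   // num_workers_active()
    assert( (my_references & (ref_worker - 1))   == 2 );   // external references
    return 0;
}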

◆ occupy_free_slot()

template<bool as_worker>
size_t tbb::internal::arena::occupy_free_slot ( generic_scheduler & s)

Tries to occupy a slot in the arena. On success, returns the slot index; if no slot is available, returns out_of_arena.

Definition at line 130 of file arena.cpp.

130  {
131  // Firstly, masters try to occupy reserved slots
132  size_t index = as_worker ? out_of_arena : occupy_free_slot_in_range( s, 0, my_num_reserved_slots );
133  if ( index == out_of_arena ) {
134  // Secondly, all threads try to occupy all non-reserved slots
135  index = occupy_free_slot_in_range( s, my_num_reserved_slots, my_num_slots );
136  // Likely this arena is already saturated
137  if ( index == out_of_arena )
138  return out_of_arena;
139  }
140 
141  ITT_NOTIFY(sync_acquired, my_slots + index);
142  atomic_update( my_limit, (unsigned)(index + 1), std::less<unsigned>() );
143  return index;
144 }

References tbb::internal::atomic_update(), ITT_NOTIFY, tbb::internal::arena_base::my_limit, tbb::internal::arena_base::my_num_reserved_slots, tbb::internal::arena_base::my_num_slots, my_slots, occupy_free_slot_in_range(), out_of_arena, and s.

Referenced by process().

Here is the call graph for this function:
Here is the caller graph for this function:

◆ occupy_free_slot_in_range()

size_t tbb::internal::arena::occupy_free_slot_in_range ( generic_scheduler & s,
size_t  lower,
size_t  upper 
)

Tries to occupy a slot in the specified range.

Definition at line 115 of file arena.cpp.

115  {
116  if ( lower >= upper ) return out_of_arena;
117  // Start search for an empty slot from the one we occupied the last time
118  size_t index = s.my_arena_index;
119  if ( index < lower || index >= upper ) index = s.my_random.get() % (upper - lower) + lower;
120  __TBB_ASSERT( index >= lower && index < upper, NULL );
121  // Find a free slot
122  for ( size_t i = index; i < upper; ++i )
123  if ( occupy_slot(my_slots[i].my_scheduler, s) ) return i;
124  for ( size_t i = lower; i < index; ++i )
125  if ( occupy_slot(my_slots[i].my_scheduler, s) ) return i;
126  return out_of_arena;
127 }

References __TBB_ASSERT, my_slots, tbb::internal::occupy_slot(), out_of_arena, and s.

Referenced by occupy_free_slot().

Here is the call graph for this function:
Here is the caller graph for this function:
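The two loops implement a wrap-around linear probe over [lower, upper): scan from the starting index to the top of the range, then from the bottom of the range back to the start. A minimal sketch, with slot claiming modeled as a compare-exchange on a scheduler pointer, as occupy_slot() does; the exact signature here is illustrative:

#include <atomic>
#include <cstddef>

const std::size_t out_of_arena = ~std::size_t(0);

std::size_t occupy_free_slot_in_range( std::atomic<void*>* slots, std::size_t lower,
                                       std::size_t upper, std::size_t start, void* scheduler ) {
    if ( lower >= upper ) return out_of_arena;
    // Start from the slot occupied last time (or a random one) to spread threads out.
    std::size_t index = ( start >= lower && start < upper ) ? start : lower;
    for ( std::size_t i = index; i < upper; ++i ) {          // [index, upper)
        void* expected = nullptr;
        if ( slots[i].compare_exchange_strong(expected, scheduler) ) return i;
    }
    for ( std::size_t i = lower; i < index; ++i ) {          // wrap around: [lower, index)
        void* expected = nullptr;
        if ( slots[i].compare_exchange_strong(expected, scheduler) ) return i;
    }
    return out_of_arena;
}

int main() {
    std::atomic<void*> slots[4];
    for ( int i = 0; i < 4; ++i ) slots[i].store(nullptr);
    int me;
    std::size_t where = occupy_free_slot_in_range( slots, 1, 4, 2, &me );  // masters use [0, reserved)
    (void)where;
    return 0;
}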

◆ on_thread_leaving()

template<unsigned ref_param>
void tbb::internal::arena::on_thread_leaving ( )
inline

Notification that worker or master leaves its arena.

Definition at line 394 of file arena.h.

394  {
395  //
396  // Implementation of arena destruction synchronization logic contained various
397  // bugs/flaws at the different stages of its evolution, so below is a detailed
398  // description of the issues taken into consideration in the framework of the
399  // current design.
400  //
401  // In case of using fire-and-forget tasks (scheduled via task::enqueue())
402  // the master thread is allowed to leave its arena before all its work is executed,
403  // and the market may temporarily revoke all workers from this arena. Since revoked
404  // workers never attempt to reset the arena state to EMPTY or cancel its request
405  // to RML for threads, the arena object is destroyed only when both the last
406  // thread is leaving it and the arena's state is EMPTY (that is, its master thread
407  // left and it does not contain any work).
408  // Thus resetting the arena to the EMPTY state (as earlier TBB versions did) should
409  // not be done here (or anywhere else in the master thread, for that matter); doing
410  // so can result either in the arena's premature destruction (at least without
411  // additional costly checks in workers) or in unnecessary arena state changes
412  // (and the ensuing worker migration).
413  //
414  // A worker that checks for work presence and transitions the arena to the EMPTY
415  // state (in the snapshot-taking procedure arena::is_out_of_work()) updates
416  // arena::my_pool_state first and only then arena::my_num_workers_requested.
417  // So the check for work absence must be done against the latter field.
418  //
419  // In the time window between decrementing the active threads count and checking
420  // if there is an outstanding request for workers, a new worker thread may arrive,
421  // finish the remaining work, set the arena state to empty, and leave, decrementing
422  // its refcount and destroying the arena. Then the current thread would destroy the
423  // arena a second time. To preclude this, a local copy of the outstanding request
424  // value can be stored before decrementing the active threads count.
425  //
426  // But this technique may cause two other problems. When the stored request is
427  // zero, it is possible that the arena still has threads, and they can generate
428  // new tasks and thus re-establish non-zero requests. Then all the threads can be
429  // revoked (as described above), leaving this thread the last one and causing
430  // it to destroy a non-empty arena.
431  //
432  // The other problem takes place when the stored request is non-zero. Another
433  // thread may complete the work, set the arena state to empty, and leave without
434  // destroying the arena before this thread decrements the refcount. This thread
435  // cannot destroy the arena either. Thus the arena may be "orphaned".
436  //
437  // In both cases we cannot dereference arena pointer after the refcount is
438  // decremented, as our arena may already be destroyed.
439  //
440  // If this is the master thread, the market is protected by the refcount held on it.
441  // In the case of workers, the market's liveness is ensured by the RML connection
442  // rundown protocol, according to which the client (i.e. the market) lives
443  // until the RML server notifies it about connection termination, and this
444  // notification is fired only after all workers return to RML.
445  //
446  // Thus if we decremented the refcount to zero, we ask the market to check the
447  // arena's state (including whether it is still alive) under the lock.
448  //
449  uintptr_t aba_epoch = my_aba_epoch;
450  market* m = my_market;
451  __TBB_ASSERT(my_references >= ref_param, "broken arena reference counter");
452 #if __TBB_STATISTICS_EARLY_DUMP
453  // While still holding a reference to the arena, compute how many external references are left.
454  // If just one, dump statistics.
455  if ( modulo_power_of_two(my_references,ref_worker)==ref_param ) // may only be true with ref_external
456  GATHER_STATISTIC( dump_arena_statistics() );
457 #endif
458 #if __TBB_ENQUEUE_ENFORCED_CONCURRENCY
459  // When there are no workers, someone must free the arena, since
460  // without workers no one calls is_out_of_work().
461  // Skip workerless arenas because they have no demand for workers.
462  // TODO: consider stricter conditions for the cleanup,
463  // because it can create demand for workers,
464  // but the arena can already be empty (and so ready for destruction)
465  // TODO: fix the race: the soft limit may change while we are checking it.
466  if( ref_param==ref_external && my_num_slots != my_num_reserved_slots
467  && 0 == m->my_num_workers_soft_limit && !my_global_concurrency_mode ) {
468  bool is_out = false;
469  for (int i=0; i<num_priority_levels; i++) {
470  is_out = is_out_of_work();
471  if (is_out)
472  break;
473  }
474  // We expect that in the worst case it is enough to have num_priority_levels-1
475  // calls to restore priorities and yet another is_out_of_work() to confirm
476  // that no work was found. But as market::set_active_num_workers() can be called
477  // concurrently, we can't guarantee that the last is_out_of_work() returns true.
478  }
479 #endif
480  if ( (my_references -= ref_param ) == 0 )
481  m->try_destroy_arena( this, aba_epoch );
482 }
#define GATHER_STATISTIC(x)
argument_integer_type modulo_power_of_two(argument_integer_type arg, divisor_integer_type divisor)
A function to compute arg modulo divisor where divisor is a power of 2.
Definition: tbb_stddef.h:382
static const intptr_t num_priority_levels
unsigned my_num_reserved_slots
The number of reserved slots (can be occupied only by masters).
Definition: arena.h:253
uintptr_t my_aba_epoch
ABA prevention marker.
Definition: arena.h:235
unsigned my_num_slots
The number of slots in the arena.
Definition: arena.h:250
static const unsigned ref_worker
Definition: arena.h:328
market * my_market
The market that owns this arena.
Definition: arena.h:232
atomic< unsigned > my_references
Reference counter for the arena.
Definition: arena.h:153
bool is_out_of_work()
Check if there is job anywhere in arena.
Definition: arena.cpp:454
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
static const unsigned ref_external
Reference increment values for externals and workers.
Definition: arena.h:327

References __TBB_ASSERT, GATHER_STATISTIC, is_out_of_work(), tbb::internal::modulo_power_of_two(), tbb::internal::arena_base::my_aba_epoch, tbb::internal::arena_base::my_market, tbb::internal::arena_base::my_num_reserved_slots, tbb::internal::arena_base::my_num_slots, tbb::internal::market::my_num_workers_soft_limit, tbb::internal::arena_base::my_references, tbb::internal::num_priority_levels, ref_external, ref_worker, and tbb::internal::market::try_destroy_arena().

Referenced by tbb::internal::generic_scheduler::cleanup_master().

Here is the call graph for this function:
Here is the caller graph for this function:
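The essence of the long comment above is a capture-then-decrement protocol: everything needed after the decrement (the market pointer and the ABA epoch) is read while the reference is still held, and the final liveness check is delegated to the market under its lock. A minimal sketch of the pattern, with Owner, Object, and try_destroy as hypothetical stand-ins for market, arena, and market::try_destroy_arena():

#include <atomic>
#include <cstdint>

struct Owner;                           // stand-in for market

struct Object {                         // stand-in for arena
    std::atomic<unsigned> refs;
    std::uintptr_t aba_epoch;           // distinguishes incarnations of the object
    Owner* owner;

    void release(unsigned ref_param) {
        std::uintptr_t epoch = aba_epoch;   // copy while still referenced:
        Owner* o = owner;                   // after the decrement, `this` may be gone
        if (refs.fetch_sub(ref_param) == ref_param)  // previous == ref_param => now zero
            try_destroy(o, this, epoch);
    }

    static void try_destroy(Owner*, Object*, std::uintptr_t /*epoch*/) {
        // The real market locks itself, verifies that the object is still
        // registered with a matching epoch and is really empty, and only
        // then deallocates it.
    }
};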

◆ process()

void tbb::internal::arena::process ( generic_scheduler & s)

Registers the worker with the arena and enters TBB scheduler dispatch loop.

Definition at line 146 of file arena.cpp.

146  {
147  __TBB_ASSERT( is_alive(my_guard), NULL );
148  __TBB_ASSERT( governor::is_set(&s), NULL );
149  __TBB_ASSERT( s.my_innermost_running_task == s.my_dummy_task, NULL );
150  __TBB_ASSERT( s.worker_outermost_level(), NULL );
151 
152  __TBB_ASSERT( my_num_slots > 1, NULL );
153 
154  size_t index = occupy_free_slot</*as_worker*/true>( s );
155  if ( index == out_of_arena )
156  goto quit;
157 
158  __TBB_ASSERT( index >= my_num_reserved_slots, "Workers cannot occupy reserved slots" );
159  s.attach_arena( this, index, /*is_master*/false );
160 
161 #if !__TBB_FP_CONTEXT
162  my_cpu_ctl_env.set_env();
163 #endif
164 
165 #if __TBB_ARENA_OBSERVER
166  __TBB_ASSERT( !s.my_last_local_observer, "There cannot be notified local observers when entering arena" );
167  my_observers.notify_entry_observers( s.my_last_local_observer, /*worker=*/true );
168 #endif /* __TBB_ARENA_OBSERVER */
169 
170  // Task pool can be marked as non-empty if the worker occupies the slot left by a master.
171  if ( s.my_arena_slot->task_pool != EmptyTaskPool ) {
172  __TBB_ASSERT( s.my_inbox.is_idle_state(false), NULL );
173  s.local_wait_for_all( *s.my_dummy_task, NULL );
174  __TBB_ASSERT( s.my_inbox.is_idle_state(true), NULL );
175  }
176 
177  for ( ;; ) {
178  __TBB_ASSERT( s.my_innermost_running_task == s.my_dummy_task, NULL );
179  __TBB_ASSERT( s.worker_outermost_level(), NULL );
180  __TBB_ASSERT( is_alive(my_guard), NULL );
181  __TBB_ASSERT( s.is_quiescent_local_task_pool_reset(),
182  "Worker cannot leave arena while its task pool is not reset" );
183  __TBB_ASSERT( s.my_arena_slot->task_pool == EmptyTaskPool, "Empty task pool is not marked appropriately" );
184  // This check prevents relinquishing more than necessary workers because
185  // of the non-atomicity of the decision making procedure
186  if ( is_recall_requested() )
187  break;
188  // Try to steal a task.
189  // Passing reference count is technically unnecessary in this context,
190  // but omitting it here would add checks inside the function.
191  task* t = s.receive_or_steal_task( __TBB_ISOLATION_ARG( s.my_dummy_task->prefix().ref_count, no_isolation ) );
192  if (t) {
193  // A side effect of receive_or_steal_task is that my_innermost_running_task can be set.
194  // But for the outermost dispatch loop it has to be a dummy task.
195  s.my_innermost_running_task = s.my_dummy_task;
196  s.local_wait_for_all(*s.my_dummy_task,t);
197  }
198  }
199 #if __TBB_ARENA_OBSERVER
200  my_observers.notify_exit_observers( s.my_last_local_observer, /*worker=*/true );
201  s.my_last_local_observer = NULL;
202 #endif /* __TBB_ARENA_OBSERVER */
203 #if __TBB_TASK_PRIORITY
204  if ( s.my_offloaded_tasks )
205  orphan_offloaded_tasks( s );
206 #endif /* __TBB_TASK_PRIORITY */
207 #if __TBB_STATISTICS
208  ++s.my_counters.arena_roundtrips;
209  *my_slots[index].my_counters += s.my_counters;
210  s.my_counters.reset();
211 #endif /* __TBB_STATISTICS */
212  __TBB_store_with_release( my_slots[index].my_scheduler, (generic_scheduler*)NULL );
213  s.my_arena_slot = 0; // detached from slot
214  s.my_inbox.detach();
215  __TBB_ASSERT( s.my_inbox.is_idle_state(true), NULL );
216  __TBB_ASSERT( s.my_innermost_running_task == s.my_dummy_task, NULL );
217  __TBB_ASSERT( s.worker_outermost_level(), NULL );
218  __TBB_ASSERT( is_alive(my_guard), NULL );
219 quit:
220  // In contrast to earlier versions of TBB (before 3.0 U5) now it is possible
221  // that arena may be temporarily left unpopulated by threads. See comments in
222  // arena::on_thread_leaving() for more details.
223  on_thread_leaving<ref_worker>();
224 }
const isolation_tag no_isolation
Definition: task.h:144
unsigned my_num_reserved_slots
The number of reserved slots (can be occupied only by masters).
Definition: arena.h:253
arena_slot my_slots[1]
Definition: arena.h:390
bool is_recall_requested() const
Check if the recall is requested by the market.
Definition: arena.h:339
size_t occupy_free_slot(generic_scheduler &s)
Tries to occupy a slot in the arena. On success, returns the slot index; if no slot is available,...
Definition: arena.cpp:130
unsigned my_num_slots
The number of slots in the arena.
Definition: arena.h:250
void __TBB_store_with_release(volatile T &location, V value)
Definition: tbb_machine.h:713
cpu_ctl_env my_cpu_ctl_env
FPU control settings of arena's master thread captured at the moment of arena instantiation.
Definition: arena.h:239
static bool is_set(generic_scheduler *s)
Used to check validity of the local scheduler TLS contents.
Definition: governor.cpp:120
#define __TBB_ISOLATION_ARG(arg1, isolation)
static const size_t out_of_arena
Definition: arena.h:382
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
#define EmptyTaskPool
Definition: scheduler.h:46

References __TBB_ASSERT, __TBB_ISOLATION_ARG, tbb::internal::__TBB_store_with_release(), EmptyTaskPool, is_recall_requested(), tbb::internal::governor::is_set(), tbb::internal::arena_base::my_cpu_ctl_env, tbb::internal::arena_base::my_num_reserved_slots, tbb::internal::arena_base::my_num_slots, my_slots, tbb::internal::no_isolation, occupy_free_slot(), out_of_arena, and tbb::internal::cpu_ctl_env::set_env().

Referenced by tbb::internal::market::process().

Here is the call graph for this function:
Here is the caller graph for this function:
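Stripped of assertions, observers, and statistics, process() reduces to a compact worker round trip: claim a slot, drain any task pool inherited from a master, steal until the market requests a recall, then detach and drop the worker reference. A schematic self-contained sketch with hypothetical callback names:

#include <cstddef>
#include <functional>

// Toy skeleton of the dispatch loop in arena::process().
struct ToyWorkerLoop {
    static constexpr std::size_t npos = ~std::size_t(0);

    std::function<std::size_t()> occupy_slot;       // returns npos when saturated
    std::function<bool()>        inherited_work;    // slot left non-empty by a master?
    std::function<void()>        drain_local_pool;  // local_wait_for_all equivalent
    std::function<bool()>        recall_requested;  // market wants the thread back
    std::function<void()>        steal_and_run;     // receive_or_steal + execute
    std::function<void()>        detach;            // release the slot and mailbox
    std::function<void()>        leave;             // on_thread_leaving<ref_worker>()

    void run() {
        if (occupy_slot() == npos) { leave(); return; }  // arena already saturated
        if (inherited_work())
            drain_local_pool();
        while (!recall_requested())
            steal_and_run();
        detach();
        leave();                                         // may destroy the arena
    }
};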

◆ restore_priority_if_need()

void tbb::internal::arena::restore_priority_if_need ( )
private

If enqueued tasks are found, restore the arena's priority and task presence status.

Definition at line 434 of file arena.cpp.

434  {
435  // Check for the presence of enqueued tasks "lost" on some of
436  // priority levels because updating arena priority and switching
437  // arena into "populated" (FULL) state happen non-atomically.
438  // Imposing atomicity would require task::enqueue() to use a lock,
439  // which is unacceptable.
440  if ( has_enqueued_tasks() ) {
441  advertise_new_work<work_enqueued>();
442 #if __TBB_TASK_PRIORITY
443  // update_arena_priority() expects non-zero arena::my_num_workers_requested,
444  // so must be called after advertise_new_work<work_enqueued>()
445  for ( int p = 0; p < num_priority_levels; ++p )
446  if ( !my_task_stream.empty(p) ) {
447  if ( p < my_bottom_priority || p > my_top_priority )
448  my_market->update_arena_priority(*this, p);
449  }
450 #endif
451  }
452 }
task_stream< num_priority_levels > my_task_stream
Task pool for the tasks scheduled via task::enqueue() method.
Definition: arena.h:172
static const intptr_t num_priority_levels
market * my_market
The market that owns this arena.
Definition: arena.h:232
bool has_enqueued_tasks()
Check for the presence of enqueued tasks at all priority levels.
Definition: arena.cpp:426
bool empty(int level)
Checks existence of a task.
Definition: task_stream.h:138

References tbb::internal::task_stream< Levels >::empty(), has_enqueued_tasks(), tbb::internal::arena_base::my_market, tbb::internal::arena_base::my_task_stream, and tbb::internal::num_priority_levels.

Referenced by is_out_of_work().

Here is the call graph for this function:
Here is the caller graph for this function:
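Because task::enqueue() must stay lock-free, a priority level can transiently look "lost": tasks are present, but the arena's advertised priority range does not cover them. The sketch below restates the recovery scan, assuming three priority levels (the value of num_priority_levels when __TBB_TASK_PRIORITY is enabled) and using illustrative callback names for advertise_new_work<work_enqueued>() and market::update_arena_priority():

#include <array>

struct ToyPriorities {
    static const int kLevels = 3;                 // stands in for num_priority_levels
    std::array<bool, kLevels> level_nonempty{};   // !my_task_stream.empty(p)
    int bottom = 0, top = 0;                      // my_bottom_priority / my_top_priority

    template <typename Advertise, typename Update>
    void restore(Advertise advertise, Update update) {
        bool any = false;
        for (int p = 0; p < kLevels; ++p) any = any || level_nonempty[p];
        if (!any) return;
        advertise();                              // must come first: priority updates
                                                  // expect a non-zero worker request
        for (int p = 0; p < kLevels; ++p)
            if (level_nonempty[p] && (p < bottom || p > top))
                update(p);                        // market->update_arena_priority(*this, p)
    }
};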

Member Data Documentation

◆ my_slots

arena_slot tbb::internal::arena::my_slots[1]

Definition at line 390 of file arena.h.

◆ out_of_arena

const size_t tbb::internal::arena::out_of_arena = ~size_t(0)
static

Definition at line 382 of file arena.h.

Referenced by occupy_free_slot(), occupy_free_slot_in_range(), and process().

◆ ref_external

const unsigned tbb::internal::arena::ref_external = 1
static

Reference increment values for externals and workers.

Definition at line 327 of file arena.h.

Referenced by arena(), tbb::internal::generic_scheduler::cleanup_master(), and on_thread_leaving().

◆ ref_external_bits

const unsigned tbb::internal::arena::ref_external_bits = 12
static

The number of least significant bits for external references.

Definition at line 324 of file arena.h.

Referenced by num_workers_active().

◆ ref_worker

const unsigned tbb::internal::arena::ref_worker = 1<<ref_external_bits
static

Definition at line 328 of file arena.h.

Referenced by tbb::internal::market::arena_in_need(), and on_thread_leaving().
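Taken together, ref_external_bits, ref_external, and ref_worker describe a single packed reference counter: the low 12 bits of my_references count external (master) references, and the bits above them count workers, which lets num_workers_active() read the worker count with one shift. A sketch of the packing; the accessors are an assumption about how the two fields are read back, not TBB's literal code:

#include <atomic>

constexpr unsigned kExternalBits = 12;
constexpr unsigned kRefExternal  = 1u;                  // one master entering
constexpr unsigned kRefWorker    = 1u << kExternalBits; // one worker entering

std::atomic<unsigned> refs{0};

unsigned workers_active()   { return refs.load() >> kExternalBits; }
unsigned externals_active() { return refs.load() & (kRefWorker - 1u); }

// A worker adds kRefWorker on entry and a master adds kRefExternal;
// on_thread_leaving<ref_param>() subtracts the matching increment.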

◆ SNAPSHOT_EMPTY

const pool_state_t tbb::internal::arena::SNAPSHOT_EMPTY = 0
static

No tasks to steal since last snapshot was taken.

Definition at line 318 of file arena.h.

Referenced by advertise_new_work(), free_arena(), is_out_of_work(), tbb::internal::market::try_destroy_arena(), and tbb::internal::generic_scheduler::wait_until_empty().

◆ SNAPSHOT_FULL

const pool_state_t tbb::internal::arena::SNAPSHOT_FULL = pool_state_t(-1)
static

At least one task has been offered for stealing since the last snapshot started.

Definition at line 321 of file arena.h.

Referenced by advertise_new_work(), is_busy_or_empty(), and is_out_of_work().
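SNAPSHOT_EMPTY and SNAPSHOT_FULL are the two stable endpoints of my_pool_state; while a snapshot is being taken, is_out_of_work() parks a unique intermediate marker there (pointer-sized, which is why pool_state_t is uintptr_t). A sketch of the resulting tri-state check, mirroring the is_busy_or_empty() summary above:

#include <cstdint>

using pool_state_t = std::uintptr_t;

constexpr pool_state_t kSnapshotEmpty = 0;                // no tasks since last snapshot
constexpr pool_state_t kSnapshotFull  = pool_state_t(-1); // work advertised since then

// Every state except FULL means nothing is currently advertised for stealing:
// either the pool is empty or a snapshot is still in progress.
bool is_busy_or_empty(pool_state_t s) { return s < kSnapshotFull; }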


The documentation for this class was generated from the following files:

arena.h
arena.cpp

Copyright © 2005-2020 Intel Corporation. All Rights Reserved.

Intel, Pentium, Intel Xeon, Itanium, Intel XScale and VTune are registered trademarks or trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

* Other names and brands may be claimed as the property of others.