Intel(R) Threading Building Blocks Doxygen Documentation  version 4.2.3
tbb::internal::pipeline_root_task Class Reference
Inheritance diagram for tbb::internal::pipeline_root_task:
Collaboration diagram for tbb::internal::pipeline_root_task:

Public Member Functions

 pipeline_root_task (pipeline &pipeline)
 
- Public Member Functions inherited from tbb::task
virtual ~task ()
 Destructor. More...
 
internal::allocate_continuation_proxy & allocate_continuation ()
 Returns proxy for overloaded new that allocates a continuation task of *this. More...
 
internal::allocate_child_proxy & allocate_child ()
 Returns proxy for overloaded new that allocates a child task of *this. More...
 
void recycle_as_continuation ()
 Change this to be a continuation of its former self. More...
 
void recycle_as_safe_continuation ()
 Recommended for use; safe variant of recycle_as_continuation. More...
 
void recycle_as_child_of (task &new_parent)
 Change this to be a child of new_parent. More...
 
void recycle_to_reexecute ()
 Schedule this for reexecution after current execute() returns. More...
 
void set_ref_count (int count)
 Set reference count. More...
 
void increment_ref_count ()
 Atomically increment reference count. More...
 
int add_ref_count (int count)
 Atomically adds to reference count and returns its new value. More...
 
int decrement_ref_count ()
 Atomically decrements reference count and returns its new value. More...
 
void spawn_and_wait_for_all (task &child)
 Similar to spawn followed by wait_for_all, but more efficient. More...
 
void __TBB_EXPORTED_METHOD spawn_and_wait_for_all (task_list &list)
 Similar to spawn followed by wait_for_all, but more efficient. More...
 
void wait_for_all ()
 Wait for reference count to become one, and set reference count to zero. More...
 
task * parent () const
 task on whose behalf this task is working, or NULL if this is a root. More...
 
void set_parent (task *p)
 sets parent task pointer to specified value More...
 
task_group_context * context ()
 This method is deprecated and will be removed in the future. More...
 
task_group_context * group ()
 Pointer to the task group descriptor. More...
 
bool is_stolen_task () const
 True if task was stolen from the task pool of another thread. More...
 
bool is_enqueued_task () const
 True if the task was enqueued. More...
 
state_type state () const
 Current execution state. More...
 
int ref_count () const
 The internal reference count. More...
 
bool __TBB_EXPORTED_METHOD is_owned_by_current_thread () const
 Obsolete, and only retained for the sake of backward compatibility. Always returns true. More...
 
void set_affinity (affinity_id id)
 Set affinity for this task. More...
 
affinity_id affinity () const
 Current affinity of this task. More...
 
virtual void __TBB_EXPORTED_METHOD note_affinity (affinity_id id)
 Invoked by scheduler to notify task that it ran on unexpected thread. More...
 
void __TBB_EXPORTED_METHOD change_group (task_group_context &ctx)
 Moves this task from its current group into another one. More...
 
bool cancel_group_execution ()
 Initiates cancellation of all tasks in this cancellation group and its subordinate groups. More...
 
bool is_cancelled () const
 Returns true if the context has received cancellation request. More...
 
__TBB_DEPRECATED void set_group_priority (priority_t p)
 Changes priority of the task group this task belongs to. More...
 
__TBB_DEPRECATED priority_t group_priority () const
 Retrieves current priority of the task group this task belongs to. More...
 

Private Member Functions

task * execute () __TBB_override
 Should be overridden by derived classes. More...
 

Private Attributes

pipeline & my_pipeline
 
bool do_segment_scanning
 

Additional Inherited Members

- Public Types inherited from tbb::task
enum  state_type {
  executing, reexecute, ready, allocated,
  freed, recycle
}
 Enumeration of task states that the scheduler considers. More...
 
typedef internal::affinity_id affinity_id
 An id as used for specifying affinity. More...
 
- Static Public Member Functions inherited from tbb::task
static internal::allocate_root_proxy allocate_root ()
 Returns proxy for overloaded new that allocates a root task. More...
 
static internal::allocate_root_with_context_proxy allocate_root (task_group_context &ctx)
 Returns proxy for overloaded new that allocates a root task associated with user supplied context. More...
 
static void spawn_root_and_wait (task &root)
 Spawn task allocated by allocate_root, wait for it to complete, and deallocate it. More...
 
static void spawn_root_and_wait (task_list &root_list)
 Spawn root tasks on list and wait for all of them to finish. More...
 
static void enqueue (task &t)
 Enqueue task for starvation-resistant execution. More...
 
static void enqueue (task &t, priority_t p)
 Enqueue task for starvation-resistant execution on the specified priority level. More...
 
static void enqueue (task &t, task_arena &arena, priority_t p=priority_t(0))
 Enqueue task in task_arena. More...
 
static task &__TBB_EXPORTED_FUNC self ()
 The innermost task being executed or destroyed by the current thread at the moment. More...
 
- Protected Member Functions inherited from tbb::task
 task ()
 Default constructor. More...
 

Detailed Description

Definition at line 400 of file pipeline.cpp.

Constructor & Destructor Documentation

◆ pipeline_root_task()

tbb::internal::pipeline_root_task::pipeline_root_task ( pipeline &  pipeline)
inline

Definition at line 472 of file pipeline.cpp.

472  : my_pipeline(pipeline), do_segment_scanning(false)
473  {
474  __TBB_ASSERT( my_pipeline.filter_list, NULL );
475  filter* first = my_pipeline.filter_list;
476  if( (first->my_filter_mode & first->version_mask) >= __TBB_PIPELINE_VERSION(5) ) {
477  // Scanning the pipeline for segments
478  filter* head_of_previous_segment = first;
479  for( filter* subfilter=first->next_filter_in_pipeline;
480  subfilter!=NULL;
481  subfilter=subfilter->next_filter_in_pipeline )
482  {
483  if( subfilter->prev_filter_in_pipeline->is_bound() && !subfilter->is_bound() ) {
484  do_segment_scanning = true;
485  head_of_previous_segment->next_segment = subfilter;
486  head_of_previous_segment = subfilter;
487  }
488  }
489  }
490  }

References __TBB_ASSERT, __TBB_PIPELINE_VERSION, do_segment_scanning, tbb::internal::first(), my_pipeline, and tbb::filter::next_segment.

Here is the call graph for this function:
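The scan above only depends on the chain of filters and on which of them are thread-bound. As a purely illustrative sketch (the demo_filter type and scan_segments function below are hypothetical stand-ins, not the TBB classes), the same logic can be written against a plain singly linked list: a new segment starts at every non-thread-bound filter that immediately follows a thread-bound one, and the head of the previous segment is linked to it via next_segment.

#include <cassert>
#include <cstddef>

// Hypothetical stand-in for tbb::filter, used only to illustrate the scan above.
struct demo_filter {
    bool         is_thread_bound = false;          // plays the role of filter::is_bound()
    demo_filter* next_filter_in_pipeline = NULL;
    demo_filter* next_segment = NULL;
};

// Mirrors the constructor's loop: a new segment starts at every non-thread-bound
// filter that directly follows a thread-bound one; the head of the previous
// segment is linked to it through next_segment.
bool scan_segments( demo_filter* first ) {
    assert( first != NULL );
    bool found_segments = false;                    // corresponds to do_segment_scanning
    demo_filter* head_of_previous_segment = first;
    demo_filter* prev = first;
    for( demo_filter* f = first->next_filter_in_pipeline; f != NULL;
         prev = f, f = f->next_filter_in_pipeline )
    {
        if( prev->is_thread_bound && !f->is_thread_bound ) {
            found_segments = true;
            head_of_previous_segment->next_segment = f;
            head_of_previous_segment = f;
        }
    }
    return found_segments;
}

int main() {
    // parallel -> thread-bound -> parallel -> thread-bound -> parallel
    demo_filter f[5];
    f[1].is_thread_bound = f[3].is_thread_bound = true;
    for( int i = 0; i < 4; ++i ) f[i].next_filter_in_pipeline = &f[i+1];
    assert( scan_segments( &f[0] ) );
    assert( f[0].next_segment == &f[2] );           // head of first segment links to second
    assert( f[2].next_segment == &f[4] );           // head of second segment links to third
    return 0;
}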

Member Function Documentation

◆ execute()

task* tbb::internal::pipeline_root_task::execute ( )
inline private virtual

Should be overridden by derived classes.

Implements tbb::task.

Definition at line 404 of file pipeline.cpp.

404  {
405  if( !my_pipeline.end_of_input )
406  if( !my_pipeline.filter_list->is_bound() )
407  if( my_pipeline.input_tokens > 0 ) {
408  recycle_as_continuation();
409  set_ref_count(1);
410  return new( allocate_child() ) stage_task( my_pipeline );
411  }
412  if( do_segment_scanning ) {
413  filter* current_filter = my_pipeline.filter_list->next_segment;
414  /* first non-thread-bound filter that follows thread-bound one
415  and may have valid items to process */
416  filter* first_suitable_filter = current_filter;
417  while( current_filter ) {
418  __TBB_ASSERT( !current_filter->is_bound(), "filter is thread-bound?" );
419  __TBB_ASSERT( current_filter->prev_filter_in_pipeline->is_bound(), "previous filter is not thread-bound?" );
420  if( !my_pipeline.end_of_input || current_filter->has_more_work())
421  {
422  task_info info;
423  info.reset();
424  task* bypass = NULL;
425  int refcnt = 0;
426  task_list list;
427  // No new tokens are created; it's OK to process all waiting tokens.
428  // If the filter is serial, the second call to return_item will return false.
429  while( current_filter->my_input_buffer->return_item(info, !current_filter->is_serial()) ) {
430  task* t = new( allocate_child() ) stage_task( my_pipeline, current_filter, info );
431  if( ++refcnt == 1 )
432  bypass = t;
433  else // there's more than one task
434  list.push_back(*t);
435  // TODO: limit the list size (to arena size?) to spawn tasks sooner
436  __TBB_ASSERT( refcnt <= int(my_pipeline.token_counter), "token counting error" );
437  info.reset();
438  }
439  if( refcnt ) {
440  set_ref_count( refcnt );
441  if( refcnt > 1 )
442  spawn(list);
443  recycle_as_continuation();
444  return bypass;
445  }
446  current_filter = current_filter->next_segment;
447  if( !current_filter ) {
448  if( !my_pipeline.end_of_input ) {
449  recycle_as_continuation();
450  return this;
451  }
452  current_filter = first_suitable_filter;
453  __TBB_Yield();
454  }
455  } else {
456  /* The preceding pipeline segment is empty.
457  Fast-forward to the next post-TBF segment. */
458  first_suitable_filter = first_suitable_filter->next_segment;
459  current_filter = first_suitable_filter;
460  }
461  } /* while( current_filter ) */
462  return NULL;
463  } else {
464  if( !my_pipeline.end_of_input ) {
465  recycle_as_continuation();
466  return this;
467  }
468  return NULL;
469  }
470  }

References __TBB_ASSERT, __TBB_Yield, tbb::task::allocate_child(), do_segment_scanning, tbb::filter::has_more_work(), tbb::filter::is_bound(), tbb::filter::is_serial(), tbb::filter::my_input_buffer, my_pipeline, tbb::filter::next_segment, tbb::filter::prev_filter_in_pipeline, tbb::task_list::push_back(), tbb::task::recycle_as_continuation(), tbb::internal::task_info::reset(), tbb::internal::input_buffer::return_item(), and tbb::task::set_ref_count().

Here is the call graph for this function:
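The control flow above is an instance of the general tbb::task continuation pattern: the root recycles itself as its own continuation, sets the reference count to the number of children it created, and returns one of those children so the scheduler runs it immediately (task bypass) instead of spawning it. A minimal, hypothetical sketch of that pattern, deliberately unrelated to the pipeline classes and shown only to make the recycling and bypassing above easier to follow:

#include "tbb/task.h"

// Hypothetical leaf task: one unit of work, for illustration only.
class leaf_task : public tbb::task {
    tbb::task* execute() override {
        // ... process one item ...
        return NULL;
    }
};

// Hypothetical driver showing the recycle-as-continuation / scheduler-bypass
// pattern that pipeline_root_task::execute() uses above.
class demo_root_task : public tbb::task {
    int items_left;
public:
    explicit demo_root_task( int n ) : items_left(n) {}
    tbb::task* execute() override {
        if( items_left == 0 )
            return NULL;                   // no work left: let the root complete
        --items_left;
        recycle_as_continuation();         // *this runs again once the child finishes
        set_ref_count(1);                  // exactly one child will report back to *this
        tbb::task* child = new( allocate_child() ) leaf_task;
        return child;                      // scheduler bypass: run the child next, no spawn
    }
};

int main() {
    tbb::task& root = *new( tbb::task::allocate_root() ) demo_root_task(10);
    tbb::task::spawn_root_and_wait( root );  // returns after execute() finally returns NULL
    return 0;
}

pipeline_root_task::execute() follows the same shape, except that it may create several children at once (set_ref_count(refcnt) plus spawn(list)) and uses the first child as the bypass task.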

Member Data Documentation

◆ do_segment_scanning

bool tbb::internal::pipeline_root_task::do_segment_scanning
private

Definition at line 402 of file pipeline.cpp.

Referenced by execute(), and pipeline_root_task().

◆ my_pipeline

pipeline& tbb::internal::pipeline_root_task::my_pipeline
private

Definition at line 401 of file pipeline.cpp.

Referenced by execute(), and pipeline_root_task().


The documentation for this class was generated from the following file:
pipeline.cpp

Copyright © 2005-2020 Intel Corporation. All Rights Reserved.

Intel, Pentium, Intel Xeon, Itanium, Intel XScale and VTune are registered trademarks or trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

* Other names and brands may be claimed as the property of others.