Intel(R) Threading Building Blocks Doxygen Documentation  version 4.2.3
tbb::interface9::internal::start_reduce< Range, Body, Partitioner > Class Template Reference

Task type used to split the work of parallel_reduce.

#include <parallel_reduce.h>


Public Member Functions

 start_reduce (const Range &range, Body *body, Partitioner &partitioner)
 	Constructor used for root task.
 
 start_reduce (start_reduce &parent_, typename Partitioner::split_type &split_obj)
 	Splitting constructor used to generate children.
 
 start_reduce (start_reduce &parent_, const Range &r, depth_t d)
 	Construct right child from the given range as response to the demand.
 
void run_body (Range &r)
 	Run body for range.
 
void offer_work (typename Partitioner::split_type &split_obj)
 	Spawn right task; serves as callback for the partitioner.
 
void offer_work (const Range &r, depth_t d=0)
 	Spawn right task; serves as callback for the partitioner.
 
 
- Public Member Functions inherited from tbb::task

virtual ~task ()
 	Destructor.
 
internal::allocate_continuation_proxy allocate_continuation ()
 	Returns proxy for overloaded new that allocates a continuation task of *this.
 
internal::allocate_child_proxy allocate_child ()
 	Returns proxy for overloaded new that allocates a child task of *this.
 
void recycle_as_continuation ()
 	Change this to be a continuation of its former self.
 
void recycle_as_safe_continuation ()
 	Safe variant of recycle_as_continuation; recommended for use.
 
void recycle_as_child_of (task &new_parent)
 	Change this to be a child of new_parent.
 
void recycle_to_reexecute ()
 	Schedule this for reexecution after current execute() returns.
 
void set_ref_count (int count)
 	Set reference count.
 
void increment_ref_count ()
 	Atomically increments reference count.
 
int add_ref_count (int count)
 	Atomically adds to reference count and returns its new value.
 
int decrement_ref_count ()
 	Atomically decrements reference count and returns its new value.
 
void spawn_and_wait_for_all (task &child)
 	Similar to spawn followed by wait_for_all, but more efficient.
 
void __TBB_EXPORTED_METHOD spawn_and_wait_for_all (task_list &list)
 	Similar to spawn followed by wait_for_all, but more efficient.
 
void wait_for_all ()
 	Wait for reference count to become one, and set reference count to zero.
 
task * parent () const
 	Task on whose behalf this task is working, or NULL if this is a root.
 
void set_parent (task *p)
 	Sets parent task pointer to the specified value.
 
task_group_context * context ()
 	This method is deprecated and will be removed in the future.
 
task_group_context * group ()
 	Pointer to the task group descriptor.
 
bool is_stolen_task () const
 	True if task was stolen from the task pool of another thread.
 
bool is_enqueued_task () const
 	True if the task was enqueued.
 
state_type state () const
 	Current execution state.
 
int ref_count () const
 	The internal reference count.
 
bool __TBB_EXPORTED_METHOD is_owned_by_current_thread () const
 	Obsolete; retained only for backward compatibility. Always returns true.
 
void set_affinity (affinity_id id)
 	Set affinity for this task.
 
affinity_id affinity () const
 	Current affinity of this task.
 
void __TBB_EXPORTED_METHOD change_group (task_group_context &ctx)
 	Moves this task from its current group into another one.
 
bool cancel_group_execution ()
 	Initiates cancellation of all tasks in this cancellation group and its subordinate groups.
 
bool is_cancelled () const
 	Returns true if the context has received a cancellation request.
 
__TBB_DEPRECATED void set_group_priority (priority_t p)
 	Changes priority of the task group this task belongs to.
 
__TBB_DEPRECATED priority_t group_priority () const
 	Retrieves current priority of the task group this task belongs to.
 

Static Public Member Functions

static void run (const Range &range, Body &body, Partitioner &partitioner)
 
static void run (const Range &range, Body &body, Partitioner &partitioner, task_group_context &context)
 
- Static Public Member Functions inherited from tbb::task

static internal::allocate_root_proxy allocate_root ()
 	Returns proxy for overloaded new that allocates a root task.
 
static internal::allocate_root_with_context_proxy allocate_root (task_group_context &ctx)
 	Returns proxy for overloaded new that allocates a root task associated with a user-supplied context.
 
static void spawn_root_and_wait (task &root)
 	Spawn task allocated by allocate_root, wait for it to complete, and deallocate it.
 
static void spawn_root_and_wait (task_list &root_list)
 	Spawn root tasks on list and wait for all of them to finish.
 
static void enqueue (task &t)
 	Enqueue task for starvation-resistant execution.
 
static void enqueue (task &t, priority_t p)
 	Enqueue task for starvation-resistant execution at the specified priority level.
 
static void enqueue (task &t, task_arena &arena, priority_t p=priority_t(0))
 	Enqueue task in task_arena.
 
static task & __TBB_EXPORTED_FUNC self ()
 	The innermost task being executed or destroyed by the current thread at the moment.
 

Private Types

typedef finish_reduce< Body > finish_type
 
Private Member Functions

task * execute () __TBB_override
 	Should be overridden by derived classes.
 
void note_affinity (affinity_id id) __TBB_override
 	Update affinity info, if any.
 
Private Attributes

Body * my_body
 
Range my_range
 
Partitioner::task_partition_type my_partition
 
reduction_context my_context
 
Friends

template<typename Body_ >
class finish_reduce
 
Additional Inherited Members

- Public Types inherited from tbb::task

enum  state_type {
  executing, reexecute, ready, allocated,
  freed, recycle
}
 	Enumeration of task states that the scheduler considers.
 
typedef internal::affinity_id affinity_id
 	An id as used for specifying affinity.
 
- Protected Member Functions inherited from tbb::task

 task ()
 	Default constructor.
 

Detailed Description

template<typename Range, typename Body, typename Partitioner>
class tbb::interface9::internal::start_reduce< Range, Body, Partitioner >

Task type used to split the work of parallel_reduce.

Definition at line 85 of file parallel_reduce.h.

Member Typedef Documentation

◆ finish_type

template<typename Range, typename Body, typename Partitioner>
typedef finish_reduce<Body> tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::finish_type
private

Definition at line 86 of file parallel_reduce.h.

Constructor & Destructor Documentation

◆ start_reduce() [1/3]

template<typename Range, typename Body, typename Partitioner>
tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::start_reduce ( const Range &  range,
Body *  body,
Partitioner &  partitioner 
)
inline

Constructor used for root task.

Definition at line 101 of file parallel_reduce.h.

◆ start_reduce() [2/3]

template<typename Range, typename Body, typename Partitioner>
tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::start_reduce ( start_reduce< Range, Body, Partitioner > &  parent_,
typename Partitioner::split_type &  split_obj 
)
inline

Splitting constructor used to generate children.

parent_ becomes left child. Newly constructed object is right child.

Definition at line 110 of file parallel_reduce.h.

start_reduce( start_reduce& parent_, typename Partitioner::split_type& split_obj ) :
    my_body(parent_.my_body),
    my_range(parent_.my_range, split_obj),
    my_partition(parent_.my_partition, split_obj),
    my_context(right_child)
{
    my_partition.set_affinity(*this);
    parent_.my_context = left_child;
}

References tbb::interface9::internal::left_child, and tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::my_context.

◆ start_reduce() [3/3]

template<typename Range, typename Body, typename Partitioner>
tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::start_reduce ( start_reduce< Range, Body, Partitioner > &  parent_,
const Range &  r,
depth_t  d 
)
inline

Construct right child from the given range as response to the demand.

parent_ remains left child. Newly constructed object is right child.

Definition at line 121 of file parallel_reduce.h.

start_reduce( start_reduce& parent_, const Range& r, depth_t d ) :
    my_body(parent_.my_body),
    my_range(r),
    my_partition(parent_.my_partition, split()),
    my_context(right_child)
{
    my_partition.set_affinity(*this);
    my_partition.align_depth( d ); // TODO: move into constructor of partitioner
    parent_.my_context = left_child;
}

References tbb::interface9::internal::left_child, and tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::my_context.

Member Function Documentation

◆ execute()

template<typename Range , typename Body , typename Partitioner >
task * tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::execute ( )
privatevirtual

Should be overridden by derived classes.

Implements tbb::task.

Definition at line 181 of file parallel_reduce.h.

{
    my_partition.check_being_stolen( *this );
    if( my_context==right_child ) {
        finish_type* parent_ptr = static_cast<finish_type*>(parent());
        if( !itt_load_word_with_acquire(parent_ptr->my_body) ) { // TODO: replace by is_stolen_task() or by parent_ptr->ref_count() == 2???
            my_body = new( parent_ptr->zombie_space.begin() ) Body(*my_body,split());
            parent_ptr->has_right_zombie = true;
        }
    } else __TBB_ASSERT(my_context==root_task,NULL); // because left leaf spawns right leafs without recycling
    my_partition.execute(*this, my_range);
    if( my_context==left_child ) {
        finish_type* parent_ptr = static_cast<finish_type*>(parent());
        __TBB_ASSERT(my_body!=parent_ptr->zombie_space.begin(),NULL);
        itt_store_word_with_release(parent_ptr->my_body, my_body );
    }
    return NULL;
}

References __TBB_ASSERT, tbb::interface9::internal::finish_reduce< Body >::has_right_zombie, tbb::internal::itt_load_word_with_acquire(), tbb::internal::itt_store_word_with_release(), tbb::interface9::internal::left_child, tbb::interface9::internal::finish_reduce< Body >::my_body, parent, tbb::interface9::internal::right_child, tbb::interface9::internal::root_task, and tbb::interface9::internal::finish_reduce< Body >::zombie_space.


◆ note_affinity()

template<typename Range, typename Body, typename Partitioner>
void tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::note_affinity ( affinity_id  id)
inlineprivatevirtual

Update affinity info, if any.

Reimplemented from tbb::task.

Definition at line 93 of file parallel_reduce.h.

{
    my_partition.note_affinity( id );
}

◆ offer_work() [1/2]

template<typename Range, typename Body, typename Partitioner>
void tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::offer_work ( typename Partitioner::split_type &  split_obj)
inline

Spawn right task; serves as callback for the partitioner.

Definition at line 154 of file parallel_reduce.h.

{
    task *tasks[2];
    allocate_sibling(static_cast<task*>(this), tasks, sizeof(start_reduce), sizeof(finish_type));
    new((void*)tasks[0]) finish_type(my_context);
    new((void*)tasks[1]) start_reduce(*this, split_obj);
    spawn(*tasks[1]);
}

References tbb::interface9::internal::allocate_sibling().


◆ offer_work() [2/2]

template<typename Range, typename Body, typename Partitioner>
void tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::offer_work ( const Range &  r,
depth_t  d = 0 
)
inline

Spawn right task; serves as callback for the partitioner.

Definition at line 162 of file parallel_reduce.h.

{
    task *tasks[2];
    allocate_sibling(static_cast<task*>(this), tasks, sizeof(start_reduce), sizeof(finish_type));
    new((void*)tasks[0]) finish_type(my_context);
    new((void*)tasks[1]) start_reduce(*this, r, d);
    spawn(*tasks[1]);
}

References tbb::interface9::internal::allocate_sibling().


◆ run() [1/2]

template<typename Range, typename Body, typename Partitioner>
static void tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::run ( const Range &  range,
Body &  body,
Partitioner &  partitioner 
)
inlinestatic

Definition at line 131 of file parallel_reduce.h.

{
    if( !range.empty() ) {
#if !__TBB_TASK_GROUP_CONTEXT || TBB_JOIN_OUTER_TASK_GROUP
        task::spawn_root_and_wait( *new(task::allocate_root()) start_reduce(range,&body,partitioner) );
#else
        // Bound context prevents exceptions from body to affect nesting or sibling algorithms,
        // and allows users to handle exceptions safely by wrapping parallel_for in the try-block.
        task_group_context context(PARALLEL_REDUCE);
        task::spawn_root_and_wait( *new(task::allocate_root(context)) start_reduce(range,&body,partitioner) );
#endif /* __TBB_TASK_GROUP_CONTEXT && !TBB_JOIN_OUTER_TASK_GROUP */
    }
}

References tbb::task::allocate_root(), and tbb::task::spawn_root_and_wait().

Referenced by tbb::parallel_reduce().


◆ run() [2/2]

template<typename Range, typename Body, typename Partitioner>
static void tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::run ( const Range &  range,
Body &  body,
Partitioner &  partitioner,
task_group_context context 
)
inlinestatic

Definition at line 144 of file parallel_reduce.h.

{
    if( !range.empty() )
        task::spawn_root_and_wait( *new(task::allocate_root(context)) start_reduce(range,&body,partitioner) );
}

References tbb::task::allocate_root(), and tbb::task::spawn_root_and_wait().


◆ run_body()

template<typename Range, typename Body, typename Partitioner>
void tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::run_body ( Range &  r)
inline

Run body for range.

Definition at line 150 of file parallel_reduce.h.

{ (*my_body)( r ); }

Friends And Related Function Documentation

◆ finish_reduce

template<typename Range, typename Body, typename Partitioner>
template<typename Body_ >
friend class finish_reduce
friend

Definition at line 97 of file parallel_reduce.h.

Member Data Documentation

◆ my_body

template<typename Range, typename Body, typename Partitioner>
Body* tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::my_body
private

Definition at line 87 of file parallel_reduce.h.

◆ my_context

template<typename Range, typename Body, typename Partitioner>
reduction_context tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::my_context
private

◆ my_partition

template<typename Range, typename Body, typename Partitioner>
Partitioner::task_partition_type tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::my_partition
private

Definition at line 89 of file parallel_reduce.h.

◆ my_range

template<typename Range, typename Body, typename Partitioner>
Range tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::my_range
private

Definition at line 88 of file parallel_reduce.h.


The documentation for this class was generated from the following file: parallel_reduce.h

Copyright © 2005-2020 Intel Corporation. All Rights Reserved.

Intel, Pentium, Intel Xeon, Itanium, Intel XScale and VTune are registered trademarks or trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

* Other names and brands may be claimed as the property of others.