Intel(R) Threading Building Blocks Doxygen Documentation  version 4.2.3
tbb::internal::pipeline_root_task Class Reference
Inheritance diagram for tbb::internal::pipeline_root_task:
Collaboration diagram for tbb::internal::pipeline_root_task:

Public Member Functions

 pipeline_root_task (pipeline &pipeline)
 
- Public Member Functions inherited from tbb::task
virtual ~task ()
 Destructor. More...
 
internal::allocate_continuation_proxy & allocate_continuation ()
 Returns proxy for overloaded new that allocates a continuation task of *this. More...
 
internal::allocate_child_proxy & allocate_child ()
 Returns proxy for overloaded new that allocates a child task of *this. More...
 
void recycle_as_continuation ()
 Change this to be a continuation of its former self. More...
 
void recycle_as_safe_continuation ()
 Recommended, safer variant of recycle_as_continuation. More...
 
void recycle_as_child_of (task &new_parent)
 Change this to be a child of new_parent. More...
 
void recycle_to_reexecute ()
 Schedule this for reexecution after current execute() returns. More...
 
void set_ref_count (int count)
 Set reference count. More...
 
void increment_ref_count ()
 Atomically increment reference count. More...
 
int add_ref_count (int count)
 Atomically adds to reference count and returns its new value. More...
 
int decrement_ref_count ()
 Atomically decrements reference count and returns its new value. More...
 
void spawn_and_wait_for_all (task &child)
 Similar to spawn followed by wait_for_all, but more efficient. More...
 
void __TBB_EXPORTED_METHOD spawn_and_wait_for_all (task_list &list)
 Similar to spawn followed by wait_for_all, but more efficient. More...
 
void wait_for_all ()
 Wait for reference count to become one, and set reference count to zero. More...
 
task * parent () const
 task on whose behalf this task is working, or NULL if this is a root. More...
 
void set_parent (task *p)
 Sets the parent task pointer to the specified value. More...
 
task_group_context * context ()
 This method is deprecated and will be removed in the future. More...
 
task_group_context * group ()
 Pointer to the task group descriptor. More...
 
bool is_stolen_task () const
 True if task was stolen from the task pool of another thread. More...
 
state_type state () const
 Current execution state. More...
 
int ref_count () const
 The internal reference count. More...
 
bool __TBB_EXPORTED_METHOD is_owned_by_current_thread () const
 Obsolete, and only retained for the sake of backward compatibility. Always returns true. More...
 
void set_affinity (affinity_id id)
 Set affinity for this task. More...
 
affinity_id affinity () const
 Current affinity of this task. More...
 
virtual void __TBB_EXPORTED_METHOD note_affinity (affinity_id id)
 Invoked by scheduler to notify task that it ran on unexpected thread. More...
 
void __TBB_EXPORTED_METHOD change_group (task_group_context &ctx)
 Moves this task from its current group into another one. More...
 
bool cancel_group_execution ()
 Initiates cancellation of all tasks in this cancellation group and its subordinate groups. More...
 
bool is_cancelled () const
 Returns true if the context has received cancellation request. More...
 
void set_group_priority (priority_t p)
 Changes priority of the task group this task belongs to. More...
 
priority_t group_priority () const
 Retrieves current priority of the task group this task belongs to. More...
 

Private Member Functions

task * execute () __TBB_override
 Should be overridden by derived classes. More...
 

Private Attributes

pipeline & my_pipeline
 
bool do_segment_scanning
 

Additional Inherited Members

- Public Types inherited from tbb::task
enum  state_type {
  executing, reexecute, ready, allocated,
  freed, recycle
}
 Enumeration of task states that the scheduler considers. More...
 
typedef internal::affinity_id affinity_id
 An id as used for specifying affinity. More...
 
- Static Public Member Functions inherited from tbb::task
static internal::allocate_root_proxy allocate_root ()
 Returns proxy for overloaded new that allocates a root task. More...
 
static internal::allocate_root_with_context_proxy allocate_root (task_group_context &ctx)
 Returns proxy for overloaded new that allocates a root task associated with user supplied context. More...
 
static void spawn_root_and_wait (task &root)
 Spawn task allocated by allocate_root, wait for it to complete, and deallocate it. More...
 
static void spawn_root_and_wait (task_list &root_list)
 Spawn root tasks on list and wait for all of them to finish. More...
 
static void enqueue (task &t)
 Enqueue task for starvation-resistant execution. More...
 
static void enqueue (task &t, priority_t p)
 Enqueue task for starvation-resistant execution on the specified priority level. More...
 
static task &__TBB_EXPORTED_FUNC self ()
 The innermost task being executed or destroyed by the current thread at the moment. More...
 
- Protected Member Functions inherited from tbb::task
 task ()
 Default constructor. More...
 

Detailed Description

Definition at line 404 of file pipeline.cpp.

Constructor & Destructor Documentation

◆ pipeline_root_task()

tbb::internal::pipeline_root_task::pipeline_root_task ( pipeline & pipeline )
inline

Definition at line 476 of file pipeline.cpp.

476      : my_pipeline(pipeline), do_segment_scanning(false)
477  {
478      __TBB_ASSERT( my_pipeline.filter_list, NULL );
479      filter* first = my_pipeline.filter_list;
480      if( (first->my_filter_mode & first->version_mask) >= __TBB_PIPELINE_VERSION(5) ) {
481          // Scanning the pipeline for segments
482          filter* head_of_previous_segment = first;
483          for( filter* subfilter=first->next_filter_in_pipeline;
484               subfilter!=NULL;
485               subfilter=subfilter->next_filter_in_pipeline )
486          {
487              if( subfilter->prev_filter_in_pipeline->is_bound() && !subfilter->is_bound() ) {
488                  do_segment_scanning = true;
489                  head_of_previous_segment->next_segment = subfilter;
490                  head_of_previous_segment = subfilter;
491              }
492          }
493      }
494  }

References __TBB_ASSERT, __TBB_PIPELINE_VERSION, do_segment_scanning, tbb::pipeline::filter_list, tbb::internal::first(), my_pipeline, and tbb::filter::next_segment.

Here is the call graph for this function:
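
To make the segment-scanning rule easier to follow, here is a standalone illustration in plain C++ (not TBB code; the bound flags are invented for the example). A filter heads a new "segment" exactly when it is not thread-bound but the filter immediately before it is:

#include <cstdio>
#include <vector>

int main() {
    // Invented example: which filters of a 5-stage pipeline are thread-bound.
    //                          f0     f1     f2     f3     f4
    std::vector<bool> bound = { false, true,  false, false, true };

    // Mirror of the constructor's condition:
    //   subfilter->prev_filter_in_pipeline->is_bound() && !subfilter->is_bound()
    for( std::size_t i = 1; i < bound.size(); ++i )
        if( bound[i-1] && !bound[i] )
            std::printf("filter f%zu heads a new segment\n", i);
    return 0;
}

With these flags, only f2 heads a segment, so f0's next_segment would point to f2 and do_segment_scanning would be set to true.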

Member Function Documentation

◆ execute()

task* tbb::internal::pipeline_root_task::execute ( )
inline private virtual

Should be overridden by derived classes.

Implements tbb::task.

Definition at line 408 of file pipeline.cpp.

408  {
409      if( !my_pipeline.end_of_input )
410          if( !my_pipeline.filter_list->is_bound() )
411              if( my_pipeline.input_tokens > 0 ) {
412                  recycle_as_continuation();
413                  set_ref_count(1);
414                  return new( allocate_child() ) stage_task( my_pipeline );
415              }
416      if( do_segment_scanning ) {
417          filter* current_filter = my_pipeline.filter_list->next_segment;
418          /* first non-thread-bound filter that follows thread-bound one
419             and may have valid items to process */
420          filter* first_suitable_filter = current_filter;
421          while( current_filter ) {
422              __TBB_ASSERT( !current_filter->is_bound(), "filter is thread-bound?" );
423              __TBB_ASSERT( current_filter->prev_filter_in_pipeline->is_bound(), "previous filter is not thread-bound?" );
424              if( !my_pipeline.end_of_input || current_filter->has_more_work())
425              {
426                  task_info info;
427                  info.reset();
428                  task* bypass = NULL;
429                  int refcnt = 0;
430                  task_list list;
431                  // No new tokens are created; it's OK to process all waiting tokens.
432                  // If the filter is serial, the second call to return_item will return false.
433                  while( current_filter->my_input_buffer->return_item(info, !current_filter->is_serial()) ) {
434                      task* t = new( allocate_child() ) stage_task( my_pipeline, current_filter, info );
435                      if( ++refcnt == 1 )
436                          bypass = t;
437                      else // there's more than one task
438                          list.push_back(*t);
439                      // TODO: limit the list size (to arena size?) to spawn tasks sooner
440                      __TBB_ASSERT( refcnt <= int(my_pipeline.token_counter), "token counting error" );
441                      info.reset();
442                  }
443                  if( refcnt ) {
444                      set_ref_count( refcnt );
445                      if( refcnt > 1 )
446                          spawn(list);
447                      recycle_as_continuation();
448                      return bypass;
449                  }
450                  current_filter = current_filter->next_segment;
451                  if( !current_filter ) {
452                      if( !my_pipeline.end_of_input ) {
453                          recycle_as_continuation();
454                          return this;
455                      }
456                      current_filter = first_suitable_filter;
457                      __TBB_Yield();
458                  }
459              } else {
460                  /* The preceding pipeline segment is empty.
461                     Fast-forward to the next post-TBF segment. */
462                  first_suitable_filter = first_suitable_filter->next_segment;
463                  current_filter = first_suitable_filter;
464              }
465          } /* while( current_filter ) */
466          return NULL;
467      } else {
468          if( !my_pipeline.end_of_input ) {
469              recycle_as_continuation();
470              return this;
471          }
472          return NULL;
473      }
474  }

References __TBB_ASSERT, __TBB_Yield, tbb::task::allocate_child(), do_segment_scanning, tbb::pipeline::end_of_input, tbb::pipeline::filter_list, tbb::filter::has_more_work(), tbb::pipeline::input_tokens, tbb::filter::is_bound(), tbb::filter::is_serial(), tbb::filter::my_input_buffer, my_pipeline, tbb::filter::next_segment, tbb::filter::prev_filter_in_pipeline, tbb::task_list::push_back(), tbb::task::recycle_as_continuation(), tbb::internal::task_info::reset(), tbb::task::set_ref_count(), and tbb::pipeline::token_counter.

Here is the call graph for this function:
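
The first branch of execute() relies on TBB's continuation-recycling and scheduler-bypass idiom: the task recycles itself, allocates one child, and returns the child so the scheduler runs it immediately; when the child finishes, the recycled task is dispatched again. Below is a minimal sketch of that idiom in isolation (not code from pipeline.cpp; recycling_root, worker and work_left are invented):

#include "tbb/task.h"

// Hypothetical child task, for illustration only.
class worker : public tbb::task {
    tbb::task* execute() __TBB_override {
        // ... process one piece of work ...
        return NULL;
    }
};

// Each time it runs, the root hands one piece of work to a child and recycles
// itself, so the scheduler re-runs it once that child completes.
class recycling_root : public tbb::task {
    int work_left;                                   // invented driver state
public:
    recycling_root( int n ) : work_left(n) {}
    tbb::task* execute() __TBB_override {
        if( work_left-- > 0 ) {
            recycle_as_continuation();               // re-dispatch *this when the ref count hits zero
            set_ref_count(1);                        // the child will decrement this to zero
            return new( allocate_child() ) worker;   // bypass: scheduler runs the child next
        }
        return NULL;                                 // nothing left; the task completes normally
    }
};

Returning the child instead of spawning it is what makes plain recycle_as_continuation() safe here: the child cannot start, and so cannot decrement the reference count, before execute() returns. pipeline_root_task depends on the same reasoning.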

Member Data Documentation

◆ do_segment_scanning

bool tbb::internal::pipeline_root_task::do_segment_scanning
private

Definition at line 406 of file pipeline.cpp.

Referenced by execute(), and pipeline_root_task().

◆ my_pipeline

pipeline& tbb::internal::pipeline_root_task::my_pipeline
private

Definition at line 405 of file pipeline.cpp.

Referenced by execute(), and pipeline_root_task().


The documentation for this class was generated from the following file:
pipeline.cpp
