Intel(R) Threading Building Blocks Doxygen Documentation  version 4.2.3
tbb::queuing_mutex::scoped_lock Class Reference

The scoped locking pattern. More...

#include <queuing_mutex.h>


Public Member Functions

 scoped_lock ()
 Construct lock that has not acquired a mutex. More...
 
 scoped_lock (queuing_mutex &m)
 Acquire lock on given mutex. More...
 
 ~scoped_lock ()
 Release lock (if lock is held). More...
 
void __TBB_EXPORTED_METHOD acquire (queuing_mutex &m)
 Acquire lock on given mutex. More...
 
bool __TBB_EXPORTED_METHOD try_acquire (queuing_mutex &m)
 Acquire lock on given mutex if it is free (i.e. non-blocking). More...
 
void __TBB_EXPORTED_METHOD release ()
 Release lock. More...
 

Private Member Functions

void initialize ()
 Initialize fields to mean "no lock held". More...
 
- Private Member Functions inherited from tbb::internal::no_copy
 no_copy (const no_copy &)=delete
 
 no_copy ()=default
 

Private Attributes

queuing_mutex * mutex
 The pointer to the mutex owned, or NULL if not holding a mutex. More...
 
scoped_lock * next
 The pointer to the next competitor for a mutex. More...
 
uintptr_t going
 The local spin-wait variable. More...
 

Detailed Description

The scoped locking pattern.

It helps to avoid the common problem of forgetting to release a lock. It also conveniently provides the "node" for queuing locks.

Definition at line 44 of file queuing_mutex.h.

Constructor & Destructor Documentation

◆ scoped_lock() [1/2]

tbb::queuing_mutex::scoped_lock::scoped_lock ( )
inline

Construct lock that has not acquired a mutex.

Equivalent to zero-initialization of *this.

Definition at line 57 of file queuing_mutex.h.

57 {initialize();}

References initialize().


◆ scoped_lock() [2/2]

tbb::queuing_mutex::scoped_lock::scoped_lock ( queuing_mutex & m )
inline

Acquire lock on given mutex.

Definition at line 60 of file queuing_mutex.h.

60  {
61  initialize();
62  acquire(m);
63  }

References acquire(), and initialize().


◆ ~scoped_lock()

tbb::queuing_mutex::scoped_lock::~scoped_lock ( )
inline

Release lock (if lock is held).

Definition at line 66 of file queuing_mutex.h.

66  {
67  if( mutex ) release();
68  }

References mutex, and release().


Member Function Documentation

◆ acquire()

void tbb::queuing_mutex::scoped_lock::acquire ( queuing_mutex & m )

Acquire lock on given mutex.

A method to acquire queuing_mutex lock.

Definition at line 28 of file queuing_mutex.cpp.

29 {
30  __TBB_ASSERT( !this->mutex, "scoped_lock is already holding a mutex");
31 
32  // Must set all fields before the fetch_and_store, because once the
33  // fetch_and_store executes, *this becomes accessible to other threads.
34  mutex = &m;
35  next = NULL;
36  going = 0;
37 
38  // The fetch_and_store must have release semantics, because we are
39  // "sending" the fields initialized above to other processors.
40  scoped_lock* pred = m.q_tail.fetch_and_store<tbb::release>(this);
41  if( pred ) {
42  ITT_NOTIFY(sync_prepare, mutex);
43 #if TBB_USE_ASSERT
44  __TBB_control_consistency_helper(); // on "m.q_tail"
45  __TBB_ASSERT( !pred->next, "the predecessor has another successor!");
46 #endif
47  pred->next = this;
48  spin_wait_while_eq( going, 0ul );
49  }
50  ITT_NOTIFY(sync_acquired, mutex);
51 
52  // Force acquire so that user's critical section receives correct values
53  // from processor that was previously in the user's critical section.
54  __TBB_load_with_acquire(going);
55 }

References __TBB_ASSERT, __TBB_control_consistency_helper, tbb::internal::__TBB_load_with_acquire(), ITT_NOTIFY, next, tbb::queuing_mutex::q_tail, tbb::release, and tbb::internal::spin_wait_while_eq().

Referenced by scoped_lock().


◆ initialize()

void tbb::queuing_mutex::scoped_lock::initialize ( )
inlineprivate

Initialize fields to mean "no lock held".

Definition at line 46 of file queuing_mutex.h.

46  {
47  mutex = NULL;
48  going = 0;
49 #if TBB_USE_ASSERT
50  internal::poison_pointer(next);
51 #endif /* TBB_USE_ASSERT */
52  }

References going, mutex, next, and tbb::internal::poison_pointer().

Referenced by scoped_lock().


◆ release()

void tbb::queuing_mutex::scoped_lock::release ( )

Release lock.

A method to release queuing_mutex lock.

Definition at line 81 of file queuing_mutex.cpp.

82 {
83  __TBB_ASSERT(this->mutex!=NULL, "no lock acquired");
84 
85  ITT_NOTIFY(sync_releasing, mutex);
86  if( !next ) {
87  if( this == mutex->q_tail.compare_and_swap<tbb::release>(NULL, this) ) {
88  // this was the only item in the queue, and the queue is now empty.
89  goto done;
90  }
91  // Someone in the queue
92  spin_wait_while_eq( next, (scoped_lock*)0 );
93  }
94  __TBB_ASSERT(next,NULL);
95  __TBB_store_with_release(next->going, 1);
96 done:
97  initialize();
98 }

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), ITT_NOTIFY, tbb::release, tbb::internal::spin_wait_while_eq(), and sync_releasing.

Referenced by ~scoped_lock().


◆ try_acquire()

bool tbb::queuing_mutex::scoped_lock::try_acquire ( queuing_mutex & m )

Acquire lock on given mutex if it is free (i.e. non-blocking).

A method to acquire queuing_mutex if it is free.

Definition at line 58 of file queuing_mutex.cpp.

59 {
60  __TBB_ASSERT( !this->mutex, "scoped_lock is already holding a mutex");
61 
62  // Must set all fields before the fetch_and_store, because once the
63  // fetch_and_store executes, *this becomes accessible to other threads.
64  next = NULL;
65  going = 0;
66 
67  // The CAS must have release semantics, because we are
68  // "sending" the fields initialized above to other processors.
69  if( m.q_tail.compare_and_swap<tbb::release>(this, NULL) )
70  return false;
71 
72  // Force acquire so that user's critical section receives correct values
73  // from processor that was previously in the user's critical section.
74  __TBB_load_with_acquire(going);
75  mutex = &m;
76  ITT_NOTIFY(sync_acquired, mutex);
77  return true;
78 }

References __TBB_ASSERT, tbb::internal::__TBB_load_with_acquire(), ITT_NOTIFY, tbb::queuing_mutex::q_tail, and tbb::release.


Member Data Documentation

◆ going

uintptr_t tbb::queuing_mutex::scoped_lock::going
private

The local spin-wait variable.

Inverted (0 - blocked, 1 - acquired the mutex) for the sake of zero-initialization. Defining it as an entire word instead of a byte seems to help performance slightly.

Definition at line 90 of file queuing_mutex.h.

Referenced by initialize().

◆ mutex

queuing_mutex* tbb::queuing_mutex::scoped_lock::mutex
private

The pointer to the mutex owned, or NULL if not holding a mutex.

Definition at line 81 of file queuing_mutex.h.

Referenced by initialize(), and ~scoped_lock().

◆ next

scoped_lock* tbb::queuing_mutex::scoped_lock::next
private

The pointer to the next competitor for a mutex.

Definition at line 84 of file queuing_mutex.h.

Referenced by acquire(), and initialize().


The documentation for this class was generated from the following files:

queuing_mutex.h
queuing_mutex.cpp

Copyright © 2005-2020 Intel Corporation. All Rights Reserved.

Intel, Pentium, Intel Xeon, Itanium, Intel XScale and VTune are registered trademarks or trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

* Other names and brands may be claimed as the property of others.