Intel(R) Threading Building Blocks Doxygen Documentation  version 4.2.3
tbb::queuing_rw_mutex::scoped_lock Class Reference

The scoped locking pattern. More...

#include <queuing_rw_mutex.h>

Inheritance diagram for tbb::queuing_rw_mutex::scoped_lock:
Collaboration diagram for tbb::queuing_rw_mutex::scoped_lock:

Public Member Functions

 scoped_lock ()
 Construct lock that has not acquired a mutex. More...
 
 scoped_lock (queuing_rw_mutex &m, bool write=true)
 Acquire lock on given mutex. More...
 
 ~scoped_lock ()
 Release lock (if lock is held). More...
 
void acquire (queuing_rw_mutex &m, bool write=true)
 Acquire lock on given mutex. More...
 
bool try_acquire (queuing_rw_mutex &m, bool write=true)
 Acquire lock on given mutex if free (i.e. non-blocking) More...
 
void release ()
 Release lock. More...
 
bool upgrade_to_writer ()
 Upgrade reader to become a writer. More...
 
bool downgrade_to_reader ()
 Downgrade writer to become a reader. More...
 

Private Types

typedef unsigned char state_t
 

Private Member Functions

void initialize ()
 Initialize fields to mean "no lock held". More...
 
void acquire_internal_lock ()
 Acquire the internal lock. More...
 
bool try_acquire_internal_lock ()
 Try to acquire the internal lock. More...
 
void release_internal_lock ()
 Release the internal lock. More...
 
void wait_for_release_of_internal_lock ()
 Wait for internal lock to be released. More...
 
void unblock_or_wait_on_internal_lock (uintptr_t)
 A helper function. More...
 
- Private Member Functions inherited from tbb::internal::no_copy
 no_copy ()
 Allow default construction. More...
 

Private Attributes

queuing_rw_mutex * my_mutex
 The pointer to the mutex owned, or NULL if not holding a mutex. More...
 
scoped_lock *__TBB_atomic my_prev
 The pointer to the previous and next competitors for a mutex. More...
 
scoped_lock *__TBB_atomic my_next
 
atomic< state_t > my_state
 State of the request: reader, writer, active reader, other service states. More...
 
unsigned char __TBB_atomic my_going
 The local spin-wait variable. More...
 
unsigned char my_internal_lock
 A tiny internal lock. More...
 

Detailed Description

The scoped locking pattern.

It helps to avoid the common problem of forgetting to release a lock. It also nicely provides the "node" for queuing locks.

Definition at line 54 of file queuing_rw_mutex.h.
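
In client code the pattern works as a stack-based guard: constructing a scoped_lock acquires the mutex and its destructor releases it, so the lock cannot leak on an early return or exception. A minimal sketch, assuming an illustrative mutex and counter (table_mutex, table_size) that are not part of TBB:

    #include "tbb/queuing_rw_mutex.h"

    tbb::queuing_rw_mutex table_mutex;   // illustrative shared mutex
    int table_size = 0;                  // illustrative shared data

    void add_entry() {
        tbb::queuing_rw_mutex::scoped_lock lock(table_mutex, /*write=*/true);
        ++table_size;                    // exclusive access while the write lock is held
    }                                    // destructor releases the lock

    void read_entry() {
        tbb::queuing_rw_mutex::scoped_lock lock(table_mutex, /*write=*/false);
        int n = table_size;              // shared access: other readers may run concurrently
        (void)n;
    }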

Member Typedef Documentation

◆ state_t

typedef unsigned char tbb::queuing_rw_mutex::scoped_lock::state_t
private

Definition at line 106 of file queuing_rw_mutex.h.

Constructor & Destructor Documentation

◆ scoped_lock() [1/2]

tbb::queuing_rw_mutex::scoped_lock::scoped_lock ( )
inline

Construct lock that has not acquired a mutex.

Equivalent to zero-initialization of *this.

Definition at line 70 of file queuing_rw_mutex.h.

70 {initialize();}

References initialize().


◆ scoped_lock() [2/2]

tbb::queuing_rw_mutex::scoped_lock::scoped_lock ( queuing_rw_mutex & m,
bool  write = true 
)
inline

Acquire lock on given mutex.

Definition at line 73 of file queuing_rw_mutex.h.

73  {
74  initialize();
75  acquire(m,write);
76  }

References acquire(), and initialize().


◆ ~scoped_lock()

tbb::queuing_rw_mutex::scoped_lock::~scoped_lock ( )
inline

Release lock (if lock is held).

Definition at line 79 of file queuing_rw_mutex.h.

79  {
80  if( my_mutex ) release();
81  }

References my_mutex, and release().


Member Function Documentation

◆ acquire()

void tbb::queuing_rw_mutex::scoped_lock::acquire ( queuing_rw_mutex & m,
bool  write = true 
)

Acquire lock on given mutex.

A method to acquire queuing_rw_mutex lock.

Definition at line 144 of file queuing_rw_mutex.cpp.

145 {
146  __TBB_ASSERT( !my_mutex, "scoped_lock is already holding a mutex");
147 
148  // Must set all fields before the fetch_and_store, because once the
149  // fetch_and_store executes, *this becomes accessible to other threads.
150  my_mutex = &m;
156 
157  queuing_rw_mutex::scoped_lock* pred = m.q_tail.fetch_and_store<tbb::release>(this);
158 
159  if( write ) { // Acquiring for write
160 
161  if( pred ) {
162  ITT_NOTIFY(sync_prepare, my_mutex);
163  pred = tricky_pointer(pred) & ~FLAG;
164  __TBB_ASSERT( !( uintptr_t(pred) & FLAG ), "use of corrupted pointer!" );
165 #if TBB_USE_ASSERT
166  __TBB_control_consistency_helper(); // on "m.q_tail"
167  __TBB_ASSERT( !__TBB_load_relaxed(pred->my_next), "the predecessor has another successor!");
168 #endif
169  __TBB_store_with_release(pred->my_next,this);
171  }
172 
173  } else { // Acquiring for read
174 #if DO_ITT_NOTIFY
175  bool sync_prepare_done = false;
176 #endif
177  if( pred ) {
178  unsigned short pred_state;
179  __TBB_ASSERT( !__TBB_load_relaxed(my_prev), "the predecessor is already set" );
180  if( uintptr_t(pred) & FLAG ) {
181  /* this is only possible if pred is an upgrading reader and it signals us to wait */
182  pred_state = STATE_UPGRADE_WAITING;
183  pred = tricky_pointer(pred) & ~FLAG;
184  } else {
185  // Load pred->my_state now, because once pred->my_next becomes
186  // non-NULL, we must assume that *pred might be destroyed.
187  pred_state = pred->my_state.compare_and_swap<tbb::acquire>(STATE_READER_UNBLOCKNEXT, STATE_READER);
188  }
190  __TBB_ASSERT( !( uintptr_t(pred) & FLAG ), "use of corrupted pointer!" );
191 #if TBB_USE_ASSERT
192  __TBB_control_consistency_helper(); // on "m.q_tail"
193  __TBB_ASSERT( !__TBB_load_relaxed(pred->my_next), "the predecessor has another successor!");
194 #endif
195  __TBB_store_with_release(pred->my_next,this);
196  if( pred_state != STATE_ACTIVEREADER ) {
197 #if DO_ITT_NOTIFY
198  sync_prepare_done = true;
199  ITT_NOTIFY(sync_prepare, my_mutex);
200 #endif
202  }
203  }
204 
205  // The protected state must have been acquired here before it can be further released to any other reader(s):
207  if( old_state!=STATE_READER ) {
208 #if DO_ITT_NOTIFY
209  if( !sync_prepare_done )
210  ITT_NOTIFY(sync_prepare, my_mutex);
211 #endif
212  // Failed to become active reader -> need to unblock the next waiting reader first
213  __TBB_ASSERT( my_state==STATE_READER_UNBLOCKNEXT, "unexpected state" );
215  /* my_state should be changed before unblocking the next otherwise it might finish
216  and another thread can get our old state and left blocked */
219  }
220  }
221 
222  ITT_NOTIFY(sync_acquired, my_mutex);
223 
224  // Force acquire so that user's critical section receives correct values
225  // from processor that was previously in the user's critical section.
227 }

References __TBB_ASSERT, __TBB_control_consistency_helper, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_load_with_acquire(), tbb::internal::__TBB_store_relaxed(), tbb::internal::__TBB_store_with_release(), tbb::acquire, tbb::internal::atomic_impl< T >::compare_and_swap(), tbb::FLAG, ITT_NOTIFY, my_next, my_state, tbb::queuing_rw_mutex::q_tail, tbb::release, tbb::RELEASED, tbb::internal::spin_wait_until_eq(), tbb::internal::spin_wait_while_eq(), tbb::STATE_ACTIVEREADER, tbb::STATE_READER, tbb::STATE_READER_UNBLOCKNEXT, tbb::STATE_UPGRADE_WAITING, and tbb::STATE_WRITER.

Referenced by scoped_lock().
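
acquire() lets a default-constructed scoped_lock take the mutex later, which is convenient when locking is conditional or the lock scope does not coincide with a C++ block. A hedged sketch; the function and variable names are illustrative, not part of TBB:

    void maybe_update(tbb::queuing_rw_mutex& rw, int& shared, bool do_write) {
        tbb::queuing_rw_mutex::scoped_lock lock;     // holds no mutex yet
        if( do_write ) {
            lock.acquire(rw, /*write=*/true);        // blocks until the write lock is granted
            ++shared;
        }
    }                                                // destructor releases the lock only if one is held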


◆ acquire_internal_lock()

void tbb::queuing_rw_mutex::scoped_lock::acquire_internal_lock ( )
inlineprivate

Acquire the internal lock.

Definition at line 59 of file queuing_rw_mutex.cpp.

60 {
61  // Usually, we would use the test-test-and-set idiom here, with exponential backoff.
62  // But so far, experiments indicate there is no value in doing so here.
63  while( !try_acquire_internal_lock() ) {
64  __TBB_Pause(1);
65  }
66 }

References __TBB_Pause().


◆ downgrade_to_reader()

bool tbb::queuing_rw_mutex::scoped_lock::downgrade_to_reader ( )

Downgrade writer to become a reader.

Definition at line 364 of file queuing_rw_mutex.cpp.

365 {
366  if ( my_state == STATE_ACTIVEREADER ) return true; // Already a reader
367 
370  if( ! __TBB_load_relaxed(my_next) ) {
371  // the following load of q_tail must not be reordered with setting STATE_READER above
372  if( this==my_mutex->q_tail.load<full_fence>() ) {
374  if( old_state==STATE_READER )
375  return true; // Downgrade completed
376  }
377  /* wait for the next to register */
378  spin_wait_while_eq( my_next, (void*)NULL );
379  }
381  __TBB_ASSERT( n, "still no successor at this point!" );
382  if( n->my_state & STATE_COMBINED_WAITINGREADER )
383  __TBB_store_with_release(n->my_going,1);
384  else if( n->my_state==STATE_UPGRADE_WAITING )
385  // the next waiting for upgrade means this writer was upgraded before.
386  n->my_state = STATE_UPGRADE_LOSER;
388  return true;
389 }

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_load_with_acquire(), tbb::internal::__TBB_store_with_release(), tbb::full_fence, ITT_NOTIFY, my_going, my_state, tbb::release, tbb::internal::spin_wait_while_eq(), tbb::STATE_ACTIVEREADER, tbb::STATE_COMBINED_WAITINGREADER, tbb::STATE_READER, tbb::STATE_UPGRADE_LOSER, tbb::STATE_UPGRADE_WAITING, and sync_releasing.
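
A writer that has finished updating but still wants to read the data it wrote can downgrade, keeping the lock held continuously instead of releasing and re-acquiring it for read. A hedged sketch; names other than the TBB types are illustrative:

    void publish_and_verify(tbb::queuing_rw_mutex& rw, int& shared) {
        tbb::queuing_rw_mutex::scoped_lock lock(rw, /*write=*/true);
        shared = 42;                     // write under exclusive access
        lock.downgrade_to_reader();      // keep the lock, now as a reader
        int check = shared;              // other readers may now run concurrently
        (void)check;
    }                                    // destructor releases the read lock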


◆ initialize()

void tbb::queuing_rw_mutex::scoped_lock::initialize ( )
inlineprivate

Initialize fields to mean "no lock held".

Definition at line 56 of file queuing_rw_mutex.h.

56  {
57  my_mutex = NULL;
58  my_internal_lock = 0;
59  my_going = 0;
60 #if TBB_USE_ASSERT
61  my_state = 0xFF; // Set to invalid state
64 #endif /* TBB_USE_ASSERT */
65  }

References my_going, my_internal_lock, my_mutex, my_next, my_prev, my_state, and tbb::internal::poison_pointer().

Referenced by scoped_lock().


◆ release()

void tbb::queuing_rw_mutex::scoped_lock::release ( )

Release lock.

A method to release queuing_rw_mutex lock.

Definition at line 258 of file queuing_rw_mutex.cpp.

259 {
260  __TBB_ASSERT(my_mutex!=NULL, "no lock acquired");
261 
263 
264  if( my_state == STATE_WRITER ) { // Acquired for write
265 
266  // The logic below is the same as "writerUnlock", but elides
267  // "return" from the middle of the routine.
268  // In the statement below, acquire semantics of reading my_next is required
269  // so that following operations with fields of my_next are safe.
271  if( !n ) {
272  if( this == my_mutex->q_tail.compare_and_swap<tbb::release>(NULL, this) ) {
273  // this was the only item in the queue, and the queue is now empty.
274  goto done;
275  }
278  }
279  __TBB_store_relaxed(n->my_going, 2); // protect next queue node from being destroyed too early
280  if( n->my_state==STATE_UPGRADE_WAITING ) {
281  // the next waiting for upgrade means this writer was upgraded before.
283  queuing_rw_mutex::scoped_lock* tmp = tricky_pointer::fetch_and_store<tbb::release>(&(n->my_prev), NULL);
284  n->my_state = STATE_UPGRADE_LOSER;
285  __TBB_store_with_release(n->my_going,1);
287  } else {
288  __TBB_ASSERT( my_state & (STATE_COMBINED_WAITINGREADER | STATE_WRITER), "unexpected state" );
289  __TBB_ASSERT( !( uintptr_t(__TBB_load_relaxed(n->my_prev)) & FLAG ), "use of corrupted pointer!" );
290  __TBB_store_relaxed(n->my_prev, (scoped_lock*)0);
291  __TBB_store_with_release(n->my_going,1);
292  }
293 
294  } else { // Acquired for read
295 
296  queuing_rw_mutex::scoped_lock *tmp = NULL;
297 retry:
298  // Addition to the original paper: Mark my_prev as in use
299  queuing_rw_mutex::scoped_lock *pred = tricky_pointer::fetch_and_add<tbb::acquire>(&my_prev, FLAG);
300 
301  if( pred ) {
302  if( !(pred->try_acquire_internal_lock()) )
303  {
304  // Failed to acquire the lock on pred. The predecessor either unlinks or upgrades.
305  // In the second case, it could or could not know my "in use" flag - need to check
306  tmp = tricky_pointer::compare_and_swap<tbb::release>(&my_prev, pred, tricky_pointer(pred) | FLAG );
307  if( !(uintptr_t(tmp) & FLAG) ) {
308  // Wait for the predecessor to change my_prev (e.g. during unlink)
310  // Now owner of pred is waiting for _us_ to release its lock
311  pred->release_internal_lock();
312  }
313  // else the "in use" flag is back -> the predecessor didn't get it and will release itself; nothing to do
314 
315  tmp = NULL;
316  goto retry;
317  }
318  __TBB_ASSERT(pred && pred->my_internal_lock==ACQUIRED, "predecessor's lock is not acquired");
321 
322  __TBB_store_with_release(pred->my_next,static_cast<scoped_lock *>(NULL));
323 
324  if( !__TBB_load_relaxed(my_next) && this != my_mutex->q_tail.compare_and_swap<tbb::release>(pred, this) ) {
325  spin_wait_while_eq( my_next, (void*)NULL );
326  }
327  __TBB_ASSERT( !get_flag(__TBB_load_relaxed(my_next)), "use of corrupted pointer" );
328 
329  // ensure acquire semantics of reading 'my_next'
330  if( scoped_lock *const l_next = __TBB_load_with_acquire(my_next) ) { // I->next != nil, TODO: rename to n after clearing up and adapting the n in the comment two lines below
331  // Equivalent to I->next->prev = I->prev but protected against (prev[n]&FLAG)!=0
332  tmp = tricky_pointer::fetch_and_store<tbb::release>(&(l_next->my_prev), pred);
333  // I->prev->next = I->next;
335  __TBB_store_with_release(pred->my_next, my_next);
336  }
337  // Safe to release in the order opposite to acquiring which makes the code simpler
338  pred->release_internal_lock();
339 
340  } else { // No predecessor when we looked
341  acquire_internal_lock(); // "exclusiveLock(&I->EL)"
343  if( !n ) {
344  if( this != my_mutex->q_tail.compare_and_swap<tbb::release>(NULL, this) ) {
347  } else {
348  goto unlock_self;
349  }
350  }
351  __TBB_store_relaxed(n->my_going, 2); // protect next queue node from being destroyed too early
352  tmp = tricky_pointer::fetch_and_store<tbb::release>(&(n->my_prev), NULL);
353  __TBB_store_with_release(n->my_going,1);
354  }
355 unlock_self:
357  }
358 done:
360 
361  initialize();
362 }

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_load_with_acquire(), tbb::internal::__TBB_store_relaxed(), tbb::internal::__TBB_store_with_release(), tbb::ACQUIRED, tbb::FLAG, tbb::get_flag(), ITT_NOTIFY, my_going, my_internal_lock, my_next, my_prev, my_state, tbb::release, release_internal_lock(), tbb::internal::spin_wait_while_eq(), tbb::STATE_COMBINED_WAITINGREADER, tbb::STATE_UPGRADE_LOSER, tbb::STATE_UPGRADE_WAITING, tbb::STATE_WRITER, sync_releasing, and try_acquire_internal_lock().

Referenced by ~scoped_lock().
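
release() can also be called explicitly to end the critical section before the scoped_lock leaves scope; the destructor then finds no held mutex and does nothing. A small sketch with illustrative names:

    void work(tbb::queuing_rw_mutex& rw, int& shared) {
        tbb::queuing_rw_mutex::scoped_lock lock(rw, /*write=*/true);
        ++shared;                        // short critical section
        lock.release();                  // unlock before the long part
        // ... lengthy computation that must not hold the lock ...
    }                                    // destructor does nothing: no mutex is held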


◆ release_internal_lock()

void tbb::queuing_rw_mutex::scoped_lock::release_internal_lock ( )
inlineprivate

Release the internal lock.

Definition at line 68 of file queuing_rw_mutex.cpp.

69 {
71 }

References tbb::internal::__TBB_store_with_release(), and tbb::RELEASED.

Referenced by release().


◆ try_acquire()

bool tbb::queuing_rw_mutex::scoped_lock::try_acquire ( queuing_rw_mutex & m,
bool  write = true 
)

Acquire lock on given mutex if free (i.e. non-blocking)

A method to acquire queuing_rw_mutex if it is free.

Definition at line 230 of file queuing_rw_mutex.cpp.

231 {
232  __TBB_ASSERT( !my_mutex, "scoped_lock is already holding a mutex");
233 
234  if( load<relaxed>(m.q_tail) )
235  return false; // Someone already took the lock
236 
237  // Must set all fields before the fetch_and_store, because once the
238  // fetch_and_store executes, *this becomes accessible to other threads.
241  __TBB_store_relaxed(my_going, 0); // TODO: remove dead assignment?
244 
245  // The CAS must have release semantics, because we are
246  // "sending" the fields initialized above to other processors.
247  if( m.q_tail.compare_and_swap<tbb::release>(this, NULL) )
248  return false; // Someone already took the lock
249  // Force acquire so that user's critical section receives correct values
250  // from processor that was previously in the user's critical section.
252  my_mutex = &m;
253  ITT_NOTIFY(sync_acquired, my_mutex);
254  return true;
255 }

References __TBB_ASSERT, tbb::internal::__TBB_load_with_acquire(), tbb::internal::__TBB_store_relaxed(), ITT_NOTIFY, tbb::queuing_rw_mutex::q_tail, tbb::release, tbb::RELEASED, tbb::STATE_ACTIVEREADER, and tbb::STATE_WRITER.
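
try_acquire() returns false immediately whenever the queue is non-empty, so it suits optional work that should be skipped rather than waited for. Sketch with illustrative names:

    bool try_update(tbb::queuing_rw_mutex& rw, int& shared) {
        tbb::queuing_rw_mutex::scoped_lock lock;         // no mutex held yet
        if( !lock.try_acquire(rw, /*write=*/true) )
            return false;                                // mutex busy or contended; skip this time
        ++shared;                                        // got the write lock without blocking
        return true;
    }                                                    // destructor releases the lock if it was acquired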


◆ try_acquire_internal_lock()

bool tbb::queuing_rw_mutex::scoped_lock::try_acquire_internal_lock ( )
inlineprivate

Try to acquire the internal lock.

Returns true if lock was successfully acquired.

Definition at line 54 of file queuing_rw_mutex.cpp.

55 {
56  return as_atomic(my_internal_lock).compare_and_swap<tbb::acquire>(ACQUIRED,RELEASED) == RELEASED;
57 }

References tbb::acquire, tbb::ACQUIRED, tbb::internal::as_atomic(), and tbb::RELEASED.

Referenced by release().


◆ unblock_or_wait_on_internal_lock()

void tbb::queuing_rw_mutex::scoped_lock::unblock_or_wait_on_internal_lock ( uintptr_t  flag)
inlineprivate

A helper function.

Definition at line 78 of file queuing_rw_mutex.cpp.

78  {
79  if( flag )
81  else
83 }

◆ upgrade_to_writer()

bool tbb::queuing_rw_mutex::scoped_lock::upgrade_to_writer ( )

Upgrade reader to become a writer.

Returns whether the upgrade happened without releasing and re-acquiring the lock

Definition at line 391 of file queuing_rw_mutex.cpp.

392 {
393  if ( my_state == STATE_WRITER ) return true; // Already a writer
394 
395  queuing_rw_mutex::scoped_lock * tmp;
396  queuing_rw_mutex::scoped_lock * me = this;
397 
400 requested:
401  __TBB_ASSERT( !(uintptr_t(__TBB_load_relaxed(my_next)) & FLAG), "use of corrupted pointer!" );
403  if( this != my_mutex->q_tail.compare_and_swap<tbb::release>(tricky_pointer(me)|FLAG, this) ) {
404  spin_wait_while_eq( my_next, (void*)NULL );
405  queuing_rw_mutex::scoped_lock * n;
406  n = tricky_pointer::fetch_and_add<tbb::acquire>(&my_next, FLAG);
407  unsigned short n_state = n->my_state;
408  /* the next reader can be blocked by our state. the best thing to do is to unblock it */
409  if( n_state & STATE_COMBINED_WAITINGREADER )
410  __TBB_store_with_release(n->my_going,1);
411  tmp = tricky_pointer::fetch_and_store<tbb::release>(&(n->my_prev), this);
413  if( n_state & (STATE_COMBINED_READER | STATE_UPGRADE_REQUESTED) ) {
414  // save n|FLAG for simplicity of following comparisons
415  tmp = tricky_pointer(n)|FLAG;
416  for( atomic_backoff b; __TBB_load_relaxed(my_next)==tmp; b.pause() ) {
418  if( __TBB_load_with_acquire(my_next)==tmp )
420  goto waiting;
421  }
422  }
424  goto requested;
425  } else {
426  __TBB_ASSERT( n_state & (STATE_WRITER | STATE_UPGRADE_WAITING), "unexpected state");
429  }
430  } else {
431  /* We are in the tail; whoever comes next is blocked by q_tail&FLAG */
433  } // if( this != my_mutex->q_tail... )
435 
436 waiting:
437  __TBB_ASSERT( !( intptr_t(__TBB_load_relaxed(my_next)) & FLAG ), "use of corrupted pointer!" );
438  __TBB_ASSERT( my_state & STATE_COMBINED_UPGRADING, "wrong state at upgrade waiting_retry" );
439  __TBB_ASSERT( me==this, NULL );
440  ITT_NOTIFY(sync_prepare, my_mutex);
441  /* if no one was blocked by the "corrupted" q_tail, turn it back */
442  my_mutex->q_tail.compare_and_swap<tbb::release>( this, tricky_pointer(me)|FLAG );
443  queuing_rw_mutex::scoped_lock * pred;
444  pred = tricky_pointer::fetch_and_add<tbb::acquire>(&my_prev, FLAG);
445  if( pred ) {
446  bool success = pred->try_acquire_internal_lock();
447  pred->my_state.compare_and_swap<tbb::release>(STATE_UPGRADE_WAITING, STATE_UPGRADE_REQUESTED);
448  if( !success ) {
449  tmp = tricky_pointer::compare_and_swap<tbb::release>(&my_prev, pred, tricky_pointer(pred)|FLAG );
450  if( uintptr_t(tmp) & FLAG ) {
452  pred = __TBB_load_relaxed(my_prev);
453  } else {
455  pred->release_internal_lock();
456  }
457  } else {
459  pred->release_internal_lock();
461  pred = __TBB_load_relaxed(my_prev);
462  }
463  if( pred )
464  goto waiting;
465  } else {
466  // restore the corrupted my_prev field for possible further use (e.g. if downgrade back to reader)
468  }
469  __TBB_ASSERT( !pred && !__TBB_load_relaxed(my_prev), NULL );
470 
471  // additional lifetime issue prevention checks
472  // wait for the successor to finish working with my fields
474  // now wait for the predecessor to finish working with my fields
476 
477  // Acquire critical section indirectly from previous owner or directly from predecessor (TODO: not clear).
478  __TBB_control_consistency_helper(); // on either "my_mutex->q_tail" or "my_going" (TODO: not clear)
479 
480  bool result = ( my_state != STATE_UPGRADE_LOSER );
483 
484  ITT_NOTIFY(sync_acquired, my_mutex);
485  return result;
486 }

References __TBB_ASSERT, __TBB_control_consistency_helper, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_load_with_acquire(), tbb::internal::__TBB_store_relaxed(), tbb::internal::__TBB_store_with_release(), tbb::acquire, tbb::FLAG, tbb::get_flag(), ITT_NOTIFY, my_going, my_prev, my_state, tbb::release, tbb::internal::spin_wait_while_eq(), tbb::STATE_COMBINED_READER, tbb::STATE_COMBINED_UPGRADING, tbb::STATE_COMBINED_WAITINGREADER, tbb::STATE_UPGRADE_LOSER, tbb::STATE_UPGRADE_REQUESTED, tbb::STATE_UPGRADE_WAITING, tbb::STATE_WRITER, and sync_releasing.
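
Because the upgrade may be achieved only by temporarily releasing and re-acquiring the lock, a false return value tells the caller to re-validate whatever was read before the upgrade. A hedged sketch; the Table type and its contains()/insert() helpers are hypothetical, not TBB API:

    void insert_if_absent(tbb::queuing_rw_mutex& rw, Table& table, int key) {
        tbb::queuing_rw_mutex::scoped_lock lock(rw, /*write=*/false);   // start as a reader
        if( !table.contains(key) ) {                                    // hypothetical query
            if( !lock.upgrade_to_writer() ) {
                // The lock was released during the upgrade: another thread
                // may have inserted the key meanwhile, so check again.
                if( table.contains(key) ) return;
            }
            table.insert(key);                                          // hypothetical insert
        }
    }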


◆ wait_for_release_of_internal_lock()

void tbb::queuing_rw_mutex::scoped_lock::wait_for_release_of_internal_lock ( )
inlineprivate

Wait for internal lock to be released.

Definition at line 73 of file queuing_rw_mutex.cpp.

74 {
76 }

References tbb::RELEASED, and tbb::internal::spin_wait_until_eq().


Member Data Documentation

◆ my_going

unsigned char __TBB_atomic tbb::queuing_rw_mutex::scoped_lock::my_going
private

The local spin-wait variable.

Corresponds to "spin" in the pseudocode but inverted for the sake of zero-initialization

Definition at line 113 of file queuing_rw_mutex.h.

Referenced by downgrade_to_reader(), initialize(), release(), and upgrade_to_writer().

◆ my_internal_lock

unsigned char tbb::queuing_rw_mutex::scoped_lock::my_internal_lock
private

A tiny internal lock.

Definition at line 116 of file queuing_rw_mutex.h.

Referenced by initialize(), and release().

◆ my_mutex

queuing_rw_mutex* tbb::queuing_rw_mutex::scoped_lock::my_mutex
private

The pointer to the mutex owned, or NULL if not holding a mutex.

Definition at line 101 of file queuing_rw_mutex.h.

Referenced by initialize(), and ~scoped_lock().

◆ my_next

scoped_lock* __TBB_atomic tbb::queuing_rw_mutex::scoped_lock::my_next
private

Definition at line 104 of file queuing_rw_mutex.h.

Referenced by acquire(), initialize(), and release().

◆ my_prev

scoped_lock* __TBB_atomic tbb::queuing_rw_mutex::scoped_lock::my_prev
private

The pointer to the previous and next competitors for a mutex.

Definition at line 104 of file queuing_rw_mutex.h.

Referenced by initialize(), release(), and upgrade_to_writer().

◆ my_state

atomic<state_t> tbb::queuing_rw_mutex::scoped_lock::my_state
private

State of the request: reader, writer, active reader, other service states.

Definition at line 109 of file queuing_rw_mutex.h.

Referenced by acquire(), downgrade_to_reader(), initialize(), release(), and upgrade_to_writer().


The documentation for this class was generated from the following files:

queuing_rw_mutex.h
queuing_rw_mutex.cpp

Copyright © 2005-2019 Intel Corporation. All Rights Reserved.

Intel, Pentium, Intel Xeon, Itanium, Intel XScale and VTune are registered trademarks or trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

* Other names and brands may be claimed as the property of others.