Tuesday 15 January 2013

c - Is there a mechanism similar to Python's threading.Event in OpenMP?


I am trying to implement the functionality of Python's threading.Event [1] in C. In general, when synchronization between threads is required, the first mechanism one reaches for is a lock (aka mutex). Python's threading.Event class is a different synchronization mechanism: it is used to block threads until some condition is signalled.

With pthreads, I think this can be done with condition variables [2].
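
For instance, something along these lines should work with pthreads. This is only a rough sketch; event_t, event_init, event_wait and event_set are names I am making up, not any library API:

    #include <pthread.h>
    #include <stdbool.h>

    typedef struct {
        pthread_mutex_t mtx;
        pthread_cond_t  cond;
        bool            flag;   /* the event's "set" state */
    } event_t;

    void event_init(event_t *e) {
        pthread_mutex_init(&e->mtx, NULL);
        pthread_cond_init(&e->cond, NULL);
        e->flag = false;
    }

    /* Block until the event is set (like Python's Event.wait()). */
    void event_wait(event_t *e) {
        pthread_mutex_lock(&e->mtx);
        while (!e->flag)                   /* guard against spurious wakeups */
            pthread_cond_wait(&e->cond, &e->mtx);
        pthread_mutex_unlock(&e->mtx);
    }

    /* Set the event and wake all waiters (like Python's Event.set()). */
    void event_set(event_t *e) {
        pthread_mutex_lock(&e->mtx);
        e->flag = true;
        pthread_cond_broadcast(&e->cond);
        pthread_mutex_unlock(&e->mtx);
    }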

What about OpenMP: is it possible there? Following what Python does, suppose I have the fictional types event and events_queue:

The following example illustrates the situation:

    int nthreads;
    event_t *evt;
    events_queue_t *queue;

    #pragma omp parallel private(evt)
    {
        #pragma omp single
        {
            nthreads = omp_get_num_threads() - 1;
        }

        if (!omp_get_thread_num()) /* master thread */
        {
            while (nthreads) {
                evt = events_queue_pop(queue);
                evt_set(evt);
            }
        }
        else /* other threads */
        {
            evt = alloc_event();
            events_queue_append(queue, evt);
            /* each thread waits for the master thread to set its event */
            evt_wait(evt);
            free_event(evt);
            #pragma omp critical
            {
                nthreads--;
            }
        }
    }

As you can see, I am mimicking Python's threading module. A threading.Lock gives me the same effect as #pragma omp critical (that is what protects nthreads above, for example). The problem is threading.Event: I cannot find anything equivalent in OpenMP.
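
To show where the lock analogy does hold: protecting the counter with an explicit omp_lock_t is equivalent to the critical section above. A minimal, self-contained sketch of that correspondence:

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        int nthreads = 0;
        omp_lock_t count_lock;    /* plays the role of Python's threading.Lock */
        omp_init_lock(&count_lock);

        #pragma omp parallel
        {
            #pragma omp single
            nthreads = omp_get_num_threads();
            /* implicit barrier at the end of single */

            /* equivalent to wrapping the decrement in #pragma omp critical */
            omp_set_lock(&count_lock);
            nthreads--;
            omp_unset_lock(&count_lock);
        }

        omp_destroy_lock(&count_lock);
        printf("nthreads is now %d\n", nthreads);  /* prints 0 */
        return 0;
    }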

[1]

[2]

Note: this solution is not correct. See the edit at the end of this answer.

OK... I think I've got it. Looking at the sources of Python's threading module [1], it seems really simple.

An Event is basically a FIFO of reentrant locks (which exist in OpenMP as omp_nest_lock_t). Whenever event.wait([timeout]) is called, a new lock is appended to the FIFO and immediately acquired twice (the second acquisition blocks until someone releases the lock!). Then, when event.set() is called, every lock in the FIFO is released and removed from it.
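
Here is roughly what that scheme looks like transcribed to OpenMP. The names (event_t, event_wait, event_set) are invented, and I use a simple omp_lock_t per waiter rather than omp_nest_lock_t, since a nestable lock would not block when the same thread sets it twice. As the EDIT below explains, the scheme is not actually correct anyway: as I read the OpenMP spec, a task may not set a simple lock it already owns, nor unset a lock owned by another task. Treat this as a transcription of the (retracted) idea, not working code:

    #include <omp.h>
    #include <stdlib.h>

    #define MAX_WAITERS 64

    typedef struct {
        omp_lock_t *waiters[MAX_WAITERS];  /* FIFO of per-waiter locks */
        int         count;
        omp_lock_t  fifo_lock;             /* protects the FIFO itself */
    } event_t;

    void event_init(event_t *e) {
        e->count = 0;
        omp_init_lock(&e->fifo_lock);
    }

    /* wait(): append a fresh lock to the FIFO and acquire it twice. */
    void event_wait(event_t *e) {
        omp_lock_t *l = malloc(sizeof *l);
        omp_init_lock(l);
        omp_set_lock(l);                   /* first acquisition succeeds */

        omp_set_lock(&e->fifo_lock);
        e->waiters[e->count++] = l;
        omp_unset_lock(&e->fifo_lock);

        /* Second acquisition: meant to block until event_set() releases
           the lock. Re-setting a simple lock the task already owns is
           where the scheme breaks down (non-conforming per the spec). */
        omp_set_lock(l);
        omp_unset_lock(l);
        omp_destroy_lock(l);
        free(l);
    }

    /* set(): release every lock in the FIFO and empty it. */
    void event_set(event_t *e) {
        omp_set_lock(&e->fifo_lock);
        for (int i = 0; i < e->count; i++)
            /* Unsetting a lock owned by another thread is likewise not
               permitted, which is part of why this answer is retracted. */
            omp_unset_lock(e->waiters[i]);
        e->count = 0;
        omp_unset_lock(&e->fifo_lock);
    }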

I hope this helps.

[1]

EDIT: I have found an article stating that this solution is not correct, which discusses this very problem:

[2]
