Banker's algorithm with pthreads

int threadsi = 3;
int threadsj = 3;
:
pthread_create(&ProcCurr[0][0], &attr, watch_count, (void *)r1);
pthread_create(&ProcCurr[1][0], &attr, inc_count, (void *)r2);
pthread_create(&ProcCurr[2][0], &attr, inc_count, (void *)r3);
:
for (i = 0; i <= threadsi; i++) {
    for (j = 0; j <= threadsj; j++) {
        pthread_join(ProcCurr[i][j], NULL);
    }
}
You appear to be starting three threads and then joining on sixteen of them (both loops use <=, so i and j each run from 0 through 3), thirteen of which will have zero-filled thread ID values. That's unlikely to end well :)

HHVM: pthreads PHP classes 
ZTS is a prerequisite of pthreads.
ZTS is not the default because ZTS mode has some overhead associated with it.
The HHVM documentation is just a clone of the phpdoc repository, restyled, with some additional sections added for HHVM; this is why the pthreads documentation shows up in the HHVM docs.
HHVM does not and will not support pthreads; or at least, I won't be supporting it.

Dynamic Matrix Multiplication with Pthreads 
Just about the error:
work->MC[0][0] = 0.0; // can't use MC, MB, MA here!!
MC was declared as double (*MC)[] and you are trying to use it as a two-dimensional array, as if you had declared it double MC[N][M]. You can use a two (or more) dimensional array that way if and only if every dimension except the first is fixed at compile time, or if you allocate it row by row.
So your program could be:
#include <pthread.h>
#incl

STL algorithm/functional 
You unfortunately cannot use the bind* functions with function pointers directly. To
work around this, you’d normally use std::ptr_fun, but in your case
that won’t work either. So the way forward is to wrap std::max in
a functor:
template <typename T>
struct max : std::binary_function<T, T, T> {
    T operator ()(T value, T min) const {
        return std::max(value, min);
    }
};
Usage:

What is the Big Oh of these two implementations of the same algorithm? 
Is it correct that the Big Oh of the first implementation is O(n),
and the Big Oh of the second implementation is O(n^2)?
Yes. Hash operations are considered to have constant cost.
I guess the tradeoff is that the first implementation uses additional
storage space, whereas the second doesn't (i.e. it is in-place)?
Yes. You should also note that the constant v

Every possible combination algorithm 
For a given n there are always 2^n ways, since for each position we can
choose one of 2 different symbols. For a general number of symbols, the
usual approach would be backtracking, but since you only have two
symbols, an easier approach using bitmasks works.
Notice that the numbers between 0 and 2^n - 1, written in binary,
contain all possible bitmasks of length n, so you can just "print the
numbers in binary".
