|Dealing with duplication between unit and integration tests|
This is part of the problem with tests that are
too fine grained and tightly coupled to the
implementation. Personally, I would write tests
that focus on the behaviour of the algorithm and
would consider that 'a unit'. The fact that it is
broken into several classes is an implementation
detail, in the same way that breaking down a
public method's functionality into several smaller
private methods is.
|reflection and symmetry in back tracking queens|
I will try to answer by example on a simplified
variation of the problem, it's the same queens
problem but on 4x4 board.
One possible solution to the problem is (3,1), (1,2), (4,3), (2,4):
_ _ Q _
Q _ _ _
_ _ _ Q
_ Q _ _
Another solution is (2,1), (4,2), (1,3), (3,4):
_ Q _ _
_ _ _ Q
Q _ _ _
_ _ Q _
However, by generating one solution, I could
immediately create the other one by mirroring the
board, without running the backtracking search again.
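The mirroring trick can be sketched in Python. This assumes a hypothetical representation where a solution is stored as one 1-based column index per row:

```python
# A solution to the n-queens problem, one column index per row (1-based).
# Mirroring every column across the vertical axis produces the reflected
# solution "for free", without re-running the backtracking search.
def mirror(solution, n):
    """Reflect a solution across the vertical axis of an n x n board."""
    return [n + 1 - col for col in solution]

first = [3, 1, 4, 2]     # the first 4x4 solution drawn above
print(mirror(first, 4))  # [2, 4, 1, 3], the second solution drawn above
```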
|Big O analysis for method with multiple parameters|
You look at what the program does, and calculate
how many primitive operations will be performed
depending on your input. Sometimes that
calculation is simple, sometimes it's hard.
Usually it involves mathematics. Mathematics is
tough. Life is tough.
In your first example, can you perhaps figure out
how many assignments to arr[i] and how many
assignments to arr[j] are being made?
|Divide Huge Array of Numbers in Buckets|
If the distribution is non-uniform and you want an
equal number of elements per bucket, simply build
your own hashtable with your own hash function.
If you have 1000 numbers from 1-1000, and you want
10 buckets, simply map numbers 1-100 to bucket 0,
and 101-200 to bucket 1, and so on. This is really
easy to do: compute (num - 1) / 100, where 100 is
the range divided by the number of buckets
(1000/numOfBuckets), to find the index of the
array inside of your hash table.
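A minimal Python sketch of that equal-range bucketing, assuming numbers in 1..max_num:

```python
def bucket_index(num, max_num, num_buckets):
    """Map num (in 1..max_num) to one of num_buckets equal ranges."""
    return (num - 1) // (max_num // num_buckets)

# 1..100 -> bucket 0, 101..200 -> bucket 1, ..., 901..1000 -> bucket 9
buckets = [[] for _ in range(10)]
for n in range(1, 1001):
    buckets[bucket_index(n, 1000, 10)].append(n)
print([len(b) for b in buckets])  # ten buckets of 100 numbers each
```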
|Algorithm to find adjacent cells in a matrix|
Yes. If you really need to find the neighbors,
then you have an option to use graphs.
Graphs are basically vertex classes with their
adjacent vertices, each adjacency forming an edge.
We can see here that 2 forms an edge with 5, and 1
forms an edge with 5, etc.
If you're going to need to know the neighbors VERY
frequently (because this is inefficient if you're
not), then implement your own vertex class,
wrapping the value of each cell together with a
list of its adjacent vertices.
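For comparison, if the neighbor lookups are infrequent, plain index arithmetic on the matrix is enough. A small sketch (assuming, as the 1-and-5 example above suggests, that diagonal cells count as adjacent):

```python
def neighbors(matrix, r, c):
    """Collect the cells adjacent to matrix[r][c], diagonals included."""
    rows, cols = len(matrix), len(matrix[0])
    result = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            nr, nc = r + dr, c + dc
            if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                result.append(matrix[nr][nc])
    return result

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(sorted(neighbors(m, 0, 0)))  # [2, 4, 5] -- so 1 and 5 are adjacent
```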
|Why this code gives WA for Petersen Graph(codechef)?|
The code fails almost every test case I tried. I
think the problem is in traverse, in the if
statement conditions within the for loops (lines
45 and 51).
Here, you want index x, such that z[x].p is equal
to v. v is not always the correct index, so z[v]
is incorrect. Likewise in the other line. Try test
cases 'EE' and 'ABCD'.
It would be easiest to reorder the Z array in
advance so that z[x].p == x for every x; then the
lookup by index becomes valid.
|Complexity of this prime number search algorithm|
Check the Pi(n) function (the prime-counting
function). Its approximation, by the prime number
theorem, is n / ln(n).
The overall algorithm (a kind of Sieve of
Eratosthenes implementation) complexity is
evaluated as O(n log log n).
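For reference, a minimal sieve in Python with the stated complexity:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: O(n log log n) time, O(n) space."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(len(primes_up_to(100)))  # 25 primes below 100, vs n/ln(n) ~ 21.7
```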
|How to detect if a file has changed?|
Since this is a word processor program, it can
have a history of actions as well. You can
maintain 2 stacks: one for historical actions
(changes that have already been incorporated), and
another for future actions (changes that had been
applied, but have since been reverted in a linear
undo).
For example, every character typed in sequence can
be one item in the actions stack, and deleting it
(undoing) moves that item onto the future-actions
stack.
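A minimal Python sketch of the two-stack scheme. The action objects here are just placeholder strings; a real word processor would store objects that know how to apply and revert themselves:

```python
class History:
    """Two-stack undo/redo over opaque action objects."""
    def __init__(self):
        self.done = []    # historical actions (already incorporated)
        self.undone = []  # future actions (reverted, available for redo)

    def do(self, action):
        self.done.append(action)
        self.undone.clear()  # a brand-new action invalidates the redo history

    def undo(self):
        if self.done:
            self.undone.append(self.done.pop())

    def redo(self):
        if self.undone:
            self.done.append(self.undone.pop())

h = History()
h.do("type 'a'"); h.do("type 'b'"); h.undo()
print(h.done, h.undone)  # ["type 'a'"] ["type 'b'"]
```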
|Given string x,y and z. Determine if z is a shuffle|
You never, ever HAVE TO use recursive algorithms.
You're free to do so, but you can ALWAYS use an
iterative algorithm instead; recursion just uses
the call stack as its bookkeeping.
In this example, you could use arrays or allocate
a memory block large enough to hold the result.
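An iterative sketch for the shuffle problem itself, using a 2-D array instead of recursion. `dp[i][j]` records whether `z[:i+j]` is a valid shuffle of `x[:i]` and `y[:j]`:

```python
def is_shuffle(x, y, z):
    """Iterative DP: no recursion, just an (len(x)+1) x (len(y)+1) table."""
    if len(x) + len(y) != len(z):
        return False
    dp = [[False] * (len(y) + 1) for _ in range(len(x) + 1)]
    dp[0][0] = True
    for i in range(len(x) + 1):
        for j in range(len(y) + 1):
            # extend a valid prefix by taking the next char from x or from y
            if i and x[i - 1] == z[i + j - 1]:
                dp[i][j] = dp[i][j] or dp[i - 1][j]
            if j and y[j - 1] == z[i + j - 1]:
                dp[i][j] = dp[i][j] or dp[i][j - 1]
    return dp[len(x)][len(y)]

print(is_shuffle("abc", "de", "adbec"))  # True
```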
|Basic decryption for simple encryption algorithm|
We iterate through each
character (CharInEncryptedString) in the encrypted
string:
if (CharInEncryptedString - Key >= 32)
    DecryptedChar = CharInEncryptedString - Key;
else
    DecryptedChar = ((CharInEncryptedString - Key)
    + 127) - 32;
|An efficient way to assign user_ids to huge dataset under certain conditions|
Use a python dictionary as a lookup table to store
node_ids and their corresponding user_ids.
Retrieve tx_id, node_id list ordered by tx_id, and
if a node_id appeared with two tx_ids, the tx
which comes later will find that the node_id
already stored in python dictionary and get the
user_id from dict.
This is a union-find partitioning problem; the
question is how to unite sets (transactions, in
your case) when they share a node_id.
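A small Python sketch of the union-find approach. The transaction/node data here is made up for illustration; any two transactions sharing a node_id end up with the same user_id (their set representative):

```python
class UnionFind:
    """Minimal union-find for grouping transactions that share a node_id."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Hypothetical data: each tx_id with its node_ids.
txs = {"t1": ["n1", "n2"], "t2": ["n2", "n3"], "t3": ["n4"]}
uf = UnionFind()
for tx, nodes in txs.items():
    for node in nodes:
        uf.union(tx, node)

user_of = {tx: uf.find(tx) for tx in txs}  # representative = user_id
print(user_of["t1"] == user_of["t2"], user_of["t1"] == user_of["t3"])
```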
|What's a more efficient implementation of this puzzle?|
You don't have to try all numbers. You can instead
use a different strategy, summed up as "try
appending a digit".
Which digit? Well, a digit such that:
it forms a prime together with your current last
digit;
the prime formed has not occurred in the number
yet.
This should be done recursively (not iteratively),
because you may run out of options and then you'd
have to backtrack and try a different digit.
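The backtracking can be sketched in Python. This assumes the puzzle is: build numbers in which every adjacent digit pair forms a two-digit prime and no such prime repeats (my reading of the strategy above; the target length 4 is arbitrary):

```python
TWO_DIGIT_PRIMES = {11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59,
                    61, 67, 71, 73, 79, 83, 89, 97}

def extend(digits, used, target_len, found):
    """Try appending a digit that forms a fresh two-digit prime; backtrack."""
    if len(digits) == target_len:
        found.append("".join(map(str, digits)))
        return
    for d in range(10):
        p = digits[-1] * 10 + d
        if p in TWO_DIGIT_PRIMES and p not in used:
            used.add(p)
            extend(digits + [d], used, target_len, found)
            used.discard(p)       # undo the choice and try the next digit

found = []
for first in range(1, 10):
    extend([first], set(), 4, found)
print(len(found))
```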
|Generating prime numbers in poly-time|
To show that I can generate a list of the first J
primes in time polynomial in J, I need to work out
the cost of however I am generating the list.
If I am generating the list by checking numbers
one after the other and discarding non-primes,
there are two parts to the cost of generating the
list - how long it takes to check each number, and
how many numbers I need to check.
If primes were vanishingly rare, that search could
take arbitrarily long; fortunately the prime
number theorem says roughly one in every ln(n)
numbers near n is prime, so the J-th prime is only
around J ln J.
|What if I do not use G transpose in calculating Strongly Connected Components?|
The vertices in a strongly connected component
are, by definition, connected to each other (by a
path, not necessarily by a direct edge). If you
make the first DFS call on vertex X, you find out
"which vertices is X connected to" (X -> N). To
make sure that all those vertices are also
connected to X (N -> X), and therefore validate
strong connectivity, you need to traverse the
edges in reversed directions. The easiest way to
do that is to build the transpose graph G^T and
run the second DFS pass on it.
|Dividing an array into optimum no of equal sum sublists|
Finding a single subset with a given weight is NP
hard. If you've some way to identify all the
subsets of a given weight and whose costs are less
than $300 then you need to solve an exact cover
problem, which is NP hard in general. So you can't
expect to find any algorithm with less than
exponential complexity in the worst case.
But what I'd try here is this:
let W = total weight of all packages
Let's call the derangement function f for clarity.
At f(n), there are n hats and n people. Everyone
can choose from n-1 hats. Person 1 takes hat i
from n-1 choices. Person i still has n-1 hats to
choose from and everyone else has n-2 hats to
choose from (they can't choose their own hat or
hat i).
Now we need two cases for what person i does.
Think of this as:
Person i takes hat 1
Person i doesn't take hat 1
Together, these two cases give the recurrence
f(n) = (n-1) * (f(n-1) + f(n-2)).
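A sketch of that recurrence in Python, computed iteratively with the standard base cases f(0) = 1, f(1) = 0:

```python
def derangements(n):
    """f(n) = (n-1) * (f(n-1) + f(n-2)), with f(0) = 1 and f(1) = 0."""
    if n == 0:
        return 1
    if n == 1:
        return 0
    f_prev2, f_prev1 = 1, 0   # f(0), f(1)
    for i in range(2, n + 1):
        f_prev2, f_prev1 = f_prev1, (i - 1) * (f_prev1 + f_prev2)
    return f_prev1

print([derangements(n) for n in range(6)])  # [1, 0, 1, 2, 9, 44]
```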
|How to iterate through all cases when partitioning objects|
I managed to find a solution through a combination
of recursion and loop.
Here's the pseudo code (I have no idea how to
write pseudo code... I'm denoting a list by [a; b;
c]):
// Returns a list of integers in range [0, k].
function num_list k =
// Recursively generate all the possible
partitions with [total] objects
// and [groups] partitions. Returns a list of lists.
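The recursion-plus-loop combination can be rendered in runnable Python. This assumes the goal is to list every way to split `total` identical objects into `groups` counts; keeping the counts non-decreasing avoids generating the same grouping in several orders:

```python
def partitions(total, groups, minimum=0):
    """All non-decreasing lists of `groups` counts summing to `total`."""
    if groups == 1:
        return [[total]] if total >= minimum else []
    result = []
    # loop over the size of the first group, recurse for the rest
    for first in range(minimum, total // groups + 1):
        for rest in partitions(total - first, groups - 1, first):
            result.append([first] + rest)
    return result

print(partitions(5, 2))  # [[0, 5], [1, 4], [2, 3]]
```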
|Algorithm: How to find closest element, having coordinates and dimension|
Let's say your object's center is (x,y) with width
w, height h.
The objects (rectangles, I suppose) in the array
have center (xi, yi), width wi and height hi.
Your object will connect to the others through its
right edge, top edge or bottom edge, whose
endpoints are:
R1 - R2: ((x+(w/2)), (y-(h/2))) - ((x+(w/2)), (y+(h/2)))
T1 - T2: ((x-(w/2)), (y+(h/2))) - ((x+(w/2)), (y+(h/2)))
B1 - B2: ((x-(w/2)), (y-(h/2))) - ((x+(w/2)), (y-(h/2)))
|Developing player rankings with ELO|
Don't get offended; it is your table that doesn't
work.
The Elo system is based on the premise that a
rating is an accurate estimate of strength, and
that the difference of ratings accurately predicts
the outcome of a match (a player better by 200
points is expected to score 75%). If an actual
outcome does not agree with a prediction, it means
that the ratings do not reflect strength, and
hence must be adjusted.
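The standard Elo formulas behind that premise, as a small Python sketch (K = 32 is a common but arbitrary choice):

```python
def expected_score(rating_a, rating_b):
    """Standard Elo expectation: P(A scores), from the rating difference."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating, expected, actual, k=32):
    """Nudge the rating toward whatever the actual result implies."""
    return rating + k * (actual - expected)

print(round(expected_score(1400, 1200), 2))  # 0.76 -- about the 75% quoted above
```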
|How to transform two set of discrete points ( vectors ) to help plotting them on a common scale|
I feel you need to upsample / interpolate the
vector with fewer samples to get more samples, and
downsample / decimate the vector with more samples
to get fewer samples (in essence, matching the
sampling rate of both the vectors).
I used scipy.signal.resample to do the up / down
sampling.
I tried to simulate your situation using two
random vectors of unequal sample sizes.
See if this helps.
|Heap Sort Space Complexity|
The implementation of heapsort that you've
described above sure doesn't look like it works in
constant space for precisely the reason that
you're worried about.
However, that doesn't mean that it's not possible
to implement heapsort in O(1) auxiliary space.
Typically, an implementation of heapsort would
reorder the elements in the array to implicitly
store a binary max-heap. One nifty detail about
this representation is that the children of the
node at index i live at indices 2i + 1 and 2i + 2,
so the heap needs no storage beyond the array
itself.
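A sketch of that implicit-heap layout in Python, sorting in place with only O(1) auxiliary space:

```python
def heapsort(a):
    """In-place heapsort: the array itself stores an implicit binary
    max-heap (children of index i live at 2*i+1 and 2*i+2)."""
    def sift_down(start, end):
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1                       # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    n = len(a)
    for start in range(n // 2 - 1, -1, -1):      # heapify in O(n)
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):              # repeatedly extract the max
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)

nums = [5, 1, 4, 2, 3]
heapsort(nums)
print(nums)  # [1, 2, 3, 4, 5]
```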
|complex root finding algorithm|
Your function is not a polynomial, because it
contains the exponential function. The
Newton-Raphson method is often used for numerical
root-finding. It is described at length at
|Every possible combination algorithm|
For a given n there are always 2^n ways, as for
each position we can choose 2 different symbols.
For a general number of symbols, the usual
approach would be backtracking, but since you only
have two symbols, there is an easier approach
using bitmasks.
Notice that the numbers between 0 and 2^n - 1
written in binary contain all possible bitmasks of
length n, so you can just "print the numbers in
binary".
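"Printing the numbers in binary" can be sketched like this, mapping bit 0/1 to the two symbols:

```python
def all_strings(symbols, n):
    """Enumerate all 2^n strings over two symbols by counting in binary."""
    a, b = symbols
    result = []
    for mask in range(2 ** n):
        # bit i of mask decides the symbol at position i (left to right)
        result.append("".join(b if (mask >> i) & 1 else a
                              for i in reversed(range(n))))
    return result

print(all_strings("ab", 2))  # ['aa', 'ab', 'ba', 'bb']
```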
|RSA Cryptosystem - Retrieve m|
General solution for arbitrary e
Since d and e are modular inverses modulo phi(n),
you also have de - 1 = c*phi(n) for some constant c.
We also have k^phi(n) = 1 (mod n) when gcd(k,n) = 1,
and therefore also k^(de-1) = 1 (mod n).
phi(n) is divisible by 4 (since p and q are odd),
so split d*e - 1 into t*2^s with t odd and
calculate k^t (mod n). When you square the result
s times (mod n) you will eventually reach 1; if
the value just before that is neither 1 nor n-1,
it is a nontrivial square root of 1, and its gcd
with n reveals a factor of n.
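A Python sketch of that factoring procedure, on toy primes (real keys use the same steps at larger sizes; the base k is picked at random and retried, as the argument requires):

```python
import math, random

def factor_from_key(n, e, d):
    """Recover a factor of n from (e, d): write d*e - 1 = t * 2^s with t
    odd, then walk the squaring chain k^t, k^2t, ... looking for a
    nontrivial square root of 1 (mod n)."""
    t = d * e - 1
    s = 0
    while t % 2 == 0:
        t //= 2
        s += 1
    while True:
        k = random.randrange(2, n - 1)
        g = math.gcd(k, n)
        if g > 1:
            return g                  # lucky: k already shares a factor
        x = pow(k, t, n)
        for _ in range(s):
            y = pow(x, 2, n)
            if y == 1 and x not in (1, n - 1):
                return math.gcd(x - 1, n)   # nontrivial root of 1 found
            x = y
        # chain only passed through trivial roots; try another k

p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))
f = factor_from_key(n, e, d)
print(sorted([f, n // f]))  # [53, 61]
```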
|Heap-like data structure with fast random access?|
If few members are affected when the best entity
is grabbed, then you might be able to improve the
runtime by using a linked list and an unordered
map (each with the original set of entities), and
a max heap. After removing the best entity from
the end of the linked list you'll use the map to
locate the affected entities, removing them from
the list and adding the non-worthless entities to
the max heap.
|How do you pin point the location of a user with 3 nodes, using Triangulation?|
Your approach is predicated on some flimsy
assumptions.
Trilateration is determining a position in space
based on three (or four, if working in three
dimensions) distance measurements to known
locations. Triangulation is determining a position
in space based on three angular (what direction is
the signal coming from) measurements to known
locations. The three Raspberry Pi nodes are fixed
at known locations and measure distances, so what
you are describing is trilateration, not
triangulation.
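A 2-D trilateration sketch in Python: subtracting pairs of circle equations cancels the quadratic terms and leaves a 2x2 linear system (the node positions and measured point here are made up for illustration):

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve (x - xi)^2 + (y - yi)^2 = ri^2 for (x, y) by subtracting the
    equations pairwise, which leaves a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1   # zero when the three nodes are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Nodes at known spots, distances measured to a device at (2, 3):
nodes = [(0, 0), (10, 0), (0, 10)]
dists = [math.dist(node, (2, 3)) for node in nodes]
estimate = trilaterate(nodes[0], dists[0], nodes[1], dists[1],
                       nodes[2], dists[2])
print(estimate)  # (2.0, 3.0)
```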
|Run time of reversing the words in a string|
In your algorithm:
split has linear complexity in the length of the
string.
Assuming that by
you actually meant
string wordsReversed = "";
and that by
wordsReversed.join(" ", reversedWord);
you actually meant
wordsReversed += " " + reversedWord;
then the body of the outer foreach loop has linear
complexity in the length of word, since both the
reversal and the concatenation are linear in it.
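The repeated `wordsReversed += ...` concatenation is what makes the whole thing quadratic; building the pieces and joining once at the end keeps it linear. A Python sketch of the O(n) shape:

```python
def reverse_words(sentence):
    """O(n) overall: split, reverse each word, and join once at the end.
    Accumulating with += on an immutable string would be O(n^2)."""
    return " ".join(word[::-1] for word in sentence.split(" "))

print(reverse_words("hello world"))  # olleh dlrow
```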
|Depth First Search Algorithm Prolog|
You just need to make facts that describe the
valid moves in a graph.
So for instance, if you have nodes A, B, C, and D,
every edge on the graph would have a mov() fact.
If A had edges to B and C, and B had an edge to D,
your facts would be:
mov(a, b).
mov(a, c).
mov(b, d).
Basically, draw a graph and write a fact like the
above for every path from a node to another node.
|Algorithms for dividing an array into n parts|
This is a variant of the partition problem (see
details). In fact a solution to this can solve
that one (take an array, pad with 0s, and then
solve this problem) so this problem is NP hard.
There is a dynamic programming approach that is
pseudo-polynomial. For each i from 0 to the size
of the array, you keep track of all possible sums
achievable using a subset of the first i elements.
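The core of that DP in Python, shown here for the two-part case (the data is made up; the set of achievable sums grows with each element, which is what makes it pseudo-polynomial):

```python
def achievable_sums(arr):
    """Pseudo-polynomial DP: all sums reachable by some subset of arr."""
    sums = {0}
    for x in arr:
        sums |= {s + x for s in sums}  # either skip x or include it
    return sums

# A split into two equal-sum halves exists iff total/2 is achievable:
arr = [3, 1, 4, 2, 2]
total = sum(arr)
print(total % 2 == 0 and total // 2 in achievable_sums(arr))  # True
```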
|Finding a "complete" convex hull in less than O(n^2)|
Here's an O(log h)-time algorithm that, given a
convex hull with h vertices in sorted order and a
query point, tests whether the query point lies on
the hull. From the hull, compute a point in the
interior by averaging three of its vertices. Call
this point the origin. Partition the plane into
wedges bounded by rays from the origin through
hull vertices. Use binary search with angular
comparisons to find the wedge containing the query
point, then test the point against the hull edge
bounding that wedge.
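A sketch of that wedge search in Python, using hull[0] as the apex of the fan rather than an averaged interior origin (a common equivalent variant); it reports whether the query point is inside or on the hull, assuming the vertices are given in counter-clockwise order:

```python
def point_in_convex(hull, p):
    """O(log h) inside-or-on-boundary test for a CCW convex hull."""
    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

    n = len(hull)
    # p must lie within the wedge between hull[1] and hull[n-1] seen
    # from hull[0], otherwise it is outside immediately.
    if cross(hull[0], hull[1], p) < 0 or cross(hull[0], hull[n-1], p) > 0:
        return False
    lo, hi = 1, n - 1
    while hi - lo > 1:                 # binary search for the wedge
        mid = (lo + hi) // 2
        if cross(hull[0], hull[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    # finally test p against the hull edge closing the wedge
    return cross(hull[lo], hull[hi], p) >= 0

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(point_in_convex(square, (1, 1)), point_in_convex(square, (3, 1)))
```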
|Algorithm which finds biggest n nodes in a tree|
And a standard heap selection algorithm won't need
more than O(k) extra space.
The basic algorithm is (assuming that k is the
number of items you want to select):
create an empty min-heap
for each node (depth-first search)
    if heap.count < k
        heap.Add(node)
    else if node.Value > heap.Peek().Value
        heap.RemoveSmallest()
        heap.Add(node)
When the for loop is done, your heap contains the
k largest values.
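That selection loop in Python, using `heapq` over a flat list of values (in a tree you would feed the values in DFS visit order):

```python
import heapq

def k_largest(values, k):
    """Min-heap of size k: the root is the smallest of the current
    top-k, so any new value larger than it displaces the root."""
    heap = []
    for v in values:                   # DFS visit order for a tree
        if len(heap) < k:
            heapq.heappush(heap, v)
        elif v > heap[0]:
            heapq.heapreplace(heap, v)
    return sorted(heap, reverse=True)

print(k_largest([5, 1, 9, 3, 7, 2], 3))  # [9, 7, 5]
```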
|Can someone explain me why the following algorithm for LIS is not O(n)?|
Your current implementation of the lis/1 function
is O(n); I don't see any reason to doubt that. But
there is a problem: your implementation doesn't
actually calculate a valid LIS. Try
for an error example. The longest increasing
subsequence is [1,2,3,4], right? But your
algorithm returns 6 as the answer.
The first error in your algorithm is that you
increase the result each time you see an
increasing adjacent pair, even when those pairs do
not belong to a single increasing subsequence.
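For reference, a correct O(n log n) length computation (the "patience" method), sketched in Python:

```python
import bisect

def lis_length(seq):
    """tails[i] holds the smallest possible tail of an increasing
    subsequence of length i + 1; each element updates one slot."""
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # x extends the longest subsequence so far
        else:
            tails[i] = x      # x gives a smaller tail for length i + 1
    return len(tails)

print(lis_length([1, 2, 3, 1, 2, 3, 4]))  # 4
```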
|How to compute sum of sum with changing upper range|
I think the condition you're looking for is:
min((1+epsilon)*k - 1, k);
The g-1 is supposed to be evaluated after the
right hand side loop is done, and take the last
value of g.
|How can I check if a number is Colorful Number?|
A straightforward solution is to enumerate all
products and record them in a hash map.
You enumerate all products in a double loop:
by increasing the starting index;
then by increasing ending index, each time
multiplying by the current digit.
3, 3.2, 3.2.4, 3.2.4.5; 2, 2.4, 2.4.5; 4, 4.5; 5
You can verify that this generates all products.
(It also generates the product of the full number
itself.)
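The double loop can be sketched directly in Python, returning False as soon as a product repeats:

```python
def is_colorful(number):
    """A number is colorful when the products of all its contiguous
    digit runs are pairwise distinct."""
    digits = [int(c) for c in str(number)]
    seen = set()
    for start in range(len(digits)):        # increasing starting index
        product = 1
        for end in range(start, len(digits)):
            product *= digits[end]          # extend the run by one digit
            if product in seen:
                return False
            seen.add(product)
    return True

print(is_colorful(3245), is_colorful(326))  # True False
```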
|Random choosing number in array without repeated|
I think index=randi(gamma,1); is not right,
because the task says to select a number t
randomly, but you select an index randomly and
assign t=u(index).
See if this works:
k = 9;
u = 1 : k;
N = 12;
gamma = k;
for j = 1 : N
    t = randi(gamma,1);
    temp = u(t);
    u(t) = u(gamma);
    u(gamma) = temp;
    gamma = gamma - 1;
    if gamma == 0
        gamma = k;
    end
end
|SPARQL Query Computational Complexity|
SPARQL query evaluation is PSPACE-complete in
general. You can probably only come up with the
best case
complexity for any given query. The real-world
complexity will depend on the implementation of
the database to some degree.
|How do I calculate a new coordinate of a point on a circle’s circumference?|
If I understand correctly, you're doing a
projection of a point A onto your circle. Let's
say the radius of the circle is r and the distance
from the center of the circle to A is d. Then you
need λ such that r = λ d, or λ = r/d.
For the circle center at the origin, you have d =
Sqrt(x^2 + y^2).
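A minimal Python sketch of that projection, for a circle centred at the origin:

```python
import math

def project_to_circle(x, y, r):
    """Scale A = (x, y) by lambda = r/d so it lands on the circle of
    radius r centred at the origin."""
    d = math.sqrt(x**2 + y**2)
    lam = r / d               # undefined when A is the centre itself
    return (lam * x, lam * y)

print(project_to_circle(3, 4, 2))  # close to (1.2, 1.6)
```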
|Generate Price Data from 3 variables and data|
If you are looking for something simple that is
just based on the available data, plain SQL will
suffice. You need to GROUP BY, use AVG and filter
with WHERE.
If you are looking for something fancier and want
to make predictions based on limited data or
incomplete queries, you should have a look at
things like regression trees.
|Optimizing K-means algorithm|
Second step: The intervals are on the distance
metric, not on coordinates of data.
Fifth step: This is essentially distance to the
second closest cluster.
Let me know if you still don't understand these
steps.