Simpler solution to problem 6.13

#1

Hello,

I think I found a much simpler solution to problem 6.13 than the one presented in the book. What do you think?

(For clarity, checks for valid input are omitted).

Given an array A of n elements and a permutation P, apply P to A using only constant additional storage. Use A itself to store the result.

public static int[] permuteArray(int[] a, int[] p) {
	int permIdx = 0;
	int toBeReplaced;
	int replacement = a[0];

	for (int i = 0; i < a.length; i++) {
		toBeReplaced  = a[p[permIdx]];  // remember the value we are about to overwrite
		a[p[permIdx]] = replacement;    // drop the carried value at its target position
		permIdx       = p[permIdx];     // follow the permutation to the next index
		replacement   = toBeReplaced;   // carry the displaced value forward
	}

	return Arrays.copyOf(a, a.length);
}


Thanks,
Lukasz

0 Likes

#2

Hey Lukasz,

Sorry for the late reply. First of all, thanks for proposing your idea on this forum; we authors love to receive questions, feedback, and comments from our readers.

I think there is one thing missing in your algorithm: it implicitly assumes the permutation consists of a single cycle. For example, the permutation <2, 3, 0, 1> applied to A = <a, b, c, d> consists of two cycles, and your algorithm only follows the cycle containing index 0, so b and d are never moved to their correct positions.
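For reference, here is a rough sketch of one way to extend the idea so that it handles every cycle: it marks an index as already processed by temporarily offsetting its entry in P by n, and restores P afterwards. This is only an illustration of the cycle-following idea, not necessarily the solution as written in the book.

#include <utility>
#include <vector>

// Sketch: apply P to A in place (A[P[i]] receives the old A[i]), visiting
// every cycle. An index that has already been moved is marked by subtracting
// n from its entry in P; the markers are removed again before returning.
void applyPermutation(std::vector<int>& P, std::vector<char>& A) {
    const int n = static_cast<int>(P.size());
    for (int i = 0; i < n; ++i) {
        if (P[i] < 0) {
            continue;  // this index was handled as part of an earlier cycle
        }
        int from = i;
        char carried = A[i];
        do {
            int to = P[from];
            std::swap(carried, A[to]);  // drop the carried value, pick up the displaced one
            P[from] -= n;               // mark 'from' as processed
            from = to;
        } while (from != i);
    }
    for (int& x : P) {
        x += n;  // restore P to its original contents
    }
}

With P = <2, 3, 0, 1> and A = <a, b, c, d> this produces <c, d, a, b>, and P is back to <2, 3, 0, 1> at the end.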

Thanks,
Tsung-Hsien

1 Like

#3

Hi Tsung-Hsien,

Thanks for your response. You are right, the algorithm fails given this input.

BTW, your book is very good and it helped me prepare for interviews. But I wish all the code examples were in Java!

Regards,
Lukasz

0 Likes

#4

Hey Lukasz,

We do have Java code; it is available at http://elementsofprogramminginterviews.com/solutions/. Please take a look!

Tsung-Hsien

0 Likes

#5

OK, that’s great. Thanks!

0 Likes

#6

Hey Tsung-Hsien,

I had a solution similar to Lukasz's, and from what I can tell it doesn't assume that the permutation forms a single cycle. Let me know what you think! :blush:

#include <vector>
#include <iostream>
#include <algorithm>

template<typename T>
std::ostream& operator<<(std::ostream& s, const std::vector<T>& v) {
    s.put('[');
    char comma[3] = {'\0', ' ', '\0'};
    for (const auto& e : v) {
        s << comma << e;
        comma[0] = ',';
    }
    return s << ']';
}

using namespace std;

void premute(vector<int> P, vector<char> &A) {
    for (int i = 0; i < int(P.size()); ++i) {
        swap(A[i], A[P[i]]);  // move A[i] into the slot P[i] points at
        swap(P[i], P[P[i]]);  // keep P consistent with the new layout of A
    }
}

int main() {
    vector<int> P {2,3,0,1};
    vector<char> A {'a','b','c','d'};
    premute(P, A);
    cout << A << '\n';
    return 0;
}

Output:

[c, d, a, b]

Also, thanks for the awesome book! It has been a great help!

Luis

0 Likes

#7

OK, I just realized that this still needs to be O(n^2), and it kind of reminds me of bubble sort. I believe this version should work for all cases:

void premute(vector<int> P, vector<char> &A) {
    // Repeat the single pass n times so that every element eventually
    // reaches the position P says it belongs in.
    for(int j = 0; j < int(P.size()); ++j) {
        for(int i = 0; i < int(P.size()); ++i) {
            swap(A[i], A[P[i]]);
            swap(P[i], P[P[i]]);
        }
    }
}

Let me know what you think.

Best,
Luis

0 Likes

#8

Hey Luis,

Thanks for your effort in devising this algorithm. However, could you explain the rationale behind it? Imagine that I am interviewing you now: you should be able to explain to me why your algorithm works :smile:

0 Likes

#10

Hi Tsung-Hsien,

Thank you for your answer! As I mentioned before, the algorithm reminds me of bubble sort: in the end we want the values of P sorted, so we can think of the problem as a simpler kind of sort where we already have information telling us where each element should go.

On every swap we make, we are guaranteed to put one of the two elements in its correct position, since we know where it should go. So if we repeat the pass n times, we should end up with all the elements in the correct order.

In a way, after each swap we are bubbling (going back to the analogy) one of the elements into its correct, permanent position. So in the next pass, the elements already in their correct positions won't move, giving the ones that are not yet in place a chance to move toward their correct positions.

I am not certain that O(n^2) is the tightest bound; I think it could be tighter since, as you mentioned before, the cycle structure of the permutation works in the algorithm's favour.

An optimization could be to check whether the elements are already sorted as we go, to avoid doing too many iterations of the outer loop. This would make the code a bit more complicated, but we could keep a counter of sorted elements and mark the ones already in place with a -1 so we don't double-count them (see the sketch below).
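Something like the following untested sketch is what I have in mind; instead of the counter and the -1 markers, it uses a per-pass 'changed' flag, which gives the same early exit:

#include <utility>
#include <vector>

// Sketch: same swapping idea as before, but the outer loop stops as soon as
// a full pass makes no swaps, i.e. P has become the identity and every
// element of A already sits in its final position.
void premute(std::vector<int> P, std::vector<char> &A) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (int i = 0; i < int(P.size()); ++i) {
            if (P[i] != i) {
                std::swap(A[i], A[P[i]]);
                std::swap(P[i], P[P[i]]);
                changed = true;
            }
        }
    }
}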

Thanks for your help and response :smile: Let me know what you think!

Luis

0 Likes

#11

Hey Luis,

Sorry for my late reply. It took me some time to understand how your algorithm works. I think it is correct, and it may be the simplest algorithm for this problem I have seen, which is valuable, because sometimes the most direct algorithm is the key to finding the optimal one. The only drawback I see now is the space complexity, which is implicitly O(n). I personally think this algorithm is a good candidate to present as the brute-force approach that everyone should be able to devise during the interview, and I will probably add it in the next revision.

Please let me know what you think; we look forward to your feedback.

Tsung-Hsien

1 Like

#12

Hi Tsung-Hsien, thank you for your feedback; I really appreciate you taking the time to respond to the forum questions :smile: I was wondering where the O(n) storage comes from? Is it from using the swap function?

0 Likes

#13

Hey Luis,

The O(n) space comes from the fact that you modify the P array. In the book, we have an algorithm that also modifies P but reverts it at the end of the computation; yours is similar to that.

0 Likes

#14

I think I came up with something very close to what is written above, and simpler than the book's solution, but I'm stuck when analyzing the complexity of reaching a fixed point of the permutation (not just the worst case)… could anyone point me in the right direction? It seems to be n^2 in the worst case, but is there a tighter bound?
v is the element vector, and p the permutation vector.

template <typename T>
void Prob6_13(vector<T> &v, vector<int> p)
{
	for (int i = 0; i < int(v.size()); ++i)
	{
		while (i != p[i])
		{
			std::swap(v[p[i]], v[i]);
			std::swap(p[p[i]], p[i]);
		}
	}
}
0 Likes

#15

Hey zebullon,

First of all, thanks for your question. I think there are two problems in your algorithm:

  1. It might generate a wrong result, as you do not update i in the while loop.
  2. Even if you update i in the while loop, you might permute an element that was already permuted as part of an earlier cycle.

You might try using the code on GitHub to test your program, as it includes sample test cases and a checker.

Please let me know if you have any problem.

0 Likes

#16

Hi, and thanks for the reply!
I had a look at your test suite, and I also have my own (below), and the code I posted seems to pass both… so if you or anyone else happens to find an input that breaks my code, please share it, that would be very instructive :wink:

void Prob6_13D()
{
	random_device rnb_init;
	default_random_engine generator(rnb_init());
	uniform_int_distribution<int> distrib(1, 100);
	int n = distrib(generator);
	vector<int> v(n), p(n), check(n);

	for (int i = 0; i < n; ++i)
	{
		v[i] = i;
		p[i] = i;
	}
	RandomShuffle(p);
	check = v;
	Prob6_13(v, p);
	for (int i = 0; i < int(v.size()); ++i)
	{
		assert(v[p[i]] == check[i]);
	}
	cout << "Ok\n";
}
0 Likes

#17

Hey zebullon,

Could you show the code of RandomShuffle()? I would like to see your complete code, as your algorithm looks simple and neat to me. Could you email it to tsung.hsien.lee@gmail.com? Thanks!

0 Likes

#18

Hi Tsung-Hsien,

Why does modifying the P array without reverting it lead to O(n) space complexity? Modifying P in place does not change how much memory it occupies, so I see the extra space as O(1). Can you please help me see where the O(n) with respect to the input size comes from?

If the P array occupies 2 kbytes of memory, it will occupy those 2 kbytes whether you modify it or not. I don't understand how not reverting it to its original contents leads to an O(n) space complexity. Can you please elaborate?

Thanks!
JOC

0 Likes

#19

In practice, it probably does not make any difference; in theory, space complexity is defined in terms of the memory your algorithm touches while executing.

However, in an extreme (worst) case it actually can increase the space complexity. In this problem we shift all the numbers in P by a constant amount, which can increase the per-element space usage. For example, the input array size might be as large as 2^31 - 1, so a 32-bit integer is initially enough to hold each element. However, since you need to shift all of those numbers by the array size (i.e., 2^31 - 1), each element now needs a 64-bit integer, otherwise the algorithm won't work. Here you can see the space usage increase by O(n), because the space for each element grows from 32 bits to 64 bits.

Let me know if you have any other questions; this is an interesting one for sure.

0 Likes

#20

Hi Tsung,

Thanks for your reply.

I do see where you are coming from. However, the argument that the space complexity might increase to O(n) when dealing with 32-bit integers doesn't apply to this problem. There is no left-shifting going on; the algorithm simply adds and subtracts.

Your point does make sense, but I don't see how it applies to this problem. Even if you didn't revert the P array in the book's answer, it would still be O(1) in my opinion, because a 32-bit integer would never overflow given the logic of the algorithm.

Thanks for your input! I appreciate it very much.

0 Likes

#21

I think it still applies to this problem: if the array size is too large, we are forced to use 64-bit instead of 32-bit integers, even though each element itself fits in 32 bits, because of what our algorithm does to the entries.

As for the scenario where we don't revert P, there is still a space cost: either you copy P when calling the function, or you don't copy it, but then, when the caller needs the original P, the caller has to store a copy of it somewhere. That is why there is always an O(n) space cost associated with it.
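To make the two options concrete, here is a sketch that reuses Luis's O(n^2) premute from post #7; premuteByRef in the comment is hypothetical, just to name the alternative:

#include <algorithm>
#include <vector>
using namespace std;

// Luis's version, which takes P by value: the call itself silently makes an
// O(n) copy of P, so the caller's P survives unchanged.
void premute(vector<int> P, vector<char> &A) {
    for (int j = 0; j < int(P.size()); ++j) {
        for (int i = 0; i < int(P.size()); ++i) {
            swap(A[i], A[P[i]]);
            swap(P[i], P[P[i]]);
        }
    }
}

int main() {
    vector<int> P{2, 3, 0, 1};
    vector<char> A{'a', 'b', 'c', 'd'};

    // Option 1: pass P by value (as above). The hidden copy costs O(n) space,
    // but P is still the original permutation after the call.
    premute(P, A);

    // Option 2: if premute instead took P by reference and clobbered it, a
    // caller that still needs the original P would have to keep its own O(n)
    // copy before the call, e.g.:
    //     vector<int> saved = P;
    //     premuteByRef(P, A);   // hypothetical by-reference variant
    return 0;
}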

0 Likes