Can someone explain the lowest set bit erased solution that led to O(k) time complexity. I don’t get it. What is lowest set bit?
Lowest set bit means the lowest-order bit that is set to 1. This problem relies on the trick that x & (x - 1) clears the lowest set bit, so applying it repeatedly clears the set bits one at a time. You can print the value of x in binary during each iteration to help you understand that. We have the new EPI Judge framework for you to try that.
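As a concrete illustration of that suggestion, here is a small Python sketch (my own helper, not part of the EPI Judge) that records x in binary on each iteration while x & (x - 1) strips the lowest set bit:

```python
def trace_clear_lowest_set_bits(x):
    """Repeatedly clear the lowest set bit of x, recording each step."""
    steps = []
    while x:
        steps.append(bin(x))
        x &= x - 1  # clears the lowest set bit
    steps.append(bin(x))  # final value is always 0b0
    return steps

# Each step has exactly one fewer set bit than the previous one.
print(trace_clear_lowest_set_bits(0b10110010))
# → ['0b10110010', '0b10110000', '0b10100000', '0b10000000', '0b0']
```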
In my understanding, the lowest set bit (LSB) is the first bit that is 1 when scanning from right to left.
So, for the number 10110010, the lowest set bit is in the 2nd position from the right:
1011 00[1]0  (this one)
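If you want to check that position programmatically, a minimal Python sketch (my own helper, not from the book) scans the bits from the right until it hits a 1:

```python
def lowest_set_bit_position(x):
    """Return the 0-based position of the lowest set bit of x (x > 0)."""
    pos = 0
    while not (x >> pos) & 1:  # shift right until the current bit is 1
        pos += 1
    return pos

# 0b10110010 -> position 1, i.e. the 2nd bit from the right.
print(lowest_set_bit_position(0b10110010))  # → 1
```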
Now, coming back to that bit-fiddling trick of x & (x - 1), with x = 5.
In order to fully understand this, I hope you're comfortable with subtracting two numbers in binary. Ex: say x = 5, i.e. 101 in binary; then x - 1 = 4, i.e. 100 in binary.
So, let's relook at the operation itself:

  101  (5)
& 100  (4)
= 100  (4)

You see what happened there?
The LSB of 5 (i.e. 101), the first set bit going right to left, was '1'. After the operation x & (x - 1), it is cleared, i.e. reset to '0'.
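To convince yourself this holds for any x, not just 5, here is a quick Python check (my own sketch) that x & (x - 1) removes exactly one set bit and leaves every other set bit untouched:

```python
def clears_lowest_set_bit(x):
    """Check that x & (x - 1) removes exactly one set bit from x."""
    y = x & (x - 1)
    # y must have one fewer set bit, and every set bit of y must also be in x.
    return bin(y).count("1") == bin(x).count("1") - 1 and (y & x) == y

print(all(clears_lowest_set_bit(x) for x in range(1, 1000)))  # → True
```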
Now, the O(k) time complexity comes from the fact that if a word of size n has k bits set to 1, then it takes exactly k applications of x & (x - 1) to clear them all.
The worst-case time complexity is O(n), because in the worst case all k = n bits are set.
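Putting it all together, this is the standard O(k) bit-counting routine the question is about, sketched in Python:

```python
def count_bits(x):
    """Count set bits; the loop runs once per set bit, i.e. O(k) times."""
    count = 0
    while x:
        x &= x - 1  # drop the lowest set bit
        count += 1
    return count

print(count_bits(0b10110010))  # → 4
```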
I scratched my head for quite a while before I actually got it myself. Hope this was helpful.