Chapter 4: Questions about time complexity when using a lookup table

#1

Hi All,

I just started using EPI (new book). For both problems 4.1 and 4.3 (chapter 4), I noticed they don’t take into consideration the time/space complexity of the lookup table. Is that accurate for interview purposes as well?

If you need to write compilable code in an interview, you will have to account for writing the code that builds the lookup table. Therefore, when doing time/space complexity analysis, the solutions using a lookup table will no longer be optimal.

I understand that this is the best way to do it in production code, since it greatly increases the speed if you are doing those computations very often. However, I am skeptical about how an interviewer will perceive it.

If the time/space analysis has to cover your complete solution, the lookup table will not be a good solution at all. For example, in question 4.1, the solution that runs in O(s), where s is the number of bits set to 1, will end up being faster than the lookup-table solution that runs in O(n/L), where n is the number of bits in the input word and L is the width of the lookup-table key.
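For reference, here is a minimal Python sketch of the O(s) approach I mean, the drop-the-lowest-set-bit trick (the function name is mine, not necessarily the book's exact code):

```python
def parity_drop_bits(x: int) -> int:
    # O(s), where s is the number of bits set to 1:
    # x & (x - 1) clears the lowest set bit, so the loop runs
    # once per set bit rather than once per bit of the word.
    result = 0
    while x:
        result ^= 1   # flip the running parity for each set bit
        x &= x - 1    # drop the lowest set bit
    return result
```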

Can someone please comment on what is appropriate and better for interview settings?

Thanks!
JOC

2 Likes

#2

Hey @joc,

It is a good question why we don’t take the lookup table’s space/time complexity into account. The main reason is that the lookup table is static across all possible inputs. You might pay a large one-time cost in time and space to compute it, but no matter how the input changes, those values stay the same. They are like axioms for the problem, and axioms are timeless.

Take problem 4.1 as an example: the parity of 1011 is 1, so we build a lookup table such that when the input is 1011, we return 1. This is true regardless of whether we reboot the machine, and regardless of whether the computation is done in a remote datacenter. In that sense the computation really only needs to be done once over the lifetime of the system the company operates. As a result, we don’t count the time/space complexity of the lookup table for those problems, as long as we know the calls to parity() will dominate the running time because parity() is invoked many times.
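To make that concrete, here is a rough Python sketch of what I mean (the 16-bit table width and the names are just for illustration, not the book’s exact code). The table is filled once, and every later parity() call only does a few shifts and lookups:

```python
# Built once (e.g. at module load); its contents never depend on future inputs.
def _build_parity_table(width: int = 16) -> list[int]:
    table = [0] * (1 << width)
    for i in range(1, 1 << width):
        # Parity of i = parity of i with its lowest bit dropped, XOR that bit.
        table[i] = table[i >> 1] ^ (i & 1)
    return table

_PARITY_TABLE = _build_parity_table()

def parity(x: int) -> int:
    # O(n/L) per call for an n-bit word and L-bit table key: 64/16 = 4 lookups.
    mask = 0xFFFF
    return (_PARITY_TABLE[x & mask]
            ^ _PARITY_TABLE[(x >> 16) & mask]
            ^ _PARITY_TABLE[(x >> 32) & mask]
            ^ _PARITY_TABLE[(x >> 48) & mask])
```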

1 Like

#3

Hi Tsung,

Thanks for your quick reply. It is very much appreciated. However, I get a bit lost in your explanation at one point.

" This is always true no matter we reboot machine or not, no matter the computation is done at a remote datacenter or not.Therefore, this kind of computation will truly be done one time across the lifetime since the company operates till its end."

If you reboot the machine, all programs are gone from memory. Every single time the machine gets rebooted, memory needs to be initialized before anything happens. I don’t understand what you mean by the statement quoted above. The lookup-table construction must run once for every single time that you reboot the machine.

Every single time you reboot a machine, firmware (UEFI/BIOS) runs and initializes the memory and several other hardware peripherals. On some embedded systems, this responsibility lives in the bootloader, which takes care of early system memory initialization.

Everything else in your explanation makes sense to me. I understand how to talk through a question like this in an interview and check whether the interviewer is okay with such a solution before coding it up. The lookup table is built only once and is independent of the input.

However, I still don’t see why you claim that it will always run once "over the lifetime of the system the company operates". I believe you are assuming that the company will never power off the datacenter, and that there will never be any issues requiring such a thing. Right?

Anyway, even if it’s once per boot-up, I think the time/space complexity can be ignored, as you mentioned.

Thanks!
JOC

1 Like

#4

Hey @joc,

Thanks for your question. What I wanted to emphasize is the property that the lookup table’s contents won’t change over time. In reality, we rarely reboot machines, and it is very likely you would store the lookup table’s contents in a remote virtual machine that serves those values whenever you need them. Such a VM is effectively a datacenter service that never powers off.

In practice, the lookup table method only pays off if we know the function will be called so many times that the one-time initialization cost is eventually outweighed by the cost of the function calls. In web services, that scenario is very common. Otherwise, the price of the lookup table is usually high in terms of time and space, as you mentioned.
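As a back-of-the-envelope illustration (the operation counts below are assumptions I am making for a 64-bit word and a 16-bit table, not measurements):

```python
# Rough break-even estimate for the one-time table build vs. per-call savings.
TABLE_BUILD_OPS = 1 << 16      # one pass to fill a 16-bit parity table
OPS_PER_CALL_TABLE = 4         # four lookups per 64-bit word
OPS_PER_CALL_LOOP = 32         # assume roughly half of the 64 bits are set

break_even = TABLE_BUILD_OPS / (OPS_PER_CALL_LOOP - OPS_PER_CALL_TABLE)
print(f"table pays off after roughly {break_even:.0f} calls")  # ~2341
```

So even a few thousand calls are enough to amortize the table, which is why the method fits long-running services so well.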

1 Like

#5

Hi Tsung,

You are amazing. Thanks for your quick reply.

I am new to EPI, and didn’t know that responses on this forum were that quick.

I definitely need to recommend your book to more of my friends. You are doing an amazing job keeping up with this forum.

Since this forum welcomes any interview-algorithm-related questions for discussion, I think I will be bugging you quite a bit.

Thanks a million!
JOC

1 Like

#6

Hey @joc,

Many thanks for your kind words; it is on us and the community to create an environment for discussing interesting questions. Also, many thanks for your support, and keep asking any questions you have!

2 Likes