
Note: This is a companion problem to the System Design problem: Design TinyURL.

TinyURL is a URL shortening service where you enter a URL such as `https://leetcode.com/problems/design-tinyurl` and it returns a short URL such as `http://tinyurl.com/4e9iAk`.

Design the `encode` and `decode` methods for the TinyURL service. There is no restriction on how your encode/decode algorithm should work. You just need to ensure that a URL can be encoded to a tiny URL and the tiny URL can be decoded to the original URL.

## Solution

#### Approach #1 Using Simple Counter [Accepted]

**Algorithm**

In order to encode the URL, we make use of a counter, which is incremented for every new URL encountered. We put the URL along with its count into a HashMap, so that it can easily be retrieved later at the time of decoding.
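A minimal Java sketch of this counter-based approach might look like the following (the `http://tinyurl.com/` prefix is taken from the problem statement; the class and method names follow the usual LeetCode `Codec` convention):

```java
import java.util.HashMap;

public class Codec {
    // Maps each counter value to the original URL.
    private HashMap<Integer, String> map = new HashMap<>();
    private int count = 0;

    // Encodes a URL to a shortened URL using the current counter value.
    public String encode(String longUrl) {
        map.put(count, longUrl);
        return "http://tinyurl.com/" + count++;
    }

    // Decodes a shortened URL back to the original URL.
    public String decode(String shortUrl) {
        int key = Integer.parseInt(shortUrl.replace("http://tinyurl.com/", ""));
        return map.get(key);
    }
}
```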

**Performance Analysis**

- The range of URLs that can be decoded is limited by the range of `int`.
- If an excessively large number of URLs have to be encoded, then once the range of `int` is exceeded, integer overflow could lead to overwriting previous URLs' encodings, degrading performance.
- The length of the encoded URL isn't necessarily shorter than the incoming `longUrl`; it depends only on the relative order in which the URLs are encoded.
- One problem with this method is that it is very easy to predict the next code generated, since the pattern can be detected by generating a few encoded URLs.

#### Approach #2 Variable-length Encoding [Accepted]

**Algorithm**

In this case, we make use of variable-length encoding to encode the given URLs. For every `longUrl`, we choose a variable code length, which can be any length between 0 and 61. Further, instead of using only digits as the base system for encoding, we use a set of 62 characters: the lowercase letters, the uppercase letters, and the digits 0-9.
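A Java sketch of this approach: a running counter is converted to a variable-length base-62 string over the 62-character set described above (character set order and URL prefix are illustrative choices):

```java
import java.util.HashMap;

public class Codec {
    private static final String CHARS =
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    private HashMap<String, String> map = new HashMap<>();
    private int count = 1;

    // Converts the current counter value to a base-62 string.
    private String getKey() {
        int c = count;
        StringBuilder sb = new StringBuilder();
        while (c > 0) {
            sb.append(CHARS.charAt(c % 62));
            c /= 62;
        }
        return sb.toString();
    }

    public String encode(String longUrl) {
        String key = getKey();
        map.put(key, longUrl);
        count++;
        return "http://tinyurl.com/" + key;
    }

    public String decode(String shortUrl) {
        return map.get(shortUrl.replace("http://tinyurl.com/", ""));
    }
}
```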

**Performance Analysis**

- The number of URLs that can be encoded is, again, dependent on the range of `int`, since the same codes will be generated again after integer overflow.
- The length of the encoded URLs isn't necessarily short, but is to some extent dependent on the order in which the incoming `longUrl`s are encountered. For example, the codes generated will have lengths in the following order: 1 (62 codes), 2 (62² − 62 codes), and so on.
- The performance is quite good, since the same code will be repeated only after the integer overflow limit, which is quite large.
- In this case also, the next code generated could be predicted with some calculation.

#### Approach #3 Using hashcode [Accepted]

**Algorithm**

In this method, we make use of the built-in `hashCode()` function to determine a code for mapping every URL. Again, the mapping is stored in a HashMap for decoding.

The hash code for a String object is computed (using int arithmetic) as

`s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]`

where `s[i]` is the *i*th character of the string and *n* is the length of the string.
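A Java sketch of this approach, using `String.hashCode()` as the key (the URL prefix is an illustrative choice; note that this scheme can fail on hash collisions, as discussed below):

```java
import java.util.HashMap;

public class Codec {
    // Maps each URL's hash code to the original URL.
    private HashMap<Integer, String> map = new HashMap<>();

    public String encode(String longUrl) {
        map.put(longUrl.hashCode(), longUrl);
        return "http://tinyurl.com/" + longUrl.hashCode();
    }

    public String decode(String shortUrl) {
        // Integer.parseInt handles a leading '-' for negative hash codes.
        int key = Integer.parseInt(shortUrl.replace("http://tinyurl.com/", ""));
        return map.get(key);
    }
}
```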

**Performance Analysis**

- The number of URLs that can be encoded is limited by the range of `int`, since `hashCode()` uses integer calculations.
- The average length of the encoded URL isn't directly related to the length of the incoming `longUrl`.
- `hashCode()` doesn't generate unique codes for different strings. This property of two different inputs getting the same code is called a collision. Thus, as the number of encoded URLs increases, the probability of collisions increases, which eventually leads to failure.
- The following figure demonstrates the mapping of different objects to the same hashcode and the increasing probability of collisions with an increasing number of objects.
- Thus, collisions don't necessarily start occurring only after a certain number of strings have been encoded; they could occur well before the limit is even near the `int` range. This is similar to the birthday paradox: the probability of two people sharing a birthday is nearly 50% with only 23 people, and 99.9% with just 70 people.
- Predicting the encoded URL isn't easy in this scheme.

#### Approach #4 Using random number [Accepted]

**Algorithm**

In this case, we generate a random integer to be used as the code. In case the generated code happens to be already mapped to some previously encoded `longUrl`, we generate a new random integer. The data is again stored in a HashMap to help in the decoding process.
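A Java sketch of this approach, using `java.util.Random` and retrying on collision (the non-negative bound and URL prefix are illustrative choices):

```java
import java.util.HashMap;
import java.util.Random;

public class Codec {
    private HashMap<Integer, String> map = new HashMap<>();
    private Random rand = new Random();

    public String encode(String longUrl) {
        // Draw random keys until we find one that is not already in use.
        int key = rand.nextInt(Integer.MAX_VALUE);
        while (map.containsKey(key)) {
            key = rand.nextInt(Integer.MAX_VALUE);
        }
        map.put(key, longUrl);
        return "http://tinyurl.com/" + key;
    }

    public String decode(String shortUrl) {
        int key = Integer.parseInt(shortUrl.replace("http://tinyurl.com/", ""));
        return map.get(key);
    }
}
```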

**Performance Analysis**

- The number of URLs that can be encoded is limited by the range of `int`.
- The average length of the codes generated is independent of the `longUrl`'s length, since a random integer is used.
- The length of the encoded URL isn't necessarily shorter than the incoming `longUrl`, since the code is a random integer unrelated to the URL itself.
- Since a random number is used for coding, as in the previous case, the number of collisions could increase with an increasing number of input strings, leading to performance degradation.
- Predicting the next encoded URL isn't possible in this scheme, since we make use of random numbers.

#### Approach #5 Random fixed-length encoding [Accepted]

**Algorithm**

In this case, again, we make use of the set of digits and letters to generate the code for the given URLs, similar to Approach #2, but here the length of the code is fixed at 6. Further, we pick random characters from that character set to form the code. In case the generated code collides with some previously generated code, we generate a new random code.
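A Java sketch of this approach: each code is 6 characters drawn uniformly at random from the 62-character set, retrying on collision (character set order and URL prefix are illustrative choices):

```java
import java.util.HashMap;
import java.util.Random;

public class Codec {
    private static final String CHARS =
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    private HashMap<String, String> map = new HashMap<>();
    private Random rand = new Random();

    // Builds a random 6-character code over the 62-character set.
    private String getRandKey() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 6; i++) {
            sb.append(CHARS.charAt(rand.nextInt(62)));
        }
        return sb.toString();
    }

    public String encode(String longUrl) {
        String key = getRandKey();
        // Regenerate until the code is unused.
        while (map.containsKey(key)) {
            key = getRandKey();
        }
        map.put(key, longUrl);
        return "http://tinyurl.com/" + key;
    }

    public String decode(String shortUrl) {
        return map.get(shortUrl.replace("http://tinyurl.com/", ""));
    }
}
```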

**Performance Analysis**

- The number of URLs that can be encoded is quite large in this case, on the order of 62⁶ (about 5.6 × 10¹⁰).
- The length of the encoded URL is fixed at 6 characters, which is a significant reduction for very long URLs.
- The performance of this scheme is quite good, due to the very low probability of generating the same code twice.
- We can increase the number of possible encodings by increasing the length of the encoded strings; thus, there is a tradeoff between the length of the code and the number of encodings possible.
- Predicting the encoding isn't possible in this scheme since random characters are used.


Analysis written by: @vinod23
