Guys, thank you all for the advice. This was an interesting discussion, but I wasn't talking about *literally* two or three identical values in a row.

The problem for me is 4-5 or more identical values in a row, which happens quite often with short random ranges.

**XeduR @Spyric**, this example is not correct, because it does not describe the probability. In your example there are 243 possible sequences, three of which will *always* consist of five identical numbers in a row. You can also count how many five-element strings contain four identical numbers in a row, or three, or two; but that is all just combinatorics.
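To make the combinatorial side concrete, the 243 cases can simply be enumerated. This is a sketch in Python (the thread is about Lua, but the same enumeration works with any language; `max_run` is my own helper name):

```python
# Enumerate all 3^5 = 243 five-element sequences over three values and
# count how many contain a run of identical values of a given length.
from itertools import product

def max_run(seq):
    """Length of the longest run of identical consecutive values."""
    best = cur = 1
    for a, b in zip(seq, seq[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

seqs = list(product((1, 2, 3), repeat=5))
print(len(seqs))                                # 243 sequences in total
print(sum(1 for s in seqs if max_run(s) == 5))  # 3 constant sequences
print(sum(1 for s in seqs if max_run(s) >= 4))  # 15 with a run of 4 or more
```

So exactly 3 of the 243 sequences are constant, and 15 contain a run of four or more, but those are exhaustive counts over all cases, not what a random generator will produce in any particular batch.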

This does not mean at all that if we generate a five-element array 243 times in Lua, there will always be exactly 3 arrays whose values are all identical. I have checked: random produces anywhere from 0 to 8 such arrays per batch. That spread of 0 to 8 is our *probability*. We can, of course, repeat the procedure 5,000 times, and then yes, the average will be 3 arrays with identical values, but that is already the law of large numbers.
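The fluctuation described above is easy to reproduce. A minimal simulation in Python (illustrative only; `constant_arrays` is my own name, and in Lua the equivalent draw would be `math.random(1, 3)`):

```python
# Generate 243 five-element arrays of values 1..3 and count how many
# arrays happen to be entirely one value; repeat the experiment several
# times to show the count fluctuates around the expected value of 3.
import random

def constant_arrays(batches=243, length=5, hi=3):
    count = 0
    for _ in range(batches):
        arr = [random.randint(1, hi) for _ in range(length)]
        if len(set(arr)) == 1:  # all five values identical
            count += 1
    return count

random.seed(0)
counts = [constant_arrays() for _ in range(10)]
print(counts)  # varies batch to batch; only the long-run average is 3
```

Each batch gives a different count, and only averaging over many repetitions pulls the result toward the combinatorial expectation of 243 × 3/243 = 3.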

I’m working with random() on demand, and if I record, for example, an array of 1000 random values, I can find 8-12 runs of five identical values in it (of any value), and around 20-25 runs of four. If you unpack() it and look through it, you can find sections with good randomness (where the values change regularly or repeat no more than twice) and bad ones (where the same value appears 4-8 times in a row).
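This kind of inspection can be automated instead of eyeballing the unpacked array. A sketch in Python (the helper name `run_lengths` is mine; the logic ports directly to a Lua loop over a recorded table):

```python
# Record a stream of 1000 draws from a small range and measure the runs
# of identical consecutive values, i.e. the "bad" sections.
import random

def run_lengths(values):
    """Return the length of every maximal run of identical consecutive values."""
    runs, cur = [], 1
    for a, b in zip(values, values[1:]):
        if a == b:
            cur += 1
        else:
            runs.append(cur)
            cur = 1
    runs.append(cur)
    return runs

random.seed(42)
stream = [random.randint(1, 3) for _ in range(1000)]
runs = run_lengths(stream)
print(max(runs))                       # longest repeat in this stream
print(sum(1 for r in runs if r >= 4))  # how many runs of four or more
```

With a range of 1-3, a back-of-the-envelope estimate gives roughly 1000 × (1/3)³ ≈ 37 positions starting a run of four, which is the same order as the 20-25 runs observed above, so such runs really are routine on short ranges.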

How the probability is realized is another question. Most programming languages use a uniform probability distribution, meaning every value in the range is generated with equal probability. However, in Lua this only looks right on wide ranges (0-100) and large samples (more than 1000).

My task is exactly the opposite: a short range and a continuous stream of random() calls, so I have no large sample. I generate 1-2 objects per second, with another 6-8 objects from previous generations still on screen. And from time to time it is clearly visible that the output is either uniform or fixated on one value.

Just as an example, suppose I’m playing poker and receive cards from an endless deck, open them, and they are all jokers. From the point of view of probability theory and the LLN, everything is correct: since the deck is infinite, an infinite number of standard 54-card decks are shuffled into it, and the share of jokers in the entire deck is strictly 2/54, about 3.7%.

But the quality of such a shuffle is still very, very bad, since the player regularly runs into nothing but jokers, or aces, or jacks, or several aces or jacks of the same suit, and so on.

And all I asked for was some simple way to improve the "shuffling" behaviour of the random mechanism. But the discussion showed that I need to build some kind of partially randomized, partially deterministic algorithm.
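One well-known partially randomized, partially deterministic approach is the "shuffle bag" (the bag randomizer used by Tetris-style games): fill a bag with every value in the range, shuffle it, deal values one at a time, and refill when the bag runs empty. A minimal sketch in Python (the class name `ShuffleBag` is mine, not from any library; in Lua the same idea would use a table and `math.random` for the Fisher-Yates shuffle):

```python
import random

class ShuffleBag:
    """Deal the values 1..n in a random order, refilling and reshuffling
    when the bag is empty. Any single value can then repeat at most twice
    in a row (end of one bag, start of the next), so the long runs that a
    plain uniform generator produces become impossible."""

    def __init__(self, n):
        self.values = list(range(1, n + 1))
        self.bag = []

    def next(self):
        if not self.bag:
            self.bag = self.values[:]       # refill...
            random.shuffle(self.bag)        # ...and reshuffle (Fisher-Yates)
        return self.bag.pop()

bag = ShuffleBag(3)
stream = [bag.next() for _ in range(12)]
print(stream)  # every block of 3 draws contains each of 1, 2, 3 exactly once
```

The deterministic part (each bag contains every value exactly once) guarantees short-range fairness, while the shuffle keeps the order unpredictable; putting two or three copies of each value in the bag loosens the constraint if strict alternation feels too mechanical.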