The obvious but not-so-good way to write this is:
    import math

    def is_prime(n):
        return math.factorial(n - 1) % n == n - 1
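For what it's worth, Wilson's theorem really does make this work on tiny inputs; a quick check (the definition is repeated so the snippet runs on its own, and it assumes $n > 1$):

```python
import math

# Naive Wilson's-theorem test (assumes n > 1).
def is_prime(n):
    return math.factorial(n - 1) % n == n - 1

print(is_prime(7), is_prime(8))  # True False
```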
But this is not good enough for full credit, because the number $(n-1)!$ is massive: its size in bits grows exponentially in the number of bits of $n$. Computing the entire factorial first means storing that gigantic number, and there might not be enough space. The factorial of 1023, a 10-bit number, contains 8,762 bits, so how the hell are you going to store the factorial of a 1000-bit number? (Do the math!) You should have learned, when we studied modular arithmetic, that we can take the modulo after each multiplication. For example,
$$
(5 \cdot 4 \cdot 3 \cdot 2 \cdot 1) \bmod 6
$$
is equal to
$$
(((5 \cdot 4 \bmod 6) \cdot 3 \bmod 6) \cdot 2 \bmod 6) \cdot 1 \bmod 6
$$
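It's worth checking this numerically; both orders of reduction give the same residue:

```python
# Full product first, then a single reduction mod 6.
full = (5 * 4 * 3 * 2 * 1) % 6

# Reduce mod 6 after every multiplication instead.
step = 5
for k in (4, 3, 2, 1):
    step = (step * k) % 6

print(full, step)  # 0 0  (since 6 divides 120)
```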
So if you want full credit, you should avoid generating the massive factorial and take the modulo at each step:
    def is_prime(n):
        # Wilson's theorem test, reducing mod n after every multiplication.
        p = 1
        for i in range(2, n):
            p = (p * i) % n
        return p == n - 1
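A quick sanity check of this version (the definition is repeated so the snippet runs on its own):

```python
# The incremental-modulo version from above, repeated so this runs standalone.
def is_prime(n):
    p = 1
    for i in range(2, n):
        p = (p * i) % n
    return p == n - 1

print([k for k in range(2, 20) if is_prime(k)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```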
However, why bother? Both of these are completely useless in practice, because computing $(n-1)! \bmod n$ is slow, ridiculously slow.
It's way slower than even doing trial division, which we know is already terrible. In fact, to see just how horrific things are, look up the best known complexity for the modular factorial on the Wikipedia complexity page. The intuition is that when computing a modular factorial, we do about as many multiplications as the magnitude of the number, and we don't know how to divide-and-conquer. So this approach is utterly useless; no one uses it.
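For contrast, trial division, the already-terrible baseline mentioned above, needs only about $\sqrt{n}$ divisions, versus roughly $n$ multiplications for the modular factorial. A minimal sketch (the name is_prime_trial is mine, not from the text):

```python
def is_prime_trial(n):
    # Trial division: try every candidate divisor up to sqrt(n).
    # Roughly sqrt(n) steps, so "terrible" still beats "horrific".
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print([k for k in range(2, 20) if is_prime_trial(k)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```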