This really depends on what level of math you want your intuition at, but for me it really clicked when I understood it in terms of linear algebra.
A function is like a vector, but instead of having two or three components it has a continuum of them: one value per point x. Adding functions pointwise works just like adding vectors component-wise.
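A minimal numerical sketch of this (assuming numpy, and using a sampled grid as a finite stand-in for the continuum):

```python
import numpy as np

# Sample two functions on a shared grid: each becomes an ordinary vector,
# with one "coordinate" per grid point.
x = np.linspace(0.0, 1.0, 1000)
f = np.sin(2 * np.pi * x)   # f as a length-1000 vector
g = x ** 2                  # g as a length-1000 vector

# Adding the functions pointwise is exactly vector addition of the samples.
h = f + g
print(np.allclose(h, np.sin(2 * np.pi * x) + x ** 2))
```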
Just like regular vectors, you can choose to represent functions in a different basis. So you choose a family of other functions (call it a basis) that's big enough to represent any function you care about. For several reasons [1, 2], a very good choice is the set of complex exponentials g_w(x) = exp(2πiwx), one for every real w. It's an infinite family, but that's what you need to cope with the diversity of functions that exist.
So you try to find the linear combination of these exponentials that sums to your original function. You need a coefficient for each w, so call it c(w) for simplicity. Once the basis is fixed, the coefficients carry all the information needed to describe your function. They're an important object in their own right, and c(w) is what we call the Fourier transform.
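You can see this "linear combination that sums back to the function" concretely in the discrete setting (a sketch assuming numpy; np.fft.fft is the discrete analogue of the transform):

```python
import numpy as np

# In the discrete analogue, np.fft.fft finds coefficients c_k such that the
# samples of f are exactly a linear combination of sampled exponentials.
N = 64
x = np.arange(N) / N
f = np.cos(2 * np.pi * 3 * x) + 0.5 * np.sin(2 * np.pi * 7 * x)

c = np.fft.fft(f)  # one coefficient per (discrete) frequency k

# Rebuild f by summing c_k * exp(2πikx) over all frequencies (the 1/N is
# just the normalization convention np.fft uses).
rebuilt = sum(c[k] * np.exp(2j * np.pi * k * x) for k in range(N)) / N
print(np.allclose(rebuilt, f))  # the combination really sums back to f
```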
How do you find the coefficients? Just project your original function onto a particular g_w, that is, take the inner product with it. For ordinary vectors the inner product is the sum of products of components (with a complex conjugate on one side); since a function's "components" are indexed by a continuous variable, the sum becomes an integral. Writing it out gives c(w) = ∫ f(x) exp(−2πiwx) dx, which is exactly the usual formula for the Fourier transform.
I know there are technical conditions I am glossing over, but this is the intuition of it for me.

[1] There is an intuition for these exponentials. Complex exponentials are periodic functions, so you are decomposing a function into its constituent frequencies. You could also split each exponential into a sine and a cosine, and you'd obtain the other common forms of the Fourier transform.

[2] Exponentials are like "eigenvectors" of the derivative operator (taking the derivative just multiplies them by a constant), so they're really useful in differential equations as well.
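The eigenvector claim in [2] is easy to check numerically (a sketch assuming numpy): the finite-difference derivative of g_w is just 2πiw times g_w itself.

```python
import numpy as np

# d/dx of g_w(x) = exp(2πiwx) should be (2πiw) * g_w(x): same function,
# scaled by a constant — i.e. an "eigenvector" of differentiation.
w = 3.0
x = np.linspace(0.0, 1.0, 2000)
g = np.exp(2j * np.pi * w * x)

numeric = np.gradient(g, x)       # finite-difference derivative
analytic = 2j * np.pi * w * g     # scalar multiple of g itself

# Compare away from the grid endpoints, where np.gradient is one-sided.
print(np.max(np.abs(numeric[1:-1] - analytic[1:-1])))
```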