Factoring integers into prime factors has a reputation as an extraordinarily difficult problem. If you read some popular accounts, you get the impression that humanity has worked hard on this problem for centuries, if not millennia, and that the chances of an efficient algorithm are negligible. If true, that would be great, because some important cryptosystems rely on the difficulty of factoring. Unfortunately, it's mostly hype based on wishful thinking. Enough people have tried to find efficient factoring algorithms that we can be confident the problem isn't easy, but there's no reason to think it's impossible.
The first thing to realize is that until the advent of public key cryptography in the 1970's, few people cared about factoring. Some people were interested in it for its intrinsic beauty, but nobody thought it was good for anything, and it certainly wasn't the notorious unsolved problem it is today. If anything, it was mildly obscure.
Even today, how many researchers have ever seriously tried to find an efficient factoring algorithm? I don't know, since most people who fail don't announce their attempts (and who knows how much classified work there has been), but it's not hard to estimate. Let's say a serious attempt consists of several months of work by an expert, someone who knows enough number theory to read the literature on this problem. Then the number of people who have seriously tried must be on the order of 100. It would be ridiculous to say "100 smart people tried and failed to solve this problem, so it's clearly impossible," but that's what most claims regarding the difficulty of factoring amount to.
It's important to keep in mind that there is no conceptual reason why factoring should be difficult. By contrast, one can make a compelling case for why P is different from NP, but factoring is almost certainly not NP-hard (its decision version lies in NP ∩ co-NP, so NP-hardness would force NP = co-NP). People have made immense progress in developing factoring algorithms, and there's no reason to think we've hit a fundamental barrier.
To discuss this in more detail, we'll need a little notation. Define L_{ε,c}(n) = exp(c (log n)^ε (log log n)^(1-ε)). These functions interpolate between L_{1,c}(n) = n^c and L_{0,c}(n) = (log n)^c.
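To make the definition concrete, here is a minimal Python sketch (the helper name L and the sample values of n and c are mine, chosen purely for illustration) that evaluates L_{ε,c}(n) and checks numerically that ε = 1 collapses to n^c while ε = 0 collapses to (log n)^c:

```python
import math

def L(eps, c, n):
    """L_{eps,c}(n) = exp(c * (log n)^eps * (log log n)^(1-eps)), natural logs."""
    ln_n = math.log(n)
    return math.exp(c * ln_n ** eps * math.log(ln_n) ** (1 - eps))

# Sanity check at arbitrary sample values: the two endpoints of the family.
n, c = 10 ** 30, 2.0
print(L(1, c, n), n ** c)             # eps = 1 gives n^c
print(L(0, c, n), math.log(n) ** c)   # eps = 0 gives (log n)^c
```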
Until the 1970's, the best algorithms known for factoring all had running times of the form L_{1,c}(n) for some constant c. In other words, they took time exponential in the number of digits needed to write n (but note that even here, lowering c can represent important progress). In the 1970's, the state of the art reached L_{1/2,c}(n), which is subexponential. This can already be seen as closing half the gap between exponential and polynomial time. Some people suspected that this was the true complexity of factoring, but in the late 1980's the number field sieve reduced the running time to L_{1/3,c}(n). Since then, the only progress has been in lowering c (and making the techniques more practical), but why should everything stop here? We're already two thirds of the way to a polynomial-time factoring algorithm, and I'd guess that we can go almost all of the way. Specifically, I suspect that for each ε > 0, there is an algorithm that runs in time L_{ε,c}(n) (where c may grow as ε tends to zero). I have no opinion as to whether ε = 0 can be achieved.
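To get a feel for what this progression means in practice, here is a rough back-of-the-envelope comparison for a 1024-bit n, using commonly quoted heuristic constants (c = 1/2 for trial division, which is about n^(1/2); c = 1 for the quadratic sieve; c = (64/9)^(1/3) ≈ 1.92 for the number field sieve). The o(1) terms and all constant factors are ignored, so treat the outputs as order-of-magnitude illustrations rather than real operation counts:

```python
import math

def log10_L(eps, c, n_bits):
    """log10 of L_{eps,c}(n) for an n_bits-bit integer n, where
    L_{eps,c}(n) = exp(c * (log n)^eps * (log log n)^(1-eps))."""
    ln_n = n_bits * math.log(2)
    # Work with logarithms throughout: L_{1,1/2}(n) itself would overflow a float.
    return c * ln_n ** eps * math.log(ln_n) ** (1 - eps) / math.log(10)

bits = 1024
for name, eps, c in [
    ("trial division,     L_{1,1/2}(n) = n^(1/2)", 1.0,   0.5),
    ("quadratic sieve,    L_{1/2,1}(n)",           0.5,   1.0),
    ("number field sieve, L_{1/3,c}(n)",           1 / 3, (64 / 9) ** (1 / 3)),
]:
    print(f"{name}: roughly 10^{log10_L(eps, c, bits):.0f} operations")
```

The output (about 10^154, 10^30, and 10^26, respectively) makes the point numerically: the jump from exponential to subexponential was enormous, and even the move from L_{1/2,c} to L_{1/3,c} saves several more orders of magnitude at this size.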
Of course, I have no real evidence for my views; if I had any good ideas for factoring, I'd be working on them instead of writing this. On the other hand, the people who talk about the great difficulty of factoring have equally little evidence. The net result is that it's reasonable to hope that factoring is difficult, but you should regard that as a provisional hypothesis rather than a certainty.