I will note, however, that there is one possible solution to this, but it's not without problems.
It involves putting a Turing machine into a region of space in which time (and space, the two being unified) is essentially curved around on itself, so the Turing machine has infinite time in which to compute any given finite problem. So if you limit the simulation to a finite problem, in principle it can be computed.
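To make the idea concrete, here's a toy sketch (all names are mine, purely illustrative, and nothing here models actual physics): outside the curve a computation only gets a finite step budget, while the idealized machine "inside" the curve has unbounded internal time, so any computation that halts at all will finish.

```python
# Toy contrast between finite external time and unbounded "internal" time.
# (Hypothetical names; this illustrates the concept, not real CTC physics.)

def run_outside(step_fn, state, budget):
    """Run with a finite step budget; may give up before the answer."""
    for _ in range(budget):
        state, done = step_fn(state)
        if done:
            return state          # finished within the budget
    return None                   # ran out of external time

def run_inside_ctc(step_fn, state):
    """Idealized CTC machine: unbounded time for a finite (halting) problem."""
    while True:
        state, done = step_fn(state)
        if done:
            return state

# Example: a deliberately slow computation, counting up one step at a time.
def counter_step(n, target=10_000):
    return n + 1, n + 1 >= target

print(run_outside(counter_step, 0, budget=100))   # too few steps: gives up
print(run_inside_ctc(counter_step, 0))            # unbounded time: completes
```

The catch, of course, is that `run_inside_ctc` never terminates on a non-halting problem, which is exactly why the trick only buys you anything for problems you already limited to being finite.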
The problem, as I'm sure some of you are realizing, is how you make that concept useful.
As
1) You have to believe in closed timelike curves, which... well... I dunno.
2) Once you put something into a closed timelike curve, how the f**k do you extract useful information from it? Passing into it will involve crossing an event horizon. Yes, event horizons of black hole fame, of point-of-no-return fame.
3) It requires the curve itself to be of sufficient length that the finite problem can still be computed within it, or causality has to actually allow the possibility of the computation exceeding the time allotted to it. For example, what happens when the computer comes back to the start is not at all clear. Does it get to continue on with the program at 67%, or is it forced to restart at 0% like a record skipping? I dunno, I've nae been in one.
4) Hopefully tidal forces and other fun features of closed timelike curves won't come into play, as they make things ridiculously complicated. Even a CPU of today would be quickly torn to pieces by them. Obviously, though, your computational units cannot themselves be singularities.
This is the best solution I am aware of to my computational space problem (I obsessed over this for a while, and it comes from one of my, in my opinion brilliant, teachers).
The basic gist is: exploiting quirks of space-time in order to get the computational power you'd need. But space-time is a bitch when you try to play with it. As noted, the region of space we are talking about will at least resemble a black hole.
I might edit this post, as I'm not sure I've gotten this right.