@Prokofievian, regarding the horizon problem, as I understand it (and please correct me if I'm wrong), the main issue seems to be that you don't know the minimum distance you need to push back the horizon in any given instance without pushing it all the way out to find out. I was just wondering whether this could be mitigated by doing exactly that repeatedly, to build a statistical measure of how far you typically need to go before diminishing returns set in. Of course, it could just as easily be an insurmountable problem in AI, and that would be really interesting in itself.
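Something like this toy sketch is roughly what I had in mind (pure illustration, not anyone's actual method: `evaluate(position, depth)` stands in for whatever full search you'd really run, and the "stops moving by more than epsilon" test is just a placeholder for a proper diminishing-returns criterion):

```python
def depth_where_stable(position, evaluate, max_depth, epsilon=0.05):
    """Shallowest depth beyond which the evaluation stops moving by
    more than `epsilon`. `evaluate(position, depth)` is whatever full
    search you would actually run; here it's just a callable argument."""
    evals = [evaluate(position, d) for d in range(1, max_depth + 1)]
    stable = 1
    # Walk backwards and find the deepest point where the value still jumped;
    # that's how far the horizon actually had to be pushed in this instance.
    for d in range(max_depth - 1, 0, -1):
        if abs(evals[d] - evals[d - 1]) > epsilon:
            stable = d + 1
            break
    return stable

def suggest_horizon(sample_positions, evaluate, max_depth, quantile=0.95):
    """Push every sampled instance all the way out, then pick a horizon
    that would have been enough for `quantile` of them."""
    needed = sorted(depth_where_stable(p, evaluate, max_depth)
                    for p in sample_positions)
    return needed[min(len(needed) - 1, int(quantile * len(needed)))]
```

The obvious catch is that you pay the full push-back cost for every sampled instance up front, and the estimate only helps if future instances look statistically like the sample.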
Edit: I gather this is what you were referring to in the chess thread; I realize it was only the label I didn't recall seeing before.