> When playing against an irrational enemy with an optimal strategy, you should expect to win more than if they were being 100% rational. Right?

Actually, no. For example, the optimal strategy (at least in the game-theoretic sense) for rock-paper-scissors is to choose uniformly at random every time. Re-doing draws, you expect to win 1/2 of the time. And now note that even if you are playing against a complete dimwit who picks his nose and picks rock every time, you still only expect to win 1/2 of the time. Your optimal strategy *guarantees* you the most you can possibly be guaranteed, and often it limits you to exactly that as well.

In contrast, you might argue that using psychology you could expect to do better than winning 1/2 of the time. There are two fallacies in this thinking: a) you simply *can't* do better than 1/2, even using every psychological trick in the book, if your opponent has decided to play randomly him/herself, and b) if someone knows more of the psychology than you do, you risk winning less than 1/2 of the time.

I don't know enough about AI to answer your open question, but I'm pretty sure the problem is not NP-complete: we're just recursively generating probabilities (see my post on optimal doc strategy), which is certainly polynomial time.
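P.S. The guarantee above is easy to check numerically. Here's a quick Monte Carlo sketch (names and round counts are my own, not from any particular library): a uniformly random player, with draws replayed, wins about half the decisive rounds whether the opponent is the always-rock dimwit or another random player.

```python
import random

def simulate(opponent_move, rounds=100_000, seed=0):
    """Play uniform-random RPS against a fixed opponent policy.

    Draws are re-done (skipped), matching the convention above.
    Returns the fraction of non-draw rounds won by the random player.
    """
    rng = random.Random(seed)
    beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
    moves = list(beats)
    wins = decisive = 0
    for _ in range(rounds):
        ours = rng.choice(moves)
        theirs = opponent_move()
        if ours == theirs:
            continue  # draw: replay, so it doesn't count
        decisive += 1
        if beats[ours] == theirs:
            wins += 1
    return wins / decisive

# Against the nose-picking "always rock" opponent:
print(simulate(lambda: "rock"))  # ~0.5

# Against another uniform-random opponent:
other = random.Random(1)
print(simulate(lambda: other.choice(["rock", "paper", "scissors"])))  # ~0.5
```

The point of the demo: randomizing makes your conditional win rate independent of the opponent's policy, which is exactly why no psychological trick can push it above (or below) 1/2 against a randomizing opponent.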
Cam