I'm a bit confused about how to optimize my SQL queries. I have a moderately complex query with several joins (two tables on SSD, one table in RAM, 4 JOINs) that needs to run a hundred to a thousand times per second.
How can I minimize the per-execution overhead? Is there any way to precompile a query so that MySQL does not need to analyze, optimize, and compile it every single time?
I know I can use a prepared statement to precompile a query that is then executed multiple times in the same session. But what do you do if you have multiple sessions and only one query per session? Are prepared statements cached across different sessions? I don't think so.
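To make this concrete, here is roughly how I use prepared statements today with mysqli (table, column, and credential names are just placeholders). The statement is parsed and optimized once per *connection*, so re-executing it within the same session skips the parse step, but the handle dies with the connection:

```php
<?php
// Placeholder credentials and schema -- not my real setup.
$mysqli = new mysqli('localhost', 'user', 'password', 'shop');

// Prepared once per connection; MySQL parses/optimizes it here.
$stmt = $mysqli->prepare('SELECT id, total FROM orders WHERE customer_id = ?');

// bind_param binds by reference, so rebinding per iteration is unnecessary.
$stmt->bind_param('i', $customerId);

foreach ([1, 2, 3] as $customerId) {
    $stmt->execute();           // no re-parse on repeated executions
    $result = $stmt->get_result();
    // ... process $result->fetch_assoc() ...
}

$stmt->close();
```

With one query per session, the prepare step happens exactly once anyway, so this buys nothing across sessions.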
Then I thought stored procedures were the best way, because they are often said to be precompiled. But I've since read that this assumption is wrong: they are in fact not precompiled.
Is there any way to share client sessions in MySQL, e.g. so that prepared statements created in the first session are inherited by subsequent sessions?
My last idea is to write a multithreaded socket server that acts as a MySQL client proxy. But that seems like overkill to me. ;-)
I use PHP as an Apache2 module. Is there any way to "store" the MySQL session in shared memory, so that subsequent HTTP requests can reuse the existing MySQL session instead of opening a new one? That way I could use prepared statements across different HTTP requests.
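The closest thing I've found so far is mysqli's persistent connections, enabled with the `p:` host prefix (a sketch, with placeholder credentials):

```php
<?php
// "p:" asks mysqli for a persistent connection: the Apache/PHP
// worker keeps the TCP connection open and hands it to later
// requests instead of reconnecting each time.
$mysqli = new mysqli('p:localhost', 'user', 'password', 'shop');
```

But as far as I understand, this only saves the connect/handshake overhead: mysqli resets the session state when a pooled connection is reused, which closes any prepared statements, so they still would not survive across HTTP requests. Is that right, or is there a way around it?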