If Servers 1 to 3 are on the same network, you could install memcached on each of the application servers without worry, because memcached is designed for exactly this kind of distributed setup. The daemons don't talk to each other at all; the client library hashes each key to pick a server, which means you can run as MANY instances as you want and your application 'sees' them as one giant memory cache.
To paraphrase from the memcached project wiki:
// in your configuration file:
$MEMCACHE_SERVERS = array(
    "10.1.1.1", // web1
    "10.1.1.2", // web2
    "10.1.1.3", // web3
);

// at the 'bootstrapping' phase of your app somewhere:
$memcache = new Memcache();
foreach ($MEMCACHE_SERVERS as $server) {
    $memcache->addServer($server);
}
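To see why the pool behaves like one big cache, here is a rough sketch of the client-side key-to-server mapping. It assumes simple crc32-plus-modulo hashing purely for illustration; the real Memcache client may use a different distribution strategy (e.g. consistent hashing), and `pick_server` is a hypothetical helper, not part of the extension.

```php
<?php
// Each key is hashed to exactly one server, so no daemon needs to know
// about the others -- the clients all agree on where a key lives.
$MEMCACHE_SERVERS = array("10.1.1.1", "10.1.1.2", "10.1.1.3");

function pick_server(array $servers, $key) {
    // sprintf('%u', ...) keeps the crc32 value unsigned on 32-bit builds
    $hash = (int) sprintf('%u', crc32($key));
    return $servers[$hash % count($servers)];
}
```

Every app server running this code maps a given key (say `user:42`) to the same node, which is what makes three separate daemons behave like one giant cache.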
Is your question related to scaling? If so:
I've seen some people suggest running your cache server on the DB server itself. IMHO, this is not very effective, as you would want to give your DB server as much physical RAM as you can possibly afford (depending on your web app's traffic and load).
I would allocate a portion of memory on each of the application servers (Server 2 and Server 3) for caching purposes. That way, if you want to scale out, you just provision one more application server, check out your source code onto it, and add it to your network. The size of your memory cache then grows in a (more or less) linear manner as you add application servers to your pool.
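The only client-side change when scaling out is appending the new box to the server list; the bootstrap loop above then calls `addServer()` for it like any other node. A minimal sketch, assuming a new app server at the hypothetical address `10.1.1.4`:

```php
<?php
// Existing pool from the bootstrap snippet.
$MEMCACHE_SERVERS = array(
    "10.1.1.1", // web1
    "10.1.1.2", // web2
    "10.1.1.3", // web3
);

// Newly provisioned app server (hypothetical IP). Appending it here
// grows the cache pool by that box's allocated memory.
$MEMCACHE_SERVERS[] = "10.1.1.4"; // web4
```

Note that growing the pool changes where some keys hash to, so freshly added capacity starts cold and some existing entries will miss until they are re-cached.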
All of the above obviously assumes the servers are on the same network.