On Unix, and therefore Linux, text is the universal interface. You can pass text via stdin/stdout and command-line arguments to scripts and binaries, and you can also convey information through environment variables. All the receiving program has to do is parse the text. But in-memory data structures like hash maps, linked lists, tree sets, etc. cannot be passed directly between distinct processes.
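As a minimal sketch of that text-passing style, here is a parent process piping a string through a child over stdin/stdout in Python (the standard `tr` utility is assumed to be available, as on any POSIX system):

```python
import subprocess

# Send text to a child process over its stdin and read its stdout back.
# Here we pipe a string through `tr` to uppercase it: pure text in, text out.
result = subprocess.run(
    ["tr", "a-z", "A-Z"],
    input="hello from the parent process\n",
    capture_output=True,
    text=True,  # treat stdin/stdout as text rather than raw bytes
)
print(result.stdout, end="")
```

The child knows nothing about Python; it just parses the bytes it is given, which is exactly why this style composes so well.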
These methods are the classics for command-line scripts and binaries. Beyond that, there is file locking for coordinating access to text-file databases, which is messy to say the least.
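A sketch of that file-locking pattern, assuming a POSIX system (the `fcntl` module is Unix-only, and the file path here is just a placeholder for illustration):

```python
import fcntl
import os
import tempfile

# Advisory locking on a shared text-file "database": every cooperating
# process must take the lock before touching the file, or chaos ensues.
path = os.path.join(tempfile.gettempdir(), "shared_demo.db")  # hypothetical shared file

with open(path, "a+") as f:
    fcntl.flock(f, fcntl.LOCK_EX)   # block until we hold an exclusive lock
    f.seek(0)
    existing = f.read()             # read the current records safely
    f.write("new record\n")         # append while no other process can interfere
    fcntl.flock(f, fcntl.LOCK_UN)   # release so other processes can proceed
```

Note the lock is only advisory: a process that skips `flock` can still corrupt the file, which is part of why this approach gets messy.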
And then there is the realm of passing binary data between processes, which MUST be encoded in some kind of common protocol. Here you start playing with named pipes, Unix domain sockets, and network sockets.
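"Common protocol" can be as simple as agreeing on a framing. A sketch of one such convention (my own choice for illustration, not a standard): a 4-byte big-endian length prefix followed by a UTF-8 payload.

```python
import struct

def encode(message: str) -> bytes:
    """Frame a message: 4-byte big-endian length, then the UTF-8 payload."""
    payload = message.encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def decode(data: bytes) -> str:
    """Read the length prefix, then slice out exactly that many payload bytes."""
    (length,) = struct.unpack(">I", data[:4])
    return data[4:4 + length].decode("utf-8")

wire = encode("hello")
print(decode(wire))  # round-trips back to "hello"
```

Both ends must implement exactly this framing; that shared agreement is what makes binary data meaningful across a pipe or socket.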
I won't get into the details of each, but network sockets (even over the local loopback device) are heavily favored today for universal client/server-style communication, because they allow multiplexing of connections and are easily portable.
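To make the client/server shape concrete, here is a minimal loopback echo exchange, sketched in a single Python process with a thread standing in for the server side:

```python
import socket
import threading

# A tiny client/server exchange over the loopback device.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo the request straight back

t = threading.Thread(target=serve_once)
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"ping")
reply = client.recv(1024)
client.close()
t.join()
server.close()
```

In real use the server would be a separate long-running process, which is precisely what enables the multiplexing mentioned above: one server, many concurrent clients.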
I won't give you exhaustive code; you have plenty to google now, and covering every single kind of communication would be far too long.
Here are some tutorials on sockets:
www.linuxhowtos.org/C_C++/socket.htm
http://gnosis.cx/publish/programming/sockets.html
As one last caveat: sockets are not as simple as stdin/stdout data passing, so your needs really must be complex enough to justify transmitting data over sockets.
stdin/stdout is practical for most wrapping situations, especially for glue code.
Edit: there is also a "third option": using a database like MySQL or PostgreSQL, where the sockets are wrapped by the database API. The APIs and bindings for many popular languages (PHP, C, Java, Perl, Python, Ruby, etc.) allow inter-process data exchange in an ordered and safe way. But then you have to learn a database API, SQL, maybe some normalization and best practices... Your experience and available features will be greater, but not necessarily quicker or easier.