The phrase "client/server" is a big buzzword in the industry nowadays. In a sense, every network application has to be a client/server application, since the user executes a process on their host whose job it is to communicate over the network with a process running on a remote host. (But note that the converse is not necessarily true: a client process can communicate with a server process on the same host, so there does not have to be a network in between.) When we speak of client/server applications, though, we think primarily of client software that interacts with the user in an intuitive and user-friendly way, while talking to a server process on the remote host by means of a protocol that is well-documented but of which, in general, the user has not - and does not need to have - the slightest idea.
The traditional applications on the Internet include terminal emulation (TELNET) and file transfer (FTP). Electronic mail is also a well-established network application, at least for sending plain text through a 7-bit path. More recent developments in email include 8-bit support and the automatic inclusion of attachments in various formats, but unfortunately this functionality is not yet widely enough deployed to be used with complete confidence. It only causes frustration when a user on, e.g., an IBM mainframe finds themselves the lucky recipient of a BinHexed Macintosh file containing a document in WriteNow format (to take just one example!).
The distribution of information over the world-wide Internet is an issue of great importance, and has been addressed in many different ways, using the tools already at our disposal: TELNET (for example to a library catalogue, or to a conferencing system like CONTACT/VMSHARE/SEASCOM), FTP, email, Usenet News, etc. One might prefer one or another of these mechanisms for a particular pattern of usage, but none of them is ideal for all situations.
Being impatient with the existing tools, the University of Minnesota addressed the problem of disseminating information by devising a low-overhead protocol, Gopher. Some excellent client software has been written to use this protocol, for many different platforms. Various tools (the Veronica search service, a Gopher-to-anonymous-FTP gateway, a Gopher-to-Archie gateway, an X.500 directory gateway, etc.) facilitate tracking down and acquiring information. Gopher made provision for calling up many of the existing tools that users had previously had to invoke directly, such as PH (phone/address book lookup), image display software, TELNET (to library catalogues, for example), and so forth. Initially it did not provide for any feedback from the user, so it was primarily targeted at the dissemination of information from a relatively small number of information providers to a relatively large population of consumers: but the things that it did, it did very neatly, and with low overhead. And, since the information could be obtained via automatic gateways to existing sources (e.g. Usenet news) as well as directly from real humans, Gopher servers slotted nicely into the existing framework.
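Gopher's low overhead can be seen in a minimal client sketch. In the protocol, a client opens a TCP connection (port 70 by convention), sends a single selector line terminated by CRLF, and reads the reply until the server closes the connection; a menu reply carries one item per line, with a one-character type code followed by TAB-separated fields. The host name and function names below are illustrative placeholders, not part of any real service:

```python
# A minimal sketch of a Gopher transaction. The entire request is one
# selector line; the reply is read until the server closes the connection.
import socket

def gopher_fetch(host, selector="", port=70, timeout=10):
    """Send one selector line and return the raw reply bytes."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        # The whole request: the selector string followed by CRLF.
        sock.sendall(selector.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:          # server closes the connection when done
                break
            chunks.append(data)
    return b"".join(chunks)

def parse_menu_line(line):
    """Split one menu line: type code, display string, then
    TAB-separated selector, host and port."""
    item_type, rest = line[0], line[1:]
    display, selector, host, port = rest.split("\t")[:4]
    return item_type, display, selector, host, int(port)
```

A client would call something like `gopher_fetch("gopher.example.org")` to retrieve the top-level menu, then parse each line to decide what to fetch next; the simplicity of this exchange is what kept Gopher's overhead so low.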
A team at CERN took a much more ambitious approach in conceiving the "World Wide Web". This aims to encompass all available on-line information. Although it has its own network protocol (HTTP) and its own preferred document format (HTML, the hypertext mark-up language), it is nevertheless open-ended to other information access methods and formats.