An interesting note found on a hard disk
In "The Genesis of a Database Computer - A conversation with Jack Shemer and Phil Neches of Teradata Corporation - IEEE Computer Nov. 1984":
This article gives the context of the DBC/1012 system in a configuration with two interface processors, four access module processors, and four Winchester disk units. When fully extended to 1024 processors operating in parallel, the system will be capable of storing a terabyte (a trillion bytes) of data.
We read:
Shemer: Another factor [in building the database computer] was the relational data model - the fourth generation of database management software. People wanted it but could not afford it, nor was it practical. The reason was that it took a tremendous number of MIPS to deliver the functionality of a relational system. However, running the software on a mainframe practically relegated the big computer to the level of a personal computer. Consequently, the user environment has retained what I call the machine-friendly forerunners, namely the hierarchical and network database management systems that emerged in the 60's. These approaches were designed to process efficiently in single data stream machine environments, while the relational model admitted to parallel processing.
In the relational model, data is not explicitly ordered, since data items don't have pointers embedded in the data. Rather than traversing a family tree or hierarchy, you're dealing with rows and columns that represent the way most people like to view information. The relational system is synonymous with people-friendly; it's what people want, what the end user and the application programmer desire.
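[Editor's note: a minimal sketch, not from the article, contrasting the two access styles Shemer describes. All names and data are invented. The hierarchical style reaches related records through embedded parent-to-child pointers, so the access path is baked into the layout; the relational style keeps independent rows whose relationships are expressed only by matching column values, which is what lets rows be scanned in parallel.]

```python
# Hierarchical / network style: related data is reached through pointers
# embedded in the records, so you must traverse the tree as it was laid out.
order_a = {"id": 1, "amount": 250}
order_b = {"id": 2, "amount": 75}
customer = {"name": "Acme", "orders": [order_a, order_b]}  # parent -> child pointers

def total_by_traversal(cust):
    # Navigate the hierarchy in its stored order.
    return sum(o["amount"] for o in cust["orders"])

# Relational style: plain rows and columns with no embedded pointers.
# Relationships come from matching column values, so every row can be
# examined independently -- the property that admits parallel processing.
customers = [{"cust_id": 10, "name": "Acme"}]
orders = [
    {"order_id": 1, "cust_id": 10, "amount": 250},
    {"order_id": 2, "cust_id": 10, "amount": 75},
]

def total_by_join(name):
    ids = {c["cust_id"] for c in customers if c["name"] == name}
    return sum(o["amount"] for o in orders if o["cust_id"] in ids)

print(total_by_traversal(customer))  # 325
print(total_by_join("Acme"))         # 325
```

Both functions compute the same answer; the difference is that the relational version depends on no stored ordering at all.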
The big problem was to make the relational system cost- and performance-effective. The only way to do that was to provide a great many processing cycles at low cost.
...
Computer: It was an IBM scientist, E. F. Codd, who originally conceived the relational database model. What is IBM doing now?
Shemer: IBM has taken what I regard as a two-phased approach. On the one hand, it has IMS and DL/1 for the production environment. They use the hierarchical approach of the 60's, now almost 20 years old. IBM appears to be committed to that investment; it is telling users to keep IMS for high-volume applications. On the other hand, it has a new relational product called DB2 that is intended for the what-if query in the end-user environment. It is for the ultimate information user who may be a novice programmer or somebody not well versed in programming at all.
As I see it, IBM has effectively segmented the database world into two disjointed environments. It has essentially stated that the relational system it will deliver under DB2 is not efficient in accommodating production processing demands. In other words, keep IMS for account rendition, master file maintenance, etc., and use DB2 for what-if queries. It is a real dilemma for users. Moreover, this approach complicates matters. You already have an IMS database, let's say. To build a relational database, you have to have a utility program to extract information from the IMS master file. You now have two databases. What's more, they run on different machine environments, producing multiple versions of the truth. One file or the other is always out of date. Having two databases is a step backward, because one of the prime reasons for creating database management systems in the 60's was to allow multiple applications to have access to the same data. That data should have the same value at the same instant of time for both the production application environment and the what-if query environment.
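[Editor's note: a toy sketch, with invented data, of the "two databases" problem described above: a production master file plus a relational copy produced by an extract utility, drifting out of date between extract runs.]

```python
production = {"acct-1": {"balance": 100}}  # stands in for the IMS master file
query_copy = {}                            # stands in for the relational extract

def run_extract():
    """The extract utility snapshots the master file into the query database."""
    global query_copy
    query_copy = {k: dict(v) for k, v in production.items()}

run_extract()
production["acct-1"]["balance"] = 250      # production processing continues...

# ...so the two "truths" now disagree until the next extract runs.
print(production["acct-1"]["balance"])  # 250
print(query_copy["acct-1"]["balance"])  # 100
```

The same data item holds two different values at the same instant, which is exactly the regression from the single shared database that Shemer objects to.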