September 26, 2003
Last Revised: May 23, 2012 4:50 PM
We live in an electronic information era, and Lucid Minds expects to launch itself into a service helping to design and shape that era. With this Policy I am beginning to present simple explanations for some of the many different words and terms we encounter as we move forward in this field of Electronic Information Systems. I will also delve deeply into the philosophy of "programming," since there is such an obvious relationship between the process of thinking, computer programming, and computer memory.
This Company Policy was first written in 2003, but on April 11, 2007 I had just made an important decision to move ahead with the modernization of our electronic information system by using dBase Plus, only to change my mind a week later and use Visual Basic.
Here is the guiding philosophy as used in the new Visual Basic:
Many years ago Vibrant Life was far ahead of many others -- we implemented the latest computer technology with dBase III, and a sophisticated program that has worked well for 20 years. But it was I, Karl Loren, who spent MANY hundreds of hours learning that language. In the modern era it has proven to be much more practical to pick a PERSON first -- one to do the programming. That started with Mark and his previous familiarity with Visual Basic, with the related help offered by Loren Grayson to assist Mark as might be needed.
Electronic Information Modernization, Migration, Integration, Maintenance and Standardization Plan
Visual Basic is an OOP (object-oriented programming) language, and that phrase is thoroughly (if technically) explained HERE
As we move forward, we are, ourselves, entering what is new territory for us. Admittedly many others have been into and through these areas long before us, but we have decided that our approach to these new areas would be on a self-development, self-learning basis, rather than hiring consultants who are already expert in these areas.
So, we are going through the "growing pains" that many others have gone through, many years ago, and which still many others may well go through in the future. So, if you have had trouble understanding these areas yourself, you may well find that the material in these Policies, and pages, will ease your passage through.
There is a philosophical concept here, developed by Mr. Hubbard.
There are three conditions of existence.
These three conditions comprise life.
They are be, do and have.
The condition of being is defined as the assumption of a category of identity. It could be said to be the role in a game and an example of beingness could be one's own name. Another example would be one's profession. Another example would be one's physical characteristics. Each or all of these things could be called one's beingness. Beingness is assumed by oneself or given to one's self, or is attained. For example, in the playing of a game each player has his own beingness.
The second condition of existence is doing. By doing we mean action, function, accomplishment, the attainment of goals, the fulfilling of purpose or any change of position in space.
The third condition is havingness. By havingness we mean owning, possessing, being capable of commanding, positioning, taking charge of objects, energies or spaces.
The essential definition of having is to be able to touch or permeate or to direct the disposition of.
The game of life demands that one assume a beingness in order to accomplish a doingness in the direction of havingness.
These three conditions are given in an order of seniority where life is concerned. The ability to be is more important than the ability to do. The ability to do is more important than the ability to have. In most people all three conditions are sufficiently confused that they are best understood in reverse order. When one has clarified the idea of possession or havingness, one can then proceed to clarify doingness for general activity and when this is done one understands beingness or identity.
It is an essential to a successful existence that each of these three conditions be clarified and understood. The ability to assume or to grant beingness is probably the highest of human virtues. It is even more important to be able to permit other people to have beingness than to be oneself to assume it. (The Fundamentals of Thought)
Generally it is not wise to "explain" what any quote by Mr. Hubbard (LRH) means. The quote contains its own meaning. I would point out, however, that LRH mentions that for most people they best understand this "Conditions of Existence" concept by going at it in "reverse order," and that would suggest that we "have" data to be manipulated, and that the manipulation is a "doingness."
As a result of "having" and "doing" many people think they are thus able to achieve a "beingness." Philosophically this is not true, but it serves for now. The information here about computer programming does not particularly touch on "beingness." Presumably a business would have and manipulate data in order to be successful.
But, this paragraph, and the next several, are my thinking on the LRH quote and I invite you to have your own understanding of it as it might relate to programming.
One possible understanding is that a person ("being") makes observations of the physical universe (a "doing") and this results in "having" an image (a "thought" or a "havingness") of some data that can then be stored (in memory -- whether it is a hard disk or a mind). He can then, later return to that earlier-observed data (a "havingness") and manipulate it (a "doingness") all, possibly, to change or enhance his "beingness."
Man makes the mistake of thinking that havingness comes first -- that is the same error as thinking that "data" comes first in the computer programming concept. Almost all the data we deal with is "yesterday's data," and was originated from an observation by a being. Computer programming does not much deal with the original source of data. In truth, the observation comes first, then the placing of that data into storage and, sometimes, the further manipulation of that data.
What is the original source of any data? It is an observation by a being. He had to assume an identity, first, and then "do the observing" and then "have that result" in mind or stored in some other type of storage medium.
Man, and computers can help with this, is constantly observing new data, and storing it. He regularly compares the new data with old data and draws conclusions. He deals in differences and similarities of data. "Two apples" is equal to "two apples" only if you are loose about the word "equal," since obviously the first two apples are different from the second two apples. Computers are still far behind the human mind in this regard.
Man then needs a means of looking at and manipulating the stored data (in his mind) and often uses a computer to assist in this process. His "program" of looking at and manipulating data is often called "thinking" which is, of course, "doing." And, thus, he can change his beingness, or assume a new one, or discard one that no longer serves his purposes. Modern computer programs can be understood within this philosophical framework.
Years ago we pioneered with dBase III. Essentially, the program I developed then followed what is sometimes called the "data and function" or "file-based" approach to programming. At that time there was not much better in public use.
Data was (and is) stored in database tables, like an address book. These tables can contain thousands of records. Each record has detailed information all related to one "name," for instance. These tables, and records were and are passive receptacles of data. That was true up to recently. There are now more modern methods of constructing databases.
This passive data was then manipulated with something called a "program." Programs, back then, were relatively simple to understand. There is an example below. These Programs were written into a computer -- showed up on the computer screen -- were written on lines. Each line contained one instruction for the computer to follow. These lines were also referred to as "code" because they contained some new vocabulary and grammar. Thus, many of the words in the instruction had rather normal English-language meanings, but there were specialized definitions of these words, and specialized grammar rules in which this program language operated.
Each line contains code. The program itself, then, consists of many lines of code. In my case I wrote these lines over a period of years and eventually wound up with more than 50,000 lines of code, in many hunks, or separate "programs" all of which constitute our old "system." All of these programs were a part of one Information System. Most of these separate programs worked with the same batch of database tables.
The code, using today's terms, "queried" the passive data in the database. The code instructed the computer to look into a table, grabbed this or that data, compared it with some other data, replaced some with others, and generally manipulated the data.
These programs are intended to be "run" by having the computer start at the first line of code, read that line and do that instruction, then go to the next line. Theoretically the program could start with line one and keep going to line 50,000. But, in fact, the program has many places where lines of code are skipped, or where you can jump back or forward to a section of code.
The lines of code could be called "commands" or "instructions." In today's looser vocabulary they might even be called "functions" and "queries."
In the example below, the "doingness" or command is in red. The "object" of that verb or "doingness" is shown in green. Items in blue are user actions based on choices shown on screen.
The "commands" would be, for instance:
1. Use the Master file of names
2. Search for Smith -- send a query looking for "Smith"
3. Display the Smith record so we can be sure we have the right one
4. If "correct" is selected, skip to command #6
5. If "not correct" is selected, do XXX
6. Find the field in that record that shows the "last date we heard from Smith"
7. Replace that field with today's date
8. Find the field in that record that shows the number of times we have heard from Smith
9. Add one to that number
10. Print a copy of that record on paper
11. Go back to the beginning where a new name can be entered
The words in red were called "commands" some 20 years ago. The "function" of the first command, "use," was to pick among the various data base tables one that had the name, "Master," and was the "file of names." The second command was to search (or find) a particular record in the data base then "in use." And so it went. Today, these commands could be called "queries."
You'll notice that these commands are verbs -- plain ordinary English verbs. Verbs often have "objects." The first line of code tells the program to "use," and the "direct object" of that verb is "the Master Name File." A full set of grammar is needed to fully write code; I have only given a few illustrations above. (Verbs can also have adverbs instead of objects, so "Go to the beginning" has "Go" as a verb and "beginning" as an adverb that tells "where.")
These are so simple that they can be explained in a common one-on-one conversation, usually.
Note that the command was very separate from the data. It would be hard to say that the data was a "long distance" from the command, but in terms of ease of query, and even electronic time, there was a great distance between the origin of the query and the object of the query -- the data.
The commands on lines #4 and #5 are another very common type of "function" in this old language. Every modern program has to include some sort of command that says, "If A is true, do X, if it is not true, do Y." Even those, however, were "commands" and the "command words" are in red.
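The same command flow can be sketched in a modern language. Here is a minimal, hypothetical version in Python, using the standard sqlite3 module in place of the old dBase tables (the table name, field names, and sample data are my own illustration, not the actual Vibrant Life system):

```python
import sqlite3
from datetime import date

# Hypothetical stand-in for the old dBase "Master" file of names.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE master (id INTEGER PRIMARY KEY, name TEXT, "
            "last_heard TEXT, times_heard INTEGER)")
con.execute("INSERT INTO master (name, last_heard, times_heard) "
            "VALUES ('Smith', '2003-01-15', 4)")

# Commands 1-2: "use" the Master file and search for Smith.
row = con.execute("SELECT id, name, last_heard, times_heard "
                  "FROM master WHERE name = 'Smith'").fetchone()

# Command 3: display the record so we can be sure it is the right one.
print(row)

# Commands 6-9: replace the "last heard from" date with today's date
# and add one to the contact counter.
con.execute("UPDATE master SET last_heard = ?, times_heard = times_heard + 1 "
            "WHERE id = ?", (date.today().isoformat(), row[0]))
con.commit()
```

The "if correct / if not correct" branch on lines #4 and #5 would become an ordinary Python if/else around the UPDATE.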
There were only some 200 different command words in dBase III so this was and is a very simple language. It was designed to work within the "DOS" environment and did not allow for all the fancy screens that are so common within the Microsoft Windows environment.
DOS (Disk Operating System) is a very simple language for manipulating data on the hard disk of a computer. There are relatively few commands within DOS, and data is stored, in essence, in random places, physically, on the disk -- with a "File Allocation Table" keeping track of where stuff was stored. DOS has been pretty much replaced by the "Windows Operating System" which has the same purpose, but does it in a far more complex and useful way.
The dBase program language, and MySQL (both described on this page) are, in reality, nothing more than greatly improved methods, compared to DOS, of placing, storing and manipulating data. However, dBase works on a local hard drive, on your local computer while MySQL works on a server.
You can see that this dBase was a straightforward language to learn and that you could mostly understand the actual meaning of the program language words used.
Here is one reference:
Until recently, programs were thought of as a series of procedures that acted upon data. A procedure, or function, is a set of specific instructions [or commands] executed one after another. The data was quite separate from the procedures, and the trick in programming was to keep track of which functions called which other functions, and what data was changed. To make sense of this potentially confusing situation, structured programming was created.
The principle idea behind structured programming is as simple as the idea of divide and conquer. A computer program can be thought of as consisting of a set of tasks. Any task that is too complex to be described simply would be broken down into a set of smaller component tasks, until the tasks were small and easy to understand. (Source)
The Vibrant Life program, written in dBase III language was "structured" because without "structuring" the 50,000 separate lines of code could be confusing and virtually impossible to run on the basis of starting with line one and going through to line 50,000 each time the program was used.
I will refer more to dBase III because it turns out that the very seeds of the modern programs were within the early ones.
For instance, data within the dBase system had several different categories. There was data stored in the category of "date," "numerical" (with and without decimals), "character" (text words), "logical" (true or false) and a couple others. Data about a "date" was manipulated with special commands that dealt with date, and could not be used to manipulate numerical data (when you add "one" to the numeral "31" you get "32" but if you add "one day" to "March 31" you get "April 1"). Some commands could work with any type of data.
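Python's standard library makes the same distinction between data categories: adding 1 to the number 31 and adding one day to March 31 are different operations on different types. A small sketch:

```python
from datetime import date, timedelta

# Numerical data: adding one to 31 gives 32.
n = 31
print(n + 1)  # 32

# Date data: adding one day to March 31 rolls over to April 1.
d = date(2003, 3, 31)
print(d + timedelta(days=1))  # 2003-04-01
```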
The term "class" is used today to mean a "group" or "category" of things that are similar in some way. We'll bump into "class" later on. It has a vital importance when you talk about one of the newest type of databases -- "object oriented data base."
Within the history that has brought us this far there was a change from the dBase III which I started with when something called "relational" databases were designed.
You can consider a "relation" here as like a handshake! There is some sort of connection between two different "tables" of data.
"Relational databases" had been invented when I started using dBase III, but I had not heard of them. I had to create "relationships" among the various database tables which I created and used.
For instance, I had one table (Master) with the names and addresses of customers. This table also had a unique identification number for each person. Then there was another table of "orders" received. The table of orders did NOT show the name of the person placing the order. The table had the date, the item ordered, the quantity, the price, shipping, etc. But the records that contained the orders also contained the ID number of the person placing the order. So, you could look at the ID number in the order record, then go to the name table, find that same ID number and thus get the name and address that was associated with the order.
This was all done quickly by the program I wrote, but I had to create these linkages, or relationships among the various tables, myself. This was called a "file-based" database. Each "table" (using today's terminology) was a separate "file" and I had to create relationships among the files. When this relationship was active (both files were "open"), both tables could be looked at with ease, and at the same time.
Then along came "relational" database design. In this type of database the usual "linkages" or "relationships" were DESIGNED into the database itself so that the program of commands did not have to take over this task.
The very design of the new forms of the Name table and the Order table, in a Relational Database, included a permanent link between the ID number in the Name table and the ID number of the customer in the Order table. With this done the program could easily find the name that went with every order.
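The ID-number linkage described above is exactly what a relational "join" does. Here is a minimal sketch using Python's built-in sqlite3 (the table and column names are hypothetical, chosen to mirror the Master/Orders example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The "Master" table: one record per customer, with a unique ID number.
con.execute("CREATE TABLE master (id INTEGER PRIMARY KEY, name TEXT, address TEXT)")
# The "Orders" table stores only the customer's ID number, not his name.
con.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, "
            "customer_id INTEGER REFERENCES master(id), item TEXT, qty INTEGER)")
con.execute("INSERT INTO master VALUES (101, 'Smith', '123 Main St')")
con.execute("INSERT INTO orders VALUES (1, 101, 'XX', 4)")

# The JOIN follows the ID number from the order back to the name record.
row = con.execute("SELECT m.name, m.address, o.item, o.qty "
                  "FROM orders o JOIN master m ON o.customer_id = m.id").fetchone()
print(row)  # ('Smith', '123 Main St', 'XX', 4)
```

The REFERENCES clause is the "permanent link" designed into the tables themselves; the program no longer has to open both files and walk the linkage by hand.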
This idea was touted as a really big deal at the time, but I accomplished the same thing with the old dBase III tables which were not "relational."
Many new forms of Relational Databases were terribly flawed and a Dr. Codd wrote what became a widely accepted set of criteria of what a GOOD relational database should be like.
Here is a quote from a good history of "relational databases."
The publication of Codd's rules resulted in a considerable amount of relational database research done in the early 1970s. By 1974, IBM had surfaced with a prototype of a relational database called System/R. The System/R project ended in 1979, but two significant accomplishments are accredited to that project. The relational data model's viability was sufficiently proven to the world and the project included significant work on a database query language. (source)
My decision has not only been to use dBase Plus as our programming language but to put that application on a server (actually our own local server) so that the application can be accessed by anyone to whom we give the password and access to the server.
There is a large "dividing line" between databases on your local computer versus those located on a server.
For one thing, related to us, we hope to have people working for us, from their homes, or other locations than our office. These people would need to be able to access the database in order to get information about people they were, for instance, going to call or send a letter to.
Also, we would like our customers to be able to go to a web site (our server) and get access into our database, and with proper password, etc., gain access to their own record for looking and even making changes.
There are other more significant differences that appeal to those making a decision as to whether to use a local computer or a server.
The primary difference between "local engines" and remote server engines is where the data manipulation, processing, storage and retrieval take place. Not the physical location of the table data, but rather, the location at which the processing action occurs.
In a traditional dBASE application, the engine runs on the client -- the computer on which the application resides. Each user of the application has his own copy of the engine running on his local PC. In Client/Server applications, there's only a single engine running on a remote server, available to any number of workstations running applications.
A subtle distinction? Not at all. With a copy of the application on each user's PC saving, retrieving and indexing .dbf tables all over the network, the probability of one of them failing (or one user kicking out the plug on their PC, or one user "experimenting" with an interactive copy of dBASE) is magnified by the number of workstations connected to the shared tables on the server. There has never been a dBASE developer who has not encountered corrupted indexes or blob files that had to be "fixed" by restoring a backup. This kind of corruption is typical of local databases for a simple reason: each user is playing with an assortment of live data, live indexes and live blobs. The very idea of a couple of hundred users each having various 8K pieces of your mission-critical data floating around in the memory of their workstations at any given time should terrify you (I know it does me.). (source -- top of page)
For many years the dBase program was ONLY available in a version that would work on the local computer. However, there is now a version designed to be operated on a server -- called the "InterBase" version. We could actually "stay" with our old dBase III and upgrade it to the newer form, "InterBase," and have this new database on the server. This version of dBase has an interesting further feature: it can be managed with SQL. InterBase is also open source. dBase Plus is far advanced over the InterBase system.
Once you had a database with built-in relationships the next thing you would want to do was to get information OUT of that database.
You might also want to use that database on a server to provide the CONTENT of web pages. That would be a great step upward for the maintenance of web pages, particularly for web sites with as many pages as Vibrant Life has --- some 100,000 pages, at least 10,000 separate files.
Maintenance of a content-driven site can be a real pain, too. Many sites (perhaps yours?) feel locked into a dry, outdated design because rewriting those hundreds of HTML files [try 10,000 files] to reflect a new look would take forever. Server-side includes (SSIs) [which we use liberally, called "include page" within "web components" in Front Page and now PHP "includes" in Dreamweaver] can help alleviate the burden a little, but you still end up with hundreds of files that need to be maintained should you wish to make a fundamental change to your site.
The solution to these headaches is database-driven site design. By achieving complete separation between your site's design and the content you want to present, you can work with each without disturbing the other. Instead of writing an HTML file for every page of your site, you only need to write a page for each kind of information you want to be able to present. Instead of endlessly pasting new content into your tired page layouts, create a simple content management system that allows the writers to post new content themselves without a lick of HTML!
[Let's be sure that if we do this the search engines can still crawl through the pages and give us rankings.]
In this book, I'll provide you with a hands-on look at what's involved in building a database-driven Website. We'll use two tools for this, both of which may be new to you: the PHP scripting language and the MySQL relational database management system. If your Web host provides PHP and MySQL support, you're in great shape. If not, we'll be looking at the setup procedures under Linux, Windows, and Mac OS X, so don't sweat it. (source)
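The separation of design from content that this quote describes can be sketched in a few lines. The book uses PHP and MySQL; this hypothetical illustration uses Python and sqlite3 instead, but the idea is identical: the articles live in a database table, and one template renders every page:

```python
import sqlite3

# Hypothetical content table: the site's articles live in the database,
# not in 10,000 individual HTML files.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
con.execute("INSERT INTO articles (title, body) VALUES "
            "('Chelation', 'Article text here...')")

# One template serves every article; changing the site's look means
# editing this template once, not every file.
TEMPLATE = ("<html><head><title>{title}</title></head>"
            "<body><h1>{title}</h1><p>{body}</p></body></html>")

def render_article(article_id):
    # Query the content out of the database and pour it into the layout.
    title, body = con.execute(
        "SELECT title, body FROM articles WHERE id = ?", (article_id,)).fetchone()
    return TEMPLATE.format(title=title, body=body)

print(render_article(1))
```

A writer who adds a row to the articles table has "published a page" without touching a lick of HTML.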
Here is one of the first places, among many that I reviewed, that refers to the database created by MySQL as a relational database. I had, initially, thought that the SQL database was an object-oriented database. Such a thing exists, but it is NOT created by SQL, or MySQL.
So, if we design relational databases they would have to be something different from the old version of dBase because dBase would only operate in a DOS environment, but a database to provide web page content would presumably have to operate on a server platform. The new dBase Plus creates that.
MySQL is really two separate "things." The words themselves, "structured query language," refer to the function of MySQL in sending queries to a database. That part of the package was designed to work with many different types of databases, but MySQL also allows you to create a database. It is relational. It is not an "Object Oriented Database." The new dBase Plus is both relational and OOP.
Since I did a lot of research on Object Oriented Programming, I might as well take advantage of that and present it for possible future reference. That information is located HERE.
I will deal with the type of database created by MySQL later, but first let's start with the query language which is part of MySQL.
What is a query?
You knew, for instance, that Mr. Smith had phoned, and wanted to buy 4 bottles of XX. You then needed to find his record in the database so you could have the shipping address, credit card and other pertinent information. This meant you "queried" the data base. You sent a query to the database looking for Mr. Smith.
The early history of these "queries" goes back before databases were being widely used.
The original design of these databases was such that they had to be located within a DOS operating system. Servers are just computers like the one in your home, but bigger and faster. They do not use DOS as their operating system, so the old dBase, for instance, would not work on a server. There are "Windows-based servers" (which we will soon have), and the new dBase Plus program works on that platform.
As companies began to see the value of placing their databases on servers instead of local hard disks, larger and larger companies saw the need for larger and larger databases -- even databases that had origins in multi-national locations, or in merged corporate environments where many "different" server-based databases had to, somehow, be merged into one information system. There was always a need for some system to "query" any database.
When databases became larger, more complicated, on the server, maintained by different people, there was then even more need not only for a "query language" but a commonly-agreed on query language -- one that would work with many different types of databases.
I am starting to use, now, the term "query language," but you will recognize that this is just a fancy term for "language of commands and instructions -- words and grammar."
When you add in the marvel of "hyperlinks" within a database, and within the program used for querying a data base, you can see the need for an "adult query language." When you add in that some databases contained "graphics," sound files and whole books as single fields, you see a growing complexity in the queries needed to find the data you want -- or to change it.
The number of application software packages which can handle graphics and image data has been increasing steadily since 1980, but during the 1970's, when the relational database first emerged, there were hardly any. Many people tried to link software to relational databases that had been successful in the business area, but they found that it is difficult to use relational databases for complex data items because they have a relatively low processing efficiency for this kind of data. And the same problem occurred with deductive databases, because the beginning of research in deductive databases was to integrate relational databases and programming logic. So both types of databases reached an impasse on the question of how to handle complex data. (source)
SQL is the original form, in an "open source code" of a "structured query language."
"Structured" means what it meant about dBase. Pieces of programs written in dBase were "structured" so that this one piece could be used here, there, and again. In other words a common programming function is the "search function." You enter a name "Smith" on a line and ask the program to search for "Smith" in the database. This could be a one-line command, but the actual instruction may need to change the lower case entry into UPPER CASE data stored in the table. Or, the instruction may have to include some provision: "If more than one 'Smith' then display a list of the first nine," or some such.
So, you could develop a "module" or a "piece" of a program that was used often and just insert that piece in every place where a "search function" was needed. This, then, was a "structured" language.
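A sketch of such a reusable "search module" in Python, with the case-folding and the "first nine matches" provision described above (the function name, table, and data are my own illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE master (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO master (name) VALUES (?)",
                [("SMITH",), ("SMITHERS",), ("JONES",)])

def search(name, limit=9):
    """Reusable search piece: fold the lower-case entry into UPPER CASE
    to match the stored data, and show at most the first `limit` matches."""
    return con.execute(
        "SELECT id, name FROM master WHERE name LIKE ? ORDER BY name LIMIT ?",
        (name.upper() + "%", limit)).fetchall()

print(search("smith"))  # [(1, 'SMITH'), (2, 'SMITHERS')]
```

Wherever the program needs a "search function," this one module is inserted, rather than rewriting the same lines of code in every place.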
So, SQL allowed different commands to be used in any place needed. Here is a list of the common MySQL commands.
SQL was also "open source" code.
Since it is "open source," anyone can get a copy, spend whatever time he wishes making whatever changes he wishes, and when his "version" is sufficiently distinguishable from any others, he can give it a name and offer it to the public.
SQL (Structured Query Language) is a database sublanguage for querying and modifying relational databases. It was developed by IBM Research in the mid 70's and standardized by ANSI in 1986. (source)
The original "SQL" was finalized and standardized by ANSI in 1986.
When we wrote the first edition of The Practical SQL Handbook, the American National Standards Institute (ANSI) had already approved the 1986 SQL standard. The International Standards Organization (ISO) adopted it in 1987. Both ANSI and ISO helped create the 1989 version. The 1986 standards were skimpy, lacking features that most commercial vendors offered. The 1989 standards were more complete but still left many important elements undefined. (source)
MySQL creates databases with appearances rather similar to dBase databases. The database has rows and columns. The columns are called fields. The rows are called records. Here is a description of the different "field types" that are allowed in MySQL.
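As an illustration, here is a hypothetical table definition in the MySQL style, with each column ("field") declared as one of those field types. (The column names are my own; the example is run here against Python's sqlite3, which accepts the same type declarations.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Each column ("field") gets a declared type, much like dBase's
# character, numerical, date and logical categories.
con.execute("""
    CREATE TABLE customers (
        id        INTEGER PRIMARY KEY,  -- numerical, no decimals
        name      VARCHAR(60),          -- character (text)
        balance   DECIMAL(8,2),         -- numerical, with decimals
        last_seen DATE,                 -- date
        active    BOOLEAN               -- logical (true or false)
    )""")
con.execute("INSERT INTO customers VALUES (1, 'Smith', 24.95, '2003-09-26', 1)")

# Each row is a record; here we pull two fields from the one record.
row = con.execute("SELECT name, balance FROM customers").fetchone()
print(row)  # ('Smith', 24.95)
```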
Microsoft developed its own version referred to as "MS SQL" or "Microsoft SQL." The Microsoft version is quite different from the original SQL, is NOT open code, and you have to pay a fee to obtain it.
The actual origin of MySQL is here:
The idea for MySQL started in the mid-1990s, when Widenius was working with TCX. TCX clients doing data warehousing started asking for a browser-based rather than a standard GUI interface. "We started to look around for some language that would be easy to embed in Perl or something like that, and when you look around, SQL is probably the appropriate choice," says Widenius. After checking the available commercial and open source servers they found that the available programs were too slow for even the medium sized databases of 5-10 million rows. They turned to the developer of mSQL to see if he would be interested in implementing the server software needed, but he wasn't, so they decided to develop their own. By that time, Widenius had gathered a lot of knowledge, and knew exactly what functionality he needed. The first implementation took him only three months to program, "but it didn't do much," he said. "I already had something in place ... I just ripped out the GUI and put in a simple SQL parser." (Source)
The commercial world apparently views Microsoft SQL as having many more features, at a cost, than the free availability of "SQL" or "MySQL." There are other "free" and "fee" forms of various databases, all relational, but still differing in many respects. Most of the time you can find comparisons among the free ones, but seldom find comparisons between the free ones and the Microsoft form of SQL.
MySQL's claim to fame is that it provides a reasonable set of features, such as built-in SQL functions, that follow the 80/20 rule: It has the 20 percent of SQL capabilities that are needed for 80 percent of database applications. Developers of simple applications can live without the remaining features, such as stored procedures and subqueries, or can work around them with creative client-side programming. (source)
That same source, as above, features an excellent presentation of the ways by which you can test what database system you might want to use. In our case a heavy preference was simply that MySQL was already a part of our Server Environment, well tested to run there and, of course, was free. Read the entire analysis for who might use MySQL and who might look for something different. Later on I found that my tech help was not able to produce the replacement of the Master/Entry program using MySQL and PHP, so I decided to abandon that line of work and use dBase Plus and do the programming myself or with the help of some local programmer.
The instruction manual for MySQL is best studied on the web, in any of several different formats. Click here for the one that I have found most useful.
Microsoft SQL, of course, requires a Microsoft Server platform to operate, and we had been using a Linux system for our web sites for many years. As soon as you switch from Linux to a Microsoft Server you get higher costs, possibly better performance, but probably also a set of features needed much more by companies far larger than ours. Generally I've found that "real" computer technicians detest Microsoft and have emotional reactions about it. I suspect that is because MS keeps their code secret and is large. Microsoft may not be the best, but they are the best known and often the most used. In terms of SQL, however, the field is apparently still technical enough that the technicians are the kings.
We will have Windows Server 2003 on a local PC as our server, and thus avoid the problem of changing the type of platform that hosts our web sites.
Even though Linux and MySQL may be the very best for us now, we need to keep some amount of attention on our size and growth. At some point we may need to be thinking of a "migration" to Microsoft SQL, and a Microsoft Platform.
Here, according to Microsoft, are industry trends that should be taken into account in choosing a database:
Current industry trends indicate that:
- Data storage capacity roughly doubles about every 18 months.
- Data storage costs decrease by about 50 percent every 12 months.
- Processor speeds continue to increase.
- Overall costs of a processor operation continue to decrease.
- More customers are shifting from 32-bit to 64-bit Microsoft Windows® platforms for the most demanding tasks.
What does all this mean? With higher base storage levels, more data can and will be stored in repositories such as databases. As more and more data is created, technologies are needed to store, manage, and analyze that data to solve business problems. This, in turn, requires that databases become smarter in terms of how the system scales to meet the demands of greater data volumes and real-time data analysis. (source)
The next thing to look at, within this big picture, is how or whether MySQL could do all that we needed. I will conclude right now that it cannot, and explain that later. But, since I found that MySQL could not do all that we needed, it was logical to look at the "PHP" language which Nick and Tom had selected to work WITH MySQL, and serve as the interface between web page displays (in HTML) and database information in a relational database.
PHP is a scripting language that provides an interface between HTML pages and the data obtained from MySQL Tables. It is so useful for this purpose that MySQL/PHP combinations are extremely common on web sites. MySQL by itself can easily send a query to a Table and display the result at a Linux command line, but NOT on the screen of a web page.
PHP cannot, with its own commands, query a table, but it is designed so that it can USE a MySQL command within a PHP instruction. There is some overlap between the two languages, so generally the people using each of them have to coordinate to keep the interface smooth.
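The division of labor described above (the SQL queries the table, the surrounding script turns rows into HTML for the browser) can be sketched in a few lines. This is a minimal illustration in Python, with SQLite standing in for MySQL; the `products` table and its columns are invented for the example, and a real PHP page would follow the same pattern with MySQL calls embedded in PHP instructions.

```python
import sqlite3
import html

# SQLite stands in for MySQL here; the table is invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.execute("INSERT INTO products VALUES ('Vitamin C', 12.50)")

# The database query: this part is SQL's job.
rows = conn.execute("SELECT name, price FROM products").fetchall()

# The HTML generation: this part is the scripting language's job.
page = "<table>\n"
for name, price in rows:
    page += f"  <tr><td>{html.escape(name)}</td><td>{price:.2f}</td></tr>\n"
page += "</table>"
print(page)
```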
Here is a word on the source of instructions for the PHP language.
To understand the implications of using a database connection, you need to understand the class hierarchy of the data objects.
These are the "objects" as that word is used in the term "Object Oriented Programming:"
1. dQuery itself
2. A Session object
3. A Database object
4. A Query object
At the top of the hierarchy is dQuery itself. Next is the Session class. A session represents a separate user task, and is required primarily for DBF and DB table security. dQuery supports up to 2048 simultaneous sessions. When dQuery first starts, it already has a default session. Unless your application needs to log in as more than one person simultaneously, there is usually no need to create your own session objects.
Each session contains one or more Database objects. You have access to that database's tables once you set up the database connection, activate the Database object, and, if necessary, log in. You may also log transactions, or buffer updates, to each database to allow you to roll back, abandon, or post changes as desired.
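The roll-back-or-post idea is the same in any transactional database. A small sketch in Python's built-in SQLite module (standing in for a dBase Plus database connection; the table is invented): a change is buffered inside a transaction and then abandoned rather than posted.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notes (body TEXT)")
db.commit()  # post the table definition itself

db.execute("INSERT INTO notes VALUES ('draft')")  # a buffered change
db.rollback()                                     # abandon, don't post

count = db.execute("SELECT COUNT(*) FROM notes").fetchone()[0]
print(count)  # the insert was rolled back, so the table is empty
```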
The Query object acts primarily as a container for an SQL statement and the set of rows, or rowset, that results from it. A rowset represents all or part of a single table or group of related tables. There is only one rowset per query, but you may have more than one query, and therefore more than one rowset, per database. A rowset maintains the current record or row, and therefore contains the typical navigation, buffering, and filtering methods.
The SQL statement may also contain parameters, which are represented in the Query object’s params array.
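The same parameter idea exists in most database interfaces: the SQL statement names its parameters, and their values are supplied separately, much as entries in the Query object's params array. A hedged sketch in Python's DB-API (SQLite standing in for the real server; the `patients` table is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name TEXT, city TEXT)")
conn.execute("INSERT INTO patients VALUES ('Smith', 'Glendale')")

# The statement carries a named parameter; the value lives in a
# separate mapping, analogous to a params array entry.
params = {"city": "Glendale"}
rows = conn.execute(
    "SELECT name FROM patients WHERE city = :city", params).fetchall()
print(rows)
```

Changing only the params mapping reruns the same statement against different values, which is the whole point of parameters.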
Finally, a rowset also contains a fields property, which is an array of field objects that contain information about the fields and the values of the fields for the current row. There are events that allow you to morph the values so that the values stored in the table are different from the values displayed. Each field object can also be linked to a visual component through the component's dataLink property to form a link between the user interface and the table. When the two objects are linked in this way, they are said to be dataLinked.
Putting the data objects together
If you're using Standard tables only, then at minimum you create a query (which gets assigned to the default database in the default session), set its SQL statement, and make the query active. If the query is successful, it generates a rowset, and you can access the data through the fields array.
When accessing tables through a database connection, you will need to create a new database, create the query, assign the database to the query, then set the SQL and make the query active.
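The steps above can be sketched by analogy in Python, with SQLite standing in for the dBase Plus database connection (the `invoice` table is invented, and Python's cursor plays the role of the active query and its rowset):

```python
import sqlite3

db = sqlite3.connect(":memory:")                 # create the database object
db.execute("CREATE TABLE invoice (num INTEGER, amount REAL)")
db.execute("INSERT INTO invoice VALUES (1001, 49.95)")

sql = "SELECT num, amount FROM invoice"          # set the SQL statement
cursor = db.execute(sql)                         # make the query "active"

row = cursor.fetchone()                          # the rowset's current row
# Build a name -> value view of the row, analogous to the fields array.
fields = {d[0]: v for d, v in zip(cursor.description, row)}
print(fields["amount"])
```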
If you use the Form or Report designers, you design these relationships visually and code is generated.
Using stored procedures
The object hierarchy for using stored procedures in an SQL-server database is very similar to the one used for accessing tables. The difference is that a StoredProc object is used instead of a Query object. Above the StoredProc object, the Database and Session objects do the same thing. If the stored procedure returns a rowset, the StoredProc object contains a rowset, just like a Query object.
A StoredProc object also has a params array, but instead of simple values to substitute into an SQL statement in a Query object, the params array of a StoredProc object contains Parameter objects. Each object describes both the type of parameter—input, output, or result—and the value of that parameter.
Before running the stored procedure, input values are set. After the stored procedure runs, output and result values can be read from the params array, or data can be accessed through its rowset. (Source dBase Plus Help)