Tuesday, July 28, 2020

PHP Computer Programming, its Importance & Applications



PHP, or Hypertext Preprocessor, created by Rasmus Lerdorf in 1994, is a general-purpose scripting language that enables web developers to build websites and create dynamic content that interacts with databases.


PHP is one of the most popular server-side programming languages: it communicates back and forth with a server to create dynamic web pages for the user. A very large share of the websites and blogs on the internet run on PHP; in fact, the page on which you are reading this blog is built with PHP.


PHP is an object-oriented programming language, and it is a must-learn language for any developer who is ambitious and willing to create dynamic web pages or work on web application development.


So if you are new to PHP, keep in mind that it is not an easy language to jump straight into without prior experience. Since PHP's syntax and other language elements can be quite confusing for a beginner, getting a grasp of basic programming concepts first is the best start - a pro tip for beginners.


Furthermore, learning and working with JavaScript, a client-side scripting language, first is an excellent approach. However, there is nothing to be scared of while learning PHP: learning ability differs from person to person, and it is fine to start right away if you pick things up quickly.


Let’s take a quick look at why you should learn PHP and why it is important.


Importance of PHP Computer Programming Language


Many ask why it is essential for a programmer or web and application developer to learn the PHP programming language.


Furthermore, for students of IT and software engineering, and especially for those involved in the web development domain, PHP is practically essential: they are rarely considered complete web developers unless they know PHP and its applications well.


      PHP is a recursive acronym for PHP: “Hypertext Preprocessor” - a server-side scripting language embedded in HTML. It is used to manage dynamic content, databases, and session tracking, and even to build entire e-commerce sites or stores.


      The language integrates with various popular databases, including MySQL, PostgreSQL, Oracle, Sybase, Informix, and Microsoft SQL Server.


      It is surprisingly fast in execution, especially when compiled as an Apache module on the Unix side. Paired with a MySQL server, it executes queries, whether simple or complicated, in very little time.


      PHP supports a large number of major protocols, such as POP3, IMAP, and LDAP. PHP 4 added support for Java and for distributed object architectures such as COM and CORBA, making n-tier development a possibility.


      The PHP language, with its C-like syntax, tries to be as forgiving as possible.


Versions of PHP programming Language


PHP started out as a small open-source project and evolved as more and more people discovered how useful it was, which is how it gained its popularity. Several PHP versions have been released so far; the first was released in 1994 by the Danish-Canadian programmer Rasmus Lerdorf.


Its major versions are:


      PHP 3 and 4

      PHP 5

      PHP 6 and Unicode

      PHP 7


Hello World using PHP


It’s a fun language for those who are eager and willing to learn it. You can create pages of any style, color, or type for websites and applications. Here you can have a look at a small, conventional PHP Hello World program.

You can try it yourself using this Demo link: http://tpcg.io/qFWzNE

Applications of PHP

PHP is one of the most widely used programming languages on the web. Some of its top applications are:


      PHP performs system functions: it can create, open, read, write, and close files on a system.

      PHP can handle forms: gathering data from records, saving it to a file, sending data via email, and returning data back to the user.

      It’s used to add, delete, and modify elements within your database.

      It can access cookie variables and set cookies.

      PHP enables programmers to restrict user access to some pages of a website - your own, or someone else’s that you are working on.

      Encrypting data is also one of its applications.


Prerequisites for learning PHP

If you’re all set to learn PHP programming language, it’s necessary for you to have at least some basic understanding of these prerequisites: Programming, Internet, Database, and MySQL.

Ready for learning PHP computer programming language? Well, you can learn this language from W3Schools, follow the link: https://www.w3schools.com/php/default.asp


    Written by: zahid_chaudhry 

Note: This article that has been written by Zahid Chaudhry is a property of Bor3d.net and can not be copied, printed or shared except the original URL directed to this post or the written permission of a Bor3d.net member. All rights belong to Bor3d.net.

Saturday, July 25, 2020

Differences between C and C++

In C++ there are only two variants of the function main: int main() and int main(int argc, char **argv).

The return type of main is int, and not void;
The function main cannot be overloaded (for other than the abovementioned signatures);
It is not required to use an explicit return statement at the end of main. If omitted main returns 0;
The value of argv[argc] equals 0;
The `third char **envp parameter' is not defined by the C++ standard and should be avoided. Instead, the global variable extern char **environ should be declared providing access to the program's environment variables. Its final element has the value 0;
A C++ program ends normally when the main function returns. Using a function try block (cf. section 10.11) for main is also considered a normal end of a C++ program. When a C++ program ends normally, destructors (cf. section 9.2) of globally defined objects are activated. A function like exit(3) does not normally end a C++ program; using such functions is therefore deprecated.
According to the ANSI/ISO definition, `end of line comment' is implemented in the syntax of C++. This comment starts with // and ends at the end-of-line marker. The standard C comment, delimited by /* and */, can still be used in C++:

    int main()
    {
        // this is end-of-line comment
        // one comment per line

        /* this is standard-C comment, covering
           multiple lines */
    }

Despite the example, it is advised not to use C type comment inside the body of C++ functions. Sometimes existing code must temporarily be suppressed, e.g., for testing purposes. In those cases it's very practical to be able to use standard C comment. If such suppressed code itself contains such comment, it would result in nested comment-lines, resulting in compiler errors. Therefore, the rule of thumb is not to use C type comment inside the body of C++ functions (alternatively, #if 0 until #endif pair of preprocessor directives could of course also be used).

C++ uses very strict type checking. A prototype must be known for each function before it is called, and the call must match the prototype. The program

    int main()
    {
        printf("Hello World\n");
    }

often compiles under C, albeit with a warning that printf() is an unknown function. But C++ compilers (should) fail to produce code in such cases. The error is of course caused by the missing #include <stdio.h> (which in C++ is more commonly included as #include <cstdio> directive).

And while we're at it: as we've seen, in C++ main always uses the int return value. Although it is possible to define int main() without explicitly defining a return statement, within main it is not possible to use a return statement without an explicit int-expression. For example:

    int main()
    {
        return;     // won't compile: expects int expression, e.g.
                    // return 1;
    }
In C++ it is possible to define functions having identical names but performing different actions. The functions must differ in their parameter lists (and/or in their const attribute). An example is given below:

    #include <stdio.h>

    void show(int val)
    {
        printf("Integer: %d\n", val);
    }
    void show(double val)
    {
        printf("Double: %lf\n", val);
    }
    void show(char const *val)
    {
        printf("String: %s\n", val);
    }

    int main()
    {
        show(12);
        show(3.1415);
        show("Hello World!\n");
    }
In the above program three functions show are defined, only differing in their parameter lists, expecting an int, double and char *, respectively. The functions have identical names. Functions having identical names but different parameter lists are called overloaded. The act of defining such functions is called `function overloading'.

The C++ compiler implements function overloading in a rather simple way. Although the functions share their names (in this example show), the compiler (and hence the linker) use quite different names. The conversion of a name in the source file to an internally used name is called `name mangling'. E.g., the C++ compiler might convert the prototype void show (int) to the internal name VshowI, while an analogous function having a char * argument might be called VshowCP. The actual names that are used internally depend on the compiler and are not relevant for the programmer, except where these names show up in e.g., a listing of the content of a library.

Some additional remarks with respect to function overloading:
Do not use function overloading for functions doing conceptually different tasks. In the example above, the functions show are still somewhat related (they print information to the screen).

However, it is also quite possible to define two functions lookup, one of which would find a name in a list while the other would determine the video mode. In this case the behaviors of those two functions have nothing in common. It would therefore be more practical to use names which suggest their actions; say, findname and videoMode.
C++ does not allow identically named functions to differ only in their return values, as it is always the programmer's choice to either use or ignore a function's return value. E.g., the fragment

    printf("Hello World!\n");

provides no information about the return value of the function printf. Two functions printf which only differ in their return types would therefore not be distinguishable to the compiler.
In chapter 7 the notion of const member functions is introduced (cf. section 7.7). Here it is merely mentioned that classes normally have so-called member functions associated with them (see, e.g., chapter 5 for an informal introduction to the concept). Apart from overloading member functions using different parameter lists, it is then also possible to overload member functions by their const attributes. In those cases, classes may have pairs of identically named member functions, having identical parameter lists. Then, these functions are overloaded by their const attribute. In such cases only one of these functions must have the const attribute.
In C++ it is possible to provide `default arguments' when defining a function. These arguments are supplied by the compiler when they are not specified by the programmer. For example:

    #include <stdio.h>

    void showstring(char *str = "Hello World!\n");

    int main()
    {
        showstring("Here's an explicit argument.\n");

        showstring();   // in fact this says:
                        // showstring("Hello World!\n");
    }

The possibility to omit arguments in situations where default arguments are defined is just a nice touch: it is the compiler who supplies the lacking argument unless it is explicitly specified at the call. The code of the program will neither be shorter nor more efficient when default arguments are used.

Functions may be defined with more than one default argument:

    void two_ints(int a = 1, int b = 4);

    int main()
    {
        two_ints();         // arguments:  1, 4
        two_ints(20);       // arguments: 20, 4
        two_ints(20, 5);    // arguments: 20, 5
    }

When the function two_ints is called, the compiler supplies one or two arguments whenever necessary. A statement like two_ints(,6) is, however, not allowed: when arguments are omitted they must be on the right-hand side.

Default arguments must be known at compile-time since at that moment arguments are supplied to functions. Therefore, the default arguments must be mentioned at the function's declaration, rather than at its implementation:

    // sample header file
    extern void two_ints(int a = 1, int b = 4);

    // code of function in, say, two.cc
    void two_ints(int a, int b)
    {
        ...
    }

It is an error to supply default arguments in function definitions. When the function is used by other sources the compiler reads the header file rather than the function definition. Consequently the compiler has no way to determine the values of default function arguments. Current compilers generate compile-time errors when detecting default arguments in function definitions.

In C++ all zero values are coded as 0. In C NULL is often used in the context of pointers. This difference is purely stylistic, though one that is widely adopted. In C++ NULL should be avoided (as it is a macro, and macros can --and therefore should-- easily be avoided in C++, see also section 8.1.4). Instead 0 can almost always be used.

Almost always, but not always. As C++ allows function overloading (cf. section 2.5.4) the programmer might be confronted with an unexpected function selection in the situation shown in section 2.5.4:

    #include <stdio.h>

    void show(int val)
    {
        printf("Integer: %d\n", val);
    }
    void show(double val)
    {
        printf("Double: %lf\n", val);
    }
    void show(char const *val)
    {
        printf("String: %s\n", val);
    }

    int main()
    {
        show(12);
        show(3.1415);
        show("Hello World!\n");
    }

In this situation a programmer intending to call show(char const *) might call show(0). But this doesn't work, as 0 is interpreted as int and so show(int) is called. But calling show(NULL) doesn't work either, as C++ usually defines NULL as 0, rather than ((void *)0). So, show(int) is called once again. To solve these kinds of problems the new C++ standard introduces the keyword nullptr representing the 0 pointer. In the current example the programmer should call show(nullptr) to avoid the selection of the wrong function. The nullptr value can also be used to initialize pointer variables. E.g.,

    int *ip = nullptr;      // OK
    int value = nullptr;    // error: value is no pointer

2.5.7: The `void' parameter list

In C, a function prototype with an empty parameter list, such as

    void func();

means that the argument list of the declared function is not prototyped: for functions using this prototype the compiler does not warn against calling func with any set of arguments. In C the keyword void is used when it is the explicit intent to declare a function with no arguments at all, as in:

    void func(void);

As C++ enforces strict type checking, in C++ an empty parameter list indicates the total absence of parameters. The keyword void is thus omitted.

2.5.8: The `#define __cplusplus'

Each C++ compiler which conforms to the ANSI/ISO standard defines the symbol __cplusplus: it is as if each source file were prefixed with the preprocessor directive #define __cplusplus.

We shall see examples of the usage of this symbol in the following sections.

2.5.9: Using standard C functions

Normal C functions, e.g., those which are compiled and collected in a run-time library, can also be used in C++ programs. Such functions, however, must be declared as C functions.

As an example, the following code fragment declares a function xmalloc as a C function:

    extern "C" void *xmalloc(int size);

This declaration is analogous to a declaration in C, except that the prototype is prefixed with extern "C".

A slightly different way to declare C functions is the following:

    extern "C"
    {
        // C-declarations go in here
    }

It is also possible to place preprocessor directives at the location of the declarations. E.g., a C header file myheader.h which declares C functions can be included in a C++ source file as follows:

    extern "C"
    {
        #include <myheader.h>
    }

Although these two approaches may be used, they are actually seldom encountered in C++ sources. A more frequently used method to declare external C functions is encountered in the next section.

2.5.10: Header files for both C and C++

The combination of the predefined symbol __cplusplus and the possibility to define extern "C" functions offers the ability to create header files for both C and C++. Such a header file might, e.g., declare a group of functions which are to be used in both C and C++ programs.

The setup of such a header file is as follows:

    #ifdef __cplusplus
    extern "C"
    {
    #endif

    /* declaration of C-data and functions are inserted here. E.g., */
    void *xmalloc(int size);

    #ifdef __cplusplus
    }
    #endif

Using this setup, a normal C header file is enclosed by extern "C" { which occurs near the top of the file and by }, which occurs near the bottom of the file. The #ifdef directives test for the type of the compilation: C or C++. The `standard' C header files, such as stdio.h, are built in this manner and are therefore usable for both C and C++.

In addition C++ headers should support include guards. In C++ it is usually undesirable to include the same header file twice in the same source file. Such multiple inclusions can easily be avoided by including an #ifndef directive in the header file. For example:

    #ifndef MYHEADER_H_
    #define MYHEADER_H_
        // declarations of the header file are inserted here,
        // using #ifdef __cplusplus etc. directives
    #endif

When this file is initially scanned by the preprocessor, the symbol MYHEADER_H_ is not yet defined. The #ifndef condition succeeds and all declarations are scanned. In addition, the symbol MYHEADER_H_ is defined.

When this file is scanned next while compiling the same source file, the symbol MYHEADER_H_ has been defined and consequently all information between the #ifndef and #endif directives is skipped by the compiler.

In this context the symbol name MYHEADER_H_ serves only for recognition purposes. E.g., the name of the header file can be used for this purpose, in capitals, with an underscore character instead of a dot.

Apart from all this, the custom has evolved to give C header files the extension .h, and to give C++ header files no extension. For example, the standard iostreams cin, cout and cerr are available after including the header file iostream, rather than iostream.h. In the Annotations this convention is used with the standard C++ header files, but not necessarily everywhere else.

There is more to be said about header files. Section 7.11 provides an in-depth discussion of the preferred organization of C++ header files. In addition, starting with the C++2a standard modules are available resulting in a somewhat more efficient way of handling declarations than offered by the traditional header files. In the C++ Annotations modules are covered in chapter 7, section 7.12.

2.5.11: Defining local variables

Although already allowed in the C programming language, local variables should only be defined once they're needed. Although doing so requires a little getting used to, eventually it tends to produce more readable, maintainable and often more efficient code than defining variables at the beginning of compound statements. We suggest applying the following rules of thumb when defining local variables:
Local variables should be created at `intuitively right' places, such as in the example below. This does not only entail the for-statement, but also all situations where a variable is only needed, say, half-way through the function.
More in general, variables should be defined in such a way that their scope is as limited and localized as possible. Whenever possible, local variables are not defined at the beginning of functions, but rather where they're first used.
It is considered good practice to avoid global variables. It is fairly easy to lose track of which global variable is used for what purpose. In C++ global variables are seldom required, and by localizing variables the well known phenomenon of using the same variable for multiple purposes, thereby invalidating each individual purpose of the variable, can easily be prevented.

If considered appropriate, nested blocks can be used to localize auxiliary variables. However, situations exist where local variables are considered appropriate inside nested statements. The just mentioned for statement is of course a case in point, but local variables can also be defined within the condition clauses of if-else statements, within selection clauses of switch statements and condition clauses of while statements. Variables thus defined are available to the full statement, including its nested statements. For example, consider the following switch statement:

    #include <stdio.h>

    int main()
    {
        switch (int c = getchar())
        {
            case 'a':
            case 'e':
            case 'i':
            case 'o':
            case 'u':
                printf("Saw vowel %c\n", c);
            break;

            case EOF:
                printf("Saw EOF\n");
            break;

            case '0' ... '9':
                printf("Saw number character %c\n", c);
            break;

            default:
                printf("Saw other character, hex value 0x%2x\n", c);
        }
    }
Note the location of the definition of the character `c': it is defined in the expression part of the switch statement. This implies that `c' is available only to the switch statement itself, including its nested (sub)statements, but not outside the scope of the switch.

The same approach can be used with if and while statements: a variable that is defined in the condition part of an if and while statement is available in their nested statements. There are some caveats, though:
The variable definition must result in a variable which is initialized to a numeric or logical value;
The variable definition cannot be nested (e.g., using parentheses) within a more complex expression.

The latter point of attention should come as no big surprise: in order to be able to evaluate the logical condition of an if or while statement, the value of the variable must be interpretable as either zero (false) or non-zero (true). Usually this is no problem, but in C++ objects (like objects of the type std::string (cf. chapter 5)) are often returned by functions. Such objects may or may not be interpretable as numeric values. If not (as is the case with std::string objects), then such variables can not be defined at the condition or expression clauses of condition- or repetition statements. The following example will therefore not compile:

    if (std::string myString = getString())     // assume getString returns
    {                                           // a std::string value
        // process myString
    }

The above example requires additional clarification. Often a variable can profitably be given local scope, but an extra check is required immediately following its initialization. The initialization and the test cannot both be combined in one expression. Instead two nested statements are required. Consequently, the following example won't compile either:

    if ((int c = getchar()) && strchr("aeiou", c))
        printf("Saw a vowel\n");

If such a situation occurs, either use two nested if statements, or localize the definition of int c using a nested compound statement:

    if (int c = getchar())      // nested if-statements
        if (strchr("aeiou", c))
            printf("Saw a vowel\n");

    {                           // nested compound statement
        int c = getchar();
        if (c && strchr("aeiou", c))
            printf("Saw a vowel\n");
    }

2.5.12: The keyword `typedef'

The keyword typedef is still used in C++, but is not required anymore when defining union, struct or enum definitions. This is illustrated in the following example:

    struct SomeStruct
    {
        int a;
        double d;
        char string[80];
    };

When a struct, union or other compound type is defined, the tag of this type can be used as type name (this is SomeStruct in the above example):

    SomeStruct what;
    what.d = 3.1415;

2.5.13: Functions as part of a struct

In C++ we may define functions as members of structs. Here we encounter the first concrete example of an object: as previously described (see section 2.4), an object is a structure containing data while specialized functions exist to manipulate those data.

A definition of a struct Point is provided by the code fragment below. In this structure, two int data fields and one function draw are declared.

    struct Point            // definition of a screen-dot
    {
        int x;              // coordinates
        int y;              // x/y
        void draw();        // drawing function
    };

A similar structure could be part of a painting program and could, e.g., represent a pixel. With respect to this struct it should be noted that:
The function draw mentioned in the struct definition is a mere declaration. The actual code of the function defining the actions performed by the function is found elsewhere (the concept of functions inside structs is further discussed in section 3.2).
The size of the struct Point is equal to the size of its two ints. A function declared inside the structure does not affect its size. The compiler implements this behavior by allowing the function draw to be available only in the context of a Point.

The Point structure could be used as follows:

    Point a;        // two points on
    Point b;        // the screen

    a.x = 0;        // define first dot
    a.y = 10;       // and draw it
    a.draw();

    b = a;          // copy a to b
    b.y = 20;       // redefine y-coord
    b.draw();       // and draw it

As shown in the above example a function that is part of the structure may be selected using the dot (.) (the arrow (->) operator is used when pointers to objects are available). This is therefore identical to the way data fields of structures are selected.

The idea behind this syntactic construction is that several types may contain functions having identical names. E.g., a structure representing a circle might contain three int values: two values for the coordinates of the center of the circle and one value for the radius. Analogously to the Point structure, a Circle may now have a function draw to draw the circle.

2.5.14: Evaluation order of operands

Traditionally, the evaluation order of expressions of operands of binary operators is, except for the boolean operators and and or, not defined. C++ changed this for postfix expressions, assignment expressions (including compound assignments), and shift operators:
Expressions using postfix operators (like index operators and member selectors) are evaluated from left to right (do not confuse this with postfix increment or decrement operators, which cannot be concatenated (e.g., variable++++ does not compile)).
Assignment expressions are evaluated from right to left;
Operands of shift operators are evaluated from left to right.

In the following examples first is evaluated before second, before third, before fourth, whether they are single variables, parenthesized expressions, or function calls:

    first.second

    fourth += third = second += first

    first << second << third << fourth
    first >> second >> third >> fourth

In addition, when overloading an operator, the function implementing the overloaded operator is evaluated like the built-in operator it overloads, and not in the way function calls are generally ordered.

Source: C++ Annotations Version 11.4.0 Frank B. Brokken University of Groningen, PO Box 407, 9700 AK Groningen The Netherlands Published at the University of Groningen

C++'s history

The first implementation of C++ was developed in the 1980s at the AT&T Bell Labs, where the Unix operating system was created. C++ was originally a `pre-compiler', similar to the preprocessor of C, converting special constructions in its source code to plain C. Back then this code was compiled by a standard C compiler. The `pre-code', which was read by the C++ pre-compiler, was usually located in a file with the extension .cc, .C or .cpp. This file would then be converted to a C source file with the extension .c, which was thereupon compiled and linked.

The nomenclature of C++ source files remains: the extensions .cc and .cpp are still used. However, the preliminary work of a C++ pre-compiler is nowadays usually performed during the actual compilation process. Often compilers determine the language used in a source file from its extension. This holds true for Borland's and Microsoft's C++ compilers, which assume a C++ source for an extension .cpp. The GNU compiler g++, which is available on many Unix platforms, assumes for C++ the extension .cc.

The fact that C++ used to be compiled into C code is also visible from the fact that C++ is a superset of C: C++ offers the full C grammar and supports all C-library functions, and adds to this features of its own. This makes the transition from C to C++ quite easy. Programmers familiar with C may start `programming in C++' by using source files having extensions .cc or .cpp instead of .c, and may then comfortably slip into all the possibilities offered by C++. No abrupt change of habits is required.

Source: C++ Annotations Version 11.4.0 Frank B. Brokken University of Groningen, PO Box 407, 9700 AK Groningen The Netherlands Published at the University of Groningen

the machine-readable text: methods of conversion


Although the Workshop did not include a systematic examination of the methods for converting texts from paper (or from facsimile images) into machine-readable form, various speakers nevertheless touched upon this matter. For example, WEIBEL reported that OCLC has experimented with a merging of multiple optical character recognition systems that will reduce errors from an unacceptable rate of 5 characters out of every 1,000 to an acceptable rate of 2 characters out of every 1,000.

Pamela ANDRE presented an overview of NAL's Text Digitization Program and Judith ZIDAR discussed the technical details. ZIDAR explained how NAL purchased hardware and software capable of performing optical character recognition (OCR) and text conversion and used its own staff to convert texts. The process, ZIDAR said, required extensive editing and project staff found themselves considering alternatives, including rekeying and/or creating abstracts or summaries of texts. NAL reckoned costs at $7 per page. By way of contrast, Ricky ERWAY explained that American Memory had decided from the start to contract out conversion to external service bureaus. The criteria used to select these contractors were cost and quality of results, as opposed to methods of conversion. ERWAY noted that historical documents or books often do not lend themselves to OCR. Bound materials represent a special problem. In her experience, quality control—inspecting incoming materials, counting errors in samples—posed the most time-consuming aspect of contracting out conversion. ERWAY reckoned American Memory's costs at $4 per page, but cautioned that fewer cost-elements had been included than in NAL's figure.

Source: Project Gutenberg's LOC Workshop on Electronic Texts, by Library of Congress

TPC, The Phone Company

My apologies for using the United States as an example so many times, but…most of my experience has been in the US.
Asynchronous Availability of Information
One of the major advantages of electronic information is that you don't have to schedule yourself to match others in their schedules.
This is very important. Just this very week I have been waiting for a power supply for one of my computers, just because the schedule of the person who has it was not in sync with the schedule of the person picking it up. The waste has been enormous, and trips all the way across an entire town are wasted, while the computer lies unused.
The same thing happens with libraries and stores of all kinds around the world. How many times have you tried a phone call, a meeting, a purchase, a repair, a return or a variety of other things, and ended up not making these connections?
No longer, with things that are available electronically over the Nets. You don't have to wait until the door of the library swings open to get that book you want for an urgent piece of research; you don't have to wait until a person is available to send them an instant message; you don't have to wait for the evening news on tv….
This is called Asynchronous Communication…meaning those schedules don't have to match exactly any more to have a meaningful and quick conversation. A minute here, there or wherever can be saved instead of wasted, and the whole communication still travels at near instantaneous speed, without the cost of ten telegrams, ten phone calls, etc.
You can be watching television and jump up and put a few minutes into sending, or answering, your email and would not miss anything but the commercials.
"Commercials" bring to mind another form of asynchronous communication…taping a tv or radio show and watching a show in 40 minutes instead of an hour because you do not have to sit through 1 minute of "not-show" per 2 minutes of show. Not only do you not have to be home on Thursday night to watch your favorite TV show any more, but those pesky commercials can be edited out, allowing you to see three shows in the time it used to take to watch two.
This kind of efficiency can have a huge effect on you or your children. . .unless you WANT them to see 40 ads per hour on television, or spend hours copying notes from an assortment of library books carried miles from, and back to, the libraries. Gone are the piles of 3x5 cards that past students and scholars have heaped up in efforts to organize mid-term papers over 9, 12, 16 or 20 years of institutionalized education. Whole rainforests of trees can be saved, not to mention the billions of hours of an entire population's educated scribbling that should have been spent between the ears instead of between paper and hand, cramping the thought and style of generations upon generations of those of us without photographic memories to take the place of the written word.
Now we all can have photographic memories, we can quote, with total accuracy, millions of 3x5 cards worth of huge encyclopedias of information, all without getting up for any reason other than eating, drinking and stretching.
Research in this area indicates that 90% of the time the previous generations spent for research papers was spent traipsing through the halls, stairways and bookstacks of libraries; searching through 10 to 100 books for each of the ones selected for further research; and searching on 10-100 pages for each quote worthy of making it into the sacred piles of 3x5 cards; then searching the card piles for those fit for the even more sacred sheets of paper a first draft was written on. Even counting the fanatical dedication of those who go through several drafts before a presentation draft is finally achieved, the researchers agree that 90% of this kind of work is spent in "hunting and gathering" the information and only 10% of this time is spent "digesting" the information.
If you understand that civilization was based on the new invention called "the plow," which changed the habits of "hunting and gathering" peoples into civilized cities… then you might be able to understand the changes the computer and computer networks are making to those using them instead of the primitive hunting and gathering jobs we used to spend 90% of our time on.
In the mid-19th Century the United States was over 90% an agrarian economy, spending nearly all of its efforts on raising food to feed an empty belly. The mid-20th Century's advances reversed that ratio, so that only 10% was being used for the belly, 90% for civilization.
The same thing will be said for feeding the mind, if our civilization ever gets around to deciding to stop spending the majority of our research time on the physical, rather than the mental, portion of the educational process.
Think of it this way: if it takes only 10% as long to do the work to write a research paper, we are likely to get either 10 times as many research papers, or papers which are 10 times as good, or some combination…just like we ended up with 10 times as much food for the body when we turned from hunting and gathering food to agriculture at the beginnings of civilization…then we would expect a similar transition to a civilization of the future.
If mankind is defined as the animal who thinks, then thinking more and better increases the degree to which we are the human species. Decreasing our ability to think is going to decrease our humanity…and yet I am living in what a large number of people define as the prime example of an advanced country…where half the adult population can't read at a functional level. [From the US Adult Literacy Report of 1994]

Source: Project Gutenberg's A Brief History of the Internet, by Michael Hart

Why URLs Aren't Universal

This chapter discusses why URLs aren't universal.
Why Universal Resource Locators Are Not Universal
When I first tried the experimental Gopher sites, I asked the inventors of Gopher if their system could be oriented to also support FTP, should a person be more inclined to go after something one had already researched, rather than the "browsing" that was being done so often on those Gopher servers.
The answer was technically "yes," but realistically "no," in that while Gophers COULD be configured such that every file would be accessible by BOTH Gopher and FTP, the real intent of Gopher was to bypass FTP and eventually replace it as the primary method of surfing the Internet.
I tried to explain to them that "surfing" the Internet is much more time consuming, as well as wasteful of bandwidth [this at a time when all bandwidth was still free, and we were only trying to make things run faster, as opposed to actually saving money].

Source: Project Gutenberg's A Brief History of the Internet, by Michael Hart

Internet As Chandelier

----- ORIGINAL MESSAGE -----
Hart undoubtedly saw academia as a series of dark brown dream shapes, disorganized, nightmarish, each with its set of rules for nearly everything: style of writing, footnoting, limited subject matter, and each with little reference to the others.
----- REPLY -----
What he wanted to see was knowledge in the form of a chandelier, with each subject area powered by the full intensity of the flow of information, and each sending sparks of light to other areas, which would then incorporate and reflect them to others, a never ending flexion and reflection, an illumination of the mind, soul and heart of Wo/Mankind as could not be rivalled by a diamond of the brightest and purest clarity.
Instead, he saw petty feudal tyrants, living in dark poorly lit, poorly heated, well defended castles: living on a limited diet, a diet of old food, stored away for long periods of time, salted or pickled or rotted or fermented. Light from the outside isn't allowed in, for with it could come the spears and arrows of life and the purpose of the castle was to keep the noble life in, and all other forms of life out. Thus the nobility would continue a program of inbreeding which would inevitably be outclassed by an entirely random reflexion of the world's gene pool.
A chandelier sends light in every direction, light of all colors and intensities. No matter where you stand, there are sparkles, some of which are aimed at you, and you alone, some of which are also seen by others: yet, there is no spot of darkness, neither are there spots of overwhelming intensity, as one might expect a sparkling source of lights to give off. Instead, the area is an evenly lit paradise, with direct and indirect light for all, and at least a few sparkles for everyone, some of which arrive, pass and stand still as we watch.
But the system is designed to eliminate sparkles, reflections or any but the most general lighting. Scholars are encouraged to a style and location of writing which guarantee that 99 and 44/100ths percent of the people who read their work will be colleagues, already a part of that inbred nobility of their fields.
We are already aware that most of our great innovations are made from leaps from field to field, that the great thinkers apply an item here in this field which was gleaned from that field: thus are created the leaps which create new fields which widen fields of human endeavor in general.
Yet, our petty nobles, cased away in their casements, encased in their tradition, always reject the founding of these new fields, fearing their own fields can only be dimmed by comparison. This is true, but only by their own self-design. If their field were open to light from the outside, then the new field would be part of their field, but by walling up the space around themselves, a once new and shining group of enterprising revolutionaries could only condemn themselves to awaiting the ravages of time, tarnish and ignorance as they become ignorant of the outside world while the outside world becomes ignorant of them.
So, I plead with you, for your sake, my sake, for everyone's, to open windows in your mind, in your field, in your writing and in your thinking; to let illumination both in and out, to come from underneath and from behind the bastions of your defenses, and to embrace the light and the air, to see and to breathe, to be seen and to be breathed by the rest of Wo/Mankind.
Let your light reflect and be reflected by the other jewels in a crown of achievement more radiant than anything we have ever had the chance to see or to be before. Join the world!
A Re-Visitation to the Chandelier by Michael S. Hart
Every so often I get a note from a scholar with questions and comments about the Project Gutenberg Edition of this or that. Most of the time this appears to be either idle speculation, since there is never any further feedback about which passages this or that edition does better in the eye of particular scholars, or the feedback is of the "holier than thou" variety, in which the scholar claims to have found errors in our edition, which the scholar then refuses to enumerate.
As for the first, there can certainly be little interest in a note that appears, even after follow-up queries, to be of that idle brand of inquiry.
As to the second, we are always glad to receive a correction; that is one of the great powers of etext, that corrections can be made easily and quickly when compared to paper editions, with the corrections being made available to those who already had the previous editions, at no extra charge.
However, when someone is an expert scholar in a field, they have a certain responsibility to make their inquiries of some reasonable variety, with a reasonable input, in order to get a reasonable output. Complaining that there is a problem without pointing out the problem deserves a response from a rich and powerful vocabulary I do not feel is appropriate for this occasion. We have put an entirely out-of-proportion cash reward on these errors at one time or another and still have not received any indications that a scholar has actually ever found them, which would not be more difficult than finding errors in any other etexts, especially ones not claiming a beginning accuracy of only 99.9%.
However, if these corrections WERE forthcoming, then the 99.9 would soon approach 99.95, which is the reference error level referred to several times in the Library of Congress Workshop on Electronic Text Proceedings.
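The accuracy figures Hart cites translate directly into expected error counts per text. A minimal sketch of that arithmetic, assuming an illustrative text length (the 180,000-character figure is my assumption, not the essay's):

```python
# Expected character errors in an etext proofed to a given accuracy.
# The 99.9% and 99.95% figures are the essay's; the text length is an
# assumed, illustrative number, not one from the source.

def expected_errors(num_chars, accuracy):
    """Errors expected if each character is correct with probability `accuracy`."""
    return num_chars * (1.0 - accuracy)

hamlet_chars = 180_000  # assumed rough size of a plain-text Hamlet

print(expected_errors(hamlet_chars, 0.999))   # 99.9%  -> about 180 errors
print(expected_errors(hamlet_chars, 0.9995))  # 99.95% -> about 90 errors
```

Halving the residual error rate halves the expected errors, which is why the jump from 99.9% to 99.95% was treated as a meaningful milestone.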
On the other hand, just as Project Gutenberg's efficiency would drop dramatically if we insisted our first edition of a book were over 99.5% accurate, so too would efficiency drop dramatically if we were ever to involve ourselves in any type of discussion resembling "How many angels can dance on a pinhead." The fact is that our editions are NOT targeted to an audience specifically interested in whether Shakespeare would have said:
"To be or not to be"
"To be, or not to be"
"To be; or not to be"
"To be: or not to be"
"To be—or not to be"
This kind of conversation is and should be limited to the few dozen to few hundred scholars who are properly interested. A book designed for access by hundreds of millions cannot spend that amount of time on an issue that is of minimal relevance, at least minimal to 99.9% of the potential readers. However, we DO intend to distribute a wide variety of Shakespeare, and the contributions of such scholars would be much appreciated, were they ever given, just as we have released several editions of the Bible, Paradise Lost and even Aesop's Fables.
In the end, when we have 30 different editions of Shakespeare on line simultaneously, this will probably not even be worthy, as it hardly is today, of a footnote. . .I only answer out of respect for the process of creating these editions as soon as possible, to improve the literacy and education of the masses as soon as possible.
For those who would prefer to see that literacy and education continue to wallow in the mire, I can only say that a silence on your part creates its just reward. Your expertise dies an awful death when it is smothered by hiding your light under a bushel, as someone who is celebrated today once said:
Matthew 5:15
Neither do men light a candle, and put it under a bushel, but on
a candlestick; and it giveth light unto all that are in the house.
Mark 4:21
And he said unto them, Is a candle brought to be put under a bushel,
or under a bed? and not to be set on a candlestick?
Luke 8:16 No man, when he hath lighted a candle, covereth it with a vessel, or putteth it under a bed; but setteth it on a candlestick, that they which enter in may see the light.
Luke 11:33 No man, when he hath lighted a candle, putteth it in a secret place, neither under a bushel, but on a candlestick, that they which come in may see the light.

Source: Project Gutenberg's A Brief History of the Internet, by Michael Hart

Michael Hart - Chapter Zero

Michael Hart is trying to change Human Nature. He says Human Nature is all that is stopping the Internet from saving the world. The Internet, he says, is a primitive combination of Star Trek communicators, transporters and replicators; and can and will bring nearly everything to nearly everyone. "I type in Shakespeare and everyone, everywhere, and from now until the end of history as we know it, everyone will have a copy instantaneously, on request. Not only books, but the pictures, paintings, music. . .anything that will be digitized. . .which will eventually include it all. A few years ago I wrote some articles about 3-D replication [Stereographic Lithography] in which I told of processes, in use today, that, videotaped and played back fast-forward on a VCR, look just like something appearing in Star Trek replicators. Last month I saw an article about a stove a person could program from anywhere on the Internet. . .you could literally `fax someone a pizza' or other meals, the `faxing a pizza' being a standard joke among Internetters for years, describing one way to tell when the future can be said to have arrived."

For a billion or so people who own or borrow computers, it might be said "The Future Is Now," because they can get at 250 Project Gutenberg Electronic Library items, including Shakespeare, Beethoven, and Neil Armstrong landing on the Moon in the same year the Internet was born. This is item #250, and we hope it will save the Internet, and the world. . .and not be a futile, quixotic effort. Let's face it, a country with an Adult Illiteracy Rate of 47% is not nearly as likely to develop a cure for AIDS as a country with an Adult Literacy Rate of 99%. However, Michael Hart says the Internet has changed a lot in the last year, and not in the direction that will take the Project Gutenberg Etexts into the homes of the 47% of the adult population of the United States that is said to be functionally illiterate by the 1994 US Report on Adult Literacy.
He has been trying to ensure that there is not going to be an "Information Rich" and "Information Poor" as a result of a Feudal Dark Ages approach to this coming "Age of Information". . .he has been trying since 1971, a virtual "First Citizen" of the Internet, since he might be the first person on the Internet who was NOT paid to work on the Internet/ARPANet or its member computers.

Flashback

In either case, he was probably one of the first 100 on a fledgling Net, and certainly the first to post information of a general nature for others on the Net to download; it was the United States' Declaration of Independence. This was followed by the U.S. Bill of Rights, and then a whole Etext of the U.S. Constitution, etc. You might consider, just for the ten minutes the first two might require, the reading of the first two of these documents that were put on the Internet starting 24 years ago: and maybe reading the beginning of the third. The people who provided his Internet account thought this whole concept was nuts, but the files didn't take a whole lot of space, and the 200th Anniversary of the Revolution [of the United States against England] was coming up, and parchment replicas of all the Revolution's Documents were found nearly everywhere at the time. The idea of putting the Complete Works of Shakespeare, the Bible, the Qur'an, and more on the Net was still pure Science Fiction to any but Mr. Hart at the time. For the first 17 years of this project, the only responses received were of the order of "You want to put Shakespeare on a computer!? You must be NUTS!" and that's where it stayed until the "Great Growth Spurt" hit the Internet in 1987-88. All of a sudden, the Internet hit "Critical Mass" and there were enough people to start a conversation on nearly any subject, including, of all things, electronic books, and, for the first time, Project Gutenberg received a message saying the Etext-for-everyone concept was a good idea.
That watershed event caused a ripple effect. With others finally interested in Etext, a "Mass Marketing Approach," and such it was, was finally appropriate, and the release of Alice in Wonderland and Peter Pan signalled the beginnings of widespread production and consumption of Etexts. In Appendix A you will find a listing of these 250, in order of their release. Volunteers began popping up, right on schedule, to assist in the creation or distribution of what Project Gutenberg hoped would be 10,000 items by the end of 2001, just 30 years after the first Etext was posted on the Net.

Flash Forward

Today there are about 500 volunteers at Project Gutenberg, and they are spread all over the globe, from people doing their favorite book and then never being heard from again, to PhD's, department heads, vice-presidents, and lawyers who do reams of copyright research, and some who have done in excess of 20 Etexts pretty much by themselves; "appreciate" is too small a word for how Michael feels about these, and tears would be the only appropriate gesture. There are approximately 400 million computers today, with the traditional 1% of them being on the Internet, and the traditional ratio of about 10 users per Internet node has continued, too, as there are about 40 million people on a vast series of Internet gateways. Ratios like these have been a virtual constant through Internet development. If there is only an average of 2.5 people on each of 400M computers, that is a billion people, just in 1995. There will probably be a billion computers in the world by 2001, when Project Gutenberg hopes to have 10,000 items online. If only 10% of those computers contain the average Etexts from Project Gutenberg, that will mean Project Gutenberg's goal of giving away one trillion Etexts will be completed at that time, not counting that more than one person will be able to use any of these copies.
If the average would still be 2.5 people per computer, then only 4% of all the computers would be required to have reached one trillion. [10,000 Etexts to 100,000,000 people equals one trillion]

Hart's dream, as adequately expressed by "Grolier's" CDROM Electronic Encyclopedia, has been his signature block, with permission, for years, but this idea is now threatened by those who feel threatened by Unlimited Distribution:

    "The trend of library policy is clearly toward the ideal of making
    all information available without delay to all people."
    The Software Toolworks Illustrated Encyclopedia (TM)
    (c) 1990, 1991 Grolier Electronic Publishing, Inc.

    Michael S. Hart, Professor of Electronic Text
    Executive Director of Project Gutenberg Etext
    Illinois Benedictine College, Lisle, IL 60532
    No official connection to U of Illinois--UIUC
    hart@uiucvmd.bitnet and hart@vmd.cso.uiuc.edu
    Internet User Number 100 [approximately] [TM]
    Break Down the Bars of Ignorance & Illiteracy
    On the Carnegie Libraries' 100th Anniversary!

Human Nature, such as it is, has presented a great deal of resistance to the free distribution of anything, even air and water, over the millennia. Hart hopes the Third Millennium A.D. can be different. But it will require an evolution in human nature, and perhaps even a revolution in human nature. So far, the history of humankind has been a history of an ideal of monopoly: one tribe gets the lever, or a wheel, or copper, iron or steel, and uses it to command, control or otherwise lord it over another tribe. When there is a big surplus, trade routes begin to open up, civilizations begin to expand, and good times are had by all. When the huge surplus is NOT present, the first three estates lord it over the rest in virtually the same manner as historic figures have done through the ages: "I have got this and you don't." [Nyah nyah naa naa naa!]
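The bracketed copy-count arithmetic above checks out; a trivial sketch using the essay's own figures (the variable names are mine):

```python
# 10,000 etexts reaching 100,000,000 computers (10% of a projected
# billion) yields one trillion copies, as the essay claims.
etexts = 10_000
computers_reached = 100_000_000

total_copies = etexts * computers_reached
print(total_copies)  # 1000000000000, i.e. one trillion
```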
Now that ownership of the basic library of human thoughts is potentially available to every human being on Earth, I have been watching the various attempts to keep this from actually being available to everyone on the planet. This is what I have seen:

1. Ridicule

Those who would prefer to think their worlds would be destroyed by infinite availability of books such as Alice in Wonderland, Peter Pan, Aesop's Fables or the Complete Works of Shakespeare, Milton or others, have ridiculed the efforts of those who would give them to all free of charge by arguing about whether it should be: "To be or not to be" or "To be [,] or not to be" or "To be [;] or not to be" or "To be [:] or not to be" or whatever; and that whatever their choices are, for this earthshaking matter, no other choice should be possible to anyone else. My choice of editions is final because I have a scholarly opinion.

1A. My response has been to refuse to discuss "How many angels can dance on the head of a pin" [or many other matters of similar importance]. I know this was once considered of utmost importance, BUT IN A COUNTRY WHERE HALF THE ADULTS COULD NOT EVEN READ SHAKESPEARE IF IT WERE GIVEN TO THEM, I feel the general literacy and literary requirements overtake a decision such as theirs. If they honestly wanted the best version of Shakespeare [in their estimation] to be the default version on the Internet, they wouldn't have refused to create just such an edition, wouldn't have shot down my suggested plan to help them make it . . .for so many years. . .and, when they finally did agree, they wouldn't have let an offer of discount prices from the largest wannabe Etext provider undermine their resolve to create a super-quality public domain edition of Shakespeare.
It was an incredible commentary on the educational system that the Shakespeare edition we finally did use for a standard Internet Etext was donated by a commercial, yes, commercial vendor, who sells it for a living. In fact, I must state for the record that education, as an institution, has had very little to do with the creation and distribution of Public Domain Etexts for the public, and that contributions by commercial, capitalistic corporations have been the primary force, by a large margin, that funds Project Gutenberg. The 500 volunteers we have come exclusively from smaller, less renowned institutions of education, without any, not one that I can think of, from any of the major or near-major educational institutions of the world. It would appear that those Seven Deadly Sins listed a few paragraphs previously have gone a long way toward proving the saying that "Power corrupts and absolute power corrupts absolutely." Power certainly accrues to those who covet it, and the proof of the pudding is that all of the powerful clubs we have approached have refused to assist in the very new concept of truly Universal Education. Members of those top educational institutions managed to subscribe to our free newsletter often enough, but not one of them ever volunteered to do a book, or even to donate a dollar for what they have received, or even to send in lists of errors they say they have noticed. Not one. [There is a word for the act of complaining about something without [literally] lifting a finger.] The entire body of freely available Etexts has been a product of the "little people."

2. Cost Inflation

When Etexts were first coming out, estimates were sent around the Internet that it took $10,000 to create an Etext, and that therefore it would take $100,000,000 to create the proposed Project Gutenberg Library.
$500,000,000 was supposedly donated to create Etexts by one famous foundation, duly reported by the media, but these Etexts have not found their way into the hands, or minds, of the public, nor will they very soon, I am afraid, though I would love to be put out of business [so to say] by the act of these institutions' release of the thousands of Etexts some of them already have, and that others have been talking about for years.

My response was, has been, and will be simply to get the Etexts out there, on time, and with no budget: a simple proof that the problem does not exist. If the team of Project Gutenberg volunteers can produce this number of Etexts and provide them to the entire world's computerized population, then the zillions of dollars you hear being donated to the creation of electronic libraries by various government and private donors should be used to keep the Information Superhighway a free and productive place for all, not just for those 1% of computers that have already found a home there.

3. Graphics and Markup versus Plain Vanilla ASCII

The one thing you will see in common with ALL such graphics and markup proposals is LIMITED DISTRIBUTION as a way of life. The purpose of each one of these is, and always has been, to keep knowledge in the hands of the few and away from the minds of the many. I predict that in the not-too-distant future all materials will either be circulating on the Internet, or they will be jealously guarded by owners whom I described with the Seven Deadly Sins. If there is ever such a thing as the "Tri-corder" of Star Trek fame, I am sure there will simultaneously have to be developed a "safe" in which those who don't want a whole population to have what they have will "lock" a valuable object to ensure its uniqueness; the concept of which I am speaking is illustrated by this story: "A butler announces a delivery by very distinguished members of a very famous auction house.
The master, for he IS master, beckons him to his study desk, where the butler deposits his silver tray, containing a big triangular stamp, then turns to go.

What some of these projects with tens of millions for their "Electronic Libraries" are doing to ensure this is for THEM and not for everyone is to prepare Etexts in a manner in which no normal person would be either willing or able to read them. Shakespeare's Hamlet is a tiny file in PVASCII, small enough for half a dozen copies to fit [uncompressed!] on a $.23 floppy disk that fits in your pocket. But if it is preserved as a PICTURE of each page, then it will take so much space that it would be difficult to carry around even a single copy in that pocket unless it were on a floppy-sized optical disk, and even then I don't think it would fit.

Another way to ensure no normal person would read it is to mark it up so blatantly that human eyes have difficulty in scansion, stuttering around pages rather than sliding easily over them; the information contained in this "markup" is deemed crucial by those esoteric scholars who think it is of vital importance that a coffee cup stain appears at the lower right of a certain page, and that "Act I" be followed by [<ACT ONE>] to ensure everyone knows this is actually where an act or scene or whatever starts.

You probably would not believe how much money has had the honor of being spent on these kinds of projects a normal person is intentionally deprived of, through a mixture of just plain HIDING the files, to making the files so BIG you can't download them, to making them so WEIRD you wouldn't read them if you got them. The concept of requiring all documents to be formatted in a certain manner such that only a certain program can read them has been proposed more often than you might ever want to imagine, for the TWIN PURPOSES OF PROFIT AND LIMITED DISTRIBUTION, in a medium which requires a virtue of UNLIMITED DISTRIBUTION to keep it growing.
Every day I read articles, proposals, and proceedings for various conferences that promote LIMITED DISTRIBUTION on the Nets. . .simply to raise the prestige or money to keep some small oligarchy in power. This is truly a time of POWER TO THE PEOPLE, as people say in the United States. What we have here is a conflict between the concept that everything SHOULD be in LIMITED DISTRIBUTION and the opposing concept of UNLIMITED DISTRIBUTION. If you look over the table of contents on the next pages, you will see that each of these items stresses the greater and greater differences between a history which has been dedicated to the preservation of Limited Distribution and something so new it has no history longer than 25 years—

Source: Project Gutenberg's A Brief History of the Internet, by Michael Hart

The `-P' convention

Turning a word into a question by appending the syllable `P'; from the LISP convention of appending the letter `P' to denote a predicate (a boolean-valued function). The question should expect a yes/no answer, though it needn't. (See T and NIL.) At dinnertime:
Q: ``Foodp?''
A: ``Yeah, I'm pretty hungry.'' or ``T!''
At any time:
Q: ``State-of-the-world-P?''
A: (Straight) ``I'm about to go home.''
A: (Humorous) ``Yes, the world has a state.''
On the phone to Florida:
Q: ``State-p Florida?''
A: ``Been reading JARGON.TXT again, eh?''

[One of the best of these is a Gosperism. Once, when we were at a Chinese restaurant, Bill Gosper wanted to know whether someone would like to share with him a two-person-sized bowl of soup. His inquiry was: "Split-p soup?" -- GLS]
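The predicate convention described above comes from Lisp, where a boolean-valued function gets a trailing `p` (Common Lisp's own predicates include `evenp`, `zerop` and `stringp`). A minimal sketch of the naming pattern, in Python for convenience; the function names are made-up illustrations of the joke, not real Jargon File entries:

```python
# Imitating the Lisp "-p" predicate convention: a predicate is a
# boolean-valued function, marked with a trailing "p" (or "_p" after a
# multi-word name). Both names below are hypothetical examples.

def foodp(hour):
    """Is it a conventional mealtime? (expects a yes/no answer)"""
    return hour in (8, 13, 19)

def state_of_the_world_p(world):
    """Humorous literal reading: 'yes, the world has a state.'"""
    return world is not None

print(foodp(13))                 # True  ("Yeah, I'm pretty hungry.")
print(state_of_the_world_p({}))  # True  ("Yes, the world has a state.")
```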

Overgeneralization


A very conspicuous feature of jargon is the frequency with which techspeak items such as names of program tools, command language primitives, and even assembler opcodes are applied to contexts outside of computing wherever hackers find amusing analogies to them. Thus (to cite one of the best-known examples) Unix hackers often grep for things rather than searching for them. Many of the lexicon entries are generalizations of exactly this kind.

Hackers enjoy overgeneralization on the grammatical level as well. Many hackers love to take various words and add the wrong endings to them to make nouns and verbs, often by extending a standard rule to nonuniform cases (or vice versa). For example, because
porous => porosity
generous => generosity

hackers happily generalize:
mysterious => mysteriosity
ferrous => ferrosity
obvious => obviosity
dubious => dubiosity

Another class of common construction uses the suffix `-itude' to abstract a quality from just about any adjective or noun. This usage arises especially in cases where mainstream English would perform the same abstraction through `-iness' or `-ingness'. Thus:
win => winnitude (a common exclamation)
loss => lossitude
cruft => cruftitude
lame => lameitude

Some hackers cheerfully reverse this transformation; they argue, for example, that the horizontal degree lines on a globe ought to be called `lats' -- after all, they're measuring latitude!

Also, note that all nouns can be verbed. E.g.: "All nouns can be verbed", "I'll mouse it up", "Hang on while I clipboard it over", "I'm grepping the files". English as a whole is already heading in this direction (towards pure-positional grammar like Chinese); hackers are simply a bit ahead of the curve.

The suffix "-full" can also be applied in generalized and fanciful ways, as in "As soon as you have more than one cachefull of data, the system starts thrashing," or "As soon as I have more than one headfull of ideas, I start writing it all down." A common use is "screenfull", meaning the amount of text that will fit on one screen, usually in text mode where you have no choice as to character size. Another common form is "bufferfull".

However, hackers avoid the unimaginative verb-making techniques characteristic of marketroids, bean-counters, and the Pentagon; a hacker would never, for example, `productize', `prioritize', or `securitize' things. Hackers have a strong aversion to bureaucratic bafflegab and regard those who use it with contempt.

Similarly, all verbs can be nouned. This is only a slight overgeneralization in modern English; in hackish, however, it is good form to mark them in some standard nonstandard way. Thus:
win => winnitude, winnage
disgust => disgustitude
hack => hackification

Further, note the prevalence of certain kinds of nonstandard plural forms. Some of these go back quite a ways; the TMRC Dictionary includes an entry which implies that the plural of `mouse' is meeces, and notes that the defined plural of `caboose' is `cabeese'. This latter has apparently been standard (or at least a standard joke) among railfans (railroad enthusiasts) for many years.

On a similarly Anglo-Saxon note, almost anything ending in `x' may form plurals in `-xen' (see VAXen and boxen in the main text). Even words ending in phonetic /k/ alone are sometimes treated this way; e.g., `soxen' for a bunch of socks. Other funny plurals are `frobbotzim' for the plural of `frobbozz' (see frobnitz) and `Unices' and `Twenices' (rather than `Unixes' and `Twenexes'; see Unix, TWENEX in main text). But note that `Twenexen' was never used, and `Unixen' was not sighted in the wild until the year 2000, thirty years after it might logically have come into use; it has been suggested that this is because `-ix' and `-ex' are Latin singular endings that attract a Latinate plural. Finally, it has been suggested to general approval that the plural of `mongoose' ought to be `polygoose'.

The pattern here, as with other hackish grammatical quirks, is generalization of an inflectional rule that in English is either an import or a fossil (such as the Hebrew plural ending `-im', or the Anglo-Saxon plural suffix `-en') to cases where it isn't normally considered to apply.

This is not `poor grammar', as hackers are generally quite well aware of what they are doing when they distort the language. It is grammatical creativity, a form of playfulness. It is done not to impress but to amuse, and never at the expense of clarity.

Source: The New Hacker's Dictionary version 4.2.2, by various editors

Friday, July 24, 2020

PHP: Resource variables

Resource variables hold special handles to opened files, database connections, streams, image canvas areas, and the like (as stated in the PHP manual).

$fp = fopen('file.ext', 'r'); // fopen() opens a file on disk and returns a resource
var_dump($fp); // output: resource(2) of type (stream)

To get the type of a variable as a string, use the gettype() function:

echo gettype(1); // outputs "integer"
echo gettype(true); // outputs "boolean"
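
The two ideas above can be combined in a small sketch. Assumptions not in the notes: a writable system temp directory, and the standard functions tempnam(), is_resource(), fclose(), and unlink().

```php
<?php
// Sketch: inspecting a resource variable (assumes a writable temp directory).
$path = tempnam(sys_get_temp_dir(), 'demo'); // create a temporary file
$fp = fopen($path, 'r');                     // $fp now holds a stream resource

var_dump(is_resource($fp)); // bool(true)
echo gettype($fp), "\n";    // "resource"

fclose($fp);   // release the handle when finished
unlink($path); // clean up the temporary file
```

Note that since PHP 8, some extensions (curl, gd) return objects instead of resources, so is_resource() reports false for those handles; plain file streams from fopen() are still resources.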

Source: PHP Notes for Professionals by Goal Kicker - Internet Archive Data

PHP: Arrays

An array is like a list of values. The simplest form of an array is indexed by integer, and ordered by the index, with the first element lying at index 0.

$foo = array(1, 2, 3); // An array of integers

$bar = ["A", true, 123 => 5]; // Short array syntax, PHP 5.4+

echo $bar[0]; // Outputs "A"
echo $bar[1]; // Outputs "1" (true is cast to string)
echo $bar[123]; // Outputs 5
echo $bar[1234]; // Undefined index: outputs nothing and raises a notice (a warning since PHP 8)

Arrays can also associate keys other than integer indexes with values. In PHP, all arrays are associative arrays
behind the scenes, but when we refer to an 'associative array' distinctly, we usually mean one that contains one or
more keys that aren't integers.

$array = array();

$array["foo"] = "bar";
$array["baz"] = "quux";
$array[42] = "hello";

echo $array["foo"]; // Outputs "bar"
echo $array["baz"]; // Outputs "quux"
echo $array[42]; // Outputs "hello"
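
A common follow-up, sketched here with only standard language constructs: foreach iterates over an associative array and visits its key/value pairs in insertion order.

```php
<?php
// foreach visits entries in the order they were inserted,
// regardless of whether the keys are strings or integers.
$ages = ["alice" => 30, "bob" => 25, 42 => "answer"];

foreach ($ages as $key => $value) {
    echo "$key => $value\n";
}
// Prints:
// alice => 30
// bob => 25
// 42 => answer
```

When a key may be absent, array_key_exists() checks for it before reading, avoiding the undefined-index notice.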

Source: PHP Notes for Professionals by Goal Kicker - Internet Archive Data

IBM 1401 Data Processing System

When companies order an IBM 1401 Data Processing System, methods-programming staffs are given the responsibility of translating the requirements of management into finished applications. 1401 Programming Systems are helping cut the costs of getting the computer into operation by simplifying and expediting the work of these methods staffs. Modern, high-speed computers, such as the 1401, are marvelous electronic instruments, but they represent only portions of data processing systems. Well-tested programming languages for communication with computers must accompany the systems. It is through these languages that the computer itself is used to perform many of the tedious functions that the programmer would otherwise have to perform. A few minutes of computer time in translating the program can be equal to many, many hours of staff time in writing instructions coded in the language of the computer. The combination of a modern computer plus modern programming languages is the key to profitable data processing. This brochure explains modern IBM Programming Languages and their significance to management.

"What Is A 1401 Program?" A program is a series of instructions that direct the 1401 as it solves an application.

"What Is A Stored Program Machine?" A stored program machine is one which stores its own instructions in magnetic form and is capable of acting on those instructions to complete the application assigned. The 1401 uses a stored program.

"What Are 1401 Programming Systems?" There are two types: (1) systems that provide the programmer with a simplified vocabulary of statements to use in writing programs, and (2) pre-written programs, which take care of many of the everyday operations of the 1401.

What 1401 Programming Systems Mean To Management:

INCREASED PROGRAMMING EFFICIENCY: Programmers can concentrate on the application and results rather than on a multitude of "bookkeeping" functions, such as keeping track of storage locations.

FASTER TRANSLATION OF MANAGEMENT REQUIREMENTS INTO USABLE RESULTS: Simplified programming routines allow programmers to write more instructions in less time.

SHORTER TRAINING PERIODS: Programmers use a language more familiar to them rather than having to learn detailed machine codes.

REDUCED PROGRAMMING COSTS: Many pre-written programs are supplied by IBM, eliminating the necessity of customers' staffs writing their own.

MORE AVAILABLE 1401 TIME: Pre-written programs have already been tested by IBM, reducing tedious checking operations on the computer.

EASIER TO UNDERSTAND PROGRAMS: Programs are written in symbolic or application-oriented form instead of computer language. This enables management to communicate more easily with the programming staff.

FASTER REPORTS ON OPERATIONS: Routines such as those designed for report writing permit faster translation of management requirements into usable information.

IBM Programming Systems

Symbolic Programming Systems: These systems permit programs to be written using meaningful names (symbols) rather than actual machine language.

Autocoder: This is an advanced symbolic programming system. It allows generation of multiple machine instructions from one source statement, free-form coding, and an automatic assembly process through magnetic tape.

COBOL: COBOL is a problem-oriented programming language for commercial applications. It permits a programmer to use language based on English words and phrases in describing an application.

Input/Output Control System: This system provides the programmer with a packaged means of accomplishing input and output requirements.

Utility Programs: These are pre-written instructions to perform many of the everyday operations of an installation.

Subroutines: These are routines for multiplication, division, dozens conversion, and program error detection aids.

Tape Utilities: These are generalized instructions, particularly useful to 1401 customers who also use larger data processing systems. They facilitate the transfer of data between IBM cards, magnetic tapes, and printers. They also provide for some 1401 processing while the transfer of data is taking place.

Tape Sort Programs: Data can be sorted and classified at high speed for further processing by use of these generalized sorting routines.

Report Program Generator: The programmer uses simplified, descriptive language with which he is already familiar to obtain reports swiftly and efficiently.

FORTRAN (contraction of FORmula TRANslator): Engineers and mathematicians state problems in familiar algebraic language for solution by the computer.

RAMAC® File Organization: Routines are supplied for simplifying organization of records for storage in the 1401 Random Access File.

Here's how one of the 1401 programming systems, the Report Program Generator, works to increase programming efficiency: 1401 computers produce important reports for management in record time because of their outstanding processing and printing abilities. In addition to this rapid machine processing of input data used in reports, still more speed is achieved by the rapid preparation of programs to produce the reports. This is possible because of the IBM Report Program Generator, a unique system which permits programs to be created with a minimum of time and effort.

Source: IBM 1401 Programming Systems, by Anonymous