Tuesday, August 11, 2020

Bluetooth Security

0 comments
These days, all communication technology faces the issue of privacy and identity theft, and Bluetooth is no exception. Almost everyone knows that email services and networks require security; what Bluetooth users need to realize is that Bluetooth requires security measures as well. The good news for Bluetooth users is that the security scares, like most scares, are normally overdramatized and blown out of proportion. The truth is that these issues are easy to manage, with various measures already in place to provide security for Bluetooth technology.

It's true that some Bluetooth phones have been hacked into. Most devices that are hacked into are normally those that don't have any type of security at all. According to Bluetooth specialists, in order to hack into a Bluetooth device, the hacker must:

1. Force two paired devices to break their connection.
2. Steal the packets that are used to resend the PIN.
3. Decode the PIN.

Of course, the hacker must also be within range of the device and be using very expensive developer-type equipment. Most specialists therefore recommend a longer PIN, with 8 digits being the usual advice.

Fundamentals of security

The "pairing process" is one of the most basic levels of security for Bluetooth devices. Pairing is when two or more Bluetooth devices recognize each other by the profiles they share; in most cases both must enter the same PIN. The core Bluetooth specifications use an encryption algorithm, so once the devices have paired, their communication is encrypted as well. Until they have successfully paired, Bluetooth devices won't communicate with each other. Because of this pairing process, and because of its short range, Bluetooth technology is considered secure.

As the news has indicated, experienced hackers have developed ways to get around this basic level of security, but there are ways to counter the threat, such as installing software that prevents hackers from getting in. With Bluetooth becoming more and more popular, it's really no wonder that security is always in question. As Bluetooth gets bigger and better, security will remain something that no one takes lightly. If you've been concerned about Bluetooth security in the past, rest assured that newer devices offer stronger security. Preventing hackers from getting in is something every owner is concerned about, and manufacturers are very aware of it. Other wireless technology, such as Garmin GPS, uses complex wireless systems to let you know where you are anywhere in the world.

Monday, August 3, 2020

Computers - how they have advanced

0 comments
Even over the last decade, the advances in computer technology have been immense. Computers can do more today than ever before, faster and at a better price. Unfortunately, this also means that shopping for a computer can be confusing, as it is hard to know what you actually need and what is just an extra that's nice to have. Hopefully this article can clear up a few of the mysteries for you.

First of all, let’s look at processors. The two main companies producing processors today are Intel (Pentium processors) and AMD (Athlon processors). Although fanatics on each side swear otherwise, there is little difference between them, performance-wise. In almost all cases, more expensive processors will simply run faster.

However, it is important to consider that the performance of your processor can be limited by how much memory (RAM) your computer has. For high-end processors, you should make sure to get at least a gigabyte of RAM, although lower-end systems will be fine with less. RAM is especially important if you plan to use the system for gaming or other graphics-intensive applications.

Hard disk space, at this point, probably isn’t worth caring too much about. Even the cheapest computers now come with ridiculous amounts of hard disk space, far more than you are ever likely to use. It is much better to upgrade to a DVD re-writer drive than to upgrade your hard disk space. DVDs hold so much data that however big your hard drive is, it is unlikely to hold more than a cheap spindle of DVD-RW discs – and they’re re-writable, so you only need to buy them once.

The only other thing you really need to worry about is the graphics card (sound cards are all the same these days). Again, if you’re going to be doing anything graphically-intensive, then research this further and get a good one (be warned that it can be expensive). For the average user, though, the graphics card that comes with the processor is likely to be fine, even for many less-demanding or older games.


ACT! Software Takes Customer and Contact Management to the Next Level

0 comments
When it comes to software solutions that improve your productivity by enabling you to manage your contacts and customers, ACT! has proven over the past 20 years that it is unparalleled. According to ACT consultants, the software allows users to track sales opportunities, manage everyday responsibilities, increase effective communication, and organize contacts.

The newest version of the software, ACT 2008, features an interactive dashboard that gives you a 360-degree view of your work. You can see the big picture, and then drill down for details, while also being able to write emails, view opportunities, and schedule meetings. The dashboard is available for all versions of the ACT 2008 software (ACT, ACT Premium, and ACT Premium for Web). 

For those needing a vertical software solution, ACT has a product for real estate professionals. ACT certified consultants note that the version for real estate professionals creates integrated information about buyers, sellers, and properties that is easy to reference. It also allows Realtors to take a property listing from the inquiry stage all the way through the closing stage with exquisite detail. Most importantly, it enables real estate professionals to access critical calendar information, as well as buyer, seller, and property information through mobile computing devices. Having relevant information at your fingertips - regardless of where you are - is a critical factor to your success. 

ACT's vertical solution for financial professionals is similar to ACT for Real Estate Professionals in that it provides mobile portability, but it also assists financial service professionals in collecting important, finance-specific information on clients. In addition, it helps those in the financial field comply with company-wide and industry standards. 

ACT also has a number of partners who provide add-on solutions to the already robust ACT 2008 software. These include data and document management, addressing and shipping solutions, email and direct mail marketing add-ons, faxing capabilities, project management and sales management, import and export solutions, and graphics and mapping add-ons.

When it comes to implementing ACT software, your best bet is to engage the services of ACT consultants. Getting ACT help can take many forms. For example, because ACT certified consultants are fully trained in ACT 2008, they can review your current business practices and suggest ways to customize the software to maximize your company's productivity. They can also utilize their extensive experience to train your staff or your systems administrator, who can in turn train new employees to use the system. In addition, ACT consultants can assist you in integrating everything from handheld computers to servers. Some are even remote sales force automation experts, and can expedite the process of gaining remote access to your databases. And, should the unthinkable happen, the best ACT consultants are also specialists in database recovery.

There's no question that ACT is the premier customer and contact management solution in use today. Licensing the software and engaging the services of ACT consultants can transform the way companies work and can improve productivity and performance across the board.


Accounting Software – Which One Should You Choose?

0 comments
Who hasn't heard of accounting? Nobody, I guess. It is the part of any normal, functioning business that deals with the company's money and investments. Its history goes back to ancient Greece, where a primitive form of accounting existed. Accounting's modern history dates back to the beginning of the 19th century, when the big companies emerged. Initially (and by initially I mean up to twenty years ago), the entire process was done by hand, with paper and pencil. This changed with the emergence of personal computers, which transformed the way people looked at accounting and accounts. And as PCs evolved, so did accounting software. I will try to help you in your search for the best accounting software by listing a few sites, each with a small review of its software.

www.accsoft-ch.com/ Account Pro

This is complex yet easy-to-use accounting software. It comes in two versions: Account Pro (every feature enabled) and Account Pro Lite (a simpler version). It is multilingual and multicurrency capable, and it can be linked across up to three computers. It can work with projects and cost centers, and discount and tax transactions can be handled automatically. Another plus is that it can work with both the British and the American styles of accounting.

www.clarisys.ca/ Executive

Businesses requiring sophisticated accounting will benefit the most from Executive. Three types of invoices, multiple bank account capabilities, and a multicurrency system are just a few of the features the programmers developed for Executive. Even though Executive can initially be used on one computer only, it can be upgraded to an unlimited number of users. Other features include up to 5 currencies used simultaneously and an unlimited number of budgets. Any report can be printed in any currency, and separate transaction journals are kept for the different currencies as well.

www.simplyaccounting.com/ Simply Accounting 2005

This software has everything multi-user, large-business accounting software should have. It has specialized options for manufacturing, inventory, and service. Notably, the number of possible currencies is virtually unlimited. Data can be analyzed and accessed simultaneously by multiple users. It has a powerful search engine to help you find just the record you are looking for. Reports can be created through Microsoft® Word and Excel.

www.microsoft.com/office/ Microsoft Office Small Business Accounting 2006

The latest accounting software from the giant Microsoft has quite a few pluses: a competitive price, ease of use and learning, well-built accounting tools for small business, and seamless integration with the Microsoft® Office suite. Unfortunately, it depends a little too much on the Office suite, making it unattractive to people who use other productivity suites.

www.peachtree.com/ Peachtree by Sage Complete Accounting 2006

Peachtree Complete Accounting is robust, multi-user accounting software. It provides the user with valuable information on accounts and staff. It has advanced features like Bill Pay and Online Bank Reconciliation. Time and billing, fixed assets, and job costing are just a few of the basic features it includes.

Whatever software you might choose just remember this: Accounting is the backbone of every business.


Accessorizing Computers

0 comments
What Comes Out of the Box Is Really Just a Starter Kit

Yesterday, we spent about three hours trying to convince a client of ours that brand new computers just don't come equipped with all the things most users need in a PC. We tried to convince him that a fully functional computer is one that is personalized with specially selected hardware and software accessories, and that the computer purchased at the store doesn't come with these things. Unfortunately, all of our convincing was to no avail. Our client insisted that he would never need more than what came with his boxed product and that we were just trying to "bilk" more money out of him.

As computer consultants, it's our job and mission to make sure our clients are 100% satisfied when they walk out of our offices. But our job is made unnecessarily harder when people don't take the time to learn about computer accessories and familiarize themselves with the limitations of store-bought computers. Hopefully, by the time you finish reading this article, you'll understand the lesson we were trying to teach our client: "What comes out of the box is really just a starter kit."

The typical computer package comes with a CPU unit, keyboard, mouse, and speaker set. That may be just fine for some, but most people require more than that, especially in today's "connected" society. Today's users require full multimedia capabilities, a wide range of graphics tools, and accommodations for the various portables we now enjoy. These extras aren't included with "what comes out of the box," and the only way to get them is to accessorize.

To illustrate the importance of accessorizing, we like to use the "plain dough" analogy. Let's say that a brand new computer is a batch of plain dough, waiting to be flavored and baked into something useful. If we want to use this dough to make a delicious batch of chocolate chip cookies, we need to "accessorize" it with chocolate chips and a little brown sugar. If, on the other hand, we want to turn this dough into a warm loaf of sesame seed bread, we need to "accessorize" it with yeast and sesame seeds.

Like "plain dough," the brand new computer isn't very useful by itself. It needs accessorizing.

Depending on what's needed, accessorizing doesn't need to be expensive.  In fact, you can get away with paying a minimal amount for extra software and hardware if these accessories are for children. It's when these accessories are work requirements or when they're needed to produce works of quality for any other reason that they can become rather expensive. And this expense applies to microphones, digital cameras, PDAs, scanners, video cams, and more.

Regardless of cost, it's important to understand that accessories can become "necessities," and that the best time to get them is the moment you buy a new computer. Waiting too long to accessorize can cause more problems than necessary because while you wait, manufacturers continuously develop new technologies - technologies that your computer won't be able to accommodate in the future. Once you're ready to accessorize, the new products on the market are too advanced for your computer and they just won't work. This is a typical problem experienced by those who want to use hardware designed for Windows Vista on a Windows XP or Windows 2000 machine. 



Saturday, August 1, 2020

An Easy Way to Build a Home Network

0 comments
If you have more than one computer at home, you are probably better off connecting them or networking them together to use the same resources and receive the same internet connection.  A home network is very easy to build.  Here are some tips.

There are two main types of home networks: wired and wireless. Wired home networks are popular, but they require you to run wires from one computer to another. You might have to drill holes in a wall or run wire under the carpet.

If you don't want cords running all over the place, you can easily get rid of them with a wireless home network. Wireless home networks are extremely simple to set up. You only need a wireless router and a wireless networking card for each additional computer you would like to hook up. Most wireless networks can send and receive data at speeds of tens of megabits per second.

Wireless networks are also extremely inexpensive; you can hook up a few computers for less than $200. The great part of a wireless network is that all computers on it can use the same internet connection and other resources, such as a scanner and printer. So if you are looking for a great way to share data and resources among your home computers, build a home network.



All About Internet Fax Services

0 comments
Are you interested in internet fax services? The advantages of web fax services are many, and subscribing to one is worth the money you spend on it. This article will help you learn all about internet fax services, the advantages of online faxing, and what online fax service providers can offer their customers.

The main advantage of internet fax services is that online fax technology makes sending and receiving faxes easy and fast. One can use the service even without access to POTS (Plain Old Telephone Service), as the internet fax service acts as an intermediary that handles the fax sending and receiving process. This allows users to send and receive faxes, especially those marked "URGENT", anytime they want.

With an online fax service you need not worry about providing extra space for a fax machine. Internet fax services can save much of your office space, and you can utilize them to handle all of your specialized communication needs.

Internet fax services also save you the hard-earned money you would otherwise spend on expensive ink cartridges or on paying an office supplier to send faxes for you. You can send fax messages from the convenience of your home or office.

At most, the infrastructure you require is a computer, a reliable internet connection, and an email ID.

Sending an internet fax message is quite easy: the user simply types the message in the body of an email and presses the send button. The fax message is transferred to the network-based server of the service provider, where it is converted into a suitable file format before eventually being forwarded to the recipient's mailbox or fax machine, whichever is applicable.
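As a rough sketch of that flow: many providers accept an email addressed to the destination fax number at the provider's domain, so sending a fax can be as simple as sending an email. The gateway address below is a made-up example, and PHP's mail() function stands in for whichever mail client or library is actually used:

    <?php
        // Hypothetical email-to-fax gateway: <fax-number>@fax.example-provider.com
        $to      = "15551234567@fax.example-provider.com";
        $subject = "URGENT";                    // some gateways read the subject line
        $message = "Please review the attached offer by Friday.";

        // mail() hands the message to the mail system; the provider's server
        // converts it to a fax and forwards it to the recipient's machine
        if (mail($to, $subject, $message)) {
            echo "Fax message handed off to the gateway.";
        }
    ?>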

Online faxing is offered by several companies; most ask users to join one of their monthly subscription packages. There are also several companies that offer free internet fax services. With a free online fax service, the user cannot enjoy all the advantages or features offered by a paid internet fax service. That is, there may be limitations, such as a cap on the number of fax messages per month, or the user may only be able to receive messages. Therefore, selecting a paid online fax service is the better way to enjoy all the benefits of internet faxing.

Web faxing, or email faxing, is an efficient technology which bridges the gap between traditional fax machines and web-based communication. Internet fax services eliminate the operational cost and complexity of using fax machines and give users the flexibility and ease to send and receive fax messages. After all, the internet fax service is another communication tool that makes life easier!


Accelerator Software Provides a Faster Download

0 comments
Over half of all households that connect to the Internet have a broadband connection these days, mostly cable or DSL. That means the other half does not, and still uses dial-up. Modems are much faster than they used to be in the early days of computing, but today's websites are larger and require a lot of bandwidth to load quickly. To make matters worse for those on slower connections, even simple software updates are now often dozens of megabytes and can take a long time to download. What it all means is that modem users need a break!

Fortunately, there are things that can be done to make a connection faster. You see, the operating system software on today's computers is not optimized for fast downloads. Microsoft's primary goal is simply to make sure Windows works with all the different hardware out there. Compatibility is important, of course, but it can be frustrating when things just don't work as well as they should.

But not everything is your computer's fault. Your Internet service provider, too, is primarily concerned with reliability (good), compatibility (good), and moving as much traffic as possible with as little investment as possible (not so good). Further, while the Internet moves at electronic speed, not all connections are equal. You may have noticed that downloading pictures from the same exact website is sometimes faster and other times much slower. That may be because the server is very busy, but it can also be because your connection is taking some detours instead of directly getting on the highway.

What does it all mean? It means that between hardware and software designed for compatibility rather than performance, and Internet connections that may not necessarily favor individual dial-up customers, you may simply not get the speed your computer is capable of and that you are paying for. This is bad news for those who frequently download movies, music or pictures.

Fortunately, there are solutions, and I don't mean getting a new computer or waiting until you have broadband access. One such solution is a download accelerator. It can greatly increase the speed and reliability of your downloads. How does it do it? By optimizing the way your computer works and by making sure your data travels the fastest and most direct route possible. With a download accelerator, you are no longer at the mercy of some remote traffic-routing computer. Instead, the accelerator in your own system determines the best way to download data as quickly and efficiently as possible.

But speed is not the only benefit of a good download accelerator. How often has it happened to you that a connection times out or is interrupted before a file has downloaded completely? Probably quite often. And then you have to start all over. A download accelerator will keep track of things and will simply pick up where you left off if a connection gets dropped. Imagine how much time you save.
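Under the hood, resuming is usually plain HTTP: the client asks the server to start sending at the byte offset where the previous attempt stopped. A rough sketch of the idea using PHP's cURL extension (the URL and file name are placeholders):

    <?php
        // Resume an interrupted download (placeholder URL and file name)
        $url  = "http://www.example.com/bigfile.zip";
        $file = "bigfile.zip";

        // how many bytes did we already receive last time?
        $done = file_exists($file) ? filesize($file) : 0;

        $fp = fopen($file, "ab");                       // append to the partial file
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_FILE, $fp);            // write the body straight to disk
        curl_setopt($ch, CURLOPT_RESUME_FROM, $done);   // ask the server to skip $done bytes
        curl_exec($ch);

        curl_close($ch);
        fclose($fp);
    ?>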

The bottom line is clear. You have better things to do than wait for downloads to complete. If you want to regain control of your Internet connection, whether to accelerate downloads or just to speed up web browsing in general, a good accelerator is invaluable.


A must-know computer and internet glossary

0 comments
Computer-related things tend to have a language all their own. While you do not need to know all of it, there are many confusing words and phrases that you are going to come across sooner or later. 

Bandwidth. Bandwidth is the amount of data that your website can send each second, as well as the amount of data that the visitor to your website can receive. If either one does not have enough bandwidth, the website will load slowly.

For this reason, you should choose a host with plenty of bandwidth, as well as testing that your site doesn't take too long to download on slow connections.

Browser. A browser is the software (see below) that visitors to your site use to view it. The most popular browser is Microsoft's Internet Explorer, which comes with Windows.

Cookie. Cookies are data files that your site can save on the computer of someone who visits that site, to allow it to remember who they are if they return. 
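A small PHP sketch of the idea (the cookie name and value are arbitrary choices):

    <?php
        // ask the visitor's browser to store a cookie for 30 days...
        setcookie("visitor", "returning", time() + 30 * 24 * 60 * 60);

        // ...and on a later visit, read it back
        if (isset($_COOKIE["visitor"])) {
            echo "Welcome back!";
        }
    ?>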

FTP. File Transfer Protocol. This is a common method of uploading (see below) files to your website.

Javascript. A common language for writing 'scripts' on websites, which are small programs that make the site more interactive. Scripts are also a common cause of problems for visitors.

JPEG. Joint Photographic Experts Group. This is the name of the most popular format for pictures on the web, named after the group that came up with it. If you want to put pictures on your website, you should save them as JPEGs.

Hardware. Hardware is computer equipment that physically exists. It is the opposite of software.

Hosting. If you've got a website out there on the Internet, then you'll be paying someone for hosting. It is the service of making your site available for people to see.

HTML. HyperText Markup Language. A kind of code used to indicate how web pages should be displayed, using a system of small 'tags'. The 'b' tag, for example, causes text to appear in bold, and the 'img' tag displays a picture.

Hyperlink. A hyperlink is a piece of text on a website that can be clicked to take you to another site, or to another page on the same site. For example, if clicking your email address on your website allows someone to email you, then your email address is a hyperlink.

Programming. This is when the computer is given instructions to tell it what to do, using one of many 'programming languages'. Programming languages for the web include PHP and Perl.

Server. The server is where your website is stored, and it is the server that people are connecting to when they visit the site. Note that server refers both to the hardware and software of this system.

Software. Programs that run on the computer, or that make your website work. Microsoft Word is software, for example, as is Apache (the most popular web server software). Opposite of hardware.

Spider. Do not be scared if a spider visits your website! Spiders are simply programs used by search engines to scan your site and help them decide where it should appear when people search. It is good to be visited by spiders, as it means you should start appearing in search engines soon.

Upload. Uploading is when you transfer data from your own computer to your website. For example, you might upload your logo, or an article you've written. Opposite of download.

URL. Uniform Resource Locator. This is just a short way of saying 'web address', meaning what you have to type in to get to your website.


Tuesday, July 28, 2020

PHP Computer Programming, its Importance & Applications

0 comments


PHP, or Hypertext Preprocessor, created by Rasmus Lerdorf in 1994, is a general-purpose scripting language that lets web developers create dynamic content that interacts with databases.

PHP is one of the most popular server-side languages: the code runs on the server, communicating back and forth with it to create a dynamic web page for the user. A very large share of the websites and blogs on the internet have been created with a PHP installation; in fact, the very page on which you are reading this blog is built with PHP.

PHP is also an object-oriented programming language. It is a must-learn language for an ambitious developer who wants to create dynamic web pages or work on web application development.

If you are new to PHP, keep in mind that it is not an easy language to jump straight into without prior experience. The syntax and other PHP language elements can be quite confusing for a beginner, so getting a grasp of basic programming concepts first is the best start - a pro tip for beginners.

Learning and working with Javascript, a client-side scripting language, first is also an excellent approach. That said, there is nothing to be scared of while learning PHP: everyone learns at a different pace, and it is fine to start right away if you pick things up quickly.

Let's take a little look at why to learn PHP and why it is important.

Importance of PHP Computer Programming Language

Many ask why it is essential for a programmer or web and application developer to learn the PHP programming language.

For students of IT and software engineering, and especially for those in the web development domain, it is close to indispensable: they are rarely considered full web developers unless they know PHP and its applications well.

      PHP is a recursive acronym for "PHP: Hypertext Preprocessor" - a server-side scripting language embedded in HTML. It is used to manage dynamic content, databases, and session tracking, and even to build entire e-commerce sites or stores.

      The language integrates with a wide range of databases, including MySQL, PostgreSQL, Oracle, Sybase, Informix, and Microsoft SQL Server.

      It is surprisingly zippy in its execution, especially when compiled as an Apache module on the Unix side. Paired with a running MySQL server, it executes queries, simple or complicated, in very little time.

      PHP supports a large number of major protocols, such as POP3, IMAP, and LDAP. PHP 4 added support for Java and for distributed object architectures such as COM and CORBA, making n-tier development a possibility.

      PHP, with its C-like syntax, tries to be as forgiving as possible.

Versions of the PHP Programming Language

PHP began as a small open-source project that grew in popularity as more and more people found out how useful it was. Several PHP versions have been released so far; the first was released in 1994 by the Danish-Canadian programmer Rasmus Lerdorf.

Its versions are:

      PHP 3 and 4

      PHP 5

      PHP 6 and Unicode

      PHP 7


Hello World using PHP


It is a fun language for those with a thirst and a passion to learn it. You can create pages of any style, color, or type for websites and applications. Here you can have a look at a small, conventional PHP Hello World program.
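A minimal, conventional version of such a program (the page title and layout here are our own choice):

    <!DOCTYPE html>
    <html>
        <head>
            <title>Hello World using PHP</title>
        </head>
        <body>
            <?php
                // echo writes its argument into the page sent to the browser
                echo "Hello, World!";
            ?>
        </body>
    </html>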

You can try it yourself using this Demo link: http://tpcg.io/qFWzNE

Applications of PHP

PHP is one of the most widely used programming languages on the web. Some of its top applications are:

      PHP performs system functions: it can create, open, read, write, and close files on a system.

      PHP can handle forms: it can gather data from a form, save it to a file, send data via email, and return data to the user (a sketch follows this list).

      It is used to add, delete, and modify elements within your database.

      It can access cookie variables and set cookies.

      PHP enables programmers to restrict user access to some pages of a website.

      Encrypting data is also one of its applications.
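A minimal sketch of the form-handling item above, with one file both displaying and processing the form (the field name and visitors.txt are our own inventions):

    <?php
        // form.php - display a form and process its submission
        if ($_SERVER["REQUEST_METHOD"] === "POST") {
            $name = $_POST["name"] ?? "";       // gather the submitted data

            // save it to a file
            file_put_contents("visitors.txt", $name . "\n", FILE_APPEND);

            // return data to the user
            echo "Thanks, " . htmlspecialchars($name) . "!";
        } else {
            echo '<form method="post">
                      Name: <input type="text" name="name">
                      <input type="submit" value="Send">
                  </form>';
        }
    ?>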


Prerequisites for learning PHP

If you are all set to learn the PHP programming language, you should have at least a basic understanding of these prerequisites: programming, the internet, databases, and MySQL.

Ready to learn the PHP computer programming language? You can learn it at W3Schools; follow the link: https://www.w3schools.com/php/default.asp


    Written by: zahid_chaudhry 

Note: This article, written by Zahid Chaudhry, is the property of Bor3d.net and may not be copied, printed, or shared except via the original URL of this post or with the written permission of a Bor3d.net member. All rights belong to Bor3d.net.


Saturday, July 25, 2020

Differences between C and C++

0 comments
In C++ there are only two variants of the function main: int main() and int main(int argc, char **argv).

The return type of main is int, and not void;
The function main cannot be overloaded (for other than the abovementioned signatures);
It is not required to use an explicit return statement at the end of main. If omitted main returns 0;
The value of argv[argc] equals 0;
The `third char **envp parameter' is not defined by the C++ standard and should be avoided. Instead, the global variable extern char **environ should be declared providing access to the program's environment variables. Its final element has the value 0;
A C++ program ends normally when the main function returns. Using a function try block (cf. section 10.11) for main is also considered a normal end of a C++ program. When a C++ program ends normally, destructors (cf. section 9.2) of globally defined objects are activated. A function like exit(3) does not normally end a C++ program and using such functions is therefore deprecated.
According to the ANSI/ISO definition, `end of line comment' is implemented in the syntax of C++. This comment starts with // and ends at the end-of-line marker. The standard C comment, delimited by /* and */, can still be used in C++:

    int main()
    {
        // this is end-of-line comment
        // one comment per line

        /* this is standard-C comment,
           covering multiple lines */
    }


Despite the example, it is advised not to use C type comment inside the body of C++ functions. Sometimes existing code must temporarily be suppressed, e.g., for testing purposes. In those cases it's very practical to be able to use standard C comment. If such suppressed code itself contains such comment, it would result in nested comment-lines, resulting in compiler errors. Therefore, the rule of thumb is not to use C type comment inside the body of C++ functions (alternatively, #if 0 until #endif pair of preprocessor directives could of course also be used).




C++ uses very strict type checking. A prototype must be known for each function before it is called, and the call must match the prototype. The program

    int main()
    {
        printf("Hello World\n");
    }


often compiles under C, albeit with a warning that printf() is an unknown function. But C++ compilers (should) fail to produce code in such cases. The error is of course caused by the missing #include <stdio.h> (which in C++ is more commonly included via the #include <cstdio> directive).

And while we're at it: as we've seen, in C++ main always uses the int return value. Although it is possible to define int main() without explicitly defining a return statement, within main it is not possible to use a return statement without an explicit int-expression. For example:

    int main()
    {
        return;     // won't compile: expects int expression, e.g.
                    // return 1;
    }
In C++ it is possible to define functions having identical names but performing different actions. The functions must differ in their parameter lists (and/or in their const attribute). An example is given below:

    #include <stdio.h>

    void show(int val)
    {
        printf("Integer: %d\n", val);
    }
    void show(double val)
    {
        printf("Double: %lf\n", val);
    }
    void show(char const *val)
    {
        printf("String: %s\n", val);
    }

    int main()
    {
        show(12);
        show(3.1415);
        show("Hello World!\n");
    }
In the above program three functions show are defined, only differing in their parameter lists, expecting an int, double and char *, respectively. The functions have identical names. Functions having identical names but different parameter lists are called overloaded. The act of defining such functions is called `function overloading'.

The C++ compiler implements function overloading in a rather simple way. Although the functions share their names (in this example show), the compiler (and hence the linker) use quite different names. The conversion of a name in the source file to an internally used name is called `name mangling'. E.g., the C++ compiler might convert the prototype void show (int) to the internal name VshowI, while an analogous function having a char * argument might be called VshowCP. The actual names that are used internally depend on the compiler and are not relevant for the programmer, except where these names show up in e.g., a listing of the content of a library.

Some additional remarks with respect to function overloading:
Do not use function overloading for functions doing conceptually different tasks. In the example above, the functions show are still somewhat related (they print information to the screen).

However, it is also quite possible to define two functions lookup, one of which would find a name in a list while the other would determine the video mode. In this case the behavior of those two functions has nothing in common. It would therefore be more practical to use names that suggest their actions: say, findname and videoMode.
C++ does not allow identically named functions to differ only in their return values, as it is always the programmer's choice to either use or ignore a function's return value. E.g., the fragment

    printf("Hello World!\n");


provides no information about the return value of the function printf. Two functions printf which only differ in their return types would therefore not be distinguishable to the compiler.
In chapter 7 the notion of const member functions is introduced (cf. section 7.7). Here it is merely mentioned that classes normally have so-called member functions associated with them (see, e.g., chapter 5 for an informal introduction to the concept). Apart from overloading member functions using different parameter lists, it is then also possible to overload member functions by their const attributes. In those cases, classes may have pairs of identically named member functions, having identical parameter lists. Then, these functions are overloaded by their const attribute. In such cases only one of these functions must have the const attribute.
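As a brief preview of such a pair (this fragment is our own illustration, not an example taken from chapter 7):

    #include <stdio.h>

    class Counter
    {
        int d_value = 0;

        public:
            int value() const   // selected for const Counter objects:
            {                   // inspection only
                return d_value;
            }
            int value()         // selected for non-const Counter objects:
            {                   // this one also advances the counter
                return ++d_value;
            }
    };

    int main()
    {
        Counter counter;
        Counter const frozen;

        printf("%d %d\n", counter.value(), frozen.value());
    }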
In C++ it is possible to provide `default arguments' when defining a function. These arguments are supplied by the compiler when they are not specified by the programmer. For example:

    #include <stdio.h>

    void showstring(char *str = "Hello World!\n");

    int main()
    {
        showstring("Here's an explicit argument.\n");

        showstring();           // in fact this says:
                                // showstring("Hello World!\n");
    }


The possibility to omit arguments in situations where default arguments are defined is just a nice touch: it is the compiler who supplies the lacking argument unless it is explicitly specified at the call. The code of the program will neither be shorter nor more efficient when default arguments are used.

Functions may be defined with more than one default argument:

    void two_ints(int a = 1, int b = 4);

    int main()
    {
        two_ints();             // arguments:  1, 4
        two_ints(20);           // arguments: 20, 4
        two_ints(20, 5);        // arguments: 20, 5
    }


When the function two_ints is called, the compiler supplies one or two arguments whenever necessary. A statement like two_ints(,6) is, however, not allowed: when arguments are omitted they must be on the right-hand side.

Default arguments must be known at compile-time since at that moment arguments are supplied to functions. Therefore, the default arguments must be mentioned at the function's declaration, rather than at its implementation:

    // sample header file
    extern void two_ints(int a = 1, int b = 4);

    // code of function in, say, two.cc
    void two_ints(int a, int b)
    {
        ...
    }


It is an error to supply default arguments in function definitions. When the function is used by other sources the compiler reads the header file rather than the function definition. Consequently the compiler has no way to determine the values of default function arguments. Current compilers generate compile-time errors when detecting default arguments in function definitions.

In C++ all zero values are coded as 0. In C NULL is often used in the context of pointers. This difference is purely stylistic, though one that is widely adopted. In C++ NULL should be avoided (as it is a macro, and macros can --and therefore should-- easily be avoided in C++, see also section 8.1.4). Instead 0 can almost always be used.

Almost always, but not always. As C++ allows function overloading (cf. section 2.5.4) the programmer might be confronted with an unexpected function selection in the situation shown in section 2.5.4:

    #include <stdio.h>

    void show(int val)
    {
        printf("Integer: %d\n", val);
    }
    void show(double val)
    {
        printf("Double: %lf\n", val);
    }
    void show(char const *val)
    {
        printf("String: %s\n", val);
    }

    int main()
    {
        show(12);
        show(3.1415);
        show("Hello World!\n");
    }


In this situation a programmer intending to call show(char const *) might call show(0). But this doesn't work, as 0 is interpreted as int and so show(int) is called. But calling show(NULL) doesn't work either, as C++ usually defines NULL as 0, rather than ((void *)0). So, show(int) is called once again. To solve these kinds of problems the new C++ standard introduces the keyword nullptr representing the 0 pointer. In the current example the programmer should call show(nullptr) to avoid the selection of the wrong function. The nullptr value can also be used to initialize pointer variables. E.g.,

    int *ip = nullptr;      // OK
    int value = nullptr;    // error: value is no pointer



2.5.7: The `void' parameter list

In C, a function prototype with an empty parameter list, such as

    void func();


means that the argument list of the declared function is not prototyped: for functions using this prototype the compiler does not warn against calling func with any set of arguments. In C the keyword void is used when it is the explicit intent to declare a function with no arguments at all, as in:

    void func(void);


As C++ enforces strict type checking, in C++ an empty parameter list indicates the total absence of parameters. The keyword void is thus omitted.


2.5.8: The `#define __cplusplus'

Each C++ compiler which conforms to the ANSI/ISO standard defines the symbol __cplusplus: it is as if each source file were prefixed with the preprocessor directive #define __cplusplus.

We shall see examples of the usage of this symbol in the following sections.


2.5.9: Using standard C functions

Normal C functions, e.g., which are compiled and collected in a run-time library, can also be used in C++ programs. Such functions, however, must be declared as C functions.

As an example, the following code fragment declares a function xmalloc as a C function:

    extern "C" void *xmalloc(int size);


This declaration is analogous to a declaration in C, except that the prototype is prefixed with extern "C".

A slightly different way to declare C functions is the following:

    extern "C"
    {
        // C-declarations go in here
    }


It is also possible to place preprocessor directives at the location of the declarations. E.g., a C header file myheader.h which declares C functions can be included in a C++ source file as follows:

    extern "C"
    {
        #include <myheader.h>
    }


Although these two approaches may be used, they are actually seldom encountered in C++ sources. A more frequently used method to declare external C functions is encountered in the next section.


2.5.10: Header files for both C and C++

The combination of the predefined symbol __cplusplus and the possibility to define extern "C" functions offers the ability to create header files for both C and C++. Such a header file might, e.g., declare a group of functions which are to be used in both C and C++ programs.

The setup of such a header file is as follows:

    #ifdef __cplusplus
    extern "C"
    {
    #endif

    /* declaration of C-data and functions are inserted here. E.g., */
    void *xmalloc(int size);

    #ifdef __cplusplus
    }
    #endif


Using this setup, a normal C header file is enclosed by extern "C" { which occurs near the top of the file and by }, which occurs near the bottom of the file. The #ifdef directives test for the type of the compilation: C or C++. The `standard' C header files, such as stdio.h, are built in this manner and are therefore usable for both C and C++.

In addition C++ headers should support include guards. In C++ it is usually undesirable to include the same header file twice in the same source file. Such multiple inclusions can easily be avoided by including an #ifndef directive in the header file. For example:

    #ifndef MYHEADER_H_
    #define MYHEADER_H_
        // declarations of the header file is inserted here,
        // using #ifdef __cplusplus etc. directives
    #endif


When this file is initially scanned by the preprocessor, the symbol MYHEADER_H_ is not yet defined. The #ifndef condition succeeds and all declarations are scanned. In addition, the symbol MYHEADER_H_ is defined.

When this file is scanned next while compiling the same source file, the symbol MYHEADER_H_ has been defined and consequently all information between the #ifndef and #endif directives is skipped by the compiler.

In this context the symbol name MYHEADER_H_ serves only for recognition purposes. E.g., the name of the header file can be used for this purpose, in capitals, with an underscore character instead of a dot.

Apart from all this, the custom has evolved to give C header files the extension .h, and to give C++ header files no extension. For example, the standard iostreams cin, cout and cerr are available after including the header file iostream, rather than iostream.h. In the Annotations this convention is used with the standard C++ header files, but not necessarily everywhere else.

There is more to be said about header files. Section 7.11 provides an in-depth discussion of the preferred organization of C++ header files. In addition, starting with the C++2a standard modules are available resulting in a somewhat more efficient way of handling declarations than offered by the traditional header files. In the C++ Annotations modules are covered in chapter 7, section 7.12.


2.5.11: Defining local variables

Although already available in the C programming language, local variables should only be defined once they're needed. Although doing so requires a little getting used to, eventually it tends to produce more readable, maintainable and often more efficient code than defining variables at the beginning of compound statements. We suggest to apply the following rules of thumb when defining local variables:
Local variables should be created at `intuitively right' places, such as in the example below. This does not only entail the for-statement, but also all situations where a variable is only needed, say, half-way through the function.
More in general, variables should be defined in such a way that their scope is as limited and localized as possible. When avoidable, local variables are not defined at the beginning of functions but rather where they're first used.
It is considered good practice to avoid global variables. It is fairly easy to lose track of which global variable is used for what purpose. In C++ global variables are seldom required, and by localizing variables the well known phenomenon of using the same variable for multiple purposes, thereby invalidating each individual purpose of the variable, can easily be prevented.

If considered appropriate, nested blocks can be used to localize auxiliary variables. However, situations exist where local variables are considered appropriate inside nested statements. The just mentioned for statement is of course a case in point, but local variables can also be defined within the condition clauses of if-else statements, within selection clauses of switch statements and condition clauses of while statements. Variables thus defined are available to the full statement, including its nested statements. For example, consider the following switch statement:

    #include <stdio.h>

    int main()
    {
        switch (int c = getchar())
        {
            case 'a':
            case 'e':
            case 'i':
            case 'o':
            case 'u':
                printf("Saw vowel %c\n", c);
            break;

            case EOF:
                printf("Saw EOF\n");
            break;

            case '0' ... '9':
                printf("Saw number character %c\n", c);
            break;

            default:
                printf("Saw other character, hex value 0x%2x\n", c);
        }
    }
Note the location of the definition of the character `c': it is defined in the expression part of the switch statement. This implies that `c' is available only to the switch statement itself, including its nested (sub)statements, but not outside the scope of the switch.

The same approach can be used with if and while statements: a variable that is defined in the condition part of an if and while statement is available in their nested statements. There are some caveats, though:
The variable definition must result in a variable which is initialized to a numeric or logical value;
The variable definition cannot be nested (e.g., using parentheses) within a more complex expression.

The latter point of attention should come as no big surprise: in order to be able to evaluate the logical condition of an if or while statement, the value of the variable must be interpretable as either zero (false) or non-zero (true). Usually this is no problem, but in C++ objects (like objects of the type std::string (cf. chapter 5)) are often returned by functions. Such objects may or may not be interpretable as numeric values. If not (as is the case with std::string objects), then such variables can not be defined at the condition or expression clauses of condition- or repetition statements. The following example will therefore not compile:

    if (std::string myString = getString())     // assume getString returns
    {                                           // a std::string value
        // process myString
    }


The above example requires additional clarification. Often a variable can profitably be given local scope, but an extra check is required immediately following its initialization. The initialization and the test cannot both be combined in one expression. Instead two nested statements are required. Consequently, the following example won't compile either:

    if ((int c = getchar()) && strchr("aeiou", c))
        printf("Saw a vowel\n");


If such a situation occurs, either use two nested if statements, or localize the definition of int c using a nested compound statement:

    if (int c = getchar())              // nested if-statements
        if (strchr("aeiou", c))
            printf("Saw a vowel\n");

    {                                   // nested compound statement
        int c = getchar();
        if (c && strchr("aeiou", c))
            printf("Saw a vowel\n");
    }



2.5.12: The keyword `typedef'

The keyword typedef is still used in C++, but is not required anymore when defining union, struct or enum definitions. This is illustrated in the following example:

    struct SomeStruct
    {
        int a;
        double d;
        char string[80];
    };


When a struct, union or other compound type is defined, the tag of this type can be used as type name (this is SomeStruct in the above example):

    SomeStruct what;
    what.d = 3.1415;



2.5.13: Functions as part of a struct

In C++ we may define functions as members of structs. Here we encounter the first concrete example of an object: as previously described (see section 2.4), an object is a structure containing data while specialized functions exist to manipulate those data.

A definition of a struct Point is provided by the code fragment below. In this structure, two int data fields and one function draw are declared.

    struct Point            // definition of a screen-dot
    {
        int x;              // coordinates
        int y;              // x/y
        void draw();        // drawing function
    };


A similar structure could be part of a painting program and could, e.g., represent a pixel. With respect to this struct it should be noted that:
The function draw mentioned in the struct definition is a mere declaration. The actual code of the function defining the actions performed by the function is found elsewhere (the concept of functions inside structs is further discussed in section 3.2).
The size of the struct Point is equal to the size of its two ints. A function declared inside the structure does not affect its size. The compiler implements this behavior by allowing the function draw to be available only in the context of a Point.

The Point structure could be used as follows:

    Point a;        // two points on
    Point b;        // the screen

    a.x = 0;        // define first dot
    a.y = 10;       // and draw it
    a.draw();

    b = a;          // copy a to b
    b.y = 20;       // redefine y-coord
    b.draw();       // and draw it


As shown in the above example, a function that is part of the structure may be selected using the dot (.) operator (the arrow (->) operator is used when pointers to objects are available). This is therefore identical to the way data fields of structures are selected.

The idea behind this syntactic construction is that several types may contain functions having identical names. E.g., a structure representing a circle might contain three int values: two values for the coordinates of the center of the circle and one value for the radius. Analogously to the Point structure, a Circle may now have a function draw to draw the circle.
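Sketched in the same style as Point (this fragment is our own illustration, not part of the Annotations):

    struct Circle           // definition of a circle
    {
        int x;              // coordinates of
        int y;              // the center
        int radius;         // its radius
        void draw();        // drawing function
    };

Here Circle::draw would draw a circle, while Point::draw draws a dot: identically named functions, each available only in the context of its own structure.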


2.5.14: Evaluation order of operands

Traditionally, the evaluation order of expressions of operands of binary operators is, except for the boolean operators and and or, not defined. C++ changed this for postfix expressions, assignment expressions (including compound assignments), and shift operators:
Expressions using postfix operators (like index operators and member selectors) are evaluated from left to right (do not confuse this with postfix increment or decrement operators, which cannot be concatenated (e.g., variable++++ does not compile)).
Assignment expressions are evaluated from right to left;
Operands of shift operators are evaluated from left to right.

In the following examples first is evaluated before second, before third, before fourth, whether they are single variables, parenthesized expressions, or function calls:

    first.second
    fourth += third = second += first
    first << second << third << fourth
    first >> second >> third >> fourth


In addition, when overloading an operator, the function implementing the overloaded operator is evaluated like the built-in operator it overloads, and not in the way function calls are generally ordered.

Source: C++ Annotations Version 11.4.0 Frank B. Brokken University of Groningen, PO Box 407, 9700 AK Groningen The Netherlands Published at the University of Groningen

C++'s history

0 comments
The first implementation of C++ was developed in the 1980s at the AT&T Bell Labs, where the Unix operating system was created. C++ was originally a `pre-compiler', similar to the preprocessor of C, converting special constructions in its source code to plain C. Back then this code was compiled by a standard C compiler. The `pre-code', which was read by the C++ pre-compiler, was usually located in a file with the extension .cc, .C or .cpp. This file would then be converted to a C source file with the extension .c, which was thereupon compiled and linked.

The nomenclature of C++ source files remains: the extensions .cc and .cpp are still used. However, the preliminary work of a C++ pre-compiler is nowadays usually performed during the actual compilation process. Often compilers determine the language used in a source file from its extension. This holds true for Borland's and Microsoft's C++ compilers, which assume a C++ source for an extension .cpp. The GNU compiler g++, which is available on many Unix platforms, assumes for C++ the extension .cc.

The fact that C++ used to be compiled into C code is also visible from the fact that C++ is a superset of C: C++ offers the full C grammar and supports all C-library functions, and adds to this features of its own. This makes the transition from C to C++ quite easy. Programmers familiar with C may start `programming in C++' by using source files having extensions .cc or .cpp instead of .c, and may then comfortably slip into all the possibilities offered by C++. No abrupt change of habits is required.


Source: C++ Annotations Version 11.4.0 Frank B. Brokken University of Groningen, PO Box 407, 9700 AK Groningen The Netherlands Published at the University of Groningen

the machine-readable text: methods of conversion

0 comments

Although the Workshop did not include a systematic examination of the methods for converting texts from paper (or from facsimile images) into machine-readable form, various speakers nevertheless touched upon this matter. For example, WEIBEL reported that OCLC has experimented with a merging of multiple optical character recognition systems that will reduce errors from an unacceptable rate of 5 characters out of every 1,000 to an acceptable rate of 2 characters out of every 1,000.

Pamela ANDRE presented an overview of NAL's Text Digitization Program and Judith ZIDAR discussed the technical details. ZIDAR explained how NAL purchased hardware and software capable of performing optical character recognition (OCR) and text conversion and used its own staff to convert texts. The process, ZIDAR said, required extensive editing and project staff found themselves considering alternatives, including rekeying and/or creating abstracts or summaries of texts. NAL reckoned costs at $7 per page. By way of contrast, Ricky ERWAY explained that American Memory had decided from the start to contract out conversion to external service bureaus. The criteria used to select these contractors were cost and quality of results, as opposed to methods of conversion. ERWAY noted that historical documents or books often do not lend themselves to OCR. Bound materials represent a special problem. In her experience, quality control—inspecting incoming materials, counting errors in samples—posed the most time-consuming aspect of contracting out conversion. ERWAY reckoned American Memory's costs at $4 per page, but cautioned that fewer cost-elements had been included than in NAL's figure.


Source: Project Gutenberg's LOC Workshop on Electronic Texts, by Library of Congress

TPC, The Phone Company

0 comments
My apologies for using the United States as an example so many times, but…most of my experience has been in the US.
Asynchronous Availability of Information
One of the major advantages of electronic information is that you don't have to synchronize your schedule with anyone else's.
This is very important. Just this week I have been waiting for a power supply for one of my computers, simply because the schedule of the person who has it was out of sync with the schedule of the person picking it up. The waste has been enormous: trips all the way across an entire town wasted, while the computer lies unused.
The same thing happens with libraries and stores of all kinds around the world. How many times have you tried a phone call, a meeting, a purchase, a repair, a return or a variety of other things, and ended up not making the connection?
No longer, with things that are available electronically over the Nets. You don't have to wait until the door of the library swings open to get that book you want for an urgent piece of research; you don't have to wait until a person is available to send them an instant message; you don't have to wait for the evening news on tv….
This is called Asynchronous Communication…meaning those schedules no longer have to match exactly for a meaningful and quick conversation to take place. A minute here, there or wherever can be saved instead of wasted, and the whole communication still travels at near-instantaneous speed, without the cost of ten telegrams, ten phone calls, etc.
You can be watching television and jump up and put a few minutes into sending, or answering, your email and would not miss anything but the commercials.
"Commercials" bring to mind another form of asynchronous communication…taping a tv or radio show and watching a show in 40 minutes instead of an hour because you do not have to sit through 1 minute of "not-show" per 2 minutes of show. No only to you not have to be home on Thursday night to watch your favorite TV show any more, but those pesky commercials can be edited out, allowing you to see three shows in the time it used to take to watch two.
This kind of efficiency can have a huge effect on you or your children. . .unless you WANT them to see 40 ads per hour on television, or spend hours copying notes from an assortment of library books carried miles from, and back to, the libraries. Gone are the piles of 3x5 cards that past students and scholars heaped up in their efforts to organize mid-term papers through 9, 12, 16 or 20 years of institutionalized education. Whole rainforests of trees can be saved, not to mention the billions of hours of an entire population's educated scribbling that should have been spent between the ears instead of between paper and hand, cramping the thought and style of generations upon generations of those of us without photographic memories to take the place of the written word.
Now we all can have photographic memories; we can quote, with total accuracy, millions of 3x5 cards' worth of huge encyclopedias of information, all without getting up for any reason other than eating, drinking and stretching.
Research in this area indicates that 90% of the time previous generations spent on research papers went to traipsing through the halls, stairways and bookstacks of libraries; searching through 10 to 100 books for each one selected for further research; searching 10-100 pages for each quote worthy of making it into the sacred piles of 3x5 cards; then searching the card piles for those fit for the even more sacred sheets of paper a first draft was written on. Even counting the fanatical dedication of those who go through several drafts before a presentation draft is finally achieved, the researchers agree that 90% of this kind of work is spent in "hunting and gathering" the information and only 10% of the time is spent "digesting" it.
If you understand that civilization was based on the new invention called "the plow," which changed the habits of "hunting and gathering" peoples into civilized cities…then you might be able to understand the changes the computer and computer networks are making for those using them, instead of the primitive hunting and gathering jobs we used to spend 90% of our time on.
In the mid-19th century the United States was over 90% an agrarian economy, spending nearly all of its effort raising food to feed an empty belly. The mid-20th century's advances reversed that ratio, so that only 10% was being used for the belly, 90% for civilization.
The same thing will be said for feeding the mind, if our civilization ever gets around to deciding that the majority of our research time should no longer be spent on the physical, rather than the mental, portion of the educational process.
Think of it this way: if it takes only 10% as long to do the work to write a research paper, we are likely to get either 10 times as many research papers, or papers which are 10 times as good, or some combination…just as we ended up with 10 times as much food for the body when we turned from hunting and gathering to agriculture at the beginnings of civilization, we would expect a similar transition to a civilization of the future.
***
If mankind is defined as the animal that thinks, then thinking more and better increases the degree to which we are the human species, and decreasing our ability to think decreases our humanity…and yet I am living in what a large number of people consider the prime example of an advanced country, where half the adult population can't read at a functional level. [From the US Adult Literacy Report of 1994]



Source: Project Gutenberg's A Brief History of the Internet, by Michael Hart

why URLs aren't U

0 comments
This chapter discusses why URLs aren't U:
Why Universal Resource Locators Are Not Universal
When I first tried the experimental Gopher sites, I asked the inventors of Gopher whether their system could be oriented to also support FTP, should a person be more inclined to go after something already researched, rather than the "browsing" that was being done so often on those Gopher servers.
The answer was technically "yes," but realistically "no," in that while Gophers COULD be configured such that every file would be accessible by BOTH Gopher and FTP, the real intent of Gopher was to bypass FTP and eventually replace it as the primary method of surfing the Internet.
I tried to explain to them that "surfing" the Internet is much more time-consuming, as well as wasteful of bandwidth [this at a time when all bandwidth was still free, and we were only trying to make things run faster, as opposed to actually saving money].

Source: Project Gutenberg's A Brief History of the Internet, by Michael Hart

Internet As Chandelier

0 comments
------------------------- ORIGINAL MESSAGE -------------------------
Hart undoubtedly saw academia as a series of dark brown dream shapes, disorganized, nightmarish, each with its own set of rules for nearly everything: style of writing, footnoting, limited subject matter, and each with little reference to the others.
------------------------------ REPLY ------------------------------
What he wanted to see was knowledge in the form of a chandelier, with each subject area powered by the full intensity of the flow of information, and each sending sparks of light to other areas, which would then incorporate and reflect them to others; a never-ending flexion and reflection, an illumination of the mind, soul and heart of Wo/Mankind as could not be rivalled by a diamond of the brightest and purest clarity.
Instead, he saw petty feudal tyrants, living in dark, poorly lit, poorly heated, well-defended castles: living on a limited diet, a diet of old food, stored away for long periods of time, salted or pickled or rotted or fermented. Light from the outside isn't allowed in, for with it could come the spears and arrows of life; the purpose of the castle was to keep the noble life in, and all other forms of life out. Thus the nobility would continue a program of inbreeding that would inevitably be outclassed by an entirely random reflection of the world's gene pool.
A chandelier sends light in every direction, light of all colors and intensities. No matter where you stand, there are sparkles, some of which are aimed at you, and you alone, some of which are also seen by others: yet, there is no spot of darkness, neither are there spots of overwhelming intensity, as one might expect a sparkling source of lights to give off. Instead, the area is an evenly lit paradise, with direct and indirect light for all, and at least a few sparkles for everyone, some of which arrive, pass and stand still as we watch.
But the system is designed to eliminate sparkles, reflections, or any but the most general lighting. Scholars are encouraged into a style and location of writing which guarantee that 99 and 44/100ths percent of the people who read their work will be colleagues, already part of the inbred nobility of their fields.
We are already aware that most of our great innovations are made by leaps from field to field: the great thinkers apply an item here, in this field, which was gleaned from that field, and thus are created the leaps which open new fields and widen the fields of human endeavor in general.
Yet, our petty nobles, cased away in their casements, encased in their tradition, always reject the founding of these new fields, fearing their own fields can only be dimmed by comparison. This is true, but only by their own self-design. If their field were open to light from the outside, then the new field would be part of their field, but by walling up the space around themselves, a once new and shining group of enterprising revolutionaries could only condemn themselves to awaiting the ravages of time, tarnish and ignorance as they become ignorant of the outside world while the outside world becomes ignorant of them.
So, I plead with you, for your sake, my sake, for everyone's, to open windows in your mind, in your field, in your writing and in your thinking; to let illumination both in and out, to come from underneath and from behind the bastions of your defenses, and to embrace the light and the air, to see and to breathe, to be seen and to be breathed by the rest of Wo/Mankind.
Let your light reflect and be reflected by the other jewels in a crown of achievement more radiant than anything we have ever had the chance to see or to be before. Join the world!
[chandel2.txt]
A Re-Visitation to the Chandelier by Michael S. Hart
Every so often I get a note from a scholar with questions and comments about the Project Gutenberg Edition of this or that. Most of the time this appears to be either idle speculation (since there is never any further feedback about which passages this or that edition handles better in the eyes of particular scholars), or feedback of the "holier than thou" variety, in which the scholar claims to have found errors in our edition, which the scholar then refuses to enumerate.
As for the first, there can certainly be little interest in a note that proves, even after follow-up queries, to be of that idle brand of inquiry.
As to the second, we are always glad to receive a correction; that is one of the great powers of etext, that corrections can be made easily and quickly compared to paper editions, with the corrections made available to those who already have the previous editions, at no extra charge.
However, when someone is an expert scholar in a field, they do have a certain responsibility to make their inquiries of some reasonable variety, with a reasonable input, in order to have a reasonable output. To complain that there is a problem without pointing out the problem has a name in a rich and powerful vocabulary I do not feel is appropriate for this occasion. We have put an entirely out-of-proportion cash reward on these errors at one time or another, and still have not received any indication that a scholar has actually ever found them, which would not be more difficult than finding errors in any other etexts, especially ones not claiming a beginning accuracy of only 99.9%.
However, if these corrections WERE forthcoming, then the 99.9 would soon approach 99.95, which is the reference error level referred to several times in the Library of Congress Workshop on Electronic Text Proceedings.
On the other hand, just as Project Gutenberg's efficiency would drop dramatically if we insisted our first edition of a book were over 99.5% accurate, so too would efficiency drop dramatically if we were ever to involve ourselves in any type of discussion resembling "How many angels can dance on a pinhead?" The fact is that our editions are NOT targeted to an audience specifically interested in whether Shakespeare would have said:
"To be or not to be"
"To be, or not to be"
"To be; or not to be"
"To be: or not to be"
"To be—or not to be"
This kind of conversation is, and should be, limited to the few dozen to few hundred scholars who are properly interested. A book designed for access by hundreds of millions cannot spend that amount of time on an issue that is of minimal relevance, at least minimal to 99.9% of the potential readers. However, we DO intend to distribute a wide variety of Shakespeare, and the contributions of such scholars would be much appreciated, were they ever given, just as we have released several editions of the Bible, Paradise Lost and even Aesop's Fables.
In the end, when we have 30 different editions of Shakespeare online simultaneously, this will probably not even be worthy, as it hardly is today, of a footnote. . .I only answer out of respect for the process of creating these editions as soon as possible, to improve the literacy and education of the masses as soon as possible.
For those who would prefer to see that literacy and education continue to wallow in the mire, I can only say that silence on your part creates its own just reward. Your expertise dies an awful death when it is smothered by hiding your light under a bushel, as someone who is celebrated today once said:
Matthew 5:15 Neither do men light a candle, and put it under a bushel, but on a candlestick; and it giveth light unto all that are in the house.
Mark 4:21 And he said unto them, Is a candle brought to be put under a bushel, or under a bed? and not to be set on a candlestick?
Luke 8:16 No man, when he hath lighted a candle, covereth it with a vessel, or putteth it under a bed; but setteth it on a candlestick, that they which enter in may see the light.
Luke 11:33 No man, when he hath lighted a candle, putteth it in a secret place, neither under a bushel, but on a candlestick, that they which come in may see the light.




Source: Project Gutenberg's A Brief History of the Internet, by Michael Hart

Michael Hart - Chapter Zero

0 comments
Michael Hart is trying to change Human Nature. He says Human Nature is all that is stopping the Internet from saving the world. The Internet, he says, is a primitive combination of Star Trek communicators, transporters and replicators, and can and will bring nearly everything to nearly everyone.

"I type in Shakespeare and everyone, everywhere, and from now until the end of history as we know it—everyone will have a copy instantaneously, on request. Not only books, but the pictures, paintings, music. . .anything that can be digitized. . .which will eventually include it all. A few years ago I wrote some articles about 3-D replication [Stereographic Lithography] in which I told of processes, in use today, that, videotaped and played back fast-forward on a VCR, look just like something appearing in Star Trek replicators. Last month I saw an article about a stove a person could program from anywhere on the Internet. . .you could literally `fax someone a pizza' or other meals, the `faxing a pizza' being a standard joke among Internetters for years, describing one way to tell when the future can be said to have arrived."

For a billion or so people who own or borrow computers, it might be said "The Future Is Now," because they can get at 250 Project Gutenberg Electronic Library items, including Shakespeare, Beethoven, and Neil Armstrong landing on the Moon in the same year the Internet was born. This is item #250, and we hope it will save the Internet, and the world. . .and not be a futile, quixotic effort. Let's face it: a country with an Adult Illiteracy Rate of 47% is not nearly as likely to develop a cure for AIDS as a country with an Adult Literacy Rate of 99%.

However, Michael Hart says the Internet has changed a lot in the last year, and not in a direction that will take the Project Gutenberg Etexts into the homes of the 47% of the adult population of the United States that is said to be functionally illiterate by the 1994 US Report on Adult Literacy. He has been trying to ensure that there is not going to be an "Information Rich" and "Information Poor" as a result of a Feudal Dark Ages approach to this coming "Age of Information". . .he has been trying since 1971, a virtual "First Citizen" of the Internet, since he might be the first person on the Internet who was NOT paid to work on the Internet/ARPANet or its member computers.

Flashback

In either case, he was probably one of the first 100 people on a fledgling Net, and certainly the first to post information of a general nature for others on the Net to download; it was the United States' Declaration of Independence. This was followed by the U.S. Bill of Rights, and then a whole Etext of the U.S. Constitution, etc. You might consider, just for the ten minutes the first two might require, reading the first two of these documents that were put on the Internet starting 24 years ago, and maybe the beginning of the third.

The people who provided his Internet account thought this whole concept was nuts, but the files didn't take a whole lot of space, and the 200th Anniversary of the Revolution [of the United States against England] was coming up, and parchment replicas of all the Revolution's Documents were found nearly everywhere at the time. The idea of putting the Complete Works of Shakespeare, the Bible, the Q'uran, and more on the Net was still pure Science Fiction to anyone but Mr. Hart at the time. For the first 17 years of this project, the only responses received were of the order of "You want to put Shakespeare on a computer!?
You must be NUTS!" and that's where it stayed until the "Great Growth Spurt" hit the Internet in 1987-88. All of a sudden, the Internet hit "Critical Mass" and there were enough people to start a conversation on nearly any subject, including, of all things, electronic books; and, for the first time, Project Gutenberg received a message saying the Etext-for-everyone concept was a good idea.

That watershed event caused a ripple effect. With others finally interested in Etext, a "Mass Marketing Approach," and such it was, was finally appropriate, and the release of Alice in Wonderland and Peter Pan signalled the beginnings of widespread production and consumption of Etexts. In Appendix A you will find a listing of these 250, in order of their release. Volunteers began popping up, right on schedule, to assist in the creation or distribution of what Project Gutenberg hoped would be 10,000 items by the end of 2001, just 30 years after the first Etext was posted on the Net.

Flash Forward

Today there are about 500 volunteers at Project Gutenberg, spread all over the globe: from people who do their favorite book and are never heard from again, to PhDs, department heads, vice-presidents, and lawyers who do reams of copyright research, and some who have done in excess of 20 Etexts pretty much by themselves; "appreciate" is too small a word for how Michael feels about these people, and tears would be the only appropriate gesture.

There are approximately 400 million computers today, with the traditional 1% of them being on the Internet, and the traditional ratio of about 10 users per Internet node has continued, too, as there are about 40 million people on a vast series of Internet gateways. Ratios like these have been a virtual constant through Internet development. If there is only an average of 2.5 people on each of 400M computers, that is a billion people, just in 1995. There will probably be a billion computers in the world by 2001, when Project Gutenberg hopes to have 10,000 items online.

If only 10% of those computers contain the average Etexts from Project Gutenberg, that will mean Project Gutenberg's goal of giving away one trillion Etexts will be completed at that time, not counting that more than one person will be able to use any of these copies. If the average were still 2.5 people per computer, then only 4% of all the computers would be required to have reached one trillion. [10,000 Etexts to 100,000,000 people equals one trillion]

Hart's dream, as adequately expressed by "Grolier's" CDROM Electronic Encyclopedia, has been his signature block, with permission, for years, but this idea is now threatened by those who feel threatened by Unlimited Distribution:

| The trend of library policy is clearly toward
| the ideal of making all information available
| without delay to all people.
|
| The Software Toolworks Illustrated Encyclopedia (TM)
| (c) 1990, 1991 Grolier Electronic Publishing, Inc.

Michael S. Hart, Professor of Electronic Text
Executive Director of Project Gutenberg Etext
Illinois Benedictine College, Lisle, IL 60532
No official connection to U of Illinois—UIUC
hart@uiucvmd.bitnet and hart@vmd.cso.uiuc.edu
Internet User Number 100 [approximately] [TM]
Break Down the Bars of Ignorance & Illiteracy
On the Carnegie Libraries' 100th Anniversary!
Human Nature, such as it is, has presented a great deal of resistance to the free distribution of anything, even air and water, over the millennia. Hart hopes the Third Millennium A.D. can be different, but it will require an evolution in human nature, and perhaps even a revolution in human nature. So far, the history of humankind has been a history of an ideal of monopoly: one tribe gets the lever, or a wheel, or copper, iron or steel, and uses it to command, control or otherwise lord it over another tribe. When there is a big surplus, trade routes begin to open up, civilizations begin to expand, and good times are had by all. When the huge surplus is NOT present, the first three estates lord it over the rest in virtually the same manner as historic figures have done through the ages: "I have got this and you don't." [Nyah nyah naa naa naa!]

***

***

Now that ownership of the basic library of human thoughts is potentially available to every human being on Earth, I have been watching the various attempts to keep this from actually being available to everyone on the planet. This is what I have seen:

1. Ridicule

Those who would prefer to think their worlds would be destroyed by the infinite availability of books such as Alice in Wonderland, Peter Pan, Aesop's Fables or the Complete Works of Shakespeare, Milton or others, have ridiculed the efforts of those who would give them to all free of charge, by arguing about whether it should be "To be or not to be" or "To be [,] or not to be" or "To be [;] or not to be"/"To be [:] or not to be" or whatever; and that whatever their choices are, for this earthshaking matter, no other choice should be possible to anyone else. My choice of editions is final because I have a scholarly opinion.

1A. My response has been to refuse to discuss "How many angels can dance on the head of a pin" [or many other matters of similar importance]. I know this was once considered of utmost importance, BUT IN A COUNTRY WHERE HALF THE ADULTS COULD NOT EVEN READ SHAKESPEARE IF IT WERE GIVEN TO THEM, I feel the general literacy and literary requirements overtake a decision such as theirs. If they honestly wanted the best version of Shakespeare [in their estimation] to be the default version on the Internet, they wouldn't have refused to create just such an edition, wouldn't have shot down my suggested plan to help them make it. . .for so many years. . .nor, when they finally did agree, would they have let an offer of discount prices from one of the largest wannabe Etext providers undermine their resolve to create a super-quality public domain edition of Shakespeare.

It was an incredible commentary on the educational system that the Shakespeare edition we finally did use for a standard Internet Etext was donated by a commercial—yes—commercial vendor, who sells it for a living. In fact, I must state for the record that education, as an institution, has had very little to do with the creation and distribution of Public Domain Etexts for the public, and that contributions by commercial, capitalistic corporations have been the primary force, by a large margin, that funds Project Gutenberg. The 500 volunteers we have come exclusively from smaller, less renowned institutions of education, without any, not one that I can think of, from any of the major or near-major educational institutions of the world.
It would appear that those Seven Deadly Sins listed a few paragraphs previously have gone a long way toward proving the saying that "Power corrupts, and absolute power corrupts absolutely." Power certainly accrues to those who covet it, and the proof of the pudding is that all of the powerful clubs we have approached have refused to assist in the very new concept of truly Universal Education. Members of those top educational institutions managed to subscribe to our free newsletter often enough, but not one of them ever volunteered to do a book, or even to donate a dollar for what they have received, or even to send in the lists of errors they say they have noticed. Not one. [There is a word for the act of complaining about something without [literally] lifting a finger.] The entire body of freely available Etexts has been a product of the "little people."

2. Cost Inflation

When Etexts were first coming out, estimates were sent around the Internet that it took $10,000 to create an Etext, and that therefore it would take $100,000,000 to create the proposed Project Gutenberg Library. $500,000,000 was supposedly donated to create Etexts by one famous foundation, duly reported by the media, but these Etexts have not found their way into the hands, or minds, of the public, nor will they very soon, I am afraid, though I would love to be put out of business [so to say] by these institutions' release of the thousands of Etexts some of them already have, and that others have been talking about for years.

My response was, has been, and will be simply to get the Etexts out there, on time, and with no budget: a simple proof that the problem does not exist. If the team of Project Gutenberg volunteers can produce this number of Etexts and provide them to the entire world's computerized population, then the zillions of dollars you hear being donated to the creation of electronic libraries by various government and private donors should be used to keep the Information Superhighway a free and productive place for all, not just for the 1% of computers that have already found a home there.

3. Graphics and Markup versus Plain Vanilla ASCII

The one thing you will see in common with ALL such graphics and markup proposals is LIMITED DISTRIBUTION as a way of life. The purpose of each one of these is, and always has been, to keep knowledge in the hands of the few and away from the minds of the many. I predict that in the not-too-distant future all materials will either be circulating on the Internet, or they will be jealously guarded by owners whom I described with the Seven Deadly Sins.

If there is ever such a thing as the "Tri-corder" of Star Trek fame, I am sure there simultaneously has to be developed a "safe" in which those who don't want a whole population to have what they have will "lock" a valuable object to ensure its uniqueness; the concept of which I am speaking is illustrated by this story: "A butler announces a delivery, by very distinguished members of a very famous auction house. The master—for he IS master—beckons him to his study desk, where the butler deposits his silver tray, containing a big triangular stamp, then turns to go."

What some of these projects with tens of millions for their "Electronic Libraries" are doing to ensure this is for THEM and not for everyone is to prepare Etexts in a manner in which no normal person would be either willing or able to read them. Shakespeare's Hamlet is a tiny file in PVASCII, small enough for half a dozen copies to fit [uncompressed!]
on a $.23 floppy disk that fits in your pocket. But if it is preserved as a PICTURE of each page, then it will take so much space that it would be difficult to carry around even a single copy in that pocket, unless it were on a floppy-sized optical disk, and even then I don't think it would fit.

Another way to ensure no normal person will read it is to mark it up so blatantly that human eyes have difficulty in scansion, stuttering around pages rather than sliding easily over them; the information contained in this "markup" is deemed crucial by those esoteric scholars who think it is of vital importance that a coffee cup stain appears at the lower right of a certain page, and that "Act I" be followed by [<ACT ONE>] to ensure everyone knows this is actually where an act or scene or whatever starts.

You probably would not believe how much money has had the honor of being spent on these kinds of projects, whose results a normal person is intentionally deprived of through a mixture ranging from just plain HIDING the files, to making the files so BIG you can't download them, to making them so WEIRD you wouldn't read them if you got them. The concept of requiring all documents to be formatted in a certain manner, such that only a certain program can read them, has been proposed more often than you might ever want to imagine, for the TWIN PURPOSES OF PROFIT AND LIMITED DISTRIBUTION, in a medium which requires a virtue of UNLIMITED DISTRIBUTION to keep it growing.

Every day I read articles, proposals and proceedings for various conferences that promote LIMITED DISTRIBUTION on the Nets. . .simply to raise the prestige or money to keep some small oligarchy in power. This is truly a time of POWER TO THE PEOPLE, as people say in the United States. What we have here is a conflict between the concept that everything SHOULD be in LIMITED DISTRIBUTION and the opposing concept of UNLIMITED DISTRIBUTION. If you look over the table of contents on the next pages, you will see that each of these items stresses the greater and greater differences between a history which has been dedicated to the preservation of Limited Distribution and something so new it has no history longer than 25 years.
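As a rough, back-of-the-envelope check of Hart's file-size claim (the figures here are illustrative assumptions, not from his text): a Plain Vanilla ASCII Hamlet runs to roughly 180 KB, so a 1.44 MB floppy holds about eight uncompressed copies (1,440 / 180 = 8), in line with the half-dozen claim; page images at even a modest 50 KB per page for a play of some 200 pages come to roughly 10 MB, several floppies for a single copy.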


Source: Project Gutenberg's A Brief History of the Internet, by Michael Hart