The Other Side of the Search Gods' Abracadabra!


Thousands of servers... billions of web pages... the possibility of individually sifting through the WWW is nil. The search engine gods cull the information you need from the Internet, from tracking down an elusive expert to presenting the most unconventional views on the planet. Name it and click it. But beyond all the hype created about the web heavens they rule, let's attempt to keep the argument balanced. From Google to Voice of the Shuttle (for humanities research), these ubiquitous gods that enrich the net can be unfair... and do have their pitfalls. And considering the rate at which the Internet continues to grow, the problems of these gods are only exacerbated further.

Primarily, what you need to digest is the fact that search engines fall short of Mandrake's magic! They don't simply conjure URLs out of thin air; instead they send their spiders crawling across those sites that have rendered prayers (and expensive offerings!) to them for consideration. Even though a site like Google claims a massive 3 billion web pages in its database, a large portion of the web nation remains invisible to these spiders. The search gods are, simply put, ignorant of the Invisible Web: content that normal search engines can't index because the information on many web sites sits in databases that are only searchable within that site. Sites like www.imdb.com (The Internet Movie Database) and www.completeplanet.com (The Complete Planet) that cover this area are perhaps the only way you can access content from that portion of the Internet invisible to the search gods. Here, you don't perform a direct content search but search for the resources that may access the content. (Meaning: be sure to set aside considerable time for digging.)

None of the search engines indexes everything on the Web (I mean none). Ever tried finding research literature on popular search engines? From AltaVista to Yahoo, they will list thousands of sources on education, human resource development and so on, but mostly from magazines, newspapers, and organizations' own Web pages rather than from research journals and dissertations - the main sources of research literature. That's because most journals and dissertations are not yet publicly available on the Web. Thought they'd get you everything that's hosted on the web? Think again.

The Web is huge and growing exponentially. Simple searches, using a single word or phrase, will often yield thousands of "hits", most of them irrelevant. A layman going to the Internet for a piece of information has to deal with a more severe issue: too much information! And if you don't learn how to control the information overload in the results a search returns, roll out the red carpet for some frustration. A very common problem stems from sites that have many pages with similar content. For example, if a discussion thread in a forum runs to a hundred posts, there will be a hundred pages, all with similar titles, each containing a wee bit of information. Now, instead of just one link, all hundred of those darn pages will crop up in your search results, crowding out other relevant sites. Regardless of all the sophistication technology has brought in, many well-thought-out search phrases still produce list after list of irrelevant web pages. The typical search still requires sifting through dirt to find the gold. If you are not specific enough, you may get too many irrelevant hits.

As said, these search engines do not actually search the web directly; they search their own centralized databases instead. And unless a database is updated continually to index modified, moved, deleted or renamed documents, you will land amidst broken links and stale copies of web pages. Since they handle dynamic web pages, whose content changes frequently, rather poorly, the information they reference is likely to go out of date quickly. After waging their never-ending war with over-zealous promoters (spamdexers, rather), where do they find time to keep their databases current and their search algorithms tuned? No surprise if a perfectly worthwhile site goes unlisted!

Similarly, many Web search engines are undergoing rapid development and are not well documented. You will have only an approximate idea of how they work, and unknown shortcomings may cause them to miss desired information. Not to mention that, amongst the first-class information, the web also houses false, misleading, deceptive and dressed-up information produced by charlatans. The Web itself is unstable, and tomorrow they may not find you the site they found you today. Well, if you could predict them, they would not be gods... would they?! The syntax (word order and punctuation) for various types of complex searches varies somewhat from search engine to search engine, and small errors in the syntax can seriously compromise the search. For instance, try the same phrase search on different search engines and you'll see what I mean. Novices, read this line: using search engines does involve a learning curve. Many beginning Internet users become discouraged and frustrated because of these disadvantages.

As a journalist put it, "Not showing favoritism to its business clients is certainly a rare virtue in these times." Search engines have increasingly turned to two significant revenue streams. Paid placement: in addition to the main editorial-driven search results, the search engines display a second - and sometimes third - listing that's usually commercial in nature. The more you pay, the higher you'll appear in the search results. Paid inclusion: an advertiser or content partner pays the search engine to crawl its site and include the results in the main editorial listing. So? You're more likely to be in the hit list, but then again - no guarantees. Of course, there are industry leaders like Google that refuse to favor paying devotees: Google publishes paid listings but clearly marks them as 'Sponsored Links.'

The possibility of these 'for-profit' search gods (which haven't yet made much profit) taking fees to skew their searches can't be ruled out. But as a searcher, the hit list the engine provides you should obviously be ranked in order of relevancy and interest. Search command languages can often be complex and confusing, and the ranking algorithm is unique to each god: it may be based on the number of occurrences of the search phrase in a page, on whether the phrase appears in the page title, a heading, the URL itself or a meta tag, or on a weighted average of several of these relevance scores. Google, for example, uses its patented PageRank(TM) and ranks the importance of search results by examining the links that lead to a specific site. The more links that lead to a site, the higher the site is ranked. Pop on popularity!
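The link-counting idea behind PageRank can be sketched in a few lines. This is only a toy illustration of the general principle (rank flows along links, iterated until it settles); the site names, the damping value and the graph are invented, and Google's real algorithm is far more elaborate.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to.
    Returns an estimated importance score for every page."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal rank everywhere
    for _ in range(iterations):
        # every page keeps a small base rank...
        new_rank = {p: (1.0 - damping) / n for p in pages}
        # ...and passes the rest of its rank evenly to the pages it links to
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# hypothetical three-site web: c.com is linked to by both other sites
web = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
}
ranks = pagerank(web)
```

Run it and c.com, the most linked-to page, comes out with the highest score - which is exactly the "pop on popularity" effect described above.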

Alta Vista, HotBot, Lycos, Infoseek and MSN Search use keyword indexes, which give fast access to millions of documents and cover a large number of sites. But the Web itself has no index structure, and nobody knows its size with any accuracy, which does not make searching any easier - and keyword searching can be difficult to get right.
In reality, the prevalence of a certain keyword is not always in proportion to the relevance of a page.

Search engines are attempting to improve their rankings by using keywords to determine how each page will rank in the results: rather than simply counting the number of instances of a word on a page, they assign more weight to occurrences in titles, subheadings, and so on.
Now, unless you have a clear idea of what you're looking for, it may be difficult or impossible to use a keyword search, especially if the vocabulary of the subject is unfamiliar. Similarly, the concept-based search of Excite (instead of matching individual words, it groups the words you enter and attempts to determine their meaning) is a difficult task and yields inconsistent results.
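The weighting idea above can be made concrete with a small sketch. The field weights here are invented for illustration - every engine tunes its own, secret values - but the shape is the same: occurrences in a title count for more than occurrences buried in the body.

```python
# hypothetical weights: a hit in the title is worth five body hits, etc.
FIELD_WEIGHTS = {"title": 5.0, "heading": 3.0, "url": 2.0, "body": 1.0}

def relevance(query, page):
    """page: dict of field name -> text. Counts occurrences of the query
    in each field, weighted by where on the page they appear."""
    q = query.lower()
    score = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        text = page.get(field, "").lower()
        score += weight * text.count(q)
    return score

page = {
    "title": "Search engine basics",
    "heading": "How search engines rank pages",
    "body": "A search engine builds an index of the pages it crawls.",
}
```

A query like `relevance("search engine", page)` now scores the single title hit (5.0) above the single heading hit (3.0), even though each appears just once - one reason keyword prevalence alone doesn't predict a page's rank.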

Besides, who reviews or evaluates these sites for quality or authority? They are simply compiled by a computer program. These active search engines rely on computerized retrieval mechanisms called "spiders", "crawlers" or "robots" to visit Web sites on a regular basis and retrieve relevant keywords to index and store in a searchable database. And this huge database yields often unmanageable, sprawling results... results whose relevance is determined by their computers. The irrelevant sites (a high percentage of noise, as it's called), questionable ranking mechanisms and poor quality control may be the result of too little human involvement to weed out the junk. Thought human intervention would solve all these problems? Read on.

From Yahoo, the granddaddy of subject directories, to about.com, Snap.com, Magellan, NetGuide, Go Network, LookSmart, NBCi and Starting Point, all subject directories index and review documents under categories, making them more manageable. Unlike active search engines, these passive or human-selected search engines don't roam the web directly; they are human-controlled and rely on individual submissions. Perhaps the easiest to use in town, but these directories cover only a small portion of the actual number of WWW sites, and thus are certainly not your best bet if you are after specific, narrow or complex topics.

Subject designations may be arbitrary, confusing or wrong, and a search looks for matches only in the descriptions submitted. Directories never contain the full text of the pages they link to; you can only search what you see: titles, descriptions, subject categories and so on. The human-labor-intensive process limits the database's currency, size, rate of growth and timeliness, and you may have to branch through the categories repeatedly before arriving at the right page. Directories may be several months behind the times because of the need for human organization. Try looking for some obscure topic, and chances are the people who maintain the directory have excluded those pages. Obviously, machines can blindly count keywords, but they can't make common-sense judgements as humans can. But then why do human-edited directories respond with all this junk?!

And here's a word about meta search engines. A comprehensive search of the entire WWW using The Big Hub, Dogpile, Highway61, Internet Sleuth or Savvysearch, covering as many documents as possible, may sound as good an idea as one-stop shopping. Meta search engines do not create their own databases; they rely on existing active and passive search engine indexes to retrieve search results, and the very fact that they access multiple keyword indexes reduces their response time. Searching several search engines at once certainly saves your time, but at the expense of redundant, unwanted and overwhelming results... and, worse, important misses. The default search mode differs from search site to search site, so the same search is not always appropriate in different search engine software, and the quality and size of the underlying databases vary widely.
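After fanning a query out to several engines, a meta search engine has to stitch the hit lists back together - and that merge step is exactly where the redundancy described above creeps in. The sketch below shows one simple merging strategy (round-robin interleave, dedupe by URL); the engine names and results are invented, and real meta search engines use their own, undocumented, merge rules.

```python
def merge_results(result_lists):
    """result_lists: one ranked hit list per engine, each a list of
    (url, title) pairs. Interleaves them round-robin, keeping only the
    first occurrence of each URL."""
    merged, seen = [], set()
    for rank in range(max(len(r) for r in result_lists)):
        for results in result_lists:
            if rank < len(results):
                url, title = results[rank]
                if url not in seen:   # drop the duplicates engines share
                    seen.add(url)
                    merged.append((url, title))
    return merged

# hypothetical hit lists from two engines that partly overlap
engine_a = [("x.com", "X"), ("y.com", "Y")]
engine_b = [("y.com", "Y"), ("z.com", "Z")]
```

Note what the dedupe cannot fix: a page missed by every underlying engine is missed by the meta search too - the "important misses" problem.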

Weighted search engines like Ask Jeeves and RagingSearch allow the user to type queries in plain English without advanced searching knowledge, again at the expense of inaccurate and undetailed searching. Review or ranking sources like Argus Clearinghouse, eBlast and the Librarian's Index to the Internet evaluate website quality from sources they find or accept submissions from, but cover a minimal number of sites.

As a webmaster, registering your site with these biggest billboards of Times Square can get the searcher closer to bingo! Those who didn't even know you existed before are in your living room in no time!

Registering your URL is a no-brainer, considering the traffic it can send flocking to your site. It is certainly a quick and inexpensive method, yet it is only one component of an overall marketing strategy: it offers no guarantees, no instant results, and demands continued effort from the webmaster. Commerce rules the web. As a notable Internet caveman put it, "Web publishers also find dealing with search engines to be a frustrating pursuit. Everybody wants their pages to be easy for the world to find, but getting your site listed can be tough. Search sites may take a long time to list your site, may never list it at all, and may drop it after a few months for no reason. If you resubmit often, as it is very tempting to do, you may even be branded a spamdexer and barred from a search site. And as for trying to get a good ranking, forget it! You have to keep up with all the arcane and ever-changing rules of a dozen different search engines, and adjust the keywords on your pages just so...all the while fighting against the very plausible theory that in fact none of this stuff matters, and the search sites assign rankings at random or by whim."

To make the best use of Web search engines--to find what you need and avoid an avalanche of irrelevant hits--pick search engines that are well suited to your needs. And lest you want to cry "Ye immortal gods! where in the world are we?", spend a few hours becoming moderately proficient with each. Each works somewhat differently, most importantly in respect of how you broaden or narrow a search.

Finding the appropriate search engine for your particular information need can be frustrating. To use these search engines effectively, it is important to understand what they are, how they work, and how they differ. For example, while using a meta search engine, remember that each underlying engine has its own methods of displaying and ranking results. Remember, search strategies affect the results: if the user is unaware of basic search strategies, results may be spotty.

Quoting Charlie Morris (the former editor of The Web developer's journal) - "Search engines and directories survive, and indeed flourish, because they're all we've got. If you want to use the wealth of information that is the Web, you've got to be able to find what you want, and search engines and directories are the only way to do that. Getting good search results is a matter of chance. Depending on what you're searching for, you may get a meaty list of good resources, or you may get page after page of irrelevant drivel. By laboriously refining your search, and using several different search engines and directories (and especially by using appropriate specialty directories), you can usually find what you need in the end."

Search engines are very useful, no doubt, right from getting a quick overview of a topic to finding an expert's contact info; yet, verily, certain issues lie in their lap. Now, the very reason we bother about these search engines so much is because they're all we've got! Though there sure is a lot of room for improvement, the need of the hour is not to get caught in the middle of the road. By simply understanding what, how and where to seek, you'd spare yourself the fate of chanting that old Jewish proverb: "If God lived on earth, people would break his windows."

Happy searching!

Liji is a postgraduate in Software Science with a flair for writing on anything under the sun. She puts her dexterity to work writing technical articles in her areas of interest, which include Internet programming, web design and development, ecommerce and related issues.

