What were the first search engines?

The story of how search systems appeared begins in July 1945, when the American scientist Vannevar Bush published his famous article "As We May Think," in which he predicted the emergence of personal computers and formulated the idea of hypertext. Note that Vannevar Bush himself took part in creating prototypes of the search engines we use today: back in 1938 he had developed and patented a device that could quickly look up information on microfilm.

Although Vannevar Bush is considered the originator of the ideas behind search technologies and the Internet, it was other scientists who put his ideas into practice. In 1958, the Advanced Research Projects Agency (ARPA) was created in the United States, and from 1963 to 1969 its scientists worked on a completely new concept that allowed information to be transferred over a computer network.

At first this network, which allowed encrypted data to be transmitted, was intended for military use, but the level of security of the transmitted information proved too low, so the military declined to continue the development.

The idea of creating a computer network was revived only by the end of the 1980s, thanks to several US universities that managed to combine their educational library collections over a network connection.

The rapid development of the Internet began in the 1990s. In February 1993, Marc Andreessen of the National Center for Supercomputing Applications (NCSA, www.ncsa.uiuc.edu) completed the initial version of Mosaic, a program that visualized hypertext under UNIX. It had a convenient graphical interface and became the prototype of the browsers we use today, and the Internet began to gain popularity.

In the mid-1990s, to find the information one needed, it was necessary to use catalogs in which sites were listed. At that time there were not many such catalogs; they did not crawl sites, but instead ordered information by headings and topics. It is worth noting that by 1993 three search-engine bots already existed on the network. These projects were non-commercial and, after the influx of large amounts of information, could not cope with the load, so they disappeared amid the rapid growth of the Internet.

Since 1995, the leading place on the global Internet has been taken by search engines that subsequently grew very large: in the West, Google, Yahoo! and AltaVista; in Russia, Yandex, Rambler and Aport.

Let us turn to the history of Russia's search engines. Their path was not an easy one, with victories and defeats of its own.

Yandex began its development in 1990, but only in 1997 did it become the search engine we know so well.

Yandex is considered the undisputed leader in Russia: according to leading specialists, its monthly audience reach amounts to roughly half of the regular Internet audience in the country, figures that far exceed the potential audiences of Aport and Rambler. More recently, a fairly powerful search, Go Mail, emerged from another major service, the Mail.ru e-mail provider; but since the company used the Yandex algorithm, search from Mail.ru pages could be attributed to Yandex search. A recent dispute, however, forced Mail.Ru Group to move away from Yandex search, and no one yet knows the exact reasons for the split.

Yandex search takes headings into account and requires the query word to be present in the body of the document. Preference is given to documents in which the words of the query phrase occur close to each other and within one paragraph. A distinctive feature of Yandex is that search takes the morphology of the Russian language into account: for the query "photo of nature" or "nature," documents containing other forms of these words will also be returned.
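
As a rough illustration of morphology-aware matching (a toy sketch only; Yandex uses full morphological dictionaries, and the suffix list below is a purely hypothetical English stand-in), both the query and the document can be reduced to crude normal forms before comparison:

```python
# Toy illustration of morphology-aware matching (not Yandex's real algorithm):
# query and document words are reduced to a crude "stem" before comparison,
# so "nature" also matches "natural", "photos" matches "photo", and so on.
# Real systems use full morphological dictionaries, not suffix stripping.
SUFFIXES = ("ing", "ed", "es", "al", "e", "s")  # hypothetical, purely illustrative

def stem(word: str) -> str:
    """Strip one known suffix, if any, to obtain a rough normal form."""
    word = word.lower()
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def matches(query: str, document: str) -> bool:
    """Every query term must occur, in some form, in the document body."""
    document_stems = {stem(w) for w in document.split()}
    return all(stem(w) in document_stems for w in query.split())

print(matches("photo of nature", "photos of natural landscapes"))  # True
```

The point is only that matching happens on normalized word forms rather than on exact strings.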

Rambler, the first search service of the Runet, was created in the autumn of 1996 by a group of scientists from a microbiology institute in Pushchino, near Moscow. In Rambler, search was built on indexing the main words of a page: those highlighted in bold (the strong and b tags) and those that appeared in headings (the h1 tag). Unlike Yandex, Rambler's search could ignore the keywords meta tag, for which it liked to be called a "clean" search, although any real improvement in search quality was not yet noticeable; the same problem shows up in other search engines. At present Rambler's search positions have fallen sharply, and experts predict that the system will turn into an ordinary entertainment portal. The only thing keeping it afloat is its own advertising network, Blogun.
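
To illustrate the general idea of giving extra weight to words found in emphasized tags (an illustrative sketch only, not Rambler's actual code; the tag set and the boost factor are assumptions for the example), one could count terms like this:

```python
# Illustrative sketch: weight words found inside <b>, <strong> and <h1> tags
# more heavily than ordinary body text, using only the standard library.
from collections import Counter
from html.parser import HTMLParser

EMPHASIS_TAGS = {"b", "strong", "h1"}
EMPHASIS_BOOST = 3  # hypothetical weight for emphasized words

class WeightedTermExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.open_emphasis = 0
        self.weights = Counter()

    def handle_starttag(self, tag, attrs):
        if tag in EMPHASIS_TAGS:
            self.open_emphasis += 1

    def handle_endtag(self, tag):
        if tag in EMPHASIS_TAGS and self.open_emphasis:
            self.open_emphasis -= 1

    def handle_data(self, data):
        boost = EMPHASIS_BOOST if self.open_emphasis else 1
        for word in data.lower().split():
            self.weights[word] += boost

extractor = WeightedTermExtractor()
extractor.feed("<h1>Search engines</h1><p>History of <b>search</b> on the web</p>")
print(extractor.weights.most_common(3))  # "search" scores highest: <h1> plus <b>
```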

The Aport search engine was first demonstrated in February 1996 at an Agama press conference held in honor of the opening of the Russian Club; at that time it was not yet a large-scale search engine. What distinguished Aport from other search engines was that it could look for the specified keywords not only in the keywords meta tag but also in image captions (the alt attribute) and in the description meta tag. This innovation did not last long: other search engines soon did the same, and Aport now has nothing left to surprise its users with. As of 2011, Aport is most likely awaiting absorption by larger players in the search market.

Disadvantages of search engines

Search engines continue to improve their search technologies in every way they can, but unfortunately none of them, however highly developed, can boast of perfect search. The main shortcomings of today's search engines are weakly developed query-generalization systems and a heavy dependence on the choice of information sources. Insufficient informativeness can still be somewhat compensated by the abundance of search results to choose from, but teaching a computer to understand, in human language, what people want to find has not yet been achieved. Because of this, no search engine can call itself an encyclopedia. However, it is no secret that the future certainly belongs to informative search focused on handling human concepts.

Which search engine was the first in RuNet? Yandex, Aport or Rambler?

The very first search engines of the Runet (of which, according to one of the founders of Rambler, there were two or three) sank into oblivion very quickly. Among them were morphological extensions to the AltaVista system that did not even leave us their names. Therefore, we will have to choose from those that remained:

Rambler

The creation of Rambler began in 1996, when there were only a few dozen sites in the Russian segment of the Internet. Development ended in the autumn of the same year. The Rambler.ru domain was registered on September 26, and on October 8, 1996, the birthday of one of its creators, Rambler was opened to users.

Rambler is thus the very first search engine in the Runet among those that still exist.

Aport

The Aport search engine was developed by February 1996, but at that time it searched only the site russia.agama.com. The number of sites gradually grew, and by its official opening on November 11, 1997, Aport was already searching 10,000 sites. Thus Aport was one of the first search engines in the Runet, but because of its limited search scope it cannot be recognized as the oldest.

Yandex

CompTek, the company that developed Yandex, was founded in 1989. In 1993, CompTek created Yandex as a program for searching a hard disk. In 1996, the program gained the ability to search on the network. In 1997, the first search robot was written and the Runet was indexed, and on September 23, 1997, the official presentation of Yandex took place.

Yandex from CompTek is therefore not the oldest search engine, but CompTek's search technologies and its research in linguistics and morphology are the oldest in Russia.

The architecture of a search engine usually includes a search robot (crawler), an indexer and the search engine proper; these components are described in more detail below.

History

Chronology
Year   System                Event
1993   W3Catalog             Launch
       Aliweb                Launch
       JumpStation           Launch
1994   WebCrawler            Launch
       Infoseek              Launch
       Lycos                 Launch
1995   AltaVista             Launch
       Daum                  Founded
       Open Text Web Index   Launch
       Magellan              Launch
       Excite                Launch
       SAPO                  Launch
       Yahoo!                Launch
1996   Dogpile               Launch
       Inktomi               Founded
       Rambler               Founded
       HotBot                Founded
       Ask Jeeves            Founded
1997   Northern Light        Launch
       Yandex                Launch
1998   Google                Launch
1999   AlltheWeb             Launch
       GenieKnows            Founded
       Naver                 Launch
       Teoma                 Founded
       Vivisimo              Founded
2000   Baidu                 Founded
       Exalead               Founded
2003   Info.com              Launch
2004   Yahoo! Search         Final launch
       A9.com                Launch
       Sogou                 Launch
2005   MSN Search            Final launch
       Ask.com               Launch
       Nigma                 Launch
       GoodSearch            Launch
       Searchme              Founded
2006   Wikiseek              Founded
       Quaero                Founded
       Live Search           Launch
       ChaCha                Launch (beta)
       Guruji.com            Launch (beta)
2007   Wikiseek              Launch
       Sproose               Launch
       Wikia Search          Launch
       Blackle.com           Launch
2008   DuckDuckGo            Launch
       Tooby                 Launch
       Picollator            Launch
       Viewzi                Launch
       Cuil                  Launch
       Boogami               Launch
       LeapFish              Launch (beta)
       Forestle              Launch
       Vadlo                 Launch
       Powerset              Launch
2009   Bing                  Launch
       Kaz.kz                Launch
       Yebol                 Launch (beta)
       Mugurdy               Closed
       Scout                 Launch
2010   Cuil                  Closed
       Blekko                Launch (beta)
       Viewzi                Closed
2012   Wazzub                Launch
2014   Sputnik               Launch (beta)

At an early stage of the Internet's development, Tim Berners-Lee maintained a list of web servers posted on the CERN website. As sites became more and more numerous, maintaining such a list manually became ever harder. The NCSA website had a special section, "What's New!", where links to new sites were published.

The first computer program for searching on the Internet was Archie (the word "archive" without the "v"). It was created in 1990 by Alan Emtage, Bill Heelan and J. Peter Deutsch, computer science students at McGill University in Montreal. The program downloaded the lists of all files from all available anonymous FTP servers and built a database that could be searched by file name. However, Archie did not index the contents of these files, since the amount of data was so small that everything could easily be found by hand.

The development and spread of the Gopher network protocol, invented in 1991 by Mark McCahill at the University of Minnesota, led to the creation of two new search programs, Veronica and Jughead. Like Archie, they searched the file names and headers stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) allowed keyword searching of most Gopher menu headers across all Gopher listings. The Jughead program (Jonzy's Universal Gopher Hierarchy Excavation And Display) extracted menu information from specific Gopher servers. Although the name of the Archie search engine had no relation to the Archie comics, Veronica and Jughead are nevertheless characters from those comics.

By the summer of 1993 there was still no search system for the web, although numerous specialized directories were maintained by hand. Oscar Nierstrasz at the University of Geneva wrote a series of Perl scripts that periodically copied these pages and rewrote them into a standard format. This became the basis for W3Catalog, the web's first primitive search engine, launched on September 2, 1993.

Probably the first search robot written in Perl was the World Wide Web Wanderer, created by Matthew Gray of the Massachusetts Institute of Technology in June 1993. This robot built the search index Wandex. The goal of the Wanderer robot was to measure the size of the World Wide Web and to find all web pages containing the words from a query. In 1993, a second search engine, Aliweb, appeared. Aliweb did not use a search robot; instead, it expected notifications from website administrators about the presence of an index file in a specific format on their sites.

JumpStation, created in December 1993 by Jonathan Fletcher, looked for web pages and built its indices using a search robot, and offered a web form as the interface for formulating search queries. It was the first online search tool to combine the three most important functions of a search engine (crawling, indexing and the search itself). Because of the limited computer resources of the time, indexing, and therefore search, was restricted to the titles and headings of the pages found by the robot.

Search engines took part in the dot-com bubble of the late 1990s. Several companies made spectacular market entries, earning record profits at their initial public offerings. Some abandoned the public search engine market and began working only with the corporate sector, for example Northern Light.

Google took up the idea of selling keywords in 1998 from Goto.com, at the time a small company running its own search engine. The move marked the search engines' transition from competing with one another to becoming some of the most profitable commercial enterprises on the Internet. Search engines began selling the top places in their results to individual companies.

The Google search engine has held a prominent position since the early 2000s. The company achieved it thanks to the good search results produced by the PageRank algorithm, which was presented to the public in the article "The Anatomy of a Large-Scale Hypertextual Web Search Engine" by Sergey Brin and Larry Page, the founders of Google. This iterative algorithm ranks web pages based on an estimate of the number of hyperlinks pointing to a page, on the assumption that "good" and "important" pages are linked to more often than others. Google's interface is designed in a spartan style with nothing superfluous, unlike many of its competitors, who embedded their search engines in web portals. The Google search engine became so popular that imitations appeared, for example Mystery Seeker (the secret search engine).
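
To make the iterative idea concrete, here is a minimal power-iteration sketch of PageRank in Python. It is a simplification (it assumes every page has at least one outgoing link and ignores dangling pages and personalization), and the four-page graph is invented for the example; it is not Google's production algorithm.

```python
# Minimal PageRank power iteration over a tiny link graph.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # every page keeps a small base rank, the rest flows along links
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

graph = {  # hypothetical four-page web
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
print(sorted(pagerank(graph).items(), key=lambda kv: -kv[1]))  # "c" ranks highest
```

Page "c" comes out on top simply because more pages link to it, which is the intuition the paragraph above describes.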

Search for information in Russian

In 1996, search that took Russian morphology into account was implemented in the AltaVista search engine, and the original Russian search engines Rambler and Aport were launched. On September 23, 1997, the Yandex search engine was opened. On May 22, 2014, Rostelecom opened the national search engine Sputnik, which as of 2015 was in beta testing. On April 22, 2015, a Sputnik service designed specifically for children, with enhanced safety, was opened.

Methods of cluster analysis and metadata search gained great popularity. Of the international machines of this kind, the best known was Clusty, from the company Vivisimo. In 2005, with the support of Moscow State University, the Nigma search engine supporting automatic clustering was launched in Russia. In 2006, the Russian metasearch engine Quintura opened, offering visual clustering in the form of a tag cloud. Nigma also experimented with visual clustering.

How a search engine works

The main components of a search engine are the search robot (crawler), the indexer and the search engine proper.

As a rule, such systems work in stages. First the crawler fetches content, then the indexer generates a searchable index, and finally the search engine provides the functionality for searching the indexed data. To keep the search engine up to date, this indexing cycle is run again and again.

Search engines work by storing information about many web pages, which they obtain from HTML pages. The search robot, or "crawler," is a program that automatically follows all the links found on a page and extracts new ones. Starting from links or from a predefined list of addresses, the crawler looks for new documents not yet known to the search engine. A site owner can exclude certain pages with robots.txt, which can be used to prohibit the indexing of particular files, pages or directories of the site.
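
For example, a polite crawler can check a site's robots.txt rules before fetching a page. The sketch below uses Python's standard urllib.robotparser module; the rules and URLs are hypothetical examples, not taken from any real site.

```python
# Sketch: how a crawler might respect robots.txt using the standard library.
from urllib import robotparser

robots_txt = """\
User-agent: *
Disallow: /private/
Disallow: /tmp/
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

for url in ("http://example.com/index.html", "http://example.com/private/report.html"):
    print(url, "->", "fetch" if parser.can_fetch("*", url) else "skip")
# index.html may be fetched; anything under /private/ is skipped
```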

The search engine analyzes the content of each page for further indexing. Words can be extracted from headings, from the page text, or from special fields, the meta tags. The indexer is the module that analyzes a page, first breaking it into parts using its own lexical and morphological algorithms. Every element of a web page is extracted and analyzed separately. The web page data are then stored in the index database for use in subsequent queries; the index makes it possible to find information quickly in response to a user's request.

A number of search engines, Google among them, store the original page in whole or in part (the so-called cache), as well as various information about the web page. Other systems, such as AltaVista, store every word of every page found. Using the cache helps speed up the retrieval of information from pages that have already been visited. Cached pages always contain the text that the user asked for in the search query. This can be useful when the web page has been updated and no longer contains the text of the user's query, while the page in the cache is old. This situation is related to link rot and to Google's user-friendly (usability-oriented) approach, which implies returning short text fragments from the cache that contain the query text. The principle of least surprise applies: the user usually expects to see the sought words in the texts of the retrieved pages (user expectations). Besides speeding up the search, cached pages may contain information that is no longer available anywhere else.
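
The data structure at the heart of this indexing step is usually an inverted index, which maps each term to the documents containing it. The following is a toy sketch with an invented three-document corpus, not how any particular engine actually stores its data:

```python
# Toy inverted index: maps each term to the set of document ids containing it.
from collections import defaultdict

documents = {  # hypothetical mini-corpus
    1: "the history of search engines",
    2: "how a search engine indexes web pages",
    3: "web browsers and the early internet",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term].add(doc_id)

print(index["search"])   # {1, 2}
print(index["web"])      # {2, 3}
```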

The search engine proper works with the output files produced by the indexer: it accepts user queries, processes them using the index and returns the search results.

When a user enters a query into a search engine (usually as a set of keywords), the system checks its index and returns a list of the most relevant web pages (sorted by some criterion), usually with a brief annotation containing the document's title and sometimes fragments of its text. The search index is built by a special methodology based on information extracted from web pages. Since 2007, the Google search engine has allowed searching by the time the desired documents were created (via the Search Tools menu and a time range). Most search engines support the Boolean operators AND, OR and NOT in queries, which allows the list of desired keywords to be narrowed or expanded; in this case the system searches for the words or phrases exactly as entered. Some search engines offer approximate search, in which users broaden the search area by specifying the allowed distance between keywords. There is also conceptual search, which uses statistical analysis of the occurrence of the sought words and phrases in the texts of web pages; such systems make it possible to formulate queries in natural language. An example of such a search engine is the Ask.com website.
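
The Boolean operators mentioned above map directly onto set operations over the posting lists of an inverted index. A self-contained toy sketch (the hand-built posting lists are invented for the example):

```python
# Boolean query evaluation as set operations on posting lists (illustrative only).
# The index below is a hypothetical, hand-built posting list: term -> doc ids.
index = {
    "search":   {1, 2},
    "engine":   {2},
    "engines":  {1},
    "web":      {2, 3},
    "browsers": {3},
}
all_docs = {1, 2, 3}

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return all_docs - a

# "search AND web"      -> documents containing both terms
print(AND(index["search"], index["web"]))        # {2}
# "search OR browsers"  -> documents containing either term
print(OR(index["search"], index["browsers"]))    # {1, 2, 3}
# "web AND NOT engine"  -> pages about the web that do not mention "engine"
print(AND(index["web"], NOT(index["engine"])))   # {3}
```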

The usefulness of a search engine depends on the relevance of the pages it finds. Millions of web pages may contain a given word or phrase, but some of them are more relevant, popular or authoritative than others. Most search engines use ranking methods to bring the "best" results to the top of the list. Search engines decide which pages are more relevant, and in what order the results should be shown, in different ways. Search methods, like the Internet itself, change over time. Thus two main types of search engines emerged: systems of predefined and hierarchically ordered keywords, and systems in which an inverted index is generated on the basis of text analysis.
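
As the simplest possible illustration of ranking, documents can be ordered by how often the query words occur in them. This toy term-frequency scorer (with an invented mini-corpus) captures only one of the many signals real engines combine, such as links, freshness and popularity:

```python
# Toy relevance ranking: score each document by raw query-term frequency.
documents = {  # hypothetical mini-corpus
    "page1": "search engines rank pages by relevance to the search query",
    "page2": "a short history of the web",
    "page3": "how a search engine builds its index",
}

def score(query, text):
    words = text.lower().split()
    return sum(words.count(term) for term in query.lower().split())

query = "search engine"
ranked = sorted(documents, key=lambda d: score(query, documents[d]), reverse=True)
print(ranked)  # page1 and page3 rank above page2
```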

Most search engines are commercial enterprises that profit from advertising; in some search engines it is possible to buy, for a fee, the top positions in the results for specified keywords. Search engines that do not take money for the order of their results earn from contextual advertising, in which the advertising messages match the user's query. Such advertising is displayed on the page with the list of search results, and the search engine earns money with each user click on an advertising message.

Types of search engines

There are four types of search engines: those with search robots, those maintained by humans, hybrid systems and metasearch systems.

  • systems using search robots
These consist of three parts: the crawler ("bot," "robot" or "spider"), the index, and the search engine software. The crawler is needed to traverse the web and build lists of web pages; the index is a large archive of copies of web pages; the purpose of the software is to evaluate the search results. Because the search robot in this scheme constantly examines the network, the information is more up to date. Most modern search engines are systems of this type.
  • human-maintained systems (resource directories)
These search engines receive lists of web pages from people. A directory entry contains the address, the title and a brief description of the site. The resource directory searches for results only within the page descriptions submitted to it by webmasters. The advantage of directories is that all resources are checked manually, so the quality of their content will be better than the results obtained automatically by systems of the first type. But there is also a drawback: these directories are updated manually and can lag significantly behind the real state of affairs, and page rankings cannot change instantly. Examples of such systems are the Yahoo! directory, DMOZ and Galaxy.
  • hybrid systems
Search engines such as Yahoo!, Google and MSN combine the functions of systems that use search robots with those of systems maintained by humans.
  • metasearch systems
Metasearch systems combine and rank the results of several search engines at once. They were useful when each search engine had a unique index and search engines were less "smart"; now that search has improved considerably, the need for them has decreased. Examples: MetaCrawler and MSN Search.

Search engine market

Google is the most popular search engine in the world, with a market share of 68.69%. Bing takes second place with a share of 12.26%.

The most popular search engines in the world:

Search engine   Market share, July 2014   October 2014   September 2015
Google          68.69 %                   58.01 %        69.24 %
Baidu           17.17 %                   29.06 %        6.48 %
Bing            6.22 %                    8.01 %         12.26 %
Yahoo!          6.74 %                    4.01 %         9.19 %
AOL             0.13 %                    0.21 %         1.11 %
Excite          0.22 %                    0.00 %         0.00 %
Ask             0.13 %                    0.10 %         0.24 %

Asia

In the countries of East Asia and in Russia, Google is not the most popular search engine. In China, for example, the Soso search engine is more popular.

In South Korea, about 70% of residents use Naver, a domestically developed search portal, while Yahoo! Japan and Yahoo! Taiwan are the most popular search systems in Japan and Taiwan, respectively.

Russia and Russian-speaking search engines

According to LiveInternet data for June 2015 on the coverage of Russian-language search queries:

  • All-language systems:
    • Yahoo! (0.1%) and the search engines owned by this company: Inktomi, AltaVista, AlltheWeb
  • English-language and international systems:
    • AskJeeves (the Teoma engine)
  • Russian-language systems: most "Russian-language" search engines index and search texts in many languages (Ukrainian, Belarusian, English, Tatar and others). They differ from "all-language" systems, which index all documents indiscriminately, in that they mainly index resources located in domain zones where the Russian language dominates, or otherwise restrict their robots to Russian-language sites.

Some of the search engines use external search algorithms.

Quantitative data on the Google search engine

The number of Internet and search engine users, and the demands users place on these systems, are constantly growing. To increase the speed of finding the desired information, large search engines operate a large number of servers. The servers are usually grouped into server centers (data centers), and for popular search engines these centers are scattered around the world.

In October 2012, Google launched the project "Where the Internet Lives," which gives users the opportunity to get acquainted with the company's data centers.

The following is known about the operation of the Google search engine's data centers:

  • The total power of all Google data centers, as of 2011, was estimated at 220 MW.
  • When Google planned in 2008 to open a new complex in Oregon consisting of three buildings with a total area of 6.5 million m², Harper's Magazine calculated that such a large complex would consume more than 100 MW of electricity, comparable to the energy consumption of a city of 300,000 people.
  • The approximate number of Google servers in 2012 was 1,000,000.
  • Google's expenditures on data centers amounted to $1.9 billion in 2006 and $2.4 billion in 2007.

The size of the World Wide Web indexed by Google as of December 2014 was approximately 4.36 billion pages.

Search engines that take religious prohibitions into account

The global spread of the Internet and the growing popularity of electronic devices in the Arab and Muslim world, in particular in the countries of the Middle East and the Indian subcontinent, contributed to the development of local search engines that take Islamic traditions into account. Such search engines contain special filters that help users avoid prohibited sites, such as sites with pornography, and allow them to use only those sites whose content does not contradict the Islamic faith. Shortly before the Muslim month of Ramadan, in July 2013, Halalgoogling was presented to the world: a system that returns to users only "halal" (permitted) links, filtering the search results obtained from other search engines such as Google and Bing. Two years earlier, in September 2011, the I'mHalal search engine had been launched to serve users in the Middle East. However, this search service soon had to close, according to its owner, for lack of funding.

The lack of investment and the slow pace at which technology spreads in the Muslim world have hindered progress and prevented any serious Islamic search engine from succeeding. The failure of huge investments in Muslim lifestyle web projects was telling: one of them, Muxlim, received millions of dollars from investors such as Rite Internet Ventures and yet, echoing the last message from I'mHalal before it closed, put forward the dubious idea that "the next Facebook or Google may appear only in the countries of the Middle East, if you support our brilliant youth." Nevertheless, Islamic Internet experts have for years been busy defining what does or does not conform to Sharia and classifying websites as "halal" or "haram." All past and present Islamic search engines are either simply specially indexed data sets or major search engines such as Google, Yahoo and Bing fitted with a filtering system that keeps users away from haram sites, such as sites about nudity, LGBT topics, gambling and anything else whose subject matter is considered anti-Islamic.

Other religiously oriented search engines include Jewogle, the Jewish version of Google, and SeekFind.org, a Christian site whose filters protect users from content that could undermine or weaken their faith.

Personal results and filter bubbles

Many search engines, such as Google and Bing, use algorithms that selectively guess what information a user would like to see, based on the user's past actions in the system. As a result, websites show only information that agrees with the user's past interests. This effect has been called the "filter bubble."

All this leads to users receiving far less information that contradicts their point of view and becoming intellectually isolated in their own "information bubble." The "bubble effect" can thus have negative consequences for the formation of civic opinion.

Search engine bias

Although search engines are programmed to rank websites based on some combination of their popularity and relevance, experimental studies indicate that in reality various political, economic and social factors affect the search results.

Such bias can be a direct result of economic and commercial processes: companies that advertise in a search engine can become more popular in its ordinary search results as well. The removal of search results that do not comply with local laws is an example of the influence of political processes; for instance, Google will not display some neo-Nazi websites in France and Germany, where Holocaust denial is illegal.

Bias can also be a consequence of social processes, since search engine algorithms are often designed to exclude unconventional points of view in favor of more "popular" results. The indexing algorithms of the major search engines give priority to American sites.

A search bomb (Google bomb) is one example of an attempt to manipulate search results for political, social or commercial reasons.

See also

  • Qwika
  • Electronic library § Lists of libraries and search engines
  • Web Developer Toolbar

At the initial stage of the Internet's development, its users were a privileged minority and the amount of available information was relatively small. Access was held mostly by employees of major educational institutions and laboratories, and the data obtained were used for scientific purposes. Back then, using the network was not nearly as relevant as it is now.

In 1990, the British scientist Tim Berners-Lee (also the inventor of URI, URL, HTTP and the World Wide Web) created the website info.cern.ch, the world's first publicly available online catalog. From that moment on, the Internet began to gain popularity not only in the scientific community but also among ordinary owners of personal computers.

Thus the first way to make access to information resources on the Internet easier was the creation of site catalogs, in which links to resources were grouped by topic.

The first project of this kind is considered to be Yahoo!, opened in April 1994. As the number of sites in it grew rapidly, the ability to search the catalog by query was soon added. Of course, this was not yet a full-fledged search engine: the search was limited to the data contained in the catalog.

In the early stages of the Internet's development, link catalogs were used very actively, but they gradually lost their popularity. The reason is simple: even with many resources, modern catalogs cover only a small fraction of the information available on the Internet. For example, the largest catalog on the network, DMOZ (the Open Directory Project), contains information on slightly more than five million resources, which is incommensurable with Google's search database of more than eight billion documents.

The largest Russian-language catalog is the Yandex directory, which contains information on a little over one hundred and four thousand resources.

Chronology of the development of search engines

1945 - the American engineer Vannevar Bush published the ideas that led to the invention of hypertext, together with arguments for the need to develop a system for quickly extracting data from information stored in this way (the equivalent of today's search engines). The concept of the memex, the memory-extending device he introduced, contained the original ideas that were ultimately embodied in the Internet.

1960s - Gerard Salton and his group at Cornell University developed the SMART Information Retrieval System. SMART is an abbreviation of Salton's Magic Automatic Retriever of Text. Gerard Salton is considered the father of modern search technology.

1987-1989 - Archie, a search engine for indexing FTP archives, was designed. Archie was a script that automated logins to FTP servers and retrieved their file listings, which were then transferred to local files; a quick search for the necessary information was later carried out over those local files. The search was based on the standard UNIX grep command, and user access to the data was based on Telnet.

In the next version the data were divided into separate databases: one contained only the text file names; another held entries referring to the hierarchical directories of thousands of hosts; and a third linked the first two. This version of Archie was more efficient than the previous one, since the search was performed only over file names, eliminating many of the repetitions that had existed before.

The search engine grew more and more popular, and the developers thought about how to speed it up. The database mentioned above was replaced by another one based on compressed trees. The new version essentially built a database of full file paths instead of a list of file names and was much faster than before. In addition, minor changes allowed the Archie system to index web pages. Unfortunately, for various reasons, work on Archie soon ceased.

In 1993, the world's first search engine for the World Wide Web, Wandex, was created. It was based on the World Wide Web Wanderer bot developed by Matthew Gray at the Massachusetts Institute of Technology.

1993 - Martijn Koster creates Aliweb, one of the first search engines on the World Wide Web. Website owners had to add their sites to the Aliweb index themselves for them to appear in the search; since too few webmasters did so, Aliweb never became popular.

April 20, 1994 - Brian Pinkerton of the University of Washington released WebCrawler, the first bot to index pages in full. Its main difference from its predecessors was that it allowed users to search for any keywords on any web page; today this technology is the standard for every search engine. WebCrawler was the first system to become known to a wide circle of users. Unfortunately, its bandwidth was low, and during the daytime the system was often unavailable.

July 20, 1994 - Lycos opened, a serious advance in search technology created at Carnegie Mellon University. Michael Mauldin was responsible for this search engine and remained a leading specialist at Lycos Inc. Lycos opened with a catalog of 54,000 documents; beyond that, the results it returned were ranked, and it took prefixes and approximate matches into account. Lycos's main distinction, however, was its constantly updated catalog: by November 1996 it had indexed 60 million documents, more than any other search engine of the time.

January 1994 - Infoseek was founded. It was not truly innovative, but it had a number of useful additions; one popular addition was the ability to submit your own page for indexing in real time.

1995 - AltaVista launched. Upon its appearance, AltaVista quickly won users' recognition and became the leader among its peers. The system had practically unlimited bandwidth for the time; it was the first search engine in which queries could be formulated in natural language and the first to support complex queries. Users were allowed to add or delete their own URLs within 24 hours, and AltaVista also offered many search tips and recommendations. AltaVista's main achievement is considered to be its support for many languages, including Chinese, Japanese and Korean; in 1997, no other search engine on the network worked with several languages, let alone rare ones.

1996 - the AltaVista search engine launched a morphological extension for the Russian language. In the same year the first domestic search engines, Rambler.ru and Aport.ru, were launched. Their appearance marked a new stage in the development of the Runet, allowing Russian-speaking users to make queries in their native language and to react quickly to changes occurring within the network.

May 20, 1996 - Inktomi appeared along with its search engine HotBot. Its creators were two researchers from the University of California, Berkeley. When the site appeared, it quickly became popular. In October 2001, Danny Sullivan wrote an article describing how Inktomi had accidentally left its database of spam sites, which by then numbered about a million URLs, open to public access.

1997 - a turning point in the development of search engines in the West, when Sergey Brin and Larry Page of Stanford University founded Google (the project was initially called BackRub). They developed their own search engine, which gave users high-quality search that handled morphology and misspelled words and improved the relevance of the results returned for queries.

September 23, 1997 - Yandex was announced and quickly became the most popular search engine among Russian-speaking Internet users. With the launch of Yandex, domestic search engines began to compete with one another, improving site search and indexing and the presentation of results, as well as offering new services.

Thus the development and formation of search engines can be characterized by the stages listed above.

To date, three leaders have established themselves on the world market: Google, Yahoo and Bing. They have their own databases and their own search algorithms. Many other search engines use the results of these three major systems: for example, AOL uses the Google database, while AltaVista, Lycos and AlltheWeb use the Yahoo database. All the remaining search engines use the results (output) of the listed systems in various combinations.

If we carry out a similar analysis of the search engines popular in the CIS countries, we will see that Mail.Ru relays Google's search while layering its own new developments on top, and Rambler, in turn, relays Yandex. The entire Runet market can therefore essentially be divided between these two giants.

That is why, in the CIS countries, site promotion is usually carried out only in these two search engines.