
Compiling a semantic kernel online. Step-by-step instructions for preparing the semantic kernel

Hello everyone! When you run a blog or content site, there is always a need to compile a semantic kernel for the site, for a cluster, or for individual articles. For convenience and consistency, it is better to work on the semantic kernel according to a well-established scheme.

In this article we'll consider:

  • how the semantic kernel for an article is collected;
  • which services can and should be used;
  • how to fit the keys into an article;
  • my personal experience with key selection.

How to collect a semantic kernel online

  1. First of all, we need the Wordstat service from Yandex. Here we will make an initial sample of possible keys.

In this article, I will collect keys on the topic "how to lay laminate." You can use this instruction for preparing a semantic kernel on any topic in the same way.

  2. If our article is on the topic "How to lay laminate," we enter this query into wordstat.yandex.ru to obtain information about its frequency.

As we can see, in addition to the target query, we get a lot of similar queries containing the word "laminate." Here you can weed out everything unnecessary: all the keys that will not be covered in our article. For example, we will not write on related topics such as "how much does it cost to lay laminate," "incorrectly laid laminate," and so on.

To get rid of many obviously unsuitable queries, I recommend using the "-" (minus) operator.

  3. We append the minus operator, followed by all the off-topic words.

  4. Now we select everything that remains and copy the queries into Notepad or Word.

  5. Paste everything into the Word file, skim through it, and delete everything that will not be covered in our article. If you suspect stray queries remain, check for them with the key combination Ctrl + F. A window opens (a side panel on the left) where we enter the words we are looking for.

The first part of the work is complete. Now we can check our draft of the semantic kernel against Yandex for exact ("clean") frequency; the quotation-mark operator will help us here.

If there are only a few words, it is easy to do this right in Wordstat by putting each phrase in quotes and reading off the exact frequency (quotes show how many queries contained exactly this phrase, without additional words). But if, as in our example of a semantic core for an article or site, there are a lot of words, it is better to automate this work using the Mutagen service.

To get rid of the numbers, use the following steps in the Word document.

  1. Ctrl + A - selects the entire contents of the document.
  2. Ctrl + H - opens the find-and-replace window.
  3. Enter ^# in the first line and press "Replace All." This will delete all the digits from our document.

Be careful with keys that contain numbers: the steps above can alter such keys.
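If you prefer to script this step instead of using Word, here is a minimal Python sketch (my own illustration, not part of the original workflow; the file names are assumptions). It removes only the trailing frequency count, so keys that legitimately contain digits, such as "laminate 33 class," stay intact:

```python
import re

# Lines copied from Wordstat usually look like "how to lay laminate  5432".
# We cut only the trailing count, leaving digits inside the phrase alone.
def strip_frequency(line: str) -> str:
    return re.sub(r"\s+\d+\s*$", "", line).strip()

with open("keys_raw.txt", encoding="utf-8") as src:
    phrases = [strip_frequency(line) for line in src if line.strip()]

with open("keys_clean.txt", "w", encoding="utf-8") as dst:
    dst.write("\n".join(phrases))
```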

Selecting a semantic kernel for a site / article online

So, I have already written about the Mutagen service in detail; here we will continue preparing the semantic kernel.

  1. We go to the Mutagen site and use this service to compile the semantic kernel, since I have not yet found a better alternative.
  2. First, we check the exact frequency. To do this, go to "Parser Wordstat" → "Mass Parsing."


  3. Paste all the selected words from our document into the parser window (Ctrl + C and Ctrl + V) and choose "Wordstat frequency in quotes."

This automation costs a total of 2 kopecks per phrase! If you are picking a semantic kernel for an online store, this approach will save you a huge amount of time for mere pennies!

  4. We click to start the check and, as a rule, in 10-40 seconds (depending on the number of words) you can download the words with their exact "in quotes" frequency already attached.
  5. The output file has the .csv extension and opens in Excel. We start filtering the data to compile the semantic kernel online.


  • We add a third column; it will hold the competition values (next step).
  • We put a filter on all three columns.
  • We sort the "Frequency" column in descending order.
  • Everything with a frequency below 10 is removed (a scripted version of this filtering is sketched below).
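For large files this filtering is easy to script. A sketch with pandas, assuming the export has "Phrase" and "Frequency" columns and a semicolon delimiter (check these against your actual file):

```python
import pandas as pd

# Load the frequency export; delimiter and column names are assumptions.
df = pd.read_csv("frequencies.csv", sep=";")

df = df.sort_values("Frequency", ascending=False)  # descending sort
df = df[df["Frequency"] >= 10]                     # drop everything below 10

# Add an empty column for the competition values we will fill in next.
df["Competition"] = None
df.to_csv("kernel_filtered.csv", sep=";", index=False)
```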

We now have a list of keys that can be used in the text of the article, but first we need to check them for competition. After all, if a topic has already been covered on the Internet inside and out, does it make sense to write an article for that key? The probability that our article will reach the top for it is extremely small!

  6. To check the competition of the semantic kernel online, we go to the "Competition" section.


  7. We check each query in turn and enter the competition value into the corresponding column of our Excel file.

Checking one key phrase costs 30 kopecks.

After the first top-up of your balance, 10 free checks become available every day.

To determine which phrases are worth writing an article for, take the best frequency-to-competition ratio.

An article is worth writing if:

  • the frequency is at least 300;
  • the competition is no higher than 12 (the lower, the better); a small sketch applying these thresholds follows below.
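Once the competition column is filled in, these thresholds are easy to apply programmatically. A sketch continuing the pandas example above (the column names are still my assumption):

```python
import pandas as pd

df = pd.read_csv("kernel_filtered.csv", sep=";")  # file from the previous sketch

# Keep only phrases worth writing an article for:
# frequency of at least 300 and competition no higher than 12.
worth = df[(df["Frequency"] >= 300) & (df["Competition"] <= 12)].copy()

# For equal frequency, lower competition is better, so rank by a simple
# frequency-to-competition ratio (clip avoids division by zero).
worth["score"] = worth["Frequency"] / worth["Competition"].clip(lower=1)
print(worth.sort_values("score", ascending=False))
```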

Compiling the semantic kernel online and using low-competition phrases will bring you traffic. If you have a new site, traffic will not appear immediately; you will have to wait 4 to 8 months.

In almost any subject you can find mid- and low-frequency keys with competition as low as 1 to 5; with such keys it is realistic to receive 50 or more visitors a day.

Clustering the queries of the semantic kernel will also help you build the right site structure.

Where to insert the semantic kernel in the text

After collecting the semantic kernel for the site, it is time to work the key phrases into the text. Here are some recommendations for beginners and for those who "do not believe" in the existence of search traffic.

Rules for writing keywords into the text

  • it is enough to use each key once;
  • words can be declined by case and swapped around;
  • phrases can be diluted with other words; it is good when all key phrases are diluted and readable;
  • prepositions and question words ("how," "why," and so on) can be removed or replaced;
  • the characters "-" and ":" can be inserted into a phrase.

For example:
There is the key "how to lay laminate with your own hands." In the text it may look like this: "...in order to lay laminate boards with your own hands, we need..." or like this: "Everyone who has tried to lay laminate with their own hands...".

Some phrases already contain others. For example, the phrase "how to lay laminate with your own hands in the kitchen" already contains the key phrase "how to lay laminate with your own hands." In this case, the second may be omitted, as it is already present in the text. But if there are few keys, it is better to use it in the text, in the Title, or in the Description.

  • if a phrase cannot be worked into the text naturally, leave it out and do not force it (at least two phrases can be used in the Title and Description without writing them into the body of the article);
  • one phrase, the one with the best frequency and competition, becomes the title of the article (in webmaster terms, the H1); it is enough to use this phrase once in the body of the text.

Contraindications when writing keys

  • do not break a key phrase with a comma (only in extreme cases) or with a period;
  • do not insert a key into the text in its exact form if it ends up looking unnatural (unreadable).

Title and page description

Title and Description are the title and the description of a page. They are not visible in the article itself; Yandex and Google show them to the user in the search results.

Rules for writing them:

  • the title and description should be "journalistic," that is, attractive enough to earn the click;
  • they should contain thematic (query-relevant) text; for this, fit the key phrases (diluted) into the title and the description.

The general character limits in the All in One SEO Pack plugin are the following (a small length-checking sketch follows the list):

  • Title - 60 characters (including spaces).
  • Description - 160 characters (including spaces).
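A tiny sketch that checks these limits before you paste the tags into the plugin (the 60/160 values come from this article and are guidelines, not hard limits):

```python
LIMITS = {"Title": 60, "Description": 160}

def check_meta(title: str, description: str) -> list[str]:
    """Return a list of length problems, empty if both tags fit."""
    problems = []
    for name, text in (("Title", title), ("Description", description)):
        if len(text) > LIMITS[name]:
            problems.append(f"{name} is {len(text)} chars, limit is {LIMITS[name]}")
    return problems

print(check_meta(
    "How to lay laminate with your own hands: a step-by-step guide",
    "Laying laminate yourself: choosing underlay, preparing the floor, and a full walkthrough.",
))
```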

You can check your own article, or one you ordered from a copywriter, for plagiarism using an online uniqueness-checking service.

With that, we have figured out what to do with the semantic kernel after compiling it. In conclusion, I will share my own experience.

After compiling the semantic kernel according to the instructions - my experience

You might think that I am telling you something implausible. So as not to be unfounded, here is a screenshot of this site's statistics for the last (but not the only) year, showing how I managed to rebuild the blog and start receiving traffic.

This method of preparing the semantic kernel is long but effective, because in site building the main things are the right approach and patience!

If you have questions or criticism, write them in the comments; I will be interested. And do share your own experience!


Given the search engines' constant struggle against various manipulations of link-based ranking factors, a correct site structure increasingly comes to the fore in search engine optimization.

One of the main prerequisites for competently working out a site's structure is a maximally detailed elaboration of its semantic kernel.

At the moment there are quite a few general instructions on how to make a semantic kernel, so in this material we have tried to explain in more detail exactly how to do it, and with minimal time spent.

We have prepared a guide that answers, step by step, the question of how to create a semantic core for a site, with specific examples and instructions. Applying it, you will be able to create semantic kernels for your projects on your own.

Since this post is quite practical, a lot of the work will be done in Key Collector, as it saves a great deal of time when working with the semantic core.

1. Forming the seed phrases for collection

Expanding the phrases for parsing one group

For each query group, it is highly desirable to expand the synonyms and other wordings right away.

For example, take the query "swimsuits" and collect various reformulations of it using the following services.

WordStat.Yandex - Right Column

As a result, for the given initial phrase we can get another 1-5 different reformulations, for each of which we will also need to collect queries within the same query group.

2. Collecting search queries from various sources

After we have identified all the phrases within one group, we move on to collecting data from the various sources.

The optimal set of parsing sources for obtaining the maximum possible output data for the Runet is:

● WordStat.yandex - left column

● Yandex + Google search suggestions (with enumeration of letters appended at the end and substituted before the specified phrase)

Tip: if you do not work through proxies, then to keep your IP from being banned by the search engines, it is advisable to use delays between requests:
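What such pacing might look like in a hand-rolled collector, as a sketch (the delay bounds and the fetch function are illustrative assumptions, not values from the original screenshot):

```python
import random
import time

def fetch_suggestions(phrase: str) -> list[str]:
    # Placeholder: the actual request to the suggestion endpoint goes here.
    return []

for phrase in ["swimsuits", "swimsuits buy", "one-piece swimsuits"]:
    suggestions = fetch_suggestions(phrase)
    # A randomized pause looks less robotic than a fixed one and makes
    # an IP ban less likely; tune the bounds to your tool's guidance.
    time.sleep(random.uniform(5, 15))
```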

● In addition, it is also desirable to manually import data from the PRODVIGATOR database.

For the Western segment of the web, we use the same sources, minus the data from Wordstat.Yandex and the Yandex search suggestions:

● Google search suggestions (with enumeration of letters appended at the end and substituted before the specified phrase)

● SEMrush - the regional database relevant to you

● Similarly, use imports from the PRODVIGATOR database.

In addition, if your site is already collecting search traffic, then for a general analysis of the search queries in your topic it is desirable to export all the phrases from Yandex.Metrika and Google Analytics:

And for the analysis of a specific query group, you can use filters and regular expressions to pick out exactly the queries needed for that group.
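For example, a sketch that pulls only one group's queries out of an exported phrase list with a regular expression (the file name and the pattern are mine, purely for illustration):

```python
import re

# Match any exported phrase mentioning one-piece or two-piece swimsuits.
group_pattern = re.compile(r"\b(one[- ]piece|two[- ]piece)\b", re.IGNORECASE)

with open("analytics_phrases.txt", encoding="utf-8") as f:
    group_queries = [line.strip() for line in f if group_pattern.search(line)]

print(len(group_queries), "queries matched this group")
```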

3. Cleaning the queries

After all the queries are collected, we pre-clean the resulting semantic kernel.

Cleaning with ready-made stop-word lists

It is desirable to start by taking advantage of ready-made stop-word lists, both general ones and ones specific to your topic.

For example, for commercial topics these will be phrases with words like:

● Free, download, ...

● Abstracts, Wikipedia, Wiki, ...

● Used, old, ...

● Work, profession, vacancies, ...

● Dream, sleep, ...

● and others of this kind.

In addition, we immediately clean out all the names of cities in Russia, Ukraine, Belarus, and so on.

After we have loaded our entire stop-word list, we select the match type "independent of the stop word's word form" and click "Mark phrases in the table":

This way we remove the phrases with obvious minus words.
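Outside Key Collector, the same marking can be approximated in a few lines. A sketch that uses naive prefix matching as a crude stand-in for word-form independence (the stop list here is a toy example):

```python
STOP_STEMS = {"free", "download", "wikipedia", "used", "vacanc"}

def is_stopped(phrase: str) -> bool:
    # A word matches a stop stem regardless of its ending ("vacancy",
    # "vacancies"), which roughly imitates word-form-independent search.
    return any(
        word.startswith(stem)
        for word in phrase.lower().split()
        for stem in STOP_STEMS
    )

phrases = ["buy swimsuit", "swimsuit download free", "one-piece swimsuit"]
print([p for p in phrases if is_stopped(p)])  # -> ['swimsuit download free']
```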

After we have cleaned out the obvious stop words, we need to review the semantic kernel manually.

1. One of the quick ways is this: when we meet a phrase with obviously unsuitable words, for example a brand that we do not sell, then we:

● click the designated icon to the left of such a phrase,

● select the stop word,

● select a list (it is desirable to create a separate list and name it accordingly),

● immediately, if necessary, highlight all the phrases that contain the specified stop words,

● add them to the stop words.

2. The second way to quickly find stop words is to use the "Group Analysis" functionality, where phrases are grouped by the words they contain:

Ideally, so that you can come back to a given phrase later, all the marked words should be sent to a dedicated stop-word list.

As a result, we get a list of words to send to the stop-word list:

But it is desirable to look through this list quickly as well, so that words that are not unambiguous stop words do not end up there.

This way you can quickly go through the main stop words and remove the phrases that contain them.

Cleaning out implicit duplicates

● We sort this column by frequency in descending order.

As a result, we keep only the most frequent phrase in each such subgroup and delete everything else.
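Implicit duplicates are phrases made of the same words in a different order ("buy swimsuit" vs "swimsuit buy"). A sketch of the "keep the most frequent variant" rule, assuming you have (phrase, frequency) pairs:

```python
from collections import defaultdict

phrases = [
    ("buy swimsuit", 880),
    ("swimsuit buy", 140),
    ("swimsuit one-piece buy", 90),
]

# Group phrases whose word sets coincide, ignoring word order.
groups = defaultdict(list)
for phrase, freq in phrases:
    groups[frozenset(phrase.split())].append((phrase, freq))

# In every such subgroup keep only the most frequent phrase.
kept = [max(group, key=lambda pf: pf[1]) for group in groups.values()]
print(kept)  # -> [('buy swimsuit', 880), ('swimsuit one-piece buy', 90)]
```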

Cleaning phrases that do not carry a special semantic load

Besides the cleaning described above, you can also remove phrases that do not carry much semantic weight and will not noticeably affect how phrase groups are formed for separate landing pages.

For example, for online stores you can remove phrases containing the following keywords:

● Buy

● sale,

● Online store, ....

To do this, we create yet another stop-word list, enter these words into it, mark the phrases, and delete them from the table.

4. Grouping the queries

After we have cleaned out the most obvious garbage and unsuitable phrases, we can start grouping the queries.

This can be done manually, or you can enlist some help from the search engines.

Collecting the search results from the desired search engine

In theory, it is better to collect the results from Google for the desired region, because:

● Google understands semantics well enough

● it is easier to collect from: it does not ban the various proxies as aggressively.

A nuance: even for Ukrainian projects it is better to collect the results from google.ru, since the sites there are better structured, and therefore we get noticeably better material for landing pages.

Such data can be collected both in Key Collector and with the help of other tools.

If you have a lot of phrases, automation of SERP collection is clearly needed. The best speed of collection and operation is shown by the combination of A-Parser plus proxies (both paid and free).

After we have collected the SERP data, we group the queries. If you collected the data in Key Collector, you can perform the grouping of phrases right in it:

We do not much like how KC does this, so we have our own in-house tools that give significantly better results.

As a result, with the help of such grouping we can quickly combine queries that are worded differently but express the same user problem:

In the end, this saves a good deal of time in the final processing of the semantic kernel.

If you have no way to collect the SERPs through proxies, you can use various services:

They will help you group the queries quickly.

After such clustering based on SERP data, it is in any case necessary to carry out a further detailed analysis of each group and combine the groups that are similar in meaning.
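The core idea behind SERP-based grouping is simple: if two queries share most of the URLs in their top 10, they most likely express the same need and can live on one page. A minimal sketch under that assumption (the data and the threshold are illustrative):

```python
# top10: query -> set of URLs collected from the search results.
top10 = {
    "swimsuits for the pool": {"a.com", "b.com", "c.com", "d.com"},
    "pool swimsuits": {"a.com", "b.com", "c.com", "e.com"},
    "children's swimsuits": {"x.com", "y.com", "z.com"},
}

THRESHOLD = 3  # how many shared URLs count as "the same user need"

clusters: list[list[str]] = []
for query, urls in top10.items():
    for cluster in clusters:
        if len(urls & top10[cluster[0]]) >= THRESHOLD:
            cluster.append(query)  # joins an existing cluster
            break
    else:
        clusters.append([query])  # starts a new cluster

print(clusters)
# -> [['swimsuits for the pool', 'pool swimsuits'], ["children's swimsuits"]]
```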

For example, groups of queries like these ultimately need to be combined onto a single page of the site.

The most important thing: each individual page on the site must answer exactly one user need.

After such processing of the semantics, at the output we should get the most detailed possible site structure:

● informational queries

For example, in the case of swimsuits we can build a site structure like this:

where each page will have its own Title, Description, text (where needed), and goods / services / content.

As a result, after we have laid out the structure and distributed the queries in this much detail, we can begin the exhaustive collection of all key queries within each group.

For fast collection of phrases in Key Collector we:

● select the seed phrases for each group,

● go, for example, to suggestion parsing,

● choose "Distribute by groups,"

● select "Copy phrases from Yandex.Wordstat" in the drop-down list,

● press Copy,

● and begin collecting data from the next source, using the same phrases distributed across the groups.

In the end

Let's look at the numbers now.

For the "swimwear" topic, we initially collected more than 100,000 different queries from all sources.

At the query-cleaning stage, we managed to reduce the number of phrases by 40%, to roughly 60,000.

After that, we collected frequency data via Google AdWords and kept for analysis only the phrases whose frequency was greater than 0.

Then we grouped the queries based on the Google search results and obtained about 500 query groups, on which we then carried out the detailed analysis.

Conclusion

We hope this guide will help you collect semantic kernels for your sites much faster and with better quality, and that it answers, step by step, the question of how to collect a semantic kernel for a site.

Happy collecting of semantic kernels and, as a result, high-quality traffic to your sites. If you have any questions, we will be happy to answer them in the comments.


If you know the pain of search engines "disliking" the pages of your online store, read this article. I will talk about a way to increase a site's visibility, or rather about its first stage: collecting keywords and preparing the semantic core, about the algorithm for creating it and the tools used.

Why make a semantic core?

To increase the visibility of the site's pages, so that the search robots of Yandex and Google begin to find the pages of your site for users' queries. Of course, collecting keywords (preparing the semantics) is the first step toward this goal. Next, a conditional "skeleton" is built to distribute the keywords across different landing pages. And then the articles and meta tags are written and implemented.

By the way, you can find many definitions of the semantic kernel on the Internet.

1. "The semantic core is an ordered set of search words, their morphological forms and phrases, which most accurately characterize the type of activity, product or service offered by the site." Wikipedia.

To collect competitors' semantics in Serpstat, enter one of the key queries, select the region, click "Search," and go to the "Keyword Analysis" category. Then select "SEO Analysis" and click "Phrase Selection." Export the results:

2.3. Using Key Collector / SlovoEB to create the semantic kernel

If you need to make a semantic kernel for a large online store, you cannot do without Key Collector. But if you are a beginner, it is more convenient to use the free tool SlovoEB (do not let the name scare you). Download the program, and in the Yandex.Direct settings specify the username and password of your Yandex account:
Create a new project. In the Data tab, select the "Add Phrases" function. Select the region and enter the queries you obtained earlier:
Tip: create a separate project for each new domain, and a separate group for each category / landing page. For example: Now let's gather semantics from Yandex.Wordstat. Open the "Data Collection" tab - "Batch collection of words from the left column of Yandex.Wordstat." In the window that opens, check "Do not add phrases if they are already in any other groups." Enter the most popular (high-frequency) phrases among users and click "Start Collection":

By the way, for large projects in Key Collector you can collect statistics from the competitor-analysis services SEMrush, SpyWords, Serpstat (ex-Prodvigator), and other additional sources.