How to remove a page from the search index. How to clear the search bar

How do you remove a page from a search engine's index, and why would you need to in the first place? In this article, we'll go over all the options and answer frequently asked questions.

Why remove a page (or pages) from the index?

There can be many reasons to remove a page or pages from the index. The most common are:

  • The page is a duplicate (for example, site.ru/cat/stranica.html and site.ru/cat/stranica may be duplicates of each other). As everyone knows, duplicates in the index are an evil that harms the site as a whole.
  • The page has ceased to exist (for example, its information is no longer relevant).

How long does it take for a page to be removed from the index

The speed of removal does not depend on us. The maximum period is about 60–90 days; in my experience, the average is around 25–35 days. If we manually request removal through the Yandex.Webmaster or Google webmaster panel, the process goes faster. The most important thing is for the search robot to visit the banned page; then, when the index is updated, it will exclude the page from the search results.

Ways to remove a page from the index

In the options discussed below, the page will continue to exist (it will stay open to users), but we will remove it from the search engine index.

1. Through the robots meta tag

You can block an individual page from indexing by adding a robots meta tag with a noindex value to its html code. This method is covered in detail further down in this article.

2. Through robots.txt

It is convenient to block pages from indexing via robots.txt because you can forbid an entire section in bulk, or a whole group of similar pages, at once; a sketch follows below. However, search engines say plainly that the robots file is not binding for them. That is, a search engine can, in theory, keep a document in the index even though it is blocked in robots. Admittedly, I know of no such examples.
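A rough sketch of blocking a whole section through robots.txt (the /cat/ path is only an assumed example):

# forbid all robots to index anything under /cat/
User-agent: *
Disallow: /cat/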

3. Via the search engine webmaster panel

Both Yandex and Google offer this option in their webmaster panels. However, if you delete a page this way, you need to understand that the page must first be blocked using one of the methods above, otherwise nothing will happen. Such a request merely prompts the search robots to make a point of visiting these pages on their next crawl.

3.1. Yandex Webmaster Panel

The page address is http://webmaster.yandex.ru/delurl.xml. On this page, simply enter in the form the address of the page you want to remove from the index.

3.2. Google Webmaster Panel

The page address is https://www.google.com/webmasters/tools/. To get to the required form, select a site from the list (if you have several sites) and then open the "Google Index" → "Remove URLs" tab.

In Google, you can also submit an entire directory of URLs at once (just as in robots.txt).

4. X-Robots-Tag Headers

This method is used only by Google. The ban must be present in the HTTP headers of the response:

X-Robots-Tag: noindex, nofollow
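On an Apache server, for example, such a header could be attached with a snippet like this in .htaccess (a sketch assuming mod_headers is enabled; the PDF pattern is just an illustration):

# send X-Robots-Tag for every PDF file on the site
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>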

5. Through a redirect or 404 error

You can also remove a page from the index by redirecting from it or by having it return a 404 error. In that case, too, the search engines will remove the page from the index.
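As a rough Apache .htaccess sketch of both options (the paths are hypothetical; Redirect comes from mod_alias):

# permanently send visitors of the old page to the new address
Redirect 301 /old-page.html /new-page.html
# or answer with 410 Gone, which search engines treat much like a removed page
Redirect gone /removed-page.html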

We have covered the main ways to remove a page from the index. As I wrote above, the removal speed differs in each case, but one thing is certain: it is not fast. In any case, it will take at least 5–7 days.

There are situations when a business owner needs to remove a page from Google or Yandex search. Sometimes a resource gets into the SERP by mistake, or the information on it loses its relevance. Worst of all is when the search engines serve up service pages with confidential customer data.

To avoid such situations, you need to know how to remove a page or section of a site from the index.

There are several ways to do this, depending on the search engine. Let's take a look at the pros and cons of each option.

Before choosing a method, decide whether:

  • you need to hide the page from search engines only; or
  • you need to remove access for absolutely everyone.

Error 404

Important! This is the simplest method to carry out; however, removing the information from the search results may take up to 1 month. It removes the page both from the search engine and from the site as a whole.

Every so often, while searching for information, a user runs into a 404 error message: "Page not found." This is exactly what the actual deletion of a site page leads to.

This is done by deleting the page in the site's administrative panel. In search engine terms, the server is configured to return an HTTP status with the 404 Not Found code for the specific URL. On the search robot's next visit, the server informs it that the document is gone.

After that, the search engine understands that the page is no longer available and removes it from the search results, so that users do not land on the 404 error page from the search.
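As a sketch, in nginx such a response could be configured like this (the URL is an assumed example):

# answer 404 Not Found for a page that has been taken down
location = /old-page.html {
    return 404;
}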

This method has its own characteristics:

  • Simplicity: the setup takes just a few clicks.
  • The page disappears from the site completely, so if you only need to hide confidential information from the search results, other methods are a better fit.
  • If incoming links point to the page you want to hide, setting up a 301 redirect will be more effective.
  • It is not the removal of the page from the site that directly drops it from the search, but the subsequent re-indexing. On average, it takes from 1–2 days to a month for the robot to visit the resource and register the change.

While this option is one of the simplest and most convenient for a webmaster, a 404 error message rarely pleases a site visitor. In some cases it can mean the client never returns to the resource.

To avoid such consequences, more and more webmasters today try to design the 404 error page creatively, or place information and offers there that may interest the user. This policy makes the site more customer-friendly and increases its popularity.

Robots.txt

Important! This method does not remove the page from the site; it only hides it from the search results. The page remains reachable through other traffic channels.

Quite a common way to get rid of individual pages and entire sections. robots.txt provides for both allowing and forbidding indexing, so it is no surprise that many useful guides to removing pages this way have been published on the Internet, for example on Devaka. They all rest on one principle: the Disallow directive.

To keep the search engines from crawling the page, you need access to the domain's root folder. Otherwise, you will have to use meta tags.

  • User-agent: the name of the robot the ban should apply to goes here (the name can be looked up in a crawler database; if you simply want to hide the page from everyone, use "User-agent: *");
  • Disallow: this directive specifies the address of the page in question.

It is this pair that forms the command for a specific URL. If necessary, several pages of the same site can be banned in one file, entirely independently of one another, as the sketch below shows.
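A minimal sketch of such a file (both page paths are assumed examples):

# hide two independent pages from all robots at once
User-agent: *
Disallow: /cat/stranica.html
Disallow: /old-offer.html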

After closing a page or section through robots.txt, you need to wait for the next indexing.

It is worth noting here that for search engines a rule in robots.txt is only a recommendation, which they do not always follow. Even when the directive is respected, the resource can still appear in the SERP, but with a note that it is blocked via robots.txt.

Only with time, if the object's status in the file does not change, will the search engines remove it from their database.

In any case, the removed objects will remain viewable by following external links, if there are any.

Meta robots tag

Important! This method removes the page from search engines, but the page remains viewable from other traffic channels.

To some extent this option can be called an alternative to the previous one, except that the work here is done in the html code, inside the head tags:

<meta name="robots" content="noindex, nofollow" />
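For clarity, here is a sketch of where the tag sits in the page code (the title is purely illustrative):

<head>
  <meta name="robots" content="noindex, nofollow" />
  <title>A page hidden from search</title>
</head>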

After entering the command, you must wait for the next indexing of the resource, after which the changes will take effect.

Why is this method good?

Using meta tags, you can remove a URL from Google or Yandex for a single page or for a whole list at once, and the robots.txt file stays simple. This option is recommended for beginners who work step by step as they create new site pages.

Interesting fact! Using this method, you can remove a page from one search engine and leave it in the rest.

Closing a page via meta tags is the best way to remove it from the Google index while leaving it active in Yandex, if necessary. It is also the recommended approach when you need to remove a page from the index while keeping its content on the site for internal use.

Example

Closes the page only for the Google search engine:

<meta name="googlebot" content="noindex" />

Closes the page only for the Yandex search engine:

<meta name="yandex" content="noindex" />

The indisputable advantage of meta tags over robots.txt is the ability to block a page from indexing even when external links point to it. All it takes is the noindex meta tag.

The disadvantage of meta tags is that implementation can be a problem if you are not on WordPress. In WordPress, the issue is solved by installing the Yoast SEO plugin, in which every page can be closed with a meta tag.

301 redirect

Important! Once implemented, the page content will no longer be available to any visitors at its old address, including the site owners.

The essence of this method is that when a user opens a page that no longer exists, the site automatically redirects them to a different URL.

This option is not the most convenient or simple for the webmaster, because the setup differs from one CMS to another. From the user's point of view, however, it is the most comfortable outcome, far more convenient and pleasant than a 404 error message.

If desired, the redirect can be harnessed for marketing and take the user not simply to the site's home page, but to a specific section whose promotion or sales the administration is interested in.

This method is often used when a large number of obsolete pages must be handled, or when the structure of the resource changes completely. After all, a redirect preserves the page's position in the search engine rankings, so the effort spent on promoting the site is not wasted.

Re-indexing will take 1–3 days on average, depending on the site, and only after the robot's visit will the changes take effect for the resource's visitors.

Learn more about setting up 301 redirects on the Devaka website.

Manual removal via the webmaster's panel

Important! The method works to speed up the removal of information from the search engine.

A fast way (8 to 48 hours) to remove a site or page from Yandex or another search engine. Each system has its own procedure here, but one thing unites them: the need to block the page by an additional method first. It can be a 404 error, robots.txt, or a meta tag of your choice, but you cannot do without such preparation.

You can remove a site from Google search through Google Search Console:

  1. Log in to the toolbar.
  2. Select the resource you want.
  3. Then find the "Remove URLs" subsection in Google Index.
  4. Here, create a new removal request, then enter the desired link in the window that opens and click "Send".

You can monitor the status of the request in a separate list. Removal from the Google index usually takes from 2–3 hours to 24 hours. If you wish, you can send an unlimited number of pages for deactivation this way.

The system also offers its users the function of temporary (up to 90 days) page freezing.

We proceed in a similar way in Yandex.Webmaster. This search engine's instructions warn up front that indexing must first be banned through robots.txt or meta tags.

After that, the system will keep checking the object's status for a while, and if the page remains inaccessible, the robot will delete it from its database.

To speed up this process, go to your Yandex.Webmaster account immediately after making the changes to robots.txt or the meta tags.

Here, in the "Delete URL" section, enter the page address and confirm its deletion. Deactivation of no more than 500 objects is allowed per day.

Removing a URL from Yandex takes longer than from Google: from several hours to several days.

The URL Removal Tool is great for situations where there is an urgent need to remove sensitive pages or remove information added when a site is compromised.

Removing the entire site

Sometimes there are situations when it is necessary to remove not just a few pages from the search results, but also the entire resource.

This can be done using any of the methods above combined with the Google or Yandex webmaster panel; only the details change: when deleting the entire resource, you specify the domain name instead of an individual page URL.

Close the site with a login and password on the server. This is the best solution for sites at the development stage and for test versions. The exact steps depend on the CMS the resource is built on.

For this method to work, ask the developers to configure access to the site by login and password only; a sketch follows below.
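On an Apache server, this could look roughly like the following .htaccess sketch (the AuthUserFile path is an assumption, and the .htpasswd file with the logins must be created separately):

# require a login and password for the entire site
AuthType Basic
AuthName "Site under development"
AuthUserFile /home/user/.htpasswd
Require valid-user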

Summary

Removing a page, or even an entire site, is not difficult for its owner or administrator. Among the existing methods, everyone can pick the option most convenient for them. But if you need a result as quickly as possible, it is better to use several methods at the same time.

Still have questions? Our team will be happy to help.


Almost every webmaster, especially at the early stage of a project, has faced the need to remove site pages from the search engine index. Even though the procedure looks quite simple at first glance, many still run into difficulties.

Sometimes the owner of a web resource has to hide a document from search engines for the following reasons:

  • the site is under development and got into the SERP quite by accident;
  • the content on the page is no longer relevant;
  • the document duplicates another page that is already on the site;
  • the index includes service pages where the personal data of clients are located.

To avoid such cases, today we will look at 4 effective ways of removing a page from the search engine index.

How to close a page from search engines using the webmaster's panel?

This method of blocking access to the pages of your site for search robots is considered one of the easiest. Moreover, this tool is ideal for those cases when certain URLs need to be removed urgently.

Yandex

To do this, you need the Yandex.Webmaster service; we have already described how to add a site to it to speed up indexing. Follow the link https://webmaster.yandex.ru/tools/del-url/, add the address of the specific page in the corresponding field, and click "Delete".


With a high degree of probability, Yandex will ask you to speed up the process of deleting the page from the system database. To do that, first block it from bots through the robots.txt file or a robots meta tag, or make the server return a 404 error. We will talk about how to do this a little later.

It will take several hours or even days before the bots delete the document from the database. This is due to the fact that the system will need to track its status and make sure that it does not change anymore.

Google

Log in to Google Webmaster Tools. Add your site to the index in advance if you haven't already. Then find the "Google Index" tab there and, under it, "Remove URLs". In the window that appears, choose to create a removal request and enter the address of the document to be deleted in the field. Then submit the request.

Server 404 error

Surely every user searching for information on the Internet has at some point landed on a page showing error 404: "Page not found". It means the document they were looking for was removed from the resource.

A webmaster can do this in the site's control panel. For search engines, it means the response of the server is configured so that the 404 Not Found code appears at the specific address. When the robot visits the URL again, the server informs it that the page no longer exists. This makes it clear to the search engines that the document has been removed from the site, and they drop it from the search results so that visitors do not open it and run into the 404 error.

The characteristic features of this method include:

  1. Easy setup in just a few clicks.
  2. The document disappears from the web resource completely. For this reason, the method is not recommended when the page must stay on the site and only drop out of the index (service pages with clients' confidential information, etc.).
  3. If incoming links lead to the page, it is worth resorting to another way of hiding it, for example a 301 redirect.

Important! The page falls out of the search index not because it was removed from the resource, but because of the subsequent re-indexing. So you will have to wait about 2 weeks for the bot to visit the resource again before the page is removed.

For webmasters this method is one of the most convenient, but a visitor may not like the 404 error, and there is a risk that a user who sees it will stop visiting the site. But there is a way out of this situation as well.

On a note. Very often, site builders give the 404 Not Found page an interesting design, placing useful information there and offering to visit other pages of the resource, which is sure to catch the visitor's attention. This makes the page more attractive to the user, which in turn has a positive effect on the site's ranking and recognition.
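Such a custom-designed error page can be wired in with a single line of Apache configuration (the path to the page is an assumption):

# show our own page whenever the server answers 404
ErrorDocument 404 /errors/404.html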

Modifying the robots.txt file

Another common method used by webmasters. It lets you hide individual documents and entire sections. In the robots file, you can both forbid and allow search bots to index a site or particular pages; the ban itself is set with the Disallow directive.

To hide a page from search engines, you need to access the root folder of the site. The robots.txt document basically contains 2 lines:

  1. User-agent. Here goes the name of the specific search engine's robot that you are forbidding to crawl the page, or User-agent: *, which applies to all bots at once.
  2. Disallow. The URL of the page to be removed is written here.

Together, they create a command to search engines for the specified URL. If required, you can hide several documents on one resource in one file at once, and they will not affect each other in any way.

For example, this is how we tell all search robots not to index the index and password pages on our site.

User-agent: *
Disallow: /index
Disallow: /password

After writing this command, you will need to wait for the next indexing. Keep in mind that all changes in the robots file are advisory as far as search engines are concerned, so do not be surprised if the object stays in the index at first, marked as hidden via robots.txt. But if the document's status does not change after a certain period, it will be removed from the search engine's database.

Important! If external links lead to the removed pages, they will remain reachable by following those links. This is because the commands in the robots file do not actually delete the object from the site; they only hide it from the search results.

Robots Meta Tag

Without going into details, this method is similar to the one above, except that all the commands are written in the site's html code, inside the head tags:

<meta name="robots" content="noindex, nofollow" />

All changes will likewise take effect after indexing. The advantage of the robots meta tag is that you can remove several URLs from the search with it without having to change the robots file. This method is ideal for novice site builders who create new pages gradually.

It is also a great option when you need to close a page from the Yandex index while keeping it available on Google, or when you want to remove an object from search while leaving the information accessible on the site itself.

An example of closing a URL for Yandex only:

<meta name="yandex" content="noindex" />

Important! Meta tags differ from robots.txt in that they can remove an object from search even if external links lead to it; the noindex meta tag is what makes this possible. Keep in mind, though, that if your site is not built on WordPress, the procedure will be more complicated: WordPress has the Yoast SEO plugin, which lets you close pages with meta tags easily.

Conclusion

If you ever need to remove an individual page of your site from the search index, you can easily use any of the methods described in this article. Choose whichever is more convenient for you, but weigh the characteristics of each. And if you want the document closed as soon as possible, use several options at once.


Oh, these growth mistakes ...

There was a case in my practice. I wrote articles and tried my best; sixty articles had been written by that point, and suddenly!

I discover a detail: the article URLs on my blog are configured not quite correctly; you can see how they should be set up in this article.

Well, you know, the blog is in the index and visitors come from the search results. Some articles hold second to fifth positions in the TOP and bring a lot of people to the blog.

And now, an ambush: so all the links have to be redone? I turned to experienced SEO specialists; they say this question is always under discussion, and the structure, whichever way you look at it, gets broken. It seems to work as it is...

But a blogger friend said that he had over 170 articles when the same thing started nagging him, and he reworked them.

And it turns out that at the end of each article's URL there should be no slash [ / ] but [.html]!!!

I thought and thought, and took the plunge. And redid it. Of course, traffic fell, then gradually began to grow again, but here is the problem.

The pages with the slash stay in the index, and visitors from the search engines naturally come to me and stumble onto a 404 page. No such article here, damn it... What a mess...

So we have come to the essence of the article, we need to remove these URLs from the index.

Removing pages from the search engine index

Having studied this issue, I got down to business. It turns out that non-existent pages can appear for many reasons.

Why closed and deleted pages remain in search

There are several reasons. Let me clarify that by closed pages we mean service and other pages that are banned from indexing by robots.txt rules or meta tags.

Non-existent pages exist in the search for the following reasons:

- they were deleted, and therefore no longer exist;
- the web page address was edited manually (this must never be done: the page immediately becomes inaccessible at the old address);
- the server is configured incorrectly, so a non-existent page does not return a 404 error.

Extra pages are generated in the index if:

- the pages are closed, but in fact they are in the search and open to search robots (robots.txt is configured incorrectly);
- they were indexed before they were closed;
- other sites link to these pages, or internal pages of the site link to them.

Well, since the reasons are known to us and the diagnosis has been made, we can start the treatment.

It is worth mentioning that even once the work of fixing all these flaws is done, the pages will remain in the search for some time. It all depends on how often the robots crawl your site.

How to remove a page from Yandex search

In Yandex.Webmaster, open the Remove URL tool (described above), enter the page address, and send the request. Once it is sent, the deletion will be carried out the next time the robot comes around.

How to remove from index in Google search engine

In Google, open the webmaster tools and go to Remove URLs in the Optimization list via the link https://www.google.com/webmasters/tools/url-removal?hl=en&siteUrl=http://www.site/

Or type "Webmaster Tools" into Google search and click the first result at the top.

A Search Console window will open; click your site's URL there if it is listed. If not, add your resource (the button on the left).

Then a menu will appear on the left; follow the path Google Index → Remove URLs.

Click the "Temporarily hide" button, enter the address to be deleted in the window, and click the Continue button.

Then select a reason and submit the request. The status of the request will be displayed.

And after a while the page will leave the index.

Well, that's the whole story)))

This is an extremely hot topic. Many people use Yandex.String in Windows or the Yandex search form in a browser. Traditionally, answers to popular queries and search suggestions appear right in the input field as you type. This is very convenient, but search engines, including Yandex, collect information about your interests on the Internet and then try to guess your preferences based on them. That is, once you look up a recipe for buns in Yandex, then the next time, as soon as you type the letter "b", a bunch of suggestions with the word "bun" will pop up. This indirectly reveals your past search history and online interests. Sometimes a computer is used by several people, and you don't always want someone else to see your search history. Read on for how to delete queries in the Yandex search bar if you don't want to share your interests with other people.



We will look at how to deal with the search bar both in the browser and in Yandex search on the Windows taskbar.

To prevent hints from popping up in the browser

First things first: we delete the search history in the browser itself. This does not concern Yandex search as such, but what is the point of hiding your past queries if the list of all visited sites can easily be found in the browser?

You can see how to clear the browser history in the pictures below, or look up the details on the Internet if your browser is not shown.

Clearing browsing history in Mozilla FireFox
Clearing your browsing history in Google Chrome
Clearing your browsing history in Opera
You can also clear cookies after each search. In that case, the search history information will no longer be associated with your browser, and after clearing the cookies the search tips will not carry any information about you.

However, the easiest way is to configure "Personal search" so that the tips do not carry information about your search history.

The algorithm is as follows:


To delete queries in Yandex.String

In Windows 10, Yandex.String sits on the taskbar by default. It makes it possible to send a search query to the Internet straight from the taskbar, without launching the browser first. The same line also searches folders on your computer, and it can respond to voice commands. Quite convenient, and safe as long as you are the only one using the computer.

Where does Yandex.String get its search suggestions from?

The line draws its information from several sources:
  • list of links on request from Yandex search
  • links to sections of the selected site
  • answers to popular Yandex queries
  • results of indexing computer media (HDD, SSD, etc.)