Pages blocked by robots.txt in Webmaster Tools
-
- A mistake was made in the software. How can I solve the problem quickly? Please help.
-
OK, updated.
I'll need to wait a week, then.
Thank You, Gaston
-
Place that Sitemap line at the end of the file, like this:

User-agent: *
Disallow:

Sitemap: https://site.com/sitemap.xml

Best of luck!
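To double-check the finished file before resubmitting, Python's standard urllib.robotparser can parse it. A minimal sketch, reusing the placeholder https://site.com URLs from the example above (not the asker's real site):

```python
# Sanity-check a robots.txt that allows everything and lists a sitemap.
from urllib import robotparser

robots_txt = """\
User-agent: *
Disallow:

Sitemap: https://site.com/sitemap.xml
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# An empty Disallow means every crawler may fetch every URL.
print(rp.can_fetch("Googlebot", "https://site.com/any/page.html"))  # True

# The Sitemap directive is picked up as well (Python 3.8+).
print(rp.site_maps())  # ['https://site.com/sitemap.xml']
```

If can_fetch returns False for a URL you expect to be crawlable, the file still contains a blocking rule.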
-
-
It's better to set it as I've described before:

User-agent: *
Disallow:

It could take over a week, depending on the scale of the website.
-
-
Hello there!
Have you identified all of the pages that you want blocked?
If you do not want any page to be blocked, just edit your robots.txt as:

User-agent: *
Disallow:

For more information, check these resources:
How to block URLs with robots.txt - Google Webmasters
About /robots.txt - robotstxt.org
Best of luck!
GR
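The empty Disallow above is easy to confuse with "Disallow: /", which means the exact opposite. A quick sketch of the one-character difference using Python's urllib.robotparser (the example.com URL is hypothetical):

```python
# "Disallow:" allows everything; "Disallow: /" blocks everything.
from urllib import robotparser

def can_googlebot_fetch(robots_lines, url):
    """Parse a robots.txt given as a list of lines and test one URL."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_lines)
    return rp.can_fetch("Googlebot", url)

allow_all = ["User-agent: *", "Disallow:"]
block_all = ["User-agent: *", "Disallow: /"]

print(can_googlebot_fetch(allow_all, "https://example.com/page.html"))  # True
print(can_googlebot_fetch(block_all, "https://example.com/page.html"))  # False
```

This is why a stray "/" in robots.txt is one of the first things to rule out when Webmaster Tools reports pages as blocked.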
Related Questions
-
Competing with doorway pages
Hi all, it's my understanding that 'doorway pages' are bad practice. However, when googling for the services that our company offers, along the lines of '[service] [location]', businesses turn up in Google SERPs that outrank us purely with doorway pages. Take this as an example: https://www.google.co.uk/search?q=seo+dorking One of the results is this company, which seems to rank for pretty much every town modifier: https://prioritypixels.co.uk/seo-agency-dorking/ If you look at their sitemaps you'll see thousands of these pages: https://prioritypixels.co.uk/page-sitemap16.xml All the content is slightly different, but broadly speaking it is very similar. It seems that, in the short term, we can't compete with this company, but we could if we employed the same tactics. So my question is: is what they are doing really risking a penalty?
Intermediate & Advanced SEO | | Bee1590 -
Can noindexed pages accrue page authority?
My company's site has a large set of pages (tens of thousands) that have very thin or no content. They typically target a single low-competition keyword (and typically rank very well), but the pages have a very high bounce rate and are definitely hurting our domain's overall rankings via Panda (quality ranking). I'm planning on recommending we noindex these pages temporarily, and reindex each page as resources allow us to fill in content. My question is whether an individual page will be able to accrue any page authority for its target term while noindexed. We DO want to rank for all those terms, just not until we have the content to back it up. However, we're in a pretty competitive space, up against domains that have been around a lot longer and have higher domain authorities. Like I said, these pages rank well right now, even with thin content. The worry is: if we noindex them while we slowly build out content, will our competitors get the edge on those terms (with their subpar but continually available content)? Do you think Google will give us any credit for having had the page all along, just not always indexed?
Intermediate & Advanced SEO | | THandorf0 -
Pagination on a product page with reviews spread out on multiple pages
Our current product pages' markup only has the canonical URL on the first page (each page loads more user reviews). Since we don't want to increase load times, we don't currently have a canonical view-all product page. Do we need to mark up each subsequent page with its own canonical URL? My understanding was that canonical and rel next/prev tags are independent of each other, so marking up the middle pages with a paginated URL, e.g.:

Product page #1
<link rel="canonical" href="http://www.example.co.uk/Product.aspx?p=2692" />
<link rel="next" href="http://www.example.co.uk/Product.aspx?p=2692&pageid=2" />

Product page #2
<link rel="canonical" href="http://www.example.co.uk/Product.aspx?p=2692&pageid=2" />
<link rel="prev" href="http://www.example.co.uk/Product.aspx?p=2692" />
<link rel="next" href="http://www.example.co.uk/Product.aspx?p=2692&pageid=3" />

would mean that each canonical page suggests to Google another piece of unique content, which this obviously isn't. Is PREV/NEXT able to "override" the canonical and explain to Googlebot that it's part of a series? Wouldn't the canonical then be redundant? Thanks
Intermediate & Advanced SEO | | Don340 -
Noindex search pages?
Is it best to noindex search results pages, exclude them using robots.txt, or both?
Intermediate & Advanced SEO | | YairSpolter0 -
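For reference, the mechanisms this question compares look like this (the /search path is hypothetical). Note that they interact badly: if robots.txt blocks a URL, crawlers never fetch it and so never see its noindex tag, which is why combining both on the same URL can backfire.

```text
# robots.txt — blocks crawling, but a blocked URL can still be indexed
# if other sites link to it:
User-agent: *
Disallow: /search

<!-- meta robots — the page must stay crawlable so the tag can be seen: -->
<meta name="robots" content="noindex, follow">

# Or the equivalent HTTP response header:
X-Robots-Tag: noindex
```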
Bing Webmaster Tools failed to reach sitemap any suggestions?
My sitemap has been submitted to Bing webmaster tools well over a year ago and I have never had any problems. Starting last week it showed failed, for some reason it can't reach it. I have resubmitted several times and it fails every time. I can go to the url with no problems, and Google Webmaster Tools does not have any problems. We have made no changes in over a year to how the sitemap is made and submitted. Anyone have any ideas?
Intermediate & Advanced SEO | | EcommerceSite0 -
Disavow Tool - WWW or Not?
Hi All, Just a quick question ... A shady domain linking to my website is indexed in Google for both example.com and www.example.com. If I want to disavow the entire domain, do I need to submit both: domain:www.example.com domain:example.com or just: domain:example.com Cheers!
Intermediate & Advanced SEO | | Carlos-R0 -
Why is Google Webmaster Tools reporting a massive increase in 404s?
Several weeks back, we launched a new website, replacing a legacy system and moving it to a new server. With the site transition, we broke some of the old URLs, but it didn't seem to be too much of a concern. We blocked the ones I knew should be blocked in robots.txt, 301 redirected as much duplicate data and used canonical tags as far as I could (which is still an ongoing process), and simply returned 404 for any others that should never really have been there. For the last few months, I've been monitoring the 404s Google reports in Webmaster Tools (WMT), and while we had a few hundred due to the gradual removal of duplicate data, I wasn't too concerned. I've been generating updated sitemaps for Google multiple times a week with updated URLs. Then WMT started to report a massive increase in 404s, somewhere around 25,000 404s per day (making it impossible for me to keep up). The sitemap.xml has new URLs only, but it seems that Google still uses the old sitemap from before the launch. The reported sources of the 404s (in WMT) don't exist any longer; they are all coming from the old site. I attached a screenshot showing the drastic increase in 404s. What could possibly cause this problem? wmt-massive-404s.png
Intermediate & Advanced SEO | | sonetseo0 -
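One quick way to rule out the new sitemap itself as the source of the 404s is to parse it and spot-check that its URLs resolve. A sketch using only the Python standard library (the sitemap URL is a placeholder, and the network calls are left commented out):

```python
# Extract <loc> entries from an XML sitemap and check their HTTP status.
import urllib.error
import urllib.request
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text):
    """Return the <loc> URLs from a urlset sitemap document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iterfind(".//sm:loc", NS)]

def status_of(url):
    """Return the HTTP status code for a HEAD request to url."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

# Example usage (requires network access):
# xml_text = urllib.request.urlopen("https://example.com/sitemap.xml").read()
# for url in sitemap_urls(xml_text):
#     if status_of(url) == 404:
#         print("404:", url)
```

If every URL in the submitted sitemap returns 200, the 404s are coming from Google's memory of old URLs (and links to them), not from the file you're submitting now.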
Will Google Visit Non-Canonicalized Page Again and Return Its Page's Original Ranking?
I have 2 questions about canonicalization. 1. Will Google ever visit Page A again after it has been canonicalized to Page B? 2. If Google still visits Page A and finds that it is no longer canonicalizing to Page B, will the original rankings and traffic of Page A return to the way they were before it was canonicalized? Thanks.
Intermediate & Advanced SEO | | globalsources.com0