Unsolved Blog archive pages in Crawl Error Report
-
Hi there,
I'm new to Moz Pro and have a question. My crawl report shows archive pages as having crawl issues, but this is because Yoast is set up to block robots on these pages.
Should I be allowing search engines to crawl these pages, or am I fine to leave them as I have it set up already?
Any advice is greatly appreciated.
Marc -
@fcevey
If blog archive pages are showing up in the crawl error report, it indicates that search engine bots are encountering issues while attempting to crawl and index those pages. To address this:
Check URL Structure: Ensure that the URLs for your blog archive pages are correctly formatted and follow best practices. Avoid special characters, and use a logical and organized structure.
Update Sitemap: Make sure that the blog archive pages are included in your website's XML sitemap. Submit the updated sitemap to search engines using their respective webmaster tools.
Robots.txt File: Review your website's robots.txt file to ensure it's not blocking search engine bots from crawling your blog archive pages. Adjust the file if needed.
HTTP Status Codes: Check whether the archive pages return the correct HTTP status codes (e.g., 200 OK). Crawl errors are often triggered when pages return 4xx or 5xx status codes; see the sketch after this list for a quick way to spot-check this.
Internal Linking: Ensure that there are internal links pointing to your blog archive pages. This helps search engines discover and index these pages more effectively.
Redirects: If you've recently changed the URL structure or migrated your website, implement proper redirects from old URLs to new ones to maintain SEO authority.
Server Issues: Investigate if there are any server-related issues causing intermittent errors when search engine bots try to access the blog archive pages.
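If it helps to verify a couple of these points directly, here is a minimal sketch (not part of Moz Pro; the archive URLs are placeholders you would swap for your own) that fetches each archive page and reports its HTTP status code along with any robots directives found in the X-Robots-Tag response header or in the robots meta tag that Yoast writes into the page:

```python
# Minimal sketch: spot-check HTTP status codes and robots directives for
# archive pages. The URLs below are placeholders for your own archive URLs.
import re
import requests

archive_urls = [
    "https://www.example.com/blog/2023/05/",   # hypothetical date archive
    "https://www.example.com/category/news/",  # hypothetical category archive
]

for url in archive_urls:
    resp = requests.get(url, timeout=10)
    # A noindex can arrive via the X-Robots-Tag response header...
    header_directives = resp.headers.get("X-Robots-Tag", "none")
    # ...or via the <meta name="robots"> tag that Yoast adds to the HTML.
    match = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]*content=["\']([^"\']+)["\']',
        resp.text,
        re.IGNORECASE,
    )
    meta_directives = match.group(1) if match else "none"
    print(f"{url} -> status {resp.status_code}, "
          f"X-Robots-Tag: {header_directives}, meta robots: {meta_directives}")
```

A 200 status combined with a noindex directive means the page is reachable but deliberately kept out of the index, which matches the setup described in the original question, whereas 4xx/5xx responses point to the URL, redirect, or server issues listed above.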
-
Blog archive pages in the crawl error report indicate problems encountered while crawling or accessing the archive pages of a blog or website. These errors need attention to ensure that all content remains accessible to users and search engines.
-
@mhenshall The decision to allow search engines to crawl archive pages in Yoast SEO or leave them as they are currently configured depends on your specific goals and needs.
If the archive pages contain valuable and relevant content for both search engines and users, allowing them to be crawled could enhance the visibility of that content in search results. However, if the content on the archive pages is not important or is duplicated from other pages, blocking crawling could be a valid option to prevent indexing issues and improve the user experience.
I would recommend evaluating the content on the archive pages and considering how their visibility in search engines will be affected by allowing or blocking crawling. You can use tools like Google Search Console to monitor how Google is indexing those pages and make informed decisions based on the data.
Keep in mind that configuring Yoast SEO is a strategic decision that should align with your SEO goals and your website's overall strategy.
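One distinction worth keeping in mind while you evaluate this is that Yoast's archive setting typically applies a noindex robots directive (the page can still be crawled), whereas robots.txt blocks crawling outright. As a rough sketch, assuming placeholder URLs, Python's standard-library robotparser can confirm whether robots.txt is actually disallowing the archive pages, independently of any noindex tag:

```python
# Minimal sketch: check whether robots.txt (as opposed to a noindex meta tag)
# is what keeps crawlers away from the archive pages. URLs are placeholders.
from urllib.robotparser import RobotFileParser

site = "https://www.example.com"       # hypothetical site
archive_urls = [
    f"{site}/blog/2023/05/",           # hypothetical date archive
    f"{site}/category/news/",          # hypothetical category archive
]

parser = RobotFileParser()
parser.set_url(f"{site}/robots.txt")
parser.read()                          # fetches and parses robots.txt

for url in archive_urls:
    allowed = parser.can_fetch("*", url)   # "*" = any user agent
    print(f"{url} -> crawling allowed by robots.txt: {allowed}")
```

If robots.txt allows crawling and the pages only carry a noindex directive, the entries in the crawl report reflect a deliberate configuration rather than a technical fault.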
Related Questions
-
Unsolved How to check the Domain Age?
Check the Domain Age on THE SEO TOOLS KING. Get free SEO tools only on THE SEO TOOLS KING.
Moz Tools | seotoolsking -
Shopify SEO - Double Filter Pages
Hi Experts,
Single filter page: /collections/dining-chairs/black
-- currently, canonical the same: /collections/dining-chairs/black
-- currently, index, follow
Double filter page: /collections/dining-chairs/black+fabric
-- currently, canonical the same: /collections/dining-chairs/black+fabric
-- currently, noindex, follow
My question is about the double filter page above: is noindexing the better option, or should I change the canonical to /collections/dining-chairs/black? Thank you
Technical SEO | williamhuynh -
Unsolved GMB Local SEO question
I am trying to diagnose how one particular competitor is smoking us in local rankings. I came across a text field “Service Details” within Google My Business Services. This allows me to put in a brief description of each service we offer. My thought is that this could be a good place for keywords. That said, the descriptions are not public-facing (to the best of my knowledge), so I am reluctant to do all the work for nothing. I am wondering if anyone has filled these out and whether there were any noticeable results. Any insight is appreciated.
Local SEO | jorda0910 -
Page authority
Hello, How can my page authority be different across various pages built on exactly the same model, when none of them have links? Thank you,
Moz Pro | seoanalytics0 -
Issue: Duplicate page title
Hello, I have run the "Crawl Diagnostics" report using SEOmoz Pro and it says that I have a total of 56 errors: 18 of those errors are duplicate content and another 38 are duplicate title tags. I have looked at both reports in detail, and the reason I am getting these errors is that the crawl checks both "http" and "https". For example: my website is http://www.widgets.com. On the crawl diagnostics report, it also checks https://www.widgets.com, so it looks like I have duplicate content and duplicate title tags because of this. Now my question is this: Is this really duplicate content? If so, how do I fix it? Any help is greatly appreciated.
Moz Pro | threebiz0 -
Colors in the Keyword Difficulty Report
Hi Everyone, Two quick questions today: 1. How can I find out what the different colors within the Keyword Difficulty Report represent, and how can I see examples of how this information can help us with our data analysis? 2. The second question I have is regarding the Term Extractor. It seems that when I ran a domain it provided the wrong data. For example, it stated that a certain keyword exists a certain number of times within the description and title of the page, but when I looked at the source this was not the case, so it made studying the competition harder. Any suggestions, or has anyone else noticed this? Thanks in advance for all your help.
Moz Pro | DRTBA -
Can I do a campaign for just a page?
We've been doing a lot of building and work on just one category page, but when I try to put it in the campaign it won't let me use any URL that has a subfolder, like www.mainsite.com/keyword-page. I can only use www.mainsite.com, and when I select the other campaign options like root domain or subfolder, Roger pops up with an error. Is anyone else having this problem?
Moz Pro | anchorwave -
My campaigns are not analyzing all my pages.
Hi, I created a campaign against http://www.universalpr.com, and this campaign reports that only one page has been crawled. This site uses a JavaScript redirect to the real page, which can be found through the following: www.universalpr.com/wps/portal/universal/univhome/!ut/p/c5/04_SB8K8xLLM9MSSzPy8xBz9CP0os_hQdwtfCydDRwN_Jw9LA0-LAOPQYCdDI_9QY_1wkA6zeAMcwNFA388jPzdVPzi1WL8gO68cANNcdLU!/dl3/d3/L2dBISEvZ0FBIS9nQSEh/ Now I also attempted to create a campaign against this page, in case the JavaScript redirect was breaking things, but that campaign also reported 1 page crawled. Can anyone instruct me as to what I'm doing wrong? Thank you
Moz Pro | jcmoreno0