Unsolved Blog archive pages in Crawl Error Report
-
Hi there,
I'm new to Moz Pro and have a question. My scan shows archive pages as having crawl issues, but this is because Yoast is set up to block robots on these pages.
Should I be allowing search engines to crawl these pages, or am I fine to leave them as I have it set up already?
Any advice is greatly appreciated.
Marc -
@fcevey
If blog archive pages are showing up in the crawl error report, it indicates that search engine bots are encountering issues while attempting to crawl and index those pages. To address this:
Check URL Structure: Ensure that the URLs for your blog archive pages are correctly formatted and follow best practices. Avoid special characters, and use a logical and organized structure.
Update Sitemap: Make sure that the blog archive pages are included in your website's XML sitemap. Submit the updated sitemap to search engines using their respective webmaster tools.
Robots.txt File: Review your website's robots.txt file to ensure it's not blocking search engine bots from crawling your blog archive pages. Adjust the file if needed.
HTTP Status Codes: Check that the archive pages return the correct HTTP status codes (e.g., 200 OK). Crawl errors are often triggered when pages return 4xx or 5xx status codes (a small spot-check sketch follows this list).
Internal Linking: Ensure that there are internal links pointing to your blog archive pages. This helps search engines discover and index these pages more effectively.
Redirects: If you've recently changed the URL structure or migrated your website, implement proper redirects from old URLs to new ones to maintain SEO authority.
Server Issues: Investigate if there are any server-related issues causing intermittent errors when search engine bots try to access the blog archive pages.
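If it helps, here is a minimal Python sketch for spot-checking the status codes and meta robots tags mentioned above. The URLs are placeholders for whatever your crawl report flags, the requests library is a third-party dependency, and the noindex check is a rough heuristic rather than a full HTML parse:

```python
import re

import requests  # third-party: pip install requests

# Placeholder archive URLs -- swap in the ones flagged in your crawl report.
ARCHIVE_URLS = [
    "https://example.com/blog/2023/01/",
    "https://example.com/category/news/",
]

# Rough heuristic for a meta robots noindex tag (assumes name= appears before content=).
NOINDEX_PATTERN = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]*content=["\'][^"\']*noindex',
    re.IGNORECASE,
)

for url in ARCHIVE_URLS:
    try:
        response = requests.get(url, timeout=10)
    except requests.RequestException as exc:
        print(f"{url} -> request failed: {exc}")
        continue

    # 4xx/5xx responses are what typically surface as crawl errors.
    status = response.status_code

    # Yoast usually excludes archives with a meta robots noindex tag,
    # which is not the same thing as a robots.txt Disallow rule.
    has_noindex = bool(NOINDEX_PATTERN.search(response.text))

    print(f"{url} -> status {status}, noindex meta tag: {has_noindex}")
```

A page that returns 200 but carries noindex is being kept out of the index on purpose, which is usually what a Yoast archive setting produces rather than a genuine crawl error.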
-
Blog archive pages in the crawl error report indicate issues or problems encountered while indexing or accessing the archive pages of a blog or website. These errors need attention to ensure that all content remains accessible to users and search engines.
-
@mhenshall The decision to allow search engines to crawl archive pages in Yoast SEO or leave them as they are currently configured depends on your specific goals and needs.
If the archive pages contain valuable and relevant content for both search engines and users, allowing them to be crawled could enhance the visibility of that content in search results. However, if the content on the archive pages is not important or is duplicated from other pages, blocking crawling could be a valid option to prevent indexing issues and improve the user experience.
I would recommend evaluating the content on the archive pages and considering how their visibility in search engines will be affected by allowing or blocking crawling. You can use tools like Google Search Console to monitor how Google is indexing those pages and make informed decisions based on the data.
Keep in mind that configuring Yoast SEO is a strategic decision that should align with your SEO goals and your website's overall strategy.
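Worth adding: Yoast's archive settings normally apply a meta robots noindex tag rather than a robots.txt Disallow rule, so it can be useful to confirm which mechanism is actually blocking the pages. Here is a minimal sketch using Python's standard-library robotparser (the domain, URL, and user agents below are placeholders):

```python
from urllib.robotparser import RobotFileParser

# Placeholder values -- swap in your own domain and an archive URL from the report.
ROBOTS_URL = "https://example.com/robots.txt"
ARCHIVE_URL = "https://example.com/blog/2023/01/"
USER_AGENTS = ["Googlebot", "rogerbot"]  # rogerbot is Moz's crawler

parser = RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # fetch and parse robots.txt

for agent in USER_AGENTS:
    allowed = parser.can_fetch(agent, ARCHIVE_URL)
    print(f"Can {agent} fetch {ARCHIVE_URL}? {allowed}")
```

If robots.txt allows the fetch but the page carries a noindex tag, the crawl warning is most likely just reflecting your intentional Yoast setup.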