Has anybody else had unusual /feed crawl errors in GWT on normal URLs?
-
I'm getting crawl error notifications in Google Webmaster Tools for pages that do not exist on my sites. They are normal URLs with /feed appended:
http://jobs-transport.co.uk/submit/feed/
http://jobs-transport.co.uk/login/feed
Has anybody else experienced this problem? I have no idea why it's happening.
Simon
-
Thanks Richard, very helpful.
-
It's right there in your code; the URL does exist:
<link rel="alternate" type="application/rss+xml" title="Transport Jobs » Submit Comments Feed" href="http://jobs-transport.co.uk/submit/feed/" />
You may have comments hidden or disabled for the page, but the feed for those comments is still there.
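A quick way to confirm which feed URLs a page advertises is to parse its source for rel="alternate" feed links. A minimal sketch using Python's standard library — the sample HTML below just echoes the tag quoted in the answer, so you would substitute your own page source:

```python
from html.parser import HTMLParser

class FeedLinkFinder(HTMLParser):
    """Collects href values of <link rel="alternate"> feed tags."""
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        # Self-closing <link ... /> tags also end up here via the
        # parser's default handle_startendtag behaviour.
        if tag != "link":
            return
        a = dict(attrs)
        if a.get("rel") == "alternate" and "xml" in a.get("type", ""):
            self.feeds.append(a.get("href"))

# Illustrative page source, echoing the tag quoted above
html = '''<head>
<link rel="alternate" type="application/rss+xml"
      title="Transport Jobs &raquo; Submit Comments Feed"
      href="http://jobs-transport.co.uk/submit/feed/" />
</head>'''

finder = FeedLinkFinder()
finder.feed(html)
print(finder.feeds)  # the /feed URLs Google discovers and crawls
```

Every URL this turns up is one Google can legitimately find, so a /feed crawl error on it is not a phantom page.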
Related Questions
-
UTM source errors in Google Search Console
Dear friends, I need help with utm_source and utm_medium errors. There are 300 such errors on my site, which I think is affecting it. The string appended to the end of the URLs is utm_source=rss&utm_medium=rss&utm_campaign=. How do I resolve this? Please help me with it. Thanks.
Reporting & Analytics | marketing910 -
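The utm_ strings described above are just tracking parameters, and a rel="canonical" pointing at the clean URL usually resolves the GSC errors. The stripping itself is straightforward — a hedged sketch in Python (the function name and example URL are mine, not from the question):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_utm(url):
    """Return the URL with any utm_* query parameters removed."""
    parts = urlsplit(url)
    kept = [(k, v)
            for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if not k.lower().startswith("utm_")]
    return urlunsplit(parts._replace(query=urlencode(kept)))

url = "https://example.com/post/?utm_source=rss&utm_medium=rss&utm_campaign="
print(strip_utm(url))  # https://example.com/post/
```

Non-tracking parameters survive intact, e.g. `strip_utm("https://example.com/?page=2&utm_source=x")` keeps `page=2`.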
How can I remove parameters from the GSC URL blocking tool?
Hello Mozzers! My client's previous SEO company blindly blocked a number of parameters using the GSC URL blocking tool. This has caused Google to stop crawling many pages on my client's website, and I am not sure how to remove these blocked parameters so the pages can be crawled and reindexed. The crawl setting is set to "Let Googlebot decide", but there has still been a drop in the number of pages being crawled. Can someone please share their experience and help me delete these blocked parameters from GSC's URL blocking tool? Thank you, Mozzers!
Reporting & Analytics | Vsood0 -
GWT Change of Address not working
I work on a .jsp site where vanity URLs 301 to www.domainname.com/index.jsp?c_id. When I do a Change of Address in GWT, it tells me it can't fulfill the change because the redirect goes to www.domainname.com/index.jsp instead of www.domainname.com. I could create a GWT account for www.domainname.com/index.jsp/, but that URL is a 404 (it needs the c_id). How should I do my change of address?
Reporting & Analytics | mattdinbrooklyn0 -
Blocking our IPs, but wondering if Google still uses our search data?
The company owner here has our (company) website as his home page. I excluded our static IPs in Google Analytics, but is that enough to keep Google from using his search traffic as a negative indicator? Does Google still take his activity into account and simply hide it from my reporting? Finally, does one person actually have that kind of influence on time on site, bounce rate, etc.? Should I convince him to find a new home page?
Reporting & Analytics | Ticket_King0 -
500 errors
We are accumulating a significant number of 500 errors, now reaching 3,000 URLs only 2 months after the re-coding of our site in ExpressionEngine. I haven't gotten a straight answer about the implications or solutions; the default suggestion is that they're of no consequence. History: the site was initially developed in EE (prior to that, an HTML platform) with a host of site issues. We then contracted an EE specialist to properly code the site. The 'new' site was released September 21st. I'd appreciate some guidance and recommendations, so I can go back to the developer with them. What are the considerations or consequences, if any, of ignoring the 500 errors? What are strategies or solutions for removing them from Google Webmaster Tools and preventing future 500 errors? Thanks. Alan
Reporting & Analytics | ahw0 -
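Before going back to the developer, it helps to know exactly which URLs are returning 500s and how often; the server access logs already have that. A rough sketch in Python, assuming a common/combined log format (the sample lines here are made up for illustration):

```python
import re
from collections import Counter

# Matches the request path and status code in a combined-format access log line
LINE_RE = re.compile(r'"[A-Z]+ (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def count_500s(lines):
    """Return a Counter of paths that responded with a 5xx status."""
    hits = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m and m.group("status").startswith("5"):
            hits[m.group("path")] += 1
    return hits

sample = [
    '1.2.3.4 - - [01/Nov/2012:10:00:00 +0000] "GET /jobs/123 HTTP/1.1" 500 0',
    '1.2.3.4 - - [01/Nov/2012:10:00:05 +0000] "GET / HTTP/1.1" 200 5120',
    '1.2.3.4 - - [01/Nov/2012:10:00:09 +0000] "GET /jobs/123 HTTP/1.1" 500 0',
]
print(count_500s(sample).most_common())  # [('/jobs/123', 2)]
```

A ranked list like this usually makes it obvious whether the 500s cluster on a handful of broken templates or are spread across the whole site, which is the first thing the developer will ask.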
Search within search? Weird Google URLs
Good morning/afternoon, how are you doing today? I'm experiencing a few Panda issues I'm trying to fix, and I was hoping I could get some help here with one of my problems. I used Google Analytics to extract the pages people land on after a Google search, trying to identify thin pages that potentially harm my website as a whole. It turns out I have a bunch of pages like the following: /search?cd=15&hl=en&ct=clnk&gl=uk&source=www.google.co.uk, and so on for a bunch of countries (.fi, .com, .sg, .pk, and so on, maybe 50 of them). My question is: what are those pages? Their stats are awful, usually 1 visitor, 100% bounce rate, and 0 links. Do you think they can explain my dramatic drop in traffic following Panda? If so, what should I do with them? Noindex? Deletion? What would you suggest? I also have a lot of links like the following: /google-search?cx=partner-pub-6553421918056260:armz8yts3ql&cof=FORID:10&ie=ISO-8859-1&sa=Search&siteurl=www.mysite.com/content/article. They lead to custom search pages. What should I do with them? Almost two weeks ago, Dr. Pete posted an article titled "Fat Panda and Thin Content" in which he deals with "search within search" and how it might be targeted by Panda. Do you think this is the issue I'm facing? Any suggestion/help would be much appreciated! Thanks a lot and have a great day 🙂
Reporting & Analytics | Ericc220 -
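Pages like those are internal search results leaking into the index, and the usual remedy is to keep crawlers out of the search paths via robots.txt and noindex what is already indexed. First, though, it helps to separate them out of a GA landing-page export — a rough sketch (the path prefixes come from the URLs in the question; the function name is mine):

```python
from urllib.parse import urlsplit

# Path prefixes that identify internal "search within search" pages
SEARCH_PREFIXES = ("/search", "/google-search")

def is_internal_search(url):
    """True if the landing page is an internal search-results URL."""
    return urlsplit(url).path.startswith(SEARCH_PREFIXES)

landing_pages = [
    "/search?cd=15&hl=en&ct=clnk&gl=uk&source=www.google.co.uk",
    "/google-search?cx=partner-pub-6553421918056260:armz8yts3ql&sa=Search",
    "/content/article",
]
thin = [u for u in landing_pages if is_internal_search(u)]
print(len(thin))  # 2 of the 3 sample URLs are search pages
```

Once identified, `Disallow: /search` and `Disallow: /google-search` rules in robots.txt keep crawlers out of new search pages, while a meta robots noindex lets Google drop the ones already indexed.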
Site crawler hasn't crawled my site in 6 days!
On 4/23 I requested a site crawl. My site only has about 550 pages. How can we get faster crawls?
Reporting & Analytics | joemas990