RSS Hacking Issue
-
Hi
I checked our original RSS feed by adding it to Google Reader, and all the links go to the correct pages. I have also set up the feed in FeedBurner; however, when I click the links in FeedBurner (which should go to my own website's pages), they all go to spam sites, even though the link titles and excerpts are correct.
This isn't a WordPress blog RSS feed either, and we are on a very secure server.
Any ideas whatsoever? I can't find any information about this online, and our developers haven't seen it before.
Thanks
-
Thanks so much for your help - I think this should fix it. You've saved me hours of time. It's our own CMS, so I should be able to fix it today.
-
I don't think you're being linked to spam, specifically. What you're seeing is the FeedBurner page linking your post titles to feeds.feedburner.com/[whatever the guid of the post is] -- URLs of entirely different feeds from different sites.
I believe this is the problem referenced in the FeedBurner FAQ - http://www.google.com/support/feedburner/bin/answer.py?hl=en&answer=79014&topic=13190 - "Why don't my feed content item links work?"
In which case, the isPermaLink attribute on the feed guids should be set to "false". I'd post about this on the support forum for your CMS.
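For illustration, an item whose guid is a database ID rather than a URL would look something like this (the example.com values and titles are placeholders; only the isPermaLink attribute and the "129" guid value come from this thread):

```xml
<!-- Hypothetical RSS 2.0 item: isPermaLink="false" tells aggregators the
     guid is an opaque identifier, not a clickable permalink URL -->
<item>
  <title>Example post</title>
  <link>http://www.example.com/blog/example-post</link>
  <guid isPermaLink="false">129</guid>
</item>
```

With isPermaLink="false", aggregators should use the `<link>` element for the destination instead of treating the guid itself as the URL.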
-
Hmm - actually, maybe if I change the isPermaLink attribute on that guid entry that came up in the validator to "false", that will fix it?
-
Some answers to your checks:
- Feed is correct - FeedBurner is still processing my feed
- No FeedMedic reports - it says everything is fine
- The FeedBurner URL and the URL people are directed to from the blog are the same
- No malware reports
- Ran the scanner on the blog article page, the RSS feed, the FeedBurner page, and the FeedBurner article link page - it doesn't pick up any malware
- The validity check brings up one issue: "guid must be a full URL, unless isPermaLink attribute is false" - the flagged value is 129
- The current guid entry for one article is: <guid isPermaLink="true">129</guid>
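As a minimal sketch of what the validator is objecting to, this stdlib-only script (the sample feed and item titles are made up for illustration) flags any guid that is not a full URL yet claims to be a permalink - the exact condition in the guid entry above:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample feed: Article 1 reproduces the problem from this
# thread (a bare database ID with no isPermaLink attribute).
SAMPLE_RSS = """<rss version="2.0"><channel>
<item><title>Article 1</title><guid>129</guid></item>
<item><title>Article 2</title><guid isPermaLink="false">130</guid></item>
<item><title>Article 3</title><guid>http://example.com/post/131</guid></item>
</channel></rss>"""

def bad_guids(rss_text):
    """Return titles of items whose guid is not a URL but claims to be a permalink."""
    root = ET.fromstring(rss_text)
    flagged = []
    for item in root.iter("item"):
        guid = item.find("guid")
        if guid is None or guid.text is None:
            continue
        # Per RSS 2.0, isPermaLink defaults to "true" when the attribute is absent.
        is_permalink = guid.get("isPermaLink", "true").lower() == "true"
        if is_permalink and not guid.text.startswith(("http://", "https://")):
            flagged.append(item.findtext("title"))
    return flagged

print(bad_guids(SAMPLE_RSS))  # → ['Article 1']
```

Article 2 passes because isPermaLink="false" marks its guid as an opaque ID, and Article 3 passes because its guid is a real URL; only Article 1 has the problem described in this thread.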
Sure, here's the feed: http://feeds.feedburner.com/EnjoyTravelBlog (check in Chrome or IE, as for some reason someone looking in Firefox didn't see the problem)
Here are screencasts of what I see if I click on any of the article titles:
- http://screencast.com/t/PNvrItea3ky - see articles 1 & 2
- http://screencast.com/t/bZI8qlg74 - what I see if I click on article 1 - clicking on link goes to spam site
- http://screencast.com/t/cER9Fm9RTunm - what I see if I click on article 2
It's like this for every single article - there are even links to Baidu, eBay, and all sorts in there.
Would welcome suggestions on other forums to post on if this goes beyond technical SEO!
-
A few avenues to check out:
- Log into your FeedBurner account and make sure the feed it's processing is still your blog's actual feed.
- Under FeedBurner's "Troubleshootize" tab, check whether there are any FeedMedic reports, and under Tips and Tools run the feed validity checks.
- Check that the FeedBurner URL shown in your account is the same one people are being directed to on the blog.
- Go to Google Webmaster Tools and, under Diagnostics, check whether there are any malware reports.
- Run a malware scan on the site URL and the FeedBurner URL through a tool like http://sitecheck.sucuri.net/scanner/
Can you provide us with more information? Screenshots showing the links and the URLs they direct you to?