RSS Hacking Issue
-
Hi
I checked our original RSS feed - added it to Google Reader and all the links go to the correct pages. I have also set up the feed in FeedBurner, but when I click on the links there (which should go to my own website's pages), they all go to spam sites, even though the title and excerpt of each link are correct.
This isn't a WordPress blog RSS feed either, and we are on a very secure server.
Any ideas whatsoever? There is no info online anywhere and our developers haven't seen this before.
Thanks
-
Thanks so much for your help - I think this should fix it. You've saved me hours of time. It's our own CMS, so I should be able to fix it today.
-
I don't think you're being deliberately linked to spam. What you're seeing is the FeedBurner page linking your post titles to feeds.feedburner.com/[whatever the guid of the post is] - which happen to be the URLs of entirely different feeds from different sites.
I believe this is the problem referenced in the FeedBurner FAQ - http://www.google.com/support/feedburner/bin/answer.py?hl=en&answer=79014&topic=13190 - "Why don't my feed content item links work?"
In which case, the isPermaLink attribute on the feed guids should be set to "false". I'd post about this on the support forum for your CMS.
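For reference, a minimal sketch of the two guid forms RSS 2.0 allows - the item, title, and example.com URLs below are made up for illustration:

```xml
<!-- Either mark the guid as NOT a permalink, so aggregators treat it
     as an opaque ID rather than a link target... -->
<item>
  <title>Example post</title>
  <link>http://www.example.com/blog/example-post</link>
  <guid isPermaLink="false">129</guid>
</item>

<!-- ...or make the guid the post's full permalink URL
     (isPermaLink defaults to "true" when omitted). -->
<item>
  <title>Example post</title>
  <link>http://www.example.com/blog/example-post</link>
  <guid>http://www.example.com/blog/example-post</guid>
</item>
```

Either form should validate; the broken case is the one in between, where a non-URL value is flagged (explicitly or by default) as a permalink.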
-
Hmm - actually, maybe if I change the isPermaLink value on that guid entry that came up in the validator to false, that will fix it?
-
Some answers to your checks:
- Feed is correct - still my feed
- No FeedMedic reports - it says everything is fine
- The FeedBurner URL and the URL people are directed to from the blog are the same
- No malware reports
- Ran the scanner on the blog article page, the RSS feed, the FeedBurner page, and the FeedBurner article link page - it doesn't pick up any malware
- Validity check brings up one issue: "guid must be a full URL, unless isPermaLink attribute is false" - the flagged value is 129
- The current guid entry for one article is: <guid isPermaLink="true">129</guid>
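If it helps, and assuming the validator has correctly identified the cause, the corrected entry would presumably keep the existing 129 ID and just flip the attribute:

```xml
<guid isPermaLink="false">129</guid>
```

With isPermaLink="false", aggregators like FeedBurner should treat 129 as an opaque identifier instead of building a link out of it.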
Sure, here's the feed: http://feeds.feedburner.com/EnjoyTravelBlog (check in Chrome or IE, as for some reason someone looking in Firefox didn't see the problem)
Here are screencasts of what I see if I click on any of the article titles:
- http://screencast.com/t/PNvrItea3ky - see articles 1 & 2
- http://screencast.com/t/bZI8qlg74 - what I see if I click on article 1 - clicking on link goes to spam site
- http://screencast.com/t/cER9Fm9RTunm - what I see if I click on article 2
It's like this for every single article - I've even got links to Baidu, eBay and all sorts in there.
I'd welcome suggestions on other forums to post on if this goes beyond technical SEO!
-
A few avenues to check out:
- Log into your FeedBurner account and make sure the feed it's processing is still your blog's actual feed.
- Under FeedBurner's "Troubleshootize" tab, check whether there are any FeedMedic reports, and under Tips and Tools run the feed validity checks.
- Check that the FeedBurner URL shown in your account is the same one people are being directed to on the blog.
- Go to Google Webmaster Tools and, under Diagnostics, see whether there are any malware reports.
- Run a malware scan on both the site URL and the FeedBurner URL through a tool like http://sitecheck.sucuri.net/scanner/
Can you provide more information - screenshots showing the links and the URLs they direct you to?