Hi Ryan,
Wouldn't that cause issues with crawl efficiency?
Also, Google's Webmaster Guidelines say: "Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines."
I've done quite a bit of searching, but can't seem to find a time-efficient way to accurately analyze keyword difficulty for large sets of keywords.
All of the keyword difficulty tools I've tried are either 1) inaccurate or 2) slow (SEOmoz's keyword difficulty tool, for example, only allows 5 entries at a time).
Can anyone recommend any shortcuts / tools / processes for analyzing keyword difficulty for large sets of keywords?
Thank you. Are you sure about that?
So, we have a page, like domain.com/searchhere, but results pages are being crawled (and shouldn't be); results look like domain.com/searchhere?query1. If I block /searchhere?, will it also block crawlers from the single page /searchhere (because I still want that page to be indexed)?
What is the recommended best practice for this?
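For reference, here is the robots.txt pattern we're considering, a minimal sketch using the hypothetical paths above. Since robots.txt rules are prefix matches, a Disallow ending in ? should only catch the parameterized URLs:

```
User-agent: *
# Blocks /searchhere?query1, /searchhere?anything, etc.
# Does NOT match /searchhere itself, so the base page stays crawlable.
Disallow: /searchhere?
```

(My understanding is that robots.txt only stops crawling; a blocked URL can still appear in the index if other pages link to it.)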
On one site I work on, I noticed that pages in the upper navigation surrounded by fewer links in the main navigation tend to perform better in organic search than pages surrounded by more links.
Has anyone else noticed this?
Since November 14th, impressions on the slideshows on our site are down 43%.
The slideshows are built using # for pagination, and all content lives on one URL (though the user must click 'next' to see more content). Does anyone else have this issue?
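For context, the slides are built roughly like this (the markup is a hypothetical sketch, but the pattern is the same). Since browsers never send the # fragment to the server, and Google generally treats everything after the # as part of the same URL, every slide consolidates onto one URL:

```html
<!-- All "next" links are fragments, so /slideshow, /slideshow#slide-2,
     etc. are all one URL to Googlebot -->
<div id="slide-1">First slide content</div>
<div id="slide-2" style="display:none">Second slide content</div>
<a href="#slide-2">Next</a>
```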
Hi All, I'm a bit frustrated with the fact that I can only enter 5 words at a time into SEOmoz's keyword difficulty tool.
Does anyone know of a better way (or tool) to analyze keyword difficulty for hundreds of keywords?
Since the Penguin update around October 6th, both landing page and query impressions are up 120%, but Google traffic is actually down 4.89%.
CTR is down 45%, but we didn't change any of the meta tags on our site.
Any ideas why impressions would be up and traffic down? And/or how CTR could decrease without any edits to the metadata for our pages?
Question for all the SEOs out there. Do you always include your target keyword in the image alt tag?
For example, if you had an article on osteoarthritis, and you included a photo of an old man, would you put "old man on a bench" or "old man suffering from osteoarthritis" -- even though you have no idea if the old man suffers from osteoarthritis?
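In markup terms (filename is hypothetical), the choice is between:

```html
<!-- Describes what the image actually shows: -->
<img src="old-man.jpg" alt="old man on a bench">
<!-- Targets the keyword, but asserts something the photo may not show: -->
<img src="old-man.jpg" alt="old man suffering from osteoarthritis">
```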
Hi Will,
Yes, I understand that 'jump to navigation' is determined algorithmically.
We can't actually link to the anchors because, as mentioned, the UI we've developed has better user engagement (one of our main goals is to improve user engagement site-wide). The anchors exist in a sort of expand / collapse format, so that the user can see the entire content and click on titles to see more.
I suppose the other option would be to put the links in a hidden div and add JavaScript so that the user could see them if they wanted (even though, essentially, there isn't any value-add for the user, since they can already see the content list)?
I'm working with a client that owns a medical site. All content is reviewed by someone from their medical board (doctors or nurses), but the content is written by a variety of authors.
I'm wondering if we could create authorship profiles for the doctors and nurses. Would there be any problem with that? (even though they didn't write the content, they just reviewed it for medical accuracy). The name of the reviewer is included on every article.
Any thoughts / feedback / similar experiences would be helpful.
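For context, what we have in mind is the standard authorship markup, something like the sketch below (the name and Google+ profile URL are placeholders; my understanding is the Google+ profile also has to link back to the site under 'Contributor to' for authorship to be verified):

```html
<p>Medically reviewed by
  <a href="https://plus.google.com/112345678901234567890" rel="author">Dr. Jane Smith</a>
</p>
```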
Mainly, would Google use 'jump to' sections of our page in the SERPs? We have anchors, but no links to the anchors, and are hoping that by adding a hidden div with links to the anchors, it will activate 'jump to' navigation.
The hidden div would be added just for the sake of adding the links to the anchors--it wouldn't be visible to users. We've found user engagement is higher for the type of navigation we built, but want to make sure 'jump to' works (is visible in Google SERPs).
Thanks in advance for your help.
Will 'jump to' navigation work when using a hidden div? Basically, we use a navigation system such that when a user clicks on a title, it expands to show the rest of the article. Each title has an anchor associated with it, but nowhere else on the page / site do we link to those anchors.
In order to make jump to navigation work, we are considering adding a hidden div with links to the anchors. Does anyone have experience doing this? Did it work?
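For reference, a rough sketch of what we have in mind (section names and IDs are hypothetical). The worry, of course, is that links in a display:none block are technically hidden content, which is part of why I'm asking:

```html
<!-- Existing expand/collapse UI: clicking a title reveals the section -->
<h2 id="symptoms" class="toggle">Symptoms</h2>
<div class="section-body">...collapsed content...</div>

<!-- Proposed hidden div, added only so crawlable links to the anchors exist -->
<div style="display:none">
  <a href="#symptoms">Symptoms</a>
  <a href="#treatment">Treatment</a>
</div>
```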
In fact, in this post http://googlewebmastercentral.blogspot.com/2009/12/handling-legitimate-cross-domain.html, they mention using a canonical when syndicating content, if the content is similar enough--not sure why they don't mention a canonical in the webmaster guidelines link I included above.
Hi, Cross domain canonicalization is a common practice as well (http://googlewebmastercentral.blogspot.com/2011/10/raising-awareness-of-cross-domain-url.html).
In this article http://support.google.com/webmasters/bin/answer.py?hl=en&answer=66359 Google mentions that if you syndicate content, the syndicating site should include a link back and, ideally, noindex the content, if possible.
I'm wondering why Google doesn't mention including a canonical instead of the link + noindex?
Is one better than the other?
Any ideas?
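To make the two options concrete, here's a sketch of what each would look like on the syndicating site's article pages (example.com stands in for the original source):

```html
<!-- Option 1, per the webmaster guidelines article: link back + noindex -->
<meta name="robots" content="noindex">
<p>Originally published at
  <a href="http://www.example.com/original-article">example.com</a></p>

<!-- Option 2, per the cross-domain canonical posts: -->
<link rel="canonical" href="http://www.example.com/original-article">
```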
Thanks. I would love to hear if anyone has seen any SEO benefit from just straight syndication.
Can anyone provide some guidance on both how to submit your site to Yahoo News as well as some tips for how to get accepted into Yahoo News?
Google doesn't say 'don't syndicate content'; they say 'syndicate carefully' and include a link back to the original source: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=66359
I think our site would be fine given that:
a) we published the content first (it's already been indexed in Google)
b) this is content syndication -- not scraping. We are permitting our client to use our content.
c) there will be a link back to us, in the form of a byline, to identify us as the original source of each article.
This is a major client for us, and they really don't want to use the canonical tag, so I'm looking for advice / best practices / ideas.
I stumbled across this old SEOmoz Whiteboard Friday http://www.seomoz.org/blog/whiteboard-friday-leveraging-syndicated-content-effectively and was wondering if this is still a valid technique given the Panda & Penguin updates.
Is anyone here still doing this (and seeing results)?
One of our clients wants to use about 200 of our articles on their site, and they're hoping to get some SEO benefit from using this content.
I know the standard best practice is to canonicalize their pages to our pages, but then they wouldn't get any benefit--since a canonical tag will effectively de-index the content from their site.
Our thoughts so far:
- add a paragraph of original content to our content
- link to our site as the original source (to help mitigate the risk of our site getting hit by any penalties)
What are your thoughts on this? Do you think adding a paragraph of original content will matter much? Do you think our site will be free of penalty since we were the first place to publish the content and there will be a link back to our site?
They are really pushing for not using a canonical--so this isn't an option. What would you do?
For anyone who has gone through the process of getting accepted into Bing News--do you have any suggestions for what we can do?
Any resources you'd recommend reading?
Why do you think it would be better to space it out?
I've been trying to extract keyword difficulty for a list of 1,400 keywords and am wondering what the most efficient way to do this is. Any links to tutorials would be much appreciated.
Hi Oleg,
Just tried that, but it is only showing 300 URLs for the past week and 600 for the past month.
We have about 1,000 pages we need to eliminate from our site (of about 18,000 URLs). These URLs don't see a ton of traffic, but may have some valuable links.
Would we be better off 404ing these or redirecting them to our homepage? Could redirecting to our homepage hurt us?
We saw a spike in the total number of indexed URLs (17,000 to 165,000)--what would be the most efficient way to find out what the newly indexed URLs are?
Thanks to everyone for the responses.
Based on everyone's responses, I'm assuming you all think that videos count under Google's definition of 'content': http://insidesearch.blogspot.com/2012/01/page-layout-algorithm-improvement.html
I am concerned, though, that Google can't actually read the content of a video. What if you added quite a few videos across a site without adding transcripts or schema.org markup?
What's the best way to get video thumbs to appear in SERPs?
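From what I've read, a video sitemap is one documented way to get thumbnails into the SERPs. A minimal sketch, with all URLs and text as placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <loc>http://www.example.com/article-with-video</loc>
    <video:video>
      <video:thumbnail_loc>http://www.example.com/thumbs/video1.jpg</video:thumbnail_loc>
      <video:title>Placeholder video title</video:title>
      <video:description>Short description of the video.</video:description>
      <video:content_loc>http://www.example.com/videos/video1.mp4</video:content_loc>
    </video:video>
  </url>
</urlset>
```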
We've been using an outreach method that targets resource links, and improvements seem to be minor, even though the links are coming from .edu's and .gov's--has anyone else noticed this trend?
Guest posting seems to work much better in terms of ranking / traffic improvements.
Building a site and wondering: if we have one page whose content changes depending on how it is accessed, is that a good or bad idea?
Thanks in advance!
We are considering adding videos to thousands of article pages, and were wondering whether it would be better to add the video above or below the fold.
They take up quite a bit of space, and push the article content below the fold--would this hurt us?
So, I know indexing search results is a big no-no, but I recently started working with a site that sees 50% of its traffic from search result pages.
The user engagement on these pages is very high, and these pages rank well too.
Unfortunately, they've been hit by Panda. They already moved the section of the site with search results to a subdomain, and saw temporary success.
There must be a way to preserve their traffic from these search result pages and get out from under Panda.
We're seeing soft 404 errors appear in our Google Webmaster Tools account on pages that are blocked by robots.txt (our search result pages).
Should we be concerned? Is there anything we can do about this?
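One alternative we're weighing, assuming we can edit the search results template: stop blocking these pages in robots.txt and add a noindex instead, since Google can only see a noindex directive on pages it is allowed to crawl:

```html
<!-- On each search results page; requires removing the robots.txt block
     so Googlebot can actually fetch the page and see this tag -->
<meta name="robots" content="noindex, follow">
```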
Hi,
A reporter recently mentioned us in a leading publication, and that article was picked up by two other big publications.
Do we benefit from all three links, or do we only benefit from the link once since it is the same article?
I'm reviewing https://developers.google.com/webmasters/smartphone-sites/redirects, and wondering where exactly to add the HTTP Vary header.
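Our current guess, in Apache terms (a sketch assuming mod_headers is enabled; this would go in the config or .htaccess covering the URLs that redirect or change by user agent):

```apache
# Tell caches and crawlers that the response differs by user agent
Header append Vary User-Agent
```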
We need to remove about 50 articles from our site, and we can find similar content to redirect these to.
Would it be a good or bad idea to redirect this content? Or should we just 404 it?
Should a 301 redirect always point to an exact content match?
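For example, in .htaccess terms (paths are hypothetical), each removed article would map to the closest live equivalent rather than to the homepage:

```apache
# mod_alias: permanently redirect a removed article to a similar live one
Redirect 301 /old-article http://www.example.com/similar-article
```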
Working on a site, and noticed their canonical tags follow the structure:
They cited their reason for this as http://www.ietf.org/rfc/rfc3986.txt. Does anyone know if Google will recognize this as a valid canonical? Are there any issues with using this as the canonical?