Thanks for your input.
IMHO, if I exclude ?, then paginated pages like ?page=xx won't be crawled, and thus the rel=next/prev tags on those pages are rendered useless.
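For context, a minimal sketch of what I mean, assuming the paginated series lives at ?page=xx (the URLs are hypothetical):

```html
<!-- Head of example.com/category?page=2 -->
<link rel="prev" href="http://example.com/category?page=1" />
<link rel="next" href="http://example.com/category?page=3" />
```

If Googlebot never crawls ?page=1 or ?page=3, those hints do nothing.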
If you engage with your followers and users on any of those social media accounts, I would advise linking to them from your site. Perhaps you can move the links to a less prominent location and open them in a new tab when users click on them.
If you don't engage with your followers, then you might as well get rid of them.
I would say test it out and see what happens. I would love to know the result (a YouMoz post, perhaps?).
What I assume would happen:
The new link only counts when G-bot crawls the page (and obviously not on each page load), and each time G-bot crawls the page it will see that an old link has been dropped and a new one added. So whatever value you gain from the new link, you will lose from the old one that is no longer there. So I really don't see the value to be had from an SEO point of view. But repeat visitors to your page may click through to those pages. (Again, testing will give you solid proof.)
Hi
I was wondering if anyone is willing to share their experience implementing pagination and canonical tags when it comes to multiple sort options. Let's look at an example:
I have a site example.com (I share ownership with the rest of the world on that one) and I sell stuff on the site.
I allow users to sort by date_added, price, a-z, z-a, umph-value, and so on. So now we have:
example.com/for-sale/stuff1 **has the same result as** example.com/for-sale/stuff1?sortby=date_added (that is the default sort option)
and similarly for stuff2, stuff3, and so on. I can't 301 these because they are relevant for users who come to buy from the site. I could add a view-all page and rel=canonical to that, but let's assume it's not technically possible for this site and there are tens of thousands of items on each for-sale page. So I split them into pages of x items each; let's assume we have 50 pages to sort through.
This is where the shit hits the fan. To avoid duplicate issues, when it comes to page 30 of stuff1 sorted by date, which combination of rel=canonical and rel=prev/next markup do I add?
None of this feels right to me. I am thinking of using GWT to ask G-bot not to crawl any of the sort parameters (date_added, price, a-z, z-a, umph-value, and so on) and use rel=prev/next on the default-sort paginated pages.
My doubt about this is: will the link value that goes into the pages with parameters be consolidated when I choose to ignore them via URL Parameters in GWT? What do you guys think?
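For illustration, here is one of the combinations I keep going back and forth on, as a minimal sketch (the URLs are hypothetical): the sorted page canonicals to the default-sort version of the same page and keeps prev/next within its own series.

```html
<!-- Hypothetical head of example.com/for-sale/stuff1?sortby=date_added&page=30 -->
<link rel="canonical" href="http://example.com/for-sale/stuff1?page=30" />
<link rel="prev" href="http://example.com/for-sale/stuff1?sortby=date_added&amp;page=29" />
<link rel="next" href="http://example.com/for-sale/stuff1?sortby=date_added&amp;page=31" />
```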
Site 1 ---nofollow link---> PDF doc (on Site 2) ---link in PDF---> Your Site
Assuming Site 1 is a high-profile site, the PDF will get some benefit from that nofollow link, and the link from the PDF will help your site. So in theory there is a small amount of indirect SEO value to be had there.
To block search pages from the index, you can try adding a META NOINDEX tag in the head section of the search pages.
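A minimal sketch of the tag, assuming you want the pages crawled but kept out of the index:

```html
<!-- In the <head> of each internal search results page -->
<meta name="robots" content="noindex" />
```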
Hi Dave
404 errors will happen on websites and you don't usually have to worry about them (unless they occur in alarmingly high numbers). You only want to worry about 301ing 404 pages when you are losing link juice through them.
I would use these 3 methods to find 404s on the site:
Like Chris mentioned, use Screaming Frog
Use your analytics package and search for traffic landing on the 404 page
Use Google/Bing Webmaster Tools and check the 404 warnings (in the Crawl Stats area)
From here you would want to 301 all valid 404 error pages to the closest-resembling pages (that visitors will find useful).
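On Apache, for example, a one-off redirect like that can go in .htaccess (both URLs here are hypothetical):

```apache
# Send a dead-but-linked-to URL to the closest useful replacement
Redirect 301 /old-page/ http://www.example.com/closest-matching-page/
```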
I have seen IIS servers throw index.php into URLs (while working on WordPress sites). It's best to talk to your hosting company about it; they will have a good idea why this is the case. Usually you will have to edit the web.config file to rewrite the URLs without index.php in them.
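For reference, the commonly used WordPress rewrite rule for IIS looks something like this in web.config; treat it as a sketch and confirm the details with your host:

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Route pretty permalinks through index.php without showing it in the URL -->
        <rule name="WordPress" patternSyntax="Wildcard">
          <match url="*" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="index.php" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```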
Again, this depends a lot on your server configuration and the CMS you are using; best to ask your host.
Try going to Bing and Google WMT, fetch the site, and see if their spiders have any issues. Usually they will tell you what the problem is.
As a general rule, if the content is the same then it most likely is duplicate. It might be worth plugging the site URL into Copyscape to see how similar they are.
Also, you usually get a warning in GWT if Google sees your content as duplicate.
I agree with the statement about the 1st one too, but what if it were a choice between these two?
Fish oils may raise prostate cancer risks, study confirms
**vs**
Fish Oils may raise Prostate Cancer risks, Study Confirms
Hi Chris
I see what you did there and I am certainly against using it that way.
I was thinking more along the lines of:
The answer Is easy enough--do an A/B Test on some of your pages and see what Result You Get. I'd love to know Your Results.
Since www.thedomain.com is the canonical version, I would link to http://www.thedomain.com from the other pages.
Jump into GWT, fetch the homepage as Googlebot, and submit it to the index. See if you get any errors or warnings there.
There are a few things you could do (NOT that I recommend you do them):
Use a plugin that cuts off the title after roughly 69 characters (http://wordpress.org/plugins/limit-a-post-title-to-x-characters/)
You could eliminate the brand name in the title if it's automatically appended (perhaps that is what causes the title to go over 70 characters)
BUT what I would recommend is to export the list of pages that have titles over 70 characters and write proper titles for them. It's the best option, BUT as you can imagine it's not going to be an easy task. I guess NO Pain NO Gain resonates well here. I would do this for the important pages first and slowly work through the rest over time.
If you are sending Facebook and Twitter traffic to these pages, try to use Facebook Open Graph and Twitter Cards.
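As a rough sketch (all the values here are placeholders you would swap for your own):

```html
<!-- Open Graph tags for Facebook -->
<meta property="og:title" content="Your Page Title" />
<meta property="og:description" content="A short, compelling description." />
<meta property="og:image" content="http://www.example.com/images/preview.jpg" />
<!-- Twitter Card tags -->
<meta name="twitter:card" content="summary" />
<meta name="twitter:title" content="Your Page Title" />
<meta name="twitter:description" content="A short, compelling description." />
```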
Hey Guys
I am wondering if any of you have done any study or testing on this (or perhaps you have come across one at some point in your career).
Personally I feel that when writing a description, it makes sense to Capitalize (first letter only) the Keywords and other words I want to emphasise (perhaps stuff like Buy, High Quality, Best, etc.). I want to pick your brains on this and see what you guys think about it. I have not tested the effects on CTR yet; if someone else has, it would be a good resource for me to go through. (And if no one else has done a relevant study, I might do one at some stage.)
Regards
Saijo
Personally I would 301 the tag page "Products Tagged 'Knicks'" to the "New York Knicks Shirts" collections page. The only issue I can think of is: will the collection be updated over time to include all the new items that might potentially be tagged as Knicks?
So perhaps it might be better to keep the tag page for Knicks as the go-to page for "Knicks T-Shirts". You can edit the tag template to add custom banners and descriptions, and I would also change the h1 tag on the tag page to say something more relevant than just "PRODUCTS".
Try to fetch the URL as Googlebot and see if the issue remains. If it does, you might need to review your robots.txt file. Otherwise it was probably a transient issue Googlebot had while crawling the site.
You will need to set up both http://clientwebsite.com and http://www.clientwebsite.com to set the preferred domain in GWT, so it's OK to have both in there. But your site should (in most cases) only be accessible from either the www or the non-www version.
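On Apache, forcing the www version usually looks something like this in .htaccess (a sketch only; swap in the client's actual domain):

```apache
<IfModule mod_rewrite.c>
  RewriteEngine On
  # 301 the non-www host to the www version
  RewriteCond %{HTTP_HOST} ^clientwebsite\.com$ [NC]
  RewriteRule ^(.*)$ http://www.clientwebsite.com/$1 [R=301,L]
</IfModule>
```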
I am sure that if you had searched for "~images -images" before Google decided to drop the ~ operator, "pictures" would have come up as an alternate keyword; Google saw it as a close match for "pictures" and served your site for your query.
It was one of my favourite tools for finding alternate keywords to optimise for; sadly that is gone now.
As for on-page vs off-page: both are important. It's not an either/or condition; you should do both to get good results, especially if the competition is strong.
The title for that page is:
News & Events - July 2013 - Newman Senior High School - Newman Senior High School is located in the mining town of Newman 1200km North of Perth and is part of the Pilbara region. The school benefits from a diverse cultural student and staff population. This has promoted cultural acceptance of peoples from diverse backgrounds enriching the school community.
and for individual pages like http://www.newmanshs.wa.edu.au/news-events/events/07-2013/4/, http://www.newmanshs.wa.edu.au/news-events/events/07-2013/5/, etc., the same title gets reused:
Events - Newman Senior High School - Newman Senior High School is located in the mining town of Newman 1200km North of Perth and is part of the Pilbara region. The school benefits from a diverse cultural student and staff population. This has promoted cultural acceptance of peoples from diverse backgrounds enriching the school community.
Depending on the CMS you use, there might be ways to generate a better title (best to keep it between 60-69 characters). You seem to be using something called BamCMS, so you will probably need to get them involved in finding a solution.
Best to ask a staff member about this, but I am pretty sure Moz does not take canonical tags into account while scanning for duplicate titles.
If you properly use rel=prev/next (http://googlewebmastercentral.blogspot.com.au/2011/09/pagination-with-relnext-and-relprev.html) on paginated content you DON'T have to worry about duplicate titles.
Your page http://www.solidconcepts.com/industries/aerospace-parts-manufacturing/ says the canonical version is the https version of the page (https://www.solidconcepts.com/industries/aerospace-parts-manufacturing/), but the https version does a 301 to the non-https version, so you are sending Google mixed signals.
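Assuming the non-https version is the one you want indexed, both versions of the page would carry the same canonical, e.g.:

```html
<link rel="canonical" href="http://www.solidconcepts.com/industries/aerospace-parts-manufacturing/" />
```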
You could try one of the generators listed here: https://code.google.com/p/sitemap-generators/wiki/SitemapGenerators
Yes, it's good practice to mention your sitemap in robots.txt, and don't forget to submit it (at the very least) via Google/Bing Webmaster Tools.
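The robots.txt part is a single directive, e.g. (the location here is hypothetical):

```
Sitemap: http://www.example.com/sitemap.xml
```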
Here is the general advice:
301 all the old URLs to the appropriate new URLs (where they exist)
For the stuff that you think is not worth moving across (events or time-sensitive information), you could redirect each of those individual posts to the category page (if you have one)
e.g. an old event post gets 301 redirected to the events category page
Would that be a possible solution? If not, let us know why and we can look at other alternatives.
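On Apache, a sketch of both kinds of redirect (the URLs are hypothetical stand-ins):

```apache
# An old post that has a direct replacement on the new site
Redirect 301 /2009/summer-gala/ http://www.example.com/events/summer-gala/
# A time-sensitive post that is not worth moving, sent to its category page
Redirect 301 /2009/holiday-fundraiser/ http://www.example.com/events/
```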
Personally I would index comments but add a nofollow attribute to links within the comments. See what sort of title and description the comment URLs generate (you don't want duplicate titles and metas for those).
EDIT:
Do you have some sample URLs we can look at?
Hi Folks
When it comes to malware: I have a site that uses iframes to show content off 3rd-party sites, which at times get infected. Would you recommend 404ing or 503ing the pages with the iframe till the issue is resolved? (I am inclined to use 503.)
Then take the 404/503 off and ask for a reindex (from the GWT malware section)?
OR
Ask for a reindex as soon as the 404/503 goes up? (I do understand we would be asking to index a non-existent page, but the malware warning gets removed.)
PS: it makes sense for this business to showcase content using iframes on these special pages. I do understand these are not the best way to go about SEO.
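For what it's worth, a minimal Apache sketch of the 503 option (/partner-widgets/ is a hypothetical path for the iframe pages):

```apache
<IfModule mod_rewrite.c>
  RewriteEngine On
  # Answer 503 for the infected iframe pages until the 3rd party cleans up
  RewriteRule ^partner-widgets/ - [R=503,L]
</IfModule>
```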
I don't have much experience with any of them, so I can't personally recommend one over the other. That said, I think the WPMU plugin comes with support, so if you are already using that, why not reach out to them and ask what's the best way to resolve your issue?
There are a few options on Wordpress
And some paid ones on CodeCanyon or WPMU
If they are both about the same topic then a sitewide link could help; if not, you might want to consider adding a post that talks about your industry with a well-placed link.
You could try an exact-match query search (using quotes) and individually contact all parties. If that does not sound like a feasible option, you might want to consider using the link disavow tool.
Since you have consolidated the content and pasted it all together on a single page, why not 301 the old pages to the new consolidated page that has all the content from the old ones?
If you can answer NO to all of these, then I would suggest doing a 301.
```
User-agent: Googlebot-Mobile
Disallow: /m
```
Wouldn't that statement prevent Googlebot-Mobile from crawling the mobile pages? Why would you want to do that?
I seriously doubt the issue is with WP; perhaps a plugin is conflicting with how the page is rendered. Make sure you have a database and file backup. You could FTP in and rename the plugins folder to _plugins (this will disable all plugins) and check if the issue persists. Rename the folder back to plugins once you finish testing.
NOTE: with some plugins you will have to manually go in and enable and configure them after you do this.
By default WP outbound links are follow links; you can use plugins to change that behaviour.
There is no point having rel=canonical on 404 pages; search engines drop pages that return a 404, so the tag gets ignored anyway.
I have had a good experience using http://wordpress.org/plugins/redirection/ for my redirection issues, so you could try that. If you are not familiar with WordPress, get someone who knows their way around it to give you a hand.
Your permalinks look really weird (http://www.yourinstrument.com.au/blog/index.php/2013/06/gigging-musicians-guide-principals-professionalism/ has index.php between the domain and the post title). I have seen that come up on IIS servers. You can read this for more info on how to avoid it: http://codex.wordpress.org/Using_Permalinks. It's also a good idea to get rid of the date from the URL.
I would seriously consider hiring someone to look over how things are set up on the site.
When it comes to subdomain vs subfolder, SEOs usually prefer the subfolder. So if it's easy and financially suitable to go that way, I would say go for the subfolder. In this day and age the SEO impact is minute (if any).
Matt Cutts's view ( Oct 2012 ) : http://www.youtube.com/watch?feature=player_embedded&v=_MswMYk05tk
You might want to "noindex,follow" them, so they do not appear in the SERPs but the links from there will still be followed by spiders and can possibly pass link juice.
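A minimal sketch of the tag:

```html
<!-- Keep the page out of the SERPs but let spiders follow its links -->
<meta name="robots" content="noindex,follow" />
```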
I would start by adding a higher priority for http://www.candygalaxy.com/bit-o-honey/ in your sitemap (see the sketch after this list).
You use an H3 for the title on http://www.candygalaxy.com/bit-o-honey/, while http://www.candygalaxy.com/bit-o-honey-bulk/#.UcOrvj4smF4 and http://www.candygalaxy.com/brands/Bit%252dO%252dHoney.html have an H1. I would change that too.
Set up authorship for your content (http://www.google.com/insidesearch/features/authorship/index.html).
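For the sitemap point above, a hypothetical entry bumping the priority of the preferred page might look like:

```xml
<url>
  <loc>http://www.candygalaxy.com/bit-o-honey/</loc>
  <priority>0.8</priority>
</url>
```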
You can ask the 3rd-party site to remove it (which will most likely be a waste of time), send a DMCA takedown notice, get Google to de-index the content (https://support.google.com/bin/static.py?hl=en&ts=1114905&page=ts.cs), etc.
I assume you are already nofollowing those links .
Are the referral links appearing in the trackback section (or something similar)? What kind of CMS do you run? Perhaps there is a way to blacklist certain domains. (On WordPress, http://akismet.com/ does a pretty good job on the comment and trackback front for me.)
I run a blog and the sitemaps index the posts, categories, tags and pages. Without seeing the actual site I don't think I can offer much valuable advice, but generally speaking:
With categories and tags I make sure to add an on-page description; this way they are not just lists of posts. I am also very careful when adding categories and tags, as I don't want a lot of duplicate content around them. (I am still in the process of cleaning up the tags after I went a little wild with them.)