If that is on a product page, then you should have a product description that makes each page unique, so duplicate errors are surprising.
If that information is its own page, I would find a way of integrating it with existing ones.
For a simple, quick way, I would use a bulleted, indented list. It works at a glance and is less demanding than a diagram.
Something like this (taking your list as an example and moving all the level 2s up a level; I'm sure you don't want www.url.com/Home/About-us):
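For illustration, a hypothetical structure might be bulleted like the sketch below (none of these URLs are from the question):

```
Home            www.url.com/
  About us      www.url.com/about-us
  Products      www.url.com/products
    Widgets     www.url.com/products/widgets
    Gadgets     www.url.com/products/gadgets
  Contact       www.url.com/contact
```

The indentation alone shows the parent/child relationships, and pairing each page with its URL makes any structural problems obvious at a glance.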
If your Moz stats show 0/1 then it's very likely the page/domain has not been crawled by the Moz app. It usually takes 1-2 months to see accurate data for a new domain in terms of PA/DA.
EMDs are VERY powerful. In the industry I work in, there is almost always an EMD on page 1 in positions 4-6 with a link profile a 100th of the power of the sites it is beating, and usually with similar optimisation otherwise.
Because of this, I expect it to be one of the things Google stops taking into account sooner rather than later. So I'd use your inflated position to build a stronger natural link profile for such a time.
I can't quite answer how Google uses it, but after looking at this recently, I was surprised at how large a portion of the adult workforce has relatively low reading comprehension. Make sure your target audience can read your content, then worry about Google second. You might find that this alone restricts you to the point where there's not enough wiggle room left to worry about the score.
If the large number of keywords is natural (an SEO agency talking about SEO in a large blog post, for example, might mention it many times naturally), then I wouldn't worry about it.
If you have been artificially stuffing your documents with keywords, I would be more concerned about how it reads to a human long before worrying about Google. People tend not to trust obviously keyword-stuffed content.
Avoiding 'keyword stuffing' has been a recommendation since long before the coming update. I remember a blog post here, and a similar Q&A question on over-optimisation, where removing 2-3 instances of a keyword massively improved rankings - showing that it has been possible to be penalised for over-optimisation for a long time now.
One of the blog posts can be found here - http://www.seomoz.org/blog/lessons-learned-by-an-over-optimizer-14730
Hope this helps.
Speaking from a user's perspective, that's the one I would like to see changed the most myself.
So many times there is some exact-match keyword domain with poor content ranking high on the first page, seemingly on the weight of that factor alone.
If the changes are along those lines, it will make SEO 'easier' unless you were using such techniques yourself.
My other guess is that they're going to improve their 'best guesses' for those pages without semantically correct HTML etc. That would 'even the playing field' but would still favour optimised content.
Last time I came across such an issue, I mostly started by making the 'easy' changes that reduced the number the most.
In the last case, it was implementing a 301 to the www version of the site (cutting the errors in half) and putting a canonical on one search page.
This got the number down to the point where it was easier to make decisions like 'Is it worth making friendlier URLs?' and to discover more interesting places duplicate content was being generated.
It's one of those things where I would always aim for 0 where I can. It usually means that the URL or site structure can be improved significantly, or it's such an easy fix that it's hard to justify not doing it.
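As a sketch (the URLs here are hypothetical, not from the question), the canonical on a search page is a single tag in the `<head>`:

```html
<!-- On www.example.com/search?q=widgets&page=2 (hypothetical URL) -->
<!-- Every variant of the search page declares one canonical version,
     so crawlers stop reporting the variants as duplicates -->
<link rel="canonical" href="http://www.example.com/search" />
```

Combined with a 301 from the non-www to the www hostname, this kind of one-line fix is usually what collapses a large duplicate-content count quickly.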
If you try SEOmoz's Rank Tracker (http://www.seomoz.org/rank-tracker), it shows you correctly. I'd use that to manually get your rankings for your upcoming report.
The tracker and reports are only run once a week, so they will probably reflect you correctly at that time as well.
What's most likely to have happened is that when your position was last checked, the SERPs fluctuated and it missed you. We've had a page or two drop out of the rankings briefly at times, and it isn't something to worry about unless it persists.
I know people in England, at least, use Google UK. Not sure who uses GB.
The Rank Tracker is showing you at position 5 as well, so presuming your report/campaign is set up correctly, it should show that after your next crawl.
If it's not, your likely problems are a misspelling, the wrong engine being used, or the campaign being set up for one sub-domain while another sub-domain is appearing in the results.
Hopefully that should clear everything up. If not, I'll try to answer all I can. Best of luck beating justfooderp.com... (who probably didn't notice the 'derp' in their URL)
SERPs for 'food ERP software' look like this to me. I do not know which site is yours, but if it's here, then it's possible that either the report/campaign is set up wrong, or at the exact time the report was last made, the results fluctuated. Try using the Rank Tracker tool to get a fresh result (remember to change it to Google UK and check 'Check Entire Subdomain').
<cite>www.justfooderp.com/</cite> JustFoodERP is more than food software; Food ERP solutions include food processing software, food business services plus food business solutions, food ...
<cite>www.tgiltd.com/industries/food-and-beverage-software-solutions.html</cite> Food processing software solutions from TGI deliver fully-integrated food ERP software functionality for improved operational efficiency and bottom-line business ...
<cite>www.smcdata.com › Software Choices › Food Software</cite> Our food ERP software provides the information needed to tighten control over payables and receivables, improve cash flow, and react to business cycles.
<cite>www.afsi.com/</cite> - <cite>United States</cite> AFS Technologies (877) 821-3007 | Providing food distribution and food ... software, customized food distribution software and ERP software for foodservice and ...
<cite>columbusglobal.com/en-GB/Food</cite> Food manufacturing, processing and supply chain ERP software from Columbus based on the Microsoft Dynamics platform. 6000+ ERP software ...
<cite>www.fooddecisionsoftware.com/</cite> WinFDS - Food Distribution Software, Food Manufacturing Software, Food ERP Software, Food Traceability Software, Food Processing Software.
<cite>www.tecman.co.uk/foodware</cite> Technology Management provides an integrated Microsoft business system for UK food manufacturing and food processing businesses.
<cite>www.bcfooderp.com/</cite> bcFood ERP - top food industry software ERP solutions includes food processing software, food business services plus food industry distribution - manufacturing ...
<cite>panorama-consulting.com/industries/food-and-beverage-software/</cite> Panorama's ERP consultants have experience with food software vendors and can locate a solution for your manufacturing and food distribution software ...
<cite>www.eicsoftware.com/</cite> EIC ERP Software For Food Distributors and Food Manufacturers - Top Of The Line, Affordable, Easy To Use, and Powerful Distribution Systems.
Most of those methods are not foolproof. Using other computers or IP addresses and clearing your search history are not going to get you de-personalised results.
If you're going to look yourself, start a new 'incognito' window in Google Chrome, and you should get completely de-personalised results.
If you're still showing on page 1, double-check your report. Make sure the search engine is set to Google UK, and that you are using the keyword exactly (letter for letter).
If all that fails, and if you're willing to provide the keyword, someone here could probably try to confirm what you are experiencing.
Here's an article from Google Webmaster Central with instructions on how to implement it.
http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
And a quick example of implementation by Yoast for 'page 2' of results.
http://yoast.com/rel-next-prev-paginated-archives/
Just a quick note: on 'page 1' there should be no rel=prev (your mysite.com/main-category in this case), and on the final page there should be no rel=next. All other pages should have both.
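For instance, a middle page in a three-page series would carry both tags in its `<head>` (the URLs below are hypothetical, based on the mysite.com/main-category example):

```html
<!-- <head> of mysite.com/main-category?page=2 (hypothetical URL) -->
<!-- Both tags appear because this page is neither first nor last -->
<link rel="prev" href="http://mysite.com/main-category" />
<link rel="next" href="http://mysite.com/main-category?page=3" />
```

Page 1 would carry only the rel=next tag, and the final page only the rel=prev tag.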
Hope these help.
I'd implement rel=prev and rel=next on the pages to indicate that they're paginated, with the first page mentioned being the first in the chain.
rel=canonical should then point to each page's actual URL, not the view-all page.
I think that is the 'correct' implementation for paginated content since rel=prev and rel=next were introduced.
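Put together on, say, page 2 of a category, that looks like the following sketch (all URLs hypothetical):

```html
<!-- <head> of www.example.com/category/page/2 (hypothetical URL) -->
<!-- The canonical points at this page itself, NOT the view-all page -->
<link rel="canonical" href="http://www.example.com/category/page/2" />
<link rel="prev" href="http://www.example.com/category/" />
<link rel="next" href="http://www.example.com/category/page/3" />
```

The self-referencing canonical is the key design choice here: it keeps each paginated page indexable in its own right while rel=prev/next tells the engine how the series fits together.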
A canonical tells Google that all 3 are the same page. I'm honestly not sure if there is a loss, but if there is no redirect involved it should, in theory, not be a problem.
What I /would/ recommend, however, is setting up some 301s and canonicals to the homepage.com address if you can. If it /is/ a problem, or ever becomes one, this will ensure that you're getting the most use out of the links.
On the HTML, I personally just checked your source. I also use a Firefox extension that auto-validates it while I look at it, which is great for a quick eyeball. You can find it here: http://users.skynet.be/mgueury/mozilla/
On Panda, it's run on a fairly regular basis, often with a change to the algorithm, so it is possible your page was re-classified.
As for on-page ads, this article (http://www.seomoz.org/blog/just-how-smart-are-search-robots) points out that the crawlers take above-the-fold content into account, and are rendering the page to determine this.
My greatest suspicion would still be that you lost some kind of freshness boost, especially as it's a page detailing an offer. Alternatively, you may have lost a few very powerful links to the offer (knocked off the page of a list of offers, etc.).
Also, on the ad: I'm with everyone else in surprise that putting up that ad didn't massively increase the page's bounce rate and reduce its conversion rate for the offer (as opposed to the e-book). There was an article recently pointing out that Google may (as in, it's the subject of debate) be taking bounce rate into account, though that would only explain why it's not recovering, as opposed to the initial drop, if you ran the ad after the drop.
Generally, you would expect your first-time visitors to leave the site and never return when you use such an ad, which could trigger such an issue. One such article is here -> http://www.searchenginejournal.com/actual-bounce-rate-vs-bounce-rate-and-why-the-difference-matters-for-seo/31852/
I would try improving the page's HTML, fattening up the content somewhat and disabling the ad for the page, then waiting a couple of months to see if there is any change. If not, then it at the very least rules out the most likely culprits.
Other avenues you can explore are to investigate other sitewide changes, see if you can check historical link data for the page to find whether you lost any powerful links, and to ensure you're not just being legitimately outranked by better competitor pages.
The first thing I would try is reverting the on-page changes; one other thing to check would be whether any sitewide changes have been made.
When reverting, make doubly sure that you haven't done something like remove the h1 tag etc. while you were removing the keywords.
To add my experience to this, it seems to happen more commonly when Google feels your h1 or the search term etc. is more relevant to the search.
Depending on the search, it can help or hurt. In terms of ranking for your keyword it shouldn't be a problem, but clickthrough-wise it is more iffy.
There are a few possible things here.
First, the page might have received a 'freshness' boost, as it is an offer, which would explain an excellent ranking with few links.
The page also has a /lot/ of HTML errors, which could cause search engines to misinterpret the page if they're bad enough.
The overlay ad might be seen as spammy, and the content could be seen as thin.
As for over-optimisation, it seems within tolerance to me, but there are too many on- and off-page issues to be sure.
Google is definitely OK with this; Bing apparently might have issues, but the only way around that would be implementing it for all the duplicate pages but not the original (which is less trivial to detect, or impossible, and is why Google allows it).
Due to the nature of the objection (Bing claims you're telling it that the page is a duplicate of itself; see the article John linked), I would actually expect Bing to change that in the future to something more sensible, if true.
Overall, I would implement it on every page, just to prevent all those links to it with random tracking parameters etc. that people could throw on.
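As a sketch (hypothetical URLs), a self-referencing canonical is what catches those tracked links:

```html
<!-- On www.example.com/page, which people may also link to as
     www.example.com/page?utm_source=newsletter (hypothetical URLs) -->
<!-- Every parameter-laden variant is consolidated onto the clean URL -->
<link rel="canonical" href="http://www.example.com/page" />
```

Because the tag is emitted regardless of the query string, any link someone decorates with tracking parameters still credits the clean URL.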
Without seeing the site in question I cannot be 100% sure, but if OSE is seeing the page as redirecting, there is a good chance Google does too (and as such, would be indexing your error page and not the product page). I would suspect that a searchbot coming to the page via an external link is always triggering an error (or alternatively, all pages are being redirected through the error page first). Perhaps you could try Google Webmaster Tools' 'Fetch as Googlebot' to see if it is causing a problem?
If this occurs as part of labelling the images naturally, I suspect you'll avoid penalisation (but I wouldn't expect you to get a bonus for MINKW; you might for the long tail, however).
Generally, I would be careful using alt tags for pure SEO benefit. They are there to help with accessibility, and you risk losing that if your alt text doesn't describe your image (or at least if most of them don't).
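As an illustration (the file names and text below are made up), an alt that genuinely describes the image can still carry a keyword naturally, while a stuffed one sacrifices accessibility:

```html
<!-- Descriptive alt text that happens to include a relevant keyword -->
<img src="/images/red-widget-pro.jpg"
     alt="Red Widget Pro shown from the front with its carry case" />

<!-- Risky: alt written purely for SEO, useless to screen-reader users -->
<img src="/images/red-widget-pro.jpg"
     alt="buy widgets cheap widgets best widgets" />
```

A screen reader reads the alt text aloud in place of the image, which is why describing the image should come before any keyword consideration.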
I'm taking a bit of a guess at your question: that by 'peru' you mean page, and by that you mean 'what position on the search results page I appear on'. If I am wrong, then feel free to ignore this response!
In order to find your true position, you need to be using a de-personalised search (Google tends to rank things you search for or visit a lot higher, based on your browsing habits!). Generally the easiest way of doing this is using Google Chrome, selecting 'New incognito window' via the spanner menu, and using that window to perform your search.
The Rank Tracker on SEOmoz uses a very similar method, and you may well find, doing it this way, that you are not actually in the top 50. In which case you will need to look up an external tool, or go through the pages yourself.
I hope that answers your question.
I am not entirely sure if this will prevent the duplicate content issue, but you could try setting up rel=next / rel=prev on the pages to make it explicit that they are paginated content, and then change the rel=canonical on the individual pages to point to themselves instead of the index page.
If it's the rel=canonical causing confusion, that should help.
That means that of those 16 root domains, at least one has several hundred links to your domain.
i.e.
www.example.com might have a sidebar or footer link to your page, thus generating a link on every page of its website to yours.
If your metric includes internal links, it's also possible you have 2,000 pages on your own website with a logo or footer link to the page, thus causing a lot of links from one root domain (yours).
Using Open Site Explorer you should be able to see which websites are responsible for the large number of links.