Can't understand poor rankings
-
Hi guys,
Our site gets an A for on-page optimisation and has many more backlinks and much more content than our competitors, yet we rank nowhere for the majority of our keywords.
Please help!
Mike
-
Thank you. I will try this and let you know how it works out.
-
I think it is a taxonomy issue. Go to your WordPress dashboard and SEO (Yoast). Click on "Titles and Metas" and make sure meta robots noindex, follow is checked for categories and tags. WooCommerce probably creates additional taxonomies which need to be set the same way. Also click on "Other" and set author archives and date archives to disabled and noindex, follow. I see the author archive "Yossil" is creating duplicate pages. You need to learn the Yoast settings, I think, and you will get rid of these duplicate pages. It is not straightforward by any means, but worth the time.
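If you want to verify those settings actually took effect without clicking through pages by hand, a short script can fetch a few archive URLs and look for the noindex directive. This is only a sketch: the URLs are hypothetical placeholders, and the regex assumes the meta tag lists name before content (Yoast's usual output).
```python
import re
import urllib.request

# Hypothetical archive URLs -- swap in your real category/tag/author pages.
ARCHIVE_URLS = [
    "http://www.example.com/category/coffee-machines/",
    "http://www.example.com/tag/vending/",
    "http://www.example.com/author/yossil/",
]

# Matches <meta name="robots" content="..."> with name before content,
# which is how Yoast normally writes the tag.
ROBOTS_META = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']',
    re.IGNORECASE,
)

for url in ARCHIVE_URLS:
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    match = ROBOTS_META.search(html)
    content = match.group(1) if match else "(no robots meta tag found)"
    status = "OK" if match and "noindex" in content.lower() else "CHECK"
    print(f"{status:5} {url} -> {content}")
```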
-
Check that you have a cumulative disavow file and haven't just overwritten the first disavow with the second.
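If it helps, here is a minimal sketch of how you could merge two disavow files into one cumulative file. The file names are assumptions; the format itself (one domain: entry or full URL per line, # for comments) is Google's documented disavow format.
```python
def read_entries(path):
    """Return the non-comment, non-blank lines of a disavow file."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f
                if line.strip() and not line.lstrip().startswith("#")]

# Hypothetical file names for the two disavows you uploaded.
first = read_entries("disavow-2013.txt")
second = read_entries("disavow-2014.txt")

# Keep every entry from both files, preserving order, dropping duplicates.
combined = list(dict.fromkeys(first + second))

with open("disavow-cumulative.txt", "w", encoding="utf-8") as f:
    f.write("# Cumulative disavow file: merged from both earlier uploads\n")
    f.write("\n".join(combined) + "\n")

print(f"{len(combined)} unique entries written to disavow-cumulative.txt")
```
Then upload the combined file as your new disavow file; an upload replaces whatever was there before, which is why it has to contain everything.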
-
Hi Chris
I did notice the 5,000+ pages but can't locate where they come from.
I can get as far as 230 results and then the listing just ends, but it still reports 5,000+ in total.
Not really sure how to resolve that.
-
First, check GWT (Google Webmaster Tools) and see if you have received a manual penalty. You cannot file a reconsideration request if you have not received a manual penalty. I don't think you have, since using the operator site:http://www.aquaspresso.co.za/ you show up indexed for 5,240 pages. That suggests there may be a duplicate-content Panda issue. WordPress may be creating duplicates through incorrect category/tag settings. I see you have Yoast, and this is a good way to manage it. You also run WooCommerce, and I think Yoast has a plugin specific to WooCommerce. Hoping someone with more knowledge chimes in, but 5,240 pages seems odd.
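One rough way to see where those 5,000-odd pages are coming from is to pull the XML sitemap and tally URLs by path segment; big counts under /tag/, /category/, or /author/ usually point at exactly the kind of archive duplicates described above. A sketch, assuming Yoast's usual sitemap index location (the exact URL may differ):
```python
import urllib.request
import xml.etree.ElementTree as ET
from collections import Counter
from urllib.parse import urlparse

# Yoast usually exposes a sitemap index here; the exact URL is an assumption.
SITEMAP = "http://www.aquaspresso.co.za/sitemap_index.xml"

def fetch_locs(url):
    """Return every <loc> value in a sitemap or sitemap index."""
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    return [el.text for el in root.iter() if el.tag.endswith("loc")]

# A sitemap index points at child sitemaps; flatten one level.
urls = []
for loc in fetch_locs(SITEMAP):
    if loc.endswith(".xml"):
        urls.extend(fetch_locs(loc))
    else:
        urls.append(loc)

# Tally the first path segment. Spikes in tag/category/author sections
# usually mean thin archive pages are getting indexed.
sections = Counter(urlparse(u).path.strip("/").split("/")[0] for u in urls)
for section, count in sections.most_common(15):
    print(f"{count:6}  /{section}/")
```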
-
Here is how to do a reconsideration request:
https://support.google.com/webmasters/answer/35843?hl=en
which should lead you to this page:
https://www.google.com/webmasters/tools/reconsideration?pli=1 where you can check for a manual penalty.
I hope this helps you guys.
-
Thanks Cole.
I do that in GWT, right?
Mike
-
Yeah, I wonder if your domain name has been given a manual penalty.
I would do a reconsideration request now (having done the two disavows already).
GWT will still show the links that have been disavowed. Don't let that worry you; they've still been disavowed.
-
Hi Cole
Sorry, I only saw the rest of your reply now. Yes, in the last year we disavowed a lot of spammy backlinks, but it seems like they have not been excluded.
We had an SEO a few years ago do some bad stuff and have done two disavows since then, which seems to have helped a little.
-
Thanks for the thorough attempt. Nope, we've always owned it.
-
I've looked at the following:
- the DA and PA of your home page are at the same level as, if not better than, your competitors'.
- your website speed is fine.
- you don't have any backlinks that are spammy.
- your on-site optimization is good.
- you have good content.
- your site is visually appealing, so it can't be CTR and bouncing.
- you appear to have good indexation.
- your robots.txt file isn't the issue (a quick check is sketched below).
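For what it's worth, here is the quick robots.txt check I mean, using only Python's standard library; the sample paths are hypothetical:
```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("http://www.aquaspresso.co.za/robots.txt")
rp.read()

# Sample paths -- hypothetical; test the sections you actually care about.
for path in ["/", "/products/", "/category/coffee-machines/"]:
    url = "http://www.aquaspresso.co.za" + path
    verdict = "allowed" if rp.can_fetch("Googlebot", url) else "BLOCKED"
    print(f"{verdict:7} {url}")
```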
Man, I'm very curious as to what others say. How long have you owned that domain? Could it be that someone previously had it and received a penalty?
ETA:
*The sites that appear to rank higher than you have part of the keyword search query in their domain name, but this isn't a "be all" scenario, so that can't FULLY be it either. It doesn't explain why you don't show up on the second or third page.
*I didn't see you on the second or third page for "coffee vending machines south africa." I really wonder if this site has had a penalty before.
Have you ever disavowed any backlinks in GWT before? If so, which sites?
-
We need more information. What is the domain?