Why, if PR and DA are higher, is this site lower in the SERPs?
-
Hi there
Why, if PR and DA are higher, is this site lower in the SERPs?
Example:
Pos 1
www.inkfactory.com/ink-cartridges/samsung/sf4200-series
PR:1 DA:48
Pos 3
www.internet-ink.co.uk › SAMSUNG INK
PR:25 DA:60
I thought if you had the higher DA and PR you should outrank those below you?
-
This question is asked over and over again in the SEOmoz Q&A.
"Wah! My DA is the best but my site isn't #1. WTF?"
This question rarely gets a satisfying answer, and when it is answered well, the person who asked it is usually not happy to hear the truth.
This is one of the most widely used SEO tools on the internet, and people don't know what it means.
Since these same questions recur so often, and with such passion, SEOmoz should address them with a detailed article posted where the people who use this tool will clearly find it.
That would ensure that a good tool is used properly, and that so many people aren't going around with their panties in a wad.
-
There are a lot of ranking factors. One of the most important is fresh content: if a site is updated with new content weekly, it will tend to rank higher than one whose last post, article, or edit was a long time ago.
Related Questions
-
Our Sites Organic Traffic Went Down Significantly After The June Core Algorithm Update, What Can I Do?
After the June core algorithm update, the site suffered a loss of about 30-35% of traffic. My suggestions for getting traffic back up have been to add metadata (since the majority of our content is lacking it), as well as linking where possible, adding keywords to image alt text, and expanding and adding content, as it's thin content-wise. I know that from a technical standpoint there are a lot of fixes we can implement, but I do not want to suggest anything as we are onboarding an SEO agency soon. Last week, I saw that traffic for the site went back to "normal" for one day and then dipped 30% the next day. Despite my efforts, traffic has been up and down, but the majority of organic traffic has dipped overall this month. I have been told by my company that I am not doing a good job of getting the numbers back up, and have been given a warning stating that I need to increase traffic by 25% by the end of the month and keep it steady, or else. Does anyone have any suggestions? Is it realistic and/or possible to reach that goal?
Algorithm Updates | | NBJ_SM2 -
Any important change in SERPs between Nov 17th and Nov 20th?
I've noticed important changes in visibility for some websites between Nov 17th and Nov 20th. Some of the sites that monitor SERPs have detected similar movement (including MozCast). Do you know if an important change in the SERPs took place during those days?
Algorithm Updates | | emerlo0 -
Confused about PageSpeed Insights vs Site Load for SEO Benefit?
I was comparing sites with a friend of mine, and I have a higher PageSpeed Insights score for mobile and desktop than he does, but his Google Analytics shows his page load speed as higher than mine. So, assuming all things equal, same quality of content, links, etc., is it better to have a site with a higher PageSpeed score or a faster site load? To me, it makes more sense for it to be the latter, but if that's true, what's the point of PageSpeed Insights? Thanks for your help! I appreciate it. Ruben
Algorithm Updates | | KempRugeLawGroup0 -
Canonical when using others sites
Hi all, I was wondering if this is a good way to safely have content on our website. We have a job search website, and we pull content from other sites. We literally copy the full content text from its original source and paste it on an individual job page on our own site. On every individual job page we put a canonical link to the original source (which is not our own website). On each job page, when someone wants to apply, they are redirected to the original job source. As far as I know this should be safe, but since it's not our own website we are canonically linking to, will this be a problem? To compare, it's what indeed.com does: they take one or two sentences from the original source and put it as an excerpt on their job category page (e.g. an "accountant in new york" category page). When you click the excerpt/title you are redirected to the original source. As you might know, indeed.com has very good rankings with almost no original content whatsoever. The only thing that is unique is the URL of the indeed.com category page it's on (indeed.com/accountant-new-york), and sometimes the job title. The excerpt is always duplicated from other sites. Why does this work so well? Would this be a better strategy for us to rank well?
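For what it's worth, a cross-domain canonical is just a `<link rel="canonical">` tag in the page `<head>` pointing at the other site, and Google treats it as a hint rather than a directive. Below is a minimal sketch (the URLs are invented for illustration) that parses a page and reports the canonical it declares, which is handy for spot-checking that the tag actually made it into your templates:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical" and self.canonical is None:
            self.canonical = a.get("href")

# A job page on the aggregator pointing at the original posting
# (both domains are made up for this example):
page = """
<html><head>
  <link rel="canonical" href="https://original-jobs-site.example/jobs/accountant-new-york">
</head><body>Copied job description...</body></html>
"""

finder = CanonicalFinder()
finder.feed(page)
print(finder.canonical)
```

Running a check like this across a sample of job pages at least confirms the tag is present and points where you intend; whether Google honors it is a separate question.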
Algorithm Updates | | mrdjdevil0 -
New Google SERPs page title lengths, 60 characters?
It seems that the new Google SERPs have a shorter page title character length? From what I can gather, they are 60 characters in length. Does this mean we all now need to optimise our page titles to 60 characters? Has anyone else noticed this and made any changes to page title lengths?
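If you want to audit existing titles against that limit, a few lines of scripting will do it. The sketch below uses the 60-character figure from the question as a rough proxy (Google actually truncates by pixel width, so character counts are only an estimate) and flags titles likely to be cut off; the sample titles are invented for illustration:

```python
TITLE_LIMIT = 60  # rough character proxy for Google's pixel-based cutoff

def audit_titles(titles, limit=TITLE_LIMIT):
    """Return (title, length) pairs for titles likely to be truncated."""
    return [(t, len(t)) for t in titles if len(t) > limit]

titles = [
    "Cheap Ink Cartridges | Samsung SF4200 Series",                              # fits
    "Buy Cheap Compatible and Original Ink Cartridges for Every Printer Brand",  # too long
]

for title, length in audit_titles(titles):
    print(f"{length} chars: {title}")
```

A pass like this over an exported list of page titles gives you a quick shortlist of pages worth rewriting first, rather than touching every title on the site.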
Algorithm Updates | | Adam_SEO_Learning0 -
How does this site rank no 1 for big terms with no optimisation?
Hi, A client recently asked me about a site that appears to have popped up out of nowhere and is ranking for big terms within their industry: http://bit.ly/11jcpky I have looked at the site for a particular term: Cheap Beds. I was using unpersonalised search on google.co.uk with location set to London. The site currently ranks no. 1 for that term and other similar terms. The question is: how? SEOmoz reports no backlinks (they must have blocked them?). Ahrefs and Majestic report some backlinks, but not many, and no anchor text containing the term. The page title and meta do not contain the term, nor does the page seem to contain the term anywhere. The domain does have some age, though it has no keyword match in the URL. I'm a little stumped as to how they are achieving these results. Any ideas, anyone?
Algorithm Updates | | JeusuDigital0 -
Unable to increase the site traffic since 2 yrs
Hello friends, I am new to the SEOmoz forum and this is my first query. I have asked this question in many forums but didn't get the right answer; it would be a big help if anyone answers it here. I have been doing SEO for my site for two years. Even though I am following all the white-hat techniques and doing every submission manually, my site traffic is still below 100 visits. Can anyone help me increase the site traffic? What techniques do I need to follow to increase site visits? Also, one of my sites recently disappeared from Google. I have checked all the pages listed in Google for my site's major keywords and didn't find the site anywhere. Can you help me understand why this would happen and what to do to overcome such issues?
Algorithm Updates | | Covantech0 -
Large site with faceted navigation using rel=canonical, but Google still has issues
First off, I just wanted to mention I did post this on one other forum, so I hope that is not completely against the rules here or anything. Just trying to get an idea from some of the pros at both sources. Hope this is received well. Now for the question... "Googlebot found an extremely high number of URLs on your site:" Gotta love these messages in GWT. Anyway, I wanted to get some other opinions here, so if anyone has experienced something similar or has any recommendations, I would love to hear them. The site is very large and utilizes faceted navigation to help visitors sift through results. I have implemented rel=canonical for many months now, so that each page URL created by the faceted nav filters points back to the main category page. However, I still get these damn messages from Google every month or so saying that they found too many pages on the site. My main concern, obviously, is wasting crawler time on all these pages when I am already doing what they ask in these instances and telling them to ignore them and find the content on page x. So at this point I am thinking about using the robots.txt file to handle these, but wanted to see what others around here thought before I dive into this arduous task. Plus, I am a little ticked off that Google is not following a standard they helped bring to the table. Thanks in advance to those who take the time to respond.
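One caveat worth weighing before going the robots.txt route: a disallowed URL can't be crawled at all, so Googlebot will never see the rel=canonical on those faceted pages, and blocked URLs can still end up indexed from external links. If you do go ahead, the sketch below (paths are invented for illustration) shows a prefix-style rule for facet URLs and uses the standard library's parser to sanity-check which URLs it blocks; note the stdlib parser does plain prefix matching, not Google's `*` wildcard extension:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules: category pages stay crawlable,
# faceted filter paths under them do not.
robots_txt = """\
User-agent: *
Disallow: /category/filter/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("*", "https://example.com/category/"))                 # True
print(rp.can_fetch("*", "https://example.com/category/filter/size-10/"))  # False
```

Testing the rules against a list of real URLs from your logs before deploying is cheap insurance against accidentally blocking the category pages themselves.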
Algorithm Updates | | PeteGregory0