Wondering why PR hasn't increased?
-
Hi there,
I've been working on a website for about six months now and its Google PageRank still remains at 0.
Fresh content has been created across the majority of the site, a blog has been implemented, titles and meta tags rewritten, schema.org markup added, we've built some good links, etc.
There are a lot of 404 errors, but much of that comes down to stock turnover: products being sold or taken down and new products being put up. Do you think this is the major reason the PageRank is not moving? After all, 404s are a regular occurrence on a lot of e-commerce sites. Also, the server went offline on two occasions (which Google obviously frowns upon), but in general the server is grand.
Also, when we started working on the website it wasn't in the best of shape at DA 11; it's now DA 17. Still not great, I know, but moving in the right direction.
Just wondering your thoughts on the PR?
-
I agree with all the opinions voiced here. PR isn't really relevant any more. With all the algorithm changes from Google, it can be tough to keep up. I always suggest following Matt Cutts' blog and video channel. Any time there is even the hint of a change, he usually mentions what is going on at Google.
He is currently on leave, but his blog and video channel still provide great information!
-
The PageRank reported to users on the Toolbar is updated only a couple of times a year these days, and is known to be only an approximation of the actual PR (Google themselves have said this).
Google does maintain an internal PR that is frequently updated, but it is not revealed to users.
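For reference, this is the classic PageRank formula from Brin and Page's original paper, where d is a damping factor (usually set around 0.85), T_1 ... T_n are the pages linking to page A, and C(T) is the number of outbound links on page T:

    PR(A) = (1 - d) + d \left( \frac{PR(T_1)}{C(T_1)} + \dots + \frac{PR(T_n)}{C(T_n)} \right)

Whatever Google computes internally today almost certainly differs from this 1998 formulation, so treat it as background rather than a description of the current system.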
-
Your PR hasn't changed because Google hasn't updated it; they rarely update PR these days.
PR should be ignored.
-
Hey niamhomahony!!
PR & Domain Authority are useless. Don't focus too much (if at all) on these as they will drive you crazy and not deliver any ranking increase or $$.
-
First off: ignore PR.
PageRank is updated once, maybe twice, a year, and Google has said they don't really use it as much anymore; if you base your metrics on it, you're going to have a bad time.
FEAR NOT! Moz's Domain Authority, Majestic's Trust Flow, and Ahrefs' Domain Rating (or a combination of all three, as in the sketch below) are all good metrics to look at.
Hopefully that helps you gauge how well you're doing better than PR does. (PR can be used in a longer-term plan, but I don't put much weight on it anymore.)
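If you do want to combine them, here is a minimal Python sketch of the idea. The metric values are hypothetical placeholders (Moz DA, Majestic TF, and Ahrefs DR all run on 0-100 scales), and the equal weighting is just an assumption, not an industry standard:

    # Hedged sketch: blend three third-party authority metrics into one
    # number. In practice you'd pull the values from each tool's export.
    def blended_authority(da: float, tf: float, dr: float) -> float:
        """Average Moz DA, Majestic Trust Flow, and Ahrefs DR (0-100 each)."""
        return (da + tf + dr) / 3

    # Example: the asker's site at DA 17, with assumed TF and DR values.
    print(blended_authority(da=17.0, tf=12.0, dr=20.0))  # ~16.3

Tracking one blended number over time smooths out the quirks of any single tool's index.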
Good luck!
Related Questions
-
Would it be better to noindex 'scripted' files in our portfolio?
Hello Moz community, As a means of a portfolio, we upload PowerPoint exports that are converted into HTML5 to maintain interactivity and animations. Works pretty nicely! We link to these exported files from our product pages (we are a presentation design company, so they're pretty relevant). For example: https://www.bentopresentaties.nl/wp-content/portfolio/ecar/index.html
However, they keep coming up in the crawl warnings, because the exported HTML files contain no readable text (just code), so we get errors for thin content, no H1, missing meta description, and missing canonical tag. I could manually add the last two, but the first two warnings are simply unsolvable. Therefore I figured we'd probably be better off noindexing all these files: they don't appear to contain any searchable content, and even then, the content of our clients' work is not relevant for our search terms. They're mere examples that happen to be HTML files.
Am I missing something, or would it be better to noindex such files? And if so: is there a way to noindex a whole directory automatically, so I don't have to manually fix every HTML export with a noindex tag in the future? I read that using disallow in robots.txt wouldn't work, as we will still link to these files as portfolio examples.
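On the "whole directory" part of that question: one commonly used approach is to send an X-Robots-Tag response header from an .htaccess file placed inside the portfolio directory, so every file served from it is noindexed without editing any HTML. A minimal sketch, assuming an Apache server with mod_headers enabled (typical for WordPress hosting, but worth verifying):

    # .htaccess placed in /wp-content/portfolio/
    # Sends "X-Robots-Tag: noindex" with every file served from this
    # directory and its subdirectories, so future exports are covered
    # automatically.
    <IfModule mod_headers.c>
        Header set X-Robots-Tag "noindex"
    </IfModule>

Unlike a robots.txt disallow, this lets crawlers fetch the files (and follow links to them) while keeping them out of the index.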
-
Why isn't Google caching our pages?
Hi everyone, We have a new content marketing site that allows anyone to publish checklists. Each checklist is being indexed by Google, but Google is not storing a cached version of any of our checklists. Here's an example:
https://www.checkli.com/checklists/ggc/a-girls-guide-to-a-weekend-in-south-beach
Missing cache:
https://webcache.googleusercontent.com/search?q=cache:DfFNPP6WBhsJ:https://www.checkli.com/checklists/ggc/a-girls-guide-to-a-weekend-in-south-beach+&cd=1&hl=en&ct=clnk&gl=us
Why is this happening? How do we fix it? Is this hurting the SEO of our website?
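One cause worth ruling out first is a noarchive directive: a robots meta tag or X-Robots-Tag header containing "noarchive" tells Google to index a page but not keep a cached copy. A quick hedged check, assuming Python with the requests library (the substring test is deliberately crude):

    import requests

    # Fetch one checklist URL and look for "noarchive" in the response
    # headers and in the raw HTML (which covers robots meta tags).
    url = ("https://www.checkli.com/checklists/ggc/"
           "a-girls-guide-to-a-weekend-in-south-beach")
    resp = requests.get(url, timeout=10)

    print("X-Robots-Tag header:", resp.headers.get("X-Robots-Tag", "(not set)"))
    print("'noarchive' in HTML:", "noarchive" in resp.text.lower())

If neither turns up a noarchive, the missing cache may simply be Google declining to store a copy, which by itself is not known to hurt rankings.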
-
Duplicate title tags increased after adding the correct canonical tag?
Hey, I noticed that the number of duplicate title tags reported in Google Search Console increased from 14k to 30k. These duplicate title tags stem from incorrect canonical tags. For instance, these two URLs are the same exact page with two parameters (the products are not unisex, by the way):
http://www.site.com/product-name/product-code/?d=Mens
http://www.site.com/product-name/product-code/?d=Womens
When I viewed the page source, the canonical tag included the parameter: whether the URL was http://www.site.com/product-name/product-code/, http://www.site.com/product-name/product-code/?d=Mens, or http://www.site.com/product-name/product-code/?d=Womens, the canonical tag had the "?d=Womens". I figured that wasn't best practice, so I removed the parameter from the canonical tag; it is now http://www.site.com/product-name/product-code/ for each of those parameterized pages (if that makes sense).
My question is: why did the number of errors double after what I thought fixed the problem?
-
PR & DA
What are the best ways to increase a website's page rank and domain authority?
-
eCommerce Google PR mystery
Dear all, our eCommerce site has the following structure:
Home .. PR = 5
Category .. PR = 4 (linked from home)
Sub-category .. unranked (linked from category)
The domain is several years old and the site performs well; it is here: http://tinyurl.com/5v9hrql Any idea or suggestion? Thank you, Claudio
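To see how PageRank can thin out by the third level of a site, here is a toy sketch assuming the networkx Python library; the three-tier link graph is invented for illustration and is not derived from the actual site:

    import networkx as nx

    # Toy three-tier site: home links to categories, categories link to
    # their sub-category pages, and every page links back to home.
    G = nx.DiGraph()
    for cat in ["cat1", "cat2"]:
        G.add_edge("home", cat)
        G.add_edge(cat, "home")
        for sub in ["a", "b", "c"]:
            page = f"{cat}/{sub}"
            G.add_edge(cat, page)
            G.add_edge(page, "home")

    # Damping factor 0.85 matches the classic PageRank setting.
    for page, score in sorted(nx.pagerank(G, alpha=0.85).items(),
                              key=lambda kv: -kv[1]):
        print(f"{page:8s} {score:.3f}")

The sub-category nodes end up with the smallest scores, which is consistent with deep pages showing no Toolbar PR even when the home page ranks well.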
-
What's going on with my organic traffic from Google?
I am working on the eCommerce website Vista Stores. The site's traffic is going down, and after some research my working assumption is that the drop is related to auto-generated content I added on a few product pages. See the attached screenshot for the current traffic situation: 6789134845_d1a1578960_b.jpg
-
.com ranking over the other ccTLDs that were created
We had an eCommerce website that used to serve every locale we had around the world. For example, the French version was Domain.com/fr_FR/, and a German version in English would be Domain.com/en_DE/. About two months ago we moved all of our larger international locales to their corresponding ccTLDs, so now we have Domain.fr and Domain.de. The problem is that we are getting hardly any organic traffic and sales on these new TLDs. I am thinking this is because they are new, but I am not positive. If you compare the traffic we used to see on the old domain with the traffic we see on the new domains, it is a lot less.
I am currently going through to make sure that all of the old pages are taken down, and the next thing I want to know is: for the old pages, would it be better to use a 301 redirect or a rel=canonical to the new ccTLD, to avoid duplicate content and to stop those old pages from outranking our new pages? Also, what are some other causes for our traffic being down so much? It just seems that there is a much bigger problem, but I don't know what it could be.
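On the 301 vs. rel=canonical part: for pages that have permanently moved to a new domain, a 301 is the standard choice, since it consolidates signals on the new URL and drops the old one from the index over time. A minimal sketch of the French locale's redirect, assuming the old site runs Apache with mod_alias and using the placeholder hostnames from the question:

    # On the old Domain.com server: send every /fr_FR/ URL to the
    # matching path on the new French ccTLD, preserving the sub-path.
    RedirectMatch 301 ^/fr_FR/(.*)$ https://domain.fr/$1

rel=canonical would leave the old pages live and resolvable, which is usually only preferred when both versions must keep serving traffic.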
-
Best solution to get masses of URLs out of the search engines' index
Hi, I've got an issue where our web developers made a mistake on our website by messing up some URLs. Because our site works dynamically (i.e. the URLs generated on a page are relative to the current URL), the problem URLs linked out to more problem URLs, effectively replicating an entire website directory under the problem URLs. This has put tens of thousands of URLs that shouldn't be there into the search engines' indexes.
Say, for example, the problem URLs look like www.mysite.com/incorrect-directory/folder1/page1/. It seems I can correct this in one of the following ways:
1/. Use robots.txt to disallow access to /incorrect-directory/*
2/. 301 the URLs one-to-one, like this:
www.mysite.com/incorrect-directory/folder1/page1/
301 to:
www.mysite.com/correct-directory/folder1/page1/
3/. 301 all the URLs to the root of the correct directory, like this:
www.mysite.com/incorrect-directory/folder1/page1/
www.mysite.com/incorrect-directory/folder1/page2/
www.mysite.com/incorrect-directory/folder2/
301 to:
www.mysite.com/correct-directory/
Which method do you think is the best solution? I doubt there is any link juice benefit from 301ing the URLs, as there shouldn't be any external links pointing to the wrong URLs.
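For what it's worth, option 2 doesn't require thousands of individual redirects; a single pattern rule covers every URL under the bad directory. A minimal sketch, assuming Apache with mod_alias and using the placeholder directory names from the question:

    # Map every URL under the bad directory to the same path under the
    # correct one, e.g. /incorrect-directory/folder1/page1/ becomes
    # /correct-directory/folder1/page1/
    RedirectMatch 301 ^/incorrect-directory/(.*)$ /correct-directory/$1

One caveat: combining this with option 1 is counterproductive. If robots.txt disallows /incorrect-directory/*, crawlers can no longer fetch those URLs, so they never see the 301s and the stale entries linger in the index longer.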