How to outrank a directory listing with high DA but low PA?
-
My site is in 4th place, and three places above it is a Gumtree (similar to Yell or Yelp) listing. How can you figure out how difficult it would be to outrank those pages? Obviously the pages have low PA; they sit at the top on the strength of the site's high DA.
This also comes back to keyword research and difficulty. When I'm doing keyword research I'll see a Wikipedia page in the top five, or a yell.com listing, or perhaps a forbes.com article outranking my site. Typically the problem seems to be Google giving these pages a lot of ranking credit based on the site's high DA rather than the pages' PA. How would you gauge the difficulty of a keyword when the competition is pages with very high DA, which is impossible to compete with, but low PA?
Thanks
-
Most of my work is writing articles that take between three days and a week to author. I also have employees who assist with these articles by taking photos, making graphics, doing research, collecting data and posting them to websites. Some of these articles attack very difficult keywords.
After doing this for about 12 years on the same website, I still don't know how these articles are going to rank. A year or two after posting, some are on the first page of Google, defeating popular websites in ways that surprise me. Others perplex me because I am being beaten by pissants, in SERPs that I would judge to be much easier. I suspect that semantics, keyword diversity and titles that elicit clicks help the pissants beat me, but I don't know for sure.
I can't predict how my own rankings will turn out on a website that I know well, in an industry where I have worked for 40 years, against competitors who are often people I know by name or who are even my own customers. The SERPs can be very difficult to understand. One thing I will say with confidence is that DA and PA explain nothing and give zero guidance in winning a fight. They carry zero weight in my decisions. I can't even tell you those numbers for my own websites unless I go look. That's how little attention I give them.
-
Sorry I couldn't help.
From the answer above: "DA is a factor and one that is very difficult for you to catch up to or surpass - BUT certainly not a massive % of the ranking algorithm."
-
Actually I don't, which is why I came here to ask the question. I think EGOL answered it above: they are beatable, as small businesses beat them every day. That's what I needed to know: whether they are beatable, and how easy or hard they are to beat, because that decides whether to invest more time and money in SEO or not (the money could be better spent on ads, for example).
You gave me a philosophical answer that basically said, "you can't change what they do or have, so work on what you can change in yourself", which is all fine and dandy, but it's a loose, vague, cookie-cutter answer. Could I theoretically outrank the "British Cancer Research" website for the keyword "cancer research"? Obviously the answer is yes: following your advice, I can just keep working on my cancer research site, throw a million £ into its SEO, and in a couple of years I'll outrank them. We know that; everyone knows that with enough hard work, time, money and effort you can achieve anything. That is not the question. The question was "how hard or easy is it?", because that is obviously a big factor when deciding whether or not to continue with that strategy.
I mean no disrespect; I think you just misunderstood my question from the start as a complaint. Perhaps you interpreted it as me whinging about the high-DA competition, and you were trying to encourage me not to focus on that DA. I wasn't complaining. It was a straight-up question of how much that high DA factors into their site outranking mine, given that I have built a lot of backlinks to my page, they have none to theirs, and they must therefore rely on their site's DA and traffic.
-
If you did not want constructive suggestions, why even ask? You obviously already knew the answer you wanted.
Best
-
You came here asking how to beat a directory and you got good answers and an action plan.
Unfortunately, you are looking at metrics that Google does not use: metrics that are based on a domain, metrics that have nothing to do with the methods of winning a SERP. Don't allow rubbish metrics to frighten you away.
These are directory sites that you are trying to defeat. Directories.
They are not the Library of Congress or the Pope. Pages on these sites are defeated by small businesses every day. Pages on Amazon are defeated by small businesses every day. These small businesses didn't run because they faced competition. They got to work.
If you are willing to work hard, you should not fear competition, because where there is competition there is usually a lot of search volume across a lot of diverse keywords, and usually a lot of money changing hands. Attack there with long content, diverse keywords and excellent quality. There is a good chance that you will earn traffic. Attack that keyword with multiple pages, each of excellent quality and targeting the long tail. One or more of those pages might eventually gain rankings for the short-tail keyword.
Maybe you will not win if you fight. But you will never win if you run.
-
PA is built from inbound links to a page. That page has 0 backlinks. If it has a PA of 29, which I had already checked, it is probably boosted by traffic and by internal links to the page, all of which is a direct result of Gumtree having massive traffic and DA.
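(To illustrate that mechanism, here's a toy PageRank calculation over a made-up five-node link graph. This is the textbook PageRank iteration, not Moz's actual PA formula, and the node names are hypothetical; it just shows how a page with zero external backlinks can accumulate score purely from internal links on a heavily linked site.)

# Toy PageRank: a listing page with no external backlinks still gains
# score from internal links on a site that itself attracts many links.
links = {
    "external_1": ["site_home"],    # outside sites all link to the homepage
    "external_2": ["site_home"],
    "external_3": ["site_home"],
    "site_home": ["listing_page"],  # homepage links internally to the listing
    "listing_page": ["site_home"],  # listing links back to the homepage
}
nodes = list(links)
rank = {n: 1.0 / len(nodes) for n in nodes}
damping = 0.85

for _ in range(50):  # power iteration until scores settle
    new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
    for node, outs in links.items():
        for out in outs:
            new_rank[out] += damping * rank[node] / len(outs)
    rank = new_rank

for node, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{node:14} {score:.3f}")  # listing_page ranks 2nd, with 0 backlinks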
I am not taking it personally; I'm just looking for a reasonable answer as to how much DA weighs in as a ranking factor. I take a pragmatic approach to things. For example:
Example A: site A has DA 30 and PA 29; my site has DA 25 and PA 28. This gives me a good idea of what I need to do: study site A's backlink profile and on-site SEO, and try to raise my DA to match or beat theirs. It's a clear, pragmatic approach.
Now example B:
Site A has DA 90 and PA 29; my site has DA 30 and PA 45. This is much harder to approach pragmatically, mainly because we know we can never achieve a DA of 90, and my higher PA isn't overcoming that DA gap. So the question is: how much of a factor is that huge DA? It matters from a business perspective, because if six months of high-quality backlinks could overcome it, then maybe it's a go; but if I would need a DA of 60 and a PA of 60 to outrank that site, there would be no point. And yes, I know DA/PA doesn't equal ranking, but it's the closest measuring stick we have for how likely a site is to rank.
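(For anyone wanting to pull these numbers in bulk rather than checking pages one at a time, here's a rough sketch against Moz's Links API v2. The endpoint and field names are as Moz documents them, but treat them as assumptions to verify against the current docs; the credentials and the second target URL are placeholders.)

import requests

# Moz Links API v2 URL-metrics endpoint (assumed per Moz's public docs).
ENDPOINT = "https://lsapi.seomoz.com/v2/url_metrics"
AUTH = ("YOUR_ACCESS_ID", "YOUR_SECRET_KEY")  # placeholder credentials

targets = [
    "https://www.gumtree.com/removal-services/bournemouth",  # competitor listing
    "https://www.example.com/removals-bournemouth",          # hypothetical own page
]

resp = requests.post(ENDPOINT, json={"targets": targets}, auth=AUTH, timeout=30)
resp.raise_for_status()

for result in resp.json().get("results", []):
    print(result.get("page"),
          "DA:", result.get("domain_authority"),
          "PA:", result.get("page_authority"))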
And I don't mean to be rude, but the idea of just "overcoming" something by doing better at the things you can change is very vague. If I want to be the best boxer in the world and I'm 50 years old, you could follow some rah-rah advice and say that since I can't change my age, I might as well just train and work on what I can change. But the fact of the matter is that it's nearly impossible to be the best boxer in the world at 50, so although there's a 0.01% chance it could be done, it's not a worthwhile investment. And this is why I asked a simple question of how much of a factor that DA is: to figure out whether it's worth investing in more SEO.
And you're right that keyword difficulty doesn't change based on who's above you. However, if you are aiming for the no. 1 position, you are at no. 2, and no. 1 is almost impossible to take over, then yes, that keyword is hard. When building niche sites, part of keyword research is researching your competition for a keyword before embarking on the project.
But thank you for taking the time to answer.
-
Magusara,
I think you are taking this a bit personally. Yes, the pages above you are ultimately owned by someone, even if that someone is a stockholder in the company that owns the page. The link you provided (https://www.gumtree.com/removal-services/bournemouth) has a page authority of 29.
When you complain about the authority reference with the Facebook example, you are missing the point. The issue is not 100% DA, yet you are fixated on it. Sorry, but it is simply my opinion that you cannot fixate on the thing you say is insurmountable (raising your DA above theirs) and then claim that any other way of dealing with the roadblock is irrelevant, or that whoever supplied the suggestion is just wrong.
You are wrong about keyword difficulty. You say: "who ranks above you is a reflection of keyword difficulty", and offer the hypothetical of Facebook deciding to target a dog training page at your area and locking up the no. 1 spot for "dog training".
Who ranks above you is NOT a reflection of keyword difficulty. Say today I put up a page about "purple dog jellybeans" and add content to it weekly. If in three months you put up a "purple dog jellybeans" page and index it, it will most likely rank below mine at first. That doesn't mean the term "purple dog jellybeans" is a competitive keyword.
Difficulty is determined by the keyword within the context of all the competition in a given vertical for that term. It is not determined by the one high-DA site above you.
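(To make that concrete, here is a toy sketch of how a difficulty estimate can blend the whole SERP rather than keying off the single strongest site. The weights and scaling are illustrative assumptions, not Moz's actual Keyword Difficulty formula.)

# Toy keyword-difficulty heuristic: average a blend of PA and DA across
# the whole top 10. The 0.6/0.4 weights are made up for illustration.
def difficulty(serp):
    """serp: list of (domain_authority, page_authority) for the top results."""
    scores = [0.6 * pa + 0.4 * da for da, pa in serp]
    return sum(scores) / len(scores)  # 0-100 scale, higher = harder

# One DA-90 directory page with weak PA barely moves the average...
directory_heavy = [(90, 29)] + [(35, 30)] * 9
# ...while a SERP full of strong, well-linked pages is genuinely hard.
strong_pages = [(60, 55)] * 10

print(round(difficulty(directory_heavy), 1))  # ~34
print(round(difficulty(strong_pages), 1))     # ~57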
Everyone involved in SEO on these Q&A pages has faced the same hurdles you are experiencing, and all we can give you is our experience. Yes, DA is a factor, and one that is very difficult for you to catch up to or surpass, BUT it is certainly not a massive percentage of the ranking algorithm.
The point we were making was to suggest you quit staring at that one roadblock (DA) and go about developing everything your competitor cannot. You KNOW the area the directory operates in. Use your advantage against them, and quit being argumentative with those who only want to assist you. Focus is where we place our gaze and apply our energy. You are focusing on the roadblock of DA; we are suggesting you focus on everything else and make the roadblock cease to exist.
I will apply a golf analogy and then say goodbye:
If your shot must cross water to reach the green, you can focus on the water or on the green. Ask any golfer where the ball goes when you are focused on (caught up with avoiding) the water. Just a fact of life.
We wish you the very best,
Robert
-
I don't hate directories or envy their DA. The pages above me are NOT owned by a person; they are directory listings.
I say "obviously they have a low PA" because these pages are not owned by anyone; no one is doing SEO for them or building traffic or backlinks to them. They are simply the output of the directory. Example: https://www.gumtree.com/removal-services/bournemouth
That URL has 0-1 backlinks according to Ahrefs and Moz's Site Explorer. The page ranks highly purely because the site's DA is 70. As for researching what they are doing, that's rather like saying "research what Facebook or yell.com is doing to see if you can achieve the same DA as their site".
And I disagree: who ranks above you is a reflection of keyword difficulty. Say you are trying to rank #1 for the keyword "dog training": you are currently 2nd, but Facebook has decided to create a dog training page targeted at your area and holds the no. 1 spot. It would be next to impossible to overtake them. It's a hypothetical situation, but you get what I mean.
The purpose of the question is not to hate on the sites above me; it's to estimate how difficult directory results are to outrank in the search engines. They can't be held to the same measures as personally owned sites, because of the huge DA they carry; the same goes for, say, a Forbes article. You can build more links to your page than the Forbes article has, but the Forbes DA will still weigh in. I guess the question is how much of a factor the DA is relative to the PA.
-
I really enjoyed Robert's answer because we all see so many of these DA and PA envy questions.
To build on Robert's theme of "leverage your advantages": these directories are usually built by people who know very little about your local area or industry. They are generally cookie-cutter sites built by content-factory workers who live 1,000 miles away. For those reasons, it is often very easy for a local or industry expert to build vastly superior content and to provide a landing-page experience that impresses visitors enough that they share it with others. You probably also know the visitor better than the factories that build these sites do.
Put some time into your content and presentation. Winning there can significantly improve the success of your page in the SERPs. It can also significantly improve your chances of attracting the searcher to your business instead of the business of your competitors. Show your expertise!
-
Magusara
In my opinion, if you wish to outrank anyone, you should first take a step back and not draw conclusions before researching. You state, "...obviously the pages would have low PA and they are top based on the high DA of the site." If it is "obvious", why would you bother researching what exactly they are doing?
If you have spent any time on Moz's Q&A, you will have seen tons of "How is this page outranking me?" questions. The key to combating those pages is to learn more about them than they know about themselves. You cannot do that if you assume or believe anything to be obvious. Yes, I hate directories (when it suits me not to work with them) as much as any other SEO professional, but they do get many things right. One thing you can do is grow your DA, and yes, that will take a bit of time. But look closer: is everything really as it appears?
My point is this: knowing they have an advantage will not help you combat them; knowing what their advantages and disadvantages are will, if you choose to exploit them. So, is it worth the time to fight? If it is, then know your opponent and leverage your own advantages.
Hope that helps,
Robert
PS: Keyword difficulty does not change based on who is above you.