Difference between URLs and referring URLs?
-
Sorry, a bit new to this side of SEO.
We recently discovered we have over 200 critical crawler issues on our site (mainly 4xx errors).
We exported the CSV and it shows both a URL and a referring URL. Both lead to a 'page not found', so I have two questions:
What is the difference between a URL and a referring URL?
What is the best practice for fixing this issue? Is it one for our web developer?
Appreciate the help.
-
No, you shouldn't need your developer. The referring URL is a page on your site that has the broken link on it. These broken links are damaging your rankings, so fix them ASAP: go to each referring page and fix or remove the links that point to the broken URL.
-
I believe "URL" is the page on your website that is 404ing/broken, and the "Referring URL" is the page where someone found that URL and clicked through. For example, if you had a broken link in a Facebook post, the URL would show as "yourwebsite.com/example" (the broken link) and the Referring URL would be "facebook.com/yourprofile".
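As a practical next step, the exported CSV can be grouped by referring URL so each page only needs to be edited once. A minimal Python sketch, assuming hypothetical column headers "URL" and "Referring URL" (check the actual headers in your export):

```python
import csv
import io
from collections import defaultdict

def broken_links_by_referrer(csv_text):
    """Group broken target URLs by the referring page that links to them,
    so each referring page can be opened and fixed once."""
    by_referrer = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        by_referrer[row["Referring URL"]].append(row["URL"])
    return dict(by_referrer)

# Hypothetical sample mimicking a crawl-issues export
sample = """URL,Referring URL
https://example.com/old-page,https://example.com/blog/post-1
https://example.com/typo-link,https://example.com/blog/post-1
https://example.com/old-page,https://example.com/about
"""

for referrer, targets in broken_links_by_referrer(sample).items():
    print(referrer, "->", len(targets), "broken link(s)")
```

Sorting the result by the number of broken links per referring page gives a simple worklist, with the worst offenders first.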
Related Questions
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate page content issues. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same amount of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters, because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | | Blacktie | 0
-
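Regarding the robots.txt question above: major crawlers treat * in a Disallow rule as a glob-style wildcard (a de facto extension, not part of the original robots.txt standard). A rough Python sketch to preview which URLs a pattern like Disallow: /*numberOfStars=0 would catch; this approximates typical crawler matching and is not a guarantee that any given bot behaves identically:

```python
import fnmatch
from urllib.parse import urlsplit

def matches_disallow(url, pattern):
    """Approximate wildcard Disallow matching: '*' matches any characters,
    and crawlers treat the rule as a prefix, so anything may follow it."""
    parts = urlsplit(url)
    target = parts.path
    if parts.query:
        target += "?" + parts.query
    # fnmatchcase avoids platform-dependent case folding.
    return fnmatch.fnmatchcase(target, pattern + "*")

urls = [
    "http://example.com/hotels?numberOfStars=0",
    "http://example.com/hotels?numberOfStars=3",
    "http://example.com/hotels",
]
for u in urls:
    print(u, matches_disallow(u, "/*numberOfStars=0"))
```

Only the first URL matches here, which is the behavior the question is after: pages with the numberOfStars=0 parameter are blocked, everything else is left intact.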
Overly Dynamic URL in vBulletin
I've got quite a few overly dynamic URLs reported, like this one: http://www.phplinkdirectory.com/forum/forumdisplay.php?s=4a07050d7e48e8bae86ef7880d9f91e8&f=13&order=desc&page=3 Anyone know the quick fix to this problem?
Moz Pro | | dvduval | 0
-
Difference in MozPoints between My Account and Q&A forum
I noticed that there is a big difference between the MozPoints showing in My Account and the MozPoints showing in the Q&A section of the website. My Account shows that I now have 262 MozPoints, but when I look at my Q&A total it says 303 MozPoints. There is a big gap between the two. How is this possible, and which of the two numbers is correct?
Moz Pro | | JarnoNijzing | 0
-
URL parameters and duplicate content
Hello, I have a two-fold question. Crawl Diagnostics is picking up a lot of Duplicate Page Title errors, and as far as I can tell, all of them are caused by URL parameters trailing the URL. We use a Magento store, and all filtering attributes, categories, product pages etc. are tagged on as URL parameters. For example:

Main URL:
/accessories.html

Duplicated Title Page URLs:
/accessories.html?dir=asc&order=position
/accessories.html?mode=list
/accessories.html?mode=grid
...and many others

How can I make Crawl Diagnostics not identify these as errors? Now, from an SEO point of view, all these URL parameters are being picked up by Google and are listed in Webmaster Tools -> URL Parameters. All URL parameters are set to "Let Google decide". I remember having read that Google is smart enough to make the right decision here, and we shouldn't have to worry about it. Is this true, or is there a larger issue at hand? Thanks!
Moz Pro | | yacpro13 | 0
-
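One way to see how many distinct pages are hiding behind parameterized URLs like those above is to strip the presentation-only parameters and deduplicate. A Python sketch; the parameter names come from the question's examples plus hypothetical Magento defaults, so adjust the set for your store (and note that the usual fix on the Magento side is a rel="canonical" tag rather than crawler rules):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical filter/sort parameters that change presentation, not content.
PRESENTATION_PARAMS = {"dir", "order", "mode", "limit", "p"}

def canonicalize(url):
    """Drop presentation-only query parameters so URL variants collapse
    to one canonical form."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k not in PRESENTATION_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

variants = [
    "http://shop.example.com/accessories.html?dir=asc&order=position",
    "http://shop.example.com/accessories.html?mode=list",
    "http://shop.example.com/accessories.html?mode=grid",
]
print({canonicalize(u) for u in variants})
# All three collapse to http://shop.example.com/accessories.html
```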
In my errors I have 2 different products on the same page?
Hello, I have 2039 duplicate page errors, and most of them are 2 different products on 1 page. I haven't set it up that way in the CMS, so how has this happened? Here are 2 examples: the 1st example has GHDs on the back of a different brand, and the 2nd has gift packs on the back of the same brand, 'Rockaholic'. Also, what does 'norec' mean? http://www.thehairroom.co.uk/Tigi-Rockaholic-797658/ghd-straightening-irons/norec http://www.thehairroom.co.uk/Tigi-Rockaholic-797658/tigi-bed-head-gift-packs/norec Thanks, Mark
Moz Pro | | smoki666 | 0
-
Extension-less URLs to extension and vice versa - does it affect PA?
Quick question: Will adding an extension such as .html or .php to a URL affect the Page Authority? Long explanation: My site is built in Drupal and has rewrite rules in place to redirect URLs with a .php extension to extension-less URLs. For example, the real URL for one of my pages is http://www.trueresults.com/index.php?q=get-started. Because of the rewrite rule, it is rewritten to http://www.trueresults.com/get-started by Drupal. If I wanted to keep the URL the same but add an extension to the end (i.e. ".html"), would that affect my Page Authority? Would Google consider this an entirely new URL? The reasoning behind this is that I am setting up some goals and events in my analytics, and it requires URLs with an extension; it's not accepting my extension-less URLs. Thanks!
Moz Pro | | TrueResults | 0
-
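On the extension question above: consolidating two URL forms is normally done with a single 301 rather than serving both, so authority isn't split between variants. A hedged Apache mod_rewrite sketch, assuming mod_rewrite is enabled and the extension-less form is canonical; Drupal installs often handle this in their own .htaccess or front controller instead, so treat this as illustrative only:

```apache
# Redirect /get-started.html to /get-started with a permanent 301,
# but only when the .html path is not a real file on disk.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.+)\.html$ /$1 [R=301,L]
```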
How do I get the Page Authority of individual URLs in my exported (CSV) crawl reports?
I need to prioritize fixes somehow. It seems the best way to do this would be to filter my exported crawl report by the Page Authority of each URL with an error/issue. However, Page Authority doesn't seem to be included in the crawl report's CSV file. Am I missing something?
Moz Pro | | Twilio | 0
-
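On the prioritization question above: while Page Authority isn't in the crawl export itself, one workaround is exporting URL-level link metrics separately and joining the two CSVs by URL. A stdlib Python sketch with hypothetical column names ("URL", "Page Authority", "Issue"); match them to your real exports:

```python
import csv
import io

def prioritize_issues(issues_csv, pa_csv):
    """Join crawl issues to Page Authority by URL and return the issues
    sorted with the highest-authority pages first."""
    pa = {row["URL"]: float(row["Page Authority"])
          for row in csv.DictReader(io.StringIO(pa_csv))}
    issues = list(csv.DictReader(io.StringIO(issues_csv)))
    # Unknown URLs default to 0.0 so they sort to the bottom.
    return sorted(issues, key=lambda r: pa.get(r["URL"], 0.0), reverse=True)

issues_csv = """URL,Issue
https://example.com/a,4xx
https://example.com/b,4xx
"""
pa_csv = """URL,Page Authority
https://example.com/a,12
https://example.com/b,34
"""

for row in prioritize_issues(issues_csv, pa_csv):
    print(row["URL"], row["Issue"])
```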
We were unable to grade that page. We received a response code of 301. URL content not parseable
I am using the SEOmoz web app for SEO on my site and have run into this issue; please see the attached file, as it has a screen capture of the error. I am running an on-page scan for the following URL: http://www.racquetsource.com/squash-racquets-s/95.htm When I run the scan I receive the following error: "We were unable to grade that page. We received a response code of 301. URL content not parseable." This page had worked previously. I have tried to verify my 301 redirects and am unable to resolve this error. I can perform other on-page scans and they work fine. Is this a known problem with this tool? I have checked to make sure I don't have a redirect defined for this page. Any help would be appreciated.
Moz Pro | | GeoffBatterham | 0