Our site is receiving traffic for both .com/page and .com/page/ with the trailing slash.
-
Our site is receiving traffic for both .com/page and .com/page/ with the trailing slash. Should we rewrite to the trailing-slash version or the version without, because of the duplicates? The other question is: if we do a rewrite, Google has indexed some pages with the slash and some without. I am assuming we will lose rank for one of them once we do the rewrite, correct?
-
Thanks, mate! That's all I wanted to know.
-
No, that's not a problem; search engines will follow what you do. The issue is if you have external links pointing to those pages: they will be 301 redirected and you will lose a little link juice. But if you don't fix it, you will keep having the problem.
If you have 2 external links, one pointing to url/ and one to url, then search engines see that as 2 different pages, each with one link. If you 301 it, they will see it as 1 page with 2 links, one of them a 301 redirect that loses a little link juice. My assumption is that a 301 leaks 15%; others say a bit less. I won't go into who is right and wrong, we really don't know, but it is confirmed that it does leak. So having 2 pages, each with 1 link, is not as good as having 1 page with 1.85 links' worth of value. It would be better if you had both links pointing to the same version of the page, of course.
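To make that arithmetic concrete, here is a quick back-of-the-envelope check. The 15% leak rate is an assumption (as noted above, nobody knows the real figure); the point is only how consolidation compares with splitting:

```python
# Back-of-the-envelope link-juice comparison. The 15% leak rate
# for a 301 is an ASSUMED figure, not a confirmed number from
# any search engine.
LEAK = 0.15

# Scenario A: no redirect -- two duplicate pages, one link each.
# The value lands on two competing URLs, so each page ranks on
# the strength of a single link.
split_pages = [1.0, 1.0]

# Scenario B: 301 in place -- one canonical page receives one
# direct link plus one redirected link that leaks 15% in transit.
consolidated = 1.0 + 1.0 * (1 - LEAK)

print(consolidated)  # one page with 1.85 links' worth of value
```

Under this assumption, one page with 1.85 links' worth of value beats two pages with 1 link each.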
Get that fixed pronto, then make sure all your internal links point to the correct version.
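For example, on an Apache server, a minimal .htaccess sketch for a site-wide 301 might look like this. This assumes mod_rewrite is enabled and that you canonicalize on the no-slash version; flip the rule around if you prefer the trailing slash:

```apache
# Hypothetical sketch: 301 any trailing-slash URL to its
# no-slash twin, site-wide. Requires mod_rewrite.
RewriteEngine On
# Leave real directories alone -- their trailing slash is legitimate.
RewriteCond %{REQUEST_FILENAME} !-d
# Permanently redirect anything else ending in a slash.
RewriteRule ^(.*)/$ /$1 [R=301,L]
```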
Do it site-wide; as I said, the search engines will adjust within the next crawl or two. -
Hi Alan,
First of all, thanks for the quick response.
At the moment there is no redirect - basically, both pages exist and, as you said, we have duplicate content and are losing link juice. But because Google has indexed some pages with the slash and some without, I wasn't sure if we should set up a rewrite rule to strip all trailing slashes OR just set up redirects for the individual pages.
E.g.
we have
example.com/page - not indexed
example.com/page/ - indexed
example.com/page2 - indexed
example.com/page2/ - not indexed
So I'm not 100% sure if I should set up redirects individually, or just rewrite to get rid of all slashes and lose Google's indexing of the slashed versions.
Does that make sense?
Cheers mate,
-
This can be a big problem. I assume your site 301 redirects to one version or the other. All your internal links should point to the URL that it redirects to. Every link leaks link juice, and if it passes through a 301, it leaks twice; site-wide this can mean a lot of link juice going up in smoke. There is not much you can do about the external links you already have, but if you fix the problem you can eliminate it going forward.
I have a tutorial on how to create a 301 to fix this for Windows servers
http://perthseocompany.com.au/seo/tutorials/how-to-fix-canonical-issues-involving-the-trailing-slash
If you want to provide your URL, either here or by private message, I can look into this problem for you.
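For reference, on Windows (IIS 7+) this fix is usually done with the URL Rewrite module in web.config. A hedged sketch, assuming URL Rewrite is installed and you canonicalize on the no-slash version:

```xml
<!-- Hypothetical web.config sketch for IIS URL Rewrite:
     301 trailing-slash URLs to the no-slash version. -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Remove trailing slash" stopProcessing="true">
          <match url="(.*)/$" />
          <conditions>
            <!-- Skip real directories; their slash is legitimate. -->
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Redirect" url="{R:1}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```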
I should also mention that search engines see the two URLs, with and without the slash, as two different pages, so your rank is split between them.