Change in URL structure - added category page
-
I recently started an e-commerce website and have since changed the URL structure, adding another level to my category pages.
Where it was previously www.website.com/shirts, it is now www.website.com/clothes/shirts. So I added the clothes category (just an example) before the shirts category, and I'm now finding that the old URL is still in the search index and still live on my site. How can this be?
I use WordPress and simply changed the URLs in the backend. The products are still under www.website.com/product/blue-shirt-123, so they won't be affected, but I suppose this now means I have duplicate category pages?
So my question is: should I 301 the old category page (www.website.com/shirts) to the new URL (www.website.com/clothes/shirts)? And how can the old URL still be live on my site?
If this was a bit unclear, please let me know.
Appreciate your replies!
-
Hey there,
What Barry says is true - you can throw anything in there and it will load, as long as the category is at the end.
But yes, for certain, in your case I would 301 redirect /shirts to /clothes/shirts (and all other categories). Crawl the site with Screaming Frog and keep an eye on 404 errors in Webmaster Tools for anything you might have missed.
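For what it's worth, a redirect like that is a one-liner if you have access to .htaccess. This is a hypothetical sketch assuming an Apache server, using the example slugs from the question; a WordPress redirect plugin would do the same job without touching server config:

```apache
# Hypothetical .htaccess sketch: permanently redirect the old category
# URL to the new one. Add one line per renamed category, or use a
# RewriteRule pattern if there are many.
Redirect 301 /shirts /clothes/shirts
```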
I don't think there's any issue in regards to duplicate content.
-Dan
-
OK, I see that now, thanks. Does this also mean I don't have a duplicate page (duplicate content) issue, then?
-
You can literally put anything in the category part of the URL and it will resolve.
Try www.website.com/fhqwhgads/shirts and it will still resolve; heck, throw another directory in there and it will still work: www.website.com/fhqwhgads/zomg/shirts.
As for why it does that, I'm probably not the best person to explain it, but WP effectively just looks at the end of the URL, which is sometimes why naming pages and posts the same thing can cause problems.
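A rough illustration of that behaviour (not WordPress's actual rewrite code): a pattern that captures only the last path segment as the category slug will resolve the same category no matter what comes before it.

```python
import re

# Illustrative sketch only, not WordPress's real rewrite rule: capture
# just the last path segment as the category slug, so any directories
# before it are ignored.
CATEGORY_RULE = re.compile(r"(?:.*/)?([^/]+)/?$")

def resolve_category(path: str) -> str:
    """Return the category slug such a rule would resolve for a path."""
    match = CATEGORY_RULE.match(path.strip("/"))
    return match.group(1) if match else ""

print(resolve_category("shirts"))                 # -> shirts
print(resolve_category("clothes/shirts"))         # -> shirts
print(resolve_category("fhqwhgads/zomg/shirts"))  # -> shirts
```

This is why /shirts, /clothes/shirts, and /fhqwhgads/zomg/shirts all land on the same category page.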
I'm not sure what would happen if you 301'd /shirts to /clothes/shirts, as it may just be looking at the last part anyway (you could quickly try it).
I'd consider adding a canonical tag instead.
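For reference, a canonical tag is a single line in the `<head>` of the old page pointing at the preferred URL. The tag below is hand-written as an example (the href is the URL from the question; an SEO plugin such as Yoast can manage these for you):

```html
<!-- On www.website.com/shirts: tells search engines that the new
     category URL is the preferred version of this page. -->
<link rel="canonical" href="https://www.website.com/clothes/shirts" />
```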