E-Commerce Site Crawling Problem
-
Our website displays all of our products if you attempt to visit a category or page that doesn't exist but conforms to our site's URL structure. Somehow Google crawled and indexed these pages, and they contain TONS of duplicate content that is hurting us. How do I deal with this problem?
-
Two ways:
1. Find out where Google followed a link to the non-existent category pages, get those links removed, and have the category pages redirected or blocked, as EGOL mentioned.
2. Change your code so that non-existent categories return a 404, preferably a 404 page crafted to gently push your user toward something they may be interested in (a sketch follows below).
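A minimal sketch of option 2, assuming a Flask app and a made-up category list (neither comes from the thread):

from flask import Flask, abort

app = Flask(__name__)

# Hypothetical category list; in a real shop this would come from the product database.
VALID_CATEGORIES = {"shoes", "hats", "belts"}

@app.route("/category/<name>")
def category(name):
    if name not in VALID_CATEGORIES:
        abort(404)  # a real 404 status, so crawlers drop the phantom URL
    return f"Products in {name}"

@app.errorhandler(404)
def not_found(error):
    # The "gentle push": point visitors at something they may be interested in.
    return "Page not found. Have a look at our bestsellers instead.", 404

The key detail is the 404 status code itself; a pretty error page that still returns 200 leaves the duplicate URLs indexable.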
-
Can you block them from indexing with robots.txt, or redirect them with .htaccess?
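For illustration only, with /phantom-category/ and example.com invented for the purpose, the two approaches being asked about might look like:

# robots.txt: stop compliant crawlers from fetching the non-existent section
User-agent: *
Disallow: /phantom-category/

# .htaccess: permanently redirect a phantom path to a real page instead
Redirect 301 /phantom-category/ https://www.example.com/products/

One caveat worth knowing: robots.txt stops crawling, but URLs Google has already indexed may still need the 404 or redirect treatment before they drop out.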
Related Questions
-
Website crawl error
Hi all, When I try to crawl a website, I get the following error message: "java.lang.IllegalArgumentException: Illegal cookie name". So far, I have found this explanation: the errors indicate that one of the web servers within the same cookie domain as the server is setting a cookie for your domain with the name "path", as well as another cookie with the name "domain". Does anyone have experience with this problem, know what it means, and know how to solve it? Thanks in advance! Jens
Technical SEO | WeAreDigital_BE
-
Moving a site from HTML to WordPress: should I port all old pages and redirect?
Any help would be appreciated. I am porting an old legacy .html site, which has about 500,000 visitors/month and over 10,000 pages, to a new custom WordPress site with a responsive design (long overdue, of course). The new site has been written and only needs a few finishing touches, and it includes many database features to generate new pages that did not previously exist. My questions are: Should I bother to port over older pages that are "thin" and have no incoming links, when reworking them would take time away from the need to port quickly? I will be restructuring the legacy URLs to be lean and clean, so 301 redirects will be necessary (see the sketch below). I know that there will be some link juice loss, but how long does it usually take for the redirects to "take hold"? I will be moving to HTTPS at the same time to avoid yet another porting issue. Many thanks for any advice and opinions as I embark on this massive data entry project.
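A hedged sketch of what those 301s could look like in Apache's .htaccess, with entirely made-up old and new paths:

RewriteEngine On

# Map a hypothetical legacy page to its new clean WordPress permalink
RewriteRule ^old-widgets\.html$ /widgets/ [R=301,L]

# Fold the HTTP-to-HTTPS move into the same pass, so each old URL hops only once
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]

With 10,000+ pages, pattern-based rules like these are far more maintainable than thousands of one-off redirect lines.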
Technical SEO | gheh2013
-
Duplicate content problem?
Hello! I am not sure if this is a real problem or if I am just overcomplicating something. Here's the deal: I took on a client who has an existing site on a platform called Homestead. Files cannot be downloaded, making it tricky to get out of Homestead. The way it is set up, new sites are developed on subdomains of homestead.com, and then your chosen domain points to this subdomain. The designer who built it has kindly given me access to her account so that I can edit the site, but this is awkward, and I want to move the site to its own account. However, to do so, Homestead requires that I create a new subdomain and copy the files from one to the other; they have no way to redirect the prior subdomain to the new one. They recommend I do something in the HTML, since that is all I can access. Am I unnecessarily worried about the duplicate-content consequences? My understanding is that I will now have two subdomains with the exact same content. True, over time I will be editing the new one, but you get what I'm sayin'. Thanks!
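Since the HTML is the only thing editable, here is a sketch of the usual HTML-level workarounds, with the subdomain and page names invented for illustration:

<!-- On every page of the OLD subdomain: point crawlers at the new copy -->
<link rel="canonical" href="https://client-new.homestead.com/page.html">

<!-- Or, alternatively, keep the old copy out of the index entirely -->
<meta name="robots" content="noindex">

In principle, either tag stops the duplicate pair from competing in the index while the new subdomain takes over.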
Technical SEO | devbook9
-
404 Errors After Site Migration
Hello - I'm working on a website selling fashion accessories. The site just went through a migration from Yahoo! to BigCommerce, and now we have a high level of warnings and errors from the crawl. A few mention pages I never saw on the Yahoo! platform, and I also notice that the number of pages crawled has doubled. How can I fix this, or did I do something wrong with the migration? I was running the website with minimal errors and am now overwhelmed by all the error updates. If I can get some assistance on what could be wrong, I would greatly appreciate it. Thanks.
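One way to get a handle on it is to test the old URLs directly. A minimal sketch in Python (requests library; the URL list is invented for illustration):

import requests

# Hypothetical pre-migration URLs, e.g. pulled from the old Yahoo! sitemap
old_urls = [
    "https://www.example.com/necklaces.html",
    "https://www.example.com/bracelets.html",
]

for url in old_urls:
    # allow_redirects=False shows the raw status each old URL returns
    response = requests.head(url, allow_redirects=False, timeout=10)
    print(response.status_code, url)

Anything printing 404 is a candidate for a 301 redirect to its BigCommerce equivalent.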
Technical SEO | ShopChameleon
-
Two sites
Hi there, I just joined after a nightmare of a time trying to get a website up and running. Now I have two sites: one a marketing person built and one I built myself. The one I built is performing better on Google, but the other one looks more professional. Is there a way I can combine the two under one site, keeping the one that looks better while getting the benefit of the one that's performing better? Thanks, Steve.
Technical SEO | stevetemple
-
Exchange Links - Problem or Not?
There's a company that sells ready-made real estate portal sites to several companies. When they install this system, they always leave on each site a file called imobiliarias.php that lists all the properties that use their system, so there is a huge exchange of links between these same sites. You can see with Open Site Explorer that all the sites have the same backlinks. Wouldn't this cause problems as a link exchange? A loss of position or something? Thank you guys! Sorry, this was written with Google Translate. 😛
Technical SEO | imoveiscamposdojordao
-
Correcting an indexing problem
I recently redirected an old site to a new site; all the URLs were the same except the domain. When I set up the redirects, I failed to realize the new site had HTTPS enabled on all pages, and I have noticed that Google is now indexing both the HTTP and HTTPS versions of pages in the results. How can I fix this? I am going to submit a sitemap, but I don't know if there is more I can do to get this fixed faster.
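If HTTPS is meant to win, a sketch of forcing it site-wide in Apache's .htaccess (assuming an Apache host, which the thread doesn't confirm):

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]

A 301 like this consolidates the two indexed versions onto one; rel=canonical tags pointing at the HTTPS URLs work toward the same end.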
Technical SEO | kicksetc
-
Robots.txt blocking site or not?
Here is the robots.txt from a client site. Am I reading this right: that the robots.txt is telling spiders to ignore the entire site, but the #'s are commenting out that very command?

# See http://www.robotstxt.org/wc/norobots.html for documentation on how to use the robots.txt file
# To ban all spiders from the entire site uncomment the next two lines:
# User-Agent: *
# Disallow: /
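For contrast, the active, uncommented version of that block would read:

User-Agent: *
Disallow: /

Under standard robots.txt semantics, lines starting with # are comments, so the file as quoted blocks nothing.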
Technical SEO | 540SEO