What should I do with a large number of 'pages not found'?
-
One of my client's sites lists millions of products, and hundreds or thousands are de-listed from inventory each month and removed from the site (no longer for sale). What is the best way to handle these pages/URLs from an SEO perspective? There is no sensible place to 301 them to.
1. Should we serve 404s for each one and put up with the growing number of 'pages not found' reported in Webmaster Tools?
2. Should we disallow them in the robots.txt file?
3. Should we add a 'nofollow' meta tag to all these pages?
Or is there a better solution?
Would love some help with this!
-
I would leave the pages up but mark them as "nofollow". When I worked in eCommerce, this was a great tactic. For UX purposes, you could try to steer people to similar products, but keep the originating page marked "nofollow" or "noindex".
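A minimal sketch of that approach, assuming the pages are rendered server-side. Every name here (`robots_meta`, `render_product_page`, the product names) is hypothetical, not part of any real platform:

```python
# Sketch: keep delisted product pages live (HTTP 200) but mark them
# noindex so search engines drop them, and surface similar products
# for visitors who still land there. All names are hypothetical.

def robots_meta(delisted: bool) -> str:
    """Return the robots meta tag for a product page."""
    if delisted:
        # noindex removes the page from search results;
        # follow still lets link equity flow to the alternatives.
        return '<meta name="robots" content="noindex, follow">'
    return '<meta name="robots" content="index, follow">'


def render_product_page(name: str, delisted: bool, similar: list) -> str:
    """Build a bare-bones HTML page; delisted items show alternatives."""
    body = "<h1>" + name + "</h1>"
    if delisted:
        body += "<p>This item is no longer available.</p>"
        body += "<ul>" + "".join("<li>" + s + "</li>" for s in similar) + "</ul>"
    return "<html><head>" + robots_meta(delisted) + "</head><body>" + body + "</body></html>"
```

A delisted page rendered this way returns a normal 200 to users while telling crawlers not to index it.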
-
Thanks Jane and Lesley for your responses. Great ideas from you both. I think I'll keep the pages but change the content/buying options, as you've both suggested.
I had considered 410s and might fall back on this for historical URLs in cases where we can no longer retrieve the content.
-
I always take notes from giants on how to handle things like this, and Amazon is the giant in this arena. What do they do? They do not disable the product; they leave it on the site as unavailable. I would do the same thing personally. What platform are you using, and does it have a suggested-products module or plugin? If so, it can be modified to be more prominent on pages that are disabled from selling. Either way, I would keep the page and preserve its authority.
If you 301 it to another product, search satisfaction goes down and your bounce rate will rise. I would be careful with this, because Google wants to serve results that are relevant to what people are looking for.
The other option I would suggest is returning a 410 (Gone) status code to get them de-indexed.
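The status-code choice above can be sketched with Python's standard-library status constants. The lookup function and its flags are stand-ins for whatever the real platform knows about a URL:

```python
from http import HTTPStatus

def status_for(ever_existed: bool, permanently_removed: bool) -> HTTPStatus:
    """Pick an HTTP status for a product URL (hypothetical helper).

    - 410 Gone signals the removal is permanent, which tends to get
      URLs de-indexed faster than a plain 404.
    - 404 Not Found suits URLs that never existed (typos, bad links).
    - 200 OK for products still live on the site.
    """
    if not ever_existed:
        return HTTPStatus.NOT_FOUND   # 404
    if permanently_removed:
        return HTTPStatus.GONE        # 410
    return HTTPStatus.OK              # 200
```

Because `HTTPStatus` is an `IntEnum`, the return value compares equal to the plain integer code, so it drops straight into most web frameworks.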
-
Hi Claire,
If you really can't 301, consider serving a page that offers alternative products, a search function, and an explanation of why the page's former content is no longer available. Many real estate websites are quite good at this. Using real estate as an example, some maintain the URLs of properties that regularly go on the market (big-city apartments, for example) but grey out the information to show a user that the property is not currently for lease. Other URLs will show properties in the former listing's postcode.
Your robots.txt file is going to get out of control if you are having to add millions of pages to it on a regular basis, so I would personally not pursue that route.
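One way to see the scaling point: robots.txt rules match by path prefix, so a single rule covering a dedicated section would do what millions of per-URL entries cannot, provided delisted products live (or are moved) under one path. A sketch with Python's stdlib parser; the `/discontinued/` path and example URLs are made up:

```python
from urllib.robotparser import RobotFileParser

# One prefix rule covers every delisted URL, however many there are,
# as long as they share a common path. Paths here are hypothetical.
rules = """\
User-agent: *
Disallow: /discontinued/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "https://example.com/discontinued/widget-123"))
print(rp.can_fetch("*", "https://example.com/products/widget-456"))
```

Note that blocking crawling this way does not de-index already-known URLs; it only stops crawlers from fetching them, which is another reason a per-URL robots.txt approach is the weakest of the three options.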
-
Why aren't 301s an option?