Best action to take for "error" URLs?
-
My site has many error URLs that Google Webmaster Tools has identified as pages without titles.
These are URLs such as: www.site.com/page???1234
For these URLs should I:
1. Add a canonical tag on them pointing to the correct page (the one being displayed at the error URLs)
2. Add a 301 redirect to the correct URL
3. Block the pages in robots.txt
Thanks!
-
From my understanding, if you have no links pointing to that URL, it might just be the crawler verifying the existence of your 404 page.
In that case, serving a 404 page is the best option.
However, if you do have links pointing to that URL, you should either fix them (if internal) or set up a 301 redirect, as you said.
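The decision described in the answer can be sketched in code. This is a minimal illustration, not a real server: the redirect map and paths are hypothetical examples, and on a live site the same logic would be expressed as web-server rules (e.g. in .htaccess or nginx config) rather than Python.

```python
# Sketch of the 404-vs-301 decision described above.
# The redirect map and URLs are hypothetical examples, not real site data.
REDIRECTS = {
    "/page???1234": "/page",  # known error URL -> the correct page
}

def handle_request(path):
    """Return (status, location) for an incoming path."""
    if path in REDIRECTS:
        return 301, REDIRECTS[path]  # linked-to bad URL: permanent redirect
    return 404, None                 # unknown URL: a plain 404 is fine

print(handle_request("/page???1234"))   # (301, '/page')
print(handle_request("/no-such-page"))  # (404, None)
```

The point is the split: URLs that have links pointing at them earn an entry in the redirect map; everything else simply 404s.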
Related Questions
-
Why is "Article 1 - x of y" showing up in this SERP?
Does anybody have an explanation why this is showing up in the SERP? Ju3VYsW.png
Technical SEO | jmueller
-
"Fourth-level" subdomains. Any negative impact compared with regular "third-level" subdomains?
Hey Moz, a new client has a site that uses subdomains ("third-level" stuff like location.business.com) and "fourth-level" subdomains (location.parent.business.com). Are these fourth-level addresses at risk of being treated differently than the other subdomains? Screaming Frog, for example, doesn't return these fourth-level addresses when crawling business.com except in the External tab, but maybe I'm just configuring the crawls incorrectly. These addresses rank, but I'm worried that we're losing some link juice along the way. Any thoughts would be appreciated!
Technical SEO | jamesm5i
-
Is the "meta content tag" important?
I am currently trying to optimize my company's website and I noticed that the meta content is exactly the same for all of the pages on our website. Isn't this problematic? The actual content on each page is not the same, and a lot of the pages don't have these keywords in the content.
Technical SEO | AubbiefromAubenRealty
-
Best practice for eCommerce site migration, should I 301 redirect or match URLs on new site
Hi Guys, I have been struggling with this one for quite some time. I am no SEO expert like many of you, rather just a small business owner trying to do the right thing, so forgive me if I say something that makes no sense 🙂 I am moving our eCommerce store from one platform to another, and in the process the store is getting a massive facelift. The part I am struggling with is whether I should keep my existing URL structure in place or use 301 redirects to create cleaner-looking URLs. Currently the URLs are a little long, and I would love to move to a /category/product_name format. Of course the goal is not to lose rankings in the process; I rank pretty well for several competitive phrases and do not want to create a negative impact. How would you guys handle this? Thanks, Dinesh
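The migration described here boils down to building a one-to-one 301 map from every old URL to its new /category/product_name address. A hedged sketch of generating that map, where the old URL patterns and product names are invented examples rather than anything from the actual store:

```python
# Hypothetical sketch: build a 301 map from old store URLs to the new
# /category/product_name format. The URL patterns here are invented examples.
def new_url(category, product_name):
    """Turn a category and product name into the new clean URL."""
    slug = product_name.lower().replace(" ", "-")
    return f"/{category}/{slug}"

old_to_new = {
    "/store/item.php?id=42&cat=books": new_url("books", "My Fairy Tale"),
    "/store/item.php?id=43&cat=toys": new_url("toys", "Wooden Train"),
}

for old, new in old_to_new.items():
    # Each pair becomes a server-level 301 rule (e.g. in .htaccess or nginx).
    print(f"301: {old} -> {new}")
```

The design point is that every ranking old URL gets exactly one permanent redirect to exactly one new URL, so existing link equity follows the content to the new structure.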
Technical SEO | MyFairyTaleBooks
-
Crawl errors on URLs that don't normally exist
Hi, I have been getting heaps (thousands) of SEOmoz crawl errors on URLs that don't normally exist, like: mydomain.com/RoomAvailability.aspx?DateFrom=2012-Oct-26&rcid=-1&Nights=2&Adults=1&Children=0&search=BestPrice These URLs are missing site IDs and other parameters, and I can't see how they are generated. Does anyone have any ideas on where Moz is finding them? Thanks, Stephen
Technical SEO | digmarketingguy
-
Rel="author" showing old image
I'm using http://www.google.com/webmasters/tools/richsnippets to test my rel="author" tag, which was successful, but I noticed I wanted to change my image in Google+ as it is not what I want. I changed my image in Google+; it's been over 14 hours now and it's still not showing the new picture in the Rich Snippets tool. I know Google can take a couple of weeks to show changes in search results, but I thought this Rich Snippets tool was immediate. Am I missing something here, or am I just impatient? I want my new photo to show.
Technical SEO | Twinbytes
-
How to find original URLS after Hosting Company added canonical URLs, URL rewrites and duplicate content.
We recently changed hosting companies for our ecommerce website. The hosting company added some functionality such that duplicate content and/or mirrored pages appear in the search engines. To fix this problem, the hosting company created both canonical URLs and URL rewrites. Now we have page A (the original page with all the link juice) and page B (the new page with no link juice or SEO value). Both pages have the same content, with different URLs.
I understand that a canonical URL is the way to tell the search engines which page is the preferred page in cases of duplicate content and mirrored pages. I also understand that canonical URLs tell the search engine that page B is a copy of page A, but page A is the preferred page to index.
The problem we now face is that the hosting company made page A a copy of page B, rather than the other way around. But page A is the original page with the SEO value and link juice, while page B is the new page with no value. As a result, the search engines are now prioritizing the newly created page over the original one.
I believe the solution is to reverse this and make page B (the new page) a copy of page A (the original page). Then I would simply need to put the original URL as the canonical URL for the duplicate pages. The problem is, with all the rewrites and changes in functionality, I no longer know which URLs have the backlinks that are creating this SEO value.
I figure if I can find the backlinks to the original page, then I can find the original web address of the original pages. My question is: how can I search for backlinks on the web in such a way that I can figure out the URL that all of these backlinks are pointing to, in order to make that URL the canonical URL for all the new, duplicate pages?
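One practical way to answer the question is to export the site's backlinks from a link tool as a CSV and count which target URLs receive the most links; the most-linked URL is the original "page A". This is a hedged sketch only: the `target_url` column name and the sample rows are assumptions, and a real export's format will depend on the tool used.

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical backlink export; a real file would come from a link tool,
# and its column names may differ from the ones assumed here.
SAMPLE = """source_url,target_url
http://blog.example/a,http://www.site.com/original-page
http://forum.example/b,http://www.site.com/original-page
http://news.example/c,http://www.site.com/new-page
"""

def top_link_targets(csv_text, n=5):
    """Count which target URLs receive the most backlinks."""
    rows = csv.DictReader(StringIO(csv_text))
    counts = Counter(row["target_url"] for row in rows)
    return counts.most_common(n)

print(top_link_targets(SAMPLE))
# The most-linked target is the best candidate to use as the canonical URL.
```

With the winning URL identified, it becomes the rel="canonical" target for all of the duplicate pages.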
Technical SEO | CABLES
-
Honeypot Captcha - rated as "cloaked content"?
Hi guys, in order to get rid of our very old-school captcha on our contact form at troteclaser.com, we would like to use a honeypot captcha. The idea is to add a field that is hidden from human visitors but likely to be filled in by spam bots. That way we can sort out all those spam contact requests.
More details on "honeypot captchas": http://haacked.com/archive/2007/09/11/honeypot-captcha.aspx
Any idea if this single cloaked field will have negative SEO impacts? Or is there another alternative to keep out those spam bots? Greets from Austria, Thomas
Technical SEO | Troteclaser
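The server side of the honeypot idea from the linked article fits in a few lines. This is a minimal sketch with assumed names: the hidden field name and the form-data shape are illustrative, not from the actual troteclaser.com form.

```python
# Minimal honeypot check, following the idea in the linked Haack article.
# The field name is an illustrative assumption; in the HTML it would be
# hidden from humans via CSS (e.g. style="display:none").
HONEYPOT_FIELD = "website"

def is_spam(form_data):
    """A human never sees the hidden field, so any value there means a bot."""
    return bool(form_data.get(HONEYPOT_FIELD, "").strip())

print(is_spam({"name": "Thomas", "website": ""}))       # False: likely human
print(is_spam({"name": "bot", "website": "http://x"}))  # True: honeypot filled
```

Submissions where the hidden field is filled can be silently discarded, so legitimate visitors never see a captcha at all.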