All URLs seem to exist (no 404 errors), but they don't.
-
Hello, I am doing an SEO audit for a website which only has a few pages. I have no cPanel credentials, no FTP access, and no WordPress admin account; I'm just looking at the site from the outside.
The site works, the Moz crawler didn't report any problems, and I can reach every page from the menu.
The problem is that, except for the few actual pages, no matter what you type after the domain name you always reach the home page and never get a 404 error.
E.g.
http://domain.com/oiuxyxyzbpoyob/ (there is no such page, but I don't get a 404 error; the home page is displayed and the URL in the browser remains http://domain.com/oiuxyxyzbpoyob/, so it's not a 301 redirect).
http://domain.com/WhatEverYouType/ (same)
- Could this be an important SEO issue (e.g. resulting in an infinite number of duplicate-content pages)?
- Do you think I should ask the owner to prevent this from happening?
- Should I look into the .htaccess file to fix it?
Thank you, Mozzers!
-
Hi,
This is indeed an SEO issue, because it could generate an effectively infinite number of duplicate-content copies of your homepage. I don't think it's a major issue (Google Webmaster Tools typically treats these as soft 404s), but from a user's standpoint it's odd and not very user friendly.
It could be handled in the .htaccess file, but it could also be a WordPress configuration issue. So it depends.
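If it does come down to .htaccess, the usual culprit on Apache is a catch-all rewrite that hands every request to the home page with a 200 status. A minimal sketch, assuming Apache with mod_rewrite and a default WordPress install (illustrative only, since we can't see this site's actual config):

# Problematic pattern: every path, real or not, is served by index.php
# with a 200 status, so nothing ever returns a 404:
# RewriteRule .* /index.php [L]

# Standard WordPress rules: fall through to index.php only when the
# requested file or directory doesn't exist, and let WordPress itself
# return a proper 404 for unknown permalinks:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>

If the standard rules are already in place and unknown URLs still come back as the home page with a 200, the 404 handling is probably being hijacked higher up the stack (a theme or plugin), not in .htaccess.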
Related Questions
-
Why does a certain URL (a category URL) disappear?
The page hasn't been spammed: the links are natural, the on-page grader is perfect, and there are useful, high-ranking articles linking to the page... pretty much everything is okay. All of my website's other pages are fine too; none of them has disappeared, only this one (the most important category of my site).
Intermediate & Advanced SEO | mohamadalieskandariii
-
How do you get a large number of URLs out of Google's index when there are no pages to noindex?
Hi, I'm working with a site that has created a large group of URLs (150,000) that have crept into Google's index. If these URLs actually existed as pages, which they don't, I'd just noindex them and over time the number would drift down. The thing is, they were created through a complicated internal linking arrangement that adds affiliate code to the links and forwards them to the affiliate. Googlebot would crawl a link that looks like it points to the client's own domain and wind up on Amazon or somewhere else with some affiliate code. Googlebot would then grab the original link on the client's domain and index it... even though the page served is on Amazon or somewhere else. Ergo, I don't have a page to noindex. I have to get this 150K block of cruft out of Google's index, but without actual pages to noindex, it's a bit of a puzzler. Any ideas? Thanks! Best... Michael P.S. All 150K URLs seem to share the same URL pattern... exampledomain.com/item/... so /item/ is common to all of them, if that helps.
Intermediate & Advanced SEO | 94501
-
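One hedged approach for a block like this, given that every URL shares the /item/ prefix: serve an X-Robots-Tag noindex header at the server level, which doesn't need an actual page to carry a meta tag. A sketch assuming Apache with mod_setenvif and mod_headers (the /item/ prefix comes from the question above; the rest is an assumption about the stack):

# Flag every request whose path starts with /item/ ...
SetEnvIf Request_URI "^/item/" ITEM_URL
# ... and attach a noindex directive to the response; "always" makes the
# header apply to non-2xx responses too, which matters here because these
# URLs answer with a 3xx to the affiliate destination:
Header always set X-Robots-Tag "noindex" env=ITEM_URL

If the affiliate forwarding can be sacrificed, returning 410 Gone for the /item/ block is the blunter option and tends to drop URLs from the index faster.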
A client rebranded a few years ago and doesn't want to be associated with its old brand name. He wishes not to appear when the old brand is searched in Google; is there something we can do?
The problem is that there was redirection between the old branded site and the new one, and now when you type in the name of the old brand, the new one comes up. I have desperately tried to convince this client there is nothing we can do about it; dozens of news articles crop up with the two brands together, as this was a hot topic a few years ago. But just in case I missed something, I thought I'd ask the community of experts here on Moz. An example of this would be Tyco Healthcare, which became Covidien in 2007: when you type "tyco healthcare", Covidien crops up here and there. Any ideas? Thanks!
Intermediate & Advanced SEO | Netsociety
-
Moving to https with a bunch of redirects my programmer can't handle
Hi Mozzers, I referred a client of mine (last time) to a programmer who could transition their site from http to https. They use a WordPress website and currently use EPS Redirects, a plugin that 301-redirects about 400 pages. Currently, the way EPS Redirects is set up (as shown in the attachment) is simple: on the left side you enter your old URL, and on the right side is the newly 301'd URL. But here's the issue: since my client made the transition to https, the whole WordPress backend is set up that way as well. What this means is that if my client finds another old http URL he wants to redirect, the plugin only allows him to redirect https to https. As of now, all the old http-to-https redirects STILL work, even though the left side of the plugin switched all URLs to a default https. But my client is worried that with the next plugin update he will lose all the http-to-https redirects. When we asked our programmer to add all 400 redirects to .htaccess, he said that's too many redirects and could slow down the website. Well, we don't want to lose all 400 301s and jeopardize our SEO. Question: what does everyone suggest as an alternative solution/plugin to redirect old http URLs to https, and future https URLs to https? Thank you all!
Intermediate & Advanced SEO | Shawn124
-
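For what it's worth, ~400 one-line redirects in .htaccess is rarely a measurable performance problem. A sketch of how it's commonly structured, assuming Apache with mod_rewrite and mod_alias (example.com and the page paths are placeholders, not the client's real URLs):

# One blanket rule catches every plain-http request, present and future,
# so http-to-https pairs never need to be listed individually:
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]

# The ~400 page-to-page redirects then only need to be written once,
# in their https form:
Redirect 301 /old-page https://example.com/new-page
Redirect 301 /old-category/old-post https://example.com/new-category/new-post

If the list ever grows into the thousands, a RewriteMap in the main server config (it isn't available in .htaccess) is the scalable route.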
404 broken URLs coming up in Google
When we do a search for our brand, we get the following results in google.com.au (see image attachment). As outlined in red, there are listings in Google that lead to 404 Page Not Found URLs. What can we do to get Google to recrawl, or to ensure that these broken URLs are no longer listed in Google? Thanks for your help here!
Intermediate & Advanced SEO | Gavo
-
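If those pages are genuinely gone, the usual sequence is: confirm the server really returns a 404 status (not a soft 404), then either 301 each dead URL to its closest live equivalent or return 410 so Google drops it sooner, and file a removal request in Search Console for anything urgent. A sketch of the server side, assuming Apache with mod_rewrite (both paths are hypothetical stand-ins for the URLs outlined in red):

RewriteEngine On
# Dead URL with a sensible replacement: a permanent redirect preserves
# whatever equity the old listing carried:
RewriteRule ^old-product-page/?$ /new-product-page [R=301,L]
# Dead URL with no replacement: the G flag answers "410 Gone", a stronger
# removal signal to Google than a plain 404:
RewriteRule ^retired-page/?$ - [G]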
Penguin 3.0 - Very minor drops across the board. Don't think it's a penalty, any ideas?
Hey all, I just can't figure this out. My site has been ranking well for years, I've never done anything suspicious with it, and since the Penguin update my rankings have dropped across the board, but only by about 4-8 places each; some terms have gone up from nowhere to page 8, etc. I don't think I've been hit with a penalty, so I don't know what the problem is or how to recover from it. Does anybody have any ideas about what could be wrong? Update: Perhaps some sites that were linking to mine have been hit with a penalty? Update 2: I just found my site somehow included in a spammy link network of 600 sites that looked identical; I don't know how or why my website is in this! I disavowed all of these links 5 days ago; no change to rankings.
Intermediate & Advanced SEO | Paul_Tovey
-
Can't get page moving!
Hi all. I've been working on a page for months now and can't seem to make any progress. I'm trying to get http://www.alwayshobbies.com/dolls-houses onto the first page for the term 'dolls houses'. I've done the following: cleaned up the site's overall backlink profile, built some new links to the page, added 800 words of new copy, and reduced the number of keyword instances on the page to below 15. Any advice would be much appreciated. I don't think it's down to links, as the DA/PA isn't wildly different from its competitors. Thanks!
Intermediate & Advanced SEO | Blink-SEO
-
Robots.txt: Link Juice vs. Crawl Budget vs. Content 'Depth'
I run a quality vertical search engine. About 6 months ago we had a problem with our sitemaps, which resulted in most of our pages getting tossed out of Google's index. As part of the response, we put a bunch of robots.txt restrictions in place on our search results to prevent Google from crawling through pagination links and other parameter-based variants of our results (sort order, etc.). The idea was to 'preserve crawl budget' in order to speed the rate at which Google could get our millions of pages back in the index, by focusing attention/resources on the right pages. The pages are back in the index now (and have been for a while), and the restrictions have stayed in place since that time. But, in doing a little SEOmoz reading this morning, I came to wonder whether that approach may now be harming us:
http://www.seomoz.org/blog/restricting-robot-access-for-improved-seo
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
Specifically, I'm concerned that a) we're blocking the flow of link juice, and that b) by preventing Google from crawling the full depth of our search results (i.e. pages >1), we may be making our site wrongfully look 'thin'. With respect to b), we've been hit by Panda and have been implementing plenty of changes to improve engagement, eliminate inadvertently low-quality pages, etc., but we have yet to find 'the fix'... Thoughts? Kurus
Intermediate & Advanced SEO | kurus
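One commonly suggested alternative to the robots.txt blocks, for a case like this: let Google crawl the parameter variants but mark them noindex, so they fall out of the index while the links on them still pass equity ('noindex, follow'). A sketch assuming Apache 2.4's <If> expressions (the parameter names sort, order, and page are guesses at what this site actually uses):

# Parameterised variants of the results pages stay crawlable, so link
# juice flows, but are kept out of the index:
<If "%{QUERY_STRING} =~ /(sort|order|page)=/">
    Header set X-Robots-Tag "noindex, follow"
</If>

rel=canonical from each variant to the base results page is the other standard option. Either way, the robots.txt Disallow lines would have to come out first, since Google can't see a noindex on a URL it's forbidden to fetch.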