Inurl: search shows results without keyword in URL
-
Hi there,
While doing some research on the indexation status of a client I ran into something unexpected. I have a hypothesis on what might be happening, but would like a second opinion on this.
The query 'site:example.org inurl:index.php' returns about 18,000 results. However, when I hover my mouse over these results, no index.php shows up in the URL. So Google seems to think these (then duplicate-content) URLs still exist, but a 301 has changed the actual target URL? A similar thing happens for inurl:page. In fact, all the 'index.php' and 'page' parameters were removed over a year ago, so there shouldn't be any of those left in the index by now. The dates next to the search results are 2005, 2008, etc. (i.e. far before 2013), and they accurately reflect the times these forum topics were created.
Long story short: are these ~30,000 'phantom URLs', out of a total of ~100,000 indexed pages, hurting the search rankings in some way? What do you suggest to get them out? Submitting a 100% coverage sitemap (just a few days ago) doesn't seem to have had any effect on these phantom results (yet).
-
Hi Theo,
We encountered something similar when we migrated a site. We properly redirected all the old URLs to the new ones; however, in the weeks after the migration, we saw a huge increase in 404s in Webmaster Tools.
When we took a closer look at these URLs, we noticed that they were using a URL structure we had abandoned several years ago. On the "old" site, these were redirected, but we didn't reimplement those old redirects after the migration, as we assumed such very old URLs wouldn't be in the index anymore. We were proven wrong. We could delete them manually from the index using Webmaster Tools because they sat in folders we no longer use; that is probably not possible in your case.
While it is a bit annoying, I don't think having these "phantom" URLs in the index is doing you any harm in terms of SEO. They will probably never pop up for normal search queries, only when you do in-depth queries that limit the results to your own site.
rgds,
Dirk
-
A few days, with ~100,000 decade-old pages in Google, is usually not enough time to see a change. You can spot-check the 301s and run a Fetch and Render from GWT to see whether the changes should be working, though. Other than that, you'll probably have to wait a bit longer.
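If it helps, you can spot-check a batch of the old URLs with a few lines of Python rather than one at a time in the browser. This is only a rough sketch using the requests library; the example.org paths below are made up, so swap in a sample of your real phantom URLs:

```python
import requests

# Hypothetical old-style URLs -- replace with real ones pulled from the
# site:example.org inurl:index.php / inurl:page results.
old_urls = [
    "http://example.org/forum/index.php?topic=123.0",
    "http://example.org/forum/some-topic/page/2",
]

for url in old_urls:
    # allow_redirects=False shows the first response instead of following the hop
    resp = requests.get(url, allow_redirects=False, timeout=10)
    print(url)
    print("  status:  ", resp.status_code)                      # expect 301
    print("  location:", resp.headers.get("Location", "(none)"))
```

If any of those come back as a 200 or a 302 rather than a clean 301, that would be the first thing to fix.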
-
Related Questions
-
What to do with existing URL when replatforming and new URL is the same?
We are changing our CMS from WordPress to Uberflip. If there is a URL that remains the same, I believe we should not create a redirect. However, what happens to the old page? Should it be deleted?
Technical SEO | | maland0 -
URL Parameters as pagination
Hi guys, due to some changes to our category pages, our paginated URLs will change so they will look like this: ...category/bagger/2?q=Bagger&startDate=26.06.2017&endDate=27.06.2017 As you can see, they include a query parameter as well as a start and end date that will change daily. All URLs with pagination are set to noindex/follow. I am worried that the products linked from the category pages will not get crawled well when the URLs they are linked from change on a daily basis. Do you have any experience with this? Are there other things we need to worry about with these pagination URLs? Cheers
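P.S. A rough way to spot-check that a paginated URL still carries the noindex/follow tag after the change is something like the Python sketch below; the URL is only an illustration of the pattern above, not a real address:

```python
import re
import requests

# Illustrative paginated URL matching the pattern in the question -- not a real address.
url = ("https://www.example.com/category/bagger/2"
       "?q=Bagger&startDate=26.06.2017&endDate=27.06.2017")

html = requests.get(url, timeout=10).text

# A crude regex is enough for a one-off check of the robots meta tag.
match = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*>', html, re.IGNORECASE)
print(match.group(0) if match else "no robots meta tag found")
```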
Technical SEO | | JKMarketing0 -
No Search Results Found - Should this return status code 404?
A question came up today on how to correctly serve the right status code on pages where no search results are found. I did a couple of searches on some major ecommerce and news sites, and they were ALL serving status code 200 for "No Search Results Found": http://www.zappos.com/dsfasdgasdgadsg http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=sdafasdklgjasdklgjsjdjkl http://www.ebay.com/sch/i.html?_trksid=p5197.m570.l1313&_nkw=dfjakljgdkslagklasd&_sacat=0 http://www.cnn.com/search/?query=sdgadgdsagas&x=0&y=0&primaryType=mixed&sortBy=date&intl=false http://www.seomoz.org/pages/search_results?q=sdagasdgasdgasg I thought I read somewhere that it was recommended to serve a status code 404 on these types of pages. Based on what I found above, all these sites serve a 200, so it appears that may not be the common practice. Any thoughts?
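For anyone who wants to reproduce the check, a minimal Python sketch using the requests library against the same kind of nonsense-query URLs as above:

```python
import requests

# Deliberately nonsensical search URLs, same as two of the examples in the question.
test_urls = [
    "http://www.zappos.com/dsfasdgasdgadsg",
    "http://www.cnn.com/search/?query=sdgadgdsagas",
]

for url in test_urls:
    resp = requests.get(url, timeout=10)
    # A 200 means the "no results" page is served as a normal, indexable page.
    print(resp.status_code, url)
```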
Technical SEO | | WEB-IRS0 -
Does Server Location have anything to do with Search Results
Good Morning Everyone... Does having a site hosted in Europe have any effect on Search Engine results in the US? Thanks
Technical SEO | | Prime850 -
Domain that ranked 4 has now disappeared from search results
Hi guys, I have a website for a real estate property. It used to rank 4th but has now suddenly disappeared from search results altogether. A search for the domain 1boydstreetalbertpark.com will bring it up (so I assume it has not been blacklisted), but if I search for '1 boyd street albert park' (it used to come up at 4) it doesn't seem to come up at all anymore. I know the content is not original and is the same on other sites (it is the same content the real estate agents send to everyone), but why did it suddenly disappear? I would have thought having the actual search term in the domain would help it at least appear in the results. Any idea?
Technical SEO | | mypropertyaddress0 -
/$1 URL Showing Up
Whenever I crawl my site with any kind of bot or sitemap generator, it comes up with a /$1 version of my URLs. For example, it gives me hdiconference.com & hdiconference.com/$1, and hdiconference.com/purchases & hdiconference.com/purchases/$1. Then I get warnings saying that it's duplicate content. Here's the problem: I can't find these /$1 URLs anywhere. Even when I type them in, I get a 404 error. I don't know what they are or where they came from, and I can't find them when I scour my code. So I'm trying to figure out where the crawlers are picking this up. Where are these things? If sitemap generators and other site crawlers are seeing them, I have to assume that Googlebot is seeing them as well. Any help? My developers are at a loss as well.
Technical SEO | | HDI0 -
Should I set up a disallow in the robots.txt for catalog search results?
When the crawl diagnostics came back for my site, they showed around 3,000 pages of duplicate content. Almost all of them are catalog search results pages. I also did a site search on Google, and most of those results pages are in its index too. I think I should just disallow bots from the /catalogsearch/ subfolder, but I'm not sure whether this will have any negative effect.
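One way to sanity-check the proposed rule before deploying it is Python's built-in robots.txt parser. A minimal sketch, with example.com standing in for the real domain:

```python
from urllib.robotparser import RobotFileParser

# The rule being considered, tested locally before it goes live.
proposed_rules = """\
User-agent: *
Disallow: /catalogsearch/
"""

parser = RobotFileParser()
parser.parse(proposed_rules.splitlines())

# Catalog search results should be blocked; normal pages should not.
print(parser.can_fetch("*", "http://www.example.com/catalogsearch/result/?q=shoes"))  # False
print(parser.can_fetch("*", "http://www.example.com/some-product.html"))              # True
```

Bear in mind that a Disallow only stops crawling; pages that are already indexed may linger for a while before they drop out on their own.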
Technical SEO | | JordanJudson0 -
Blog URLs
I read somewhere - pretty sure it was in The Art of SEO - that having dates in blog permalink URLs is a bad idea, e.g. /blog/2011/3/my-blog-post/. However, looking at WordPress best practice, it's also apparently not a good idea to have a URL without a number - it's more resource-hungry if you don't - e.g. /blog/my-blog-post/. Does anyone have any views on this? Thanks Ben
Technical SEO | | atticus70