Parked former company's URL on top of my existing URL, and the old URL is showing in SERPs for my top keywords
-
I have the URL from my former company parked on top of my existing URL. My top keywords are showing up in SERPs with the old URL attached to the meta description of my existing site. It was supposed to be 301 redirected instead of parked, but my web developer insists this was the right way to do it and that it will work itself out after Google indexes the old URL out of existence. Are there any other options?
-
Thanks again. Will try these options today. It'll be nice going in more knowledgeable, so it's a very good thing you do, Mr. Kley.
-
Nothing he can do? Lmao, what a terrible answer. On the old site you should still have FTP access set up. In that account, go into your .htaccess file and add a rule that redirects all traffic to your existing domain, i.e. the one you want to get indexed. Also add a robots.txt rule disallowing any crawling of the old domain.
Option 2 is to delete any and all old site files in the FTP account for the domain you want to get rid of, have the site's URLs return a 404 error, and submit a URL removal request in Webmaster Tools. Option 1 would be safer imo, but doing Option 2 will get rid of the old domain for good.
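To make Option 1 concrete, here is a minimal sketch of the .htaccess rule, assuming the old domain still has its own hosting account and document root; the domain names are the ones given later in this thread, and the protocol should match whatever the new site actually serves.

    # .htaccess at the root of the OLD domain's hosting account (aceystowing.com)
    # Send every request to the same path on the new domain with a permanent (301) redirect
    Redirect 301 / http://www.jonnystowingnow.com/

Redirect 301 / matches every path on the old domain and appends it to the new one, so /any-page on the old site lands on /any-page on the new site; if the page paths changed in the move, per-URL rules (see further down the thread) are needed instead.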
-
Thank you both for your responses.
@DavidKley, They do both show up, and the developer says there's nothing more he can do since the old site no longer exists. Everything I've read online seems to contradict this, though.
The domains in question are:
old - www.aceystowing.com
new - www.jonnystowingnow.com
Any further insight would again be greatly appreciated.
-
Just wanted to add:
Do both URLs show up for a page? Meaning, if you had a page about dog treats, can that page be accessed through both URLs on the web (manually or in SERP results)? If so, you need to redirect the domain you don't want to use immediately to prevent duplication. Just parking one on top of the other usually will not take care of replacing the other URL. You don't want to have both indexed at the same time.
-
In addition to parking the domain, did you add a parked-domain .htaccess rule (see the sketch below)? Beyond search engines, make sure your visitors are getting to the right place, without duplicate content.
After a while, all the new URLs should replace the old ones, but I have seen this process take up to 6-8 months.
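For reference, a parked-domain rule usually has to check the requested hostname, because both domains serve the same files. A rough sketch, assuming the thread's domain names and a standard Apache setup:

    # Shared .htaccess when aceystowing.com is parked on top of jonnystowingnow.com
    RewriteEngine On
    # Only rewrite requests that arrive on the old hostname
    RewriteCond %{HTTP_HOST} ^(www\.)?aceystowing\.com$ [NC]
    # 301 to the same path on the domain you want indexed
    RewriteRule ^(.*)$ http://www.jonnystowingnow.com/$1 [R=301,L]

Without the RewriteCond on HTTP_HOST, the rule would also fire for visitors already on the new domain and create a redirect loop.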
-
The definition of words like "parked" can vary from one hosting company's FAQ documents to another's. When I have moved domains, I have "parked" them on my hosting and then 301 redirected specific old URLs on the old domain to specific URLs on the new domain.
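When specific old URLs map to specific new URLs (rather than a blanket same-path redirect), the rules go one per page. The paths below are hypothetical examples, not actual URLs from either site:

    # .htaccess on the old domain: one 301 per page whose path changed
    # Put specific rules before any blanket redirect, since the first match wins
    Redirect 301 /services.html http://www.jonnystowingnow.com/towing-services
    Redirect 301 /about-us.html http://www.jonnystowingnow.com/about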
There are a lot of really competent people out there, but sometimes web developers have a "mechanical" knowledge of how things work, while getting search engines to treat your domain properly requires something more.
If this were my site, I would have a technical SEO look at it. I've done this stuff for myself, but I've always paid someone else to review my plan and check that it is working properly.
Related Questions
-
Old site name showing in SERPs
Hi all, We've recently re-launched one of our sites with a substantial redesign, refreshed content, meta data, descriptions and functionality. We noticed in SERPs that some of the page titles are showing the old name for the site, which hasn't been used for a few years; the site's been through a few updates and a URL change since then. All the meta titles show up as they should in crawls through Search Console and Moz, and it's my understanding that if Google were pulling a cached version of a title it would have gone for a more recently cached one. Any thoughts on why Google's turned back the clock on our site's name would be greatly appreciated! -Jamie
Technical SEO | JamieCMF
-
Help! How to Remove Error Code 901: DNS Errors (But to a URL that doesn't exist!)
I have 2 urgent errors saying there are 2 x error code 901s detected. These don't link to any page, but I can tell there is a mistake somewhere; I just don't know what needs changing. http://www.justkeyrings.co.ukhttp/www.justkeyrings.co.uk/printed-promotional-keyrings http://www.justkeyrings.co.ukhttp/www.justkeyrings.co.uk/blank-unassembled-keyrings Could someone help please?
Technical SEO | FullSteamBusiness
-
What's Worse - 404 errors or a huge .htaccess file
We have changed our site architecture pretty significantly and now have many fewer pages (albeit with more robust content and focused linking). My question is, what should I do about all the 404 errors (keep in mind, I am only finding these in Bing Webmaster Tools, not Moz or GWT)? Is it worse to have all those 404 errors (hundreds), or to have a massive .htaccess file for pages that are only getting hits from the Bing crawlbot? Any insight would be great. Thanks
Technical SEO | CleanEdisonInc
-
Should I noindex my blog's tag, category, and author pages
Hi there, Is it a good idea to noindex tag, category, and author pages on blogs? The tag pages sometimes have duplicate content, and the category and author pages aren't really optimized for any search term. Just curious what others think. Thanks!
Technical SEO | Rignite
-
Robots.txt: crawler accessing URLs we don't want it to
Hello, we run a number of websites, and underneath them we have testing websites (sub-domains); on those sites we have a robots.txt disallowing everything. When I logged into Moz this morning I could see the Moz spider had crawled our test sites even though we have said not to. Does anyone have any ideas how we can stop this happening?
Technical SEO | ShearingsGroup
-
Blocked URLs by robots.txt
Google Webmaster Tools shows me 10,936 URLs blocked by robots.txt, which is very strange: the "Index Status" section shows that robots.txt has blocked many URLs since April 2012. You can see it more precisely on the attached WMT chart. I cannot explain why I have blocked URLs, because I have nothing in my robots.txt. My robots.txt is just this: User-agent: * I thought I was penalized by Penguin in April 2012 because I am constantly losing visitors, now down over 40%. Could it be a different penalty? Any help is welcome because I'm already overwhelmed.
Technical SEO | meralucian37
-
How to find original URLs after hosting company added canonical URLs, URL rewrites and duplicate content
We recently changed hosting companies for our ecommerce website. The hosting company added some functionality such that duplicate content and/or mirrored pages appear in the search engines. To fix this problem, the hosting company created both canonical URLs and URL rewrites. Now we have page A (the original page with all the link juice) and page B (the new page with no link juice or SEO value). Both pages have the same content, with different URLs. I understand that a canonical URL is the way to tell the search engines which page is the preferred page in cases of duplicate content and mirrored pages. I also understand that a canonical URL tells the search engine that page B is a copy of page A, but page A is the preferred page to index. The problem we now face is that the hosting company made page A a copy of page B, rather than the other way around. But page A is the original page with the SEO value and link juice, while page B is the new page with no value. As a result, the search engines are now prioritizing the newly created page over the original one. I believe the solution is to reverse this and make page B (the new page) a copy of page A (the original page). I would then simply need to set the original URL as the canonical URL for the duplicate pages. The problem is, with all the rewrites and changes in functionality, I no longer know which URLs have the backlinks that are creating this SEO value. I figure that if I can find the backlinks to the original pages, I can work out their original web addresses. My question is, how can I search for backlinks on the web in such a way that I can figure out which URL all of these backlinks are pointing to, in order to make that URL the canonical URL for all the new, duplicate pages?
Technical SEO | CABLES
-
URLs for news content
We have made modifications to the URL structure for a particular client who publishes news articles in various niche industries. In line with SEO best practice we removed the article ID from the URL - an example is below:
http://www.website.com/news/123/news-article-title
http://www.website.com/news/read/news-article-title
Since this has been done we have noticed a decline in traffic volumes (we have not yet assessed the impact on the number of pages indexed). Google have suggested that we need to include unique numerical IDs in the URL somewhere to aid spidering. Firstly, is this the policy for news submissions? Secondly (if the previous answer is yes), is this to overcome the obvious issue with the velocity and trend-based nature of news submissions resulting in false duplicate URL/title tag violations? Thirdly, do you have any advice on the way to go? Thanks. P.S. One final one (you can count this as two question credits if required): is it possible to check the volume of pages indexed at various points in the past, i.e. if you think that the number of pages being indexed may have declined, is there any way of confirming this after the event? Thanks again! Neil
Technical SEO | mccormackmorrison