How did my dev site end up in the search results?
-
We use a subdomain for our dev site. I never thought anything of it because the only way you can reach the dev site is through a VPN, yet Google has somehow indexed it. Any ideas on how that happened? I am adding the noindex tag; should I use canonical as well, or is there anything else you can think of?
-
Personally, I'd still recommend using robots.txt to disallow all crawlers, even if you take other steps as well.
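For anyone following along, a catch-all robots.txt would be a two-line file served at the root of the dev subdomain (the hostname here is illustrative, e.g. https://dev.example.com/robots.txt):

```
User-agent: *
Disallow: /
```

Keep in mind this only stops compliant crawlers from fetching pages; it does not remove URLs that are already indexed.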
-
Don't use the removal tool, it can indeed go wrong. Now, are you sure that there are no external links pointing to the subdomain from anywhere?
For now I'd recommend putting noindex, nofollow on that dev subdomain and requesting a recrawl through GWT.
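For reference, the page-level form of that directive is the standard robots meta tag in each page's head:

```html
<meta name="robots" content="noindex, nofollow">
```

One caveat: Google has to be able to crawl a page to see this tag, so don't block the same URLs in robots.txt while you're waiting for the noindex to be picked up.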
-
It just uses internal links. Do you think I should try the webmaster tools removal? That seems like it could go wrong.
-
I've never used Screaming Frog. Does it check both external and internal links?
-
I ran Screaming Frog to see if there were any links to those pages, but couldn't find any. Even if Google did try to follow one, the firewall would stop it. It is so strange.
-
Then my first assumption is that it's linked from somewhere - read my comment a little above.
-
Then there is a leak somewhere - Google's bots can "see" your subdomain.
Or it has simply been linked from somewhere. Google will try to follow the link, and that can get it indexed.
-
They are telling me that there are no holes, and I have tried getting to the pages myself but cannot reach them unless I am on the VPN.
-
We never updated the robots.txt because the site was behind a firewall. If you click on any of the results, the page will not load unless you are on the VPN.
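For anyone locking this down at the web-server level, here is a minimal nginx sketch (the hostname and VPN subnet are placeholders, not details from this thread) that both restricts access to the VPN range and tells search engines not to index anything that does slip through:

```nginx
server {
    listen 80;
    server_name dev.example.com;   # placeholder dev subdomain

    # Only allow requests from the VPN egress range; everyone else gets 403
    allow 10.8.0.0/24;             # placeholder VPN subnet
    deny  all;

    # Belt and braces: even if a page is fetched, ask engines not to index it
    add_header X-Robots-Tag "noindex, nofollow" always;

    root /var/www/dev;
}
```

An IP allowlist like this means there is no response for Google to index at all, which is more reliable than depending on crawlers to honor directives.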
-
Robots.txt won't help here anyway. Bots can still see that such a directory exists; they just won't see what's inside those directories/subdomains.
-
Hi there.
If what you say is true, then there are only two answers: you have a leak somewhere, or your settings/configuration is messed up. I'd say go talk to your system admin and make sure that everything that's supposed to be closed is closed, and that only the IPs which are supposed to be open for use are open.
-
Have you updated the dev site's robots.txt to disallow everything? It is up to the bot to honor it, but that, combined with removing all of the dev URLs via Google Webmaster Tools, should do the trick.