My website (non-adult) is not appearing in Google search results when I have SafeSearch on. How can I fix this?
-
Hi,
I have an issue where my website does not appear in Google search results when SafeSearch is on. If I turn SafeSearch off, my site appears no problem. I'm guessing Google is categorizing my website as adult, which it definitely is not. Has anyone had this issue before, or does anyone know how to resolve it? Any help would be much appreciated.
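For anyone wanting to self-audit a site like this: one documented trigger for SafeSearch filtering is an adult `rating` meta tag accidentally left in a page template. A minimal Python sketch for scanning a page's HTML for it (regex-based, so it only catches the name-before-content attribute order; the sample HTML is hypothetical):

```python
import re

def find_rating_meta(html: str):
    """Return the content value of a <meta name="rating"> tag, if any.

    Google's SafeSearch documentation describes marking adult pages with
    <meta name="rating" content="adult">; finding one on a non-adult page
    would explain the filtering.
    """
    match = re.search(
        r'<meta\s+name=["\']rating["\']\s+content=["\']([^"\']+)["\']',
        html,
        re.IGNORECASE,
    )
    return match.group(1).lower() if match else None

# Example with an accidentally mislabeled page:
html = '<html><head><meta name="rating" content="adult"></head></html>'
print(find_rating_meta(html))  # adult
```

If this turns up nothing, the misclassification is more likely algorithmic (e.g. shared templates or link neighborhoods), which is where a reconsideration request comes in.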
Thanks
-
Let us know how it goes, either with a response here, or a case study on YouMoz. Best of luck!
-
Yeah, I also noticed that some pages are indexed, but the homepage and some other pages on the site don't show up. We have a network of 35 sites; 8 of them are experiencing this same problem and the rest are fine. I'm not exactly sure when this issue started, but I believe it has only happened recently.
Thanks for the help. I will submit the URLs for reconsideration and see how we go from there.
-
And here's the place to ask that your site be reconsidered:
-
I enabled safe search, then did a site:blackcupid.com search on Google. There are a lot of pages that show up as indexed, though I didn't see the home page right off. If I search for Black Cupid, I do see pages from that domain, but not the home page.
I took a snippet from the help page at http://www.blackcupid.com/help/helpcategory.cfm and searched for it in Google, with safe search on. Google is showing dozens of results from similar sites, such as Brazil Cupid and Japan Cupid (then I went to the home page and saw that all of those sites are indeed related).
How are the other sites performing? Do they have the same problem? Have you always had this problem, or is it new?
Any messages from Google in GWT?
-
Thanks for the quick response Keri - http://www.blackcupid.com/
-
Could you include your URL? That could help us out.
I've had some filters block access to strikemodels.com presumably because of the "models" in the URL (it's actually model warships), but haven't had a problem with Google filtering it out.
Related Questions
-
Subdirectory site / 301 Redirects / Google Search Console
Hi there, I'm a web developer working on an existing WordPress site (Site #1) that has 900 blog posts accessible from this URL structure: www.site-1.com/title-of-the-post. We've built a new website for their content (Site #2) and programmatically moved all blog posts to the second website, under this URL structure: www.site-1.com/site-2/title-of-the-post. Site #1 will remain a normal company site without a blog, and Site #2 will act as an online content membership platform. The original 900 posts have great link juice that we, of course, would like to maintain. We've already set up 301 redirects that take care of this process (i.e. the original post gets redirected to the same URL slug with '/site-2/' added). My questions: Do you have a recommendation for how best to handle this second website in Google Search Console? Do we submit it as an additional property in GSC (it shares the same top-level domain as the original)? Currently, the sitemap.xml submitted to Google Search Console has all 900 blog posts with the old URLs. Is there any benefit or drawback to submitting another sitemap.xml from the new website with the same blog posts at the new URLs? Your guidance is greatly appreciated. Thank you.
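The slug-preserving redirect scheme described in the question can be expressed as a simple path mapping. A sketch in Python using the question's placeholder domain (not a real implementation of their redirects, just the expected old-to-new mapping):

```python
from urllib.parse import urlsplit, urlunsplit

def redirect_target(old_url: str) -> str:
    """Map www.site-1.com/title-of-the-post to
    www.site-1.com/site-2/title-of-the-post, per the 301 scheme above.
    Assumes every old post slug was preserved verbatim under /site-2/."""
    parts = urlsplit(old_url)
    return urlunsplit(parts._replace(path="/site-2" + parts.path))

print(redirect_target("https://www.site-1.com/title-of-the-post"))
# https://www.site-1.com/site-2/title-of-the-post
```

A mapping like this is also handy for generating the new sitemap.xml entries from the old one, so the two sitemaps stay in sync during the migration.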
Intermediate & Advanced SEO | HimalayanInstitute
-
How do internal search results get indexed by Google?
Hi all, Most of the URLs that are created by using the internal search function of a website/web shop shouldn't be indexed, since they create duplicate content or waste crawl budget. The standard way to go is to 'noindex, follow' these pages, or sometimes to use robots.txt to disallow crawling of these pages.

The first question I have is how these pages would actually get indexed in the first place if you didn't use one of the options above. Crawlers follow links to index a website's pages. If a random visitor comes to your site and uses the search function, this creates a URL. There are no links leading to this URL, it is not in a sitemap, it can't be found through navigating on the website... so how can search engines index these URLs that were generated by using an internal search function?

Second question: let's say somebody embeds a link on his website pointing to a URL from your website that was created by an internal search. Now let's assume you used robots.txt to make sure these URLs weren't indexed. This means Google won't even crawl those pages. Is it possible then that the link that was used on the other website will show an empty page after a while, since Google doesn't even crawl this page?

Thanks for your thoughts guys.
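The 'noindex, follow' approach mentioned above might look like the following in a page template. This is a hedged sketch, not anyone's production code; the /search path prefix is an assumption about where internal search results live:

```python
def robots_directive(path: str) -> str:
    """Return the robots meta value for a request path.

    Internal search result URLs (here assumed to live under /search)
    get 'noindex, follow' so that even if crawlers discover them via
    external links, they won't be added to the index.
    """
    if path.startswith("/search"):
        return "noindex, follow"
    return "index, follow"

def robots_meta_tag(path: str) -> str:
    """Emit the tag the page template would render into <head>."""
    return f'<meta name="robots" content="{robots_directive(path)}">'

print(robots_meta_tag("/search?q=blue+shoes"))
# <meta name="robots" content="noindex, follow">
```

Note the trade-off the question hints at: meta noindex requires the page to be crawled (so the tag can be seen), while robots.txt prevents crawling entirely but leaves the bare URL eligible for indexing from external links.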
Intermediate & Advanced SEO | Mat_C
-
Can 301 redirects that are inaccurate cause Google suppressions on rankings?
In an interesting study by DeganSEO titled 'Negative Impact of 301 Redirects - A Case Study', a drop in rankings was observed when popular blog posts were redirected to product pages. One hypothesis is that the suppression is due to the topical difference between the redirected pages (blog posts) and the target page. The topical difference issue is an interesting one when you consider it in the context of website migrations. We always recommend that 301 redirects are done at a page level, and that if an equivalent page doesn't exist, to 301 anyway but to the most logical page. If you think about it, Google is likely to frown on this because: a) it's not a good experience for the user - a 404 would be more accurate for them; b) it's lazy - if you have good content that has gained authority/trust, then create the same content on the new site; don't try to pass that authority to an entirely different page. Thoughts? Experiences?
Intermediate & Advanced SEO | QubaSEO
-
Google is not indexing an updated website
We just relaunched a website that is 5 years old. We kept all the old URLs and articles, but for some reason Google is not picking up the new website, https://www.navisyachts.com. In Google Webmaster Tools we can see the sitemap with over 1,000 pages submitted, but it shows nothing as indexed. The site is rapidly losing traffic and positions; from the SEO side everything looks fine to me. What can be wrong? I'd appreciate any help. The new website is built on Joomla 3.4; we have it here at Moz, and other than some minor details nothing suggests something is wrong with the website. Thank you.
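When a submitted sitemap shows zero indexed pages, a first sanity check is that the sitemap itself parses and lists the URLs you expect, since a malformed file is silently ignored. A minimal stdlib sketch (the inline sample is illustrative, not the real navisyachts.com sitemap):

```python
import xml.etree.ElementTree as ET

# The sitemaps.org schema puts <loc> elements in this namespace.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text: str) -> list:
    """Extract all <loc> entries from a sitemap document."""
    root = ET.fromstring(xml_text)
    return [loc.text for loc in root.findall(".//sm:loc", NS)]

sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.navisyachts.com/</loc></url>
  <url><loc>https://www.navisyachts.com/articles/example</loc></url>
</urlset>"""

print(len(sitemap_urls(sample)))  # 2
```

If the count and URLs look right, the next things to rule out are a stray site-wide noindex tag or a robots.txt disallow carried over from the staging site during the relaunch.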
Intermediate & Advanced SEO | FWC_SEO
-
How can I remove my old site's URLs from showing up in Google?
Hi everyone. We have had a new site up for over a year now. When I search site:sqlsentry.net, the old URLs still show up, and while those pages are redirected to .com, I'd like to get the .net URLs out of Google forever. What is the best way I can go about that?
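Alongside requesting removal, it's worth confirming that every indexed .net URL 301s to its exact .com equivalent, since a redirect to the wrong page (or the homepage) slows de-indexing. A hedged offline sketch of the expected mapping; checking the live response codes would additionally need an HTTP client, and the host-swap assumes paths were preserved in the migration:

```python
from urllib.parse import urlsplit, urlunsplit

def expected_redirect(old_url: str) -> str:
    """Expected .com target for a legacy sqlsentry.net URL
    (host swap only; path and query are assumed unchanged)."""
    parts = urlsplit(old_url)
    new_host = parts.netloc.replace("sqlsentry.net", "sqlsentry.com")
    return urlunsplit(parts._replace(netloc=new_host))

print(expected_redirect("https://www.sqlsentry.net/products/"))
# https://www.sqlsentry.com/products/
```

Running the site: query results through a check like this catches one-off URLs that fell outside the redirect rules.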
Intermediate & Advanced SEO | Sika22
-
How can Google index a page that it can't crawl completely?
I recently posted a question regarding a product page that appeared to have no content: http://www.seomoz.org/q/why-is-ose-showing-now-data-for-this-url. What puzzles me is that this page got indexed anyway. Was it indexed based on Google knowing that there was once content on the page? Was it indexed based on the trust level of our root domain? What are your thoughts? I'm asking not only because I don't know the answer, but because I know the argument will be made that if Google indexed the page, then it must have been crawlable... therefore we didn't really have a crawlability problem. Why would Google index a page it can't crawl?
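The crawl/index distinction behind this question is easy to demonstrate: a URL can be disallowed in robots.txt (so it is never fetched) and still end up indexed from the anchor text of links pointing at it, which is why such results show a bare URL with no snippet. A stdlib sketch with a hypothetical robots.txt:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks the product section.
rules = [
    "User-agent: *",
    "Disallow: /products/",
]

parser = RobotFileParser()
parser.parse(rules)

# Googlebot can't fetch this page, so it never reads its content --
# but if other pages link to it, the bare URL can still be indexed.
print(parser.can_fetch("Googlebot", "https://example.com/products/widget"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/about"))  # True
```

So "indexed" only proves Google knew the URL existed, not that it ever successfully crawled the page content.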
Intermediate & Advanced SEO | danatanseo
-
301 Re-Directs Puzzling Question on Page Returned in Search Results
On our website, www.BusinessBroker.net, we have 3 different versions of essentially the same page for each of our State Business for Sale pages. Back in August, we did a test and set up 301 redirects using 5 states. For a long while after doing the redirects, the pages fell out of Google search results - we used to get page 1 rankings. Just recently they started popping back up on page 1. However, I noticed that the new page meta data is not what is being picked up. Here is the example. Keyword searched for in Google: "Maine Business for Sale". Our listing shows up on page 1, #8 result. The URL returned is the correct preferred version: http://www.businessbroker.net/state/maine-Businesses_For_Sale.aspx. However, the page title on this returned page is still the OLD page title: "maine Business for Sale Ads - maine Businesses for Sale & Business Brokers - Sell a Business on Business Broker", not the title designated for this page: "Maine Businesses for Sale - Buy or Sell a Business in ME | BusinessBroker.net". Ditto for the meta description. Why is this happening? We also have a problem with lowercase showing up rather than uppercase - what's causing this? http://www.businessbroker.net/state/maine-Businesses_For_Sale.aspx versus http://www.businessbroker.net/State/Maine-Businesses_For_Sale.aspx. Any help would be appreciated. Thanks, MM
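The upper/lowercase duplication at the end of the question is usually solved by picking one casing as canonical and 301ing everything else to it, so Google stops seeing two URLs for one page. A hypothetical sketch of the decision logic (the canonical mapping is an assumption for illustration, not BusinessBroker.net's actual configuration):

```python
# Lookup from case-folded path to the single preferred casing.
CANONICAL_PATHS = {
    "/state/maine-businesses_for_sale.aspx": "/State/Maine-Businesses_For_Sale.aspx",
}

def canonicalize(path: str):
    """Return (status, path): a 301 to the canonical casing if the
    request differs from it, otherwise 200 and the path as-is."""
    canonical = CANONICAL_PATHS.get(path.lower(), path)
    if path != canonical:
        return 301, canonical
    return 200, path

print(canonicalize("/state/maine-Businesses_For_Sale.aspx"))
# (301, '/State/Maine-Businesses_For_Sale.aspx')
```

A rel="canonical" tag pointing at the preferred casing would reinforce the same signal while Google re-crawls and refreshes the stale titles and descriptions.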
Intermediate & Advanced SEO | MWM3772
-
When you buy a domain or website, does that trigger a fresh look by Google?
I recently purchased a domain and the corresponding website. As far as I could tell, in the 12 months prior to my purchase the site was well optimized within Google and had over 40 search terms on page 1 of Google in a really competitive space (lending-related). When I made the purchase, the domain was transferred from the seller's GoDaddy account into my GoDaddy account, and I placed privacy protection on the domain. We did not move the hosting of the site - I took over his hosting account. And I did not make any significant changes to the website.

About 1 week later, the site was totally removed from Google's index, and I received notice in Google Webmaster Tools that the site may violate Google's quality guidelines. I filed a reconsideration request telling Google that I was the new owner and that if there were any violations, they were caused by the old owner. One week later, I got a note back from Google saying they had received my reconsideration request and that if they think the issues are cured, they will reindex the site. That was over a week ago, so seemingly they are not putting it back.

My question is this: does Google somehow automatically know when domains change hands, and does this cause them to manually review sites? The site in question was aggressively optimized, but I don't understand what would have caused Google to take action on the site when they did. In other words, if they were going to take action, why wouldn't they have done it in the prior 12 months? Or does the domain transfer put the site into some queue that makes them review it? BTW, the site in question has a SEOmoz domain authority of 85 and is still showing up as PR 5. Thanks very much for your time and consideration.
Intermediate & Advanced SEO | whodatyat