Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
Does 'XXX' in a domain get filtered by Google?
-
I have a friend who has 'xxx' in their domain. They're a religious-based sex/porn addiction recovery company, but they don't show up for the queries they're optimized for. They have a 12+ year old domain and all good health signs: quality links and press from trusted companies. Google sends them adult traffic, mostly 'trolls', and not the users they are looking for.
Has anyone experienced domain word filtering, and do you have a workaround or solution? I posted in the Google Webmaster Help forums, but that community seems a little 'high on their horses' and is trying too hard to be cool. I'm not particularly religious and don't necessarily support the views of the website - I'm just trying to help a friend of a friend with a topic I've never encountered.
here is the url: xxxchurch.com
Thanks,
Brian
-
Hmmm... This is a hard one. (Oh man, did not mean to make the intentional sex reference)
Yes, Google has made changes to its algorithm in the past year that make porn harder to search for on the Internet. These changes don't filter the porn per se - except when SafeSearch is set to on - but they do mean that you must be much more specific in your search queries to find what you are looking for. For example, the query "boobs" generally returns almost no porn in Google, but the query "boobs porn" will.
If I were building an algorithm to separate porn sites from non-porn sites, a large amount of 'XXX' in the incoming anchor text, or in the URL, would probably trigger it.
On the other hand, I'm inclined to agree with George - it seems like there's something more going on here. The backlink profile isn't terrible... but there's definitely a footprint of comment spam in there. I won't link directly, but some of the suspect, off-topic links I found include:
http://www.takarat.com/forums/showthread.php?tid=750&page=3
http://www.omyogapages.com/forum/showthread.php?t=43&page=7
http://www.atthepicketfence.com/2011/09/behind-blog-with-savvy-southern-style.html
http://www.marypoppins-homesweethome.com/2011/07/what-is-it-with-us-girls-and-ikea.html
These are pretty terrible.
It's possible that there are hundreds or thousands more we're not seeing, and that these are causing either a manual or algorithmic penalty.
My advice:
- Check with Google Webmaster Tools for any messages, especially unnatural link warnings.
- File a reconsideration request, even if you don't have any messages in GWT. Explain your concerns. Matt Cutts, the head of the webspam team, helped write the original adult filter algorithms; he might take a special interest if you can get it to his attention. But mostly, what you're looking for is verification (or not) of a penalty.
- You may need to clean up the links. Do your best to remove any suspect links, and use the disavow tool as a last resort.
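If it does come to the disavow tool, Google expects a plain text file with one entry per line: full URLs for individual links, or a `domain:` prefix to disavow everything from a site. A hedged example - the domains and paths below are placeholders, not the actual spam sources from this profile:

```text
# Disavow file for xxxchurch.com (example entries only)
# Drop every link from an entire domain:
domain:spammy-forum.example.com

# Or drop individual URLs:
http://another-forum.example.org/showthread.php?tid=750
```

Upload the file through the disavow links tool in Webmaster Tools; it only takes effect after Google recrawls the disavowed pages, so treat it as a slow-acting last resort.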
Hope this helps! Best of luck with your SEO.
-
I doubt there's a filter against xxx, but that doesn't mean there isn't something in the algos that checks for a spammy link profile more aggressively if the xxx is there.
I ran through the first 5 pages of links in Open Site Explorer, and their highest-authority links mainly carry the branded anchor phrase "xxx church". The profile could use some anchor text diversity. Just because Penguin hit exact-match anchor text on spammy links (from spammy sites and tactics) doesn't mean you can't write "Check out this porn addiction recovery site if you're having issues with porn in your house." and link to the site from that underlined text.
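To see how skewed the anchor text actually is, you can tally the anchors from any backlink export (e.g. an Open Site Explorer CSV). A minimal sketch - the sample anchors below are made up, not pulled from the real profile:

```python
from collections import Counter

def anchor_distribution(anchors):
    """Return (anchor text, share of total links) pairs, most common first."""
    counts = Counter(a.strip().lower() for a in anchors)
    total = sum(counts.values())
    return [(anchor, n / total) for anchor, n in counts.most_common()]

# Hypothetical anchors from a backlink export:
sample = ["xxx church", "xxx church", "XXX Church", "porn addiction help"]
print(anchor_distribution(sample))
# → [('xxx church', 0.75), ('porn addiction help', 0.25)]
```

If one branded or exact-match phrase dominates the list, that's the diversity problem in a single number.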
There may be some more questions to ask. What are their link building efforts?
A number of pages on http://blog.internetsafety.com that carry incoming links no longer resolve (404 Not Found). There are also lots of links that genuinely look like Penguin bait.
It could be link diversity. It could be low-quality links. It could be tons of links coming from pages that now resolve as 404s.
Sorry the news isn't great, but I really don't think it's the domain name that is the problem.