Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.
Escort directory page indexing issues
-
Re: escortdirectory-uk.com, escortdirectory-usa.com, escortdirectory-oz.com.au
Hi, we are an escort directory with a 10-year history. We have multiple locations (towns and cities) in the UK, USA, and Australia. Although many of our location pages rank on page one of Google, just as many do not. Can anyone give us a clue as to why this may be? -
"Cardiff escorts" is an important keyword for us that always struggles to reach the first page, even though we have worked extensively on link building and content production via our website blog. I am always keen to research new ideas and professional advice. Thanks.
-
@anita012 When you do SEO for an escort service website, you have to keep a few things in mind. Technical SEO comes first, because it only needs to be done once. For example, every photo you upload should have a proper file size (less than 50 KB), format (WebP), and dimensions; a quick conversion script follows below. I have done SEO for a client's Mumbai call girls website along these lines, and it is ranking.
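To make that image advice concrete, here is a minimal Python sketch of the conversion, assuming the Pillow library (pip install Pillow) and placeholder file names; the 50 KB budget and the quality steps are only the rule of thumb from the post, not hard limits:

```python
# Minimal sketch: convert an upload to WebP and step quality down
# until it fits a ~50 KB budget (Pillow required; paths are placeholders).
from pathlib import Path
from PIL import Image

MAX_BYTES = 50 * 1024  # the ~50 KB budget suggested above

def to_webp_under_budget(src: str, dst: str) -> None:
    """Convert an image to WebP, lowering quality until it fits the budget."""
    img = Image.open(src).convert("RGB")
    for quality in (80, 70, 60, 50, 40):
        img.save(dst, "WEBP", quality=quality)
        if Path(dst).stat().st_size <= MAX_BYTES:
            return
    # Still too large: shrink dimensions by 25% and retry at low quality.
    img = img.resize((int(img.width * 0.75), int(img.height * 0.75)))
    img.save(dst, "WEBP", quality=40)

to_webp_under_budget("listing-photo.jpg", "listing-photo.webp")
```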
-
If your escort directory pages are not getting indexed, follow these steps:
- Check Robots.txt: Ensure it doesn't block search engines (a sample check follows this list).
- Meta Robots Tag: Set it to "index, follow."
- Quality Content: Provide valuable and relevant content.
- Avoid Cloaking: Display the same content to search engines and users.
- Structured Data Markup: Use Schema.org to help search engines understand your content.
- XML Sitemap: Submit it to search engines for efficient content discovery.
- Legal Compliance: Adhere to local laws regarding adult content.
- Backlink Profile: Monitor and manage your backlinks.
- Google Search Console: Use it to identify and address indexing issues.
- Follow Guidelines: Adhere to webmaster guidelines for better search visibility.
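For the first two checks in this list, something like the following Python sketch will do, using only the standard library; the URLs are placeholders, not anyone's real pages:

```python
# Sketch: verify a page is crawlable (robots.txt) and carries no noindex
# meta tag. Standard library only; example.com is a placeholder.
import re
import urllib.request
import urllib.robotparser

page = "https://www.example.com/locations/cardiff/"

# 1. Is the page blocked by robots.txt?
rp = urllib.robotparser.RobotFileParser("https://www.example.com/robots.txt")
rp.read()
print("Crawlable by Googlebot:", rp.can_fetch("Googlebot", page))

# 2. What does the meta robots tag say, if one is present at all?
html = urllib.request.urlopen(page).read().decode("utf-8", errors="replace")
meta = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*>', html, re.I)
print("Meta robots tag:", meta.group(0) if meta else "none (defaults to index, follow)")
```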
-
@ZuricoDrexia For indexing, you need to answer a few questions (a rough script for the content checks follows below):
- Is my internal structure good? Use Screaming Frog to check.
- Does every page have real content? There should be no thin content pages.
- Is there an internal duplicate content issue? Some internal duplication is normal, but it should not be more than 30%. Look at my website: https://www.thegirlscurls.com
I also had the same issue: I was trying to rank my main keyword "Call Girl in Delhi" with no luck, but I followed the steps above and now it's fine.
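As a rough illustration of those content checks (a crawler like Screaming Frog does this properly at scale; the URLs and the 300-word threshold below are assumptions, not official numbers):

```python
# Sketch: flag a thin page and estimate word overlap between two pages.
# Standard library only; URLs are placeholders.
import re
import urllib.request

def words(url: str) -> list[str]:
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    text = re.sub(r"<script.*?</script>|<style.*?</style>|<[^>]+>", " ", html, flags=re.S)
    return re.findall(r"[a-z']+", text.lower())

a = words("https://www.example.com/locations/cardiff/")
b = words("https://www.example.com/locations/swansea/")

print("Thin page?", len(a) < 300)  # a common rule-of-thumb threshold, not a Google rule

# Crude duplication estimate: shared vocabulary between the two pages.
overlap = len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)
print(f"Word overlap: {overlap:.0%} (the post above suggests staying under ~30%)")
```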
Related Questions
-
Do articles written about artificial intelligence rank on Google?
This is my personal website. I wonder, will articles written about artificial intelligence rank on Google, or will the site not rank? https://withpositivity.com/
Community | | lowzy0 -
Unsolved Landing pages report has no data even if I have ranking keywords and traffic
Is there any reason my landing page report does not include data for pages? I'm sure there is organic traffic on them, and I have tracked the correct keywords. Any similar insight will be helpful.
Moz Tools | | davidevans_seo0 -
Does a no-indexed parent page impact its child pages?
If I have a page* in WordPress that is set as private and is no-indexed with Yoast, will that negatively affect the visibility of other pages that are set as children of that first page? *The context is that I want to organize some of the pages on a business's WordPress site into silos/directories. For example, if the business was a home remodeling company, it'd be convenient to keep all the pages about bathrooms, kitchens, additions, basements, etc. bundled together under a "services" parent page (/services/kitchens/, /services/bathrooms/, etc.). The thing is that the child pages will all be directly accessible from the menus, so there doesn't need to be anything on the parent /services/ page itself. Another such parent page/directory/category might be used to keep different photo gallery pages together (/galleries/kitchen-photos/, /galleries/bathroom-photos/, etc.). So again, would it be safe for pages like /services/kitchens/ and /galleries/addition-photos/ if the /services/ and /galleries/ pages (but not /galleries/* or anything like that) are no-indexed? Thanks!
Technical SEO | | BrianAlpert781 -
Is it better to use XXX.com or XXX.com/index.html as canonical page
Is it better to use 301 redirects or a canonical tag? I suspect canonical is easier. The question is, which is the best canonical page, YYY.com or YYY.com/index.html? I assume YYY.com, since there will be many other pages, such as YYY.com/info.html, YYY.com/services.html, etc.
Technical SEO | | Nanook10 -
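One quick way into the question above is to check what the server already does: if YYY.com/index.html currently 301s to YYY.com, the root URL is the natural canonical target. A minimal standard-library sketch with a placeholder domain:

```python
# Sketch: request /index.html without following redirects and report
# the status. Standard library only; the domain is a placeholder.
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, *args, **kwargs):
        return None  # raise HTTPError instead of silently following 3xx

opener = urllib.request.build_opener(NoRedirect())
try:
    resp = opener.open("https://www.example.com/index.html")
    print("No redirect in place; status", resp.getcode())
except urllib.error.HTTPError as e:
    print("Status", e.code, "->", e.headers.get("Location"))
```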
What is the best way to find missing alt tags on my site (site wide - not page by page)?
I am looking to find all the missing alt tags on my site at once. I have a Firefox extension that used to do it page by page, but my site is huge and that would take forever. Thanks!!
Technical SEO | | franchisesolutions1 -
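For a site-wide pass like the one asked about above, a short script can walk the XML sitemap and flag every img tag without an alt attribute. A sketch using only the Python standard library; the sitemap URL is a placeholder:

```python
# Sketch: report images with missing or empty alt attributes, site-wide.
# Standard library only; the sitemap URL is a placeholder.
import re
import urllib.request
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            d = dict(attrs)
            if not d.get("alt"):  # flags both absent and empty alt
                self.missing.append(d.get("src", "?"))

sitemap = urllib.request.urlopen("https://www.example.com/sitemap.xml").read().decode()
for url in re.findall(r"<loc>(.*?)</loc>", sitemap):
    checker = AltChecker()
    checker.feed(urllib.request.urlopen(url).read().decode("utf-8", errors="replace"))
    for src in checker.missing:
        print(f"{url}: img missing alt -> {src}")
```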
De-indexing millions of pages - would this work?
Hi all, We run an e-commerce site with a catalogue of around 5 million products. Unfortunately, we have let Googlebot crawl and index tens of millions of search URLs, the majority of which are very thin on content or duplicates of other URLs. In short: we are in deep. Our bloated Google index is hampering our real content's ability to rank; Googlebot does not bother crawling our real content (product pages specifically) and hammers the life out of our servers. Since having Googlebot crawl and de-index tens of millions of old URLs would probably take years (?), my plan is this:
1. 301 redirect all old SERP URLs to a new SERP URL.
2. If the new URL should not be indexed, add a meta robots noindex tag on the new URL.
3. When it is evident that Google has indexed most "high quality" new URLs, disallow crawling of the old SERP URLs in robots.txt.
4. Then remove all old SERP URLs directory-style in the GWT URL Removal Tool.
This would be an example of an old URL: www.site.com/cgi-bin/weirdapplicationname.cgi?word=bmw&what=1.2&how=2
This would be an example of a new URL: www.site.com/search?q=bmw&category=cars&color=blue
I have two specific questions:
1. Would Google both de-index the old URL and not index the new URL after 301 redirecting the old URL to the new URL (which is noindexed), as described in point 2 above?
2. What risks are associated with removing tens of millions of URLs directory-style in the GWT URL Removal Tool? I have done this before, but then I removed "only" some 50,000 useless "add to cart" URLs. Google itself says that you should not remove duplicate/thin content this way and that using the tool this way "may cause problems for your site".
And yes, these tens of millions of SERP URLs are the result of a faceted navigation/search function let loose all too long. And no, we cannot wait for Googlebot to crawl all these millions of URLs in order to discover the 301s. By then we would be out of business. Best regards, TalkInThePark
Technical SEO | | TalkInThePark0 -
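Steps 1 and 2 of the plan above could be prototyped in a few lines. This is only a sketch: it assumes a Flask front end and an invented parameter mapping, whereas a real site this size would do the same thing at the web-server level:

```python
# Sketch: 301 old CGI search URLs to the new scheme, and serve the new
# scheme with a noindex header. Flask and the parameter mapping are
# assumptions for illustration only.
from flask import Flask, make_response, redirect, request

app = Flask(__name__)

@app.route("/cgi-bin/weirdapplicationname.cgi")
def old_serp():
    q = request.args.get("word", "")  # assumed mapping of legacy parameters
    return redirect(f"/search?q={q}", code=301)

@app.route("/search")
def new_serp():
    resp = make_response("search results here")
    # Equivalent to <meta name="robots" content="noindex">, sent as a header.
    resp.headers["X-Robots-Tag"] = "noindex"
    return resp
```

Note the ordering in the plan matters: once robots.txt disallows the old URLs (step 3), Googlebot can no longer see the 301s, which is why that step has to wait until the new URLs have been processed.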
Dynamically-generated .PDF files, instead of normal pages, indexed by and ranking in Google
Hi, I've come across a tough problem. I am working on an online store website which includes the functionality of viewing product details in .PDF format (by the way, the website is built on the Joomla CMS). When I search my site's name in Google, the SERP simply displays my .PDF files in the first couple of positions (shown in the normal [PDF] result format), and I cannot find the normal pages on SERP #1 unless I search the full site domain in Google. I really don't want this! Would you please tell me how to figure out the problem and solve it? I could remove the component (Virtuemart) that is in charge of generating the .PDF files. For now, my plan is to redirect all the .PDF pages ranking in Google to a 404 page, remove the functionality, regenerate my site's sitemap, and submit it to Google. Will that work for me? I would really appreciate it if you could help solve this problem. Thanks very much. Sincerely SEOmoz Pro Member
Technical SEO | | fugu0
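Before regenerating and resubmitting the sitemap as planned above, it is worth confirming that the retired .PDF URLs really do return an error status. A small standard-library sketch with placeholder URLs:

```python
# Sketch: confirm retired PDF URLs now return 404/410.
# Standard library only; the URL list is a placeholder.
import urllib.error
import urllib.request

pdf_urls = [
    "https://www.example-store.com/product-detail.pdf",  # hypothetical example
]
for url in pdf_urls:
    try:
        status = urllib.request.urlopen(url).getcode()
    except urllib.error.HTTPError as e:
        status = e.code
    print(url, "->", status, "(404 or 410 means Google can drop it)")
```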