Sitemap indexation
-
3 days ago I submitted a new sitemap for a new platform. It's 23,412 pages, but so far only 4 pages (!!) are indexed according to Webmaster Tools. Why so few? Meanwhile, our staging environment (more than 50K pages) got indexed within a few days, by mistake.
-
Thanks! I'll see if this changes anything.
-
It's not that complicated, it is really easy...
In Google Webmaster Tools, go to Crawl > Fetch as Google. The top-level URL will be displayed at the top of the page. Press the Fetch button to the right.
Google will fetch the page, and the result will be displayed underneath on the same page. To the right of that line you will see a Submit to index button. When you press it, a pop-up box will appear where you can choose to submit either just this page, or this page and all pages linked from it. Select the all-links option. (You can only use the full crawl/submit option 10 times in a calendar month; single-page submissions you can do 500 times a month.) Then press Submit.
Google will then submit all the pages to its index.
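As a side note, Google also accepts a plain HTTP "ping" asking it to re-read an already-submitted sitemap, which can be scripted instead of clicking through the UI. A minimal sketch in Python 3 using only the standard library; the sitemap URL here is an assumption, so substitute the one you actually submitted:

```python
import urllib.parse
import urllib.request

# Assumed sitemap URL -- swap in the file you actually submitted.
SITEMAP = "https://musik.dk/sitemap.xml"

# Google's documented sitemap ping endpoint. It nudges the crawler
# to re-read the sitemap; it does not guarantee indexation.
url = "https://www.google.com/ping?sitemap=" + urllib.parse.quote(SITEMAP, safe="")

with urllib.request.urlopen(url) as resp:
    print(resp.status)  # 200 means the ping was received
```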
Hope that helps.
Bruce
-
Regarding the error: Google crawled our https://stage.musik.dk instead of just https://musik.dk. We now have authorization on the subdomain, which produces errors in our account. I made another post about this, and it seems it shouldn't harm our ranking.
Webmaster Tools is an extremely messy tool when you're working with multiple subdomains plus HTTP/HTTPS variants.
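For reference, an alternative to password-protecting a staging host is to serve a noindex header from that vhost alone: the pages drop out of the index without flooding Webmaster Tools with authorization errors. A minimal sketch, assuming the staging site runs on Apache with mod_headers enabled:

```apache
# Place in the stage.musik.dk vhost only -- never on the live site.
# The header tells crawlers to drop every URL on this host from the
# index and not to follow its links.
<IfModule mod_headers.c>
    Header set X-Robots-Tag "noindex, nofollow"
</IfModule>
```

Note that the crawler still has to be able to fetch the pages for the header to be seen, so this shouldn't be paired with a blanket robots.txt Disallow.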
-
Yeah. I've tested it several times, with no errors. Today it's up to 35 indexed pages, but there's a long way to go...
-
What do you mean by manually submitting the site? It's more than 23,000 links, so a manual process is kind of a no-go.
-
Hi,
Are you sure you submitted the right sitemap format/files? We've had it in the past that our sitemap was broken up into multiple files, and we had to submit sitemap-index.xml plus sitemap-1.xml ... sitemap-16.xml (the index format is sketched below). Have you checked it again and again?
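For anyone unfamiliar with the format, a sitemap index is just an XML file that points at the individual sitemap files, and only the index itself needs to be submitted in Webmaster Tools. A minimal sketch; the file names and dates are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://musik.dk/sitemap-1.xml</loc>
    <lastmod>2014-05-01</lastmod>
  </sitemap>
  <sitemap>
    <loc>https://musik.dk/sitemap-2.xml</loc>
    <lastmod>2014-05-01</lastmod>
  </sitemap>
</sitemapindex>
```

Per the sitemaps.org spec, each child file is capped at 50,000 URLs, so 23,412 pages would fit in a single file, but splitting is common anyway.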
regards
Jarno
-
Not sure what the problem was with the "by mistake" indexing.
Go to Google Webmaster Tools and "manually" submit the site for the home page and all links. This will at least get the ball rolling whilst you investigate the other possible problems. Once you revisit the sitemap, check that it is complete and has not missed off a bunch of pages (a quick way to check this is sketched below).
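One quick way to run that completeness check is to count the <loc> entries the sitemap actually contains and compare against the expected 23,412. A sketch in Python 3, standard library only; the sitemap URL is an assumption:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Assumed sitemap location -- substitute the URL you actually submitted.
SITEMAP = "https://musik.dk/sitemap.xml"
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urllib.request.urlopen(SITEMAP) as resp:
    tree = ET.parse(resp)

# Count <url><loc> entries. For a sitemap *index*, count
# <sitemap><loc> entries instead and repeat per child file.
locs = tree.findall(".//sm:url/sm:loc", NS)
print(f"{len(locs)} URLs listed in the sitemap")
```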
Bruce
Related Questions
-
URLs dropping from index (Crawled, currently not indexed)
I've noticed that some of our URLs have recently dropped completely out of Google's index. When carrying out a URL inspection in GSC, it comes up with 'Crawled, currently not indexed'. Strangely, I've also noticed that under referring page it says 'None detected', which is definitely not the case. I wonder if it could be something to do with the following? https://www.seroundtable.com/google-ranking-index-drop-30192.html - It seems to be a bug affecting quite a few people. Here are a few examples of the URLs that have gone missing: https://www.ihasco.co.uk/courses/detail/sexual-harassment-awareness-training https://www.ihasco.co.uk/courses/detail/conflict-resolution-training https://www.ihasco.co.uk/courses/detail/prevent-duty-training Any help here would be massively appreciated!
Technical SEO | iHasco0
-
Should I Edit Sitemap Before Submitting to GWMT?
I use the XML sitemap generator at http://www.auditmypc.com/xml-sitemap.asp and use the filter that forces the tool to respect robots.txt exclusions. This generator allows me to review the entire sitemap before downloading it. Depending on the site, I often see all kinds of non-content files still listed on the sitemap. My question is, should I be editing the sitemap to remove every file listed except ones I really want spidered, or just ignore them and let the Google spiderbot figure it all out after I upload-submit the XML?
Technical SEO | DonB0
-
Removing indexed website
I had a .in TLD version of my .com website live for about 15 days; it was a duplicate copy of the .com website. I did not wish to use the .in any further, for SEO duplication reasons, and let the .in domain expire on 26th April. But even now, when I search for my website, the .in version also shows up in results, and in Google Webmaster Tools it shows as the site with the maximum number (190) of links to my .com website. I am sure this is hurting the ranking of my .com website. How can the .in website be removed from Google's index and search results, given that it has also expired? thanks
Technical SEO | geekwik0
-
Can you have a /sitemap.xml and /sitemap.html on the same site?
Thanks in advance for any responses; we really appreciate the expertise of the SEOmoz community! My question: since the file extensions are different, can a site have both a /sitemap.xml and a /sitemap.html sitting at the root domain? For example, we've already put the HTML sitemap in place here: https://www.pioneermilitaryloans.com/sitemap Now we're considering adding an XML sitemap. I know standard practice is to load it at the root (www.example.com/sitemap.xml), but I'm wondering if this will cause conflicts. I've been unable to find this topic addressed anywhere, or any real-life examples of sites currently doing this. What do you think?
Technical SEO | PioneerServices0
-
Index Category Archives?
I'm using WordPress categories to add products. Normally I noindex category archives to prevent duplicate content issues, with the blog page serving as the index, but I don't have one on this site: http://66.147.244.50/~proflowc/ Should I index the category archives to ensure that the products are indexed, or will Google see them anyway?
Technical SEO | waynekolenchuk0
-
How to tell if PDF content is being indexed?
I've searched extensively for this but could not find a definitive answer. We recently updated our website, and it contains links to about 30 PDF data sheets. I want to determine whether the text from these PDFs is being archived by search engines. When I do this search http://bit.ly/rRYJPe (Google: site:www.gamma-sci.com filetype:pdf) I can see that the PDF URLs are getting indexed, but does that mean their content is getting indexed? I have read in other posts/places that if you can copy text from a PDF and paste it, that means Google can index the content. When I try this with PDFs from our site I cannot copy text, but I was told these PDFs were all created from Word docs, so they should be indexable, correct? Since WordPress has you upload PDFs as if they were images, could this be causing the problem? Would it make sense to take the time and extract all of the PDF content to HTML? Thanks for any assistance; this has been driving me crazy.
Technical SEO | zazo0
-
XML Sitemap without PHP
Is it possible to generate an XML sitemap for a site without PHP? If so, how?
Technical SEO | jeffreytrull11
-
How to remove a sub domain from Google Index!
Hello, I have a website with many subdomains carrying the same copy of the content. I think it's harming my SEO for that site, since the abc and xyz subdomains have the same contents. I have already deleted the DNS records for the subdomains in question; now how do I have those pages removed from Google's index as well? The DNS records for those subdomains no longer exist.
Technical SEO | anand20100