3724 pages submitted, 3591 indexed
-
You probably know what I mean: the report in Google Webmaster Tools > Sitemaps.
So how do I locate the pages that are NOT indexed?
Thanks,
Ben
-
Thanks.
-
...and check the last reply in this thread: http://moz.com/community/q/how-to-determine-which-pages-are-not-indexed. I have not tried it, but it looks promising.
-
This may go some way to answering your question - http://support.google.com/webmasters/bin/answer.py?hl=en&answer=2642366
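If the report only gives you the totals, one rough way to narrow things down yourself is to extract every URL from the sitemap and diff it against whatever list of indexed URLs you can assemble (for example, URLs you have spot-checked with site: queries, or landing pages that receive Google organic traffic in your analytics). Here is a minimal sketch in Python, assuming a local copy of the sitemap and a plain-text file of known-indexed URLs; both filenames are placeholders:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(path):
    """Collect every <loc> entry from a local copy of the sitemap."""
    tree = ET.parse(path)
    return {loc.text.strip() for loc in tree.iter(SITEMAP_NS + "loc")}

def known_indexed_urls(path):
    """Load a plain-text file with one known-indexed URL per line."""
    with open(path) as handle:
        return {line.strip() for line in handle if line.strip()}

if __name__ == "__main__":
    submitted = sitemap_urls("sitemap.xml")            # placeholder filename
    indexed = known_indexed_urls("indexed_urls.txt")   # placeholder filename
    missing = sorted(submitted - indexed)
    print(f"{len(missing)} of {len(submitted)} submitted URLs are not in the indexed list:")
    for url in missing:
        print(url)
```

The weak link is the "indexed" list itself: Google does not export it for you, so however you build it, treat the output as a list of candidates to investigate rather than a definitive answer.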
Related Questions
-
My site got hacked and now I have 1000s of 404 pages and backlinks. Should I transfer my site to a new domain name and start again?
My site was hacked, and thousands of pages that should not exist were created, along with thousands of backlinks pointing at them. Now those same pages and backlinks resolve to 404 pages. Is this why my site crashed out of Google, and why the SEO fixes I have made since have made no progress on the problem?
Industry News | KeithWarbyUK
-
Backlink Query. Unranked pages of High Ranking sites.
Hi, I was just wondering if someone with more knowledge than myself could answer this question for me. I have a site currently sitting on page 2 of Google. On-site optimisation is done; however, I am struggling to get backlinks from high-ranking pages. I am new to SEO, so I need a hand. My understanding of backlinks is that the higher the PR of the site that links to your 'money' site, the better that link is, and that these links are very hard to come by (something I am finding). Many times I have found sites that have a high rank and offer a free listing, only for me to fill in my details and get listed on a sub-page that has no ranking whatsoever. So my question is: are these kinds of links worth the effort? Do they actually have any effect on rankings? And generally, would anyone have any tips on the best sites to get links from? Thanks
Industry News | Chstphrjohn
-
Dex Landing Pages!
Hello Moz-ers! I have a question for the community that I'd like some feedback on. Here is the situation: my client owns a local business, which has a website that we built for them last year. Rankings and traffic have been great, and they're doing well in a competitive market. This client also works with Dex Media, who manages some of the marketing for this business. As part of this, Dex has purchased a few variations of the branded domain names and has placed landing pages at these URLs with basic information, a phone number, etc. It seems like a standard page for one of these services. These sites have different phone numbers, used for tracking leads from those particular pages. To me, these seem totally unnecessary, and also as though they could be pulling visitors away from the main website, which details all of their services. It also seems like the four similar domain names, each with a different phone number for the same business, could pretty easily confuse or frustrate potential visitors. I told the client that I thought this might be the case, and that at the very least they should ask Dex about acquiring ownership of those extra domains containing their brand, in addition to seeing if they would be able to just redirect those URLs to the main domain. They responded by saying that Google doesn't care about any of this, citing this article: https://support.google.com/adwordspolicy/answer/2643759?hl=en# To me, that support link does not appear to address the issue at hand, as it focuses mainly on proxy URLs, giving the example of proxy.example.com. Can anyone weigh in here with an opinion? Is there any reason to think that these extra domains would not have a potential negative impact on traffic/rankings for the client's main domain? Thanks! Tyler
Industry News | kbaltzell
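On the redirect point raised in the question above: if the extra branded domains can be pointed at a server the client controls, consolidating them is usually just a matter of answering every request with a 301 to the canonical domain. That is normally configured at the web server or registrar level rather than in application code, but as a rough illustration, here is a minimal standard-library Python sketch (the canonical domain is a placeholder):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

CANONICAL = "https://www.example-main-site.com"  # placeholder for the client's main domain

class RedirectHandler(BaseHTTPRequestHandler):
    """Answer every request on an extra branded domain with a 301 to the main site."""

    def do_GET(self):
        self.send_response(301)
        # Preserve the requested path so deep links land on the equivalent page.
        self.send_header("Location", CANONICAL + self.path)
        self.end_headers()

    do_HEAD = do_GET  # crawlers often probe with HEAD; treat it the same way

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectHandler).serve_forever()
```

Whatever mechanism is used, the goal is simply that each extra domain answers with a permanent redirect to one canonical home, so visitors and any link value consolidate there instead of being split across near-duplicate landing pages.
-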
If I have a Google+ Business page, do I need a Google Places page as well?
It seems like the two are redundant? Any official word on this? I'm fairly OCD about things being tidy, and I don't want to split my reviews / shares / etc. between two profiles. Are they not the same thing? I searched for my company, and both my Plus business page and my Places page came up. I attached a screenshot of the situation. placesvplus.png
Industry News | jonnyholt
-
Need a contractor to create Wikipedia pages
Hey guys! Can anyone recommend a good contractor to create and maintain Wikipedia pages? We are in the publishing business, and I want to create Wikipedia pages for our authors/products. I need someone who has successfully created Wikipedia pages before, can do basic research to find sources that Wikipedia will consider reliable, has good academic writing skills, and can start a debate in case editors want to remove our article. If anyone knows good contractors, please recommend them. Thanks!
Industry News | Alexey_mindvalley
-
Hello, actually I have a bit of a doubt: if I create a Google+ business page, will it help or have any effect on my website's ranking?
If I create a Google+ business page, will it help or have any effect on my website's ranking?
Industry News | jaybinary
-
What is the best method for getting pure JavaScript/AJAX pages indexed by Google for SEO?
I am in the process of researching this further and wanted to share some of what I have found below. Anyone who can confirm or deny these assumptions, or add some insight, would be appreciated.
Option 1: If you're starting from scratch, a good approach is to build your site's structure and navigation using only HTML. Then, once you have the site's pages, links, and content in place, you can spice up the appearance and interface with AJAX. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your AJAX bonuses. You can use Hijax to help AJAX and HTML links coexist, and you can use meta nofollow tags etc. to prevent crawlers from accessing the JavaScript versions of the pages. Currently, webmasters create a "parallel universe" of content: users of JavaScript-enabled browsers see content that is created dynamically, whereas users of non-JavaScript-enabled browsers, as well as crawlers, see content that is static and created offline. In current practice, "progressive enhancement" in the form of Hijax links is often used.
Option 2: In order to make your AJAX application crawlable, your site needs to abide by a new agreement. This agreement rests on the following: the site adopts the AJAX crawling scheme, and for each URL that has dynamically produced content, your server provides an HTML snapshot, which is the content a user (with a browser) sees. Often, such URLs will be AJAX URLs, that is, URLs containing a hash fragment, for example www.example.com/index.html#key=value, where #key=value is the hash fragment. An HTML snapshot is all the content that appears on the page after the JavaScript has been executed. The search engine indexes the HTML snapshot and serves your original AJAX URLs in search results. To make this work, the application must use a specific syntax in the AJAX URLs (let's call them "pretty URLs"). The search engine crawler will temporarily modify these "pretty URLs" into "ugly URLs" and request those from your server. This request for an "ugly URL" tells the server that it should not return the regular web page it would give to a browser, but an HTML snapshot instead. When the crawler has obtained the content for the modified ugly URL, it indexes that content, then displays the original pretty URL in the search results. In other words, end users will always see the pretty URL containing the hash fragment.
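To make the server side of Option 2 more concrete, here is a minimal illustrative sketch in Python (standard library only) of a handler that serves the normal JavaScript shell to browsers but returns a pre-rendered HTML snapshot when the crawler requests the "ugly URL" form containing the _escaped_fragment_ query parameter. How the snapshot is actually generated (for example, with a headless browser that executes the page's JavaScript ahead of time) is stubbed out here and left as an assumption:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Shell page served to normal visitors; the real content is built by client-side JS.
AJAX_SHELL = (
    "<html><body><div id='app'></div>"
    "<script src='/app.js'></script></body></html>"
)

def render_snapshot(state):
    """Stub: return fully rendered HTML for the given fragment state.

    In practice this would come from a headless browser or a pre-rendering
    step that executes the page's JavaScript ahead of time (assumption).
    """
    return f"<html><body><h1>Snapshot for state: {state}</h1></body></html>"

class CrawlableAjaxHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query, keep_blank_values=True)
        if "_escaped_fragment_" in query:
            # The crawler rewrote a "pretty URL" into its "ugly URL" form,
            # so serve the static HTML snapshot instead of the JS shell.
            body = render_snapshot(query["_escaped_fragment_"][0])
        else:
            # Ordinary visitors get the regular page and run the JavaScript themselves.
            body = AJAX_SHELL
        encoded = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(encoded)))
        self.end_headers()
        self.wfile.write(encoded)

if __name__ == "__main__":
    HTTPServer(("", 8080), CrawlableAjaxHandler).serve_forever()
```

Run locally, http://localhost:8080/?_escaped_fragment_=key=value would return the snapshot, while http://localhost:8080/ returns the regular shell that relies on client-side JavaScript.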
See more in the Getting Started Guide. Make sure you avoid this:
http://www.google.com/support/webmasters/bin/answer.py?answer=66355
Here are a few example pages that are mostly JavaScript/AJAX: http://catchfree.com/listen-to-music#&tab=top-free-apps-tab https://www.pivotaltracker.com/public_projects This is what the spiders see: view-source:http://catchfree.com/listen-to-music#&tab=top-free-apps-tab These are the best resources I have found regarding Google and JavaScript: http://code.google.com/web/ajaxcrawling/ - this is step-by-step instructions.
http://www.google.com/support/webmasters/bin/answer.py?answer=81766
http://www.seomoz.org/blog/how-to-allow-google-to-crawl-ajax-content
Some additional Resources: http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html
http://www.google.com/support/webmasters/bin/answer.py?answer=35769
Industry News | webbroi
-
My website's internal page is not ranking.
Hi, I had the keywords "x1", "x2", and "x3" ranking for an internal page of my website, "www.somesite.com/internal-page.htm", and I was doing fairly well for those keywords, mostly in the top 5. But for the last couple of weeks, the internal page URL has been gone from the results and replaced with my home page, and I don't see the internal page anywhere, even in the top 1000. Is that a penalty against the internal page? If yes, how can I troubleshoot it? I see my other pages in the top 20-30, but that specific page is nowhere. Please let me know. -Sunny.
Industry News | sunny.popali
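On the troubleshooting part of the question above: before assuming a penalty, it is worth ruling out the mundane on-page causes that can make Google swap an internal URL for the home page, such as the page returning an error, carrying a noindex directive, or pointing its canonical somewhere else. A small illustrative Python check, using the placeholder URL from the question (a real audit would go further):

```python
import re
import urllib.error
import urllib.request

PAGE_URL = "http://www.somesite.com/internal-page.htm"  # placeholder URL from the question

def fetch(url):
    """Fetch the page and return (status code, headers, HTML text)."""
    request = urllib.request.Request(url, headers={"User-Agent": "index-check-script"})
    with urllib.request.urlopen(request) as response:
        return response.status, response.headers, response.read().decode("utf-8", "replace")

if __name__ == "__main__":
    try:
        status, headers, html = fetch(PAGE_URL)
    except urllib.error.HTTPError as err:  # 4xx/5xx responses still tell us something
        status, headers, html = err.code, err.headers, ""
    print("HTTP status:", status)                                # anything other than 200 is a red flag
    print("X-Robots-Tag header:", headers.get("X-Robots-Tag"))   # a 'noindex' here blocks indexing
    robots_meta = re.findall(r'<meta[^>]+name=["\']robots["\'][^>]*>', html, re.I)
    print("robots meta tags:", robots_meta or "none")            # look for a noindex value
    canonicals = re.findall(r'<link[^>]+rel=["\']canonical["\'][^>]*>', html, re.I)
    print("canonical tags:", canonicals or "none")               # a canonical pointing elsewhere can drop this URL
```

If all of those look clean, the next stops would be Webmaster Tools (crawl errors and any manual action messages) and a look at whether recent internal linking or content changes pushed the home page ahead of the internal URL for those terms.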