Most recent blog post isn't being indexed?
-
http://www.howlatthemoon.com/dueling_piano_bar/kids-activities-denver/
Even when I put the URL directly into Google, it doesn't show up.
-
Thanks! Sorry I freaked out. The other day I noindexed my archives and some other things, and I was worried my settings had screwed things up. Phew!
-
I see the entry for the page in the XML sitemap now. I also just searched the URL in Google and it's there as well. Looks like this was just a timing issue.
-
http://www.howlatthemoon.com/dueling_piano_bar/sitemap.xml
It says it was updated last night?
-
I do have an RSS feed; it just wasn't displayed anywhere. I've now added it to my sidebar.
-
You can submit more than one sitemap in GWT, and Google will read both an XML sitemap and an RSS feed. I have Google reading an XML sitemap, and it found my RSS feeds as well. I'd say whichever feed you can control, XML or RSS, get it to your liking and add it in GWT for Google to chew on.
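If you'd rather not submit each one by hand, you can also reference both from robots.txt so Google discovers them on its own. A minimal sketch, assuming the sitemap and feed live at the default WordPress locations (adjust the paths to whatever your install actually serves):

Sitemap: http://www.howlatthemoon.com/dueling_piano_bar/sitemap.xml
Sitemap: http://www.howlatthemoon.com/dueling_piano_bar/feed/

Google accepts RSS/Atom feeds as sitemaps, so both lines are valid.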
-
Ping it. That will help it get indexed quicker.
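For reference, a sitemap ping is just a GET request along these lines (swap in your own sitemap URL, URL-encoded):

http://www.google.com/ping?sitemap=http%3A%2F%2Fwww.howlatthemoon.com%2Fdueling_piano_bar%2Fsitemap.xml

Request that in a browser or with curl and Google will queue the sitemap for a re-read.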
I would suggest using your RSS feed to get the page indexed by listing its address there, but you don't appear to have an RSS feed. You really should have one if you're taking blogging seriously and trying to engage with your customer base.
Regards,
John
-
Thanks for the response. I'm confused by the sitemap stuff. My root domain has Joomla installed on it, and the person managing it before I started working here set up the sitemap; I'm not sure what they did.
My blog, /dueling_piano_bar, runs on WordPress. I'm using Yoast SEO, and I see it has sitemap options. I checked the box to enable XML sitemap functionality because, until just now, I hadn't noticed I already had a sitemap plugin in the settings. That XML-Sitemap plugin has "Rebuild sitemap if you change the content of your blog" checked, so I don't know why it wouldn't be up to date, unless the SEO plugin is interfering?
I thought I just needed one sitemap for the whole site, located at www.howlatthemoon.com/sitemap.xml. Is that incorrect? And how can I keep it updated? We're on Joomla 1.5 (which I hate, but we don't have the budget and I don't have the time to switch to another CMS), so I don't think there's a way to just rebuild the sitemap from within Joomla.
Sorry if this is confusing; I'm confusing myself trying to write it out. I really appreciate your help.
-
The sitemap for your site doesn't reflect your latest post. The WordPress plugin you use to generate the sitemap can be rebuilt manually, so you don't have to wait for it to run automatically. Now that I'm looking at it, it doesn't appear to have been updated in quite some time...
Go into Settings > XML-Sitemap, then click "rebuild the sitemap" in the middle of the first section. Check your sitemap again to make sure it's been updated.
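A quick way to confirm the rebuild actually worked, assuming you have curl handy and the plugin writes lastmod dates (most sitemap plugins do), is:

curl -s http://www.howlatthemoon.com/dueling_piano_bar/sitemap.xml | grep -i lastmod

If the dates still look stale after you click rebuild, the plugin isn't regenerating the file.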
-
Hello there!
Based on the date listed on the page, you posted this yesterday, 7/24/2013. Depending on how often Google visits your site, it may not spider all your pages every day. It has only been 24 hours, so you may need to give it more time.
One thing that can help speed this up is making sure the page is listed in your sitemap:
http://www.howlatthemoon.com/sitemap.xml
I did not see the page listed there, and the sitemap is a common place Google looks for new additions.
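For what it's worth, a new post should appear in the sitemap as its own <url> entry, something like this (the lastmod and changefreq values here are just illustrative):

<url>
  <loc>http://www.howlatthemoon.com/dueling_piano_bar/kids-activities-denver/</loc>
  <lastmod>2013-07-24</lastmod>
  <changefreq>monthly</changefreq>
</url>

If an entry like that never shows up, the sitemap isn't being regenerated when you publish.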
Good luck
-
Yes, but normally it only takes an hour or so. And I can't even find it by searching for the URL, which is definitely strange.
-
You created it yesterday; why not wait a bit longer? Google just hasn't crawled it yet. Everything from two days ago is already in Google.
BTW, the page is very slow: http://www.webpagetest.org/result/130725_27_2XB4/. You should at least enable caching.
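If the site is on Apache, a minimal caching sketch for .htaccess would be something like the following (assuming mod_expires is enabled; check with your host first):

<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/jpeg "access plus 1 month"
  ExpiresByType image/png "access plus 1 month"
  ExpiresByType text/css "access plus 1 week"
  ExpiresByType application/javascript "access plus 1 week"
</IfModule>

A WordPress caching plugin such as W3 Total Cache or WP Super Cache will handle this (plus page caching) for you.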
Related Questions
-
Crawl and Indexation Error - Googlebot can't/doesn't access specific folders on microsites
Hi, my first time posting here. I am just looking for some feedback on an indexation issue we have with a client, and on possible next steps or items I may have overlooked.
To give some background, our client operates a website for the core brand and also a number of microsites based on specific business units, so you have corewebsite.com along with bu1.corewebsite.com, bu2.corewebsite.com, and so on. The content structure isn't ideal, as each microsite follows a structure of bu1.corewebsite.com/bu1/home.aspx, bu2.corewebsite.com/bu2/home.aspx and so on. In addition, each microsite has duplicate folders from the other microsites: bu1.corewebsite.com has the indexable folder bu1.corewebsite.com/bu1/home.aspx but also bu1.corewebsite.com/bu2/home.aspx, and likewise bu2.corewebsite.com has bu2.corewebsite.com/bu2/home.aspx but also bu2.corewebsite.com/bu1/home.aspx. There are 5 different business units, so you have this duplicate-content scenario across all the microsites. This is being addressed in the medium-term development roadmap and will be rectified in the next iteration of the site, but that is still a ways out.
The issue: About 6 weeks ago we noticed a drop-off in search rankings for two of our microsites (bu1.corewebsite.com and bu2.corewebsite.com). Over a period of 2-3 weeks pretty much all our terms dropped out of the rankings and search visibility dropped to essentially 0. I can see that pages from the websites are still indexed, but oddly it is the duplicate-content pages: bu1.corewebsite.com/bu3/home.aspx or bu1.corewebsite.com/bu4/home.aspx is still indexed; similarly, on the bu2.corewebsite.com microsite, bu2.corewebsite.com/bu3/home.aspx and bu4.corewebsite.com/bu3/home.aspx are indexed, but no pages from the BU1 or BU2 content directories seem to be indexed under their own microsites.
Logging into webmaster tools, I can see the error "Google couldn't crawl your site because we were unable to access your site's robots.txt file." This was a bit odd, as there was no robots.txt in the root directory, but I got some weird results when I checked the BU1/BU2 microsites in technicalseo.com's robots.txt tool. Also, because there is a redirect from bu1.corewebsite.com/ to bu1.corewebsite.com/bu4.aspx, I thought maybe there could be something there, so we removed the redirect and added a basic robots.txt to the root directory for both microsites. After this we saw a small pickup in site visibility; a few terms popped into our Moz campaign rankings but dropped out again pretty quickly. The error message in GSC persisted.
Steps taken so far after that:
1. In Google Search Console, I confirmed there are no manual actions against the microsites.
2. Confirmed there are no instances of noindex on any of the pages for BU1/BU2.
3. A number of the main links from the root domain to microsites BU1/BU2 have a rel="noopener noreferrer" attribute, but we looked into this and found it has no impact on indexation.
4. Looking into this issue, we saw some people had similar issues when using Cloudflare, but our client doesn't use this service.
5. Using a response/redirect header checker tool, we noticed a timeout when trying to mimic Googlebot accessing the site.
6. Following on from point 5, we got hold of a week of server logs from the client. I can see Googlebot successfully pinging the site and not getting 500 response codes from the server, but I couldn't see any instance of it trying to index microsite BU1/BU2 content.
So it seems to me that the issue could be something server-side, but I'm at a bit of a loss for next steps to take. Any advice at all is much appreciated!
Intermediate & Advanced SEO | ImpericMedia
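One way to reproduce what Googlebot sees, assuming curl is available (the hostname below is the questioner's placeholder), is to fetch robots.txt with Googlebot's user-agent string and watch for the timeout:

curl -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" -I --max-time 30 http://bu1.corewebsite.com/robots.txt

If that hangs while a normal user-agent gets a response, something server-side (a firewall or bot-filtering rule) is treating Googlebot differently, which would line up with the GSC error.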
How would you handle these pages? Should they be indexed?
If a site has about 100 pages offering specific discounts for employees at various companies, for example:
mysite.com/discounts/target
mysite.com/discounts/kohls
mysite.com/discounts/jcpenney
and all these pages are nearly 100% duplicates, how would you handle them? My recommendation to my client was to use noindex, follow. These pages tend to receive backlinks from the actual companies receiving the discounts, so obviously they are valuable from a linking standpoint. But say the content is nearly identical between each page; should they be indexed? Is there any value for someone at Kohl's, for example, to be able to find this landing page in the search results? Here is a live example of what I am talking about: https://www.google.com/search?num=100&safe=active&rlz=1C1WPZB_enUS735US735&q=site%3Ahttps%3A%2F%2Fpoi8.petinsurance.com%2Fbenefits%2F&oq=site%3Ahttps%3A%2F%2Fpoi8.petinsurance.com%2Fbenefits%2F&gs_l=serp.3...7812.8453.0.8643.6.6.0.0.0.0.174.646.3j3.6.0....0...1c.1.64.serp..0.5.586...0j35i39k1j0i131k1j0i67k1j0i131i67k1j0i131i46k1j46i131k1j0i20k1j0i10i3k1.RyIhsU0Yz4E
Intermediate & Advanced SEO | FPD_NYC
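For reference, the noindex, follow recommendation mentioned above is a single meta tag in the head of each discount page, along these lines:

<meta name="robots" content="noindex, follow">

That keeps the near-duplicate pages out of the index while still letting the link equity from those company backlinks flow through the site.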
How can a recruitment company get 'credit' from Google when syndicating job posts?
I'm working on an SEO strategy for a recruitment agency. Like many recruitment agencies, they write tons of great unique content each month and, as agencies do, they post the job descriptions to job websites as well as their own. These job websites generally won't allow any linking back to the agency website from the post. What can we do to make Google realise that the originator of the post is the recruitment agency and they deserve the 'credit' for the content? The recruitment agency has a low domain authority, so we're very much at the start of the process. It would be a damn shame if they produced so much great unique content but couldn't get Google to recognise it. Google's advice says: "Syndicate carefully: If you syndicate your content on other sites, Google will always show the version we think is most appropriate for users in each given search, which may or may not be the version you'd prefer. However, it is helpful to ensure that each site on which your content is syndicated includes a link back to your original article. You can also ask those who use your syndicated material to use the noindex meta tag to prevent search engines from indexing their version of the content." But none of that can happen; those big job websites just won't do it. A previous post here didn't get a sufficient answer. I'm starting to think there isn't an answer, other than having more authority than the websites we're syndicating to. Which isn't going to happen any time soon! Any thoughts?
Intermediate & Advanced SEO | Mark_Reynolds
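For reference, the two mechanisms Google's advice describes are simple tags on the syndicated copy; the problem here is purely that the job boards won't add them. A cross-domain canonical on the job board's version would look like this (the URL is just a placeholder):

<link rel="canonical" href="http://www.recruitmentagency.example/jobs/senior-developer/">

and the alternative is <meta name="robots" content="noindex"> on the syndicated page.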
Google isn't seeing the content but it is still indexing the webpage
When I fetch my website page using GWT, this is what I receive:

HTTP/1.1 301 Moved Permanently
X-Pantheon-Styx-Hostname: styx1560bba9.chios.panth.io
server: nginx
content-type: text/html
location: https://www.inscopix.com/
x-pantheon-endpoint: 4ac0249e-9a7a-4fd6-81fc-a7170812c4d6
Cache-Control: public, max-age=86400
Content-Length: 0
Accept-Ranges: bytes
Date: Fri, 14 Mar 2014 16:29:38 GMT
X-Varnish: 2640682369 2640432361
Age: 326
Via: 1.1 varnish
Connection: keep-alive

What I used to get is this:

HTTP/1.1 200 OK
Date: Thu, 11 Apr 2013 16:00:24 GMT
Server: Apache/2.2.23 (Amazon)
X-Powered-By: PHP/5.3.18
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Thu, 11 Apr 2013 16:00:24 +0000
Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0
ETag: "1365696024"
Content-Language: en
Link: ; rel="canonical",; rel="shortlink"
X-Generator: Drupal 7 (http://drupal.org)
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8

xmlns:content="http://purl.org/rss/1.0/modules/content/"
xmlns:dc="http://purl.org/dc/terms/"
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:og="http://ogp.me/ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:sioc="http://rdfs.org/sioc/ns#"
xmlns:sioct="http://rdfs.org/sioc/types#"
xmlns:skos="http://www.w3.org/2004/02/skos/core#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"> <title>Inscopix | In vivo rodent brain imaging</title>
Intermediate & Advanced SEO | jacobfy
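A quick way to see the full redirect chain the way a crawler would, assuming curl is available (substitute whichever URL was fetched in GWT):

curl -sIL http://inscopix.com/ | grep -iE "HTTP/|location:"

That prints the status line and Location header for every hop, so you can confirm whether the 301 ends at a URL that actually returns a 200 with content.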
Why isn't my site being indexed?
Hello... I have to tell you that I feel like a newbie (I am one, but now I really feel like it)... We were working with a client until January this year; they kept going on their own until September, when they contacted us again. Someone on the team that handled things while we were gone updated the robots.txt file to Disallow everything, for maybe 3 weeks before we were brought back in. Additionally, they were working on a different subdomain, the new version of the site, and of course they didn't block the robots on that one. So the whole site has been duplicated, even its content; the exact same pages existed on the subdomain, which was public during the same time the main site was blocked. We came in, changed the robots.txt file on both servers, resubmitted all the sitemaps, shared our URL on Google+... everything the book says... but the site is not getting indexed. It's been 5 weeks now and no response whatsoever. We ranked highly for several important keywords and now that's gone. I know you guys can help; any advice will be highly appreciated. Thanks, Dan
Intermediate & Advanced SEO | daniel.alvarez
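For context, the blocking rule described above is just two lines in robots.txt:

User-agent: *
Disallow: /

and the wide-open version is the same with an empty rule (Disallow:). It's worth re-checking what both servers actually serve now to make sure the open version is live everywhere.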
How does Google index pagination variables in Ajax snapshots? We're seeing random huge variables.
We're using the Google snapshot method to index dynamic Ajax content. Some of this content comes from tables that use pagination. The pagination is tracked with a var in the hash, something like: #!home/?view_3_page=1. We're now seeing all sorts of calls from Google with huge numbers for these URL variables that we are not generating with our snapshots, like this: #!home/?view_3_page=10099089. These aren't trivial, since each snapshot represents a server load, so we'd like these vars to only represent what's returned by the snapshots. Is Google generating random numbers and going fishing for content? If so, is this something we can control or minimize?
Intermediate & Advanced SEO | sitestrux
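One defensive option, sketched below in Python (purely illustrative; the real snapshot server may be in another language, and MAX_PAGE is a hypothetical cap you would derive from actual row counts), is to clamp the pagination variable before rendering a snapshot so fished-for values can't trigger expensive renders:

import re

MAX_PAGE = 50  # hypothetical upper bound; derive from your real table size

def parse_snapshot_page(escaped_fragment):
    # A hash fragment like "home/?view_3_page=1" arrives via ?_escaped_fragment_=...
    match = re.search(r'view_3_page=(\d+)', escaped_fragment)
    page = int(match.group(1)) if match else 1
    if page < 1 or page > MAX_PAGE:
        return None  # caller should respond 404 instead of rendering a snapshot
    return page

Returning a 404 for out-of-range pages lets Google prune those URLs instead of re-requesting them.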
My website hasn't been cached for over a month. Can anyone tell me why?
I have been working on an eCommerce site, www.fuchia.co.uk. I asked an earlier question about how to get it working and ranking, I took on board what people said (such as optimising product pages, etc.), and I think I'm getting there. The problem I have now is that Google hasn't indexed my site in over a month, and the homepage cache is 404'ing when I check it on Google. At the moment there is a problem with the site being live for both the WWW and non-WWW versions. I have told Google in Webmaster Tools which preferred domain to use, and I will also be getting the developers to 301 redirect to the preferred domain. Would this be the problem stopping Google from properly indexing me? Also, only around 30 of 137 pages were indexed from the last crawl. Can anyone tell me or suggest why my site hasn't been indexed in such a long time? Thanks
Intermediate & Advanced SEO | SEOAndy
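For reference, the 301 to the preferred domain is typically a short mod_rewrite block in .htaccess. A sketch, assuming Apache and the www version as preferred (adjust to the actual setup):

RewriteEngine On
RewriteCond %{HTTP_HOST} ^fuchia\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://www.fuchia.co.uk/$1 [R=301,L]

Once only one hostname serves 200s, Google can consolidate the duplicate versions and the cache should repopulate.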
To index or not to index search pages - (Panda related)
Hi Mozzers. I have a WordPress site with Relevanssi, the search engine plugin (free version). Questions: Should I let Google index my site's SERPs? I am scared the page quality is too thin, and then the Panda bear will get angry. This plugin (or my previous search engine plugin) created many of these "no-results" URIs: /?s=no-results%3Ano-results%3Ano-results%3Ano-results%3Ano-results%3Ano-results%3Ano-results%3Akids+wall&cat=no-results&pg=6 I have added a robots.txt rule to disallow these pages and did a GWT URL removal request. But links to these pages are still being displayed in Google's SERPs under "repeat the search with the omitted results included". So will this affect me negatively, or are these results harmless? What exactly is an omitted result? As I understand it, Google found a link to a page but can't display it because I block Googlebot. Thanx in advance, guys.
Intermediate & Advanced SEO | ClassifiedsKing
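For reference, a robots.txt rule along the lines described would be:

User-agent: *
Disallow: /?s=

Note that blocking crawling isn't the same as deindexing: URLs Google already knows about can linger as omitted results. If the goal is removal, a noindex meta tag on the search-results template (served while crawling is still allowed) is the more thorough route.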