Indexing issue or just time?
-
Hey guys,
When I publish a post on our blog, it barely shows up in the SERPs, even when I copy and paste the title verbatim into Google. As far as I can tell, all my settings in Yoast are correct.
Is this just Google slowly getting around to crawling our site? Or is something else wrong here? We recently shut down and relaunched our site about 3 weeks ago.
Here is the site URL: The Tech Block (thetechblock.com)
-
Yep, I also spotted them. And do all the canonical tags point to the right pages?
If everything is in place and configured correctly, then yes, it should just be a matter of time. I would, however, try to speed up the process by doing some link building, for instance via social media.
-
Actually, Steven, I just looked at my source code and I do see the canonicals.
We also have sitemaps that are being indexed by Google, and I can track that in Webmaster Tools. I guess it's just a matter of time?
-
I would recommend reading the Google Help section on this. It's quite complete: http://support.google.com/webmasters/bin/answer.py?hl=en&hlrm=nl&answer=139394.
-
Steven,
Thanks for your help. What is the proper way to set up canonicals?
-
Yes, even with new posts. How are the search engines supposed to find them?
If you don't have the measures I described in place, it will take them longer to find and index the content. The way it is now, you're waiting for the search engines to stumble across the content (new and old); with sitemaps and proper canonicals in place, you're handing it to them.
Good luck!
-
Even with new posts? I can understand old posts taking a while to crawl, but new posts too? If I write something and post it, I'll check hours later and Google still hasn't indexed it. Is this normal?
-
Hi Abdel,
As CleverPhD already pointed out, shutting down and relaunching (with a changed information architecture) can cause search engines to take a while to get back up to speed with indexing your website. It will take some time, and quite a bit of content and links, to speed up that process. In the meantime you can of course help the search engines by:
- Having an HTML sitemap
- Having an XML sitemap (be sure to link the XML sitemap from the HTML sitemap, reference the XML sitemap in robots.txt, and submit the XML sitemap and RSS feed in Google Webmaster Tools)
- Having proper canonical tags and 301 redirects (if possible; see the examples after this list)
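For illustration, this is roughly what the robots.txt reference and a canonical tag look like. The domain and sitemap filename here are just placeholders, not taken from this thread:

In robots.txt:
Sitemap: http://www.example.com/sitemap.xml

In the head of each post, pointing at that post's preferred URL:
<link rel="canonical" href="http://www.example.com/blog/my-post/" />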
It's important to get crawled, but it's also important to let the search engines crawl the right pages. Why are you linking to those tag pages in the bottom right of your website? Perhaps it's better to create category pages for that (better URL structure).
Good luck and keep us posted!
-
You shut down and relaunched your site 3 weeks ago; let's say you also changed your URL structure and title tags, don't have 301 redirects from old to new content, and don't have a sitemap.
All of those things, even if you did them all "correctly", can cause Google to take a while to respider and reindex your pages. Generally speaking, Google ranks "pages" rather than "sites", so that can impact rankings.
Looks like you canonical all the pages to themselves? Example
<link rel="<a class="attribute-value">canonical</a>" href="[http://thetechblock.com/the-ios-interface-concept](view-source:http://thetechblock.com/the-ios-interface-concept)" />
Were you www.thetechblock.com before, and are you now trying to change to the non-www version? If you did change to non-www, Google would see this as a new site, so the rankings would effectively start over.
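If you do want to settle on the non-www version, a site-wide 301 redirect from www to non-www is the usual way to consolidate the two. A minimal sketch for an Apache .htaccess file, assuming the site runs on Apache with mod_rewrite (the thread doesn't say which server is in use):

RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.thetechblock\.com$ [NC]
RewriteRule ^(.*)$ http://thetechblock.com/$1 [R=301,L]

On nginx or another server, the equivalent would be a server-level 301 redirect.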
If you want to look at crawl rate, you should be able to go into Google Webmaster Tools and see how often they are spidering the site. Similarly, you can submit a sitemap and see how many of its pages are indexed.
Related Questions
-
Having issues crawling a website
We tried to use the Screaming Frog tool to crawl this website and get a list of all meta titles from the site; however, it only returned one result - the homepage. We then tried to obtain a list of the site's URLs by creating a sitemap with https://www.xml-sitemaps.com/. Once again, we just got the one result - the homepage. Something seems to be restricting these tools from crawling all pages. If anyone can shed some light on what this could be, we'd be most appreciative.
Intermediate & Advanced SEO | Gavo
-
Best practice to prevent pages from being indexed?
Generally speaking, is it better to use robots.txt or a noindex meta tag to prevent duplicate pages from being indexed?
Intermediate & Advanced SEO | TheaterMania
-
Google Not Indexing XML Sitemap Images
Hi Mozzers,

We are having an issue with our XML sitemap images not being indexed. The site has over 39,000 pages and 17,500 images submitted in GWT. If you take a look at the attached screenshot, 'GWT Images - Not Indexed', you can see that the majority of the pages are being indexed, but none of the images are.

The first thing you should know about the images is that they are hosted on a content delivery network (CDN) rather than on the site itself. However, Google's advice suggests hosting on a CDN is fine - see the second screenshot, 'Google CDN Advice'. That advice says to either (i) ensure the hosting site is verified in GWT or (ii) submit in robots.txt. As we can't verify the hosting site in GWT, we opted to submit via robots.txt.

There are 3 sitemap indexes: 1) http://www.greenplantswap.co.uk/sitemap_index.xml, 2) http://www.greenplantswap.co.uk/sitemap/plant_genera/listings.xml and 3) http://www.greenplantswap.co.uk/sitemap/plant_genera/plants.xml. Each sitemap index is split up into often hundreds or thousands of smaller XML sitemaps. This is necessary due to the size of the site and how we have decided to pull URLs in; done another way, some of the sitemaps would be massive and take upwards of a minute to load.

To give you an idea of what is being submitted to Google in one of the sitemaps, please see view-source:http://www.greenplantswap.co.uk/sitemap/plant_genera/4/listings.xml?page=1.

Originally, the images were SSL, so we reverted to non-SSL URLs as that was an easy change. But over a week later, that seems to have had no impact. The image URLs are ugly, but should this prevent them from being indexed? The strange thing is that a very small number of images have been indexed - see http://goo.gl/P8GMn. I don't know if this is an anomaly or whether it suggests there is no issue with how the images have been set up, and thus there may be another issue.

Sorry for the long message, but I would be extremely grateful for any insight into this. I have tried to offer as much information as I can, however please do let me know if this is not enough. Thank you for taking the time to read and help.

Regards, Mark
Intermediate & Advanced SEO | edlondon
-
Can Google index PDFs with flash?
Does anyone know if Google can index PDFs with Flash embedded? I would assume that the regular Flash recommendations are still valid, even when embedded in another document. I would also assume there is a list of the file types and versions that Google can index with the search appliance, but I was not able to find one. Does anyone have a link or a list?
Intermediate & Advanced SEO | andreas.wpv
-
To index or not to index search pages - (Panda related)
Hi Mozzers,

I have a WordPress site with Relevanssi, the search engine plugin (free version). Questions:

Should I let Google index my site's own search result pages? I am scared the page quality is too thin, and then the Panda bear will get angry.

This plugin (or my previous search engine plugin) created many of these "no-results" URIs: /?s=no-results%3Ano-results%3Ano-results%3Ano-results%3Ano-results%3Ano-results%3Ano-results%3Akids+wall&cat=no-results&pg=6

I have added a robots.txt rule to disallow these pages and did a GWT URL removal request. But links to these pages are still being displayed in Google's SERPs under "repeat the search with the omitted results included". So will this affect me negatively, or are these results harmless?

What exactly is an omitted result? As I understand it, it means Google found a link to a page but can't display it because I block GoogleBot.

Thanx in advance guys.
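(For reference, a robots.txt rule of the kind described might look something like the following; this is only a sketch, since the exact rule used isn't shown in the question:

User-agent: *
Disallow: /*?s=

Google supports the * wildcard here, so a rule like this blocks crawling of the internal search URLs, though URLs Google has already discovered can still appear in the index without a snippet.)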
Intermediate & Advanced SEO | ClassifiedsKing
-
Penguin Update Issues.. What would you recommend?
Hi,

We've been pretty badly hit by this Penguin update. Site traffic is down 40-50%. We suspect it's for a couple of reasons:

1) Google is saying we have duplicate content. E.g. for a given category we will have 4-5 pages of content (products), so it's saying pagenum=2, pagenum=3 etc. are duplicate pages. We've implemented rel=canonical so that pagenum=2 points to the original category, e.g. http://mydomain/widgets.aspx. We've even specified pagenum as a URL parameter that paginates. Google still hasn't picked up these changes. How long does it take? It's been about a week.

2) They're saying we have soft 404 errors. E.g. when we remove a category or product, we point users to a category or "page not found" page. Is it best to block Googlebot from crawling these pages by specifying them in robots.txt? We really don't care about these categories or product pages. How best to handle this?

3) There are some bad directories and crawlers that have crawled our website but have used incorrect links, so we've got about 1,700 "product not found" errors. I'm sure that's taking up a lot of crawling time. So how do we tell Google not to bother with these links coming from specific sources, e.g. ignore all links coming from xxx.com?

Any help will be much appreciated as this is killing our business. Jay
Intermediate & Advanced SEO | ConservationM
-
Page Authority Issue
My home page http://www.musicliveuk.com has a domain authority of 42 and a page authority of 52. However, I have set up other pages on the site to optimise for one keyword per page, as I thought this was best practice. For example, http://www.musicliveuk.com/home/wedding-bands targets 'wedding band', but this page has a page authority of 24, way below my competitors'. Having used the keyword difficulty tool on here, it appears that is why I am struggling to rank highly (number 9). This is the same problem for several of my main keywords. I am building links to this and other pages in order to increase their authority and eventually rank highly, but am I not better off optimising my home page, which already has a good page authority and would probably outrank my competitors? Or am I missing something?
Intermediate & Advanced SEO | SamCUK
-
Diagnosing duplicate content issues
We recently made some updates to our site, one of which involved launching a bunch of new pages. Shortly afterwards we saw a significant drop in organic traffic. Some of the new pages list content similar to what previously existed on our site, but in different orders. So our question is: what's the best way to diagnose whether this was the cause of our ranking drop? My current thought is to block the new directories via robots.txt for a couple of days and see if traffic improves. Is this a good approach? Any other suggestions?
Intermediate & Advanced SEO | jamesti