Help!!! Website won't index after taking it over from another IT Company
-
Hi,
A while back we took over a website that was built in WordPress. We rebuilt it on another platform and switched the servers over whilst retaining the same domain. I had access to the old GA account, but so did the old IT company, so I created a new GA account and used that on the new website's pages. Recently we found the website had been blacklisted (prior to us taking it over), and now, after being crawled a lot, only 2 pages have been indexed over a two-month period. We have submitted a request for reconsideration (to relist the website) but have had no movement.
**Just wondering if having an old, active account that was still linked to their old website would affect our Google listing?**
**Will dropping the old GA tracking code/script into the site and deleting the new account enable Google to index it?**
Also, there is ample content, metadata and descriptions on the site. I welcome any help on this please!
-
Hi there, Paul and Kevin asked a great question. Did you see their responses? If you can give us some more details, we would love to help.
Christy
-
Also interested in what you mean by "blacklisted." Can you provide a URL as well?
-
The presence or absence of a new/old Analytics account associated with a site will have no effect on its indexing, one way or the other, Johann.
What do you mean when you say the old site was "blacklisted"? Do you mean it had a manual penalty applied (as would be shown in Google Webmaster Tools)? Or do you mean the domain had been marked as unsafe due to malware? Or that the domain's email servers were blacklisted?
To me (without knowing the details of the "blacklisting"), if the site is being crawled but not indexed, it sounds like there may be a noindex meta robots tag or header, or some other blocking issue. It's hard to give much more info unless you can give us the site's URL.
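If it helps, here's a minimal Python sketch (standard library only) for checking both of those; the example.com URLs are just placeholders, so swap in a few of your own pages. It fetches each page and reports any X-Robots-Tag header and any robots meta tags, which is usually the quickest way to spot a stray noindex. It's also worth confirming robots.txt isn't blocking the crawl.

```python
# Minimal sketch: check a page for common indexing blockers
# (an X-Robots-Tag header or a robots/googlebot meta tag).
# The example.com URLs below are placeholders: replace them with your own pages.
import re
import urllib.request

def check_indexability(url):
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
        # The X-Robots-Tag HTTP header can carry a noindex directive
        x_robots = resp.headers.get("X-Robots-Tag")

    # Look for <meta name="robots" ...> or <meta name="googlebot" ...> tags in the HTML
    meta_tags = re.findall(
        r'<meta[^>]+name=["\'](?:robots|googlebot)["\'][^>]*>', html, re.IGNORECASE
    )

    print(url)
    print("  X-Robots-Tag header:", x_robots or "none")
    print("  Robots meta tags:", meta_tags or "none")

if __name__ == "__main__":
    for page in ["https://www.example.com/", "https://www.example.com/about/"]:
        check_indexability(page)
```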
Can you fill us in on what you mean by "blacklisted"?
Paul
Related Questions
-
Category URL Pagination where URLs don't change between pages
Hello, I am working on an e-commerce site where there are categories with multiple pages. In order to avoid pagination issues I was thinking of using rel="next" and rel="prev" and canonical tags. I noticed a site where the URL doesn't change between pages, so whether you're on page 1, 2, or 3 of the same category, the URL doesn't change. Would this be a cleaner way of dealing with pagination?
Technical SEO | whiteonlySEO
-
Hreflang Tags - error: 'en' - no return tags
Hello, We have recently implemented hreflang tags to improve the findability of our content in each specific language. However, Webmaster Tools is giving us this error... Does anyone know what it means and how to solve it? Here I attach a screenshot: http://screencast.com/t/a4AsqLNtF6J Thanks for your help!
Technical SEO | Kilgray
-
What's Moz's Strategy behind their blog main categories?
I've only just noticed that the Moz blog categories have been moved into a pull-down menu (see it underneath 'Explore Posts by Category' on any blog page). This means that the whole list of categories under that pull-down is not crawlable by bots, and therefore no link juice flows down onto those category pages. I imagine that the main drive behind that move is to sculpt PageRank so that the business/money pages or areas of the website get greater link equity, rather than wasting it all by passing it down to the many categories. It would be good to hear more from Rand or anyone on his team about how they came to engineer this and why. One of the things I wonder is: with the sheer amount of content that Moz produces, is it possible to contemplate an effective technical architecture such as that? I know they do a great job of interlinking content from one post to another, so one can argue that that kind of supersedes the need for hierarchical PageRank distribution via categories... but I wonder: is it working better this way vs having crawlable blog category links on the blog section? Have they performed tests? Some insights or further info on this from Moz would be very welcome. Thanks in advance,
David
Technical SEO | carralon
-
What should I do with a large number of 'pages not found'?
One of my client sites lists millions of products, and 100s or 1000s are de-listed from their inventory each month and removed from the site (no longer for sale). What is the best way to handle these pages/URLs from an SEO perspective? There is no place to use a 301.
1. Should we implement 404s for each one and put up with the growing number of 'pages not found' shown in Webmaster Tools?
2. Should we add them to the robots.txt file?
3. Should we add 'nofollow' into all these pages?
Or is there a better solution? Would love some help with this!
Technical SEO | CuriousCatDigital
-
Google Still Taking 2 - 3 Days to Index New Posts
My website is fairly new; it was launched about 3.5 months ago, and I've been publishing new content daily (except on weekends) since then. Is there any reason why new posts don't get indexed faster? All of my posts get +1's and they're shared on G+, FB and Twitter. My website's at www.webhostinghero.com
Technical SEO | sbrault74
-
Are these 'not found' errors a concern?
Our webmaster report is showing thousands of 'not found' errors for links that show up in JavaScript code. Is this something we should be concerned about? Especially since there are so many?
Technical SEO | nicole.healthline
-
HTTPS indexed - though a noindex, nofollow tag has been added
Hi, The HTTPS pages of our booking section are being indexed by Google. We added a noindex, nofollow meta tag, but the pages are still being indexed. What can I do to exclude these URLs from the Google index? Thank you very much in advance! Kind regards, Dennis Overbeek, ACSI Publishing | dennis@acsi.eu
Technical SEO | SEO_ACSI
-
Struggling to get my lyrics website fully indexed
Hey guys, I've been a longtime SEOmoz user but am only just getting heavily into SEO now, and this is my first query; apologies if it's simple to answer, but I have been doing my research! My website is http://www.lyricstatus.com - basically it's a lyrics website. Rightly or wrongly, I'm using Google Custom Search Engine on my website for search, as well as jQuery auto-suggest - please ignore the latter for now. My problem is that when I launched the site I had a complex AJAX Browse page, so Google couldn't see static links to all my pages and thus only indexed certain pages that did have static links. This made searches on my site using the Google CSE useless, as very few pages were indexed. I've since dropped the complex AJAX links and replaced them with simple static links. However, that was a few weeks ago and Google still won't fully index my site. Try doing a search for "Justin Timberlake" (don't use the auto-suggest, just click the "Search" button) and it's clear that the site still hasn't been fully indexed! I'm really not sure what else to do, other than wait and hope, which doesn't seem like a very proactive thing to do! My only other suspicion is that Google sees my site as more duplicate content, but surely it must be OK with indexing multiple lyrics sites, since there are plenty of different ones ranking in Google. Any help or advice greatly appreciated, guys!
Technical SEO | SEOed