Crawl at a standstill
-
Hello Moz'ers,
More questions about my Shopify migration... it seems that I'm not getting indexed very quickly (it's been over a month since I completed the migration). I have done the following:
- used an SEO app to find and complete redirects right away (a spot-check sketch follows this list)
- used the same app to straighten out title tags, metas and alt tags
- submitted the sitemap
- re-submitted my main product URLs via Fetch
- checked the Console - no reported blocks or crawl errors
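For anyone wanting to double-check the redirect step above, here is a minimal sketch (Python, using the requests library) for spot-checking that a handful of old URLs 301 to the expected new Shopify URLs. The mapping below is a hypothetical placeholder, not the site's real redirect list.

```python
# Minimal sketch: confirm each old URL answers with a 301 whose Location header
# points at the expected new URL. The paths below are hypothetical placeholders.
import requests

REDIRECT_MAP = {
    "https://www.zeldassong.com/old-product-page": "https://www.zeldassong.com/products/new-product-page",
}

for old_url, expected in REDIRECT_MAP.items():
    # Ask for the redirect itself rather than following it automatically.
    resp = requests.get(old_url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    status_ok = resp.status_code == 301            # permanent redirect passes signals
    target_ok = location.rstrip("/") == expected.rstrip("/")
    print(f"{old_url} -> {resp.status_code} {location} "
          f"{'OK' if status_ok and target_ok else 'CHECK'}")
```

A 301 on each old path, pointing at the matching new URL, is what you want to see.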
I will mention that I had to assign my blog to a sub-domain because Shopify's blog platform is awful. I had a lot of 404s on the blog, but fixed those. The blog was not a big source of traffic (I'm an ecomm business). Also, I didn't have a lot of backlinks, and most of those came along anyway.
I did have a number of 8XX and 9XX errors, but I spoke to Shopify about them and they found no issues. In the meantime, those issues pretty much disappeared in the Moz reporting.
Any pages with duplicate page issues now return a 200 code since I straightened out the title tags.
So what am I missing here?
Thanks in advance,
Sharon
www.zeldassong.com
-
Hi Dan,
Thank you so very much!! I think things have caught up... I chatted with Google a few days ago and they said everything is OK. I am starting to see some keywords surface; it will probably kick in shortly, as I did quite a bit of SEO work on the new site.
I very much appreciate your help!
Best,
Sharon
-
Hi Sharon
Are there specific pages or content you're not seeing indexed that should be?
I checked with a site: search and see that about 282 pages are indexed right now. I crawled the site and got about 578 active URLs; subtracting the /wp-content/ URLs and subpages leaves roughly 380 URLs, which is only about 100 more than the number indexed in Google.
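A rough sketch of the kind of count described above: crawl same-domain links breadth-first, skip /wp-content/ URLs, and report how many unique internal URLs turn up. Purely illustrative - a real crawl needs politeness delays, robots.txt handling, and better URL normalisation - and the seed URL is an assumption.

```python
# Tiny breadth-first crawl that counts unique same-host URLs, excluding /wp-content/.
import re
from collections import deque
from urllib.parse import urljoin, urldefrag, urlparse

import requests

START = "https://www.zeldassong.com/"    # assumption: site root as the seed
HOST = urlparse(START).netloc

seen, queue = {START}, deque([START])
while queue and len(seen) < 1000:        # hard cap to keep the sketch bounded
    url = queue.popleft()
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        continue
    if "text/html" not in resp.headers.get("Content-Type", ""):
        continue
    for href in re.findall(r'href=["\'](.*?)["\']', resp.text):
        link, _ = urldefrag(urljoin(url, href))   # resolve relative links, drop #fragments
        parsed = urlparse(link)
        if parsed.netloc != HOST or "/wp-content/" in parsed.path:
            continue
        if link not in seen:
            seen.add(link)
            queue.append(link)

print(f"Unique internal URLs found: {len(seen)}")
```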
Perhaps things have caught up; let me know if there's a URL you expect to be indexed that isn't.
Thanks!
-Dan
-
Hi Nicolas,
Yes, I did. I chatted with Google twice on Saturday, and they assured me that the site is being crawled - so, I dunno. I will wait a week or so and recheck the Console and Moz stats.
Thank you for taking the time to respond.
Sharon
-
Hello,
Have you submitted a new sitemap to help Google discover the new URLs?
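Resubmission itself happens in Search Console, but it can be worth sanity-checking the sitemap first. A small sketch, assuming the standard Shopify sitemap location (which is usually a sitemap index pointing at child sitemaps): fetch it, list the <loc> entries, and spot-check that a sample of them respond with 200.

```python
# Fetch the sitemap, count its <loc> entries, and spot-check a few of them.
import xml.etree.ElementTree as ET

import requests

SITEMAP_URL = "https://www.zeldassong.com/sitemap.xml"   # assumed standard location
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

resp = requests.get(SITEMAP_URL, timeout=10)
root = ET.fromstring(resp.content)
locs = [el.text for el in root.findall(".//sm:loc", NS)]
print(f"{len(locs)} <loc> entries found")

for url in locs[:10]:                     # spot-check the first few entries
    code = requests.head(url, allow_redirects=True, timeout=10).status_code
    print(code, url)
```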
Related Questions
-
How can search engines crawl my JavaScript-generated web pages?
For example, when I click a link to this movie from the home page, the link sends me to this page: http://www.vudu.mx/movies/#!content/293191/Madagascar-3-Los-Fugitivos-Madagascar-3-Europes-Most-Wanted-Doblada - but in the source code I can't see the meta title and description, and I think the search engines won't see them either. Am I right? I guess only the source code of that "master template" appears, and that is not useful for me. So, my question is: how can I dynamically add this data to every movie's page so that search engines can crawl all the pages? Thank you.
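One common fix, regardless of platform, is to render the title and meta description on the server so they are already in the HTML the crawler downloads, instead of injecting them client-side into the #! template. The sketch below only illustrates that idea, using Flask and a hypothetical movie lookup - it is not vudu.mx's actual stack or data.

```python
# Minimal server-side rendering sketch: the per-movie title and description are
# written into the HTML on the server, so they appear in the downloaded source.
from flask import Flask
from markupsafe import escape

app = Flask(__name__)

# Hypothetical lookup; in practice this would come from the movie database.
MOVIES = {
    "293191": {
        "title": "Madagascar 3: Los Fugitivos (Doblada)",
        "description": "Alex, Marty, Melman y Gloria siguen huyendo por Europa.",
    },
}

@app.route("/movies/<movie_id>")
def movie_page(movie_id):
    movie = MOVIES.get(movie_id)
    if movie is None:
        return "Not found", 404
    return f"""<!doctype html>
<html>
  <head>
    <title>{escape(movie['title'])}</title>
    <meta name="description" content="{escape(movie['description'])}">
  </head>
  <body><h1>{escape(movie['title'])}</h1></body>
</html>"""
```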
Technical SEO | | mobile3600 -
Can anyone help me understand why Google is "Not Selecting" a large number of my webpages when crawling my site?
When looking through my Google Webmaster Tools, I clicked into the advanced settings under Index Status and was surprised to see that Google has marked around 90% of the pages on my site as "Not Selected" when crawling. Please take a look and offer any suggestions. www.luxuryhomehunt.com
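"Not Selected" usually covers URLs that redirect or canonicalise to another URL, or that look substantially similar to pages Google did pick. A quick sketch for auditing a sample of URLs - print each one's status code and rel=canonical target - can reveal the pattern. The sample list is a placeholder, and the regex assumes the conventional rel-before-href attribute order.

```python
# Print status code and rel=canonical target for a sample of URLs.
import re

import requests

SAMPLE = [
    "http://www.luxuryhomehunt.com/",   # placeholder URLs to audit
]

for url in SAMPLE:
    resp = requests.get(url, timeout=10, allow_redirects=False)
    match = re.search(
        r'<link[^>]+rel=["\']canonical["\'][^>]*href=["\'](.*?)["\']',
        resp.text, re.IGNORECASE)
    canonical = match.group(1) if match else "(none)"
    print(resp.status_code, url, "->", canonical)
```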
Technical SEO | | Jdubin0 -
My report only says it crawled 1 page of my site.
My report used to crawl my entire site, which is around 90 pages. Any idea why this would happen? www.treelifedesigns.com
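When a crawl suddenly stops at one page, the usual suspects are a robots.txt rule blocking the crawler or a noindex/nofollow signal on the homepage. A small sketch that checks both; the site URL is taken from the question.

```python
# Check robots.txt, the homepage meta robots tag, and the X-Robots-Tag header.
import re

import requests

SITE = "http://www.treelifedesigns.com"

robots = requests.get(SITE + "/robots.txt", timeout=10)
print("--- robots.txt ---")
print(robots.text if robots.ok else f"(status {robots.status_code})")

home = requests.get(SITE + "/", timeout=10)
meta = re.search(
    r'<meta[^>]+name=["\']robots["\'][^>]*content=["\'](.*?)["\']',
    home.text, re.IGNORECASE)
print("meta robots:", meta.group(1) if meta else "(none)")
print("X-Robots-Tag:", home.headers.get("X-Robots-Tag", "(none)"))
```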
Technical SEO | | nathan.marcarelli0 -
Crawling a subfolder with a dev site
I am trying to set up a campaign where I am crawling a subfolder of our main site where I have a dev version of the new site. However, even though the new site resolves and I have included the full resolving URL, the crawl results come back saying that only one page has been crawled. The site had a protected block on it for a period of time, but this has now been removed. Any ideas? Thanks Nick
Technical SEO | | Total_Displays0 -
Having a massive number of duplicate crawl errors
I'm having over 400 crawl errors for duplicate content that look like this: http://www.mydomain.com/index.php?task=login&prevpage=http%3A%2F%2Fwww.mydomain.com%2Ftag%2Fmahjon http://www.mydomain.com/index.php?task=login&prevpage=http%3A%2F%2Fwww.mydomain.com%2Findex.php%3F etc., etc. So there seems to be something wrong with my login script. Does anyone know how to fix this? Thanks
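Whatever fix you choose (a robots.txt Disallow, a noindex on the login page, or a canonical back to the referring page), it helps to verify the pattern actually matches before deploying it. A sketch using the standard library's robotparser to test a candidate Disallow rule against the offending URLs - the rule below is only a suggestion, not a recommendation for this specific site.

```python
# Test a candidate robots.txt rule in memory against the duplicate login URLs.
from urllib import robotparser

CANDIDATE_ROBOTS = """\
User-agent: *
Disallow: /index.php?task=login
"""

URLS = [
    "http://www.mydomain.com/index.php?task=login&prevpage=http%3A%2F%2Fwww.mydomain.com%2Ftag%2Fmahjon",
    "http://www.mydomain.com/index.php?task=login&prevpage=http%3A%2F%2Fwww.mydomain.com%2Findex.php%3F",
    "http://www.mydomain.com/index.php",   # should NOT be blocked
]

rp = robotparser.RobotFileParser()
rp.parse(CANDIDATE_ROBOTS.splitlines())

for url in URLS:
    verdict = "blocked" if not rp.can_fetch("*", url) else "allowed"
    print(f"{verdict:8} {url}")
```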
Technical SEO | | stanken0 -
Will 301 redirecting a site multiple times still preserve the original site value?
Hi, All! If site www.abc.com was already 301 redirected to site www.def.com, and now the site owner wants to redirect www.def.com to www.ghi.com - is there any concern that it's not going to work, and some of the original linkjuice, rank, trust, etc. is going to vanish? Or as long as the 301s are set up right, should you be able to 301 indefinitely? Does anyone have any experience with actually doing this and seeing good/bad/neutral results? Thanks in advance! -Aviva B
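Whatever the equity question, the chain mechanics are easy to confirm: a small sketch that traces a redirect chain hop by hop, so you can check that every hop in an abc -> def -> ghi style chain really is a 301 and that the chain stays short. The starting URL is a placeholder.

```python
# Follow a redirect chain manually, printing the status code at each hop.
from urllib.parse import urljoin

import requests

url = "http://www.abc.com/"              # placeholder starting URL
for hop in range(10):                    # give up after 10 hops
    resp = requests.get(url, allow_redirects=False, timeout=10)
    print(f"hop {hop}: {resp.status_code} {url}")
    if resp.status_code in (301, 302, 303, 307, 308):
        url = urljoin(url, resp.headers["Location"])
    else:
        break
```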
Technical SEO | | debi_zyx0 -
False Negative Warnings with Crawl Diagnostic Test
Ok... I will try to explain as clearly as possible. This issue concerns close to 5000 'Warnings' from our most recent SEOmoz Pro crawl diagnostic test. The top three warnings have about 6000 instances among them:
1. Duplicate Page Title
2. Duplicate Page Content
3. 302 (Temporary Redirect)
We understand that duplicate titles and content are "no-nos" and have made it a top priority to avoid duplication on any level. Here is where the issue lies: we are using the Volusion eCommerce solution, which has a variety of value-add shopping features such as "Email A Friend" and "Email Me When Back In Stock" on each product page. If one of these options is clicked, you are directed to the appropriate page. Each of these pages has a different URL, with the sole variable being the individual product code, but because this is part of Volusion's ingrained functionality, the META title is the same for every page - it is taken from the title of our store homepage. Example below:
Online Beauty Supply Store | Hair Care Products | Nail Care | Flat Irons http://www.beautystoponline.com/Email_Me_When_Back_In_Stock.asp?ProductCode=AN1PRO7130
Online Beauty Supply Store | Hair Care Products | Nail Care | Flat Irons http://www.beautystoponline.com/Email_Me_When_Back_In_Stock.asp?ProductCode=BI8BIOSI34
The same goes for the duplicate content warnings. If you click on one of these features, it directs you to a page with pretty much the same content except for a different product. Basically, each page has both a duplicate title and duplicate content.
The SEOmoz descriptions are - Duplicate Page Content: content that is identical (or nearly identical) to content on other pages of your site forces your pages to unnecessarily compete with each other for rankings. Duplicate Page Title: you should use unique titles for your different pages to ensure that they describe each page uniquely and don't compete with each other for keyword relevance.
Because I know SEO is not an exact science, the question here is: does Google recognize that, although they are duplicates, they are generated by a feature that makes us even more of a legitimate eCommerce site? Or, going by SEOmoz's description - if duplication is bad only because you don't want your pages competing with each other - should I not worry, because I could not care less whether these pages get traffic? Or does it affect my domain authority as a whole?
Then, as for a solution: I am still trying to work out with Volusion how we can change the META title of these pages. It's highly unlikely, but we'll see. As for the duplicate content, there is no way to change these pages - they are hard coded. So if it is bad (even though it shouldn't be), would it be worth it to disable these features? I hope not. Wouldn't that defeat the purpose of Google trying to provide the most legitimate, value-add sites to searchers?
As for the 302 (Temporary Redirect) warning, this only appears on our shopping cart pages; as with the "Email A Friend" feature, there is a page for every product. For example:
http://www.beautystoponline.com/ShoppingCart.asp?ProductCode=AN1HOM8040
http://www.beautystoponline.com/ShoppingCart.asp?ProductCode=AN1HOM8050
The SEOmoz description is - 302 (Temporary Redirect): using a 302 redirect will cause search engine crawlers to treat the redirect as temporary and not pass any link juice (ranking power). We highly recommend that you replace 302 redirects with 301 redirects.
So, the probable solution: I do have the ability to change to a 301 redirect, but do I want to do this for my shopping cart? Does Google realize the dead end is legitimate? Or does it matter whether link juice is passed through my shopping cart? And again, does it impact my site as a whole?
It is greatly appreciated if anyone could help me out with this stuff 🙂 Thank you
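One practical step while waiting on Volusion: confirm exactly what those feature URLs return. A small sketch that prints the status code and the <title> of each URL quoted above makes it easy to see which pages share a title and which ones answer with a 302.

```python
# Print status code and page title for each feature URL, without following redirects.
import re

import requests

URLS = [
    "http://www.beautystoponline.com/Email_Me_When_Back_In_Stock.asp?ProductCode=AN1PRO7130",
    "http://www.beautystoponline.com/Email_Me_When_Back_In_Stock.asp?ProductCode=BI8BIOSI34",
    "http://www.beautystoponline.com/ShoppingCart.asp?ProductCode=AN1HOM8040",
]

for url in URLS:
    resp = requests.get(url, allow_redirects=False, timeout=10)
    match = re.search(r"<title>(.*?)</title>", resp.text, re.IGNORECASE | re.DOTALL)
    title = match.group(1).strip() if match else "(no title)"
    print(resp.status_code, title, url)
```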
Technical SEO | | anthonyjamesent1 -
Issue with 'Crawl Errors' in Webmaster Tools
Have an issue with a large number of 'Not Found' webpages being listed in Webmaster Tools. In the 'Detected' column, the dates are recent (May 1st - 15th). However, clicking into the 'Linked From' column, all of the link sources are old, many from 2009-10. Furthermore, I have checked a large number of the source pages to double-check that the links don't still exist, and, as I expected, they don't. Firstly, I am concerned that Google thinks there is a vast number of broken links on this site when in fact there is not. Secondly, if the errors do not actually exist (and never actually have), why do they remain listed in Webmaster Tools, which claims they were found again this month?! Thirdly, what's the best and quickest way of getting rid of these errors? Google advises that using the 'URL Removal Tool' will only remove the pages from the Google index, NOT from the crawl errors. The info is that if they keep getting 404 returns, they will automatically be removed. Well, I don't know how many times they need to get that 404 in order to get rid of a URL and link that haven't existed for 18-24 months?! Thanks.
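For the double-checking described above, a small sketch can confirm both halves at once: that the reported URL really returns a 404, and that the page listed under 'Linked From' no longer contains a link to it. The URL pairs below are placeholders for the ones in the report.

```python
# Verify reported 404s and whether the 'Linked From' source still links to them.
import requests

PAIRS = [
    # (reported broken URL, page Webmaster Tools says links to it) - placeholders
    ("http://www.example.com/old-page", "http://www.example.com/some-2009-post"),
]

for broken, source in PAIRS:
    broken_status = requests.get(broken, timeout=10).status_code
    source_html = requests.get(source, timeout=10).text
    still_linked = broken in source_html
    print(f"{broken}: status {broken_status}, "
          f"{'still linked from' if still_linked else 'no longer linked from'} {source}")
```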
Technical SEO | | RiceMedia0