Moving Code for Faster Crawl Through?
-
What are best practices for moving code into other folders to help speed up crawling for bots? We once moved some JavaScript at an SEO's suggestion and the site suddenly looked like crap until we undid the changes. How do you figure out which code should be consolidated? What code do you use to indicate what has been moved, and to where?
-
Yes, I misread your post.
Best,
Christopher
-
But isn't this really for things like Google Analytics code? The code I am talking about is stuff like CSS and JavaScript that could be in external files but currently isn't.
-
Have you considered using Google Tag Manager? It's much cleaner, and the scripts run asynchronously to speed up page load times.
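For the CSS and JavaScript you mention, the usual practice is simpler than it sounds: pull the inline blocks out into external files and reference them once per page. A minimal sketch, with placeholder file names:

```html
<!-- Hypothetical sketch: inline styles and scripts consolidated into
     external files. The file names are placeholders. -->
<link rel="stylesheet" href="/css/site.css">
<script src="/js/site.js" defer></script>
```

No special code is needed to "indicate" what moved or where; bots simply fetch the referenced files, and because external files are cached and shared across pages, every page's HTML gets lighter. The defer attribute keeps the script from blocking rendering and preserves execution order, which guards against the kind of load-order breakage that can make a site "look like crap" after scripts are moved.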
Best,
Christopher
Related Questions
-
Suggested Screaming Frog configuration to mirror default Googlebot crawl?
Hi All, Does anyone have a suggested Screaming Frog (SF) configuration to mirror the default Googlebot crawl? I want to test my site and see if it will return 429 "Too Many Requests" to Google. I have set the User-Agent to Googlebot (Smartphone). Is the default SF Menu > Configuration > Speed setting of Max Threads 5 and Max URL/s 2.0 comparable to Googlebot?

Context: I had tried NetPeak SEO Spider, which did a nice job and had a cool feature that would pause a crawl if it got too many 429s. Long story short, the B2B site threw 429 errors when there should have been no load, on a holiday weekend at 1:00 AM.
Intermediate & Advanced SEO | gravymatt-se
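Not a Screaming Frog setting as such, but one quick way to sanity-check the 429 behavior is a paced loop of requests from the command line. A minimal shell sketch, where urls.txt, the pace, and the (older) Googlebot smartphone UA string are assumptions:

```sh
# Hypothetical sketch: request each URL at roughly 2 req/s with a
# Googlebot smartphone User-Agent and print the status codes, to see
# whether the server starts answering 429. urls.txt is a placeholder
# file with one URL per line; swap in the current Googlebot smartphone
# UA string if this one is outdated.
UA="Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
while read -r url; do
  code=$(curl -s -o /dev/null -w "%{http_code}" -A "$UA" "$url")
  echo "$code $url"
  sleep 0.5
done < urls.txt
```
-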
Site move: Redirecting and indexing dynamic pages
I have an interesting problem and would like to pick someone else's brain. Our business has over 80 different products, each with a dedicated page (specs, gallery, copy etc.) on the main website. The main site itself is used for presentation purposes only and doesn't offer a direct path to purchase. A few years ago, to serve a specific customer segment, we created a site where customers can perform a quick purchase via one of our major strategic partners. Now we are looking to migrate this old legacy service, site and all its pages, under the new umbrella (main domain/CMS).

Problem #1: Redirects / relevancy / SEO equity. Ideally, we could simply perform 1:1 301 redirects from the old legacy product pages to the relevant new site product pages. The problem is that the call to action (buy), some images and, in some cases, parts of the copy must be changed to some degree to accommodate this segment. The second problem is our dev and creative team: there are not enough resources to dedicate to the creation of the new pages so we can perform 1:1 301 redirects. So, the potential decision is to redirect a visitor to a dynamic page URL, where the parent product page is used to apply personalization rules and a new page with dynamic content (buy button, different gallery etc.) is displayed to the user (see attached diagram). If we redirect directly to the parent URL and then apply personalization rules, the URL will stay the same, and this is what we are trying to avoid (we must mention in the URL that the user is on the purchase path; otherwise this redirect, and the page where the user lands, can be seen as deceptive). The dynamic pages will have static URLs and unique page title tags and meta descriptions.

Problem #2: Indexation / canonicalization. The dynamic page is canonicalized to the parent page and has a nearly identical content/look and feel, but both serve a different purpose and we want both indexed in search.

Hope my explanation is clear and someone can chip in. Any input is greatly appreciated! vCm2Dt.jpg
Intermediate & Advanced SEO | bgvsiteadmin
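On Problem #1, a minimal sketch of the redirect layer, assuming Apache; the domain and URL patterns are placeholders for the real legacy and purchase-path structures:

```apache
# Hypothetical .htaccess sketch: 301 each legacy product page to its
# purchase-path counterpart on the main domain. example.com and the
# path patterns are placeholders.
RewriteEngine On
RewriteRule ^products/([a-z0-9-]+)/?$ https://www.example.com/buy/$1 [R=301,L]
```

On Problem #2, keep in mind that canonicalizing the dynamic page to the parent explicitly asks Google to index only the parent, so "canonical to the parent" and "want both indexed" pull in opposite directions; if both pages must rank, each needs a self-referencing canonical and sufficiently distinct content.
-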
Code to Redirect Mobile Subdomain to Desktop Site
Hi Everyone, My client is switching from a mobile subdomain to a responsive site. All URLs are the same on mobile subdomain vs desktop so we just need a wildcard rule to redirect m. to www. Anyone have this wildcard redirect code for an .htaccess file? Much appreciated! Dan
Intermediate & Advanced SEO | kernmedia
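A minimal sketch of that wildcard rule, assuming Apache with mod_rewrite enabled and example.com as a placeholder host:

```apache
# Hypothetical .htaccess sketch: 301 every m.example.com URL to the
# same path on www.example.com. Replace example.com with the real host.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^m\.example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

Because the paths are identical on both hosts, the captured $1 carries each URL across unchanged.
-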
Crawled page count in Search Console
Hi Guys, I'm working on a project (premium-hookahs.nl) where I've stumbled upon a situation I can't address. Attached is a screenshot of the crawled pages in Search Console.

History: Due to technical difficulties, this webshop didn't always noindex filter pages, resulting in thousands of duplicated pages. In reality this webshop has fewer than 1,000 individual pages. At this point we took the following steps to resolve this: noindexed the filter pages, excluded those filter pages in Search Console and robots.txt, and canonicalized the filter pages to the relevant category pages. This however didn't result in Google crawling fewer pages. Although the implementation wasn't always sound (technical problems during updates), I'm sure this setup has been the same for the last two weeks. Personally I expected a drop in crawled pages, but they are still sky high. Can't imagine Google visits this site 40 times a day.

To complicate the situation: we're running an experiment to gain positions on around 250 long-term searches. A few filters will be indexed (size, color, number of hoses and flavors) and three of them can be combined. This results in around 250 extra pages. Meta titles, descriptions, h1s and texts are unique as well.

Questions:
- Excluding pages in robots.txt should result in Google not crawling them, right?
- Is this number of crawled pages normal for a website with around 1,000 unique pages?
- What am I missing? BxlESTT
Intermediate & Advanced SEO | Bob_van_Biezen
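On the first question: robots.txt does stop compliant crawling of matching URLs, but it also stops Google from ever seeing the noindex and canonical on those pages, so URLs blocked after they were discovered can linger in the index. A minimal robots.txt sketch, with a placeholder pattern standing in for the real filter parameters:

```
# Hypothetical robots.txt sketch; "filter" is a placeholder and should
# match the shop's actual filter URL parameters.
User-agent: *
Disallow: /*?filter=
Disallow: /*&filter=
```

If the goal is to get the duplicates out of the index first, it is usually better to let them keep being crawled with noindex (or the canonical) until they drop out, and only block them in robots.txt afterwards.
-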
Mobile Googlebot vs Desktop Googlebot - GWT reports - Crawl errors
Hi Everyone, I have a very specific SEO question. I am doing a site audit, and one of the crawl reports is showing tons of 404s for the "smartphone" bot, with very recent crawl dates. Our website is responsive and we do not have a mobile version of it, so I do not understand why the desktop version of the report has tons of 404s and yet the smartphone one does not. I think I am not understanding something conceptually. I think it has something to do with this little message in the Mobile crawl report: "Errors that occurred only when your site was crawled by Googlebot (errors didn't appear for desktop)." If I understand correctly, the "smartphone" report will only show URLs that are not in the desktop report. Is this correct?
Intermediate & Advanced SEO | Carla_Dawson
-
Will a disclaimer affect Crawling?
Hello everyone! My German users will have to be shown a disclaimer to comply with German law, and my question is the following: will a disclaimer affect crawling? What's the best practice here? Should I take special care with this? What's the best disclaimer technique: a plain HTML page? Something overlapping the site? Thank you all!
Intermediate & Advanced SEO | NelsonF
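One common pattern, sketched here as an assumption rather than legal advice: keep the full page content in the HTML and show the disclaimer as a dismissible overlay, so bots still crawl the content behind it. The ids and copy below are placeholders.

```html
<!-- Hypothetical sketch: disclaimer as a dismissible overlay. The page's
     real content stays in the HTML and remains crawlable underneath. -->
<div id="disclaimer" style="position:fixed; inset:0; background:rgba(0,0,0,.6);">
  <div style="background:#fff; margin:10% auto; max-width:480px; padding:1em;">
    <p>Required disclaimer text goes here.</p>
    <button onclick="document.getElementById('disclaimer').remove()">Accept</button>
  </div>
</div>
```

A separate plain HTML page is also crawlable; the variant to avoid is an interstitial that replaces the content for first-time visitors, since bots may be treated as first-time visitors on every request.
-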
Crawl efficiency - Page indexed after one minute!
Hey Guys, A site has 5+ million pages indexed and 300 new pages a day. I hear a lot that at this level it's all about efficient crawlability. The pages of this site get indexed one minute after they are online.
1) Does this mean that the site is already being crawled efficiently and there is not much else to do about it?
2) By increasing crawl efficiency, should I expect Google to crawl my site less (less bandwidth Google takes from my site for the same amount of crawl) or to crawl my site more often? Thanks
Intermediate & Advanced SEO | Mr.bfz
-
Something weird in my Google Webmaster Tools Crawl Errors...
Hey, I recently (this past May) redesigned my e-commerce site from .asp to .php. I am trying to fix all the old pages with 301 redirects that didn't make it in the switch, but I keep getting weird pages coming up in GWT. I have about 400 pages under crawl errors that look like this: "emailus.php?id=MD908070". I delete them and they come back. My site is http://www.moondoggieinc.com. The id #s are product #s for products that are no longer on the site, but the site is .php now. They also do not show a sitemap they are linked in, or any other page that they are linked from. Are these hurting me, and how do I get rid of them? Thanks! KristyO
Intermediate & Advanced SEO | KristyO
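A minimal .htaccess sketch for this situation, assuming Apache; the patterns are modeled on the example URL above, and the .asp-to-.php mapping is a hypothetical stand-in:

```apache
# Hypothetical sketch: answer the dead email-a-friend URLs with 410 Gone
# so Google stops recrawling them, and 301 any legacy .asp pages that
# have .php successors. All paths and patterns are placeholders.
RewriteEngine On

# emailus.php?id=MD... for discontinued products: 410 Gone
RewriteCond %{QUERY_STRING} ^id=MD [NC]
RewriteRule ^emailus\.php$ - [G,L]

# Example 1:1 redirect for a legacy .asp URL
Redirect 301 /contact.asp /contact.php
```

A 410 is the clearest "gone for good" signal; plain 404s keep getting retried for a long time, which may explain why deleted errors keep coming back in GWT.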