How do I exclude my blog subfolder from the crawl of my main (www.) domain?
-
I am trying to set up two separate campaigns: one for my blog and one for my main site.
While it is easy enough to do through the wizard, the results I am getting for my main site still include pages that are in my blog subfolder.
Please advise!
-
Hi,
I would like to know whether this feature, excluding a subfolder from a campaign, is available now. If so, how do I set it up?
We have a forum in a subfolder, and all the data from it blurs what I am really trying to track with SEOmoz for this website.
-
Hey,
Now that the new Moz Analytics platform is here, has this been implemented yet, or is it still on the roadmap? I would like to exclude a forum that lives in a subfolder; the forum is not our main SEO focus, and its many URLs eat up our Moz crawl budget.
Thanks!
-
Hey Henry,
Thanks for writing in. Unfortunately, there isn't a way to exclude the blog from the crawl for the main site at this time. If you exclude our crawler from those pages, then we wouldn't be able to crawl for the separate blog campaign either. We do plan to allow you to ignore or remove certain pages from your crawl when we switch to the new Moz Analytics platform, but it may be some time before that is available for all of our users. I'm sorry for the inconvenience that causes in the meantime.
Please let me know if you have any other questions.
Chiaryn
Help Team Ninja
-
I was just thinking: you might be able to add a rule to your robots.txt telling the SEOmoz bot not to crawl that folder. I'm not sure whether that's possible, but if you try it, make sure you target only SEOmoz's bot (you don't want to tell Google not to crawl it).
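Something like this might work, though I haven't tested it; /blog/ stands in for your subfolder, and rogerbot is the user-agent SEOmoz's crawler identifies as:

```
# Goes in robots.txt at the site root; a robots.txt inside a subfolder is ignored.
User-agent: rogerbot
Disallow: /blog/

# Leave every other crawler (including Googlebot) unrestricted.
User-agent: *
Disallow:
```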
But that would stop the subfolder from being crawled completely, including for your separate blog campaign, which I don't think you want.
-
Yes, I am referring to the SEOmoz bot, and I understand that Google sees the two subfolders as one domain.
I would like them to be separate just for my own workflow, so I can manage each one individually with no spillover.
But if it can't be done, it can't be done.
Thank you!
-
I'm guessing you mean the SEOmoz bot crawling your site? If so, why do you want to crawl them separately? Google won't; they're on the same domain.
To answer your question, I don't think it can be done.
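If the goal is mainly cleaner reporting, one workaround is to export the crawl results to CSV and filter out the blog URLs yourself. A minimal sketch in Python, assuming the export has a column named "URL" (the file names and the column name here are assumptions; check them against your actual export):

```python
import csv

# Hypothetical file names; point these at your actual Moz crawl export.
INPUT = "crawl_export.csv"
OUTPUT = "crawl_main_site_only.csv"
BLOG_PREFIX = "/blog/"  # the subfolder you want to exclude

with open(INPUT, newline="", encoding="utf-8") as src, \
     open(OUTPUT, "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # Assumes a "URL" column in the export; adjust to match your header.
        if BLOG_PREFIX not in row.get("URL", ""):
            writer.writerow(row)
```

That keeps the full crawl intact for the blog campaign while giving you a main-site view without the spillover.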