Question about construction of our sitemap URL in robots.txt file
-
Hi all,
This is a Webmaster/SEO question. This is the sitemap URL currently in our robots.txt file:
http://www.ccisolutions.com/sitemap.xml
As you can see, it leads to a page with two URLs on it. Is this a problem? Wouldn't it be better to list both of those XML files as separate line items in the robots.txt file?
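Something like this is what I have in mind (the file names here are just placeholders, since I'm not sure what the two XML files are actually called):
  # listing each sitemap file on its own line in robots.txt
  Sitemap: http://www.ccisolutions.com/sitemap-file-1.xml
  Sitemap: http://www.ccisolutions.com/sitemap-file-2.xml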
Thanks!
Dana
-
Hi Jarno,
Thanks so very much! I have to say I am really liking the A1 generator. How awesome of you to follow up. I really appreciate that. Yes, if you want to send me the complete sitemap via PM, that would be awesome. I certainly hope I can return the favor. Happy Holidays!
Dana
-
Yes, we definitely use XENU, but I think I like Screaming Frog a bit better (although our IT Director swears it's broken).
-
Hi Christopher,
Thanks for the update. Yes, I looked at it too, and other than it not being "pretty" XML, the data seemed to be okay. The one thing the A1 generator did that we couldn't do was assign priority and change-frequency values to specific pages. If that data is accurate, that's pretty cool. I'm just not sure, although it does seem to have correctly identified the pages that are modified more frequently. I have 30 days to play with the free trial, but so far I think I like it a lot.
Dana
-
Dana,
It just finished scanning; here are the results:
Internal sitemap URLs:
- Listed found: 5248
- Listed deduced: 5301
- Analyzed content: 3110
- Analyzed references: 3176
External URLs:
- Listed found: 700
When I look at the overview of the results I see a number of 301 redirects and canonical redirects (when tested again they return a 200 OK), but mostly I just see a lot of pages.
When I build the sitemap it generates one file (no idea why not more than one) with all the links in the document. Google's sitemap protocol states it should follow the schema at sitemaps.org, which it does. The sitemaps.org protocol also states that a single sitemap cannot hold more than 50,000 links and should be smaller than 10 MB in file size.
The one I just built for you is only 1 MB and contains fewer than 50,000 URLs, so it is allowed by Google.
http://www.sitemaps.org/protocol.html
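To give you an idea, a single sitemap file that follows that schema looks roughly like this (the URL, date, and values are just placeholders):
  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <!-- one <url> entry per page; loc is required, the rest is optional -->
      <loc>http://www.example.com/some-page/</loc>
      <lastmod>2012-12-01</lastmod>
      <changefreq>weekly</changefreq>
      <priority>0.8</priority>
    </url>
  </urlset>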
I can send you the full version of the sitemap in a personal message or through e-mail if you'd like?
Hope this helps you further.
Kind regards,
Jarno
-
I started the scan and it's still running:
2,500 analyzed references so far.
I'll let you know how it turns out.
Jarno
-
Thanks Jarno. I really appreciate that. Yes, I had it set to scan just for images (as prompted when I attempted to create an image sitemap). Let me know what you see. I am wondering if it is going around in circles?
Dana
-
Dana,
Sometimes that happens. Are you scanning for images, or are you scanning the whole site?
I will check your site tomorrow with my full version and see what it does.
With some websites you'll get results like this, but it can be caused by lots of things. 3,500 pages should not take two hours, only a couple of minutes. I'll check it first thing tomorrow; A1 is not installed on my laptop.
I'll let you know tomorrow.
Kind regards,
Jarno
-
A1 Sitemap does two things:
1) It builds a file named sitemap.xml which contains all files on the website (not conforming to the Google requirements).
2) It builds a number of files listed in a sitemap-index.xml, one sitemap for every 100 pages (see the rough example below). So if your website contains 2,800 pages you'll get loads of files: sitemap-1.xml through sitemap-28.xml, plus one sitemap-index.xml file, which does meet the Google standards. Afterwards you can do two things in Google Webmaster Tools:
- Enter the sitemap-index.xml file as a sitemap -> Google will follow everything and come to the grand total of 2,800 pages.
- Enter each sitemap separately -> same result, but you can pinpoint better where you have 100 pages and Google only indexes fewer (which can happen).
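A sitemap-index.xml built that way looks roughly like this (file names and dates are just examples, not the actual output):
  <?xml version="1.0" encoding="UTF-8"?>
  <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <sitemap>
      <loc>http://www.example.com/sitemap-1.xml</loc>
      <lastmod>2012-12-01</lastmod>
    </sitemap>
    <sitemap>
      <loc>http://www.example.com/sitemap-2.xml</loc>
    </sitemap>
    <!-- ...one <sitemap> entry per file, up to sitemap-28.xml for a 2,800-page site -->
  </sitemapindex>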
Hope this helps
-
Hi again Jarno,
Is it normal for the "Scan website" function of A1's sitemap generator to take over two hours when scanning for images? Our site is about 3,500 URLs. So far, under "Internal 'sitemap' URLs", it shows Listed found: 82,076 (and climbing every few seconds).
I am wondering if there isn't something wrong? (I don't have any frame of reference since I've never used it before). Thanks!
Dana
-
I'm not familiar with the A1 Sitemap generator, but regarding the sitemap protocol, there is a limit on the size of a single sitemap.xml file, so for large sites, the sitemap must be split into multiple sitemap.xml files. And, the protocol has a method for indexing these multiple sitemap.xml files. It's sort of like an index to an index. None of my sites exceed the sitemap file limit, so I don't know which sitemap generators use this approach, but I would guess many of them do.
Sitemap generators I have used include DMXZone which is a Dreamweaver plugin, and xml-sitemaps.com which includes a video sitemap generator.
Best,
Christopher
EDIT: P.S. Your current sitemap looks fine to me.
-
Thanks Christopher,
Your answer took a moment to sink in, but I think I get it (I think I am coffee deprived this morning).
So, if I am using the A1 Sitemap generator that Jarno suggested, this sitemap index should automatically be generated based on the size of my generated sitemap. Is that correct?
-
Thanks Jarno,
I have downloaded and am trying the 30-day free trial of the A1 Sitemap Generator right now. Thanks for the tip. Can you comment on Christopher's remark below concerning sitemap indexes for larger sitemaps?
Can either you or Christopher give me more clarification on that? Is this what our IT director has attempted to do with the sitemap in our robots.txt file? If so, has it been done correctly?
Thanks!
-
There is a limit on the size of a sitemap, and to allow large sitemaps to be split into smaller ones, the sitemap protocol includes a sitemapindex. See "Using Sitemap index files (to group multiple sitemap files)" here: http://www.sitemaps.org/protocol.html. Of course, it's also possible to include the multiple sitemaps in the robots.txt file, but automated sitemap generators will likely use the sitemapindex feature so that the robots.txt file does not have to be modified as the size of the site changes.
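For example, with an index file the robots.txt only ever needs a single reference, along these lines (the file name is just an illustration):
  # one line in robots.txt, pointing at the sitemap index
  Sitemap: http://www.example.com/sitemap-index.xml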
Best,
Christopher
-
Another tool to help generate a sitemap and even check broken links is called Xenu (weird logo, but good free product).
-
Dana,
The structure of your sitemap.xml seems very strange to me. I use an external program to build the sitemap.xml for my entire website.
You now have a link in your robots.txt file pointing to a sitemap which contains 2 files (both .xml), each with a map of the site?
Why not use a program (free, or paid like Microsys A1, the one I use) to build one sitemap.xml and point to this file from your robots.txt?
Hope this helps.
If you do have any questions, please let me know.
Kind regards,
Jarno