Question about construction of our sitemap URL in robots.txt file
-
Hi all,
This is a webmaster/SEO question. Here is the sitemap URL currently in our robots.txt file:
http://www.ccisolutions.com/sitemap.xml
As you can see, it leads to a page listing two URLs. Is this a problem? Wouldn't it be better to list both of those XML files as separate line items in the robots.txt file?
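To illustrate, right now the only sitemap reference in our robots.txt is this single line:

Sitemap: http://www.ccisolutions.com/sitemap.xml

What I'm asking about would look something like this instead (the file names here are just placeholders, since I'm not sure what ours are actually called):

Sitemap: http://www.ccisolutions.com/sitemap-1.xml
Sitemap: http://www.ccisolutions.com/sitemap-2.xml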
Thanks!
Dana
-
Hi Jarno,
Thanks so very much! I have to say I am really liking the A1 generator. How awesome of you to follow up. I really appreciate that. Yes, if you want to send me the complete sitemap via PM, that would be great. I certainly hope I can return the favor. Happy Holidays!
Dana
-
Yes, we definitely use XENU, but I think I like Screaming Frog a bit better (although our IT Director swears it's broken).
-
Hi Christopher,
Thanks for the update. Yes, I looked at it too, and other than it not being "pretty" XML, the data seemed to be okay. The one thing the A1 generator did that we couldn't do was assign values for each page's importance and for how frequently specific pages are modified. If that data is accurate, that's pretty cool. I'm just not sure it is, although it does seem to have correctly identified the pages that are modified more frequently. I have 30 days to play with the free trial, but so far I think I like it a lot.
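If I understand the sitemaps.org protocol correctly, those values presumably end up in the generated file as the standard <changefreq> and <priority> tags. A single entry would look something like this (the URL and values here are made-up examples, not our actual data):

<url>
  <loc>http://www.ccisolutions.com/example-page</loc>
  <lastmod>2012-12-01</lastmod>
  <changefreq>weekly</changefreq>
  <priority>0.8</priority>
</url>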
Dana
-
Dana,
It just finished scanning; here are the results:
Internal sitemap URLs:
- Listed found: 5,248
- Listed deduced: 5,301
- Analyzed content: 3,110
- Analyzed references: 3,176
External URLs:
- Listed found: 700
When I look at the overview of the results, I see a number of 301 redirects and canonical redirects (when tested again, they return a 200 OK), but I also see a lot of pages.
When I build the sitemap it generates one file (no idea why not more than one) with all the links in the document. Google's sitemap guidelines say it should follow the schema at sitemaps.org, which it does. The sitemaps.org protocol also states that a single sitemap cannot hold more than 50,000 links and should be smaller than 10 MB in file size.
The one I just built for you is only 1 MB and contains fewer than 50,000 URLs, so it is allowed by Google.
http://www.sitemaps.org/protocol.html
I can send you the entire sitemap in a personal message or through e-mail if you'd like?
Hope this helps you further.
Kind regards,
Jarno
-
I started the scan and it's still busy:
2,500 analyzed references so far.
I'll let you know how it turns out.
Jarno
-
Thanks, Jarno. I really appreciate that. Yes, I had it set to scan just for images (as prompted when I attempted to create an image sitemap). Let me know what you see; I am wondering if it is going around in circles.
Dana
-
Dana,
Sometimes that happens. Are you scanning for images, or are you scanning the whole site?
I will check your site tomorrow with my full version and see what it does.
You'll sometimes get things like this with certain websites, and it can be caused by loads of things. 3,500 pages should not take 2 hours, only a couple of minutes. I'll check it first thing tomorrow; A1 is not installed on my laptop.
I'll let you know tomorrow.
Kind regards
Jarno
-
A1 Sitemap does 2 things:
1) It builds a single file named sitemap.xml which contains all files on the website (this does not conform to the Google requirements), or
2) It builds a number of files listed in a sitemap-index.xml, with one sitemap per 100 pages. So if your website contains 2,800 pages, you'll get loads of files: 28 sitemaps (sitemap-1.xml, sitemap-2.xml, and so on) plus 1 sitemap-index.xml file, which does meet the Google standards. Afterwards you can do 2 things in Google Webmaster Tools:
- Enter the sitemap-index.xml file as a sitemap -> Google will follow everything and come to the grand total of 2,800 pages, or
- Enter each sitemap separately -> same result, but you can pinpoint better where you have 100 pages and Google indexes fewer (which can happen).
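To give you an idea, a sitemap-index.xml following the sitemaps.org schema looks roughly like this (the domain and dates are placeholders, not your actual files):

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.example.com/sitemap-1.xml</loc>
    <lastmod>2012-12-01</lastmod>
  </sitemap>
  <sitemap>
    <loc>http://www.example.com/sitemap-2.xml</loc>
    <lastmod>2012-12-01</lastmod>
  </sitemap>
  <!-- one <sitemap> entry per file, up to sitemap-28.xml in the 2,800-page example -->
</sitemapindex>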
Hope this helps
-
Hi again Jarno,
Is it normal for A1's sitemap generator's "Scan website" function for images to take over two hours? Our site is about 3,500 URLs. So far, under "Internal 'sitemap' URLs," it shows: Listed found: 82,076 (and climbing every few seconds).
I am wondering if something is wrong? (I don't have any frame of reference, since I've never used it before.) Thanks!
Dana
-
I'm not familiar with the A1 Sitemap generator, but regarding the sitemap protocol, there is a limit on the size of a single sitemap.xml file, so for large sites, the sitemap must be split into multiple sitemap.xml files. And, the protocol has a method for indexing these multiple sitemap.xml files. It's sort of like an index to an index. None of my sites exceed the sitemap file limit, so I don't know which sitemap generators use this approach, but I would guess many of them do.
Sitemap generators I have used include DMXZone which is a Dreamweaver plugin, and xml-sitemaps.com which includes a video sitemap generator.
Best,
Christopher
EDIT: PS: Your current sitemap looks fine to me.
-
Thanks Christopher,
Your answer took a moment to sink in, but I think I get it (I think I am coffee-deprived this morning).
So, if I am using the A1 Sitemap generator that Jarno suggested, this sitemap index should automatically be generated based on the size of my generated sitemap. Is that correct?
-
Thanks Jarno,
I have downloaded and am trying the 30-day free trial of the A1 Sitemap Generator right now. Thanks for the tip. Can you comment on Christopher's remark below concerning sitemap indexes for larger sitemaps?
Can either of you give me more clarification on that? Is this what our IT director has attempted to do with the sitemap referenced in our robots.txt file? If so, has it been done correctly?
Thanks!
-
There is a limit on the size of a sitemap, so to allow large sitemaps to be split into smaller ones, the sitemap protocol includes a sitemapindex. See "Using Sitemap index files (to group multiple sitemap files)" here: http://www.sitemaps.org/protocol.html. Of course, it's also possible to list the multiple sitemaps in the robots.txt file, but automated sitemap generators will likely use the sitemapindex feature so that the robots.txt file does not have to be modified as the size of the site changes.
Best,
Christopher
-
Another tool to help generate a sitemap and even check broken links is called Xenu (weird logo, but good free product).
-
Dana,
The buildup of your sitemap.xml is very strange to me. I use an external program to build the sitemap.xml for my entire website.
You now have a link in your robots.txt file pointing to a sitemap index which contains 2 files (both .xml), each with a map of the site?
Why not use a program (free, or paid like Microsys A1, the one I use) to build 1 sitemap.xml and point to that file from your robots.txt?
Hope this helps.
If you have any questions, please let me know.
Kind regards,
Jarno