Should I add my blog posts to my sitemap.txt file?
-
This seems like an obvious no, if only because of the amount of work it would entail, plus remembering to do it every time I make a post. But since I couldn't find anything on Google about it and have never heard anyone mention it, I figured I'd ask.
-
Is the place where you submit a txt sitemap in Webmaster Tools different from where you'd submit an XML version?
-
I was using an XML sitemap, but it got all screwed up somehow, and I just deleted it and haven't been using one since. This was about a month ago.
I'll try that website, though. Thanks.
But in Google Webmaster Tools, where you can submit a txt sitemap, I'll just leave the blog posts out of it.
-
Are you using an XML sitemap? In my opinion it's much easier to do it that way than to manage text files.
I have my XML sitemaps autogenerated (websites like http://www.xml-sitemaps.com/ work, too), and then point Google Webmaster Tools at those XML files.
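For anyone who does want to roll their own: a text sitemap is just one fully qualified URL per line, while the XML format adds optional metadata like lastmod. As a rough sketch (the URLs are placeholders, not real pages), generating a minimal XML sitemap from a list of page URLs can be done with Python's standard library:

```python
from datetime import date
from xml.etree import ElementTree as ET

def build_sitemap(urls):
    """Build a minimal sitemap.xml string from a list of page URLs."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    today = date.today().isoformat()
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url      # required: the page URL
        ET.SubElement(entry, "lastmod").text = today  # optional metadata
    return ET.tostring(urlset, encoding="unicode")

# Hypothetical URLs for illustration only:
pages = [
    "http://www.example.com/",
    "http://www.example.com/blog/first-post/",
]
print(build_sitemap(pages))
```

Hooking something like this into the blog's publish step is what makes the "remembering to do it every time" problem go away.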
Related Questions
-
Noindex tag in robots.txt
Hi Mozzers, A client's website has a lot of internal directories defined as /node/*. I already added the rule 'Disallow: /node/*' to the robots.txt file to prevent bots from crawling these pages. However, the pages are already indexed and appear in the search results.

In an article by Deepcrawl, they say you can simply add the rule 'Noindex: /node/*' to the robots.txt file, but other sources claim the only way is to add a noindex directive to the meta robots tag of every page. Can someone tell me which is the best way to prevent these pages from getting indexed? Small note: there are more than 100 pages. Thanks!
Technical SEO | WeAreDigital_BE
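For what it's worth, `Noindex:` in robots.txt was never an officially documented directive and support for it has been unreliable, so the safer route is a page-level noindex. Since editing 100+ templates to add `<meta name="robots" content="noindex">` is painful, the same signal can be sent as an HTTP header at the server level. A sketch, assuming an Apache server with mod_headers enabled:

```apache
# Send a noindex header for every /node/ URL (server-config sketch;
# avoids touching individual page templates)
<LocationMatch "^/node/">
    Header set X-Robots-Tag "noindex"
</LocationMatch>
```

One important caveat either way: the pages must stay crawlable for Google to see the noindex, so the 'Disallow: /node/*' rule has to come out of robots.txt until the pages drop from the index.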
Sitemap Rules
Hello there, I have some questions pertaining to sitemaps that I would appreciate some guidance on.

1. Can an XML sitemap contain URLs that are blocked by robots.txt? Logically, it makes sense to me not to include pages blocked by robots.txt, but I would like some clarity on the matter, i.e. will having pages blocked by robots.txt in a sitemap negatively impact the benefit of the sitemap?
2. Can an XML sitemap include URLs from multiple subdomains? For example: http://www.example.com/www-sitemap.xml would include the home page URL of two other subdomains, i.e. http://blog.example.com/ & http://blog2.example.com/

Thanks
Technical SEO | SEONOW123
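On question 2: the sitemap protocol normally requires every URL in a sitemap to live on the same host as the sitemap itself, but cross-host entries are accepted once ownership is established, either by verifying all the subdomains in the same Search Console account or by referencing the sitemap from each subdomain's robots.txt. A sketch (example.com is a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.example.com/</loc></url>
  <!-- Cross-subdomain entries: only honored once ownership of the
       subdomains is verified, or the sitemap is referenced from
       each subdomain's robots.txt -->
  <url><loc>http://blog.example.com/</loc></url>
  <url><loc>http://blog2.example.com/</loc></url>
</urlset>
```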
Robots.txt blocking Addon Domains
I have this site as my primary domain: http://www.libertyresourcedirectory.com/ I don't want to give spiders access to the site at all, so I tried to do a simple Disallow: / in the robots.txt. As a test I tried to crawl it with Screaming Frog afterwards and it didn't do anything. (Excellent.)

However, there's a problem. In GWT, I got an alert that Google couldn't crawl ANY of my sites because of robots.txt issues. Changing the robots.txt on my primary domain changed it for ALL my addon domains. (Ex. http://ethanglover.biz/ ) From a directory point of view, this makes sense; from a spider point of view, it doesn't.

As a solution, I changed the robots.txt file back and added a robots meta tag to the primary domain (noindex, nofollow). But this doesn't seem to be having any effect. As I understand it, the robots.txt takes priority. How can I separate all this out to allow domains to have different rules?

I've tried uploading a separate robots.txt to the addon domain folders, but it's completely ignored. Even going to ethanglover.biz/robots.txt gave me the primary domain version of the file. (SERIOUSLY! I've tested this 100 times in many ways.) Has anyone experienced this? Am I in the twilight zone? Any known fixes? Thanks. Proof I'm not crazy in attached video: robotstxt_addon_domain.mp4
Technical SEO | eglove
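One commonly used workaround, since addon domains typically resolve to subfolders of the primary account's document root: serve a different robots file per hostname with mod_rewrite. A sketch, assuming Apache and a rule in the primary domain's .htaccess (the file name is hypothetical):

```apache
RewriteEngine On
# When the addon domain is the requested host, serve its own robots file
RewriteCond %{HTTP_HOST} ^(www\.)?ethanglover\.biz$ [NC]
RewriteRule ^robots\.txt$ robots-ethanglover.txt [L]
# All other hosts fall through to the normal robots.txt
```

This sidesteps the directory-sharing problem: each hostname gets its own rules even though the files live in one account.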
WordPress Blog Blocked by Meta Robots
Upon receiving my first crawl report from my new pro SEOMoz account (yaay!), I've found that the WordPress blog plugged into my site hasn't been getting crawled because it's blocked by meta robots.

I'm not a developer and have very little tech expertise, but a search dug up that the issue can stem from the WordPress setting Settings > Privacy > "Ask search engines not to index this site" being selected. On checking the blog, "Allow search engines to index this site" was selected, so I'm unsure what else to check.

My level of expertise means I'm not confident going into the back end of the site, and I don't have a tech guy on site to speak to. Has anyone else had this problem? Is it common, and will I need to consult a developer to get this fixed? Many thanks in advance for your help!
Technical SEO | paj1979
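One low-risk check that doesn't require touching the back end: view the source of any blog page in the browser and search the `<head>` for a robots meta tag. When WordPress's privacy setting, an SEO plugin, or the theme is blocking indexing, the page typically contains something like:

```html
<!-- If this appears in the page <head>, something is telling
     crawlers not to index the page -->
<meta name="robots" content="noindex,nofollow">
```

If the tag is present even though the core privacy setting looks correct, an SEO plugin or the theme is usually the source, which narrows down what to ask a developer to fix.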
Invisible robots.txt?
So here's a weird one... A client comes to me for some simple changes, and it turns out there are some major issues with the site, one of which is that none of the correct content pages are showing up in Google, just ancillary (outdated) ones. It looks like an indexing issue because even the main homepage isn't showing up for a "site:domain.com" search.

So, I add the site to Webmaster Tools and, after an hour or so, I get the red bar of doom: "robots.txt is blocking important pages." I check it out in Webmasters and, sure enough, it's a "User agent: * Disallow /". ACK!

But wait... there's no robots.txt to be found on the server. I can go to domain.com/robots.txt and see it, but there's nothing via FTP. I upload a new one and, thankfully, that is now showing, but I've never seen that before.

Question is: can a robots.txt file be stored in a way that can't be seen? Thanks!
Technical SEO | joshcanhelp
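A robots.txt that exists at the URL but not on disk usually means it is being generated dynamically: WordPress, for example, serves a "virtual" robots.txt when no physical file exists, and a rewrite rule can do the same on any server. A hypothetical Apache sketch of how that happens (the script name is a placeholder):

```apache
# If no physical robots.txt exists, hand the request to a script,
# so nothing ever appears on the filesystem via FTP
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^robots\.txt$ /generate-robots.php [L]
```

With a setup like this, uploading a physical file takes precedence (the `!-f` condition stops matching), which is consistent with the behavior described: the uploaded robots.txt started showing immediately.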
Robots.txt file question? Never seen this command before
Hey Everyone! Perhaps someone can help me. I came across this command in the robots.txt file of our Canadian corporate domain. I looked around online but can't seem to find a definitive answer (slightly relevant). The command line is as follows:

Disallow: /*?*

I'm guessing this might have something to do with blocking PHP string searches on the site? It might also have something to do with blocking sub-domains, but the "?" mark puzzles me 😞 Any help would be greatly appreciated! Thanks, Rob
Technical SEO | RobMay
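Your first guess is close: in the wildcard syntax that Google and Bing support, `*` matches any sequence of characters, so this pattern blocks any URL containing a `?`, i.e. any URL with a query string. Sites often use it to keep parameter-driven pages (internal search results, faceted navigation, session IDs) out of the crawl. Annotated:

```text
User-agent: *
# "*" matches anything, so this disallows every URL whose path
# contains a "?" (i.e. any URL with a query string); the trailing
# "*" is redundant but harmless
Disallow: /*?*
```

It has nothing to do with sub-domains; robots.txt rules only apply to paths on the host the file is served from.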
Mobile sitemaps - how much value?
Hi, We have a main www website with a standard sitemap. We also have an m. site for mobile content (the mobile site only contains our top pages and doesn't include the entire site). If a mobile client accesses one of our www pages, we redirect to the m. page. If we don't have an m. version, we keep them on the www site.

Since we already have a www sitemap, is there much value in creating a mobile sitemap? The mobile site (although it doesn't contain all pages) is pretty robust and contains most content people are looking for. Will the mobile sitemap help for mobile searches (more so than our standard sitemap)? I'm also planning on rel canonical-ing the m. pages to the www pages (per other suggestions on SEOMoz). Thanks
Technical SEO | NicB1
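For a separate m-dot smartphone site, Google's recommended setup isn't the old "mobile sitemap" format (which was for feature-phone content) but bidirectional annotations: a rel="alternate" on the www pages pointing to the m. versions, which can go straight into the existing www sitemap, paired with the rel="canonical" from m. back to www that's already planned. A sketch with placeholder URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>http://www.example.com/page-1/</loc>
    <!-- Tells Google this page has a smartphone equivalent on m. -->
    <xhtml:link rel="alternate"
                media="only screen and (max-width: 640px)"
                href="http://m.example.com/page-1/"/>
  </url>
</urlset>
```

Done this way, there's no need for a second sitemap at all; the annotations ride along in the www one.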
Blog on a subdomain vs subfolder?
Hi, Does anyone have data to show that a subfolder is better than a subdomain for a blog? From what I've read, it sounds like both are a viable option but you choose subdomain if you want to build your blog as a distinct entity. Do you get ranked more quickly with a subfolder? Do you see X% more lift? Has anyone tested or seen tests around this subject? Any input is appreciated! Thanks in advance.
Technical SEO | sportstvjobs