Is having a sitemap.xml file still beneficial?
-
Hi,
I'm pretty new to SEO, and one thing I've noticed is that practices become relevant and irrelevant like the weather.
I was just wondering: is having a sitemap.xml file for Google's use still a good idea?
My thinking is that my websites would get crawled faster by having one.
Cheers.
-
It's worth watching the webinar video on sitemaps for this one. It's just a month old, so it's completely up to date.
http://www.seomoz.org/webinars/getting-value-from-xml-sitemaps
Sitemaps aren't just a case of listing every page on your site anymore; they require a bit of thought and attention.
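For reference, a minimal, well-formed sitemap.xml entry looks like the sketch below. example.com and all the tag values are placeholders; lastmod, changefreq, and priority are the optional tags that reward a bit of thought.

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/important-page/</loc>
        <!-- Optional hints: when the page last changed, how often it
             changes, and its priority relative to the rest of the site -->
        <lastmod>2012-01-15</lastmod>
        <changefreq>weekly</changefreq>
        <priority>0.8</priority>
      </url>
    </urlset>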
-
Absolutely relevant; in fact, I'd say essential.
There's a great article already posted on the site here: http://www.seomoz.org/blog/xml-sitemaps-guidelines-on-their-use
Related Questions
-
Should I add my HTML sitemap to robots.txt?
I have already added the .xml version to robots.txt, but should I also add the HTML version?
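For what it's worth, robots.txt's Sitemap directive is meant for XML sitemaps; an HTML sitemap is just an ordinary page and needs no entry. A sketch (example.com is a placeholder):

    # robots.txt: point crawlers at the XML sitemap
    Sitemap: http://www.example.com/sitemap.xml
    # No directive is needed for an HTML sitemap; just link to it so it gets crawled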
Technical SEO | Trazo
-
Protecting sitemaps - Good idea or humbug?
Is there a way to protect your sitemap.xml so that only Google can read it, and would it make sense to do this?
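One rough sketch, assuming Apache with mod_rewrite, and noting that user-agent strings can be spoofed, so this is obscurity rather than real protection:

    # .htaccess sketch: return 403 Forbidden for sitemap.xml unless the
    # user-agent claims to be Googlebot
    RewriteEngine On
    RewriteCond %{HTTP_USER_AGENT} !Googlebot [NC]
    RewriteRule ^sitemap\.xml$ - [F]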
Technical SEO | Roverandom
-
Once on HTTPS, should Moz still be picking up errors on HTTP?
Hello, should Moz still be picking up HTTP errors if the site's on HTTPS? Or has the HTTPS migration not been done properly? I'm getting duplicate errors, among other things. Cheers, Ruth
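If the duplicate errors are http/https pairs, a common culprit is a missing sitewide redirect. A hedged .htaccess sketch, assuming Apache with mod_rewrite:

    # Redirect all HTTP requests to their HTTPS equivalents with a 301
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]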
Technical SEO | Ruth-birdcage
-
Is "commented out" text still read by the SEs?
A site I reviewed was showing up in Google rankings for key phrases specific to a city, however the page that was showing up had the 'city' key phrases commented out. Does Google still read and utilized commented out text? Or is it more likely that the page in question got indexed before the key phrases were commented out and it's just still appearing for the related search queries?
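For clarity, commented-out text sits inside HTML comment tags and should carry no weight once Google recrawls the page. A sketch (the keyphrase is a placeholder):

    <!-- Best plumbers in Springfield: this text is commented out,
         so it renders nowhere and should not be indexed as page content -->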
Technical SEO | MLTGroup
-
Missing pages in the Google and Bing indexes
We uploaded our sitemap a while back, and we no longer see around 8 of our 33 pages. We tried submitting the sitemap again about 1-2 weeks ago, but no additional pages show up when I use the site: operator in either search engine. I reviewed the sitemap and it includes all the pages, and I'm not seeing any errors in SEOmoz for these pages. Any ideas what I should try?
Technical SEO | EZSchoolApps
-
Kill your .htaccess file, take the risk to learn a little
Last week I was browsing Google's index with "site:www.mydomain.com" and wanted to scan over what Google had indexed for my site. I came across a URL that was mistakenly indexed. It went something like this: www.mydomain.com/link1/link2/link1/link4/link3

I didn't understand why Google had indexed a page of mine like that, when the "link" pages were links on my main bar, which were site-wide links. It seemed to be looping infinitely over and over. So I started trying to see how many of these Google had indexed, and I came across about 20 pages. I went through the process of removing the URLs in Webmaster Tools, but then I wanted to know why it was happening.

I discovered that I had mistakenly placed some links in my site's header in a manner like this: <a href="link1/">, <a href="link2/">, <a href="link3/">. If you know HTML, you will realize that by not placing a "/" at the front of the link, I was telling the browser to append that link to the URL it was currently on. What this did was create an infinite loop of links, which is not good 🙂

Basically, when Google went to www.mydomain.com/link1/ it found the other links, which then told Google to append that link to the existing URL and then go there. Something like: www.mydomain.com/link1/link2/... When you do not add the "/" in front of the directory you are linking to, it will do this. The "/" refers to the root, so if you place it in front of the directory you are linking to, the link will always resolve from the root, with the rest of the URL following.

So what did I do? Even though I was able to find about 20 URLs using the "site:" search method, there had to be more out there. I tried to search but was not able to find any more; still, I was not convinced. The light bulb went on at this point.

My .htaccess file contained many 301 redirects from my attempt to redirect those pages to real pages, though there were no truly relevant pages to redirect to. So how could I really find out what Google had indexed, since Webmaster Tools only reports the top 1,000 links? I decided to kill my .htaccess file. Knowing that Google is "forgiving" when major changes happen to your site, I knew Google would not simply kill my site for removing my .htaccess file.

I waited 3 days, then BOOM! Webmaster Tools was reporting a ton of 404s on my site. I looked at the Crawl Errors, and there they were: all those infinite-loop links I knew had to be out there. How many were there? Google found over 5,000 of them in the first crawl. OMG! Can you imagine the "low quality" score I was getting on those pages?

By seeing all those links I was able to determine about 4 patterns in them. For example:

www.mydomain.com/link1/link2/
www.mydomain.com/link1/link3/
www.mydomain.com/link1/link4/
www.mydomain.com/link1/link5/

Now, I wanted to keep all the URLs pointing to www.mydomain.com/link1, but anything after that needed to go. I went into my robots.txt file and added this (note that Disallow takes a path from the root, not a full URL):

Disallow: /link1/link2/
Disallow: /link1/link3/
Disallow: /link1/link4/
Disallow: /link1/link5/

There were many more indexed pages that went deeper into those links, but I wanted anything after the second directory gone, since that was the start of the loop I had detected. With that I was able to block, from what I know, at least 5k links, if not more.

What did I learn from this?
Kill your .htaccess file for a few days and see what comes back in your reports. You might learn something 🙂 After doing this I simply put my .htaccess file back, and I am on my way to removing a ton of "low quality" links I didn't even know I had.
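To make the root-relative fix concrete, here is a minimal sketch (the linkN names are the poster's placeholders):

    <!-- Relative href: the browser resolves it against the current directory.
         On www.mydomain.com/link1/ this points to www.mydomain.com/link1/link2/ -->
    <a href="link2/">Link 2</a>

    <!-- Root-relative href: the leading "/" resolves from the domain root,
         so it always points to www.mydomain.com/link2/ and the loop disappears -->
    <a href="/link2/">Link 2</a>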
Technical SEO | cbielich
-
How to create a delayed 301 redirect that still passes juice?
My company is merging one of our sites into another. At first I was just going to create a 301 redirect from domainA.com to domainB.com, but we decided that would be too confusing for customers expecting to see domainA.com, so we want to create a page that says something like "We've moved. Please visit domainB.com or be redirected after 10 seconds." My question is: how do I create a redirect that has a delay, and will it still pass the same amount of juice that a regular 301 redirect would? I've heard that meta refreshes are considered spammy by Google.
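A hedged sketch of the interstitial approach (domainB.com stands in for the real target; note that a delayed meta refresh is not treated like a 301 and passes less value, if any):

    <!-- 10-second delayed meta refresh on the old page -->
    <meta http-equiv="refresh" content="10; url=http://www.domainB.com/">
    <!-- A cross-domain rel=canonical may help consolidate signals to the new page -->
    <link rel="canonical" href="http://www.domainB.com/">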
Technical SEO | bewoldt
-
How do I organize an XML sitemap for Google Webmaster Tools?
OK, so I used an XML sitemap generator tool, xml-sitemaps.com, for Google Webmaster Tools submission. The problem is that the priorities are all out of whack. How on earth do I organize it with thousands of pages? Should I be spending hours organizing it?
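Rather than hand-tuning thousands of URLs, one common sketch is a handful of priority tiers applied by page type (values and URLs are placeholders):

    <!-- Tiered priorities: homepage highest, section pages next, deep pages lowest -->
    <url><loc>http://www.example.com/</loc><priority>1.0</priority></url>
    <url><loc>http://www.example.com/category/</loc><priority>0.8</priority></url>
    <url><loc>http://www.example.com/category/page.html</loc><priority>0.5</priority></url>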
Technical SEO | schmeetz