Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Is "last modified" time in XML Sitemaps important?
-
My tech lead is concerned that the script he uses to generate XML sitemaps for some client sites may be causing problems for those sites.
His concern centers on the fact that the script generates a sitemap indicating that every URL in the site was last modified at the exact same date and time. I have never heard anything to indicate that this might be a problem, but I do know that the tool I use to generate sitemaps for other client sites lets me choose whether or not to take the last-modified time from the server response.
What is the best way to generate the sitemap: lastmod taken from the actual time each page was modified, or all URLs set to one date and time?
-
Glad to be of help, Sha.
-
Thanks Alan,
I will continue to use the server response setting when generating other sitemaps, and will recommend that our techs ditch the home-grown script that assigns a single date and time in the future.
I must say also, it is great to have such clear and reliable advice - very glad to have you around!
Thanks again.
-
Sitemap.xml files are one of many "hints" search engines use to evaluate and classify the relevance, importance and freshness of individual pages, and in turn, an entire site.
When the entire file flags every page with the same date/time, it can have a negative impact purely as a single-point signal. If the actual pages carry different date/time stamps at the HTML code level, those would contradict what the sitemap.xml file reports, and the engines would either resolve the conflict or simply be confused by it.
Any time search engines have a conflict that needs to be resolved, the potential for less than maximum value exists.
Because of these combined potential problems, SEO best practices dictate that this issue be resolved, so as to ensure it does not, in fact, lead to problems, however minor they might be on a per-page basis. If resolving the issue would take an extensive amount of time, it is worth evaluating how important the issue is to overall SEO; at a certain point, you cross into the realm of diminishing returns.
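For what it's worth, here is a minimal sketch of the kind of generator being discussed, assuming a static site whose pages live on disk, so each URL's lastmod can be taken from that file's actual modification time. The site root, domain, and file layout below are hypothetical placeholders, not anyone's actual setup:

```python
import os
from datetime import datetime, timezone

# Hypothetical values for illustration only.
SITE_ROOT = "/var/www/example"
BASE_URL = "https://www.example.com"

def last_modified(path):
    """Return the file's actual modification time in W3C datetime format."""
    mtime = os.path.getmtime(path)
    return datetime.fromtimestamp(mtime, tz=timezone.utc).strftime(
        "%Y-%m-%dT%H:%M:%S+00:00"
    )

def build_sitemap():
    entries = []
    for dirpath, _, filenames in os.walk(SITE_ROOT):
        for name in sorted(filenames):
            if not name.endswith(".html"):
                continue
            path = os.path.join(dirpath, name)
            url = BASE_URL + "/" + os.path.relpath(path, SITE_ROOT).replace(os.sep, "/")
            # Each <lastmod> reflects that page's own modification time,
            # not a single run-time stamp shared by every URL in the file.
            entries.append(
                "  <url>\n"
                f"    <loc>{url}</loc>\n"
                f"    <lastmod>{last_modified(path)}</lastmod>\n"
                "  </url>"
            )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        + "\n".join(entries)
        + "\n</urlset>"
    )

if __name__ == "__main__":
    print(build_sitemap())
```

On a CMS-driven site, the equivalent would be to pull each page's last-edited timestamp from the database rather than the filesystem; the point is simply that the value should be per-page, not one shared run-time stamp.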
Related Questions
-
.xml sitemap showing in SERP
Our sitemap is showing in Google's SERP. It only appears for very specific queries that don't seem to have much value (it's a healthcare website, and when a doctor who isn't with us is searched together with the brand name, e.g. 'John Smith Brand', the sitemap shows up if a first or last name in it matches the query). Still, is there a way to keep the sitemap from being indexed so it stops showing in the SERP? I've seen the "X-Robots-Tag: noindex" header suggested as a possible option, but before taking any action I wanted to check whether this is still valid and whether it would work.
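For illustration, a minimal sketch of what adding that header could look like if the sitemap were served through a small Python/Flask app (the route and file path are hypothetical; on Apache, a `Header set X-Robots-Tag "noindex"` directive scoped to the sitemap file achieves the same):

```python
from flask import Flask, Response

app = Flask(__name__)

@app.route("/sitemap.xml")
def sitemap():
    # Hypothetical on-disk location of the generated sitemap.
    with open("static/sitemap.xml") as f:
        body = f.read()
    resp = Response(body, mimetype="application/xml")
    # X-Robots-Tag keeps the sitemap file itself out of the index
    # while still letting crawlers fetch and read the URLs inside it.
    resp.headers["X-Robots-Tag"] = "noindex"
    return resp
```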
Technical SEO | | Kyleroe950 -
"Noindex, follow" for thin pages?
Hey there Mozzers, I have a question regarding thin pages. Unfortunately, we have thin pages, almost empty to be honest. My idea is to ask the dev team to apply "noindex, follow" to these pages. What do you think? Has anyone faced this situation before? I'd appreciate your input!
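For reference, the directive being discussed is the standard robots meta tag, `<meta name="robots" content="noindex, follow">`, placed in the `<head>` of each thin page: it asks engines to drop the page from the index while still following (and passing signals through) its links.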
Technical SEO | | Europarl_SEO_Team0 -
Quick Fix to "Duplicate page without canonical tag"?
When we pull up Google Search Console, in the Index Coverage section, under the Excluded category, there is a sub-category called 'Duplicate page without canonical tag'. The majority of the 665 pages in that section are from a test environment. If we were to include in the robots.txt file a wildcard rule covering every URL that starts with the particular root URL ("www.domain.com/host/"), could we eliminate the majority of these errors? That solution is not one of the 5 or 6 recommended solutions the Google Search Console Help text suggests, yet it seems like a simple, effective solution. Are we missing something?
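For reference, assuming the test pages live under a `/host/` path on the same hostname, the rule would simply be `User-agent: *` followed by `Disallow: /host/` in robots.txt. One caveat: robots.txt blocks crawling rather than removing pages already known to Google, so existing report entries may take time to drop out.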
Technical SEO | | CREW-MARKETING1 -
Exclude Child URLs from XML Sitemap Generator (Wordpress)
Hi all, I was recommended the XML Sitemap Generator for Wordpress plugin by the very helpful Keith Bloemendaal and John Pring - however, I can't seem to exclude child URLs. There is a section "Exclude items" with a subsection "Exclude posts". I tried inputting the URLs for the pages I don't want in the sitemap, but that didn't work. Then I read that you have to enter a list of "IDs" - I'm not sure where on earth to find that info; I tried the page name and the post= number from the URL, but neither worked. I hope somebody can point me in the right direction - and apologies, I am a WordPress novice, and I got no answers from the WordPress forums, so I turned right back to SEOmoz! Cheers.
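In case it helps: WordPress exposes each page's numeric ID in the admin edit-screen URL (the number after `post=` when editing the page), and the bare number without the `post=` prefix is typically what such ID fields expect. If you have shell access, the WP-CLI command `wp post list --post_type=page --fields=ID,post_title` will list page IDs in one go; whether the plugin wants them comma-separated is an assumption to verify against its documentation.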
Technical SEO | | markadoi840 -
Use webmaster tools "change of address" when doing rel=canonical
We are doing a "soft migration" of a website (actually, it is a merger of two websites). We are using cross-site rel=canonical tags instead of 301s for the first 60-90 days, applied on a page-by-page basis across the entire site. Google states that a "change of address" should be submitted in Webmaster Tools for a site migration done with 301s. Should this also be done when we are doing this soft move?
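For reference, a cross-site canonical of this kind is just the standard tag pointing at the other domain, e.g. `<link rel="canonical" href="https://new-site.example/page/">` (hypothetical URL) in the `<head>` of each page on the old site.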
Technical SEO | | EugeneF0 -
How valuable is content "hidden" behind a JavaScript dropdown really?
I've come across a method implemented by some SEO agencies to fill up pages with somewhat relevant text and hide it behind a JavaScript dropdown. Does Google fall for such cheap tricks? You can see this method used on these pages, for example (just scroll down to the bottom) - it's all in German, but you get the idea, I guess: http://www.insider-boersenbrief.de/ http://www.deko-und-kerzenshop.de/ What is your experience with this way of adding content to a site? Do you think it is valuable, or will it get penalised?
Technical SEO | | jfkorn0 -
Why the crawl error "title missing or empty" when title and meta description are already in place?
I've been getting 73 "title missing or empty" warnings from the SEOmoz crawl diagnostics. This is weird, as I've installed the Yoast WordPress SEO plugin and all posts do have a title and meta description. So why these results? Can anyone explain what's happening? Thanks!! Here are some of the links listed with "title missing or empty" - almost all our blog posts are there. http://www.gan4hire.com/blog/2011/are-you-here-for-good/ http://www.gan4hire.com/blog/2011/are-you-socially-awkward/
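One quick sanity check for anyone hitting this: fetch a flagged URL from the command line, e.g. `curl -s http://www.gan4hire.com/blog/2011/are-you-here-for-good/ | grep -i "<title"`, to confirm whether the title is actually present in the raw HTML a crawler receives, or is being injected later by a plugin or JavaScript.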
Technical SEO | | JasonDGreat0 -
What is best practice for redirecting "secondary" domain names?
For sites with multiple top-level domains that have been secured for a business or organization, I'm curious what is considered best practice for setting up 301 redirects for secondary domains. Is it best to do the 301 redirects at the registrar level or the hosting level, so that .net, .biz, or other secondary domains funnel visitors to the correct primary domain name? I'm looking for the "best practice" answer and want to avoid duplicate content problems or penalties from the search engines. I'm not trying to game the system with dozens of domain names - just the handful of domains that are important to the client. I've seen some registrars recommend hosting secondary domains and doing the redirects at the hosting level (their own "domain forwarding" uses a meta refresh, which I want to avoid). It seems rather wasteful to set up hosting for a secondary domain just to 301 each URL.
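For reference, a host-level catch-all that avoids per-URL work is a single rule in the secondary domain's server config, e.g. Apache's `Redirect 301 / https://www.primary-domain.example/` (hypothetical domain), which 301-redirects every URL on the secondary domain to the matching path on the primary one.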
Technical SEO | | Scott-Thomas0