Old date showing in SERP
-
I see an old date (Jan 21, 2013) showing up for some categories in Google search results. These are category pages, and I don't see the date in the page source. This is not a WordPress site or a blog page. We keep changing these pages by adding and removing items, so they are not outdated.
-
Do you have this on any other pages? If so, is there any similarity between them? Are you using anything that auto-generates dates anywhere?
-
I'm actually kind of stumped. For whatever reason, Google is ignoring the sitemap date. Here's what I would do:
1. Even though the sitemap is valid, it's still unclear whether Google is actually reading it. The only way to know for sure is to check the Sitemaps report in Google Search Console and verify indexation: https://www.google.com/webmasters/tools/sitemap-list
2. You could try putting a date on the page itself, something like "Last updated" at the bottom of the page.
3. A long shot, but you could add the lastReviewed Schema.org markup to the page and see whether Google honors it.
If you try any of these, let us know if any of them worked!
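If the page does end up carrying both a visible date and markup (suggestions 2 and 3 above), the two could be rendered from the same value. A minimal sketch of a JSON-LD block carrying schema.org's dateModified and lastReviewed properties; the page name, date, and the webpage_jsonld helper are placeholders, and whether Google honors lastReviewed is, as noted, a long shot:

```python
import json

def webpage_jsonld(name: str, last_reviewed: str) -> str:
    """Render a JSON-LD <script> block marking a WebPage with
    schema.org dateModified / lastReviewed dates (W3C format)."""
    data = {
        "@context": "https://schema.org",
        "@type": "WebPage",
        "name": name,
        "dateModified": last_reviewed,
        "lastReviewed": last_reviewed,
    }
    return '<script type="application/ld+json">%s</script>' % json.dumps(data)

# Placeholder values; the real page name and date would come from the CMS.
block = webpage_jsonld("Helmets category", "2016-03-01")
```

The block would go in the page head, next to the visible "Last updated" text, so the two dates can never drift apart.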
-
Any tips? I manually submitted the page to Google, but it still shows the old date.
-
How odd. I'm not sure of the answer, but before we go any further I was hoping you could verify a couple of things:
1. In Google Search Console, can you verify that your sitemaps are submitted and that Google is reading and indexing them? Since you have a lastmod date in your sitemap, I would expect it to signal to Google that the page is up to date.
2. When looking at the Google cache of your page, it doesn't look like all the resources are loading: http://webcache.googleusercontent.com/search?q=cache:example.com
Based on this, if you perform a Fetch and Render in Search Console, does it show that you are blocking any resources?
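For reference, the "last mod" signal mentioned in point 1 lives in the sitemap file itself. A minimal sketch of rendering one sitemap url entry with a W3C-format lastmod date; the URL and date are made up, and sitemap_entry is a hypothetical helper:

```python
from datetime import date
from xml.etree import ElementTree as ET

def sitemap_entry(loc: str, last_modified: date) -> str:
    """Render one sitemap <url> entry with a <lastmod> date in
    W3C (YYYY-MM-DD) format, as the sitemaps.org protocol expects."""
    url = ET.Element("url")
    ET.SubElement(url, "loc").text = loc
    ET.SubElement(url, "lastmod").text = last_modified.isoformat()
    return ET.tostring(url, encoding="unicode")

# Placeholder URL and date; one such entry per page inside the <urlset>.
entry = sitemap_entry("https://example.com/category/helmets", date(2016, 3, 1))
```

Updating lastmod every time items are added or removed from the category page is the cleanest way to keep that signal fresh.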
-
Please search for "gmax helmets" in Google; our result shows up with the date Jan 21, 2013.
-
Are you sure the date isn't being picked up from some additional detail somewhere on the page itself? Or are you able to share the URL?
-
Hi there.
Go to Google Webmaster Tools / Search Console and request a manual recrawl of that page.
Related Questions
-
Why are my backlinks not showing in Webmaster Tools?
Hi experts, I have had follow backlinks from a domain for 6 months, but they do not appear in the "Links to Your Site" report in Search Console. That domain has 302K pages indexed in Google. Could you please explain why Google is not showing these backlinks?
Technical SEO | denakalami7
SERPs showing hundreds of nonexistent pages
My site has this page, which actually exists: https://mysite.com/ranking/rankings-of-nike-running-shoes
Somehow, though, I can see the following versions in SERPs (these URLs are not physically available on my site):
1. https://mysite.com/ranking/rankings-of-nike-running-shoes (the only one that actually exists)
2. https://mysite.com/ranking/rankings-of-rankings-of-nike-running-shoes
3. https://mysite.com/ranking/rankings-of-rankings-of-rankings-of-nike-running-shoes
I have even started getting traffic from these URLs, which are in Google's index. My questions are:
a. I'm not sure how the latter two, 2) and 3), got into Google's index.
b. What do I do with them? My idea is to 301 redirect them to the actual page that exists. Do I also need to somehow remove the faulty URLs from Google's index, or is a 301 redirect enough?
Technical SEO | webdesyaug17
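Since the phantom URLs appear to differ from the real one only by repeated "rankings-of-" segments, the 301 targets can be computed rather than listed one by one. A minimal sketch under that assumption; canonical_path is a hypothetical helper, and the web server would issue the 301 to the collapsed path:

```python
import re

def canonical_path(path: str) -> str:
    """Collapse any run of repeated 'rankings-of-' segments down to a
    single one, so every duplicated variant maps to the real page."""
    return re.sub(r"(rankings-of-)+", r"\1", path)

# Both phantom variants collapse to the one URL that actually exists.
target = canonical_path("/ranking/rankings-of-rankings-of-rankings-of-nike-running-shoes")
```

A rule like this catches any depth of duplication, including variants Google has not surfaced yet, and a 301 to the collapsed path is generally enough; explicit index removal is optional.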
Hundreds of 404 errors are showing up for pages that never existed
For our site, Google is suddenly reporting hundreds of 404 errors, but the pages it reports never existed. The links Google shows are clearly spam-style, but the website hasn't been hacked. This happened a few weeks ago, and after a couple of days the errors disappeared from WMT. What's the deal?
Technical SEO | MichaelGregory
302 redirect used, submit old sitemap?
The website of a partner of mine was recently migrated to a new platform. Even though the content on the pages mostly stayed the same, both the HTML source (divs, meta data, headers, etc.) and the URLs (removed index.php, removed capitalization, etc.) changed heavily.
Unfortunately, the URLs of ALL forum posts (150K+) were redirected using a 302 redirect, which was only recently discovered and swiftly changed to a 301. Several other important content pages (150+) weren't redirected at all at first, but most now have a 301 redirect as well. The 302 redirects and 404 content pages had been live for over 2 weeks at that point, and judging by the consistent day-over-day drop in organic traffic, I'm guessing Google didn't like the way this migration went.
My best guess is that Google is currently treating all these content pages as 'new' (after all, the source code changed 50%+, most of the meta data changed, the URL changed, and a 302 redirect was used). On top of that, the large number of 404s encountered (40K+) probably also fueled the belief that the site is no longer worthy of traffic. Given that some of these pages had been online for almost a decade, I would love Google to see that these pages are actually new versions of the old pages, and therefore pass on any link juice and authority.
I had the idea of submitting a sitemap containing the most important URLs of the old website (harvested from the Top Visited Pages in Google Analytics, because no old sitemap was ever generated), thereby re-pointing Google to all these old pages, but presenting it with a nice 301 redirect this time, hopefully causing them to regain their rankings. To your best knowledge, would that help the problems I've outlined above? Could it hurt? Any other tips are welcome as well.
Technical SEO | Theo-NL
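For what it's worth, the URL changes described above (removed index.php, removed capitalization) are mechanical enough that the old-to-new mapping could be scripted while assembling that sitemap, as a sanity check that every old URL 301s to the right place. A minimal sketch assuming those were the only two transformations; old_to_new is a hypothetical helper, and the real migration may have changed more:

```python
from urllib.parse import urlsplit, urlunsplit

def old_to_new(url: str) -> str:
    """Map a pre-migration URL onto its expected post-migration form:
    drop the index.php segment and lowercase the path."""
    parts = urlsplit(url)
    path = parts.path.replace("/index.php", "") or "/"
    return urlunsplit((parts.scheme, parts.netloc, path.lower(),
                       parts.query, parts.fragment))

# Placeholder URL illustrating both transformations at once.
new_url = old_to_new("http://example.com/Forum/index.php")
```

Running each harvested GA URL through such a mapping and comparing against the live redirect target would quickly surface any old URLs still returning 404 or 302.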
Https pages still in the SERPs
Hi all, my problem is the following: our CMS (self-developed) produces https versions of our "normal" web pages, which means duplicate content. Our IT department put noindex,nofollow on the https pages about six weeks ago. I check the number of indexed pages once a week and still see a lot of these https pages in the Google index. I know that I may hit different data centers and that these numbers aren't 100% valid, but still... sometimes the number of indexed https pages even moves up. Any ideas or suggestions? Wait longer? Or take the time and go to Webmaster Tools to kick them out of the index? Another question: for a nice query, one https page ranks No. 1. If I kick that page out of the index, do you think the http page will take over the No. 1 position, or will the ranking be lost? (It sends some nice traffic.) Thanks in advance 😉
Technical SEO | accessKellyOCG
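A side note on the setup described above: instead of noindex,nofollow, each https duplicate could declare its http original as canonical, which consolidates the two copies (and their signals) rather than dropping one. A minimal sketch, assuming the duplicate and original differ only by scheme; canonical_tag is a hypothetical helper:

```python
def canonical_tag(https_url: str) -> str:
    """Emit a rel=canonical link pointing an https duplicate back at
    its http original; assumes the URLs differ only by scheme."""
    http_url = "http://" + https_url.removeprefix("https://")
    return '<link rel="canonical" href="%s">' % http_url

# Placeholder URL; the tag belongs in the <head> of the https page.
tag = canonical_tag("https://example.com/page")
```

With a canonical in place, the question of "will the http page take over the No. 1 position?" largely answers itself, since ranking signals are folded into the http version instead of being discarded.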
301ed Pages Still Showing as Duplicate Content in GWMT
I thank anyone reading this for their consideration and time. We are a large site with millions of product-page URLs. We are also a textbook company, so by nature our products have two separate ISBNs: a 10 digit and a 13 digit form. Thus, every one of our books has at least two pages (a 10 digit and a 13 digit ISBN page). My issue is that we established a 301 for all the 10 digit URLs so they automatically redirect to the 13 digit page, and this fix has been in place for months. However, Google still reports detecting thousands of pages with duplicate titles and meta tags, and it is referring to the very URLs I already 301ed to the canonical version many months ago. Is there anything I can do to fix this issue? I don't understand what I am doing wrong. Example:
http://www.bookbyte.com/product.aspx?isbn=9780321676672
http://www.bookbyte.com/product.aspx?isbn=032167667X
As you can see, the 10 digit ISBN page 301s to the 13 digit canonical version. Google reports that it has detected duplicate titles and meta tags between the two pages, and thousands of these duplicate pages are listed. For further context: the ISBN is just a parameter that lets us serve content when someone searches with the 10 or 13 digit ISBN. The 13 digit version of the page is the only one that physically exists; the 10 digit is only part of the virtual URL structure of the website. That is why I cannot simply change the titles and meta tags of the 10 digit pages: they only exist in the sense that the URL redirects to the 13 digit version. Also, we submit a sitemap every day of all the 13 digit pages, so Google knows exactly what our physical URL structure is. I submitted this question to the GWMT forums and received no replies.
Technical SEO | dfinn
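For what it's worth, the 10-to-13 digit mapping behind that 301 is fully deterministic: prefix "978", drop the old check digit, and recompute the EAN-13 check digit, so every 10 digit URL has exactly one 13 digit target. A sketch of the standard conversion, using the ISBN from the example URLs (isbn10_to_isbn13 is a hypothetical helper name):

```python
def isbn10_to_isbn13(isbn10: str) -> str:
    """Convert an ISBN-10 to its ISBN-13 form: prepend '978', drop the
    ISBN-10 check digit, and recompute the EAN-13 check digit
    (weights alternate 1, 3 over the first twelve digits)."""
    body = "978" + isbn10[:9]  # discard the old check digit (may be 'X')
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(body))
    return body + str((10 - total % 10) % 10)

# The 10 digit example URL (isbn=032167667X) should 301 to the
# 13 digit one (isbn=9780321676672).
target_isbn = isbn10_to_isbn13("032167667X")
```

Since the mapping is one-to-one, the redirect can be computed at request time instead of being stored as millions of rules.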
Clarification from an old SEOmoz post
I need clarification on an old SEOmoz post: http://www.seomoz.org/q/rankings-changing-based-on-location-within-a-country-normal
In particular, this part: "If you type in 'Clear browser cache', Google KNOWS what browser you are using and can add the 'Firefox' term in on your behalf, without it being apparent to the user."
What does this mean? Thanks
Technical SEO | seoug_2005