My website's internal pages are not being cached with the latest data
-
Our website has a sector list. The home page displays the main category list; after clicking a main category, the user selects a subcategory to reach the result page.
Example: Agriculture -> Agribusiness -> Rice
The Agriculture page is indexed, but the Agribusiness and Rice pages are not being cached; they still show an old indexed date of 23 July 2013. I have submitted the sitemap four times since then, and I have also submitted some URLs manually in Webmaster Tools, but the pages still have not been re-cached.
Please suggest a solution and what the problem might be.
Thank you in advance,
Anne
-
Hi Anne,
I would make sure the page is in fact accessible to the crawler.
1. First, check the page itself in something like URI Valet and make sure it responds with a 200 OK status code. Use Googlebot as the user agent (a quick way to script this check is sketched after this list).
2. You can also "Fetch as Googlebot" in Webmaster Tools and submit the URL from there. Do the fetch and, assuming it returns your 200 code, re-submit the page to the index.
3. You can also try crawling the site with Screaming Frog SEO Spider (with Googlebot as the user agent) and see if those pages come up in the crawl.
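If you would rather script the check from step 1, here is a minimal sketch. It assumes the third-party Python requests library is installed, and the URLs are placeholders for your Agribusiness and Rice pages; it simply requests each page with a Googlebot user-agent string and prints the HTTP status code.

```python
# Minimal sketch: see how each page responds to a Googlebot user agent.
# Assumes the third-party "requests" library; the URLs below are
# placeholders for the Agribusiness and Rice pages.
import requests

GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

urls = [
    "http://www.example.com/agriculture/agribusiness/",
    "http://www.example.com/agriculture/agribusiness/rice/",
]

for url in urls:
    # allow_redirects=False so any 301/302 chain is visible instead of hidden
    response = requests.get(url, headers={"User-Agent": GOOGLEBOT_UA},
                            allow_redirects=False, timeout=10)
    print(url, response.status_code, response.headers.get("Location", ""))
```

Anything other than a 200 here (a redirect chain, a 404/500, or a block by robots.txt or meta robots) would explain why Google is not refreshing those pages.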
Lastly, I am curious how you know the "indexed date" of the page. If a page is cached you can see the cache date, but I am not sure where an indexed date would come from. Also, Google may simply not re-cache or update the index of a page for a while if it has lower PageRank and/or the content is not new and fresh; it sees no reason to update the cache.
Also, have these URLs ever been cached?
-Dan
Related Questions
-
Why does Google's search results display my home page instead of my target page?
Technical SEO | | h.hedayati6712365410 -
Tough SEO problem, Google not caching page correctly
My web site is http://www.mercimamanboutique.com/. The cached version of the French site (cache:www.mercimamanboutique.com/fr-fr/) is showing incorrectly, while the German version (cache:www.mercimamanboutique.com/de-de/) is showing correctly. I have resubmitted sitelinks and asked Google to re-index the web site many times. The German version always gets cached properly, but the French version never does. This is frustrating me; any idea why? Thanks.
Technical SEO | | ss20160 -
Contact Page
I'm currently designing a new website for my wife, who just started her own wedding/engagement photography business. I'm trying to build it as SEO-friendly as possible, but she brought up an idea that she likes that I've never tried before. Typically, on all the websites I've ever built, I've had a dedicated contact page with the typical contact form. Because the contact form on a wedding photographer's website is almost as important as selling a product on an e-commerce site, she raised the possibility of putting the contact form in the footer site-wide (minus maybe the homepage) rather than having a dedicated contact page. And in the navigation, where you have links such as "Home", "Portfolio", "About", "Prices", "Contact", etc., the "Contact" item would take the user to the bottom of the page they are on rather than to a new page. Any thoughts on which way would be better for a case like this, and any positives/negatives of doing it each way? One thought I had is that if the form is in the footer rather than on its own page, it would lose its searchability, since it's technically duplicate content on each page. But then again, that's what a footer is. Thanks, Mickey
Technical SEO | | shannmg10 -
How do I fix near-duplicate page issues on a website's city or local pages?
I am working on an e-commerce website where we have added 300+ pages to target different local cities in the USA. We have added distinct paragraphs to 100+ pages to remove the internal duplicate-content issue and save our website from a Panda penalty, and we have added unique paragraphs to a few other pages. You can visit the following pages to see what I mean. But I have big concerns about the other elements on each page, like the banner gallery, front banner, tool, and a few other attributes that are common to every page apart from the 4-5 sentence paragraph. I compiled an XML sitemap with all the local pages and submitted it to Google Webmaster Tools on 1st June 2013, but I can see only 1 page indexed by Google in Webmaster Tools. http://www.bannerbuzz.com/local http://www.bannerbuzz.com/local/US/Alabama/Vinyl-Banners http://www.bannerbuzz.com/local/MO/Kansas-City/Vinyl-Banners and so on... Can anyone suggest the best solution for this?
Technical SEO | | CommercePundit0 -
I need help compiling solid documentation and data (if possible) that having tons of orphaned pages is bad for SEO - Can you help?
I spent an hour this afternoon trying to convince my CEO that having thousands of orphaned pages is bad for SEO. His argument was, "If they aren't indexed, then I don't see how it can be a problem." Despite my best efforts to convince him that thousands of them ARE indexed, he simply said, "Unless you can prove it's bad and prove what benefit the site would get out of cleaning them up, I don't see it as a priority." So I am turning to all you brilliant folks here in Q&A and asking for help (one way to pull the raw numbers is sketched below)... and some words of encouragement would be nice today too 🙂 Dana
Technical SEO | | danatanseo0 -
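One way to put numbers behind the argument is to list the orphaned URLs yourself by comparing what the XML sitemap claims exists against what an internal-link crawl can actually reach. Below is a rough sketch only, assuming a standard sitemap.xml, the third-party requests library, and a placeholder domain; the crawl is capped so it stays small.

```python
# Rough sketch: find URLs listed in sitemap.xml that no internal link reaches.
# Assumes the "requests" library; the domain is a placeholder and the crawl
# is capped at 500 pages.
import requests
import xml.etree.ElementTree as ET
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

SITE = "http://www.example.com/"
SITEMAP = urljoin(SITE, "sitemap.xml")
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def internal_links(url):
    parser = LinkParser()
    parser.feed(requests.get(url, timeout=10).text)
    links = set()
    for href in parser.links:
        absolute = urljoin(url, href).split("#")[0]
        if urlparse(absolute).netloc == urlparse(SITE).netloc:
            links.add(absolute)
    return links

# 1. URLs the sitemap claims exist.
sitemap_xml = ET.fromstring(requests.get(SITEMAP, timeout=10).content)
sitemap_urls = {loc.text.strip() for loc in sitemap_xml.iter(NS + "loc")}

# 2. URLs reachable by following internal links from the home page.
seen, queue = {SITE}, [SITE]
while queue and len(seen) < 500:
    page = queue.pop(0)
    for link in internal_links(page):
        if link not in seen:
            seen.add(link)
            queue.append(link)

# 3. Anything in the sitemap that the crawl never reached is orphaned.
orphaned = sitemap_urls - seen
print(f"{len(orphaned)} orphaned URLs out of {len(sitemap_urls)} in the sitemap")
```

You can then spot-check which of the orphaned URLs actually appear in Google (for example with manual site: searches) to answer the "they aren't indexed" objection with data.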
Unnecessary pages getting indexed in Google for my blog
I have a blog, dapazze.com, and I have been suffering from a problem for a long time. I found that Google has indexed hundreds of replytocom links and image attachment pages for my blog, and I had to remove these pages manually using the URL removal tool. I had used "Disallow: ?replytocom" in my robots.txt, but Google disobeyed it. After that, I removed the parameter from my blog completely using the SEO by Yoast plugin. But now I see that Google has again started indexing these links even though they are no longer present on my blog (I use #comment). Google has also indexed many of my admin and plugin pages, even though they are disallowed in my robots.txt file. Have a look at my robots.txt file here: http://dapazze.com/robots.txt. Please help me solve this problem permanently (a quick way to test robots.txt rules is sketched below).
Technical SEO | | rahulchowdhury0 -
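One likely contributor, for what it's worth: a Disallow value is matched against the URL path from its beginning, so "Disallow: ?replytocom" (with no leading slash) is unlikely to match URLs like /some-post/?replytocom=123; the commonly recommended form, using Google's * wildcard support, is "Disallow: /*?replytocom". Also bear in mind that robots.txt only controls crawling, not indexing, so Google can still index a disallowed URL from links pointing at it. The sketch below is only a simplified approximation of the documented wildcard matching, with made-up example URLs, to show why one rule matches and the other does not.

```python
# Simplified approximation of Google-style robots.txt path matching,
# to illustrate why "Disallow: ?replytocom" blocks nothing while
# "Disallow: /*?replytocom" blocks the replytocom URLs. Example URLs are made up.
import re

def rule_matches(rule_path, url_path):
    # Matching starts at the beginning of the URL path; '*' matches any run
    # of characters and a trailing '$' anchors the end of the URL.
    pattern = re.escape(rule_path).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.match(pattern, url_path) is not None

urls = [
    "/some-post/?replytocom=123",
    "/wp-admin/admin-ajax.php",
]

for rule in ("?replytocom", "/*?replytocom", "/wp-admin/"):
    for url in urls:
        print(f"Disallow: {rule!r:20} blocks {url!r}: {rule_matches(rule, url)}")
```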
Best way to get SEO-friendly URLs on a huge old website
Hi folks, Hope someone may be able to help with this conundrum: a client site runs on old tech (IIS 6) and has circa 300,000 pages indexed in Google. Most pages are dynamic with a horrible URL structure such as http://www.domain.com/search/results.aspx?ida=19191&idb=56&idc=2888, and I have been trying to implement rewrites + redirects to get clean URLs and remove some of the duplication that exists, using the IIRF ISAPI filter: http://iirf.codeplex.com/ I managed to get a large sample of URLs rewriting and redirecting (on a staging version of the site), but the site then slows to a crawl, and to implement all URLs would require 10x the volume of config. I am starting to wonder if there is a better way: Upgrade to Win 2008 / IIS 7 and use the better URL rewrite functionality included? Rebuild the site entirely (preferably on PHP with a decent URL structure)? Accept that the URLs can't be made friendly on a site this size and focus on other aspects? Persevere with the IIRF filter config and hope that the config loads into memory and the site runs at a reasonable speed when live? None of the options are great, as they either involve lots of work/cost or they involve keeping a site which performs well but could do so much better, with poor URLs (a sketch of the kind of generic mapping that keeps rewrite config small follows below). Any thoughts from the great minds in the SEOmoz community appreciated! Cheers Simon
Technical SEO | | SCL-SEO1 -
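Whichever option wins, what usually keeps rewrite config small and fast is one generic pattern (plus a lookup) rather than one explicit rule per URL. Purely as an illustration, here is a sketch of the kind of mapping such a single rule or handler would implement for the example URL; the meaning of ida/idb/idc and the slug lookup table are hypothetical.

```python
# Illustration only: the kind of one-pattern mapping a single rewrite rule or
# handler would implement, instead of 300,000 explicit rules.
# The meaning of ida/idb/idc and the slug lookup are hypothetical.
from urllib.parse import urlparse, parse_qs

# Hypothetical lookup of numeric IDs to human-readable slugs
# (in practice this would come from the site's database).
SLUGS = {
    "ida": {"19191": "widgets"},
    "idb": {"56": "blue"},
    "idc": {"2888": "large"},
}

def clean_url(old_url):
    """Map /search/results.aspx?ida=..&idb=..&idc=.. to a clean path."""
    query = parse_qs(urlparse(old_url).query)
    parts = []
    for param in ("ida", "idb", "idc"):
        value = query.get(param, [""])[0]
        # Fall back to the raw ID if no slug is known for it.
        parts.append(SLUGS.get(param, {}).get(value, value))
    return "/" + "/".join(p for p in parts if p) + "/"

print(clean_url("http://www.domain.com/search/results.aspx?ida=19191&idb=56&idc=2888"))
# -> /widgets/blue/large/
```

IIS 7's URL Rewrite module supports regex-based rules and rewrite maps, so logic like this can live in one rule instead of hundreds of thousands of config entries, which may be what is slowing the IIRF setup down.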
Tips to get rid of a link from an infected website?
Hi, During some backlink analysis I found that a website linking to one of the sites I do SEO for triggers my antivirus; it seems infected by the JS/Dldr.Scripy.A JavaScript virus. This is the first time I have dealt with this kind of problem, and having found no information in Q&A or anywhere else, I wonder a few things: 1) How do I verify the reality of the threat and make sure it's not a false positive? Is there some tool to scan the website, maybe an online virus scanner? 2) How do I contact the webmaster, since I cannot browse the site to look for a "contact us" page? I looked in a WHOIS, but I only got the e-mail address of the hosting service; can I contact them directly? 3) Any tips or important things I should know? Thanks for your help
Technical SEO | | JohannCR0