New pages need to be crawled & indexed
-
Hi there,
When you add pages to a site, do you need to regenerate the XML sitemap and resubmit it to Google/Bing? I see an option in Google Webmaster Tools, under the "Fetch as Google" tool, to submit individual pages for indexing, which is what I am doing right now.
Thanks,
Sarah
-
Maybe so. I'll have to find out from our web team. Thanks!
-
Does your platform automatically regenerate the sitemap and resubmit it to Google for you? If so, you don't need to update it again yourself. You can use Fetch and Render, but you don't have to. Once the sitemap updates, Google will most likely recrawl any page that has been edited or added.
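To make the mechanics concrete, here's a minimal sketch (the URL and date are made up, and this isn't tied to any particular platform) of regenerating a sitemap entry with a `<lastmod>` value, which is the signal crawlers use to see that a page was added or changed:

```python
# Sketch: build a sitemap <url> entry with <loc> and <lastmod>.
# The example URL and date are illustrative, not from the thread.
import xml.etree.ElementTree as ET
from datetime import date

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", NS)  # serialize without an ns0: prefix

def sitemap_entry(urlset, loc, lastmod):
    """Append one <url> element (with <loc> and <lastmod>) to an <urlset>."""
    url = ET.SubElement(urlset, f"{{{NS}}}url")
    ET.SubElement(url, f"{{{NS}}}loc").text = loc
    ET.SubElement(url, f"{{{NS}}}lastmod").text = lastmod.isoformat()
    return url

urlset = ET.Element(f"{{{NS}}}urlset")
sitemap_entry(urlset, "https://www.example.com/new-page", date(2024, 1, 15))
xml_out = ET.tostring(urlset, encoding="unicode")
print(xml_out)
```

If the platform writes entries like this automatically on publish, there is nothing to resubmit by hand.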
-
Yes, what you said should do it.
Related Questions
-
Best practices for types of pages not to index
Trying to better understand best practices for when and when not to use content="noindex". Are there certain types of pages that we shouldn't want Google to index? Contact form pages, privacy policy pages, internal search pages, archive pages (using WordPress)? Any thoughts would be appreciated.
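As a sketch of the usual pattern (the helper name and page-type labels are hypothetical, not from any CMS): internal search results and thin archive pages are common noindex candidates, and "noindex, follow" is generally preferred over "noindex, nofollow" so link equity still flows through the excluded page:

```python
# Hypothetical helper: choose a robots meta tag by page type.
# The page-type names are illustrative; map them to your own templates.
NOINDEX_TYPES = {"internal_search", "archive", "tag", "cart", "checkout"}

def robots_meta(page_type: str) -> str:
    # "noindex, follow" excludes the page from the index while still
    # letting crawlers follow its links.
    if page_type in NOINDEX_TYPES:
        return '<meta name="robots" content="noindex, follow">'
    return '<meta name="robots" content="index, follow">'

print(robots_meta("internal_search"))
print(robots_meta("product"))
```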
Technical SEO | RichHamilton_qcs
-
Unable to demote contact us & about us pages from sitelinks?
Hey all, it's been 3 months since I demoted the contact us & about us pages via Search Console, but they still appear in my sitelinks. Are there any other guidelines to follow? Does anyone have the same experience? Susan.
Technical SEO | promodirect
-
Webmaster Tools - need to update every time you add a new product/page to an ecommerce store?
Hi guys, I run an ecommerce store and we are constantly receiving and uploading new products. Do we need to update the sitemap every time we upload a product? Google Webmaster Tools shows that the number of URLs received is higher than the number of indexed URLs. They should match, right? Thanks and regards
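One low-effort approach is to regenerate the sitemap as part of the product-upload job and then notify Google. A sketch of building the notification URL (the store domain is made up; note that Google's sitemap "ping" endpoint, shown here for illustration, has since been deprecated in favour of Google simply re-reading the sitemap you already submitted in Search Console):

```python
# Sketch: build the historical Google sitemap-ping URL for a freshly
# regenerated sitemap. Treat the endpoint as illustrative; resubmitting
# via Search Console (or just letting Google re-fetch the known sitemap
# URL) is the current-day equivalent.
from urllib.parse import urlencode

def sitemap_ping_url(sitemap_url: str) -> str:
    return "https://www.google.com/ping?" + urlencode({"sitemap": sitemap_url})

print(sitemap_ping_url("https://www.example-store.com/sitemap.xml"))
```

On the received-vs-indexed gap: those numbers rarely match exactly; Google reserves the right to skip URLs it considers duplicate, thin, or low priority.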
Technical SEO | footd
-
Beating big brands for rankings on Google page 1 post Panda & Penguin
Hi all, so having followed lots of SEOmoz guidelines that we have read here, plus standard SEO ideas, we seem to no longer be able to rank for our core keywords, and certainly not rank in front of the big brands. We're a small eCommerce company and have historically ranked in Google positions 1-4 for many of our keywords (a year or two ago), but now we're nowhere near this any more.

We always write unique content for our products, usually around 300-400 words per product, and we include our keywords in the title, meta description and H1 tags. We include buyers' guides and set-up articles on the site, and generally have a reasonable amount of good-quality, always uniquely written content. Recently we have concentrated on ensuring that page load speed is above average; Google Webmaster Tools page speed gives us around 80-90 out of 100. We carry out linking and always have done; in the most recent past this has been weighted towards 'content for links' to gain purely incoming links (although in the early days, from 2005, we did swap links with other webmasters as well as write and publish on article sites etc.). Product category pages have an intro piece of text that includes the key phrases for that page, placed as close to the body tag as possible.

From what I understand, if you are hit by Panda or Penguin the drop-off is invariably overnight, but we have not seen this; it has been more of a gradual decline over the last year or two (although there was a bit of a downward blip on Panda update 20). Now we're lucky to be on page 2 for what were our main keywords/phrases, such as "portable DVD players" or "portable DVD player". In front of us in every position is a big national brand, and on page 1 it is purely big brands in every position. They don't have great info, from what we can see, for these keywords, and certainly don't give as much info as we do.
For the phrase "portable DVD player" our portable DVD accessories page ranks better than our actual portable DVD player category page, which we also can't understand. This is our portable DVD category page: http://www.3wisemonkeys.co.uk/portable-dvd-players-car

Currently we're starting to produce 2-minute product demo videos for as many of our product detail pages as we can, and we plan to host these on something such as Vimeo so that the content will be unique to our site (rather than YouTube), in order to give us a different format of unique content on many of our product detail pages and improve rankings (and conversion rates at the same time, ideally).

So I am hoping that someone out there can point us in the right direction and shed some light on our declining positions. Are we doing, or have we done, something wrong? Or is it extremely difficult in these post-Panda/Penguin days for a small business to beat the big brands, because Google believes they are what everyone wants to see when shopping? Thanks for any comments and/or help.
Technical SEO | jasef
-
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest... We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat-map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told of the ref parameter, and the canonical meta tag was used to indicate our preference. As expected we encountered no duplicate content issues and everything was good.

This is the chain of events:

1. Site migrated to the new platform following best practice, as far as I can attest to. The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between relaunch on 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified. URL structure and URIs were maintained 100% (which may be a problem, now).
2. Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expand the report out and the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I.
3. Run, not walk, to Google and do some Fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI). Checked Bing and it has indexed each root URL once, as it should.

Situation now: the site no longer uses the ?ref= parameter, although of course there still exist some external backlinks that use it. This was intentional and happened when we migrated. I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today it is at over 1,000 (another wtf moment). I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and the HTML site-map page.
The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist, and won't drop them from the index but will instead apply a dupe-content penalty. Or maybe call us a spam farm. Who knows.

Options that occurred to me (other than maybe making our canonical tags bold, or locating a Google bug submission form 😄) include:

A) robots.txt-ing ?ref= URLs, but to me this says "you can't see these pages", not "these pages don't exist", so it isn't correct.
B) Hand-removing the URLs from the index through a page removal request per indexed URL.
C) Applying a 301 to each indexed URL (hello Bing dirty-sitemap penalty).
D) Posting on SEOmoz because I genuinely can't understand this.

Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting. I have no idea why and can't think of the best way to correct the situation. Do you? 🙂

Edited to add: as of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There's no message explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
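For option C, the redirect logic is simple enough to sketch framework-agnostically: any request carrying a ?ref= parameter gets a 301 to the same URL with that parameter stripped, which tells Google the tracking variants no longer exist while preserving the page. A minimal sketch (not the poster's actual server config):

```python
# Sketch: compute the clean canonical target for a ?ref= tracking URL.
# In production this decision would live in the webserver or CMS and
# be served as an HTTP 301.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_ref(url: str):
    """Return (needs_redirect, clean_url) with any ref parameter removed."""
    parts = urlsplit(url)
    params = [(k, v) for k, v in parse_qsl(parts.query) if k != "ref"]
    clean = urlunsplit((parts.scheme, parts.netloc, parts.path,
                        urlencode(params), parts.fragment))
    return clean != url, clean

print(strip_ref("http://www.three-clearance.co.uk/apple-phones.html?ref=menu"))
```

Unlike robots.txt (option A), a 301 lets Google consolidate the duplicate entries onto the clean URL rather than merely hiding them.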
Technical SEO | Tinhat
-
Changed cms - google indexes old and new pages
Hello again. After posting the problem below, I received this answer and changed the sitemap name. Still, I receive many duplicate titles and metas, as Google still compares old URLs to new ones and sees duplicate titles and descriptions. We have redirected all pages properly, we have changed the sitemap name, and the new sitemap is listed in Webmaster Tools; the old sitemap includes ONLY the new sitemap files.

The answer received: "When you deleted the old sitemap and created a new one, did you use the same sitemap XML filename? They will still try to crawl old URLs that were in your previous sitemap (even if they aren't listed in the new one) until they receive a 404 response from the original sitemap."

If anyone can give me an idea why, after 3 months, Google still lists the old URLs, I'd be more than happy. Thanks a lot.

The original question: Hello, we have changed the CMS for our multiple-language website and properly redirected all old URLs to the new CMS, which is working just fine. Right after the first crawl, almost 4 weeks ago, we saw in Google Webmaster Tools and SEOmoz that for almost every single page Google indexes the old URL as well as the new one, and flags duplicate meta tags for us. We deleted the old sitemap and uploaded the new one, and thought Google would then stop indexing the old URLs. But we still see a huge amount of duplicate meta tags. Does anyone know what else we can do, so Google does not index the old URLs anymore, but only the new ones? Thanks so much, Michelle
Technical SEO | Tit
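The fix the quoted answer describes can be sketched as a simple routing rule (all URLs here are hypothetical): every old URL answers with a permanent 301 to its new home, and anything that truly no longer exists, including the old sitemap filename, answers 404 so Google drops it:

```python
# Sketch of post-migration routing (hypothetical paths): 301 old URLs
# to their new equivalents; 404 anything retired, including the old
# sitemap file, so crawlers stop requesting it.
OLD_TO_NEW = {
    "/old-sitemap.xml": None,                  # retired: let it 404
    "/produkt.php?id=42": "/products/blue-widget",
}

def respond(path: str):
    if path not in OLD_TO_NEW:
        return 200, path                       # a live, current URL
    target = OLD_TO_NEW[path]
    return (404, None) if target is None else (301, target)

print(respond("/produkt.php?id=42"))
```

Even with correct 301s, Google can take months to drop old URLs from its reports; the duplicate-meta warnings usually fade as the redirects are recrawled.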
If a page isn't linked to or directly submitted to a search engine, can it get indexed?
Hey guys, I'm curious whether there are ways a page can get indexed even if it isn't linked to and hasn't been submitted to a search engine. To my knowledge the following page on our website is not linked to, and we definitely didn't submit it to Google, but it's currently indexed: takelessons.com/admin.php/adminJobPosition/corp Anyone have any ideas as to why or how this could have happened? Hopefully I'm missing something obvious 🙂 Thanks, Jon
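Whatever the discovery route (a browser toolbar, a link in an email that got shared, a referrer leak), a common mitigation is to fence off admin paths. A hedged sketch of a robots.txt rule (the path matches the example above; adapt to your own structure):

```text
# Hypothetical robots.txt rule: keep crawlers out of admin paths.
# Caveat: Disallow prevents crawling, not indexing. A URL Google
# already knows can remain indexed, so pair this with authentication
# or a noindex response once the URL has dropped out of the index.
User-agent: *
Disallow: /admin.php/
```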
Technical SEO | TakeLessons
-
I have 15,000 pages. How do I have the Google bot crawl all the pages?
I have 15,000 pages. How do I have the Google bot crawl all the pages? My site is 7 years old, but only about 3,500 pages are being crawled.
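One concrete step is to make sure all 15,000 URLs are exposed to Google at all, via a sitemap index that lists several smaller sitemap files. A sketch (the domain and chunk size are made up; the sitemap protocol itself allows up to 50,000 URLs per file):

```python
# Sketch: split a large URL list into sitemap files and reference them
# from a sitemap index. Domain and chunk size are illustrative.
def chunk(urls, size=50_000):
    """Split a URL list into sitemap-sized chunks."""
    return [urls[i:i + size] for i in range(0, len(urls), size)]

urls = [f"https://www.example.com/page-{n}" for n in range(15_000)]
files = chunk(urls, size=5_000)      # smaller files are easier to debug
index = "\n".join(
    f"<sitemap><loc>https://www.example.com/sitemap-{i}.xml</loc></sitemap>"
    for i, _ in enumerate(files)
)
print(len(files))
```

Submitting a sitemap doesn't force crawling, but per-file indexed counts in Webmaster Tools make it much easier to see which sections of a 7-year-old site Google is ignoring.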
Technical SEO | Ishimoto