Is a Sitemap Issue Causing Duplicate Content & Unindexed Pages on Google?
On July 10th my site was migrated from Drupal to WordPress. The site contains approximately 400 pages.
301 permanent redirects were used, and the site contains maybe 50 pages of new content.
Many of the new pages have not been indexed, and many pages show as duplicate content.
Is it possible that a sitemap issue is causing this problem? My developer believes the sitemap is formatted correctly, but I am not convinced.
The sitemap address is http://www.nyc-officespace-leader.com/page-sitemap.xml
I am completely non-technical, so if anyone could take a brief look I would appreciate it immensely.
Thanks,
Alan
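For anyone wanting to sanity-check a sitemap like this one, below is a minimal sketch (assuming Python 3 with the third-party requests library installed) that fetches the sitemap, extracts every <loc> URL, and flags any that do not answer 200. After a migration handled with 301s, a common culprit is a sitemap that still lists old, redirecting URLs, which can delay indexing of the new pages and trigger duplicate-content reports.

```python
import requests
import xml.etree.ElementTree as ET

SITEMAP = "http://www.nyc-officespace-leader.com/page-sitemap.xml"
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def check_sitemap(sitemap_url):
    """Fetch the sitemap and report every listed URL that is not a plain 200."""
    root = ET.fromstring(requests.get(sitemap_url, timeout=10).content)
    for loc in root.findall(".//sm:loc", NS):
        page = loc.text.strip()
        # allow_redirects=False so a 301/302 is reported as itself,
        # not as the 200 it eventually resolves to
        status = requests.head(page, timeout=10, allow_redirects=False).status_code
        if status != 200:
            print(status, page)

if __name__ == "__main__":
    check_sitemap(SITEMAP)
```

Every URL in the sitemap should come back 200; anything answering 301 is a page Google is being asked to index at an address that immediately redirects away.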
Related Questions
Problem with Duplicate Pages in WordPress
Hi all, my name is Riccardo and I work for a web agency. I'm working on a new client's website and I have found these errors through Moz (Image 1). I checked all the URLs; they work, and they redirect to the homepage. The website is built with WordPress. I have already tried to solve the problem with 301 redirects but, as I suspected, it didn't work. I think the problem is related to the WordPress URL settings (Image 2). However, I would like to know if anybody has had the same problem, or if there are other possible causes. Thank you in advance!
Intermediate & Advanced SEO | advmedialab
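One quick check before changing anything in WordPress: request each URL that Moz flagged and look at the raw status code. If WordPress (or a plugin) answers 200 instead of 301, the redirect never fires and the pages keep getting crawled as duplicates of the homepage. A minimal sketch, assuming Python 3 with the requests library; the example URL is hypothetical and stands in for the flagged ones.

```python
import requests

def inspect(url):
    """Show the raw status code and redirect target for a single URL."""
    # allow_redirects=False exposes the first response, which is
    # what crawlers see before following any redirect chain
    r = requests.get(url, allow_redirects=False, timeout=10)
    print(url, "->", r.status_code, r.headers.get("Location", "(no redirect)"))

# hypothetical example; substitute the URLs from the Moz report
inspect("https://www.example.com/flagged-duplicate-page/")
```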
Google update this weekend or page title issue?
Hi, I've seen a big ranking drop for many major terms for a particular site, just on Google. This happened Friday the 20th or Saturday the 21st just gone, and I don't see any news of an algorithm update over the weekend. I had changed many of the site's major page title protocols two weeks ago, but: a) I would have expected any negative effect before now, not all at once; b) the protocols were carefully crafted to avoid traffic drops for major terms; c) I'm seeing traffic drops for keywords that still appear at the beginning of the page title; and d) I'm seeing drops for some pages which are still using the OLD page titles. I had even tested the protocol on a number of pages in advance to ensure it wouldn't cause problems. As a bit of background, the title protocols were changed to make them more user-friendly and less keyword-heavy. CTR from search improved, so I was hoping for better rankings, not worse! Ideas gratefully appreciated. Andy
Intermediate & Advanced SEO | AndyMacLean0
Duplicate Content with URL Parameters
Moz is picking up a large quantity of duplicate content, consisting mainly of URL parameters like ,pricehigh and ,pricelow (used for page sorting). Google has indexed a large number of these pages (not sure how many), and I'm not sure how many of them are ranking for search terms we need. I have added the parameters in Google Webmaster Tools and set them to 'Let Google decide', but Google still sees the pages as duplicate content. Is this a problem we need to address? Or could trying to fix it do more harm than good? Has anyone had any experience with this? Thanks
Intermediate & Advanced SEO | seoman100
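Before deciding whether to act, it can help to measure how much consolidation is actually at stake by normalising the URLs yourself: strip the sort parameters and see what collapses together. Below is a minimal sketch in Python 3 (standard library only); the ,pricehigh and ,pricelow names come from the question above, and the SORT_PARAMS list and example URLs are assumptions to adapt to the real site.

```python
from collections import defaultdict
from urllib.parse import urlsplit, urlunsplit

# sort suffixes that only reorder results without changing content
SORT_PARAMS = (",pricehigh", ",pricelow")

def canonical(url):
    """Strip sort parameters so equivalent pages map to one address."""
    parts = urlsplit(url)
    path = parts.path
    for param in SORT_PARAMS:
        path = path.replace(param, "")
    return urlunsplit((parts.scheme, parts.netloc, path, "", ""))

# hypothetical URLs standing in for the crawled ones
urls = [
    "https://www.example.com/widgets,pricehigh",
    "https://www.example.com/widgets,pricelow",
    "https://www.example.com/widgets",
]

groups = defaultdict(list)
for u in urls:
    groups[canonical(u)].append(u)
for canon, variants in groups.items():
    print(canon, "<-", variants)
```

Each group with more than one member is a candidate for a rel="canonical" tag pointing at the unsorted version, which is usually safer than blocking the parameters outright.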
Duplicate content within sections of a page, but not full-page duplicate content
Hi, I am working on a website redesign. The client offers several services, and some elements of those services cross over with one another. For example, they offer a service called Modelling, and when you click onto that page, several elements that make up that service are featured, in this case 'mentoring'. Now, mentoring is common to other services and will therefore feature on other service pages. Each page will feature a mixture of content unique to that service and small sections of duplicate content, and I'm not sure how to treat this. One idea we have come up with is to take the user through to a unique page that hosts all the shared content; however, some features do not warrant a page of their own. Another idea is to have the feature pop up with inline content. Any thoughts/experience on this would be much appreciated.
Intermediate & Advanced SEO | J_Sinclair0
Google Generating its Own Page Titles
Hi there, I have a question about Google generating its own page titles for some of the pages on my website. I know that Google sometimes takes your H1 tag and uses it as a page title; can anyone tell me how I can stop this from happening? Is there a meta tag I can use, for example like the NOODP tag? Or do I have to change my page titles? Thanks, Sadie
Intermediate & Advanced SEO | dancape0
Duplicate Content Question
Brief question: SEOmoz is telling me that I have duplicate content on the following two pages: http://www.passportsandvisas.com/visas/ and http://www.passportsandvisas.com/visas/index.asp. The default page for the /visas/ directory is index.asp, so it is effectively the same page, but apparently SEOmoz, and more importantly Google, treat these as two different pages. I read about 301 redirects, but in this case there aren't two physical HTML pages, so how do I fix this?
Intermediate & Advanced SEO | santiago230
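The usual fix for a default-document duplicate like this is either a rel="canonical" tag on /visas/index.asp pointing at /visas/, or a server-side 301 from the explicit index.asp address to the bare directory. The site runs classic ASP, where this would normally be an IIS URL-rewrite rule rather than application code, so the Flask sketch below is only an illustration of the redirect pattern, not the actual implementation.

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def collapse_default_document():
    """301 any URL that names the default document explicitly."""
    if request.path.endswith("/index.asp"):
        # /visas/index.asp -> /visas/
        bare_directory = request.path[: -len("index.asp")]
        return redirect(bare_directory, code=301)
```

Only one redirect is needed because there is only one physical page; the 301 simply collapses the two addresses the crawler sees into one.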
Duplicate content
I have just read http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world and I would like to know which option is the best fit for my case. On my website, http://www.hotelelgreco.gr, every image in the image library (http://www.hotelelgreco.gr/image-library.aspx) has a different URL but is considered a duplicate of the others in the library. Please suggest what I should do.
Intermediate & Advanced SEO | socrateskirtsios0
"Duplicate" Page Titles and Content
Hi All, This is a rather lengthy one, so please bear with me! SEOmoz has recently crawled 10,000 webpages from my site, FrenchEntree, and has returned 8,000 duplicate page content errors. The main reason I have so many is the directories I have on the site.

The site is broken down into two levels of hierarchy: "weblets" and "articles". A weblet is a landing page, and articles are created within these weblets. Weblets can hold any number of articles (0 to 1,000,000 in theory), and an article must be assigned to a weblet in order for it to work. Here's roughly how it looks in URL form: http://www.mysite.com/[weblet]/[articleID]/

Now, our directory results pages are weblets with standard content in the left- and right-hand columns, but the information in the middle column is pulled in from our directory database following a user query. This happens by adding the query string to the end of the URL. We have 3 main directory databases, but perhaps around 100 weblets promoting various 'canned' queries that users may want to navigate straight into. However, any one of the 100 directory-promoting weblets could return any query from the parent directory database given the correct query string. The problem with this method (as pointed out by the 8,000 errors) is that each possible permutation of the search is considered its own URL, and therefore its own page.

The example I will use is the first alphabetically, "Activity Holidays in France": http://www.frenchentree.com/activity-holidays-france/ shows the results weblet without the query at the end, and therefore only displays the left- and right-hand columns as populated. http://www.frenchentree.com/activity-holidays-france/home.asp?CategoryFilter= shows the same weblet with an 'open' query on the end, i.e. display all results from this database; listings are displayed in the middle. There are around 500 different URL permutations for this weblet alone when you take into account the various categories and cities a user may want to search in.

What I'd like to do is prevent SEOmoz (and therefore search engines) from counting each individual query permutation as a unique page, without harming the visibility the directory results receive in SERPs. We often appear in the top 5 for quite competitive keywords and we'd like it to stay that way. I also wouldn't want the search engine results to only display (and therefore direct the user through to) an empty weblet because of some sort of robot exclusion or canonical classification. Does anyone have any advice on how best to remove the "duplication" problem whilst keeping the search visibility? All advice welcome. Thanks, Matt
Intermediate & Advanced SEO | Horizon0
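The standard answer to query-string permutations like these is rel="canonical": every permutation keeps its content and stays crawlable, but declares one canonical address, so the ~500 variants consolidate instead of competing, and nothing is excluded the way a robots rule would exclude it. Below is a minimal sketch in Python 3 (standard library only) of deriving the canonical href by stripping the query; whether the canonical should point at home.asp or at the weblet root is a judgment call for the site.

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_tag(url):
    """Build a rel=canonical tag with the query string stripped."""
    parts = urlsplit(url)
    canon = urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
    return '<link rel="canonical" href="%s" />' % canon

# URL taken from the question; the tag would be emitted into the
# <head> of every results permutation
print(canonical_tag(
    "http://www.frenchentree.com/activity-holidays-france/home.asp?CategoryFilter="
))
```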