Content change within the same URL/Page (UX vs SEO)
-
Context:
I'm asking my client to create city pages so he can present all of his apartments in a specific sector, giving me a page that can rank for "apartments for rent in [sector]". The page will show a map of all the sectors so that, after landing on the page, the user can navigate and choose the sector he wants.
Question:
The UX team is asking whether we absolutely need to reload the page when the user clicks a location on the map, or whether they can swap the content within the same page/URL once the user is on the landing page.
My concern:
1. Could this be analysed as duplicate content if Google can crawl within the JavaScript app, or does Google only analyse its "first view" of the page?
2. Would it be preferable to keep the "page change" so that I increase the number of pages viewed?
-
Google can read dynamic content, so I would not do this; anything you swap in with JavaScript can still be crawled. In terms of UX, why not come up with something really cool to show users while they are waiting? How many seconds do you need for the change? As a rule, I would keep the sectors on their own pages with their own content, precisely because Google can read dynamic content.
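If the UX team still wants to avoid a full reload, one common compromise is to swap the content client-side while updating the address bar to the sector's real URL with the History API, so every sector keeps its own crawlable page. A minimal sketch in TypeScript, assuming hypothetical sector links marked with a .sector-link class and a #content container:

```typescript
// Intercept clicks on the map's sector links and swap the content without a
// full reload, while pushing the sector's real, crawlable URL into the bar.
document.querySelectorAll<HTMLAnchorElement>("a.sector-link").forEach((link) => {
  link.addEventListener("click", async (event) => {
    event.preventDefault();
    const response = await fetch(link.href); // each href is a real page
    const html = await response.text();
    const doc = new DOMParser().parseFromString(html, "text/html");
    const newContent = doc.querySelector("#content");
    const newTitle = doc.querySelector("title");
    const container = document.querySelector("#content");
    if (newContent && container) {
      container.innerHTML = newContent.innerHTML;
      if (newTitle?.textContent) document.title = newTitle.textContent;
      history.pushState(null, "", link.href); // URL now matches the sector page
    }
  });
});

// On back/forward navigation, fall back to a normal load of the real URL.
window.addEventListener("popstate", () => location.reload());
```

The key point is that the URL passed to pushState is a real page the server can also render directly, not a fragment or app-only state.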
-
So, for example, if the user is on the "New York" page and clicks the "Boston" location within the app, is there any problem with changing the content inside the New York page to show all the Boston listings, without navigating to the Boston URL, just by swapping the content inside the New York page?
-
Hi Cristian,
I totally agree with what you're telling me. My plan is to keep separate pages for every sector so that I have a page ranking for each sector. My question was more about the on-site experience: for example, when the user is on a specific location's page, is there any problem if the content changes to a new location within the web app without loading a new static page?
-
Hello. You really need separate pages if you want to rank with all of them. Think of the title, for example: how do you want a specific region to be indexed if you have only one page? How should Googlebot understand that you have multiple pieces of content, and which content/section to show when someone runs a specific query? Escaped fragments could have been used in the past, but they were never a great solution and the scheme has been discontinued (https://webmasters.googleblog.com/2015/10/deprecating-our-ajax-crawling-scheme.html). So I would provide separate pages, each with as much quality content as possible, and with strong internal linking.
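To make the "separate pages" point concrete, here is a minimal sketch of one crawlable URL and one unique title per sector. It assumes a hypothetical Express-style Node server; the route, domain, sector names, and data are illustrative:

```typescript
import express from "express";

const app = express();

// Hypothetical sector data; in practice this would come from the listings database.
const sectors: Record<string, { name: string; intro: string }> = {
  "new-york": { name: "New York", intro: "Apartments for rent in New York." },
  boston: { name: "Boston", intro: "Apartments for rent in Boston." },
};

// One crawlable URL per sector, each with its own title, heading, and content,
// so Googlebot can index and rank every sector independently.
app.get("/apartments/:sector", (req, res) => {
  const sector = sectors[req.params.sector];
  if (!sector) {
    res.status(404).send("Sector not found");
    return;
  }
  res.send(`<!doctype html>
<html>
  <head>
    <title>Apartments for rent in ${sector.name}</title>
    <link rel="canonical" href="https://example.com/apartments/${req.params.sector}">
  </head>
  <body>
    <h1>Apartments for rent in ${sector.name}</h1>
    <p>${sector.intro}</p>
    <div id="content"><!-- map and listings render here --></div>
  </body>
</html>`);
});

app.listen(3000);
```

Each sector page then links to the others (for example, through the map), which provides the strong internal linking mentioned above.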