Dynamic page
-
I have a few pages on my site that are of this nature:
/locator/find?radius=60&zip=&state=FL
I read on Google's Webmaster Central blog that they suggest not changing URLs like this:
"According to Google's Blog (link below) they are able to crawl the simplified dynamic URL just fine, and it is even encouraged to use a simple dynamic URL ( " It's much safer to serve us the original dynamic URL and let us handle the problem of detecting and avoiding problematic parameters. " )
_http://googlewebmastercentral.blogspot.com/2008/09/dynamic-urls-vs-static-urls.html _It can also actually lead to a decrease as per this line: " We might have problems crawling and ranking your dynamic URLs if you try to make your urls look static and in the process hide parameters which offer the Googlebot valuable information. "The URLs are already simplified without any extra parameters, which is the recommended structure from Google:"Does that mean I should avoid rewriting dynamic URLs at all?
That's our recommendation, unless your rewrites are limited to removing unnecessary parameters, or you are very diligent in removing all parameters that could cause problems"I would love to get some opinions on this also please consider that those pages are not cached by Google for some reason.
-
I think this is an answer that goes beyond Google. We use rewrites extensively and do not have any problems. There are some caveats:
- Regarding Googlebot missing information: you just need to make sure that the new URL carries all the same info.
Let's say you are a plumbing portal and use
/locator/find?radius=60&zip=&state=FL
rewritten to
/plumbers/florida-fl/miami/33110/
Your search radius can be a default value vs having to put it in as a parameter.
It helps with site structure to think of things as how they would be laid out in a static directory. In this case, you are actually giving more information to Googlebot with the rewritten URL than with the old one, as you have included what you are searching for (a plumber), the city (Miami), the state (FL), and the zip code (33110). The previous URL only indicated the state. If you don't like using all the folders, you can simply have a longer file name with dashes in between the words.
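A minimal sketch of that mapping in Python, assuming the rewrite happens at the application layer (the lookup tables and function name are hypothetical, and the zip is filled in so the example matches the rewritten URL above; the same mapping can equally live in your web server's rewrite rules):

```python
from urllib.parse import parse_qs, urlparse

# Hypothetical lookup tables; a real site would pull these from its database.
STATE_SLUGS = {"FL": "florida-fl"}
ZIP_TO_CITY = {"33110": "miami"}
DEFAULT_RADIUS = 60  # the radius becomes a baked-in default instead of a URL parameter

def rewrite_locator_url(dynamic_url):
    """Map /locator/find?radius=60&zip=33110&state=FL to /plumbers/florida-fl/miami/33110/."""
    query = parse_qs(urlparse(dynamic_url).query)
    state = query.get("state", [""])[0]
    zip_code = query.get("zip", [""])[0]
    city = ZIP_TO_CITY.get(zip_code, "")
    return f"/plumbers/{STATE_SLUGS[state]}/{city}/{zip_code}/"

print(rewrite_locator_url("/locator/find?radius=60&zip=33110&state=FL"))
# -> /plumbers/florida-fl/miami/33110/
```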
-
If you use rewrites, make sure Google is not spidering the original URLs; otherwise you can get penalized for duplicate content. Monitoring Webmaster Tools or crawling the site with spider software will help you find the holes. You can then use things like canonical links and noindex tags to get the old URLs out of the index and make sure Google has the correct pages. This all depends on how you implement your rewrites.
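As a rough sketch of plugging those holes (hypothetical helper names and domain; wire these into whatever framework serves the pages): 301 the legacy dynamic URLs to their rewritten equivalents, and emit a canonical link on the new pages.

```python
def legacy_redirect(new_path):
    """301 the old dynamic URL so only the rewritten version stays in the index."""
    return 301, {"Location": new_path}

def canonical_tag(path, host="https://www.example.com"):
    """Canonical <link> for the rewritten page's <head>, pointing any dupes at one URL."""
    return f'<link rel="canonical" href="{host}{path}">'

status, headers = legacy_redirect("/plumbers/florida-fl/miami/33110/")
print(status, headers["Location"])  # 301 /plumbers/florida-fl/miami/33110/
print(canonical_tag("/plumbers/florida-fl/miami/33110/"))
```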
-
If you take some time up front to look at how you want to organize your site, the first two items will usually take care of themselves. A good exercise is to write down how all of this would work within a breadcrumb navigation. This forces you to get organized and also helps you set up how you want all your pages to be shown to Google. If you do start to add parameters on top of this basic structure, like pagination or other sortable options, you need to think about how you would noindex,follow those pages to make sure that your main page ranks for a given key phrase rather than all the other sorted versions of the same page.
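One hedged way to sketch that decision (the parameter names are hypothetical): drive the meta robots value off whether the request carries sorting or pagination parameters.

```python
from urllib.parse import parse_qs, urlparse

# Parameters that create sorted/paginated variants of the main page (hypothetical names).
VARIANT_PARAMS = {"sort", "order", "page"}

def robots_meta(url):
    """noindex,follow the parameterized variants so only the main page competes to rank."""
    params = set(parse_qs(urlparse(url).query))
    return "noindex, follow" if params & VARIANT_PARAMS else "index, follow"

print(robots_meta("/plumbers/florida-fl/miami/33110/"))              # index, follow
print(robots_meta("/plumbers/florida-fl/miami/33110/?sort=rating"))  # noindex, follow
```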
-
One thing that is overlooked in setting up this kind of structure is that you can use it to your advantage in your analytics tools to look at global trends on your site. This works for any site. Using the example above, all US states are at the 2nd-level directory, cities are at the 3rd, and zip codes are at the 4th. That makes it really easy to use a regexp on URLs to group them. For example, you could set up a filter in your analytics to combine all sessions that looked at pages in Florida and see what the next action was.
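A quick sketch of that grouping idea in Python (the sample paths are hypothetical); most analytics tools accept the same kind of regexp in a filter:

```python
import re
from collections import Counter

# With states fixed at the 2nd directory level, one pattern groups every page by state.
STATE_RE = re.compile(r"^/plumbers/([a-z-]+)/")

pages_viewed = [
    "/plumbers/florida-fl/miami/33110/",
    "/plumbers/florida-fl/orlando/32801/",
    "/plumbers/georgia-ga/atlanta/30301/",
]

sessions_by_state = Counter(
    m.group(1) for p in pages_viewed if (m := STATE_RE.match(p))
)
print(sessions_by_state)  # Counter({'florida-fl': 2, 'georgia-ga': 1})
```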
Cheers!
-
Hi,
I don't think this means that you have to avoid rewriting dynamic URLs at all, but you do have to take care not to lose the accessibility of your information.
For your URL, it could be interesting to build your URLs like:
/locator/florida/find?radius=60
/locator/24786/find?radius=60
or even better:
/stores-near-florida/find?range=60
/stores-near-24786/find?range=60
Google's suggestion just says that you have to avoid losing information when mapping your dynamic URL to a static one. You should leave the radius parameter in the URL because Google could vary this parameter.
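A tiny sketch of that hybrid approach (the builder function is hypothetical): the location is rewritten into the path, while the radius stays a visible parameter that Google can vary.

```python
def store_locator_url(location_slug, search_range=60):
    """Rewrite the location into the path but leave the radius as a real parameter."""
    return f"/stores-near-{location_slug}/find?range={search_range}"

print(store_locator_url("florida"))     # /stores-near-florida/find?range=60
print(store_locator_url("24786", 100))  # /stores-near-24786/find?range=100
```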
-
Correction: the pages are found by Google.