URLs for news content
-
We have modified the URL structure for a client who publishes news articles in various niche industries. In line with SEO best practice we removed the article ID from the URL. An example is below:
Old: http://www.website.com/news/123/news-article-title
New: http://www.website.com/news/read/news-article-title
Since this change we have noticed a decline in traffic volumes (we have not yet assessed the impact on the number of pages indexed). Google has suggested that we include a unique numerical ID somewhere in the URL to aid spidering. Firstly, is this the policy for news submissions? Secondly (if the previous answer is yes), is this to overcome the obvious issue where the velocity and trend-based nature of news publishing results in false duplicate URL/title-tag violations? Thirdly, do you have any advice on the way to go?
Thanks
P.S. One final one (you can count this as two question credits if required): is it possible to check the volume of pages indexed at various points in the past? That is, if you suspect that the number of pages being indexed has declined, is there any way of confirming this after the event?
Thanks again!
Neil
-
Hi Neil,
Your URL structure is affecting Google News: article URLs are expected to contain a unique number of at least three digits, a requirement that can be waived if you submit a Google News sitemap. The details are here: http://www.google.com/support/news_pub/bin/answer.py?answer=151309
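A Google News sitemap, which lifts the numeric-ID requirement, is an ordinary XML sitemap with a news extension. A minimal sketch, with placeholder URLs and publication name from the question:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">
  <url>
    <loc>http://www.website.com/news/read/news-article-title</loc>
    <news:news>
      <news:publication>
        <news:name>Website News</news:name>
        <news:language>en</news:language>
      </news:publication>
      <news:publication_date>2011-05-01</news:publication_date>
      <news:title>News Article Title</news:title>
    </news:news>
  </url>
</urlset>
```

It is submitted through Webmaster Tools like any other sitemap; note that Google News sitemaps should only list articles published in the last two days, up to 1,000 URLs.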
I don't know of any way to see the number of pages indexed historically, unfortunately.
-
For this one, do a little research on any news website, and on the SEOmoz blog, and you will see that every URL is unique. For starters I'd urge you to read the SEOmoz beginner's guide, and then The Art of SEO by Rand Fishkin and other SEOs: http://www.amazon.com/Art-SEO-Mastering-Optimization-Practice/dp/0596518862
Also look at Google's website optimisation guidance. You need a really good understanding of search-engine-friendly URLs: how to build them, why, and what kinds to avoid, plus how to avoid duplicate content. Because you are moving the published articles to new URLs, Google may crawl both versions and see duplicate content.
But the best answer is: read The Art of SEO and you will learn all of this and do it properly.
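On the duplicate-content point about moving published articles: the usual fix is to 301 the old ID-based URLs to the new ones so only one version exists for crawlers. A sketch in Apache mod_rewrite, assuming the URL patterns from the question:

```apache
# .htaccess: permanently redirect legacy ID-based news URLs, e.g.
#   /news/123/news-article-title  ->  /news/read/news-article-title
RewriteEngine On
RewriteRule ^news/\d+/([^/]+)/?$ /news/read/$1 [R=301,L]
```

The 301 also passes the old URLs' link equity to the new ones, rather than leaving it on dead pages.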
hope it helps,
thanks
Related Questions
-
Conversion of URLs for Readability
Reading over Rand's latest post about URL structure, I had a quick question about the best way to convert URLs that don't have perfect structure. Currently our e-commerce store uses URLs like domain.com/product/zdcd-jobd3d-fdoh, which are not friendly. What is the easiest way to convert these to readable URLs without causing any disruption in the SERPs? Are we talking about a mod_rewrite rule in the CMS?
Technical SEO | CMcMullen
-
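Because the cryptic product codes carry no readable slug to reuse, each old URL needs an explicit mapping to its new readable form. One way to sketch that, using Apache's RewriteMap (the file path, map name, and slugs here are all illustrative; RewriteMap must live in the server or vhost config, not .htaccess):

```apache
# /etc/apache2/product-slugs.txt maps old code -> readable slug, e.g.:
#   zdcd-jobd3d-fdoh  blue-widget-large
RewriteEngine On
RewriteMap productslug txt:/etc/apache2/product-slugs.txt
# Only redirect codes that actually appear in the map:
RewriteCond ${productslug:$1} !=""
RewriteRule ^/?product/([a-z0-9-]+)$ /product/${productslug:$1} [R=301,L]
```

The 301s let the new URLs inherit the old ones' rankings, so the switch shouldn't disrupt the SERPs.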
Yoast's Magento Guide "Nofollowing unnecessary links": is that really a good idea?
I have been following Yoast's Magento guide here: https://yoast.com/articles/magento-seo/ Under section 3.2 it says: "Nofollowing unnecessary links. Another easy step to increase your Magento SEO is to stop linking to your login, checkout, wishlist, and all other non-content pages. The same goes for your RSS feeds, layered navigation, add to wishlist, add to compare etc." I always thought that nofollowing internal links is a bad idea, as it just throws link juice out the window. Why would Yoast recommend this? To me they are suggesting link sculpting via nofollow, but that hasn't worked since 2009!
Technical SEO | PaddyDisplays
-
On March 10 a client's newsroom disappeared from the SERPs. Any idea why?
For years the newsroom, which is on the subdomain news.davidlerner.com, has ranked #2 for their brand-name search. On March 10 it fell out of the SERPs; it is completely gone. What happened? How can I fix this?
Technical SEO | MeritusMedia
-
How to fix "A description for this result is not available because of this site's robots.txt"?
Hi, I have many marketing URLs that 301-redirect to actual pages on my company's site. My URL provider says that the load those requests place on their servers from bots is too high, so they put a robots.txt on the redirection server! Strange or not? Now all of my campaign URLs that 301-redirect show this meta description: "A description for this result is not available because of this site's robots.txt." If you have the perfect solution, could you share it with me? Thank you.
Technical SEO | Vale7
-
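That snippet appears because robots.txt on the redirect server blocks Googlebot from fetching the URLs, so it never sees the 301s; it just indexes the bare URLs with no description. Assuming you can change that server's robots.txt, the fix is to let crawlers through:

```
# Before (on the redirect server): blocks everything, which produces
# the "description not available" snippet in the SERPs.
User-agent: *
Disallow: /

# After: allow crawling so bots can follow the 301 redirects.
User-agent: *
Disallow:
```

If the provider won't change it, moving the redirects to a server you control is the alternative.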
Why is the report telling me I have duplicate content for 'www' and no subdomain?
I am getting duplicate content flagged for most of my pages. When I look into your reports, the 'www' and 'no subdomain' versions are the culprit. How can I resolve this, given that www.domain.com/page and domain.com/page are the same page?
Technical SEO | cpisano
-
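The standard fix is to pick one hostname and 301 the other to it, so crawlers only ever see one version of each page. A sketch for Apache, redirecting the bare domain to www (domain.com is a placeholder):

```apache
RewriteEngine On
# Send domain.com/... to www.domain.com/... with a permanent redirect:
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
```

Setting the preferred domain in Webmaster Tools and adding canonical tags complement the redirect but don't replace it.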
JavaScript to manipulate Google's bounce rate and time on site?
I was referred to this "awesome" solution to high bounce rates. It is supposed to "fix" bounce rates and lower them through this simple script. When the bounce rate goes way down, rankings supposedly increase dramatically (an interesting claim, but not my question). I don't know JavaScript, but simply adding a script to the footer and watching everything fall into place seems a bit iffy to me. Can someone with experience in JS help me by explaining what this script does? I think it manipulates what it reports to GA, but I'm not sure. It was supposed to be placed in the footer of the page, and then you sit back and watch the dollars fly in. 🙂
Technical SEO | BenRWoodard
-
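Without seeing the script, a common pattern such "bounce fix" scripts use is the following (all names here are hypothetical): after a delay, the page sends a Google Analytics event. GA treats the event as a second interaction hit, so the session is no longer counted as a bounce, and "time on site" inflates too, since GA measures time between hits. Nothing about the visitor's real behaviour changes; only the reported metrics do.

```javascript
// Sketch of the typical pattern; installBounceTimer is an illustrative
// name, not from any real library.
function installBounceTimer(sendHit, delayMs) {
  // sendHit wraps whatever analytics call the page uses, for example:
  //   function () { ga('send', 'event', 'engagement', '15s-on-page'); }
  // Firing it after delayMs makes the session a non-bounce in GA.
  return setTimeout(sendHit, delayMs);
}
```

So yes, it manipulates reporting rather than rankings; treat any claimed ranking effect with skepticism.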
Issue with 'Crawl Errors' in Webmaster Tools
I have an issue with a large number of 'Not Found' pages being listed in Webmaster Tools. In the 'Detected' column the dates are recent (May 1st-15th). However, clicking into the 'Linked From' column, all of the link sources are old, many from 2009-10. Furthermore, I have checked a large number of the source pages to confirm the links no longer exist, and as expected they don't. Firstly, I am concerned that Google thinks there is a vast number of broken links on this site when in fact there is not. Secondly, if the errors do not actually exist (and never have), why do they remain listed in Webmaster Tools, which claims they were found again this month? Thirdly, what's the best and quickest way of getting rid of these errors? Google advises that the 'URL Removal Tool' will only remove the pages from the Google index, not from the crawl errors; the official line is that URLs that keep returning 404s get removed automatically. Well, how many 404s do they need in order to drop a URL and link that haven't existed for 18-24 months?! Thanks.
Technical SEO | RiceMedia
-
Are 301s advisable for low-traffic URLs?
We are using some branded terms in URLs that we have recently been told to stop using. The pages in question get little traffic, so we're not concerned about losing traffic from broken URLs; should we still set up 301 redirects for those pages after they are renamed? In other words, are there other serious considerations, besides the loss of traffic from direct clicks on broken URLs, that need to be weighed? This comes up because we don't have anyone in-house who can do the redirects, so we would need to pay our outside web development company. Is it worth it?
Technical SEO | PGRob
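For what it's worth, the redirects themselves are usually cheap. In Apache, each renamed page is a single line (paths here are placeholders), which is worth weighing against the quoted development cost:

```apache
# Preserve any existing links and bookmarks to the renamed page,
# and pass its accumulated link equity to the new URL:
Redirect 301 /old-branded-term-page /new-renamed-page
```

Besides direct clicks, the main consideration is inbound links: without the 301, any link equity the old URLs have earned is lost.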