Are one-time event pages considered thin content? Should they be archived or redirected?
-
Since past event pages will go stale after the event, should they be kept alive and archived, with links from only a couple of places (for instance, the main event page and the HTML sitemap)?
Or should they be "retired" and redirected to the main event page if they are really no longer needed?
They would probably be considered thin content, because they won't get much traffic and will have very few links pointing to them. Right?
Thanks. Inquiring minds want to know...
-
Keep them or remove them; don't redirect them unless they have external links. You want to keep things simple.
If you do remove them, make sure you remove all links pointing to them from your own site; you don't want broken links.
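As a sketch of that cleanup step: a small script can scan your own pages' HTML for links pointing at the URLs you plan to retire. The paths below are hypothetical placeholders, not from the thread.

```python
from html.parser import HTMLParser

# Hypothetical set of event URLs you plan to retire
RETIRED = {"/events/widgetcon-2013", "/events/widgetcon-2012"}

class LinkFinder(HTMLParser):
    """Collect every <a href> target on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def links_to_retired_pages(html: str) -> list:
    """Return internal links that would break once the pages are removed."""
    finder = LinkFinder()
    finder.feed(html)
    return [h for h in finder.links if h in RETIRED]
```

Run something like this over each page of the site before deleting the old event pages, and fix whatever it reports.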
-
You may have already gotten your answer from previous responses, but if you haven't: what sorts of events do you have pages for? Is it possible to have one URL (since you mention the main event page) which changes content based on what is coming up next? This may make the process easier.
-
Hi,
If it's really for temporary events with a limited life, you could use the unavailable_after meta tag, which tells search engines the date after which the page should drop out of the index. If it's about recurring events, I would do as Egol says.
More info on the tag can be found here: http://googleblog.blogspot.be/2007/07/robots-exclusion-protocol-now-with-even.html, and here is an interview with Matt Cutts on the tag (the context is e-commerce, but it could just as well apply to temporary events): http://searchengineland.com/googles-matt-cutts-seo-advice-unavailable-e-commerce-products-186882
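For reference, the tag looks roughly like the example below. The helper name is mine, and the date format follows the examples in Google's original announcement; check current documentation before relying on it.

```python
from datetime import datetime

def unavailable_after_tag(dt: datetime, tz: str = "GMT") -> str:
    """Build an unavailable_after robots meta tag for a page that
    should drop out of Google's index after the given date.
    Date format follows Google's 2007 announcement examples."""
    stamp = f"{dt.strftime('%d-%b-%Y %H:%M:%S')} {tz}"
    return f'<meta name="googlebot" content="unavailable_after: {stamp}">'
```

For example, `unavailable_after_tag(datetime(2015, 6, 1, 18, 0, 0))` produces a tag telling Googlebot to drop the page after June 1, 2015.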
rgds,
Dirk
-
If these events are recurring, like once per year, then I would make a permanent page for them.
If that event is featured on other websites, then I would make sure that the next upcoming event is always on the page and always reflected in the title tag. People will search for it with the year: "2015 Spring Widget Dealer's Convention". The bigger the event, the more important this is.
I attend a couple of events regularly and I search for them a few times each year. I can't remember the URL of their website, so I just Google the name of the event. I want to mark next year's event on my calendar, I want to see what they have on the preliminary schedule, I want to see the final schedule, I need a map, and I am looking for deals on lodging. Event sites can get multiple visits from each attendee every year. Lots of websites have info on this event, but only one of them does a great job. I remember that URL when I see it in the SERPs. It is also usually the only URL that has the current year in the title tag. Other webmasters don't update their title tags promptly, or don't include the year. That's a mistake, because Google Suggest shows the year.
A friend of mine maintains a massive event calendar with all of the events in this industry listed in chronological order on one huge, LONG page. He gets tons of traffic because he is the only person who puts this work into it. Lots of people know his site because of this calendar, and lots of websites link to it. I have an industry news blog and link to his events page every time I see that he has updated it (about six times per year); I have a robot monitoring it. Lots of people click from my blog to his page every time I list it. Event pages can be valuable if you do a great job on them.
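A monitoring "robot" like the one mentioned can be as simple as hashing the page on each fetch and comparing fingerprints between visits. This is a sketch of the idea, not the actual script.

```python
import hashlib

def page_fingerprint(html: str) -> str:
    """Hash a page's HTML so repeated fetches can be compared cheaply."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def calendar_updated(last_fingerprint: str, current_html: str) -> bool:
    """True when the page content changed since the stored fingerprint."""
    return page_fingerprint(current_html) != last_fingerprint
```

In practice you would fetch the page on a schedule and strip volatile markup (ads, timestamps) before hashing, or most fetches will look like updates.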
-
It might depend on how large the event is. For really large-scale things (sporting events, annual conferences, etc.) it makes more sense for there to be an archive of some sort. With more iterative events that are collectively focused on topical awareness, I'd be more inclined to redirect those pages to whichever event in the series is most current.
Related Questions
-
Different content on pages with the same URL--except one is at www and the other at www2
Hi! I have two pages with unique content on each. However, they have virtually the same URL--except one is a www and the other is a www2. As far as I know, both pages were meant to gain organic traction. How should this situation be handled for SEO purposes? Thanks for any help! ---Ivey
Intermediate & Advanced SEO | Nichiha
Wondering if creating 256 new pages would cause duplicate content issues
I just completed a long post that reviews 16 landing page tools. I want to add 256 new pages that compare each tool against each other. For example: Leadpages vs. Instapage, Leadpages vs. Unbounce, Instapage vs. Unbounce, etc. Each page will have one product's information on the left and the other's on the right. So each page will be a unique combination, BUT the same product information will be found on several other pages (its other comparisons against the other 15 tools), because the Leadpages comparison information (a table) will be the same no matter which tool it is being compared against. If my math is correct, this will create 256 new pages, one for each combination of the 16 tools against each other! My site is new and only has 6 posts/pages, if that matters. I want to make sure I don't create a problem early on... Any thoughts?
Intermediate & Advanced SEO | martechwiz
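A quick sanity check on the arithmetic (tool names below are placeholders): 16 tools give 120 unordered pairs, or 240 ordered pairs; 16 × 16 = 256 only if each tool is also "compared" against itself.

```python
from itertools import combinations, permutations

tools = [f"tool-{i}" for i in range(1, 17)]  # 16 placeholder tool names

unordered_pairs = list(combinations(tools, 2))  # "A vs. B" is the same page as "B vs. A"
ordered_pairs = list(permutations(tools, 2))    # "A vs. B" and "B vs. A" are separate pages

print(len(unordered_pairs))  # 120
print(len(ordered_pairs))    # 240
```

So the real page count depends on whether "Leadpages vs. Instapage" and "Instapage vs. Leadpages" get separate URLs, which also bears directly on the duplicate-content question.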
Trailing slashes for Magento CMS pages - 2 URLs - duplicate content
Hello, Can anyone help me fix Magento CMS pages so that each one uses only one URL instead of two?

www.domain.com/testpage
www.domain.com/testpage/

I found a previous article that applies to my issue: using htaccess to 301 redirect requests for Magento pages from the non-slash URL to the slash URL. I don't fully understand htaccess syntax, but I used the code below. It fixed the CMS page redirection but caused issues on other pages, like all my categories and products, with this error: "This webpage has a redirect loop (ERR_TOO_MANY_REDIRECTS)".

```
# Assuming you're running at domain root; change the working directory if needed.
RewriteBase /

# www check. If you're running in a subdirectory, add it to the
# redirect target (http://www.mydomain.com/subdirectory/$1).
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,L]

# Trailing slash check; don't fix direct file links.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_URI} !(.*)/$
RewriteRule ^(.*)$ $1/ [L,R=301]

# Finally, forward everything to the front controller (index.php).
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule .* index.php [QSA,L]
```

Intermediate & Advanced SEO | iamgreenminded
Publishing pages with thin content, update later?
So I have about 285 pages I created with very, very thin content on each. Each is unique, and each serves its own purpose. My question is, do you guys think it is wise to publish all of these at once to just get them out there and update each as we go along? Each page is very laser targeted and I anticipate that a large handful will actually rank soon after publishing. Thanks! Tom
Intermediate & Advanced SEO | TomBinga1125
Parameter Strings & Duplicate Page Content
I'm managing a site that has thousands of pages due to all of the dynamic parameter strings being generated. It's a real estate listing site that allows people to create listings, and it generates lots of new listings every day. The Moz crawl report continually flags A LOT (25k+) of the site's pages for duplicate content due to all of these parameter-string URLs. Example: sitename.com/listings & sitename.com/listings/?addr=street name. Do I really need to do anything about those pages? I have researched the topic quite a bit but can't seem to find anything concrete on the best course of action. My original thinking was to add the rel=canonical tag to each of the main URLs that have parameters attached. I have also read that you can bypass that by telling Google which parameters to ignore in Webmaster Tools. We want these listings to show up in search results, though, so I don't know if either of these options is ideal, since each would cause the listing pages (the pages with parameter strings) to stop being indexed, right? Which is why I'm wondering whether doing nothing at all will hurt the site. I should also mention that I originally recommended the rel=canonical option to the web developer, who has pushed back, saying that "search engines ignore parameter strings." Naturally, he doesn't want the extra workload of setting up the canonical tags, which I can understand, but I want to make sure I'm giving him both the most feasible option for implementation and the best option to fix the issues.
Intermediate & Advanced SEO | garrettkite
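As a sketch of the rel=canonical option being weighed above: the tag for a parameterized listing URL could be generated by stripping the query string. The function name and the choice to drop the whole query string are my assumptions; a real implementation might preserve parameters that change the content meaningfully.

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_link(url: str) -> str:
    """Build a rel=canonical tag pointing at the URL minus its query
    string, so .../listings/?addr=main canonicalizes to .../listings/."""
    parts = urlsplit(url)
    clean = urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
    return f'<link rel="canonical" href="{clean}">'
```

For example, the tag for sitename.com/listings/?addr=main would point at sitename.com/listings/, which consolidates the duplicates but also means the parameterized variants stop ranking on their own, exactly the trade-off the question raises.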
Server responds with 302 but the page doesn't appear to redirect?
I'm working on a site and am running some basic audits, including a campaign within Moz. When I put the domain into any of these tools, including response header checkers, the response is a 302 that says there is a redirect to an error page. However, the page itself doesn't redirect and resolves fine in the browser. But all of the audit tools can't seem to get any information from any of the pages. What is the best way to troubleshoot what is going on here? Thanks.
Intermediate & Advanced SEO | jim_shook
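One way to troubleshoot is to inspect only the first response, without following redirects, so you see exactly what the audit tools see. The helpers below are a sketch (the names are mine). If the status differs when you change the User-Agent, the server is treating bots and browsers differently, which would explain the discrepancy.

```python
import http.client
from urllib.parse import urlsplit

def first_hop(url: str, user_agent: str = "Mozilla/5.0"):
    """Fetch only the first response for a URL, without following
    redirects, and return (status, Location header or None)."""
    parts = urlsplit(url)
    conn_class = (http.client.HTTPSConnection if parts.scheme == "https"
                  else http.client.HTTPConnection)
    conn = conn_class(parts.netloc, timeout=10)
    conn.request("GET", parts.path or "/", headers={"User-Agent": user_agent})
    response = conn.getresponse()
    result = (response.status, response.getheader("Location"))
    conn.close()
    return result

def diagnose(status: int, location) -> str:
    """Summarize a first hop in plain English."""
    if 300 <= status < 400:
        return f"redirects ({status}) to {location}"
    return f"serves {status} directly"
```

Comparing `first_hop(url)` with a browser-like User-Agent against a crawler-like one (e.g. "rogerbot") is the quickest way to confirm or rule out a conditional redirect.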
Can Googlebots read canonical tags on pages with javascript redirects?
Hi Moz! We have old location pages that we can't redirect to the new ones because they have AJAX. To preserve PageRank, we are putting canonical tags on the old location pages. Will Googlebot still read these canonical tags if the pages have a JavaScript redirect? Thanks for reading!
Intermediate & Advanced SEO | DA2013
Can a website be punished by Panda if content scrapers have duplicated its content?
I've noticed recently that a number of content scrapers are linking to one of our websites and have the duplicate content on their web pages. Can content scrapers affect the original website's ranking? I'm concerned that having duplicated content, even if hosted by scrapers, could be a bad signal to Google. What are the best ways to prevent this happening? I'd really appreciate any help as I can't find the answer online!
Intermediate & Advanced SEO | RG_SEO