Questions created by mjk26
How to deal with rel=canonical when using POST parameters
Hi there,

I currently have a number of URLs throughout my site of the form: https://www.concerthotels.com/venue-hotels/o2-academy-islington-hotels/256133#checkin_4-21-2024&checkout_4-22-2024&rooms_1&guests_2&artistid_15878:256133

This sends the user through to a page showing hotels near the O2 Academy Islington. Once the page loads, my code looks at the parameters specified in the # part of the URL and uses them to fill in a form, before submitting the form as a POST. This basically reloads the page, but checks the availability of the hotels first, and therefore returns slightly different content to the "canonical" version of this page (which simply lists the hotels before any availability checks are done).

Until now, I've marked the availability-checked version of the page as noindex,follow. But because the form was submitted with POST parameters, its URL looks exactly like the canonical one. So the two URLs are identical, but due to the POST parameters, the content is slightly different. Does that make sense?

My question is: should both versions of this page be marked as index,follow? A simplified sketch of the two states is below.
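For context, here is roughly what the <head> of each version contains - a simplified sketch based on the description above (and assuming a self-referencing canonical tag), not my exact markup:

```html
<!-- Version A: the "canonical" page, loaded via GET - plain hotel listing -->
<!-- URL: https://www.concerthotels.com/venue-hotels/o2-academy-islington-hotels/256133 -->
<link rel="canonical" href="https://www.concerthotels.com/venue-hotels/o2-academy-islington-hotels/256133">
<meta name="robots" content="index,follow">

<!-- Version B: the same URL, re-rendered after the availability form is submitted via POST -->
<!-- (the POST body carries the checkin/checkout/rooms/guests parameters, so the URL is unchanged) -->
<link rel="canonical" href="https://www.concerthotels.com/venue-hotels/o2-academy-islington-hotels/256133">
<meta name="robots" content="noindex,follow">
```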
Thanks
Mike

On-Page Optimization | mjk26
Which pages should I index or have in my XML sitemap?
Hi there, my website is ConcertHotels.com - a site which helps users find hotels close to concert venues. I have a hotel listing page for every concert venue on my site - about 12,000 of them, I think (and the same for nearby restaurants), e.g. https://www.concerthotels.com/venue-hotels/madison-square-garden-hotels/304484

Each of these pages lists the hotels near that concert venue. Users clicking on an individual hotel are brought through to a hotel (product) page, e.g. https://www.concerthotels.com/hotel/the-new-yorker-a-wyndham-hotel/136818

I made a decision years ago to noindex all of the /hotel/ pages, since they don't have a huge amount of unique content and aren't the pages I'd like my users to land on. The primary pages on my site are the /venue-hotels/ listing pages. I have similar pages for nearby restaurants, so there are approximately 12,000 venue-restaurants pages - again, one listing page for each concert venue.

However, while all of these pages are potentially money-earners, in reality the vast majority of subsequent hotel bookings have come from a fraction of the 12,000 venues. I would say 2,000 venues are key money-earning pages, a further 6,000 have generated income at a low level, and 4,000 are yet to generate income.

I have a few related questions:

1. Although there is potential for any of these pages to generate revenue, should I be brutal and simply delete a venue if it hasn't generated revenue within a time period, and just accept that, while it "could" be useful, it hasn't proven to be and isn't worth the link equity? Or should I noindex these poorly performing pages?
2. Should all 12,000 pages be listed in my XML sitemap? Or only the ones that are generating revenue - or perhaps just the ones that have generated significant revenue in the past and have proved most important to my business? (A sketch of what a trimmed sitemap might look like is below.)
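For illustration, a trimmed sitemap along the lines I'm describing might look like this - a hypothetical fragment (the Madison Square Garden URL is real; which pages to include is exactly the open question):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- proven money-earning venue pages (~2,000 of the 12,000) -->
  <url>
    <loc>https://www.concerthotels.com/venue-hotels/madison-square-garden-hotels/304484</loc>
  </url>
  <!-- ... -->
  <!-- the noindexed /hotel/ product pages are deliberately excluded -->
</urlset>
```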
Thanks
Mike

Technical SEO | mjk26
How quickly do Domain Authority and Page Authority update?
Hi, I think I read somewhere that Domain Authority and Page Authority are updated twice a month - is this true?

The reason I ask is that I had a very successful content piece released last month (around 20th May) that received some great links (mainly to a landing page, but also many links to my home page) from highly respected sources such as Time.com, Spin.com, RollingStone.com and DailyMail.co.uk. However, my Domain Authority hasn't changed at all, and the Page Authority for the page is only 47, which I was hoping would be higher for a page that received millions of visitors, 100k+ Facebook likes, and a wealth of sites linking in.

Also, in Open Site Explorer, when checking my home page, it says there are no Just-Discovered links for the last 60 days - again, I'm puzzled, as my home page has been referenced by many leading news sites.

Many thanks
Link Explorer | mjk26
Is it normal for my homepage's page authority to be higher than my domain authority?
Hi everyone, I've just been checking my Moz authority scores, and the Page Authority of my home page, www.concerthotels.com, is 53 (mozRank of 6.16), which seems pretty good to me. However, the Domain Authority is only showing as 45.

Is this typical? There is such a big difference between these two scores - what does that suggest? Is there anything obvious that I can do to bring the Domain Authority up to the level of authority that the homepage commands?

Similarly, the UK version of my site has a homepage Page Authority of 41, but its Domain Authority is only 30. Looking at numerous other websites, the homepage and domain scores are often very similar, or the Domain Authority is slightly higher, so it does look as if this is unusual.

Any comments would be greatly appreciated.

Many thanks
Mike
Moz Bar | mjk26
Does text, initially hidden within a tabbed structure, carry the same weight in Google?
Hi everyone, my site has suffered a number of organic drops this year, following a redesign, Panda, and Penguin. An example of one of my key pages is: http://www.concerthotels.com/venue-hotels/bridgestone-arena-hotels/326895

Earlier this year, I redesigned my site so that the four pages associated with each venue - for Bridgestone Arena, for example, a page with nearby hotels, one for user reviews, one for upcoming events, and one for general information - were combined into a single "Bridgestone Arena Hotels" page. I did this because I felt that many of the pages were very thin. The new page has tabs for reviews, tickets etc., with the default tab listing nearby hotel information - the primary aim of my website. (A simplified sketch of the tab markup is below.)

I'm worried that all the great unique user-review information I'm collecting is not being given the weighting it deserves, because that content is not immediately visible when the user lands on the page - only clicking the Reviews tab makes it visible. The hidden content is definitely being picked up by Google: searching for a portion of the review content, such as "We were here for the Aerosmith concert. The workers were so friendly and helpful - great experience!", serves up the Bridgestone Arena page in the results. But do you think Google still sees the page as pretty thin in content, because much of the unique content is initially hidden?

I am considering introducing a small featured-reviews section in the visible content, including just a couple of the latest venue reviews, with a link that opens the Reviews tab. But if I have some review content there, and the same reviews in a hidden section of the same page, is Google likely to treat this as spammy?
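To make the structure clearer, the tabs work roughly like this - a simplified sketch, not the exact markup on the page:

```html
<!-- default tab: the nearby-hotels listing, visible on page load -->
<div id="tab-hotels">
  <!-- ... hotel listings (the primary content) ... -->
</div>

<!-- Reviews tab: present in the HTML, but hidden until the tab is clicked -->
<div id="tab-reviews" style="display:none">
  <p>We were here for the Aerosmith concert. The workers were so
     friendly and helpful - great experience!</p>
  <!-- ... more unique user reviews ... -->
</div>
```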
Thanks for your help and advice,
Mike

Intermediate & Advanced SEO | mjk26
Following a Penguin 2.0 hit in May, my site experienced another big drop on August 13th
Hi everyone, my website experienced a 30% drop in organic traffic following the Penguin 2.0 update in May. This was the first significant drop the site had experienced since 2007, and I was initially concerned that the new website design I released in March was partly to blame. On further investigation, many spammy sites were found to be linking to my website; I immediately contacted those sites and asked for the links to be removed, before submitting a disavow file to Google.

At the same time, I've had some great content written for my website over the last few months, which has attracted over 100 backlinks from some great websites, as well as lots of social media interaction. So, while I realise my site still needs a lot of work, I do believe I'm trying my best to do things in the correct manner.

However, on August 11th, I received a message in Google WMTs: "Googlebot found an extremely high number of URLs on your site". I studied the table of internal links in WMTs and found that Google has been crawling many URLs throughout my site that I didn't necessarily intend it to find, i.e. lots of URLs with filtering and sorting parameters added. As a result, many of my pages are showing in WMTs as having over 300,000 internal links!

I immediately tried to rectify this issue, updating the URL parameters section in WMTs to tell Google to ignore many of the URLs it comes across with these filtering parameters attached. In addition, since my access logs showed that Googlebot was frequently crawling all the URLs with parameters, I also added some Disallow entries to robots.txt to tell Google and the other spiders to ignore many of these URLs (a sketch of the entries is below). So, I now feel that if Google crawls my site, it will not get bogged down in hundreds of thousands of near-identical pages and will just see the URLs that are important to my business.

However, two days later, on August 13th, my site experienced a further huge drop, so it's now down by about 60-70% of what I would expect at this time of year (and there is no sign of any manual webspam action).

My question is: do you think the solutions I've put in place over the last week could be to blame for the sudden drop, or do you think I'm taking the correct approach, and that the recent drop is probably due to Google getting bogged down in the crawling process? I'm not aware of any subsequent Penguin updates in recent days, so I'm guessing this issue is somehow due to the internal structure of my new design. I don't know whether to roll back my recent changes or just sit tight and hope it sorts itself out over the next few weeks, when Google has had more time to do a full crawl and observe the changes I've made.

Any suggestions would be greatly appreciated. My website is ConcertHotels.com.
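The robots.txt entries I added are along these lines (a sketch - "star" is a real filtering parameter from my URLs; the other parameter names here are placeholders for the rest):

```
User-agent: *
# keep crawlers out of the filtered/sorted variants of the listing pages
Disallow: /*?star=
Disallow: /*?sort=
Disallow: /*?checkin=
```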
Many thanks
Mike

Intermediate & Advanced SEO | mjk26
Penguin 2.0 drop due to poor anchor text?
Hi, my website experienced a 30% drop in organic traffic following the Penguin 2.0 update, and after years of designing my website with SEO in mind, generating unique content for users, and focusing only on relevant websites in my link-building strategy, I'm a bit disheartened by the drop in traffic.

Having rolled out a new design of my website at the start of April, I suspect that I've accidentally messed up the structure of the site, making it difficult to crawl, or making Google think that it is spammy. Looking at Google Webmaster Tools, the number 1 anchor text on the site is "remove all filters" - which is clearly not what I want!

The "remove all filters" link appears when my hotels page loads with filters, sorting or availability dates in place - I included that link to make it easy for users to view the complete hotel listing again. An example of this link is towards the top right-hand side of this page: http://www.concerthotels.com/venue-hotels/agganis-arena-hotels/300382?star=2 (a sketch of the markup is below).

With over 6,000 venues on my website, this link has the potential to appear thousands of times, and while the anchor text is always "remove all filters", the destination URL differs depending on the venue the user is looking at. I'm guessing that to Google this looks VERY spammy indeed!? I tried to make the filtering/sorting/availability pages less visible to Google's crawl when I designed the site, through the use of forms, jQuery and JavaScript etc., but it does look like the crawler is managing to access these pages and find the "remove all filters" link.

What is the best approach to take when a standard "clear all..." type link is required on a listing page, without making the link appear spammy to Google? It's a link which is only in place to benefit the user - not to cause trouble!

My final question to you guys is: do you think this one sloppy piece of work could be enough to cause my site to drop significantly following the Penguin 2.0 update, or is it likely to be a bigger problem than this? And if it is probably due to this piece of work, is it likely that solving the problem could result in a prompt rise back up the rankings, or is there going to be a black mark against my website going forward, slowing down recovery?
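For clarity, the link in question is generated along these lines - a sketch of the markup as I've described it, not the exact code:

```html
<!-- rendered on a filtered page such as
     /venue-hotels/agganis-arena-hotels/300382?star=2 -->
<!-- the same "remove all filters" anchor text appears on 6,000+ venues,
     each pointing at a different unfiltered listing URL -->
<a href="http://www.concerthotels.com/venue-hotels/agganis-arena-hotels/300382">remove all filters</a>
```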
Any advice/suggestions will be greatly appreciated,
Thanks
Mike

Web Design | mjk26
Comparing the site structure/design of my live site to my new design
Hi SEOmoz team, for the last few months I've been working on a new design for my website. The old, live design can be viewed at http://www.concerthotels.com - it is primarily focused on helping users find hotels close to concert venues throughout North America.

The old structure was built in such a way that each concert venue had a number of different pages associated with it (all connected via tabs): a page with information about the venue, a page with nearby hotels, a page of upcoming events, and a page of venue reviews. An example of these pages can be seen at:

http://www.concerthotels.com/venue/madison-square-garden/304484
http://www.concerthotels.com/venue-hotels/madison-square-garden-hotels/304484
http://www.concerthotels.com/venue-events/madison-square-garden-events/304484
http://www.concerthotels.com/venue-reviews/madison-square-garden-reviews/304484

The /venue-hotels/ pages are the most important pages on my website - there is one for each concert venue, and they are the landing pages for about 90% of the traffic on the site.

I decided that having four pages for each venue was probably a poor design, since many of the pages ended up having little or no useful, unique content. So my new design attempts to bring a lot of the venue information together into fewer pages. The redesign is temporarily situated at http://www.concerthotels.com/frontend (not currently launched to the public). The equivalent pages for Madison Square Garden are now:

http://www.concerthotels.com/frontend/venue/madison-square-garden/304484 (this page contains venue information, events and reviews)
http://www.concerthotels.com/frontend/venue-hotels/madison-square-garden-hotels/304484

I would really appreciate any feedback from you guys on the new site design compared to the old design from an SEO point of view. Of course, any feedback on site speed, ease of use etc. compared to the old design would also be greatly appreciated. 🙂

My main fear is that when I launch the new design (the new URLs will be identical to the old ones), Google will take a dislike to it - I currently receive a large percentage of my traffic through Google organic search, so I don't want to launch a design that might damage that traffic. My gut instinct tells me that Google should prefer the new design - a vastly reduced number of pages, each page now containing more unique content, and very much designed for users - so I'm hoping bounce rate, conversion etc. will improve too. But my gut has been wrong in the past! 🙂 (A sketch of how I assume the retired tab URLs would be redirected is below.)
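For completeness, I'm assuming the retired /venue-events/ and /venue-reviews/ URLs would be 301-redirected to the consolidated venue page when the new design launches - a hypothetical sketch in Apache syntax, not my actual configuration:

```
# assumed redirects for the retired event/review tab pages (Apache mod_alias)
Redirect 301 /venue-events/madison-square-garden-events/304484 http://www.concerthotels.com/venue/madison-square-garden/304484
Redirect 301 /venue-reviews/madison-square-garden-reviews/304484 http://www.concerthotels.com/venue/madison-square-garden/304484
```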
I'd love to hear your thoughts, and thanks in advance for any feedback.
Cheers
Mike

Web Design | mjk26
Could an HTML <select> with large numbers of <option value="<url>"> entries affect my organic rankings?
Hi there, I'm currently redesigning my website, and one particular page lists hotels in New York. Some functionality I'm thinking of adding is to let the user find hotels close to specific concert venues in New York.

My current thinking is to provide the following select element on the page - selecting any one of the options will automatically redirect to my page for that concert venue. The purpose of this isn't to affect the organic traffic - I'm simply introducing this as a tool to help customers find the right hotel - but I certainly don't want it to have an adverse effect on my organic traffic. I'd love to know your thoughts on this. I must add that in certain cities, such as New York, there could be up to 450 different options in this select element.

```html
<!-- choosing an option navigates straight to that venue's page -->
<select onchange="location=options[selectedIndex].value;">
  <option value="">Show convenient hotels for:</option>
  <option value="http://url1..">1492 New York</option>
  <option value="http://url2..">Abrons Arts Center</option>
  <option value="http://url3..">Ace of Clubs New York</option>
  <option value="http://url4..">Affairs Afloat</option>
  <option value="http://url5..">Affirmation Arts New York</option>
  <option value="http://url6..">Al Hirschfeld Theatre</option>
  <option value="http://url7..">Alice Tully Hall</option>
  <!-- ... up to ~450 options per city ... -->
</select>
```

Many thanks
Mike
Intermediate & Advanced SEO | mjk26
Block-level tags inside <a> tags - is this bad?
Hi, I'm currently redesigning my website, and in many places I've now decided to make links a little bit more obvious for the user, using block-level tags within an <a> tag in order to make the entire block of text clickable. I was just wondering if this could have a negative impact in the search engines. My gut feeling is no, since I'm actually improving usability, but I guess it could have an impact on how Google looks at the anchor text?

An example of the HTML is as follows:

```html
<!-- the whole block - venue name, address and distance - is one clickable link -->
<a href="http://localhost:8080/frontend/venue-hotels/cristal-night-club-hotels/301022"
   title="Hotels near Cristal Night Club">
  Cristal Night Club Hotels
  <address>1045 5th Street<br>
  Miami Beach, FL 33139</address>
  6.4 miles from Miami Dade County Auditorium
</a>
```

Thanks for your thoughts and comments,
Best wishes
Mike

Intermediate & Advanced SEO | mjk26
Will duplicate content supplied by a hotel provider damage my website, or just the pages it appears on?
Hi, I currently have a lot of hotel listing pages with little or no content, as I'm scared that if I place duplicate hotel descriptions on the pages, Google will stop ranking them. I've found that having descriptions of some kind helps conversion significantly, so I'm considering generating unique hotel descriptions on each main page (page 1 in each set of listings) - these are the pages that Google indexes. On subsequent pages (page 2, page 3 etc.) I'm thinking of resorting to the duplicate affiliate hotel descriptions - these pages can be crawled but are set to noindex. (A sketch of the setup is below.)

My question is: do you think this is likely to have an effect on my website in the rankings and, as a result, push my primary pages (which contain 100% unique content) down in the SERPs?
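To illustrate the setup I'm considering (a sketch - the class name and description text are placeholders, not real affiliate copy):

```html
<!-- page 1 of a venue's listings: indexed, with unique hand-written descriptions -->
<meta name="robots" content="index,follow">
<div class="hotel-description">Unique description written for this page...</div>

<!-- pages 2, 3, ...: crawlable but noindexed, reusing the provider's duplicate copy -->
<meta name="robots" content="noindex,follow">
<div class="hotel-description">Duplicate description supplied by the hotel provider...</div>
```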
Thanks
Mike

On-Page Optimization | mjk26