
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO

Discuss site health, structure, and other technical SEO strategies.


  • I have a website with lots of traffic, and sometimes the backends fail. I want to use lighttpd to show that the website is under maintenance and should be back up shortly. I was thinking of serving soft 503 errors or doing a 302 for every page to /maintenance.html. What would you do (besides fixing the backends, we are already doing that :P) to avoid hurting your SEO efforts? Thanks in advance, Mariano

    | marianoSoler98
    0
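A minimal sketch of the real-503 approach for the maintenance question above, assuming Apache with mod_rewrite and mod_headers (the asker's lighttpd would need its own equivalent, e.g. mod_magnet; /maintenance.html is the file name assumed from the question). A genuine 503 tells crawlers "temporarily unavailable, keep the original URLs indexed", whereas a sitewide 302 or a soft 200 risks /maintenance.html replacing your pages in the index:

```apache
# Maintenance-mode sketch: answer every request with a real 503 plus
# Retry-After instead of redirecting to /maintenance.html.
ErrorDocument 503 /maintenance.html
RewriteEngine On
# Let the maintenance page itself load normally; 503 everything else.
RewriteCond %{REQUEST_URI} !=/maintenance.html
RewriteRule ^ - [R=503,L]
# Hint to crawlers when to retry (seconds); requires mod_headers.
Header always set Retry-After "3600"
```

Note the `[R=503]` flag needs Apache 2.4 or newer; on older versions the usual workaround is rewriting to a script that emits the 503 status itself.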

  • Hi, I manage a property listing website which was recently revamped, but which has some on-site optimization weaknesses and issues. For each property listing like http://www.selectcaribbean.com/property/147.html there is an equivalent PDF version spidered by Google. The page looks like this: http://www.selectcaribbean.com/pdf1.php?pid=147 My question is: can this create a duplicate content penalty? If yes, should I ban these pages from being spidered by Google in the robots.txt, or should I make these links nofollow?

    | multilang
    0

  • Hi All, Just wondering if anyone could shed some light on the following. If I was ranking number 1 for a term, what would the effects be of creating another site, hosted on the same server / IP, same whois info, same URL but a different TLD, and trying to get this to rank for the term also. Does G restrict search results to one IP per page or is this perfectly possible? (The term is fairly uncompetitive) Thanks, Ben

    | Audiohype
    0

  • We have several dynamic servlets on our website which help users to calculate mortgages. Example: http://www.interhyp.de/interhyp/servlet/interhyp?view=showKaufMietSchieber&STYLE=b2c&alttrackwidth=546 Until now these servlets have been blocked via robots.txt to avoid "strange results being in the Google index". Problem / question: these servlets are used by lots of external affiliate websites, and I'd like to know how to optimize these inbound links? Thx.
    Manuel

    | BerndAngermann
    0

  • When you perform a search in Chrome, click through to a result, then hit "back", you get a nice little option to "Block all example.com results" listed next to the result from which you backed out. I am assuming Google collects this information from Chrome users whose settings allow them to? I am assuming this is a spam signal (in aggregate)? Anyone know? Thanks!

    | TheEspresseo
    0

  • We need to conduct an SEO analysis for a website that is on a private, password-protected development site. Is there any way for SEOmoz tools to access and analyze a PW-protected site? Thank you, Sara Merten

    | kev11
    0

  • The front page of our site dropped in late March from PR4 to PR1.  Yes, I know toolbar PR isn't terribly reliable, isn't much of an indicator of overall SEO, etc. - however, upper management will want to know what happened and what is being done to fix it. Of course, the answer is obvious: go build links.  But what might the cause be?  As I mentioned in a past Q&A, the site is entirely encrypted and as a result may be causing us to leak some juice (http backlinks of course make up the vast majority of our links).  We're planning to fix this once the site is ported over to a CMS, but that's still months off.  Other than that, what might be the problem?  Any ideas?

    | ufmedia
    0

  • Hi, We're getting 'Yellow' Search Engine Blocked by robots.txt warnings for URLs that are in effect product search filter result pages (see link below) on our Magento ecommerce shop. Our robots.txt file to my mind is correctly set up, i.e. we would not want Google to index these pages. So why does SEOmoz flag this type of page with a warning? Is there any implication for our ranking? Is there anything we need to do about this? Thanks. Here is an example URL that SEOmoz thinks the search engines can't see: http://www.site.com/audio-books/audio-books-in-english?audiobook_genre=132 Below are the current entries for the robots.txt file. User-agent: Googlebot
    Disallow: /index.php/
    Disallow: /?
    Disallow: /.js$
    Disallow: /.css$
    Disallow: /checkout/
    Disallow: /tag/
    Disallow: /catalogsearch/
    Disallow: /review/
    Disallow: /app/
    Disallow: /downloader/
    Disallow: /js/
    Disallow: /lib/
    Disallow: /media/
    Disallow: /.php$
    Disallow: /pkginfo/
    Disallow: /report/
    Disallow: /skin/
    Disallow: /utm
    Disallow: /var/
    Disallow: /catalog/
    Disallow: /customer/
    Sitemap:

    | languedoc
    0

  • My client has an existing domain, domain A. They recently purchased and absorbed another company with their own domain, domain B. For marketing purposes company B will be rebranded as company A. They want to redirect domain B to domain A. The problem is that company B has by far the more visible domain, with 4x the number of inbound links. If I redirect domain B to domain A, what will happen to these links? I'm thinking their value will be lost.

    | waynekolenchuk
    0

  • Hello, I have a problem with my website. I have a page on my website http://www.ensorbuilding.com/page.php/aboutus but if I type in www.ensorbuilding.com/page.php/aboutus/f8e45e9d9df6140bb5a7ff1173e8d828 or www.ensorbuilding.com/page.php/aboutus/0f0eea5e9ab0a3e8d91fad8fc0d3ce9c it still displays the about us page. Google is seeing this as duplicate content, so what I want to do is 301 redirect anything after www.ensorbuilding.com/page.php/aboutus. How could I implement a 301 redirect in this way?

    | danielmckay7
    0
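One possible .htaccess sketch for the question above, assuming Apache with mod_rewrite (the path /page.php/aboutus is taken from the question; the regex is an assumption about the URL shape, so test before deploying):

```apache
RewriteEngine On
# If anything extra follows /page.php/aboutus, 301 back to the clean URL.
# Tested against REQUEST_URI because PATH_INFO handling in .htaccess
# context can hide the trailing segment from the RewriteRule pattern.
RewriteCond %{REQUEST_URI} ^/page\.php/aboutus/.+ [NC]
RewriteRule .* /page.php/aboutus [R=301,L]
```

If other pages share the same /page.php/<name>/<junk> shape, the same pattern can be generalized with a captured group rather than repeating a rule per page.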

  • A client of ours uses some unique keyword tracking for their landing pages where they append certain metrics in a query string, and pulls that information out dynamically to learn more about their traffic (kind of like Google's UTM tracking). Nonetheless, these query strings are now being indexed as separate pages in Google and Yahoo and are being flagged as duplicate content/title tags by the SEOmoz tools. For example: Base Page: www.domain.com/page.html
    Tracking: www.domain.com/page.html?keyword=keyword#source=source Now both of these are being indexed even though it is only one page. So I suggested placing a canonical link tag in the header pointing back to the base page to start discrediting the tracking URLs. But this means that the base pages will be pointing to themselves as well; would that be an issue? Is there a better way to solve this issue without removing the query tracking altogether? Thanks - Kyle Chandler

    | kchandler
    0
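On the self-reference concern in the question above: a page whose canonical tag points at itself is generally considered harmless and is common practice, so the same tag can be emitted on the base page and on every tracking variant alike. A sketch, using the domain and path from the question:

```html
<!-- Placed identically in the <head> of www.domain.com/page.html
     and of every ?keyword=...-style tracking variant of it -->
<link rel="canonical" href="http://www.domain.com/page.html" />
```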

  • Currently our meta title says "Network Security Audit | Pivot Point Security" which is pretty broad considering how many services we offer.  In trying to restructure our keywords, marketing and SEO focus, I came up with a new title.  The problem I have is figuring out which keywords to use in the title, and with a company name with 3 words - I am running out of room. The new title idea is "Information Security Assessments - Penetration Testing | Pivot Point Security" So my questions are the following.  Do I need to put the company name? Should I choose different keywords?  I'm sort of at a standstill trying to figure out the best possible title since meta keywords or description won't really help ranking.

    | pivotpointsecurity
    0

  • By looking in my logs I see dozens of 404 errors each day from different bots trying to load robots.txt. I have a small site (150 pages) with clean navigation that allows the bots to index the whole site (which they are doing). There are no secret areas I don't want the bots to find (the secret areas are behind a Login so the bots won't see them). I have used rel=nofollow for internal links that point to my Login page. Is there any reason to include a generic robots.txt file that contains "user-agent: *"? I have a minor reason: to stop getting 404 errors and clean up my error logs so I can find other issues that may exist. But I'm wondering if not having a robots.txt file is the same as some default blank file (or 1-line file giving all bots all access)?

    | scanlin
    0
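For the question above: to the major crawlers, a 404 on robots.txt is treated the same as an allow-everything file, so the practical gain of adding one is exactly what the asker wants, cleaner error logs. A minimal allow-all file that stops the daily 404s looks like this (an empty Disallow value means nothing is blocked):

```
User-agent: *
Disallow:
```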

  • We have an existing e-commerce site built on X-Cart. The default store location is www.site.com/store. The domain root however is just a static HTML page (currently using mainly graphics) and a nav menu. What would be a better option: 1. Move the install location to the root directory and get rid of the static HTML page. We would have to manually 301 redirect all the old pages to the new location. Not sure if there are negative implications with that. 2. Just optimize the HTML landing page? Seems like it is better to have products and categories as close to the root domain as possible... 3. 301 redirect the domain to www.site.com/store/ and optimize the homepage within the store. This option means we don't have to worry about 2000 redirects or the hassle of moving the store. Anyone had any experience with this and suggestions?

    | BlinkWeb
    0

  • One of my clients is an email subscriber-led business offering deals that are time sensitive and which expire after a limited, but varied, time period. Each deal is published on its own URL and in order to drive subscriptions to the email, an overlay was implemented that would appear over the individual deal page so that the user was forced to subscribe if they wished to view the details of the deal. Needless to say, this led to the threat of a Google penalty which appears (fingers crossed) to have been narrowly avoided as a result of a quick response on our part to remove the offending overlay. What I would like to ask you is whether you have any safe and approved methods for capturing email subscribers without revealing the premium content to users before they subscribe? We are considering the following approaches: First Click Free for Web Search - This is an opt-in service by Google which is widely used for this sort of approach and which stipulates that you have to let the user see the first item they click on from the listings, but can put up the subscriber-only overlay afterwards. Noindex, nofollow - if we simply noindex, nofollow the individual deal pages where the overlay is situated, will this remove the "cloaking offense" and therefore the risk of a penalty? Partial View - If we show one or two paragraphs of text from the deal page with the rest being covered up by the subscribe-now lock up, will this still be cloaking? I will write up my first SEOmoz post on this once we have decided on the way forward and monitored the effects, but in the meantime, I welcome any input from you guys.

    | Red_Mud_Rookie
    0

  • Hi, We have been performing our own onsite and offsite SEO along with external assistance and have ranked well over the years with minimal impact from Google updates. However the last so-called Panda update has affected us heavily, pushing our main phrase 'web design melbourne' from 2nd to 7th, where we have been for almost 2 months now on Google.com.au irrespective of onsite or offsite work. We have been trying to find signs of any onsite, IP, duplicate content, titles or other issues that may be holding us back, to no avail. The only flag that Google Webmaster Tools is showing is a number of bad internal site links, which I think is a glitch with the CMS we are using. Even the SEOmoz tool gives us a higher ranking compared to most competitors on page 1 of Google.com.au for our main phrase. The biggest difference between us and competitors is we chose to target an internal page specific to the topic rather than our homepage. With this said, we have also reduced our keyword density and content quantity in line with the other sites' homepages. Can anyone help shed some light on this? Perhaps something obvious that we have missed, or where we should be looking? Thanks.

    | paulsid
    0

  • I have been working on a domain for site which has only been up for a few months. I have mainly been working on building trust and authority for the site, with only a few links to my keyword phrases, which I was saving for the end, and concentrating on good quality non-spammy type links. It was progressing nicely until last weekend when I went from the first page on Google to the 3rd and 4th pages. Was there some kind of update?

    | waynekolenchuk
    0

  • Working with a large number of duplicate pages due to different views of products.  Rewriting URLs for the most linked page.  Should rel=canonical point to the rewritten URL or the actual URL? Is there a way to see what the rewritten URL is within the crawl data? I was taking the approach of rewriting only the base version of each page and then using a rel=canonical on the duplicate pages.  Can anyone recommend a better or cleaner approach? Haven't seen too many articles on retail SEO when faced with a less than optimized CMS. Thanks!

    | AmsiveDigital
    0

  • I have an odd situation. I have a CMS that has a global robots.txt which has the generic User-Agent: *
    Allow: / I also have one CMS site that needs to not be indexed ever. I've read in various pages (like http://www.jesterwebster.com/robots-txt-vs-meta-tag-which-has-precedence/22 ) that robots.txt always wins over meta, but I have also read that robots.txt indicates spiderability whereas meta can control indexation. I just want the site to not be indexed. Can I leave the robots.txt as is and still put NOINDEX in the robots meta?

    | Highland
    0
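One nuance worth flagging for the robots.txt-vs-meta question above: a meta noindex only works if crawlers are allowed to fetch the page, so the permissive global robots.txt is actually what makes the tag effective; if robots.txt blocked the URL, Google could never see the noindex and might still list the bare URL from external links. The tag, placed in the head of every page of the site that must stay out of the index, is:

```html
<meta name="robots" content="noindex, nofollow" />
```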

  • I believe that the Google and Bing crawlbots understand wildcards for the "disallow" URLs in robots.txt - does Roger?

    | AspenFasteners
    0
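For reference on the question above, the wildcard syntax that Googlebot and Bingbot document support for looks like the sketch below (the patterns are illustrative assumptions, not rules for any real site); whether any other crawler honors `*` and `$` has to be checked against that crawler's own documentation, since they are extensions rather than part of the original robots.txt convention:

```
User-agent: *
# '*' matches any sequence of characters within the path
Disallow: /*?sessionid=
# '$' anchors the pattern to the end of the URL
Disallow: /*.pdf$
```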

  • Hi, SEO-wise, which is the best way to display the homepage with a CMS (Joomla here)? 1. Display articles as a "blog", so the articles are always recent but not the same (chronological). 2. Always display the same single article, but update it from time to time. 3. Are both of them good? Thanks a lot in advance.

    | mozllo
    0

  • A photographer client has a flash website, purchased as from a (well respected) template company. The main site is at the root domain, and the HTML version is at www.example.com/?load=html If I visit the site on a browser without Flash installed, I am re-directed automatically to the HTML version. I'm concerned as the site has some great links and the HTML version is well optimised, but doesn't appear anywhere in Google for chosen keywords (ranks perfectly for brand related searches). Google is indexing the Flash version of the site, but I would rather it didn't (there's no real content (just Javascript to load the SWF) and all of the pages load under one URL). How can I block the Flash version from Google but still make the incoming links count towards the HTMl version of the site? If I re-direct Google to the HTML version, is this cloaking, and is it frowned upon? Thanks for any advice you can offer.

    | cmaddison
    0

  • On my homepage, I currently link to about 40 internal pages. I'm considering altering the internal linking structure to have 50-100 links on the 2nd level pages. If I was to do this, I'd only need 8 homepage links. Do you think the 8 pages linked from the homepage would go up in the SERPs as the pagerank would be less diluted? I've heard so many mixed views on this. Be interested to see what people here think. Thanks, Pete

    | PeterM22
    0

  • One of my clients has a template page they have used repeatedly each time they have a new news item. The template includes a two-paragraph customer quote/testimonial for the company. So, they now have 100+ pages with the same customer quote. The rest of the page content / body copy is unique. Is there any likelihood of this being considered duplicate content?

    | bjalc2011
    0

  • I am working with a client, and when I check on SERP placement, I never see the "www" in the SERPs, only nameofcustomer.com, not www.nameofcustomer.com. Of course there is a redirect going on... Question is: should this matter at all? I don't understand the relationship between this kind of redirect and SEO. Thanks, Mozzers

    | Giggy
    0

  • Hi, If you change a template (WordPress, Joomla CMS...) without changing the information organization (URLs...), can this have an impact on your SERPs? Thanks a lot...

    | mozllo
    0

  • Hi, I'm working to optimize a site that currently has about 5K pages listed in the sitemap.  There are not in fact this many pages.  Part of the problem is that one of the pages is a tool where each sort and filter button produces a query string URL. It seems to me inefficient to have so many items listed that are all really the same page.  Not to mention wanting to avoid any duplicate content or low quality issues. How have you found it best to handle this?  Should I just noindex each of the links?  Canonical links? Should I manually remove the pages from the sitemap?  Should I continue as is? Thanks a ton for any input you have!

    | 5225Marketing
    0

  • I have the following problem - one of my clients (a Danish home improvement company) decided to block all the international traffic (leaving only Scandinavian traffic), because they were getting a lot of spammers using their mail form to send e-mails. As you can guess this led to blocking Google also, since the servers of Google Denmark are located in the US. This led to a drop in their rankings. So my question is: what shall I do now, wait or contact Google? Any help will be appreciated, because to be honest I had never seen such a thing in action until now 😄 Best Regards

    | GroupM
    0

  • I am working on an ecommerce site and my crawl report came back with 7000+ 302 redirects and maxed out at 10,000 pages because of all the redirects. The site really only has maybe 1500 pages (dynamic content aside). After looking into it a little more I see it is because of the product rating system. They have a star rating system that kinda looks like Amazon's. The only problem is that each star is a link to a dynamic address that records the vote and then 302's back to the original page the vote was cast from. So virtually every page on this site links out anywhere from 15 to 45 times and 302's back to itself, losing virtually all of its PR.  Am I correct in that assumption or am I missing something? I don't see the links being blocked by robots.txt or noindex, nofollowed. Also it is an anonymous rating system where a rating can be cast from any category page displaying a product or any product page. To make matters worse every page links to a printable version which duplicates the issue by repeating the whole thing over again. So assuming I am correct that this site has a major PR leak on virtually every page, what is the best recommendation to fix this? 1. Block all of those links in robots.txt, 2. noindex, nofollow these links, 3. put the rating system behind a submit button or disallow anonymous ratings, or 4. something else? Looking at their product ratings on the site, virtually everything is between 2-3 stars out of 5 and has about the same number of votes, except fewer votes on deeper pages. I don't believe this is real at all since this site gets almost no traffic and maybe 1 sale a week; there is no way that any product has been rated 50 times. I think the crawler is voting as it crawls, and doing it 5 times for every product, which is why everything is rated 2.5 out of 5. This is an X-Cart site in case anyone cares. Any suggestions?

    | BlinkWeb
    0

  • When I started SEO, I didn't really know what I was doing (still don't!) Just wondering if anyone can help me with this small problem. I now understand that I basically have 4 URLs: www.ablemagazine.com (Page Authority: 38/100), www.ablemagazine.co.uk (Page Authority: 47/100), ablemagazine.com (Page Authority: 3/100), ablemagazine.co.uk (Page Authority: 51/100). What should the configuration be to ensure I'm not losing massive amounts of link juice? At the moment I have ablemagazine.co.uk set as my default domain in Webmaster Tools. www.ablemagazine.com, www.ablemagazine.co.uk and ablemagazine.com all 301 redirect here (I think)

    | craven22
    0

  • Hi folks, I have a website (www.mysite.com) where I can't host a 404 page inside the subdomain www because a CMS issue. In order to provide a 404 page, I can create a subdomain like “404.mysite.com” that returns 404 and then if I find that a page does not exist I can redirect it to 404.mysite.com. My question is: should I redirect as 301 or 302? Does it have any difference?

    | fabioricotta-84038
    0

  • We have external links pointing to both mydomain.com and www.mydomain.com. I read this: http://www.stepforth.com/resources/web-marketing-knowledgebase/non-www-redirect/ and wondered if I should add this to my .htaccess file: RewriteCond %{HTTP_HOST} ^mydomain.com
    RewriteRule (.*) http://www.mydomain.com/$1 [R=301,L] so that the link juice all flows to the www version of the site? Any reason not to do it?

    | scanlin
    0

  • Hi everyone,
    One of my ecommerce sites uses BigCommerce.  They have a feature where you can add different currency buttons to change the currency that the customer can shop with. This is great because if people from the UK visit our site, they can change the currency to their own rather than US. It just adds a variable on the end of the URL string to change the currency. However, in my Webmaster Tools I noticed that I think I am getting a bunch of duplicate content.  For example, it thinks I have duplicate title tags for the following: domainname/pages/my-cool-widget.html
    domainname/pages/my-cool-widget.html?setCurrencyId=1
    domainname/pages/my-cool-widget.html?setCurrencyId=2
    domainname/pages/my-cool-widget.html?setCurrencyId=3
    domainname/pages/my-cool-widget.html?setCurrencyId=4 I thought about adding "rel=nofollow" but unfortunately I don't have access to this file to edit the code.  Any suggestions?

    | BeachDude
    0

  • I have a question regarding Converse.com. I realize this ecommerce site needs a lot of SEO help. There's plenty of obvious low-hanging SEO fruit.  On a high level, I see a very large SEO issue with the site architecture. The site is a full-page Flash experience that uses a # in the URL. The search engines pretty much see every Flash page as the home page. To help with this issue an HTML version of the site was created. Google crawls the Home Page - Converse.com http://www.converse.com Marimekko category page (Flash version) http://www.converse.com/#/products/featured/marimekko Marimekko category page (HTML version, need to have Flash disabled) http://www.converse.com/products/featured/marimekko Here is an example of the issue. This site has a great post featuring Helen Marimekko shoes http://www.coolmompicks.com/2011/03/finnish_foot_prints.php The post links to the Flash Marimekko category page (http://www.converse.com/#/products/featured/marimekko) as I would expect (ninety-something percent of visitors to converse.com have the required Flash plug-in). So the Flash page is getting the link juice. But the Flash page is invisible to Google. When I search for "converse marimekko" in Google, the Marimekko landing page is not in the top 500 results. So I then searched for "converse.com marimekko" and see the HTML version of the landing page listed as the 4th organic result. The result has the HTML version of the page. When I click the link I get redirected to the Flash Marimekko category page, but if I do not have Flash I go to the HTML category page. ----- Marimekko - Converse All Star Marimekko Price: $85, Jack Purcell Helen Marimekko Price: $75 ... www.converse.com/products/featured/marimekko - Cached So my issues are... Is Converse skating on thin SEO ice by having an HTML and Flash version of their site/product pages?
Do you think it's a huge drag on SEO rankings to have a large % of backlinks linking to Flash pages when Google is crawling the HTML pages? Any recommendations on what to do about this? Thanks, SEOsurfer

    | seosurfer-288319
    0

  • I have read through the many articles regarding the use of meta noindex, but what I haven't been able to find is a clear explanation of when, why, or what to use it on. I'm thinking that it would be appropriate to use it on: legal pages such as privacy policy and terms of use
    search results pages
    blog archive and category pages Thanks for any insight on this.

    | mmaes
    0

  • I'm getting a duplicate content error for: http://www.website.com http://www.website.com/default.htm I searched the Q&A for the solution and found: access the .htaccess file and add this line: redirect 301 /default.htm http://www.website.com I added the redirect to my .htaccess and then got the following error from Chrome when trying to access the http://www.website.com/default.htm page: "This webpage has a redirect loop
    The webpage at http://www.webpage.com/ has resulted in too many redirects. Clearing your cookies for this site or allowing third-party cookies may fix the problem. If not, it is possibly a server configuration issue and not a problem with your computer." "Error 310 (net::ERR_TOO_MANY_REDIRECTS): There were too many redirects." How can I correct this? Thanks

    | Joeuspe
    0
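A hedged sketch of why the loop in the question above happens, assuming Apache: with DirectoryIndex set to default.htm, a request for / is internally served by /default.htm, which the `Redirect 301 /default.htm` rule then matches again, bouncing the browser forever. Testing the client's literal request line instead only fires when the browser explicitly asked for /default.htm:

```apache
RewriteEngine On
# THE_REQUEST is the raw request line ("GET /default.htm HTTP/1.1"),
# so this does not match when mod_dir serves default.htm internally for "/"
RewriteCond %{THE_REQUEST} ^[A-Z]+\s/default\.htm[\s?] [NC]
RewriteRule ^default\.htm$ http://www.website.com/ [R=301,L]
```

This replaces the `redirect 301 /default.htm ...` line from the question rather than supplementing it.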

  • Hello, I'd like to redirect all versions of my homepage to www.inthelighturns.com so I force Google to see only one page. Are the rewrites below all I need?
    RewriteCond %{HTTP_HOST} ^inthelighturns.com [NC]
    RewriteRule ^(.*)$ http://www.inthelighturns.com/$1 [R=301,NC]
    RewriteRule ^index.(htm|html) http://www.inthelighturns.com/ [R=301,L]
    RewriteRule ^(.*)/index.(htm|html) http://www.inthelighturns.com/$1/ [R=301,L]
    Thanks, Tyler

    | tylerfraser
    0

  • My site never shows the www in the yahoo SERPS.  I don't do well with ranking in yahoo.  Could these two facts be related? How do I get one canonicalization - preferably the www in yahoo?

    | DavidS-282061
    0

  • I've cut and pasted webmaster reports showing the webmaster the "301" pages permanently moved, however they still believe the pages are up and running normally, as 200s. It's been tough to get them to acknowledge the problem, however I'm certain it's negatively affecting results. Any help would be greatly appreciated, asap!

    | ankurv
    0
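One quick way to settle the disagreement in the question above is to fetch a URL without following redirects and look at the raw status code the server returns. A small Python sketch (the URL at the bottom is a placeholder, not a site from the question):

```python
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    # Returning None tells urllib NOT to follow 3xx responses, so the
    # status code the server actually sent stays visible.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def raw_status(url):
    """Return the first status code for url, without following redirects."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        return opener.open(url, timeout=10).status
    except urllib.error.HTTPError as e:
        return e.code  # the unfollowed 301/302 (or a 4xx/5xx)

if __name__ == "__main__":
    # Placeholder URL: substitute the page the webmaster insists is a 200
    try:
        print(raw_status("http://example.com/"))
    except OSError:
        print("network unavailable")
```

If the page really 301s, `raw_status` reports 301 even though a browser (which follows the redirect) ends up on a 200 page, which is exactly the confusion described above.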

  • Hi, We launched tmart 60 days ago and submitted it to Google, Bing and Yahoo 20 days later. But Google has still not indexed our website, while Yahoo indexed it in one week. What we have checked or tried: 1. We got 20~50 inlinks in one month and now 81 inlinks via Yahoo Site Explorer. 2. This domain has been registered for 13 years and we purchased it from Sedo last year. We did not find any problems in the domain archive pages. 3. Page similarity: the homepage was 50% similar to one of our competitors when we just launched, so we adjusted the page structure, modified the content one month later, and decreased the similarity to 30% (by tools from webconfs.com). 4. Google robots: Googlebot crawled our website every day after we submitted it for indexing. We opened a GWT account for it and added the XML sitemap last week. GWT said nothing was wrong except the time of page loading. Our questions: Why has Google not indexed our website? What should we do? Thanks, wu

    | zt673
    0

  • Hi there, When searching site:mysite.com my keyword I found the "same page" twice in the SERPs. The URLs look like this: Page 1: www.example.com/category/productpage.htm Page 2: www.example.com/category/productpage.htm?ss=facebook The ?ss=facebook is caused by a bookmark button inserted in some of our product pages. My question is: will the canonical tag solve this? Thanks!

    | Nobody1556552953909
    0

  • Hi!! I am launching a newly recoded site this week and had a another noobie question. The URL structure has changed slightly and I have installed a 301 redirect to take care of that. I am wondering how Google will handle my "old" pages? Will they just fall out of the index? Or does the 301 redirect tell Google to rewrite the URLs in the index? I am just concerned I may see an "old" page and a "new" page with the same content in the index. Just want to make sure I have covered all my bases. Thanks!! Lynn

    | hiphound
    0

  • Hi guys: What is the best way to keep Google from crawling certain URLs with parameters? I used the setting in Webmaster Tools, but that doesn't seem to be helping at all. Can I use robots.txt or some other method? Thanks! Some examples are:
    www.mayer-johnson.com/category/assistive-technology?manufacturer=179
    www.mayer-johnson.com/category/assistive-technology?manufacturer=226
    www.mayer-johnson.com/category/assistive-technology?manufacturer=227
    www.mayer-johnson.com/category/english-language-learners?condition=212
    www.mayer-johnson.com/category/english-language-learners?condition=213
    www.mayer-johnson.com/category/english-language-learners?condition=214
    www.mayer-johnson.com/category/english-language-learners?roles=164
    www.mayer-johnson.com/category/english-language-learners?roles=165
    www.mayer-johnson.com/category/english-language-learners?roles=197

    | DanaDV
    0
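For the question above, a robots.txt pattern sketch (category paths copied from the example URLs; the parameter names are taken from the question). Note this blocks crawling, not indexing, so a rel=canonical on the filtered pages pointing at the unfiltered category is often the safer complement:

```
User-agent: *
# '*' matches the category slug; the parameter name anchors the match
Disallow: /category/*?manufacturer=
Disallow: /category/*?condition=
Disallow: /category/*?roles=
```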

  • Adios! (or something), I've noticed in my SEOmoz campaign that I am getting duplicate content warnings for URLs with query strings. For example: /login.php?action=lostpassword /login.php?action=register etc. What is the best way to deal with these types of URLs to avoid duplicate content penalties in search engines? Thanks 🙂

    | craigycraig
    0

  • Hi!! Another noob question: Should I be nofollowing my site's cart and checkout pages? Or as SEs can't get to the checkout pages without either logging in or completing the form is it something I shouldn't worry about? Have read things saying both. Not sure which is correct. Thank you! Appreciate the help. Lynn

    | hiphound
    0

  • One of my clients has an odd domain redirect situation. See if you can get your head round this: Domain A is set-up as a domain alias of Domain B Entering domain A or domain B takes you to default.asp on domain B. The default.asp includes VB script to check the HTTP_HOST variable. It checks whether the main doman name for domain A is present in the HTTP_HOST and if so redirects it to domain A/sub-folder/index.htm. If not present it redirects to domain B/index.htm. In both cases the redirect uses a response.Redirect clause. I think what is trying to be achieved is to redirect requests to Domain A to a sub-folder of Domain B. It works but seems extremely convoluted. Can anyone see problems with this set-up? Will link juice be lost along the redirect paths?

    | bjalc2011
    0

  • Out of habit, I've always put a "-" or dash to separate items in the title tag. However, I've noticed that more and more sites are using either a ":" or "|" in the title.  Is there one that is better to use than the other?

    | beeneeb
    0

  • Hi, Looking for advice, tips, and link resources to improve local SEO in Google Places on the google.fr domain, as the website's business is in France! Thanks a lot in advance..

    | mozllo
    0

  • Hello, we are using Joomla as our CMS. Months ago we used a component to create friendly URLs, and lots of them got indexed by Google. While testing the component we created three different types of URL; the problem now is that all of these tests are showing in Google Webmaster Tools as 404 errors: 37,309 not-found pages, and this number is increasing every day. What do you suggest to fix this? Regards.

    | Zertuxte
    0
