Set base-href to subfolders - problems?
-
A customer is using the <base> tag in an odd way:
<base href="http://domain.com/1.0.0/1/1/">
My theory is that the subfolders end up as the root because of the revision-control system.
CSS, images and internal links are referenced with relative URLs that resolve against this base.
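For example (the file names here are illustrative, not the client's actual markup), relative references like these all resolve against the base instead of the page's real location:

<link rel="stylesheet" href="css/style.css">  <!-- resolves to http://domain.com/1.0.0/1/1/css/style.css -->
<img src="images/logo.png">  <!-- resolves to http://domain.com/1.0.0/1/1/images/logo.png -->
<a href="products/">  <!-- resolves to http://domain.com/1.0.0/1/1/products/ -->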
I ran a test with Xenu Link Sleuth and found many broken links on the site, but I can't say whether they are caused by the base tag.
I have read that the base tag may cause problems in some browsers, but is this usage also bad from an SEO perspective? I have a lot of problems with this customer's site and want to know whether the base tag is part of them.
-
Hi Highland!
I know that relative URLs are anything but good, especially when you also use URL rewriting.
The only question is how Google will react to this?
Thanks for your answer!
-
Hi Cyrus and thanks for your answer!
The client is using the base tag on every page of the site, but with different URLs. For example:
Root page: <base href="http://domain.com/1.0.1.0/2/1/">
Subpage: <base href="http://domain.com/1.0.1.0/5/1/"> or <base href="http://domain.com/1.0.1.0/13/1/">
Product page: <base href="http://domain.com/1.0.1.0/14/1/">
As you can see, they are using a lot of different base locations, and unfortunately we are unable to change the base URL to test.
We have problems with both broken links and rankings. Whenever a new version of the system is released, all the base URLs change, which means links pointing to the old version remain out there and break.
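For example (the newer version number and the file name here are hypothetical), the same relative link resolves to a different URL under each version's base, so every URL indexed under the old version breaks with the release:

<!-- old release -->
<base href="http://domain.com/1.0.1.0/5/1/">
<a href="products/widget.html">  <!-- resolves to http://domain.com/1.0.1.0/5/1/products/widget.html -->

<!-- after the version bump -->
<base href="http://domain.com/1.0.2.0/5/1/">
<a href="products/widget.html">  <!-- resolves to http://domain.com/1.0.2.0/5/1/products/widget.html -->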
What do you think, Cyrus: can this hurt us from an SEO perspective? It must be confusing for Google to see all these different base URLs.
I think the best option would be to rebuild the structure and remove the base tag!
-
Most of the time you don't need to specify a base URL; the browser already knows the page's location. In some situations defining a base is helpful, such as on mirrored sites where the URL a visitor uses is not the URL needed to resolve files.
Is your client using a universal base tag that is the same across the entire site? I can't tell from the question, but that is a common setup that could potentially cause problems.
There's nothing inherently wrong with using a base tag. Most of the time, if you use it, you simply want to set it to the URL of the current page. That said, to avoid complications, the only time you really want to use the base tag is when relative URLs wouldn't work without it.
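As a sketch (the URL here is hypothetical), a harmless self-referencing base just restates the page's own location, so relative links resolve exactly as they would without it:

<!-- on the page http://domain.com/products/widgets/ -->
<base href="http://domain.com/products/widgets/">
<a href="blue-widget.html">  <!-- resolves to http://domain.com/products/widgets/blue-widget.html either way -->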
You might want to test how the links on your site resolve and see if removing or modifying the base tag helps clear up your broken links.
-
Those are some sloppy URLs. I especially advise people to avoid the pitfalls of relative paths in ANY URL. And yes, <base> probably isn't helping.
Links starting with / are fine; that's the root of your site. Anything using "../" should be nixed in favor of a fixed path. And never, ever use "./".
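A quick sketch of how each form resolves against one of the bases above (the link targets are illustrative):

<base href="http://domain.com/1.0.0/1/1/">
<a href="/contact/">  <!-- root-relative: http://domain.com/contact/ (ignores the base path) -->
<a href="../page.html">  <!-- climbs the base path: http://domain.com/1.0.0/1/page.html (fragile) -->
<a href="./page.html">  <!-- fully base-dependent: http://domain.com/1.0.0/1/1/page.html (avoid) -->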