Base HREF set without HTTP. Will this cause search issues?
-
The base href has been set in the following format:
<base href="//www.example.com/">
I am working on a project where many on the programming team don't believe that SEO has an impact on a website, so we often see some strange things. Recently they rolled out an update to the website template that includes the base href listed above. I found out about it when some of our tools, such as the Xenu link checker, suddenly stopped working.
Google appears to be indexing the pages fine and following the links without any issue, but I wonder if there are any long-term SEO considerations to building the internal links in this manner?
Thanks!
-
Thanks for the comment. I was able to get them to make the changes, but I think I have made some new enemies. Oh well, I will move on in a few months anyhow.
Thanks again,
Joe
-
The standards do technically allow this: "//www.example.com/" is a valid scheme-relative (network-path) reference, and it inherits the protocol of the page it appears on. That said, it's a very unusual thing to put in a base href, and it can cause minor issues in some older browsers and in tools that expect a full "http://" or "https://" prefix.
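The resolution mechanics are easy to reproduce: a browser first resolves the scheme-relative base against the page's own URL (so the base inherits the page's scheme), and only then resolves each link against that base. A quick sketch with Python's standard urllib (the page and link URLs are made up):

```python
from urllib.parse import urljoin

# The scheme-relative base href from the template
BASE_HREF = "//www.example.com/"

# Step 1: the base is resolved against the page URL,
# so it inherits whatever scheme the page was loaded over.
page_url = "https://www.example.com/products/index.html"
base = urljoin(page_url, BASE_HREF)
print(base)  # https://www.example.com/

# Step 2: every relative link is then resolved against that base.
print(urljoin(base, "rack-cards.html"))  # https://www.example.com/rack-cards.html
```

A tool that treats "//www.example.com/" as a literal path instead of a network-path reference never gets past step 1, which would explain Xenu falling over.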
Does it matter for SEO? Well, that's a bit trickier. Google tends to ignore the base href unless it's needed to resolve ambiguous relative URLs, such as a relative canonical tag. Practically speaking, it's probably not a huge problem, but it could cause issues down the road.
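To illustrate the ambiguity: because the base inherits the page's scheme, the same relative URL resolves to two different absolute URLs depending on whether the page was fetched over http or https (the URLs here are hypothetical):

```python
from urllib.parse import urljoin

BASE_HREF = "//www.example.com/"

for page_url in ("http://www.example.com/page.html",
                 "https://www.example.com/page.html"):
    # The base picks up the fetched page's scheme...
    base = urljoin(page_url, BASE_HREF)
    # ...so a relative canonical now points at a scheme-dependent URL.
    print(urljoin(base, "canonical-target.html"))
# http://www.example.com/canonical-target.html
# https://www.example.com/canonical-target.html
```

If both versions end up crawled, that's a classic http/https duplicate-content split, which is exactly the kind of ambiguity where the base href starts to matter.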
Either way, if it's in a sitewide template, it's a five-minute job, and what they have is clearly causing problems. I'm not one to knock devs (I've been a dev and I've managed devs), but they need to stop arguing and just fix it.
-
They have put it on every page. The programming manager is quick to point out that, according to the W3C, neither "http" nor "https" is required for valid links. I have just never seen anyone purposely make all internal links begin with a double slash (//). It certainly makes Xenu die, but I am not sure if there is any downside beyond Xenu and a few other tools not working.
Thanks!
-
Is that code on every page or just the homepage?