Job Title: Lead Technical Engineer
Company: SEO Traffic Lab
Favorite Thing about SEO
I enjoy the diversity of what you can do and the satisfaction of driving quality traffic to a website.
Hi Andy
This is why I asked the question: the wiki is on its own domain, so these aren't internal links.
Andrew
Hi everyone
This may seem a bit obvious, but I am getting conflicting answers on it. We have a client whose wiki is basically an online manual for their software.
They do it this way because the manual is so big and is constantly developing. There are thousands of pages with loads of links pointing to various relevant sections of the main site as well. The majority of these are nofollow, but I have noticed that there is a single link in the navigation that points directly to their main site and is followed, and obviously this link is sitewide.
Would this be seen as detrimental to the main site? Should I set this to nofollow as well?
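For reference, the followed link in question is just a standard anchor in the navigation; setting it to nofollow would only mean adding the rel attribute, something like this (the URL below is purely a placeholder):

<!-- current sitewide navigation link (followed by default) -->
<a href="https://www.example-main-site.com/">Main site</a>
<!-- the same link marked as nofollow -->
<a href="https://www.example-main-site.com/" rel="nofollow">Main site</a>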
Thanks in Advance
From my understanding, if you are using domains just for the sake of redirecting them, they don't help much and can even hurt your rankings. This used to work years ago, but in this day and age, if the domain has never had any content or gained any real value in the eyes of the search engines, why would you redirect it?
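That said, if a spare domain genuinely had value worth passing, the usual way to point it at the main site is a permanent (301) redirect. A minimal sketch, assuming an Apache server; the domain names are placeholders:

# .htaccess on the spare domain: send every request to the main site with a 301
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?old-spare-domain\.com$ [NC]
RewriteRule ^(.*)$ https://www.main-site.com/$1 [R=301,L]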
Hi Tymen
It's not really my area of expertise, but I have read a really good article on Moz, 'Enabling HTTPS Without Sacrificing Your Web Performance' by Billy Hoffman, that may be of some assistance.
https://moz.com/blog/enabling-https-without-sacrificing-web-performance
Hope it helps and good luck with improving things to your satisfaction.
Andy
Hi
You will certainly have to update your profile in Search Console (Webmaster Tools) on both Google and Bing. Most modern analytics can handle HTTPS without any change, but if you have older tracking code on the site you may have to update the script.
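As an illustration of the sort of older code that needs attention, anything hard-coded to http:// (an old tracking or library script, for example) will cause mixed-content warnings once the site is served over HTTPS; updating the reference to https:// fixes it. The script name below is just a made-up example:

<!-- old: hard-coded http:// reference breaks on an HTTPS page -->
<script src="http://www.example-cdn.com/tracking.js"></script>
<!-- updated: load it over https:// instead -->
<script src="https://www.example-cdn.com/tracking.js"></script>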
There is a really good post on switching to HTTPS here on the Moz blog that may help with other considerations you might have missed.
https://moz.com/blog/seo-tips-https-ssl
Hope this helps
Andy
There is a first-rate post on this subject here that could explain it far better than most of us could.
https://moz.com/learn/seo/page-authority
Hope this helps.
Andy
I wouldn't particularly class it as thin content, but it is almost certainly going to be classed as near-duplicate content, as the pages only vary by a small amount, even though your descriptions appear varied and well written.
It may be better in this instance to focus on one of the pages as the main page and then canonicalise or noindex the others.
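A minimal sketch of those two options, using placeholder URLs; the canonical tag goes in the head of each near-duplicate page and points at the page you have chosen as the main one, while the meta robots tag would keep a variant out of the index altogether:

<!-- on each near-duplicate variant page, pointing at the chosen main page -->
<link rel="canonical" href="https://www.example.com/main-page/">
<!-- or, to keep a variant out of the index while still letting its links be followed -->
<meta name="robots" content="noindex, follow">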
Andy
Hi Davit
If, as you say in your own words, 'her audience is not our target customer' and you would not get anything from this mention in the way of paying customers, then I would ask yourself: if her audience is not relevant, how would Google see a mention of your site on her blog as relevant?
Are all the other posts on her blog in a similar vein, or is she a genuine blogger discussing a particular topic consistently? If it's the former, then I probably wouldn't bother.
Hope this helps.
The simple answer is yes: if you build a landing page based on that keyword, it will be competing with your homepage for that term. The worst-case scenario is that it could even cause the homepage's ranking for that term to drop.
Ideally, every page on your site should target a single keyword and every page should target a different one. Ultimately, you should still be creating your pages for users and not for the search engines.
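As a purely hypothetical illustration (the site name and keywords are made up), giving the homepage and the landing page their own target terms might look like this in the title tags:

<!-- homepage: targets the broad brand/category term -->
<title>Acme Widgets | Handmade Widgets and Accessories</title>
<!-- landing page: targets the specific keyword rather than repeating the homepage term -->
<title>Blue Garden Widgets | Acme Widgets</title>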
Hope this helps
Hi Tom
I agree with Martijn that it depends. For example, robots.txt is generally the first port of call for bots, as it allows them to understand where you want them to spend their finite time crawling your site. You can also give direction to all bots at once or specify a subset. It is generally the best option for blocking pages such as your /cart/ etc. where they don't need crawling.
The problem with robots.txt is that it doesn't always keep pages from being indexed, especially if there are other external sources linking to the pages in question.
The meta tag noindex, on the other hand, can be applied to individual pages, and you are actually commanding the robots NOT to index the relevant page in the SERPs. Use this option if you have pages you don't want appearing in Google (or other search engines) but that may still be relevant for authority or able to acquire links (make sure to use noindex, follow), as you still want the robots to crawl the page. Otherwise use noindex, nofollow. There is a quick sketch of both options below. Hope that this helps.
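Both options with placeholder paths; the robots.txt lines keep all bots away from the cart, and the meta tag goes on any individual page you want crawled but kept out of the index:

# robots.txt at the site root
User-agent: *
Disallow: /cart/

<!-- in the head of an individual page: out of the index, links still followed -->
<meta name="robots" content="noindex, follow">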
Firstly, I cannot actually think of any legitimate reason for hiding a category or making it invisible. However, if you had one, meaning you don't want the content to be indexed, then you would be best blocking that category within your robots.txt file.
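As a sketch, assuming the category lives under its own directory (the path below is a placeholder), the robots.txt entry is a single Disallow rule:

User-agent: *
Disallow: /hidden-category/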
If it is for any other reason and the content is indexable by ANY search engine, not just Google, then you run the risk of being penalised.
In fact, Google's guidelines state that "hiding text or links in your content to manipulate Google's search rankings can be seen as deceptive and is a violation of Google's Webmaster Guidelines".
I would agree with Silkstream on this one. This app used to be called Just Unfollow, so you have probably heard of it before. It is quick to set up and easy to use, allowing you to search on various aspects as described above.
Definitely a useful tool to have in your arsenal.
We had a similar thing happen with a client a while ago, and it was to do with point 3 as mentioned by Tom. It turned out that the site had been hacked and had some very adverse and unwanted links added to the footer that were invisible to the naked eye and could not be found by searching the code.
We were recommended a little plugin for Chrome called User Agent Switcher, which identified and revealed these hidden links on the site. Once they were dealt with, the site recovered to where it was previously.
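For anyone wondering how links can be invisible both on the page and in the source you download as a normal visitor, the usual trick is that the hacked template only outputs them when the visitor looks like a search engine crawler, which is why switching the browser's user agent to Googlebot exposes them. A purely hypothetical sketch of that kind of injected footer markup (the anchor text and URLs are made up):

<!-- only served when the User-Agent matches a search engine bot -->
<div class="footer">
  <a href="http://spam-example-one.com/">cheap payday loans</a>
  <a href="http://spam-example-two.com/">discount pills online</a>
</div>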
I have checked all of the URLs you added in your question above and I can confirm that they are all clean and green according to the Structured Data Testing Tool highlighted by Dirk above (https://developers.google.com/structured-data/testing-tool/), so I am not sure what the SEO company is looking at.
Just as an update on where we are with this, I have seen an article this morning in which many others are reporting significant activity around the 20th/21st, but it still doesn't appear to be clear what is causing it.
Still, it is an interesting read and update.
http://searchengineland.com/google-update-their-ranking-algorithms-some-webmasters-believe-so-234185
Hope it helps
I have always found Cognitive SEO to be a reliable and very in-depth tool for this kind of work. It is a paid-for model, but you can have a free trial to see if it will do what you need it to.
There is a great article on link audits here as well: https://moz.com/blog/link-audit-guide-for-effective-link-removals-risk-mitigation
After spending over 10 years managing multiple ecommerce stores in the UK for leading household brands, I now manage a technical team working with numerous clients to deliver quality traffic and results. Away from the office I love local history, having been involved with our Local Heritage Centre as a founder and member for over 20 years, and I can be found most evenings on Facebook uploading the latest pictures of times gone by.