Makes sense; I absolutely don't want to take chances with the menu and have it mistaken for cloaking. We will now look at other solutions for a more traditional menu with better internal linking and fewer links.
Thank you for your input!
It is visible in Google.se:
site:beyondthedeal.com
Hello,
I would just like to ask about best practice for reducing the number of internal links on a site with a mega menu.
Since the mega menu lists all categories and all their subcategories, it creates a problem: every category links directly to every other category.
Would the method below reduce the number of links and prevent the link juice from flowing directly from category to category?
(link built with JavaScript and the HTML5 "data-" attribute)
I'm thinking of using these links for categories in the menu that are not directly below the parent category.
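Something like this is what I have in mind (just a sketch; the class name and URL are placeholders):

<span class="menu-link" data-url="/category/subcategory/">Subcategory</span>

<script>
// No href, so crawlers see no link; on click we read the URL from the data- attribute
document.querySelectorAll('.menu-link').forEach(function (el) {
  el.addEventListener('click', function () {
    window.location.href = el.getAttribute('data-url');
  });
});
</script>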
Hello,
I don't understand why you redirected your index page to the "about us" page. Don't you want your home page to start at www.towerhousetraining.co.uk? You could have a short intro there, and people can read more by clicking "about us".
Canonical URLs are used on pages with very similar content. You basically tell the search engines which of the similar pages you wish to rank.
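For example, on the near-duplicate page you would add something like this (the URL is just a placeholder), pointing at the version you want to rank:

<link rel="canonical" href="http://www.example.com/preferred-page/" />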
Hello,
No I don't think the "image block" is why you don't get much search traffic (unless you used to get a lot of traffic from image search).
Have a look at Google Analytics and see when your site started dropping in traffic. Compare your stats with Google's Panda and Penguin updates and see if you can find any relation between an update and the drop.
This plugin for Chrome can help you overlay Google's updates on your analytics charts:
https://chrome.google.com/webstore/detail/chartelligence/njhdcfdiifemfnfddhfjmfbkajajceag
Hello,
Do you get traffic from more than one country? If so, are you checking your rankings in each country's version of Google? Are you searching depersonalized?
One last thing: the number of searches does vary; it could just be that there are fewer searches for your keywords at the moment. Check with Google Trends.
We had similar issues with too many indexed pages (about 100,000) on a site with about 3,500 actual pages.
By setting a canonical URL on each page and also preventing Google from indexing and crawling some of the URLs (robots.txt and meta noindex), we are now down to 3,500 URLs. The benefit, besides less duplicate content, is much faster indexing of new pages.
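As an illustration, our robots.txt blocks the parameterized duplicates with rules along these lines (the paths are made up; yours will differ):

User-agent: *
Disallow: /*?sort=
Disallow: /*?filter=
Disallow: /search/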
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=139394
Hello!
I would just check which pages Google has indexed, then do redirects from those URLs to the new ones. Also check which URLs other sites link to and make sure those are redirected to their new URLs.
For your overview pages I would redirect both the empty page and the overview page, just in case you get traffic on both.
Non-important pages (like your thank-you page) I wouldn't redirect.
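If you are on Apache, the redirects can be simple one-liners in .htaccess (the paths here are hypothetical):

Redirect 301 /old-page.html http://www.example.com/new-page/
Redirect 301 /old-overview.html http://www.example.com/overview/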
This is not an issue; it is quite common today with responsive designs. As long as the hidden elements are not there to trick the crawlers, you should be safe.
My guess is a couple of sentences (especially if it's the title or meta description).
Since it is so easy to find "borrowed" content, I would assume Google knows whether the content is unique. Not having unique content is a common SEO mistake in ecommerce, where many shops just reuse their suppliers' product descriptions.
Looks like a "parked" domain with ads on it.
Some web hosting companies put up search pages with ads, hoping to make some cash on unused domains.
Hello,
You can use the hreflang attribute to specify an alternative language and URL for a page.
<link rel="alternate" hreflang="x-default" href="http://www.example.com/" />
<link rel="alternate" hreflang="en-gb" href="http://en-gb.example.com/page.html" />
<link rel="alternate" hreflang="en-us" href="http://en-us.example.com/page.html" />
<link rel="alternate" hreflang="en" href="http://en.example.com/page.html" />
<link rel="alternate" hreflang="de" href="http://de.example.com/seite.html" />
To learn more about this, see http://support.google.com/webmasters/bin/answer.py?hl=en&answer=189077
Hi David!
Yes they will leak "SEO juice" <- don't like the phrase either!
To be safe with Google, I would make sure these banner ads have rel="nofollow".
Since Google changed the rules to prevent PageRank sculpting, the banner ads will still leak link juice (it just won't flow to the site linked to).
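In practice each banner link would look something like this (URL and image are placeholders):

<a href="http://advertiser.example.com/" rel="nofollow">
  <img src="/banners/ad.jpg" alt="Advertiser banner" />
</a>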
Works great! This last week is the first time in 6 months I ever had a problem (with the ranking).
Hello,
The robots directive will only prevent Google from crawling the pages. In order to remove the pages from the index you need to add a meta noindex to the pages you want removed.
<meta name="robots" content="noindex">
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=93710
It is because you have many alt attributes in your source code. I count 92; however, not all of them contain a description.
IIS loves 302s... Ask your developer to change the 302 to a 301 instead.
The indexed page will then be "/nl/nl/SomeOtherPage.cms" and the "link juice" will flow to it.
Also, stick with lowercase in the URLs.
The .cms extension is not an issue, imo.
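If the server runs IIS 7+ with the URL Rewrite module installed, a rule like this inside <system.webServer> in web.config (just a sketch) would 301 any URL containing uppercase letters to its lowercase form, covering both points at once:

<rewrite>
  <rules>
    <!-- 301 mixed-case URLs like /nl/NL/SomeOtherPage.cms to their lowercase form -->
    <rule name="Lowercase 301" stopProcessing="true">
      <match url="[A-Z]" ignoreCase="false" />
      <action type="Redirect" url="{ToLower:{URL}}" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>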
Hello,
I don't know if you coded the site yourself, but I would suggest a "no JavaScript" version with regular URL variables the crawlers can follow. In your sitemaps you then list the "no JavaScript" URLs.
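A common pattern is to keep a real href as the fallback and let JavaScript take over for users (loadPage() here is a hypothetical helper; the URL is a placeholder):

<a href="/products?page=2" class="ajax-nav">Next page</a>

<script>
// Crawlers follow the plain href; JS users get the ajax behavior instead
document.querySelectorAll('.ajax-nav').forEach(function (link) {
  link.addEventListener('click', function (e) {
    e.preventDefault();
    loadPage(link.href);
  });
});
</script>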
I have had the same happen to two of our sites. Same language, but different country and Tld.
Google ignores the hreflang element. We even made sure to include the country in the page title and in the footer. Since we use country-specific TLDs we have no option to set a specific market in Webmaster Tools, and shouldn't need to.
I guess it is just the way Google works when you have two sites with basically the same content.
In our case we weren't penalized; it's just that Google thinks our foreign site is more relevant.
Hello!
One of our websites receives a large amount of traffic from the Baidu crawler. We do not have any Chinese content, nor do we do any business with China, since our market is the UK.
Is it a good idea to block the Baidu crawler in robots.txt, or could it have any adverse effects on the SEO of our site?
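Something like this in robots.txt is what I had in mind (Baiduspider being, as far as I know, Baidu's user-agent):

User-agent: Baiduspider
Disallow: /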
What do you suggest?
Hi!
Basically, Google treats URLs with a "/" at the end as a folder and URLs without a "/" as a file. For sub-folders you should decide which format to use:
www.domain.com/blog OR www.domain.com/blog/
Then make a 301 redirect (best) or put a canonical link pointing to the preferred version.
For linking to your start page I would just go without the "/"; it doesn't really matter there, imo.
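On Apache you could enforce the with-slash format with something along these lines in .htaccess (a sketch; the file check leaves real files like .css alone):

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*[^/])$ /$1/ [R=301,L]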
More info from Google http://googlewebmastercentral.blogspot.se/2010/04/to-slash-or-not-to-slash.html
Have you tried searching for a term from one of those unlisted pages? Perhaps they are just far back in the search results?
Did you try submitting the URLs in WMT to have them indexed faster?
Do you have any old URLs you can 301 redirect to the new, unlisted pages?
Did you, as Kevin suggested, submit a sitemap?
Hi Nikolaj!
If your page has only been live for a month, it is too early to expect it to start ranking. Your domain needs some authority and links, and needs to get fully indexed. Sometimes Google can be a bit slow in the beginning. Good links will for sure help you out; just don't buy them.
Perhaps offer a free photo shoot to people who blog about you?
If your changes affect the URLs, make sure you have working 301 redirects ready so the old URLs still work.
Looks like your content can be reached from several different URLs, like the "Uncategorized" category.
Here is an excellent article on how to set up WordPress to avoid common SEO traps.
Hi,
I don't think it's because of Penguin that you went from #1 to #5. From what I understand, a Penguin hit is much worse.
It could just be that Google doesn't give as much value to an exact-match domain, or it could be natural fluctuation. I know our keywords move up and down a few places every week.
I would go with noindex on the result page, simply because those search pages can vary quite a bit depending on what people search for.
It wouldn't help you to create a new short URL unless you redirect the old URL to the new one. Depending on the competition, you can still rank well with a URL that doesn't contain the keyword/phrase.
From a user-experience perspective it's of course much better to have the keyword/phrase in the URL: people see what your page is about just by studying it.
Rule of thumb: the h1 heading should describe the paragraph/content below it, in no more than one sentence. For the headings that follow, use the h2 tag.
Write for the users. Of course you should pick the words/phrases people actually use when they search.
Make the headings compelling, encouraging the user to keep on reading.
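So a page could be outlined like this (the copy is just filler):

<h1>Hand-made leather shoes</h1>
<p>Short intro that the h1 describes...</p>
<h2>How our shoes are made</h2>
<p>...</p>
<h2>Caring for leather</h2>
<p>...</p>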
Hi!
In your case you should do a 301 redirect of all non-www URLs to the www version of your site. You don't need canonicals for this; it is better to skip the non-www entirely (or the other way around). Just decide which version to go with. Old link juice will follow the redirect to the new page.
Use canonicals if you have a bunch of similar pages you would like to merge into one, like two pages with basically the same content.
Check your site and what headers it puts out with this tool: http://www.webconfs.com/http-header-check.php The response for non-www should be 301; very important!
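On Apache the redirect could be set up like this in .htaccess (swap in your own domain):

RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]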
Btw, a lot of canonicals are not bad if they are used wisely
Hello!
It can take a couple of weeks (especially if you have a large site), and no, you don't need to change anything else for Google to pick up the new title.
A tip is to use Google Webmaster Tools:
Health -> Fetch as Google
Then submit the URLs to Google with "Submit to index".
You can manually submit up to 500 URLs to be indexed, and they are usually added to the index within 24 hrs.
With "overly dynamic pages" means the search engines might have a problem crawling them.
They will not hurt your seo unless they have poor content, duplicate content etc. So if I were you I wouldn't worry too much.
However if you don't want these pages crawled you can exclude them in your robots.txt file.
Also add a meta tag in the header of these pages to have them removed from the index.
I googled and I found this:
"If you don’t pass the exam, you can retake it after 14 days, but you must pay another $50 each time you do."
http://www.forthea.com/blog/2012/01/11/passing-the-google-analytics-iq-test-you-can-do-it/
You should look into using rel="alternate" hreflang="x". This way you can specify which country and language a page is created for.
Read about it here:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=189077
Good luck!
Hello Debi-Ann!
In my opinion, your drop in the SERPs is because of links coming from link directories and spam blogs. These links don't count for much (if anything) after Google's Penguin update.
Yes, it's possible Google uses the anchor text to spot unnatural link building.
For you, getting some really good links from authoritative sites might be really helpful.
I'm not a Google employee, just my guess.
Yes, it looks like they got their canonicals all wrong...
I think Google will ignore the canonical tag in this situation. I remember Matt Cutts saying that even if you got the canonicals all wrong, Google would still index your site.
"What sunk the pirate ship? Canonical issues!"
It could take a little time. I did some redirects myself earlier this year, but the old pages are still in Google's index.
Maybe someone else can confirm that it can take a little time before the old pages are dropped from Google's index?
Did you verify with a tool like http://www.webconfs.com/http-header-check.php that you get a 301 redirect?
Hello Gary,
You can use the hreflang element:
<link rel="alternate" hreflang="en-IE" href="http://www.your-ie.site/" />
For more information see: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=189077
You can also set, in Google Webmaster Tools, which country a specific domain is targeting.
Hello!
It looks like the SEOmoz crawler (and Google) follows AJAX links. Is this normal behavior? We have implemented the canonical element, and that seems to resolve most of the duplicate content issues. Is there anything else we can do?
Example:
Hello!
I need to filter out the crawl errors found before a certain date/time, but the date and time the errors were discovered are all the same.
It looks more like the time the report was generated. Is there a fix?