Posts made by CleverPhD
-
RE: Worth Improving HTML Sort Order?
External style sheets are pretty much standard these days on web pages. If you view source on any reputable site, you will see a reference to an external CSS file in addition to references to various JS files etc. This technique also enables the CSS to be cached and re-used as people move through your site. Faster sites make for happier users, and happier users make for better rankings.
-
RE: SEO Question - Are 503/504 errors an issue?
Agree with Lesley. 1300 is excessive, and even 50 sounds like a lot to me. Do you see any in Search Console? It would be interesting to see whether Google is reporting something similar.
-
RE: New Service/Product SEO and rankings
Howdy from Dallas!
For what it is worth, you have 2 pages that you want to rank for the key term, but which one? I took a quick first glance at them and here is what I saw (remember, this was a quick first glance).
On both of the pages you mention, you only have "SEO Houston" in the title and URL and don't mention it anywhere else on the page, with the exception of having your Houston address in the footer. Just glancing at your pages with my human eyes, I cannot see anything on the page that talks about why you are the best SEO in Houston.
Your pages read to me (at a quick glance) as though you offer great SEO services and have great SEO resources, but not for Houston per se. Your competitors do work "SEO Houston" into the text a lot, to the point of being annoying, but at least I could see quickly that they serve Houston.
I was thinking it is a shame you do not have some case studies on that page that talk about what you did for some Houston businesses; that would be a good way to work Houston into the copy without looking spammy. On a second look at your SEO Houston page, I saw the links to the case studies. Now, if I want to see the case studies, you take me to another page, plus I cannot tell by reading the original page whether these are Houston-based businesses or not. You might be helping some business from New York City for all I know!
I then thought, lemme find out more about this guy, he is probably pretty good, and I went to the about us page. I saw the item about the Houston Business Journal and thought that info should be on your SEO Houston page: a natural thing to talk about that can pull in the key phrase. I went back to your Houston SEO page a third time and then noticed the H1 tag. I think I scrolled past it the first time as I looked at the graphics on the page and scanned.
I went back to writing this response, looked at the page a fourth time, and saw, a-ha! "SEO Houston" is in the copy once, and among all the logos at the bottom there is a logo for the Houston Business Journal.
I hope this makes sense: as a user, I had to do a lot of work to see that you specialize in SEO in Houston. I do not think you need to H-Town, Hustle Town, or Clutch City the heck out of your copy. You have all the elements, but I would say it needs a little more work to find a happy medium between what you have now and how your competitors use the term.
Check out the 5 second test at UsabilityHub http://fivesecondtest.com
If the user cannot tell in 5 seconds what you offer, then tweak the copy until they can, without going overboard. That would be my first step before any backlink analysis or speed testing etc. I think you have awesome stuff, it is just not hitting me as quickly when I first visit the page.
Good luck!
-
RE: My visibility score dropped 50% ,How would you improve it ?
I would check your title tags.
-
RE: How does PageRank pass through a backlink on a subdomain vs. subdirectory?
Subdomains are considered separate sites, so normally you would consider all the link metrics separately. That is why the most common advice is to use subdirectories for content sections rather than subdomains: any link equity/PageRank you acquire from links into the subdirectory flows up to the main domain.
You also need to be careful not to run into duplicate content issues with the www vs non-www versions of your site, as those two subdomains are seen as separate entities by Google.
If you want to consolidate link equity, 301 redirect all the pages in the blog subdomain (e.g. blog.root.com) to the equivalent subdirectory (root.com/blog); then you have a unified single set of pages/URLs to focus on.
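Just to make the mapping concrete, here is a rough sketch in Python of what the subdomain-to-subdirectory rewrite might look like. The hostnames are placeholders from the example above and it assumes the blog paths carry over unchanged; the actual redirects would live in your server config.

```python
from urllib.parse import urlparse, urlunparse

def blog_subdomain_to_subdirectory(old_url):
    """Map blog.root.com/some-post to root.com/blog/some-post (placeholder hostnames)."""
    parts = urlparse(old_url)
    if parts.netloc != "blog.root.com":
        return old_url  # leave non-blog URLs alone
    return urlunparse((parts.scheme, "root.com", "/blog" + parts.path,
                       parts.params, parts.query, parts.fragment))

print(blog_subdomain_to_subdirectory("https://blog.root.com/my-first-post"))
# https://root.com/blog/my-first-post
```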
-
RE: Move a blog from a domain to a new domain in the same hosting server
Great input by Ken, wanted to add to it. 301 redirects are key (talk with your developer or host about the best way to implement them with your setup). Google Search Console actually has a "change of address" option to help you migrate. There is also a step by step tutorial https://support.google.com/webmasters/answer/6033049?hl=en
Good luck!
-
RE: Hi - I have a beginner question about organic search results dropping to zero
You bet! Hopefully this helps when you start digging into things!
-
RE: Hi - I have a beginner question about organic search results dropping to zero
Nope, you don't need access. You set the 301s up on the current server. When the current server gets a request for an old page, it then knows where to send the user, using the 301.
Also, if you don't mind, please mark my response as a "Good Answer" here on Moz.
Cheers!
-
RE: Hi - I have a beginner question about organic search results dropping to zero
301 the old URLs to the new URLs. Try to keep a one-to-one relationship between the old pages and the new pages: the old bedroom page 301s to the new bedroom page, the old dining room page 301s to the new dining room page, etc. You would not want to redirect the old bathroom URLs to the new dining room URLs.
You also need to pick whether the site is www or non-www. Let's say you pick non-www as your default. Make sure that if anyone types in a www URL on your site, you 301 that page to the non-www counterpart. Just as above, keep a one-to-one relationship. Make sure that any links on your site point to the proper non-www URL, and make sure any tools you use (Moz etc.) start with the non-www version of the website.
Good post by Cyrus on redirects here at Moz as well.
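If you want a quick sanity check once the redirects are in place, here is a rough sketch that confirms each old URL 301s to its intended new counterpart. It assumes the Python requests library is installed, and the URLs in the map are placeholders for your own old-to-new pairs.

```python
import requests

# Placeholder old -> new pairs; swap in your real URLs.
redirect_map = {
    "http://www.example.com/old-bedrooms.html": "http://example.com/bedrooms/",
    "http://www.example.com/old-dining-room.html": "http://example.com/dining-room/",
}

for old_url, expected in redirect_map.items():
    resp = requests.get(old_url, allow_redirects=True, timeout=10)
    first_hop = resp.history[0].status_code if resp.history else None
    ok = first_hop == 301 and resp.url == expected
    print(f"{old_url} -> {resp.url} (first hop: {first_hop}) {'OK' if ok else 'CHECK'}")
```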
-
RE: Hi - I have a beginner question about organic search results dropping to zero
If you want to find out the general structure of the old site: Google "Internet Archive" and enter your site URL there. You will see snapshots and be able to pull it all together. Bing Webmaster Tools actually does a pretty good job of showing old site structure, and that may help.
If you want to try and figure out what pages used to rank: you can also use various linking tools (Moz OSE, Majestic, Ahrefs) and Google and Bing webmaster tools to find how people are linking to you. This will not give you all of the old URLs on your site, but at least the most important ones; those are the ones most likely to have ranked. Google Webmaster Tools will show some average ranking data going back about 3 months, so you may be able to recover it there.
You can also look through your old analytics data: that would tell you which URLs were getting the most organic traffic and, based on the content, what was doing well. While the old site is not live, do you have any way to access the old analytics data?
The www vs non-www issue would result in needing 301s as well. Those are two different subdomains, so you would have two different pages according to Google.
You need to talk to the developer to see if anything happened 2 months ago. Maybe they changed from www to non-www, and that would need a full 301 setup to direct Google from the old site to the new site.
This just sounds like a site migration gone bad and unless you can ask around for data from the previous provider, it will be tricky to figure out.
-
RE: Page Title (Meta descriptions) length... how strict are you?
Title tags - put your main keywords for the page first, or near the beginning. That helps Google know what the page is about. The character limit varies because Google does not look at characters per se, but at pixel width. Good article by Dr. Pete:
https://moz.com/blog/new-title-tag-guidelines-preview-tool
You just have to watch what gets cut off at a certain point. Beyond that length the title is getting too long for readability anyway, and if you need a longer title to explain a page, just put the longer one in the H1, but try to be sensible. If the client insists on including the company name and you are not trying to rank for the company name, just do something like
Keyword and keyword is really key here because it ranks good! | Company Name
The company name is at the end and will get hidden in the SERPs anyway, and you have your keyword(s) or phrase in at the start.
The meta description is about conversion and click-through rate rather than ranking. Focus on getting the best call to action with a keyword somewhere in there first. I would say this could be a good place to sneak in the company name, after you get your call to action right. Your limit is larger there (about 150-160 characters), so you have more room before the cutoff. I tend to worry less about keywords and think about searcher intent, and see if I can match that to get them to select my page among others in the search result. Another good article by Dr. Pete:
https://moz.com/blog/i-cant-drive-155-meta-descriptions-in-2015
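If you want a quick way to flag titles and descriptions that are likely to get cut off, here is a rough sketch. Character counts are only a proxy for the pixel-width cutoff mentioned above, the limits below are approximations, and it assumes the Python requests library is installed; the example URL is a placeholder.

```python
import re
import requests

TITLE_LIMIT = 55         # rough stand-in for the pixel-width cutoff
DESCRIPTION_LIMIT = 155  # rough stand-in for the snippet cutoff

for url in ["https://www.example.com/"]:  # placeholder list of pages to check
    html = requests.get(url, timeout=10).text
    title = re.search(r"<title[^>]*>(.*?)</title>", html, re.I | re.S)
    desc = re.search(r'<meta[^>]+name=["\']description["\'][^>]+content=["\'](.*?)["\']',
                     html, re.I | re.S)
    title_text = title.group(1).strip() if title else ""
    desc_text = desc.group(1).strip() if desc else ""
    print(url)
    print(f"  title ({len(title_text)} chars): {title_text[:80]}")
    print(f"  description ({len(desc_text)} chars): {desc_text[:80]}")
    if len(title_text) > TITLE_LIMIT:
        print("  title will probably get truncated in the SERP")
    if len(desc_text) > DESCRIPTION_LIMIT:
        print("  description will probably get truncated in the SERP")
```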
Cheers!
-
RE: Hi - I have a beginner question about organic search results dropping to zero
Hello!
I looked in the internet archive and the most recent version they have of this site is from the end of 2014.
http://web.archive.org/web/20141222162951/http://www.princessdesign.co.uk/
The design and URL structure are different from what you have today. Did you recently relaunch the site?
On the old site this was one of the links to the bedroom section
http://www.princessdesign.co.uk/bedrooms/bedrooms-at-princess-design.html
that URL 301 redirects to this page
http://princessdesign.co.uk/bedrooms/bedrooms-at-princess-design.html
and that page shows a 404 error page.
So, if you did just do a redesign and overhaul of your URL structure on the site, you need to get all the URLs that were ranking and make sure they 301 redirect to the correct page.
You should also look in your Google webmaster tools to make sure there are no warnings about penalties or the like. I checked your robots.txt and meta robots and there was nothing there that would have blocked Google from crawling.
The recent version of the site is image and JS heavy; I almost thought it was a Flash site at first (gasp!). Since the current site has very little text on the page and is mostly images, there is very little for the search engine to read and use to decide what to rank you for.
Similarly, only the home page has a meta description. While meta descriptions are not important for ranking per se, they are important once you rank, to help with click-through. That is, once the page ranks, Google will show the title and description, and the description can influence whether the person clicks through to your site or not. With all the images, the new site is probably slower than the old one, and this can hurt you as well.
This is all based on some assumptions made quickly, so I could be totally wrong. Hope this helps to point you in the right direction!
Good luck!
-
RE: Crawl errors are still shown after fixed
WMT is slow as heck to have those things go away; it may take 3 months for them to roll off. FYI, if there is a 404 error and it is supposed to be a 404, do not mark it as fixed. Otherwise Google will think you "fixed" the 404 back to a 200, recrawl, and then put the 404 back in the GWT errors. Just let the 404s roll off over time.
-
RE: Best website IA/structure for SEO?
You can accomplish this IA with folders or with the slug; the key is how you interlink everything. That is how you can show your related articles and what the most important article is on a given topic. The Bruce Clay article (IMHO) is still relevant, though I think you do not need to get as granular, due to things like Hummingbird. I tend to think of organizing around a topic with a set of keywords, rather than getting super granular with keyword siloing. I think it still makes sense for the user that way, as you want an organizational structure that is simple and easy to understand rather than so many subcategories that people get confused.
Cheers!
-
RE: Articles marked with "This site may be hacked," but I have no security issues in the search console. What do I do?
It is hacked; you just have to look at the page as Googlebot does. Sadly, I have seen this before.
If you set your user agent to Googlebot, you will see a different page (see attached images below). Note that the title, H1 tags, and content are updated to show info on how to Buy Zithromax. This is a JS insertion hack: when the user agent is Googlebot, the script overwrites your content and inserts links to their pages to help them gain links. This is very black hat and bad and, yes, scary.
I use "User Agent Switcher" on FF to set my user agent; there are lots of other tools for FF and Chrome to do this. You can also run a spider on your site such as Screaming Frog, set the user agent to Googlebot, and you will see all the changed H1s and title tags.
It is clever, as "humans" will not see this but the bots will, so it is hard to detect. Also, if you have multiple servers, you may only have one of the servers impacted, so you may not see this every time, depending on which server your load balancer sends you to. You may want to use Fetch as Google in the Webmaster console and see what Google sees.
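If you want a quick way to spot-check this from your own machine, here is a rough sketch that fetches the same URL with a normal browser user agent and a Googlebot user agent and compares the titles. It assumes the Python requests library is installed and the URL is a placeholder; some hacks also key off the requesting IP, so Fetch as Google is still the definitive check.

```python
import re
import requests

URL = "https://www.example.com/some-article/"  # placeholder - use one of your affected pages
USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
}

def get_title(html):
    match = re.search(r"<title[^>]*>(.*?)</title>", html, re.I | re.S)
    return match.group(1).strip() if match else ""

titles = {
    name: get_title(requests.get(URL, headers={"User-Agent": ua}, timeout=10).text)
    for name, ua in USER_AGENTS.items()
}
print(titles)
if titles["browser"] != titles["googlebot"]:
    print("Titles differ by user agent - possible cloaking hack, investigate.")
```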
This is very serious, show this to your dev and get it fixed ASAP. You can PM me if you need more information etc.
Good luck!
-
RE: Accurate rankings data? software? tools?
An addendum to this: you may actually want to look more at the pages you are targeting for those keywords and the amount of organic traffic going to those pages. Why? This encompasses more than just what your ranking is; it also captures whether people are clicking through. We have run experiments changing the meta description and compared traffic vs ranking. While ranking and click-through are related, they are still two different things.
Finally, you have to look at what converts. I like to use the Page Value metric in Google Analytics. I can look at not only what content is ranking and getting traffic, but also what converts. I then use that to better plan my content or decide what content needs to be updated. Get your metrics to tie out to the bottom line; that is ultimately what matters.
Cheers!
-
RE: Why this site is not hit by google penguin only experts answer please
Typo fixed on Penguin vs Panda. There have been a lot of Panda updates lately and I have that on the brain. Regardless of the algorithm mentioned, Google is not perfect, things slip through, and updates are not all rolled out in all countries at the same time. Just a general FYI.
I did not think you meant to be rude, just letting you know, as some people may take it the wrong way.
Cheers!
-
RE: Why this site is not hit by google penguin only experts answer please
-
The site is ranking due to spammy links; this is why the site, despite having a "large" number of links, has a low DA etc.
-
Why is it not hit by Penguin (corrected from Panda, sorry, typo)? While Google is pretty good, it is not perfect. People can still get away with this stuff (sad to say). Also, Google does not roll out all updates to all the countries it operates in at the same time; it has to account for things like differences in languages. So it is possible that this site would not rank in Google US at this time, but may in other countries.
-
Word of advice: beggars can't be choosers. Your question's qualification of "expert answers only please" could be considered a little rude, as most everyone on here is doing this for free and because they want to help other people in the community. If you want expert advice only, then pay for a consultant; otherwise just be appreciative of the nice folks who are willing to chime in and hopefully point you in the right direction.
Cheers!
-
RE: Accurate rankings data? software? tools?
Your question here is about accuracy, but I think you are missing the bigger picture on rankings. Google does so much personalization of search results that there is no single "rank" for a given page on a given key term. If you are signed in, Google will rank pages higher that you visit a lot. Even if you are not, Google may rank pages higher that relate to your geographic location. Whenever a tool is pulling a keyword ranking, it is doing so with a set of parameters that may be different from another tool's, so the results may vary. Some people have even called rank checkers useless http://www.wordstream.com/blog/ws/2014/08/11/seo-rank-checking
Dr. Pete did an article on this recently to compare different methods of measuring rank
https://moz.com/blog/comparing-ranktracking-methods-browser-vs-crawler-vs-webmaster-tools
While he found differences, they were not that huge across tools.
My best advice is to find a tool that will report what you need and stick with it. Consistency is the key. That way you at least know that a change actually occurred and in what direction. You want a baseline, and then to know, once you make a change, whether you are doing better or worse. You will find discrepancies if you compare Moz, Keyword Planner, Google Webmaster Tools, CognitiveSEO, Authority Labs, etc. Everyone measures it slightly differently, and you see some differences due to Google's personalization. Heck, take a read of the Search Console help: Google calls it an "average position" because of the personalization of results https://support.google.com/webmasters/answer/35252?hl=en
-
Average position: The average top position of your site on the search results page for that query. With change also shows the change compared to the previous period. Green indicates that your site's average top position is improving.
To calculate average position, we take into account the top ranking URL from your site for a particular query. For example, if Jane’s query returns your site as the #1 and #2 result, and David’s query returns your site in positions #2 and #7, your average top position would be 1.5.
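To make the averaging above concrete, here is the same arithmetic from Google's example in a few lines of Python: only the top position for each query counts, and those top positions are averaged.

```python
# Only the top position your site earns for each query is counted,
# then those top positions are averaged (per Google's example above).
queries = {
    "jane's query": [1, 2],   # site appears at #1 and #2 -> top position 1
    "david's query": [2, 7],  # site appears at #2 and #7 -> top position 2
}
top_positions = [min(positions) for positions in queries.values()]
print(sum(top_positions) / len(top_positions))  # 1.5
```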
I know this does not answer the question you asked, but I hope it gives you the context you need to put all this in perspective. Get a reputable keyword tool that gives you the data output that you need for reporting. Stick with it and you will do fine.
Cheers!
-
RE: Finding missing GA code
Looks like you are running Google Tag Manager on the page. That is probably where the GA code is being called from. This is assuming you are seeing GA data coming in for this site in your reports.
Follow the directions for using the Chrome Developer Tools and you will see the GTM code firing. These links will help
https://support.google.com/analytics/answer/1032399?hl=en
https://blog.kissmetrics.com/events-in-google-tag-manager/
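As a quick sanity check before you dig into DevTools, here is a rough sketch that just looks for a GTM container ID in the page source. It assumes the Python requests library is installed and the URL is a placeholder; the network panel (per the links above) is still the real confirmation that the tag actually fires.

```python
import re
import requests

html = requests.get("https://www.example.com/", timeout=10).text  # placeholder URL
container_ids = set(re.findall(r"GTM-[A-Z0-9]+", html))
print("GTM containers found:", container_ids or "none")
```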
Cheers!
-
RE: Crawl Diagnostics 2261 Issues with Our Blog
One other thing I forgot: this video by Matt Cutts explains why Google might show a URL in the results even though the page was blocked by robots.txt
https://www.youtube.com/watch?v=KBdEwpRQRD0
Google really tries not to forget URLs, and this video reminds us that Google uses links not just for ranking but for discovery, so you really have to pay attention to how you link internally. This is especially important for large sites.
-
RE: Crawl Diagnostics 2261 Issues with Our Blog
Yes, the crawler will avoid the category pages if they are in robots.txt. It sounded from the question like this person was going to remove or change the category organization, so you would have to do something with the old URLs (301 or noindex), and that is why I would not use robots.txt in this case: those directives can only be seen if the pages can be crawled.
If these category pages had always been blocked using robots.txt, then this whole conversation is moot, as the pages never got into the index. It is when unwanted pages get into the index and you want to get rid of them that things get a little tricky, but it is workable.
I have seen issues where pages got into the index and were ranking, but they were the wrong pages, so the person just blocked them with robots.txt. Those URLs continued to rank and caused problems for the canonical pages that should have been ranking. We had to unblock, let Google see the 301, let the new pages rank, and then put the old URLs back into robots.txt to prevent them from getting back into the index.
Cheers!
-
RE: Crawl Diagnostics 2261 Issues with Our Blog
One wrinkle: if the category pages are in Google and potentially ranking well, you may want to 301 them to consolidate them into a more appropriate page (if that makes sense), or, if you want to get them out of the index, use a meta robots noindex tag on the page(s) to have them removed from the index and then block them in robots.txt.
Likewise, you have to remove the links on the site that are pointing to the category pages to prevent Google from recrawling and reindexing etc.
-
RE: Https vs Http Link Equity
The https ranking signal is a tiebreaker, assuming that all other ranking factors are the same:
https://www.seroundtable.com/google-https-dealbreaker-20632.html
You have to decide if you have other reasons to go https site wide. Are people logging in? Are you having them provide sensitive data? Those are the real reasons to move.
If you do want to move everything to https, use the 301 redirect. It will probably be a wash in the end: you lose a little bit of link equity in a 301, but in a tie you would "win" thanks to the https, assuming the other page is http. The key to the 301 is to make it page to page, not global. If you 301 a page to another page that is not on the same topic, you will lose link equity. Google does this so that if you have a page with a lot of link equity for the topic "red widgets" and you 301 that page to one about "purple fruit," the link equity is lost. You have to redirect the "red widgets" page to the new page on "red widgets" for the equity to pass through. Otherwise you are just using the 301 to move people along to the new page, which is not a bad idea, but something you need to think about nonetheless.
I would not use the canonical tag, as http to https is not really what it was meant for.
In the end, just be consistent and it will all work out as there are a ton of other factors that are more important to help you rank.
Cheers!
-
RE: How best to clean up doorway pages. 301 them or follow no index ?
Key point by Rebecca: use data to make this decision. I just 410'ed almost 800 old pages/articles from a website I help run. They were all republished press releases that were at least 2 years old, got fewer than 9 organic pageviews over the past 6-month period, and had no link equity. You have to do some work merging this data from GA and OSE, but it is worth it. I could say that when I deleted those 800 pages I was not losing significant traffic or links, and I was improving my crawl efficiency with Google and potentially a quality signal, as Google no longer had to look at crappy old content. Another way to say this: if users were not visiting the pages and nobody was linking to them, how could they be useful? If anything, they made my site look less reputable.
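If it helps, here is a rough sketch of the kind of merge I mean: combine an organic landing page export from GA with a link-tool export and flag the URLs with almost no pageviews and no linking domains as prune candidates. The file names, column names, and thresholds are all made up, so adjust them to whatever your exports actually contain.

```python
import csv

def load_counts(path, key_col, value_col):
    """Read a CSV export into a {url: count} dict (column names are assumptions)."""
    with open(path, newline="") as f:
        return {row[key_col]: int(row[value_col]) for row in csv.DictReader(f)}

pageviews = load_counts("ga_organic_landing_pages.csv", "url", "pageviews")
linking_domains = load_counts("link_tool_export.csv", "url", "linking_domains")

candidates = [
    url for url, views in pageviews.items()
    if views < 10 and linking_domains.get(url, 0) == 0
]
print(f"{len(candidates)} candidate pages to prune (little traffic, no links)")
```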
Cheers!
FYI - the spider Screaming Frog (one of my fav tools) just integrated with the GA API, so you can crawl and get GA data combined. (You can also just play with GA filters as well). If Screaming Frog can get the tool to access the Moz API - BOOM! That would make this work so much easier. (Hint hint mozzers this would be an amazing tool for the Moz crawler as well!)
-
RE: Redirecting old mobile site
Agreed. Use 301s to redirect the old m. pages to their www counterparts. This way, not only do users get redirected automatically to the proper page with the correct content, but if there is any link equity, it gets passed along as well.
Key point: do not redirect all of your m. pages to the www home page; that would be bad. Also, some bonus free advice: if you are setting up global 301 redirects, go ahead and do some additional 301 cleanup in several areas.
-
If your site is indexed in Google with the www subdomain included (i.e. http://www.website.com), make sure that the non-www URLs for all pages (i.e. http://website.com) 301 redirect to the www version. This needs to be a page-to-page redirect, not everything to the home page. Reverse this if your website uses the non-www subdomain by default.
-
Likewise, if you ever used https or moved from http to https, 301 everything page to page.
-
If you have anything where http://www.website.com is the same as http://www.website.com/index.html, or http://www.website.com/folder/index.html, etc., 301 all those "index.html" type URLs to the folder URL ending in the slash.
The idea here is to remove duplicates, and the 301s do that.
When you get all this done, run a spider (I like Screaming Frog or Botify) to see if you have any navigation, sitemap, or other internal links on your site that are 301ing. Try to think about anywhere else you control where you might accidentally be pointing to old URLs (RSS feed possibly? Your homepage URL in your Facebook account? etc.). You may be pointing (accidentally or on purpose) to old URLs and want to update those. That is another signal to Google that you are not using the old URLs and that it should pay attention to the 301. I have found issues where Google was still reporting a 301 in Search Console, and it was because I was still pointing to it in my navigation.
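A proper crawler like Screaming Frog or Botify does this site-wide, but here is a toy single-page version of that check in Python: pull the internal links from a page and flag any that respond with a 301. It assumes the requests library is installed, the start URL is a placeholder, and the href extraction is deliberately rough.

```python
import re
import requests
from urllib.parse import urljoin, urlparse

START = "https://www.example.com/"  # placeholder - a page whose internal links you want to audit
html = requests.get(START, timeout=10).text
hrefs = re.findall(r'href=["\'](.*?)["\']', html, re.I)

for href in sorted(set(hrefs)):
    url = urljoin(START, href)
    if urlparse(url).netloc != urlparse(START).netloc:
        continue  # only audit internal links
    status = requests.get(url, allow_redirects=False, timeout=10).status_code
    if status == 301:
        print(f"internal link 301s and should be updated: {url}")
```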
Cheers!
-
RE: Its posible to use Google Authorship in an online shop?
Just a point here, as I just ran into this thread in the "Bounty" Q&A section. Google dropped authorship pictures just about the time this thread started, so being more visual will not have the same impact it once had.
Honestly, I agree with EGOL. Using authorship as a tactic on product pages, when there is really nothing that you authored, just sounds like an approach that will get you penalized down the road.
-
RE: Will Google Recrawl an Indexed URL Which is No Longer Internally Linked?
What we often run into on larger sites is that 1) there are still internal links to those pages from old blog posts etc. You have to really scrub your site to find those and update them manually; unless you crawl the site with a tool and go over it with a fine-toothed comb, you might be surprised at the links you missed. And 2) there are still external links to those pages. That said, even if neither 1 nor 2 applies, Google will still recrawl (although not as often). Google assumes that any initial 404 or even 301 may be a temporary error and so checks back. I have seen URLs that we removed over a year ago and Google will still ping them; they really hang onto stuff. I have not gone as far as 301ing to a directory that I then deindex; I generally just watch the URLs show up and then fall out of Webmaster Tools, and then I move on.
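If you are curious how often Google keeps coming back for removed URLs, a server access log will show you. Here is a rough sketch that counts Googlebot requests to a set of removed paths; the file name, combined log format, and paths are assumptions, so adjust to your server setup (and note that anyone can spoof the Googlebot user agent, so treat the counts as indicative).

```python
# Count Googlebot hits to URLs you have removed, using a standard access log.
removed_paths = {"/old-page.html", "/another-old-page/"}

hits = {}
with open("access.log") as log:
    for line in log:
        if "Googlebot" not in line:
            continue
        try:
            # Combined log format: the request is the first quoted field, e.g. "GET /path HTTP/1.1"
            path = line.split('"')[1].split()[1]
        except IndexError:
            continue
        if path in removed_paths:
            hits[path] = hits.get(path, 0) + 1

print(hits)
```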
-
RE: Minimising the effects of duplicate content
Yikes. I think that would be a good one to counsel them to rewrite; otherwise I do not know of a way to divide up the canonical tag.
-
RE: Site Speed, is it worth it from a SEO point?
One more here, just because I wanted to expand on what Benjamin mentioned. Yes, site speed is important for SEO, and we can go through all the reasons. The real reason you want good site speed is that you make more money because your users are happier. People who come to your site are more likely to convert the faster your site is, regardless of whether they came from Google, Bing, organic, paid, direct, etc.
Amazon has actually calculated this. They estimate that a page load slowdown of one second would cost Amazon $1.6 billion in sales each year.
http://www.fastcompany.com/1825005/how-one-second-could-cost-amazon-16-billion-sales
This type of research has been done at many other companies, including Google. If you are trying to measure the impact of site speed, don't look just at ranking; look at conversion. That is where the real money is.
-
RE: Minimising the effects of duplicate content
The big point to make here, as Hashtag mentioned (nice name BTW), is to explain to the client how it harms them. I have one site where we have literally thousands of local businesses that work with us, and they will do things like this, mostly because they do not know any better. It is often due to a marketing assistant or an outsourced firm that does not know what they are doing.
You actually have an opportunity here to show off your expertise, help your client, and get this fixed (whether it is with a canonical or a rewrite). Staying focused on 1) how this practice hurts them and 2) how you can help them fix it makes you a winner and makes it an easier conversation than you might think.
If you think about it, for this type of situation, this is the best possible scenario. It is a gift! You can most likely fix this, as you have a good relationship with the client, and you may even be able to improve your relationship with them (and make more money). If this were some random scraper, there would be nothing you could do besides filing a complaint.
Good luck!
-
RE: Better to use specific cities or counties for SEO geographics?
Agree with William. Show your client the keyword search volume data for searching by city vs by county. Several of the sites I run have localized pages, and we have gotten into discussions about getting more specific by using zip codes or neighborhood names, since we were already well optimized for city + service. Why not zip code + service, etc.?
It came down to this: nobody searches for zip code + service or neighborhood + service in the areas we focus on. Yours may be different, so look at the data first, but I bet it will hover at the city level. You can put it to your client this way: "I can spend a lot of your money on pages that are optimized at the county level, and they could even rank for that search. But if no one is searching for those key terms, then I have just wasted your money and time."
Good luck!
-
RE: Why have bots (including googlebot) categorized my website as adult?
I think you have a good question, but you may be making too many assumptions. SimilarWeb has its own algorithm for dropping sites into a given category (here is the list they use: http://www.similarweb.com/category). When I look at your site in SimilarWeb, they have an option for you to change the category, so I would suggest you do that (no algorithm is perfect).
The assumption you are making is that because similarweb.com categorized you as "adult," maybe Google did too, but that may not be the case. A few references on what you should do when you apply to AdSense, plus the AdSense guidelines:
http://allbloggingtips.com/2012/08/27/applying-for-google-adsense-program/
http://www.techrez.com/2013/09/how-to-get-approved-google-adsense-account.html
https://support.google.com/adsense/answer/48182?hl=en
They mention some other things besides adult content: you need a privacy policy and an about us page. You have those things at the bottom of your page, but because you have infinite scrolling that loads more products on many of your pages (http://www.wishpicker.com/gifts-for/boyfriend), I can't get to the bottom of the page to click on them.
It just seems like you may be focusing on the wrong thing, IMHO. Are there any other reasons you can see that someone might categorize your site as adult? It does not look that way to me when I visited. Also, I ran your site through OSE and, among your top 100 linking domains, did not see any adult sites linking to you.
Good luck!
-
RE: Why can no tool crawl this site?
To expand on Dean's point.
If you look at the source code on https://www.bravosolution.com/ you get a bunch of JavaScript. It is basically looking at the user's location and then sending them to the appropriate version of your website based on country. This is why here in the US we are sent to https://www.bravosolution.com/cms/us
Many spiders/tools (and Googlebot was not really good at this until recently) are not good at (or do not do any) crawling and executing JavaScript, so they get stuck when they hit your home page.
If you want to evaluate any of your localized sites, just run those URLs through the various tools like Screaming Frog etc. You would then ask, "Well, how do I know that my main https://www.bravosolution.com is working properly for SEO?" I don't have as much background in optimizing for international SEO, but you can do several things to start with.
-
Google anything having to do with Aleyda Solis and international SEO. She posts a lot of stuff here at Moz and is pretty sharp on this stuff. There may be a more appropriate way to redirect international visitors from your main page than how you are doing it now.
-
Run your home page through Google Webmaster Tools under Crawl > Fetch as Google. See what the page looks like
-
Double check your robots.txt to make sure you are not blocking any folders that would contain a JavaScript library. Based on the code on your home page, I do not see you referencing any external libraries, but if you are dependent on JS to redirect Google, it would be worth having your developer check things.
-
As with everything, it all depends. If all of your local country sites are independently ranked and successful, this main website may or may not be doing you any favors currently if it is just a pass-through with no domain authority to start with. Spend time on the first point above to see if there is anything else worth doing.
Cheers!
name="description" />
-
-
RE: Is putting an email address in the page title a good idea?
Makes sense to me as well. Just to be Dr. Obvious, you need to make sure that your email (and phone number) are also clearly shown on the page itself. It drives me crazy when I cannot easily find contact information on a sales page, and that is what keeps me from getting my question answered so that I can purchase. If you want to get some more responses, try an online chat function. You might be surprised how many people use it to contact you.
Cheers!
-
RE: Subdomain VS Subdirectory
Agree with this 100%. If you read through the Q&A here, you see subfolders recommended over subdomains all day long.
-
RE: How should I handle URL's created by an internal search engine?
Basic cleanup
From a procedural standpoint, you want to add the noindex meta tag to the search results pages first. Google has to see that tag in order to act on it and remove the URLs. You can also enter some of the URLs into the Webmaster Tools removal tool.
Next, you would want to add /catalogsearch/ to robots.txt once you see all the pages drop out of the index.
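Once that robots.txt rule is live, here is a quick stdlib check (a rough sketch; the hostname and sample URL are placeholders) that the /catalogsearch/ URLs are actually disallowed for Googlebot.

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.example.com/robots.txt")  # placeholder hostname
rp.read()
print(rp.can_fetch("Googlebot", "https://www.example.com/catalogsearch/result/?q=shoes"))
# expect False once the Disallow: /catalogsearch/ rule is in place
```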
Advanced cleanup
If any of these search result URLs are ranking and acting as landing pages from Google, you may want to consider 301 redirecting those pages to the most closely related category pages.
My 2 cents. I only use the GWT parameter handler on parameters that I have to show to the search engines. I otherwise try to hide all those URLs from Google to help with crawl efficiency.
Note that it is really important that you do the work to find what pages/URLs Google has cataloged, to make sure you don't delete a page that is actually generating some traffic for you. A landing page report from GA would help with this.
Cheers!
-
RE: If Links not in GWT does that mean they havent been Indexed yet?
Yes. GWT links can be out of date. This has been mentioned in various places.
If you want to hear it from the horse's mouth see this GWT hangout
https://www.youtube.com/watch?v=-OuD3NuxiNY
At about 21:45 there is a question about how Google handles 410s and why they keep crawling them. Then at about 27:00 it moves into the re-spidering of dead links and if/when they are updated in GWT. A participant brings up dead links that are mentioned in the links report in GWT, and John Mueller from Google states it may take 6 months to a year for them to expire from GWT. The video is very interesting; you should watch the whole thing.
The other thing to remember is that GWT is really a sample that Google is offering, so you can't use it as your only source. It usually takes referencing several different databases (Moz, Majestic, Ahrefs, GWT, etc.) to get an idea of what is out there.
Good luck!
-
RE: Adding web designer credits to properties I create
Add the links if you feel you will get referral business from them, but not for SEO value. Whatever link(s) you do add, nofollow them. Duane Forrester from Bing stated in a blog post, "You want links to surprise you. You should never know in advance a link is coming, or where it's coming from."
http://www.bing.com/blogs/site_blogs/b/webmaster/archive/2014/05/09/10-seo-myths-reviewed.aspx
This statement was given the general nod by Matt Cutts of Google
https://twitter.com/mattcutts/status/466449897261367296
http://www.seroundtable.com/google-advance-links-18549.html
Finally, I would ask, can you do this in a way that is also beneficial to your client? Is the client getting a discount? Is there a way you can talk about the services this client provides on your site? Can the two of you as local businesses provide additional information to your visitors (more than just the link) that could turn visitors into customers?
Good luck!
-
RE: How to remove my site's pages in search results?
Why not just 404/410 those pages?
-
Google and JavaScript
Hey there!
There have been recent announcements from Google encouraging webmasters to let Google crawl JavaScript http://www.googlewebmastercentral.blogspot.com/2014/05/understanding-web-pages-better.html
http://googlewebmastercentral.blogspot.com/2014/05/rendering-pages-with-fetch-as-google.html
We have always put JS and CSS behind robots.txt, but we are now considering taking them out of robots.txt.
Any opinions on this?