zeepartner
@zeepartner
Job Title: SEO
Company: Swisscom
Website Description
Imagenes Tropicales is an incoming travel agency located in San José, Costa Rica.
Imagenes Tropicales Travel Agency
Favorite Thing about SEO
getting lotsa traffic
Latest posts made by zeepartner
-
RE: Robots.txt on http vs. https
Good point with the backlinks! Currently, both robots.txt files are open and Google does not seem to have canonicalization problems so far. So it makes sense to leave it this way anyway... Thanks Thomas!
-
Robots.txt on http vs. https
We recently switched our site from http to https. When a user requests any URL over http, a global 301 redirect leads to the same page on https.
I cannot find instructions about what to do with robots.txt. Now that https is the canonical version, should I block the http version with robots.txt?
Strangely, I cannot find a single resource about this...
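A quick way to see what crawlers actually get is to request the http robots.txt and check where it ends up. A minimal sketch using only the Python standard library; example.com stands in for the real domain:

import urllib.request

# With a global 301 in place, the http robots.txt is never served directly:
# the request ends up at the https version, so crawlers effectively see only
# one file, and there is no separate http robots.txt left to put a block in.
resp = urllib.request.urlopen("http://www.example.com/robots.txt")
print(resp.geturl())  # expected: https://www.example.com/robots.txt
print(resp.status)    # 200 at the final (https) URL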
-
RE: Google indexing despite robots.txt block
Yes, I think the crucial point is that addressing Googlebot wouldn't resolve the specific problem I have here.
I would have tried addressing Googlebot otherwise. But to be honest, I wouldn't have expected a much different result than specifying all user agents; Googlebot should be covered by that exclusion in any case.
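For what it's worth, a wildcard group does cover Googlebot, which is easy to confirm with the standard-library parser (illustrative; example.com is a placeholder):

from urllib import robotparser

# A "User-agent: *" group applies to Googlebot too, as long as there is
# no more specific "User-agent: Googlebot" group that overrides it.
rp = robotparser.RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /"])
print(rp.can_fetch("Googlebot", "http://www.example.com/"))  # False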
-
RE: Google indexing despite robots.txt block
100 points for you Martijn, thanks! I'm pretty sure you've found the problem and I'll go about fixing it. Gotta get used to https being used more frequently now...
-
Google indexing despite robots.txt block
Hi
This subdomain has about 4'000 URLs indexed in Google, although it's blocked via robots.txt: https://www.google.com/search?safe=off&q=site%3Awww1.swisscom.ch&oq=site%3Awww1.swisscom.ch
This has been the case for almost a year now, and Google does not seem to respect the block in http://www1.swisscom.ch/robots.txt
Any clues why this is or what I could do to resolve it?
Thanks!
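One thing worth checking in a case like this: robots.txt is fetched per protocol and host, so the http and https versions are separate files to a crawler and can contain different rules. A minimal sketch to compare the two, using only the Python standard library:

from urllib import robotparser

# Compare what each protocol's robots.txt allows; if the https file is
# missing or different, Google may crawl URLs you thought were blocked.
for scheme in ("http", "https"):
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{scheme}://www1.swisscom.ch/robots.txt")
    rp.read()
    print(scheme, rp.can_fetch("Googlebot", f"{scheme}://www1.swisscom.ch/"))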
-
RE: How to content marketing: Should my blog posts link to my sales page?
Linking from every single blog post might be overdoing it, but do link to your sales pages where it makes sense, i.e. where you specifically talk about the products/services you sell. Don't over-optimize your anchor text, though. Smart internal linking can indeed improve the authority of the pages you link to.
-
RE: Best practice for Portfolio Links
My pleasure! Have a nice day as well :)
-
RE: Best practice for Portfolio Links
Yes, probably a good idea to use specific portfolio tags (like "portfolio cats", "portfolio dogs", etc.). That way it doesn't get mixed up with other content (unless you want it to, of course).
Then you could create a portfolio overview page where you link to your sub-topics.
-
RE: Best practice for Portfolio Links
Split the portfolio up into various topics? This seems better for usability as well. Tag the various portfolios by category and make them nice and browsable. Does that make sense?
-
RE: Accuracy of search volume for keyword planner v old keyword tool?
Cool, will do. Thanks Dan!
Best posts made by zeepartner
-
RE: Does Bitly hurt your SEO?
The bots can easily identify a URL shortener, as it performs a normal 301 redirect.
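To illustrate (the short link is a placeholder, and the snippet deliberately inspects the redirect instead of following it): a crawler sees the shortener answer with a 301 and a Location header pointing at the final URL, so the link is consolidated to the target as usual.

import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # don't follow the redirect; we only want to inspect it

opener = urllib.request.build_opener(NoRedirect)
try:
    opener.open("https://bit.ly/example")  # placeholder short link
except urllib.error.HTTPError as e:
    # A shortener typically answers 301 with the target URL in Location
    print(e.code, e.headers.get("Location"))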
-
ECommerce: Best Practice for expired product pages
I'm optimizing a pet supplies site (http://www.qualipet.ch/) and have a question about the best practice for expired product pages.
We have thousands of products and hundreds of our offers just exist for a few months. Currently, when a product is no longer available, the site just returns a 404. Now I'm wondering what a better solution could be:
1. When a product disappears, a 301 redirect is established to the category page it was in (i.e. a leash would redirect to dog accessories).
2. After a product disappears, a customized 404 page appears, listing similar products (but the server returns a 404 status).
I prefer solution 1, but am afraid that having hundreds of new redirects each month might look strange. But then again, returning lots of 404s to search engines is also not the best option.
Do you know the best practice for large ecommerce sites where they have hundreds or even thousands of products that appear/disappear on a frequent basis? What should be done with those obsolete URLs?
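For what it's worth, solution 1 is straightforward to automate if the shop system knows each product's category; a minimal sketch of the idea in Python (all paths and names are hypothetical):

# Map expired product paths to the category page they belonged to.
EXPIRED_PRODUCTS = {
    "/products/dog-leash-123": "/category/dog-accessories/",
    "/products/cat-tree-456": "/category/cat-furniture/",
}

def resolve(path):
    """Return (status, location) for an incoming request path."""
    if path in EXPIRED_PRODUCTS:
        return 301, EXPIRED_PRODUCTS[path]  # permanent redirect to the category
    return 200, path  # serve the page normally

print(resolve("/products/dog-leash-123"))  # (301, '/category/dog-accessories/')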
-
RE: Does Google crawl the pages which are generated via the site's search box queries?
Google could crawl the dynamic URLs created by your search box, but it usually doesn't unless there is a link to such a dynamic URL somewhere. Internal searches don't cause many problems anymore, but if you want to be sure, you could always block your dynamic search result pages via robots.txt or Google Webmaster Tools (>Site configuration >URL parameters).
So if the URL generated by internal searches is http://www.site.com/search/?searchword=search+query+here, you could add this to robots.txt:
User-agent: *
Disallow: /search/
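If you want to double-check a rule like this before deploying it, Python's standard library can parse the lines directly; a small sanity check using the pattern from above:

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /search/"])
# Prints False: the dynamic search URL is blocked for all user agents.
print(rp.can_fetch("Googlebot",
                   "http://www.site.com/search/?searchword=search+query+here"))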
-
RE: ECommerce: Best Practice for expired product pages
Thanks for your thoughts guys.
@Igal@Incapsula: I like your 302 idea! That might actually make a lot of sense for some products that are short-lived.
@Matthew: Good to know that lots of 301s were not an issue on your sites. Are you talking about thousands of those, though?
Most importantly, I will have to find something that can be automated and doesn't require much extra work. I will probably go for 301s and remove those after a few months.
Remind me to post my learnings here after implementation :)
-
RE: What should be done with old news articles?
Basically I don't see a reason to remove old news articles from a site, as it makes sense to still have an archive present. The only reasons I could think of for removing them are if they are duplicate versions of texts that were originally published somewhere else, or if the quality is really crap...
-
Language Detection redirect: 301 or 302?
We have a site offering a VoIP app in 4 languages. Users are currently 302 redirected from the root page to /language subpages, depending on their browser language.
Leaving aside whether this setup makes sense: is it correct to use a 302 redirect here, or should users be 301 redirected to their respective language versions? I can't find any guideline on this whatsoever...
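For reference, roughly what such a detection redirect looks like; a minimal WSGI sketch in Python (the language codes and paths are assumptions). Since the target depends on the visitor rather than on the URL, the redirect is not permanent by nature, which is the argument usually made for 302 here:

LANGS = {"de", "fr", "it", "en"}  # assumed language subfolders

def app(environ, start_response):
    """302-redirect '/' to a language subpage based on Accept-Language."""
    if environ.get("PATH_INFO", "/") == "/":
        accept = environ.get("HTTP_ACCEPT_LANGUAGE", "en")
        lang = accept[:2].lower()
        target = f"/{lang if lang in LANGS else 'en'}/"
        # 302 (temporary): each visitor may get a different target
        start_response("302 Found", [("Location", target)])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [b"ok"]

# To try it locally:
# from wsgiref.simple_server import make_server
# make_server("", 8000, app).serve_forever()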
-
RE: Is there any value in having a blank robots.txt file?
No use in having a blank robots.txt. The minimum requirement if you want to have your site crawled is this:
User-agent: *
Allow: /
Note that Gagan's example above will block the entire site.
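As a side note, an empty Disallow line allows everything as well; both variants can be verified with the standard-library parser (illustrative; example.com is a placeholder):

from urllib import robotparser

# Both variants allow crawling, so this prints True twice.
for lines in (["User-agent: *", "Allow: /"], ["User-agent: *", "Disallow:"]):
    rp = robotparser.RobotFileParser()
    rp.parse(lines)
    print(rp.can_fetch("*", "http://www.example.com/"))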
Started out as a linkmonkey and moved on to greater things, i.e. being a linkgorilla. SEO'd a Swiss mediahouse for four years. Now optimizing for Swisscom.