Posts made by CleverPhD
-
RE: Responsive design (Showing different pages (icons) for Mobile/Tablet users)
I think that is an edge case, though, in terms of how often it happens. Most people are just using a phone as a phone. Your example most likely occurs with people who are giving presentations, etc. You could look at your analytics and see how many users are on a mobile OS with a large screen resolution; I bet it is pretty low.
-
RE: Responsive design (Showing different pages (icons) for Mobile/Tablet users)
Agree with Lesley. We ended up using a user agent sniffer to detect mobile devices and show a mobile version of the site for those users for that reason. We did not want users on a phone having to download all the assets on the desktop site when 90% of them were going to be hidden anyway. I think this type of issue will decrease as we get more of the LTE networks online and more users on them, but for our users we went this route so that they get a fast-loading mobile site.
-
RE: Duplicate content on sites from different countries
The issue with using rel=canonical in this situation is that Google treats that directive much like a 301. If you canonical one whole site to another, you will end up devaluing one of the sites.
-
RE: When we have 301 page is a Rel=Canonical needed or should we make 1 Noindex?
If you have a 301, you use the 301 - done, mission accomplished. Google should drop the original page and start to use the new page in its place in the SERPs. This is also automatic for the user, as they are moved from one page to the other. One thing: you want to make sure that the page you are sending people to is semantically related to the page they were sent from, otherwise you risk losing rank in the SERPs.
If you use the 301, there is no original "page" on which you could put the canonical or a noindex.
If you could not 301, you would want to only use the canonical. Google usually will treat a canonical like a 301.
If you use a canonical, you should not have to use the noindex.
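If you want to double-check that your 301s are actually in place (and see where they point), a quick sketch like this works; the URLs are made up and it assumes the Python requests library:

import requests

old_urls = [
    "http://www.example.com/old-page",          # hypothetical
    "http://www.example.com/another-old-page",  # hypothetical
]

for url in old_urls:
    # Don't follow the redirect; we want the raw status code and the target.
    resp = requests.head(url, allow_redirects=False, timeout=10)
    print(url, "->", resp.status_code, "->", resp.headers.get("Location", "(no redirect)"))

You want to see a 301 and a Location header pointing at a semantically related page.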
-
RE: Keyword Research: How best to target keywords without using a region as part of the search query.
You are looking at two different types of queries, but yes, you should focus on having the city name plus the product. While Google does localize results based on IP and personalized preferences, it is not completely independent of keyword optimization.
Another way to think about this: let's say that Google only took into consideration the IP of the user and ignored the keywords the user typed into the query. How would Google know what location you are local to? It knows that partially from the keywords you have on the page related to your location.
You should be able to optimize for both your location and product by using a combination of the keyword (and variants of it) and the city.
See what the most common term is using keyword research and then vary off of that. So if "Houston Plumbers" is the main keyword, you can use that and variants of it (Plumbers in Houston, Plumbers near Houston, Dependable Houston-Based Plumbers, etc.).
I have seen pages that I optimized using this methodology on large-scale Yellow Pages-type sites for a number of products/services, and had the page rank on Page 1 for both types of searches, such as "Dallas Plumber" (locality plus product/service) and "Plumber" (product/service only). Obviously, your mileage may vary depending on the competition, but the basic message is that you are going in the right direction when targeting locality + product/service vs just the service alone. The error in your case would be to leave out the location information.
Good luck.
-
RE: Is there a way to do a mass lookup of Page Authority?
You should be able to via the Moz API. One of the Mozzers should be able to point you to the correct document.
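If it helps, here is a rough Python sketch of a bulk lookup against the Mozscape URL Metrics API. The endpoint, the Cols bit flag, and the signing scheme below are from my memory of the Mozscape docs, so treat them as assumptions and verify against the current documentation before relying on them:

import base64
import hashlib
import hmac
import time
import urllib.parse

import requests

ACCESS_ID = "member-xxxxxxxx"    # your Mozscape Access ID
SECRET_KEY = "your-secret-key"   # your Mozscape secret key
COLS_PA = 34359738368            # bit flag I believe maps to Page Authority - check the docs

def page_authority(url):
    # Signed request: base64(HMAC-SHA1 of "AccessID\nExpires") per the old Mozscape scheme
    expires = int(time.time()) + 300
    to_sign = "{}\n{}".format(ACCESS_ID, expires).encode()
    signature = base64.b64encode(hmac.new(SECRET_KEY.encode(), to_sign, hashlib.sha1).digest()).decode()
    params = {"Cols": COLS_PA, "AccessID": ACCESS_ID, "Expires": expires, "Signature": signature}
    endpoint = "https://lsapi.seomoz.com/linkscape/url-metrics/" + urllib.parse.quote_plus(url)
    return requests.get(endpoint, params=params, timeout=30).json()

for u in ["moz.com", "example.com"]:
    print(u, page_authority(u))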
-
RE: What is the best day of the week for email surveys?
I will be honest, I am not sure there is a 100% right answer for this. Most of what you see are general guidelines on what to do, but you probably need to see how your audience reacts.
Here is a general article on the best time of day to email.
http://blog.getresponse.com/best-time-to-send-email-infographic.html
There is some logic as you do not want to email people when they are commuting etc.
There is a good article by Survey Monkey
http://blog.surveymonkey.com/blog/2011/08/16/day-of-the-week/
They show for B2C that Mondays are best.
Another study, by another company, mentions Mondays and Wednesdays
http://www.servicetick.com/blog/do-survey-response-rates-change-by-day-of-week/
Here is another article that looked at several other articles; it mentions Mondays, Tuesdays, Wednesdays, and Thursdays
http://www.peoplepulse.com.au/Invite-Timing-Tips.htm
This article mentions Wednesdays and Thursdays
http://blog.sogosurvey.com/sending-out-surveys-its-all-about-timing-get-the-timing-right/
Do you get where I am going here? If I were to keep looking for studies, I bet I could probably even find a few that recommend weekends. Why? Probably because people get sick of all the surveys that are sent during the week!
Take a read through and see if you can find any parallels between the groups that these articles studied and the group you are emailing. It would probably also help if you did some persona work: who are your website visitors and what are they normally doing? For example, if your website caters to a bunch of teachers, maybe Friday afternoon is better as they are done with all the school stuff for the week. Add that to all of the above, do a back-of-the-napkin meta-analysis, and you will probably have a pretty good starting point.
Good luck!
-
RE: Redirecting old domains for SEO ranking?
Agree with Kevin. Using old domains just for the PageRank value on the domain is old hat. I recommended this video earlier, but I will mention it again, as it talks about how "tricks" are going by the wayside and #RCS is how you want to approach this. Watch the video of Wil Reynolds about 1/4 of the way down.
http://moz.com/blog/an-interview-with-wil-reynolds
Good luck!
-
RE: Best Way to Replace Lost Links
One thing you need to consider is not just the quantity of the links you lost but the quality. If you are losing thousands of links with a PA of 1 and DA of 0, well then good riddance! They were not helping you to start with and if anything they were hurting you.
So, I would not think, "Well, we lost 1,345 links and so we need 1,345 links back." I would ask: how can we help the client do some #RCS to earn some quality links?
http://moz.com/blog/an-interview-with-wil-reynolds
Watch the talk by Wil Reynolds about 1/4 of the way down.
Good luck!
-
RE: Duplication, pagination and the canonical
The rel=next/prev is not for duplicated content - it just shows Google how the parts relate to the whole.
An alternative to rel=next/prev is the "Classic Pagination for SEO" approach that uses noindex, covered in another article by Adam
http://searchengineland.com/the-latest-greatest-on-seo-pagination-114284
If you have a duplicate issue, this would solve it as you would noindex all the duplicate pages.
What you need to do (and I can't do this for you) is look at all the crawl paths that you are providing Google. As I mentioned above, you are not doing any favors to Google or to your site when you show Google an infinite number of paths to get to the same content. It just wastes Google's time, and you don't want to do that when Google also has to crawl the rest of the internet. If you solve this issue, you will solve your duplicate issue.
AJ Kohn just posted an article on the concept of crawl budget that talks about this. I think the article is quite good and it explains why we need to look at all the topics of noindex, nofollow, robots, canonical and rel next prev http://www.blindfiveyearold.com/crawl-optimization
-
RE: Finding Local SERPs
There are solutions out there.
You have enterprise-level software like Sycara that tracks local results specifically:
http://www.sycara.com/overview/local-city-ranking/
You can also check with some of the other providers and get specifics:
http://www.brightlocal.com/seo-tools/local-search-rank-checker/
http://www.seoclarity.net/features/local-search-optimization/
http://www.advancedwebranking.com/online/localization.html
The only issue is that it can get kind of pricey with these companies.
Good luck!
-
RE: Ranking gcctld?
Matt Cutts must have been reading this thread - check it
http://www.youtube.com/watch?v=yJqZIH_0Ars
He mentions using .io as a "non geographic" specific domain.
-
RE: Duplication, pagination and the canonical
If I am understanding the question - I think pulling in some body copy from each search result (and not just the whole page) would be fine. I think Google will see that this is a search result and that you are pointing to other pages. You are probably going to pull in text from the title too. This is common practice in search results - heck Google does it!
If you are still concerned about the pulled-in descriptions, your option is to set up the system to have an alternate description for each page. Use the alternate description when you pull it into your main page. It is more work, but it will eliminate this issue.
Separately, paginated pages no longer need to be canonicaled to the index page. You can use rel=next and rel=prev.
http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
https://support.google.com/webmasters/answer/1663744?hl=en
It explains to Google the relationship between P1 and P2,3,4,5,n etc.
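To make that concrete, here is a small illustrative sketch (my own example, not Google's code; the URL pattern is made up) of building the rel=prev/next head tags for a given page in a series:

def pagination_link_tags(base_url, page, last_page):
    """Return the rel=prev/next <link> tags for page `page` of a `last_page`-page series."""
    tags = []
    if page > 1:
        prev_url = base_url if page == 2 else "{}?page={}".format(base_url, page - 1)
        tags.append('<link rel="prev" href="{}">'.format(prev_url))
    if page < last_page:
        tags.append('<link rel="next" href="{}?page={}">'.format(base_url, page + 1))
    return tags

# Page 3 of a 5-page widget category:
for tag in pagination_link_tags("http://www.example.com/widgets/", 3, 5):
    print(tag)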
Beyond that, you need to watch that you do not end up with too many paginated paths to get to the exact same product pages. Let's say you had 1,000 widgets that were Blue, Red, and Green and were also Free, Expensive, or Cheap. You would have several sets of paginated pages (one set for Blue, one each for Red, Green, Free, Cheap, and Expensive, one for Red and Expensive, etc.). It gets to be a little crazy, as they all lead to the same set of widget product pages (see the quick count sketched after the link below). You need to manage how Google crawls all of that so your paginated category pages do not look like duplicates. Adam Audette writes great stuff on this. Look here for things to consider:
http://www.rimmkaufman.com/blog/site-search-dynamic-content-and-seo/01032013/
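To put rough numbers on how quickly those facet combinations multiply the URLs Google has to crawl, here is a back-of-the-napkin sketch (the counts are made up for the example):

colors = ["Blue", "Red", "Green"]
prices = ["Free", "Cheap", "Expensive"]

single_facet_sets = len(colors) + len(prices)      # e.g. /widgets/blue/, /widgets/free/
combined_facet_sets = len(colors) * len(prices)    # e.g. /widgets/red/expensive/
total_sets = single_facet_sets + combined_facet_sets

pages_per_set = 1000 // 20   # 1,000 widgets at 20 per page = 50 paginated pages per set
print(total_sets, "facet sets x", pages_per_set, "pages =", total_sets * pages_per_set, "category URLs")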
-
RE: Do Abbreviations Hurt SEO Results?
Hello there,
My Uncle Dan would be upset if I did not correct you in stating that FTO for Fair Trade Organic is actually an acronym and not an abbreviation. Uncle Dan also got mad when I used to talk about my GPS system, but that is another story.
There is an acronym HTML tag http://www.w3schools.com/tags/tag_acronym.asp but I am unaware that Google actively uses it. I would answer your question as follows.
- Is there search volume for the acronym, and when you search for that acronym, what type of pages does Google show? In other words, are people searching for it and does Google know what it means? When I search FTO https://www.google.com/search?q=FTO I get pages related to the FTO gene http://en.wikipedia.org/wiki/FTO_gene that is related to obesity, and also Fresh Touring Origination, a sports coupe from Mitsubishi.
I tried "FTO vegetables" but I got instructions on how to make lacto fermented vegetables - I could not find FTO on the page with the instructions though. Results 5 and 6 looked to have to do with fair trade vegetables. I Googled "FTO food" and got sites for "Fresh to Order Food" and "Food Truck Outfitters Atlanta"
My point is, if most people don't use FTO to represent Fair Trade Organic, then Google probably will not, but it still may understand it in the context of use with other words. It may be that FTO is searched a bunch, but it may not be the searches you want.
- Based on my 30-second assessment above, you may want to consider using the acronym in combination with the words Fair Trade Organic, and that would work. You need some variety of the words on the page; that goes without saying. I would not, though, use it as the primary words in places like your title tag, H1, etc. It makes more sense in the body text of the page.
Specifically, "would it provide a weaker result"? It depends. I would say for FTO searches you would probably not show up, as Google potentially does not associate FTO with Fair Trade Organic. For "Fair Trade Organic" (full keyword) related searches, I would use it, but not as much as other keywords that are probably searched more often and are more relevant in the results, as you need to vary up your keywords anyway.
Hope this helps!
-
RE: Robots.txt blocking Metadata description being read
It sounds like you have something going on like what Matt Cutts talks about here
http://www.youtube.com/watch?v=KBdEwpRQRD0
You have a result showing up in the SERPs even though the page is blocked in robots.txt. Basically, the reason it is still in the SERPs is that other pages are linking to that URL on your site.
I am going to assume that you want to keep these pages out of the index.
As you already have pages in the index, you need to get them removed, not just block them. I would suggest using a noindex meta tag and then letting the crawler crawl the page. The robots.txt stops the bot cold and does not let it read anything else. It does not let the bot read the meta tag. If you let it read the noindex meta tag that tag directs Google to take the page out of the search results.
https://support.google.com/webmasters/answer/93708?hl=en
"When we see a noindex meta tag on a page, Google will completely drop the page from our search results, even if other pages link to it."
That said, if you have made a mistake and have been blocking Google when you did not mean to, make sure that you do not use the noindex meta tag on those pages and that you are not blocking Google in your robots.txt. If that is the case and you are still seeing the wrong info in the SERPs, you just need to wait a little while. The updates will not be instantaneous and may take a few weeks to show up in the SERPs. In the meantime, just double-check that everything in your robots.txt is correct, etc.
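If it helps, here is a quick diagnostic sketch of the two things to check - is the page blocked in robots.txt, and does it carry a noindex meta tag? The URL is hypothetical and it assumes the Python requests library:

import urllib.robotparser

import requests

page_url = "http://www.example.com/private/page.html"   # hypothetical

rp = urllib.robotparser.RobotFileParser("http://www.example.com/robots.txt")
rp.read()
blocked = not rp.can_fetch("Googlebot", page_url)

html = requests.get(page_url, timeout=10).text.lower()
has_noindex = "noindex" in html   # crude check for a <meta name="robots" content="noindex">

print("Blocked by robots.txt:", blocked)
print("Noindex found in the HTML:", has_noindex)
if blocked and has_noindex:
    print("Conflict: unblock the page in robots.txt so the crawler can actually read the noindex.")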
Be patient and good luck!
-
RE: Moving low ranking domain
Agree with John. I would also not 301 the old URLs to the new. All that would do is pass the negative link equity from the old site to the new and you are right back where you started.
Kill the old site with a 410, or even better, go into GWT and verify yourself for that subdomain. Then you can put in a request to have that entire subdomain removed from Google. You then put up a robots.txt and it is gone. This will also prevent the old site from being found again by Google through all the links.
https://support.google.com/webmasters/answer/1663427?hl=en
We use load balancers on our servers and once had a subdomain www1 that got indexed and there were even some FB shares etc. We ended up going this route to get it out and keep it out of the index.
-
RE: Server is taking too long to respond - What does this mean?
Check with the IT folks or hosting service for your client. I think this is an outside chance, but if you have been running spiders from your home computer to check the site, you may have been hitting it too hard, slowing the site down, and the server may be blocking your IP as you are seen as a spammer. That is why when you change ISPs you are golden, as you are seen as a different "user".
I took down one of our sites once with a spidering tool. They were pushing new code right when I hit the site. Also, the number of requests per second that I thought was OK - well, it was during peak traffic time. (DOH!)
I adjusted my crawl rate down and everything was ok. Again, this is just a guess, but worth checking considering your symptoms.
Good luck!
-
RE: Organic Links and Skimlinks Affiliate Program
The selling point makes sense and I could see how that would be true. But if you are not seeing an increase then it is not worth it, especially if your focus is on the organic traffic.
-
RE: Eshop - Prevent Duplicate Product Titles... A Strategy
Sounds good! Thanks!
-
RE: Organic Links and Skimlinks Affiliate Program
Here is what I am seeing. When I view source and look at the link for Floppy Straw Hats I see the URL
http://www.surfdome.com/baku_hats_-baku_congo_hat-_volcano-108584?i
and this link shows me a 200 when I run it through directly. This is probably what Screaming Frog is seeing. I would re-run the frog and set the user agent to Googlebot just to see what happens there.
Now, when I view that link in the browser, hover over it, right-click the URL, and copy it, I get a different (rewritten) URL.
When you run that one, you get the 302 redirect to the target page.
If you scroll down to the bottom of the source, you see the Skimlinks JavaScript that is doing this manipulation. FYI, it is also adding a redirect link to "Surfdome" at the end of that same line. This is not linked at all in the source code. You have a simple JS rewrite action going on there.
So the bot sees the regular URL and the human sees a redirect via JavaScript.
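If you want to sanity-check the server side of this (per the Screaming Frog user-agent suggestion above), a rough sketch like the one below compares what the raw URL returns to a Googlebot user agent versus a browser user agent. Note it will not execute the JavaScript, so the JS rewrite itself only shows up in a real browser; the URL is a placeholder for the product link above, and it assumes the Python requests library:

import requests

url = "http://www.example.com/product-link-from-the-post"   # swap in the real link
agents = {
    "Googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "Browser": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36",
}

for name, ua in agents.items():
    resp = requests.get(url, headers={"User-Agent": ua}, allow_redirects=False, timeout=10)
    print(name, resp.status_code, resp.headers.get("Location", "(no redirect)"))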
Depending on whether you wear a white or grey hat, this could be considered "cloaking"
https://support.google.com/webmasters/answer/66355?hl=en
"Cloaking refers to the practice of presenting different content or URLs to human users and search engines."
This is not the traditional use of the redirect. You would often see a completely different page shown to the bot vs the human using JS, versus your example of just showing two different links on a page. That said, Google is reading more and more JS these days http://googlewebmastercentral.blogspot.com/2011/11/get-post-and-safely-surfacing-more-of.html
Your issue is not about the 302 passing link equity, but if you want to get penalized for cloaking or not.
The other point that comes up is that since you are paying these bloggers to have this link on the site, I would call these paid links
https://support.google.com/webmasters/answer/66356?hl=en
I know you said, "these are organic links; we are now just paying for the referrals." Well, if Google decides this is worth penalizing, then you have no one to argue with but yourself.
As I see it, you have 2 choices
-
Nofollow the links and use them not for link juice but to pay for traffic.
-
Lose the redirects and use the links for ranking benefit (plus you can still get some traffic).
Honestly, seems like you were already getting organic links and traffic for free, so not sure why you would pay people for what you already had. I am sure this helped to get additional links, but you just need to consider the points above to see what is more important for your site.
Hope this helps!
-
-
RE: Correct linking to the /index of a site and subfolders: what's the best practice? link to: domain.com/ or domain.com/index.html ?
I think you have it correct there. I always like to end in a slash for index pages
http://inlinear.com/ - this is your home index page
http://inlinear.com/products/ - this is your index page for the /products/ folder/group
http://inlinear.com/products/page.php - this is a page within the /products/folder/group.
Hardly anyone sets up index web pages like index.php or index.htm anymore; they are really not needed, as they just make the URL longer. End in the slash and make sure that you are consistent about ending with that slash (vs dropping it off) when you link to your index pages.
You would need to test the script you mention that rewrites the URL. It looks like it is making sure that the index page ends in a slash, but I could be wrong.
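A quick way to test that (assuming the Python requests library; the inlinear.com URL is taken from the examples above) is to hit the non-slash version and confirm it 301s to the trailing-slash version:

import requests

resp = requests.head("http://inlinear.com/products", allow_redirects=False, timeout=10)
print(resp.status_code, resp.headers.get("Location", "(no redirect)"))
# You want to see a 301 pointing at http://inlinear.com/products/ (with the trailing slash).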
Side story - I have had a CMS that uses http://inlinear.com/products as the index page for http://inlinear.com/products/ and this creates all kinds of issues
-
Most people are used to not having an index page and the URL simply ending in a slash. So even if you had a non-slashed version as your index page, people would link to the slashed version and then you have to set up 301s to fix that. Otherwise you end up with all kinds of duplicate page issues.
-
I know Google Analytics looks at the slashes to group your content into reports.
So the example index page of http://inlinear.com/products
would NOT be included in reports with all the pages in the /products/ group
e.g. http://inlinear.com/products/page.php
http://inlinear.com/products/anotherpage.php
as /products is not "within" /products/. You then have a report on /products/ that leaves out the index page, and this is normally your most important page!
Good luck!
-
-
RE: Numbers (2432423) in URL
I have not seen numbers in a URL hurt SEO. I do have issues with commas in your URL as they do not play nice when you copy and paste them. I would use dashes.
If you are looking to try and become a Google News publisher, Google requires numbers in the URL
https://support.google.com/news/publisher/answer/68323?hl=en
I have also used numbers in the URL to help with my backup 301 redirects
Say you have a URL
/articles/34543-slug-goes-here.html
The number in the URL is the article ID in my database. If I get an error on any one of these URLs (and they happen):
/articles/34543-slug-goes-here.htm
/articles/34543-slug-goes-h
/articles/34543-slu
/articles/34543
/articles/34543-slug-goes-Here.html
etc
My system just has to match on the ID of the article and then will 301 to the correct URL.
I think the key is not to go overboard on the numbers and make them too many digits long, etc.
Also, if you are only depending on slugs to differentiate URLs then you have to have a system to make sure they are unique each time. Using an ID number in the URL ensures they are unique.
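Here is a rough sketch of the ID-based fallback redirect I describe above. The lookup table and URL pattern are made up for illustration; in my system the ID comes from the article database:

import re

canonical_slugs = {34543: "slug-goes-here"}   # article_id -> current slug (made-up data)

def redirect_target(requested_path):
    match = re.match(r"^/articles/(\d+)", requested_path)
    if not match:
        return None
    article_id = int(match.group(1))
    slug = canonical_slugs.get(article_id)
    if slug is None:
        return None   # unknown ID - let it 404/410
    return "/articles/{}-{}.html".format(article_id, slug)   # 301 the request here

for broken in ["/articles/34543-slu", "/articles/34543", "/articles/34543-slug-goes-Here.html"]:
    print(broken, "->", redirect_target(broken))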
-
RE: Deal with links that need login to view
You need to nofollow those links and noindex the pages behind them. Your user login sections are private and will not be ranked anyway; you don't need the spiders getting in there.
-
RE: Many high value links to printer-friendly versions of our pages
Use of the canonical link should solve all of your problems as far as the search engines are concerned. Bots that view the PF page will take the canonical directive, treat it much like a 301 redirect, and pass link equity, etc.
The other thing you need to consider is the use of rel=next and rel=prev for the article that you broke into parts. That would be used to "connect" all the sections of your depression article to each other.
Frankly, I do not know why you broke this page up into parts, as the article is not that long as a whole. There is some data out there showing that longer articles get more links http://www.quicksprout.com/2012/12/20/the-science-behind-long-copy-how-more-content-increases-rankings-and-conversions/
One issue on the print pages is that you have a meta noindex tag. That tells Google to drop the page from the index. You do not need that tag, as you are using the canonical to tell Google what the "parent" page is. If you are using rel=next/prev on both the regular and paginated pages, I would advise canonicaling P1 to P1, P2 to P2, and the All page to the All page vs canonicaling everything to the main page.
You are basically telling Google two things.
-
with rel next prev - here are the parts that make up the whole.
-
with the canonical - if they look at the PF version, here is the "real" page they want to pay attention to. This is what passes the link equity along to the proper pages. If you are getting links into the "print all page" this is the key page to have canonical linked to the "real" page.
What I would do is put this all into one page and then it is simple. Just canonical the PF page to the actual page and remove the noindex meta tag off the PF page.
If the above does not make sense, read each of the articles below about 3x and watch the video 3x; it should help.
Here are the Google Pages on canonicals
http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html
and rel next prev
http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
http://googlewebmastercentral.blogspot.com/2012/03/video-about-pagination-with-relnext-and.html
Good luck!
-
-
RE: Most recent blog post isn't being indexed?
I see the entry for the page in the XML now. I also just searched the URL in Google and see it there as well. Looks like this was just a timing issue.
-
RE: Most recent blog post isn't being indexed?
You can submit more than one sitemap in GWT, and Google will read both an XML sitemap and an RSS feed. I have Google reading an XML sitemap and it also found my RSS feeds. I would say, whichever feed you can control (XML or RSS), get that to your liking and add it in GWT for Google to chew on.
-
RE: Most recent blog post isn't being indexed?
Hello there!
I see that based on the date listed on the page you posted this yesterday on 7/24/2013. Depending on how often Google visits your site, it may not spider all your pages every day. It has just been 24 hours so you may need to give it more time.
One thing that can help speed this up is to make sure that you have this page listed in your sitemap
http://www.howlatthemoon.com/sitemap.xml
I did not see the page listed there and that is a common place that Google looks for new additions etc.
Good luck
-
RE: Pages to be indexed in Google
One key point on using robots.txt vs the noindex meta tag: it is not that the noindex meta tag is "superior", they just work differently.
If you use robots.txt, it will stop the spider from visiting that page, but it will not remove the page from the index. Also, if you have a page in robots.txt and that page has a 301 redirect, a canonical, or a meta noindex, Google will not see the page (due to the robots.txt directive) and therefore will not be able to act on the 301, the canonical, or the meta noindex.
A meta noindex, because the spider has to crawl the page to see it, tells Google to drop the page from the index and keep it out. This is key if you want the pages removed from the Google index.
The rule of thumb I use is that
-
If you have a page that is not in the Google index and you want to keep it out of the index, put that file in robots.txt.
-
If you have a page that is in the Google index and you want it removed, then use the noindex meta tag; do not put it into robots.txt, for the reasons mentioned above. Over time, once the pages are removed (and this may take a while depending on how often the page is crawled), you can put them into robots.txt for good measure.
-
-
RE: Rich Snippets: rel=”Author” CTR?
Yes, it helps. Some have seen a 150% improvement.
There is a great article on the Moz blog on how you even need to optimize the picture
http://moz.com/blog/google-author-photos
Don't wait - do it!
-
RE: Redirecting https pages
What you want to do is set up the redirect for all pages except those pages where you want to require a person to use https.
As an example, on a site I work on we have two areas, /cart/ and /account/, that represent when someone is checking out or when they are logged into their account and want to update payment options, respectively. You would exclude these folders from the https-to-http 301 redirect so that users can use those parts of the site in secure mode. The rest of the site you want to 301 from https to http. The reason you go through all this is that the http and https versions of the site, if spidered, would be considered duplicate content, and you want to prevent that.
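A quick spot check for that rule might look like the sketch below (hypothetical URLs, assumes the Python requests library): the https URLs should 301 to http, except for the /cart/ and /account/ areas:

import requests

test_urls = [
    "https://www.example.com/some-page/",        # should 301 to the http version
    "https://www.example.com/cart/checkout/",    # should stay on https (no redirect)
    "https://www.example.com/account/billing/",  # should stay on https (no redirect)
]

for url in test_urls:
    resp = requests.head(url, allow_redirects=False, timeout=10)
    print(url, "->", resp.status_code, "->", resp.headers.get("Location", "(no redirect)"))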
The other part of this would be that you do not want the search engines (usually) to spider the shopping cart and user login sections of a site. Nofollow noindex all links that lead to those pages and also put those folders in robots.txt - that will keep the bots out of there.
One other thing: make sure that your templates and content within the https sections of the site link out to the non-https URLs. The 301 will help with this, but why link to the wrong URL anyway?
All of that said, if your site is one that deals with highly sensitive information (medical and financial come to mind), then you may simply want to run the whole site as https. You would need to bulk up your server resources to handle this, as https can slow things down a little bit, but it can be done.
-
RE: Can I get posts from a blog host and put them on a private website ?
You mention 301s and canonicals not being available - are you absolutely sure? Try asking the host again; sometimes when you get another customer service person you may get a different answer. The canonical would be ideal.
I would not delete the old posts. They have links to them and get traffic. A couple of ideas come to mind. You could write up a short original summary of each post and put it on the old blog. Then add something like "if you want more information, this article has moved to..." and link to the new blog post. That would at least drive referral traffic and would take care of the duplicate issue. In the absence of a canonical link, having a link to the "original" does help give credit. The link to the new site would also work to give credit for the post to the new site. At the same time, this will be kind of messy, as when you change the content on the posts on the old site, you could potentially mess up the rankings of those pages.
I would test this out. Select 10-20 articles from the old site, and see what happens. As a comparison, take another 10-20 and just cut off the blog post after 300 words and then link to the full article on the new site.
I will be honest, as I read my suggestion, this will be kind of messy. Go back and push for that canonical. You can then link and copy and you will be totally clean. All you need is access to the HEAD portion of the pages. They look to have a "premium" option if you pay, maybe that would give you access?
Good luck!
-
RE: Is there a way to prevent Google Alerts from picking up old press releases?
Thanks for the post Keri.
Yep, the OCR option would still make the image approach for hiding them moot.
-
RE: E-Commerce: Random Cart ID Redirects
I would add: you also want to nofollow and noindex all links to any of your shopping cart pages. Ideally, if you have your cart pages in a given folder, you can disallow the whole folder and take care of things as a group.
-
RE: Ranking gcctld?
Here is the Google info on what the Geotargeting does
https://support.google.com/webmasters/answer/62399?hl=en
They would look at the extension, but also where you are hosted, location information on the site (eg your address) etc.
As far as who you target with the settings
"The tool handles geographic data, not language data. If you're targeting users in different locations—for example, if you have a site in French that you want users in France, Canada, and Mali to read—we don't recommend that you use this tool to set France as a geographic target. A good example of where it would be useful is for a restaurant website: if the restaurant is in Canada, it's probably not of interest to folks in France. But if your content is in French and is of interest to people in multiple countries/regions, it's probably better not to restrict it."
So, it depends on what users you want to target. If you truly want to be international, do not set it. I bet if your site is in English, you are hosted in the US, and your physical address is in the US, Google will show you as a US site.
-
RE: Using advance segments or primary dimensions?
Are you sure you are looking at visits vs unique visits? I have seen some issues with the custom reports messing up reporting a count of unique visitors. If you could include some screen shots of the two reports you are comparing that would help.
-
RE: Ranking gcctld?
One quick suggestion. Make sure in Google webmaster tools under site settings that when you verify the domain that you properly specify your location. I am betting that you are not based in the middle of the Indian Ocean!
Also there is a great answer here
http://moz.com/community/q/do-domain-extensions-such-as-com-or-net-affect-seo-value
-
RE: Is there a way to prevent Google Alerts from picking up old press releases?
Well, that is how to exclude them from an alert that they set up, but I think they are talking about anyone who might set up an alert that would find the PDFs.
One other idea I had that I think may help: if you set up the PDFs as images vs text, then it would be harder for Google to "read" the PDFs and therefore not catalog them properly for the alert, but then this would have the same net effect of not having the PDFs in the index at all.
Danielle, my other question would be - why do they give a crap about Google Alerts specifically? There have been all kinds of issues with the service, and if someone is really interested in finding out info on the company, there are other ways to monitor a website than Google Alerts. I used to use services that simply monitor a page (say the news release page) and let me know when it is updated; this was often faster than Google Alerts, and I would find stuff on a page before others who only used Google Alerts. I think they are being kind of myopic about the whole approach, and blocking for Google Alerts may not help them as much as they think. Way more people simply search on Google vs using Alerts.
-
RE: Is there a way to prevent Google Alerts from picking up old press releases?
Use robots.txt to exclude those files. Note that this takes them out of the web index in general, so they will not show up in searches.
You need to ask your client why they are putting things on the web if they do not want them to be found. If they do not want them found, don't put them up on the web.
-
RE: Benefit of using 410 gone over 404 ??
The 410 is supposed to be more definitive
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
404 is "not found" vs 410 is "gone
10.4.5 404 Not Found
The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent. The 410 (Gone) status code SHOULD be used if the server knows, through some internally configurable mechanism, that an old resource is permanently unavailable and has no forwarding address. This status code is commonly used when the server does not wish to reveal exactly why the request has been refused, or when no other response is applicable.
10.4.11 410 Gone
The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent. Clients with link editing capabilities SHOULD delete references to the Request-URI after user approval. If the server does not know, or has no facility to determine, whether or not the condition is permanent, the status code 404 (Not Found) SHOULD be used instead. This response is cacheable unless indicated otherwise.
The 410 response is primarily intended to assist the task of web maintenance by notifying the recipient that the resource is intentionally unavailable and that the server owners desire that remote links to that resource be removed. Such an event is common for limited-time, promotional services and for resources belonging to individuals no longer working at the server's site. It is not necessary to mark all permanently unavailable resources as "gone" or to keep the mark for any length of time -- that is left to the discretion of the server owner.
That said, I had a similar issue on a site with a couple thousand pages and went with the 410, not sure it really made things disappear any faster than the 404 (that I noticed).
I just found a post from John Mueller from Google
https://productforums.google.com/forum/#!topic/webmasters/qv49s4mTwNM/discussion
"In the meantime, we do treat 410s slightly differently than 404s. In particular, when we see a 404 HTTP result code, we'll want to confirm that before dropping the URL out of our search results. Using a 410 HTTP result code can help to speed that up. In practice, the time difference is just a matter of a few days, so it's not critical to return a 410 HTTP result code for URLs that are permanently removed from your website, returning a 404 is fine for that. "
So, use the 410; as a matter of a few days, you may see a difference with 30k pages.
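For what it's worth, here is a minimal sketch (it assumes Flask, and the retired paths are made up) of explicitly answering 410 Gone for URLs you have permanently removed while everything else still falls through to a normal 404:

from flask import Flask, abort

app = Flask(__name__)

GONE_PATHS = {"/old-news/some-retired-story", "/old-blog/defunct-post"}   # made-up examples

@app.route("/<path:page>")
def catch_all(page):
    if "/" + page in GONE_PATHS:
        abort(410)   # permanently gone, no forwarding address
    abort(404)       # plain "not found" for everything else

if __name__ == "__main__":
    app.run()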
All of that said, are you sure that with a site that big you would not need to 301 some of those pages? If you have a bunch of old news items or blog posts, would you not want to redirect them to the new URLs for those same assets? It seems like you should be able to recover some of them - at least your top traffic pages, etc.
Cheers
-
RE: Temporarily shut down a site
Appreciate the positive comment EGOL!
-
RE: What means a back door link. Please explain and I will give you credit
Just some advice: I would not search for "backdoor link" on Urban Dictionary. Nuff said. Yosepr, read the link that SEO 5 Team posted; it explains it all.
Basically, you link to a page on a site that then links to a page on the site you are "back door" linking to. It is an indirect way to link to the other site, as you link to a page that links to them (and then vice versa).
-
RE: Temporarily shut down a site
Thank you - please mark my response as Good Answer if it helps.
Cheers!
-
RE: Correct way to block search bots momentarily... HTTP 503?
You can do that, but it is less specific about what you are actually doing with your server. The 503 plus a Retry-After header lets the spiders know exactly what you are doing (no confusion). Thank you for the clever remark below.
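For example, a minimal sketch of that maintenance response (assuming Flask) would send the 503 along with a Retry-After header so the spiders know when to come back:

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def maintenance(path):
    resp = make_response("Down for maintenance - back soon.", 503)
    resp.headers["Retry-After"] = "3600"   # ask crawlers to come back in an hour
    return resp

if __name__ == "__main__":
    app.run()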