Photogallery and Robots.txt
-
Hey everyone
SEOmoz is telling us that there are too many on-page links on the following page:
http://www.surfcampinportugal.com/photos-of-the-camp/
Should we stop it from being indexed via Robots.txt?
best regards and thanks in advance...
Simon
-
Hey Ryan
Thanks a lot for your help and suggestions. I will try to get more links from rapturecamps.com to this domain. Your idea about adding the link is not bad either; I don't know why I didn't come up with that one.
Thanks again anyway...
-
Hi Joshua. Since the domain is so new, the tool is basically telling you that you don't have much "link juice" to go around, so you're easily going to have more links on the page than Google will consider important. This is natural, and as your new domain gains links from around the web, you'll be fine. I noticed that www.rapturecamps.com is well established, so sending a few more relevant links directly from there will help with the situation.
Also, this is a clever offer that you could post to surfcampinportugal.com as well:
Add a Link and Get Discount
Got your own website, blog, forum?
If you add a link to Rapture Camps website you will receive a discount for your next booking.
Please contact us for further information.
-
Hey Aran
Thanks for your fast reply, and nice to hear you like the design.
best regards
-
Personally, I wouldn't stop it from being indexed. It's not like you're being spammy with the on-page links.
P.S. Awesome website, really love the photography in the background images.
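For reference, if the gallery ever did need to be kept out of search engines, the robots.txt rule would be a single disallow, with the path taken from the URL in the question:

```text
User-agent: *
Disallow: /photos-of-the-camp/
```

That said, robots.txt only blocks crawling; a blocked URL can still appear in results if other sites link to it. A meta robots noindex tag on the page itself is the more reliable deindexing tool, and per the answers above, neither is really needed here.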
-
Related Questions
-
Can I robots.txt an entire site to get rid of Duplicate content?
I am in the process of implementing Zendesk and will have two separate Zendesk sites with the same content to serve two separate user groups (for the same product -- B2B and B2C). Zendesk does not give me the option to change canonicals (or meta tags). If I robots.txt one of the Zendesk sites, will that cover me for duplicate content with Google? Is that a good option? Is there a better option? I will also have to change some of the canonicals on my site (mysite.com) to use the Zendesk canonicals (zendesk.mysite.com) to avoid duplicate content. Will I lose ranking by changing the established page canonicals on my site to point to the new subdomain (the only option offered through Zendesk)? Thank you.
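For reference, blocking an entire site means serving this robots.txt at that subdomain's root (a sketch of the standard directive, not Zendesk-specific advice):

```text
User-agent: *
Disallow: /
```

One caveat: this stops crawling, but Google can still index blocked URLs it discovers through links (they just appear without a snippet). For duplicate-content consolidation, canonicals or noindex are generally the safer tools when the platform allows them.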
On-Page Optimization | RoxBrock0
-
Do I need a robots meta tag on the homepage of my site?
Is it recommended to include a robots meta tag on the homepage of your site? I would like Google to index and follow my site. I am using WordPress and noticed my homepage does not include this meta tag, so I am wondering whether I should add it.
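For reference, the tag in question would sit in the page's head and look like this; note that "index, follow" is already the default behavior for search engines, so omitting it changes nothing:

```html
<head>
  <!-- Optional: this is the default, so leaving it out is harmless -->
  <meta name="robots" content="index, follow">
</head>
```

The tag only matters when you want a non-default directive such as noindex or nofollow.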
On-Page Optimization | asc760
-
Robots.txt file
Does it serve any purpose if we omit the robots.txt file? If the spider has to read all the pages anyway, why do we insert a robots.txt file at all?
On-Page Optimization | seoug_20050
-
Does Google respect User-agent rules in robots.txt?
We want to use an inline linking tool (LinkSmart) to cross link between a few key content types on our online news site. LinkSmart uses a bot to establish the linking. The issue: There are millions of pages on our site that we don't want LinkSmart to spider and process for cross linking. LinkSmart suggested setting a noindex tag on the pages we don't want them to process, and that we target the rule to their specific user agent. I have concerns. We don't want to inadvertently block search engine access to those millions of pages. I've seen googlebot ignore nofollow rules set at the page level. Does it ever arbitrarily obey rules that it's been directed to ignore? Can you quantify the level of risk in setting user-agent-specific nofollow tags on pages we want search engines to crawl, but that we want LinkSmart to ignore?
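As a sketch, the user-agent targeting the question describes would keep the rules scoped so Googlebot is unaffected. The bot token and path below are assumptions for illustration; LinkSmart's documentation would need to confirm the actual user agent string their crawler sends:

```text
# Applies only to the LinkSmart crawler (user-agent token assumed for illustration)
User-agent: LinkSmartBot
Disallow: /archive/

# All other crawlers, including Googlebot, see no restrictions
User-agent: *
Disallow:
```

Crawlers obey the most specific user-agent group that matches them, so Googlebot would follow the second group and ignore the first. This is lower-risk than page-level tags, since a robots.txt group addressed to one bot is invisible to the others.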
On-Page Optimization | lzhao0
-
Robots.txt: excluding URL
Hi, spiders crawl some dynamic URLs on my website (example: http://www.keihome.it/elettrodomestici/cappe/cappa-vision-con-tv-falmec/714/ + http://www.keihome.it/elettrodomestici/cappe/cappa-vision-con-tv-falmec/714/open=true) as different pages, resulting in duplicate content, of course. What is the syntax to disallow these kinds of URLs in robots.txt? Thanks so much
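Assuming the only difference between the duplicates is the trailing open=true, a wildcard pattern could target those variants. Wildcards and the $ end-of-URL anchor are supported by Google and Bing, though they are not part of the original robots.txt standard:

```text
User-agent: *
# Block any URL whose path ends in "open=true"
Disallow: /*open=true$
```

A rel="canonical" from the open=true variant to the clean URL would be the alternative fix, and it consolidates link signals rather than just blocking the crawl.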
On-Page Optimization | anakyn0
-
What's the best practice for implementing a "content disclaimer" that doesn't block search robots?
Our client needs a content disclaimer on their site. This is a simple "If you agree to these rules then click YES, if not click NO" gate, and clicking NO pushes you back to the home page. I have a gut feeling that this may upset the search robots. Any advice? R/ John
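One common approach (a minimal sketch, not a tested production pattern): render the disclaimer as a client-side overlay on top of the content instead of redirecting visitors to a gate page. The full page stays in the HTML, so search robots, which never click YES, still crawl it normally. The cookie name here is an assumption for illustration:

```javascript
// Pure helper: decide whether the overlay is needed, based on the cookie
// string ("disclaimerAccepted" is a hypothetical cookie name).
function needsDisclaimer(cookieString) {
  return !/(^|;\s*)disclaimerAccepted=1(;|$)/.test(cookieString || "");
}

// Browser usage (commented out so the helper stays testable outside a browser):
// if (needsDisclaimer(document.cookie)) {
//   document.getElementById("disclaimer-overlay").style.display = "block";
// }
// document.getElementById("disclaimer-yes").addEventListener("click", () => {
//   document.cookie = "disclaimerAccepted=1; path=/; max-age=31536000";
//   document.getElementById("disclaimer-overlay").style.display = "none";
// });
```

Since the gate runs in JavaScript and the content is never withheld from the crawl, there is no robot-blocking redirect to worry about. Whether an overlay satisfies the client's legal requirements is a separate question for their lawyers.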
On-Page Optimization | TheNorthernOffice790
-
What reasons exist to use noindex / robots.txt?
Hi everyone. I realise this may appear to be a bit of an obtuse question, but that's only because it is an obtuse question. What I'm after is a cataloguing of opinion - what reasons have SEOs had to implement noindex or add pages to their robots.txt on the sites they manage?
On-Page Optimization | digitalstream0