Can you see the 'indexing rules' that are in place for your own site?
-
By 'indexing rules' I mean the conditions that determine whether or not a given page will be indexed.
If you can see them - how?
-
Unfortunately, that would be specific to your own platform and server-side code. When you look at the SEOmoz source code, you're either going to see a nofollow or you're not. The code that drives that is on our servers and is unique to our build (PHP/Cake, I think).
You'd have to dig into the source code that generates the robots.txt file. I don't think you can have a fully dynamic robots.txt (it has to have a .txt extension), so there must be a piece of code that generates a new robots.txt file, probably on a timer. It could be called something similar, like robots.php, robots.aspx, etc. Just a guess.
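Since the asker's platform is unknown, here is a minimal sketch of what such a timed generator might look like, in Python with an in-memory SQLite table standing in for the real user database. The table, the column names, the `/users/` URL pattern, and the 10-file threshold are all assumptions, not anything confirmed in the thread:

```python
import sqlite3

MIN_FILES = 10  # hypothetical threshold below which a profile stays blocked

def build_robots(conn):
    """Emit robots.txt text that disallows every under-threshold profile."""
    lines = ["User-agent: *"]
    rows = conn.execute(
        "SELECT username FROM users WHERE files_submitted < ?", (MIN_FILES,)
    )
    for (username,) in rows:
        lines.append(f"Disallow: /users/{username}")
    return "\n".join(lines) + "\n"

# demo with a throwaway in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, files_submitted INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 25), ("bob", 3)])
print(build_robots(conn))
```

A cron job (the "timer" mentioned above) would call something like this and write the result over the static robots.txt file.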
FYI, dynamic robots.txt could be a little dicey - it might be better to do this with a meta noindex tag in the <head> of the user profile pages. That would also avoid the timer approach. The pages would dynamically noindex themselves as they're created.
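As a sketch of that per-page approach, a template helper could decide for each profile whether to emit the tag. The function name and threshold here are hypothetical, chosen only to illustrate the idea:

```python
MIN_FILES = 10  # hypothetical threshold for a profile to become indexable

def robots_meta(files_submitted: int) -> str:
    """Return a noindex meta tag until the user qualifies, then nothing."""
    if files_submitted < MIN_FILES:
        return '<meta name="robots" content="noindex">'
    return ""  # qualified profiles carry no robots meta tag at all

print(robots_meta(3))   # under threshold: tag is emitted
print(robots_meta(12))  # over threshold: empty string
```

The page template would call the helper while rendering the <head>, so no regeneration timer is needed.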
-
To hopefully clarify what I'm talking about, I want to provide this example: SEOmoz will remove the nofollow attribute from the first link in your profile if you get 200 mozpoints.
This is a set rule which I believe is applied automatically once a user reaches that minimum. On my site, a similar rule exists where the meta noindex tag will be removed from a user page if you submit 10 'files'.
Other rules like this were created, and I need to find out what they are. How?
-
On my site, a rule was created where user profile pages are blocked from robots unless the user has submitted a minimum number of 'files'. This was done to ensure that only quality user profile pages are indexed, not spam or untouched profiles.
There have been other rules like this created but I don't know what they are and I'd like to find out.
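Since rules like these live in the server-side code, one practical way to rediscover them is to search the codebase for robots-related keywords. A rough sketch in Python - the file extensions, the keyword list, and the sample rule it finds are all assumptions:

```python
import re
import tempfile
from pathlib import Path

# keywords that usually mark an indexing rule in server-side code
PATTERN = re.compile(r"noindex|nofollow|robots", re.IGNORECASE)

def find_robot_rules(root):
    """Return (file, line_no, line) for every robots-related reference."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".php", ".ctp", ".py", ".html"}:
            continue
        for no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if PATTERN.search(line):
                hits.append((str(path), no, line.strip()))
    return hits

# demo on a throwaway tree containing one hypothetical rule
tmp = tempfile.mkdtemp()
Path(tmp, "profile.php").write_text(
    '<?php if ($files_submitted < 10) echo "noindex"; ?>'
)
for fname, no, line in find_robot_rules(tmp):
    print(fname, no, line)
```

Each hit points at a file and line worth reading; the conditions around those lines are the rules you're trying to recover.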
-
Hi David,
Do you mean how robots.txt is configured, and whether the robots file is blocking a certain page from being indexed? If so, yes. If the file is complex and you're not sure whether it blocks a particular page, you can go into Google Webmaster Tools, which has a robots.txt utility where you input a particular URL and it tells you whether the robots.txt file you're using (or proposing) blocks that URL.
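The same check can be done locally with Python's standard-library urllib.robotparser; the robots.txt contents and URLs below are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# a hypothetical robots.txt that blocks one under-threshold profile
robots_txt = """\
User-agent: *
Disallow: /users/bob
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# can_fetch answers the same question as the Webmaster Tools utility
print(rp.can_fetch("*", "https://example.com/users/bob"))    # blocked profile
print(rp.can_fetch("*", "https://example.com/users/alice"))  # allowed profile
```

This is handy for testing a proposed robots.txt before deploying it.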
If you mean whether a page is high-quality enough for a search engine to choose to index it? No - that's part of the algorithm, and none of the major engines are that nice and open.