350 (Out of 750) Internal Links Listed by Webmaster Tools Are Dynamically Generated - Best to Remove?
-
Greetings MOZ Community:
When visitors enter real estate search parameters on our commercial real estate website, the parameters are somehow getting indexed as internal links in Google Webmaster Tools. About half of our 700 internal links are derived from these dynamic URLs.
It seems to me that these dynamic alphanumeric URL links would dilute the value of the remaining static links.
Are the dynamic URLs a major issue? Are they high priority to remove?
The dynamic URLs look like this:
Yet these URLs do not show up when a site: search is done on Google!
-
I believe your problem is in your robots.txt file. You're attempting a wildcard block of the search results pages with this line:
Disallow: /listings/search?*
However, the asterisk ought to precede the question mark. If you want to block all /listings/search URLs that include a question mark (?), do this:
Disallow: /listings/search*?
Try that and see what happens. I've also found Aaron Wall's article on robots.txt to be helpful. Good luck!
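To see why the asterisk placement matters, here is a minimal sketch of Google-style wildcard matching (the `rule_matches` helper is hand-rolled for illustration, since some robots.txt parsers do not honor the * and $ extensions):

```python
import re

def rule_matches(rule: str, path: str) -> bool:
    """Google-style robots.txt matching: '*' matches any characters,
    a trailing '$' anchors the end, and matching starts at the path start."""
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.match(pattern, path) is not None

# The corrected rule blocks any /listings/search URL carrying a query string:
print(rule_matches("/listings/search*?", "/listings/search?city=NYC&min=100"))  # True
print(rule_matches("/listings/search*?", "/listings/search"))                   # False
```

The same matcher shows that Google's generic catch-all `Disallow: /*?` would block every parameterized URL on the site, not just the search pages.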
Also, adding a "noindex, nofollow" meta tag to the <head> section does not necessarily keep a web page out of Google's index. When you think about it, you realize Google has to crawl the page to see that meta tag in the first place. Robots.txt is much stronger.
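That point can be made concrete: a crawler only discovers a meta robots directive after it has already fetched and parsed the page. A minimal sketch of that parsing step (the `MetaRobotsParser` class and sample markup are illustrative):

```python
from html.parser import HTMLParser

class MetaRobotsParser(HTMLParser):
    """Collects the directives of any <meta name="robots"> tag on a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives += [d.strip() for d in a.get("content", "").split(",")]

page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
p = MetaRobotsParser()
p.feed(page)
print(p.directives)  # ['noindex', 'nofollow']
```

The fetch has to happen before the directives are known, which is exactly why robots.txt (read before crawling) and meta robots (read after crawling) behave so differently.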
-
Hi Dennis:
The pages that display these search results are set to noindex, nofollow. They do not get indexed by Google in search results; they just somehow register as about 200 internal links in Google Webmaster Tools.
How would I get these removed as links if they are not getting indexed by Google as pages? And if I did get them removed, is there a way to keep these links from being re-indexed?
I have attached an image of what these internal links look like in Google Webmaster Tools.
Thanks, Alan
-
I have seen both sides of the coin: some sites get affected, most don't.
I prefer to clean it up, though, and make sure those URLs can't get indexed. I do this as part of our on-page SEO standards. We don't proceed until it's dealt with, so it's a high priority.
It's cleaner, gets crawled more easily, and is more efficient. It doesn't hurt.
Related Questions
-
Should we optimise our internal links?
Hi again, We recently had a technical search audit done by a specialist agency, and they discovered a number of internal links that cause redirects. The agency has recommended we update all of these links to point directly to the destination so we don't lose out on link equity. We'd just like to know if you think this would be a worthwhile use of our time. Our web team seems to think that returning a 301 to a crawler means the crawler will stop indexing the original URL and instead index the redirect destination? Thanks all. Clair
Intermediate & Advanced SEO | iescape2 -
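The cleanup the agency recommends can be sketched: given a (hypothetical) map of 301 redirects crawled from the site, each internal link can be resolved to its final destination so no hop is wasted. A minimal sketch, with illustrative URLs:

```python
def resolve(url, redirects):
    """Follow a 301 redirect map until a final URL (or a loop) is reached."""
    seen = set()
    while url in redirects and url not in seen:
        seen.add(url)
        url = redirects[url]
    return url

# Hypothetical redirect map discovered by the audit (old URL -> target URL):
redirects = {"/old-page": "/new-page", "/new-page": "/final-page"}
print(resolve("/old-page", redirects))  # /final-page
print(resolve("/contact", redirects))   # /contact (no redirect, unchanged)
```

Rewriting every internal link to the resolved URL removes the redirect chains the crawler would otherwise follow.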
Site not showing up in search - was hacked - huge comment spam - cannot connect Webmaster tools
Hi Moz Community, A new client approached me yesterday for help with their site, which used to rank well for its designated keywords but now is not doing well. Actually, they are not on Google at all. It's like they were removed by Google. There is no reference to them when searching with "site:url". I investigated further and discovered the likely problem . . . 26,000 spam comments! All these comments have been removed now. I cleaned up this WordPress site pretty well. However, I now want to connect it to Google Webmaster Tools. I have admin access to the WP site, but not FTP. So I tried using Yoast to connect. Google failed to verify the site. So then I used a file-uploading console to upload the Google HTML code instead. I checked that the code is there. And Google still fails to verify the site. It is as if Google is so angry with this domain that they have wiped it completely from search and refuse to have any dealings with it at all. That said, I did run the "malware" or "dangerous content" check with them, which did not bring back any problems. I'm leaning towards the idea that this is a "cursed" domain in Google and that my client's best course of action is to build her business around another domain instead, and then point the old domain to the new domain, hopefully without attracting any bad karma in the process (advice on that step would be appreciated). Anyone have an idea as to what is going on here?
Intermediate & Advanced SEO | AlistairC0 -
Dynamic XML Sitemap Generator
Has anyone used a Dynamic XML Sitemap Generator tool? Looking for recommendations!
Intermediate & Advanced SEO | Matchnode0 -
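For what it's worth, a dynamic sitemap is straightforward to sketch with Python's standard library; the `build_sitemap` helper and the example URL below are illustrative, not a recommendation of any specific tool:

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Render (loc, lastmod) pairs as a sitemaps.org-format XML string."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc, lastmod in urls:
        u = ET.SubElement(urlset, "url")
        ET.SubElement(u, "loc").text = loc
        ET.SubElement(u, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

xml = build_sitemap([("https://example.com/", "2024-01-01")])
print(xml)
```

In practice the `urls` list would be generated from the site's database or CMS on each request, which is all "dynamic" means here.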
Do I have too many internal links, diluting link juice to less important pages?
Hello Mozzers, I was looking at my homepage and subsequent category landing pages on my eCommerce site and wondered whether I have too many internal links, which could in effect be diluting link juice away from the pages I need it to flow to. My homepage has 266 links, of which 114 (43%) are duplicate links, which seems a bit too much to me. One of my major competitors, a national company, has just launched a new site design, and they are only showing popular categories on their home page, although all categories are accessible from the menu navigation. They only have 123 links on their home page. I am wondering whether, if I were to stop showing every category on my homepage (some of them don't really get any sales) and only concentrate on the popular ones like my competitor does, the link juice flowing downwards in the site would be concentrated, as I would have fewer links for it to flow through? Is that basically how it works? Are there any negatives with regards to duplicate links on either the home or category landing pages? We are showing the categories both as visual boxes to select and as selectable links on the left of the page. Just wondered how duplicate links would be treated? Any thoughts greatly appreciated, thanks Pete
Intermediate & Advanced SEO | PeteC120 -
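Counting duplicate links like the 114 mentioned above is easy to automate; a minimal sketch using only the standard library (the `LinkCounter` class and sample markup are hypothetical):

```python
from collections import Counter
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Tallies every href on a page so duplicates stand out."""
    def __init__(self):
        super().__init__()
        self.hrefs = Counter()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.hrefs[href] += 1

# Same category linked from a visual box and from the left-hand navigation:
page = '<a href="/widgets">Widgets</a><nav><a href="/widgets">Widgets</a></nav>'
c = LinkCounter()
c.feed(page)
dupes = {h: n for h, n in c.hrefs.items() if n > 1}
print(dupes)  # {'/widgets': 2}
```

Running this against the rendered homepage HTML gives the same duplicate percentage the question cites, without hand-counting.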
Unnatural links to your site—impacts links
I got this message in my Google Webmaster Tools: "Unnatural links to your site—impacts links". Does anyone know the difference between "Unnatural links to your site—impacts links" and "Unnatural links to your site"? Thank you, Sina
Intermediate & Advanced SEO | SinaKashani0 -
Best Way to Optimize 38 Local Directory Listings in Major Directories
Hi Folks, I am trying to figure out the best way to get our company's 38 U.S. locations into the major local directories. To start, I'd like to get us listed in the major ones: Google, Yahoo, Bing, and Yelp. I do have the resources on staff to do everything manually, so I don't necessarily need a service like Yext (but would also welcome any opinions on that offering if anyone can share them). But from what I know, every time you try to claim a local listing within each platform, you have to confirm your existence there somehow, whether it be by a mailed postcard or some sort of automated call. Considering that we want to manage all social and local platforms here at corporate, how can we do this? I am not physically at these locations, but I'm sure it is possible to manage everything through one account. The addresses will be local, but the phone numbers on each local profile will route to our customer service here at corporate, because the local locations are mostly administrative. In other words, business is booked through corporate and carried out at local destinations. Thoughts/comments?
Intermediate & Advanced SEO | CSawatzky
I want to do what's best for SEO and also don't want to harm anything or our link equity. Thanks,
Pedram0 -
How does Google treat internal links with rel="nofollow"?
Today, I was reading about nofollow on Wikipedia. The following statement is over my head, and I am not able to understand it properly: "Google states that their engine takes "nofollow" literally and does not "follow" the link at all. However, experiments conducted by SEOs show conflicting results. These studies reveal that Google does follow the link, but does not index the linked-to page, unless it was in Google's index already for other reasons (such as other, non-nofollow links that point to the page)." That passage is about indexing and ranking of anchor text for specific keywords on external links; I am aware of that part. Such a page may not appear in relevant results for any keyword on a Google web search. But what about internal links? I have defined rel="nofollow" on too many internal links. I have an archived blog post by Randfish on the same subject, where I read the following question. Q. Does Google recommend the use of nofollow internally as a positive method for controlling the flow of internal link love? [In 2007] A: Yes – webmasters can feel free to use nofollow internally to help tell Googlebot which pages they want to receive link juice from other pages
Intermediate & Advanced SEO | CommercePundit
(Matt's precise words were: The nofollow attribute is just a mechanism that gives webmasters the ability to modify PageRank flow at link-level granularity. Plenty of other mechanisms would also work (e.g. a link through a page that is robots.txt'ed out), but nofollow on individual links is simpler for some folks to use. There's no stigma to using nofollow, even on your own internal links; for Google, nofollow'ed links are dropped out of our link graph; we don't even use such links for discovery. By the way, the nofollow meta tag does that same thing, but at a page level.) Matt gave an excellent answer to the following question. [In 2011] Q: Should internal links use rel="nofollow"? A: Matt said: "I don't know how to make it more concrete than that." I use nofollow for each internal link that points to an internal page that has the meta name="robots" content="noindex" tag. Why should I waste Googlebot's resources and those of my server if in the end the target must not be indexed? As far as I can tell, and for years now, this has not caused any problems at all. For internal page anchors (links with the hash mark in front, like "#top"), the answer is "no", of course. I am still using nofollow attributes on my website. So, what is the current trend? Is it still necessary to use the nofollow attribute for internal pages?
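As a practical aside, auditing which internal links currently carry nofollow is easy to sketch; the `NofollowAudit` class and the sample markup below are hypothetical:

```python
from html.parser import HTMLParser

class NofollowAudit(HTMLParser):
    """Splits internal (root-relative) links into followed vs. nofollowed."""
    def __init__(self):
        super().__init__()
        self.followed, self.nofollowed = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        a = dict(attrs)
        href = a.get("href")
        if not href or not href.startswith("/"):
            return
        rels = a.get("rel", "").lower().split()
        (self.nofollowed if "nofollow" in rels else self.followed).append(href)

page = '<a href="/keep">x</a><a href="/search?q=1" rel="nofollow">y</a>'
p = NofollowAudit()
p.feed(page)
print(p.followed, p.nofollowed)  # ['/keep'] ['/search?q=1']
```

Running an audit like this before removing the attributes at least shows the scale of what "too many internal nofollows" actually means on a given site.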