Is robots.txt case sensitive? Please suggest
-
Hi, I have seen a few URLs with duplicate titles under HTML Improvements.
Can I disable one of the below URLs in robots.txt?
/store/Solar-Home-UPS-1KV-System/75652
/store/solar-home-ups-1kv-system/75652
If I disable this one:
Disallow: /store/Solar-Home-UPS-1KV-System/75652
will search engines still crawl this: /store/solar-home-ups-1kv-system/75652?
I'm a little confused about case sensitivity. Please suggest whether to go ahead with the robots.txt change or not.
-
Hi, the duplicate URLs have already accumulated some link equity; what is going to happen to it?
-
Actually, you have just one option to keep them out of the index - the second one. The first will still keep them in the index if Google can find them. I currently have roughly 27k URLs indexed that were blocked via robots.txt from the start (generated with a time-based parameter; yeah: ouch).
Those results do not usually appear in "normal" search but can be forced (currently you may try site:grimoires.de inurl:fakechecknr and show skipped results to see the effect of that). So basically I'd advise against using robots.txt - it does not prevent indexing, only the visiting/reading of that page.
Regards
Nico
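Following up on the point above: the signal that actually keeps a page out of the index is noindex, delivered either as a meta robots tag or an X-Robots-Tag response header, and crawlers can only see it if the page is not blocked in robots.txt. Here is a minimal sketch (Python standard library only; the helper name has_noindex is my own for illustration, not an existing API) of checking whether a page carries such a signal:

```python
# Sketch: detect a "noindex" signal in a page's HTML or response headers.
from html.parser import HTMLParser

class MetaRobotsParser(HTMLParser):
    """Records whether any <meta name="robots"> tag contains 'noindex'."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            name = (d.get("name") or "").lower()
            content = (d.get("content") or "").lower()
            if name == "robots" and "noindex" in content:
                self.noindex = True

def has_noindex(html_text, headers=None):
    """True if the page opts out of indexing via header or meta tag."""
    # An X-Robots-Tag header works even for non-HTML resources (PDFs, images).
    if headers and "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    parser = MetaRobotsParser()
    parser.feed(html_text)
    return parser.noindex

print(has_noindex('<meta name="robots" content="noindex,follow">'))  # True
print(has_noindex('<meta name="robots" content="index,follow">'))    # False
```

Either form of the signal only takes effect if the crawler is allowed to fetch the page in the first place.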
-
Hi Abdul,
Yes, it is case sensitive.
Keep in mind that you shouldn't have many pages like that.
The first thing you should do is eliminate those duplicate pages. If you can't eliminate them, you have two ways to ask Googlebot not to index them:
1- By robots.txt, with a 'Disallow:' instruction
2- By a meta robots tag, with <meta name="robots" content="noindex"> in the <head> of the page.
Hope it helps.
GR
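To see the case sensitivity in action, Python's standard-library robots.txt parser applies the same literal, case-sensitive prefix matching described in the answers above, so it makes a quick sanity check. A small sketch using the exact rule and URLs from the question:

```python
# Sketch: robots.txt Disallow rules are case-sensitive prefix matches.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /store/Solar-Home-UPS-1KV-System/75652",
])

# The mixed-case URL matches the rule, so it is blocked from crawling:
print(rp.can_fetch("*", "/store/Solar-Home-UPS-1KV-System/75652"))  # False

# The lower-case variant does NOT match, so crawlers may still fetch it:
print(rp.can_fetch("*", "/store/solar-home-ups-1kv-system/75652"))  # True
```

In other words, disallowing the mixed-case URL leaves the lower-case one fully crawlable; a rule blocks only URLs that match it character for character from the start of the path.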
Related Questions
-
Suggested Screaming Frog configuration to mirror default Googlebot crawl?
Hi All, Does anyone have a suggested Screaming Frog (SF) configuration to mirror a default Googlebot crawl? I want to test my site and see if it will return 429 "Too Many Requests" to Google. I have set the User Agent as Googlebot (Smartphone). Is the default SF Menu > Configuration > Speed > Max Threads 5 and Max URLs 2.0 comparable to Googlebot? Context: I had tried NetPeak SEO Spider, which did a nice job and had a cool feature that would pause a crawl if it got too many 429s. Long story short, the B2B site threw 429 errors when there should have been no load, on a holiday weekend at 1:00 AM.
Intermediate & Advanced SEO | gravymatt-se
-
Quotery.com Suggestions?
Hello, I'm hoping to find some advice on how to proceed with a site I started last year — Quotery. I've spent a ton of time and money on it, tried my best to build value in what is typically a copycat niche, and adhered to Google's SEO recommendations (I believe, anyway). Yet, subpar sites rank well above mine. Now I'm wondering if I should find a buyer for the site and cut my losses. Does anyone have any suggestions on what the next steps might be in order to rank higher? Here is what we've tried/accomplished so far: We were mentioned on Netted. We built a well-designed and easy to navigate site that works on all devices. We've added topic descriptions and images, unique author descriptions and pictures, and exclusive picture quotes. We built a well-designed and useful WordPress plugin, and kept the backlinks nofollow (note: our main competitor also has a subpar WordPress plugin with dofollow backlinks, yet they don't get penalized for it). We've published curated blog posts, along with infographics that have been shared millions of times #1, #2, #3. We've built up significant social media profiles on Facebook, Twitter, and elsewhere. We even tried hiring 97thFloor (highly recommended on Moz) for 4 months, although most of their efforts seemed spammy and/or very basic to me (and cost a fortune), so we decided to take SEO into our own hands afterwards. We've added sources, pictures, and relevant information for thousands of quotes (e.g. example seen here). We started working on user profiles, and had plans for much more down the road. However, despite these efforts, sites like BrainyQuote dominate Google's rankings. So is it truly value that will earn you rankings... or is it still all about gaming the system? Of course, any suggestions in my case specifically would be much appreciated.
Intermediate & Advanced SEO | JasonMOZ
-
301's, Mixed-Case URLs, and Site Migration Disaster
Hello Moz Community, After placing trust in a developer to build & migrate our site, the site launched 9 weeks ago and has been one disaster after another. Sadly, after 16 months of development, we are building again, this time we are leveled-up and doing it in-house with our people. I have 1 topic I need advice on, and that is 301s. Here's the deal. The newbie developer used a mixed-case version for our URL structure. So what should have been /example-url became /Example-Url on all URLs. Awesome right? It was a duplicate content nightmare upon launch (among other things). We are re-building now. My question is this, do we bite the bullet for all URLs and 301 them to a proper lower-case URL structure? We've already lost a lot of link equity from 301ing the site the first time around. We were a PR 4 for the last 5 years on our homepage, now we are a PR 3. That is a substantial loss. For our primary keywords, we were on the first page for the big ones, for the last decade. Now, we are just barely cleaving to the second page, and many are 3rd page. I am afraid if we 301 all the URLs again, a 15% reduction in link equity per page is really going to hurt us, again. However, keeping the mixed-case URL structure is also a whammy. Building a brand new site, again, it seems like we should do it correctly and right all the previous wrongs. But on the other hand, another PR demotion and we'll be in line at the soup kitchen. What would you do?
Intermediate & Advanced SEO | yogitrout1
-
Am I on the right track? Any suggestions, please?
Hi: It's now been 3 months since I started doing SEO on my website myself (my website is like prchecker.info, giving users one online service). My two primary keywords have 450,000 and 100,000 exact USA searches. When I started, my goal was to rank both keywords on the second page during the first year; now, 3 months in, after creating a few quality backlinks (guest posting and comments on relevant forum topics), both keywords rank on the third and fifth page. Any suggestions for creating quality backlinks that might help me? Should I continue with guest posting?
Intermediate & Advanced SEO | Khaledmoalla
-
I have two sitemaps which partly duplicate each other - one is blocked by robots.txt but I can't figure out why!
Hi, I've just found two sitemaps - one of them is .php and represents part of the site structure on the website. The second is a .txt file which lists every page on the website. The .txt file is blocked via robots exclusion protocol (which doesn't appear to be very logical as it's the only full sitemap). Any ideas why a developer might have done that?
Intermediate & Advanced SEO | McTaggart
-
Robots.txt
What would be a perfect robots.txt file? My site is propdental.es. Can I just place: User-agent: * Or should I write something more?
Intermediate & Advanced SEO | maestrosonrisas
-
Do you lose Link Equity when using RanDom CasE?
I've seen a site linking internally using caps from the home page to sub-pages, while the rest of the site links in lower-case. Are there any disadvantages in terms of link juice or duplication in doing this? Example link from homepage: /blah/Doctors.aspx Example link from other internal page: /blah/doctors.aspx The site is on a Windows-based server, not Linux. Thanks in advance
Intermediate & Advanced SEO | 3wh
-
Does using robots.txt to block pages decrease search traffic?
I know you can use robots.txt to tell search engines not to spend their resources crawling certain pages. So, if you have a section of your website that is good content, but is never updated, and you want the search engines to index new content faster, would it work to block the good, unchanged content with robots.txt? Would this content lose any search traffic if it were blocked by robots.txt? Does anyone have any available case studies?
Intermediate & Advanced SEO | nicole.healthline