How to fix these unwanted URLs?
-
Right now I have a WordPress one-page website, but Google is also showing wp-content URLs. Kindly check the query below in Google:
site:http://baltimoreelite.com/
How can I fix this issue?
-
Great job, Mark! I can see from this end that nearly all of those unwanted URLs have already dropped out of the results. That's far quicker than even I expected! And the ones that aren't gone are leading to a 403 Forbidden page, which is great.
One last thing you can do if you want. Because you are on HostGator, they are displaying their custom 403 error page, which has their branding all over it (nasty, kinda ugly). You could create your own simple 403 error page, add your own basic branding to it, and, for instance, add a line that says something like "You don't have permission to view this page, or it is blocked for security reasons. Drop by the home page [link to home page] to find what you're looking for, or to conduct a search."
This basic page can be used to replace the one HostGator provides by default, so any visitors who hit it by accident will still feel like they are on your site and will have a suggestion for what to do next. Your hosting control panel will have instructions for how and where to provide your own custom error pages.
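If it helps, here's a minimal sketch of how that can be done with an ErrorDocument directive in your .htaccess (the file name 403.html and the page wording are just placeholders; adapt them to your own branding):

ErrorDocument 403 /403.html

That single line tells Apache to serve your own page for 403 errors. You'd then upload something like this as 403.html at the root of your site:

<!DOCTYPE html>
<html>
<head><title>Access Denied</title></head>
<body>
<h1>You don't have permission to view this page</h1>
<p>It may be blocked for security reasons. Drop by the <a href="http://baltimoreelite.com/">home page</a> to find what you're looking for.</p>
</body>
</html>

Keep in mind some hosts manage custom error pages through the control panel rather than .htaccess, so check HostGator's documentation first.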
Hope that last little tweak's useful.
Paul
-
Thank you, Paul. All is well now.
-
If I were you, Mark, I'd add it right at the top of your htaccess file. I'd also add in a descriptive comment to make the reason for the directive clear. So:
# BEGIN Remove ability to read directory indexes
Options -Indexes
# END Remove ability to read directory indexes
These lines would be inserted right at the top of the htaccess file. I would also warn, though, that I've had situations where caching plugins have overwritten such directives when they update the htaccess themselves. If that happens, you may need to try inserting it after # END WPSuperCache and before # BEGIN WordPress.
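To make that placement concrete, the fallback version of the file would look roughly like this (the WPSuperCache and WordPress marker comments are written by the caching plugin and by WordPress respectively; I've replaced their actual rules with placeholder comments):

# BEGIN WPSuperCache
# ...caching rules managed by the plugin...
# END WPSuperCache

# BEGIN Remove ability to read directory indexes
Options -Indexes
# END Remove ability to read directory indexes

# BEGIN WordPress
# ...rewrite rules managed by WordPress...
# END WordPress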
Hope that works for you?
Paul
-
Andy and Nishada - don't forget... Adding robots.txt disallows will do nothing to get already indexed URLs out of the search index after the fact.
Paul
-
You have a much bigger problem than what can be solved just with a robots.txt file, Mark.
All of those URLs are showing up because a misconfiguration of your theme installation (likely caused by the theme developer) is allowing full display of the contents of each of those directories. In addition to polluting your search results, as you've noticed, it's also a pretty major security risk. You can see this in action by going to http://baltimoreelite.com/wp-content/themes/sintia/wpv_theme/assets/css/ What should happen when you go to that URL is that you see a blank page or receive a 403 Forbidden error. Instead, you're seeing a full listing of the directory contents - bad news.
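If you want to double-check this from the command line, you can ask curl to print just the HTTP status code the server returns (entirely optional; a browser works just as well):

curl -s -o /dev/null -w "%{http_code}\n" http://baltimoreelite.com/wp-content/themes/sintia/wpv_theme/assets/css/

A 200 there means the directory listing is exposed; once the fix below is in place, you should see a 403 instead.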
Since I don't know your hosting configuration, the easiest way to fix this issue is to add a line to the .htaccess file at the root of your site. This should correct all such instances. You need to add this line:
Options -Indexes
If you're not familiar with it, the .htaccess file is a plain text file which you can edit with any text editor. You'll need to use an FTP program or the file manager in your hosting control panel to access it at the root of your site. (You may also need to enable "show hidden files" in your program.) I always recommend backing up the existing file before editing, just in case. Add this new directive on its own line, with a blank line between it and any other lines in the file. It can go at or near the top of your htaccess file.
Now, getting those URLs out of the search index is going to take a bit more work. You'll want to implement the robots.txt exclusions like Andy suggested, and then you'll need to go into Google Webmaster Tools and use the Remove URLs tool to specifically request removal of the directories you blocked with robots.txt and want removed from the search results. (The robots.txt is a critical part of this process, as Google requires it to be in place in order to process the removal requests.)
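For example, if you wanted to block only the directories that are actually polluting your results, rather than everything under wp-content, the entries might look like this (the paths here are illustrative; match them to the URLs you actually see indexed):

User-agent: *
Disallow: /wp-content/themes/
Disallow: /wp-content/plugins/

Each of those blocked paths is then what you'd submit to the Remove URLs tool.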
This, combined with the htaccess edit mentioned above, should keep those URLs from showing up again in the future.
Hope that all makes sense. If not, be sure to ask!
Paul
-
Hi Mark,
I block the same directories on my site (which is also a single page). Here is the content of my robots.txt file:
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/

Sitemap: http://www.inetseo.co.uk/sitemap.xml.gz
-Andy
-
Hi Mark,
You need to tell crawlers not to index that content by modifying the robots.txt file. Below is a good link with some examples and instructions:
http://stackoverflow.com/questions/17029811/how-to-set-up-robots-txt-file-for-wordpress