
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO

Discuss site health, structure, and other technical SEO strategies.


  • Hi, I have a WordPress site that was ranking #1 for my main key phrase. Then I noticed that my site had plummeted in the rankings. Investigating, I found the cause to be a hack: my code is full of content for, and backlinks to, Viagra sites! How do I best go about recovering my ranking and making sure the offending sites get penalized?

    | vibelingo
    0

  • I have a client that is a franchise. Each franchise location has a different office address. Is it bad for me to do the following? COMPANY NAME of CITY ... .... There are about 10 franchisees. Should I use just the company name? Is the city in there going to be a negative?

    | thomas.wittine
    0

  • We had a client (a dentist) hire another marketing firm (without our knowledge), and due to some changes that firm made to their Google page, the client's website lost a #1 ranking, was disassociated from its Places page, and was dropped to result #10, below all the local results. We quickly made some changes and were able to bring them back up to #2 within a few days, and restored their Google page after about a week. However, the tracking/forwarding phone number the other marketing company was using still shows up on the page, despite our attempts to contact Google by updating the business in Places management and by submitting the phone number as incorrect while providing the correct one. Because the client fired that marketing company, the number will no longer be active in a few days, which of course matters a great deal for a dental office. Has anyone else had problems with the speed of updating Google Places/Plus pages for businesses? What's the most efficient way to make changes like this?

    | tvinson
    0

  • Question: if we had hundreds of images duplicated on a site, but at different URLs (with "-1" tacked onto the end), is that a Panda issue? Will we get penalized? I know duplicate content (web pages) pretty well, but duplicate files? That I'm unsure of.

    | M_D_Golden_Peak
    0

  • I was just looking through Ahrefs and found that the non-www version of my domain has more backlinks than the www version. Is there a way to forward all the non-www URLs to the www domain, and is there any harm or effect on SERP rankings?

    | chandubaba
    0
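A common way to consolidate the two hostnames (a sketch only, assuming an Apache server with mod_rewrite, and using example.com as a placeholder for the real domain) is a rule like this in the site's .htaccess:

```apache
RewriteEngine On
# If the request arrived on the bare (non-www) host...
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
# ...permanently redirect it to the same path on the www host
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

A 301 passes most link equity, so pointing the non-www variant at the www domain this way is generally treated as safe canonicalization rather than something that harms rankings.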

  • Hello! Today when I do a site search for my blog on Google, I can't see my blog home page, though all my posts and pages are still in the Google results. Today I published a test post, and it was indexed by Google in less than 3 minutes. Still, I can't see any traffic changes. On the 10th of April (yesterday), when I performed a site search (site:mydomain.com), I saw my site in the Google search results. Today I installed the Ultimate SEO plug-in and deactivated the WordPress SEO plug-in, and a few hours later I saw this issue. (I'm not saying this is the cause, I'm just mentioning it.) In addition, I have never used any black-hat SEO methods to improve my ranking. My site: http://goo.gl/6mvQT Any help is really appreciated!

    | Godad
    0

  • Hi, I'm working with someone who recently had two websites redesigned. The old permalink structure consisted of domain/year/month/date/post-name. Their developer changed the new permalink structure to domain/post-name, but apparently he didn't redirect the old URLs to the new ones, so we're finding that links from external sites result in 404 errors (once I remove the date in the URL, the links work fine). Each site has 3-4 years' worth of blog posts, so there are quite a few that would need to be changed. I was thinking of using the Redirection plugin - would that be the best way to fix this sitewide on both sites? Any suggestions would be appreciated. Thanks, Carolina

    | csmm
    0
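If both sites run on Apache, one alternative to per-URL rules is a single regex redirect in .htaccess that strips the date segments. This is a sketch only, assuming permalinks of the exact form /2012/05/04/post-name:

```apache
RewriteEngine On
# Match /YYYY/MM/DD/post-name and 301 it to /post-name
RewriteRule ^[0-9]{4}/[0-9]{2}/[0-9]{2}/(.+)$ /$1 [R=301,L]
```

The Redirection plugin also supports regex redirects, so the same single pattern can be entered there instead of creating hundreds of individual rules.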

  • Yesterday, one of my sites got this message from WMT: "Over the last 24 hours, Googlebot encountered 1 errors while attempting to retrieve DNS information for your site. The overall error rate for DNS queries for your site is 100.0%." I did a fetch as Googlebot and everything seems fine.  Also, the site is not seeing a decrease in traffic. This morning, a client for which I am doing some unnatural links work emailed me about a site of his that got this message: "Over the last 24 hours, Googlebot encountered 1130 errors while attempting to access your robots.txt. To ensure that we didn't crawl any pages listed in that file, we postponed our crawl. Your site's overall robots.txt error rate is 100.0%." His robots.txt looks fine to me. Is anyone else getting messages like this?  Could it be a WMT bug?

    | MarieHaynes
    1

  • If you have no physical premises (i.e. operate online) but you only serve clients in a specific area, what is best practice for targeting a local area? I know G. Places can be used if you have a premises, and that .co.uk / hosting server location make a difference, but beyond that... ? Thanks!

    | underscorelive
    1

  • Hello! I am working on a website with the following structure: example.com/sub1/sub2/sub3. The page "example.com/sub1" does not exist (I know this is not the optimal architecture to have this be a nonexistent page). But someone might type that address, so I would like it to redirect to example.com/sub1/sub2/sub3. I tried the following redirect: redirect 301 /sub1 http://example.com/sub1/sub2/sub3. But with this redirect in place, if I go to example.com/sub1, I get redirected to example.com/sub1/sub2/sub3/sub2/sub3 (the redirect just inserts extra subdirectories). If someone types "example.com/sub1" into a browser, I would like "example.com/sub1/sub2/sub3" to come up. Is this possible? Thank you!

    | nyc-seo
    0
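The loop described above happens because Apache's Redirect directive is prefix-based: /sub1/sub2/sub3 itself begins with /sub1, so the remainder (/sub2/sub3) is appended to the target again. A sketch of a fix (assuming mod_alias is available; sub1/sub2/sub3 are the placeholders from the question) anchors the pattern so only the bare /sub1 triggers it:

```apache
# Fires only on exactly /sub1 or /sub1/, not on /sub1/sub2/sub3
RedirectMatch 301 ^/sub1/?$ http://example.com/sub1/sub2/sub3
```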

  • It's saying Roger can't communicate with my site. I've contacted iPage, which is the host, and they say it's on your end. Please let me know if you need any more info from me... Thanks, Tom 404-447-2868

    | NextlevelMD
    0

  • Client has an old unused site 'A' which I've discovered during my backlink research. It contains the source code below, which frames the client's 'proper' site B inside the old unused URL A in the browser address bar. Quick question - will Google penalise website B, which is the one I'm optimising? Should the client be using a redirect instead? <frameset border='0' frameborder='0' framespacing='0'> <frame src="http://www.clientwebsite.co.ukB" frameborder="0" noresize="noresize" scrolling="yes"> <noframes>Please go to http://www.clientwebsite.co.ukB</noframes> </frameset> Thanks, Lu.

    | Webrevolve
    0

  • Hi all, I've been running an online wine store in Switzerland for a month and have been working hard on SEO (I love learning about it). Anyway, for a couple of years prior to launching the store, I had been running a wine blog whose articles are ranking well in Google. I now want to link the two. My questions are: A) will the addition of the blog (store.com/blog) contribute to the store's domain authority (currently, the blog authority is higher than the site authority)? B) technically, can I 301 the whole blog to store.com/blog? Any help and tips would be appreciated. Thank you!

    | fkupfer
    0

  • Hi, How do we 301 an https version of a domain to a page on another website when the security certificate has run out? We have 301 redirected the http version, but IT is stuck on how to handle the expired https. Thanks

    | Houses
    0

  • I am thinking about additional ways to repurpose blog posts throughout my website. I have a blog - http://www.domainname.com/blog - and I would like to use the blog categories, which are aligned with the site structure, to create on-page RSS feeds for my regular web pages. Is there anything here that might not be good for SEO? Thank you

    | evereffect
    0

  • Working with a business that is having some real issues. They had some client information that was showing up in the meta description. Personal phone numbers for example. Our developers removed all the information from the pages in question two days ago, but we are still seeing the info in the meta description. Any idea how long this will take to be recrawled and fixed? Anything I can do to get recrawled sooner? Also, this is only happening in Bing/Yahoo and not in Google. Thanks for any help you can provide!

    | PGD2011
    0

  • Hi, We launched a client's website around 7th January 2013 (http://rollerbannerscheap.co.uk). We originally constructed the website on a development domain (http://dev.rollerbannerscheap.co.uk), which was active for around 6-8 months (the dev site was unblocked from search engines for the first 3-4 months, but then blocked again) before we migrated dev --> live. In late Jan 2013 we changed the robots.txt file to allow search engines to index the website. A week later I accidentally logged into the DEV website and also changed its robots.txt file to allow search engines to index it. This obviously caused a duplicate content issue, as both sites were identical. I realised what I had done a couple of days later and blocked the dev site from the search engines with the robots.txt file. Most of the pages from the dev site were then de-indexed from Google apart from 3: the home page (dev.rollerbannerscheap.co.uk) and two blog pages. The live site has 184 pages indexed in Google, so I thought the last 3 dev pages would disappear after a few weeks. I checked back in late February and the 3 dev site pages were still indexed in Google. I decided to 301 redirect the dev site to the live site to tell Google to rank the live site and to ignore the dev site content. I also checked the robots.txt file on the dev site and this was blocking search engines too. But still the dev site is being found in Google wherever the live site should be found.
When I do find the dev site in Google, it displays: Roller Banners Cheap » admin - dev.rollerbannerscheap.co.uk/ - "A description for this result is not available because of this site's robots.txt – learn more." This is really affecting our client's SEO plan and we can't seem to remove the dev site or rank the live site in Google. Please can anyone help?

    | SO_UK
    0

  • I'm having some duplicate content issues with Google. I've already got my .htaccess file working just fine as far as I can tell - rewriting works great, and by using the site you'd never end up on a page with /index.php. However, I notice that on ANY page of the site you could add /index.php and get the same page, i.e. www.mysite.com/category/article and www.mysite.com/index.php/category/article would both return the same page. How can I 301 (or something similar) all /index.php pages to the non-index.php version? I have no desire for any page on my site to have index.php in it; there is no use for it. I'm having quite a hard time figuring this out. Again, this is basically just for the robots - the URLs users see are perfect, and I've never had an issue with that. It's just SEOmoz reporting duplicate content, and I've verified that to be true.

    | b18turboef
    1
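One hedged .htaccess sketch for stripping the /index.php segment (assuming Apache with mod_rewrite; the THE_REQUEST check keeps the rule from looping if the CMS internally rewrites clean URLs back through index.php):

```apache
RewriteEngine On
# Only act when the browser literally requested /index.php...
RewriteCond %{THE_REQUEST} \s/index\.php(/|\s) [NC]
# ...and 301 it to the same path without the index.php segment
RewriteRule ^index\.php(/(.*))?$ /$2 [R=301,L]
```

With this in place, a request for /index.php/category/article is permanently redirected to /category/article, while the CMS's own internal routing through index.php is untouched.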

  • I do not understand this point in my campaign set-up. They are the same site as far as I understand. Can anyone help please? Quote from SEOmoz: "We have detected that the domain www.neuronlearning.eu and the domain neuronlearning.eu both respond to web requests and do not redirect. Having two "twin" domains that both resolve forces them to battle for SERP positions, making your SEO efforts less effective. We suggest redirecting one, then entering the other here." thanks John

    | johnneuron
    0

  • I hear that when moving your posts from one website to another, if it is done incorrectly it can hurt your ranking in search engines. With this in mind, does changing from one theme to another affect a website's ranking?

    | johnmoon6
    1

  • Our web store, http://www.audiobooksonline.com/index.html, has struggled with duplicate content issues for some time. One aspect of duplicate content is a page like this: http://www.audiobooksonline.com/out-of-publication-audio-books-book-audiobook-audiobooks.html. When an audio book title goes out of publication, we keep the page at our store and display this out-of-publication page whenever a visitor attempts to visit a specific title that is OOP. There are several thousand OOP pages. Would Google consider these OOP pages duplicate content?

    | lbohen
    0

  • We are wondering the best way to redirect the traffic from a site that will no longer exist. The scenario: our client wants to discontinue the website http://www.animalcarepackaging.com/. We'd like to redirect the traffic from this site to an internal page on our client's other website, http://www.glenroy.com/packaging/, as this internal page is the most appropriate to the content that appears on animalcarepackaging.com (as opposed to the entire site glenroy.com). Options we are considering:
    Option 1: Keep hosting animalcarepackaging.com and add a 301 redirect for all pages to glenroy.com/packaging/. Our concern with this option is that Google/Bing will see animalcarepackaging.com as a gateway, which could hurt glenroy.com.
    Option 2: Keep hosting animalcarepackaging.com, add a 301 redirect so all pages are sent to glenroy.com/packaging/, AND file a change of address with Google and Bing. We believe this will allow people who have bookmarked animalcarepackaging.com to go to glenroy.com/packaging/, while people searching for animalcarepackaging.com will go to glenroy.com's home page. We would augment this by posting a message on the homepage of animalcarepackaging.com notifying users that the site will be discontinued and that the info can be found at glenroy.com/packaging.
    Option 3: File a change of address with Google/Bing and send all traffic to glenroy.com (rather than an internal page). Post information on the homepage of animalcarepackaging.com that the site will be discontinued on X date, and that info about animalcarepackaging.com will be found at glenroy.com/packaging.
    Looking for feedback on our options and suggestions on how this can be handled.

    | TopFloor
    0

  • I have at least three duplicate main pages on my website: www.augustbullocklaw.com, www.augustbullocklaw.com/index, and augustbullocklaw.com. I want the first one, www.augustbullocklaw.com, to be the main page. I put this code on the index page and uploaded it to my site: <link href="http://www.augustbullocklaw.com/canonical-version-of-page/" rel="canonical" /> This code now appears on all three pages shown above. Did I do this correctly? I surmise that www.augustbullocklaw.com is pointing to itself. Is that OK? I don't know how to take the canonical code off the page that I want to be the main page (I don't know how to remove it from www.augustbullocklaw.com but leave it on www.augustbullocklaw.com/index and augustbullocklaw.com). Thanks

    | Augster99
    0

  • Hi, I recently asked a company to do some work on my website, but I am not sure what they've done is right.
    What I wanted was the .html file extensions to be removed, like
    /ash-logs.html to /ash-logs
    and also index.html redirected to www.timports.co.uk
    I have run crawl diagnostics and have duplicate page content and 32 duplicate page titles. This is doing my head in - please help. This is what is in the .htaccess file:
    <IfModule pagespeed_module>
    ModPagespeed on
    ModPagespeedEnableFilters extend_cache,combine_css,collapse_whitespace,move_css_to_head,remove_comments
    </IfModule>
    <IfModule mod_headers.c>
    Header set Connection keep-alive
    </IfModule>
    <IfModule mod_rewrite.c>
    Options +FollowSymLinks -MultiViews
    </IfModule>
    DirectoryIndex index.html
    RewriteEngine On
    # Rewrite valid requests on .html files
    RewriteCond %{REQUEST_FILENAME}.html -f
    RewriteRule ^ %{REQUEST_URI}.html?rw=1 [L,QSA]
    # Return 404 on direct requests against .html files
    RewriteCond %{REQUEST_URI} .html$
    RewriteCond %{QUERY_STRING} !rw=1 [NC]
    RewriteRule ^ - [R=404]
    AddCharset UTF-8 .html
    # <FilesMatch "\.(js|css|html|htm|php|xml|swf|flv|ashx)$"> #SetOutputFilter DEFLATE #</FilesMatch>
    <IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/gif "access plus 1 years"
    ExpiresByType image/jpeg "access plus 1 years"
    ExpiresByType image/png "access plus 1 years"
    ExpiresByType image/x-icon "access plus 1 years"
    ExpiresByType image/jpg "access plus 1 years"
    ExpiresByType text/css "access 1 years"
    ExpiresByType text/x-javascript "access 1 years"
    ExpiresByType application/javascript "access 1 years"
    ExpiresByType image/x-icon "access 1 years"
    </IfModule>
    <Files 403.shtml>
    order allow,deny
    allow from all
    </Files>
    redirect 301 /PRODUCTS http://www.timports.co.uk/kiln-dried-logs
    redirect 301 /kindling_firewood.html http://www.timports.co.uk/kindling-firewood.html
    redirect 301 /about_us.html http://www.timports.co.uk/about-us.html
    redirect 301 /log_delivery.html http://www.timports.co.uk/log-delivery.html
    redirect 301 /oak_boards_delivery.html http://www.timports.co.uk/oak-boards-delivery.html
    redirect 301 /un_edged_oak_boards.html http://www.timports.co.uk/un-edged-oak-boards.html
    redirect 301 /wholesale_logs.html http://www.timports.co.uk/wholesale-logs.html
    redirect 301 /privacy_policy.html http://www.timports.co.uk/privacy-policy.html
    redirect 301 /payment_failed.html http://www.timports.co.uk/payment-failed.html
    redirect 301 /payment_info.html http://www.timports.co.uk/payment-info.html

    | ulefos
    1

  • In Google Webmaster Tools I updated my sitemap on March 6th. It contains around 22,000 links, but Google fetched only 5,300 of them for a long time.
    I waited a month with no improvement in the Google index, so on April 6th we uploaded a new sitemap (1,200 links in total), but only 4 links have been indexed.
    Why is Google not indexing my URLs? Does this affect our ranking in the SERPs? How many links is it advisable to submit in a sitemap for a website?

    | Rajesh.Chandran
    0

  • I have a client that has a robots.txt file that is blocking an entire subdomain, entirely by accident. Their original solution, not realizing the robots.txt error, was to submit an XML sitemap to get their pages indexed. I did not think this tactic would work, as the robots.txt would take precedence over the XML sitemap. But it worked... I have no explanation as to how or why. Does anyone have an answer to this, or any experience with a website that has had a clear Disallow: / for months, yet somehow has pages in the index?

    | KCBackofen
    0

  • When running a report, it says we have lots of duplicate content. We are an e-commerce site with about 45,000 SKUs. Products can be in multiple departments, so the same products can show up on different pages of the site. Because of this, the reports show multiple products with duplicate content. Is this an issue with Google and site ranking? Is there a way to get around it?

    | shoedog
    1

  • I just have a quick question about using schema.org markup. Is there any situation where you'd want to include both author & video markup on the same page?

    | justinnerd
    0

  • We began using SEOmoz a few months ago and have been busy cleaning up some of our warnings and errors. One of the errors that has been an issue is "too many on-page links". I am trying to correct this and I am wondering how SEOmoz counts these links. For instance, we have links to many of our product categories in a drop-down from our main menu, and those same links are listed in our footer. Does this get counted as two links or only one? If two, should we make one of the links nofollow, or how would you best suggest correcting this? Our website is www.unikeyhealth.com. Since the menu and the footer appear on virtually every page of our site, correcting this will quickly sort out the problem. Thanks for any advice.

    | unikey
    0

  • Hi, I have been looking into ways to apply schema.org formatting to a website that has a collection of restaurant menus. I found related schema.org formats (e.g. recipes, reviews, products etc.) but cannot seem to find a format set out for menu items. I am considering using the product format (http://schema.org/Product) for each menu item on a restaurant menu, though I'm not sure whether this would add much value. I was hoping Google would then know more about the data collected on the website and be able to show it more accurately to end users when they search for this data (restaurant menus), with rich snippets in SERP results. Thanks in advance for your feedback.

    | blackrails
    0

  • I can visit and view every page of a site (and can also see the source code), but Google, SEOmoz and others say anything other than the home page is a 404, and Google won't index the sub-pages. I have checked robots.txt and .htaccess and can't find anything wrong. Is this a DNS or server setting problem? Any ideas? Thanks, Fitz

    | FitzSWC
    0

  • How do I solve the problem of Google seeing both domain.com and domain.com/index.htm when I only have one file? Will a canonical tag work? If so, which? Or are there any other solutions for a novice? I learned from previous blogs that it needs to be done by the hosting service, but Yahoo has no solution.

    | Kurtyj
    0

  • How would you 301 redirect an entire folder to a specific file within the same domain? Scenario: www.domain.com/folder to www.domain.com/file.html Thanks for your input...

    | dhidalgo1
    1
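Assuming Apache, a minimal sketch using mod_alias's regex-based RedirectMatch (folder and file.html are the placeholders from the question) would be:

```apache
# Send /folder itself and anything beneath it to the single target file
RedirectMatch 301 ^/folder(/.*)?$ http://www.domain.com/file.html
```

RedirectMatch is used rather than plain Redirect because a prefix-based Redirect would append the matched remainder onto file.html, producing broken URLs like /file.html/sub-page.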

  • Hi, How important is it to submit a change of address in WMT? I ask because I am having problems doing it, so I wondered if it was worth the hassle of trying to fix it. I am getting the error: "We couldn't verify website.co.uk. To submit a change of address, website.co.uk must be verified using the same method as www.website.co.uk. Add website.co.uk to your account and verify ownership, then try again." I have looked on the web to try and find an answer and have come across 2 suggestions: 1) You might have lost the verification with the redirect. If you used a meta tag on the home page, the home page is now redirecting. If you had uploaded a verification text file, that file is probably now gone and redirecting as well. You probably need to re-verify the site: either re-upload the text file and configure it not to redirect (may be difficult) or use the DNS verification method. 2) You need to verify the non-www version of the website because that's the way Google likes it. Not sure why solution 2 would be necessary, but it does seem to be what WMT is getting at. Because the site already redirects, 1 would then come into play. Is it worth persevering with? IT will be getting a long list of stuff to do from me as it is.... Thanks all

    | Houses
    0

  • Hi, I have approximately 100 old blog posts that I believe are still of interest to visitors, which I'd potentially like to noindex because they may be viewed poorly by Google, while keeping them on our website. A lot of the content in the posts is similar to one another (as we blog about the same topics quite often), which is why I believe it may be in our interest to noindex older posts that we have newer content for in more recent posts. Firstly, does that sound like a good idea? Secondly, can I use Google Tag Manager to implement noindex tags on specific blog pages? It's a hassle to get the webmaster to add the code, and I've found no mention on the usual SEO blogs of whether you can implement such tags with Tag Manager. Or is there a better way to implement noindex tags en masse? Thanks!

    | TheCarnage
    0

  • On April 7th SEOmoz captured 6,000 301 redirects on my site, but I can't seem to understand how SEOmoz finds these links. Example: http://www.iphonegadget.dk/dk/apple-tilbeh-r-36/ipad-tilbeh-r-219/bilholder-239/index-2-4a.html makes a 301 redirect to the page SEOmoz reports: http://www.iphonegadget.dk/dk/apple-tilbeh-r-36/ipad-tilbeh-r-219/bilholder-239/index-2.html. The weird thing is that both URLs work, but browsing my site in a normal manner, this link would never be created that way. The -4a at the end of the link is not the normal link structure on the site and has never been like that before. So how does SEOmoz create that link: http://www.iphonegadget.dk/dk/apple-tilbeh-r-36/ipad-tilbeh-r-219/bilholder-239/index-2-4a.html? Also, Google only has the right one: http://www.iphonegadget.dk/dk/apple-tilbeh-r-36/ipad-tilbeh-r-219/bilholder-239/index-2.html. People would normally come to the category via this URL: http://www.iphonegadget.dk/dk/apple-tilbeh-r-36/ipad-tilbeh-r-219/bilholder-239/ and page 2 would be http://www.iphonegadget.dk/dk/apple-tilbeh-r-36/ipad-tilbeh-r-219/bilholder-239/index-2.html and NOT http://www.iphonegadget.dk/dk/apple-tilbeh-r-36/ipad-tilbeh-r-219/bilholder-239/index-2-4a.html. Can anyone find out what is going on?

    | noerdar
    0

  • I have added many products to my e-commerce site, but most of them are still not indexed by Google. I submitted a sitemap a month ago, but the indexing process has been very slow. Is there any way to get Google to index my products or pages immediately? I can ping, but constantly pinging is not a good idea. Any more suggestions?

    | chandubaba
    1

  • I have gone through several revisions of my site. We used to have only static pages in HTML: I had search-engine-optimization.html changed to seo-philippines.html, then changed to /seo-philippines/. I 301 redirected all of them whenever I changed the filenames. This is over the course of 6 years' worth of link building, and I'm wondering if it has an effect, because our rankings go down every time we do this.

    | optimind
    0

  • I'm currently working on a retail site that has a product category page with a series of pages related to each other, i.e. page 1, page 2, page 3 and a Show All page. These are being identified as duplicate content/title pages. I want to resolve this by applying pagination to the pages so that crawlers know they belong to the same series. In addition, I also want to apply canonicalization to point to one page as the one true result that rules them all. All pages have equal weight, but I am leaning towards pointing at the 'Show All' page. The catch is that products consistently change, meaning I am sometimes dealing with 4 pages including Show All, and other times only one page (...so actually I should point to page 1 to play it safe). Silly question, but is there a hard and fast rule for setting up this lead page?

    | Oxfordcomma
    0

  • I am doing SEO for my WordPress blog, but now I am starting SEO for my recently launched e-commerce site, where I sell electronics products. I want to know how I can do the SEO so that I can reach at least a top-10 position in Google India. Second, how can I avoid the duplicate content that comes from copying manufacturer content? Please help

    | chandubaba
    0

  • Hi, We have a main www website with a standard sitemap.  We also have a m. site for mobile content (but m. is only for our top pages and doesn't include the entire site).  If a mobile client accesses one of our www pages we redirect to the m. page.  If we don't have a m. version we keep them on the www site.  Currently we block robots from the mobile site. Since our m. site only contains the top pages, I'm trying to determine the boost we might get from creating a mobile sitemap.  I don't want to create the "partial" mobile sitemap and somehow have it hurt our traffic. Here is my plan update m. pages to point rel canonical to appropriate www page (makes sure we don't dilute SEO across m. and www.) create mobile sitemap and allow all robots to access site. Our www pages already rank fairly highly so just want to verify if there are any concerns since m. is not a complete version of www?

    | NicB1
    0

  • I spent an hour this afternoon trying to convince my CEO that having thousands of orphaned pages is bad for SEO. His argument was "If they aren't indexed, then I don't see how it can be a problem." Despite my best efforts to convince him that thousands of them ARE indexed, he simply said "Unless you can prove it's bad and prove what benefit the site would get out of cleaning them up, I don't see it as a priority." So, I am turning to all you brilliant folks here in Q & A and asking for help...and some words of encouragement would be nice today too 🙂 Dana

    | danatanseo
    0

  • Up until recently I had robots.txt blocking the indexing of my PDF files, which are all manuals for products we sell. I changed this last week to allow indexing of those files, and now my Webmaster Tools crawl report is listing all my PDFs as not found. What is really strange is that Webmaster Tools is listing an incorrect link structure: "domain.com/file.pdf" instead of "domain.com/manuals/file.pdf". Why is Google indexing these particular pages incorrectly? My robots.txt has nothing else in it besides a disallow for an entirely different folder on my server, and my .htaccess is not redirecting anything in regards to my manuals folder either. Even for the outside links in the crawl report that supposedly point to this 404 file, when I visit those 3rd-party pages they have the correct link structure. Hope someone can help, because right now my not-founds are up in the 500s and that can't be good 🙂 Thanks in advance!

    | Virage
    0

  • What do I do with 404 errors reported in Webmaster Tools that are actually URLs where users are clicking a link that requires them to log in (so they get sent to a login page)? What's the best practice in these cases? Thanks in advance!

    | joshuakrafchin
    0

  • With the current updates in the SEO world, how critical is link diversity? We are revamping our site, planning to add many new pages, and planning to build links to relevant pages with relevant anchor-text keywords. We are also planning to add relevant H1, H2 and H3 tags, with meta description tags and keyword-rich content specific to each page. Any advice?

    | INN
    0

  • Hello, I have several websites, www.example1.com and www.example2.com, and I would like to create a single shop page that all users who want to buy are redirected to, something like shop.example.com. Each website has its own catalog section, but with this new system, if a user clicks on buy he will be redirected to shop.example.com, like this: www.example1.com/catalog - click on BUY - redirected to shop.example.com; www.example2.com/catalog - click on BUY - redirected to shop.example.com. shop.example.com will be on a different server than the other 2 websites. Is it OK if I do a 301 from one server to the other, or can I be penalized? Thank you

    | andromedical
    0

  • Hi, Our crawl report is showing duplicate content. For some of the report I am clear about what to do, but for other parts I am not. Some of the duplicate content arises from a 'theme=default' parameter on the end of the URL. Is this version of a page necessary for people to see when they visit the site (like a theme=print page is), in which case I think we should use a canonical tag, or is it not necessary, in which case we should use a 301? Thanks

    | Houses
    0

  • Hi, We launched a client's website around 7th January 2013 (http://rollerbannerscheap.co.uk). We originally constructed the website on a development domain (http://dev.rollerbannerscheap.co.uk), which was active for around 6-8 months (the dev site was unblocked from search engines for the first 3-4 months, but then blocked again) before we migrated dev --> live. In late Jan 2013 we changed the robots.txt file to allow search engines to index the website. A week later I accidentally logged into the DEV website and also changed its robots.txt file to allow search engines to index it. This obviously caused a duplicate content issue, as both sites were identical. I realised what I had done a couple of days later and blocked the dev site from the search engines with the robots.txt file. Most of the pages from the dev site were then de-indexed from Google apart from 3: the home page (dev.rollerbannerscheap.co.uk) and two blog pages. The live site has 184 pages indexed in Google, so I thought the last 3 dev pages would disappear after a few weeks. I checked back in late February and the 3 dev site pages were still indexed in Google. I decided to 301 redirect the dev site to the live site to tell Google to rank the live site and to ignore the dev site content. I also checked the robots.txt file on the dev site and this was blocking search engines too. But still the dev site is being found in Google wherever the live site should be found. When I do find the dev site in Google, it displays: Roller Banners Cheap » admin - dev.rollerbannerscheap.co.uk/ - "A description for this result is not available because of this site's robots.txt – learn more." This is really affecting our client's SEO plan and we can't seem to remove the dev site or rank the live site in Google. In GWT I have tried to remove the subdomain: when I go to Remove URLs and enter dev.rollerbannerscheap.co.uk, it displays the URL as http://www.rollerbannerscheap.co.uk/dev.rollerbannerscheap.co.uk.
I want to remove a subdomain, not a page. Can anyone help please?

    | SO_UK
    0

  • Hi guys, I have a problem with a website from a customer. His website is not indexed in Google (except for the homepage). I could not find anything that could possibly be the cause. I have already checked the robots.txt, sitemap, and plugins on the website, and in the HTML code I also couldn't find anything that makes indexing harder than usual. This is the website I am talking about: http://www.xxxx.nl/ (Dutch) The only thing I am guessing at now is the Google sandbox, but even that is quite unlikely. I hope you guys discover something I could not find! Thanks in advance 🙂

    | B.Great
    0

  • Hello, I need help writing a .htaccess file that will do two things: match abc.com and www.abc.com to www.newabc.com, except that one path also changed - www.abc.com/blog is now www.newabc.com/newblog (everything after /blog maps across unchanged). Any help would be greatly appreciated. Thanks

    | chriistaylor
    0
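A minimal sketch of the rename, assuming Apache with mod_rewrite in the old abc.com vhost or .htaccess (the domain and path names are the placeholders from the question; ordering matters, since the /blog rule must come before the catch-all so it isn't swallowed):

```apache
RewriteEngine On
# Old /blog section moves to /newblog, preserving everything after it
RewriteRule ^blog(/(.*))?$ http://www.newabc.com/newblog/$2 [R=301,L]
# Everything else maps to the same path on the new domain
RewriteRule ^(.*)$ http://www.newabc.com/$1 [R=301,L]
```

No host condition is needed here, because both abc.com and www.abc.com are served by the same old-site config and both should move to www.newabc.com.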
