
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO

Discuss site health, structure, and other technical SEO strategies.


  • How do you use Open Site Explorer to weed out bad backlinks in your profile, and then how do you remove them if you cannot contact the various webmasters?

    | marketing-man1990
    0
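
    A minimal sketch of the disavow route for links whose webmasters cannot be reached, assuming the usual workflow: export the bad links from Open Site Explorer, then upload a plain-text file to Google's Disavow Links tool. The domains below are placeholders.

        # disavow.txt: one rule per line; lines starting with # are comments
        # Disavow a single page:
        http://spam-site.example.com/paid-links.html
        # Disavow an entire domain:
        domain:link-farm.example.net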

  • My campaigns are telling me I have some duplicate content. I know the reason, but I'm not sure how to correct it. Example site here: Bikers Blog is a "static page" referencing each actual "blog post" I write. This site is somewhat orphaned and about to be reconstituted, and I have a number of other sites with a similar problem. I'm not sure how to structure the "page" so it only shows a summary of each blog post, not the whole post. Permalinks are set to "/%postname%/". I've posted on Wordpress.org with no answer, but since this is an SEO issue I thought maybe someone with WP experience could chime in. Thanks, Don

    | NicheGuy
    0
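
    A hedged sketch of the usual WordPress fix for the question above: in the theme template that renders the listing page, call the_excerpt() instead of the_content(), or insert a <!--more--> tag into each post. The markup details below are assumptions.

        <?php
        // Hypothetical listing-template loop: print a summary, not the full post.
        if ( have_posts() ) {
            while ( have_posts() ) {
                the_post();
                the_title( '<h2>', '</h2>' );
                the_excerpt(); // the_content() here is what prints the whole post
            }
        }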

  • We are using hotlink protection on our server, mostly for JPGs. What is moz.com's crawl URL address so we can add it to the list of allowed domains? The reason is that the crawl statistics give us a ton of 403 Forbidden errors. Thanks.

    | sergeywin1
    0
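
    One hedged possibility for the question above: crawlers typically request images with no Referer header, so hotlink rules that reject blank referers will return 403 to them (Moz's crawler identifies itself as rogerbot, if a user-agent exception is preferred). A mod_rewrite sketch in which yoursite.com is a placeholder:

        RewriteEngine On
        # Allow requests with an empty referer (crawlers, direct requests)
        RewriteCond %{HTTP_REFERER} !^$
        # Allow your own pages
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?yoursite\.com/ [NC]
        # Block everything else from hotlinking images
        RewriteRule \.(jpe?g|gif|png)$ - [F,NC,L]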

  • Hello Moz Community, I've created a campaign in Moz and received hundreds of errors regarding "Duplicate Page Content". After some review, I've found that 99% of the errors in the "Duplicate Page Content" report occur because WordPress creates a new comment page (with the original post detail) when a comment is made on a blog post. The comment is displayed on the original blog post, but is also viewable at a second URL created by WordPress: http://www.Example.com/example-post http://www.Example.com/example-post/comment-page-1 Has anyone else experienced this issue in WordPress, or this same type of report in Moz? Thanks for your help!

    | DomainUltra
    0
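
    A minimal functions.php sketch, assuming no SEO plugin is already emitting canonicals: point WordPress's comment-page URLs back at the main post (cpage is the core query var for comment pagination). Unticking "Break comments into pages" under Settings > Discussion removes the extra URLs at the source.

        <?php
        // Hypothetical snippet: canonicalize /example-post/comment-page-N/ to /example-post/
        add_action( 'wp_head', function () {
            if ( is_singular() && get_query_var( 'cpage' ) ) {
                echo '<link rel="canonical" href="' . esc_url( get_permalink() ) . '" />' . "\n";
            }
        } );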

  • We have a customer whose site, http://camilojosevergara.com, doesn't show up even when you search for his exact domain: http://bit.ly/18RjPPX Wondering why that is. Is it because Wikipedia and the other links rank higher? I've submitted his sitemap to Google, so I'm trying to figure out why it's not showing up. Any tips/recommendations to fix this would be greatly appreciated. Thanks

    | callmeed
    0

  • Hi there, Does anyone know of a way to noindex all the "previous entries" pages in a WordPress blog? They usually show at domain.com/page/2/ etc. and are the small snippets that provide a summary of all your posts. I've not been able to find a plugin to do this. Thanks so much!

    | PeterM22
    0
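
    If no plugin turns up, a minimal sketch for the theme's functions.php: is_paged() is true on /page/2/ and beyond, so a robots meta tag can be emitted only there (noindex,follow keeps the snippet pages out of the index while still letting crawlers follow through to the posts).

        <?php
        // Hypothetical snippet: noindex the "previous entries" archive pages only.
        add_action( 'wp_head', function () {
            if ( is_paged() ) {
                echo '<meta name="robots" content="noindex,follow" />' . "\n";
            }
        } );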

  • I heard that if you use AdWords, Google drops your ranking a little bit, because you already pay money for results. I think that is reasonable.

    | umutege
    0

  • I own an ecommerce website that had some spammy stuff done on it by an SEO firm through SEOLinkVine a few years ago. I'm working on removing all those links, but some of the sites no longer exist. I'm assuming I don't have to worry about disavowing those in Webmaster Tools? Thanks!

    | CobraJones95
    0

  • Hey - how's things? I have a client who wants to redirect his main domain to a new one... there are a couple of problems I see, and I thought I'd ask on Moz. 1 - The new domain has been incorrectly parked on the old domain with no redirection in place... when you do "site:domain.com" in Google, there are no SERPs for the new domain (the old domain still ranks well); it doesn't seem to rank anywhere and doesn't return any results in OSE. Is it wise to redirect to this domain, or will rankings drop on both? 2 - The new domain uses .mobi as its suffix and will be replacing a .com - but it is much more related to the business keyword-wise. Is using .mobi a problem? Overall the SEO on the site is abysmal and I will be reworking everything - so there will be lots of changes going on at the same time. I'm just wondering if it is worth redirecting to the new domain at all, or trying to get a brand new domain and using that... or just sticking with the original aged domain... I think those are my only concerns at the moment.

    | agua
    0

  • Hi all, I'm working on an e-commerce site that sells products that may only be available for a certain period of time. Eg. A product may only be selling for 1 year and then be permanently out of stock. When a product goes out of stock, the page is removed from the site regardless of any links it may have gotten over time. I am trying to figure out the best way to handle these permanently out of stock pages. At the moment, the site is set up to return a 404 page for each of these products. There are currently 600 (and increasing) instances of this appearing on Google Webmasters. I have read that too many 404 errors may have a negative impact on your site, and so thought I might 301 redirect these URLs to a more appropriate page. However I've also read that too many 301 redirects may have a negative impact on your site. I foresee this to be an issue several years down the road when the site has thousands of expired products which will result in thousands of 404 errors or 301 redirects depending on which route I take. Which would be the better route? Is there a better solution?

    | Oxfordcomma
    0
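
    A hedged third option for the question above: serve 410 Gone for permanently retired products, which signals a deliberate removal rather than a broken link. An Apache sketch with placeholder paths; a 301 still makes sense where a genuinely relevant replacement page exists, while redirecting everything to the homepage is what tends to get treated as a soft 404.

        # Hypothetical .htaccess lines: mark retired product URLs as gone for good.
        Redirect gone /products/expired-product-1
        Redirect gone /products/expired-product-2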

  • How long until 301 redirects get recognized by search engines? I noticed my link on Google isn't forwarding over to my new domain even after the 301 redirect. If I go to the site directly, the 301 redirect works. Anyone know how long it takes for search engines to pick it up? Thanks!

    | timeintopixels
    0

  • Hi all, I recently ran my first diagnostic test with SEOmoz and was alarmed to find my company's site has over 8,000 cases of duplicate content, virtually all of which can be attributed to separate domains, www vs. non-www. After some research I found that this can be solved easily using .htaccess. However, I found a warning on another site that if my site has already been indexed by Google without the www, there could be side effects like a loss in PR. Can anybody tell me how to find out whether my site falls into this category? I do have access to Google Webmaster Tools, but I can't find anywhere that tells me how my site's been indexed. Thanks in advance.

    | rylaughlin
    0
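
    For reference, a standard mod_rewrite sketch for .htaccess (example.com is a placeholder): a single 301 per request consolidates the non-www URLs onto the www host, so indexed non-www pages transfer rather than vanish. To see which host is currently indexed, a site:example.com vs. site:www.example.com search is the quickest check, and Webmaster Tools' preferred-domain setting tells Google which version to favor.

        RewriteEngine On
        # 301 every non-www request to the www host
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]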

  • Has anyone been using rel="canonical" to attribute content that has been republished on Domain B... back to Domain A, which is the original source? The videos  below say that this should be working...  I am asking to hear from anyone who has done it. Has it worked as you expected?   Did Domain A get the benefit that you expected? Thanks! ==========   Source Videos   ============= Matt Cutts (April, 2012)  http://www.youtube.com/watch?v=zI6L2N4A0hA Matt Cutts (April, 2010)  http://www.youtube.com/watch?v=x8XdFb6LGtM Rand Fishkin  (August, 2012)  http://www.youtube.com/watch?v=O8drPXudZZc

    | EGOL
    1
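
    For anyone landing here, the tag under discussion is the one below, placed in the head of the republished copy on Domain B (URLs are placeholders). Google treats cross-domain rel="canonical" as a hint rather than a directive, so results can vary.

        <!-- On Domain B's republished copy, pointing back at the original on Domain A -->
        <link rel="canonical" href="http://www.domain-a.com/original-article" />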

  • We recently added several new pages to our website. These new pages were constructed on a dev site and then pushed live. Since the new site went live, I have seen a huge decline in links. My external followed links have dropped from 3,000 to 500, and my total website links have fallen from 35,000 to 4,500. I have done some research, and I think there is a server-side issue where multiple versions of my URL may be running. The majority of the links built were pointing to the homepage. That being said, I do not have access to our in-house dev person this week, so I am trying to identify the problem myself. I have used Screaming Frog to crawl my site and did not see any errors which stand out. I realize I probably need to use 301 redirects to solve this problem; I just need some guidance on how to identify what I need to 301 redirect. Second question: if I move a landing page out of the global navigation but it can still be reached through other pages on the website, will this cause issues?

    | GladdySEO
    0

  • Hi guys, I have a website which is 2 years old. Since 03/01/2013 I have had no data in Google Webmaster Tools > Traffic > Search queries. The queries, the impressions and the clicks dropped suddenly from one day to the next. I checked the rank of my keywords and the traffic of my site; they are stable and didn't move, which means they aren't causing the problem. Has anybody had the same problem? Is it a Google Webmaster Tools bug? Many thanks.

    | PFX111
    0

  • Hi there, I have a question about a few pages on our site which have a noindex, nofollow meta tag but are still indexed, and even rank number one in our market for the term. How is that possible, or does Google just ignore the tags when they think it's an error on our side? The URL is www.drogisterij.net/kilo_killer and the keyword is kilo killer. We rank number 1 if you search from Google.nl. Has anyone seen this before and know why this might be? Thanks in advance.

    | JaapWillemDrogisterij
    0

  • Is Google to blame? I feel like something of this magnitude requires legal action, but I haven't found anything online about what I can do legally, whether I should collaborate, and if simply bringing all this to Google's attention is enough. Thanks

    | Southbay_Carnivorous_Plants
    0

  • Hi, We have a multilingual website with both Latin and non-Latin characters, and we are working on creating a friendly URL structure for the site. For the Latin languages, can we use translated versions of the URLs within the language folders? For example - www.site/cars www.site/fr/voitures www.site/es/autos

    | theLotter
    0
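
    Translated slugs inside language folders are generally fine; a hedged sketch of the matching hreflang annotations, reusing the asker's example URLs, so each language version points search engines at its siblings:

        <link rel="alternate" hreflang="en" href="http://www.site/cars" />
        <link rel="alternate" hreflang="fr" href="http://www.site/fr/voitures" />
        <link rel="alternate" hreflang="es" href="http://www.site/es/autos" />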

  • Hi, Does anybody know which meta robots tag will "win" if there is more than one on a page? The situation: our CMS is not very flexible, so we have segments of meta tags on the page that originate from templates, and any author can then add any meta tag from within his article editor. The logic delivering the pages does not care if there might be more than one meta robots tag present (one from the template, one from within the article). Now we could end up with something like this: Which one will be regarded by Google & co? First? Last? None? Thanks a lot, Jan

    | jmueller
    0
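
    For reference, Google documents that when robots directives conflict it honors the most restrictive one, so the order in the head doesn't matter. With the hypothetical pair below, noindex,nofollow wins:

        <!-- Emitted by the template -->
        <meta name="robots" content="index,follow" />
        <!-- Added later by an author in the article editor -->
        <meta name="robots" content="noindex,nofollow" />
        <!-- Result: the page is treated as noindex,nofollow -->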

  • Hi, I've just successfully set up authorship for a client according to the rich snippet testing tool, although I'm a bit perplexed, since underneath the results there's a section called 'Extracted Structured Data'. The first section is marked hatom feed, and under the field saying 'Author' it says in red: Warning: At least one field must be set for Hcard. Warning: Missing required field "name (fn)". And then under the URL field & the URL it says: Warning: Missing required field "entry-title". Any ideas what this means, or even if it's important? I would have thought the tool wouldn't acknowledge authorship as being set up correctly if this was an issue, but that does beg the question what is it doing there and what does it mean? There's another section after that called rdfa node which seems all fine. It also says the page does not contain publisher markup, although I know publisher has been added to the home page. Is it best to add publisher to the head section of every page (as I have heard some people say) or just the home page? Many thanks, Dan

    | Dan-Lawrence
    0

  • The CSV export for Crawl Diagnostics contains a column named "blocked_google". It states a blocking date/time but doesn't occur on all our webpages, not even on all pages of the same type/structure. There are no other flags on these records that would explain a blocking of Google; all other agents are not flagged, and our robots.txt doesn't contain any blocks either. The only flag the records have in common is "Page Title > 70 characters". Of course, I could just assume this is the reason for the "blocked_google", but is it? What evaluation makes the crawler fill in this property, and how do I handle/solve its occurrence?

    | EconostoNL
    0

  • I have 8 niche websites for golf clubs. This was done to carve out tight niches for specific types of clubs; each site then only broadens by club type - i.e. better player, game improvement, max game improvement. So far, for fairly young sites (<1 year), they are doing fairly well as I build content. Running campaigns has alerted me to one problem - too many on-page links. And because I use WordPress, those links are on each page in the right sidebar and lead to the other sites. Even though visitors arrive via organic search, in most cases they tend to eventually exit to one of the other sites, or they click on a product (eBay) and venture off to hopefully make a purchase. Ex: the Drivers site will have a picture link for each of the other 7 sites. Question: if I have one site (like a splash page) used as one link to a page listing all the sites with a brief explanation of each, will this cause visitors to bounce off? They would have one click, then the list, and further clicks depending on what other club/site they would like to go to. The links all open in new windows. This would cut down on the number of links per page of each site, but will it cause too much work for visitors and cause them to leave?

    | NicheGuy
    0

  • Hi All, I have looked in WMT and it says I am getting a lot of links from 1 affiliate - they have 100,000 pages on their site, but GWT is showing me 200,000 links from their domain - each of their pages carries our "Mysite" banner code. I think we have nofollowed the link, but does the img src="http://www.site.co.uk/affiliate/affiliation-images/470x80.gif" also act as a link, and if so do I need to nofollow that too? The image is stored on our server, so the affiliate is linking to the banner image on our server. Would something such as this affect my rankings in a negative way? Thanks

    | MotoringSEO
    1
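
    A short answer in sketch form for the question above: an img src is a resource request, not a link, so it passes no link equity on its own; only the wrapping anchor does. Assuming the banner code looks roughly like this (the href is a placeholder), nofollowing the anchor is sufficient:

        <a href="http://www.site.co.uk/" rel="nofollow">
            <img src="http://www.site.co.uk/affiliate/affiliation-images/470x80.gif" alt="Mysite" />
        </a>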

  • This may seem like a pretty newbie question, but I haven't been able to find any answers to it (I may not be looking correctly). My site used to rank decently for the KW "gold name necklace" with this page in the search results: http://www.mynamenecklace.co.uk/Products.aspx?p=302 This was the page that I was working on optimizing for user experience (load time, image quality, ease of use, etc.) since this page was where users were getting to via search. A couple months ago the Google SERPs started showing this page for the same query (also ranked a little lower, but that's not important for this specific question): http://www.mynamenecklace.co.uk/Products.aspx?p=314 This is a white gold version of the necklaces, which is not what most users have in mind (when searching for gold name necklace), so it's much less effective and engaging. How do I tell Google to go back to the old page / give preference to the older page / tell them that we have a better version of the page / etc. without having to noindex any of the content? Both of these pages have value and are for different queries, so I can't canonical them to a single page. As far as external links go, more links point to the yellow gold version than the white gold one. Any ideas on how to remedy this? Thanks.

    | Don34
    0

  • My website is www.tanyas.ca and I noticed that I can't find a result in Google's organic listings for my main keyword, "bathroom vanities". Your help is greatly appreciated. Cam

    | camc
    0

  • Hi! How are you? I'm having a problem: for some reason I don't understand, Google Webmaster Tools isn't indexing the sitemaps I'm uploading. One of them is http://chelagarto.com/index.php?option=com_xmap&sitemap=1&view=xml&lang=en . Do you see what the problem could be? It says it only indexed 2 URLs. I've already sent this sitemap several times and I'm always getting the same result. I could really use some advice. Thanks!

    | arielbortz
    0

  • Anyone have an idea why dates might be appearing in search results for a small number of my pages and not others? There is no date on any of the pages themselves and nothing in the back-end CMS indicating dates should appear. Some of the dates (presumably from when the snippets were originally created) are over a year old and make all the information look terribly out of date. Any ideas?

    | BrettCollins
    0

  • Google Webmaster Tools says our website has low-quality pages, so we have created a robots.txt file and listed all the URLs that we want removed from Google's index. Is this enough to solve the problem?

    | iskq
    0
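
    A caution on the approach above: robots.txt blocks crawling, not indexing, and a blocked crawler can never see a noindex tag, so already-indexed URLs may linger in the index. The usual sequence is to leave the pages crawlable and tag them instead:

        <!-- On each low-quality page, while it is still crawlable -->
        <meta name="robots" content="noindex" />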

  • Hi, I run a site on a top-level domain - www.tidy-books.com - and we have a www.tidy-books.co.uk and other country-based domains. My question concerns the US and UK for the time being. Basically, the UK site was originally on the .com but we changed over to the .co.uk about 6 months ago for various reasons (not ideal, I know). The US site was on a completely different URL, so it became the .com. The final bit is that at the moment we have an IP country redirect on. My question is: as the .com at present carries most of the SEO juice and ranks higher than the .co.uk, but the redirect gets the customer to the relevant site, would it be best to turn geolocation off on the .com? At the moment the .com is set to United States. Hope that makes sense; looking forward to hearing your opinions. Thanks

    | tidybooks
    0

  • A friend of mine has a successful website which is hosted by the company he used to use for developing his site. As he no longer uses them, he feels he should move it. Who do you use for hosting a small to medium-sized business?

    | Ant71
    0

  • Hi Mozzers, I've been mulling over whether my URLs could benefit from a little SEO tweaking. I'd be grateful for your opinion. For instance, we've a product, a vintage (second hand), red Chanel bag. At the moment the URL is: www.vintageheirloom.com/vintage-chanel-bags/2.55-bags/red-2.55-classic-double-flap-bag-1362483150 Broken down... vintage-chanel-bags = the main product category, i.e. vintage Chanel bags. 2.55-bags = a sub-category of the main category above; they are vintage Chanel 2.55 bags, but I've not included 'vintage' again (2.55 bags are a type of Chanel bag). red-2.55-classic-double-flap-bag = the product, the bag. 1362483150 = a unique id, to prevent the possibility of duplicate URLs. As you no doubt can see, we target, in particular, the phrase vintage. The actual bag/product title is: Vintage Chanel Red 2.55 classic double flap bag 10” / 25cm. With this in mind, would I be better off trying to match the product name with the end of the URL as closely as possible? So a close match below would involve not repeating 'chanel' again: www.vintageheirloom.com/chanel-bags/2.55-bags/vintage-red-2.55-classic-double-flap-bag or an exact match below would involve repeating 'chanel': www.vintageheirloom.com/chanel-bags/2.55-bags/vintage-chanel-red-2.55-classic-double-flap-bag This may open up more flexibility to experiment with product terms like second hand, preowned etc. Maybe this is a bad idea as I'm removing the phrase 'vintage' from the main category, but the logical extension of this looks like keyword stuffing!! www.vintageheirloom.com/vintage-chanel-bags/vintage-2.55-bags/vintage-chanel-red-2.55-classic-double-flap-bag Maybe this is over-analyzing, but I doubt it? Thanks for looking. Kevin

    | well-its-1-louder
    0

  • Hi, we have had some of our sites hacked and I would like your advice on the situation. We pay a fair bit of money for a dedicated server, as we thought that having a dedicated server would make the sites secure. Our sites run on Joomla and WordPress, and yesterday a few of them on the dedicated server were hacked. The hosting company have sent us the following info: 'There is one extra security improvement on the system we may offer you and it is cloudlinux with cageFS. This improves the overall security on the server but will not stop unsecured code exploiting if such coding is present in your website scripts.' The hosting company is asking for an extra £20 a month to add this on. We asked the hosting company what they meant by unsecured code and they said: 'Unsecure coding is code in your scripts which will allow injections of files from external source. Unfortunately better explanation is not available and for any detailed information you may check with experience local web developer.' We thought that the sites would be secured. The hosting company have said that this happened because one of the sites was not updated from Joomla 1.5 to Joomla 3.0, which we were planning to do this week. However, this does not make any sense: this is a dedicated server, so why have the WordPress sites, which are up to date, been hacked when they are on the same dedicated server? Any advice to help me understand this issue would be great, as I need to find out why this has happened and whether I should be taking my sites to another hosting company.

    | ClaireH-184886
    0

  • Hey, My website http://www.electromarket.co.uk is running Magento Enterprise. The issue I'm running into is that the URLs can be shortened and modified to display different things on the website itself. Here are a few examples. Product page URL: http://www.electromarket.co.uk/speakers-audio-equipment/dj-pa-speakers/studio-bedroom-monitors/bba0051 OR I could remove everything in the URL and just have http://www.electromarket.co.uk/bba0051 and the link will work just as well. Now my problem is, these two URLs load the same page title, same content, same everything, because essentially they are the very same web page. But how do I tell Google that? Do I need to tell Google that? And would I benefit by using a redirect for the shorter URLs? Thanks!

    | tomhall90
    0
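
    A sketch of the standard fix, reusing the asker's URLs: serve the same canonical tag on both versions, naming the full category path as the preferred URL (Magento also ships a canonical-link setting for products that can emit this automatically). A 301 from the short URL is the stronger signal if the short form isn't needed for navigation.

        <!-- Served on both /bba0051 and the full category-path URL -->
        <link rel="canonical" href="http://www.electromarket.co.uk/speakers-audio-equipment/dj-pa-speakers/studio-bedroom-monitors/bba0051" />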

  • This question has been asked before, and I've read most of the answers. However, things are somewhat different here, as we are a web hosting company and have many clients that link to us site-wide in the footer; we also have a website builder application where we control the footer links on our end users' websites. Most use just our "domain name" or "Powered by Domain". Should we remove them? It does provide visitors some value, as they can tell where the website is hosted or has been developed, and how to sign up for our website builder or web hosting services. Right now they are all followed, and we are working on cleaning up our link profile, so we're looking for some great advice on how to proceed. Our link profile is very large, since we are a web hosting company that has been around for 10-plus years. Thanks in advance for your recommendations.

    | goodhost
    0

  • I got a notification from Google Webmaster Tools saying that they've found a whole bunch of server errors. It looks like it is because an earlier version of the site I'm doing some work for had those URLs, but the new site does not. In any case, there are now thousands of these pages in their index that error out. If I wanted to simply remove them all from the index, which is my best option: Disallow all 1,000 or so pages in the robots.txt? Put the meta noindex in the headers of each of those pages? Rel canonical to a relevant page? Redirect to a relevant page? Wait for Google to just figure it out and remove them naturally? Submit each URL to the GWT removal tool? Something else? Thanks a lot for the help...

    | jim_shook
    0

  • This is my understanding of how Google's search works, and I am unsure about one thing specifically: Google continuously crawls websites and stores each page it finds (let's call it the "page directory"). Google's "page directory" is a cache, so it isn't the "live" version of the page. Google has separate storage called "the index", which contains all the keywords searched; these keywords in "the index" point to the pages in the "page directory" that contain the same keywords. When someone searches a keyword, that keyword is accessed in the "index" and returns all relevant pages in the "page directory". These returned pages are given ranks based on the algorithm. The one part I'm unsure of is how Google's "index" knows the location of relevant pages in the "page directory"; the keyword entries in the "index" point to the "page directory" somehow. I'm thinking each page has a URL in the "page directory", and the entries in the "index" contain these URLs. Since Google's "page directory" is a cache, would the URLs be the same as on the live website (and would the keywords in the "index" point to these URLs)? For example, if a webpage is found at www.website.com/page1, would the "page directory" store this page under that URL in Google's cache? The reason I want to discuss this is to understand the effects of changing a page's URL by understanding the search process better.

    | reidsteven75
    0
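
    The asker's mental model can be made concrete with a toy sketch (illustrative only, not Google's actual implementation; the URLs and page text are made up): an inverted index maps each keyword to the URLs of the cached pages containing it, which is exactly why changing a URL invalidates the pointers until a recrawl.

        <?php
        // Toy "page directory": URL => cached page text.
        $pageDirectory = [
            'http://www.website.com/page1' => 'red vintage handbags',
            'http://www.website.com/page2' => 'red bathroom vanities',
        ];

        // Build the "index": keyword => list of URLs in the page directory.
        $index = [];
        foreach ( $pageDirectory as $url => $text ) {
            foreach ( explode( ' ', $text ) as $keyword ) {
                $index[ $keyword ][] = $url;
            }
        }

        print_r( $index['red'] ); // both URLs: the index points at pages by URL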

  • I am a consultant who works for a website, www.skift.com. Today we received an automated message from Google Webmasters saying our site has quality issues. Since the message is very vague and obviously automated, I was hoping to get some insight into whether this message is something to be very concerned about and what can be done to correct the issue. From reviewing the Webmaster Quality Guidelines, the site is not in violation of any of the guidelines. I am wondering if this message is generated as a result of licensing content from NewsCred, as I have other clients who are licensing content from NewsCred and getting the same message from Google Webmasters. Thanks in advance for any assistance.

    | electricpulp
    0

  • Hi Mozzers, I need your help. Our website (www.barnettcapitaladvisors.com) stopped being indexed in search engines following a round of major changes to URLs and content. There were a number of dead links for a few days before 301 redirects were properly put in place. Now only 3 pages show up in Bing when I search "site:barnettcapitaladvisors.com". A bunch of pages show up in Google for that search, but they're not any of the pages we want to show up; our home page and most important services pages are nowhere in the search results. What's going on here? Our sitemap is at http://www.barnettcapitaladvisors.com/sites/default/files/users/AndrewCarrillo/sitemap/sitemap.xml and robots.txt is at http://www.barnettcapitaladvisors.com/robots.txt Thanks!

    | bshanahan
    0

  • Hey, I was wondering if the location of your server (host) affects your local search engine results. Suppose I have an e-commerce website in the Netherlands and I want to host my website in the USA or UK; does this affect my search engine results in the Netherlands?

    | kevba
    0

  • I used to use a comments program on my website that created comment pages in the form of http://www.example.com/web-page.htm?comm_page=2. When I switched to a new comments program, I worried that these old comment URLs would be considered duplicate content. I created a 301 redirect that, for example, would redirect http://www.example.com/web-page.htm?comm_page=2 to http://www.example.com/web-page.htm, and disallowed them in robots.txt, which I later learned was not the thing to do. I have since removed the URLs from being disallowed in robots.txt. However, many months later, these comment page URLs keep appearing in Google's index from time to time. I use the "Remove URLs" tool in Google Webmaster Tools to remove the URLs from Google's index, but more URLs appear a few days later. How can I get rid of these URLs for good? Thanks!

    | MrFrost
    0
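
    A hedged mod_rewrite sketch for the pattern above: match any URL carrying a comm_page parameter and 301 it to the same path with the query string stripped (the trailing ? in the rule does the stripping). Leaving these URLs crawlable is what lets Google see the redirect and drop them for good.

        RewriteEngine On
        # 301 /web-page.htm?comm_page=N to /web-page.htm
        RewriteCond %{QUERY_STRING} (^|&)comm_page= [NC]
        RewriteRule ^(.*)$ /$1? [R=301,L]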

  • I have a duplicate content issue on my site, because I allow tags to be indexed in my WordPress and the content overlaps between them. What could be a solution to this? How do I fight it if I still want my tag pages to be indexed in Google, but don't want them to influence my traffic negatively? Currently I have 596 tags! 🙂 Site: richclubgirl.com My idea was to put a canonical tag pointing to the post I want to rank on the most popular tag pages (the ones with the biggest page authority). Would love to hear from you!

    | pycckuu
    1
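
    One hedged caveat on the canonical idea above: rel="canonical" is meant for near-duplicates, and a tag archive is rarely a true duplicate of a single post, so Google may ignore it. The more common pattern, at the cost of the tag pages ranking on their own, is noindex,follow on tag archives:

        <?php
        // Hypothetical functions.php sketch: keep tag pages crawlable but unindexed.
        add_action( 'wp_head', function () {
            if ( is_tag() ) {
                echo '<meta name="robots" content="noindex,follow" />' . "\n";
            }
        } );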

  • My company has a popular website with over 4,000 crawl errors showing in Moz, most of them coming up as Duplicate Page Title. These duplicate page titles come from pages whose title is the keyword followed by a location, such as: "main keyword" North Carolina, "main keyword" Texas ... and so forth. These pages are ranked and get a lot of traffic. I was wondering what the best solution is for resolving these types of crawl errors without it affecting our rankings. Thanks!

    | StorageUnitAuctionList
    0

  • Hi, Is it important to update a page sometimes to score better in SEO? Or if I make a page and never change it, would that be good enough?

    | parfumerienl
    0

  • Hello, We run a number of websites, and underneath them we have testing websites (sub-domains) with robots.txt disallowing everything. When I logged into Moz this morning I could see the Moz spider had crawled our test sites even though we have said not to. Does anyone have any ideas how we can stop this happening?

    | ShearingsGroup
    0
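
    Moz's crawler is rogerbot, and it is documented to obey robots.txt, so the first thing to confirm is that each sub-domain serves its own file at its own root: robots.txt is per-host, so the parent site's file does not cover test sub-domains. A sketch of what each test host should return (the host below is a placeholder):

        # http://test.example.com/robots.txt
        User-agent: rogerbot
        Disallow: /

        User-agent: *
        Disallow: /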

  • www.mywebsite.com/details/home-to-mome-4596 www.mywebsite.com/details/home-moving-4599 www.mywebsite.com/details/1-bedroom-apartment-4601 www.mywebsite.com/details/4-bedroom-apartment-4612 We have so many pages like this, and we do not want Google to crawl them, so we added the following code to robots.txt: User-agent: Googlebot Disallow: /details/ Is this code correct?

    | iskq
    0
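
    The directives above are syntactically valid, with two hedged caveats: they apply only to Googlebot (a User-agent: * block would cover other crawlers too), and Disallow stops crawling but does not remove URLs already in the index. A broader sketch:

        # Block all well-behaved crawlers from the /details/ section
        User-agent: *
        Disallow: /details/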

  • Hi everyone! I recently took over a new account, and while running an initial crawl on the site some weird 404 errors popped up: http://www.directcolors.com/products/liquid-colored-antique/top
    http://www.directcolors.com/applications/concrete-antiquing/top
    http://www.directcolors.com/applications/concrete-countertops/top I understand that the top could be referring to an actual link that brings users to the top of a page, but on these pages there is no such link. Am I missing something?

    | rblake
    1

  • Hi, I have an issue where I am getting duplicate page titles for pages that shouldn't exist. The issue is on the blog index pages (from 0 - 16) and involves the same set of attachment_id values for each page, i.e. /blog/page/10/?attachment_id=minack /blog/page/10/?attachment_id=ponyrides /blog/page/11/?attachment_id=minack /blog/page/11/?attachment_id=ponyrides There are 6 attachment_id values (and they are not ID values either) which repeat for every page on the index. What I can't work out is where those 6 links are coming from, as on the actual blog index page http://www.bosinver.co.uk/blog/page/10/ there are no links to them, and the links just go to the blog index page, ignoring the attachment_id value. There is no sitemap.xml file either, which I thought might have contained the links. Thanks

    | leapSEO
    0

  • Yesterday I was checking the cache date of page of a client. Today the snapshot date has been reversed/reverted. Yesterday it displayed "It is a snapshot of the page as it appeared on 19 Apr 2013" whereas today it reads "It is a snapshot of the page as it appeared on 4 Apr 2013". Has anyone seen this before? Thanks in advance.

    | PhilYarrow
    1

  • Here is the first URL - http://www.flagandbanner.com/Products/FBPP0000012376.asp Here is the 2nd URL - http://www.flagandbanner.com/Products/flag-spreader.asp Granted I am new to this issue on this website, but what is Roger seeing that I'm not?  A lot of our duplicate pages are just like this example.

    | Flaglady
    0

  • This is my understanding of how Google's search works, and I am unsure about one thing specifically: Google continuously crawls websites and stores each page it finds (let's call it the "page directory"). Google's "page directory" is a cache, so it isn't the "live" version of the page. Google has separate storage called "the index", which contains all the keywords searched; these keywords in "the index" point to the pages in the "page directory" that contain the same keywords. When someone searches a keyword, that keyword is accessed in the "index" and returns all relevant pages in the "page directory". These returned pages are given ranks based on the algorithm. The one part I'm unsure of is how Google's "index" connects to the "page directory". I'm thinking each page has a URL in the "page directory", and the entries in the "index" contain these URLs. Since Google's "page directory" is a cache, would the URLs be the same as on the live website? For example, if a webpage is found at www.website.com/page1, would the "page directory" store this page under that URL in Google's cache? The reason I ask is that I am starting to work with a client who has a newly developed website. The old website domain and files were located on a GoDaddy account. The new website's files have completely changed location and are now hosted on a separate GoDaddy account, but the domain has remained in the same account. The client has set up domain forwarding/masking to access the files on the separate account. From what I've researched, domain masking and SEO don't get along very well. Not only can you not link to specific pages, but if my above assumption is true, wouldn't Google have a hard time crawling and storing each page in the cache?

    | reidsteven75
    0


